Category Archives: Imported from BeyondRelational

These posts are imported from my old blog page: http://beyondrelational.com/modules/2/blogs/77/nakuls-blog.aspx

#0173-SQL Server-SSMS-Productivity improvement feature-Open new queries in SQLCMD mode


Most of our deployment scripts (including the build & deployment output) from the Visual Studio 2010 database solution (SQL Server Data Tools (SSDT) for SQL Server 2012 "Denali") are generated as SQL queries designed to be executed in SQLCMD mode. Many of our pre-designed debugging scripts are also written to be executed in SQLCMD mode.


(*If you are new to SQLCMD mode, you may want to explore my post: http://beyondrelational.com/modules/2/blogs/77/posts/11317/sqlcmd-mode-in-ssms-final-part-in-the-series-underappreciated-features-of-microsoft-sql-server.aspx)


To simplify the day and to save a couple of steps of going to the Query menu and choosing SQLCMD mode, SSMS on each team member's workstation has the "Open new queries in SQLCMD mode" switch set by default. To the best of my knowledge, this feature is available from SSMS for SQL Server 2008 onwards (I have not checked SSMS for SQL Server 2005). Once set, any new query editor window (opened after restarting SSMS) will open in SQLCMD mode.


To set this switch, follow these simple steps:



  1. Launch SSMS
  2. Go to Tools –> Options
  3. Navigate to the options for "Query Execution"
  4. Check the switch "Open new queries in SQLCMD mode"
  5. Click OK
  6. Exit out of SSMS

If you use SQLCMD mode a lot, I recommend that you keep this switch checked in your environment too.
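If you are new to what such a script looks like, here is a minimal sketch of a SQLCMD-mode script (the server name and variable value are illustrative). Running it in a regular query window throws syntax errors on the directive lines, which is exactly why having the switch set is handy:

```sql
-- A minimal SQLCMD-mode script; MyDevServer and the variable value are illustrative
:CONNECT MyDevServer
:SETVAR TargetDatabase "AdventureWorks2012"

USE [$(TargetDatabase)]
GO

SELECT DB_NAME() AS CurrentDatabase;
GO
```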


Until we meet next time,


Be courteous. Drive responsibly.

#0172-SQL Server-Changing compatibility level of a database causes recompilation of cached plans


Depending upon the nature of the product, organizations are generally expected to support at least one or two prior releases. Microsoft SQL Server allows users to use older databases with newer releases of the server through a property called the compatibility level.

Compatibility levels provide partial backward compatibility with earlier versions of SQL Server. A compatibility level affects the behavior of a specific database, not the entire server. That way, most databases can leverage the newly introduced features of SQL Server, while only those databases that have not yet been made compatible use a lower compatibility level setting to keep functioning.

Here’s a quick table showing the compatibility levels supported by SQL Server 2012 (code named “Denali”):

Compatibility Level | Corresponding SQL Server Version supported
90                  | SQL Server 2005
100                 | SQL Server 2008/R2
110                 | SQL Server 2012

The default compatibility level for SQL Server 2012 is 110. All databases created in SQL Server 2012 have this compatibility level, unless the model database has a lower compatibility level (because all databases, including tempdb, are created as copies of the model database).
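To see where your databases currently stand (including model), the compatibility level can be read from sys.databases:

```sql
-- List the compatibility level of every database on the instance;
-- check 'model' to see what newly created databases will inherit
SELECT name, compatibility_level
FROM sys.databases
ORDER BY name;
```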

Please note that support for compatibility level 80 has been discontinued in SQL Server 2012. You can refer to my post: http://beyondrelational.com/modules/2/blogs/77/Posts/14429/0156-sql-server-2012-deprecated-features-valid-compatibility-levels-compatibilitylevel-80-support-ms.aspx for more details.

Now, when upgrading a database from a prior version of SQL Server to the latest version, one of the things that needs to change is the compatibility level. Today, I would like to draw your attention to the fact that changing the compatibility level causes all cached plans for that database to be recompiled when the related queries/batches are executed again.

For a quick demonstration, run through the following set of queries step-by-step.

-- 0. Create the DB
CREATE DATABASE CompatLevelTest
GO

-- 1. Set the compatibility level
ALTER DATABASE CompatLevelTest SET COMPATIBILITY_LEVEL = 90
GO

-- 2. Clean the buffers
DBCC DROPCLEANBUFFERS
DBCC FREEPROCCACHE
GO

-- 3. Create the stored procedure
USE CompatLevelTest
GO
CREATE PROCEDURE proc_AddNumbers (@num1 INT, @num2 INT)
AS 
BEGIN
    SELECT (@num1 + @num2) AS Summation
END
GO

--4. Execute the procedure
USE CompatLevelTest
GO
EXEC proc_AddNumbers 2, 5
GO

-- 5. Check the execution_count
USE CompatLevelTest
GO
--Check the plan generation 
SELECT execution_count, sql_handle, plan_handle, *
FROM sys.dm_exec_procedure_stats
CROSS APPLY sys.dm_exec_sql_text(sql_handle)
WHERE text like '%proc_AddNumbers%'
  AND database_id = DB_ID('CompatLevelTest')

-- 6. Change the compatibility level
ALTER DATABASE CompatLevelTest SET COMPATIBILITY_LEVEL = 100
GO

--7. Execute the procedure
USE CompatLevelTest
GO
EXEC proc_AddNumbers 2, 5
GO

-- 8. Check the execution_count
USE CompatLevelTest
GO
--Check the plan generation 
SELECT execution_count, sql_handle, plan_handle, *
FROM sys.dm_exec_procedure_stats
CROSS APPLY sys.dm_exec_sql_text(sql_handle)
WHERE text like '%proc_AddNumbers%'
  AND database_id = DB_ID('CompatLevelTest')
  
  
--Repeat steps 7 & 8 again to verify the outputs
-- 9. Cleanup!
USE master
GO
DROP DATABASE CompatLevelTest
GO

The points of interest are the outputs of the DMV sys.dm_exec_procedure_stats. This is similar to the sys.dm_exec_query_stats DMV, except that it returns aggregate performance statistics for cached stored procedures. The view returns one row for each cached stored procedure plan, and the row lives for as long as the stored procedure remains cached. When a stored procedure is removed from the cache, the corresponding row is eliminated from this view. If a plan is being re-used, the value in the execution_count column goes up.
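As an aside, if you want to observe the recompilation directly rather than infer it from execution_count, sys.dm_exec_query_stats exposes a plan_generation_num column (sys.dm_exec_procedure_stats does not). A query along these lines, reusing the demo procedure above, would show it:

```sql
-- plan_generation_num increments each time a cached plan is recompiled
SELECT qs.plan_generation_num,
       qs.execution_count,
       st.text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE st.text LIKE '%proc_AddNumbers%';
```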

Here is the abridged result of the above test.

Compatibility Level | Iteration | Execution Count | Type | Type_Desc
90                  | 1         | 1               | P    | SQL_STORED_PROCEDURE
100                 | 2         | 1               | P    | SQL_STORED_PROCEDURE
100                 | 3         | 2               | P    | SQL_STORED_PROCEDURE

The above table shows that when the procedure was executed for the 2nd time, i.e. after changing the compatibility level, the procedure was recompiled: the execution_count value is back to 1, indicating a freshly cached plan rather than a re-use of the old one.

Why did I write about it?

The reason this observation attracted me is that many ISVs fail to convey this to their end customers. They upgrade the database and change its compatibility level (the customer would already be using a higher version of SQL Server), and then complaints start pouring in about unusually slow response times from the server, when, in the customer's view, the response time should have been faster than in the prior release. This is to be expected, because SQL Server has to recompile all the cached plans. If end users are educated to expect a temporary performance degradation, a lot of customer support calls can be avoided.


Until we meet next time,

Be courteous. Drive responsibly.

#0171-SQL Server-Scripts-Backup and Restore a database over a network using UNC paths


I am sure this post will ring a bell with most development teams. Development servers are generally constrained for space. Whenever a development team requests more hardware, be it a new machine or additional storage space, most IT departments respond with "Why do you need all these servers? You already have n number of servers, which are more than the number of people on the team!"

Ultimately, this results in a lack of disk space to store both a database and its backup on the same drive. In fact, we ran into this issue some time ago, when we had to restore a reasonably large (50 GB+) database. The backup and the database simply could not reside on the same server due to disk space constraints.

So, we decided to place the backup file on a network share, map it to a drive on the server and restore the database from there. However, that just wouldn't work: attempting the restore through SSMS using the mapped drive fails with an error.

But, as they say – “Where there is a will, there is a way.” The core issue here is that mapped network drives are not supported by SQL Server. UNC paths however, are a different story.

SQL Server fully supports backups & restores over UNC paths. There are two methods that you can use:

  1. Directly type the UNC path where the backup needs to be taken/restored from
  2. Create a backup device pointing to the UNC path and then backup to/restore from this device
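As a quick sketch, here is what the two methods look like (the server, share and file names below are illustrative; also note that it is the SQL Server service account, not your login, that needs read/write permissions on the share):

```sql
-- Method 1: use the UNC path directly
BACKUP DATABASE AdventureWorks2012
    TO DISK = '\\FileServer01\SQLBackups\AdventureWorks2012.bak'
    WITH INIT;

RESTORE DATABASE AdventureWorks2012
    FROM DISK = '\\FileServer01\SQLBackups\AdventureWorks2012.bak'
    WITH REPLACE;

-- Method 2: create a backup device pointing to the UNC path,
-- then back up to / restore from the device
EXEC sp_addumpdevice 'disk', 'NetworkBackupDevice',
     '\\FileServer01\SQLBackups\AdventureWorks2012.bak';

BACKUP DATABASE AdventureWorks2012 TO NetworkBackupDevice WITH INIT;
RESTORE DATABASE AdventureWorks2012 FROM NetworkBackupDevice WITH REPLACE;
```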

After seeing how easy and convenient it is to use UNC paths with SQL Server, our entire team has started using them. Here are some scripts that you may be interested in:

  1. Backup Databases across the network: http://beyondrelational.com/modules/30/scripts/485/scripts/15042/backup-database-across-the-network.aspx
  2. Restore Databases across the network: http://beyondrelational.com/modules/30/scripts/485/scripts/15043/restore-databases-from-a-backup-file-across-a-network.aspx

There are a couple of security considerations that you need to be aware of. They are covered in the following Microsoft KB article: http://support.microsoft.com/kb/207187

These scripts are available in the Scripts module on BeyondRelational.com (http://beyondrelational.com/modules/30/default.aspx?s=stream&tab=scripts). If you haven’t been to the Scripts module page, do pay it a visit – you will find a lot of useful scripts that you can customize for your use.

Until we meet next time,

Be courteous. Drive responsibly.

#0170-SQL Server-Deprecated Features-Column Alias defined by a string enclosed in quotation marks


I recently wrote a piece about the impact of table and column aliases on query performance and plan cache size. As I was writing the post, I recalled that there are multiple ways in which column aliases can be defined in a T-SQL query, depending upon the developer's preference. However, one of these methods is marked for deprecation in a future release of Microsoft SQL Server (it is still valid in SQL Server 2012, so there is no cause for worry in the immediate future).

The available methods for column aliasing and their status are shown in the table below:

Format                        | Status
'column_alias' = expression   | Deprecated
expression AS column_alias    | Active
expression AS 'column_alias'  | Active
expression AS [column_alias]  | Active
[column_alias] = expression   | Active
expression AS "column_alias"  | Active

All of these methods are demonstrated in the query below:

USE AdventureWorks2012;
GO

SELECT                                            --           Format            ;   Status
    "Employee Birth Date" = Employee.BirthDate,   -- "column_alias" = expression ; Deprecated
    Employee.HireDate AS JoiningDate,             -- expression AS column_alias  ; Active
    Employee.BusinessEntityID AS "EmployeeId",    -- expression AS "column_alias"; Active
    Employee.OrganizationLevel AS [Org. Level],   -- expression AS [column_alias]; Active
    [Salary Flag] = Employee.SalariedFlag,        -- [column_alias] = expression ; Active
    Employee.SickLeaveHours AS "SickHours",       -- expression AS "column_alias"; Active
    "VacationHours" = Employee.VacationHours      -- "column_alias" = expression ; Deprecated
FROM HumanResources.Employee AS Employee;
GO

My recommendations

My recommendations around any such feature are:

  • Have one standard for the entire organization/product
  • The element adopted as the standard should not be marked for deprecation for at least the next two major releases of Microsoft SQL Server
  • Most importantly, the feature should help the team become more productive and efficient

I personally use either the expression AS [column_alias] or the [column_alias] = expression method.

Before you leave, do share your preferred format for column aliasing.

Until we meet next time,

Be courteous. Drive responsibly.

#0169-SQL Server-Table/Column Aliasing does not affect query performance-Part 2


Earlier this week, I wrote a post on whether table and column aliasing affects query performance (http://beyondrelational.com/modules/2/blogs/77/Posts/14948/0168-sql-server-tablecolumn-aliasing-does-not-affect-query-performance.aspx). We were able to determine that, overall time-wise, both queries performed equally well. However, one of the comments received as part of the feedback was to also include the results of the following, in addition to just the execution plan:

  • Statistics IO
  • Statistics Profile
  • Actual CPU times

I therefore repeated the tests after modifying the query a little to include the necessary SET statements and to start from a clean buffer state.

IMPORTANT NOTE: In this script, we use two DBCC commands – FREEPROCCACHE and DROPCLEANBUFFERS – to simulate a cold buffer condition, which is generally achieved when SQL Server restarts. Doing so indiscriminately can have severe performance implications. Please do not execute the scripts shown in this example in your production or quality assurance environments.

USE AdventureWorks2012
GO

/*************************************************************************************************************
                                      IMPORTANT WARNING
--------------------------------------------------------------------------------------------------------------
DBCC FREEPROCCACHE and DROPCLEANBUFFERS are used to simulate a cold buffer condition, 
that is generally achieved when SQL Server restarts. 

Doing so indiscriminately can have severe performance implications. 

Please do not execute the scripts shown in this example in your production or quality assurance environments.

The author, this site or Microsoft are not responsible
for any damage caused by indiscriminate use of these DBCC commands
*************************************************************************************************************/
DBCC FREEPROCCACHE WITH NO_INFOMSGS
GO
DBCC DROPCLEANBUFFERS WITH NO_INFOMSGS
GO

SET STATISTICS IO ON
SET STATISTICS PROFILE ON

SELECT 'Non-Aliased Query',
       HRE.BusinessEntityID,
       HRE.LoginID,
       HRE.NationalIDNumber,
       HRE.OrganizationNode,
       HRE.OrganizationLevel,
       HRE.JobTitle,
       HRE.BirthDate,
       HRE.MaritalStatus,
       HRE.Gender,
       HRE.HireDate,
       HRE.SalariedFlag,
       HRE.VacationHours,
       HRE.SickLeaveHours,
       HRE.CurrentFlag,
       HRE.rowguid,
       HRE.ModifiedDate,
       Dept.DepartmentID,
       Dept.ShiftID,
       Dept.StartDate,
       Dept.EndDate,
       Dept.ModifiedDate,
       HumanResources.Department.Name,
       HumanResources.Department.GroupName,
       HumanResources.Department.ModifiedDate,
       HumanResources.Shift.Name,
       HumanResources.Shift.ShiftID,
       HumanResources.Shift.StartTime,
       HumanResources.Shift.EndTime,
       HumanResources.Shift.ModifiedDate
FROM HumanResources.Employee AS HRE 
INNER JOIN HumanResources.EmployeeDepartmentHistory AS Dept ON HRE.BusinessEntityID = Dept.BusinessEntityID
INNER JOIN HumanResources.Department ON Dept.DepartmentID = HumanResources.Department.DepartmentID 
INNER JOIN HumanResources.Shift ON Dept.ShiftID = HumanResources.Shift.ShiftID
GO

DBCC FREEPROCCACHE WITH NO_INFOMSGS
GO
DBCC DROPCLEANBUFFERS WITH NO_INFOMSGS
GO

SELECT 'Aliased Query',
       EmployeeID = HRE.BusinessEntityID,
       LoginID = HRE.LoginID,
       NationalIDNumber = HRE.NationalIDNumber,
       OrganizationNode = HRE.OrganizationNode,
       OrganizationLevel = HRE.OrganizationLevel,
       JobTitle = HRE.JobTitle,
       BirthDate = HRE.BirthDate,
       MaritalStatus = HRE.MaritalStatus,
       Gender = HRE.Gender,
       HireDate = HRE.HireDate,
       SalariedFlag = HRE.SalariedFlag,
       VacationHours = HRE.VacationHours,
       SickLeaveHours = HRE.SickLeaveHours,
       CurrentFlag = HRE.CurrentFlag,
       rowguid = HRE.rowguid,
       EmployeeModifiedDate = HRE.ModifiedDate,
       DepartmentID = Dept.DepartmentID,
       ShiftID = Dept.ShiftID,
       StartDate = Dept.StartDate,
       EndDate = Dept.EndDate,
       DepartmentHistoryModifiedDate = Dept.ModifiedDate,
       DepartmentName = HumanResources.Department.Name,
       GroupName = HumanResources.Department.GroupName,
       DepartmentModifiedDate = HumanResources.Department.ModifiedDate,
       ShiftName = HumanResources.Shift.Name,
       ShiftID = HumanResources.Shift.ShiftID,
       StartTime = HumanResources.Shift.StartTime,
       EndTime = HumanResources.Shift.EndTime,
       ShiftModifiedDate = HumanResources.Shift.ModifiedDate
FROM HumanResources.Employee AS HRE 
INNER JOIN HumanResources.EmployeeDepartmentHistory AS Dept ON HRE.BusinessEntityID = Dept.BusinessEntityID
INNER JOIN HumanResources.Department ON Dept.DepartmentID = HumanResources.Department.DepartmentID 
INNER JOIN HumanResources.Shift ON Dept.ShiftID = HumanResources.Shift.ShiftID
GO

SET STATISTICS IO OFF
SET STATISTICS PROFILE OFF

Running the query gives us the following results:

  • Query output
  • After each query’s output, we receive a result set containing the performance statistics for the given query
  • In the “Messages” tab, the IO statistics for the query execution are available

Let us evaluate each one-by-one:

Statistics I/O

We started with clean buffers for both queries to avoid any impact to the I/O statistics arising out of data already being available in the buffer (i.e. a “logical” read). The statistics I/O for both queries yields the same output, which is:

Table ‘Worktable’. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

Table ‘Employee’. Scan count 1, logical reads 9, physical reads 0, read-ahead reads 7, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

Table ‘EmployeeDepartmentHistory’. Scan count 1, logical reads 4, physical reads 1, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

Table ‘Shift’. Scan count 1, logical reads 2, physical reads 1, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

Table ‘Department’. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

Here’s the screenshot:

Output of STATISTICS IO is the same for aliased & non-aliased queries

Statistics Profile

The statistics profile is a textual representation of the query profile information that can also be obtained from the execution plan. Among many others, the output contains the following values of interest:

  1. The number of rows processed by each operation
  2. The number of times a particular operation was executed
  3. Operator used
  4. Estimated Rows
  5. Estimated CPU
  6. Average Row Size
  7. Total Subtree cost

To compare the result sets, I simply used Excel. You can use any file comparison tool of your choice.

Comparison of the profile statistics for aliased v/s non-aliased queries shows that, apart from a slight increase in average row size, the values are the same.

Comparing the output reveals that the two result sets are identical except for:

  1. The obvious difference in the query text
  2. A slight increase in the Average Row Size for the aliased query (which can be attributed to the increased number of characters used for aliasing)

Notably, the Estimated CPU time and the Total Subtree Cost remain the same.

Summarizing

Combining the results of the experiments above and the observations from my previous post, we can summarize that:

Metric                            | Affected By Aliasing?
Query Performance – IO            | No
Query Performance – CPU           | No
Query Performance – Avg. Row Size | Possible
Query Performance – Time          | No
Plan Cache Size                   | No
Plan Reuse                        | Possible

What have been your observations related to aliasing? Do leave a note as you go.

Until we meet next time,

Be courteous. Drive responsibly.