It is a universal expectation that, depending on the nature of the product, organizations support at least one or two prior releases of their product. Microsoft SQL Server allows users to run older databases on newer releases of the server through a property called the compatibility level.
Compatibility levels provide partial backward compatibility with earlier versions of SQL Server. A compatibility level affects the behavior of a specific database, not the entire server. That way, most databases can continue to leverage the newly introduced features of SQL Server, while only those databases that have not yet been made compatible run under the relevant compatibility level setting.
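To see which compatibility level a database is currently running under, you can query the sys.databases catalog view (the database name below is just a placeholder):

```sql
-- Check the current compatibility level of a specific database
-- ('AdventureWorks' is only an example name)
SELECT name, compatibility_level
FROM sys.databases
WHERE name = 'AdventureWorks';
```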
Here’s a quick table showing the compatibility levels supported by SQL Server 2012 (code-named “Denali”):
| Compatibility Level | Corresponding SQL Server Version supported |
|---|---|
| 90 | SQL Server 2005 |
| 100 | SQL Server 2008/R2 |
| 110 | SQL Server 2012 |
The default compatibility level for SQL Server 2012 is 110. All databases created in SQL Server 2012 get this compatibility level, unless the model database has a lower compatibility level (every new database, including tempdb, is created as a copy of the model database).
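This inheritance from model is easy to verify by comparing the compatibility levels of model, tempdb, and any freshly created database:

```sql
-- New databases inherit the model database's compatibility level;
-- tempdb is likewise rebuilt from model at every server restart
SELECT name, compatibility_level
FROM sys.databases
WHERE name IN ('model', 'tempdb');
```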
Please note that support for compatibility level 80 has been discontinued in SQL Server 2012. You can refer to my post for more details: http://beyondrelational.com/modules/2/blogs/77/Posts/14429/0156-sql-server-2012-deprecated-features-valid-compatibility-levels-compatibilitylevel-80-support-ms.aspx
Now, when upgrading a database from a prior version of SQL Server to the latest version, one of the things that needs to change is the compatibility level. Today, I would like to draw your attention to the fact that changing the compatibility level causes all cached plans for that database to be recompiled when the related queries/batches are executed again.
For a quick demonstration, run through the following set of queries step-by-step.
```sql
-- 0. Create the DB
CREATE DATABASE CompatLevelTest
GO

-- 1. Set the compatibility level
ALTER DATABASE CompatLevelTest SET COMPATIBILITY_LEVEL = 90
GO

-- 2. Clean the buffers and the plan cache
DBCC DROPCLEANBUFFERS
DBCC FREEPROCCACHE
GO

-- 3. Create the stored procedure
USE CompatLevelTest
GO
CREATE PROCEDURE proc_AddNumbers (@num1 INT, @num2 INT)
AS
BEGIN
    SELECT (@num1 + @num2) AS Summation
END
GO

-- 4. Execute the procedure
USE CompatLevelTest
GO
EXEC proc_AddNumbers 2, 5
GO

-- 5. Check the cached plan statistics
USE CompatLevelTest
GO
SELECT execution_count, sql_handle, plan_handle, *
FROM sys.dm_exec_procedure_stats
CROSS APPLY sys.dm_exec_sql_text(sql_handle)
WHERE text LIKE '%proc_AddNumbers%'
  AND database_id = DB_ID('CompatLevelTest')
GO

-- 6. Change the compatibility level
ALTER DATABASE CompatLevelTest SET COMPATIBILITY_LEVEL = 100
GO

-- 7. Execute the procedure again
USE CompatLevelTest
GO
EXEC proc_AddNumbers 2, 5
GO

-- 8. Check the cached plan statistics again
USE CompatLevelTest
GO
SELECT execution_count, sql_handle, plan_handle, *
FROM sys.dm_exec_procedure_stats
CROSS APPLY sys.dm_exec_sql_text(sql_handle)
WHERE text LIKE '%proc_AddNumbers%'
  AND database_id = DB_ID('CompatLevelTest')
GO

-- Repeat steps 7 & 8 to verify the outputs
```
```sql
-- 9. Cleanup!
USE master
GO
DROP DATABASE CompatLevelTest
GO
```
The point of interest is the output of the DMV sys.dm_exec_procedure_stats. It is similar to the sys.dm_exec_query_stats DMV, except that it returns aggregate performance statistics for cached stored procedures. The view returns one row for each cached stored procedure plan, and the row lives only as long as the stored procedure remains cached. When a stored procedure is removed from the cache, the corresponding row is eliminated from the view. If a plan is being reused, the value in the execution_count column goes up.
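Besides watching execution_count, another way to confirm that a fresh plan was compiled (a small sketch, not part of the original demo) is to compare plan_handle and cached_time before and after the compatibility-level change; a different plan_handle and a newer cached_time indicate a new plan:

```sql
-- If plan_handle or cached_time changed after the
-- compatibility-level change, the plan was recompiled
SELECT OBJECT_NAME(object_id, database_id) AS proc_name,
       cached_time, execution_count, plan_handle
FROM sys.dm_exec_procedure_stats
WHERE database_id = DB_ID('CompatLevelTest');
```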
Here is the abridged result of the above test.
| Compatibility Level | Iteration | Execution Count | Type | Type_Desc |
|---|---|---|---|---|
The above table shows that when the procedure was executed the second time, i.e. after changing the compatibility level, the procedure was recompiled: the execution_count value remains 1 instead of incrementing to 2.
Why did I write about it?
The reason this observation caught my attention is that many ISVs fail to convey this to their end customers. They upgrade the database and change its compatibility level (the customer is already running a higher version of SQL Server), and then complaints start pouring in about unusually slow response times from the server, when in the customers' view, response times should be faster than in the prior release. This is quite expected, because SQL Server has to recompile all the cached plans. If end users are educated to expect a temporary performance degradation, a lot of customer support calls can be avoided.
- sys.dm_exec_procedure_stats: http://msdn.microsoft.com/en-us/library/cc280701.aspx
Until we meet next time,