Tag Archives: Performance Tuning

Experiences, ideas and tips around Microsoft SQL Server performance tuning.

#0138 – SQL Server – Performance – Best Practice – Create Index Before or After a data insert – I/O, Fragmentation, CPU and Elapsed Time


In November 2011, I wrote a post titled SQL Server – Performance – Best Practice – Create Index Before or After a data insert?. The post dealt with answering the question: which method is better – creating an index before or after a bulk insert? The post was essentially targeted at code that generates temporary tables for SSRS reporting or bulk processing.

Erik Eckhardt provided some great feedback on the post. Essentially, his feedback was:

  1. He wanted to see a Profiler trace comparison between the two approaches
  2. A fragmentation study was in order – which method produced the most efficient indexes?

I had intended to carry out this study a long time ago (during the Christmas weekend), but better late than never. So, here are the results of the study.

Creating the test data

The approach that I will be taking today is:

  1. Create two identical tables in the TempDB
    • On one table, indexes will be created before the insert of test data is done
    • On the other, I will first insert some data, and then create indexes

WARNING: The scripts provided in this post are provided “as-is”, without warranty, and for demonstration purposes only. Running the DBCC scripts provided in the post on your QA or production environments can have serious side effects.

The script that I will be using today is the same as the one in my earlier post; however, it is reproduced below for reference. First up is the query that creates the indexes before inserting the data.

USE tempdb
GO

SET NOCOUNT ON

/*****************************************************************************************************************
   STEP 01: Create a table, complete with a primary key (to create a clustered index), and a non-clustered index
*****************************************************************************************************************/
USE tempdb
GO
CREATE TABLE DontUpdateStatistics
( StatId INT IDENTITY(1,1),
  RandomId INT DEFAULT (RAND()*1000),
  StatName VARCHAR(10) DEFAULT 'DEFAULT',
  CONSTRAINT PK_DontUpdateStatistics_StatId PRIMARY KEY CLUSTERED 
  (
    StatId
  )
)
GO

CREATE NONCLUSTERED INDEX NC_DontUpdateStatistics_RandomId ON DontUpdateStatistics(RandomId)
GO

/*****************************************************************************************************************
   STEP 02: Insert some test data into the table
*****************************************************************************************************************/
USE tempdb
GO
INSERT INTO DontUpdateStatistics DEFAULT VALUES
GO 100000

Next up is the script that I recommend – wherein the indexes are created after the data is inserted.

USE tempdb
GO

SET NOCOUNT ON

/*****************************************************************************************************************
   STEP 04: Create a table - this time, do not create any indexes
*****************************************************************************************************************/
CREATE TABLE UpdateStatistics
( StatId INT IDENTITY(1,1),
  RandomId INT DEFAULT (RAND()*1000),
  StatName VARCHAR(10) DEFAULT 'DEFAULT'
)
GO

/*****************************************************************************************************************
   STEP 05: Insert some test data into the table
*****************************************************************************************************************/
USE tempdb
GO
INSERT INTO UpdateStatistics DEFAULT VALUES
GO 100000

/*****************************************************************************************************************
   STEP 06: Now create the primary key clustered index and the non-clustered index
*****************************************************************************************************************/
ALTER TABLE UpdateStatistics
    ADD CONSTRAINT PK_UpdateStatistics_StatId PRIMARY KEY CLUSTERED 
    (
        StatId
    )
GO
CREATE NONCLUSTERED INDEX NC_UpdateStatistics_RandomId ON UpdateStatistics(RandomId)
GO

I will take the second question first – which method produced indexes that were more efficient from a fragmentation perspective?

Fragmentation study

Please note that in both the queries above, the clustered index is created on the column – StatId, which is an identity column. The inserts are therefore in the order we would want them to be in both cases, causing a “write-once” operation as Erik put it.

Once the data had been inserted into the two tables, I ran the following queries, which use the sys.dm_db_index_physical_stats dynamic management function to look at the physical statistics of the indexes. NOTE: I did not use DBCC SHOWCONTIG because it is marked for deprecation (BOL reference here).

USE tempdb
GO
SELECT 'Table: DontUpdateStatistics' AS TableName,
       index_id AS IndexId,
       index_type_desc AS IndexType,
       index_depth AS IndexDepth,
       avg_fragment_size_in_pages AS 'AverageFragmentSize(Pages)',
       avg_fragmentation_in_percent AS 'AverageFragmentation(Percentage)',
       fragment_count AS Fragments,
       page_count AS 'PageCount',
       partition_number AS PartitionNumber,
       alloc_unit_type_desc AS AllocationUnitType,
       index_level AS IndexLevel
-- Parameters: database_id, object_id, index_id (-1 = all), partition_number (0 = all), mode (NULL = LIMITED)
FROM sys.dm_db_index_physical_stats(DB_ID(),OBJECT_ID('DontUpdateStatistics'),-1,0,NULL)

SELECT 'Table: UpdateStatistics' AS TableName,
       index_id AS IndexId,
       index_type_desc AS IndexType,
       index_depth AS IndexDepth,
       avg_fragment_size_in_pages AS 'AverageFragmentSize(Pages)',
       avg_fragmentation_in_percent AS 'AverageFragmentation(Percentage)',
       fragment_count AS Fragments,
       page_count AS 'PageCount',
       partition_number AS PartitionNumber,
       alloc_unit_type_desc AS AllocationUnitType,
       index_level AS IndexLevel
-- Parameters: database_id, object_id, index_id (-1 = all), partition_number (0 = all), mode (NULL = LIMITED)
FROM sys.dm_db_index_physical_stats(DB_ID(),OBJECT_ID('UpdateStatistics'),-1,0,NULL)

I looked at the following values:

  • avg_fragmentation_in_percent – The percent of logical fragmentation (out-of-order pages in the index)
  • fragment_count – The number of fragments (physically consecutive leaf pages) in the index
  • avg_fragment_size_in_pages – Average number of pages in one fragment in an index

Essentially, the avg_fragment_size_in_pages and avg_fragmentation_in_percent values move in opposite directions – avg_fragment_size_in_pages is simply page_count divided by fragment_count. Therefore, we are looking for a higher value of avg_fragment_size_in_pages, and a lower value of avg_fragmentation_in_percent.

Below is the output that I received when the above queries were executed against my test instance.

[Screenshot: fragmentation results]

In case you are not able to see the screenshot above, here’s the same data in tabular format:

TableName | IndexId | IndexType | IndexDepth | AverageFragmentSize(Pages) | AverageFragmentation(Percentage) | Fragments | PageCount | PartitionNumber | AllocationUnitType | IndexLevel
Table: DontUpdateStatistics | 1 | CLUSTERED INDEX | 2 | 11.5666666666667 | 1.44092219020173 | 30 | 347 | 1 | IN_ROW_DATA | 0
Table: DontUpdateStatistics | 2 | NONCLUSTERED INDEX | 2 | 1 | 99.6108949416342 | 257 | 257 | 1 | IN_ROW_DATA | 0
Table: UpdateStatistics | 1 | CLUSTERED INDEX | 2 | 173.5 | 0 | 2 | 347 | 1 | IN_ROW_DATA | 0
Table: UpdateStatistics | 2 | NONCLUSTERED INDEX | 2 | 58 | 0.574712643678161 | 3 | 174 | 1 | IN_ROW_DATA | 0

Results

Contrary to expectations, the results above clearly indicate higher fragmentation when the indexes are created before the data is inserted – the non-clustered index on DontUpdateStatistics is more than 99% fragmented. It does not matter when these values (i.e. the results of the DMF) are fetched – before or after a SELECT is executed against the table – the index fragmentation remains the same and is not affected by when the statistics on the table are updated.
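If the index-before-insert pattern cannot be avoided, the fragmentation it leaves behind can still be repaired after the load. A minimal sketch (assuming SQL Server 2005 or later, where ALTER INDEX ... REBUILD is available):

-- Rebuild the heavily fragmented non-clustered index after the data load
ALTER INDEX NC_DontUpdateStatistics_RandomId ON DontUpdateStatistics REBUILD
GO

Of course, a rebuild re-reads and re-writes the entire index, so inserting first and indexing afterwards reaches the same end state in a single pass.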

CPU Utilization and Elapsed Time study

I will address this part of the study in two parts:

  1. In my original post (here), we already saw that when running the SELECT statements against the tables, the Profiler was able to detect that the table DontUpdateStatistics did not have the expected statistics
    • We also saw that when STATISTICS IO was turned ON, SQL Server clearly showed a performance difference between the two SELECT statements
  2. In this post, I will try to see whether the performance gain in the second SELECT has been redistributed to the index creation or data load phases

To see the overall CPU time and elapsed time statistics, I used the SET STATISTICS TIME ON option wrapped around the index (both clustered & non-clustered) creation and data insert statements. The results, based on the test data of 100,000 records, are summarized below:

Measure | Table | Data Insert (ms) | Index (clustered & non-clustered) creation (ms) | Total (ms)
CPU Time | DontUpdateStatistics | 4687 | 0 | 4687
CPU Time | UpdateStatistics | 4533 | 400 | 4933
Elapsed Time | DontUpdateStatistics | 24264 | 7 | 24271
Elapsed Time | UpdateStatistics | 22364 | 23872 | 46236
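For reference, here is a minimal sketch of how these timings were captured. The measured statements are the same ones from the scripts above; note that for the GO 100000 insert phase, SET STATISTICS TIME reports a time per statement, so the individual values have to be summed:

SET STATISTICS TIME ON
GO

-- Phase 1: the data load (shown here for the UpdateStatistics table)
INSERT INTO UpdateStatistics DEFAULT VALUES
GO 100000

-- Phase 2: the index creation
ALTER TABLE UpdateStatistics
    ADD CONSTRAINT PK_UpdateStatistics_StatId PRIMARY KEY CLUSTERED (StatId)
GO
CREATE NONCLUSTERED INDEX NC_UpdateStatistics_RandomId ON UpdateStatistics(RandomId)
GO

SET STATISTICS TIME OFF
GO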

Summary

It is now time to summarize the findings.

Data can be loaded into a table in multiple ways. We have been comparing the pros & cons of two such methods:

  1. Create necessary supporting indexes first, insert data afterwards
  2. Insert data first, create necessary supporting indexes afterwards

My original post compared these approaches in terms of the number of read-ahead reads (the number of pages placed into the cache) required before SQL Server can serve our request to fetch data from the tables, depending upon when and whether statistics were updated for the tables. In this post, we studied the level of index fragmentation and the time statistics.

The following chart summarizes my findings in terms of which approach “wins” (a win is represented by “W”, a loss by “L”):

Criterion | Index before, insert after | Insert before, index after | Further reading
Most up-to-date statistics | L | W | Previous post
Read-ahead reads required to service a query | L | W | Previous post
Index fragmentation level | L | W | This post
Time (CPU + elapsed time) statistics | W | L | This post

In short, if execution time is of utmost importance for a given scenario, approach #1 – creating the indexes before inserting the data – is preferred (this is also the most common approach). However, if fragmentation, up-to-date statistics and a reduced number of read-ahead reads are of importance (as in the case of read-only data), approach #2 – inserting the data before creating the indexes – is preferable.

I trust you found the study conducted above useful & interesting. If you have any comments, or would like to know more about it, please do feel free to contact me via the associated discussion or twitter. As always, I really appreciate the feedback that you, the community, provides.


Until we meet next time,

Be courteous. Drive responsibly.

SQL Server – Performance – Best Practice – Create Index Before or After a data insert?


Being part database developer myself, I am sure that what I am about to document today will be useful to almost everyone who has ever written a stored procedure that works with indexed tables for SSRS reporting or massive batch processing. What I will be trying today is to find an answer to the dilemma – should indexes be created before or after the insert to the table?

The moment of index creation impacts table statistics, and therefore should impact performance. The point that got me thinking about this test is the following line from Books On Line (http://msdn.microsoft.com/en-us/library/ms190397.aspx), Section: “Determining When to Create Statistics”, which quite clearly mentions that: “The query optimizer creates statistics for indexes on tables or views when the index is created.”

The approach that I will be taking today is:

  1. Create two identical tables in the TempDB
    • On one table, indexes will be created before the insert of test data is done
    • On the other, I will first insert some data, and then create indexes
  2. Later, I will fire simple SELECT queries against both, monitoring statistics usage by:
    • turning SET STATISTICS IO ON, and
    • using the SQL Profiler

WARNING: The scripts provided in this post are provided “as-is”, without warranty, and for demonstration purposes only. Running the DBCC scripts provided in the post on your QA or production environments can have serious side effects.

Creating the test environment

In order to create the test environment, I will first create a test table, create the necessary indexes and finally add some test data to it.

USE tempdb
GO

/*****************************************************************************************************************
   STEP 01: Create a table, complete with a primary key (to create a clustered index), and a non-clustered index
*****************************************************************************************************************/
USE tempdb
GO
CREATE TABLE DontUpdateStatistics
( StatId INT IDENTITY(1,1),
  RandomId INT DEFAULT (RAND()*1000),
  StatName VARCHAR(10) DEFAULT 'DEFAULT',
  CONSTRAINT PK_DontUpdateStatistics_StatId PRIMARY KEY CLUSTERED 
  (
    StatId
  )
)
GO

CREATE NONCLUSTERED INDEX NC_DontUpdateStatistics_RandomId ON DontUpdateStatistics(RandomId)
GO

/*****************************************************************************************************************
   STEP 03: Insert some test data into the table
*****************************************************************************************************************/
USE tempdb
GO
INSERT INTO DontUpdateStatistics DEFAULT VALUES
GO 100000

Next, I will create another table. However, the indexes will be created after the test data is inserted.

USE tempdb
GO
/*****************************************************************************************************************
   STEP 04: Create a table - this time, do not create any indexes
*****************************************************************************************************************/
CREATE TABLE UpdateStatistics
( StatId INT IDENTITY(1,1),
  RandomId INT DEFAULT (RAND()*1000),
  StatName VARCHAR(10) DEFAULT 'DEFAULT'
)
GO

/*****************************************************************************************************************
   STEP 05: Insert some test data into the table
*****************************************************************************************************************/
USE tempdb
GO
INSERT INTO UpdateStatistics DEFAULT VALUES
GO 100000

/*****************************************************************************************************************
   STEP 06: Now create the primary key clustered index and the non-clustered index
*****************************************************************************************************************/
ALTER TABLE UpdateStatistics
    ADD CONSTRAINT PK_UpdateStatistics_StatId PRIMARY KEY CLUSTERED 
    (
        StatId
    )
GO
CREATE NONCLUSTERED INDEX NC_UpdateStatistics_RandomId ON UpdateStatistics(RandomId)
GO

Run the Test

The first thing I will do is check whether the statistics for both tables have been updated.

Confirm statistics updates

/*****************************************************************************************************************
   STEP 07: See if the statistics on the tables in tempDB were updated
Script source: http://blog.sqlauthority.com/2010/01/25/sql-server-find-statistics-update-date-update-statistics/
             : by Pinal Dave
*****************************************************************************************************************/
USE tempdb
GO
SELECT name AS index_name,
STATS_DATE(OBJECT_ID, index_id) AS StatsUpdated
FROM sys.indexes
WHERE OBJECT_ID = OBJECT_ID('DontUpdateStatistics') OR
      OBJECT_ID = OBJECT_ID('UpdateStatistics')
GO

As can be seen, the statistics for Table #1 (“DontUpdateStatistics”) have not been updated:

[Screenshot: STATS_DATE output]
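To dig one level deeper than STATS_DATE, the statistics header itself can be inspected. A small sketch (my assumption being that for a statistics object that has never been populated, header columns such as Rows and Rows Sampled come back NULL):

-- Inspect the statistics object behind the clustered index of each table
DBCC SHOW_STATISTICS ('DontUpdateStatistics', PK_DontUpdateStatistics_StatId) WITH STAT_HEADER
DBCC SHOW_STATISTICS ('UpdateStatistics', PK_UpdateStatistics_StatId) WITH STAT_HEADER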

The Test Queries

Next, allow me to present a set of SELECT queries that would fetch data from the tables of interest.

USE tempdb
GO
/*****************************************************************************************************************
   STEP 08 A: Clear out the Buffers and the procedure cache.
            : Make sure to run a CHECKPOINT!
Reference source: 
http://beyondrelational.com/blogs/nakul/archive/2011/10/03/dbcc-dropcleanbuffers-needs-to-be-preceded-by-a-checkpoint-to-remove-all-buffers.aspx
*****************************************************************************************************************/
CHECKPOINT
GO
DBCC DROPCLEANBUFFERS
DBCC FREEPROCCACHE

/*****************************************************************************************************************
   STEP 08 B: Turn STATISTICS IO ON, and select some data from the DontUpdateStatistics table
*****************************************************************************************************************/
SET STATISTICS IO ON

SELECT * FROM DontUpdateStatistics WHERE StatId BETWEEN 10 AND 1000 
                                     AND RandomId > 300
GO

SET STATISTICS IO OFF

/*****************************************************************************************************************
   STEP 08 C: Clear out the Buffers and the procedure cache.
            : Make sure to run a CHECKPOINT!
Reference source: 
http://beyondrelational.com/blogs/nakul/archive/2011/10/03/dbcc-dropcleanbuffers-needs-to-be-preceded-by-a-checkpoint-to-remove-all-buffers.aspx
*****************************************************************************************************************/
CHECKPOINT
GO
DBCC DROPCLEANBUFFERS
DBCC FREEPROCCACHE
GO

/*****************************************************************************************************************
   STEP 08 D: Turn STATISTICS IO ON, and select some data from the UpdateStatistics table
*****************************************************************************************************************/
SET STATISTICS IO ON

SELECT * FROM UpdateStatistics WHERE StatId BETWEEN 10 AND 1000 
                                 AND RandomId > 300
GO

SET STATISTICS IO OFF

Configure the SQL Profiler

To use the SQL Server Profiler to help back my case, I connected to my SQL Server instance and selected only one event:

Event Group: Errors & Warnings

Event Class: Missing Column Statistics

[Screenshot: Profiler trace definition with the Missing Column Statistics event selected]

The Result

With the SQL Profiler running, I turned on the Actual Execution Plan in SSMS (by pressing Ctrl+M) and then executed the test queries. The results were as shown below:

The Execution Plan: [Screenshot]
Statistics Output: [Screenshot]
Profiler Output: [Screenshot]

Confirm statistics updates

/*****************************************************************************************************************
   STEP 09: See if the statistics on the tables in tempDB were updated
Script source: http://blog.sqlauthority.com/2010/01/25/sql-server-find-statistics-update-date-update-statistics/
             : by Pinal Dave
*****************************************************************************************************************/
USE tempdb
GO
SELECT name AS index_name,
STATS_DATE(OBJECT_ID, index_id) AS StatsUpdated
FROM sys.indexes
WHERE OBJECT_ID = OBJECT_ID('DontUpdateStatistics') OR
      OBJECT_ID = OBJECT_ID('UpdateStatistics')
GO

As can be seen, the statistics for Table #1 (“DontUpdateStatistics”) are now updated. The statistics for Table #2 (“UpdateStatistics”) are unchanged:

[Screenshot: STATS_DATE output after running the test queries]

Post-Mortem

First of all, I would like to draw attention to a line in the Profiler’s description of the “Missing Column Statistics” event class. The line states: “You will not see this event produced unless the option to auto-compute statistics is turned off”. For tempdb, auto-compute statistics is ON, so that is not the root cause of this issue.

[Screenshot: description of the Missing Column Statistics event class]
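The database-level settings can also be verified without the Profiler by a quick check against sys.databases (both flags should report 1 for tempdb by default):

-- 1 = ON, 0 = OFF
SELECT name,
       is_auto_create_stats_on,
       is_auto_update_stats_on
FROM sys.databases
WHERE name = N'tempdb'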

If this is not the issue, then what is? Summarizing the results, I can derive the following conclusions:

  • Both queries appear to have executed with equal performance impact – with a 50% split across them according to the execution plan
  • However, looking at the statistics IO output, I can see that:
    • Read-Ahead Reads for table #1 (“DontUpdateStatistics”) = 16
    • Read-Ahead Reads for table #2 (“UpdateStatistics”) = 4
    • The query against Table #1 (“DontUpdateStatistics”) therefore requires 4 times more read-ahead reads than the query against Table #2
  • The SQL Profiler warns me that column statistics are not available, and therefore, a sub-optimal plan can be chosen by the optimizer

Revisiting Books On Line (http://msdn.microsoft.com/en-us/library/ms190397.aspx), the section “Determining When to Create Statistics” clearly mentions that: “The query optimizer creates statistics for indexes on tables or views when the index is created.”

More precisely, statistics, if missing, are created when a SELECT is performed on the columns for which statistics are missing.

In our case, we created the indexes on table #2 (“UpdateStatistics”) after the data was inserted. Because creating indexes involves reading the data in the table, the statistics were built at that point. For table #1 (“DontUpdateStatistics”), though, the indexes were created before inserting the data. When query #1 was executed, the statistics were generated first and only then did the optimization happen – which made the query a little slower, as it spent more time on read-ahead reads.
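This also suggests a middle path: if the indexes must exist before the load, the cost can at least be kept away from the first reader by refreshing the statistics explicitly right after the load completes. A minimal sketch:

-- Build/refresh all statistics on the table immediately after the data load,
-- so that the first SELECT does not have to pay for it
UPDATE STATISTICS DontUpdateStatistics WITH FULLSCAN
GO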

If we look up STATISTICS IO on Books On Line (http://msdn.microsoft.com/en-us/library/ms184361.aspx), we can see that the read-ahead read count gives us the number of pages placed into the cache. In our case, all pages for table #1 have to be placed into the cache for the statistics update.
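Those cached pages can even be observed directly. The sketch below counts the buffer pool pages currently held for the test table; it is simplified to join on hobt_id, which covers the in-row and row-overflow allocation units (all that this test table has):

-- Run this in tempdb, since sys.partitions/sys.allocation_units are per-database
SELECT COUNT(*) AS cached_pages
FROM sys.dm_os_buffer_descriptors AS bd
INNER JOIN sys.allocation_units AS au
        ON bd.allocation_unit_id = au.allocation_unit_id
INNER JOIN sys.partitions AS p
        ON au.container_id = p.hobt_id
WHERE bd.database_id = DB_ID()
  AND p.object_id = OBJECT_ID('DontUpdateStatistics')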

Conclusion

It’s been a rather long post, with a simple conclusion: think about when index creation should be performed, because choosing the proper sequence matters from both a performance and an I/O perspective.

Until we meet next time,

Be courteous. Drive responsibly.

CAST v/s CONVERT – Is there a difference as far as SQL Server is concerned? Which is better?


Data type conversion has been one of the most used features of any programming language. Microsoft SQL Server users are familiar with the two functions CAST() and CONVERT(), and use them regularly – and, more specifically, interchangeably. A few months ago, I had written a post comparing the performance of CAST v/s CONVERT, and was able to prove that there is no difference in the performance of the two functions. You can read that post here: https://nakulvachhrajani.com/2011/01/03/cast-vs-convert/.

Recently, a colleague and I were discussing a particular design element and found that a data type casting was required. During this discussion, the following questions almost simultaneously came to our minds –

  • If the performance and end results are the same, what is the difference between CAST & CONVERT aside from the fact that they have different syntax elements?
  • Do we really need 2 data conversion functions?

Parameter | CAST | CONVERT
ANSI standard | Yes | No
Data-type coverage | Limited | Full (Date & Date Time values supported)
Performance | No difference | No difference
Microsoft SQL Server implementation | ?? | ??
Books On Line page | http://msdn.microsoft.com/en-us/library/ms187928.aspx | http://msdn.microsoft.com/en-us/library/ms187928.aspx

So, I ran a little test, and today, I will share with you my findings.

The CAST v/s CONVERT Test

The test is quite simple – we have two identical T-SQL statements, one using CAST, the other using CONVERT. We will use the Properties window of SSMS to analyze the execution plan and try to see what can be found about the underlying implementation.

Let’s take a look at our test statements:

/***********************************************************
Press Ctrl+M (or go to Query->Include Actual Execution Plan)
************************************************************/
USE AdventureWorks2008R2
GO
--Use CONVERT
SELECT CONVERT(VARCHAR(10), BusinessEntityID) FROM HumanResources.Employee

--Use CAST
SELECT CAST(BusinessEntityID AS VARCHAR(10)) FROM HumanResources.Employee

  1. Connect to your SQL Server instance using SQL Server Management Studio (SSMS)
  2. Copy the T-SQL code above into an SSMS query editor window
  3. Run the above T-SQL statements against your SQL Server instance
  4. Change over to the Execution Plan tab in the Results pane
  5. In the Results pane, notice that both CAST & CONVERT have been implemented as “Compute Scalar” operators
  6. [Screenshot: Compute Scalar operators in the execution plan]
  7. Press the F4 key or go to View –> Properties to launch the Properties window
  8. Expand the “Defined Values” node

CONVERT implementation

CONVERT does not come up with any surprises, and has a straightforward internal implementation as demonstrated in the Properties window.

[Screenshot: Defined Values for the CONVERT query]

CAST implementation

When we move to the properties for the query using the CAST operation, we see that under the hood, SQL Server does take us for a ride. Internally, CAST is implemented as a CONVERT call. There is no difference between CAST & CONVERT besides the fact that CAST is an ANSI standard, while CONVERT is not. No wonder both CAST & CONVERT demonstrate the same performance.

[Screenshot: Defined Values for the CAST query, showing an internal CONVERT call]
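The same evidence can also be pulled straight out of the plan cache instead of clicking through SSMS. A sketch (open the returned query_plan XML and look for the ScalarOperator under DefinedValues):

SELECT st.text AS query_text,
       qp.query_plan
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
WHERE st.text LIKE '%CAST(BusinessEntityID%'
  AND st.text NOT LIKE '%dm_exec_cached_plans%' -- exclude this probe query itself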

Surprised? Try it for yourself – I was not prepared to see what I saw when I first ran through the test. I restarted the entire server and got a cup of coffee to make sure I was not dreaming!

Conclusion

I guess all I need to do now is to complete the little grid I had above:

Parameter | CAST | CONVERT
ANSI standard | Yes | No
Data-type coverage | Limited | Full (Date & Date Time values supported)
Performance | No difference | No difference
Microsoft SQL Server implementation | CONVERT | CONVERT
Books On Line page | http://msdn.microsoft.com/en-us/library/ms187928.aspx | http://msdn.microsoft.com/en-us/library/ms187928.aspx

What would I recommend?

Although we now know that, to Microsoft SQL Server, data type conversion only happens via CONVERT, I would still prefer to go with CAST unless I am converting date or date-time values. The reason: CAST is an ANSI standard, while CONVERT is not. CONVERT is implementation-specific, and may therefore change in terms of its parameters or underlying behaviour. To the calling T-SQL statement, it is safe to assume that CAST will remain unchanged (unless the standard changes), and it is therefore Microsoft SQL Server’s responsibility to translate the CAST into its CONVERT implementation.

Do share your thoughts on the whole CAST v/s CONVERT myths and controversies floating around in the SQL Server world. I will be more than happy to research more and share my findings with you.

Until we meet next time,

Be courteous. Drive responsibly.

CAST v/s CONVERT


Data type conversion has been one of the most used features of any programming language. Microsoft SQL Server users are familiar with the two functions: CAST() and CONVERT() and use them regularly, and more specifically, interchangeably.

However, using CAST() and CONVERT() interchangeably is a capital mistake for a developer to make. Madhivanan, a SQL Server blogger on BeyondRelational.com (http://beyondrelational.com/blogs/madhivanan/default.aspx), does a very good job of highlighting the differences between CAST() and CONVERT() in his blog post here.

To summarize, the differences that he highlights are:

  1. CAST is an ANSI standard, while CONVERT is specific to Microsoft SQL Server
  2. CAST cannot be used for formatting purposes while converting a value to a character string, whereas CONVERT can do so, especially for datetime and money datatypes
  3. CAST cannot convert a string to a DATETIME value of a required format (i.e. formatting as mm/dd/yyyy or dd/mm/yyyy as required etc), while the CONVERT can

Finally, he warns us to use a properly calculated size value when using the CAST and CONVERT functions to convert integer values to character data types.
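To make points #2 and #3 and the sizing warning concrete, here is a small illustration (the '*' behaviour is documented for INT-family values converted to character types that are too short):

-- CONVERT accepts a style code for date formatting; CAST has no equivalent
SELECT CAST(GETDATE() AS VARCHAR(30))       AS cast_default_format,
       CONVERT(VARCHAR(10), GETDATE(), 103) AS convert_dd_mm_yyyy -- British/French style

-- An INT that does not fit the target length is silently returned as '*'
SELECT CAST(123456 AS VARCHAR(3))  AS cast_too_small,    -- returns '*'
       CAST(123456 AS VARCHAR(11)) AS cast_large_enough  -- returns '123456'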

All very great points, and very, very useful. Thank-you, Madhivanan for the wonderful research and enlightenment.

A Question

We can draw a general conclusion that for computation purposes, CAST should be used and for formatting of values for display on a UI or a report, CONVERT should be used. However, the interesting question is one which came up when I was following the discussion on the blog the other day –

Tuesday, July 27, 2010 10:38 AM by cute_boboi

For data extraction purpose, with 1M+ records, which method is faster/recommended ? CAST or CONVERT from:
(i) Date to varchar
(ii) Int to varchar

Today, I try to answer that question.

The Demonstration

Preparing the Environment

Let’s start by creating a table, and filling it up with some test data (running against the AdventureWorks2008 database gives you about 356,409 rows in the table):

-- Step 01. Create a test table
CREATE TABLE CASTCONVERTTest (Id INT,
                              CrDate DATETIME)

-- Step 02. Generate Test Data
INSERT INTO CASTCONVERTTest
SELECT sso1.Id, sso1.crdate
FROM sys.sysobjects sso1
CROSS JOIN sys.sysobjects sso2

Now, let’s begin by pressing Ctrl+M in SQL Server Management Studio (or going to Query -> Include Actual Execution Plan) to enable the Actual Execution Plan.

Conversion of INT to VARCHAR

In order to see the performance of using CAST and CONVERT for the INT to VARCHAR conversion, we will use the following script. At each stage, we will be using DBCC FREEPROCCACHE to remove all elements from the plan cache.

/***********************************************************
Press Ctrl+M (or go to Query->Include Actual Execution Plan)
************************************************************/
-- Step 03. Test conversion of INT to VARCHAR
-- A. Remove all elements from the plan cache
DBCC freeproccache
GO

-- B. Check the performance of CAST
SELECT CAST(Id AS VARCHAR(11))
FROM CASTCONVERTTest
GO

-- C. Remove all elements from the plan cache
DBCC freeproccache
GO

-- D. Check the performance of CONVERT
SELECT CONVERT(VARCHAR(11),Id)
FROM CASTCONVERTTest
GO

Let’s look at the Actual Execution Plan used by SQL Server. As you can see, we get a 50-50 split, which means that both CAST and CONVERT perform equally well for the INT to VARCHAR conversion.

[Screenshot: execution plan showing a 50-50 cost split between CAST and CONVERT]

Conversion of DATETIME to VARCHAR

In order to see the performance of using CAST and CONVERT for the DATETIME to VARCHAR conversion, we will use the following script. At each stage, we will be using DBCC FREEPROCCACHE to remove all elements from the plan cache.

-- Step 04. Test conversion of DATETIME to VARCHAR
-- A. Remove all elements from the plan cache
DBCC freeproccache
GO

-- B. Check the performance of CAST
SELECT CAST(CrDate AS VARCHAR(25))
FROM CASTCONVERTTest
GO

-- C. Remove all elements from the plan cache
DBCC freeproccache
GO

-- D. Check the performance of CONVERT
SELECT CONVERT(VARCHAR(25),CrDate,106)
FROM CASTCONVERTTest
GO

Let’s look at the Actual Execution Plan used by SQL Server. As you can see, again we get a 50-50 split, which means that both CAST and CONVERT perform equally well for the DATETIME to VARCHAR conversion.

[Screenshot: execution plan showing a 50-50 cost split between CAST and CONVERT]
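Since the 50-50 split in the plan is a relative cost estimate rather than a measurement, a wall-clock sanity check can be added by timing the two variants directly; a minimal sketch:

SET STATISTICS TIME ON
GO
SELECT CAST(CrDate AS VARCHAR(25)) FROM CASTCONVERTTest
GO
SELECT CONVERT(VARCHAR(25), CrDate, 106) FROM CASTCONVERTTest
GO
SET STATISTICS TIME OFF
GO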

The Cleanup

Finally, as always, let’s cleanup the environment.

-- Step 05. Cleanup!
DROP TABLE CASTCONVERTTest

In Conclusion

We can safely conclude that both CAST and CONVERT perform equally well for the following conversions:

  1. INT to VARCHAR
  2. DATETIME to VARCHAR

I hope the above is a satisfactory reply to the question we started with.

Armed with the research conducted by Madhivanan and with the performance test described above, the choice is now up to the reader. Do share your CAST & CONVERT practices on this blog. Also, do mention the reasons why you follow a particular rule of thumb, if possible.

Be courteous. Drive responsibly.