Tag Archives: Administration

Articles related to Microsoft SQL Server Administration

#0344 – SQL Server – Missing Configuration Manager on Windows 8


Microsoft SQL Server ships with a very powerful Configuration Manager, which can be used to manage the SQL Server services and control the instance's network visibility and footprint. Working without the SQL Server Configuration Manager is a nightmare for a DBA.

SQL Server Configuration Manager is not available in Windows 8

Many development environments have SQL Server installed on the developer workstations running a Windows 8 operating system. I have been consulted on a couple of instances where the SQL Server Configuration Manager fails to come up in the “Start” menu application list (even using the Search charm does not yield any results).

The workaround

The SQL Server Configuration Manager is, in reality, a Microsoft Management Console (MMC) snap-in. Hence, the workaround is to do the following:

  1. Launch the Search charm
  2. Search for the SQL Server Configuration Manager MMC snap-in:
    • SQL Server 2012 – search for “SQLServerManager11.msc”
    • SQL Server 2008 R2 – search for “SQLServerManager10.msc”
  3. Press “Enter” to launch (or alternatively, right-click on the result to pin to the Start Menu or create a shortcut on the desktop)
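
As an aside, if the reason you need the Configuration Manager is simply to check the state or service accounts of the SQL Server services, newer builds (SQL Server 2008 R2 SP1 and later) expose this information via T-SQL as well. A minimal sketch (requires the VIEW SERVER STATE permission):

USE master;
GO
--List the SQL Server services, their startup type, current state and service account
SELECT servicename,
       startup_type_desc,
       status_desc,
       service_account
FROM sys.dm_server_services;
GO

This does not replace the Configuration Manager (service accounts, for example, should still be changed through it), but it is a handy read-only alternative when the snap-in is not at hand.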

Further Reading on the SQL Server Configuration Manager

  • SQL Server Configuration Manager – Hide your SQL Server Instance [Link]
  • #0151 – SQL Server– Which TCP/IP port is my SQL Server listening on? [Link]
  • Changing SQL Server Service Account or Password – Avoid restarting SQL Server [Link]

[EDIT, 09/08/2014, 10:55AM IST]: Rectified a typo for the MSC file name related to Configuration Manager for SQL Server 2008 R2.

Until we meet next time,

Be courteous. Drive responsibly.

#0341 – SQL Server – Random “String or binary data would be truncated” errors during bulk data loads


Interfacing multiple systems is always a great source of learning. In the various integrations that I have been a part of over the years, one of the many things that I have learnt is that no two systems have the same world view of the data that they store.

To take a very simple example, assume that we are interfacing a home-grown warehousing system with a third-party front-desk PoS system. One of the first problems most people will run into is that the supported length for the product name differs – the warehousing system might accept a product name of, say, 60 characters, whereas the PoS may have a limit of 50 characters.

Integrations need to take care of this sort of mismatch, but it is not always easy. Sometimes the interface specifications are vague or, in the case of home-grown legacy systems, possibly non-existent. In most enterprise integrations, a pre-production run is done with a backup of the existing production data. When loading data from one system into another, it is quite possible that the tests run fine, but a production run fails with the following error:

Msg 8152, Level 16, State 14, Line 17
String or binary data would be truncated.

As the error suggests, this is what can very simply be termed a buffer overflow – we are attempting to insert a larger string into a smaller container (a table column). Because the data was the same in both runs, the question that comes up is:

Why was the “String or binary data would be truncated” error random and not reproducible at will?

Connection Settings – ANSI_WARNINGS

The answer to the puzzle lies in the connection-level SET option – ANSI_WARNINGS. Different client drivers and tools can set this option differently, which is why the same data may load cleanly over one connection and fail over another.

If ANSI_WARNINGS is set to OFF, attempting to insert a longer string into a smaller column silently truncates the string. The following test confirms the behaviour:

USE tempdb;
GO

--Create Test table
IF OBJECT_ID('dbo.AnsiWarningsTest','U') IS NOT NULL
DROP TABLE dbo.AnsiWarningsTest;
GO

CREATE TABLE dbo.AnsiWarningsTest (FourtyCharacterTest VARCHAR(40));
GO

--Explicitly turn ANSI_WARNINGS OFF for the first test
SET ANSI_WARNINGS OFF;

--Check current value of ANSI_WARNINGS
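--(@@OPTIONS bit 8 corresponds to the ANSI_WARNINGS session setting)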
IF (@@OPTIONS & 8) = 8
PRINT 'ANSI_WARNINGS are ON';
ELSE
PRINT 'ANSI_WARNINGS are OFF';

--Attempt to insert data longer than available column value
--This should work fine
INSERT INTO dbo.AnsiWarningsTest (FourtyCharacterTest)
SELECT REPLICATE('a',100);
GO

SELECT AnsiWarningsTest.FourtyCharacterTest
FROM dbo.AnsiWarningsTest;
GO

As can be seen from the screenshot below, only the first 40 characters were successfully inserted into the table dbo.AnsiWarningsTest. The string truncation was handled silently by Microsoft SQL Server.

[Screenshot: query results showing the 40-character truncated string]

Now, we turn ANSI_WARNINGS ON and repeat the same test.

--Set ANSI_WARNINGS to ON
SET ANSI_WARNINGS ON;

--Check current value of ANSI_WARNINGS
IF (@@OPTIONS & 8) = 8
PRINT 'ANSI_WARNINGS are ON';
ELSE
PRINT 'ANSI_WARNINGS are OFF';

--Attempt to insert data longer than available column value
--This SHOULD result in an exception
INSERT INTO dbo.AnsiWarningsTest (FourtyCharacterTest)
SELECT REPLICATE('b',100);
GO

Executing this immediately results in the following error in the “Messages” tab of SSMS.

ANSI_WARNINGS are ON
Msg 8152, Level 16, State 14, Line 12
String or binary data would be truncated.

Conclusion

When working with bulk data imports, it is important to know the differences in the storage design of both systems. Once known, mismatches that may result in string truncation errors should be handled explicitly, for example with the SUBSTRING function, as sketched below.
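
Here is a minimal sketch of such explicit handling, reusing the dbo.AnsiWarningsTest table created earlier in this post (the 100-character value and the variable name are only for illustration):

--Explicitly truncate the incoming value to the 40-character column width,
--so that the INSERT succeeds irrespective of the ANSI_WARNINGS setting
DECLARE @incomingValue VARCHAR(100) = REPLICATE('c', 100);

INSERT INTO dbo.AnsiWarningsTest (FourtyCharacterTest)
SELECT SUBSTRING(@incomingValue, 1, 40);
GO

Of course, silently truncating data has business implications of its own – the point is that the truncation should be a documented, deliberate decision rather than a side effect of a connection setting.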

Further Reading

  • ANSI_WARNINGS [MSDN Link]
  • SUBSTRING [MSDN Link]

Until we meet next time,

Be courteous. Drive responsibly.

#0277 – SQL Server – Interview Question – Script to Identify DML Triggers and Trigger types


As a DBA, it is very important to be aware of the database schema that you own. For an entry-level DBA position, some of the common interview questions around maintaining DML triggers are:

  • How many DML triggers exist in your database?
  • Can you classify them into INSERT, UPDATE and DELETE triggers?
  • Given any Microsoft SQL Server database, can you answer both of these questions?

The script below answers all of the questions above by querying the catalog views – sys.triggers and sys.trigger_events:

USE AdventureWorks2008R2 ;
GO
SELECT  st.name AS TriggerName,
        OBJECT_NAME(st.parent_id) AS ParentTableName,
        st.is_ms_shipped AS IsMSShipped,
        st.is_disabled AS IsDisabled,
        st.is_not_for_replication AS IsNotForReplication,
        st.is_instead_of_trigger AS IsInsteadOfTrigger,
        te.type AS TypeId,
        te.type_desc AS TypeDescription,
        te.is_first AS IsTriggerFiredFirst,
        te.is_last AS IsTriggerFiredLast
FROM    sys.triggers AS st
INNER JOIN sys.trigger_events AS te ON te.object_id = st.object_id
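--parent_id > 0 keeps only DML (object) triggers; database-level DDL triggers have parent_id = 0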
WHERE st.parent_id > 0
  AND st.is_ms_shipped = 0
ORDER BY st.parent_id, te.type ASC;
GO

/* Results:
TriggerName          ParentTableName      IsMS    Is       IsNotFor    IsInstead Type Type        IsTrigger   IsTrigger
                                          Shipped Disabled Replication Trigger   Id   Description FiredFirst  FiredLast
-------------------- -------------------- ------- -------- ----------- --------- ---- ----------- ----------- ---------
dVendor              Vendor               0       0        1           1         3    DELETE      0           0
iWorkOrder           WorkOrder            0       0        0           0         1    INSERT      0           0
uWorkOrder           WorkOrder            0       0        0           0         2    UPDATE      0           0
iPurchaseOrderDetail PurchaseOrderDetail  0       0        0           0         1    INSERT      0           0
uPurchaseOrderDetail PurchaseOrderDetail  0       0        0           0         2    UPDATE      0           0
uPurchaseOrderHeader PurchaseOrderHeader  0       0        0           0         2    UPDATE      0           0
iduSalesOrderDetail  SalesOrderDetail     0       0        0           0         1    INSERT      0           0
iduSalesOrderDetail  SalesOrderDetail     0       0        0           0         2    UPDATE      0           0
iduSalesOrderDetail  SalesOrderDetail     0       0        0           0         3    DELETE      0           0
dEmployee            Employee             0       0        1           1         3    DELETE      0           0
uSalesOrderHeader    SalesOrderHeader     0       0        1           0         2    UPDATE      0           0
iuPerson             Person               0       0        1           0         1    INSERT      0           0
iuPerson             Person               0       0        1           0         2    UPDATE      0           0
*/
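
To answer the first interview question directly – how many DML triggers of each type exist – the same catalog views can simply be aggregated. A small sketch along the same lines:

USE AdventureWorks2008R2 ;
GO
SELECT  te.type_desc AS TriggerType,
        COUNT(*) AS TriggerCount
FROM    sys.triggers AS st
INNER JOIN sys.trigger_events AS te ON te.object_id = st.object_id
WHERE   st.parent_id > 0
  AND   st.is_ms_shipped = 0
GROUP BY te.type_desc
ORDER BY te.type_desc;
GO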

Further Reading

  • sys.triggers [Link]
  • sys.trigger_events [Link]

Until we meet next time,

Be courteous. Drive responsibly.

#0138 – SQL Server – Performance – Best Practice – Create Index Before or After a data insert – I/O, Fragmentation, CPU and Elapsed Time


In November 2011, I wrote a post titled SQL Server – Performance – Best Practice – Create Index Before or After a data insert?. The post dealt with answering the question – which method was better – whether to create an index before or after a bulk-insert? The post was essentially targeted to code that generated temporary tables for SSRS reporting or bulk processing.

Erik Eckhardt provided some great feedback on the post. Essentially, his feedback was:

  1. He wanted to see a Profiler trace comparison between the two approaches
  2. A fragmentation study was in order – which method produced the most efficient indexes?

I had intended to carry out this study a long time ago (during the Christmas weekend), but better late than never. So, here are the results of the study.

Creating the test data

The approach that I will be taking today is:

  1. Create two identical tables in the TempDB
    • On one table, indexes will be created before the insert of test data is done
    • On the other, I will first insert some data, and then create indexes

WARNING: The scripts provided in this post are provided “as-is”, without warranty, and for understanding purposes only. Using the DBCC scripts provided in the post on your QA or production environments can have serious side effects.

The script that I will be using today is the same as in my earlier post; however, please find it below for reference. First up is the script that creates the indexes before inserting the data.

USE tempdb
GO

SET NOCOUNT ON

/*****************************************************************************************************************
   STEP 01: Create a table, complete with a primary key (to create a clustered index), and a non-clustered index
*****************************************************************************************************************/
USE tempdb
GO
CREATE TABLE DontUpdateStatistics
( StatId INT IDENTITY(1,1),
  RandomId INT DEFAULT (RAND()*1000),
  StatName VARCHAR(10) DEFAULT 'DEFAULT',
  CONSTRAINT PK_DontUpdateStatistics_StatId PRIMARY KEY CLUSTERED 
  (
    StatId
  )
)
GO

CREATE NONCLUSTERED INDEX NC_DontUpdateStatistics_RandomId ON DontUpdateStatistics(RandomId)
GO

/*****************************************************************************************************************
   STEP 02: Insert some test data into the table
*****************************************************************************************************************/
USE tempdb
GO
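--NOTE: "GO" followed by a number is an SSMS/sqlcmd batch-separator feature:
--it repeats the preceding batch, so the single-row INSERT below runs 100,000 times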
INSERT INTO DontUpdateStatistics DEFAULT VALUES
GO 100000

Next up is the script that I recommend – wherein the indexes are created after the data is inserted.

USE tempdb
GO

SET NOCOUNT ON

/*****************************************************************************************************************
   STEP 04: Create a table - this time, do not create any indexes
*****************************************************************************************************************/
CREATE TABLE UpdateStatistics
( StatId INT IDENTITY(1,1),
  RandomId INT DEFAULT (RAND()*1000),
  StatName VARCHAR(10) DEFAULT 'DEFAULT'
)
GO

/*****************************************************************************************************************
   STEP 05: Insert some test data into the table
*****************************************************************************************************************/
USE tempdb
GO
INSERT INTO UpdateStatistics DEFAULT VALUES
GO 100000

/*****************************************************************************************************************
   STEP 06: Now create the primary key clustered index and the non-clustered index
*****************************************************************************************************************/
ALTER TABLE UpdateStatistics
    ADD CONSTRAINT PK_UpdateStatistics_StatId PRIMARY KEY CLUSTERED 
    (
        StatId
    )
GO
CREATE NONCLUSTERED INDEX NC_UpdateStatistics_RandomId ON UpdateStatistics(RandomId)
GO

I will answer the second question first – which method produced an index that was more efficient from a fragmentation perspective?

Fragmentation study

Please note that in both the queries above, the clustered index is created on the column – StatId, which is an identity column. The inserts are therefore in the order we would want them to be in both cases, causing a “write-once” operation as Erik put it.

Once the data had been inserted into the test tables, I ran the following query, which uses the sys.dm_db_index_physical_stats dynamic management function to look at the physical statistics of the indexes. NOTE: I will not use DBCC SHOWCONTIG because it is marked for deprecation (BOL reference here).

USE tempdb
GO
SELECT 'Table: DontUpdateStatistics' AS TableName,
       index_id AS IndexId,
       index_type_desc AS IndexType,
       index_depth AS IndexDepth,
       avg_fragment_size_in_pages AS 'AvgFragmentSize(Pages)',
       avg_fragmentation_in_percent AS 'AverageFragmentation(Percentage)',
       fragment_count AS Fragments,
       page_count AS 'PageCount',
       partition_number AS PartitionNumber,
       alloc_unit_type_desc AS AllocationUnitType,
       index_level AS IndexLevel
FROM sys.dm_db_index_physical_stats(DB_ID(),OBJECT_ID('DontUpdateStatistics'),-1,0,NULL)

SELECT 'Table: UpdateStatistics'AS TableName,
       index_id AS IndexId,
       index_type_desc AS IndexType,
       index_depth AS IndexDepth,
       avg_fragment_size_in_pages AS 'AvgFragmentSize(Pages)',
       avg_fragmentation_in_percent AS 'AverageFragmentation(Percentage)',
       fragment_count AS Fragments,
       page_count AS 'PageCount',
       partition_number AS PartitionNumber,
       alloc_unit_type_desc AS AllocationUnitType,
       index_level AS IndexLevel
FROM sys.dm_db_index_physical_stats(DB_ID(),OBJECT_ID('UpdateStatistics'),-1,0,NULL)

I looked at the following values:

  • avg_fragmentation_in_percent – The percent of logical fragmentation (out-of-order pages in the index)
  • fragment_count – The number of fragments (physically consecutive leaf pages) in the index
  • avg_fragment_size_in_pages – Average number of pages in one fragment in an index

Essentially, avg_fragment_size_in_pages and avg_fragmentation_in_percent move in opposite directions – the former is simply page_count divided by fragment_count. Therefore, we are looking for a higher value of avg_fragment_size_in_pages and a lower value of avg_fragmentation_in_percent.

Below is the output that I received when the above queries were executed against my test instance.

[Screenshot: Fragmentation Results]

In case you are not able to see the screenshot above, here’s the same data in tabular format:

TableName                   Index IndexType          Index AvgFragmentSize  AvgFragmentation  Fragments Page  Partition Allocation  Index
                            Id                       Depth (Pages)          (Percentage)                Count Number    UnitType    Level
--------------------------- ----- ------------------ ----- ---------------- ----------------- --------- ----- --------- ----------- -----
Table: DontUpdateStatistics 1     CLUSTERED INDEX    2     11.5666666666667 1.44092219020173  30        347   1         IN_ROW_DATA 0
Table: DontUpdateStatistics 2     NONCLUSTERED INDEX 2     1                99.6108949416342  257       257   1         IN_ROW_DATA 0
Table: UpdateStatistics     1     CLUSTERED INDEX    2     173.5            0                 2         347   1         IN_ROW_DATA 0
Table: UpdateStatistics     2     NONCLUSTERED INDEX 2     58               0.574712643678161 3         174   1         IN_ROW_DATA 0

Results

Contrary to expectations, the results above clearly indicate that we get higher fragmentation when indexes are created before data is inserted into the tables. It does not matter when these DMV results are fetched – before or after a SELECT is executed against the table – the index fragmentation remains the same and is not affected by when the statistics on the table are updated.

CPU Utilization and Elapsed Time study

I will address this part of the study in two parts:

  1. In my original post (here), we already saw that when running the SELECT statements, the Profiler was able to detect that the DontUpdateStatistics table did not have the expected statistics
    • We have also seen that when STATISTICS IO was turned ON, SQL Server clearly showed a performance difference between the two SELECT statements
  2. In this post, I will try to see whether the performance gain in the second SELECT has been redistributed to the index creation or data load phases

To see the overall CPU time and elapsed time statistics, I wrapped the index creation (both clustered and non-clustered) and data insert statements with SET STATISTICS TIME ON. The results for the test data of 100,000 records are summarized below (a sketch of the instrumentation follows the table):

                            Data Insert (ms) Index (clustered & non-clustered) creation (ms) Totals (ms)
CPU Time
Table: DontUpdateStatistics 4687             0                                                4687
Table: UpdateStatistics     4533             400                                              4933
Elapsed Time
Table: DontUpdateStatistics 24264            7                                                24271
Table: UpdateStatistics     22364            23872                                            46236
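
For reference, below is a minimal sketch of how the UpdateStatistics timings were captured (the DontUpdateStatistics script was instrumented the same way around its own statements). It assumes a freshly created, index-free UpdateStatistics table as in STEP 04 above; the per-statement times reported for the 100,000 insert batches were summed to produce the totals in the table.

USE tempdb
GO
--Report CPU and elapsed time for every statement from here on
SET STATISTICS TIME ON
GO

--Data load phase
INSERT INTO UpdateStatistics DEFAULT VALUES
GO 100000

--Index creation phase
ALTER TABLE UpdateStatistics
    ADD CONSTRAINT PK_UpdateStatistics_StatId PRIMARY KEY CLUSTERED (StatId)
GO
CREATE NONCLUSTERED INDEX NC_UpdateStatistics_RandomId ON UpdateStatistics(RandomId)
GO

SET STATISTICS TIME OFF
GO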

Summary

It is now time to summarize the findings.

Data can be loaded into a table in multiple ways. We have been comparing the pros and cons of two such methods:

  1. Create necessary supporting indexes first, insert data afterwards
  2. Insert data first, create necessary supporting indexes afterwards

My original post compared these approaches in terms of the number of read-ahead reads (pages placed into the cache) required before SQL Server can serve a request to fetch data from the tables, depending upon when and whether statistics were updated. In this post, we studied the level of index fragmentation and the time statistics.

The following chart summarizes my findings on which approach “wins” each measure (a win is represented by “W”, a loss by “L”):

                            Most up-to-date  Read-ahead reads required  Index                Time (CPU + Elapsed)
                            statistics       to service a query         fragmentation level  statistics
Index before, insert after  L                L                          L                    W
Insert before, index after  W                W                          W                    L
Further reading             Previous post    Previous post              This post            This post

In short, if execution time is of utmost importance for a given scenario, approach #1 is preferred, wherein indexes are created before inserting the data (the most common approach). However, if fragmentation, up-to-date statistics and a reduced number of read-ahead reads are of importance (as in the case of read-only data), approach #2 is preferable, wherein data is inserted before creating the indexes.

I trust you found the study above useful and interesting. If you have any comments or would like to know more, please feel free to contact me via the associated discussion or Twitter. As always, I really appreciate the feedback that you, the community, provide.


Until we meet next time,

Be courteous. Drive responsibly.

SQL Server – Performance – Best Practice – Create Index Before or After a data insert?


As a database developer myself, I am sure that what I am about to document today will be useful to almost everyone who has ever written a stored procedure that populates indexed tables for SSRS reporting or massive batch processing. What I will try to answer today is the dilemma – should indexes be created before or after inserting data into the table?

The moment of index creation impacts table statistics, and therefore should impact performance. The point that got me thinking about this test is the following line from Books On Line (http://msdn.microsoft.com/en-us/library/ms190397.aspx), section “Determining When to Create Statistics”, which quite clearly mentions that: “The query optimizer creates statistics for indexes on tables or views when the index is created.”

The approach that I will be taking today is:

  1. Create two identical tables in the TempDB
    • On one table, indexes will be created before the insert of test data is done
    • On the other, I will first insert some data, and then create indexes
  2. Later, I will fire simple SELECT queries against both tables, monitoring performance and statistics usage by:
    • setting SET STATISTICS IO ON, and
    • using the SQL Profiler

WARNING: The scripts provided in this post are provided “as-is” and without warranty for understanding purposes only. Using the DBCC scripts provided in the post on your QA or production environments can have serious side effects.

Creating the test environment

In order to create the test environment, I will first create a test table, create the necessary indexes and finally add some test data to it.

USE tempdb
GO

/*****************************************************************************************************************
   STEP 01: Create a table, complete with a primary key (to create a clustered index), and a non-clustered index
*****************************************************************************************************************/
USE tempdb
GO
CREATE TABLE DontUpdateStatistics
( StatId INT IDENTITY(1,1),
  RandomId INT DEFAULT (RAND()*1000),
  StatName VARCHAR(10) DEFAULT 'DEFAULT',
  CONSTRAINT PK_DontUpdateStatistics_StatId PRIMARY KEY CLUSTERED 
  (
    StatId
  )
)
GO

CREATE NONCLUSTERED INDEX NC_DontUpdateStatistics_RandomId ON DontUpdateStatistics(RandomId)
GO

/*****************************************************************************************************************
   STEP 02: Insert some test data into the table
*****************************************************************************************************************/
USE tempdb
GO
INSERT INTO DontUpdateStatistics DEFAULT VALUES
GO 100000

Next, I will create another table. This time, however, the indexes will be created after the test data is inserted.

USE tempdb
GO
/*****************************************************************************************************************
   STEP 04: Create a table - this time, do not create any indexes
*****************************************************************************************************************/
CREATE TABLE UpdateStatistics
( StatId INT IDENTITY(1,1),
  RandomId INT DEFAULT (RAND()*1000),
  StatName VARCHAR(10) DEFAULT 'DEFAULT'
)
GO

/*****************************************************************************************************************
   STEP 05: Insert some test data into the table
*****************************************************************************************************************/
USE tempdb
GO
INSERT INTO UpdateStatistics DEFAULT VALUES
GO 100000

/*****************************************************************************************************************
   STEP 06: Now create the primary key clustered index and the non-clustered index
*****************************************************************************************************************/
ALTER TABLE UpdateStatistics
    ADD CONSTRAINT PK_UpdateStatistics_StatId PRIMARY KEY CLUSTERED 
    (
        StatId
    )
GO
CREATE NONCLUSTERED INDEX NC_UpdateStatistics_RandomId ON UpdateStatistics(RandomId)
GO

Run the Test

The first thing I will do is check whether the statistics for both tables have been updated.

Confirm statistics updates

/*****************************************************************************************************************
   STEP 07: See if the statistics on the tables in tempdb were updated
Script source: http://blog.sqlauthority.com/2010/01/25/sql-server-find-statistics-update-date-update-statistics/
             : by Pinal Dave
*****************************************************************************************************************/
USE tempdb
GO
SELECT name AS index_name,
STATS_DATE(OBJECT_ID, index_id) AS StatsUpdated
FROM sys.indexes
WHERE OBJECT_ID = OBJECT_ID('DontUpdateStatistics') OR
      OBJECT_ID = OBJECT_ID('UpdateStatistics')
GO

As can be seen, the statistics for Table #1 (“DontUpdateStatistics”) did not update:

[Screenshot: StatsUpdated is NULL for the DontUpdateStatistics indexes]

The Test Queries

Next, allow me to present a set of SELECT queries that would fetch data from the tables of interest.

USE tempdb
GO
/*****************************************************************************************************************
   STEP 08 A: Clear out the Buffers and the procedure cache.
            : Make sure to run a CHECKPOINT!
Reference source: 
http://beyondrelational.com/blogs/nakul/archive/2011/10/03/dbcc-dropcleanbuffers-needs-to-be-preceded-by-a-checkpoint-to-remove-all-buffers.aspx
*****************************************************************************************************************/
CHECKPOINT
GO
DBCC DROPCLEANBUFFERS
DBCC FREEPROCCACHE

/*****************************************************************************************************************
   STEP 08 B: Turn STATISTICS IO ON, and select some data from the DontUpdateStatistics table
*****************************************************************************************************************/
SET STATISTICS IO ON

SELECT * FROM DontUpdateStatistics WHERE StatId BETWEEN 10 AND 1000 
                                     AND RandomId > 300
GO

SET STATISTICS IO OFF

/*****************************************************************************************************************
   STEP 08 C: Clear out the Buffers and the procedure cache.
            : Make sure to run a CHECKPOINT!
Reference source: 
http://beyondrelational.com/blogs/nakul/archive/2011/10/03/dbcc-dropcleanbuffers-needs-to-be-preceded-by-a-checkpoint-to-remove-all-buffers.aspx
*****************************************************************************************************************/
CHECKPOINT
GO
DBCC DROPCLEANBUFFERS
DBCC FREEPROCCACHE
GO

/*****************************************************************************************************************
   STEP 08 D: Turn STATISTICS IO ON, and select some data from the UpdateStatistics table
*****************************************************************************************************************/
SET STATISTICS IO ON

SELECT * FROM UpdateStatistics WHERE StatId BETWEEN 10 AND 1000 
                                 AND RandomId > 300
GO

SET STATISTICS IO OFF

Configure the SQL Profiler

To use the SQL Server Profiler to help back my case, I connected to my SQL Server instance and selected only one event:

Event Group: Errors & Warnings

Event Class: Missing Column Statistics

[Screenshot: Profiler trace definition with the Missing Column Statistics event selected]

The Result

With the SQL Profiler running, I turned on the Actual Execution Plan in SSMS (by pressing Ctrl+M) and then executed the test query. The result was as shown below:

  • The Execution Plan: [Screenshot]
  • Statistics Output: [Screenshot]
  • Profiler Output: [Screenshot]

Confirm statistics updates

/*****************************************************************************************************************
   STEP 09: See if the statistics on the tables in tempdb were updated
Script source: http://blog.sqlauthority.com/2010/01/25/sql-server-find-statistics-update-date-update-statistics/
             : by Pinal Dave
*****************************************************************************************************************/
USE tempdb
GO
SELECT name AS index_name,
STATS_DATE(OBJECT_ID, index_id) AS StatsUpdated
FROM sys.indexes
WHERE OBJECT_ID = OBJECT_ID('DontUpdateStatistics') OR
      OBJECT_ID = OBJECT_ID('UpdateStatistics')
GO

As can be seen, the statistics for Table #1 (“DontUpdateStatistics”) are now updated. The statistics for Table #2 (“UpdateStatistics”) are unchanged:

[Screenshot: StatsUpdated now populated for the DontUpdateStatistics indexes]

Post-Mortem

First of all, I would like to draw attention to a line in the Profiler’s description of the “Missing Column Statistics” event class. The line states: “You will not see this event produced unless the option to auto-compute statistics is turned off”. For tempdb, auto-compute statistics is ON, so that is not the root cause of this issue.

[Screenshot: description of the Missing Column Statistics event class]

If this is not the issue, then what is? Summarizing the results, I can derive the following conclusions:

  • Both queries appear to have executed with equal performance impact – with a 50% split across them according to the execution plan
  • However, looking at the statistics IO output, I can see that:
    • Read-Ahead Reads for table #1 (“DontUpdateStatistics”) = 16
    • Read-Ahead Reads for table #2 (“UpdateStatistics”) = 4
    • The query against Table #1 (“DontUpdateStatistics”) requires 4 times as many read-ahead reads as the query against Table #2
  • The SQL Profiler warns me that column statistics are not available, and therefore, a sub-optimal plan can be chosen by the optimizer

Revisiting Books On Line, (http://msdn.microsoft.com/en-us/library/ms190397.aspx), the section: “Determining When to Create Statistics”, clearly mentions that: The query optimizer creates statistics for indexes on tables or views when the index is created.

More precisely, missing statistics are created when a SELECT is performed on the columns for which they were found to be missing.

In our case, we created the indexes on Table #2 (“UpdateStatistics”) after the data was inserted. Because creating an index involves reading the data in the table, statistics are built as part of the index creation. For Table #1 (“DontUpdateStatistics”), though, the indexes were created before inserting the data. When query #1 was executed, the statistics were generated first and only then did optimization happen – which makes the query a little slower, as it spends more time on read-ahead reads.
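
A hedged sketch to observe this, assuming the two test tables above are still around: sys.stats lists every statistics object on a table, when it was last updated, and whether the optimizer auto-created it (auto-created column statistics get names starting with _WA_Sys_):

USE tempdb
GO
SELECT OBJECT_NAME(s.object_id) AS TableName,
       s.name AS StatisticsName,
       s.auto_created AS WasAutoCreated,
       STATS_DATE(s.object_id, s.stats_id) AS StatsUpdated
FROM sys.stats AS s
WHERE s.object_id IN (OBJECT_ID('DontUpdateStatistics'),
                      OBJECT_ID('UpdateStatistics'));
GO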

If we look up STATISTICS IO in Books On Line (http://msdn.microsoft.com/en-us/library/ms184361.aspx), we can see that the read-ahead read count gives us the number of pages placed into the cache. In our case, all pages of Table #1 have to be placed into the cache for the statistics update.

Conclusion

It’s been a rather long post with a simple conclusion: think carefully about when index creation should be performed, because the sequence matters from both a performance and an I/O perspective.

Until we meet next time,

Be courteous. Drive responsibly.