SQL Server – Profiler – Part 3 – Trace execution options – Save trace to file or a table, Auto-scroll and keyboard shortcuts


Based on popular demand from my colleagues and you, the kind reader, I am currently writing a series of posts on the SQL Server Profiler. The first part was an introduction attempting to answer the question – What is a SQL Trace? Permissions, space requirements and launching the Profiler. The second part dealt with Profiler Templates, Template Types & creating custom templates. Today, we will attempt to answer the following questions:

  1. How to “execute” a basic Profiler trace?
  2. Which are some of the usability features that may be of use during trace execution?
  3. Can the Profiler be manipulated via the keyboard?

Creating a basic Profiler trace

To execute a Profiler trace, the first step, obviously, is to launch the Profiler. This can be done by following the steps in my first post – What is a SQL Trace? Permissions, space requirements and launching the Profiler.

Next, go to File –> New Trace; once authentication succeeds, the “Trace Properties” window is launched.

image

General Properties

Trace Information (# 1 and 2)

You can provide a name to the Profiler trace for easier future reference. You can also choose the template type to suit the exercise. For this test though, let’s choose the custom template created in my previous post – Profiler Templates, Template Types & creating custom templates.

Note that depending upon the connection created during authentication, the system shows the Trace Provider name, Provider Type and Version.

Saving the trace output (#3)

Normally, a Profiler trace is shown to the user in the grid. However, depending upon the number of events and data columns being captured, there is a performance overhead associated with this method. It is often better to write the trace data to a trace file instead.

When saving to a file, the user can set a maximum file size to ensure that the file does not grow out of control. When “Enable file rollover” is checked and the maximum file size is reached, the existing file is closed and a new trace file is created. If “Enable file rollover” is unchecked, the Profiler stops capturing events when the maximum file size is reached.

As an alternative to saving trace data to a file, you may also dump the data to a table on any Microsoft SQL Server instance. This can be achieved by checking the “Save to Table” checkbox. Also, just as file sizes can be limited, the number of rows in the trace table can be capped by specifying the maximum row count in “Set maximum rows (in thousands):”.

(Tip: It is advisable to store the trace data to another instance of Microsoft SQL Server to avoid interference with the activity being monitored.)
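As a side note, a trace saved to a file can always be pulled into a table later using the built-in sys.fn_trace_gettable function. A small sketch – the file path and the target table name below are assumptions, not values from this post:

```sql
-- Import a saved trace file into a new table for offline analysis;
-- DEFAULT for the second argument also reads any rollover files.
SELECT *
INTO   dbo.MyTraceData
FROM   sys.fn_trace_gettable(N'C:\Traces\MyTrace.trc', DEFAULT);
```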

The “Server processes trace data” option must be used with caution. It causes the server running the trace, rather than the client application, to process the trace data. The drawback is that server performance may be affected.

Time-bound tracing (#4)

To have SQL Server Profiler automatically stop tracing after a certain time, check “Enable trace stop time” and specify the required stop date and time.
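For reference, both the file-size/rollover behaviour and the stop time have server-side equivalents in the sp_trace_create system procedure. A minimal sketch – the file path, size and stop time below are assumed example values:

```sql
-- Create a server-side trace: @options = 2 enables TRACE_FILE_ROLLOVER,
-- @maxfilesize is in MB, and the trace stops itself at @stoptime.
DECLARE @TraceID INT;
EXEC sp_trace_create
     @traceid     = @TraceID OUTPUT,
     @options     = 2,                     -- TRACE_FILE_ROLLOVER
     @tracefile   = N'C:\Traces\MyTrace',  -- .trc is appended automatically
     @maxfilesize = 50,                    -- roll over after 50 MB
     @stoptime    = '2012-01-31 18:00';    -- equivalent of "Enable trace stop time"
SELECT @TraceID AS TraceID;                -- keep this id to add events and start the trace
```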

Event Selection

image

This page allows you to do the following:

  1. Review the Events & data columns to capture
  2. Add or Remove the Events to capture
  3. Add or Remove data columns to capture
  4. Apply one or more filters to one or more data columns
  5. Specify data columns to Group the results on and also specify the order in which data columns should be shown to the user
  6. See a brief description of the selected event class (changes when the event class selection changes)
  7. See a brief description of the data column (changes as the user hovers over the data column)

A step-by-step guide on customizing the template using the controls specified by #2, 3, 4 and 5 is available in my previous post – Profiler Templates, Template Types & creating custom templates.
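Incidentally, the column filters defined in this window (#4) have a server-side equivalent in the sp_trace_setfilter system procedure. A minimal sketch, assuming an already-created trace with id 2 and an example database name (column id 35 is DatabaseName, logical operator 0 is AND, comparison operator 6 is LIKE):

```sql
-- Restrict the trace to events coming from a single database:
EXEC sp_trace_setfilter
     @traceid             = 2,      -- id returned by sp_trace_create (assumption)
     @columnid            = 35,     -- DatabaseName data column
     @logical_operator    = 0,      -- AND
     @comparison_operator = 6,      -- LIKE
     @value               = N'AdventureWorks';
```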

Executing a trace

Once the Profiler trace is defined, one just needs to click the “Run” button in the “Trace Properties” window. The SQL Server Profiler will then start showing events to the user, depending upon the chosen template.

image

  1. Basic controls – start, pause & stop
  2. Launch SQL Server Management Studio
  3. Launch the Performance Monitor
  4. Clear the trace data grid
  5. Review the trace properties (The “Trace Properties” window opens, but in read-only mode if the trace is running or is paused)
  6. Find a string within any of the data columns

image

Auto-scroll – a nice usability feature

Most of the usability features (launching SSMS or PerfMon, and the flexibility of the “Find” window) are quite straightforward. However, there is one feature that I like the most. Look at the toolbar on the Profiler, and note the icon with the grid and a little blue down arrow:

image

This icon is the “Auto-scroll” button. When selected, the grid automatically scrolls to the most recently collected row as new data comes in. When not selected, the grid needs to be scrolled manually.

Typically, if I am trying to capture and analyze a trace for a particular database and host and want to see things as they happen, I prefer to enable “Auto-scroll”. If I am saving data to a file or a table, I will most probably be looking at the data later on, and would therefore turn “Auto-scroll” off.

Can the Profiler be Manipulated via the Keyboard?

Of course, it can! Here’s where you can find a list of all keyboard shortcuts – http://msdn.microsoft.com/en-us/library/ms174164.aspx

In my next post…

SQL Server 2012 is just around the corner, and most community members are probably evaluating it already. Those who are not yet ready, but will ultimately move to SQL Server 2012, are most likely moving to SQL Server 2008 R2 first. We will see how to use the Profiler to identify which objects are in most dire need of replacement when certifying against a new release of SQL Server.

We are approaching the end of the series, and will therefore also start looking at other scenarios where the Profiler can be used.


Until we meet next time,

Be courteous. Drive responsibly.

Subscribe to my posts via E-mail: Subscribe here | Read my posts via your favourite RSS reader: Click Here!

SQL Server – Profiler – Part 2 – Profiler Templates, Template Types and Creating Customized Templates


Based on popular demand from my colleagues and you, the kind reader, I am currently writing a series of posts on the SQL Server Profiler. You can read the first part on What is a SQL Trace? Permissions, space requirements and launching the Profiler. Today, we will attempt to answer the following questions:

  1. What is a Profiler Template?
  2. What are the various types of Templates available within the SQL Server profiler?
  3. How to create customized templates?

Profiler Templates

We talked yesterday about SQL Traces and their components. SQL Server Profiler ships with templates that define the event classes and data columns to include in traces. It is important to understand that when we execute the SQL Server Profiler, we are capturing a SQL Trace; the template gives body to the trace, i.e. it defines the events and data columns captured in the trace, but the template itself is never “executed”.

Template Types

SQL Server Profiler offers predefined trace templates that allow you to easily configure the event classes that you will most likely need for specific traces. To view the list of available templates, click on the “New Trace” button on the toolbar, and then expand the “Use the template” drop-down.

image

NOTE: If the version of your SQL Server Profiler differs from that of the SQL Server instance you connect to, the template drop-down may be blank, and you may need to use a customized template.

Below is a brief description of the various template types available in the SQL Server Profiler. To know more about the Event Classes associated with each, please refer to the Books On Line documentation referenced in the “References” section of this post. The ones I have used most in my career to date are: TSQL_Duration, TSQL_Locks and TSQL_SPs.

Template Name – Description

  • Blank – A blank template that can be used as the starting point for template customization.
  • SP_Counts – Captures the event raised whenever a stored procedure starts execution.
  • Standard – The default trace. This is the most generic and commonly used trace, and captures: 1. connections (connects and disconnects), 2. completion of Remote Procedure Calls, and 3. the start and completion of T-SQL statements and batches.
  • TSQL – Captures all client-submitted T-SQL statements and the time they were issued. Use to debug client applications.
  • TSQL_Duration – Captures all client-submitted T-SQL statements, their execution time (in milliseconds), and groups them by duration. Use to identify slow queries.
  • TSQL_Grouped – Captures all client-submitted T-SQL statements and the time they were issued, grouped by the user or client that submitted the statement. Use to investigate queries from a particular client or user.
  • TSQL_Locks – Captures the client-submitted T-SQL statements and associated exceptional lock information. Use to troubleshoot deadlocks, lock time-outs, and lock escalation events.
  • TSQL_Replay – Captures the detailed information about Transact-SQL statements that is required if the trace will be replayed. Use to perform iterative tuning, such as benchmark testing.
  • TSQL_SPs – Although not the default, this is perhaps the most useful trace for most systems (provided it is not to be replayed). Captures detailed information about all executing stored procedures. Use to analyze the component steps of stored procedures; add the SP:Recompile event if you suspect that procedures are being recompiled.
  • Tuning – Captures information about stored procedures and Transact-SQL batch execution. Use to produce trace output that Database Engine Tuning Advisor can use as a workload to tune databases.

Customizing a Profiler trace

To get the best of all worlds, one might want to create a custom Profiler template. To create a custom template, follow these steps:

  1. Go to File –> Templates –> New Template image
  2. Select the “Server Type” image
  3. Supply a name for the template, and also choose whether the template should be based on an existing template or not image
  4. Go to the “Events” tab and add/remove the events captured, using the “Show All Events” checkbox. NOTE that I have removed a few default events and have added 2 new ones. image
  5. Using the “Show All Columns” checkbox, I added one additional data column image
  6. Go to “Column Filters” to apply filters on the data columns. NOTE: If multiple criteria are defined, the “AND” operator is used. image
  7. Go to “Organize Columns” to define the sequence in which columns should appear, and also to group by any particular column image
  8. Once all adjustments are done, click on “Save” to save the template image
  9. The template will then appear in the “Use the template” drop-down when starting a new trace for the given server type image

Setting the Default Trace

Let’s assume that we need to set the template we just created as the default template. Here is the very simple sequence of steps to follow:

  1. Navigate to File –> Templates –> Edit Template image
  2. Select the Server Type and Template Name to edit image
  3. Check “Use as a default template for selected server type”. NOTE: You can also use this interface to delete a trace template. image

Storage & Exporting/Importing templates

Template customization is machine and user specific, which means that each customized template is stored on disk as a physical file. A simple file-system search reveals the location of the template file:

For SQL 2012 (code named: “Denali”), the template is available at: C:\Users\<user name>\AppData\Roaming\Microsoft\SQL Profiler\11.0\Templates\Microsoft SQL Server\110

So, here’s a tip – the next time you back up your system, also back up your templates so that you don’t have to set them up again! You can export or import your existing templates by going to File –> Templates –> Export Template/Import Template:

image

In my next post…

In the next post, I will be writing about the various options available to execute the trace. Finally, towards the end of the week, we will look at other uses of the Profiler (specifically, trace replay).

References

Please find below some of the reference Books On Line documentation that I found when researching for this post:

Until we meet next time,

Be courteous. Drive responsibly.


SQL Server – Profiler – Part 1 – What is a SQL Trace? Permissions, space requirements and launching the Profiler


Microsoft SQL Server comes with a host of development and administrator features that warrant a separate IDE – the SQL Server Management Studio (SSMS). While the SSMS allows a developer or an administrator to write and debug T-SQL code and manage and maintain SQL Server instance(s), there is a requirement for a diagnostic tool that can be used for:

  1. Troubleshooting
  2. Debugging T-SQL code
  3. Performance monitoring & analysis
  4. Auditing and reviewing activities occurring against an instance of Microsoft SQL Server
  5. Correlating performance counters for advanced troubleshooting

SQL Trace

All of this is achieved by “tracing” the activities of the SQL Server Database Engine. Such a tracing session is called a “SQL Trace”. A SQL Trace is most commonly described using the following terms, which we will keep using as we talk more about the SQL Server Profiler:

  • Trace = A collection of events and data returned by the SQL Server Database Engine
  • Event = The occurrence of an action within the SQL Server Database Engine
  • Event Class = A type of event that can be traced; the event class contains all of the data columns that can be reported by an event
  • Event Category = A group of related event classes
  • Data Column = An attribute of an event
  • Filter = Criteria that limit the events collected in a trace
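These terms map directly onto the server-side trace procedures that the Profiler drives behind the scenes. As a rough sketch, adding one event class and one data column to an existing trace and then starting it might look like this (the trace id is an assumption; event id 10 is RPC:Completed, column id 1 is TextData):

```sql
-- Add the RPC:Completed event class with the TextData column to trace 2,
-- then set the trace status to 1 (start):
DECLARE @on BIT = 1;
EXEC sp_trace_setevent  @traceid = 2, @eventid = 10, @columnid = 1, @on = @on;
EXEC sp_trace_setstatus @traceid = 2, @status = 1;
```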

Permissions

Any user who wants to run a SQL Trace using SQL Server Profiler must have the ALTER TRACE permission on the SQL Server instance. Talking about permissions, it is important to remember that such users will be able to see sensitive information, and therefore such access should be restricted to members of the db_owner fixed database role or of the sysadmin fixed server role.
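For example, granting the permission to a specific login could look like this (the login name below is an assumption):

```sql
-- ALTER TRACE is a server-level permission, so grant it from master:
USE master;
GO
GRANT ALTER TRACE TO [DOMAIN\TraceUser];
```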

Access to trace data – Minimum disk space requirements

For efficient access to the trace data, SQL Server Profiler uses the directory specified by the TEMP environment variable to store the trace data, and requires a minimum of 10 MB of free space there. If the free space drops below 10 MB, all SQL Server Profiler functions stop.

Launching the SQL Server Profiler

Now that we have the basics, let’s see the 3 ways by which we can invoke SQL Server Profiler.

Via Command Prompt

This has got to be the most used method to launch the SQL Server Profiler. Go to the “Run” prompt or launch the command prompt and type one of the following, based on the version of SQL Server Profiler you want to launch:

  • profiler90 = Profiler for Microsoft SQL Server 2005
  • profiler = Profiler for Microsoft SQL Server 2008/2008 R2/SQL 2012 (code named: “Denali”)

Via SQL Server Management Studio (SSMS)

There are 2 ways in which the Profiler is launched from the SSMS:

Via the toolbar

  1. Go to Tools –> SQL Server Profiler

NOTE: No SPID filtering is performed.

image

Via the Query editor window

  1. In the Query editor window, right-click
  2. Choose “Trace Query in SQL Server Profiler” (As an alternate to the above steps, you can use the keyboard short-cut: Ctrl+Alt+P)

NOTE: The SPID being traced by the Profiler is automatically set to the SPID of the Query editor window.

image

Via Activity Monitor

  1. Launch the Activity Monitor (refer my post: Activity Monitor – Underappreciated Features of Microsoft SQL Server)
  2. Under the “Processes” pane, select the activity of interest
  3. Right-click and choose “Trace Process in SQL Server Profiler”

NOTE: The SPID being traced by the Profiler is automatically set to the SPID selected in the Processes pane.

image

In my next post:

In my next post, I will be looking at Profiler Templates, Template types and how to create customized templates.

References:

I have tried to provide the reader with the essentials to get started with SQL Server Profiler. The reader may refer to the following Books On Line pages for advanced reading.

Until we meet next time,

Be courteous. Drive responsibly.


SQL Server – Administration – Cleanup Database Backup History


Earlier this week, I had explored why the sequence of index creation is important in a rather long post on performance tuning. You can read the post here: SQL Server – Performance – Best Practice – Create Index Before or After a data insert?

Today’s post, therefore, will be a nice, small one – something that you might want to convert into a scheduled “spring-cleaning” task.

Viewing Backup Information

Microsoft SQL Server uses MSDB to maintain a record for each completed backup. In production environments, it is generally required that all backup history is maintained. However, in Development and Quality Assurance environments, maintaining database backup history is generally not that important – after all, databases change in these environments often.

The backup history is maintained in the following tables within the MSDB:

  • backupfile
  • backupfilegroup
  • backupmediafamily
  • backupmediaset
  • backupset
  • restorefile
  • restorefilegroup
  • restorehistory

To learn more about viewing backup history information, refer the Books On Line page at: http://technet.microsoft.com/en-us/library/ms188653.aspx
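As a quick illustration, a query along the following lines lists the backups recorded in these tables (the columns chosen are just an example):

```sql
-- Most recent backups per database, with the backup device used:
SELECT   bs.database_name,
         bs.type,                       -- D = full, I = differential, L = log
         bs.backup_finish_date,
         bmf.physical_device_name
FROM     msdb.dbo.backupset           AS bs
JOIN     msdb.dbo.backupmediafamily   AS bmf
         ON bmf.media_set_id = bs.media_set_id
ORDER BY bs.backup_finish_date DESC;
```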

Cleaning up the backup history

In order to clean up the backup history, we can use one of the following two system stored procedures (both live in the msdb database):

  • sp_delete_backuphistory – deletes backup and restore history older than a specified date
  • sp_delete_database_backuphistory – deletes all backup and restore history for a specified database

Some Key points to remember:

  • Running these system stored procedures requires membership in the sysadmin fixed server role
  • Running these procedures may be performance intensive
  • Please do not use these without guidance from your DBA!
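With those caveats in mind, typical invocations of the two procedures look like this (the cutoff and the database name below are example values, not from any real environment):

```sql
-- Delete all backup/restore history older than 90 days:
DECLARE @Cutoff DATETIME = DATEADD(DAY, -90, GETDATE());
EXEC msdb.dbo.sp_delete_backuphistory @oldest_date = @Cutoff;

-- Or remove the entire history for a single (non-production) database:
EXEC msdb.dbo.sp_delete_database_backuphistory @database_name = N'SandboxDB';
```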

Until we meet next time,

Be courteous. Drive responsibly.


SQL Server – Performance – Best Practice – Create Index Before or After a data insert?


Being part database developer myself, I am sure that what I am about to document today will be useful to almost everyone who has ever written a stored procedure against indexed tables for SSRS reporting or massive batch processing. What I will try to do today is find an answer to the dilemma – should indexes be created before or after inserting data into the table?

The moment of index creation impacts table statistics, and should therefore impact performance. What got me thinking about this test is the following line from Books On Line (http://msdn.microsoft.com/en-us/library/ms190397.aspx), section “Determining When to Create Statistics”, which quite clearly mentions that: “The query optimizer creates statistics for indexes on tables or views when the index is created.”

The approach that I will be taking today is:

  1. Create two identical tables in the TempDB
    • On one table, indexes will be created before the insert of test data is done
    • On the other, I will first insert some data, and then create indexes
  2. Later, I will fire simple SELECT queries against both tables, while monitoring statistics usage by:
    • setting SET STATISTICS IO ON and,
    • using the SQL Profiler

WARNING: The scripts provided in this post are provided “as-is”, without warranty, and for learning purposes only. Using the DBCC scripts from this post on your QA or production environments can have serious side effects.

Creating the test environment

In order to create the test environment, I will first create a test table, create the necessary indexes and finally add some test data to it.

USE tempdb
GO

/*****************************************************************************************************************
   STEP 01: Create a table, complete with a primary key (to create a clustered index), and a non-clustered index
*****************************************************************************************************************/
USE tempdb
GO
CREATE TABLE DontUpdateStatistics
( StatId INT IDENTITY(1,1),
  RandomId INT DEFAULT (RAND()*1000),
  StatName VARCHAR(10) DEFAULT 'DEFAULT',
  CONSTRAINT PK_DontUpdateStatistics_StatId PRIMARY KEY CLUSTERED 
  (
    StatId
  )
)
GO

CREATE NONCLUSTERED INDEX NC_DontUpdateStatistics_RandomId ON DontUpdateStatistics(RandomId)
GO

/*****************************************************************************************************************
   STEP 03: Insert some test data into the table
*****************************************************************************************************************/
USE tempdb
GO
INSERT INTO DontUpdateStatistics DEFAULT VALUES
GO 100000

Next, I will create another table. This time, however, the indexes will be created after the test data is inserted.

USE tempdb
GO
/*****************************************************************************************************************
   STEP 04: Create a table - this time, do not create any indexes
*****************************************************************************************************************/
CREATE TABLE UpdateStatistics
( StatId INT IDENTITY(1,1),
  RandomId INT DEFAULT (RAND()*1000),
  StatName VARCHAR(10) DEFAULT 'DEFAULT'
)
GO

/*****************************************************************************************************************
   STEP 05: Insert some test data into the table
*****************************************************************************************************************/
USE tempdb
GO
INSERT INTO UpdateStatistics DEFAULT VALUES
GO 100000

/*****************************************************************************************************************
   STEP 06: Now create the primary key clustered index and the non-clustered index
*****************************************************************************************************************/
ALTER TABLE UpdateStatistics
    ADD CONSTRAINT PK_UpdateStatistics_StatId PRIMARY KEY CLUSTERED 
    (
        StatId
    )
GO
CREATE NONCLUSTERED INDEX NC_UpdateStatistics_RandomId ON UpdateStatistics(RandomId)
GO

Run the Test

The first thing I will do is to check if statistics for both tables are updated or not.

Confirm statistics updates

/*****************************************************************************************************************
   STEP 07: See if the statistics on the table in tempDB were updated
Script source: http://blog.sqlauthority.com/2010/01/25/sql-server-find-statistics-update-date-update-statistics/
             : by Pinal Dave
*****************************************************************************************************************/
USE tempdb
GO
SELECT name AS index_name,
STATS_DATE(OBJECT_ID, index_id) AS StatsUpdated
FROM sys.indexes
WHERE OBJECT_ID = OBJECT_ID('DontUpdateStatistics') OR
      OBJECT_ID = OBJECT_ID('UpdateStatistics')
GO

As can be seen, the statistics for Table #1 (“DontUpdateStatistics”) did not update:

image

The Test Queries

Next, allow me to present a set of SELECT queries that would fetch data from the tables of interest.

USE tempdb
GO
/*****************************************************************************************************************
   STEP 08 A: Clear out the Buffers and the procedure cache.
            : Make sure to run a CHECKPOINT!
Reference source: 
http://beyondrelational.com/blogs/nakul/archive/2011/10/03/dbcc-dropcleanbuffers-needs-to-be-preceded-by-a-checkpoint-to-remove-all-buffers.aspx
*****************************************************************************************************************/
CHECKPOINT
GO
DBCC DROPCLEANBUFFERS
DBCC FREEPROCCACHE

/*****************************************************************************************************************
   STEP 08 B: Turn STATISTICS IO ON, and select some data from the DontUpdateStatistics table
*****************************************************************************************************************/
SET STATISTICS IO ON

SELECT * FROM DontUpdateStatistics WHERE StatId BETWEEN 10 AND 1000 
                                     AND RandomId > 300
GO

SET STATISTICS IO OFF

/*****************************************************************************************************************
   STEP 08 C: Clear out the Buffers and the procedure cache.
            : Make sure to run a CHECKPOINT!
Reference source: 
http://beyondrelational.com/blogs/nakul/archive/2011/10/03/dbcc-dropcleanbuffers-needs-to-be-preceded-by-a-checkpoint-to-remove-all-buffers.aspx
*****************************************************************************************************************/
CHECKPOINT
GO
DBCC DROPCLEANBUFFERS
DBCC FREEPROCCACHE
GO

/*****************************************************************************************************************
   STEP 08 D: Turn STATISTICS IO ON, and select some data from the UpdateStatistics table
*****************************************************************************************************************/
SET STATISTICS IO ON

SELECT * FROM UpdateStatistics WHERE StatId BETWEEN 10 AND 1000 
                                 AND RandomId > 300
GO

SET STATISTICS IO OFF

Configure the SQL Profiler

To use the SQL Server Profiler to help back my case, I connected to my SQL Server instance and selected only one event:

Event Group: Errors & Warnings

Event Class: Missing Column Statistics

image

The Result

With the SQL Profiler running, I turned on the Actual Execution Plan in SSMS (by pressing Ctrl+M) and then executed the test query. The result was as shown below:

The Execution Plan image
Statistics Output image
Profiler Output image

Confirm statistics updates

/*****************************************************************************************************************
   STEP 09: See if the statistics on the table in tempDB were updated
Script source: http://blog.sqlauthority.com/2010/01/25/sql-server-find-statistics-update-date-update-statistics/
             : by Pinal Dave
*****************************************************************************************************************/
USE tempdb
GO
SELECT name AS index_name,
STATS_DATE(OBJECT_ID, index_id) AS StatsUpdated
FROM sys.indexes
WHERE OBJECT_ID = OBJECT_ID('DontUpdateStatistics') OR
      OBJECT_ID = OBJECT_ID('UpdateStatistics')
GO

As can be seen, the statistics for Table #1 (“DontUpdateStatistics”) are now updated. The statistics for Table #2 (“UpdateStatistics”) are unchanged:

image

Post-Mortem

First of all, I would like to draw attention to a line in the Profiler’s description of the “Missing Column Statistics” event class. The line states – “You will not see this event produced unless the option to auto-compute statistics is turned off”. For TempDB, auto-compute statistics is ON, so that is not the root cause of this issue.

image

If this is not the issue, then what is? Summarizing the results, I can derive the following conclusions:

  • Both queries appear to have executed with equal performance impact – a 50% split between them, according to the execution plan
  • However, looking at the STATISTICS IO output, I can see that:
    • Read-ahead reads for table #1 (“DontUpdateStatistics”) = 16
    • Read-ahead reads for table #2 (“UpdateStatistics”) = 4
    • The query against table #1 (“DontUpdateStatistics”) therefore requires 4 times more read-ahead reads than the query against table #2
  • The SQL Profiler warns me that column statistics are not available, and that a sub-optimal plan may therefore be chosen by the optimizer

Revisiting Books On Line (http://msdn.microsoft.com/en-us/library/ms190397.aspx), the section “Determining When to Create Statistics” clearly mentions that: “The query optimizer creates statistics for indexes on tables or views when the index is created.”

More precisely, missing statistics are created when a SELECT is performed on the columns for which statistics were found to be missing.
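One way to watch this happen is to inspect the statistics objects on table #1 before and after running the test query; a quick check (auto-created column statistics are given generated names starting with _WA_Sys_):

```sql
-- List all statistics on table #1, including any auto-created column statistics,
-- along with the date each was last updated:
USE tempdb
GO
SELECT s.name,
       s.auto_created,
       STATS_DATE(s.object_id, s.stats_id) AS StatsUpdated
FROM   sys.stats AS s
WHERE  s.object_id = OBJECT_ID('DontUpdateStatistics')
GO
```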

In our case, we created the indexes on table #2 (“UpdateStatistics”) after the data was inserted. Because creating an index involves reading the data in the table, a statistics update was performed at that point. For table #1 (“DontUpdateStatistics”), though, the indexes were created before inserting the data. When query #1 was executed, the statistics were generated first and only then did optimization happen – which makes the query a little slower, as it spends extra time on read-ahead reads.

If we look up on Books On Line for STATISTICS IO (http://msdn.microsoft.com/en-us/library/ms184361.aspx), we can see that the read-ahead read count gives us the number of pages placed into the cache. In our case, all pages for table #1 have to be placed in the cache for the statistics update.

Conclusion

It’s been a rather long post, with a simple conclusion: think about when indexes should be created, because the sequence of data loading and index creation matters from both a performance and an IO perspective.

Until we meet next time,

Be courteous. Drive responsibly.