Tag Archives: #SQLServer

All about Microsoft SQL Server

#0141 – SQL Server – Profiler – Part 08 – Run a SQL Trace via T-SQL code (Part B)


In my previous post in this tutorial (here), I introduced the system stored procedures that would allow us to create a SQL Trace, define events and data columns to capture and start the trace using nothing but pure T-SQL code. Today, we will see the following:

  1. How to use the graphical Profiler to generate a T-SQL script for a SQL trace
  2. Customizing this trace script and running a SQL trace
  3. Open the captured trace using the graphical Profiler
  4. Explore the option to schedule traces

The exercises in this chapter are very interesting and essential to getting a firm grasp on SQL Traces.

How to use the graphical Profiler to generate a T-SQL script for a SQL trace

As I was writing my previous post, I realized that manually identifying and writing down the integer representations of the events and data columns to capture is a labour intensive activity. I generally do not write the T-SQL scripts myself. I leverage the graphical Profiler to generate the script for me. Here’s how I generate the script using the Profiler:

  1. Connect to any instance of SQL Server and identify the events and data columns to be captured (because we will only be using it to generate the T-SQL script for the trace, it does not matter what server we connect to)
  2. Identify any filters that need to be applied
  3. Start and quickly stop the trace
  4. Go to File –> Export –> Script Trace Definition and choose the version for which the script is to be generated
  5. Save the script to a location of your choice

Here is a snapshot of the T-SQL script definition that was generated (actual code has not been provided intentionally):

Generated T-SQL script definition

As you can see, we can find usages of the following system stored procedures within the script:

  1. sp_trace_create
  2. sp_trace_setevent
  3. sp_trace_setfilter
  4. sp_trace_setstatus

Customizing this trace script and running a SQL trace

The generated T-SQL trace script file is not ready to be consumed directly. Here are a few things that we need to change before we can go ahead and start tracing:

  1. Provide a path to store the trace files. As mentioned in the trace file, the text “InsertFileNameHere” needs to be replaced by the actual path and file name (without the .trc extension, which the SQL Server will automatically add for us)
  2. Check the value of @maxfilesize
    • For this demo, I will reduce the max. file size to 1MB for each file (using such a low value in your production use might result in performance issues)
  3. Add (if required) the @stoptime parameter (by default, this is set to NULL, which would cause the trace to continue until the trace is manually stopped or until the server shuts down)

The changes that I have made to my trace are therefore:

-- (Excerpt of the changes; @rc, @TraceID and @maxfilesize are declared earlier in the generated script)
declare @stoptime datetime
set @maxfilesize = 1
set @stoptime = '2012-03-07 19:40:00.000'

-- Please replace the text InsertFileNameHere, with an appropriate
-- filename prefixed by a path, e.g., c:\MyFolder\MyTrace. The .trc extension
-- will be appended to the filename automatically. If you are writing from
-- remote server to local drive, please use UNC path and make sure server has
-- write access to your network share

exec @rc = sp_trace_create @TraceID output, 2, N'C:\BRTrace\BRT', @maxfilesize, @stoptime

To start the trace, we simply need to run the entire script by hitting the “Execute” button on the Query Editor bar in SSMS. To generate a load for the trace to capture, I would simply run the script I used in one of my previous posts.

Open the captured trace using the graphical Profiler

Once the trace completes at the specified time, you can see that the trace files are now available for analysis at the mentioned path. These files are regular .trc files and can be opened using the SQL Profiler for offline analysis, thereby leveraging the power of graphical tools for analysis and pure T-SQL for data capture with minimum overhead.

Trace File Rollover
Profiler moving automatically through the rollover files

Explore the option to schedule traces

Traces are generally used for troubleshooting performance issues on a SQL Server instance. Performance issues may crop up as part of a nightly job or an ETL (Extract-Transform-Load) operation during a BI process, where it may be desirable to schedule traces for analysis the next day or so.

We have two options for scheduling traces:

  • Specify a stop-time (refer above on how to specify a stop-time)
  • Schedule traces using the SQL Server Agent, which would simply involve:
    • Starting the trace using the SQL Server Agent jobs
    • Specifying a stop time in the T-SQL trace definition and using it in the system stored procedure sp_trace_create
    • Using the sp_trace_setstatus system stored procedure to stop the trace manually (see the sketch after this list)
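As an illustration of that last option, here is a minimal sketch of stopping a server-side trace manually; the trace ID used (2) is an assumption and should be replaced with the ID returned by sp_trace_create (or looked up via fn_trace_getinfo):

-- List the traces currently defined on the server and their properties
SELECT * FROM fn_trace_getinfo(DEFAULT)

-- Stop the data capture (status = 0), then close the trace and delete its definition (status = 2)
-- Trace ID 2 is assumed here; use the ID returned when the trace was created
EXEC sp_trace_setstatus @traceid = 2, @status = 0
EXEC sp_trace_setstatus @traceid = 2, @status = 2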

Exercises:

  1. Convert your most used Profiler trace template into a T-SQL script definition by hand
  2. Convert your most used Profiler trace template into a T-SQL script definition using the Profiler. Check for differences between the one by hand and the one generated by the Profiler. Why are they different?
  3. Customize the T-SQL script definition generated by either of the approaches to have:
    • Max. file size of 7MB each
    • Rollover enabled, and shut-down of the server in case of an error
    • A limit of 5 roll-over files
  4. Run the customized template and capture the trace for a test workload
  5. Analyze the workload using the Profiler
  6. Schedule a trace using the SQL Server Agent and the T-SQL script definition files
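For exercise 3, the heart of the customization is the sp_trace_create call. The following is only a sketch of one possible answer, assuming an @options value of 6 (file rollover = 2 plus shut down on error = 4) and leaving the file name placeholder untouched:

declare @rc int, @TraceID int, @maxfilesize bigint
set @maxfilesize = 7    -- 7MB per trace file
exec @rc = sp_trace_create @TraceID output,
                           6,                      -- 2 (rollover) + 4 (shut down the server on error)
                           N'InsertFileNameHere',  -- replace with a path and file name (no .trc extension)
                           @maxfilesize,
                           NULL,                   -- @stoptime: run until stopped manually
                           5                       -- @filecount: limit to 5 roll-over files
select @rc, @TraceID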


Until we meet next time,

Be courteous. Drive responsibly.


#0140 – SQL Server – Profiler – Part 07 – Run a SQL Trace via T-SQL code (Part A)


Those of you who have been following my posts will recollect that I ran a 6-part series of SQL Profiler tutorials some time ago. Based on the feedback received, I am now presenting a 4-part extension to the series. In this part, I will describe how a SQL Trace can be captured in an automated fashion, from the server side instead of the client side.

It is quite logical that using the graphical interface for the SQL Profiler will generate a performance overhead on the SQL Server. The primary reasons for this behaviour are:

  • All events and trace data need to be brought over to the tracing client application – this introduces an overhead on the network
  • Graphically rendering this data, while making room for new trace data flowing in from the network, introduces a further performance overhead

The ideal solution is to keep the trace as concise and precise as possible and to run it on the SQL Server itself.

T-SQL Support for SQL Traces

SQL Server comes with 5 system stored procedures that allow us to monitor performance and activity using T-SQL code on the server itself, eliminating the need for graphical tools (Profiler) for data capture and thereby improving overall performance. We will be looking at the 4 most commonly used system stored procedures that allow us to write a SQL Trace using T-SQL.

Creating the trace – sp_trace_create

The first step is, of course, to create a SQL Trace. Once the trace is created, we can go in and configure it, add events to monitor and columns to capture, and finally run the trace. The system stored procedure to create a new SQL Trace is “sp_trace_create”. It accepts the following parameters:

  1. @trace_id
    • This is the only OUTPUT parameter, which contains the trace_id assigned to the newly created SQL Trace
  2. @options
    • Allows the user to control whether the system should roll over to a new target file when the given maximum size is reached (TRACE_FILE_ROLLOVER = 2), or whether to shut down SQL Server in case a trace file cannot be written to (SHUTDOWN_ON_ERROR = 4); the option values can be added together
  3. @tracefile
    • This parameter specifies the local or UNC path of the file to which the trace output should be written
  4. @maxfilesize
    • Allows the user to set the maximum size to which a trace file will grow before a roll-over or an overwrite happens
  5. @stoptime
    • Specifies the date and time the trace collection will be stopped
  6. @filecount
    • Allows the user to control the maximum number of rollover files, after which an overwrite should happen

An example call to the stored procedure sp_trace_create is shown below:

DECLARE @rc INT
DECLARE @TraceID INT
DECLARE @maxfilesize BIGINT
SET @maxfilesize = 5
--Create the SQL Trace
EXEC @rc = sp_trace_create @TraceID output, 0, N'InsertFileNameHere', @maxfilesize, NULL
SELECT @rc, @TraceID

Words of caution:

  1. Avoid trace target file names that contain underscores (“_”). This is because the rollover mechanism versions files using the underscore (“_”)
    • Using underscores (“_”) in the file name will cause the rollover mechanism to fail
  2. This stored procedure only defines a trace. It does not start, stop or modify an existing trace
  3. The parameters of this stored procedure are strongly typed. The stored procedure will error out if called with an incorrect combination of input parameter data types
  4. If NULL is specified for @stoptime, the trace runs until it is manually stopped or until the server shuts down

Configuring the trace – sp_trace_setevent

Now that we have the trace defined, it is time for us to configure the trace by setting the events that need to be monitored and the columns that need to be captured. As is the case with the graphical tool (Profiler), the events/columns configurations can be done only on traces that are currently stopped.

The stored procedure sp_trace_setevent expects the following 4 parameters:

  1. @trace_id
    • The @trace_id returned by sp_trace_create
  2. @eventid
    • The integer representation of the particular event to capture
  3. @columnid
    • The integer representation of the particular data column to capture
  4. @on
    • Bit flag indicating whether to capture this event/column or not (corresponds to the check-box on the graphical tool (Profiler))

What this means is that if we are capturing 10 data columns for 1 event, we would need to write 10 sp_trace_setevent statements. Here’s an example:

-- Set the events
declare @on bit
set @on = 1
exec sp_trace_setevent @TraceID, 125, 1, @on
exec sp_trace_setevent @TraceID, 125, 3, @on
exec sp_trace_setevent @TraceID, 125, 8, @on
exec sp_trace_setevent @TraceID, 125, 12, @on
exec sp_trace_setevent @TraceID, 125, 14, @on
exec sp_trace_setevent @TraceID, 125, 22, @on
exec sp_trace_setevent @TraceID, 125, 26, @on
exec sp_trace_setevent @TraceID, 125, 34, @on
exec sp_trace_setevent @TraceID, 125, 35, @on

Per Books Online (http://msdn.microsoft.com/en-us/library/ms186265(SQL.110).aspx), the above statements mean that we are capturing:

  • Event ID = 125, i.e. “Deprecation Announcement”
  • Column ID = 1, i.e. “TextData”
  • Column ID = 3, i.e. “DatabaseID”
  • Column ID = 8, i.e. “HostName”
  • Column ID = 12, i.e. “SPID”
  • and so on..
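Rather than memorizing these integer values, the mappings can also be looked up from the catalog views sys.trace_events and sys.trace_columns; for example, a quick lookup for the IDs used above would be along these lines:

-- Translate the event and column IDs used in the trace definition into friendly names
SELECT trace_event_id, name
FROM sys.trace_events
WHERE trace_event_id = 125

SELECT trace_column_id, name, type_name
FROM sys.trace_columns
WHERE trace_column_id IN (1, 3, 8, 12, 14, 22, 26, 34, 35)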

Setting the filters – sp_trace_setfilter

Now that we have the events & columns for the trace defined, we may want to filter the trace to capture only the events generated by, let’s say, a specific host machine or against a specific database. To do so, we can use the sp_trace_setfilter system stored procedure, which accepts the following parameters:

  1. @traceid
    • The @trace_id returned by sp_trace_create
  2. @columnid
    • The ColumnId for the column that needs to be filtered
    • Use a NULL here to clear all filters for the specified trace
  3. @logical_operator
    • Specifies whether the “AND” or “OR” logical operations are applied with respect to other filters on the same trace
  4. @comparison_operator
    • Specifies an enumerated integer value that indicates the type of comparison to make
  5. @value
    • Specifies the value on which to filter

Words of caution:

  1. The data type of @value parameter must match the data type of the column to be filtered
  2. To apply two filters on a particular column (e.g. two filters on the application name column and one filter on the user name column), the two filters on the application name must be specified in sequence

This is demonstrated very clearly by the following example, which has been borrowed from Books Online:

--Notice that filters for column Id #10 ("Application Name") are applied together
--(logical operator 0 = AND; comparison operator 6 = LIKE and 0 = Equals; column Id #11 = "LoginName")
sp_trace_setfilter  1, 10, 0, 6, N'SQLT%'
sp_trace_setfilter  1, 10, 0, 6, N'MS%'
sp_trace_setfilter  1, 11, 0, 0, N'joe'

Setting the trace status – sp_trace_setstatus

This one is perhaps the simplest of all the system stored procedures for working with a SQL Trace. Now that a new trace has been created and events, columns and filters have been defined, it is time to start the trace. The parameters, therefore, are:

  1. @traceid
    • The @trace_id returned by sp_trace_create
  2. @status
    • Start or stop the data capture (0 = stop, 1 = start, 2 = close the trace and delete its definition from the server)

An example would be:

-- Set the trace status to start
exec sp_trace_setstatus @TraceID, 1
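Once the capture is complete, the same stored procedure winds the trace down; a minimal sketch (reusing the @TraceID from the earlier snippets) would be:

-- Stop the data capture
exec sp_trace_setstatus @TraceID, 0
-- Close the trace and delete its definition from the server
exec sp_trace_setstatus @TraceID, 2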

Following these 4 steps essentially completes defining, modifying and starting a SQL Trace. In the next part of the tutorial, we will see how the graphical tool (Profiler) helps us to generate the T-SQL code for a given SQL trace and how to run the T-SQL based SQL Trace.


Until we meet next time,

Be courteous. Drive responsibly.


#0139 – SQL Server – Productivity feature – customize keyboard shortcuts


Recently, I was working with a developer colleague of mine to troubleshoot an issue he had with one of his queries. A stored procedure he wrote was consuming data from another procedure and was not functioning as expected. I suspected the issue to be a missing change in the related stored procedure.

The easiest way to verify this is obviously to script out the object and check for the existence of the required code. Scripting out a programmability object (a stored procedure, function or a view) can be done in any one of the following ways:

  1. Script the object using the Object Explorer
  2. Query the catalog view – sys.sql_modules
  3. Use the sp_helptext system stored procedure

The recommended way is of course, to use the Object Explorer to script out the object. However, the fastest method for most practical purposes is using the system stored procedure sp_helptext.
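For reference, the catalog-view route (option 2 above) is just a query away; a minimal sketch against the same sample database would be:

USE AdventureWorks2008R2
GO
--Fetch the object definition from the catalog view
SELECT sm.definition
FROM sys.sql_modules AS sm
WHERE sm.object_id = OBJECT_ID('dbo.uspGetBillOfMaterials')
GO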

sp_helptext takes at least one parameter – the object name, with a normal usage similar to the one shown below:

USE AdventureWorks2008R2
GO
--Normal implementation
sp_helptext uspGetBillOfMaterials
GO

Scripted object using sp_helptext

However, there is another way, which is even faster than typing in sp_helptext – which my colleague was surprised to learn about. That method is to customize the keyboard settings in SSMS such that one of the various supported, customizable key combinations stands for executing the system stored procedure sp_helptext.

Customizing the SSMS Keyboard

In order to customize the keyboard settings for your SSMS instance, here are the simple steps that need to be followed:

  1. Navigate out to Tools –> Options
  2. Within the Options window, expand the “Environment” node and navigate to the “Keyboard” node
  3. Navigate to the “Query shortcuts” node by expanding the “Keyboard” node
  4. Notice that various key combinations and assigned stored procedures to be executed are listed in the “Query shortcuts” node
  5. Against the preferred key combination, enter the stored procedure that you would like to execute

Keyboard customization in SSMS

As you can see from the screenshot above, the following are the default key assignments that come with SSMS:

  • Alt + F1 = sp_help
  • Ctrl + 1 = sp_who
  • Ctrl + 2 = sp_lock

In my case, I added sp_helptext against the key combination Ctrl+F1.

Please restart SSMS after making changes to the keyboard configuration.

Using the customization

Once the keyboard combinations are assigned, there is no longer a need to type in sp_helptext. Simply selecting the object name and using the defined key combination (in my case, Ctrl+F1) executes the stored procedure and returns the results.

Using the keyboard customization in SSMS

I hope you liked today’s tip!


Until we meet next time,

Be courteous. Drive responsibly.


#0138 – SQL Server – Performance – Best Practice – Create Index Before or After a data insert – I/O, Fragmentation, CPU and Elapsed Time


In November 2011, I wrote a post titled SQL Server – Performance – Best Practice – Create Index Before or After a data insert?. The post dealt with answering the question – which method was better – whether to create an index before or after a bulk-insert? The post was essentially targeted to code that generated temporary tables for SSRS reporting or bulk processing.

Erik Eckhardt provided some great feedback on the post. Essentially, his feedback was:

  1. He wanted to see a Profiler trace comparison between the two approaches
  2. A fragmentation study was in order – which method produced the most efficient indexes?

I had intended to carry out this study a long time ago (during the Christmas weekend), but it’s better late than never. So, here are the results of the study.

Creating the test data

The approach that I will be taking today is:

  1. Create two identical tables in the TempDB
    • On one table, indexes will be created before the insert of test data is done
    • On the other, I will first insert some data, and then create indexes

WARNING: The scripts provided in this post are provided “as-is” and without warranty for understanding purposes only. Using the DBCC scripts provided in the post on your QA or production environments can have serious side effects

The script that I will be using today is the same as the one in my earlier post. However, please find it below for your kind reference. First up is the query that creates the indexes before inserting the data.

USE tempdb
GO

SET NOCOUNT ON

/*****************************************************************************************************************
   STEP 01: Create a table, complete with a primary key (to create a clustered index), and a non-clustered index
*****************************************************************************************************************/
USE tempdb
GO
CREATE TABLE DontUpdateStatistics
( StatId INT IDENTITY(1,1),
  RandomId INT DEFAULT (RAND()*1000),
  StatName VARCHAR(10) DEFAULT 'DEFAULT',
  CONSTRAINT PK_DontUpdateStatistics_StatId PRIMARY KEY CLUSTERED 
  (
    StatId
  )
)
GO

CREATE NONCLUSTERED INDEX NC_DontUpdateStatistics_RandomId ON DontUpdateStatistics(RandomId)
GO

/*****************************************************************************************************************
   STEP 02: Insert some test data into the table
*****************************************************************************************************************/
USE tempdb
GO
INSERT INTO DontUpdateStatistics DEFAULT VALUES
GO 100000

Next up is the script that I recommend – wherein the indexes are created after the data is inserted.

USE tempdb
GO

SET NOCOUNT ON

/*****************************************************************************************************************
   STEP 04: Create a table - this time, do not create any indexes
*****************************************************************************************************************/
CREATE TABLE UpdateStatistics
( StatId INT IDENTITY(1,1),
  RandomId INT DEFAULT (RAND()*1000),
  StatName VARCHAR(10) DEFAULT 'DEFAULT'
)
GO

/*****************************************************************************************************************
   STEP 05: Insert some test data into the table
*****************************************************************************************************************/
USE tempdb
GO
INSERT INTO UpdateStatistics DEFAULT VALUES
GO 100000

/*****************************************************************************************************************
   STEP 06: Now create the primary key clustered index and the non-clustered index
*****************************************************************************************************************/
ALTER TABLE UpdateStatistics
    ADD CONSTRAINT PK_UpdateStatistics_StatId PRIMARY KEY CLUSTERED 
    (
        StatId
    )
GO
CREATE NONCLUSTERED INDEX NC_UpdateStatistics_RandomId ON UpdateStatistics(RandomId)
GO

I will now start by answering the 2nd question first – which method produced an index that was more efficient from a fragmentation perspective?

Fragmentation study

Please note that in both the queries above, the clustered index is created on the column – StatId, which is an identity column. The inserts are therefore in the order we would want them to be in both cases, causing a “write-once” operation as Erik put it.

Once the data had been inserted into the test tables, I ran the following query, which uses the sys.dm_db_index_physical_stats dynamic management function to take a look at the index physical statistics. NOTE: I will not use DBCC SHOWCONTIG because it is marked for deprecation (BOL reference here).

USE tempdb
GO
SELECT 'Table: DontUpdateStatistics' AS TableName,
       index_id AS IndexId,
       index_type_desc AS IndexType,
       index_depth AS IndexDepth,
       avg_fragment_size_in_pages AS 'AverageFragmentation(Pages)',
       avg_fragmentation_in_percent AS 'AverageFragmentation(Percentage)',
       fragment_count AS Fragments,
       page_count AS 'PageCount',
       partition_number AS PartitionNumber,
       alloc_unit_type_desc AS AllocationUnitType,
       index_level AS IndexLevel
FROM sys.dm_db_index_physical_stats(DB_ID(),OBJECT_ID('DontUpdateStatistics'),-1,0,NULL)

SELECT 'Table: UpdateStatistics' AS TableName,
       index_id AS IndexId,
       index_type_desc AS IndexType,
       index_depth AS IndexDepth,
       avg_fragment_size_in_pages AS 'AverageFragmentation(Pages)',
       avg_fragmentation_in_percent AS 'AverageFragmentation(Percentage)',
       fragment_count AS Fragments,
       page_count AS 'PageCount',
       partition_number AS PartitionNumber,
       alloc_unit_type_desc AS AllocationUnitType,
       index_level AS IndexLevel
FROM sys.dm_db_index_physical_stats(DB_ID(),OBJECT_ID('UpdateStatistics'),-1,0,NULL)

I looked at the following values:

  • avg_fragmentation_in_percent – The percent of logical fragmentation (out-of-order pages in the index)
  • fragment_count – The number of fragments (physically consecutive leaf pages) in the index
  • avg_fragment_size_in_pages – Average number of pages in one fragment in an index

Essentially, the avg_fragment_size_in_pages and avg_fragmentation_in_percent values are inversely related. Therefore, we are looking for a higher value of avg_fragment_size_in_pages and a lower value of avg_fragmentation_in_percent (for example, the clustered index on UpdateStatistics below spans 347 pages in just 2 fragments – an average fragment size of 173.5 pages – with practically zero fragmentation).

Below is the output that I received when the above queries were executed against my test instance.

Fragmentation Results

In case you are not able to see the screenshot above, here’s the same data in tabular format:

| TableName | IndexId | IndexType | IndexDepth | AverageFragmentation(Pages) | AverageFragmentation(Percentage) | Fragments | PageCount | PartitionNumber | AllocationUnitType | IndexLevel |
|---|---|---|---|---|---|---|---|---|---|---|
| Table: DontUpdateStatistics | 1 | CLUSTERED INDEX | 2 | 11.5666666666667 | 1.44092219020173 | 30 | 347 | 1 | IN_ROW_DATA | 0 |
| Table: DontUpdateStatistics | 2 | NONCLUSTERED INDEX | 2 | 1 | 99.6108949416342 | 257 | 257 | 1 | IN_ROW_DATA | 0 |
| Table: UpdateStatistics | 1 | CLUSTERED INDEX | 2 | 173.5 | 0 | 2 | 347 | 1 | IN_ROW_DATA | 0 |
| Table: UpdateStatistics | 2 | NONCLUSTERED INDEX | 2 | 58 | 0.574712643678161 | 3 | 174 | 1 | IN_ROW_DATA | 0 |

Results

Contrary to expectations, the results above clearly indicate that we have higher fragmentation when indexes are created before data is inserted into the tables. It does not matter when these values (i.e. the results of sys.dm_db_index_physical_stats) are fetched – before or after a SELECT is executed against the table – the index fragmentation remains the same and is not affected by when the statistics on the table are updated.

CPU Utilization and Elapsed Time study

I will address this part of the study in 2 parts:

  1. In my original post (here), we already saw that when running the SELECT statements against the table, the Profiler was able to detect that the table: DontUpdateStatistics – did not have the expected statistics
    • We have also seen that when STATISTICS IO was turned ON, the SQL Server clearly showed a performance difference between the two SELECT statements
  2. In this post, I will try to see whether the performance gain in the second SELECT has been redistributed during the index creation or data load phases

To see the overall CPU time and elapsed time statistics, I wrapped the SET STATISTICS TIME ON option around the index (both clustered & non-clustered) creation and data insert statements. The results, based on the test data of 100,000 records, are summarized below:

| Metric | Table | Data Insert (ms) | Index (clustered & non-clustered) creation (ms) | Totals (ms) |
|---|---|---|---|---|
| CPU Time | DontUpdateStatistics | 4687 | 0 | 4687 |
| CPU Time | UpdateStatistics | 4533 | 400 | 4933 |
| Elapsed Time | DontUpdateStatistics | 24264 | 7 | 24271 |
| Elapsed Time | UpdateStatistics | 22364 | 23872 | 46236 |
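For reference, the measurement wrapper looked along these lines; this is only a sketch around the second script, and the exact batching is illustrative:

SET STATISTICS TIME ON
GO

-- Data insert phase (CPU and elapsed times are reported in the Messages tab)
INSERT INTO UpdateStatistics DEFAULT VALUES
GO 100000

-- Index creation phase
ALTER TABLE UpdateStatistics
    ADD CONSTRAINT PK_UpdateStatistics_StatId PRIMARY KEY CLUSTERED (StatId)
GO
CREATE NONCLUSTERED INDEX NC_UpdateStatistics_RandomId ON UpdateStatistics(RandomId)
GO

SET STATISTICS TIME OFF
GO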

Summary

It is now time to summarize the findings.

Data can be loaded to a table in multiple ways. We have been comparing the pros & cons of 2 such methods:

  1. Create necessary supporting indexes first, insert data afterwards
  2. Insert data first, create necessary supporting indexes afterwards

My original post compared these approaches in terms of the number of read-ahead reads (the number of pages placed into the cache) required before SQL Server can serve a request to fetch data from the tables, depending upon when and whether statistics were updated for the tables. In this post, we studied the level of index fragmentation and the time statistics.

The following chart is a summary of my findings based on which approach “wins” (Wins represented as “W”, whereas a loss by “L”):

| Approach | Most up-to-date statistics | Read-ahead reads required to service a query | Index fragmentation level | Time (CPU + Elapsed time) statistics |
|---|---|---|---|---|
| Index before, insert after | L | L | L | W |
| Insert before, index after | W | W | W | L |
| Further reading | Previous post | Previous post | This post | This post |

In short, if execution time is of utmost importance for a given scenario, approach #1 is preferred, wherein indexes are created before inserting the data (this is the most common approach). However, if fragmentation, up-to-date statistics and a reduced number of read-ahead reads are of importance (as in the case of read-only data), approach #2 is preferable, wherein data is inserted before creating the indexes.

I trust you found the study conducted above useful & interesting. If you have any comments, or would like to know more about it, please do feel free to contact me via the associated discussion or twitter. As always, I really appreciate the feedback that you, the community, provides.


Until we meet next time,

Be courteous. Drive responsibly.

#0137 – SQL Server – Review – Pluralsight course – SQL Server Questions & Answers – by Pinal Dave & Vinod Kumar


I recently had the opportunity to view some of the newest courses from Pluralsight, an organization that offers online training courses for developers. One of the courses that I viewed was the SQL Server Questions & Answers (http://www.pluralsight-training.net/microsoft/Courses/TableOfContents?courseName=sql-server-qa) by Pinal Dave (blog) and Vinod Kumar (blog), both of whom have been a source of inspiration to the entire SQL Server community and enthusiasts like myself.

Today, I take the opportunity to write a review on the course.

Book or Online course? Are they the same?

Pinal & Vinod recently authored a book, “SQL Server Interview Questions and Answers”, which I reviewed in my post here. My review was: “With real world questions and practical answers, this is one book you simply cannot afford to miss.”

So, the question now is – both the book and the online course have similar names. Are they the same? My answer: No.

While the online course is based on highly filtered content from the book, the online course is much more than the book. Pinal Dave takes the place of the garden-variety SQL Server developer & administrator whose mind is full of myths surrounding SQL Server. Vinod Kumar sheds light by demonstrating the problems caused by these common misconceptions and also demonstrates the right way to do things.

The online course is more of a “myth-buster”, and is definitely not the same as the Interview Questions book.

Overall feel of the course

When I first looked at the course index, I thought – there’s no way such a laundry list of concepts can be covered in a little over 120 minutes (2 hours). But the simplicity with which Pinal & Vinod explain the most complicated concepts left me hungry for more. I took the entire course in one, uninterrupted session and it’s quite possible that almost everyone who views the course would have a similar experience.

The course is all about eliminating some of the most common misconceptions in the developer community around SQL Server. Here are some of the best:

  • Are stored procedures pre-compiled?
  • What is the difference between temporary tables & table variables?
  • Does the order of columns in the WHERE clause matter?
  • What is the real difference between DELETE & TRUNCATE?

In addition, the course also enlightens us on some of the basic elements like – how long should a transaction be? or, are locks maintained throughout the duration of the transaction?

At one point in the course, Vinod says – "Dynamic SQL is really not bad, it’s the way in which we prepare & execute SQL that matters." – It just can’t get any better than this!

Some surprises….

Pinal & Vinod never cease to surprise me. The final part of the session is not about questions and answers, but about how the various Tools & utilities within SQL Server can help us make working with SQL Server a breeze. While I have written about them in the past and use them in my day-to-day work, I was surprised to note a couple of things that Vinod had up his sleeve.

  1. Activity Monitor
  2. Object explorer details
  3. Filtering within Object Explorer details
  4. Template Explorer
    • Vinod demonstrates some really cool tricks here (and my post does not cover them)
  5. Query Editor & Query Editor results tab (I won’t share any links here – watch the course!)

My Recommendation

Every SQL Server developer must take the course. If you are into DB design, this course has elements that you would definitely enjoy, especially the part which demystifies DENY v/s GRANT operations.

Do not miss the course – it makes for a highly productive work environment! So, this weekend, register with Pluralsight and take the course.

Until we meet next time,

Be courteous. Drive responsibly.
