
UNIX timestamp and the Year 2038 problem


My first post was on the Ghost of the Y2K, wherein I attempted to revisit the infamous Y2K problem (read and listen to a nice summary on NPR’s Science Friday) and also drew attention to the fact that the problem may not be gone yet for those who continue to use 2-digit year representations. Earlier this week, I wrote about a problem that one of my colleagues had asked for my help with. They were trying to convert a UNIX timestamp (BIGINT) to a SQL Server DATETIME representation. However, because the DATEADD function can only accept the INT data-type for the interval value, they were at a dead-end. After about an hour of researching what the UNIX timestamp is, I came up with two solutions for them – one of which I presented in the last post. You can read about it here.

When I was researching the possible solutions, a strange observation dawned on me, and further research revealed that I was not the only one to notice it. It is popularly known as the Year 2038 problem – and is a very real threat to all 32-bit applications that hope to make it to 2038. Here, I attempt to explain the Year 2038 problem and also present another alternate solution for using UNIX timestamp values with the DATEADD function.

The INT data-type

The INT data type is the primary integer data type in SQL Server. Consuming 4 bytes in storage, the INT is a 32-bit data-type with a range of -2^31 (-2,147,483,648) to 2^31-1 (2,147,483,647).

In binary format, the ceiling value for INT (2,147,483,647) equates to 01111111111111111111111111111111 – a zero sign bit followed by 31 ones (the string is exactly 32 bits long). Attempting to convert this number of seconds to a SQL Server DATETIME gives us the DATETIME 2038-01-19 03:14:07.000.
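The same arithmetic can be checked outside SQL Server. Here is a quick Python sketch (purely illustrative; the T-SQL equivalent is below) of the INT ceiling and the date it maps to:

```python
from datetime import datetime, timedelta, timezone

INT_MAX = 2**31 - 1  # 2,147,483,647: ceiling of a signed 32-bit integer

# A zero sign bit followed by 31 ones
print(format(INT_MAX, "032b"))  # 01111111111111111111111111111111

# Adding that many seconds to the UNIX epoch lands on the magic date
epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
print(epoch + timedelta(seconds=INT_MAX))  # 2038-01-19 03:14:07+00:00
```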

DECLARE @DateInt     INT

SET @DateInt    = 2147483647 /* Max. range of the INT data-type */
SELECT DATEADD(ss, @DateInt, '1970-01-01')

The Year 2038 problem

Now that we know what the UNIX timestamp is, and also knowing the eternal truth that time and tide wait for none, let us see how January 19, 2038, 3:14:08 (UTC) is represented in UNIX timestamp format. Because the UNIX timestamp is simply a uniformly incrementing count of seconds, the number is obviously 2,147,483,647 + 1 = 2,147,483,648. Alternatively, the computation logic used in my previous post yields the same number, 2,147,483,648.

SELECT CAST((DATEDIFF(dd,'1970-01-01 00:00:00.000','2038-01-19 03:14:08.000') * 86400) AS BIGINT) 
       +                                            -- Get the number of seconds based on the number of days elapsed
       DATEDIFF(ss,'2038-01-19 00:00:00.000','2038-01-19 03:14:08.000')
                                                    -- Add the number of seconds elapsed since midnight

There is a very important point I want to highlight by showing the conversion script above. If you observe, I had to cast the number to a BIGINT. This is because, when represented in binary format, 2,147,483,648 = 10000000000000000000000000000000.

This is a clear integer overflow for the INT data-type (the 32nd bit in a 4-byte representation is the sign bit). For the INT data-type, this bit pattern corresponds to the floor of the range (-2,147,483,648). What this means is that at exactly January 19, 2038, 3:14:08 UTC, clocks using the 32-bit INT data-type would start reading: 1901-12-13 20:45:52.000
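Python integers do not overflow, but the 32-bit wrap-around is easy to simulate by masking to 32 bits. This sketch (an illustration, not the T-SQL itself) shows the rollover and the rolled-back clock value:

```python
from datetime import datetime, timedelta, timezone

def to_int32(n):
    """Reinterpret the low 32 bits of n as a signed 32-bit integer."""
    n &= 0xFFFFFFFF
    return n - 2**32 if n >= 2**31 else n

print(to_int32(2147483647))  # 2147483647 -- still fits
print(to_int32(2147483648))  # -2147483648 -- sign bit set: wraps to the floor

# The floor value, fed through the epoch arithmetic, gives the rolled-back clock
epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
print(epoch + timedelta(seconds=to_int32(2147483648)))  # 1901-12-13 20:45:52+00:00
```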

DECLARE @DateInt     INT

SET @DateInt    = -2147483648 /* Min. range of the INT data-type */
SELECT DATEADD(ss, @DateInt, '1970-01-01')

The results could be disastrous if such a rollover happens unchecked. If your application is currently processing data 20 years ahead of time, you will start running into problems very soon.

This is the Year 2038 problem, also known as Y2K38 or Y2.038K or the UNIX Millennium Bug. In essence, what my colleague was asking was – “How can I overcome the year 2038 bug in my 32-bit application?”

The solution:

A hint at the solution to the Year 2038 problem is also hidden in the script shown above, wherein I CAST the value to a BIGINT. We can use unsigned integers or BIGINT values as stop-gaps, but the only real solution is to move towards 64-bit systems, where the wrap-around comes approximately 292 billion years from now. Computing systems will be very, very different by then.
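The 292-billion-year figure is easy to sanity-check: divide the 64-bit signed integer ceiling by the number of seconds in an average year. A small Python sketch:

```python
SECONDS_PER_YEAR = 86400 * 365.25  # average Julian year, in seconds
INT64_MAX = 2**63 - 1              # ceiling of a signed 64-bit integer

years_until_wraparound = INT64_MAX / SECONDS_PER_YEAR
print(round(years_until_wraparound / 1e9))  # 292 -- i.e. roughly 292 billion years
```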

Using BIGINT values for DATEADD – Method 02

My previous post dealing with using BIGINT values in DATEADD mentioned that there are actually two alternate implementations for representing BIGINT UNIX timestamp values as SQL Server DATETIME. With the background of the Year 2038 problem, I present before you method #02, which is as follows. The script should be self-explanatory:

DECLARE @DateInt     INT
DECLARE @DateBigInt  BIGINT

/* Method 02 - The max. value for INT (2147483647) corresponds to '2038-01-19 03:14:07.000' */
/*             If the passed value is greater than '2038-01-19 03:14:07.000', we add the    */
/*             difference (passedValue-2147483647) to '2038-01-19 03:14:07.000',            */
/*             else we use the standard difference with 1970.                               */

SET @DateInt    = 1293840000 /* (UNIX time = 1293840000 => midnight, January 01, 2011) */
SET @DateBigInt = 2147483648

SELECT DATEADD(ss,@DateInt,'1970-01-01') AS 'IntegerDateRange',
       CASE WHEN (@DateBigInt-2147483647) > 0 THEN DATEADD(ss,(@DateBigInt-2147483647),'2038-01-19 03:14:07.000')
                                              ELSE DATEADD(ss,@DateBigInt,             '1970-01-01 00:00:00.000')
                                              END AS 'BigIntDateRange'
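The CASE-based split can be cross-checked outside SQL Server. Below is a quick Python sketch (purely illustrative) of the same branching logic:

```python
from datetime import datetime, timedelta, timezone

INT_MAX = 2147483647
epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
int_ceiling = datetime(2038, 1, 19, 3, 14, 7, tzinfo=timezone.utc)

def unix_to_datetime(ts):
    # Mirrors the CASE expression: values past the INT ceiling are added to
    # '2038-01-19 03:14:07', everything else to the 1970 epoch.
    if ts - INT_MAX > 0:
        return int_ceiling + timedelta(seconds=ts - INT_MAX)
    return epoch + timedelta(seconds=ts)

print(unix_to_datetime(1293840000))  # 2011-01-01 00:00:00+00:00
print(unix_to_datetime(2147483648))  # 2038-01-19 03:14:08+00:00
```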

Now that we have seen both methods of handling the year 2038 problem created by 32-bit data-types, it is up to the reader to pick and use any method they find best suited for their application code.

Some Trivia

Today, I leave you with some very interesting trivia (courtesy: Wikipedia):

  1. UNIX Time passed 1000000000 on September 9, 2001, at exactly 01:46:40 (UTC). This is popularly called the UNIX billennium
  2. UNIX Time passed 1234567890 on February 13, 2009, at exactly 23:31:30 (UTC)  – Interestingly, it was a Friday
  3. UNIX Time passed 2^30 (1073741824) on January 10, 2004, at exactly 13:37:04 (UTC). This is the halfway point (in seconds) between the UNIX epoch and the Year 2038 rollover
  4. UNIX Time passed 1111111111 on March 18, 2005, at exactly 01:58:31 (UTC)
  5. UNIX Time will pass 2000000000 on May 18, 2033, at exactly 03:33:20 (UTC). This will be the 2nd UNIX billennium

In Conclusion…

In my previous post, I had mentioned that I did not believe the Microsoft Connect case filed for making DATEADD compatible with BIGINT is really a SQL Server defect. This is because, as demonstrated above, the issue is actually a limitation of the 32-bit architecture. Whatever code change we make to overcome this limitation will always be a workaround, and not the true solution.

Many embedded systems (like automobile control systems and CNC controllers) use the UNIX timestamp to maintain their internal clock. These systems will also be affected by the Year 2038 problem. Hence, my question to the community is do you work with UNIX timestamps? If yes, I would like to hear from you as to how you handle (or plan to handle) both these issues that we discussed this week. Do leave a comment as you go.

Be courteous. Drive responsibly.

Converting UNIX timestamp (BIGINT) to DATETIME value using DATEADD function


Contrary to popular belief, offices can be fun – especially if you are faced with new challenges each day. Hence, this week, we will be taking a small break from discussing the “Underappreciated features of SQL Server”, and instead discuss something that happened at the office a few days ago. One of the project engineers asked me a seemingly very simple question.

“Can I represent a UNIX time value to a valid date using SQL Server? If yes, can you help me on how to achieve this?”

As always, I gladly agreed to assist him in resolving his query. But before I could help him, I needed some help myself. The first question I asked him was, “What is UNIX time?”. The poor guy was petrified at the prospect of being helped by someone who did not even know the context of the discussion. Anyway, this is what he explained to me:

UNIX time

UNIX time, or POSIX time, is defined as the number of seconds elapsed since midnight of January 1, 1970, Coordinated Universal Time (UTC), not counting leap seconds. The only reason it does not count leap seconds is that UTC did not come into existence (in its current form) until 1972. UTC introduced the concept of adding and deleting leap seconds to keep the clock in synchronization with the Earth’s rotation.

Anyway, UNIX time is quite simple actually (simple is always good!). The UNIX epoch is the time 00:00:00 UTC on January 01, 1970. The Unix time number is zero at the Unix epoch, and increases by exactly 86400 seconds (24 hours * 60 minutes/hour * 60 seconds/minute) per day since the epoch.

Thus, a UNIX time of 1293840000 corresponds to the midnight of January 01, 2011 (the New Year!).
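That value is easy to verify: count the days between the two dates and multiply by 86400 seconds. A quick Python check (illustrative only):

```python
from datetime import date

days = (date(2011, 1, 1) - date(1970, 1, 1)).days
print(days)          # 14975 days between the UNIX epoch and New Year 2011
print(days * 86400)  # 1293840000 -- the UNIX time quoted above
```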

Once I was clear on this, I set out to help him. Immediately after starting, however, I ran into a problem.

The Problem

Theoretically, simply adding the number of seconds to January 01, 1970 using the SQL DATEADD function should be sufficient to convert a UNIX timestamp to a SQL Server DATETIME. Hence, I attempted exactly that, but the solution failed basic unit testing. Here’s how:

For midnight, January 01, 2011 (UNIX time = 1293840000), the approach of directly adding the seconds to the UNIX epoch works fine. However, the moment I attempt to convert a timestamp value that is higher than the INTEGER data type limit (e.g. midnight January 01, 2050 = UNIX time 2524608000), I end up in an arithmetic overflow error. The source of the problem is the DATEADD function, which cannot accept a BIGINT interval value.

DECLARE @DateInt     INT
DECLARE @DateBigInt  BIGINT

SET @DateInt    = 1293840000 /* (UNIX time = 1293840000 => midnight, January 01, 2011) */
SET @DateBigInt = 2524608000 /* (UNIX time = 2524608000 => midnight, January 01, 2050) */

/**************** Building the Scenario *****************/
PRINT DATEADD(ss, @DateInt, '1970-01-01')
PRINT DATEADD(ss, @DateBigInt, '1970-01-01')

image

In today’s business world, not being able to handle dates as “close” as January 01, 2050 is simply not acceptable. This “limitation” has already been reported to Microsoft (refer to the Connect link here). While I do not believe this to be a SQL Server defect (why? read on…), we undoubtedly need something for applications being written today. Below are a few of my attempts to overcome this challenge.

The Solutions

I will be presenting two possible solutions before you – one today, and the other one later on in the week. Depending upon your preference, you may use either one in your application.

For today, let’s work our way backwards. We do not necessarily need to add the entire difference (in seconds) to the UNIX epoch time. DATEADD provides us the freedom to add years, months, and even days. Because we know that each day in the UNIX time is exactly 86400 seconds long, we can easily calculate the number of days elapsed based on the seconds information provided by performing a simple division.

In the real world, we will not be dealing only with midnight values. Any hour, minute or second of the day can come through as part of the BIGINT number that we receive as UNIX time from interfacing systems. Because (ignoring leap seconds) a day has a maximum of 86400 seconds (which fits comfortably in an INT data-type), we simply use the modulo operator to get the number of seconds since the last midnight.
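The division/modulo split described above can be sketched in Python before committing it to T-SQL (illustrative only; the timestamp is the same one used in the script that follows):

```python
from datetime import datetime, timedelta, timezone

ts = 2524644630  # 10:10:30 AM, January 01, 2050

days, seconds = divmod(ts, 86400)  # days since epoch, seconds since last midnight
print(days, seconds)               # 29220 36630

epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
print(epoch + timedelta(days=days, seconds=seconds))  # 2050-01-01 10:10:30+00:00
```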

/* Method 01 - Calculate based on the number of seconds per day (60sec*60min*24hrs)=86400sec */
SET @DateInt    = 1293840000 /* (UNIX time = 1293840000 => midnight, January 01, 2011) */
SET @DateBigInt = 2524644630 /* (UNIX time = 2524644630 => 10:10AM, 30 seconds, January 01, 2050) */
SELECT DATEADD(ss, @DateInt,            '1970-01-01')                                    AS 'IntegerDateRange',
       (@DateBigInt/86400) AS NumberOfDaysSinceEpoch,
       (@DateBigInt%86400) AS NumberOfSecondsFromMidnight,
       DATEADD(ss, (@DateBigInt%86400), DATEADD(DAY, @DateBigInt/86400, '1970-01-01' ) ) AS 'BigIntDateRange'

Here’s what the output looks like:

image

Conclusion

We have thus successfully represented a UNIX time as a SQL Server DATETIME value (represented as UTC time). However, please note that because of the fundamental differences between UTC and UNIX time, the conversions can be a bit off depending upon the number of leap seconds introduced till the date under consideration.

As I was researching on how to use BIGINT in the DATEADD function, an interesting observation came to light about using the 32-bit INT data-type – which forces me to start thinking about the time I have left to upgrade my 32-bit home operating system and hardware to 64-bit. This is referred to popularly as the Year 2038 problem, which I will write about later in the week along with the second method to convert a BIGINT to a DATETIME value. The upcoming article will also justify why I do not believe this to be a SQL Server defect.

Be courteous. Drive responsibly.

Multi-Server Queries – Underappreciated features of Microsoft SQL Server


This is part of the series of posts that I am running under the title "Underappreciated Features of SQL Server". The series has been inspired by Andy Warren’s editorial on SQLServerCentral.com.

After Activity Monitor, this is another feature that I was surprised to see in the list of underappreciated features. It has been a dream come true for anybody who manages more than one server at a time. Given the need to query a bunch of servers in one go, and the fact that this feature has been around since Microsoft SQL Server 2005, I would have thought it would be in common use by now.

Anyway, let’s quickly see how to use this wonderful feature that Microsoft SQL Server Management Studio provides.

Running Multi-Server Queries

One of our esteemed SQL Server MVPs is Mr. Pinal Dave (blog). On June 14, 2009, he wrote a very short, concise and precise blog post on how to run multi-server queries. You can find that particular post here.

Keep in mind that the most important prerequisite for using this feature is that all servers participating in a multi-server query must be registered servers.

What Pinal’s blog post does not cover are some navigational aids and query execution options. I will attempt to cover them here.

Changes to the Query Editor window

The Query editor window in a multi-server query is similar to the regular single-instance query editor, but there is one minor difference. The SSMS query status bar informs us about the number of servers that a particular query will run against. On my test environment, let’s say I have two registered instances. When I attempt to run a multi-server query, this is what I see:

image

Query Execution Options:

In Pinal’s post, you would have already seen how the results would appear in your results pane by default. Results from both instances are available in a single grid, and the SQL Server instance name helps us identify which server contributed to the result. However, we can change the way these results are displayed. You can navigate out to Tools->Options->Query Results to see a window similar to the following. Please note that the values in the screenshot below are default values.

image

  • Add login name to the results – Setting this to TRUE adds a column containing the login name which produces the result row
  • Add server name to the results – Adds a column containing the server instance name for a particular result row when set to TRUE
  • Merge Results – Shows results from different servers in the same grid when set to TRUE

Allow me to change the SQL Server registration of one of the servers to use SQL Server authentication. Also, I will set the “Merge Results” property to FALSE, and the “Add login name to the results” to TRUE. Below will be the output when I run the following query (borrowed from the example in Pinal’s blog):

SELECT SERVERPROPERTY('Edition') AS Edition,
SERVERPROPERTY('ProductLevel') AS ProductLevel,
SERVERPROPERTY('ProductVersion') AS ProductVersion
GO 

image

Important points to note

Limitation

As of now, it is not possible to select a subset of servers from within a server group (you can select a single instance, but not a subset). The recommendation would be to have a separate group if such a selection needs to be made often.

This is not a programmability enhancement!

Also, this feature is nothing more than an ability to execute the same query against multiple registered servers and obtain their results within a single window. It is not a programmability enhancement, and hence the following should not be expected:

  1. Cross-server support for JOIN or ORDER BY clauses
  2. Transaction support across multiple servers

Security considerations

Quoting MSDN:

Because the connections maintained by a Central Management Server execute in the context of the user, by using Windows Authentication, the effective permissions on the registered servers might vary. For example, the user might be a member of the sysadmin fixed server role on the instance of SQL Server A, but have limited permissions on the instance of SQL Server B.


This post completes the first part of my series on “Underappreciated features of Microsoft SQL Server”. Do share your feedback on how the journey has been till now, and what else would you like me to write on.

We will resume our journey to explore T-SQL Enhancements in the next couple of weeks. In the meanwhile, I will share with you some other interesting incidents that happened last month.

Until we meet next time,

Be courteous. Drive responsibly.

T-SQL Debugger – Underappreciated Features of Microsoft SQL Server


This is a part of my ongoing series on the "Underappreciated Features of Microsoft SQL Server", inspired by Andy Warren’s editorial on SQLServerCentral.com. You can read the editorial and follow the discussion here.

More often than not, there is a myth in the developer community that Microsoft SQL Server does not provide the debugging abilities that other IDEs provide. SQL Server 2000 provided basic debugging ability. Starting with Microsoft SQL Server 2005, the product comes with its own Management Studio, which is based upon Microsoft’s renowned IDE – Visual Studio. Therefore, the most common debugging abilities come out of the box for SQL Server – enough to give application developers a run for their money! Let’s have a look at some of the basic debugging options available to us in SSMS.

Please note that throughout this example, we will be using Microsoft SQL Server 2008, and the AdventureWorks2008 sample database as our reference. Please allow for a longer loading time for this post, because the Debugger is best explained visually.

NOTE: For T-SQL Debugging, connecting the Transact-SQL editor to the server with a login having the sysadmin fixed server role is not sufficient; the Windows login of the Visual Studio user must also be a member of the sysadmin fixed server role on that instance of SQL Server.

Creating the environment

We can debug almost everything in SQL Server – ad-hoc SQL statement batches, stored procedures and even triggers! To begin with, we will create the following stored procedure on the SQL Server 2008 instance. The purpose of using a stored procedure that updates the [Sales].[SalesOrderDetail] table is that the table has a trigger on it and hence it would be easy for us to see how to debug triggers in Microsoft SQL Server 2008.

--Step 01: Create the following Stored Procedure
USE AdventureWorks2008;
GO
IF OBJECT_ID ( 'Sales.proc_UpdateSalesOrderDetailUnitPrice', 'P' ) IS NOT NULL 
    DROP PROCEDURE [Sales].[proc_UpdateSalesOrderDetailUnitPrice];
GO

CREATE PROCEDURE [Sales].[proc_UpdateSalesOrderDetailUnitPrice]
    @tSalesOrderId       INT,
    @tSalesOrderDetailId INT,
    @tUnitPrice          MONEY
AS
BEGIN
    --Wrap the UPDATE in a transaction so that we do not make permanent changes to the DB
    BEGIN TRANSACTION MyDebuggingTest
        UPDATE [Sales].[SalesOrderDetail]
        SET UnitPrice = @tUnitPrice
        WHERE SalesOrderID = @tSalesOrderId AND
              SalesOrderDetailID = @tSalesOrderDetailId
    ROLLBACK TRANSACTION MyDebuggingTest
END
GO

Here’s the query that we will be using to invoke the stored procedure in our debugging session.

USE AdventureWorks2008
GO

DECLARE @tSalesOrderId INT = 43659,
        @tSalesOrderDetailId INT = 2,
        @tUnitPrice MONEY = 2010.255

EXEC [Sales].[proc_UpdateSalesOrderDetailUnitPrice] @tSalesOrderId, @tSalesOrderDetailId, @tUnitPrice;

Performing the Debug session

We can initiate the Debugging session by clicking the “green arrow” button on the SSMS toolbar or by going to Debug –> Start Debugging or by using the Alt+F5 key combination.

Note that SQL Server reminds you that a debug session is in progress by showing the “Debugging Query…” message in the query status bar. You also get the Locals (variable watch), Call Stack/Breakpoints and Output windows.

Initiate Debugging session

(Use F10 to advance to the next statement)

image
Use F11 to “step-into” the stored procedure

Notice that both “Locals” and “Call stack” windows are updated

image
Use F11 on the UPDATE statement to step into the Trigger!

Also notice that the “Call Stack” shows both the Server Instance name and the Database name where the object being debugged resides

image
Once inside a trigger, define a breakpoint by pressing F9, use “Continue” (Alt+F5) to proceed with the execution until the breakpoint is hit

Is it just me, or is the breakpoint window not what we expected it to be?

image
You also get the standard debugging options like “Run to cursor”/”Set next statement”, etc image

What is not supported?

The SQL Server Debugger is powerful, but still in its infancy. To the best of my knowledge, the following cannot be done in the SQL Server 2008 debugger:

  1. Edit & Continue – the objects that you step into are temporary scripts created by SQL Server – making changes to these will not have any effect
  2. Conditional Breakpoints – it’s a take-it-or-leave-it situation with breakpoints in SQL Server – you cannot have conditional breakpoints
  3. Pre-set breakpoints – Because SQL Server opens up a temporary copy of the object scripts, one cannot pre-set breakpoints before the object is “stepped into”

Interesting information in the breakpoint window

Talking about the limitations of Transact-SQL Debugging, did you notice anything strange in the screenshot where we hit the breakpoint (the second-last screenshot in the table above)?

You had to go back and take another look, didn’t you? No worries – I had to do that too!

The content in the breakpoint window is not what we expected it to be. Instead of showing the object name where the breakpoint is, SSMS is showing us a number. What can this number be? It’s the object_id of the trigger we are currently in! Here’s the confirmation:

SELECT * FROM sys.objects WHERE object_id = 1563152614

Result:

 image

But why does SSMS give us the object_id instead of the object name? That’s because it is SQL Server’s way of keeping track of the breakpoint for future debugging sessions within the solution/project. If you are not running a debug session under a solution/project, closing SSMS will reset all breakpoints.

But don’t take my word for it. Rerun this debug session and see for yourself the importance of using the object_id. Once done, close SSMS. Reopen SSMS and run through this debug session once again – the breakpoint will be gone!

Important Resources

Here are some important MSDN resources to get you started on T-SQL Debugging

Powered with these resources, I hope that you will be able to unleash the power of T-SQL Debugging and use it for making a smarter workplace. If you find any cool tricks to use with SQL debugging, please share them here!

Until we meet next time,

Be courteous. Drive responsibly.

Blocked Process Report & Event Class – Underappreciated Features of SQL Server


This is in continuation to my series on “Underappreciated features of SQL Server”, which is based on the editorial with the same name by Andy Warren on SQLServerCentral.com. The editorial is available here.

Locking and blocking are not new to anyone who has worked on a production SQL Server based system with as few as 200 concurrent users. Locking is essential to maintain a database’s consistency. However, whenever a process holds a lock for longer than required, it starts interfering with the normal operation of other processes on the server. This manifests as long-running queries and application timeouts.

The conventional way of getting at this kind of information was via trace flags 1204 and 1222 (which report deadlock details). Microsoft SQL Server 2005 makes this a little bit easier by introducing a new feature called the Blocked Process Report, which can help administrators monitor their servers for blocking whenever it exceeds a given threshold.

Blocking Processes

Explaining blocking processes and deadlocks is beyond the scope of this article. If you need help with the concepts, please refer to the MSDN resource here. Also, please do not proceed without clarity on these concepts – doing so will do more harm than good.

A deadlock occurs when two or more tasks permanently block each other, each holding a lock on a resource that the other tasks are trying to lock. A blocking situation is one where the blocking is not permanent, but severe enough to impact the performance of the other tasks.

Using the AdventureWorks database, running the following batch in two separate query windows simultaneously will result in a blocking situation. You will notice that the “second” query waits for the first and then executes only after the “first” finishes. Give it a try.

BEGIN TRANSACTION BlockedProcessReport
    PRINT 'Entering Transaction at:' + CONVERT(VARCHAR(20),GETDATE(),114);
    
    UPDATE HumanResources.Employee SET CurrentFlag = 0
    --Introduce a 20 second delay
    WAITFOR DELAY '00:00:20'

    PRINT 'Leaving Transaction at: ' + CONVERT(VARCHAR(20),GETDATE(),114);
ROLLBACK TRANSACTION BlockedProcessReport

Here are the results from my execution:

--Window 01
Entering Transaction at:08:54:45:793

(290 row(s) affected)
Leaving Transaction at: 08:55:05:803

--Window 02
Entering Transaction at:08:54:47:517

(290 row(s) affected)
Leaving Transaction at: 08:55:25:903

Clearly, the process running in window #2 was blocked by the process running in window #1. The most common symptom that such an issue is occurring would be users calling up the support staff screaming about a poorly performing system. Let’s see how we can now identify blocked queries using the features new to Microsoft SQL Server 2005.

Preparing the environment

Before we go any further, let’s ensure that the blocking process thresholds are low enough for us to easily trigger the recognition of transactions as blocking transactions.

CAUTION!!! Please do not adjust these settings on your production and Quality Assurance environments without recommendation from a senior database administrator, or from Microsoft.

--Set the visibility of advanced options to ON
sp_configure 'show advanced options',1;
GO
--Use RECONFIGURE to apply the changed settings without restarting SQL Server
RECONFIGURE;
GO
--Fetch the value of the Blocked Process threshold
--By default, this value is 0, i.e. no Blocked Process reports are generated
sp_configure 'blocked process threshold';
GO
--Set the blocked process threshold value to 5 seconds
sp_configure 'blocked process threshold',5;
GO
--Use RECONFIGURE to apply the changed settings without restarting SQL Server
RECONFIGURE;
GO

Using the default health reports shipped with SQL Server Management Studio

For those administrators who do not want to go get their hands greasy just yet, Microsoft SQL Server comes with a couple of useful reports out-of-the-box. One such report is the Blocked Process report. You can access this from the Object Explorer by right-clicking on the instance name –> Reports –> Standard Reports –> Activity – All Blocking Transactions

image

When you run the report under normal circumstances, it should not show any blocked processes. Now, re-run the batch of queries designed to simulate blocking in two separate query windows. While these are running, refresh the Blocked Process report. This is what it will look like – simple, clear and concise:

image

This query can now be given to the development teams with proof that it is indeed, causing performance issues and needs attention.

Using the SQL Server Profiler – Blocked Process Event Class

Starting with Microsoft SQL Server 2005, the Profiler introduces a new event class – Blocked Process Report. The Blocked Process Report event class indicates that a task has been blocked for more than a specified amount of time. This event class does not include system tasks or tasks that are waiting on non-deadlock-detectable resources. Let’s see how we can use this to detect blocking and deadlocks.

We will begin by setting up a SQL Server Profiler trace on our test instance. Use a (blank) trace template for simplicity. In the “Events Selection” tab, browse out to the “Errors and Warnings” section and choose “Blocked Process Report” as demonstrated below:

image

Run the trace. While the trace is running, re-run the batch of queries designed to simulate blocking in two separate query windows. Monitoring the profiler trace shows us the following:

image

The blocked process threshold uses the deadlock monitor background thread to walk the list of tasks that have been waiting longer than the configured threshold (or a multiple of it). The event is generated once per reporting interval for each blocked task. Because our blocked process threshold is set to 5 seconds, the blocked process generated a report at the 5, 10 and 15 second marks of its wait (by the 20-second mark, the first process had already proceeded ahead and ended the blockage, which is why no fourth report is seen).
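The reporting cadence above can be reasoned about with simple arithmetic; assuming a 5-second threshold and a 20-second block (the values used in this demo), the report marks fall at:

```python
threshold = 5        # blocked process threshold, in seconds
block_duration = 20  # how long the first transaction holds the lock

# One report per elapsed multiple of the threshold while the block persists;
# the blocker finishes at the 20-second mark, so no report fires there.
report_marks = list(range(threshold, block_duration, threshold))
print(report_marks)  # [5, 10, 15]
```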

Clicking on the individual blocked process report, we see an XML like the one below, which shows the blocked process as the one with SPID (54) and the blocking process to be one with SPID (53).

<blocked-process-report monitorLoop="20097">
 <blocked-process>
  <process id="process87be28" taskpriority="0" logused="0" waitresource="KEY: 5:72057594044153856 (6d003d0ff717)" waittime="17725" ownerId="120328" transactionname="BlockedProcessReport" lasttranstarted="2011-01-22T09:41:14.083" XDES="0x77247f0" lockMode="U" schedulerid="1" kpid="3128" status="suspended" spid="54" sbid="0" ecid="0" priority="0" transcount="2" lastbatchstarted="2011-01-22T09:41:14.083" lastbatchcompleted="2011-01-22T09:40:58.680" clientapp="Microsoft SQL Server Management Studio - Query" hostname="VPCW2K3" hostpid="2348" loginname="VPCW2K3Administrator" isolationlevel="read committed (2)" xactid="120328" currentdb="5" lockTimeout="4294967295" clientoption1="671090784" clientoption2="390200">
   <executionStack>
    <frame line="4" stmtstart="16" sqlhandle="0x0200000036147f28ca6059811837a9a13bd54860b50d7590"/>
    <frame line="4" stmtstart="252" stmtend="432" sqlhandle="0x02000000ee8af21f3464249db7b445232f6c427f69424d78"/>
   </executionStack>
   <inputbuf>
BEGIN TRANSACTION BlockedProcessReport
    PRINT &apos;Entering Transaction at:&apos; + CONVERT(VARCHAR(20),GETDATE(),114);
    
    UPDATE HumanResources.Employee SET CurrentFlag = 0
    --Introduce a 20 second delay
    WAITFOR DELAY &apos;00:00:20&apos;

    PRINT &apos;Leaving Transaction at: &apos; + CONVERT(VARCHAR(20),GETDATE(),114);
ROLLBACK TRANSACTION BlockedProcessReport   </inputbuf>
  </process>
 </blocked-process>
 <blocking-process>
  <process status="suspended" waittime="19027" spid="53" sbid="0" ecid="0" priority="0" transcount="1" lastbatchstarted="2011-01-22T09:41:12.773" lastbatchcompleted="2011-01-22T09:40:38.603" clientapp="Microsoft SQL Server Management Studio - Query" hostname="VPCW2K3" hostpid="2348" loginname="VPCW2K3Administrator" isolationlevel="read committed (2)" xactid="120327" currentdb="5" lockTimeout="4294967295" clientoption1="671090784" clientoption2="390200">
   <executionStack>
    <frame line="6" stmtstart="434" stmtend="496" sqlhandle="0x02000000ee8af21f3464249db7b445232f6c427f69424d78"/>
   </executionStack>
   <inputbuf>
BEGIN TRANSACTION BlockedProcessReport
    PRINT &apos;Entering Transaction at:&apos; + CONVERT(VARCHAR(20),GETDATE(),114);
    
    UPDATE HumanResources.Employee SET CurrentFlag = 0
    --Introduce a 20 second delay
    WAITFOR DELAY &apos;00:00:20&apos;

    PRINT &apos;Leaving Transaction at: &apos; + CONVERT(VARCHAR(20),GETDATE(),114);
ROLLBACK TRANSACTION BlockedProcessReport   </inputbuf>
  </process>
 </blocking-process>
</blocked-process-report>

Cleanup the environment

As we come to a close on this demo, we will clean up the environment by resetting the blocked process threshold to 0.

CAUTION!!! Please do not adjust these settings on your production and Quality Assurance environments without recommendation from a senior database administrator, or from Microsoft.

--Reset the blocked process threshold value to 0
sp_configure 'blocked process threshold',0;
GO
--Use RECONFIGURE to apply the changed settings without restarting SQL Server
RECONFIGURE;
GO
--Fetch the value of the Blocked Process threshold
--By default, this value is 0, i.e. no Blocked Process reports are generated
sp_configure 'blocked process threshold';
GO
--Set the visibility of advanced options to OFF
sp_configure 'show advanced options',0;
GO
--Use RECONFIGURE to apply the changed settings without restarting SQL Server
RECONFIGURE;
GO

Blocked Process Monitoring Tool

Jonathan Kehayias has developed a Blocked Process Monitoring Tool, which is available on CodePlex at: http://sqlblockedprocessmon.codeplex.com/

In Conclusion

As systems scale up to the cloud in the coming days, preventing blocking and deadlocks is going to be of utmost importance. The entire application development process must take into account the fact that, ultimately, if the underlying database is struggling to get the data you need in time because of poor application design, no amount of high-end technology and hardware is going to resolve the problem. I hope that the above has given development teams and DBAs a new and easy way to identify and troubleshoot performance bottlenecks in their systems.

While you do this, please keep in mind the following extract from SQL Server Books Online: “The blocked process report is done on a ‘best effort’ basis. There is no guarantee of any ‘real-time’ or even close to real-time reporting.”

Until we meet next time,

Be courteous. Drive responsibly.