Tag Archives: #SQLServer

All about Microsoft SQL Server

Exception handling in T-SQL/TRY…CATCH – Underappreciated features of Microsoft SQL Server


As we continue our journey through the “Underappreciated features of SQL Server”, this week we are looking at a few of the T-SQL enhancements that the community felt did not get the attention they deserved. This was in response to Andy Warren’s editorial of the same name on SQLServerCentral.com.

Today, we will look at exception handling in T-SQL using the TRY…CATCH statements introduced in SQL Server 2005. Because this feature is now almost 6 years old, I was a bit surprised to see it in the list of underappreciated features. Hence, while I will touch upon the essential points, I will provide pointers that would help you get started with TRY…CATCH, and then explore a related new feature introduced in SQL 11 (“Denali”). I am posting this on a Thursday because I will be covering the exception handling options from SQL 7/2000 onwards, and also so that the reader gets some time to experiment with the various options presented here over the week-end.

Exception handling in T-SQL – before SQL Server 2005

Before SQL Server 2005, exception handling in T-SQL was very primitive and came with a handful of limitations. Let’s see how exception handling was done in the days of SQL Server 7/2000:

BEGIN TRANSACTION ExceptionHandling
   DECLARE @ErrorNum INT
   DECLARE @ErrorMsg VARCHAR(8000)

   --Divide by 0 to generate the error
   SELECT 1/0

   -- Error handling in SQL 7/2000 
   -- Drawback #1: Checking for @@ERROR must be done immediately after execution of a statement
   SET @ErrorNum = @@ERROR
   IF @ErrorNum <> 0 
   BEGIN
      -- Error handling in SQL 7/2000 
      -- Drawback #2: When returning an error message to the calling program, the Message number,
      --              and location/line# are no longer same as the original error.
      --              Explicit care must be taken to ensure that the error number & message
      --              is returned to the user.
      -- Drawback #3: Error message is not controlled by the application, but by the T-SQL code!
      -- (Please note that there is only one 'E' in RAISERROR)
      SELECT @ErrorMsg = description FROM sysmessages WHERE error = @ErrorNum
      RAISERROR ('An error occurred within a user transaction. Error number is: %d, and message is: %s',16, 1, @ErrorNum, @ErrorMsg) WITH LOG
      ROLLBACK TRANSACTION ExceptionHandling
   END

IF (@ErrorNum = 0)
   COMMIT TRANSACTION ExceptionHandling


While this was a revolutionary method in those days, to our “modern” eyes a few issues jump right out at us:

  1. You cannot monitor a batch for Exceptions/error, i.e. you cannot be sure that a particular block of code will be able to “catch” all errors that might happen in another block of code somewhere else
  2. The checking for the return value of the @@ERROR system function is where most mistakes happen. This has to be checked immediately after the execution of the T-SQL statement that you want to monitor
  3. You can inform the calling program/application about the error, however the error number, error description and originating line number change – the error number is now that of RAISERROR() and the error message is custom. The originating line number is that of the line from where the call to RAISERROR is made. When used with the “WITH LOG” option, this is a big drawback when doing post-error analysis
  4. Because the error message is custom, the calling application has no control over the nature of the message (e.g. sometimes, we might want to customize the message text that is displayed to the users) – this cannot be done

Clearly, with the advent of nested operations and multi-tier architecture and patterns, this had a lot of drawbacks, and the world was in need of something new.

Exception handling in T-SQL – SQL Server 2005/2008

SQL Server 2005 marked a major change in the history and architecture of Microsoft SQL Server. With this release, SQL Server gained integration with the Microsoft .NET Framework, and its tooling moved to the Visual Studio shell. One of the things this move gave T-SQL was structured exception handling, modelled on the TRY…CATCH constructs familiar from the .NET Framework languages. To know more about this functionality, please refer to an article from our esteemed MVP, Mr. Pinal Dave (blog).

Let’s see how our code would look and work when we implement TRY…CATCH blocks:

BEGIN TRANSACTION ExceptionHandling
   -- Error handling in SQL 2005 & up
   -- Monitor an entire batch of queries for errors
   BEGIN TRY
      --Some valid operations here
      SELECT 1/1

      --Divide by 0 to generate the error
      SELECT 1/0

      --Some valid operations here
      SELECT 1/2

      COMMIT TRANSACTION ExceptionHandling
   END TRY
   BEGIN CATCH
      -- Error handling in SQL 2005 & up
      -- Error information can be returned to the calling code as a result set
      SELECT ERROR_NUMBER() AS ErrorNumber
            ,ERROR_SEVERITY() AS ErrorSeverity
            ,ERROR_STATE() AS ErrorState
            ,ERROR_PROCEDURE() AS ErrorProcedure
            ,ERROR_LINE() AS ErrorLine
            ,ERROR_MESSAGE() AS ErrorMessage;

      -- Or via RAISERROR
      DECLARE @ErrorNumber    INT            = ERROR_NUMBER()
      DECLARE @ErrorMessage   NVARCHAR(4000) = ERROR_MESSAGE()
      DECLARE @ErrorProcedure NVARCHAR(4000) = ERROR_PROCEDURE()
      DECLARE @ErrorLine      INT            = ERROR_LINE()

      RAISERROR ('An error occurred within a user transaction. 
                  Error Number        : %d
                  Error Message       : %s  
                  Affected Procedure  : %s
                  Affected Line Number: %d'
                  , 16, 1
                  , @ErrorNumber, @ErrorMessage, @ErrorProcedure,@ErrorLine)
       
      IF @@TRANCOUNT > 0
         ROLLBACK TRANSACTION ExceptionHandling
   END CATCH

As you can see, most of the problems are now resolved. Microsoft SQL Server provides specific functions that return the correct information about an error occurring anywhere within a block of monitored code (i.e. the code in the “TRY” block). One problem remains, however – if your code re-raises via RAISERROR, you still end up with an incorrect error number (unless it is embedded within the message text).


The basic structure and use of TRY…CATCH is quite simple, as demonstrated above. There is much more to TRY…CATCH, which I will leave to the reader to explore.

Exception handling in T-SQL – SQL 11 (Code named: “Denali”)

The remaining drawback of RAISERROR mentioned above, and the solution provided by Microsoft in its upcoming SQL Server release (SQL 11 – code-named “Denali”), are handled very well by the renowned MVP, Mr. Jacob Sebastian (blog) here.

SQL 11/”Denali” comes with a new THROW command that lets T-SQL code re-raise the same error that was encountered in a monitored block of code. While I was reading Jacob’s article, a question cropped up in my mind as to whether we could re-raise errors in a nested hierarchy. The concept of “inner exceptions” is still foreign to SQL Server, but THROW does allow for nesting. Please find below the script I developed to experiment with nested exception handling:

IMPORTANT NOTE: The THROW command must be preceded by a statement ending in a semi-colon (;)

IF EXISTS (SELECT * FROM sys.objects WHERE name = 'InnerProc' AND type = 'P')
    DROP PROCEDURE InnerProc
GO

CREATE PROCEDURE InnerProc
AS
BEGIN
    BEGIN TRANSACTION ExceptionHandling
       BEGIN TRY
          PRINT 'In the TRY block of the Inner Procedure...'
          SELECT 1/1

          RAISERROR('An error occurred in the Inner procedure.',17,1)  --Line #10 considering CREATE PROC...as Line #1

          COMMIT TRANSACTION ExceptionHandling
       END TRY
       BEGIN CATCH
          SELECT ERROR_NUMBER() AS ErrorNumber
                ,ERROR_SEVERITY() AS ErrorSeverity
                ,ERROR_STATE() AS ErrorState
                ,ERROR_PROCEDURE() AS ErrorProcedure
                ,ERROR_LINE() AS ErrorLine
                ,ERROR_MESSAGE() AS ErrorMessage;

          IF @@TRANCOUNT > 0
             ROLLBACK TRANSACTION ExceptionHandling

          PRINT 'Throwing error from the CATCH block of the INNER Procedure...';   
          --The preceding statement MUST end with a semi-colon ';'
          THROW
       END CATCH
END
GO

IF EXISTS (SELECT * FROM sys.objects WHERE name = 'OuterProc' AND type = 'P')
    DROP PROCEDURE OuterProc
GO
CREATE PROCEDURE OuterProc
AS
BEGIN
    BEGIN TRY
        PRINT 'In the TRY block of the Outer Procedure...'
        EXEC InnerProc
    END TRY
    BEGIN CATCH
        PRINT 'In the CATCH block of the Outer Procedure...';
        --The preceding statement MUST end with a semi-colon ';'
        THROW
    END CATCH
END
GO

--Executing the outer procedure
EXEC OuterProc


Edit (October 15, 2011): You can explore the differences between THROW and RAISERROR in my post here – Sunset for RAISERROR and sunrise for THROW – SQL 11 (“Denali”) (End Edit)

Phew!…That was a long one. I hope that you now have a good overview of the powerful exception handling that Microsoft SQL Server provides. In the next release of your product, ensure that all your T-SQL code is protected from unhandled exceptions.

If you want to read more on exception handling, you can refer to Erland Sommarskog’s (blog) post here.

If you have any good exception handling tips to share, do let us know by leaving your feedback.

Until we meet next time,

Be courteous. Drive responsibly.

Table valued parameters to a table valued function – Underappreciated features of Microsoft SQL Server


This is part of the series of posts that I am running under the title “Under Appreciated Features of SQL Server“. The series has been inspired by Andy Warren’s editorial on SQLServerCentral.com.

Table-valued parameters are a new parameter type in SQL Server 2008. Because a lot of production systems still run Microsoft SQL Server 2005, I am not too surprised to find this on the underappreciated features list.

Table-valued parameters are declared by using user-defined table types. You can use table-valued parameters to send multiple rows of data to a Transact-SQL statement or a routine, such as a stored procedure or function, without creating a temporary table or many parameters.

Creating user-defined table types

In SQL Server 2008, a user-defined table type is a user-defined type that represents the definition of a table structure. Therefore, the creation of user defined table types is similar to creating a table. Let’s create a table data type for handling UNIQUEIDENTIFIER values:

USE AdventureWorks2008
GO
-- Create a GUIDTableType to hold a table of GUIDs/UNIQUEIDENTIFIERs
CREATE TYPE GUIDTableType AS TABLE 
    ( GUIDValue UNIQUEIDENTIFIER )
GO

Once created, you can query the sys.table_types catalog view to get a list of the available table types:

USE AdventureWorks2008
GO
SELECT * FROM sys.table_types

Or use the SSMS Object Explorer (under Programmability > Types > User-Defined Table Types):


Create routines consuming the user-defined table type

Let us create simple programming routines that consume the user-defined table type created above:

USE AdventureWorks2008
GO
IF EXISTS (SELECT * FROM sys.objects where name = 'proc_GetSalesOrdersbyRowGuid' AND type = 'P')
    DROP PROCEDURE proc_GetSalesOrdersbyRowGuid
GO
CREATE PROCEDURE proc_GetSalesOrdersbyRowGuid
    @SalesOrderRowGuids GUIDTableType READONLY
AS
BEGIN
    SELECT Sales.SalesOrderDetail.*
    FROM @SalesOrderRowGuids SalesOrderRowGuids
    INNER JOIN Sales.SalesOrderDetail ON Sales.SalesOrderDetail.rowguid = SalesOrderRowGuids.GUIDValue
END
GO

USE AdventureWorks2008
GO
IF OBJECT_ID(N'dbo.func_SalesOrderDetailInformation', N'TF') IS NOT NULL
    DROP FUNCTION dbo.func_SalesOrderDetailInformation;
GO
CREATE FUNCTION dbo.func_SalesOrderDetailInformation
    (@SalesOrderRowGuids dbo.GUIDTableType READONLY)
RETURNS @retSalesOrderDetailInformation TABLE 
(
    -- Columns returned by the function
    [SalesOrderID]          [INT]          NOT NULL, 
    [SalesOrderDetailID]    [INT]          NOT NULL,
    [CarrierTrackingNumber] [NVARCHAR](25) NULL,
    [OrderQty]              [SMALLINT]     NOT NULL,
    [ProductID]             [INT]          NOT NULL,
    [SpecialOfferID]        [INT]          NOT NULL,
    [UnitPrice]             [MONEY]        NOT NULL,
    [UnitPriceDiscount]     [MONEY]        NOT NULL,
    [LineTotal]             [NUMERIC](38,6)  NOT NULL,
    [ModifiedDate]          [DATETIME]     NOT NULL
)
AS 
-- Returns the first name, last name, job title, and contact type for the specified contact.
BEGIN
    INSERT @retSalesOrderDetailInformation
    SELECT Sales.SalesOrderDetail.SalesOrderID,
           Sales.SalesOrderDetail.SalesOrderDetailID,
           Sales.SalesOrderDetail.CarrierTrackingNumber,
           Sales.SalesOrderDetail.OrderQty,
           Sales.SalesOrderDetail.ProductID,
           Sales.SalesOrderDetail.SpecialOfferID,
           Sales.SalesOrderDetail.UnitPrice,
           Sales.SalesOrderDetail.UnitPriceDiscount,
           Sales.SalesOrderDetail.LineTotal,
           Sales.SalesOrderDetail.ModifiedDate
    FROM @SalesOrderRowGuids SalesOrderRowGuids
    INNER JOIN Sales.SalesOrderDetail ON Sales.SalesOrderDetail.rowguid = SalesOrderRowGuids.GUIDValue
    
    RETURN
END
GO

Consuming the user-defined table type

Remember how we said that user-defined table types are similar to tables? That’s true! Here’s how to consume the newly created user-defined table type:

USE AdventureWorks2008
GO

DECLARE @SalesOrders GUIDTableType

INSERT INTO @SalesOrders VALUES ('B207C96D-D9E6-402B-8470-2CC176C42283'),
                                ('04C4DE91-5815-45D6-8670-F462719FBCE3'),
                                ('80667840-F962-4EE3-96E0-AECA108E0D4F'),
                                ('E9D54907-E7B7-4969-80D9-76BA69F8A836'),
                                ('AC769034-3C2F-495C-A5A7-3B71CDB25D4E')

EXEC proc_GetSalesOrdersbyRowGuid @SalesOrders

SELECT * FROM func_SalesOrderDetailInformation(@SalesOrders)

Both the stored procedure and the function return the rows from Sales.SalesOrderDetail whose rowguid values match the GUIDs passed in.

Please note that table data types cannot be the target of SELECT INTO or INSERT EXEC statements. They can, however, appear in the FROM clause of either. Also, just as is the case with other parameters, a table data type is scoped to the stored procedure, function or dynamic Transact-SQL text, as applicable.

For application programmers:

Per Books On Line, “Transact-SQL passes table-valued parameters to routines by reference to avoid making a copy of the input data.” So, there you have it – pointers and by-reference passing are still alive and kicking!

Benefits of table-valued parameters

Table-valued parameters offer more flexibility and in some cases better performance than temporary tables or other ways to pass a list of parameters. There are tons of benefits, and there can be nothing better than the list provided by MSDN:

  • Do not acquire locks for the initial population of data from a client
  • Provide a simple programming model
  • Enable you to include complex business logic in a single routine
  • Reduce round trips to the server
  • Can have a table structure of different cardinality
  • Are strongly typed
  • Enable the client to specify sort order and unique keys

Do you use user defined table data types? If yes, how do you find this feature? Do let us know!

Until we meet next time,

Be courteous. Drive responsibly.

Clearing SSMS History


For those of you who visit BeyondRelational regularly, you would have noticed that the web-site now has a new feature called “Ask”. The feature is a “personal forum” and allows you to approach one of the SQL Server experts associated with BeyondRelational directly with your queries. It facilitates a one-on-one interaction, which is what I prefer. The feature is still in the beta stage but with your support, it will definitely be a successful one.

Recently on the “Ask” feature, I received a question from one of the readers asking me how to clear out the SSMS history. For those of you who work with multiple SQL Servers and multiple logins, the “Connect to Server” window seems to fill up very quickly. Sometimes, these servers might not even exist any more (e.g. if you frequently refresh your virtual R&D/test environments).


You might notice that there is no way to clear out this history. So, how do you go about clearing out the clutter that working with SSMS generates?

The Manual Way

The following is a series of manual steps that you need to follow to clear out the SSMS history:

  1. Close all open instances of SSMS on your workstation
  2. Depending upon your version of the SSMS client tools, navigate to and delete the following file:
    • SQL 11 (“Denali”): %USERPROFILE%\AppData\Roaming\Microsoft\SQL Server Management Studio\11.0\SqlStudio.bin
    • SQL 2008: %USERPROFILE%\AppData\Roaming\Microsoft\Microsoft SQL Server\100\Tools\Shell\SqlStudio.bin
    • SQL 2005: %USERPROFILE%\AppData\Roaming\Microsoft\Microsoft SQL Server\90\Tools\Shell\mru.dat
  3. Launch SSMS
  4. It might take a little while to launch, as it recreates the “SqlStudio.bin”/”mru.dat” file. Once launched, you will see that all SSMS history is gone! All that remains is the list of servers installed on my test server
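If you do this often, the deletion step can be scripted. Below is a minimal sketch in Python – the helper names are my own invention, and the version-to-path mapping simply mirrors the table above:

```python
import os

# Relative paths of the SSMS history files, per client-tools version
# ("denali" here stands for SQL 11)
HISTORY_FILES = {
    "denali": r"Microsoft\SQL Server Management Studio\11.0\SqlStudio.bin",
    "2008":   r"Microsoft\Microsoft SQL Server\100\Tools\Shell\SqlStudio.bin",
    "2005":   r"Microsoft\Microsoft SQL Server\90\Tools\Shell\mru.dat",
}

def history_file_path(version, appdata=None):
    """Full path of the SSMS MRU file for the given client-tools version."""
    if appdata is None:
        appdata = os.path.join(os.environ.get("USERPROFILE", ""), "AppData", "Roaming")
    return os.path.join(appdata, HISTORY_FILES[version])

def clear_ssms_history(version, appdata=None):
    """Delete the MRU file if present; SSMS recreates it on its next launch."""
    path = history_file_path(version, appdata)
    if os.path.exists(path):
        os.remove(path)
        return True
    return False
```

Remember to close all SSMS instances first – an open instance will simply rewrite the file when it exits.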


The Automated Way

Unfortunately, there is no wizard or button that one can click to magically clear out the history. A Microsoft Connect case has already been logged requesting the introduction of such a feature (you can read the case here). Let’s hope that by the time the first of the SQL 11 RCs come out, this feature is part of them.

Until we meet next time,

Be courteous. Drive responsibly.

“Ad hoc update to system catalogs is not supported.” when using RECONFIGURE


I just had a busy travelling week-end, and hence today we will start the week off with a little incident that I encountered a while back. As with any development team, we have a bunch of servers lying around that are shared across the team purely for R&D purposes. I had to run a particular test on one of these environments, and that’s when this happened.

The situation was that my test required me to enable the xp_cmdshell feature of Microsoft SQL Server 2008. Hence, I proceeded with the following set of scripts, which are pretty straightforward and “should” work on all environments, provided I have the appropriate permissions. I was connected to the server with the “sa” login and used the RECONFIGURE command so that I would not need a SQL Server restart for the changes to take effect.

sp_configure 'show advanced options',1;
reconfigure
go

sp_configure 'xp_cmdshell',1;
reconfigure
go

Although the scripts are quite simple and I was connected using the highest possible privileges, SQL Server just would not allow me to apply the changes. The error it threw was as under:

Msg 5808, Level 16, State 1, Line 3

Ad hoc update to system catalogs is not supported.

As the test was somewhat urgent, I proceeded with the following work-around.

sp_configure 'show advanced options',1;
reconfigure with override
go

sp_configure 'xp_cmdshell',1;
reconfigure with override
go

Later in the day, I did some research and found that a team member had been toying around with the SQL Server as hands-on practice for one of the examinations he was preparing for. However, he had forgotten to roll back his changes before he put the server back into the “available” repository – which is what was causing the problem.

SQL Server has a setting called “allow updates”. This setting was used in SQL Server 2000 to allow direct ad-hoc updates to system catalogs and tables. It has been deprecated since SQL Server 2005, and all SQL Server installations come with it set to 0 by default. Per Books On Line (read the BOL help on this option here), the setting now has no effect, except that the RECONFIGURE statement will fail if it is set to 1 – just as it was doing in our case. Running the following statement to reset it back to 0 did the trick, and I was then able to use RECONFIGURE again to reset the environment back to the “factory default”.

-- Reset the "allow updates" setting to the recommended 0
sp_configure 'allow updates',0;
reconfigure with override
go

-- Reset the environment back as the test is complete
sp_configure 'xp_cmdshell',0;
reconfigure
go

sp_configure 'show advanced options',0;
reconfigure
go

Lessons Learnt

This incident has refreshed in my memory the two basic rules that I have been taught by my father (way back when I was introduced to computers with Windows 3.1!)

  1. Irrespective of whether you are doing a production change or a local RND change, always keep a log of the changes made
  2. Once done, if the change was purely for research, always roll the environment back to the condition you found it in

Here’s something I don’t understand

By default, a user cannot see the advanced configuration options. Yet, when you run sp_configure on a clean install of SQL Server 2008, the “allow updates” option is right at the top! Why? If the product does not expect users to use this setting, it should be hidden from the user’s view. If a user sees something, he/she is bound to use it.

If you work with a product and you get a say during product evolution, please ensure that the things a user should not be using are either hidden from the user’s view, or are at least hard to access.

Until we meet next time,

Be courteous. Drive responsibly.

UNIX timestamp and the Year 2038 problem


My first post was on the Ghost of the Y2K, wherein I revisit the infamous Y2K problem (read and listen to a nice summary on NPR’s Science Friday) and also draw attention to the fact that the problem may not be gone yet for those who continue to use 2-digit year representations. Earlier this week, I wrote about a problem that one of my colleagues asked my help with. They were trying to convert a UNIX timestamp (BIGINT) to a SQL Server DATETIME representation. However, because the DATEADD function can only accept the INT data-type for the interval value, they were at a dead-end. After about an hour of studying what the UNIX timestamp is, I came up with two solutions for them – one of which I presented in the last post. You can read about it here.

When I was researching the possible solutions, a strange observation dawned on me and further research revealed that I was not the only one to notice this. This is popularly known as the Year 2038 problem – and is a very real threat to all 32-bit applications which hope to make it till 2038. Here, I attempt to explain the year 2038 problem and also attempt to come up with another alternate solution to using UNIX timestamp values in the DATEADD function.

The INT data-type

The INT data type is the primary integer data type in SQL Server. Consuming 4 bytes in storage, the INT is a 32-bit data-type with a range of -2^31 (-2,147,483,648) to 2^31-1 (2,147,483,647).

In binary format, the ceiling value for INT (2,147,483,647) equates to 01111111111111111111111111111111 – a zero sign bit followed by 31 ones (the string is exactly 32 bits long). Treating this number as a count of seconds since the UNIX epoch gives us the DATETIME value 2038-01-19 03:14:07.000:

DECLARE @DateInt     INT

SET @DateInt    = 2147483647 /* Max. range of the INT data-type */
SELECT DATEADD(ss, @DateInt, '1970-01-01')
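The same arithmetic can be cross-checked outside SQL Server with a quick sketch in Python (standard library only):

```python
from datetime import datetime, timedelta

INT32_MAX = 2**31 - 1  # 2,147,483,647 - the ceiling of a signed 32-bit integer

# Add the maximum INT value, as seconds, to the UNIX epoch
epoch = datetime(1970, 1, 1)
print(epoch + timedelta(seconds=INT32_MAX))  # 2038-01-19 03:14:07
```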

The Year 2038 problem

Now that we know what the UNIX timestamp is, and knowing the eternal truth that time and tide wait for none, let us see how January 19, 2038, 3:14:08 (UTC) is represented in UNIX timestamp format. Because the UNIX timestamp is simply a uniformly incrementing count of seconds, the number is obviously 2,147,483,647 + 1 = 2,147,483,648. Alternatively, the computation logic used in my previous post arrives at the same number:

SELECT CAST((DATEDIFF(dd,'1970-01-01 00:00:00.000','2038-01-19 03:14:08.000') * 86400) AS BIGINT) 
       +                                            -- Get the number of seconds based on the number of days elapsed
       DATEDIFF(ss,'2038-01-19 00:00:00.000','2038-01-19 03:14:08.000')
                                                    -- Add the number of seconds elapsed since midnight

There is a very important point I want to highlight with the conversion script above. If you observe, I had to cast the result to a BIGINT. This is because, in binary format, 2,147,483,648 = 10000000000000000000000000000000 – a one followed by 31 zeroes, which no longer fits in a signed 32-bit integer.

This is a clear integer overflow for the INT data-type (the 32nd bit in a 4-byte representation is the sign bit). For the INT data-type, this bit pattern corresponds to the floor value of the range (-2,147,483,648). What this means is that at exactly January 19, 2038, 3:14:08 UTC, clocks using the 32-bit INT data-type would start reading: 1901-12-13 20:45:52.000

DECLARE @DateInt     INT

SET @DateInt    = -2147483648 /* Min. range of the INT data-type */
SELECT DATEADD(ss, @DateInt, '1970-01-01')
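The wrap-around itself can also be demonstrated outside SQL Server. The sketch below reinterprets 2,147,483,648 as a signed 32-bit value, the way a 32-bit clock would:

```python
import struct
from datetime import datetime, timedelta

overflow = 2**31  # 2,147,483,648 - one past the signed 32-bit ceiling

# Reinterpret the unsigned bit pattern as a signed 32-bit integer
wrapped, = struct.unpack("<i", struct.pack("<I", overflow))
print(wrapped)  # -2147483648

# A 32-bit clock therefore jumps back to 1901
print(datetime(1970, 1, 1) + timedelta(seconds=wrapped))  # 1901-12-13 20:45:52
```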

The results could be disastrous if such a rollover happens unchecked. If your application is currently processing data 20 years ahead of time, you will start running into problems very soon.

This is the Year 2038 problem, also known as Y2K38 or Y2.038K or the UNIX Millennium Bug. In essence, what my colleague was asking was – “How can I overcome the year 2038 bug in my 32-bit application?”

The solution:

The only solution to the year 2038 problem is also hidden in the script shown above, wherein I CAST the value to a BIGINT. We can use unsigned integers or BIGINT values as stop-gap measures, but the only real solution is to move towards 64-bit systems, where the wrap-around comes approximately 292 billion years from now. Computing systems will be very, very different by then.
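That “292 billion years” figure is easy to sanity-check with a back-of-the-envelope calculation (the 365.25-day year is my simplifying assumption):

```python
SECONDS_PER_YEAR = 365.25 * 86400  # assuming a Julian year of 365.25 days

# Horizon of a signed 64-bit seconds counter, measured from the UNIX epoch
years_until_wraparound = (2**63 - 1) / SECONDS_PER_YEAR
print(round(years_until_wraparound / 1e9))  # roughly 292 (billion years)
```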

Using BIGINT values for DATEADD – Method 02

My previous post dealing with using BIGINT values in DATEADD mentioned that there are actually two alternate implementations for representing BIGINT UNIX timestamp values as a SQL Server DATETIME. With the background of the year 2038 problem, I present method #02 below. The script should be self-explanatory:

DECLARE @DateInt     INT
DECLARE @DateBigInt  BIGINT

/* Method 02 - The max. value for INT (2147483647) corresponds to '2038-01-19 03:14:07.000' */
/*             If the passed value is greater than '2038-01-19 03:14:07.000', we add the    */
/*             difference (passedValue-2147483647) to '2038-01-19 03:14:07.000',            */
/*             else we use the standard difference with 1970.                               */

SET @DateInt    = 1293840000 /* (UNIX time = 1293840000 => midnight, January 01, 2011) */
SET @DateBigInt = 2147483648

SELECT DATEADD(ss,@DateInt,'1970-01-01') AS 'IntegerDateRange',
       CASE WHEN (@DateBigInt-2147483647) > 0 THEN DATEADD(ss,(@DateBigInt-2147483647),'2038-01-19 03:14:07.000')
                                              ELSE DATEADD(ss,@DateBigInt,             '1970-01-01 00:00:00.000')
                                              END AS 'BigIntDateRange'

Now that we have seen both methods of handling the year 2038 problem created by 32-bit data-types, it is up to the reader to pick and use any method they find best suited for their application code.

Some Trivia

Today, I leave you with some very interesting trivia (courtesy: Wikipedia):

  1. UNIX Time passed 1000000000 on September 9, 2001, at exactly 01:46:40 (UTC). This is popularly called the UNIX billennium
  2. UNIX Time passed 1234567890 on February 13, 2009, at exactly 23:31:30 (UTC)  – Interestingly, it was a Friday
  3. UNIX Time passed 2^30 (1073741824) on January 10, 2004, at exactly 13:37:04 (UTC). This is half-way between UNIX epoch and year 2038 problem
  4. UNIX Time passed 1111111111 on March 18, 2005, at exactly 01:58:31 (UTC)
  5. UNIX Time will pass 2000000000 on May 18, 2033, at exactly 03:33:20 (UTC). This will be the 2nd UNIX billennium
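Items 1 and 2 above are easy to verify with a couple of lines of Python:

```python
from datetime import datetime, timedelta

epoch = datetime(1970, 1, 1)  # the UNIX epoch, in UTC

billennium = epoch + timedelta(seconds=1_000_000_000)
print(billennium)  # 2001-09-09 01:46:40

t1234567890 = epoch + timedelta(seconds=1_234_567_890)
print(t1234567890, "-", t1234567890.strftime("%A"))  # 2009-02-13 23:31:30 - Friday
```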

In Conclusion…

In my previous post, I had mentioned that I did not believe the Microsoft Connect case filed for making DATEADD compatible with BIGINT is really a SQL Server defect. This is because, as demonstrated above, the issue is actually an architectural limitation of the 32-bit architecture. Whatever code change we do to overcome this limitation will always be a workaround, and not the true solution.

Many embedded systems (like automobile control systems and CNC controllers) use the UNIX timestamp to maintain their internal clock. These systems will also be affected by the Year 2038 problem. Hence, my question to the community is: do you work with UNIX timestamps? If yes, I would like to hear how you handle (or plan to handle) both of the issues that we discussed this week. Do leave a comment as you go.

Be courteous. Drive responsibly.