“Ad hoc update to system catalogs is not supported.” when using RECONFIGURE


I just had a busy travelling week-end, and hence today we will start the week off with a little incident that I encountered a while back. As with any development team, we have a bunch of servers lying around that are shared across the team purely for R&D purposes. I had to run a particular test on one of these environments, and that's when this happened.

The situation was that my test required me to enable the xp_cmdshell feature of Microsoft SQL Server 2008. Hence, I proceeded with the following set of scripts, which are pretty straightforward and "should" work on all environments, provided I have the appropriate permissions. I was connected to the server with the "sa" login and used the RECONFIGURE command so that I would not need a SQL Server restart for the changes to take effect.

sp_configure 'show advanced options',1;
reconfigure
go

sp_configure 'xp_cmdshell',1;
reconfigure
go

Although the scripts are quite simple and I was connected with the highest possible privileges, SQL Server just would not allow me to apply the changes. The error that it threw was as under:

Msg 5808, Level 16, State 1, Line 3

Ad hoc update to system catalogs is not supported.

As the test was somewhat urgent, I proceeded with the following work-around.

sp_configure 'show advanced options',1;
reconfigure with override
go

sp_configure 'xp_cmdshell',1;
reconfigure with override
go

Later in the day, I did some research and found that a team member had been toying around with the SQL Server as practical hands-on experience for one of the examinations he was preparing for. However, he had forgotten to roll back his changes before he put the server back into the "available" repository, which is what was causing the problem.

SQL Server has a setting called "allow updates". In SQL Server 2000, this setting was used to allow direct ad-hoc updates to system catalogs and tables. It has been deprecated since SQL Server 2005, and all SQL Server installations now come with it set to 0 by default. Per Books Online (read the BOL help on this option here), the setting has no effect, except that the RECONFIGURE statement will fail when it is set to 1, just as it was doing in our case. Running the following statement to reset it back to 0 did the trick, and I was able to use RECONFIGURE again to reset the environment back to the "factory default".

-- Reset the "allow updates" setting to the recommended 0
sp_configure 'allow updates',0;
reconfigure with override
go

-- Reset the environment back as the test is complete
sp_configure 'xp_cmdshell',0;
reconfigure
go

sp_configure 'show advanced options',0;
reconfigure
go
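If you want to double-check that everything really is back to its defaults before returning a shared server to the pool, the sys.configurations catalog view (available since SQL Server 2005) can be queried directly. A minimal sketch:

```sql
-- Verify the settings we touched; all three should read 0.
-- value is the configured value, value_in_use is the running value
-- (the two can differ until RECONFIGURE is run).
SELECT name,
       value,
       value_in_use
FROM   sys.configurations
WHERE  name IN ('allow updates', 'xp_cmdshell', 'show advanced options')
ORDER  BY name;
```

If any row shows a non-zero value, you know exactly which setting still needs to be reset.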

Lessons Learnt

This incident has refreshed in my memory the two basic rules that I have been taught by my father (way back when I was introduced to computers with Windows 3.1!)

  1. Irrespective of whether you are doing a production change or a local RND change, always keep a log of the changes made
  2. Once done, if the change was a purely researching change, always roll the environment back to the condition you found it in

Here’s something I don’t understand

By default, a user cannot see the advanced configuration options. Yet even when you run sp_configure on a clean install of SQL Server 2008, the "allow updates" option is right at the top! Why? If the product does not expect users to use this setting, it should be hidden from the user's view. If a user sees something, he/she is bound to use it.
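Incidentally, sys.configurations exposes an is_advanced flag, which lets you see exactly which options are visible without enabling "show advanced options". On my instance, "allow updates" is flagged as a non-advanced option, which is presumably why it shows up front and center:

```sql
-- List the configuration options that are visible even when
-- 'show advanced options' is 0; 'allow updates' is among them.
SELECT name, value_in_use, is_advanced
FROM   sys.configurations
WHERE  is_advanced = 0
ORDER  BY name;
```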

If you work with a product and get a say during its evolution, please ensure that things a user should not be using are either hidden from the user's view, or are at least hard to access.

Until we meet next time,

Be courteous. Drive responsibly.

UNIX timestamp and the Year 2038 problem


My first post was on the Ghost of the Y2K, wherein I attempted to revisit the infamous Y2K problem (read and listen to a nice summary on NPR's Science Friday) and also drew attention to the fact that the problem may not be gone yet for those who continue to use a 2-digit year representation. Earlier this week, I wrote about a problem that one of my colleagues asked for my help with. They were trying to convert a UNIX timestamp (BIGINT) to a SQL Server DATETIME representation. However, because the DATEADD function can only accept the INT data-type for the interval value, they were at a dead end. After about an hour of studying what the UNIX timestamp is, I came up with two solutions for them, one of which I presented in the last post. You can read about it here.

When I was researching the possible solutions, a strange observation dawned on me and further research revealed that I was not the only one to notice this. This is popularly known as the Year 2038 problem – and is a very real threat to all 32-bit applications which hope to make it till 2038. Here, I attempt to explain the year 2038 problem and also attempt to come up with another alternate solution to using UNIX timestamp values in the DATEADD function.

The INT data-type

The INT data type is the primary integer data type in SQL Server. Consuming 4 bytes in storage, the INT is a 32-bit data-type with a range of -2^31 (-2,147,483,648) to 2^31-1 (2,147,483,647).

In binary format, the ceiling value for INT (2,147,483,647) equates to 01111111111111111111111111111111: a string exactly 32 bits long, with 31 consecutive "1"s after the sign bit. Attempting to convert this number to a SQL Server DATETIME gives us 2038-01-19 03:14:07.000.

DECLARE @DateInt     INT

SET @DateInt    = 2147483647 /* Max. range of the INT data-type */
SELECT DATEADD(ss, @DateInt, '1970-01-01')

The Year 2038 problem

Now that we know what the UNIX timestamp is, and also knowing the eternal truth that time and tide wait for none, let us see how January 19, 2038, 3:14:08 (UTC) is represented in UNIX timestamp format. Because the UNIX timestamp is simply a uniformly incrementing counter of seconds, the number is obviously 2,147,483,647 + 1 = 2,147,483,648. Alternatively, based on the computation logic used in my previous post, the number comes out to be the same 2,147,483,648.

SELECT CAST((DATEDIFF(dd,'1970-01-01 00:00:00.000','2038-01-19 03:14:08.000') * 86400) AS BIGINT) 
       +                                            -- Get the number of seconds based on the number of days elapsed
       DATEDIFF(ss,'2038-01-19 00:00:00.000','2038-01-19 03:14:08.000')
                                                    -- Add the number of seconds elapsed since midnight

There is a very important point I want to highlight by showing the conversion script above. If you observe, I had to cast the number to a BIGINT. This is because, when represented in binary format, 2,147,483,648 = 10000000000000000000000000000000.

This is a clear integer overflow for the INT data-type (the 32nd bit in a 4-byte representation is the sign bit). For the INT data-type, this bit pattern corresponds to the floor value of the range (-2,147,483,648). What this means is that at exactly January 19, 2038, 3:14:08 UTC, clocks using the 32-bit INT data-type will wrap around and start reading: 1901-12-13 20:45:52.000

DECLARE @DateInt     INT

SET @DateInt    = -2147483648 /* Min. range of the INT data-type */
SELECT DATEADD(ss, @DateInt, '1970-01-01')

The results could be disastrous if such a rollover happens unchecked. If your application processes dates a couple of decades into the future, you will start running into problems very soon.

This is the Year 2038 problem, also known as Y2K38 or Y2.038K or the UNIX Millennium Bug. In essence, what my colleague was asking was – “How can I overcome the year 2038 bug in my 32-bit application?”

The Solution

The only real solution to the year 2038 problem is also hidden in the script shown above, wherein I use a CAST to a BIGINT value. We can work around the limit with unsigned integers (in languages that support them) or with BIGINT values, but the true fix is to move towards 64-bit time representations, where the wrap-around comes approximately 292 billion years from now. Computing systems will be very, very different by then.

Using BIGINT values for DATEADD – Method 02

My previous post dealing with using BIGINT values in DATEADD mentioned that there are actually two alternate implementations on how to represent the BIGINT UNIX timestamp values as SQL Server DATETIME. With the background of the year 2038 problem, I present before you method #02, which is as under. The script should be self-explanatory:

DECLARE @DateInt     INT
DECLARE @DateBigInt  BIGINT

/* Method 02 - The max. value for INT (2147483647) corresponds to '2038-01-19 03:14:07.000' */
/*             If the passed value is greater than '2038-01-19 03:14:07.000', we add the    */
/*             difference (passedValue-2147483647) to '2038-01-19 03:14:07.000',            */
/*             else we use the standard difference with 1970.                               */

SET @DateInt    = 1293840000 /* (UNIX time = 1293840000 => midnight, January 01, 2011) */
SET @DateBigInt = 2147483648

SELECT DATEADD(ss,@DateInt,'1970-01-01') AS 'IntegerDateRange',
       CASE WHEN (@DateBigInt-2147483647) > 0 THEN DATEADD(ss,(@DateBigInt-2147483647),'2038-01-19 03:14:07.000')
                                              ELSE DATEADD(ss,@DateBigInt,             '1970-01-01 00:00:00.000')
                                              END AS 'BigIntDateRange'

Now that we have seen both methods of handling the year 2038 problem created by 32-bit data-types, it is up to the reader to pick and use any method they find best suited for their application code.
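If you need this conversion in several places, the Method 01 logic can be wrapped into a scalar function. Note that fn_UnixToDateTime is just a name I made up for this sketch, not a built-in:

```sql
-- Hypothetical helper wrapping the day/second split from Method 01.
CREATE FUNCTION dbo.fn_UnixToDateTime (@UnixTime BIGINT)
RETURNS DATETIME
AS
BEGIN
    -- Add whole days first, then the seconds left over since midnight;
    -- both intermediate values fit comfortably within the INT range.
    RETURN DATEADD(ss, @UnixTime % 86400,
                   DATEADD(dd, @UnixTime / 86400, '1970-01-01'));
END
GO

-- Usage: both values convert without an arithmetic overflow
SELECT dbo.fn_UnixToDateTime(1293840000) AS NewYear2011,
       dbo.fn_UnixToDateTime(2524608000) AS NewYear2050;
```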

Some Trivia

Today, I leave you with some very interesting trivia (courtesy: Wikipedia):

  1. UNIX Time passed 1000000000 on September 9, 2001, at exactly 01:46:40 (UTC). This is popularly called the UNIX billennium
  2. UNIX Time passed 1234567890 on February 13, 2009, at exactly 23:31:30 (UTC)  – Interestingly, it was a Friday
  3. UNIX Time passed 2^30 (1073741824) on January 10, 2004, at exactly 13:37:04 (UTC). This is the half-way point between the UNIX epoch and the year 2038 problem
  4. UNIX Time passed 1111111111 on March 18, 2005, at exactly 01:58:31 (UTC)
  5. UNIX Time will pass 2000000000 on May 18, 2033, at exactly 03:33:20 (UTC). This will be the 2nd UNIX billennium

In Conclusion…

In my previous post, I mentioned that I did not believe the Microsoft Connect case filed for making DATEADD compatible with BIGINT is really a SQL Server defect. This is because, as demonstrated above, the issue is actually an architectural limitation of the 32-bit architecture. Whatever code change we make to overcome this limitation will always be a workaround, and not the true solution.

Many embedded systems (like automobile control systems and CNC controllers) use the UNIX timestamp to maintain their internal clock. These systems will also be affected by the Year 2038 problem. Hence, my question to the community is: do you work with UNIX timestamps? If yes, I would like to hear how you handle (or plan to handle) both of the issues that we discussed this week. Do leave a comment as you go.

Be courteous. Drive responsibly.

Converting UNIX timestamp (BIGINT) to DATETIME value using DATEADD function


Contrary to popular belief, offices can be fun – especially if you are faced with new challenges each day. Hence, this week, we will be taking a small break from discussing the “Underappreciated features of SQL Server”, and instead discuss something that happened at the office a few days ago. One of the project engineers asked me a seemingly very simple question.

“Can I represent a UNIX time value to a valid date using SQL Server? If yes, can you help me on how to achieve this?”

As always, I gladly agreed to assist him in resolving his query. But before I could help him, I needed some help myself. The first question that came from me to him was: "What is UNIX time?". The poor guy was petrified at the prospect of being helped by someone who did not even know the context of the discussion. Anyway, this is what he explained to me:

UNIX time

UNIX time, or POSIX time, is defined as the number of seconds elapsed since midnight of January 1, 1970 (in Coordinated Universal Time (UTC)), not counting leap seconds. The reason it does not count leap seconds is that UTC did not come into existence (in its current form) until 1972. UTC introduced the concept of adding and deleting leap seconds to keep the clock in synchronization with the Earth's rotation.

Anyway, UNIX time is quite simple actually (simple is always good!). The UNIX epoch is the time 00:00:00 UTC on January 01, 1970. The Unix time number is zero at the Unix epoch, and increases by exactly 86400 seconds (24 hours * 60 minutes/hour * 60 seconds/minute) per day since the epoch.

Thus, a UNIX time of 1293840000 corresponds to the midnight of January 01, 2011 (the New Year!).
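This figure is easy to sanity-check with plain integer arithmetic: 1293840000 divided by 86400 gives exactly 14975 days (41 years, including 10 leap days) with no seconds left over, so the value really is a midnight:

```sql
SELECT 1293840000 / 86400 AS DaysSinceEpoch,        -- 14975 whole days
       1293840000 % 86400 AS SecondsPastMidnight,   -- 0, i.e. exactly midnight
       DATEADD(dd, 1293840000 / 86400, '1970-01-01') AS ConvertedDate;
```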

Once I was clear on this, I was able to help him out, and here's how. Immediately after starting, I ran into a problem.

The Problem

Theoretically, simply adding the number of seconds to January 01, 1970 using the SQL DATEADD function should be sufficient to convert a UNIX timestamp to SQL Server DATETIME. And hence, I directly attempted the same, but the solution failed in basic unit testing. Here’s how:

For midnight, January 01, 2011 (UNIX time = 1293840000), the approach of directly adding the seconds to the UNIX epoch works fine. However, the moment I attempt to convert a timestamp value that is higher than the INTEGER data-type limit (e.g. midnight, January 01, 2050 = UNIX time 2524608000), I end up with an arithmetic overflow error. The source of the problem is the DATEADD function, which cannot accept a BIGINT interval value.

DECLARE @DateInt     INT
DECLARE @DateBigInt  BIGINT

SET @DateInt    = 1293840000 /* (UNIX time = 1293840000 => midnight, January 01, 2011) */
SET @DateBigInt = 2524608000 /* (UNIX time = 2524608000 => midnight, January 01, 2050) */

/**************** Building the Scenario *****************/
PRINT DATEADD(ss, @DateInt, '1970-01-01')
PRINT DATEADD(ss, @DateBigInt, '1970-01-01')

[Screenshot: the first PRINT succeeds, while the second fails with an arithmetic overflow error]

In today's business world, not being able to handle dates as "close" as January 01, 2050 is simply not acceptable. This "limitation" has already been reported to Microsoft (refer to the Connect link here). While I do not believe this to be a SQL Server defect (why? read on…), we undoubtedly need something for applications being written today. Below are a few of my attempts to overcome this challenge.

The Solutions

I will be presenting two possible solutions before you – one today, and the other one later on in the week. Depending upon your preference, you may use either one in your application.

For today, let's work our way backwards. We do not necessarily need to add the entire difference (in seconds) to the UNIX epoch. DATEADD gives us the freedom to add years, months and even days. Because each day in UNIX time is exactly 86400 seconds long, we can easily calculate the number of days elapsed from the seconds value with a simple integer division.

In the real world, we will not always be dealing with a time value of exactly midnight. Any hour, minute or second of the day can come through as part of the BIGINT number that we receive in UNIX time from interfacing systems. Because (ignoring leap seconds) a day has a maximum of 86400 seconds (which fits comfortably in an INT data-type), we simply need to use the modulo operator to get the number of seconds elapsed since the last midnight.

/* Method 01 - Calculate based on the number of seconds per day (60sec*60min*24hrs)=86400sec */
SET @DateInt    = 1293840000 /* (UNIX time = 1293840000 => midnight, January 01, 2011) */
SET @DateBigInt = 2524644630 /* (UNIX time = 2524644630 => 10:10AM, 30 seconds, January 01, 2050) */
SELECT DATEADD(ss, @DateInt,            '1970-01-01')                                    AS 'IntegerDateRange',
       (@DateBigInt/86400) AS NumberOfDaysSinceEpoch,
       (@DateBigInt%86400) AS NumberOfSecondsFromMidnight,
       DATEADD(ss, (@DateBigInt%86400), DATEADD(DAY, @DateBigInt/86400, '1970-01-01' ) ) AS 'BigIntDateRange'

Here’s what the output looks like:

[Screenshot: query results showing both the IntegerDateRange and BigIntDateRange values converted correctly]

Conclusion

We have thus successfully represented a UNIX time as a SQL Server DATETIME value (represented as UTC time). However, please note that because of the fundamental differences between UTC and UNIX time, the conversions can be a bit off depending upon the number of leap seconds introduced till the date under consideration.

As I was researching on how to use BIGINT in the DATEADD function, an interesting observation came to light about using the 32-bit INT data-type – which forces me to start thinking about the time I have left to upgrade my 32-bit home operating system and hardware to 64-bit. This is referred to popularly as the Year 2038 problem, which I will write about later in the week along with the second method to convert a BIGINT to a DATETIME value. The upcoming article will also justify why I do not believe this to be a SQL Server defect.

Be courteous. Drive responsibly.

Multi-Server Queries – Underappreciated features of Microsoft SQL Server


This is part of the series of posts that I am running under the title "Under Appreciated Features of SQL Server". The series has been inspired by Andy Warren’s editorial on SQLServerCentral.com.

After Activity Monitor, this is another feature that I was surprised to see in the list of underappreciated features. This feature has been a dream come true for anybody who manages more than one server at a time. The need to query a bunch of servers in one go, and the fact that this feature has been around since Microsoft SQL Server 2005, had made me think that it should be in pretty common usage by now.

Anyway, let’s quickly see how to use this wonderful feature that Microsoft SQL Server Management Studio provides.

Running Multi-Server Queries

One of our esteemed SQL Server MVPs is Mr. Pinal Dave (blog). On June 14, 2009, he wrote a very short, concise and precise blog post on how to run Multi-Server Queries. You can find that particular post here.

Keep in mind the most important prerequisite for using this feature: all servers participating in a multi-server query must be registered servers.

What Pinal’s blog post does not cover are some navigational aids and query execution options. I will attempt to cover them here.

Changes to the Query Editor window

The query editor window in a multi-server query is similar to the regular single-instance query editor, with one minor difference: the SSMS query status bar informs us of the number of servers that a particular query will run against. On my test environment, let's say I have two registered instances. When I attempt to run a multi-server query, this is what I see:

[Screenshot: the query status bar showing the number of connected servers]

Query Execution Options:

In Pinal's post, you have already seen how the results appear in the results pane by default: rows from both instances arrive in a single grid, and the SQL Server instance name helps us identify which server contributed each row. However, we can change the way these results are displayed. Navigate to Tools->Options->Query Results to see a window similar to the following. Please note that the values in the screenshot below are the defaults.

[Screenshot: the query results options page showing the default values]

  • Add login name to the results – Setting this to TRUE adds a column containing the login name which produces the result row
  • Add server name to the results – Adds a column containing the server instance name for a particular result row when set to TRUE
  • Merge Results – Shows results from different servers in the same grid when set to TRUE

Allow me to change the SQL Server registration of one of the servers to use SQL Server authentication. I will also set the "Merge Results" property to FALSE, and "Add login name to the results" to TRUE. Below is the output when I run the following query (borrowed from the example in Pinal's blog):

SELECT SERVERPROPERTY('Edition') AS Edition,
       SERVERPROPERTY('ProductLevel') AS ProductLevel,
       SERVERPROPERTY('ProductVersion') AS ProductVersion
GO

[Screenshot: results from each instance shown in separate grids, with the login name column added]

Important points to note

Limitation

As of now, it is not possible to select a subset of servers from within a server group (you can select a single instance, but not a subset). The recommendation would be to have a separate group if such a selection needs to be made often.

This is not a programmability enhancement!

Also, this feature is nothing more than the ability to execute the same query against multiple registered servers and obtain the results within a single window. It is not a programmability enhancement, and hence the following should not be expected:

  1. Cross-server support for JOIN or ORDER BY clauses
  2. Transaction support across multiple servers
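In other words, the query text is simply executed once per server, and SSMS stitches the rows together on the client side. For instance, a query like the following runs independently on each instance; there is no server-side join between the two result sets:

```sql
-- Executed separately on every registered server in the group;
-- SSMS merely merges the independent result sets in the grid.
SELECT @@SERVERNAME               AS ServerName,
       SERVERPROPERTY('Edition')  AS Edition,
       GETDATE()                  AS LocalServerTime;
```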

Security considerations

Quoting MSDN:

Because the connections maintained by a Central Management Server execute in the context of the user, by using Windows Authentication, the effective permissions on the registered servers might vary. For example, the user might be a member of the sysadmin fixed server role on the instance of SQL Server A, but have limited permissions on the instance of SQL Server B.

This post completes the first part of my series on “Underappreciated features of Microsoft SQL Server”. Do share your feedback on how the journey has been till now, and what else would you like me to write on.

We will resume our journey to explore T-SQL Enhancements in the next couple of weeks. In the meanwhile, I will share with you some other interesting incidents that happened last month.

Until we meet next time,

Be courteous. Drive responsibly.

T-SQL Debugger – Underappreciated Features of Microsoft SQL Server


This is a part of my ongoing series on the "Underappreciated Features of Microsoft SQL Server", inspired by Andy Warren's editorial on SQLServerCentral.com. You can read the editorial and follow the discussion here.

More often than not, there is a myth in the developer community that Microsoft SQL Server does not provide the debugging abilities that other IDEs provide. SQL Server 2000 provided basic debugging ability. Starting with Microsoft SQL Server 2005, the product comes with its own Management Studio, which is based upon Microsoft's renowned IDE, Visual Studio. Therefore, the most common debugging abilities come out of the box for SQL Server, enough to give application developers a run for their money! Let's have a look at some of the basic debugging options that are available to us in SSMS.

Please note that throughout this example, we will be using Microsoft SQL Server 2008 and the AdventureWorks2008 sample database as our reference. Please allow for a longer loading time on this post, because the Debugger is best explained visually.

NOTE: For T-SQL Debugging, connecting the Transact-SQL editor to the server with a login having the sysadmin fixed server role is not sufficient; the Windows login of the Visual Studio user must also be a member of the sysadmin fixed server role on that instance of SQL Server.

Creating the environment

We can debug almost everything in SQL Server – ad-hoc SQL statement batches, stored procedures and even triggers! To begin with, we will create the following stored procedure on the SQL Server 2008 instance. The purpose of using a stored procedure that updates the [Sales].[SalesOrderDetail] table is that the table has a trigger on it and hence it would be easy for us to see how to debug triggers in Microsoft SQL Server 2008.

--Step 01: Create the following Stored Procedure
USE AdventureWorks2008;
GO
IF OBJECT_ID ( 'Sales.proc_UpdateSalesOrderDetailUnitPrice', 'P' ) IS NOT NULL 
    DROP PROCEDURE [Sales].[proc_UpdateSalesOrderDetailUnitPrice];
GO

CREATE PROCEDURE [Sales].[proc_UpdateSalesOrderDetailUnitPrice]
    @tSalesOrderId       INT,
    @tSalesOrderDetailId INT,
    @tUnitPrice          MONEY
AS
BEGIN
    --Wrap the UPDATE in a transaction so that we do not make permanent changes to the DB
    BEGIN TRANSACTION MyDebuggingTest
        UPDATE [Sales].[SalesOrderDetail]
        SET UnitPrice = @tUnitPrice
        WHERE SalesOrderID = @tSalesOrderId AND
              SalesOrderDetailID = @tSalesOrderDetailId
    ROLLBACK TRANSACTION MyDebuggingTest
END
GO

Here’s the query that we will be using to invoke the stored procedure in our debugging session.

USE AdventureWorks2008
GO

DECLARE @tSalesOrderId INT = 43659,
        @tSalesOrderDetailId INT = 2,
        @tUnitPrice MONEY = 2010.255

EXEC [Sales].[proc_UpdateSalesOrderDetailUnitPrice] @tSalesOrderId, @tSalesOrderDetailId, @tUnitPrice;

Performing the Debug session

We can initiate the Debugging session by clicking the “green arrow” button on the SSMS toolbar or by going to Debug –> Start Debugging or by using the Alt+F5 key combination.

Note that SSMS reminds you that a debug session is in progress by showing the "Debugging Query…" message in the query status bar. You also get the Locals window (for watching local variables), the Call Stack and Breakpoints windows, and the Output window.

  1. Initiate the debugging session (use F10 to advance to the next statement)
     [Screenshot: the debug session in progress]
  2. Use F11 to "step into" the stored procedure. Notice that both the "Locals" and "Call Stack" windows are updated
     [Screenshot: the "Locals" and "Call Stack" windows inside the stored procedure]
  3. Use F11 on the UPDATE statement to step into the trigger! Also notice that the "Call Stack" shows both the server instance name and the database name where the object being debugged resides
     [Screenshot: the debugger stepping into the trigger]
  4. Once inside the trigger, define a breakpoint by pressing F9, then use "Continue" (Alt+F5) to proceed with the execution until the breakpoint is hit. Is it just me, or is the breakpoint window not what we expected it to be?
     [Screenshot: the breakpoint window while inside the trigger]
  5. You also get the standard debugging options, like "Run to cursor" and "Set next statement"
     [Screenshot: the standard debugging context-menu options]

What is not supported?

The SQL Server Debugger is powerful, but still in its infancy. To the best of my knowledge, the following cannot be done in the SQL Server 2008 debugger:

  1. Edit & Continue – the objects that you step into are temporary scripts created by SQL Server – making changes to these will not have any effect
  2. Conditional Breakpoints – it's a take-it-or-leave-it situation with breakpoints in SQL Server – you cannot have conditional breakpoints
  3. Pre-set breakpoints – Because SQL Server opens up a temporary copy of the object scripts, one cannot pre-set breakpoints before the object is “stepped into”

Interesting information in the breakpoint window

Talking about the limitations of Transact-SQL Debugging, did you notice anything strange in the screenshot where we hit the breakpoint (the second-last screenshot above)?

You had to go back and take another look, didn’t you? No worries – I had to do that too!

The content in the breakpoint window is not what we expected it to be. Instead of showing the object name where the breakpoint is, SSMS is showing us a number. What can this number be? It’s the object_id of the trigger we are currently in! Here’s the confirmation:

SELECT * FROM sys.objects WHERE object_id = 1563152614

Result:

[Screenshot: the sys.objects row for the trigger]

But why does SSMS give us the object_id instead of the object name? That’s because it is SQL Server’s way of keeping track of the breakpoint for future debugging sessions within the solution/project. If you are not running a debug session under a solution/project, closing SSMS will reset all breakpoints.

But don’t take my word for it. Rerun this debug session and see for yourself the importance of using the object_id. Once done, close SSMS. Reopen SSMS and run through this debug session once again – the breakpoint will be gone!

Important Resources

Here are some important MSDN resources to get you started on T-SQL Debugging

Armed with these resources, I hope that you will be able to unleash the power of T-SQL Debugging and use it to make your workplace smarter. If you find any cool tricks to use with SQL Debugging, please share them here!

Until we meet next time,

Be courteous. Drive responsibly.