Category Archives: Imported from BeyondRelational

These posts are imported from my old blog page: http://beyondrelational.com/modules/2/blogs/77/nakuls-blog.aspx

Table-valued parameters to a table-valued function – Underappreciated features of Microsoft SQL Server


This is part of the series of posts that I am running under the title “Underappreciated Features of SQL Server”. The series has been inspired by Andy Warren’s editorial on SQLServerCentral.com.

Table-valued parameters are a new parameter type in SQL Server 2008. Because a lot of production systems still run Microsoft SQL Server 2005, I am not too surprised to find this on the underappreciated features list.

Table-valued parameters are declared by using user-defined table types. You can use table-valued parameters to send multiple rows of data to a Transact-SQL statement or a routine, such as a stored procedure or function, without creating a temporary table or many parameters.

Creating user-defined table types

In SQL Server 2008, a user-defined table type is a user-defined type that represents the definition of a table structure. Therefore, creating a user-defined table type is similar to creating a table. Let’s create a table type for handling UNIQUEIDENTIFIER values:

USE AdventureWorks2008
GO
-- Create a GUIDTableType to hold a table of GUIDs/UNIQUEIDENTIFIERs
CREATE TYPE GUIDTableType AS TABLE 
    ( GUIDValue UNIQUEIDENTIFIER )
GO

Once created, you can use the following catalog view to get a list of available table types:

USE AdventureWorks2008
GO
SELECT * FROM sys.table_types

Or use the SSMS Object Explorer:

image

Create routines consuming the user-defined table type

Let us create simple programming routines that consume the user-defined table type created above:

USE AdventureWorks2008
GO
IF EXISTS (SELECT * FROM sys.objects where name = 'proc_GetSalesOrdersbyRowGuid' AND type = 'P')
    DROP PROCEDURE proc_GetSalesOrdersbyRowGuid
GO
CREATE PROCEDURE proc_GetSalesOrdersbyRowGuid
    @SalesOrderRowGuids GUIDTableType READONLY
AS
BEGIN
    SELECT Sales.SalesOrderDetail.*
    FROM @SalesOrderRowGuids SalesOrderRowGuids
    INNER JOIN Sales.SalesOrderDetail ON Sales.SalesOrderDetail.rowguid = SalesOrderRowGuids.GUIDValue
END
GO

USE AdventureWorks2008
GO
IF OBJECT_ID(N'dbo.func_SalesOrderDetailInformation', N'TF') IS NOT NULL
    DROP FUNCTION dbo.func_SalesOrderDetailInformation;
GO
CREATE FUNCTION dbo.func_SalesOrderDetailInformation
    (@SalesOrderRowGuids dbo.GUIDTableType READONLY)
RETURNS @retSalesOrderDetailInformation TABLE 
(
    -- Columns returned by the function
    [SalesOrderID]          [INT]          NOT NULL, 
    [SalesOrderDetailID]    [INT]          NOT NULL,
    [CarrierTrackingNumber] [NVARCHAR](25) NULL,
    [OrderQty]              [SMALLINT]     NOT NULL,
    [ProductID]             [INT]          NOT NULL,
    [SpecialOfferID]        [INT]          NOT NULL,
    [UnitPrice]             [MONEY]        NOT NULL,
    [UnitPriceDiscount]     [MONEY]        NOT NULL,
    [LineTotal]             [NUMERIC]      NOT NULL,
    [ModifiedDate]          [DATETIME]     NOT NULL
)
AS 
-- Returns the first name, last name, job title, and contact type for the specified contact.
BEGIN
    INSERT @retSalesOrderDetailInformation
    SELECT Sales.SalesOrderDetail.SalesOrderID,
           Sales.SalesOrderDetail.SalesOrderDetailID,
           Sales.SalesOrderDetail.CarrierTrackingNumber,
           Sales.SalesOrderDetail.OrderQty,
           Sales.SalesOrderDetail.ProductID,
           Sales.SalesOrderDetail.SpecialOfferID,
           Sales.SalesOrderDetail.UnitPrice,
           Sales.SalesOrderDetail.UnitPriceDiscount,
           Sales.SalesOrderDetail.LineTotal,
           Sales.SalesOrderDetail.ModifiedDate
    FROM @SalesOrderRowGuids SalesOrderRowGuids
    INNER JOIN Sales.SalesOrderDetail ON Sales.SalesOrderDetail.rowguid = SalesOrderRowGuids.GUIDValue
    
    RETURN
END
GO

Consuming the user-defined table type

Remember how we said that user-defined table types are similar to tables? That’s true! Here’s how to consume the newly created user-defined table type:

USE AdventureWorks2008
GO

DECLARE @SalesOrders GUIDTableType

INSERT INTO @SalesOrders VALUES ('B207C96D-D9E6-402B-8470-2CC176C42283'),
                                ('04C4DE91-5815-45D6-8670-F462719FBCE3'),
                                ('80667840-F962-4EE3-96E0-AECA108E0D4F'),
                                ('E9D54907-E7B7-4969-80D9-76BA69F8A836'),
                                ('AC769034-3C2F-495C-A5A7-3B71CDB25D4E')

EXEC proc_GetSalesOrdersbyRowGuid @SalesOrders

SELECT * FROM func_SalesOrderDetailInformation(@SalesOrders)

Here’s what you get for output from the stored procedure:

image

Here’s what you get for output from the function:

image

Please note that table-valued parameters cannot be the targets of SELECT INTO or INSERT EXEC statements. They can, however, appear in the FROM clause of either. Also, just as with other parameters, a table-valued parameter is scoped to the stored procedure, function, or dynamic Transact-SQL text, as applicable.

For application programmers:

Per Books On Line, “Transact-SQL passes table-valued parameters to routines by reference to avoid making a copy of the input data.” So, there you have it – pointers and by-reference passing are still alive and kicking!

Benefits of table-valued parameters

Table-valued parameters offer more flexibility, and in some cases better performance, than temporary tables or other ways to pass a list of parameters. There are many benefits, and there can be nothing better than the list provided by MSDN – table-valued parameters:

  • Do not acquire locks for the initial population of data from a client
  • Provide a simple programming model
  • Enable you to include complex business logic in a single routine
  • Reduce round trips to the server
  • Can have a table structure of different cardinality
  • Are strongly typed
  • Enable the client to specify sort order and unique keys

Do you use user defined table data types? If yes, how do you find this feature? Do let us know!

Until we meet next time,

Be courteous. Drive responsibly.

Clearing SSMS History


For those of you who visit BeyondRelational regularly, you would have noticed that the web-site now has a new feature called “Ask”. The feature is a “personal forum” and allows you to approach one of the SQL Server experts associated with BeyondRelational directly with your queries. It facilitates a one-on-one interaction, which is what I prefer. The feature is still in the beta stage but with your support, it will definitely be a successful one.

Recently on the “Ask” feature, I received a question from one of the readers asking me how to clear out the SSMS history. For those of you who work with multiple SQL servers with multiple logins, the “Connect to Server” window seems to fill up very quickly and starts to look something like this. Sometimes, these servers might not even exist any more (e.g., if you frequently refresh your virtual RND/test environments).

image

You might notice that there is no way to clear out this history. So, how do you go about clearing out the clutter that working with SSMS generates?

The Manual Way

The following is a series of manual steps that you need to follow to clear out the SSMS history:

  1. Close all open instances of SSMS on your workstation
  2. Depending upon your version of the SSMS client tools, navigate to the path below and delete the file listed:
    • SQL 11 (“Denali”): %USERPROFILE%\AppData\Roaming\Microsoft\SQL Server Management Studio\11.0\SqlStudio.bin
    • SQL 2008: %USERPROFILE%\AppData\Roaming\Microsoft\Microsoft SQL Server\100\Tools\Shell\SqlStudio.bin
    • SQL 2005: %USERPROFILE%\AppData\Roaming\Microsoft\Microsoft SQL Server\90\Tools\Shell\mru.dat
  3. Launch SSMS
  4. It might take a little while to launch, as SSMS recreates the “SqlStudio.bin”/“mru.dat” file. Once launched, you will see that all SSMS history is gone! All that remains is the list of servers installed on my test server

image
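If you refresh your test environments often, the manual steps above lend themselves to a small script. Here is an illustrative Python sketch that builds the per-version file paths from the table above (the helper names are mine, not an official tool; double-check the paths for your installation before deleting anything, and close SSMS first):

```python
import os

# File that stores the "Connect to Server" history, per SSMS version.
# Paths are the defaults listed in the table above; adjust if your
# installation differs.
HISTORY_FILES = {
    "SQL 11 (Denali)": r"AppData\Roaming\Microsoft\SQL Server Management Studio\11.0\SqlStudio.bin",
    "SQL 2008": r"AppData\Roaming\Microsoft\Microsoft SQL Server\100\Tools\Shell\SqlStudio.bin",
    "SQL 2005": r"AppData\Roaming\Microsoft\Microsoft SQL Server\90\Tools\Shell\mru.dat",
}

def history_file_path(version, user_profile=None):
    """Build the full path of the SSMS history file for the given version."""
    user_profile = user_profile or os.environ.get("USERPROFILE", "")
    return os.path.join(user_profile, HISTORY_FILES[version])

def clear_history(version):
    """Delete the history file if it exists. Close SSMS before calling!"""
    path = history_file_path(version)
    if os.path.exists(path):
        os.remove(path)
        return True
    return False
```

Running `clear_history("SQL 2008")` would then perform step 2 for you; SSMS rebuilds the file on its next launch, exactly as in the manual procedure.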

The Automated Way

Unfortunately, there is no wizard or button that one can click to magically clear out the history. A Microsoft Connect case has already been logged requesting this feature (you can read the case here). Let’s hope that by the time the first of the SQL 11 RCs comes out, this feature is part of them.

Until we meet next time,

Be courteous. Drive responsibly.

“Ad hoc update to system catalogs is not supported.” when using RECONFIGURE


I just had a busy traveling weekend, and hence today we will start the week off with a little incident that I encountered a while back. As with any development team, we have a bunch of servers lying around that are shared across the team purely for RND purposes. I had to run a particular test on one of these environments, and that’s when this happened.

The situation was that my test required me to enable the xp_cmdshell feature of Microsoft SQL Server 2008. Hence, I proceeded with the following set of scripts, which are pretty straightforward and “should” work on all environments, provided I have the appropriate permissions. I was connected to the server with the “sa” login and used the RECONFIGURE command so that I would not need a SQL Server restart for the changes to take effect.

sp_configure 'show advanced options',1;
reconfigure
go

sp_configure 'xp_cmdshell',1;
reconfigure
go

Although the scripts are quite simple and I was connected using the highest possible privileges, SQL Server just would not allow me to apply the changes. The error that it threw was as under:

Msg 5808, Level 16, State 1, Line 3

Ad hoc update to system catalogs is not supported.

As the test was somewhat urgent, I proceeded with the following work-around.

sp_configure 'show advanced options',1;
reconfigure with override
go

sp_configure 'xp_cmdshell',1;
reconfigure with override
go

Later in the day, I did some research and found that a team member had been toying around with the SQL Server as practical hands-on experience for an examination he was preparing for. However, he had forgotten to roll back his changes before putting the server back into the “available” repository – which is what was causing the problem.

SQL Server has a setting called “allow updates”. This setting was used in SQL Server 2000 to allow direct ad-hoc updates to system catalogs and tables. It has been deprecated since SQL Server 2005, and all SQL Server installations come with it set to 0 by default. Per Books On Line (read the BOL help on this option here), the setting has no effect, except that the RECONFIGURE statement will fail when it is set to 1 – just as it did in our case. Running the following statement to reset it back to 0 did the trick, and I was then able to use RECONFIGURE again to reset the environment back to the “factory default”.

-- Reset the "allow updates" setting to the recommended 0
sp_configure 'allow updates',0;
reconfigure with override
go

-- Reset the environment back as the test is complete
sp_configure 'xp_cmdshell',0;
reconfigure
go

sp_configure 'show advanced options',0;
reconfigure
go

Lessons Learnt

This incident refreshed in my memory the two basic rules that my father taught me (way back when I was introduced to computers on Windows 3.1!):

  1. Irrespective of whether you are doing a production change or a local RND change, always keep a log of the changes made
  2. Once done, if the change was a purely researching change, always roll the environment back to the condition you found it in

Here’s something I don’t understand

By default, a user cannot see the advanced configuration options. Yet even when you run sp_configure on a clean install of SQL Server 2008, the “allow updates” option is right at the top! Why? If the product does not expect users to use this setting, it should be hidden from the user’s view. If a user sees something, he/she is bound to use it.

If you work with a product and you get a say during product evolution, please ensure that things which a user should not be using are either hidden from the user’s view, or are at least hard to access.

Until we meet next time,

Be courteous. Drive responsibly.

UNIX timestamp and the Year 2038 problem


My first post was on the Ghost of Y2K, wherein I revisited the infamous Y2K problem (read and listen to a nice summary on NPR’s Science Friday) and drew attention to the fact that the problem may not be gone yet for those who continue to use 2-digit year representations. Earlier this week, I wrote about a problem that one of my colleagues asked for my help with. They were trying to convert a UNIX timestamp (BIGINT) to a SQL Server DATETIME representation. However, because the DATEADD function can only accept the INT data-type for the interval value, they were at a dead-end. After about an hour of studying what the UNIX timestamp is, I came up with two solutions for them – one of which I presented in the last post. You can read about it here.

When I was researching the possible solutions, a strange observation dawned on me and further research revealed that I was not the only one to notice this. This is popularly known as the Year 2038 problem – and is a very real threat to all 32-bit applications which hope to make it till 2038. Here, I attempt to explain the year 2038 problem and also attempt to come up with another alternate solution to using UNIX timestamp values in the DATEADD function.

The INT data-type

The INT data type is the primary integer data type in SQL Server. Consuming 4 bytes in storage, the INT is a 32-bit data-type with a range of -2^31 (-2,147,483,648) to 2^31-1 (2,147,483,647).

In binary format, the ceiling value for INT (2,147,483,647) equates to 01111111111111111111111111111111 – all ones except the sign bit! (The string is exactly 32 bits long, with 31 “1”s, to be precise.) Converting this number of seconds to a SQL Server DATETIME gives us 2038-01-19 03:14:07.000.

DECLARE @DateInt     INT

SET @DateInt    = 2147483647 /* Max. range of the INT data-type */
SELECT DATEADD(ss, @DateInt, '1970-01-01')
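As a cross-check outside SQL Server, the same arithmetic can be reproduced in a few lines of Python (purely illustrative; it is not part of the T-SQL above):

```python
from datetime import datetime, timedelta

INT_MAX = 2**31 - 1   # 2,147,483,647 - the ceiling of SQL Server's INT
epoch = datetime(1970, 1, 1)

# Adding the ceiling value (as seconds) to the UNIX epoch lands exactly
# on the 2038 limit.
print(epoch + timedelta(seconds=INT_MAX))  # 2038-01-19 03:14:07
```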

The Year 2038 problem

Now that we know what the UNIX timestamp is, and also knowing the eternal truth that time and tide wait for none, let us see how January 19, 2038, 3:14:08 (UTC) is represented in UNIX timestamp format. Because the UNIX timestamp is simply a uniformly incrementing count of seconds, the number is obviously 2,147,483,647 + 1 = 2,147,483,648. Alternatively, based on the computation logic used in my previous post, the number comes out to be 2,147,483,648.

SELECT CAST((DATEDIFF(dd,'1970-01-01 00:00:00.000','2038-01-19 03:14:08.000') * 86400) AS BIGINT) 
       +                                            -- Get the number of seconds based on the number of days elapsed
       DATEDIFF(ss,'2038-01-19 00:00:00.000','2038-01-19 03:14:08.000')
                                                    -- Add the number of seconds elapsed since midnight

There is a very important point I want to highlight by showing the conversion script above. If you observe, I had to cast the number to a BIGINT. This is because, when represented in binary format, 2,147,483,648 = 10000000000000000000000000000000.

This is a clear integer overflow for the INT data-type (the 32nd bit in a 4-byte representation is the sign bit). For the INT data-type, this bit pattern corresponds to the floor value of the range (-2,147,483,648). What this means is that at exactly January 19, 2038, 3:14:08 UTC, clocks using the 32-bit INT data-type would start reading: 1901-12-13 20:45:52.000

DECLARE @DateInt     INT

SET @DateInt    = -2147483648 /* Min. range of the INT data-type */
SELECT DATEADD(ss, @DateInt, '1970-01-01')
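The wrap-around itself can be simulated outside SQL Server as well. Here is an illustrative Python sketch (the `wrap32` helper is mine; it emulates 32-bit signed overflow, which Python's arbitrary-precision integers do not exhibit on their own):

```python
from datetime import datetime, timedelta

INT_MIN, INT_MAX = -2**31, 2**31 - 1
epoch = datetime(1970, 1, 1)

def wrap32(n):
    """Simulate 32-bit signed integer wrap-around."""
    return (n - INT_MIN) % 2**32 + INT_MIN

# One second past INT's ceiling wraps to the floor value...
wrapped = wrap32(INT_MAX + 1)   # -2,147,483,648

# ...so a 32-bit clock jumps from 2038 back to 1901.
print(epoch + timedelta(seconds=wrapped))  # 1901-12-13 20:45:52
```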

The results could be disastrous if such a rollover happens unchecked. If your application is currently processing data 20 years ahead of time, you will start running into problems very soon.

This is the Year 2038 problem, also known as Y2K38 or Y2.038K or the UNIX Millennium Bug. In essence, what my colleague was asking was – “How can I overcome the year 2038 bug in my 32-bit application?”

The solution:

A partial solution to the year 2038 problem is hidden in the script shown above, wherein I CAST the number to a BIGINT value. We can use unsigned integer values or BIGINT values, but the only real solution is to move towards 64-bit systems, where the wrap-around comes approximately 292 billion years from now. Computing systems will be very, very different by then.

Using BIGINT values for DATEADD – Method 02

My previous post dealing with using BIGINT values in DATEADD mentioned that there are actually two alternate implementations on how to represent the BIGINT UNIX timestamp values as SQL Server DATETIME. With the background of the year 2038 problem, I present before you method #02, which is as under. The script should be self-explanatory:

DECLARE @DateInt     INT
DECLARE @DateBigInt  BIGINT

/* Method 02 - The max. value for INT (2147483647) corresponds to '2038-01-19 03:14:07.000' */
/*             If the passed value is greater than '2038-01-19 03:14:07.000', we add the    */
/*             difference (passedValue-2147483647) to '2038-01-19 03:14:07.000',            */
/*             else we use the standard difference with 1970.                               */

SET @DateInt    = 1293840000 /* (UNIX time = 1293840000 => midnight, January 01, 2011) */
SET @DateBigInt = 2147483648

SELECT DATEADD(ss,@DateInt,'1970-01-01') AS 'IntegerDateRange',
       CASE WHEN (@DateBigInt-2147483647) > 0 THEN DATEADD(ss,(@DateBigInt-2147483647),'2038-01-19 03:14:07.000')
                                              ELSE DATEADD(ss,@DateBigInt,             '1970-01-01 00:00:00.000')
                                              END AS 'BigIntDateRange'
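For readers who want to sanity-check the CASE logic outside SQL Server, here is an illustrative Python equivalent of Method 02 (the function name is mine, not part of the original scripts):

```python
from datetime import datetime, timedelta

INT_MAX = 2147483647
EPOCH = datetime(1970, 1, 1)
# EPOCH + INT_MAX seconds, i.e. the last instant a 32-bit INT can represent.
Y2038_CEILING = datetime(2038, 1, 19, 3, 14, 7)

def unix_to_datetime(ts):
    """Mirror the T-SQL CASE: offsets beyond INT's range are re-based
    on the 2038 ceiling instead of the 1970 epoch."""
    if ts - INT_MAX > 0:
        return Y2038_CEILING + timedelta(seconds=ts - INT_MAX)
    return EPOCH + timedelta(seconds=ts)

print(unix_to_datetime(2147483648))  # 2038-01-19 03:14:08
print(unix_to_datetime(1293840000))  # 2011-01-01 00:00:00
```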

Now that we have seen both methods of handling the year 2038 problem created by 32-bit data-types, it is up to the reader to pick and use any method they find best suited for their application code.

Some Trivia

Today, I leave you with some very interesting trivia (courtesy: Wikipedia):

  1. UNIX Time passed 1000000000 on September 9, 2001, at exactly 01:46:40 (UTC). This is popularly called the UNIX billennium
  2. UNIX Time passed 1234567890 on February 13, 2009, at exactly 23:31:30 (UTC)  – Interestingly, it was a Friday
  3. UNIX Time passed 2^30 (1073741824) on January 10, 2004, at exactly 13:37:04 (UTC). This is half-way between UNIX epoch and year 2038 problem
  4. UNIX Time passed 1111111111 on March 18, 2005, at exactly 01:58:31 (UTC)
  5. UNIX Time will pass 2000000000 on May 18, 2033, at exactly 03:33:20 (UTC). This will be the 2nd UNIX billennium

In Conclusion…

In my previous post, I mentioned that I do not believe the Microsoft Connect case filed for making DATEADD compatible with BIGINT is really a SQL Server defect. This is because, as demonstrated above, the issue is actually an architectural limitation of 32-bit data-types. Whatever code change we make to overcome this limitation will always be a workaround, and not the true solution.

Many embedded systems (like automobile control systems and CNC controllers) use the UNIX timestamp to maintain their internal clock. These systems will also be affected by the Year 2038 problem. Hence, my question to the community is do you work with UNIX timestamps? If yes, I would like to hear from you as to how you handle (or plan to handle) both these issues that we discussed this week. Do leave a comment as you go.

Be courteous. Drive responsibly.

Converting UNIX timestamp (BIGINT) to DATETIME value using DATEADD function


Contrary to popular belief, offices can be fun – especially if you are faced with new challenges each day. Hence, this week, we will be taking a small break from discussing the “Underappreciated features of SQL Server”, and instead discuss something that happened at the office a few days ago. One of the project engineers asked me a seemingly very simple question.

“Can I represent a UNIX time value to a valid date using SQL Server? If yes, can you help me on how to achieve this?”

As always, I gladly agreed to assist him in resolving his query. But before I could help him, I needed some help myself. The first question I asked him was “What is UNIX time?”. The poor guy was petrified at the prospect of being helped by someone who did not even know the context of the discussion. Anyway, this is what he explained to me:

UNIX time

UNIX time, or POSIX time, is defined as the number of seconds elapsed since midnight of January 1, 1970 (in Coordinated Universal Time (UTC)), not counting leap seconds. The only reason this does not count leap seconds is that Coordinated Universal Time (UTC) did not come into existence (in its current form) until 1972. UTC introduced the concept of adding and deleting leap seconds to keep the clock in synchronization with the Earth’s rotation.

Anyway, UNIX time is quite simple actually (simple is always good!). The UNIX epoch is the time 00:00:00 UTC on January 01, 1970. The Unix time number is zero at the Unix epoch, and increases by exactly 86400 seconds (24 hours * 60 minutes/hour * 60 seconds/minute) per day since the epoch.

Thus, a UNIX time of 1293840000 corresponds to the midnight of January 01, 2011 (the New Year!).
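That value is easy to verify with a quick, illustrative Python check (not part of the SQL solution itself):

```python
from datetime import datetime

epoch = datetime(1970, 1, 1)
new_year_2011 = datetime(2011, 1, 1)

# Seconds elapsed between the UNIX epoch and midnight, January 01, 2011:
# exactly 14975 days * 86400 seconds/day.
seconds = int((new_year_2011 - epoch).total_seconds())
print(seconds)  # 1293840000
```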

Once I was clear on this, I was able to help him out and here’s how. Immediately after starting, I landed up with a problem.

The Problem

Theoretically, simply adding the number of seconds to January 01, 1970 using the SQL DATEADD function should be sufficient to convert a UNIX timestamp to SQL Server DATETIME. And hence, I directly attempted the same, but the solution failed in basic unit testing. Here’s how:

For midnight, January 01, 2011 (UNIX time = 1293840000), the approach of directly adding the seconds to the UNIX epoch works fine. However, the moment I attempt to convert a timestamp value that is higher than the INTEGER data type limit (e.g. midnight January 01, 2050 = UNIX time 2524608000), I end up in an arithmetic overflow error. The source of the problem is the DATEADD function, which cannot accept a BIGINT interval value.

DECLARE @DateInt     INT
DECLARE @DateBigInt  BIGINT

SET @DateInt    = 1293840000 /* (UNIX time = 1293840000 => midnight, January 01, 2011) */
SET @DateBigInt = 2524608000 /* (UNIX time = 2524608000 => midnight, January 01, 2050) */

/**************** Building the Scenario *****************/
PRINT DATEADD(ss, @DateInt, '1970-01-01')
PRINT DATEADD(ss, @DateBigInt, '1970-01-01')

image

In today’s business world, not being able to handle dates as “close” as January 01, 2050 is not at all acceptable. This “limitation” has already been reported to Microsoft (refer to the Connect link here). While I do not believe this to be a SQL Server defect (why? read on…), we undoubtedly need something for applications being written today. Below are a few of my attempts to overcome this challenge.

The Solutions

I will be presenting two possible solutions before you – one today, and the other one later on in the week. Depending upon your preference, you may use either one in your application.

For today, let’s work our way backwards. We do not necessarily need to add the entire difference (in seconds) to the UNIX epoch time. DATEADD provides us the freedom to add years, months, and even days. Because we know that each day in the UNIX time is exactly 86400 seconds long, we can easily calculate the number of days elapsed based on the seconds information provided by performing a simple division.

In an ideal world, we would not be dealing only with a time value of midnight. Any hour, minute or second of the day can come through as part of the BIGINT number that we receive in UNIX time from interfacing systems. Because (ignoring leap seconds) a day has exactly 86400 seconds (a value easily handled by the INT data-type), we simply need to use the modulo operator to get the number of seconds elapsed since the last midnight.

/* Method 01 - Calculate based on the number of seconds per day (60sec*60min*24hrs)=86400sec */
SET @DateInt    = 1293840000 /* (UNIX time = 1293840000 => midnight, January 01, 2011) */
SET @DateBigInt = 2524644630 /* (UNIX time = 2524644630 => 10:10AM, 30 seconds, January 01, 2050) */
SELECT DATEADD(ss, @DateInt,            '1970-01-01')                                    AS 'IntegerDateRange',
       (@DateBigInt/86400) AS NumberOfDaysSinceEpoch,
       (@DateBigInt%86400) AS NumberOfSecondsFromMidnight,
       DATEADD(ss, (@DateBigInt%86400), DATEADD(DAY, @DateBigInt/86400, '1970-01-01' ) ) AS 'BigIntDateRange'

Here’s what the output looks like:

image
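Method 01's divide-and-modulo logic can also be mirrored in Python for verification (an illustrative sketch; the helper name is mine):

```python
from datetime import datetime, timedelta

EPOCH = datetime(1970, 1, 1)
SECONDS_PER_DAY = 86400  # 24 hrs * 60 min * 60 sec

def unix_to_datetime(ts):
    """Split the BIGINT into whole days plus a sub-day remainder, so each
    component stays comfortably within INT range (as in Method 01)."""
    days, seconds = divmod(ts, SECONDS_PER_DAY)
    return EPOCH + timedelta(days=days, seconds=seconds)

print(unix_to_datetime(2524644630))  # 2050-01-01 10:10:30
```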

Conclusion

We have thus successfully represented a UNIX time as a SQL Server DATETIME value (represented as UTC time). However, please note that because of the fundamental differences between UTC and UNIX time, the conversions can be a bit off depending upon the number of leap seconds introduced till the date under consideration.

As I was researching on how to use BIGINT in the DATEADD function, an interesting observation came to light about using the 32-bit INT data-type – which forces me to start thinking about the time I have left to upgrade my 32-bit home operating system and hardware to 64-bit. This is referred to popularly as the Year 2038 problem, which I will write about later in the week along with the second method to convert a BIGINT to a DATETIME value. The upcoming article will also justify why I do not believe this to be a SQL Server defect.

Be courteous. Drive responsibly.