Tag Archives: #SQLServer

All about Microsoft SQL Server

CROSS APPLY – Underappreciated features of Microsoft SQL Server


This post continues my series on the underappreciated features of Microsoft SQL Server. The series was inspired by Andy Warren’s editorial of the same name on SQLServerCentral.com.

Today, we will look at a great T-SQL enhancement introduced in SQL Server 2005 – the APPLY operator. Per Books On Line, “the APPLY operator allows you to invoke a table-valued function for each row returned by an outer table expression of a query.”

Instead of joining two tables, when APPLY is used, we join the output of a table-valued function with an outer table: the function is evaluated once for each row of the outer table.

The two forms of APPLY

Just as we have multiple forms of the JOIN operator, we also have two forms of the APPLY operator – CROSS APPLY and OUTER APPLY. The difference is quite simple:

  1. CROSS APPLY returns only those rows from the outer table for which the table-valued function produces a result set
  2. OUTER APPLY returns all rows from the outer table, irrespective of whether or not the function produces a result set; for rows where it does not, NULL values are seen in the function’s output columns
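To make the difference concrete, here is a minimal sketch. The table and function names (dbo.Customers, dbo.Orders, dbo.fn_TopOrders) are hypothetical, purely for illustration:

```sql
-- Hypothetical inline TVF: the top 2 orders for a customer,
-- or an empty set if the customer has no orders
CREATE FUNCTION dbo.fn_TopOrders (@CustomerID INT)
RETURNS TABLE
AS RETURN
    SELECT TOP (2) o.OrderID, o.OrderTotal
    FROM dbo.Orders o
    WHERE o.CustomerID = @CustomerID
    ORDER BY o.OrderTotal DESC;
GO

-- CROSS APPLY: customers for whom the function returns no rows are filtered out
SELECT c.CustomerID, t.OrderID, t.OrderTotal
FROM dbo.Customers c
CROSS APPLY dbo.fn_TopOrders(c.CustomerID) t;

-- OUTER APPLY: every customer appears; OrderID and OrderTotal are
-- NULL wherever the function returned no rows
SELECT c.CustomerID, t.OrderID, t.OrderTotal
FROM dbo.Customers c
OUTER APPLY dbo.fn_TopOrders(c.CustomerID) t;
```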

Examples

I believe the most common use of APPLY outside of a business application is in performance tuning and database administration. One of the things DBAs are always monitoring is the answer to the question: “Which queries are currently running against a particular SQL Server?” The simple query for this is:

SELECT * 
FROM sys.dm_exec_requests ser
CROSS APPLY sys.dm_exec_sql_text(ser.sql_handle)

As you can see, the sql_handle is taken from the DMV sys.dm_exec_requests and then passed to the function sys.dm_exec_sql_text. Because we do not want NULL values, we used CROSS APPLY. As an exercise, try using OUTER APPLY and see what happens.

For more examples, I would redirect the reader to Books On Line at: http://technet.microsoft.com/en-us/library/ms175156.aspx. The examples and explanations are excellent, and very easy to understand.

The big difference – CROSS APPLY v/s CROSS JOIN

So, one might say that if the output of the table-valued function were an actual table, CROSS APPLY could be replaced by a CROSS JOIN. However, that is not entirely true. CROSS JOIN produces a Cartesian product: if the outer table has m rows and the inner table n, the output will have (m x n) rows. CROSS APPLY, on the other hand, correlates the function to each outer row, which makes it behave more like an INNER JOIN.
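A quick way to see this difference, again using hypothetical tables (dbo.TableA, dbo.TableB): CROSS JOIN cannot reference the outer row at all, while the inner side of CROSS APPLY can.

```sql
-- CROSS JOIN: no correlation is possible; every row of TableB is
-- paired with every row of TableA, giving m x n rows
SELECT a.ID, b.ID AS BID
FROM dbo.TableA a
CROSS JOIN dbo.TableB b;

-- CROSS APPLY: the inner expression references a.ID, so only
-- matching rows survive -- much like an INNER JOIN
SELECT a.ID, b.BID
FROM dbo.TableA a
CROSS APPLY (SELECT b.ID AS BID
             FROM dbo.TableB b
             WHERE b.AID = a.ID) b;
```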

Some things to keep in mind

Finally, let me draw your attention to a few things that you should keep in mind before using CROSS APPLY:

  • To use the APPLY operator, the compatibility level of the database must be at least 90
  • Performance impact – the table-valued function is executed once per outer row, which typically means at least one scan per execution. Keep an eye on performance before jumping in and using CROSS APPLY for everything – moderation is always good

Until we meet next time,

Be courteous. Drive responsibly.

PIVOT & Dynamic Cross-tabs – Underappreciated features of Microsoft SQL Server


This post continues my series on the underappreciated features of Microsoft SQL Server. The series was inspired by Andy Warren’s editorial of the same name on SQLServerCentral.com.

After a number of posts on my visit to Tech-Ed 2011 (India), we are now back on track and will continue from where we left off. We were discussing the T-SQL related underappreciated features, and the next one in line was the PIVOT operator. Because PIVOT has been around since SQL Server 2005 and simplifies a very important piece of functionality, I was quite surprised to see it on the list of underappreciated features. Hence, I will start off with an introduction, provide and explain an example, and then point the reader to a couple of reference resources that I used when I was learning this T-SQL enhancement.

Cross-tabs are a very important piece of any OLTP application. These systems typically hold large amounts of data – all in great detail, but arranged “row-wise”. This arrangement is very much required for a high-performance OLTP system, because such a system generally handles more incoming transactions and fewer read operations. However, not every “row-wise” detail can be put on a report – it makes great technical sense, but not business sense. For the data to have business value, it must be possible to aggregate it effectively on request. What this means is that we need to convert one table-valued expression into another table.

The problem

Let’s take an example. In a sales setup, it is important to measure the number of orders placed by certain employees. In the AdventureWorks2008R2 sample database, this information is stored such that all unique values of interest (EmployeeId and VendorId) are in individual columns.

USE AdventureWorks2008R2
GO
SELECT poh.PurchaseOrderID, poh.EmployeeID, poh.VendorID
FROM Purchasing.PurchaseOrderHeader poh

image

What we need is that the unique values from one column in the expression (EmployeeID) are converted or transformed into multiple columns in the output, and an aggregation is performed on the remaining columns in the output. The unique values in the EmployeeID column themselves need to become fields in the final result set.

  • Columns – the unique EmployeeID values
  • Rows – VendorID
  • Aggregation – COUNT of PurchaseOrderIDs for the vendor in the row key and the employee in the column key

The Conventional Solution

Conventionally (before SQL 2005), we would have ended up doing something like:

USE AdventureWorks2008R2
GO
SELECT poh.VendorID,
       (SELECT COUNT(poh1.PurchaseOrderID) 
        FROM Purchasing.PurchaseOrderHeader poh1 
        WHERE poh.VendorID = poh1.VendorID AND poh1.EmployeeID = 250) AS Employee1,
       (SELECT COUNT(poh1.PurchaseOrderID) 
        FROM Purchasing.PurchaseOrderHeader poh1 
        WHERE poh.VendorID = poh1.VendorID AND poh1.EmployeeID = 251) AS Employee2,
       (SELECT COUNT(poh1.PurchaseOrderID) 
        FROM Purchasing.PurchaseOrderHeader poh1 
        WHERE poh.VendorID = poh1.VendorID AND poh1.EmployeeID = 256) AS Employee3,
       (SELECT COUNT(poh1.PurchaseOrderID) 
        FROM Purchasing.PurchaseOrderHeader poh1 
        WHERE poh.VendorID = poh1.VendorID AND poh1.EmployeeID = 257) AS Employee4,
       (SELECT COUNT(poh1.PurchaseOrderID) 
        FROM Purchasing.PurchaseOrderHeader poh1 
        WHERE poh.VendorID = poh1.VendorID AND poh1.EmployeeID = 260) AS Employee5
FROM Purchasing.PurchaseOrderHeader poh
WHERE poh.EmployeeID IN ( 250, 251, 256, 257, 260 )
GROUP BY poh.VendorID

Such a query gives us the result shown in the screenshot below, which is what we require.

image

When we look at the execution plan of this query, we find that it is anything but efficient – in fact, it’s terrible! Due to space constraints, I am only showing the core of the execution plan (i.e. where most of the computation is concentrated).

image

Depending upon the situation, you may instead end up writing a complex series of SELECT…CASE statements.
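For completeness, a SELECT…CASE version of the same cross-tab would look roughly like this. It avoids the five correlated sub-queries, but still hard-codes the employee IDs:

```sql
USE AdventureWorks2008R2
GO
-- COUNT ignores NULLs, and a CASE with no ELSE yields NULL for
-- non-matching rows, so each column counts only one employee's orders
SELECT poh.VendorID,
       COUNT(CASE WHEN poh.EmployeeID = 250 THEN poh.PurchaseOrderID END) AS Employee1,
       COUNT(CASE WHEN poh.EmployeeID = 251 THEN poh.PurchaseOrderID END) AS Employee2,
       COUNT(CASE WHEN poh.EmployeeID = 256 THEN poh.PurchaseOrderID END) AS Employee3,
       COUNT(CASE WHEN poh.EmployeeID = 257 THEN poh.PurchaseOrderID END) AS Employee4,
       COUNT(CASE WHEN poh.EmployeeID = 260 THEN poh.PurchaseOrderID END) AS Employee5
FROM Purchasing.PurchaseOrderHeader poh
WHERE poh.EmployeeID IN ( 250, 251, 256, 257, 260 )
GROUP BY poh.VendorID
```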

The solution – PIVOT & UNPIVOT operators

Come SQL 2005, Microsoft introduced a number of exciting T-SQL enhancements. Among them are the PIVOT and UNPIVOT operators.

The following is the T-SQL query using PIVOT to carry out the same computation demonstrated above:

USE AdventureWorks2008R2;
GO
SELECT VendorID, 
       [250] AS Emp1, 
       [251] AS Emp2, 
       [256] AS Emp3, 
       [257] AS Emp4, 
       [260] AS Emp5
FROM (SELECT PurchaseOrderID, EmployeeID, VendorID
      FROM Purchasing.PurchaseOrderHeader) p
PIVOT (COUNT (PurchaseOrderID)
       FOR EmployeeID IN ( [250], [251], [256], [257], [260] )
      ) AS pvt
ORDER BY pvt.VendorID;

In this example, the results of the following sub-query are PIVOT’ed on the EmployeeID column.

USE AdventureWorks2008R2;
GO
SELECT PurchaseOrderID, EmployeeID, VendorID
FROM Purchasing.PurchaseOrderHeader

Here is a walk-through of the above query:

  1. VendorID serves as the grouping column (it is the only column not referenced inside the PIVOT clause); EmployeeID is the pivot column, and PurchaseOrderID feeds the COUNT aggregate
  2. This set is then aggregated to produce the output shown below:

image

Some points to consider:

  1. When PIVOT and UNPIVOT are used against databases upgraded to SQL Server 2005 or later, the compatibility level of the database must be set to 90 or higher
  2. When aggregate functions are used with PIVOT, any null values in the value column are not considered when computing the aggregation
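The second point can be verified with a tiny self-contained example (the table and values are made up for illustration):

```sql
DECLARE @t TABLE (Category CHAR(1), Amount INT);
INSERT INTO @t VALUES ('A', 10), ('A', NULL), ('B', 20);

-- COUNT(Amount) ignores the NULL row: [A] comes back as 1, not 2
SELECT [A], [B]
FROM (SELECT Category, Amount FROM @t) src
PIVOT (COUNT(Amount) FOR Category IN ([A], [B])) pvt;
```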

To demonstrate UNPIVOT, let’s store the output of the PIVOT operation above in a table variable and then execute the UNPIVOT operation on it. By doing so, there is a very important point I want to demonstrate.

USE AdventureWorks2008R2;
GO
--Declare a temporary table variable to hold the output of the PIVOT
DECLARE @pvt TABLE (VendorID INT, Emp1 INT, Emp2 INT, Emp3 INT, Emp4 INT, Emp5 INT)

INSERT INTO @pvt
SELECT VendorID, 
       [250] AS Emp1, 
       [251] AS Emp2, 
       [256] AS Emp3, 
       [257] AS Emp4, 
       [260] AS Emp5
FROM (SELECT PurchaseOrderID, EmployeeID, VendorID
      FROM Purchasing.PurchaseOrderHeader) p
PIVOT (COUNT (PurchaseOrderID)
       FOR EmployeeID IN ( [250], [251], [256], [257], [260] )
      ) AS pvt
ORDER BY pvt.VendorID

-- UNPIVOT the table
SELECT VendorID, Employee, Orders
FROM (SELECT VendorID, Emp1, Emp2, Emp3, Emp4, Emp5
      FROM @pvt) p
UNPIVOT (Orders FOR Employee IN (Emp1, Emp2, Emp3, Emp4, Emp5)) AS unpvt;
GO

The following is the output:

image

At this point, I would like you to understand the following major differences between PIVOT and UNPIVOT:

  1. UNPIVOT is not the exact reverse of PIVOT. This is because during a PIVOT operation, we aggregate data, which causes loss of data-granularity by merging multiple rows into a single row
  2. Also, because PIVOT ignores NULL values during the aggregations, they would no longer be present in the output of the UNPIVOT operation
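The PIVOT queries above hard-code the employee IDs. For a truly dynamic cross-tab (as the title of this post promises), the IN list can be built at run time. The sketch below shows one common pattern against AdventureWorks2008R2; the resource links at the end of this post cover it in more depth:

```sql
USE AdventureWorks2008R2;
GO
DECLARE @cols NVARCHAR(MAX), @sql NVARCHAR(MAX);

-- Build a comma-separated list of bracketed EmployeeID values,
-- e.g. [250],[251],[256],...
SELECT @cols = STUFF((SELECT DISTINCT ',' + QUOTENAME(CAST(EmployeeID AS NVARCHAR(10)))
                      FROM Purchasing.PurchaseOrderHeader
                      FOR XML PATH('')), 1, 1, '');

-- Splice the column list into the PIVOT query and execute it
SET @sql = N'SELECT VendorID, ' + @cols + N'
FROM (SELECT PurchaseOrderID, EmployeeID, VendorID
      FROM Purchasing.PurchaseOrderHeader) p
PIVOT (COUNT(PurchaseOrderID) FOR EmployeeID IN (' + @cols + N')) pvt
ORDER BY VendorID;';

EXEC sp_executesql @sql;
```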

Comparing PIVOT with the conventional query

Any feature is only useful if it has its own benefits. Obviously, PIVOT comes with the benefit of being more readable and maintainable, but is it any better in terms of performance than the older, more conventional query?

So, I put both queries together, turned on the execution plan, and executed them. Below is what the execution plan showed me. Do I need to say anything more?

image

Some great reference resources

I hope that you can now appreciate the benefits of using newer T-SQL features as they come along. There is an age-old saying: “If it isn’t broken, don’t fix it.” But in this case, we are not fixing anything – we are adding value, which is always welcome. As we say good-bye today, I will leave you with some great reference resources that helped me understand PIVOT:

  1. Getting started with PIVOT Queries in SQL Server 2005/2008 by Jacob Sebastian
  2. Another PIVOT Query example by Jacob Sebastian
  3. Posts on PIVOT & UNPIVOT tables by Pinal Dave
  4. Dynamic PIVOT in SQL Server 2005 by Madhivanan
  5. Dynamic cross-tab with multiple PIVOT columns by Madhivanan

Until we meet next time,

Be courteous. Drive responsibly.

AdventureWorks documentation – Data Dictionary


Today is a special day. It is Holy Thursday (or Maundy Thursday) – the day before Good Friday and the long Easter week-end, which is the Christian celebration of the Resurrection. It is the day when the Lord’s Supper or Holy Communion/Eucharist was instituted by Jesus Christ. This was made immortal by Leonardo Da Vinci in his famous fresco – “The Last Supper”.

Anyway, it’s still a working day in most parts of the world, and speaking of work reminds me that all of us have used the AdventureWorks family of sample databases at one time or another. Ever since SQL Server 2005 came out, the AdventureWorks database has been my go-to place for studying SQL Server, doing a quick test, or demonstrating my thoughts to my managers or my team.

While the AdventureWorks family of databases is easily available from CodePlex, there is no well-known place to look up descriptions of the schemas and tables. Recently, while researching one of my posts, I stumbled upon the entire data dictionary for the AdventureWorks family of databases.

A data dictionary is a centralized repository of information about data, such as meaning, relationships to other data, origin, usage, and format. Data dictionaries are commonly used to circulate copies of the database schema to vendors and technology partners, and once a product is released, this may be made available to end customers depending upon the need (e.g. to allow for customization or study).

You can get the data dictionary for AdventureWorks sample databases at: http://technet.microsoft.com/en-us/library/ms124438(SQL.100).aspx

For those who would like to download the AdventureWorks database for SQL 11 (“Denali”), head over to: http://msftdbprodsamples.codeplex.com/releases/view/55330

I trust that the AdventureWorks data dictionary will help you a lot in finding your way through the database. Next week, we will be resuming the Underappreciated features of Microsoft SQL Server series.

Have a Happy Easter!

Until we meet next time,

Be courteous. Drive responsibly.

Want to ask me a question? Do so at: http://beyondrelational.com/ask/nakul/default.aspx

A script to verify a database backup


Today, I will be sharing a very small but important script. Recently, one of the database backups we had received failed to restore. I was faced with the problem of determining whether the fault lay with the backup itself, with the I/O subsystem, or with some other failure.

As with all tools and utilities, SQL Server provides great options when used via commands instead of the UI. The RESTORE command, for instance, makes it very easy to validate a backup. Below is the script I used to validate my backup; with it, I was able to determine that the backup we had received was indeed corrupt.

USE master
-- Add a new backup device
-- Ensure that SQL Server can read from the physical location where the backup is placed
--                   TYPE    NAME            PHYSICAL LOCATION
EXEC sp_addumpdevice 'disk', 'networkdrive', '\\VPCW2K8\Database Backup\Test.bak'

-- Execute the restore operation in VERIFYONLY mode
-- Provide the actual paths where you plan to restore the database,
-- because VERIFYONLY also checks for available space
RESTORE VERIFYONLY
FROM networkdrive
WITH
MOVE N'TESTDB_DATA'    TO N'E:\TestDB\TestDB_Data.mdf',
MOVE N'TESTDB_INDEXES' TO N'E:\TestDB\TestDB_Idx.mdf',
MOVE N'TESTDB_LOG'     TO N'E:\TestDB\TestDB_LOG.ldf'

-- Drop the device
-- (the physical file is an optional second argument; if present, the file is deleted)
EXEC sp_dropdevice 'networkdrive'

The checks performed by RESTORE VERIFYONLY include (per Books On Line):

  • That the backup set is complete and all volumes are readable
  • Some header fields of database pages, such as the page ID (as if it were about to write the data)
  • Checksum (if present on the media)
  • Checking for sufficient space on destination devices
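Incidentally, if you don’t want to create a dump device, RESTORE VERIFYONLY can also read straight from a file (the UNC path below is illustrative):

```sql
-- Verify a backup file directly, without sp_addumpdevice
RESTORE VERIFYONLY
FROM DISK = N'\\ServerName\Share\Test.bak'
WITH CHECKSUM;  -- also validate checksums, if the backup was taken WITH CHECKSUM
```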

What methods do you use to validate your backups? Do leave a small note as your comments.

Until we meet next time,

Be courteous. Drive responsibly.

XPS Error – Your current security settings do not allow this file to be downloaded.


This Friday, I will take a break from the SQL world and share with you a scare that I recently overcame.

XPS, or the XML Paper Specification, is gaining popularity as an alternate document format for exchange over the Internet and e-mail. Sometime in the first week of April, somebody sent me an XPS file for my review via e-mail.

As soon as I attempted to open the file (in Internet Explorer), I encountered the following message:

“Security Error – Your current security settings do not allow this file to be downloaded.”

I did not expect this. Nothing had changed on my computer – no new installations, no changes to security settings – simply nothing! And yet, I was able to open XPS documents that I generated myself just fine – but this document that I had received would not open. How was this even possible?

After much toiling (about 20 minutes!), I figured out the solution. To protect our security, Windows does not trust anything that comes from external sources, and it was blocking the XPS document because the operating system considered it a potential threat.

So, this is what I did:

  1. Right-click on the file
  2. Click the “Unblock” button in the “Security” section of the “General” tab of the file properties
  3. Once done, I attempted to open the file, and success!

 

It’s a different matter altogether that I was not able to complete my review that day, but it’s okay.

Finally, have you ever come across such an issue? If yes, what was your solution? Do let us know.

Until we meet next time,

Be courteous. Drive responsibly.