#0416 – SQL Server – Msg 8101 – Use column lists when working with IDENTITY columns


I have often written about IDENTITY columns on my blog. Identity columns, most commonly used to implement auto-increment keys, have been around for more than a decade now. Yet, I often see teams run into interesting situations with them, especially when data is being migrated from one system to another.

Today’s post is based on one such incident that came to my attention.

The team was trying to migrate data from one table to another as part of an exercise to restructure the database for better efficiency. When moving the data, they used SET IDENTITY_INSERT ON so that they could insert explicit values into the identity column. However, they kept running into an error.

Msg 8101, Level 16, State 1, Line 24
An explicit value for the identity column in table 'dbo.tIdentity' can only be specified when a column list is used and IDENTITY_INSERT is ON.

Here is a simulation of what they were doing:

USE tempdb;
GO
SET NOCOUNT ON;
--Prepare the environment
--   Create a table, and add some test data into it
--Safety Check
IF OBJECT_ID('tIdentity','U') IS NOT NULL
    DROP TABLE dbo.tIdentity;
GO
--Create a table
CREATE TABLE dbo.tIdentity (IdentityId INT IDENTITY(1,1),
                            IdentityValue VARCHAR(10)
                           );
GO
--Turn Explicit IDENTITY_INSERT ON and insert duplicate IDENTITY values
SET IDENTITY_INSERT dbo.tIdentity ON;
GO
--NOTICE: No column list has been supplied in the INSERT
INSERT INTO dbo.tIdentity
VALUES (1, 'One'),
       (2, 'Two');
GO

--RESULTS
--Msg 8101, Level 16, State 1, Line 24
--An explicit value for the identity column in table 'dbo.tIdentity' can only be specified when a column list is used and IDENTITY_INSERT is ON.

The Solution

Let’s re-read the error. It clearly indicates what the issue is – if we need to insert explicit values into identity columns, we need to use a column list in our INSERT statements, as shown below.

USE tempdb;
GO
SET NOCOUNT ON;
--Prepare the environment
--Create a table, and add some test data into it
--Safety Check
IF OBJECT_ID('tIdentity','U') IS NOT NULL
    DROP TABLE dbo.tIdentity;
GO

--Create a table
CREATE TABLE dbo.tIdentity (IdentityId INT IDENTITY(1,1),
                            IdentityValue VARCHAR(10)
                           );
GO

--Turn Explicit IDENTITY_INSERT ON and insert duplicate IDENTITY values
SET IDENTITY_INSERT dbo.tIdentity ON;
GO

--NOTE: Column list has been supplied in the INSERT,
--      so, no errors will be encountered    
INSERT INTO dbo.tIdentity ([IdentityId], [IdentityValue])
VALUES (1, 'One'),
       (2, 'Two');
GO

--Confirm that data has been inserted
SELECT IdentityId,
       IdentityValue
FROM dbo.tIdentity;
GO

--Now that data has been inserted, turn OFF IDENTITY_INSERT
SET IDENTITY_INSERT dbo.tIdentity OFF;
GO

-----------------------------------------------------------------
--RESULTS
----------
--IdentityId  IdentityValue
--1           One
--2           Two
-----------------------------------------------------------------
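
As a side note, once IDENTITY_INSERT is turned OFF, the identity seed continues from the highest value that was explicitly inserted. Here is a minimal check (assuming the dbo.tIdentity table from the script above is still in place):

--A regular INSERT after IDENTITY_INSERT has been turned OFF
--The identity value is expected to resume from the highest explicitly inserted value (2),
--so this row should receive IdentityId = 3
INSERT INTO dbo.tIdentity (IdentityValue)
VALUES ('Three');
GO

SELECT IdentityId,
       IdentityValue
FROM dbo.tIdentity;
GO

--EXPECTED RESULTS
--IdentityId  IdentityValue
--1           One
--2           Two
--3           Three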

Hope you will find this helpful.

Until we meet next time,

Be courteous. Drive responsibly.


#0415 – SQL Server – Performance Tuning – Use STRING_AGG to generate comma separated strings


With more and more data being exchanged over APIs, generating comma-separated strings is becoming an increasingly common requirement.

A few years ago, I wrote about two different ways to generate comma-separated strings. The most common approach I see when generating comma-separated values from a table is an intermediate conversion to XML. This, however, is a very costly mechanism, and depending upon the amount of data involved, the query can take minutes to run.
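
For reference, here is a minimal sketch of the XML-based approach I am referring to (using the same simple list of names that appears later in this post). It works, but the intermediate FOR XML PATH conversion and the STUFF() call make it both harder to read and slower on larger data sets:

--A minimal sketch of the XML-based concatenation approach (for comparison only)
DECLARE @NamesTable TABLE ([Id] INT,
                           [Name] NVARCHAR(50)
                          );

INSERT INTO @NamesTable
VALUES (1, 'A'),
       (2, 'D'),
       (2, 'C'),
       (3, 'E'),
       (3, 'H'),
       (3, 'G');

--Concatenate via an intermediate XML conversion,
--then use STUFF() to remove the leading separator
SELECT STUFF((SELECT ',' + tbl.Name
              FROM @NamesTable AS tbl
              FOR XML PATH('')
             ), 1, 1, '') AS [CommaSeparatedString];
GO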

SQL Server 2017 brings a new aggregate function that can be used to generate comma-separated values extremely fast. The function is STRING_AGG().

Here’s a sample of its usage:


--WARNING: THIS SCRIPT IS PROVIDED AS-IS AND WITHOUT WARRANTY.
--         FOR DEMONSTRATION PURPOSES ONLY
--Step 01: Generate a table variable to store the source data
DECLARE @NamesTable TABLE ([Id] INT,
                           [Name] NVARCHAR(50)
                          );

--Step 02: Generate test data
INSERT INTO @NamesTable
VALUES (1, 'A'),
       (2, 'D'),
       (2, 'C'),
       (3, 'E'),
       (3, 'H'),
       (3, 'G');

--Step 03: Use STRING_AGG to generate a comma-separated string
SELECT STRING_AGG(tbl.Name, ',') AS [CommaSeparatedString]
FROM @NamesTable AS tbl;
GO

/*RESULTS
CommaSeparatedString
A,D,C,E,H,G
*/

Advantages of STRING_AGG:

  • Can be used just like any other aggregate function in a query (see the sketch after this list)
  • Can work with any user-supplied separator – it doesn’t necessarily have to be a comma
  • No manual step required – separators are not added at the end of the concatenated string
  • STRING_AGG() is significantly faster than using XML-based methods
  • Works under any database compatibility level, as long as the instance is SQL Server 2017 (or higher) or Azure SQL Database
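
For instance, here is a minimal sketch of the first point: STRING_AGG() behaving like a regular aggregate with a GROUP BY, using the same test data as above. The WITHIN GROUP (ORDER BY) clause controls the order of the values inside each string.

--Step 01: Source data (same as above)
DECLARE @NamesTable TABLE ([Id] INT,
                           [Name] NVARCHAR(50)
                          );

INSERT INTO @NamesTable
VALUES (1, 'A'),
       (2, 'D'),
       (2, 'C'),
       (3, 'E'),
       (3, 'H'),
       (3, 'G');

--Step 02: STRING_AGG used with GROUP BY - one comma-separated string per Id
--         WITHIN GROUP (ORDER BY ...) controls the order of values within each string
SELECT tbl.Id,
       STRING_AGG(tbl.Name, ',') WITHIN GROUP (ORDER BY tbl.Name) AS [CommaSeparatedString]
FROM @NamesTable AS tbl
GROUP BY tbl.Id;
GO

/*RESULTS
Id  CommaSeparatedString
1   A
2   C,D
3   E,G,H
*/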

Here’s an example of how STRING_AGG can be used with any separator:

--WARNING: THIS SCRIPT IS PROVIDED AS-IS AND WITHOUT WARRANTY.
--         FOR DEMONSTRATION PURPOSES ONLY
--Step 01: Generate a table variable to store the source data
DECLARE @NamesTable TABLE ([Id] INT,
                           [Name] NVARCHAR(50)
                          );

--Step 02: Generate test data
INSERT INTO @NamesTable
VALUES (1, 'A'),
       (2, 'D'),
       (2, 'C'),
       (3, 'E'),
       (3, 'H'),
       (3, 'G');

--Step 03: Use STRING_AGG with a custom separator
SELECT STRING_AGG(tbl.Name, '-*-') AS [CustomSeparatorString]
FROM @NamesTable AS tbl;
GO

/*RESULTS
CustomSeparatorString
A-*-D-*-C-*-E-*-H-*-G
*/

A minor challenge

As with every new feature, there is a small usability challenge with STRING_AGG: one cannot use keywords like DISTINCT to ensure that only distinct values are used when generating the comma-separated string. There is, however, an Azure feedback item open where you can cast your vote if you feel this feature would be useful.
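
Until that happens, a common workaround (shown here as a minimal sketch, under the same as-is disclaimer) is to de-duplicate the values in a derived table before applying STRING_AGG():

--Workaround sketch: remove duplicates in a derived table before applying STRING_AGG
DECLARE @NamesTable TABLE ([Id] INT,
                           [Name] NVARCHAR(50)
                          );

--Test data that deliberately contains duplicate names
INSERT INTO @NamesTable
VALUES (1, 'A'),
       (2, 'A'),
       (2, 'C'),
       (3, 'C'),
       (3, 'E');

SELECT STRING_AGG(distinctNames.Name, ',') AS [DistinctCommaSeparatedString]
FROM (SELECT DISTINCT tbl.Name
      FROM @NamesTable AS tbl
     ) AS distinctNames;
GO

/*RESULTS (order of values is not guaranteed)
DistinctCommaSeparatedString
A,C,E
*/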

Further Reading

  • Different ways to generate a comma-separated string from a table [Blog Link]
  • STRING_AGG() Aggregate Function [MSDN BOL]

Until we meet next time,

Be courteous. Drive responsibly.


#0414 – Analyzing Event Viewer Logs in Excel


When troubleshooting issues, the Event Viewer is one of the handiest tools available. Assuming that appropriate coding practices were used during application development, the Event Viewer contains a log of most problems – in the system, in the configuration, or in the application code.

The only problem is analyzing the Event Viewer logs when you have a thousand events. It becomes extremely difficult to answer questions like the following while going through events serially:

  1. Events logged by type for each source
  2. Events by severity
  3. Events by category
  4. And many more such analytical questions…

These analytical requirements are best addressed with tools like Microsoft Excel. And so, I went about analyzing Event Viewer logs in Microsoft Excel in just 2 steps.

Step #1: Export the Event Viewer Logs to XML

  1. Once the Event Viewer is launched, navigate to the Event Log to be evaluated
  2. Right-click on the Event Log and choose “Save All Events As” option
  3. In the Save As dialog, choose to save the Events as an XML file
    • If asked to save display information, you can choose not to store any or choose a language of your choice

And that’s it – that completes the first step!

Save the Event Viewer Logs
Choose to save the Event Viewer Logs as an XML file

Step #2: Import the XML file into Excel

  1. Launch Microsoft Excel
  2. In the File -> Open dialog, choose to search files of “XML” type
  3. Select the exported Event Viewer Log file
  4. In the Import Options, you can choose to import as an “XML Table”
    • Excel will prompt to create/determine the XML schema automatically. It’s okay to allow Excel to do so

And that’s it – the Event Viewer Logs are now in Excel and you can use all native Excel capabilities (sort, filter, pivot and so on).

Import the Event Viewer Logs as an XML table
Event Viewer Logs successfully imported into Excel

I do hope you found this tip helpful. If you have more such thoughts and ideas, drop in a line in the Comments section below.

Until we meet next time,

Be courteous. Drive responsibly.


#0413 – SQL Server – Interview Question – Why are some columns displayed with a negative sign in sp_help?


One of the first things I do when I start work on a new database is to use “sp_help” to go through each table and study their structure. I recently noticed something that would make an interesting interview question.

Here’s what I saw during my study.

Output of the sp_help command showing negative signs for a few columns

The interview question that came to my mind was:

Why is there a negative “(-)” sign in the sp_help output?

The answer

The answer is quite simple – the negative sign simply indicates that the column is sorted in a different (descending) order in the index. By default, when a sort order is not specified for a column on an index, Microsoft SQL Server arranges it in ascending order. When we explicitly specify a descending sort order for a column on the index, sp_help reports it with a negative “(-)” sign.

Here is the script I used to capture the screenshot seen above:

USE tempdb;
GO
--Safety Check
IF OBJECT_ID('tempdb..#StudentSubject','U') IS NOT NULL
BEGIN
    DROP TABLE #StudentSubject;
END
GO

--Create a temporary table to demonstrate the point under discussion
CREATE TABLE #StudentSubject 
    (StudentId          INT          NOT NULL,
     SubjectId          INT          NOT NULL,
     DayNumber          TINYINT      NOT NULL,
     SequenceNumber     TINYINT      NOT NULL,
     IsCancelled        BIT          NOT NULL 
                        CONSTRAINT df_StudentSubjectIsCancelled DEFAULT (0),
     Remarks            VARCHAR(255)     NULL,
     CONSTRAINT pk_StudentSubject 
                PRIMARY KEY CLUSTERED (StudentId      ASC,
                                       SubjectId      ASC,
                                       DayNumber      DESC,
                                       SequenceNumber DESC
                                      )
    );
GO

--Notice the DESC keyword against the DayNumber & SequenceNumber columns
--These columns will be reported in the index with negative signs
EXEC sp_help '#StudentSubject';
GO

--Cleanup
IF OBJECT_ID('tempdb..#StudentSubject','U') IS NOT NULL
BEGIN
    DROP TABLE #StudentSubject;
END
GO
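
As a related check, the same information is available from the catalog views. Here is a minimal sketch (run it before the cleanup step above, while #StudentSubject still exists) that reads the key column sort order from sys.index_columns; is_descending_key = 1 corresponds to the “(-)” sign in the sp_help output:

--Read the index key column sort order directly from the catalog views
--(temporary tables live in tempdb, hence the tempdb.sys.* references)
SELECT i.name               AS IndexName,
       c.name               AS ColumnName,
       ic.key_ordinal       AS KeyOrdinal,
       ic.is_descending_key AS IsDescendingKey
FROM tempdb.sys.indexes AS i
INNER JOIN tempdb.sys.index_columns AS ic
        ON ic.object_id = i.object_id
       AND ic.index_id  = i.index_id
INNER JOIN tempdb.sys.columns AS c
        ON c.object_id = ic.object_id
       AND c.column_id = ic.column_id
WHERE i.object_id = OBJECT_ID('tempdb..#StudentSubject')
  AND ic.key_ordinal > 0
ORDER BY ic.key_ordinal;
GO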

Until we meet next time,

Be courteous. Drive responsibly.

#0412 – SQL Server – SSIS – Error – The value type (__ComObject) can only be converted to variables of type Object. Variables may not change type during execution.


Recently, we were manipulating a string in an “Execute SQL” task inside an SSIS package when we ran into the following sequence of errors.

[Execute SQL Task] Error: The value type (__ComObject) can only be converted to variables of type Object.
[Execute SQL Task] Error: An error occurred while assigning a value to variable "MyStringVariable": "The type of the value (DBNull) being assigned to variable "User::MyStringVariable" differs from the current variable type (String). Variables may not change type during execution. Variable types are strict, except for variables of type Object.".
Error: The type of the value (DBNull) being assigned to variable "User::MyStringVariable" differs from the current variable type (String). Variables may not change type during execution. Variable types are strict, except for variables of type Object.

The Execute SQL task was similar to something we had done hundreds of times before, so we were stumped by the error. I found the root cause interesting and hence wanted to write about it right away.

The Test Setup

Before we go ahead, allow me to walk through the sample SSIS package which we used to reproduce the issue. As I mentioned, it is a simple SSIS package with a single “Execute SQL Task”.


The Execute SQL task in the sample SSIS package

The “Execute SQL” task simply executes a T-SQL statement that returns a single-row result set and sets a package variable of type “string“.

DECLARE @myVariable VARCHAR(MAX);

SET @myVariable = 'SQLTwins';

SELECT @myVariable AS myVariable;


User Variable of type “string” in the test package


Execute SQL task details showing sample T-SQL script


Variable Mapping in the Execute SQL Task

When we execute this SSIS package, it fails with the error referenced above.


Failed Execute SQL Task


Execute SQL Task Failure Details

The Solution

The solution was staring us in the face, but we failed to notice it for a while. If we read the error message carefully, we can isolate the following points:

  • The data-type of the variable from the Result Set output of the Execute SQL task is different from the data-type of the target user variable
  • SSIS detects this as an attempt to change the data-type, which is not allowed because variable types are strict unless the variable is defined as an “Object”

Based on this, we set about looking at differences between the single-row result set and the SSIS user variable of type “string”. We soon realized that the result set was returning a VARCHAR(MAX).

It appears that the (MAX) was causing problems for the SSIS engine. As soon as we changed it to a fixed-length variable, the package worked as expected.

DECLARE @myVariable VARCHAR(8000);

SET @myVariable = 'SQLTwins';

SELECT @myVariable AS myVariable;


Successful execution of Execute SQL after changing to a fixed-length data-type
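
Alternatively, and this is only a sketch of the same idea rather than something from the original incident, the variable could stay as VARCHAR(MAX) and only the value handed back in the result set could be converted to a fixed length, which should sidestep the same issue:

DECLARE @myVariable VARCHAR(MAX);

SET @myVariable = 'SQLTwins';

--Keep the variable as VARCHAR(MAX), but return a fixed-length value to SSIS
SELECT CAST(@myVariable AS VARCHAR(8000)) AS myVariable;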

Hope this little tip helps in your development efforts someday.

Until we meet next time,

Be courteous. Drive responsibly.