Author Archives: nakulvachhrajani

About nakulvachhrajani

Nakul Vachhrajani is a SQL Server MCTS and TOGAF-certified Technical Architect with Capgemini, with more than 16 years of total IT experience. Nakul is an active blogger and has been a guest columnist for SQLAuthority.com and SQLServerCentral.com. He presented a webcast on the “Underappreciated Features of Microsoft SQL Server” at the Microsoft Virtual Tech Days Exclusive Webcast series (May 02-06, 2011) on May 06, 2011. He is also the author of a research paper on database upgrade methodologies, published in a nationwide CSI journal. In addition to his passion for SQL Server, Nakul contributes to academia out of personal interest, visiting various colleges and universities as an external faculty member to judge project work carried out by students. Disclaimer: The opinions expressed herein are his own personal opinions and do not represent his employer’s views in any way.

#0430 – SQL Server – Tips: Working with JSON to build payload samples


As a system integration architect, I often deal with generating and processing payloads in SQL Server. In a previous post, I showed how to avoid JSON array wrappers when generating payloads and samples from data using T-SQL.

Based on the feedback received from a few colleagues of mine, I will be sharing some tips I use to build various JSON payload samples from test data. For these examples, we will be using the [AdventureWorks2022] sample database.

Nesting/Embedding JSONs

The true value of FOR JSON lies in the fact that it can return complex hierarchical data as nested JSON objects or arrays. In the example below, we have a single object corresponding to one (1) customer who has multiple child orders, which are represented as a JSON array.

USE [AdventureWorks2022];
GO
SELECT [Customer].[FirstName]   AS [Name],
       [Customer].[PhoneNumber] AS [Phone],
       [Customer].[City]        AS [City],
       [Orders].[SalesOrderID]  AS [OrderId],
       [Orders].[OrderDate]     AS [OrderDate],
       [Orders].[DueDate]       AS [ExpectedDeliveryDate]
FROM [Sales].[vIndividualCustomer] AS [Customer]
INNER JOIN [Sales].[SalesOrderHeader] AS [Orders]
    ON [Customer].[BusinessEntityID] = [Orders].[CustomerID]
WHERE [Customer].[BusinessEntityID] = 12630
FOR JSON AUTO, WITHOUT_ARRAY_WRAPPER;
GO
{
	"Name": "Clayton",
	"Phone": "1 (11) 500 555-0145",
	"City": "Roncq",
	"Orders": [
		{
			"OrderId": 49271,
			"OrderDate": "2013-01-09T00:00:00",
			"ExpectedDeliveryDate": "2013-01-21T00:00:00"
		},
		{
			"OrderId": 54524,
			"OrderDate": "2013-08-16T00:00:00",
			"ExpectedDeliveryDate": "2013-08-28T00:00:00"
		},
		{
			"OrderId": 56797,
			"OrderDate": "2013-09-26T00:00:00",
			"ExpectedDeliveryDate": "2013-10-08T00:00:00"
		}
	]
}
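On the consuming side, it is worth confirming that the payload has the expected shape before processing it. Here is a minimal Python sketch, using the sample output above (trimmed to a single order for brevity):

```python
import json

# Sample payload produced by the FOR JSON AUTO, WITHOUT_ARRAY_WRAPPER query
# above, trimmed to one order for brevity
payload = '''
{
    "Name": "Clayton",
    "Phone": "1 (11) 500 555-0145",
    "City": "Roncq",
    "Orders": [
        {
            "OrderId": 49271,
            "OrderDate": "2013-01-09T00:00:00",
            "ExpectedDeliveryDate": "2013-01-21T00:00:00"
        }
    ]
}
'''

customer = json.loads(payload)

# FOR JSON AUTO folds the joined [Orders] columns into a nested JSON array
assert isinstance(customer["Orders"], list)
assert customer["Orders"][0]["OrderId"] == 49271
```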

Build complex JSON structures

Extending that idea, suppose you want to embed one JSON object inside another. Here’s a simple way to do that:

SELECT
  JSON_QUERY(
    (SELECT '404'            AS [HttpStatusCode],
            'Page not found' AS [Message]
     FOR JSON PATH, WITHOUT_ARRAY_WRAPPER)
  ) AS [ProcessingStatus],
  20 AS [TransactionId]
FOR JSON PATH, WITHOUT_ARRAY_WRAPPER;
GO
{
	"ProcessingStatus": {
		"HttpStatusCode": "404",
		"Message": "Page not found"
	},
	"TransactionId": 20
}

Similarly, you can even combine two JSON fragments into a single message:

SELECT
  JSON_QUERY(
    (SELECT 'HttpTimeout' AS [name],
            '60'          AS [value]
     FOR JSON PATH, WITHOUT_ARRAY_WRAPPER)
  ) AS [ConfigSettings],
  JSON_QUERY(
    (SELECT 'SQLTwins' AS [username]
     FOR JSON PATH, WITHOUT_ARRAY_WRAPPER)
  ) AS [UserInfo]
FOR JSON PATH, WITHOUT_ARRAY_WRAPPER;
{
	"ConfigSettings": {
		"name": "HttpTimeout",
		"value": "60"
	},
	"UserInfo": {
		"username": "SQLTwins"
	}
}
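The reason JSON_QUERY matters here: without it, FOR JSON treats the inner result as a plain string and escapes its quotes instead of embedding it as an object. A small Python illustration of the difference (the escaped form below is hand-written to illustrate the shape, not captured SQL Server output):

```python
import json

# Hand-written illustration: an escaped (string) embedding versus a proper
# nested object, as produced when JSON_QUERY is used
escaped = '{"ConfigSettings": "{\\"name\\":\\"HttpTimeout\\",\\"value\\":\\"60\\"}"}'
proper = '{"ConfigSettings": {"name": "HttpTimeout", "value": "60"}}'

without_json_query = json.loads(escaped)
with_json_query = json.loads(proper)

# The escaped form parses to a string, not an object
assert isinstance(without_json_query["ConfigSettings"], str)

# The JSON_QUERY form parses to a real nested object
assert isinstance(with_json_query["ConfigSettings"], dict)
assert with_json_query["ConfigSettings"]["value"] == "60"
```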

I trust you will find these tips and samples useful. Do share if you have any tricks that you use when working with JSON and payloads.

Until we meet next time,

Be courteous. Drive responsibly.

#0429 – SQL Server – Working with JSON using WITHOUT_ARRAY_WRAPPER


When working on system integrations, JSON is often the format of choice used to exchange data. As with any data contract, the clarity and structure of the schema matter. Default behaviors (like wrapping JSON into arrays) can complicate things when you’re expecting a single object. This is especially true when processing feeds or doing asynchronous operations.

I dabble a lot in data operations and often use SQL Server queries to build my JSON payloads. As we know, we can use the FOR JSON PATH clause to generate the output as JSON.

USE [AdventureWorks2022];
GO
SELECT [FirstName], [LastName], [PhoneNumber], [City]
FROM [Sales].[vIndividualCustomer]
WHERE [BusinessEntityID] BETWEEN 1600 AND 1700
FOR JSON PATH;
GO
[
	{
		"FirstName": "Rebecca",
		"LastName": "Robinson",
		"PhoneNumber": "648-555-0100",
		"City": "Seaford"
	},
	{
		"FirstName": "David",
		"LastName": "Robinett",
		"PhoneNumber": "238-555-0100",
		"City": "Solingen"
	}
]

If you observe carefully, the output is a JSON array. This is true even if only one (1) object is returned, which then requires me to edit the payload manually. Here’s an example:

USE [AdventureWorks2022];
GO
SELECT [FirstName], [LastName], [PhoneNumber], [City]
FROM [Sales].[vIndividualCustomer]
WHERE [BusinessEntityID] = 1700
FOR JSON PATH;
GO
[
	{
		"FirstName": "Rebecca",
		"LastName": "Robinson",
		"PhoneNumber": "648-555-0100",
		"City": "Seaford"
	}
]
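From a consumer’s perspective, the wrapper means an extra unwrapping step even for a guaranteed single row. A quick Python sketch of that inconvenience:

```python
import json

# Single-row output from FOR JSON PATH still arrives as a one-element array
payload = ('[{"FirstName": "Rebecca", "LastName": "Robinson", '
           '"PhoneNumber": "648-555-0100", "City": "Seaford"}]')

rows = json.loads(payload)
assert isinstance(rows, list) and len(rows) == 1

# The extra indexing step that WITHOUT_ARRAY_WRAPPER makes unnecessary
customer = rows[0]
assert customer["FirstName"] == "Rebecca"
```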

Introducing WITHOUT_ARRAY_WRAPPER

Starting with SQL Server 2016, a new option (WITHOUT_ARRAY_WRAPPER) is available. This option can be used with a single-row result to generate a single JSON object as output instead of an array with a single element.

Here’s the same example as above, but with the WITHOUT_ARRAY_WRAPPER clause applied.

USE [AdventureWorks2022];
GO
SELECT [FirstName], [LastName], [PhoneNumber], [City]
FROM [Sales].[vIndividualCustomer]
WHERE [BusinessEntityID] = 1700
FOR JSON PATH, WITHOUT_ARRAY_WRAPPER;
GO
{
	"FirstName": "Rebecca",
	"LastName": "Robinson",
	"PhoneNumber": "648-555-0100",
	"City": "Seaford"
}

Practical use-cases

Below are some practical use-cases where the WITHOUT_ARRAY_WRAPPER option has been very handy for me:

  1. Creating Samples – for API requests, responses and writing documentation!
  2. Nesting/embedding JSON in another JSON
  3. Logging or auditing single events

Word of caution

If you use this option with a multiple-row result, no error will be returned. However, the resulting output is not valid JSON: multiple comma-separated objects are emitted without the enclosing square brackets.
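You can see why a multi-row result breaks downstream parsing. The two-object string in this Python sketch is hand-written to mimic the shape of that multi-row output:

```python
import json

# Mimics what FOR JSON PATH, WITHOUT_ARRAY_WRAPPER emits for two rows:
# comma-separated objects with no enclosing square brackets
multi_row = '{"FirstName": "Rebecca"},{"FirstName": "David"}'

try:
    json.loads(multi_row)
    is_valid = True
except json.JSONDecodeError:
    is_valid = False

# Not parseable as-is
assert not is_valid

# Restoring the square brackets makes it a valid two-element array
assert len(json.loads('[' + multi_row + ']')) == 2
```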

Call to Action

I trust you found this quick nugget helpful. Do let me know if you already knew about it before. I would be interested to read about the various cases in which you have used the FOR JSON PATH, WITHOUT_ARRAY_WRAPPER option.

Until we meet next time,

Be courteous. Drive responsibly.

Disclaimer: The [AdventureWorks2022] sample database (https://learn.microsoft.com/en-us/sql/samples/adventureworks-install-configure?view=sql-server-ver17&tabs=ssms) has been used for all test data used in this post.

#0428 – SQL Server – Demystifying the CURRENT keyword: Why it exists and when to use it


Recently, I was reviewing a T-SQL script and saw an ALTER DATABASE statement with the CURRENT keyword. It seems redundant at first glance. After all, if you are already connected to a database, why would you want to refer to it as CURRENT?

In this post, I try to uncover the purpose, possible use cases and potential mistakes that one can make when using the CURRENT keyword.

The Purpose

First, let’s understand the purpose of the CURRENT keyword. Put very simply, the CURRENT keyword in the ALTER DATABASE statement refers to the database that you are connected to. Instead of hard-coding the database name, you can use CURRENT to make your script more portable and dynamic.

-- Instead of this:
ALTER DATABASE [SQLTwins] SET COMPATIBILITY_LEVEL = 150;

-- You can write:
ALTER DATABASE CURRENT SET COMPATIBILITY_LEVEL = 150;

This is especially useful in deployment pipelines, maintenance scripts, or reusable procedures where the database context may vary.

Typical Use-cases

Here are some practical scenarios where the CURRENT keyword is very useful:

  1. Automated Scripts: Avoid hard-coding database names when deploying changes across multiple databases.
  2. Cross-Environment Compatibility: Whether you’re running the script on dev, test, or prod, CURRENT targets the database that you are connected to.
  3. Contained Databases: In Azure SQL Database or Managed Instances, where scripts run within a specific database context, CURRENT ensures the command targets the right scope.

CURRENT v/s DB_NAME()

You might wonder: why not dynamically fetch the database name using the very familiar DB_NAME() and build the statement?

DECLARE @dbName NVARCHAR(128) = DB_NAME();
EXEC('ALTER DATABASE [' + @dbName + '] SET COMPATIBILITY_LEVEL = 150');

While DB_NAME() works, it forces you into dynamic SQL. Dynamic SQL is often complicated to manage (especially in deployment pipelines) and introduces potential quoting issues. CURRENT is cleaner, safer, and easier to read, more so for simple configuration changes.
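One concrete quoting issue with the dynamic-SQL route: a database name containing `]` (legal in SQL Server) breaks naive bracket concatenation, which is why T-SQL provides QUOTENAME() to double any closing bracket. A Python sketch of that escaping rule (the `Odd]Name` database name is hypothetical):

```python
def quotename(identifier: str) -> str:
    """Mimic T-SQL QUOTENAME(): wrap in brackets, doubling any ']' inside."""
    return '[' + identifier.replace(']', ']]') + ']'

# Naive concatenation produces a broken identifier for a name containing ']'
naive = '[' + 'Odd]Name' + ']'
safe = quotename('Odd]Name')

assert naive == '[Odd]Name]'   # the first ']' ends the identifier too early
assert safe == '[Odd]]Name]'   # valid: ']]' is an escaped ']' inside brackets
```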

Please consider

Before you start using CURRENT in your scripts, be on the lookout for the following common mistakes:

  1. Context Awareness: If your script runs in a multi-database environment, double-check that you’re in the right context before executing. Remember: CURRENT refers to the database you’re connected to, not necessarily the one you intend to modify.
  2. Permissions: CURRENT doesn’t bypass permission checks; the script will fail if your user is not allowed to ALTER the database.
  3. Forgetting Test Coverage: Because CURRENT abstracts the database name, it’s easy to overlook edge cases in testing, especially when deploying across environments. Always validate that your script behaves as expected in all environments and configurations that you intend to use.
  4. Mixing with Dynamic SQL: Using CURRENT with dynamic SQL is unnecessary and can be confusing or redundant (unless it’s a code generator).
  5. Version Support: The CURRENT keyword is fully supported in Azure SQL Database and Azure SQL Managed Instance. For on-premises installations, it requires SQL Server 2012 (11.x) or above. (Why are you on a lower version anyway?)

In conclusion

The CURRENT keyword may seem trivial. However, it plays a subtle role in making T-SQL scripts more portable. It also contributes to scripts being context-aware and cloud-friendly. Whether you’re managing databases in Azure or on-prem, using ALTER DATABASE CURRENT can streamline your operations. It also reduces hard-coded dependencies.

Further Reading

Until we meet next time,

Be courteous. Drive responsibly.

#0427 – SQL Server – Msg 8152, Level 16: Which column is causing “String or binary data would be truncated.”?


As a database administrator or developer, I’m quite sure that you’ve encountered the following error when working with your database queries in SQL Server.

Msg 8152, Level 16, State 30, Line 27
String or binary data would be truncated.

The statement has been terminated.

As is quite obvious by the text, this error occurs when you try to insert or update data that exceeds the maximum length of a column. While this error is common and quite straightforward to understand, it can be frustrating to troubleshoot – especially when you’re not sure which column is causing the issue (e.g. in the case of integrations or data imports).

In newer versions of SQL Server (2019 and above), the error message has become much more descriptive and I have found it very helpful to quickly identify and resolve the problem.

Let’s check it out with a code example. I am creating a test database and then changing its compatibility level to 130 to simulate SQL Server 2016 equivalent behaviour. Then I will attempt to insert data (42 characters) that exceeds the maximum length of the [SomeColumn1] column (25 characters).

SET ANSI_WARNINGS ON;
GO

USE [master];
GO
CREATE DATABASE [SqlTwinsDB];
GO
USE [master];
GO
ALTER DATABASE [SqlTwinsDB]
SET COMPATIBILITY_LEVEL = 130; --Simulating SQL 2016 on my SQL 2022 instance
GO


USE [SqlTwinsDB];
GO
--Safety Check
BEGIN
	IF OBJECT_ID('[dbo].[stringLengthError]','U') IS NOT NULL
	BEGIN
		DROP TABLE [dbo].[stringLengthError];
	END
END

--Declarations
BEGIN
    CREATE TABLE [dbo].[stringLengthError] ([Id]          INT         NULL,
                                            [SomeColumn1] VARCHAR(25) NULL,
                                            [SomeColumn2] VARCHAR(25) NULL
                                           );
END

--Run the test
BEGIN
    INSERT INTO [dbo].[stringLengthError] ([Id],
                                           [SomeColumn1],
                                           [SomeColumn2]
                                          )
    VALUES (1,
            REPLICATE('nav',14), --14*3 = 42 characters, can't fit into [SomeColumn1]
            REPLICATE('nav',5)   --5*3  = 15 characters,  will fit into [SomeColumn2]
           );
END

/* Expectation: Get the following error:

Msg 8152, Level 16, State 30, Line 27
String or binary data would be truncated.
The statement has been terminated.

Problem is: We don't know which column is generating the error, 
            unless we know the data and do some calculations.
*/

--Cleanup
BEGIN
	IF OBJECT_ID('[dbo].[stringLengthError]','U') IS NOT NULL
	BEGIN
		DROP TABLE [dbo].[stringLengthError];
	END
END
GO

The error message we get is quite generic and doesn’t provide any information about which column is causing the issue. This means we have to dump the data into another temporary table/file and begin the tedious task of comparing lengths to find which field/column is being hit with the offending data.

Msg 8152, Level 16, State 30, Line 27
String or binary data would be truncated.

The statement has been terminated.
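That tedious comparison can at least be scripted once the data is dumped out. A Python sketch of the idea, with column names and lengths mirroring the test table above:

```python
# Column definitions mirroring [dbo].[stringLengthError]
columns = {"SomeColumn1": 25, "SomeColumn2": 25}

# The row from the INSERT above: 42 and 15 characters respectively
row = {"SomeColumn1": "nav" * 14, "SomeColumn2": "nav" * 5}

# Report every column whose value exceeds its declared length
offenders = {name: len(value)
             for name, value in row.items()
             if len(value) > columns[name]}

assert offenders == {"SomeColumn1": 42}  # only [SomeColumn1] overflows
```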

Let us repeat this test with the compatibility level set to 150 or higher (I will go with the default for my version, 160). Since the database is already created, I will simply change the compatibility level and try again.

SET ANSI_WARNINGS ON;
GO

USE [master];
GO
ALTER DATABASE [SqlTwinsDB]
SET COMPATIBILITY_LEVEL = 160;
GO

USE [SqlTwinsDB];
GO
--Safety Check
BEGIN
	IF OBJECT_ID('[dbo].[stringLengthError]','U') IS NOT NULL
	BEGIN
		DROP TABLE [dbo].[stringLengthError];
	END
END

--Declarations
BEGIN
    CREATE TABLE [dbo].[stringLengthError] ([Id]          INT         NULL,
                                            [SomeColumn1] VARCHAR(25) NULL,
                                            [SomeColumn2] VARCHAR(25) NULL
                                           );
END

--Run the test
BEGIN
    INSERT INTO [dbo].[stringLengthError] ([Id],
                                           [SomeColumn1],
                                           [SomeColumn2]
                                          )
    VALUES (1,
            REPLICATE('nav',14), --14*3 = 42 characters, can't fit into [SomeColumn1]
            REPLICATE('nav',5)   --5*3  = 15 characters,  will fit into [SomeColumn2]
           );
END

--Cleanup
BEGIN
	IF OBJECT_ID('[dbo].[stringLengthError]','U') IS NOT NULL
	BEGIN
		DROP TABLE [dbo].[stringLengthError];
	END
END
GO

This time, we still get the “String or binary data would be truncated” error, but with:

  • A different message code (2628 instead of 8152)
  • A more verbose message which shows:
    • The column that is causing the error
    • The value that is being truncated
Msg 2628, Level 16, State 1, Line 31
String or binary data would be truncated in table 'SqlTwinsDB.dbo.stringLengthError', column 'SomeColumn1'. Truncated value: 'navnavnavnavnavnavnavnavn'.
The statement has been terminated.

Personally speaking, I find this to be a significant enhancement and a real time-saver when troubleshooting large data imports. Tip: if you’re working on an integration project and frequently encounter these types of errors, it may be worth considering upgrading your database engine to take advantage of this feature.

Sometimes, you may not even get this error and the truncation happens silently. Do read my previous post https://nakulvachhrajani.com/2014/08/04/0341-sql-server-random-string-or-binary-data-would-be-truncated-errors-during-bulk-data-loads/ that talks about connection configuration parameters (more specifically, ANSI_WARNINGS) that may affect this behaviour.

Until we meet next time,

Be courteous. Drive responsibly.

#0426 – Microsoft Teams – Quick fix for highly reduced/attenuated audio in Teams when sharing videos


If you are using Microsoft Teams, you may have observed that video volume suddenly drops when you share your screen. This is quite annoying for both the presenter and the attendees. I have finally found (and tested) a solution and thought of sharing it with you all.

Root cause: Based on various forums that I visited, it seems that a change was made to the architecture of Teams that reduces the volume of notifications and alerts while on a Teams call. Unfortunately, this also reduces the volume of any audio being played.

Solution: The fix is outlined in the Microsoft support article “Share sound from your computer in Microsoft Teams meetings or live events” (Microsoft Support):

  1. In the Windows Search bar, type “Sound Settings”
  2. Under Advanced -> Go to “More Sound Settings”
  3. Click on the “Communications” tab
  4. Set it to “Do Nothing”

IMPORTANT: You may want to reset it back to 50% or 80% (depending upon your preference) after your presentation is over – else it becomes quite annoying when other calls are going on.

Until we meet next time,

Be courteous. Drive responsibly.