Tag Archives: Debugging

Articles related to Microsoft SQL Server debugging

#0372 – SQL Server – SSIS – VSTA, Could not load file or assembly ‘Microsoft.VisualStudio.Tools.Applications.Core’


We use SQL Server Integration Services (SSIS) for all of our data movement and upgrades. Recently, when running a particular SSIS package, one of our new servers threw a strange error:

There was an exception while loading Script Task from XML: System.IO.FileNotFoundException: Could not load file or assembly ‘Microsoft.VisualStudio.Tools.Applications.Core’

I was on call, and one of the first things I did was cover the basics by checking out the environment. The SSIS package file (.dtsx) was located on the file system exactly where I expected it to be, so I was stumped to see the “File Not Found” exception. The strange thing was that the same package ran fine in the staging environment and in my local development environment.

After checking various other things and taking a break, I refocused my attention on the error message. That’s when the words simply popped out at me – “…while loading Script Task from XML…” – the script task was where the problem was. Script tasks are most commonly used to set connections on the connection managers, and that’s exactly what we were doing.

The problem

Script tasks in SSIS leverage the Visual Studio Tools for Applications (VSTA). I therefore headed over to Add/Remove Programs to confirm that VSTA was indeed installed. Only the 64-bit version was installed on the affected server. As with all things SSIS on a 64-bit system, I had expected to see both the x64 and the x86 versions of VSTA installed. Below is a screenshot of Add/Remove Programs from one of my test VMs, which is a 32-bit environment (I could not capture the actual screenshot because the affected machine was a production server).

Screenshot showing the installation of VSTA in a 32-bit environment. In a 64-bit environment, one would expect to see 2 entries - one for the 64-bit version and one for the 32-bit version

Sample VSTA installation on a 32-bit environment (non-production)

The Solution

I immediately contacted the on-call IT team, who kindly shared the SQL Server media from which the installation was done. Navigating to the following path on the media gave me the 32-bit installable for VSTA. We installed the 32-bit version of VSTA and the issue was resolved.

Path on the SQL Server media to get VSTA: ..\redist\VSTA\runtime\x86

Have you ever encountered this or other interesting issues in your deployment experience? How did you troubleshoot them? Do share via the blog comments – I would love to know!

Until we meet next time,
Be courteous. Drive responsibly.

#0371 – SQL Server – Best Practices – Always use precise decimal data-types


I recently came across a concrete example of something that has always been maintained as a best practice – when working on items that are sensitive to decimal precision, always use precise data-types like DECIMAL instead of FLOAT or REAL.

The following example clearly demonstrates that using a FLOAT in mathematical calculations may change the outcome of the computation. For financial calculations, this would mean incorrect reporting of revenues.

USE tempdb;
GO
DECLARE @floatValue FLOAT = 1.5,
        @decimalValue DECIMAL(5,2) = 1.5,
        @multiplicationFactor INT = 159;

SELECT @floatValue AS FloatValue,
       @decimalValue AS DecimalValue,
       (@multiplicationFactor*(@floatValue/100)) AS FloatBasedValue,
       (@multiplicationFactor*(@decimalValue/100)) AS DecimalBasedValue,
       ROUND((@multiplicationFactor*(@floatValue/100)),2) AS RoundedFloatBasedValue,
       ROUND((@multiplicationFactor*(@decimalValue/100)),2) AS RoundedDecimalBasedValue;

The output of this simple computation is provided in the screenshot below:

Result shows that we have a rounding error when a value based off a FLOAT is rounded to 2 decimals v/s when a fixed precision DECIMAL is rounded

Impact of using imprecise datatypes in mathematical computations

As can clearly be seen, the computations performed using an imprecise datatype (FLOAT) result in a rounding error. When compounded over multiple derived calculations, this can have a serious impact on the end result.
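To illustrate how this compounds, here is a minimal sketch (not part of the original demo) that accumulates the value 0.1 ten thousand times using both data types; the FLOAT total typically drifts away from the exact figure, while the DECIMAL total does not:

USE tempdb;
GO
--Illustrative sketch: accumulate 0.1 ten thousand times with FLOAT and DECIMAL
DECLARE @floatTotal   FLOAT          = 0,
        @decimalTotal DECIMAL(18, 2) = 0,
        @iterations   INT            = 0;

WHILE @iterations < 10000
BEGIN
    SET @floatTotal   += 0.1;    --FLOAT cannot represent 0.1 exactly
    SET @decimalTotal += 0.1;    --DECIMAL(18,2) stores 0.10 precisely
    SET @iterations   += 1;
END;

SELECT @floatTotal          AS FloatTotal,
       @decimalTotal        AS DecimalTotal,   --exactly 1000.00
       (@floatTotal - 1000) AS FloatDrift;     --typically a tiny, non-zero drift
GO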

The conclusion, therefore, is simple: always use precise data-types when working with financial calculations.


Until we meet next time,
Be courteous. Drive responsibly.

#0366 – SQL Server – SSMS – Simulating a TCP/IP connection on the same machine as the server


Most development environments that I have worked in have a developer edition of Microsoft SQL Server on the local machine of the developer. Hence, both the client (SSMS) and the database server are often on the same machine.

What this means is that the default connection mechanism used for all connections will be “Shared Memory”. However, when working on features like auditing, the ability to simulate a TCP/IP connection becomes important. Today, I document a mechanism that you can use to initiate a TCP/IP connection even when both the server and the client (SSMS) are on the same machine, provided the server has been configured to allow TCP/IP connections in the Configuration Manager.

SQLTwins, Post #0366 - Ensuring that TCP/IP connections to the server are enabled via the SQL Server Configuration Manager

SQL Server Configuration Manager – ensuring that TCP/IP connections to the server are enabled

When initiating a new connection using the SQL Server Management Studio, simply click on the “Options” button in the “Connect to Database Engine” window, and navigate to the “Connection properties” tab.

SQLTwins, Post #0366 - SQL Server Management Studio (SSMS) - The "Connect to Database Engine" window

SQL Server Management Studio (SSMS) – The “Connect to Database Engine” window

SQLTwins, Post #0366 - SQL Server Management Studio (SSMS) - Using the Connection Properties tab to change the Network protocol when connecting to a SQL Server instance

SQL Server Management Studio (SSMS) – Using the Connection Properties tab to change the Network protocol

In the “Network” group, locate the “Network Protocol” drop-down. Change the value from “<default>” to “TCP/IP“.

Finally, verify the credentials in the “Login” tab and click “Connect“. That’s it!

Use the CONNECTIONPROPERTY() function described in my previous post to confirm that you are indeed connected via TCP/IP.
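For reference, a quick query along these lines (a sketch using documented connection properties) confirms the transport for the current session:

--'net_transport' returns 'TCP' for a TCP/IP connection and 'Shared memory' for a shared memory connection;
--the local address/port properties are populated only for TCP/IP connections
SELECT CONNECTIONPROPERTY('net_transport')     AS NetTransport,
       CONNECTIONPROPERTY('local_net_address') AS LocalNetAddress,
       CONNECTIONPROPERTY('local_tcp_port')    AS LocalTcpPort;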

Further Reading

  • Testing your connection strings using SSMS [Link]
  • Using CONNECTIONPROPERTY to determine connection properties [Link]

Until we meet next time,
Be courteous. Drive responsibly.

#0362 – SQL Server – Change Detection in Microsoft SQL Server – Limitations of T-SQL: BINARY_CHECKSUM and CHECKSUM


Identification of changes made to the data in a system is an important aspect of data storage design and data cleanup/quality improvement activities. For most enterprise systems, the need to implement change detection is driven by some sort of auditing requirements. A couple of years ago, I authored a series of articles on SQLServerCentral.com and on this blog around data change and tamper detection mechanisms available in Microsoft SQL Server. These are:

  • An in-depth look at change detection in SQL Server – Part 01 [Link]
  • An in-depth look at change detection in SQL Server – Part 02 [Link]
  • HASHBYTES: Is CHECKSUM really required? [Link]
  • HASHBYTES – String or binary data would be truncated: Msg 8152 [Link]

A recent coincidence at work prompted me to write this post. I was working on comparing a set of records from one table to another after a data cleanup exercise when I realized that a couple of my checksums were coming up as 0, i.e. the same value as for a blank string (as discussed in my article on Change Detection, Part 01). The twist to the tale was that there were no blank strings in the sample that I was using.

The Problem

In order to demonstrate the issue clearly, I have prepared the following sample. As can be seen from the sample, both CHECKSUM and BINARY_CHECKSUM work as expected as long as the string under evaluation is less than 26,000 characters in length. As soon as the string hits the 26,000-character mark, the functions stop working and simply return 0 (the same value as for a blank string).

USE tempdb;
GO

DECLARE @stringPatternToReplicate VARCHAR(MAX) = 'a';
DECLARE @stringPatternToEvaluate  VARCHAR(MAX);
DECLARE @replicateTillLength      INT = 25999;

SELECT @stringPatternToEvaluate = REPLICATE(@stringPatternToReplicate, @replicateTillLength);

SELECT LEN(@stringPatternToEvaluate) AS StringLength,
       CHECKSUM(@stringPatternToEvaluate) AS CheckSumForString,
       BINARY_CHECKSUM(@stringPatternToEvaluate) AS BinaryCheckSumForString,
       @stringPatternToEvaluate AS StringPatternToEvaluate;

--Repeat after incrementing the @replicateTillLength by 1
SELECT @replicateTillLength += 1;

SELECT @stringPatternToEvaluate = REPLICATE(@stringPatternToReplicate, @replicateTillLength);

SELECT LEN(@stringPatternToEvaluate) AS StringLength,
       CHECKSUM(@stringPatternToEvaluate) AS CheckSumForString,
       BINARY_CHECKSUM(@stringPatternToEvaluate) AS BinaryCheckSumForString,
       @stringPatternToEvaluate AS StringPatternToEvaluate;
GO

Solution?

The quick solution that I moved ahead with was to perform a direct comparison of the strings involved.

Now, we know that CHECKSUM and BINARY_CHECKSUM will not work if the datatype being evaluated is one of: text/ntext/image/cursor/xml. But in the example provided above, the strings were plain VARCHAR(MAX) values, i.e. the classic VARCHAR with the MAX specifier to allow storage of more than 8,000 characters.
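Another avenue worth considering (a sketch only, and not something I tried during the original cleanup) is to compare HASHBYTES values instead of checksums. Note that prior to SQL Server 2016, HASHBYTES only accepted inputs up to 8,000 bytes, so this sketch assumes SQL Server 2016 or later:

USE tempdb;
GO
--Sketch: hash the full VARCHAR(MAX) value and compare hashes instead of checksums
DECLARE @stringPatternToEvaluate VARCHAR(MAX) = REPLICATE(CAST('a' AS VARCHAR(MAX)), 26000);

SELECT LEN(@stringPatternToEvaluate)                   AS StringLength,
       CHECKSUM(@stringPatternToEvaluate)              AS CheckSumForString,   --0, as demonstrated above
       HASHBYTES('SHA2_256', @stringPatternToEvaluate) AS HashForString;       --still produces a usable hash
GO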


Summary

CHECKSUM and BINARY_CHECKSUM can fail to detect a change if:

  • The characters involved fall outside the extended ASCII range, i.e. have a character code greater than 255
  • The string is a blank string (a quick check follows after this list)
  • The string is more than 25,999 characters in length (as demonstrated above)
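As a quick check of the blank-string case (based on the earlier Change Detection article), both functions return 0 for an empty string, which is indistinguishable from the result produced by the 26,000-character string above:

SELECT CHECKSUM('')        AS CheckSumOfBlankString,
       BINARY_CHECKSUM('') AS BinaryCheckSumOfBlankString;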

Open Item

I would like to invite views from you, the kind reader, on whether you have faced a similar issue in the past or whether you have any other ideas to resolve it.

I have written up a Microsoft Connect ticket for this issue to look for an official explanation [MS Connect item #2021430].

Further Reading

  • An in-depth look at change detection in SQL Server – Part 01 [Link]
  • An in-depth look at change detection in SQL Server – Part 02 [Link]
  • HASHBYTES: Is CHECKSUM really required? [Link]
  • HASHBYTES – String or binary data would be truncated: Msg 8152 [Link]

Until we meet next time,
Be courteous. Drive responsibly.

#0358 – SQL Server – Best Practices – Avoid implicit conversions (Part 02) – Implicit conversions, COALESCE and ISNULL


Last week, I talked about how implicit conversions can cause unexpected issues. In this post, I will take the example forward. Implicit conversion can create problems not only during basic data load, but also during data retrieval and manipulation operations. Today, we will see how implicit conversions can cause COALESCE to error out in a rather unexpected way.

The script below creates the test data similar to what we saw in the previous post.

USE tempdb;
GO
--Demo objects & data
IF OBJECT_ID('dbo.BewareOfImplicitConversion','U') IS NOT NULL
    DROP TABLE dbo.BewareOfImplicitConversion;
GO

CREATE TABLE dbo.BewareOfImplicitConversion
    ( RecordId                   INT         NOT NULL IDENTITY(100,10),
      RowDescription             VARCHAR(20) NOT NULL,
      IntMasqueradingAsCharacter VARCHAR(20)     NULL,
      IntValue                   INT             NULL
    );
GO

INSERT INTO dbo.BewareOfImplicitConversion (RowDescription, IntMasqueradingAsCharacter, IntValue)
VALUES ('One',   NULL, 1),
       ('Two',   NULL, 4),
       ('Three',  '3', 6),
       ('Four',   '4', 8),
       ('Five',   'V', 10);
GO

Now, let us try to select the data from the test table – we will extract the data for all rows except the last one (where we have inserted a character value, 'V', instead of a numeric value in the column dbo.BewareOfImplicitConversion.IntMasqueradingAsCharacter).

USE tempdb;
GO
--With COALESCE, everything will work fine until character data is encountered
SELECT ic.RecordId,
       ic.RowDescription,
       COALESCE(ic.IntMasqueradingAsCharacter, ic.IntValue) AS CalculatedValue
FROM dbo.BewareOfImplicitConversion AS ic
WHERE ic.IntValue < 10;
GO

Query output: COALESCE falls back to the IntValue column wherever IntMasqueradingAsCharacter is NULL

As we can see from the output above, the SELECT worked: COALESCE successfully used the values from the integer column wherever the column IntMasqueradingAsCharacter was NULL, and implicitly converted the numeric-looking character values ('3' and '4') to INT. Now, let us try to fetch the data for the last row.

USE tempdb;
GO
SELECT ic.RecordId,
       ic.RowDescription,
       COALESCE(ic.IntMasqueradingAsCharacter, ic.IntValue) AS CalculatedValue
FROM dbo.BewareOfImplicitConversion AS ic
WHERE ic.IntValue = 10;
GO

The following error is returned:


Msg 245, Level 16, State 1, Line 1
Conversion failed when converting the varchar value ‘V’ to data type int.

The reason is quite simple: as discussed in my previous post, data type precedence comes into effect, and an attempt is made to implicitly convert the character data to an integer.

Root Cause

Per MSDN (see references below), COALESCE follows the CASE expression rules and returns the data type of the expression with the highest precedence. This means that COALESCE attempts to convert all supplied values to the data-type with the highest precedence, which in this case is INT (the data-type of the column IntValue). Because the value ‘V’ cannot be implicitly converted to an integer, the COALESCE fails with the data-type conversion error.
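One way to sidestep the precedence rule (a sketch, not from the original post) is to make the conversion explicit, so that COALESCE only ever deals with character data:

USE tempdb;
GO
--Casting IntValue to VARCHAR means both COALESCE inputs are character data,
--so no implicit conversion of 'V' to INT is attempted
SELECT ic.RecordId,
       ic.RowDescription,
       COALESCE(ic.IntMasqueradingAsCharacter,
                CAST(ic.IntValue AS VARCHAR(20))) AS CalculatedValue
FROM dbo.BewareOfImplicitConversion AS ic
WHERE ic.IntValue = 10;
GO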

A possible solution

One of the solutions is to use the ISNULL function instead. ISNULL() returns the data-type of the first supplied parameter (the second parameter is converted to it accordingly). What this means in this case is that both parameters are treated as character values even though a data-type with higher precedence exists among the inputs.

Here’s the output when we use ISNULL() instead of COALESCE().

USE tempdb;
GO
--With ISNULL, the return datatype is always same as the data-type of the first expression.
SELECT ic.RecordId,
       ic.RowDescription,
       ISNULL(ic.IntMasqueradingAsCharacter, ic.IntValue) AS CalculatedValue
FROM dbo.BewareOfImplicitConversion AS ic
WHERE ic.IntValue < 10;

SELECT ic.RecordId,
       ic.RowDescription,
       ISNULL(ic.IntMasqueradingAsCharacter, ic.IntValue) AS CalculatedValue
FROM dbo.BewareOfImplicitConversion AS ic
WHERE ic.IntValue = 10;
GO

Output of the ISNULL() queries: the values are returned as character data and no conversion error is raised

Conclusion

As I end this 2-part series, the only point I want to make is that one should be very, very careful in the initial database design and during code review in choosing the right data-types and data-type combinations.

These issues are very data-centric, and the analogy that I often like to quote is that they are like ghosts under the bed: they lie dormant most of the time, but when the right data conditions arise, they rear their heads, simply because the development teams did not keep the necessary conversions and checks in place.

References

  • COALESCE and its comparison with ISNULL [MSDN Link]

Until we meet next time,
Be courteous. Drive responsibly.