Tag Archives: Guidance

Articles intended to provide guidance to the readers based on my past experiences.

#0423 – SQL Server – Exporting Database Diagrams for offline viewing


Recently, a colleague of mine and I used Database Diagrams to explain our database structure to a new team member. Typically, we would have started by “drawing” the relationships on a whiteboard, but in a world where everyone was working remotely, this was the only option.

The Visual Aspect

I am sure almost everyone in the audience has also used database diagrams at some point as a means of database documentation before switching to methods (like Extended Properties) more suited to modern database development and deployment techniques.

Unfortunately, these techniques cannot demonstrate relationships visually. Database diagrams provide this unique ability and hence warrant a rethink in terms of usage.

Now, the goal of this post is not to show “how” to build a database diagram, but to:

  1. Demonstrate effective ways of providing as much detail as possible on the diagram
  2. Show how to export the diagram for offline reference

Increasing the level of detail in a Database Diagram

For effective database diagramming, the recommendation is to group the tables/information shown on the diagram using one of the following two strategies:

  1. By schema: If you use database schemas, group objects by schema and have at least one database diagram per schema
  2. By use-case: Alternatively, tables that are related from a domain perspective (or for a particular use-case, e.g. Authentication) can be grouped into a diagram of their own

Once you have put all the required tables on the diagram, you may want to right-click on the canvas and choose “Show Relationship Labels”. Additional annotations may also be applied as necessary by using the “New Text Annotation” functionality.

Image showing how to enable the visibility of relationship labels by right-clicking on the canvas and choosing “Show Relationship Labels”

By default, the database diagram will only show the table name and the list of columns. For maximum detail, right-click on the table name -> select “Table View” -> select “Standard”.

Screenshot showing how to select the “Standard” table view, which increases the level of detail on the diagram

As you will notice, using the “Standard” table view adds more details (like the data type and nullability of columns) to the diagram. The displayed columns can be added/removed using the “Modify Columns” option of the same menu.

Column selector for the “Standard” view
“Standard” view of database tables on a diagram

This process needs to be repeated for every table on the diagram. Once done, arrange the tables on the canvas manually.

Exporting the Database Diagram

Once a database diagram is prepared, it can be saved in the database. However, there is no built-in way to export or save a diagram to a file that can be sent via e-mail or stored on a collaboration tool for offline viewing. There is, however, a very simple way to achieve the ultimate goal: store the diagram as an image!

No, I am not talking about taking multiple screenshots and stitching them together in an image editing app. It is very simple to copy the entire diagram as a single image.

Simply right-click on the canvas and choose “Copy Diagram to Clipboard”

Screenshot showing how to copy the database diagram to the clipboard

Once the diagram is on the clipboard, it can be pasted as an image to any image editing application or document!
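As an aside, diagrams saved in the database are stored in the [dbo].[sysdiagrams] system table (which is created the first time diagramming support is set up in the database). A minimal sketch of a query to list the saved diagrams:

--List the database diagrams saved in the current database
--NOTE: [dbo].[sysdiagrams] exists only after diagramming support has been set up
SELECT [name] AS [DiagramName],
       [principal_id] AS [OwnerPrincipalId],
       [version]
FROM [dbo].[sysdiagrams];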


I trust this little tip comes in handy someday.

Until we meet next time,

Be courteous. Drive responsibly.

#0418 – SQL Server – How to disable Shared Memory connections and configure a SQL Server instance to accept connections only via TCP/IP?


As part of my normal development activities, I use my trusted developer instance on my local machine. However, I realized that while I could connect to the instance using SQL Server Management Studio (SSMS), I could not connect to it from some of my SSIS packages.

I realized shortly afterwards that this was because the instance, by default, had only the “Shared Memory” protocol enabled for connections. I enabled TCP/IP as well and was able to complete my work. However, I thought it best to document the steps I took for future reference.

What is my current connection using – Shared Memory or TCP/IP?

Before we go any further, let us investigate the default connection mechanism used by client applications running on the same machine as the SQL Server instance.

To do this, I have opened connections to the SQL Server via both SSMS and SQLCMD, and am querying the DMV (sys.dm_exec_connections) to investigate the protocol being used by each connection. The session IDs used by each connection have been highlighted in the image below.

A connection to my local SQL Server instance via SQL Server Management Studio
Another connection using SQLCMD to the same SQL Server instance

Now, because my connections are active, I can take their session IDs and query the DMV (sys.dm_exec_connections), which gives me the physical transport protocol used by each connection.
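A minimal sketch of that query follows (the session IDs below are placeholders; substitute the ones from your own connections):

--Check the physical transport protocol used by specific sessions
--NOTE: The session IDs below are placeholders; use your own
SELECT [session_id],
       [net_transport]
FROM [sys].[dm_exec_connections]
WHERE [session_id] IN (51, 52);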

As can be clearly seen, the physical transport protocol used when connecting to a SQL Server on the same machine is “Shared Memory” by default.

If I explicitly try to connect to the instance using TCP/IP, note that I get an error #26 (Error Locating Server/Instance Specified):

Error 26 (Error Locating Server/Instance Specified) when connecting to the SQL server using TCP/IP network protocol in the SSMS “Connection Properties” window

Enabling TCP/IP

In order to change the enabled protocols, one needs to use the SQL Server Configuration Manager.

In the Configuration Manager, when we navigate to the SQL Server Network Configuration -> Protocols for <SQL Server Instance>, we notice that TCP/IP and Named Pipes are disabled – only the Shared Memory protocol is enabled.

Notice that by default, only the “Shared Memory” physical transport protocol is enabled

Now, all that needs to be done is to enable TCP/IP from the Protocol properties (right-click -> Enable or simply double-click to open the properties window) and restart the SQL Server service.

TCP/IP connections are now enabled.

Specifying Transport Protocol when connecting to the database

Now that I have reconfigured the SQL Server instance, I can specify the protocol when connecting to the SQL Server:

  1. In SSMS, when connecting to a SQL Server, click on “Options”
  2. Under “Connection Properties”, choose “TCP/IP” as the connection protocol
Accessing the “Connection Properties” screen when connecting to an instance using the Management Studio
Choosing the Network Protocol as “TCP/IP”

When I use the DMV (sys.dm_exec_connections) to check the session, I can see that it is now using TCP/IP and not Shared Memory.

Connections to the SQL Server are now using TCP/IP as the physical network protocol
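For a quick self-check from the same query window, the current session can be inspected via @@SPID (a minimal sketch):

--Confirm the transport protocol used by the current session
SELECT [session_id],
       [net_transport]
FROM [sys].[dm_exec_connections]
WHERE [session_id] = @@SPID;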

How to disable Shared Memory?

One of the questions we started with was: how does one disable “Shared Memory” for connections?

This can be achieved in the same way as we enabled TCP/IP. Simply use the SQL Server Configuration Manager to disable the “Shared Memory” protocol.

Disabling the Shared Memory protocol

I hope you will find this post helpful.

Disclaimer: Please DO NOT try this on your production SQL Server instances.


Until we meet next time,

Be courteous. Drive responsibly.

#0417 – SQL Server – Select row count of local temp tables


I was recently contacted by a fellow team member who was interested in finding out the number of records in a temporary table which was being used as part of a long-running script.

As I had mentioned in one of my previous posts, local temporary tables are only accessible to the connection that created them. Hence, if a script is already running, the only connection that can execute queries against the local temporary table is the one running the script, which makes this requirement a tricky one to work with.

The Solution

The solution is to realize that all tables, permanent or temporary, consume storage either in a system/user database or in [tempdb]. Hence, we can query this metadata to fulfill our requirement. Let’s check it out with a demo.

Demo

In any window, we can access the dynamic management view (DMV): [sys].[dm_db_partition_stats]. As we know, this DMV returns page and row-count information for every partition in the current database.

So, let’s open a new query window in the SQL Server Management Studio and run the following query:

--Create a new test database
IF (DB_ID('SQLTwins') IS NULL)
BEGIN
    CREATE DATABASE [SQLTwins];
END
GO

USE [SQLTwins];
GO

--Window 01
BEGIN
    IF OBJECT_ID('tempdb..#SQLTwinsDemo','U') IS NOT NULL
    BEGIN
        DROP TABLE [dbo].[#SQLTwinsDemo];
    END

    CREATE TABLE [dbo].[#SQLTwinsDemo] ([Number] INT         NOT NULL,
                                        [Value]  VARCHAR(50) NOT NULL
                                       );

    INSERT INTO [dbo].[#SQLTwinsDemo] ([Number],
                                       [Value]
                                      )
    VALUES (9, 'Nine' ),
           (8, 'Eight'),
           (7, 'Seven'),
           (6, 'Six'  ),
           (5, 'Five' ),
           (4, 'Four' ),
           (3, 'Three'),
           (2, 'Two'  ),
           (1, 'One'  );
END

Now, in another window, try to run a simple row count query. As expected, it returns an error.

USE [SQLTwins];
GO
SELECT COUNT(*) FROM [dbo].[#SQLTwinsDemo];
GO
Msg 208, Level 16, State 0, Line 3
Invalid object name '#SQLTwinsDemo'.

Now, let’s use the DMV: [sys].[dm_db_partition_stats] in another window to get the row count information.

USE [SQLTwins];
GO
--Now, do this in Window #2
BEGIN
    SELECT [st].[name] AS [TableName],
           [partitionStatistics].[row_count] AS [RowCount]
    FROM [tempdb].[sys].[dm_db_partition_stats] AS [partitionStatistics]
    INNER JOIN [tempdb].[sys].[tables] AS [st] ON [st].[object_id] = [partitionStatistics].[object_id]
    WHERE [st].[name] LIKE '%SQLTwinsDemo%'
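      --Local temp table names in [tempdb] are padded with underscores and a hex suffix, hence the wildcard match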
      AND ([partitionStatistics].[index_id] = 0  --Table is a heap
           OR
           [partitionStatistics].[index_id] = 1  --Table has a clustered index
          )
END
Fetching RowCount for local temporary tables using SQL Server DMVs

Hope it helps!

Until we meet next time,

Be courteous. Drive responsibly.


#0414 – Analyzing Event Viewer Logs in Excel


When troubleshooting issues, the Event Viewer is one of the handiest tools available. Assuming that appropriate coding practices were used during application development, the Event Viewer contains a log of most problems, whether in the system, the configuration or the application code.

The only problem is analyzing the Event Viewer logs when you have a thousand events. It becomes extremely difficult to answer questions like the following while going through events serially:

  1. Events logged by type for each source
  2. Events by severity
  3. Events by category
  4. And many more such analytical questions…

These analytical requirements are best achieved with tools like Microsoft Excel. And so, I went about analyzing Event Viewer logs in Microsoft Excel in just 2 steps.

Step #1: Export the Event Viewer Logs to XML

  1. Once the Event Viewer is launched, navigate to the Event Log to be evaluated
  2. Right-click on the Event Log and choose “Save All Events As” option
  3. In the Save As dialog, choose to save the Events as an XML file
    • If asked to save display information, you can choose not to store any or choose a language of your choice

And that’s it for the first step!

Save the Event Viewer Logs
Choose to save the Event Viewer Logs as an XML file

Step #2: Import the XML file into Excel

  1. Launch Microsoft Excel
  2. In the File -> Open dialog, choose to search files of “XML” type
  3. Select the exported Event Viewer Log file
  4. In the Import Options, you can choose to import as an “XML Table”
    • Excel will prompt to create/determine the XML schema automatically. It’s okay to allow Excel to do so

And that’s it – the Event Viewer Logs are now in Excel and you can use all native Excel capabilities (sort, filter, pivot and so on).

Import the Event Viewer Logs as an XML table
Event Viewer Logs successfully imported into Excel

I do hope you found this tip helpful. If you have more such thoughts and ideas, drop in a line in the Comments section below.

Until we meet next time,

Be courteous. Drive responsibly.

#0404 – SQL Server – Interview Question – What is logical data integrity?


Recently, I encountered an interesting question in one of the forums:

What is logical data integrity?

The person who posted the question was reading about SQL Server and databases in general when this term was encountered. Because the answer to this question can help clarify one’s understanding of data design concepts, I thought it would also make a very interesting interview question.

Today, I try to describe what data integrity is.

What is data integrity?

Data is a critical part of any business. But, data by itself holds no value. For data to be information of business value, it needs to be valid with respect to the business domain.

A piece of data may be perfectly acceptable from the physical design perspective, but may still be invalid for the domain.

Let’s take an example: a heart rate of 2000 is perfectly acceptable for an integer. That is physical data integrity; the value is valid with respect to the physical design of the database. But if we are talking about an application that captures and analyzes patient/medical data, a heart rate of 2000 is totally invalid and indicates some sort of logical bug/corruption.

Other examples would be a meeting end date that’s less than the meeting start date or a business/person without a name.

A data point may not be acceptable within the business rules defined for a domain. Similarly, what’s valid as a data point for one domain may be invalid for another domain. Ensuring that your database only accepts valid values with respect to your domain is what I call “logical data integrity”.

Types of Data Integrity

Logical data integrity can be enforced in two ways:

Declarative Data Integrity

If data integrity is enforced via the data model (implemented via the Data Definition Language, i.e. DDL), it is declarative data integrity. One would enforce declarative integrity via the elements of the table definition, as shown in the sketch after this list:

  • Appropriate Data-Types
    • In our example for the medical domain, it would limit the possibility of corruption if a TINYINT is used to store the heart rate instead of an INT
  • Primary Keys
    • Avoid the insertion of duplicate data!
  • Foreign Keys
    • Ensures that all references are known (it is a valid primary key in another table)
  • Default, Check, Unique and Not-NULL constraints
    • Unique and Not-NULL constraints help maintain uniqueness and avoid insertion of unknown (NULL) data
    • Usage of default constraints ensures that, by default, unknown (NULL) values are replaced by valid default values
    • Check constraints help ensure that data meets the valid range defined by the business (e.g. a check constraint would help ensure that the meeting end date is greater than or equal to the start date)
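
To make this concrete, here is a minimal sketch of a table definition that enforces these rules declaratively (the table, column and constraint names, including the referenced [dbo].[Room] table, are purely illustrative):

--Illustrative sketch: enforcing logical data integrity declaratively via DDL
CREATE TABLE [dbo].[Meeting] (
    [MeetingId] INT          NOT NULL,                             --NOT NULL: unknown values are rejected
    [Subject]   VARCHAR(100) NOT NULL,
    [RoomId]    INT          NOT NULL,
    [StartDate] DATETIME2(0) NOT NULL,
    [EndDate]   DATETIME2(0) NOT NULL,
    CONSTRAINT [PK_Meeting] PRIMARY KEY CLUSTERED ([MeetingId]),   --no duplicate meetings
    CONSTRAINT [FK_Meeting_Room] FOREIGN KEY ([RoomId])
        REFERENCES [dbo].[Room] ([RoomId]),                        --all room references must be known
    CONSTRAINT [CK_Meeting_Dates] CHECK ([EndDate] >= [StartDate]) --end date cannot precede the start date
);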

Procedural Data Integrity

Legacy applications (I have worked on a few that match this description), which were originally developed in the days of flat-file databases, often used procedural code to enforce data integrity.

When these were migrated to Microsoft SQL Server, the integrity was enforced via stored procedures and triggers to avoid re-engineering the database structure and changing the application code to match the new structure.

Data integrity enforced via code, i.e. via stored procedures, triggers and/or functions is called procedural data integrity.
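
As an illustration, here is a minimal sketch of a trigger enforcing the meeting-date rule procedurally (the object names are again illustrative, reusing the hypothetical [dbo].[Meeting] table from the sketch above):

--Illustrative sketch: enforcing the same date rule procedurally, via a trigger
CREATE TRIGGER [dbo].[trg_Meeting_ValidateDates]
ON [dbo].[Meeting]
AFTER INSERT, UPDATE
AS
BEGIN
    IF EXISTS (SELECT 1 FROM [inserted] WHERE [EndDate] < [StartDate])
    BEGIN
        RAISERROR('Meeting end date cannot be earlier than the start date.', 16, 1);
        ROLLBACK TRANSACTION;
    END;
END;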

My take: Procedural code can be disabled, can fail or can have bugs. This may allow the application to store bad/invalid data rather than prevent it.

I believe procedural data integrity is acceptable as long as it is used as a “fail-safe” mechanism. The primary mechanism to ensure logical data integrity should be declarative in nature, in my humble opinion.

The above is my take on logical data integrity. I welcome your thoughts on the subject in the space below.

Until we meet next time,

Be courteous. Drive responsibly.