Category Archives: Imported from BeyondRelational

These posts are imported from my old blog page: http://beyondrelational.com/modules/2/blogs/77/nakuls-blog.aspx

#0250-SQL Server 2012-High Availability & Disaster Recovery [HADR] related DMVs and Catalog Views


SQL Server Catalog Views & Dynamic Management Views (DMVs) take an administrator to the very heart of SQL Server. DMVs expose vital diagnostic information that is useful for gauging not only the overall health of the SQL Server instance, but also that of the databases it hosts. The list of DMVs that changed in SQL Server 2008 R2/2012 is available in one of my previous posts here: http://bit.ly/XobXup.

Microsoft SQL Server 2012 comes with a wide variety of changes designed to make SQL Server more available. These features are collectively known as High Availability & Disaster Recovery [HADR] features and are often recognized by their code name, “AlwaysOn”. Some of the key enhancements introduced in SQL Server 2012 are:

  • AlwaysOn extends Windows clustering
  • A shared storage disk is no longer required to configure clustering
  • One primary replica and up to four secondary replicas (of which up to two can be synchronous) can be configured
  • Readable secondaries, supporting DBCC checks and backups
  • Log backups and full backups with the COPY_ONLY option can be performed on any replica
  • Log backups taken on different replicas do not break the log chain and can therefore be used as part of the recovery strategy
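
As a quick illustration of the new diagnostic surface (this sketch assumes at least one Availability Group is already configured on the instance), the replica-level DMVs can be joined to check synchronization health:

SELECT  ag.name                          AS AvailabilityGroupName,
        ar.replica_server_name           AS ReplicaServerName,
        ars.role_desc                    AS ReplicaRole,
        ars.synchronization_health_desc  AS SynchronizationHealth,
        ars.connected_state_desc         AS ConnectedState
FROM    sys.availability_groups AS ag
        INNER JOIN sys.availability_replicas AS ar
                ON ag.group_id = ar.group_id
        INNER JOIN sys.dm_hadr_availability_replica_states AS ars
                ON ar.replica_id = ars.replica_id;
GO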

It would be next to impossible to cover all of HADR in a single post. Therefore, I would redirect you to the posts by Balmukund Lakhani (B|T), who is an expert in all matters SQL and has demonstrated the HADR features of SQL Server in depth.

A complete list of the HADR related DMVs and Functions is available at the links below:

  • AlwaysOn Availability Groups Catalog Views: Link
  • AlwaysOn Availability Groups Dynamic Management Views and Functions: Link

Until we meet next time,

Be courteous. Drive responsibly.

#0249-SQL Server – sp_can_tlog_be_applied-How to determine if a given log backup can be restored?


Microsoft SQL Server databases support three recovery models – Simple, Bulk-logged and Full – each targeted at providing unique recovery capabilities. For this post, it is assumed that the reader is aware of the various recovery models.

When restoring a database under the FULL recovery model, the recovery involves restoring the database from up to three different kinds of backups:

  • Full Database Backup
  • Differential Database Backup
  • Transaction Log Backup

As can be understood from the post on backup timelines and understanding the database restore process by Pinal Dave, a.k.a. SQLAuthority (B|T), the fastest way to restore a database in the full recovery model is:

  • Restore the last known, good full backup
  • Restore the latest of all known differential backups
  • Restore, in sequence, each transaction log backup taken after the last differential backup
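
The sequence above can be sketched in T-SQL as follows (the database name and backup file paths here are hypothetical placeholders):

-- 1. Restore the last known good full backup, leaving the database restoring
RESTORE DATABASE SalesDB
FROM DISK = 'C:\Backups\SalesDB_Full.bak'
WITH NORECOVERY;

-- 2. Restore the latest differential backup
RESTORE DATABASE SalesDB
FROM DISK = 'C:\Backups\SalesDB_Diff.bak'
WITH NORECOVERY;

-- 3. Restore each subsequent log backup, in sequence
RESTORE LOG SalesDB
FROM DISK = 'C:\Backups\SalesDB_Log1.trn'
WITH NORECOVERY;

-- 4. Bring the database online once the last backup has been applied
RESTORE DATABASE SalesDB WITH RECOVERY;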

This works fine when we have a handful of transaction log files to restore. But it becomes difficult to identify which particular transaction log backup should be restored next when we have a collection of log files (on SAN storage, tape or any other form of storage), some of which would be invalid given the backups already restored. The question therefore is:

Can a given transaction log backup be restored on a database?

To help answer this question, Microsoft SQL Server provides a built-in stored procedure – sp_can_tlog_be_applied (http://msdn.microsoft.com/en-us/library/ms187769.aspx).

This system stored procedure can be used to determine if a given transaction log backup can be restored on a database in recovery (if the database is not in recovery, no backups can be restored). Allow me to present an example to demonstrate its usage.

Preparing the backup set

The set of scripts below performs the following tasks:

  1. Create a test database
  2. Create a test table within the newly created database
  3. Change the database recovery model to FULL
  4. Take a full backup of the database
  5. Modify some data and take two log backups
USE master;
GO
--1. Create database
CREATE DATABASE LogVerificationDB
ON PRIMARY (NAME = PrimaryFile,
            FILENAME = 'C:\SQL Databases\LogVerificationDB\PrimaryFile.mdf'
           )
LOG ON (NAME = LogFile,
        FILENAME = 'C:\SQL Databases\LogVerificationDB\LogFile.ldf'
       );
GO

--2. Create test data
USE LogVerificationDB;
GO
CREATE TABLE LogVerificationTable (ObjectId INT, ObjectName VARCHAR(200));
GO

--3. Change recovery mode to FULL
ALTER DATABASE LogVerificationDB SET RECOVERY FULL;
GO

USE master;
GO
BACKUP DATABASE LogVerificationDB 
TO DISK = 'C:\SQL Databases\LogVerificationDB\LogVerificationDB_Full.bak';
GO

--4. Make some modifications
USE LogVerificationDB;
GO
INSERT INTO LogVerificationTable (ObjectId, ObjectName)
SELECT so.object_id, so.name
FROM sys.objects AS so;
GO

--5. Take a log backup
USE master;
GO
BACKUP LOG LogVerificationDB
TO DISK = 'C:\SQL Databases\LogVerificationDB\LogVerificationDB_Log1.bak';
GO

--6. Make some more modifications
USE LogVerificationDB;
GO
INSERT INTO LogVerificationTable (ObjectId, ObjectName)
SELECT so.object_id, so.name
FROM sys.objects AS so;
GO

--7. Take another log backup
USE master;
GO
BACKUP LOG LogVerificationDB
TO DISK = 'C:\SQL Databases\LogVerificationDB\LogVerificationDB_Log2.bak';
GO

sp_can_tlog_be_applied in action

Now, let us assume that the database crashed and needs to be restored from the available backups. To do so, I manually dropped the database.

--08. Drop the test database
USE master;
GO
ALTER DATABASE LogVerificationDB SET SINGLE_USER;
GO
DROP DATABASE LogVerificationDB;
GO

First, I restore the full backup with NORECOVERY specified, indicating that other backups are yet to be restored.

--09. Restore WITHOUT Recovery
USE master;
GO
RESTORE DATABASE LogVerificationDB
FROM DISK = 'C:\SQL Databases\LogVerificationDB\LogVerificationDB_Full.bak'
WITH  FILE = 1,  
      NOUNLOAD ,  
      STATS = 10,  
      NORECOVERY ,  
MOVE N'PrimaryFile'    TO N'C:\SQL Databases\LogVerificationDB\PrimaryFile.mdf',
MOVE N'LogFile' TO N'C:\SQL Databases\LogVerificationDB\LogFile.ldf';
GO

--09A. Check status
SELECT state_desc, state, user_access_desc FROM sys.databases WHERE name = 'LogVerificationDB';
GO

image

Now that the full backup has been restored, we know that we can start restoring the differential and/or transaction log backups. But which backup file goes first? We use the sp_can_tlog_be_applied system stored procedure to check which of Log backups #1 and #2 can be applied at this stage.

--10. Check if the transaction logs can be applied
--    Since we restored the database with no recovery, transaction log backups can be applied
--    However, can Log Backup #01 be applied?
DECLARE @canTxLogBeApplied BIT = 0;
EXEC sp_can_tlog_be_applied 
         @backup_file_name = 'C:\SQL Databases\LogVerificationDB\LogVerificationDB_Log1.bak', 
         @database_name = 'LogVerificationDB', 
         @result = @canTxLogBeApplied OUTPUT;

SELECT @canTxLogBeApplied AS CanTransactionLog#1BeApplied;
GO

--11. Check if the transaction logs can be applied
--    Since we restored the database with no recovery, transaction log backups can be applied
--    However, can Log Backup #02 be applied?
DECLARE @canTxLogBeApplied BIT = 0;
EXEC sp_can_tlog_be_applied 
         @backup_file_name = 'C:\SQL Databases\LogVerificationDB\LogVerificationDB_Log2.bak', 
         @database_name = 'LogVerificationDB', 
         @result = @canTxLogBeApplied OUTPUT;

SELECT @canTxLogBeApplied AS CanTransactionLog#2BeApplied;
GO

Because transaction log backup #2 is newer than #1, it cannot be applied directly; log backup #1 has to be applied first. This is confirmed in the screenshot below, where sp_can_tlog_be_applied returns a 1 for log backup #1, but a 0 for log backup #2.

image

Conclusion:

As can be seen from the example above, a simple check using sp_can_tlog_be_applied can help administrators determine the sequence in which transaction log backup files are to be restored on a database.
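
Building on this, here is a sketch of how the ordering check could be automated over a set of candidate log backup files (the table variable of file names is a hypothetical stand-in for whatever backup inventory is actually in use):

-- Given a list of candidate log backup files, find the one that can be
-- applied next to a database that is in the RESTORING state.
DECLARE @candidateFiles TABLE (BackupFile NVARCHAR(260));
INSERT INTO @candidateFiles (BackupFile)
VALUES ('C:\SQL Databases\LogVerificationDB\LogVerificationDB_Log1.bak'),
       ('C:\SQL Databases\LogVerificationDB\LogVerificationDB_Log2.bak');

DECLARE @backupFile   NVARCHAR(260),
        @canBeApplied BIT;

DECLARE fileCursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT BackupFile FROM @candidateFiles;

OPEN fileCursor;
FETCH NEXT FROM fileCursor INTO @backupFile;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @canBeApplied = 0;
    EXEC sp_can_tlog_be_applied
             @backup_file_name = @backupFile,
             @database_name    = N'LogVerificationDB',
             @result           = @canBeApplied OUTPUT;

    IF (@canBeApplied = 1)
        PRINT 'Restore next: ' + @backupFile;

    FETCH NEXT FROM fileCursor INTO @backupFile;
END;
CLOSE fileCursor;
DEALLOCATE fileCursor;
GO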

Did you know?

Did you know that sp_can_tlog_be_applied was available as far back as SQL Server 2000? I will admit that I did not know about it before researching for this post.

Further Reading

  • sp_can_tlog_be_applied (BOL Link)
  • Restore Sequence and Understanding NORECOVERY and RECOVERY (Link)
  • Backup Timeline and Understanding of Database Restore Process in Full Recovery Model (Link)

Until we meet next time,

Be courteous. Drive responsibly.

#0248-SQL Server 2012-Column Store related Catalog Views


SQL Server Catalog Views & Dynamic Management Views (DMVs) take an administrator to the very heart of SQL Server. DMVs expose vital diagnostic information that is useful for gauging not only the overall health of the SQL Server instance, but also that of the databases it hosts. The list of DMVs that changed in SQL Server 2008 R2/2012 is available in one of my previous posts here: http://bit.ly/XobXup.

Traditional SQL Server indexes are “row-centric”, i.e. they focus on grouping and storing the data for each record. Most applications process one record at a time, and this mechanism works well for them. However, some kinds of queries need to process/read all records for a particular set of columns. Columnstore indexes, introduced in Microsoft SQL Server 2012, group and store the data for each column individually and then join the column segments to complete the index. A columnstore index can significantly improve the execution times for such queries.
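
As a minimal sketch (the table and column names below are hypothetical), a columnstore index is created with the dedicated syntax introduced in SQL Server 2012:

-- Create a nonclustered columnstore index over the columns that are
-- scanned by aggregate/reporting queries.
-- Note: in SQL Server 2012, a table with a columnstore index becomes read-only.
CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_FactSales
ON dbo.FactSales (SaleDate, ProductId, Quantity, Amount);
GO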

An introduction to the Columnstore indexes is available in the following posts from Vinod Kumar, a.k.a. “Extreme Experts” (B|T) and Pinal Dave, a.k.a. “SQLAuthority” (B|T):

The resources above offer a great start to columnstore index usage. The following two catalog views are available for columnstore indexes.

sys.column_store_segments

Introduced In: SQL Server 2012

Purpose: Contains a record for each column in the columnstore index.

Permissions: Requires VIEW DEFINITION permission on the underlying table. For the columns has_nulls, base_id, magnitude, min_data_id, max_data_id, and null_value to be populated, the user also requires SELECT permission on the table.

BOL Link: http://msdn.microsoft.com/en-us/library/gg509105.aspx

sys.column_store_dictionaries

Introduced In: SQL Server 2012

Purpose: Contains a record for each column in a memory-optimized columnstore index.

Permissions: Requires VIEW DEFINITION permission on the underlying table. For the columns last_id, entry_count and data_ptr to be populated, the user also requires SELECT permission on the table.

BOL Link: http://msdn.microsoft.com/en-us/library/gg492082.aspx
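
As a sketch of how these views can be explored (joining through sys.partitions on hobt_id, which is the documented join column in the SQL Server 2012 version of the view), the segment metadata for the columnstore indexes in a database can be inspected as follows:

-- Inspect the segments of the columnstore indexes in the current database.
SELECT  OBJECT_NAME(p.object_id)  AS TableName,
        p.partition_number        AS PartitionNumber,
        css.column_id             AS ColumnId,
        css.segment_id            AS SegmentId,
        css.row_count             AS SegmentRowCount,
        css.on_disk_size          AS OnDiskSizeBytes
FROM    sys.column_store_segments AS css
        INNER JOIN sys.partitions AS p
                ON css.hobt_id = p.hobt_id
ORDER BY TableName, css.column_id, css.segment_id;
GO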

Until we meet next time,

Be courteous. Drive responsibly.

#0247-SQL Server – sp_refreshview – Refresh metadata for non-schema bound view


Database objects often undergo changes to their structure over the life-cycle of the product. These changes include, but are not limited to, the addition and removal of columns and parameters. Whenever such changes are made, the meta-data for any dependent database objects that are not schema-bound also needs to be updated. Here’s a small example.

Let us create a data view on the HumanResources.Employee table as shown below:

USE AdventureWorks2012 ;
GO

--Create a non-schema bound view for the HumanResources.Employee table
CREATE VIEW HumanResources.uview_Employee
AS  SELECT  he.*
    FROM    HumanResources.Employee AS he ;
GO

--Select data from the data-view
SELECT  TOP 1 *
FROM    HumanResources.uview_Employee ;
GO

image

Now, let us add a column to the HumanResources.Employee table. This would be a common scenario in a service release – as new functionality gets introduced to the system, tables are modified to accommodate the changes.

USE AdventureWorks2012 ;
GO
--Add a dummy column to the underlying table
ALTER TABLE HumanResources.Employee
ADD ExternalReferenceNumber VARCHAR(20);
GO

--Add some default data to the table
UPDATE HumanResources.Employee
SET ExternalReferenceNumber = ('EXT' + CAST(BusinessEntityID AS VARCHAR(10)));
GO

Because the data view has been created using a SELECT * clause, one would assume that the newly added column will be available in the results. Let us execute the view to confirm this assumption:

USE AdventureWorks2012 ;
GO
--Select data from the data-view
--Notice that the newly added column is not available
SELECT  TOP 1 BusinessEntityID, ExternalReferenceNumber
FROM    HumanResources.uview_Employee ;
GO

Msg 207, Level 16, State 1, Line 3

Invalid column name ‘ExternalReferenceNumber’.

As can be seen from the message above, the newly added column is in fact NOT included in the results. This is because the view is not schema-bound and the underlying meta-data is not yet updated.

To explicitly refresh the view’s meta-data (in cases where the view cannot be rebuilt), the system stored procedure sp_refreshview needs to be used.

--Now, execute sp_refreshview
EXEC sp_refreshview N'HumanResources.uview_Employee' ;
GO

Now, let us attempt to select data from the view again:

--Select data from the data-view
--Notice that the newly added column is now available
SELECT  TOP 1 BusinessEntityID, ExternalReferenceNumber
FROM    HumanResources.uview_Employee ;
GO

image

As can be seen from the screenshot above, the newly added column is now available in the results of the view.

A word of caution

While this particular example uses SELECT * in the view definition for demonstration purposes, it is recommended to always define the columns explicitly. Explicitly listing the required columns will raise errors immediately if a column is dropped, and will prevent unexpected behavior when new columns are added. Alternatively, schema binding can be used.
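
On a related note, here is a sketch of how every non-schema-bound view in a database could be refreshed in one pass (sys.sql_modules.is_schema_bound is used to skip schema-bound views; treat this as an illustrative script rather than a hardened tool):

-- Refresh the metadata of every non-schema-bound view in the current database.
DECLARE @viewName NVARCHAR(517);

DECLARE viewCursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT QUOTENAME(SCHEMA_NAME(v.schema_id)) + N'.' + QUOTENAME(v.name)
    FROM   sys.views AS v
           INNER JOIN sys.sql_modules AS m
                   ON v.object_id = m.object_id
    WHERE  m.is_schema_bound = 0;

OPEN viewCursor;
FETCH NEXT FROM viewCursor INTO @viewName;
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC sp_refreshview @viewname = @viewName;
    FETCH NEXT FROM viewCursor INTO @viewName;
END;
CLOSE viewCursor;
DEALLOCATE viewCursor;
GO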

Until we meet next time,

Be courteous. Drive responsibly.

#0246 – SQL Server 2012 – FileStream and FileTable related DMVs and Catalog Views


SQL Server Catalog Views & Dynamic Management Views (DMVs) take an administrator to the very heart of SQL Server. DMVs expose vital diagnostic information that is useful for gauging not only the overall health of the SQL Server instance, but also that of the databases it hosts. The list of DMVs that changed in SQL Server 2008 R2/2012 is available in one of my previous posts here: http://bit.ly/XobXup.


The storage of large object data like images, videos and other such non-relational objects in a relational database has always been challenging. The biggest of these challenges involve controlling transaction sizes, storage and performance. Storing these objects on the file system overcomes those issues, but fails to provide the basic data integrity and other management features that databases provide. Microsoft SQL Server 2008 introduced a new concept called FILESTREAM, which allows SQL Server to work with large object data directly from the file system in a transparent fashion.
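
As a minimal sketch (the database, filegroup and path names here are hypothetical, and FILESTREAM must already be enabled at the instance level), a FILESTREAM-enabled database and table look like this:

-- Create a database with a FILESTREAM filegroup (paths are placeholders).
CREATE DATABASE DocumentStore
ON PRIMARY (NAME = DocumentStoreData,
            FILENAME = 'C:\SQL Databases\DocumentStore\DocumentStore.mdf'),
FILEGROUP DocumentStoreFS CONTAINS FILESTREAM
           (NAME = DocumentStoreFiles,
            FILENAME = 'C:\SQL Databases\DocumentStore\Files')
LOG ON    (NAME = DocumentStoreLog,
           FILENAME = 'C:\SQL Databases\DocumentStore\DocumentStore.ldf');
GO

-- A FILESTREAM column requires a ROWGUIDCOL column on the table.
USE DocumentStore;
GO
CREATE TABLE dbo.Documents
(
    DocumentId   UNIQUEIDENTIFIER ROWGUIDCOL NOT NULL UNIQUE
                 DEFAULT NEWSEQUENTIALID(),
    DocumentName VARCHAR(200),
    DocumentBody VARBINARY(MAX) FILESTREAM NULL
);
GO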


A complete list of the FileStream and FileTable related DMVs and Functions is available at the links below:



  • FileStream and FileTable Catalog Views (Link)
  • FileStream and FileTable Dynamic Management Views (Link)

Let us first take a look at the newly introduced FileStream and FileTable related catalog views.


Catalog Views


sys.database_filestream_options


Introduced In: SQL Server 2012


Purpose: For each database hosted on the given SQL Server instance, this catalog view contains one record showing the level of non-transactional access to FILESTREAM data in FileTables that is enabled by the administrator.

SELECT database_id AS DatabaseId,
       DB_NAME(database_id) AS DatabaseName,
       non_transacted_access AS NonTransactedAccessEnum,
       non_transacted_access_desc AS NonTransactedAccessDescription
FROM   sys.database_filestream_options
WHERE  database_id <> 32767;

Permissions: Visibility is limited to securables that the user owns or has been granted permission to.


BOL Link: http://msdn.microsoft.com/en-us/library/gg492071.aspx


sys.filetables


Introduced In: SQL Server 2012


Purpose: Returns a record for each FileTable in SQL Server 2012.

SELECT object_id,
       is_enabled,
       directory_name,
       filename_collation_id,
       filename_collation_name
FROM   sys.filetables;

Permissions: Visibility is limited to securables that the user owns or has been granted permission to.


BOL Link: http://msdn.microsoft.com/en-us/library/gg492214.aspx


sys.filetable_system_defined_objects


Introduced In: SQL Server 2012


Purpose: When FileTables are created, objects such as constraints and indexes are created along with them. This catalog view provides a list of all the system-defined objects that are related to FileTables.

SELECT object_id,
       parent_object_id
FROM   sys.filetable_system_defined_objects;

Permissions: Visibility is limited to securables that the user owns or has been granted permission to.


BOL Link: http://msdn.microsoft.com/en-us/library/gg492092.aspx


Dynamic Management Views


sys.dm_filestream_non_transacted_handles


Introduced In: SQL Server 2012


Purpose: This DMV displays one row per currently open non-transactional file handle associated with FileTable data.


Permissions: No documentation available on BOL


BOL Link: http://msdn.microsoft.com/en-us/library/ff929168.aspx


Further Reading



  • The Art of SQL Server FILESTREAM [Amazon.com, book by Jacob Sebastian & Sven Aelterman – Link]
  • FILESTREAM Wait Types [Link]
  • Working with FileTables in SQL Server 2012 – Part 01 [Link]
  • Working with FileTables in SQL Server 2012 – Part 02 [Link]

Until we meet next time,


Be courteous. Drive responsibly.