Wednesday, January 18, 2017

SQL Server Identity value suddenly became 1000+

I noticed that the values of an identity-enabled column in one of my tables suddenly started from 1000+ for newly inserted records. The last values before this sudden jump were 27, 28 and 29. How did it happen without resetting the seed or increment of the identity property?

This is not a bug or something specific to SQL Server 2016. It is a result of a new implementation introduced with SQL Server 2012. As per my readings, since SQL Server 2012, SQL Server caches 1,000 values for an identity property if the data type is int, and 10,000 values for bigint and numeric. If an unexpected restart happens (in my case, I believe that an update caused the restart), the cache can be lost, hence numbering starts from the next available value. This is normal and you do not need to worry if you experience it.

By enabling Trace Flag 272, you can get the old behavior back, or you can use a Sequence instead. Read this thread for understanding these two options: http://stackoverflow.com/questions/17587094/identity-column-value-suddenly-jumps-to-1001-in-sql-server
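Here is a minimal sketch of the second option (the sequence and table names are illustrative). A sequence created with NO CACHE does not pre-allocate values, so it does not jump after a restart;

-- Sequence with no caching of values
CREATE SEQUENCE dbo.CustomerSeq AS int
START WITH 1 INCREMENT BY 1 NO CACHE;
GO

-- Use the sequence as the default instead of an identity property
CREATE TABLE dbo.Customer
(
 CustomerId int NOT NULL
  DEFAULT (NEXT VALUE FOR dbo.CustomerSeq) PRIMARY KEY
 , Name varchar(100) NOT NULL
);
GO

For the first option, as the thread above describes, Trace Flag 272 is set as a startup parameter (-T272) of the service.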

Friday, January 13, 2017

SQL Server - Adding an Authenticator when encrypting data

Among the multiple methods given for securing data stored in a SQL Server database, even though the latest one, Always Encrypted, is available, we still use Keys, Passphrases and Certificates for encrypting data. When keys such as Symmetric Keys or Asymmetric Keys, or Passphrases, are used for encrypting data, an additional parameter called an Authenticator can be supplied. Since I recently used this for one of my database solutions, I thought to make a note on it.

What is the usage of the Authenticator? Why should we use it? Let's take an example and understand it.

The following code creates a database and a table that holds Customers. The Security Code of the customer will be encrypted.

-- Create a database
CREATE DATABASE Sales;
GO

-- Connect with the newly created database
USE Sales;
GO

-- Create a master key
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'Pa$$w0rd';
GO 

-- Create a certificate for protecting our key 
CREATE CERTIFICATE SalesCertificate WITH SUBJECT = 'Sales Certificate';
GO

-- Create the key for encrypting data
-- Note that the created certificate protects it.
CREATE SYMMETRIC KEY SalesKey
WITH ALGORITHM = AES_128  
ENCRYPTION BY CERTIFICATE SalesCertificate;
GO

-- Creating a table that holds customers
-- Note the Security Code column; it is varbinary 
-- because the code will be encrypted before being stored
CREATE TABLE dbo.Customer
(
 CustomerId int identity(1,1) PRIMARY KEY
 , Name varchar(100) NOT NULL
 , SecurityCode varbinary(256) NOT NULL
);
GO

Let's insert some records.

OPEN SYMMETRIC KEY SalesKey DECRYPTION BY CERTIFICATE SalesCertificate ;
GO

INSERT INTO dbo.Customer
 (Name, SecurityCode)
VALUES
 ('Dinesh', ENCRYPTBYKEY(KEY_GUID('SalesKey'), 'CUS005XZ'))

INSERT INTO dbo.Customer
 (Name, SecurityCode)
VALUES
 ('Yeshan', ENCRYPTBYKEY(KEY_GUID('SalesKey'), 'CUS02ANX3'))


Once inserted, the data will look like below;


And if I try to retrieve records, I need to decrypt the encrypted values;
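Something like the below query can be used for that (a sketch; the key has to be opened before decrypting);

OPEN SYMMETRIC KEY SalesKey DECRYPTION BY CERTIFICATE SalesCertificate;
GO

SELECT CustomerId, Name
 , Convert(varchar(20), DECRYPTBYKEY(SecurityCode)) SecurityCode
FROM dbo.Customer;

CLOSE SYMMETRIC KEY SalesKey;
GO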


So far so good. Now let's try to understand the usage of the Authenticator. Assume that Yeshan needs to access some securables that can be accessed only by Dinesh through an application. For that, all Yeshan needs is to log in to the application using Dinesh's Security Code. Since he does not know Dinesh's Security Code, one way of accessing the application using Dinesh's account is replacing Dinesh's encrypted code with his own. Let's see whether it is possible.

The following code updates Dinesh's account with Yeshan's code. As you see, it gets updated, and now Yeshan can use Dinesh's account as he knows the code.
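A sketch of such an update could look like this (assuming Yeshan simply copies his own encrypted value over Dinesh's record);

-- Replace Dinesh's encrypted code (CustomerId 1)
-- with Yeshan's encrypted code (CustomerId 2)
UPDATE dbo.Customer
 SET SecurityCode = (SELECT SecurityCode FROM dbo.Customer WHERE CustomerId = 2)
WHERE CustomerId = 1;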


This should not be allowed, and since it is possible, we have to make sure that an encrypted code cannot be replaced like that. That is what we can do with the Authenticator.

Look at the following code. It passes two additional values when encrypting. The third one, which is 1, says that this has an Authenticator. The fourth parameter is the data from which to derive the Authenticator.

OPEN SYMMETRIC KEY SalesKey DECRYPTION BY CERTIFICATE SalesCertificate ;
GO

-- Update Security codes with CustomerId as the Authenticator
UPDATE dbo.Customer
 SET SecurityCode = ENCRYPTBYKEY(KEY_GUID('SalesKey'), 'CUS005XZ', 1, Convert(varbinary(256), CustomerId))
WHERE CustomerId = 1;

UPDATE dbo.Customer
 SET SecurityCode = ENCRYPTBYKEY(KEY_GUID('SalesKey'), 'CUS02ANX3', 1, Convert(varbinary(256), CustomerId))
WHERE CustomerId = 2;


Just like with the previous code, values are encrypted now. However, if Yeshan tries the same trick, see the result;
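Here is a sketch of the attempt (assuming SalesKey is open in the session); the swap still succeeds, but decryption with the Authenticator exposes it;

-- Yeshan replaces Dinesh's encrypted code again
UPDATE dbo.Customer
 SET SecurityCode = (SELECT SecurityCode FROM dbo.Customer WHERE CustomerId = 2)
WHERE CustomerId = 1;

-- Decrypting with the Authenticator: the tampered row returns NULL
SELECT CustomerId, Name
 , Convert(varchar(20), DECRYPTBYKEY(SecurityCode, 1, Convert(varbinary(256), CustomerId))) SecurityCode
FROM dbo.Customer;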


As you see, even though Dinesh's code has been replaced with Yeshan's code, when we try to decrypt Dinesh's value, it returns NULL because the Authenticator is different. This is the usage of the Authenticator.

Note that we used CustomerId as the Authenticator, but you can use something else, something uncommon, as the Authenticator to make it more secure and avoid malicious activities like this.


Thursday, January 12, 2017

Incorrect syntax near 'TRIPLE_DES'. - SQL Server throws an error when trying to use deprecated algorithms

Assume that you use SQL Server 2016 and are trying to create a Symmetric Key or Asymmetric Key for encrypting data. If you try to use an algorithm like TRIPLE_DES, you will get the error below;

Msg 102, Level 15, State 1, Line 20
Incorrect syntax near 'TRIPLE_DES'.

Here is sample code for reproducing this error;

-- Create a database
CREATE DATABASE Sales;
GO

-- Connect with the newly created database
USE Sales;
GO

-- Create a master key
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'Pa$$w0rd';
GO 

-- Create a certificate for protecting our key 
CREATE CERTIFICATE SalesCertificate WITH SUBJECT = 'Sales Certificate';
GO

-- Create the key for encrypting data
-- Note that the created certificate protects it.
-- However, this will throw an error
CREATE SYMMETRIC KEY SalesKey
WITH ALGORITHM = TRIPLE_DES  
ENCRYPTION BY CERTIFICATE SalesCertificate;
GO

What is the reason for this? This algorithm is deprecated in SQL Server 2016. Not only that, all algorithms except AES_128, AES_192, and AES_256 are deprecated in SQL Server 2016.

What if you still need one of these deprecated algorithms? It is possible, but you need to downgrade the database by changing the Compatibility Level to 120 or below.

This code shows the way of doing it.


USE master;
GO

-- Change the compatibility level to 120
ALTER DATABASE Sales
SET COMPATIBILITY_LEVEL = 120;

-- And check again
USE Sales;
GO

-- This will work now without any error
CREATE SYMMETRIC KEY SalesKey
WITH ALGORITHM = TRIPLE_DES  
ENCRYPTION BY CERTIFICATE SalesCertificate;
GO

Even though you can use the algorithm after changing the Compatibility Level, remember that a lower Compatibility Level might not let you use all the functionality available with SQL Server 2016. Therefore, if possible, use the allowed algorithms only.

Sunday, January 8, 2017

Table Design Considerations - SQL Server Brain Basher of the Week #063

Here is the Brain Basher, or the interview question, of the week. This is something every database engineer should know; it is not limited to database administrators.

What would be the considerations you make when you are asked to create a table for holding data?

Let me elaborate on the question. How do you start designing the table when you are asked to design it for a particular entity? If you are an ordinary engineer who thinks traditionally, your answer could be;



"Need to check the requirements and create a script with relevant columns, and apply normalization as need."

Do you think that the answer is similar to the cartoon shown above? 

Being an interviewer, I prefer to hear something more, something that explains the interviewee's experience. Let's point out some valuable considerations we have to make when a table has to be designed.
  • Read the attributes identified by the BA and understand the business requirement - yes, you may think this is not something we should be concerned with, but trust me, you need to know it.
  • Analyze and see whether this needs to be a normalized or denormalized table. Based on the requirement, decide the level, i.e., how far we need to go through the normalization rules.
  • Decide the data types of the attributes and their sizes. We need to pick the best, thinking of the present and the future. Should we think about past data as well? Yes, we should, if we need to load legacy data.
  • Consider the datetime format and whether columns need to hold Unicode characters.
  • Analyze whether it needs partitions and whether partitions need to be distributed among multiple file groups (preferably multiple volumes).
  • Constraints required. In addition to the Primary Key, we may add additional constraints such as Foreign Keys, Unique Keys and some rules.
  • Whether an artificial (surrogate) key is required.
  • Number of indexes required, whether we need Columnstore or not, the order of them, and whether they need any filters.
  • Whether compression is required.
There are a few other minor things as well, but these points show that the interviewee is smart and knowledgeable. Remember, it is not just these points; knowing the details of them is a must.
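To make some of these points concrete, here is a hypothetical table that reflects a few of them (all names are illustrative, and dbo.Department is assumed to exist);

-- Surrogate key, explicit sizes, constraints, Unicode where needed,
-- a filtered index and page compression
CREATE TABLE dbo.Employee
(
 EmployeeId int IDENTITY(1,1) NOT NULL PRIMARY KEY -- surrogate key
 , EmployeeCode char(8) NOT NULL UNIQUE -- natural key kept unique
 , FullName nvarchar(200) NOT NULL -- Unicode for multilingual names
 , DepartmentId int NOT NULL
  REFERENCES dbo.Department (DepartmentId) -- foreign key constraint
 , Salary decimal(12,2) NOT NULL CHECK (Salary >= 0) -- rule as a constraint
 , ResignedDate date NULL
)
WITH (DATA_COMPRESSION = PAGE);
GO

-- A filtered index that covers only resigned employees
CREATE NONCLUSTERED INDEX IX_Employee_ResignedDate
ON dbo.Employee (ResignedDate) WHERE ResignedDate IS NOT NULL;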

Saturday, January 7, 2017

SQL Server - Adding Code Snippet and Using Existing Code Snippet

I once wrote a post on SQL Server Template Explorer that describes the available TSQL templates and how they can be used. Just like templates, we have been given some ready-made Code Snippets that help us construct statements easily. Not only that, we can add our own snippets using the Code Snippet Manager.

First of all, let's see how we can use existing code snippets. Assume that you need to create a SQL Login and you cannot remember the syntax. What you can do is;

    1. Either select the Insert Snippet... context menu item in the Query Window or press Ctrl+K, Ctrl+X (press Ctrl and hold, then press K and then X).

    

    2. Select Login folder and then select Create SQL Authentication Login.
    
    3. Change the code as you need.

    
Note that, like adding code using Template Explorer, you do not get another interface for changing values; values need to be changed manually.

If you need to add your own code snippet, you can take a copy of an existing one, change it as you want and save it with your own name. Assume that you need to add an Azure Firewall Setting as a code snippet. Here are the steps;

    1. Open the SQL Server Code Snippet folder. If you have selected the default location when installing SQL Server, the path would be C:\Program Files (x86)\Microsoft SQL Server\130\Tools\Binn\ManagementStudio\SQL\Snippets\1033.

    2. Create a folder called Azure (or name as you need).


    3. Update the SnippetsIndex.xml file by adding the following node, which points to the newly created folder. (Note that this file cannot be modified if you have not opened the editor as Administrator. If changing it using Notepad, open Notepad as an Administrator and then open the file for modification.)


    4. Take a copy of an existing snippet and place it in the Azure folder. I have taken Create SQL Authentication Login.snippet and renamed it as Create Azure Server Level Firewall Rule.

    5. Open the Create Azure Server Level Firewall Rule file (open it as an Administrator) and modify <title>, <description> and <author> under <header> with your details.

    6. Modify the <declaration> node and the <code> node as per the snippet you need to add. In this example, the required code is EXEC sp_set_firewall_rule and it needs three parameters: Rule name, Starting IP address and Ending IP address. Each parameter has to be added as a <literal> node, configuring its Name (ID), ToolTip and Default value, and the code has to be added in the <code> node.

    7. Done. Now the folder that contains the snippet has to be added to the Code Snippet Manager. Open Management Studio and select the Code Snippet Manager menu item in the Tools menu.

    8. Click on Add and add the Azure folder (or the folder you created).

    9. Now the code snippet is available and can be used through its shortcut, just like the built-in ones.
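For reference, here is a sketch of the statement that goes into the <code> node, with the three literals referenced using the $...$ placeholder syntax (the parameter names are illustrative);

-- The snippet body; $RuleName$, $StartIPAddress$ and $EndIPAddress$
-- are replaceable literals declared in the <declaration> node
EXEC sp_set_firewall_rule @name = N'$RuleName$'
 , @start_ip_address = '$StartIPAddress$'
 , @end_ip_address = '$EndIPAddress$';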

Friday, January 6, 2017

SQL Server - Adding IF, BEGIN/END, WHILE statement easily to the code

Did you know that you can surround your code with BEGIN/END, IF, and WHILE statements using a menu item, without writing them? With Management Studio, it is possible. Here are the steps for doing it;

Assume that you have the below code that needs to be repeated based on a condition;


All you have to do is select the code that needs to be surrounded and press Ctrl+K, Ctrl+S (press Ctrl and hold, press K first and then S). This is what happens when you do it;


Select WHILE and press the Tab key. This is what you should see;
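The result would look something like the sketch below (the statements are illustrative; Management Studio leaves a placeholder for the condition);

-- code selected before pressing Ctrl+K, Ctrl+S:
--   UPDATE dbo.Account SET Balance -= 100 WHERE AccountId = @AccountId;
--   SET @Counter += 1;

WHILE (Condition)
BEGIN
 UPDATE dbo.Account SET Balance -= 100 WHERE AccountId = @AccountId;
 SET @Counter += 1;
END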


Set the condition as you need. This option is available through the Surround With... context menu item as well.

Thursday, January 5, 2017

SQL Server - Rounding Off Decimal to 0.5

This is something I had to use in one of my projects today. As a part of the implementation, an average value had to be rounded off to the nearest 0.5 and the result should be something like 3.0, 3.5, 4.0, etc. For example, if the value is 3.6, it has to be saved as 3.5; if the value is 3.8, it has to be saved as 4.0.

Since it is not something we do every day, I thought to share the code. Here is what I wrote using a few functions;

DECLARE @Number decimal(2,1) = 3.2

-- This will return the value as 3.0
SELECT CASE WHEN @Number % 1 IN (.3, .4, .6, .7) THEN Convert(decimal(2,1), Floor(@Number) ) + .5
   WHEN @Number % 1 IN (.8, .9) THEN Convert(decimal(2,1), Ceiling(@Number))
   WHEN @Number % 1 IN (.1, .2) THEN Convert(decimal(2,1), Floor(@Number))
   ELSE @Number END;


Wednesday, January 4, 2017

SQL Server Job History is missing

If you are 100% sure that the job configured with SQL Server Agent has been executed successfully but you do not see the execution record when the Job History is opened, what could be the reason?

This is something I noticed yesterday at one of the companies I consult for. The client has many jobs configured with SQL Server Agent and all work fine without any issue. When the history of a job is viewed using the View History context menu item, it shows the history for some jobs but not for all. If the jobs have been executed successfully, execution records should exist in the msdb database, as it is the database that stores all this information. So, we immediately checked the database;

USE msdb;
GO

select * from sysjobhistory;

It returned only 1,000 records, meaning it maintains only a thousand records. This is based on a SQL Server Agent setting called Current job history log size (in rows). When it was checked, the setting was something like below;


This setting deletes old records to make sure that only 1,000 records are maintained. That is the reason we see the history for some jobs but not for all; what we saw from the above query is the details of the jobs executed most recently.
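If you want to see which jobs still have history rows, and how many, a quick check like this against msdb helps (a sketch);

SELECT j.name JobName, COUNT(*) HistoryRows
FROM msdb.dbo.sysjobhistory h
 INNER JOIN msdb.dbo.sysjobs j
  ON j.job_id = h.job_id
GROUP BY j.name
ORDER BY HistoryRows DESC;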

The best option is to select the Remove agent history checkbox and set it to 4 weeks or more, instead of using Limit size of job history log. This makes sure that we see the history for the last month. However, if the msdb database is getting too large, then you need to configure a lower value.

Monday, January 2, 2017

Should we use Compress function on string values to reduce the storage cost?

While designing my new database in Azure as an Azure SQL Database, I was thinking about managing the storage cost, hence I thought to apply the COMPRESS function wherever possible, even though it does not drastically reduce the cost in financial terms. However, there are two main things to consider if data is going to be compressed using the COMPRESS function;
  • Compressed data cannot be indexed.
  • Compressed data has to be stored in a varbinary(max) column
If the above facts violate the business requirements, then it is not possible to use the function for compressing data. But assume that the column does not need to be indexed and there is no harm in storing data in a compressed format; then it can simply be applied. Now the question is, will it be useful for all values stored in this particular column?

For that, I made a simple test. See the code below and its output.

DECLARE @Value1 varchar(8000) = 'Compresses the input expression using the GZIP algorithm.'
DECLARE @Value2 varchar(8000) = 'Compresses the input expression using the GZIP algorithm. The result of the compression is byte array of type varbinary(max)'
DECLARE @Value3 varchar(8000) = 'Compresses the input expression using the GZIP algorithm. The result of the compression is byte array of type varbinary(max). Compressed data cannot be indexed. The COMPRESS function compresses the data provided as the input expression and must be invoked for each section of data to be compressed. For automatic compression at the row or page level during storage, see Data Compression.'
DECLARE @Value4 varchar(8000) = 'Compresses the input expression using the GZIP algorithm. The result of the compression is byte array of type varbinary(max). Compressed data cannot be indexed. The COMPRESS function compresses the data provided as the input expression and must be invoked for each section of data to be compressed. For automatic compression at the row or page level during storage, see Data Compression.Compresses the input expression using the GZIP algorithm. The result of the compression is byte array of type varbinary(max). Compressed data cannot be indexed. The COMPRESS function compresses the data provided as the input expression and must be invoked for each section of data to be compressed. For automatic compression at the row or page level during storage, see Data Compression.Compresses the input expression using the GZIP algorithm. The result of the compression is byte array of type varbinary(max). Compressed data cannot be indexed. The COMPRESS function compresses the data provided as the input expression and must be invoked for each section of data to be compressed. For automatic compression at the row or page level during storage, see Data Compression.Compresses the input expression using the GZIP algorithm. The result of the compression is byte array of type varbinary(max). Compressed data cannot be indexed. The COMPRESS function compresses the data provided as the input expression and must be invoked for each section of data to be compressed. For automatic compression at the row or page level during storage, see Data Compression.'
 
 SELECT DATALENGTH(@Value1) LengthOfOriginalValueOfValue1
 , DATALENGTH(COMPRESS(@Value1)) LengthOfCompressedValueOfValue1
 , DATALENGTH(@Value2) LengthOfOriginalValueOfValue2
 , DATALENGTH(COMPRESS(@Value2)) LengthOfCompressedValueOfValue2;

 SELECT DATALENGTH(@Value3) LengthOfOriginalValueOfValue3
 , DATALENGTH(COMPRESS(@Value3)) LengthOfCompressedValueOfValue3
 , DATALENGTH(@Value4) LengthOfOriginalValueOfValue4
 , DATALENGTH(COMPRESS(@Value4)) LengthOfCompressedValueOfValue4;


This clearly shows that we do not get benefits with all types of values; compression works well only with larger values. Therefore I decided to apply this only to a few columns. If you have the same requirement, make sure you apply compression only to columns that hold larger values.
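Remember that reading compressed data back requires the DECOMPRESS function and a cast to the original type, as in this small sketch;

DECLARE @Value varchar(8000) = 'Compresses the input expression using the GZIP algorithm.';

-- DECOMPRESS returns varbinary(max); convert it back to varchar
SELECT Convert(varchar(8000), DECOMPRESS(COMPRESS(@Value)));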

Sunday, January 1, 2017

Another new year for dinesql.blogsposts.com……… HAPPY NEW YEAR 2017


I have been blogging for years, but I think that the most important and successful year was 2016. I made 171 posts in 2016 based on my experiences with SQL Server, questions asked by many, and some experiments I did. Compared with 2015, which had 200 posts, I still think that 2016 was the best.

I can see more than 50,000 unique visits per month (excluding subscriptions), making it over 600,000 unique visits per year. When comparing with other blogs by SQL Server experts, the number is still small, but with the feedback and compliments received on my blog, I can see how significant and useful some of my posts have been, and it always encourages me and makes me happy. 
Thank you very much for reading my posts and, of course, helping me to share something I know. Not only that, thank you very much for supporting me in improving my knowledge by giving feedback with your replies and solutions.

I wish my readers a Very Happy New Year 2017! Let's learn more about Microsoft SQL Server, Azure, Database Management Systems, Business Intelligence and Data Analytics with Big Data.


Saturday, December 31, 2016

Azure SQL Databases - Prevent accidental deletes - Azure Locks

Have you ever deleted a database accidentally? If you have, then you know very well how uncomfortable it makes you :). If you are managing a very important database and you need to make sure that even authorized users cannot easily delete the database without taking an additional action, Azure has a way of configuring it. It is called Azure Locks.

Azure Locks is part of Azure Resource Manager. It allows us to set locks in two ways;
  • Read-only - makes sure that authorized users can read the resource but editing and deleting are not possible.
  • Delete - makes sure that authorized users CANNOT DELETE the resources.
For example, if you set a Delete Lock on one of your Azure SQL Servers, no resource available under that SQL Server, such as a database, can be deleted until the Lock is released. You can do the same for a database instead of setting the lock at the server level as well.

Here is an example. If we need to set a Delete Lock for one of SQL Servers, get the Server Properties Blade opened and click on the Locks property.


Once the Locks Blade is opened, we can add a lock by clicking the Add Button. As shown in the above image, I have set the name of it as DeleteLock, Lock type as Delete and a small note. Once this is saved, this lock gets applied to all resources that come under the selected SQL Server.

If I try to delete a database in that server, Azure Resource Manager makes sure that I cannot delete the database until I remove the lock.



Friday, December 30, 2016

Inserting records, ignoring duplicates, recording them

A column like email address in an entity like Customer or Employee is always set with a UNIQUE constraint to make sure that no duplicate values are inserted. This is achieved through a unique index set up on the column. This does not annoy us when we accept only one record at a time for inserting or updating, but it surely annoys us when we perform a bulk operation.

There are instances where we load a large number of records as a single batch. For example, when 1 million records are inserted into a table like Customer that has an email column, if one duplicate is detected within the batch, the whole batch gets rolled back. Sometimes we need to continue with all other records, ignoring duplicates, but the standard unique index does not allow us to do it.

Here is an example.

USE tempdb;
GO

-- Creating a table for Customers
IF OBJECT_ID('dbo.Customers') IS NOT NULL
 DROP TABLE dbo.Customers;
GO
CREATE TABLE Customers
(
 Title char(5) NOT NULL
 , FirstName varchar(50) NOT NULL
 , MiddleName varchar(50)  NULL
 , LastName varchar(50) NOT NULL
 , EmailAddress varchar(100) NOT NULL
 
)

-- Adding the unique index on the email column
CREATE UNIQUE INDEX IX_Customers ON dbo.Customers (EmailAddress)

-- Inserting one customer for testing
INSERT INTO dbo.Customers
 (Title, FirstName, MiddleName, LastName, EmailAddress)
VALUES
 ('Mr.', 'Dinesh', NULL, 'Priyankara', 'dinesh@dinesql.com')

If we try to insert records as a batch that has duplicates, none of them will be inserted.

-- This batch contains 4 records, note the 2nd and 3rd record
-- They have the same email address
INSERT INTO dbo.Customers
 (Title, FirstName, MiddleName, LastName, EmailAddress)
SELECT 'Mr.', 'Yeshan', NULL, 'Santhush', 'yeshan@dinesql.com'
UNION ALL
SELECT 'Ms.', 'Jane', NULL, 'Western', 'jane@dinesql.com'
UNION ALL
SELECT 'Ms.', 'Jane', NULL, 'Black', 'jane@dinesql.com'
UNION ALL
SELECT 'Mr.', 'Jack', NULL, 'Knight', 'jack@dinesql.com'

The above operation results this;

Msg 2601, Level 14, State 1, Line 31
Cannot insert duplicate key row in object 'dbo.Customers' with unique index 'IX_Customers'. 
The duplicate key value is (jane@dinesql.com).
The statement has been terminated.

If we need to make sure that the operation ignores duplicates and continues, all we have to do is add the IGNORE_DUP_KEY option to the index.

-- dropping and recreating the index with IGNORE_DUP_KEY option
DROP INDEX IX_Customers ON dbo.Customers 
GO

CREATE UNIQUE INDEX IX_Customers ON dbo.Customers (EmailAddress) WITH (IGNORE_DUP_KEY = ON)

-- checking the insert again. It inserts all except the duplicate
INSERT INTO dbo.Customers
 (Title, FirstName, MiddleName, LastName, EmailAddress)
SELECT 'Mr.', 'Yeshan', NULL, 'Santhush', 'yeshan@dinesql.com'
UNION ALL
SELECT 'Ms.', 'Jane', NULL, 'Western', 'jane@dinesql.com'
UNION ALL
SELECT 'Ms.', 'Jane', NULL, 'Black', 'jane@dinesql.com'
UNION ALL
SELECT 'Mr.', 'Jack', NULL, 'Knight', 'jack@dinesql.com'

Though this works fine, an obvious question comes up: how do we know which records have been ignored? There is no direct method for capturing ignored records with this solution; however, the following can be used with some additional coding (see the sketch after this list);
  1. Load all email addresses to a Temporary Table or Table Variable before performing the main operation. After the main insert operation, do a comparison using a LEFT OUTER JOIN to find the records that were not inserted.
  2. Use Integration Services for the operation. It allows you to capture problematic records using error handlers, or you can duplicate the flow, record all email addresses in another container, and do the comparison at the end of the task.
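Here is a minimal sketch of the first option (the table variable and matching columns are illustrative);

-- Stage the incoming batch first
DECLARE @Incoming TABLE
(
 Title char(5), FirstName varchar(50), MiddleName varchar(50)
 , LastName varchar(50), EmailAddress varchar(100)
);

INSERT INTO @Incoming
VALUES
 ('Ms.', 'Jane', NULL, 'Western', 'jane@dinesql.com')
 , ('Ms.', 'Jane', NULL, 'Black', 'jane@dinesql.com');

-- Main operation: duplicates are silently ignored by IGNORE_DUP_KEY
INSERT INTO dbo.Customers
 (Title, FirstName, MiddleName, LastName, EmailAddress)
SELECT Title, FirstName, MiddleName, LastName, EmailAddress
FROM @Incoming;

-- Rows in the batch that did not make it into the table
SELECT i.*
FROM @Incoming i
 LEFT OUTER JOIN dbo.Customers c
  ON c.EmailAddress = i.EmailAddress
  AND c.FirstName = i.FirstName
  AND c.LastName = i.LastName
WHERE c.EmailAddress IS NULL;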

Thursday, December 29, 2016

Setting a higher Priority Base value to SQL Server - are we optimizing the process or taking a risk?

There is a set of server level configurations related to optimizing SQL Server, but not all of them guarantee that the result will be as expected and friendly. Boost SQL Server Priority is one such setting; although it is given to us for prioritizing SQL Server processes in the Operating System, it is not recommended to change it.

I was asked about this last week: whether we can optimize SQL Server processes by setting the property to true. Theoretically, it should, but it can produce unexpected results as well.


In simple terms, this setting allows us to set the Priority Level (or Priority Base) to 13. The default is 7, not 0 or a higher value. I am sure that the value 7 has been set for many reasons, hence it should not be changed. Assume that we changed it and now it is 13. This makes the Operating System give SQL Server processes higher priority than other requested processes. Experts say that this might bring you to a situation where you see something like no response from the keyboard or the mouse.
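If you want to check the current value, sp_configure can show it (a quick sketch; 'priority boost' is an advanced option);

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- run_value 0 means the default priority base of 7 is in use
EXEC sp_configure 'priority boost';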

Therefore, remember, this is something you need to think twice about if you are going to change it.

Wednesday, December 28, 2016

Reporting Services (SSRS) 2016 - Could not load folder contents Error - Even with SP1

I have been continually hitting this when the Reporting Services portal is opened for the first time;

Could not load folders contents
Something went wrong, Please try again later.


But if I refresh the page or reload the portal, everything starts working fine and the error does not appear again until I restart the service.

Not sure of the reason; I searched and read many posts and contacted some consultants, but so far no luck. I tried recreating the database, changing the credentials, and changing the TimeZone to UTC 0, but none of them worked for me.

As per the log, it fails the first few attempts to connect to the database, but I am not sure how it connects afterwards (because my second attempt loads the portal).

Still expecting some answers from experts; I will update this post once I have the answer. I would appreciate it if you could share your solution if you have already faced this and solved it.

Tuesday, December 27, 2016

Reporting Services (SSRS) - Upload Multiple Files

Although we have a new version, SQL Server 2016, a multiple file upload facility is still not given as a built-in function, and it annoys us when we have multiple RDL files to be uploaded manually. I am currently working on a Reporting Services project with the same requirement, hence I thought to make a simple application for uploading multiple files.

If you make a search, you will find related code written in many ways. I did a study of some of the published code and made a Windows Application for uploading my files. This is how it looks;


I can select multiple files and upload;


It works fine for me but needs more testing. I will be continuously enhancing and testing it; if you need the same, please use the download link given below.

Link for the Windows Application: https://1drv.ms/u/s!Ahey1yiqH_9AgZwd-FOt5Qg3ej35oQ 

Monday, December 26, 2016

Are SQL Server Agent Errors logged in the SQL Server Error Log or somewhere else?

I have written two posts on the SQL Server Error Log: How to delete current SQL Server error log? My current log is in GBs and Reading SQL Server Error Log using XP_READERRORLOG. They explain how the log is maintained, how log files are created and how the content can be read. I was asked a question on this, but not exactly about SQL Server errors: SQL Server Agent errors.

Can we read/see SQL Server Agent errors in the SQL Server Error Log? Or should we look somewhere else? Generally, you should see all errors related to SQL Server there, but errors related to the Agent are maintained somewhere else, called the Agent Error Log.

Just like the way SQL Server maintains its error logs, multiple files are maintained for Agent error logs. By default there are 10 files and they are saved in Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\Log folder.


The file named SQLAGENT.OUT is the current one and, just like the SQL Server Error Logs, it gets recycled when the service is restarted. Once recycled, the current one becomes SQLAGENT.1 and the old SQLAGENT.1 becomes SQLAGENT.2, and so on. However, while the Configure menu option of the SQL Server Error Logs node lets us adjust the number of files to be maintained, that facility does not exist for the Agent Error Log.


If you see that the current log file is larger than you expect, it can be recycled either by clicking the Recycle menu item in the context menu (see above image) or by executing the dbo.sp_cycle_agent_errorlog stored procedure.
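Here is the call, for reference; the procedure lives in msdb;

USE msdb;
GO

-- Closes SQLAGENT.OUT and starts a new one
EXEC dbo.sp_cycle_agent_errorlog;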

Sunday, December 25, 2016

Automate adding common objects to new databases when they are created - SQL Server Brain Basher of the Week #062

Thought to share a specific conversation from one of the interviews I did last month for a DBA role. The interview was done over a Skype call as the interviewee was at a different location. We were having a conversation about the interviewee's past experiences and I learnt that he had worked for a company that maintains many databases for many clients, and of course, he was explaining the work he had to do.

Here is the Brain Basher of the week based on the conversation I had. What is the best way of adding common objects to new databases when they are created?

Let me explain this by adding the conversation we had on it (note that I have obtained permission from the client and the interviewee for adding this small part of the interview to my blog; this is not the exact conversation we had, I have slightly adjusted it for writing purposes). Let's call the interviewee Jack.

Me: Okay Jack, since you have worked with a company that maintains a large number of databases for clients, I am sure that you have many common objects like Tables and Stored Procedures in all databases.

Jack: Yes Dinesh. There is a set of Tables and Procedures that we need to create when a new database is created for a new client.

Me: How do you do it? Are you adding them manually? Or have you done some automation on it?

Jack: I had automated it, yes, up to some extent. I have a script for creating a database, and that script not only has the database creation part, it has the creation code of all other required objects. So, all I had to do was run this script with a few modifications for setting up a new database.

Me: Great, do you know any other technique or method to make sure that all required common objects are available in the newly created database without adding them through the script?

Jack: There may be, but since I was comfortable with what I had, I did not look for something else. I had no issue with the ready-made code.

Me: Agreed, but what if I need to make sure that even I can create a database in your environment with all common objects, without your script? Or assume that one of your colleagues needs to create a new database for a new client while you are absent and he cannot find your script.

Jack: Yes, we experienced it several times and I accept that I did not have a document that explains how to do it :). But it was not a major issue at that time.

Me: Yes, it is always great to maintain a run-book (read more on Run Books: SQL Server Brain Basher of the Week #021 - Documentation). Anyway, we can use some components of SQL Server for handling that. Any idea how to use existing SQL Server components to automate some of this without having a script?

Jack: Like SQL Server Agent, using a job?

Me: How does SQL Server start the database creation when we ask it to create one for us?

Jack: I am not sure about the internal operation, but it requests the space from the OS based on the way we have configured the files, does some other operations, must be, and completes it.

Me: It actually takes a copy of a template and uses it for creating the database. Do you know where the template is stored?

Jack: Must be in one of the system folders. Sorry Dinesh, I am not sure where it is actually located.

Me: No problems. It is actually one of the system databases. It is Model Database.

Jack: Yes, I have heard about it but did not check much on that.

Me: You can actually add all your common objects to the Model database and keep them there. Then, when anyone creates a database, since it takes a copy of the Model database, the newly created database will automatically contain all the required objects. So, we do not need to add them manually or via a script.

Jack: Yes, I have missed that part...........

Conversation continued...........

Remember this: Model is the template used by SQL Server to create databases when we ask.
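Here is a minimal sketch of the idea (the object names are illustrative);

-- Any object created in model is copied into every new database
USE model;
GO

CREATE TABLE dbo.AuditLog
(
 Id int IDENTITY(1,1) PRIMARY KEY
 , LogDateTime datetime NOT NULL DEFAULT (GetDate())
 , Detail nvarchar(4000) NULL
);
GO

-- The new database automatically contains dbo.AuditLog
CREATE DATABASE NewClientDB;
GO

SELECT * FROM NewClientDB.dbo.AuditLog;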




Saturday, December 24, 2016

SQL Server: Calculate TimeSpan using TSQL - dd:hh:mm:ss

While coding on my new project, I wanted to show the duration between two DateTime values as a time span: Remaining Days : Remaining Hours : Remaining Minutes : Remaining Seconds (dd:hh:mm:ss).


There are multiple solutions available on the net and you can certainly use one of them as long as it satisfies your requirements. However, this is what I wrote; it may be useful if you need the same.

DECLARE @d1 datetime = '2016-12-24T00:44:01.000'
DECLARE @d2 datetime = '2016-12-25T01:43:00.000'

SELECT 
 CASE WHEN LEN(DATEDIFF(ss, @d1, @d2) / 86400) = 1 THEN '0' + Convert(varchar(2), (DATEDIFF(ss, @d1, @d2) / 86400)) ELSE Convert(varchar(2), (DATEDIFF(ss, @d1, @d2) / 86400)) END 
 + ':' +
 CASE WHEN LEN((DATEDIFF(ss, @d1, @d2) % 86400) / 3600) = 1 THEN '0' + Convert(varchar(2), (DATEDIFF(ss, @d1, @d2) % 86400) / 3600) ELSE Convert(varchar(2), (DATEDIFF(ss, @d1, @d2) % 86400) / 3600) END  
 + ':' +
 CASE WHEN LEN((DATEDIFF(ss, @d1, @d2) % 3600) / 60) = 1 THEN '0' + Convert(varchar(2), (DATEDIFF(ss, @d1, @d2) % 3600) / 60) ELSE Convert(varchar(2), (DATEDIFF(ss, @d1, @d2) % 3600) / 60) END 
 + ':' +
 CASE WHEN LEN(DATEDIFF(ss, @d1, @d2) % 60) = 1 THEN '0' + Convert(varchar(2), (DATEDIFF(ss, @d1, @d2) % 60)) ELSE Convert(varchar(2), (DATEDIFF(ss, @d1, @d2) % 60)) END 
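Since the same DATEDIFF is repeated many times, a shorter variant can capture the seconds once and use RIGHT() for zero-padding (a sketch, assuming the span is under 100 days);

DECLARE @seconds int = DATEDIFF(ss, @d1, @d2);

SELECT RIGHT('0' + Convert(varchar(2), @seconds / 86400), 2) + ':'
 + RIGHT('0' + Convert(varchar(2), (@seconds % 86400) / 3600), 2) + ':'
 + RIGHT('0' + Convert(varchar(2), (@seconds % 3600) / 60), 2) + ':'
 + RIGHT('0' + Convert(varchar(2), @seconds % 60), 2);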

Here are some examples used with the above code;


Friday, December 23, 2016

Delete All Azure SQL Server Firewall Rules using TSQL

In order to access Azure SQL Server databases, firewall rules have to be set at either the server level or the database level by adding client IP addresses. Server level rules can be configured using the Azure Portal, but database level rules have to be created using TSQL. If you have added many rules and you need to delete all or some of them, using the portal will take a long time because the portal allows you to delete only one at a time.

Assume that you have a number of server level rules configured as below;


I was asked about this and here is the script I wrote to delete selected server level firewall rules.

DECLARE @name nvarchar(200)
WHILE EXISTS (SELECT name FROM sys.firewall_rules WHERE name LIKE 'clientIPAddress_%')
BEGIN

 SELECT TOP (1) @name = name FROM sys.firewall_rules WHERE name LIKE 'clientIPAddress_%'
 EXEC sp_delete_firewall_rule @name;
END

It is a simple one, but you can use it without going through the portal. Make sure you have written the WHERE condition properly; otherwise it might delete rules you still need.
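Database level rules have their own catalog view and procedure; a similar loop could be written against them (a sketch, to be run in the user database rather than master);

-- List database level rules
SELECT * FROM sys.database_firewall_rules;

-- Delete one by name
EXEC sp_delete_database_firewall_rule N'RuleName';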

Thursday, December 22, 2016

Azure SQL Database - Enable TDE - Does it encrypt the bacpac file?

We use Transparent Data Encryption (TDE) for protecting our databases, making sure that no one can attach our databases to a different instance even if they manage to get either the data files (.mdf) or backup files (.bak). TDE makes sure that data in the data files is encrypted; this is called encrypting data at rest. The standard way of enabling it is;
  1. Create a master key in master database.
  2. Create (or add an existing one from a copy) a certificate protected by the master key.
  3. Create a database encryption key protected by the certificate, in the database needs to be protected.
  4. Enable encryption in the database.
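Here is a minimal sketch of those four steps for an on-premises database (the database and object names are illustrative);

USE master;
GO

-- 1. Master key in the master database
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'Pa$$w0rd';
GO

-- 2. Certificate protected by the master key
CREATE CERTIFICATE TDECertificate WITH SUBJECT = 'TDE Certificate';
GO

-- 3. Database encryption key in the database to be protected
USE Sales;
GO
CREATE DATABASE ENCRYPTION KEY
WITH ALGORITHM = AES_256
ENCRYPTION BY SERVER CERTIFICATE TDECertificate;
GO

-- 4. Enable encryption
ALTER DATABASE Sales SET ENCRYPTION ON;
GO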

This is different with Azure SQL Database. Azure allows us to enable TDE on an Azure SQL Database without steps like the above; it can simply be enabled using the portal.


Although we do not create the certificate and the database encryption key manually, the storage of the database is encrypted using a symmetric key that is protected by a built-in server certificate in the SQL Server.

Not only that, the main difference compared with a TDE-enabled on-premises database appears when copying the database from the portal to a local environment. If the database is not an Azure SQL Database and you take a backup of it, you will not be able to restore it on another instance without the certificate created for TDE. But if the database is an Azure SQL Database, when you export it to a bacpac file, it can be loaded into any instance without the Azure SQL Server level certificate used to protect it. See the images below;

1. Exporting the bacpac to classic storage.


2. Loading the database to on-premises server using Import Data-tier application... wizard.


3. Getting the database successfully added.



The reason for this is that when exporting a TDE-enabled Azure SQL Database, the bacpac file does not get encrypted, hence we need to make sure that it is secured.

Wednesday, December 21, 2016

Get SQL Server Trace automatically started at service startup

Although SQL Trace and SQL Server Profiler have been marked as deprecated and Microsoft recommends using Extended Events instead, many of us still use the Profiler. You may run the Profiler for small or long traces, but if you need to make sure that it runs continuously even across server restarts, you need to consider the following points;
  • SQL Server Profiler is not recommended for long-term monitoring and large traces.
  • Profiler cannot be automatically started when the server is restarted.
If you still need to implement it, you can consider;
  • Use SQL Trace instead.
  • Use sp_procoption to start the trace at the SQL Server Service start.
This makes sure that the configured trace runs continuously, even when the server is restarted, and the output is saved and can be queried at any time.

In order to implement this, you need to follow the steps below;
  1. Start with SQL Server Profiler and select all the events, columns and filters you need for the trace.
  2. Get the definition of the created trace using the Profiler and use it for creating the SQL Trace.
  3. Include the SQL Trace code in a stored procedure and add the stored procedure as a startup procedure.
Here is a sample implementation with the details.

First, let's make a trace using the Profiler. The following trace is created with the TSQL_Duration template and All Columns are selected for both events.


Let's click on Run and then immediately stop it. In order to get the code generated for the SQL Trace, click the File -> Export -> Script Trace Definition -> For SQL Server 2005 - 2016... menu item.


Save it as Trace01.sql, then open the saved script in a Query Window for modification. This code has one place that must be modified: the trace output file path. Since we need to make sure that this code can be executed at each server restart, a unique output file name is required. Therefore, let's add a variable called @FilePath and use it with the sp_trace_create statement. This is how the first part of the code is changed now;

/****************************************************/
/* Created by: SQL Server 2016 Profiler          */
/* Date: 12/21/2016  10:28:14 PM         */
/****************************************************/


-- Create a Queue
declare @rc int
declare @TraceID int
declare @maxfilesize bigint
set @maxfilesize = 5 

-- Please replace the text InsertFileNameHere, with an appropriate
-- filename prefixed by a path, e.g., c:\MyFolder\MyTrace. The .trc extension
-- will be appended to the filename automatically. If you are writing from
-- remote server to local drive, please use UNC path and make sure server has
-- write access to your network share

DECLARE @FilePath nvarchar(200);
SET @FilePath = N'E:\TraceOutput\SQLDurationTrace_' 
 + CONVERT(nvarchar(20), GetDate(), 112) + N'_' + REPLACE(CONVERT(nvarchar(20), GetDate(), 108), ':', N'')

exec @rc = sp_trace_create @TraceID output, 0, @FilePath, @maxfilesize, NULL 
if (@rc != 0) goto error

Note that this creates a file named SQLDurationTrace_YYYYMMDD_HHMMSS.

It is better to record the TraceID generated from this because it is required for stopping and closing the trace if needed. Since this is going to start automatically, the generated TraceID cannot be taken from the output as written. Therefore we need to change that part as well. Let's create a table for holding the generated trace IDs, in the master database.

CREATE TABLE dbo.TraceID
(
 ServerStartDateTime datetime NOT NULL 
  PRIMARY KEY DEFAULT(GetDate())
 , TraceID int  NULL
);
GO


Scroll down the generated trace code, comment out the select TraceID=@TraceID statement, and add the following instead.

-- display trace id for future references
--select TraceID=@TraceID
INSERT INTO dbo.TraceID
 (ServerStartDateTime, TraceID)
VALUES
 (DEFAULT, @TraceID);

goto finish


We have done the necessary changes to the trace code. Let's place the entire code into a stored procedure named StartTrace. Here is the entire code now;

USE master;
GO

CREATE OR ALTER PROC dbo.StartTrace
AS
BEGIN

 -- Create a Queue
 declare @rc int
 declare @TraceID int
 declare @maxfilesize bigint
 set @maxfilesize = 5 

 DECLARE @FilePath nvarchar(200);
 SET @FilePath = N'E:\TraceOutput\SQLDurationTrace_' 
  + CONVERT(nvarchar(20), GetDate(), 112) + N'_' + REPLACE(CONVERT(nvarchar(20), GetDate(), 108), ':', N'')

 exec @rc = sp_trace_create @TraceID output, 0, @FilePath, @maxfilesize, NULL 
 if (@rc != 0) goto error

 -- Client side File and Table cannot be scripted

 -- Set the events
 declare @on bit
 set @on = 1
 exec sp_trace_setevent @TraceID, 10, 1, @on
 exec sp_trace_setevent @TraceID, 10, 9, @on
 exec sp_trace_setevent @TraceID, 10, 2, @on
 exec sp_trace_setevent @TraceID, 10, 66, @on
 exec sp_trace_setevent @TraceID, 10, 10, @on
 exec sp_trace_setevent @TraceID, 10, 3, @on
 exec sp_trace_setevent @TraceID, 10, 4, @on
 exec sp_trace_setevent @TraceID, 10, 6, @on
 exec sp_trace_setevent @TraceID, 10, 7, @on
 exec sp_trace_setevent @TraceID, 10, 8, @on
 exec sp_trace_setevent @TraceID, 10, 11, @on
 exec sp_trace_setevent @TraceID, 10, 12, @on
 exec sp_trace_setevent @TraceID, 10, 13, @on
 exec sp_trace_setevent @TraceID, 10, 14, @on
 exec sp_trace_setevent @TraceID, 10, 15, @on
 exec sp_trace_setevent @TraceID, 10, 16, @on
 exec sp_trace_setevent @TraceID, 10, 17, @on
 exec sp_trace_setevent @TraceID, 10, 18, @on
 exec sp_trace_setevent @TraceID, 10, 25, @on
 exec sp_trace_setevent @TraceID, 10, 26, @on
 exec sp_trace_setevent @TraceID, 10, 31, @on
 exec sp_trace_setevent @TraceID, 10, 34, @on
 exec sp_trace_setevent @TraceID, 10, 35, @on
 exec sp_trace_setevent @TraceID, 10, 41, @on
 exec sp_trace_setevent @TraceID, 10, 48, @on
 exec sp_trace_setevent @TraceID, 10, 49, @on
 exec sp_trace_setevent @TraceID, 10, 50, @on
 exec sp_trace_setevent @TraceID, 10, 51, @on
 exec sp_trace_setevent @TraceID, 10, 60, @on
 exec sp_trace_setevent @TraceID, 10, 64, @on
 exec sp_trace_setevent @TraceID, 12, 1, @on
 exec sp_trace_setevent @TraceID, 12, 9, @on
 exec sp_trace_setevent @TraceID, 12, 3, @on
 exec sp_trace_setevent @TraceID, 12, 11, @on
 exec sp_trace_setevent @TraceID, 12, 4, @on
 exec sp_trace_setevent @TraceID, 12, 6, @on
 exec sp_trace_setevent @TraceID, 12, 7, @on
 exec sp_trace_setevent @TraceID, 12, 8, @on
 exec sp_trace_setevent @TraceID, 12, 10, @on
 exec sp_trace_setevent @TraceID, 12, 12, @on
 exec sp_trace_setevent @TraceID, 12, 13, @on
 exec sp_trace_setevent @TraceID, 12, 14, @on
 exec sp_trace_setevent @TraceID, 12, 15, @on
 exec sp_trace_setevent @TraceID, 12, 16, @on
 exec sp_trace_setevent @TraceID, 12, 17, @on
 exec sp_trace_setevent @TraceID, 12, 18, @on
 exec sp_trace_setevent @TraceID, 12, 26, @on
 exec sp_trace_setevent @TraceID, 12, 31, @on
 exec sp_trace_setevent @TraceID, 12, 35, @on
 exec sp_trace_setevent @TraceID, 12, 41, @on
 exec sp_trace_setevent @TraceID, 12, 48, @on
 exec sp_trace_setevent @TraceID, 12, 49, @on
 exec sp_trace_setevent @TraceID, 12, 50, @on
 exec sp_trace_setevent @TraceID, 12, 51, @on
 exec sp_trace_setevent @TraceID, 12, 60, @on
 exec sp_trace_setevent @TraceID, 12, 64, @on
 exec sp_trace_setevent @TraceID, 12, 66, @on


 -- Set the Filters
 declare @intfilter int
 declare @bigintfilter bigint

 exec sp_trace_setfilter @TraceID, 35, 0, 6, N'AdventureWorks2014'
 -- Set the trace status to start
 exec sp_trace_setstatus @TraceID, 1

 -- display trace id for future references
 --select TraceID=@TraceID
 INSERT INTO dbo.TraceID
  (ServerStartDateTime, TraceID)
 VALUES
  (DEFAULT, @TraceID);

 goto finish

 error: 
 select ErrorCode=@rc

 finish: 
END
GO


Now the last step. We need to add this stored procedure as a startup procedure. Use the following code to add StartTrace as a Startup procedure;

USE master;
GO

EXEC sp_procoption StartTrace, 'STARTUP', 'ON';
GO


Note that, in order to get this to work, we need to make sure the scan for startup procs server setting is enabled. Generally, when sp_procoption is executed, this gets enabled automatically, but it is always better to enable it beforehand using sp_configure, as in the sketch below;
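-- 'scan for startup procs' is an advanced option
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sp_configure 'scan for startup procs', 1;
RECONFIGURE;

Done. Let's restart the SQL Server Service and check the dbo.TraceID table.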


As you see, a record has been inserted with an ID. This means that the trace has been started. Let's check the folder.


The file has been created. For testing purposes, let's run some queries in the AdventureWorks2014 database and restart the service.

USE AdventureWorks2014;
GO

SELECT * FROM Production.Product;
SELECT * FROM Sales.Customer;


Once restarted, we should see a new record in the table and a new file in the folder.


This means that our trace is getting automatically started whenever the service starts, and tracing continues. You can query the current or older traces using the fn_trace_gettable function, and if you need to, you can stop the current trace using value 0 and close it using value 2 with the sp_trace_setstatus stored procedure.
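Here is a sketch of both (the file path is illustrative);

-- Read a saved trace file
SELECT *
FROM fn_trace_gettable(N'E:\TraceOutput\SQLDurationTrace_20161221_222814.trc', DEFAULT);

-- Stop and close the current trace using the recorded ID
DECLARE @TraceID int;
SELECT TOP (1) @TraceID = TraceID FROM master.dbo.TraceID ORDER BY ServerStartDateTime DESC;

EXEC sp_trace_setstatus @TraceID, 0; -- stop
EXEC sp_trace_setstatus @TraceID, 2; -- close and delete the definition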



Tuesday, December 20, 2016

Finding SQL Server Startup Stored Procedures

When I was discussing getting stored procedures automatically executed at service start, a question was raised: how can we find the stored procedures added as startup procedures? This is simple; all we have to do is query the sys.procedures system view and filter the result on the ExecIsStartup property.

Here is the code;

USE master;
GO

SELECT *
FROM sys.procedures
WHERE OBJECTPROPERTY(OBJECT_ID, 'ExecIsStartup') = 1;

For more info on how to add procedures as startup procedures using sp_procoption, read this: https://msdn.microsoft.com/en-us/library/ms181720.aspx

Monday, December 19, 2016

Reading SQL Server Error Log using XP_READERRORLOG

We use sp_readerrorlog for reading the content of the SQL Server error log, and it actually uses the xp_readerrorlog extended stored procedure. Since I had to do some troubleshooting on one of my servers, I used this for finding some info, and thought to share it.

Here are some ways of reading the error log;




In case of you need the code;

USE master;
GO

-- without parameters, load the current log
EXEC xp_readerrorlog;

-- Load the specific error log 
-- 0 is current one and by default there are 6 more old logs
EXEC xp_readerrorlog 0;

-- Second parameter is for log type
-- 1 is for engine and 2 is agent
EXEC xp_readerrorlog 0, 1;

-- Two more parameters can be passed:
-- the first is the search string
-- and the second filters the search further within the result
EXEC xp_readerrorlog 0, 1, N'Server', N'Listening';

-- And this can take three more parameters
-- for setting the date range and sorting order
EXEC xp_readerrorlog 0, 1, N'Database', N'resource'
 , '2016-12-21 23:00:18.860', '2016-12-21 23:00:18.930', N'DESC';

Sunday, December 18, 2016

Moving a table with millions of records to another within a second - SQL Server Brain Basher of the Week #061

This is a very common interview question and many come up with different solutions, but most forget some easy ways of doing it. Sometimes we need to move data from one table to another (note: moving, not copying), and depending on the method we use and the number of records, it may take from a few seconds to many hours. 

However, if the structure of the second table is the same as the first and we need to move all records, then what is the best way?

Here is the solution. I have a table called dbo.InternetSales with 6 million records. Let's see how long it takes to move the entire record set from this table to a table called dbo.InternetSales2 that has the same structure.

See the below code;

SELECT COUNT(*) InternetSalesCount FROM dbo.InternetSales;
SELECT COUNT(*) InternetSales2Count FROM dbo.InternetSales2;

ALTER TABLE dbo.InternetSales SWITCH TO dbo.InternetSales2;

SELECT COUNT(*) InternetSalesCount FROM dbo.InternetSales;
SELECT COUNT(*) InternetSales2Count FROM dbo.InternetSales2;


As you see, all records have been transferred from the first table to the second within a second :).

How do we do it? This is actually called Partition Switching. Note that SWITCH requires the target table to be empty, with the same structure, and on the same filegroup. Read more on this in the post: Fact Loading: Switching a loaded temporary table into an empty partition of the fact table

Saturday, December 17, 2016

Locking selected records until the transaction is completed

SQL Server handles locks efficiently and places relevant locks based on the way we access records. If you perform an action such as INSERT, UPDATE, or DELETE against records, SQL Server makes sure that the records involved in the operation are exclusively locked and no one can access them. However, what if you need to make sure that no one can modify your selected records until you complete the transaction?

Generally, SQL Server obtains shared locks when we retrieve records using a SELECT statement, but it does not keep them; the obtained locks are released as soon as the records are sent to the client.

We can use REPEATABLE READ Isolation Level for handling this. This Isolation Level makes sure that records selected during the transaction cannot be modified by other users until we complete the transaction.

There is another easy way of handling this: we can use the UPDLOCK hint with the SELECT statement, locking the records against modification. This hint takes Update Locks for the read operation at the row level or page level. Note the words Page Level: you might see a few more records locked even though you have selected only one record, because if SQL Server takes a page lock, all records in the page will be locked.

However, in most cases, this helps to protect the records until we complete the process on them. Here is the way of doing it.

Execute this with a new Connection
USE AdventureWorks2014
GO

-- Starting a transaction
BEGIN TRAN

-- Selecting a record
SELECT * FROM Production.Product
WITH (UPDLOCK)
WHERE ProductID = 4

Now execute this with another connection and see.
USE AdventureWorks2014;
GO

-- This is possible, can get the record
SELECT * FROM Production.Product
WHERE ProductID = 4;

-- This is not possible until the 
-- first transaction is completed
UPDATE Production.Product
 SET Color = 'Black'
WHERE ProductID = 4;