Monday, October 24, 2016

Disable PRIMARY KEY before data loading

To improve loading performance, specifically when a data warehouse or a large table is loaded, it is recommended to disable constraints such as PRIMARY KEY, UNIQUE KEY and CHECK, as well as indexes, if the data is already cleansed and validated. Although we practice this heavily in data warehousing, we have some limitations with OLTP databases because the way we structure tables there is different from the way we do it with data warehouses.

Can we disable the primary key of a table and continue with data loading? The answer is: Yes and No.

Remember, when you mark one of your table columns as the primary key, SQL Server creates a Clustered Index on the column you picked. If a Clustered Index already exists (created before the primary key), it creates a Non-Clustered Index instead. If the primary key is backed by a Clustered Index and you disable it, the table cannot be accessed at all because the main structure of the table is disabled. However, if the primary key is backed by a Non-Clustered Index, you can disable it and still work with the table.

USE tempdb;
GO

-- create a table with a primary key which is clustered
CREATE TABLE dbo.Customer
(
 CustomerID int identity(1,1) not null
 , FirstName varchar(100) not null
 , Constraint PK_Customer PRIMARY KEY CLUSTERED (CustomerID)
);

-- create a table with a primary key which is nonclustered
CREATE TABLE dbo.Employee
(
 EmployeeID int identity(1,1) not null
 , FirstName varchar(100) not null
 , Constraint PK_Employee PRIMARY KEY NONCLUSTERED (EmployeeID)
);

-- Insert some records
INSERT INTO dbo.Customer
 (FirstName) VALUES ('Dinesh'), ('Yeshan');
INSERT INTO dbo.Employee
 (FirstName) VALUES ('Jane'), ('Jack');

-- checking records
-- This should return all records we inserted
SELECT * FROM dbo.Customer;
SELECT * FROM dbo.Employee;

-- Disable the primary key of the Customer
-- by disabling associated index
ALTER INDEX PK_Customer ON dbo.Customer DISABLE;

-- Now the following statements are not possible
INSERT INTO dbo.Customer
 (FirstName) VALUES ('Kate'), ('Mihen');
SELECT * FROM dbo.Customer;

-- Disable the primary key of the Employee
-- by disabling the associated index
ALTER INDEX PK_Employee ON dbo.Employee DISABLE;

-- These statements work without any issue
INSERT INTO dbo.Employee
 (FirstName) VALUES ('Nazir'), ('Daniel');
SELECT * FROM dbo.Employee;

-- Enabling both
ALTER INDEX PK_Customer ON dbo.Customer REBUILD;
ALTER INDEX PK_Employee ON dbo.Employee REBUILD;

The same applies to UNIQUE constraints as well.

Note that this is possible with data warehouses because, as a best practice, we do not (always; there are exceptions) make the primary key column the clustered key. For example, we may create the primary key using all foreign key columns but create the clustered index on the DateKey column.
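For example, a fact table may be structured as below; a minimal sketch with hypothetical names, where the primary key is nonclustered (and hence can be disabled during loading) while the clustered index is created separately on DateKey:

CREATE TABLE dbo.FactSales
(
 DateKey int not null
 , ProductKey int not null
 , CustomerKey int not null
 , SalesAmount decimal(16,2) not null
 , Constraint PK_FactSales PRIMARY KEY NONCLUSTERED (DateKey, ProductKey, CustomerKey)
);
GO

CREATE CLUSTERED INDEX IX_FactSales_DateKey ON dbo.FactSales (DateKey);
GO

-- The primary key can now be disabled for loading and rebuilt afterwards
ALTER INDEX PK_FactSales ON dbo.FactSales DISABLE;
-- ... load the data ...
ALTER INDEX PK_FactSales ON dbo.FactSales REBUILD;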

Sunday, October 23, 2016

How to replace empty string with NULL - SQL Server Brain Basher of the Week #058

Everyone asks how to check whether a value is NULL and replace it with a different value if it is, because it is something we commonly do in our applications. It can be easily done and many are aware of it: all we have to do is use the IsNull function for checking and replacing. However, I was asked a similar question today that is a bit different;

How can we check the value whether it is empty or not and replace it with a NULL if it is empty?

This is uncommon but important, hence it is the Brain Basher of this week.

This can be achieved using the NULLIF function. It does not check for NULLs; instead, it returns NULL if the two supplied values are equal. See the code below;

DECLARE @Variable int = 0

-- This returns NULL as both @Variable and 0 are equal
SELECT NULLIF(@Variable, 0)

-- This returns 0 because the values supplied are not equal
-- and NULLIF returns the first expression
SELECT NULLIF(@Variable, 1)
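Coming back to the question, the same function replaces an empty string with NULL; a short sketch using a hypothetical variable:

DECLARE @Name varchar(100) = ''

-- This returns NULL because @Name is an empty string
SELECT NULLIF(@Name, '')

-- Trimming first treats whitespace-only values as empty too
SELECT NULLIF(LTRIM(RTRIM(@Name)), '')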




Saturday, October 22, 2016

How to restrict access to DMVs and DMFs for users

We heavily use Dynamic Management Views and Functions for retrieving insights into the inner operations of SQL Server, and they are very useful. These DMVs and DMFs are collectively known as Dynamic Management Objects (DMOs).

Generally, a user who has the VIEW SERVER STATE and VIEW DATABASE STATE permissions can execute almost all DMVs and DMFs. If you are a member of the sysadmin server role, you automatically get the VIEW SERVER STATE permission, and if your login is a standard login that has been added to a database as a member of the db_owner role, you have the VIEW DATABASE STATE permission on that database. These permissions can also be granted explicitly, without adding logins or users to those roles.

However, if you need to restrict users' access to specific DMVs or DMFs on one or more databases, there is an easy way of doing it. All we have to do is deny access on the specific DMVs and DMFs to the user in the master database. This makes sure that the user cannot access them regardless of the database context.

Let's assume that user Jane should not be able to execute sys.dm_sql_referenced_entities DMF in any database. Here are the steps for doing it;

1. Create a user in master database for Jane's login.
2. Open the properties of user Jane and click on Securables to list the required objects.


3. Click on Search for finding objects. Select All objects of the types... and click OK.


4. Select Table-valued function and click OK.


5. Select sys.dm_sql_referenced_entities DMF and set Deny permission on SELECT.


6. Click OK to save settings.
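Alternatively, the same steps can be scripted; a minimal sketch, assuming Jane's login already exists:

USE master;
GO

-- Create a user in master for Jane's login
CREATE USER Jane FOR LOGIN Jane;
GO

-- Deny SELECT on the specific DMF
DENY SELECT ON OBJECT::sys.dm_sql_referenced_entities TO Jane;
GO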

Now if Jane tries to execute this DMF, she will get an error as below;


Friday, October 21, 2016

How to find all tables and columns used in the Stored Procedures?

As a part of modularizing business logic, we create stored procedures and functions in SQL Server databases, and in most cases stored procedures are used. A stored procedure is a named collection of T-SQL statements created for implementing a piece of business logic.

When implementing a stored procedure, you will use one or more tables within it, and later you may need to find all the tables, along with the columns, that have been used by your stored procedures. Yes, you can open a stored procedure and read the code to find them, but what if you need to do it programmatically, and not for just one but for several stored procedures? Or what if you need to check whether a particular table has been used by your stored procedures? How can we find this information, or in other words, how can we find these dependencies?

There are a few ways of finding them using the system views and functions given. We can use the sys.sql_expression_dependencies view for finding objects used by a user-defined object, and it can be used with stored procedures. However, since it returns one row per referenced object, you will not be able to get the column list of a table if you use it with a stored procedure. The dynamic management function sys.dm_sql_referenced_entities can be used as well. It gives more details on referenced objects; for example, if you query it for a stored procedure, it returns all the tables used by the procedure along with the column names.

The following code shows how to get the tables and columns used in the dbo.uspGetBillOfMaterials stored procedure. As you can see, sys.sql_expression_dependencies does not provide column details, but the sys.dm_sql_referenced_entities dynamic management function provides all the details we need.

USE AdventureWorks2014;
GO

SELECT * FROM sys.sql_expression_dependencies
WHERE referencing_id = OBJECT_ID(N'dbo.uspGetBillOfMaterials');  

SELECT * 
FROM sys.dm_sql_referenced_entities(N'dbo.uspGetBillOfMaterials', N'OBJECT');


In case you need to find out whether an object is being used by other objects, we have another dynamic management function named sys.dm_sql_referencing_entities. This returns details of all referencing objects.

SELECT * 
FROM sys.dm_sql_referencing_entities(N'Production.Product', N'OBJECT');

This is the way to find out how your object has been used by others. The query above shows how to use the sys.dm_sql_referencing_entities dynamic management function to see how the Production.Product table is used by other objects.
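Coming back to the question of checking whether a particular table has been used by a specific stored procedure, the same DMF can simply be filtered; a sketch using the AdventureWorks2014 names from above:

SELECT referenced_schema_name, referenced_entity_name, referenced_minor_name ColumnName
FROM sys.dm_sql_referenced_entities(N'dbo.uspGetBillOfMaterials', N'OBJECT')
WHERE referenced_entity_name = N'BillOfMaterials';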


Sunday, October 16, 2016

Memory-Optimized table memory limit - SQL Server Brain Basher of the Week #057

Let's talk about memory-optimized tables as part of this week's Brain Basher. This question was raised during one of my workshops, and it is an obvious question that comes to mind when planning memory-optimized tables. Here is the question;

How much memory can a memory-optimized table consume at maximum?

If the version is SQL Server 2014, there is a limitation of 256 GB on durable tables and no limitation on non-durable tables. However, this has been addressed with SQL Server 2016, and now there are NO LIMITATIONS on either durable or non-durable tables.

You can read more info on this enhancement at: https://blogs.msdn.microsoft.com/sqlserverstorageengine/2016/06/23/increased-memory-size-for-in-memory-oltp-in-sql-server-2016/
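For reference, the memory currently consumed by memory-optimized tables can be checked per table; a minimal sketch using the sys.dm_db_xtp_table_memory_stats DMV, run inside the database that holds the tables:

SELECT 
 OBJECT_NAME(object_id) TableName
 , memory_allocated_for_table_kb
 , memory_used_by_table_kb
FROM sys.dm_db_xtp_table_memory_stats;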

Read some posts made on memory-optimized tables;
In-Memory OLTP: Inserting 3 millions records in 3 seconds
How to check tables and stored procedures for in-memory migration


Saturday, October 15, 2016

SQL Server needs memory optimized file group even for non-durable memory-optimized tables?

We know that SQL Server requires an additional file group marked as MEMORY_OPTIMIZED_FILEGROUP, along with a data file associated with it, if memory-optimized tables need to be created. We know for sure that memory-optimized tables created as SCHEMA_AND_DATA require data files for writing data to disk because they coexist with disk-based tables, but do tables created as SCHEMA_ONLY require the same?

Memory-optimized tables created as non-durable (SCHEMA_ONLY) maintain both data and indexes in memory. Therefore, theoretically, nothing needs to be written to disk. However, if you try to create a non-durable memory-optimized table without adding a memory-optimized file group, you get the following message;

Cannot create memory optimized tables. To create memory optimized tables, the database must have a MEMORY_OPTIMIZED_FILEGROUP that is online and has at least one container.

This says that even non-durable tables need the file group. Although no data is maintained on disk, SQL Server still has to create the table, as the schema has to be maintained. Therefore, regardless of the type of memory-optimized table, a file group with the MEMORY_OPTIMIZED_FILEGROUP option, along with a data file, should be added to the database before creating memory-optimized tables.
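Here is a minimal sketch of adding the file group and then creating a non-durable table; the database name, file path and bucket count are illustrative:

ALTER DATABASE TestDatabase
 ADD FILEGROUP TestDatabase_MO CONTAINS MEMORY_OPTIMIZED_DATA;
GO
ALTER DATABASE TestDatabase
 ADD FILE (NAME = 'TestDatabase_MO_Data', FILENAME = 'D:\Data\TestDatabase_MO')
 TO FILEGROUP TestDatabase_MO;
GO

USE TestDatabase;
GO

CREATE TABLE dbo.SessionState
(
 SessionId int NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000)
 , SessionData varbinary(8000) NULL
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);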

Thursday, October 13, 2016

SQL Server 2016 Partial backup operation on a READONLY database backs up all file groups

While I was writing code for a backup operation on SQL Server 2016, I noticed that there is an issue with partial backups. As per the documentation, when we take a partial backup using the READ_WRITE_FILEGROUPS option, it backs up;

  • only PRIMARY, all READ/WRITE file groups, and any explicitly mentioned READ_ONLY file groups.
  • only PRIMARY if the database is READ_ONLY.

However, if you perform this operation with SQL Server 2016 RTM, it backs up;
  • all file groups: PRIMARY and all READ/WRITE file groups, including READ_ONLY file groups - which is NOT the expected behavior.
If you apply the latest cumulative update (I applied CU2), then it backs up;
  • only PRIMARY and all READ/WRITE file groups - which is still NOT the expected behavior.
Here is the code I tested; you can run the same and see;

USE master;
GO
-- creating the database
CREATE DATABASE [TestDatabase]
 CONTAINMENT = NONE
 ON  PRIMARY 
( NAME = N'TestDatabase_Data_1', FILENAME = N'D:\MSSQL13.MSSQLSERVER\MSSQL\DATA\TestDatabase_Data1.mdf' 
 , SIZE = 10240KB , MAXSIZE = UNLIMITED, FILEGROWTH = 1024KB ), 

 FILEGROUP [FileGroup_Main] 
( NAME = N'TestDatabase_Dat_2', FILENAME = N'D:\MSSQL13.MSSQLSERVER\MSSQL\DATA\TestDatabase_Data2.ndf' 
 , SIZE = 10240KB , MAXSIZE = UNLIMITED, FILEGROWTH = 1024KB ),
 
 FILEGROUP [FileGroup_History] 
( NAME = N'TestDatabase_Dat_3', FILENAME = N'D:\MSSQL13.MSSQLSERVER\MSSQL\DATA\TestDatabase_Data3.ndf' 
 , SIZE = 10240KB , MAXSIZE = UNLIMITED, FILEGROWTH = 1024KB )

 LOG ON 
( NAME = N'TestDatabase_Log', FILENAME = N'D:\MSSQL13.MSSQLSERVER\MSSQL\DATA\TestDatabase_Log.ldf' 
 , SIZE = 5120KB , MAXSIZE = 2048GB , FILEGROWTH = 10%)
GO

-- creating tables and inserting records
USE [TestDatabase]
GO

CREATE TABLE [dbo].[Configuration]
(
 [Id] [int] IDENTITY(1,1) NOT NULL PRIMARY KEY,
 [ConfigurationName] [varchar](100) NOT NULL,
 [ConfigurationValue] [varchar](100) NOT NULL,
 [Settings] char(4000) NOT NULL
) ON [PRIMARY]

GO

CREATE TABLE [dbo].[Sales]
(
 [Id] [int] IDENTITY(1,1) NOT NULL PRIMARY KEY,
 [SalesDate] [date] NOT NULL,
 [Amount] [decimal](16, 2) NOT NULL,
 [Settings] char(4000) NOT NULL
) ON [FileGroup_Main]
GO

CREATE TABLE [dbo].[Sales_History]
(
 [Id] [int] IDENTITY(1,1) NOT NULL PRIMARY KEY,
 [SalesDate] [date] NOT NULL,
 [Amount] [decimal](16, 2) NOT NULL,
 [Settings] char(4000) NOT NULL
) ON [FileGroup_History]
GO

-- Insert set of records
INSERT INTO dbo.Configuration (ConfigurationName, ConfigurationValue, Settings) VALUES ('A', '1', 'a');
GO 5000
INSERT INTO dbo.Sales (SalesDate, Amount, Settings) VALUES ('01/01/2016', 10000, 'a');
GO 5000
INSERT INTO dbo.Sales_History (SalesDate, Amount, Settings) VALUES ('01/01/2010', 10000, 'a');
GO 5000


USE master;
GO

-- Make one FG as read-only
ALTER DATABASE TestDatabase MODIFY FILEGROUP [FileGroup_History] READ_ONLY;
GO

-- take a full database backup
-- this takes all three files
-- backup size is 63 MB
BACKUP DATABASE TestDatabase 
TO DISK = 'D:\MSSQL13.MSSQLSERVER\MSSQL\BACKUP\TestDatabaseBackup01.bak'
WITH INIT
GO

-- take a backup with READ_WRITE_FILEGROUPS
-- this takes only TestDatabase_Data_1 and TestDatabase_Dat_2
-- backup size is 43 MB
BACKUP DATABASE TestDatabase READ_WRITE_FILEGROUPS
TO DISK = 'D:\MSSQL13.MSSQLSERVER\MSSQL\BACKUP\TestDatabaseBackup02.bak'
WITH INIT
GO

-- Make the database as read-only
ALTER DATABASE [TestDatabase] SET  READ_ONLY 
GO

-- take a backup with READ_WRITE_FILEGROUPS
-- as per MSDN, this should take only PRIMARY
-- But this takes all three files with RTM and excludes read-only files with CU2
-- backup size is 63 MB
BACKUP DATABASE TestDatabase READ_WRITE_FILEGROUPS
TO DISK = 'D:\MSSQL13.MSSQLSERVER\MSSQL\BACKUP\TestDatabaseBackup03.bak'
WITH INIT
GO

Result before applying CU2;

Processed 2840 pages for database 'TestDatabase', file 'TestDatabase_Data_1' on file 1.
Processed 2528 pages for database 'TestDatabase', file 'TestDatabase_Dat_2' on file 1.
Processed 2528 pages for database 'TestDatabase', file 'TestDatabase_Dat_3' on file 1.
Processed 2 pages for database 'TestDatabase', file 'TestDatabase_Log' on file 1.
BACKUP DATABASE successfully processed 7898 pages in 0.275 seconds (224.357 MB/sec).
Processed 2840 pages for database 'TestDatabase', file 'TestDatabase_Data_1' on file 1.
Processed 2528 pages for database 'TestDatabase', file 'TestDatabase_Dat_2' on file 1.
Processed 2 pages for database 'TestDatabase', file 'TestDatabase_Log' on file 1.
BACKUP DATABASE...FILE= successfully processed 5370 pages in 0.191 seconds (219.611 MB/sec).
Processed 2840 pages for database 'TestDatabase', file 'TestDatabase_Data_1' on file 1.
Processed 2528 pages for database 'TestDatabase', file 'TestDatabase_Dat_2' on file 1.
Processed 2528 pages for database 'TestDatabase', file 'TestDatabase_Dat_3' on file 1.
Processed 2 pages for database 'TestDatabase', file 'TestDatabase_Log' on file 1.

BACKUP DATABASE...FILE= successfully processed 7898 pages in 0.271 seconds (227.674 MB/sec).



Result after applying CU2.

Processed 2872 pages for database 'TestDatabase', file 'TestDatabase_Data_1' on file 1.
Processed 2528 pages for database 'TestDatabase', file 'TestDatabase_Dat_2' on file 1.
Processed 2528 pages for database 'TestDatabase', file 'TestDatabase_Dat_3' on file 1.
Processed 3 pages for database 'TestDatabase', file 'TestDatabase_Log' on file 1.
BACKUP DATABASE successfully processed 7931 pages in 0.289 seconds (214.379 MB/sec).
Processed 2872 pages for database 'TestDatabase', file 'TestDatabase_Data_1' on file 1.
Processed 2528 pages for database 'TestDatabase', file 'TestDatabase_Dat_2' on file 1.
Processed 2 pages for database 'TestDatabase', file 'TestDatabase_Log' on file 1.
BACKUP DATABASE...FILE= successfully processed 5402 pages in 0.202 seconds (208.890 MB/sec).
Processed 2872 pages for database 'TestDatabase', file 'TestDatabase_Data_1' on file 1.
Processed 2528 pages for database 'TestDatabase', file 'TestDatabase_Dat_2' on file 1.
Processed 2 pages for database 'TestDatabase', file 'TestDatabase_Log' on file 1.
BACKUP DATABASE...FILE= successfully processed 5402 pages in 0.205 seconds (205.852 MB/sec).


Be aware of this; it may be the expected behavior of SQL Server 2016, though I see it as an issue. I am working on it and will update the post if I hear something new.

Wednesday, October 12, 2016

Find SQL Server Databases that are not accessed after last reboot

I have written a post titled When was my SQL Server Database last accessed? that talks about how to find the last accessed date and time of databases. As per one of the comments, it is important to find out the databases that were not accessed after the last reboot as well, hence I wrote this query for that.

It may be helpful to you.

SELECT name DatabaseName
FROM sys.databases
WHERE name NOT IN ('master', 'tempdb', 'model', 'msdb') -- add other system DBs here if you have any
EXCEPT
SELECT DISTINCT
 DB_NAME(database_id) DatabaseName
FROM sys.dm_db_index_usage_stats
ORDER BY 1
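Note that sys.dm_db_index_usage_stats is cleared when the instance restarts, which is why the query above reflects access since the last reboot. For reference, the instance start time can be checked as below:

SELECT sqlserver_start_time
FROM sys.dm_os_sys_info;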


Tuesday, October 11, 2016

How to find NT Service\MSSQLSERVER and NT Service\SQLSERVERAGENT accounts?

You have installed SQL Server and it is up and running without any issue. Later, you need to change some permissions given to either the SQL Server engine or the agent. You look for the accounts, searching in local users and local groups, but you cannot find them.

Are you experiencing the above? If yes, first understand what these accounts are. They are called Virtual Accounts and are created during the installation of SQL Server. These accounts are managed by the Operating System itself, hence they are not visible when you browse the Local Users and Groups window. Similarly, there is another type of account called Managed Service Accounts, which are created at domain level and assigned to SQL Server services.


Now, if for some reason you changed the service account of your SQL Server service to another account, and later you want to use the same Virtual Account again, this is what you have to do.

1. Get the properties of the services.


2. The easiest way is to just type the account and leave the password blank. If it is the default instance, type NT Service\MSSQLSERVER; if it is a named instance, type NT Service\MSSQL$<instance name>.


3. Click OK and get the service restarted. It will work as you expected.

4. Or, if you want to search for the account, click on Browse to open the Select User or Group window. Type nt service\ms in the Enter the object name to select input box and click on Check Names. If you are setting the Agent service account, search for nt service\sql instead.


5. The Multiple Names Found window opens. Select the account from the list and continue. Do not enter a password; click on OK and get the service restarted.


In the same way, if you need to add these accounts to some other groups for granting more permissions, for example adding the Agent service account to the Administrators group (not recommended), follow the same steps.


Monday, October 10, 2016

How to enable compression on all SQL Server Database Backups

Backup compression is not something new to SQL Server; it has been available for a few versions. However, many are unaware that compression can be set as the default for all backups, rather than enabling it explicitly for specific backups.

How can we instruct SQL Server to compress all backups by default?
There is a server configuration called backup compression default that can be set at the instance level. If you enable it, all backups will be compressed by default.

Check the following code;

-- Taking a backup before enabling compression
BACKUP DATABASE AdventureWorks2014
 TO DISK = 'D:\MSSQL13.MSSQLSERVER\MSSQL\Backup\AdventureWorks_BeforeCompressionEnabled.bak'
 WITH INIT;
GO

-- Enabling compression
EXEC sp_configure 'backup compression default', 1;
GO
RECONFIGURE;
GO

-- Taking a backup after enabling compression
BACKUP DATABASE AdventureWorks2014
 TO DISK = 'D:\MSSQL13.MSSQLSERVER\MSSQL\Backup\AdventureWorks_AfterCompressionEnabled.bak'
 WITH INIT;
GO
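To verify the setting, and to see that an individual backup can still override the default, here is a short sketch (the backup path is illustrative):

-- Check the configured value; value_in_use should be 1
SELECT name, value_in_use
FROM sys.configurations
WHERE name = 'backup compression default';

-- An individual backup can override the default
BACKUP DATABASE AdventureWorks2014
 TO DISK = 'D:\MSSQL13.MSSQLSERVER\MSSQL\Backup\AdventureWorks_NotCompressed.bak'
 WITH INIT, NO_COMPRESSION;
GO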


Sunday, October 9, 2016

Determine the backup strategy - SQL Server Brain Basher of the Week #056

Determining the backup strategy for a SQL Server database is not an easy task, as there are many things to be considered. In my experience, many implement a strategy without considering anything, for example just a daily full database backup, without even asking about the business requirements.


Generally, we consider two things: RTO and RPO. RTO (Recovery Time Objective) refers to the time it takes to recover; in other words, how long the business can survive without the system. RPO (Recovery Point Objective) refers to the acceptable data loss at a recovery; in other words, how much work the business accepts to repeat for a certain time period.

This week's question is based on backup strategies. Here is the question:

You have a database which requires a recovery (or backup) strategy based on the given requirements;
  1. Size of the database is 25 GB.
  2. The database must never be unavailable for longer than 8 hours.
  3. At the recovery from a failure, no more than 1 hour of transactions may be lost.
  4. Total backup throughput is 100 MB/minute.
  5. Total restore throughput is 80 MB/minute.
  6. Average rate of changes during office hours is 500 MB/hour.
  7. Average amount of new data is 200 MB /day.
  8. Office hours - 09:00 - 17:30.
What would be your plan for the requirements given?

How do you start? You should start with #2 and #3 because they talk about the RTO and RPO. Whatever backup strategy we plan, we must make sure the restoration can be done within 8 hours. Not only that, we have to make sure that we do not lose more than 1 hour of data. Okay, let's work on this.

We cannot survive without a full database backup. Let's calculate and see how much time backing up and restoring the database takes.

Database size = 25 GB = 25,600 MB

Backup throughput = 100 MB/minute
Time it takes for backing up the database = 25600/100 = 256 minutes = 4 hours 16 minutes

Restore throughput = 80 MB/minute
Time it takes for restoring the database = 25600/80 = 320 minutes = 5 hours 20 minutes

As you see, backing up takes just over 4 hours, and we can easily schedule it during off-peak hours. However, restoring takes well over 5 hours, which is something we need to consider.

Since the RPO is 1 hour, we need to make sure that the Database Recovery Model is Full and that a Transaction Log backup is taken every hour. Let's calculate the time it needs.

Changes per hour - 500 MB
Time it takes for backing up the log = 500/100 = 5 minutes
Time it takes for restoring the log = 500/80 = 7 minutes

Let's see whether we can have a strategy like daily full backup plus hourly Transaction Log backups during office hours. If we start the full backup at 00:00, it can complete before 05:00. And we can have hourly Transaction Log backups starting at 10:00 and ending at 18:00. In the worst case, if something happens at 17:30, we have to restore the full backup and 8 log backups. Let's calculate the time and see.

Restoration time for the full database backup = 5 hours 20 minutes
Restoration time for log backups = 7 x 8 = 56 minutes
Total duration = 5 hours 20 minutes + 56 minutes = 6 hours and 16 minutes.

Since we have been given an RTO of 8 hours, the strategy we checked works fine. However, we need to think about growth as well. We add approximately 200 MB daily, roughly 1 GB per working week. This increases the time the full database backup and restore take. Assuming that the database becomes 30 GB after a month;

Time it takes for restoring the database = 30720/80 = 384 minutes = 6 hours 24 minutes.

If we recalculate the time;
Total duration = 6 hours 24 minutes + 56 minutes =  7 hours and 20 minutes.

Now this is getting closer to the given RTO. As you can see, within a few months this strategy will not be able to handle the requirements. If you see something like this, you cannot handle it with traditional backup types; you might have to go for either disk-level backups or partial backups. Let's talk more about that in another post.

Saturday, October 8, 2016

How to create a SQL Server job quickly

SQL Server Agent jobs are commonly used for automating administrative tasks and other routine tasks. For creating a job, you need to either use the GUI that comes with SSMS or use the stored procedures given. Yes, it takes time to add a job and then add steps, but some jobs can be created automatically in a simple way.

Assume that you need to create an Agent job for backing up one of your databases. Rather than going through SQL Server Agent -> Jobs in the Object Explorer, you can open the Backup interface just like the way you manually take a backup, and configure everything as you want. Once the configuration is done, without clicking OK, click on the down-arrow next to the Script button and click on Script Action to Job.


This action opens the window used for creating Agent jobs. The good thing is, it adds everything we need, including the steps; all you have to add is a schedule.


Using this technique, jobs can be easily created for most of the tasks.


Friday, October 7, 2016

How to check successful and failed login attempts in Azure SQL Database

Who tried to access my Azure SQL Database? Did they succeed? Can I get more details on the attempts?

If you are maintaining an Azure SQL Database, you should be asking yourself these questions. Or, practically, you need to ask these questions in order to protect your database. Generally, with an on-premises instance, we get this information using either Audit or Logon Triggers. Can we get the same from Azure SQL Database?

Azure SQL Database has Auditing that can be easily enabled via the portal. It does not support Logon Triggers, and notifications cannot be sent via email as it does not support Database Mail. However, Audit captures everything you need, and there are a few ways of seeing the captured records.

In order to see the required information for my databases, I enabled Audit and then used Azure Storage Explorer to see all the details, because it is the easiest way. If you need the same, here is the way of doing it;

1. Open the Azure Web Portal and open SQL Server blade.
2. Click on the server you need to audit and select Auditing and Threat detection.
3. Enable Auditing. Select Table as Auditing type because it is easy to read.
4. Create a new storage for storing audit records.
5. Set the Retention (Days) as you need and set the Table name as you like.


6. Select Audited Events and check both Success and Failure check-boxes.
7. Select all Threat Detection Types.


8. Click on Save to save all settings.

As I mentioned above, there are multiple ways of getting audit records into our environment. Let's use Azure Storage Explorer and see how we can view them; we will look at other channels, such as Power BI, in another post.

1. Open Azure Storage Explorer and configure the connection for the storage you use. Read more on this at: http://dinesql.blogspot.com/2016/09/microsoft-azure-storage-explorer-latest-version.html
2. You should be able to see all required details related to login attempts.




Thursday, October 6, 2016

What is a runaway query and how to handle it with SQL Server

Initially I thought that a runaway query is another name for an ad-hoc query, but it is NOT. It actually refers to a query that runs longer than you expected, or one that previously executed within a short period but now takes a long time.

* image taken from www.dreamstime.com.

There can be many reasons for runaway queries. Poor statistics, poorly written code, blocking and locking, or even heavy resource consumption can be among the reasons. For example, a report that generally takes 1-2 minutes to open suddenly takes 15-30 minutes. Or a user runs a query expecting the result in a few minutes but it takes hours.

How can we handle this with SQL Server?
You can monitor runaway queries easily and stop the execution by killing the process. You can use Activity Monitor (the Active Expensive Queries window, Elapsed Time column) or the sys.dm_exec_query_stats dynamic management view (see this: https://www.brentozar.com/blitzcache/long-running-queries/). However, rather than killing the session, it is advisable (or always better) to detect runaway queries and limit their resource consumption, making sure other standard operations are not disturbed. Even though it cannot fully handle the problem, the Resource Governor can handle this to some extent.

Remember, Resource Governor does not detect runaway queries. It can only limit resources for requests by identifying certain properties of the request, such as application name, user name, etc. For example, if you want to limit resources for a reporting application that runs lengthy reports whose requests have no high priority, the application name can be tracked via a Classifier User-Defined Function, which we specifically write for implementing Resource Governor, and the request can then be assigned to the workload group that uses the least resources. Read the following articles for understanding Resource Governor and how to use it for limiting resources.
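In the meantime, here is a minimal sketch of that classifier idea; the application name, pool, group and limits are illustrative assumptions, not values from a real system:

USE master;
GO

-- A pool that caps resources for reporting requests
CREATE RESOURCE POOL ReportPool
 WITH (MAX_CPU_PERCENT = 20, MAX_MEMORY_PERCENT = 20);
GO

-- A workload group bound to the pool
CREATE WORKLOAD GROUP ReportGroup
 USING ReportPool;
GO

-- Classifier function: requests from the reporting application
-- are routed to ReportGroup; everything else stays in default
CREATE FUNCTION dbo.fn_RGClassifier()
RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
 IF (APP_NAME() = 'ReportApp')
  RETURN N'ReportGroup';
 RETURN N'default';
END;
GO

ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fn_RGClassifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;
GO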


Wednesday, October 5, 2016

How to use SQL Server templates?

Unfortunately, many developers are unaware of the available templates. SQL Server provides templates for almost all operations you can perform with relational and multidimensional databases, hence you do not need to remember all the commands and the way the code should be written.

For example, if you need to create a Linked Server object pointing to an external database, you can simply take the template available and modify it with the relevant values.

You can find all templates using Template Explorer. Use either View -> Template Explorer menu or press Ctrl+Alt+T for viewing templates.


Expand the relevant node you are looking for and double-click the template. The image below shows the template Add Linked Server Access MDB opened.


You can easily change the relevant values by pressing Ctrl+Shift+M. It opens a window that you can use for setting the values.



Tuesday, October 4, 2016

What are the dbmanager and loginmanager database roles in an Azure SQL Database instance?

Have you noticed two new database roles that appear in the master database? You might have, you might have not, because these two appear only with an Azure SQL Database instance. What are they and what are they used for?

dbmanager database role
An Azure SQL Database instance does not offer any server-level roles, and no user similar to sa, which we have seen with on-premises instances, can be created. Since we do not have administrative control over the Azure SQL Database instance, the dbmanager role is given to us inside the virtual master database. Generally, when the instance is created using the portal, the account added becomes the admin of the instance, and that account is added to the master database. If you need more administrators who can create databases, you need to create a new user and add the user to this role.

Here are the steps. You need to make sure that you are connected with master database.

CREATE USER Jane WITH PASSWORD = 'Pa$$w0rd'; 
ALTER ROLE dbmanager ADD MEMBER Jane; 

loginmanager database role
Just like the above, if you need users who can perform operations related to users, such as creating and altering user accounts, create a user and add the user to this role.

CREATE USER Jack WITH PASSWORD = 'Pa$$w0rd'; 
ALTER ROLE loginmanager ADD MEMBER Jack; 

However, this is not as required or important with Azure SQL Databases, as almost all databases are treated as Contained Databases. Therefore, the required permissions can simply be granted to the user as below, without creating a user account in master and adding it to the loginmanager role.

GRANT ALTER ANY USER TO Jack;


Monday, October 3, 2016

SQL Server Data Masking - Filtering masked values

SQL Server 2016 introduced Dynamic Data Masking for obfuscating certain values in records; not exactly a security measure, but a way of hiding sensitive data. While this was being discussed, a question was raised: can we filter data even when the data is masked and we have no permission to unmask it?

Generally, unless you are a member of the db_owner role, you see masked values instead of the actual data. If needed, you can be granted the UNMASK permission for seeing the actual data. Now, can you filter masked data? Can you group masked data? Can you do these even if you have no permission for unmasking?

Yes, it is possible. Let's have a look at the code below;

-- Create a test database
CREATE DATABASE Sales
GO

-- Connect with it
USE Sales;
GO

-- Create a sample table with masked columns
CREATE TABLE dbo.RetailCustomer
(
 RetailCustomerId int identity(1,1) PRIMARY KEY
 , Name varchar(100) NOT NULL
 , EmailAddress varchar(100) MASKED WITH (FUNCTION = 'email()') NOT NULL
 , CreditLimit decimal(16,2) MASKED WITH (FUNCTION = 'random(1, 10000)') NOT NULL
 , SecurityTokenCode char(10) MASKED WITH (FUNCTION = 'partial(2, "xxxx", 0)') NOT NULL
);
GO

-- Insert set of records
INSERT INTO dbo.RetailCustomer
 (Name, EmailAddress, CreditLimit, SecurityTokenCode)
VALUES
 ('Customer1', 'customer1@gmail.com', 10000, 'EX12345678')
 , ('Customer2', 'customer1@company.lk', 120000, 'EX11223344')
 , ('Customer3', 'customer3@abc.com', 8000, 'TX11223344')

-- Checking records;
SELECT * FROM dbo.RetailCustomer;


As you see, I do not see the data masked because I am a member of the db_owner role. But if you check with a different account, that user will see masked data. See the code below;

-- Creating a user
CREATE USER Peter WITHOUT LOGIN;

-- Grant read permission to Peter
-- Note that this does not mean that he can unmask
GRANT SELECT ON dbo.RetailCustomer TO Peter;

-- Check with his account
EXECUTE AS USER = 'Peter'
SELECT * FROM dbo.RetailCustomer
REVERT;


We could grant the UNMASK permission to Peter, but let's first see whether he can do some aggregation and filtering with the masked data.

EXECUTE AS USER = 'Peter'
-- Can he aggregate masked values
SELECT SUM(CreditLimit) FROM dbo.RetailCustomer;

-- Can he perform operations against masked values
SELECT LEN(EmailAddress) FROM dbo.RetailCustomer;
SELECT LEFT(EmailAddress, 2) FROM dbo.RetailCustomer;

-- Can he filter records with masked columns
SELECT * FROM dbo.RetailCustomer WHERE CreditLimit = 120000
SELECT * FROM dbo.RetailCustomer WHERE EmailAddress = 'customer1@company.lk';

REVERT;


See the result. He cannot do some operations, but he can do filtering without any issue. Looks insecure? Do not worry; that is why I mentioned that Dynamic Data Masking is not for securing or encrypting data, it is for obfuscating values.
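If a user like Peter genuinely needs to see the actual values, the UNMASK permission can be granted; note that in SQL Server 2016 this permission applies database-wide:

-- Allow Peter to see actual values
GRANT UNMASK TO Peter;

-- And take it back later if required
REVOKE UNMASK FROM Peter;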

Sunday, October 2, 2016

SQL Server Brain Basher of the Week #055 - Security

You are the Database Administrator and you maintain a database with the following characteristics;

  • Database contains sensitive data.
  • Database authentication is set to Mixed Mode.
  • Users can access only the tables they are authorized to access. Authorization is completely handled within SQL Server.
  • Tables related to finance data can be accessed only by users who are in Finance User-Defined Database Role.

Sales team uses their own application to access the database. There is a new requirement from the Sales team as below;
  • Sales team needs to see some financial data through the application.
  • Sales team members should NOT be able to execute impromptu SELECT, INSERT, UPDATE, and DELETE statements against financial tables.
  • The application will prompt for a second password when the screen related to finance data is opened. The salesperson needs a supervisor to input this second password for seeing the data.
Here is the question of the week;

As the DBA, what is your suggestion for achieving the above requirements?

There are a couple of things you need to consider. First, although users access the database via the application, their credentials are used for authorization, meaning users need permissions on the finance data in order to access it. But, secondly, if users are granted those permissions, they will be able to execute impromptu queries against the tables, which must be avoided. How can you grant permissions on these specific tables without giving explicit permissions to users or roles?

If you are a smart DBA, now you should be thinking about Application Roles.

An Application Role is a database-level role whose permissions are used by an application via the role, without setting permissions for users or other roles. You create it with a password, grant permissions as needed (in this case, on the two tables related to finance), and then activate the Application Role when the two tables need to be accessed.

Since the supervisor enters the password, it can be used as the Application Role password. Once the application accepts the password, it calls a special stored procedure called sp_setapprole with the name of the Application Role and the password. This creates a new security context for the application connection, and the current user's context becomes inactive. The user's security context comes back only when sp_unsetapprole is called; until then, only the Application Role and public tokens are available for accessing resources.
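Here is a minimal sketch of that flow; the role name, table name and password are illustrative assumptions:

-- Create the Application Role and grant it the finance tables
CREATE APPLICATION ROLE FinanceAppRole WITH PASSWORD = 'SupervisorPa$$w0rd';
GRANT SELECT ON dbo.FinanceSummary TO FinanceAppRole;
GO

-- The application activates the role with the supervisor's password
DECLARE @cookie varbinary(8000);
EXEC sp_setapprole 'FinanceAppRole', 'SupervisorPa$$w0rd'
 , @fCreateCookie = true, @cookie = @cookie OUTPUT;

-- ... the finance screen reads data under the Application Role's context ...

-- Revert to the original user's context
EXEC sp_unsetapprole @cookie;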


Note that once the Application Role is active, the user can access other databases only via their guest accounts.

For more info on Application Role, read this: https://msdn.microsoft.com/en-gb/library/ms181491.aspx

Saturday, October 1, 2016

SQL Server Audit shows wrong date and time?

It was a bit funny; I could not immediately figure out what had happened: records related to Audit were being saved with the wrong date and time.

SQL Server Audit is the primary auditing tool in SQL Server, and it is based on the event-driven monitoring engine called Extended Events. It is used for tracking actions performed against the database and the server instance, mainly for auditing.

I have written three posts on it; have a look at them if you are new to SQL Server Audit;




While discussing the topic, we witnessed that SQL Server had saved audit records with the wrong date and time. It took a few minutes to realize that it is NOT wrong at all; records are saved using the UTC time standard. If you need to get the right date and time, you need to convert the value to your time zone.

In my case, in order to see the right date and time for my audit records, I have to convert the UTC value to the +05:30 time zone as below.

SELECT 
 convert(datetime, SWITCHOFFSET(event_time, '+05:30')) event_time
 , object_name, user_defined_event_id, statement, user_defined_information
FROM sys.fn_get_audit_file ('c:\temp\*',default,default);
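For reference, in SQL Server 2016 the same conversion can be written with AT TIME ZONE, which also handles daylight saving where the time zone uses it; a sketch assuming the 'Sri Lanka Standard Time' Windows time zone (+05:30):

SELECT 
 convert(datetime, event_time AT TIME ZONE 'UTC' AT TIME ZONE 'Sri Lanka Standard Time') event_time
 , object_name, user_defined_event_id, statement, user_defined_information
FROM sys.fn_get_audit_file ('c:\temp\*',default,default);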

Now the question is, how do we see the same if Audit records are written to the Windows Application Log or Security Log? I am not sure about the way yet, but I will check and see. If you have already found a solution, please share.