Monday, December 5, 2016

Maintaining Sinhala (Unicode) characters with SQL Server Database

This is one of the questions I received last week on Unicode characters. I was asked how to maintain Sinhala characters in one of the tables, because the team had been trying with the nvarchar data type but it had not worked as expected.

Let's try to understand how to maintain characters of languages such as Sinhala and Tamil. If we need to store such characters, usually with char or varchar, we have to instruct SQL Server to use a specific code page and its corresponding characters. That instruction is passed using a Collation. Not only does the collation determine the code page and the character set, it also decides the sorting rules and the case and accent sensitivity of our data.

The COLLATE clause can be specified at Server Level, Database Level, Column Level, or in an expression (for example, with ORDER BY). SQL Server supports Windows Collations, Binary Collations, and SQL Server Collations.

Note that not all collations related to your language are available in the SQL Server Collation set. If the required one is not available, you can use the Windows Collation set instead. However, some Windows Collations cannot be set at Server Level or Database Level.

Collations for languages like Sinhala and Tamil do not exist in the SQL Server Collation set. Therefore, we have to use a Windows Collation, and there is no way of setting it at Database Level.

To see all Windows Collations, here is the page: https://msdn.microsoft.com/en-us/library/ms188046.aspx

As per the above page, we have to use one of the Indic_General_100_ collations for Sinhala characters. We can find all collations related to it using the query below;

SELECT * FROM sys.fn_helpcollations()
WHERE name LIKE 'Indic_General_100%';


Let's apply this to a table and see. The following code creates a table with an nvarchar column for holding Sinhala characters. Note the collation set for the column. When inserting records, we need to make sure that the N prefix is used to mark the value as Unicode.

DROP TABLE dbo.Test;

CREATE TABLE dbo.Test
(
 Id int identity (1,1) primary key
 , Message nvarchar(4000)  collate Indic_General_100_BIN not null
);

INSERT INTO dbo.Test (Message) VALUES (N'Hello World!');
INSERT INTO dbo.Test (Message) VALUES (N'සුභ උදෑසනක්!');

Here is the result if you query the table now.
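Querying the table is a plain SELECT; with the collation and the N prefix in place, both rows come back intact:

```sql
SELECT Id, Message FROM dbo.Test;
-- 1  Hello World!
-- 2  සුභ උදෑසනක්!
```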


Thursday, December 1, 2016

SQL Server 2016 Reporting Services Cookbook - Published!


I am happy to announce that SQL Server 2016 Reporting Services Cookbook is published and available for all SQL Server enthusiasts. It is my first book as a co-author.

It was not an easy task, but with great help from Amrita Noronha (Packt), Robert C. Cain (co-author) and Vinay Argekar (Packt), the dream came true; my sincere thanks to all of them.

This book covers new features of SQL Server 2016 as well as some important topics for reporting solution implementations. This is what you get from this book;

  • Key capabilities, architecture, and components of Reporting Services
  • New features that have been added to Reporting Services
  • Design the architecture for reporting solutions
  • Design the architecture for BI solutions
  • Implement reporting solutions using Reporting Services
  • Improve the performance, availability, and scalability of the reporting solution
  • Enhance reporting solutions with custom programming and improved security
Here is the link for more info and if you would like to read it;

Wednesday, November 30, 2016

Adding double quotes to string values when importing data from SQL Server to CSV

There have been many discussions on this, but the issue still exists if you try to get it done using the wizard itself. Generally, we need string values to be enclosed in quotes when exporting data to CSV, but not all columns. The SQL Server Import and Export Wizard allows us to add a Text Qualifier, but it adds the qualifier to all columns regardless of the data type. As per some of the threads I read, this is because a CSV file has no data types set for columns, hence the wizard cannot distinguish the types.

With the wizard, this is how you set the Text Qualifier.


And once the export is generated from the following query against the AdventureWorks database;

SELECT 
 ProductID, Name, Color, ListPrice 
FROM Production.Product;

The output will be;


However, in most cases, all we need is to enclose only the string values, and we have to handle that via the query. Write the query as below and continue the wizard without a qualifier;

-- ISNULL guards Color: it is nullable, and concatenating a NULL Color would make the whole value NULL
SELECT 
 ProductID, '"' + Name + '"' Name, '"' + ISNULL(Color, '') + '"' Color, ListPrice 
FROM Production.Product;

Then you get the following output;


Check and see, let's discuss if you experience issues.
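As a side note, QUOTENAME can produce the same quoting without manual concatenation; this is just a sketch of the alternative (keep in mind that QUOTENAME returns NULL if the input exceeds 128 characters):

```sql
SELECT 
 ProductID, QUOTENAME(Name, '"') Name, QUOTENAME(ISNULL(Color, ''), '"') Color, ListPrice 
FROM Production.Product;
```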

Tuesday, November 29, 2016

What if EXECUTE AS statement executes multiple times

While discussing Impersonation, a question was raised: if we mistakenly execute the EXECUTE AS statement twice, do we have to call REVERT twice?

I was not sure about it since I had not tried it, but it turns out that it is required.

The EXECUTE AS statement allows us to set the execution context by impersonating another login or user. This is useful when code requires a permission set that the logged-in user does not have. Once the context is set, either we have to call the REVERT command to go back to the original context, or it remains in effect until the session is dropped or the module (stored procedure or trigger) in which the command was executed exits.

The EXECUTE AS statement can be executed multiple times, either with the same Principal or with different Principals (read more in SQL Server Security Concepts: Securables, Pricipals and Permissions). If we need to switch back to the original context, then REVERT must be called for each context opened.

As per the research I made, there is no single command that reverts all contexts and brings the original context back to the session. Therefore, if we have mistakenly executed the statement multiple times, there is no option but to call REVERT the same number of times.

However, we can use sys.user_token to check whether the original context is set or not. Here is a sample code.
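The check itself is just these two statements:

```sql
SELECT * FROM sys.user_token;   -- shows the principal of the current execution context
SELECT ORIGINAL_LOGIN();        -- always returns the login that originally connected
```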


The current context is based on my account. Let's change the context and execute the same.

EXECUTE AS USER = 'Peter'  

SELECT * FROM sys.user_token;
SELECT ORIGINAL_LOGIN();


As you see, now the execution context is set to Peter's account. We can use the ORIGINAL_LOGIN function to check the initial login. If the EXECUTE AS USER = 'Peter' statement is executed again, it creates another context. Here is the result;


Now if REVERT is called, it does not bring the session back to the original context because it reverts only the last context created.

REVERT;

SELECT * FROM sys.user_token;
SELECT ORIGINAL_LOGIN();



Therefore, REVERT has to be called again. Here is another way of checking whether the session uses the original execution context or not.

SELECT login_name, original_login_name, * 
FROM sys.dm_exec_sessions
WHERE program_name like '%Management Studio%'


Monday, November 28, 2016

Ignore or disable string truncation error and insert or update records

String or binary data would be truncated is a well-known error message; you get it when you try to set a value with more characters than the column accepts. For example, if you have configured a column as varchar(5) and you try to set a value with 6 characters, you get the truncation error and the statement fails.

Can we ask SQL Server to ignore this, continue with truncation and set the value with the characters that fit? Yes, it is possible with the ANSI_WARNINGS setting. The default value of this is ON, hence INSERT and UPDATE statements work as specified by the ISO standard. That is the reason for the error we experience.

If you set this setting to OFF, then no warning is issued and the statement succeeds. However, the value is truncated to the column size.

Here is an example;

USE tempdb;
GO

DROP TABLE IF EXISTS dbo.TestTable;
GO

CREATE TABLE dbo.TestTable
(
 Id int PRIMARY KEY
 , Value varchar(5) NOT NULL
);
GO

-- This statement fails as we send 11 characters
INSERT INTO dbo.TestTable
 (Id, Value) VALUES (1, 'Test value.');

-- This code succeeds but will take only first 5 characters
SET ANSI_WARNINGS OFF
INSERT INTO dbo.TestTable
 (Id, Value) VALUES (2, 'Test value.');
SET ANSI_WARNINGS ON
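Querying the table afterwards confirms the truncation; a quick check, assuming the statements above were run as-is:

```sql
SELECT Id, Value FROM dbo.TestTable;
-- Only Id = 2 exists, with Value = 'Test ' (the first 5 characters)
```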

Sunday, November 27, 2016

TRUNCATE Table - SQL Server Brain Basher of the Week #060

Here is a simple question I asked a candidate I interviewed last week. It was not the exact topic being discussed, but I asked it based on one of his explanations while we were discussing something else. It is not about the differences between TRUNCATE and DELETE or the advantages of one over the other; this is what I asked;

Is the TRUNCATE statement a DML statement or a DDL statement?

Unfortunately I did not get the answer I needed. First of all, let's try to understand what DML and DDL are;
  • DML - Data Manipulation Language - most action queries come under this; the most common ones are INSERT, UPDATE and DELETE. There is another category called DQL - Data Query Language - for commands like SELECT, but some consider SELECT a part of DML.
  • DDL - Data Definition Language - commands that create or modify objects in the database come under this. Statements like CREATE TABLE and CREATE USER are examples of this.
Now, is TRUNCATE DML or DDL?

It is a DDL command because it does not work with individual data rows (as the DELETE command does). Yes, it deletes all records, but it actually performs an operation similar to a drop/re-create of the table, hence it is considered a DDL command.

Remember, TRUNCATE does the following;
  • Removes all data pages, leaving no empty pages behind.
  • Acquires only table or page level locks.
  • Resets the identity property.
  • Needs ALTER permission.
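The identity reset is easy to see; a quick sketch (table and values are made up for illustration):

```sql
CREATE TABLE dbo.TruncateDemo (Id int IDENTITY(1,1) PRIMARY KEY, Value varchar(10));
INSERT INTO dbo.TruncateDemo (Value) VALUES ('a'), ('b');

TRUNCATE TABLE dbo.TruncateDemo;

INSERT INTO dbo.TruncateDemo (Value) VALUES ('c');
SELECT Id, Value FROM dbo.TruncateDemo;  -- Id = 1: TRUNCATE reset the identity seed
```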

Friday, November 25, 2016

SSIS OLE DB Destination sends all records to Error Output

The SSIS OLE DB Destination is one of the destinations we heavily use for loading data into databases. Generally we use the Data access mode - fast load option because it is the most efficient and fastest way of loading data. However, with the default settings, you may see some issues if you try to capture the error records while loading the destination, unless you change a few settings.

Let's make a simple data set and try to load it into a table. I have the following table created in tempdb.

USE tempdb

CREATE TABLE dbo.Product
(
 ProductId int IDENTITY(1,1) NOT NULL,
 ProductName varchar(200) NOT NULL,
 Class char(1) NOT NULL,
 Cost money NOT NULL,
 PRIMARY KEY CLUSTERED (ProductId ASC)
)
GO

ALTER TABLE dbo.Product  
ADD  CONSTRAINT CH_Product_Class 
CHECK  ((Class='A' OR Class='a' OR Class='B' OR Class='b'))
GO

ALTER TABLE dbo.Product  
ADD  CONSTRAINT CH_Product_Cost CHECK  ((Cost>(0)))
GO

I have a text file named Products.txt that contains 15 records as below;

ProductName,Class,Cost
P01,A,100
P02,B,150
P03,A,344
P04,A,555
P05,A,100
P06,B,150
P07,X,344
P08,A,555
P09,A,555
P10,A,100
P11,B,150
P12,A,344
P13,A,555
P14,A,344
P15,A,555

As you see, the P07 record is not valid as it has X as the Class. Let's set up a package with a Data Flow Task and configure it to read data from this file and load it into the Product table as below.

I have used OLE DB Destination with its default settings.


And have configured the Error Output to Redirect Row on Error.


Then I have set a Recordset Destination just for capturing the error output. If you prefer, you can configure a DataReader Destination instead. This is what the Data Flow looks like;


Now if I run the package, it retrieves all records but sends all of them to the Recordset Destination, even though there is only one record that violates a constraint.


What is the reason for this? The reason is the value we have set (or the default value) for the Maximum insert commit size property. This property indicates the batch size that the OLE DB Destination tries to commit during the operation. Since we have only 15 records and the batch size is 2147483647, it tries to commit all of them at once. Since there is an invalid record, the entire batch fails and none of the records gets inserted (or committed).

This is fine with a small dataset like this, but it is not acceptable for a large one. Hence we need to make sure that it does not send all records to the error flow, but only an acceptable range.

For this, let's set it to 5 and see.


Here is the result of execution after changes;


This makes sure that at least the valid batches get inserted, without sending all records to the error flow.


However, you need to decide the batch size without degrading the performance. If you set it to a small value, you may not get the expected performance from the fast-load option.


Thursday, November 24, 2016

SSIS Exception deserializing the package The process cannot access the file .ispac because it is being used by another process

For some reason, my SQL Server Data Tools was crashing on a specific package and I could not get it up and running again until I restarted the machine. However, I started experiencing the same issue again (not sure about the reason), and this is what I got when I tried to open the package and execute it after the crash.

Exception deserializing the package "The process cannot access the file 'C:\Users\DineshPriyankara\Documents\Visual Studio 2015\Projects\TestDataFlowErrors\TestDataFlowErrors\bin\Development\TestDataFlowErrors.ispac' because it is being used by another process.".


When I checked the Task Manager, I noticed that a few SSIS Debug Host processes were running;


Since I was not running any SSIS project, I decided to kill all SSIS related tasks, and of course, that solved the problem. I was able to open and run the package without any issue.

Now, I am not sure whether it is the best way of solving the issue; I have asked some experts and am waiting for explanations....

Wednesday, November 23, 2016

SSIS Pivot Transformation

I wrote a post on this in 2011 based on the old configuration: PIVOT and UNPIVOT: Integration Services. Setting up the Pivot Transformation is not that easy in versions prior to 2012, but the GUI given afterward can be easily used and implemented. If you need to understand the old way of pivoting, read the previous blog post; this post is based on the GUI given with SQL Server 2012 onward.

Pivot allows us to convert a normalized dataset into a less normalized format, transferring (pivoting) row values into columns. In simple terms, it identifies distinct values in a designated column and forms columns based on the identified values. Note that it is not fully dynamic and automated; it does not recognize the column names to be generated as Pivot Columns, so you need to specify the new column names manually, though the new GUI can be used for getting the column names.

Let's see an example. The following query generates an output with Year+Customer, Month and OrderAmount;

USE AdventureWorks2014;
GO

SELECT 
    CONVERT(varchar(4), YEAR(OrderDate)) + '_' + p.LastName OrderYear_Customer
    , DATENAME(mm, OrderDate) OrderMonth
    , SUM(SubTotal) OrderAmount
FROM Sales.SalesOrderHeader h
    INNER JOIN Sales.Customer c
        ON c.CustomerID = h.CustomerID
    INNER JOIN Person.Person p    
        ON p.BusinessEntityID = c.PersonID
GROUP BY 
    CONVERT(varchar(4), YEAR(OrderDate)) + '_' + p.LastName
    , DATENAME(mm, OrderDate)
ORDER BY 1, 2;


Let's pivot this result using Integration Services as below output;


As you see, what we have to do is transfer the values in the OrderMonth column into multiple columns and use the OrderAmount column for calculating the relevant total for each month. Use this script for creating a table to hold the above resultset. You can create it in a new database; in my case, it has been created in a database called DataWarehouse.

CREATE TABLE dbo.SalesForMonths
(
 Id int identity(1,1) primary key
 , OrderYear_Customer nvarchar(200) NOT NULL
 , January money NULL default(0)
 , February money NULL default(0)
 , March money NULL default(0)
 , April money NULL default(0)
 , May money NULL default(0)
 , June money NULL default(0)
 , July money NULL default(0)
 , August money NULL default(0)
 , September money NULL default(0)
 , October money NULL default(0)
 , November money NULL default(0)
 , December money NULL default(0)
)

The next step is creating an SSIS project and adding a package. Here are the steps for loading and pivoting the data;

1. Add an OLE DB Data Source and configure it with above SELECT statement.
2. Add a Pivot Transformation and connect it with the OLE DB Source. Open its Editor (you can either double-click on it or select the Edit menu item from the context menu).
3. We need to set following items;
  • Pivot Key: This needs a column that contains values for setting the new columns (column names). With this example, it needs to be set to OrderMonth.
  • Set Key: This needs a sorted column that contains values for grouping. Select OrderYear_Customer for this property.
  • Pivot Value: This needs a column that contains values for new pivot output columns. Set OrderAmount to this.
4. Next is setting up the output column names. Either we hard-code them as January, February, March, ..., or get the list generated from the Pivot Key column set. Let's get the list generated. Select the Ignore un-matched Pivot Key values and report them after DataFlow execution checkbox and click OK.


Although we have not completed the package, let's execute it;


Once the execution is done, go to the Progress tab and copy the generated values.


As mentioned above, you can either type these values for generating the pivot column names or paste the copied list without typing.

5. Let's open the Editor of the Pivot Transformation again and paste the copied values. Once pasted, click on the Generate Columns Now button.


6. Click OK to complete the Pivot Transformation.
7. Add an OLE DB Destination and configure it for the table created above. 
8. Match the columns flowing from the Pivot with it and save the OLE DB Destination.


9. Execute the package now. 


10. If you query dbo.SalesForMonths now, you should see the loaded pivot result.
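As a side note, since the data comes from SQL Server anyway, the same shape can be produced purely in T-SQL with the PIVOT operator; a sketch, with the month list abbreviated to three columns for brevity:

```sql
SELECT OrderYear_Customer, [January], [February], [March]
FROM 
(
    SELECT 
        CONVERT(varchar(4), YEAR(OrderDate)) + '_' + p.LastName OrderYear_Customer
        , DATENAME(mm, OrderDate) OrderMonth
        , SubTotal
    FROM Sales.SalesOrderHeader h
        INNER JOIN Sales.Customer c ON c.CustomerID = h.CustomerID
        INNER JOIN Person.Person p ON p.BusinessEntityID = c.PersonID
) AS Source
PIVOT
(
    SUM(SubTotal) FOR OrderMonth IN ([January], [February], [March])
) AS P;
```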



Tuesday, November 22, 2016

What is SSIS dtsx.designer file?

Have you noticed that there are additional files with the extension dtsx.designer in your Integration Services 2016 projects?


You are 100% sure that you did not create a file like that, and it can be opened; it is an XML file. If you analyze the file, you should notice that it is similar to your dtsx file but has additional nodes.

dtsx file:


dtsx.designer file:

As you see, dtsx.designer has more nodes though the content is similar. This file is automatically created when you use one or more Package Parts in your package. Package Parts are one of the new features introduced with SQL Server Integration Services 2016.

Note that this file is used only at design-time. At run-time, the package parts are merged into your original package (in my case, LoadProducts).


Monday, November 21, 2016

New way of opening the SSIS toolbox

Did you notice that the SSIS toolbox icon appears at the right corner of the package designer in SQL Server 2016 Data Tools? I can remember, and I have seen as well, how we struggle to find the toolbox whenever it goes missing, even though a menu item is given for it.

Anyway, now it is really easy; this is the icon you need to click;


Sunday, November 20, 2016

Quickest way of creating a copy of existing database without data - SQL Server Brain Basher of the Week #059

Sometimes we need to take a copy of a database without data and have it as a new database. This requires either generating the entire script and running it with a new database name, or taking a backup of the database, restoring it and deleting the data. Both require a reasonable amount of time based on various conditions.

Here is the Brain Basher of the week on it.

What is the quickest way of creating a copy of existing SQL Server 2016 database without data?

Both techniques mentioned above work fine but add complexities. If you have installed SQL Server 2016 Service Pack 1, this can be easily achieved using the DBCC CLONEDATABASE command.

Here is the way of creating a copy with a new name.

DBCC CLONEDATABASE(AdventureWorks2014, CloneAdventureWorks2014)
WITH NO_STATISTICS, NO_QUERYSTORE;

Note that the database is created as a read-only database;


However, changing the Database Read-Only property to False makes the database a read-write database.
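The same can be done with T-SQL instead of the Properties window; a one-line sketch, assuming the clone was named as above:

```sql
ALTER DATABASE CloneAdventureWorks2014 SET READ_WRITE;
```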


For more info on this command, read: DBCC CLONEDATABASE is now working with SQL Server 2016.

** Note that this does not take all objects that exist in the database when copying the schema. Therefore, if your database contains many different types of objects (such as Natively Compiled Stored Procedures), this is not the best way of getting a copy of your database.

** Although we can get the schema from this, this functionality is primarily given for getting a copy of a database for troubleshooting, NOT FOR CREATING A DATABASE SCRIPT OR A COPY OF THE DATABASE.

For more info, read: https://support.microsoft.com/en-us/kb/3177838

Saturday, November 19, 2016

Now SQL Server supports CREATE OR ALTER statement

Another great enhancement with SQL Server 2016 SP1 is support for the CREATE OR ALTER statement with Stored Procedures, Views, Triggers and User-Defined Functions. Now you do not need a statement for checking whether the object exists before deciding to start with either CREATE or ALTER. You can start the statement straightaway as CREATE OR ALTER and SQL Server will handle the rest.

The following example shows the way of using it;

USE AdventureWorks2014;
GO

CREATE OR ALTER VIEW Sales.GetSales
AS
SELECT h.SalesOrderNumber, h.CustomerID, SUM(d.LineTotal) TotalAmount
FROM Sales.SalesOrderHeader h
 INNER JOIN Sales.SalesOrderDetail d
  ON h.SalesOrderID = d.SalesOrderID
GROUP BY h.SalesOrderNumber, h.CustomerID;


Friday, November 18, 2016

Microsoft SQL Server Backup to Windows Azure Tool

Earlier I made a post on SQL Server Backup to URL: SQL Server Backup to URL - Now easier than before, which explains how to take a backup of a SQL Server 2016 database using the new Management Studio. However, backup to Azure was not something new with SQL Server 2016; it started with SQL Server 2012 (without GUI support) and was improved with GUI support in SQL Server 2014. Read this post for more information on it;

There is another way of backing up SQL Server databases to Azure, based on a tool called Microsoft SQL Server Backup to Windows Azure. The good thing about this tool is that it is not limited to a specific version of SQL Server; it can be used with any version. Note that it is not a new tool; it is quite an old tool, but the latest version was published in June 2016.

Let's see how we can use this tool for backing up databases and restoring from it;

The first thing you need to do is download and install it. You can download it either from:
If you do not need the other related tools for SQL Server, use the second URL for downloading the tool. Once it is installed, you can see the tool as Microsoft SQL Server Backup to Windows Azure Tool Configuration; start it. This is what you see on its first screen;


This screen allows you to add a Rule. Click on Add and continue. The rule describes which location this service needs to monitor and what files it should consider. Specify your backup location and file name pattern. Usually, the pattern is *.bak.


Click on Next to continue. Then you need to specify the storage. Make sure the storage account you have created is Classic and that it has a Blob Container for holding backups. Set the Storage Name, Access Key and Container.


Once you click Next, you can enable Encryption and Compression, and then click the Finish button. You should see the rule you added, and the screen allows you to add more rules as well as modify existing ones.


Let's see how this works with backup operations. First, let's back up the WideWorldImporters database to a different folder and check its size. Make sure you have enabled Compression.


The size of the compressed backup is 165 MB:


Let's take the same backup to the folder which is configured with the tool. Do NOT enable compression this time. Note that it will take more time than the previous step, as it needs to transfer data to Azure.
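For reference, the backup statement itself is an ordinary disk backup; a sketch, with the folder path being whatever you configured in the rule (the path below is made up):

```sql
BACKUP DATABASE WideWorldImporters
TO DISK = 'C:\AzureBackups\WideWorldImporters.bak';
```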


Check the folder after the backup operation is completed. You will see a small file which is NOT your database backup. It is called a stub file and contains some metadata which can be used for the restore operation.


If you check the Azure Storage now, you will see that two files have been created and placed there. One is the same stub file and the other is the blob for the database backup. Since the same stub file is in the Azure Storage, you can take a copy of it for the restoration operation if you have lost the local file.


If you open the Stub file using a Text Editor, you should see similar content;


If you need to restore the database, you can refer to the stub file as the backup through the standard restoration window, and SQL Server will access the backup from the Azure Storage.

As you see, this tool is very useful, specifically when you have multiple versions of SQL Server. The best thing is, you do not need to write specific code for transferring files from the local environment to Azure, as it is done automatically by the tool whenever a backup is placed in the monitored folder.

As you see, the size of the backup file after being compressed by the tool is 130 MB, which is less than the SQL Server compressed backup. Since it supports encryption as well, in a way it is better than SQL Server Backup to URL.


Thursday, November 17, 2016

DBCC CLONEDATABASE is now working with SQL Server 2016

You might have used this command before, or you might be unaware of it. It was introduced with SQL Server 2014 SP2 and it allowed us to clone a database easily, without data but with statistics. This is very useful when you have to troubleshoot a production database but cannot do much with it because;
  1. You should not use it for troubleshooting during peak hours. 
  2. You cannot do all testing as you want because of some policies and security implementations.
Since this command allows you to take a copy of the database without data, it is fast and useful for checking various things.

Here is the way of creating a clone:

DBCC CLONEDATABASE(AdventureWorks2014, CloneAdventureWorks2014);

This is the result you get when you execute it against SQL Server 2016 (note that this requires SP1; it does not run on RTM)


This is what you see if you execute it on RTM


Since the clone does not have data, you cannot perform all the activities usually performed for troubleshooting.

For example, this is possible and it produces the required result:

USE CloneAdventureWorks2014;
GO

DBCC SHOW_STATISTICS ('Sales.SalesOrderHeader', 'PK_SalesOrderHeader_SalesOrderID')  


But this does not provide the output we need:

USE CloneAdventureWorks2014;
GO

SET STATISTICS IO ON
SELECT * FROM Sales.SalesOrderHeader;


Wednesday, November 16, 2016

SQL Server 2016 Service Pack 1 (SP1) released

Do not expect another standard service pack that contains only hot-fixes and updates done on the previous version; this one has something else, a major change.

Yes, in addition to fixes and new features, Microsoft has made some of the previously-unavailable features (or Enterprise-only features) available in the Standard, Web, Express and LocalDB editions. This is amazing, especially for those who run large and heavy OLTP databases and who have data warehouses implemented in Standard Edition.

Whenever I do training and consultancy on data warehousing, the biggest hitch I face is that the client is using Standard Edition and no partitioning can be done. Not only that, the unavailability of Columnstore Indexes and In-Memory OLTP are a few more. This sometimes discourages the client on a BI implementation, as they see many limitations with the Edition they have bought, and unfortunately they cannot go for Enterprise because of the cost. This is what Microsoft has addressed with the SQL Server 2016 SP1 release.


Here are the features available now with Standard, Web and Express which were not available before;