Saturday, April 18, 2015

SQL Server Brain Basher of the Week #008

When you talk with your database, you use a language that is understandable to the database component you access and specific to the vendor. There are multiple vendor-specific languages such as T-SQL, PL/SQL and SQL PL for Microsoft SQL Server, Oracle and IBM DB2 databases respectively. Although there are different vendor-specific implementations, the general categorization of the database language, in terms of command types and operation types, is as follows;

DML
DDL
DCL
TCL
DQL

Do you know all of these languages? Can you describe what these acronyms stand for and the purpose of each? Yes, that is the Brain Basher of the week.

Here is the explanation of each (a small example for each category follows the list);
  • DML - Data Manipulation Language - This language is used for manipulating data stored in tables. Operations such as inserting data and updating data are done with this language, and the SQL Server commands that fall under this language are INSERT, UPDATE and DELETE.
  • DDL - Data Definition Language - This helps us to define the structures used for holding data. It is used for operations like creating databases and creating tables, and example commands for this language are CREATE DATABASE, CREATE TABLE and CREATE VIEW.
  • DCL - Data Control Language - This language is mainly used for controlling access permissions on objects created in the database and the operations that can be performed on them. Operations such as granting a user permission to update records in a table or allowing a group to execute a stored procedure fall under this, and commands such as GRANT SELECT and GRANT ALTER are examples of it.
  • TCL - Transaction Control Language - Operations and commands related to transactions come under this language. Starting a transaction with BEGIN TRAN and ending a transaction with either ROLLBACK TRAN or COMMIT TRAN are part of this language.
  • DQL - Data Query Language - This represents the SELECT command that is used for retrieving data from the database.
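
Here is a small set of example commands, one per category; the table, user and object names below are only samples for illustration.

-- DDL: defining a structure
CREATE TABLE dbo.Customer (CustomerId int PRIMARY KEY, Name varchar(100));

-- DML: manipulating data stored in the table
INSERT INTO dbo.Customer (CustomerId, Name) VALUES (1, 'Sample Customer');
UPDATE dbo.Customer SET Name = 'Updated Customer' WHERE CustomerId = 1;

-- DQL: querying data
SELECT CustomerId, Name FROM dbo.Customer;

-- DCL: controlling access (Jane is a sample user)
GRANT SELECT ON dbo.Customer TO Jane;

-- TCL: controlling transactions
BEGIN TRAN;
DELETE FROM dbo.Customer WHERE CustomerId = 1;
ROLLBACK TRAN;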

Thursday, April 16, 2015

Reporting Services Multi-Value parameter and Oracle database

Parameterization is not a difficult implementation with Reporting Services as most of the coding is automatically added either via the data source dialog box or the wizard. It adds the necessary code as well as the report parameters. It handles not only single-select parameters but also multi-select or multi-value parameters with the appropriate filtering code. However, this becomes a little complex when parameters are loaded from different sources. It becomes more complex (for me :)) when the selected values of the parameters have to be passed to Oracle, as the main source of the report is Oracle.

I had a couple of parameters loaded from a different source, to be precise, from Analysis Services dimensions, and the selected values had to be passed to a function implemented in an Oracle package for loading the main dataset. A similar implementation was required for a few more reports, hence I looked for a generic way to handle it. Searching opened up many ways of doing this: some were for specific scenarios, some were shortcuts, and some were combinations of different approaches. Finally, I figured out a way to handle the situation. Assuming the solution will be useful for others, I am posting it.

Note that the way I have implemented the Oracle objects is based on knowledge gathered from the internet without studying Oracle in a structured and ordered manner, hence this may not be the best way of implementing it.

Okay, let's talk about the scenario again. I have two parameters, let's call them rpProduct and rpCountry, and "Allow multiple values" is checked on both. They are loaded from Analysis Services dimensions. The main dataset is loaded from an Oracle function called LoadSales which is in a package called pkgSales. The WHERE clause of the SELECT statement has to handle the values coming from the two parameters, and it is better to have a generic way as the same implementation is required for multiple reports.

This is what I did. Initially, I created a Type in the Oracle database for holding multiple values. It is a Table type and I used varchar2(200) as the size of an individual item.

create or replace type t_string_table is table of varchar2(200);

Then I created a function that can be used for converting comma-separated string values into a collection. As you can see, its return type is the type created with the above code. I borrowed some of the code from: http://www.oratechinfo.co.uk/delimited_lists_to_collections.html

create or replace function Convert_stringlist_to_table(string_list in varchar2) return t_string_table is

   -- append a trailing comma so the last item is also captured by the loop
   l_string      varchar2(32767) := string_list || ',';
   l_comma_index pls_integer;
   l_index       pls_integer := 1;
   l_tab         t_string_table := t_string_table();
begin

   loop
      -- find the next comma starting from the current position
      l_comma_index := INSTR(l_string, ',', l_index);
      EXIT WHEN l_comma_index = 0;
      -- add the substring between the current position and the comma
      l_tab.EXTEND;
      l_tab(l_tab.COUNT) := SUBSTR(l_string, l_index, l_comma_index - l_index);
      l_index := l_comma_index + 1;
   end loop;
   return l_tab;
end Convert_stringlist_to_table;
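
To quickly verify the function, the returned collection can be queried directly; this is just a hypothetical test, and the values P1, P2 and P3 are only samples.

-- should return three rows: P1, P2 and P3
select column_value
from TABLE(Convert_stringlist_to_table('P1,P2,P3'));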

Next, I changed the SELECT of the LoadSales function. The function accepts two parameters, one for products and the other for countries. Values are expected to be comma-separated. As you can see, the values are converted to collections using the function written above before being used in the WHERE clause.

  function LoadSales
    (v_products in varchar2, v_countries in varchar2)
     return sys_refcursor
     is cur sys_refcursor;
  begin
    
    open cur for
      select {column list}
      from {table list}
      inner join .......
      where country_name in (select column_value  from TABLE(cast (Convert_stringlist_to_table(v_countries) as t_string_table)))
            and product_name in (select column_value  from TABLE(cast (Convert_stringlist_to_table(v_products) as t_string_table)));
    
    return cur;
  end;

The last step is configuring the dataset parameters. Since we need to send the selected values of the report parameters as comma-separated strings, the parameter expression should be written as;

=Join(Parameters!rpProduct.Label, ",")


That is all. The good thing is, the whole thing can be reused with all string-type parameters related to other reports that require the same implementation. You might have to change the code if you expect commas in parameter values.


Wednesday, April 15, 2015

What pushes SMP Architecture data warehouse to MPP Architecture

We have been developing data warehouses, centralizing enterprise data and addressing business intelligence requirements. Generally, almost all data warehouses built have been based on the traditional architecture called SMP: Symmetric Multi-Processing. Even though we use different design strategies for designing data warehouses, improving performance and managing the volume efficiently, the necessity of scaling up often comes up. Without much consideration of the related factors, we tend to add more resources, spending more money, to address the requirement. But at a certain point we need to decide, we need to understand, that the existing architecture is not sufficient for continuing; it needs a change: SMP to MPP.

What is SMP architecture? It is a tightly coupled multi-processor architecture in which processors share resources, connecting to a single system bus. With SMP, the system bus limits scaling up beyond a certain point; when the number of processors and the data load increase, the bus can become overloaded and a bottleneck can occur.

MPP, Massively Parallel Processing, is based on a shared-nothing architecture. An MPP system uses multiple servers called nodes which have dedicated, reserved resources, and executes queries distributed across the nodes independently, offering much better performance than SMP.


How do we know the boundary, or what factors can be used for determining the necessity of MPP? Here are some; these will help you to decide.

Here is the first one;
This is all about data growth. Yes, we expect an acceptable data growth with data warehousing, but if it increases drastically and we need to continuously plug in more and more storage, it indicates a necessity for MPP. We are not talking about megabytes or gigabytes but terabytes or more. Can't we handle the situation just by adding storage? Yes, it is possible, but there will definitely be a limit on it. Not only that, the cost goes up too. We do not see this limitation with MPP and, in a way, adding additional storage after the initial implementation might not be as expensive as with SMP.

Here is the second;
If somebody talks about BI today, the aspect of real-time or near-real-time is definitely an area of concern. A traditional data warehouse implementation manages this up to some extent but not fully, mainly because of the capacity, the loading and the complexity of the query. Generally, with the Microsoft platform, we use SSIS for data loading, and de-normalized, semi-normalized tables designed either as general tables or star/snowflake-structured tables for holding data. Assume that a user requests real-time data and the same fact table that contains billions of records has to be accessed; then the performance of the query might not be at the expected level. With MPP, since data can be distributed across multiple nodes, data retrieval is definitely faster and real-time queries are handled efficiently.

Here is the third;
A traditional data warehouse requires structured, in other words relationally formatted, data. However, modern BI is not just based on this; data with unknown structures is not rare and is often required for most of the analysis. How do we handle this? One data warehouse for known, structured data and another for unstructured data? Even if we maintain two data warehouses, how can an analysis be performed combining the two? Can traditional tools attached to the existing architecture be used for combining them efficiently, processing them fast and producing the required result? No, it is not possible, and it means it is high time for MPP. This does not mean that MPP handles all of these areas, but it supports them. MPP helps to process unstructured data much more efficiently than SMP, and the Microsoft platform allows combining structured and unstructured data with a user-friendly interface using its MPP-based solution.

Here is the fourth;
Mostly, we extract data from on-premises data storage, and traditional ETLing handles this well. However, with modern implementations data generation is not limited to on-premises applications; a lot of important data is generated by cloud applications. In a way, this increases the load as well as the complexity, and sometimes this changes traditional ETL into ELT. MPP architecture has capabilities to handle these complexities and improve the overall performance, hence this can be considered as another reason for moving from SMP to MPP.

There can be some more reasons, but I think these are the significant ones. Please comment if you see more on this.

Sunday, April 12, 2015

SQL Server Brain Basher of the Week #007

Not a new one, but I still do not get proper answers when a question is asked on this. See whether you can select the accurate statement(s) before reading the rest. This is all about the "GO" keyword we use with TSQL.


What do you think? Let's go through them one by one. GO is not a TSQL statement. It is a command recognized only by the sqlcmd utility and Management Studio. Based on this fact, the 1st statement is false and the 3rd statement is true. If you execute TSQL code that contains the GO keyword through C#.Net, you will get an error because the engine cannot recognize the keyword.

GO is treated as the batch separator in Management Studio and SqlCmd. This makes the 2nd statement true. All statements from either the top of the script or the last GO statement up to the next GO statement are considered one batch and will be executed as a single batch. Objects like variables cannot be shared among batches. Here is an example;

DECLARE @MyVariable int = 100;

-- This statement runs without any issue
SELECT @MyVariable;
GO -- Batch separator

-- This statement throws an error:
-- "Must declare the scalar variable "@MyVariable"."
SELECT @MyVariable;

Can we replace GO with any other word? Yes, it is possible. This option is available under the Options menu item of the Tools menu in Management Studio. Considering this, the fourth statement is true too.


Let's form a team for a "Business Intelligence" project. Who do we need?

When forming a team for a Business Intelligence or Data Warehousing project, it is always advisable to fill the required roles with the best people and to make sure they are fully allocated until the project is delivered, at least the first version. There is not much difference from general IT projects, but filling the roles with the right person and the right attitude should be thoroughly considered, organized and managed. This does not mean that other IT projects do not require the same rigor, but with my 14 years of IT experience, I still believe that a Business Intelligence and Data Warehousing project needs extra consideration.

How do we form the team? Who do we recruit for the team? What are the roles that would play a part in this project? Before selecting people, before forming the team, we should know what types of roles are required for the project and the responsibilities borne by each role. That is what this post speaks about; here are all the possible roles that would be required for a Business Intelligence and Data Warehousing project.

Project team: Pic taken from http://www.usability.gov/how-to-and-tools/methods/project-team.html
  • A Project Manager
    This role requires a person who possesses good communication and management skills. I always prefer a person with a technical background because it minimizes arguments on some of the decisions, specifically on resource allocation and the timeframes set. The project manager coordinates project tasks and schedules them based on available resources, and he needs to make sure that deliveries are done on time and within budget. The project manager has to be a full-time member initially and at the last part of the project but can act as a part-time member in the middle of the project.
  • A Solution Architect
    This role requires a person who possesses at least 10 years of experience in enterprise business intelligence solutions. The architect's knowledge of database design, data warehouse design, ETL design, model design and presentation layer design is a must, and he should understand the overall process of the business intelligence project and should be able to design the architecture for the entire solution. She/he does not need to be a full-time member, but the project requires him as a full-time member from the beginning to the completion of the design phase. This role becomes a part-time member once the design is completed, as she/he is responsible for the entire solution.
  • A Business Analyst
    The Business Analyst plays the key role of requirement gathering and documentation. This person should have good communication and writing skills. In addition to that, he needs experience in requirement gathering processes related to BI projects and the related documents. The Business Analyst is responsible for the entire requirement, getting confirmation and sign-off, and delivery in terms of the requirements agreed. This is a full-time member role and is required until project completion.
  • A Database Administrator
    The Database Administrator does the physical implementation of the data warehouse and maintains the data warehouse, the ETL process and some of the models implemented. He needs good knowledge of the platform used and the specific design patterns used for data warehousing and ETLing. Generally, database administration is not considered a full-time role for a specific project because, in many situations, the administrator is responsible for the entire environment that contains the implementations of multiple projects.
  • An Infrastructure Specialist
    This role requires at least 5 years of experience in planning and implementing server and network infrastructure for database/data warehouse solutions. This person is responsible for selecting the best hardware solution with the appropriate specification for the solution, the right set-up, the right tool-set, performance, high availability and disaster recovery. This is not a full-time member role for the project.
  • ETL/Database Developers
    An engineer who possesses at least 3-4 years of experience in programming and integration. He should be aware of the design patterns used with ETLing and must be familiar with the tool-set and the platform used. In addition to knowledge of ETLing, this role requires programming knowledge, as some integration modules need to be written using managed code. Experience with different database management systems and integration with them is required for this role, which is responsible for implementing the ETL solution as per the architecture designed. At the initial stage, this is not a full-time member role. Once the design phase is done, the team includes this role as a full-time member.
  • Business Users
    The project requires this role as a full-time member, and the business user is responsible for providing the right requirements. This role requires a thorough knowledge of the domain and all business processes. He is responsible for making sure that what the business intelligence solution offers for business questions is accurate.
  • Data Modelers
    Data Modelers are responsible for designing data models (more on server-level models) for supporting the analysis done by business users. This role requires technical knowledge of modeling and familiarity with the tool-set used for modeling. This is not a full-time member role.
  • Data Stewards
    Data Stewards can be considered business users, but they are specialized in a subject area. This is not a technical role, and it is responsible for the quality and validity of data in the data warehouse. This is not a full-time member role, and they are also referred to as data governors.
  • Testers
    This role is responsible for the quality of the entire solution, specifically the output. Testers with at least 2-3 years of experience are required for the project, and they work closely with business analysts and business users. This role joins the team in the middle of the project and works until completion.

Saturday, April 11, 2015

Object Name Resolution - Stored Procedure

SQL Server does not resolve the names of stored procedures exactly as it does for tables. The resolution is a bit different. Here is the post I made on Object Name Resolution - Table: http://dinesql.blogspot.com/2015/04/object-name-resolution-table.html. Let's see how this happens with stored procedures.

Have a look at the image below;


As per the example shown in the image, Jane executes the GetOrders stored procedure without adding its schema. In order to resolve this name, SQL Server initially adds the sys schema. Since that does not succeed, it adds the user's schema, which is HR. As the stored procedure has been created under HR, Jane successfully executes the query.

Jack executes the same, and SQL Server tries sys.GetOrders. Since it is unsuccessful, SQL Server adds his schema, which is FN. Again, SQL Server cannot resolve an object called FN.GetOrders. SQL Server tries again with a different schema, which is dbo. That is the last try, and as it gives the same unsuccessful result, Jack gets an error.

Just like with table names, make sure you add the schema to stored procedures when executing them. This makes sure that SQL Server does not need to spend extra time resolving the name, and it avoids overhead and errors.
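
As a simple illustration, using GetOrders and the HR schema from the example above, executing the procedure with its schema avoids the resolution steps completely;

-- resolved directly, no schema guessing needed
EXEC HR.GetOrders;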

Object Name Resolution - Table

How does SQL Server resolve object names if you have not referred to them as fully qualified objects? If it is not a fully qualified object (a statement like below);

USE AdventureWorks2014;
GO

SELECT * FROM SalesOrderHeader;

it goes through additional steps to make it a fully qualified object in order to resolve the name. This is an additional cost for the execution engine, and it adds extra time to the execution of the query. Not only that, if it cannot resolve the object, it throws an error saying Invalid Object even though the object exists. Have a look at the image below.


Let's talk about the first example. Jane executes a query against SalesOrderHeader. She has not mentioned the schema of the table, hence it is treated as a non-fully-qualified object. SQL Server tries adding her default schema, which is Sales. Once the schema is added, it becomes Sales.SalesOrderHeader and SQL Server can find the object. She executes the query successfully, with additional overhead.

Jack tries to execute the same query. Since Jack's schema is Finance, SQL Server initially tries Finance.SalesOrderHeader. Since that does not succeed, it makes its second try, which adds the dbo schema. Again, dbo.SalesOrderHeader does not exist, hence it fails and Jack gets an error.

Realized how important it is to mention the schema? Make sure the objects mentioned in your queries use fully qualified object names, avoiding extra work and errors.
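
Here is the same query written with the schema; it works for both Jane and Jack without any resolution overhead.

USE AdventureWorks2014;
GO

-- fully qualified, no extra resolution work
SELECT * FROM Sales.SalesOrderHeader;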


Wednesday, April 8, 2015

What are ROLLBACK WORK AND COMMIT WORK?

I am sure that you are familiar with ROLLBACK TRANSACTION and COMMIT TRANSACTION but have you heard about or used ROLLBACK WORK and COMMIT WORK?

ROLLBACK WORK and COMMIT WORK work exactly the same way as ROLLBACK TRANSACTION and COMMIT TRANSACTION. The WORK keyword is optional and this syntax is ISO-compatible.

Can we use these instead of what we have been using? Yes, it is possible and there is no harm at all. The only missing part is that they do not accept a user-defined transaction name.

Here is a sample code using ROLLBACK WORK and COMMIT WORK.

USE AdventureWorks2014;
GO

BEGIN TRAN

UPDATE Production.Product
 SET Color = 'b'
WHERE Color = 'Black';

-- simple logic to test
IF @@ROWCOUNT > 10
BEGIN

 ROLLBACK WORK;
 PRINT 'transaction rolled back';
END
ELSE
BEGIN
 
 COMMIT WORK;
 PRINT 'transaction committed.';
END

Tuesday, April 7, 2015

Some records are locked, can I skip them and read the rest? (READPAST)

Assume that you have a table with thousands of records and some of the records are being updated by another user. Now you try to read some records from this table. If your request needs to access records that are being updated by the other user, your request will not be satisfied immediately; you have to wait till the other user completes his transaction. This is the default behavior and this is what we refer to as blocking. Here is an example of it;

USE AdventureWorks2014;
GO

-- Starting a transaction and updating 93 records
BEGIN TRAN
 UPDATE Production.Product
  SET Color = ''
 WHERE Color = 'Black';

-- Open a new window and run this
-- You will not be able to see records
SELECT * FROM Production.Product;
GO

-- Stop the query and see locks placed
-- You should see the records exclusively locked
sp_lock;


What if you need to read only the available records? If you need to get your recordset without waiting and you have no issue with skipping locked records, it can be done with a hint. The hint that has to be used for this is READPAST.

READPAST instructs the SQL Server engine not to read records locked by other transactions and to continue the reading process. Here is the same code using the READPAST table hint.

-- Use the table hint.
-- This will skip 93 locked records
-- and show the rest
SELECT * FROM Production.Product (READPAST);
GO

Sunday, April 5, 2015

SQL Server Brain Basher of the Week #006

Let's talk about date and time. Have a look at the following code and try to guess the value returned by the last SELECT statement.

USE tempdb;
GO

CREATE TABLE dbo.TestTable
(
 Code char(4) NOT NULL
 , TimeRecorded datetime NOT NULL
)
GO

INSERT INTO dbo.TestTable 
 (Code, TimeRecorded)
VALUES 
 ('E001', '12/31/2014 23:59:59:999');
GO

SELECT TimeRecorded FROM dbo.TestTable;

The date and time we entered is a date related to 2014, but if you run the last SELECT statement, it will show a date for 2015; it will show the value as 2015-01-01 00:00:00.000.

How can this happen? It happens because of the "milliseconds" added to the value. What we have to remember is the accuracy of the datetime data type. The datetime type is rounded to increments of .000, .003, or .007 seconds, hence .999 is rounded off to .000, increasing the second by 1. This changes the entire datetime value inserted. That is why we see the 2014 date as 2015.

Considering this fact, if your application needs accuracy at the millisecond level, datetime2 should be used instead of datetime.
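
Here is a small sketch that shows the difference between the two types with the same value;

DECLARE @d1 datetime  = '2014-12-31 23:59:59.999';
DECLARE @d2 datetime2 = '2014-12-31 23:59:59.999';

-- datetime rounds .999 up, pushing the value to 2015-01-01 00:00:00.000
-- datetime2 keeps the millisecond value as entered
SELECT @d1 AS WithDateTime, @d2 AS WithDateTime2;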

One more thing to remember. If you write a query like below to get some records related to the 13th of June 2014 from a table;

SELECT Id
FROM dbo.Table
WHERE Date BETWEEN '06/13/2014 00:00:00:000' 
   AND '06/13/2014 23:59:59:999';


You might get records related to the 14th of June too. Again, the reason is the same. It can be overcome by changing the query as below;

SELECT Id
FROM dbo.Table
WHERE Date >=  '06/13/2014 00:00:00:000' 
   AND Date < '06/14/2014 00:00:00:000';

Friday, April 3, 2015

Cumulative updates for SQL Server 2012 and 2014

A few cumulative updates have been released for SQL Server 2012 and 2014. Here are the links for them:

Cumulative update #6 for SQL Server 2014
Here is the link for downloading: https://support.microsoft.com/en-us/hotfix/kbhotfix?kbnum=3031047&kbln=en-us
Here is the link for the list of bugs fixed: http://support.microsoft.com/en-us/kb/3031047

Cumulative update #5 for SQL Server 2012 SP2
Here is the link for downloading: https://support.microsoft.com/en-us/hotfix/kbhotfix?kbnum=3037255&kbln=en-us
Here is the link for the list of bugs fixed: http://support.microsoft.com/en-us/kb/3037255

Cumulative update #15 for SQL Server 2012 SP1
Here is the link for downloading: https://support.microsoft.com/en-us/hotfix/kbhotfix?kbnum=3038001&kbln=en-us
Here is the link for the list of bugs fixed: http://support.microsoft.com/en-us/kb/3038001

What are hotfixes, Cumulative Updates and Service Packs?

No matter how hard the software has been tested, issues can still be encountered. The same applies to Microsoft SQL Server too, and that is the reason we see numerous releases of the same version. These releases are published as a hotfix, a cumulative update, or a service pack, but what is the right time for applying them? Should we apply them as soon as they are published?

Let's try to understand each of these SQL Server updates;
  • Hotfixes: These are built and released for addressing urgent customer concerns. Most of the time, this is done based on an issue reported by a customer (or a few customers) and the fix is released immediately. Since this addresses a particular issue and the release is done in an urgent manner, not much testing is done, hence it is not recommended to apply a hotfix to a production environment as soon as it is released unless the same issue is being experienced.
  • Cumulative Updates: A cumulative update is a bundle that holds a periodic roll-up of hotfix releases. This goes through testing that is comparatively more thorough than the tests done for individual hotfixes, and the bundle is tested as a whole. In a way, this is more secure and advisable to apply rather than applying hotfixes individually. However, if possible, it is always better to wait for a service pack release, as explained below.
  • Service Packs: A service pack is a periodic release that addresses many issues which may (or may not) have been addressed by previous hotfixes and cumulative updates. This goes through thorough testing and it is safe to apply it to a production environment.

Thursday, April 2, 2015

Should I rollback my explicit transaction or will XACT_ABORT do it for me?

Here is another question I got this week. The question is all about rolling back transactions: whether we need to write code with ROLLBACK TRAN for rolling back, or whether SQL Server can automatically roll back the transaction if XACT_ABORT is on.

Many believe that once the transaction is handled using a TRY/CATCH block, rolling back is guaranteed when an error or unexpected issue is thrown. In most cases, this is true: when something goes wrong, execution jumps to the CATCH block and executes the ROLLBACK TRAN statement. But what if execution is unable to jump to the CATCH block? Can that happen? Yes, it is possible. Assume that the logic is implemented in a stored procedure and it is executed by an application. While it is being executed, if something happens at the application end, the application disconnects from the database. This stops the code from continuing and executing the ROLLBACK TRAN. Not only that, this results in the transaction remaining open in the database, keeping all the locks placed without releasing them.

This type of issue can be solved using the XACT_ABORT setting. All we have to do is add SET XACT_ABORT ON inside the stored procedure, because XACT_ABORT makes sure all transactions are rolled back and the batch is aborted when an error occurs. For more info, read: https://msdn.microsoft.com/en-us/library/ms188792.aspx

Does this mean that we do not need the TRY/CATCH block and error handling? No, we still need them because that is the way of capturing errors and handling them.

Here is an example. This creates a table and stored procedure for inserting records.

Code 1
USE tempdb;
GO

-- drop the table if it already exists
IF OBJECT_ID('dbo.Account', 'U') IS NOT NULL
 DROP TABLE dbo.Account;

CREATE TABLE dbo.Account
(   
 AccountId INT NOT NULL PRIMARY KEY, 
 Name    NVARCHAR (50) NOT NULL,
 Balance Money NOT NULL   
)
GO

CREATE PROCEDURE dbo.InsertAccount
AS
BEGIN

 BEGIN TRY

  BEGIN TRAN

  INSERT INTO dbo.Account(AccountId, Name , Balance) 
  VALUES(1, 'Account1', 10000)
  -- This throws an error
  INSERT INTO dbo.Account(AccountId, Name , Balance) 
  VALUES(1, 'Duplicate', 10000)
  INSERT INTO dbo.Account(AccountId, Name , Balance) 
  VALUES(2, 'Account2', 20000)
 
  COMMIT TRAN
 END TRY
 BEGIN CATCH
  SELECT ERROR_MESSAGE()
  ROLLBACK TRAN
 END CATCH
END
GO

Now let's open a new connection and execute the stored procedure.

Code 2
USE tempdb;
GO

EXEC dbo.InsertAccount;

SELECT * FROM dbo.Account;

Here is the result.


As expected, it throws an error. If you check for active transactions using the code below, you should NOT see the transaction we started, as it has been handled.

Code 3
SELECT 
       est.transaction_id as TransactionID,
       tas.name as [Transaction Name],
       DB_NAME(tds.database_id) as DatabaseName
FROM sys.dm_tran_active_transactions tas 
 INNER JOIN sys.dm_tran_database_transactions tds 
  ON tas.transaction_id = tds.transaction_id
 INNER JOIN sys.dm_tran_session_transactions est 
  ON est.transaction_id=tas.transaction_id
WHERE est.is_user_transaction = 1 
AND tas.transaction_state = 2 
AND tds.database_transaction_begin_time IS NOT NULL

Now let's make a small modification to the stored procedure and execute the code in a different way. Insert a WAITFOR statement between the first and second INSERT statements as below and alter the procedure.

Code 4
  INSERT INTO dbo.Account(AccountId, Name , Balance) 
  VALUES(1, 'Account1', 10000)

  WAITFOR DELAY '00:00:20';

  INSERT INTO dbo.Account(AccountId, Name , Balance) 
  VALUES(1, 'Duplicate', 10000)
  INSERT INTO dbo.Account(AccountId, Name , Balance) 
  VALUES(2, 'Account2', 20000)


Now open a new connection and execute Code 2 again. Open another connection immediately and execute Code 3 too. You should see the following record with Code 3.


Go back to Code 2 immediately and stop the execution. You need to make sure you do this within 20 seconds. Once you have stopped the code, go back to Code 3 and execute it again. You should see that the transaction still exists.

The same can happen with an application too. We stopped the execution, but the transaction still exists in the database, locking some resources. Let's make another modification to the stored procedure and test the same. Add the following code to the procedure and recreate it.

Code 5
ALTER PROCEDURE dbo.InsertAccount
AS
BEGIN

 SET XACT_ABORT ON
 BEGIN TRY

  BEGIN TRAN


Now do the same thing: open a new connection, execute Code 2, and with a different connection, execute Code 3; go back to Code 2, stop it and execute Code 3 again. You should NOT see the transaction now because it has been taken care of by XACT_ABORT.



Monday, March 30, 2015

My Report takes long time to load, where should I check initially?

If you are the DBA or DBE and reports have been deployed in a production environment, questions like the above are very common. Usually the quotes are something like "The report has been working fine, suddenly it takes a long time for the same data load" or "My report usually takes 2-3 minutes, now it takes more than 5 minutes". Now, being a DBA or DBE, what should we check? CPU usage? Memory consumption? Or the query related to the report?

True, we have to check all the listed areas, but most of the time, if it is an unexpected and unusual slowness, I think we should check for blocking too, maybe as the first thing just after CPU and memory usage. Why should we check it?

Generally, if something is being updated, SQL Server places exclusive locks on objects. It can be on a row, a page, an extent or the entire table. If the dataset related to the report is being locked by an update process, then the report query has to wait unless a timeout period has been set (in which case it throws an error and stops waiting) or a different isolation level has been set. Therefore it is always better to check whether something has blocked our code before starting to analyze the code itself.
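
For example, the waiting period can be limited with LOCK_TIMEOUT; the 5-second value below is just an illustration. Once the timeout is exceeded, the query throws an error instead of waiting indefinitely.

-- wait a maximum of 5 seconds for locks to be released
SET LOCK_TIMEOUT 5000;

SELECT * FROM Production.Product;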

How do we check it? The simplest way is Activity Monitor. It indicates blocking, and the query that is blocking can be viewed too. If you need more info, then you can try a set of TSQL statements.

Let's see how we can quickly check something like that. Open a connection and execute the following code.

USE AdventureWorks2014;
GO

BEGIN TRAN
 UPDATE Production.Product
  SET Color = ''
 WHERE ProductID = 1;

Now open a different connection and execute the code below. It tries to access the same table which is being modified by the first connection, hence you should not see its result until the transaction of the first connection is either committed or rolled back.

USE AdventureWorks2014;
GO

SELECT * FROM Production.Product;
GO

Now let's open Activity Monitor and see. It can be opened by right-clicking the server instance and selecting Activity Monitor. Here is the screen.


As you can see, the second query is being run by session 57, and this indicates that it is being blocked by session 56. If you see something like this, what it says is that your report is blocked by some other process. Activity Monitor allows us to see the code related to the other process, in this case the code related to 56. All we have to do is right-click on 56 and click on Details.


It allows us to kill the process, but we should not do it unless we are 100% sure that it does not disturb anyone or damage the dataset. The good thing is, this shows the exact reason for the slowness and we can act on it in an appropriate manner. If you prefer TSQL, here is the way of seeing the blocked code;

SELECT r.session_id, r.status, r.blocking_session_id,
 r.command, r.wait_type, r.wait_time, t.text BlockedCode
FROM sys.dm_exec_requests  AS r
 CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id  > 1;
GO


And here is the way of seeing the blocking code.

SELECT t.text BlockingCode
FROM sys.dm_exec_connections  AS c
 CROSS APPLY sys.dm_exec_sql_text(c.most_recent_sql_handle) AS t
WHERE c.session_id  = (SELECT DISTINCT blocking_session_id
   FROM sys.dm_exec_requests  AS r
   WHERE NOT EXISTS (SELECT 1
    FROM sys.dm_exec_requests r2
    WHERE r.blocking_session_id  = r2.session_id
     AND r2.blocking_session_id  > 0)
    AND r.blocking_session_id  > 0);


If you need to kill a process using TSQL, the KILL command along with the session id can be used.
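
For example (56 here is just the blocking session id from the screen above);

-- kill the blocking session; use with caution
KILL 56;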



Sunday, March 29, 2015

SQL Server Brain Basher of the Week #005

Can I use sp_recompile with tables?

What would be your answer? Yes or no?

The answer should be Yes, but it does not mean that we can recompile tables using this procedure.

Generally, we use sp_recompile for invalidating existing plans in the cache for stored procedures, triggers or user-defined functions, and the plans will be regenerated with the next run. This requires one parameter: the name of the procedure, user-defined function or trigger. Remember, this does not recompile procedures, triggers, and user-defined functions; it just marks existing plans as invalid. A new plan will be created when we use the object again.

It also allows us to pass a table name or view name as the parameter value. If the parameter value is either a table name or a view name, it invalidates all existing plans of procedures, user-defined functions and triggers that reference the table or view. Similarly, new plans will be generated at the next run of the referencing objects.

Let's test this. Execute the first statement a few times and then execute the next statement. The first statement executes a stored procedure in the AdventureWorks2014 database and the second statement checks for compiled plans in the cache (a sketch of the stored procedure used here is shown below).
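
Note that the test assumes a stored procedure named dbo.GetFrieghts already exists in AdventureWorks2014; its definition is not shown in the post. Here is a minimal sketch of how such a procedure might look, assuming it summarizes freight values from Sales.SalesOrderHeader for a given year;

-- hypothetical definition, only for reproducing the test
CREATE PROCEDURE dbo.GetFrieghts @Year int
AS
BEGIN
 SELECT YEAR(OrderDate) AS OrderYear, SUM(Freight) AS TotalFreight
 FROM Sales.SalesOrderHeader
 WHERE YEAR(OrderDate) = @Year
 GROUP BY YEAR(OrderDate);
END
GO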

-- Cleaning buffer
DBCC FREEPROCCACHE
DBCC DROPCLEANBUFFERS
GO
   
USE AdventureWorks2014;
GO

-- First statement
-- Execute this three times
EXEC dbo.GetFrieghts 2011

-- Second statement
-- We should see 3 counts on compiled plan
SELECT p.usecounts, p.cacheobjtype, p.objtype, t.text
FROM sys.dm_exec_cached_plans p
CROSS APPLY sys.dm_exec_sql_text(p.plan_handle) t
WHERE
 TEXT LIKE '%GetFrieghts%';


Now, let's call sp_recompile for the table Sales.SalesOrderHeader, which is referenced by the dbo.GetFrieghts procedure.

EXEC sp_recompile N'Sales.SalesOrderHeader';

Once done, execute the stored procedure once and see whether the engine uses the same plan or a new plan.

-- Execute again - separately
EXEC dbo.GetFrieghts 2011

-- Check compiled plans
SELECT p.usecounts, p.cacheobjtype, p.objtype, t.text
FROM sys.dm_exec_cached_plans p
CROSS APPLY sys.dm_exec_sql_text(p.plan_handle) t
WHERE
 TEXT LIKE '%GetFrieghts%';


Surprised? Has the plan been invalidated and a new plan created? It looks like no; the old plan has been used again. Though the result looks like that, the reality is different. It has recompiled the stored procedure, but not fully: only the statements that reference the table. Let's test it.

Execute the first code set again. Once executed, we have a compiled plan in the cache and it has been hit 3 times. Next, start the Profiler and add the SP:CacheHit and SP:Recompile events.


Run the Profiler. Execute the second code set that calls sp_recompile with the table, and execute the stored procedure again. Once executed, check the Profiler. This is what you should see.


As you can see, the two statements that reference the table have been recompiled. Since the entire stored procedure has not been recompiled, the use count of the old compiled plan keeps increasing, but remember, a few parts of the procedure have been recompiled.

Thursday, March 26, 2015

My Application needs a new sequence number to the table without inserting the record

This is something I discussed with one of my colleagues. It was all about obtaining the next sequence number from an Identity column without inserting the record, using it for other entities inside the application, and later submitting all of it to the database for insertion. However, if this is the requirement, it is not possible with Identity because the next value cannot be obtained from an Identity column without inserting the record. This issue is often seen with Entity Framework enabled applications.

What would be the best way of handling this? The traditional way is maintaining another table that holds key values for tables and using it for issuing the next available values. It needs additional coding and extra effort, and it introduces unexpected locking issues too. The modern and best way of handling this is using the SEQUENCE object, which was introduced with SQL Server 2012.

The SEQUENCE is a user-defined schema-bound object that generates sequence numbers based on the settings added. Multiple properties can be set, such as the starting number, the increment and the order. Unlike Identity, a defined SEQUENCE object can be used with any table because there is no link between a SEQUENCE object and a table. Although one SEQUENCE object can be used with all tables, it is advisable to create multiple SEQUENCE objects, one per table, because it makes maintenance easy and minimizes gaps between the numbers added to tables.

Here is the way of creating a SEQUENCE object.

CREATE SEQUENCE Sales.OrderSequence
    START WITH 1
    INCREMENT BY 1;
GO

Once created, the application can request the next number, which can be used as the key of the table, by calling the command below;

SELECT NEXT VALUE FOR Sales.OrderSequence;

For more info on SEQUENCE object, refer: https://msdn.microsoft.com/en-us/library/ff878091.aspx
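
A sequence can also be attached to a table so that values are generated automatically during insertion; here is a small sketch using a hypothetical Sales.OrderHeader table.

CREATE TABLE Sales.OrderHeader
(
 OrderId int NOT NULL
  CONSTRAINT PK_OrderHeader PRIMARY KEY
  CONSTRAINT DF_OrderHeader_OrderId DEFAULT (NEXT VALUE FOR Sales.OrderSequence),
 OrderDate date NOT NULL
);

-- OrderId is populated from the sequence automatically
INSERT INTO Sales.OrderHeader (OrderDate) VALUES ('2015-03-26');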



Tuesday, March 24, 2015

I declared a nvarchar(max) variable but it holds only 4000 characters

Although getting questions from SQL Server enthusiasts makes me busy, this brushes up my knowledge significantly. This is one of the questions I got today.

If a TSQL statement has to be dynamically built, the most common approach we use for forming the statement is declaring a variable with either the varchar(max) or nvarchar(max) data type and appending the necessary string values to form the statement. If you are a person who always maintains strict discipline on coding format, then you will probably form the statement properly, but if you are a bit lazy and want to form the statement without considering the format much, you may face the issue mentioned in the title.

Let me explain how this can happen. Have a look at the code below.

DECLARE @Sql nvarchar(max)

SET @Sql = N'This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, 
   This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, 
   This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, 
   This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, 
   This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, 
   This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, 
   This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, 
   This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, 
   This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, 
   This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, 
   This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, 
   This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, 
   This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, 
   This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, 
   This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, 
   This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, 
   This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters,  This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters'
   
SELECT LEN(@Sql);
-- Result - 5111

If you execute the code, you get the result as 5111. It shows all the characters added to the @Sql variable. Let's inject an additional character between these lines (that is how we usually form a dynamic SQL statement).

DECLARE @Sql nvarchar(max)

SET @Sql = N'This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, 
   This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, 
   This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, 
   This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, 
   This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, 
   This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, 
   This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, 
   This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, '
   
   + N' ' +

   N'This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, 
   This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, 
   This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, 
   This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, 
   This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, 
   This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, 
   This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, 
   This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, 
   This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters,  This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters'
   
SELECT LEN(@Sql);
-- Result - 4000

If you execute the code now, the result is not as we expected; it shows the count as 4000 instead of 5112. Obviously, the reason is truncation, but how it happened is the question.

The reason is completely based on the logic used with string concatenation. With our second code, we concatenate three Unicode literals. When literals or expressions are concatenated, if the size is less than 4000 characters (8000 for non-Unicode), the value is treated as nvarchar(number of characters). If the value contains more than 4000 characters, then it is treated as nvarchar(max). Considering this logic, all three of our literals are treated as nvarchar(n), hence the concatenation will be something like below;

@SQL = nvarchar(n) + nvarchar(n) + nvarchar(n)

Since no literal has been treated as nvarchar(max), the result is limited to 4000 characters, as a result of truncation.

How do we overcome this situation? There are two easy ways of writing this properly. One is to concatenate the strings using separate SET statements; this makes sure that each piece is appended to a variable that is already nvarchar(max), hence the result is nvarchar(max). The second way is to convert the literals explicitly to nvarchar(max) before concatenating. Here is the sample code.

DECLARE @Sql nvarchar(max)

SET @Sql = N'This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, 
   This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, 
   This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, 
   This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, 
   This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, 
   This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, 
   This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, 
   This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, '
   
SET @Sql += N' ' 

SET @Sql += N'This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, 
   This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, 
   This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, 
   This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, 
   This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, 
   This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, 
   This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, 
   This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, 
   This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters,  This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters'
   
SELECT LEN(@Sql);
-- Result - 5107
GO

DECLARE @Sql nvarchar(max)

SET @Sql = CONVERT(nvarchar(max), N'This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, 
   This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, 
   This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, 
   This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, 
   This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, 
   This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, 
   This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, 
   This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, ')
   
   + CONVERT(nvarchar(max), N' ') +

   CONVERT(nvarchar(max), N'This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, 
   This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, 
   This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, 
   This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, 
   This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, 
   This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, 
   This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, 
   This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, 
   This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters,  This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters, This value contains 5111 characters')
   
SELECT LEN(@Sql);
-- Result - 5107

Now you know the reason for this, so do not make this mistake when forming statements dynamically.
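To see the same behavior without typing long literals, here is a minimal sketch using REPLICATE (which, like a literal, returns a non-max type unless its input is already a max type):

DECLARE @Truncated nvarchar(max), @Full nvarchar(max);

-- Both operands are non-max types, so the concatenated result is limited
-- to 4,000 characters before it is assigned to the max variable.
SET @Truncated = REPLICATE(N'a', 3000) + REPLICATE(N'b', 3000);
SELECT LEN(@Truncated); -- 4000

-- Converting one operand to nvarchar(max) promotes the whole expression
-- to max, so nothing is lost.
SET @Full = CONVERT(nvarchar(max), REPLICATE(N'a', 3000)) + REPLICATE(N'b', 3000);
SELECT LEN(@Full); -- 6000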

Sunday, March 22, 2015

SQL Server Brain Basher of the Week #004

Here is another tricky question, and it is on indexing.

Can we create an index without specifying the column(s) to be used?

Obviously, we would expect the answer to be No, because an index requires one or more columns to organize the data on; the index key. But this is possible with SQL Server 2014 :).

How is this possible? Earlier we had two types of indexes: Clustered and Non-Clustered. With SQL Server 2012, a new classification was introduced: Rowstore indexes and Columnstore indexes. A rowstore index represents the indexes we have always created, organized by rows (or records), while a columnstore index is a new way of creating indexes on columns. SQL Server 2012 supports only non-clustered columnstore indexes, which require column(s) to be specified.

Though SQL Server 2012 does not support creating clustered columnstore indexes, SQL Server 2014 does. Once a table has a clustered columnstore index, no other indexes can be created on that table. A clustered columnstore index does NOT require a column (or columns) to be specified when creating it, because it always includes all columns of the table; hence we can create an index without specifying columns. So, the answer to the above question is Yes.

Here is an example of creating a Clustered Columnstore Index.

CREATE CLUSTERED COLUMNSTORE INDEX IX_Sales
 ON Sales;
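
To try it end to end, here is a minimal sketch; the dbo.SalesTest table below is hypothetical, and any table without existing indexes would do in SQL Server 2014:

-- Hypothetical heap table; in SQL Server 2014 a table with a clustered
-- columnstore index cannot have any other indexes (including a primary key).
CREATE TABLE dbo.SalesTest (SalesId int, Amount money);
GO

-- No columns are specified; the index always covers all columns of the table.
CREATE CLUSTERED COLUMNSTORE INDEX IX_SalesTest
 ON dbo.SalesTest;
GO

-- This would fail in SQL Server 2014: no additional indexes are allowed
-- on a table that has a clustered columnstore index.
-- CREATE NONCLUSTERED INDEX IX_SalesTest_Amount ON dbo.SalesTest (Amount);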



Saturday, March 21, 2015

How to get system/error messages with severity levels lower than 19 logged to the Application Log

Have you noticed that SQL Server does not send all messages to the Application Log? With the default setting, messages with severity levels between 19 and 25 are generally written automatically, but not the others. The best way to check which messages get logged is the sys.messages catalog view. It contains all messages, and the is_event_logged column indicates whether a message will be logged or not.

Now, what if you need to get a message logged that is not logged automatically? A good example is the message related to deadlocks (error 1205). If you check this message, you will see that its is_event_logged is not set to 1.

SELECT * FROM sys.messages 
WHERE language_id = 1033 AND severity = 13;


If required, logging can be enabled for a message using the sp_altermessage stored procedure. Here is the way of doing it.

EXECUTE sp_altermessage 1205, 'WITH_LOG', 'True';

Now it is enabled. If you run the first query again, the result will look like this:
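You can also confirm it with a query; the is_event_logged column for message 1205 should now show 1:

SELECT message_id, severity, is_event_logged 
FROM sys.messages 
WHERE message_id = 1205 AND language_id = 1033;
-- is_event_logged should now be 1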


Let's test this. The code below generates a deadlock.

USE master;
GO

-- Create a sample database
IF EXISTS (SELECT * FROM sys.databases WHERE name = 'DeadlockTestDB')
 DROP DATABASE DeadlockTestDB;
GO

CREATE DATABASE DeadlockTestDB;
GO


-- Add two tables and insert two records
USE DeadlockTestDB;
GO

CREATE TABLE Table01 (Id int PRIMARY KEY, Name varchar(200));
GO
INSERT INTO Table01 VALUES (1, 'Table01 value');
GO

CREATE TABLE Table02 (Id int PRIMARY KEY, Name varchar(200));
GO
INSERT INTO Table02 VALUES (100, 'Table02 value');
GO


-- Create two stored procedures.
-- This one updates a record in Table01, waits, 
-- and updates Table02.
CREATE PROC UpdateTable01andTable02
AS
BEGIN

 BEGIN TRAN

  UPDATE Table01
   SET Name = 'Updated'
  WHERE Id = 1;

  WAITFOR DELAY '00:00:10';

  UPDATE Table02
   SET Name = 'Updated'
  WHERE Id = 100;

 COMMIT TRAN
END;
GO

-- This one updates a record in Table02 and Table01
CREATE PROC UpdateTable02andTable01
AS
BEGIN

 BEGIN TRAN

  UPDATE Table02
   SET Name = 'Updated - 2'
  WHERE Id = 100;


  UPDATE Table01
   SET Name = 'Updated -2'
  WHERE Id = 1;

 COMMIT TRAN
END;
GO

/*

-- Open a new connection and run this code
USE DeadlockTestDB;
GO

EXEC UpdateTable01andTable02;
GO

-- Open another connection and run this.
-- Make sure this runs just after you start the above execution
USE DeadlockTestDB;
GO

EXEC UpdateTable02andTable01;
GO
*/

Once you have executed the code and received the deadlock error, open the Event Viewer. You should see something like this:
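
Once you have verified the entry, you may want to revert the message setting and drop the sample database; something like this should do it:

-- Revert the deadlock message to its default (not logged)
EXECUTE sp_altermessage 1205, 'WITH_LOG', 'False';
GO

-- Remove the sample database
USE master;
GO
DROP DATABASE DeadlockTestDB;
GO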


Thursday, March 19, 2015

I changed the Recovery Model before a BULK operation, is it a good practice?

I have already written a post on bulk operations, describing the benefits we get by changing the recovery model before performing bulk operations. I got a couple of offline questions after my SQL Server Stored Procedure session at SS SLUG, and one of them was this. I had been talking about how the plan cache gets flushed, and I mentioned that changing the recovery model causes the plan cache to be flushed. So the question is, should we change the recovery model for our bulk operations?

First of all, let's see whether changing the Recovery Model really causes cached plans to be removed. Here is the code I wrote for testing.

I am executing two SELECT statements, one against AdventureWorks2014 and another against AdventureWorksDW2014. The last statement checks the plan cache.

-- EXECUTE STATEMENTS SEPARATELY

-- Cleaning the plan cache and buffers
DBCC FREEPROCCACHE
DBCC DROPCLEANBUFFERS
GO
  
USE AdventureWorks2014;
GO

-- Query Sales.SalesOrderDetail table
SELECT * FROM Sales.SalesOrderDetail
WHERE SalesOrderDetailID = 1;
  

USE AdventureWorksDW2014;
GO

-- Query dbo.DimProduct
SELECT * FROM dbo.DimProduct
WHERE ProductKey = 1;

-- Checking cached plans
SELECT p.usecounts, p.cacheobjtype, p.objtype, t.text
FROM sys.dm_exec_cached_plans p
CROSS APPLY sys.dm_exec_sql_text(p.plan_handle) t
WHERE 
 TEXT LIKE '%SELECT * FROM Sales.SalesOrderDetail%'
 OR TEXT LIKE '%SELECT * FROM dbo.DimProduct%';

As expected, we see two cached plans in the plan cache.



This code changes the Recovery Model of AdventureWorks2014. The second statement shows the plans in the cache after the change.

-- Change the Recovery Model
ALTER DATABASE AdventureWorks2014 SET RECOVERY Bulk_Logged;
GO

-- And see plans
SELECT p.usecounts, p.cacheobjtype, p.objtype, t.text
FROM sys.dm_exec_cached_plans p
CROSS APPLY sys.dm_exec_sql_text(p.plan_handle) t
WHERE 
 TEXT LIKE '%SELECT * FROM Sales.SalesOrderDetail%'
 OR TEXT LIKE '%SELECT * FROM dbo.DimProduct%';



As you can see, the plan related to AdventureWorks2014 has been removed. This proves that changing the Recovery Model flushes the cached plans of that database.

Now the question is, how advisable is it to do this in a production environment for bulk operations? My recommendation is this: if it is a large bulk operation and the database is heavily accessed by users, perform the bulk operation during off-peak hours. We still lose the cached plans, but this at least minimizes the damage. However, if the database is continuously being used, it is advisable to perform the bulk operation without changing the Recovery Model.
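
As a rough sketch of the off-peak approach (the staging table, file path, and backup path below are hypothetical), the sequence could look like this:

-- Switch to BULK_LOGGED just before the bulk operation
ALTER DATABASE AdventureWorks2014 SET RECOVERY BULK_LOGGED;
GO

-- Hypothetical bulk load; TABLOCK helps the operation qualify for minimal logging
BULK INSERT dbo.SalesStaging
FROM 'D:\Loads\SalesStaging.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', TABLOCK);
GO

-- Switch back to FULL and take a log backup so point-in-time recovery
-- is available again from this point onward
ALTER DATABASE AdventureWorks2014 SET RECOVERY FULL;
GO
BACKUP LOG AdventureWorks2014 TO DISK = 'D:\Backups\AdventureWorks2014_log.trn';
GO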