
Wednesday, May 31, 2017

SQL Server AUTOGROW_ALL_FILES Option

A few days ago, I wrote an article on SQL Server's default behavior when it expands files with AutoGrowth enabled: when there are multiple files in the file group and all files are fully filled, SQL Server expands only one file at a time, making the data distribution inconsistent. Read more on it: SQL Server does not always write data to files, proportional to the amount of free space in each file

SQL Server 2016 gives us a facility to change this behavior. By default, a file group is set to AUTOGROW_SINGLE_FILE, but if we set it to AUTOGROW_ALL_FILES, it changes the default behavior and grows all files in the file group when required.

Let me take the same code used in my previous post and show you.

Let's create a database with an additional file group that contains two files, set the new file group as the default, and create a table on it. The below code does all of that, inserts 900 records into the table, and then checks the space used;


USE master;
GO

-- Drop the database if it already exists (IF EXISTS is available from SQL Server 2016)
DROP DATABASE IF EXISTS TestDatabase;
GO

-- Two additional data files are added under FG1
CREATE DATABASE [TestDatabase]
 ON  PRIMARY 
( NAME = N'TestDatabase'
 , FILENAME = N'D:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\TestDatabase.mdf' 
 , SIZE = 4MB , MAXSIZE = UNLIMITED, FILEGROWTH = 2MB), 
FILEGROUP [FG1]  DEFAULT
( NAME = N'TestDatabase_Data1'
 , FILENAME = N'D:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\TestDatabase_Data1.ndf' 
 , SIZE = 4MB , MAXSIZE = UNLIMITED, FILEGROWTH = 2MB),
( NAME = N'TestDatabase_Data2'
 , FILENAME = N'D:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\TestDatabase_Data2.ndf' 
 , SIZE = 4MB , MAXSIZE = UNLIMITED, FILEGROWTH = 2MB)
 LOG ON 
( NAME = N'TestDatabase_log'
 , FILENAME = N'D:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\TestDatabase_log.ldf' 
 , SIZE = 10MB , FILEGROWTH = 5MB )
GO


USE TestDatabase;
GO

-- Create a table
-- This will be created on FG1 as it is the default
CREATE TABLE dbo.TestTable
(
 Id int identity(1,1) Primary Key
 , Name char(8000) not null
);
GO

-- Inserting 900 records
INSERT INTO dbo.TestTable
 (Name) VALUES (Replicate('a', 8000));
GO 900

SELECT 
 file_id, name, physical_name
 , Convert(Decimal(12,2),Round(size/128.000,2)) AS [FILE_SIZE_MB]
 , Convert(Decimal(12,2),Round(FileProperty(name,'SpaceUsed')/128.000,2)) AS [SPACE_USED_MB]
 , Convert(Decimal(12,2),Round((size-FileProperty(name,'SpaceUsed'))/128.000,2)) AS [FREE_SPACE_MB]
 , Convert(Decimal(12,2),Round((size-FileProperty(name,'SpaceUsed'))/128.000/(size/128.000),2)) AS [FREE_PCT_MB]
FROM sys.database_files;


As you see, both files are almost full. Let's change the default setting of the file group and insert some additional records.

ALTER DATABASE TestDatabase MODIFY FILEGROUP FG1 AUTOGROW_ALL_FILES;

Done. Now let's see whether only one file or both files have grown.

SELECT 
 file_id, name, physical_name
 , Convert(Decimal(12,2),Round(size/128.000,2)) AS [FILE_SIZE_MB]
 , Convert(Decimal(12,2),Round(FileProperty(name,'SpaceUsed')/128.000,2)) AS [SPACE_USED_MB]
 , Convert(Decimal(12,2),Round((size-FileProperty(name,'SpaceUsed'))/128.000,2)) AS [FREE_SPACE_MB]
 , Convert(Decimal(12,2),Round((size-FileProperty(name,'SpaceUsed'))/128.000/(size/128.000),2)) AS [FREE_PCT_MB]
FROM sys.database_files;


Both files have grown. This makes sure that data is distributed properly, and I believe that this should be the standard setting for our databases.

Monday, May 22, 2017

SQL Server does not always write data to files, proportional to the amount of free space in each file

We add multiple data files to file groups to get data distributed (it is always better to distribute them among multiple physical volumes) for improving performance. But does it guarantee that data is always distributed among the files?

This is something every DBA should know. There are certain situations in which SQL Server does not write data to all files allocated to the file group, even though there are multiple files in the same file group. Generally, SQL Server uses a round-robin proportional fill algorithm to allocate extents. This means that when there are two data files allocated to one file group and there is a table created on that file group, allocations are made in each data file proportional to the amount of free space in each file. For example, if the first file has 60MB free space and the second file has 40MB free space, when 10MB is written to the table, 6MB is written to the first file and 4MB is written to the second file. This makes sure that data is distributed properly for improving performance.

This behavior changes when data files are expanded automatically. Files are expanded when their space is exhausted and AutoGrowth is enabled. This expansion happens one file at a time, again in a round-robin way. Continuing with the above example, if the space of both files is exhausted, SQL Server expands the first file based on the AutoGrowth setting, without expanding the second file. When the expanded first file is exhausted again, it expands the second file without expanding the first file. This makes data distribution uneven: while only the first file has grown, data is written only to the first file, meaning data is not getting striped across all files. This is where you see the issue.

See this code. It creates a database with three data files; one file under the Primary File Group and two files under the FG1 File Group. Then it creates a table on FG1 and inserts a set of records.

USE master;
GO

-- Drop the database if it already exists (IF EXISTS is available from SQL Server 2016)
DROP DATABASE IF EXISTS TestDatabase;
GO

-- Two additional data files are added under FG1
CREATE DATABASE [TestDatabase]
 ON  PRIMARY 
( NAME = N'TestDatabase'
 , FILENAME = N'D:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\TestDatabase.mdf' 
 , SIZE = 4MB , MAXSIZE = UNLIMITED, FILEGROWTH = 2MB), 
FILEGROUP [FG1]  DEFAULT
( NAME = N'TestDatabase_Data1'
 , FILENAME = N'D:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\TestDatabase_Data1.ndf' 
 , SIZE = 4MB , MAXSIZE = UNLIMITED, FILEGROWTH = 2MB),
( NAME = N'TestDatabase_Data2'
 , FILENAME = N'D:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\TestDatabase_Data2.ndf' 
 , SIZE = 4MB , MAXSIZE = UNLIMITED, FILEGROWTH = 2MB)
 LOG ON 
( NAME = N'TestDatabase_log'
 , FILENAME = N'D:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\TestDatabase_log.ldf' 
 , SIZE = 10MB , FILEGROWTH = 5MB )
GO


USE TestDatabase;
GO

-- Create a table
-- This will be created on FG1 as it is the default
CREATE TABLE dbo.TestTable
(
 Id int identity(1,1) Primary Key
 , Name char(8000) not null
);
GO

-- Inserting 900 records
INSERT INTO dbo.TestTable
 (Name) VALUES (Replicate('a', 8000));
GO 900

SELECT 
 file_id, name, physical_name
 , Convert(Decimal(12,2),Round(size/128.000,2)) AS [FILE_SIZE_MB]
 , Convert(Decimal(12,2),Round(FileProperty(name,'SpaceUsed')/128.000,2)) AS [SPACE_USED_MB]
 , Convert(Decimal(12,2),Round((size-FileProperty(name,'SpaceUsed'))/128.000,2)) AS [FREE_SPACE_MB]
 , Convert(Decimal(12,2),Round((size-FileProperty(name,'SpaceUsed'))/128.000/(size/128.000),2)) AS [FREE_PCT_MB]
FROM sys.database_files;

See the result when we check the file sizes. Focus on TestDatabase_Data1 and TestDatabase_Data2 files. They are exhausted.


Since we have enabled AutoGrowth, files will be expanded if we enter more data. Let's enter some more data and see whether both files are getting expanded.

-- Inserting 100 more records
INSERT INTO dbo.TestTable
 (Name) VALUES (REPLICATE('a', 8000));
GO 100

SELECT 
 file_id, name, physical_name
 , Convert(Decimal(12,2),Round(size/128.000,2)) AS [FILE_SIZE_MB]
 , Convert(Decimal(12,2),Round(FileProperty(name,'SpaceUsed')/128.000,2)) AS [SPACE_USED_MB]
 , Convert(Decimal(12,2),Round((size-FileProperty(name,'SpaceUsed'))/128.000,2)) AS [FREE_SPACE_MB]
 , Convert(Decimal(12,2),Round((size-FileProperty(name,'SpaceUsed'))/128.000/(size/128.000),2)) AS [FREE_PCT_MB]
FROM sys.database_files;


See, only the first file has been expanded.


How to avoid this behavior?

This can be overcome either by expanding the files manually or by enabling Trace Flag 1117. When expanding files manually, it is better to disable AutoGrowth to stop automatic expansion.
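A minimal sketch of both options is below; the new file sizes are illustrative, and note that Trace Flag 1117 applies server-wide (with SQL Server 2016, the AUTOGROW_ALL_FILES file group option described above replaces it).

-- Option 1: enable Trace Flag 1117 so that all files in a file group grow together
DBCC TRACEON (1117, -1);

-- Option 2: expand the files manually, keeping them equally sized
ALTER DATABASE TestDatabase MODIFY FILE (NAME = N'TestDatabase_Data1', SIZE = 20MB);
ALTER DATABASE TestDatabase MODIFY FILE (NAME = N'TestDatabase_Data2', SIZE = 20MB);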

Thursday, April 13, 2017

SQL Server - Checking Index Fragmentation - Understanding LIMITED, SAMPLED and DETAILED

Indexes get fragmented based on the operations we perform against records and the modifications we make against index keys. We use a Dynamic Management Function called sys.dm_db_index_physical_stats for checking both Internal Fragmentation and External Fragmentation. Today, I had to check one of my client's data tables for fragmentation, and it is a very large table. Since this was a quick look, I decided to use SAMPLED mode for checking the fragmentation instead of my usual mode, DETAILED. Once the check was done, we had a short conversation on the mode selected; the difference between the modes and why we should use them. This conversation resulted in this post.

If you need to know Internal and External Fragmentation with a sample code, please see my post: Understanding Index Fragmentation – SS SLUG Aug 2013 – Demo I.

Here are the differences between three modes;

Mode Details
LIMITED
  • Fastest way of scanning the index for fragmentation.
  • For B-Tree indexes, only the Parent Level Pages are scanned.
  • For Heaps, associated PFS and IAM pages are checked and data pages are scanned.
  • This cannot be used for checking Internal Fragmentation.
  • This still shows External Fragmentation because it uses Pointers to the Leaf Level in Parent Pages for calculating the External Fragmentation.
SAMPLED
  • Slower than LIMITED as it scans leaf pages too.
  • It uses a 1% sample of all pages for scanning, hence the fragmentation values are approximate.
  • If the number of pages is less than 10,000, then DETAILED mode is used instead of SAMPLED mode.
DETAILED
  • Slowest mode as it scans all pages of the index.
  • It reads every page, hence the fragmentation values are accurate.
  • It can be used for checking both Internal and External Fragmentation.


The reason for me to use SAMPLED mode here is the number of records the table has. Since it takes a long time to scan all pages, I decided to use SAMPLED mode because it could still help me determine whether the index is fragmented or not.

This shows how the fragmentation is shown with all three modes;
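If you want to try all three modes yourself, here is a sketch; AdventureWorks2014 and Sales.SalesOrderDetail are only illustrative names, so replace them with your own database and table.

USE AdventureWorks2014;
GO

-- LIMITED: fast, but avg_page_space_used_in_percent (Internal Fragmentation) returns NULL
SELECT index_id, index_type_desc
 , avg_fragmentation_in_percent -- External Fragmentation
 , avg_page_space_used_in_percent -- Internal Fragmentation
FROM sys.dm_db_index_physical_stats
 (DB_ID(), OBJECT_ID(N'Sales.SalesOrderDetail'), NULL, NULL, 'LIMITED');

-- SAMPLED: approximate values based on a sample of pages
SELECT index_id, index_type_desc
 , avg_fragmentation_in_percent, avg_page_space_used_in_percent
FROM sys.dm_db_index_physical_stats
 (DB_ID(), OBJECT_ID(N'Sales.SalesOrderDetail'), NULL, NULL, 'SAMPLED');

-- DETAILED: accurate values, every page is read
SELECT index_id, index_type_desc
 , avg_fragmentation_in_percent, avg_page_space_used_in_percent
FROM sys.dm_db_index_physical_stats
 (DB_ID(), OBJECT_ID(N'Sales.SalesOrderDetail'), NULL, NULL, 'DETAILED');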



Wednesday, April 12, 2017

How to find the related file and the page numbers of records in SQL Server data table

Sometimes, for certain administration tasks, we need to know which file holds our records and the related page numbers. How can we easily find this information?

Generally, we use DBCC IND and DBCC PAGE, but there are two more great functions that can be used for finding the same information.

The first function is %%PHYSLOC%%. This returns the RID (Record Identifier) as a hexadecimal value. The RID consists of the file number, page number and record number. The second function is a table-valued function called fn_PhysLocCracker. It accepts the RID returned from %%PHYSLOC%% and returns a three-columned table with the file number, page number and record number. Combining these two, we can get the information we want. Here is an example (using the AdventureWorks Person.Person table);

SELECT p.BusinessEntityID, p.Title
    , p.FirstName, p.LastName
    , ph.* -- file_id, page_id and slot_id for each row
FROM [Person].[Person] p
CROSS APPLY sys.fn_PhysLocCracker(%%PHYSLOC%%) ph;



Saturday, March 18, 2017

SQL Server Backup Expiry Date Setting

I would say this is the most confusing property in Microsoft SQL Server. Most misunderstand it and do not know how it works. Although there are many posts on this, only a few explain it with an example, and some articles/posts make it more confusing, hence I am making this post the way I understand it;

MSDN explains it well, but in simple terms, what it says is: if you set an expiry date for your backup set (if you need to understand what a backup media set, backup media family, backup device or backup set is, read this), or if you set Retention days, the backup set can be overwritten only when the backup set has expired or after the number of days mentioned with Retention days (this is shown as After x days in the GUI).

Simply, if I take a backup today (19-Mar-2017) and set the expiry date as 25-Mar-2017, I will not be able to overwrite the backup set until 25-Mar-2017. Remember, the backup can be restored on any day, regardless of the expiry date set.

Here is an example;

Let's take a backup of the AdventureWorks2014 database. See both ways; using the GUI and TSQL. Note that I have mentioned a name for the Media too. Without this, the expiry setting will not work as we expect.




BACKUP DATABASE [AdventureWorks2014] 
TO  DISK = N'E:\DatabaseBackups\AdventureWorks2014.bak' 
WITH  EXPIREDATE = N'03/25/2017 00:00:00', FORMAT, INIT
,  MEDIANAME = N'Media01',  NAME = N'AdventureWorks2014-Full Database Backup'
, SKIP, NOREWIND, NOUNLOAD,  STATS = 10
GO

Now if you try to take another backup to the same media by overwriting the existing backup sets;


BACKUP DATABASE [AdventureWorks2014] 
TO  DISK = N'E:\DatabaseBackups\AdventureWorks2014.bak' 
WITH NOFORMAT, INIT
,  MEDIANAME = N'Media01',  NAME = N'AdventureWorks2014-Full Database Backup'
, NOSKIP, NOREWIND, NOUNLOAD,  STATS = 10
GO

SQL Server will throw this error;


If you still need to overwrite it: with TSQL, you can use SKIP instead of NOSKIP, or with the GUI, do not mention the Media name.

Wednesday, March 1, 2017

SQL Server Licensing for High Availability

Licensing is always complex, and sometimes we cannot figure out the number of licenses we have to buy for production. It is always better to contact someone from Microsoft for licensing, because it is not always as we think; it changes with many options.

This post is not about how to decide the number of licenses you need for Microsoft SQL Server; it is about the licenses you have to buy when you configure High Availability.

There are two models in SQL Server Licensing; Server + CAL and Core-based. If the devices and users connecting to SQL Server are known and not many, the Server + CAL model works well. If the number of connections is unknown and many connections are expected, then the Core-based model is the best. However, you need to calculate the cost and compare before making the decision.

We use many techniques for implementing High Availability; we can use Log Shipping, Mirroring (deprecated), Clustering and AlwaysOn Availability Groups. I was recently working on a similar implementation and had a question whether we need to purchase a license for the secondary server, as it is not always active. Do we really need to purchase licenses for the secondary/stand-by server?

This was with Enterprise Edition, hence the only model we can use with SQL Server 2016 is Core-based. While searching on this, I found a good resource that explains everything needed; here are some points I have taken from it.


First of all, let's see what a core is.


Your machine can have multiple processors, and a processor can have multiple cores. The above image shows a machine with 2 processors, each an octo-core processor with 8 cores. If you have a machine like the above, you need to buy 16 core licenses.

When you implement High Availability with Windows Clustering (whether it is in the Cloud or On-Premises, it is the same), one node becomes Active and the other node becomes Passive. Since the Passive Node will not be used unless there is a fail-over, it does not require licenses for its cores.


However, if you use an AlwaysOn Availability Group with Windows Clustering and the second node will be used as a read-only instance for queries and reporting, then it needs licenses based on the number of cores.



Thursday, December 29, 2016

Setting a higher Priority Base value to SQL Server - are we optimizing the process or taking a risk?

There is a set of server-level configurations related to optimizing SQL Server, but not all of them guarantee that the result will be as expected and friendly. Boost SQL Server Priority is one such setting; although we have been given it for prioritizing SQL Server processes in the Operating System, it is not recommended to change it.

I was asked about this last week: whether we can optimize SQL Server processes by setting the property to true. Theoretically, it should optimize, but it can produce unexpected results as well.


In simple terms, this setting allows us to set the Priority Level (or Priority Base) to 13. The default is 7, not 0 or a higher value. I am sure that the value 7 has been set for many reasons, hence it should not be changed. Assume that we changed this and now it is 13. This makes the Operating System give SQL Server processes higher priority than other requested processes. Experts say that this might bring you to a situation where you see something like no response from the keyboard or no response from the mouse.
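For reference only, here is how the setting is changed with TSQL; a minimal sketch, and again, changing it is not recommended. Note that priority boost is an advanced option, and the change takes effect only after the service is restarted.

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
GO

-- 0 (default) = priority base 7; 1 = priority base 13
EXEC sp_configure 'priority boost', 1;
RECONFIGURE;
GO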

Therefore, remember, this is something you need to think twice about before changing it.

Monday, December 26, 2016

Are SQL Server Agent errors logged in the SQL Server Error Log or somewhere else?

I have written two posts on the SQL Server Error Log; How to delete current SQL Server error log? My current log is in GBs and Reading SQL Server Error Log using XP_READERRORLOG. They explain how the log is maintained, how log files are created and how the content can be read. I was asked a question on this, but not exactly on SQL Server errors; on SQL Server Agent errors.

Can we read/see SQL Server Agent errors in the SQL Server Error Logs? Or should we look somewhere else for them? Generally, you should see all errors related to SQL Server there, but errors related to the Agent are maintained somewhere else, called the Agent Error Log.

Just like the way SQL Server maintains its error logs, multiple files are maintained for Agent error logs. By default, there are 10 files and they are saved in the Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\Log folder.


The file named SQLAGENT.OUT is the current one and, just like the SQL Server Error Logs, it gets recycled when the service is restarted. Once recycled, the current one becomes SQLAGENT.1 and the old SQLAGENT.1 becomes SQLAGENT.2, and so on. However, while we can adjust the number of files to be maintained using the Configure menu option on the SQL Server Error Log node, that facility does not exist for the Agent Error Log.


If you see that the current log file is larger than you expect, it can be recycled either by clicking the Recycle menu item in the context menu (see above image) or by executing the dbo.sp_cycle_agent_errorlog stored procedure.
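Note that the procedure lives in msdb;

USE msdb;
GO

EXEC dbo.sp_cycle_agent_errorlog;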

Wednesday, December 21, 2016

Get SQL Server Trace automatically started at service startup

Although SQL Trace and SQL Server Profiler have been marked as deprecated and Microsoft recommends using Extended Events instead, many of us still use the Profiler. You may run the Profiler for small traces or long traces, but if you need to make sure that it runs continuously even across server restarts, you need to consider the following points;
  • SQL Server Profiler is not recommended for long-term monitoring and large traces.
  • Profiler cannot be automatically started when the server is restarted.
If you still need to implement it, you can consider;
  • Use SQL Trace instead.
  • Use sp_procoption to start the trace at the SQL Server Service start.
This makes sure that the configured trace runs continuously even when the server is restarted, and that the output is saved and can be queried anytime.

In order to implement this, you need to follow the steps below;
  1. Start with SQL Server Profiler and select all the events you need, with columns and filters, for the trace.
  2. Get the definition of the trace created using the Profiler and use it for creating the SQL Trace.
  3. Place the SQL Trace code in a stored procedure and add the stored procedure as a startup procedure.
Here is a sample implementation with details.

First, let's make a Trace using the Profiler. The following Trace is created with the TSQL_Duration Template, and All Columns are selected for both Events.


Let's click on Run and then immediately stop it. In order to get the code generated for the SQL Trace, click the File -> Export -> Script Trace Definition -> For SQL Server 2005 - 2016... menu item.


Save it as Trace01.sql. Then open the saved script in a Query Window for modification. This code has one place to modify, which sets the trace output file path. Since we need to make sure that this code can be executed at each server restart, a unique output file name is required. Therefore, let's add a variable called @FilePath and use it with the sp_trace_create statement. This is how the first part of the code is changed now;

/****************************************************/
/* Created by: SQL Server 2016 Profiler          */
/* Date: 12/21/2016  10:28:14 PM         */
/****************************************************/


-- Create a Queue
declare @rc int
declare @TraceID int
declare @maxfilesize bigint
set @maxfilesize = 5 

-- Please replace the text InsertFileNameHere, with an appropriate
-- filename prefixed by a path, e.g., c:\MyFolder\MyTrace. The .trc extension
-- will be appended to the filename automatically. If you are writing from
-- remote server to local drive, please use UNC path and make sure server has
-- write access to your network share

DECLARE @FilePath nvarchar(200);
SET @FilePath = N'E:\TraceOutput\SQLDurationTrace_' 
 + CONVERT(nvarchar(20), GetDate(), 112) + N'_' + REPLACE(CONVERT(nvarchar(20), GetDate(), 108), ':', N'')

exec @rc = sp_trace_create @TraceID output, 0, @FilePath, @maxfilesize, NULL 
if (@rc != 0) goto error

Note that this creates a file named SQLDurationTrace_YYYYMMDD_HHMMSS (the .trc extension is appended automatically).

It is better to record the TraceID generated from this because it is required for stopping and closing the trace later. Since this is going to start automatically, the generated TraceID cannot be captured from the output as the code is written. Therefore, we need to change that part as well. Let's create a table for holding the generated trace ids, in the master database.

CREATE TABLE dbo.TraceID
(
 ServerStartDateTime datetime NOT NULL 
  PRIMARY KEY DEFAULT(GetDate())
 , TraceID int  NULL
);
GO


Scroll down the generated Trace code, comment out the select TraceID=@TraceID statement and add the following instead.

-- display trace id for future references
--select TraceID=@TraceID
INSERT INTO dbo.TraceID
 (ServerStartDateTime, TraceID)
VALUES
 (DEFAULT, @TraceID);

goto finish


We have made the necessary changes to the Trace code. Let's place the entire code into a stored procedure named StartTrace. Here is the entire code now;

USE master;
GO

CREATE OR ALTER PROC dbo.StartTrace
AS
BEGIN

 -- Create a Queue
 declare @rc int
 declare @TraceID int
 declare @maxfilesize bigint
 set @maxfilesize = 5 

 DECLARE @FilePath nvarchar(200);
 SET @FilePath = N'E:\TraceOutput\SQLDurationTrace_' 
  + CONVERT(nvarchar(20), GetDate(), 112) + N'_' + REPLACE(CONVERT(nvarchar(20), GetDate(), 108), ':', N'')

 exec @rc = sp_trace_create @TraceID output, 0, @FilePath, @maxfilesize, NULL 
 if (@rc != 0) goto error

 -- Client side File and Table cannot be scripted

 -- Set the events
 declare @on bit
 set @on = 1
 exec sp_trace_setevent @TraceID, 10, 1, @on
 exec sp_trace_setevent @TraceID, 10, 9, @on
 exec sp_trace_setevent @TraceID, 10, 2, @on
 exec sp_trace_setevent @TraceID, 10, 66, @on
 exec sp_trace_setevent @TraceID, 10, 10, @on
 exec sp_trace_setevent @TraceID, 10, 3, @on
 exec sp_trace_setevent @TraceID, 10, 4, @on
 exec sp_trace_setevent @TraceID, 10, 6, @on
 exec sp_trace_setevent @TraceID, 10, 7, @on
 exec sp_trace_setevent @TraceID, 10, 8, @on
 exec sp_trace_setevent @TraceID, 10, 11, @on
 exec sp_trace_setevent @TraceID, 10, 12, @on
 exec sp_trace_setevent @TraceID, 10, 13, @on
 exec sp_trace_setevent @TraceID, 10, 14, @on
 exec sp_trace_setevent @TraceID, 10, 15, @on
 exec sp_trace_setevent @TraceID, 10, 16, @on
 exec sp_trace_setevent @TraceID, 10, 17, @on
 exec sp_trace_setevent @TraceID, 10, 18, @on
 exec sp_trace_setevent @TraceID, 10, 25, @on
 exec sp_trace_setevent @TraceID, 10, 26, @on
 exec sp_trace_setevent @TraceID, 10, 31, @on
 exec sp_trace_setevent @TraceID, 10, 34, @on
 exec sp_trace_setevent @TraceID, 10, 35, @on
 exec sp_trace_setevent @TraceID, 10, 41, @on
 exec sp_trace_setevent @TraceID, 10, 48, @on
 exec sp_trace_setevent @TraceID, 10, 49, @on
 exec sp_trace_setevent @TraceID, 10, 50, @on
 exec sp_trace_setevent @TraceID, 10, 51, @on
 exec sp_trace_setevent @TraceID, 10, 60, @on
 exec sp_trace_setevent @TraceID, 10, 64, @on
 exec sp_trace_setevent @TraceID, 12, 1, @on
 exec sp_trace_setevent @TraceID, 12, 9, @on
 exec sp_trace_setevent @TraceID, 12, 3, @on
 exec sp_trace_setevent @TraceID, 12, 11, @on
 exec sp_trace_setevent @TraceID, 12, 4, @on
 exec sp_trace_setevent @TraceID, 12, 6, @on
 exec sp_trace_setevent @TraceID, 12, 7, @on
 exec sp_trace_setevent @TraceID, 12, 8, @on
 exec sp_trace_setevent @TraceID, 12, 10, @on
 exec sp_trace_setevent @TraceID, 12, 12, @on
 exec sp_trace_setevent @TraceID, 12, 13, @on
 exec sp_trace_setevent @TraceID, 12, 14, @on
 exec sp_trace_setevent @TraceID, 12, 15, @on
 exec sp_trace_setevent @TraceID, 12, 16, @on
 exec sp_trace_setevent @TraceID, 12, 17, @on
 exec sp_trace_setevent @TraceID, 12, 18, @on
 exec sp_trace_setevent @TraceID, 12, 26, @on
 exec sp_trace_setevent @TraceID, 12, 31, @on
 exec sp_trace_setevent @TraceID, 12, 35, @on
 exec sp_trace_setevent @TraceID, 12, 41, @on
 exec sp_trace_setevent @TraceID, 12, 48, @on
 exec sp_trace_setevent @TraceID, 12, 49, @on
 exec sp_trace_setevent @TraceID, 12, 50, @on
 exec sp_trace_setevent @TraceID, 12, 51, @on
 exec sp_trace_setevent @TraceID, 12, 60, @on
 exec sp_trace_setevent @TraceID, 12, 64, @on
 exec sp_trace_setevent @TraceID, 12, 66, @on


 -- Set the Filters
 declare @intfilter int
 declare @bigintfilter bigint

 exec sp_trace_setfilter @TraceID, 35, 0, 6, N'AdventureWorks2014'
 -- Set the trace status to start
 exec sp_trace_setstatus @TraceID, 1

 -- display trace id for future references
 --select TraceID=@TraceID
 INSERT INTO dbo.TraceID
  (ServerStartDateTime, TraceID)
 VALUES
  (DEFAULT, @TraceID);

 goto finish

 error: 
 select ErrorCode=@rc

 finish: 
END
GO


Now the last step. We need to add this stored procedure as a startup procedure. Use the following code to add StartTrace as a Startup procedure;

USE master;
GO

EXEC sp_procoption StartTrace, 'STARTUP', 'ON';
GO


Note that, in order to get this to work, we need to make sure the scan for startup procs server setting is enabled. Generally, when sp_procoption is executed, this gets enabled automatically, but it is always better to enable it beforehand using sp_configure.
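Here is a minimal sketch of enabling it explicitly; since scan for startup procs is an advanced option, it has to be exposed first.

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
GO

EXEC sp_configure 'scan for startup procs', 1;
RECONFIGURE;
GO

Done. Let's restart the SQL Server Service and check the dbo.TraceID table.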


As you see, a record is inserted with an ID. This means that trace has been started. Let's check the folder.


The file has been created. For testing purposes, let's run some queries against the AdventureWorks2014 database and restart the service.

USE AdventureWorks2014;
GO

SELECT * FROM Production.Product;
SELECT * FROM Sales.Customer;


Once restarted, we should see a new record in the table and a new file in the folder.


This means that our Trace is automatically started when the service starts, and tracing continues. You can query the trace output, current or older ones, using the fn_trace_gettable function, and if you need to, you can stop the current Trace using value 0 and close it using value 2 with the sp_trace_setstatus stored procedure.
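Here is a sketch of both operations; the trace file name passed to fn_trace_gettable is illustrative, and the TraceID is taken from the dbo.TraceID table populated above.

USE master;
GO

-- Read a saved trace file (DEFAULT reads all rollover files)
SELECT TextData, Duration, StartTime
FROM sys.fn_trace_gettable(N'E:\TraceOutput\SQLDurationTrace_20161221_222814.trc', DEFAULT);

-- Stop and close the latest trace
DECLARE @TraceID int =
 (SELECT TOP (1) TraceID FROM dbo.TraceID ORDER BY ServerStartDateTime DESC);
EXEC sp_trace_setstatus @TraceID, 0; -- 0 = stop
EXEC sp_trace_setstatus @TraceID, 2; -- 2 = close and delete the definition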



Tuesday, December 20, 2016

Finding SQL Server Startup Stored Procedures

When I was discussing getting Stored Procedures automatically executed at service start, a question was raised; how can we find the Stored Procedures added as Startup Procedures? This is simple; all we have to do is query the sys.procedures system view and filter the result on the ExecIsStartup property.

Here is the code;

USE master;
GO

SELECT *
FROM sys.procedures
WHERE OBJECTPROPERTY(OBJECT_ID, 'ExecIsStartup') = 1;

For more info on how to add procedures as startup procedures using sp_procoption, read this: https://msdn.microsoft.com/en-us/library/ms181720.aspx

Monday, December 19, 2016

Reading SQL Server Error Log using XP_READERRORLOG

We use sp_readerrorlog for reading the content of the SQL Server error log, and it actually uses the xp_readerrorlog extended stored procedure. Since I had to do some troubleshooting on one of my servers, I used this for finding some info, and thought of sharing it.

Here are some ways of reading the error log;




In case you need the code;

USE master;
GO

-- without parameters, load the current log
EXEC xp_readerrorlog;

-- Load the specific error log 
-- 0 is current one and by default there are 6 more old logs
EXEC xp_readerrorlog 0;

-- The second parameter is the log type
-- 1 is for the Database Engine and 2 is for the Agent
EXEC xp_readerrorlog 0, 1;

-- Two more additional parameters can be passed
-- for searching; the first one is for searching
-- and the second one is for filtering the search - or further searching within the result
EXEC xp_readerrorlog 0, 1, N'Server', N'Listening';

-- And this can take three more parameters
-- for setting the date range and sorting order
EXEC xp_readerrorlog 0, 1, N'Database', N'resource'
 , '2016-12-21 23:00:18.860', '2016-12-21 23:00:18.930', N'DESC';

Thursday, October 13, 2016

SQL Server 2016 Partial backup operation on a READONLY database backs up all file groups

While I was writing code for a backup operation on SQL Server 2016, I noticed that there is an issue with partial backups. As per the documentation, when we take a partial backup using the READ_WRITE_FILEGROUPS option, it backs up;

  • only PRIMARY, all READ/WRITE file groups and mentioned READONLY file groups.
  • only PRIMARY if the database is READONLY.

However, if you perform this operation with SQL Server 2016 RTM, it backs up;
  • all file groups; PRIMARY, all READ/WRITE file groups including READONLY file groups - which is NOT the expected behavior.
If you apply the latest cumulative update (I applied CU2), then it backs up;
  • only PRIMARY and all READ/WRITE file groups - which is still NOT the expected behavior.
Here is the code I tested; you can run the same and see;

USE master;
GO
-- creating the database
CREATE DATABASE [TestDatabase]
 CONTAINMENT = NONE
 ON  PRIMARY 
( NAME = N'TestDatabase_Data_1', FILENAME = N'D:\MSSQL13.MSSQLSERVER\MSSQL\DATA\TestDatabase_Data1.mdf' 
 , SIZE = 10240KB , MAXSIZE = UNLIMITED, FILEGROWTH = 1024KB ), 

 FILEGROUP [FileGroup_Main] 
( NAME = N'TestDatabase_Dat_2', FILENAME = N'D:\MSSQL13.MSSQLSERVER\MSSQL\DATA\TestDatabase_Data2.ndf' 
 , SIZE = 10240KB , MAXSIZE = UNLIMITED, FILEGROWTH = 1024KB ),
 
 FILEGROUP [FileGroup_History] 
( NAME = N'TestDatabase_Dat_3', FILENAME = N'D:\MSSQL13.MSSQLSERVER\MSSQL\DATA\TestDatabase_Data3.ndf' 
 , SIZE = 10240KB , MAXSIZE = UNLIMITED, FILEGROWTH = 1024KB )

 LOG ON 
( NAME = N'TestDatabase_Log', FILENAME = N'D:\MSSQL13.MSSQLSERVER\MSSQL\DATA\TestDatabase_Log.ldf' 
 , SIZE = 5120KB , MAXSIZE = 2048GB , FILEGROWTH = 10%)
GO

-- creating tables and inserting records
USE [TestDatabase]
GO

CREATE TABLE [dbo].[Configuration]
(
 [Id] [int] IDENTITY(1,1) NOT NULL PRIMARY KEY,
 [ConfigurationName] [varchar](100) NOT NULL,
 [ConfigurationValue] [varchar](100) NOT NULL,
 [Settings] char(4000) NOT NULL
) ON [PRIMARY]

GO

CREATE TABLE [dbo].[Sales]
(
 [Id] [int] IDENTITY(1,1) NOT NULL PRIMARY KEY,
 [SalesDate] [date] NOT NULL,
 [Amount] [decimal](16, 2) NOT NULL,
 [Settings] char(4000) NOT NULL
) ON [FileGroup_Main]
GO

CREATE TABLE [dbo].[Sales_History]
(
 [Id] [int] IDENTITY(1,1) NOT NULL PRIMARY KEY,
 [SalesDate] [date] NOT NULL,
 [Amount] [decimal](16, 2) NOT NULL,
 [Settings] char(4000) NOT NULL
) ON [FileGroup_History]
GO

-- Insert set of records
INSERT INTO dbo.Configuration (ConfigurationName, ConfigurationValue, Settings) VALUES ('A', '1', 'a');
GO 5000
INSERT INTO dbo.Sales (SalesDate, Amount, Settings) VALUES ('01/01/2016', 10000, 'a');
GO 5000
INSERT INTO dbo.Sales_History (SalesDate, Amount, Settings) VALUES ('01/01/2010', 10000, 'a');
GO 5000


USE master;
GO

-- Make one FG as read-only
ALTER DATABASE TestDatabase MODIFY FILEGROUP [FileGroup_History] READ_ONLY;
GO

-- take a full database backup
-- this takes all three files
-- backup size is 63 MB
BACKUP DATABASE TestDatabase 
TO DISK = 'D:\MSSQL13.MSSQLSERVER\MSSQL\BACKUP\TestDatabaseBackup01.bak'
WITH INIT
GO

-- take a backup with READ_WRITE_FILEGROUPS
-- this takes only TestDatabase_Data_1 and TestDatabase_Dat_2
-- backup size is 43 MB
BACKUP DATABASE TestDatabase READ_WRITE_FILEGROUPS
TO DISK = 'D:\MSSQL13.MSSQLSERVER\MSSQL\BACKUP\TestDatabaseBackup02.bak'
WITH INIT
GO

-- Make the database as read-only
ALTER DATABASE [TestDatabase] SET  READ_ONLY 
GO

-- take a backup with READ_WRITE_FILEGROUPS
-- as per MSDN, this should take only PRIMARY
-- But this takes all three files with RTM and excludes read-only files with CU2
-- backup size is 63 MB
BACKUP DATABASE TestDatabase READ_WRITE_FILEGROUPS
TO DISK = 'D:\MSSQL13.MSSQLSERVER\MSSQL\BACKUP\TestDatabaseBackup03.bak'
WITH INIT
GO

Result before applying CU2;

Processed 2840 pages for database 'TestDatabase', file 'TestDatabase_Data_1' on file 1.
Processed 2528 pages for database 'TestDatabase', file 'TestDatabase_Dat_2' on file 1.
Processed 2528 pages for database 'TestDatabase', file 'TestDatabase_Dat_3' on file 1.
Processed 2 pages for database 'TestDatabase', file 'TestDatabase_Log' on file 1.
BACKUP DATABASE successfully processed 7898 pages in 0.275 seconds (224.357 MB/sec).
Processed 2840 pages for database 'TestDatabase', file 'TestDatabase_Data_1' on file 1.
Processed 2528 pages for database 'TestDatabase', file 'TestDatabase_Dat_2' on file 1.
Processed 2 pages for database 'TestDatabase', file 'TestDatabase_Log' on file 1.
BACKUP DATABASE...FILE= successfully processed 5370 pages in 0.191 seconds (219.611 MB/sec).
Processed 2840 pages for database 'TestDatabase', file 'TestDatabase_Data_1' on file 1.
Processed 2528 pages for database 'TestDatabase', file 'TestDatabase_Dat_2' on file 1.
Processed 2528 pages for database 'TestDatabase', file 'TestDatabase_Dat_3' on file 1.
Processed 2 pages for database 'TestDatabase', file 'TestDatabase_Log' on file 1.

BACKUP DATABASE...FILE= successfully processed 7898 pages in 0.271 seconds (227.674 MB/sec).



Result after applying CU2.

Processed 2872 pages for database 'TestDatabase', file 'TestDatabase_Data_1' on file 1.
Processed 2528 pages for database 'TestDatabase', file 'TestDatabase_Dat_2' on file 1.
Processed 2528 pages for database 'TestDatabase', file 'TestDatabase_Dat_3' on file 1.
Processed 3 pages for database 'TestDatabase', file 'TestDatabase_Log' on file 1.
BACKUP DATABASE successfully processed 7931 pages in 0.289 seconds (214.379 MB/sec).
Processed 2872 pages for database 'TestDatabase', file 'TestDatabase_Data_1' on file 1.
Processed 2528 pages for database 'TestDatabase', file 'TestDatabase_Dat_2' on file 1.
Processed 2 pages for database 'TestDatabase', file 'TestDatabase_Log' on file 1.
BACKUP DATABASE...FILE= successfully processed 5402 pages in 0.202 seconds (208.890 MB/sec).
Processed 2872 pages for database 'TestDatabase', file 'TestDatabase_Data_1' on file 1.
Processed 2528 pages for database 'TestDatabase', file 'TestDatabase_Dat_2' on file 1.
Processed 2 pages for database 'TestDatabase', file 'TestDatabase_Log' on file 1.
BACKUP DATABASE...FILE= successfully processed 5402 pages in 0.205 seconds (205.852 MB/sec).


Be aware of this; it may be the expected behavior of SQL Server 2016, though I see it as an issue. I am working on it and will update the post if I hear something new.

Wednesday, October 12, 2016

Find SQL Server Databases that are not accessed after last reboot

I have written a post titled When was my SQL Server Database last accessed? that talks about how to find the last accessed date and time of databases. As per one of the comments, it is important to find out databases that were not accessed after the last reboot as well, hence I wrote this query for that.

It may be helpful to you.

SELECT name DatabaseName
FROM sys.databases
WHERE name NOT IN ('master', 'tempdb', 'model', 'msdb') -- add any other system DBs you have
EXCEPT
SELECT DISTINCT
 DB_NAME(database_id) DatabaseName
FROM sys.dm_db_index_usage_stats
ORDER BY 1;


Saturday, October 8, 2016

How to create a SQL Server job quickly

SQL Server Agent Jobs are commonly used for automating administrative tasks and other routine tasks. For creating a job, you need to either use the GUI that comes with SSMS or use the stored procedures given. Yes, it takes time to add a job and then add steps, but we can get some jobs created automatically in a simple way.

Assume that you need to create an Agent job for backing up one of your databases. Rather than going through SQL Server Agent -> Jobs in the Object Explorer, you can open the Backup interface just like the way you manually take a backup, and configure everything as you want. Once the configuration is done, without clicking OK, click on the down-arrow next to the Script button and click Script Action to Job.


This action opens the window that is used for creating Agent Jobs. The good thing is, it adds everything we need, including the Steps, and all you have to add is a schedule. The sketch below shows roughly what the generated script contains.
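For reference, here is a rough sketch of the kind of msdb calls the generated script contains; the job name and the backup command are illustrative, not what the tool emits verbatim.

USE msdb;
GO

EXEC dbo.sp_add_job @job_name = N'Backup TestDatabase';

EXEC dbo.sp_add_jobstep @job_name = N'Backup TestDatabase'
 , @step_name = N'Take full backup'
 , @subsystem = N'TSQL'
 , @command = N'BACKUP DATABASE TestDatabase TO DISK = N''E:\DatabaseBackups\TestDatabase.bak'' WITH INIT;';

EXEC dbo.sp_add_jobserver @job_name = N'Backup TestDatabase';
GO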


Using this technique, jobs can be easily created for most of the tasks.


Sunday, April 3, 2016

Troubleshooting agent jobs

Administrators always try to automate routine tasks, making sure that all required operations are done on time without failures. The main component used for automation is SQL Server Agent, which is used for creating jobs. If you notice that a scheduled job has not run, or that no jobs are running, how do you troubleshoot? Where do you start?



Here are some guidelines for troubleshooting based on a discussion I had;
  • The first thing that needs to be checked is whether the SQL Server Agent service is running. You need to make sure that the Startup type is set to automatic; this makes sure that the service is started when the server is restarted. If, for some reason, the service is not running and you cannot even start it manually, check the following;
    • Check and see whether the account assigned to the Agent is valid and there are no issues with the password. The account may have expired, been changed or been disabled. Check the system log for more details.
    • Check the msdb database. If it is corrupted or offline, the Agent will not start.
  • Check the job history (a query sketch for this follows the list). Check whether the last run was successful. There can be issues with the business logic implemented.
  • Check whether the job is enabled. Someone might have disabled it.
  • Check whether the schedule set is either expired or disabled.
  • Check and see whether proxy accounts are working properly, if you have used them. Check the credentials used for the proxy accounts.
  • Check dependencies. There can be steps in the job that run without any issues while some do not. Check whether all required items, such as files and folders, and all permissions required for accessing them, are available.
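As mentioned in the job history item above, here is a minimal sketch for checking recent job outcomes from msdb;

SELECT j.name, h.run_date, h.run_time
 , h.run_status -- 0 = failed, 1 = succeeded, 3 = cancelled
 , h.message
FROM msdb.dbo.sysjobs j
JOIN msdb.dbo.sysjobhistory h ON h.job_id = j.job_id
WHERE h.step_id = 0 -- job outcome rows; individual steps have step_id > 0
ORDER BY h.run_date DESC, h.run_time DESC;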

Sunday, February 7, 2016

How to delete current SQL Server error log? My current log is in GBs.

In a continuously running SQL Server environment, it is not uncommon to see a large, continuously growing error log related to SQL Server. Recently, a discussion happened on this, as a particular environment had a very large error log which had consumed a lot of space in the partition, creating an issue related to space required for data files. Can we delete the error log?

By default, SQL Server maintains 7 error logs; the current log and 6 archives of the last files created. The current log has no extension and the archives have extensions starting from 1. This setting can be changed, if required, by clicking Configure on the SQL Server Logs folder in Object Explorer - Management. Files are located in the standard SQL Server path: {partition}:\Program Files\Microsoft SQL Server\{Instance}\MSSQL\Log.



Now, if you try to delete the current log, you will get an error because it is being used by SQL Server. Therefore, you need to get a new file created as the current log, making the current one the latest archive. The log cycles with every restart of the SQL Server instance, making the current one the archive with extension 1 and creating a new one with no extension. With the default setting, the files with extensions 1 to 5 become 2 to 6, and the old 6th file is deleted. If you cannot restart the service (if it is a Production environment), this recycling can be done manually by calling sp_cycle_errorlog. This SP closes the current log and creates a new one. Once it is done, you can delete the large one, which is now the file with extension 1.

EXEC sys.sp_cycle_errorlog;

Sunday, January 17, 2016

SQL Server Brain Basher of the Week #040 - SQL Server Services Account

Let's talk about the accounts that can be assigned to the services related to SQL Server and how they can be assigned or changed later. Since SQL Server services are Microsoft Windows services, changes related to them can be done via the Services Desktop App. In addition to that, SQL Server Configuration Manager is installed with the SQL Server installation, and it can be used to manage services related to SQL Server too.

Here is the question for this week based on this.

Which application should be used for changing SQL Server service accounts?
  1. Services Desktop App only
  2. SQL Server Configuration Manager only
  3. Both Services Desktop App and SQL Server Configuration Manager.
What would be the answer? Or what would be the best?

Yes, you can use the Services Desktop App for changing accounts related to SQL Server, just like changing settings of other services. And since we have been given a SQL Server specific application, which is Configuration Manager, it can be used too. Although both can be used, there is an advantage with SQL Server Configuration Manager. When a SQL Server service is updated with Configuration Manager, it knows which security groups should be updated, making the account a member of them, and it does this immediately. This does not happen immediately with the Services Desktop App until a restart is done. Therefore, it is recommended to use SQL Server Configuration Manager instead of the Services Desktop App.

Thursday, December 31, 2015

Query Shortcuts in Management Studio

The most friendly database tool, Management Studio, was released with SQL Server 2005 and has become the right hand of both database administrators and developers. SQL Server Management Studio is an integrated environment for doing all types of operations and management related to SQL Server, and it is a rich tool equipped with many functionalities and an editor. Although we have been using it for a long time, Query Shortcuts are still unknown to many. Here is a post on them; you will surely find them useful.

Query Shortcuts allow you to configure key combinations for executing your common commands. By default, three shortcuts are already configured and can be used immediately. Seeing the configured shortcuts and configuring new ones can be done with the Options interface, which can be opened from the Tools menu.


As you see, three shortcuts are already configured. If you press Ctrl+1 in the Query Editor, the sp_who procedure will be automatically executed and you will see its result.

In addition to that, we can configure our own shortcuts. Here are some examples;



"SELECT * FROM " is configured with Ctrl+5 and "EXEC" is configured with Ctrl+6. This allows us to highlight either a table name or a stored procedure and get it either queried or executed using the assigned shortcut. For example, if Person.Person is selected and Ctrl+5 is pressed, the statement SELECT * FROM Person.Person will be executed.


Make sure you do not assign shortcuts for all operations. Operations like delete should not be configured, because once configured you can accidentally delete records.

SQL Server CPU usage: Recording and monitoring with many more server statistics

There are many ways of collecting statistics of SQL Server, such as CPU usage, IO made, and reads and writes. One facility given by SQL Server is an Extended Stored Procedure, sp_monitor. This procedure can be used for seeing statistics related to resource usage, as SQL Server keeps this information using system statistical functions. The procedure shows values accumulated since SQL Server was restarted and values since sp_monitor was last run.

For more details: https://msdn.microsoft.com/en-us/library/ms188912.aspx
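The call itself takes no parameters;

EXEC sp_monitor;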

Here is the result of the procedure;


This has been given mainly for DBAs to have a quick look. Because of that, collecting the values regularly for future analysis is a bit difficult. The format is not very user-friendly and the procedure returns multiple result sets, hence calling it using TSQL and saving the result in a table is not easy. However, it can be called and the result saved using simple C#.NET code. Let's make a table for collecting the statistics first.

-- create a database for holding data
CREATE DATABASE ServerMain;
GO

-- create a table for holding data
USE ServerMain;
GO

CREATE TABLE dbo.ServerStatistics
(
 DateRun datetime primary key
 , CPU_Busy int not null
 , IO_Busy int not null
 , Packet_Received int not null
 , Packet_Sent int not null
 , Packet_Errors int not null
 , Total_Reads int not null
 , Total_Writes int not null
 , Total_Errors int not null
 , Connections int
);
GO
-- Procedure for inserting data
CREATE PROCEDURE dbo.AddServerStatistics @DateRun datetime, @CPU_Busy int, @IO_Busy int
     , @Packet_Received int, @Packet_Sent int, @Packet_Errors int, @Total_Reads int
     , @Total_Writes int, @Total_Errors int, @Connections int
AS
BEGIN

 INSERT INTO dbo.ServerStatistics
  (DateRun, CPU_Busy, IO_Busy, Packet_Received, Packet_Sent
  , Packet_Errors, Total_Reads, Total_Writes, Total_Errors, Connections)
 VALUES
  (@DateRun, @CPU_Busy, @IO_Busy, @Packet_Received, @Packet_Sent
  , @Packet_Errors, @Total_Reads, @Total_Writes, @Total_Errors, @Connections)
END
GO

and here is the C#.NET code for calling this procedure and saving the result. Note that it requires the System, System.Data and System.Data.SqlClient namespaces and goes inside a method of your application; each value sp_monitor returns comes formatted like 1234(56), and the code extracts the number within parentheses, which is the value since the last run.

            DateTime DateRun = DateTime.Now;
            int CPU_Busy = 0;
            int IO_Busy = 0;
            int Packet_Received = 0;
            int Packet_Sent = 0;
            int Packet_Errors = 0;
            int Total_Reads = 0;
            int Total_Writes = 0;
            int Total_Errors = 0;
            int Connections = 0;
            string commandText = "sp_monitor";

            using (SqlConnection connection = new SqlConnection(@"Data Source=(local)\SQL2014;Database=ServerMain;Integrated Security=true;"))
            {
                using (SqlCommand command = new SqlCommand(commandText, connection))
                {
                    command.CommandType = CommandType.StoredProcedure;
                    connection.Open();

                    SqlDataReader reader = command.ExecuteReader();

                    while (reader.Read())
                    {

                        DateRun = Convert.ToDateTime(reader[1]);

                    }

                    reader.NextResult();
                    while (reader.Read())
                    {
                        CPU_Busy = Convert.ToInt32(reader[0].ToString().Substring(reader[0].ToString().IndexOf("(") + 1).Substring(0, reader[0].ToString().Substring(reader[0].ToString().IndexOf("(") + 1).IndexOf(")")));
                        IO_Busy = Convert.ToInt32(reader[1].ToString().Substring(reader[1].ToString().IndexOf("(") + 1).Substring(0, reader[1].ToString().Substring(reader[1].ToString().IndexOf("(") + 1).IndexOf(")")));
                        
                    }

                    reader.NextResult();
                    while (reader.Read())
                    {
                        Packet_Received = Convert.ToInt32(reader[0].ToString().Substring(reader[0].ToString().IndexOf("(") + 1).Substring(0, reader[0].ToString().Substring(reader[0].ToString().IndexOf("(") + 1).IndexOf(")")));
                        Packet_Sent = Convert.ToInt32(reader[1].ToString().Substring(reader[1].ToString().IndexOf("(") + 1).Substring(0, reader[1].ToString().Substring(reader[1].ToString().IndexOf("(") + 1).IndexOf(")")));
                        Packet_Errors = Convert.ToInt32(reader[2].ToString().Substring(reader[2].ToString().IndexOf("(") + 1).Substring(0, reader[2].ToString().Substring(reader[2].ToString().IndexOf("(") + 1).IndexOf(")")));
                        
                    }

                    reader.NextResult();
                    while (reader.Read())
                    {
                        Total_Reads = Convert.ToInt32(reader[0].ToString().Substring(reader[0].ToString().IndexOf("(") + 1).Substring(0, reader[0].ToString().Substring(reader[0].ToString().IndexOf("(") + 1).IndexOf(")")));
                        Total_Writes = Convert.ToInt32(reader[1].ToString().Substring(reader[1].ToString().IndexOf("(") + 1).Substring(0, reader[1].ToString().Substring(reader[1].ToString().IndexOf("(") + 1).IndexOf(")")));
                        Total_Errors = Convert.ToInt32(reader[2].ToString().Substring(reader[2].ToString().IndexOf("(") + 1).Substring(0, reader[2].ToString().Substring(reader[2].ToString().IndexOf("(") + 1).IndexOf(")")));
                        Connections = Convert.ToInt32(reader[3].ToString().Substring(reader[3].ToString().IndexOf("(") + 1).Substring(0, reader[3].ToString().Substring(reader[3].ToString().IndexOf("(") + 1).IndexOf(")")));
                        
                    }

                    connection.Close();
                }

                commandText = "AddServerStatistics";
                using (SqlCommand command = new SqlCommand(commandText, connection))
                {
                    command.CommandType = CommandType.StoredProcedure;

                    SqlParameter parameterDateRun = new SqlParameter("DateRun", SqlDbType.DateTime);
                    parameterDateRun.Value = DateRun;
                    command.Parameters.Add(parameterDateRun);

                    SqlParameter parameterCPU_Busy = new SqlParameter("CPU_Busy", SqlDbType.Int);
                    parameterCPU_Busy.Value = CPU_Busy;
                    command.Parameters.Add(parameterCPU_Busy);

                    SqlParameter parameterIO_Busy = new SqlParameter("IO_Busy", SqlDbType.Int);
                    parameterIO_Busy.Value = IO_Busy;
                    command.Parameters.Add(parameterIO_Busy);

                    SqlParameter parameterPacket_Received = new SqlParameter("Packet_Received", SqlDbType.Int);
                    parameterPacket_Received.Value = Packet_Received;
                    command.Parameters.Add(parameterPacket_Received);

                    SqlParameter parameterPacket_Sent = new SqlParameter("Packet_Sent", SqlDbType.Int);
                    parameterPacket_Sent.Value = Packet_Sent;
                    command.Parameters.Add(parameterPacket_Sent);

                    SqlParameter parameterPacket_Errors = new SqlParameter("Packet_Errors", SqlDbType.Int);
                    parameterPacket_Errors.Value = Packet_Errors;
                    command.Parameters.Add(parameterPacket_Errors);

                    SqlParameter parameterTotal_Reads = new SqlParameter("Total_Reads", SqlDbType.Int);
                    parameterTotal_Reads.Value = Total_Reads;
                    command.Parameters.Add(parameterTotal_Reads);

                    SqlParameter parameterTotal_Writes = new SqlParameter("Total_Writes", SqlDbType.Int);
                    parameterTotal_Writes.Value = Total_Writes;
                    command.Parameters.Add(parameterTotal_Writes);

                    SqlParameter parameterTotal_Errors = new SqlParameter("Total_Errors", SqlDbType.Int);
                    parameterTotal_Errors.Value = Total_Errors;
                    command.Parameters.Add(parameterTotal_Errors);

                    SqlParameter parameterConnections = new SqlParameter("Connections", SqlDbType.Int);
                    parameterConnections.Value = Connections;
                    command.Parameters.Add(parameterConnections);

                    connection.Open();

                    command.ExecuteNonQuery();

                    
                    connection.Close();
                }
            }

If you continuously run this code, or schedule it to run every hour, you will have enough data to see how busy your SQL Server is and whether it needs more resources. Here is a simple chart created with SSRS, based on the saved data.


Sunday, October 4, 2015

SQL Server Error Severity Levels

Have you ever read the Severity Level indicated with an error that occurred? Or just tried to find the problematic code in your script to fix it? It is always better to understand what the error message says and its severity level, because some errors cannot be fixed by us even though they look fixable.

The severity level indicates the seriousness of the error; whether it can be ignored or not, and whether it can be fixed at user level. Therefore, we need to understand what it describes.

Range of the level  Description
0 to 9 Informational messages, not errors. The warning for NULL usage in an average is a good example.
10 Informational message; it can indicate a non-severe error too.
11 to 16 Errors that can be fixed by us, such as syntax issues, deadlocks and permission related issues.
17 to 19 Serious software related errors that the user cannot correct, such as out of memory and out of disk space.
20 to 24 Very serious errors related to either the hardware or SQL Server itself. Generally, errors from level 19 upward get logged in the Application log.
25 Same as above, but this terminates the SQL Server service.

Here are some examples;

USE AdventureWorks2014;
GO

-- Warning messages 0 - 10
SELECT COUNT(Color) FROM Production.Product;
GO

-- Errors from 11 - 16
-- (FRM is an intentional misspelling of FROM, raising a syntax error)
SELECT * FRM Production.Product;
GO

-- Throwing an error with the highest severity level
-- Note that 60005 is a user-defined message; it must exist in sys.messages
RAISERROR (60005, 25, 1) WITH LOG;
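In case message 60005 does not exist yet, here is a sketch of registering it with sp_addmessage before raising it; the message text is illustrative.

EXEC sp_addmessage @msgnum = 60005, @severity = 16
 , @msgtext = N'This is a test message.';
GO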