Tuesday, May 3, 2016

[Guest Post] Methods to Fix SQL Server Error 4064 Login Failed For User by Andrew Jackson

This post is written by Andrew Jackson

Overview
SQL Server lets users create their own databases where they can store and retrieve data whenever required. Sometimes, a user may encounter an error while connecting to a database: “Cannot open user default database. Login failed. Login failed for user ‘UserName’. (Microsoft SQL Server, Error: 4064)”. This message appears when a SQL Server login cannot connect to its default database. This post discusses SQL Server Error 4064 and the ways to resolve it.

SQL Server Error 4064 Login Failed For User
Each login on a SQL Server instance is assigned a default database. When a user connects to SQL Server without specifying a database, the default database is used. However, if the default database is unavailable at the time of connection, Error 4064 is raised.



Possible Causes of the SQL Server Error 4064
Some of the causes for the unavailability of the default database are as follows:
  • The database is in single-user mode and the only available connection is already in use
  • The database no longer exists
  • The database is in suspect mode
  • The database has been detached
  • The database is offline
  • The database is set to emergency status
  • The login does not have a user mapped to it in the database
  • The user trying to log in has been denied access
  • The database is part of a database mirror
How to Fix the Error 4064
One solution to avoid the error when the user's default database is unavailable is to log on as a user that can modify logins, and change the user's default database to a database that is currently available for a connection.

Steps for fixing the error 4064 are as follows:
- Before logging in, click the Options button in the corner of the login window.


- Go to the Connection Properties tab of the login dialog box, select ‘master’ in the Connect to database dropdown, changing the default database for this connection to master, and click Connect.



After logging in successfully, run the following T-SQL command for your username and database.

Make sure you replace [test] with your own login name and master with the database you want as the default.
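The command itself was shown as an image in the original post; judging by the sqlcmd method below, it is presumably the ALTER LOGIN statement:

-- Presumed command: replace [test] with your login and master with an available database
ALTER LOGIN [test] WITH DEFAULT_DATABASE = [master];
GO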

Alternative Method for Resolving Error Message 4064
  • In SQL Server 2000 & SQL Server 7

    The osql utility can be used to change the login's default database by following these steps:
    1. In the command prompt window, type ‘C:\> osql -E -d master’ and press Enter.
    2. Type ‘1>sp_defaultdb 'user's_login', 'master'’ at the osql prompt and press Enter.
    3. Type ‘2>go’ at the second prompt and press Enter.
  • In SQL Server 2005 & later versions

    The sqlcmd utility can be used to change the default database by following the steps below:
    1. Go to Start -> Run. Type cmd and press ENTER.
    2. A SQL Server login can use either Windows Authentication, by typing ‘sqlcmd -E -S InstanceName -d master’ in the command prompt, or SQL Server Authentication, with ‘sqlcmd -S InstanceName -d master -U SQLLogin -P Password’
      {InstanceName = name of the SQL Server instance to which the user wishes to connect
      SQLLogin = the server login whose default database has been dropped
      Password = the SQL Server login password}
    3. Type ‘ALTER LOGIN SQLLogin WITH DEFAULT_DATABASE = AvailDBName’ at the sqlcmd prompt and press ENTER.
    4. Now, type ‘GO’ and press ENTER.
Conclusion
This post aimed to guide users in resolving one of the most common errors faced while trying to connect to a database that is unavailable at the time of connection. The possible causes of SQL Server Error 4064 were discussed as well, along with solutions that change the default database to any valid database on the server in order to resolve the error.


Monday, May 2, 2016

SQL Server 2016 will be available on the 1st of June 2016


The Microsoft SQL Server team has announced that SQL Server 2016 will be generally available on June 1, 2016. SQL Server 2016 comes with end-to-end data management, including business intelligence on our data, on any device.

If you want a quick look at What's New, get this PDF.

If you want to know why you should go for Microsoft SQL Server, refer to the images below;

Gartner Magic Quadrant for Operational Database Management System



Gartner Magic Quadrant for Data Warehouse and Data Management Solutions for Analytics


Gartner Magic Quadrant for Advanced Analytics Platforms.


Sunday, May 1, 2016

SQL Server Brain Basher of the Week #041 - SQL Server 2016 - Editions

SQL Server 2016 will be available soon, bringing more and more advanced features that will help us with both operational and strategic activities. Here is this week's question, based on SQL Server editions.

As we know, we generally have two premium editions, two core editions, and a few additional editions;


Considering the main editions, one edition has been removed from SQL Server 2016. Which one is it?

As per the current announcements, the Business Intelligence edition is not included with SQL Server 2016. This may change with official announcements, but currently it is not available.

Saturday, April 23, 2016

How SQL Server handles our requests for modifications

This is nothing new in the latest Microsoft SQL Server, but it is still being discussed and remains unknown or unclear to many database developers. While discussing the SQL Server transaction log with a few people, we covered, as part of it, how SQL Server accepts our requests and modifies records. I thought it would be useful to everyone, hence this post;

What really happens when we send a request to SQL Server? It can be an update or a delete. The request might relate to one record or many. Have a look at the image below;


It starts with the request: we send it either through an application or by connecting directly to SQL Server using something like SSMS. Once SQL Server receives the request, it checks the data pages related to the records. If the required data pages are not in memory (the buffer cache), it loads the relevant pages from disk into memory. Then, remember, it modifies records in the pages in memory, not the pages on disk. That is what the 1st and 2nd steps in the image explain.

The third step is updating the transaction log on disk. Once the page (or pages) in memory are modified, they become dirty pages. SQL Server then writes redo and undo information to the log file. During this update, the related pages are locked until the transaction log is completely updated.

Once the log is updated, an acknowledgement is sent to the application. Note that even though the data files have not been updated, we receive a message saying the records were successfully updated. But we need not worry: even if something happens after we receive the message, SQL Server can recover the committed records, making sure that durability, one of the properties of a transaction, is satisfied.

Later, after one or more transactions, a process called Checkpoint writes all dirty pages back to disk, updating the data files. This is how SQL Server handles our modification requests.
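As a side note, the dirty pages currently sitting in the buffer cache can be observed through a documented DMV; here is a quick sketch:

-- Count dirty (modified but not yet written) pages per database in the buffer cache
SELECT DB_NAME(database_id) AS DatabaseName, COUNT(*) AS DirtyPages
FROM sys.dm_os_buffer_descriptors
WHERE is_modified = 1
GROUP BY DB_NAME(database_id);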

Thursday, April 21, 2016

SQL Server 2016 - Manipulating data in Azure Storage using PolyBase

When PolyBase was opened up to SQL Server editions other than APS or PDW, I tested it with SQL Server 2016 CTP 2.1. It had some issues, but I was able to get it working with some workarounds. I thought to try the same with RC3, since there are many enhancements, and it works fine now. If you are interested in PolyBase, have a look at the post I wrote in June 2015: SQL Server 2016 features: Polybase: SQL Server connector to Hadoop.

What is PolyBase? It is a feature available with SQL Server 2016 that lets us use T-SQL against data stored in Hadoop or Azure Blob Storage, allowing us to combine structured data with semi-structured data. Data warehousing benefits a lot from this, as it reduces time spent on ETL and supports real-time analysis. It can also be used with OLTP databases, and for archiving data as well.

Currently this feature is available with SQL Server 2016 RC3. I have not come across an edition comparison for 2016, so I have no idea which editions will support it in the final release. Note that there are a few prerequisites, such as .NET Framework 4.5 or later and the Oracle Java SE Runtime Environment. Make sure you have installed everything required before installing SQL Server and enabling PolyBase. You can read a post I wrote on the installation: Installing SQL Server 2016 RC1 - Sharing issues.

Generally, we should install the PolyBase feature as standalone PolyBase. If you want to make a collection of SQL Server instances work as a PolyBase Scale-Out Group, then select PolyBase Scale-Out Group during the installation.

PolyBase supports two Hadoop providers: Hortonworks Data Platform (HDP) and Cloudera's CDH, on either Linux or Windows. In addition, it can connect to Azure Blob Storage too.

Let's start with a simple one. I have uploaded a file to my Azure Storage account. This file holds a small number of CDR records; let's see how we can read these records using T-SQL.



In order to access this file, we need to make sure PolyBase is configured properly. Make sure the two new services, SQL Server PolyBase Data Movement and SQL Server PolyBase Engine, are running. In addition, make sure PolyBase is enabled and connectivity is set as below.


For understanding the value that needs to be set with Hadoop Connectivity, read this.
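For reference, the same setting can be changed with T-SQL as well; a sketch, assuming the value 7 (which covered Azure Blob Storage at the time of writing; verify the right value for your provider in the linked documentation):

-- Set Hadoop connectivity (the value 7 here is an assumption; check the documentation)
EXEC sp_configure 'hadoop connectivity', 7;
GO
RECONFIGURE;
GO
-- Restart SQL Server and the two PolyBase services for the change to take effect.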

Let's create a database and configure the data source. Note that we need to create a database scoped credential first, for use with the data source.

CREATE DATABASE Telco;
GO

USE Telco;
GO

-- Create a master key to encrypt credential secret.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'Pa$$w0rd';

-- Create a database scoped credential (Azure Storage).
-- Enter your azure storage primary key as the secret.
CREATE DATABASE SCOPED CREDENTIAL AzureStorageCredential WITH 
        IDENTITY = 'AzureStorageUser'
  , Secret = 'primary key';

-- create the external data source
CREATE EXTERNAL DATA SOURCE DinesqlStorage
WITH
(
 TYPE = HADOOP,
 LOCATION = 'wasbs://cdrcontainer@dinesqlstorage.blob.core.windows.net/'
 , CREDENTIAL = AzureStorageCredential
);


The next step is creating a file format for the file we are going to read. PolyBase supports Delimited Text, Hive RCFile, Hive ORC, and Parquet. Let's create the file format for our file.

-- Create an external file format (delimited text file).
CREATE EXTERNAL FILE FORMAT CDRFileFormat WITH (
        FORMAT_TYPE = DELIMITEDTEXT, 
        FORMAT_OPTIONS (FIELD_TERMINATOR =';', 
                USE_TYPE_DEFAULT = False)
);


The last step is creating an external table that matches the uploaded file, using the file format created above. Read more on it under CREATE EXTERNAL TABLE. Here is the code I used.

-- create the table using file format created
-- and for the file uploaded
CREATE EXTERNAL TABLE CDRData
(
 MSIDN nvarchar(100)
 , IMSI nvarchar(100)
 , IMEI nvarchar(100)
 , [Plan] nvarchar(10)
 , CALL_TYPE nvarchar(20)
 , CORRESP_TYPE nvarchar(10)
 , CORRESP_ISDN nvarchar(100)
 , Duration int
 , [Time] nvarchar(20)
 , [Date] nvarchar(20)
 )
WITH
(
 LOCATION = '/cdrfiles'
 , DATA_SOURCE = DinesqlStorage
 , FILE_FORMAT = CDRFileFormat
);

Now I can simply query the data in my Azure Storage using the table I created. Not only that, I can join this dataset with my other tables and do more analysis. I will add more complex examples in the next set of posts.
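In the meantime, here is a minimal sketch of such queries, using only the columns defined in the external table above:

-- Browse a few rows from the external table
SELECT TOP (10) * FROM CDRData;

-- Aggregate total call duration per call type
SELECT CALL_TYPE, SUM(Duration) AS TotalDuration
FROM CDRData
GROUP BY CALL_TYPE;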


Wednesday, April 20, 2016

SQL Server 2016 - Connecting to Azure SQL Server using SSMS - RC3

Please note that there is an issue with connecting to Azure SQL Server using the latest RC3 build. If you are experiencing the following error, note that it is an issue with the build, not something specific to your system.

An error occurred while creating a new firewall rule. (ConnectionDlg)...


There is no other option: if you need to connect to your Azure SQL server, you have to open the portal and add your IP address using the Firewall Settings.


Tuesday, April 19, 2016

SQL Server 2016 - System-Versioned Temporal Tables

Although the word "versioning" was not used, we have long been maintaining versions of records, or more precisely the history of records, using different techniques. SQL Server offers various ways of maintaining history records, or the changes made to our records, using features like CDC, CT, and optimistic isolation levels. Most of these features do not support actual "versioning" of records, but they can be used to handle different scenarios.

SQL Server 2016 introduces a new feature called System-Versioned Temporal Tables that keeps the entire history of changes made to our records. It records versions of the rows produced by update and delete operations, along with the validity period of each version, allowing us to see not only the current row but also the state of the row at any given time, enabling point-in-time analysis. The feature is based on the ANSI SQL:2011 standard, but the current SQL Server 2016 release does not support everything the standard describes.

Let's write some code and see how it works. The following code creates a database called Sales and a table called Customer in it. The code creates Customer as a temporal table, which requires a few additional elements in the CREATE TABLE statement: two datetime2 columns for maintaining the validity period, plus SYSTEM_VERSIONING = ON and, optionally, a name for the history table. If the name is not given, the system will create one for us.

CREATE DATABASE Sales;
GO

USE Sales;
GO

CREATE TABLE dbo.Customer
(
 CustomerId int identity(1,1) Primary Key
 , FirstName varchar(100) null
 , LastName varchar(100) not null
 , CreditLimit money not null
 , ValidFrom datetime2(0) GENERATED ALWAYS AS ROW START -- datetime2(any precision)
 , ValidTo datetime2(0) GENERATED ALWAYS AS ROW END -- datetime2(any precision)
 , PERIOD FOR SYSTEM_TIME (ValidFrom, ValidTo)
)
WITH (SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.CustomerHistory));

Let's insert three records and query both tables;

-- Inserting records
INSERT INTO dbo.Customer
 (FirstName, LastName, CreditLimit)
VALUES
 ('Dinesh', 'Priyankara', 10000)
 , ('Jack', 'Wilson', 15000)
 , ('John', 'Knight', 3500);

-- Checking records
SELECT * FROM dbo.Customer;
SELECT * FROM dbo.CustomerHistory;


As you can see, the history table does not show any records, and the last two datetime2 columns have been filled automatically. Now let's make some changes. Note that the records were inserted on 14/04/2016 and will be modified on the 16th, 17th, and 19th.

-- Modifying a record - 2016-04-16 01:56:23
UPDATE dbo.Customer
 SET CreditLimit = 12000
WHERE CustomerId = 1;

-- Deleting a record - 2016-04-17 01:57:17
DELETE dbo.Customer
WHERE CustomerId = 3;

-- Modifying the same record - 2016-04-19 01:57:26
UPDATE dbo.Customer
 SET CreditLimit = 20000
WHERE CustomerId = 1;

-- Checking records
SELECT * FROM dbo.Customer;
SELECT * FROM dbo.CustomerHistory;


As you can see, there are three records in the history table: two for CustomerId 1, for the two modifications made, and one for CustomerId 3, for the deletion. This table says exactly how records were changed and when; not only that, it allows us to see the state of a record based on the validity period.

Let's see how we can retrieve records. There are multiple ways of querying: we can simply query the table without considering the history, or we can go through the history using the new clause provided, FOR SYSTEM_TIME. Note the difference between BETWEEN and FROM: BETWEEN includes rows that became active exactly on the upper boundary, while FROM ... TO excludes them.

-- Retrieving the current record
SELECT *
FROM dbo.Customer
WHERE CustomerId = 1;

-- Retrieving for a date
-- Better include time too
SELECT *
FROM dbo.Customer
FOR SYSTEM_TIME
 AS OF '2016-04-17 00:00:00'
WHERE CustomerId = 1;

SELECT *
FROM dbo.Customer
FOR SYSTEM_TIME
 BETWEEN '2016-04-16 00:00:00' AND '2016-04-19 00:38:43'
WHERE CustomerId = 1
ORDER BY ValidFrom;

SELECT *
FROM dbo.Customer
FOR SYSTEM_TIME
 FROM '2016-04-16 00:00:00' TO '2016-04-19 00:38:43'
WHERE CustomerId = 1
ORDER BY ValidFrom;
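
In addition to AS OF, BETWEEN, and FROM ... TO, the clause supports ALL, which returns the union of current and history rows:

SELECT *
FROM dbo.Customer
FOR SYSTEM_TIME ALL
WHERE CustomerId = 1
ORDER BY ValidFrom;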


You can read more on this on MSDN.

Here is the cleanup code if required;

ALTER TABLE dbo.Customer SET ( SYSTEM_VERSIONING = OFF );
DROP TABLE dbo.Customer;
DROP TABLE [dbo].[CustomerHistory];
GO
USE master;
GO
DROP DATABASE Sales;



Saturday, April 16, 2016

SQL Server 2016 New Three Releases: SQL Server RC 3, Management Studio, SQL Server Data Tools

The Microsoft SQL Server team has announced three new releases related to the SQL Server product suite;

  1. SQL Server Release Candidate 3
    This is the last Release Candidate, and it is available for download now.
    Click here to download it and click here to see the release notes.
  2. SQL Server Management Studio April Preview
    A new preview is available with bug fixes and some enhancements. Click here to read more and click here to download it.
  3. SQL Server Data Tools Preview Update
    An update has been released for SQL Server Data Tools, specifically for SQL Server Release Candidate 3. Click here to download the preview.

Friday, April 15, 2016

SQL Server 2016 - Parameter Sniffing can be controlled at database level

SQL Server uses parameter sniffing for creating the best plan for your query (or stored procedure): it sniffs the values passed for parameters and generates the plan accordingly. In most cases this is best for queries and stored procedures, unless the parameter values vary greatly between executions. There was no easy way of enabling and disabling this in previous versions, but trace flag 4136 allows you to disable parameter sniffing at the server level. However, the question is: should we disable it, and why would we?

Let's try to understand this before looking at the setting provided with SQL Server 2016. Have a look at the following code;

USE AdventureWorks2016CTP3
GO
SELECT * FROM Sales.SalesOrderDetail 
    WHERE SalesOrderDetailID = 1
SELECT * FROM Sales.SalesOrderDetail 
    WHERE SalesOrderDetailID = 1000

SQL Server generates two different plans for these two queries because they are ad-hoc statements and two different values are passed to SalesOrderDetailID. The plans were generated based on the values passed, basically using Parameter Sniffing. That is why we see two different plans for the same query.


With stored procedures, this is a bit different. The plan for a procedure is created on its initial execution, based on the values passed for the parameters, and the plan is stored in the cache. SQL Server does not sniff parameter values on subsequent executions to generate the plan again, unless it cannot find the cached plan. This speeds up execution because SQL Server does not need to spend time and resources generating the plan again and again. But remember, it uses the same plan for all values passed, and it may not be the optimal plan for all of them.

CREATE PROC GetOrderDetails @Number int
AS
    SELECT * FROM Sales.SalesOrderDetail 
    WHERE SalesOrderDetailID <= @Number
GO

EXEC GetOrderDetails 1
EXEC GetOrderDetails 1000


Okay, now why would we disable this behavior? If we disable it, SQL Server uses average distribution statistics for generating the plan, which is not the best choice for most implementations. But if you really need a plan that does not depend on the values passed, a sort of average plan for all executions, then you will benefit from disabling parameter sniffing.

Read more on this in my article: Do we need to recompile stored procedures when an index is added or parameter values vary greatly?

As I mentioned above, the trace flag can be used for disabling it, but that applies to all databases. With SQL Server 2016, this setting is available at the database level and can be enabled or disabled at any time.

This is how it displays in the database properties.


Let's disable this for the AdventureWorks database and see how it works.
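If you prefer T-SQL over the properties dialog, the same can be done with the new database scoped configuration:

USE AdventureWorks2016CTP3;
GO
-- Disable parameter sniffing for this database only
ALTER DATABASE SCOPED CONFIGURATION SET PARAMETER_SNIFFING = OFF;
GO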


Let's clear the cache and execute the procedure again and see.

-- Clearing the cache
DBCC FREEPROCCACHE;

EXEC GetOrderDetails 1;
EXEC GetOrderDetails 1000;



As you can see, a plan has been generated without considering the values passed, and it is being reused. If you think your code benefits from this, then this new option can be applied to your database; otherwise, keeping the default is best.

Thursday, April 14, 2016

Installing SQL Server 2016 RC1 - Sharing issues

Just sharing my experiences with installing SQL Server 2016 RC1;

If you are planning to install SQL Server 2016 with PolyBase support, make sure you have installed Oracle JRE 7 or higher; otherwise you will face this issue;


It can be downloaded at: http://www.oracle.com/technetwork/java/javase/downloads/index.html

In addition to that, you might get time-out issues during installation. In my case, though it occurred several times, retrying worked for me and the installation completed successfully.



Wednesday, April 13, 2016

I renamed my computer. Will my SQL Server instance work? Should I do something about it?

This is something we always worry about: what happens if the host machine is renamed? After renaming, can I connect to my SQL Server instance using the new host name, or do I have to re-install SQL Server, or make some configuration changes? Let me share what I just did.

My computer name was initially set to DESKTOP-QOKBL3L because I forgot to set the name properly when installing the OS. Not only that, I had installed SQL Server 2012 as a named instance too. After realizing the computer name was not what I wanted, I changed it to DINESH-HM-LAP01. Now, can I connect to my SQL Server instance?

I just tried with the old name as I used to;


As you see, it did not allow me to connect. Then I tried with my new computer name;



And I was able to connect to my instance, yes, without doing anything. But does that mean I do not need to do anything at all?

Let's check the server name using T-SQL;
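The check was done with a query along these lines:

-- @@SERVERNAME reads sys.servers metadata; SERVERPROPERTY reads the current name
SELECT @@SERVERNAME AS NameFromMetadata
 , SERVERPROPERTY('ServerName') AS CurrentName;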


As you can see, not all metadata is updated, which means it needs a manual update. To update the metadata, we need to drop the server and add it again. After that, a service restart is required; once done, if the above query is run again, both values will show the new name instead of the old one.

sp_dropserver 'DESKTOP-QOKBL3L\SQL2012';
GO
sp_addserver 'DINESH-HM-LAP01\SQL2012', local;
GO

For more info on this, read this article in MSDN: https://msdn.microsoft.com/en-us/library/ms143799.aspx

Wednesday, April 6, 2016

Point-In-Time restoring with Azure SQL Database

We know that point-in-time restoration is possible with SQL Server, and it is one of the useful ways of recovering data in a disaster-recovery situation. This requires a backup that contains the data to be recovered. If no suitable backup is available, we may still achieve it by taking a backup of the database, perhaps a tail-log backup.

However, Azure SQL Database works differently. For point-in-time restoration, you do not need your own backups if the recovery falls within a certain time period. Azure SQL Database allows you to restore your database to any restore point within 7 days for the Basic tier, 14 days for Standard, and 35 days for Premium. It is a very simple process, and it restores the database under a different name, such as the database name plus a timestamp.

Here are the steps for Point-In-Time restoration;

First, log in to your Azure portal and open the SQL Servers blade. Select the server to open the preferred database server's blade.


Then select the database from the Databases section and open the database's properties. Click the Restore button to open the blade for restoring.


You can see the oldest restore point available for your tier. Select the point in time you want, change other properties if needed (for example, a different server), and click OK to get it restored.

Once restored, it can be accessed via Management Studio just like any other database. One thing to remember: you cannot see both databases by simply connecting to the server, so when connecting, make sure you specify the database name to reach the newly created database.


Sunday, April 3, 2016

Troubleshooting agent jobs

Administrators always try to automate routine tasks, making sure that all required operations are done on time without failures. The main component used for automation is SQL Server Agent, which is used for creating jobs. If you notice that a scheduled job has not run, or that no jobs are running, how do you troubleshoot? Where do you start?



Here are some guidelines for troubleshooting, based on a discussion I had;
  • The first thing to check is whether the SQL Server Agent service is running. Make sure its Startup type is set to Automatic; this ensures the service starts when the server is restarted. If, for some reason, the service is not running and you cannot even start it manually, check the following;
    • Check whether the account assigned to the Agent is valid and there are no issues with its password. The account may have expired, been changed, or been disabled. Check the system log for more details.
    • Check the msdb database. If it is corrupted or offline, the Agent will not start.
  • Check the job history, and whether the last run was successful. There can be issues with the business logic implemented (a query sketch for this follows the list).
  • Check whether the job is enabled. Someone might have disabled it.
  • Check whether the schedule set has expired or is disabled.
  • If proxy accounts are used, check whether they are working properly, including the credentials behind them.
  • Check dependencies. Some steps in the job may run without issues while others fail. Check whether all required items, such as files and folders, along with the permissions for accessing them, are available.
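
As an example of checking the job history with T-SQL, here is a sketch against the documented msdb system tables:

-- Last 20 job outcomes, newest first (step_id = 0 rows hold the overall job outcome)
SELECT TOP (20) j.name, h.run_date, h.run_time, h.run_status, h.message
FROM msdb.dbo.sysjobhistory AS h
JOIN msdb.dbo.sysjobs AS j ON j.job_id = h.job_id
WHERE h.step_id = 0
ORDER BY h.instance_id DESC;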

Saturday, April 2, 2016

SQL Server Developer Edition is free for us

I thought it was part of an April Fools' joke, but it looks like it is not. The Developer edition, which is an exact copy of the Enterprise edition, is freely available for developers; it is available to Visual Studio Dev Essentials members. Here is the link for that: https://www.visualstudio.com/products/visual-studio-dev-essentials-vs?wt.mc_id=WW_CE_BD_OO_SCL_TW_DESQLBenefitAnnouncement_SQL.


This is still for SQL Server 2014; once SQL Server 2016 is released, Microsoft will make 2016 available too.


Friday, April 1, 2016

SQL Server 2016 Release Candidate 2 available for downloading

Haven't you tried this yet? Here are some reasons for downloading and trying it out;

  • In-memory performance and built-in operational analytics for faster transactional queries and deeper insights.
  • Protection and security for data at rest and in motion using Always Encrypted technology.
  • Rich, enterprise-ready high availability and disaster recovery solutions with AlwaysOn technology.
  • Advanced analytics capabilities and rich visualization support on any device.
  • And much more....
Read more on this:


Sunday, February 7, 2016

How to delete current SQL Server error log? My current log is in GBs.

In a continuously running SQL Server environment, it is not uncommon to see a large, continuously growing SQL Server error log. Recently, a discussion came up about this: the environment in question had a very large error log that had consumed a lot of space on the partition, creating a space problem for the data files. Can we delete the error log?

By default, SQL Server maintains 7 error logs: the current log and the 6 most recent archived logs. The current log has no extension, and the archived logs have extensions starting from 1 (most recent first). This setting can be changed, if required, by clicking Configure on the SQL Server Logs folder under Object Explorer - Management. The files are located in the standard SQL Server path: {partition}:\Program Files\Microsoft SQL Server\{Instance}\MSSQL\Log.



Now, if you try to delete the current log, you will get an error because it is in use by SQL Server. Therefore, you need to make SQL Server create a new file as the current log, turning the current one into the most recent archive. The log cycles with every restart of the SQL Server instance: the current log becomes the archive with extension 1, and a new current log is created with no extension. With the default setting, the files with extensions 1 to 5 become 2 to 6, and the old 6th file is deleted. If you cannot restart the service (say, in a production environment), this recycling can be done manually by calling sp_cycle_errorlog. This stored procedure closes the current log and creates a new one. Once it is done, you can delete the large file, which is now the one with extension 1.

EXEC sys.sp_cycle_errorlog;

Wednesday, January 27, 2016

SQL Server 2016 - Installation now recommends the number of files for tempdb

Last year, I wrote a post on the number of data files for tempdb: whether we want multiple files, how many files should be added, and what the recommendation is. Generally, we add one file per core, dropping to 0.25 files per core as the number of cores increases. Here is the link: How to decide the number of data files required for tempdb?

We no longer need to do extra work after installation, because the SQL Server 2016 setup now makes the recommendation for us.


As you can see, a new tab has been added, and it shows the number of files to be added based on the number of cores available in the machine.

Tuesday, January 26, 2016

SQL Server 2016 - Instant File Initialization can be enabled during the installation

This is something I noticed during the installation of SQL Server 2016 CTP 3.2. I am not sure whether it was available in all CTP versions; however, it is an invaluable addition to the installation.


Instant File Initialization allows SQL Server to allocate space without zeroing it out when we create databases, add files, grow files, or restore databases. Generally, these operations take a long time, proportional to the size specified, because the space must be cleaned up by overwriting whatever data exists on disk (even data we have already deleted) with zeros. With Instant File Initialization enabled, that time can be saved almost entirely.

This requires an additional permission (the Perform volume maintenance tasks policy for the service account) and may open a security hole: since the existing data on disk is not wiped out, a smart user (or hacker) has an opportunity to read previously deleted data remaining on the disk. If you can manage this risk, you will always benefit from enabling Instant File Initialization.

This shows how to enable it with previous versions of SQL Server:

Now the best part: we can enable this directly during the installation of SQL Server 2016.

Monday, January 25, 2016

Repeating headers in Table data region created with Reporting Services 2008 R2

This is not about reports created with the Matrix data region, so no column groups are involved. And this may not be relevant to the latest versions, such as Reporting Services 2014 and 2016. While troubleshooting a report hosted in SSRS 2008 R2, I was trying to address an issue related to repeating headers on a particular report. It was a simple report, created using a Table data region with no groups. Repeating headers had been configured as follows;


Although the required settings for repeating headers were in place, it was not working as expected. Unfortunately, all my searches turned up posts related to the Matrix, not the Table, so I could not find the solution easily. Luckily, another person guided me, instructing me to add some elements to the RDL file manually, as shown below.


It started working immediately. What is the reason? As per the research done, this issue arises when the Table has been added manually rather than through the wizard. If it was created using the wizard, the setting above is enough to get headers repeated.

I will be checking this with the latest version and will update this post soon.

Sunday, January 24, 2016

[Guest Post] Database disaster prevention by Olek Nilson

This post is written by Olek Nilson.

Database disaster prevention is one of the greatest pains in the life of every DBA. Database data changes daily, gradually expanding the database's size. Of course, there is no perfect way to prevent database corruption, but there are some practical steps that can help you bring the risk to a minimum.

It's important to say a few words about Microsoft. In spite of the fact that SQL Server is a 100% Microsoft product, the company does not provide any automatic protection for your database, meaning that database protection is completely up to the user.

Below, we would like to discuss cases that can trigger a database disaster.

One of the most common causes of database corruption is hardware failure, for example, a misbehaving disk, CPU, or memory module.

Another thing that can corrupt your database is antivirus software. Please note that after antivirus software is installed, all SQL Server database files should be excluded from its scans.

The next issue that can potentially damage your database is the desire to save free space on the local disk. Based on my personal experience, some users stored all their SQL Server database files in zip or 7zip archives to save disk space. Ultimately, these actions led to database failure. If you desperately need free space, you can use a third-party tool to make compressed database backups and store them online.

Another important thing that will help you prevent database disaster is installing the latest updates for SQL Server and Windows. Otherwise, outstanding issues may cause your system to misbehave.

And do not forget about power failures, which may leave all your databases corrupted.

This is not an exhaustive list of failures that can destroy your database. So what shall we do to prevent database disasters and minimize the risk? Remember that there is no perfect method that will ensure 100% protection for your database. However, the best thing you can do to minimize the impact of database corruption is to make scheduled database backups. Backups cannot prevent corruption, but they can help you restore the database with minimal losses.

It is recommended that all experienced DBAs make regular database backups. This leads to a question: how often should I make backups, and what backup strategy should I employ? Before answering, please consider another question: how much data can you afford to lose? Working as a DBA, I have had a great many databases to restore, and I can say that the more quality backups you have, the less data you will eventually lose.

Presently there are three ways to make database backups:
  1. Using T-SQL commands:

    If you prefer using T-SQL commands, keep in mind that you need to make database backups continually; you even have to create a schedule and carefully stick to it. Making backups with T-SQL commands is perhaps comfortable if the database is not large and grows slowly. But in most cases, changes to the database take place all day and night, so how do you make backups at night? Of course, to solve this issue you can find scripts on the web for making scheduled backups.

    So, if you decide to make database backups with the help of T-SQL, use the following commands:

    -- To make a full database backup use the following command:
    BACKUP DATABASE AdventureWorks TO DISK = 'Z:\SQLServerBackups\AdventureWorks.bak'
    
    -- Use the following command to make a differential backup:
    BACKUP DATABASE AdventureWorks TO DISK = 'Z:\SQLServerBackups\AdventureWorksDiff.bak' WITH DIFFERENTIAL
    
    -- The following command will help you to make a transaction log backup:
    BACKUP LOG AdventureWorks TO DISK = 'Z:\SQLServerBackups\AdventureWorksLog.bak'
    
  2. With the help of SSMS (SQL Server Management Studio):

    Using SSMS for making backups is a little bit easier than making backups with T-SQL commands. You don't need to write commands every time you are going to make a backup; a couple of simple steps, described below, are enough. Admittedly, you can also make scheduled backups with the help of SSMS, but it will take more effort.

    To make a backup please:
    a. Right-click the database you want to back up
    b. Select “Tasks”, then “Back up…”
    c. Choose “Backup type"
    d. Add backup destination
    e. Click “OK”
  3. Or use third-party tools. You can choose enterprise solutions such as SQL Backup Pro from Redgate, something similar to SqlBackupAndFtp, or try out Sqlbak.com if you prefer the SaaS model.

    In my opinion, this is the best way of making scheduled database backups. All you need is to create the job for the scheduled backups and run the application; the third-party tool will then make backups according to the schedule.

    So, if your databases are growing fast and many changes occur every hour, it would be better to choose a "maniacal" backup plan: for example, a full database backup every 24 hours, a differential backup every three hours, and transaction log backups every 15 minutes. Yes, it will perhaps take more time to restore your database should a failure occur, but the percentage of data loss will be minimal.


Monday, January 18, 2016

SSRS: Issue with SSRS Excel export - The number in this cell is formatted as text or preceded by an apostrophe

This is not a new issue; it has been around for a long time, but it still appears in forums and we still experience it. One of the issues I worked on this week was related to it: once a report with a few numeric columns is exported to Excel, some of the numeric columns are formatted as text, showing a message on each cell;


How can this happen? We are 100% sure that the source column used for this report column is numeric and there are no non-numeric values. But the export still treats it as text and formats it as text.

As per the questions that have been published in forums, this issue appears mostly because of additional functionality added to the expression. In my case, it is something like this;


Many have suggested using Immediate If (IIf) for handling this, because it can be used to check whether the value is empty or null and to set the column value to zero before applying any other required functionality; however, it does not work all the time. The other technique that can be used is multiplying the final result by either 1 or 1.0, based on the output format you need.
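A hypothetical expression combining both suggestions (the field name Amount is an assumption, not taken from the original report):

=IIf(IsNothing(Fields!Amount.Value), 0, Fields!Amount.Value) * 1.0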


This solved my issue; you may solve yours either by applying this or by applying IIf. There was another suggestion, making the report column a Currency, but that is not possible for all our columns. What could be the reason for this behavior? One reason could be how Excel identifies the type of a column. Generally, Excel uses the first 8 rows to decide a column's data type; I am not sure whether the same rule is applied here, but something similar certainly is.

Sunday, January 17, 2016

SQL Server Brain Basher of the Week #040 - SQL Server Services Account

Let's talk about the accounts that can be assigned to the services related to SQL Server, and how they can be assigned or changed later. Since SQL Server services are Microsoft Windows services, changes related to the services can be done via the Services desktop app. In addition, SQL Server Configuration Manager is installed with SQL Server and can be used to manage the services related to SQL Server too.

Here is the question for this week, based on this.

Which application should be used for changing SQL Server service accounts?
  1. Services desktop app only
  2. SQL Server Configuration Manager only
  3. Both the Services desktop app and SQL Server Configuration Manager
What would be the answer? Or rather, which would be the best?

Yes, you can use the Services desktop app for changing the accounts related to SQL Server, just like changing the settings of other services. And since we have been given a SQL Server-specific application, Configuration Manager, it can be used too. Although both can be used, there is an advantage to SQL Server Configuration Manager. When a SQL Server service is updated with Configuration Manager, it knows which security groups should be updated, makes the account a member of them, and does it immediately. This does not happen immediately with the Services desktop app until a restart is done. Therefore, it is recommended to use SQL Server Configuration Manager instead of the Services desktop app.

Thursday, January 14, 2016

Basics of data mining

Data mining, or machine learning, is not something new; it has been around for years, and many have used it for various types of analysis and for finding hidden patterns in a data set. However, it is NOT something we always use, apply, and integrate with our solutions, as it is unique and caters to unique requirements. Not only that, the definition many have picked up is just a part of the exact usage of data mining, without realizing its purpose. Let's discuss the basics of data mining and see how we can use it, even with a simple scenario.

What is Data Mining?
Data mining is a special kind of analysis technique that reveals previously unknown or difficult-to-identify connections and correlations in a large dataset, using statistical models. It aligns with data warehousing and business intelligence, as it usually works with large datasets. A business needs to identify useful patterns in the captured data stored in its data warehouse to improve productivity and efficiency while satisfying its customers. For a smooth and successful operation, it is always better for an organization to identify potential customers for product recommendations, predict customers' future behavior, and understand trends, including competitors'. Although this can be done to some extent with the reporting and analysis functionality given by BI client tools, it is difficult to get everything required done efficiently without statistical analysis; that is, data mining.

Data Mining Algorithms

Data mining uses algorithms for analyzing data. There are many algorithms; some are heavily used, and some are used only in specific scenarios. There are algorithms created for the same purpose with tiny differences; similar ones are provided so that the best one can be selected for the scenario. Algorithms can be categorized as below;
  • Classification algorithms: predict one or more discrete variables based on other attributes. Example: predicting whether credit can be granted to a customer or not. Algorithms: Microsoft Decision Trees, Neural Network, Naive Bayes.
  • Regression algorithms: predict one or more continuous variables. Example: predicting sales revenue. Algorithms: Microsoft Time Series, Linear Regression, Logistic Regression.
  • Segmentation or clustering algorithms: group data into multiple segments. Example: grouping customers based on their attributes for a marketing campaign. Algorithms: Microsoft Clustering.
  • Association algorithms: find correlations between different attributes in a dataset. Example: finding products to be bundled for selling. Algorithms: Microsoft Association.
  • Sequence analysis algorithms: find sequences (or order) in a data set. Example: finding common clickstream patterns on a web site. Algorithms: Microsoft Sequence Clustering.

Data Mining Solution
Let's talk about a scenario. Assume that we have a large number of records related to customers, including many general attributes such as name, age, job, and housing, plus one specific attribute stating whether the customer has been granted credit or not. With a dataset like this, we can determine whether a new customer should be granted credit.


As you can see, credit risk has been marked as the target attribute and all other attributes are considered features. We can have this dataset analyzed by an algorithm (or multiple algorithms) using a model. The model specifies the data structure, marking the attributes to be used and the attribute to be predicted, along with a dataset. Generally, the model uses 70% of the dataset (the training set) for identifying patterns and 30% for testing (the testing set). The model, with its algorithms, trains on the data for predicting the column and uses the testing set for checking accuracy. Accuracy can easily be checked because the testing set contains the value to be predicted. If multiple algorithms have been used with the model, the best one can be picked based on the accuracy of testing. Once picked, the model can be marked as a trained model, which can then be used for new customers.

Tools available for Data Mining
There are many tools and applications on different platforms. Here are some of them offered by Microsoft;
  • Data Mining Add-ins for Excel: provides many facilities for performing data mining. This add-in includes Table Analysis, which can be used without knowing much about data mining.
  • Microsoft Analysis Services: allows us to create data structures for data mining. Once created, they can be used for creating reports or analysis.
  • Azure Machine Learning: the cloud offering, which can be easily used for data mining. It allows creating models with drag-and-drop, training them, and then exposing trained models as web services.

Tuesday, January 12, 2016

Making the connection string for the database

If the development tool does not support creating the connection via an interface, if it needs manual entry of the connection string, and if you cannot remember the elements required for making the connection string, you obviously have to search for it. But did you know that there is an old technique for making the connection string?

Here is a simple solution. All you need is to create a UDL (Universal Data Link) file and make the connection to the required database; the connection string for the connection you made is then written to the file. This support comes through Microsoft Data Access Components (MDAC), and it provides a common interface for entering the information required for the connection.

Here are the steps for creating a UDL file and getting the connection string.

Create a text file and change its extension to udl.



Double-click the file to open the interface and enter the information for the connection.


Save it and open the udl file using a text editor; there you have the connection string required.
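
For reference, the saved file contains something like the following (a sketch; the provider, catalog, and server values here are placeholders):

[oledb]
; Everything after this line is an OLE DB initstring
Provider=SQLOLEDB.1;Integrated Security=SSPI;Persist Security Info=False;Initial Catalog=TestDB;Data Source=MyServer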


Monday, January 11, 2016

Reporting Services (SSRS) - Showing both overall and group page numbers

This is nothing new, nor specific to the latest SQL Server 2016 Reporting Services, but I think it is still worth blogging about, as this question still appears in forums. SSRS offers multiple ways to control how pages are divided and printed or rendered, with pagination support. Generally, page numbers are shown for the whole report, without considering any groups added. But occasionally, page numbers per group are required while still showing overall page numbers.


The report shown is created with a dataset prepared from the AdventureWorksDW2014 database. It shows Cost and Sales for Category and Product, and it is grouped by Category. Note that there are two texts showing pagination: one for the overall pagination and the other for the group.

How does this have to be configured?

Here are the steps for configuring it. First of all, a few properties of the group have to be set;


Under the PageBreak property, three properties are displayed: BreakLocation, Disabled, and ResetPageNumber. These can be used for defining explicit behaviors. As you can see, BreakLocation has been used for setting the page break between groups, and ResetPageNumber has been used for restarting the page numbering when the page break occurs. Once these properties are set, Globals.PageNumber and Globals.TotalPages are affected, hence the page numbers are based on the group.

Now, how can we show overall page numbers along with this? For that, we can use another two global properties: Globals.OverallPageNumber and Globals.OverallTotalPages. This is how the display of both group and overall page numbers has been set with the two text boxes;
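
The two text box expressions were shown as an image; a plausible pair, assuming a "Page N of M" format, would be:

="Page " & Globals!OverallPageNumber & " of " & Globals!OverallTotalPages
="Page " & Globals!PageNumber & " of " & Globals!TotalPages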