
Sunday, March 5, 2017

Table-Valued Function or Stored Procedure - SQL Server Brain Basher of the Week #067

Let's talk about a very common scenario that comes up with almost all solutions. If you need to implement business logic using a database object such as a View, a Stored Procedure, or a Function, how do you decide which is best for your requirement?

Here is the Interview Question of the week. Should you use a Stored Procedure or a Table-Valued Function (TVF) for the given scenario?

You have a table called Customer that has a column called CreditLimit. You must write a module that accepts a CustomerId and a Credit Limit, and updates the CreditLimit column with the submitted Credit Limit. You need to make sure that exceptions are handled as well.

In order to answer this question, you should know a certain set of factors that can be used for deciding whether the logic must be implemented using a Stored Procedure or a Function. Let's see each and every factor.

The first thing you should know is that the output of a Stored Procedure cannot be easily combined with the output generated by a SELECT, though INSERT...EXEC is possible. However, this can be easily done with TVFs. In addition to that, Stored Procedures allow us to return multiple results, but a function can return only a single result.

The other main thing is modification. You cannot modify records inside Functions, but it is possible within Stored Procedures. And you can form a statement dynamically and execute it inside a Stored Procedure, but that is not possible with Functions.

Exception handling is not possible with Functions, but detailed exception handling is possible with Stored Procedures.

Considering these factors, the answer is a Stored Procedure, because data modification and exception handling are not possible with Functions.
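Here is a minimal sketch of such a module as a Stored Procedure (the table and column names come from the scenario; everything else is illustrative), using TRY...CATCH for the detailed exception handling that a function cannot offer;

CREATE PROCEDURE dbo.UpdateCustomerCreditLimit
 @CustomerId int
 , @CreditLimit decimal(18,2)
AS
BEGIN
 BEGIN TRY
  BEGIN TRANSACTION;

  -- Data modification: allowed in a procedure, not in a function
  UPDATE dbo.Customer
  SET CreditLimit = @CreditLimit
  WHERE CustomerId = @CustomerId;

  COMMIT TRANSACTION;
 END TRY
 BEGIN CATCH
  -- Exception handling: not possible inside a function
  IF @@TRANCOUNT > 0
   ROLLBACK TRANSACTION;
  THROW;
 END CATCH
END;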

Sunday, February 26, 2017

Azure SQL Database or SQL Server on Azure Virtual Machine - SQL Server Brain Basher of the Week #066

Here is a modern interview question on SQL Server. In addition to general questions on administration and maintenance, Azure-based questions have become common now because many maintain databases either in a VM hosted in the cloud or as a cloud-managed database. Therefore, let's talk about an interview question related to Azure.

Here is the question. What are the factors you can use for determining whether the database has to be set up with a Virtual Machine hosted on Azure or set up as an Azure SQL Database?



There is a very common answer to this question. If we want the entire database to be managed by Azure, covering areas such as Disaster Recovery, Backup, and Performance Tuning, then Azure SQL Database is the best. If we need to manage the instance by ourselves, then we need to go ahead with SQL Server on an Azure Virtual Machine.

There is nothing wrong with that answer, but it is always better to know a few more factors that can be used for deciding. Let's talk about some important factors.

Azure SQL Database is a relational Database-as-a-Service (DBaaS), which falls under the industry categories of Software-as-a-Service (SaaS) and Platform-as-a-Service (PaaS), whereas SQL Server on an Azure Virtual Machine is Infrastructure-as-a-Service (IaaS). Azure SQL Database is completely managed by Microsoft; you do not need to worry about the Operating System, hardware configuration, service packs, or patches. But if it is SQL Server on an Azure Virtual Machine, then we need to manage everything ourselves. That is good if you have a team or a dedicated DBA for managing the database and you need to be its administrator, not Microsoft.

An Azure SQL Database Server is a logical server, though it can hold multiple databases. It is not the same as the Instance that you configure with SQL Server on an Azure Virtual Machine. You know that you can have multiple unrestricted administrators on an instance, which is possible with SQL Server on an Azure Virtual Machine but not with Azure SQL Database.

Another key thing you need to remember is that Microsoft makes most new features available initially with Azure SQL Database before making them available in other versions. That is why you see features such as Threat Detection and Temporal Tables with Retention Policies only available with Azure SQL Database. In addition to that, it has an intelligent database service that recommends possible optimizations for performance tuning.

Do not forget, Azure SQL Database is limited to 1 TB, whereas an instance of SQL Server on an Azure Virtual Machine can consume 64 TB with multiple databases.

We do not need to worry about High Availability with DBaaS, as it is provided with the service. IaaS needs a manual setup with multiple VMs using clustering plus Availability Groups, because the built-in High Availability covers only the VM, not the SQL Server instance.

Azure SQL Database has no hardware or administrative cost, whereas SQL Server on an Azure Virtual Machine has administrative cost. When it comes to license cost, Azure SQL Database is sold as a service based on multiple tiers, and SQL Server on an Azure Virtual Machine comes with a license, though you can bring your own license if you need.

If you need to use other components such as Integration Services, Analysis Services, or Reporting Services, then SQL Server on an Azure Virtual Machine is the only option, though some services, such as Analysis Services, are available as managed services.

There are a few more minor reasons for picking one over the other, but as an interviewee, the ones mentioned above should be known.


Sunday, February 12, 2017

Converting Attributes to Columns - SQL Server Brain Basher of the Week #065

Let's talk about something common to all database management systems rather than something specific to SQL Server. This is the interview question of the week, and let me start it with a conversation I had with an interviewee.


Me: Let's assume that you have been given a document that contains the identified attributes for a particular entity, for example, the Customer entity. The Business Analyst has mentioned that it needs an attribute called Customer Name. How do you take this entity and design your database table?

Interviewee: All I have to do is understand the given attributes and create a table called Customer with relevant columns. We need to make sure that the right data type is selected for each and every attribute, or column.

Me: How do you decide the data type? Let's talk about this specific attribute: Customer Name.

Interviewee: Yes, I will be setting varchar for this attribute, probably with a size of 200. Another thing: if the solution is a multilingual application, I have to use nvarchar instead of varchar.

Me: Good, anything else to be considered on that?

Interviewee: Regarding Customer Name, I think that is all I have to consider. We might have to change the size of it, but 200 is reasonable.

Okay, what do you think? He is not a senior person; that is why I started with the basics, but I expected something additional, something extra.

The first thing you need to understand is that the BA does not know, or does not think about, database design. For the BA, it is all about the business. It is our duty to convert the identified business into logical and physical database designs. So, even though the BA has identified Customer Name as an attribute, or as a single attribute, it does not mean that we need to stick to it. This is where we apply normalization rules. Not only that, we need to think and see;
  • How this attribute is going to be filled
  • How this attribute is going to be read
  • How often the entity is searched based on this attribute.

For example, the Customer entity (or table) might be searched by Customer Last Name. Have we addressed this requirement? Do not say that the BA has not mentioned it; if it is not mentioned, it is always good to get it clarified, but it is something we need to consider.

Consider other common things, whether the BA has mentioned them or not. Customer Name is not going to be a single column. You will surely have at least two columns called First Name and Last Name. This makes sure that your table is designed properly and is ready for standard or most common analysis, as well as holding data efficiently. Remember, breaking an attribute into multiple columns is not only for well-known attributes such as Employee Name or Location. We might break attributes like Product Code or Invoice Number as well. For example, Product Code might be formed using two elements such as a Product Type Code and a Unique Number. In that case, it is better to have two columns for Product Type Code and Unique Number, and another Computed Column for Product Code (if required), as sketched below.
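As a rough illustration of that last point (all names and sizes here are hypothetical), the Product table could be designed like this;

CREATE TABLE dbo.Product
(
 ProductTypeCode char(3) not null -- element 1 of the code
 , UniqueNumber int not null -- element 2 of the code
 , ProductName varchar(200) not null
 -- Computed Column that forms the full Product Code when required
 , ProductCode AS ProductTypeCode + RIGHT('00000' + CONVERT(varchar(5), UniqueNumber), 5)
);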

Makes sense? Although this is a simple thing, many miss it, hence DO NOT FORGET SIMPLE THINGS AND FUNDAMENTALS RELATED TO DATABASE DESIGN :).



Sunday, February 5, 2017

On-Premises or Cloud - SQL Server Brain Basher of the Week #064

Here is the Brain Basher of this week.

During interviews, questions on deciding the hosting place for a planned database are very common. Sometimes we ask for the best place for a database given certain criteria, but sometimes we ask how you would decide considering general factors.

If you were asked the below question, what would be your answer?

"You have to create a database for one of the business processes related to the company. Hosting environment for the database is still undecided. We can either create it as an Azure SQL Database or can use the existing On-Premises SQL Server for hosting this database. What do you think?"

This is the Interview Question of the week.

Since there is no specific requirement, particularly on the business need, most of the time the interviewee gets stuck in certain areas. However, as per my experience, it would be great if the interviewee could explain the factors related to making such decisions, along with pros and cons. Let's list out possible factors that influence the decision;
  • Stable Connection - How will this database be used? Frequently, with heavy transactions, or infrequently, with fewer transactions? This is very important because if the database is an OLTP database with heavy transactions and the business depends on it, you need a stable, high-speed internet connection. If the company has no such facility and cannot satisfy the speed and bandwidth required, it is better not to move into the cloud. However, if getting a high-speed internet connection is a plan that will be executed soon, hosting in the cloud should be considered.
  • Sensitive data - Some companies, especially some financial companies, have policies on sensitive data. If the policy says that data should be maintained only on local servers, then hosting the database in the cloud is not going to work. It is not only the policies established by the company; there are also regulations enforced by the government, hence they have to be considered too. We may go for a hybrid version that hosts part of the database in the cloud and the rest on an on-premises server, making sure that sensitive data is not moved out. This is possible with Microsoft SQL Server (an on-premises SQL Server database with files maintained in Azure).
  • Accessibility - How is our database going to be accessed? What we need to make sure is that it can be accessed by the audience who needs it. Users may access it via a web application, using reports, or using tools like Microsoft Excel or Power BI by connecting directly. If we host it in the cloud, can all of them access data as they want? Does it require special software to be installed? Does it require special permissions? Do we have to purchase more licenses? These are the things we have to consider here. If we have to spend more, which is not part of the budget, or it makes the usage complex, we might have to consider on-premises. However, certain scenarios make on-premises hosting complex and costly as well.
  • Size - Is it going to be in megabytes, gigabytes, or terabytes? If you expect that the database will grow into terabytes within a short time period, then it should be hosted on a local machine, as 1 TB is one of the limitations of Azure SQL Database.
  • Security - The most common factor that pushes an implementation away from the cloud. Many think that having the database hosted on a local server is more secure than placing the database in the cloud. However, it is not true. If you go with a good provider like Microsoft or Amazon, there have been no security breaches recorded, and they guarantee the security. Securing the local environment is very difficult; you have to consider security implementations electronically as well as physically, and someone has to continuously watch it, patch it, upgrade it when required, and monitor it. Therefore, experts recommend hosting in the cloud over in-house when security is considered.
  • High Availability and Disaster Recovery - If this is required and you have already implemented it using methods like Mirroring, Clustering, or even a backup/restore strategy, and most importantly it addresses the required RTO and RPO (read more at http://dinesql.blogspot.com/2016/10/determine-backup-strategy-sql-server-interview-question.html), then you can consider a local implementation. However, if you have not implemented anything, then it is always better to move into the cloud, as almost all providers support HA and DR.
  • Cost - No doubt, the initial cost is always very high when it comes to hosting on a local server. You may spend on purchasing servers, software licenses, and some extra hardware for speeding up processes. But the initial cost is very low when hosting in the cloud. You may calculate for multiple years and compare, but in most cases, hosting in the cloud is cheaper than hosting on an on-premises server.
  • Maintenance - If it is local, you need someone to do this. Installing required service packs, checking space, performance tuning, monitoring... many things. If it is the cloud, most of these are addressed by the provider and only a few need our attention. For example, Azure SQL Database suggests enhancements on indexing and statistics, and we can use these recommendations for performance tuning, which saves time and resources. So, if you do not have a dedicated person for performing these, go for the cloud.
  • Features and Functionalities - You may need to implement some processes that need a certain set of functionalities. For example, if you need to send an email from the SQL Server database, you need to use the Database Mail feature. If you need to execute business logic at a scheduled time, you need to use SQL Server Agent. You know that these features are not available with Azure SQL Database. Not only that, it does not support all T-SQL commands, and standard partitioning among multiple volumes is not available (with Azure SQL Database, partitioning is possible but volumes cannot be chosen; if needed, federation can be considered). Therefore, we need to think about the features and functionalities that will be used with the database and then decide whether we can host it on a local server or in the cloud.
There are more, but an interviewee should talk about at least the mentioned factors. That makes sure that the interviewee is smart and knowledgeable on the subject, not just a person who implements anything when asked.


Sunday, January 8, 2017

Table Design Considerations - SQL Server Brain Basher of the Week #063

Here is the Brain Basher, or the interview question, of the week. This is something every database engineer should know, and it is not limited to database administrators.

What would be the considerations you make when you are asked to create a table for holding data?

Let me elaborate the question a bit more. How do you start designing the table when you are asked to design it for a particular entity? If you are an ordinary engineer who thinks traditionally, your answer could be;



"Need to check the requirements and create a script with relevant columns, and apply normalization as need."

Do you think that the answer is similar to cartoon shown above? 

Being an interviewer, I prefer to hear something more, something that explains the interviewee's experience. Let's point out some valuable considerations we have to make when a table has to be designed;
  • Read the attributes identified by the BA and understand the business requirement - yes, you may think this is not something we should be concerned with, but trust me, you need to know it.
  • Analyze and see whether this needs to be a normalized or denormalized table. Based on the requirement, decide the level, i.e., how far we need to go through the normalization rules.
  • Decide the data types of the attributes and their sizes. We need to pick the best, thinking of the present and the future. Should we think about past data as well? Yes, we should, if we need to load legacy data.
  • Consider the datetime format and whether it needs to hold Unicode characters.
  • Analyze whether it needs partitions and whether the partitions need to be distributed among multiple filegroups (preferably on multiple volumes).
  • Constraints required: in addition to the Primary Key, we may add additional constraints such as Foreign Keys, Unique Keys, and some rules.
  • Whether an Artificial or Surrogate Key is required.
  • The number of indexes required, whether we need Columnstore or not, the order of them, and whether they need any filters.
  • Whether compression is required.
There are a few other minor things as well, but these points show that the interviewee is smart and knowledgeable. Remember, it is not just listing these points; knowing the details of them is a must.
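As a rough sketch only (names, sizes, and choices are illustrative, not a definitive design), here is how several of those considerations surface in an actual CREATE TABLE statement;

CREATE TABLE dbo.Customer
(
 -- Surrogate Key consideration
 CustomerId int identity(1,1) not null constraint PK_Customer primary key
 -- Unicode consideration: nvarchar because multilingual data is expected
 , FirstName nvarchar(100) not null
 , LastName nvarchar(100) not null
 -- Constraint consideration: a business rule enforced via CHECK
 , CreditLimit decimal(18,2) not null constraint CK_Customer_CreditLimit check (CreditLimit >= 0)
 -- Data type consideration: datetime2, thinking of present and future data
 , RegisteredDate datetime2(0) not null
)
WITH (DATA_COMPRESSION = PAGE); -- compression consideration

-- Index consideration: the table is often searched by Last Name
CREATE NONCLUSTERED INDEX IX_Customer_LastName ON dbo.Customer (LastName);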

Sunday, December 25, 2016

Automate adding common objects to new databases when they are created - SQL Server Brain Basher of the Week #062

I thought to share a specific conversation from one of the interviews I did last month for a DBA role. The interview was done over a Skype call, as the interviewee was in a different place. We were having a conversation on the past experiences of the interviewee, and I learnt that he had worked for a company that maintains many databases for many clients, and of course, he was explaining the work he had to do.

Here is the Brain Basher of the week based on the conversation I had. What is the best way of adding common objects to new databases when they are created?

Let me explain this by adding the conversation we had on it (note that I have obtained permission from the client and the interviewee for adding this small part of the interview to my blog; this is not the exact conversation we had, I have slightly adjusted it for writing purposes). Let's call the interviewee Jack.

Me: Okay Jack, since you have worked with a company that maintains a large number of databases for clients, I am sure that you have many common objects like Tables and Stored Procedures in all databases.

Jack: Yes Dinesh. There is a set of Tables and Procedures that we need to create whenever a new database is created for a new client.

Me: How do you do it? Are you adding them manually? Or have you done some automation on it?

Jack: I had automated it, yes, up to some extent. I have a script for creating a database, and that script not only has the database creation part, it also has the creation code of all other required objects. So, all I had to do was just run this script with a few modifications for setting up a new database.

Me: Great. Do you know any other technique or method to make sure that all required common objects are available in a newly created database, without adding them through the script?

Jack: There may be, but since I was comfortable with what I had, I did not look for anything else. I had no issue with the ready-made code.

Me: Agreed, but what if I need to make sure that even I can create a database in your environment with all common objects, without your script? Or assume that one of your colleagues needs to create a new database for a new client while you are absent and he cannot find your script.

Jack: Yes, we experienced that several times, and I accept that I did not have a document that explains how to do it :). But it was not a major issue at that time.

Me: Yes, it is always great to maintain a run-book (read more on Run Book: SQL Server Brain Basher of the Week #021 - Documentation). Anyway, we can use some components of SQL Server for handling that. Any idea how to use existing SQL Server components to automate some of this without having a script?

Jack: Like SQL Server Agent, using a job?

Me: How does SQL Server start the database creation when we ask it to create one for us?

Jack: I am not sure about the internal operation, but it requests space from the OS based on the way we have configured the files, does some other operations, must be, and completes it.

Me: It actually takes a copy of a template and uses it for creating the database. Do you know where the template is stored?

Jack: Must be in one of the system folders. Sorry Dinesh, I am not sure where it is actually located.

Me: No problem. It is actually one of the system databases. It is the Model database.

Jack: Yes, I have heard about it but did not check much on that.

Me: You can actually add all your common objects to the Model database and keep them there. Then, when anyone creates a database, since it takes a copy of the Model database, the newly created database will automatically contain all the required objects. So, we do not need to add them manually or via a script.

Jack: Yes, I have missed that part...........

Conversation continued...........

Remember this: Model is the template, and it is used by SQL Server to create databases when we ask.
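Here is a small sketch of the idea (the table and database names are illustrative): create a common object in Model, and it appears in every database created afterwards;

USE model;
GO

-- A common table every new client database should contain
CREATE TABLE dbo.AuditLog
(
 AuditLogId int identity(1,1) primary key
 , LogDate datetime2(0) not null default (SYSDATETIME())
 , LogMessage nvarchar(1000) not null
);
GO

-- Any database created from now on gets a copy of it automatically
CREATE DATABASE NewClientDatabase;
GO

USE NewClientDatabase;
GO
SELECT name FROM sys.tables; -- dbo.AuditLog is already there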




Sunday, November 27, 2016

TRUNCATE Table - SQL Server Brain Basher of the Week #060

Here is a simple question I asked a candidate I interviewed last week. This was not the exact topic discussed, but while discussing something else, I asked it based on one of his explanations. It is not about the differences between TRUNCATE and DELETE or the advantages of one over the other; this is what I asked;

Is the TRUNCATE statement a DML statement or a DDL statement?

Unfortunately, I did not get the answer I needed. First of all, let's try to understand what DML and DDL are;
  • DML - Data Manipulation Language - most action queries come under this; the most common ones are INSERT, UPDATE, and DELETE. There is another category called DQL - Data Query Language - which is the category for commands like SELECT, but some consider SELECT a part of DML.
  • DDL - Data Definition Language - commands that modify objects in the database come under this. Statements like CREATE TABLE and CREATE USER are examples of this.
Now, is TRUNCATE DML or DDL?

It is a DDL command because it does not work with the data row by row (like the DELETE command does). Yes, it deletes all records, but it actually performs an operation similar to a drop/re-create operation, hence it is considered a DDL command.

Remember, TRUNCATE does the following (a short demo follows the list);
  • Removes all data pages and no empty-pages are left behind.
  • Acquires only table or page level locks.
  • Resets identity property.
  • Needs Alter permission.
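Here is a quick demo of the identity reset, one of the behaviors listed above (the table is illustrative);

USE tempdb;
GO

CREATE TABLE dbo.TruncateDemo
(
 Id int identity(1,1) primary key
 , Value varchar(10) not null
);

INSERT INTO dbo.TruncateDemo VALUES ('A'), ('B'), ('C');

TRUNCATE TABLE dbo.TruncateDemo;

INSERT INTO dbo.TruncateDemo VALUES ('D');

-- Returns 1: TRUNCATE reset the identity back to the seed
-- (DELETE would have continued from 4)
SELECT Id FROM dbo.TruncateDemo;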

Sunday, November 20, 2016

Quickest way of creating a copy of existing database without data - SQL Server Brain Basher of the Week #059

Sometimes we need to take a copy of a database without data and have it as a new database. This either requires generating the entire script and running it with a new database name, or taking a backup of the database, restoring it, and deleting the data. Both require a reasonable amount of time based on various conditions.

Here is the Brain Basher of the week on it.

What is the quickest way of creating a copy of an existing SQL Server 2016 database without data?

Both techniques mentioned above work fine but add complexities. If you have installed SQL Server 2016 Service Pack 1, this can be easily achieved using the DBCC CLONEDATABASE command.

Here is the way of creating a copy with a new name.

DBCC CLONEDATABASE(AdventureWorks2014, CloneAdventureWorks2014)
WITH NO_STATISTICS,NO_QUERYSTORE ;

Note that the database is created as a read-only database;


However, changing the database's Read-Only property to False makes it a read-write database.
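This can be done through the database properties window or with a simple statement like below;

-- Make the cloned database read-write
ALTER DATABASE CloneAdventureWorks2014 SET READ_WRITE;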


For more info on this command, read: DBCC CLONEDATABASE is now working with SQL Server 2016.

** Note that this does not take all objects existing in the database when copying the schema. Therefore, if your database contains many different types of objects (such as Natively Compiled Stored Procedures), this is not the best way of getting a copy of your database.

** Although we can get the schema from this, this functionality is primarily given for getting a copy of the database for troubleshooting, NOT FOR CREATING A DATABASE SCRIPT OR A COPY OF THE DATABASE.

For more info, read: https://support.microsoft.com/en-us/kb/3177838

Sunday, October 23, 2016

How to replace empty string with NULL - SQL Server Brain Basher of the Week #058

Everyone asks how to check whether a value is NULL and how to replace it with a different value if it is NULL, because it is something we commonly do in our applications. It can be easily done and many are aware of it: all we have to do is use the ISNULL function for checking, and it replaces the value if it is NULL. However, I was asked a similar question today that is a bit different;

How can we check whether a value is empty and replace it with NULL if it is empty?

This is uncommon but important, hence it is the Brain Basher of this week.

This can be achieved using the NULLIF function. It does not check for NULLs, but it returns NULL if both supplied values are equal. See the code below;

DECLARE @Variable int = 0

-- This returns NULL as both @Variable and 0 are equal
SELECT NULLIF(@Variable, 0)

-- This returns 0 because the values supplied are not equal
-- and it returns the first expression
SELECT NULLIF(@Variable, 1)
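Applying the same function to the original question, an empty string can be replaced with NULL like below (the column name in the commented line is illustrative);

DECLARE @Name varchar(50) = '';

-- This returns NULL because @Name equals the empty string
SELECT NULLIF(@Name, '') AS NameOrNull;

-- Typical usage against a table
-- SELECT NULLIF(MiddleName, '') AS MiddleName FROM dbo.Customer;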




Sunday, October 16, 2016

Memory-Optimized table memory limit - SQL Server Brain Basher of the Week #057

Let's talk about memory-optimized tables as part of the Brain Basher this week. This question was raised during one of my workshops, and it is an obvious question that comes to mind when planning memory-optimized tables. Here is the question;

How much memory can a memory-optimized table consume at most?

If the version is SQL Server 2014, then there is a limitation of 256 GB on durable tables and no limitation on non-durable tables. However, this has been addressed with SQL Server 2016, and now there are NO LIMITATIONS on either durable or non-durable tables.
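For reference, here is a minimal sketch of a durable memory-optimized table (it assumes the database already has a MEMORY_OPTIMIZED_DATA filegroup; the names are illustrative);

CREATE TABLE dbo.ShoppingCart
(
 CartId int not null primary key nonclustered hash with (bucket_count = 1000000)
 , CreatedDate datetime2(0) not null
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
-- DURABILITY = SCHEMA_ONLY would make it a non-durable table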

You can read more info on this enhancement at: https://blogs.msdn.microsoft.com/sqlserverstorageengine/2016/06/23/increased-memory-size-for-in-memory-oltp-in-sql-server-2016/

Read some posts made on memory-optimized tables;
In-Memory OLTP: Inserting 3 millions records in 3 seconds
How to check tables and stored procedures for in-memory migration


Sunday, October 9, 2016

Determine the backup strategy - SQL Server Brain Basher of the Week #056

Determining the backup strategy for a SQL Server database is not an easy task, as there are many things to be considered. As per my experience, many implement a strategy without considering anything; for example, just a daily full database backup, even without asking about the business requirement.


Generally, we consider two things: RTO and RPO. RTO (Recovery Time Objective) refers to the time it takes to recover. In other words, how long the business can survive without the system. RPO (Recovery Point Objective) refers to the acceptable data loss at a recovery. In other words, the amount of work the business accepts to repeat for a certain time period.

This week's question is based on backup strategies. Here is the question:

You have a database which requires a recovery (or backup) strategy based on the given requirements;
  1. Size of the database is 25 GB.
  2. The database must never be unavailable for longer than 8 hours.
  3. At recovery from a failure, no more than 1 hour of transactions may be lost.
  4. Total backup throughput is 100 MB/minute.
  5. Total restore throughput is 80 MB/minute.
  6. Average rate of changes during office hours is 500 MB/hour.
  7. Average amount of new data is 200 MB /day.
  8. Office hours - 09:00 - 17:30.
What would be your plan for the given requirements?

How do you start? You should start with #2 and #3 because they talk about RTO and RPO. Whatever backup strategy we plan, we must make sure the restoration can be done within 8 hours. Not only that, we have to make sure that we do not lose more than 1 hour of data. Okay, let's work on this.

We cannot survive without a full database backup. Let's calculate and see how much time it needs for backing up and restoring the database.

Database size = 25 GB = 25,600 MB

Backup throughput = 100 MB/minute
Time it takes for backing up the database = 25,600/100 = 256 minutes = 4 hours 16 minutes

Restore throughput = 80 MB/minute
Time it takes for restoring the database = 25,600/80 = 320 minutes = 5 hours 20 minutes

As you see, it takes just over 4 hours for backing up, and we can easily set that up during off-peak hours. However, it takes just over 5 hours for restoring, which is something we need to consider.

Since the RPO is 1 hour, we need to make sure that the database's Recovery Model is Full and that it gets a Transaction Log backup every hour. Let's calculate the time it needs.

Changes per hour - 500 MB
Time it takes for backing up the log = 500/100 = 5 minutes
Time it takes for restoring the log = 500/80 = 6.25, rounded up to 7 minutes

Let's see whether we can have a strategy like this: a daily full backup and hourly Transaction Log backups during office hours. If we plan the full backup at 00:00, it can complete before 05:00. And we can have hourly Transaction Log backups starting at 10:00 and ending at 18:00. In the worst case, if something happens at 17:30, we have to restore the full backup and 8 log backups. Let's calculate the time and see.

Restoration time for the full database backup = 5 hours 20 minutes
Restoration time for the log backups = 7 x 8 = 56 minutes
Total duration = 5 hours 20 minutes + 56 minutes = 6 hours and 16 minutes.

Since we have been given an RTO of 8 hours, the strategy we checked works fine with it. However, we need to think about growth as well. We add approximately 200 MB daily, approximately 1 GB per working week. This increases the time it takes for the full database backup and restore. Assuming the database becomes 30 GB after a month;

Time it takes for restoring the database = 30,720/80 = 384 minutes = 6 hours 24 minutes.

If we recalculate the time;
Total duration = 6 hours 24 minutes + 56 minutes = 7 hours and 20 minutes.

Now this has come closer to the given RTO. As you see, within a few months, this strategy will not be able to handle the requirements. If you see something like this, then you cannot handle it with traditional backup types; you might have to go for either disk-level backups or partial backups. Let's talk more about that in another post.
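For completeness, the strategy we checked boils down to two commands, typically scheduled as SQL Server Agent jobs; here is a minimal sketch with illustrative database names and paths;

-- The Recovery Model must be Full for hourly log backups
ALTER DATABASE SalesDB SET RECOVERY FULL;

-- Daily full backup, scheduled at 00:00
BACKUP DATABASE SalesDB
TO DISK = 'D:\Backups\SalesDB_Full.bak' WITH INIT;

-- Transaction Log backup, scheduled hourly between 10:00 and 18:00
BACKUP LOG SalesDB
TO DISK = 'D:\Backups\SalesDB_Log.trn';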

Sunday, October 2, 2016

SQL Server Brain Basher of the Week #055 - Security

You are the Database Administrator and you maintain a database with the following characteristics;

  • Database contains sensitive data.
  • Database authentication is set to Mixed Mode.
  • Users can access only the tables they are authorized to access. Authorization is completely handled within SQL Server.
  • Tables related to finance data can be accessed only by users who are in the Finance User-Defined Database Role.

The Sales team uses their own application to access the database. There is a new requirement from the Sales team, as below;
  • Sales team needs to see some financial data through the application.
  • Sales team members should NOT be able to execute impromptu SELECT, INSERT, UPDATE, and DELETE statements against financial tables.
  • The application will prompt for a second password when the screen related to finance data is opened. The salesperson needs a supervisor to input the second password for seeing the data.
Here is the question of the week;

Being a DBA, what is your suggestion to achieve the above requirement?

There are a couple of things you need to consider. First, although users access the database via an application, the users' credentials are used for authorization, meaning users need permissions on the finance data in order to access it. But, secondly, if users are granted permissions, they will be able to execute impromptu queries against the tables, which should be avoided. How can you grant permissions on these specific tables without giving explicit permissions to users or roles?

If you are a smart DBA, now you should be thinking about Application Roles.

An Application Role is a database-level role whose permissions can be used by an application via the role, without setting permissions for users or other roles. You create it with a password, grant permissions as needed (in this case, let's say on the two tables related to finance), and then activate the Application Role when the two tables need to be accessed.

Since the supervisor enters the password, it can be used as the Application Role password. Once the application accepts the password, it calls a special stored procedure called sp_setapprole with the name of the Application Role and the password. This creates a new security context for the application connection, and the current user's context becomes inactive. The user's security context will only come back once sp_unsetapprole is called; until then, only the Application Role and public tokens are available for accessing resources.


Note that once the Application Role is active, the user can access other databases only via the guest account.
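Here is a minimal sketch of the mechanism (the role, password, and table names are illustrative);

-- Create the role and grant it only the finance tables
CREATE APPLICATION ROLE FinanceAppRole WITH PASSWORD = 'SupervisorP@ssw0rd1';
GRANT SELECT ON dbo.FinanceSummary TO FinanceAppRole;

-- The application activates the role with the supervisor's password
DECLARE @cookie varbinary(8000);
EXEC sp_setapprole 'FinanceAppRole', 'SupervisorP@ssw0rd1'
 , @fCreateCookie = true, @cookie = @cookie OUTPUT;

-- ... the finance screen reads dbo.FinanceSummary here ...

-- Revert to the user's own security context
EXEC sp_unsetapprole @cookie;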

For more info on Application Role, read this: https://msdn.microsoft.com/en-gb/library/ms181491.aspx

Sunday, September 25, 2016

SQL Server Brain Basher of the Week #054 - Guest Account

Here is the question of the week. I am sure that you have seen the guest user account in your databases. This week's question is based on it;

What is the Guest User Account, and what is its usage?

The Guest User Account is a system user account that is available in all user-defined databases, and it cannot be dropped. By default, it is disabled; it can be enabled using the following code.

GRANT CONNECT TO guest;

This user account has no related Login Account. SQL Server uses this account when;
  • A user connects to SQL Server using a Login and tries to access a database in which he has no User account created.
  • The Guest User Account is enabled.
In a way, this is good because it allows users to connect to SQL Server using Logins and access databases without creating Users, but it might introduce some security threats as well. Therefore, it is better to keep it disabled. If you need to enable it, make sure it has the least privileges on resources in the database.
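Disabling it again is simply the reverse of the code above;

REVOKE CONNECT FROM guest;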

Sunday, September 18, 2016

SQL Server Brain Basher of the Week #053 - Data Archiving

We archive data when we see that a database has many records that never get updated and are rarely read. In the traditional archiving way, we take the data out and store it in a different database, or store it as text files. In most cases, instead of storing the exact structure of the records, we summarize them and store the summary. However, the traditional archiving process is not sufficient for modern requirements.

There are instances where you need to read old, archived data along with new data. There are instances where you need to analyze old data to get some information. Unfortunately, with the traditional method, it takes a long time to set up and read the data, and in some cases, not all required information is available.

Here is the question for the week.

What is the new SQL Server 2016 feature that supports data archiving, addressing new requirements?

The new feature is Stretch Database. This feature allows you to migrate old data into Azure and still be able to query it. It does not require any changes to applications or code already written; the end user sees it as if they are querying an on-premises data table.

When configuring a table as a Stretch Table, it can be set to "stretch" either the entire table or part of it, using an inline function that filters data. Once enabled, this feature sets some restrictions on the table. You cannot UPDATE/DELETE archived data, and uniqueness is not enforced for archived data via UNIQUE or PRIMARY KEY constraints. For more info on the limitations, read: https://msdn.microsoft.com/en-us/library/mt605114.aspx.

You can see whether your table can be stretched or not using the Stretch Database Advisor in Microsoft SQL Server 2016 Upgrade Advisor. If the Advisor says that the table can be stretched, then you can go ahead and stretch it.

The steps for making a table stretched are simple.
  1. Check and see whether the table can be stretched, using the Advisor.
  2. Right-click on the table, click on the Stretch menu item and follow the pages in the wizard.
  3. You can either select the entire table or click on Entire Table for filtering. If you click on it, you get a screen like the one below. You can set the filter function as you want (see the image below).
  4. The next screen is for connecting to Azure. Enter your credentials to log in.
  5. There is an option for SQL Server selection. You can either create a new server or use an existing one. Fill in the necessary details and submit.

Now your table is stretched.
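Behind the wizard, the partial stretching is based on an inline predicate function; here is a hedged T-SQL sketch with illustrative names;

-- Inline function that marks rows older than 2015 as eligible for migration
CREATE FUNCTION dbo.fn_StretchFilter (@OrderDate datetime)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN SELECT 1 AS is_eligible WHERE @OrderDate < CONVERT(datetime, '20150101', 112);
GO

-- Stretch the table partially, using the filter
ALTER TABLE dbo.Orders
SET (REMOTE_DATA_ARCHIVE = ON (
  FILTER_PREDICATE = dbo.fn_StretchFilter(OrderDate)
  , MIGRATION_STATE = OUTBOUND));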






Sunday, September 4, 2016

SQL Server Brain Basher of the Week #052 - Identity property

The Identity property is not something new, and it has been widely used with most tables when a surrogate key is required. It can simply be used with a numeric data type, and it inserts a value automatically when an insert operation is done, based on the seed and increment set. The Identity property is used with the CREATE TABLE statement like below;

USE tempdb;
GO

CREATE TABLE dbo.Customer
(
 CustomerId int identity(1,1) primary key
 , CustomerName varchar(200) not null
);

Now the question is, where else do you see the keyword Identity with similar functionality?

Many assume that the functionality we get from Identity is only available with the CREATE TABLE statement. But there is a function, named Identity, that offers the same functionality.

See the following code. It is a SELECT INTO statement that allows us to get some records from a table and create a new table with the result set. You can use the Identity function with the SELECT INTO statement for adding a column and populating its values as per the seed and increment. The Identity function needs three parameters: data type, seed, and increment. And the good thing is, it adds the Identity property to the column as well.

USE tempdb;
GO

-- This creates a table with PersonId
-- which based on Identity function
SELECT Identity(int, 1, 1) As PersonId, FirstName, LastName
INTO dbo.Person
FROM AdventureWorks2014.Person.Person;


Sunday, August 21, 2016

SQL Server Brain Basher of the Week #051 - SQL Server 2016

SQL Server 2016 has been released and many have started using it. Here is a simple question on SQL Server 2016;

True or False: SQL Server 2016 is available for both x32 and x64 architectures.

Although earlier versions of SQL Server supported both x32 and x64 architectures, SQL Server 2016 is supported on x64 processors only. It no longer supports x86 processors (the x32 architecture).

Sunday, August 7, 2016

SQL Server Brain Basher of the Week #050 - SSRS Client Printing

Here is the weekly question on SQL Server for improving your knowledge for facing interviews. This time, let's talk about an area of Reporting Services.

SQL Server Reporting Services is a great platform for all types of reporting solutions, and its Web Portal (formerly Report Manager) is one of the tools for managing deployed reports. The Web Portal allows you to view and manage published reports, and it allows you to print viewed reports like below;


Now the question is: how can you disable the printing facility in the portal?

If you have a requirement like that, it can be done through the Reporting Services properties using Management Studio. Simply connect to Reporting Services using Management Studio and open its properties by right-clicking the instance. There is a property called EnableClientPrinting that controls this functionality; you can set it to False to disable client printing.

Once disabled, the Print icon will no longer appear.



Saturday, July 30, 2016

SQL Server Brain Basher of the Week #049 - SSIS Package Store

Let's talk about Integration Services for this week's Brain Basher. SQL Server supports two deployment models, named the Package Deployment Model and the Project Deployment Model. The Package Deployment Model was the initial model available with SQL Server, and it allows us to deploy and manage packages individually. The Project Deployment Model was introduced with SQL Server 2012, and it allows us to deploy all packages as a single unit. This week's question is based on the old model, the Package Deployment Model.

What is the SSIS Package Store?

When deploying with the Package Deployment Model, individual packages can be deployed either to the msdb database or to the file system. There are multiple ways of deploying individual packages. One way is importing packages using Import Package in an SSMS Integration Services connection. When importing to SQL Server using Import Package, if you select the destination as File System instead of the MSDB database, the package is saved in the SSIS Package Store.

The path of the SSIS Package Store is C:\Program Files\Microsoft SQL Server\{version}\DTS\Packages. Let's deploy a package to SQL Server / File System and see how it goes into the SSIS Package Store.

Let's open SSMS and connect to Integration Services;


Once connected, right-click on File System and select Import Package; you need to select the package for importing (or deploying).


Once imported, you can have a look at the relevant installation folder; you should see the package deployed.


Should we use this or not? It depends; I prefer File System, not SQL Server / File System or SQL Server / MSDB. The biggest issue with this store is that there are no instance-specific folders in it if you have multiple instances of the same version. You may encounter some issues with security as well. Therefore, with the older model, File System is better, and if possible, go for the latest way, which is the Project Deployment Model.
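As a side note, if you prefer scripting the import over using SSMS, the dtutil utility can copy a package into the SSIS Package Store; a hedged example (the paths are illustrative);

REM Copies a package from the file system into the SSIS Package Store (DTS)
dtutil /FILE "C:\Packages\SalesLoad.dtsx" /COPY DTS;SalesLoad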



Sunday, July 24, 2016

SQL Server Brain Basher of the Week #048 - Dropping Clustered Index

SQL Server mainly maintains two structures for tables: Heap and Clustered. If there is no clustered index created on the table, the table has a Heap structure; otherwise, it has a Clustered structure.

Once a Clustered Index is created, SQL Server organizes data in the form of a B-Tree, and the data pages related to the table are held at the last level, which is called the Leaf Level. Since the Clustered Index uses the data pages themselves, it is not possible to have another Clustered Index on the same table. Now the question is;

What will happen when the Clustered Index is dropped?

Many think that it is not possible because the index holds the data pages related to the table. But it is possible. If you drop the index, the table goes back to a Heap and will be maintained in the Heap. Here is code that shows it;

-- creating a test table
CREATE TABLE dbo.TestTable
(
 TestTableId int index ix_TestTable_Id clustered
 , TestTableValue1 varchar(100) not null
 , TestTableValue2 char(5) not null index ix_TestTable_Value2 nonclustered
);
GO

-- Inserting test records
INSERT INTO dbo.TestTable 
VALUES 
(1, 'Hello world', 'ABC01')
, (2, 'Hi', 'ABC02')

-- Checks the indexes available
SELECT * FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.TestTable'), NULL, NULL, 'DETAILED');


-- Dropping the clustered index
DROP INDEX ix_TestTable_Id ON dbo.TestTable;

-- Checking the indexes and data after dropping 
SELECT * FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.TestTable'), NULL, NULL, 'DETAILED');
SELECT * FROM dbo.TestTable;


Sunday, July 17, 2016

SQL Server Brain Basher of the Week #047 - Database diagram

This is something I asked during an interview. While we were talking about system databases, I asked the interviewee how to get information related to the tables that exist in system databases. The answer was: "Simple, we can make a database diagram and can use it for understanding structures and relationships". Can we really do that?

Here is the question for the week: can we create database diagrams on system databases?

The answer is simple, but it is not the answer given by the interviewee. It is not possible. SQL Server does not allow us to generate diagrams of, or script, system tables. Even if you try to use Generate Scripts, it will not list the system tables.

See how it appears when trying to script the msdb database. As you can see, there are no options to get even a script generated for the system tables.