Friday, October 9, 2015

Types of Database End Users

There are distinct types of people who engage with database management systems, and End Users are the best known among them. However, when referring to End Users, does it mean all people who work with a DBMS, or just business users?

Theoretically, there are four types of people involved with databases and database management systems: Database Administrators, Database Designers, Application Developers, and End Users. This means we (database professionals) are not part of the End Users group, and End Users work with databases differently.


There are mainly two types of End Users;
  • Naïve users
    These users have no knowledge of database management systems and do not know how to access the database or how to make requests against it directly. Naïve users typically access the database through given GUIs that have readable menu items for opening Windows Forms or web pages to interact with data. A relevant example is a bank executive opening the given interface for entering customer information.
  • Sophisticated users
    This group represents people who know the structure defined in the database to some extent. They have the skills to access the database directly (or through another interface that requires database and SQL knowledge) and they make direct requests to the database for getting the required data. Most smart business users and data stewards fall into this group. Considering modern database implementations, sophisticated users do not access the core database unless there are specific objects (like Views) created specifically for them; instead they access configured databases such as reporting databases, relational and OLAP data warehouses, or data models such as Microsoft Multi-dimensional models, Tabular models, and models designed with Excel + Power Pivot. A minimal sketch of the view-based pattern is shown below.
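Here is a hedged T-SQL sketch of granting a sophisticated user access through a view only; the database, table, and user names are hypothetical:

USE SampleDatabase; -- hypothetical database
GO

-- A view created specifically for a sophisticated user, hiding the core tables
CREATE VIEW dbo.SalesSummaryForAnalysis
AS
SELECT OrderDate, SUM(TotalDue) TotalSales
FROM Sales.OrderHeader -- assumed core table
GROUP BY OrderDate;
GO

-- The user gets SELECT on the view only, not on the core tables
GRANT SELECT ON dbo.SalesSummaryForAnalysis TO BusinessUser1; -- hypothetical user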

Monday, October 5, 2015

SQL Server Brain Basher of the Week #032 - TSQL - Converting from one value to another

With T-SQL queries, converting data between data types is very common. Conversion can happen automatically (implicitly), or we can take control and convert as we want (explicitly). This week's Brain Basher is based on that; here it is;

What are the functions available with SQL Server 2014 for converting data between data types?

SQL Server offers multiple functions for converting data between types. Some are based on the ANSI standard, and some are SQL Server specific with additional functionality. Here are the functions available for converting data.
  1. CAST
    CAST is an ANSI-standard function, hence recommended over SQL Server specific functions. It is a scalar function and returns an error if the data types are incompatible.
  2. CONVERT
    CONVERT is proprietary to SQL Server and provides additional functionality over CAST. This function accepts an optional style number for formatting the returned value. This is also a scalar function.
  3. PARSE
    This is a special function that converts a string to date, time, and number types. The optional parameter, which accepts a culture, converts the value based on it. If it is not supplied, the culture setting of the current user will be used.
  4. TRY_PARSE and TRY_CONVERT
    These two functions work just like PARSE and CONVERT. However, instead of raising a run-time error for incompatible types, a failed conversion returns NULL.
Here is some sample code;

USE AdventureWorks2014;

-- CAST function, converting datetime to date
SELECT SalesOrderId, CAST(OrderDate AS date) OrderDate
FROM Sales.SalesOrderHeader;

-- CONVERT function
SELECT SalesOrderId, CONVERT(char(10), OrderDate, 101) OrderDate
FROM Sales.SalesOrderHeader;

-- PARSE function
SELECT PARSE('01/01/2015' AS datetime2 USING 'en-US') ;

-- TRY_PARSE and TRY_CONVERT function
SELECT PARSE('13/01/2015' AS datetime2 USING 'en-US') ; -- this throws an error
SELECT TRY_PARSE('13/01/2015' AS datetime2 USING 'en-US') ;
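Since item 4 above covers TRY_CONVERT as well, here is a short sketch of the same check done with TRY_CONVERT; styles 101 (US mm/dd/yyyy) and 103 (British/French dd/mm/yyyy) are used to show both outcomes:

-- TRY_CONVERT function
SELECT TRY_CONVERT(datetime2, '13/01/2015', 101); -- incompatible with mm/dd/yyyy, returns NULL instead of an error
SELECT TRY_CONVERT(datetime2, '13/01/2015', 103); -- compatible with dd/mm/yyyy, returns the converted value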

Sunday, October 4, 2015

SQL Server Error Severity Levels

Have you ever read the Severity Level indicated with an error that occurred? Or have you just tried to find the problematic code in order to fix it? It is always better to understand what the error message says and its severity level, because some errors cannot be fixed by us even though they look fixable.

The severity level indicates the seriousness of the error: whether it can be ignored or not, and whether it can be fixed at user level. Therefore we need to understand what it describes.

Severity Level Range - Description
  • 0 to 9 - Informational messages, not errors. The warning for NULL usage in an average is a good example.
  • 10 - Informational message; it indicates a non-severe error too.
  • 11 to 16 - Errors that can be fixed by us, such as syntax issues, deadlocks, and permission-related issues.
  • 17 to 19 - Serious software-related errors that the user cannot correct, such as out of memory or out of space.
  • 20 to 24 - Very serious errors related to either hardware or SQL Server itself. Generally, errors with severity 19 and above get logged in the Application log.
  • 25 - Same as above, but this terminates the SQL Server service.

Here are some examples;

USE AdventureWorks2014;

-- Warning messages 0 - 10
SELECT COUNT(Color) FROM Production.Product;

-- Errors from 11 - 16
-- (FRM is an intentional typo that raises a syntax error, severity 15)
SELECT * FRM Production.Product;

-- Throwing an error with higher severity level
-- Note that 60005 is a user-defined message
RAISERROR (60005, 25, 1) WITH LOG;
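Note that referencing a user-defined message number with RAISERROR requires the message to exist in sys.messages. A minimal sketch of adding message 60005 first (the message text here is an assumption):

EXEC sp_addmessage @msgnum = 60005, @severity = 16
 , @msgtext = N'A sample user-defined error message.', @with_log = 'TRUE';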

Saturday, October 3, 2015

Getting an email notification when a deadlock occurs

If you are administering a heavy transactional database and often facing issues with incomplete transactions, deadlocks might be the cause, and they need to be detected. There are a few ways of checking whether a deadlock is occurring or not; the most common one is the Profiler. However, it is a bit difficult to keep the Profiler (or even Traces) running against a production database and monitor it, hence tracking the deadlock when it occurs and getting notified is the best approach.

The easiest method of tracking it is using an Alert. In addition to that, it needs Database Mail configured, an Operator, and a change to a record in sys.messages.
  • Database Mail - An implementation of SMTP that enables database components to send email.
  • Operator - A person or group who can receive a notification based on a job or a raised alert.
  • Alert - A predefined response to an event.
Let's configure all of them and test this. Database Mail can be set up by creating a profile and adding an SMTP account. It can be simply configured using Database Mail under Management in Object Explorer. Note that, for this example, I am using Gmail as my SMTP server.

After configuring Database Mail, configure an operator by accessing Operators under SQL Server Agent.

Before configuring an Alert, we need to modify a record in sys.messages. This catalog view maintains all messages related to errors, and it has a column called is_event_logged that describes whether the related error is written to the Application Log or not. SQL Server raises Alerts only when it is notified by the Application Log with a message that matches a configured Alert. Therefore we need to make sure that the error we need to track is logged. By default, the deadlock error message, which is 1205, is not logged, hence it needs to be changed.

USE master;

-- checking the record
SELECT * FROM sys.messages WHERE message_id = 1205 and language_id = 1033;

-- setting is_event_logged to 1
EXEC dbo.sp_altermessage 1205, 'WITH_LOG', 'true';

-- checking the updated record
SELECT * FROM sys.messages WHERE message_id = 1205 and language_id = 1033;

Now let's create an Alert and set its Response to the Operator created; a T-SQL sketch equivalent to the GUI steps is shown below.
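Here is a minimal T-SQL sketch of the Operator and Alert; the operator name and email address are assumptions:

USE msdb;

-- Operator who receives the notification (name and address are hypothetical)
EXEC dbo.sp_add_operator @name = N'DBA Team'
 , @enabled = 1
 , @email_address = N'dba@company.com';

-- Alert that fires on error 1205 (deadlock); severity must be 0 when a message id is given
EXEC dbo.sp_add_alert @name = N'Deadlock Alert'
 , @message_id = 1205
 , @severity = 0
 , @include_event_description_in = 1;

-- Response: notify the operator by email
EXEC dbo.sp_add_notification @alert_name = N'Deadlock Alert'
 , @operator_name = N'DBA Team'
 , @notification_method = 1;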

Everything required is done now. Whenever a deadlock occurs, it will be written to the Application Log, the Log notifies the Agent, and then the Agent will fire the Alert.

Wednesday, September 30, 2015

SQL Server 2014 and 2012 new releases

The Microsoft SQL Server team has made a few releases related to SQL Server 2014 and 2012 in the last few months. You can download them and upgrade your instances if required;

If you want to understand about Cumulative Updates, please read this;
What are hotfixes, Cumulative Updates and Service Packs? 

For all update releases;

Version: 2014 | Code Name: Hekaton | Initial Version: 12.00.2000.8
  • CU1: 12.0.2342.0
  • CU3: 12.0.2402
  • CU4: 12.0.2402
  • CU5: 12.0.2402
  • CU6: 12.0.2402
  • CU7: 12.0.2402
  • CU8: 12.0.2402
  • CU9: 12.0.2402
  • SP1: 12.0.4100.1
  • SP1 CU#1: 12.0.4416.0
  • SP1 CU#2: 12.0.4422.0

Version: 2012 | Code Name: Denali | Initial Version: 11.00.2100
  • SP1: 11.00.3000
  • SP1 CU#7: 11.0.3393
  • SP1 CU#8: 11.0.3401
  • SP1 CU#9: 11.0.3412
  • SP1 CU#16: 11.0.3492.0
  • SP2: 11.0.5058
  • SP2 Hotfix: 11.0.5522
  • SP2 CU#1: 11.0.5532
  • SP2 CU#6: 11.0.5592.0
  • SP2 CU#7: 11.0.5623.0

SQL Server Management Studio - September 2015 Release

The latest Preview of Microsoft SQL Server Management Studio has been released, and it can be downloaded from;

Here is the list that contains all updates for the last three months (quoted directly from the site);
  • SSMS August 2015 Preview
    • Object explorer updates to reduce loading delay when there are a large number of databases.
    • Improvements for installation of SSMS on Windows 10 machines.
    • Bug fixes and updates to SQL Server Configuration manager, and SSMS user reports user interface
  • SSMS July 2015 Preview
    • Database diagrams for Azure SQL Database (V12).
    • Improved IntelliSense support for new temporal table syntax.
    • Updated DacFx library to support latest Azure SQL databases features including row-level security and Azure Active directory authentication.
    • Bug fixes (updated 'check for update' UI, UI fix in 'compatibility level' list and more).
  • SSMS June 2015 Preview
    • New SSMS light-weight web installer.
    • Automatic check for updates.
    • SSMS now has full-text-search support for Azure SQL Database (V12).
    • Top customer requests addressed:
      • 'Edit Top 200 Rows' enabled for tables and views in Object Explorer.
      • Table designer enabled for Azure SQL Database (V12).
      • Database and table property dialogs enabled for Azure SQL Database (V12).
      • New option to skip prompt to save T-SQL files.
      • Import/Export wizard support for new Azure SQL Database service tiers (Basic, Standard, Premium).
      • Numerous bug fixes (scripting scenarios, enabling change tracking for SQL databases, and more).

Monday, September 28, 2015

SQL Server Brain Basher of the Week #031 - SQL Server Editions

Many have been working with Microsoft SQL Server for years, and most have worked with either the Enterprise or Standard edition. Here is a question on it;

What are the editions available with SQL Server 2014?

As I mentioned above, everyone is familiar with Enterprise and Standard, but the Business Intelligence edition is still unknown to many, and it is not common to see it implemented either. Here are all the available editions;
  • Enterprise - Premium offering, provides the highest level of reliability for demanding workloads.
  • Business Intelligence - Similar to Standard, plus business intelligence offerings.
  • Standard - Delivers a complete, reliable data management platform without enterprise features.
  • Express - Free, available for light-weight applications.
  • Compact - Free, can be used as an embedded database.
  • Web - Specifically for internet-facing workloads and organizations that work heavily with websites and services.
  • Developer - Similar to the Enterprise edition, but it is for building, testing, and demonstrating functionality.
  • Azure SQL Database - Microsoft cloud offering - offers relational database-as-a-service.
  • Microsoft Analytics Platform System - An appliance combining Microsoft’s massively parallel processing (MPP) data warehouse technology–the SQL Server Parallel Data Warehouse (PDW)–together with HDInsight.
Read more on;

Saturday, September 26, 2015

Configuring HDInsight Hadoop Cluster and running a sample MapReduce Job

Apache Hadoop is an open source solution for distributed data processing and storage that consists of a cluster of servers holding data, stored in a distributed file system named HDFS (Hadoop Distributed File System). It is the solution widely used for handling Big Data, and many software vendors offer it with their platforms. Microsoft offers it as HDInsight.

HDInsight is a cloud-based distribution of Hadoop that is fully compliant with the Hadoop open source project. This offering comes as a service in Microsoft Azure, and provisioning and decommissioning nodes of the cluster as required is not as difficult as with on-premises-managed clusters.

This post explains how to create an HDInsight cluster with the new Azure portal and run the sample MapReduce job for testing, using the remote machine. If you need more details on Hadoop, HDFS, and the related project called Hive, read the following;

Let's start with the new Azure portal. Go to the portal and log in. Once logged in, open Storage Accounts (not Storage Accounts (classic)) and create a new storage account. An HDInsight cluster requires at least one storage account for HDFS. If required, more than one storage account can be attached to the cluster. The best practice with a storage account for Hadoop is to create one and use it when the cluster is created. This allows us to maintain the storage, create the cluster when required, and delete the cluster when it is not required without deleting the storage, retaining the data files placed in it.

The next step is creating the cluster. Open HDInsight Clusters and create a new one using the storage account created. Note that the credential for accessing the cluster goes as admin (which can be changed if wanted), and Remote Desktop enables accessing the cluster using Remote Desktop Connection.

You can use the Node Pricing Tiers section for setting up the nodes required for the cluster, deciding the tier for the Worker nodes and the Head node.

Generally, it takes about 10-15 minutes to set up the cluster. Once it is done, navigate to it and open it using Remote Desktop Connection.

When the Remote Desktop session is open, HDFS commands can be used for navigating and manipulating folders and files, running MapReduce jobs, and executing commands for related projects such as Pig and Hive. Double-click the shortcut called Hadoop Command Line to open it.

Here is an example for seeing the HDFS file system, an example for executing a jar file, and an example for seeing the output of the executed job; command sketches for these three steps are shown below.
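These are hedged sketches assuming the Hadoop Command Line on the cluster's head node; the examples jar name and the sample file paths are assumptions based on the HDInsight samples:

REM Seeing the HDFS file system
hadoop fs -ls /

REM Executing a jar file: the wordcount sample from the examples jar (jar name/path is an assumption)
hadoop jar hadoop-examples.jar wordcount /example/data/gutenberg/davinci.txt /example/output

REM Seeing the output of the executed job
hadoop fs -cat /example/output/part-r-00000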

Refer to this page for Hadoop commands.

Tuesday, September 22, 2015

Loading Power Query data to Power Pivot: Power Query Default Query Load Setting

This concern can be seen everywhere: why does my Power Query data sometimes appear in Power Pivot and sometimes not? This is not a bug or an issue; it is all about Default Query Load Settings. When data is loaded using Power Query, the placement of the loaded data is decided by Power Query using Default Query Load Settings. It has mainly two settings;
  • Use standard load settings
  • Specify custom default load settings
    • Load to worksheet
    • Load to Data Model

By default, Use standard load settings is selected, and its behavior changes based on the way you load data. It loads data to the worksheet if only a single table is selected. It loads data to the data model when multiple tables are selected or the loaded data set does not fit in a worksheet (row limitation). If the setting is changed to Specify custom default load settings, either one of the options (Load to worksheet or Load to Data Model) can be set, or both can be selected.

This is how it works;

With the Use standard load settings selected, when a single table is selected from the data source;

and if it fits in the sheet, it will be loaded to the worksheet, not to the model.

When a table is selected with Select multiple items checked;

data is loaded to the model.

The same behavior applies to other data sources too;

If Specify custom default load settings is set, and both Load to worksheet and Load to Data Model are checked;

data will be loaded to both worksheet and model.

In addition to that, the placement can be decided with the Load button in the Power Query loading window too;

Note that this new configuration was introduced with the latest update of Power Query. It can be downloaded from:

And you can read more on this update at:

SQLServerAgent Error: The target server cannot establish an encrypted connection to the master server - MsxEncryptChannelOptions

SQL Server supports multi-server administration, which allows us to distribute jobs across our enterprise, basically across our SQL Servers. Configuration of multi-server administration is done through SQL Server Agent: one server needs to be configured as the master and other servers have to be configured as targets. When setting it up, you might get the following error at the end of the wizard;

The enlist operation failed (reason: SQLServerAgent Error: The target server cannot establish an encrypted connection to the master server. Make sure that the MsxEncryptChannelOptions registry subkey is set correctly on the target server.) (Microsoft SQL Server, Error: 22026)

What is the reason for this?

The reason for this is that it tries to establish the connection between the master and targets on a secure channel with full SSL encryption. It tries this because your system requests it by default.

If SSL encryption is not enabled between the servers, then this setting has to be changed on the target servers using the registry. The key MsxEncryptChannelOptions (REG_DWORD) is available under \HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\\SQLServerAgent for changing the default setting, which is 2.

It accepts three values:
  1. 0 - disables encryption between the master and target.
  2. 1 - enables encryption between the master and target but no certification validation is required.
  3. 2 - enables full SSL encryption between the master and target.
If you do not need an encrypted channel between the master and target, then set the value to 0 and configure multi-server administration; a command-line sketch is shown below.
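As a minimal sketch, the value can be changed with reg.exe on the target server; the instance folder name (MSSQL12.MSSQLSERVER here) is an assumption and depends on your installation:

REM Disables encryption between the master and target (value 0)
reg add "HKLM\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL12.MSSQLSERVER\SQLServerAgent" /v MsxEncryptChannelOptions /t REG_DWORD /d 0 /f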

Monday, September 21, 2015

SQL Server Brain Basher of the Week #030 - TSQL

Here is the question of the week; it is based on SELECT statements and the join mechanisms that can be used.

Assume that you have two tables like below with data inserted;

USE tempdb;

CREATE TABLE dbo.Customer
(
 CustomerId int identity(1,1) PRIMARY KEY
 , LastName varchar(50) NOT NULL
 , Country varchar(50) NULL
);

CREATE TABLE dbo.Employee
(
 EmployeeId int identity(1,1) PRIMARY KEY
 , LastName varchar(50) NOT NULL
 , Country varchar(50) NULL
);

INSERT INTO dbo.Customer
 (LastName, Country)
VALUES
 ('Knight', 'United Kingdom')
 , ('Jack', 'United States')
 , ('Brown', NULL)
 , ('Hamilton', 'Italy');

INSERT INTO dbo.Employee
 (LastName, Country)
VALUES
 ('Khanna', 'India')
 , ('Brown', NULL)
 , ('Jack', 'United States')
 , ('Philips', 'Italy');

And to get the records that exist in both tables, the following queries are written;

SELECT c.LastName, c.Country
FROM dbo.Customer c
 INNER JOIN dbo.Employee e
  ON c.LastName = e.LastName
   AND c.Country = e.Country;

SELECT c.LastName, c.Country
FROM dbo.Customer c
WHERE EXISTS
 (SELECT * FROM dbo.Employee e
   WHERE e.LastName = c.LastName
    AND e.Country = c.Country);

SELECT LastName, Country
FROM dbo.Customer 
INTERSECT
SELECT LastName, Country
FROM dbo.Employee;

What would be the output of each query?

  • The statement with INNER JOIN returns only the one record belonging to Jack; the other two queries return records related to both Jack and Brown.
  • The statements with INNER JOIN and EXISTS return the one record related to Jack, and the statement with INTERSECT returns records related to both Jack and Brown.
  • All statements return records related to both Jack and Brown.
  • All statements return only the record related to Jack.
For answering this, you need to understand how NULL values are compared with JOINs, predicates, and set operators. Generally, set operators consider two NULL values as equal, but comparison with the other methods yields unknown. Considering this fact, the correct answer is the second one, because INNER JOIN and EXISTS return only one record while INTERSECT returns two records, including the record that has NULL for country.

Sunday, September 20, 2015

What are Spreadmarts in Business Intelligence solutions?

Have you heard about a term called Spreadmart? If you have not, then it is something you should know, because your business users might be using spreadmarts and you need to consider them when creating or enhancing your business intelligence solution.


What is a spreadmart? Is it another type of data mart, or is it a data warehouse? In a way, it can be considered a data mart because it is something created specifically for one business process or a department by an individual. It is a spreadsheet, for example an Excel workbook, maintained by a business user for performing their own analysis with their unique dataset. The business user uses spreadmarts for combining data from multiple sources, including existing reports, and produces results for her/his analysis.

Although this gives more flexibility to business users, it introduces some issues with collaboration. Therefore it is always better to maintain IT-driven data marts and ask users to use the data mart for creating their data models, either as a spreadmart or using a modern way like Power BI.

Saturday, September 19, 2015

How to write SELECT for Analysis Services Tabular using DAX

DAX (Data Analysis Expressions) is a language that can be used with Microsoft Analysis Services tabular data models. It was initially introduced with the Power Pivot Excel add-in and later added to Analysis Services tabular models. The DAX language holds a collection of functions, operators, and constants that can be used for creating calculated columns and measures, extending tabular models.

Since we always write queries with SELECT against the database engine and the Analysis Services multi-dimensional model, the requirement for writing the same is not uncommon with the tabular model either. How can we write a SELECT-like query with DAX?

Retrieving data from tabular model tables can be done with the EVALUATE keyword, which is functionally equivalent to the SELECT statement. Although it is not as rich as the T-SQL SELECT, it can be used for getting data for general reports.

Here are some examples;

-- Getting data from Internet Sales
evaluate ('Internet Sales');

-- With filtering
evaluate(Filter('Internet Sales', 'Internet Sales'[Product Standard Cost] > 1000));

-- With ordering
evaluate(Filter('Internet Sales', 'Internet Sales'[Product Standard Cost] > 1000))
order by 'Internet Sales'[Order Date] desc;
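Grouping and aggregating, similar to a SELECT with GROUP BY, can be sketched with SUMMARIZE; the column names below are assumptions based on the Adventure Works tabular model:

-- Grouping sales by order date with a summed column
evaluate(
 summarize('Internet Sales'
  , 'Internet Sales'[Order Date]
  , "Total Sales", sum('Internet Sales'[Sales Amount])));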

Friday, September 18, 2015

Disable Report Builder from Reporting Services Report Manager

Report Builder is a report authoring tool that is specifically given to business users who are familiar with the ribbon-based interface of Microsoft Office. It is the primary tool that addresses all self-service reporting scenarios and is a user-friendly tool.

Report Builder has two versions, functionally equal, but the way we open them for business users is different. One version comes as a ClickOnce application which can be opened using Report Manager, and the other comes as a stand-alone application which can be downloaded from the Microsoft Download Center. When you open the Report Manager, the shown button can be clicked for downloading Report Builder to the client machine.

Now, what if you want to disable this facility and make sure users cannot download Report Builder from Report Manager? What you have to do is connect to Reporting Services using Management Studio and open Advanced Properties. It has a property called EnableReportDesignClientDownload which can be used for enabling and disabling it.

Once it is set to false, the Report Builder button will no longer appear.

Thursday, September 17, 2015

Power BI Desktop with a Live Connection - Creating Analytical Reports without saving data in Power BI

Many use or plan Microsoft Power BI for reporting and dashboard solutions with the assumption that Power BI always loads the selected data, forming an analytical model and saving it with the file, just like using Power Pivot and Power View with Excel. This leads to a question: if a business user wants to create a report for analysis, can she/he directly connect with the source and create it without loading data into the client application?

With Power BI Desktop, it is possible. But it is possible only with an Analysis Services Tabular Mode database as the source. The user can create his own report connecting to an Analysis Services Tabular instance and save the report, just like creating a report using the Reporting Services Report Builder.

Here are the steps for connecting and creating a report;

Open Power BI Desktop and connect to the Analysis Services Tabular Mode instance.

And make sure that the Explore the Tabular model by using a live connection radio button is selected.

As you see on the screen below, you cannot select entities and attributes, only the model.

Once connected, all elements and attributes are shown just like with a usual connection, allowing you to create reports as you want. Once done, you can save it, and you will see that the saved file is very small compared to general Power BI files.

Generally, business users do not upload created reports if there is no requirement for sharing. But if it is required, can we upload the created report to the Power BI service just like other reports created?

If you try to publish this report, you will get the following issue;

An error occurred while attempting to publish Test.pbix.

This can be handled by installing and configuring the Analysis Services Connector. Once it is installed, it can be used for connecting and creating reports, and published reports will connect to your source via it.

For more info on that and how to install, read this: Configure a Power BI Analysis Services Connector

For more info on how to create reports using Power BI Desktop, read this: Getting started with Power BI Desktop

For more info on how to create reports using Analysis Services Tabular data, read this: Analysis Services Tabular data in Power BI Desktop

Monday, September 14, 2015

SQL Server Brain Basher of the Week #029 - SSAS

Analysis Services can be installed in one of three modes: Multi-dimensional, Power Pivot for SharePoint, and Tabular. One instance can run in only one mode, and the mode has to be set during the installation.

Once installed with a mode selected, can we change or switch the mode?

There is no defined method for doing it via a specific menu, but there is a way of doing it which is not a solution provided by Microsoft. Microsoft does not recommend changing the mode after the installation, but it can be done by changing the msmdsrv.ini file. It requires that no objects (databases) have been added; in some situations, removing added objects (databases) and changing the mode has worked fine, but especially with the mode set to Power Pivot it may produce an unexpected result.

The msmdsrv.ini file is located in the C:\Program Files\Microsoft SQL Server\MSAS12.SQL2014\OLAP\Config path, and the property called DeploymentMode can be changed to 0 (Multi-dimensional), 1 (Power Pivot for SharePoint), or 2 (Tabular), as sketched below.
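The file is XML; as a quick sketch (surrounding elements omitted), the element to change for switching an instance to Tabular would look like this:

<DeploymentMode>2</DeploymentMode>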

Note that, though this works in many cases, it is NOT recommended.

And if you want to know the mode you have installed, DeploymentMode can be used to determine it, but the easiest way is by looking at the icon of the instance. The icons show Multi-dimensional, Power Pivot for SharePoint, and Tabular, in that order.

Sunday, September 13, 2015

Do you know that Power BI is available now and you can use it for FREE?

Microsoft Power BI, bringing your data to life; yes, it is available and can be used for FREE :).

Power BI is a cloud-based business analytics service that helps you convert your data into meaningful information with rich, interactive, and well-formed visuals, aligning with the concept of any data, anywhere, any time. It offers Power BI Mobile, which can be used for seeing the dashboards and reports created and analyzing them as you are used to, making sure your most important data travels with you. And it offers Power BI Desktop, which can be used for creating stunning reports and interactive visualizations.

It is FREE, but don't forget that it has a paid version too. Power BI Pro is the paid version, and it is USD 9.99 per user/month. There are not many differences; however, the key differences are;
  • Data capacity limit is 1 GB/user for FREE Power BI and 10 GB/user for Power BI Pro.
  • Content refresh is Daily for FREE Power BI and Hourly for Power BI Pro.
  • Streaming data in dashboards is limited to 10K rows/hour for FREE Power BI and 1M rows/hour for Power BI Pro.
  • And features related to collaboration are limited to Power BI Pro.

Saturday, September 12, 2015

The permissions granted to user are insufficient for performing this operation - Cannot deploy reports from SQL Server Data Tools to Reporting Services

Are you experiencing this error when trying to deploy reports from SQL Server Data Tools to the Reporting Services site?

As it says, it clearly indicates that you have no permission to deploy the created report. Of course, you need to check the permissions granted to you, but before that, try to open SQL Server Data Tools with administrator privileges and see;

After that, you should be able to deploy reports.

Friday, September 11, 2015

How to enable SQL Server Audit for Windows Security Log

I have written a few posts on SQL Server Auditing, and they are based on either the Windows Event Log or a file as the target of the Audit;

There is another option that can be set as the target, which is the Windows Security Event Log, the most secure place for placing audit info. But it cannot be set as simply as the other targets; it needs some extra steps.

If I have created an Audit like below;

and try to enable;

I get this error. The reason is that, for setting the target to the Windows Security Log, the SQL Server service account has to be granted the Generate security audits policy. For that, open Local Security Policy and add the SQL Server service account to the Generate security audits policy (under Local Policies, User Rights Assignment); a further audit-policy step is sketched below.
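In addition, writing to the Security log generally requires the Application Generated audit policy to be enabled for the machine; here is a minimal sketch using auditpol from an elevated command prompt (treat the exact subcategory name as an assumption to verify on your system):

auditpol /set /subcategory:"Application Generated" /success:enable /failure:enable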

You may need to restart SQL Server before trying to enable the created Audit. Once done, you will be able to successfully enable the Audit.

Thursday, September 10, 2015

SQL Server Audit for application event - User-Defined Audit Actions

SQL Server Audit is the main tool for capturing events related to the server and databases, and it can record event information to files, the Windows Application Event Log, and the Windows Security Event Log. All editions of SQL Server support server-level auditing, but only the Enterprise, Developer, and Evaluation editions support database-level auditing.

When defining auditing, you need to create an Audit first, and then either a Server Specification or a Database Specification, specifying which Actions/Action Groups need to be included. Actions/Action Groups are pre-defined, comprehensive events that cover almost all events related to the server and databases. Once configured, events are recorded to the given target. What if we want to raise our own events and get them captured with the same Audit?

A good example is tracking changes in the Product table, specifically on the price column. If an Audit is already defined for other events, and the same target needs to be used for recording price changes too, all we have to do is include the USER_DEFINED_AUDIT_GROUP Action Group in either the Server or Database specification and use the sp_audit_write stored procedure for recording our custom event. Here are the steps for doing it.

Let's create an Audit first. If you have already created an Audit, you do not need to create another. Go to the Security node in Object Explorer, then the Audit node, and then create a new Audit.

Next, let's create a Server Specification. Note that I have included two events: FAILED_LOGIN_GROUP and USER_DEFINED_AUDIT_GROUP. The second one is for custom events. A T-SQL sketch equivalent to these GUI steps is shown below.
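Here is a minimal T-SQL sketch of the Audit and the Server Specification; the object names and the file path are assumptions:

USE master;

-- Audit writing to files under c:\Audit (path is an assumption)
CREATE SERVER AUDIT Audit_UserDefinedEvents
TO FILE (FILEPATH = 'c:\Audit\');

-- Server specification with the two action groups mentioned above
CREATE SERVER AUDIT SPECIFICATION Audit_UserDefinedEvents_Spec
FOR SERVER AUDIT Audit_UserDefinedEvents
ADD (FAILED_LOGIN_GROUP),
ADD (USER_DEFINED_AUDIT_GROUP)
WITH (STATE = ON);

-- Enabling the audit
ALTER SERVER AUDIT Audit_UserDefinedEvents WITH (STATE = ON);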

Capturing ListPrice changes can be easily implemented with a trigger, and the sp_audit_write procedure can be called within it. Here is the sample code for implementing it.

USE AdventureWorks2014;

GO
CREATE TRIGGER Production.CheckPriceChanged ON Production.Product
AFTER UPDATE
AS
BEGIN
 DECLARE @Message nvarchar(4000)
  , @OldPrice money
  , @NewPrice money
  , @ProductNumber nvarchar(25);

 IF UPDATE(ListPrice)
 BEGIN
  -- Note: this assumes a single-row update
  SELECT @OldPrice = ListPrice FROM deleted;
  SELECT @NewPrice = ListPrice, @ProductNumber = ProductNumber FROM inserted;

  SET @Message = N'Price changed: Product Number: ' + @ProductNumber + N' Old price: ' + CONVERT(nvarchar(20), @OldPrice) + N' New price: ' 
      + CONVERT(nvarchar(20), @NewPrice) + N' User: ' + CONVERT(nvarchar(20), SYSTEM_USER);

  -- Calling the auditing stored procedure
  -- (parameters: user-defined event id, succeeded flag, message)
  EXEC sp_audit_write 1, 1, @Message;
 END
END;

Let's make some changes and see how it is going to be recorded.

-- Updating records
UPDATE Production.Product
 SET ListPrice = 100
WHERE ProductID = 1;

UPDATE Production.Product
 SET ListPrice = 0
WHERE ProductID = 1;

-- Checking the audit output with fn_get_audit_file
SELECT event_time, object_name, user_defined_event_id, statement, user_defined_information
FROM sys.fn_get_audit_file ('c:\Audit\*',default,default);

Wednesday, September 9, 2015

Reporting Services Optional Multi-Valued Parameters with Analysis Services as the source

Creating multi-valued parameters in reports is a common requirement, and it can be easily done with Reporting Services by enabling the Allow multiple values property of the parameter. However, if you need to make it optional, allowing the user to continue with the report without selecting any item in the parameter, you will face an issue, because Reporting Services does not allow the user to continue without selecting an item if the parameter is a multi-valued parameter. Not only that, you cannot enable the Allow null value property either.

If your source for the parameter is a SQL Server database, the workaround is adding a NULL value to the dataset manually (or as a part of the query) and using it for handling the optional requirement. If you google, you can find this workaround in many forums and sites. But if the source for the parameter is Analysis Services, this workaround does not work as it is; it needs some extra work.

Here is the way I implemented it; it may not be the perfect method, but it worked fine for me.

Assume that I need to load Products for a parameter and make it a multi-valued parameter; my MDX for that will be something like the below;

WITH
MEMBER [Measures].[Product Name]
 AS [Product].[Product].CurrentMember.Name
MEMBER [Measures].[Product Unique Name]
 AS [Product].[Product].CurrentMember.UniqueName
SELECT
 { [Measures].[Product Name], [Measures].[Product Unique Name] } ON 0
 , {[Product].[Product].[Product].Members } ON 1
FROM [Adventure Works];

However, I cannot make the parameter optional because of this limitation, hence I need a member that represents NULL. For that, I need to introduce a query-scoped calculated member and add it to the resultset;

WITH
MEMBER [Measures].[Product Name]
 AS [Product].[Product].CurrentMember.Name
MEMBER [Measures].[Product Unique Name]
 AS [Product].[Product].CurrentMember.UniqueName
MEMBER [Product].[Product].[All Products].[(NULL)]
 AS Aggregate([Product].[Product].&[(NULL)])
SELECT
 { [Measures].[Product Name], [Measures].[Product Unique Name] } ON 0
 , {[Product].[Product].[Product].AllMembers } ON 1
FROM [Adventure Works];

Now, just like parameters loaded from SQL Server databases, I have a (NULL) item in the parameter for handling the situation. If you get an issue with sorting the result that the MDX produces, an alternative way is executing the MDX using a T-SQL stored procedure, connecting to Analysis Services via a Linked Server object.

CREATE PROC dbo.GetProductsFromSSAS
AS
BEGIN
 DECLARE @Query nvarchar(max);
 SET @Query = N'
   SELECT "[Measures].[Product Name]"
    , "[Measures].[Product Unique Name]"
   FROM OpenQuery(OLAPDatabase, ''
     WITH
     MEMBER [Measures].[Product Name]
      AS [Product].[Product].CurrentMember.Name
     MEMBER [Measures].[Product Unique Name]
      AS [Product].[Product].CurrentMember.UniqueName
     MEMBER [Product].[Product].[All Products].[(NULL)]
      AS Aggregate([Product].[Product].&[(NULL)])
     SELECT { [Measures].[Product Name], [Measures].[Product Unique Name] } ON 0
      , {[Product].[Product].[Product].AllMembers } ON 1
     FROM [Adventure Works]'')
   ORDER BY 1;';
 EXEC (@Query);
END;

Tuesday, September 8, 2015

Is MDX query slow? Here are four guidelines for improving performance

The key reason for implementing OLAP databases is performance related to data retrieval, so what if the written query does not offer the expected performance? In a way, troubleshooting an MDX query is not as easy as troubleshooting T-SQL if you are not familiar with the concepts related to MDX statements. However, you can keep the following guidelines in mind and do the needful if you experience slow performance with a written MDX statement.

The guidelines are based on the time spent in the Query Processor and the Storage Engine. The durations can be determined using the following events with Profiler;

  • Query Subcube event - Duration column - for time spent on extracting data from the storage.
  • Serialize Results End event - Duration column - for time spent on manipulating data after it is taken from the storage.
Here are four guidelines;
  1. If you see more time on the Serialize Results End event - Duration (processing) than the Query Subcube event - Duration (storage), you need to optimize the MDX statement, reducing added calculations and complexity.
  2. If you see more time on the Query Subcube event - Duration (storage) than the Serialize Results End event - Duration (processing), it shows that it needs partitioning, user-defined hierarchies, and attribute relationships defined among the used attributes.
  3. If you see more time on the Query Subcube event - Duration (storage) than the Serialize Results End event - Duration (processing), and data is rarely retrieved from aggregations, it needs aggregations added based on usage.
  4. If you see more time on the Query Subcube event - Duration (storage) than the Serialize Results End event - Duration (processing), and data is retrieved from the cache, you need to check the resource usage of the server and, if required, scale up by adding memory, CPU, etc.
Await the next post on how to use the Profiler for capturing these for troubleshooting.