Showing posts with label Azure SQL Database.
Wednesday, April 8, 2020
Sending Emails from Azure SQL Database - YouTube Video
This video focuses on sending emails for operations related to Azure SQL databases. The following are discussed and demonstrated;
- Possibilities of tracking changes in Azure SQL Database for sending emails
- Setting up an Azure SendGrid account
- Setting up an Azure Logic App for sending emails
Thursday, February 16, 2017
Creating multiple administrative accounts in Azure SQL Database Server
You know that we can create multiple administrator accounts in SQL Server if it is an on-premises instance or an instance configured in a VM (hosted in the cloud or on-premises). What we generally do is create a login and add the login to the sysadmin fixed server role. Can we do the same in Azure SQL Database (Server)?
There is only one unrestricted administrative account that can be created in an Azure SQL Database Server. It is created when creating the server. In addition to that, you can add either one Azure Active Directory account or a security group account (that has multiple accounts as members). If you open the Properties of your server, you can see your administrative accounts;
However, if you need to add multiple administrative accounts (not unrestricted administrative accounts), then there is a way of adding them. There are two server-level administrative roles available in the master database to which user accounts can be added for granting permissions to create and manage databases and logins.
The first role is dbmanager. This role has permission to create new databases, hence members of this role can create databases. This role exists in the master database, therefore only users in the master database can become members of this role.
-- In master database
-- creating a user
CREATE USER Dinesh2 WITH PASSWORD = 'Pa$$w0rd';

-- adding the user to dbmanager role
ALTER ROLE dbmanager ADD MEMBER Dinesh2;
The other role is loginmanager. This role has permissions for creating logins in the master database. Again, this role exists in the master database, hence only users in the master database can become members of it.
-- In master database
-- creating a user
CREATE USER Dinesh3 WITH PASSWORD = 'Pa$$w0rd';

-- adding the user to loginmanager role
ALTER ROLE loginmanager ADD MEMBER Dinesh3;
Wednesday, February 15, 2017
Should I create Azure SQL Logins or Azure SQL Users?
Unless it is a contained database, we must create a login for connecting to SQL Server in order to access a database. Databases are accessed using Users, and Users are created using Logins. Is it the same with Azure SQL Database? Should I create a login before creating a user?
With Azure SQL Database, it is not a must because an Azure SQL database is a contained database (portable). We can create a user account with a password for accessing the database without creating a login. However, it still allows us to create a login in the master database and then create a user in our database using that login. Both approaches are sketched below.
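Here is a minimal T-SQL sketch of the two approaches; the account names and the password are hypothetical placeholders.

-- Option 1: contained user with a password (no login) - keeps the database portable
-- (run in the user database)
CREATE USER AppUser1 WITH PASSWORD = 'StR0ng_Pa$$w0rd';

-- Option 2: login in master, then a user mapped to that login
-- (run in master)
CREATE LOGIN AppLogin1 WITH PASSWORD = 'StR0ng_Pa$$w0rd';
-- (run in the user database)
CREATE USER AppUser2 FOR LOGIN AppLogin1;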
What if I created a user in my database using a login?
The main advantage is maintenance. If the login needs to be dropped or disabled, it can simply be done in the master database. Not only that, since user accounts can be created in multiple databases using the same login, one login can be used for connecting to multiple databases.
The biggest disadvantage with this is that the database becomes non-portable. If the database needs to be copied, moved or replicated, the login must be created on the new server.
What if I created a user in my database without creating a login?
The main benefit we get is a portable database. Since all users with passwords are maintained inside the database, it can be copied, moved or replicated without any issue.
However, if the environment has multiple databases and users need to access more than one database, then multiple user accounts have to be created.
Tuesday, February 7, 2017
Splitting values in a string variable and inserting values as rows - II - STRING_SPLIT function
Yesterday I wrote a post on splitting string values using an extended function. The biggest issue with it was its unavailability in Azure SQL Database. However, we now have a new function to achieve this, and it works in both SQL Server 2016 and Azure SQL Database.
This is how it works;
DECLARE @String nvarchar(4000) = 'SQL,Business Intelligence,Azure';

SELECT * FROM STRING_SPLIT(@String, ',');
Sunday, February 5, 2017
On-Premises or Cloud - SQL Server Brain Basher of the Week #064
Here is the Brain Basher of this week.
During interviews, questions on deciding where to host a planned database are very common. Sometimes we ask for the best place for a database given certain criteria, but sometimes we ask how you would decide considering general factors.
If you were asked the below question, what would be your answer?
"You have to create a database for one of the business processes related to the company. Hosting environment for the database is still undecided. We can either create it as an Azure SQL Database or can use the existing On-Premises SQL Server for hosting this database. What do you think?"
This is the Interview Question of the week.
Since there is no specific requirement, particularly on the business need, most of the time the interviewee gets stuck in certain areas. However, as per my experience, it is great if the interviewee can explain the factors related to making such decisions along with their pros and cons. Let's list out possible factors that influence the decision;
- Stable Connection - How will this database be used? Frequently, with heavy transactions, or infrequently, with fewer transactions? This is very important because if the database is an OLTP database with heavy transactions and the business depends on it, you need a stable, high-speed internet connection. If the company has no facility that can satisfy the required speed and bandwidth, it is better not to move into the cloud. However, if getting a high-speed internet connection is a plan that will be executed soon, hosting in the cloud should be considered.
- Sensitive data - Some companies, especially financial companies, have policies on sensitive data. If the policy says that data should be maintained only in local servers, then hosting the database in the cloud is not going to work. Besides the policies established by the company, there are regulations enforced by the government, hence they have to be considered too. We may go for a hybrid version that hosts part of the database in the cloud and the rest in an on-premises server, making sure that sensitive data is not moved out. This is possible with Microsoft SQL Server (on-premises SQL Server database files maintained in Azure).
- Accessibility - How is our database going to be accessed? What we need to make sure is that it can be accessed by the audience who need it. Users may access it via a web application, using reports, or by connecting directly using tools like Microsoft Excel or Power BI. If we host it in the cloud, can everyone access data as they want? Does it require special software to be installed? Does it require special permissions? Do we have to purchase more licenses? These are the things we have to consider here. If we have to spend more than what is budgeted, or it makes the usage complex, we might have to consider on-premises. However, certain scenarios make on-premises hosting complex and costly as well.
- Size - Is it going to be in megabytes, gigabytes or terabytes? If you expect the database to grow to terabytes within a short period, then it should be hosted on a local machine, as 1 TB is one of the limitations of Azure SQL Database.
- Security - This is the most common factor that pushes the implementation away from the cloud. Many think that having the database hosted on a local server is more secure than placing the database in the cloud. However, it is not true. If you go for a good provider like Microsoft or Amazon, there have been no security breaches recorded and they guarantee the security. Securing the local environment is very difficult; you have to consider security implementations electronically as well as physically, and someone has to continuously watch it, patch it, upgrade it when required and monitor it. Therefore, experts recommend hosting in the cloud rather than in-house when security is considered.
- High Availability and Disaster Recovery - If this is required and you have already implemented it using methods like Mirroring, Clustering or even a backup/restore strategy, and most importantly it addresses the required RTO and RPO (read more at http://dinesql.blogspot.com/2016/10/determine-backup-strategy-sql-server-interview-question.html), then you can consider a local implementation. However, if you have not implemented anything, then it is always better to move into the cloud as almost all providers support HA and DR.
- Cost - No doubt, the initial cost is always very high when it comes to hosting on a local server. You may spend on purchasing servers, software licenses and some extra hardware for speeding up the processes. But the initial cost is very low when hosting in the cloud. You may calculate for multiple years and compare, but in most cases, hosting in the cloud is cheaper than hosting on an on-premises server.
- Maintenance - If it is local, you need someone to do this: installing required service packs, checking space, performance tuning, monitoring... many things. If it is cloud, most of these are addressed by the provider and only a few need our attention. For example, Azure SQL Database suggests enhancements on indexing and statistics, and we can use these recommendations for performance tuning, which saves time and resources. So, if you do not have a dedicated person for performing these, go for the cloud.
- Features and Functionalities - You may need to implement some processes that need a certain set of functionalities. For example, if you need to send an email from a SQL Server database, you need the Database Mail feature. If you need to execute business logic on a schedule, you need SQL Server Agent. You know that these features are not available with Azure SQL Database. Not only that, it does not support all TSQL commands, and standard partitioning across multiple volumes is not available (with Azure SQL Database, partitioning is possible but volumes cannot be decided; if needed, federation can be considered). Therefore, we need to think about the features and functionalities that will be used with the database and then decide whether to host on a local server or in the cloud.
There are more, but the interviewee should talk about at least the mentioned factors. That shows that the interviewee is smart and knowledgeable on the subject, not just a person who implements anything when asked.
Saturday, February 4, 2017
What if I create an Azure SQL Database using Management Studio (SSMS)?
Will there be differences if I create my Azure SQL Database using Management Studio instead of Azure Portal?
Yes, there are. One major difference I noticed is Snapshot Isolation. By default, it is on in an Azure SQL Database, along with Is Read Committed Snapshot On;
[Database created using the portal]
However, if you create a database using SSMS, it is not set as expected;
[Database created using SSMS]
As you see, even though Snapshot Isolation is on, Is Read Committed Snapshot On is set to false. I am not sure how this happens; I have already asked experts and will update this with the reasons once I receive a reply.
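If you prefer to check these two settings with T-SQL instead of the SSMS properties dialog, a query like the following against sys.databases should show them; 'MyAzureDB' is a placeholder for your database name.

SELECT name, snapshot_isolation_state_desc, is_read_committed_snapshot_on
FROM sys.databases
WHERE name = 'MyAzureDB';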
Friday, February 3, 2017
Is it possible to read data while they are being modified in Azure SQL Database with default settings?
Somewhere in 2011, I wrote an article titled Isolation Level in SQL Server that explains different Isolation Levels in SQL Server. It is based on an older version of SQL Server, but it is still valid for new versions as well.
Since we now have SQL Server as a SaaS, we are slowly moving to Azure SQL Database. As you know, Azure SQL Database is not the same as an on-premises SQL Server database; it has some differences, and one of them is related to the Isolation Level.
Before talking about the difference, let's write some sample code and see how it works in both Azure SQL Database and on-premises SQL Server databases. For this, I have created a database called Sales in both environments.
Once the databases are available, the below code, which adds a table and inserts records, needs to be run in both the local and Azure SQL databases;
CREATE TABLE dbo.Products
(
    ProductId int identity(1,1) Primary Key
    , Name varchar(100) not null
    , Color varchar(20) not null
);
GO

INSERT INTO dbo.Products
    (Name, Color)
VALUES
    ('Product 01', 'Black')
    , ('Product 02', 'Red')
    , ('Product 03', 'Blue');
Now we have the same table in both databases. Let's first test with local database.
Open a new connection and execute the following code. It creates a transaction and updates one of the records. Note that we neither roll back nor commit.
-- First connection
BEGIN TRAN

UPDATE dbo.Products
SET Color = 'Green'
WHERE ProductId = 2;
Open another connection and execute the code given below. This connection tries to retrieve the record that is being modified by first connection.
-- Second connection
SELECT * FROM dbo.Products WHERE ProductId = 2;
The result of the second connection is shown in the image below. As you see, we cannot immediately see the record as it is being modified.
Let's go back to the first connection and rollback the transaction.
-- First connection
ROLLBACK TRAN
Once it is done, the second connection will be able to see the records.
This is the standard/default behavior of SQL Server. Generally, we cannot access records that are being modified because the default Isolation Level is Read Committed. Now let's do the same in Azure SQL Database.
Open a new connection for Azure Sales Database and execute the same code that starts the transaction.
Open another connection for Azure Sales Database and execute the same code for retrieving records. Here is the result of it.
As you see, there is no issue with the Azure Sales database. The second connection can retrieve records even though the record is being modified by another connection.
This is one of the differences between Azure SQL Database and a local SQL Server database. The reason for this is that, by default, the Snapshot Isolation Level is enabled in Azure SQL Database. Because of that, a version of the old record is created before the modification, and it will be served to other users if requested.
Can we turn this behavior off in Azure SQL Database? No, it is not possible; if you try to change it via SSMS, you will experience the below error;
However, you can change Is Read Committed Snapshot On, which changes the default behavior.
If you set this to false, then the second connection will not be able to see records while the first connection is modifying them, but the second connection can set the Isolation Level manually and see the records.
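As a rough T-SQL sketch of what that would look like (the database name Sales follows the example above; treat it as a placeholder);

-- Turn off the default row-versioned reads for the database
ALTER DATABASE Sales SET READ_COMMITTED_SNAPSHOT OFF;

-- Second connection: opt in to Snapshot Isolation explicitly and read the row
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
SELECT * FROM dbo.Products WHERE ProductId = 2;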
Wednesday, February 1, 2017
Azure SQL Database - Auditing & Threat Detection
Has your database been attacked by someone?
Most of our applications are data-driven applications, and attackers can use various techniques for getting into the database. One of the most famous and common techniques is SQL Injection, which is a code injection that modifies the statement written in the application and gets it executed as the attacker wants.
Here is a link for reading more on SQL-Injection: http://www.w3schools.com/sql/sql_injection.asp
Whether the database is an on-premises database or a cloud database, if the code has been written poorly, then it is exposed to attackers. And the biggest issue is that, in most cases, we do not know whether the database has been attacked or not. What if someone could detect possible attacks and notify you immediately? Yes, Microsoft Azure SQL Server can do it.
If you have worked with Azure SQL Servers, you have already noticed a setting called Auditing & Threat Detection in the SQL Server blade. It allows you to set up Auditing and, in addition to that, we can enable a feature called Threat Detection, which is still in Preview mode (meaning it will not always work as expected until the final version is released).
Read the post written on Auditing for more info on it: How to check success and failure logins attempts in Azure SQL Database
Threat Detection detects anomalous database activities that are considered security threats and notifies us with details. This means it can detect possible SQL injections as well. However, as per my experience, it does not detect the initial attempts immediately, but later notifications are sent immediately. As per some experts, it needs some time to learn the patterns before notifying, but it is too early to make such a statement as this is still in Preview mode.
Let's see how we can use Threat Detection with an example. I created an Azure SQL database with the following schema and inserted a set of records. Assume that this Sales database contains sales made by sales representatives.
-- Create Users table
CREATE TABLE dbo.Users
(
    UserId int identity(1,1) PRIMARY KEY
    , UserName varchar(10) NOT NULL
    , Password varbinary(1000) NOT NULL
);
GO

INSERT INTO dbo.Users
    (UserName, Password)
VALUES
    ('Dinesh', ENCRYPTBYPASSPHRASE('DK?DF%23:D', 'Dinesh123'))
    , ('Jane', ENCRYPTBYPASSPHRASE('DK?DF%23:D', 'Jane123'))
    , ('Jack', ENCRYPTBYPASSPHRASE('DK?DF%23:D', 'Jack123'));

-- Create Sales table
CREATE TABLE dbo.Sales
(
    SalesId int identity(1,1) PRIMARY KEY
    , UserId int NOT NULL
    , Constraint FK_Sales_Users FOREIGN KEY (UserId) REFERENCES dbo.Users (UserId)
    , SalesDate date NOT NULL
    , SalesAmount decimal(16,2)
);
GO

INSERT INTO dbo.Sales
    (UserId, SalesDate, SalesAmount)
VALUES
    (1, '2016-01-01', 10000.00)
    , (1, '2016-01-02', 12000.00)
    , (1, '2016-01-03', 14000.00)
    , (2, '2016-01-01', 9000.00)
    , (2, '2016-01-02', 110000.00)
    , (3, '2016-01-01', 17000.00)
    , (3, '2016-01-02', 126000.00)
    , (3, '2016-01-03', 19000.00);
The next step is creating a simple web application. I created an ASP.NET Web Application that connects to the Azure SQL Database created above. I added a Web Form that contains two input boxes for accepting a User Id and Password, a button for loading the user's sales, and a GridView for showing the retrieved data.
This is the Button-Click method written;
string sql = "SELECT u.UserName, s.SalesDate, s.SalesAmount FROM dbo.Sales s INNER JOIN dbo.Users u ON s.UserId = u.UserId WHERE ";
sql += " u.UserName = '" + TextBoxUserId.Text + "' AND DECRYPTBYPASSPHRASE('DK?DF%23:D', u.Password) = '" + TextBoxPassword.Text + "'";

string connectionString = ConfigurationManager.ConnectionStrings["DefaultConnection"].ConnectionString;
SqlConnection connection = new SqlConnection(connectionString);
SqlCommand command = new SqlCommand(sql, connection);

connection.Open();
SqlDataReader reader = command.ExecuteReader();

DataTable table = new DataTable();
table.Columns.Add("UserName", System.Type.GetType("System.String"));
table.Columns.Add("SalesDate", System.Type.GetType("System.DateTime"));
table.Columns.Add("SalesAmount", System.Type.GetType("System.Decimal"));

DataRow row;
while (reader.Read())
{
    row = table.NewRow();
    row["UserName"] = reader["UserName"].ToString();
    row["SalesDate"] = Convert.ToDateTime(reader["SalesDate"]);
    row["SalesAmount"] = Convert.ToDecimal(reader["SalesAmount"]);
    table.Rows.Add(row);
}

GridView1.DataSource = table;
GridView1.DataBind();
connection.Close();
As you see, the SqlCommand is not a parameterized command, hence an attacker can use a simple SQL injection to modify the statement and get a different set of data.
All done. If I check for User:Dinesh, I will be seeing Sales related to him.
However, if I use a SQL injection like the one below, I can get all sales records regardless of the user.
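The exact input shown in the original screenshot is not reproduced here, but as a hypothetical illustration, typing something like  ' OR 1=1 --  into the User Id box would turn the generated statement into the following, where the comment marker cuts off the password check and the condition matches every row;

-- Statement produced when the User Id box contains:  ' OR 1=1 --
SELECT u.UserName, s.SalesDate, s.SalesAmount
FROM dbo.Sales s
INNER JOIN dbo.Users u ON s.UserId = u.UserId
WHERE u.UserName = '' OR 1=1 --' AND DECRYPTBYPASSPHRASE('DK?DF%23:D', u.Password) = '...'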
We are not going to talk about the SQL injection itself or the poor code written in the web application. Let's see how we can get a notification from Azure when something like this is tried by someone.
As you know, all we have to do is enable Threat Detection. This is how it has to be set up in the Azure SQL Server blade via the Auditing & Threat Detection setting.
Everything required is done. As you see, Threat Detection is ON and all Threat Detection types have been selected. I have added my email address for receiving alerts. Note that you need to enable Auditing for enabling Threat Detection, and it should be configured for all audited events.
Now, if you try again with a SQL injection, you will immediately get a mail with details;
I can click on View Related SQL Statements to see more details. If I click it, it opens the Audit Records blade that shows all recorded activities.
As you see, it is very easy to set up and you can be alerted on all suspicious activities. Try and see.
Saturday, December 31, 2016
Azure SQL Databases - Prevent accidental deletes - Azure Locks
Have you ever deleted a database accidentally? If you have, then you know very well how uncomfortable it makes you :). If you are managing a very important database and need to make sure that even authorized users cannot easily delete it without taking an additional action, Azure has a way of configuring that. It is called Azure Locks.
Azure Locks is part of Azure Resource Manager. It allows us to set locks in two ways;
- Read-only - makes sure that authorized users can read the resource but editing and deleting are not possible.
- Delete - makes sure that authorized users CANNOT DELETE the resources.
For example, if you set a Delete lock on one of your Azure SQL Servers, no resource available under that SQL Server, such as a database, can be deleted until the lock is released. You can do the same for a database by setting the lock at the database level instead of the server level as well.
Here is an example. If we need to set a Delete lock on one of our SQL Servers, open the Server Properties blade and click on the Locks property.
Once the Locks blade is opened, we can add a lock by clicking the Add button. As shown in the above image, I have set its name as DeleteLock, the Lock type as Delete, and added a small note. Once this is saved, the lock gets applied to all resources that come under the selected SQL Server.
If I try to delete a database in that server, Azure Resource Manager makes sure that I cannot delete the database until I remove the lock.
Friday, December 23, 2016
Delete All Azure SQL Server Firewall Rules using TSQL
In order to access Azure SQL Server databases, firewall rules have to be set at either the server level or the database level by adding client IP addresses. Server-level rules can be configured using the Azure Portal, but database-level rules have to be added using TSQL. If you have added many rules and you need to delete all or some of them, using the portal will take a long time because the portal allows you to delete only one at a time.
Assume that you have a number of server-level rules configured as below;
I was asked about this, and here is the script I wrote to delete the selected server-level firewall rules.
DECLARE @name nvarchar(200)

WHILE EXISTS (SELECT name FROM sys.firewall_rules WHERE name LIKE 'clientIPAddress_%')
BEGIN

    SELECT TOP (1) @name = name
    FROM sys.firewall_rules
    WHERE name LIKE 'clientIPAddress_%'

    EXEC sp_delete_firewall_rule @name;
END
It is a simple one, but it lets you avoid the portal. Make sure you have written the WHERE condition properly, otherwise it might delete rules you still need.
Thursday, December 22, 2016
Azure SQL Database - Enable TDE - Does it encrypt the bacpac file?
We use Transparent Data Encryption (TDE) for protecting our databases, making sure that no one can attach our databases to a different instance even if they manage to get either the data files (.mdf) or backup files (.bak). TDE makes sure that data in the data files is encrypted; this is called encrypting data at rest. The standard way of enabling this is (a T-SQL sketch of these steps follows the list);
- Create a master key in master database.
- Create (or add an existing one from a copy) a certificate protected by the master key.
- Create a database encryption key protected by the certificate, in the database that needs to be protected.
- Enable encryption in the database.
You can read more on this at: https://msdn.microsoft.com/en-us/library/bb934049.aspx
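For reference, here is a minimal T-SQL sketch of those four on-premises steps; the certificate name, database name and password are placeholders.

-- 1. Master key in the master database
USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'StR0ng_Pa$$w0rd';

-- 2. Certificate protected by the master key
CREATE CERTIFICATE TDECert WITH SUBJECT = 'TDE Certificate';
GO

-- 3. Database encryption key in the database that needs to be protected
USE Sales;
CREATE DATABASE ENCRYPTION KEY
WITH ALGORITHM = AES_256
ENCRYPTION BY SERVER CERTIFICATE TDECert;
GO

-- 4. Enable encryption
ALTER DATABASE Sales SET ENCRYPTION ON;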
This is different with Azure SQL Database. Azure allows us to enable TDE on an Azure SQL Database without steps like the above; it can simply be enabled using the portal.
Although we do not create the certificate and the database encryption key manually, the storage of the database is encrypted using a symmetric key that is protected by a built-in server certificate in the SQL Server.
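If you want to confirm that the database is actually encrypted, a query like the following against sys.dm_database_encryption_keys should show an encryption_state of 3 (encrypted) for the database;

-- Encryption state of the databases (3 = encrypted)
SELECT DB_NAME(database_id) AS DatabaseName, encryption_state, key_algorithm, key_length
FROM sys.dm_database_encryption_keys;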
Not only that, the main difference compared with a TDE-enabled on-premises database is in copying the database from the portal to a local environment. If the database is not an Azure SQL Database and you take a backup of it, you will not be able to restore it on another instance without the certificate created for TDE. But if the database is an Azure SQL Database, when you export it to a bacpac file, it can be loaded into any instance without the Azure SQL Server level certificate used to protect it. See the images below;
1. Exporting the bacpac to classic storage.
2. Loading the database to the on-premises server using the Import Data-tier Application... wizard.
3. Getting the database successfully added.
The reason for this is that, when exporting a TDE-enabled Azure SQL Database, the bacpac file does not get encrypted, hence we need to make sure that it is secured.
Tuesday, December 6, 2016
Azure SQL Data Warehouse - Part I - Creating and Connecting
We have been implementing on-premises data warehouses for years, and everyone is well aware of the difficulties faced in the first phase of the project, which is requesting hardware for the project. In most cases, the project is either stopped or postponed by the management or the client saying there is no budget. But now we have a solution that addresses this initial issue. Not only that, this solution gives you many benefits in terms of performance as well. It is Azure SQL Data Warehouse.
Azure SQL Data Warehouse is a cloud-based SQL data warehouse which can be easily scaled with cloud-based resources and paid for as it grows. It is a fully managed service that covers maintenance, software patches and backups. It can store and process both relational and non-relational data, and many Azure features such as Azure Blob Storage and Azure Data Factory can be used for implementing it.
Let's talk about some benefits in detail;
Scalability and Cost
This is one of the major benefits. You can scale Azure SQL Data Warehouse, increasing resources when required, and the cost will be calculated based on it. Since there is no upfront cost involved, you can simply start with a small amount of resources and adjust when you want. You will be charged for two resource usage types: Storage and Compute.
The storage configured for Azure SQL Data Warehouse is automatically re-scaled as you add and remove data, hence you pay for what you use.
The value set for Compute is the measurement of execution performance in Azure SQL Data Warehouse. It is measured in Data Warehouse Units (DWU), and you can start anywhere from 100 DWU (USD 2 per hour) up to 6000 DWU (USD 123 per hour) as per the requirements. This value can be changed at any time, and the cost of the usage will be calculated as per the value set.
Read more on DWU at: https://azure.microsoft.com/en-gb/pricing/details/sql-data-warehouse/
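As a sketch of how the compute value can be changed with T-SQL once the data warehouse exists (the database name and service objective below are example values, and the statement is typically run against the master database of the logical server);

-- Scale the data warehouse to 200 DWU
ALTER DATABASE AdventureWorksDW
MODIFY (SERVICE_OBJECTIVE = 'DW200');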
Performance
Since this can be scaled out as you need, data and queries can be distributed, increasing performance. It uses a Massively Parallel Processing (MPP) architecture, hence the workload can be spread across many nodes using complex statistics calculated, increasing performance significantly.
It uses Columnstore indexing, which is up to 10 times faster than rowstore indexing. Since it supports about 5 times more compression than standard rowstore indexes, reducing IO, it improves performance further.
Hybrid Cloud
It is a hybrid cloud environment. It supports seamless integration with on-premises data, cloud data and, of course, unstructured data. With PolyBase, different types of sources can be queried using TSQL as if the data were stored locally, making sure that everything required is visible through a single database.
Not only that, copying data from other sources can be easily done via Azure Data Factory.
Okay, now let's see how we can create one and start using it. First of all, you need an Azure account. Once an account is created, you need to create a SQL Server, which is a logical server for holding your data warehouse. If you have already created a SQL Server, then it can be used; otherwise, follow the steps below for creating a logical server.
1. Log in to the Azure Portal and open the SQL Servers blade.
2. Set the Server name, Server admin login, Password, Resource Group and Location. Note that you can add database-specific logins later.
3. Click on the Create button to get it created. If you want to see the server on the Azure Dashboard, make sure that Pin to dashboard is checked.
Now the server is ready. Let's create a data warehouse.
1. Open the SQL Data Warehouses blade.
2. Enter the Database Name. For this test, we will be using AdventureWorks (we will be creating different databases later), hence set the name as AdventureWorksDW.
3. Select the appropriate Resource Group. Make sure you select the same one selected for the server.
4. Select Sample for Select source.
5. Select AdventureWorksDW as the sample database.
6. Select the server created with the above steps and enter the admin account and its password.
7. Select the DWU as you need. Let's initially set this to 200.
8. Select Pin to dashboard and click Create to get the data warehouse created.
The database is now created. Remember, you cannot connect to the configured server until your IP address is permitted to connect. Initially, you can configure your IP address on the server using the portal. Later, you can connect to the server using Management Studio and configure IP addresses for others using TSQL, either at the server level or the database level. Here are the steps.
1. Open the portal and open SQL Servers blade.
2. Click on the server created (or the one used for adding the data warehouse) to get its properties.
3. You can add your IP address by clicking the Add client IP button and then the Save button. Or else, you can manually set it using the given grid.
4. Add it and save.
Now the firewall rule is configured for you. Let's connect and see.
1. Open Management Studio and connect with the server account as below.
2. Once connected, open a new query window and type the below code to see the firewall rules that have been set.
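The post does not show the exact statement used here, but presumably it is something along these lines, querying the firewall-rule catalog views;

-- Server-level rules (run in master)
SELECT * FROM sys.firewall_rules;

-- Database-level rules (run in the user database)
SELECT * FROM sys.database_firewall_rules;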
3. If you need to add more rules for others, use sp_set_firewall_rule for server-level rules and sp_set_database_firewall_rule for adding database-level rules.
-- Adding server level rules
EXECUTE sp_set_firewall_rule
    @name = N'dinesqlserverAdminRule',
    @start_ip_address = '192.168.10.100',
    @end_ip_address = '192.168.10.100'

-- Use this for removing it
-- EXECUTE sp_delete_firewall_rule @name = N'dinesqlserverAdminRule'

-- Adding database level rules
-- *** Connect with the database and execute
EXEC sp_set_database_firewall_rule
    @name = N'dinesqlserverDevelopersRule',
    @start_ip_address = '192.168.10.220',
    @end_ip_address = '192.168.10.225'

-- Use this for removing it
-- EXEC sp_delete_database_firewall_rule @name = N'dinesqlserverDevelopersRule'
4. Query your database and see now. You should be able to query your table without any issue.
With the next post, let's see how we can use other tools like Visual Studio and Power BI for connecting with the data warehouse created.
Friday, October 7, 2016
How to check success and failure login attempts in Azure SQL Database
Who tried to access my Azure SQL Database? Did they succeed? Can I get more details on the attempts?
If you are maintaining an Azure SQL Database, you should be asking yourself these questions. Or, practically, you need to ask these questions in order to protect your database. Generally, with an on-premises instance, we get this information using either Audit or Logon Triggers. Can we get the same from Azure SQL Database?
Azure SQL Database has Auditing, which can be easily enabled via the portal. It does not support Logon Triggers, and notifications cannot be sent via email as it does not support Database Mail. However, Audit captures everything you need, and there are a few ways of seeing the captured records.
In order to see the required information for my databases, I enabled Audit and then used Azure Storage Explorer to see all the details, because it is the easiest way. If you need the same, here is the way of doing it;
1. Open the Azure Web Portal and open SQL Server blade.
2. Click on the server you need to audit and select Auditing and Threat detection.
3. Enable Auditing. Select Table as Auditing type because it is easy to read.
4. Create a new storage for storing audit records.
5. Set the Retention (Days) as you need and set the Table name as you like.
6. Select Audited Events and check both Success and Failure check-boxes.
7. Select all Threat Detection Types.
8. Click on Save to save all settings.
As I mentioned above, there are multiple ways of getting audit records into our environment. Let's use Azure Storage Explorer and see how we can view them. We will see how to use other channels like Power BI in another post later.
1. Open Azure Storage Explorer and configure the connection for the storage you use. Read more on this at: http://dinesql.blogspot.com/2016/09/microsoft-azure-storage-explorer-latest-version.html
2. You should be able to see all required details related to login attempts.
Tuesday, October 4, 2016
What are dbmanager and loginmanager database roles in Azure SQL Database instance
Have you noticed two new database roles appearing in the master database? You might have, you might not have, because these two appear only with an Azure SQL Database instance. What are they and what is their usage?
dbmanager database role
An Azure SQL Database instance does not offer any server-level roles, and no user can be created similar to the sa account we have seen with on-premises instances. Since we do not have administrative control over the Azure SQL Database instance, the dbmanager role is given to us inside the virtual master database. Generally, when the instance is created using the portal, the account added becomes the admin of the instance and that account is added to the master database. If you need more administrators who can create databases, then you need to create a new user and add the user to this role.
Here are the steps. You need to make sure that you are connected to the master database.
CREATE USER Jane WITH PASSWORD = 'Pa$$w0rd';

ALTER ROLE dbmanager ADD MEMBER Jane;
loginmanager database role
Just like the above, if you need users who can perform operations related to users, such as creating and altering user accounts, create a user and add them to this role.
CREATE USER Jack WITH PASSWORD = 'Pa$$w0rd';

ALTER ROLE loginmanager ADD MEMBER Jack;
However, this is not much required or important with Azure SQL Databases, as almost all databases are treated as contained databases. Therefore, the required permissions can simply be granted to the user as below, without creating user accounts in master and adding them to the loginmanager role.
GRANT ALTER ANY USER TO Jack;
Friday, July 22, 2016
What is CTAS, CRTAS and CETAS? - SQL Server CREATE TABLE AS SELECT
I am sure that everyone is well aware of the SELECT INTO statement. This statement allows us to create a table using a SELECT statement, just like below;
USE WideWorldImporters;
GO

SELECT o.OrderId, SUM(ol.UnitPrice * ol.Quantity) OrderAmount
INTO dbo.OrderAmount
FROM Sales.Orders o
    INNER JOIN Sales.OrderLines ol
        ON o.OrderID = ol.OrderID
GROUP BY o.OrderId;

SELECT * FROM dbo.OrderAmount;
However, this has many limitations, specifically on the construction of the new table. We cannot add some elements, such as constraints and indexes, when we get the table created using SELECT INTO.
SQL Server 2016 makes it possible to create the table with more control, introducing CREATE TABLE AS SELECT. But, unfortunately, this statement is available only with Azure SQL Data Warehouse and Parallel Data Warehouse. This statement is referred to as CTAS.
Here is an example;
CREATE TABLE myTable
WITH
(
    CLUSTERED COLUMNSTORE INDEX,
    DISTRIBUTION = HASH (CustomerKey)
)
AS
SELECT * FROM dimCustomer;
You can read more on this at: CREATE TABLE AS SELECT (Azure SQL Data Warehouse).
There are two more abbreviations like this: CRTAS and CETAS.
CRTAS stands for CREATE REMOTE TABLE AS SELECT, which allows creating a table in a remote SMP SQL Server and populating it with data from a table in Parallel Data Warehouse.
Read more on this at: CREATE REMOTE TABLE AS SELECT (Parallel Data Warehouse).
CETAS stands for CREATE EXTERNAL TABLE AS SELECT, which allows creating an external table and exporting data into Hadoop or Azure Blob Storage. Again, this is available only with Azure SQL Data Warehouse and Parallel Data Warehouse.
Read more on this at: CREATE EXTERNAL TABLE AS SELECT.
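As a rough sketch of the CETAS form (the table name, external data source and file format below are assumed, pre-created placeholder objects, not ones from this post);

CREATE EXTERNAL TABLE dbo.SalesExport
WITH
(
    LOCATION = '/sales/2016/',
    DATA_SOURCE = AzureBlobStore,
    FILE_FORMAT = TextFileFormat
)
AS
SELECT SalesId, UserId, SalesDate, SalesAmount
FROM dbo.Sales;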
Thursday, July 21, 2016
Two new Free SQL Server / PowerBI / Azure related books
There are two new free eBooks available for download for those who want to learn Power BI and Azure SQL Database. Here are the links; they are worth downloading and reading;
Free ebook: Introducing Microsoft Power BI
Free ebook: Microsoft Azure Essentials Migrating SQL Server Databases to Azure
Thursday, July 7, 2016
Azure SQL Database - Cannot connect to database.windows.net.
Are you experiencing this error when trying to connect to your Azure SQL Database?
TITLE: Connect to Server
------------------------------
Cannot connect to database.database.windows.net.
------------------------------
ADDITIONAL INFORMATION:
A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server) (Microsoft SQL Server, Error: 53)
For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft%20SQL%20Server&EvtSrc=MSSQLServer&EvtID=53&LinkId=20476
------------------------------
The network path was not found
There can be various reasons for this, but apart from general technical reasons, you need to check one thing: whether you have previously connected to another database that no longer exists.
For example, you might have taken a snapshot of your database and connected to it using Management Studio. For connecting, you need to go through Options and set the database name there. Once set, SSMS keeps it, and the next time you try to log in, even though you may not notice, it might try to connect to the snapshot you connected to before. If it exists, it will connect; otherwise it will throw the above error without giving you many details.
So, check that before going into technical analysis.
Wednesday, May 4, 2016
Could not load schema model from package. (Microsoft.SqlServer.Dac)
I once wrote a post on how to take a copy of an Azure SQL Database to an on-premises server: Taking backups of Azure SQL Database and restoring to on-premise server. It was a simple method; all we have to do is export the database into a bacpac file and add it to the server using Import Data-tier Application...
With a new operating system installation, I was trying to do the same but was hit by an error;
Could not load schema model from package. (Microsoft.SqlServer.Dac)
Internal Error. The database platform service with type
Microsoft.Data.Tools.Schema.Sql.SqlAzureV12DatabaseSchemaProvider is not valid.....
I immediately realized the issue; it was all about the version of Management Studio. I tried the same with the Management Studio April Preview and I was able to add the database exported from Azure.
As per the reading I did, if you try with Management Studio that comes with SQL Server 2014, you need Service Pack 1 and CU 5 or above.
Wednesday, April 20, 2016
SQL Server 2016 - Connecting to Azure SQL Server using SSMS - RC3
Please note that there is an issue with connecting to Azure SQL Server using the latest RC3 build. If you are experiencing the following error, note that it is an issue with the build, not something related to your system.
An error occurred while creating a new firewall rule. (ConnectionDlg)...
There are no other options; if you need to connect to your Azure SQL Server, you need to open the portal and add your IP address using the Firewall Settings.