One of the issues that many of our customers face when attempting to migrate OnPremise instances to the Cloud is the lack of a simple “shared storage”. Although there are some alternatives based on third-party software or SDS solutions that allow us to configure a Failover Cluster instance in Azure, they are highly complex and therefore add significant further costs to the solution’s TCO.

For that reason, a couple of years ago we already published a blog article on this configuration using Azure Files, “SQL Server Failover Cluster Instance y Azure File Storage” [in Spanish], and an S2D-based alternative was covered in another post a year ago, “Azure cocktail: SQL Server Failover Cluster, Storage Spaces Direct (S2D) y templates ARM” [in Spanish]. In both cases, having shared storage with good performance would greatly simplify matters, allowing us to support SQL-type loads that are unreasonable for “non-premium” Azure Files performance (and which are, unfortunately, the norm).

In this article, we will demonstrate how to create a SQL Server Failover Cluster instance based on Azure Files Premium (preview). We will also show you some captures of the performance levels that can be obtained from this type of storage. We must always bear in mind that each SQL Server load is “particular” in its own way: without exhaustive monitoring of the OnPremise server, it is easy to overlook specific input/output requirements that are much higher than the average values usually considered.

Configuring Azure Files Premium is rather simple: we only need to create the shares and assign them a quota (the maximum share size), which is used to calculate their performance limits:

Our maximum throughput is calculated using the formula 100 MB/s + 0.1 MB/s per provisioned GB. Therefore, for 1 TB we could expect to obtain a maximum throughput of roughly 200 MB/s. Baseline IOPS are calculated as one IOPS for each provisioned GB, and they can burst up to 3 IOPS per GB. For example, with a 1 TB share we would obtain 1,000 baseline IOPS with a 3,000 IOPS burst.
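As a reference, creating a premium share of this kind can also be scripted; the following is a minimal sketch using the current Az PowerShell cmdlets, where the resource group and location are illustrative (the storage account and share names match the ones used later in this article):

# Premium file shares require a FileStorage account with a Premium SKU
New-AzStorageAccount -ResourceGroupName sqlfci-rg -Name premiumazurefiles `
    -Location westeurope -SkuName Premium_LRS -Kind FileStorage

# Create a 1 TB share; the provisioned quota determines its performance limits
New-AzRmStorageShare -ResourceGroupName sqlfci-rg -StorageAccountName premiumazurefiles `
    -Name data1tb -QuotaGiB 1024

# Expected limits for 1,024 GiB: ~100 + 0.1 * 1024 ≈ 202 MB/s throughput,
# 1,024 baseline IOPS and a burst of up to 3,072 IOPS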

Once the shares have been created, we will be able to access them from a virtual machine and map them using the storage account key provided by Azure. For example, we could map the “data1tb” share to drive Z: using the net use command:

net use Z: \\premiumazurefiles.file.core.windows.net\data1tb /u:AZURE\premiumazurefiles 0Ins0mAslDeEeY2/tQG0bl3lQ+vDpAlXt3JvSXkb8lsAjhkV2PLCahsJBzgVt8NXf8wNvlpjeP6FaoXnUhxCHg==

If we want the machine to be able to map the share automatically, we must store the required credentials using the cmdkey command:

cmdkey /add:premiumazurefiles.file.core.windows.net /user:AZURE\premiumazurefiles /pass:0Ins0mAslDeEeY2/tQG0bl3lQ+vDpAlXt3JvSXkb8lsAjhkV2PLCahsJBzgVt8NXf8wNvlpjeP6FaoXnUhxCHg==

We must bear in mind that this credential store is linked to the Windows user account. Therefore, if we want to use the credentials with a service account, such as the SQL Server service account, we will need to “impersonate” that user before running the command. The same applies if we want to grant access to system accounts such as the Local System account. In that particular case, we can use the SysInternals PsExec tool (https://docs.microsoft.com/en-us/sysinternals/downloads/psexec) with the -s parameter, which runs the command precisely as the Local System account.
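For reference, a call along these lines (a sketch, assuming PsExec is in the current directory; <storage-account-key> stands for the storage account key shown above) registers the credentials for the Local System account:

# Run cmdkey as Local System (-s) so services running under that account can reach the share
.\PsExec.exe -s cmd.exe /c "cmdkey /add:premiumazurefiles.file.core.windows.net /user:AZURE\premiumazurefiles /pass:<storage-account-key>"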

Once the share has become accessible, we can now install SQL Server and undergo some performance tests.

Failover cluster installations in Azure are done in a similar manner to OnPremise ones, with only a few differences. One of these differences is the lack of native floating IP support. For that reason, we will have to provide a “simulated floating IP” by using an internal load balancer (ILB): https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sql/virtual-machines-windows-portal-sql-create-failover-cluster#step-5-create-azure-load-balancer
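Besides creating the ILB itself, the linked procedure also points the cluster IP address resource at the load balancer’s health probe. A minimal sketch of that step (the resource name, IP address and probe port are placeholders that must match your own ILB configuration):

# Bind the SQL FCI IP address resource to the ILB frontend IP and health probe port
Import-Module FailoverClusters
Get-ClusterResource "SQL IP Address 1 (sqlfci)" |
    Set-ClusterParameter -Multiple @{
        "Address"    = "10.0.0.20"         # ILB frontend IP
        "ProbePort"  = 59999               # must match the ILB health probe
        "SubnetMask" = "255.255.255.255"
        "Network"    = "Cluster Network 1"
        "EnableDhcp" = 0
    }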

Another main difference is that failover cluster instances are usually configured on disks that are connected to the nodes and shared across the cluster, via Fibre Channel, iSCSI, etc. In this case, we will be using Azure Files Premium (preview) in order to provide this shared storage via SMB 3.0.
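Once a node has mapped the share, we can verify the SMB dialect actually negotiated; a quick check (a sketch using the server name from the examples above):

# The Dialect column should report 3.0 or higher for Azure Files
Get-SmbConnection -ServerName premiumazurefiles.file.core.windows.net |
    Select-Object ServerName, ShareName, Dialect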

Our first step will be to configure a Windows cluster with just two nodes, sql2 and sql3, in addition to a virtual machine for Active Directory:

Once the Failover Clustering feature has been installed, we will create a cluster by adding the two nodes:
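This can be done from Failover Cluster Manager or from PowerShell; a minimal sketch (the cluster name and static IP are illustrative):

# Install the Failover Clustering feature on each node
Install-WindowsFeature -Name Failover-Clustering -IncludeManagementTools

# Create the cluster with the two SQL nodes and no shared disks
New-Cluster -Name sqlclust -Node sql2, sql3 -StaticAddress 10.0.0.10 -NoStorage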

In cases where we have an even number of nodes, it is very convenient to include an additional vote to break a tie should one of them fail. So, in our example, I have opted for a “cloud witness” using an independent storage account. Ideally, this Azure storage account should be located in a different data center in order to maximize availability:
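Configuring the cloud witness only requires the name and key of that independent storage account; a sketch (both values are placeholders):

# Point the cluster quorum at a cloud witness backed by a separate storage account
Set-ClusterQuorum -CloudWitness -AccountName "witnessstorageaccount" -AccessKey "<witness-storage-account-key>"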

Once the cluster has been configured, we will perform a normal clustered SQL Server installation, with the only difference being that there will be no available drives in the shared disk section:

When configuring the paths to the system databases, we will need to specify the UNC path to the shared drive we have created:

These warnings only indicate that it was not possible to check whether or not we have permissions on that folder. Once we have gone past this point, the rest of the installation continues as usual. In the end, if we check the location of our databases, we will see that they are actually located on the above-mentioned UNC path:
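A quick way to double-check where every database file ended up; a sketch assuming the SqlServer PowerShell module and a hypothetical FCI network name “sqlfci”:

# List the physical location of every database file on the instance
Invoke-Sqlcmd -ServerInstance "sqlfci" -Query @"
SELECT DB_NAME(database_id) AS [database], name, physical_name
FROM sys.master_files
ORDER BY database_id;
"@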

An exception to this is the tempdb database (located on the C: drive, although it would be better placed on D: in order to take advantage of the machine’s local SSD) and resourcedb, an internal instance database that is “part of the local node installation”. Next, we will run a series of small tests on scenarios that are “complex” for this type of setup: basically, those that are sensitive to disk access latency, such as small writes to the transaction log. For that, we will create a small table, run a series of sequential inserts and compare the times obtained with those we could obtain in an OnPremise instance:

create database testAzureFiles 
go
use testAzureFiles
go
create table test (id int identity (1,1) primary key)
go
set statistics time off 
set nocount on 
insert into test default values 
go 1000
insert into test default values 
go 10000
insert into test default values 
go 100000

The times obtained were as follows:

As you can see, in these scenarios the disk write latency dominates, and with the higher latencies in this type of setup, performance becomes comparatively worse than in OnPremise environments. Performance is, however, very similar for both the 1 TB and the 5 TB shares, since their latencies are actually very similar. Some examples of these latency-sensitive operations would be invoicing processes that iterate serially, write cursors, etc.

In the case of bulk operations, the situation is slightly better, with smaller differences, although still slower than the average OnPremise environment. It is therefore important to test mass data import processes in order to verify whether their performance is acceptable in an Azure environment such as this:

create database testAzureFiles2
go
use testAzureFiles2
go
create table test (id int identity (1,1) primary key)
go
set nocount on 

select top (1000000) replicate('a',1000) a into temp from sys.objects s1, sys.objects s2,  sys.objects s3, sys.objects s4
select top (10000000) replicate('a',1000) a into temp2 from sys.objects s1, sys.objects s2,  sys.objects s3, sys.objects s4

 

We will now move on to the synthetic tests and, to keep it short, we will focus on two: sequential reads and writes, to verify the throughput that can be achieved, and random small 8 KB operations, to validate the number of read/write IOPS that can be achieved. It is also interesting to analyze the latencies of these operations in order to see how stable they are. To carry out these tests, we will use DiskSpd (https://gallery.technet.microsoft.com/DiskSpd-a-robust-storage-6cd2f223) with the -L parameter in order to obtain detailed latencies.
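An illustrative DiskSpd invocation for the random 8 KB case (the parameters are representative rather than the exact ones used; -Sh disables software and hardware caching and -L collects per-operation latency statistics):

# Random 8 KB I/O, 70% reads / 30% writes, 8 threads, 8 outstanding I/Os per thread,
# 60 seconds against a 10 GB test file on the mapped premium share
.\diskspd.exe -b8K -d60 -o8 -t8 -r -w30 -Sh -L -c10G Z:\iotest.dat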

As you can see, the latencies looked rather high, already at 71 ms in the 90th percentile:

When we analyzed the behavior of each thread, we found something strange: the even-numbered threads were achieving far fewer IOPS than the odd-numbered ones:

This was due to using a small burstable B-series VM, which results in poorer input/output performance. In order to avoid results that fall below our expectations, we must also consider the machine’s own limitations, such as whether it has HT or not, etc. After changing to a DS6v3, we were able to obtain significantly better values, both in terms of I/O distribution across threads and in terms of latency stability, which remained under 4 ms in reads up to the 95th percentile:

In the case of writes, the values remained under 7 ms up to the 95th percentile:

We must also bear in mind that we are only discussing a preview, so it is highly likely that these one-off latency spikes will improve once the final public version becomes available. Ideally, the maximum values we see at the 99th percentile should not be so high and should stay within a few milliseconds at most. For example, if we ran this test on a local drive, it would be unlikely to reach three-digit figures (in milliseconds) even in the worst of cases:

However, as a curiosity, we can see two threads that perform a clearly lower number of operations than the rest, and with higher latency:

Since our goal was never to analyze this issue, we will only say that, in this case, those cores are being “throttled” by the load of a separate OnPremise virtual machine running on the same physical cores, which was “stealing” CPU cycles from the virtual cores of this virtual machine. It is rather common to find “unusual” behaviors like this when squeezing virtual machines, caused by “noisy neighbors” on the same host.

Going back to the Azure Files Premium (preview) performance, we conclude that latency is about 4 ms for reads and 7 ms for writes, with some one-off spikes. If we analyze the performance of each share we have created and compare it with the theoretical performance levels we calculated, we obtain the following:
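For reference, the theoretical limits for each provisioned size follow directly from the formula given earlier; a quick sketch that derives them (the sizes approximate the 500 GB, 1 TB, 2 TB and 5 TB shares used in the tests):

# Baseline IOPS = provisioned GiB, burst = 3x, throughput = 100 MB/s + 0.1 MB/s per GiB
foreach ($gib in 500, 1024, 2048, 5120) {
    [pscustomobject]@{
        ProvisionedGiB = $gib
        BaselineIOPS   = $gib
        BurstIOPS      = 3 * $gib
        ThroughputMBps = [math]::Round(100 + 0.1 * $gib)
    }
}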

We can see that the limits are respected at the IOPS level and the burst speeds can be reached without any issues. In the case of the maximum read and write throughputs, these are exceeded on the 500 GB, 1 TB, and 2 TB shares. Write speeds are generally lower than read speeds, but this is not uncommon (just like in a RAID 10). Only in the case of the 5 TB share (the maximum size allowed in the preview) are we unable to reach even the theoretical read values, let alone the write values. It is possible that several network cards would be needed at the VM level in order to reach the maximum speeds, but we have not managed to verify this yet. In any case, we are talking about really good performance levels compared to “standard” Azure Files, good enough to cover many medium-sized OLTP database scenarios. Therefore, we would highly recommend considering this solution in order to simplify the infrastructure and facilitate the migration of OnPremise SQL Server Failover Cluster instances to the Cloud, provided that the input/output loads are suitable.

As a potentially very interesting improvement to Azure Files Premium, we would suggest being able to combine it with the new Ultra SSD disk functionality (https://azure.microsoft.com/en-us/blog/announcing-ultra-ssd-the-next-generation-of-azure-disks-technology-preview/), which allows users to configure IOPS performance from 100 up to 160,000, with rather high values achievable even with relatively small disk sizes:

Ultra SSD Managed Disk Offerings:

- 4 GiB: 100-1,200 IOPS, throughput cap 300 MBps
- 8 GiB: 100-2,400 IOPS, throughput cap 600 MBps
- 16 GiB: 100-4,800 IOPS, throughput cap 1,200 MBps
- 32 GiB: 100-9,600 IOPS, throughput cap 2,000 MBps
- 64 GiB: 100-19,200 IOPS, throughput cap 2,000 MBps
- 128 GiB: 100-38,400 IOPS, throughput cap 2,000 MBps
- 256 GiB: 100-76,800 IOPS, throughput cap 2,000 MBps
- 512 GiB: 100-153,600 IOPS, throughput cap 2,000 MBps
- 1,024-65,536 GiB (in increments of 1 TiB): 100-160,000 IOPS, throughput cap 2,000 MBps

Summing up, Azure Files Premium opens up a range of possibilities for migrating clustered SQL Server instances to Azure. Its performance is reasonably good, similar to that of many mid-range iSCSI storage systems available on the market. However, for loads that are highly sensitive to disk access latency (e.g., extreme OLTP instances), it would not be recommended (we would have similar issues with S2D solutions, iSCSI in the Cloud, etc.).

In these cases, if we want to move to Azure, we will probably need to avoid a clustered instance and use independent instances instead, with Ultra SSD disks (or Storage Pools built by adding premium disks). We strongly recommend measuring the performance levels of your OnPremise system prior to any migration: for example, a correctly configured DW will more than likely exceed the 2 GB/s to which the new Ultra SSD disks are restricted.

Rubén Garrigós

Rubén Garrigós is an expert in high-availability enterprise solutions based on SQL Server design, tuning, and troubleshooting. Over the past fifteen years, he has worked with Microsoft data access technologies in leading companies around the world. He is currently a Microsoft SQL Server and .NET applications architect with SolidQ. Rubén is certified by Microsoft as a Solution Expert on the Microsoft Data Platform (MCSE: Data Platform) and as a Solution Expert on the Microsoft Private Cloud (MCSE: Private Cloud). As a Microsoft Certified Trainer (MCT), Rubén has taught multiple official Microsoft courses as well as other courses specializing in SQL Server. He has also presented sessions at official events for various Microsoft technologies user groups.