Microsoft Azure stands out as a premier cloud provider, offering a comprehensive suite of services spanning Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS) solutions, including Office 365. Its robust administration tools, extensive API support, and diverse array of developer-focused services make it a top choice for businesses worldwide.
At ClicData, we’ve embraced Azure’s offerings, particularly its Database as a Service (DaaS) solutions. While traditional database setups involve significant costs and complexities, DaaS streamlines operations by providing scalable, managed database solutions without the need for extensive infrastructure investments.
This article delves into one of Azure’s most critical database services, Azure SQL Database, and explores the decision-making process between its two main purchasing models: vCore and DTU.
Database as a Service (DaaS)
Buying a license for database software and installing it on a server is only a small portion of the cost and effort.
Most database licenses are not cheap to begin with, so you can imagine the total bill once it includes items such as:
- Physical or Virtual Servers
- Operating System Licenses
- Backup Devices
- Monitoring Services/Servers
- Maintenance and Administration Staff
- High Availability Replication Servers and Databases
Next, there is the real work: making sure that the database itself is performant, secure, and replicated properly, testing restore and recovery procedures, and so forth.
Microsoft Azure currently offers a wide array of database-as-a-service products, as does Amazon with its RDS line, among others.
Why Microsoft Azure SQL Database?
ClicData runs user data exclusively on MS Azure SQL Database because it is simply the most advanced and best-performing database on the market.
Some of the following features either make it unique or are decidedly better implemented than in competing products (a short illustration follows the list):
- Dynamic Memory Caches
- Columnstore Indexes
- Transparent Data Encryption
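As a quick illustration of two of these features, here is how a clustered columnstore index is created, and how the state of Transparent Data Encryption can be checked from inside the database. The table name is hypothetical; the statements themselves are standard Azure SQL Database T-SQL.

```sql
-- Hypothetical table name, for illustration only. Columnstore indexes
-- store data column by column, which compresses well and speeds up
-- aggregation-heavy analytical queries.
CREATE CLUSTERED COLUMNSTORE INDEX CCI_FactSales
    ON dbo.FactSales;

-- Transparent Data Encryption is on by default for new Azure SQL
-- databases; encryption_state = 3 means the database is encrypted.
SELECT d.name, dek.encryption_state
FROM   sys.dm_database_encryption_keys AS dek
JOIN   sys.databases AS d
       ON d.database_id = dek.database_id;
```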
And in ClicData’s specific case, because we mostly use .NET Core and Azure technology, selecting this database also makes sense from a technology-stack point of view, since all the products are Microsoft-made.
Finally, we also wanted a Database as a Service provider that covers as many countries as possible.
As each region and country starts enforcing stronger privacy and data location legislation, we want to make sure that the data resides as close as possible to our customers.
DTU vs vCore: Database Performance and Sizing
Database sizing and configuration is not easy.
It requires many iterations of testing and monitoring to fine-tune a database, on different hardware and under different workloads, schemas, and volumes.
A query that performs well under a certain hardware configuration may not perform as well in another, even if the configuration is perceived as better.
Azure’s database offers are even more complex to navigate. Because their revenue model, infrastructure, and scaling are not transparent to customers, their offers use abstract scales to measure performance. These scales are not easily aligned with real-world examples. Today, they offer two main purchasing models: a DTU-based model and a vCore-based model.
DTU Model
According to Microsoft, a Database Transaction Unit, or DTU, is a blended measure of:
- CPU
- Memory
- Data I/O
- Transaction Log I/O
in a ratio determined by an Online Transaction Processing (OLTP) benchmark workload designed to be typical of real-world OLTP workloads.
Doubling the DTUs by increasing the performance level of a database equates to doubling the set of resources available to that database.
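You cannot inspect the DTU formula itself, but you can see how a workload consumes each of the underlying resources relative to your current tier’s limits. A minimal sketch using the resource statistics view that Azure SQL Database exposes:

```sql
-- sys.dm_db_resource_stats returns one row per 15-second window for
-- roughly the last hour; each column is a percentage of the current
-- service tier's limit, i.e. of the blended DTU measure.
SELECT end_time,
       avg_cpu_percent,
       avg_data_io_percent,
       avg_log_write_percent,
       avg_memory_usage_percent
FROM   sys.dm_db_resource_stats
ORDER BY end_time DESC;
```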
Their first line of database offerings in this model is Basic, Standard, and Premium, sold under a DTU scale whereby you buy more DTUs if you need more performance. All three can be targeted for production use, but what differs between them is the I/O.
In Basic and Standard the IOPS per DTU is 2.5 whereas in Premium it is a whopping 48 IOPS per DTU.
And here lies the first issue.
The DTU is defined as a measure of data and log I/O.
IOPS is an input/output performance measurement used to characterize computer storage devices such as hard disk drives, solid-state drives, and storage area networks. The two should be related, or at least affect one another.
However, looking at the pricing models, an S3 is 100 DTUs and a P1 is also 100 DTUs, so should they perform the same or not?
If the IOPS on Premium are higher, we would assume the answer is no: at 2.5 IOPS per DTU, an S3 gets roughly 250 IOPS, while a P1, at 48 IOPS per DTU, gets around 4,800. But since a DTU is a “blended measure that includes IO”, we would assume yes; otherwise, what is the point of coming up with a scale that does not actually measure performance? More on this later…
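For what it is worth, moving a database between these tiers in order to compare them is a one-line operation, which is what made this kind of testing practical for us. A sketch with a hypothetical database name:

```sql
-- Hypothetical database name. The change is performed online, with a
-- brief reconnection when the scaling operation completes.
ALTER DATABASE [MyAppDb]
    MODIFY (EDITION = 'Standard', SERVICE_OBJECTIVE = 'S3');

-- The same database moved to Premium with the identical DTU count:
ALTER DATABASE [MyAppDb]
    MODIFY (EDITION = 'Premium', SERVICE_OBJECTIVE = 'P1');
```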
vCore Model
Earlier this year, Microsoft also released the vCore series.
The vCore series is meant to be more flexible, since you can select both the disk size and the number of vCores for the database.
vCore comes in 2 flavors: General Purpose and Business Critical.
Additionally, they have 2 generations of processors/configuration: Generation 4 and Generation 5.
General Purpose offers 500 IOPS per vCore up to a maximum of 7,000, while Business Critical offers 5,000 IOPS per vCore up to a maximum of 200,000.
vCore seems like a much more reasonable model to size databases but has its own inconsistencies.
Take General Purpose as an example: at 500 IOPS per vCore, you reach the 7,000 IOPS ceiling at 14 vCores (14 × 500 = 7,000). Because GP is sold in steps of 8, 16, 24, 32, 40, 64, and 80 vCores, in practice every step above 8 vCores is already at the maximum IOPS.
So, in what cases would you need so much query parallelism that it requires 80 vCores, when the real bottleneck will be your disk? There are surely queries that can profit from that, but not many.
Additionally, selecting Generation 4 or Generation 5, in both General Purpose and Business Critical, sometimes shows values in the Azure Portal that differ from the pricing models and the pricing calculator.
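The same scaling mechanism shown earlier works for the vCore series; the service objective simply encodes tier, hardware generation, and core count. Again, the database name is hypothetical:

```sql
-- 'GP_Gen5_8' means General Purpose, Generation 5 hardware, 8 vCores;
-- 'BC_Gen5_8' would be the Business Critical equivalent.
ALTER DATABASE [MyAppDb]
    MODIFY (EDITION = 'GeneralPurpose', SERVICE_OBJECTIVE = 'GP_Gen5_8');
```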
DTU vs vCore: Time to Test
So we got genuinely confused about what to select for our customers.
We know what type of queries we run, but it was difficult for us to bank on one specific series. We still run mostly on DTU, but the vCore series seems like a better fit for our dedicated customers. We therefore took it upon ourselves to do our own performance testing and compare as many series and models as possible.
Find our methodology below.
Test Criteria
Remember that a configuration built for one use case may not be ideal for another, so we ran 2 distinct major use cases:
- Bulk loading of data
- Reading aggregated and non-aggregated data
Keep in mind that we do not run many UPDATEs and DELETEs in our databases. Also keep in mind that we apply columnstore indexes to almost all columns of each table and that our aggregates are software-generated.
Data Loading
Our loading tests involved 2 tables. The first one had 10 million rows exactly and 5 columns of varying types. The other one had just over 18 million rows and 15 columns, also of varying types and lengths.
We measured loading the data sets every 20 minutes and every hour respectively for 2 days straight on each selected Azure configuration.
One of the loads was run concurrently with other loads, to further test whether additional cores or a different disk type performed better under multiple simultaneous data loads.
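For reference, here is a minimal sketch of one way to run such a bulk load in Azure SQL Database, reading a CSV file from Azure Blob Storage. It is not our exact pipeline, and the file, table, and data-source names are hypothetical:

```sql
-- BULK INSERT in Azure SQL Database reads from Azure Blob Storage via
-- a previously created external data source; all names are hypothetical.
BULK INSERT dbo.FactSales
FROM 'loads/facts_10m.csv'
WITH (
    DATA_SOURCE     = 'MyBlobStorage',  -- hypothetical external data source
    FORMAT          = 'CSV',
    FIELDTERMINATOR = ',',
    FIRSTROW        = 2                 -- skip the header row
);
```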
Querying
For querying, we picked 3 resource-intensive queries based on actual production use of ClicData and simulated them using our own test data.
These queries always involved calculations and aggregations, since that is our number-one scenario. One of them included a window function as well.
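We cannot share the production queries themselves, but a simplified stand-in with the same shape, an aggregation combined with a window function over a hypothetical fact table, looks like this:

```sql
-- Aggregation plus a window function over the aggregate: total sales
-- per customer and a running total across customers. All table and
-- column names are hypothetical.
SELECT   CustomerId,
         SUM(Amount) AS TotalAmount,
         SUM(SUM(Amount)) OVER (ORDER BY CustomerId
                                ROWS UNBOUNDED PRECEDING) AS RunningTotal
FROM     dbo.FactSales
GROUP BY CustomerId
ORDER BY CustomerId;
```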
The Results
Using our own dashboards we monitored all the tests as we changed the same account to use different database configurations.
The dashboard below shows a partial rendition of the monitoring dashboard. Each dot represents the time taken by that particular test run to perform that task. In the background, you can see the database configuration in use at the time.
We then grouped all results into one summary dashboard as follows:
This dashboard was very useful to do relative comparisons between configurations and understand if our queries increased in performance or not across the different configurations.
Here are some special notes:
1. GP versus BC and Gen4 versus Gen5
This was one of the most interesting comparisons, since GP gives us larger storage, while BC runs on solid-state drives that increase read performance 8x to 10x.
Take a look at the vCore 4 for BC and GP, Gen4 and Gen5 comparison in our tests.
Does the above make sense?
BC costs 3-4x more than GP and yet we had identical results except for… the latest generation Gen5!
Remember that this is not 1 test or 10.
These are the aggregated results of 2 days of tests running every 5 minutes (queries) or every hour (loading).
Is this an outlier? More on this later.
2. DTU versus vCore
When it came to selecting between the vCore and DTU models, we were interested in identifying the equivalence, for our workload, between one and the other.
As such, we took the vCore 4 in GP and BC and selected the best approximate DTU configurations.
Roughly, both BC and GP Gen4 with 4 vCores perform slightly better than a P4 or an S7. However, notice once more a possible outlier: the S7 performs substantially better in query execution.
Selecting the S4 would not be a wise decision, but since there is no middle ground between S4 and S7, it is one or the other.
Unexplained Outliers
As shown above, there are some inconsistent behaviors.
One of the most critical ones we found was a consistent performance increase from vCore 1 to 8, yet at vCore 16 performance was worse than at vCore 6!
We also did not understand the difference between Gen4 and Gen5, and why Gen5, supposedly the newer generation, performed worse in some cases.
We also noticed that, beyond a certain point, no further performance could be gained on the same query: the query would not run faster even when we increased the core count or upgraded the disk type.
This applied to both DTU and vCore, and it makes sense even for bare-metal SQL Server installations. It simply reaches a point where SQL Server must perform an operation synchronously and cannot optimize any further.
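When a query stops scaling like this, the wait statistics usually reveal what it is actually waiting on. A quick check worth running in this situation (this view is scoped to the database in Azure SQL Database):

```sql
-- Top waits since the last reset. High PAGEIOLATCH_* or WRITELOG waits
-- point at data or log I/O rather than a lack of CPU cores.
SELECT TOP (10)
       wait_type,
       wait_time_ms,
       waiting_tasks_count
FROM   sys.dm_db_wait_stats
ORDER BY wait_time_ms DESC;
```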
DTU vs vCore: Performance vs Cost
Before we finish, let’s talk about cost.
If there is one negative thing about Microsoft Azure, it is its billing and cost monitoring. So, of course, we built our own dashboard, which also breaks down, per database, the cost of each basic component.
We use this to monitor all our infrastructure across all Microsoft Data Regions.
Using a dashboard like this, you can get a handle on which major areas are costing the most.
It is clear to us that our platform cost is mostly SQL Database (green), followed by Storage, with VMs and App Service after that.
If you are using Azure and would like to get a better handle on monitoring your usage, let us know, and we would be happy to share how we monitor cost and performance at ClicData.
Summary
In our six years of leveraging Microsoft Azure, we’ve gained invaluable insights into its capabilities, particularly within Azure SQL Database. Our recent round of extensive testing sheds light on the nuances between different tiers and configurations, offering valuable guidance for those navigating the Azure SQL Database landscape.
Whether you’re a seasoned Azure user or exploring its offerings for the first time, we hope our findings provide clarity and aid in optimizing your database deployment. With Azure’s continuous evolution, staying informed and making informed decisions is paramount for maximizing the value of your cloud investments.
If you have any questions or seek further insights into leveraging Azure for your database needs, don’t hesitate to reach out. We’re here to help you navigate the complexities and unlock the full potential of Microsoft Azure.