Azure Database Performance Tuning

Disk Performance

Contents

Course Intro & Overview
  • Introduction (1m 15s)
  • Overview (1m 21s)
Course Summary
  • Summary (4m 12s)

Overview

Difficulty: Intermediate
Duration: 37m
Students: 104
Ratings: 5/5
Description

This course focuses on maximizing the performance of the hardware and infrastructure underlying database servers. You will learn the best ways to improve the performance of a SQL Server instance and its databases through infrastructure choices and configuration settings.

Learning Objectives

  • Learn how to set up disks for maximum performance
  • Understand how to boost file performance and how to use instant file initialization
  • Understand how to optimize TempDB and choose the right VM for your workloads
  • Learn how to manage an instance's resources
  • Learn how to configure your database as well as your SQL Server system accounts
  • Understand how to use Intelligent Query Processing to optimize database performance
  • Understand the benefits of using Azure SQL Serverless

Intended Audience

  • Database administrators
  • Solutions architects
  • Anyone interested in improving the performance of their databases

Prerequisites

To get the most out of this course, you should be familiar with basic Azure concepts, have some knowledge of computer hard drives and networking, and be familiar with databases and database servers.

Transcript

At the most basic level, all storage performance boils down to hardware, and in particular hard drives. Once upon a time, in the days of mechanical rotating platters, hard drive performance was measured in terms of rotational speed. A low-end notebook hard disk might spin at 4,200 or 5,400 rpm, while a high-performance server disk could spin at up to 15,000 rpm. It's not just the disk's physical speed that counts, but also the much faster cache memory built into the disk and the software smarts that manage the cache.

While spinning disks, that is, hard drives or HDDs, are still in use and are suitable for high-volume, low-performance storage, we now mainly talk in terms of solid-state drives, or SSDs. Instead of rpm, SSD performance is measured in random read/write operations per second. And like HDDs, SSDs come in various speed and performance configurations to meet different price points. These price points are reflected in the Azure managed disk options.

Ultra Disk is the most performant disk option, followed by Premium SSD, then Standard SSD, while Standard HDD is still available for workloads where performance is unimportant. Not to state the obvious, but the higher the I/O throughput and the lower the latency, the more expensive the disk option.

Azure managed disks, also known as block-level storage, can be thought of as physical hard drives or volumes when viewed from a virtual machine, even if that is not physically the case. Managed disks are unaware of, and indifferent to, what is stored on them, and they are independent of virtual machines; that is, you can dynamically add more managed disks to a VM as the need arises.

When migrating a database from on-premises to the cloud, you will most probably want at least like-for-like disk performance. Because you can't specify the cloud's actual hardware, you need to base your disk choice on I/O operation metrics. Three types of disk metric give a good indication of workload. The first is the number of read and write operations carried out per second, known as IOPS: input/output operations per second.

Throughput is the amount of data read from or written to disk, measured in bytes per second. Latency is the length of time taken for a read or write operation to commence once the request has been made. We can see straight away that the type of information or workload a database deals with places a different emphasis on these metrics. For example, when reading or writing large blob objects, like documents or even video, latency is far less critical than throughput in terms of the total operation's completion time.
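To make the relationship between these metrics concrete, here is a minimal sketch. The workload numbers (operation counts and I/O sizes) are illustrative assumptions, not Azure figures; the point is simply that throughput is the product of IOPS and the size of each I/O.

```python
def throughput_mb_per_s(iops: int, io_size_kb: int) -> float:
    """Throughput (MB/s) = operations per second x size of each I/O."""
    return iops * io_size_kb / 1024

# An OLTP-style workload: many small 8 KB operations.
print(throughput_mb_per_s(5000, 8))      # 39.0625 MB/s
# A blob-style workload: far fewer, much larger 1 MB operations.
print(throughput_mb_per_s(500, 1024))    # 500.0 MB/s
```

Note that the blob workload moves far more data per second despite doing a tenth of the IOPS, which is why the two metrics must be considered together rather than in isolation.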

On the other hand, when dealing with small amounts of data being written or read in real time, latency becomes a more critical factor. As a rule of thumb, Ultra Disks have the lowest latency, with response times typically under one millisecond. Premium SSDs have the next lowest latency, usually in the low single-digit millisecond range. Most production databases use either Premium or Ultra disks, or some combination of the two, but it usually comes down to a trade-off between performance and price.
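The latency-versus-throughput trade-off can be sketched as a simple cost model: each operation pays a fixed latency plus a transfer time proportional to its size. The latency and throughput figures below are illustrative assumptions for the calculation, not quoted Azure specifications.

```python
def op_time_ms(size_kb: float, latency_ms: float, throughput_mb_s: float) -> float:
    """Total operation time = fixed latency + time to transfer the data."""
    transfer_ms = (size_kb / 1024) / throughput_mb_s * 1000
    return latency_ms + transfer_ms

# A small 8 KB write at 1 ms latency and 200 MB/s:
# latency accounts for roughly 96% of the total time.
print(op_time_ms(8, 1.0, 200))
# A 100 MB blob read: the same 1 ms latency is a rounding error
# next to ~500 ms of transfer time.
print(op_time_ms(100 * 1024, 1.0, 200))
```

This is why shaving latency (for example, by moving to Ultra Disk) pays off for small real-time operations but does little for large sequential transfers.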

You can use Performance Monitor to collect disk metrics on a Windows server. Once it is open, select Performance Monitor under Monitoring Tools in the left-hand pane, then click the green plus button to add counters. A counter is another way of saying metric, as in counting the instances of an event occurring. After you have selected the server you wish to monitor, you are presented with a list of metric categories.

Find LogicalDisk and expand it by clicking the small arrow on the list's right to reveal the different counters to track. IOPS can be tracked with Disk Reads/sec, Disk Writes/sec, and Disk Transfers/sec, where transfers is the sum of reads and writes. Latency is monitored using Avg. Disk sec/Read, Avg. Disk sec/Write, and Avg. Disk sec/Transfer, while throughput is measured by Disk Read Bytes/sec and Disk Write Bytes/sec. You should also allow an additional 20% headroom, or margin, above the measured IOPS and throughput to absorb peak workloads.
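Applying that 20% headroom rule is simple arithmetic, sketched below. The peak values are hypothetical, standing in for figures you would capture from the counters described above.

```python
def provision_with_headroom(measured_peak: float, headroom: float = 0.20) -> float:
    """Add headroom above a measured peak so the disk isn't saturated at peak load."""
    return measured_peak * (1 + headroom)

# Hypothetical peaks from a Performance Monitor capture (not real data).
peaks = {
    "Disk Transfers/sec (IOPS)": 4200,
    "Disk Bytes/sec (throughput)": 180_000_000,
}
for counter, peak in peaks.items():
    print(f"{counter}: provision for at least {provision_with_headroom(peak):,.0f}")
```

You would then choose the smallest Azure disk tier whose published IOPS and throughput limits exceed these headroom-adjusted figures.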

About the Author
Avatar
Hallam Webber
Software Architect
Students
14673
Courses
27
Learning Paths
3

Hallam is a software architect with over 20 years' experience across a wide range of industries. He began his software career as a Delphi/Interbase disciple but changed his allegiance to Microsoft with its deep and broad ecosystem. While Hallam has designed and crafted custom software utilizing web, mobile, and desktop technologies, good-quality, reliable data is the key to a successful solution. The challenge of quickly turning data into useful information for digestion by humans and machines has led Hallam to specialize in database design and process automation. Showing customers how to leverage new technology to change and improve their business processes is one of the key drivers keeping Hallam coming back to the keyboard.