This course focuses on maximizing the performance of hardware and infrastructure relating to database servers. You will learn the best ways to improve an SQL server's performance and that of its databases through infrastructure choice and configuration settings.
- Learn how to set up disks for maximum performance
- Understand how to boost file performance and how to use instant file initialization
- Understand how to optimize TempDB and choose the right VM for your workloads
- Learn how to manage an instance's resources
- Learn how to configure your database as well as your SQL Server system accounts
- Understand how to use Intelligent Query Processing to optimize database performance
- Understand the benefits of using Azure SQL Serverless
- Database administrators
- Solutions architects
- Anyone interested in improving the performance of their databases
To get the most out of this course, you will need to be familiar with basic Azure concepts, have some knowledge of computer hard drives and networking, and be familiar with databases and their servers.
It is best practice to have data files and log files on separate volumes to reduce disk I/O contention. On the data file volume, enable read caching, but on the log file volume, disable all caching. If you don't want to or can't implement disk striping but have multiple disks, you can enhance I/O performance by using multiple data files. Placing the files on separate disks achieves a similar, although not quite as performant, result.
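As a sketch of the multiple-data-file approach, the following T-SQL adds a second data file on a separate volume. The database name, logical file name, path, and sizes are all hypothetical placeholders; adjust them to your environment.

```sql
-- Sketch: add a second data file on a separate volume so the database
-- engine can spread I/O across disks. All names and sizes are
-- hypothetical examples, not recommendations.
ALTER DATABASE SalesDB
ADD FILE (
    NAME = SalesDB_Data2,                       -- logical file name
    FILENAME = 'E:\SQLData\SalesDB_Data2.ndf',  -- path on the second disk
    SIZE = 10GB,
    FILEGROWTH = 1GB
);
```

Keeping the new file the same size as the existing one helps the proportional fill algorithm distribute writes evenly from the start.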
SQL Server uses a proportional fill algorithm to distribute data across files, meaning that the files will grow at similar rates. The multiple files scenario could be best described as file striping, where the data distribution is handled by the database engine instead of the operating system. This is especially true when used in conjunction with table partitioning, which we talk about in the Azure SQL Query Performance Tuning course.
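To see proportional fill in action, you can compare how full each file is. This query uses the `sys.database_files` catalog view and `FILEPROPERTY`, which report sizes in 8 KB pages (dividing by 128 converts to megabytes):

```sql
-- Sketch: compare size and space used per file; with proportional fill,
-- the used percentage should stay roughly even across data files.
SELECT name,
       size / 128 AS size_mb,
       FILEPROPERTY(name, 'SpaceUsed') / 128 AS used_mb
FROM sys.database_files;
```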
Splitting data across multiple files isn't just about performance. It's also about ease of maintenance when databases become large, as in hundreds of gigabytes, and definitely when exceeding one terabyte. Having multiple smaller files makes moving files much more manageable. When it comes to file growth, you don't want to have auto-growth switched on with no limit, even if the file is the only one on the volume. In this situation, it would be better to set the file's maximum size to a little less than the disk size and set up an alert to advise when the file is nearing its full size, either from within the database or the operating system.
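Capping a file below the disk size can be sketched like this; the database and file names are hypothetical, and the example assumes a 100 GB volume, so the cap is set at 95 GB:

```sql
-- Sketch: cap the data file a little below the (assumed 100 GB) volume
-- size, leaving headroom to act before the disk itself fills up.
ALTER DATABASE SalesDB
MODIFY FILE (NAME = SalesDB_Data2, MAXSIZE = 95GB);
```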
Another thing to bear in mind is file growth does come at some cost, in that the database engine has to take time out of query processing to increase the file size. So if you use auto-growth, make sure you set the grow by amount to a reasonable size to reduce the number of file growth events.
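Setting a fixed growth increment, rather than leaving the default, can be sketched as follows (names are hypothetical; the right increment depends on your workload):

```sql
-- Sketch: grow in fixed 1 GB steps instead of small or percentage-based
-- increments, reducing how often query processing pauses for file growth.
ALTER DATABASE SalesDB
MODIFY FILE (NAME = SalesDB_Data2, FILEGROWTH = 1GB);
```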
Hallam is a software architect with over 20 years' experience across a wide range of industries. He began his software career as a Delphi/Interbase disciple but changed his allegiance to Microsoft with its deep and broad ecosystem. While Hallam has designed and crafted custom software utilizing web, mobile, and desktop technologies, good quality, reliable data is the key to a successful solution. The challenge of quickly turning data into useful information for digestion by humans and machines has led Hallam to specialize in database design and process automation. Showing customers how to leverage new technology to change and improve their business processes is one of the key drivers keeping Hallam coming back to the keyboard.