This course focuses on maximizing the performance of hardware and infrastructure relating to database servers. You will learn the best ways to improve the performance of a SQL Server instance and its databases through infrastructure choices and configuration settings.
- Learn how to set up disks for maximum performance
- Understand how to boost file performance and how to use instant file initialization
- Understand how to optimize TempDB and choose the right VM for your workloads
- Learn how to manage an instance's resources
- Learn how to configure your database as well as your SQL Server system accounts
- Understand how to use Intelligent Query Processing to optimize database performance
- Understand the benefits of using Azure SQL Serverless
- Database administrators
- Solutions architects
- Anyone interested in improving the performance of their databases
To get the most out of this course, you will need to be familiar with basic Azure concepts, have some knowledge of computer hard drives and networking, and be familiar with databases and their servers.
Intelligent query processing is an umbrella term for a group of query optimization features introduced in SQL Server 2017 and expanded in SQL Server 2019. When a database's compatibility level is set to 150, intelligent query processing is enabled by default. Many of these features are adaptive in nature: query performance is improved by analyzing past executions and by better analysis of data statistics before executing the query. I'll go through some of the intelligent query processing features present in both Azure SQL and SQL Server.
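As a minimal sketch, enabling the intelligent query processing feature set comes down to raising the database's compatibility level (the database name here is a hypothetical placeholder):

```sql
-- Compatibility level 150 (SQL Server 2019) enables the intelligent
-- query processing features by default for this database.
-- "MyDatabase" is a hypothetical name; substitute your own.
ALTER DATABASE MyDatabase
SET COMPATIBILITY_LEVEL = 150;
```

This is why upgrading the compatibility level, rather than installing anything extra, is typically all that's needed to opt in to these features.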
Adaptive joins in batch mode allow query execution to change the type of join employed mid-execution, based on the amount of data. Depending on whether the row count of a join input is less than or greater than a threshold value, the execution plan will use either a nested loop join or a hash join: a nested loop join when below the threshold and a hash join when above it. It's called adaptive because the query engine cannot assess the number of rows produced by a join until an initial scan has taken place, at which point the join type is adapted to the data.
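Batch mode adaptive joins are on automatically at compatibility level 150, so there's nothing to enable; the switches below are for ruling the feature out when troubleshooting a plan. The table names are hypothetical:

```sql
-- Disable batch mode adaptive joins for the whole database:
ALTER DATABASE SCOPED CONFIGURATION
SET BATCH_MODE_ADAPTIVE_JOINS = OFF;

-- Or disable it for a single query via a hint, leaving the
-- database-level setting untouched (Sales.* tables are hypothetical):
SELECT o.OrderID, l.Quantity
FROM Sales.Orders AS o
JOIN Sales.OrderLines AS l ON l.OrderID = o.OrderID
OPTION (USE HINT('DISABLE_BATCH_MODE_ADAPTIVE_JOINS'));
```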
Table variable deferred compilation works in a similar fashion to adaptive joins, in that the actual cardinality, or row count, of the table variable is used in query plan optimization. Prior to deferred compilation, the query plan engine would use a fixed value, often one, when formulating a plan. Using one when the actual value could be hundreds of thousands or millions could definitely lead to suboptimal query performance.
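As a sketch of where this matters, consider a table variable populated from a large table (the Sales tables here are hypothetical). At compatibility level 150, compilation of the final SELECT is deferred until the variable's real row count is known:

```sql
-- At compatibility level 150, the join below is compiled using
-- @Orders' actual row count rather than a fixed guess of one row.
DECLARE @Orders TABLE (OrderID int PRIMARY KEY, CustomerID int);

INSERT INTO @Orders (OrderID, CustomerID)
SELECT OrderID, CustomerID
FROM Sales.Orders;            -- hypothetical source table

SELECT c.CustomerName, COUNT(*) AS OrderCount
FROM @Orders AS o
JOIN Sales.Customers AS c ON c.CustomerID = o.CustomerID
GROUP BY c.CustomerName;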
Approximate distinct count is a feature designed for very large datasets where speed is more important than accuracy. An intended application would be a real-time dashboard involving billions of rows, where an approximate value is just as good as an exact one. It returns the approximate count of distinct non-null values in a set specified by an expression. As well as speed, an approximate count has the advantage of using only a small fraction of system resources, such as memory.
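The feature is exposed as the APPROX_COUNT_DISTINCT function, used in place of COUNT(DISTINCT ...). A minimal example, against a hypothetical table:

```sql
-- Approximate distinct count: accepts a small error margin in exchange
-- for speed and a far smaller memory footprint than COUNT(DISTINCT ...).
SELECT APPROX_COUNT_DISTINCT(CustomerID) AS ApproxCustomers
FROM Sales.Orders;            -- hypothetical table
```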
Speaking of memory, memory grant feedback was introduced for batch mode with compatibility level 140 and for row mode with compatibility level 150. This feature adjusts the memory allocation for a query based on past requirements. If not enough memory was allocated when the query was first run, resulting in operations spilling to disk (that is, slower hard drive storage being used to supplement RAM), then more memory will be allocated in future executions.
Conversely, if a query execution didn't use the memory allocated to it by some margin, then the allocation will be reduced in subsequent executions to conserve system resources. As I said, intelligent query processing is mostly switched on by default in Azure SQL and SQL Server 2019. Query sys.database_scoped_configurations to display the database's current configuration values, with the is_value_default column indicating whether a value has been changed. You can see here I've turned off row mode memory grant feedback with the ALTER DATABASE SCOPED CONFIGURATION command.
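The inspection and the change described above look like this:

```sql
-- List the database scoped configuration values; is_value_default
-- shows whether a setting has been changed from its default.
SELECT name, value, is_value_default
FROM sys.database_scoped_configurations;

-- Turn off row mode memory grant feedback for the current database:
ALTER DATABASE SCOPED CONFIGURATION
SET ROW_MODE_MEMORY_GRANT_FEEDBACK = OFF;
```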
Hallam is a software architect with over 20 years' experience across a wide range of industries. He began his software career as a Delphi/Interbase disciple but changed his allegiance to Microsoft with its deep and broad ecosystem. While Hallam has designed and crafted custom software utilizing web, mobile, and desktop technologies, he believes good quality, reliable data is the key to a successful solution. The challenge of quickly turning data into useful information for digestion by humans and machines has led Hallam to specialize in database design and process automation. Showing customers how to leverage new technology to change and improve their business processes is one of the key drivers keeping Hallam coming back to the keyboard.