Storage & Compression
SQL is the language for interacting with databases, and like any programming language, it offers many ways to achieve the same result, some better and more efficient than others. What sets SQL and database programming apart from most other programming is the vast amount of data regularly involved, and how factors that are not immediately obvious can impact the SQL query code you write.
This course looks at ways that you can see how the SQL Server engine views your queries and how it intends to execute them. Using that information, we look at other factors like table and index design that can positively or negatively impact your query execution.
For any feedback relating to this course, please contact us at email@example.com.
- Understand what impacts query performance when using SQL
- Learn how the SQL Server engine processes queries
- Use execution plans to see how a query is executed
- Instruct the query engine using hints
- Understand how different query types can be used
- Learn how to design tables to make queries more efficient
- Use database file configuration to improve performance
- Understand how database compression can impact performance
This course is aimed at database administrators, data engineers, or anyone who wants to learn how to improve database query execution.
To get the most out of this course, you should be familiar with SQL query syntax, SQL data types, and SQL table structure.
We started off this course by looking at the ways you can view how the SQL Server engine intends to execute your queries. The estimated execution plan gives you good graphical insight into where most of the processing effort will go when a query is run. Not only will an execution plan suggest an index if one is missing, but the properties window of each operation shows you which predicates, if any, are being evaluated, and what the output columns of each operation are.
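As a quick illustration, an estimated plan can also be requested in T-SQL rather than through the graphical interface; the table and column names below are hypothetical:

```sql
-- Ask SQL Server to return the estimated plan as XML
-- instead of executing the query.
SET SHOWPLAN_XML ON;
GO

-- The returned plan shows the operators, their predicates and
-- output columns, and any missing-index suggestion.
SELECT OrderID, OrderDate, CustomerID
FROM dbo.Orders
WHERE OrderDate >= '2023-01-01';
GO

SET SHOWPLAN_XML OFF;
GO
```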
As we saw later in the course, not all indexes suggested by the query optimizer actually improve performance. You can use hints to direct the query optimizer to use particular indexes. As well as the graphical query plan interface, we looked at using Dynamic Management Objects, both Views and Functions, to query the Query Store and report on queries in progress.
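A sketch of both techniques, using hypothetical table and index names:

```sql
-- Hint the optimizer to use a specific index rather than
-- letting it choose one.
SELECT OrderID, CustomerID
FROM dbo.Orders WITH (INDEX (IX_Orders_CustomerID))
WHERE CustomerID = 42;

-- Report on requests currently executing, via Dynamic
-- Management Views and Functions.
SELECT r.session_id, r.status, r.cpu_time, r.logical_reads, t.text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t;
```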
All of this was a precursor to the main event: seeing how we can improve the performance of SQL queries. There are two main ways of doing this. One is to create and apply appropriate indexes on tables, but of equal importance is structuring our tables efficiently. By efficiently, I mean using the most appropriate and concise data types, and eliminating data duplication through table normalization.
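For example, a concisely typed, normalized table might look like this (names and columns are hypothetical):

```sql
-- Concise data types keep rows small; lookup tables replace
-- repeated free text.
CREATE TABLE dbo.CustomerOrder
(
    OrderID     INT IDENTITY PRIMARY KEY,
    CustomerID  INT NOT NULL,       -- foreign key to a normalized Customer table
    StatusCode  TINYINT NOT NULL,   -- 1 byte, referencing a status lookup table
    OrderDate   DATE NOT NULL       -- DATE (3 bytes) rather than DATETIME (8 bytes)
);
```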
The main bottleneck to performance is retrieving data from the hard drive into main memory to be processed by the CPU. SQL Server retrieves data by the page, so the more records you can fit onto a page, the more data you can process per disk I/O operation. Once we had been through the process of transforming the data into a normalized structure, we had reasonable representations of two datasets: one representing an OLTP scenario, and a flat structure similar to what you might find in a reporting database.
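You can check how densely a table's records pack onto its pages with a Dynamic Management Function; the table name here is hypothetical:

```sql
-- Page count, average record size, and how full each page is.
-- DETAILED (or SAMPLED) mode is needed for the size columns.
SELECT index_id,
       page_count,
       avg_record_size_in_bytes,
       avg_page_space_used_in_percent
FROM sys.dm_db_index_physical_stats(
         DB_ID(), OBJECT_ID('dbo.CustomerOrder'), NULL, NULL, 'DETAILED');
```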
As I said at the beginning of the course, the overall theme in query optimization is divide and conquer. We see this reflected in non-clustered indexes with included columns, which are analogous to mini versions of the table, and in filtered indexes that deal only with the values we are interested in. Columnstore indexes are another example of data being distilled down to its most condensed form, enabling each I/O operation to retrieve the most relevant information.
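The three index types mentioned above can be sketched as follows, on a hypothetical Orders table:

```sql
-- Covering index: the included columns make it a mini version
-- of the table for queries on CustomerID.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
ON dbo.Orders (CustomerID)
INCLUDE (OrderDate, TotalDue);

-- Filtered index: only the rows we are interested in.
CREATE NONCLUSTERED INDEX IX_Orders_Open
ON dbo.Orders (OrderDate)
WHERE StatusCode = 1;

-- Columnstore index: highly compressed, column-oriented storage
-- suited to analytic scans.
CREATE NONCLUSTERED COLUMNSTORE INDEX IX_Orders_CS
ON dbo.Orders (CustomerID, OrderDate, TotalDue);
```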
We then looked at ways to split large tables into partitions located in multiple database files, allowing us to make use of multiple hard drives and the database engine's multi-threaded design. Finally, we looked at database compression as another way to get more information onto each database page, and more data retrieved with each disk access.
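A minimal partitioning sketch, assuming filegroups FG2021 to FG2023 have already been created on separate drives:

```sql
-- Two boundary values create three date-range partitions.
CREATE PARTITION FUNCTION pfOrderYear (DATE)
AS RANGE RIGHT FOR VALUES ('2022-01-01', '2023-01-01');

-- Map each partition to its own filegroup.
CREATE PARTITION SCHEME psOrderYear
AS PARTITION pfOrderYear TO (FG2021, FG2022, FG2023);

-- Rows are placed in partitions according to OrderDate.
CREATE TABLE dbo.OrdersPartitioned
(
    OrderID   INT NOT NULL,
    OrderDate DATE NOT NULL
) ON psOrderYear (OrderDate);
```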
Compression can be a useful feature, but there are performance trade-offs to take into consideration: different levels of compression on tables versus indexes can deliver significantly different results.
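As a sketch of how that might look in practice, again with hypothetical object names, SQL Server lets you estimate the savings before committing to a compression level:

```sql
-- Estimate the space saved by PAGE compression before applying it.
EXEC sp_estimate_data_compression_savings
     @schema_name = 'dbo',
     @object_name = 'Orders',
     @index_id = NULL,
     @partition_number = NULL,
     @data_compression = 'PAGE';

-- Apply PAGE compression to the table itself...
ALTER TABLE dbo.Orders
REBUILD WITH (DATA_COMPRESSION = PAGE);

-- ...while an index can use a different level, such as ROW.
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders
REBUILD WITH (DATA_COMPRESSION = ROW);
```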
Hallam is a software architect with over 20 years' experience across a wide range of industries. He began his software career as a Delphi/Interbase disciple but changed his allegiance to Microsoft with its deep and broad ecosystem. While Hallam has designed and crafted custom software using web, mobile, and desktop technologies, good-quality, reliable data is the key to a successful solution. The challenge of quickly turning data into useful information for digestion by humans and machines has led Hallam to specialize in database design and process automation. Showing customers how to leverage new technology to change and improve their business processes is one of the key drivers keeping Hallam coming back to the keyboard.