This course covers the core learning objectives to meet the requirements of the 'Designing Database solutions in AWS - Level 3' skill:
- Analyze target AWS database platforms when performing a migration
- Create and deploy an enterprise-wide, scalable RDS database solution to meet and exceed workload performance expectations
- Create an AWS database solution to withstand AWS global infrastructure outages with minimal data loss
In this section, I want to take some time to highlight a few parameters that will showcase some important ideas.
Among the more than 400 parameters you can manage and change for RDS, you will notice a few themes:
- There are performance parameters - these affect how your database behaves and the mechanisms that allow it to function.
- There are security-related parameters - these will help to keep your data, users, connections, and everything else secure.
- We have parameters that deal with how data is replicated within RDS - these help to keep your data alive in the event of downtime, crashes, and various catastrophic events.
Let's take a look at a few examples from each category and see how tuning them could affect our database.
innodb_change_buffering - This parameter fits squarely within the performance category. Its entire job is to tell the database how to handle changes to secondary indexes whose pages are not in the buffer pool. This table shows all the options you have available for this parameter, and the default is normally set to all.
Modifying this parameter helps reduce the large amount of I/O required to keep your secondary indexes accurate after DML (data manipulation language) operations.
In the past, buffering these lagging index updates mattered because disk I/O speeds were relatively slow compared to today. With modern drives, however, this parameter can often be set to none. Leaving it set to all can actually cause much longer shutdown times when pushing updates to your RDS databases, because the buffered changes must be merged before the instance shuts down.
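To make the trade-off concrete, here is a toy Python sketch (a simplification, not InnoDB internals) of why buffering secondary-index changes reduces I/O: buffered changes accumulate in memory and are flushed in one batch, while unbuffered changes each trigger their own write.

```python
# Toy illustration of change buffering (NOT InnoDB internals):
# with buffering enabled, secondary-index updates accumulate in memory
# and are merged to "disk" in one batch; with buffering disabled,
# every update triggers its own disk write.

class SecondaryIndex:
    def __init__(self, change_buffering=True):
        self.change_buffering = change_buffering
        self.buffer = []          # pending (unmerged) index changes
        self.disk_writes = 0      # simulated I/O operations

    def apply_dml(self, change):
        if self.change_buffering:
            self.buffer.append(change)   # cheap in-memory append
        else:
            self.disk_writes += 1        # immediate I/O per change

    def merge(self):
        """Flush all buffered changes in one batch (e.g. at shutdown)."""
        if self.buffer:
            self.disk_writes += 1        # one batched write
            self.buffer.clear()

buffered = SecondaryIndex(change_buffering=True)
unbuffered = SecondaryIndex(change_buffering=False)
for i in range(100):
    buffered.apply_dml(i)
    unbuffered.apply_dml(i)
buffered.merge()

print(buffered.disk_writes)    # 1   (one batched merge)
print(unbuffered.disk_writes)  # 100 (one write per change)
```

The same batching that saves I/O during normal operation is also why a large unmerged buffer can stretch out shutdown time.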
max_user_connections is a rather self-explanatory example compared to the previous one. It lets you set a maximum number of simultaneous connections a user account can make to the database. This value defaults to 0, which actually means there is no connection cap. However, if you have a very sensitive database that you know should never have more than x number of connections at a time, this could be a very useful parameter to be aware of. Any additional connection above the allotted number will be rejected with the message - “Warning: mysql_connect(): User xxxxxxxx already has more than ‘max_user_connections’ active connections”.
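The behavior of a per-user connection cap can be sketched in a few lines of Python. This is an assumed, simplified model for illustration only (the class and message here are not MySQL's implementation): connections up to the limit succeed, and any further attempt is rejected.

```python
# Minimal sketch (simplified, not MySQL's implementation) of how a
# max_user_connections-style cap behaves.

class ConnectionPool:
    def __init__(self, max_user_connections=0):
        # 0 mirrors the RDS default: no cap on simultaneous connections
        self.max_user_connections = max_user_connections
        self.active = 0

    def connect(self, user):
        if self.max_user_connections and self.active >= self.max_user_connections:
            raise RuntimeError(
                f"User {user} already has more than "
                "'max_user_connections' active connections"
            )
        self.active += 1

pool = ConnectionPool(max_user_connections=2)
pool.connect("app")        # ok
pool.connect("app")        # ok
try:
    pool.connect("app")    # third connection exceeds the cap
except RuntimeError as err:
    print(err)
```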
One of the biggest pains of development when working with databases is trying to write queries that are both quick and effective. The slow_query_log parameter enables logging of queries that take more than a certain amount of time to complete. That threshold is set by the long_query_time parameter, which works together with this one to log any query that runs over your chosen time limit.
By default this is set off - 0, but it can be turned on by simply changing it to 1. Keep in mind, however, that just because a query is slow does not mean it is inefficient or doing something wrong. This is just one measurement that can be used to help flag issues that may appear.
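The slow-query idea above can be sketched in plain Python: time each query and record it only when it runs past a long_query_time-style threshold. The names here (run_query, slow_log) are illustrative, not MySQL APIs.

```python
# Hedged sketch of slow-query logging: record only the queries that
# exceed a configurable time threshold, analogous to how the
# slow query log and long_query_time work together.

import time

long_query_time = 0.05   # threshold in seconds
slow_query_log = 1       # 1 = logging on, 0 = off (the default)
slow_log = []

def run_query(label, work):
    start = time.perf_counter()
    work()
    elapsed = time.perf_counter() - start
    if slow_query_log and elapsed > long_query_time:
        slow_log.append((label, elapsed))

run_query("fast lookup", lambda: sum(range(10)))
run_query("slow scan", lambda: time.sleep(0.1))

for label, elapsed in slow_log:
    print(f"{label} took {elapsed:.3f}s")   # only the slow query is logged
```

Note that the fast query never appears in the log: only work that crosses the threshold gets flagged, which is exactly why a logged query is a candidate for investigation rather than proof of a problem.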
Stephen is the AWS Certification Specialist at Cloud Academy. His content focuses heavily on topics related to certification on Amazon Web Services technologies. He loves teaching and believes that there are no shortcuts to certification, but that it is possible to find the right path and course of study.
Stephen has worked in IT for over 25 years in roles ranging from tech support to systems engineering. At one point, he taught computer network technology at a community college in Washington state.
Before coming to Cloud Academy, Stephen worked as a trainer and curriculum developer at AWS and brings a wealth of knowledge and experience in cloud technologies.
In his spare time, Stephen enjoys reading, sudoku, gaming, and modern square dancing.