Introduction & HA Overview
Aurora Single Master
Aurora Multi Master
The course is part of these learning paths
Interested in learning about Amazon Aurora?
Amazon Aurora is a next-generation, cloud-native relational database, providing unrivalled performance and availability features.
This course explores the various configuration options and techniques that you can use to create highly available Amazon Aurora databases. It starts off by looking at the high availability options available within Amazon Aurora, before diving deeper into more specific features, such as single and multi master setups, read replicas, and how Aurora can be provisioned as a serverless database. Each new topic is accompanied by a real-world demonstration to help you better understand the concepts presented within the course.
For any feedback or questions relating to this course, please contact us at firstname.lastname@example.org.
- Understand how to provision and configure Amazon Aurora in a manner that ensures it is highly available and able to serve all read and write requests.
This course is intended for those responsible for architecting Aurora database setups, with an emphasis on high availability.
To get the most from this course, you should be familiar with basic SQL database concepts. If required, consider taking our "Database Fundamentals for AWS" course first.
The following GitHub repository is referenced within this course:
Let's take a quick look at a demo that shows how easy it is to set up and use a multi master Aurora database cluster.
In this example I’ll perform the following sequence:
- Launch a new multi master Aurora MySQL database cluster within the AWS RDS console.
- Create a new database named demo, and within it create a new table named course.
- Use the AWS RDS console to find the connection endpoints for the multi master database and set them up as environment variables named AURORA_NODE1 and AURORA_NODE2 in the local terminal.
- Launch a Python script that implements load balancing and retry connection logic to continuously insert records into the course table.
- Confirm connections are load balanced across both active master database nodes.
- Crash each of the master database nodes individually.
- Confirm that connections fail over to the remaining active master database node and that database inserts continue.
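The retry connection logic mentioned in the steps above centres on a small reconnect helper. Here is a minimal sketch of that idea, assuming a connection object with a mysql-connector-python-style reconnect(attempts, delay) method; the function name and signature are illustrative, not taken from the repository:

```python
def reconnect(conn, attempts=3, delay=1):
    """Try to re-establish a dropped connection, logging the outcome.

    'conn' stands in for a MySQL connection object; for example,
    mysql-connector-python connections expose reconnect(attempts=..., delay=...).
    """
    try:
        conn.reconnect(attempts=attempts, delay=delay)
        print("reconnect: succeeded")
        return True
    except Exception as exc:
        print(f"reconnect: failed ({exc})")
        return False
```

Returning a boolean lets the caller decide whether to retry the insert on this connection or fail over to the other master.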
Note: The commands and script demonstrated from here onwards are available in the following CloudAcademy GitHub repository.
Under the Database features, I’ll select the “Multiple Writers” option - this is what makes the cluster a multi master. I’ll set the DB cluster identifier to be “cloudacademy-db-multi”. I’ll configure the credentials to be admin with a password of cloudacademy. For instance size - I’ll simply choose the smallest size.
I’ll then deploy it into an existing Multi AZ VPC. For security groups - I’ll simply allocate an existing one which allows inbound TCP connections to the default MySQL port 3306. Connections will be made from an existing bastion host which has the standard MySQL client already installed on it.
Ok with all that in place, I can now go ahead and click on the “Create Database” button at the bottom. Provisioning is fairly quick and takes just a matter of minutes to complete.
While we are waiting for the database provisioning process to complete, let’s jump over to GitHub and examine the Aurora multi master repo. Here, within the README, we can see the commands we will execute to create the demo database and course table.
The “insert-test” Python script implements connection load balancing and retry logic. Lines 7 and 8 query environment variables established in the terminal; these are the connection endpoints for each of the two master instances. Lines 10 through 12 specify the database name and the credentials used for authentication.
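The configuration portion of the script, as described, boils down to reading the connection endpoints from the environment alongside the demo's database name and credentials. A sketch under that assumption (the "localhost" fallbacks are added here purely for illustration):

```python
import os

# Connection endpoints exported in the bastion terminal before the script runs.
NODE1 = os.environ.get("AURORA_NODE1", "localhost")
NODE2 = os.environ.get("AURORA_NODE2", "localhost")

# Database name and credentials used in this demo.
DB_NAME = "demo"
DB_USER = "admin"
DB_PASS = "cloudacademy"
```

If either variable is missing, a real script would be better off failing fast with a clear error than silently connecting to the wrong host.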
The “reconnect” function, spanning lines 14 through 19, calls the reconnect method on the passed-in connection and logs whether the reconnection attempt succeeded or failed.
The remainder of the script, starting from line 32, establishes two database connections, one to each of the master nodes, and then inserts 100 course records into the course table. Connection load balancing is performed by testing whether the current value of x within the for loop is even or odd and alternating the connections accordingly: one connection is treated as the primary, while the other takes on the role of the backup.
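The even/odd alternation and failover behaviour described above can be sketched with stub connections standing in for the two masters; the function names here are illustrative, and the real script uses live MySQL connections:

```python
def choose_connections(x, conn1, conn2):
    """Alternate which master is primary: even iterations use conn1,
    odd iterations use conn2; the other acts as the backup."""
    return (conn1, conn2) if x % 2 == 0 else (conn2, conn1)

def insert_with_failover(x, conn1, conn2, sql):
    """Try the primary for this iteration; on failure, fail over
    to the backup (the remaining active master)."""
    primary, backup = choose_connections(x, conn1, conn2)
    try:
        primary.execute(sql)
        return primary
    except Exception:
        backup.execute(sql)
        return backup
```

Because every iteration can fall back to the other node, crashing either master only redirects traffic rather than losing inserts, which is exactly what the demo sets out to show.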
Ok, let’s jump back into the RDS console and confirm that our Multi Master database is ready - which it is. Next, I’ll need to gather both connection endpoints for the masters and then set them up as environment variables within an SSH session on the bastion host.
I’ll now jump into my local terminal and connect to the bastion host using SSH.
Once connected, I’ll git clone the Aurora multi master repo. Once that has completed, I’ll navigate into the aurora multi master directory and list its contents. From here I then establish both the AURORA_NODE1 and AURORA_NODE2 environment variables, configuring them with the connection endpoints previously highlighted and copied.
Next, I’ll split the terminal up into 3 individual panes using tmux. I’ll use the key sequence control plus b plus double quote to split the terminal horizontally, and then control plus b plus percent sign to split it vertically. This will allow me to run 3 commands side by side and see all results at the same time.

In the first pane, I will set up a watch to perform a “select count(*) from course” every second. This will provide us a running count of how many records have been inserted into the course table.

In the second pane, I will launch the main Python script, which performs the inserts into the database, implementing connection load balancing and retry logic. Here we can see that it has started to successfully insert new data records using connection load balancing. Back in the first pane, we can see that the course table count is now increasing as expected.

In the third pane, I will intermittently execute the command “alter system crash” against each of the database nodes individually to simulate a crash. We should expect the connection retry logic to be exercised and the table count to keep incrementing without any loss. This appears to be the case, which is great.
If we now let the main Python script play out, we should see that the final table count ends up at 100, which it does. This is a great result.
In summary, this demonstration highlighted the following:
- How to provision a new Aurora MySQL multi master read-write database.
- Both database nodes are in an active-active or multi master read write configuration.
- Connection load balancing and retry logic implemented within the Python client script is working successfully without any data loss.
If you’ve followed along, please don’t forget to terminate your database cluster to avoid ongoing charges.
Jeremy is a Content Lead Architect and DevOps SME here at Cloud Academy where he specializes in developing DevOps technical training documentation.
He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 25+ years. In recent times, Jeremy has been focused on DevOps, Cloud (AWS, GCP, Azure), Security, Kubernetes, and Machine Learning.
Jeremy holds professional certifications for AWS, GCP, and Kubernetes.