
Aurora Multi-Master Setup and Use DEMO

Overview

Difficulty: Intermediate
Duration: 34m
Students: 759
Rating: 4.6/5

Description

Interested in learning about Amazon Aurora?

Amazon Aurora is a next-generation, cloud-native relational database offering strong performance and high availability features.

This course explores the various configuration options and techniques that you can use to create highly available Amazon Aurora databases. It starts by looking at the high availability options available within Amazon Aurora, before diving deeper into more specific features, such as single-master and multi-master setups, read replicas, and how Aurora can be provisioned as a serverless database. Each new topic is accompanied by a real-world demonstration to help you better understand the concepts presented within the course.

For any feedback or questions relating to this course, please contact us at support@cloudacademy.com.

Learning Objectives

  • Understand how to provision and configure Amazon Aurora in a manner that ensures it is highly available and able to serve all read and write requests.

Intended Audience

This course is intended for those responsible for architecting Aurora database setups, with an emphasis on high availability.

Prerequisites

To get the most from this course, you should be familiar with basic SQL database concepts. If required, consider taking our "Database Fundamentals for AWS" course first.

Source Code

The following GitHub repository is referenced within this course:

Transcript

Let's take a quick look at a demo that shows how easy it is to set up and use a multi-master Aurora database cluster.

In this example I’ll perform the following sequence:

  1. Launch a new multi-master Aurora MySQL database cluster within the AWS RDS console.
  2. Create a new database named demo, and within it create a new table named course.
  3. Use the AWS RDS console to find the connection endpoints for the multi-master database and set them up as environment variables named AURORA_NODE1 and AURORA_NODE2 in the local terminal.
  4. Launch a Python script that implements connection load balancing and retry logic to continuously insert records into the course table.
  5. Confirm that connections are load balanced across both active master database nodes.
  6. Crash each of the master database nodes individually.
  7. Confirm that connections fail over to the remaining active master database node and that database inserts continue.

Note: The commands and script demonstrated from this point onwards are available in the CloudAcademy GitHub repository referenced in the Source Code section above.

OK, let's begin. Starting off in the AWS RDS console, I'll create a new Amazon Aurora MySQL multi-master database cluster.

Under Database features, I'll select the "Multiple Writers" option - this is what makes the cluster multi-master. I'll set the DB cluster identifier to "cloudacademy-db-multi". I'll configure the credentials to be admin with a password of cloudacademy. For the instance size, I'll simply choose the smallest available class.

I'll then deploy it into an existing multi-AZ VPC. For security groups, I'll simply allocate an existing one which allows inbound TCP connections on the default MySQL port, 3306. Connections will be made from an existing bastion host which already has the standard MySQL client installed.
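Note: The demo configures everything through the RDS console. As a rough, non-authoritative sketch of the equivalent setup using the AWS SDK for Python (boto3), the cluster and its two writer instances could be created along the following lines; the region, subnet group name, and security group ID are placeholders, and db.r4.large reflects the smallest instance class Aurora multi-master supported at the time.

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Create the cluster itself; EngineMode="multimaster" corresponds to the
# "Multiple Writers" option selected in the console.
rds.create_db_cluster(
    DBClusterIdentifier="cloudacademy-db-multi",
    Engine="aurora",                                # Aurora MySQL 5.6-compatible
    EngineMode="multimaster",
    MasterUsername="admin",
    MasterUserPassword="cloudacademy",
    DBSubnetGroupName="my-multi-az-subnet-group",   # placeholder subnet group
    VpcSecurityGroupIds=["sg-0123456789abcdef0"],   # placeholder SG allowing TCP 3306
)

# A multi-master cluster needs its writer instances created explicitly.
for i in (1, 2):
    rds.create_db_instance(
        DBInstanceIdentifier=f"cloudacademy-db-multi-{i}",
        DBClusterIdentifier="cloudacademy-db-multi",
        Engine="aurora",
        DBInstanceClass="db.r4.large",
    )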

Ok with all that in place, I can now go ahead and click on the “Create Database” button at the bottom. Provisioning is fairly quick and takes just a matter of minutes to complete.

While we are waiting for the database provisioning process to complete - let’s jump over into GitHub and examine the Aurora Multimaster repo. Here within the readme we can see the commands that we will execute to create the demo database and course table.
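Note: The exact commands are in the repo readme. As an illustration only, the same database and table could be created from Python roughly as follows; the single name column on the course table is an assumption made for this sketch.

import os
import mysql.connector

# Connect to either master endpoint using the credentials chosen earlier.
conn = mysql.connector.connect(
    host=os.environ["AURORA_NODE1"],
    user="admin",
    password="cloudacademy",
)
cursor = conn.cursor()
cursor.execute("CREATE DATABASE IF NOT EXISTS demo")
cursor.execute(
    "CREATE TABLE IF NOT EXISTS demo.course ("
    "  id INT AUTO_INCREMENT PRIMARY KEY,"
    "  name VARCHAR(255) NOT NULL"
    ")"
)
conn.close()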

The “insert-test” Python script implements connection load balancing and retry logic. Lines 7 and 8 read environment variables established in the terminal - these are the connection endpoints for each of the two master instances. Lines 10, 11, and 12 specify the database name and the database credentials used for authentication.

The “reconnect” function, spanning lines 14 through 19, simply calls reconnect on the passed-in connection and logs whether the reconnection attempt succeeded or failed.

The remainder of the script, starting from line 32, establishes two database connections, one to each of the master nodes. The script then inserts 100 course records into the course table. Connection load balancing is performed by testing whether the current value of x within the for loop is even or odd and alternating the connections accordingly - one connection is treated as the primary, and the other takes on the role of the backup.
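Note: Putting those pieces together, a minimal sketch of such a script might look like the following. It assumes the mysql-connector-python driver and a course table with a single name column; the line numbers quoted above refer to the actual script in the repo, not to this sketch.

import os
import mysql.connector

# Connection endpoints for the two master instances, read from the terminal environment.
NODE1 = os.environ["AURORA_NODE1"]
NODE2 = os.environ["AURORA_NODE2"]

# Database name and credentials used for authentication.
DB_NAME = "demo"
DB_USER = "admin"
DB_PASS = "cloudacademy"

def reconnect(conn, label):
    # Attempt to re-establish a dropped connection and log the outcome.
    try:
        conn.reconnect(attempts=3, delay=1)
        print(f"{label}: reconnect succeeded")
        return True
    except mysql.connector.Error as err:
        print(f"{label}: reconnect failed - {err}")
        return False

def connect(host):
    return mysql.connector.connect(
        host=host, user=DB_USER, password=DB_PASS, database=DB_NAME)

conn1 = connect(NODE1)
conn2 = connect(NODE2)

for x in range(100):
    # Load balancing: even iterations prefer node 1, odd iterations prefer node 2;
    # the other connection acts as the backup.
    primary, backup = (conn1, conn2) if x % 2 == 0 else (conn2, conn1)
    for conn, label in ((primary, "primary"), (backup, "backup")):
        try:
            cursor = conn.cursor()
            cursor.execute("INSERT INTO course (name) VALUES (%s)", (f"course-{x}",))
            conn.commit()
            cursor.close()
            print(f"inserted record {x} via the {label} connection")
            break  # insert succeeded, move on to the next record
        except mysql.connector.Error:
            reconnect(conn, label)  # retry logic: fall through to the backup connection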

OK, let's jump back into the RDS console and confirm that our multi-master database is ready - which it is. Next, I'll need to gather both master connection endpoints and then set them up as environment variables within an SSH session on the bastion host.
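Note: In the demo the endpoints are copied from the RDS console. As an alternative, they could be retrieved programmatically with boto3, for example:

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# List the per-instance endpoints for the multi-master cluster created earlier.
instances = rds.describe_db_instances(
    Filters=[{"Name": "db-cluster-id", "Values": ["cloudacademy-db-multi"]}]
)["DBInstances"]

for instance in instances:
    print(instance["DBInstanceIdentifier"], instance["Endpoint"]["Address"])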

I’ll now jump into my local terminal and connect to the bastion host using SSH.

Once connected, I'll git clone the Aurora multi-master repo. Once that completes, I'll navigate into the aurora multi-master directory and do a directory listing to examine its contents. From here, I then establish both the AURORA_NODE1 and AURORA_NODE2 environment variables, configuring them with the connection endpoints previously highlighted and copied.

Next, I'll split the terminal into three individual panes using tmux. I'll use the key sequence Ctrl+B then " (double quote) to split the terminal horizontally, and then Ctrl+B then % (percent) to split it vertically. This allows me to run three commands side by side and see all of the results at the same time.

In the first pane, I'll set up a watch to run a "select count(*) from course" query every second. This gives us a running count of how many records have been inserted into the course table. In the second pane, I'll launch the main Python script, which performs the inserts into the database while implementing connection load balancing and retry logic. Here we can see that it has started to successfully insert new records using connection load balancing, and in the first pane the course table count is increasing as expected.

In the third pane, I'll intermittently execute the command "alter system crash" against each of the database nodes individually to simulate a crash. We should expect the connection retry logic to be exercised and the table count to keep incrementing without any loss. This appears to be the case, which is great.
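Note: For reference, the checks run in the first and third panes use the standard mysql client; expressed in Python instead, they would look roughly like the sketch below. ALTER SYSTEM CRASH INSTANCE is Aurora MySQL's fault-injection query for simulating an instance crash.

import os
import time
import mysql.connector

def connect(host):
    return mysql.connector.connect(
        host=host, user="admin", password="cloudacademy", database="demo")

# First pane equivalent: poll the running row count once per second.
conn = connect(os.environ["AURORA_NODE1"])
for _ in range(10):
    cursor = conn.cursor()
    cursor.execute("SELECT COUNT(*) FROM course")
    print("rows:", cursor.fetchone()[0])
    cursor.close()
    time.sleep(1)

# Third pane equivalent: simulate a crash on one of the master nodes.
crash_conn = connect(os.environ["AURORA_NODE2"])
try:
    crash_conn.cursor().execute("ALTER SYSTEM CRASH INSTANCE")
except mysql.connector.Error:
    pass  # the connection drops when the instance crashes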

If we now let the main Python script play out, we should see that the final table count ends up at 100 - which it does. This is a great result.

In summary, this demonstration highlighted the following:

  1. How to provision a new Aurora MySQL multi-master database.
  2. Both database nodes operate in an active-active, multi-master (read-write) configuration.
  3. The connection load balancing and retry logic implemented within the Python client script works successfully, without any data loss.

If you’ve followed along, please don’t forget to terminate your database cluster to avoid ongoing charges.

About the Author

Students: 27,185
Labs: 32
Courses: 93
Learning paths: 22

Jeremy is the DevOps Content Lead at Cloud Academy where he specializes in developing technical training documentation for DevOps.

He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 20+ years. In recent times, Jeremy has been focused on DevOps, Cloud, Security, and Machine Learning.

Jeremy holds professional certifications for both the AWS and GCP cloud platforms.