Designing for high availability, fault tolerance and cost efficiency
High Availability in RDS
High Availability in Amazon Aurora
High Availability in DynamoDB
SAA-C02 Exam Prep
This section of the Solution Architect Associate learning path introduces you to the High Availability concepts and services relevant to the SAA-C02 exam. By the end of this section, you will be familiar with the design options available and know how to select and apply AWS services to meet specific availability scenarios relevant to the Solution Architect Associate exam.
- Learn the fundamentals of high availability, fault tolerance, and backup and disaster recovery
- Understand how a variety of Amazon services such as S3, Snowball, and Storage Gateway can be used for backup purposes
- Learn how to implement high availability practices in Amazon RDS, Amazon Aurora, and DynamoDB
Let's take a quick look at a demo that shows how easy it is to set up and use an Aurora Serverless database cluster.
In this example I’ll perform the following sequence:
- Launch a new Aurora Serverless MySQL database within the AWS RDS console.
- Create a new database named demo, and within it create a new table named course.
- Enable the Web Service Data API on the new Aurora Serverless MySQL database.
- Use the AWS CLI to invoke the aws rds-data command to read and write into the course table.
Under the Database features I’ll select the “Serverless” option - this is what makes the cluster serverless. I’ll set the DB cluster identifier to be “cloudacademy-serverless”. I’ll configure the credentials to be admin with a password of cloudacademy.
For capacity settings - I’ll simply go with 2 GB for minimum capacity and 16 GB for maximum capacity. I’ll also enable the “Force Scaling” and “Pause Compute Capacity” settings to ensure cost savings when activity is low.
I’ll then deploy it into an existing Multi-AZ VPC. For security groups - I’ll simply allocate an existing one which allows inbound TCP connections to the default MySQL port 3306. Connections will be made from an existing bastion host which has the standard MySQL client already installed on it.
I’ll also enable the “Data API” which will activate a SQL HTTP endpoint.
Ok with all that in place, I can now go ahead and click on the “Create Database” button at the bottom. Provisioning is fairly quick and takes just a matter of minutes to complete.
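For reference, the same console steps can be approximated with the AWS CLI. This is only a sketch - the engine choice and the scaling values shown are illustrative assumptions (Aurora Serverless capacity is set in capacity units, not GB), not the exact options selected in the console:

```shell
# Sketch: provision an Aurora Serverless MySQL cluster from the CLI.
# Engine version and scaling values below are assumptions.
aws rds create-db-cluster \
  --db-cluster-identifier cloudacademy-serverless \
  --engine aurora-mysql \
  --engine-mode serverless \
  --master-username admin \
  --master-user-password cloudacademy \
  --scaling-configuration MinCapacity=1,MaxCapacity=8,AutoPause=true,TimeoutAction=ForceApplyCapacityChange \
  --enable-http-endpoint \
  --region us-west-2
```

The `--enable-http-endpoint` flag switches on the Data API, matching the console setting used later in the demo.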
While the database provisioning is taking place, I’ll head over into the Secrets Manager service and set up a new secret which will be used later on by the AWS CLI to invoke SQL commands using the SQL HTTP endpoint.
I’ll start by clicking on the “Store a new Secret” button. For the secret type I’ll go with the selected default which is “Credentials for RDS database”. I then need to enter the MySQL database username and password credentials - which for the serverless database I just launched were admin and cloudacademy respectively. I then need to select the cloudacademy-serverless database and then click the Next button. I then need to allocate this secret a name and optionally a description. In this case I’ll call the secret “demo/cloudacademy-serverless”.
I’ll skip assigning any tags and instead just click the Next button. I’ll accept the automatic rotation section defaults - which are to disable automatic rotation - and again just click on the Next button. The review section looks good - I can now click the Store secret button to create and store the secret. Ok, that has worked. Finally, I need to click on the new secret like so and then copy down the secret ARN highlighted here - this will be used later on to set up and authenticate to the Web Service Data API via the AWS CLI in the terminal.
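The same secret could equally be created from the CLI rather than the console - a sketch, assuming the same name and credentials used above:

```shell
# Sketch: create the secret from the CLI and print only its ARN,
# which is what the Data API calls will need later.
aws secretsmanager create-secret \
  --name demo/cloudacademy-serverless \
  --description "Credentials for the cloudacademy-serverless cluster" \
  --secret-string '{"username":"admin","password":"cloudacademy"}' \
  --query ARN --output text
```

The `--query ARN --output text` filter saves the copy-and-paste step of finding the ARN in the console.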
Ok, let’s jump back into the RDS console and confirm that our Serverless database is ready - which it is. Next I’ll need to discover and copy the allocated host name.
I’ll now jump into my local terminal and connect to the bastion host using SSH.
Once connected I’ll use the MySQL client utility to connect to the serverless database using the hostname just copied. Once authenticated into the serverless database - I’ll simply create a new database named demo and then create a new course table within it.
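The database and table creation can also be scripted from the bastion host. The course table columns below are illustrative assumptions - the demo doesn't show the actual schema:

```shell
# Run from the bastion host. <cluster-endpoint> is the host name
# copied from the RDS console; column definitions are assumed.
mysql -h <cluster-endpoint> -u admin -p <<'SQL'
CREATE DATABASE IF NOT EXISTS demo;
USE demo;
CREATE TABLE IF NOT EXISTS course (
  id INT AUTO_INCREMENT PRIMARY KEY,
  name VARCHAR(255)
);
SQL
```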
Ok, I’ll now jump into a new terminal session and this time use the AWS CLI to execute a SQL insert statement to populate sample data into the course table. I’ll execute it within the us-west-2 region and set the secret-arn to be the ARN copied when we set up the RDS secret within the Secrets Manager service. The resource-arn is copied from the serverless database configuration section within RDS. The database is set to demo, and sql is set to a simple insert SQL statement that will populate a new course record in our course table. Here I’ll execute the following command:
aws rds-data execute-statement \
    --region us-west-2 \
    --secret-arn "" \
    --resource-arn "" \
    --database demo \
    --sql "INSERT INTO course VALUES (...)"
Ok - executing this statement has now successfully created a new database record within the demo course table. Let's rerun this command a few more times to insert more data.
Finally - I’ll now run a select statement against the table to return all data in the course table.
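The closing select uses the same Data API call with only the sql parameter changed - shown here with the two ARNs left as placeholders to fill in:

```shell
# Read back all rows in the course table via the Data API;
# supply your own secret and cluster ARNs.
aws rds-data execute-statement \
  --region us-west-2 \
  --secret-arn "<secret-arn>" \
  --resource-arn "<cluster-arn>" \
  --database demo \
  --sql "SELECT * FROM course"
```

The response is JSON: each row comes back under `records` as a list of typed field objects.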
In summary, this demonstration highlighted the following:
- How to provision a new Aurora Serverless MySQL database.
- How to enable the Web Service Data API.
- How to connect to an Aurora Serverless database using the AWS CLI, authenticating with a secret stored in the Secrets Manager service.
- How to execute SQL statements using the AWS CLI.
If you’ve followed along, please don’t forget to terminate your database cluster to avoid ongoing charges.
Stuart has been working within the IT industry for two decades covering a huge range of topic areas and technologies, from data center and network infrastructure design, to cloud architecture and implementation.
To date, Stuart has created 90+ cloud courses, mostly within the AWS category and with a heavy focus on security and compliance, reaching over 100,000 students.
Stuart is a member of the AWS Community Builders Program for his contributions towards AWS.
He is AWS certified and accredited in addition to being a published author covering topics across the AWS landscape.
In January 2016 Stuart was awarded the ‘Expert of the Year Award 2015’ by Experts Exchange for his knowledge sharing within cloud services to the community.
Stuart enjoys writing about cloud technologies and you will find many of his articles within our blog pages.