
Setting Up RDS

The course is part of these learning paths

DevOps Engineer – Professional Certification Preparation for AWS
Solutions Architect – Professional Certification Preparation for AWS
SysOps Administrator – Associate Certification Preparation for AWS
Operations on AWS

Contents

Introduction
Testing against failures
Overview
Difficulty: Intermediate
Duration: 52m
Students: 4601
Rating: 4.9/5

Description

The gold standard for high availability is five 9's, meaning guaranteed uptime 99.999% of the time. That works out to just over five minutes of downtime across an entire year. Achieving this kind of reliability requires some advanced knowledge of the many tools AWS provides to build a robust infrastructure.
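As a quick sanity check on that figure, the arithmetic is simple; here is a minimal, purely illustrative Python sketch:

```python
# Allowed downtime at "five nines" (99.999%) availability.
availability = 0.99999
minutes_per_year = 365.25 * 24 * 60            # ~525,960 minutes in a year
downtime_minutes = (1 - availability) * minutes_per_year
print(f"{downtime_minutes:.2f} minutes of downtime per year")  # roughly 5.26
```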

In this course, expert Cloud Architect Kevin Felichko will show one of the many possible ways to build a high-availability application, designing the whole infrastructure with a Design for Failure approach. You'll learn how to use AutoScaling, load balancing, and VPC to run a standard Ruby on Rails application on an EC2 instance, with data stored in an RDS-backed MySQL database and assets stored on S3. Kevin will also touch on advanced topics such as using CloudFront for content delivery and distributing an application across multiple AWS regions.

Who should take this course

This is an intermediate/advanced course, so you will need some experience with EC2, S3, and RDS, plus at least a basic knowledge of AutoScaling, ELB, VPC, Route 53, and CloudFront.

Test your knowledge of the material covered in this course: take a quiz.

If you have thoughts or suggestions for this course, please contact Cloud Academy at support@cloudacademy.com.

Transcript

Welcome. This is the third lesson in our series on designing for high availability. In this lesson, we're setting up our RDS instance. RDS is AWS's relational database service. It comes in many flavors, each with its own set of advantages and disadvantages. For our application, we will be using MySQL as the database engine. RDS makes eliminating a single point of failure extremely simple. If you recall from our last lesson, we set up three subnets, one in each availability zone, dedicated to the database tier. Each database instance has specific needs for the number of IP addresses available to it for managed services such as backups and maintenance. By giving an instance its own subnet, it does not need to compete for IP addresses with any other service.

Even though we've created our subnets with the purpose of housing our database instances, we have to explicitly tell RDS which subnets to use. This is where subnet groups come in. A subnet group is simply a collection of subnets where RDS can fire up its instances. To create a subnet group, we go to the RDS dashboard via the link in the AWS console. Once there, we click on the subnet groups link. Next, click on the create DB subnet group button. We start by adding a name for our subnet group along with a description. Again, your naming convention should reflect your environment. We select our one and only VPC, and the availability zone drop-down is populated. Select an availability zone. In the subnet drop-down, we can only see subnet IDs, which means we need a decent memory or a handy reference. The AWS console is in a constant state of improvement, and hopefully it will show us our subnet names instead of just the IDs in the near future. So we select the database subnet ID and click add. Repeat this for the remaining availability zones.
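The same step can also be scripted. Below is a minimal boto3 sketch of creating the subnet group; the group name and subnet IDs are placeholders standing in for the three database-tier subnets from the previous lesson, not values from the video.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Create a DB subnet group spanning the three database-tier subnets,
# one per availability zone (IDs below are placeholders).
rds.create_db_subnet_group(
    DBSubnetGroupName="ha-demo-db-subnets",
    DBSubnetGroupDescription="Database-tier subnets, one per AZ",
    SubnetIds=[
        "subnet-aaaaaaaa",  # database subnet in us-east-1a
        "subnet-bbbbbbbb",  # database subnet in us-east-1b
        "subnet-cccccccc",  # database subnet in us-east-1c
    ],
)
```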

When finished, click the create button. This will take us back to the subnet group listings, where we can see the subnet group we just created. Now it is time to create our database instance. From the RDS dashboard, click through to the database instances and then click on the launch button. This will walk us through the creation process. Our application will be using the MySQL engine. Since we plan to use a multi-AZ deployment, we ensure that option is set before continuing.

Next, we are presented with the database details. The default license model and engine version are acceptable for our application. We are going to change the instance class to T1 micro, set provisioned IOPS to no, and drop the allocated storage down to five gigabytes. These selections are not advised for a true production environment, as they will result in very poor performance, but that is not a concern in this series. Lastly, we set a database identifier and the credentials before hitting next. This brings us to the advanced settings step. The first configuration option we see is for the VPC, which already defaults to our one and only VPC. For the database subnet group, we select the group we previously created.

We don't want our instance to be publicly accessible, meaning we need to ensure the publicly accessible option is set to no.

Up next, we select a VPC security group. We are using a security group we created prior to this series. Now we name the database. This is very important because it is what our application will use when it connects to our database. The default port is 3306, which is what we want for a MySQL instance.

We do not need to change the parameter or option groups at this time. Parameter and option groups are useful when you need to configure database engine settings or features that are not exposed through the standard instance creation options.
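For reference, the console choices described so far map roughly onto a single boto3 call. This is an illustrative sketch rather than the exact configuration used in the video; the identifier, credentials, database name, subnet group, and security group ID are all placeholders.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Multi-AZ MySQL instance mirroring the wizard choices above.
# A tiny instance class and 5 GB of storage are fine for a demo,
# not for production. All names and IDs are placeholders.
rds.create_db_instance(
    DBInstanceIdentifier="ha-demo-db",
    Engine="mysql",
    MultiAZ=True,                      # standby replica in a second AZ
    DBInstanceClass="db.t1.micro",     # older class matching the video era
    AllocatedStorage=5,                # gigabytes, no provisioned IOPS
    MasterUsername="appuser",
    MasterUserPassword="use-a-real-secret-here",
    DBName="hademo",                   # the name the application connects to
    Port=3306,                         # default MySQL port
    DBSubnetGroupName="ha-demo-db-subnets",
    VpcSecurityGroupIds=["sg-00000000"],
    PubliclyAccessible=False,          # keep the instance private
)
```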

You can find more information about parameter and option groups in the RDS documentation on AWS. The backup section allows us to set a retention period and backup window. The retention period will be based on your requirements. A backup window is useful if you want to limit backups to a time when user activity is low.

For our application, the default settings are acceptable. The maintenance section gives you the option of having minor version upgrades automatically applied to your instances. If you want upgrades to happen automatically, you can also set a maintenance window. Again, the default options are suitable for us. Finally, we launch the database instance, which begins the creation process and puts us back on the database instance list. Creating a multi-availability-zone database will take quite a while, so let's fast forward to what it looks like when it finally completes. Let's select our instance and check out the details. To connect to our instance from our application, we need to take note of the endpoint alias. This alias, in conjunction with the database name and credentials, is what will be used to make a connection. It always points to the primary instance. If the primary becomes unavailable, a standby instance is promoted to primary and the alias is redirected to it. The unreachable instance will become a standby once it is available again. The switchover may cause a brief service disruption between the time the primary is determined to be unhealthy and a standby is promoted. However, this is just a small inconvenience that can be masked with other techniques and is outweighed by the fact that RDS handles it for us without any intervention. At any point, we can see which availability zone hosts the primary instance from this screen.
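Once the instance reports as available, the endpoint alias and the availability zone currently hosting the primary can also be read programmatically. A small sketch using boto3, reusing the placeholder identifier from the example above:

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Wait for the multi-AZ instance to finish creating, then read the
# endpoint alias and the AZ currently hosting the primary.
rds.get_waiter("db_instance_available").wait(DBInstanceIdentifier="ha-demo-db")

instance = rds.describe_db_instances(DBInstanceIdentifier="ha-demo-db")["DBInstances"][0]
endpoint = instance["Endpoint"]
print("Connect to:", endpoint["Address"], "port", endpoint["Port"])
print("Primary AZ:", instance["AvailabilityZone"])
print("Standby AZ:", instance.get("SecondaryAvailabilityZone"))
```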

Our primary instance is currently running in us-east-1a. In a later lesson, we will use this screen to demonstrate a successful failover to a new availability zone. With our RDS instance operational, we can move on to setting up our auto scaling policies.
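As a preview of that later failover demonstration, a multi-AZ failover can be forced by rebooting the instance with failover enabled. This is only a hedged sketch using the same placeholder identifier, not a step performed in this lesson.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Force a failover to the standby; the endpoint alias is redirected
# to the newly promoted primary automatically.
rds.reboot_db_instance(DBInstanceIdentifier="ha-demo-db", ForceFailover=True)
```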

About the Author

Kevin is a seasoned technologist with 15+ years of experience, mostly in software development. Recently, he has led several migrations from traditional data centers to AWS, resulting in over $100K a year in savings. His new projects take advantage of cloud computing from the start, which enables a faster time to market.

He enjoys sharing his experience and knowledge with others while constantly learning new things. He has been building elegant, high-performing software across many industries since high school. He currently writes apps in Node.js and iOS apps in Objective-C and designs complex architectures for AWS deployments.

Kevin currently serves as Chief Technology Officer for PropertyRoom.com, where he leads a small, agile team.