
Expanding to Multiple Regions


Welcome to Domain Seven - Scalability and Elasticity - in the Solution Architect Professional for AWS learning path. In this group of lectures, we will walk through building a flexible, available, and highly resilient application in the Amazon Web Services environment.


Hi, and welcome back to Domain Seven of our Solution Architect Professional for AWS learning path, where we're talking about scalability and elasticity. In this lesson, we'll go over the steps needed to expand our application from a single region to multiple regions. While AWS makes this possible, it can still be quite complicated, so rather than step-by-step instructions for every detail, we will touch on the aspects you need to know. So let's get started.

First up, we need to create RDS read replicas for our application. From the RDS dashboard, we head to our instance and, under the Instance Actions button, click Create Read Replica. Read replicas default to the same settings as the source RDS instance - in this case, a t1.micro instance with standard storage. We give this instance an identifier that is unique among all read replicas. Recall from our architecture diagram that each database subnet will have its own read replica. We change the subnet group to our one and only subnet group, and start with the us-east-1a Availability Zone. The other settings are good, so we can create the read replica. It will take a while for the creation to complete, and we will repeat this for the other Availability Zones in our region and in our new region.

Next, we have some configuration to do for our application. The application will need to be modified to read from a read replica and write to the primary RDS instance. A challenge with this setup is that each read replica has its own endpoint URL. One option would be to add an EC2 instance running HAProxy to distribute database queries across the replicas. Writing to the primary RDS instance from another region requires a VPN connection, which can be accomplished using OpenVPN, since VPC peering is limited to VPCs within a single region.
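To make the read/write split concrete, here is a minimal sketch of the routing decision the modified application (or an HAProxy layer) has to make. The endpoint hostnames are placeholders, not real instances, and the round-robin selection is one simple policy among many:

```python
import itertools

# Placeholder endpoints - in a real deployment these come from the RDS console or API.
PRIMARY_ENDPOINT = "mydb-primary.us-east-1.rds.amazonaws.com"
READ_REPLICAS = [
    "mydb-replica-1a.us-east-1.rds.amazonaws.com",
    "mydb-replica-1b.us-east-1.rds.amazonaws.com",
]

# Simple round-robin over replica endpoints, standing in for what an
# HAProxy instance would otherwise handle.
_replica_cycle = itertools.cycle(READ_REPLICAS)

def endpoint_for(query: str) -> str:
    """Send SELECT queries to a read replica; everything else goes to the primary."""
    if query.lstrip().upper().startswith("SELECT"):
        return next(_replica_cycle)
    return PRIMARY_ENDPOINT
```

Because each replica has its own endpoint, anything smarter than round robin (health checks, replica lag awareness) is exactly the kind of logic that motivates putting HAProxy in front of the replicas instead.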
The only remaining piece is to script the database failover, so that a read replica in the new region is promoted to the primary database instance if the existing primary becomes unavailable.

One of the team points out that we should consider migrating our database to an Aurora cluster, as Aurora supports cross-region read replicas. We could then promote our cross-region replica to be the new primary from the RDS console or, ideally, automatically. The promotion process typically takes a few minutes, depending on our workloads. Better still, if we set our cross-region replica cluster to a high priority tier, Aurora should automatically fail over to it if some or all of the three replicas in the master region's Availability Zones become unavailable. That would really simplify our high availability requirements.

If you have an Amazon Aurora replica in the same or a different Availability Zone, then when failing over, Aurora flips the canonical name record (CNAME) for your DB instance to point at the healthy replica, which is in turn promoted to become the new primary. Start to finish, that generally happens within a minute.

Aurora also supports cluster endpoints. Say our Aurora database cluster in US East 1 has two Aurora replicas in different Availability Zones from its primary instance. By connecting to the cluster endpoint, we can send both read and write traffic to the primary instance. We can also connect to the endpoint for each Aurora replica and send queries directly to those DB instances if we want to. Then, in the unlikely event that the primary instance in US East 1, or the Availability Zone that contains it, fails, RDS will promote one of the Aurora replicas to be the new primary instance and update the DNS record for the cluster endpoint to point to the new primary. Now assume our whole AZ goes down - remember, Aurora spans three AZs by default.
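The promotion ordering Aurora documents - lowest priority tier number wins, with ties broken by the largest replica - can be sketched as a pure function. The replica names, sizes, and health flags below are illustrative only:

```python
def pick_failover_target(replicas):
    """Choose which Aurora replica to promote after a primary failure.

    Each replica is a dict with 'name', 'tier' (0-15, lower = higher
    priority), 'size_gb', and an 'available' health flag.
    """
    healthy = [r for r in replicas if r.get("available", True)]
    if not healthy:
        raise RuntimeError("no healthy replica to promote")
    # Lowest tier first; among equal tiers, prefer the largest replica.
    return min(healthy, key=lambda r: (r["tier"], -r["size_gb"]))

# Hypothetical fleet: two in-region replicas and one cross-region replica
# that we have given the highest promotion priority (tier 0).
replicas = [
    {"name": "replica-us-east-1b", "tier": 1, "size_gb": 100, "available": False},
    {"name": "replica-us-east-1c", "tier": 1, "size_gb": 200, "available": True},
    {"name": "replica-eu-west-1a", "tier": 0, "size_gb": 100, "available": True},
]
```

With this setup, the tier-0 cross-region replica is chosen whenever it is healthy, which is exactly the behavior we are counting on for cross-region failover.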
So if we lose all three of those, and say we lose the region, then theoretically our failover mechanism should fail over to the next highest priority read replica. If we set our cross-region read replica cluster to be the highest priority read replica, then there is no reason why it shouldn't automatically fail over to another region. That's certainly what Jeff Barr's blog seems to tell us, and it is always very good for keeping up to date with what's happening in AWS. Our application can then continue to send read and write traffic to our Aurora DB cluster by using the cluster endpoint, which keeps interruption of service to a minimum. We can have up to 15 read replicas with Aurora, and the priority tier is how Aurora selects which read replica to promote after a failure. We agree to test this design hypothesis, as it appeared to work in our preliminary tests, and to request funding for a proof of concept and a cost-benefit analysis for migrating our current MySQL database to Amazon Aurora.

Okay, so back to our current build. After finishing the modifications to our application to support the current read replica setup, we need to create an AMI and copy it to our new region. From the Actions menu, click the Copy AMI option, select the destination region, and begin the copy. This is required because AMIs are limited to a single region.

Next, we will create a CloudFormation template for our existing environment. CloudFormation is a way to build a template version of our AWS resources that can be used to create a new environment or for disaster recovery. Under the AWS dashboard, click on the CloudFormation link. Two options are presented to us: the first allows us to create a stack using a sample template, by uploading a template, or by linking to a template; the second option is CloudFormer. CloudFormer builds a template from the existing environment, which is exactly what we want.
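The Copy AMI step above can also be scripted rather than clicked through the console. This is a hedged sketch using boto3's `copy_image` call, issued against the destination region; the AMI ID and name are placeholders, and the actual API call is commented out so the snippet runs without AWS credentials:

```python
def ami_copy_request(source_ami_id, source_region, name):
    """Build the parameters for an EC2 CopyImage call.

    The client must be created in the *destination* region - CopyImage
    pulls the AMI in, because AMIs are scoped to a single region.
    """
    return {
        "SourceImageId": source_ami_id,
        "SourceRegion": source_region,
        "Name": name,
        # "Encrypted": True,  # optional: re-encrypt the copy's snapshots
    }

# Placeholder AMI ID and regions matching the build in this lesson.
params = ami_copy_request("ami-0123456789abcdef0", "us-east-1", "app-server-eu-copy")

# import boto3
# ec2 = boto3.client("ec2", region_name="eu-west-1")  # destination region
# response = ec2.copy_image(**params)                 # returns the new ImageId
```

Scripting the copy becomes worthwhile once AMI updates are frequent, since every new application AMI has to be re-copied to each region you operate in.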
After naming the template and accepting the access control settings and options, we can review the settings. Everything looks good, so we acknowledge the AWS warning about IAM roles and click the Create button. It will take a fair amount of time for AWS to go through our resources and generate the template. CloudFormer saves the template it generates to an S3 bucket, and we can modify it as needed. In our case, we have to modify it, since CloudFormer does not generate entries for all of our resources properly. After the template has been created, we can pull the CloudFormer URL from the Outputs tab. The URL links to the EC2 instance that CloudFormer creates in our source region - a t1.micro instance responsible for managing our template. Clicking the URL takes us to a landing page that we can use to launch a new environment in the selected region. The CloudFormer wizard walks us through each AWS resource that can be launched in that region. Once launched, CloudFormer builds the resources, and when creation completes, our entire stack will be running in EU West 1. In the event the stack fails during creation, and where the template has been configured to roll back, all resources created in the stack are deleted.

Finally, we need to update Route 53 to support our new region. We want the new region to be a fallback region, which means we need to update our secondary ELB subdomain target to point to our EU West 1 Elastic Load Balancer. Should US East 1 become unavailable, content will be delivered from the EU West 1 region.
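The Route 53 update above amounts to upserting a SECONDARY failover alias record pointing at the EU West 1 load balancer. Here is a sketch of the change batch entry you would pass to Route 53's ChangeResourceRecordSets API; the domain, ELB DNS name, and hosted zone ID are placeholders:

```python
def failover_record_change(domain, elb_dns, elb_zone_id, role="SECONDARY"):
    """Build one ChangeResourceRecordSets entry for a Route 53 failover alias."""
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": domain,
            "Type": "A",
            "SetIdentifier": role.lower() + "-elb",
            "Failover": role,  # PRIMARY serves normally; SECONDARY on failure
            "AliasTarget": {
                "HostedZoneId": elb_zone_id,  # the ELB's own hosted zone ID
                "DNSName": elb_dns,
                "EvaluateTargetHealth": True,
            },
        },
    }

# Placeholder values for the EU West 1 fallback described above.
change = failover_record_change(
    "app.example.com",
    "my-elb.eu-west-1.elb.amazonaws.com",
    "Z32O12XQLNTSW2",
)
```

With `EvaluateTargetHealth` enabled on the primary record, Route 53 shifts traffic to this secondary record when the US East 1 load balancer stops passing health checks.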

About the Author
Andrew Larkin
Head of Content
Learning Paths

Andrew is fanatical about helping business teams gain the maximum ROI possible from adopting, using, and optimizing Public Cloud Services. Having built 70+ Cloud Academy courses, Andrew has helped over 50,000 students master cloud computing by sharing the skills and experiences he gained during 20+ years leading digital teams in code and consulting. Before joining Cloud Academy, Andrew worked for AWS and for AWS technology partners Ooyala and Adobe.