Scalability and Elasticity
Welcome to Domain Seven - Scalability and Elasticity - in the Solution Architect Professional for AWS learning path. In this group of lectures, we will walk through building a flexible, available, and highly resilient application in the Amazon Web Services environment.
Hello and welcome to Domain Seven, Scalability and Elasticity, in the Solution Architect Professional for AWS learning path. In this group of lectures, we'll walk through building a flexible, available, and highly resilient application in the Amazon Web Services environment. Our brief is quite clear on the key components: elasticity and scalability. In terms of being elastic, we need to be able to deal with any type of outage or change in our environment so that we can continue to deliver a good experience for our end users. The business has already made clear that keeping our application close to the end user is a key priority, so we need to be ready to expand into multiple regions. Choosing your region is an important decision that should be based on factors relevant to your AWS usage and business requirements.

To start with a design-for-failure approach, we should never assume services will be readily available. While it's extremely unlikely, it is always possible that an Availability Zone, or even a region, becomes overwhelmed or unavailable. A possible scenario to consider is an inability to start new instances or launch new services, so we will need to ensure our applications are not dependent on a single Availability Zone.

Our high availability design goal for this project is to achieve five nines (99.999%) of availability. That is roughly five and a half minutes of downtime per year, and it is considered the gold standard of high availability for applications. In order for us to hit our five-nines goal, we need to design a self-healing architecture. We need to build fault-tolerant infrastructure, eliminate all single points of failure, and provide graceful degradation of services if there are any outages. This can be accomplished with a combination of AWS service offerings. First, though, we have some challenges we need to identify before we can start.
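The five-nines figure is easy to sanity-check with a little arithmetic, and the same arithmetic shows why redundancy is the path to get there. The sketch below is not from the course; it simply computes the allowed yearly downtime for a given availability, and the combined availability of independent redundant components (real-world failures are not always independent, so treat this as an idealized model).

```python
# Back-of-the-envelope check of the "five nines" target discussed above.
MINUTES_PER_YEAR = 365.25 * 24 * 60  # 525,960 minutes

def downtime_minutes(availability):
    """Allowed downtime per year for a given availability fraction."""
    return (1 - availability) * MINUTES_PER_YEAR

def parallel_availability(a, n):
    """Availability of n independent redundant components, each with availability a."""
    return 1 - (1 - a) ** n

# Five nines allows only a few minutes of downtime per year.
print(round(downtime_minutes(0.99999), 2))          # 5.26

# Two independent zones at "three nines" each already combine to six nines,
# which is why eliminating single points of failure matters so much.
print(round(parallel_availability(0.999, 2), 6))    # 0.999999
```

The "five and a half minutes" quoted in the lecture is the commonly cited round number; the exact figure is closer to 5.26 minutes per year.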
These challenges center around moving our high availability design to multiple regions. When launched, services are scoped to the region they are launched within. What may be incredibly simple to configure and use within a single region can quickly turn into an administrative headache when spanning multiple regions, and this will most likely force changes to the hosted application we propose. Latency and data synchronization pose their own challenges when data has to be shared across regions and inter-region transfer happens over the internet. An application's data access routines therefore have to be designed with an intelligent read-write methodology in mind. There's no single right way to do this; it depends on the complexity and goals of the individual application, and our application for this project will offer just one possible solution to this dilemma.

Let's take a look at the application. Our application will be the sample from a great book called Rails Tutorial by Michael Hartl. It is a simple Twitter-like application built in Ruby on Rails 4. Without tweaking the application, we will leverage the built-in power of AWS to create a highly available system.

Our architecture will look like this. The architecture diagram shows a fairly standard high availability design for web applications. Users direct their browsers to our URL, and CloudFront delivers the user a cached version of the object requested. If no cached copy exists, the request is forwarded to the Elastic Load Balancer in our US-EAST-1 region subnets. If US-EAST-1 is unavailable, CloudFront falls back to the EU-WEST-1 region. Regardless of the region, the Elastic Load Balancer directs traffic to one of the Availability Zones, where our web EC2 instance builds the response, using Amazon RDS if the request involves a database store. As the response works its way back out, CloudFront caches it for the next request to the same URL.
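The primary/secondary regional fallback described here can be expressed directly in CloudFront today via origin groups (a feature added after many courses of this vintage were recorded). The fragment below sketches the shape of that configuration as it would appear inside a boto3 `DistributionConfig`; the origin IDs are hypothetical placeholders standing in for the load balancers in the two regions, not values from the course.

```python
# Sketch of a CloudFront origin group implementing the failover described above:
# CloudFront tries the primary (US-EAST-1) origin first, and retries the request
# against the secondary (EU-WEST-1) origin when it sees a failure status code.
# All IDs below are illustrative placeholders.
origin_group = {
    "Id": "multi-region-failover",
    "FailoverCriteria": {
        # Status codes from the primary origin that trigger fallback.
        "StatusCodes": {"Quantity": 3, "Items": [500, 502, 503]},
    },
    "Members": {
        "Quantity": 2,
        "Items": [
            {"OriginId": "elb-us-east-1"},  # primary: US-EAST-1 load balancer
            {"OriginId": "elb-eu-west-1"},  # secondary: EU-WEST-1 load balancer
        ],
    },
}

# The primary origin is simply the first member of the group.
print(origin_group["Members"]["Items"][0]["OriginId"])  # elb-us-east-1
```

In a full distribution config, this fragment would sit under `OriginGroups`, with the group's `Id` used as the `TargetOriginId` of a cache behavior; the point here is just to show that the diagram's "fall back to EU-WEST-1" arrow maps onto a concrete, declarative setting.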
As we progress, we'll call back to this diagram as needed to show where we are targeting. A few notes before we start building our solution. We are not going to cover security best practices in this group of lectures; however, we have not skipped security in building our environment, we're just not showing all of the steps that would be necessary for a deployment of this nature. One thing to remember is that the security of your account, and of the applications running within it, is very important and should not be taken lightly.

We are selecting the low-cost, low-configuration options today. In a true production environment, we would not select a t1.micro instance for our RDS instance, but our focus here is on high availability. We have pre-built and configured our AMI to save a bit of time. This is just one of many possible high availability scenarios; AWS has an architecture blog that shows different scenarios for different use cases, and you should have a look at that before you sit your exam. The concepts we go over in this series are the same regardless of the design. Everything we are doing is from the AWS console, although you can also do this from the AWS Command Line Interface. Lastly, we could use Elastic Beanstalk or AWS OpsWorks to get us most of the way to a highly available solution. Okay, let's get started building our solution.
About the Author
Andrew is an AWS certified professional who is passionate about helping others learn how to use and gain benefit from AWS technologies. Andrew has worked for AWS and for AWS technology partners Ooyala and Adobe. His favorite Amazon leadership principle is "Customer Obsession", as everything AWS starts with the customer. Outside of work, his passions are cycling and surfing, and having a laugh about the lessons learnt while trying to launch two daughters and a few start-ups.