The gold standard for high availability is five nines: 99.999% uptime, which allows for only about five and a quarter minutes of downtime over an entire year. Achieving this kind of reliability requires advanced knowledge of the many tools AWS provides for building a robust infrastructure.
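The five-nines budget is easy to verify with a quick back-of-the-envelope calculation:

```python
# "Five nines" availability: 99.999% uptime leaves roughly five and a
# quarter minutes of downtime per year.
minutes_per_year = 365 * 24 * 60              # 525,600 minutes in a year
downtime_budget = minutes_per_year * (1 - 0.99999)
print(round(downtime_budget, 2))              # about 5.26 minutes
```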
In this course, expert Cloud Architect Kevin Felichko will show one of the many possible approaches to building a highly available application, designing the whole infrastructure around the Design for Failure principle. You'll learn how to use Auto Scaling, load balancing, and VPC to run a standard Ruby on Rails application on an EC2 instance, with data stored in an RDS-backed MySQL database and assets stored on S3. Kevin will also touch on advanced topics such as using CloudFront for content delivery and distributing an application across multiple AWS regions.
In this lesson, we are going to add CloudFront in front of our Elastic Load Balancer. CloudFront is a content delivery network with edge locations around the world. It can serve both static and dynamic content; our focus will be on the latter. Using CloudFront for dynamic content can be tricky and requires intimate knowledge of how an application will be used. However, there are many benefits to adding a caching layer. Most relevant to our design goals, the caching layer can keep serving content to end users during brief service disruptions. An example comes from our last lesson, where our RDS instance failed over: there was a 34-second disruption between the time the failover began and the time it ended.
Assuming our cache is set up properly, our users would retain read-only access to our content even during such a disruption. Our application's CloudFront design is simple: one origin with two behaviors. One behavior handles our read-only content; the second is the pass-through for POSTs, DELETEs, and the other requests that build our content. The read-only behavior will be first in the precedence order. The pass-through will be last and serves as our default behavior, which means we need to create it first, as part of the initial creation of the distribution.
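The precedence order described above can be sketched as a small simulation: CloudFront checks each behavior's path pattern in order and falls back to the default when nothing matches. This is an illustrative model, not CloudFront's actual implementation; the `/users/*` pattern anticipates the read-only behavior we create later in this lesson.

```python
import fnmatch

def match_behavior(path, behaviors, default="pass-through"):
    """Return the first behavior whose path pattern matches the request path.

    Mirrors how CloudFront evaluates cache behaviors in precedence order,
    falling back to the default behavior when no pattern matches."""
    for pattern, name in behaviors:
        if fnmatch.fnmatch(path, pattern):
            return name
    return default

# Read-only behavior is first in precedence; everything else passes through.
behaviors = [("/users/*", "read-only")]

print(match_behavior("/users/42", behaviors))    # read-only
print(match_behavior("/microposts", behaviors))  # pass-through
```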
Let's begin by creating our CloudFront distribution. From the AWS console, click on the CloudFront link, followed by Create Distribution. We can choose between two options: web and RTMP. Web delivers standard HTTP and HTTPS content. RTMP delivers streaming content to end users over the Adobe Flash Media Server RTMP protocol, meaning users can start playing content before it has finished downloading.
Web delivery is all our application needs, so we can continue. For the origin domain name, we are presented with a list of our internal resources, or we can enter a custom source.
We will enter our custom source, ELB.cloud-e.co, which we will add to Route 53 later. This fills in the origin ID automatically, which is fine for us. The origin protocol policy can either use HTTP only or match the protocol of the viewer's request.
Because we do not serve HTTPS content, the first option is acceptable. There is no need to change the ports for HTTP and HTTPS.
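The origin settings chosen so far can be expressed programmatically. Below is a minimal sketch of the origin portion of a CloudFront distribution config in the shape boto3's `create_distribution` expects; the domain comes from the lesson, and the surrounding required fields of a full `DistributionConfig` are omitted for brevity.

```python
# Sketch of the custom origin from the console walkthrough above
# (boto3 DistributionConfig fragment; not a complete distribution config).
origin = {
    "Id": "elb.cloud-e.co",           # origin ID, auto-filled by the console
    "DomainName": "elb.cloud-e.co",   # custom source pointing at our ELB record
    "CustomOriginConfig": {
        "HTTPPort": 80,               # default ports left unchanged
        "HTTPSPort": 443,
        "OriginProtocolPolicy": "http-only",  # we serve no HTTPS content
    },
}
```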
The default cache behavior section does exactly what it sounds like: it sets the default cache behavior for our origin. The path pattern cannot be changed because this is the default behavior; it accepts all requests that fall through to it.
The viewer protocol policy specifies how to handle viewer requests. The HTTP and HTTPS option accepts all web requests. The Redirect HTTP to HTTPS option causes CloudFront to send back a 301 response telling the requester to redirect to an HTTPS URL. HTTPS Only rejects all HTTP requests. We will select the HTTP and HTTPS option. Allowed HTTP methods tells CloudFront which requests to accept; as previously mentioned, the default behavior will accept all HTTP methods. Forward headers determines whether the client's request headers become part of the cache key, with three options: none, whitelist, and all.
None ignores headers when caching objects, whitelist lets us specify which headers contribute to the cache, and all uses every header. None is the most efficient and will work in our situation.
Object caching can either take its cue from our application's cache headers, or we can specify the cache length as part of the behavior.
Our default behavior will use the application's cache headers. The forward cookies setting, much like forward headers, offers three options: none, whitelist, and all. We want to pass through all cookies, so we select the all option, since our pass-through content requires session cookies for authorization.
Forward query strings is not needed for our application, so we can leave it set to No. The last two settings in this section are not needed either. Under distribution settings, we choose a price class, which specifies which edge locations to use. We want to use all edge locations.
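Pulling the choices above together, the default pass-through behavior might look like this as a boto3-style fragment. This is a hedged sketch: field names follow the CloudFront API, but a real `DefaultCacheBehavior` requires additional fields not shown here.

```python
# Sketch of the default (pass-through) cache behavior from the settings above.
default_behavior = {
    "TargetOriginId": "elb.cloud-e.co",
    "ViewerProtocolPolicy": "allow-all",   # accept both HTTP and HTTPS
    "AllowedMethods": {                    # default behavior takes all methods
        "Quantity": 7,
        "Items": ["GET", "HEAD", "OPTIONS", "PUT", "POST", "PATCH", "DELETE"],
    },
    "ForwardedValues": {
        "Headers": {"Quantity": 0},        # forward no headers (most efficient)
        "Cookies": {"Forward": "all"},     # session cookies must pass through
        "QueryString": False,              # query strings not needed
    },
    "MinTTL": 0,                           # honor the application's cache headers
}
price_class = "PriceClass_All"             # use all edge locations
```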
The only other option we are concerned with in this section is alternate domain names. This will be the www subdomain, since we want CloudFront sitting in front of our Elastic Load Balancer. We will need to change our Route 53 settings to accommodate this change.
The remaining options are not relevant to our design, so we leave them unchanged. We can now create the distribution. This will take quite a while, as it has to propagate across all of the edge locations. While this is happening, we can modify our Route 53 settings. Take note of the CloudFront URL; we will need it in Route 53. Back at the AWS dashboard, we navigate to Route 53 management and head to our record sets.
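The Route 53 change we are about to make could also be scripted. Below is a hedged sketch of the change batch that points the www subdomain at the distribution; the `dXXXXXXXXXXXXX.cloudfront.net` value is a placeholder for the actual CloudFront URL noted above, and the hosted zone ID is omitted.

```python
# Sketch: CNAME record pointing www at the CloudFront distribution.
# The CloudFront domain below is a placeholder, not a real value.
change_batch = {
    "Changes": [{
        "Action": "CREATE",
        "ResourceRecordSet": {
            "Name": "www.cloud-e.co.",
            "Type": "CNAME",
            "TTL": 300,
            "ResourceRecords": [{"Value": "dXXXXXXXXXXXXX.cloudfront.net"}],
        },
    }]
}
# Applied with boto3's Route 53 client, e.g.:
# route53.change_resource_record_sets(HostedZoneId="...", ChangeBatch=change_batch)
```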
We need to change our primary and secondary www subdomain records to a new subdomain. In order to have our dynamic content work with our DNS failover, CloudFront has to sit in front of the failover subdomains. We will use ELB as the subdomain. After modifying both records, we create our new record set: the new www subdomain is a CNAME pointed at our CloudFront distribution.

If enough time has passed, we can head back to our CloudFront distribution and see that the status has changed from In Progress to Deployed. However, we are not done with our setup; we still need to create our read-only behavior. Under the behaviors tab, click the Create Behavior button. The path pattern we want to use is anything under the users folder. The origin is our load balancer. The viewer protocol policy stays at HTTP and HTTPS. We will accept only the GET and HEAD HTTP methods for this content, and we are not going to forward headers. For object caching, we customize it to cache all content for 60 seconds. GETs will not require cookies, so there is no need to pass them through; this also makes caching global to all users rather than tied to a specific user's session. The remaining options are acceptable, so we can save the behavior. Just like the initial creation, it takes some time for the settings to propagate.

Now that the changes have propagated, we can test the behavior. We bring up a user with existing microposts in one browser. In a second browser, we log in as that user and add a new micropost. Based on this first test, our content generation is working as expected. Back in the first browser, refreshing the page does not yet show the new micropost; refreshing again after one minute shows the new entry. Our site is now delivered through CloudFront and can handle minor service disruptions. In our next lesson, we will look at what it takes to expand this setup to multiple AWS regions.
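For completeness, the read-only behavior described above can be sketched the same way as the default behavior: GET and HEAD only, nothing forwarded, and a fixed 60-second TTL for everything under the users folder. As before, this is an illustrative boto3-style fragment, not a complete `CacheBehavior` object.

```python
# Sketch of the read-only cache behavior: first in precedence order,
# cached globally for 60 seconds, no per-session cookies.
read_only_behavior = {
    "PathPattern": "/users/*",             # anything under the users folder
    "TargetOriginId": "elb.cloud-e.co",
    "ViewerProtocolPolicy": "allow-all",   # HTTP and HTTPS
    "AllowedMethods": {"Quantity": 2, "Items": ["GET", "HEAD"]},
    "ForwardedValues": {
        "Headers": {"Quantity": 0},        # no forwarded headers
        "Cookies": {"Forward": "none"},    # cache is global, not per-session
        "QueryString": False,
    },
    "MinTTL": 60,                          # cache all content for 60 seconds
    "DefaultTTL": 60,
    "MaxTTL": 60,
}
```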
About the Author
Kevin is a seasoned technologist with 15+ years of experience, mostly in software development. Recently, he has led several migrations from traditional data centers to AWS, resulting in over $100K a year in savings. His new projects take advantage of cloud computing from the start, which enables a faster time to market.
He enjoys sharing his experience and knowledge with others while constantly learning new things. He has been building elegant, high-performing software across many industries since high school. He currently writes apps in Node.js and iOS apps in Objective-C, and designs complex architectures for AWS deployments.
Kevin currently serves as Chief Technology Officer for PropertyRoom.com, where he leads a small, agile team.