
Using CloudFront for Dynamic Content

Overview
Difficulty: Advanced
Duration: 52m
Students: 814

Description

Welcome to Domain Seven - Scalability and Elasticity - in the Solution Architect Professional for AWS learning path. In this group of lectures, we will walk through building a flexible, available, and highly resilient application in the Amazon Web Services environment.

Transcript

In this lesson, we are going to add CloudFront in front of our elastic load balancer. CloudFront is a content delivery network with edge locations located around the world. It can host both static and dynamic content; our focus will be on the latter use case. Using CloudFront for dynamic content can be tricky, and requires intimate knowledge of how an application will be used. However, there are many benefits to adding a caching layer. Related to our design goals, the caching layer will serve content to our end users during brief service disruptions. An example comes from our last lesson, where our RDS instance had a fail-over. There was a 34-second disruption between the time the fail-over began and the time it ended. Assuming our cache is set up properly, our users would have read-only access to our content even during such a disruption.

Our application's CloudFront design is simple. We will have one origin with two behaviors. One behavior will be for our read-only content, and the second will be the pass-through (posts, deletes, et cetera) required to build our content. The read-only behavior will be first in the precedence order. The pass-through will be the last behavior and serves as our default behavior. This means we need to create the pass-through behavior first, as part of the initial creation of the distribution.

So, let's begin by creating our CloudFront distribution. From the AWS console, click on the CloudFront link, followed by Create Distribution. We can choose between two options, web and RTMP. Web delivers standard HTTP and HTTPS content. RTMP delivers streaming content to end users through the Adobe Flash Media Server RTMP protocol, meaning users can start playing content before it has finished downloading. Web delivery is all our application needs, so we can continue.

For the Origin Domain Name, we are presented with a list of our internal resources, or we can enter our own custom source. We'll enter our custom source, elb.cloud-e.co, which we will add to Route 53 later. This fills in the Origin ID automatically, which is great. Origin Protocol Policy can either use HTTP Only or match the protocol of the viewer's request. Because we don't have HTTPS content for this demo, the first option is acceptable. There is no need for us to change the ports for HTTP and HTTPS.

The Default Cache Behavior Settings section does exactly what it sounds like: it sets the default cache behavior for our origin. The path pattern cannot be changed, as this is the default behavior; it will accept all requests that pass through it. The Viewer Protocol Policy specifies how to handle requests. The HTTP and HTTPS option accepts all web requests, while the Redirect HTTP to HTTPS option causes CloudFront to send back a 301 response code, telling the requester to redirect to an HTTPS URL. We will select the HTTP and HTTPS option. Allowed HTTP Methods tells CloudFront which requests to accept; as previously mentioned, the default behavior will accept all HTTP methods. Forward Headers determines whether the request headers from the client are used as part of the caching. For this we have three options: None, Whitelist, and All. None ignores headers when caching objects, Whitelist allows us to specify which headers contribute to the cache, and All uses every header. None is the most efficient, and will work in our situation. Object Caching can either take its cue from our application, or we can specify the cache length as part of the behavior. Our default behavior will be to use the application's cache headers.
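For reference, the same distribution and default (pass-through) behavior could also be created with the AWS SDK rather than the console. The following is a minimal boto3 sketch, not part of the original demo: the alias, comment, and TTL values are assumptions that mirror the walkthrough, and the cookie-forwarding choice (discussed next) is included because the API requires it.

```python
# Hypothetical sketch of the distribution described above, using boto3.
# Domain names, comment, and the alias are illustrative assumptions.
import time
import boto3

cloudfront = boto3.client("cloudfront")

response = cloudfront.create_distribution(
    DistributionConfig={
        "CallerReference": str(time.time()),          # any unique string
        "Comment": "Dynamic content distribution",
        "Enabled": True,
        "PriceClass": "PriceClass_All",               # use all edge locations
        "Aliases": {"Quantity": 1, "Items": ["www.cloud-e.co"]},  # alternate domain name (assumed)
        "Origins": {
            "Quantity": 1,
            "Items": [{
                "Id": "elb.cloud-e.co",               # origin ID (the console fills this in)
                "DomainName": "elb.cloud-e.co",       # custom origin in front of the ELB
                "CustomOriginConfig": {
                    "HTTPPort": 80,
                    "HTTPSPort": 443,
                    "OriginProtocolPolicy": "http-only",  # no HTTPS content in this demo
                },
            }],
        },
        # Default pass-through behavior: accept every method, ignore headers,
        # forward all cookies so session-based posts still work.
        "DefaultCacheBehavior": {
            "TargetOriginId": "elb.cloud-e.co",
            "ViewerProtocolPolicy": "allow-all",      # HTTP and HTTPS
            "AllowedMethods": {
                "Quantity": 7,
                "Items": ["GET", "HEAD", "OPTIONS", "PUT", "POST", "PATCH", "DELETE"],
            },
            "ForwardedValues": {
                "QueryString": False,                 # query strings not needed
                "Cookies": {"Forward": "all"},        # pass all cookies through
                "Headers": {"Quantity": 0},           # None: ignore headers when caching
            },
            "TrustedSigners": {"Enabled": False, "Quantity": 0},
            "MinTTL": 0,                              # honour the application's cache headers
        },
    }
)
print(response["Distribution"]["DomainName"])         # e.g. dxxxxxxxxxxxxx.cloudfront.net
```

The printed CloudFront domain name is the same value we note from the console below for use in Route 53.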
The Forward Cookies setting, much like Forward Headers, offers us three options: None, Whitelist, and All. We want to pass through all cookies, so we select the All option, since our pass-through content requires session cookies for authorization purposes. Forward Query Strings is not needed for our application, so we can leave it set to No. The last two settings in this section aren't needed.

Under Distribution Settings we choose a Price Class. Price Class specifies which edge locations to use; we want to use all edge locations. The only other option we are concerned about in this section is Alternate Domain Names. This will be the www sub-domain, since we want CloudFront sitting in front of our elastic load balancer. We'll need to change the Route 53 settings to accommodate this change. The remaining options will stay unchanged, since they are not relevant to our design. We can now create the distribution. This will take quite a while, as it has to propagate across all of the edge locations. While this is happening, we can modify our Route 53 settings. Take note of the CloudFront URL; we will need it in Route 53.

Back at the AWS dashboard, we navigate to the Route 53 management console and head to our record sets. We need to change our primary and secondary www sub-domain records to a new sub-domain. In order to have our dynamic content work with our DNS fail-over, CloudFront has to be in front of the fail-over sub-domains. We will use elb as the sub-domain. After modifying both records, we create our new record set. It will be our new www sub-domain, a CNAME pointing to our CloudFront distribution.

If enough time has passed, we can head back to our CloudFront distribution and see that the status has changed from In Progress to Deployed. However, we're not done with this step. We still need to create our read-only behavior. Under the Behaviors tab, click the Create Behavior button. The Path Pattern we want to use is anything under the users folder. Origin is our elastic load balancer. Viewer Protocol Policy will stay as HTTP and HTTPS. We will only accept the GET and HEAD HTTP methods for this content. We're not going to forward headers. For Object Caching we will customize it to cache all content for 60 seconds. GET requests will not require cookies, so there's no need to pass them through; this also makes the cache global to all users rather than tied to a specific user's session. The remaining options are acceptable, so we can save the behavior. Just like the initial creation, it takes some time for the settings to propagate.

Now that the changes have propagated, we can test the behavior. We will bring up a user with existing micro posts in one browser. In a second browser, we have logged in as that user and will add a new micro post. Based on the first test, our content generation is working as we expected. Back in the first browser, refreshing the page does not show the new micro post, but refreshing the page after one minute does show the new entry. Our site is now delivered through CloudFront and can handle minor service disruptions. In our next lesson, we will look at what it takes to expand the setup to multiple AWS regions.
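The read-only behavior can likewise be added outside the console. Below is a minimal boto3 sketch assuming the distribution created earlier already exists; the distribution ID, path pattern, and TTL values mirror the walkthrough but are illustrative assumptions, not the course's exact configuration.

```python
# Hypothetical sketch: adding the cached, read-only behavior to an existing
# distribution with boto3. IDs and the path pattern are assumptions.
import boto3

cloudfront = boto3.client("cloudfront")
dist_id = "E1EXAMPLE"                                  # ID of the distribution created earlier

# Fetch the current config and its ETag; both are needed for an update.
current = cloudfront.get_distribution_config(Id=dist_id)
config = current["DistributionConfig"]

read_only_behavior = {
    "PathPattern": "users/*",                          # anything under the users folder
    "TargetOriginId": "elb.cloud-e.co",
    "ViewerProtocolPolicy": "allow-all",               # HTTP and HTTPS
    "AllowedMethods": {"Quantity": 2, "Items": ["GET", "HEAD"]},
    "ForwardedValues": {
        "QueryString": False,
        "Cookies": {"Forward": "none"},                # GETs need no session cookie
        "Headers": {"Quantity": 0},                    # don't vary the cache on headers
    },
    "TrustedSigners": {"Enabled": False, "Quantity": 0},
    "MinTTL": 60,                                      # cache all content for 60 seconds
    "DefaultTTL": 60,
    "MaxTTL": 60,
}

# The distribution has no behaviors besides the default, so this list
# replacement simply adds our single read-only behavior.
config["CacheBehaviors"] = {"Quantity": 1, "Items": [read_only_behavior]}
cloudfront.update_distribution(
    Id=dist_id, DistributionConfig=config, IfMatch=current["ETag"]
)
```

Once the update deploys, a page under the users path fetched twice within a minute should come back from the cache, which matches the refresh test at the end of the lesson.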

About the Author

Students: 58,001
Courses: 94
Learning paths: 35

Andrew is an AWS certified professional who is passionate about helping others learn how to use and gain benefit from AWS technologies. Andrew has worked for AWS and for AWS technology partners Ooyala and Adobe. His favorite Amazon leadership principle is "Customer Obsession" as everything AWS starts with the customer. Passions outside work are cycling and surfing, and having a laugh about the lessons learnt trying to launch two daughters and a few start-ups.