
Deployment Design Optimization

1h 10m

AWS Solutions Architect Associate Level Certification Course - Part 3 of 3

Having completed parts one and two of our AWS certification series, you should now be familiar with basic AWS services and some of the workings of AWS networking. This final course in our three-part certification exam preparation series focuses on data management and application and services deployment.

Who should take this course?

This is an advanced course that's aimed at people who already have some experience with AWS and a familiarity with the general principles of architecting cloud solutions.

Where will you go from here?

The self-testing quizzes of the AWS Solutions Architect Associate Level prep material are a great follow-up to this series, and a pretty good indicator of your readiness to take the AWS exam. Also, since you're studying for the AWS certification, check out our AWS Certifications Study Guide on our blog.


If you deploy resources on AWS, or anywhere else for that matter, then you have an interest in making those resources available to your users without interruption or data loss. Carefully planning your deployment's design means making sure that when something fails (and eventually something certainly will fail), your system is robust enough to absorb the loss, continue delivering its services, and eventually recover fully. A system built with this kind of disaster-ready design is known as highly available and fault tolerant. Through the first 25 videos of this certification series we've touched individually on many of the tools and approaches you'll need to create such a robust deployment, but it's certainly worth our while to put them all together for one bird's-eye view of the whole subject.

Let's imagine a user clicking on the web address we've associated with our application. This first contact with our system might come through the DNS records and routing rules of our Route 53 configuration. Route 53 can have a big influence on how our users find what they're looking for. As you'll remember, using Route 53 you can create a routing policy with failover record sets that, based on health check results, can reroute users between Elastic IP addresses and servers in separate regions, which is significant if you recall that Elastic Load Balancing only works within a single AWS region.
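As a concrete sketch, the failover routing just described comes down to a pair of record sets, one PRIMARY and one SECONDARY, with the primary tied to a health check. The helper below (hypothetical function names, illustrative IPs and health check ID) builds the ChangeBatch structure in the shape Route 53's change_resource_record_sets API expects; in practice you would hand it to a boto3 Route 53 client:

```python
def failover_record(name, ip, role, health_check_id=None, ttl=60):
    """Build one failover A record set. role is 'PRIMARY' or 'SECONDARY'."""
    record = {
        "Name": name,
        "Type": "A",
        "SetIdentifier": f"{name}{role.lower()}",  # must be unique per record
        "Failover": role,
        "TTL": ttl,
        "ResourceRecords": [{"Value": ip}],
    }
    if health_check_id:
        # Route 53 fails over when this health check reports unhealthy
        record["HealthCheckId"] = health_check_id
    return {"Action": "UPSERT", "ResourceRecordSet": record}


def failover_change_batch(name, primary_ip, secondary_ip, health_check_id):
    """ChangeBatch routing traffic to primary_ip, failing over to secondary_ip."""
    return {
        "Comment": "Cross-region failover for " + name,
        "Changes": [
            failover_record(name, primary_ip, "PRIMARY", health_check_id),
            failover_record(name, secondary_ip, "SECONDARY"),
        ],
    }


# Example addresses and health check ID are placeholders.
batch = failover_change_batch(
    "app.example.com.", "203.0.113.10", "198.51.100.20", "hc-primary-region"
)
```

With boto3 installed and credentials configured, you would submit this as `route53.change_resource_record_sets(HostedZoneId=..., ChangeBatch=batch)`.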

Once past Route 53, and depending on the kind of content he's after, a user might never actually need to access a web server or web app instance, but instead might be served cached pages and data from a distribution of AWS's content delivery network, CloudFront. Besides removing much of the operating load and cost from your EC2 servers, CloudFront distributions, because they're hosted on S3, can have a much higher reliability rate. If the content your user wants isn't static and therefore can't be delivered by CloudFront, the request will be filtered through an Elastic Load Balancer that, having performed its own health checks of your server resources, will distribute requests among the highest-performing servers you've got available, usually within your Virtual Private Cloud.
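Those load balancer health checks are just a handful of settings. The sketch below (hypothetical helper; the values are illustrative defaults, not recommendations) builds a HealthCheck structure in the shape the classic ELB configure_health_check API expects:

```python
def elb_health_check(target="HTTP:80/health", interval=30, timeout=5,
                     unhealthy=2, healthy=3):
    """Health check settings shaped for the classic ELB API.

    target is protocol:port/path; the balancer probes it every
    `interval` seconds and marks an instance down after `unhealthy`
    consecutive failures.
    """
    if timeout >= interval:
        # A probe must be allowed to finish before the next one starts
        raise ValueError("timeout must be shorter than the check interval")
    return {
        "Target": target,
        "Interval": interval,
        "Timeout": timeout,
        "UnhealthyThreshold": unhealthy,
        "HealthyThreshold": healthy,
    }


check = elb_health_check()
```

You would apply it with something like `elb.configure_health_check(LoadBalancerName=..., HealthCheck=check)` using a boto3 ELB client.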

In many cases you will have launched multiple instances from prebuilt AMIs, configured as application servers, in multiple Availability Zones. That way, even if one server goes down, the others can take its place. Even if one Availability Zone somehow loses connectivity, your instances in another zone will still be accessible. Naturally, a server meant to handle only a percentage of site traffic will probably not remain fully functional if it is suddenly inundated with thousands of extra requests redirected from your failed servers, so you will have created the instances in each Availability Zone as part of an Auto Scaling group. This way, should the load on a server group approach preset limits, the launch of new instances, built on the original AMI and identically provisioned, will be triggered to take up the extra load.

That should successfully anticipate failures among your server instances. But what about your MySQL or Aurora RDS data services, or whatever other database platform you choose? Database servers are prone to failure no less than any other computing or network service. You might remember that the RDS setup wizard provided a simple drop-down menu option for selecting a Multi-AZ deployment. Going with Multi-AZ means that RDS will create a synchronized backup image of your database. If there should be a failure that affects your primary database, RDS will automatically and seamlessly redirect all application server requests to the mirrored backup; no users should ever notice the switch. So constant and regular health checks are being performed on your services by at least three services: Route 53, Elastic Load Balancing, and RDS.
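To make the two provisioning decisions above concrete, here is a sketch (hypothetical helper names; resource names, subnet IDs, and the instance class are placeholders) of the parameter sets you would pass to the Auto Scaling create_auto_scaling_group and RDS create_db_instance APIs. The Auto Scaling group spans Availability Zones through its subnets, and the RDS parameters enable the Multi-AZ standby:

```python
def auto_scaling_group_params(name, launch_config, subnet_ids,
                              min_size=2, max_size=6):
    """Auto Scaling group spread across the AZs implied by the subnets."""
    return {
        "AutoScalingGroupName": name,
        "LaunchConfigurationName": launch_config,  # wraps the prebuilt AMI
        "MinSize": min_size,
        "MaxSize": max_size,
        "DesiredCapacity": min_size,
        # One subnet per Availability Zone, comma-separated
        "VPCZoneIdentifier": ",".join(subnet_ids),
        "HealthCheckType": "ELB",       # replace instances the ELB marks down
        "HealthCheckGracePeriod": 300,  # seconds to let a new instance boot
    }


def rds_multi_az_params(identifier, engine="mysql"):
    """Minimal RDS instance parameters with the Multi-AZ standby enabled."""
    return {
        "DBInstanceIdentifier": identifier,
        "Engine": engine,
        "DBInstanceClass": "db.t3.medium",  # placeholder class
        "MultiAZ": True,  # synchronous standby in a second AZ
    }


asg = auto_scaling_group_params("web-asg", "web-lc",
                                ["subnet-aaaa1111", "subnet-bbbb2222"])
db = rds_multi_az_params("app-db")
```

Because the group's minimum size is 2 and its subnets sit in different zones, a zone outage or instance failure triggers a replacement launch from the same AMI automatically.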

Instantly accessible and geographically dispersed backups of your data are, to one degree or another, always available through CloudFront (which is hosted on S3), redundant EC2 instances, and RDS Multi-AZ, and images of your actual server instances are safely stored as AMIs.

You're in pretty good shape. Besides all that, however, we mustn't ignore another key component of a highly available and fault-tolerant deployment: instance data.

By which I mean the Elastic Block Store volumes that you might associate with your EC2 instances. There might be all kinds of reasons for keeping data on EBS volumes rather than, say, on RDS or as part of the root instance storage that comes with EC2 instances themselves. In fact, you really should be especially careful to limit the kind of data you store on default root EC2 volumes, as, relative to most other storage devices, these volumes are particularly volatile. But since even EBS volumes can fail, AWS provides a simple mechanism for taking and storing regular snapshots of whatever your volumes contain on Amazon's S3. In theory, if you're sufficiently worried about it, you can write scripts to monitor EBS volume health and, if there is trouble, restore and associate snapshots with their production instances.
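The snapshot-rotation half of such a script is mostly date arithmetic. The sketch below (hypothetical function; in a real script the snapshot list would come from the EC2 describe_snapshots API and the deletions would go to delete_snapshot) decides which snapshots have aged out of the retention window:

```python
from datetime import datetime, timedelta, timezone


def snapshots_to_delete(snapshots, keep_days=7, now=None):
    """Return the IDs of snapshots older than the retention window.

    snapshots is a list of (snapshot_id, start_time) pairs, with
    timezone-aware start times, as describe_snapshots would report.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=keep_days)
    return [snap_id for snap_id, started in snapshots if started < cutoff]


# Illustrative data: one snapshot past the 7-day window, one within it.
now = datetime(2024, 1, 10, tzinfo=timezone.utc)
snaps = [
    ("snap-0aaa", datetime(2024, 1, 1, tzinfo=timezone.utc)),
    ("snap-0bbb", datetime(2024, 1, 9, tzinfo=timezone.utc)),
]
stale = snapshots_to_delete(snaps, keep_days=7, now=now)
```

A cron job pairing this with a fresh create_snapshot call per attached volume gives you the regular, pruned snapshot cycle described above.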

About the Author
David Clinton
Linux SysAdmin
Learning Paths

David taught high school for twenty years, worked as a Linux system administrator for five years, and has been writing since he could hold a crayon between his fingers. His childhood bedroom wall has since been repainted.

Having worked directly with all kinds of technology, David derives great pleasure from completing projects that draw on as many tools from his toolkit as possible.

Besides being a Linux system administrator with a strong focus on virtualization and security tools, David writes technical documentation and user guides, and creates technology training videos.

His favorite technology tool is the one that should be just about ready for release tomorrow. Or Thursday.