Throughout our series of posts, we have already seen a variety of architectural patterns that allow us to design scalable and resilient solutions using Amazon Web Services (AWS) resources. However, even the best design can have flaws and may show signs of bottlenecks over time or as demand for your application increases. This could be caused by an influx of new customers using your application, or by a growing amount of data that needs indexing in your relational storage tier, to name only a couple of examples.
As the saying goes, the devil is in the details, and your service quality can degrade for a wide variety of reasons. While you may not be able to predict and detect every potential issue through load testing, you can use a number of architectural patterns to ensure that you continue to interact with your customers or users and therefore have a higher chance of keeping them satisfied.
No matter how well you plan your design, some dependencies or processes will inevitably lie beyond the control of the calling process. A typical response to this, the Circuit Breaker Pattern, was originally described by Michael Nygard. Many sources already discuss applying the pattern in the application development space. The same concept can also be implemented in the AWS infrastructure layer.
Your key ingredients for this are the Route 53 managed DNS service in combination with Route 53 health checks. Route 53 allows you to create primary and secondary DNS record sets for a given record. This is best explained with an example.
Primary DNS record set
Imagine your web site is hosted on a number of web servers that are load balanced using an AWS Elastic Load Balancer (ELB). So in Route 53 you would create an alias record set that points to the ELB endpoint.
We then set the routing policy to Failover with the record type set to Primary. This advises Route 53 to send traffic to the configured endpoint only while the associated resource status is healthy.
For this to work, you also need to create a Route 53 health check and associate it with the record set.
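Expressed with the AWS SDK for Python (boto3), the primary record set might be built as follows. This is a minimal sketch: the domain, ELB DNS name, hosted zone IDs, and health check ID are placeholder values, not taken from any real environment.

```python
# Build the change batch for the PRIMARY failover alias record.
# All identifiers below are placeholders -- substitute your own.
def primary_failover_change(domain, elb_dns, elb_zone_id, health_check_id):
    """Route 53 change batch: failover PRIMARY alias record for an ELB."""
    return {
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": domain,
                "Type": "A",
                "SetIdentifier": "primary",
                "Failover": "PRIMARY",            # serve only while healthy
                "HealthCheckId": health_check_id,  # the associated health check
                "AliasTarget": {
                    "HostedZoneId": elb_zone_id,   # hosted zone of the ELB
                    "DNSName": elb_dns,
                    "EvaluateTargetHealth": True,
                },
            },
        }]
    }

batch = primary_failover_change(
    "example.com.",
    "my-elb-123456.eu-west-1.elb.amazonaws.com.",
    "ZELBZONEPLACEHOLDER",
    "11111111-2222-3333-4444-555555555555",
)
# The dict could then be applied with:
#   boto3.client("route53").change_resource_record_sets(
#       HostedZoneId="Z_MY_ZONE", ChangeBatch=batch)
```

Building the payload as a plain dictionary keeps the sketch self-contained; in a real deployment the same structure is passed straight to `change_resource_record_sets`.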
In its most basic configuration, you would point the health check to the same target as the DNS entry. Most modern web applications, though, rely on a variety of service tiers. You may therefore want to consider deploying a custom health service, as mentioned in my earlier post on Auto Scaling. This way, the status of all sub-services contributing to the overall user experience can be included in your website's overall health calculation.
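A custom health endpoint of that kind can collapse the status of each sub-service into a single HTTP response; Route 53 treats anything other than a 2xx/3xx status as unhealthy. The sketch below assumes invented sub-service names purely for illustration:

```python
def overall_health(subservice_status):
    """Collapse sub-service checks into one HTTP status for Route 53.

    Route 53 health checks consider an endpoint healthy on a 2xx/3xx
    response, so we return 200 only when every contributing service is up.
    """
    if all(subservice_status.values()):
        return 200, "OK"
    failed = [name for name, ok in subservice_status.items() if not ok]
    return 503, "DEGRADED: " + ", ".join(sorted(failed))

# Example: the (hypothetical) warehouse tier is down, so the site
# reports itself unhealthy and Route 53 will fail over.
status, body = overall_health({"web": True, "orders": True, "warehouse": False})
```

In practice this function would sit behind a small HTTP handler that the Route 53 health check polls.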
Secondary DNS record set
Next, we need to configure the secondary record set with your failover target. Route 53 will respond with this target when the primary is considered unhealthy. Again, in its most basic form, this could simply be a public S3 bucket with a static web page, enabled for website hosting.
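The secondary record mirrors the primary, but with the failover type set to Secondary and the alias pointing at the S3 website endpoint. Note that the S3 website endpoint and its hosted zone ID vary by region; the values below are placeholders illustrating the shape of the request, not values from this post.

```python
# Build the change batch for the SECONDARY failover alias record
# pointing at an S3 static website endpoint (placeholder values).
def secondary_failover_change(domain, s3_website_dns, s3_website_zone_id):
    """Route 53 change batch: failover SECONDARY alias record for S3."""
    return {
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": domain,
                "Type": "A",
                "SetIdentifier": "secondary",
                "Failover": "SECONDARY",  # served only when the primary fails
                "AliasTarget": {
                    "HostedZoneId": s3_website_zone_id,  # region-specific
                    "DNSName": s3_website_dns,
                    # No health check on the failover target itself
                    "EvaluateTargetHealth": False,
                },
            },
        }]
    }

batch = secondary_failover_change(
    "example.com.", "s3-website-us-east-1.amazonaws.com.", "ZS3WEBSITEZONE")
```

Unlike the primary record, no `HealthCheckId` is attached: the secondary is the target of last resort.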
When setting up the static site, you need to ensure that the bucket has the same name as your domain. Once you have finished configuring the static website, you can jump back to Route 53 and create the secondary DNS alias record for your domain, this time selecting the S3 bucket as the target.
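Enabling website hosting on the bucket can also be scripted. A minimal sketch, assuming a bucket named after the domain and index/error pages that you would upload yourself:

```python
# Website configuration for the failover bucket. The bucket name must
# match the domain (e.g. "example.com"); document names are examples.
website_config = {
    "IndexDocument": {"Suffix": "index.html"},
    "ErrorDocument": {"Key": "error.html"},
}
# With boto3 this would be applied as:
#   boto3.client("s3").put_bucket_website(
#       Bucket="example.com", WebsiteConfiguration=website_config)
```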
We face many different needs in our daily work, each of which demands its own unique solution. For this reason, treat this post as nothing more than an appetizer.
AWS’s Route 53 allows for far more complex scenarios: cascading DNS configurations let you combine regional, weighted, and failover records to cater for a wide variety of use cases.
Your solution can also be more sophisticated than a basic static web page that is hosted on S3: you could also fail over to a secondary data center in a different region or a secondary environment that may provide a custom set of features to your site’s users.
This again may be controlled by the logic in your health reporting service in combination with your application logic. You may, for example, still be able to take orders when the warehouse service is unavailable, even though you cannot display real-time availability information. In the case of an overload, you may instead want to serve an alternative website that asks your customers to bear with you. The combination of intelligent application and infrastructure design can also ensure that existing customers with an active transaction (e.g. a full shopping basket) can continue to check out, while new visitors to the site are asked for a bit of patience.
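That kind of graceful degradation could be sketched as a small routing decision in the application itself. The service names and rules below are illustrative assumptions, not part of the original design:

```python
def handle_request(has_active_basket, warehouse_up, site_overloaded):
    """Decide which experience to serve under partial failure.

    Existing customers mid-transaction keep checking out; new visitors
    are deferred during overloads; a warehouse outage degrades the site
    rather than taking it down.
    """
    if site_overloaded and not has_active_basket:
        return "please-wait-page"         # new visitors asked for patience
    if not warehouse_up:
        return "checkout-without-stock"   # orders taken, no live stock info
    return "full-site"
```

The same signals that feed the health reporting service can drive this logic, so the infrastructure failover and the in-application degradation stay consistent.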
As mentioned before, every solution is different. Therefore it is important for you to understand the capabilities offered by modern Cloud offerings. This way you can consider solutions that are beyond the limitations of your traditional infrastructure services. Start exploring our rich training content on CloudAcademy to get ahead of the game and learn more about the features and services provided by AWS.