Architecting on AWS: How to Use Auto Scaling to Achieve Elastic Computing

Our last post in this series provided an overview of our example architecture on AWS. In this post, we'll go into more detail by focusing on elasticity with Amazon EC2 (Elastic Compute Cloud), and in particular we'll see how to use Auto Scaling to make your compute infrastructure elastic and highly available.

But what is this elasticity that people keep going on about? According to Wikipedia, elasticity is “the degree to which a system is able to adapt to workload changes by provisioning and de-provisioning resources in an autonomic manner, such that at each point in time the available resources match the current demand as closely as possible.”

This is different from scalability, or, if you like, a specialization of it. Scalability provides the ability to increase (or decrease) the number of resources, either by scaling up (more powerful instances) or scaling out (additional instances), and it is usually achieved through manual intervention. Elasticity does the same, but in an autonomic manner, independent of human interaction.
But what does that mean for EC2? EC2 instances are sometimes thought of simply as virtual machines hosted in the cloud. That view, however, ignores the auxiliary services that come as part of EC2, and it overlooks one key enabler of elasticity as defined above: Auto Scaling.

Auto Scaling is the ‘magic’ ingredient that allows a system hosted on EC2 to dynamically adapt to changes in demand. But how does it actually work?

How Auto Scaling works

Auto Scaling has two components: Launch Configurations and Auto Scaling Groups.

  • Launch Configurations hold the instructions for the creation of new instances. They describe what type of instance Auto Scaling needs to launch (e.g. t2.medium, m3.large), which Amazon Machine Image (AMI) the new instance will be based on, which roles or storage will be associated with the instance, and so on.
  • Auto Scaling Groups, on the other hand, manage the scaling rules and logic, which are defined in policies. These can be based on a schedule or on CloudWatch metrics. The CloudWatch service lets you monitor all resources and applications you have deployed on AWS and define alarms on their metrics, which the Auto Scaling policies subscribe to. Using metrics you can, for example, implement rules that elastically scale your environment based on the performance of your deployed instances or on network traffic volumes (see the sketch below).
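
To make this concrete, here is a minimal sketch using the AWS SDK for Python (boto3). All names, IDs, and thresholds (the AMI, security group, subnets, the 70% CPU trigger) are illustrative assumptions rather than values from this article:

import boto3

autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

# 1. Launch Configuration: the recipe for every new instance.
autoscaling.create_launch_configuration(
    LaunchConfigurationName="web-lc-v1",
    ImageId="ami-0123456789abcdef0",            # hypothetical AMI
    InstanceType="t2.medium",
    SecurityGroups=["sg-0123456789abcdef0"],    # hypothetical security group
)

# 2. Auto Scaling Group: where the scaling logic lives.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchConfigurationName="web-lc-v1",
    MinSize=2,
    MaxSize=10,
    DesiredCapacity=2,
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",  # hypothetical subnets
)

# 3. Scaling policy: add one instance whenever the alarm below fires.
policy = autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="scale-out-on-cpu",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=1,
    Cooldown=300,
)

# 4. CloudWatch alarm: trigger the policy when average CPU stays above 70%.
cloudwatch.put_metric_alarm(
    AlarmName="web-asg-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "web-asg"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=70.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[policy["PolicyARN"]],
)

In practice you would pair this with a matching scale-in policy and a low-CPU alarm so the group also shrinks when demand drops.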

This doesn’t have to be the limit, though. Since CloudWatch collects metrics from every resource deployed within your environment, you can choose a variety of different sources as inputs to your scaling events. Assume you have deployed an application on EC2 that processes requests from a queue such as the Simple Queue Service (SQS). With CloudWatch you can monitor the length of the queue and scale your computing environment in or out based on the number of items in the queue at the time. And since CloudWatch also supports the creation of custom metrics through its API, you can use virtually any of your application’s logging output as a trigger for scaling events.
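
As a rough sketch of what that could look like with boto3: the alarm below scales out on queue backlog, and the final call publishes a custom application metric. The queue name, threshold, policy ARN placeholder, and metric name are all hypothetical:

import boto3

cloudwatch = boto3.client("cloudwatch")

# Scale out when the backlog in the queue grows too large; AlarmActions
# points at the ARN returned by put_scaling_policy (placeholder here).
cloudwatch.put_metric_alarm(
    AlarmName="work-queue-backlog",
    Namespace="AWS/SQS",
    MetricName="ApproximateNumberOfMessagesVisible",
    Dimensions=[{"Name": "QueueName", "Value": "work-queue"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=1,
    Threshold=500.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["<scale-out-policy-arn>"],
)

# Custom metrics work the same way: publish any application measurement
# and alarm on it just like a built-in metric.
cloudwatch.put_metric_data(
    Namespace="MyApplication",
    MetricData=[{"MetricName": "ActiveSessions", "Value": 42.0, "Unit": "Count"}],
)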

How to use Auto Scaling to achieve elastic computing

Beyond CloudWatch, you can also use the Auto Scaling APIs to amend your scaling configuration, trigger scaling events, or set the health status of an instance. Setting the health status yourself lets you go beyond the internal health check performed by Auto Scaling, which essentially only confirms whether an instance is still running. As part of your application logic, you could mark an instance as unhealthy when certain error conditions occur. Once it is marked unhealthy, Auto Scaling will take the instance out of service and spin up a fresh instance in its place.
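
For illustration, a minimal boto3 sketch of that last idea follows. It assumes the code runs on the instance itself and that the (pre-IMDSv2) instance metadata endpoint is reachable; the error-detection logic that would normally surround it is omitted:

import boto3
import urllib.request

# Look up the ID of the instance this code is running on via the EC2
# instance metadata service (simple IMDSv1-style request, for brevity).
instance_id = urllib.request.urlopen(
    "http://169.254.169.254/latest/meta-data/instance-id", timeout=2
).read().decode()

autoscaling = boto3.client("autoscaling")

# Tell Auto Scaling this instance is unhealthy; it will be taken out of
# service and replaced with a fresh instance from the launch configuration.
autoscaling.set_instance_health(
    InstanceId=instance_id,
    HealthStatus="Unhealthy",
    ShouldRespectGracePeriod=False,
)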

Auto Scaling also has uses outside of traditional elasticity needs. It is commonly used in smaller environments to ensure that no fewer than a certain number of instances are running at any point in time. So if you are just starting out with that flash new application that no one knows about yet, or you are deploying an internal-facing business application, it is still good practice to make those instances part of an Auto Scaling group. This brings a number of advantages with it.

Firstly, and most importantly, you force yourself to design your application in a way that lends itself to the paradigm of disposable infrastructure. That means ensuring that no state or data is ever stored solely on the instance.

Secondly, you ensure that the launch of a new instance is fully automated. Even if you are not yet using configuration management tools like Chef, Puppet, or PowerShell DSC, you will set yourself on the right path by either maintaining a ‘master’ AMI or using the default AMIs in combination with bootstrapping through the instance’s user data (a sketch of the latter follows below).
Finally, with the first two strategies implemented, you are ready to scale your environment should your idea become the hype of the month.
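
As a rough illustration of the bootstrapping route mentioned above, the launch configuration below injects a user data script into a stock AMI. The script contents, bucket, AMI ID, and IAM instance profile are hypothetical:

import boto3

# Hypothetical bootstrap script: install a web server and pull the
# application from S3 on first boot.
user_data = """#!/bin/bash
yum update -y
yum install -y httpd
aws s3 cp s3://my-app-bucket/app.tar.gz /tmp/app.tar.gz
tar -xzf /tmp/app.tar.gz -C /var/www/html
service httpd start
chkconfig httpd on
"""

autoscaling = boto3.client("autoscaling")

autoscaling.create_launch_configuration(
    LaunchConfigurationName="web-lc-bootstrap-v1",
    ImageId="ami-0123456789abcdef0",       # a stock Amazon Linux AMI (hypothetical ID)
    InstanceType="t2.medium",
    IamInstanceProfile="web-app-profile",  # hypothetical profile allowing the S3 download
    UserData=user_data,                    # boto3 base64-encodes this for you
)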

Summary

In this post we have provided a variety of examples to help you understand elasticity and scalability in relation to EC2, along with a summary of the services involved.

When scaling, and particularly when scaling elastically, you need to be conscious of the other services in your environment that form part of your solution. For example, you may need to consider whether your relational database can keep up with the increased demand from the additional web or application servers. If you are using the Elastic Load Balancer (ELB) to distribute load between your instances, be aware that the ELB is itself designed as an elastic service built on EC2. For huge spikes in demand, unfortunately, it doesn’t always provide the elasticity you would wish for. Just as you ‘warm up’ your own environment by spinning up new instances in anticipation of an expected increase in demand (e.g. the launch of a marketing campaign), it is best to also contact AWS Support in advance of the expected spike to ensure that the ELB is ready to respond to the demand immediately.

You can learn more about how to design a scalable and elastic infrastructure on AWS using Cloud Academy’s AWS training library. In particular, you might benefit from watching our course How to Architect with a Design for Failure Approach, where Auto Scaling is used to help achieve high availability and fault tolerance in a common architecture.


Written by

Christian Petters

As a Solutions Architect, Christian helps organisations find the most appropriate solutions to their unique business problems. He is passionate about the opportunities provided by modern cloud services and covers AWS and Microsoft Azure topics, with a particular focus on Microsoft technologies.

