This section of the Solution Architect Associate learning path introduces you to the core computing concepts and services relevant to the SAA-C03 exam. We start with an introduction to AWS compute services, explore the options available, and learn how to select and apply AWS compute services to meet specific requirements.
Learning Objectives
- Learn the fundamentals of AWS compute services such as EC2, ECS, EKS, and AWS Batch
- Understand how load balancing and auto scaling can be used to optimize your workloads
- Learn about the AWS serverless compute services and capabilities
Both of the dynamic scaling methods covered in the previous section are examples of reactive scaling. The scaling system monitors a metric that it tries to keep optimized in some way, whether by holding it within an acceptable range or at a particular target value. When an outside force adds load to the system (a wave of users checking their social feeds on their lunch break, for example), the system reacts to the increase by adding more instances.
With predictive scaling, the goal is for the system to get ahead of the load. A predictive system tries to scale out before an event happens so that capacity is always on target. The question, then, is how do we determine ahead of time that new load is coming?
Predictive scaling uses machine learning to understand your workloads. It learns when your traffic normally rises and falls throughout the day, and based on that knowledge it provisions new instances just before they are needed and begins removing them as traffic trails off.
This type of scaling is particularly well suited to cyclical traffic, where your users are always on at a certain time of day (normal business hours, or nights and weekends, for example). It is also a good fit for recurring on-and-off workloads, such as batch processing or analytics jobs that run on a regular schedule.
Because the service relies on machine learning, it needs time to learn the patterns in your traffic. The good news is that predictive scaling can build its scaling model from historical CloudWatch data: as long as at least 24 hours of historical data is available, you can start using predictive scaling.
The service can find patterns in your CloudWatch metrics up to 14 days in the past, and from this data it builds a forecast of your system's future needs. The forecast is updated daily based on the most recent CloudWatch metric data.
If this sounds interesting and you want to try it without risking your users' experience, you can run predictive scaling in forecast-only mode. This lets the system make predictions based on your data without taking any scaling actions.
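As a sketch of what this looks like in practice, a predictive scaling policy can be created with the AWS CLI. The Auto Scaling group name, policy name, and 50% CPU target below are placeholders, not values from this course:

```shell
# Sketch: create a predictive scaling policy in forecast-only mode.
# "my-asg", "cpu-predictive-policy", and the 50% target are placeholders.
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name my-asg \
  --policy-name cpu-predictive-policy \
  --policy-type PredictiveScaling \
  --predictive-scaling-configuration '{
    "MetricSpecifications": [
      {
        "TargetValue": 50.0,
        "PredefinedMetricPairSpecification": {
          "PredefinedMetricType": "ASGCPUUtilization"
        }
      }
    ],
    "Mode": "ForecastOnly"
  }'
```

With `"Mode": "ForecastOnly"`, forecasts are generated but no instances are launched or terminated.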
Having this option lets you see how well predictive scaling would perform if you were to let it take full control. You can compare its predictions against reality by reviewing the graph it creates for you in the EC2 Auto Scaling console.
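The same forecast data can also be pulled programmatically. A minimal sketch, again with placeholder group and policy names and an arbitrary time window:

```shell
# Sketch: retrieve the load and capacity forecast produced by a
# predictive scaling policy (names and times are placeholders).
aws autoscaling get-predictive-scaling-forecast \
  --auto-scaling-group-name my-asg \
  --policy-name cpu-predictive-policy \
  --start-time 2024-05-01T00:00:00Z \
  --end-time 2024-05-03T00:00:00Z
```

The response contains the forecasted load and the capacity the policy would provision for each interval, which you can compare against actual CloudWatch metrics.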
If you are happy with how the forecast looks, you can switch the policy into forecast and scale mode. Predictive scaling then takes over and provisions new instances based on the forecast model.
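One way to flip the switch, assuming the policy was created via the AWS CLI: issuing `put-scaling-policy` with the same policy name updates the policy in place, so only the `Mode` value needs to change (names and target are placeholders):

```shell
# Sketch: update the policy in place; "ForecastAndScale" lets it act
# on its forecasts instead of only recording them.
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name my-asg \
  --policy-name cpu-predictive-policy \
  --policy-type PredictiveScaling \
  --predictive-scaling-configuration '{
    "MetricSpecifications": [
      {
        "TargetValue": 50.0,
        "PredefinedMetricPairSpecification": {
          "PredefinedMetricType": "ASGCPUUtilization"
        }
      }
    ],
    "Mode": "ForecastAndScale"
  }'
```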
Something to keep in mind, however, is that when using the forecast, EC2 Auto Scaling adjusts the number of instances at the start of each hour. So it might not be as real-time as you were hoping for.
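If instances need warm-up time before the forecasted hour begins, the policy configuration accepts a `SchedulingBufferTime` (in seconds) that launches capacity ahead of each forecasted interval. A sketch with placeholder values:

```shell
# Sketch: launch instances 10 minutes (600 s) ahead of each forecasted
# hour so they finish initializing before the predicted load arrives.
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name my-asg \
  --policy-name cpu-predictive-policy \
  --policy-type PredictiveScaling \
  --predictive-scaling-configuration '{
    "MetricSpecifications": [
      {
        "TargetValue": 50.0,
        "PredefinedMetricPairSpecification": {
          "PredefinedMetricType": "ASGCPUUtilization"
        }
      }
    ],
    "Mode": "ForecastAndScale",
    "SchedulingBufferTime": 600
  }'
```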
You can use predictive scaling and dynamic scaling at the same time to more closely match what your users actually require. This takes a bit of tuning to get right, but it provides good coverage and should keep your users happy through high availability. It will also tend to cost a little more, so that trade-off is yours to make.
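In this combined setup, predictive scaling handles the cyclical baseline while a dynamic policy on the same group catches unforecasted spikes. A sketch of adding a target tracking policy alongside, with placeholder names and target:

```shell
# Sketch: a dynamic target tracking policy on the same Auto Scaling
# group reacts to real-time load that the forecast did not anticipate.
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name my-asg \
  --policy-name cpu-target-tracking \
  --policy-type TargetTrackingScaling \
  --target-tracking-configuration '{
    "PredefinedMetricSpecification": {
      "PredefinedMetricType": "ASGAverageCPUUtilization"
    },
    "TargetValue": 50.0
  }'
```

When both policies are attached, the group scales to whichever policy calls for the higher capacity, which is what makes the combination safe to run together.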
Stuart has been working within the IT industry for two decades covering a huge range of topic areas and technologies, from data center and network infrastructure design, to cloud architecture and implementation.
To date, Stuart has created 150+ courses relating to Cloud reaching over 180,000 students, mostly within the AWS category and with a heavy focus on security and compliance.
Stuart is a member of the AWS Community Builders Program for his contributions towards AWS.
He is AWS certified and accredited in addition to being a published author covering topics across the AWS landscape.
In January 2016 Stuart was awarded ‘Expert of the Year Award 2015’ from Experts Exchange for his knowledge share within cloud services to the community.
Stuart enjoys writing about cloud technologies and you will find many of his articles within our blog pages.