Introduction to Alibaba Auto Scaling
Alibaba Auto Scaling automatically creates and releases ECS instances based on predefined rules in order to scale services to match demand. Furthermore, it can configure Server Load Balancer (SLB) instances and Relational Database Service (RDS) whitelists without any manual intervention.
In this course, you will learn about the Alibaba Auto Scaling service and how it operates. You will learn about the core concepts of the service: scaling groups, scaling configurations, and scaling rules (manual and automatic). Each section of the course includes guided demonstrations on the Alibaba Cloud platform that you can follow along with, giving you the practical experience necessary to set up auto scaling in your own environment.
If you have any feedback relating to this course, feel free to contact us at email@example.com.
- Understand the core concepts and components of Alibaba Auto Scaling
- Learn how to create, modify, enable, disable, and delete a scaling group
- Learn how to create, modify, and delete the scaling configuration that provides the virtual servers in the scaling group
- Understand the different types of scaling rules that are available
- Learn how to use manual and automatic scaling operations
This course is intended for anyone who wants to learn how to set up auto scaling in their Alibaba Cloud environments.
To get the most out of this course, you should already have a basic knowledge of Alibaba Cloud or another cloud vendor.
Welcome to session seven, Auto Scaling: Automated Scaling Operations. In session six, we talked about manually triggering scaling rules to add or remove instances from a scaling group. Scaling rules are embedded within a scaling group, and each scaling group has its own scaling rules.
In this session, we're going to talk about automatically triggering tasks to add or remove ECS instances in a scaling group. There are two types of automated task: scheduled tasks and event-triggered tasks. Scheduled tasks sit outside of scaling groups, so if you were to delete a scaling group, any scheduled tasks would remain. This is because a single task can be attached to more than one scaling group. Tasks are created from the Auto Scaling main page.
Let's cover scheduled tasks first. You can create up to 20 scheduled tasks, each one with different input parameters. With scheduled scaling, you instruct Auto Scaling to perform a scaling operation at a time based on the execution time and the recurrence period. For example, scaling at 4:00 PM every day.
If the recurrence period is not set, the scheduled task is executed once at the specified date and time. If the recurrence period is set, the scheduled task is executed periodically, starting from the specified point in time. The recurrence can be daily, weekly, monthly, or defined by a cron expression, which supports scaling by minute, hour, day, week, and month.
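The one-off versus recurring behavior can be sketched in Python. This is a simplified model, not the Auto Scaling API: the function name, the recurrence labels, and the omission of monthly and cron recurrence are all illustrative assumptions.

```python
from datetime import datetime, timedelta
from typing import Optional

def next_run(task_time: datetime, recurrence: Optional[str],
             now: datetime) -> Optional[datetime]:
    """Next execution time for a scheduled task (simplified model).

    A one-off task (no recurrence) runs once at task_time; a recurring
    task rolls forward from task_time by its recurrence period.
    """
    if recurrence is None:
        # One-off task: never runs again if its time has already passed.
        return task_time if task_time > now else None
    step = {"daily": timedelta(days=1), "weekly": timedelta(weeks=1)}[recurrence]
    run = task_time
    while run <= now:           # advance to the next future occurrence
        run += step
    return run

# A task set for 16:00 daily, checked at 18:00 on 1 May:
print(next_run(datetime(2024, 5, 1, 16, 0), "daily", datetime(2024, 5, 1, 18, 0)))
# -> 2024-05-02 16:00:00
```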
For the scaling method, you can select an existing scaling rule, although only a simple scaling rule can be specified for a scheduled task, or you can directly configure the number of instances in the scaling group. You can specify the minimum or maximum number of instances in the group, and this updated configuration will override any previous configuration.
If a scheduled task fails to trigger the execution of its scaling rule, because the scaling group is currently executing another scaling activity or because the scaling group is disabled, the task is automatically retried.
This retry behavior is governed by the launch expiration time. Inside this time window, the scheduled task keeps retrying periodically. Once the launch expiration time has passed, the task is abandoned.
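As a rough model of the launch expiration window, the retry loop looks something like the following. This is a hypothetical sketch, not the service's implementation; `execute_rule` stands in for whatever triggers the scaling rule, and the retry interval is an assumption.

```python
import time

def run_scheduled_task(execute_rule, launch_expiration_secs,
                       retry_interval_secs=1.0):
    """Retry a scheduled task's scaling rule until it succeeds or the
    launch expiration window passes, then abandon the task."""
    deadline = time.monotonic() + launch_expiration_secs
    while time.monotonic() < deadline:
        if execute_rule():      # returns False while another activity runs
            return "executed"
        time.sleep(retry_interval_secs)
    return "abandoned"          # launch expiration time passed
```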
Where multiple tasks are scheduled at similar times to execute a scaling rule for a particular scaling group, the earliest task triggers the scaling activity first. Other tasks will attempt to execute the rule within their launch expiration time, but a scaling group can only execute one scaling activity at a time, so those other tasks might have to retry several times.
If another scheduled task is still retrying within its launch expiration time when the current scaling activity finishes, its scaling rule is then executed and the corresponding scaling activity is triggered.
If multiple tasks are scheduled at the same time, the last scheduled task is the one that will be executed.
Now let's cover how we can create and manage event-triggered tasks. With event-triggered scaling, you instruct Auto Scaling to perform a scaling operation based on a monitoring type, a reference period, a condition to be met, and a scaling rule. There are two monitoring types available: system monitoring, which uses the monitoring metrics collected by CloudMonitor, and custom monitoring, which uses custom metrics that you report to CloudMonitor yourself.
The reference period is the time period for data collection, aggregation, and calculation. The finer its granularity, the more sensitive the alert-triggering mechanism. The condition is a calculation rule applied to the selected metric within a reference period, and the scaling rule can be an existing rule in a scaling group or one created at the same time as the triggered task.
For example, if the monitored metric were CPU utilization, the condition could be: if CPU utilization is greater than or equal to 70% for the duration of the reference period, trigger the scaling rule.
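That check can be sketched as follows. This is an illustrative model only; the function name, the choice of averaging as the aggregation method, and the sample values are assumptions, not CloudMonitor's API.

```python
def should_trigger(samples, threshold=70.0):
    """Aggregate the metric samples collected over one reference period
    (here by averaging) and test the alarm condition against them."""
    average = sum(samples) / len(samples)
    return average >= threshold

cpu = [82.0, 75.5, 68.0]    # CPU utilization samples within the reference period
print(should_trigger(cpu))  # average is about 75.2, so the rule fires: True
```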
Event-triggered tasks do not need to be unique. Unlike manual or scheduled tasks, event-triggered tasks cannot execute during a scaling group's cooldown period. An event-triggered task is also rejected if the scaling group is already executing another scaling activity: it simply doesn't run. It has no retry window like a scheduled task does; it fails, and it runs again the next time the alarm condition is met.
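The contrast between the two task types can be summarized in a small decision function. This is a simplified model of the behavior just described; the function name and the return strings are illustrative, not part of the service.

```python
def on_trigger(task_type, group_busy, in_cooldown):
    """How a trigger is handled, per task type, when the scaling group
    cannot scale immediately (simplified model)."""
    if task_type == "event":
        if in_cooldown or group_busy:
            return "rejected"   # no retry window; waits for the next alarm
        return "executed"
    # Scheduled tasks ignore the cooldown and retry while the group is busy.
    return "retrying" if group_busy else "executed"
```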
That concludes this session on automated scaling operations. In the next session, I will demonstrate creating a scheduled task and a triggered task based on metrics. I look forward to seeing you there.
David’s IT career started in 1990, when he took on the role of Database Administrator as a favor for his boss. He redirected his career into the Client Server side of Microsoft with NT4, and then progressed to Active Directory and each subsequent version of Microsoft Client/Server Operating Systems. In 2007 he joined QA as a Technical Trainer, and has delivered training in Server systems from 2003 to 2016 and Client systems from XP onwards. Currently, David is a Principal Technical Learning Specialist (Cloud), and delivers training in Azure Cloud Computing, specializing in Infrastructure Compute and Storage. David also delivers training in Microsoft PowerShell, and is qualified in the Alibaba Cloud Space.