Introduction to Alibaba Auto Scaling
Alibaba Auto Scaling automatically creates and releases ECS instances based on pre-defined rules in order to scale services to match demand. It can also configure Server Load Balancer instances and Relational Database Service (RDS) whitelists without any manual intervention.
In this course, you will learn about the Alibaba Auto Scaling service and how it operates. You will learn about the core concepts of the service: scaling groups, scaling configurations, and scaling rules (manual and automatic). For each section of the course, there are guided demonstrations on the Alibaba Cloud platform that you can follow along with, giving you the practical experience necessary to set up auto scaling in your own environment.
If you have any feedback relating to this course, feel free to contact us at firstname.lastname@example.org.
- Understand the core concepts and components of Alibaba Auto Scaling
- Learn how to create, modify, enable, disable, and delete a scaling group
- Learn how to create, modify, and delete the scaling configuration that provides the virtual servers in the scaling group
- Understand the different types of scaling rules that are available
- Learn how to use manual and automatic scaling operations
This course is intended for anyone who wants to learn how to set up auto scaling in their Alibaba Cloud environments.
To get the most out of this course, you should already have a basic knowledge of Alibaba Cloud or another cloud vendor.
Welcome to session two, Auto Scaling Core Concepts. In this session, we'll look at some of Auto Scaling's core concepts. Auto Scaling supports the following key functions: scaling operations, automatic Server Load Balancer configuration, and automatic RDS database whitelist configuration. To make full use of the Auto Scaling service, you need to be familiar with some of the key concepts associated with it.
The first key concept is Auto Scaling itself. Auto Scaling is a management service that allows users to automatically adjust elastic computing resources according to business needs and policies. Within the Auto Scaling service, there are several related concepts that are also important to know.
The first is the scaling group. A scaling group is a collection of ECS instances with a similar configuration (similar memory, network, and operating system) that are deployed together in an application scenario to provide, for example, a website front end or a web application back end. The scaling group defines the maximum and minimum number of ECS instances that the group can contain, along with any associated Server Load Balancer and/or RDS database instance configurations.
Within a scaling group, you will have what's called a scaling configuration. A scaling configuration defines the configuration information for the ECS instances in the scaling group. You can use an existing ECS instance as a template to create new instances, use an instance launch template to create ECS instances, or, if you do not yet have a suitable instance configuration, create an empty scaling group.
Next, there are scaling rules. A scaling rule defines a specific scaling action, for instance, adding or removing ECS instances. Scaling rules adjust the number of ECS instances within the minimum and maximum limits set for the scaling group, which means the number of ECS instances in the group can never exceed those limits.
For example, if a scaling rule sets the number of ECS instances to 50, but the scaling group's maximum is set to 45, the group will only grow to 45 ECS instances. Next, we have scaling activities. When a scaling rule is successfully triggered, a scaling activity event is generated, and it's this scaling activity that adds or removes ECS instances from the scaling group. Only one scaling activity can be executed at a time in a scaling group.
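The clamping behavior described above can be sketched in a few lines. This is a minimal illustrative model, not the ESS API; the function name and parameters are hypothetical.

```python
# Hypothetical model of how a scaling rule's target is clamped to the
# scaling group's min/max limits. Names are illustrative only.

def effective_capacity(desired: int, min_size: int, max_size: int) -> int:
    """Clamp a scaling rule's desired instance count to the group's bounds."""
    return max(min_size, min(desired, max_size))

# The example from the text: a rule asks for 50 instances, but the
# group's maximum is 45, so the group only grows to 45.
print(effective_capacity(50, min_size=1, max_size=45))  # 45
```

The same clamp applies in the other direction: a rule that would shrink the group below the minimum stops at the minimum.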
Next, we have scaling trigger tasks. A trigger task is a task that triggers a scaling rule, and two types are supported: scheduled tasks and event-triggered tasks. With a scheduled task, you instruct Auto Scaling to perform a scaling operation at a specified time. For example, scale up to X instances at 4:00 PM every day. This type of scaling operation is designed for cases where you can predict that you will have an increase in demand.
Event-triggered tasks are for scenarios where you cannot predict in advance what the load on your platform will be. In this case, you can scale up and down dynamically based on CloudMonitor metrics, such as CPU, network, or memory usage. And finally, we have the cool-down period. The cool-down period, or cool-down time, refers to a period during which Auto Scaling cannot execute new scaling activities. The cool-down period starts after the last ECS instances are added to or removed from the scaling group by a scaling activity.
Each time a scaling activity is triggered, a cool-down period begins. During this period, any further requests from CloudMonitor alarm tasks are rejected by the scaling group and new activities cannot take place. This keeps the group from growing or shrinking too quickly. However, manually executed tasks can trigger a scaling activity immediately, without waiting for the cool-down time to expire.
Usage of the Auto Scaling service usually follows the same pattern, in six general steps. Step one: create a scaling group and configure the maximum and minimum number of ECS instances that the group can hold. Optionally, associate a Server Load Balancer and an RDS database with the group.
Step two: create a scaling configuration, choosing the attributes for the ECS instances that Auto Scaling will add to or remove from the group, such as the operating system, image ID, and instance type or family. Step three: enable the scaling group with the scaling configuration you created in step two. Step four: create a scaling rule, for example, add ECS instances. Step five: optionally, create a scheduled task that will trigger the scaling rule you created in step four. Step six: create an event-triggered task to scale out or in based on metrics.
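The six steps above can be sketched with the Alibaba Cloud CLI (`aliyun`). Treat this as a rough outline, not a runnable recipe: the action and parameter names follow the ESS API as best understood here, so verify each one against the ESS API reference, and all IDs (asg-xxxx, img-xxxx, and so on) are placeholders.

```shell
# Hedged sketch of the six-step Auto Scaling workflow. Parameter names
# are assumptions to be checked against the ESS API reference.

# 1. Create a scaling group with min/max instance counts.
aliyun ess CreateScalingGroup --RegionId us-west-1 \
  --ScalingGroupName web-group --MinSize 2 --MaxSize 10

# 2. Create a scaling configuration (the ECS instance template).
aliyun ess CreateScalingConfiguration --ScalingGroupId asg-xxxx \
  --ImageId img-xxxx --InstanceType ecs.g6.large --SecurityGroupId sg-xxxx

# 3. Enable the group with that configuration.
aliyun ess EnableScalingGroup --ScalingGroupId asg-xxxx \
  --ActiveScalingConfigurationId asc-xxxx

# 4. Create a scaling rule, e.g. add 2 ECS instances.
aliyun ess CreateScalingRule --ScalingGroupId asg-xxxx \
  --AdjustmentType QuantityChangeInCapacity --AdjustmentValue 2

# 5. (Optional) Schedule that rule to fire daily at 16:00.
aliyun ess CreateScheduledTask --RegionId us-west-1 \
  --ScheduledAction <scaling-rule-ari> \
  --LaunchTime 2024-01-01T16:00Z --RecurrenceType Daily --RecurrenceValue 1

# 6. Create an event-triggered task based on a CloudMonitor metric,
#    e.g. an alarm on CPU utilization crossing 80% (remaining alarm
#    parameters omitted; see the ESS API reference).
aliyun ess CreateAlarm --RegionId us-west-1 --ScalingGroupId asg-xxxx \
  --MetricName CpuUtilization --Threshold 80
```

These are provisioning calls against a live account, so they are shown for orientation only; the guided demonstrations in the course walk through the same steps in the console.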
So to recap what we've covered in this session: the Auto Scaling service comprises a scaling group, which includes the scaling configuration, scaling rules, scaling activities, scaling trigger tasks, and the cool-down period. Scaling trigger tasks fall into two categories: scheduled tasks and event-triggered tasks. It's worth noting that deleting a scaling group also deletes the scaling configuration, scaling rules, and scaling activities associated with it. Scheduled tasks, however, are independent of the scaling group, so if you delete the scaling group, any scheduled tasks that were created will remain.
That concludes this session. I look forward to talking to you in the next session, session three, Auto Scaling Operations: Scaling Groups.
David’s IT career started in 1990, when he took on the role of Database Administrator as a favor for his boss. He redirected his career into the Client Server side of Microsoft with NT4, and then progressed to Active Directory and each subsequent version of Microsoft Client/Server Operating Systems. In 2007 he joined QA as a Technical Trainer, and has delivered training in Server systems from 2003 to 2016 and Client systems from XP onwards. Currently, David is a Principal Technical Learning Specialist (Cloud), and delivers training in Azure Cloud Computing, specializing in Infrastructure Compute and Storage. David also delivers training in Microsoft PowerShell, and is qualified in the Alibaba Cloud Space.