
Key Autoscaling Considerations

Overview

Difficulty: Beginner
Duration: 44m
Students: 192
Rating: 5/5

Description

Develop your skills for autoscaling on Azure with this course from Cloud Academy. Learn how to improve your team's development skills and understand how they relate to scalable solutions. What's more, in this course you will learn how to analyze and deal with transient faults.

This course is made up of 19 lectures that will guide you through the process from beginning to end.

To discover more Azure courses, visit our content training library.

Learning Objectives

  • Learn how to develop applications for autoscaling
  • Prepare for the Azure AZ-300 certification
  • Design and implement code that addresses singleton application instances


Intended Audience

This course is recommended for:

  • IT professionals preparing for Azure certification
  • IT professionals who need to develop applications that can autoscale

Prerequisites

There are no prerequisites for this course, although an understanding of Microsoft Azure will prove helpful.


Transcript

When planning an autoscale solution, there are a few key points to consider. First and foremost, consider whether you can predict the load an application will experience accurately enough to leverage scheduled autoscaling, which allows you to add and remove instances to meet anticipated increases in demand. Where that isn't possible, use reactive autoscaling based on runtime metrics, which can handle unpredictable changes in the demand for an application. You can also combine both approaches to create a strategy that adds resources on a schedule reflecting the times when you know an application will be busiest. This ensures that capacity is available when it's required, without any delay caused by starting new instances. By also defining metrics that allow reactive autoscaling during peak periods, you can ensure that the application can handle sustained yet unpredictable spikes in demand.
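As a rough illustration of combining the two approaches, here is a minimal Python sketch (not Azure SDK or portal configuration; the busy hours, CPU thresholds, and instance limits are assumptions chosen for the example). A schedule raises the minimum instance count during a known busy window, while a CPU-based reactive rule handles spikes on top of that floor.

from datetime import datetime

BUSY_HOURS = range(8, 18)          # assumed peak window: 08:00-17:59
BASE_MIN, BUSY_MIN, MAX_INSTANCES = 2, 6, 20

def desired_instance_count(now: datetime, avg_cpu_percent: float, current: int) -> int:
    # Scheduled part: raise the instance-count floor during the known busy window.
    minimum = BUSY_MIN if now.hour in BUSY_HOURS else BASE_MIN
    # Reactive part: scale out on high average CPU, scale in on low average CPU.
    if avg_cpu_percent > 75:
        target = current + 2
    elif avg_cpu_percent < 25:
        target = current - 1
    else:
        target = current
    # Clamp to the scheduled floor and the overall ceiling.
    return max(minimum, min(target, MAX_INSTANCES))

print(desired_instance_count(datetime(2023, 1, 1, 9, 0), 80.0, 6))   # busy window + high CPU -> 8
print(desired_instance_count(datetime(2023, 1, 1, 2, 0), 10.0, 3))   # quiet window + low CPU -> 2

The scheduled floor guarantees capacity ahead of the predictable peak, while the reactive rule only ever adds to it, so neither strategy can undercut the other.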

It's often going to be difficult to completely understand the relationship between metrics and capacity requirements, and this is especially true when an application is first deployed. As such, it makes sense to provision a bit of extra capacity in the beginning. After deploying an application, configure any autoscaling rules that are necessary, then monitor its performance over time and tune those rules to bring the capacity closer to the actual load of the application. Keep in mind that autoscaling isn't necessarily an instantaneous process; it takes time to react to metrics such as CPU utilization exceeding or falling below the thresholds that have been defined. After you've monitored performance for a period of time, use the results to adjust how the system or application scales, if necessary. If you're autoscaling Service Fabric, keep in mind that node types in the cluster consist of VM scale sets on the back end.
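To make the "not instantaneous" point concrete, here is an illustrative Python sketch (not an Azure API) of a reactive scale-out rule that averages CPU over a look-back window and enforces a cooldown between actions; the window and cooldown lengths are assumed values.

from collections import deque
from datetime import datetime, timedelta

class ReactiveScaleOutRule:
    """Fires only when the metric average over a look-back window exceeds the
    threshold and a cooldown has elapsed since the last scale action."""

    def __init__(self, threshold=75.0, window_minutes=5, cooldown_minutes=10):
        self.threshold = threshold
        self.window = timedelta(minutes=window_minutes)
        self.cooldown = timedelta(minutes=cooldown_minutes)
        self.samples = deque()                 # (timestamp, cpu_percent) pairs
        self.last_action = datetime.min

    def evaluate(self, now: datetime, cpu_percent: float) -> bool:
        self.samples.append((now, cpu_percent))
        # Discard samples that fall outside the look-back window.
        while self.samples and now - self.samples[0][0] > self.window:
            self.samples.popleft()
        average = sum(cpu for _, cpu in self.samples) / len(self.samples)
        if average > self.threshold and now - self.last_action >= self.cooldown:
            self.last_action = now
            return True
        return False

rule = ReactiveScaleOutRule()
start = datetime(2023, 1, 1, 12, 0)
for minute, cpu in enumerate([20, 90, 90, 90, 90]):
    # The rule only fires once the 5-minute average crosses 75%, a few
    # minutes after the sustained spike begins.
    print(minute, rule.evaluate(start + timedelta(minutes=minute), cpu))

Because the rule works on a windowed average with a cooldown, a brief spike does not trigger an immediate scale action, which is exactly the lag you need to account for when tuning.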

As such, you're going to need to set up autoscale rules for each node type. When doing so, be sure to consider the number of nodes that are required before you set up autoscaling: the reliability level that you choose drives the minimum number of nodes you must have for the primary node type. Any time you configure multiple rules and policies, those rules and policies can conflict with one another, so it's important to understand how autoscale handles conflict resolution to ensure that there are always enough instances running. First and foremost, scale-out operations always have priority over scale-in operations. Any time multiple scale-out operations conflict with one another, the rule that takes precedence is the one that initiates the largest increase in the number of instances. When it comes to scale-in conflicts, the rule that initiates the smallest decrease in the number of instances takes precedence.
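The conflict-resolution behaviour described above can be summarised in a few lines; this sketch is illustrative only and isn't Azure code. Each triggered rule proposes a change to the instance count: scale-out proposals beat scale-in proposals, the largest increase wins among scale-outs, and the smallest decrease wins among scale-ins.

def resolve_autoscale_conflict(proposed_changes: list[int]) -> int:
    """proposed_changes holds the instance-count delta from every rule that
    fired, e.g. [+2, +5, -3]. Returns the single delta that should be applied."""
    scale_outs = [c for c in proposed_changes if c > 0]
    scale_ins = [c for c in proposed_changes if c < 0]
    if scale_outs:
        return max(scale_outs)     # scale-out beats scale-in; largest increase wins
    if scale_ins:
        return max(scale_ins)      # smallest decrease (closest to zero) wins
    return 0                       # no rule triggered a change

print(resolve_autoscale_conflict([+2, +5, -3]))   # -> 5
print(resolve_autoscale_conflict([-1, -4]))       # -> -1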

About the Author

Students: 2,342
Courses: 10

Tom is a 25+ year veteran of the IT industry, having worked in environments as large as 40k seats and as small as 50 seats. Throughout the course of a long and interesting career, he has built an in-depth skill set that spans numerous IT disciplines. Tom has designed and architected small, large, and global IT solutions.

In addition to the Cloud Platform and Infrastructure MCSE certification, Tom also carries several other Microsoft certifications. His ability to see things from a strategic perspective allows Tom to architect solutions that closely align with business needs.

In his spare time, Tom enjoys camping, fishing, and playing poker.