Develop your autoscaling skills with this course from Cloud Academy. Learn how to improve your team's development practices and understand how they relate to scalable solutions.
This course consists of five quick-fire lectures that provide you with the fundamentals of autoscaling as it is commonly used in cloud-based architectures.
- Learn how to develop applications that autoscale
- Learn how to design scalable cloud architectures
This course is recommended for:
- Cloud Architects
- Cloud System Operators
There are no prerequisites for this course, although basic exposure to AWS, Azure, and/or GCP would be useful.
While it may be tempting to do so, simply throwing resources at a system or deploying additional instances of a process does not guarantee improved performance. As such, it's important to consider some key application design strategies when you are planning an autoscaling strategy. When planning a design, ensure that the system is designed to be horizontally scalable, and don't make assumptions about instance affinity. It's important to avoid designing solutions that require code to always run in a specific process instance. Likewise, when you are scaling a website or cloud service horizontally, don't assume that a series of requests from a single source will necessarily be routed to the same instance.
To avoid this, it's best to design services to be stateless, so that you can avoid the requirement that a series of requests from an application be routed to the same service instance. If you are designing a service that reads and processes messages in a queue, it's best that you not make any assumptions about which instance of a service is going to handle specific messages, because autoscaling could very well launch additional instances of the service as the queue grows. In cases such as this, you should consider the competing consumers pattern, which allows multiple concurrent consumers to process messages that are received on the same messaging channel.
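The competing consumers idea can be sketched with Python's standard library: several worker threads pull from one shared queue, so any idle consumer picks up the next message and no message is tied to a particular instance. The names (`worker`, `consumer-N`) are illustrative, not part of any specific cloud SDK.

```python
# Minimal sketch of the competing consumers pattern: multiple consumers
# read from the same messaging channel (here, a thread-safe queue).
import queue
import threading

def worker(name: str, messages: queue.Queue, results: list, lock: threading.Lock):
    """Consume messages until a None sentinel is received."""
    while True:
        msg = messages.get()
        if msg is None:            # sentinel: shut this consumer down cleanly
            messages.task_done()
            return
        # Which consumer handles a given message is deliberately not guaranteed.
        with lock:
            results.append((name, msg))
        messages.task_done()

messages: queue.Queue = queue.Queue()
results: list = []
lock = threading.Lock()

# Launch three competing consumers on the same channel.
consumers = [
    threading.Thread(target=worker, args=(f"consumer-{i}", messages, results, lock))
    for i in range(3)
]
for c in consumers:
    c.start()

# Producer: enqueue the work, then one shutdown sentinel per consumer.
for i in range(10):
    messages.put(f"message-{i}")
for _ in consumers:
    messages.put(None)

for c in consumers:
    c.join()

print(f"processed {len(results)} messages")
```

In a real deployment the queue would be a managed service (e.g. SQS, Azure Service Bus, or Pub/Sub) and each consumer a separately scaled process, but the contract is the same: every message is handled exactly once by whichever consumer is free.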
By leveraging the competing consumers pattern, you allow a system to process multiple messages concurrently, which optimizes throughput, improves scalability, and balances the workload. Long-running tasks that are part of any solution should be designed to support both scaling out and scaling in. Failing to support both can prevent an instance of a process from shutting down cleanly when the system scales in. Worse yet, data could be lost if a process is terminated forcibly. Ideally, you should leverage the pipes and filters pattern to break the processing of a long-running task into smaller, more discrete chunks. By breaking a task that performs complex processing into a series of separate, reusable elements, you can improve performance, scalability, and reusability while allowing the elements that perform the processing to be deployed and scaled independently.
As an alternative to the pipes and filters pattern, you could also implement a checkpoint mechanism, which can then record state information about a long-running task at regular intervals. Saving the state information in durable storage allows it to be accessed by any instance of the process running the task. By doing this, if the process is shut down, any work that it was performing can be resumed from the last checkpoint by using another instance.
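A checkpoint mechanism can be sketched as follows, with a local JSON file standing in for durable shared storage (in practice you would use blob storage, a database, or similar). The file name and `checkpoint_every` interval are illustrative assumptions.

```python
# Minimal sketch of checkpointing a long-running task: progress is saved at
# regular intervals so another instance can resume from the last checkpoint
# if this process is terminated.
import json
import os

CHECKPOINT_PATH = "task_checkpoint.json"  # stand-in for durable shared storage

def load_checkpoint() -> int:
    """Return the index of the next item to process (0 if no checkpoint)."""
    if os.path.exists(CHECKPOINT_PATH):
        with open(CHECKPOINT_PATH) as f:
            return json.load(f)["next_index"]
    return 0

def save_checkpoint(next_index: int) -> None:
    """Write progress via a temp file so readers never see a partial write."""
    tmp = CHECKPOINT_PATH + ".tmp"
    with open(tmp, "w") as f:
        json.dump({"next_index": next_index}, f)
    os.replace(tmp, CHECKPOINT_PATH)  # atomic rename on the same filesystem

def run_task(items, checkpoint_every: int = 2):
    start = load_checkpoint()          # resume from wherever the last instance stopped
    processed = []
    for i in range(start, len(items)):
        processed.append(items[i].upper())   # the "work" for this element
        if (i + 1) % checkpoint_every == 0:
            save_checkpoint(i + 1)
    save_checkpoint(len(items))        # final checkpoint: task complete
    return processed

if os.path.exists(CHECKPOINT_PATH):
    os.remove(CHECKPOINT_PATH)         # start the demo from a clean state

items = ["alpha", "beta", "gamma", "delta", "epsilon"]
first_run = run_task(items)
resumed_run = run_task(items)          # checkpoint says everything is done already
print(len(first_run), len(resumed_run))
```

The key point is that state lives outside the process: a second call to `run_task` (which could just as well be a different instance) reads the checkpoint and skips work that has already been completed.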
Tom is a 25+ year veteran of the IT industry, having worked in environments as large as 40,000 seats and as small as 50 seats. Over the course of a long and interesting career, he has built an in-depth skill set that spans numerous IT disciplines. Tom has designed and architected small, large, and global IT solutions.
In addition to the Cloud Platform and Infrastructure MCSE certification, Tom also carries several other Microsoft certifications. His ability to see things from a strategic perspective allows Tom to architect solutions that closely align with business needs.
In his spare time, Tom enjoys camping, fishing, and playing poker.