Overview, Basic Concepts and First Steps
This course from Kevin McGrath, VP of Architecture at Spotinst, explains how to leverage excess cloud capacity from providers such as AWS, Microsoft Azure, and Google Cloud Platform to optimize the costs of cloud computing using Spot Instances.
This course is intended for everyone working with cloud compute workloads, from start-ups to large corporations.
A basic understanding of cloud computing and cloud computing billing models is assumed. If you are new to cloud computing, we recommend completing our What is Cloud Computing course first.
This course will enable you to:
- Recognize and explain how to run and manage workloads on excess cloud capacity using Spot Instances.
- Recognize and explain the risks and benefits of the spot market.
- Recognize and implement Spot Instances to reduce cloud compute costs.
- AWS Spot Instances - How Do They Work?
- Overview, Basic Concepts and First Steps
- The Advantages of Spot Instances
- Best Practices, Workloads and Use Cases
If you have thoughts or suggestions for this course, please contact Cloud Academy at firstname.lastname@example.org.
- [Instructor] Overview, basic concepts, and first steps. What is spare cloud capacity? The premise of cloud computing is that whenever there is a need to provision hundreds or even thousands of instances, you can do so with just the click of a button. This means that cloud service providers, such as AWS, must ensure they always have capacity available. In other words, cloud service providers must always have unused capacity ready for instant growth. For our cloud service provider, this presents a financial dilemma. With millions of cloud customers, and constantly changing demand, projecting future scale is extremely difficult.
On one hand, they must always have spare capacity, but on the other hand, they do not want idle resources that continually add to overhead costs. The introduction of Spot Instances. Large cloud providers found an interesting way to utilize their spare capacity. In 2009, AWS announced its Spot Instance offering. Spot Instances presented a new method of selling this spare capacity while still ensuring their customers have room for provisioning thousands of servers with a single click. In terms of hardware and technical parameters, Spot Instances are exactly the same as regular instances. Only pricing and availability are different. Spot Instances, which can be acquired at large discounts compared to regular on-demand prices, come with a major caveat. Whenever spare capacity is needed for standard compute, the Spot Instance can be terminated, with a best-effort, two-minute notice.
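On AWS, the two-minute warning is surfaced through the EC2 instance metadata service: when an instance is marked for interruption, the `spot/instance-action` path returns a small JSON document. Below is a minimal sketch of polling and parsing that notice. The function names are our own, and the snippet assumes IMDSv1-style unauthenticated metadata access; off-instance, the endpoint is simply unreachable.

```python
import json
import urllib.request
from urllib.error import URLError

# Metadata path that AWS populates when a Spot interruption is pending.
# It returns 404 (an error) while the instance is safe.
METADATA_URL = "http://169.254.169.254/latest/meta-data/spot/instance-action"


def parse_interruption_notice(body: str) -> dict:
    """Extract the pending action and its deadline from the JSON notice."""
    doc = json.loads(body)
    return {"action": doc["action"], "time": doc["time"]}


def check_for_interruption(timeout: float = 1.0):
    """Return the pending interruption, or None if none is scheduled.

    Only meaningful on an EC2 instance; elsewhere the metadata
    endpoint is unreachable and we return None.
    """
    try:
        with urllib.request.urlopen(METADATA_URL, timeout=timeout) as resp:
            return parse_interruption_notice(resp.read().decode())
    except (URLError, OSError):
        return None


# Illustrative payload in the documented shape of the two-minute warning:
sample = '{"action": "terminate", "time": "2024-05-04T17:11:44Z"}'
notice = parse_interruption_notice(sample)
```

An application would call `check_for_interruption` on a short interval (for example, every five seconds) and use the remaining window to drain work, checkpoint state, or deregister from a load balancer.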
Different providers. While AWS was the first to sell spare capacity, other providers have followed suit. We will now take a look at offerings across AWS, Azure, and Google before coming back to focus on the technical implementation of AWS Spot Instances. The first offering, AWS, created a market based on supply and demand, causing the price for compute to change as use and spare capacity fluctuate. Savings in the marketplace can reach up to 90% compared to on-demand prices. The only caveat is Spot Instances can be interrupted by Amazon at any time. Amazon also continually adds features which make it easier to use Spot Instances, such as Spot Fleet, which automates the management of Spot Instances, and Hibernation, which allows you to save your Spot Instance's data and memory upon termination and reboot automatically when the Spot Instance is available again. The second provider we will discuss is the Google Cloud Platform, also known as GCP. The GCP offering for spare capacity is called the Preemptible VM. Preemptible VMs can cost up to 80% less than regular instances and last up to 24 hours. During this time, Google might terminate or preempt the instance if it needs the capacity for other tasks. Pricing is fixed, so you will always get predictable costs without tracking markets. While easier to start with, this also obfuscates how much capacity is available at any given time. Lastly, let's take a look at Microsoft Azure. The Microsoft Azure offering is called Low-priority VMs. Originally released only for Azure Batch, Low-priority VMs are now also compatible with Azure scale sets. There is no time limit with Azure Low-priority VMs, but as with other clouds, when capacity is needed for other tasks, Microsoft can revoke these instances at any time. No warning will be provided by Azure. It is up to the application and the implementer to handle Spot capacity changes.
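To make the AWS mechanics concrete, here is a minimal sketch, assuming boto3, of requesting a one-time Spot Instance through the standard `run_instances` API by attaching `InstanceMarketOptions`. The AMI ID, instance type, and max price below are illustrative placeholders, not recommendations.

```python
from typing import Optional


def spot_request_params(ami_id: str, instance_type: str,
                        max_price: Optional[str] = None) -> dict:
    """Build run_instances keyword arguments for a one-time Spot request."""
    spot_options = {
        # A one-time request that terminates (rather than stops or
        # hibernates) when AWS reclaims the capacity.
        "SpotInstanceType": "one-time",
        "InstanceInterruptionBehavior": "terminate",
    }
    if max_price is not None:
        # Optional price ceiling; if omitted, the cap defaults to the
        # on-demand price for the instance type.
        spot_options["MaxPrice"] = max_price
    return {
        "ImageId": ami_id,
        "InstanceType": instance_type,
        "MinCount": 1,
        "MaxCount": 1,
        "InstanceMarketOptions": {
            "MarketType": "spot",
            "SpotOptions": spot_options,
        },
    }


params = spot_request_params("ami-0123456789abcdef0", "m5.large",
                             max_price="0.04")
# On a machine with AWS credentials configured, you would then run:
#   import boto3
#   boto3.client("ec2").run_instances(**params)
```

Because the instance is otherwise identical to an on-demand one, the only Spot-specific part of the request is the `InstanceMarketOptions` block.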
For a detailed comparison of Azure Low-priority VMs, GCP Preemptible VMs, and AWS Spot Instances, please refer to the matrix that is included in this course.
To wrap up this section on cloud service providers, there are two major takeaways. One, they offer massive savings, but two, they come with a huge risk of downtime if not managed properly. With up to 90% savings across providers, learning how to capture the massive savings that Spot Instances and spare capacity have to offer can be the key to cutting cloud costs. From large enterprises, such as Verizon and Sony, to growing tech giants like Yelp, to seed-round startups, any company can potentially save thousands to millions of dollars a month, all by learning how to effectively and efficiently reduce the risk of using Spot capacity. For the remainder of this course, we will be focusing on AWS Spot Instances. We will start by reviewing everything you need to know about AWS and the Spot market, from basic concepts to a demo of how to run a workload on Spot. From here, we will demonstrate how to run Spot Instances more consistently by using tools such as Spot Fleet or third-party management platforms, such as Spotinst. Finally, we will dive deep into different use cases and workloads, from development and staging environments to mission-critical and production workloads.
Kevin McGrath is the VP of Architecture for Spotinst, specializing in Cloud Native and Serverless product designs. Kevin is responsible for researching and evaluating innovative technologies and processes leaning on his extensive background in DevOps and delivering Software as a Service within the communications and IT infrastructure industry.
Kevin started his career at USi, the first Application Service Provider (ASP), 20 years ago. It was here he began delivering enterprise applications as a service in one of the first multi-tenant shared datacenter environments. After USinternetworking was acquired by AT&T, Kevin served in the office of the CTO at Sungard Availability Services, where he specialized in migrating legacy workloads to cloud native and Serverless architectures.
Kevin holds a B.A. in Economics from the University of Maryland and a Master's in Technology Management from University of Maryland University College.