This course is a short refresher on the four AWS Compute services announced at re:Invent 2018.
Learning Objective
- It aims to provide an awareness of what each of the Compute services is used for and the benefit that they can bring to you within your organization
Intended Audience
- This course would be beneficial to anyone who is responsible for implementing, managing, and securing Compute services within AWS
Prerequisites
- You should have a basic understanding of AWS core services to help you understand how each of these services fit into the AWS landscape
Related Training Content
Understanding AWS Lambda to Run and Scale your code
One of the great things embedded in Amazon Web Services' DNA is their unparalleled vision and innovation in the compute resource space. In the cloud-first era, AWS has constantly redefined what a compute resource is and should be. From the very start, AWS provided compute resources in the form of instances, which are virtualized servers, and these quickly became the norm for customer compute resources.
In more recent times, this innovation has continued in the form of services like ECS, Fargate, EKS and Lambda, supported by underlying container technologies such as Docker. A constant theme in this innovation has been the miniaturization of compute resources, shrinking from instances to containers to serverless functions. Launching and leveraging smaller units of compute resource provides benefits to both AWS and its customers. AWS can distribute, balance and pack these smaller compute units more densely across its global, regional and zonal physical resources. Customers can better optimize the balance between their varying usage requirements and their spend.
AWS has leveraged container technology as an enabler for much of this miniaturization, providing a key advantage of faster launch times for the compute resource. However, running containers at scale in highly multi-tenanted environments has its own challenges, particularly when it comes to enforcing and ensuring security. With all this in mind, AWS introduced Firecracker.
Firecracker is AWS's latest rethink, designed to address the requirements of running secure, multi-tenanted, micro-sized workloads. Firecracker provides a new type of virtualization technology built on the Linux Kernel-based Virtual Machine (KVM), and exposes a RESTful API to a virtual machine monitor (VMM). Spinning up and configuring micro VMs is performed via this RESTful API. With Firecracker, you can launch thousands of micro virtual machines, requiring only around five megabytes of memory overhead per VM, with sub-second launch times. A micro VM has all the advantages typically associated with a virtual machine, but in a smaller and more compact footprint, accomplished without compromising on security and boundary isolation between guests on the same multi-tenanted host.
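As a sketch of that API-driven workflow, the snippet below builds the ordered sequence of configuration calls described in Firecracker's getting-started documentation (boot source, root drive, machine config, then an InstanceStart action). The kernel and rootfs paths are placeholder assumptions; in practice each payload would be PUT to the VMM's API unix socket, for example with `curl --unix-socket`.

```python
import json


def microvm_request_sequence(kernel="vmlinux", rootfs="rootfs.ext4"):
    """Build the ordered Firecracker API calls that configure and boot
    one micro VM. The kernel/rootfs paths are illustrative placeholders."""
    return [
        # Point the guest at a kernel image and its boot arguments.
        ("PUT", "/boot-source", {
            "kernel_image_path": kernel,
            "boot_args": "console=ttyS0 reboot=k panic=1 pci=off",
        }),
        # Attach a root filesystem drive.
        ("PUT", "/drives/rootfs", {
            "drive_id": "rootfs",
            "path_on_host": rootfs,
            "is_root_device": True,
            "is_read_only": False,
        }),
        # Size the micro VM: 1 vCPU and 128 MiB of memory.
        ("PUT", "/machine-config", {"vcpu_count": 1, "mem_size_mib": 128}),
        # Finally, start the guest.
        ("PUT", "/actions", {"action_type": "InstanceStart"}),
    ]


# Each call would be sent to the VMM's local API socket, e.g.:
#   curl --unix-socket /tmp/firecracker.socket -X PUT \
#        "http://localhost/boot-source" -d '<json body>'
for method, path, body in microvm_request_sequence():
    print(method, path, json.dumps(body))
```

Because the whole configuration is just four small HTTP calls against a local socket, scripting the launch of many micro VMs in parallel is straightforward, which is what enables the thousands-of-VMs density mentioned above.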
Firecracker has been designed and developed with the following key tenets:
- Built-in security: compute security barriers enable multi-tenant workloads and cannot be mistakenly disabled by customers.
- Light-weight virtualization: transient or stateless workloads are favored over long-running or persistent workloads, and Firecracker's hardware resource overhead is known and guaranteed.
- Compute oversubscription: all of the hardware compute resources exposed by Firecracker to guests can be securely oversubscribed.
To get Firecracker up and running, you'll need access to a bare metal server running Linux with KVM. AWS provides the i3.metal instance, but you can also run it on your own workstation or on another provider's bare metal service. Firecracker's niche looks to be short-lived bursts of compute activity, as evidenced by the fact that both Lambda and Fargate now use Firecracker under the hood and likely influenced its design. However, there really is no reason why longer-lived workloads can't also be processed using this technology.
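Since KVM access is the one hard prerequisite mentioned above, a minimal sketch of the check you might run before attempting a Firecracker launch is shown below; the device path is the standard `/dev/kvm`, and the read/write access test mirrors what the Firecracker process itself needs.

```python
import os


def kvm_available(dev="/dev/kvm"):
    """Return True if the KVM device exists and is readable and
    writable by the current user -- the prerequisite for running
    Firecracker on this host."""
    return os.path.exists(dev) and os.access(dev, os.R_OK | os.W_OK)


if kvm_available():
    print("KVM is available: Firecracker can launch micro VMs here.")
else:
    print("No usable /dev/kvm: use a bare metal host such as i3.metal.")
```

On a virtualized EC2 instance this check will typically fail, which is exactly why AWS points you at the i3.metal instance type for experimenting with Firecracker.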
That brings me to the end of this lecture. Next, I will discuss AWS Outposts.
Stuart has been working within the IT industry for two decades covering a huge range of topic areas and technologies, from data center and network infrastructure design, to cloud architecture and implementation.
To date, Stuart has created 150+ courses relating to cloud computing, reaching over 180,000 students, mostly within the AWS category and with a heavy focus on security and compliance.
Stuart is a member of the AWS Community Builders Program for his contributions towards AWS.
He is AWS certified and accredited in addition to being a published author covering topics across the AWS landscape.
In January 2016 Stuart was awarded ‘Expert of the Year Award 2015’ from Experts Exchange for his knowledge share within cloud services to the community.
Stuart enjoys writing about cloud technologies and you will find many of his articles within our blog pages.