
AWS Firecracker: Next Generation Virtualization for MicroVMs

“Firecracker is an open source virtualization technology that is purpose-built for creating and managing secure, multi-tenant container and function-based services.”

One of the great things embedded in Amazon Web Services’ DNA is its unparalleled vision and innovation in the compute space. In the cloud-first era, AWS has constantly redefined what a compute resource is and should be. From the very start, AWS provided compute in the form of instances (virtualized servers), which quickly became the norm. More recently, this innovation has continued with services like ECS, Fargate, EKS, and Lambda, supported by underlying container technologies such as Docker. A constant theme in this innovation has been the miniaturization of compute resources, shrinking from instances to containers to serverless functions. Launching and leveraging smaller units of compute benefits both AWS and its customers: AWS can distribute, balance, and pack these smaller compute units more densely across its global, regional, and zonal physical resources, while customers can better balance their varying usage requirements against spend.

AWS has leveraged container technology as an enabler for much of this miniaturization, providing a key advantage of faster launch times for the compute resource. However, running containers at scale in highly multi-tenanted environments has its own challenges, particularly when it comes to enforcing and ensuring security.

With all this in mind, at re:Invent 2018 AWS announced that it has made Firecracker open source. Firecracker is AWS’s latest rethink of how to run secure, multi-tenant, micro-sized workloads. It provides a new type of virtualization technology built on the Linux Kernel-based Virtual Machine (KVM), and exposes a RESTful API to its virtual machine monitor (VMM). Spinning up and configuring microVMs is performed via this RESTful API. With Firecracker, you can launch thousands of micro virtual machines, each requiring only 5MiB of memory overhead, with sub-second launch times (<125ms). A microVM has all of the advantages typically associated with a virtual machine, but in a smaller and more compact footprint, accomplished without compromising on security or boundary isolation between guests on the same multi-tenanted host.
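To give a feel for what driving that API looks like, here is a sketch of configuring and booting a microVM with a handful of curl calls. It assumes a Firecracker process is already listening on `/tmp/firecracker.socket`, and that `hello-vmlinux.bin` and `hello-rootfs.ext4` are a kernel image and root filesystem you have locally (the sample artifact names come from the project’s getting-started guide):

```shell
API_SOCKET=/tmp/firecracker.socket

# Point the microVM at a kernel image and kernel boot arguments
curl --unix-socket "$API_SOCKET" -X PUT 'http://localhost/boot-source' \
    -H 'Content-Type: application/json' \
    -d '{
        "kernel_image_path": "./hello-vmlinux.bin",
        "boot_args": "console=ttyS0 reboot=k panic=1 pci=off"
    }'

# Attach a root filesystem as the boot drive
curl --unix-socket "$API_SOCKET" -X PUT 'http://localhost/drives/rootfs' \
    -H 'Content-Type: application/json' \
    -d '{
        "drive_id": "rootfs",
        "path_on_host": "./hello-rootfs.ext4",
        "is_root_device": true,
        "is_read_only": false
    }'

# Size the microVM: 1 vCPU and 128 MiB of guest memory
curl --unix-socket "$API_SOCKET" -X PUT 'http://localhost/machine-config' \
    -H 'Content-Type: application/json' \
    -d '{"vcpu_count": 1, "mem_size_mib": 128}'

# Boot the microVM
curl --unix-socket "$API_SOCKET" -X PUT 'http://localhost/actions' \
    -H 'Content-Type: application/json' \
    -d '{"action_type": "InstanceStart"}'
```

Each call is an idempotent PUT against a resource on the VMM, which is what makes the machine configuration so scriptable.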

Firecracker Host Integration

Firecracker has been designed and developed with the following key tenets:

  1. Built-In Security: We provide compute security barriers that enable multi-tenant workloads and cannot be mistakenly disabled by customers. Customer workloads are simultaneously considered sacred (shall not be touched) and malicious (shall be defended against).
  2. Light-weight Virtualization: We focus on transient or stateless workloads over long-running or persistent workloads. Firecracker’s hardware resources overhead is known and guaranteed.
  3. Minimalist in Features: If it’s not clearly required for our mission, we won’t build it. We maintain a single implementation per capability.
  4. Compute Oversubscription: All of the hardware compute resources exposed by Firecracker to guests can be securely oversubscribed.

To get Firecracker up and running, you’ll need access to a bare metal Linux server with KVM enabled. AWS provides the i3.metal instance for this purpose, but you can also run Firecracker on your own workstation or on bare metal servers from other providers. If you’re considering the i3.metal instance for running Firecracker, take into account the cost of running this beast (36 hyper-threaded cores, 512 GiB of memory, 15.2TB of SSD storage – costing $4.992 per hour On-Demand in Oregon).
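Before going any further, it’s worth confirming that your host actually exposes KVM. A minimal check, assuming a Linux host, is to test access to `/dev/kvm`; the final line, which actually starts the VMM, is shown commented out since it launches a long-running process (the binary name and socket path follow the project’s getting-started guide):

```shell
# Firecracker needs read/write access to /dev/kvm on the host
if [ -r /dev/kvm ] && [ -w /dev/kvm ]; then
    echo "KVM is available: this host can run Firecracker microVMs"
else
    echo "KVM is not available (missing /dev/kvm or insufficient permissions)"
fi

# Once KVM is confirmed, start the Firecracker VMM, which then listens
# on a Unix socket for the RESTful API calls that configure microVMs:
# ./firecracker --api-sock /tmp/firecracker.socket
```

If the check fails on a cloud VM, that’s usually because nested virtualization isn’t exposed – which is exactly why a bare metal instance like i3.metal is the recommended route.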

From what I can tell, having briefly read the docs and experimented with the technology, there are few, if any, restrictions on the types of workload that can run within a microVM on Firecracker. Having said that, and as the product name potentially implies, Firecracker’s niche is probably short-lived bursts of compute activity: both Lambda and Fargate now use Firecracker under the hood, and those services likely influenced its design. However, there really is no reason why longer-lived workloads can’t also be processed using this technology.

Let’s quickly summarize Firecracker’s key features:

  • Millisecond launch times – as low as 125ms – with only 5MiB of memory overhead per microVM
  • Fully fledged micro virtual machines – not just containers
  • Ring-fenced security and isolation enforced between microVMs on the same host
  • Authored in Rust (https://www.rust-lang.org/)
  • Requires Linux and KVM
  • Now used internally by Fargate and Lambda
  • Open sourced under the Apache License 2.0
  • Documentation portal: https://firecracker-microvm.github.io/
  • Source: https://github.com/firecracker-microvm/firecracker

Firecracker looks to be both a promising and popular way of provisioning compute resources. In the first 24 hours after the announcement, the Firecracker GitHub repository had already accumulated 21 pull requests from community contributors, with several more appearing while this blog post was being written. Indeed, this latest AWS compute innovation has *sparked* a lot of interest.

Go ahead and light the fuse…

Are you at re:Invent this year? Come visit us at booth #1809 and speak to a member of our team to see how we can transform your cloud training. 

Written by

Jeremy is currently employed as a Cloud Researcher and Trainer - and operates within CloudAcademy's content provider team authoring technical training documentation for both AWS and GCP cloud platforms. Jeremy has achieved AWS Certified Solutions Architect - Professional Level, and GCP Qualified Systems Operations Professional certifications.
