“Firecracker is an open source virtualization technology that is purpose-built for creating and managing secure, multi-tenant containers and functions-based services.”
One of the great things embedded in Amazon Web Services' DNA is its unparalleled vision and innovation in the compute space. In the cloud-first era, AWS has constantly redefined what a compute resource is and should be. From the very start, AWS provided compute in the form of instances (virtualized servers), which quickly became the norm for customer workloads. More recently, this innovation has continued in the form of services such as ECS, Fargate, EKS, and Lambda, supported by underlying container technologies such as Docker. A constant theme has been the miniaturization of compute, shrinking from monolithic instances to containers to serverless functions. Launching and leveraging smaller units of compute benefits both AWS and its customers: AWS can distribute, balance, and pack these smaller units more densely across its global, regional, and zonal physical resources, while customers can better balance their varying usage requirements against spend.
AWS has leveraged container technology as an enabler for much of this miniaturization, with a key advantage being faster launch times for the compute resource. However, running containers at scale in highly multi-tenanted environments has its own challenges, particularly when it comes to enforcing security isolation between tenants.
With all this in mind, at re:Invent 2018 AWS announced that it had open sourced Firecracker, its latest rethink of how to run secure, multi-tenanted, micro-sized workloads. Firecracker is a new virtualization technology that builds on Linux KVM (the Kernel-based Virtual Machine) and exposes a RESTful API to its virtual machine monitor (VMM); spinning up and configuring microVMs is performed via this API. With Firecracker, you can launch thousands of micro virtual machines on a single host, each requiring as little as 5 MiB of memory overhead and launching in under 125 ms. A microVM has all of the advantages typically associated with a virtual machine, in a smaller and more compact footprint, without compromising on security and boundary isolation between guests on the same multi-tenanted host.
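To make the RESTful workflow concrete, here is a minimal sketch of driving Firecracker's API from Python using only the standard library. It assumes a `firecracker` process is already listening on its API Unix socket (e.g. started with `--api-sock /tmp/firecracker.socket`), and the kernel and rootfs paths are placeholders for files you supply yourself:

```python
import json
import http.client
import os
import socket

# Assumption: firecracker was started with --api-sock pointing at this path.
API_SOCKET = "/tmp/firecracker.socket"

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTP over a Unix domain socket, which is how Firecracker exposes its API."""
    def __init__(self, path):
        super().__init__("localhost")
        self._path = path
    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self._path)

def machine_config(vcpus=1, mem_mib=128):
    """Body for PUT /machine-config: size the microVM."""
    return {"vcpu_count": vcpus, "mem_size_mib": mem_mib}

def boot_source(kernel_path, boot_args="console=ttyS0 reboot=k panic=1 pci=off"):
    """Body for PUT /boot-source: the uncompressed kernel the microVM boots."""
    return {"kernel_image_path": kernel_path, "boot_args": boot_args}

def rootfs_drive(image_path):
    """Body for PUT /drives/rootfs: the root filesystem image."""
    return {"drive_id": "rootfs", "path_on_host": image_path,
            "is_root_device": True, "is_read_only": False}

def put(conn, path, body):
    """Issue one PUT against the Firecracker API and return the HTTP status."""
    conn.request("PUT", path, body=json.dumps(body),
                 headers={"Content-Type": "application/json"})
    resp = conn.getresponse()
    resp.read()  # drain the body so the connection can be reused
    return resp.status

if os.path.exists(API_SOCKET):
    conn = UnixHTTPConnection(API_SOCKET)
    put(conn, "/machine-config", machine_config(vcpus=1, mem_mib=128))
    put(conn, "/boot-source", boot_source("/path/to/vmlinux"))
    put(conn, "/drives/rootfs", rootfs_drive("/path/to/rootfs.ext4"))
    put(conn, "/actions", {"action_type": "InstanceStart"})  # boot the microVM
```

The same four calls can equally be made with `curl --unix-socket`; the point is simply that configuring and starting a microVM is nothing more than a handful of JSON PUTs.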
Firecracker has been designed and developed with the following key tenets:
- Built-In Security: We provide compute security barriers that enable multi-tenant workloads and cannot be mistakenly disabled by customers. Customer workloads are simultaneously considered sacred (shall not be touched) and malicious (shall be defended against).
- Light-weight Virtualization: We focus on transient or stateless workloads over long-running or persistent workloads. Firecracker’s hardware resources overhead is known and guaranteed.
- Minimalist in Features: If it’s not clearly required for our mission, we won’t build it. We maintain a single implementation per capability.
- Compute Oversubscription: All of the hardware compute resources exposed by Firecracker to guests can be securely oversubscribed.
To get Firecracker up and running, you’ll need access to a bare metal server running Linux with KVM. AWS provides the i3.metal instance, but you can also run it on your workstation or on other providers’ bare metal servers. If you’re considering the i3.metal instance for running Firecracker, take into account the cost of running this beast (36 hyper-threaded cores, 512 GiB of memory, 15.2 TB of SSD storage, costing $4.992 per hour On-Demand in Oregon).
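A quick way to check whether a given host can run Firecracker is to confirm that the KVM device node exists and is accessible to your user. A minimal sketch (the `/dev/kvm` path is the standard KVM device node):

```python
import os

def kvm_ready(dev="/dev/kvm"):
    """True if the KVM device node exists and is read/write accessible."""
    return os.path.exists(dev) and os.access(dev, os.R_OK | os.W_OK)

if kvm_ready():
    print("KVM is available: this host can run Firecracker microVMs.")
else:
    print("No usable /dev/kvm: use a bare metal host (e.g. i3.metal) with KVM enabled.")
```

If the check fails on a machine that does have KVM loaded, it is usually a permissions issue on `/dev/kvm` rather than a missing kernel module.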
From what I can tell, having briefly read the docs and played with the technology, there are few, if any, restrictions on the workload types that can run within a microVM on Firecracker. Having said that, and as the product name perhaps implies, Firecracker’s sweet spot is probably short-lived bursts of compute activity: both Lambda and Fargate now use Firecracker under the hood, and those services likely influenced its design. Still, there is no reason why longer-lived workloads can’t also be processed using this technology.
Let’s quickly summarize Firecracker’s key features:
- Millisecond launch times, as low as 125 ms, with 5 MiB of memory overhead per microVM
- Fully fledged micro virtual machines – not just containers
- Ring-fenced security and isolation enforced between microVMs on the same host
- Authored in Rust (https://www.rust-lang.org/)
- Requires Linux and KVM
- Now used internally by Fargate and Lambda
- Open sourced under the Apache License 2.0
- Documentation portal: https://firecracker-microvm.github.io/
- Source: https://github.com/firecracker-microvm/firecracker
Firecracker looks to be both promising and popular for provisioning compute resources. In the first 24 hours after the announcement, the Firecracker GitHub repository accumulated 21 pull requests from community contributors, with several more appearing while this blog post was being written. Indeed, this latest AWS compute innovation has *sparked* a lot of interest.
Go ahead and light the fuse…
Are you at re:Invent this year? Come visit us at booth #1809 and speak to a member of our team to see how we can transform your cloud training.