Container Virtualization: What Makes It Work So Well?

Various implementations of container virtualization (including Docker) are filling compute roles once reserved for hypervisor virtualization.

Increasing demand for efficient and secure application portability across environments and operating systems has forced the industry to look for more powerful virtualization designs.
While the hypervisor virtualization model is still used to successfully deploy countless applications across the full range of environments, it does suffer from certain built-in limitations, including…

  • Increased overhead of running a fully installed guest operating system.
  • Inability to freely allocate resources to processes.
  • Performance loss caused by guest OS calls that must pass through the hypervisor.

Container virtualization exists largely to address some of these challenges.

What is container virtualization and how does it work differently than hypervisor virtualization?

Container virtualization (often referred to as operating system virtualization) is more than just a different kind of hypervisor. Containers use the host operating system, rather than a hypervisor, as their base.

Rather than virtualizing the hardware (which requires a fully virtualized operating system image for each guest), containers virtualize the OS itself, sharing the host OS kernel and its resources among the host and all of its containers. If you want to deepen your understanding of the pros and cons of server virtualization, here’s a full explanation.

Hypervisor vs. container virtualization design

Containers provide the bare essentials required for an application to run on a host OS. You could think of them as stripped-down virtual machines running just enough software to deploy an application.

Many applications – like database servers, which work best with block storage – require direct access to hardware resources. Forcing that access to disks and network devices through a hypervisor’s hardware emulation often hurts their performance. Container virtualization helps by simply bypassing the emulation layer. As you can see in the above image, containers use the same server hardware and an operating system on top of it, but no hypervisor above that.

What, then, provisions a container and allocates resources to it? In other words, what plays the role the hypervisor plays for a VM, translating requests into access to the underlying hardware?

Technically, the Linux kernel itself has no clue that there are containers living upstairs, but containers can nevertheless share kernel resources through features like namespaces, cgroups, and chroot. Together, these allow containers to isolate processes, manage resources, and enforce appropriate security.

Let’s learn some more about those three tools:

  • Namespaces are a way for Linux systems to wrap a global system resource in an abstraction that makes it appear to the processes within the namespace that they have their own isolated instance of the global resource.

Namespaces, in other words, provide processes with a virtual view of their operating environment. So if your process doesn’t actually need the full view of its parent’s environment, and can get by with its own hostname, mount points, network stack, process IDs, or set of users, then namespaces can get the job done.
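To make this concrete, here’s a minimal sketch (illustrative only, assuming a Linux host, root privileges, and a Go toolchain) of dropping a process into its own namespaces. The clone flags come from Go’s syscall package, and /bin/sh simply stands in for whatever process you want to isolate:

```go
// namespaces_demo.go – start a shell in its own UTS, PID, and mount namespaces.
// Linux-only; run with root privileges, e.g. `sudo go run namespaces_demo.go`.
package main

import (
	"os"
	"os/exec"
	"syscall"
)

func main() {
	cmd := exec.Command("/bin/sh")
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr

	// Ask the kernel to place the child in new namespaces when it is cloned.
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWUTS | // its own hostname
			syscall.CLONE_NEWPID | // its own process IDs (the shell sees itself as PID 1)
			syscall.CLONE_NEWNS, // its own copy of the mount table
	}

	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
```

Inside that shell, changing the hostname won’t leak out to the host, and the shell believes it is PID 1. Real container runtimes layer further namespaces (network, IPC, user) on top of exactly this mechanism.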

If namespaces give processes their own view of the environment, chroot can restrict them to their own slice of the filesystem, isolating them from the rest of the system and protecting against attacks or interference from other containers on the same host. Using namespaces and chroot you can create a logical container with its own view of the system, but you’ll still need to allocate resources to it. This can be done with cgroups (an abbreviation of “control groups”), a Linux kernel feature that limits, accounts for, and isolates the resource usage (CPU, memory, disk I/O, and network) of a collection of processes.
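Here’s a rough, combined sketch of those other two ingredients. Again, this assumes a Linux host and root privileges; the cgroup name “demo”, the limits, and the root filesystem path /srv/demo-rootfs are made-up values, and it expects the cgroup v2 hierarchy to be mounted at /sys/fs/cgroup with the memory and cpu controllers enabled for child groups:

```go
// limits_demo.go – cap a process with cgroup v2, then confine it with chroot.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"syscall"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// 1. cgroups: creating a directory under the cgroup v2 mount creates a group.
	cg := "/sys/fs/cgroup/demo"
	must(os.MkdirAll(cg, 0755))

	// Limit the group to 100 MB of memory and half a CPU (50ms per 100ms period).
	must(os.WriteFile(filepath.Join(cg, "memory.max"), []byte("104857600"), 0644))
	must(os.WriteFile(filepath.Join(cg, "cpu.max"), []byte("50000 100000"), 0644))

	// Move the current process into the group; from now on its resource usage
	// is accounted against (and capped by) the "demo" group.
	must(os.WriteFile(filepath.Join(cg, "cgroup.procs"),
		[]byte(fmt.Sprint(os.Getpid())), 0644))

	// 2. chroot: confine the process to its own root filesystem so it can no
	//    longer see or touch the rest of the host's files.
	must(syscall.Chroot("/srv/demo-rootfs"))
	must(os.Chdir("/"))

	fmt.Println("now running inside the chroot, limited by cgroup \"demo\"")
}
```

Container runtimes like Docker automate this kind of setup for you, combining cgroups and filesystem isolation with the namespaces shown above, plus layered images and networking.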

It’s these three ingredients that create the magic of container virtualization. The basic architecture is currently used in quite a collection of implementations, the most famous of which is probably Docker.

Now that we’ve got some idea of how they work, let’s review some of the elements that, at least in many scenarios, give container virtualization its advantage:

  • Containers, compared to hypervisor virtualization, are more likely to be secure because, by design, each application is logically restricted to its own environment.
  • Containers provide significantly better performance, as they use native rather than emulated resources.
  • Launching a container is much faster than booting a virtual machine.
  • Containers offer better control over the underlying resources.

After this introduction to the technology’s principles, you might like to try Cloud Academy’s Introduction to Virtualization Technologies.

For a full list of all learning material dedicated to containers, browse Cloud Academy’s library. 

And feel free to join in by adding your comments below.
