Container virtualization (also known as operating-system-level virtualization) is an important player among the many virtualization technologies available today. In traditional hypervisor virtualization on Linux, an entire operating system runs as a guest OS on top of a host OS, using a hypervisor such as Xen. This approach places a considerable load on the hypervisor, which must isolate the guest OS completely by emulating the underlying hardware.
Paravirtualization was a development that requires the guest OS to be modified in order to communicate more directly with the host hardware. This gives it an edge in performance over full virtualization. However, modifying the guest OS is a substantial limitation, because not every operating system we may want to run as a guest is available as open source.
Container virtualization, as we'll discuss, is much lighter and more efficient than traditional hypervisors. In this form of virtualization, virtual machines are carved out of the host operating system and share the same OS kernel. These carved-out VMs are referred to as containers.
Containers essentially provide encapsulation for a set of processes, running them in isolation from the rest of the system. Applications running inside containers behave as if they were running in a separate environment or OS with their own exclusive set of resources. A good example of containers' usefulness: two websites that require two different versions of PHP can run in two different containers, and the two PHP versions will co-exist on the system without conflict.
Since containers do not carry the heavy overhead of guest operating systems and a supporting hypervisor, more applications can run on a host OS than with any other virtualization approach. Moreover, booting and restarting containerized applications is much faster than doing the same with VMs, which generally translates to less downtime.
Containers give applications an isolated view of system resources. To provide this contained view, containers use kernel features called namespaces, cgroups, and chroot to carve off a contained area. The end result behaves like a virtual machine, but without the hypervisor. Containers are a smart method of attaining isolation and resource control.
We'll briefly look at these kernel features and then move on to LXC, which combines them into an effective tool for creating and managing containers.
Namespaces isolate an application's view of the operating system. In other words, they give each process its own virtual view of its operating environment: its own user IDs, its own networking, its own process tree, and so on.
Cgroups (control groups) shape how resources are consumed inside the container: you can isolate and limit CPU usage, memory usage, disk I/O, network I/O, and more.
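These building blocks are visible on any modern Linux system through the pseudo-filesystems the kernel exposes. A quick look (the exact entries vary by kernel version and cgroup layout):

```shell
# Each process's namespace memberships appear under /proc; a container is
# simply a group of processes whose entries here differ from the host's.
ls /proc/self/ns

# The cgroup hierarchy used for resource control lives under /sys/fs/cgroup
# (the layout differs between cgroup v1 and v2).
if [ -d /sys/fs/cgroup ]; then
  ls /sys/fs/cgroup
fi
```

Comparing `/proc/<pid>/ns` for a containerized process against the host's own entries is a simple way to see the isolation in action.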
LXC, an acronym for Linux Containers, is a lightweight virtualization technology that implements the containment concept in Linux. It interfaces with kernel namespaces, cgroups, and other features to create and manage containers. Because it uses kernel features to realize containment, LXC delivers near-native performance, in contrast to virtualization technologies that emulate the hardware.
Let's go through a basic set of LXC commands useful for creating, starting, and destroying containers. This will help you gain a better understanding of container environments.
We have LXC installed on our Ubuntu system. The commands below were tried on Ubuntu; expect slight deviations if you try them on other Linux distributions.
Once the LXC package is installed, we can get a list of the LXC commands.
These commands can be used for creating, starting, stopping, connecting to, and destroying containers, in addition to various other utilities. Now that LXC is installed and its commands are at our disposal, let's verify the environment with lxc-checkconfig.
This command checks the current kernel for LXC support.
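A guarded sketch of that check; if the lxc tools are not installed yet, the same kernel options can often be read from the distribution's kernel config file (the file's location below is an assumption and varies by distribution):

```shell
# Verify that the running kernel supports containers.
if command -v lxc-checkconfig >/dev/null 2>&1; then
  lxc-checkconfig
elif [ -r "/boot/config-$(uname -r)" ]; then
  # Fall back to grepping the kernel config for the relevant options.
  grep -E 'CONFIG_(NAMESPACES|CGROUPS|UTS_NS|IPC_NS|PID_NS|NET_NS)=' \
    "/boot/config-$(uname -r)" || true
else
  echo "kernel config not found; install lxc and run lxc-checkconfig"
fi
```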
In the command's output we see separate sections for namespaces, which are used for isolation, and for cgroups, which are used for resource allocation. If everything looks fine, we can create Linux containers.
LXC conveniently provides templates that can be used to create the root file system for containers. Templates are provided on a per-distribution basis; all of the available templates ship with the LXC package.
Let’s create a container named “dummyContainer” using one of these templates.
sudo lxc-create -n <container-name> -t <template-name>
sudo lxc-create -n dummyContainer -t busybox
Now we have dummyContainer ready. Since containers are completely isolated, each one needs an operating system, supporting libraries, and the application code. Once the container is created, we can check its root file system, stored under /var/lib/lxc.
Here we have a folder named after the container. Inside it are two items.
One is the configuration file for this Linux container; the other is the directory holding the container's root file system. The configuration file contains the default settings for how the container will function. We can modify it as needed, but for now we'll leave the defaults. Once everything is configured and the root file system for the container is created, we can start our dummyContainer.
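A guarded sketch of inspecting that layout (it only does anything on a host where the container from this walkthrough actually exists):

```shell
# Each container's files live under /var/lib/lxc/<container-name>:
#   config  - the container's settings
#   rootfs/ - the container's own root file system
if [ -d /var/lib/lxc/dummyContainer ]; then
  ls /var/lib/lxc/dummyContainer
  sudo head /var/lib/lxc/dummyContainer/config
else
  echo "dummyContainer has not been created on this host"
fi
```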
sudo lxc-start -d -n <container-name>   (-d runs the container in the background)
sudo lxc-start -d -n dummyContainer
Let's list the running processes to see if the newly created container is there; ps -xaf will list all running processes.
As we can see, the container process is running in the host OS. And since namespaces provide isolation, the container has its own little view of the world. We can inspect that isolated view from the host:
lxc-info -n <container-name>
lxc-info -n dummyContainer
We immediately see dummyContainer running, its process ID, and its resource consumption. Let's log in to the container and look at the isolated view.
sudo lxc-console -n <container-name>
sudo lxc-console -n dummyContainer
The username and password are the ones prompted when the container was created, which is root/root (see the snapshot shown with the lxc-create step).
Once we log in to the container, let's check the process list by running ps -aux. We'll see only a limited set of processes: host OS processes are not visible from inside the container, while the host OS can see all of the processes running in containers, since they are actually running on the host OS within isolated namespaces.
The host OS can manipulate containers live, because to the host a container is simply a set of processes.
With this, we have a fairly good understanding of what LXC is and why containers are fast compared to traditional virtualization.
Finally, we’ll stop the container with:
sudo lxc-stop -n <container-name>
sudo lxc-stop -n dummyContainer
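Putting the walkthrough together: the whole lifecycle, plus lxc-destroy to remove the container's files afterwards, can be collected into one guarded script (it only runs the lxc steps where the tools are actually installed):

```shell
#!/bin/sh
# Lifecycle recap using the container name and template from this post.
name=dummyContainer

if command -v lxc-create >/dev/null 2>&1; then
  sudo lxc-create  -n "$name" -t busybox  # build a rootfs from a template
  sudo lxc-start   -d -n "$name"          # start it in the background
  sudo lxc-info    -n "$name"             # state, PID, resource usage
  sudo lxc-stop    -n "$name"             # stop the container
  sudo lxc-destroy -n "$name"             # remove its files under /var/lib/lxc
else
  echo "lxc tools not installed; commands shown for reference"
fi
```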
We've learned how to create and manage our own containers using LXC. Of course, there are more advanced tools, such as Docker, for creating and managing containers, but my intention was to illustrate the internals and concepts behind container provisioning. I hope this post was helpful in understanding the basics of container provisioning. Feel free to comment about this post below. Never stop learning. We don't.