Introduction to Containers
Containers are a bit of an “it” thing in technology right now. The reason for this is simple: they’re a very powerful tool which can streamline your development and ops processes, save companies money, and make life for developers much easier. However, the flip side of this is that they’re a new paradigm to understand, and require that apps be built with a specific architecture to take full advantage of their features. In this course written and presented by Adrian Ryan, you'll learn what containers are, the benefits of using them, and how to containerize an app. Let's get started!
In this lesson, we'll walk you through this new paradigm so that you can get a handle on what containers are and how they work. In the next lesson we'll focus on the benefits of using them, and finally we'll show you how to architect an app to take advantage of their features. So what is a container? One way to think about containers is as lightweight virtual machines. With a virtual machine, you have to virtualize an entire operating system as well as the software you want to run, which makes VMs really resource-heavy.
The operating system is often the single largest, most resource-intensive piece of software on your computer, so running multiple OSes on the same machine, just so that you can have separate environments, uses a lot of your resources. To overcome this issue, the Linux operating system began implementing containers. The idea is simple: if you're already running a Linux OS on your computer, why run a new OS for each VM? Instead, you can share the core of the OS, called the kernel, across each VM. This way the VMs only run the software that they need to.
The difficulty with this is that the VMs must not be able to affect each other, or the underlying computer they're running on, and containers need to replicate that isolation. So the Linux team had to implement some safety features in the kernel itself: features for partitioning the kernel's processor time and memory between the different containers, so that code running in one container can't accidentally access another container through the kernel. Once these containers were implemented at the kernel level, any amount of software could be run inside one, and it would be like running it in its own VM, or its own physical machine. And because all Linux distros share the same fundamental Linux kernel, you can run containers with different distros just as easily as you can run containers using the same distro.
The software that makes each distribution unique all runs on top of the kernel, and it's only the kernel that is shared across all the containers and the host OS. Once containers were implemented at this most basic, fundamental part of the Linux OS, software that made it easier to work with these Linux containers began to pop up. One of the first and most successful container software projects is called Docker. Docker makes it easy to define, manage, and use Linux containers by simply writing plain-text documents that define the software you want running inside a particular container.
In addition, Docker and other companies began building software that could link containers together into a single app, as well as orchestrate spinning them up and down in the cloud rapidly. There are other container systems besides Docker, but I'm mostly going to use Docker as the example of how containers work in this course, because it is the most frequently used and, frankly, the most easily explained of these systems.
As an example, a Dockerfile is used to define a container. A Dockerfile starts with a standard image, usually provided by a software project, such as a Linux distro or a web technology like Node.js. From there you can add new pieces to that image in a certain order, usually by running commands telling the image to install and set up new software. Once the file is written and saved, it can be sent as plain text to any other person and built in just a few seconds on any computer that has Docker installed.
This is very different from VMs, where you have to send a multi-gigabyte image file to other people who want to run the VM. Here you're just sending a few kilobytes of instructions on how to build the container yourself. Docker builds these images in layers. For instance, if you were to run a Docker build on this Dockerfile, first Docker would take the official Ubuntu image, then it would run `apt-get install -y software-properties-common python` to create a new image, then from that image it would run `add-apt-repository`, et cetera, to create another image, and so on, and so on.
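To make that layering concrete, here is a sketch of the kind of Dockerfile just described. The base image tag, package names, and repository are illustrative assumptions, not taken from a real project:

```dockerfile
# Layer 1: start from a standard base image provided by the Ubuntu project.
FROM ubuntu:22.04

# Layer 2: install repository-management tooling and Python.
# Package names here mirror the narration; adjust them for a real project.
RUN apt-get update && apt-get install -y software-properties-common python3

# Layer 3: enable an additional package repository (illustrative only).
RUN add-apt-repository -y universe && apt-get update
```

On a machine with Docker installed, you could build this with something like `docker build -t my-image .` (where `my-image` is a name of your choosing) and watch each instruction produce a new cached layer.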
Once the Dockerfile is built, you can run the image inside a container, or copy it multiple times to run it in as many containers as you want. For instance, here there are four instances of container A and two instances of container B. Further software can be used to network containers to each other, the same way VMs or physical machines can be networked together, so that your containers can communicate with each other to create one large system built from many small containers. Here we have a fairly standard networking arrangement, where all the containers live in one VPN, and load balancers direct the traffic in each subnet toward the least-used container at the moment.
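As a sketch of that kind of arrangement, a Compose file is one common way to declare replicated, networked containers. The service names and image names below are hypothetical, not from the course:

```yaml
# Hypothetical docker-compose.yml: four copies of service A and two of
# service B, all attached to one shared private network.
services:
  service-a:
    image: example/service-a   # hypothetical image name
    deploy:
      replicas: 4              # run four container instances
  service-b:
    image: example/service-b   # hypothetical image name
    deploy:
      replicas: 2              # run two container instances
networks:
  default:
    driver: bridge             # containers on this network can reach each other
```

Bringing this up with `docker compose up` would start the replicas and place them on the shared network, where they can address each other by service name.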
The details of how this is implemented may change depending on what sort of system your containers are running on, and we'll dive deeper into the specifics in a later lesson, but this should give you an idea of how to use them. That's all on Linux, though. What if you want to run containers on another operating system?
Well, Docker lets you run Linux containers on Mac or Windows by first starting a really lightweight Linux VM that mostly just runs the kernel, and then running all the containers inside that VM. This is slower than running Linux containers on Linux, because you do have a VM, but it's faster than the old paradigm of using a bunch of VMs, because you're only running one, and you get the other benefits of containers along with it. In addition, Microsoft has been working to build Windows containers.
These are containers built into the Windows operating system, so that instead of running a Linux distro in a container, you can run Windows and Windows software in a container. Microsoft has been working really closely with Docker on this project, so Windows containers work with Docker's tooling. However, running a Windows container on Linux or Mac doesn't really work at this point.
Finally, there's currently no way to run macOS containers. It's just not something Apple has implemented in the macOS kernel, and since macOS is almost never used to run servers, no one is really asking for it. There are container systems for other operating systems, like BSD, but those are a bit out of scope for this course, since they're rarely used commercially. So that's what containers are: a way to run what amounts to multiple computers on a single machine, each with different operating system software installed, in a fast and secure manner. In the next lesson we'll talk about why you'd want to use containers, and then show you how to containerize an app.
About the Author
Adrian M Ryan is an educator and product manager. He was an early employee at General Assembly, has co-founded an education startup and a consultancy, and he loves teaching. He grew up in rural Alaska, and while he now lives in New York City he makes sure to find time to get out in the woods hiking whenever possible.