This lecture will explain at a high level what Docker is and how it benefits the cloud community.
First, we will define Docker containers in the simplest terms possible. We will use the concept of an actual physical shipping container to explain how a container bundles your application together with all of its dependencies, keeping them separate from the underlying infrastructure.
We will discuss how your code is handled by containers, and what happens when your code changes versions.
Since most IT people are familiar with virtual machines, containers are often compared to VMs. You will learn how the two differ:
- a VM's OS runs on virtualized hardware
- software emulates that hardware, with a full OS running on top
- VMs can be slow to start up
- they consume resources that you might not need
- they require patching
- they can take up a lot of space on disk
- they contain thousands of files your application doesn't need
Finally, we will discuss how to create Docker containers based on different Linux distributions, bundling your code for a specific OS.
Welcome back. Before we dive into talking about how we use Docker, we need to cover the high-level concepts. So in this lesson I'll answer the question: What is Docker? So here's my attempt at a simple explanation. Docker is a container platform that allows you to separate your application from the underlying infrastructure by bundling up your code and all of its dependencies into a self-contained entity that will run the same on any supported system.
Picture an actual physical shipping container. You can use a shipping container to transport anything that will fit inside. Since shipping containers come in standard sizes, they're easier to manage because they're consistent. It really doesn't matter what's inside the container because the infrastructure used to transport them is the same.
The ships are able to transport them, cranes are able to move them on and off ships, trucks are able to transport them, and none of that infrastructure is concerned with what's inside the container. When it comes to software development, it's not the initial development that's challenging; it's everything that comes after that.
Deploying code is a challenge. Managing dependencies is a challenge. Handling rollbacks is a challenge. All of these things are made more challenging because the development environments are seldom identical to production, and this is where Docker containers can help. Docker containers are similar to their physical namesake.
They allow you to take whatever software you need to run, bundle it up into a consistent format, and run it on any infrastructure that knows how to handle a container. The end result is kind of like having a single executable for all of your code. It's similar to an app on your phone. All of the code is bundled up into a single unit, and it's going to run the same way on my phone and yours.
With Docker, regardless of the programming language you use or the Linux distribution your code runs on, you can wrap it all up into one unit called a container; and the container knows how to run your app. If your app relies on a specific version of ImageMagick, then you can include it in the container.
Then any time you run that container, you know that you have the correct version. If later you need to update the version of ImageMagick, then you create a new container with whatever version you need, and any time you run that container, it's going to run correctly because it has everything it needs inside.
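To make that idea concrete, here's a minimal sketch of what a container definition pinning a dependency might look like. This is not from the lecture: the base image tag, the exact ImageMagick version string, and the application paths are all illustrative placeholders.

```dockerfile
# Hypothetical sketch: pin a specific ImageMagick version on a Debian base.
# The version string is a placeholder; substitute whatever your app requires.
FROM debian:bookworm-slim

RUN apt-get update \
 && apt-get install -y --no-install-recommends "imagemagick=8:6.9.11.60*" \
 && rm -rf /var/lib/apt/lists/*

# Bundle the application code into the same unit as its dependencies.
COPY app/ /app/
CMD ["/app/run.sh"]
```

Because the version is pinned in the image itself, rebuilding with a different version means publishing a new image; every run of a given image uses exactly the version baked into it.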
Having your code run inside of a container means that it's isolated from other processes and other containers. So each container can have its own process space, its own network stack, resource allocation, et cetera. Docker containers are often compared to virtual machines because VMs are familiar to most IT people.
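The isolation and resource allocation mentioned here are exposed as flags on `docker run`. A sketch, assuming Docker is installed; "my-app" is a hypothetical image name:

```shell
# Run a container detached, with its own network stack and capped resources.
docker run --detach \
  --network bridge \
  --memory 256m \
  --cpus 0.5 \
  my-app
# --network gives the container its own network stack (bridge is the default),
# --memory caps its RAM at 256 MB, and --cpus caps it at half a CPU core.
```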
However, they are pretty different. When you run a virtual machine, the OS that you're using in the VM is running on virtualized hardware. That means that you're using software to emulate hardware and then running a full OS on top. Now virtual machines are great; however, they can be a bit slow to start up, they can consume a lot of resources that you might not need, and they require patching.
Also, they can take up a lot of space on disk, especially considering they contain thousands of files that your app really doesn't need. Here's a diagram from Docker's documentation showing three applications, each running in a VM. So you could imagine that the first one is CentOS, the second is Ubuntu, and the third is Debian.
Each OS is running on the hypervisor and consuming a lot of system resources. Now here's a look at how Docker is different. Notice the host OS and then the Docker engine here in place of the guest OS. So then you have your binaries and libraries that your application needs followed by your application code.
So what Docker is doing is it's cutting out the guest OS, allowing you to bundle up the system libraries, any binaries that your app needs, as well as your code, all into one unit that can be executed via the Docker engine. Where VMs run the entire OS, Docker shares the kernel with the host OS. This means that containers can start up about as quickly as any other process since they really don't contain all of the files required to run the entire OS, and they don't consume all of the resources that a guest OS does.
So it allows you to run more of them on the same hardware. In Linux, everything is just a file, which means the difference between Ubuntu and CentOS is really just a set of files. By files, I mean things like binaries for package management, configuration files, different services, et cetera. With Docker, you use the kernel of the host OS.
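One quick way to observe that shared kernel yourself, assuming Docker is installed, is to compare the kernel release inside and outside a container:

```shell
# Both commands report the same kernel release, because the container
# uses the host's kernel rather than booting its own.
uname -r
docker run --rm ubuntu uname -r
```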
However, because the difference between distributions is really just different files, you can create Docker containers based on the different distributions. So if your application requires some libraries that run on Ubuntu, then you base your container on Ubuntu. This means you can bundle up your code, as well as just the system libraries that you need, into one container.
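As a sketch of that idea, choosing a distribution is just choosing the base image in the `FROM` line; everything else about the container definition can stay the same. The image tags, package, and paths here are illustrative, not from the lecture:

```dockerfile
# Hypothetical sketch: base the container on Ubuntu because the app
# needs Ubuntu-packaged system libraries. Swapping the FROM line to
# another distribution (e.g. debian or rockylinux) changes only the
# set of files the container starts from.
FROM ubuntu:22.04

RUN apt-get update \
 && apt-get install -y --no-install-recommends python3 \
 && rm -rf /var/lib/apt/lists/*

COPY app/ /app/
CMD ["python3", "/app/main.py"]
```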
Once all of your code and dependencies are in a container, they're going to run the same way anywhere because everything required to have the code run is inside the container. So if you use Docker containers for all of your applications, then that's going to allow you to standardize on how you deploy all of your applications.
All right, admittedly, this is an intentionally simplistic explanation of Docker. However, it should serve as a solid enough intro that we can build on it in the following lessons. So if you're ready to dive a little bit deeper, we'll cover the architecture of Docker in the next lesson. I'll see you there.
Ben Lambert is a software engineer and was previously the lead author for DevOps and Microsoft Azure training content at Cloud Academy. His courses and learning paths covered Cloud Ecosystem technologies such as DC/OS, configuration management tools, and containers. As a software engineer, Ben’s experience includes building highly available web and mobile apps. When he’s not building software, he’s hiking, camping, or creating video games.