Docker Containers Webinar, Part 1: How They Work, from Idea to Dev

On October 19, I held Part 1 of a three-part webinar series on Docker. For those of you who could not attend, this post summarizes the webinar material, along with some additional items I’ve added based on the Q&A session. Finally, I’ll highlight some of the best audience questions and wrap up with our plans for Part 2. You can watch Part 1 here.

Docker & Containers

I’m guessing that you’ve heard about Docker by now. Docker is taking the industry by storm by making container technology accessible to IT professionals. Let’s start with the basics: What is Docker, what are Docker containers, and how are they related?

Docker is one part of the suite of tools provided by Docker Inc. to build, ship, and run Docker containers. Docker containers start from Docker images. A Docker image includes everything needed to start a process in an isolated container: the source code, supporting libraries, and any other binaries the process requires. A Docker container is a running instance of a Docker image. Containers build on Linux kernel features, namely cgroups (control groups) and namespaces (originally exposed through LXC, Linux Containers), to fully isolate the container from other processes (or containers) running on the same kernel. That’s a lot to unpack. Here’s how Docker Inc. describes Docker:

“Docker containers wrap a piece of software in a complete filesystem that contains everything needed to run: code, runtime, system tools, system libraries – anything that can be installed on a server. This guarantees that the software will always run the same, regardless of its environment.” — What’s Docker?
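Here is a minimal sketch of that image-to-container workflow. The Dockerfile contents and the my-app image name are illustrative; any base image and startup command would work the same way:

    # Describe the image: base layer, files to copy, and the process to start
    # (hello.sh and the alpine base image are placeholders for your own app)
    cat > Dockerfile <<'EOF'
    FROM alpine:3.4
    COPY hello.sh /hello.sh
    CMD ["/bin/sh", "/hello.sh"]
    EOF

    docker build -t my-app .    # build an image from the Dockerfile
    docker run --rm my-app      # start (and clean up) a container from that image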

Docker Container Examples

Here are some real-world examples. Say your team is building several different web applications, each likely written in a different language. One application may be written in Ruby and another in Node.js.

Each application requires its own system packages to compile things like native libraries. Deploying these kinds of polyglot applications makes infrastructure more complex. Docker solves this problem by allowing each team to package its entire application as a Docker image. That image can then be used to start the application, and it runs the same way regardless of the environment. The benefits are a clean hand-off between development and production (build images, then deploy them), development and production parity, and infrastructure standardization.

Best of all, each Docker container is fully isolated from the others, so engineers can allocate more or fewer compute resources (such as CPU, memory, or I/O) to individual containers. This ensures that each Docker container gets exactly the resources it requires.
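As a sketch, resource limits are set with flags on docker run; the values and the my-app image name below are just examples:

    # Cap the container at 512 MB of RAM and give it a reduced CPU share
    docker run --memory 512m --cpu-shares 512 my-app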

First-time users and those considering Docker (or any other container technology) ask the question: What’s the difference between containers and virtual machines?

Containers vs. Virtual Machines

Naturally, this question came up in the webinar. I answered it in the Q&A on the community forum as well:

“Docker’s about page has a good summary. Docker containers (and other container technologies) work by isolating processes and their resources using kernel features. This allows running multiple containers on a single kernel. Virtual machines are different. In this scenario there are multiple independent kernels running on a single hypervisor. Each kernel running on the hypervisor sees a complete set of virtualized hardware. Nothing is virtualized in Docker’s case. Containers and VMs also have different compute footprints, notably in memory. Containers use less memory because they don’t need to run an entire operating system. Finally, containers are also intended to run a single process. Virtual machines, on the other hand, can run many processes. Docker focuses on operating system virtualization while virtual machines focus on hardware virtualization.”

In summary, containers run a single process, while virtual machines may run any number of processes. Containers share a single kernel, while each virtual machine runs its own kernel on a hypervisor. Containers require less memory because you don’t need to allocate memory to a completely separate kernel. Both technologies allow resource control.
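A quick way to see the shared-kernel point (assuming Docker is installed and the alpine image can be pulled): the kernel reported inside a container matches the Docker host’s.

    # Both commands report the same kernel version, because containers
    # share the host kernel rather than booting their own
    uname -r
    docker run --rm alpine uname -r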

Use Cases for the Development Phase

Building software is one of the most complex human activities. It is constantly changing and full of complications. Complexity multiplies when engineers use multiple languages and data stores. Workflows become more complicated, and bootstrapping new team members never goes as expected. Containers can be applied to the software development phase for drastic productivity increases, especially in polyglot teams. Docker is a great tool to leverage during the development phase. Here are some examples:

  • Automating development environments. Say you’re building a C program. You’ll need a bunch of libraries and other things installed on the system. This can be packaged as a Dockerfile and committed to source control. As a result, every team member will have the same environment independent of their own system.
  • Managing data stores. Perhaps you have one project that depends on database version A while another project runs on version B. Running both versions side by side may not be possible with your package manager. However, it’s trivial to start a container for each version and point each application at the right container (see the sketch after this list).
  • Improving cross-OS development. Consider a team using Linux, OSX, and Windows. Building the application natively on each platform creates many problems. Instead, if you package the application with a Dockerfile, each team member always runs the same thing.
  • Development & production parity. Build and use an image in development. Then use it for staging and production. You can be certain that the same code is running the same way.
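Here is a sketch of the data-store case. The PostgreSQL versions and host ports are illustrative; the point is that each version lives in its own container:

    # Run two database versions side by side, each on its own host port
    docker run -d --name pg-a -p 5432:5432 postgres:9.4
    docker run -d --name pg-b -p 5433:5432 postgres:9.5

    # Point project A at localhost:5432 and project B at localhost:5433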

Use Cases for the Deployment Phase

Building software is only half the battle. After we’ve created it, we’ve got to deploy it. This is where containers really shine. I’m a bit production-biased these days, so I’ll list the most important (and my favorite!) point first:

  • Standardizing deployment infrastructure. This one is massive! DevOps and traditional teams can build standardized infrastructure to run and scale any application. Even if a new language comes out, it’s no problem. Deploy with Docker and it doesn’t matter what’s inside the container. Running and orchestrating containers in production is the hottest topic right now. Watch this space.
  • Isolating CI builds. CI systems can be fickle. Each project may change the build machine in some way: you may need to install some random software or drop artifacts everywhere. Don’t even get me started on project dependencies. With containers, all of these problems are a thing of the past. Run each build in an ephemeral container and throw it away afterward (see the sketch after this list). No fuss, no muss.
  • Testing new versions. It’s a happy day: the newest version of Language X was just released and it’s time to migrate. You just want to test it out, so you set up a virtual machine to avoid breaking your existing setup. That’s a resource-heavy and time-consuming process. Docker makes this easy: simply change the image tag from language:x to language:y.
  • Distributing software. You’ve just finished your tool in language X. Unfortunately, your tool has a ton of dependencies that your users may not be knowledgeable enough to install. Build a Docker image and push it to a Docker registry. Now anyone can pull down your image and run your software. This is especially nice for handing builds over to your QA team.
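As a sketch of the CI case (the node:6 image and npm test command are placeholders for whatever your project uses):

    # Run the test suite in a throwaway container; --rm removes it afterward,
    # so nothing the build installs can leak onto the CI machine
    docker run --rm -v "$(pwd)":/usr/src/app -w /usr/src/app node:6 npm test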

Installation & Toolchain

Docker can be installed on Windows, OSX, and Linux systems. The Windows and OSX versions run a small Linux virtual machine with the Docker daemon, and the Docker client is configured to talk to that virtual machine. On Linux, the distribution’s package manager makes it easy to install Docker. Once Docker is installed, you can start using the larger Docker toolchain components.

Everything is built on top of the Docker Engine. The Docker Engine is the daemon running on a computer that manages all containers. The docker command is a client: it makes API requests to the Docker Engine over HTTP. This means that Docker follows a client/server model.
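You can see the client/server split directly; docker version prints details for both the client and the Engine it is connected to:

    # The CLI is just a client; this reports versions for both the
    # client and the Docker Engine (server) it talks to
    docker version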

Next comes the Docker Registry. The Docker Registry is an image store: users push images to a registry so that other users can pull them onto their own installations. You can use the official registry (Docker Hub) for distributing public images, with paid plans available if you need private images, or you can host your own Docker registry. The Docker community maintains a set of official images, including those for databases like MySQL, PostgreSQL, and MongoDB, and for many languages. Odds are, there is an official image for your use case.
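The basic registry workflow is pull, tag, push. The myuser/my-app repository name below is illustrative, and pushing to Docker Hub also requires docker login:

    docker pull postgres                  # pull an official image
    docker tag my-app myuser/my-app:1.0   # name a local image for a registry
    docker push myuser/my-app:1.0         # push it so others can pull it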

Docker Compose is a tool for developing and shipping multi-container applications based on a configuration file. You’ll definitely come into contact with this common tool. Docker Compose does all the heavy lifting and makes it easier to share and develop more complex applications.
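Here is a sketch of what that configuration file looks like, written via a heredoc to keep everything in the shell. The service names, ports, and the postgres image are illustrative:

    cat > docker-compose.yml <<'EOF'
    version: '2'
    services:
      web:
        build: .             # build the web service from the local Dockerfile
        ports:
          - "8080:8080"
        depends_on:
          - db
      db:
        image: postgres:9.5  # second container for the data store
    EOF

    docker-compose up        # build and start both containers together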

Docker Machine is a tool for bootstrapping Docker hosts. A Docker host is a machine that runs the Docker Engine. Docker Machine can create machines on cloud providers like AWS, Azure, GCP, and Rackspace. It can also create “local” machines using VirtualBox or VMware.
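For example, a local Docker host can be bootstrapped with the VirtualBox driver (the machine name dev is arbitrary):

    docker-machine create --driver virtualbox dev   # provision a new Docker host
    eval "$(docker-machine env dev)"                # point the docker CLI at it
    docker ps                                       # now talks to the Engine on "dev"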

It’s hard to cover these tools well in a text format. Therefore, I recommend that you check out the Introduction to Docker course or watch the demo in the webinar. Both of these resources demonstrate basic Docker functionality and show how to use Docker Compose to build a multi-container application.

Part 2: From Dev to Production

The first session introduced the Docker concept and how to develop applications using Docker. The next session will focus on deploying Docker applications. I’ll cover production orchestration tools and wrap up with a cool demo on creating a multi-stage application with Docker Compose and Docker Machine. The webinar is currently planned for November, so stay tuned for the announcement. I hope to see you there!
