The Docker Containers Webinar: on October 19, I held Part 1 of a three-part webinar series on Docker. For those of you who could not attend, this post summarizes the webinar material. It also includes some additional items that I’ve added based on the Q&A session. Finally, I will highlight some of the best audience questions and wrap up with our plans for Part 2. You can watch Part 1 here.
Docker & Containers
I’m guessing that you’ve heard about Docker by now. Docker is taking the industry by storm by making container technologies accessible to IT professionals. First of all, let’s start with the basics: What is Docker, what are Docker containers, and how are they related?
Docker is one part of the suite of tools provided by Docker Inc. to build, ship, and run Docker containers. Docker containers start from Docker images. A Docker image includes everything needed to start a process in an isolated container: the source code, supporting libraries, and any other binaries the process requires. A Docker container is a running instance of a Docker image. Under the hood, Docker builds on Linux kernel features such as cgroups (control groups) and namespaces (the same primitives used by LXC, or Linux Containers) to fully isolate the container from other processes (or containers) running on the same kernel. That’s a lot to unpack. Here’s how Docker Inc. describes Docker:
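To make "everything needed to start the process" concrete, here is a minimal, hypothetical Dockerfile for a Node.js application. The base image, file names, and start command are illustrative:

```dockerfile
# Start from an official Node.js base image (runtime + system libraries)
FROM node:4

# Copy the application source and install its dependencies into the image
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .

# The single process this container runs when started
CMD ["node", "server.js"]
```

Building this file with `docker build` produces an image; `docker run` starts a container from it.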
“Docker containers wrap a piece of software in a complete filesystem that contains everything needed to run: code, runtime, system tools, system libraries – anything that can be installed on a server. This guarantees that the software will always run the same, regardless of its environment. — What’s Docker?“
Docker Container Examples
Here are some real-world examples. Say your team is building several different web applications, which are most likely written in different languages. One application may be written in Ruby and another in Node.js.
Each application requires its own system packages, compiler toolchain, and supporting libraries. Deploying these kinds of polyglot applications makes infrastructure more complex. Docker solves this problem by allowing each team to package its entire application as a Docker image. That image can then be used to start the application, and it runs the same regardless of the environment. The benefits are a clean hand-off between development and production (build images, then deploy them), development and production parity, and infrastructure standardization.
Best of all, each Docker container is fully isolated from the others, so engineers can allocate more or fewer compute resources (such as CPU, memory, or I/O) to individual containers. This ensures that each Docker container has exactly the resources it requires.
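Resource allocation is done with flags on the docker run command. A sketch, where the image name my-web-app is made up:

```shell
# Cap the container at 512 MB of RAM and give it a reduced CPU weight
# (--cpu-shares is relative: 512 is half the default weight of 1024)
docker run --memory=512m --cpu-shares=512 my-web-app
```

Other flags such as --blkio-weight control I/O priority in the same way.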
First-time users and those considering Docker (or any other container technology) ask the question: What’s the difference between containers and virtual machines?
Containers vs. Virtual Machines
Naturally, this question came up in the webinar. I answered it in the Q&A on the community forum as well:
“Docker’s about page has a good summary. Docker containers (and other container technologies) work by isolating processes and their resources using kernel features. This allows running multiple containers on a single kernel. Virtual machines are different. In this scenario there are multiple independent kernels running on a single hypervisor. Each kernel running on the hypervisor sees a complete set of virtualized hardware. Nothing is virtualized in Docker’s case. Containers and VMs also have different compute footprints, notably in memory: containers use less memory because they don’t need to run an entire operating system. Finally, containers are also intended to run a single process. Virtual machines, on the other hand, can run many processes. Docker focuses on operating system virtualization while virtual machines focus on hardware virtualization.”
In summary, containers run a single process; virtual machines may run any number of processes. Containers share a single kernel; virtual machines each run their own kernel on a hypervisor. Containers require less memory because you don’t need to allocate memory to a completely separate kernel. Both technologies allow resource control.
Use Cases for the Development Phase
Building software is one of the most complex human activities. It is constantly changing and full of complications. Complexity multiplies when engineers use multiple languages and data stores. Consequently, workflows become more complicated, and bootstrapping new team members never goes as expected. Containers may be applied to the software development phase for drastic productivity increases, especially in polyglot teams. Docker is a great tool to leverage during the development phase. Here are some examples:
- Automating development environments. Say you’re building a C program. You’ll need a bunch of libraries and other things installed on the system. This can be packaged as a Dockerfile and committed to source control. As a result, every team member will have the same environment independent of their own system.
- Managing data stores. Perhaps you have one project that depends on database version A. Another project runs on version B. Running both versions may not be possible with your package manager. However, it’s trivial to start a container for version A and B, and then point the application to talk to the containers.
- Improving cross-OS development. Consider a team using Linux, OS X, and Windows. Building the application natively on each platform creates many problems. Instead, if you package the application as a Dockerfile, each team member can always run the same thing.
- Development & production parity. Build and use an image in development. Then use it for staging and production. You can be certain that the same code is running the same way.
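The C-program example above might be captured in a Dockerfile like this. The base image and package names are illustrative, not a prescription:

```dockerfile
# Every team member builds against the same OS and library versions
FROM ubuntu:14.04

# Install the compiler toolchain and libraries the program needs
RUN apt-get update && apt-get install -y \
    build-essential \
    libcurl4-openssl-dev

# Mount or copy the source here and build it
WORKDIR /src
CMD ["make"]
```

Commit this file to source control, and "works on my machine" problems largely disappear.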
Use Cases for the Deployment Phase
Building software is only half the battle. After we’ve created it, we’ve got to deploy it. This is where containers really shine. I’m a bit production-biased these days, so I’ll list the most important (and my favorite!) point first:
- Standardizing deployment infrastructure. This one is massive! DevOps and traditional teams can build standardized infrastructure to run and scale any application. Even if a new language comes out, it’s no problem. Deploy with Docker and it doesn’t matter what’s inside the container. Running and orchestrating containers in production is the hottest topic right now. Watch this space.
- Isolating CI builds. CI systems can be fickle. Each project may change the machine in some way: You may need to install some random software or drop artifacts everywhere. Don’t even get me started on project dependencies. With containers, all of these problems are a thing of the past. Run each build in an ephemeral container and throw it away afterward. No fuss, no muss.
- Testing new versions. It’s a happy day. The newest version of Language X was just released and it’s time to migrate. You just want to test it out, so you set up a virtual machine to avoid breaking your existing setup. This is a resource-heavy and time-consuming process. Docker makes this easy. Simply change the image tag from language:x to language:y.
- Distributing software. You’ve just finished your tool in language X. Unfortunately, your tool has a ton of dependencies that your users may not be knowledgeable enough to install. Build a Docker image and push it to a Docker registry. Now anyone can pull down your image and run your software. This is especially nice for handing builds over to your QA team.
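The CI and version-testing points above boil down to a couple of commands. A sketch, where the image tags and the rake test build step are hypothetical:

```shell
# Run the build in a throwaway container; --rm deletes it when the build exits,
# so nothing leaks onto the CI machine between builds
docker run --rm -v "$PWD:/src" -w /src ruby:2.2 rake test

# Trying a new language version is just a tag change
docker run --rm -v "$PWD:/src" -w /src ruby:2.3 rake test
```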
Installation & Toolchain
Docker can be installed on Windows, OS X, and Linux systems. The Windows and OS X versions run the Docker daemon inside a lightweight Linux virtual machine, and the Docker client is configured to talk to that virtual machine. On Linux, the distribution’s package manager makes it easy to install Docker directly. Once Docker is installed, you can start using the larger Docker toolchain components.
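On most Linux distributions, installation is a one-liner using Docker's official convenience script (as always with piped scripts, review it before running):

```shell
# Download and run Docker's installation script
curl -fsSL https://get.docker.com/ | sh

# Verify that both the client and the daemon respond
docker version
```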
Everything is built on top of the Docker Engine. The Docker Engine is the daemon running on a computer that manages all containers. The docker command is a client. It makes API requests to the Docker Engine. This means that Docker follows a client/server model. They communicate over HTTP.
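You can see this client/server split by talking to the Engine's API directly. On Linux the daemon listens on a Unix socket by default (requires curl 7.40 or newer for the --unix-socket flag):

```shell
# The docker CLI is a convenience wrapper around HTTP calls like this one
curl --unix-socket /var/run/docker.sock http://localhost/version
```

The response is the same JSON the docker version command parses and prints.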
Next comes the Docker Registry, an image store. Users push images to a registry so that other users can pull them down to their own installations. You may use the official registry for distributing public images; paid plans are available if you need private images. You can also host your own Docker registry. The Docker community maintains a set of official images, including those for databases like MySQL, PostgreSQL, and MongoDB, and many programming languages. Odds are, there is an official image for your use case.
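Pushing and pulling looks like this, where the user and image names are placeholders:

```shell
# Tag a local image for your account on the registry, then push it
docker tag my-app myuser/my-app:1.0
docker push myuser/my-app:1.0

# Anyone with access can now pull and run the exact same image
docker pull myuser/my-app:1.0
```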
Docker Compose is a tool for developing and shipping multi-container applications based on a configuration file. You’ll definitely come into contact with this common tool. Docker Compose does all the heavy lifting and makes it easier to share and develop more complex applications.
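A minimal docker-compose.yml for a two-container application might look like the sketch below; the service names, images, and port are illustrative:

```yaml
# Start both containers with a single `docker-compose up`
web:
  build: .
  ports:
    - "8080:8080"
  links:
    - db
db:
  image: postgres:9.4
```

Compose builds the web image, starts the database, and wires the two together so the application can reach the db container by name.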
Docker Machine is a tool for bootstrapping Docker hosts. A Docker host is a machine that runs the Docker Engine. Docker Machine can create machines on cloud providers like AWS, Azure, GCP, and Rackspace. It can also create “local” machines using VirtualBox or VMware.
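Creating a local Docker host takes two commands; the machine name dev is arbitrary:

```shell
# Create a VirtualBox VM running the Docker Engine
docker-machine create --driver virtualbox dev

# Point the local docker client at the new machine, then use it as usual
eval "$(docker-machine env dev)"
docker ps
```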
It’s hard to cover these tools well in a text format. Therefore, I recommend that you check out the Introduction to Docker course or watch the demo in the webinar. Both of these resources demonstrate basic Docker functionalities and how to use Docker Compose to build a multi-container application.
Part 2: From Dev to Production
The first session introduced the Docker concept and how to develop applications using Docker. The next session will focus on deploying Docker applications. I’ll cover production orchestration tools and wrap up with a cool demo on creating a multi-stage application with Docker Compose and Docker Machine. The webinar is currently planned for November, so stay tuned for the announcement. I hope to see you there!