Docker Webinar Part 3: Production & Beyond

Last week, we wrapped up our three-part Docker webinar series. You can watch the sessions on the webinars page and find the slides on Speakerdeck. Part one introduced Docker, container technologies, and how to get started in your development environment, ending with a demo of Docker Compose for development environments. Part two covered production deployment options, including the different options in the orchestration space and other production concerns, and wrapped up with a short Kubernetes demo. Part three closed the series by providing some jumping-off points and addressing recurring questions from the previous sessions.

This post elaborates on some of the common questions around Docker and takes a look at what’s next. Let’s start with some of the more entry-level questions.

Docker webinar part 3: What is Docker and what should I do with it?

Docker is one part of the suite of tools provided by Docker Inc. to build, ship, and run containers. Docker containers start from Docker images. A Docker image includes everything needed to start a process in an isolated container: the application code, supporting libraries, and any other required binaries. Containers build on Linux kernel features such as cgroups (control groups) and namespaces to fully isolate the containerized process from other processes (or containers) running on the same kernel. In a nutshell, this allows you to distribute applications as self-contained images and run them on any system with the Docker daemon. This provides engineers with important benefits and new approaches.
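
To make this concrete, here is a minimal sketch of a Dockerfile for a hypothetical Python service (the image name, file names, and app.py are illustrative, not from the webinar):

    # Dockerfile: captures everything the process needs in one image
    FROM python:3
    WORKDIR /app
    COPY requirements.txt .
    RUN pip install -r requirements.txt    # bake supporting libraries into the image
    COPY . .
    CMD ["python", "app.py"]               # the single process this container runs

Building, shipping, and running then map directly onto the Docker CLI:

    docker build -t myorg/myapp:1.0 .    # build an image from the Dockerfile
    docker push myorg/myapp:1.0          # ship it to a registry
    docker run -d myorg/myapp:1.0        # run it on any host with the Docker daemon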

Docker and other container technologies are naturally suited to polyglot engineering teams. Docker provides a way to standardize development workflows and deployments, which also increases development and production parity.

Let’s circle back to some concrete things you can (and should) be using Docker for. It’s easy to get started with Docker in your development environment. Different projects naturally require different data stores, and even different versions of the same store. This problem is easily solved with Docker containers: simply start a container running, say, MySQL 5.6 for project A and MySQL 5.7 for project B. Docker Compose makes this easy enough, as the sketch below shows.
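
As a minimal sketch (project names, versions, and credentials here are illustrative), project A’s docker-compose.yml could pin its database version like this:

    # docker-compose.yml for project A -- pins the database to MySQL 5.6
    version: "2"
    services:
      db:
        image: mysql:5.6
        environment:
          MYSQL_ROOT_PASSWORD: example    # dev-only credentials
        ports:
          - "3306:3306"

Project B’s file would be identical except for image: mysql:5.7. Running docker-compose up in either project starts the right version, with no conflicting installs on the host.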

You can also start containerizing your development process and even your CI servers. Containerizing your development process has a few benefits. First up, the setup is tech-stack independent, apart from Docker (or any other tooling involved). Second, your setup is already moving toward infrastructure as code, since all dependencies are listed in the Dockerfile (and even dependent databases in a docker-compose.yml file, if you opt for that). Third, you can start building Docker images and use them to deploy to production and non-production environments. You can read up on more concrete examples in the part one wrap-up.

How is Docker different from Virtual Machines?

This is a common question, and here is the short answer. Docker runs all processes on a single kernel; virtualization runs multiple guest kernels on a single host via a hypervisor. Docker containers focus on running a single process; Virtual Machines run entire operating systems with multiple processes. Docker containers are also restricted to the host kernel: Docker running on Linux can only run Linux images, and Docker on Windows can only run Windows images.

Virtual Machines work the other way around: you may have a Windows host with a Linux guest, or vice versa. Virtual Machines also require more compute resources (CPU, memory, etc.) because each VM needs memory for an entire system and userland, while Docker containers share the host’s resources. Your machine may only have enough compute to run one VM at a time, but the same machine could run many more containers. The official Docker website explains this as well, and you can also get a free ebook that describes more of the differences (and similarities).
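
You can see the shared kernel for yourself on a Linux host with Docker installed (a quick sketch, not from the webinar):

    uname -r                           # kernel release on the host
    docker run --rm alpine uname -r    # the same kernel release, reported from inside a container

Both commands print the same kernel release, because a container is just an isolated process on the host kernel rather than a separate machine.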

Should I use Docker for Production?

This is a great question! The answer is, you guessed it: it depends. All engineering decisions are about tradeoffs. In our profession, it’s rare that any answer is an absolute yes (unless we’re talking about whether code should have tests). Deciding to build a production system on Docker is no different: there are many tradeoffs, and it may make sense in some situations and not others.

Start by considering the complexity of your application. A team building a single web application will be better off using Heroku (or similar) because it solves a lot of the same problems without introducing a significant new abstraction layer. In contrast, a team building a distributed system with a service-oriented or microservices architecture has different considerations: many components written in different languages, and a need to keep things manageable (and distributed systems definitely require management). Docker makes more sense for this team because it provides a standard infrastructure layer to support the system’s technical growth.

The decision whether to use Docker in production ultimately boils down to a few key factors: application scale (the number of independently deployed components and tech stacks), infrastructure (is there a hosted service, or do we need to roll our own? are there pre-existing requirements?), and the time and talent on hand. So, if this is right for your team, how do you put Docker in production?

How do I use Docker in Production?

This is probably the hottest question around containers and Docker right now. The short answer, and the easiest option, is to use Google Container Engine. This gives you immediate access to a production-ready, hosted Kubernetes cluster that you can deploy to.
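
If you want to try this, a cluster is only a couple of commands away (the cluster name and size are illustrative):

    gcloud container clusters create demo-cluster --num-nodes=3    # provision a hosted Kubernetes cluster
    gcloud container clusters get-credentials demo-cluster         # point kubectl at it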

The longer answer is that you should use an orchestration tool. Deploying containers is about solving problems at scale: not how to handle one container, but how to handle hundreds (or thousands) of containers and compose them into larger systems. Orchestration tools solve this problem by building clusters of compute resources (which may be Virtual Machines in the cloud, physical hardware, or both) and providing APIs to deploy, expose, and scale containers running on the cluster.
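
To make “deploy, expose, and scale” concrete, here is a minimal Kubernetes sketch (the app and image names are illustrative; the API version reflects Kubernetes at the time of writing):

    # deployment.yml: run 3 replicas of a containerized app on the cluster
    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: myapp
    spec:
      replicas: 3
      template:
        metadata:
          labels:
            app: myapp
        spec:
          containers:
          - name: myapp
            image: myorg/myapp:1.0
            ports:
            - containerPort: 8080

The cluster API then handles the rest:

    kubectl apply -f deployment.yml                                  # deploy
    kubectl expose deployment myapp --port=80 --target-port=8080     # expose
    kubectl scale deployment myapp --replicas=10                     # scale
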
While there are many orchestration tools in the ecosystem right now, there are a few key players that you should consider. I recommend checking out Kubernetes (my favorite for container/cloud-native applications), DC/OS (for containerized and non-containerized workloads), Docker Swarm Mode / Docker Datacenter (if you want a first-party offering and direct access to the Docker daemon), and AWS Elastic Container Service (for AWS-based companies who like first-party offerings).
You should look into all of the offerings in this space before deciding to roll your own. Odds are, you can bend an orchestration tool to fit your needs, and it will be better than anything you (or your team) would create. Take it from someone who knows. That said, you may need to roll your own in some scenarios, and using Docker does not magically negate past approaches: the golden image approach still works perfectly well. Check out part two for more in-depth information on production concerns.

What is the future for Docker?

My blog post from October covers container technologies other than Docker. Docker Inc. has announced that it is open sourcing “containerd,” an extraction of the container runtime from the larger Docker project. This bolsters the position I took in that post.

Right now, there is fierce competition in the production orchestration/deployment space. Communities are developing around each of the orchestration tools, each with different goals. Projects outside the official Docker Inc. umbrella are keen to create solutions that do not depend on the Docker runtime, but instead support runtimes with different technical values, separate from Docker Inc. This is why a separate “containerd” project in a neutral foundation is important: the open source community and businesses alike can build better products on top of it.

The future for Docker is clear to me. It will be orchestration-based, and the cloud providers that want to stay relevant will aggressively move into this space by providing turn-key solutions to this problem. The future will be containerized, and containers will no longer imply Docker. Instead, we’ll see a more polyglot world where container tools can target different container runtimes.

Well, that’s a wrap for the Docker webinar series! I hope that you enjoyed these sessions and learned something new about the technology, the ecosystem, and real-world applications. I had a blast in these sessions, especially answering audience questions. Stay tuned for our future webinars on all things containers, infrastructure, and DevOps.

Good luck out there, and happy shipping!

Cloud Academy