Docker deployment – Webinar Series Part 2: From Dev to Production

I recently held Part 2 of a three-part webinar series on Docker. For those of you who could not attend, this post summarizes the main topics we covered, along with some additional items from the Q&A session. You can watch Part 2 and read the detailed Q&A on the community forum.

In Part 1 of the webinar, I talked about Docker in the abstract and showed some concrete applications for the development phase. In Part 2, we focused on deploying Docker containers to production. At this point, we have solved our development problems. The next frontier is production, and for that, we have many options. The goal in Part 2 is to help you choose the best option for your use case. To do so, we will introduce the options along with their pros and cons, so you know what is out there and can choose the one that makes sense for you.

Getting started in production: Docker Deployment made easy

You will need to get a feel for the size of your application and how much time and energy you can invest in your production infrastructure. Here are some questions to ask yourself:

  • How many applications are being deployed?
  • Are these applications dependent on other applications?
  • How many containers are part of each application?
  • Am I running a full-fledged distributed system?
  • Is it OK to run on a cloud provider or must it be on-prem?
  • How does geographical location impact my application?
  • Do I want a hosted solution?

I believe that the answers fall into a few buckets. You may have a simple application with a web server and a database. Maybe you have a few of these applications. Maybe you have a few applications with a few processes that don’t depend on other applications (i.e., not a service-oriented architecture or microservices). Or, perhaps you have so many services that you’re managing a large distributed system.
Here are our contenders:

Golden Images

With Golden Images, the general idea is to build everything into a machine image (a VMware image or an Amazon Machine Image) and deploy that. This creates an inherently immutable infrastructure, which also scales horizontally. If you’re using AWS, you can take your AMI, throw it in an ASG behind an ELB, and off you go. Golden Images work especially well with Docker: instead of building separate images for a number of different tech stacks, engineers can build one image that can run any Docker image. All in all, it’s a nice solution for many teams because it is flexible and doesn’t require jumping into a clustered solution.
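To make the idea concrete, here is a minimal sketch of a user-data script an instance launched from a golden AMI might run at boot. It assumes Docker is already baked into the image; the image name, tag, and ports are hypothetical placeholders.

```shell
#!/bin/sh
# Boot script for an instance launched from a "golden" AMI with Docker
# pre-installed. The application image and ports below are placeholders.
set -e

APP_IMAGE="myorg/webapp:1.4.2"   # a pinned tag keeps the deployment immutable

# Fetch the application image and run it, restarting on failure.
docker pull "$APP_IMAGE"
docker run -d --restart=always -p 80:8080 "$APP_IMAGE"
```

Because the host image never changes and the container tag is pinned, rolling out a new version means building a new image and replacing instances, not mutating running ones.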


Heroku

Heroku supports Docker. It is one of the oldest and most mature PaaS (Platform as a Service) offerings, with a refined UX and many integrations with third-party services. Heroku provides easy access to logs, environment variables, and the other things required to debug and operate production software. It has first- and third-party DBaaS offerings, so it may be a one-stop shop for your production needs. Unfortunately, Heroku can be costly (though perhaps cheaper than carrying the operational responsibilities yourself), and it is off the table if you need an on-premises solution.

Docker Cloud

Docker Cloud is the best way to deploy and manage Dockerized applications. Docker Cloud makes it easy for new Docker users to manage and deploy the full spectrum of applications, from single container apps to distributed microservices stacks, to any cloud or on-premises infrastructure. — Docker Inc.

Docker Cloud is a curious offering. It’s almost a PaaS in that it is a managed service, but you must provide your own compute resources (physical machines, virtual machines, or cloud instances). Like Heroku, Docker Cloud provides a UI and integrations for Docker deployment pipelines. Experienced Docker users, however, may find it adds little and can pass on this option.

Docker Swarm

Docker Swarm is native clustering for Docker. It turns a pool of Docker hosts into a single, virtual Docker host. Because Docker Swarm serves the standard Docker API, any tool that already communicates with a Docker daemon can use Swarm to transparently scale to multiple hosts. — Docker Inc.

After a major redesign, Docker Swarm is now officially included as an opt-in feature in Docker 1.12. It’s also the first orchestration tool in the list. Docker Swarm offers a few key upsides. It works with any standard Docker tooling (such as the Docker client or Docker Compose). Swarm runs anywhere as well: you can create a swarm from a mix of physical and virtual machines, even machines spread across cloud providers. Unfortunately, Swarm is a bit new compared to the other contenders on the list.
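Getting a swarm running takes only a few commands. The sketch below assumes Docker 1.12+ on each node; the IP address is a placeholder for your first node.

```shell
# On the first node: initialize a swarm (the node becomes a manager).
docker swarm init --advertise-addr 10.0.0.1

# On each additional node: join using the token printed by `swarm init`.
# docker swarm join --token <worker-token> 10.0.0.1:2377

# From a manager: run a replicated service across the swarm.
docker service create --name web --replicas 3 -p 80:80 nginx

# Inspect what is running.
docker service ls
```

Because Swarm speaks the standard Docker API, these are the same `docker` commands you already use in development, just pointed at a cluster.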

Docker Datacenter / Universal Control Plane

Docker Datacenter delivers an integrated platform for developers and IT operations to collaborate in the enterprise software supply chain. Bringing security, policy and controls to the application lifecycle without sacrificing any agility or application portability. Docker Datacenter integrates to your business – from on premises and VPC deployment models, open APIs and interfaces, to flexibility to support a wide variety of workflows. — Docker Inc.

Docker Datacenter is Docker Inc.’s enterprise offering built on top of Docker Swarm. It includes Docker Trusted Registry and a UI for managing containers, users, and role-based permissions. The current version (1.x) runs an older version of Swarm; this should be addressed in the next major release. I would recommend waiting for that release, since Docker Inc. is putting all of its effort behind the new Swarm implementation.

AWS Elastic Container Service (ECS)

Amazon EC2 Container Service (Amazon ECS) is a highly scalable, fast, container management service that makes it easy to run, stop, and manage Docker containers on a cluster of Amazon Elastic Compute Cloud (Amazon EC2) instances. Amazon ECS lets you launch and stop container-based applications with simple API calls, allows you to get the state of your cluster from a centralized service, and gives you access to many familiar Amazon EC2 features. — AWS

ECS is AWS’s offering for people who already use AWS and want something integrated with other AWS products. ECS, like most AWS products, is a bit low level. It tends to function well, but the UX is not great. Luckily, since everything is API driven, third-party tools (like Empire) can build on it to fill the UX gap, from everyday container workflows to bootstrapping new clusters. Note that the Docker client cannot be used directly with ECS.
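To give a feel for the API-driven workflow, here is a minimal sketch using the AWS CLI. It assumes the CLI is configured with credentials, and the cluster name, service name, and task-definition file are hypothetical.

```shell
# Create an ECS cluster to run tasks on.
aws ecs create-cluster --cluster-name demo-cluster

# Register a task definition describing the container(s) to run.
# webapp-task.json is a placeholder for your own task definition file.
aws ecs register-task-definition --cli-input-json file://webapp-task.json

# Run the task as a long-lived service with two copies.
aws ecs create-service --cluster demo-cluster --service-name webapp \
  --task-definition webapp --desired-count 2
```

Everything here can also be done from the AWS console, but the CLI/API surface is what tools like Empire automate on your behalf.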

Kubernetes / Google Container Engine (GKE)

Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. — Kubernetes

Kubernetes is a container-native application orchestration system built on 15 years of experience running production workloads at Google. You can run a single-container application on Kubernetes, or an application spanning 10, 100, or 1,000 containers; however, it’s primarily designed to target larger systems. Kubernetes can also serve as a lower-level abstraction on which to build higher-level abstractions. (OpenShift is actually built on Kubernetes.) You can run Kubernetes on a mix of virtual and physical hardware. If that’s not your bag, then you can use Google Container Engine (GKE) for a hosted solution.
Bootstrapping and maintaining a distributed system to run a distributed system is no small task, so the hosted option is appealing. However, keep one note of caution in mind: because Kubernetes is a very active project under heavy development, you should be prepared for some breaking changes if you’re riding on the edge. Kubernetes supports other container runtimes as well, and it cannot be used directly with the Docker client. Check out our previous webinar on Google Cloud and Google Container Engine to learn more.
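For a first taste, here is a minimal sketch of launching a pod and exposing it as a service with `kubectl`. It assumes a running cluster (e.g. GKE) and uses `nginx` as a stand-in image; the deployment name is a placeholder.

```shell
# Create a deployment that runs a single nginx pod.
kubectl run hello --image=nginx --port=80

# Expose the deployment as a service with an external load balancer.
kubectl expose deployment hello --type=LoadBalancer --port=80

# Watch the pod and service come up.
kubectl get pods,services
```

On GKE, the `LoadBalancer` service type provisions a cloud load balancer automatically; on bare metal you would typically use `NodePort` instead.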

Mesos / DCOS (DataCenter OS)

Apache Mesos abstracts CPU, memory, storage, and other compute resources away from machines (physical or virtual), enabling fault-tolerant and elastic distributed systems to easily be built and run effectively. — Apache Mesos

Mesos (and the ecosystem it supports) is the largest and most powerful distributed systems kernel. Functionality is provided via frameworks: the Marathon framework supports long-running processes (such as an API server), while Chronos supports scheduled, cron-style jobs. Mesos (even more so than Kubernetes) shines as a lower-level abstraction rather than a stand-alone tool. It also differs from Kubernetes in that it supports non-containerized tasks. DCOS is an entire layer built on top of Mesos primitives. DCOS offers a slick UX and one-click (or one-command) installs for things like Kafka, Storm, or Spark. Naturally, it can also do all the other things you’d expect from an orchestration system. Microsoft Azure offers hosted Mesos. Neither Mesos nor DCOS may be used directly with the Docker client.
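As a sketch of the one-command install experience, here is what the DCOS CLI workflow looks like. It assumes the DCOS CLI is installed and pointed at a running cluster; the app definition file name is hypothetical.

```shell
# One-command installs of entire data-infrastructure stacks.
dcos package install kafka
dcos package install spark

# Launch a long-running application via the Marathon framework.
# webapp.json is a placeholder for your own Marathon app definition.
dcos marathon app add webapp.json

# List running Marathon applications.
dcos marathon app list
```

Each `package install` deploys a framework onto the Mesos cluster, which then schedules its own tasks against the pooled resources.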


My recommendations for teams fall into four categories:

  • Heroku for small and non-distributed applications (Heroku’s features are really hard to beat)
  • Golden Images or something built on ECS (e.g. Empire/Convox) for small distributed applications or when you want to run your own infrastructure
  • Kubernetes for container native applications or large distributed systems (GKE if you don’t want to run an infrastructure)
  • Mesos / DCOS if you need to support an entire organization with a mix of containerized and non-containerized workloads

Check out the webinar video for more detail on my reasons behind these recommendations. You’ll also find a demo of creating your first Kubernetes pod and service.
Part 3 will wrap up the series by answering recurring questions from Parts 1 and 2 in a town hall forum. We’ll announce the date and time for our next event soon, so stay tuned on our webinars page. I hope you’ll join me! In the meantime, I’m more than happy to answer your questions about Docker (in dev, production, or anywhere in between).

Thank you for attending the Docker deployment webinar!


Cloud Academy