Docker Webinar Part 3: Production & Beyond

Last week, we wrapped up our three-part Docker webinar series. You can watch the sessions on the webinars page and find the slides on Speakerdeck. Docker Webinar part one introduced Docker, container technologies, and how to get started in your development environment. It ended with a demo of using Docker Compose for development environments. Docker Webinar part two covered production deployment options, including the different options in the orchestration space and other production concerns. It wrapped up with a short Kubernetes demo. Docker Webinar part three closed the series by providing some jumping-off points and discussing some of the recurring questions from the earlier sessions.

This post elaborates on some of the common questions around Docker and takes a look at what’s next. Let’s start by covering some of the more entry-level questions.

What is Docker and what should I do with it?

Docker is one part of the suite of tools provided by Docker Inc. to build, ship, and run containers. Docker containers start from Docker images. A Docker image includes everything needed to start a process in an isolated container: the application code, supporting libraries, and any other required binaries. A running container is an instance of an image. Docker containers build on Linux kernel features, namely control groups (cgroups) and namespaces, to isolate the container from other processes (or containers) running on the same kernel. (LXC, Linux containers, is an earlier userspace interface to these same kernel features, and early versions of Docker were built on it.) In a nutshell, this allows you to distribute applications as independent images and run them on any system with the Docker daemon. This provides engineers with important benefits and new approaches.
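As a minimal sketch of the image/container relationship (using the public nginx image purely as an example), you can pull one image and start any number of containers from it:

```sh
# Pull an image once; it contains everything the process needs.
docker pull nginx:1.11

# Each `docker run` starts a new, isolated container from that same image.
docker run -d --name web-1 nginx:1.11
docker run -d --name web-2 nginx:1.11

# List the running containers; both are ordinary processes on the host kernel.
docker ps
```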

Docker (like other container technologies) is naturally suited to polyglot engineering teams. Docker provides a way to standardize development workflows and deployments. This also increases development and production parity.

Let’s circle back to some concrete things you can and should be using Docker for. It’s easy to get started with Docker in your development environment. Different projects naturally require different data stores, and even different versions of the same store. This problem is easily solved with Docker containers: simply start a container running, say, MySQL 5.6 for project A and MySQL 5.7 for project B. Docker Compose makes this easy enough, as the sketch below shows. You can also start containerizing your development process and even your CI servers.
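For example, project A’s docker-compose.yml might pin its database image like this (the service name and password are hypothetical):

```yaml
# docker-compose.yml for project A
version: "2"
services:
  db:
    image: mysql:5.6              # project B's file would pin mysql:5.7 instead
    environment:
      MYSQL_ROOT_PASSWORD: secret # development-only credentials
    ports:
      - "3306:3306"               # expose MySQL to the host for local tooling
```

Running `docker-compose up -d` in each project starts that project’s own isolated database, so the two versions never conflict.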

Containerizing your development process has a few benefits. First, the setup is tech-stack independent apart from Docker (and any other tooling involved). Second, your setup is already moving toward infrastructure as code, since all dependencies are listed in the Dockerfile, and even dependent databases in a docker-compose.yml file if you opt for that. Third, you can start building Docker images and use them to deploy to production and non-production environments. You can read up on more concrete examples in the part one wrap-up.
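To make the second point concrete, here is a minimal Dockerfile sketch for a hypothetical Ruby web app; the stack and file names are assumptions, and the point is simply that every dependency is declared in one reviewable file rather than set up by hand:

```dockerfile
FROM ruby:2.3                     # pins the language runtime
WORKDIR /app
COPY Gemfile Gemfile.lock ./
RUN bundle install                # library dependencies installed at build time
COPY . .
CMD ["bundle", "exec", "rackup", "--host", "0.0.0.0"]
```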

How is Docker different from Virtual Machines?

This is a common question, and here is the short answer. Docker runs all processes on a single kernel. Virtualization runs multiple guest kernels on a single host via a hypervisor. Docker containers focus on running a single process. Virtual machines are for running entire operating systems and multiple processes. Docker containers are restricted to the host kernel. That means Docker running on Linux can only run Linux images, and Docker on Windows can only run Windows images.

Virtual machines are the other way around: you may have a Windows host with a Linux guest, or vice versa. Virtual machines also require more compute resources (CPU, memory, etc.) because each VM needs memory for an entire operating system and userland, while Docker containers share the host’s resources. Your machine may only have enough compute to run one VM at a time, but the same machine could run many more containers. The official Docker website explains this as well. You can also get a free ebook that describes more of the differences (and similarities).
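You can see the shared kernel for yourself. Assuming Docker is installed on a Linux host, both of these commands print the same kernel version, because the container never boots a kernel of its own (a VM would report its guest kernel instead):

```sh
uname -r                          # kernel version on the host
docker run --rm alpine uname -r   # same version, reported from inside a container
```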

Should I use Docker for Production?

This is a great question! The answer is, you guessed it: it depends. All engineering decisions are about tradeoffs. In our profession, it’s rare that any answer is an absolute yes (unless we’re talking about whether code should have tests). Deciding to build a production system on Docker is no different: there are many tradeoffs, and it may make sense in some situations and not in others.

Start by considering the complexity of your application. A team building a single web application will likely be better off using Heroku (or similar) because it solves a lot of the same problems without introducing a significant new abstraction layer. In contrast, a team building a distributed system with a service-oriented or microservices architecture has different considerations. That team has many components written in different languages and needs a way to keep things manageable (and distributed systems definitely require management). Docker makes more sense for this team because it provides a standard infrastructure layer that supports the system’s operational and technical growth.

The decision whether to use Docker for production ultimately boils down to a few key factors: application scale (the number of independently deployed components and tech stacks), infrastructure (is there a hosted service, or do you need to roll your own? are there pre-existing requirements?), and the time and talent on hand. So, if Docker is right for your team, how do you put it in production?

How do I use Docker in Production?

This is probably the hottest question around containers and Docker right now. The short answer, and the easiest option, is to use Google Container Engine. This gives you immediate access to a production-ready, hosted Kubernetes cluster that you can deploy to.

The longer answer is that you should use an orchestration tool. Deploying containers is about solving problems at scale: it’s not about handling one container, but about handling hundreds (or thousands) of containers and composing them into larger systems. Orchestration tools solve this problem by building clusters of compute resources (which may be virtual machines in the cloud, physical hardware, or both) and providing APIs to deploy, expose, and scale containers running on the cluster.
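To make that API concrete, here is a minimal sketch of a Kubernetes Deployment from that era; the names and image are hypothetical:

```yaml
# deployment.yml: ask the cluster to keep three copies of a container running
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.11
          ports:
            - containerPort: 80
```

Deploying, exposing, and scaling then map directly onto the orchestrator’s CLI:

```sh
kubectl apply -f deployment.yml                               # deploy
kubectl expose deployment web --port=80 --type=LoadBalancer   # expose
kubectl scale deployment web --replicas=10                    # scale
```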
While there are many orchestration tools in the ecosystem right now, there are a few key players you should consider. I would recommend checking out Kubernetes (my favorite for container/cloud-native applications), DC/OS (for containerized and non-containerized workloads), Docker Swarm Mode / Docker Datacenter (if you want a first-party offering and direct access to the Docker daemon), and AWS Elastic Container Service (for AWS-based companies that like first-party offerings).
You should look into all of the offerings in this space before deciding to roll your own. Odds are, you can bend an existing orchestration tool to fit your needs, and it will be better than anything you (or your team) would build from scratch. Take it from someone who knows. That said, you may need to roll your own in some scenarios, and using Docker does not magically negate past approaches: the golden image approach, for example, still works perfectly well. Check out part two for more in-depth information on production concerns.

What is the future for Docker?

My blog post from October covers container technologies beyond just Docker. Docker Inc. has since announced that it is open sourcing containerd, an extraction of the core container runtime from the larger Docker project. This move bolsters the position I took in that post.

Right now, there is fierce competition in the production orchestration/deployment space. Communities are developing around each of the orchestration tools, each with different goals. Projects outside the official Docker Inc. umbrella are keen to create solutions that do not depend on the Docker runtime, but instead support something with different technical values, separate from Docker Inc. This is why a separate containerd project in a neutral foundation is important: it lets the open source community and businesses build better products on common ground.

The future for Docker is clear to me. It will be orchestration based, and the cloud providers that want to stay relevant will aggressively move into this space by providing turn-key solutions to this problem. The future will be containerized, and containers will no longer imply Docker. Instead, we’ll see a more polyglot world where container tools can target different container runtimes.

Well, that’s a wrap for the Docker webinar series! I hope that you have enjoyed these sessions and that you have learned something new about the technology, ecosystem, and real world applications. I had a blast in these sessions, especially answering audience questions. Stay tuned for our future webinars on all things containers, infrastructure, and DevOps.

Good luck out there, and happy shipping!

Written by

Adam Hawkins

Passionate traveler (currently in Bangalore, India), trance addict, DevOps and continuous deployment advocate. I lead the SRE team at Saltside, where we manage ~400 containers in production. I also manage Slashdeploy.
