Docker Webinar Part 3: Production & Beyond

Last week, we wrapped up our three-part Docker webinar series. You can watch the sessions on the webinars page and find the slides on Speakerdeck. Docker Webinar part one introduced Docker and container technologies, showed how to get started in your development environment, and ended with a demo of using Docker Compose for development environments. Docker Webinar part two covered production deployment, including the different options in the orchestration space and other production concerns, and wrapped up with a short Kubernetes demo. Docker Webinar part three closed the series by providing some jumping-off points and revisiting recurring questions from the previous sessions.

This post elaborates on some of the common questions around Docker and takes a look at what’s next. Let’s start by covering some of the more entry-level questions.

Docker Webinar Part 3: What is Docker and what should I do with it?

Docker is one part of the suite of tools provided by Docker Inc. to build, ship, and run containers. Docker containers start from Docker images. A Docker image includes everything needed to start a process in an isolated container: the application’s source code, supporting libraries, and any other required binaries. Containers build on Linux kernel features such as LXC (Linux containers), cgroups (control groups), and namespaces to isolate the container from other processes (or containers) running on the same kernel. In a nutshell, this allows you to distribute applications as independent images and run them on any system with the Docker daemon. This provides engineers with important benefits and new approaches.
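
Here’s a minimal sketch of that build, ship, and run flow; the image name, tag, and registry are hypothetical placeholders, not something from the webinar:

    # Build an image from the Dockerfile in the current directory
    # ("myorg/myapp:1.0" is a placeholder name and tag)
    docker build -t myorg/myapp:1.0 .

    # Ship it by pushing to a registry (Docker Hub by default; any registry works)
    docker push myorg/myapp:1.0

    # Run it on any host with the Docker daemon installed
    docker run -d --name myapp myorg/myapp:1.0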

Docker (and other container technologies) is naturally suited to polyglot engineering teams. It provides a way to standardize development workflows and deployments, which also increases development and production parity.

Let’s circle back to some concrete things you can and should be using Docker for. It’s easy to get started with Docker in your development environment. Naturally, different projects require different data stores, and even different versions of the same store. This problem is easily solved with Docker containers: simply start one MySQL container for project A and another, running a different version, for project B (a quick sketch follows below). Docker Compose makes this easy enough. You can also start containerizing your development process and even your CI servers.
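
Here’s what that looks like, assuming the official MySQL images from Docker Hub; the container names, host ports, and password are illustrative:

    # Project A gets MySQL 5.7 on the default port
    # (container names and password are placeholders)
    docker run -d --name projecta-mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=secret mysql:5.7

    # Project B gets MySQL 8.0 on a different host port
    docker run -d --name projectb-mysql -p 3307:3306 -e MYSQL_ROOT_PASSWORD=secret mysql:8.0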

Containerizing your development process has a few benefits. First, the setup is tech-stack independent apart from Docker (or any other tooling involved). Second, your setup is already moving toward infrastructure as code, since application dependencies are listed in the Dockerfile and even dependent databases can live in a docker-compose.yml file if you opt for that. Third, you can start building Docker images and use them to deploy to production and non-production environments. You can read up on more concrete examples in the part one wrap-up.
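
As a minimal sketch of those two files, assuming a Node.js web app (the contents are illustrative, not a production recipe):

    # Dockerfile: the application and its dependencies live here
    FROM node:10-alpine
    WORKDIR /app
    COPY package.json .
    RUN npm install
    COPY . .
    CMD ["node", "server.js"]

    # docker-compose.yml: dependent services live here
    version: "3"
    services:
      app:
        build: .
        ports:
          - "3000:3000"
        depends_on:
          - db
      db:
        image: mysql:5.7
        environment:
          MYSQL_ROOT_PASSWORD: secret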

How is Docker different from Virtual Machines?

This is a common question, and here is the short answer. Docker runs all processes on a single kernel; virtualization runs multiple guest kernels via a hypervisor on a single host. Docker containers focus on running a single process, while virtual machines run entire operating systems and multiple processes. Docker containers are restricted to the host kernel: Docker running on Linux can only run Linux images, and Docker running Windows containers can only run Windows images.

Virtual machines are the other way around: you may have a Windows host with a Linux guest or vice versa. Virtual machines also require more compute resources (CPU, memory, etc.) because each VM needs memory for an entire operating system and userland, while Docker containers share the host’s resources. Your machine may only have enough compute to run one VM at a time, but the same machine could run many more containers. The official Docker website explains this as well. You can also get a free ebook that describes more of the differences (and similarities).
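
A quick way to see the shared kernel in action, assuming Docker is installed on a Linux host:

    # Both commands print the same kernel release,
    # because the container has no guest kernel of its own
    uname -r
    docker run --rm alpine uname -r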

Should I use Docker for Production?

This is a great question! The answer is, you guessed it: it depends. All engineering decisions are about tradeoffs. In our profession, it’s rare that any answer is an absolute yes (unless we’re talking about whether code should have tests). Deciding to build a production system on Docker is no different: there are many tradeoffs, and it may make sense in some situations and not others.

Start by considering the complexity of your application. A team building a single web application will likely be better off using Heroku (or similar) because it solves a lot of the same problems without introducing a significant new abstraction layer. In contrast, a team building a distributed system with a service-oriented or microservices architecture has different considerations: many components written in different languages and a need to keep things manageable (and distributed systems definitely require management). Docker makes more sense for this team because it provides a standard foundation to support the system’s growth and technical evolution.

The decision whether to use Docker in production ultimately boils down to a few key factors: application scale (the number of independently deployed components and tech stacks), infrastructure (is there a hosted service we can use, do we need to roll our own, are there pre-existing requirements?), and the time and talent on hand. So, if this is right for your team, how do you put Docker in production?

How do I use Docker in Production?

This is probably the hottest question around containers and Docker right now. The short answer, and the easiest option, is to use Google Container Engine. This gives you immediate access to a production-ready, hosted Kubernetes cluster that you can deploy to.
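
For reference, spinning up a cluster there is roughly two commands; the cluster name and node count below are placeholders:

    # Create a hosted Kubernetes cluster on Google Cloud
    # ("demo-cluster" is a placeholder name)
    gcloud container clusters create demo-cluster --num-nodes=3

    # Point kubectl at the new cluster
    gcloud container clusters get-credentials demo-cluster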

The longer answer is that you should use an orchestration tool. Deploying containers is about solving problems at scale. It’s not about handling one container, but about handling hundreds (or thousands) of containers and composing them into larger systems. Orchestration tools solve this problem by building clusters of compute resources (which may be virtual machines in the cloud, physical hardware, or both) and providing APIs to deploy, expose, and scale containers running on the cluster.
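
With Kubernetes, for example, that deploy/expose/scale workflow looks roughly like this; the deployment name and image are hypothetical:

    # Deploy a container image as a managed deployment
    # ("hello-app" and "myorg/hello-app:1.0" are placeholders)
    kubectl create deployment hello-app --image=myorg/hello-app:1.0

    # Expose it to traffic behind a load balancer
    kubectl expose deployment hello-app --type=LoadBalancer --port=80

    # Scale it out to five replicas
    kubectl scale deployment hello-app --replicas=5
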
While there are many orchestration tools in the ecosystem right now, there are a few key players that you should consider. I would recommend checking out Kubernetes (my favorite for containerized, cloud-native applications), DC/OS (for containerized and non-containerized workloads), Docker Swarm Mode / Docker Datacenter (if you want a first-party offering and direct access to the Docker daemon), and Amazon’s Elastic Container Service (for AWS-based companies who like first-party offerings).

You should look into all of the offerings in this space before deciding to roll your own. Odds are, you can bend an orchestration tool to fit your needs, and it will be better than anything you (or your team) would create. Take it from someone who knows. However, you may need to roll your own in some scenarios. Using Docker does not magically negate past approaches; the golden image approach still works perfectly well. Check out part two for more in-depth information on production concerns.

What is the future for Docker?

My blog post from October covers container technologies beyond just Docker. Docker Inc. announced that they are open-sourcing “containerd,” a component extracted from the larger Docker project, which bolsters the position I took in that post.

Right now, there is fierce competition in the production orchestration/deployment space. Communities are developing around each of the orchestration tools, each with different goals. Projects outside the official Docker Inc. umbrella are keen to create solutions that do not depend on the Docker runtime, but instead support runtimes with different technical values, independent of Docker Inc. This is why a separate “containerd” project in a neutral foundation is important: it lets the open source community and businesses build better products.

The future for Docker is clear to me. It will be orchestration-based, and the cloud providers that want to stay relevant will aggressively move into this space by providing turnkey solutions to this problem. The future will be containerized, and containers will no longer imply Docker. Instead, we’ll see a more polyglot world where container tools can target different container runtimes.

Well, that’s a wrap for the Docker webinar series! I hope that you have enjoyed these sessions and learned something new about the technology, the ecosystem, and real-world applications. I had a blast in these sessions, especially answering audience questions. Stay tuned for our future webinars on all things containers, infrastructure, and DevOps.

Good luck out there, and happy shipping!


Written by

Adam Hawkins

Passionate traveler (currently in Bangalore, India), trance addict, DevOps and continuous deployment advocate. I lead the SRE team at Saltside, where we manage ~400 containers in production. I also manage Slashdeploy.
