Container technologies: more than just Docker

Docker has gained widespread industry adoption and success since its release in 2013. As more people push to Dockerize everything, it’s important to realize that Docker is only the first wave of successful container technology. Here are just some of the reasons why what we’re seeing is only the beginning of mainstream container adoption.

My Introduction to Docker course also covers these technologies. Control groups (also known as “cgroups”), which are fundamental to the Docker engine, have been around for years. They provide a way to assign CPU, RAM, disk I/O, and network I/O limits to a particular process. LXC came along later and provided a higher-level abstraction by leveraging control groups and other kernel features. LXC is an operating-system-level virtualization method for running multiple isolated Linux systems (containers) on a control host using a single Linux kernel. The goal is to create an environment as close as possible to a standard Linux installation, but without the need for a separate kernel. LXC offers separate namespaces for users, mounts, processes, and more.
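To make those two kernel features concrete, here is a minimal sketch in Go: it starts a shell in fresh UTS, PID, and mount namespaces, then caps its memory with a cgroup. This is my own illustration, not production code and not from any particular engine; it is Linux-only, needs root, and assumes the cgroup v1 hierarchy is mounted at /sys/fs/cgroup.

```go
// A minimal, illustrative sketch of namespaces (isolation) and cgroups
// (resource limits). Not production code; error handling is elided.
package main

import (
	"os"
	"os/exec"
	"strconv"
	"syscall"
)

func main() {
	// Start a shell in new UTS, PID, and mount namespaces. Inside it,
	// `hostname foo` is invisible to the host and `ps` starts at PID 1.
	cmd := exec.Command("/bin/sh")
	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
	cmd.SysProcAttr = &syscall.SysProcAttr{
		Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}

	// Cap the shell (and its children) at 100 MB of RAM via a cgroup.
	// The v1 paths here are an assumption of this sketch.
	cg := "/sys/fs/cgroup/memory/demo"
	os.MkdirAll(cg, 0755)
	os.WriteFile(cg+"/memory.limit_in_bytes", []byte("104857600"), 0644)
	os.WriteFile(cg+"/tasks", []byte(strconv.Itoa(cmd.Process.Pid)), 0644)

	cmd.Wait()
}
```

LXC and Docker build on exactly these primitives, layering image management, networking, and much more on top.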
LXC comprises a fantastic set of technologies. It just needed a killer app. Heroku came along and transformed the platform-as-a-service industry by providing a containerized way to run Ruby applications. Their stack was built on LXC. Users pushed their app via git push to Heroku’s servers. Then, Heroku did the heavy lifting to build what they call a “slug”: essentially everything required to start a container on their internal system. Heroku has since added support for other first-party stacks (such as Python, Node.js, Go, and Java) and allows users to contribute their own via buildpacks. You can even view the source for their official Ruby buildpack.
Given that Heroku and LXC were released nearly 10 years ago (and FreeBSD jails were available even before LXC or control groups), why is our industry focusing almost entirely on Docker? Perhaps it is because Docker achieved critical mass among developers. They won by taking what Heroku offered privately and internally and making it accessible to everyone.
Docker gave everyone a standard way to build images using the Dockerfile. It was astoundingly simple: list the commands required to configure your software and add the files. Then, more importantly, push that image to a public repository. Now, anyone could pull that image and run your software as a container, regardless of the language it was written in. The model took the industry by storm. It’s safe to say that for many people, container technologies and the wider ecosystem are basically Docker.
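As a sketch of what that looks like, here is a hypothetical Dockerfile for a small Ruby web app. The base image, port, and commands below are my own illustration, not taken from any real project:

```dockerfile
# Start from a published base image.
FROM ruby:2.3

# Configure the software: install the dependencies in the Gemfile.
WORKDIR /app
COPY Gemfile Gemfile.lock ./
RUN bundle install

# Add the application files.
COPY . .

# Describe how to run the container.
EXPOSE 8080
CMD ["bundle", "exec", "rackup", "--host", "0.0.0.0", "--port", "8080"]
```

A docker build followed by a docker push is all it takes before anyone else can docker pull and run the same software, on any machine with the Docker engine.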
This view may change in the next few years. Development is only half the story; production deployments are a completely separate matter, and arguably the most important one. Businesses don’t make their money during development. Software only matters once it’s in production. Getting Docker containers into production is a complicated affair, and this is where other container technology companies want to compete with Docker Inc. New competitors in this space are already having an impact on the container ecosystem.
The initial (and ever-increasing) technical scope of Docker’s engine makes some wary. There are a few layers here. First, it’s generally more difficult to integrate with projects of a larger scope because their implementation may overlap or conflict with existing systems (for example, Docker vs. systemd). Second, a larger technical scope means the project is more likely to overtake third-party projects (for better or worse). Third, and perhaps most importantly: money, and the wider impression of a for-profit company (Docker Inc.) trying to take over the world. These feelings came to a head when Docker added first-party orchestration via Docker Swarm in Docker 1.12. This made other vendors like Red Hat (a huge player in enterprise-scale systems) cautious.
The New Stack recently posted an article on how Red Hat is developing a way to support non-Docker backends in Kubernetes. This excerpt from the article sums it up:
“We don’t really need much from any container runtime, whether it’s Docker or [CoreOS’] rkt — they need to do very little,” said Kelsey Hightower, Google’s staff developer advocate, “mainly give us an API to the kernel.  So this is not just about Linux. We could be on Windows systems… if that’s the direction the community wanted to go in, to support these ideas. Because it’s bigger than Docker Inc., at this point. This is about, how do you run a containerized application?”
He has a great point: orchestration is about running containerized applications. The article, along with the recent rumors of a Docker fork, made me realize that, as an industry, we need to expand our horizons and focus on the wider container ecosystem. We cannot look at container technologies only through the lens provided by Docker Inc. Here’s a quote from the Open Container Initiative (OCI) FAQ:
“In the past two years, there has been rapid growth in both interest in and usage of container-based solutions. Almost all major IT vendors and cloud providers have announced container-based solutions, and there has been a proliferation of start-ups founded in this area as well. While the proliferation of ideas in this space is welcome, the promise of containers as a source of application portability requires the establishment of certain standards around format and runtime. While the rapid growth of the Docker project has served to make the Docker image format a de facto standard for many purposes, there is widespread interest in a single, open container specification…”

The OCI backs the runC project, a lightweight tool for starting containers that is compatible with the OCI standard. Docker donated a large amount of code to the initiative: Docker Inc.’s libcontainer became runC, and in fact the libcontainer source repository now redirects to the runC repo.
CoreOS is actively developing Rocket (rkt). Rocket is interesting because it is a largely independent container implementation (though heavily inspired by Docker). Rocket is built on the AppC standard, which lists three stable implementations: Rocket, Jetpack, and Kurma.
The point is that there are many active projects in the container ecosystem. While each effort optimizes for different technical use cases, they share a common goal: more clearly defined boundaries, which make orchestration easier.
DCOS (short for Datacenter Operating System) announced in a recent post, Who Wants to Run My Container, that they have decided to unify their support for AppC and Docker containers. The author calls out some implementation concerns later in the post:
“Another factor making it easier is the decision to focus on support for the Docker image spec, but not necessarily target all Docker runtime features such as storage or networking. We believe some of those runtime features should be part of a higher level API, since most people only actually use a subset of these runtime features.
Consider, for example, the networking stack: here we decided to use the Container Networking Interface (CNI) instead of Docker’s native Container Network Model (CNM). CNM has been designed specifically with the Docker container orchestration life-cycle in mind, which doesn’t play very well with other orchestration engines such as Mesos or Kubernetes. CNI plugins, on the other hand, are completely unaware of the lifecycle of the orchestration engine, making them much more platform independent.” 
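To see what that independence looks like in practice, here is a heavily simplified, hypothetical sketch of a CNI-style plugin in Go. Under the CNI spec, a plugin is just an executable: the runtime passes the operation and container details through environment variables and the network configuration through stdin. A real plugin must also emit a structured JSON result, which is omitted here to keep the sketch short.

```go
// A heavily simplified sketch of the shape of a CNI plugin. Hypothetical
// and incomplete; its point is that nothing here knows or cares which
// orchestration engine invoked it.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// netConf mirrors a fragment of the network configuration the runtime
// feeds the plugin on stdin.
type netConf struct {
	Name string `json:"name"`
	Type string `json:"type"`
}

func main() {
	var conf netConf
	json.NewDecoder(os.Stdin).Decode(&conf)

	// CNI_COMMAND is ADD or DEL; CNI_NETNS names the container's network
	// namespace. Mesos, Kubernetes, or anything else could be the caller.
	switch os.Getenv("CNI_COMMAND") {
	case "ADD":
		fmt.Printf("wiring network %q into netns %s\n", conf.Name, os.Getenv("CNI_NETNS"))
	case "DEL":
		fmt.Printf("tearing down network %q\n", conf.Name)
	}
}
```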
The key takeaway here is “platform independent.” Docker won phase one of the software pipeline by reaching a critical mass of developers. Bringing containers to production is the next phase. Most people are not concerned with bringing a single container into production; that scale is too small to matter. Organizations are increasingly building distributed systems out of smaller and smaller components (e.g., microservices architectures). These systems require more robust orchestration systems like Kubernetes, Mesos, or DCOS. Increasingly, these orchestrators are less concerned with the implementation of particular components and more with how easily those components integrate with their scheduler. This is where the tension begins.
Docker Inc. added Docker Swarm to its 1.12 Docker release. This created friction with downstream vendors, who are upset with Docker Inc. for competing with them and for shipping changes that break their products. This is one reason behind the rumored Docker fork. Regardless, the industry wants to move to better orchestration tools that are decoupled from the underlying container technology.
Now, a question for you: if you were a maintainer on DCOS, Kubernetes, Mesos, OpenShift, or the like, would you rather integrate with decoupled tools or with larger, more tightly coupled ones? I think you would choose the former. I believe that we’ll see the industry shift away from Docker’s implementation and focus more on open standards such as AppC or those proposed by the OCI. Again, this is backed by Kelsey Hightower:
“This is more about leveling the playing field,” he said. “Right now, Docker is the most well-supported container runtime, because we at Google and the community have done the majority of the work to make Docker work as a first-class runtime. But now that we see that people want choice, and it has always been our vision to support multiple container runtimes, one step in doing that is creating this Container Runtime Interface. As part of that, we’re going to refactor our current code base to implement the [CRI].”
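The idea behind such an interface is easy to sketch. The Go snippet below is purely illustrative (it is not the actual CRI, which is a much larger gRPC API): the orchestrator codes against a small runtime-neutral interface, and Docker, rkt, or any future engine can plug in behind it.

```go
// A hypothetical, drastically simplified illustration of a container
// runtime interface. Only the decoupling idea is shown here.
package main

import "fmt"

// ContainerRuntime is the "API to the kernel" the orchestrator codes
// against; it never needs to know which engine implements it.
type ContainerRuntime interface {
	Pull(image string) error
	Run(image string, args []string) (id string, err error)
	Stop(id string) error
}

// loggingRuntime stands in for a real engine adapter (Docker, rkt, ...).
type loggingRuntime struct{ engine string }

func (r loggingRuntime) Pull(image string) error {
	fmt.Println(r.engine, "pull", image)
	return nil
}

func (r loggingRuntime) Run(image string, args []string) (string, error) {
	fmt.Println(r.engine, "run", image, args)
	return "container-1", nil
}

func (r loggingRuntime) Stop(id string) error {
	fmt.Println(r.engine, "stop", id)
	return nil
}

func main() {
	// Swapping engines is a one-line change; the scheduler code is untouched.
	var rt ContainerRuntime = loggingRuntime{engine: "docker"}
	rt.Pull("nginx:latest")
	id, _ := rt.Run("nginx:latest", nil)
	rt.Stop(id)
}
```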
We are experiencing the first wave of mainstream container adoption. Fast forward five years into the future: do you think Docker will still be around, or will open source implementations have taken the lead? Do you think Docker Inc. will be targeting the same marketplace, or will they have moved on to larger enterprises? Do you think Kubernetes, Mesos, DCOS, and friends will still be around, or will they have been beaten by other projects?
I’ll throw my hat into the ring: I think that in the next five years, the industry will have moved on from Docker as the primary player, and open source implementations will take center stage. The focus will shift away from container technologies themselves because they are an implementation detail. Orchestration tools will have evolved significantly in functionality and robustness, focusing more on interfaces than on implementations. We’ll also see more new container implementations.
My advice to you is to stay informed about the container ecosystem and watch these other implementations. At best, you will be more prepared to ride the waves of industry change. At worst, you will have learned something new.

Written by

Passionate traveler (currently in Bangalore, India), trance addict, DevOps and continuous deployment advocate. I lead the SRE team at Saltside, where we manage ~400 containers in production. I also manage Slashdeploy.
