How to fix the great Docker security mess
Success story: Linux software package management
When thinking about security in general, and Docker security in particular, it’s important never to forget the potential costs of a devastating data breach. You really need to ensure that EVERY deployment you launch is as secure and reliable as possible. In the Linux world, one of the most powerful tools you can use is curated software repositories.
Package managers, such as Debian’s apt and Red Hat’s yum, verify the authenticity and the integrity of every package they make available for download. Since access to the repos is restricted to trusted and capable individuals, you can be confident that anything you install from official channels on your Ubuntu or CentOS system is safe.
This is how many (perhaps most) desktop and server deployments are currently run.
But if you download packages from uncurated repositories, manually add third-party keys to the keyring, or somehow build your system using a compromised operating system image, then all bets are off and there’s no way you can ever really know what you’re getting.
This is how many (perhaps most) Docker containers are currently run.
Docker security: the problem
Running `docker pull` both downloads and installs an image in a single step, over an unsafe connection and with no verification mechanism. Until 2013, running the Python package manager, pip, left your system similarly vulnerable.
All that assumes that the original source image or package you’re downloading is reliable. But, since anyone can upload anything to repositories like Docker Hub, the npm registry, and PyPI, choosing a package can be more like playing Russian Roulette than optimizing Docker security (although Docker Hub does feature official repositories for major distributions like Ubuntu and software like MySQL).
This isn’t just theoretical: malware has already been found on a public repository.
Update: some months after this post was published, Docker introduced Docker Content Trust, designed to address exactly this problem. Docker Content Trust is a system for verifying the identity of the publisher whose software you are pulling, ensuring that you are getting only properly signed images. As this is an opt-in feature, it will still be the responsibility of administrators to make sure they’re following best practices, but at least best practices are now much more accessible.
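Opting in is a one-line change: with a single environment variable set, the Docker client refuses to pull unsigned tags. A minimal sketch (the registry and image names below are hypothetical):

```shell
# Opt in to Docker Content Trust for this shell session; while it is set,
# `docker pull` will refuse image tags that are not properly signed.
export DOCKER_CONTENT_TRUST=1

# Hypothetical pull -- succeeds only if the publisher signed this tag:
# docker pull yourregistry.example.com/yourimage:latest
```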
Why do people treat Docker security differently?
The most significant difference is that the Debian and Red Hat software ecosystems are managed by a small group of expert and trusted people. As we already mentioned, anyone can upload anything to the Docker Hub, the npm repository, or PyPI, whose packages often come with dependencies that are automatically installed without even asking for permission.
The web is also full of guides suggesting very unsafe practices. The desire to create clean and simple routines to make software installation painless can easily lead to lines like this (an actual live example from GitHub):
curl http://npmjs.org/install.sh | sudo sh
Just imagine how this must warm the hearts of our good friends at the NSA!
Docker security: what to avoid
- Just because a how-to guide – even from a well-known source – provides you with an installation script, don’t run it blindly: behind the scenes, it may fetch resources over unsecured channels. Be sure to read and understand every line of the script and be even more cautious if it requires administrative permissions.
- Avoid “solutions” (like Python’s Virtualenv) that don’t properly isolate your system. These were designed for easing package management, not for Docker security.
- Don’t trust packages, scripts, and advice from any source unless you’re sure that the admins properly understand and are committed to an appropriate level of security.
- Don’t add repositories or keys from people you don’t trust.
- To limit your exposure to risk, avoid deploying any software that you don’t absolutely require.
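The first point above can be turned into a simple routine: download, verify, read, and only then run. Here is a sketch of that pattern; the download step is simulated with a local file, since in practice you would fetch the script over HTTPS first.

```shell
set -eu
# The safe counterpart to `curl ... | sudo sh`: download, verify, read, run.
# The "download" is simulated with a local file here; in practice you would
# fetch it over HTTPS first, e.g.: curl -fsSL -o install.sh <https URL>
printf 'echo installed\n' > install.sh

# 1. Check the file against the checksum the publisher advertises.
#    (Here we compute it in place; normally `published` comes from the
#    project's release notes, obtained over a separate, secure channel.)
published=$(sha256sum install.sh | awk '{print $1}')
actual=$(sha256sum install.sh | awk '{print $1}')
[ "$published" = "$actual" ] || { echo 'checksum mismatch' >&2; exit 1; }

# 2. Read the script (e.g. with `less install.sh`) before step 3.
# 3. Run it -- and only with sudo if it genuinely needs root:
sh install.sh
```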
The right tools
- If possible, build and maintain a local mirror as your private repository. This mirror should contain signed and verified versions of the packages you need and should be accessible only over secure transports. The Red Hat blog provides some excellent tips for Docker.
- Enable all the security features offered by the package managers you do use. Pip, for example, can verify hashes. After auditing a Python package, be sure to pass its hash to pip explicitly when installing it (remembering that the package may come with dependencies of its own).
- Install only software coming from known and trusted sources.
- Where possible, use reproducible builds and audit them. Both Debian and Fedora are working on making more and more reproducible packages available. It may still be some time before this is widely applied, but it’s worth keeping it in your sights.
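Pip’s hash-checking mode makes the second point concrete: pin each package and the hash of its archive in a requirements file, then install with `--require-hashes` so pip rejects anything that doesn’t match. The version and hash below are placeholders, not real release values:

```shell
# Pin the package *and* the sha256 of its archive (the hash shown is a
# placeholder, not a real release hash):
cat > requirements.txt <<'EOF'
requests==2.18.4 --hash=sha256:0000000000000000000000000000000000000000000000000000000000000000
EOF

# With --require-hashes, pip aborts on any archive that does not match,
# and insists that every dependency is pinned and hashed too:
# pip install --require-hashes -r requirements.txt
```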
If you absolutely MUST run untrusted code, minimal Docker security demands that you use proper isolation via solutions like AppArmor/SELinux, LXC, unprivileged LXC, or QEMU/VMware/VirtualBox.
None of these approaches is perfect, and each has its own strengths and weaknesses, but with some careful tuning, they might be effective for you in the right combination.
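When you do have to run something dubious under Docker itself, a few run-time flags meaningfully shrink the attack surface. The flags below are real Docker options; the image name is hypothetical, so the actual invocation is left commented out:

```shell
# Drop all capabilities, mount the container filesystem read-only, and
# forbid privilege escalation inside the container:
opts="--read-only --cap-drop=ALL --security-opt no-new-privileges"

# Hypothetical invocation:
# docker run --rm $opts untrusted/image
echo "$opts"
```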