Docker Compose Parts
Docker Compose is an open-source tool for managing multi-container applications with Docker. With Docker Compose, you can describe environments using a declarative syntax and Compose will do all of the heavy lifting to create the environment. Compose also has built-in logic to make updating environments easy and efficient. It's not only useful for deploying pre-built images, though. You can use it during development to easily manage dependencies for projects. If that sounds interesting, you are in the right place!
In this course, we’ll go over what Docker Compose is and why you would use it. Then we’ll explore the two parts of Docker Compose: Docker Compose files and the Docker Compose command-line interface. Next, we’ll get into demo-focused lessons beginning with running a web application with Compose. After that, we’ll see how to build images in a development scenario with Compose. Lastly, we’ll see how to use Compose to adapt an application to multiple different environments. In particular, we’ll see how to use Compose to manage an application in development and production.
If you have thoughts or suggestions for this course, please contact Cloud Academy at firstname.lastname@example.org.
By the end of this course, you'll be able to:
- Describe the anatomy of Docker Compose files
- Configure your application using Docker Compose files
- Use the Docker Compose CLI to manage the entire lifecycle of applications
- Build your own images from source code with Docker Compose
- Extend Docker Compose files to adapt applications to multiple environments
This course is for anyone interested in working with Docker, including:
- DevOps Engineers
- Cloud Engineers
- Test Engineers
This is an intermediate level course that assumes:
- You have experience working with Docker
- Some understanding of software development is also beneficial
Thanks for joining me. We will start to peel back the outer layers of Docker Compose in this overview lesson.
I will begin by looking at how you might accomplish a task without Docker Compose. This will highlight some of the issues that Docker Compose was made to solve and give motivation for this lesson and really the entire course on Docker Compose.
Then I will define Docker Compose at a high level in terms of what it can do and how it does it.
Lastly, I will introduce the two parts that make up Docker Compose.
By the end of this lesson, you will understand what Docker Compose is, why you would use it, and get to know a bit about the parts that make up Docker Compose.
Ok, to start off with, I want to share the motivation for Docker Compose and it will give an opportunity to review some core Docker concepts. Let’s say that you are working on developing an application with Docker. The application is relatively simple, consisting of two services that communicate with one another. Each service corresponds to a container in the diagram. One service, let’s call it service A, is entirely stateless and should be accessible from the host machine on a port. The other service, service B, is required to persist data. You have Dockerfiles for both services already.
The task at hand is to spin up a temporary environment with the application running to perform some tests and tear it down when you are finished. How do you create this environment?
In a world before Docker Compose, you might go about achieving that with the following series of Docker commands. You want to follow best practices in isolating your application’s containers from other containers that are running on the Docker host. To give you the most control over that, you create a user-defined bridge network.
Using a user-defined network also gives you access to automatic DNS resolution from container names to IP addresses that your application might take advantage of.
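For example, creating the network might look like this (the name app_network matches the name used in the run commands later in this lesson; bridge is the default driver for user-defined networks):

```shell
# Create an isolated user-defined bridge network for the application
docker network create --driver bridge app_network
```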
For service B that persists data, you want to follow best practices again by choosing an option that is easy to back up and migrate, that can be managed using the Docker commands, can be safely shared among multiple containers, and have the flexibility to be stored on remote hosts or in the cloud. You naturally decide to create a volume to achieve this.
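The volume-creation command is a one-liner (the volume name service_b_data is a hypothetical choice for this example):

```shell
# Create a named volume for service B's persisted data
docker volume create service_b_data
```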
Now you build the Docker images using docker build with the -f option to specify the different Dockerfiles for each service.
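Sketching that out, assuming the Dockerfiles live in the build context under hypothetical names, the build step could look like:

```shell
# Build an image for each service from its own Dockerfile
docker build -f service-a.dockerfile -t service_a .
docker build -f service-b.dockerfile -t service_b .
```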
Almost there. You only need a couple more commands to start running the containers using the images you built. The docker run command creates the containers and starts running them. You use the --network option so that both containers are in the app_network in order to communicate. You use the -p option so that service A is accessible on a host port. You use the --mount option for service B to mount the volume into the container.
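Those two run commands might look roughly like the following (the container names, the host port 8080, and the mount target /data are assumptions for illustration):

```shell
# Service A: stateless, attached to the network, published on a host port
docker run -d --name service_a --network app_network -p 8080:80 service_a

# Service B: attached to the network, with the volume mounted for persistence
docker run -d --name service_b --network app_network \
  --mount source=service_b_data,target=/data service_b
```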
Now everything is up and running so you can perform some tests.
After your tests are completed you decide to tear down everything you created to keep the environment pristine.
Start by stopping the service A and service B containers. Once the containers are stopped, you are able to remove them. Next you can remove the images you built for service A and service B. After that, you’re free to delete the volume for service B’s persisted data. And finally, you can remove the network that enabled communication between the two containers. And that’s it.
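Using the same hypothetical names as before, the teardown sequence is:

```shell
# Stop and remove the containers
docker stop service_a service_b
docker rm service_a service_b

# Remove the images, then the volume, then the network
docker rmi service_a service_b
docker volume rm service_b_data
docker network rm app_network
```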
Now let’s take a moment to discuss the solution in the grand scheme of things. Setting up and tearing down the environment required about ten Docker commands. Relatively speaking, it is not too bad compared to a solution using virtualization and even better when compared to what would be involved using bare metal. Docker has made great progress in creating environments quickly and without having to worry about nasty issues like configuration drift.
Now, the series of commands used isn’t the minimal number you could use to achieve the same result. For example, you might decide that it is acceptable to have Docker automatically create the volume for you as a side effect of the --mount option in the run command instead of explicitly creating the volume with docker volume create. But even after some optimizing, the fact remains, there is a lot of typing involved to accomplish a fairly common task. Not to mention there could easily be more options involved for configuring each command.
However, it is natural to ask the question, can we do better? One option that could be useful when performing the commands more than once is to put all the commands in a script. That also allows you to check the script into version control to better manage changes and collaborate with other team members. But to write the script you still need to know all of the docker commands required and the sequence to put them in. You are in essence telling Docker how to do something with the commands in the script. This is sometimes referred to as the imperative paradigm in DevOps where you give explicit steps to perform.
As an alternative, wouldn’t it be nice to only have to declare what you want to make instead of the explicit steps to perform to create what you want? That is to take a declarative, as opposed to an imperative, approach and let some tooling figure out the steps to create what you want.
That is in essence what Docker Compose gives you with respect to defining and managing multi-container environments in Docker. You still get the benefits of being able to use source control, but the emphasis shifts to describing what you want instead of how to create it. By way of analogy, Docker Compose is similar to using Dockerfiles. You can run a container, attach to it, run some commands, and use docker commit to create a new image from the container. But in most situations, you want the enhanced documentation and maintainability that a Dockerfile gives you for accomplishing the same task along with docker build. Analogously, you usually want to use Docker Compose instead of running a series of Docker commands.
That gives you a high-level understanding of what Compose is and why you might use it. In the context of Docker, you can refer to Docker Compose simply as Compose. You will see a lot of examples of Compose in action throughout this course to develop a more robust understanding of Compose.
It is also worth noting that Docker Compose files can be used to manage multi-container applications that are distributed over a cluster of computing resources. To natively manage a cluster in Docker, you run Docker in swarm mode. Swarm mode is outside of the scope of this course. You can learn more about swarm mode in other excellent content on Cloud Academy. I just want you to know that the time you spend learning Docker Compose in a single host environment will pay dividends later on when you start running applications on a Docker swarm cluster.
Docker Compose consists of two parts: a specially formatted file called a Compose file, and a command-line interface.
A Compose file is where you declare services that comprise your application. You can do a lot inside a Compose file. The Docker commands you use for creating containers, volumes, and networks have equivalent declarative representations in Compose files. Knowing Docker commands makes writing Compose files quite easy given their close connection.
There is an entire lesson devoted to the details of Compose files in this course. To get a sneak peek of what’s to come, take a look at this example Compose file. The services section declares two services: web and redis. Each service has a set of options underneath it. For the web service, there’s an option for specifying the image, which ports to expose on the host machine, and a volume to mount in the container created for the service. There is also the depends_on option, which has no equivalent docker command option. It becomes necessary in the context of Compose because you need a way to specify the order in which services come up. You can no longer issue commands in a specific order as you would with docker commands. But I’m getting ahead of myself. We’ll cover a lot more details on Compose files in an upcoming lesson.
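A minimal sketch of such a Compose file might look like this (the image names, port mapping, and mount paths are placeholders; only the web and redis service names come from the example described above):

```yaml
services:
  web:
    image: example/web:latest   # placeholder image name
    ports:
      - "8080:80"               # host port : container port
    volumes:
      - ./web:/usr/src/app      # mount source code into the container
    depends_on:
      - redis                   # bring up redis before web
  redis:
    image: redis
```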
The other part of Compose is a command-line interface. It has a familiar feel to the docker command-line interface. Many of the commands you use with docker exist in docker-compose but are generalized to multi-container applications. The name of the Compose binary is, appropriately, docker-compose. As an example of the power of docker-compose, the series of docker commands presented in the Motivation section can be performed with just two commands in Docker Compose: one for bringing the application up, and another for tearing everything down. The simplicity of creating isolated environments with Docker Compose makes automated testing one of its key use cases. As with Compose files, there is an entire lesson in this course devoted to the Compose command-line interface. You will also see a lot of both Compose files and the command-line interface in the lessons that present examples showing you how to use Compose for different tasks.
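Run from the directory containing the Compose file, those two commands are:

```shell
# Create the networks, volumes, and containers and start the application
docker-compose up -d

# Stop and remove the containers and networks; the extra flags also
# remove the images and volumes, matching the full manual teardown
docker-compose down --rmi all --volumes
```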
It can be difficult to manage multi-container applications. In the example at the beginning of this lesson, many commands and options were required to start and stop a relatively simple multi-container application. The difficulty grows with the number of containers involved.
Docker Compose lets you specify services in a multi-container application using a declarative paradigm. You declare what you want and Compose figures out how to create it.
There are two parts to Compose. Compose files are where you declare the services you want, and the docker-compose command-line interface is how you manage the multi-container application declared in a Compose file.
That’s all for this overview lesson on Docker Compose. We’ll dive deep into Compose files in the next lesson. Whenever you are ready, continue on and start to see your multi-container applications through the Docker Compose lens.
Logan has been involved in software development and research since 2007 and has been in the cloud since 2012. He is an AWS Certified DevOps Engineer - Professional, AWS Certified Solutions Architect - Professional, Microsoft Certified Azure Solutions Architect Expert, MCSE: Cloud Platform and Infrastructure, Google Cloud Certified Associate Cloud Engineer, Certified Kubernetes Security Specialist (CKS), Certified Kubernetes Administrator (CKA), Certified Kubernetes Application Developer (CKAD), and Certified OpenStack Administrator (COA). He earned his Ph.D. studying design automation and enjoys all things tech.