In this lesson, we will review some of the high-level features of the Compose CLI. Next, we will go through some of the installation options available for different platforms. And finally, we will finish the lesson by looking at how to use the docker-compose CLI by reviewing common commands and parameters.
First, we will discuss the features of the Compose CLI:
- running multiple isolated environments on a single host
- using a parallel execution model to perform tasks for creating and deleting an application environment
- reusing any containers that have not changed configuration after a restart
- reporting issues and making feature requests on GitHub
Next, we will begin the installation of the Compose CLI on your system. We will cover the difference between Mac, Windows, and Linux installations.
Then, we will open up the Compose CLI, view some basics, and move to the terminal to start using the Compose CLI.
We will go over several Compose commands that are designed to make adoption easier for users familiar with Docker. In addition to these commands, we will discuss the two non-deprecated commands that are unique to Compose:
- Up: performs the actions required to instantiate the application described in a Compose file. It starts by creating default and named networks as applicable, and any named volumes.
- Down: by default, removes only containers and any named and default networks.
We will provide an example of how and when to use up and down using the Compose CLI.
Finally, we will review all of the information that was covered in the lesson.
In this lesson, we’ll see how to use the Docker Compose command-line interface to turn the multi-container applications described in Compose files into actual running environments in Docker.
I’ll start by reviewing some of the high-level features of the Compose CLI.
Next, I will go through some of the installation options available for different platforms.
Lastly, I’ll finish the lesson by looking at how to use the docker-compose CLI, reviewing common commands and parameters.
One of the features of the Compose CLI that I want to highlight is its ability to run multiple isolated environments on a single host. One scenario where this is extremely useful is a continuous integration server where you need to run automated tests for each build version. The ability of Compose to run multiple isolated environments means you don’t have to sequentially iterate through each version. A development scenario where this comes in handy is when you need to create multiple copies of an environment for different feature branches. You can use variable substitution in the Compose file to create the desired branch environment. We’ll talk more about development scenarios in later lessons in this course.
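As a sketch of the feature-branch scenario just described (the service name, image name, and BRANCH variable are assumptions for illustration, not files from this course):

```yaml
version: '3.4'
services:
  web:
    # The image tag comes from an environment variable, so each feature
    # branch can bring up its own copy of the environment:
    image: myapp:${BRANCH}
```

With `BRANCH=feature-login` exported, running `docker-compose -p feature-login up -d` would then create an environment isolated from other branches by its project name.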
The Compose CLI uses a parallel execution model to perform tasks for creating and deleting an application environment. Not everything can run in parallel due to dependencies and limitations in Docker, but when possible parallel execution is used to reduce the time it takes to manage applications.
Another useful feature to be aware of is the change detection capabilities of Compose. Every time you start a container for a service in Compose, the configuration is cached. If you later restart a Compose application, Compose will reuse any containers that haven’t changed configuration. This is a bit like how layers are cached when building images from a Dockerfile. Just like with Dockerfiles, you can instruct Compose not to use the existing containers and instead force all containers to be rebuilt.
The last feature, using the term loosely, is that Docker Compose is an open source project on GitHub with an active community. You can report issues and make feature requests there. If you are familiar with the Python programming language, you can fork the project and modify the source to better suit your needs. Maybe even make a pull request to have your improvements included in the project.
Before we get into using the Compose CLI, I want to say a few words about getting Compose installed on your system.
For Mac users,
Compose comes installed with the Docker for Mac application and Docker Toolbox for older systems.
For Windows users,
If you obtained Docker through Docker for Windows or Docker Toolbox, Compose came included with that.
If you are running the native Windows Docker Daemon on Windows Server 2016 or Windows 10 with the Anniversary Update,
you need to install Compose separately. You can choose the appropriate version of Compose and download an installer from the Compose GitHub releases page. For example, you could download version 1.17.0 to get the version of Compose I’m using for this course.
For Linux systems,
Docker Compose is included in many distribution repositories.
For example, on CentOS or Red Hat distributions you can use yum or dnf, and on Debian-based systems you can use apt.
If Compose isn’t available through the distribution’s package repo or you want a specific version,
you can get Compose from the GitHub releases page.
All right! With that out of the way, we can look at how to use the Compose CLI. I’ll show a couple slides to cover the Compose CLI basics and then hop over to my terminal to briefly illustrate using the Compose CLI.
docker-compose follows similar patterns to the docker CLI. You specify options to Compose, followed by a command, and add arguments for the command at the end. You can always use the --help argument to print a help page for any command.
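The pattern looks like this at the terminal (a sketch; only the help invocations are literal commands):

```shell
docker-compose [OPTIONS] COMMAND [ARGS...]   # general shape, not a literal command
docker-compose --help                        # top-level help and command list
docker-compose up --help                     # help page for a specific command
```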
Compose will use the Docker Daemon running on the host by default.
You can connect to a Docker Daemon running on a remote host using the -H option. Along with that, you can secure the connection to the remote host using transport layer security (TLS) options. This requires the remote host to have been configured to use TLS for the Docker Daemon.
For commands that reference a Compose file, the default Compose files that the CLI tries to find in the current directory are named docker-compose.yml or docker-compose.yaml.
It can be restrictive using only the default Compose file, so the -f option is provided to allow you to specify a path to any file that you want Compose to use as the Compose file for a command.
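For instance (the file path here is an assumption for illustration):

```shell
docker-compose -f deploy/staging.yml up -d   # use a non-default Compose file
docker-compose -f deploy/staging.yml down    # the same file must be given to related commands
```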
Each isolated application is associated with a project in Compose. The project is given a name, and that name appears in resources that get created by Compose. For example, the names of networks and containers created by Compose begin with the project name, followed by the name you declared for the resource in the Compose file.
The default project name is the name of the directory containing the Compose file.
You can assign a custom project name by using the -p option.
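For example (the project and service names are assumptions; the naming pattern shown is typical of Compose 1.x):

```shell
docker-compose -p myproject up -d
# Resources are prefixed with the project name, e.g. a container
# named myproject_web_1 and a network named myproject_default.
```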
As we have seen, Compose tries to make adoption easy for users already familiar with Docker. Most of the commands in the Compose CLI are familiar Docker commands that are generalized to work with multi-container applications.
This is the list of commands that exist in both the Docker and Compose CLIs as of Compose version 1.17. As an example of how a command is generalized to multi-container applications, consider the stop command. In Docker, you use the stop command to stop one or more running containers, passing the container names as arguments to the command. In Compose, the stop command will stop all containers declared in a Compose file, unless you provide the names of individual services to stop. Most commands generalize as you would expect. Some commands, like config, have no Docker counterpart. Config is useful for validating the YAML and configuration in a Compose file. It’s important to note that you can still use the docker CLI to work with resources created by Docker Compose. For example, containers created by Compose are listed by docker ps since they are created by the Docker Daemon. Compose just wraps commands to generalize them to how you would expect them to work with multi-container applications.
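To make the stop generalization concrete (the container and service names are assumptions):

```shell
docker stop my_container    # Docker: stop the named container
docker-compose stop         # Compose: stop every service in the Compose file
docker-compose stop web     # Compose: stop only the web service
```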
After removing the commands that exist in Docker, there are currently only two non-deprecated commands that are unique to Compose and they are big ones.
The first is up.
The up command performs the actions required to instantiate the application described in a Compose file. It starts by creating default and named networks as applicable, and any named volumes.
It then takes the actions required to bring up service containers. This includes building images if required, then creating and starting containers, and finally attaching to the containers to aggregate output and error streams from the containers. When the command exits, the containers are all stopped. However, you can use the -d option to let the containers run in detached mode.
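At the terminal, the common variations look like this (a sketch; actual behavior depends on your Compose file):

```shell
docker-compose up           # build if needed, create, start, and attach to containers
docker-compose up -d        # same, but leave the containers running detached
docker-compose up --build   # force images to be rebuilt before starting
```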
The up command is also responsible for performing change detection when you bring up an application that has already been brought up. It will recreate containers with changed configuration and join them to the appropriate networks. Any connections that were established with the original container are closed. There are several options for configuring how Compose does this if the default behavior isn’t what you want.
The other command unique to Compose is down. Down is a partial opposite of up.
What I mean by partial opposite is that, by default, down removes only containers and any named and default networks.
It won’t delete volumes or images that up created, unless you pass arguments instructing the command to do so. Up and down make it easy to perform integration tests in a continuous integration pipeline. You can simply wrap a test script between up and down to have the tests run in the isolated environment. Ok, with that, we’ve covered enough of the Compose CLI to try it out and to use the help argument to find out more information when needed.
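A CI wrapper along these lines might look like the following sketch (the Compose file name, service name, and test script are all assumptions, not a tested pipeline):

```shell
#!/bin/sh
# Bring up the isolated environment, run the tests, then tear it down.
docker-compose -f docker-compose.test.yml up -d
docker-compose -f docker-compose.test.yml exec -T web ./run_tests.sh
status=$?                                      # capture the test result
docker-compose -f docker-compose.test.yml down # remove containers and networks
exit $status                                   # report the test result to CI
```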
Here we are at my terminal. I want to demonstrate using the Compose CLI just to give a first look at it. We will cover more in-depth examples showing Compose in action in the remaining lessons in this course.
To start with, you can always get the usage information by appending --help on any command or docker-compose itself. I’ll pipe it into more to page through the output. I won’t read through the output since we’ve discussed most of what is shown. The help output finishes with a list of all the available commands.
To get more information on the up command, I’ll enter docker-compose up --help. I’ll jump down to the options to see what’s available for configuring the behavior of the up command. Just as an example, --no-deps can be used to prevent starting linked services. This doesn’t sound very useful when you first bring an application up, but if you later modify the configuration of one service, it can be useful to not restart the services that it depends on. As another example, adding the --remove-orphans option can clean up any services that are no longer declared in a Compose file. This can happen if you delete a service outright from a Compose file or if you rename one.
To finish up, I’ll demonstrate how to use docker-compose’s config command to debug any YAML or configuration errors. You will also see how config shows you the effective configuration that is used by Compose after variable substitutions and extension field references.
If I switch over to VS Code, I have a Compose file open called 1-extension-fields.yml. It follows an example shown in the slides for using extension fields.
It’s using version 3.4, which is good because that’s the earliest version that supports extension fields. To see if everything is OK with the file, run the config command on it. At the terminal, I’ll use the -f option to specify that file as the Compose file to use. Compose reports an error that services.cache.command contains true, which is not valid. If I jump back to Code, it seems strange at first because there are no instances of true in the file. But remember that several words get mapped to true in YAML, and yes is one of them. Code has even changed the color to indicate that it isn’t a string value. I’ll add quotes around it to correct the error.
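As a minimal illustration of this YAML pitfall (the service and command here are assumptions, not the file from the demo):

```yaml
services:
  cache:
    # Unquoted, YAML 1.1 parses this as the boolean true, which fails
    # validation because command must be a string or list:
    #   command: yes
    # Quoted, it stays a string and passes docker-compose config:
    command: "yes"
```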
Running config again reveals a different error. The cache service doesn’t set an image or build key, so it can’t be created. I’ll set the image to redis, but I want to use variable substitution to set the tag, like so. Now when I run the config command again, there are no errors and the effective configuration is displayed. Here you can see the default-logging YAML references have been replaced with the associated configuration under each service’s logging key. Compose reports a helpful warning at the top about the REDIS_VERSION variable not being set, so an empty string is substituted. You can see that in the displayed configuration. That will need to be corrected. I’ll export the variable
And the config command output confirms the variable is substituted into the configuration. I’ll leave it at that for now. We’ll see several more examples of Compose at the command-line in upcoming lessons.
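As an aside, Compose’s variable substitution follows shell conventions, including a default-value form available in newer Compose file formats. This sketch uses plain shell to show the behavior (the REDIS_VERSION values are illustrative):

```shell
# ${VAR} substitutes the variable's value; ${VAR:-default} falls back
# to a default when the variable is unset or empty.
unset REDIS_VERSION
echo "image: redis:${REDIS_VERSION:-4.0}"   # prints: image: redis:4.0

export REDIS_VERSION=3.2
echo "image: redis:${REDIS_VERSION:-4.0}"   # prints: image: redis:3.2
```

Using the `${REDIS_VERSION:-4.0}` form in the Compose file would have avoided the empty-string warning entirely.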
This lesson started by outlining some of the features of the Compose CLI, including how it can create multiple isolated environments on the same host. This makes it appealing for continuous integration, testing, and development scenarios. It also has built-in Compose file change detection support to only do what is required to bring the application to the desired state described in a Compose file.
We saw that the docker-compose CLI follows the same pattern as the docker CLI. Most of the commands in docker-compose are analogous to ones you find in docker.
There are, however, two important commands that are unique to Compose. Up does everything required to bring an application described in a Compose file up, and down brings an application down, removing containers and networks, but leaving images and volumes untouched by default.
In the next lesson, we’ll go a step further and bring up a web application with Compose using pre-built images from Docker Hub. When you are ready to see more of Compose in action, continue on with the next lesson.
About the Author
Logan has been involved in software development and research since 2007 and has been in the cloud since 2012. He is an AWS Certified DevOps Engineer - Professional, AWS Certified Solutions Architect - Professional, Microsoft Certified Azure Solutions Architect Expert, MCSE: Cloud Platform and Infrastructure, Google Cloud Certified Associate Cloud Engineer, Certified Kubernetes Administrator (CKA), Certified Kubernetes Application Developer (CKAD), Linux Foundation Certified System Administrator (LFCS), and Certified OpenStack Administrator (COA). He earned his Ph.D. studying design automation and enjoys all things tech.