Docker Compose Parts
In this lesson, we will discuss how Compose handles multiple Compose files. We will cover a few things to consider when using Compose in production environments. Finally, we will review the concepts discussed in a demonstration.
We will start by discussing the Compose feature that allows it to combine multiple files. You will see how Compose processes each file in turn, with each subsequent file overriding the configuration of the ones before it.
We will cover several Compose command-line options and commands for overriding configurations:
- -f: specifies non-default Compose files; can be used multiple times.
- docker-compose config: displays the effective configuration after all the files are combined.
- -H: manages the application on a different Docker host.
Then we will see an example of how multiple Compose files interact and override each other.
We will discuss several Compose file considerations for moving to a production environment:
- Remove any volumes for source code.
- Consider using the always restart policy so services will automatically bring themselves back up if they exit.
- Avoid host port conflicts that can prevent an application from coming up by letting Docker choose the host ports to use.
- Reduce log verbosity and disable debug information using environment variables.
- Consider additional services that may be useful in production: monitoring and log aggregation services.
Finally, we will see how to manage a multi-container application in multiple environments through a hands-on demonstration.
We saw how to use Compose to build an image in a development scenario where the code is not ready to be sealed into the image. But what about once the code is ready? How can you use Compose to make the production image? Do you need to use two independent Compose files and Dockerfiles? This lesson will clear up these questions.
I’ll start with a discussion of how Compose handles multiple Compose files.
Then I’ll mention a few considerations for using Compose for production environments.
I’ll finish by reviewing the concepts we discuss in a demo. The demo extends the app from the previous lesson to use Compose for development and production environments.
Multiple Compose Files
Although it’s an option to maintain completely separate Compose files for each environment, Compose has a useful feature that can combine Compose files.
When combining Compose files, Compose treats the first file as a base configuration, and each additional file overrides configuration specified in the files before it. Override files can also add configuration that isn’t present in the base configuration, not only replace existing values.
By default, Compose reads two files: the familiar docker-compose.yml, plus an optional override file named docker-compose.override.yml.
The -f Compose option can be used multiple times to specify non-default Compose files, with each file overriding the ones before it.
The docker-compose config command is useful when writing and debugging multiple Compose files. It displays the effective configuration after everything is combined.
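As a minimal sketch, a hypothetical development override file could be layered on top of the base docker-compose.yml like this (the file name and the volume path are illustrative, not from the lesson):

```yaml
# docker-compose.dev.yml -- a hypothetical development override.
# Apply it on top of the base file with:
#   docker-compose -f docker-compose.yml -f docker-compose.dev.yml up -d
# and inspect the merged result with:
#   docker-compose -f docker-compose.yml -f docker-compose.dev.yml config
version: '3'
services:
  web:
    volumes:
      - ./:/src   # mount local source over the code baked into the image
```

Note that the -f flags are order-sensitive: each file overrides the ones listed before it.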
The -H option of Compose lets you manage the application on a different Docker host, which is useful because different environments are often on different machines.
Multiple Compose Files Example
Let’s consider an example that uses Compose in two environments: one that is not intended for development, and another that is. The Dockerfile for the application is shown here. It follows the non-development image pattern of copying all of the source code into the image and installing the dependencies. Note that the source files are in the /src directory.
Here is the docker-compose.yml file, which plays the role of the base configuration in our example. Compose is instructed to build the web image using the current directory as the context, along with publishing a port on the host, and running a redis container. During the build the source files in the current directory get added to the image. The image can then create containers that don’t have any dependency on source files outside of the container. This is what you want in a non-development scenario. Let’s see how an override Compose file can extend the application to work in development scenarios. Can you guess?
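A base configuration along the lines just described might look like this (a sketch; the published port and Redis image tag are illustrative):

```yaml
# docker-compose.yml -- base configuration (sketch).
version: '3'
services:
  web:
    build: .          # bake the source in the current directory into the image
    ports:
      - "80:3000"     # publish a port on the host (illustrative values)
  redis:
    image: redis
```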
On the right, I’ve shown a development override Compose file. To use source files on your local machine instead of the copies inside the image, you can use a volume. The example mounts the current directory at /src in the container. Because of the way that layering works in images, the files that are in the container are effectively overwritten by the volume. With this override, you can modify the source files on your local machine and see the changes reflected in a running container. It isn’t always possible to use a common Dockerfile for every environment in which you intend to use the container. In that case, you can override the Dockerfile for each environment using the build’s dockerfile configuration in each Compose file.
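A development override along those lines might be sketched like this (the /src mount path follows the Dockerfile described above; the dev.dockerfile name is a hypothetical example of a per-environment Dockerfile):

```yaml
# docker-compose.override.yml -- development override (sketch).
version: '3'
services:
  web:
    build:
      context: .
      dockerfile: dev.dockerfile   # only needed if the environments can't share one Dockerfile
    volumes:
      - ./:/src    # local source files shadow the copies baked into the image
```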
Moving to production environments is worthy of a course of its own. I will mention a few Compose file considerations for moving to a production environment, but know that there is more to it.
Remove any volumes for source code. You want the code to be frozen inside of a production image.
Consider using the always restart policy so services will automatically bring themselves back up if they exit.
Avoid host port conflicts that can prevent an application from coming up by letting Docker choose the host ports to use. You do this by only specifying the container port in the ports sequence.
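In Compose file terms, that means listing only the container port (a sketch; the port number is illustrative):

```yaml
# Only the container port is given, so Docker assigns a free host port.
services:
  web:
    ports:
      - "3000"
```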
Usually the runtime and the application have environment variables to configure production mode. This may reduce verbosity of logs and disable debug information.
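For example, for a Node.js application the conventional variable is NODE_ENV (a sketch; adjust the variable for your runtime):

```yaml
# Configure production mode via an environment variable.
services:
  web:
    environment:
      - NODE_ENV=production
```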
The last consideration I’ll mention is to think about additional services that may be useful in production, for example monitoring and log aggregation services. Now let’s wrap up with a demo illustrating some of the concepts of using multiple Compose files for development and production environments.
I have a project open that is similar to the example in the previous lesson, except I’ve modified it to use multiple Compose files. The development Dockerfile is the same as before. This is the file, dev.dockerfile, to refresh your memory. Because the application uses different ports and default commands for development and production, I’ve written a separate production Dockerfile.
This is it here, prod.dockerfile. It doesn’t install any development dependencies like nodemon and it copies all of the source files into the image, not just the dependency file. The exposed port has changed from 3000 to 8080 as well.
Now let’s get to the main subject, the Compose files. This is the base Compose file, docker-compose.yml, that has configuration that is common to both environments. Then each environment has its own override file to add its own unique configuration. Alternatively, you could use the base configuration for the production environment, and have a single override file for development that not only adds but actually overrides settings in the base configuration. The configuration in this file is similar to the Compose file in the previous lesson’s demo with the development specific configuration removed. One change is that the image has been given a registry URL. This allows you to later use docker-compose push to push the production image to a corporate registry, for example.
Now let’s look at the development override Compose file. This is the development-specific configuration extracted from the single configuration in the previous lesson. When you tell Compose to use the base configuration plus this configuration as an override, it effectively reproduces the single development-environment Compose file from the previous lesson. We’ll see exactly how they combine with the docker-compose config command in a moment.
Now on to the final file we’ll look at: the production override Compose file. I’ve used the name prod, but the image could, and probably should, be used for automated testing in a continuous integration system and/or staging before going into production. First, note the build configuration is set to use prod.dockerfile. We can also see several of the production considerations manifested in this override file: there is no volume for the app, both services are configured with the always restart policy, no specific host port is set to avoid port conflicts, and a production environment variable is set to configure the application for production. The last override is that the database is set to use a named volume to persist its data. In development the volume was considered optional, but we definitely want one in the production environment. Now let’s take a look at how to use multiple Compose files on the command line.
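Pulling the production considerations together, a production override like the one described might be sketched as follows (the service and volume names, the environment variable, and the port are illustrative; only prod.dockerfile and the exposed port 8080 come from the demo itself):

```yaml
# docker-compose.prod.yml -- production override (sketch).
version: '3'
services:
  web:
    build:
      context: .
      dockerfile: prod.dockerfile
    restart: always
    ports:
      - "8080"            # container port only; Docker picks a free host port
    environment:
      - NODE_ENV=production
  redis:
    restart: always
    volumes:
      - redis-data:/data  # named volume to persist the database's data
volumes:
  redis-data:
```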
I’ll focus on using the config command to show the effective configuration when you specify override files. I’ll start by validating the configuration for the development environment. The command just adds an extra -f for the development override file. The output shows the combined configuration that Docker Compose will use. The output of config is more explicit than what was in the Compose files, using absolute paths and spelling out values that were implicitly null. But we can clearly see that options from both configurations are there, for example, the base configuration’s image and the development override’s source code volume. I’ll clear that and have one more go at it, this time specifying the production override file. And here we can see the effective configuration for the production environment, for example, the always restart policy on both services.
In this lesson, you saw how multiple Compose files and Docker Compose’s override feature make it easy to manage your multi-container applications in multiple environments. We also discussed some considerations for when one of those environments is production. When you are ready, continue on to the next lesson, where we’ll wrap up the course.
Logan has been involved in software development and research since 2007 and has been in the cloud since 2012. He is an AWS Certified DevOps Engineer - Professional, AWS Certified Solutions Architect - Professional, Microsoft Certified Azure Solutions Architect Expert, MCSE: Cloud Platform and Infrastructure, Google Cloud Certified Associate Cloud Engineer, Certified Kubernetes Administrator (CKA), Certified Kubernetes Application Developer (CKAD), Linux Foundation Certified System Administrator (LFCS), and Certified OpenStack Administrator (COA). He earned his Ph.D. studying design automation and enjoys all things tech.