Docker Compose Parts
In this lesson, we will begin by getting into the Compose file configuration and Compose commands necessary for building images.
We will discuss how to build images with Compose, what the build key is, and how it is used in the Compose file. We will also review the commands and options used to build and rebuild images.
We will look at the two forms of build configurations in a Compose file: short and long forms:
- Short Form: sets the build value to the path of the build context where the Dockerfile is located.
- Long Form: uses a nested mapping in which the context key is required and has the same meaning as the short-form value.
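As a sketch, the two forms might look like this in a Compose file (the service and directory names are illustrative, not taken from the lesson):

```yaml
services:
  web:
    # Short form: the value is the path to the build context
    build: ./web

  api:
    # Long form: a nested mapping; context is required
    build:
      context: ./api
```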
We will explain the naming pattern for built images, and how to change their names or tags. We will see how the docker-compose command, along with several optional flags, builds and rebuilds images.
Finally, we will spend the rest of the lesson working hands-on to illustrate how to build in Compose. We will put the concepts outlined previously to work as we develop an image with Compose that will allow on-the-fly updates as you modify the source code.
Up until now, we have seen Compose working with images pulled from a Docker registry. Can you use Compose in development scenarios when the code isn’t ready to be sealed in an image? How do you build images with Compose? These are the questions I’ll answer in this lesson.
I’ll begin by getting into the Compose file configuration and Compose commands needed for building images.
With that foundation in place, I’ll finish the lesson with a demo that illustrates how to use Compose to build an image in a development scenario. Compose will bring the application up and your code changes will be reflected in the running container without needing to rebuild or stop the container. This provides a similar experience to developing on your local machine without Docker, but the server and all of the application dependencies are running inside a container.
Building in Compose
When you build images in Compose, you make use of the same tried and true Dockerfiles that you use when building images with the docker build command. We won’t get into the details of Dockerfiles in this lesson, but I’ll quickly review one in the demo.
To instruct Compose to build an image, add the build key to a service’s configuration. More than one service in a Compose file can have a build mapping.
The docker-compose up and docker-compose build commands can be used to build images. We’ll take a closer look at the build key and these commands in the next few slides.
If a service has a build key present, Docker Compose will build the image for the service. There are two forms of build configurations in a Compose file. The short form sets the build value to the path of the build context which is where the Dockerfile is located.
The long form uses a nested mapping. The context key is required and has the same meaning as the context in the short form. The dockerfile key is optional; if specified, its value is the name of the Dockerfile to use, and if not, a file named Dockerfile is assumed to contain the image build instructions. The args key is optional as well, and can be used to pass argument values at build time. The Dockerfile should have corresponding ARG instructions.
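A hedged sketch of the long form with all three keys (the service name, paths, and argument are made up for illustration):

```yaml
services:
  api:
    build:
      context: ./api               # required: path to the build context
      dockerfile: dev.dockerfile   # optional: defaults to "Dockerfile"
      args:
        NODE_VERSION: "18"         # passed at build time; the Dockerfile
                                   # should contain: ARG NODE_VERSION
```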
The built image will be given a name that follows the pattern of the Compose project name followed by the service name. If you want to use a different name, or want to specify a tag besides the default latest, you can do so by using the image key. Previously, the image key specified an image to pull from a Docker registry, but when a build configuration is present for a service, the image key is interpreted as the name to give the built image.
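For example, a sketch of overriding the default name and tag with the image key (the name and tag here are illustrative):

```yaml
services:
  app:
    build: .
    image: accumulator:1.0   # overrides the default <project>_<service>:latest name
```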
docker-compose up will build images for any services that don’t have one already built. Subsequent up commands won’t rebuild the images unless you pass the --build option. This might not give you enough control over built images, so there is another command for building.
docker-compose build will build images, or rebuild them if they already exist. Just like with the docker build command, there are a couple of options to customize the behavior of docker-compose build. The --no-cache option prevents use of the layer cache, causing all layers to be rebuilt. The --pull option always attempts to pull a newer version of the base images referenced in the Dockerfile. That’s all there is to building in Compose.
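Summarizing the build-related commands covered so far as a quick command sketch:

```shell
docker-compose up --build        # rebuild images before starting containers
docker-compose build             # build (or rebuild) every service with a build key
docker-compose build --no-cache  # ignore the layer cache; rebuild all layers
docker-compose build --pull      # always attempt to pull newer base images
```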
Now, we’ll get into a demo to illustrate building in Compose. The demo will use a NodeJS project that uses MongoDB for persistence. The image shows the app. It simply accumulates whatever messages users enter. The goal of the demo is to build an image with Compose that allows on-the-fly updates as you modify the source code. No rebuilds, and no stopping the containers. This gives the instant feedback that developers crave. Let’s see how to do that.
Here in VS Code, I have a Dockerfile for the project open. It’s called dev.dockerfile. I’ll try to stay as language-agnostic as possible, but the RUN instructions are specific to NodeJS development. At a high level, the instructions install the dependencies for developing and running the application. On line 5, nodemon is installed. nodemon is a tool that watches for changes to development files and automatically restarts the server to reflect the changes. On line 17, nodemon is set as the default command for running a container using the image. On lines 8 through 11, the src directory is created and set as the working directory. Then the application dependencies file, package.json, is added to the src directory in the image. The npm install command installs all of the dependencies in the src directory. Note that only the dependency file is added, not any source files. The image has everything the code needs to run, but not the code itself. The development server port of 3000 is exposed on line 14.

So how will this image be used to develop the code? The default command expects a file at /src/app/bin/www to start the server, but that file doesn’t exist in the image. How will that work? The answer to both questions is by mounting a volume. Specifically, the source will be mounted at /src/app. The default command will then start a server using the code in your development environment. Let’s take a look at the Compose file, which I’ve called dev.docker-compose.yml.
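Based on that description, dev.dockerfile might look something like the following sketch (the exact contents, base image, and line positions in the demo may differ):

```dockerfile
FROM node

# Install nodemon to watch for file changes and restart the server
RUN npm install -g nodemon

# Create the src directory, make it the working directory,
# and install the application dependencies listed in package.json
RUN mkdir /src
WORKDIR /src
ADD package.json /src
RUN npm install

# Expose the development server port
EXPOSE 3000

# Start the server with nodemon; the application code itself
# is expected to be mounted as a volume at /src/app
CMD ["nodemon", "/src/app/bin/www"]
```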
There are two services, app and app-db, that are in the backend network. App publishes port 3000 so that the host can access the development server. There are a couple of environment variables to configure NodeJS for development and to pass the hostname of the database. What’s most important for this lesson is the build configuration. Because the Dockerfile doesn’t have the default name, the mapping syntax is required. The context is ., representing the directory of the Compose file, which is also where the Dockerfile is. The volumes key also plays an important role. The src directory on the host is mounted at /src/app, where the development server expects to find it. Thanks to the image key in the configuration, the built image will be named accumulator and will receive the default tag of latest. Let’s bring up the application using the Compose CLI.
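Putting the pieces just described together, dev.docker-compose.yml might look roughly like this sketch (the environment variable names and database image are assumptions, not shown in the lesson):

```yaml
version: "3"

services:
  app:
    build:
      context: .                   # the directory of the Compose file
      dockerfile: dev.dockerfile   # non-default Dockerfile name
    image: accumulator             # built image name; tagged latest by default
    ports:
      - "3000:3000"                # expose the dev server to the host
    environment:
      - NODE_ENV=development       # illustrative names
      - DB_HOST=app-db             # hostname of the database service
    volumes:
      - ./src:/src/app             # mount source so nodemon sees edits
    networks:
      - backend

  app-db:
    image: mongo
    networks:
      - backend

networks:
  backend:
```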
I’ll use the up command, which builds the image since there is no prior image to use. I’ll skip ahead to the build part. Each of the instructions in the Dockerfile is executed just like with docker build. I’ll jump ahead to when the image is ready. There are some harmless warnings because some optional dependencies are specific to macOS, but the image is Linux-based. The output reports that the accumulator:latest tag is used. Compose also gives a helpful warning telling you to use the build command or pass the --build option to rebuild the image. Let’s verify the app is up and running.
I’ll point my browser to localhost on port 3000 and voila, the accumulator app is up and running. I’ll enter some messages and refresh the page to ensure they are persisted in the database. Everything looks to be working.
I’ll hop back to VS Code, and edit one of the views by adding a colon after Enter messages to accumulate, to confirm that the change gets updated in the browser. I’ll save that change, and refresh the browser.
I’ll add some exclamation marks at the end of the development environment notice that appears in the upper right corner. Going back to the browser and refreshing, we see the changes reflected. No stopping the container and no build command required.
To show that nodemon detected the change, let’s look at the app service’s logs. There it is in green: restarting due to changes. That’s pretty cool. You can use the development image to bundle up all the dependencies, and all you need on your machine are the source files. You don’t need the dependencies installed locally. By relying on the image for dependencies, you are a step closer to having parity between development and production, because the production image would be using the same dependencies. The chance of the code working on your machine but not in production is greatly reduced. We’ll look more at dev-prod parity in the next lesson.
I’ll take the application down now. And if I bring it back up, do you think the test messages I entered into the application will still be there?
Let’s refresh the page and see. In this case, the messages are gone. Recall that the db service isn’t using a volume, so once the container is removed, everything is gone. That gives an easy way to start fresh when developing this app, but you could easily add a volume if you wanted to persist the messages.
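If you did want the messages to survive container removal, a named volume on the database service would do it. A minimal sketch, assuming the database is MongoDB (which stores its data under /data/db):

```yaml
services:
  app-db:
    image: mongo
    volumes:
      - db-data:/data/db   # persist database files across container removal

volumes:
  db-data:
```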
This lesson illustrated how to build images and develop with Compose. The next lesson will build upon what we’ve learned in this lesson to adapt Compose to multiple environments, so you can share common configuration between development and production. When you are ready, continue on to the next lesson to see how it’s done.
Logan has been involved in software development and research since 2007 and has been in the cloud since 2012. He is an AWS Certified DevOps Engineer - Professional, AWS Certified Solutions Architect - Professional, Microsoft Certified Azure Solutions Architect Expert, MCSE: Cloud Platform and Infrastructure, Google Cloud Certified Associate Cloud Engineer, Certified Kubernetes Administrator (CKA), Certified Kubernetes Application Developer (CKAD), Linux Foundation Certified System Administrator (LFCS), and Certified OpenStack Administrator (COA). He earned his Ph.D. studying design automation and enjoys all things tech.