Building Images



This lesson is all about building our own Docker images with Dockerfiles and Docker Build.

We will start building our image by creating a new Dockerfile and adding a file to the image with the COPY instruction.

We will get into building the image with Docker Build, including the values required to build the image.

From here we will use hands-on practice to detail the process from creating an image through pushing it to the Docker Registry:
- Starting the container for the new image
- Creating a simple web application to demonstrate how to build a Docker image from scratch
- Step-by-step walkthrough to build the Dockerfile
- Updating packages and installing Node.js
- Copying server.js to the root directory on the image file system
- Starting a Docker container using the exposed ports with the image we just built
- Reworking our image to use the Express.js web framework, including creating a new directory for the source code
- Creating the application directory
- Reviewing and removing the Dockerfile boilerplate
- Exposing our port

Finally, you will build, push, and share your image to the Docker registry using Docker Push.


Hello and welcome back to the Introduction to Docker course from Cloud Academy. I'm Adam Hawkins and I'll be your instructor for this lesson.

This lesson is the longest one yet, so grab a coffee and get strapped in. This lesson is all about Dockerfiles and Docker Build, so, of course, we'll be working with Dockerfiles and pushing Docker images. This is a long lesson, so there are many subtle things in our learning objectives. We'll build a Docker image and we'll apply Docker to our development workflow. So, this is your last warning: this is going to be a long lesson. Ready to go? Awesome, let's rock!

The previous lessons covered using other people's images. It's time to get down and dirty building our own images. Consider our previous NGINX example. We used a volume mount to share files with a container. Instead, let's take the correct approach of building an image that includes our files. This all starts with a Dockerfile and Docker build. The Dockerfile defines everything that should go into an image, such as files, possible volumes, ports to expose, environment variables, and the default command.

Let's build our first image now. First, create a new Dockerfile. The Dockerfile has its own syntax, which we'll cover in this lesson. Each line is an instruction. The first instruction is FROM, by convention. FROM declares the base image, or where to start from. I'll use the nginx base image. Everything in the base image is now available in this image. This includes settings for ports, volumes and anything else.

We apply customizations on top by adding our files or overwriting any previous settings. Add a hypothetical file to our Docker image. This file doesn't exist yet, so we'll need to create it before we can build the image. The COPY instruction takes the files given in its first argument and adds them to the path in its second argument. Here, the file is added to the directory where NGINX expects to find content. That's enough for this simple example. Create the hypothetical file mentioned in the Dockerfile. I'll throw some text in there.
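A minimal sketch of such a Dockerfile, assuming the file is named hello.txt (the destination path is NGINX's default content directory):

```dockerfile
# Start from the official nginx base image
FROM nginx

# Bake our file into the directory nginx serves content from
COPY hello.txt /usr/share/nginx/html/hello.txt
```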

We're ready to build our first Docker image with docker build. There are two important values. The -t flag sets the name and tag. The final argument sets the build context directory, which is the directory files may be copied from. A dot represents the current directory. Docker build produces a lot of output. Notice that it prints everything in a stepwise fashion. There's a lot of useful debugging information here, which we'll come back to later.
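The build command itself might look like this, where my-nginx is a hypothetical name and the dot is the build context:

```shell
docker build -t my-nginx .
```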

Start a container using the new image. Now, fire off a curl to hello.txt. Everything works as expected. Our Docker container serves the custom file. This simple example is about as far as we can go in Step One. Dockerfiles have many features. We need to start building a real application to dive deeper.
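Assuming the image was tagged my-nginx, running and testing it might look like this (image name and host port are illustrative):

```shell
# Run the container in the background, mapping host port 8080 to nginx's port 80
docker run -d -p 8080:80 my-nginx

# Request the file we baked into the image
curl http://localhost:8080/hello.txt
```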

Let's create a simple web application to demonstrate building a Docker image from scratch. I'll use Node.js to write a simple Hello World application. Don't worry about the programming aspect. I promise you, we won't have more than 10 lines. Like before, start by creating a directory for the source code. Node works nicely because we can create a web server without the need for external libraries. We only need two files: server.js and a Dockerfile. I've already got server.js ready, so I'll paste it in for now.

Time for the real work: building the Dockerfile. We'll walk through writing it step by step. Start with FROM, like before. I'll use ubuntu 14.04 because it's easy enough to install Node on. Note the 14.04 tag. I've specified the tag here to be explicit about which Ubuntu to use. It's always better to be explicit; this will save you headaches later on. The next step is to install Node and add our files. Node.js recommends installation via a script. This script follows the curl-bash pattern, so we'll need to install curl like we did before. We do this with a RUN instruction. Add commands to update the package lists and install curl. Note that these commands include the -y option to skip the confirmation prompt. Docker build is a non-interactive process, so an interactive prompt will not work as expected. Check the progress so far by building the image. Everything works as expected.
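The Dockerfile so far might look like this:

```dockerfile
FROM ubuntu:14.04

# -y skips the confirmation prompt; docker build is non-interactive
RUN apt-get update -y
RUN apt-get install -y curl
```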

Let's move on to the next step of installing Node.js. This line is copied and pasted straight from the documentation. Build again. Notice how Docker did not run the first two steps again, because they have already been done before. Each image is made of multiple layers, generated by the commands in the Dockerfile. Docker knows the layer for step one is already built, and the same for step two. However, if we change a command, then that step and all of the following steps would need to run again. Add the RUN instruction to install the Node.js apt package. Build again. It's not required to build at every step, but it's nice in the beginning when building an image for the first time. This way, each layer is cached and changes may be tested quickly.
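Following the curl-bash pattern from the Node.js docs, the installation steps might look like this (the setup script URL and version are assumptions, based on the node 4 image used later in the lesson):

```dockerfile
# Fetch and run the NodeSource setup script, then install the package
RUN curl -sL https://deb.nodesource.com/setup_4.x | bash -
RUN apt-get install -y nodejs
```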

Here we copy server.js to the root directory of the image file system. Normally, you would not do this; instead, you would use a dedicated directory for all of the source files. However, it's fine for this simple demonstration. The server starts on port 8080, so add an EXPOSE instruction. Finally, set the default command via CMD. CMD takes a JSON array of arguments. Write the CMD to start the server. Build again, and now it's ready for a spin.
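The remaining instructions might look like this:

```dockerfile
# Not best practice: real projects should use a dedicated source directory
COPY server.js /server.js

# The server listens on 8080
EXPOSE 8080

# CMD takes a JSON array: the executable followed by its arguments
CMD ["node", "/server.js"]
```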

Start a Docker container in the background using the correctly exposed ports with the image we just built. Now, fire off a curl. Look at the response back from the node server. Not bad, huh? This example gives us some pretty decent exposure to the Dockerfile basics. However, real-world applications don't only include a single file. They have dependencies and usually many, many files. Let's move on to a more real-world example.
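Putting it together, with a hypothetical image name of hello-node:

```shell
docker build -t hello-node .

# Run in the background, mapping the exposed port to the host
docker run -d -p 8080:8080 hello-node

curl http://localhost:8080
```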

Let's rework our example to use the simple Express.js web framework. First, create a new directory for the source code. We'll need Node.js installed on our host to bootstrap the project; I already have it installed. Run npm init to bootstrap the new project. The command will prompt you for many things; just hit Enter through them, since nothing is actually required in this case. Now, add express to the dependencies. Next, update server.js to use express. I've already got this on my clipboard, so I'll paste it into my editor. Then we can set about updating the Dockerfile accordingly.
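The bootstrap steps might look like this (the directory name is illustrative):

```shell
mkdir hello-express && cd hello-express

# Hit Enter through the prompts; nothing is required here
npm init

# Record express in package.json's dependencies
npm install --save express
```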

Let's use the previous version as a base. Create the application directory. Do this after the dependencies are installed, because the dependency layers are less likely to change than the source code. I like to use /app; this is my personal preference, though the official images have their own conventions. Use the WORKDIR instruction to change the current directory inside the image to the source code directory. Subsequent commands run from this directory during docker build. Now, copy over package.json and then run npm install so the dependencies are available in the image. Next, copy everything else into the image. COPY can take a directory, which recursively copies all of the files instead of adding each file individually.

Lastly, change the CMD to npm start. This is an npm convention. This change also requires a change to package.json, which we'll do next: add a start script to run server.js. Time to build the image. I'll use the v2 tag to differentiate it from past examples. Now, start a container using the new image and fire off a curl to localhost:8080. Look at that: hello world. This Dockerfile contains a bunch of boilerplate. We certainly don't need to install Node.js ourselves. We should use an image with it already baked in. This is where the official library comes into the picture. The Docker community maintains an official library of base images for many common applications. We've been using them this whole time. These images do not have a slash in their name.
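The complete v2 Dockerfile might look like this, with /app as the application directory:

```dockerfile
FROM ubuntu:14.04
RUN apt-get update -y && apt-get install -y curl

# Install Node.js (script URL assumed, per the NodeSource pattern)
RUN curl -sL https://deb.nodesource.com/setup_4.x | bash -
RUN apt-get install -y nodejs

# Create the app directory and make it the working directory
RUN mkdir /app
WORKDIR /app

# Install dependencies first so this layer is cached across source changes
COPY package.json /app/
RUN npm install

# Copy the rest of the source recursively
COPY . /app

EXPOSE 8080
CMD ["npm", "start"]
```

This assumes package.json contains a start script along the lines of `"start": "node server.js"`.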

So redis, nginx, and ubuntu are all official images, whereas foo/bar is an image owned by the user foo. Reusing official images is encouraged because you're more likely to share layers with them than with your own images. The official library contains images for most programming languages with appropriate version tags, for databases, and even for some frameworks. Odds are, there is an official image for your stack.

Let's work through shortening our example to use the official Node.js image. Start by changing the base image and removing all of the boilerplate Node.js installation code. Simply change the base image to node:4. This would actually be enough if we only needed Node.js installed, but there is more we can do. Dockerfiles usually have a similar structure. First, install the runtime and/or compile-time dependencies. Next, create a directory for the application code to live in. Then, copy over the application's dependency manifest file, such as package.json. Next, install the application libraries. Finally, copy over all the files, define the ports, volumes, or anything else, and set the default command.
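With the official image, the Dockerfile shrinks to something like:

```dockerfile
# The official image brings Node.js and npm with it
FROM node:4

RUN mkdir /app
WORKDIR /app

# Dependencies first, for layer caching
COPY package.json /app/
RUN npm install

COPY . /app

EXPOSE 8080
CMD ["npm", "start"]
```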

The steps are generally the same but vary by language. The official library also includes onbuild images for typical use cases. These base images contain special ONBUILD instructions, which let the base image specify commands to run when it is used with FROM. Let's replace even more of our own boilerplate with the onbuild image. We can remove the instructions that create a directory, install the dependencies, and even copy over the code. CMD is no longer required either, because the onbuild image already sets it to npm start by convention.

All that's left is exposing our port. I must say, that's pretty nice. We are able to go from something like 16 instructions to just two. The on-build images are definitely handy when your application follows the established pattern.
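The result might look like this (node:4-onbuild is assumed to be the corresponding onbuild variant; its ONBUILD triggers handle copying the source and running npm install):

```dockerfile
FROM node:4-onbuild

EXPOSE 8080
```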

Now, our application is ready to go as a Docker image. Time to build, push and share it. Let's build the image now to demonstrate how on-build instructions work. This is my first time using the image, so it must be pulled. Note the on-build triggers. This is everything the on-build image is doing in the context of building our own Docker image. We've built our first real image using community-established best practices.

Time to push our handiwork to the Docker registry. Docker provides free public image hosting on the official registry. You need to create a Docker Hub account before you can push images. Head over to Docker Hub to create one, if you haven't already. Once you have your username and password, you can log in on the CLI. I already have an account, so I can log in now.

Use docker push to push the image we just built. You can omit the tag, and Docker will push every tag for the image. This is not recommended because you may overwrite existing tags. Tags are mutable: I can build a new image, tag it as v2, push it, and overwrite what was there. You should always, always be explicit when pushing images. The output looks like docker pull's, but in reverse. Great, we've just pushed our first image. Now anyone can pull it, run it, and get a simple Hello World.
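Logging in and pushing with an explicit tag might look like this (the username and image name are placeholders):

```shell
docker login

# Always push an explicit tag; omitting it pushes every tag
docker push myuser/hello-express:v2
```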

This lesson was long and jam-packed. We covered everything you need to know to build and share your first production-ready Docker image built in accordance with community best practices.

The remainder of the course explores the Docker toolchain and demonstrates what exactly you can do with Docker. The next lesson is on building multi-container applications.

About the Author

Adam is a backend/service engineer turned deployment and infrastructure engineer. His passion is building rock-solid services and equally powerful deployment pipelines. He has been working with Docker for years and leads the SRE team at Saltside. Outside of work he's a traveller, beach bum, and trance addict.