Fundamentals of Docker Containers and Docker Images (cont.)

Contents

Introduction
Getting Started (2m 45s)

Summary
Wrap Up (4m 43s)

Overview

Difficulty: Beginner
Duration: 1h
Students: 1807
Ratings: 5/5
Description

This lesson builds on the processes and procedures used in part 1 of the Docker Container Fundamentals. The objectives of this lesson are to use HTTP to communicate with a container and between containers, and to share files via Docker volumes.

First, you will recap the Docker fundamentals covered in the previous lesson. Then, we will begin discussing backgrounding Docker containers, exposing ports, linking and inter-container networking, Docker volumes, and finally, the container lifecycle.

We will begin discussing long-running processes by using the detach option to run an NGINX server in the background. Then, we will go through the docker ps command to check the container status, and explain how to find and work with exposed ports.

We will begin the hands-on portion by learning how to use curl to talk to containers. And we will cover how to map container ports to Docker host ports.

Next we will discuss how to free up ports by cleaning up containers, and then reuse those ports for new containers. The hands-on practice continues when we connect two containers together.

You will move data into and out of containers with Docker volumes, using Redis to demonstrate moving data out of containers.

Finally, you will learn how to save the data to disk, which is a predefined volume on the server.

Transcript

Hello, and welcome back to the Introduction to Docker course from CloudAcademy. I'm Adam Hawkins, and I'll be your instructor for this lesson.

This lesson continues with the Docker fundamentals. I recommend you watch Part 1 before continuing here. The topics for this lesson focus on starting and interacting with Docker containers. First we'll cover backgrounding Docker containers, exposing ports, linking and inter-container networking, Docker volumes, and finally, the container lifecycle. The learning objectives for this lesson are to communicate with a container over HTTP, communicate between containers over HTTP, and share files via Docker volumes.

All set? Let's head over to the terminal for the Docker Container Fundamentals Part 2. Our previous examples used Docker containers that did one thing and exited. Now it's time to talk about long-running processes like a web server or a database. The docker run command accepts a -d flag. This option is short for detach.

Let's start an NGINX server to play around with this concept. Docker will pull the image and start the container in the background when we run this command. This is the first time I've used this image, so it must be pulled. Docker prints the container ID immediately. This command is asynchronous. It returns immediately regardless of how long the actual process takes to start or whether the process started successfully. We need to inspect the state. Use docker ps to check the container status. We can see our container is running.
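
For reference, the commands in this step look something like the following sketch:

    # Start NGINX in the background; -d is short for detach
    docker run -d nginx
    # Docker prints the new container ID and returns immediately

    # Inspect the container state; STATUS should show "Up"
    docker ps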

Great, but how can we talk to it? Previously, we did not have anything in the PORTS column, but now we do. This is because the NGINX image declares exposed ports. We will cover this more later on in the course when we build custom Docker images. Exposed ports enable network communication with processes running in containers. The NGINX image declares port 80 and port 443. Let's try talking to the web server on port 80. We can do this with a simple curl.
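
Run from the Docker host, the attempt looks roughly like this:

    # Try to reach the containerized web server from the Docker host
    curl http://localhost:80
    # Fails with something like: curl: (7) ... Connection refused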

Hmm, connection refused. The container is running, so what's happening? Exposed ports are only for inter-container communication. We're not running curl from a container. We're running curl from the Docker host. There are two ways we can talk to this container. We could map the container ports to ports on the Docker host, or we could create a container connected to the NGINX container and run curl. Let's go with option one.

Docker provides a few ways to map container ports to Docker host ports. We'll use the simplest method for now. We'll use the -p option to declare a host-to-container port mapping. This command mapped Docker host port 8080 to container port 80. We have a new bit of information. Docker ps shows that 0.0.0.0:8080 is mapped to container port 80/tcp. Now we can take curl for a spin.
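
A sketch of the mapping step, using the same ports as the transcript:

    # Map Docker host port 8080 to container port 80
    docker run -d -p 8080:80 nginx

    # PORTS now shows 0.0.0.0:8080->80/tcp
    docker ps

    # Talk to NGINX through the mapped host port
    curl http://localhost:8080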

Voila, look at that nice HTML response. Too bad the terminal does not let us experience it in all its glory. It seems the server is running though. If we want, we could also expose port 443. This time, we use port 8081 and 8082. This is because another container is holding port 8080. Docker run accepts multiple -p options, so feel free to expose as many ports as needed. Docker run also accepts -P in upper case. This exposes all ports. Docker will find an open port and map it to a container port. Let's see how this works. The ps output shows the random ports chosen by Docker. We can also quarry these values programmatically via the docker port command, using the container ID or name.
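
Roughly, the three variations look like this (the container ID placeholder is whatever docker ps shows on your machine):

    # Map both exposed ports; 8080 is already held by the previous container
    docker run -d -p 8081:80 -p 8082:443 nginx

    # Upper-case -P maps every exposed port to a random open host port
    docker run -d -P nginx

    # Query the mappings programmatically by container ID or name
    docker port <container-id-or-name>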

Now is a good time to free up ports by cleaning up containers. Stopping is similar to terminating a process. Stopped containers can then be removed. This will free up ports and other resources. Once a container is removed, it cannot be restarted. Let's clean up everything now. First, stop all the Docker containers. Second, remove all the Docker containers. These two commands use a sub-shell to capture all the container IDs as arguments to docker stop and docker rm. docker ps -q only prints the container IDs. This is exactly what docker stop and docker rm expect. Check docker ps -a now. You can see we're fresh and clean.
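
The cleanup commands from this step:

    # Stop every running container, then remove all containers
    docker stop $(docker ps -q)
    docker rm $(docker ps -a -q)

    # Verify everything is gone
    docker ps -a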

I mentioned earlier there were two options to access the container. Let's consider the second option of creating a container to run curl. Docker makes this possible through a bit of networking magic. Docker contains a complex network stack to enable inter-container communication and various host-level abstractions. We'll focus on connecting two containers together.

First we must give the container a name. Then we can tell Docker to connect container A to container B when starting the container. Docker will set up a hosts file entry in container A so it can resolve container B. Things start off like last time, except this time, we provide the --name option to docker run. Linked containers will use this name to resolve the container. Note this time we did not include the -p option. Docker automatically allows traffic on exposed ports between containers. Now we can start a container linked to the test-server container.
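
A minimal sketch, using test-server as the container name:

    # Name the container so linked containers can resolve it.
    # Note: no -p option; exposed ports are reachable between containers.
    docker run -d --name test-server nginx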

Let's start a curl container. The --link option creates the connection. This container will be able to resolve the test-server host. I'll use an image that has curl baked in to skip the install step. We'll drop into a shell for consistency. Now we're sitting right back in the terminal. Try a curl. Voila, more glorious HTML output. We can also poke into the /etc/hosts file to see exactly what Docker has done for us. Docker has added entries for the container name and container ID that point to the Docker-generated IP.
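
The transcript doesn't name the curl image, so treat the image below as an assumption; any image with curl preinstalled works. curlimages/curl is Alpine-based, so the shell here is sh:

    # Link to test-server and open a shell (image choice is an assumption)
    docker run --rm -it --link test-server --entrypoint sh curlimages/curl

    # Inside the container:
    curl http://test-server
    cat /etc/hosts    # entries for test-server and its container ID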

Now, let's leverage our knowledge of docker run to skip the interactive bit and run the test with a single command. This is the same command as last time, except here we run curl instead of bash. Containers may also be linked with different names. Here I've changed the --link option to point test-server to server. The curl argument is changed to server, as well.
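
As a one-liner, with the link aliased (same hypothetical image as above; its default entrypoint is curl, so the URL is the only argument needed):

    # --link test-server:server makes the server resolvable as "server"
    docker run --rm --link test-server:server curlimages/curl http://server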

Alright, we've seen how to communicate between containers. Now it's time to move on to moving data into and out of containers with Docker volumes. Docker supports mapping different data stores onto the file system inside the container. A local bind mount is the simplest. This connects the Docker host file system to the container.

First create a directory for the content files. Then create a text file for testing. We can start a Docker container with the demo directory mounted on the file system. This is done via the -v option. It takes one argument, a string separated with a colon. The first part is the absolute path on the Docker host. The second part is the absolute path inside the container. Mount the demo directory at the path NGINX expects to find files. Also give this container a name so we can link other containers to it. Now we can curl the server. We get Hello World back as expected. We can do more cool stuff, like update the content in real time.
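
A sketch of the whole sequence. The container name file-server is an assumption, and /usr/share/nginx/html is the content path the official NGINX image serves by default:

    # Create a directory for content files plus a test file
    mkdir demo
    echo 'Hello World' > demo/hello.txt

    # -v host-path:container-path mounts the directory into the container
    docker run -d -p 8080:80 --name file-server \
        -v "$PWD/demo:/usr/share/nginx/html" nginx

    # Fetch the file through the container
    curl http://localhost:8080/hello.txt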

Let's update the file and try again. Write some new content to hello.txt and then fire off another curl. We get the updated content back. Local bind mounts are a really powerful feature.
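
For example:

    # Update the content on the Docker host...
    echo 'Hello again' > demo/hello.txt

    # ...and the container serves the new version immediately
    curl http://localhost:8080/hello.txt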

We've just scratched the surface here on what else you can do with Docker volumes. One of my favorite uses is to mount the current directory in a container, run my build process, and then get my artifacts back on the host. It's a great way to containerize many different build tools.

Anyway, time to get back on track. We got data in, but how do we get data out? For that, we need to use something that generates data. Redis is one of the simplest data stores. It's a useful test bed for explaining the other side of Docker volumes. Data stores require a persistent place to store data. We've already covered how changes to a Docker container are lost when the container is removed. How do we solve this problem for things like databases? Docker images may also define volumes. Volumes are intended to be mounted at run time. However, Docker will automatically create an anonymous local volume for any unspecified volumes. It's a convention to use /data for this data transfer. Let's demonstrate this with a simple Redis setup.

Start by creating your Redis server container. Use the Redis CLI to add data. Let's unpack this long command. The redis image includes the redis-cli command. The container is linked to the previously created redis container. The -h option sets the server host to redis, the name of the previously linked container. Then the set foo bar command is executed. Let's run a similar command to get the value back. We have confirmed that the foo key is set. Tell the server to save data to disk.
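
A sketch of the setup, naming the server container redis:

    # Start the Redis server in the background
    docker run -d --name redis redis

    # Run redis-cli in a linked container; -h points it at the server
    docker run --rm --link redis redis redis-cli -h redis set foo bar
    docker run --rm --link redis redis redis-cli -h redis get foo    # => "bar"

    # Tell the server to persist its dataset to disk
    docker run --rm --link redis redis redis-cli -h redis save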

We have written data to a server running in the container and persisted it to disk, but what is the disk in this case? The disk is a predefined volume. Docker automatically creates a volume because we did not specify one. Use docker inspect to get detailed information about this container, in particular, where on the Docker host it is written to. Wow, that is a lot of JSON. Find the Mounts section. Docker automatically created a directory in /var/lib/docker. Given the data is written back to a disk on the Docker host, we should be able to stop then start the server again and get our data back. This example demonstrates how users may manage the container lifecycle themselves. Note that the container retains the same ID.
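
Something like the following, filtering the inspect output down to the mounts:

    # Show where on the Docker host the auto-created volume lives
    docker inspect -f '{{ json .Mounts }}' redis
    # The Source path sits under /var/lib/docker/...

    # Stop and start the same container; the ID and the data are retained
    docker stop redis
    docker start redis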

Let's check the key's value again. We get bar. This is because the data was persisted to a volume independent of the server container. Time to completely remove the redis container. Docker rm takes a -v option for removing auto-generated volumes. If unspecified, the data directory will live on. This may eventually eat up your disk space, depending on how many containers you start. We can repeat this example using our own specified volume mount.
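
Sketch:

    # Confirm the data survived the restart
    docker run --rm --link redis redis redis-cli -h redis get foo    # => "bar"

    # Stop, then remove the container along with its auto-generated volume
    docker stop redis
    docker rm -v redis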

First create a directory to store the Redis data files. Now start the server with redis_data mounted at /data. Run the same commands to set and then save the data. Stop and remove the container. Take a peek inside redis_data. Look at that. There's an RDB file. This directory could be used with another container now. We could even back up this directory while the container is running. This example concludes our container fundamentals tour.
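
Under the same assumptions as before, the full sequence looks roughly like this:

    # Create a host directory for the Redis data files
    mkdir redis_data

    # Mount it at /data, the path the redis image declares as a volume
    docker run -d --name redis -v "$PWD/redis_data:/data" redis

    # Write and persist some data, as before
    docker run --rm --link redis redis redis-cli -h redis set foo bar
    docker run --rm --link redis redis redis-cli -h redis save

    # Clean up, then peek inside the host directory
    docker stop redis
    docker rm redis
    ls redis_data    # => dump.rdb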

These two lessons covered a lot of ground. You should have a good grasp on how Docker containers work and what you can do with them. The next step is to build our own Docker images. That's exactly where we're headed in the next lesson.

About the Author

Adam is a backend/service engineer turned deployment and infrastructure engineer. His passion is building rock-solid services and equally powerful deployment pipelines. He has been working with Docker for years and leads the SRE team at Saltside. Outside of work he's a traveller, beach bum, and trance addict.