In this lecture, we will discuss the many aspects of the Docker architecture.
You will learn how Docker uses a client-server architecture. We will introduce you to the Docker daemon, the server in the partnership, which is responsible for managing Docker objects such as images, containers, and networks.
You will meet the client side of the architecture: the Docker binary, used through the docker command.
We will discuss the many configuration options available for the daemon, including running it as a remote daemon and enabling debugging capabilities to make runtime changes.
We will detail why Docker has become so popular in recent years, and how it has become synonymous with containers.
You will be introduced to namespaces, which allow you to isolate system resources. Docker uses several namespaces to isolate different aspects of the system:
- pid: handles process isolation
- net: network stack isolation
- ipc: SysV-style interprocess communication isolation
- mnt: file system isolation
- uts: hostname isolation
At the end, we will explain control groups, or cgroups, which limit resource allocation, and union file systems, which merge layers to avoid file duplication in your container.
Welcome back. In this lesson we're going to take a look at some of the different aspects of the Docker architecture. Now there are a lot of things to explain with Docker and it's all very interconnected, so don't feel bad if this doesn't make sense right away. Stick with it and by the end of this course things should start to make sense.
Okay, at a high level, Docker uses a client-server architecture. The server part is the Docker daemon, which is responsible for managing Docker objects. By objects I mean things such as images, containers, and networks, which we'll cover later in the course. The daemon exposes a REST API that the client consumes over UNIX sockets or a network interface.
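To make that REST API concrete, you can query the daemon directly without the docker client at all. Here's a minimal sketch, assuming a default Linux install where the daemon listens on the UNIX socket at /var/run/docker.sock (you may need sudo, and curl 7.40+ for the --unix-socket flag):

```sh
# Ask the daemon for its version over the default UNIX socket.
curl --unix-socket /var/run/docker.sock http://localhost/version

# List running containers: the same API call the CLI makes for `docker ps`.
curl --unix-socket /var/run/docker.sock http://localhost/containers/json
```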
The client is the Docker binary, so whenever you use the docker command, that's the client. This diagram gives a glimpse of how the client and daemon interact. Here you can see that the subcommands issued by the client are sent over to the daemon; for example, the docker pull command here instructs the daemon to get an image from the registry.
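As a quick sketch of that flow, using ubuntu as an example image:

```sh
# The client asks the daemon to pull; the daemon downloads the image
# layers from the registry (Docker Hub by default) and stores them locally.
docker pull ubuntu

# Confirm the daemon now has the image:
docker images ubuntu
```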
The Docker daemon has a lot of different configuration options that you can pass in when you run it. These options let you change how the daemon operates. For example, if you want to use a remote daemon you can adjust the socket option; if you want some debugging capabilities you can pass in the -D flag; and if you want to make changes to the runtime, you can do that too.
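Here's a sketch of what those options look like, assuming you're launching the daemon by hand rather than through a service manager like systemd:

```sh
# Run the daemon with debug logging (-D), listening on the local UNIX
# socket and on a TCP socket so remote clients can connect.
# Note: an unauthenticated TCP socket is insecure; shown for illustration only.
sudo dockerd -D \
    -H unix:///var/run/docker.sock \
    -H tcp://0.0.0.0:2375

# The same settings can also live in /etc/docker/daemon.json:
#   { "debug": true, "hosts": ["unix:///var/run/docker.sock", "tcp://0.0.0.0:2375"] }
```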
So the Docker daemon is in charge of managing Docker objects, and the client is the primary way that you'll interact with the Docker API. If you're new to Docker, you don't need to change anything about the daemon; the defaults will get you by just fine. So while you might not be customizing the daemon settings, knowing about that separation of the client and the daemon will help if you run into an error such as this one when you're using the Docker binary.
In this example, the Docker binary was used to try and list the running containers. Since the client relies on the daemon being up to get that information, and the daemon isn't running, the client can't do anything useful and it throws this error. The solution is just to make sure that the daemon is started on the OS that you're running on.
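The error typically looks something like this (the exact wording varies by version), and on a systemd-based Linux distribution starting the service resolves it:

```sh
$ docker ps
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

# Start the daemon, and optionally enable it to start on boot:
$ sudo systemctl start docker
$ sudo systemctl enable docker
```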
Containers aren't a new concept; they've existed in some form for years. If you haven't heard about them before, you probably didn't have to deal with the types of problems that containers solve. However, the cloud has really shifted things, and hyperscale is the new de facto standard. There are a couple of reasons Docker has taken off so well.
The first is that Docker took a lot of the existing container technologies and combined them into one product. The second is that Docker came along at a time when people were looking for better ways to build, deploy, and run highly scalable, secure apps. The combination of market demand and well-established technologies made it a prime candidate to become synonymous with the term container.
Docker is actually built on some very well established Linux kernel features that, when combined, allow processes to be run in isolated environments. Let's go through these technologies to understand how they combine into one product. The first is called namespaces, which allow you to isolate system resources.
Docker uses a few different namespaces: pid, net, ipc, mnt, and uts. It uses the pid namespace to handle process isolation, which means that each namespace has its own set of process IDs. The net namespace, which is short for networking, is used to isolate network stacks.
Each net namespace has a private set of IP addresses, its own firewall, routing table, et cetera. Linux has several mechanisms for interprocess communication. One of those mechanisms is called System V, using the Roman numeral V for five, or SysV for short. The ipc namespace allows processes to be isolated from SysV-style interprocess communication.
The mnt namespace handles mount points, which allows for isolation of file system mounts. And finally, the uts namespace, which stands for Unix Timesharing System, allows for isolation of the hostname. So namespaces let you isolate different aspects of a system's resources. There are additional Linux kernel namespaces, however the ones I've shown here are the ones Docker uses.
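You don't need Docker to see this in action. Here's a minimal sketch of pid namespace isolation using the unshare utility from util-linux, assuming a Linux host with root access:

```sh
# Start a shell in new pid and mount namespaces; --mount-proc remounts
# /proc so process listings reflect the new pid namespace.
sudo unshare --pid --fork --mount-proc bash

# Inside that shell, only the namespace's own processes are visible,
# and the shell itself runs as PID 1:
ps aux
```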
So, when these are all used together they allow for a very high level of process isolation. The next feature that Docker uses is called control groups, or cgroups for short. Control groups are used to limit resource allocation. Even though containers run processes in isolation, those processes still share the host's resources, so you can't have any single process consuming all of them.
As an example, imagine you need to run five containers and one of them consumes all of the system's CPU. The end result is a sort of denial-of-service attack, because the other four containers aren't able to get the CPU they need to function. So control groups allow limits to be set on the different subsystems, which ensures that processes aren't going to take more than they should be allowed.
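Docker exposes those cgroup limits as flags on docker run. A quick sketch using the alpine image ("limited" is just an example container name):

```sh
# Cap the container at half a CPU core and 256 MB of memory; the daemon
# translates these flags into cgroup settings on the host.
docker run --cpus="0.5" --memory="256m" --name limited alpine sh -c "echo constrained"

# Verify the memory limit (in bytes) that the daemon recorded:
docker inspect --format '{{.HostConfig.Memory}}' limited
```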
The next bit of functionality that makes up Docker is the union file system. Now this is an important part of Docker, and it's functionality that really helps to keep down the overall size of Docker containers as well. A union file system starts with a base image and then can merge in any changes. When you create a Docker container, you have a starting image, which is a set of files that make up the base image for your container.
As you start making customizations by adding or removing packages, files, directories, et cetera, those changes create different layers. Each layer is a set of file changes that the union file system can merge into the previous layer. Because of this layered design you don't end up with duplicate files, since each layer only needs to record the changes.
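You can see those layers for yourself. A short sketch, assuming the ubuntu image is available locally:

```sh
# Each line of output is a layer: a set of file changes that the union
# file system merges with the layers beneath it.
docker image history ubuntu

# A running container adds one thin writable layer on top of the image's
# read-only layers; docker diff lists that container's file changes.
# ("mycontainer" is a hypothetical container name.)
docker diff mycontainer
```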
Okay, so none of this was really a deep dive; this is, after all, an intro course, so I want to stop here and summarize what we've covered. First, Docker is a client-server application: the client is the Docker binary, and the server is the Docker daemon, which exposes a REST API. Second, Docker is composed of several well-established Linux technologies, including namespaces, control groups, and a union file system.
It's the combination of these technologies that makes Docker so useful. Each of them is great on its own, but together they give Docker a better way to run your code in isolation. Alright, let's wrap up here, and in the next lesson we're going to cover actually installing Docker. So if you're ready to keep going, then I'll see you in the next lesson.
Ben Lambert is a software engineer and was previously the lead author for DevOps and Microsoft Azure training content at Cloud Academy. His courses and learning paths covered Cloud Ecosystem technologies such as DC/OS, configuration management tools, and containers. As a software engineer, Ben’s experience includes building highly available web and mobile apps. When he’s not building software, he’s hiking, camping, or creating video games.