Docker has made great strides in advancing development and operational agility, portability, and cost savings by leveraging containers. You can see a lot of benefits even when you use a single Docker host. But when container applications reach a certain level of complexity or scale, you need to make use of several machines. Container orchestration products and tools allow you to manage multiple container hosts in concert. Docker swarm mode is one such tool. In this course, we’ll explain the architecture of Docker swarm mode, and go through lots of demos to perfect your swarm mode skills.
Learning Objectives
After completing this course, you will be able to:
- Describe what Docker swarm mode can accomplish.
- Explain the architecture of a swarm mode cluster.
- Use the Docker CLI to manage nodes in a swarm mode cluster.
- Use the Docker CLI to manage services in a swarm mode cluster.
- Deploy multi-service applications to a swarm using stacks.
Intended Audience
This course is for anyone interested in orchestrating distributed systems at any scale. This includes:
- DevOps Engineers
- Site Reliability Engineers
- Cloud Engineers
- Software Engineers
Prerequisites
This is an intermediate-level course that assumes:
- You have experience working with Docker and Docker Compose
All right, this lesson and remaining lessons focus more on getting hands-on with Docker swarm mode. You will see a lot of the concept knowledge that you've built up in the previous lessons in action. These lessons will focus more on the commands you need to use to accomplish tasks related to swarm mode, starting with setting up a swarm.
We will begin by laying out the options available to you for setting up a swarm mode cluster. After that we'll show two ways to set up a swarm locally on your machine: as a single node swarm and as a multi-node swarm using virtual machines.
There are several options for creating a swarm mode cluster. You should consider factors such as the workloads you want to deploy on the swarm, the management complexity, and cost when determining which option to choose.
The simplest option is creating a single-node swarm. Recall that swarm mode managers can also perform work by default, meaning you can run swarm workloads with a single node. We will see later in this lesson how easy it is to set up. A single-node swarm may be appropriate in development and test scenarios, but because it has no fault tolerance, it is not something to run in production.
The other options are for multi-node clusters. There are unmanaged options that put you in charge of maintaining the infrastructure and applying patches, and there are more managed options where you can use the swarm as a service without worrying about hardware or software patches.
For the unmanaged option, you would likely have your own compute cluster or private cloud. You would need to ensure Docker is installed on the bare-metal servers or on virtual machines running on top of them. The network firewall would need to allow traffic on the ports swarm mode requires: TCP port 2377 for cluster management, TCP and UDP port 7946 for node-to-node communication, and UDP port 4789 for overlay network traffic. The Universal Control Plane that is available through Docker Enterprise Edition can set up an on-prem swarm using a graphical interface.
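As a rough sketch, on a Linux host where ufw manages the firewall, opening those ports might look like the following (the exact tool and rules will depend on your environment):
$ sudo ufw allow 2377/tcp   # cluster management traffic between manager nodes
$ sudo ufw allow 7946/tcp   # node-to-node communication
$ sudo ufw allow 7946/udp
$ sudo ufw allow 4789/udp   # overlay network (VXLAN) traffic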
Here is a screenshot of the UCP web interface in action showing a three node swarm.
We will set up a multi-node cluster using VMs on a single physical host later in this lesson. You could also run VMs in a public cloud and make a swarm out of them. However, there may be a better option if you are going to leverage the public cloud.
For the more managed options, you could use cloud provider templates that allow you to set a few parameters and have a swarm created for you. This is true for Microsoft Azure, Amazon Web Services, and IBM Cloud. You can also leverage Docker's Docker Cloud offering to create swarms on Azure and AWS through the Docker Cloud graphical interface. Each option is explained in Docker's own documentation.
Now, it's time to demo setting up some swarms. I'll first set up a single-node swarm and then a multi-node swarm using virtual machines and the help of the docker-machine command.
I'm here at my terminal on my Mac. I have Docker for Mac installed:
$ docker version
To see the current status of the Docker daemon's swarm mode, you can use the docker info command and look for the Swarm key:
$ docker info | grep Swarm
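On a daemon that is not part of a swarm, the relevant line looks like this:
Swarm: inactive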
The inactive value means the daemon is not running in swarm mode.
Now we'll see how easy it is to start running in swarm mode. The commands relevant to managing a swarm are under the swarm subcommand of the Docker CLI.
$ docker swarm --help
In the commands list you see everything from rotating the root certificate authority for a swarm to unlocking a locked swarm. The only command needed to start a new swarm is
$ docker swarm init
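The output follows this general shape (node ID and tokens shown here as placeholders, not real values):
Swarm initialized: current node (<node-id>) is now a manager.
To add a worker to this swarm, run the following command:
    docker swarm join --token <worker-join-token> <manager-ip>:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.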
And that's all that it takes to start running a single-host swarm. The output tells you that the current node is running as a swarm manager and provides a command for joining workers to the swarm. The value of the token argument is the worker join token. A similar-looking token is used for joining managers to a swarm, as seen in the join-token manager output:
$ docker swarm join-token manager
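The output is a ready-to-run join command of the same shape, just with a manager token (placeholder shown):
    docker swarm join --token <manager-join-token> <manager-ip>:2377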
Let's probe around to see some of the changes that occur when you start running in swarm mode. First, let's revisit the docker info output:
$ docker info
The state has changed to active to indicate that the daemon is indeed running in swarm mode. There are also a bunch of useful tidbits related to the swarm's configuration: the number of managers, the number of nodes, right down to internals of the Raft consensus algorithm. You can even eke out additional information, including TLS certificate info, by using the format flag and specifying the Swarm field:
$ docker info --format '{{json .Swarm}}'
I'll clear that because it is quite unsightly and there is no pretty print option.
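If you do want readable output, one workaround is to pipe the JSON through any formatter you happen to have installed, for example:
$ docker info --format '{{json .Swarm}}' | python3 -m json.tool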
We can also verify that the networks we learned about in the swarm architecture lessons have been created:
$ docker network ls
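On a single-node swarm, the list should resemble the following (network IDs omitted):
NAME              DRIVER    SCOPE
bridge            bridge    local
docker_gwbridge   bridge    local
host              host      local
ingress           overlay   swarm
none              null      local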
Here we see the docker_gwbridge local bridge network for connecting overlay networks to the host's network, and the ingress network used for handling external ingress traffic to the swarm.
That's all there is to the single-node swarm. You could start using it for development and test scenarios as is. For demonstration purposes, I want to use a multi-node cluster so I will tear down our current swarm. To do that, you force leave the swarm
$ docker swarm leave --force
The force flag is required because when the last manager in a swarm leaves, all the swarm state goes with it. This is what we want to happen in this case.
I'll set up a multi-node swarm with two workers and one manager for demonstrating various swarm concepts. Remember that one manager is not a good idea in production, but it is going to be enough to illustrate working with a swarm mode cluster. To quickly create Docker-enabled VMs, I'm going to use docker-machine.
$ docker-machine
docker-machine comes installed with Docker for Mac and Docker for Windows. Only a few docker-machine commands are needed so I'll explain them as they are required. But know that there is a lot more to docker-machine than what I'll explain in this lesson.
The first command is create, which does exactly what you'd expect.
$ docker-machine create vm1
By default it will create a VM in VirtualBox using an image with Docker installed. VirtualBox was installed previously on my Mac so everything went off without a hitch. I'm using the names vm1, vm2, and vm3 instead of more descriptive names like manager1 because it's possible for nodes to change their role in a swarm. However, vm1 will be used as the manager in this lesson. I'll speed this up until it finishes…
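For reference, the same create command with the driver spelled out explicitly would look like this:
$ docker-machine create --driver virtualbox vm1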
Now I'll create vm2 in the same way
$ docker-machine create vm2
And finally vm3
$ docker-machine create vm3
Now I'll use the ls command to list the vms and their IP addresses
$ docker-machine ls
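The output should resemble the following (columns trimmed for width; addresses and versions will match your own setup):
NAME   DRIVER       STATE     URL                          DOCKER
vm1    virtualbox   Running   tcp://192.168.99.100:2376    v18.01.0-ce
vm2    virtualbox   Running   tcp://192.168.99.101:2376    v18.01.0-ce
vm3    virtualbox   Running   tcp://192.168.99.102:2376    v18.01.0-ce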
The machines are at 192.168.99.100, 101, and 102. The VMs are running the 18.01 edge release, which doesn't have any significant changes in swarm mode compared to the 17.12 stable release I have running on my Mac. I'll connect to vm1 using docker-machine's ssh command:
$ docker-machine ssh vm1
And I'll show docker info to confirm that Docker is installed but swarm mode is inactive:
$ docker info
To initialize a swarm, I'll use the same init command as with a single-node setup but with an advertise address:
$ docker swarm init --help
The advertise address is the IP address other nodes will use to join the swarm.
$ docker swarm init --advertise-addr 192.168.99.100
I'll give the IP address, but you could alternatively provide the network interface name. I'll copy the prepared join command for joining workers. You can always retrieve the join token later using the docker swarm join-token subcommand.
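For example, to print the worker join command again at any time, run this on a manager:
$ docker swarm join-token worker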
I'll drop out of vm1 and ssh into vm2 to join the swarm
$ exit
$ docker-machine ssh vm2
$ docker swarm join …
The output acknowledges that the node joined the swarm as a worker node. Now I'll repeat the process for vm3.
To confirm the swarm has one manager and 3 nodes in total, I need to run the docker info command on the manager node, which is vm1.
$ docker info
There we have it: a three-node swarm with one manager, set up with the help of docker-machine.
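If you want a more direct view of the members and their roles, docker node ls (also run on a manager) lists every node along with its status and availability, with vm1 showing up as the leader:
$ docker node ls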
In this lesson, we learned some of the options available for setting up a swarm mode cluster. These ranged from high-touch options that put you in charge of hardware and software patching, to fully automated solutions like the one provided by Docker Cloud, which lets you spin up a swarm on Amazon Web Services or Azure from the comfort of a graphical interface.
We then saw how to set up a single-node swarm using the swarm init command.
After that, we set up a multi-node swarm with the help of docker-machine and VirtualBox. The same init command was used, this time with an advertise address for other nodes to use when joining the swarm with the join command.
This is a depiction of the swarm that we currently have set up. vm1, vm2, and vm3 are all in the swarm, while my mac is not participating in the swarm. vm1 is the swarm manager, as indicated by the orange tie. We'll use this multi-node swarm for the remainder of the lessons.
Logan has been involved in software development and research since 2007 and has been in the cloud since 2012. He is an AWS Certified DevOps Engineer - Professional, AWS Certified Solutions Architect - Professional, Microsoft Certified Azure Solutions Architect Expert, MCSE: Cloud Platform and Infrastructure, Google Cloud Certified Associate Cloud Engineer, Certified Kubernetes Security Specialist (CKS), Certified Kubernetes Administrator (CKA), Certified Kubernetes Application Developer (CKAD), and Certified OpenStack Administrator (COA). He earned his Ph.D. studying design automation and enjoys all things tech.