Docker has made great strides in advancing development and operational agility, portability, and cost savings by leveraging containers. You can see a lot of benefits even when you use a single Docker host. But when container applications reach a certain level of complexity or scale, you need to make use of several machines. Container orchestration products and tools allow you to manage multiple container hosts in concert. Docker swarm mode is one such tool. In this course, we’ll explain the architecture of Docker swarm mode, and go through lots of demos to perfect your swarm mode skills.
Learning Objectives
After completing this course, you will be able to:
- Describe what Docker swarm mode can accomplish.
- Explain the architecture of a swarm mode cluster.
- Use the Docker CLI to manage nodes in a swarm mode cluster.
- Use the Docker CLI to manage services in a swarm mode cluster.
- Deploy multi-service applications to a swarm using stacks.
Intended Audience
This course is for anyone interested in orchestrating distributed systems at any scale. This includes:
- DevOps Engineers
- Site Reliability Engineers
- Cloud Engineers
- Software Engineers
Prerequisites
This is an intermediate-level course that assumes:
- You have experience working with Docker and Docker Compose
Stacks help you manage applications distributed across a swarm. In this lesson, we'll get some practice working with stacks. I think you'll find that it's easy to get the hang of and better than writing sprawling, multi-line docker service create commands.
Agenda
We'll start the lesson by introducing stacks and stack files. It won't take very long because your prior experience with Docker Compose will serve you well here. Then we'll get started with a demo, the last demo of the course.
Stacks
In swarm mode, a stack is a group of related services that can be orchestrated and scaled together. The term carries the same meaning as a software stack or technology stack in application development.
Stacks are declared using a Compose file, which makes it easy to start using stacks given your Docker Compose experience. You can use stacks to manage services, networks, and volumes just as you would with Docker Compose. For a Compose file to work as a stack, it must use Compose file format version 3 or greater. Although it is the same type of file, when you deploy an application declared in a Compose file to a swarm, the deployed application is referred to as a stack. By convention, the file name used to indicate a stack is docker-stack.yml.
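To make that concrete, here is a minimal sketch of what a docker-stack.yml could look like. The service name and image are placeholders, not taken from this course:

version: '3'
services:
  web:
    image: nginx:alpine   # placeholder image
    deploy:
      replicas: 2         # run two replicas of the service across the swarm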
A lot of the Docker Compose features you know and love work with stacks. These include the declarative configuration, an active community on GitHub, and the benefits of source-controlled configuration. Similar to Compose, when you deploy a stack, a network is created by default to isolate the stack from other services.
However, there are some differences between using Docker Compose and stacks that you should be aware of. Currently, several configuration options that you could use with Compose are ignored by stacks. Some of the most notable are build and depends_on. Consult the Compose file reference documentation, as support for options is added and removed over time. A link to the documentation is at the bottom of the transcript for this video.
On the flip side, what stacks can use that Compose can't is mostly gathered under the deploy key. That is where you can specify options for node labels, replication mode, resource reservations, update configuration, and more. Currently, there isn't support for configuration rollbacks in a stack file, and placement preferences and endpoint_mode, for choosing between virtual IP and DNS round robin service discovery, are only available in version 3.3 Compose files or above. Stack files also support swarm secrets via a top-level secrets key. We won't get into the details, but know that they are supported.
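As an illustration of those options, a deploy section and a top-level secrets key might look roughly like this. The service name, image, node label, and secret name are all placeholders, not values from this course:

version: '3.3'
services:
  api:
    image: example/api:1.0             # placeholder image
    deploy:
      mode: replicated                 # replication mode: replicated or global
      replicas: 3
      placement:
        preferences:
          - spread: node.labels.zone   # spread tasks across a node label (3.3+)
      resources:
        reservations:
          cpus: '0.25'                 # reserve a quarter of a CPU per task
          memory: 64M
      update_config:
        parallelism: 1                 # update one task at a time
        delay: 10s
      endpoint_mode: vip               # vip (virtual IP) or dnsrr (DNS round robin)
    secrets:
      - api_key
secrets:
  api_key:
    external: true                     # assumes the secret already exists in the swarm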
With previous experience in Compose, that's all that we need to go over to start using stacks in swarm mode. We can get started with the demo which will reproduce what we did in the last lesson's demo except using stacks.
Demo
I'll start by removing the currently running services we deployed with docker service create, since we're going to recreate them using a stack.
$ docker service rm viz nodenamer
Now let's take a look at the stack file. I won't dwell on it since it is a one-to-one mapping of the commands we entered to create the services before, just with more structure and slightly different option names. The stack-specific options are under the deploy key. Because we have a placement preference, we need to use version 3.3 or higher. It is more pleasant to work with stacks than to enter all the options at the command line. We'll also see that commands for stacks can limit their output to the services declared in a stack without any filtering on our part.
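The file itself isn't reproduced in this transcript, but based on the services from the previous lesson it is roughly along these lines. The nodenamer image, the replica count, and the node label used for the placement preference are assumptions, not the exact values from the demo:

version: '3.3'
services:
  viz:
    image: dockersamples/visualizer        # assumption: the standard swarm visualizer sample image
    ports:
      - "8080:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    deploy:
      placement:
        constraints:
          - node.role == manager           # the visualizer reads the swarm API on a manager
  nodenamer:
    image: example/nodenamer               # placeholder; the actual image isn't named here
    deploy:
      replicas: 3                          # assumption, based on the later update to 6 replicas
      placement:
        preferences:
          - spread: node.labels.zone       # assumption: the placement preference from the prior lesson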
The commands for working with stacks are organized under the docker stack management command:
$ docker stack --help
ls lists the current stacks, services lists the services in a stack, and ps lists all the service tasks for a given stack. The deploy and rm commands are similar to docker-compose up and down. In fact, up and down are aliases for deploy and rm, so you can use up and down with stacks if you prefer.
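For reference, with a stack named demo (the name we'll use in a moment), those subcommands look like this:

$ docker stack ls              # list all stacks in the swarm
$ docker stack services demo   # list the services in the demo stack
$ docker stack ps demo         # list the tasks for every service in the stack
$ docker stack rm demo         # remove the stack and all of its services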
Let's look at the deploy options:
$ docker stack deploy --help
You have options for removing services that are no longer in a stack, controlling when images are resolved, and for passing along credentials if you are using images in a private Docker registry. The one required option is -c to specify a Compose file for the stack.
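For example, a deploy that prunes services removed from the stack file and forwards registry credentials for private images could look like this, where mystack is a placeholder stack name:

$ docker stack deploy -c docker-stack.yml --prune --with-registry-auth mystack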
$ docker stack deploy -c docker-stack.yml demo
I'll call the stack demo. We can see the stack has a network created automatically for the services in the stack. Of course, you can, and often should, exercise more control over the networks you want, just as you would in Docker Compose. The stack has finished deploying, so we should see the visualizer on port 8080 of vm1, and there it is.
To do an update, you use the same deploy command you used for the original stack deploy. Let's update the stack so there are 6 replicas, and let's also add a resource reservation of half a CPU for each nodenamer replica. Each VM only has one CPU, so there won't be enough CPU available for all 6 replicas. We'll use this to verify that swarm respects the resource reservations.
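In the stack file, the change amounts to something like the following under the nodenamer service (as before, the image name is a placeholder rather than the actual demo image):

  nodenamer:
    image: example/nodenamer
    deploy:
      replicas: 6
      resources:
        reservations:
          cpus: '0.50'         # reserve half a CPU per replica

Re-run the deploy command: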
$ docker stack deploy -c docker-stack.yml demo
The configuration changes will be detected, and the swarm leader will bring the swarm to the new desired state. Let's watch the visualizer to see what happens. We can see 2 new tasks starting on vm2 and then another on vm3. After all the tasks on vm2 are running, one is stopped to respect the resource reservations. We can also see, by listing the services in the stack, that only 4 of the 6 nodenamer tasks are running while two are left pending.
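If you want to confirm this from the command line rather than the visualizer, the stack commands show the same picture:

$ docker stack services demo   # the REPLICAS column shows 4/6 for nodenamer
$ docker stack ps demo         # the unscheduled tasks show Pending in the CURRENT STATE column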
That's all for this demo and this lesson. We saw that working with stacks is a natural extension of Compose files and docker commands.
Closing
We've almost reached the finish line now! Join me for the final lesson when you're ready to wrap up the course.
Logan has been involved in software development and research since 2007 and has been in the cloud since 2012. He is an AWS Certified DevOps Engineer - Professional, AWS Certified Solutions Architect - Professional, Microsoft Certified Azure Solutions Architect Expert, MCSE: Cloud Platform and Infrastructure, Google Cloud Certified Associate Cloud Engineer, Certified Kubernetes Security Specialist (CKS), Certified Kubernetes Administrator (CKA), Certified Kubernetes Application Developer (CKAD), and Certified OpenStack Administrator (COA). He earned his Ph.D. studying design automation and enjoys all things tech.