Consistency

Contents

Intro
  1. Course Introduction (Preview, 2m 45s)
  2. Overview (Preview, 6m 59s)
Architecture
  3. Networking (10m 26s)
  6. Security (4m 21s)
Wrap Up
  11. Summary (4m 27s)

Difficulty: Intermediate
Duration: 1h 11m
Students: 4941
Ratings: 3.7/5
Description

Docker has made great strides in advancing development and operational agility, portability, and cost savings by leveraging containers. You can see a lot of benefits even when you use a single Docker host. But when container applications reach a certain level of complexity or scale, you need to make use of several machines. Container orchestration products and tools allow you to manage multiple container hosts in concert. Docker swarm mode is one such tool. In this course, we’ll explain the architecture of Docker swarm mode, and go through lots of demos to perfect your swarm mode skills. 

Learning Objectives

After completing this course, you will be able to:

  • Describe what Docker swarm mode can accomplish.
  • Explain the architecture of a swarm mode cluster.
  • Use the Docker CLI to manage nodes in a swarm mode cluster.
  • Use the Docker CLI to manage services in a swarm mode cluster.
  • Deploy multi-service applications to a swarm using stacks.

Intended Audience

This course is for anyone interested in orchestrating distributed systems at any scale. This includes:

  • DevOps Engineers
  • Site Reliability Engineers
  • Cloud Engineers
  • Software Engineers

Prerequisites

This is an intermediate-level course that assumes:

  • You have experience working with Docker and Docker Compose

 

Transcript

Consistency is an important consideration for any distributed system. In this lesson, we'll look at the consistency model of swarm mode and how it affects the way you operate a swarm.

Agenda
To kick things off, we'll discuss:

  • (Consistency) the consistency problem and
  • (Raft) how swarm mode goes about solving it, in particular the Raft consensus algorithm.
  • (Tradeoffs) We'll cover just what you need to know about Raft to understand the key tradeoffs to consider when deciding on the composition of your swarm.
  • (Raft Logs) Lastly, we'll talk about the Raft logs, where the cluster state is stored.

Consistency
We have seen that a swarm can include several manager and worker nodes. This provides fault tolerance if a node goes down and helps keep services highly available. But with multiple managers, how does the swarm make decisions regarding the state of the cluster? Do nodes in the swarm share a consistent view of the cluster, or could one node have a different view than another? And if so, for how long? These questions all touch on the issue of consistency.

In swarm mode, managers all share a consistent internal state of the entire swarm. This avoids any potential issues that could arise if managers were allowed to eventually converge to a shared state. Workers, on the other hand, do not share a view of the entire swarm. That is exclusively a manager responsibility.

The managers maintain a consistent view of the state of the cluster by using a consensus algorithm. There are several consensus algorithms to choose from, and the implementation details are outside the scope of this course. But the consensus algorithm has an impact on how the swarm operates, so we'll look at the basics of swarm mode's consensus algorithm to understand the implications for operating a swarm.

Raft Consensus
The consensus algorithm used by managers to maintain a consistent view of the state of the cluster is called Raft. Raft achieves consensus by electing one manager as the leader. The elected leader makes all of the decisions for changing the state of the cluster to bring it to the desired state. For example, the leader accepts new service requests and service updates and also decides how to schedule tasks.
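
As a quick illustration, assuming you already have a swarm with more than one manager, you can check which manager currently holds the leader role from any manager node:

  # Run from any manager node. The MANAGER STATUS column reports "Leader"
  # for the elected leader and "Reachable" for the other managers.
  docker node ls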

In order to maintain a consistent view across the managers, decisions aren't acted upon until a majority of managers agree on the proposed change to the cluster. A manager "agrees" simply by receiving a proposed change and acknowledging that it received it. When the leader is certain a majority of managers have received the proposed change, the change can be implemented. In this context, a majority of managers is referred to as a quorum.

The reason a quorum is enough to proceed is that Raft limits how many manager failures it can tolerate. If you have N managers in a swarm, Raft allows for (N-1)/2 failures. In the case of a three-manager swarm, that means 3 minus 1 divided by 2 is 1, so one manager can fail and the swarm continues to operate as usual. If two managers were to fail, the cluster state would freeze until a quorum of managers again became available. In the absence of a quorum, currently running services continue to run, but no new scheduling decisions take place.
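
To make the arithmetic concrete, here is a small worked sketch of the quorum math. This is purely an illustration, not a Docker command:

  # Quorum and fault tolerance for N managers (integer division):
  #   quorum size        = floor(N/2) + 1
  #   tolerated failures = floor((N-1)/2)
  #
  #   N = 1  ->  quorum 1, tolerates 0 failures
  #   N = 3  ->  quorum 2, tolerates 1 failure
  #   N = 5  ->  quorum 3, tolerates 2 failures
  #   N = 7  ->  quorum 4, tolerates 3 failures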

Regarding leader elections, when a swarm is initialized the first manager is automatically the leader. If the currently elected leader fails or voluntarily steps down, say to perform system updates, an election among the remaining manager nodes takes place. Until a new leader is elected, the cluster state is frozen.
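
For example, you might demote a manager ahead of planned maintenance; the node name manager-2 below is purely illustrative. If that node happened to be the leader, the remaining managers would elect a new one:

  # Demote a manager to a worker before maintenance (hypothetical node name).
  docker node demote manager-2
  # Promote it back to a manager once the maintenance is finished.
  docker node promote manager-2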

Manager Tradeoffs
After that overview of Raft consensus, you might be tempted to add a lot of managers to your swarm. The more managers, the more failures your swarm can tolerate while remaining fully operational. Although that is true, adding managers also increases the amount of manager traffic required to maintain a consistent view of the cluster and the amount of time it takes to reach consensus on every state change. So although increasing the number of managers does increase fault tolerance, it generally decreases performance and scalability.

There are some general rules for setting the number of managers:

  • You should usually have an odd number of managers. Having an even number of managers doesn't improve fault tolerance compared to having one less manager, and it increases communication overhead.
  • A single-manager swarm is acceptable for development and test swarms. Because a single-manager swarm can't tolerate any failures, it is not something you should use in production.
  • A three-manager swarm can tolerate one failure, while a five-manager swarm can tolerate two.
  • Docker recommends a maximum of seven managers, which can tolerate three manager failures. Going above seven has too much of an impact on performance to be beneficial.

However many managers you settle on, you will want to distribute them across availability zones to maintain a fully operational swarm in the event of a datacenter outage. Docker recommends distributing across at least three availability zones in production.

Working Manager
There is another tradeoff to consider with managers in a swarm. Have you heard the bad joke that goes, "Don't stand around doing nothing. People will think you're the boss"? In swarm mode, you need to consider whether or not to let the boss, or the managers, do work. By default, managers perform worker responsibilities, namely running tasks. But that has more to do with enabling single-node swarms than anything else.

Because managers participate in the Raft consensus process, it can be detrimental to the performance of the swarm if managers are overly utilized. You can use conservative resource reservations to make sure that managers won't become starved for resources. To be on the safe side, you can also prevent any work from being scheduled on manager nodes by draining them. Draining removes any tasks currently on a node and prevents new tasks from being scheduled on it.
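
For example, draining and later reactivating a manager might look like this; the node name manager-1 is illustrative:

  # Move any tasks off the manager and prevent new tasks from being scheduled on it.
  docker node update --availability drain manager-1
  # Later, return the node to normal scheduling if you want it to accept tasks again.
  docker node update --availability active manager-1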

Worker Node Tradeoffs?
You might be wondering if there are any tradeoffs to consider when adding worker nodes to a swarm. There really isn't much to worry about. More workers give you more capacity for running services and improve service fault tolerance. More workers also don't affect the managers' Raft consensus process, so swarm performance isn't harmed.

Workers actually do participate in a consensus process. To exchange overlay network information, nodes participate in a weakly consistent, highly scalable gossip protocol called SWIM. The details are outside the scope of this course, and the performance implications are negligible. The protocol is an example of an eventually consistent model, where the network state is allowed to differ between nodes but eventually converges to a consistent view.


Raft logs
The last topic we'll discuss in this lesson is Raft logs. If you arrived at this lesson from a search for log rafts, I'm afraid you'll need to continue your search. The logs we're talking about are where the leader manager records the Raft consensus state changes, such as creating a new service or adding a new worker. These logs are what get shared with other managers to establish a quorum.

The Raft logs are persisted to disk. They are stored in the raft subdirectory of your Docker swarm data directory, which is /var/lib/docker/swarm on Linux by default. As part of a disaster recovery strategy, you can back up a swarm cluster by backing up the entire swarm directory, which includes certificates and other files in addition to the Raft change logs in the raft subdirectory. You can restore a new swarm from a backup by replacing the swarm directory with the backed-up copy.
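
As a rough sketch of what that might look like on a Linux manager node (the paths are the Linux defaults, the archive name is illustrative, and Docker recommends stopping the engine while you copy so the data isn't changing underneath you):

  # On a manager node: stop Docker so the swarm directory isn't being written to,
  # archive it, then start Docker again.
  sudo systemctl stop docker
  sudo tar -czf swarm-backup.tar.gz -C /var/lib/docker swarm
  sudo systemctl start docker

  # To restore: stop Docker on the new manager, replace /var/lib/docker/swarm with
  # the backed-up copy, start Docker, and then re-initialize the swarm from the
  # restored state using Docker's documented recovery flag.
  docker swarm init --force-new-cluster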


Recap
That's everything for this lesson. We started by understanding how swarm mode solves consistency challenges by electing a leader manager and ensuring a majority of managers acknowledge swarm changes, a strategy that comes from the Raft consensus algorithm. We then looked at the tradeoffs between fault tolerance and performance when choosing the number of managers in a swarm, as well as whether or not managers should do work. We finished by discussing the Raft logs, which are persisted on disk and record all the changes the leader makes to a swarm.


Closing
In the next lesson, we will cover the security measures included in swarm mode. Continue on to the next lesson when you are ready to learn about security in Docker swarm mode.

About the Author
Students: 186859
Labs: 213
Courses: 9
Learning Paths: 52

Logan has been involved in software development and research since 2007 and has been in the cloud since 2012. He is an AWS Certified DevOps Engineer - Professional, AWS Certified Solutions Architect - Professional, Microsoft Certified Azure Solutions Architect Expert, MCSE: Cloud Platform and Infrastructure, Google Cloud Certified Associate Cloud Engineer, Certified Kubernetes Security Specialist (CKS), Certified Kubernetes Administrator (CKA), Certified Kubernetes Application Developer (CKAD), and Certified OpenStack Administrator (COA). He earned his Ph.D. studying design automation and enjoys all things tech.