
Orchestrators - Docker Swarm

Overview

Difficulty: Intermediate
Duration: 45m
Students: 1,203
Rating: 4.7/5

Description

Course Description:

The Azure Container Service (ACS) is a cloud-based container deployment and management service that supports popular open source tools and technologies for containers and container orchestration. ACS allows you to run containers at scale in production and manages the underlying infrastructure for you by configuring the appropriate VMs and clusters. ACS is orchestrator-agnostic and allows you to use the container orchestration solution that best suits your needs. Learn how to use ACS to scale and orchestrate applications using DC/OS, Docker Swarm, or Kubernetes.

Intended Audience:

  • Developers
  • Operations Engineers
  • DevOps
  • Anyone interested in managing containers at scale

Pre-requisites:

  • Viewers should have a basic understanding of containers.
  • Some familiarity with the Azure platform will also be helpful but is not required.

Learning Objectives:

  • Demonstrate how to use ACS to create virtual machine hosts and clusters for container orchestration
  • Understand how to choose an open source orchestration solution for ACS
  • Understand how to run containers in production on Azure using ACS
  • Demonstrate ability to scale and orchestrate containers with DC/OS, Docker Swarm, and Kubernetes

This Course Includes:

  • 60 minutes of high-definition video
  • Live demonstrations of key course concepts

What You'll Learn:

  • Overview of Azure Container Service: An overview of containers and the advantages of using them.
  • Orchestrators: A summary of what orchestrators are and a description of the most popular orchestrators in use.
  • Open Source Cloud-First Container Management at Scale: This lesson discusses the purpose of ACS, and how the service is activated.
  • Deployment Scenarios: A brief explanation of different orchestrator scenarios.
  • Deploy Kubernetes and Security: A live demo on how to deploy K8S.
  • Deploy Kubernetes from the Portal: A live demo on how to create a security key and a K8S cluster.
  • Deploy Kubernetes from the CLI: A live demo on the Command Line Interface.
  • Orchestrator Overview – Kubernetes: A lesson on managing containers. First up: Kubernetes.
  • Orchestrator Overview – DC/OS: In this lesson we discuss deploying containers to the Datacenter Operating System (DC/OS).
  • Orchestrator Overview – Swarm: In this last lesson we'll look at how ACS deploys Swarm.
  • Summary and Conclusion: A wrap-up and summary of what we’ve learned in this course.

Transcript

The next orchestrator, and the final one we'll take a look at, is Docker's own Swarm. The current version of the Azure Container Service supports a legacy version of Swarm. In the original Swarm implementation, Swarm was a separate engine that worked alongside the Docker engine.

The current Docker Engine integrates Swarm as a special mode of the engine, but the concepts remain the same. Containers are units of deployment. Services describe how containers connect, communicate, persist volumes, et cetera, and do things such as mapping the available ports. Swarm intrinsically load balances the external port and is used to manage multiple instances. Let's take a look at how the Azure Container Service deploys Docker Swarm. Once again, we'll use the convenient Azure shell to create our cluster.
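To make those concepts concrete, here is a minimal swarm-mode sketch for illustration only; the service name, replica count, and nginx image are placeholders, and the ACS walkthrough below uses the legacy standalone Swarm with plain docker run instead:

  docker swarm init                   # put the local engine into swarm mode
  docker service create \
    --name web \
    --replicas 2 \
    --publish 80:80 \
    nginx                             # a service: an image, a replica count, and a published port
  docker service ls                   # Swarm load balances the published port across the replicas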

We'll set some convenient variables to use: a resource group, a default location, a cluster name, and a DNS prefix. Then we'll go ahead and create our resource group. And now we'll create our cluster, so we're gonna give it an orchestrator type of Swarm, the DNS prefix, resource group, location, agent count, and cluster name, and tell it to generate its own SSH keys. Now the provisioning has run, and we can see that it has succeeded. What's nice about running this from the command-line tool is that it also gives us some information about how to access the actual agents that manage the swarm. So I'm going to go ahead and take this value for the master straight off of this result, paste it here, and shell into the master.
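As a rough sketch of the commands behind that narration (the resource group, location, cluster name, DNS prefix, and agent count below are placeholder values, not the ones used in the recording):

  RESOURCE_GROUP=swarm-demo-rg
  LOCATION=eastus
  CLUSTER_NAME=swarm-cluster
  DNS_PREFIX=swarmdemo

  # Create the resource group, then the ACS cluster with Swarm as the orchestrator.
  az group create --name $RESOURCE_GROUP --location $LOCATION

  az acs create \
    --orchestrator-type Swarm \
    --resource-group $RESOURCE_GROUP \
    --name $CLUSTER_NAME \
    --dns-prefix $DNS_PREFIX \
    --agent-count 3 \
    --generate-ssh-keys

  # The command output includes the master's FQDN; SSH to it using the connection
  # details shown in that output (azureuser is the default admin user name, and
  # Swarm masters typically listen for SSH on port 2200).
  ssh -p 2200 azureuser@<master FQDN from the output>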

Now, what's important to notice is that now that we're in the master, if we do a docker info, what we get is the local version of Docker that's running. This is not the actual Swarm; this is just a node that is running the Swarm. So in order to access the Swarm, what we need to do is export the host. When the Azure Container Service configures the cluster, it always gives the Swarm host the same IP address.
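The next paragraph walks through exactly that; as a sketch, assuming the Swarm endpoint listens on the usual unencrypted Docker port 2375:

  docker info                            # on the master: only the local engine, not the Swarm
  export DOCKER_HOST=172.16.0.5:2375     # ACS always places the Swarm endpoint at this address
  docker info                            # now reports the Swarm and its agent nodes
  docker ps                              # nothing deployed to the Swarm yet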

So what we're gonna do is type export DOCKER_HOST=172.16.0.5; that's consistent for every swarm that you create. So we'll go ahead and export that, and then we should be able to run docker info and see information about the Swarm agent nodes, and when we run docker ps, we'll see that there are no containers running yet. So now we're actually connected to the Swarm host.

So let's go ahead and deploy a container. What I'm gonna do is use the docker run command, tell it to run in detached mode, map the public port 80 to the internal port 3000, which is what my service runs on, and tell it to use the image CA-ACS. So we'll run that, and what it's gonna do is go out to Docker Hub, pull that publicly available image down, and then deploy it to the Swarm, so we'll give it a second to run through those steps. Now, after that first run, we've got a container ID given back to us, so if we type docker ps, we can see that our container is running, and we can see it's actually been scheduled onto an agent at 10.0.0.8. So in order to actually access this, we need to go through the public load balancer.
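A sketch of that deployment (the image is spoken as "CA-ACS" in the recording; substitute the full repository name for your own image):

  docker run -d -p 80:3000 <repository>/ca-acs   # detached, public port 80 mapped to container port 3000
  docker ps                                      # shows which agent (10.0.0.x) the container landed on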

So we're gonna refresh the list of resources up here, and we're gonna find the swarm group, and inside the swarm group we'll look for the public IP for the external agents. So we'll come down to the public IPs, and here's the Swarm agents' DNS entry, so let's go ahead and navigate to this and take the IP address. Then we're gonna use our curl command to curl that endpoint, and as you can see, we got "orange cat earth," and if we repeat that, we should consistently hit the same service endpoint.
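The same lookup can be done from the CLI instead of the portal; a sketch, assuming the cluster's resources live in the resource group created above:

  az network public-ip list \
    --resource-group $RESOURCE_GROUP \
    --query "[].{name:name, address:ipAddress}" \
    --output table

  curl http://<public IP of the agent load balancer>   # returns the demo app's response ("orange cat earth")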

So the next step is to scale out some more containers. The way we do that is relatively simple: Swarm automatically load balances the external-facing port, so what I'm gonna do is run that run command a second time. Now we've got a container ID back, and I'm just gonna go ahead and run it a third time. And this time it's unable to find a node with that port available, so let's do docker ps, and we can see that it's balanced across two different nodes, 10.0.0.9 and 10.0.0.8, on that same port.
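In other words, scaling out here is just repeating the run until the agents' port 80 is exhausted; a sketch:

  docker run -d -p 80:3000 <repository>/ca-acs   # second container, scheduled onto another agent with port 80 free
  docker run -d -p 80:3000 <repository>/ca-acs   # a further attempt fails once no agent has port 80 available
  docker ps                                      # the containers are spread across agents on the same port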

So let's go ahead and use our curl command again, and we get "violet cat earth" instead of orange cat, and if we curl again, we get orange cat, violet cat, orange cat, so you can see it successfully load balancing. And then if we wanted to add more containers on the same port, we'd simply add additional agents.
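To watch that round-robin behavior, hit the agents' public endpoint a few times in a row; for example:

  for i in 1 2 3 4; do curl http://<public IP of the agent load balancer>; done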

About the Author

Students: 1,204
Courses: 1

Jeremy Likness is an experienced entrepreneur who has worked with companies for two decades to catalyze growth and leverage leading-edge development technology to streamline business processes and support innovation. Jeremy is a prolific author with four published books and hundreds of articles focused on helping developers be their best. Jeremy speaks at conferences around the country and covers topics ranging from technologies like Docker, Node.js, and .NET Core to processes and methodologies like Agile and DevOps. Jeremy lives near Atlanta with his wife of 19 years and teenage daughter. His hobbies include hiking, climbing mountains, shooting 9-ball, and regularly attending CrossFit classes while maintaining a vegan diet.