Azure Container Service
The Azure Container Service (ACS) is a cloud-based container deployment and management service that supports popular open-source tools and technologies for containers and container orchestration. ACS lets you run containers at scale in production and manages the underlying infrastructure by configuring the appropriate VMs and clusters for you. ACS is orchestrator-agnostic, allowing you to use the container orchestration solution that best suits your needs. Learn how to use ACS to scale and orchestrate applications using DC/OS, Docker Swarm, or Kubernetes.
Intended Audience
- Operations engineers
- Anyone interested in managing containers at scale
Prerequisites
- Viewers should have a basic understanding of containers.
- Some familiarity with the Azure platform will also be helpful but is not required.
Learning Objectives
- Demonstrate how to use ACS to create virtual machine hosts and clusters for container orchestration
- Understand how to choose an open-source orchestration solution for ACS
- Understand how to run containers in production on Azure using ACS
- Demonstrate the ability to scale and orchestrate containers with DC/OS, Docker Swarm, and Kubernetes
This Course Includes
- 60 minutes of high-definition video
- Live demonstrations of key course concepts
What You'll Learn
- Overview of Azure Container Service: An overview of containers and the advantages of using them.
- Orchestrators: A summary of what orchestrators are and a description of the most popular orchestrators in use.
- Open Source Cloud-First Container Management at Scale: This lesson discusses the purpose of ACS and how the service is activated.
- Deployment Scenarios: A brief explanation of different orchestrator scenarios.
- Deploy Kubernetes and Security: A live demo on how to deploy K8S.
- Deploy Kubernetes from the Portal: A live demo on how to create a security key and a K8S cluster.
- Deploy Kubernetes from the CLI: A live demo of deploying a cluster from the command-line interface.
- Orchestrator Overview – Kubernetes: A lesson on managing containers. First up: Kubernetes.
- Orchestrator Overview – DC/OS: In this lesson, we discuss deploying containers to the Data Center Operating System.
- Orchestrator Overview – Swarm: In this last lesson we'll look at how ACS deploys Swarm.
- Summary and Conclusion: A wrap-up and summary of what we’ve learned in this course.
Now let's take a look at deploying containers to the Datacenter Operating System (DC/OS). In this scenario, applications represent instances of containers, agents are the hosts for those applications, and the agents are exposed by default through the fully qualified domain name (FQDN) of the public cluster.
We'll see exactly what that means by deploying a cluster and load balancing some containers on it. This time I'm going to run the tool to create the cluster from a command prompt on my local machine instead of the Azure Cloud Shell. Because we're using the Azure command-line interface (CLI), the commands are consistent regardless of where they're run. As before, we'll set a few variables: one for the resource group, one for the cluster name, and one for the location. Then we'll use the CLI to create the resource group, giving it the name and a default location. Once that has succeeded, we'll go ahead and create the cluster.
The command is the same as the one we used for Kubernetes, with just a few changes. We give it a cluster name and a resource group, tell it to spin up three agents, specify the location, and tell it to generate its own SSH keys. The one difference is that instead of an orchestrator type of Kubernetes, we use an orchestrator type of DC/OS. Once provisioning has succeeded, let's take a look at it through the portal UI. In the portal, I can see that the new resource group has been created, so I'll click on that group and take a look at the assets. You can see it has created storage accounts, network interfaces, and so on: basically everything I need for the cluster.
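As a sketch, the sequence described above might look like the following. The resource group name, cluster name, and location are placeholder values, and note that the `az acs` command set shown here has since been retired by Azure in favor of AKS:

```shell
# Placeholder values -- substitute your own.
RESOURCE_GROUP=acs-dcos-demo
CLUSTER_NAME=acs-dcos-cluster
LOCATION=eastus

# Create the resource group with a default location.
az group create --name $RESOURCE_GROUP --location $LOCATION

# Create the cluster: three agents, generated SSH keys, and
# DC/OS as the orchestrator type instead of Kubernetes.
az acs create \
  --name $CLUSTER_NAME \
  --resource-group $RESOURCE_GROUP \
  --agent-count 3 \
  --location $LOCATION \
  --generate-ssh-keys \
  --orchestrator-type dcos
```

These commands require an authenticated Azure CLI session (`az login`) and will provision billable resources.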
Now there is an important IP address I need to look at: the master IP. This is the master controller for the cluster, so I'll drill into that asset and note down its IP address, because I'll need it to tunnel into the master and manage my applications. Back in the terminal, I'll create a variable called IP and paste in the address I copied, then type the tunnel command. sudo elevates to admin privileges; I give it the path to the RSA key, which is the default path for the key generated when I created the cluster; and I tell it to tunnel localhost so that from our machine we can access the master management tool, which runs as a web application.
We pass it the variable for the IP address; it prompts for admin privileges and asks me to confirm the connection. It then reports an error that the address is already in use, but I've found that despite that error I can still access the master. Now that we have a tunnel through to the master, we can open a new tab and browse to localhost. This brings up a dashboard where we can see the health of our system, which nodes are running, and so on.
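The tunnel command described above would look roughly like this. The IP value is a placeholder for the master address copied from the portal, and the SSH port is an assumption based on ACS's usual DC/OS master configuration:

```shell
# Placeholder: paste the master IP noted from the portal.
IP=<master-ip-address>

# Forward local port 80 to the DC/OS web UI on the master.
# -i points at the default path of the key generated with the cluster;
# -f backgrounds the session, -N skips a remote command, -L forwards the port.
# ACS DC/OS masters typically expose SSH on port 2200.
sudo ssh -i ~/.ssh/id_rsa -fNL 80:localhost:80 azureuser@$IP -p 2200
```

With the tunnel up, browsing to http://localhost serves the DC/OS dashboard.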
The first step is to go into Universe, then Packages, to install a load balancer package. Our containers will be deployed to the private agents, so we need a load balancer on the public agent to route that traffic and expose them as an endpoint. We'll scroll down to a package called Marathon-LB and simply click Install. This automatically pulls down the package and configures it on the endpoint. Back on our dashboard, we can now see we have a task, and that task is staging.
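As an aside, the same Marathon-LB installation can be done from the `dcos` CLI rather than clicking through the Universe UI; this assumes the CLI is already installed and configured against the cluster:

```shell
# Install the Marathon-LB package from Universe non-interactively.
# Equivalent to Universe > Packages > Marathon LB > Install in the UI.
dcos package install marathon-lb --yes
```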
Furthermore, if we look at our nodes, we can see there are four: the public node and the three private nodes, and we now have a task running on the public node. Next, we can come back to our list of resources and this time look for the agent IP instead of the master IP. I'll drill into the agent, take its IP address, and use it to access the website. We see "Service Unavailable" because we haven't configured any containers or applications yet, so let's go ahead and do that.
I've now navigated to localhost/marathon. Marathon is where I'll manage my applications, and you can see that installing the load balancer has already created an application entry for it. Let's create our own application by clicking Create Application. This brings up a dialog where we can provide a name; we'll call it 'caacsdemo'. Now we can specify the details of the Docker container.
I'm going to use the exact same image that I used in the last example, and because of how the private agents work, we'll make this a bridge network. For the port, we'll specify 3000, the default microservice port. Because the UI doesn't let us specify a host port here, we'll switch into JSON mode and add that ourselves: a host port mapped to that same 3000. Back in the UI, the last thing we need to do is add some labels to tell the load balancer how to manage our application. First we give it a group, which we'll call external.
Then we need to give it the fully qualified domain name of the host. To find it, we come back to our resource group; I've navigated into the agent cluster, not the master cluster, and I'm grabbing its DNS name, so I'll copy that, come back over, and paste it in. Next we need to give it a mode, which will be http. That should be enough to configure our application. We can switch to JSON mode and check that we have caacsdemo, the right image, bridge mode, the port mappings, and so on, then click Create Application. You can see it goes into deploying, and now it's running. So we should be able to open a new tab, paste in our domain name, and we get our example from the container service: violet cat mercury. If I refresh, I consistently get the same result back.
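Putting the pieces together, the Marathon application definition assembled above would look roughly like this in JSON mode. The image name is a placeholder (the walkthrough reuses an image from a previous example without naming it here), as is the agent cluster FQDN:

```json
{
  "id": "/caacsdemo",
  "instances": 1,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "<your-microservice-image>",
      "network": "BRIDGE",
      "portMappings": [
        { "containerPort": 3000, "hostPort": 3000, "protocol": "tcp" }
      ]
    }
  },
  "labels": {
    "HAPROXY_GROUP": "external",
    "HAPROXY_0_VHOST": "<agent-cluster-fqdn>",
    "HAPROXY_0_MODE": "http"
  }
}
```

The HAPROXY_* labels are the standard Marathon-LB conventions: the group routes the app through the external (public-agent) load balancer, the VHOST binds it to the agent cluster's DNS name, and the mode selects HTTP routing.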
Let's go back into Marathon, open our caacsdemo, and click Scale Application. I'm going to scale it to three instances, and you can see these are immediately staged. If we come back out to our dashboard, you'll see there are four tasks, and if we navigate to our nodes, you can see tasks running on multiple nodes. There's one task here and zero tasks on these nodes, so we'll wait for the deployment to reach the additional nodes, and then we should be able to see the effects of scaling out. We can see that this node has picked up a new task, so let's go back to our tab and refresh. Sure enough, we get orange cat mercury.
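For reference, the same scale-out can be performed from the `dcos` CLI instead of the Marathon UI, again assuming the CLI is configured against the cluster:

```shell
# Scale the caacsdemo app to three instances; equivalent to
# clicking Scale Application in the Marathon UI.
dcos marathon app update /caacsdemo instances=3
```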
So now we're actually hitting the new service, and once the third instance comes online, we'd see requests round-robin to that third service as well.
Jeremy Likness is an experienced entrepreneur who has spent two decades working with companies to catalyze growth and leverage leading-edge development technology to streamline business processes and support innovation. Jeremy is a prolific author with four published books and hundreds of articles focused on helping developers be their best. He speaks at conferences around the country on topics ranging from technologies like Docker, Node.js, and .NET Core to processes and methodologies like Agile and DevOps. Jeremy lives near Atlanta with his wife of 19 years and teenaged daughter. His hobbies include hiking, climbing mountains, shooting 9-ball, and regularly attending CrossFit classes while maintaining a vegan diet.