Azure Container Service
The Azure Container Service (ACS) is a cloud-based container deployment and management service that supports popular open-source tools and technologies for containers and container orchestration. ACS allows you to run containers at scale in production and manages the underlying infrastructure by configuring the appropriate VMs and clusters for you. ACS is orchestrator-agnostic, so you can use the container orchestration solution that best suits your needs. Learn how to use ACS to scale and orchestrate applications using DC/OS, Docker Swarm, or Kubernetes.
Intended Audience
- Operations Engineers
- Anyone interested in managing containers at scale
Prerequisites
- Viewers should have a basic understanding of containers.
- Some familiarity with the Azure platform is helpful but not required.
Learning Objectives
- Demonstrate how to use ACS to create virtual machine hosts and clusters for container orchestration
- Understand how to choose an open-source orchestration solution for ACS
- Understand how to run containers in production on Azure using ACS
- Demonstrate the ability to scale and orchestrate containers with DC/OS, Docker Swarm, and Kubernetes
This Course Includes
- 60 minutes of high-definition video
- Live demonstrations of key course concepts
What You'll Learn
- Overview of Azure Container Service: An overview of containers and the advantages of using them.
- Orchestrators: A summary of what orchestrators are and a description of the most popular orchestrators in use.
- Open Source Cloud-First Container Management at Scale: This lesson discusses the purpose of ACS and how the service is activated.
- Deployment Scenarios: A brief explanation of different orchestrator scenarios.
- Deploy Kubernetes and Security: A live demo on how to deploy K8S.
- Deploy Kubernetes from the Portal: A live demo on how to create a security key and a K8S cluster.
- Deploy Kubernetes from the CLI: A live demo on deploying K8S from the command-line interface.
- Orchestrator Overview – Kubernetes: A lesson on managing containers. First up: Kubernetes.
- Orchestrator Overview – DC/OS: In this lesson, we discuss deploying containers to the Data Center Operating System.
- Orchestrator Overview – Swarm: In this last lesson we'll look at how ACS deploys Swarm.
- Summary and Conclusion: A wrap-up and summary of what we’ve learned in this course.
Now that we know how to deploy a cluster, and since the process is very similar across the three orchestration systems, let's take a look at actually deploying and managing containers in each of the orchestrators, starting with Kubernetes.
For Kubernetes, there are a few concepts to cover before we deploy containers. First, Kubernetes has a grouping called a pod, which is a related set of resources (containers, storage, and networking) and is the unit of deployment in Kubernetes. Pods are the units that scale across the cluster: they are what you spin up in multiple instances behind a load balancer. Deployments supervise pods: they manage how pods are brought up, how many replicas exist, and how rollouts execute in the environment. Finally, services expose pods for consumption; because pods' networking is hosted within the cluster, services provide a way to access them externally.
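The pod, deployment, and service concepts above can be sketched as a minimal manifest. The names, labels, image path, and port here are illustrative assumptions, not values from the course:

```yaml
apiVersion: apps/v1
kind: Deployment            # supervises the pods
metadata:
  name: clouddemo
spec:
  replicas: 1               # how many pod replicas the deployment maintains
  selector:
    matchLabels:
      app: clouddemo
  template:                 # the pod template: the unit of deployment
    metadata:
      labels:
        app: clouddemo
    spec:
      containers:
      - name: clouddemo
        image: example/clouddemo:latest   # hypothetical image name
        ports:
        - containerPort: 3000
---
apiVersion: v1
kind: Service               # exposes the pods for external consumption
metadata:
  name: clouddemo
spec:
  type: LoadBalancer        # provisions an external load balancer on Azure
  selector:
    app: clouddemo          # routes to pods matching this label
  ports:
  - port: 3000
    targetPort: 3000
```

Applying this file with `kubectl apply -f` would achieve the same result as the imperative commands demonstrated in the lessons that follow.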
Let's take a look at deploying an example image to the Kubernetes cluster we set up using the Azure Container Service. Once again, we're going to use the convenient Cloud Shell to manage our Kubernetes deployment. The first thing we'll do is use the Kubernetes command-line tool to run a pod that we'll name "clouddemo," based on an image I have available in the public Docker registry. It's a simple microservice that returns a unique name. We'll run that command, then ask the command-line tool to get the available pods, and we can see that it's creating a container based on the image.
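A sketch of the commands just described. The pod name follows the transcript; the image path is a placeholder, since the course doesn't name the registry account:

```shell
# Run a pod named "clouddemo" from a public Docker registry image.
# Note: in older kubectl versions (as used with ACS) "run" creates a
# Deployment; in current kubectl it creates a bare pod instead.
kubectl run clouddemo --image=docker.io/<username>/clouddemo --port=3000

# Check the pods; the STATUS column shows ContainerCreating, then Running.
kubectl get pods
```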
The next thing we want to do is expose this pod to the outside world, using the expose command. The deployment name is the same as the pod name; we'll give it port 3000 and tell it to use a load balancer. After we run that command, it takes a few minutes to provision. I've already allowed it to provision, and the way I know it's ready is by using the get service command. When you first run this, the external IP will say "pending," but once an address shows up, the service is available.
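The expose step can be sketched as follows, assuming the deployment created earlier is named "clouddemo" as in the transcript:

```shell
# Expose the deployment behind an Azure load balancer on port 3000.
kubectl expose deployment clouddemo --port=3000 --type=LoadBalancer

# Watch for the external IP; EXTERNAL-IP reads <pending> until the
# Azure load balancer finishes provisioning.
kubectl get service clouddemo
```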
Now we can use the curl command, which fetches web pages, against the endpoint we see here (the address ending in 205) on port 3000, and you can see it returns the name "Red Lynx Jupiter." If we curl again, we get the same result, because there's only one pod. Let's remedy that by scaling to multiple pods: we'll use the command-line interface, tell it to scale our clouddemo deployment, and set replicas to three. Now if we run the get pods command, you can see there are three copies of the pod running. Let's use our curl command again to see what's happening.
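The scaling and verification steps above, sketched as commands. The external IP placeholder stands in for the address shown in the demo:

```shell
# Scale the deployment from one pod to three replicas.
kubectl scale deployment clouddemo --replicas=3

# Confirm three clouddemo pods are now listed.
kubectl get pods

# Hit the service repeatedly; the load balancer distributes requests
# across the pods, so each response may come from a different pod.
for i in 1 2 3; do
  curl http://<EXTERNAL-IP>:3000/
done
```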
So there's the Red Lynx Jupiter. There's a Violet Lynx Earth. And we'll see if we can grab the third one based on the round-robin algorithm: there's a Violet Jaguar Earth. So you can see it's actually accessing the three different pods.
About the Author
Jeremy Likness is an experienced entrepreneur who has worked with companies for two decades to catalyze growth and leverage leading-edge development technology to streamline business processes and support innovation. Jeremy is a prolific author with four published books and hundreds of articles focused on helping developers be their best. He speaks at conferences around the country on topics ranging from technologies like Docker, Node.js, and .NET Core to processes and methodologies like Agile and DevOps. Jeremy lives near Atlanta with his wife of 19 years and his teenage daughter. His hobbies include hiking, climbing mountains, shooting 9-ball, and regularly attending CrossFit classes while maintaining a vegan diet.