Azure Container Service
The Azure Container Service (ACS) is a cloud-based container deployment and management service that supports popular open-source tools and technologies for containers and container orchestration. ACS allows you to run containers at scale in production and manages the underlying infrastructure by configuring the appropriate VMs and clusters for you. ACS is orchestrator-agnostic, so you can use the container orchestration solution that best suits your needs. Learn how to use ACS to scale and orchestrate applications using DC/OS, Docker Swarm, or Kubernetes.
Intended Audience
- Operations Engineers
- Anyone interested in managing containers at scale
Prerequisites
- Viewers should have a basic understanding of containers.
- Some familiarity with the Azure platform will also be helpful, but is not required.
Learning Objectives
- Demonstrate how to use ACS to create virtual machine hosts and clusters for container orchestration
- Understand how to choose an open-source orchestration solution for ACS
- Understand how to run containers in production on Azure using ACS
- Demonstrate the ability to scale and orchestrate containers with DC/OS, Docker Swarm, and Kubernetes
This Course Includes
- 60 minutes of high-definition video
- Live demonstrations of key course concepts
What You'll Learn
- Overview of Azure Container Service: An overview of containers and the advantages of using them.
- Orchestrators: A summary of what orchestrators are and a description of the most popular orchestrators in use.
- Open Source Cloud-First Container Management at Scale: This lesson discusses the purpose of ACS and how the service is activated.
- Deployment Scenarios: A brief explanation of different orchestrator scenarios.
- Deploy Kubernetes and Security: A live demo on how to deploy Kubernetes.
- Deploy Kubernetes from the Portal: A live demo on how to create a security key and a Kubernetes cluster.
- Deploy Kubernetes from the CLI: A live demo on deploying a cluster using the command-line interface.
- Orchestrator Overview – Kubernetes: A lesson on managing containers. First up: Kubernetes.
- Orchestrator Overview – DC/OS: In this lesson, we discuss deploying containers to the Data Center Operating System.
- Orchestrator Overview – Swarm: In this last lesson, we'll look at how ACS deploys Swarm.
- Summary and Conclusion: A wrap-up of what we've learned in this course.
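To give a flavor of what the CLI demo covers, here is a hedged sketch using the `az acs create` command from the Azure CLI of the ACS era; the resource group and cluster names are illustrative placeholders, and running this requires an Azure subscription:

```shell
# Create a resource group to hold the cluster (names are illustrative)
az group create --name myACSGroup --location eastus

# Provision an ACS cluster with Kubernetes as the orchestrator;
# --generate-ssh-keys creates an SSH key pair if one doesn't exist
az acs create --orchestrator-type kubernetes \
    --resource-group myACSGroup --name myK8sCluster \
    --generate-ssh-keys

# Download credentials so kubectl can talk to the new cluster
az acs kubernetes get-credentials --resource-group myACSGroup --name myK8sCluster
```

The same cluster can also be created from the Azure portal, as the portal demo in this course shows.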
Now that we've got all the background information out of the way, let's jump into this introduction to the Azure Container Service. The first thing I'd like to do is give a brief primer on containers. When I talk about containers, I like to focus on your responsibility and get back to basics: the traditional physical machine we started out with.
With the physical machine, your responsibility goes all the way from the application down to the hardware. With the advent of virtual machines, we have the concept of a hypervisor that allows us to use code, configuration, or prescribed templates to spin up virtual machines. That reduces your responsibility, but you are still in charge of the guest operating system and keeping it patched and up to date.
The reality is that you may move an application from one environment to another, and that application could run differently because the environment it's running on has changed. Containers provide a solution to this problem: the Docker engine leverages the host OS for the containers, so a container becomes just the application itself plus the binaries and libraries it depends on.
This creates some consistency and also reduces your responsibility to the things that you're most in control of and takes away that need to worry about the guest operating system. Instead, you know a container will run consistently on the Docker host. Let's recap by going over the advantages of containers.
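For example, packaging just the application and its dependencies might look like the following Dockerfile sketch for a hypothetical Node.js service (image tag and file names are illustrative):

```dockerfile
# Start from a base image that shares the host OS kernel and ships Node.js
FROM node:8-alpine

# Package only the application and the libraries it depends on
WORKDIR /app
COPY package.json .
RUN npm install --production
COPY . .

# The container runs the same way on any Docker host
CMD ["node", "server.js"]
```

Because everything the application needs is in the image, it behaves the same on a laptop, a test server, or an ACS cluster.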
Containers are small, so images can be checked into repositories; you can create container images that may be only a megabyte in size, up to several hundred megabytes. They share the host OS, so images spin up very quickly. Dependencies are packaged in the container, so there's no more question of "will it work on this machine?" as long as it works on Docker. Containers are resilient, because when one goes down, another can come up quickly to replace it. Containers are scalable, because it is very easy to add new containers when you need to meet increased demand. Finally, containers are elastic.
Just as you can scale out to meet demand, you can also scale back during lulls when the containers aren't being used. In other words, you're able to add or subtract containers as needed based on demand, and this reduces waste overall.
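In Kubernetes terms, that kind of elasticity is a one-line operation against a running cluster (the deployment name here is a hypothetical example):

```shell
# Scale out to meet increased demand
kubectl scale deployment my-app --replicas=10

# Scale back in during a lull to reduce waste
kubectl scale deployment my-app --replicas=2
```

The orchestrator handles starting and stopping the individual containers; you only declare how many replicas you want.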
About the Author
Jeremy Likness is an experienced entrepreneur who has spent two decades working with companies to catalyze growth and leverage leading-edge development technology to streamline business processes and support innovation. Jeremy is a prolific author with four published books and hundreds of articles focused on helping developers be their best. Jeremy speaks at conferences around the country and covers topics ranging from technologies like Docker, Node.js, and .NET Core to processes and methodologies like Agile and DevOps. Jeremy lives near Atlanta with his wife of 19 years and teenage daughter. His hobbies include hiking, climbing mountains, shooting 9-ball, and regularly attending CrossFit classes while maintaining a vegan diet.