AKS is a super-charged managed Kubernetes service that makes creating and running a Kubernetes cluster a breeze!
This course explores AKS, Azure’s managed Kubernetes service, covering the fundamentals of the service and how it can be used. You’ll first learn how, as a managed service, AKS takes care of managing and maintaining certain aspects of itself, before moving on to core AKS concepts such as cluster design and provisioning, networking, storage management, scaling, and security. After a quick look at Azure Container Registry, the course then moves on to an end-to-end demonstration that shows how to provision a new AKS cluster and then deploy a sample cloud-native application into it.
For any feedback, queries, or suggestions relating to this course, please contact us at support@cloudacademy.com.
Learning Objectives
- Learn what AKS is and how to provision, configure, and maintain an AKS cluster
- Learn about AKS fundamentals and core concepts
- Learn how to work with and configure many of the key AKS cluster configuration settings
- And finally, you’ll learn how to deploy a fully working sample cloud-native application into an AKS cluster
Intended Audience
- Anyone interested in learning about AKS and its fundamentals
- Software Engineers interested in learning about how to configure and deploy workloads into an AKS cluster
- DevOps and SRE practitioners interested in understanding how to manage and maintain an AKS cluster
Prerequisites
To get the most from this course it would help to have a basic understanding of:
- Kubernetes (if you’re unfamiliar with Kubernetes, and/or require a refresher then please consider taking our dedicated Introduction to Kubernetes learning path)
- Containers, containerization, and microservice-based architectures
- Software development and the software development life cycle
- Networks and networking
Resources
If you wish to follow along with the demonstrations in part two of this course, you can find all of the coding assets hosted in the following three GitHub repositories:
Okay, welcome back. In this lesson, I'm going to review each of the different options available for scaling an AKS cluster, either manually or automatically.
Scaling can be performed at different layers: at the pod level or at the node level. There's even a virtual node option, which I'll review as well. Any or all of these options help to ensure that the workloads deployed to your AKS cluster maintain predictable performance regardless of the current demand profile.
AKS supports both manual and auto scaling. Manual scaling can be performed either at the node level or at the pod level, by simply increasing or decreasing the respective count property. For example, the following Azure CLI command scales an existing node pool up to 10 nodes.
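A representative version of that command; the resource group, cluster, and node pool names here are placeholders, so substitute your own:

```bash
# Scale the node pool "nodepool1" up to 10 nodes.
az aks nodepool scale \
  --resource-group myResourceGroup \
  --cluster-name myAKSCluster \
  --name nodepool1 \
  --node-count 10
```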
In the next example, the kubectl command is used to manually scale the webapp deployment up to 10 pods.
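For example, assuming the deployment is named webapp as in the lesson:

```bash
# Scale the "webapp" deployment out to 10 replicas (pods).
kubectl scale deployment webapp --replicas=10
```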
Auto scaling within an AKS cluster is performed by two different scalers. The cluster autoscaler can be configured to automatically add worker node VMs to, or remove them from, the cluster. New nodes are added when the cluster autoscaler detects that new pods cannot be scheduled anywhere within the existing node pool due to resource exhaustion. The cluster autoscaler works together with Virtual Machine Scale Sets to manage and maintain the overall set of nodes within the cluster. Virtual Machine Scale Sets are also used to manage multiple cluster node pools, so that workloads with specific resource requirements are hosted on the right infrastructure.
The following command demonstrates how to create an AKS cluster with the cluster autoscaler enabled, by specifying the --enable-cluster-autoscaler parameter together with minimum and maximum values for the node count.
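A sketch of that command; the resource group and cluster names are placeholders, and the initial, minimum, and maximum node counts shown here are illustrative values only:

```bash
# Create a cluster with the cluster autoscaler enabled, letting
# the node count float between 1 and 10 (starting at 3).
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --node-count 3 \
  --enable-cluster-autoscaler \
  --min-count 1 \
  --max-count 10
```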
The horizontal pod autoscaler scales out additional pods across an existing node pool when resource demand, whether for CPU, memory, or a custom metric, for a ReplicaSet or Deployment exceeds a specified threshold. The ReplicaSet or Deployment needs to be configured with resource requests, which the horizontal pod autoscaler uses to determine whether more pods need to be scheduled. The horizontal pod autoscaler queries the Metrics API, by default every 30 seconds, to determine if a pod scaling event is required.
Using and configuring the horizontal pod autoscaler is very simple and quick. To use it, you first need to set resource requests (and, optionally, limits) for the container. For example, in the following webapp Deployment, the enclosed webapp container requests at least 200m of CPU, where 'm' is short for millicores (thousandths of a CPU core). A maximum limit of 500m of CPU is also set.
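A trimmed-down sketch of such a Deployment manifest; the image and labels are placeholders, while the 200m request and 500m limit are the values just described:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
        - name: webapp
          image: nginx:stable   # placeholder image; substitute your own
          resources:
            requests:
              cpu: 200m         # the HPA computes utilization against this request
            limits:
              cpu: 500m         # hard CPU ceiling for the container
```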
With resource requests in place, you can then create the autoscaler by running the "kubectl autoscale deployment" command. The following command maintains a pool of pods for the "webapp" deployment, no fewer than 10 and no more than 100, so that the average CPU utilization across all of them remains at approximately 50%.
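On the command line, using the names and values just described, that looks like this:

```bash
# Create an HPA for the "webapp" deployment: 10-100 replicas,
# targeting ~50% average CPU utilization.
kubectl autoscale deployment webapp --cpu-percent=50 --min=10 --max=100
```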
AKS also provides a hybrid scaling option called Virtual Nodes. With this enabled, AKS can perform rapid burst scaling, launching new pods onto Azure Container Instances within seconds. This approach is much faster than the Kubernetes cluster autoscaler, which incurs the penalty of the extra time taken to spin up a new worker node VM instance. It's important to note that Virtual Nodes are only supported on AKS clusters that have been created with advanced networking (Azure CNI) enabled. The other interesting aspect of hosting pods on virtual nodes is that it changes the way you're billed: pods hosted on virtual nodes are billed only per second of execution time, making virtual nodes perfect for hosting short-lived pods.
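For reference, virtual nodes are enabled as an AKS add-on; a representative command, with placeholder resource names (the subnet must already exist in the cluster's virtual network):

```bash
# Enable the virtual node add-on, backed by a dedicated subnet.
az aks enable-addons \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --addons virtual-node \
  --subnet-name myVirtualNodeSubnet
```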
Okay, that completes this lesson. In this lesson, I reviewed the various scaling options available in Kubernetes and AKS. Kubernetes provides the cluster autoscaler and the horizontal pod autoscaler. AKS expands on these features by providing Virtual Machine Scale Sets for node scaling and Virtual Nodes for rapid burst scaling.
Go ahead and close this lesson and I'll see you shortly in the next one.
Jeremy is a Content Lead Architect and DevOps SME here at Cloud Academy where he specializes in developing DevOps technical training documentation.
He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 25+ years. In recent times, Jeremy has been focused on DevOps, Cloud (AWS, Azure, GCP), Security, Kubernetes, and Machine Learning.
Jeremy holds professional certifications for AWS, Azure, GCP, Terraform, and Kubernetes (CKA, CKAD, CKS).