AKS is a super-charged managed Kubernetes service that makes creating and running a Kubernetes cluster a breeze!
This course explores AKS, Azure’s managed Kubernetes service, covering the fundamentals of the service and how it can be used. You’ll first learn how, as a managed service, AKS takes care of managing and maintaining certain aspects of itself, before moving on to the core AKS concepts such as cluster design and provisioning, networking, storage management, scaling, and security. After a quick look at Azure Container Registry, the course moves on to an end-to-end demonstration that shows how to provision a new AKS cluster and then deploy a sample cloud-native application into it.
For any feedback, queries, or suggestions relating to this course, please contact us at support@cloudacademy.com.
Learning Objectives
- Learn about what AKS is and how to provision, configure and maintain an AKS cluster
- Learn about AKS fundamentals and core concepts
- Learn how to work with and configure many of the key AKS cluster configuration settings
- And finally, you’ll learn how to deploy a fully working sample cloud-native application into an AKS cluster
Intended Audience
- Anyone interested in learning about AKS and its fundamentals
- Software Engineers interested in learning about how to configure and deploy workloads into an AKS cluster
- DevOps and SRE practitioners interested in understanding how to manage and maintain an AKS cluster
Prerequisites
To get the most from this course it would help to have a basic understanding of:
- Kubernetes (if you’re unfamiliar with Kubernetes, and/or require a refresher then please consider taking our dedicated Introduction to Kubernetes learning path)
- Containers, containerization, and microservice-based architectures
- Software development and the software development life cycle
- Networks and networking
Resources
If you wish to follow along with the demonstrations in part two of this course, you can find all of the coding assets hosted in the following three GitHub repositories:
Okay welcome back! In this lesson I'm going to review networking as used by AKS and take you through the different AKS cluster networking configuration options available.
When you're about to provision your AKS cluster, there are two distinct networking options to choose from: one, basic networking, which is the default and is based on Kubenet; and two, advanced networking, which is opt-in and is based on the Azure CNI plugin.
Configuring your AKS cluster with either one of these networking options will dictate how cluster communication works, both internally and externally.
It's really important to understand the pros and cons of working with either option before launching your AKS cluster, as this choice is made at the time the cluster is launched and cannot be altered afterwards. Swapping to the alternate networking option would require the cluster to be rebuilt.
This is perhaps one of the more important decisions that you will need to make when planning out your own AKS cluster builds. Choosing the wrong option can lead to dead ends in terms of communications, poor performance, and/or increased billing costs.
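To make that choice concrete, here's a minimal sketch of how the option is selected at creation time with the Azure CLI; the resource group and cluster names are placeholders, not part of the course assets.

```bash
# Basic networking (Kubenet) -- the default if --network-plugin is omitted
az aks create \
  --resource-group myResourceGroup \
  --name myBasicCluster \
  --network-plugin kubenet \
  --node-count 3 \
  --generate-ssh-keys

# Advanced networking (Azure CNI) -- must be opted into explicitly
az aks create \
  --resource-group myResourceGroup \
  --name myAdvancedCluster \
  --network-plugin azure \
  --node-count 3 \
  --generate-ssh-keys
```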
Before drilling into the details of the two main AKS networking options, basic and advanced, we should first quickly review the Kubernetes networking objectives and goals as stated on the main Kubernetes website. This will help with your understanding of the AKS networking design and architecture explained over the next several slides.
To begin with, one, pod-to-pod communication relies on IP routing. Two, all containers can communicate with all other containers without NAT, or Network Address Translation. Three, all nodes can communicate with all containers, and vice versa, without NAT. Four, the IP that a container sees itself as is the same IP that others see it as. Five, a Kubernetes Service is an abstraction which defines a logical set of pods and a policy by which to access them, sometimes called a micro-service.
If needed, pause here and take your time to digest each of these statements before continuing. Basic networking, which is based on Kubenet, is the default option for AKS cluster networking.
Kubenet uses a separate private IP address space that is set aside for the pods.
By default, the CIDR block 10.244.0.0/16 is allocated to the pod IP address space spanning all nodes. A /24 segment of the pod IP address space is then allocated to each node, and individual pod IPs are assigned using the host-local IPAM plugin. Nodes themselves are configured with standard private IP addresses taken from the Azure Virtual Network subnet in which they reside.
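If you want to see this allocation for yourself, here's a quick hypothetical check, assuming kubectl is pointed at a Kubenet-based cluster; the --pod-cidr override shown is optional and the names are placeholders.

```bash
# Each node should report its own /24 slice of the 10.244.0.0/16 pod space.
kubectl get nodes -o custom-columns=NAME:.metadata.name,POD_CIDR:.spec.podCIDR

# The pod address space can also be overridden at cluster creation time, should
# the default 10.244.0.0/16 overlap with your existing ranges.
az aks create \
  --resource-group myResourceGroup \
  --name myBasicCluster \
  --network-plugin kubenet \
  --pod-cidr 10.244.0.0/16 \
  --generate-ssh-keys
```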
Kubenet operates at layer three and utilizes NAT, Network Address Translation, for pod-initiated traffic heading to non-cluster vnet-attached resources, such as ordinary VMs, or out to the outside world.
Pods can communicate with other pods within the same cluster, whether located on the same node or on different nodes. When pods communicate with each other, they do so without their traffic being source NAT'd. Pod communication across nodes is routed through the Azure vnet using a special user-defined route table that is automatically created and attached to the subnet in which the cluster nodes reside. The AKS managed service automatically manages and maintains this route table, which is populated with one entry per cluster node: each entry's address prefix is set to the /24 CIDR block allocated to that node, with the next hop being the node's vnet-assigned private IP address. When the number of nodes in the cluster is scaled out or in, either dynamically or manually, this special route table is automatically updated by AKS, with a new route entry inserted for each added node and a route entry deleted for each removed node.
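As an aside, you can inspect this route table with the Azure CLI; this is just a sketch, assuming the AKS-managed route table sits in the automatically created node resource group (typically named MC_&lt;resource-group&gt;_&lt;cluster&gt;_&lt;region&gt;), and the names shown are placeholders.

```bash
# Find the route table that AKS created in the node resource group.
az network route-table list \
  --resource-group MC_myResourceGroup_myBasicCluster_eastus \
  --output table

# List its routes: expect one entry per node, each with a /24 address prefix
# and that node's private vnet IP address as the next hop.
az network route-table route list \
  --resource-group MC_myResourceGroup_myBasicCluster_eastus \
  --route-table-name <route-table-name> \
  --output table
```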
Pods can initiate outbound communication to other resources attached to the vnet. When they do so, the traffic is source NAT'd such that the source IP address becomes the node's IP address. The receiver then only needs to return the response traffic to the node's IP address for it to be forwarded within the node to the originating pod.
Pods can also communicate with resources attached to the Internet. When they do so, the traffic is first source NAT'd when exiting the AKS cluster node and then again when exiting the Azure vnet. When it exits the Azure vnet, the source IP is NAT'd to the Azure load balancer's first assigned public IP address. The load balancer in question here is the one that gets created automatically when the AKS cluster is provisioned. This load balancer registers each of the AKS cluster nodes within its configured backend pool.
When the traffic exits the Azure vnet it actually undergoes port address translation; this accommodates the fact that outbound traffic can originate from multiple backend cluster node IP addresses.
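A rough way to observe this egress behaviour is to curl an external IP-echo service from inside a pod; the image and echo service used here are arbitrary choices, not part of the course material.

```bash
# Launch a throwaway pod and ask an external service what source IP it sees.
kubectl run nettest --rm -it --restart=Never --image=curlimages/curl -- \
  curl -s https://ifconfig.me
# The address printed should be a public IP on the cluster's Azure load balancer,
# not the pod's 10.244.x.x address or the node's private vnet address.
```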
Inbound pod communication from internet-based resources to the AKS cluster is typically achieved by sending it via a Kubernetes service of type LoadBalancer. In this case, an Azure load balancer is wired into the traffic path and is provisioned with a public IP address per Kubernetes service. External traffic is then sent to this public IP address. The Azure load balancer round-robins it downstream across the cluster's node pool, and in turn the traffic is finally DNAT'd to one of the matching pods configured within the service. Note, it's possible that the DNAT'd traffic may be sent to a different backend cluster node unless the service in question has been configured with a local-only routing policy.
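For illustration, a minimal Service of type LoadBalancer might look like the following; the service name, label selector, and ports are placeholders, and the externalTrafficPolicy line shows one way to express the local-only routing policy just mentioned.

```bash
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: web-frontend
spec:
  type: LoadBalancer
  # Keep traffic on the node that received it, avoiding the extra hop to
  # a different backend node (the local-only routing policy mentioned above).
  externalTrafficPolicy: Local
  selector:
    app: web-frontend
  ports:
    - port: 80
      targetPort: 8080
EOF
```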
AKS advanced networking, AKA Azure CNI networking, must be opted into; that is, it is not the default networking option. Azure CNI is a plugin which implements the CNI network plugin interface. Azure CNI uses Azure IPAM for IP address management. Pods are allocated private IP addresses taken from the same vnet subnet in which their host node resides. With this in effect, all pod-assigned IP addresses are additionally configured as secondary IP addresses on the hosting VM's network interface.
Within a node, pods are connected to each other and to the node's eth0 interface using a virtual bridge device, which allows all pods to be connected over a local layer-two segment. Since both nodes and pods draw their IP addresses from the same vnet subnet address space, careful consideration and planning must be applied when rolling out this option to avoid IP exhaustion. To address this issue, nodes within the cluster are configured to host a maximum number of pods each.
This particular configuration behaves differently depending on the provisioning approach. When provisioning an AKS cluster manually through the Azure portal, the default setting is limited to 30 pods per node, and this setting cannot be changed. If provisioning using either the Azure CLI or Azure ARM templates, the default is also 30 pods per node, but it can be increased to a maximum of 250 pods per node using the max pods parameter. Regardless of approach, you need to plan ahead carefully to avoid potential IP exhaustion, especially if your deployed workloads scale out significantly.
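Here's a sketch of how those settings come together when creating an Azure CNI cluster with the Azure CLI, along with some back-of-the-envelope IP planning; all of the resource names and IDs are placeholders.

```bash
# Both nodes and pods draw addresses from the referenced subnet, so size it for
# roughly node_count * (max_pods + 1) addresses, plus headroom for scale-out.
az aks create \
  --resource-group myResourceGroup \
  --name myAdvancedCluster \
  --network-plugin azure \
  --vnet-subnet-id "/subscriptions/<sub-id>/resourceGroups/myResourceGroup/providers/Microsoft.Network/virtualNetworks/myVnet/subnets/aks-subnet" \
  --max-pods 50 \
  --node-count 3 \
  --generate-ssh-keys

# Example: 3 nodes * (50 + 1) = 153 addresses, so a /24 subnet (251 usable IPs
# in Azure) is enough today, but only leaves room to scale to about 4 nodes.
```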
Finally, this setting is configured at the time of cluster creation and cannot be altered afterwards. Unlike the Kubenet networking option, routing of pod traffic between nodes through the Azure vnet does not require a special user-defined route table. Instead, inter-node pod traffic is simply routed using the same default Azure system routes created for the vnet subnet.
When pods communicate with other pods within the same cluster, they do so by sending their traffic first to the virtual bridge device on their host node, which in turn forwards it either to another locally hosted pod or to the node's eth0 interface to be routed across the vnet to the node hosting the recipient pod. Pods can easily communicate with cluster nodes using the same approach, and vice versa. Pods can initiate outbound communication to other non-cluster resources attached to the same vnet and, again, vice versa. Pods can also communicate with resources attached to the Internet.
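One quick hypothetical way to see Azure CNI in effect, assuming kubectl and the Azure CLI are configured against the same cluster and the node NICs exist as standalone resources (scale-set-based node pools expose them differently); the resource group and NIC names are placeholders.

```bash
# Pod IPs should fall inside the node subnet rather than a separate pod CIDR.
kubectl get pods --all-namespaces -o wide

# Each pod IP should also appear as a secondary ipconfig on its node's NIC.
az network nic ip-config list \
  --resource-group MC_myResourceGroup_myAdvancedCluster_eastus \
  --nic-name <node-nic-name> \
  --output table
```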
When using advanced networking, inbound pod communication from Internet-based resources to pods within the AKS cluster is achieved by the same mechanism as explained for basic networking; that is, the traffic enters via the Azure load balancer. The table provided here highlights the key differences between the two cluster networking options. Before I finish, I'll quickly summarize the motivations for using one or the other.
Use AKS Basic Networking when: One, vnet IP address space must be conserved. Two, pod communication is mostly kept within the cluster. Three, advanced features such as Virtual Nodes are not needed. Four, Windows-based worker nodes are not needed. Five, NATing outbound pod traffic to vnet-attached resources is okay. Six, overall pod traffic is low, since network traffic incurs performance penalties due to NATing and the extra routing requirements. Seven, on-prem connectivity to and from the cluster is not required; when it is, routing becomes very complicated.
Consider using AKS Advanced Networking when: One, vnet IP address space is available. Two, pod communication is mostly external to the cluster. Three, advanced features such as Virtual Nodes and/or Network Policies are required. Four, Windows-based nodes are required. Five, overall pod traffic is high. Six, on-prem connectivity is required; routing in this case is less complicated since the pods and nodes share the same IP address space. Okay, that completes this lesson.
In this lesson, I reviewed the various AKS networking configuration options available. I gave you detailed explanations as to how traffic flows within the cluster, both internally and externally. Understanding how traffic moves between the various AKS cluster components will help you to troubleshoot and secure the traffic flows. Okay go ahead and close this lesson and I'll see you shortly in the next one.
Jeremy is a Content Lead Architect and DevOps SME here at Cloud Academy where he specializes in developing DevOps technical training documentation.
He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 25+ years. In recent times, Jeremy has been focused on DevOps, Cloud (AWS, Azure, GCP), Security, Kubernetes, and Machine Learning.
Jeremy holds professional certifications for AWS, Azure, GCP, Terraform, Kubernetes (CKA, CKAD, CKS).