Part One - Lectures
Part Two - Demonstration
AKS is a super-charged Kubernetes managed service which makes creating and running a Kubernetes cluster a breeze!
This course explores AKS, Azure’s managed Kubernetes service, covering the fundamentals of the service and how it can be used. You’ll first learn how, as a managed service, AKS takes care of managing and maintaining certain aspects of the cluster for you, before moving on to core AKS concepts such as cluster design and provisioning, networking, storage management, scaling, and security. After a quick look at Azure Container Registry, the course moves on to an end-to-end demonstration that shows how to provision a new AKS cluster and then deploy a sample cloud-native application into it.
For any feedback, queries, or suggestions relating to this course, please contact us at firstname.lastname@example.org.
Learning Objectives
- Learn what AKS is and how to provision, configure, and maintain an AKS cluster
- Learn about AKS fundamentals and core concepts
- Learn how to work with and configure many of the key AKS cluster configuration settings
- And finally, you’ll learn how to deploy a fully working sample cloud-native application into an AKS cluster
Intended Audience
- Anyone interested in learning about AKS and its fundamentals
- Software engineers interested in learning how to configure and deploy workloads into an AKS cluster
- DevOps and SRE practitioners interested in understanding how to manage and maintain an AKS cluster
Prerequisites
To get the most from this course, it would help to have a basic understanding of:
- Kubernetes (if you’re unfamiliar with Kubernetes, or require a refresher, then please consider taking our dedicated Introduction to Kubernetes learning path)
- Containers, containerization, and microservice-based architectures
- Software development and the software development life cycle
- Networks and networking
If you wish to follow along with the demonstrations in part two of this course, you can find all of the coding assets hosted in the following three GitHub repositories:
Okay, welcome back! In this lesson, I'm going to review each of the different Kubernetes service resource types and how each of them is implemented within AKS.
For starters, let's quickly review the definition of a service as implemented within a Kubernetes cluster. Kubernetes states that a service is an abstract way to expose an application running on a set of pods as a network service. This service abstraction defines a logical set of pods together with a policy by which to access them. A Kubernetes service provides a stable VIP, or virtual IP address. Behind this VIP, the Kubernetes service registers and manages pods based on one or more pod metadata labels. Incoming traffic to the service is then distributed across the pods registered behind it.
Kubernetes defines several service types: ClusterIP, NodePort, and LoadBalancer. I will now review each of these individually and show you how each is implemented within AKS, in terms of the underlying Azure networking.
Creating a Kubernetes service of type ClusterIP results in a service being provisioned with an internally assigned IP address. ClusterIP is the default service type: when you create a service without specifying its type, Kubernetes will default it to ClusterIP.
A ClusterIP service is provisioned with an internal cluster VIP that can be called by any pod within the cluster, and also from the nodes themselves. This IP is not reachable from outside the cluster. You'll find that all of the services deployed internally within the cluster, in the kube-system namespace, are created this way.
This type of service is useful when you need to provide other cluster workloads with the ability to make network requests to the pods that sit behind it.
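As a sketch, a minimal ClusterIP service manifest might look like the following. The `my-app` label and the port numbers are illustrative assumptions, not values from the course materials:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-svc
spec:
  # type defaults to ClusterIP when omitted
  selector:
    app: my-app        # registers pods carrying this label
  ports:
    - port: 80         # port exposed on the cluster VIP
      targetPort: 8080 # container port on the pods behind the service
```

Applying this with `kubectl apply -f` gives other workloads in the cluster a stable name, `my-app-svc`, resolvable via cluster DNS, behind which the matching pods are load balanced.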
Testing a ClusterIP service from within the cluster can be achieved by simply starting up a temporary utility pod in interactive shell mode. BusyBox, for example, is a useful container image for this purpose as it has a small footprint and contains many useful networking utilities.
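A throwaway BusyBox pod along these lines can serve as that test harness (the pod name and image tag are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: debug
spec:
  containers:
    - name: debug
      image: busybox:1.36
      # keep the pod alive so you can exec into it for ad hoc testing
      command: ["sleep", "3600"]
```

Once the pod is running, something like `kubectl exec -it debug -- wget -qO- http://my-app-svc` exercises the service's cluster VIP from inside the cluster.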
NodePort services are fairly primitive in their implementation: a port is opened on each node instance within the AKS cluster. The port is either specified explicitly or selected randomly from the non-privileged TCP port range of 30000 to 32767. Kubernetes then routes the incoming traffic received on that port and distributes it to the pods sitting behind the service.
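A hedged sketch of a NodePort manifest, with illustrative label and port values, could look like this:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-nodeport
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - port: 80          # cluster-internal service port
      targetPort: 8080  # container port on the backing pods
      nodePort: 30080   # optional; omit to have Kubernetes pick from 30000-32767
```

Traffic sent to any node's VNet IP on port 30080 would then be forwarded to the pods matching the selector.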
NodePort services can't be called from outside of the AKS cluster, since the cluster nodes, or VM instances, are provisioned with VNet private IP addresses and aren't directly exposed to the internet. In this respect, NodePort services have limited functionality when used within AKS. Having said that, a NodePort service is useful as a building block for testing services, since you don't incur the extra expense and/or configuration involved in using, for example, the LoadBalancer service type.
Creating a LoadBalancer service type within AKS will route traffic through an Azure Load Balancer. The Azure Load Balancer round robins the traffic downstream to the pods that sit behind the service. It acts as a Layer-4 load balancer, load balancing TCP connections.
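The manifest only differs from the earlier examples in its type; as a sketch, again with illustrative names and ports:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-lb
spec:
  type: LoadBalancer   # AKS wires this up to an Azure Load Balancer
  selector:
    app: my-app
  ports:
    - port: 80         # port exposed on the load balancer's front-end IP
      targetPort: 8080 # container port on the backing pods
```

After the service is created, `kubectl get service my-app-lb` shows the public IP that AKS has provisioned on the Azure Load Balancer under the EXTERNAL-IP column.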
AKS LoadBalancer service types can be created using either an external or internal Azure Load Balancer, external being the default option. An external AKS LoadBalancer service allows inbound internet traffic to access the service. An internal AKS LoadBalancer service, on the other hand, only allows traffic originating from within the VNet to access the service. Either way, the underlying physical hardware running the Azure Load Balancer is fault tolerant: the Azure platform ensures that the Azure Load Balancer is always available, so it shouldn't be considered a possible single point of failure.
Creating an AKS cluster will, by default, create an external Azure Load Balancer with a single public IP address assigned to it by Azure. If you want to bring your own public IP address, you can do so, but you need to use the Azure CLI to provision the AKS cluster; this allows you to specify your public IP address and have it attached to the load balancer at cluster creation time.
When the load balancer is created, each of the cluster's nodes will be automatically registered within the load balancer's backend pool. The AKS managed service will continually update the backend pool, ensuring that as the cluster's nodes are scaled out and in, the right nodes remain correctly registered within the backend pool.
A single Azure Load Balancer is used to route traffic to multiple services within the AKS cluster. Each time a new Kubernetes service is created within the cluster, the AKS managed service will automatically register and create a new front-end IP, load balancing rules, and health probes within the existing Azure Load Balancer. AKS will continue to use the same Azure Load Balancer for each new service added into the cluster until one of the load balancer's limits is reached, for example the maximum number of front-end IPs or load balancing rules. When a limit is reached, AKS will spin up a new Azure Load Balancer, and so on.
The LoadBalancer service type is a common and popular solution for routing inbound external traffic to workloads deployed within AKS. However, there are some limitations that you should be aware of. Firstly, AKS LoadBalancer services do not support SSL termination on the Azure Load Balancer. And secondly, the Azure Load Balancer is a Layer-4 load balancer and therefore doesn't have any of the HTTP path-routing smarts that are available at Layer-7.
As previously mentioned, when creating a service type of LoadBalancer, AKS will by default wire it up to route traffic via an externally accessible Azure Load Balancer, that is the Azure Load Balancer will be configured with a public IP address.
An alternative but similar solution is to create a LoadBalancer service but explicitly configure it to be internal only. This is accomplished by adding the Azure load balancer internal annotation to the service manifest and setting its value to true. Creating this type of service results in a new Azure Load Balancer being provisioned with a front-end IP address drawn from the VNet subnet's IP address range, meaning it can only be accessed from within the VNet that hosts it, therefore making it an internal-only LoadBalancer service.
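Concretely, the annotation in question is `service.beta.kubernetes.io/azure-load-balancer-internal`; a sketch of such a manifest, with illustrative names and ports, might be:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-internal-lb
  annotations:
    # instructs AKS to provision an internal, rather than external, Azure Load Balancer
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```

The only change from an external LoadBalancer service is the annotation; the front-end IP allocated to the service is then a private address from the VNet subnet.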
When you view the internal LoadBalancer service using the kubectl command, you'll see that the registered external IP address is a private IP address, and it will match the private IP address on the internal Azure Load Balancer as viewed within the Azure portal.
Although not strictly a Kubernetes service, an Ingress controller can be installed to provide advanced Layer-7 HTTP host- and path-based routing. The routing rules themselves are defined within Ingress resources. When this option is set up, the Ingress controller sits behind the Azure Load Balancer. The Ingress controller is actually deployed within the cluster as a set of pods which, when they receive traffic, apply the Ingress rules and then intelligently proxy traffic downstream to other pods based on the request path, et cetera. Various vendor-provided Ingress controllers exist in the marketplace, including the popular and freely available Nginx Ingress controller. Installing the Nginx Ingress controller is both simple and quick.
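An Ingress resource defining such routing rules might be sketched as follows; the hostname, path, and backend service names are illustrative assumptions, and the `nginx` ingress class assumes the Nginx Ingress controller has been installed:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress
spec:
  ingressClassName: nginx          # hands these rules to the Nginx Ingress controller
  rules:
    - host: app.example.com        # host-based routing rule
      http:
        paths:
          - path: /api             # path-based routing rule
            pathType: Prefix
            backend:
              service:
                name: my-api-svc   # ClusterIP service fronting the API pods
                port:
                  number: 80
```

Requests arriving at the controller for `app.example.com/api` are proxied to the pods behind `my-api-svc`; additional hosts and paths can be routed to other services through the same controller and load balancer.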
Okay, that completes this lesson. In this lesson, I reviewed the various service options available within Kubernetes and AKS to enable network access to a group of pods. There are three Kubernetes service types: ClusterIP, NodePort, and LoadBalancer. Kubernetes also provides the Ingress controller, together with the Ingress resource type, to facilitate external inbound communications. AKS integrates the Azure Load Balancer into the traffic path for the LoadBalancer service type and Ingress resources.
Okay, go ahead and close this lesson and I'll see you shortly in the next one.
About the Author
Jeremy is the DevOps Content Lead at Cloud Academy where he specializes in developing technical training documentation for DevOps.
He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 20+ years. In recent times, Jeremy has been focused on DevOps, Cloud, Security, and Machine Learning.
Jeremy holds professional certifications for both the AWS and GCP cloud platforms.