Creating and Configuring AKS Clusters

The course is part of these learning paths:

  • AZ-303 Exam Preparation: Technologies for Microsoft Azure Architects
  • AZ-400 Exam Prep: Microsoft Azure DevOps Solutions
  • AZ-104 Exam Preparation: Microsoft Azure Administrator
Overview

Difficulty: Intermediate
Duration: 1h 42m
Students: 914
Rating: 3.6/5

Description

AKS is a super-charged managed Kubernetes service that makes creating and running a Kubernetes cluster a breeze!

This course explores AKS, Azure’s managed Kubernetes service, covering the fundamentals of the service and how it can be used. You’ll first learn how, as a managed service, AKS takes care of managing and maintaining certain aspects of the cluster for you, before moving on to core AKS concepts such as cluster design and provisioning, networking, storage management, scaling, and security. After a quick look at Azure Container Registry, the course moves on to an end-to-end demonstration that shows how to provision a new AKS cluster and then deploy a sample cloud-native application into it.

For any feedback, queries, or suggestions relating to this course, please contact us at support@cloudacademy.com.

Learning Objectives

  • Learn about what AKS is and how to provision, configure and maintain an AKS cluster
  • Learn about AKS fundamentals and core concepts
  • Learn how to work with and configure many of the key AKS cluster configuration settings
  • And finally, you’ll learn how to deploy a fully working sample cloud-native application into an AKS cluster

Intended Audience

  • Anyone interested in learning about AKS and its fundamentals
  • Software Engineers interested in learning about how to configure and deploy workloads into an AKS cluster
  • DevOps and SRE practitioners interested in understanding how to manage and maintain an AKS cluster

Prerequisites

To get the most from this course it would help to have a basic understanding of:

  • Kubernetes (if you’re unfamiliar with Kubernetes, and/or require a refresher then please consider taking our dedicated Introduction to Kubernetes learning path)
  • Containers, containerization, and microservice-based architectures
  • Software development and the software development life cycle
  • Networks and networking

Resources

If you wish to follow along with the demonstrations in part two of this course, you can find all of the coding assets hosted in the following three GitHub repositories:

Transcript

Okay, welcome back. In this lesson, I'm going to provide a quick review of how to create and provision an AKS cluster. I'll first take you through the process from within the Azure AKS administration portal, and then explain how to perform the same provisioning using a scripted approach based on the Azure CLI.

To begin with, the quickest and easiest way to build and launch an AKS cluster is to do it from within the Azure AKS portal. The process is wizard-driven, with the overall provisioning process broken down into seven main configuration areas, which I'll now cover off.

Starting with the Basics screen, you need to supply the standard subscription and resource group values to ensure that the charges incurred by running the AKS cluster can be allocated to the correct team.

The cluster's name, region, and version then need to be specified. Finally, you need to define the number and size of the worker nodes that will be launched when the cluster is provisioned. Defaults are provided to you for these options, but they can be changed.

The Scale page provides options to enable or disable both Virtual Nodes and VM Scale Sets. Virtual Nodes use the Azure Container Instances service to provide rapid burst container deployment; I'll go into this in greater detail later. VM Scale Sets are used to manage pools of worker nodes that are added to the cluster.

On the Authentication page, the main setting is the Service Principal setting. Here, you can have the AKS managed service automatically create a new Service Principal, or you can elect to use an existing one. The service principal is used to allow the AKS managed service to create, deploy, and configure various cluster related resources into your virtual network, which you define next.

Additionally, you have the option of enabling or disabling Kubernetes RBAC, or role-based access control. By default, this is enabled and is the standard way for performing and managing role-based access within a Kubernetes cluster. With AKS, RBAC can be tied into and integrated with Azure AD for seamless user and group management.

The Networking page provides the most options that can be specified. The main configuration option on this page is the decision to go with either Basic or Advanced networking. I'll go into the details of the differences in the networking lesson, but as a rule of thumb, go with Basic if you just want to get up and running. If you need some of the more advanced options within AKS, such as Virtual Nodes, then you will need to go with Advanced networking. This option cannot be changed after the cluster has been created, so you will need to consider this setting carefully. And as mentioned, I'll cover off the different flavors of networking in the coming networking lesson.

If you choose to go with the Basic networking option, the AKS managed service will create a default VNet for you with predetermined default networking settings. The benefit of this approach is that it's faster and requires less effort to get your cluster up and running. However, if you require more control over the networking settings, or you want to use an existing VNet, then you will need to consider swapping over to advanced networking.

Changing to Advanced networking updates the configurable properties on the Networking page. Unlike Basic networking, you now have the ability to tune the various networking properties of your AKS cluster, such as the subnet CIDR range, the Kubernetes service address CIDR, the Kubernetes DNS service IP address, and the Docker bridge address CIDR.

If you go with Advanced networking, you can also leverage Azure Network Policies, which aren't available with the Basic networking option.

The Monitoring page simply provides you with the ability to enable or disable container monitoring.

The Tags page allows you to set one or many metadata tags against your AKS cluster, helping you manage and identify it amongst others if multiple clusters exist.

The final review and create page validates your selected settings before giving you the green light to create. Again, as a rule of thumb, stick with Basic for testing and basic cluster setups, and only go with Advanced when your requirements deem it necessary.

Overall, the experience of using the Azure AKS portal to provision a Kubernetes cluster is both intuitive and just works as it is designed.

When it comes to using the Azure CLI, the process again is fairly straightforward. The main steps to complete in the given sequence are, one, define main variables such as the cluster name, the resource group, and the VNet name that the cluster worker nodes will be deployed into. Two, create a new AD service principal and extract the assigned AppID and password. Three, create a new VNet and subnet for the cluster's worker nodes. Four, assign the contributor role to the service principal scoped on the VNet previously created. And five, finally create the AKS cluster, referencing the previously created service principal and VNet subnet. Okay, let's walk through this step by step.

To begin with, I'll define three main variables for the remainder of the script. Here, I'm defining the cluster's name, the resource group it is allocated into, and the name of the virtual network that we will create for the cluster's worker nodes.
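As an illustrative sketch of this step (the variable names and values here are placeholders, not the course's actual script), the definitions might look like:

```shell
#!/usr/bin/env bash
# Hypothetical variable definitions used by the rest of the provisioning script.
CLUSTER_NAME="aks-demo-cluster"     # name of the AKS cluster to create
RESOURCE_GROUP="aks-demo-rg"        # resource group the cluster is allocated into
VNET_NAME="aks-demo-vnet"           # virtual network for the cluster's worker nodes
```

Defining these once up front lets every subsequent command reference the same names consistently.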

Next, a new service principal is created. The AKS cluster will later be created with this. In this step, I'm using the jq utility to extract out the service principal's assigned appId and password. These must be passed into the aks create command later on.
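A hedged sketch of this step, using the standard `az ad sp create-for-rbac` JSON output fields (`appId`, `password`) and `jq` as described; this requires an authenticated Azure CLI session:

```shell
# Create a new service principal; --skip-assignment defers role assignment
# to a later, explicitly scoped step.
SP_JSON=$(az ad sp create-for-rbac --skip-assignment --output json)

# Extract the fields the aks create command needs later on.
APP_ID=$(echo "$SP_JSON" | jq -r '.appId')
PASSWORD=$(echo "$SP_JSON" | jq -r '.password')
```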

Next, I create a new VNet and subnet to host the cluster's worker nodes.
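A minimal sketch of the VNet and subnet creation, with illustrative CIDR ranges and subnet name (the actual values used in the demo may differ):

```shell
# Create the resource group and a VNet with a dedicated subnet for the worker nodes.
az group create --name "$RESOURCE_GROUP" --location eastus

az network vnet create \
  --resource-group "$RESOURCE_GROUP" \
  --name "$VNET_NAME" \
  --address-prefixes 10.0.0.0/16 \
  --subnet-name aks-subnet \
  --subnet-prefixes 10.0.1.0/24

# Capture the subnet's resource ID for the aks create command later on.
SUBNET_ID=$(az network vnet subnet show \
  --resource-group "$RESOURCE_GROUP" \
  --vnet-name "$VNET_NAME" \
  --name aks-subnet \
  --query id --output tsv)
```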

I then assign the contributor role to the service principal scoped at the VNet level. This is required to support the AKS managed service which needs to have permissions to create and assign various resources into the VNet.
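Sketched with the Azure CLI, assuming `$APP_ID` holds the service principal's appId extracted in the earlier step:

```shell
# Look up the VNet's resource ID so the role assignment can be scoped to it.
VNET_ID=$(az network vnet show \
  --resource-group "$RESOURCE_GROUP" \
  --name "$VNET_NAME" \
  --query id --output tsv)

# Grant the service principal Contributor rights on the VNet only,
# not on the whole subscription.
az role assignment create \
  --assignee "$APP_ID" \
  --role Contributor \
  --scope "$VNET_ID"
```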

Finally, we are ready to run the actual aks create CLI command, which will provision and create a new AKS cluster, placing it into the previously created VNet and subnet. The aks create command can be customized through the many parameters available, many of which are optional.
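A hedged example of what such an invocation might look like; the node count is an illustrative assumption, and `--network-plugin azure` corresponds to the Advanced networking option needed to place nodes into an existing subnet:

```shell
# Provision the AKS cluster into the previously created VNet subnet,
# using the service principal credentials extracted earlier.
az aks create \
  --resource-group "$RESOURCE_GROUP" \
  --name "$CLUSTER_NAME" \
  --node-count 3 \
  --network-plugin azure \
  --vnet-subnet-id "$SUBNET_ID" \
  --service-principal "$APP_ID" \
  --client-secret "$PASSWORD" \
  --generate-ssh-keys
```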

Once the aks create command is kicked off, it will take approximately five to 10 minutes to complete.

One thing to point out with the aks create command shown here is the presence of the --generate-ssh-keys option. This option will have the AKS managed service create an SSH key pair and update the worker nodes, allowing them to be authenticated using SSH. SSH connections will need to be made to the worker nodes' private IP addresses, and therefore an NSG, or network security group, inbound rule for SSH will also need to be created, allowing connections to port 22. The SSH key pair is saved to your workstation, and the aks create command logs to standard out the location where the generated key pair has been saved.
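An illustrative sketch of such an NSG rule; the NSG name here is hypothetical, and in practice you would target the NSG actually attached to the node subnet or NICs:

```shell
# Allow inbound SSH (TCP port 22) to the worker nodes.
az network nsg rule create \
  --resource-group "$RESOURCE_GROUP" \
  --nsg-name aks-node-nsg \
  --name allow-ssh \
  --priority 1000 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --destination-port-ranges 22
```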

Okay, that completes this lesson. In this lesson, I presented you with a quick review of the two main approaches to creating an AKS cluster. The manual approach performed within the Azure AKS administration portal, and the Azure CLI script-based approach.

Okay, go ahead and close this lesson and I'll see you shortly in the next one.

About the Author
Students: 36,269
Labs: 33
Courses: 93
Learning paths: 23

Jeremy is the DevOps Content Lead at Cloud Academy where he specializes in developing technical training documentation for DevOps.

He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 20+ years. In recent times, Jeremy has been focused on DevOps, Cloud, Security, and Machine Learning.

Jeremy holds professional certifications for both the AWS and GCP cloud platforms.