Part One - Lectures
Part Two - Demonstration
AKS is a super-charged managed Kubernetes service that makes creating and running a Kubernetes cluster a breeze!
This course explores AKS, Azure’s managed Kubernetes service, covering the fundamentals of the service and how it can be used. You’ll first learn how, as a managed service, AKS takes care of managing and maintaining certain aspects of itself, before moving on to core AKS concepts such as cluster design and provisioning, networking, storage management, scaling, and security. After a quick look at Azure Container Registry, the course moves on to an end-to-end demonstration that shows how to provision a new AKS cluster and then deploy a sample cloud-native application into it.
For any feedback, queries, or suggestions relating to this course, please contact us at email@example.com.
- Learn about what AKS is and how to provision, configure and maintain an AKS cluster
- Learn about AKS fundamentals and core concepts
- Learn how to work with and configure many of the key AKS cluster configuration settings
- Learn how to deploy a fully working sample cloud-native application into an AKS cluster
- Anyone interested in learning about AKS and its fundamentals
- Software Engineers interested in learning about how to configure and deploy workloads into an AKS cluster
- DevOps and SRE practitioners interested in understanding how to manage and maintain an AKS cluster
To get the most from this course it would help to have a basic understanding of:
- Kubernetes (if you’re unfamiliar with Kubernetes, and/or require a refresher then please consider taking our dedicated Introduction to Kubernetes learning path)
- Containers, containerization, and microservice-based architectures
- Software development and the software development life cycle
- Networks and networking
If you wish to follow along with the demonstrations in part two of this course, you can find all of the coding assets hosted in the following three GitHub repositories:
Okay, welcome back. In this demonstration, I'm going to take you through the process of creating an AKS cluster. And then once we have our cluster available, we'll deploy a sample cloud-native application into it.
Now, before I begin, all of the instructions that I'm about to perform have been captured and recorded within the README found in this particular GitHub repo. I encourage you to take a copy of it, as this will give you the ability to run the same commands within your own environment.
Now, as mentioned, once the AKS cluster is up and running, we'll deploy a sample cloud-native application into it. This application is already prebuilt and provides a voting web-based frontend. The frontend has voting buttons for various languages, and when you click one of these buttons, an AJAX call is made from the frontend to an API hosted in the AKS cluster. The API receives the voting information and then writes it into a backend database, in this case a MongoDB database.
So, this diagram here shows the infrastructure, or the cluster components, that we'll deploy into our AKS cluster once it's been created. So, to begin with, I'm gonna swap over into Visual Studio Code. Now, I have the same README here on the left-hand side, and I'm gonna perform all of the instructions using the integrated terminal on the right-hand side. Okay, beginning with step one, we're going to create our new AKS cluster.
Step 1.1 requires us to define three variables. So, I'll copy these, go into the terminal, and paste them. We're defining our cluster name, the resource group that the cluster will be provisioned under, and the VNet, or virtual network, name that the cluster worker nodes will be deployed into.
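For reference, the three variables from step 1.1 look something like this. The values here are placeholders; substitute the ones from the course README:

```shell
# Placeholder values -- the actual names used in the README may differ.
CLUSTER_NAME=akstest
RESOURCE_GROUP=akstest-rg
VNET_NAME=akstest-vnet
```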
Okay, step 1.2 now requires us to create a new service principal. This service principal is a requirement for the cluster. So, I'll run these commands, and we need to extract both the appId and the password that have been assigned to the service principal. You can see that I found the appId by using the jq utility to navigate to the .appId property within the JSON response; likewise for the password. And here are their values.
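The jq extraction pattern described here looks roughly like the following. Since a live `az ad sp create-for-rbac` call needs an authenticated Azure session, a canned JSON response of the same shape stands in for it below:

```shell
# Canned stand-in for the output of something like:
#   SP_JSON=$(az ad sp create-for-rbac --skip-assignment)
SP_JSON='{"appId":"11111111-2222-3333-4444-555555555555","password":"example-secret","tenant":"example-tenant"}'

# Pull out the two values the cluster-creation step needs.
APP_ID=$(echo "$SP_JSON" | jq -r '.appId')
PASSWORD=$(echo "$SP_JSON" | jq -r '.password')

echo "$APP_ID"   # 11111111-2222-3333-4444-555555555555
```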
Okay, moving on to step 1.3. We need to create the virtual network. So, I'll copy this command and paste it into the terminal. So, that's completed. We'll move on to step 1.4. Here we need to assign the Contributor role to the service principal that we created earlier, and scope it to the virtual network that we've just created.
So, we'll copy these commands here. I'll clear the terminal and run the step 1.4 commands. So, again, this is assigning the Contributor role to the service principal, and the service principal will then be used when we create the actual cluster, which is the next command.
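Steps 1.3 and 1.4 together look roughly like this sketch. The subnet name and address ranges are assumptions for illustration; the README has the authoritative values:

```shell
# Step 1.3: create the virtual network and a subnet for the worker nodes.
# (Subnet name and address ranges here are assumptions.)
az network vnet create \
  --resource-group $RESOURCE_GROUP \
  --name $VNET_NAME \
  --address-prefixes 10.240.0.0/16 \
  --subnet-name akssubnet \
  --subnet-prefix 10.240.0.0/24

# Step 1.4: grant the service principal Contributor rights, scoped to the VNet.
VNET_ID=$(az network vnet show \
  --resource-group $RESOURCE_GROUP \
  --name $VNET_NAME \
  --query id -o tsv)

az role assignment create \
  --assignee $APP_ID \
  --role "Contributor" \
  --scope $VNET_ID
```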
So, step 1.5, this is where we actually create the AKS cluster using the az aks create command. I first need to get the subnet ID, and now that I have it, I can run the az aks create command. So, I'll clear the terminal and paste the commands in.
Now, a couple of things to highlight: I've specified the --generate-ssh-keys option, and you can see here that the az aks create command has generated those keys and written them back down to my local file system. I've specified a node count of two. I'm using VirtualMachineScaleSets for my worker nodes. I'm running 1.16.7 for the Kubernetes version, and I'm using advanced networking, as per the network-plugin azure setting. The service CIDR is 10.0.0.0/16. And I've also enabled network policies, using the azure variant. And then, finally, we needed to specify both the service principal's appId and password. So, we'll now leave this to run and complete. It will take approximately five minutes. I'll speed up the demonstration to the point where the cluster has been created.
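Putting the flags just mentioned together, the step 1.5 commands look roughly like this. This is a sketch, not the authoritative version from the README; the subnet name and the --dns-service-ip value are assumptions (the DNS service IP just needs to sit inside the service CIDR):

```shell
# Look up the ID of the subnet the worker nodes will be deployed into.
# (The subnet name "akssubnet" is an assumption.)
SUBNET_ID=$(az network vnet subnet show \
  --resource-group $RESOURCE_GROUP \
  --vnet-name $VNET_NAME \
  --name akssubnet \
  --query id -o tsv)

# Create the cluster with the settings called out in the demo.
az aks create \
  --resource-group $RESOURCE_GROUP \
  --name $CLUSTER_NAME \
  --node-count 2 \
  --vm-set-type VirtualMachineScaleSets \
  --kubernetes-version 1.16.7 \
  --network-plugin azure \
  --network-policy azure \
  --service-cidr 10.0.0.0/16 \
  --dns-service-ip 10.0.0.10 \
  --vnet-subnet-id $SUBNET_ID \
  --service-principal $APP_ID \
  --client-secret $PASSWORD \
  --generate-ssh-keys
```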
Okay, so the cluster provisioning process has completed successfully, and we should now have our AKS cluster ready for us to deploy our sample cloud-native application into. Now, before I continue on to step two, let's jump over into the Azure portal. If I refresh the Kubernetes services page, we should see our new AKS test cluster, which we do. If I click into it, you can see that it's got a Succeeded status, so it's ready for us. It is indeed running Kubernetes version 1.16.7, and if we take a look at load balancers, we've also got a single load balancer there as well.
So, this particular Azure load balancer was created automatically for us as part of the AKS provisioning process. And then, if I look at virtual machine scale sets, you can see we've got a single AKS node pool. And if we look under Instances, we've got our two nodes. So, these are our two worker nodes.
Okay, going back to Visual Studio Code, we'll continue on to step two. We need to establish the credentials to allow kubectl to authenticate to our cluster. So, I'll copy this command, clear the terminal, paste, enter. What this will do is talk to the AKS cluster using the az CLI, retrieve the credential information, and write it into the .kube/config file.
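The step two commands are likely along these lines. The --admin flag is an assumption based on the akstest-admin context that shows up shortly afterwards:

```shell
# Merge the cluster's connection details into ~/.kube/config.
az aks get-credentials \
  --resource-group $RESOURCE_GROUP \
  --name $CLUSTER_NAME \
  --admin

# Verify kubectl can now reach the cluster's API server.
kubectl get nodes
```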
If I was to display the contents of this file, what you should see is all of the connection and authentication information. So, we can see here, for example, the URL for our new AKS cluster. And then, we have the certificates that are used to actually authenticate to it.
So, if I run the kubectl get nodes command, that should now authenticate against the API server within our cluster, as it does. And it shows us that we indeed have two worker nodes, as we expected. So, I'll clear the terminal. I can run each of these commands to get more information about the available config, the current list of contexts, and the current context that I'm actually authenticated under.
So, you can see here, we're under the akstest-admin context.
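The inspection commands referred to here are most likely the standard kubectl config subcommands, shown below for reference:

```shell
# Show the merged kubeconfig (sensitive data is redacted in the output).
kubectl config view

# List all of the contexts available in the kubeconfig.
kubectl config get-contexts

# Show which context kubectl is currently using, e.g. akstest-admin.
kubectl config current-context
```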
Okay, moving on to step three. I'll clear the terminal again. And in step 3.1, we're going to create a new namespace. This namespace will be used to deploy the nginx-ingress controller into. So, that's created successfully.
Step 3.2, we're gonna use Helm to do the installation of the nginx-ingress controller into our AKS cluster. So, if I run helm version, you can see that I'm running 3.0.2. Helm is a client that is installed locally on my workstation, and it will authenticate to our AKS cluster using the same .kube/config file that I showed you earlier. So, I'll run the helm repo add command. Okay, that has completed successfully. I'll now run helm repo update. Okay, so the helm repo update has completed successfully, and this time we can perform a search within it using the helm search repo stable command. And here we can see all of the different charts. Now, we're going to install the nginx-ingress controller, so we're gonna install this guy here. I'll clear the terminal, and I'll run the helm install command, ensuring that the installation of the nginx-ingress controller takes place within the nginx-ingress namespace that we previously created.
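The Helm 3 sequence described here looks roughly like the following. Note this is a period-accurate sketch: the stable chart repository used in the recording has since been deprecated, and the release name is an assumption:

```shell
# Add and refresh the (now-deprecated) stable chart repository.
helm repo add stable https://kubernetes-charts.storage.googleapis.com
helm repo update

# Locate the ingress controller chart.
helm search repo stable/nginx-ingress

# Install it into the namespace created in step 3.1.
helm install nginx-ingress stable/nginx-ingress \
  --namespace nginx-ingress
```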
Okay, so that has completed successfully. And moving on to step 3.3. I'll run a kubectl get svc command, and we'll perform a watch on it. And what we wanna see here is that the external IP gets provisioned for the ingress controller, which it has.
So, if we jump back over into the Azure portal, go back to load balancers, look at the load balancer that's been assigned to our cluster, and go to the Frontend IP configuration, we can see that we now have two IPs. The first one was created when the AKS cluster was created, and the second one has been created and assigned for the nginx-ingress controller that we've just deployed.
So, we can see here that 18.104.22.168 is the same as this IP address here. This is the public-facing frontend IP address assigned to our nginx-ingress controller. So, everything is looking good. I'll clear the terminal, and this time I'll run the following commands. What we're doing here is taking a look at the nginx-ingress controller, and what we want to do is extract its IP address. So, to do that, I run the following command here.
So, this command is going to interrogate the service and pull this piece of information out for us, assigning it to this variable. If we now echo it out, we can indeed see we've pulled out the public IP address. Next, I'm gonna set up two more variables: API_PUBLIC_FQDN and FRONTEND_PUBLIC_FQDN. And then, finally, we echo both of those out so we can see what they contain. Each one is simply a DNS name with the nginx-ingress controller's public IP address embedded in it.
Now, the reason I've just run the previous commands is that I wanted to generate the DNS names that will be used for both the api service and the frontend service that we're about to deploy. When we generate these DNS names, we embed the nginx-ingress controller's public IP address into them. And then, when these DNS names are resolved by the nip.io DNS service, they dynamically resolve to the embedded IP address.
So, it's a really cool feature, and we need it because our nginx-ingress controller performs host- and path-based routing. More on this later.
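The variable plumbing just described can be sketched as below. A placeholder IP stands in for the live lookup so the pattern can be shown standalone, and the service name, namespace, and api/frontend hostname prefixes are assumptions:

```shell
# In the demo the IP comes from the ingress controller's Service, roughly:
#   NGINX_PUBLIC_IP=$(kubectl get svc nginx-ingress-controller \
#     --namespace nginx-ingress \
#     -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
# A placeholder IP is used here instead of the live kubectl call.
NGINX_PUBLIC_IP=203.0.113.10

# Embed that IP into nip.io hostnames for the api and frontend services.
API_PUBLIC_FQDN=api.$NGINX_PUBLIC_IP.nip.io
FRONTEND_PUBLIC_FQDN=frontend.$NGINX_PUBLIC_IP.nip.io

echo $API_PUBLIC_FQDN        # api.203.0.113.10.nip.io
echo $FRONTEND_PUBLIC_FQDN   # frontend.203.0.113.10.nip.io
```

When nip.io resolves a name like api.203.0.113.10.nip.io, it simply answers with the IP embedded in the hostname, so no DNS records need to be registered for the demo.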
Jeremy is a Content Lead Architect and DevOps SME here at Cloud Academy where he specializes in developing DevOps technical training documentation.
He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 25+ years. In recent times, Jeremy has been focused on DevOps, Cloud (AWS, GCP, Azure), Security, Kubernetes, and Machine Learning.
Jeremy holds professional certifications for AWS, GCP, and Kubernetes (CKA, CKAD, CKS).