Part One - Lectures
Part Two - Demonstration
AKS is a super-charged managed Kubernetes service that makes creating and running a Kubernetes cluster a breeze!
This course explores AKS, Azure’s managed Kubernetes service, covering the fundamentals of the service and how it can be used. You’ll first learn how, as a managed service, AKS takes care of managing and maintaining parts of the cluster on your behalf, before moving on to core AKS concepts such as cluster design and provisioning, networking, storage management, scaling, and security. After a quick look at Azure Container Registry, the course moves on to an end-to-end demonstration that shows how to provision a new AKS cluster and then deploy a sample cloud-native application into it.
For any feedback, queries, or suggestions relating to this course, please contact us at email@example.com.
- Learn about what AKS is and how to provision, configure and maintain an AKS cluster
- Learn about AKS fundamentals and core concepts
- Learn how to work with and configure many of the key AKS cluster configuration settings
- And finally, you’ll learn how to deploy a fully working sample cloud-native application into an AKS cluster
- Anyone interested in learning about AKS and its fundamentals
- Software Engineers interested in learning about how to configure and deploy workloads into an AKS cluster
- DevOps and SRE practitioners interested in understanding how to manage and maintain an AKS cluster
To get the most from this course it would help to have a basic understanding of:
- Kubernetes (if you’re unfamiliar with Kubernetes, and/or require a refresher then please consider taking our dedicated Introduction to Kubernetes learning path)
- Containers, containerization, and microservice-based architectures
- Software development and the software development life cycle
- Networks and networking
If you wish to follow along with the demonstrations in part two of this course, you can find all of the coding assets hosted in the following three GitHub repositories:
So we'll now move on to step six. In step six we're going to install the API layer. The API will be used by the front end, which sends AJAX calls to it. Those calls will hit the API ingress, which will forward them to the API service, which will then forward them to the API pods deployed as part of the deployment, and the pods will read and write to the MongoDB database that we just set up. So, let's move on.
First thing to do is to deploy a secret, and the secret is used to store the credentials for the MongoDB database. So, I'll copy this. I'll clear the terminal, paste it, enter. Now, when you create a secret, the data attributes need to be Base64 encoded. Those Base64 encodings can be created using these commands here. Okay, that's been created. I'm now going to create the deployment, and this will deploy four API pods.
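As a quick sketch of the Base64 encoding step mentioned above, Secret data values can be produced like this. The credential values below are placeholders, not the demo's actual MongoDB credentials:

```shell
# Encode placeholder MongoDB credentials for use in a Kubernetes Secret.
# -n stops echo from appending a trailing newline, which would otherwise
# end up inside the encoded value.
echo -n 'admin' | base64        # YWRtaW4=
echo -n 'password' | base64     # cGFzc3dvcmQ=

# Decoding works the same way in reverse:
echo -n 'YWRtaW4=' | base64 --decode    # admin
```

Note that Base64 is an encoding, not encryption; Kubernetes Secrets are only as safe as the access controls around them.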
Okay, that's completed. And you can see here that we're mounting the secret that we previously created. Okay, step six point two: we need to create the API service. It's been created, and then to complete the installation of the API we're going to create an ingress. Now, this ingress is configured to use the NGINX ingress controller, and under host, you'll notice that it refers to the API public fully qualified domain name that we generated earlier on in the deployment. So I'll clear the terminal, and if I were to echo out the value behind this, it's just a hostname that will resolve to this public IP address via this free DNS service.
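As a rough sketch, the API ingress described here would look something like the following manifest. The resource names, service port, and host FQDN below are assumptions for illustration, not the exact values from the course repositories:

```yaml
# Hypothetical sketch of the API ingress; names, port, and host are placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
spec:
  ingressClassName: nginx            # route via the NGINX ingress controller
  rules:
    - host: api.<public-ip>.nip.io   # public FQDN resolving to the ingress IP via a free DNS service
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api            # the API ClusterIP service
                port:
                  number: 8080
```

The host rule is what lets a single NGINX ingress controller front multiple applications: requests are routed to the API service only when the Host header matches.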
Okay, so let's copy all of six point three and paste it. That runs against AKS. So, at this stage, we can look at the rollout for the deployment and it should have completed, as it has. If we now look at the pods again, we can see we've got four new pods for our API alongside our existing three pods in our MongoDB StatefulSet. If I jump into the service view, we can see that we've got our service, and it's configured with an internal cluster IP address.
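The checks just described map to kubectl commands along these lines. The resource names here are assumptions; the demo's actual names may differ, and these commands need a live cluster to run against:

```shell
# Wait for the API deployment rollout to finish (deployment name 'api' is assumed).
kubectl rollout status deployment/api

# List pods: expect four API pods alongside the three MongoDB StatefulSet pods.
kubectl get pods

# Inspect the API service: it should show an internal ClusterIP address.
kubectl get service api
```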
So, at this stage, we should be able to contact it remotely. So, I'll run the following command. This is a great result. What it means is that the network path from my local workstation works all the way through the various layers to the API pods hosted within the AKS cluster. And we can drill down into the API even further.
So, for example, if I run the following command, we're hitting the API and asking for information about the Go language. This request has gone through the NGINX ingress controller. Via the ingress, it's been routed to the API service, and then down to the pods. The pods have then talked to the MongoDB database, and the data has been passed all the way back. So that looks really good.
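A request like the one in the demo would look roughly like this. The hostname and API path are placeholders (the real FQDN was generated earlier in the deployment), so this is illustrative rather than runnable as-is:

```shell
# Hypothetical request through the NGINX ingress to the API;
# the hostname and path stand in for the demo's actual values.
curl http://api.<public-ip>.nip.io/languages/go
```

Each hop in the response path (ingress controller → ingress rule → service → pod → MongoDB) confirms that a layer of the stack is wired up correctly, which is why this single request is such a useful end-to-end smoke test.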
Jeremy is a Content Lead Architect and DevOps SME here at Cloud Academy where he specializes in developing DevOps technical training documentation.
He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 25+ years. In recent times, Jeremy has been focused on DevOps, Cloud (AWS, GCP, Azure), Security, Kubernetes, and Machine Learning.
Jeremy holds professional certifications for AWS, GCP, and Kubernetes.