Part One - Lectures
Part Two - Demonstration
AKS is a supercharged managed Kubernetes service that makes creating and running a Kubernetes cluster a breeze!
This course explores AKS, Azure's managed Kubernetes service, covering the fundamentals of the service and how it can be used. You'll first learn how, as a managed service, AKS takes care of managing and maintaining certain aspects of the cluster for you, before moving on to core AKS concepts such as cluster design and provisioning, networking, storage management, scaling, and security. After a quick look at Azure Container Registry, the course moves on to an end-to-end demonstration that shows how to provision a new AKS cluster and then deploy a sample cloud-native application into it.
For any feedback, queries, or suggestions relating to this course, please contact us at firstname.lastname@example.org.
- Learn what AKS is and how to provision, configure, and maintain an AKS cluster
- Learn about AKS fundamentals and core concepts
- Learn how to work with and configure many of the key AKS cluster configuration settings
- And finally, you’ll learn how to deploy a fully working sample cloud-native application into an AKS cluster
- Anyone interested in learning about AKS and its fundamentals
- Software Engineers interested in learning about how to configure and deploy workloads into an AKS cluster
- DevOps and SRE practitioners interested in understanding how to manage and maintain an AKS cluster
To get the most from this course it would help to have a basic understanding of:
- Kubernetes (if you’re unfamiliar with Kubernetes, and/or require a refresher then please consider taking our dedicated Introduction to Kubernetes learning path)
- Containers, containerization, and microservice-based architectures
- Software development and the software development life cycle
- Networks and networking
If you wish to follow along with the demonstrations in part two of this course, you can find all of the coding assets hosted in the following three GitHub repositories:
Okay, Step 9. I'm going to start installing network policies to control pod-to-pod traffic within our cluster, particularly within the cloudacademy namespace. So I'll copy the first deny-all policy under Step 9.1.
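A default deny-all policy of the kind described here typically looks like the following sketch. The policy and namespace names match the course, but the exact manifest in the course repositories may differ:

```yaml
# Sketch of a default deny-all NetworkPolicy: an empty podSelector matches
# every pod in the cloudacademy namespace, and listing both policy types
# with no ingress/egress rules blocks all pod-to-pod traffic by default.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all   # assumed name
  namespace: cloudacademy
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
```

Once applied, any traffic not explicitly allowed by a subsequent policy is dropped, which is why the application breaks until the allow policies below are added.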
Now, what we should find is that once this default deny all policy has been deployed, the application should break because all pod-to-pod traffic within the cloudacademy namespace will be denied. Therefore, if I run the following curl command I would expect this to fail with some sort of timeout. And indeed it has, we've got a 504 gateway timeout, which is probably the expected response from the NGINX controller because it cannot send a request downstream and get a valid response back. So let's move on and fix this.
So Step 9.2, I'll deploy the following network policy which is required to allow the Mongo pods to talk within themselves for database replication. Okay, that's been deployed.
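A policy allowing the Mongo pods to talk among themselves for replication might look like this sketch. The `role: db` label and the standard MongoDB port 27017 are assumptions, not taken from the course manifests:

```yaml
# Sketch: allow Mongo-to-Mongo traffic for replica set replication.
# Assumes the Mongo pods carry the label role: db and listen on 27017.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-mongo-to-mongo   # assumed name
  namespace: cloudacademy
spec:
  podSelector:
    matchLabels:
      role: db
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: db
      ports:
        - protocol: TCP
          port: 27017
  policyTypes:
    - Ingress
```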
Step 9.3, this network policy will allow the API pods to talk to the MongoDB pods, and we need that because the API needs to read and write to the database. So that's been created.
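Allowing the API pods to reach the database could be sketched as follows, again assuming `role: api` and `role: db` labels that may not match the course's actual manifests:

```yaml
# Sketch: allow the API pods to connect to the Mongo pods.
# The rule is attached to the db pods and admits ingress from api pods.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-api-to-mongo   # assumed name
  namespace: cloudacademy
spec:
  podSelector:
    matchLabels:
      role: db
  ingress:
    - from:
        - podSelector:
            matchLabels:
              role: api
      ports:
        - protocol: TCP
          port: 27017   # default MongoDB port; assumed
  policyTypes:
    - Ingress
```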
Step 9.4. We need to allow the ingress pods to talk to the API pods. Okay, that has been created.
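Since the NGINX ingress controller usually runs in its own namespace, admitting its traffic to the API pods needs a `namespaceSelector`. This sketch assumes an `ingress-nginx` namespace and an API container port of 8080, neither of which is confirmed by the course:

```yaml
# Sketch: allow traffic from the ingress controller's namespace
# to the API pods. Namespace name and port are assumptions.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-to-api   # assumed name
  namespace: cloudacademy
spec:
  podSelector:
    matchLabels:
      role: api
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ingress-nginx
      ports:
        - protocol: TCP
          port: 8080
  policyTypes:
    - Ingress
```

A matching policy for the frontend pods would follow the same shape, swapping the pod selector and port.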
Step 9.5, we need to allow the ingress pods to talk to the frontend pods. That's been created. And then the last network policy, under Step 9.6, we need to deploy to allow the pods within the cloudacademy namespace to perform DNS resolution against the cluster's DNS pods within the kube-system namespace. So we'll deploy that. And everything is in place now.
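The DNS allow policy described in Step 9.6 is an egress rule, since the deny-all policy also blocks outbound traffic. A common sketch, using the well-known `kube-system` namespace label and DNS port 53, looks like this:

```yaml
# Sketch: allow all pods in cloudacademy to resolve DNS via the
# cluster DNS pods in kube-system (UDP and TCP on port 53).
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns   # assumed name
  namespace: cloudacademy
spec:
  podSelector: {}
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
  policyTypes:
    - Egress
```

Without this rule, pods could not resolve service names like the Mongo headless service, so the application would still fail even with the pod-to-pod allows in place.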
So what we have now is we have our default deny all policy together with a number of other network policies that allow just the right pod traffic within our setup. So if I go back to our instructions and run the Step 10 command the curl command should now work again. Which it does, so this is really, really good.
Jeremy is the DevOps Content Lead at Cloud Academy where he specializes in developing technical training documentation for DevOps.
He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 20+ years. In recent times, Jeremy has been focused on DevOps, Cloud, Security, and Machine Learning.
Jeremy holds professional certifications for both the AWS and GCP cloud platforms.