Part One - Lectures
Part Two - Demonstration
AKS is a super-charged managed Kubernetes service which makes creating and running a Kubernetes cluster a breeze!
This course explores AKS, Azure’s managed Kubernetes service, covering the fundamentals of the service and how it can be used. You’ll first learn about how as a managed service it takes care of managing and maintaining certain aspects of itself, before moving onto the core AKS concepts such as cluster design and provisioning, networking, storage management, scaling, and security. After a quick look at Azure Container Registry, the course then moves on to an end-to-end demonstration that shows how to provision a new AKS cluster and then deploy a sample cloud-native application into it.
For any feedback, queries, or suggestions relating to this course, please contact us at firstname.lastname@example.org.
- Learn about what AKS is and how to provision, configure and maintain an AKS cluster
- Learn about AKS fundamentals and core concepts
- Learn how to work with and configure many of the key AKS cluster configuration settings
- And finally, you’ll learn how to deploy a fully working sample cloud-native application into an AKS cluster
- Anyone interested in learning about AKS and its fundamentals
- Software Engineers interested in learning about how to configure and deploy workloads into an AKS cluster
- DevOps and SRE practitioners interested in understanding how to manage and maintain an AKS cluster
To get the most from this course it would help to have a basic understanding of:
- Kubernetes (if you’re unfamiliar with Kubernetes, and/or require a refresher then please consider taking our dedicated Introduction to Kubernetes learning path)
- Containers, containerization, and microservice-based architectures
- Software development and the software development life cycle
- Networks and networking
If you wish to follow along with the demonstrations in part two of this course, you can find all of the coding assets hosted in the following three GitHub repositories:
Okay, welcome back. In this lesson, I'm going to cover several important security options and configurations that can and should be applied to an AKS cluster to keep it secure.
As a general rule of thumb, when building, configuring, and managing AKS clusters, apply the rule of zero trust. Zero trust stipulates a "never trust, always verify" and least-privilege approach to privileged access. Zero trust should be applied both inside and outside of the AKS cluster network.
When it comes to supporting multiple AKS environments and or multiple teams, you need to carefully consider how to configure your setup to ensure that deployed workloads remain separated and only accessible to their correct users. There are two general approaches to this.
The first approach involves physical separation. This is achieved by rolling out and provisioning individual clusters per requirement. When this approach is taken, every team gets its own dedicated cluster. Obviously, this approach comes at additional cost, but with the benefit of ensuring a smaller blast radius should something go wrong, whether through a user action or a component failure within the cluster.
The second and alternative approach is to perform the separation logically, using Kubernetes namespaces. Namespaces can be created, one per environment and/or team. Multiple namespaces can be applied within a single cluster, allowing its resources to be carved up and shared. This approach is likely to be less costly than the multiple-cluster approach, but must be more carefully managed, as the blast radius for something going wrong will be greater.
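As a minimal sketch of the logical separation approach (the namespace and resource names here are hypothetical):

```shell
# Create one namespace per team within the shared cluster
kubectl create namespace team-blue
kubectl create namespace team-red

# Deploy a team's workload into its own namespace
kubectl apply -f app.yaml --namespace team-blue

# Optionally cap each team's share of cluster resources
# with a ResourceQuota to help contain the blast radius
kubectl create quota team-blue-quota \
  --hard=requests.cpu=4,requests.memory=8Gi,pods=20 \
  --namespace team-blue
```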
When an AKS cluster is created, it has an Azure service principal assigned to it, either one you specify or one created automatically for you. The Azure service principal is a security identity which is then used by AKS to make authenticated calls to other Azure APIs. AKS uses the service principal at various times during the cluster's lifetime to perform various automated tasks, ranging from building the initial virtual network to host the worker nodes (if using the basic networking option), to deploying extra VMs into the virtual network when scaling out the node pool, to allowing AKS to pull down container images from a private Azure Container Registry.
As mentioned, AKS will automatically create a new service principal for you and assign it to the new cluster if you do not specify an existing one yourself. The following example demonstrates how to launch a new AKS cluster in which the service principal is automatically created for you, since one wasn't explicitly configured.
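A sketch of this is shown below; the resource group and cluster names are hypothetical placeholders:

```shell
# Create a resource group to hold the cluster
az group create --name akstest-rg --location eastus

# Launch the cluster WITHOUT specifying --service-principal:
# AKS will create a service principal automatically and
# assign it to the new cluster
az aks create \
  --resource-group akstest-rg \
  --name akstest \
  --node-count 2 \
  --generate-ssh-keys
```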
The next example demonstrates how to first create a service principal and then launch an AKS cluster with this service principal specified. The example captures the returned JSON response in the SP variable. The service principal's app ID and password then need to be extracted; this is done using the following commands. Finally, the AKS cluster can be launched using a command which assigns it the existing service principal we just created.
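A reconstruction of those steps might look like the following (names are hypothetical, and `jq` is assumed to be available for parsing the JSON):

```shell
# 1. Create a service principal and capture the JSON response
#    in the SP variable
SP=$(az ad sp create-for-rbac --skip-assignment --name aks-sp)

# 2. Extract the service principal's app ID and password
SP_APPID=$(echo "$SP" | jq -r '.appId')
SP_PASSWORD=$(echo "$SP" | jq -r '.password')

# 3. Launch the AKS cluster, assigning it the existing
#    service principal just created
az aks create \
  --resource-group akstest-rg \
  --name akstest \
  --service-principal "$SP_APPID" \
  --client-secret "$SP_PASSWORD" \
  --node-count 2 \
  --generate-ssh-keys
```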
Kubernetes RBAC, Role-Based Access Control, should be enabled on an AKS cluster. RBAC gives you the ability to manage who can do what within Kubernetes, and its application is vital to ensure that the cluster remains healthy. RBAC allows you to deploy fine-grained rule sets, defined within a Role resource. The Role contains the permission rules and is then applied to a user through the use of a RoleBinding.
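As an illustrative sketch, the following creates a Role granting read-only pod access in a hypothetical namespace, bound to a hypothetical user:

```shell
kubectl apply -f - <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: team-blue
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: team-blue
subjects:
- kind: User
  name: "jane@example.com"   # placeholder user identity
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
EOF
```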
RBAC in AKS can be integrated with Azure Active Directory. When this integration is enabled, cluster users are first authenticated by Azure Active Directory. Cluster-based RBAC policies can then be assigned and mapped against Azure Active Directory based users and/or groups.
An authenticated user conducting activity against the AKS cluster, using the kubectl command, can then only perform the allowed actions. Note, Azure Active Directory integration with Kubernetes RBAC must be enabled and configured at cluster creation time.
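A minimal sketch of enabling the managed Azure AD integration at cluster creation time follows; the group object ID is a placeholder, and exact flag names can vary between Azure CLI versions:

```shell
# Enable Azure AD integration when creating the cluster.
# Members of the specified AAD group become cluster admins.
az aks create \
  --resource-group akstest-rg \
  --name akstest \
  --enable-aad \
  --aad-admin-group-object-ids <aad-group-object-id> \
  --generate-ssh-keys
```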
Network policies, applied within Kubernetes, are used to restrict and control ingress and egress pod communications. In essence, they provide a type of firewalling mechanism whereby you can design policies that control the flow of network traffic into and out of pods.
The following Kubernetes network policy demonstrates a quick example of how to filter inbound database pod traffic, allowing only traffic that originates from the API pod.
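A sketch of such a policy is shown below; the `app: db` and `app: api` pod labels are assumptions for illustration:

```shell
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-allow-api-only
spec:
  # Apply this policy to the database pods
  podSelector:
    matchLabels:
      app: db
  policyTypes:
  - Ingress
  # Allow inbound traffic only from pods labelled app: api
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: api
EOF
```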
When it comes to enabling network policy functionality within AKS, AKS provides a couple of different plug-in options: Azure network policies and Calico network policies. Both implementations use Linux iptables under the hood and therefore will only work with Linux-based worker nodes.
Azure network policies are supported only on advanced networking enabled clusters. Calico network policies, on the other hand, are supported on both basic and advanced networking enabled clusters. Both Azure network policies and Calico network policies support all network policy types defined within the Kubernetes specification.
Network policies can also be used to prevent pods from talking to their host node's Azure metadata URL. Without any network policy in place, by default, any pod can connect to the host node's Azure metadata URL. In this case, to seal it up and deny all traffic to the Azure metadata URL, implement the following network policy. Note, this network policy will still allow all other outbound traffic from the pod.
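A sketch of such a policy follows; it assumes the Azure Instance Metadata Service endpoint at its well-known address 169.254.169.254 and selects all pods in the namespace:

```shell
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-metadata
spec:
  podSelector: {}          # applies to all pods in the namespace
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0            # allow all other outbound traffic...
        except:
        - 169.254.169.254/32       # ...except the Azure metadata endpoint
EOF
```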
The following example shows how to launch an AKS cluster, using advanced networking and with the Azure network policies option enabled.
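A sketch of that command (hypothetical names, flags per the Azure CLI's `az aks create`):

```shell
# Advanced (Azure CNI) networking with Azure network policies
az aks create \
  --resource-group akstest-rg \
  --name akstest \
  --network-plugin azure \
  --network-policy azure \
  --generate-ssh-keys
```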
The next example shows how to launch an AKS cluster, using advanced networking and with the Calico networking policies option enabled.
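And a corresponding sketch with the Calico option selected instead:

```shell
# Advanced (Azure CNI) networking with Calico network policies
az aks create \
  --resource-group akstest-rg \
  --name akstest \
  --network-plugin azure \
  --network-policy calico \
  --generate-ssh-keys
```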
When you create an AKS cluster, by default AKS creates and assigns the API server, within the control plane, a public IP address. This approach is quick and convenient for many cluster deployments, allowing you to run authenticated kubectl commands across the internet over encrypted HTTPS connections. However, it might not be considered desirable for other deployments, particularly those which have very stringent security requirements and which don't want to route kubectl API server traffic across the internet.
For this requirement, AKS provides you with the ability to configure and enable private clusters, which again must be done at cluster creation time. Enabling this option will ensure that the API server is assigned an RFC 1918 internal-only private IP address. With this security posture in place, your authenticated, encrypted kubectl traffic can only be sent over an internal network with routes to the cluster's API server.
An AKS private cluster can be enabled, regardless of whether you choose basic or advanced networking. It will work with either.
Enabling a private AKS cluster is done by specifying the enable-private-cluster parameter on the AKS create command, like so.
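A sketch of that command, with hypothetical names:

```shell
# Create a private cluster: the API server gets an
# internal-only private IP address
az aks create \
  --resource-group akstest-rg \
  --name akstest \
  --enable-private-cluster \
  --generate-ssh-keys
```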
Once your AKS private cluster has been provisioned, you'll need to consider networking options as to how to connect to it when using kubectl. The easiest approach is to deploy a bastion host or jump box with a public IP address into the same vnet which hosts the cluster. The bastion host should be set up to allow SSH traffic only from your externally facing public IP address.
The next option is really just an expansion of the first, whereby you set up a dedicated management vnet, which in turn is peered to the vnet hosting the cluster. The bastion host is then deployed into the management vnet.
The third option would be to establish a VPN between the vnet hosting the cluster and your corporate on-prem network, together with the appropriate routes. With this in place, you can use kubectl directly from your corporate workstation, or consider using a hybrid approach, combining all of the previously mentioned options together.
Now, regardless of whether you enable your AKS cluster to be private or not, you can always at least whitelist and filter API-server-bound traffic by specifying a list of allowed IP address ranges. This option can be set at cluster creation time. For example, the following AKS create cluster command will create a new AKS cluster that drops all inbound traffic to the cluster's API server falling outside of the CIDR block 18.104.22.168/24.
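A sketch of that command, using the CIDR block from the example above and hypothetical names:

```shell
# Only the listed CIDR block may reach the API server;
# all other inbound API server traffic is dropped
az aks create \
  --resource-group akstest-rg \
  --name akstest \
  --api-server-authorized-ip-ranges 18.104.22.168/24 \
  --generate-ssh-keys
```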
Keep in mind that you can update and even disable the API server authorized IP ranges anytime after the initial cluster creation, using either of the following commands.
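Those commands can be sketched as follows (hypothetical names again):

```shell
# Update the authorized IP ranges after cluster creation
az aks update \
  --resource-group akstest-rg \
  --name akstest \
  --api-server-authorized-ip-ranges 18.104.22.168/24

# Disable the feature entirely by passing an empty range
az aks update \
  --resource-group akstest-rg \
  --name akstest \
  --api-server-authorized-ip-ranges ""
```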
And when it comes to allowing wholesale inbound traffic from the internet to your AKS cluster's deployed workloads, it's not sufficient to rely on layer three and layer four filtering rules. Many modern attacks now happen at the application layer, layer seven. To address this problem, consider deploying a web application firewall, or WAF, in front of your cluster's Azure load balancer, within its own vnet, which is peered to the vnet hosting the cluster. When you do so, set up the downstream load balancer to be internal only.
Okay, that completes this lesson. In this lesson, I reviewed the various security options available in Kubernetes and AKS. It's extremely important to ensure that your AKS cluster deployment is secured from day one and maintains a zero trust policy throughout its lifetime.
Okay, go ahead and close this lesson and I'll see you shortly in the next one.
Jeremy is the DevOps Content Lead at Cloud Academy where he specializes in developing technical training documentation for DevOps.
He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 20+ years. In recent times, Jeremy has been focused on DevOps, Cloud, Security, and Machine Learning.
Jeremy holds professional certifications for both the AWS and GCP cloud platforms.