Azure Kubernetes Service (AKS) VNet Integration
Difficulty: Advanced
Duration: 43m
Students: 517
Ratings: 4.2/5
Description

This course focuses on how to configure Azure Kubernetes Service and Azure App Service so that they are accessible within an Azure Virtual Network. In addition to the how of configuring these services, it is also important to understand the requirements that make the configuration possible, as well as the features and functions that become available once it is active. This course will help put all of this information into perspective.

Learning Objectives

  • Configure App Services for regional VNet integration
  • Learn how Azure Kubernetes Service can be configured for VNet integration, as well as the different networking models it supports
  • Configure App Service environments so that your clients can access them

Intended Audience

  • Solution architects
  • Cloud administrators
  • Security engineers
  • Application developers
  • Anyone involved in the planning, implementation, and maintenance of Azure network solutions

Prerequisites

To get the most out of this course, you should have a strong understanding of the Azure portal, networking experience, and experience with Azure network solutions, including routing and private access.

Transcript

In this video, we're going to start by taking a look at Azure Kubernetes Service and how it integrates into a Virtual Network inside of your Azure subscription.

Azure Kubernetes Service is a very different beast than Azure App Service, and the reasoning behind that is this: although it is absolutely considered Platform as a Service, Microsoft manages only a portion of it. The nodes get deployed as Infrastructure as a Service, giving you more control over the environment than an Azure App Service does. It also gives you more networking integration capabilities.

Let's start by understanding exactly which resources you're going to be leveraging for an Azure Kubernetes cluster when that cluster gets deployed.

The first is of course a Virtual Network, which is in fact required. The Virtual Network can either be manually attached or automatically created, depending upon the networking model you choose, and we'll talk about that in a little bit.

Of course, there is going to need to be a load balancer to allow for scalability. The minimum recommended cluster size for Azure Kubernetes is typically three VMs, so that load balancer will need to be there at a minimum just for Kubernetes to work, let alone to support your application.
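To make that concrete, here's a minimal sketch of how the Azure load balancer gets used once the cluster is up. The deployment name and image here are hypothetical; creating a Service of type LoadBalancer is what causes AKS to program a rule and frontend IP on that load balancer:

  # Hypothetical workload; any container image would do.
  kubectl create deployment hello-web --image=nginx

  # A Service of type LoadBalancer makes AKS program a rule on the
  # cluster's Azure load balancer and allocate a frontend IP.
  kubectl expose deployment hello-web --type=LoadBalancer --port=80

  # EXTERNAL-IP shows <pending> until Azure finishes provisioning.
  kubectl get service hello-web --watch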

Network Security Groups should absolutely be applied to the subnets that your cluster is placed into in order to prevent traffic that should not be flowing.
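As a rough sketch of what that looks like with the Azure CLI (the resource group, VNet, and subnet names here are hypothetical, and this assumes those resources already exist):

  # Create the NSG.
  az network nsg create \
    --resource-group myAksRg \
    --name aks-subnet-nsg

  # Allow only HTTPS inbound from the internet; other inbound traffic
  # falls through to the default deny rules.
  az network nsg rule create \
    --resource-group myAksRg \
    --nsg-name aks-subnet-nsg \
    --name allow-https \
    --priority 100 \
    --direction Inbound \
    --access Allow \
    --protocol Tcp \
    --destination-port-ranges 443

  # Associate the NSG with the subnet that hosts the AKS nodes.
  az network vnet subnet update \
    --resource-group myAksRg \
    --vnet-name aks-vnet \
    --name aks-subnet \
    --network-security-group aks-subnet-nsg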

The basis of Azure Kubernetes is going to be the virtual machines which provide the nodes for your Azure Kubernetes cluster.

If you are going to provide any kind of scaling for your Kubernetes cluster, maybe you know that your user base is going to shrink and grow over time, then you're going to want to take advantage of VM scale sets to allow that cluster to grow and shrink.
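For example, here's a hedged sketch of creating a cluster with the cluster autoscaler enabled, which is what drives the underlying VM scale set to grow and shrink; the resource group and cluster names are hypothetical:

  # Assumes the resource group already exists.
  az aks create \
    --resource-group myAksRg \
    --name myAksCluster \
    --node-count 3 \
    --enable-cluster-autoscaler \
    --min-count 3 \
    --max-count 10 \
    --generate-ssh-keys

  # The bounds can be adjusted later without recreating the cluster.
  az aks update \
    --resource-group myAksRg \
    --name myAksCluster \
    --update-cluster-autoscaler \
    --min-count 3 \
    --max-count 20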

I talked about the fact that Kubernetes, when deployed inside of Azure, supports two different networking models. The first is called Kubenet, and this is the default Kubernetes networking model that most customers are used to if they're deploying Kubernetes either on-premises or with another cloud service provider. It does the following:

  • It conserves IP address space
  • It uses Kubernetes internal or external load balancers to reach pods from outside the cluster
  • It allows you to manually manage and maintain user-defined routes
  • It provides for a maximum of 400 nodes per cluster.

This is what is built into the Kubernetes engine.
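As a minimal sketch (names are hypothetical), creating a Kubenet cluster from the Azure CLI and letting AKS generate the VNet and route table for you looks something like this:

  # Kubenet with an automatically created VNet; AKS also sets up the
  # route table that Kubenet needs.
  az aks create \
    --resource-group myAksRg \
    --name kubenetCluster \
    --network-plugin kubenet \
    --node-count 3 \
    --generate-ssh-keys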

Microsoft has provided a second networking model called Azure CNI, which assigns IP addresses to the pods as well as the nodes. Of course, this means that your IP address space is going to need to be extremely well planned before implementing your cluster, depending upon how many nodes you believe are going to be deployed within that cluster and how many pods within those nodes.

In this model, the pods get full Virtual Network connectivity and can be reached directly via their private IP address from connected networks. This requires a much larger IP address space and much better planning.
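To give a feel for that planning, here's a sketch using hypothetical names and address ranges. Microsoft's published sizing guidance for Azure CNI is roughly (nodes + 1) + ((nodes + 1) * maximum pods per node), so 50 nodes at the default of 30 pods per node needs about 1,581 addresses, which means at least a /21 subnet:

  # Create a VNet with a subnet large enough for nodes AND pods.
  az network vnet create \
    --resource-group myAksRg \
    --name aks-vnet \
    --address-prefixes 192.168.0.0/16 \
    --subnet-name aks-subnet \
    --subnet-prefixes 192.168.0.0/21

  SUBNET_ID=$(az network vnet subnet show \
    --resource-group myAksRg \
    --vnet-name aks-vnet \
    --name aks-subnet \
    --query id -o tsv)

  # --network-plugin azure selects Azure CNI; pods draw their IPs
  # directly from the subnet.
  az aks create \
    --resource-group myAksRg \
    --name cniCluster \
    --network-plugin azure \
    --vnet-subnet-id "$SUBNET_ID" \
    --node-count 3 \
    --generate-ssh-keys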

In the Kubenet case, this is a sample diagram of what an implementation would look like.

You have individual nodes that have actual IP addresses. These are 192.168.* ranges, and they are the IP addresses of the Virtual Network. Then you'll notice that there is a set of 10.* IP addresses associated with the pods. These are the internal IP address ranges that Kubernetes is leveraging so the services can talk to one another within Kubernetes, but those IP addresses are not published to the rest of the virtual network for anyone to access them individually.

The difference with Azure CNI is that the pod IP addresses would come from the 192.168.* IP address range, just like the nodes do.

What are the differences between the two networking models?

Both networking models support automatic configuration of VNet resources if you choose to. They also both support manual creation and attachment of VNet resources, which is only partially true.

What I mean by that is, when you go through the Azure wizard, if you choose Kubenet as your networking model, you will automatically have a VNet created for you. You cannot attach Kubernetes to an existing VNet through the wizard; if you do it through the CLI or PowerShell, you do have more control. Manual creation and attachment is available for Azure CNI by default. However, you do have the ability to automatically create a VNet as well.
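For instance, here's a sketch of attaching a Kubenet cluster to an existing subnet from the CLI. The names and CIDRs are hypothetical, the pod and service ranges must not overlap the VNet's address space, and the cluster identity needs permissions on the subnet:

  SUBNET_ID=$(az network vnet subnet show \
    --resource-group myAksRg \
    --vnet-name existing-vnet \
    --name aks-subnet \
    --query id -o tsv)

  # Kubenet attached to an existing subnet; pods keep their own
  # internal range, which must not overlap the VNet.
  az aks create \
    --resource-group myAksRg \
    --name kubenetByoVnet \
    --network-plugin kubenet \
    --vnet-subnet-id "$SUBNET_ID" \
    --pod-cidr 10.244.0.0/16 \
    --service-cidr 10.0.0.0/16 \
    --dns-service-ip 10.0.0.10 \
    --node-count 3 \
    --generate-ssh-keys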

With respect to network policies for Azure Kubernetes service, they can be defined and changes can be made through both models. 
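As a sketch, the policy engine is chosen with a create-time flag. Calico works with either networking model, while the Azure policy engine requires Azure CNI; the names here are hypothetical:

  az aks create \
    --resource-group myAksRg \
    --name policyCluster \
    --network-plugin azure \
    --network-policy azure \
    --node-count 3 \
    --generate-ssh-keys

  # With Kubenet, Calico is the supported policy engine instead:
  #   --network-plugin kubenet --network-policy calico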

Manual VNet resource creation allows for user-defined routes and service endpoint configurations, whereas automatic VNet resource creation prevents those configurations. So in the Kubenet case, if you're going through the wizard, when the VNet gets created, there will be user-defined routes automatically set up to allow Kubernetes to leverage the Kubenet model.

In the Azure CNI case, where the IP addresses are now attached to the pods, you are the only one who is going to know what the requirements are for a user-defined route, based on how the pods are going to communicate.
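If you do need custom routing in that case, building and attaching your own route table looks something like the sketch below; the firewall IP address and resource names are hypothetical:

  az network route-table create \
    --resource-group myAksRg \
    --name aks-routes

  # Send all egress through a virtual appliance such as a firewall.
  az network route-table route create \
    --resource-group myAksRg \
    --route-table-name aks-routes \
    --name egress-via-firewall \
    --address-prefix 0.0.0.0/0 \
    --next-hop-type VirtualAppliance \
    --next-hop-ip-address 192.168.100.4

  # Associate the route table with the cluster's subnet.
  az network vnet subnet update \
    --resource-group myAksRg \
    --vnet-name aks-vnet \
    --name aks-subnet \
    --route-table aks-routes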

When should you use the different networking models? 

If the following scenarios are true, then Kubenet should be the right networking model for you:

  • You have a limited IP address space
  • Most of the pod communication is internal to the cluster
  • You don't need advanced AKS features like virtual nodes or Azure network policy
  • You have a very solid understanding of your application
  • The pods or the services associated with your application won't be reached individually and won't be generating a lot of outbound traffic

In addition, if your networking is being managed by a separate team and they are controlling IP address availability, that's another reason for choosing Kubenet over Azure CNI.

The reverse scenarios are going to be true for Azure CNI to be the right model for you. The following should be what you're looking for:

  • You have available IP address space for both the nodes and the pods
  • Most of the pod communication is to resources outside of the cluster rather than strictly internal
  • You don't want to manage user-defined routes. When you specify Azure CNI, the user-defined routes are not only created for you automatically, but they're actually managed by Azure automatically as well.
  • You also don't need advanced features.

However, you can leverage advanced features with Azure CNI if you choose to.

There are definitely a number of differences between the networking models. I'm not going to go through each line of the table on the slide, but here are a few standouts:

  • Pod to pod connectivity is absolutely supported in both models
  • Access to resources secured by service endpoints (meaning PaaS services that have been privatized to your virtual network) is supported in both models
  • One thing that is not supported in both models is Windows node pools alongside Linux node pools: Kubenet does not support Windows node pools, whereas Azure CNI does today
  • You can deploy a cluster into existing or new virtual networks in both models

Just make sure to understand the similarities and differences before you make a decision on which networking model you want to implement.

In the next video, we'll take a look at how to specify these network integrations for AKS by leveraging the Azure portal.


About the Author

Brian has been working in the Cloud space for more than a decade as both a Cloud Architect and Cloud Engineer. He has experience building Application Development, Infrastructure, and AI-based architectures using many different OSS and Non-OSS based technologies. In addition to his work at Cloud Academy, he is always trying to educate customers about how to get started in the cloud with his many blogs and videos. He is currently working as a Lead Azure Engineer in the Public Sector space.