This Administering Kubernetes Clusters course covers the networking and scheduling objectives of the Certified Kubernetes Administrator (CKA) exam curriculum.
You will learn a range of core practices, such as ninja-level kubectl skills, controlling where pods are scheduled, managing resources for long-lasting production environments, and controlling access to applications in a cluster.
This is a six-part course made up of four lectures. If you are not familiar with Kubernetes, we recommend completing the Introduction to Kubernetes course and the Deploy a Stateless Application in a Kubernetes Cluster Lab before taking this course.
- Learn pro tips for using kubectl effectively. What you learn here will be useful for administering a cluster and for using Kubernetes in general.
- Learn how to attract pods to, or repel them from, nodes or other pods. You can ensure pods run on the nodes where they are intended to run, and achieve other objectives, such as high availability, by distributing pods across nodes.
- Learn to think about using Kubernetes for the long term when you need to consider how you’ll manage and update resources.
- Learn how to control internal and external access to applications running in a Kubernetes cluster.
- Anyone who is interested in Kubernetes cluster administration, though many parts of this course appeal to a broader audience of Kubernetes users.
- Individuals who may benefit from taking this course include System Administrators, DevOps Engineers, Cluster Administrators, and Kubernetes Certification Examinees.
To get the most from this course, you should:
- Have knowledge of core Kubernetes resources, including pods and deployments.
- Have experience using the kubectl command-line tool to work with Kubernetes clusters.
- Have an understanding of the YAML and JSON file formats. You'll probably already have this skill if you have the prior two; when working with Kubernetes, it won't take long until YAML files make an appearance.
NGINX ingress controller for Kubernetes: https://github.com/kubernetes/ingress-nginx
Speaker 1: Kubernetes has several concepts relevant to networking. As a cluster admin, you need to know about the concepts and how you can use them to securely provide access to applications running in a cluster.
This lesson will begin by reviewing the basic networking models employed by Kubernetes. Then we will discuss more about services. The networking basics and services topics are also covered in other content here on Cloud Academy, so we will only review the key concepts of each.
Lastly we'll discuss Kubernetes Ingress resources. There is another concept important to Kubernetes network security called Network Policies. But we won't talk about it here because that topic is covered well in the Securing Kubernetes Clusters Lab here on Cloud Academy.
The basic building block in Kubernetes is the pod. The network model for pods is IP-per-pod, meaning each pod is assigned one unique IP address in the cluster. Containers in the pod share that IP address and can communicate with each other's ports using localhost. A pod is, of course, scheduled onto a node in the cluster, and the node can reach the pod using its pod IP address. Other pods in the cluster can also reach the pod using its IP address, thanks to whichever Kubernetes networking plug-in the cluster uses.
The network plug-in implements the Container Network Interface (CNI) standard and enables pod-to-pod communication. But pods should be seen as ephemeral: they can be killed and restarted with a different IP. You may also have multiple replicas of a pod running, so you can't rely on a single pod IP to get the benefits of replication. This is where services come in.
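As a concrete illustration of the IP-per-pod model, here is a minimal sketch of a two-container pod (the pod, container, and image names are illustrative, not from the lesson). Because both containers share the pod's network namespace, the sidecar can reach the web server over localhost:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-net-demo        # hypothetical name
spec:
  containers:
  - name: web
    image: nginx:1.25
    ports:
    - containerPort: 80
  - name: sidecar
    image: curlimages/curl:8.5.0
    # The sidecar reaches the web container via localhost, since all
    # containers in a pod share one IP address and port space.
    command: ["sh", "-c", "while true; do curl -s http://localhost:80/ >/dev/null; sleep 5; done"]
```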
A service maintains a logical set of pod replicas, usually identified with labels. The diagram only includes one replica, but there could be many, spread over many nodes. The service maintains a list of endpoints as pods are added to and removed from the set, and it can send requests to any of the pods in the set.
Clients of the service now only need to know about the service rather than specific pods. Pods can discover services using environment variables, as long as the pod was created after the service, or by using the DNS add-on in the cluster. The DNS add-on can resolve the service name, or the namespace-qualified service name, to the IP address associated with the service.
The IP given to a service is called the cluster IP. ClusterIP is the most basic type of service, and the cluster IP is only reachable from within the cluster. The kube-proxy cluster component that runs on each node is responsible for proxying requests for the service to one of the service's endpoints.
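A minimal ClusterIP service might look like the following sketch (the service name, label, and ports are illustrative). The selector defines the logical set of pods, and omitting `type` defaults to ClusterIP:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: news              # illustrative name
spec:
  # type: ClusterIP is the default, so it may be omitted
  selector:
    app: news             # pods labeled app=news become the endpoints
  ports:
  - port: 80              # port exposed on the cluster IP
    targetPort: 8080      # port the pod's containers listen on
```

With the DNS add-on, pods in the same namespace can reach this service as `http://news`, or cluster-wide via the namespace-qualified name.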
The other types of services allow clients outside of the cluster to connect to the service. The first of these is NodePort, which causes a given port to be opened on every node in the cluster. A cluster IP is still given to the service, and any requests to the node port on any node are routed to the cluster IP. The next type of service that allows external access is LoadBalancer, which exposes the service externally through a cloud provider's load balancer.
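Exposing the same hypothetical service on every node's port 30080 is a one-line change of the `type` field, sketched here:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: news              # illustrative name
spec:
  type: NodePort
  selector:
    app: news
  ports:
  - port: 80              # port on the cluster IP
    targetPort: 8080      # port on the pods
    nodePort: 30080       # optional; if omitted, Kubernetes picks a port
                          # from the default 30000-32767 range
```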
A LoadBalancer service also creates a cluster IP and a node port for the service. Requests to the load balancer are sent to the node port and routed to the cluster IP. Features of cloud provider load balancers, such as connection draining and health checks, are configured using annotations on the service.
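A LoadBalancer variant of the same illustrative service is sketched below. The annotation shown is an AWS-specific example (each cloud provider defines its own annotation keys), included only to show where such settings go:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: news              # illustrative name
  annotations:
    # Cloud-provider-specific; this AWS annotation enables connection
    # draining on the provisioned load balancer.
    service.beta.kubernetes.io/aws-load-balancer-connection-draining-enabled: "true"
spec:
  type: LoadBalancer      # also allocates a cluster IP and a node port
  selector:
    app: news
  ports:
  - port: 80
    targetPort: 8080
```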
The final type of service is ExternalName, and it is different in that it is implemented with DNS, not proxying. You configure an ExternalName service with a DNS name, and requests for the service return a CNAME record with the external DNS name. This can be used for services running outside of Kubernetes, such as a database-as-a-service offering.
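An ExternalName service is just a DNS alias; a sketch with a hypothetical service name and external host:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: orders-db          # hypothetical name used inside the cluster
spec:
  type: ExternalName
  # DNS queries for this service return a CNAME record pointing here;
  # no cluster IP is allocated and no proxying takes place.
  externalName: db.example.com
```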
Services operate at layer 4 of the OSI network stack, the transport layer of TCP and UDP. Kubernetes also provides a layer 7 abstraction called the Ingress. Layer 7 is the application layer, which is where HTTP exists. You need an ingress controller running in your cluster to use any Ingress resources.
Ingress controllers are different from most controllers, which automatically run as part of the kube-controller-manager binary. Instead, ingress controllers run as normal pods in the cluster. You can choose from a variety of ingress controllers; I have put a link to one based on NGINX in the transcript of this video. Installation steps vary depending on how your cluster is deployed.
Once you have an ingress controller in place, you can use an Ingress to specify rules for connecting inbound connections to Kubernetes services. Ingresses support SSL termination, load balancing, and path-based routing. We'll see an example of this in a demo: we'll define an Ingress that routes incoming traffic to two different services based on the HTTP request path. We won't go through the process of installing an ingress controller, because that depends on the specific ingress controller you choose and where your cluster runs.
I have the manifest for an Ingress that will route requests for the k8s-perspectives.com host name to separate news and blog services based on the request path. Let's go through the relevant fields. The first is an annotation that is ingress-controller specific. I'm using the ingress controller named ingress-nginx, which is the one I linked to in the transcript.
Without getting into much detail, every time an Ingress resource is created, updated, or deleted, a new configuration for NGINX is generated. This particular annotation will match the paths we'll see later and rewrite them to a slash when sending requests to the different services. With that ingress-controller-specific detail out of the way, we can focus on the controller-agnostic spec.
The spec consists of a list of rules. This example has only one rule. Each rule can specify a host, but it's not required. When a host is set, the rule will only apply when the request matches the host. If no host is set, the ingress applies regardless of the host value in the request. If you include multiple rules, setting separate values of the host field enables name-based virtual hosting allowing multiple host names for the same IP.
The other field is http, and it's required for each rule. Under that is the paths field, which consists of a list of paths. Each path must specify a backend, which is a service name and port. If a path field is given, the incoming request must match the path for the rule to apply; otherwise, any path will match and be directed to the backend.
In our example, the k8s-perspective web app has separate news and blog services. The URL path of the request is used to direct requests to the appropriate underlying Kubernetes service. Ingresses make it easy to accomplish this scenario.
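The manifest described above might look roughly like the sketch below, written against the networking.k8s.io/v1 API (the demo's original manifest may use an older API version and slightly different field names, and the backend ports here are assumptions):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: k8s-perspectives
  annotations:
    # ingress-nginx-specific: rewrite matched paths to / before
    # forwarding requests to the backend services
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: k8s-perspectives.com   # rule applies only to requests for this host
    http:
      paths:
      - path: /news
        pathType: Prefix
        backend:
          service:
            name: news           # routes /news traffic to the news service
            port:
              number: 80
      - path: /blog
        pathType: Prefix
        backend:
          service:
            name: blog           # routes /blog traffic to the blog service
            port:
              number: 80
```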
That's all for the lesson on Kubernetes networking. We began with a review of basic networking principles. Then we paid some extra attention to the different types of services available on Kubernetes. Lastly we discussed ingresses and how they can be used to manage external access at the HTTP layer.
We'll wrap up the course in the next lesson. Continue on when you're ready.
Logan has been involved in software development and research since 2007 and has been in the cloud since 2012. He is an AWS Certified DevOps Engineer - Professional, AWS Certified Solutions Architect - Professional, Microsoft Certified Azure Solutions Architect Expert, MCSE: Cloud Platform and Infrastructure, Google Cloud Certified Associate Cloud Engineer, Certified Kubernetes Administrator (CKA), Certified Kubernetes Application Developer (CKAD), Linux Foundation Certified System Administrator (LFCS), and Certified OpenStack Administrator (COA). He earned his Ph.D. studying design automation and enjoys all things tech.