Create K8s Cluster
Build Container Images
Create K8s Resources
End-to-End Application Test
K8s Network Policies
K8s Deployment Update Challenge
This training course is designed to help you master the skills of deploying cloud-native applications into Kubernetes.
Observe first-hand the end-to-end process of deploying a sample cloud-native application into a Kubernetes cluster. By taking this course you'll not only see the skills required to perform a robust enterprise-grade deployment into Kubernetes, but you'll also be able to apply them yourself, as all code and deployment assets are available for you to perform your own deployment.
This training course provides you with in-depth coverage and demonstrations of the following Kubernetes resources:
- Ingress/Ingress Controller
- Persistent Volume
- Persistent Volume Claim
- Headless Service
What you'll learn:
- Learn and understand the basic principles of deploying cloud-native applications into a Kubernetes cluster
- Understand how to set up and configure a locally provisioned Kubernetes cluster using Minikube
- Understand how to work with and configure many of the key Kubernetes cluster resources such as Pods, Deployments, Services, etc.
- And finally, you’ll learn how to manage deployments and Kubernetes cluster resources through their full lifecycle.
This training course provides you with many hands-on demonstrations where you will observe first-hand how to:
- Create and provision a Minikube Kubernetes cluster
- Install the Cilium CNI plugin
- Build and deploy Docker containers
- Create and configure Kubernetes resources using kubectl
Prerequisites:
- A basic understanding of containers and containerization
- A basic understanding of software development and the software development life cycle
- A basic understanding of networks and networking
Intended audience:
- Anyone interested in learning Kubernetes
- Software Developers interested in Kubernetes containerization, orchestration, and scheduling
- DevOps Practitioners
Okay, welcome back. In this lecture I'll quickly review and document the Kubernetes deployment that we'll create in the remaining lectures. For each of the cluster resources that we intend to use, a summary explanation will be provided. All right, let's get going! Keep in mind that the types of Kubernetes resources reviewed in this lecture are specifically the ones that we intend to use in the upcoming lectures when we begin our deployment. Therefore, the purpose of this lecture is to ensure that you have a good comprehension of each cluster resource before it is provisioned. The full list of Kubernetes resources that we'll use to deploy our sample cloud-native application within our cluster is displayed here.
Let's now go ahead and review each of these individually so that you are familiar with the purpose of each resource and what solution or solutions each resource provides us. Starting with the Namespace resource, as per the Kubernetes documentation: "Kubernetes supports multiple virtual clusters backed by the same physical cluster. These virtual clusters are called namespaces." Now, what this means is that if you have a multi-tenanted Kubernetes cluster where several independent applications are deployed onto the same physical cluster, then they can be logically segmented and isolated through the use of namespaces. This is perhaps the first and easiest layer of security to put in place, but it should be used in combination with other Kubernetes security mechanisms, such as the network policies that we'll discuss later.
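To make the idea concrete, a Namespace manifest is about as small as Kubernetes manifests get. A minimal sketch follows; the name `cloudacademy` is purely illustrative, not a name used in the course assets:

```yaml
# A minimal Namespace manifest. The name is a hypothetical example;
# apply with: kubectl apply -f namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: cloudacademy
```

Once created, resources can be scoped to it with `kubectl -n cloudacademy ...` or a `metadata.namespace` field in their manifests.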
Pods. At its core, the Pod is the smallest unit of deployment when it comes to compute within a Kubernetes cluster, and is considered the most basic building block. A pod can, however, be configured to contain multiple containers. Central to how networking is accomplished within a Kubernetes cluster is the fact that each pod is assigned its own cluster-routable private IP address, which ensures that pods can communicate with other pods regardless of which physical node within the cluster they are deployed to. Pods are typically labeled with custom metadata that you provide to indicate what role and/or function they fulfill within your deployment. Other Kubernetes resources, such as the Service resource, may then select pods based on those attached labels.
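The labeling idea can be sketched in a minimal Pod manifest. The pod name, label values, and container image below are assumptions for illustration only:

```yaml
# A hypothetical single-container Pod carrying a custom "role" label
# that a Service selector could later match on.
apiVersion: v1
kind: Pod
metadata:
  name: frontend-pod     # illustrative name
  labels:
    role: frontend       # custom metadata; selectors match on this
spec:
  containers:
  - name: frontend
    image: nginx:1.25    # placeholder image
    ports:
    - containerPort: 80
```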
Deployment. A Kubernetes Deployment resource is created to declare a desired state that you want your deployment to achieve. Deployments are often used in conjunction with ReplicaSets, another type of Kubernetes resource. A ReplicaSet is used to specify the required number of pods you need to launch within the cluster. ReplicaSets themselves are not often used directly; instead, they should be used within the Deployment abstraction, which extends the features of a ReplicaSet with additional management capabilities such as the ability to configure rolling updates and rollbacks. Deployments are primarily aimed at managing and deploying stateless components. In our demonstration, we'll use a deployment for the frontend component as well as another deployment for the API component.
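A Deployment of the kind just described might look like the following sketch. The replica count of four matches the demonstration deployment discussed later in this lecture; the names, labels, and image are hypothetical:

```yaml
# A hypothetical Deployment declaring four frontend replicas.
# The embedded selector/template pair is what the underlying
# ReplicaSet uses to manage the pods.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 4                 # desired state: four pods
  selector:
    matchLabels:
      role: frontend
  strategy:
    type: RollingUpdate       # enables zero-downtime updates/rollbacks
  template:
    metadata:
      labels:
        role: frontend
    spec:
      containers:
      - name: frontend
        image: nginx:1.25     # placeholder image
        ports:
        - containerPort: 80
```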
Service. A Kubernetes Service is used to set up and create a virtual IP, or VIP, which sits in front of a set of related pods grouped by one or more common metadata labels declared using a selector. A service can be considered an abstraction which groups a logical set of Pods together for access via a stable network address, and can be thought of as a microservice. Now, the motivation for declaring and creating a service is, more often than not, to provide a stable network address that can be used by other cluster resources when they need to communicate with the pods registered behind it.
Acknowledging that pods are ephemeral in nature and may come and go, potentially being scheduled onto different physical nodes, communicating directly with a pod via its assigned private IP address would be disadvantageous, as the pod IP address would likely change over time. Instead, by using a service you not only get the advantage of a stable VIP, you also get the benefit of incoming traffic being load balanced, or round-robined, over the pods that sit behind it.
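The Service described above can be sketched as follows. The `role: frontend` selector is a hypothetical label chosen to match the pod example earlier; any label your pods actually carry would work the same way:

```yaml
# A hypothetical Service exposing a stable VIP on port 80 and
# round-robining traffic over every pod labeled role=frontend.
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  selector:
    role: frontend     # groups all pods carrying this label
  ports:
  - port: 80           # port the VIP listens on
    targetPort: 80     # port the pods listen on
```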
Ingress. A Kubernetes Ingress resource is used when you need to expose cluster services to the external world. To do so, an Ingress Controller is first deployed and set up within the cluster. Next, you define your Ingress requirements. Typically, services are exposed to external users via HTTP or HTTPS. Different Ingress Controller implementations exist, and you can cherry-pick the one you install based on the feature set you require. For example, an Nginx Ingress Controller can be installed. An Ingress resource can be created to provide load balancing, SSL termination, path-based routing, and name-based virtual hosting. In our demonstration deployment, we'll set up the Nginx Ingress Controller and then establish two Ingress resources: one which exposes the frontend web application externally, and another which exposes the API service externally.
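An Ingress resource of the kind just described might look like this sketch. The hostname is hypothetical, and the backend service name assumes the frontend Service example from earlier:

```yaml
# A hypothetical Ingress routing external HTTP requests for a
# given host to the frontend Service, via the Nginx controller.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: frontend
spec:
  ingressClassName: nginx          # assumes the Nginx Ingress Controller
  rules:
  - host: frontend.example.com     # hypothetical hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend         # the Service VIP to proxy to
            port:
              number: 80
```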
StatefulSet. As the name alludes to, a StatefulSet resource is ideal for applications that have a requirement to manage and persist state; think of databases, for example. As a quick summary, a StatefulSet is useful when any of the following are required: a stable, unique network identifier; stable, persistent storage; ordered, graceful deployment and scaling; and lastly, ordered, graceful deletion and termination. When creating a StatefulSet, the StatefulSet controller generates and assigns a persistent, stable, unique network identifier to each pod. This naming convention follows the pattern <statefulset-name>-<ordinal>, where the ordinal is the startup position of the pod within the StatefulSet: zero being the first, then one, then two, et cetera. Another important resource provisioned and used in combination with a StatefulSet is what's referred to as a Headless Service.
A Headless Service is a Service resource whose clusterIP field has been explicitly set to None. This results in a service which groups a set of pods together but has no VIP assigned to it, nor does it provide load balancing; instead, DNS records are created that map directly to each of the individual pods within the StatefulSet. This allows other pods within the cluster to use a DNS name to communicate with a specific pod within the StatefulSet. Now, this is useful for use cases where you perhaps need to specify a database connection string, et cetera.
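A Headless Service is only a one-line change from a normal Service. In this sketch the service name `mongo` and label `role: db` are assumptions chosen to match the MongoDB deployment discussed later:

```yaml
# A hypothetical Headless Service: clusterIP is explicitly None,
# so no VIP is allocated; instead each matching pod gets its own
# DNS record, e.g. mongo-0.mongo.<namespace>.svc.cluster.local.
apiVersion: v1
kind: Service
metadata:
  name: mongo
spec:
  clusterIP: None      # the "headless" part
  selector:
    role: db
  ports:
  - port: 27017
```

A connection string can then target individual pods, e.g. `mongodb://mongo-0.mongo,mongo-1.mongo,mongo-2.mongo:27017`.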
The StatefulSet controller also guarantees the startup ordering of pods within a StatefulSet. The same goes for termination albeit in the reverse order. The guaranteed startup ordering is important when you require one pod to take some form of precedence over the remaining pods in the StatefulSet. This again is often a requirement when say for example, you need to configure a master-slave database configuration.
Each pod within a StatefulSet will be mounted to its own dedicated persistent volume via a persistent volume claim. Typically, persistent volumes are provisioned using remote volume types. If any individual pod within the StatefulSet dies, the StatefulSet controller will relaunch a replacement somewhere within the cluster, reapply the same unique network identifier to it and then reattach the same remote persistent volume. In our application, we have chosen to use MongoDB in a replica set configuration made up of three instances: a primary and two secondaries. We'll use a StatefulSet to manage setting up the MongoDB replica set within our cluster.
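Pulling the StatefulSet ideas together, a sketch for a three-member MongoDB StatefulSet might look as follows. The names, image tag, replica set name, and storage size are all illustrative assumptions, not the course's actual manifests:

```yaml
# A hypothetical StatefulSet launching mongo-0, mongo-1, mongo-2,
# each with its own PersistentVolumeClaim generated from the
# volumeClaimTemplates section.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: mongo            # must reference the headless Service
  replicas: 3
  selector:
    matchLabels:
      role: db
  template:
    metadata:
      labels:
        role: db
    spec:
      containers:
      - name: mongo
        image: mongo:4.2        # placeholder version
        command: ["mongod", "--replSet", "rs0", "--bind_ip_all"]
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: mongo-data
          mountPath: /data/db   # MongoDB's data directory
  volumeClaimTemplates:         # yields one PVC per pod, reattached on restart
  - metadata:
      name: mongo-data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
```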
Network Policies. The final Kubernetes resource that we'll use and demonstrate is the Network Policy resource. Network policies as applied within Kubernetes are used to restrict and control ingress and egress pod communications. In essence, they provide a type of firewalling mechanism where you can design policies that control the flow of network traffic into and out of pods.
By default, all network traffic is allowed within a single namespace. But, often there are times where it doesn't make sense to allow all traffic within the single namespace. For example, it makes no sense for the database pod to initiate communication to the frontend pod. This can be controlled by using and deploying network policies. The basic approach is to first deploy a default deny all traffic policy, followed by deploying one or many other network policies that begin to open up and allow just the required network paths to make the application as a whole functional again. This is just one example of when and how network policies can be used within a Kubernetes cluster. In our sample application deployment, we'll use them to increase the security posture of our application.
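The "default deny" starting point described above is a standard pattern and can be sketched in a very short manifest:

```yaml
# A default-deny policy: the empty podSelector matches every pod
# in the namespace, and listing Ingress with no ingress rules
# blocks all incoming pod traffic until allow policies are added.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}     # empty selector = all pods in the namespace
  policyTypes:
  - Ingress
```

Note that network policies only take effect when the cluster's CNI plugin enforces them, which is one reason Cilium is installed earlier in this course.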
We'll assign network policies to pods by specifying metadata labels as selectors, and whitelist traffic originating from other pods by again specifying metadata labels as selectors for the pod origin. So, let's now map all of the resources that we've just reviewed back to our solution. Again, we'll ask the question: how do we intend to deploy our sample cloud-native application into our Kubernetes cluster? Well, as can be seen here, our preferred solution and chosen Kubernetes deployment will look like this. Let's now drill down into the logical tiers that make up our full Kubernetes cluster deployment.
The frontend tier will be deployed using the following resources: a Deployment resource is used to deploy and launch four pods containing the frontend, configured to listen on port 80; a Service resource is created to provide a stable VIP, or virtual IP, sitting in front of and round-robining over the four frontend pods, with the service also configured to listen on port 80; and an Ingress resource is created to route and proxy external HTTP requests to the frontend Service VIP. The port mapping is port 80 to port 80.
The API tier will be deployed using the following resources: a Deployment resource is used to deploy and launch four pods containing the API, configured to listen on port 8080; a Service resource is created to provide a stable VIP, or virtual IP, sitting in front of and round-robining requests over the four API pods, with the service again configured to listen on port 8080; and an Ingress resource is created to route and proxy external HTTP requests to the API Service VIP. The port mapping here is port 80 to port 8080.
The main reason for this is that the Nginx Ingress Controller can only be configured to listen on ports 80 (HTTP) and 443 (HTTPS). The MongoDB database will be deployed using the following resources: a StatefulSet resource is used to deploy and launch three pods containing the MongoDB service, configured to listen on port 27017; a Headless Service resource is created to sit in front of the StatefulSet to provide a stable network name for each of the individual pods, as well as for the StatefulSet as a whole; and three Persistent Volume resources are created, one for each individual pod within the StatefulSet.
MongoDB will be configured to persist all data and configuration into a directory mounted on its persistent volume, and three Persistent Volume Claim resources are also created. These are used to map the Persistent Volume resources to the individual pods within the StatefulSet.
The final piece in our deployment puzzle is to secure the inter-pod communications within our application using network policies. As can be seen here, we'll secure pod communications by deploying three network policies. With these network policies in place, the final deployment solution prevents the following pod communications, none of which make sense: MongoDB pods shouldn't be able to initiate connections to either the API pods or the frontend pods, and the API pods shouldn't be able to initiate connections to the frontend pods, and vice versa.
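An allow policy of the kind described might look like the following sketch, which permits only the API pods to reach MongoDB on its database port. The label values are hypothetical and assume the labels used in the earlier examples:

```yaml
# A hypothetical allow policy layered on top of a default-deny:
# only pods labeled role=api may open connections to pods
# labeled role=db, and only on TCP port 27017.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-api-to-mongo
spec:
  podSelector:
    matchLabels:
      role: db            # policy applies to the MongoDB pods
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: api       # whitelist traffic originating from API pods
    ports:
    - protocol: TCP
      port: 27017
```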
Okay, that completes this lecture and our Kubernetes sample Cloud native application deployment review.
Go ahead and close this lecture and we'll see you in the next lecture where we begin to perform the actual deployment.
About the Author
Jeremy is the DevOps Content Lead at Cloud Academy where he specializes in developing technical training documentation for DevOps.
He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 20+ years. In recent times, Jeremy has been focused on DevOps, Cloud, Security, and Machine Learning.
Jeremy holds professional certifications for both the AWS and GCP cloud platforms.