Create K8s Cluster
Build Container Images
Create K8s Resources
End-to-End Application Test
K8s Network Policies
K8s Deployment Update Challenge
This training course is designed to help you master the skills of deploying cloud-native applications into Kubernetes.
Observe firsthand the end-to-end process of deploying a sample cloud-native application into a Kubernetes cluster. By taking this course you'll not only see the skills required to perform a robust, enterprise-grade deployment into Kubernetes, but you'll also be able to apply them yourself, as all code and deployment assets are available for you to perform your own deployment.
This training course provides you with in-depth coverage and demonstrations of the following Kubernetes resources:
- Ingress/Ingress Controller
- Persistent Volume
- Persistent Volume Claim
- Headless Service
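As an illustration of the last resource above, a headless Service is one declared with `clusterIP: None`, so DNS returns the individual Pod IPs rather than a single virtual IP. A minimal sketch (all names and ports here are hypothetical, not from the course):

```yaml
# Minimal headless Service sketch (names and ports are hypothetical).
# Setting clusterIP to None tells Kubernetes not to allocate a virtual IP;
# a DNS lookup of the Service name returns the matching Pod IPs directly.
apiVersion: v1
kind: Service
metadata:
  name: db-headless
spec:
  clusterIP: None        # this is what makes the Service headless
  selector:
    app: db
  ports:
    - port: 5432
      targetPort: 5432
```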
What you'll learn:
- Learn and understand the basic principles of deploying cloud-native applications into a Kubernetes cluster
- Understand how to set up and configure a locally provisioned Kubernetes cluster using Minikube
- Understand how to work with and configure many of the key Kubernetes cluster resources such as Pods, Deployments, Services, etc.
- And finally, you’ll learn how to manage deployments and Kubernetes cluster resources through their full lifecycle.
This training course provides you with many hands-on demonstrations where you will observe firsthand how to:
- Create and provision a Minikube Kubernetes cluster
- Install the Cilium CNI plugin
- Build and deploy Docker containers
- Create and configure Kubernetes resources using kubectl
Prerequisites:
- A basic understanding of containers and containerization
- A basic understanding of software development and the software development life cycle
- A basic understanding of networks and networking
Intended audience:
- Anyone interested in learning Kubernetes
- Software Developers interested in Kubernetes containerization, orchestration, and scheduling
- DevOps Practitioners
- [Instructor] Okay, welcome back. In this lecture we will deploy the frontend into the cluster, provisioning the following Kubernetes resources.
A Deployment consisting of four pods using the Cloud Academy frontend v1 Docker image, configured to serve TCP traffic on port 80. The Deployment will be configured for rolling updates and will have liveness and readiness probes. We will create a Service, which provides a stable private VIP (virtual IP) and internal load balancing over the four frontend pods, configured to listen on port 80. And finally, an Ingress resource will be created, which will forward external TCP port 80 traffic to the frontend Service. Okay, let's start. Within the terminal, I'll first navigate into the "voteapp" Kubernetes project directory. And here we will examine the directory structure using the "tree" command.
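The three resources just described could be declared roughly as follows. This is a hedged sketch, not the course's actual manifests: the resource names, labels, image name, probe paths, and hostname are all assumptions for illustration.

```yaml
# Hedged sketch of the three resources described above; names, labels,
# image tag, probe paths, and host are assumptions, not the course files.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 4                          # four frontend pods
  selector:
    matchLabels:
      app: frontend
  strategy:
    type: RollingUpdate                # rolling updates, as described
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend
          image: cloudacademy/frontend:v1   # assumed image name/tag
          ports:
            - containerPort: 80
          livenessProbe:
            httpGet:
              path: /                  # assumed probe path
              port: 80
          readinessProbe:
            httpGet:
              path: /
              port: 80
---
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  selector:
    app: frontend                      # load-balances over the four pods
  ports:
    - port: 80
      targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: frontend
spec:
  rules:
    - host: frontend.example.com       # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend
                port:
                  number: 80
```

Applying a directory of such files with `kubectl apply -f frontend/` creates all three resources in one step, as shown in the lecture.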
Next, I'll use the "kubectl" command to perform an ordered deployment of the files stored within the frontend directory. We can do this by simply running the following command: "kubectl apply -f" and give it the name of the directory, in this case "frontend/". Great, we can see here that the Deployment, Service, and Ingress resources have been successfully provisioned.
From here, let's clear the terminal and run the following "kubectl" command to view the details for all pods currently deployed: "kubectl get pods -o wide". Here, indeed, we can see that the four frontend pods have successfully launched and are all in the Running status. We can test the NGINX HTTP service exposed within the pod by retrieving the pod's IP address, like so, and then curling to it. In this case we will send just a HEAD request, indicated by the "-I" parameter. Here we can see that the NGINX service on the pod has responded correctly with an HTTP 200 response code, which is what we would expect. Next, we will perform the same test, but this time using the frontend's registered Service VIP: "kubectl get svc" for services.
Again, we use the "curl" command to curl to this IP address. And again, we can see that the NGINX service on the pod has responded correctly with another HTTP 200 response code. Finally, we need to test this externally using the browser. To do so, let's first query for the public IP address that has been assigned to our EC2 instance.
Here we can see that it is 188.8.131.52; let's copy this. Keep in mind that if you're performing your own deployment, then your public IP address will be different, and possibly even just localhost or 127.0.0.1 if you're deploying to your local workstation. Now, I'll jump into the browser and browse to the nip.io DNS name "frontend.184.108.40.206.nip.io". This is the address that we previously configured and updated in one of the earlier lectures, remembering that the Ingress resource requires a proper DNS name, not just a raw IP address. Regardless, the expectation here is that this will still fail. And, as you can see, it has indeed failed. Why so? Well, we haven't yet deployed the NGINX Ingress Controller, which actually performs the forwarding of external HTTP traffic to the backend service.
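For reference, nip.io is a wildcard DNS service: any name of the form `<anything>.<ip>.nip.io` resolves to `<ip>`, which is how the Ingress gets a proper hostname without registering a domain. The relevant Ingress host rule would look roughly like this sketch (the name, IP, and backend service are placeholders, not the course's actual values):

```yaml
# Ingress rule fragment using a nip.io wildcard DNS name.
# The IP and service name below are placeholders for illustration.
spec:
  rules:
    - host: frontend.10.0.0.1.nip.io   # resolves to 10.0.0.1 via nip.io
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend
                port:
                  number: 80
```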
Let's go ahead and set this up now, in the next lecture.
Jeremy is a Content Lead Architect and DevOps SME here at Cloud Academy where he specializes in developing DevOps technical training documentation.
He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 25+ years. In recent times, Jeremy has been focused on DevOps, Cloud (AWS, GCP, Azure), Security, Kubernetes, and Machine Learning.
Jeremy holds professional certifications for AWS, GCP, Terraform, and Kubernetes (CKA, CKAD, CKS).