Create K8s Cluster
Build Container Images
Create K8s Resources
End-to-End Application Test
K8s Network Policies
K8s Deployment Update Challenge
The course is part of this learning path
This training course is designed to help you master the skills of deploying cloud-native applications into Kubernetes.
Observe firsthand the end-to-end process of deploying a sample cloud-native application into a Kubernetes cluster. By taking this course you'll not only see the skills required to perform a robust, enterprise-grade deployment into Kubernetes, but you'll also be able to apply them yourself, as all code and deployment assets are available for you to perform your own deployment:
This training course provides you with in-depth coverage and demonstrations of the following Kubernetes resources:
- Ingress/Ingress Controller
- Persistent Volume
- Persistent Volume Claim
- Headless Service
What you'll learn:
- Learn and understand the basic principles of deploying cloud-native applications into a Kubernetes cluster
- Understand how to set up and configure a locally provisioned Kubernetes cluster using Minikube
- Understand how to work with and configure many of the key Kubernetes cluster resources such as Pods, Deployments, Services, etc.
- And finally, you’ll learn how to manage deployments and Kubernetes cluster resources through their full lifecycle.
This training course provides you with many hands-on demonstrations where you will observe firsthand how to:
- Create and provision a Minikube Kubernetes cluster
- Install the Cilium CNI plugin
- Build and deploy Docker containers
- Create and configure Kubernetes resources using kubectl
Prerequisites:
- A basic understanding of containers and containerization
- A basic understanding of software development and the software development life cycle
- A basic understanding of networks and networking
Intended audience:
- Anyone interested in learning Kubernetes
- Software Developers interested in Kubernetes containerization, orchestration, and scheduling
- 
- DevOps Practitioners
- [Instructor] Okay, welcome back. In this lecture, we'll now deploy the API into the cluster. We'll provision the following Kubernetes resources: a Deployment consisting of four Pods using the Cloud Academy API version 1 Docker image, which will serve TCP traffic on port 8080. The Deployment will be configured for rolling updates and will have liveness and readiness probes configured.
The MongoDB replica set connection string will be passed in as an environment variable specified on the container itself. We'll also create a Service, which will provide a stable, private VIP and internal load balancing over the four API Pods. And finally, we'll create an Ingress, which will forward external TCP port 80 traffic to the API Service backend, listening on port 8080. Okay, let's start. Within the terminal, I'll first navigate into the voteapp Kubernetes project directory.
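The three resources just described can be sketched in manifest form. The following is a hypothetical reconstruction, not the course's actual files: the image tag, labels, probe endpoint, environment-variable name, and MongoDB connection string are all assumptions made for illustration.

```yaml
# Hypothetical sketch of the API Deployment, Service, and Ingress described above.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 4                          # four API Pods
  strategy:
    type: RollingUpdate                # rolling updates, as described
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: cloudacademy/api:v1   # assumed image name and tag
          ports:
            - containerPort: 8080
          env:
            - name: MONGO_CONN_STR     # assumed variable name
              value: mongodb://mongo-0.mongo,mongo-1.mongo,mongo-2.mongo:27017/langdb?replicaSet=rs0
          livenessProbe:
            httpGet:
              path: /ok                # assumed health endpoint
              port: 8080
            initialDelaySeconds: 5
          readinessProbe:
            httpGet:
              path: /ok
              port: 8080
            initialDelaySeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  name: api
spec:
  selector:
    app: api                           # load balances over the four API Pods
  ports:
    - port: 8080
      targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api
spec:
  ingressClassName: nginx              # assumes the nginx ingress controller
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api
                port:
                  number: 8080
```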
We'll use the tree command to examine the directory structure, like so. Next, I'll use kubectl to perform a deployment of the files stored within the api directory. We can do this by simply running the following command: kubectl apply -f, followed by the folder name, in this case api.
Okay, let's now take a look at the current state of the Pods. As you can see, the API Pods have been successfully provisioned. We can then take a closer look at the status of the API Pods by examining the logs at the Pod level, running the following commands: kubectl get pods -o wide, which shows us each Pod's name. We take a Pod name and then run kubectl logs <pod name>. Here we can see that the API Pod has indeed successfully connected to the backing MongoDB replica set that we set up in the previous lecture.
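The inspection steps above can be collected as a short command sequence; the Pod name shown is a placeholder, so substitute one reported by your own listing:

```shell
kubectl apply -f api                 # deploy all manifests in the api directory
kubectl get pods -o wide             # list Pods with their IPs, nodes, and names
kubectl logs api-5d4b9c7f6-abcde     # placeholder Pod name; copy one from the listing
```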
We can now also test the API by using the curl command, aimed directly at an API Pod. In this case, we take the Pod's internal IP address and use curl to connect to it on port 8080. So this looks really good. Next, let's query the registered Services.
We'll run the command kubectl get services. Here we can take the cluster IP that was generated for the API Service and again use curl to connect via the VIP that was registered for the API Service, like so. In fact, we can run this command within a loop to have the Service round-robin our requests across the backend API Pods.
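A minimal version of that loop might look as follows; the Service VIP is a placeholder (substitute the CLUSTER-IP reported by kubectl get services), and the /ok endpoint is an assumed health path, not confirmed by the course:

```shell
SERVICE_VIP=10.96.45.123                    # placeholder; use your Service's CLUSTER-IP
for i in $(seq 1 10); do
  curl -s http://${SERVICE_VIP}:8080/ok     # /ok is an assumed endpoint
  echo
done
```

Because the Service fronts all four API Pods, repeated requests should be distributed across them by kube-proxy.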
Okay, the API configuration is now complete. We'll delay testing it externally for now, as the nginx ingress controller hasn't yet been deployed. Instead, we'll move on and deploy the front end into the cluster.
Jeremy is a Content Lead Architect and DevOps SME here at Cloud Academy where he specializes in developing DevOps technical training documentation.
He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 25+ years. In recent times, Jeremy has been focused on DevOps, Cloud (AWS, GCP, Azure), Security, Kubernetes, and Machine Learning.
Jeremy holds professional certifications for AWS, GCP, Terraform, Kubernetes (CKA, CKAD, CKS).