Create K8s Cluster
Build Container Images
Create K8s Resources
End-to-End Application Test
K8s Network Policies
K8s Deployment Update Challenge
This training course is designed to help you master the skills of deploying cloud-native applications into Kubernetes.
Observe firsthand the end-to-end process of deploying a sample cloud-native application into a Kubernetes cluster. By taking this course, you'll not only see the skills required to perform a robust, enterprise-grade deployment into Kubernetes, but you'll also be able to apply them yourself, as all code and deployment assets are available for you to perform your own deployment.
This training course provides you with in-depth coverage and demonstrations of the following Kubernetes resources:
- Ingress/Ingress Controller
- Persistent Volume
- Persistent Volume Claim
- Headless Service
What you'll learn:
- Learn and understand the basic principles of deploying cloud-native applications into a Kubernetes cluster
- Understand how to set up and configure a locally provisioned Kubernetes cluster using Minikube
- Understand how to work with and configure many of the key Kubernetes cluster resources such as Pods, Deployments, Services, etc.
- And finally, you’ll learn how to manage deployments and Kubernetes cluster resources through their full lifecycle.
This training course provides you with many hands-on demonstrations where you will observe firsthand how to:
- Create and provision a Minikube Kubernetes cluster
- Install the Cilium CNI plugin
- Build and deploy Docker containers
- Create and configure Kubernetes resources using kubectl
Prerequisites:
- A basic understanding of containers and containerization
- A basic understanding of software development and the software development life cycle
- A basic understanding of networks and networking
Intended audience:
- Anyone interested in learning Kubernetes
- Software Developers interested in Kubernetes containerization, orchestration, and scheduling
- DevOps Practitioners
- [Instructor] Okay, welcome back. In this lecture, we'll deploy and configure our MongoDB replica set database, provisioning the following resources to support our cloud-native application. We'll create a StatefulSet, which is used to provision three pods, each running the official Mongo image and each mounting its own dedicated persistent volume. We'll also create a headless service, which is used to create an individual, stable network DNS name for each of the Mongo pods created within the StatefulSet.
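As a rough sketch, the headless Service and StatefulSet just described might look like the following. The names, labels, image tag, replica set name, and storage size here are assumptions for illustration; the actual manifest ships with the course assets and may differ:

```shell
# Hypothetical sketch of the manifests described above (not the course's actual file)
cat <<'EOF' | kubectl apply -f -
# Headless Service: clusterIP "None" gives each pod a stable DNS name,
# e.g. mongo-0.mongo, mongo-1.mongo, mongo-2.mongo
apiVersion: v1
kind: Service
metadata:
  name: mongo
spec:
  clusterIP: None
  selector:
    role: db
  ports:
  - port: 27017
---
# StatefulSet: three ordered replicas, each with its own PersistentVolumeClaim
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: mongo
  replicas: 3
  selector:
    matchLabels:
      role: db
  template:
    metadata:
      labels:
        role: db
    spec:
      containers:
      - name: mongo
        image: mongo            # official Mongo image, tag unspecified here
        command: ["mongod", "--replSet", "rs0", "--bind_ip_all"]
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: mongo-data
          mountPath: /data/db
  volumeClaimTemplates:         # one dedicated PVC per pod
  - metadata:
      name: mongo-data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
EOF
```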
Okay, let's start. Within the terminal, I'll first run the command sudo yum install -y tmux. tmux is a terminal multiplexer which will give us the ability to split the current terminal session into multiple panes. This is very useful when creating resources in a Kubernetes cluster. Okay, now that tmux is installed, let's split up our terminal session by running tmux to split the current terminal into two panes. In one pane, we'll set up a watch to observe the resources being provisioned, and in the other pane, we'll actually run the provisioning command. I've entered the key sequence Control + B together with a double quote. This splits the terminal horizontally.
The focus is currently on the bottom pane. We'll move focus back to the top pane by entering the key sequence Control + B followed by the up arrow. In the top pane, I'll now navigate into the database directory and list its contents. Here we can see the following YAML file. Next, I'll move focus back to the bottom pane by again entering the key sequence Control + B followed by the down arrow key. In the bottom pane, I'll set up the following watch command to refresh every two seconds. This will allow us to observe the resources as they are created.
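The tmux setup and watch described above can be sketched as follows; the exact resource types watched are an assumption:

```shell
# Install tmux and start a session
sudo yum install -y tmux
tmux

# Key bindings used in this lecture:
#   Ctrl+B then "           -> split the window into two horizontal panes
#   Ctrl+B then Up/Down     -> move focus between panes

# In the bottom pane: refresh the resource listing every two seconds
watch -n 2 kubectl get pods,statefulsets,pvc
```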
Okay, we are ready to go. Jumping back into the top pane, I'll now run the command kubectl apply -f mongo.statefulset.yaml. Observing the bottom pane, we can see the orderly startup sequence that the StatefulSet guarantees us. The first container, which is currently in ContainerCreating status, takes a little longer to complete, as the Docker image has to first be pulled down. We'll give it more time to complete the entire launch. Great.
The first Mongo pod is now up and running, and the second Mongo pod has started to launch, and it also has just completed, with the third and final Mongo pod now launching, and has also just completed. Okay, as you can see, each of the three Mongo pods that we just created is named in the sequence that it was started up in. We can also see that the StatefulSet reports three of three pods in a Ready status. So this is a good result. We can now use the kubectl command to directly examine the pods created as part of the StatefulSet, like so.
Again, notice how the pods are named with an ordinal number representing the startup and tear-down order. We'll now enter the Mongo shell on the mongo-0 pod and initialize the Mongo replica set, adding the remaining two Mongo pods to it. Running the following command will give us a shell on the mongo-0 pod: kubectl exec -it mongo-0 bash. Great. We're now on the mongo-0 pod, and we can now launch the Mongo shell by running the command mongo by itself. This will connect to the local Mongo service.
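The apply, inspect, and exec steps above look roughly like this; the label selector is an assumption, and pod names follow the StatefulSet's ordinal convention:

```shell
# Provision the StatefulSet and its headless Service
kubectl apply -f mongo.statefulset.yaml

# Pods come up in order: mongo-0, then mongo-1, then mongo-2
kubectl get pods -l role=db     # label "role=db" is an assumption

# Open a shell on the first pod, then the Mongo shell inside it
kubectl exec -it mongo-0 bash
mongo                           # run inside the pod; connects to the local mongod
```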
Okay, that succeeded. Let's immediately exit the Mongo shell by entering the key sequence Control + C. Next, we'll quickly test using the Mongo shell to connect to each of the three stable DNS names registered for us, like so. Brilliant. This confirms that the names resolve and that we are able to form network connections to each pod. We can now proceed and configure the Mongo replica set. So we'll jump back into the Mongo shell on the current instance. We then initiate the replica set using the Mongo function rs.initiate, like so.
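The connectivity test and replica set initiation might be sketched as below, run from inside the mongo-0 pod. The `<pod-name>.<service-name>` DNS names come from the headless Service; using `--eval` rather than an interactive shell is a convenience here:

```shell
# Confirm each stable DNS name resolves and accepts connections
mongo --host mongo-0.mongo --eval 'db.runCommand({ping: 1})'
mongo --host mongo-1.mongo --eval 'db.runCommand({ping: 1})'
mongo --host mongo-2.mongo --eval 'db.runCommand({ping: 1})'

# Back in the Mongo shell on mongo-0: initiate the replica set
mongo --eval 'rs.initiate()'
```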
Okay, that looks good, but we'll need to fix the name that was registered, as it should use the proper DNS name mongo-0.mongo. We'll fix this after we've added and joined the remaining two Mongo pods into the replica set. To do so, I'll run the following Mongo function, rs.add. Now, this fails, since it's not using the proper DNS name, and therefore is unable to resolve and connect. Let's repeat this command but use the proper DNS name that was registered when the StatefulSet was created, which is in the form of podname.servicename. Okay, that worked, and we'll now repeat it for the last pod.
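Adding the remaining members with the stable DNS names could look like this, run in the Mongo shell on mongo-0 (the port is Mongo's default; the exact host strings registered may differ in your cluster):

```shell
mongo --eval 'rs.add("mongo-1.mongo:27017"); rs.add("mongo-2.mongo:27017")'
```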
Next we'll go back and fix the name that was registered with the Mongo replica set for the first pod. First we need to capture the current replica set configuration into a variable, like so. We can then print out the current configuration to the screen by entering cfg. We'll now take a closer look at the host value on the first member, and we can see the current value, which needs to be corrected. That is, we need to update it to use the proper stable network DNS name registered for this pod. We'll perform the correction like so. We can then apply the update by running the Mongo function rs.reconfig, passing in the new config and setting force to true.
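The capture, correction, and forced reconfiguration described above can be sketched as the following Mongo shell session (fed here via a heredoc for readability; the member index and host string are based on the lecture's setup):

```shell
mongo <<'EOF'
cfg = rs.conf()                              // capture the current replica set config
cfg.members[0].host = "mongo-0.mongo:27017"  // correct the first member's host to its stable DNS name
rs.reconfig(cfg, {force: true})              // apply the corrected configuration
EOF
```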
Okay, that looks good. Now, before I do a final check, I'm going to first exit out of the tmux session so that we can reuse the full screen again. I'll then jump back into the first Mongo pod, like so, and again fire up the Mongo shell. Here I'll reexamine the updated configuration that we previously applied by running the Mongo function rs.status. Great. We can now see that the Mongo replica set is fully initialized in a functional, redundant configuration. This is indicated by the fact we have a primary and two secondaries. The final thing we'll do while we're still within the current Mongo shell is to create our database and populate it with some sample data that our voting application will work with.
We'll create the database by running the command use langdb. Next, we populate the languages collection with three documents using the insert function, like so. We can then examine all current collections within the current database, and finally query back out all documents within the languages collection, like so. Brilliant. We've launched, configured, and populated our MongoDB replica set database. At this stage, we're ready to move on and deploy the API resources.
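The seeding steps above might look like this; the document shape and language names are assumptions, as the course assets define the real sample data:

```shell
mongo <<'EOF'
use langdb
db.languages.insert({name: "go"})        // sample documents -- actual fields
db.languages.insert({name: "java"})      // come from the course assets
db.languages.insert({name: "python"})
show collections                         // list collections in langdb
db.languages.find().pretty()             // query back all seeded documents
EOF
```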
Jeremy is a Content Lead Architect and DevOps SME here at Cloud Academy where he specializes in developing DevOps technical training documentation.
He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 25+ years. In recent times, Jeremy has been focused on DevOps, Cloud (AWS, GCP, Azure), Security, Kubernetes, and Machine Learning.
Jeremy holds professional certifications for AWS, GCP, Terraform, and Kubernetes (CKA, CKAD, CKS).