OpenShift is a rock-solid platform engineered for the enterprise. It's built on top of Kubernetes and provides many value-add features, tools, and services that help streamline the complete end-to-end container development and deployment lifecycle.
This introductory-level training course is designed to bring you quickly up to speed with the key features that OpenShift provides. You'll then get to observe firsthand how to launch a new OpenShift Container Platform 4.2 cluster on AWS and then deploy a real-world cloud-native application into it.
We’d love to get your feedback on this course, so please give it a rating when you’re finished. If you have any queries or suggestions, please contact us at email@example.com.
By completing this course, you will:
- Understand what OpenShift is and what it brings to the table
- Learn how to provision a brand-new OpenShift 4.2 cluster on AWS
- Learn the basic principles of deploying a cloud-native application into OpenShift
- Understand how to work with and configure many of the key OpenShift value-add cluster resources
- Learn how to use the OpenShift web administration console to manage and administer OpenShift deployments
- Learn how to use the oc command-line tool to manage and administer OpenShift deployments
- Learn how to manage deployments and OpenShift resources through their full lifecycle
This course is intended for:
- Anyone interested in learning OpenShift
- Software Developers interested in OpenShift containerisation, orchestration, and scheduling
- DevOps practitioners looking to learn how to provision, manage, and maintain applications on OpenShift
To get the most from this course, you should have at least:
- A basic understanding of containers and containerisation
- A basic understanding of Kubernetes, container orchestration, and scheduling
- A basic understanding of software development and the software development life cycle
- A basic understanding of networks and networking
This course references the following CloudAcademy GitHub-hosted repositories:
- https://github.com/cloudacademy/openshift-voteapp-demo (OpenShift VoteApp Runbook)
- https://github.com/cloudacademy/openshift-s2i-frontendbuilder (OpenShift S2I Frontend Builder)
- https://github.com/cloudacademy/openshift-voteapp-frontend-react (VoteApp Frontend UI)
- [Jeremy] Okay, welcome back. In this demo, I'm going to show you how to set up and deploy a MongoDB replica set database as a StatefulSet within the cluster. To quickly summarize, the MongoDB replica set database will consist of three pods, one of which will be configured as the MongoDB primary, with the remaining two being configured as MongoDB secondaries. This configuration gives us a layer of redundancy for the data being stored within the database. Writes will be sent to the primary and these will then be replicated into each of the secondaries. We also need to establish persistent volumes and persistent volume claims to bind the MongoDB pods to volumes that persist beyond the life of the pods themselves. To begin with, I'll need to establish a new storage class for the persistent volumes.
Since our OpenShift cluster runs in the AWS cloud, it makes sense to leverage the EBS storage capabilities of AWS. Let's do this. Jumping back into our run sheet, I'll copy the step 10 command. This is going to create a new storage class named ebs. This storage class is configured to create gp2, ext4-formatted volumes. I'll execute this back within the terminal. That looks good. Let's now query the current storage classes. We can run the command, oc get storageclass, and we can see that our new ebs storage class is good to go.
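As a sketch, the storage class created in step 10 might look like the following manifest. The name `ebs` and the gp2/ext4 settings come from the demo; consult the run sheet for the exact manifest.

```yaml
# Illustrative StorageClass for AWS EBS-backed volumes (gp2, ext4),
# matching the "ebs" storage class created in step 10.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2      # general-purpose SSD EBS volume type
  fsType: ext4   # filesystem created on each provisioned volume
```

Applied with `oc apply -f <file>`, it then appears in the `oc get storageclass` output.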
Okay, we're now ready to launch our MongoDB StatefulSet. The manifest that is used to perform this is fairly detailed and intricate, but don't worry, as everything that you need to do has already been laid out and captured within the run sheet. Let's jump over into it now. This time, we're going to copy this section here under step 11. Now, my first attempt at copying and pasting these commands directly from the rendered markup failed due to a minor issue with how GitHub renders the markup. Therefore, to avoid this same issue, select the README.md file and then, swap over into raw mode like so. Locate the command again within step 11 and then, copy the command directly from the raw text. Okay, now that we have a runnable copy of this command, let's paste it into our terminal and execute it like so. That looks really good and it looks like the MongoDB cluster resources have been scheduled for creation.
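To give a feel for the shape of that manifest, here is an abbreviated sketch of a MongoDB StatefulSet. The names, image tag, and replica set name below are illustrative assumptions; the run sheet contains the full, authoritative manifest.

```yaml
# Abbreviated, illustrative MongoDB StatefulSet (not the run sheet's exact manifest).
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo
spec:
  serviceName: mongo          # ties pods to the headless service created in step 12
  replicas: 3                 # one primary plus two secondaries
  selector:
    matchLabels:
      role: db
  template:
    metadata:
      labels:
        role: db              # the label used later to filter the MongoDB pods
    spec:
      containers:
      - name: mongo
        image: mongo:4.2      # illustrative image tag
        command: ["mongod", "--replSet", "rs0", "--bind_ip_all"]  # "rs0" is an assumed name
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: mongodb-data
          mountPath: /data/db
  volumeClaimTemplates:       # one PVC (and thus one EBS volume) is created per pod
  - metadata:
      name: mongodb-data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: ebs   # the storage class from step 10
      resources:
        requests:
          storage: 1Gi        # illustrative size
```

The `volumeClaimTemplates` section is what drives the one-to-one pod-to-volume mapping we'll see shortly.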
Let's now examine how the pod creation looks. For this, I'll run the command, oc get pods --watch. This command provides you with a list of updated pod events as each pod undergoes a change in state. To exit this command, use the Control + C key sequence. Next, I'll run the oc get pods command to see the final status of all running pods within the current cloudacademy project. We can go further and display the labels assigned to each pod by using the --show-labels option like so. Then, following on from this, we can filter the pods by specifying the pod label role=db. This time, we get a filtered view of just the three MongoDB pods.
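The query commands used in this step look like the following. These require a live cluster and the `oc` CLI; the `role=db` label comes from the demo's manifests.

```shell
oc get pods --watch          # stream pod state changes (Ctrl+C to exit)
oc get pods                  # final status of pods in the current project
oc get pods --show-labels    # include a LABELS column in the output
oc get pods -l role=db       # filter to just the pods labeled role=db
oc get pods,pv,pvc           # pods, persistent volumes, and claims together
```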
So, this should give you a good idea of how to go about querying and filtering resources using the oc command which, as mentioned earlier, works very much like the kubectl command. Next, let's take a look at the pods, persistent volumes, and the persistent volume claims together using this command. Here, we can see the one-to-one mapping between the MongoDB pods and each of their assigned persistent volumes and persistent volume claims.
This one-to-one binding is a defining behavior of a StatefulSet. Okay, to finish off and round out our MongoDB StatefulSet deployment within the cluster, we need to deploy what's referred to as a headless service. The headless service provides us with stable network names that resolve directly to the pod-allocated private IPs. This is all managed by an internal cluster DNS service. This is important, as it will allow us to configure the MongoDB replica set using the stable network names and not the pod IPs themselves.
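A headless service like the one described above can be sketched as follows. The service name `mongo` and the `role=db` selector are assumptions based on the demo; the run sheet has the exact manifest.

```yaml
# Illustrative headless service for the MongoDB pods.
apiVersion: v1
kind: Service
metadata:
  name: mongo
spec:
  clusterIP: None      # "None" is what makes the service headless:
                       # DNS returns the pod IPs directly, with no service VIP
  selector:
    role: db           # matches the label on the MongoDB pods
  ports:
  - port: 27017        # default MongoDB port
```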
I'll show you this in the next demo, but for now, head back to the run sheet and copy the step 12 command, and then, execute it within the terminal like so. Okay, that looks good. The headless service has been successfully created. Let's now examine it by running oc get svc, short for service, to display all services within the current cloudacademy project. Notice that the cluster IP value is set to None for the mongo service. This differentiates a headless service from a normal service. Next, I'd like to show you exactly what the headless service creation command has accomplished in terms of DNS name management within the cluster. To do so, I'll spin up a utility or helper container which comes embedded with networking tools. Go back to the run sheet and copy the oc run command and paste it back into the terminal. This will take a few seconds to first download a copy of the image to the cluster, before then spinning it up and then, dropping us into the container's bash shell.
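The commands in this step look roughly like the following. The utility image name here is an assumption for illustration; the run sheet specifies the actual image used in the demo.

```shell
oc get svc          # the mongo service shows CLUSTER-IP: None (headless)

# Spin up a throwaway pod with networking tools and drop into its shell.
# --rm deletes the pod when the shell exits; the image name is an assumption.
oc run nettools -it --rm --restart=Never \
  --image=praqma/network-multitool -- bash
```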
Okay, excellent, we are now inside the network utils container. Let's now run the following two commands. The first will perform a DNS lookup on the hostname mongo, which represents the service itself. As you can see it returns three A records, one for each of the MongoDB pods. Each record resolves to the pod-assigned private IP address. Any client that attempts to resolve this host will then need to be responsible for determining which pod IP address to use, i.e. the client will need to use a round-robin strategy or something else. Next, I'll run the following for loop, which repeats a DNS query for each of the DNS names assigned to the individual MongoDB pods. Again, we can see that each correctly resolves to the pod-assigned private IP address. So what you can start to see is the naming convention that is used and how the cluster DNS service manages the pod-private IP addresses when a headless service is created. Okay, let's now exit out of this temporary container.
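As a sketch of that naming convention: each pod of a StatefulSet behind a headless service gets a stable DNS name of the form `<pod>.<service>.<project>.svc.cluster.local`. Assuming the StatefulSet and service are both named `mongo` and the project is `cloudacademy` (names taken from the demo), the three names can be constructed like this:

```shell
# Construct the stable per-pod DNS names the cluster DNS assigns for a
# 3-replica StatefulSet behind a headless service. The service name,
# StatefulSet name, and project below are assumptions based on the demo.
service=mongo
project=cloudacademy
names=""
for i in 0 1 2; do
  names="$names mongo-$i.$service.$project.svc.cluster.local"
done
echo "$names"
```

Inside the utility container, `nslookup mongo` returns one A record per pod, while each of these per-pod names resolves to exactly one pod's private IP.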
Finally, let's quickly take a look within the OpenShift web admin console and view the three MongoDB pods by navigating to Workloads and then, Pods and selecting the cloudacademy project. Here, we can see the three MongoDB pods, all in running status. Next, I'll navigate to Stateful Sets and view the Mongo StatefulSet. Here, we can see that it correctly has three of three MongoDB pods registered to it. Next, I'll navigate to Storage. Here, Persistent Volumes correctly records the three PVs we created, one for each of the MongoDB pods. And under Persistent Volume Claims, we can see the three corresponding PVCs, where each has a bound status.
Okay, that completes steps 10, 11, and 12, and the provisioning of the data storage layer. We are now ready to pair up the MongoDB pods into a Mongo replica set and populate it with data, which we will do in the next demo.
Jeremy is the DevOps Content Lead at Cloud Academy where he specializes in developing technical training documentation for DevOps.
He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 20+ years. In recent times, Jeremy has been focused on DevOps, Cloud, Security, and Machine Learning.
Jeremy holds professional certifications for both the AWS and GCP cloud platforms.