OpenShift is a rock-solid platform engineered for the enterprise. It's built on top of Kubernetes and provides many value-add features, tools, and services that help streamline the complete end-to-end container development and deployment lifecycle.
This introductory-level training course is designed to bring you quickly up to speed with the key features that OpenShift provides. You'll then get to observe firsthand how to launch a new OpenShift Container Platform 4.2 cluster on AWS and then deploy a real-world cloud native application into it.
We’d love to get your feedback on this course, so please give it a rating when you’re finished. If you have any queries or suggestions, please contact us at support@cloudacademy.com.
Learning Objectives
By completing this course, you will:
- Understand what OpenShift is and what it brings to the table
- Learn how to provision a brand new OpenShift 4.2 cluster on AWS
- Learn the basic principles of deploying a cloud native application into OpenShift
- Understand how to work with and configure many of the key OpenShift value-add cluster resources
- Learn how to work with the OpenShift web administration console to manage and administer OpenShift deployments
- Learn how to work with the oc command line tool to manage and administer OpenShift deployments
- And finally, learn how to manage deployments and OpenShift resources through their full lifecycle
Intended Audience
This course is intended for:
- Anyone interested in learning OpenShift
- Software Developers interested in OpenShift containerisation, orchestration, and scheduling
- DevOps practitioners looking to learn how to provision, manage, and maintain applications on OpenShift
Prerequisites
To get the most from this course, you should have at least:
- A basic understanding of containers and containerisation
- A basic understanding of Kubernetes, container orchestration, and scheduling
- A basic understanding of software development and the software development life cycle
- A basic understanding of networks and networking
Source Code
This course references the following CloudAcademy GitHub-hosted repositories:
- https://github.com/cloudacademy/openshift-voteapp-demo (OpenShift VoteApp Runbook)
- https://github.com/cloudacademy/openshift-s2i-frontendbuilder (OpenShift S2I Frontend Builder)
- https://github.com/cloudacademy/openshift-voteapp-frontend-react (VoteApp Frontend UI)
- [Jeremy] Okay, welcome back. In this demonstration, we're going to configure some automation into our build and deployment system for the frontend. I'll start by configuring a webhook on the openshift-voteapp-frontend-react GitHub repository, so that any time a developer pushes code back into this repository, the frontend BuildConfig deployed in the cluster is notified. This in turn triggers a chain reaction within the cluster: a new build of the frontend image, followed by the frontend DeploymentConfig performing a rolling update of the frontend pods with the new, updated frontend image.
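As a quick sanity check, the pieces of that chain can also be inspected from the command line. This is a minimal sketch, assuming the frontend BuildConfig and DeploymentConfig created earlier in the course exist in your current project:

```
oc set triggers bc/frontend   # should list the GitHub webhook trigger on the BuildConfig
oc set triggers dc/frontend   # should list the ImageChange trigger on the DeploymentConfig
```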
Okay, the first thing we'll do is go to Step 27 in the runbook and copy and execute the frontend BuildConfig querying commands back in the terminal like so. Next, we need to locate the webhook URL and copy it. Now, the GitHub webhook URL has the secret masked out when it's displayed in the terminal, so you'll need to manually edit it back into the URL. Obviously it needs to be the same value as used within the BuildConfig created in Step 22. Alternatively, you can copy it from the OpenShift web admin console within the Builds/Build Configs/frontend section; this approach gives you the URL with the secret already embedded. Next, I'll use my browser and head over to the openshift-voteapp-frontend-react GitHub repository.
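For reference, the Step 27 queries are along these lines (the exact commands are in the runbook); note that oc masks the webhook secret in its describe output:

```
oc describe bc frontend     # prints the GitHub webhook URL with the secret shown as <secret>
oc get bc frontend -o yaml  # full BuildConfig, including its webhook trigger definitions
```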
Now, since I created this repo, I can edit and change its settings, but you can't, which is why you should first fork this repository and use your own version for the remaining instructions that follow. Navigating to the Settings, Webhooks section, I'll then click the Add Webhook button. Here, we enter the BuildConfig webhook URL into the Payload URL field, confirming that the correct secret is being used as per the value we used in the BuildConfig created in Step 22.
Okay, this looks good. Let's now complete the webhook config. Next, we set the Content-Type to application/json, leaving the following Secret field empty. This field is neither used nor required by OpenShift, and it shouldn't be confused with the secret value embedded in the webhook URL, which is required by OpenShift. Since we're in demo mode, I'll disable SSL verification; in production you shouldn't do this. Finally, click the Add webhook button at the bottom, and confirm that the webhook configuration has been successfully set, as per the green tick which we can see. This indicates that the webhook has been successfully applied and has been received and authenticated at the cluster end. If need be, you can examine the details of each webhook post as sent and recorded on the GitHub side like so.
Returning to Step 28 in the runbook, we're required to make an edit in the source code of a locally cloned copy of the openshift-voteapp-frontend-react GitHub repository. To do so, I'll open a new terminal pane and then navigate to an existing local copy I've previously cloned. I'll quickly display the directory structure. Now, the file I'll target for an update is the VoteApp.js file found here. I'm going to load this source code into Visual Studio Code, then navigate to the VoteApp.js file and open it up. I'll simply increase the version number in the string here from 2.10.2 to 2.10.3 like so. Okay, that's done and the file is saved. I'll now commit this change and push it back up to the remote origin, i.e. the repo on GitHub, and if all goes well this should trigger an automatic build and deploy chain reaction within the OpenShift cluster.
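The commit-and-push step looks roughly like the following, assuming you're working in your own fork; the file path shown is illustrative:

```
git add src/components/VoteApp.js   # adjust to the actual location of VoteApp.js in the repo
git commit -m "Bump frontend version to 2.10.3"
git push                            # pushing to GitHub is what fires the configured webhook
```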
Fingers crossed! Step 29 in the runbook asks us to check the current list of builds within the cluster. Let's do so by copying and running the command oc get builds in the terminal. Here, we can see that a new build has automatically started, very cool! Let's now tail the new build's logs using the command oc logs -f build/frontend-2. I'll watch it until it completes successfully, which it just has.
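For reference, the two commands used here are:

```
oc get builds                 # the webhook-triggered build appears as frontend-2
oc logs -f build/frontend-2   # follow the build logs until the build completes
```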
Next, let's examine the status of the frontend imagestream by running the command oc describe is frontend. Here we can see the newly built frontend image. Next, let's examine the frontend deploymentconfig status by running the command oc rollout status deploymentconfig frontend. Here, we can see clearly that a rolling update of the frontend pods is taking place. Again, I'll watch this until it completes, which it just has. Finally, let's list out the frontend pods by running the commands oc get pods and oc get pods -l role=frontend. Here, we can see that we have four new frontend pods all based on the recently built frontend image.
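For reference, the commands used in this step are:

```
oc describe is frontend                       # confirm the newly built image in the imagestream
oc rollout status deploymentconfig frontend   # watch the rolling update until it finishes
oc get pods
oc get pods -l role=frontend                  # the four replacement frontend pods
```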
Now, the final acid test for this automated build and redeployment sequence is to view and confirm that the expected changes are now served up to the browser. Before we refresh the page, I'll highlight the previous version here, which is 2.10.2. I'll now refresh the VoteApp page, and we have the expected updated result: the version is now displaying 2.10.3. This is indeed impressive when you consider all of what's happening under the hood and within the cluster. This is definitely a great outcome. Clicking the various +1 voting buttons confirms that the application remains functional.
Finally, let's drop back into the OpenShift web admin console and examine the frontend deploymentconfig. I'll navigate to Workloads and then DeploymentConfigs like so. Drilling into the frontend deploymentconfig displays the frontend deploymentconfig overview. Here, we can see that the latest version is set to two. Scrolling down, we can see that the frontend containers are now using an image with the following checksum. Jumping back into the terminal, we can then compare this to the most recently built container image as recorded within the frontend imagestream, and as expected it has the same checksum.
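One way to do that comparison from the terminal is sketched below; the jsonpath expressions are illustrative and assume the frontend DeploymentConfig and imagestream names used throughout this course:

```
# Image currently referenced by the DeploymentConfig's pod template
oc get dc frontend -o jsonpath='{.spec.template.spec.containers[0].image}'
# Image behind the latest tag of the frontend imagestream
oc get istag frontend:latest -o jsonpath='{.image.dockerImageReference}'
```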
This confirms that the currently running frontend pods are indeed using the latest and most recent frontend container image. Navigating now into the Pods view, we can see the four version two frontend pods running here, here, here, and here, each with a recorded Running status. Clicking on one of these pods and then navigating to the Terminal option, we are presented with an embedded terminal session within the browser. This is really useful for quick container troubleshooting. For example, I can list out the current directory contents by executing the command ls -la. I can also run a curl command against the local nginx web server using localhost.
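The same shell is also reachable from the CLI if you prefer; pod names will differ in your cluster:

```
oc get pods -l role=frontend   # pick one of the running frontend pods
oc rsh <frontend-pod-name>     # open a remote shell inside the chosen pod
ls -la                         # then inspect the container's filesystem from within
```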
But before I run it, I'll first confirm the port that the nginx server was configured to listen on. I can do so by jumping back into the runbook and scrolling back to Step 25, where we can observe the configured container port, which is 8080. Therefore, we can now complete the curl command with port 8080 like so, and we get a successful response containing the frontend HTML. I can also examine the local running processes by running the command ps -ef. This shows that the nginx master and worker processes are up and running. If you look closely, you can also spot the S2I run script which, if you recall, is used to launch the required container processes; in this case, it would have been responsible for launching the nginx master process. Finally, we can examine both the Logs and Events associated with this pod. Both contain useful information when troubleshooting the behavior of pods running, or not running, within the cluster.
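In summary, the in-pod checks and their CLI equivalents look roughly like this; the pod name is a placeholder:

```
# Inside the pod's terminal session
curl -s http://localhost:8080/   # nginx listens on 8080, as configured in Step 25
ps -ef                           # shows the S2I run script and the nginx master/worker processes
# From outside the pod
oc logs <frontend-pod-name>
oc describe pod <frontend-pod-name>   # recent events appear at the bottom of the output
```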
Okay, that completes Steps 28 and 29 of the demonstration. In the final demonstration, I'll show how to tear down the OpenShift cluster and archive off the respective Red Hat cluster subscription. This is important to do if you don't want to incur the cost of leaving the cluster running.
Jeremy is a Content Lead Architect and DevOps SME here at Cloud Academy where he specializes in developing DevOps technical training documentation.
He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 25+ years. In recent times, Jeremy has been focused on DevOps, Cloud (AWS, Azure, GCP), Security, Kubernetes, and Machine Learning.
Jeremy holds professional certifications for AWS, Azure, GCP, Terraform, and Kubernetes (CKA, CKAD, CKS).