OpenShift is a rock-solid platform engineered for the enterprise. It's built on top of Kubernetes and provides many value-add features, tools, and services that help to streamline the complete end-to-end container development and deployment lifecycle.
This introductory-level training course is designed to bring you quickly up to speed with the key features that OpenShift provides. You'll then get to observe firsthand how to launch a new OpenShift Container Platform 4.2 cluster on AWS and then deploy a real-world cloud native application into it.
We’d love to get your feedback on this course, so please give it a rating when you’re finished. If you have any queries or suggestions, please contact us at email@example.com.
By completing this course, you will:
- Understand what OpenShift is and what it brings to the table
- Learn how to provision a brand-new OpenShift 4.2 cluster on AWS
- Learn the basic principles of deploying a cloud native application into OpenShift
- Understand how to work with and configure many of the key OpenShift value-add cluster resources
- Learn how to use the OpenShift web administration console to manage and administer OpenShift deployments
- Learn how to use the oc command-line tool to manage and administer OpenShift deployments
- And finally, learn how to manage deployments and OpenShift resources through their full lifecycle
This course is intended for:
- Anyone interested in learning OpenShift
- Software Developers interested in OpenShift containerisation, orchestration, and scheduling
- DevOps practitioners looking to learn how to provision, manage, and maintain applications on OpenShift
To get the most from this course, you should have at least:
- A basic understanding of containers and containerisation
- A basic understanding of Kubernetes, and of container orchestration and scheduling
- A basic understanding of software development and the software development life cycle
- A basic understanding of networks and networking
This course references the following CloudAcademy GitHub hosted repos:
- https://github.com/cloudacademy/openshift-voteapp-demo (OpenShift VoteApp Runbook)
- https://github.com/cloudacademy/openshift-s2i-frontendbuilder (OpenShift S2I Frontend Builder)
- https://github.com/cloudacademy/openshift-voteapp-frontend-react (VoteApp Frontend UI)
- [Jeremy] Okay, welcome back. In this demonstration, we're going to deploy the last major component of our sample VoteApp cloud native application into the OpenShift cluster: the frontend. The frontend is made up of three resources: a DeploymentConfig, a Service, and a Route. As mentioned, we're going to use a DeploymentConfig to deploy the frontend pods. It's important to understand that the DeploymentConfig resource is similar to the standard Kubernetes Deployment resource, but has a number of extra and useful features.
Our motive for using it here is that it can be configured to trigger a redeployment based on certain events. In our case, the triggering event will be when an image change is detected on the frontend image, which we know is located in the frontend ImageStream created previously. We also know that our BuildConfig is responsible for generating new builds of the frontend image, which are then pushed into the frontend ImageStream, so you can probably see the chain reaction that happens. Now, before we create the frontend resources, we first need to query for the API Route and retrieve its fully qualified domain name so that we can use it within the DeploymentConfig.
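The exact manifest lives in the runbook repo, but an ImageChange trigger in a DeploymentConfig looks roughly like this (a minimal sketch; the names, replica count, and port are illustrative assumptions based on this demo, not the runbook's exact values):

```yaml
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: frontend
spec:
  replicas: 4
  selector:
    role: frontend
  triggers:
  # Redeploy when the DeploymentConfig itself changes
  - type: ConfigChange
  # Redeploy automatically whenever a new frontend image lands in the ImageStream
  - type: ImageChange
    imageChangeParams:
      automatic: true
      containerNames:
      - frontend
      from:
        kind: ImageStreamTag
        name: frontend:latest
  template:
    metadata:
      labels:
        role: frontend
    spec:
      containers:
      - name: frontend
        image: frontend:latest
        ports:
        - containerPort: 8080
```

With `automatic: true`, each successful BuildConfig run that pushes a new `frontend:latest` tag kicks off a fresh rollout without any manual step.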
The reason, again, is that the frontend pods need to be configured with the API service's public fully qualified domain name, and we're leveraging an environment variable at deployment time to achieve this. To do so, I'll jump over to the runbook and, under Step 25, copy the "oc get route api" command and execute it within the terminal. Here we can clearly see the externally exposed API FQDN to which browser-initiated AJAX calls from the frontend will be directed. Let's now go back and grab the next set of commands, which will extract this value and store it in a shell variable named APIHOST.
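The extraction commands scroll by quickly on screen, but a typical way to capture a Route's host into a shell variable is a jsonpath query like this (a sketch, assuming a Route named "api" exists in the current project):

```shell
# Pull just the .spec.host field from the "api" Route
APIHOST=$(oc get route api -o jsonpath='{.spec.host}')
echo "API host: ${APIHOST}"
```

The variable can then be passed straight into the DeploymentConfig's environment at apply time.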
Next, we return to the runbook and copy the rather large "oc apply" command, which will create the frontend DeploymentConfig, Service, and Route all in one hit. Okay, that looks good. Let's now examine and watch the rollout of the DeploymentConfig into the cluster. We can do so by running the command "oc rollout status deploymentconfig frontend". We can see that the deployment is going well and has just finished. Let's also observe all current pods within the cloudacademy project by running "oc get pods", and then run a filtered view of just the frontend pods by running "oc get pods -l role=frontend". Here we can see that the four new frontend pods are all in a Running status, excellent.
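For reference, the Service and Route portions of a manifest like the one applied here might look roughly as follows (names and ports are illustrative assumptions, not the runbook's exact values):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  # Match the pods labelled by the DeploymentConfig's pod template
  selector:
    role: frontend
  ports:
  - port: 8080
    targetPort: 8080
---
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: frontend
spec:
  # Expose the frontend Service externally via the cluster's router
  to:
    kind: Service
    name: frontend
  port:
    targetPort: 8080
```

The Route is the OpenShift-native equivalent of an Ingress rule: the router assigns it an externally resolvable FQDN, which is exactly what we query in the next step.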
We're just about ready to test the full VotingApp in the browser, but before we do, I'll run a couple more sanity checks within the terminal. First up, I'll run the command "oc get route frontend" to see the routing info for the frontend Route we just created. Next, I'll jump back into the runbook and copy the command to extract the frontend Route FQDN and execute it like so. Then I'll simply run "curl -s -I $FRONTENDHOST" to perform an HTTP HEAD request against the frontend. Excellent, that has worked: we're getting not only a response indicating connectivity exists, but also an HTTP 200 response code, which represents success. I'll now quickly rerun this command, but this time using a lowercase -i, which tells curl to perform a full GET request and include the response headers in the output. Again we get a successful response, this time with both the HTTP headers and the HTML body. So everything is looking promising. Let's now jump into our local Chrome browser and test out the full end-to-end application. And this is super cool.
We can now see that the full end-to-end sample VotingApp deployed in our OpenShift cluster is working. Let's try voting by clicking the +1 button on the programming languages. We can again use the Developer Tools to record, filter, and observe the AJAX traffic that is generated each time any of the +1 vote buttons is clicked. If we select an AJAX request and look at the HTTP headers, we can see where the traffic is being directed, which we know is the API service that we configured via the environment variable in the frontend DeploymentConfig. When it comes to troubleshooting, it's worthwhile to know how to examine the logs of the various pods contained within a particular DeploymentConfig or Deployment. For demonstration purposes, let's set up a tail on a frontend pod within the DeploymentConfig, as per the instructions contained in Step 26 of the runbook.
To do so, I'll run the command "oc logs dc/frontend --tail 50 --follow". Next, I can repeat this same type of troubleshooting command for a pod within the API Deployment by running "oc logs deploy/api --tail 50 --follow". Finally, let's go and check the backend MongoDB database state, confirming that the voting activity has been correctly transacted. To do so, I'll first get a list of the running MongoDB pod names by executing the command "oc get pods -l role=db". Next, I'll perform a remote-shell-executed query on the first MongoDB pod like so. Now, if we look carefully at the vote counts on each of the programming language documents, we can see that they have each increased.
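The exact query is in the Step 26 runbook instructions; a hypothetical sketch of this kind of remote-shell database check looks like this (the database name "langdb" and collection name "languages" are illustrative assumptions, not taken from the runbook):

```shell
# Capture the name of the first MongoDB pod via a jsonpath query
MONGOPOD=$(oc get pods -l role=db -o jsonpath='{.items[0].metadata.name}')
# Run a find() inside the pod using the mongo shell, printing each document
oc rsh "${MONGOPOD}" mongo langdb --quiet --eval 'db.languages.find().pretty()'
```

`oc rsh` opens a remote shell in the pod and runs the given command, so the query executes inside the cluster without exposing MongoDB externally.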
Let's cast some more votes for the Go programming language, and then rerun the same query like so. As expected, the vote count on the Go programming language document has increased from 2 to 4. This really is a great result, as it indicates that the full communication flow between the browser and the various cluster-hosted pods is working as designed and configured, and our data transactions are being recorded successfully.
Okay, that completes Steps 25 and 26, and we now have a fully working cloud native application. It's been hard graft, but it's rewarding to get to this stage. In the next demo, I'll configure a webhook on our frontend source code GitHub repository to automatically trigger new frontend image builds within the cluster. This will be the icing on the cake.
Jeremy is a Content Lead Architect and DevOps SME here at Cloud Academy where he specializes in developing DevOps technical training documentation.
He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 25+ years. In recent times, Jeremy has been focused on DevOps, Cloud (AWS, Azure, GCP), Security, Kubernetes, and Machine Learning.
Jeremy holds professional certifications for AWS, Azure, GCP, Terraform, Kubernetes (CKA, CKAD, CKS).