OpenShift is a rock-solid platform engineered for the enterprise. Built on top of Kubernetes, it provides many value-add features, tools, and services that help streamline the complete end-to-end container development and deployment lifecycle.
This introductory-level training course is designed to bring you quickly up to speed with the key features that OpenShift provides. You'll then get to observe firsthand how to launch a new OpenShift Container Platform 4.2 cluster on AWS and then deploy a real-world cloud-native application into it.
We’d love to get your feedback on this course, so please give it a rating when you’re finished. If you have any queries or suggestions, please contact us at email@example.com.
By completing this course, you will:
- Learn and understand what OpenShift is and what it brings to the table
- Learn and understand how to provision a brand new OpenShift 4.2 cluster on AWS
- Learn and understand the basic principles of deploying a cloud native application into OpenShift
- Understand how to work with and configure many of the key OpenShift value add cluster resources
- Learn how to work with the OpenShift web administration console to manage and administer OpenShift deployments
- Learn how to work with the oc command line tool to manage and administer OpenShift deployments
- And finally, you’ll learn how to manage deployments and OpenShift resources through their full lifecycle
This course is intended for:
- Anyone interested in learning OpenShift
- Software Developers interested in OpenShift containerisation, orchestration, and scheduling
- DevOps practitioners looking to learn how to provision, manage, and maintain applications on OpenShift
To get the most from this course, you should have at least:
- A basic understanding of containers and containerisation
- A basic understanding of Kubernetes, container orchestration, and scheduling
- A basic understanding of software development and the software development life cycle
- A basic understanding of networks and networking
This course references the following CloudAcademy GitHub hosted repos:
- https://github.com/cloudacademy/openshift-voteapp-demo (OpenShift VoteApp Runbook)
- https://github.com/cloudacademy/openshift-s2i-frontendbuilder (OpenShift S2I Frontend Builder)
- https://github.com/cloudacademy/openshift-voteapp-frontend-react (VoteApp Frontend UI)
- [Jeremy] Okay, welcome back. Now that we've reviewed the basic messaging requirements of our sample cloud-native voting application, let's now see how this translates into a working OpenShift configuration. The following diagram illustrates how our overall deployment will take place within the OpenShift cluster for our cloud-native application.
The key OpenShift resources used within our configuration are as follows. Routes are used to allow incoming external HTTP traffic to reach the services. Services provide a single static VIP, or virtual IP, to which traffic can be directed. A service groups like pods together by one or more custom metadata pod labels; in essence, this allows the service to round-robin incoming traffic across the members of the pod group. A StatefulSet is used for the MongoDB database replica set, with persistent volumes and persistent volume claims providing stable storage for its pods. A BuildConfig allows us to automatically trigger builds of the frontend container image any time a developer commits and pushes code edits back into the vote app frontend GitHub repository. An ImageStream resource is created to act as a container image registry within the cluster.
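To make the service-and-route pairing concrete, here is a minimal illustrative sketch of the two manifests. The resource names, labels, and ports are assumptions for illustration, not the exact manifests used in the course runbook:

```yaml
# Illustrative Service: groups pods by label and provides a stable VIP.
apiVersion: v1
kind: Service
metadata:
  name: frontend            # assumed name
spec:
  selector:
    app: frontend           # pods carrying this label join the group
  ports:
  - port: 8080              # service (VIP) port
    targetPort: 8080        # container port on each pod
---
# Illustrative OpenShift Route: exposes the service to external HTTP traffic.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: frontend            # assumed name
spec:
  to:
    kind: Service
    name: frontend          # route proxies to this service's VIP
  port:
    targetPort: 8080        # forward to the service's 8080 target port
```

Traffic arriving at the route's generated public hostname on port 80 is proxied to the service VIP, which in turn round-robins across the labelled pods.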
This hosts the frontend container images. Over time, multiple build resources, or jobs, are created, one for each trigger of the BuildConfig. Each build job creates a new frontend container image which gets pushed into the frontend ImageStream. A DeploymentConfig is used to automatically redeploy the frontend pods any time a new frontend image is detected to have been pushed to the frontend ImageStream. The DeploymentConfig takes care of orchestrating the rollout of the updated pods in a staged process so that no downtime is experienced by the end user. Let's now drill down into each logical grouping of resources as they will be configured within our OpenShift cluster deployment. The frontend will be composed of the following. A DeploymentConfig resource is used to deploy and launch four pods containing the frontend, configured to listen on port 8080.
Note that this is a DeploymentConfig resource specific to OpenShift, and not the more traditional Deployment resource which Kubernetes provides. The more advanced DeploymentConfig resource is used as it allows automatic triggering of pod redeployments any time a new frontend image is built and pushed to the frontend ImageStream. A standard Kubernetes service resource is created to provide a stable virtual IP, or VIP, sitting in front of the four pods containing the frontend. The port mapping is port 8080 to 8080. An OpenShift route resource is created to route and proxy external HTTP requests to the frontend service VIP. The port mapping is port 80 to 8080.
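The image-change trigger that distinguishes a DeploymentConfig from a plain Deployment can be sketched as follows. This is an illustrative fragment only; the resource names, labels, and image tags are assumptions rather than the course's actual manifests:

```yaml
# Illustrative DeploymentConfig with an ImageChange trigger.
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: frontend                  # assumed name
spec:
  replicas: 4                     # four frontend pods
  selector:
    app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend
        image: frontend:latest    # resolved from the ImageStream tag below
        ports:
        - containerPort: 8080
  triggers:
  - type: ImageChange             # redeploy whenever the tag gets a new image
    imageChangeParams:
      automatic: true
      containerNames:
      - frontend
      from:
        kind: ImageStreamTag
        name: frontend:latest     # watches the frontend ImageStream
```

When a build pushes a new image to the `frontend:latest` ImageStream tag, the trigger fires and the DeploymentConfig performs a staged rolling redeployment of the four pods.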
One final important point of interest when it comes to the frontend, and in particular the frontend pods, is how they get configured with the API's public DNS endpoint. Recall that at runtime the frontend is delivered to and loaded by the end user's browser, and is purposely designed to direct AJAX traffic back to the API service. At deployment time, the API gets deployed and exposed via a route and service, and during the API route creation, OpenShift dynamically generates and registers a public DNS name for the API service. So the question becomes: how does the API's public DNS name get injected into the frontend pods? The answer, if you haven't already guessed it, is that it is passed down as an environment variable configured and set within the frontend DeploymentConfig.
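As a sketch of how that injection might look inside the frontend DeploymentConfig's container spec, consider the following excerpt. The environment variable name and hostname value here are purely illustrative assumptions, not taken from the course materials:

```yaml
# Illustrative excerpt from a DeploymentConfig pod template:
# the API route's generated public hostname is handed to the
# frontend pods as an environment variable.
spec:
  template:
    spec:
      containers:
      - name: frontend
        env:
        - name: API_HOST               # variable name is an assumption
          value: api.apps.example.com  # placeholder for the route's public DNS name
```

At build/serve time, the frontend can read this variable and bake the API endpoint into the assets it delivers to the browser, so the AJAX calls target the correct public hostname.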
A standard Kubernetes service resource is created to provide a stable virtual IP, or VIP, sitting in front of the four API pods. The port mapping is port 8080 to 8080. An OpenShift route resource is created to route and proxy external API requests to the API service VIP. The port mapping here is port 80 to 8080. The MongoDB database will be deployed using the following cluster resources. A StatefulSet resource is used to deploy and launch three pods containing the MongoDB database service, configured to listen on port 27017. The StatefulSet controls the startup sequence of the database pods. Additionally, it assigns an ordinal pod name, starting at zero and incrementing by one, to each of the pods, and maps each of them to a dedicated persistent volume via a persistent volume claim.
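The ordinal naming and per-pod storage described above fall out of the StatefulSet's `volumeClaimTemplates`. The following is an illustrative sketch; names, the image tag, and storage size are assumptions for demonstration:

```yaml
# Illustrative MongoDB StatefulSet: pods are named mongo-0, mongo-1, mongo-2,
# each bound to its own persistent volume via a generated claim.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongo                     # assumed name
spec:
  serviceName: mongo              # headless service governing pod DNS names
  replicas: 3
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
      - name: mongo
        image: mongo:4.2          # illustrative image tag
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: mongo-data
          mountPath: /data/db     # MongoDB persists data here
  volumeClaimTemplates:           # one PVC is generated per ordinal pod
  - metadata:
      name: mongo-data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi            # assumed size
```

Because claims are generated per ordinal, `mongo-0` always rebinds to its original volume after a restart, which is what gives the replica set stable identities.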
A standard Kubernetes headless service resource is created to sit in front of the StatefulSet. This is used primarily to generate stable network names for each of the individual MongoDB pods. Three persistent volumes, or PVs, are created, one for each of the individual pods in the StatefulSet. MongoDB will be configured to persist all data and configuration into a directory mounted on the persistent volume. Three persistent volume claims, or PVCs, are created, one for each individual pod in the StatefulSet. MongoDB will be configured to bind to a persistent volume via its persistent volume claim. As mentioned earlier when we described the frontend cluster resources configuration, the frontend pods launched by the frontend DeploymentConfig resource are derived from the latest frontend container image stored within the frontend ImageStream.
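A headless service differs from a regular service only in that it has no cluster VIP; instead, DNS resolves to the individual pods. A minimal illustrative sketch (the service name and labels are assumptions):

```yaml
# Illustrative headless service: clusterIP "None" means no VIP is allocated;
# instead each StatefulSet pod gets a stable DNS name of the form
# <pod-name>.<service-name>.<namespace>.svc.cluster.local
apiVersion: v1
kind: Service
metadata:
  name: mongo                 # must match the StatefulSet's serviceName
spec:
  clusterIP: None             # this is what makes the service headless
  selector:
    app: mongo
  ports:
  - port: 27017
```

With this in place, the replica set members can address each other by predictable names such as `mongo-0.mongo` and `mongo-1.mongo`, which is essential for configuring MongoDB replication.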
Let's walk through how these related cluster resources work together. An OpenShift ImageStream resource is created purposely to host every frontend image that is created any time a new build is triggered and completes successfully. An OpenShift BuildConfig resource is created and contains the configuration details as to how a build should be performed for the frontend container image, where the resulting frontend container image should be pushed, and what triggers a build to kick off. When the BuildConfig is applied into the cluster, OpenShift will generate a webhook URL which gets copied and configured into the frontend GitHub-hosted repository.
With this configuration in place, automatic builds of the frontend image will be triggered within the cluster any time a developer pushes code updates into the frontend repo. Each build performed within the cluster is represented by an OpenShift build resource. Build resources can either be created manually, resulting in manual builds, or triggered automatically as per the previous description. Additionally, to complement this process, the frontend DeploymentConfig will kick in any time it detects that a new frontend container image has been pushed into the frontend ImageStream, and as a result will perform an automatic rolling update of the frontend pods.
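The build pipeline just described can be sketched as a BuildConfig like the one below. The Git URI comes from the course's referenced repos; the resource names, builder image tag, and webhook secret are illustrative assumptions:

```yaml
# Illustrative BuildConfig: builds the frontend from source on each
# GitHub webhook invocation and pushes the result into the ImageStream.
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: frontend                        # assumed name
spec:
  source:
    type: Git
    git:
      uri: https://github.com/cloudacademy/openshift-voteapp-frontend-react
  strategy:
    type: Source                        # S2I (source-to-image) build
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: frontendbuilder:latest    # assumed S2I builder image tag
  output:
    to:
      kind: ImageStreamTag
      name: frontend:latest             # built images land in the ImageStream
  triggers:
  - type: GitHub                        # exposes a webhook URL for the repo
    github:
      secret: my-webhook-secret         # illustrative secret value
```

Applying this causes OpenShift to generate the GitHub webhook URL mentioned above; once that URL is registered in the repository's webhook settings, each push produces a new build resource, a fresh image in the ImageStream, and, via the DeploymentConfig's image-change trigger, a rolling update of the frontend pods.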
Okay, that concludes the OpenShift theory part of this course. Next we get into demonstration mode, first by provisioning a new OpenShift 4 container platform cluster, and then concluding with a full deployment of our sample cloud-native voting application.
Jeremy is a Content Lead Architect and DevOps SME here at Cloud Academy where he specializes in developing DevOps technical training documentation.
He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 25+ years. In recent times, Jeremy has been focused on DevOps, Cloud (AWS, Azure, GCP), Security, Kubernetes, and Machine Learning.
Jeremy holds professional certifications for AWS, Azure, GCP, Terraform, Kubernetes (CKA, CKAD, CKS).