OpenShift is a rock-solid platform engineered for the enterprise. It's built on top of Kubernetes and provides many value-add features, tools, and services that help streamline the complete end-to-end container development and deployment lifecycle.
This introductory-level training course is designed to bring you quickly up to speed with the key features that OpenShift provides. You'll then observe first-hand how to launch a new OpenShift Container Platform 4.2 cluster on AWS and deploy a real-world cloud-native application into it.
We’d love to get your feedback on this course, so please give it a rating when you’re finished. If you have any queries or suggestions, please contact us at email@example.com.
By completing this course, you will:
- Learn and understand what OpenShift is and what it brings to the table
- Learn and understand how to provision a brand-new OpenShift 4.2 cluster on AWS
- Learn and understand the basic principles of deploying a cloud-native application into OpenShift
- Understand how to work with and configure many of the key OpenShift value-add cluster resources
- Learn how to work with the OpenShift web administration console to manage and administer OpenShift deployments
- Learn how to work with the oc command-line tool to manage and administer OpenShift deployments
- And finally, learn how to manage deployments and OpenShift resources through their full lifecycle
This course is intended for:
- Anyone interested in learning OpenShift
- Software Developers interested in OpenShift containerisation, orchestration, and scheduling
- DevOps practitioners looking to learn how to provision, manage, and maintain applications on OpenShift
To get the most from this course, you should have at least:
- A basic understanding of containers and containerisation
- A basic understanding of Kubernetes, container orchestration, and scheduling
- A basic understanding of software development and the software development life cycle
- A basic understanding of networks and networking
This course references the following CloudAcademy GitHub hosted repos:
- https://github.com/cloudacademy/openshift-voteapp-demo (OpenShift VoteApp Runbook)
- https://github.com/cloudacademy/openshift-s2i-frontendbuilder (OpenShift S2I Frontend Builder)
- https://github.com/cloudacademy/openshift-voteapp-frontend-react (VoteApp Frontend UI)
- [Jeremy] Okay, welcome back. In this demonstration I'll clone the frontend builder repo stored on GitHub. By the time we complete this demo, you should have a clear understanding of how S2I works and how it is expected to be used as a build technology for building containers. Okay, let's begin. Dropping back into the runbook, I'll retrieve the step 19 git clone command and clone the repo within the terminal like so. Next I'll navigate into the openshift-s2i-frontendbuilder directory and examine its structure. This is much the same as the previous demo; where it significantly differs is in the customized Dockerfile and in both the assemble and run scripts.
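The step 19 clone and directory walk look roughly like this (the repo URL comes from the course repo list above; the exact runbook command may differ slightly):

```shell
# Clone the S2I frontend builder repo (step 19 of the runbook)
git clone https://github.com/cloudacademy/openshift-s2i-frontendbuilder.git
cd openshift-s2i-frontendbuilder

# Examine the repo structure: Makefile, Dockerfile, and the S2I scripts
ls -la
```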
Let's now take a closer look at each of the relevant files, starting with the Makefile. Here we can see that it is going to tag the builder image with the container tag cloudacademydevops/frontendbuilder. Next I'll peek into the Dockerfile. Without going into too much detail, the Dockerfile is essentially going to be used to build an Nginx-based web server builder image that has the extra ability to perform yarn install and build commands to download third-party JavaScript libraries and then perform a full transpile of the frontend React-based JSX code. Next I'll peek into the assemble script. This contains the actual yarn install and yarn build commands.
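A minimal sketch of what such an S2I builder Dockerfile typically looks like — the base image, package names, and paths here are assumptions following common S2I conventions, not the actual repo contents:

```dockerfile
# Hypothetical S2I builder image sketch: Nginx plus Node/yarn tooling
FROM nginx:alpine

# S2I convention: this label tells the s2i tool where the
# assemble/run scripts live inside the image
LABEL io.openshift.s2i.scripts-url="image:///usr/libexec/s2i"

# Add Node.js and yarn so the assemble script can install
# JavaScript dependencies and transpile the React JSX code
RUN apk add --no-cache nodejs yarn

# Copy the custom assemble and run scripts into the image
COPY ./s2i/bin/ /usr/libexec/s2i
```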
The assemble script gets executed by the S2I build command when we inject the React-based JSX source code. The resulting transpiled assets are then moved into the configured Nginx root directory /Nginx/html. And lastly, let's examine the internals of the run script. Here we can see that the run script first changes directory into the configured Nginx home directory, then invokes a shell script named env.sh, followed by starting the Nginx process in the foreground using the nginx -g "daemon off;" command. For those curious about the purpose of the env.sh script file, it is used to read in known named environment variables. This gives us a chance to pass in the external host location of the API service at deployment time. You'll see this later on when we get to the DeploymentConfig used by the frontend in step 25.
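To make the env.sh idea concrete, here is a hypothetical sketch of what such a script might do: read a known environment variable and bake it into a small config file that the static frontend loads at startup. The variable name REACT_APP_APIHOSTPORT comes from this course; the file name env-config.js and the default value are assumptions, not the actual repo contents:

```shell
#!/bin/sh
# Hypothetical env.sh sketch: capture the API host:port passed in at
# deployment time, falling back to a placeholder default if unset
REACT_APP_APIHOSTPORT="${REACT_APP_APIHOSTPORT:-localhost:8080}"

# Write the value into a small JS config file served alongside the
# static frontend assets, so the browser code can read it at runtime
cat > env-config.js <<EOF
window.REACT_APP_APIHOSTPORT = "${REACT_APP_APIHOSTPORT}";
EOF

cat env-config.js
```

Because the value is resolved when the container starts rather than when the image is built, the same frontend image can point at different API endpoints per environment.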
Okay, let's move on and execute the make command to create the S2I frontendbuilder image. Now that the build has completed, let's check for its existence locally like so. Here we can see that we had previously built this image and nothing has changed since, as it has the same Docker image ID of a55e393. Now that we have our S2I frontendbuilder image available, let's use it to perform a test build of a frontend container. To do so we need the frontend source code, and for that we can reuse the publicly available openshift-voteapp-frontend-react GitHub repo found here. To invoke a local S2I build, copy the "s2i build" command found under step 20. I'll paste it into the terminal and execute it. The s2i build command works by injecting the React-based frontend source code located in the openshift-voteapp-frontend-react GitHub repo: it automatically clones this within a container instance of the frontendbuilder image, runs the assemble script, and finally outputs a new Docker image tagged cloudacademydevops/frontend:demo-v1. As you can see, we have successfully created a new frontend container image using the S2I technology.
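The step 20 command likely follows the standard `s2i build <source> <builder-image> <output-tag>` form — this is a sketch assembled from the names mentioned in this course, not the verbatim runbook command:

```shell
# Build the frontend image by injecting the React source repo into
# the frontendbuilder image, which runs the assemble script
s2i build https://github.com/cloudacademy/openshift-voteapp-frontend-react \
  cloudacademydevops/frontendbuilder \
  cloudacademydevops/frontend:demo-v1
```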
Again, let's check for the image locally by running the "docker images" command, piping the results through grep for cloudacademydevops/frontend, and then grepping again for demo-v1. The next thing we can do is test the newly created frontend container image locally, confirming that it is functional. To do this I'll copy the docker run command from the runsheet and execute it within the terminal. This docker run command sets a fictitious value on the environment variable REACT_APP_APIHOSTPORT. This environment variable is used to pass in the API's external DNS host and port to the frontend, ensuring that the frontend's API AJAX calls are sent to the right location.
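The image check described above amounts to a short grep pipeline like this:

```shell
# Confirm the freshly built frontend image exists locally
docker images | grep cloudacademydevops/frontend | grep demo-v1
```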
The key point here is that this gives us the flexibility of configuring this at deployment time rather than at build time, which would require us to recompile. In terms of the docker run parameters: we use the -it parameter to output the container's standard output to the terminal, and the --rm parameter to remove the container automatically after it exits, since we are running it only to validate that it is functional. We give it the name test and forward port 8080 on the host to port 8080 in the container.
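Putting those parameters together, the runsheet command looks roughly like the following — the REACT_APP_APIHOSTPORT value here is a stand-in for the fictitious value used in the demo:

```shell
# Run the frontend image locally for a quick smoke test:
#   -it  attach the container's standard output to the terminal
#   --rm remove the container automatically after it exits
#   -p   forward host port 8080 to container port 8080
#   -e   inject the (fictitious) API host:port at startup
docker run -it --rm \
  --name test \
  -p 8080:8080 \
  -e REACT_APP_APIHOSTPORT=apis.example.com:8080 \
  cloudacademydevops/frontend:demo-v1
```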
Okay, now that the test frontend container is running, let's test what happens when we navigate to localhost port 8080 within the browser. So this looks good. We can see that the Voting App frontend is rendered in the browser, albeit partially; the bottom half of the UI is empty. As you might expect, this was always going to be the case. Why? Because the API AJAX calls are heading to a fictional host that doesn't exist. Recall the fictitious value on the environment variable we passed in. Let me show you within the developer tools network capture pane.
Here I'll filter down the captured network calls to only those that are AJAX calls. If I select any of them, you can see the host that the calls are being made to. This maps back to the environment variable we configured on the docker run command. So this is actually a good result and implies that the frontend container launch sequence is behaving internally as designed. We can repeat this container launch sequence with a different value for the REACT_APP_APIHOSTPORT environment variable. Reloading the page, we can see that the AJAX calls within the browser have changed accordingly.
Okay, now that we have confirmed that the S2I frontendbuilder image is correctly building frontend images, as per the tests we just ran, let's push the builder image up into the Docker container registry. This will then allow us to import it down into our OpenShift cluster. To do so, let's jump over to step 21 in the runbook and copy the docker login command, executing it within the terminal like so. Great, the docker login has authenticated me successfully. I can confirm via the Docker Desktop utility that I'm logged into the cloudacademydevops Docker repo. Next, I'll jump back into step 21 of the runbook and copy the docker push command, executing it within the terminal like so. Cool, the latest version of the S2I frontendbuilder image is being pushed up into the Docker registry. This has now completed successfully.
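The step 21 login and push likely look like this — the image tag shown is an assumption, since the transcript only refers to "the latest version":

```shell
# Authenticate against Docker Hub (credentials prompted interactively)
docker login

# Push the builder image up so the OpenShift cluster can import it
docker push cloudacademydevops/frontendbuilder:latest
```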
Let's now import this back down into our OpenShift cluster. This time I'll copy the "oc import-image" command from step 21 and run it within the terminal. The import will download the builder image and store it within a new ImageStream named frontendbuilder within the cluster. Jumping back into the OpenShift web admin console and navigating to Builds and then Image Streams, we can see the newly created ImageStream named frontendbuilder, and if we drill down into it we can see that our S2I frontendbuilder image has been successfully pulled down and registered. We can also confirm this from the command line. Let's run the commands "oc get imagestreams" and "oc describe is frontendbuilder" (is being the short name for imagestream) to see the specifics of the new frontendbuilder ImageStream and the image we just imported.
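A sketch of the step 21 import and the follow-up checks — the registry path is an assumption based on the repo names used in this course:

```shell
# Import the builder image into a new ImageStream named frontendbuilder
oc import-image frontendbuilder \
  --from=docker.io/cloudacademydevops/frontendbuilder:latest --confirm

# Confirm the ImageStream exists and inspect its details
oc get imagestreams
oc describe is frontendbuilder
```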
Okay, that completes steps 19, 20, and 21, which were used to create the S2I frontendbuilder image, test and validate it, and import it into the OpenShift cluster via the Docker registry. Now that we have it in the cluster, I'll proceed to create a frontend BuildConfig resource to establish a build procedure for the frontend within the cluster.
Jeremy is a Content Lead Architect and DevOps SME here at Cloud Academy where he specializes in developing DevOps technical training documentation.
He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 25+ years. In recent times, Jeremy has been focused on DevOps, Cloud (AWS, Azure, GCP), Security, Kubernetes, and Machine Learning.
Jeremy holds professional certifications for AWS, Azure, GCP, Terraform, and Kubernetes (CKA, CKAD, CKS).