OpenShift is a rock-solid platform engineered for the enterprise. It's built on top of Kubernetes and provides many value-add features, tools, and services that help streamline the complete end-to-end container development and deployment lifecycle.
This introductory-level training course is designed to bring you quickly up to speed with the key features that OpenShift provides. You'll then get to observe firsthand how to launch a new OpenShift Container Platform 4.2 cluster on AWS and then deploy a real-world cloud native application into it.
We’d love to get your feedback on this course, so please give it a rating when you’re finished. If you have any queries or suggestions, please contact us at email@example.com.
By completing this course, you will:
- Learn and understand what OpenShift is and what it brings to the table
- Learn how to provision a brand new OpenShift 4.2 cluster on AWS
- Learn the basic principles of deploying a cloud native application into OpenShift
- Understand how to work with and configure many of the key OpenShift value-add cluster resources
- Learn how to use the OpenShift web administration console to manage and administer OpenShift deployments
- Learn how to use the oc command-line tool to manage and administer OpenShift deployments
- Learn how to manage deployments and OpenShift resources through their full lifecycle
This course is intended for:
- Anyone interested in learning OpenShift
- Software Developers interested in OpenShift containerisation, orchestration, and scheduling
- DevOps practitioners looking to learn how to provision, manage, and maintain applications on OpenShift
To get the most from this course, you should have at least:
- A basic understanding of containers and containerisation
- A basic understanding of Kubernetes, container orchestration, and scheduling
- A basic understanding of software development and the software development life cycle
- A basic understanding of networks and networking
This course references the following CloudAcademy GitHub-hosted repositories:
- https://github.com/cloudacademy/openshift-voteapp-demo (OpenShift VoteApp Runbook)
- https://github.com/cloudacademy/openshift-s2i-frontendbuilder (OpenShift S2I Frontend Builder)
- https://github.com/cloudacademy/openshift-voteapp-frontend-react (VoteApp Frontend UI)
- Okay, welcome back. In this demo, I'm going to deploy the Voting App API component into the cluster. The API has been built using Go and exposes a simple REST-based API over HTTP to read and write data to and from the database. The single API binary has been packaged into a publicly available Docker image, tagged as cloudacademydevops/api:v1 and hosted in the Docker registry. Now, since the API needs to connect to the MongoDB replica set, we need to tell it how to do so via connection string information.
A default connection string is already embedded within the API binary, one which follows the naming convention already applied to the MongoDB replica set via the headless service we previously established. Therefore, there is no need to edit or reconfigure it externally. Having said that, if we needed to, the API is designed to override the embedded default with a connection string configured in an environment variable named MONGO_CONN_STR. But, as I said earlier, I'll just leverage the API's default connection string behavior. Okay, let's kick off the API deployment, which will result in the creation of the following cluster resources: a Deployment, a Service, and a Route. Once fully deployed, these will allow us to hit the API externally over HTTP on port 80.
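The override path mentioned above can be sketched as a couple of shell commands. This is a minimal sketch: the connection string is purely illustrative (the hostnames follow the usual pod-name.headless-service DNS pattern, and the replica set name rs0 is an assumption, not taken from the course materials).

```shell
# Illustrative MongoDB connection string. The hostnames assume pods named
# mongo-0/1/2 behind a headless service named "mongo", and a replica set
# named rs0 -- adjust all of these to match your own cluster.
MONGO_CONN_STR="mongodb://mongo-0.mongo:27017,mongo-1.mongo:27017,mongo-2.mongo:27017/?replicaSet=rs0"

# Setting the variable on the Deployment makes the API use it instead of
# its embedded default. Guarded so the snippet is safe to run without a
# cluster; export OPENSHIFT_LIVE=1 once you're logged in via `oc login`.
if [ -n "${OPENSHIFT_LIVE:-}" ]; then
  oc set env deployment/api MONGO_CONN_STR="$MONGO_CONN_STR"
fi
```

Using `oc set env` (rather than editing the Deployment manifest) triggers a new rollout automatically, which is usually the quickest way to test an alternative connection string.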
To do this, let's jump over into the runsheet, copy the command in step 15, and then execute it in the terminal like so. We can see that each of the three resources has been created successfully within the cluster. Let's now examine the API deployment rollout. I'll first run the command oc rollout status deployment api. This allows us to watch in real time how the cluster goes about performing the deployment, which consists of four replicas of the API. Next, we can examine the pods themselves, starting with listing out all pods in the current CloudAcademy project. Then I'll filter on the pod label role=api to see just the four API pods.
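The rollout-inspection steps just described look roughly like the following. The label selector role=api matches the narration, but verify it against the actual Deployment manifest in the runbook.

```shell
SELECTOR="role=api"   # pod label used to single out the API pods

# Guarded so the snippet is safe to run without a cluster; export
# OPENSHIFT_LIVE=1 once you're logged in via `oc login`.
if [ -n "${OPENSHIFT_LIVE:-}" ]; then
  # Watch the Deployment roll out its four replicas in real time.
  oc rollout status deployment/api

  # List every pod in the current project, then just the API pods.
  oc get pods
  oc get pods -l "$SELECTOR"
fi
```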
Let's now take one of the API pod names and examine the logging that the API has written to standard out. Here we can see that this API pod has indeed connected successfully to the backend MongoDB database, so the API pods look ready to receive requests. Okay, everything looks good. We can now move on to step 16 and copy the command from the runsheet to query the DNS host from the API Route object. This will allow us to easily run curl commands against the externally exposed API endpoint.
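Pulling a pod's logs and extracting the Route's DNS host can be sketched as follows. The jsonpath expressions are standard oc output options; the Route name api is an assumption based on the narration, not confirmed from the runbook.

```shell
ROUTE_NAME="api"   # assumed name of the API Route resource

if [ -n "${OPENSHIFT_LIVE:-}" ]; then   # guard: needs a live cluster
  # Grab the name of the first API pod and dump what it wrote to stdout.
  POD=$(oc get pods -l role=api -o jsonpath='{.items[0].metadata.name}')
  oc logs "$POD"

  # Extract the externally resolvable hostname from the Route.
  API_HOST=$(oc get route "$ROUTE_NAME" -o jsonpath='{.spec.host}')
  echo "$API_HOST"
fi
```

Capturing the hostname in a variable like API_HOST is convenient for the curl calls that follow.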
Let's try it out by hitting the API's health check endpoint, which simply responds with the string OK and an HTTP 200 response code to indicate that the API is healthy. As expected, it does, so this is a great result. Expanding on this, we can attempt to hit one of the API's data endpoints, forcing it to connect to the backend MongoDB database and read out data. To do this, I'll try the languages endpoint and pipe the response through the jq utility for formatting, like so. Again, this confirms that our API and MongoDB cluster resources are behaving as designed.
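The curl calls might look like the following. Note that the endpoint paths (/ok, /languages, /languages/go) and the route hostname are assumptions inferred from the narration, not verified against the API source.

```shell
# Hypothetical route hostname -- substitute the value returned by
# `oc get route api -o jsonpath='{.spec.host}'`.
API_HOST="${API_HOST:-api-voteapp.apps.example.com}"

if [ -n "${OPENSHIFT_LIVE:-}" ]; then   # guard: needs a reachable API
  # Health check: should print the literal string OK (HTTP 200).
  curl -s "http://${API_HOST}/ok"

  # Read data out of MongoDB via the API, pretty-printed with jq.
  curl -s "http://${API_HOST}/languages" | jq .

  # Query a single language (repeat for java, nodejs, and so on).
  curl -s "http://${API_HOST}/languages/go" | jq .
fi
```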
We can exercise the API even further by querying data for a specific programming language, such as Go, then Java, and then Node.js. Finally, let's take a look within the OpenShift web admin console and view the four API pods. Here indeed, we can see the four new API pods, each in a Running status. From here, let's view the API Service and API Route resources. Under Networking, I'll select Services, and here we can see the new API Service resource. Then, selecting Routes, we can see the new API Route resource, which provides a publicly resolvable URL for the API Service, allowing external traffic to reach it.
Okay, that completes steps 15 and 16. We have successfully deployed the API and connected it to the backend MongoDB database. That concludes this demonstration. We're now ready to tackle the front-end cluster resources.
Jeremy is a Content Lead Architect and DevOps SME here at Cloud Academy where he specializes in developing DevOps technical training documentation.
He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 25+ years. In recent times, Jeremy has been focused on DevOps, Cloud (AWS, GCP, Azure), Security, Kubernetes, and Machine Learning.
Jeremy holds professional certifications for AWS, GCP, Terraform, Kubernetes (CKA, CKAD, CKS).