OpenShift is a rock-solid platform engineered for the enterprise. Built on top of Kubernetes, it provides many value-add features, tools, and services that help streamline the complete end-to-end container development and deployment lifecycle.
This introductory-level training course is designed to bring you quickly up to speed with the key features that OpenShift provides. You'll then observe firsthand how to launch a new OpenShift Container Platform 4.2 cluster on AWS and deploy a real-world cloud-native application into it.
We’d love to get your feedback on this course, so please give it a rating when you’re finished. If you have any queries or suggestions, please contact us at firstname.lastname@example.org.
By completing this course, you will:
- Understand what OpenShift is and what it brings to the table
- Learn how to provision a brand-new OpenShift 4.2 cluster on AWS
- Learn the basic principles of deploying a cloud-native application into OpenShift
- Understand how to work with and configure many of the key OpenShift value-add cluster resources
- Learn how to use the OpenShift web administration console to manage and administer OpenShift deployments
- Learn how to use the oc command-line tool to manage and administer OpenShift deployments
- And finally, learn how to manage deployments and OpenShift resources through their full lifecycle
This course is intended for:
- Anyone interested in learning OpenShift
- Software Developers interested in OpenShift containerisation, orchestration, and scheduling
- DevOps practitioners looking to learn how to provision, manage, and maintain applications on OpenShift
To get the most from this course, you should have at least:
- A basic understanding of containers and containerisation
- A basic understanding of Kubernetes, container orchestration, and scheduling
- A basic understanding of software development and the software development life cycle
- A basic understanding of networks and networking
This course references the following CloudAcademy GitHub hosted repos:
- https://github.com/cloudacademy/openshift-voteapp-demo (OpenShift VoteApp Runbook)
- https://github.com/cloudacademy/openshift-s2i-frontendbuilder (OpenShift S2I Frontend Builder)
- https://github.com/cloudacademy/openshift-voteapp-frontend-react (VoteApp Frontend UI)
- [Presenter] Okay, welcome back. In this demonstration I'll create a new frontend build config, configured to use the S2I frontend builder image and to build the frontend source code located in the openshift-voteapp-frontend-react GitHub repository. Now, before I proceed with step 22, it is highly recommended that you fork your own copy of the openshift-voteapp-frontend-react GitHub repository and update the Git URI field within the build config we are about to create with your own forked repo URL. The main reason for this is that later on, in step 27, I'll show you how to trigger automatic builds using a webhook configured within the GitHub repository.
By having your own forked version of this repository, you'll be able to point the webhook at your own cluster and push code updates that trigger an auto-build of the frontend within your cluster. More on this later. For now, let's jump into step 22: I'll copy the build config as is and execute it directly within the terminal, like so. Excellent. This has created a new build config within the cluster. We can see this by examining the current build configs, running the commands oc get buildconfig and oc describe buildconfig frontend. The last command in particular shows the finer details of the frontend build config that we just created. Take note of the webhook URL.
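For reference, a BuildConfig like the one created here generally looks something like the manifest below. This is a hedged sketch only: the exact manifest lives in the openshift-voteapp-demo runbook, and the builder image stream tag and webhook secret shown here are assumptions, not the runbook's actual values.

```yaml
# Sketch of a frontend BuildConfig (field values are illustrative assumptions).
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: frontend
spec:
  source:
    type: Git
    git:
      # Replace with your own forked repo URL if you plan to use the webhook
      uri: https://github.com/cloudacademy/openshift-voteapp-frontend-react
  strategy:
    type: Source
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: frontendbuilder:latest   # assumed name for the S2I builder image stream tag
  output:
    to:
      kind: ImageStreamTag
      name: frontend:latest            # where successful builds push the new image
  triggers:
    - type: GitHub
      github:
        secret: <webhook-secret>       # placeholder; forms part of the GitHub webhook URL
```

The GitHub trigger is what makes the webhook URL appear in the oc describe buildconfig frontend output, and the output section is why a matching image stream must exist before the first build runs.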
This needs to be copied and will be used in step 25. Now that the frontend build config is in place, we are almost ready to run our first frontend image build within the OpenShift cluster using the S2I frontend builder image. But before we do, we need to create a new image stream for the frontend images that the build system will create. To do this, I'll copy and run the command in step 23, like so. I can again examine the list of available image streams configured within the current cloudacademy project by simply running the command oc get is. We're now ready to run our first frontend build within the cluster. Let's go ahead and do this by copying the command from step 24 and executing it within the terminal, like so. This command will actually tail the build from start to finish. I'll sit on the tail and wait until the entire build run completes successfully.
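The image stream created in step 23 is a minimal resource; in manifest form it is roughly the following (the name is assumed from the transcript, and is is simply the short resource name oc accepts for imagestream):

```yaml
# Minimal sketch of the frontend ImageStream the build pushes into.
apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
  name: frontend
```

Once the build completes, OpenShift pushes the resulting container image into this stream as the frontend:latest tag, which is what downstream resources such as the deployment config will reference.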
Excellent. We can see that the build has just completed successfully. This should have resulted in a new frontend image, which will have been pushed into the previously created frontend image stream. We can check this by running the command oc describe is frontend, or alternatively by looking within the OpenShift web admin console. Clicking on build configs, we can see our frontend build config. Clicking on it, we can then see its details, and under builds we can see each of the builds that have been performed. Here we can see just the one, which we kicked off a moment ago. Clicking on it, we can see the build details, such as the memory, CPU, and file system usage. Additionally, under logs we can see all of the logging as emitted by the build. This is extremely useful for debugging broken builds. Clicking on image streams, we can now see that we have a new frontend image stream, which contains our newly built frontend container image, a perfect result.
Okay, that completes steps 22, 23, and 24. We're now ready to move on and set up the frontend deployment config, which will utilize the frontend image that the build has just generated for us.
Jeremy is a Content Lead Architect and DevOps SME here at Cloud Academy where he specializes in developing DevOps technical training documentation.
He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 25+ years. In recent times, Jeremy has been focused on DevOps, Cloud (AWS, GCP, Azure), Security, Kubernetes, and Machine Learning.
Jeremy holds professional certifications for AWS, GCP, and Kubernetes.