OpenShift is a rock-solid platform engineered for the enterprise. It's built on top of Kubernetes and provides many value-add features, tools, and services that help streamline the complete end-to-end container development and deployment lifecycle.
This introductory-level training course is designed to bring you quickly up to speed with the key features that OpenShift provides. You'll then observe firsthand how to launch a new OpenShift Container Platform 4.2 cluster on AWS and deploy a real-world cloud native application into it.
We’d love to get your feedback on this course, so please give it a rating when you’re finished. If you have any queries or suggestions, please contact us at support@cloudacademy.com.
Learning Objectives
By completing this course, you will:
- Understand what OpenShift is and what it brings to the table
- Learn how to provision a brand-new OpenShift 4.2 cluster on AWS
- Understand the basic principles of deploying a cloud native application into OpenShift
- Understand how to work with and configure many of the key OpenShift value-add cluster resources
- Learn how to work with the OpenShift web administration console to manage and administer OpenShift deployments
- Learn how to work with the oc command line tool to manage and administer OpenShift deployments
- Learn how to manage deployments and OpenShift resources through their full lifecycle
Intended Audience
This course is intended for:
- Anyone interested in learning OpenShift
- Software Developers interested in OpenShift containerisation, orchestration, and scheduling
- DevOps practitioners looking to learn how to provision, manage, and maintain applications on OpenShift
Prerequisites
To get the most from this course, you should have at least:
- A basic understanding of containers and containerisation
- A basic understanding of Kubernetes, container orchestration, and scheduling
- A basic understanding of software development and the software development life cycle
- A basic understanding of networks and networking
Source Code
This course references the following CloudAcademy GitHub hosted repos:
- https://github.com/cloudacademy/openshift-voteapp-demo (OpenShift VoteApp Runbook)
- https://github.com/cloudacademy/openshift-s2i-frontendbuilder (OpenShift S2I Frontend Builder)
- https://github.com/cloudacademy/openshift-voteapp-frontend-react (VoteApp Frontend UI)
- [Jeremy] Okay, welcome back. From here on in, we're in demonstration mode. This is the fun part. Now, for starters, I want to remind you of the installation and deployment runbook hosted in the Cloud Academy GitHub openshift-voteapp-demo repository. This contains the entire end-to-end instruction set to create not only a new OpenShift cluster, but to then also deploy our sample cloud native voting application into it. The runbook consists of steps one through to 30.
I'll continue to reference this runbook throughout the duration of the full demonstration. Having it available will allow you to watch and review each of the steps performed and, if you choose to, complete the steps within your own environment. Okay, let's proceed. In this demonstration, we're going to cover steps one through to five. These steps combined will be used to spin up a new OpenShift 4 container platform cluster running on the AWS cloud. In particular, we're going to use the OpenShift installer command line utility to orchestrate the cluster provisioning process. Step one requires you to log into the cloud.redhat.com web portal. You will need to log in with your own Red Hat credentials. If you need to register, then do so now. Now, the reason we need to access the cloud.redhat.com web portal is to retrieve a pull secret, which is tied to your Red Hat account. This allows Red Hat to track your OpenShift 4 cluster subscriptions.
Let's open the Red Hat cloud portal. Within the Clusters section, I'll choose AWS for the infrastructure provider, but take note here of the other available options. I'll then go with the Installer-Provisioned Infrastructure option. In this case, the installer will create all required cluster infrastructure, nice and easy. Next, we are presented with several options, including the ability to download the installer itself. However, all we need to do is download the pull secret or copy it to the clipboard. I'll click on the Copy Pull Secret option. This will copy the pull secret to the clipboard. I'm now going to store this in a temp file within Visual Studio Code, so that I can reference it later when I get to step four. Now I'm moving on to step two. Let's install the OpenShift installer client by copying the following commands. Jumping into the local terminal, I'm starting out in the demo-openshift folder, which is empty. Okay, next, I'm going to paste the commands to download the installer, extract it, and copy it into the /usr/local/bin folder, which is configured on my local PATH.
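For reference, the download-and-install commands look roughly like the following; the exact mirror URL and version string are assumptions, so check mirror.openshift.com for the current release and your platform:

```bash
# Download the installer (URL/version illustrative; swap "linux" for "mac" on macOS)
wget https://mirror.openshift.com/pub/openshift-v4/clients/ocp/4.2.0/openshift-install-linux-4.2.0.tar.gz

# Extract the openshift-install binary
tar -xvf openshift-install-linux-4.2.0.tar.gz

# Copy it into /usr/local/bin, which is on the PATH
sudo cp openshift-install /usr/local/bin/
```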
Next, let's run the openshift-install command without any parameters to confirm that it is available. Excellent, it's ready to use. Here, we can see how it should be used in terms of the parameters and inputs, et cetera, that it expects. Let's now query its version, like so. Moving on to step three, let's now generate and scaffold a new install-config.yaml file. The install-config.yaml file contains all of the OpenShift cluster configuration that we want applied. I'll copy the openshift-install command here and execute it back within the terminal. We are then presented with a sequence of menu options asking us for details like where the cluster should be created, the region it should be created within, and the pull secret that we retrieved earlier. When this command completes, a new install-config.yaml file will be created and placed within the current directory.
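The commands in this step look like this (a minimal sketch based on the 4.2 installer's CLI):

```bash
# Run with no arguments to confirm availability and print usage
openshift-install

# Query the installer version
openshift-install version

# Interactively scaffold install-config.yaml into the current directory
openshift-install create install-config --dir=.
```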
Let's now display the contents of this file like so. This demonstrates how to scaffold and generate the basic structure required. I'll first take a backup of this file. Then, moving forward, we are simply going to overwrite this file with the configuration declared in step four. Let's now open this file up within Visual Studio Code. We need to edit and update some of the configuration. In particular, I'll update and add the pull secret that we retrieved earlier. I'll also add in an SSH public key, which will allow us to SSH into the master and worker cluster nodes later if we need to. The SSH public key can be extracted from an existing private key by using the ssh-keygen command, as shown below. Make sure to save the updated install-config.yaml file.
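Here's a quick sketch of that key extraction; the private key path is just an example, so substitute your own:

```bash
# Derive and print the public key from an existing private key;
# paste the output into the sshKey field of install-config.yaml
ssh-keygen -y -f ~/.ssh/id_rsa
```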
Now, let's take the opportunity to review some of the configuration declared within this file. The cluster config is designed for demo purposes, motivated primarily by reducing running costs, since we are in demo mode. For starters, we are provisioning the master and worker nodes in a single availability zone. We are using the m4.xlarge instance type for both the master and worker nodes. We've decided to configure a root volume of 100 gigabytes using the gp2 volume type. The cluster network is configured with an IP overlay for the pods using a CIDR of 10.128.0.0/14. Each node within the cluster gets a slice of this CIDR. In this case, a /23 host prefix gives each node 2^(32-23) = 512 addresses, so approximately 512 IPs are available for the pods on any one node. The machine CIDR 192.168.0.0/20 is the address space that the VPC, or virtual private cloud, within AWS will be provisioned with. The network type is set to use the default OpenShift SDN plugin. In this case, OpenShift will be configured to use VXLAN technology to implement the IP network overlay used within the cluster. The service network setting is set to use the 172.30.0.0/16 address range. This means any of the cluster service resources you provision will draw an IP from this CIDR range.
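Pulling those settings together, the relevant parts of install-config.yaml look roughly like the following sketch; the base domain, cluster name, region, and availability zone are placeholders, not the exact values from the demo:

```yaml
apiVersion: v1
baseDomain: example.com          # placeholder: your own Route 53 domain
compute:
- name: worker
  replicas: 1
  platform:
    aws:
      type: m4.xlarge
      zones: [us-west-2a]        # single availability zone to reduce cost
      rootVolume:
        size: 100                # 100 GB root volume
        type: gp2
controlPlane:
  name: master
  replicas: 3
  platform:
    aws:
      type: m4.xlarge
      zones: [us-west-2a]
      rootVolume:
        size: 100
        type: gp2
metadata:
  name: demo-cluster             # placeholder: your cluster name
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14          # pod IP overlay
    hostPrefix: 23               # /23 per node => 512 pod IPs each
  machineCIDR: 192.168.0.0/20    # VPC address space
  networkType: OpenShiftSDN      # VXLAN-based overlay
  serviceNetwork:
  - 172.30.0.0/16                # cluster service IPs
platform:
  aws:
    region: us-west-2            # placeholder region
pullSecret: '<your pull secret JSON>'
sshKey: |
  ssh-rsa AAAA... user@example
```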
Now that we have updated and saved our install-config.yaml file, we are ready to launch. Let's go ahead and do this. I'll copy the create cluster command from here and then run it back within the same directory containing the install-config.yaml file. Okay, the OpenShift cluster provisioning process has begun.
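The launch command itself, run from the directory containing install-config.yaml, looks like this:

```bash
# Kick off cluster provisioning; progress logs stream to the terminal
openshift-install create cluster --dir=. --log-level=info
```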
Now, this takes approximately 20 to 30 minutes to complete. Rather than watching the entire provisioning process, let's jump ahead to the point where the full provisioning process has completed successfully. Don't be too concerned if warnings are logged out during the provisioning process, as the cluster is likely still running through its bootstrapping scripts in the background. Given enough time, it will converge to a fully functional cluster, as you can now see. Okay, the cluster has been successfully provisioned, as per this install complete message. We can also see in the output that several of the cluster particulars are logged out, such as the URL for the OpenShift 4 web administration console together with the credentials to log in. We can now navigate into the new auth directory and examine the cluster credential files like so.
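Examining those credential files from the terminal looks something like this:

```bash
# The installer writes credentials into an auth/ subdirectory
ls auth/
# => kubeadmin-password  kubeconfig

# View the temporary kubeadmin password
cat auth/kubeadmin-password
```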
The kubeadmin-password file contains the password for the temporary kubeadmin user. The kubeconfig file, on the other hand, contains information about the cluster, users, namespaces, and possible authentication mechanisms to connect to the newly formed OpenShift cluster. If we now jump over into the AWS console, we can see the underlying cluster infrastructure resources, which have been created to support the cluster. In the EC2 service, we can see four m4.xlarge instances: three for the master nodes and one for the worker node. Under Load Balancers, we can see two new network load balancers and one new classic load balancer. Navigating into Route 53, we can see a new DNS hosted zone with nine newly populated records created for the purposes of resolving cluster traffic. As per step five, let's now quickly try accessing the OpenShift 4 web administration console. I'll retrieve the URL and credentials from the terminal output. I'll jump over into my browser and launch the web admin console like so. Ignore and proceed past the SSL warnings since we're just in demonstration mode. In production, you'd create and configure proper certs, but for now, let's just ignore these.
Finally, we get the login page to the OpenShift admin console. I'll need to copy the kubeadmin username and password, and then, present them within the login page. As you can see, we have successfully authenticated into the cluster with the temporary kubeadmin credentials and the dashboard view is now presented to us. Here, we can see the current cluster aggregated CPU, memory, storage and network stats. The admin console provides us with the ability to control many aspects of the cluster. On the left-hand side, we have the main menu for navigating within the cluster. Options include Home, Operators, Workloads, Networking, Storage, Builds, Monitoring, Compute, and Administration. Now, one common pattern you'll see throughout the remainder of this demonstration is that there are multiple ways in which you can create and manage cluster resources, either by performing point and click operations within this admin console or alternatively using the oc command line utility.
Okay, that completes steps one through to five. Next, I'll show you how to download and set up the OpenShift oc command line utility.
Jeremy is a Content Lead Architect and DevOps SME here at Cloud Academy where he specializes in developing DevOps technical training documentation.
He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 25+ years. In recent times, Jeremy has been focused on DevOps, Cloud (AWS, Azure, GCP), Security, Kubernetes, and Machine Learning.
Jeremy holds professional certifications for AWS, Azure, GCP, Terraform, and Kubernetes (CKA, CKAD, CKS).