
OpenShift Introduction

Overview
Difficulty: Beginner
Duration: 1h 42m
Students: 534
Rating: 5/5

Description

OpenShift is a rock-solid platform engineered for the enterprise. It's built on top of Kubernetes and provides many value-add features, tools, and services that help to streamline the complete end-to-end container development and deployment lifecycle.

This introductory-level training course is designed to bring you quickly up to speed with the key features that OpenShift provides. You'll then get to observe firsthand how to launch a new OpenShift Container Platform 4.2 cluster on AWS and then deploy a real-world cloud native application into it.


We’d love to get your feedback on this course, so please give it a rating when you’re finished. If you have any queries or suggestions, please contact us at support@cloudacademy.com.

Learning Objectives

By completing this course, you will:

  • Understand what OpenShift is and what it brings to the table
  • Learn how to provision a brand new OpenShift 4.2 cluster on AWS
  • Learn the basic principles of deploying a cloud native application into OpenShift
  • Understand how to work with and configure many of the key OpenShift value-add cluster resources
  • Learn how to work with the OpenShift web administration console to manage and administer OpenShift deployments
  • Learn how to work with the oc command-line tool to manage and administer OpenShift deployments
  • And finally, learn how to manage deployments and OpenShift resources through their full lifecycle

Intended Audience

This course is intended for:

  • Anyone interested in learning OpenShift
  • Software Developers interested in OpenShift containerisation, orchestration, and scheduling
  • DevOps practitioners looking to learn how to provision, manage, and maintain applications on OpenShift

Prerequisites

To get the most from this course, you should have at least:

  • A basic understanding of containers and containerisation
  • A basic understanding of Kubernetes, container orchestration, and scheduling
  • A basic understanding of software development and the software development life cycle
  • A basic understanding of networks and networking

Source Code

This course references the following CloudAcademy GitHub hosted repos:

 

Transcript

- [Jeremy] Okay, welcome back. In this lecture, I'll provide you with a quick introduction to OpenShift, what it is, what it provides, and how you can use it to run and operate your own containerized workloads. Let's begin. For starters, let's quickly consider what differentiates OpenShift from past and present solutions. If we consider each of the vertical perspectives in this diagram, we can see how development processes, application architectures, packaging, and infrastructural approaches have each evolved to where we're at today. 

With these in mind, we can begin to understand the key requirements and dynamics that an enterprise needs to consider when building and developing cloud native applications. This evolution towards DevOps, microservices, containers, and cloud is really at the heart of what has driven the development of the very popular open source project, Kubernetes, and its enterprise-focused derivative, OpenShift, provided by Red Hat. OpenShift, as you might now have guessed, is a downstream distribution of the very popular and well-used open source Kubernetes project. It is pitched by Red Hat as an enterprise-grade container platform, enhanced with many innovative features that sit on top of the baseline Kubernetes platform, and it provides for and addresses many enterprise development needs.

As seen in this diagram, OpenShift provides support for self-service, multi-language, automation, and collaboration. And by design, it is standards-based, web scale, open source, and enterprise-grade. OpenShift can be considered a PaaS, or platform as a service, offering; that is, you use it to host and run your own applications and workloads. The underlying physical infrastructure is abstracted away from you, so that, as an application developer, you don't have to worry about powering up servers, installing hard disks, et cetera. Having said that, OpenShift does come in several product versions, some of which focus on being an on-prem solution and therefore require some form of infrastructure to be ready and available. But once the OpenShift installation is complete, you, as an application developer, never really need to consider the underlying physical infrastructure. Application developers instead work with container abstractions such as pods, replica sets, deployments, et cetera, to deploy their own workloads. As mentioned, OpenShift is provided in several versions, each being slightly different and optimized for a specific use case.

Red Hat OpenShift Container Platform is a hybrid cloud, enterprise Kubernetes platform used to build and deliver better applications faster. Red Hat OpenShift Dedicated is an online, single-tenant, high-availability Kubernetes cluster managed by Red Hat on AWS. Azure Red Hat OpenShift is a fully-managed Red Hat OpenShift service that can be launched on Microsoft's Azure public cloud. And Red Hat OpenShift Online is an online, PaaS-styled version of OpenShift considered to be the fastest and easiest way for beginners to build, deploy, and scale out solutions. Other flavors of OpenShift also exist in the form of OKD, a free, community-supported, open source distribution of Kubernetes which looks and feels like OpenShift.

In fact, a lot of the upstream features developed and tested within OKD are ported into the mainstream OpenShift enterprise version. Minishift is a version used to launch a single-node, virtualized, local OKD cluster, extremely useful for developers who want to build, test, and prototype on their local workstation. CodeReady Containers, again developer-focused, is used to launch a minimal, preconfigured OpenShift 4.x cluster on your local workstation. In the coming demonstrations, I'm going to provision and create an OpenShift Container Platform-based cluster which will be deployed into the AWS cloud. The OpenShift Container Platform, as you now know, is built on top of Kubernetes. Kubernetes is made up of master and worker nodes.

The master nodes collectively are considered the control plane for the cluster, with the worker nodes being used to perform the actual work. Typically, worker nodes are the nodes where you'll host your application workloads in the form of running containers contained within pods. The OpenShift control plane consists of three or more master nodes, with the default being three. Now, without going into too much detail, the minimum requirement of three master nodes allows the consensus algorithm used by the etcd distributed key-value store to establish a quorum for any action taken within the cluster, for example, an add, remove, or update. This is all part of the way Kubernetes is designed to be fault-tolerant and highly available.
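As a quick sanity check on that quorum rule, the arithmetic can be sketched in plain shell. Note this is generic Raft/etcd majority math, not output from any OpenShift tool:

```shell
# etcd stays available only while a majority (quorum) of members is up:
#   quorum             = floor(n/2) + 1
#   tolerated failures = n - quorum = floor((n-1)/2)
for n in 1 3 5; do
  quorum=$(( n / 2 + 1 ))
  tolerated=$(( (n - 1) / 2 ))
  echo "masters=$n quorum=$quorum tolerated_failures=$tolerated"
done
```

This shows why three masters is the practical minimum: a single-master cluster tolerates no failures at all, while three masters can lose one and still reach quorum.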

The number of worker nodes configured within an OpenShift cluster is an arbitrary decision and can range from one to N, where N satisfies the capacity and performance requirements of all workloads deployed within the cluster. Generally speaking, more worker nodes means more capacity, and hence more workloads that you can deploy into the cluster. Worker nodes can be scaled up and down as capacity requirements change. When deploying OpenShift 4, the master nodes run on Red Hat Enterprise Linux CoreOS (RHCOS), while worker nodes can run on either RHCOS or Red Hat Enterprise Linux. Again, OpenShift can be deployed to physical on-prem infrastructure, to virtual infrastructure such as VMware vSphere, and/or to private and public clouds. OpenShift provides you with various ways to administer and maintain it.
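On an installer-provisioned OpenShift 4 cluster, such as the AWS deployment used later in this course, worker scaling is typically done through MachineSets. A hedged sketch follows: the MachineSet name is a placeholder that differs per cluster, and these commands assume a live cluster and sufficient (cluster-admin) rights:

```shell
# List the MachineSets that back the worker nodes (names are cluster-specific)
oc get machinesets -n openshift-machine-api

# Scale a chosen MachineSet up to three workers
oc scale machineset <machineset-name> --replicas=3 -n openshift-machine-api

# Watch the new worker nodes register and join the cluster
oc get nodes -w
```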

There is a web-based admin console and also a command-line tool in the form of the oc utility. The web-based admin console is feature-rich and provides you with all of the dials required to maintain the cluster's health. For those already familiar with Kubernetes, the oc command-line tool is OpenShift's equivalent of Kubernetes' kubectl utility. Using the oc command, you can perform all the same cluster resource management operations: you can use it to create, update, and delete cluster resources such as pods, replica sets, deployments, services, et cetera, and you can also use it to print out the status of existing cluster resources.
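To make the kubectl comparison concrete, here is a hedged sketch of a typical oc session. The server URL, project name, and resource names are placeholders, and the commands assume a running cluster you can authenticate against:

```shell
# Authenticate against the cluster's API endpoint (URL is a placeholder)
oc login https://api.mycluster.example.com:6443 -u developer

# Create a project (OpenShift's wrapper around a Kubernetes namespace)
oc new-project demo

# Inspect cluster resources, just as you would with kubectl
oc get pods
oc get deployments
oc describe pod <pod-name>

# Clean up a resource
oc delete pod <pod-name>
```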

In terms of DevOps capabilities, OpenShift provides some very useful internal CI/CD tooling which makes building and packaging new container images a breeze. Alongside OpenShift-specific cluster resources such as build configs and builds, which encapsulate build configurations and actual builds respectively, OpenShift also provides several build strategies, which include the following. The Docker build strategy invokes the docker build command over your source repository, which must contain a Dockerfile located in the root of your project. The custom build strategy leverages a customized Docker image which contains all of the build tools and logic required to perform a build.

The pipeline build strategy is based on Jenkins and allows developers to define a CI/CD pipeline; it defaults to using a Jenkinsfile located in the root of your source repository. And the source-to-image build strategy is an OpenShift-specific build technology and workflow centered around containers. Source-to-image, or S2I in abbreviated form, enables you to create custom builder images into which you inject your source code at build time. The builder image then performs the required build logic on the source code, assembling it into runtime artifacts, and then snapshots a final output container image. You then launch pods into the OpenShift cluster that consist of container instances of this final output container image. Source-to-image aids the development experience when it comes to creating and building container images.
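As an illustrative sketch of the source-to-image workflow: the repository URL and application name below are placeholders, and the commands assume a running cluster you are logged in to:

```shell
# S2I: combine the nodejs builder image with your application source
# to produce a runnable container image (repository URL is a placeholder)
oc new-app nodejs~https://github.com/<your-org>/<your-app>.git --name=my-app

# Follow the build driven by the generated build config
oc logs -f bc/my-app

# Expose the resulting service outside the cluster via a route
oc expose svc/my-app
```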

An internal container registry is also provided within OpenShift, again, very useful for managing and organizing container images in large enterprise environments.

Okay, that completes this introductory lecture on OpenShift. You should now have enough knowledge to understand its main features and capabilities. The best way to move forward and learn how to use OpenShift is to see it firsthand in action. The next lecture will introduce you to a sample cloud native application that we will then deploy into a newly provisioned OpenShift cluster, all of which we will demonstrate for you. Go ahead and close this lecture, and I'll see you shortly in the next one.

About the Author

Students: 31,388
Labs: 32
Courses: 93
Learning paths: 22

Jeremy is the DevOps Content Lead at Cloud Academy where he specializes in developing technical training documentation for DevOps.

He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 20+ years. In recent times, Jeremy has been focused on DevOps, Cloud, Security, and Machine Learning.

Jeremy holds professional certifications for both the AWS and GCP cloud platforms.