
Overview of Kubernetes and OpenShift

Developed with
Red Hat
Overview
Difficulty: Beginner
Duration: 1h 40m
Students: 269
Rating: 4.6/5

Description

Containers - why all the hype? If you're looking for answers then this course is for you!

"Deploying Containerized Applications Technical Overview" provides an in-depth review of containers and why they've become so popular. Complementing the technical discussions, several hands-on demonstrations are provided, enabling you to learn about the concepts of containerization by seeing them in action.

You'll also learn about containerizing applications and services, testing them using Docker, and deploying them into a Kubernetes cluster using Red Hat OpenShift.

Transcript

So now that we've talked a little bit about containers, I want to talk about what Kubernetes and OpenShift bring to the table. Why do we need Kubernetes and OpenShift? I've been talking about how great containers are, but of course there are some limitations.

Containers do provide an easy way for us to package and run our applications, but you don't want your application running in just one single container. Really, the best way to leverage containers is to run many copies of them, because that makes your system more reliable: if one container were to die, you can replace it with another. Now, you could do all of this manually, but if you think about it for more than a few seconds you'll realize that's impractical. You can't manually update all of your IP addresses and bring containers up and down by hand; it would consume all of your time.

So this is a problem with containers: we need a way to orchestrate them. Here are some needs that enterprises have when they start thinking about taking containers to the next level, to an enterprise-ready level for deploying and scaling applications. Enterprises need easy communication between a large number of services. Containers are never as simple as one standalone container; they always need to be able to talk to each other, so they need an easy way to communicate.

We need resource limits on applications, regardless of the number of containers running them. Again, one of the benefits of containers is that we can set these resource limits and enforce them in an isolated way, so that a failure in one container isn't going to affect the other containers; we can maintain a set amount of resources for each one. We also want to respond to application usage spikes by increasing or decreasing the number of running containers.
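To make those per-container limits concrete, here's a minimal sketch of how they're declared in a Kubernetes Pod spec; the pod name, image, and values are illustrative placeholders, not something from this course:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: limits-demo                # hypothetical pod name
spec:
  containers:
  - name: app
    image: registry.example.com/myapp:1.0   # placeholder image
    resources:
      requests:                    # what the scheduler reserves for this container
        cpu: "250m"
        memory: "128Mi"
      limits:                      # hard ceiling enforced at runtime
        cpu: "500m"
        memory: "256Mi"
```

Because the limit is enforced per container, one runaway container hits its own ceiling instead of starving its neighbors.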

There's no point in running a thousand containers of an application at, say, 4:00 in the morning if it's just not going to be used right then. So it would be great if we had some kind of platform that would scale up or scale down based on the usage of our application. We also need something that reacts to service deterioration using health checks: we want our platform to intelligently know whether there's an issue with our service and react to it, rather than just failing for our end users.

We also want to gradually roll out a new release to a set of users. We don't want our users to experience downtime when we're releasing version 2 of our application. We want to be intelligent about it: maybe we release it to just a few users first, or use one of the other deployment strategies we'll talk about a little more later. We want to control how that release is going, and we also want to make sure we're minimizing downtime.
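As a sketch of one such strategy, a Kubernetes Deployment can replace pods gradually during a release; the names and image tag below are hypothetical placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp                  # placeholder name
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1        # at most one old pod goes down at a time
      maxSurge: 1              # at most one extra new pod runs during the rollout
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: app
        image: registry.example.com/myapp:2.0   # the new release being rolled out
```

With these settings, version 2 replaces version 1 one pod at a time, so some capacity always stays up.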

So, Kubernetes. Kubernetes is a term I'm sure you've heard a lot, and maybe you have some experience with it. The basic premise of Kubernetes is that it orchestrates and schedules containers for high availability. Its responsibility, if a container fails, is to automatically create another container. Now, there are three basic principles and challenges to maintaining a container cluster architecture. I've created a drawing here, and I'm going to talk through each of these three things.

First, orchestration. Let's say this blue container I've drawn here is trying to talk to this green container over here, and it's also talking to this red container. Everything is fine until something happens. So let's say this green container dies, and let's say this red container dies too. What we need is a platform that is able to say, okay, that's fine, that's not a problem: we have other green and other red containers running, and it would just redirect this arrow over to this other green one, and redirect us over to this other red one. That's the ideal scenario.

We're able to orchestrate our architecture around the assumption that things are going to fail. Too often we try to build systems where the last thing in the world we want is for the system to fail at any point. Well, the reality is that things break; it happens. So why don't we design a system that is meant to fail, so that we are prepared for it when it does? That's what Kubernetes is trying to achieve. Next, scheduling.

I already talked a little bit about what happens when a container dies: we want it to be orchestrated. Scheduling is responsible for scaling the number of containers we have up and down. Like we talked about earlier, if you have a huge surge in traffic and all of a sudden this green application starts getting a lot of requests, we want a platform that's able to start scaling up immediately, as soon as it can. And similarly, if the red application isn't getting much traffic, we want to start scaling it down, because we don't need as much.
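That usage-driven scaling can be expressed declaratively with a HorizontalPodAutoscaler; this is a minimal sketch, and the names and thresholds are assumptions for illustration:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: green-app-hpa          # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: green-app            # the deployment being scaled (placeholder)
  minReplicas: 2               # never scale below two pods
  maxReplicas: 10              # cap the surge response
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80   # add pods when average CPU passes 80%
```

The platform then adds pods during a traffic surge and removes them, down to the floor, when demand drops off at 4:00 in the morning.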

Finally, isolation. We want to make sure that if the green application starts having issues and breaking, it's not going to affect any of our other applications. The blue application, the other green container, the red container, all of those are still going to maintain their isolation, and the system can continue to function as it is intended. So here are some features of Kubernetes, which I've already touched on, that try to achieve those three goals.

There's service discovery and load balancing. There's a concept called Services in Kubernetes, which is a feature that builds on top of containers. What it does is make sure we're able to direct traffic to a specific IP address that points to our applications. Flipping back over to the diagram, our green, red, and blue applications are each going to have a service in front of them that says: every time you hit this IP address, I'm going to direct my traffic to the blue application. Another service is going to direct traffic to the green application, and so on and so forth, and that will automatically load balance as well.
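A Service for, say, the green application might look like this minimal sketch; the name, label, and ports are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: green-service          # hypothetical service name
spec:
  selector:
    app: green                 # traffic goes to any pod carrying this label
  ports:
  - port: 80                   # the service's stable port
    targetPort: 8080           # the port the containers actually listen on
```

The service gets a stable cluster IP and DNS name, so callers keep hitting the same address no matter which green pods come and go behind it.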

There's horizontal scaling: we're able to scale the number of containers up or down. And there's self-healing with user-defined health checks: we can specify health checks in our application that say, more intelligently, whether, for example, a REST service is actually ready to handle requests.

It's important for Kubernetes to know exactly when the system is okay or not; sometimes our applications deploy fine, but that doesn't mean they're necessarily working right. With custom health checks, we can say: okay, now this application is actually ready to handle requests. There's also automated rollout and rollback: we can configure how our applications roll out, and if there's an issue, we can define how we want them to roll back.
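In a container spec, those checks are expressed as liveness and readiness probes. This fragment is a sketch; the endpoints and timings are hypothetical:

```yaml
    livenessProbe:             # restart the container if this starts failing
      httpGet:
        path: /healthz         # hypothetical health endpoint
        port: 8080
      initialDelaySeconds: 10  # give the app time to start before probing
      periodSeconds: 15
    readinessProbe:            # only route traffic once this succeeds
      httpGet:
        path: /ready           # hypothetical readiness endpoint
        port: 8080
      periodSeconds: 5
```

The distinction matters: a failed liveness probe restarts the container, while a failed readiness probe just takes it out of the service's load-balancing pool until it recovers.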

We can have centralized configuration management using Secrets and ConfigMaps, the two resource types for this. Using them, we're able to centralize all of our configuration, which makes the system a lot more portable and a lot more reliable. And finally, we have Operators, which help us automate a lot of what happens inside our cluster. So that's a lot of great features that Kubernetes provides, and you're probably wondering how OpenShift fits into this big picture.
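As a minimal sketch of those two resource types (names and values here are illustrative, and the secret value is obviously a placeholder):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp-config           # hypothetical name
data:
  LOG_LEVEL: "info"            # non-sensitive settings live here
---
apiVersion: v1
kind: Secret
metadata:
  name: myapp-secret           # hypothetical name
type: Opaque
stringData:
  DB_PASSWORD: "changeme"      # illustrative only; never commit real secrets
```

A container can then pull both in as environment variables with `envFrom`, so the same image runs unchanged across environments and only the ConfigMap and Secret differ.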

So OpenShift is a set of modular components and services built on top of a Kubernetes container infrastructure. If you think about it, OpenShift just sits on top of Kubernetes. It provides a lot of additional features and really makes it an enterprise-ready platform. On top of Kubernetes, we're adding multi-tenancy. We're creating a user interface that allows developers and admins to access and monitor their applications. With that, we get increased security, we have auditing, and we have an integrated developer workflow. So if you're a developer, you can access the user interface and use the developer catalog to help kick-start some of your application deployments.

Now we have a feature called Routes, which builds on top of Services. Routes essentially take services and expose them to the outside Internet. Built into the user interface is a lot of great metrics and logging, and all of that is wrapped under a unified UI, which has been recently updated for OpenShift 4 and is looking better than ever and even more user friendly.
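A Route pointing at a service might look like this sketch; the route name, hostname, and service name are hypothetical:

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: myapp-route            # hypothetical name
spec:
  host: myapp.apps.example.com # hypothetical external hostname
  to:
    kind: Service
    name: green-service        # the service being exposed (placeholder)
  tls:
    termination: edge          # terminate TLS at the OpenShift router
```

Where a Service gives you a stable address inside the cluster, the Route is what turns it into a public URL.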

So that concludes this video. Join us in the next video, and we'll start getting hands-on with containers.

About the Author
Students: 38,367
Labs: 34
Courses: 93
Learning paths: 24

Jeremy is the DevOps Content Lead at Cloud Academy where he specializes in developing technical training documentation for DevOps.

He has a strong background in software engineering and has been coding with various languages, frameworks, and systems for the past 20+ years. In recent times, Jeremy has been focused on DevOps, Cloud, Security, and Machine Learning.

Jeremy holds professional certifications for both the AWS and GCP cloud platforms.