
Deploying Containerized Applications with Red Hat OpenShift

Developed with
Red Hat
Overview of Container Technology
Overview
Difficulty: Beginner
Duration: 1h 40m
Students: 224
Rating: 4.6/5

Description

Containers - why all the hype? If you're looking for answers then this course is for you!

"Deploying Containerized Applications Technical Overview" provides an in-depth review of containers and why they've become so popular. Complementing the technical discussions, several hands-on demonstrations are provided, enabling you to learn about the concepts of containerization by seeing them in action.

You'll also learn about containerizing applications and services, testing them using Docker, and deploying them into a Kubernetes cluster using Red Hat OpenShift.

Transcript

Hi everyone, and welcome to this Technical Overview titled Deploying Containerized Applications, DO080. My name is Zach Gutterman, and I am a curriculum architect for Red Hat Training. In this Technical Overview I will be introducing you to the what, how, and why of containers. I'll step you through how to create your own containers and how to create container images, and then later on the course will start talking about OpenShift and Kubernetes and how we can orchestrate and scale all of our applications. So let's jump right into it, and I'll give you a quick Overview of Container Technology.

So before we dig into containers, let's talk a little bit about where containers came from. What were we doing five or ten years ago (or maybe what some of you are still doing now)? How were we deploying our applications? Typically, applications are deployed on top of hardware and an operating system, usually using some sort of virtual machine. Actually, I want to go ahead and draw this for us. So let's say we've got our hardware, here at the bottom.

Okay, this is our hardware, and on top of our hardware we've got our operating system; this is what our applications are going to be running on top of. On top of our operating system we've got all of these libraries, and these are the libraries that are supporting our applications. Finally, all the way at the top, we have our applications. Okay, so let's say that this application right here is a Python application and this one's a Java application. Each of these apps is using these libraries: the Python one is using this one, the Java one is using this one, and maybe they even share some of the same libraries. Okay, so what are some of the struggles with this type of architecture?

One of the big problems is that this Python application and this Java application are closely coupled to the operating system, so if anything changes down at this operating system layer, it's going to affect those applications. Not only that, another issue is changes to these libraries. Let's say there is an update to this library right here; remember, this is a library that's shared by both of our applications. If there's any change to one of those libraries, we have to go through and test all of our applications again.

Now, in this example we only have two applications, but you can imagine that if we had five, six, or ten applications deployed here, any small change that we make to the operating system is going to require us to go through and test every single application. It's just not a very viable strategy for scaling out, and it's not a good strategy for organizations that really want to grow.

So what did this lead to? It led to users needing a fast and lightweight way to spin up virtual machines. Virtual machines are great because they're really portable, but they're still really heavy: they take a long time to start up, they include a lot of things that we don't necessarily need, and they still have this problem where our applications are too closely coupled to the operating system.

So it would be great if we had a way to isolate these workloads in a better way, and if they could be more lightweight so that they could start up a lot faster. And so containers were born. So what is a container? You've probably heard the term so many times, but maybe you haven't actually seen a definition of what a container is. Containers are a set of one or more processes that are isolated from the rest of the system. Now, I realize that's kind of an abstract definition, but what it really means is that a container has a lot of the same benefits as a virtual machine, and it allows you to deploy your applications in a very isolated way.

So when I say process, that process can be starting an application server or starting a MySQL server, so when you think of process you can really think of your applications. Again, containers have a lot of the same benefits as a virtual machine, but they also have a few benefits on top of a virtual machine. One of those is having a low hardware footprint.

Because they use Linux kernel features such as cgroups and namespaces, we're able to create a very isolated environment, and that's going to help us minimize how much CPU and memory is being used on the host machine. Containers are also very quick to deploy and very reusable: because they're based on a blueprint known as a container image, we can easily reuse them as many times as we want. We can share them, there are public registries we can upload them to, and we can go and find existing container images and make as many containers as we want from them.
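As a small aside, you can see the namespace mechanism mentioned above directly from a Linux shell, without any container runtime at all. This is a minimal sketch, assuming a Linux host with the util-linux `unshare` tool and a kernel that permits unprivileged user namespaces; the hostname `container-demo` is purely illustrative:

```shell
# Create a new user namespace (so no root privileges are needed) together
# with a new UTS namespace, then change the hostname *inside* it only.
unshare --user --map-root-user --uts sh -c '
  hostname container-demo   # affects only this namespace
  hostname                  # prints the namespaced hostname
'
# The host's real hostname is untouched. This per-process isolation is
# the same kernel mechanism that container runtimes build on.
```

Container engines combine several such namespaces (PID, network, mount, UTS, user) with cgroups for CPU and memory limits.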

Deployment times go from minutes, or tens of minutes, down to just a few seconds once you have those images available on your machine. Environment isolation is another important thing: containers work in an entirely closed environment, so they can run anywhere. They can run on your Mac; it really doesn't matter what the underlying host is, because a container is a self-contained environment that carries its own operating system userland. Multiple environment deployments build on top of this environment isolation: containers are highly portable, and you can move them anywhere.

You don't have to worry about whether something that works in your development environment is also going to work in your staging environment, or in production. You know that it's reliable, because you're basing all of your containers on the same image, so you know that the environment is going to be to your specification.

Okay, so you saw my drawing earlier of a traditional OS; here on the left is the traditional deployment, and on the right is what a container architecture would look like. Remember, we've got these two applications on the left, and these applications are using a traditional architecture where they are borrowing their libraries from a host operating system. These two applications are not truly isolated from each other, and that is because of how closely coupled they are to each other, inherently, by using the same libraries on a host operating system.

Again, if anything were to change with one of these libraries, we'd have no idea what the effects could be on App A or App B. Now contrast that with a container architecture. A container architecture says: let's change this whole paradigm. We're going to bundle all of the libraries that App A needs and put them into their own isolated environment, and then we're going to do the exact same thing for App B.

Now, maybe library A and library B overlap, and maybe we're doing a little bit of duplication, but what we're getting in return is complete isolation. If we have to update one of App A's libraries, we never have to worry about how that's going to affect App B, or App C, or App D, so it's really going to cut down on our need to test. On top of this, we don't need to install an entire host operating system for each application; we're only using the containers' own operating systems.

We'll talk a lot more about what kinds of base images, as they're called, or base operating systems we use, but essentially these operating systems are a lot more stripped-down than typical VM operating systems, which are fully fledged RHEL installations and the like. We're using a stripped-down version, which is going to make things a lot more efficient. All right, that concludes our Overview of Container Technology. Join us in the next video, and we'll start talking about Container Architecture.
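To make the base-image idea concrete, here is a hedged sketch of a Containerfile (not from the course) that builds a small service on Red Hat's stripped-down UBI minimal base image rather than a full RHEL install; the `app.py` file and package choice are illustrative:

```dockerfile
# Containerfile: build on the stripped-down UBI 9 minimal base image
FROM registry.access.redhat.com/ubi9/ubi-minimal

# ubi-minimal ships microdnf instead of the full dnf/yum stack
RUN microdnf install -y python3 && microdnf clean all

# Copy in our (illustrative) application and declare how to run it
COPY app.py /opt/app/app.py
CMD ["python3", "/opt/app/app.py"]
```

Built with a tool like `podman build`, this produces a reusable image: every container started from it gets the same minimal userland and libraries, independent of what is installed on the host.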

About the Author

Students: 27,882
Labs: 32
Courses: 93
Learning paths: 22

Jeremy is the DevOps Content Lead at Cloud Academy where he specializes in developing technical training documentation for DevOps.

He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 20+ years. In recent times, Jeremy has been focused on DevOps, Cloud, Security, and Machine Learning.

Jeremy holds professional certifications for both the AWS and GCP cloud platforms.