Deploying Containerized Applications with Red Hat OpenShift

Creating Applications with the Source-to-Image Facility

Developed with
Red Hat
Overview
Difficulty: Beginner
Duration: 1h 40m
Students: 279
Rating: 4.6/5

Description

Containers - why all the hype? If you're looking for answers then this course is for you!

"Deploying Containerized Applications Technical Overview" provides an in-depth review of containers and why they've become so popular. Complementing the technical discussions, several hands-on demonstrations are provided, enabling you to learn about the concepts of containerization by seeing them in action.

You'll also learn about containerizing applications and services, testing them using Docker, and deploying them into a Kubernetes cluster using Red Hat OpenShift.

Transcript

In this next video we're going to take a closer look at the Source-to-Image feature that you've already seen in use. It's an OpenShift feature that lets us take our application source code and automatically create an image, and then deploy pods, based on just that source code.

Okay, so the first thing that happens is that you run the oc new-app command. When you do that, it creates a Build Configuration. Recall that when we run oc new-app, one of the things we sometimes specify is the builder image or the image stream we want to use, such as PHP or MySQL, and we also provide a URL to the Git repository. It doesn't have to be a remote repository; it can also be a local one.
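As a sketch of what that generates: running oc new-app against a builder image stream and a Git URL produces a BuildConfig roughly like the one below. The exact names, namespace, and tag are assumptions for illustration, not read from the demo environment.

```yaml
# Illustrative BuildConfig of the kind `oc new-app` generates for an
# S2I build. Names, namespace, and tag versions are assumptions.
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: php-helloworld
spec:
  source:
    type: Git
    git:
      uri: http://services.lab.example.com/php-helloworld
  strategy:
    type: Source
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: php:latest          # the builder image stream tag
        namespace: openshift
  output:
    to:
      kind: ImageStreamTag
      name: php-helloworld:latest # where the built image is pushed
  triggers:
    - type: ImageChange           # rebuild when the builder image updates
    - type: ConfigChange
```

The ImageChange trigger is what lets an upstream change to the builder image kick off a fresh build, as described later in the transcript.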

What that Build Config does is create a Builder Pod: the resource definition is created, a Builder Pod spins up, and that Builder Pod clones the source code from the Git server.

So it clones the source code, creates an image based on that source code, and pushes it into the internal registry (or an external registry). It can also pull from the local cache any images it needs in order to build our image. As for the S2I image stream: if any changes happen to the upstream images, for example a security fix to the PHP base image our PHP application is using, the Build Config is notified of that change, and the whole build process can be restarted to make sure our application is the most up to date.

The other thing that's happening is that a Deployment Configuration is being created. I'll talk a little more about the distinction between the Build Configuration and the Deployment Configuration in a second, but the thing to know here is that the build config is responsible for creating the image, whereas the deployment configuration is responsible for taking that image and creating pods based on it. So what are the benefits of Source-to-Image?

First of all, it's very efficient. Developers don't necessarily need to understand how to create Dockerfiles, or even all of the basic concepts behind containers; they can run a simple oc new-app command and have their applications deployed onto OpenShift. It's very simple and very intuitive.

Another benefit is patching. As I mentioned, Source-to-Image makes it easier to update your application. If there's an issue with, say, the PHP base image and a security patch is needed, that update happens upstream in the image stream, gets propagated down to the base image, and the whole process starts over again. Another big benefit is speed: S2I is far quicker than creating all of these resources by hand, so it not only makes for a more efficient build, it also gets you up and running a lot faster.

Ecosystem: S2I encourages you to share your images with other developers, to work on them together, and to improve them. It's another way of making this about more than just containers; it's really a culture change, a change in how you share information with other developers and with other parts of your organization.

So an image stream is a resource that defines an alias for a set of related images. Typically, a PHP image stream would contain all of the versions of PHP; if you look at the bottom of this screen, you can see a PHP image stream containing all of these different versions. The nice thing about this is that OpenShift can be configured to watch these image streams, so that if one of the versions changes, our application and the platform itself can respond to that change, and we can decide whether or not we want automatically updating applications based on changes to one of the images in the image stream. As we've seen before, to kick off a source-to-image build we just run the oc new-app command.
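As an illustration of that alias idea, a trimmed-down PHP image stream might look like the following. The tag versions and registry path are assumptions for the sketch, not read from the screen.

```yaml
# Illustrative ImageStream aliasing a set of related PHP images.
# Versions and the registry hostname are hypothetical.
apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
  name: php
  namespace: openshift
spec:
  tags:
    - name: "7.3"
      from:
        kind: DockerImage
        name: registry.example.com/rhscl/php-73-rhel7:latest
      importPolicy:
        scheduled: true   # re-import periodically so upstream updates are noticed
    - name: latest
      from:
        kind: ImageStreamTag
        name: "7.3"       # "latest" is just an alias for a concrete version tag
```

With scheduled import enabled, a security fix pushed to the upstream image is picked up by the image stream, which is what allows dependent builds and deployments to react.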

We can specify our image stream, in this case PHP, and then pass in the URL for our Git repository. Providing an image stream is actually optional, because OpenShift is able to determine the language of the application simply from the Git repository. For example, if I had passed in a Java application, the build config would clone that Git repository, see the presence of a pom.xml file, know that this is a Java app, and base the application image on a Java EE image.
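The real detection logic is built into oc new-app itself; purely to illustrate the idea, here is a toy sketch in Python that maps marker files found in a cloned repository to a builder image. The file-to-builder table is an assumption for the sketch, not OpenShift's actual rule set.

```python
import os

# Toy mapping of marker files to builder images, illustrating how a tool
# like `oc new-app` can guess a language from repository contents.
# The entries here are illustrative, not OpenShift's exhaustive list.
MARKER_FILES = {
    "pom.xml": "java",            # Maven project -> Java builder image
    "requirements.txt": "python",
    "composer.json": "php",
    "package.json": "nodejs",
    "Gemfile": "ruby",
}

def detect_builder(repo_dir):
    """Return the builder name for the first marker file found, else None."""
    for marker, builder in MARKER_FILES.items():
        if os.path.exists(os.path.join(repo_dir, marker)):
            return builder
    return None  # no recognized marker; the user must specify a builder
```

So a repository containing pom.xml would be routed to a Java builder, and one containing requirements.txt to a Python builder, mirroring the behavior described above.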

The same goes for a Python application, which would use requirements.txt, and for a number of other languages. Okay, to talk a little more about the build config and the deployment config: as I mentioned, the build config is responsible for cloning the source and building the image, whereas the deployment config is responsible for scheduling and deploying pods into OpenShift. One thing to note about this interaction is that the two aren't directly tied together; nothing in the BuildConfig says "I know exactly what this DeploymentConfig is, I'm going to go call it directly."

It's more of an indirect interaction: the build config builds the image and pushes it up to the registry, and the deployment config is just waiting for a new image to appear. So if there's a change in the image, the deployment config knows to react to it automatically. Say I did a source-to-image build and I have my application up and running, but there was a bug in my code and I want to release a new version. I make my changes, push them to my Git repository, and start a new build. The build config builds a new image for me, the deployment config automatically responds to that new image, and it redeploys my application for me.
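That indirect coupling is expressed through triggers: the deployment config declares an ImageChange trigger on the image stream tag that the build writes to. A sketch of what that looks like, with illustrative names:

```yaml
# Illustrative DeploymentConfig that redeploys when a new image lands
# in the php-helloworld image stream. Names are assumptions.
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: php-helloworld
spec:
  replicas: 1
  selector:
    app: php-helloworld
  template:
    metadata:
      labels:
        app: php-helloworld
    spec:
      containers:
        - name: php-helloworld
          image: php-helloworld:latest  # resolved to the real image by the trigger
  triggers:
    - type: ConfigChange
    - type: ImageChange
      imageChangeParams:
        automatic: true                 # redeploy whenever a new image is pushed
        containerNames:
          - php-helloworld
        from:
          kind: ImageStreamTag
          name: php-helloworld:latest
```

Neither resource names the other directly; the image stream tag is the only shared reference, which is what makes the interaction indirect.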

So let's do a demonstration where we do exactly that. Okay, I've already created a new project here in my OpenShift cluster and named it s2i. I'm going to run the oc new-app command, pass in the php image stream, give it the name php-helloworld, and point it at my source code at services.lab.example.com/php-helloworld. Now I'm going to run oc get pods and we'll watch these pods come up. The first pod we'll see is our builder pod; once that's finished we'll see the deploy pod, and once that is finished we'll have our final pod available for our application.

Okay, it looks like our pod has finished running, so let's expose the service so that we can access the application. We run oc get service to find the service name, then oc expose svc/php-helloworld. Now if I run oc get route, I'll see the route php-helloworld-s2i.apps.cluster.lab.example.com. So if I run curl php-helloworld-s2i.apps.cluster.lab.example.com, I get my Hello World message, so our app is working as expected. Okay, let's say now that we want to make a code change to our application; I'll step you through the process of getting that change updated in our OpenShift cluster. So let's go ahead and download the source code, and then let's edit index.php; we're just going to make a very simple update.

We'll add a new line that says "a change is going to come". There we go. In order to get this change to appear in our app, we need to add and commit it to our Git repository, push it up to the repo, and then kick off a new build with oc start-build php-helloworld, and we should see the application updated. Okay, the new build has started, so let's do an oc get pods and watch the pods come back up. You can see here we've got a second build pod, php-helloworld-2-build; that's the second build, creating another image on top of the existing one.

Okay, it looks like our pod has finished running, so let's do another oc get pods just to check. Yes, we've got php-helloworld-2 up and running; this is the revised application. We don't have to worry about creating the route again, because we still have the route from the last time we created it; it doesn't go away even when we do a new build.

So I'm just going to run that curl command again, and we can see that we get the Hello World message along with our new message, "a change is going to come".

Okay, that concludes this video and this exercise. Join us in the next video, where we'll review the web console.

About the Author

Jeremy is the DevOps Content Lead at Cloud Academy where he specializes in developing technical training documentation for DevOps.

He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 20+ years. In recent times, Jeremy has been focused on DevOps, Cloud, Security, and Machine Learning.

Jeremy holds professional certifications for both the AWS and GCP cloud platforms.