Creating Basic Kubernetes and OpenShift Resources
Containers - why all the hype? If you're looking for answers, this course is for you!
"Deploying Containerized Applications Technical Overview" provides an in-depth review of containers and why they've become so popular. Complementing the technical discussions, several hands-on demonstrations are provided, enabling you to learn about the concepts of containerization by seeing them in action.
You'll also learn about containerizing applications and services, testing them using Docker, and deploying them into a Kubernetes cluster using Red Hat OpenShift.
In this video I'm going to step you through a few basic Kubernetes and OpenShift resources. We'll take a step back and look a little more at the architecture of Kubernetes and OpenShift, and then I'll do a quick demonstration showing how to deploy a MySQL service within OpenShift. So we've talked a lot about containers; when we're working with just podman, the container is the smallest manageable unit, but Kubernetes takes a level above that and uses what is called a pod.
So a pod consists of one or more containers that share the pod's storage resources and IP address. It's not uncommon for a pod to have only one container, and in fact that's what we do in all of today's demonstrations, but it's often useful to have multiple containers within a single pod. A couple of terms to define here: a master node is a node that provides the basic cluster functions, so that's where all of our APIs and controllers live.
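As a sketch, a minimal pod manifest with a single container might look like this; the name, label, and image here are illustrative, not from the course:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod            # illustrative pod name
  labels:
    app: hello               # label that services and controllers use to find this pod
spec:
  containers:                # one entry here, but a pod may list several containers
  - name: hello
    image: quay.io/example/hello:latest   # illustrative image reference
    ports:
    - containerPort: 8080
```

All containers listed under `containers` share the pod's single IP address and can mount the same volumes, which is exactly the shared-resources behavior described above.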
A worker node is where the actual pods are deployed: the pods holding all of our containers are scheduled onto worker nodes by the master node. A controller is a Kubernetes process that is responsible for watching certain resources and reacting to any changes in them. For example, a replication controller, which I'll talk about in a second, is responsible for maintaining the exact number of pods that we expect in our architecture. I've mentioned services before: what a service does is provide a single, persistent IP and port combination for accessing your pods.
Any time a container or a pod dies, its replacement gets a new IP address. A service exists outside of the pod and always directs and load balances traffic to the pods behind the same stable IP address, so when we're developing our applications we know which address to expect.
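As a hedged sketch, a service providing that stable address for pods carrying an illustrative `app: hello` label could be declared like this:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-svc            # illustrative service name
spec:
  selector:
    app: hello               # traffic is load balanced across pods with this label
  ports:
  - port: 8080               # stable port on the service's cluster IP
    targetPort: 8080         # port the container actually listens on
```

The service's cluster IP stays fixed even as the pods behind the selector are destroyed and replaced.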
As I mentioned, a replication controller is a type of controller responsible for ensuring that we have the expected number of pods in our application, which helps us scale horizontally. Persistent Volumes and Persistent Volume Claims are how we maintain data despite the ephemeral nature of containers and pods. Containers and pods are supposed to be able to start up and disappear, and we shouldn't ever have to worry about whatever was inside them, because it's just going to go away.
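A minimal replication controller sketch (names and image are illustrative); the `replicas` field is the count the controller continuously enforces:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: hello-rc             # illustrative controller name
spec:
  replicas: 3                # the controller keeps exactly three pods running
  selector:
    app: hello
  template:                  # pod template used to create replacement pods
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: quay.io/example/hello:latest   # illustrative image
```

If a pod matching the selector dies, the controller creates a new one from the template; scaling horizontally is just a matter of changing `replicas`.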
Again, we're designing for failure here, but in a lot of instances, such as running a database, we do want to persist data, and we want to make sure that data does not go away when the pod goes away. So we have the concept of a persistent volume. This is usually a path on the host, or on another machine, that we dedicate to persisting data. An application then uses what is called a Persistent Volume Claim, attached to its pod, to ask Kubernetes or OpenShift for storage resources for its data.
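As a sketch, such a claim might look like this (the name and size are illustrative); the pod then mounts the claim as a volume:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-data           # illustrative claim name
spec:
  accessModes:
  - ReadWriteOnce            # one node may mount the volume read-write
  resources:
    requests:
      storage: 1Gi           # illustrative size request
```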
OpenShift will go and look for any Persistent Volume that matches the request within the claim and then bind the claim to that volume. Finally we have ConfigMaps and Secrets, which give us a way to centralize our configuration. ConfigMaps are for general configuration, say specific settings that we need for development versus staging or production. Secrets are just like ConfigMaps except they are meant for more sensitive information such as passwords and credentials (note that by default Secrets are base64-encoded rather than encrypted). Those are all Kubernetes resources; on top of them, OpenShift provides a few additional resources that are worth mentioning. The first is the DeploymentConfig. This is the configuration responsible for managing how your application is deployed and scheduled onto pods, and it contains the deployment strategy to be used, such as a rolling update.
It also manages the replication controller, and so it decides how the application is scaled. We also have the BuildConfig. The BuildConfig is used by the OpenShift feature called Source-to-Image: we provide OpenShift with a link to a Git repository, OpenShift clones the repository, and then it builds an image for us based on that source code. The BuildConfig is responsible for all of that.
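Hedged sketches of these two OpenShift-specific resources, with illustrative names, repository URL, and builder image (field layout per the OpenShift 3.x APIs):

```yaml
apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: hello
spec:
  replicas: 2                # scaling is driven from here
  strategy:
    type: Rolling            # the deployment strategy, e.g. a rolling update
  selector:
    app: hello
  template:                  # pod template, as in a replication controller
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: quay.io/example/hello:latest   # illustrative image
---
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: hello
spec:
  source:
    git:
      uri: https://github.com/example/hello   # illustrative Git repository
  strategy:
    sourceStrategy:                            # Source-to-Image build
      from:
        kind: ImageStreamTag
        name: ruby:latest                      # illustrative builder image
  output:
    to:
      kind: ImageStreamTag
      name: hello:latest                       # resulting application image
```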
Finally we have routes. Routes build on top of services: essentially, they take a Kubernetes service and expose it to the outside world. A service is just an IP address, but a route is a DNS hostname; we'll have a whole section on that in the next video. Now, here's a high-level look at the architecture of OpenShift. As I mentioned, we have a master node and then a few nodes, also known as worker nodes.
The master node runs the scheduler, which schedules pods onto our worker nodes, and we can have as many worker nodes as we need; in my classroom environment here I have two. There's also an infra node, or infrastructure node, which hosts things like the pod responsible for provisioning routes for your applications. OpenShift dedicates some space to infrastructure pods for anything it needs to maintain system integrity.
The final thing I want to mention about this diagram is our persistent storage. The storage can live entirely outside of OpenShift if we want it to, and that's probably the safest way to handle persistent storage. Okay, you may have heard that we recently released OpenShift 4. OpenShift 4 is a major update, especially under the covers in terms of the changes that have been made.
One of the primary changes is that the underlying operating system on all of our nodes is no longer RHEL but is now based on CoreOS. This is a big change because CoreOS is specifically designed to run containerized applications, so it's highly efficient, and it makes installation much, much simpler. Installation used to be a very arduous task; now you can set up your OpenShift cluster with essentially a single command.
Similarly, updates are really simple and can now be done automatically. I also mentioned that Operators are new in OpenShift 4. That's a Kubernetes feature we've brought over, and it's a convenient way for you to deploy your applications and for your applications to leverage the container platform. There's also an entirely refreshed and updated user interface. Similar to how we use podman to interact with all of our containers, OpenShift has its own utility: oc. This is the main way that we're going to interact with our OpenShift cluster.
The first thing you need to do, before any other OpenShift commands, is run oc login. From there you can run a command such as oc new-app, which is really powerful: this is how you leverage that Source-to-Image feature I was talking about. For example, if I ran oc new-app I could just give it the GitHub URL of my Ruby HelloWorld application.
I can specify a Docker image if I want to, but I can also just give it the URL, and OpenShift is smart enough to figure out which language I'm using and pick a compatible image for my application. Another useful command is oc get, which is what we use to see all of our resources. When I run oc new-app, it creates a whole set of OpenShift resources; running oc get pods shows the pods that have been created, and oc get all shows every resource, including my routes, services, deployment configs, and build configs. It's a good way to stay on top of all these sprawling resources.
If I want more information about one of these resources, I can use the oc describe command, which gives a bit more in-depth information on a pod, build config, or deployment config; we'll see a demonstration of that in a few moments. Another really useful command is oc export. OpenShift resources are usually YAML- or JSON-based, and oc export lets you take a resource, such as a pod or build config, and export it as a YAML file. You can make some modifications to it and then run the oc create command, passing in that YAML or JSON file.
You can use the oc edit command if you want to edit a resource directly, and oc delete removes a resource. Just as podman exec allowed us to go inside a container, oc exec lets you go inside a container within a pod. This allows you to do some debugging, or to inspect something like a log file for your app server. Okay, let's jump into a demonstration and I'll show you how to get a MySQL instance running in OpenShift.
Okay, I have already logged in to OpenShift here; you can see I'm in the default project, so I haven't joined a project yet. My next step is running oc new-project mysql-openshift. Projects are similar to Kubernetes namespaces: they help keep all of the resources for a specific project together, which keeps things a little more organized.
It also means we don't have to worry about resources in one project affecting another project. Okay, now that we're in our project, we run an oc new-app command. We specify our image with the --docker-image option, pointing to the rhscl/mysql-57-rhel7 image in our registry with the tag latest. We give it the name mysql-openshift and pass in a few environment variables: MYSQL_USER=user1 and MYSQL_PASSWORD=mypa55, we specify the database that MySQL will automatically create with MYSQL_DATABASE=testdb, we set the root password with MYSQL_ROOT_PASSWORD=rootpa55, and we acknowledge that this is an insecure registry.
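Those environment variables end up in the container spec of the resulting pod; a hedged sketch of that fragment, using the values from the demo (registry host omitted, as in the narration), looks like this:

```yaml
spec:
  containers:
  - name: mysql-openshift
    image: rhscl/mysql-57-rhel7:latest   # registry host omitted here
    env:
    - name: MYSQL_USER
      value: user1
    - name: MYSQL_PASSWORD
      value: mypa55
    - name: MYSQL_DATABASE               # database created automatically on startup
      value: testdb
    - name: MYSQL_ROOT_PASSWORD
      value: rootpa55
```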
Okay, that looks good. Let's run oc get pods and see what's going on, because when we run oc new-app it creates all of those OpenShift resources for us. OpenShift creates a service and a deployment config, though not a build config in this instance, because we're not building from source. You can see the pod spinning up here: that's our deployment config creating and scheduling our application onto a pod.
So this is the pod that we are waiting for. If I run oc describe pod mysql-openshift-1, you can see that I get a lot more in-depth information about my pod. The Events section is really useful; it's a great way to help debug if there's an issue with your pod, and the output also specifies any volumes being used by the pod. So it's a really useful tool for getting a bit more information about your pod. Let's continue to watch and wait for our pod to come up.
Okay, it looks like our pod is ready to go. I'll run oc get service and we'll see the name of our service: it's mysql-openshift, and the service is reachable at the cluster IP 172.30.236.44, which directs traffic to our container. The next step is to create a route for that service; remember, a route is responsible for taking a service and exposing it with a DNS hostname.
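As a sketch, the route we're about to generate with oc expose is roughly equivalent to this manifest (fields per the OpenShift 3.x route API):

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: mysql-openshift
spec:
  to:
    kind: Service
    name: mysql-openshift    # the service behind this route
  # the DNS hostname is auto-generated when not set explicitly
```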
So I'm going to run oc expose service mysql-openshift. Now if I run oc get route, you can see that this mysql-openshift hostname points to the IP address of the service, which in turn points to our container. Next I'll run oc port-forward, which allows us to connect to our MySQL instance from the host machine. I need the name of our pod again, so I run oc get pods, then oc port-forward mysql-openshift-1-jg6bk, forwarding from the default MySQL port to the default MySQL port.
Okay, this starts a long-running process, so I'll open up a new tab. From here I'm going to connect to that MySQL instance as user1 with the password I configured in those environment variables when I ran oc new-app, using protocol tcp and connecting via localhost. I'm not specifying the port here because the default port, 3306, is what's being forwarded into our container. Okay, I'll run that command.
Alright, we have connected to the MySQL instance. If I run show databases, we should see our test database in here, and we do. Okay, great. So we were able to start up a MySQL service running in OpenShift, expose it as a route, and connect to it from our host machine, and we could see that the environment variable specifying the name of the database was honored within that MySQL service. So I'll exit out of here.
And that concludes this demo. Join us in the next video, where we'll take a closer look at OpenShift routes.
Jeremy is the DevOps Content Lead at Cloud Academy where he specializes in developing technical training documentation for DevOps.
He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 20+ years. In recent times, Jeremy has been focused on DevOps, Cloud, Security, and Machine Learning.
Jeremy holds professional certifications for both the AWS and GCP cloud platforms.