This course covers a range of techniques to help you run your own containerized applications using Red Hat Enterprise Linux. Throughout the course, you will follow along with guided presentations from the Red Hat platform to get a first-hand understanding of how to run containers and manage your workflows.
Learning Objectives
- Learn the basics of setting up web servers and containers
- Understand how to find and manage containers
- Understand how to perform advanced container management
- Learn how to attach persistent storage to a container
- Learn how to manage containers as services
Intended Audience
This course is ideal for anyone who wants to learn how to run containers with Red Hat.
Prerequisites
To get the most out of this course, you should have a basic understanding of Red Hat and of how containers work.
Hey guys! Right now we are going to have a look at running a basic container. So, let's take this to the whiteboard and remind ourselves what is required in order to get started with containerization. We need a container host: this is the machine on which we're going to be running our containers. Our container host is running Red Hat Enterprise Linux, and it's the stock-standard Linux kernel that gives us the features we need in order to run containers. Those features are cgroups, SELinux, namespaces, and seccomp. There's no daemon required, no sort of podman daemon, because these are features of the stock-standard Linux kernel, and if anyone tells you that you need a specific daemon, well, they're not exactly being truthful now, are they? So, what we are going to do right now is use the podman command to take advantage of these features, to create the namespaces and the rules that we need in order to run our application inside of a container.

Now, it's not only podman that we can make use of; there are other tools as well. There's a yum module called container-tools that we can install, and it's the container-tools yum module that gives us some of the really cool tools that we need in order to manage our containers, and that includes podman. So, podman is a tool that we can use to start, stop, and manage our containers. There's also Skopeo, and Skopeo is a tool that is used to manage container images. It's all focused on the image, so we can use Skopeo to copy images from one location to another and to inspect an image to learn more about how it behaves. And then, if you are wanting to build an image, you can make use of the Buildah tool.
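As a rough sketch of what that looks like in practice, on a RHEL 8 or later host the container-tools module can be installed with yum; the version checks below simply confirm that podman, Skopeo, and Buildah landed on the system:

```bash
# Install the container-tools yum module (provides podman, skopeo, buildah, and friends)
sudo yum module install -y container-tools

# Confirm the tools are available on the container host
podman --version
skopeo --version
buildah --version
```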
So, let's go and have a look at a new whiteboard right now; we're going to talk about how things normally run. When you start up your system, everything belongs to a namespace, and that namespace is called namespace0. Let's just say that this is your operating system environment: everything belongs to namespace0 by default. So, I want you to think about systemd starting. Where does systemd run? It runs in namespace0. Where is sshd? Where is GNOME? It's all part of namespace0, along with all of your other files. Now, what we get to do is tell the kernel to create another namespace. Let's just say that here we have a namespace and we're going to call it namespace1; in fact, the name of the namespace is not really that important, but let's just call it namespace1. What we now get to have is an environment that is completely isolated from namespace0, and we're going to be using that environment to deploy the contents of the image. That is why the image is so important.
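If you want to see that initial set of namespaces on a host, something like the following should work (lsns needs elevated privileges to read another process's namespace details, and `$$` is just the current shell's PID):

```bash
# systemd (PID 1) and an ordinary shell share the same initial namespaces --
# the "namespace0" from the whiteboard
sudo lsns -p 1
sudo lsns -p $$
```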
So, let's just say that we have another component right now: the image. Think of an image as being like a tarball, and inside of this tarball we have all the files needed to support your particular application, and again, you would only need what is required to support your application. So, here we have the image; the image has a bunch of libraries, maybe it has some content, and the image of course has the binary needed to start the actual application. Whether the actual application is a web server or a database, it really doesn't matter; it's all built into that image. What we now do is deploy the contents of that image into your namespace. So, inside of the namespace right now we have, dare we call it the file system, but it's really just the files. We take the files that are contained within the image and we deploy them into the namespace. Inside of the image there's also a bit of metadata, and let's just say that the metadata also tells us the binary to initialize whenever you start a container from that particular image. Let's just say we're going to start up /usr/sbin/httpd. So, now we have the httpd daemon running inside of this particular namespace, and all of the files and all of the binaries are completely isolated from the other namespaces. In this case, on our container host, we have two namespaces.
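As a hedged example of that flow, the commands below start a web server container and peek at the image metadata that names the command to launch. The image registry.access.redhat.com/ubi8/httpd-24 and the container name myweb are just illustrative choices, not something prescribed by the course:

```bash
# Start a container: podman unpacks the image's files into a fresh set of
# namespaces and launches the command recorded in the image metadata
podman run -d --name myweb registry.access.redhat.com/ubi8/httpd-24

# Show the command that the image metadata says to run at startup
podman image inspect --format '{{.Config.Cmd}}' registry.access.redhat.com/ubi8/httpd-24
```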
We have namespace0, which is for the entire operating system, and then we have an isolated namespace called namespace1 which contains our files, the files that are derived from the container image. So, guys, if you connect to namespace1, and let's just call it a container now, because it's based on namespacing, and you run a command like ps -ef, it's only going to show you the processes inside of that particular namespace. Of course, if you run the ps -ef command inside of namespace0, you're going to get different output, because it's going to show you the processes that are inside of namespace0. Now, on top of that, I also want you to think about whether you can run the ps command at all.
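A quick sketch of that comparison, assuming the container from the earlier example (named myweb there) is still running and that its image actually ships the ps binary:

```bash
# Inside the container's namespace: only the container's own processes show up
podman exec myweb ps -ef

# On the container host (namespace0): every process on the system shows up
ps -ef
```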
Well, in order for you to run the ps command inside of your container, that binary and all the supporting files needed for the ps command have to be present inside of the namespace, and again, all of this comes from your container image. So, you understand that the container image is where our application lives. It's really important that you have a container image in order to start a container. So, where do we get these container images from? Let's go and talk about that right now. In order for you to get started with containers, it all starts with an image, right? So, where do these images come from? Typically, images are hosted at image registries, and we have a choice of image registries on the internet.
Now, one thing that I would encourage you to be mindful of is to only acquire images from sources that you trust: only acquire images from vendors that you trust and that will provide you with support. At Red Hat we do have a good number of image registries that we use to publish images for you to use. So, in the cloud we have an image registry, and the image registry provides us with a good number of images. What you would do is use your container host, and the podman command in particular, to start a container. You would tell podman to go and download an image from an image registry, and again, this would be a trusted source. Download an image from the image registry; the image is now stored on your computer. So, let's go and pick on the red image.
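In practice, that download step looks something like the commands below. The Red Hat registry hostname and image name are examples for illustration; any trusted registry and image would follow the same pattern:

```bash
# Inspect an image on a trusted registry without downloading it (Skopeo works on the image itself)
skopeo inspect docker://registry.access.redhat.com/ubi8/httpd-24

# Download (pull) the image from the registry to the container host
podman pull registry.access.redhat.com/ubi8/httpd-24
```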
So, here we have the red image downloaded to the container host and of course based on that red image what we are now going to do is start a container. So, a container is a running instance of an image.
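To see that distinction on the container host, assuming an image has been pulled and a container started from it, something like this shows both sides:

```bash
# Images stored locally on the container host
podman images

# Containers -- the running instances started from those images
podman ps
```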
Your images follow a particular naming convention and typically as part of the image name what we would find is the registry path.
So, let's just say that we have an image registry that is hosted at registry.example.com.
What follows that is typically the username (the user that published the image), the team name (the team responsible for creating and maintaining the image), or it could even be the vendor name, and then afterwards we have the actual name of the image. The last element is the tag, and the tag facilitates version control. If you don't specify a tag, the tag latest is always going to be assumed, but when I build an image I get to give it a tag of, let's just say, version 1.0, or I could call it just 1.0. In fact, I could call it anything; if I wanted to call my images dev, qa and then prod, or maybe dev and release, I could. I have a choice in that; I get to determine the tag.
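Putting the naming convention together, using the registry.example.com host from the whiteboard with a made-up mygroup/myapp image purely for illustration:

```bash
# Fully qualified image reference:
#   <registry>/<user, team, or vendor>/<image name>:<tag>
#   e.g. registry.example.com/mygroup/myapp:1.0

# No tag given, so the tag "latest" is assumed
podman pull registry.example.com/mygroup/myapp

# Explicit version tag
podman pull registry.example.com/mygroup/myapp:1.0
```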
Now, there are several registries that you can make use of on the internet. At Red Hat we have Quay.io. Quay.io is a registry that is geared towards publishers of images, which means that you can take an image that you have created, create an account at Quay.io (it is free), and go and publish your image at that particular registry.
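A minimal sketch of what publishing to Quay.io could look like; the image name mywebserver and the <your-username> placeholder are hypothetical and would be replaced with your own account and image:

```bash
# Tag a locally built image for your Quay.io account, log in, and push it
podman tag mywebserver:latest quay.io/<your-username>/mywebserver:latest
podman login quay.io
podman push quay.io/<your-username>/mywebserver:latest
```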
So, for example, I have an image located at Quay.io: my user account is rdacosta, and I have got an image over there called mywebserver:latest, which is an image that I built as part of a demonstration. Here we can see that we have the registry name, the name of the user, the name of the image, and the tag.
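If you wanted to try pulling that demonstration image yourself, the fully qualified reference would be assembled like this (assuming the image is still published at that location):

```bash
# quay.io        -> registry name
# rdacosta       -> user account that published the image
# mywebserver    -> image name
# latest         -> tag
podman pull quay.io/rdacosta/mywebserver:latest
```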
So, guys, I am really itching to get my hands dirty and show you all of these things. It's fun and games to look at it on the whiteboard and for me to talk about it, but the real learning happens when you get your hands dirty. So, what we're going to do right now is move over to the next video, where I will meet you for the guided exercise.
Jeremy is a Content Lead Architect and DevOps SME here at Cloud Academy where he specializes in developing DevOps technical training documentation.
He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 25+ years. In recent times, Jeremy has been focused on DevOps, Cloud (AWS, Azure, GCP), Security, Kubernetes, and Machine Learning.
Jeremy holds professional certifications for AWS, Azure, GCP, Terraform, and Kubernetes (CKA, CKAD, CKS).