Setting up a Web Server

Contents

  • Introduction & Overview
  • Managing Containers as Services
  • Review & Lab Session
  • Setting up a Web Server
Difficulty: Intermediate
Duration: 1h 46m
Students: 122
Ratings: 3.6/5
Description

This course covers a range of techniques to help you run your own containerized applications using Red Hat. Throughout this course, you will follow along with guided presentations from the Red Hat platform to get a first-hand understanding of how to run containers and manage your workflows using Red Hat.

Learning Objectives

  • Learn the basics of setting up web servers and containers
  • Understand how to find and manage containers
  • Understand how to perform advanced container management
  • Learn how to attach persistent storage to a container
  • Learn how to manage containers as services

Intended Audience

This course is ideal for anyone who wants to learn how to run containers with Red Hat.

Prerequisites

To get the most out of this course, you should have a basic understanding of Red Hat and of how containers work.


Transcript

So, let's just take a step back for a moment.

Let's just say that you wanted to set up a basic web server, an Apache httpd web server. What would you need? So, let's go and take this to the whiteboard. We would need a virtual machine, because let's face it, you're not really going to be using a physical machine. So, here we have a virtual machine, and let's go and think about the things that we need to do in order to facilitate that end goal right now.

So, first up, what we need to do is install the software. You would use the yum install command to install httpd. Once you've taken care of that, you need to start and enable the service.

So, you would use the systemctl enable --now command to go and start up httpd. Next up guys, what you would do is that you would install your content.

So, your content would typically be installed to the directory /var/www/html. Now, all of these things have to be done as root. Okay, so let's just say that you have root access for a moment. I still haven't given you a compelling reason to consider containerization, but let's now go and start to deploy applications right now. So, what we want to do is set up a basic PHP application.
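Before we move on to PHP, here is a quick hedged sketch of those manual steps on a Red Hat Enterprise Linux virtual machine. The index.html file is just an illustrative piece of content; the package name and document root are the ones mentioned above:

    # Install the web server package (run as root)
    yum install -y httpd

    # Start the service now and enable it at boot
    systemctl enable --now httpd

    # Install your content into the default document root
    echo "Hello from httpd" > /var/www/html/index.html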

Yeah, let's deal with PHP. So, let's just say, for example, that you have a PHP application that needs to be deployed. In order to get to that particular goal, you know what you need to do: you need to install the PHP interpreter, so that's the php package. You need to tie the PHP interpreter into your web services layer; that's easy, that is mod_php, which is an Apache module. And now you can install your PHP content, not a problem.

So, let's go and take care of this right now. So, over here we're going to be adding PHP and we're going to be doing the integration between Apache and PHP. So, once we've done that we can now go ahead and install the PHP content.
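As a rough sketch of those steps, and assuming the classic mod_php setup described here (package names can differ between RHEL releases), adding PHP to that same virtual machine might look something like this:

    # Install the PHP interpreter; on older releases this pulls in mod_php for Apache
    yum install -y php

    # Restart the web server so it picks up the PHP module
    systemctl restart httpd

    # Install your PHP content alongside the static content (file name is illustrative)
    echo "<?php phpinfo(); ?>" > /var/www/html/info.php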

Great! Not a problem. So, your PHP application is happily running right now, and then you have the need to run another PHP-based application. Now, in the background we have been using PHP 7, and yet you need to run another application, and this application was written for PHP 5. So, let's just say that the application that you have is not compatible with the PHP 7 interpreter. So, what now?

Let's go and think about it. What we could do right now is set up another virtual machine. We could install Apache, we could start and enable Apache, and we could install PHP version 5, which is needed by the application. Now we have two virtual machines and two web servers: one is running an application that is PHP 7 based, and the other is running an application that is PHP 5 based. So, you can imagine that this doesn't scale well. How do we solve this challenge? How do we run two PHP applications on the same system? How do we run several applications in general, using various runtimes as well as various versions of those runtimes, on the same system?

So guys, this is where containerization comes into play, because inside of a container you have all the bits and pieces specific to running that particular application, and you don't have the overhead that virtualization typically incurs. With a virtual machine you have to set up an operating system, so you have systemd running and all these other bits and pieces that you don't really need in order to run your application, but that are required to support the operating system, and you need that operating system environment in order to support your web server or your runtime or whatever it is that you are running.
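To make that concrete, here is a hedged sketch of how two applications with conflicting PHP versions could run side by side as containers on one host, using podman, the tool introduced a little later in this lecture. The image names (the public php:5.6-apache and php:7.4-apache images) and the port numbers are purely illustrative:

    # A legacy application that needs PHP 5, published on host port 8081
    podman run -d --name legacy-app -p 8081:80 docker.io/library/php:5.6-apache

    # A modern application that needs PHP 7, published on host port 8082
    podman run -d --name modern-app -p 8082:80 docker.io/library/php:7.4-apache

    # Both containers run on the same machine, each with its own PHP runtime
    podman ps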

Now, while containerization is really cool and we are going to be exploring that in this class, do understand that not everything can be containerized. So, if you have something that requires low-level access to hardware, then containerizing the application may not be a good fit.

Now, in order for you to get started with containers you need a couple of bits and pieces. So, let's go and talk about what they are. If you want to start a container, you need an image. Think of an image as being like a tarball: it's a tarball that has all of the bits and pieces needed in order to support your actual application. This is not a full operating system, and it does not include systemd. It has the files needed to support the processes that you run inside of your container-based application. So, for example, if we want to run web-based services, what we would need is httpd. We may need other binaries as well, but what we would need is httpd. As far as libraries are concerned, we would need the libraries to support that particular httpd application. So, inside of this image we would have the files that support our application.

Now what we need to do is pass that image off to something that is going to run our containers and this is where containerization is really slick.

Now, if you are making use of Red Hat Enterprise Linux, you don't need to be the root user in order to run containers. And yes, I do understand that this comes with certain limitations, for example, you may not be able to run a container and attach it to a privileged port.

Yeah, there are other technologies we could use to help us in that respect. However, in Red Hat Enterprise Linux we do not need to be a privileged user in order to start a container.
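As an illustration of that limitation, a rootless container normally cannot bind a privileged port (below 1024) directly. A common workaround is to publish a high port instead, or, as root, to lower the unprivileged port boundary. The image name below is just an example:

    # Rootless: publish host port 8080 rather than the privileged port 80
    podman run -d --name rootless-web -p 8080:8080 registry.access.redhat.com/ubi8/httpd-24

    # Alternatively (as root), allow unprivileged users to bind lower ports
    sysctl net.ipv4.ip_unprivileged_port_start=80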

So, to get started over here we need an image.

So, what we now do is that we pass it off to our operating system, naturally Red Hat Enterprise Linux and we tell it to start a container.

Now, we do provide you with a good number of tools that facilitate managing containers and building images, but the tool that we're going to be using very shortly is called podman. So, what podman is used for is to start, stop, and manage your containers, and if you're wondering about a daemon, well, we don't need a daemon in order to run containers, because all that containers do is expose technologies already present in the kernel, and these technologies are control groups, or cgroups.

We have Security-Enhanced Linux, or SELinux, we have namespaces, and we also have seccomp. So guys, these are stock-standard Linux kernel features, so don't let someone dupe you into thinking that you need a daemon in order to get started with container technologies.

So, what we now do is that we have this tool called podman. This is the tool that we would use, and we say, hey podman, I have got here this image, and what I want you to do right now is to start an environment. I want you to create the namespaces, and I want you to create the security rules to isolate my container from the rest of the operating system. So, what podman now does is that it looks at the container image that you've supplied and it starts a running container based on that particular image.
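A minimal sketch of that workflow with podman, using an illustrative image name, might look like this:

    # Fetch an image that contains httpd and its supporting files
    podman pull registry.access.redhat.com/ubi8/httpd-24

    # Start a container from that image; podman sets up the namespaces, cgroups,
    # SELinux labels and seccomp profile that isolate it from the rest of the system
    podman run -d --name web -p 8080:8080 registry.access.redhat.com/ubi8/httpd-24

    # List running containers; 'podman stop web' and 'podman rm web' would stop and remove it
    podman ps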

So, what we're doing is taking the files, the binaries, all of the bits and pieces that are present inside of your image, and deploying them into your running container. It's really important to note that the contents of your running container are based on the image. Now, these images are immutable, in other words they're unchangeable. We could go into our running container and make a change over here; let's say we add something and we remove something else. While it is possible for you to do this, you're conducting that transaction against the running container, not against the image from which the container was built. So, if you wanted to make changes to the actual image itself, you're going to need to make use of a different tool. You're going to have to go and build your container image, and there are different approaches that we can make use of.
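To illustrate that, here is a small hedged sketch, assuming the container named web from the sketch above is still running: changes land in the container's writable layer and are reported relative to the image, while the image itself stays untouched.

    # Make a change inside the running container
    podman exec web touch /tmp/added-file

    # Show what has changed relative to the image the container was started from
    podman diff web

    # Remove the container and start a fresh one from the same image:
    # the change is gone, because the image was never modified
    podman rm -f web
    podman run -d --name web -p 8080:8080 registry.access.redhat.com/ubi8/httpd-24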

But the point of this conversation right now is to talk about and to introduce you to a tool called Buildah. Buildah is a great tool because we can use Buildah to produce brand-new container images, brand-new container images that you can use to start your containers.
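As a hedged sketch of that idea, assuming a simple Containerfile and the illustrative image and file names below, building a brand-new image could look like this (podman build works in much the same way):

    # Containerfile: start from an existing httpd image and bake our content into it
    #   FROM registry.access.redhat.com/ubi8/httpd-24
    #   COPY index.html /var/www/html/index.html

    # Build a new, immutable image from that Containerfile
    buildah bud -t my-httpd:1.0 .

    # The new image can now be used to start containers
    podman run -d --name my-web -p 8080:8080 localhost/my-httpd:1.0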

When it comes to managing your images, the things that we would typically do are copy images from one place to another, look at the contents of images, and get the properties of these images. For that, there's another tool that we have, and that is called Skopeo.
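For example, here are a couple of hedged Skopeo sketches; the source image is the illustrative one used earlier, and registry.example.com stands in for whatever internal registry you might copy images to:

    # Look at the properties of an image in a remote registry without pulling it
    skopeo inspect docker://registry.access.redhat.com/ubi8/httpd-24

    # Copy an image from one registry to another
    skopeo copy docker://registry.access.redhat.com/ubi8/httpd-24 docker://registry.example.com/mirror/httpd-24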

So, podman, Buildah, and Skopeo are going to be the tools that we're going to be using in this class. So, let's go and have a look at a brand-new whiteboard right now. Here I have got a virtual machine, and this virtual machine is running Red Hat Enterprise Linux. On a stock-standard Red Hat Enterprise Linux installation, we have got the Linux kernel, and the Linux kernel provides us with control groups, namespaces, SELinux, as well as seccomp, and this is what we need in order to get started with containers.

So, all that the podman command is going to do is take your image, the image that you supply, and start a container based on that particular image. It's going to start this isolated environment, and inside of this nicely isolated environment it's going to deploy the file system of your container and start the process, the process that you identify in your container image as the one that you want to start. So, what we have right now is a basic containerized application running on this particular virtual machine, and what I want to talk about is how you would go about setting this up at scale, setting this up for your company, for your enterprise.

So, here we have an application running on a virtual machine. Let's just say that what we have done right now is that we've instructed our users to connect to this application. So, here is a happy user, and we tell our user to connect to the containerized application by connecting to the virtual machine, and the virtual machine then has port forwarding set up. We forward a port on the virtual machine to a port inside of the containerized application, and that is how the user accesses the application itself.

However, what happens when that particular virtual machine fails? What if we have to take it down for maintenance? Well, what we could do, and this is really easy, is go to another virtual machine, another virtual machine also running Red Hat Enterprise Linux, and start a brand-new container using the very same image. That's not a problem, that's easy, okay! Great! So, now we have that same containerized application running on another virtual machine, and what we now need to do is instruct the user to connect to that other virtual machine.
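As a rough sketch of that port forwarding, podman's port publishing maps a port on the virtual machine to a port inside the container, and the very same image can be started on a second machine in exactly the same way. The image name and ports are illustrative, and port 80 is bound as root here:

    # On the first virtual machine: host port 80 is forwarded to container port 8080
    podman run -d --name app -p 80:8080 registry.access.redhat.com/ubi8/httpd-24

    # On the second virtual machine: start the very same image the same way,
    # then point the users at this machine's address instead
    podman run -d --name app -p 80:8080 registry.access.redhat.com/ubi8/httpd-24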

Doesn't that suck a little bit? Because now the users may be connecting to a different IP address or a different fully qualified domain name.

So, when it comes to managing containers at scale, when it comes to having a framework that is able to dynamically respond to load, in other words, when you have a lot of users connecting to your application and one measly container is not able to satisfy the needs of hundreds or thousands of users, what we want to do is scale our workload outwards. What you also have to be mindful of is that when it comes to scaling outwards, we also want to be able to scale back in.

In other words, we want to terminate the instances that we don't need should the load be low. And how about doing a health check? When we have a problem inside of an application, what we want to do is terminate that particular application and restart it. You see, native container technologies can't do this on their own, and this is where Kubernetes comes in.

Now, Kubernetes is a container orchestrator; however, Kubernetes on its own is not sufficient for your enterprise goals. This is where you want OpenShift, in fact this is where you need OpenShift, because what OpenShift can do is manage your containers at scale and react to changes inside of your environments. Using OpenShift is very much beyond the scope of this particular class. Over here we're going to be focusing on raw containers, the very basics, and as you journey through this class you will realize that perhaps, just perhaps, there's a little bit more to this containerization picture than raw containers alone. So guys, I would encourage you to look at OpenShift after taking this class.

However, what you may want to do first is have a look at CodeReady Containers, because OpenShift is a really complex piece of technology. CodeReady Containers basically gives you an all-in-one OpenShift cluster on your laptop, and the best part is that it is freely available from the developer portal at redhat.com.

About the Author

Jeremy is a Content Lead Architect and DevOps SME here at Cloud Academy where he specializes in developing DevOps technical training documentation.

He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 25+ years. In recent times, Jeremy has been focused on DevOps, Cloud (AWS, Azure, GCP), Security, Kubernetes, and Machine Learning.

Jeremy holds professional certifications for AWS, Azure, GCP, Terraform, and Kubernetes (CKA, CKAD, CKS).