Defining the Cloud
Difficulty: Intermediate
Duration: 2h 2m
Students: 471
Ratings: 4.3/5
Description

This course covers the Red Hat OpenStack Platform, a flexible infrastructure project that allows you to virtualize your cloud resources and use them when you need them. The course kicks off with an introduction to the basics of cloud computing, before defining the Red Hat OpenStack Platform and explaining how it can be used in conjunction with compute, storage and network functions. The course also explains the ways in which OpenStack is highly available and finally, it talks about deployment of the platform. Demonstrations and use cases throughout the course allow you to see how the Red Hat OpenStack Platform can be used in real-world situations.

Learning Objectives

  • Learn the basics of cloud computing
  • Understand what Red Hat OpenStack Platform is
  • Learn how Red Hat OpenStack Platform works with compute, storage, and network resources
  • Learn how to deploy the Red Hat Enterprise Linux OpenStack Platform

Intended Audience

  • IT leaders, administrators, engineers, and architects
  • Individuals wanting to understand the features and capabilities of Red Hat OpenStack Platform

Prerequisites

There are no prerequisites for this course.

Transcript

In this video, what we want to do is start off by defining cloud computing. I mean, let's face it, cloud is such a loaded term that we all have to deal with, and we all come into this with our own preconceived notions. So we'll start here by trying to level set and describe what we believe cloud computing to be. We'll also talk about some scalability models and some implementation choices that we can make. So let's start.

Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources. Now, that might include networks, servers, storage, applications, and even services. The goal here is to be able to rapidly provision and release these instances, these virtual machines, for example, as needed, without requiring any kind of provisioning assistance from a service provider. The cloud consumer decides which computing resources they want to be able to use for their instances.

Now cloud computing has several essential characteristics. The first would be self-service. As I just mentioned, our goal here is to allow cloud consumers to provision those instances with their computing resources on their own. Another key is multitenancy, allowing multiple cloud consumers to share the underlying hardware. Elasticity: being able to scale out, adding additional resources on demand, or perhaps even scaling back in; we need to be elastic. And then finally, telemetry: resources need to be monitored, and ultimately metered, by the service provider as well as by the cloud consumer.

Now in this environment, we talk about a couple of different types of workloads. In a traditional workload, we are looking at needing to scale up. I mean, let's think about the database server that you have back in your data center. As you want to satisfy additional demand while your business grows, what do you need to do? You need to make that database server, whether physical or virtualized in the data center, bigger. You need to add RAM. You need to add storage. You need to add CPU capacity.

You see, cloud workloads are fundamentally different. The goal there is the idea of being able to scale out, that elastic nature we were just talking about. Being able to add additional instances of a web server, for example. I mean, let's say that you're designing a web environment that is serving up content for some major sporting event. Well, let's face it, for 11 months out of the year, people couldn't care less about that sporting event. They'll just occasionally connect, so you might only need one or two web servers. But during the event itself, we are all looking to gain access to that information online, and so the number of web servers that you need is going to be huge. We need to be able to scale out by adding those additional virtual instances to meet that demand.

Now, in terms of deployment, we actually have a few different models that we can work from. The first would be the idea of a private cloud. This is a cloud used by a single organization. In fact, for many, the hardware resides in their own data center, but the applications are cloud workload applications, and they look to scale out their resources. The second idea would be a public cloud. This is a cloud that's available to the general public and is run by a cloud provider. Of course, you can't just have private or public. We need to be able to combine these, so the hybrid cloud provides perhaps the most interesting deployment model. In this case, we use the private cloud for the core of our resources, the standard amount of resources that, let's say, we need for the 11 months that the sporting event is not happening. And then in that 12th month, during the sporting event, we expand into the public cloud (some call this a cloudburst): same application, same type of resources.

Now we also have several different models of implementation. Infrastructure-as-a-Service is a term used to talk about providing infrastructure services: things like those virtual machines, those instances, those servers, maybe some network resources, additional storage resources. You know, those things that most of us think about building out in our data center. That's infrastructure. A cloud consumer wants to be able to provision those types of resources, and products like Red Hat Enterprise Linux OpenStack Platform provide us with a core element there. Red Hat CloudForms provides a management piece, but we'll get into both of those a little bit more later on.

One of the keys to Infrastructure-as-a-Service is also being able to have a service catalog, maybe in that public cloud, where we can automate provisioning and lifecycle management. We can even define policies that set quotas for different consumers. I mean, we all have that rogue department that likes to use up every resource within the company, right? And, of course, chargeback and metering are particularly important for public clouds, but also for our internal chargeback mechanisms.

A second form of the cloud is Platform-as-a-Service. In this realm we're talking about application development: being able to streamline and speed up the delivery of applications. Platform-as-a-Service provides us with those development platforms, helps us automate the deployment of those applications, and also provides platforms that are more cloud-aware for those cloud-type workloads.

Now Red Hat provides an open hybrid cloud. And as we look at this sort of outline of elements, we see at the bottom our physical infrastructure. We have tools that are able to abstract our storage, like the Red Hat Gluster Storage and Red Hat Ceph Storage environments. To abstract our compute infrastructure, Red Hat Enterprise Linux and the built-in virtualization technologies like KVM and libvirt provide us with that layer of abstraction. But then, are we looking to do server virtualization, you know, the traditional workloads where we're scaling up? Well, RHEV, Red Hat Enterprise Virtualization, would be a good choice at that layer. If instead we're looking to scale out and implement cloud workloads, well, then we're looking for an IaaS solution, and Red Hat Enterprise Linux OpenStack Platform would provide us with that.

A Platform-as-a-Service engine that has received a lot of interest is OpenShift. We also have an extended Platform-as-a-Service engine provided through JBoss. We then run our applications on top of that, but of course, we need to be able to manage all of these different cloud engines, and so CloudForms gives us our cloud orchestration. Satellite can help us with operating system and application lifecycle management.

Let's take a little closer look at virtualization versus cloud computing. In traditional virtualization, we might see a diagram something along these lines. I have my underlying hosts, and then I'm able to dynamically allocate the virtual machines that I want to implement. I can even define virtual machine pools, maybe a group of remote desktop virtual machines. But one of the keys to traditional virtualization is this idea that these infrastructures are considered stateful. We care about those virtual machines. We want to be able to scale them up. We want to be able to build fault tolerance into that particular virtual machine. This, of course, layers on top of clusters where we're defining the storage, network elements, and CPUs that we're tying into. We can organize these physical objects to support the virtual objects we're looking to implement.

In a cloud infrastructure, we have those same basic infrastructure elements: the compute, storage, and networking elements. We layer this all on top of Red Hat Enterprise Linux, and, of course, the ability to monitor and control the environment is provided, in this case, by monitoring, data processing, and orchestration services.

The purpose of cloud computing is to allow for greater scalability. It gives us a model that ultimately decouples the virtual infrastructure from the hardware environment. Administrators can provide developers with scalable environments where growth only requires expanding hardware capacity. I simply add some additional hardware to my data center to grow the environment.

So, we want to take a look at a demonstration of application scalability. In this demonstration, we'll show you how to scale an application using one of the OpenStack services, Ceilometer, to monitor the load on a couple of web servers. With that monitoring, if we find that the load is getting too high, we would like Heat, the templating environment, the orchestration piece, to go ahead and scale the application environment.
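As a point of reference before we dive in, here is a minimal sketch of what the scaling pieces of a Heat (HOT) template like this might look like. The resource names, sizes, and version string are illustrative assumptions, not the actual scaling.template used in this demo:

    heat_template_version: 2013-05-23

    parameters:
      key_name:
        type: string
      flavor:
        type: string
      image:
        type: string

    resources:
      # A group of identical web server instances that Heat can grow or shrink
      web_server_group:
        type: OS::Heat::AutoScalingGroup
        properties:
          min_size: 1
          max_size: 3
          resource:
            type: OS::Nova::Server
            properties:
              key_name: { get_param: key_name }
              flavor: { get_param: flavor }
              image: { get_param: image }

      # A policy that adds one instance to the group each time it is triggered
      scale_up_policy:
        type: OS::Heat::ScalingPolicy
        properties:
          adjustment_type: change_in_capacity
          auto_scaling_group_id: { get_resource: web_server_group }
          cooldown: 60
          scaling_adjustment: 1

The alarms that trigger the policy come from Ceilometer; we'll sketch one of those a little later in the walkthrough.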

So to start off here, we are looking at the Horizon web interface on our screen. And, of course, I need to log in as a user in the correct tenant, or project. So my username is adm1 and my password, as many of you will probably guess, is redhat. As I sign in, the Horizon interface brings up a summary screen of the tenants that are there. I currently have one project named project1, and this is the project we're going to play with a little bit. Now, as I mentioned, we're going to be working off of a template. In order to do that, let's come up here to the top pane, go to the project, and then look under Orchestration and go to Stacks.

Now, you notice that there are no items to display because I'm not currently running any stacks within this particular tenant. So we'll come over here to the right and click on Launch Stack, and this brings up an assistant to sort of step us through the process of going ahead and configuring this environment. Now you'll notice that there are two sources listed here. One is a template source, so this will be the basic template. Maybe it was preconfigured by some of your sys admins in the environment or maybe this is a template that was defined by your own developers as they are looking to do some of their own self-service work. 

But besides a template source, we also have an environment source, and the environment source can be used to override values that may be in the template. So maybe the template has 90% of what you want, but you want to customize a few things before we go with that last piece. Now, to make this a little easier, so you don't have to watch me do a lot of typing, I'm going to open up another window over here while this is open. Up on our server, I have the files that we're going to be working with. So, in our Materials directory under Heat, I'm going to pick up environment.yaml.

We have the ability to open it with a text editor, which is what I'll do. And here we can see some parameters that I want to override in my template. For example, the SSH key that I want to have available for these particular instances so that I can remotely log in with keys, which, as we know, is much more secure than doing it with passwords. I also want to override the flavor; in this case, I want to use a flavor called m1.small. Now, a flavor identifies the basic resources that a particular instance is going to use. And then finally, the name of the image from Glance that we want to bring in and boot.
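For reference, an environment file along these lines would carry those three overrides; the key pair and image names below are placeholders, not the exact values used in the demo:

    # environment.yaml: values that override the template's own parameters
    parameters:
      key_name: mykey        # SSH key pair injected into each instance
      flavor: m1.small       # instance sizing (vCPUs, RAM, disk)
      image: rhel-server     # name of the Glance image to boot

Because an environment file only overrides parameters, the same template can be reused across projects with different keys, flavors, and images.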

So with that open, let me come back over here and we will go back to that assistant that we had. For the template, I'm just going to directly download it with the URL, and I'll put in here the location of our template. Again, I've stored these files in the same place I was getting the environment file from, and this one is called scaling.template. And then for the environment source, I can do a direct input. I'll just copy and paste it from the text editor that's running.

So as I choose Direct Input, there is the environment data. Let me go over to the window where I have this information. I'm going to highlight all of that text, hit Ctrl+C to copy, come back over here, and hit Ctrl+V to paste. Now some of you with eagle eyes may have noticed that the syntax of this environment data is a little bit different: the parameters keyword is left-justified, but the other three lines, the actual key and value pairs themselves, are each indented by a space. Indentation in this environment data is very important.

So I will come down here to Next. Now that I've provided the template, the source of my information, and the environment values that I want to override with, I can go ahead and specify the information about the stack itself. So why don't we give this the name AutoScaling. Now, we can have a creation timeout, how long we want to wait before giving up on getting this started, because remember, what a template is doing is customizing those images, multiple images, multiple instances, as they boot.

We are going to have it roll back on failure, and I need a password so that it's able to remotely get in and deposit some of that information. For the rest of the information, I'm going to leave the defaults, but you'll notice there are some fields related to the networking we're going to be tied to, as well as some database elements, and notice that it's filled in some of my environment information for me.

So I'll go ahead with that information selected and click Launch. What we see here is a summary screen, and in the upper right-hand corner, for a moment, it showed a success message. The stack has been created, but let's face it, there's probably a lot more going on behind the scenes at the moment, which is why my status bar is currently showing In Progress. Now the stack is in the process of creating the underlying instances, and in this particular case, we are looking to configure and set up three instances and a load balancer.

So with this, the three instances will be a database server and a web server, where the web server is obtaining its back-end information from the database. Then, as we go to play with this stack, we will put a load on the web server so that we can see it dynamically grow. Remember the idea we were talking about earlier in this section: scaling up versus scaling out. In the case of cloud applications, this is a matter of scaling out, so as my load goes up, I should have additional instances; again, a dime a dozen. We just get a bunch of instances together to provide support for our growing load. And in fact, we could even have Ceilometer and Heat scale back down should that load go down.

So we can see the process is continuing here where the stack is being created, and it finally says Status Complete. This means that the various instances have all been established. If I click on the stack name, what we see is a visual representation of the various elements within this stack. My screen is a little on the small side; so that we're not looking at tiny icons, some of it is edging off my screen a little bit, but each of these elements refers to a component of the stack. For example, I've highlighted the scale-up policy, which is tied to the monitoring of the web instances, to see how that load might affect things.

So there's each one of these instances, the AutoScaling group, the pool that's been created, and when I come down here, there's my Neutron load balancer down further. So there's my topology. There are my various resources; I can see them here within this environment. I can go to Overview, for example, and begin to see some things about my stack. We have a website URL, which shows me how I can access that particular website. Let me go ahead and click it right here. This will open up a new tab, and we see that it's showing a connection to that IP address, and I'm seeing the content from that website URL.

Now this is the public IP, the virtual IP, that is being served up and will be load balanced across my one or more web servers. Remember, I said I was configuring this so it would automatically scale out. How am I going to make that happen? Well, we've configured Ceilometer with some alarms, and when those alarms are triggered, new instances will be created.
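As a sketch of how that wiring typically looks, continuing the illustrative template from earlier, a Ceilometer alarm resource points its alarm action at the scaling policy's webhook URL; the meter name and threshold below are assumed values:

      # Fires when average CPU utilization stays above 50% for one
      # 60-second period, invoking the scale-up policy's webhook
      cpu_alarm_high:
        type: OS::Ceilometer::Alarm
        properties:
          meter_name: cpu_util
          statistic: avg
          period: 60
          evaluation_periods: 1
          threshold: 50
          comparison_operator: gt
          alarm_actions:
            - { get_attr: [scale_up_policy, alarm_url] }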

So, how can I go about generating some activity? Because frankly, me going to this page and just seeing, you know, three words of text isn't really putting a strain on my web server. So let me come back to the Horizon interface. I'll come back up top here, look under Compute, and choose Instances to see a list of the instances that are running or available. Here I've got 31 and 32 being shown. If I select one of these, I can go to its console. Let me go ahead and click here to show just the console. The first time you connect to a console, it uses SSL for that connection, so I'll need to add an exception for the certificate that's being used, but here you can see me ready to log in to that first web server.

I can log in as root with my password. And while I'm in here, I can run launchme. Now launchme is a script; as you can see here on the screen, it says that it's going to simulate some CPU load on that web server, and as that CPU load expands, what we will find is that additional web server instances will be created for us.

Now this will run for about 5 minutes, providing additional stress and creating additional web servers. Once it ends, Heat has what's known as a cooldown period before it will scale that resource again, perhaps eliminating one or more of those web servers that had automatically scaled out.
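Scaling back in works the same way in reverse; continuing the same illustrative sketch, a second policy with a negative adjustment is paired with a low-CPU alarm, and the cooldown property is what enforces that quiet period between scaling actions:

      # Removes one instance when triggered; cooldown enforces a quiet
      # period before another scaling action can fire
      scale_down_policy:
        type: OS::Heat::ScalingPolicy
        properties:
          adjustment_type: change_in_capacity
          auto_scaling_group_id: { get_resource: web_server_group }
          cooldown: 60
          scaling_adjustment: -1

      # Fires when average CPU utilization stays below 15%
      cpu_alarm_low:
        type: OS::Ceilometer::Alarm
        properties:
          meter_name: cpu_util
          statistic: avg
          period: 60
          evaluation_periods: 1
          threshold: 15
          comparison_operator: lt
          alarm_actions:
            - { get_attr: [scale_down_policy, alarm_url] }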

So while this is running, let me go back to the Horizon interface. If I go back to my list of instances, I still see just 31 and 32, so I haven't generated enough load yet to trip the alarm, but after a little bit of time it will go ahead and launch another instance for me. So this is the idea behind the monitoring of Ceilometer and the orchestration of Heat working through a typical cloud workload.

So in this video, we went ahead and defined our cloud terms to make sure we were all speaking the same language. We also identified different deployment models and the different workloads across these various elements, and we arrived at a common understanding of what a cloud is. Now that we've seen this one, let's get ready to move on to that next video.

About the Author

Jeremy is a Content Lead Architect and DevOps SME here at Cloud Academy where he specializes in developing DevOps technical training documentation.

He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 25+ years. In recent times, Jeremy has been focused on DevOps, Cloud (AWS, Azure, GCP), Security, Kubernetes, and Machine Learning.

Jeremy holds professional certifications for AWS, Azure, GCP, Terraform, and Kubernetes (CKA, CKAD, CKS).
