This course covers the Red Hat OpenStack Platform, a flexible infrastructure project that allows you to virtualize your cloud resources and use them when you need them. The course kicks off with an introduction to the basics of cloud computing before defining the Red Hat OpenStack Platform and explaining how it works with compute, storage, and network functions. The course also explains how OpenStack achieves high availability and, finally, covers deployment of the platform. Demonstrations and use cases throughout the course show how the Red Hat OpenStack Platform can be used in real-world situations.
Learning Objectives
- Learn the basics of cloud computing
- Understand what the Red Hat OpenStack Platform is
- Learn how the Red Hat OpenStack Platform works with compute, storage, and network resources
- Learn how to deploy the Red Hat Enterprise Linux OpenStack Platform
Intended Audience
- IT leaders, administrators, engineers, and architects
- Individuals wanting to understand the features and capabilities of Red Hat OpenStack Platform
Prerequisites
There are no prerequisites for this course.
In this video, we want to focus on the scalability of a Red Hat Enterprise Linux OpenStack Platform environment, looking initially at compute within OpenStack.
So, I am going to discuss some scaling strategies. We are going to talk about how we can scale and add resources to our OpenStack environment. To start off, remember that Red Hat Enterprise Linux OpenStack Platform is built from various component services. The one we're going to focus on in this particular chapter is Nova Compute, looking to expand its resources.
OpenStack allows administrators not only to provision new software environments for developers and applications, but also to quickly scale physical resources. OpenStack nodes can be added to increase cloud capacity. Upon provisioning, those resources are ready to be consumed by users almost immediately. Because OpenStack is hardware agnostic, existing services can be scaled on commodity hardware and in heterogeneous environments. We can also integrate this with Red Hat products such as Red Hat Satellite, which gives us an advanced lifecycle management platform for Red Hat Enterprise Linux infrastructures.
OpenStack can be scaled by deploying services onto new nodes. This gives us more capacity by, for example, deploying additional compute nodes. So, we have control elements within OpenStack. We have networking elements, and then our compute elements. And then I can replicate those compute elements across several physical pieces of hardware.
Now, how do I make this happen? Well, Red Hat offers various deployment and management methods for scaling our services. Coming soon, we have the new Director, sometimes called RDO Manager. It's a set of tools for deploying and managing OpenStack, built on the TripleO project. But today, in Red Hat Enterprise Linux OpenStack Platform version 6, we have Foreman, a deployment management tool that provides a web user interface for managing the installation and configuration of remote systems.
Deployment of changes is performed using Puppet, but the core tool a lot of folks use is Packstack. Packstack is a command-line utility that uses Puppet modules to support rapid deployment of OpenStack on existing servers over an SSH connection. Packstack is suitable for deploying both single-node proof-of-concept installations and slightly more complex multi-node installations.
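For a sense of what that looks like in practice, here's a minimal sketch of typical Packstack usage; the answer-file path is hypothetical:

```bash
# Single-node, proof-of-concept (all-in-one) deployment:
packstack --allinone

# Multi-node deployment: generate an answer file, edit it to list
# your controller, network, and compute hosts, then apply it.
packstack --gen-answer-file=/root/answers.txt
packstack --answer-file=/root/answers.txt
```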
Now, hardware choices need to be assessed based on the service the hardware will provide and the OpenStack architecture being deployed. For example, when adding hardware for a Nova node, administrators will probably prefer multicore CPUs to improve resource allocation per tenant. The amount of memory should also be in line with the flavors you ultimately plan to make available.
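Since memory sizing should line up with those flavors, it can help to review or define them up front. A quick sketch using the nova CLI; the flavor name and sizes here are hypothetical:

```bash
# List the flavors currently defined.
nova flavor-list

# Create a custom flavor: name, ID (auto-generated), RAM in MB, disk in GB, vCPUs.
nova flavor-create m1.custom auto 4096 20 2
```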
When adding hardware for a Glance node, remember that Glance is part of storage, so it's all about the disk. Administrators should consider solid-state drives to improve caching and decrease latency. You may also want to consider a dedicated storage network to improve image transfer performance.
When adding hardware for controller services, you may want to consider high availability, something we'll talk about in a later video. The Red Hat Cluster Suite, or Pacemaker, can be used to define that high-availability cluster.
When adding hardware for Swift, remember that Swift is our object store; again, it's storage. So, just like before, you should be focusing on solid-state drives. You also want enough memory, because one of the key operations Swift performs is replication. Again, a separate high-speed network might improve performance and reduce latency.
While Nova nodes can run and perform well on commodity hardware, Swift nodes might benefit from customized or tailored hardware.
So, scaling OpenStack. As we've discussed, we have different types of installers, whether it's the new Director or the Red Hat Enterprise Linux OpenStack Platform Installer, also known as Foreman. The goal with these enterprise installers is a configuration process that automatically deploys the appropriate services out to our multi-node OpenStack environment.
When do we scale? Well, frankly, when you're running out of resources. Most OpenStack services are stateless, so they can be deployed and stacked on various physical servers based on your needs. Every service is accessed through its API, and they all share a common message broker. You can scale your storage, network, or compute services and seamlessly integrate them within that existing infrastructure. That infrastructure can be scaled up or, frankly, even scaled down.
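One quick way to see how those services are spread across hosts is the nova CLI; a sketch, assuming admin credentials have already been sourced:

```bash
# Show each Nova service, the host it runs on, and its current state.
nova service-list

# List hypervisors, one entry per compute node.
nova hypervisor-list
```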
OpenStack offers features that allow you to migrate running resources from one node to another, which we are going to take a look at in a demonstration in a little bit. Instances can be migrated or live-migrated, depending on our storage configuration. We could be using shared storage to improve transfer speed and data security.
Of course, layered on top of all of this we have Red Hat CloudForms. CloudForms gives us the ability to monitor and measure those metrics and capacities. How much are we consuming? What is our resource utilization? What is running well? What is not? So, at this point, what I'd like to do is demonstrate how to add a new compute node and then show a migration where we move a running instance over to that new compute node.
So, in this demonstration, I am going to scale the infrastructure by adding a new compute node and then moving a running resource over to that node. Now, to be honest with you, the installation process can take a bit of time. So, instead of you staring at dead air and a bunch of stuff scrolling off the screen, I actually ran the routine ahead of time. Let's look at my terminal window here, and up near the top what you see is a little lab script that I'm highlighting called configure-migration,
and what the script did was run one of those three types of installers, in this case the one called Packstack. You may recall that Packstack uses Puppet modules to configure the various systems.
Now, before I ran this lab, I already had one system set up as an all-in-one implementation of OpenStack. It had all the controller mechanisms; it had compute, storage, and networking, all on what I term servera. As I run this lab script, it expands the configuration so that serverb is additionally a compute node, pointing back to servera for all of the other basic services, whether storage or other infrastructure services like Keystone or Horizon.
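Under the hood, expanding to a second compute node with Packstack typically comes down to the compute-hosts entry in the answer file. A hedged sketch; the exact parameter name varies between Packstack releases (CONFIG_COMPUTE_HOSTS in this era, CONFIG_NOVA_COMPUTE_HOSTS in some older ones), and the IPs match my servera/serverb example:

```bash
# In the Packstack answer file, list both hosts as compute nodes:
#   CONFIG_COMPUTE_HOSTS=172.25.0.10,172.25.0.11
# Then re-run Packstack against the updated answer file:
packstack --answer-file=/root/answers.txt
```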
If we scroll down through this a little bit, we will see that it SSHes over to serverb to configure connections there. A little further down you will see it installing some packages and configuring the network. For example, we will talk a little later about networking in OpenStack, where Open vSwitch is a component built into OpenStack that allows us to define software-defined networks. We needed to get that installed over on serverb, and Packstack took care of all of these elements for me.
As Packstack goes through the installation, we'll see a screen scrolling by, looking a little like this, with various elements being configured; it's focusing on the changes and configuration for us. You'll see here, for example, the api_nova.pp Puppet module being applied to 172.25.0.10 and 172.25.0.11. So, .10 is my servera and .11 is my serverb. We will see that occur for Nova, and we will see it occur for Neutron. But a few of them, like the infrastructure pieces up here, Keystone, Glance, Cinder, are only configured on .10, only on servera.
Once Packstack finishes up, my lab script provides some credentials, if I can get this to scroll to the right spot, and it identifies some administrative elements that I might want to use. For example, it automatically sets up a keystonerc_admin file. The purpose of that file is to give me command-line administrative access to the OpenStack services. I can source that file, and it will give me the credentials to execute most of the OpenStack commands, make changes, or look at the status of things.
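For reference, a keystonerc_admin file generally looks something like the sketch below; the password and endpoint are placeholders, not the lab's actual values:

```bash
# Typical contents of keystonerc_admin (values are placeholders).
export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export OS_PASSWORD=secret
export OS_AUTH_URL=http://172.25.0.10:5000/v2.0/

# Source it, then run OpenStack commands as the admin user:
#   source keystonerc_admin
#   nova list
```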
A little further down, what you'll see is it's preparing OpenStack and it's configuring some of the elements like the flavor that we want to use, making sure that the correct images are up there, disk-oriented things, but then it goes ahead and launches a couple of instances for me. And at the very end it tells me that, yes, the instance has been created, and so I passed. So, that identifies to me that the lab script did execute correctly. I've got a serverb up and running, but let's go ahead and double-check on some of this.
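If you wanted to launch a similar instance by hand rather than via the lab script, it would look roughly like this; the flavor, image, and network here are hypothetical:

```bash
# Boot an instance named MigrateMe from an existing flavor and image.
nova boot --flavor m1.small --image rhel7 --nic net-id=<private-net-id> MigrateMe
```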
So, where do we typically go to administer from a graphical perspective? The Horizon interface. Let me log in as adm1 with that password of redhat we've seen before. Again, I get that basic overview here under System. From here, I can take a look at Instances, and as I do, what I see are two running instances at the moment: one called Live-MigrateMe and the other called MigrateMe.
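The same check can be made from the command line; a sketch, assuming the keystonerc_admin credentials are sourced:

```bash
# Show which compute host the instance is currently running on.
nova show MigrateMe | grep OS-EXT-SRV-ATTR:host
```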
But notice over here in the second column that both of these are running on servera, the first of my compute nodes. If I go over to the far side, there are actions I can perform on those instances. If I click on the pulldown and scroll my window a little, we can see some red ones, red ones like Migrate Instance. This is what I'm going to choose to move this running instance over to serverb. Now, I'm going to do just a regular migrate; for live migration, I would have to have a common storage infrastructure on the backend, which I don't have right now.
So, let me click on MigrateMe to migrate it, say yes to migrate the instance, and what we see down here is that it's getting ready to migrate that instance. Now, this is going to transition through a few different states. It starts off by saying Resizing or Migrating, and, since we're not doing a live migration, this does mean we are stopping the instance before getting ready to move it over to serverb. After being in Resizing or Migrating for a little bit, it will shift its state to Finishing Resize or Migrate as it's doing those last couple of things. And lastly, it will transition to Confirm or Revert Resize/Migrate.
At that point we can go ahead and confirm, which completes the migration to the new host, or we could revert, which would restart the instance on the existing host. This will take a little bit while it shuts down that running instance and makes sure everything is in a good steady state for us to move on. We see that it's moved to its second state now, Finishing Resize or Migrate. So, a few more moments and we should see it ready for us to confirm the move.
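For the record, those Horizon actions have nova CLI equivalents; a sketch:

```bash
# Cold-migrate an instance; the scheduler picks the target host.
nova migrate MigrateMe

# Live-migrate to a specific host, which would require the shared
# storage this setup doesn't have.
nova live-migration Live-MigrateMe serverb
```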
Now, why might we want to migrate instances? Well, if you have a wealth of hosts out there and some you want to perform maintenance tasks on, you'll certainly want to migrate the instances off of those hosts so you can perform that scheduled maintenance. We can see it's gotten to that final status of Confirm or Revert Resize/Migrate, so over on the right-hand side I've got a new button that lets me confirm the Resize or Migrate.
Let me click on that, and we will see that it now has a status of Active. But if I look back in that second column, where do I see it running? It's now running on serverb, while the other instance, Live-MigrateMe, is still running on servera. Now, I could move it back to the original host by simply going through that same procedure: click on the down arrow over on the right, choose Migrate, and have it step through and move back to servera.
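That confirm/revert step also has CLI equivalents; a sketch:

```bash
# Confirm the migration once the instance reaches the confirm state:
nova resize-confirm MigrateMe

# Or revert, restarting the instance on its original host:
nova resize-revert MigrateMe
```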
So, here we have seen how we can scale our compute nodes. We can add compute nodes using one of the installers; in my case, I showed you doing it with Packstack. Once I have that additional node, I can distribute some of my instances between nodes by shifting, in this case, one of my two instances over to the new compute node. Or I could migrate instances because I have maintenance tasks to perform on the underlying physical infrastructure of my environment.
So, in this section we focused on Nova, the component of Red Hat Enterprise Linux OpenStack Platform that satisfies the compute element. We've seen that we can scale that compute environment to satisfy whatever you need in setting up that private cloud. Now that we've seen that in this video, let's get ready to move on to the next one.
Jeremy is a Content Lead Architect and DevOps SME here at Cloud Academy where he specializes in developing DevOps technical training documentation.
He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 25+ years. In recent times, Jeremy has been focused on DevOps, Cloud (AWS, Azure, GCP), Security, Kubernetes, and Machine Learning.
Jeremy holds professional certifications for AWS, Azure, GCP, Terraform, Kubernetes (CKA, CKAD, CKS).