
Highly Available Red Hat Enterprise Linux OpenStack Platform

Developed with
Red Hat
Overview
Difficulty: Intermediate
Duration: 2h 2m
Students: 124
Rating: 4.4/5

Description

This course covers the Red Hat OpenStack Platform, a flexible infrastructure project that allows you to virtualize your cloud resources and use them when you need them. The course kicks off with an introduction to the basics of cloud computing, before defining the Red Hat OpenStack Platform and explaining how it can be used in conjunction with compute, storage, and network functions. The course also explains the ways in which OpenStack is highly available, and finally, it talks about deployment of the platform. Demonstrations and use cases throughout the course allow you to see how the Red Hat OpenStack Platform can be used in real-world situations.

Learning Objectives

  • Learn the basics of cloud computing
  • Understand what Red Hat OpenStack Platform is
  • Learn how Red Hat OpenStack works with compute, storage, and network resources
  • Learn how to deploy the Red Hat Enterprise Linux OpenStack Platform

Intended Audience

  • IT leaders, administrators, engineers, and architects
  • Individuals wanting to understand the features and capabilities of Red Hat OpenStack Platform

Prerequisites

There are no prerequisites for this course.

Transcript

So, now in this video, what we want to do is take a look at implementing Red Hat Enterprise Linux OpenStack Platform in a highly available configuration because let's face it, if we're going to rely on this as a core infrastructure piece, fault tolerance is going to be key. Now, high availability, of course, refers to a system or a component that is continuously operational for a desirable length of time. It can be measured relative to the required duration of service availability.

Such continuity of service is often referred to as uptime. Service level agreements (SLAs) between a solution provider and the customer will determine this availability. For example, 99.9% uptime would allow the service to be down for only about 8.76 hours per year, while 99% would allow roughly 87.6 hours. High availability can be designed as a set of practices to ensure the continuity of service and prevent downtime.
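That downtime budget is just the complement of the uptime fraction applied to the hours in a year (365 × 24 = 8,760). A quick sketch of the arithmetic (the 99% figure here is illustrative):

```shell
# Allowed downtime per year for a given SLA uptime percentage.
awk 'BEGIN {
  uptime = 99.0                  # SLA uptime, in percent (illustrative)
  hours_per_year = 365 * 24      # 8760
  printf "%.2f hours/year\n", (100 - uptime) / 100 * hours_per_year
}'
# prints "87.60 hours/year"; with uptime = 99.9 it prints "8.76 hours/year"
```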

While OpenStack does not natively provide high availability, it can, nevertheless, be made highly available. Now, when I say highly available, what kind of designs am I talking about? Well, one type of design is called active/passive, and this is where the infrastructure runs with an inactive replica. That replica is only brought online if the active system, the main system, fails. So, there is some delay during that failover.

As an alternative to that, we have active/active systems, and in this configuration, we have two or more systems that are running simultaneously and most likely sharing the data. Now, this is an interesting solution because it can also give us some load balancing, as both systems are active at the same time.

The reality for many configurations, though, is that we tend to run with a combination of active/passive and active/active. I mean, think about the OpenStack environment and all the different services that we have in there. Some services may make sense to configure in an active/passive configuration, and other services may make more sense in an active/active configuration.

Even if we're just talking about regular applications, an example here might be web servers functioning in an active/active configuration because they're sharing the load, while the backend of those web servers, the database server, might be in an active/passive configuration because only one database server is active at a time.

So, let's talk a little bit about the elements within OpenStack as we try to assess how we might want to do this. Now, as I mentioned, there are a lot of services in Red Hat Enterprise Linux OpenStack Platform, and we need to consider each of those services, in essence one at a time, as to how we might want to configure them. For example, in the lower left here, we have the controllers, which refer to infrastructure services like Keystone, Horizon, and RabbitMQ: Keystone, our identity service; Horizon, our web administrative interface; and RabbitMQ, that shared messaging bus.

But then we have our storage elements like Glance and Cinder, and we have compute elements like Nova. Of course, backing a lot of this is probably a database storing all of the various objects, so we may want to look at technologies that can provide some fault tolerance there.

In fact, Swift itself has some built-in technologies to perform replication between the various Swift storage elements. Of course, I'd probably take a look at putting Red Hat Ceph Storage behind these various storage elements, since it has high availability built into its design.

Red Hat offers, with Red Hat Enterprise Linux, something called the High Availability Add-On. Historically, we called this the Red Hat Cluster Suite, but with each version of Red Hat Enterprise Linux, we have some form of high availability that we can establish. And how does that High Availability Add-On work? Well, let's say we wanted to make a web server highly available, and in this case, we've designed it in an active/passive configuration. So, a web client goes to what it supposes is an Apache web server, looking at a virtual IP address. Your DNS entries for the hostname of that web server would point to that particular virtual IP address. The High Availability Add-On, in this case Pacemaker, would redirect that virtual IP request over to the active node, in this case Node 1, and my content will be served up from the shared storage on the backend.
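On a High Availability Add-On cluster, that active/passive Apache setup might be defined with the pcs command line roughly as follows. This is a sketch only; the IP address, netmask, and resource names are illustrative, and exact options vary by release:

```shell
# Floating virtual IP that the web server's DNS name points at:
pcs resource create vip ocf:heartbeat:IPaddr2 \
    ip=192.168.0.100 cidr_netmask=24 op monitor interval=30s

# The Apache web server resource itself:
pcs resource create website ocf:heartbeat:apache \
    configfile=/etc/httpd/conf/httpd.conf op monitor interval=1min

# Keep the IP and the web server on the same (active) node,
# and bring the IP up before starting Apache:
pcs constraint colocation add website with vip INFINITY
pcs constraint order vip then website
```

With these constraints in place, Pacemaker moves both resources together to the surviving node on failover.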

Now, as we are monitoring Node 1, making sure that it's functioning correctly, all of a sudden we detect that something has gone wrong. We need to ensure that Node 1 doesn't accidentally wake from its slumber and start writing to that shared storage while we're firing up Node 2, shifting it from passive to active. And so we also configure a power fencing device. This allows us to ensure that Node 1, when we deem it unhealthy, is truly dead; we shoot it in the head by turning off the power.
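That power-off step is what Pacemaker calls STONITH ("shoot the other node in the head"). A hedged sketch of registering an IPMI-based fence device, where the address, credentials, and node name are placeholders and parameter names can vary by fence-agents version:

```shell
# Hypothetical STONITH (power fencing) device for node1 via its IPMI interface.
pcs stonith create fence-node1 fence_ipmilan \
    ip=10.0.0.11 username=admin password=secret \
    pcmk_host_list=node1
```

Once registered, Pacemaker powers node1 off through this device before promoting the passive node, so a half-dead node can never corrupt the shared storage.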

So, Node 2 comes up, it then is serving that content for us. And so this is the idea behind, again, what we used to call the Red Hat Cluster Suite and we're now referring to as the High Availability Add-On.

So, if I'm going to take that Red Hat Enterprise Linux add-on and look to put our OpenStack Platform elements into a highly available configuration, it might look a little bit like this. You'll notice the Pacemaker-clustered load balancer, so more of an active/active configuration, but we're still doing that virtual IP address on the frontend for Horizon, which then points down to a Horizon cluster, an active/active cluster, where I've got three nodes running the Horizon interface. We've got Keystone with its virtual IP again, serviced by clustered services on the backend.

So, all of these individual services are configured with Pacemaker, each having a virtual IP that is then served out to the various requests. And then for MySQL, we'll rely on Galera to go ahead and provide us with that clustered configuration.
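In pcs terms, the active/active services in a diagram like this are typically clone resources (one copy per controller), while Galera-backed MySQL runs as a multi-state resource managed by a Galera resource agent. The sketch below is an assumption-laden illustration: the resource names, node names, and agent options are placeholders, not a verbatim deployment recipe:

```shell
# Active/active: run the Horizon (httpd) resource on every controller node.
pcs resource create horizon-web ocf:heartbeat:apache \
    configfile=/etc/httpd/conf/httpd.conf
pcs resource clone horizon-web

# Galera-replicated MySQL across three controllers, promoted on all of them:
pcs resource create galera ocf:heartbeat:galera \
    wsrep_cluster_address="gcomm://ctrl1,ctrl2,ctrl3" \
    meta master-max=3 --master
```

The virtual IPs sit in front of these clones, so any controller can answer a request for Horizon or Keystone while Galera keeps the database consistent across all three.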

Now, remember, I mentioned this earlier: if we implement Red Hat Ceph Storage, it already implements its own high availability mechanisms. So administrators would only need to configure high availability for the OpenStack core services, and Ceph would be highly available on its own for however many of the OpenStack storage services we point at it.

So, high availability for OpenStack is going to provide us with the following benefits. Continuity of service: the data is replicated and shared, greatly decreasing downtime periods. Ease of maintenance: infrastructure elements can be added or removed with no impact on service availability. For example, I could take Horizon C3 out of the picture, update it, and see if it works. If it's good, I'll bring it back into the cluster, then take C1 and C2 offline so that I can update those, with the newer version already running for those that need it.
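That rolling-maintenance flow maps onto Pacemaker's standby mode. A small sketch, where the node name is illustrative and the exact subcommand (`pcs node standby` vs. the older `pcs cluster standby`) depends on the pcs version:

```shell
# Take one Horizon node out of service; its resources move to the other nodes.
pcs node standby horizon-c3

# ...apply updates on horizon-c3 and verify them...

# Rejoin the cluster; Pacemaker rebalances resources back onto it.
pcs node unstandby horizon-c3
```

Repeating this one node at a time upgrades the whole cluster with the service available throughout.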

Ultimately, it'll increase my service performance because, frankly, with an active/active cluster, bottlenecks and single points of failure are removed. When someone goes to that virtual IP address for Glance, I don't care which of the three underlying services inside the cluster provides the answer, and so the load gets to be shared.

And data security: from the end-user data to the OpenStack configuration, everything is being replicated, better securing our environment. So, it is possible to configure OpenStack in a highly available configuration using something like Pacemaker. In fact, we've been developing Puppet modules to integrate into our enterprise installers to help ease the implementation here. But from this diagram, you could create your own highly available environment.

And so now that we've looked at implementing OpenStack in a highly available configuration, let's get ready to head on to our next video.

 

About the Author


Jeremy is the DevOps Content Lead at Cloud Academy where he specializes in developing technical training documentation for DevOps.

He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 20+ years. In recent times, Jeremy has been focused on DevOps, Cloud, Security, and Machine Learning.

Jeremy holds professional certifications for both the AWS and GCP cloud platforms.
 
