
Storage in Red Hat Enterprise Linux OpenStack Platform

Developed with Red Hat
Overview

Difficulty: Intermediate
Duration: 2h 2m
Students: 141
Rating: 4.5/5

Description

This course covers the Red Hat OpenStack Platform, a flexible infrastructure project that allows you to virtualize your cloud resources and use them when you need them. The course kicks off with an introduction to the basics of cloud computing, before defining the Red Hat OpenStack Platform and explaining how it can be used in conjunction with compute, storage, and network functions. The course also explains the ways in which OpenStack is highly available and, finally, covers deployment of the platform. Demonstrations and use cases throughout the course show how the Red Hat OpenStack Platform can be used in real-world situations.

Learning Objectives

  • Learn the basics of cloud computing
  • Understand what Red Hat OpenStack Platform is
  • Learn how Red Hat OpenStack works with compute, storage, and network resources
  • Learn how to deploy the Red Hat Enterprise Linux OpenStack Platform

Intended Audience

  • IT leaders, administrators, engineers, and architects
  • Individuals wanting to understand the features and capabilities of Red Hat OpenStack Platform

Prerequisites

There are no prerequisites for this course.

Transcript

So, in this video, again our theme is expanding or scaling the OpenStack environment, and I want to take a look at storage in OpenStack. In the earlier pictures of the list of services, we saw that there were three distinct storage components. So, I want to take a look at what exactly those three components are doing and how they operate, and at some choices we might have for the backends of those storage elements that might help us extend our storage capabilities.

So, here in this slide, we again see the basic architecture of Red Hat Enterprise Linux OpenStack Platform and the basic services. Now, remember, block storage is managed by Cinder. The block storage service is designed to allow the use of a reference implementation, such as LVM, to present storage resources to end users that can be consumed by the OpenStack Compute project (Nova).

Swift is Object Storage offering cloud storage software to store and retrieve data with an API. Swift can be used to store unstructured data like files.

Image storage is managed by Glance. The image store offers users a catalog in which to store and retrieve their cloud images, and it supports various formats such as raw images, qcow2 images, or even Amazon (AMI) images.

Now, behind each of these storage elements, Glance, Swift, and Cinder, we can place connections to various types of storage area networks. Cinder, for example, can simultaneously manage multiple backends such as LVM over iSCSI, Red Hat Ceph Storage, or Red Hat Gluster Storage, as pictured here.

In order to interface with those new storage backends, administrators will, of course, need to configure them. You will need to define credentials that allow Cinder to access the backend, declare the new backend within Cinder, and define a new storage type; then we should be able to work with it.
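As a rough sketch, not a definitive configuration, declaring an additional backend in /etc/cinder/cinder.conf might look something like this (the backend names and credential values here are purely illustrative):

    # /etc/cinder/cinder.conf -- illustrative multi-backend setup
    [DEFAULT]
    # each name refers to a stanza below
    enabled_backends = lvm,vendor_example

    [lvm]
    volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
    volume_group = cinder-volumes
    volume_backend_name = LVM_iSCSI

    [vendor_example]
    # hypothetical third-party array; the driver class is vendor-supplied
    volume_driver = <vendor driver class>
    volume_backend_name = VENDOR_EXAMPLE
    # credentials that let Cinder reach the array (option names vary by driver)
    san_ip = 192.0.2.10
    san_login = cinder
    san_password = <secret>

After restarting the volume service, a volume type ties user requests to the new backend:

    cinder type-create vendor-tier
    cinder type-key vendor-tier set volume_backend_name=VENDOR_EXAMPLE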

So, here's a picture of the Cinder environment. You see our compute nodes in the upper right and our users in the upper left. Accessing block storage through an API service, through that common message broker of OpenStack, they're able to reach the Block Storage volume service. But then notice, off to the lower right, the Block Storage volume providers. This is where I could configure my different types of backends for Cinder.

Swift operates in a similar fashion. We can connect Swift to various object backends, so we can use things like Red Hat Ceph Storage or Red Hat Gluster Storage. We can then expose the backend as a file system on top of a network drive or use native connectors. For example, there is a Red Hat Ceph Storage gateway that exposes an S3-compatible interface.
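As a small, hedged illustration: once such a gateway is in place, any S3-style client can store objects in the cluster. The endpoint name here is a placeholder, and a real client would also need access keys configured:

    # create a bucket and upload an object through the S3-compatible gateway
    # (radosgw.example.com is hypothetical; use your gateway's address)
    s3cmd --host=radosgw.example.com mb s3://demo-bucket
    s3cmd --host=radosgw.example.com put backup.tar.gz s3://demo-bucket/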

Now, Glance, of course, is pretty tied into Nova, since it's providing the images to our compute environment. A cloud image usually embeds a bootable partition, and that partition is capable of running an operating system that has been sealed. What do we mean by sealed? Well, if I'm going to use that image for multiple instances, then certain unique identifying information cannot reside inside the image. So, images need to be sealed before they're uploaded to Glance: remove all of the unique information like a hardware MAC address, an IP address, a hostname, or unique keys that may have been established on that image.
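A minimal sketch of that sealing step, using the libguestfs virt-sysprep tool and the Glance CLI (the image file name is illustrative):

    # strip host-specific state: SSH host keys, MAC-bound udev rules,
    # hostname, log files, and so on
    virt-sysprep -a rhel-guest.qcow2

    # upload the sealed image to Glance
    glance image-create --name rhel-guest --disk-format qcow2 \
        --container-format bare --file rhel-guest.qcow2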

Now, Glance is able to store a number of different image types such as raw images, qcow2 images, and Amazon images, as I mentioned before. And while Glance will use a local drive by default, we can connect Glance to external storage providers. What's interesting about this is that I can actually connect Glance to Swift or Cinder to function as my backend storage, or directly to a storage provider like Red Hat Ceph Storage.
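For instance, pointing Glance at Ceph rather than the local drive comes down to a few settings in /etc/glance/glance-api.conf. This is a sketch, and the pool and user names are conventional defaults rather than requirements:

    [glance_store]
    stores = rbd
    default_store = rbd
    rbd_store_pool = images
    rbd_store_user = glance
    rbd_store_ceph_conf = /etc/ceph/ceph.conf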

I've brought up Red Hat Ceph Storage a few times here, so let's take a closer look at the Ceph Storage project. The Ceph project was born at UCSC in 2003. It was initiated to solve a scalability problem: we needed a parallel distributed file system that could be used generally for large-scale cluster computing. The Ceph project was incubated by DreamHost and quickly became included in the Linux kernel as well as in many open source distributions. The Ceph design is by nature scalable. It's also fault-tolerant, with no single point of failure. It's flexible, due to the numerous ways of interacting with the cluster, and it's unified, in that the various ways of accessing the cluster all follow the same approach.

So, here I've got my Ceph object store at the bottom, that fault-tolerant, no-single-point-of-failure, scalable structure. But then the interfaces listed up top identify a Ceph gateway, which can provide objects; a Ceph block device, which can provide virtual disks; and a Ceph file system, which can provide files and directories.

Gee, this sounds very familiar, these are just the sorts of things OpenStack might be looking for. So, through the Swift API, the Cinder API, and the Glance API, OpenStack can store its information in Ceph, and even Keystone, with its identity management elements, can do the same.

Cinder can store virtual machines as well as volumes in the cluster through Ceph. A new backend for Ceph would need to be set up, of course. Glance can store cloud images in the cluster the same way.
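Sketching that out, the Ceph backend stanza in cinder.conf would look roughly like this (the pool, user, and UUID values are placeholders):

    [ceph]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    volume_backend_name = ceph
    rbd_pool = volumes
    rbd_user = cinder
    rbd_ceph_conf = /etc/ceph/ceph.conf
    # UUID of the libvirt secret that holds the cinder user's Ceph key
    rbd_secret_uuid = <uuid-of-libvirt-secret>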

Swift, you'll notice, is over on the left, storing objects. So, here we see the object gateway and the block device, two of the interfaces provided by Ceph. Using Ceph to store user data allows administrators to work with a single fault-tolerant cluster instead of mixed storage technologies, which increases efficiency and decreases infrastructure costs.

Ceph provides many gateways to access the cluster. In fact, its gateway is able to connect with Keystone, as we see, providing a unified authentication platform for both the cluster and OpenStack services and users. Ceph data is stored as objects, which avoids the traditional file system limitations we run into in some of our network-attached storage implementations. Moreover, administrators can use Ceph features such as clones and snapshots, which provide a flexible way to manage and secure OpenStack data directly within Ceph.

In fact, Ceph provides us with a Calamari console, which gives administrators the ability to manage and administer their Ceph cluster from a web console. Here we see an example of one where the health is OK, of course, or at least it was 38 seconds ago.
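The same health information is available at the command line, for anyone who prefers it to the web console:

    ceph health    # one-line summary, e.g. HEALTH_OK
    ceph -s        # fuller status: monitors, OSDs, placement groups, usage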

Now, we can extend storage capabilities from within OpenStack. The OpenStack storage services follow a very specific set of specifications covering connections, API calls, authentication routines, and the implementation itself, and vendors expose their storage through OpenStack drivers. Cinder, Glance, and Swift all support these external vendors. So, we can connect to the Red Hat storage facilities, but we can also integrate with various third-party storage elements.

So, storage is naturally scalable and, in fact, by using Red Hat Ceph Storage under the hood for our OpenStack services, we get that ultimate fault-tolerant, scalable storage solution to work with Red Hat Enterprise Linux OpenStack Platform.

So, now that we've taken a look at some storage strategies, let's get ready to move on to our next video.


About the Author


Jeremy is the DevOps Content Lead at Cloud Academy where he specializes in developing technical training documentation for DevOps.

He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 20+ years. In recent times, Jeremy has been focused on DevOps, Cloud, Security, and Machine Learning.

Jeremy holds professional certifications for both the AWS and GCP cloud platforms.
