This course covers the Red Hat OpenStack Platform, a flexible infrastructure project that allows you to virtualize your cloud resources and use them when you need them. The course kicks off with an introduction to the basics of cloud computing, before defining the Red Hat OpenStack Platform and explaining how it works with compute, storage, and network functions. The course also explains how OpenStack achieves high availability and, finally, covers deployment of the platform. Demonstrations and use cases throughout the course allow you to see how the Red Hat OpenStack Platform can be used in real-world situations.
Learning Objectives
- Learn the basics of cloud computing
- Understand what Red Hat OpenStack Platform is
- Learn how Red Hat OpenStack works with compute, storage, and network resources
- Learn how to deploy the Red Hat Enterprise Linux OpenStack Platform
Intended Audience
- IT leaders, administrators, engineers, and architects
- Individuals wanting to understand the features and capabilities of Red Hat OpenStack Platform
Prerequisites
There are no prerequisites for this course.
Now, in this video, what we want to do is take a look at a particular use case: implementing Infrastructure-as-a-Service as a private cloud. So, in this particular example, we're going to talk about a web application company. They're facing a number of challenges, like many of you out there, in meeting the demanding needs of their internal IT services as well as their DevOps application development. The company also has some legacy applications that run on older operating systems that are no longer supported on modern hardware. I'm sure none of you have ever faced that. The Red Hat Enterprise Linux OpenStack Platform looks to deliver everything this company needs for their private cloud infrastructure.
So, what sort of requirements were put into place here? We want to be able to scale easily, allowing the company to expand in the future. We need to provide increased density over their legacy hardware-based applications. The platform also needs to provide a secure open source platform capable of running both open source and proprietary operating systems. It also needs to provide self-service deployment. We need quicker response times, giving developers the ability to create their own virtual environments in seconds instead of hours, days, or weeks. Remember, most IT departments are a bit busy. These new requests and new applications take them away from managing the existing infrastructure. And so being agile, flexible, and able to support those developers is what's needed.
So, what's the solution to this? Well, the solution is the Red Hat Cloud Infrastructure, as we were just talking about. Now, in that environment, what we can look to do is establish some orchestration, orchestration that can support not only application deployments but also the infrastructure layers. In particular, look down here at the bottom. I want to do some network orchestration. I want to automatically set up load balancing, service chaining, and the quality-of-service rules that we want to establish, and have this tie into my existing standard network switches.
Various resources need to be able to be provisioned automatically. I need to be able to have instances spun up based on some predefined requirements, and then I want to be able to deploy my applications. Now, in this particular picture, we are focusing on the use of Heat within the Red Hat Enterprise Linux OpenStack Platform.
Using this, the company was able to build templates that define interdependent VMs. Remember, multiple VMs tied together into a stack that can be deployed at the press of a button or from a single command. For example, if the company needs a web server that depends on a database server, such VMs can be built from a template that configures the VMs as they boot, passing the information they need, such as IP addresses, between the VMs as needed. The web servers, of course, would require the IP address of the database server for their configuration, because that's their backend.
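To make that concrete, here is a minimal sketch of what such a Heat Orchestration Template (HOT) might look like. The image, flavor, and network names are illustrative placeholders, and the boot script merely writes the database address to a hypothetical config file; a real template for this company would carry its own software configuration.

```yaml
heat_template_version: 2016-10-14

description: >
  Illustrative two-tier stack: a web server that receives the
  database server's IP address at boot time.

parameters:
  image:
    type: string
    default: rhel-guest-image   # placeholder image name
  flavor:
    type: string
    default: m1.small
  network:
    type: string
    default: private-net        # placeholder network name

resources:
  db_server:
    type: OS::Nova::Server
    properties:
      image: { get_param: image }
      flavor: { get_param: flavor }
      networks:
        - network: { get_param: network }

  web_server:
    type: OS::Nova::Server
    properties:
      image: { get_param: image }
      flavor: { get_param: flavor }
      networks:
        - network: { get_param: network }
      user_data:
        str_replace:
          template: |
            #!/bin/bash
            # Hand the database address to the web tier at boot
            echo "DB_HOST=db_ip" >> /etc/sysconfig/webapp
          params:
            db_ip: { get_attr: [db_server, first_address] }

outputs:
  db_address:
    description: IP address of the database server
    value: { get_attr: [db_server, first_address] }
```

Because the web server's `user_data` references an attribute of `db_server`, Heat brings the database up first and then passes its address along, which is exactly the dependency ordering described above. A stack like this could then be launched with a single command such as `openstack stack create -t two-tier.yaml web-stack`.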
Now, in the Heat environment, we can go ahead and see this in the Heat interface, and see the various elements being tied together. Each of these little circles represents one of those resources that's been tailored by the cloud operator. For example, certain groups within an organization could be limited to deploying only a certain number of VMs, CPUs, or amount of RAM.
The tuning can play into that self-service ability of the end user to deploy on-demand. I mean, let's face it, sometimes developers forget to turn things off when they're done with them, and so these types of templates and self-service can be monitored. You can establish your quotas.
OpenStack also possesses multitenant capabilities, making it possible to separate users into different tenants, sort of like different groups, and to set limits on each of those groups rather than setting them on individual users.
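As a sketch of how those per-tenant limits might be established from the command line, the following uses the standard `openstack` client; the project name and quota values here are made up for illustration.

```shell
# Create a tenant (project) for a development group -- name is illustrative
openstack project create --description "Web app dev team" dev-team

# Cap what the whole group can consume, regardless of individual users:
# at most 10 instances, 20 vCPUs, and 40 GiB of RAM
openstack quota set --instances 10 --cores 20 --ram 40960 dev-team

# Review the limits currently in force for that project
openstack quota show dev-team
```

Because the quota applies to the project as a whole, a developer who forgets to tear down an environment simply exhausts the group's allowance rather than the cloud's capacity, which is the monitoring-and-limits behavior described above.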
Now, because OpenStack can easily scale out, we can add more compute nodes to this environment as demand increases. We can upgrade hardware as needed. We can remove nodes. It's all very easy and seamless in our operation. The hardware can actually be heterogeneous, different levels of hardware that we have acquired at different times. We can live migrate VMs between these compute nodes.
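A rough sketch of what that day-to-day scaling work can look like with the `openstack` client is below; the server and host names are placeholders, and the exact live-migration flags vary between OpenStack client releases.

```shell
# See which compute nodes are currently serving the cloud
openstack compute service list --service nova-compute

# Drain a node before maintenance by live-migrating a running VM off it
# (newer clients use --live-migration; older ones used --live <host>)
openstack server migrate --live-migration my-vm

# Check the VM is active again on its new home
openstack server show my-vm -c status -c OS-EXT-SRV-ATTR:host
```

Adding capacity is just the reverse: install a new compute node, let it register its `nova-compute` service, and the scheduler starts placing instances on it.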
Now, OpenStack provides a pluggable network architecture, and many network hardware vendors have written plug-ins for OpenStack that allow the infrastructure to configure and automatically provision networks on demand.
Software-defined networking allows companies that do not have high-end network hardware to create virtual networks that will not conflict with their existing infrastructure. Such features provide great flexibility with the networks that are often used within OpenStack.
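For a feel of that on-demand provisioning, here is a minimal sketch using standard Neutron commands through the `openstack` client; the network names and address range are hypothetical.

```shell
# Create an isolated tenant network -- it exists only in software,
# so it cannot clash with the physical switch configuration
openstack network create private-net
openstack subnet create --network private-net \
    --subnet-range 192.168.10.0/24 private-subnet

# Attach it to a virtual router for outside connectivity
openstack router create dev-router
openstack router add subnet dev-router private-subnet
```

Each tenant can repeat this with overlapping address ranges, because the networks are isolated from one another and from the physical infrastructure.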
But remember, we acquired this as part of Red Hat Cloud Infrastructure. RHCI empowers you to build and manage a private Infrastructure-as-a-Service cloud based on data center virtualization and management technologies for traditional workloads. It also provides an on-ramp to a highly scalable, public-cloud-like infrastructure based on OpenStack, so you can manage both worlds from a single interface.
And so this web application company that we're looking at here was able to reduce costs, be more responsive to their DevOps community, and ultimately continue to move forward and innovate as those developers were looking to do. Now that we've taken a look at this use case, it's time for us to go on to our next video.
Jeremy is a Content Lead Architect and DevOps SME here at Cloud Academy where he specializes in developing DevOps technical training documentation.
He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 25+ years. In recent times, Jeremy has been focused on DevOps, Cloud (AWS, Azure, GCP), Security, Kubernetes, and Machine Learning.
Jeremy holds professional certifications for AWS, Azure, GCP, Terraform, Kubernetes (CKA, CKAD, CKS).