In this course, you will learn about the technical platforms that Red Hat offers for integration and messaging. The course begins with a comprehensive look at the OpenShift Container Platform and then dives into Red Hat AMQ, which allows you to asynchronously connect various application platforms and exchange information reliably between them. Moving on to Red Hat Fuse, you will learn how to connect disparate systems through technologies such as Apache Camel. The course also looks at Red Hat 3scale API Management, a highly versatile platform for controlling APIs. Finally, a demonstration shows you how these three technologies can be combined in an example that implements a Camel route to follow a Twitter account and then translates the Twitter feed into a particular language.
Learning Objectives
- Gain an in-depth knowledge of the OpenShift Container Platform
- Learn about Red Hat's technical platforms and how they can be used
Intended Audience
This course is intended for:
- System administrators, architects, developers, and application administrators
Hi, my name is Grega Bremec, and welcome to DO040, the Agile Integration Technical Overview. I've been a Red Hat trainer for about ten years and have worked with open source technologies for the past 20, and we're going to look together at a couple of technical platforms that are included in the integration portfolio.
Those would be Red Hat Fuse, a distributed integration platform that uses Camel and other technologies to let you integrate a variety of disparate systems; Red Hat AMQ, which allows you to asynchronously connect various application platforms and exchange information between them in a reliable and secure way; and ultimately, the Red Hat 3scale API Management platform, which gives you the ability to control which of the REST and SOAP APIs hosted on your on-premise or cloud platform you expose, to apply rate limits to API consumers, and to implement chargebacks and similar features.
First, though, we're going to have a look at the OpenShift Container Platform, which can be used as a facilitator for the enablement of all of these technologies. The OpenShift Container Platform is a Platform as a Service environment based on Kubernetes and a couple of other technologies. At its core, it uses a container engine called CRI-O, which is a standards-based container runtime engine.
It uses a number of Kubernetes container orchestration features to enable containers to be deployed and managed in the platform, and provides a certain set of Kubernetes extensions which enrich the existing functionality of Kubernetes and turn it into a full-blown Platform as a Service environment, complete with multi-tenancy, support for building container images inside the platform, and other features.
It comes with a set of containerized services that enable us to implement enterprise authentication and authorization as well as extensible, pluggable networking solutions, and it provides a set of ready-to-use container images on top of which we can deploy our applications in the OpenShift Container Platform. OpenShift Container Platform essentially consists of a set of compute nodes which we can broadly divide into two categories: the control plane, which is responsible for enabling us to communicate with the container platform and tell it what to do, what we want to deploy, and how we want it organized; and a usually larger set of worker nodes, also called compute nodes, which are there to run our containerized applications and make sure they are made available to external users where necessary.
As we said, the container platform comes with a set of containerized internal services which take care of exposing applications to external users, providing applications with persistent storage, and enabling applications to find and establish contact with each other using facilities built into the platform. From the perspective of an end-user developer, it provides an easy way to communicate with it using command-line utilities such as oc (the OpenShift client) and the web console, an easy-to-use interface through which developers tell the container platform what they want it to do. Typically it will also come with add-ons such as logging and monitoring, which allow us to monitor application resource consumption and review the logs of the events that have happened so far in our applications.
So let's look at a simple demonstration that shows how easy it is to deploy an application on OpenShift Container Platform. I'm going to log in to a cluster that I've already provisioned for the purposes of this tech overview with a username and password. There we go. I'm going to create a simple project that I'll use to deploy a trivial application called PHP HelloWorld, which basically just says, "Hello." I can also show you that application's source code in the set of resources we have available for this demonstration. It's a really trivial application that consists only of an index.php file, which simply says "Hello, world! This is a simple PHP application!" and prints out the hostname of the container it's running in.
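For reference, the commands behind these first steps look roughly like the following sketch. The cluster URL, credentials, project name, and the exact contents of index.php are placeholders inferred from the narration, not values taken from the course materials.

```bash
# Log in to the cluster (URL and credentials are hypothetical placeholders)
oc login https://api.cluster.example.com:6443 -u developer -p developer

# Create a project to hold the demo application
oc new-project php-helloworld

# The application source is a single index.php along these lines
# (an approximation of the file described in the narration):
cat > index.php <<'EOF'
<?php
echo "Hello, world! This is a simple PHP application!";
echo "Hostname: " . gethostname();
EOF
```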
Once the source code is available somewhere OpenShift can reach it, deploying the application is as easy as logging in, selecting the project we created previously, and then telling OpenShift to create a new application from the source code in our GitHub repository, specifying that the subdirectory we want to deploy from, the context directory inside the source code, is php, and naming the application the way we want. OpenShift then automatically detects the language the application is written in and selects the appropriate base container image, as we see: it says here it's going to use the PHP image stream for PHP applications, and it creates all the resources that are required for our application to be deployed.
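A minimal sketch of that single command, assuming a hypothetical repository URL and the application name hello used later in the demonstration:

```bash
# Build and deploy from source; the repository URL is a placeholder
oc new-app https://github.com/example/agile-integration-demo.git \
    --context-dir=php \
    --name=hello
```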
So, a word or two while we're waiting for the application to be built. There we go, we have an overview of the resources even in the web console. A word or two about the relationships between the resources being created. oc new-app is a very versatile command that can be used to create and deploy applications to OpenShift, either from application source code or from a pre-existing container image (that's also a possibility), and depending on how we invoke it, it creates different kinds of resources inside OpenShift. Most specifically, the application image that we're deploying, in the case where we're deploying a pre-created image, is being monitored; for that, a special resource called an image stream is created, and that image stream can detect whenever an application image has changed and automatically trigger a new deployment.
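To see the image stream that oc new-app created for the hello application, something along these lines would work; the output details will vary by cluster:

```bash
# List image streams in the current project
oc get imagestreams

# Show where the hello image stream points and which tags it tracks
oc describe is hello
```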
The deployment of the application itself is controlled by a resource called the deployment configuration, a multi-generation resource that can roll forward and roll back if our application needs to be rolled back. That deployment configuration supports several different strategies for rolling out our application, for example a rolling deployment or a recreate deployment. In any case, those strategies are implemented by certain internal containerized components called deployers. Deployers in turn control yet another resource, which is not created by OpenShift directly but by the deployment configuration: the replication controller. Replication controllers can be used to easily scale our applications up and down according to our resource requirements.
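Rolling forward and back is driven through the oc rollout subcommands; a sketch against the hello deployment configuration from this demo:

```bash
# Trigger a new rollout of the hello deployment configuration
oc rollout latest dc/hello

# Roll back to the previous successfully deployed revision
oc rollout undo dc/hello

# Watch the rollout progress until it completes
oc rollout status dc/hello
```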
So if an application is under heavy load, a replication controller can be used to scale that application to more than one instance. It is also a resource that monitors the instances we've deployed; if they crash, or someone deletes them, it will automatically ensure that at any time we have the specified number of application instances running in our project. A service resource is also created, and a Kubernetes service is basically a very simple resource that monitors the deployed applications based on a label filter; similarly to how the replication controller keeps a set of application instances in existence, the service gives potential application clients within OpenShift access to those existing instances.
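Scaling is a one-liner against the deployment configuration; the replication controller then converges the actual pod count to the requested one:

```bash
# Scale the hello application to three instances
oc scale dc/hello --replicas=3

# The replication controller replaces any pod that disappears;
# watching the pod list shows replacements being started
oc get pods -w
```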
When we use oc new-app with source code, we get an additional resource called the build configuration, which determines how the application source code is going to be embedded into the base image that we've selected or that OpenShift has detected it needs to use. Once the build is finished, the application image is stored inside OpenShift for the deployment configuration to deploy. I believe in the meantime our application has already been built and deployed, so let's look at the terminal and have a look at the trace of what's been going on. Ultimately, we want our application to be deployed in what we call a pod.
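The build configuration can be rerun and inspected with the build-related oc subcommands; a sketch for this application:

```bash
# Start a new build from the latest source and stream its logs
oc start-build hello --follow

# Review the logs of the most recent build
oc logs bc/hello
```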
A pod can contain one or more containers and, in our case, it only contains one, because it's a trivial application, but there are also use cases where you would deploy several containers at the same time within the same pod, such as what we call "sidecar" containers, which perform certain non-functional services like logging, monitoring, or even the implementation of a service mesh.
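Assuming a running pod named hello-1-abcde (the suffix is generated, so this name is a placeholder), we can confirm how many containers it holds:

```bash
# List the names of all containers in the pod (a single one here)
oc get pod hello-1-abcde -o jsonpath='{.spec.containers[*].name}'
```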
Now, the application pod that we deployed, as we can see, is running, but prior to its coming into existence, an application build was performed by what we see listed as hello-1-build. This is a pod that is expected to complete at some point, and it is the pod that performs the merging of our source code into the base image and produces an application image ready to use. That image is then, in turn, deployed by something we call the deployer pod, which, as you can see, exists here and is also finished with its job, as we can tell from our application actually running.
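A pod listing at this point would show all three kinds of pods; the exact names and suffixes are generated, so the annotated output below is illustrative rather than copied from the recording:

```bash
oc get pods
# NAME             READY   STATUS      RESTARTS
# hello-1-build    0/1     Completed   0          <- the build pod
# hello-1-deploy   0/1     Completed   0          <- the deployer pod
# hello-1-abcde    1/1     Running     0          <- the application itself
```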
The question now is: how do we access this application? How can we use it? We said that a deployment consists of a deployment configuration, which is in charge of actually starting and keeping our application alive, and a Kubernetes service, which makes the application available to clients within OpenShift. If we look at the configuration of the service that was just created, we can see it has an internal IP and it exposes a couple of ports, namely 8080 and 8443, for HTTP access by other applications that are also running in OpenShift.
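Inspecting the service is again a matter of a couple of oc commands:

```bash
# Show the internal (cluster) IP and the ports the service exposes
oc get svc hello

# More detail, including the label selector used to find the pods
oc describe svc hello
```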
To expose this application to external users, all we actually need to do is tell OpenShift to expose the service we have just been looking at, and that, in turn, as we can see, creates a route which external clients can use to access our application. If we have a look at the description of the route that was just created, we can see that a hostname was registered for it, and that hostname can already be accessed by clients. We can see it produces the output that the PHP source code was intended to print inside the HTML document. That is the simple application we've deployed to OpenShift in a matter of minutes, with zero hassle.
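The final step, with a placeholder hostname standing in for whatever the router actually assigns on a given cluster:

```bash
# Create a route for the service and look up the hostname it was given
oc expose service hello
oc describe route hello

# Fetch the page through the route (hostname is a placeholder)
curl http://hello-php-helloworld.apps.cluster.example.com
```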
OpenShift is a very powerful and featureful environment, so if you want to find out more about it, we have another technical overview, DO080, which will explain the fundamentals of containers and their orchestration. We also have a number of courses that will show you, in much more detail and with practical exercises, how containers and container orchestration work (DO180), how you can administer the container platform (DO280), and how you can containerize your own applications and deploy them on top of OpenShift (DO288). Join us in the next video, where we will discuss AMQ and have a look at how it can be used to integrate various asynchronously connected applications.
Jeremy is a Content Lead Architect and DevOps SME here at Cloud Academy where he specializes in developing DevOps technical training documentation.
He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 25+ years. In recent times, Jeremy has been focused on DevOps, Cloud (AWS, Azure, GCP), Security, Kubernetes, and Machine Learning.
Jeremy holds professional certifications for AWS, Azure, GCP, Terraform, Kubernetes (CKA, CKAD, CKS).