Combining It All Together
Difficulty: Intermediate
Duration: 2h 6m
Students: 436
Ratings: 3.2/5
Description

In this course, you will learn about the technical platforms that Red Hat offers for integration and messaging purposes. The course begins with a comprehensive look at the OpenShift Container Platform and then dives into Red Hat AMQ, which allows you to asynchronously connect various application platforms and exchange information reliably between them. Moving on to Red Hat Fuse, you will learn how to connect disparate systems through technologies such as Apache Camel. The course also looks at Red Hat's 3scale API Management Platform, a highly versatile system for controlling APIs. Finally, a demonstration shows you how these three technologies can be used together through an example that implements a Camel route to follow a Twitter account and then translates the Twitter feed into a particular language.

Learning Objectives

  • Gain an in-depth knowledge of the OpenShift Container Platform
  • Learn about Red Hat's technical platforms and how they can be used

Intended Audience

This course is intended for:

  • System administrators, architects, developers, and application administrators

 

Transcript

In this final video, I want to show you a demonstration that brings all of these components together. While doing this, we'll first be implementing a Camel route that follows a Twitter account by consuming the Twitter API. Then, for each of the tweets published by a certain account, it's going to invoke another route which translates those tweets into a particular language. We will have a couple of languages of choice, and for that it's going to consume a public API from a translation service. Upon receiving the translated tweets, the second Camel route is going to publish them onto a couple of destinations in our AMQ messaging broker.

There will be one destination per language, and a RESTful web service is going to expose the latest tweets in those languages: it will consume those destinations and expose the latest tweet of the feed, in each language, through some API endpoints. Of course, we want to offer this service to customers that are interested, so we'll be enabling access to it through application plans designed in the 3scale API Management Platform. Users will be able to sign up for a free plan with rate limits; a Basic plan with slightly higher rate limits and no subscription fee, but a certain cost per invocation; or a third option, a sort of gold plan, which comes with a subscription but has no rate limits and no fee per invocation. We're going to attempt to access those Twitter feeds through all three plans and demonstrate how the 3scale API Management Platform can be used to implement all of those functions.

We're going to have to log in to OpenShift first. That's the first prerequisite because obviously we're going to have to deploy those applications to it. Then we create a new project, and in the next step we deploy AMQ. The AMQ broker is the central component that integrates the two other applications through a set of destinations they use to publish and receive messages. Since we're deploying AMQ 7.4 (the latest available version), we will first have to create some image streams that present the broker images to OpenShift and then deploy the broker from a template; all of those resources are available from GitHub. To save us some typing, we're going to store the Git URL of those resources in an AMQ_GIT_URL shell variable and the release version of AMQ we're deploying in an AMQ_REL variable.
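As a rough sketch, the preparation steps might look like the following; the cluster URL, project name, repository URL, and release tag here are all placeholders rather than the actual values used in the demo.

    # Log in to the cluster and create a project for the demo
    oc login https://api.cluster.example.com:6443
    oc new-project amq-demo

    # Store the Git URL of the broker resources and the AMQ release version in shell variables
    AMQ_GIT_URL=https://github.com/example/amq-broker-openshift-resources
    AMQ_REL=7.4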

Because the images are available from a registry called registry.redhat.io, which requires authentication, we also have to create a secret that allows OpenShift to authenticate against that registry on our behalf. For that, we're going to use an authentication file that we can easily create by using Docker or Podman to log in to the target registry and then grabbing the resulting credentials file. I have copied it to the workstation from another source, which allows me to create the secret without sharing any of the credentials right now.

So what we're going to do is create a generic secret. The file we're creating it from is the authentication file I was just talking about, but inside the target secret it has to be called .dockerconfigjson. The type of that secret, even though it's generic (it can be recognized by the platform by certain labels), is kubernetes.io/dockerconfigjson. The registry-auth secret in our current project now contains the authentication data that OpenShift will need to access the AMQ images.
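As a minimal sketch, assuming the authentication file was saved on the workstation as auth.json and that the secret is called registry-auth, the command would look roughly like this:

    # Generic secret holding the registry credentials; inside the secret the
    # file has to be keyed as .dockerconfigjson
    oc create secret generic registry-auth \
      --from-file=.dockerconfigjson=auth.json \
      --type=kubernetes.io/dockerconfigjson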

So now we can actually create the AMQ image streams for the broker. If we have a look at the image stream, it's evident that it was successfully updated: it discovered the 7.4 tag for the AMQ broker, which is the version of the broker that we want to deploy. Just to check again, we'll look at the detailed description of the AMQ broker image stream, in particular the last two lines, to see what the image ID is, if there is one; and this is proof that an image was actually found and registered with OpenShift. So we have presented the images. Now we have to create an AMQ deployment from a template. There are several different templates; the one we're going to use is the AMQ broker custom template, which allows us to supply our own custom broker.xml file and a couple of other settings. I've already downloaded that broker.xml file from a project on GitHub that I've published with all the source code for this demo.

So let's just have a quick look at how our broker is configured. The first thing we can see is that persistence is enabled in the broker, but the custom template does not actually use any persistent storage because this is a simple demo. At any rate, the broker itself is only used to store the last copy of each tweet, so losing the storage is not irreversible damage. The persistence in the broker is basically there just to allow it to pick up where it left off in case it crashes for any reason, or anything of that sort.

We will also see that the only acceptor configured in the broker is the one on port 61616, but through that one port we can connect to the broker using any of its supported protocols.

One other feature that I've turned off, just for the sake of simplicity, is security in the broker. This allows me to deploy all the applications for the purposes of this demo without having to configure authentication. Although, as you will see in the next part of this demonstration, I have nevertheless configured the Twitter listener and the translation components to authenticate to the broker, just as part of the demo. I'm using the default address settings, which allow destinations to be created automatically and so on, but I have still declared a couple of destinations that the applications need, most notably a topic called translate. Whenever a tweet is received from Twitter, the Camel application automatically pushes it to the translate topic, where three translation components are listening. That's why it has to be a topic, a multicast destination. Each of those three route components then picks up the message from the translate topic, looks at the tweet's current language, and determines whether or not it has to be sent to the translation service.
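To make that a little more concrete, here is a rough sketch of what the relevant acceptor and address entries in broker.xml might look like; the destination names follow the narration, everything else is assumed:

    <acceptors>
      <!-- single multi-protocol acceptor on port 61616 -->
      <acceptor name="artemis">tcp://0.0.0.0:61616?protocols=CORE,AMQP,STOMP,MQTT,OPENWIRE</acceptor>
    </acceptors>

    <addresses>
      <!-- multicast address: every translator component receives its own copy of each tweet -->
      <address name="translate">
        <multicast/>
      </address>
      <!-- unicast (queue) addresses, one per target language -->
      <address name="tweets.en"><anycast><queue name="tweets.en"/></anycast></address>
      <address name="tweets.es"><anycast><queue name="tweets.es"/></anycast></address>
      <address name="tweets.fr"><anycast><queue name="tweets.fr"/></anycast></address>
    </addresses>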

The three components are the English, Spanish, and French translators. If they determine that the message has to be translated, namely that it is not in their language, they use the translation API and deposit the result to one of the three destinations that are meant for that purpose: tweets.en, tweets.es, and tweets.fr. So that's the first component, linked to the second one through AMQ. It receives tweets and sends them to the translate topic, where three other routes that are part of the same application pick those tweets up, send them to the translation service, and deposit the results to one of the three destinations, which are queues: first-in, first-out unicast destinations. If the tweet is already in the language that the component is written for, then it doesn't use the translation service at all. It simply says, "Okay, the language is already the one I'm interested in, so there's nothing to translate," and deposits the message directly to the target destination.

The second application is a Java EE application (the first, the Camel application, runs on top of Spring Boot with a couple of settings that allow it to communicate with the AMQ broker). What it does, basically, is contain one message-driven bean that monitors three queues in the broker; for each of those queues it reads the messages and deposits them into what is called a buffer. The buffer always contains just one message, the latest one, because the point of the exercise is that I don't want to have to look at an endless Twitter feed; I'm just interested in the last news from a certain account, the last tweet. For that reason, we're creating three JMS queues which are non-destructive, meaning that when a message is read from such a queue it is not removed. That way, if the API application which feeds us the latest tweets has to be restarted for some reason, it can always reconnect to the broker and still receive the latest tweet.

Now, what is the relationship between the destinations that the Camel application deposits the tweets to and the destinations that the Java EE application picks them up from? For that, we have three diverts configured in the broker configuration, whereby the source address where, say, English messages are deposited is forwarded to the address where the Java EE application expects to pick the messages up. The exclusive option here simply means that messages are actually moved from tweets.en and placed on the corresponding JMS queue rather than just being copied. So these three diverts take care that messages, once they finish the translation path, get moved inside the broker to the destination where the Java EE application can find them. We'll be using this broker configuration to make our AMQ broker perform the integration functions our two applications need, but we will also have to pass some additional configuration data to the broker.
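A hedged sketch of one of those diverts; the source address name follows the narration, while the forwarding-address name is an assumption made for illustration:

    <diverts>
      <!-- exclusive divert: messages are moved (not copied) from the address the Camel
           routes publish to, onto the address the Java EE application consumes from -->
      <divert name="tweets-en-divert">
        <address>tweets.en</address>
        <forwarding-address>jms.queue.tweets.en</forwarding-address>
        <exclusive>true</exclusive>
      </divert>
      <!-- analogous diverts exist for tweets.es and tweets.fr -->
    </diverts>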

For that reason, we'll create a config map and call it broker-config. In it we will include the BROKER_XML file. We'll also set an amqadmin user so that we'll be able to log in to the web console, and we will globally turn off any login requirements for the broker towards the applications. We'll also create a secret called broker-auth, which we'll use to store the very secret password for the AMQ broker. With the config map and secret prepared, we can now use the AMQ Broker 7.4 custom template. We'll just use oc new-app with the -f option, pointing it to the online location of this template. There's no need to create the template in the project to be able to deploy an application described by it; we can use it directly from GitHub. You can see that OpenShift is now creating the resources. This is the default broker.xml file, which is part of the template, and a default logging properties file, again part of the template. A couple of services get created, the deployment configuration for the broker gets created, and a route that points to the Jolokia web console also gets created.
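In command form, and assuming the custom template is referenced straight from its location on GitHub (the template URL and password below are placeholders), that sequence looks roughly like this:

    # Config map with the custom broker.xml plus the non-secret settings
    oc create configmap broker-config \
      --from-file=BROKER_XML=broker.xml \
      --from-literal=AMQ_USER=amqadmin \
      --from-literal=AMQ_REQUIRE_LOGIN=false

    # Secret holding the broker password
    oc create secret generic broker-auth \
      --from-literal=AMQ_PASSWORD=somepassword

    # Deploy the broker directly from the custom template on GitHub
    oc new-app -f https://raw.githubusercontent.com/example/amq-broker-openshift-resources/master/templates/amq-broker-74-custom.yaml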

What I want to do now is reset the deployment configuration to take some settings from the config map and the secret that I created prior to deploying the broker from the template. So I'm going to use oc set env, pointing to the broker-amq deployment config, and tell it to set some environment variables from the keys that are stored in the broker-config config map and the broker-auth secret. If I look at what's happened after these two commands have been issued, we see that the broker-amq deployment configuration has been bumped up by two revisions. That's because there is an automatic configuration change trigger: the first change to the pod template bumps it up to the second revision and the second change bumps it up to the third.
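The two commands in question, as a sketch:

    # Pull environment variables for the broker deployment from the config map
    # and the secret created earlier
    oc set env dc/broker-amq --from=configmap/broker-config
    oc set env dc/broker-amq --from=secret/broker-auth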

You can see that the third-generation pod is being deployed right now, and if I look at the environment settings that are now active in the deployment config for broker-amq, you can see that AMQ_USER, AMQ_PASSWORD, AMQ_REQUIRE_LOGIN, and BROKER_XML are being loaded from the config map and secret, as I've just specified. The logging properties are still present in the deployment config verbatim, but I'm not changing that because I really have no need to configure logging right now.

While the broker is being deployed, let's have a look at the services. We see that there is a broker-amq-jolokia service, and we know that this one is being used by the route. But remembering from our look at the custom broker.xml file, we also know that the broker is configured to only listen on port 61616 and not on the amqp, mqtt, and stomp ports, so those three services are redundant and we can remove them. And since the route is using an automatically generated hostname, we're going to reset the route as well: delete it and expose the broker-amq-jolokia service again with our own custom hostname, just because it looks a little bit nicer than the default one that was generated here.
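Roughly, that clean-up might look like this; the service names follow the template's naming pattern mentioned in the narration, and the hostname is just an example:

    # Remove the protocol-specific services the broker does not actually listen on
    oc delete svc broker-amq-amqp broker-amq-mqtt broker-amq-stomp

    # Replace the auto-generated route with one using a nicer hostname
    oc delete route broker-amq-jolokia
    oc expose svc broker-amq-jolokia --hostname=console.amq-demo.apps.example.com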

So this is the final situation after having modified the services and routes a little bit. We didn't have to do this, but we did it anyway just for the sake of cleaning up the project and not having redundant, useless objects present. Remember that broker-amq-tcp is the service that listens on port 61616, so this is also the hostname that both our Camel application and our Java EE application are going to use when they want to connect to it. In the meantime, the broker pod has been deployed, and if we just have a look at the logs, we see that everything has started, it's up and running, and the Artemis console is also available. This concludes the first part of the demo. The broker is now up and running and we can proceed to deploying the Camel application.

The Camel application depends on some authentication settings. As we already know, it's using the Twitter API and the SYSTRAN translation API, so it needs to have some translation and Twitter environment variables set. I've created a simple text file which I can use to import those actual values into the shell, so I don't have to present the actual secrets and access keys in this demo; otherwise, if I were the only one looking at this, I could obviously have used literal values here. So what I want to do for my Camel application is, first of all, specify which Twitter account I want it to follow. I also want to tell it which broker to connect to; remember, the service is called broker-amq-tcp on port 61616. And this is the authentication setting I was talking about: I will configure my application to use amqadmin when connecting to the broker. So even if security were enabled in the broker, my application would still be able to connect to it, based on the configuration files that are embedded inside the project, which we saw in one of the previous videos when we were looking at the Camel application running standalone.

The config map is used to store the publicly viewable values: the Twitter account, the broker URL, and the username. I will also create a secret that stores the Twitter consumer key, the Twitter consumer secret, the Twitter access token, and the Twitter access secret, plus the SYSTRAN application key, which is used for the translation services, and obviously the AMQ password that we would need to connect to the broker had security been enabled in it.
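Sketched out as commands, with the variable names assumed (the project's own property names may differ) and placeholder values for everything sensitive:

    # Publicly viewable settings for the Camel application
    oc create configmap camel-config \
      --from-literal=TWITTER_ACCOUNT=SomeAccount \
      --from-literal=BROKER_URL=tcp://broker-amq-tcp:61616 \
      --from-literal=AMQ_USER=amqadmin

    # Sensitive settings: Twitter and SYSTRAN credentials plus the broker password,
    # imported into the shell from a local file beforehand
    oc create secret generic camel-auth \
      --from-literal=TWITTER_CONSUMER_KEY="${TWITTER_CONSUMER_KEY}" \
      --from-literal=TWITTER_CONSUMER_SECRET="${TWITTER_CONSUMER_SECRET}" \
      --from-literal=TWITTER_ACCESS_TOKEN="${TWITTER_ACCESS_TOKEN}" \
      --from-literal=TWITTER_ACCESS_SECRET="${TWITTER_ACCESS_SECRET}" \
      --from-literal=SYSTRAN_API_KEY="${SYSTRAN_API_KEY}" \
      --from-literal=AMQ_PASSWORD="${AMQ_PASSWORD}"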

Having prepared those two configuration objects, the config map and the secret, I can now proceed to creating a new application. I will simply use oc new-app; no template is needed here because OpenShift is smart enough to figure out how to create a build and a deployment, so I'll just create a new app pointing it to the GitHub URL where the example source code is stored. I will call the new application follow-feed, because it's following a Twitter feed, and I want to build it in a source-to-image way, so I'll select the source build strategy and a base image. Because this is Spring Boot, the Maven build process will actually build an uber jar.

So it doesn't need any specific execution environment except a Java VM. The image stream I'll be using to deploy my application on top of is the OpenJDK image, version 1.5. Now, this is where a couple of interesting settings have to be specified for the build. First of all, I want the dependency downloads to happen as quickly as possible. There is a Nexus repository manager deployed somewhere in the local cluster that I've been able to use, so I will just tell the build process to use the Maven mirror in the local cluster, called nexus-common.apps.ocp.na1, etc., etc., and the repository inside Nexus we'll be using is the java repository.

Rather than specifying a context directory in this Git repository, because of the design of the application, which shares some data model classes between the Camel application and the Java EE API application, I actually have to build two modules in a way that lets the dependencies, the model classes, be found by Camel during the build. So I have to tell Maven to put anything that it builds into the Camel project's target directory, and then, in the Maven parameters for the build, I want to tell it to build two modules: one module from the Git repository is called model and the other one is called camel. And for the second module to be able to find the first one, rather than using the package goal I will use the install goal, which will compile and package the first project and push it into the Maven repository cache; the second build can then find that first artifact in the Maven cache and incorporate it into the jar files. So that was a small detail.
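Putting those options together, the build command looks something like this; the repository URL, Nexus hostname, image-stream name, and the exact S2I variable names are assumptions based on the narration:

    # Source-to-image build of the Camel / Spring Boot application on the OpenJDK image
    oc new-app https://github.com/example/agile-integration-demo \
      --name=follow-feed \
      --strategy=source \
      --image-stream=redhat-openjdk18-openshift:1.5 \
      --build-env=MAVEN_MIRROR_URL=http://nexus-common.apps.ocp.na1.example.com/repository/java \
      --build-env=ARTIFACT_DIR=camel/target \
      --build-env=MAVEN_ARGS='install -pl model,camel'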

Another approach could be to create one build, separately from this new app, not a whole application, and configure that build to run the deploy goal instead of the package goal. But that would require me to have write access to the Nexus repository manager, which in this case I do not. So it's much simpler to build the two modules in the same build, in a way that the dependent module can find the dependency and just incorporate it into the application.

Now, the oc new-app command simply creates an image stream for the resulting application image, creates a build configuration incorporating my build environment variables, and creates a deployment configuration for the application to be deployed after the build has finished. It also creates a service, but honestly this Camel application will not need to be accessed by any other application because it acts solely as a client: it pulls data from Twitter, it sends requests to the translation services, and it sends the resulting translations to AMQ.

So nobody needs to contact this application, which means the service is basically not needed; I could have removed it if I wanted. A build is then automatically scheduled. While the build is taking place, I'm going to take the opportunity to set some environment variables in the deployment configuration for this application, namely the Twitter account, the broker URL, and the AMQ username, which come from the camel-config config map, and the other authentication-related variables, which come from the camel-auth secret. I'll again use oc set env, pointing to the follow-feed deployment config and telling the command to set variables as they are present in the config map and the secret. And just to check which exact variables are being set in the deployment config: you can see that absolutely all the variables the application needs are loaded either from the camel-config config map or the camel-auth secret. Now all that remains is basically to wait for the build to finish, and in a couple of minutes it does.

If we just quickly check the build logs, we can see that Maven actually did build two projects: a project called Simple Shared Data Model and then another project called Agile Camel on Fuse, which incorporates the data model in its jar files. So the build happened the way that I wanted it to. We can also see that the application is already being deployed and is up. We can have a look at the logs and see Spring Boot starting up, loading the routes, and starting them; you can see routes 1, 2, 3, and 4 being started. And if we look at the tail end of the logs, we can also notice that tweets are actually being translated, as you can see here; this is a nice example.

A new tweet has just been received saying "there have been a few times this week", etc., etc. That automatically triggers a push to the translate topic, the multicast destination, where the English translation service picks it up and says, "Well, there's basically..." oh, sorry, that's the Spanish translator that picks it up and sends it to the translation services. The English translator basically says, "Well, I've got nothing to do. The source language is already the target language, so I'll just forward this to the translated destination." And this is the French translator that also asks the translation service to translate the source text. Then here we have the Spanish result having been returned, and the French result having been returned for the same tweet. So this also finishes our integration component, our transport, routing, and mediation engine translation component using Camel and Fuse. We've deployed it.

We now see that tweets are being picked up, sent to the translation services, and then forwarded to the destinations where a Java EE application is supposed to pick them up and expose the very last tweet that has been received through a REST API. For that application we only have to configure a couple of settings, really. We have to tell it where the broker is (it's expecting some sort of a JMS broker to download the messages from) and what port to use. And since Java EE does require a little bit more memory than your normal trimmed-down Spring Boot application, I have discovered experimentally that the default metaspace limit of 128 megabytes becomes a little bit small. The application will run much more reliably if I configure the JVM to have a maximum metaspace of 256 megabytes. So those are the only three settings we really need for the Java EE API application.
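A sketch of those three settings, stored in the config map (called api-config a little later in the demo); the exact variable names are assumptions:

    # Broker location and JVM metaspace limit for the Java EE API application
    oc create configmap api-config \
      --from-literal=BROKER_HOSTNAME=broker-amq-tcp \
      --from-literal=BROKER_PORT=61616 \
      --from-literal=GC_MAX_METASPACE_SIZE=256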

So what we're going to do is use a similar build approach to the one we used with Camel: we'll point the oc new-app command at the Agile Integration GitHub repository and call the application last-tweet. Again, we're doing a source-to-image build using just the OpenJDK image, because this is actually a Thorntail application which, similarly to Spring Boot, does not have any requirements for a specific runtime; it's the Maven build plugins that assemble an uber jar that runs the Java EE platform, so the only thing we need is an OpenJDK image. Again, I'm using the internal Nexus repository manager to speed up the builds, and I'm building two modules: model and api. So I have to tell Maven to put anything that it builds into the target directory of the api project, because api depends on the shared model and needs to find the model jar in its own target directory.

Just as before, the reactor project list is model, which is the dependency, and api, which contains the application source code. Again, I'm using the install goal rather than package because I want the model jar to be pushed into the Maven repository cache, where the api build can find it in the second part of the build. oc new-app obliges, then creates an image stream for the application image, a build configuration to build the new image from the source code and the OpenJDK image together, and a deployment configuration that will deploy the application once it has finished building. It also creates a service. This time the service will be important for us, because we will want to use this API, at least in the testing phase, to access the application's data, which is the latest tweets. The build, again, is immediately scheduled.
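For reference, the last-tweet build command would look roughly like this, mirroring the follow-feed build; the repository URL, image-stream name, and S2I variable names are again assumptions:

    # Source-to-image build of the Thorntail API application, building the model and api modules
    oc new-app https://github.com/example/agile-integration-demo \
      --name=last-tweet \
      --strategy=source \
      --image-stream=redhat-openjdk18-openshift:1.5 \
      --build-env=MAVEN_MIRROR_URL=http://nexus-common.apps.ocp.na1.example.com/repository/java \
      --build-env=ARTIFACT_DIR=api/target \
      --build-env=MAVEN_ARGS='install -pl model,api'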

While the build is taking place, I've used the opportunity to set the environment variables in the deployment configuration in accordance with what I've stored in the api-config config map. If I review the environment settings, you can see that BROKER_HOSTNAME, BROKER_PORT, and the GC MAX_METASPACE_SIZE setting are being initialized from the api-config config map.

The build is now running, as we can see here; the last-tweet build is up and going, and in a couple of minutes it finishes. The deployment is started and the application itself is deployed. Let's just quickly review the build logs of the last-tweet application.

Here we see as well that Maven correctly figured out that the Shared Data Model is the dependency, so it should be built first, and that the Agile API on Thorntail depends on it, so it should be built second. The build overall was successful and the application was incorporated into the new image and stored in the internal registry. Having a look at the deployed application's log, we can see that Thorntail is starting up: it's installing the fractions needed to run our sample application, it connects to the JMS services, etc., etc. And if we look at the last couple of lines of the log, we can see that it has just received one of the latest translations of a tweet from English to French, so clearly the application was able to connect to AMQ and receive data from it.

Another look at the services. The last-tweet Kubernetes service is a multi-port service, as you can see here, but I haven't configured any HTTPS support in my Thorntail application, I have used no certificates or such, so what I'm going to do is just expose the plain-text port. I'm going to create a new route, also called last-tweet, but I will tell it to only expose port 8080, and I'll be using a custom hostname, tweet-agile-demo under the application default domain.
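As a sketch, with the full hostname assumed:

    # Expose only the plain-text HTTP port of the multi-port last-tweet service
    oc expose svc last-tweet \
      --port=8080 \
      --hostname=tweet-agile-demo.apps.example.com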

If I use curl to send a couple of requests to this URL, you can see there is a health endpoint exposed, which we could use as a liveness probe for the application, for example. It tells us that the application is "UP"; it has no detailed checks, so this basically just tells us that the application server started and the application deployed successfully. Then we have three API endpoints in that app, one for each language.

So if we send a request to /api/tweet/en, we can see the original text that was downloaded from Twitter. If we send the request to fr, we see the French translation, and if we send the request to es, we see the Spanish translation. So this proves that the API application is able to receive the translated messages from the Camel application that we deployed previously.
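For reference, that smoke test amounts to a handful of curl calls (hostname as assumed above):

    # Health endpoint: should report the application as UP
    curl http://tweet-agile-demo.apps.example.com/health

    # Latest tweet in the original language and its two translations
    curl http://tweet-agile-demo.apps.example.com/api/tweet/en
    curl http://tweet-agile-demo.apps.example.com/api/tweet/fr
    curl http://tweet-agile-demo.apps.example.com/api/tweet/es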

The last thing we need to have a look at is how to use the 3scale API Management Platform to allow users access to these three endpoints in a controlled way, where we can apply rate limits and maybe even some billing. In the API Management Platform control interface, which I've already configured to work with my freshly deployed last-tweet API, I will showcase a couple of application plans which I pre-created, and then I will sign up as a new user for a free Demo plan with very restrictive limits.

So first let's look at the integration configuration. As you can see in the APIcast config, I've already entered my last-tweet URL and I've also configured the mapping rules for the valid exposable API methods: English, Spanish, and French tweets. I've associated each of them with a method, so each request to one of these three URLs counts as one invocation of that method, plus there's an overall catch-all pattern that just counts the number of hits submitted towards my application, regardless of the exact endpoint they were using, for example health or whatever else. So that's the configuration of the APIcast integration.

Now let's have a look at the application plans. There are three of them. There's an Unlimited one, which is simply for power users that have some sort of contract; we could impose a monthly cost for that as well, and that would basically be a recurring invoice that allows users to use our API with no limits and no additional pricing. Then there's another application plan called Basic, which is a paid plan. It doesn't involve a recurring monthly cost, but it has some limits for the API methods and for the overall number of requests; as far as the API methods go, beyond ten requests each request up to one hundred will cost one cent, and any subsequent request above one hundred will cost half a cent. So that's the payment rule for this Basic plan: as we said, it doesn't come with a setup or monthly fee, but requests in excess of a certain frequency in a month will incur some additional cost. The third application plan, which does not have an application associated with it yet, is called the Demo plan. It's only valid for five days and it has very restrictive limits: just five requests of any kind per hour, and one API request per hour for any registered method. This just allows some users to sign up for the plan, test our API for a couple of days, and then decide if they want to promote their application to a higher-ranked paid plan, either Basic or Unlimited.

You can see that the Basic and Unlimited plans are already associated with some applications, so let's have a look at those. We see there's a Developer account which is using the Basic plan, the paid plan, and there's a power user from Power Org which is using the Unlimited plan; both of those applications have already been assigned some keys. The developer has a key of a4049cdd, etc., and we can actually use this key to test access to the application. So we could say, "Okay, let's see how rate limits work for English tweets with the user key belonging to the Basic plan's developer user."

We see that after a couple of requests, we get a warning that the application plan's limits were exceeded. However, that does not invalidate requests to the non-metered health method. Of course, I forgot to add a user key; there we go. Those requests are still allowed because the hits metric has a much higher limit than any individual metered API method. So that was the Basic plan for the developer.
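Through the APIcast gateway those test requests look roughly like this; the gateway hostname is an assumption, and the credential is the developer's user key passed as the default user_key parameter:

    # Repeated calls with the Basic plan developer's key; once the plan limit is
    # reached, the gateway starts rejecting the metered method
    for i in $(seq 1 15); do
      curl "https://tweets-api-staging.example.com/api/tweet/en?user_key=a4049cdd..."
    done

    # The non-metered health endpoint still answers, up to the higher hits limit
    curl "https://tweets-api-staging.example.com/health?user_key=a4049cdd..."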

If we look at the other application, the power user, there's a different user key associated with it. If we use that key, well, it's hard to prove that the plan is unlimited because we would have to submit requests for a very long time, but let's try a couple of requests anyway and ask for some French tweets. What we see, first of all, is that the API platform recognizes the key, and even after a very large number of requests the application is still available. We'll see in a bit how that counts against the analytics counters.

For a new user to be able to access my application, the first thing I have to do is promote the configuration I've been testing so far to production. There we go, the production environment is now running the same configuration as the staging one. Then, in the Audience menu, I'll open the developer portal. This is basically a link we can share with any user that is interested in our API. There's a developer portal here whose content we can edit, and we can publish our API documentation on it; it supports the Swagger documentation specification, so users that sign up for our application can access the application's API documentation and test it, even through the developer portal itself. But as you can see on the front page, you also have a choice of different plans, and if I want to sign up for the Demo plan, all I need to do is click "Sign up to plan Demo" and specify what my organization is going to be.

Let's call it Test Org, pick a username and an email, and pick a password. Upon signing up, depending on the account plans that I could also configure in the API Management Platform, an email will have been sent to me. Of course, I didn't use a valid email, and email servers are not even configured in my local instance of the API Management Platform, so this email is not going anywhere. But if it were, all I would have to do as a newly signed-up user is open an email client, click a link to confirm my email address, and I'd get access.

Alternatively, I can contact the API Management Platform admin and ask them to activate my account. As you can see here in the account list, the newly signed-up user has appeared. They already have an application created for them, so all we need to do is activate them. There we go. If we have a look at the application, you see that it has also been assigned a user key and that the application plan has correctly been set to Demo, with all the limits that apply.

Back in the developer portal, I can use the sign-in link to sign in as the user I've just created, and there I am. I have all the information I need to start using my app. There's a link to the application but, more importantly, there's my application key that I can use to access it. We also see a reminder that this is a trial account and that we have five days remaining.

In the application information page we see a couple of additional bits, such as what the plan is; we can request a change of the application plan and review the plans' specifications here. As you can see, plan change requests are enabled, so at some point, if we wanted to convert our Demo plan into something more serious, we could easily do that through the developer portal.

So what remains for me right now is to test this plan's access. Instead of the French tweets, I'm going to try and access the Spanish ones, and I'm going to use the user key that was assigned to me when I signed up for the Demo plan. There we go. The first request is successful, the second as well, and then the limits have been exceeded. For the next hour I have to live with the fact that I've used up my quota. Again, this is for the metered methods; I may still be able to use the unmetered methods, but only up to the global limit on the number of hits per hour, which, if you remember, let's have a look, is five hits per hour, one of which can be to an API method. So if I now found this application useful to integrate with another app that I'm currently developing, I can request a change to either the Basic or the Unlimited plan. Let's say I want to change to the Unlimited plan.

This is now under consideration. If the 3scale API Management Platform admin wants to react to this plan change request, all they have to do is open the dashboard, where they'll see some messages, one of which obviously says that some action is required because the test user from Test Org requested an app plan change. Having reviewed those messages, all the admin has to do is select the application that belongs to the test user, scroll down, select the plan they want to approve, either Unlimited or Basic or whatever they decide, and change it. From this moment on, the application plan has been switched, and if we go back to the developer portal we can see that our application plan has actually been changed to Unlimited.

Just a couple of extra words about metrics. In the Statistics tab here, for a particular user, we have the ability to review the usage of that user's apps, such as the number of GET requests or the number of hits in a certain period of time. Of course, since there was only a dismal number of requests, we do not see any of those rendered here yet; it may also take some time before they show up. Similarly, an API Management Platform admin can review the usage of the APIs in the Analytics section. They can simply have a look at the overall usage per certain timespan, with a summary over here at the top, and they can select the timespan they're interested in over here in this menu. There's also a breakdown per metric or per method, as we can see here: 47 hits and four GET requests.

This concludes our demonstration and our Technical Overview. I hope you enjoyed it, and again, if you want to find out more about any of these technologies, you can always have a look at the courses that we offer on these subjects. JB440 is the Red Hat JBoss AMQ Administration training; then we have the JB421 training, which is about Camel integration on Red Hat Fuse on OpenShift; and lastly, JB240 is about building and administering APIs using the 3scale API Management Platform.

About the Author

Jeremy is a Content Lead Architect and DevOps SME here at Cloud Academy where he specializes in developing DevOps technical training documentation.

He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 25+ years. In recent times, Jeremy has been focused on DevOps, Cloud (AWS, Azure, GCP), Security, Kubernetes, and Machine Learning.

Jeremy holds professional certifications for AWS, Azure, GCP, Terraform, and Kubernetes (CKA, CKAD, CKS).