Red Hat AMQ
Difficulty: Intermediate
Duration: 2h 6m
Students: 436
Ratings: 3.2/5
Description

In this course, you will learn about the technical platforms that Red Hat offers for integration and messaging purposes. The course begins with a comprehensive look at the OpenShift Container Platform and then dives into Red Hat AMQ, which allows you to asynchronously connect various application platforms and exchange information reliably between them. Moving on to Red Hat Fuse, you will learn how to connect disparate systems through technologies such as Apache Camel. The course also looks at Red Hat 3scale API Management, a highly versatile platform for controlling APIs. Finally, a demonstration shows you how these three technologies can be used together through an example that implements a Camel route to follow a Twitter account and then translates the Twitter feed into a particular language.

Learning Objectives

  • Gain an in-depth knowledge of the OpenShift Container Platform
  • Learn about Red Hat's technical platforms and how they can be used

Intended Audience

This course is intended for:

  • System administrators, architects, developers and application administrators

 

Transcript

In this section we're going to look at Red Hat AMQ as the first of the building blocks in the Agile Integration portfolio. AMQ is actually not just a single product; it is several technologies brought together to enable integration of disparate applications from different environments, such as .NET, C++ and Java, and even legacy applications such as mainframe-based systems, through a messaging system that can reliably, asynchronously and securely exchange information between them. The messaging system itself is a typical form of MOM, or Message Oriented Middleware, which by design works on the principle of messages being sent into it and then delivered to interested parties. It does not require the producers, the senders of messages, to know anything about the existence of the consumers, and it certainly doesn't require all of them to be active at the same time; that's why we say the system is asynchronous.

It can be used to implement message exchanges in many different topologies, starting with a typical hub-and-spoke: a number of senders use clients written in different languages to connect to what we call a messaging broker, and receivers on the other side, again using clients written in various languages, receive the messages stored by the broker in the meantime. In this way the information produced by the senders is delivered securely and reliably.

Typically, Message Oriented Middleware supports two scenarios. The first and simpler one is called anycast: one or more senders deliver messages to the broker and, in most cases, there is a single consumer of those messages, a single receiver, which guarantees that the order in which the consumer receives the messages is the same as the order in which they were sent. The moment we have more than one consumer in an anycast scenario, the messages are distributed round-robin across the consumers.

Each message sent to the broker is therefore delivered once and only once to the target consumers, so Consumer 1 and Consumer 2 in this example will each receive exactly one half of the messages that the producer sent to the broker. The other scenario we commonly see in Message Oriented Middleware is the so-called multicast scenario, which is very useful when a piece of information has to be distributed to multiple interested parties at the same time. In this case the producer sends messages to something we usually call a topic, and the broker makes sure that anything delivered to this address, this topic, is distributed to as many interested parties as are connected at the time, or, if they have expressed a durable interest in the messages, stores those messages for subsequent asynchronous consumption. Messages are literally multiplied into as many copies as there are consumers, either connected (if they are non-durable) or at least registered (if they are durable), so for each message produced, each consumer gets its own copy and can do with it whatever it pleases.

In addition to the basic function of a messaging broker, which may sound a little trivial (simply receive, store and then deliver messages), AMQ brokers support several advanced functions, such as rewriting and routing addresses and, not least, implementing highly available clusters. Active-passive deployments are possible, as is load balancing between multiple active-passive groups, so that clients can be serviced reliably and, depending on their current geography, may even consume messages from the geographically closest available broker, with message distribution taken care of by the brokers themselves.
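To make the anycast and multicast patterns concrete, here is a minimal JMS sketch of the difference as seen from a client; the client library (Qpid JMS over AMQP), broker URL and destination names are illustrative assumptions rather than anything taken from the course:

```java
// Minimal sketch: anycast (queue) vs multicast (topic) from a JMS client's view.
// The client library, broker URL and destination names are assumptions.
import javax.jms.*;
import org.apache.qpid.jms.JmsConnectionFactory;

public class DeliveryPatterns {
    public static void main(String[] args) throws JMSException {
        ConnectionFactory factory = new JmsConnectionFactory("amqp://localhost:5672");
        try (Connection connection = factory.createConnection()) {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

            // Anycast: a queue. Each message is delivered to exactly one of the
            // attached consumers; with several consumers the broker round-robins.
            Queue queue = session.createQueue("orders");
            session.createProducer(queue)
                   .send(session.createTextMessage("one copy, one consumer"));

            // Multicast: a topic. Every subscriber attached to (or durably
            // registered on) the topic receives its own copy of each message.
            Topic topic = session.createTopic("notifications");
            session.createProducer(topic)
                   .send(session.createTextMessage("a copy per subscriber"));
        }
    }
}
```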

There are many powerful ways of expressing these clustering and message-distribution setups in the broker configuration. Another interesting scenario is one where we have a number of clients that all speak a similar protocol but need to connect to several different brokers. For that purpose we have a separate product called AMQ Interconnect, which is essentially a messaging router that we configure with rules on how to handle traffic and requests and how to translate the protocols the clients use into the target protocols used by the brokers. Conversely, Interconnect can also be used to connect several clients using different protocols to the same broker, and of course any combination of the two is also possible.

We have an excellent course that discusses these capabilities and ways of implementing them in much more detail; it's called JB440 - Administering Red Hat AMQ Message Broker. In the meantime, though, let's have a look at a simple demonstration of how to set up an AMQ broker and how to send and receive messages using simple Java applications. I've already installed a broker deployment in my home directory, but this is the main installation of the broker, so we do not want to use or pollute it with our own broker instances; it may not even be writable to us. What we want to create instead is a new, separate, isolated broker home directory just for our own personal use. So we're going to use the artemis create command and give the broker a name. As you can see, artemis create tells us we need some authentication credentials; I'm just going to use admin as the username and a simple, easy-to-remember password. The last question we need to answer is whether authentication is going to be required for clients or optional. For the simplicity of this demonstration, I'm going to allow anonymous access to the broker. This generates the basic broker configuration: it performs some simple tests and some rudimentary optimization of the broker's operation and, at the same time, creates the broker's home directory and places some scripts into it which we can then use to start it.
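In the demo the name and credentials are entered at interactive prompts; non-interactively, the same steps boil down to roughly the following (the installation path and password here are illustrative assumptions):

```bash
# Create an isolated broker instance named simple-broker in the home directory;
# --allow-anonymous answers the "authentication required or optional" question.
~/amq-broker/bin/artemis create ~/simple-broker \
    --user admin --password secret --allow-anonymous

# Later in the demo, the instance is started in the foreground with its own script:
~/simple-broker/bin/artemis run
```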

If we have a look at this simple-broker directory, we see that pretty much the only things created in it are two scripts that can be used to start and stop the broker, either in the foreground or as a background service, and some configuration files, among which we have an authentication properties file and, most importantly, the broker configuration stored in broker.xml.

If we have a look at the broker.xml file for a moment, we can see that it configures things such as IP addresses; persistence, that is, whether or not messages should be stored to the hard drive; the journaling type, which improves performance if our system supports asynchronous input/output; some additional tuning options that determine what to do, for example, with large messages, messages that the broker cannot entirely fit into memory as it receives them; and some other performance tuning options. More importantly, though, the broker can be configured to support several different protocols, and that is what the section we're currently looking at is all about. As you can see, multiple protocols can be supported even on a single connection socket: if we connect to the broker on port 61616 on this machine, it will try to automatically recognize the protocol the client is using and then switch to talking that protocol with the client.
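The exact parameters differ between broker versions, but the generated acceptors section of broker.xml looks roughly like this (tuning parameters trimmed for readability):

```xml
<acceptors>
   <!-- One socket that auto-detects any of the supported protocols -->
   <acceptor name="artemis">tcp://0.0.0.0:61616?protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE</acceptor>
   <!-- Dedicated, single-protocol acceptors -->
   <acceptor name="amqp">tcp://0.0.0.0:5672?protocols=AMQP</acceptor>
   <acceptor name="stomp">tcp://0.0.0.0:61613?protocols=STOMP</acceptor>
   <acceptor name="hornetq">tcp://0.0.0.0:5445?protocols=HORNETQ,STOMP</acceptor>
   <acceptor name="mqtt">tcp://0.0.0.0:1883?protocols=MQTT</acceptor>
</acceptors>
```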

Alternatively, we can have a single port per protocol, as we see in the other four connection configurations, with some optional protocol-specific optimization options appended to the acceptor configuration. Starting the broker is as easy as using the provided startup script and telling it to run. It takes only a couple of seconds to start up, it prints some information about which ports it's listening on, and it ultimately also tells us the URL at which we can access the web console using the authentication data we typed in when creating the broker instance. Logging in with those credentials, we get a very simple web console that we can use to monitor the broker's operation. In the Artemis menu we can see all the different acceptors we enabled in the configuration file, with an overview of their currently active properties, and also, internally, which delivery addresses the clients have been using so far; obviously they haven't used any yet, which is why the list is empty. Whenever a client delivers a message to an address inside the broker, this list will be updated with the address name the client used. So, to see how to write a very simple application that uses AMQ to both send and receive messages, let's have a look at a clone of the resources associated with this demonstration.

So in the project that I just cloned, there is a simple JMS QueueConsumer and a QueueProducer. JMS is the Java API that AMQ speaks to Java clients, and it's a Java standard, so it's actually very easy to write such applications in Java. Let's import this project into CodeReady Studio by just importing an existing Maven project. After waiting a couple of seconds for the project to import, if we have a look around it, we will see that the only two files in this project are a simple producer application and a simple consumer application. Both connect to the broker, except that one of them connects and sends ten messages and the other one connects and receives whatever is waiting for it.

If we expand the list of imports, we can see that practically none of them are product specific. Everything this simple program works with comes from the standard javax.jms package, with the exception of the connection factory, which is the only protocol-specific class, coming from an AMQ client library, and which tells the program which protocol to use to connect to the broker at a particular URL.

Creating the connection factory allows us to then, in turn, create a new connection, establish a session, address a certain destination inside the broker, one we'll call exampleQueue here, and create a MessageProducer, which is the JMS class used to produce new messages and send them to the broker; then we simply create ten text messages and send them. To see how this works in practice, we'll just use the Maven exec plugin, since it's a simple class, so we'll run exec:java here and tell it to execute the class com.redhat.training.agile.amq.QueueProducer.
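A minimal sketch of such a producer might look like the following, assuming the Qpid JMS (AMQP) client; the actual client library, broker URL and class layout in the course project may differ:

```java
// Minimal producer sketch: the connection factory is the only protocol-specific
// class; everything else comes from the standard javax.jms package.
import javax.jms.*;
import org.apache.qpid.jms.JmsConnectionFactory;

public class QueueProducer {
    public static void main(String[] args) throws JMSException {
        ConnectionFactory factory = new JmsConnectionFactory("amqp://localhost:5672");
        try (Connection connection = factory.createConnection()) {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("exampleQueue");
            MessageProducer producer = session.createProducer(queue);
            // Create ten text messages and send them to the broker.
            for (int i = 1; i <= 10; i++) {
                TextMessage message = session.createTextMessage("Message " + i);
                producer.send(message);
                System.out.println("Sent: " + message.getText());
            }
        }
    }
}
```

From the project directory it can then be run with something like mvn compile exec:java -Dexec.mainClass=com.redhat.training.agile.amq.QueueProducer.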

There we go, ten messages were sent to the broker. If we switch for a second to the web console and refresh the list of available addresses, we can see there is an exampleQueue destination here which currently holds ten messages, as we can see if we click on the entry and have a look at the details: inside the exampleQueue address there is an anycast queue, also called exampleQueue, which currently has zero consumers and ten durable messages waiting to be delivered at this destination.

So all this information is made available by the broker to whoever is working with it. Now, if we look for a second at the consumer application, we'll see that it, too, uses, for the most part, just standard javax.jms classes. The only product-specific class it uses is the connection factory, and it takes a similar approach: connect to the broker, create a connection, establish a session, and give that session the address of the destination it is interested in consuming messages from inside the broker. As you can see, this time a MessageConsumer, as opposed to a MessageProducer, is created, and the connection is started. Then, for as long as there are messages to receive, they are consumed, with a timeout of five seconds if none are currently available. Whatever message is received is printed on the console, and finally the resources are cleaned up and the connection is closed.

Notice that while I was explaining this, nobody was actually connected to the messaging broker; the producer had already finished its job and exited. The consumer doesn't know who produced those messages or whether the producer is currently connected to the broker. All it really wants is to receive whatever, let's call it, mail was delivered for it somewhere in the past. As we can see, it literally gets the ten messages, waits for five seconds, and then exits.

We'll be revisiting AMQ in the final section, where we'll look at how to combine all of our solutions together. For now, though, this is it. Again, a reminder: if you want to find out more about how to use AMQ in various scenarios, have a look at the JB440 course from Red Hat Training.
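Before we move on, here is a matching consumer sketch, under the same assumptions as the producer sketch above (client library, broker URL and queue name are illustrative):

```java
// Minimal consumer sketch: receive whatever is waiting on the queue, giving up
// after five seconds of silence, then clean up and exit.
import javax.jms.*;
import org.apache.qpid.jms.JmsConnectionFactory;

public class QueueConsumer {
    public static void main(String[] args) throws JMSException {
        ConnectionFactory factory = new JmsConnectionFactory("amqp://localhost:5672");
        try (Connection connection = factory.createConnection()) {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue queue = session.createQueue("exampleQueue");
            MessageConsumer consumer = session.createConsumer(queue);
            connection.start(); // delivery only begins once the connection is started

            // Keep receiving until nothing arrives for five seconds.
            Message message;
            while ((message = consumer.receive(5000)) != null) {
                if (message instanceof TextMessage) {
                    System.out.println("Received: " + ((TextMessage) message).getText());
                }
            }
        }
    }
}
```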

See you in the next section.

 

About the Author
Students: 142970
Labs: 69
Courses: 109
Learning Paths: 209

Jeremy is a Content Lead Architect and DevOps SME here at Cloud Academy where he specializes in developing DevOps technical training documentation.

He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 25+ years. In recent times, Jeremy has been focused on DevOps, Cloud (AWS, Azure, GCP), Security, Kubernetes, and Machine Learning.

Jeremy holds professional certifications for AWS, Azure, GCP, Terraform, and Kubernetes (CKA, CKAD, CKS).