Developing Cloud Native Applications with Microservices Architectures

Microservices Patterns

Developed with
Red Hat
Overview

Difficulty: Beginner
Duration: 2h 24m
Students: 638
Rating: 4.2/5

Description

Do you have a requirement to identify the right frameworks and tools to build your own microservices architecture? If so, this course is for you!

In this course we'll teach you how to combine different frameworks and tools into a microservices architecture that fits your organizational needs.

You’ve no doubt heard about the microservices architecture, but understanding and executing it can be a bit of a challenge. Through a series of videos, this course will introduce microservices, review multiple microservices frameworks and runtimes, and show you techniques to deploy them through a hassle-free DevOps pipeline. We’ll discuss containers, Docker, Spring Boot, Node.js, .NET, OpenShift, Jenkins, Vert.x, Kubernetes, and much more.

Transcript

Now we’re going to explore some invocation patterns. In this case we’re going to talk about more than just A calling B, as we saw earlier, where the consumer calls the producer. Let’s get a little more complicated and look at a more involved architecture. In many cases we’ve seen people using microservices this way, where the browser, like my little mobile application or mobile browser in this case, invokes several back-end services on the host computers. In this case these are just AJAX calls, right, where the browser says call A, call B, call C.

A good example of that might be a little shopping-cart style of application. You can see we have our high-level description, our price, our star rating, details about the product, thumbnail images, and of course recommendations and store availability. All of those might be individual calls to individual microservices, some of which may even go to the mainframes. I came up with this architecture by talking to certain retailers who are specifically doing it this way, so this is a common pattern for retail organizations in particular.
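That client-side aggregation can be sketched like this, in Python rather than browser JavaScript, with hypothetical service functions standing in for the real AJAX calls:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for the individual microservices a product
# page might call (description, pricing, ratings, and so on).
def get_description(pid): return {"description": f"Product {pid}"}
def get_price(pid):       return {"price": 19.99}
def get_rating(pid):      return {"stars": 4.2}

def render_product_page(pid):
    """Fan out to each microservice in parallel, the way a browser's
    AJAX calls would, then merge the responses into one page model."""
    calls = [get_description, get_price, get_rating]
    page = {}
    with ThreadPoolExecutor(max_workers=len(calls)) as pool:
        for result in pool.map(lambda call: call(pid), calls):
            page.update(result)
    return page
```

The key point is that each fragment of the page is an independent call, so each one can fail or be slow independently.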

That’s why I drew the slide as I did. You can see that location-based availability is a really interesting one. For a lot of these organizations it goes back to a mainframe and actually asks, what is the inventory in your local store? It gets the GPS location from the phone, but the actual inventory for your store comes from the mainframe-based application. Now, you have to start thinking in terms of what happens when it fails. In a microservices world, you always have to think in terms of failure and resiliency.

What happens when things break? We’ll talk more about resiliency a little later, but here, think in terms of what happens when a call fails. In this case, if location-based availability fails because the mainframe timed out and we just didn’t get the data back in time for the user, we change our user interface so it no longer shows 15 available, like it did before, but just shows the closest store.
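A minimal sketch of that timeout-and-fallback behavior, assuming a hypothetical `mainframe_inventory` call (the names and the timeout value are illustrative, not from the course):

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError

def mainframe_inventory(store_id):
    """Stand-in for the slow mainframe call; in the real system this
    would be a network request that can time out."""
    return {"store": store_id, "in_stock": 15}

def availability_widget(store_id, timeout_s=0.5):
    """Call the inventory service, but fall back to a degraded widget
    (just the closest store, no stock count) if it doesn't answer in time."""
    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(mainframe_inventory, store_id)
        try:
            data = future.result(timeout=timeout_s)
            return f"{data['in_stock']} available at store {data['store']}"
        except TimeoutError:
            # Degrade gracefully: we still know the GPS-derived store.
            return f"Check availability at your closest store: {store_id}"
```

The fallback string is the "closest store only" UI the transcript describes.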

We still know your GPS location, as an example. So in all cases you have to start thinking about what your fallback scenarios are, and that’s super-critical when it comes to a microservices architecture.

You might also think in terms of the API gateway kind of architecture. This is a fairly common one too. Instead of the browser being the aggregator, pulling business logic from all those different microservices and aggregating on the client side, here we’re aggregating on the server side. This is a common architecture for a lot of organizations, because if you have a specific edge service or API gateway there, you can aggregate the business logic in one place while also reducing network traffic to the client. You’re also protecting yourself from a security-layer perspective, as well as keeping all the business logic in one neat little location. You can also specialize these API gateways, these edge services, for certain types of users, certain user groups or personas, or maybe even the hardware they’re using. For instance, I might have a specialized gateway for an iOS platform, versus an Android platform, versus a desktop web client, versus, say, a Roku or an Apple TV, or something of that nature.
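A hedged sketch of that server-side aggregation, using the demo's greeting services as stand-ins (`fetch_greeting` and its failure handling are hypothetical, not the actual Camel gateway):

```python
def fetch_greeting(service):
    """Stand-in for one back-end microservice call. The service names
    mirror the demo app, but this lookup is purely illustrative."""
    greetings = {"aloha": "Aloha", "hola": "Hola",
                 "ola": "Ola", "bonjour": "Bonjour"}
    if service not in greetings:
        raise ConnectionError(service)
    return greetings[service]

def gateway_aggregate(services):
    """Server-side aggregation: the client makes one round trip, and the
    gateway calls every back end, degrading per service rather than
    failing the whole page."""
    results = {}
    for service in services:
        try:
            results[service] = fetch_greeting(service)
        except ConnectionError:
            results[service] = None  # catch failure close to its source
    return results
```

One call from the client fans out to many calls on the server, which is exactly the traffic-reduction argument made above.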

You could definitely have a different little component on the server side with specialized business logic, specialized aggregation logic, integrating those different microservices. And again, if something fails, you have to catch that failure as close to the point of failure as possible, and not let it show up on the end-user side of things. Now, this last one here is the concept of chaining, and this is where microservices really get interesting. When you hear folks like Netflix and Amazon talk about microservices, they’re dealing with situations where the chain of invocations might be 5, 10, 20, 30, even 50 deep.

So an individual user transaction, like the user clicking a button on the screen, might go through 20 or 30 or 50 different microservices to get an aggregated response back. So you have to think in terms of what it means to call A, B, C, and D. And anybody who’s been doing software for a while is looking at that and going, wait a second, that could be a big old problem for our organization. Because we happen to know that some of these components fail, and if they do fail, is it a cascading failure all the way back to the user? Again, you have to think in terms of failure first.
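The arithmetic behind that worry is simple: per-call success probabilities multiply along a synchronous chain, so even very reliable services compound badly at depth. A quick illustration:

```python
def chain_availability(per_call_success, depth):
    """If each hop in a synchronous chain succeeds with probability p,
    the whole chain succeeds with probability p ** depth."""
    return per_call_success ** depth

# At 99.9% success per call, a 50-deep chain succeeds only about
# 95% of the time, so roughly 1 in 20 user clicks would fail
# without some form of fallback.
```

This is why failure handling has to be designed in from the start rather than bolted on.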

You have to know that things will fail, and ensure that you actually break that failure before it actually shows up all the way to the user. So that’s known as the circuit breaker, and we’ll talk more about that in a second. But just be aware, these kind of pattern is super-critical. You might also mix them up, right, where the API gateway, the server side edge service, also is invoking multiple things that are chained together and you can kind of fan in, fan out. It just, you can kind of get pretty creative with at this point. We’ll also talk about tracing, a little bit later in the presentation where we talk about how do you actually know, where our specific user transaction went. Okay? So, this is the demonstration specifically where we kind of show you more this really interesting demo application. 
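A minimal circuit-breaker sketch to make the idea concrete. Real implementations like Hystrix add timeouts, metrics, and a half-open state; this only shows the fail-fast core, and all names here are made up:

```python
class CircuitBreaker:
    """After a run of consecutive failures, stop calling the broken
    service (the circuit 'opens') and return a fallback immediately."""

    def __init__(self, call, max_failures=3, fallback=lambda: "fallback"):
        self.call = call
        self.max_failures = max_failures
        self.fallback = fallback
        self.failures = 0

    def invoke(self):
        if self.failures >= self.max_failures:  # circuit is open:
            return self.fallback()              # fail fast, no remote call
        try:
            result = self.call()
            self.failures = 0                   # success closes the circuit
            return result
        except ConnectionError:
            self.failures += 1
            return self.fallback()
```

The payoff is that a dead downstream service stops consuming threads and timeouts upstream, so the failure does not cascade back to the user.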

We’ve shown you the basics so far, but now I want to show you this specific one. We actually refer to this guy as helloworld-msa. You can see the bit.ly link here, bit.ly/msa-instructions, and it walks you through the setup of everything you’re about to see for the next several segments. That is: how to create a WildFly Swarm microservice, a Vert.x microservice, a Spring Boot microservice, and we’ve created those for you out of the box, even a Node.js one. How to deal with the gateway scenario, and the gateway in this case uses Camel, as an example.

How to build a front-end that sits on top of all of this, and you’ll get a chance to see that. You’ll also get a chance to see Jaeger, how to do SSO, and we’ll show you many of these things in our demonstration, even things like pipelines and Blue/Green and Canary deployments. It’s all nicely documented for you to experiment with yourself, and if you follow the getting-started instructions I gave you earlier, where you set up your local Minishift/CDK environment just like you see me running here, you can run this exact same demo. So, let’s show you it running here. Here is my front-end, okay, so this is the front-end for the application.

Let me go ahead and hit refresh here, just to make sure we’re all clean. Okay, and we’re going to show you a lot of this capability. Here is my Hystrix monitor over here; we’ll get to it in a second, but I just want to make sure it’s okay. Here is my Jaeger user interface, right, that’s our Zipkin replacement. We’ll talk more about those a little bit later in the presentation. But you can see we have the browser as the aggregator, that pattern we just talked about, running here. We have the API gateway running over here; this is again the server-side component, in this case running Apache Camel. Let’s zoom in so you can see what that looks like.

Again, the browser makes the invocation to, in this case, OpenShift, Minishift, running on my machine. It runs this API gateway code, which is Spring Boot but with Apache Camel on it. Apache Camel is the one responsible for integrating and aggregating all these endpoints and making that all come back, and you can see it says Aloha, Hola, Ola, and Bonjour. Then we also have the concept of the chain. The reason we implemented all these patterns is to show you different aspects of resiliency and load balancing and things like that. So it’s a great place for you to come and experiment with these different types of items. Let’s find our web console for OpenShift, okay? You can see it’s all running. This is the project helloworld-msa: the Aloha gateway, a blue deployment, we’ll show you Blue/Green a little bit later, and Bonjour is set up for Canary deployments, which we’ll also show later. Here is our front-end. The front-end itself is running as its own individual service, its own individual pod in this case. Hola, Ola.

The reason we have Hola and Ola is that one is the Spanish version and one is the Portuguese version; we have people who speak multiple languages on the Red Hat team. The Hystrix dashboard, Jaeger, Turbine, you can see all of this is already running here. So I already have a ton of stuff running, but let’s just do this real quick: let’s bring up a couple more Bonjours. Now, in this case, watch what happens here. I could have done the replica scaling you saw earlier from the command line; in this case I just used a little user interface, but it’s the same thing under the covers, manipulating the deployment and the replication controller. You notice it took some time to start up, and that’s because it actually had to instantiate that Docker container, right, so a docker run had to happen. At the same time, Kubernetes is doing a liveness check and a readiness check against it: are you alive, and are you ready? And the readiness check is literally a business-logic invocation.
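For reference, liveness and readiness checks like these are configured on the pod's container spec in Kubernetes. A hypothetical snippet, where the paths and port are illustrative rather than taken from the demo:

```yaml
# Hypothetical probe configuration for one of the demo services.
livenessProbe:            # "are you alive?" - restart the container if not
  httpGet:
    path: /api/health
    port: 8080
  initialDelaySeconds: 20
readinessProbe:           # "are you ready?" - only route traffic once this passes
  httpGet:
    path: /api/bonjour    # a real business-logic endpoint, as the transcript describes
    port: 8080
  initialDelaySeconds: 10
```

Until the readiness probe passes, the new pod receives no traffic from the service, which is why the scale-up appears gradual.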

So we know it is in fact ready, meaning it is ready to respond. If I come over here to my command line, okay, let’s see how this works out. Let’s run poll_bonjour and see if my poller works. Okay? Notice I’m just doing curl commands against it, and you can see the three different hostnames.

Again, I like displaying the hostname because it shows me there are three unique instances of this running. And if I go over here to my user interface now, okay, and go to the browser-as-client view, let’s see if we can see it here. Yeah, there we are. So we can see it’s load balancing in a round-robin kind of way across the three different instances. We now get load balancing for free.
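That round-robin behavior can be sketched in a few lines. The hostnames here are made up, and in the demo it's Kubernetes' service proxy doing this for you rather than application code:

```python
from itertools import cycle

class RoundRobinBalancer:
    """Each request goes to the next replica in turn, wrapping around,
    which is the rotation visible in the repeated curl output."""

    def __init__(self, hosts):
        self.hosts = cycle(hosts)

    def next_host(self):
        return next(self.hosts)
```

Six requests against three replicas visit each host exactly twice, matching the alternating hostnames the poller printed.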

So, you saw earlier that we got discovery and invocation, which was super easy, and load balancing is just part of the overall architecture. We’ve got more to show you, so please stick with us.

About the Author

Jeremy is the DevOps Content Lead at Cloud Academy where he specializes in developing technical training documentation for DevOps.

He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 20+ years. In recent times, Jeremy has been focused on DevOps, Cloud, Security, and Machine Learning.

Jeremy holds professional certifications for both the AWS and GCP cloud platforms.