
Discovery and Invocation

Developed with
Red Hat
Overview
Difficulty: Beginner
Duration: 2h 24m
Students: 755
Rating: 4.3/5

Description

Do you have a requirement to identify the right frameworks and tools to build your own microservices architecture? If so, this course is for you!

In this course we'll teach you how to combine different frameworks and tools into a microservices architecture that fits your organizational needs.

You’ve no doubt heard about the microservices architecture, but understanding and executing it can be a bit of a challenge. Through a series of videos, this course will introduce microservices, review multiple microservices frameworks and runtimes, and show you techniques to deploy them through a hassle-free DevOps pipeline. We’ll discuss containers, Docker, Spring Boot, NodeJS, .NET, OpenShift, Jenkins, Vert.x, Kubernetes, and much more.

Transcript

Okay, we’ve seen how easy it is to build a new endpoint, build a new microservice, and deploy it in Docker, Kubernetes, and OpenShift. Now we’re going to show you how to weave two of them together, because what really happens when you go from a monolithic architecture to a microservices architecture is that you have many, many microservices, potentially 20, 30, 50, 500, all with their own little independent two-pizza teams deploying them.

But how does one find the other? How does service A find service B, and then how does it invoke it? This is a huge architectural challenge you'll have to decide on for your organization. Many people are leveraging HTTP, as an example, because it’s easy to test each individual component, each individual API, each individual microservice, and then of course allow that HTTP traffic to come from anywhere, whether it be inside the company or even outside the company, if you want business partners to interact with your APIs. But in this specific case we’re going to show you what it means to do discovery and invocation with Kubernetes specifically, because we don’t need Eureka, and we don’t need Ribbon.

A lot of people are taking advantage of Eureka and Ribbon, the Netflix projects that are out there as part of the Spring Cloud universe. But in the case of Kubernetes, we don’t need that. Basically, every service you create, every service that you saw me create earlier, has a load balancer built into it, and I get that for free.
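To make that concrete: the built-in load balancing comes from a plain Kubernetes Service object. A minimal sketch of what such a manifest might look like for the demo's producer follows; the name and port match the demo, but the `app` label selector is an assumption:

```yaml
# Hypothetical Service manifest for the demo's producer.
# The metadata.name becomes a cluster DNS entry ("producer"),
# and the Service load-balances across all pods matching the selector.
apiVersion: v1
kind: Service
metadata:
  name: producer
spec:
  selector:
    app: producer   # assumed pod label
  ports:
    - port: 8080
      targetPort: 8080
```

Because the Service name is a stable DNS entry, clients never need to track individual pod IP addresses.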

So, I don’t have to worry about it. Let’s come over here and actually show you this demonstration. We’re going to focus on discovery and invocation. Let me come over here to my command line. I was just checking my available memory, and it looks like I still have memory in that VM; that’s always good, because I’m running a lot on it. I’m still in that kube4docker project that we saw earlier, but in this case I’m going to go into the discovery demo. And again, I have the README file that gives the instructions, so go check it out over here. It talks about building a new project, in this case discoverydemo.

If I look at my web console here, you can see we have a discoverydemo project, and there’s nothing deployed to it at this point. But I can come over here to the producer, and just like earlier, I can do mvn fabric8:deploy. That’s going to do the Maven build and the Docker build. Again, I don’t need a Dockerfile any longer; the fabric8 plugin takes care of that for me. It leverages the S2I (Source-to-Image) capability of OpenShift and applies a default image associated with that payload, in this case a nice JVM-based workload. So, that makes it super easy.

And as it goes through its process, you can see it should be starting to deploy up here now. Looks like it’s come up already. Let’s look at the log file for it. Yes, fantastic. And if I come over here now to the actual Overview page, I can click on its URL. There it is. So, that is the service running on the producer side. This is the one that has to be found, if you will. Let’s go look at the Java code real quickly and poke around it. The producer is very straightforward, much like the hello world we saw earlier. It’s just responding with the data.
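The actual producer in the demo is a Vert.x application, and its source isn't reproduced here. As a rough stand-in, here is a hedged sketch of the same behavior using only the JDK's built-in HTTP server; the port matches the demo, but the class name and greeting text are assumptions:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;

public class Producer {
    // Approximation of the greeting the demo's Vert.x producer returns from "/".
    static String greeting() {
        return "Hello from Vert.x producer";
    }

    public static void main(String[] args) throws Exception {
        // Listen on 8080: in Kubernetes every pod has its own IP address,
        // so every microservice can use the same port without conflicts.
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/", exchange -> {
            byte[] body = greeting().getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
    }
}
```

The point is only that the producer serves a simple response on the root path; the real demo code does this with a Vert.x verticle instead of `com.sun.net.httpserver`.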

In this case, it’s running on the root, so it’s just responding on the index, if you will; there’s no sub-URL under here. Now let’s look at the consumer side. The consumer is the Spring Boot-based application, so the producer is Vert.x and the consumer is Spring Boot. And basically, what happens is, the consumer is going to look for producer. The name of that other service is just producer. If you come back over here, you can see its name is producer; that’s the service name associated with it. Let’s go back to my command line, just to make the point: I can say kubectl get services.

And there it is: producer. That’s its name, that’s its DNS entry, and that’s taken care of for us from a Kubernetes standpoint. OpenShift leverages this on top of Kubernetes, and I don’t have to worry about its actual IP address. If its IP address changes, I still refer to it by its name. So, discovery is super easy in the case of a Kubernetes-based application. Now, if I come over here to my consumer and run mvn fabric8:deploy to get that component deployed, it’ll go through the process and start deploying. I’m going to bring up my editor again. It’s basically going to call producer:8080. What’s interesting also, because I’m using Kubernetes and OpenShift, is that everybody is on 8080, even though those ports are all being managed for me internally from a Kubernetes standpoint.

Every microservice, every pod (Kubernetes calls them pods), is running with its own unique IP address; Kubernetes handles that, and they’re all on 8080. I don’t have to think about it, and I don’t have to worry about it. I basically say, hey, call producer. Notice that if you’re outside of Kubernetes, you would use its route, that is, its exposed route, which is an OpenShift-specific feature. In this case, I’m inside the cluster, so I just refer to it by its service name. So, here is an example of outside the cluster, and here is an example of inside the cluster. And actually, we used the outside-the-cluster form earlier when we were using the browser. So, if we come over here, you can see that’s the producer, and you see the URL we used right there.
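The inside-versus-outside distinction boils down to which hostname goes in the URL. A small sketch; the helper and class names are hypothetical, while producer and 8080 come from the demo, and the route hostname is a placeholder:

```java
public class ServiceUrls {
    // Inside the cluster: refer to the peer by its Service name.
    // Cluster DNS resolves "producer" to the Service's cluster IP.
    public static String inCluster(String serviceName, int port, String path) {
        return "http://" + serviceName + ":" + port + path;
    }

    // Outside the cluster: use the exposed OpenShift route's hostname,
    // the same URL you would paste into a browser.
    public static String viaRoute(String routeHost, String path) {
        return "http://" + routeHost + path;
    }

    public static void main(String[] args) {
        System.out.println(inCluster("producer", 8080, "/"));
        System.out.println(viaRoute("producer-discoverydemo.example.com", "/"));
    }
}
```

Because the in-cluster form never embeds an IP address, pods can be rescheduled freely without breaking callers.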

So, that’s an outside-the-cluster example. And there is our consumer. The consumer is up; let’s make this bigger. You can see it actually says Hello from Spring Boot, and then it says consumer, and from Vert.x producer. So basically, when we called the Spring Boot application, the request went from here, through the consumer, into the producer side. We can look at the code one more time. You can see where we actually do the getForEntity; we use the restTemplate that's part of the Spring universe, we do a System.out.println, and we return Hello from Spring Boot with the date and the hostname, and of course the body from Vert.x. And you can see they both have their own unique hostnames here: the hostname for the consumer side, and the hostname for the producer side. So, discovery and invocation is super easy with a Kubernetes and OpenShift-based solution.
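The consumer in the demo makes this call with Spring's RestTemplate and getForEntity. As a framework-free sketch of the same invocation pattern, here is roughly equivalent code using the JDK's HttpClient; the compose wording approximates the demo's output, and the class and method names are assumptions:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class Consumer {
    // Build the combined reply the way the demo's consumer does:
    // its own greeting plus whatever body the producer returned.
    static String compose(String hostname, String producerBody) {
        return "Hello from Spring Boot on " + hostname + ": " + producerBody;
    }

    // Invoke the producer by its Service name; inside the cluster
    // baseUrl would simply be "http://producer:8080".
    static String callProducer(String baseUrl) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(baseUrl + "/"))
                .GET()
                .build();
        return client.send(request, HttpResponse.BodyHandlers.ofString()).body();
    }
}
```

Notice that the consumer needs nothing beyond an HTTP client and the service name: no Eureka registry lookup, no Ribbon client-side load balancer.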

That completes this topic, and we’ll see you in the next video.

About the Author

Jeremy is the DevOps Content Lead at Cloud Academy where he specializes in developing technical training documentation for DevOps.

He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 20+ years. In recent times, Jeremy has been focused on DevOps, Cloud, Security, and Machine Learning.

Jeremy holds professional certifications for both the AWS and GCP cloud platforms.