Developing Cloud Native Applications with Microservices Architectures

Pipelines

Developed with
Red Hat
Overview
Difficulty: Beginner
Duration: 2h 24m
Students: 648
Rating: 4.2/5

Description

Do you have a requirement to identify the right frameworks and tools to build your own microservices architecture? If so, this course is for you!

In this course we'll teach you how to combine different frameworks and tools into a microservices architecture that fits your organizational needs.

You’ve no doubt heard about the microservices architecture, but understanding and executing it can be a bit of a challenge. Through a series of videos, this course will introduce microservices, review multiple microservices frameworks and runtimes, and show you techniques to deploy them through a hassle-free DevOps pipeline. We’ll discuss containers, Docker, Spring Boot, NodeJS, .NET, OpenShift, Jenkins, Vert.x, Kubernetes, and much more.

Transcript

Okay, now we’re going to talk about the power of the pipeline. That’s the concept where we actually have different stages, right, different components or different environments of our overall application, where our microservice has to roll through this pipeline to make it out to production. So, the concept of the pipeline has actually been with us for many, many years.

We’ve been thinking about this for a long time. For most organizations, though, these pipelines are simply not automated. In some cases, you could probably define your pipeline by asking the right architects and the right engineers and basically laying it out on the back of a napkin. Someone might have even written it down in a nice PowerPoint slide, or maybe it’s actually on a whiteboard someplace in the organization. Most people don’t actually have an automated pipeline; they just have, hopefully, a described or a documented pipeline. In this case, though, we’re going to show you an automated pipeline.

The idea is that when an SCM check-in happens, basically when something happens to that source code base, we automatically do a build, and we automatically push it from a Dev environment, to a QA environment, to a staging environment, and to a production environment. And then you see here my little blue/green dots. We’re going to talk more about that a little bit later, but the concept is that we can move that automated payload, right? That Docker image with its microservice inside it, through an automated pipeline. Let’s go and show you what that looks like. Okay? So, we have a demonstration specifically of the pipeline. Again, all of this, as I said earlier, is part of this overall presentation.

There’s a set of instructions here. You can kind of see we’re in the pipeline mode here, but let’s actually show you what those look like from a user interface standpoint, because we actually have it built in, integrated from an OpenShift perspective. What we have here, oh, and actually let me show you what’s going to happen. If all goes well, you’ll notice that when we actually talk to our endpoint, okay, our Aloha endpoint actually has the wrong spelling of Aloha, which is really critical to me because that’s actually where I’m from. Aloha is my hello and I can’t even get it spelled correctly.

That’s a problem. So, what I want to do is fix that. So, I’m going to go over here to the CI server, the CI project. Let’s look at builds and pipelines, and you can kind of see I have a previous pipeline execution, but what we want to do is kick it off. Let’s go and get it started here. So, what’s happening now is it’s literally going out to GitHub, right, grabbing the sources for that Aloha component from the original GitHub repo, not the broken one I have locally, and it’s going to run through the SCM check-out and the Maven build. It’s going to create a development environment image. It’s going to run a bunch of tests, not a lot of tests but, you know, you can simulate tests, promote it, and then wait for approval, and then once it’s approved, promote it out to the actual production environment.

So, let’s kind of walk through what’s happening there, okay? So, one thing you’ll need to do is you’ll want to actually establish your pipeline, see aloha/pipeline.yml here. So, this is the aloha GitHub repo. You have to basically say the jenkinsPipelineStrategy is a Jenkinsfile. All right? So, this is part of setting it up correctly, to basically say, hey, I want my microservice to have a pipeline. You’ve got to have this sort of declaration and say, yes, I have a Jenkinsfile for it. If you look at the Jenkinsfile for it, it’s somewhat complicated but not too bad.
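To make that declaration concrete, here is a minimal sketch of what a pipeline BuildConfig with a jenkinsPipelineStrategy can look like. This is not the course’s exact file: the name aloha-pipeline and the repo URL are illustrative assumptions.

```yaml
# Hypothetical sketch of an OpenShift pipeline BuildConfig.
# The metadata name and the Git URL below are illustrative, not from the course repo.
apiVersion: v1
kind: BuildConfig
metadata:
  name: aloha-pipeline
spec:
  source:
    type: Git
    git:
      uri: https://github.com/example/aloha.git   # illustrative repo URL
  strategy:
    type: JenkinsPipeline
    jenkinsPipelineStrategy:
      jenkinsfilePath: Jenkinsfile   # use the Jenkinsfile at the repo root
```

The key part is the strategy block: it tells OpenShift that builds for this microservice are driven by a Jenkinsfile rather than a plain source-to-image build.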

Let’s kind of zoom in here and check it out. One thing is, we basically inherit from the default Maven pipeline, all right. So, we want to do a Maven-based build. We know we’re running Maven and a Java-based compile in this case, so that makes it a little bit straightforward. We want to do this persistentVolumeClaim here, because this basically ensures that our Maven repository stays the same between different pipeline builds. In other words, all that downloading of the internet that Maven has to do, this basically keeps those artifacts around so that every time we run the pipeline, we don’t have to re-download the internet, which is super awesome, super cool, a nice efficiency gain there.
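As a rough sketch of that idea (the exact DSL varies with your Jenkins and Kubernetes plugin versions, and the claim name maven-repo-pvc and mount path are illustrative assumptions), the Maven agent with a persistent volume for the local repository might look like:

```groovy
// Hypothetical sketch: a Kubernetes pod template for a Maven build agent,
// with a persistentVolumeClaim so the local Maven repository (~/.m2) survives
// between pipeline runs instead of being re-downloaded every time.
// The claim name "maven-repo-pvc" and mount path are illustrative.
podTemplate(
    label: 'maven-build',
    inheritFrom: 'maven',   // inherit the default maven pod template
    volumes: [
        persistentVolumeClaim(
            claimName: 'maven-repo-pvc',
            mountPath: '/home/jenkins/.m2'   // Maven's local repository cache
        )
    ]
) {
    node('maven-build') {
        // build stages run here, reusing the cached .m2 repository
    }
}
```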

You can kind of see how we load in another specific library. And we’ll show you that library here in a second, but you can kind of see what happens, right? We do the checkout SCM, okay? We do a Maven package. We do a buildApp. We’ll show you buildApp in a second. We do another run of the tests, okay? And then we do a promote image, and if we go down here, we do an input approve to production, and then down here another promote image, and then a canary deploy, let’s step in so you can see that. So, these stages are all that’s occurring, and these stages are what you see in this user interface over here. Let’s kind of flip back to the user interface to see if we’ve made any progress with our pipeline, okay.
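The sequence of stages just described could be outlined like this. This is a simplified sketch, not the course’s exact Jenkinsfile: the helper names buildApp, promoteImage, and deployApp and the environment names are illustrative assumptions.

```groovy
// Hypothetical outline of the pipeline stages described above.
// buildApp, promoteImage, and deployApp stand in for shared-library scripts.
node('maven') {
    stage('SCM checkout')      { checkout scm }              // pull sources from GitHub
    stage('Maven build')       { sh 'mvn -B package' }       // compile and package
    stage('Dev - Image build') { buildApp('aloha-dev') }     // build the container image
    stage('Automated tests')   { sh 'mvn -B verify' }        // run the (simulated) tests
    stage('Promote to stage')  { promoteImage('dev', 'stage') }
    stage('Approve to production') {
        input message: 'Promote to production?'              // human approval gate
    }
    stage('Promote to production') { promoteImage('stage', 'prod') }
    stage('Canary deploy')     { deployApp('aloha-prod') }   // roll out gradually
}
```

Each stage block here is what surfaces as a named step in the OpenShift pipeline view.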

So, in this case I don’t have things cached, so it’s going to actually take a little bit longer, but if I view the logs here, we’ll open that up, log in, and you can see the actual Jenkins user interface as it actually executes the pipeline on our behalf. So, this is a Jenkins execution. You can kind of see what is going on here. So, it’s just at the very early stages of running that pipeline, but I want to show you our script a little further. So, we’ll let that keep running. Let’s go over here to the, so, if you remember earlier we had the promote image, and we had the, you know, the checkout, and we had the buildApp, so like the buildApp right here, okay?

Let’s look at those. The buildApp is right here, buildApp.groovy. This is our Jenkinsfile here. And you can kind of see what it is. It basically establishes the project we’re going to be working with, and it does an oc new-build and an oc start-build. Now, these are just commands that you get from an oc command line standpoint, to basically establish, kind of like what that Maven fabric8 plugin was doing, the Docker build environment, and actually make the Docker build happen. That’s what new-build and start-build do: new-build sets up our BuildConfig, and start-build actually runs the BuildConfig. And then you see deployApp. If I scroll down here and look at deployApp again, nope. The system’s kind of running slowly here. Come on, internet. There we go, deployApp.
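A minimal sketch of what a buildApp.groovy shared-library step like that might contain, assuming it wraps the two oc commands mentioned here (the parameter names and the tolerate-existing-build fallback are illustrative assumptions, not the course’s exact script):

```groovy
// Hypothetical sketch of a buildApp.groovy shared-library step.
// Parameter names are illustrative.
def call(String project, String app, String repoUrl) {
    sh "oc project ${project}"
    // new-build establishes the BuildConfig (the Docker build environment)...
    sh "oc new-build --name=${app} ${repoUrl} || true"  // tolerate an existing BuildConfig
    // ...and start-build actually runs it, producing the image.
    sh "oc start-build ${app} --follow"
}
```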

We look at the deployApp here. It’s taking a little while to load this particular screen, so we’ll look at the deployApp screen, or groovy script, as it loads that page very slowly. This is just a static page. There we go. You can see basically deployApp. There’s an oc new-app command, an oc expose service, and then it does a set probe. This applies the readinessProbe. So, earlier I think I showed you an example where you saw the yaml for how to set up a readinessProbe and a livenessProbe. In this case you can actually apply the probe kind of after the fact. And it’s always important to actually have your livenessProbes and readinessProbes set correctly. So, just keep that in mind. And then it also does this little patch, because it’s specifically going to patch the container, the jolokia container port in this specific case. So, this concept that you see here is actually super important.
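Sketching that deploy step the same way, assuming it wraps the oc commands just named (the health-check URL, port numbers, and patch payload here are illustrative assumptions, not the course’s exact values):

```groovy
// Hypothetical sketch of a deployApp.groovy shared-library step.
// The probe URL and patched port are illustrative.
def call(String project, String app) {
    sh "oc project ${project}"
    sh "oc new-app ${app} || true"          // create the deployment from the built image
    sh "oc expose service ${app} || true"   // create a route so the service is reachable
    // Apply the readinessProbe after the fact, as described above.
    sh "oc set probe dc/${app} --readiness --get-url=http://:8080/api/health"
    // Patch the container spec, e.g. exposing the jolokia port.
    sh """oc patch dc/${app} -p '{"spec":{"template":{"spec":{"containers":[{"name":"${app}","ports":[{"containerPort":8778,"name":"jolokia"}]}]}}}}'"""
}
```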

The idea is that you can run these different scripts, they’re just groovy scripts, and they’re all run from your Jenkinsfile, this master file. You can see the different stages as they’re defined here, and that’s what shows up inside the user interface, inside of OpenShift itself. So, if we come back to OpenShift here, and go back over here, it’s still waiting for those resources, but you can kind of see we’re here: SCM checkout, Maven build, Dev - Image build, Automated tests. Those specifically map to these guys right here, right? See? SCM checkout, Maven build, Dev - Image build, Automated tests.

That’s where those guys are coming from. So, that’s how you know that your pipeline is actually making progress. Okay, it looks like our pipeline has completed. It’s run through its stages. We come over here, we refresh, and you can see it says Aloha. So, we fixed that bug that was in our environment. Basically, you know, we had made a mistake, and now it’s run through the pipeline and executed. And you can kind of see that we have the Aloha corrected there.

It specifically pulled that back out of the GitHub repo where it was upstream, but that’s it for pipelines, super simple. Remember your Jenkinsfile, your jenkinsPipelineStrategy, and then of course whatever groovy scripts you need to actually interact with the Kubernetes APIs and OpenShift APIs. And there are even some default groovy scripts that we provide for you for interacting with the Jenkins pipeline.

That’s all for this section. We’ll see you in the next video.

About the Author
Students: 36,581
Labs: 33
Courses: 93
Learning paths: 23

Jeremy is the DevOps Content Lead at Cloud Academy where he specializes in developing technical training documentation for DevOps.

He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 20+ years. In recent times, Jeremy has been focused on DevOps, Cloud, Security, and Machine Learning.

Jeremy holds professional certifications for both the AWS and GCP cloud platforms.