API: Building and Deploying a Microservice
Difficulty: Beginner
Duration: 2h 24m
Students: 2495
Ratings: 4.5/5
Description

Do you need to identify the right frameworks and tools to build your own microservices architecture? If so, this course is for you!

In this course we'll teach you how to combine different frameworks and tools into a microservices architecture that fits your organizational needs.

You've no doubt heard about the microservices architecture, but understanding and executing it can be a bit of a challenge. Through a series of videos, this course will introduce microservices, review multiple microservices frameworks and runtimes, and show you techniques to deploy them through a hassle-free DevOps pipeline. We'll discuss containers, Docker, Spring Boot, Node.js, .NET, OpenShift, Jenkins, Vert.x, Kubernetes, and much more.

Transcript

Okay, well let's dive into the details at this point. Let's really try to drill down on some of these key capabilities, key properties of a microservice. You can see my little microservice here surrounded by these kinds of tiles on the outside: maybe a Vert.x, Spring Boot, WildFly Swarm, Node.js, or even a Go microservice, as an example. All these things are very possible. It could be Python, PHP, Ruby; it doesn't really matter. We want to show you what it means to build not only the thing inside the circle, this individual microservice, which is fairly easy, but also all the capabilities around it that are super-critical to managing that thing at scale, managing it at runtime. So, let's get into the API side of it, right?

Making an individual microservice is super easy. There are all these great frameworks, especially in the Java community, but also in the Python and Node communities. If you want to use Node.js with Express, no problem, super easy. If you want to use Python with Flask, or something like that, no problem. But in the Java community, we see a ton of things like traditional Java EE using JAX-RS and JPA, very common, as an example, for building a new microservice, or maybe something like Spring.

Spring MVC with the RestTemplate is another good example too. Also, we'll show you a little bit of Vert.x today, because Vert.x is one of my favorite technologies for doing reactive programming and building reactive systems. You'll see a lot of all of this throughout the presentation. Now, this is the easy part.

Now, I did make a note here: making a high-quality API that another business partner is happily consuming is a much harder thing. But building an individual microservice and getting it stood up is actually super easy. All these components you see here, like Dropwizard, Spring Boot, WildFly Swarm, and Vert.x, make it so easy that it isn't really hard to do anymore. One thing to note, though: Dropwizard really was at the forefront of this. They are the team that kind of got it all started. We don't actually have a Dropwizard demo today, but you will see it in the book by Christian Posta we mentioned earlier. We're not going to focus too much on Dropwizard, but it was definitely the team that got this party started for a lot of people in the Java ecosystem.

Spring Boot came out several years ago, originally for building web apps and websites, and then of course they expanded it with Spring Cloud and incorporated the microservices architecture also. It is super popular, and certainly the original Spring MVC programming model and the RestController, RestTemplate model are super popular also. We also have WildFly Swarm, and we have a demonstration of that today. So, we're going to be specifically demoing Spring Boot, WildFly Swarm, and Vert.x side by side, so you get a feel for how those three different components build up a new API, build up a new microservice, get deployed into the same backplane of Docker, Kubernetes, and OpenShift, and get managed in the same way. So, just keep that in mind. We'll focus specifically on those latter three today, okay?

Now here is your Spring Boot endpoint. This is just a code snippet; we're not really going to study it right here. We're going to show you this live in a second or two, but look at it from the perspective of a simple API: this one is simply hello. It just returns hello; it's as simple as that. And of course, there are two files in the case of a Spring Boot application: the application file, the one with the main method, if you will, and then, of course, the specific controller.
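Since the slide itself isn't reproduced here, a minimal sketch of that kind of Spring Boot endpoint follows; the class and method names are illustrative, not the course's exact snippet.

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// In a real project these are the two files mentioned above (the application
// class with the main method, plus the controller); they are merged here only
// to keep the sketch self-contained.
@SpringBootApplication
@RestController
public class HelloApplication {

    public static void main(String[] args) {
        SpringApplication.run(HelloApplication.class, args);
    }

    @GetMapping("/hello")
    public String hello() {
        return "hello";   // the endpoint simply returns hello
    }
}
```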

Here is what WildFly Swarm looks like. You have a couple more import statements, but you can see that the endpoint here is all you need; just that amount of code, it's really not that hard. And here is the Vert.x endpoint. You'll notice that the Vert.x endpoint is different from the former two. The other two look very much alike; the only difference is in the annotations. In this case, we actually have a reactive programming model, and we specifically have this concept of the router. And with the router, you can see the actual router.get; that's the HTTP verb that you're going to use. So get, put, post, delete: if you're familiar with those verbs from an HTTP standpoint, that's how you set those routes up. And then, of course, you have the mapped-in handler, and you can see my lambda invocation, so this is using Java 8. You might not be familiar with Java 8, but this is where you want to break out of that old world of Java 6 and old-school app servers into the new world of microservices. Sketches of both styles follow below.
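As hedged sketches, assuming WildFly Swarm's standard JAX-RS setup and the Vert.x 3.x web API of the era rather than the course's exact snippets, the two endpoints look roughly like this:

```java
// WildFly Swarm / Java EE style: a plain JAX-RS resource. Swarm's JAX-RS
// fraction boots an embedded server around it.
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;

@Path("/hello")
public class HelloEndpoint {
    @GET
    @Produces("text/plain")
    public String hello() {
        return "hello";
    }
}
```

```java
// Vert.x style: a reactive router instead of annotations (needs the
// vertx-web dependency on the classpath).
import io.vertx.core.AbstractVerticle;
import io.vertx.ext.web.Router;

public class HelloVerticle extends AbstractVerticle {
    @Override
    public void start() {
        Router router = Router.router(vertx);
        // router.get maps the HTTP GET verb to this path; put/post/delete
        // are set up the same way
        router.get("/hello").handler(ctx ->                 // Java 8 lambda handler
                ctx.response().end("hello"));
        vertx.createHttpServer()
             .requestHandler(router::accept)                // Vert.x 3.x wiring
             .listen(8080);
    }
}
```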

You can now leverage Java 8 APIs and Java 8 capabilities. So, that's basically how you set up an endpoint in any of these servers. The Vert.x one is actually a single file that you can run directly from your command line, which is very nice also. So, from a Vert.x standpoint, we treat Java like it's a dynamic language. Here is your Node.js component. Again, Node.js is super straightforward. Any of these are fairly straightforward at this point. Building a new API, building a new REST endpoint, is super easy in most technologies. It's really a matter of how you run them, how you execute them, and how you wire them together to build an overall system. That's really where the magic of microservices comes in. Okay, here is a key tip for you: look at the fabric8 Maven plugin.

So, there's the URL here specifically, and you can use the fabric8 Maven plugin with an existing Spring Boot, WildFly Swarm, Vert.x, or even Java EE application. It'll look at your pom.xml and instrument itself in there for an easy Docker build, easy Kubernetes deployment, or easy OpenShift deployment. You basically type in, essentially, fabric8 setup. It'll update your pom.xml, and from that point forward you do fabric8 deploy and you're deploying into the backbone of an existing cloud, right? A Kubernetes-based cloud or an OpenShift-based cloud.
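A sketch of that workflow; the plugin coordinates and version here are assumptions on my part, so check the fabric8 documentation for the current ones:

```sh
mvn io.fabric8:fabric8-maven-plugin:3.5.41:setup  # instruments your pom.xml
mvn fabric8:deploy    # Docker build plus deploy to Kubernetes/OpenShift
```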

Now, there's a debate around thin WARs vs. fat JARs. If you noticed earlier, I talked about Dropwizard, Spring Boot, WildFly Swarm, and Vert.x. Those all use a fat JAR, or uber JAR, architecture. We're not going to focus too much on this particular debate. I recognize people certainly love their thin WARs versus their fat JARs, and definitely, if you're interested in the thin WAR architecture, WildFly Swarm does that as one example; it offers both fat JAR and thin WAR. And you can of course do a thin WAR architecture with your own Dockerfile, if you're familiar with Docker and how its layered file system works; a quick sketch follows below.
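Here's what that layering looks like as a hedged sketch; the WildFly base image tag and WAR name are assumptions, not from the course:

```dockerfile
# The app-server layer below is cached by Docker, so each rebuild only adds
# the few-kilobyte WAR layer on top instead of a multi-megabyte fat JAR.
FROM jboss/wildfly:10.1.0.Final
COPY target/demo.war /opt/jboss/wildfly/standalone/deployments/
```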

You can easily add a WAR on top of the existing backbone of an application server, where all you're paying for on each build, if you will, is the few kilobytes of that WAR file versus, let's say, the multiple megabytes of the fat JAR file. So, it goes either way. Some people like the fat JAR, some people like the thin WAR. I'm indifferent to it, and it doesn't really matter, because all the other principles still apply either way. Okay? At this point I want to jump into a demonstration showing how we build some of these endpoints, starting specifically with Vert.x. But we'll also interact with a little bit of Docker and Kubernetes along the way, so you can get a chance to see how that feels for wrapping up your microservice and deploying it. So, let's jump out of here and get over to our command line, where we can see some things. You can see I actually have my minishift directory here. If I do a minishift ssh, as an example, I can jump into the virtual machine, and I can show you /etc/os-release.

All right, you can see I'm running a Red Hat Enterprise Linux virtual machine, specifically under VirtualBox. So, you can see minishift running here. I have about 8 GB of RAM and two CPUs dedicated to running this specific process. There's a lot happening in this particular virtual machine; that's why I have so much allocated. So, if I say docker images, you can see I have a lot of them. If I do docker ps, I've got a bunch there. And if I come back to my virtual machine and do a ps -ef and grep for java, I've got a bunch of Java processes running.
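For reference, the commands just shown:

```sh
minishift ssh          # jump into the VM that backs the cluster
cat /etc/os-release    # (inside the VM) confirms Red Hat Enterprise Linux
docker images          # images cached in the VM's Docker daemon
docker ps              # containers currently running
ps -ef | grep java     # the JVMs behind the already-deployed microservices
```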

So, there's actually a lot running inside this virtual machine, a lot of microservices already deployed. But in this case, we're going to show you how to build one from scratch, if you will, just to get started. So, I come over here to my vertx-demo directory and open up my editor; I have it running over here, yes, running over here. So, I have this particular Vert.x endpoint, right, very straightforward. It says hello right now. Let's make it say something more interesting, like, you know, Namaste. How about that? Boom.

All right, so we're not going to say hello any longer; we're going to give a different greeting. But I'm going to show you that URL, that endpoint. Basically, we're looking for /hello, and it's going to run and return this particular piece of text, right? Hello from Vert.x with the current date and the hostname. You're going to get a feeling for why the hostname is so important to me as we go through the presentation. I like knowing where my application thinks it's running, even if it's not really running there. So, we have that little piece of Java code.

If I come back to my directory over here, I can do a mvn clean compile package, right? We're just going to do a Maven build of that particular application, and if it compiles and builds for us, we'll get a nice fat JAR as the result. You can see there is my fat JAR right here, okay? So, that JAR file is basically the all-encompassing component, meaning it has the app-server-like capability, in this case the listener for HTTP, built right into it, based on Netty; Vert.x is based on Netty. And that will get deployed as we wrap it with our Docker component. So, we have a Dockerfile. Let me show you the Dockerfile we have right here. There we go. You can see that it's based on this fabric8 base image. We can talk more about that later, but the fabric8 base image is well tuned for running microservice-style workloads, really Java applications in particular.
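A minimal sketch of the kind of Dockerfile being described; the exact fabric8 base-image name and the JAR file name are assumptions:

```dockerfile
# fabric8's Java base images are tuned for JVM workloads in containers and
# run whatever JAR lands in /deployments.
FROM fabric8/java-alpine-openjdk8-jdk
ENV JAVA_APP_JAR vertx-demo-1.0-SNAPSHOT.jar
COPY target/vertx-demo-1.0-SNAPSHOT.jar /deployments/
```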

There's a lot more to be said in that category, but we're going to take this specific vertx-demo snapshot JAR that you saw earlier, put it in /deployments, and run it from there. So, that's really what this Dockerfile is doing. I can come over here now and say docker build -t burr/myvertx:v1; that's basically kind of the namespace, if you will, that I'm putting it in. And if I got the period there correct, it's going to find that Dockerfile. It has already pre-cached its previous layers, but it does know to add the new JAR file, as an example. And I can do a docker run now: docker run -it -p 8080:8080 burr/myvertx:v1, and if we did all that correctly, it'll open up and, all right, looks like it's running. But now we need to come back over here and figure out what our IP address is.
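Written out, those two commands are:

```sh
docker build -t burr/myvertx:v1 .            # the trailing period locates the Dockerfile
docker run -it -p 8080:8080 burr/myvertx:v1  # map the container's HTTP port to the host
```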

So, if I come over here and say minishift ip, I can find out the IP address of that virtual machine. You can see it right there. This virtual machine, again, is running the whole OpenShift backplane; it's our Docker daemon, it's our Kubernetes environment, it's everything. If I come over here to my screen, paste that in, and go to port 8080 and the /hello URL, there it is, running right there. Okay? So, you can see we made the code change, and that might be what we want in our production environment, as an example. So, building a microservice: super simple, easy to get started. And then you can deploy it using Docker or, in our case, using Kubernetes. So, let's actually do that with Kubernetes now. We're going to take this guy down; we're not going to have it running anymore. And now that we've killed it, we can see it's gone away, okay?

Let's back up over here. All right. So, we have our Docker image now: docker image, grep for myvertx:v1... it helps if you get the plural, docker images. There. And let's see, there we go. All right, so there's the v1 we just created. And let me also show you now how we set up the rest of this. I have a set of scripts that I use to keep myself honest. I can query the namespaces and see all the different namespaces that are out there. You can see what we have to show you there. And I can create a namespace; this is like creating a project in OpenShift, it maps one for one. So I can say create-namespace and add the kubedemo namespace right there. So, the kubedemo namespace has been added, and just to prove that point, let me come over here to my OpenShift web console, because I'm really using OpenShift here under the covers. I'd like this to be bigger; this is the problem with having too many windows open, but you get used to it when you're bebopping around like that. I basically have a whole cloud environment running here on my laptop. Let's try to get to the top level and... come on, let me out of here. Here we go. Let's just take this project away. There we go.
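Underneath those helper scripts sit plain kubectl calls, roughly like the following (the scripts themselves are the author's own, so treat this as an equivalent, not their exact contents):

```sh
kubectl get namespaces              # what the query-namespaces script shows
kubectl create namespace kubedemo   # maps one-for-one to an OpenShift project
```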

Okay. So, we have a CI system running here. We have helloworld-msa; you're going to see a lot more about that. We have the Dev and QA environments for helloworld-msa, and we have kubedemo. So, that's the one we just created from the command line. I could have used the user interface here inside OpenShift to create it, but I used the command line, kubectl, to create that component, as an example. Let me make this a bit smaller so it's a little more sane. If I look at my other scripts here, I can create the Docker image, which we've already done. I can also create the pod, and we're going to specifically use create_pod_yaml. Let's just look at what that script does. Okay? It's saying kubectl --namespace kubedemo create -f; that's the common pattern you see from the kubectl, Kubernetes standpoint. I'm going to give it the deployment.yaml, so let's go look at that real quick. Okay? You can see this one is a little bit complicated, because there's a lot going on here.

We specifically have myvertx, which is what we're going to end up calling it, and the namespace kubedemo. You can see we have our selectors here, the labels that have to match; that's important. You can see where the image comes from: myvertx:v1, coming from my Docker daemon. Also, I have the livenessProbe and the readinessProbe, and there's a lot to be said about these two things, but they're super-critical when you actually get into your rolling updates, your canary deployments, and your blue/green deployments; that's really where you see these guys shine. So, just keep that in mind. You can see that you can specify: I want to interact with /hello, because that's the only endpoint I have, and I basically want to get a 200 back. If you get a 200 back, we're good.
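A trimmed sketch of that deployment.yaml; the names and image mirror the demo, while the apiVersion and anything not mentioned on screen are assumptions:

```yaml
apiVersion: extensions/v1beta1    # where Deployments lived in Kubernetes of this era
kind: Deployment
metadata:
  name: myvertx
  namespace: kubedemo
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: myvertx              # must match the selectors mentioned above
    spec:
      containers:
      - name: myvertx
        image: burr/myvertx:v1    # comes from the local Docker daemon
        ports:
        - containerPort: 8080
        livenessProbe:            # on failure, Kubernetes restarts the pod
          httpGet: { path: /hello, port: 8080 }
        readinessProbe:           # on failure, the pod leaves the load balancer
          httpGet: { path: /hello, port: 8080 }
```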

You pass the livenessProbe, you pass the readinessProbe. If you get something like a 400 back, it's going to fail. In the case of a failed readinessProbe, Kubernetes removes the pod from the load balancer; in the case of a failed livenessProbe, Kubernetes will actually restart it, potentially on another node in the cluster. So, keep that in mind, okay? You can then see what else we have here. So, that's the deployment guy. Now, let's actually create it. Let's go. All right, so there we go: kubectl get pods. And we're in the wrong namespace, so let's fix that real quick. One nice thing about the oc command line, which I've also used, is that I can make the namespace sticky. So, all right, kubectl get pods, all right, there it is, and you can see myvertx up and running. If I look back at my OpenShift console too, okay, there it is, it's up and running.

So, now that pod is live and running. Think of it as a virtual machine I just started; in this case, it's just a Docker container, just as you saw earlier with docker run, only this time it's done with Kubernetes. The nice thing about this is that it doesn't matter where in the cluster this runs, right? In the case of docker run, I'd have to go into each node in the cluster and say docker run, docker run, docker run. In the case of Kubernetes, it schedules it out across the entire cluster; I don't have to think about it anymore. And I know this is a conversation about microservices, but this is how you run microservices at scale. So, let's see what else we have. We've already seen how to get our pods; let's go and expose the service here. Okay?

That exposes the service with the front-end load balancers, specifically on port 8080, and now it's accessible, it's available. So, if I come here and curl that service, right, we can curl that service. Let's see how this works out. There it is. Now we're hitting that endpoint, okay? So, we can interact with it in a fairly easy way. You can see what I had to do here: we basically constructed the URL from the minishift IP address and the node port associated with the service.
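The demo drives this through a script; a rough plain-kubectl equivalent (the flags and the node port value are assumptions) is:

```sh
kubectl expose deployment myvertx --type=NodePort --port=8080 -n kubedemo
kubectl get service myvertx -n kubedemo    # read the node port Kubernetes assigned
curl http://$(minishift ip):30123/hello    # 30123 stands in for your node port
```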

By default, a Kubernetes service is not visible to the whole world; it's only visible within the context of the cluster itself. And these services are just like any other kind of DNS resource at this point: a specific service name that is available throughout the entire cluster. The node port is how I expose it from the virtual machine to my host OS, which in this case is the computer I'm using right now. Okay, so we've got the thing exposed now.

I can also scale the replicas out. So, let's look at this command. You can see scale deployment myvertx --replicas=3, and now if I come back and curl it some more, watch what happens. Okay? You're going to see something happening here, and I'll show you what's happening on this side. This is it scaling out. You can see there was only one active before, the one we were working with earlier, and now there are three active, and you can see that the hostname has changed. Remember when I said earlier to focus on that hostname? In this case, it allows me to see that my application is now load balanced. Let's just go back and look at the code real quick. I know I'm jumping around somewhat, but let's look at the code. Right here: HOSTNAME.
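The scale command, plus a quick loop to watch the hostnames rotate; the loop and the node port are illustrative, not from the demo:

```sh
kubectl scale deployment myvertx --replicas=3 -n kubedemo
while true; do curl http://$(minishift ip):30123/hello; sleep 1; done
```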

I can see that my application code, the JVM, thinks it's literally running on three completely different computers. It's not aware that the host is a highly virtualized container with a load balancer in front of it, based on Kubernetes. So, it doesn't matter where in the cluster it is; I'm load balanced across those components, those three different JVMs, those three different Vert.x components. So, that's actually super cool. If I come over here now, what I'll do, let's do this. Let's actually come over here. I'm going to open up this other window here and get over here. So, let's see if this will work for me.

Let's see if we can run the curl from this window. Okay, there we go. And now let's actually make a change to it; we're going to have a little fun with it. So, we have the Namaste. I'm going to come in here now and make this Bonjour. So, we've changed the actual business logic, okay: docker build -t burr/myvertx:v2. All right, we're going to do our Docker build, and we're going to make that the version 2. But before we do this, let's be smart about it: you've got to do mvn clean compile package first. It is a Java project, after all. Sometimes I forget that. With Node.js projects you can kind of skip this part, even though npm install is a good idea, but in this case, let me do my mvn build before I do my docker build.

So, docker build -t burr/myvertx:v2, and if I did all that right, all right, we do our docker build there, and that's going to build our new Docker image. Let's go check back on our curl here; it's still cruising along, no problem there, and you can see it's still the old version. We've done our Docker build. I could do the docker run like you saw earlier, but in this case, check this out: I'm going to come over here, and if you look at prepare_update, that's essentially what we just did; we did our docker build right there. And then if we look at number 10, all right, we're going to roll the update. And so, let's roll the update.
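prepare_update and script number 10 are the author's own helpers; a plain-kubectl equivalent of what they appear to do (an assumption on my part) is roughly:

```sh
mvn clean compile package          # rebuild the fat JAR with the new greeting
docker build -t burr/myvertx:v2 .  # what prepare_update just did
# script 10: point the deployment at v2, which triggers the rolling update
kubectl set image deployment/myvertx myvertx=burr/myvertx:v2 -n kubedemo
```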

If I go look at my console again, you can see it's rolling the update there. Okay, we're taking down the old one, bringing up the new one, doing a rolling update, and it's taking a little bit of time to get that new image ready. You can see it says not ready. And there it goes. Let's see how it came out over here. And now we have Bonjour. So, we basically took an old component, completely replaced it with a whole new version 2 of that component, and rolled it out, in this case with no downtime to the environment. So, this is just a Kubernetes capability that we believe is fundamental to how you would want to manage your actual applications; in this case, a Vert.x microservice running inside a Docker image on a Kubernetes backplane.

I can also come over here now and look at this one last time. If I look at the update history, okay, right, we can see the rollout history; you can see what happened there. And I can even undo the update if I want to: rollout undo deployment/myvertx. If we look back at our console over here, you can see it's going backwards now. It's going from v2 back to v1, and that's the same process you saw earlier: it went from v1 to v2, and now we're going from v2 to v1, as a simple example. Okay? So, there we go. And if we did everything correctly, all right, we're back to Namaste. So, if you make a change you don't like in production, it's easy to roll it right back, and again, the impact on your users is minimal. In this case I could even have had specialized routing rules to determine which users actually saw that change before I rolled it out to the world at large. Okay, now that we've shown you Vert.x, I want to show you a couple of other things while we're in this particular component, this particular demo. There's actually a lot inside here that you'll see.
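For reference, the two rollout commands used just now:

```sh
kubectl rollout history deployment/myvertx -n kubedemo  # the update history
kubectl rollout undo deployment/myvertx -n kubedemo     # v2 rolls back to v1, no downtime
```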

Let me actually go to the code editor. We have not only the vertx-demo like you saw; we have WildFly Swarm, Node, even a go-demo that might be very interesting to you, and also a Spring Boot demo. Let's go and show you how the Spring Boot demo gets set up, okay? I want to just walk you through it here. You can see we can just have a basic RestController like you saw in the slides earlier, but let's actually start from scratch. Let's see how this plays out. I'm game if you guys are game, so let's give this a try. We're going to come over to our browser and go to start.spring.io. So, this is often how you get started, and I can leave it as com.example.

I think that'll be fine. Demo; let's call this one demo24 to give it a kind of unique name. And I'm going to pick the Web dependency right there and make sure I pick it. I'm going to hit Generate Project. So, that's all I need to do to build a basic project. And if I come over here and actually open this guy up, Show in Finder, and open it up here, you can see we have a little project here, and of course we have a microscopic font, so it's going to be actually hard to see here. But what I want to do is open up a terminal right here: Open Terminal at this Folder. All right, so here we go. So, here we have it. Let's do a mvn clean compile package, just to make sure we're in good order, and yes, it's got to do some downloads; that makes sense. Let me go back to my old window here.

Okay, this guy was my minishift. I'm going to do minishift oc-env; this gives me my OpenShift environment. All I'm going to do is make sure I grab that export command and apply it to this other terminal session, so oc is on its path, after we get the mvn build done. You have to download the internet first, as always. Apparently, I had not built this specific version before; otherwise it would normally have been cached in my local mvn repo.

So, we just need to let that come down. While we're over here, though, we're also going to need minishift docker-env, okay? Because, as I said earlier, the virtual machine we're providing to you with the minishift CDK is the Docker daemon, the Kubernetes backbone, and OpenShift all in one, and of course all based on Red Hat Enterprise Linux. So, that gives you a complete working cloud environment right here on your laptop to work with. And then we just need to let this Maven build happen here; let's just give it a few more seconds. Okay, now we've completed our Maven build. We've downloaded the internet, and now we're ready to do some work here. I'm going to come over here, though, and make sure those export statements from earlier are applied; let me grab the screen correctly, grab it on here. I'm going to get my Docker export statements from my minishift docker-env.
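The usual way to apply both sets of export statements in one go:

```sh
eval $(minishift oc-env)      # puts the oc binary on this shell's PATH
eval $(minishift docker-env)  # points the docker CLI at the daemon inside the VM
oc status                     # sanity check
docker ps
```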

Now I can do docker ps, I can do docker images, right? I'm seeing the same thing we saw earlier in the previous window. And let me also grab the OpenShift environment here, right. So, we have OpenShift. There. And we can do oc status and we're good to go. So, let's do this: oc new-project spring-demo, just to give this thing a name. So, we're going to be in spring-demo now. Let's bring up our code editor here for this little demo project. You can see what it looks like: it has a main, this is the one we just downloaded, right? And it has an application. The thing it doesn't have is a controller. So, we can add a new controller right here: MyRESTController.java. And so we have our little RestController, and to make this super simple, I'm just going to copy and paste it from the other project, just so I don't make a mistake. But you can get a feel for it; this is the same stuff you saw in the slide earlier, and the only difference is the package name in this case. So, let's go up here to demo24, okay? And the package was com.example.demo24, okay?
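The pasted controller ends up looking roughly like this; the method body is an assumption, reconstructed from what is described on screen:

```java
package com.example.demo24;

import java.util.Date;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class MyRESTController {

    @GetMapping("/hello")
    public String hello() {
        // HOSTNAME is set inside the container and identifies the serving pod
        return "Hello from Spring Boot! " + new Date() + " " + System.getenv("HOSTNAME");
    }
}
```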

Obviously, I made that up on the fly. We hit Save here. Hello from Spring Boot: you can see it has the date and it has the hostname, as we saw earlier with the Vert.x example. And I can now do a mvn clean compile package to make sure that builds correctly. And so, we've now built our little Spring Boot based application, or Spring Boot based microservice, our little API. I can do things like mvn spring-boot:run; I think that's the way you run it. Let's see if that runs for me. There it goes, up and running. So if I come over here now to my browser and I go to localhost:8080... not that guy there.

There we go. So, there it is running, and you can see: Hello from Spring Boot. So, we've got Spring Boot running. And if I close this and come over here now, Ctrl+C. It also built me a fat JAR; you can see there's the fat JAR right there, also just like before. And in the previous demo we actually showed you how we took the Dockerfile and did the docker build. In this case I want to make it even easier, right? I mentioned the fabric8 plugin earlier; let's give that a try. I actually have the command over here in a little buffer. It's so handy, I keep it with me all the time, okay?

So, I'm going to run that, and it's going to instrument that pom.xml with the fabric8 deployment plugin. So, at this point, let's go look at it. If you look at the pom.xml associated with this guy here, okay, we're going to see the fabric8 plugin now introduced, and if all has gone well, I can simply come in here now and say mvn fabric8:deploy. All right, and that's going to do our build and push it up into the project we just created with the oc new-project command. If I come over here to my OpenShift console... okay, let's go from here, to all applications, okay, come on. There's a problem with this window being too big; I can't make it behave correctly.

There we go; I want to go Home. Now we go to spring-demo. Go down there and let's see here. Okay, it's still pushing. So, in this case it's done essentially what you saw in the previous demonstration. We don't have a Dockerfile, but that's okay; we're using something called S2I under the covers, Source-to-Image. In this case, all it needs is our fat JAR, the one I showed you earlier when we did our mvn clean compile package. It takes that and wraps it with Source-to-Image, which basically says: here is a Dockerfile that you don't have to worry about any longer as a developer. It bundles that up and then pushes it into the environment. So, you can see here it's now deployed.

That little spring-demo application I just created on the fly for you is now going through the process of bringing up its container. So, if I come over here now and say oc get pods -w, we should see that the pod is pending and going through its deployment cycle here. The -w, by the way, is a watch, so we can wait for it and watch its status change. But at this point it is literally deploying across my cluster. In this case it's a cluster of one here on my laptop, but if I had a cluster of 25 nodes running thousands of containers, it could deploy across that too. So, that's one nice thing about the fabric8 Maven plugin: it makes on-ramping your Spring Boot, Vert.x, or WildFly Swarm application super easy, as an example. And we'll just let that continue running.

Let me show you one other thing while we're here. Let's go to wildfly-swarm.io, all right? The same idea applies here. We'll go to their generator, we'll take com.example, and we'll make this one demo25, all right? Make sure it's unique. And we'll say JAX-RS, because in the case of Java EE we use JAX-RS instead of Spring MVC. And then we'll say Generate Project. So, WildFly Swarm, our MicroProfile implementation, our Java EE implementation: same kind of concept there. I can open it up and unzip it, and as before, I can bring up a terminal window as well here, okay? I can do the mvn clean compile package here, and while that's running, I'm curious to see how the other guy is coming along, so let's go back and check on it. Web console, okay, the Spring component is still deploying, not done yet. But let's come over here, and we'll leave this window open and go back over here, okay? So, there it is; we just did our compile for WildFly Swarm.

If we look here, you can see there is the demo.war, right? This is the file that's created when we compile the WildFly Swarm application. Let's bring up the code editor so you can see what the code looks like. And in the case of WildFly Swarm's generator, one big difference is that it actually gives you the business-logic endpoint for hello world right out of the box, so we can leave that as it is. You can see where it says hello there. And then the same thing applies: we can run the fabric8 mvn plugin against it and get that to deploy. Now, I'm curious to see if my Spring component has deployed at this point. Let's see here. No, still deploying. Let's look at the logs to see how it's coming along. Okay?

It's going through the process: it builds the Docker image, pushes that into the Docker daemon, and in this case it then has to get that bootstrapped into the actual pod, right, into the actual running instance. If I look back over here at my demo24, you can see the status is still pending, so it's still going through the process of doing its deployment. Often, once you've got everything cached, things operate at a vastly greater pace; it's certainly faster than this.

In this case, obviously, I had not set this up before and had not cached this particular image before. And of course we created it kind of on the fly. We don't have to wait for it, though; let me show you fabric8 against demo25, right? The WildFly Swarm one, just so you get a feel for it. Okay, if I come over here now and do the fabric8 setup, like we saw earlier; same idea as before, okay, we're just adding fabric8 to it. I can just do oc new-project swarm-project, and... oops, it helps if you spell project correctly. And we'll go back over here and, well, let's do a refresh: swarm-project. Oh, let's see: oc new-project swarm-project says oc command not found, because I forgot I've got to apply that export command in this new terminal window.

When you open a new terminal window, you've got to actually bring things over to it, right? So, in this case, oc is now added to the path, and now we have the swarm-project. There we go. It's often more fun when you see the demo get messed up, right? So, now we have the swarm-project, and I can come over here and do mvn fabric8:deploy; we already did clean compile, so we just do the fabric8 deploy. All right, so that is going to send our WildFly Swarm component out there. It goes through the same process: it'll build the Docker image, deploy that against the Docker daemon, and roll it out against Kubernetes.

So, that concept of building a little Vert.x application and adding a Dockerfile, or using something like Source-to-Image and the fabric8 Maven plugin: those are different ways to on-ramp any kind of Java-based API, or for that matter a Node.js API or a Python API; it doesn't really matter what it is. Building an initial microservice and getting it hosted into the backbone is really the interesting part of it, in this case running it in our local cloud.

All right, we have more to show you, but please stay with us.

About the Author
Students: 143692
Labs: 71
Courses: 109
Learning Paths: 209

Jeremy is a Content Lead Architect and DevOps SME here at Cloud Academy, where he specializes in developing DevOps technical training documentation.

He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 25+ years. In recent times, Jeremy has been focused on DevOps, Cloud (AWS, Azure, GCP), Security, Kubernetes, and Machine Learning.

Jeremy holds professional certifications for AWS, Azure, GCP, Terraform, and Kubernetes (CKA, CKAD, CKS).