Canary

Difficulty: Beginner
Duration: 2h 24m
Students: 2365
Rating: 4.5/5
Description

Do you need to identify the right frameworks and tools to build your own microservices architecture? If so, this course is for you!

In this course we'll teach you how to combine different frameworks and tools into a microservices architecture that fits your organizational needs.

You’ve no doubt heard about the microservices architecture, but understanding and executing it can be a bit of a challenge. Through a series of videos, this course will introduce microservices, review multiple microservices frameworks and runtimes, and show you techniques to deploy them through a hassle-free DevOps pipeline. We’ll discuss containers, Docker, Spring Boot, Node.js, .NET, OpenShift, Jenkins, Vert.x, Kubernetes, and much more.

Transcript

Okay. You saw the Blue/Green deployment model. Now we are going to show you the Canary deployment model. This one is even more sophisticated, and hopefully you will understand why. There is a little history behind why we call this a canary, and these little images here hopefully tell the story. If you live in a mining town, say a coal-mining town, the miners have to dig many, many feet, many meters, down into the earth. And in some cases they encounter bad things, like poisonous gases. So, they used to take a canary down there with them.

The canary, in this case, is the little songbird you can see here in this little cage. The songbird would be down there with the miners, singing for them and entertaining them, but the moment that canary fell off its perch dead, you knew you needed to get out of the coal mine quickly. It meant something bad was happening, because in coal mines, natural gas and other gases can be odorless; you might breathe them in and then die.

The good news, for the miners at least, is that the canary died before they did, and that was the early-warning solution. So, we call this the canary deployment, and the same idea applies in software: we are going to float our little canary out into production, and if things go badly, we know to bail and get out of there. This is even faster, because in this case, unlike Blue/Green, we do not switch everything over to one environment, Blue or Green.

We are actually going to send a tiny fraction of the user audience to a specific build. Same idea as before: we have our pipeline, though we are going to run the demo without it. The little change goes through Development, QA, and Staging, and then lands alongside the current active environment, let's say Blue in this case. It takes a fraction of the user audience, a fraction of the user transactions, from the overall system, and if it is good we grow it over time until it eventually fills up the whole environment and is the only runtime microservice out there. If it fails at any point, we roll it back. So, let's show you that from a demo standpoint.
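As a rough sketch of that ramp-up: under simple round-robin load balancing, the share of traffic the canary receives just follows its share of the pods. The `canary_share` helper below is hypothetical, not part of the course materials; it only illustrates the arithmetic of growing a canary from a fraction of the pods to all of them.

```python
from fractions import Fraction

def canary_share(stable_pods: int, canary_pods: int) -> Fraction:
    """Share of round-robin traffic that lands on the canary pods."""
    total = stable_pods + canary_pods
    if total == 0:
        raise ValueError("need at least one pod")
    return Fraction(canary_pods, total)

# A possible rollout schedule: ramp the canary up while ramping
# the stable version down, until the canary owns all the traffic.
rollout = [(3, 0), (3, 1), (2, 2), (0, 3)]
for stable, canary in rollout:
    share = canary_share(stable, canary)
    print(f"stable={stable} canary={canary} -> {float(share):.0%} on canary")
```

If the canary misbehaves at any step, rolling back is just returning to an earlier row of the schedule.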

This is a super easy demo. Again, it is all nicely documented in our document out here if you want to run it for yourself; you can see how to use Canary deployments right there. And again, it is super easy to set up in a Kubernetes-like world. So, let me go back to my user interface over here. All right. We are going to mess around with Bonjour here.

So, Bonjour right now looks like it is okay. Let's actually run our polling script; we have one for this too. I like these little polling scripts for seeing what is going on. This is just an external client connecting and getting back a response every second. You can see it is returning three different hostnames, because there are three Bonjours running at this moment, and if we look here, there are three pods running. So it is doing round-robin load balancing, and a third of the audience is hitting each pod. That is working out very nicely. But what I want to do now is come over to my editor and find Bonjour.
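Here is a toy version of what that polling script observes, assuming round-robin across three pods. The pod hostnames are made up for illustration; the real demo just polls the route once per second with an external client.

```python
import itertools
from collections import Counter

# Stand-in for the service: round-robin across three pod hostnames
# (invented names; the real demo returns each pod's actual hostname).
pods = ["bonjour-1-abcde", "bonjour-1-fghij", "bonjour-1-klmno"]
round_robin = itertools.cycle(pods)

def poll(n: int) -> Counter:
    """Record which pod answered each of n polls."""
    return Counter(next(round_robin) for _ in range(n))

tally = poll(30)
for host, hits in sorted(tally.items()):
    print(f"{host}: {hits} responses")   # 10 each: an even three-way split
```

Seeing each hostname an equal number of times is exactly the "each pod gets a third of the audience" behavior described above.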

Right here: this is Bonjour, and I have it saying Bonjour Nouveau. Let's not have Bonjour Nouveau; let's have Bonjour Cool Stuff, how about that? It doesn't matter what the text is; we are simply making a code change. This is a Node.js-based microservice, and I am going to come over to my terminal again. You can see here that I could do a start-build right away, but first, let's do this.

Let's do an npm install. Unlike earlier, where we had to do a Maven clean compile build, here we do npm install, and then we do our start-build. There we go; it happens very fast. Again, like Blue/Green, we are doing the build, but against the non-active environment. Watch what happens over here on our console: there is the build happening right there.

It is building the Canary deployment, not the main one, so our users over here are unimpacted. The users are still seeing the old code; they aren't seeing the change I just made, because it is not yet active. We are just going to watch this Bonjour Canary deploy and look at the logs; we can dig around here in the console, and you can see the process going through right there. If I go back and look at my start-build, you can see it is still running here.

It is interesting how the command-line terminal over here and the log on the console side match up; I have always enjoyed that. But there it goes: it is doing a docker push now, pushing layers out there. The deployment is happening, almost done, and now it is deploying. Let's see here. Looks pretty good. Okay.

We are done here, and now here is the trick. You notice nothing actually happened from the user's standpoint; we are still seeing the old stuff. That is because the Canary is currently at zero pods, and that is the super easy way to do this. The new code is out in production, but it is essentially dark launched, right? Meaning the code is in production, but no one is actually seeing it until I turn it on. Basically, I turn the load balancer on so it actually routes traffic to that new pod. So now we have one of those new pods in production; let's go see what our client looks like over here, and there we go.
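To make the dark-launch idea concrete, here is a small simulation, with invented pod names, of the round-robin rotation: at zero replicas the canary receives no requests at all, and at one replica out of four total it takes one request in four.

```python
import itertools

def route_table(stable: int, canary: int) -> list:
    """Pods the load balancer will rotate through (illustrative names)."""
    return ([f"stable-{i}" for i in range(stable)]
            + [f"canary-{i}" for i in range(canary)])

def sample(stable: int, canary: int, requests: int = 8) -> list:
    """Which pod answers each request under round-robin."""
    rr = itertools.cycle(route_table(stable, canary))
    return [next(rr) for _ in range(requests)]

# Dark launched: the canary image is built and deployed, but at
# 0 replicas it is invisible to users.
print(sample(3, 0))   # only stable pods answer

# Scale the canary to 1 replica: it now takes 1 request in every 4.
print(sample(3, 1))
```

Scaling the canary's replica count is the whole switch here: zero replicas means fully dark, and each added replica claims a larger slice of the rotation.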

There is the Bonjour Cool Stuff, right? Notice that it is one of every four, because that is the ratio of the pods we have in action right now: we have four pods total, and one of them, 25%, is the new stuff. If I go over to my user interface and refresh here, there is the Cool Stuff one, as an example. And I can keep rolling this up: I can roll the new version up to two pods and roll the old one down to two, so now it is 50/50.

So, we have a 50/50 split of transactions going to the new Cool Stuff that we have out there. You can see the readinessProbe working for us right here; now it is ready, and again, that readinessProbe is super important. Let's see how this looks. Every other one: fantastic. And then Marketing can come along and say, "Oh my God, we didn't want that change."
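The readinessProbe gating can be sketched like this (the pod names and the `ready` helper are illustrative only, not from the course): a pod stays out of the load-balancer rotation until its health check passes, which is why a freshly scaled-up pod receives no traffic while it is still starting.

```python
def ready(pod: dict) -> bool:
    """Stand-in for a readinessProbe: a pod only joins the
    rotation once its health check has passed."""
    return pod["health_check_passed"]

pods = [
    {"name": "bonjour-2-new", "health_check_passed": False},  # still starting
    {"name": "bonjour-1-a", "health_check_passed": True},
    {"name": "bonjour-1-b", "health_check_passed": True},
]

in_rotation = [p["name"] for p in pods if ready(p)]
print(in_rotation)   # the new pod gets no traffic until it reports ready

pods[0]["health_check_passed"] = True   # the probe now succeeds
in_rotation = [p["name"] for p in pods if ready(p)]
print(in_rotation)   # now all three pods serve traffic
```

This is why the probe matters so much during a canary rollout: without it, the load balancer could route users to a pod that has started but is not yet able to answer.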

Again, well, the requirements said you wanted that change. But it is okay: we can roll it right back and just take it down to zero, and the canary has died. In that case the canary is no more, and we just live with the old stuff. That concept allows you to do deployments super fast. If you are going to move at that breakneck speed, deploying faster than once a week because we have broken the monolith into many different microservices and now want to deploy five times a day, or 50 times a day, you can do it with some security and assurance that it is going to be okay. And that is really the beauty of what we see in a microservices architecture based on Kubernetes and OpenShift.

Well, that concludes this demonstration. Next, we are going to show you one more segment, specifically on moving from monoliths to microservices.

About the Author

Students: 132607
Labs: 68
Courses: 112
Learning Paths: 183

Jeremy is a Content Lead Architect and DevOps SME here at Cloud Academy where he specializes in developing DevOps technical training documentation.

He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 25+ years. In recent times, Jeremy has been focused on DevOps, Cloud (AWS, Azure, GCP), Security, Kubernetes, and Machine Learning.

Jeremy holds professional certifications for AWS, Azure, GCP, Terraform, and Kubernetes (CKA, CKAD, CKS).