Microservices Overview: What and Why?
Difficulty: Beginner
Duration: 2h 24m
Students: 2382
Ratings: 4.5/5
Description

Do you need to identify the right frameworks and tools to build your own microservices architecture? If so, this course is for you!

In this course we'll teach you how to combine different frameworks and tools into a microservices architecture that fits your organizational needs.

You’ve no doubt heard about the microservices architecture, but understanding and executing it can be a bit of a challenge. Through a series of videos, this course will introduce microservices, review multiple microservices frameworks and runtimes, and show you techniques to deploy them through a hassle-free DevOps pipeline. We’ll discuss containers, Docker, Spring Boot, NodeJS, .NET, OpenShift, Jenkins, Vert.x, Kubernetes, and much more.

Transcript

One thing I’d encourage you to look at is the two free eBooks we have at developers.redhat.com. So, a lot of the content you’ll see here today is also in written form and available to you. In this case the Java microservices book, written by Christian Posta, is very popular. It includes things like Dropwizard, Spring Boot, and WildFly Swarm to kind of get started from the Java microservices perspective. And we also have the Vert.x book, specifically on reactive microservices, where you can see the reactive programming model and building reactive systems, specifically microservices systems, with Vert.x. So, two great resources for you to dive into right there.

But, we’re going to keep moving along. Now, let’s start with this definition. This is the definition provided by James Lewis and Martin Fowler, and you can find it out there on the public internet, but it’s specifically focused on the concept of microservices as an architectural style. Now, style is the key word here. Style means that, like all styles, it will tend to fade over time. That doesn’t mean this one specifically will, but it is something that’s kind of hot and nouveau and cool right now, and you should just factor that into your decision making as it relates to adopting this particular type of architectural style. Another thing that is super important is the concept of a suite of small services. The idea is you don’t have a single code base, you have many code bases, otherwise known as microservices, and many independent teams managing those independent code bases.

That’s super-critical also. Every one of these runs in its own process. That’s its own operating system process; the easiest way to think of it is that when you do a ps -ef you should be able to see it. They are built with lightweight mechanisms; they communicate with things like HTTP, as an example, which is a fairly common paradigm. You can use other invocation mechanisms, whether it be AMQP or MQTT, or whatever you might want to use, UDP, TCP, but most people use HTTP. They’re typically designed around business capabilities. They’re definitely independently deployable. That’s rule number one in my book, and you also want to think of it from a fully automated perspective.
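
To make that part of the definition concrete, here is a minimal sketch, not taken from the course demos, of a service that runs in its own operating system process and speaks plain HTTP using only the JDK's built-in HttpServer; the CatalogService name and JSON payload are invented for illustration.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// A tiny service that runs in its own OS process and exposes one HTTP endpoint.
public class CatalogService {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/api/catalog", exchange -> {
            byte[] body = "[{\"item\":\"widget\",\"price\":9.99}]".getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        server.start(); // visible as its own process in `ps -ef`
    }
}
```

Start it with java CatalogService and you can see the process with ps -ef and hit http://localhost:8080/api/catalog with curl.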

So, this definition actually holds very true to this day. This has been out there for a couple of years now and I think it’s appropriate that we actually talk about it briefly. But let’s go ahead and keep moving along here. Now, I know microservices is super, super hot. The reason you’re watching this today is because you’ve heard of it too, and you’re particularly concerned about it from a "should we be doing it?" standpoint. Well, everybody is talking about it, and in theory everybody is doing it. When I go out and speak to large groups of developers and ask for a show of hands in my microservices presentation, asking how many people are already doing microservices, about 80 to 90% of the hands go up.

Everyone is doing it already. That’s how hot and how hyped this thing is. But do notice, if you look at the Gartner hype cycle, it will eventually go through its trough of disillusionment, and it has already started that process at this point. Now, I know as a developer you’re probably thinking, hey, I’ve got to be a Silicon Valley unicorn too, just like my pink unicorn with a rainbow mane here. That’s specifically the GitHub unicorn. But we know everybody wants to be a microservices developer. It is that hot a topic. And in this presentation, throughout all this content, you will actually see some of the key characteristics that you’ll have to apply to your talent set and your application architecture to determine if you’re ready for microservices or not.

Now, microservices is fundamentally about agility. I know a lot of people are thinking: well, I want to do microservices because I’m tired of using vendor X's big old monolithic Java application server, and I’m tired of being stuck on Java 6 or Java 7. I’d like to move to Java 8, as an example. We hear this all the time, and it is a common concern, and you probably think, oh, if I just use Spring Boot, I can break with that old architecture and actually do some microservices. So, we see a lot of people focusing on that concept of a certain technology, and adopting a particular paradigm or something. But really, microservices is fundamentally about agility, and agility breeds speed. I actually used to coach soccer for many, many years.

I coached soccer, or football for a lot of you if you’re international. I coached it for about 14 years, and one thing we learned as we were building elite soccer players was that if you give them greater agility, it actually creates greater speed for the player. Their ability to cut, turn, change direction, and move where the ball moves is fundamentally impacted by their agility more so than their straight sprinting speed. So, we’re going to focus on the agility part of it, because that is where you’ll have your biggest win. Okay? So, agility is number one. One way you get greater agility, though, is by breaking this three-month deployment cycle. Now, I’m using the three-month deployment cycle because that is the primary metric to know how fast your organization is delivering value to the business itself. Now, three months is actually pretty fast.

You might be thinking: our organization is more like five months, or our organization is more like nine months. I’ve even seen some organizations who only deploy to production every two years. So, that’s relatively slow, and that’s time to business value. Think of it like that. Your code offers no value to the actual business until it lands in production, and if it only lands there every three months, that’s the only time you’re offering value: once every three months, four times a year. So, you want to shorten this deployment cycle, shrink it down, and you can do that with numerous techniques. We’ll talk about those briefly. One, you need to re-org to DevOps.

If you’re still fundamentally in the mode where the developer throws it over the wall to the operations team, basically runs away from their software, and hopes the operations folks can actually make that mess run, and they’re the ones responsible for it, that’s a fundamental problem, and it’s a part of your culture you’ll have to change as you go forward. So, keep in mind that the developers also have to be part of the ownership of the actual software package.

There’s a lot to be said in the DevOps category, and we don’t have time for it today. We’ll keep moving on though. Now, elasticity is also very critical. And by elasticity in this case I mean infrastructure as code. This is where we use software-defined everything. So, I should have an API that allows me to provision a new virtual machine at will. Right now, not waiting three weeks for a virtual machine, but doing it right when I need it, or getting a new container stood up right away, as an example of that. Also, think about your automation. If you’re still manually SSHing into servers and setting up different init.d or systemd scripts and things of that nature, you have a problem. You’ve got to fix that. You need to have that thing fully baked out as an immutable image, as a gold standard if you will, whether it be the virtual machine or the container image itself, and ensure that you can actually recreate it at will, right? Using a tool like Ansible, as a good example; Ansible playbooks do that really well.
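
The "software-defined everything" point is really about a developer asking an API for the resource instead of filing a ticket. Here is a hedged sketch of that idea using the JDK's HTTP client; the provisioning endpoint, URL, and JSON fields are made-up placeholders, not a real product API, and in practice an Ansible playbook or the OpenShift/Kubernetes API does this work for you.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Sketch: provisioning a machine through an API call instead of waiting weeks for a ticket.
// The endpoint and JSON body are hypothetical placeholders, not a real product API.
public class ProvisionExample {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://infra.example.com/api/v1/vms"))   // placeholder URL
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(
                        "{\"image\":\"rhel-base\",\"cpus\":2,\"memoryGb\":4}"))
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("Provisioning request accepted: " + response.statusCode());
    }
}
```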

You can also just script around Kubernetes or script around OpenShift to do the same thing. Also, CI and CD, right? Most people practice CI in some form or fashion; in other words, they use Jenkins or TeamCity or another product like it. But in this case, we want to talk about the practice of CI, not the tool. So, the practice of CI means that you can actually deploy from your trunk, right to production, right now, as an example. It’s always ready to go. Your trunk is always ready to go, and more importantly, all developers on that project check in to the trunk every day. So, are those two rules part of the world you live in from a CI standpoint? Are you truly continuously integrating?

Or do you have feature branches that last for like three weeks at a time? If you have a developer who’s on a feature branch and only integrates once every three weeks, that’s not continuous integration. That’s every-three-weeks integration, as an example. All right, now, advanced deployment techniques are going to be super-critical, and you’re going to see examples of those as we go throughout the presentation, especially as we get later into it, but this is super-critical if you want to go fast. How do you go fast without actually injuring yourself? You do that through something like a blue/green deployment or a canary deployment, and we’ll see those two things in action. And if you do all these things, you can be like a super cool unicorn yourself, and of course, being ready for microservices is the ultimate goal here.

Now, I’m not the only person who’s talked about this; it’s actually Martin Fowler who talked about this at great length. And I do like to think of these as key attributes, and I really love this concept of "you must be this tall," because I’m not a particularly tall fellow and I was always that kid who didn’t get on the roller coaster when he was smaller, because I wasn’t tall enough to get on the ride. That’s kind of the point of this example here too, and that’s the point Martin is trying to make. If you’re not mature enough, and you’re not big enough, and you’re not skilled enough, you probably don’t want to get on the microservices ride. That’s the key thing to understand.

But really think about these key tests here, right? How many days or weeks does it take to get a new VM provisioned? Do you have an expensive resource like a software developer waiting three weeks for a very inexpensive resource like a VM? That sends a signal. It actually changes the culture of the organization. We’ve got to think about that. How about your Dev and Ops, right? Who’s on that pager? Who owns the actual application? The person who architected and created it, and knows everything about it, or the operations team who has never even seen it before and only has the binary to look at? Really, it ought to be the developer on the pager. They own the application, as an example. So, these are the kinds of things to think about as you get ready to enter into the microservices category, okay? Now, if you want to get started from a microservices standpoint, we have everything that I have running on the laptop today available to you for download.

You can just go to developers.redhat.com and get started with downloading the Container Development Kit. You will see it in action as I get into the demonstrations. But these are just the steps that I use to get set up, as an example. So, what I set up here on this laptop, you will also have available to you, and you can try some of these same techniques, same ideas: these rolling updates, these blue/green deployments, the concepts of deploying a Node.js application or a Java-based application as a microservice. All of that is available to you also. Now, let’s talk about some of this, all right?

This is the water-scrum-fall. Maybe you’ve heard that before, maybe you haven’t. But most of us at this point probably practice some form of agile sprint cycle. You know, we have a two-week sprint cycle, a three-week sprint cycle. We know when our little bitty development team will actually go out and do our sprint, right? We even have our daily stand-up meeting, we have our burn-down chart, we do all the agile kinds of things. The problem with water-scrum-fall is that on the front end of that, the water side of it, we might have a six-month planning cycle. Basically, we’ve planned in advance for budget, and we’ve had a budgetary committee that had to weigh in on what projects we’re going to do this year. We’ve had an enterprise architecture team work on it. We’ve had a PMO design it and sign off on it, and of course all that’s part of the up-front work that takes months in many cases before we can even start doing our development. You also have the back end of that.

So, on the other end you have maybe a centralized QA team, an integration team; they’re responsible for ensuring that the package delivered to production is exactly what they thought it should be, right? And they’re responsible for all the QA and manual work that’s involved with that. There might be some form of IT Ops who then sets it up in UAT, right? The user acceptance testing. And then of course the business gets back involved, looks at it downstream, and goes, oh yeah, yeah, that’s kind-of sort-of what we meant when we set those requirements six months ago, as an example. And then it goes to staging and production.

And all this takes a long time, right? Now you’ve added months to the actual delivery of business value from an application standpoint, from a microservice standpoint. So, this is a big cycle in the end, and that’s what I call water-scrum-fall. In this new world that we’re going to be dealing with, though, we want to talk more about actual delivery at the end of each sprint cycle. So, here, if we think of our monolithic system that we have today, it might take 12 weeks to actually build it, have as much automated testing run as possible, and then deliver it. Ideally, we do that in a 12-week window, and a lot of people do that in a 12-week window, as an example.

Now, if we bring in a little bit more agile thinking, we can think of the 12 weeks as four three-week sprints or maybe six two-week sprints. It really depends on the organization, and I’ve seen different groups do different things, but you think in terms of breaking that up, and then you have to think in terms of how you make it faster. For instance, if I bring in tons of great automated testing instead of manual testing, I can shrink that original 12-week bucket right down to six weeks, as an example, just through automation of simple things like testing. And it could just be great unit testing, as well as nice overall integration testing, and other kinds of exploratory and functional testing.
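
As a rough illustration of the kind of fast, automated check that replaces a manual regression pass, here is a hedged JUnit 5 sketch; the PriceCalculator class and its discount rule are invented stand-ins for your own business logic.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

// PriceCalculator is an invented stand-in for your own business logic.
class PriceCalculator {
    double totalWithDiscount(double amount) {
        if (amount < 0) throw new IllegalArgumentException("amount must be non-negative");
        return amount >= 100.0 ? amount * 0.9 : amount;   // 10% off orders of 100 or more
    }
}

// The kind of fast unit test that replaces a manual regression pass.
class PriceCalculatorTest {

    @Test
    void appliesTenPercentDiscountAtOneHundred() {
        assertEquals(90.0, new PriceCalculator().totalWithDiscount(100.0), 0.001);
    }

    @Test
    void rejectsNegativeAmounts() {
        assertThrows(IllegalArgumentException.class,
                () -> new PriceCalculator().totalWithDiscount(-5.0));
    }
}
```

Hundreds of tests like this run in seconds on every check-in, which is what lets you squeeze the manual test cycles out of that 12-week bucket.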

You also have to think in terms of automating your continuous integration and build automation. There are still too many people out there doing manual builds of their software. And I’m not even talking about people who still email their software back and forth to each other as their form of version control. Yes, I’ve run into a few of those, and hopefully you’re not one of them here today. But even for people who are using proper version control, getting their continuous integration correct and getting their build automation correct is still a challenge, as an example. Now, if you start thinking in terms of really automating it and adopting things like either great VM technology, specifically where you can manage and orchestrate your virtual machines at scale, or containers, right?

So, containers are even lighter weight and easier to manage, and if we have something like, let’s say, OpenShift or Kubernetes to work with them, then it makes that automation even easier and you can actually shrink that cycle time. Now your developer no longer waits for a week, or two, or three for that resource, right? They wait maybe 30 seconds to get that resource, because that’s how long it takes: they ask for it through an API, their quota is applied to ensure that they don’t ask for too much, and they get the resource they need to start doing the development. Then you can start thinking in terms of a continuous delivery pipeline, okay? So, CI on the continuous integration side, and continuous delivery means that we are now automating all the other tasks. We’re not just doing automated builds for every check-in like we do with CI.

We’re now saying, let’s take the package that ideally went through some form of pull request, did the automated testing, did a code review, and then went into the rest of the pipeline: in this case, let’s say it goes to the QA team, it goes into the staging environment, and potentially it goes into production. Everything prior to production is really part of the continuous delivery pipeline, and continuous deployment is when you actually get to production. Also, start thinking in terms of zero-downtime deployment strategies. One thing that’s always a concern for people is when they actually are deploying really, really super-fast, because I talk to people a lot.

They’re like, well, if we deploy every three months, that’s actually quite fine for us. That’s fast. Because if we broke that speed barrier and started going even faster than that, we’d be worried that we’d be breaking things in production. Well, the concepts of the blue/green deployment and the canary deployment allow you to deal with breaking things in production. You basically deploy to production, and when it’s broken, you roll it right back instantaneously. Yes, you might have some users that are impacted by that, and if you’re clever about it, you can actually make sure those users are ones you know. For instance, just your employees, or just a certain department within your organization, sees the new change, and if they say, hey, something is not quite right, you can roll it right back very easily. You’ll see that when we get to the demonstration. And over time you’re going to be building a high-trust environment, right?
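
That routing-to-a-known-group idea is easy to sketch. The class below is a conceptual illustration only, not how the OpenShift/Kubernetes router does it in the demos; the backend names and the employee check are invented.

```java
// Conceptual sketch of canary routing: send a known group (employees) to the new
// "green" version and everyone else to the stable "blue" version, with a single
// flag that rolls everyone back instantly.
public class CanaryRouter {

    private volatile boolean greenEnabled = true;   // flip to false to roll back

    private static final String BLUE_BACKEND  = "http://catalog-blue:8080";   // stable version
    private static final String GREEN_BACKEND = "http://catalog-green:8080";  // new build

    public String backendFor(String userId) {
        if (greenEnabled && isEmployee(userId)) {
            return GREEN_BACKEND;   // canary traffic: only users we know
        }
        return BLUE_BACKEND;        // everyone else stays on the known-good version
    }

    public void rollBack() {
        greenEnabled = false;       // instant rollback: all traffic returns to blue
    }

    private boolean isEmployee(String userId) {
        return userId != null && userId.endsWith("@example.com");  // placeholder rule
    }
}
```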

If everybody has this great automation, great tooling, great scripting, great synergy between Dev and Ops and Security and QA, and all of the appropriate parties associated with that overall package and the delivery of that package, you will have a higher-trust environment, and you will deliver even as fast as one-week increments. So, we’ve seen organizations adopt many of these strategies and actually go from what was a three-month deployment cycle to a one-week deployment cycle. Now, there’s something odd about this chart. You’re probably thinking: wait a second, if we have three-week sprints, how do we actually go to production faster than every three weeks? Right? And in this case, I said we can get down to one-week intervals.

So, you have to start thinking in terms of all your deliverables, right? It’s not just the one-and-done software package that you’ve created, the one microservice, the one application; it’s also all the patches over time. So, it’s not just the functional patches that developers have to create, it's even patches at the operating system level, patches at the container level, the virtual machine level, patches at the JVM level. Yes, the JVM has bugs and gotchas in it, too. Maybe the framework that you’re using, whether it be Struts or Spring or Hibernate, needs a patch, or, you know, the JBoss Enterprise Application Platform.

Your web server, WebLogic, it doesn’t really matter what it is; your middleware also needs patches, and then your software needs patches. And all of those could be part of those deliverables. So, that would help you get faster than three weeks, as an example, too, because those could happen out of band. Now, there are organizations out there, and specifically this book, if you’ve not read it I encourage you to read it, that talk about doing ten deploys a day. And that was true when this book was written; the organization they wrote about did ten deploys a day, and they’re notable for that.

Now they’re at about 50 deploys per day. So, things have evolved since then. So, the concept of going even as slow as one week is still too slow for a lot of these organizations. So, how do you actually move at the speed of, let’s say, a Silicon Valley unicorn? How do you get to ten deploys per day? Well, you have to take that one monolithic system and break it up into, let’s say, ten individual components, with ten individual teams, that can deliver at their own pace, at their own interval, patching what they need to patch when it should be done, and actually deploying the functional aspects of that to their business, building new business capability at the pace the business needs them to match. So, in this case, now we could go faster than once a week.

We could actually go every day if we needed to with a new deployment into the production environment, just simply by breaking up that one big code base into smaller pieces. Now, we’ve got more to discuss here. If you’re familiar with a Java application, right? And I’m very much a traditional Java, J2EE, Java EE person from the old-school Java universe, so we had this concept where, you know, the operating system was always part of it. We had the Java virtual machine of course sitting on top of that operating system, whether that had been on top of old-school Solaris, AIX, or even Windows, right? Now it’s Linux. I can run my JVM on top of that, and inside the JVM I package in my application server of some sort, right, and we’ve had many of those throughout the years, and now it seems like there are essentially three big ones left in the universe. But I would package my application into an EAR, right?

The enterprise archive is one of the things we would use on a regular basis. And in my EAR, I would pack in a WAR or two, or more, and I’d pack some JARs in there too, some of the JARs in the WAR and some of the JARs in the EAR, and of course all of that gets put into this one monolithic application. And here is the real problem with this architecture. Everyone has to agree on the version of the operating system, the version of the Java virtual machine, the version of the app server, the configuration of all these things, including the patch level, and then we all have to agree on all our third-party dependencies. We have to agree on what version of Spring we’re going to use, what version of Hibernate we’re going to use, and I’ve seen customers who have stuck two versions of Spring or Hibernate into their application by accident, and that didn’t work out too well, I can tell you that.
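
If you ever suspect that kind of duplicate-dependency problem, one quick diagnostic, sketched below, is to ask the JVM which archive a class was actually loaded from; the class name is just an example, and getCodeSource() can return null for classes loaded by the bootstrap class loader.

```java
// Sketch: report which JAR a class was actually loaded from, a quick way to spot
// two versions of Spring or Hibernate packaged into the same EAR by accident.
public class WhichJar {
    public static void main(String[] args) throws Exception {
        Class<?> suspect = Class.forName("org.hibernate.Session"); // substitute the class you suspect
        Object location = suspect.getProtectionDomain().getCodeSource() == null
                ? "bootstrap class loader"
                : suspect.getProtectionDomain().getCodeSource().getLocation();
        System.out.println(suspect.getName() + " loaded from " + location);
    }
}
```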

So, you want to make sure that you actually do agree on all of these dependencies. And you might have a relatively large team to babysit this overall monolithic system. In this case I have 40-plus individuals: 18 programmers, six operations professionals, DBAs, you’ve got to remember the DBAs, security and compliance people, business analysts; all these folks are part of that extended team. And this is the challenge right here: how long does it take to get 40 people to agree on anything? And that’s one reason this overall monolithic system actually moves so slowly. Think about organizing the local Girl Scout or Boy Scout troop.

If you have to pick what snack to bring on game day for your five-year-old's little soccer team, it takes time to organize those people. Can you imagine organizing 40 different technical people to get something done? So, this is why a deployment takes so long in many cases. Now, you might break up those teams into individual delivery teams that have their own individual product, their own individual component and capability, known as the microservice, and they actually are cross-functional teams, meaning they have all the skills they need to deliver their capability. They have operations professionals, developers, QA people, project managers, design people, you know, whatever it takes to actually deliver their capability. And you can even see here, some of them are more complex than others, some teams are larger than others; all the original people are still here, but we’ve kind of broken them up and organized them differently, including this team in the lower right over here that specifically has three different microservices.

So, it’s not like it's one team per microservice. It might be one team for four microservices, as an example, but it fits within that overall team structure. The phrase here, the two-pizza team, you build it, you own it, is super-critical. That actually came from Amazon. The idea of two pizzas is really straightforward, right? Two American-sized pizzas, for those folks outside of America. I know this doesn’t translate; you’re thinking a pizza is this big, no, this is an American-sized pizza. It’s the size of this table that I’m working with here. But you can have a two-pizza team, meaning you have to feed the whole team on two pizzas, so let’s call that six to eight to ten people or so, and with that group of people, you have to deliver the product you’re going to deliver. And you own it, right? So, that team, the delivery team, literally owns the product. If it goes down, they’re on the pager. That includes all the developers and all the operations people associated with that team. They own it. So, that’s super-critical. And this concept of accountability is one of the reasons these teams actually build better software faster.

Accountability really matters. Now, here are some key principles and characteristics. We don’t have time to go through them all today, but I’ll just call out a couple of them for you. One is deployment independence. You have to be able to deploy each of these components completely independently of each other. That’s a speed enabler. That’s an agility enabler. Basically, as each component comes into the system, I can roll it out into production with zero downtime to the rest of the system. So, that’s critical. We’ll show you some examples of how we do that when we actually get into the Kubernetes and OpenShift demos, but this is an example where, from the microservices perspective, it is a key enabler of agility, a key enabler of speed: knowing that when I roll a change with the team over here, versus the team over there, I have zero impact on the rest of the organization. We can just roll it into production with no problems. If you look at number two here: organized around business capabilities.

Typically these folks are organized around a specific business unit or business capability, some aspect of the system that really matters to the business and really matters to the end users. You wouldn’t necessarily put all this effort into some little utility component that no one really cares about. You’ve got to really think about what is offering the best business value. What does it mean to offer that business value faster, by getting new software components and new software capabilities to that business faster? Also, think in terms of being API focused. We’ll show you a little bit about APIs in this presentation, but be very API focused.
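
To make "API focused" a little more concrete, here is a hedged sketch of the discipline it implies for a team's public contract; the DTO, field names, and endpoint paths are invented for illustration.

```java
// Sketch of keeping a public API backward compatible: new fields are added, existing
// fields are never renamed or removed, so older internal consumers keep working.
public class OrderSummary {

    // v1 fields: existing consumers depend on these names and types, so they stay.
    public String orderId;
    public double total;

    // v2 addition: optional, with a safe default, so old clients can simply ignore it
    // and old payloads that lack it still deserialize.
    public String currency = "USD";

    // A breaking change (e.g. renaming "total" to "grandTotal") would instead go into
    // a new versioned endpoint such as /api/v2/orders, leaving /api/v1/orders intact.
}
```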

You’re married to your public APIs as a microservices developer. Each of your two-pizza teams is now responsible for their API to the rest of the organization, who are all their customers. You have to think in terms of having these internal customers, and if you break your API, if you break backward compatibility, that’s shame on you as a microservices developer. That’s something you cannot do. Two bullets here, though, are the deal breakers for so many organizations: number six and number seven.

Decentralized governance. Each two-pizza team is responsible for their microservice or microservices, and they are responsible for building it with the technologies that matter to that specific capability. If they have to have MongoDB, because that’s what they want and that’s what fits their capability correctly, they will use it. It's not one-size-fits-all like we’ve had in the past. It used to be, you know, the enterprise architecture team would dictate from on high: look, we’re all going to use this type of configured application server and database environment for our worst-case workloads. Well, a whole lot of workloads within an enterprise don’t look like worst-case.

They’re actually maybe small or medium, not always large. And so, decentralized governance means that we can have teams responsible for their business capability, responsible for their delivery cycle, pick the technology that they want, and quickly put that out into production, as an example. Also, decentralized data management. This is a major gotcha. If you have a centralized database administration team, a DBA team like many people do, you know they won’t even let you change your schema when you want to. If you’re a developer, you know what I’m talking about.

You’re like, what, I can’t even bribe this guy to change a schema. What am I going to do? Well, you have to decentralize that too, because that of course is a big problem when it comes to any kind of speed, any kind of agility. You want to be able to change the schema based on the business requirement that you see, right now. So you decentralize that too. Now, in the case of decentralized data management, your DBA will hunt you down and try to kill you. I’ve seen this happen before. I’m only kidding, of course. But it could be a problem. One tip I’d give you, though: the DBA probably likes lightsabers also, so just keep that in the back of your mind.

Take them to lunch; it is actually a good thing. But there‘s a great book on refactoring databases, and I highly encourage you to read it. It’s actually been out there a long time, and you should check it out for understanding decentralized data management. We also have a book that we’ve written, available to you at our public website, so you can go to this bit.ly link here, mono2microdb, and get one written by Edson Yanaga specifically on strategies for how to deal with the monolithic database architecture.

Well, that completes this topic, and we’ll see you in the next video.

About the Author
Students: 133919
Labs: 69
Courses: 111
Learning Paths: 191

Jeremy is a Content Lead Architect and DevOps SME here at Cloud Academy where he specializes in developing DevOps technical training documentation.

He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 25+ years. In recent times, Jeremy has been focused on DevOps, Cloud (AWS, Azure, GCP), Security, Kubernetes, and Machine Learning.

Jeremy holds professional certifications for AWS, Azure, GCP, Terraform, and Kubernetes (CKA, CKAD, CKS).