Microservices is a way of breaking large software projects into loosely coupled modules, which communicate with each other through simple APIs.

There’s been a great deal of talk in the world of web applications about microservices over the past few years. The modular architectural style seems particularly well suited to cloud-based environments and its popularity seems to be rising. Before going too deeply into specifics, let’s get the big picture.

Microservices is a way of breaking large software projects into smaller, independent, and loosely coupled modules. Individual modules are responsible for highly defined and discrete tasks and communicate with other modules through simple, universally accessible APIs.

In other words, microservices is really nothing more than another architectural solution for designing complex – mostly web-based – applications. But what’s wrong with existing architectural solutions like the widely adopted SOA (Service Oriented Architecture)? Most of the thousands of modern enterprise applications built on SOA seem to be running well enough.

Perhaps this is a good time to talk about some of the challenges the industry currently faces with the available architectural solutions. Let’s start with a simple example.

Suppose I need to build a classic web application using Java. The first thing I will do is design a Presentation Layer (the user interface), then an Application Layer that handles all of the business logic, then an Integration Layer to enable loose coupling between the various components of the Application Layer, and finally a Database Layer that provides access to the underlying persistence system.
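To make the layering concrete, here is a minimal sketch of how those layers typically look in code. The class names (OrderController, OrderService, OrderDao) and the order example are made up for illustration and don’t come from any particular framework:

```java
// Presentation Layer: receives the request and delegates to the business logic.
public class OrderController {
    private final OrderService orderService = new OrderService();

    public String handleCreateOrder(String itemId, int quantity) {
        long orderId = orderService.createOrder(itemId, quantity);
        return "Order created with id " + orderId;
    }
}

// Application Layer: the business rules live here.
class OrderService {
    private final OrderDao orderDao = new OrderDao();

    long createOrder(String itemId, int quantity) {
        if (quantity <= 0) {
            throw new IllegalArgumentException("Quantity must be positive");
        }
        return orderDao.save(itemId, quantity);
    }
}

// Database Layer: provides access to the underlying persistence system.
class OrderDao {
    long save(String itemId, int quantity) {
        // A real application would issue SQL here via JDBC or JPA.
        return System.nanoTime(); // stand-in for a generated primary key
    }
}
```

All three layers compile and ship together, which is exactly what makes the packaging step below turn the application into a monolith.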

Now, in order to run the entire application, I will create either a WAR or an EAR package and deploy it on an application server (like JBoss, Tomcat, or WebLogic). Because I have packaged everything as an EAR/WAR, the application becomes monolithic, which means that, even though we have separate and distinguishable components, they are all packaged under one roof. Here’s an illustration:

[Figure: Microservices architecture – SOA-style monolith]

The odds are that you’re already familiar with all this, but the idea is to use it to highlight some of the challenges developers and architects face with this kind of design.

Monolithic architectures: challenges

  • As your application grows, the code base grows with it, which can overload your IDE every time it loads the application. This definitely reduces developer productivity.
  • Because you have packaged everything in one EAR/WAR, you will be hesitant to change the technology stack of the application. I mean, suppose you wrote your entire application in Java, and tomorrow you feel that some of the components in the application can be better handled using other languages like Groovy or Scala. With this kind of architecture, I doubt you will even consider refactoring your code base, because you really can’t predict how it will impact your current functionality. Today I see many applications using EJB or Struts, because that’s how they started, and their code base has grown so much that they can’t even imagine refactoring.
  • If any single application function or component fails, then the entire application goes down. Imagine you have a web application with separate functions handling tasks like payment, login, and history and, for some reason, a particular function starts consuming more memory or CPU. The entire application will feel the pain, even though the issue is really only based on a single component.
  • Scaling such a monolithic application can only be accomplished by deploying the same EAR/WAR package on more servers – also known as horizontal scaling. Each copy of the application on each server claims the same amount of underlying resources, which is often not an efficient use of those resources.
  • This can have an impact on development as much as on deployment. As applications grow, it becomes even more important that developers be able to break things down into smaller, more workable units. Because everything in the monolithic approach is tied together, developers cannot work independently to develop and deploy their own modules. And because developers remain dependent on one another, development time increases.

With all this in mind, we’re ready to try to understand the value of microservices, and how they can be used to restore the flexibility that may have been lost in SOAs.

Microservices explained

One of the major driving forces behind any kind of architectural solution is scalability. While I was first exploring microservices, I saw that everyone seemed to quote from a book called The Art of Scalability. So that might be a good place to begin our discussion.

The book’s defining model was the Scale Cube, which describes three dimensions of scaling:

[Figure: Microservices architecture – the Scale Cube]

As you can see, the X axis represents horizontal application scaling (which, as we have seen, is possible even with a monolithic architecture), and the Z axis represents scaling the application by splitting similar things. The Z axis idea is best understood through sharding, where data is partitioned and the application redirects each request to the corresponding shard based on an attribute of the request, such as the user ID.
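As a rough illustration of Z-axis scaling, the sketch below routes each request to a database shard chosen from a hash of the user ID. The shard URLs and the two-shard setup are invented for the example:

```java
import java.util.List;

// Z-axis scaling sketch: pick a shard for each request based on the user ID.
public class ShardRouter {
    private final List<String> shardUrls;

    public ShardRouter(List<String> shardUrls) {
        this.shardUrls = shardUrls;
    }

    public String shardFor(String userId) {
        // Math.floorMod keeps the index non-negative even for negative hash codes.
        int index = Math.floorMod(userId.hashCode(), shardUrls.size());
        return shardUrls.get(index);
    }

    public static void main(String[] args) {
        ShardRouter router = new ShardRouter(
                List.of("jdbc:mysql://shard-0/app", "jdbc:mysql://shard-1/app"));
        // Requests for the same user always land on the same shard.
        System.out.println(router.shardFor("alice"));
    }
}
```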

The Y axis is the one on which we’ll focus. This axis represents functional decomposition. In this kind of strategy, various functions can be seen as independent services. So, instead of deploying the entire application only once everyone is done, developers can deploy their respective services independently without waiting for the others to finish their modules. This not only improves developer time management, but also offers them much more flexibility to change and redeploy their modules without needing to worry about the rest of the application’s components. Compare this diagram with the earlier monolithic design:

[Figure: Microservices architecture – functional decomposition design]
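To give a feel for Y-axis decomposition, here is a minimal sketch of one business function, payment, running as its own tiny HTTP service using nothing but the JDK’s built-in com.sun.net.httpserver. The port and endpoint path are arbitrary choices for the example; login, history, and the rest would each be separate programs deployed on their own schedules:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// One business function ("payment") packaged and deployed on its own.
public class PaymentService {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8081), 0);
        server.createContext("/payments", exchange -> {
            byte[] body = "{\"status\":\"accepted\"}".getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        // Redeploying this service touches nothing else in the application.
        server.start();
    }
}
```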

Microservices: advantages

The advantages of microservices seem strong enough to have convinced some big enterprise players – like Amazon, Netflix, and eBay – to begin their transitions. As opposed to more monolithic design structures, a microservices architecture…

  • Improves fault isolation: larger applications can remain largely unaffected by the failure of a single module.
  • Eliminates long-term commitment to a single technology stack: If you want to try out a new technology stack on an individual service, go right ahead. Dependency concerns will be far lighter than with monolithic designs, and rolling back changes much easier. The less code in play, the more flexible you remain.
  • Makes it easier for a new developer to understand the functionality of a service.

Microservices: deployment options and virtualization

Now that we understand microservices – and particularly the fact that the greatest advantage is that they’re not deployed in integrated WAR-like packages – how are they deployed?

The best way to deploy microservices-based applications is inside containers. Containers – as you can see from my earlier post on Container Virtualization – are lightweight, isolated runtime environments that share the host operating system’s kernel while giving each service its own view of the filesystem, network, and resources. The biggest name in container solutions right now is Docker.

Virtual machines from IaaS providers like AWS can also work well for microservices deployments, but relatively lightweight microservices packages may not leverage the whole VM, possibly reducing their cost effectiveness.

You can also deploy your code as an OSGi (Open Services Gateway initiative) bundle. In this case, all of your services will run inside a single JVM, but this comes with a management and isolation tradeoff.

Microservices: drawbacks

Just because something is all the rage across the industry doesn’t mean it has no drawbacks. Here’s a list of some potential pain points associated with microservices designs:

  • Developing distributed systems can be complex. Because everything is now an independent service, you have to carefully handle requests travelling between your modules. One service may stop responding, forcing you to write extra defensive code specifically to avoid disruption (see the sketch after this list), and things get more complicated still when remote calls experience latency.
  • Multiple databases and transaction management can be painful.
  • Testing a microservices-based application can be cumbersome. With the monolithic approach, we would just need to launch our WAR on an application server and ensure its connectivity with the underlying database. Now, each dependent service needs to be up and confirmed before testing can start.
  • Deploying microservices can be complex. A release may require coordinating changes across multiple services, which is not as straightforward as deploying a single WAR into a servlet container.
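To give a flavour of the extra defensive code mentioned in the first bullet above, here is a minimal, framework-free sketch of a timeout-plus-retry guard around a call to another service, with a fallback response when it stays unreachable. The URL and fallback payload are invented for the example; in practice many teams use a library such as Resilience4j or Hystrix rather than hand-rolling this:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

// Defensive call to a downstream service: short timeouts plus a bounded retry.
public class ResilientClient {
    private final HttpClient client = HttpClient.newBuilder()
            .connectTimeout(Duration.ofSeconds(2))
            .build();

    public String callWithRetry(String url, int maxAttempts) throws InterruptedException {
        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .timeout(Duration.ofSeconds(2))
                .GET()
                .build();
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                HttpResponse<String> response =
                        client.send(request, HttpResponse.BodyHandlers.ofString());
                if (response.statusCode() == 200) {
                    return response.body();
                }
            } catch (Exception e) {
                // Timeout or connection failure: fall through and retry.
            }
            Thread.sleep(200L * attempt); // simple backoff between attempts
        }
        return "{\"status\":\"unavailable\"}"; // fallback so callers can degrade gracefully
    }
}
```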

Of course, with the right kind of automation and tools, all the above drawbacks can be addressed.

Have you worked in both worlds? Do you have any of your own experiences you’d like to share?