
Microservices Architecture: Advantages and Drawbacks

Microservices are a way of breaking large software projects into loosely coupled modules, which communicate with each other through simple Application Programming Interfaces (APIs).

Microservices have become increasingly popular over the past few years. This modular architectural style, based on the philosophy of breaking large software projects into smaller, independent, and loosely coupled modules, has gained prominence among developers because each module handles a narrowly defined, discrete task and exposes it through a simple API.

Simply stated, microservices are really nothing more than another architectural solution for designing complex – mostly web-based – applications. Microservices have gained prominence as an evolution from SOA (Service-Oriented Architecture), an approach that was designed to overcome the disadvantages of traditional monolithic architectures. In this blog post, we’ll explore the evolution from monolithic architectures toward microservices and the reasoning behind it.

The Evolution from Monolithic Architecture

Let’s start with a simple example.

Suppose I need to build a classic web application using Java. The first thing I will do is design a Presentation Layer (the user interface), followed by an Application Layer, which handles all of the business logic. This is followed by an Integration Layer to enable loose coupling between various components of the Application Layer. Finally, I will design a Database Layer that provides access to the underlying persistence system.

To run the entire application, I will create either a WAR or EAR package and deploy it on an application server (like JBoss, Tomcat, or WebLogic). Because I have packaged everything as an EAR/WAR, my application becomes monolithic in nature, which means that even though it has separate and distinguishable components, they are all packaged and deployed together.

Here’s an illustration:

Monolithic Application Diagram
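To make the layering concrete, here is a minimal sketch of how those layers typically look in code when everything ships in one WAR/EAR. The class names (OrderController, OrderService, OrderRepository) are illustrative assumptions, not part of the original example:

```java
// A minimal sketch of the layered monolith described above: every layer lives
// in the same code base and ships in a single WAR/EAR. Names are illustrative.

// Presentation layer: entry point that handles an incoming request.
class OrderController {
    private final OrderService service = new OrderService();

    String handleRequest(long orderId) {
        return "Order status: " + service.getStatus(orderId);
    }
}

// Application layer: business logic.
class OrderService {
    private final OrderRepository repository = new OrderRepository();

    String getStatus(long orderId) {
        return repository.findStatus(orderId);
    }
}

// Database layer: persistence access.
class OrderRepository {
    String findStatus(long orderId) {
        // In a real application this would query the shared database.
        return orderId % 2 == 0 ? "SHIPPED" : "PROCESSING";
    }
}
```

Because all three classes compile into the same artifact, a change to any one of them means rebuilding and redeploying the whole package.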

Depending on your development experience, you may already be familiar with the characteristics of monolithic applications. This example also illustrates some of the challenges developers and architects face with this kind of design.

Here are the flaws:

  • As the application grows, so does the associated code base, which can overload your development environment each time it loads the application, reducing developer productivity.
  • Because the application has been packaged in one EAR/WAR, changing the technology stack of the application becomes a difficult task. With this kind of architecture, refactoring the code base becomes difficult because it’s hard to predict how it will impact application functionality.
  • If any single application function or component fails, then the entire application goes down. Imagine a web application with separate functions including payment, login, and history. If a particular function starts consuming more processing power, the entire application’s performance will be compromised.
  • Scaling monolithic applications such as the one described in the example can only be accomplished by deploying the same EAR/WAR packages on additional servers, known as horizontal scaling. Each copy of the application on the additional servers will utilize the same amount of underlying resources, which is inefficient by design.
  • Monolithic architecture impacts both the development and deployment stages. As applications grow in size, it becomes even more important for developers to be able to break their applications down into smaller components. Because everything in the monolithic approach is tied together, developers cannot develop or deploy their modules independently and must remain dependent on others, increasing overall development time.

With these thoughts in mind, let’s explore the value of microservices and how they can be used to provide the flexibility that’s lacking in monolithic architectures.

Exploring Microservices

One of the major driving forces behind any kind of architectural solution is scalability. While I was first exploring microservices, I observed that peers seemed to gravitate towards a book called The Art of Scalability. The book’s defining model was the Scale Cube, which describes three dimensions of scaling:

Three Dimensions of Scaling an App

As you can see, the X-axis represents horizontal application scaling (which we have seen is possible even with monolithic architecture), and the Z-axis represents scaling the application by splitting similar things. The Z-axis idea can be better understood by using the sharding concept, where data is partitioned and the application redirects requests to corresponding shards based on user input.
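As a rough illustration of Z-axis scaling, routing can be as simple as hashing a user identifier to pick the shard that owns that user's data. The following is a minimal sketch; the shard URLs and the modulo-based routing rule are assumptions for illustration only:

```java
import java.util.List;

// A minimal sketch of Z-axis (data partitioning) routing, assuming a fixed
// number of shards and a numeric user ID. The shard URLs are hypothetical.
public class ShardRouter {
    private final List<String> shardUrls = List.of(
            "jdbc:mysql://shard-0.example.internal/app",
            "jdbc:mysql://shard-1.example.internal/app",
            "jdbc:mysql://shard-2.example.internal/app");

    /** Picks the shard that owns this user's data. */
    public String shardFor(long userId) {
        int index = (int) Math.floorMod(userId, (long) shardUrls.size());
        return shardUrls.get(index);
    }
}
```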

The Y-axis represents functional decomposition. In this approach, various functions can be seen as independent services. Instead of deploying the entire application once all the components are available, developers can deploy their respective services independently. This not only improves developer time management but also offers greater flexibility to change and redeploy their modules without worrying about the rest of the application’s components. You can see how this is different from the earlier diagram which showed a monolithic design:

Microservices - Functional Decomposition
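To sketch what an independently deployable service looks like, here is a hypothetical "payment" service built with nothing but the JDK's built-in HTTP server. The class name, port, and endpoint are illustrative assumptions rather than anything prescribed by the Y-axis model:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;

// A minimal sketch of one independently deployable service. In a real system
// this would likely use a framework, but the point is that the service owns
// only its own endpoints and can be built, deployed, and scaled on its own.
public class PaymentServiceApp {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8081), 0);

        // A simple health-check endpoint; other services talk to this one
        // only over HTTP, never through shared classes in the same package.
        server.createContext("/payments/health", exchange -> {
            byte[] body = "{\"status\":\"UP\"}".getBytes();
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });

        server.start();
        System.out.println("Payment service listening on :8081");
    }
}
```

A login or history service would follow the same pattern in its own code base, which is exactly the independence the Y-axis split is after.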

The Advantages of Microservices

The advantages of microservices seem strong enough to have convinced some big enterprise players such as Amazon, Netflix, and eBay to adopt the methodology. Compared to more monolithic design structures, microservices:

  • Improve fault isolation: Larger applications can remain mostly unaffected by the failure of a single module.
  • Eliminate vendor or technology lock-in: Microservices provide the flexibility to try out a new technology stack on an individual service as needed. There won’t be as many dependency concerns and rolling back changes becomes much easier. With less code in play, there is more flexibility.
  • Ease understanding: Because each service is smaller and more focused, developers can better understand its functionality.

Deployment of Microservices

Now that we understand microservices, how are they deployed?

The best way to deploy microservices-based applications is within containers: lightweight, isolated environments that share the host operating system’s kernel while giving each service its own view of the file system, network, and resources. One of the biggest names in container solutions right now is Docker, which you can learn more about in our Getting Started course.
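As an illustration, a single service packaged as a self-contained JAR could be containerized with a Dockerfile along these lines. The base image, port, and JAR name are assumptions, not prescriptions:

```dockerfile
# A minimal sketch of containerizing one service. The base image, port,
# and JAR name are illustrative assumptions.
FROM eclipse-temurin:17-jre
WORKDIR /app
COPY target/payment-service.jar app.jar
EXPOSE 8081
ENTRYPOINT ["java", "-jar", "app.jar"]
```

Each service gets its own image, so it can be versioned, shipped, and scaled independently of the others.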

Virtual machines from infrastructure providers like Amazon Web Services (AWS) can also work well for microservices deployments, but relatively lightweight microservices packages may not leverage the whole virtual machine, potentially reducing their cost effectiveness.

Code deployments can also be completed using OSGi (Open Services Gateway initiative) bundles. In this case, all application services run inside a single Java Virtual Machine, but this method comes with a tradeoff in management and isolation.

The Disadvantages of Microservices

Microservices may be a hot trend, but the approach does not come without its drawbacks.

Here’s a list of some potential pain areas associated with microservices designs:

  • Developing distributed systems can be complex. Since everything is now an independent service, requests traveling between your modules must be handled carefully, and developers are often forced to write extra code to avoid disruption, especially when remote calls experience latency (see the sketch after this list).
  • Working with multiple databases and managing transactions across them can be painful.
  • Testing a microservices-based application can be cumbersome. With a monolithic approach, we would just need to launch our WAR on an application server and ensure its connectivity with the underlying database. With microservices, each dependent service needs to be up and running before testing can begin.
  • Deploying microservices can be complex. It may require coordination among multiple services, which is not as straightforward as deploying a single WAR to an application server.
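To illustrate the first point, here is a minimal sketch of the kind of defensive code a remote call between services tends to require: a timeout plus a naive retry, using the JDK's HttpClient. The service URL, class names, and retry policy are illustrative assumptions:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

// A minimal sketch of a resilient call from one service to another.
// Real systems would typically add circuit breaking, backoff, and fallbacks.
public class InventoryClient {
    private final HttpClient client = HttpClient.newBuilder()
            .connectTimeout(Duration.ofSeconds(2))
            .build();

    public String fetchStock(String itemId) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://inventory-service:8082/stock/" + itemId))
                .timeout(Duration.ofSeconds(2))   // guard against a slow remote call
                .GET()
                .build();

        Exception lastFailure = null;
        for (int attempt = 1; attempt <= 3; attempt++) {   // naive retry loop
            try {
                HttpResponse<String> response =
                        client.send(request, HttpResponse.BodyHandlers.ofString());
                if (response.statusCode() == 200) {
                    return response.body();
                }
            } catch (Exception e) {
                lastFailure = e;   // e.g. timeout or connection refused
            }
        }
        throw new IllegalStateException("inventory-service unavailable", lastFailure);
    }
}
```

None of this boilerplate exists in a monolith, where the same call would be an in-process method invocation.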

Of course, with the right kind of automation and tools, all the above drawbacks can be addressed.

Closing Thoughts

As application development trends continue to evolve, the debate between using microservices or leveraging traditional monolithic architectures will only become more pronounced. In the end, developers must do their due diligence and understand what works for their specific use cases.

If you’re looking to use microservices, you can get started today with the microservices resources on Cloud Academy.

Written by Albert Qian

Albert is the Product Marketing Manager at Cloud Academy, responsible for supporting sales, marketing campaigns, and more. When not at work, you'll find him listening to an audiobook, at spin class, or traveling.
