Accommodate Failure
Difficulty: Beginner
Duration: 20m
Students: 104
Ratings: 4.6/5
Description

In this course, we will learn the concepts of microservices and the Spring framework.

Learning Objectives

In this course, you will gain an understanding of the following concepts with regard to microservices:

  • Performance
  • Failure
  • Integrity
  • Service Version Management
  • Common Code for Microservices

Intended Audience

  • Beginner Java developers
  • Java developers interested in learning how to build and deploy RESTful web services
  • Java developers who want to develop web applications using the Spring framework
  • Java developers who want to develop web applications with microservices
  • Java developers who wish to develop Spring Boot microservices with Spring Cloud

Prerequisites

  • Basic Java knowledge
Transcript

Accommodate Failure. Hello, my dear friends. In this lesson, I would like to talk about the likely sources of failure in microservices and how to mitigate them. No architecture is faultless, especially in an environment where systems keep getting more complex. So, we need to focus on designing microservices that are failure tolerant and able to gracefully recover from failures, or at least mitigate the impact of those failures on our system. First, we need to identify the sources of failure, and then we can focus on mitigation strategies. In a microservice environment, every point of interaction between services and components is a possible point of failure. There are four main places where failures can occur: hardware, communication, dependencies, and internal. Let's inspect them briefly. Hardware. No matter what platform your app runs on, cloud, physical, or virtualized, the primary source of fragility is hardware.

It affects multiple parts of the application. Communication. Communication plays a huge role in microservice architecture. Actually, it's said that microservices are all about communication. The possible sources of communication failures include firewalls, the network, messaging systems, and DNS. Dependencies. There are multiple dependencies in a microservice application, both internal and external. Externally, a microservice application might depend on a third-party API, and internally a service might depend on a database. These points of dependency are potential points of failure in a microservice. Some of the sources of dependency-related failures include external dependencies, internal component failures, timeouts, and non-backward-compatible functionality. Internal. This relates to the design and testing of each service. If any of the services that form a microservice application has a poor design or inadequate testing, it will be a potential problem for the health of the overall system.

Now, it's our turn to talk about mitigation strategies. These strategies are methods that each service in a microservice application can apply when it comes across one of the failures we mentioned earlier. The methods are retries, timeouts, fallbacks, circuit breakers, and asynchronous communication. Retries. When a service cannot reach another service or component, it retries a number of times defined in the configuration. If a retry succeeds, there's no problem, but if the target still can't be reached, the problem persists. Timeouts. If a service is configured to retry in the event of an access problem, we can also define how long it waits for a response before giving up and making a new attempt. This enables us to control and regulate service calls when something breaks. Fallbacks. A fallback can be defined as a mock response returned in the case of a service break. With this method, if a service breaks, the other services that rely on it receive the fallback response and the system keeps running.
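To make this concrete, here is a minimal sketch of retries, timeouts, and a fallback using Spring Retry and a RestTemplate. This is only one possible approach: the InventoryClient class, the inventory-service URL, and the timeout and retry values are illustrative assumptions, and it assumes the spring-retry dependency is on the classpath with @EnableRetry declared on a configuration class.

```java
import java.time.Duration;

import org.springframework.boot.web.client.RestTemplateBuilder;
import org.springframework.retry.annotation.Backoff;
import org.springframework.retry.annotation.Recover;
import org.springframework.retry.annotation.Retryable;
import org.springframework.stereotype.Service;
import org.springframework.web.client.ResourceAccessException;
import org.springframework.web.client.RestTemplate;

@Service
public class InventoryClient {

    private final RestTemplate restTemplate;

    public InventoryClient(RestTemplateBuilder builder) {
        // Timeouts: never wait forever for a slow or broken dependency.
        this.restTemplate = builder
                .setConnectTimeout(Duration.ofSeconds(2))
                .setReadTimeout(Duration.ofSeconds(3))
                .build();
    }

    // Retries: up to 3 attempts with a 1-second pause between them.
    @Retryable(value = ResourceAccessException.class,
               maxAttempts = 3,
               backoff = @Backoff(delay = 1000))
    public String getStock(String productId) {
        // Hypothetical downstream service and endpoint, for illustration only.
        return restTemplate.getForObject(
                "http://inventory-service/stock/{id}", String.class, productId);
    }

    // Fallback: invoked once all retries are exhausted, so callers still get
    // a harmless mock response instead of an exception.
    @Recover
    public String getStockFallback(ResourceAccessException ex, String productId) {
        return "UNKNOWN";
    }
}
```

The @Recover method is what keeps the rest of the system running when the dependency stays down, which is exactly the fallback behavior described above.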

Circuit breaker. Circuit breaker is a term used in the electrical industry. In electrical wiring, a circuit breaker interrupts the current flow after protective relays detect a fault. Similarly, in a microservices application, a circuit breaker is a pattern for pausing requests made to a failing service to prevent cascading failures. In the circuit breaker pattern, there are three states: open, closed, and half-open. Open indicates that the failure is ongoing and that requests are automatically prevented.

Closed indicates that there is no failure and that everything is functioning normally, and half-open indicates that the failure is ongoing but that a portion of the requests are routed to the service. In the open state, the status of the broken service is checked periodically, and if it is found to be working fine again, the state is switched back to closed. Asynchronous communication. Synchronous communication between all services can lead to a bottleneck in the system. Asynchronous communication using a message broker like Kafka or RabbitMQ is a strategy you can use to improve the reliability of your system. Using this strategy also allows you to easily scale the communication between services in your system. So, let's take a short break here, my friends, and I'll see you in the next lesson.
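As an illustration of the circuit breaker states described above, here is a minimal sketch using the Resilience4j Spring Boot integration; other libraries would work as well. The "inventory" breaker name, the StockService class, and the URL are hypothetical, and thresholds such as the failure rate and the open-state wait duration would be configured in application.yml rather than in code.

```java
import io.github.resilience4j.circuitbreaker.annotation.CircuitBreaker;
import org.springframework.stereotype.Service;
import org.springframework.web.client.RestTemplate;

@Service
public class StockService {

    private final RestTemplate restTemplate = new RestTemplate();

    // While the breaker is CLOSED, calls pass through normally. If the
    // configured failure rate is exceeded, the breaker OPENS and calls are
    // short-circuited to the fallback. After the wait period it becomes
    // HALF_OPEN and lets a few trial requests through; if they succeed,
    // the breaker closes again.
    @CircuitBreaker(name = "inventory", fallbackMethod = "stockFallback")
    public String getStock(String productId) {
        // Hypothetical downstream call, for illustration only.
        return restTemplate.getForObject(
                "http://inventory-service/stock/{id}", String.class, productId);
    }

    // Fallback used whenever the breaker is open or the call fails.
    public String stockFallback(String productId, Throwable ex) {
        return "UNKNOWN";
    }
}
```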
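And here is a minimal sketch of asynchronous communication with Spring Kafka, assuming the spring-kafka starter is on the classpath and the broker address and String serializers are set in application.yml. The order-events topic, the billing-service group id, and the payload are made-up examples.

```java
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;

@Service
public class OrderEvents {

    private final KafkaTemplate<String, String> kafkaTemplate;

    public OrderEvents(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    // Producer side: publish and return immediately. The broker buffers the
    // event, so a slow or temporarily unavailable consumer does not block us.
    public void publishOrderCreated(String orderId) {
        kafkaTemplate.send("order-events", orderId, "ORDER_CREATED");
    }

    // Consumer side: another service processes events at its own pace, which
    // decouples the services and removes the synchronous-call bottleneck.
    @KafkaListener(topics = "order-events", groupId = "billing-service")
    public void onOrderEvent(String message) {
        System.out.println("Received event: " + message);
    }
}
```

Because the producer returns as soon as the broker accepts the event, adding more consumers lets you scale the processing of messages independently of the calling service.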

 

About the Author
Students: 3900
Courses: 64
Learning Paths: 5

OAK Academy is made up of tech experts who have been in the sector for many years and are deeply rooted in the tech world. They specialize in critical areas like cybersecurity, coding, IT, game development, app monetization, and mobile development.