Introduction to Server Load Balancing (SLB)
Difficulty: Beginner
Duration: 37m
Students: 264
Ratings: 5/5
Description

This course provides an introduction to Alibaba's Server Load Balancer service, also known as SLB. The course begins with a brief intro to load balancing in general and then takes a look at Alibaba SLB and its three main components. We'll look at how SLB can be used for high availability, fault tolerance, and disaster tolerance. You will also learn about SLB instance clusters, traffic routing, and security, before finally moving on to a demonstration from the Alibaba Cloud platform that shows how to set up a Server Load Balancer with two servers.

If you have any feedback relating to this course, please get in touch with us at support@cloudacademy.com.

Learning Objectives

  • Learn about load balancing and Alibaba's Server Load Balancer (SLB) service
  • Understand the three main components of SLB
  • Learn about high availability and fault tolerance with Alibaba SLB
  • Learn how SLB runs and operates
  • Set up a Server Load Balancer

Intended Audience

This course is intended for anyone who wants to learn about the basics of Alibaba's Server Load Balancer service and how to use it.

Prerequisites

To get the most out of this course, you should have a basic understanding of Alibaba Cloud. Some knowledge of load balancing would also be beneficial.

Transcript

Welcome to Session One, an Introduction to Server Load Balancing. In this session, we will look at the following topics: the concept of server load balancing, what Alibaba's Server Load Balancer is, and a brief overview of the Server Load Balancer's components. First, let's take a look at why we need something like a load balancer. There are potential issues that need to be considered when creating a web-based application service.

Taking a high-level overview, let's consider a website as an example. How popular will it be? How many requests, or hits, is it going to receive? The first potential issue is what hardware platform we are going to put it on. How powerful does your web server need to be to cope with all of the potential requests? Do you build one big, powerful server to cope with any kind of load, which can be very expensive? Or do you go with a less powerful platform, which would be cheaper, and hope it doesn't get overloaded with too many requests and either slow down or even crash?

Another potential problem is what happens to the web service if the hardware or server that it's running on fails. This becomes a single point of failure. To get around these potential problems, you could provision two servers to alleviate the single-point-of-failure scenario and, at the same time, use the cheaper platform to save the cost of one big, powerful server.

But this creates another potential problem. How do you route requests to the different servers so that one server does not become overloaded with all of the requests while the other sits idle? And, at the same time, how do you keep this complexity transparent to the users who are trying to access the website?

Well, the Domain Name System has a feature called DNS round robin that could be used in this scenario. In very basic terms, this is where requests for a website's fully qualified domain name are forwarded sequentially to each server in turn. But this also has a potential problem: the DNS servers cannot tell if a server is down. So if one of the two servers fails, half of the requests for the website will be sent to a server that is offline and will not respond. In this case, we could use a server load balancer instead.
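To make this concrete, here's a minimal, purely illustrative Python sketch (not part of any Alibaba service; the server addresses are made up) showing how plain round-robin rotation hands out requests, and why it keeps sending traffic to a failed server:

    from itertools import cycle

    # Hypothetical pool of two web servers sitting behind one domain name.
    servers = ["10.0.0.1", "10.0.0.2"]
    healthy = {"10.0.0.1": True, "10.0.0.2": False}  # the second server has failed

    rotation = cycle(servers)  # plain round robin: no awareness of server health

    for request_id in range(4):
        target = next(rotation)
        status = "OK" if healthy[target] else "NO RESPONSE (server is offline)"
        print(f"request {request_id} -> {target}: {status}")

    # Half of the requests still go to the failed server, because round-robin
    # DNS has no way of knowing that it is down.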

So, what is Alibaba's Server Load Balancer? I'll mainly use the common term, SLB, from here on. SLB is a traffic distribution and control service that automatically distributes inbound traffic across multiple web-based applications, microservices, or containers hosted on Alibaba ECS instances. It provides high availability when utilizing multiple availability zones. It prevents a single point of failure when using more than one ECS instance in the same zone and, at the same time, provides high availability within the zone. It can also be set up to elastically expand capacity according to service load.

Now, this requires Auto Scaling, which is the subject of another set of sessions and will not be covered here. By default, SLB defends against denial-of-service attacks, preventing different kinds of flood attacks on the services running behind it. Now for the SLB components. The Server Load Balancer consists of three major sets of components: a Server Load Balancer instance, one or more listeners, and at least two backend servers. A Server Load Balancer instance receives incoming traffic and distributes it to the backend servers, using one or more listeners that check the client request and perform a health check on the backend servers before forwarding the request.
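As a rough mental model only (a plain-Python sketch, not the Alibaba SDK; the class names, port, and addresses are invented for illustration), the three components fit together like this: the instance receives requests, and its listener checks backend health before choosing where to forward each one:

    from itertools import cycle
    from dataclasses import dataclass

    @dataclass
    class BackendServer:
        address: str
        healthy: bool = True  # in real SLB, health is determined by the listener's health check

    class Listener:
        """Checks client requests and forwards them only to healthy backend servers."""
        def __init__(self, port, backends):
            self.port = port
            self.backends = backends
            self._rotation = cycle(backends)

        def forward(self, request):
            # Try each backend at most once, skipping any that fail the health check.
            for _ in range(len(self.backends)):
                backend = next(self._rotation)
                if backend.healthy:
                    return f"{request} -> {backend.address}:{self.port}"
            return f"{request} -> no healthy backend available"

    class LoadBalancerInstance:
        """Receives incoming traffic and hands it to a listener for distribution."""
        def __init__(self, listeners):
            self.listeners = listeners

        def handle(self, request, port=80):
            for listener in self.listeners:
                if listener.port == port:
                    return listener.forward(request)
            return f"{request} -> no listener on port {port}"

    # Two backend servers, one of which is failing its health check.
    backends = [BackendServer("192.168.0.10"), BackendServer("192.168.0.11", healthy=False)]
    slb = LoadBalancerInstance([Listener(80, backends)])
    for i in range(3):
        print(slb.handle(f"GET /page/{i}"))

All three requests end up on the healthy server, which is exactly the behaviour that DNS round robin could not give us.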

SLB components will be fully explained in the next session. That concludes this introduction. I look forward to speaking to you in the next session, SLB components.

About the Author
Students: 43997
Labs: 168
Courses: 1751
Learning Paths: 45

A world-leading tech and digital skills organization, we help many of the world’s leading companies to build their tech and digital capabilities via our range of world-class training courses, reskilling bootcamps, work-based learning programs, and apprenticeships. We also create bespoke solutions, blending elements to meet specific client needs.