
Server Load Balancer Architecture

Developed with QA
Overview

Difficulty: Beginner
Duration: 37m
Students: 42
Rating: 5/5

Description

This course provides an introduction to Alibaba's Server Load Balancer service, also known as SLB. The course begins with a brief intro to load balancing in general and then takes a look at Alibaba SLB and its three main components. We'll look at how SLB can be used for high availability, fault tolerance, and disaster tolerance. You will also learn about SLB instance clusters, traffic routing, and security, before finally moving on to a demonstration from the Alibaba Cloud platform that shows how to set up a Server Load Balancer with two servers.

If you have any feedback relating to this course, please get in touch with us at support@cloudacademy.com.

Learning Objectives

  • Learn about load balancing and Alibaba's Server Load Balancer (SLB) service
  • Understand the three main components of SLB
  • Learn about high availability and fault tolerance with Alibaba SLB
  • Learn about the running and operations of SLB
  • Set up a Server Load Balancer

Intended Audience

This course is intended for anyone who wants to learn about the basics of Alibaba's Server Load Balancer service and how to use it.

Prerequisites

To get the most out of this course, you should have a basic understanding of Alibaba Cloud. Some knowledge of load balancing would also be beneficial.

Transcript

Hello, and welcome to session four: Server Load Balancer Architecture. In this session, we will cover the basic architecture of SLB, how the layer four and layer seven SLB protocols work, how SLB handles the flow of network traffic, and an overview of how Anti-DDoS is implemented in SLB.

Server Load Balancer instances are deployed in clusters within a region to synchronize sessions and protect backend servers from single points of failure. As a traffic forwarding service, SLB forwards client requests to backend servers through SLB clusters and receives the responses returned by the backend servers over the internal network. This improves redundancy and ensures service stability.

For the architecture, two different technologies are used to support the four network protocols that Server Load Balancer supports. SLB operates at layer four, where the TCP and UDP protocols are present, and at layer seven, where the HTTP and HTTPS protocols are present. Each zone in a region has the following: at layer four, SLB balances loads using open-source Linux Virtual Server (LVS) software clusters that are adapted for cloud computing, while at layer seven, SLB uses a cluster system called Tengine to balance loads.

Tengine is a web server project based on Nginx. It provides advanced features to support high-traffic websites. Let's have a look at how the layer four and layer seven protocols work.

For layer four load balancing, SLB uses the open-source LVS cluster, which supports TCP and UDP. These protocols operate at the transport layer of the network stack. TCP is a connection-oriented protocol, which means that a connection must be established between the sender and receiver before data can be sent. TCP is used where data packet loss is not acceptable. The connection is established by a three-way handshake between the requester and the backend server, consisting of SYN, SYN/ACK, and ACK messages. Short for synchronize, SYN is a TCP packet sent to another computer, requesting that a connection be established between them. If the SYN is received by the second machine, a SYN/ACK is sent back to the address requested by the SYN. Lastly, if the original computer receives the SYN/ACK, a final ACK is sent. This establishes a persistent connection between the client and the backend server, and the server can now serve its content.
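The handshake described above happens inside the operating system's socket calls. As a minimal sketch (the host, port, and message here are arbitrary examples, not anything SLB-specific), a TCP connection must first be established with connect()/accept() before the server can serve its content:

```python
import socket
import threading

def serve_once(server_sock):
    # accept() completes only after the SYN, SYN/ACK, ACK exchange
    conn, _ = server_sock.accept()
    conn.sendall(b"hello from backend")  # the server serves its content
    conn.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))            # bind to an ephemeral local port
server.listen(1)
port = server.getsockname()[1]

t = threading.Thread(target=serve_once, args=(server,))
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))      # SYN -> SYN/ACK -> ACK happens here
data = client.recv(1024)
client.close()
t.join()
server.close()
print(data.decode())                     # -> hello from backend
```

Notice that the application code never sees the SYN/ACK packets themselves; the transport layer handles them before connect() returns.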

UDP is a connectionless protocol, which means that no connection is established before sending data. UDP is used where some packet loss can be accepted, for example, in audio and video streaming. The layer four TCP and UDP protocols deal with the delivery of messages with no regard to the content of the message, so incoming messages are forwarded directly to the backend server with no header modification. For layer seven load balancing, SLB uses the Tengine cluster, which supports HTTP and HTTPS. These protocols operate at the application layer of the network stack. Both protocols use TCP and require the three-way handshake.
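By contrast, a UDP sender can transmit a datagram immediately, with no prior handshake and no delivery guarantee. A minimal sketch, again using arbitrary local addresses:

```python
import socket

# Receiver: bind a UDP socket to an ephemeral local port
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))
port = receiver.getsockname()[1]

# Sender: no connect() required -- sendto() transmits a datagram
# immediately; on a real network it may simply be lost
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"audio frame 1", ("127.0.0.1", port))

datagram, _ = receiver.recvfrom(1024)
sender.close()
receiver.close()
print(datagram.decode())
```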

Layer seven load balancing can route traffic in a more sophisticated manner. The layer seven load balancer terminates the network traffic and reads the message inside. It can then route the traffic based on the content of the message, such as a URL or a cookie, and creates a new TCP connection to the appropriate backend server. As a result, the header may be modified, and the X-Forwarded-For header will contain the IP address of the requesting client computer.
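As a sketch of what this means for a backend server, the hypothetical helper below (client_ip is an illustrative name, not part of any SLB API) recovers the original client address from the X-Forwarded-For header, falling back to the TCP peer address when the header is absent:

```python
def client_ip(headers, peer_ip):
    """Return the original client IP for a request that may have passed
    through a layer seven load balancer; peer_ip is the TCP peer address
    the backend actually sees (the balancer's, when one is in the path)."""
    xff = headers.get("X-Forwarded-For")
    if xff:
        # The left-most entry is the original requester; any later
        # entries are intermediate proxies the request passed through.
        return xff.split(",")[0].strip()
    return peer_ip

# Behind the balancer, the TCP peer is the balancer itself (10.0.0.2 here),
# but the header preserves the real client address.
print(client_ip({"X-Forwarded-For": "203.0.113.7, 10.0.0.2"}, "10.0.0.2"))
```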

Traffic flow. All incoming traffic must be forwarded through the LVS cluster first, irrespective of which protocol is being used. For layer four listeners, whose frontend protocol is TCP or UDP, the node servers in the LVS cluster distribute requests directly to backend ECS instances according to the forwarding rules configured on the listeners.

For layer seven listeners that use the frontend protocol HTTP, the node servers in the LVS cluster first distribute requests to the Tengine cluster. The node servers in the Tengine cluster then distribute the requests to backend ECS instances according to the forwarding rules configured on the listener. For layer seven listeners that use the frontend protocol HTTPS, request distribution is similar to the HTTP protocol; however, before distributing requests to the backend ECS instances, the system calls the key server to validate certificates and decrypt data packets. Before any request from the internet can reach a backend server, it must go through security.
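The traffic flow just described can be restated as a short illustrative sketch; this is not Alibaba's code, just the routing paths from the transcript expressed as a function:

```python
def route(listener_protocol):
    """Return the path a request takes for a given frontend protocol,
    per the flow described above: all traffic enters via the LVS
    cluster; layer four goes straight to backend ECS instances, while
    layer seven passes through Tengine (and, for HTTPS, the key server)."""
    path = ["LVS cluster"]                 # all incoming traffic enters here
    if listener_protocol in ("TCP", "UDP"):
        path.append("backend ECS")         # layer four: direct distribution
    elif listener_protocol in ("HTTP", "HTTPS"):
        path.append("Tengine cluster")     # layer seven forwarding rules
        if listener_protocol == "HTTPS":
            path.append("key server")      # validate certs, decrypt packets
        path.append("backend ECS")
    return " -> ".join(path)

print(route("HTTPS"))
```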

Alibaba Cloud provides a five gigabit per second distributed denial of service (DDoS) attack protection service for SLB by using Anti-DDoS Basic. As shown in the following diagram, all traffic from the internet must first go through Alibaba Cloud Security before arriving at a Server Load Balancer. Anti-DDoS Basic scrubs and filters common DDoS attacks and protects your services against attacks such as SYN/ACK, UDP, ICMP, and DNS query flood attacks.

Anti-DDoS Basic sets a scrubbing threshold and a blackholing threshold according to the bandwidth of the internet-facing SLB instance. When inbound traffic reaches a threshold, either scrubbing or blackholing is triggered. Scrubbing occurs when attack traffic from the internet exceeds the scrubbing threshold: Alibaba Cloud Security automatically starts scrubbing the attack traffic, with scrubbing actions that include packet filtration, traffic speed limitation, and packet speed limitation.
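The two-threshold behavior can be sketched as follows; this is an illustration of the logic described above, not Alibaba's implementation, and it assumes blackholing takes precedence once its (higher) threshold is exceeded:

```python
def ddos_action(inbound_gbps, scrub_threshold, blackhole_threshold):
    """Decide how to treat inbound traffic given the two thresholds
    Anti-DDoS Basic sets from the instance's bandwidth (illustrative)."""
    if inbound_gbps > blackhole_threshold:
        return "blackhole"   # drop ALL inbound traffic to protect backends
    if inbound_gbps > scrub_threshold:
        return "scrub"       # filter packets, limit traffic and packet rates
    return "pass"            # normal traffic, forward as usual

# Example thresholds (hypothetical values in Gbit/s)
print(ddos_action(1.0, 2.0, 5.0))   # normal traffic
print(ddos_action(3.5, 2.0, 5.0))   # above scrubbing threshold
print(ddos_action(6.0, 2.0, 5.0))   # above blackholing threshold
```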

Blackholing occurs when attack traffic from the internet exceeds the blackholing threshold; all inbound traffic is then dropped to protect the backend service. That concludes this session on SLB architecture.

In the next and last session of the series, I will demonstrate in the Alibaba Cloud portal how to create a Server Load Balancer with two servers in different zones. I look forward to speaking to you in the next session.

About the Author
Students: 2579
Labs: 19
Courses: 20
Learning paths: 22

QA is the UK's biggest training provider of virtual and online classes in technology, project management and leadership.