What is a Network Load Balancer and When Should I Use It?

Any business website depends on optimal performance and availability 24 hours a day, 7 days a week. Ensuring high performance to handle any kind of traffic — and especially spiky and high-volume traffic — can be a challenge if you aren’t using the right approach for managing requests.
Amazon Web Services recently expanded its load balancing options with a new service designed for latency-sensitive applications and extreme performance: Network Load Balancer. So what is a Network Load Balancer? In this post, we’ll explore the features and costs of this new service and help you navigate all of AWS’s load balancing options so that you know how to choose the best service for your applications.

What is a Network Load Balancer?

Until now, when you anticipated an extremely spiky workload or even instantaneous failover between regions, you would ask AWS to provision a load balancer in preparation for the surge in traffic. This meant the load balancer was “pre-warmed” for you by AWS, which is a wonderful example of AWS customer obsession. However, the process depends on several variables: you are responsible for creating the support ticket and for knowing the dates of the traffic surge, the expected request rate per second, and the size of a typical request. Finally, the process relies on AWS support to manage the pre-warming for you.
The Network Load Balancer reduces some of these dependencies. Network Load Balancer has been designed to handle sudden and volatile traffic patterns, making it ideal for load balancing TCP traffic. It is capable of handling millions of requests per second while maintaining low latencies and doesn’t have to be “pre-warmed” before traffic arrives.
With Network Load Balancer, we have a simple load balancing service specifically designed to handle unpredictable, bursty TCP traffic. It makes a single static IP address available per Availability Zone, and it operates at the connection level (Layer 4) to route inbound connections to AWS targets. Targets can be EC2 instances, containers, or IP addresses. Network Load Balancer is tightly integrated with other AWS managed services such as Auto Scaling, ECS (Amazon EC2 Container Service), and CloudFormation. It also supports static and elastic IP addresses and load balancing to multiple ports on the same instance.
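To make this concrete, here is a minimal sketch of provisioning a Network Load Balancer with the AWS SDK for Python (boto3): a TCP listener forwarding connections to a target group. The names, subnet ID, and VPC ID below are placeholders, and your health check and networking settings will differ, so treat it as an illustration rather than a production template.

import boto3

elbv2 = boto3.client("elbv2")

# Create a Network Load Balancer (Layer 4); a static IP is provided per subnet/Availability Zone.
nlb = elbv2.create_load_balancer(
    Name="demo-nlb",                           # placeholder name
    Type="network",
    Scheme="internet-facing",
    Subnets=["subnet-0123456789abcdef0"],      # placeholder subnet ID
)
nlb_arn = nlb["LoadBalancers"][0]["LoadBalancerArn"]

# A TCP target group; TargetType can be "instance" or "ip",
# covering EC2 instances, containers, and IP addresses.
tg = elbv2.create_target_group(
    Name="demo-tcp-targets",
    Protocol="TCP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",             # placeholder VPC ID
    TargetType="instance",
)
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

# Forward inbound TCP connections on port 80 to the target group.
elbv2.create_listener(
    LoadBalancerArn=nlb_arn,
    Protocol="TCP",
    Port=80,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)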
Best use cases for Network Load Balancer:

  • When you need to seamlessly support spiky or high-volume inbound TCP requests.
  • When you need to support a static or elastic IP address.
  • If you are using container services and/or want to support more than one port on an EC2 instance. Network Load Balancer is especially well suited to ECS (Amazon EC2 Container Service).

Choosing the Right Load Balancer

There are three options for Elastic Load Balancing in AWS: Classic Load Balancer, Application Load Balancer, and Network Load Balancer. How do you know which one is the right fit for your applications?
Application Load Balancer is arguably the most protocol-oriented of the three load balancing services. Because the service enforces the latest SSL/TLS ciphers and protocols, it is ideal for negotiating HTTPS requests. Application Load Balancer operates at the request level (Layer 7) and provides more advanced routing capabilities than the Classic and Network Load Balancers. Its support for host-based and path-based routing, X-Forwarded-For headers, Server Name Indication (SNI), and sticky sessions makes Application Load Balancer ideal for balancing loads to microservices and container-based applications.
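As a small illustration of that request-level routing, the boto3 sketch below adds a path-based rule to an existing Application Load Balancer listener so that requests under /api/ go to a dedicated target group; the listener and target group ARNs are placeholders for your own resources.

import boto3

elbv2 = boto3.client("elbv2")

# Route requests whose path matches /api/* to a separate target group (e.g., an API microservice).
elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:region:account:listener/app/demo-alb/...",              # placeholder
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/api/*"]}],
    Actions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:region:account:targetgroup/api-service/...",  # placeholder
    }],
)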
Here is another reason why it’s a great choice for containers: Application Load Balancer enables load balancing across multiple ports on a single Amazon EC2 instance. This is really powerful when you are using ECS because you can specify a dynamic port in the ECS task definition. When a task is scheduled onto an EC2 instance, ECS assigns an unused port on that instance to the container, and the ECS scheduler automatically registers the task with the load balancer on that port, which is one less thing for you to worry about. The Network Load Balancer also supports multiple ports on the same instance, so you might consider Network Load Balancer over Application Load Balancer if you also need to support a static or elastic IP address.
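In practice, a task definition requests a dynamic port simply by setting hostPort to 0, as in the boto3 sketch below; the task family and container image are placeholders used purely for illustration.

import boto3

ecs = boto3.client("ecs")

# hostPort 0 tells ECS to pick an unused port on the instance when the task is scheduled;
# an ECS service attached to a load balancer then registers that port with the target group.
ecs.register_task_definition(
    family="demo-web",                         # placeholder task family
    containerDefinitions=[{
        "name": "web",
        "image": "nginx:latest",               # placeholder image
        "memory": 256,
        "portMappings": [{"containerPort": 80, "hostPort": 0, "protocol": "tcp"}],
    }],
)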
Best use cases for Application Load Balancer: Containerized applications, microservices, and any application that needs advanced, request-level (Layer 7) routing.
Classic Load Balancer is still a great solution if you just need simple load balancing with multiple protocols. Classic Load Balancer supports many of the same Layer 4 and Layer 7 features as Application Load Balancer: sticky sessions, IPv6 support, monitoring, logging, and SSL termination. Both the Classic and Application Load Balancers support offloading SSL decryption from application instances, management of SSL certificates, and encryption to back-end instances with optional public key authentication.
One plus with Classic Load Balancer is its flexible cipher support, which allows you to control the ciphers and protocols the load balancer presents to clients. This makes Classic Load Balancer a good choice if you have to use, or limit clients to, specific ciphers.
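As a hedged example, the boto3 sketch below applies one of AWS’s predefined SSL negotiation policies to a Classic Load Balancer’s HTTPS listener; the load balancer and policy names are placeholders, and you should confirm the currently available predefined policies in the ELB documentation.

import boto3

elb = boto3.client("elb")  # the Classic Load Balancer API

# Create a policy that references a predefined AWS security policy (ciphers and protocols),
# then attach it to the HTTPS listener on port 443.
elb.create_load_balancer_policy(
    LoadBalancerName="demo-classic-lb",        # placeholder load balancer name
    PolicyName="demo-tls12-policy",
    PolicyTypeName="SSLNegotiationPolicyType",
    PolicyAttributes=[{
        "AttributeName": "Reference-Security-Policy",
        "AttributeValue": "ELBSecurityPolicy-TLS-1-2-2017-01",   # example predefined policy
    }],
)
elb.set_load_balancer_policies_of_listener(
    LoadBalancerName="demo-classic-lb",
    LoadBalancerPort=443,
    PolicyNames=["demo-tls12-policy"],
)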
Best use cases for Classic Load Balancer: Simple load balancing or flexible cipher support.
So, should I consider upgrading from Classic Load Balancer to the new Network Load Balancer? The answer should probably be yes if you:

  • Want to support spiky and unpredictable TCP traffic without pre-warming.
  • Need to register targets by IP address, including IP targets outside of the VPC.
  • Want to support and monitor multiple services running on different ports of the same EC2 instance.

Does Network Load Balancer cost more?

Costs vary per region, so always check the AWS pricing page before using or changing a load balancer. Currently, all three load balancers incur a charge for each hour or partial hour the load balancer is running. Both Application and Network Load Balancers incur an additional charge for the number of Load Balancer Capacity Units (LCUs) used per hour. This cost is currently calculated from the number of new connections, active connections, bandwidth, and rule evaluations, using a formula explained on the AWS load balancer pricing page. Classic Load Balancer instead adds a simple charge for each GB of data transferred through the load balancer.
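To make the LCU idea concrete, here is a rough sketch of the “highest dimension wins” calculation. The per-LCU rates below are the Application Load Balancer figures published at the time of writing and the sample traffic numbers are invented, so always confirm current values on the AWS pricing page.

# One LCU, using the Application Load Balancer dimensions at the time of writing
# (check the AWS pricing page for current figures; Network Load Balancer uses different rates).
ONE_LCU = {
    "new_connections_per_sec": 25,
    "active_connections_per_min": 3000,
    "processed_gb_per_hour": 1,
    "rule_evaluations_per_sec": 1000,   # the first 10 rules are free
}

def lcus_used(new_per_sec, active_per_min, gb_per_hour, rules_per_sec):
    """Return the LCUs consumed in an hour: you are billed on the highest of the four dimensions."""
    return max(
        new_per_sec / ONE_LCU["new_connections_per_sec"],
        active_per_min / ONE_LCU["active_connections_per_min"],
        gb_per_hour / ONE_LCU["processed_gb_per_hour"],
        rules_per_sec / ONE_LCU["rule_evaluations_per_sec"],
    )

# Invented example: 100 new connections/sec, 6,000 active connections, 2 GB/hour, 50 rule evaluations/sec
print(lcus_used(100, 6000, 2, 50))  # 4.0 -- the new-connections dimension dominates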
While each load balancing use case will be unique, here are the simple rules of thumb that I use when considering which load balancer to choose:

  • If you need to support a static or elastic IP address: Use Network Load Balancer
  • If you need control over your SSL cipher: Use Classic Load Balancer
  • If using container services and/or ECS: Use Application Load Balancer or Network Load Balancer
  • If you need to support SSL offloading: Use Application Load Balancer or Classic Load Balancer

Written by

Andrew is an AWS certified professional who is passionate about helping others learn how to use and gain benefit from AWS technologies. Andrew has worked for AWS and for AWS technology partners Ooyala and Adobe. His favorite Amazon leadership principle is "Customer Obsession" as everything AWS starts with the customer. His passions outside of work are cycling and surfing, and having a laugh about the lessons learnt trying to launch two daughters and a few startups.
