Load Balancing with GCP
1h 34m

Google Cloud Platform has become one of the premier cloud providers on the market. It offers the same rich catalog of services and massive global hardware scale as AWS as well as a number of Google-specific features and integrations. Getting started with GCP can seem daunting given its complexity. This course is designed to demystify the system and help both novices and experienced engineers get started.

This course covers a range of topics with the goal of helping students pass the Google Associate Cloud Engineer certification exam. This section focuses on identifying relevant GCP services for specific use cases. The three areas of concern are compute, storage, and networking. Students will be introduced to GCP solutions relevant to those three critical components of cloud infrastructure. The course also includes three short practical demonstrations to help you get hands-on with GCP, both in the web console and using the command line.

By the end of this course, you should be familiar with GCP's main offerings, and you should know how to pick the right product for a given problem.

Learning Objectives

  • Learn how to use Google Cloud compute, storage, and network services and determine which products are suitable for specific use cases

Intended Audience

  • People looking to build applications on Google Cloud Platform
  • People interested in obtaining the Google Associate Cloud Engineer certification


To get the most out of this course, you should have a general knowledge of IT architectures.


Distributing network load properly is one of the most important architectural challenges when designing a cloud-based application. Poor choices regarding load balancing can lead to runaway costs, poor app performance, security risks, and a number of other business hazards.

For this reason, it's worth focusing on GCP's load balancing catalog in its own short lesson. GCP offers six types of load balancers, each designed for a particular traffic workload. Three of the six are designed for global load distribution and three are regional. We will start by considering the three global load balancer types.

The three global types are the HTTP Load Balancer, the SSL Proxy, and the TCP Proxy. The first is, as the name suggests, meant for HTTP or HTTPS traffic. This is perhaps the most common type of load balancer for a typical web application. It's internet-facing, meaning it accepts external traffic on port 80 or 8080 for HTTP, or on port 443 for HTTPS, and then routes that traffic to your back-end services.
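To make this concrete, a global HTTP load balancer is assembled from a few linked resources: a health check, a backend service, a URL map, a target proxy, and a global forwarding rule. Here is a minimal sketch using the gcloud CLI (all resource names are hypothetical, and the backend service would still need instance groups attached before serving traffic):

```shell
# Health check so the load balancer only routes to healthy backends
gcloud compute health-checks create http web-health-check --port=80

# Global backend service attached to that health check
gcloud compute backend-services create web-backend-service \
    --protocol=HTTP \
    --health-checks=web-health-check \
    --global

# URL map routes incoming requests to the backend service
gcloud compute url-maps create web-url-map \
    --default-service=web-backend-service

# Target proxy terminates client HTTP connections
gcloud compute target-http-proxies create web-http-proxy \
    --url-map=web-url-map

# Global forwarding rule: the internet-facing entry point on port 80
gcloud compute forwarding-rules create web-forwarding-rule \
    --global \
    --target-http-proxy=web-http-proxy \
    --ports=80
```

The same pattern applies to HTTPS, except the target proxy is created with `gcloud compute target-https-proxies create` and carries an SSL certificate.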

The other two global types, the TCP Proxy and the SSL Proxy, are for more targeted use cases. The SSL Proxy is meant for SSL offloading, which, in case you don't know, is the practice of decrypting SSL traffic before sending it on to some other endpoint. This is an optimization: it reduces the strain on back-end systems by ensuring they don't have to spend CPU cycles decrypting secure traffic. The TCP Proxy load balancer, meanwhile, is for TCP connections that are neither SSL nor HTTP. It's another global, internet-facing load balancer, and it supports IPv6 traffic, but one detail to note is that the TCP Proxy load balancer will not preserve client IP addresses, so be aware of that if preserving them matters for your application.
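An SSL Proxy setup looks much like the HTTP one, with the certificate attached to the proxy so decryption happens at the load balancer rather than on your backends. A hedged sketch (resource names and the domain are hypothetical):

```shell
# Certificate resource used for SSL offloading at the proxy
gcloud compute ssl-certificates create example-cert \
    --domains=example.com

# TCP health check for the backends behind the proxy
gcloud compute health-checks create tcp tcp-health-check --port=443

# Global backend service for the decrypted TCP traffic
gcloud compute backend-services create ssl-backend-service \
    --protocol=TCP \
    --health-checks=tcp-health-check \
    --global

# SSL proxy decrypts client traffic before passing it to the backends
gcloud compute target-ssl-proxies create example-ssl-proxy \
    --backend-service=ssl-backend-service \
    --ssl-certificates=example-cert

# Internet-facing forwarding rule on port 443
gcloud compute forwarding-rules create ssl-forwarding-rule \
    --global \
    --target-ssl-proxy=example-ssl-proxy \
    --ports=443
```

A TCP Proxy setup is analogous, but uses `gcloud compute target-tcp-proxies create` and needs no certificate.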

Now, the three regional load balancers are ideal for situations where you need explicit control over where TLS connections are terminated, possibly for legal or security reasons. Your options here are the network TCP/UDP load balancer, the internal HTTP load balancer, and the internal TCP/UDP load balancer. The network TCP/UDP load balancer is the only one of these three that is meant for external traffic. There are two specific scenarios where it's your best choice over the external load balancers we described earlier. One scenario is that you need to load balance UDP traffic; if you're handling UDP, this is the way to go. The other scenario is that you're load balancing TCP traffic and you need to preserve client IP addresses. If either of those things is important to you, then you'll want to look at this option.
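As a rough illustration of the UDP scenario, a network load balancer can be built from a regional target pool and a UDP forwarding rule. All names, the region, and the port below are hypothetical:

```shell
# Regional target pool that will receive the load-balanced traffic
gcloud compute target-pools create udp-pool --region=us-central1

# Add existing Compute Engine instances to the pool
gcloud compute target-pools add-instances udp-pool \
    --instances=game-server-1,game-server-2 \
    --instances-zone=us-central1-a \
    --region=us-central1

# Regional forwarding rule for UDP traffic; because this is a
# pass-through load balancer, client source IPs are preserved
gcloud compute forwarding-rules create udp-forwarding-rule \
    --region=us-central1 \
    --ip-protocol=UDP \
    --ports=5000 \
    --target-pool=udp-pool
```

Note that the forwarding rule here is regional, not global, which is exactly what distinguishes this family from the three global types.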

Now, the internal TCP/UDP and internal HTTP load balancers are comparable to their global, external counterparts and handle the same basic use cases: one is meant for TCP/UDP traffic, the other for HTTP requests. The difference is that they sit within a private network and cannot accept requests directly from the public internet. Instead, they are meant for routing traffic within a VPC, for example between microservices, or from one Compute Engine instance to some other back-end service or endpoint within GCP. You can, of course, combine multiple types of load balancers in your architecture. For example, a client request could first hit a global external load balancer and then eventually reach an internal load balancer that targets the appropriate service. This is one common approach.
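The internal variant is distinguished mainly by the `INTERNAL` load-balancing scheme and by the forwarding rule being tied to a VPC network and subnet rather than an external address. A hedged sketch of an internal TCP/UDP load balancer (names, region, and the `default` network are hypothetical):

```shell
# Regional health check for the internal load balancer
gcloud compute health-checks create tcp internal-health-check \
    --port=80 \
    --region=us-central1

# Backend service with the INTERNAL load-balancing scheme
gcloud compute backend-services create internal-backend-service \
    --load-balancing-scheme=INTERNAL \
    --protocol=TCP \
    --health-checks=internal-health-check \
    --health-checks-region=us-central1 \
    --region=us-central1

# Internal forwarding rule: reachable only from inside the VPC
gcloud compute forwarding-rules create internal-forwarding-rule \
    --load-balancing-scheme=INTERNAL \
    --region=us-central1 \
    --ip-protocol=TCP \
    --ports=80 \
    --backend-service=internal-backend-service \
    --network=default \
    --subnet=default
```

Because there is no public IP involved, clients must live in (or be peered/connected to) the same VPC to reach this load balancer.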

Now, to make this a little easier to understand, GCP provides a handy flow chart that gives you everything you need to decide which load balancer is best for you. Going from left to right, we can see the basic questions we need to ask about our traffic: Do we need to accept HTTP connections? Do we need SSL offloading? Do we need to preserve client IPs? By working through this flow chart, we can determine which of the six load balancer types best matches our needs.

So that's all you need to know, at a fairly high level, in order to pick the right load balancer for your situation. Next lesson, we're going to talk about resource geolocation, and it's going to be a ton of fun. See you there.

About the Author

Jonathan Bethune is a senior technical consultant working with several companies including TopTal, BCG, and Instaclustr. He is an experienced DevOps specialist, data engineer, and software developer. Jonathan has spent years mastering the art of system automation with a variety of cloud providers and tools. Before he became an engineer, Jonathan was a musician and teacher in New York City. He is based in Tokyo, where he continues to work in technology and write for various publications in his free time.