
Load Balancing



Google Compute Engine is the cornerstone of the Google Cloud Platform. It is an IaaS (Infrastructure as a Service) environment, powered by KVM hypervisors, that allows you to create instances based on default images and custom snapshots, with complete control over network traffic.

This course, crafted by our expert Linux System Administrator David Clinton, will help you get started with Google Compute Engine, either through Google's browser console or their command line interface. By the end of this course you will have everything it takes to master the efficient and effective use of GCE.

Who should take this course

As a beginner-level course, it doesn't require any experience with Google Cloud Platform. Some basic knowledge of the Linux command line and the TCP/IP stack will help you get more out of the Networking and CLI lectures, though.

If you need a high-level introduction to the cloud, check out the Introduction to Cloud Computing course. We also have an Introduction to Google Cloud Platform course to offer you a broader overview of the whole family of Google services.

If, after going through this course, you'd like to test your knowledge of Google Compute Engine and improve your CloudRank, our Quizzes should serve as a perfect follow-up.



Hi, and welcome to CloudAcademy.com's video series on getting started with Google Compute Engine. In this video, we'll explore network load balancing.

Since it's so easy, and in many cases cost-efficient, to run many web server instances in parallel, it often makes sense to distribute web traffic among instances hosting identical data sets, so that no single virtual machine is ever overwhelmed by traffic. To do this, we'll have to configure load balancing.

We created two instances and configured the first as an Apache web server. Let's do the same on the second. We'll SSH in, then run sudo apt-get update && sudo apt-get install apache2. Now let's change directory to /var/www, which was created by the Apache package. Running ls will list the files there, and we'll see index.html, which is the page your browser will open when it's directed to the server's IP address, or to a URL associated with that address.
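For reference, here are the same steps as they'd be typed at the instance's shell. This is a sketch of what the video shows, assuming a Debian-based image whose apache2 package serves pages from /var/www (newer images use /var/www/html instead):

```shell
# Install Apache; -y skips the confirmation prompt.
sudo apt-get update && sudo apt-get install -y apache2

# The package drops its default page here on this image.
cd /var/www
ls   # should show index.html, Apache's default page
```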

Let's run sudo nano index.html. We'll leave the first line as it is and simply edit the second line to read, "Welcome to Instance2." That way, we'll be able to tell when the browser hits instance-2 rather than instance-1. We'll delete the next line, then press Ctrl+X and Y for yes to save the file, and that's all we need to do from the console.
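If you'd rather not edit the file interactively, a one-line sed substitution does the same job as the nano session above. This sketch works on a throwaway copy in /tmp so it's safe to run anywhere; on the instance itself you'd target /var/www/index.html with sudo:

```shell
# Set up a throwaway copy of a minimal default page.
mkdir -p /tmp/www
printf '<html><body>\nIt works!\n</body></html>\n' > /tmp/www/index.html

# Replace the second line, as we did in nano, so this backend identifies itself.
sed -i '2s/.*/Welcome to Instance2/' /tmp/www/index.html
cat /tmp/www/index.html
```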

How to configure Load Balancing 

Now let's set up load balancing. We'll click on Network load balancing, then go to the basic setup to see how things start. We'll remain in asia-east1, because that's the region where our two instances live. We'll have all names begin with the prefix lb1; we could specify anything we like, as long as the first character is a letter.

The forwarding rule means that any traffic making a request of the specified external IP address (which we'll specify in a minute) and matching certain criteria, for instance arriving on port 80, will be subjected to our load balancing.

So an external IP address will be assigned to this load balancer. Once it's actually running, we can direct our browser traffic to that address, and any TCP traffic on port 80 will be forwarded to one of the instances in the target pool. Let's take a look at the target pool. We could create a new pool or use an existing one. As it stands, lb1-pool already contains instance-1 and instance-2, both in zone b. We could add other instances, or remove these and add others in their place, but this is exactly what we want: the two instances we created, both configured as Apache web servers with slightly different index.html files. We'll leave it as it is. We could configure forwarding rules more precisely by clicking on Forwarding Rules at the top of the page, and similarly Target Pools, but for now we'll stick with the basic setup.
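The same basic setup can be reproduced with the gcloud command line tool. A sketch, assuming the names used here (lb1-pool, lb1-rule) and that both instances live in zone asia-east1-b; exact flags may vary slightly between gcloud releases:

```shell
# Create the target pool and add our two Apache instances to it.
gcloud compute target-pools create lb1-pool --region asia-east1
gcloud compute target-pools add-instances lb1-pool \
    --instances instance-1,instance-2 \
    --instances-zone asia-east1-b

# Forward external TCP traffic on port 80 to the pool.
gcloud compute forwarding-rules create lb1-rule \
    --region asia-east1 --ports 80 --target-pool lb1-pool
```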

Load Balancing Health Check

A health check will be performed on each of the instances periodically, to make sure it's worth sending traffic to. In our case, the check sends a request on port 80 and makes sure a response comes back within a quick enough frame of time to suggest the server is currently healthy enough to take more traffic. We'll use an existing health check called lb-check. It requests a path on port 80 of the instance, the default page served by Apache, which in our case is /var/www/index.html, and makes sure the response arrives quickly enough. Let's create the load balancing setup. We seem to have set it up successfully. Clicking again on Network load balancing, we see lb1-rule, the rule we just created, associated with an external address.
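Creating and attaching the health check can also be done from the command line. A sketch, reusing the lb-check name from the console; target pools rely on the legacy HTTP health check resource:

```shell
# Probe each backend's default page on port 80; an instance that fails
# to respond in time is temporarily taken out of rotation.
gcloud compute http-health-checks create lb-check \
    --port 80 --request-path /index.html

# Attach the check to the pool behind our forwarding rule.
gcloud compute target-pools add-health-checks lb1-pool \
    --http-health-check lb-check --region asia-east1
```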

Let's copy that address, open a new tab, and paste it in. It works: we've reached Instance1. Let's see whether, after a couple of tries, we also reach Instance2. Because the browser cache can sometimes prevent a page from reloading as quickly as we'd like, we'll try the same IP address in a different browser. Welcome to Instance2. So using the same IP address, we reached Instance1 in the Chromium browser and Instance2 in Firefox. Load balancing seems to work.
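To sidestep browser caching entirely, you can hit the forwarding rule's external address repeatedly with curl. A sketch, where 203.0.113.10 is a placeholder to replace with your own address; since target-pool balancing hashes on connection properties rather than rotating strictly round-robin, it may take several requests before both backends appear:

```shell
# Substitute the load balancer's external IP for the placeholder below.
LB_IP=203.0.113.10

# Each response's second line tells us which backend served it.
for i in $(seq 1 10); do
  curl -s "http://${LB_IP}/" | grep -i instance
done
```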

About the Author
David Clinton
Linux SysAdmin

David taught high school for twenty years, worked as a Linux system administrator for five years, and has been writing since he could hold a crayon between his fingers. His childhood bedroom wall has since been repainted.

Having worked directly with all kinds of technology, David derives great pleasure from completing projects that draw on as many tools from his toolkit as possible.

Besides being a Linux system administrator with a strong focus on virtualization and security tools, David writes technical documentation and user guides, and creates technology training videos.

His favorite technology tool is the one that should be just about ready for release tomorrow. Or Thursday.