Products and Services
There are a lot of options across the various cloud platforms that are well suited to running specific workloads, such as web applications: Google App Engine, AWS Elastic Beanstalk, and Azure App Services: Web Apps, among others.
However, there are still plenty of times where we need to set up our own infrastructure. And so cloud vendors offer IaaS (infrastructure as a service) options. Google provides us with Compute Engine which allows us to create virtual machines, custom images, snapshots, networks, auto-scalers and load balancers.
If we're going to create and implement an application on the Google Cloud Platform, then understanding these services is going to help us create highly available, highly scalable applications.
All the major cloud providers offer the ability to set up virtual machines, networks, auto-scalers, and load balancers. Where Google Cloud differs is in the speed of creating and starting up virtual machine instances, and in its massively scalable, software-based global load balancer, which doesn't require pre-warming. Google also offers per-minute billing for VM instances after the first 10 minutes.
So Google has a lot to offer. And if you're looking to learn more about Google Cloud system operations, then this may be the course for you.
What exactly will we cover in this course?
Course Objectives: Google Cloud Platform system operations
By the end of this course, you'll know:
How to use Compute Engine to create virtual machines
How to create disk snapshots
How to create images
How to create instance templates and groups
How to create networks
How to use the auto-scaler and load balancer
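Each of these objectives maps to a handful of `gcloud` commands. As a rough sketch only (the resource names such as `web-1` and `web-template` are hypothetical, not taken from the course, and these commands require an active GCP project):

```shell
# Create a virtual machine
gcloud compute instances create web-1 --zone us-central1-a

# Take a point-in-time disk snapshot
gcloud compute disks snapshot web-1 --snapshot-names web-1-snap --zone us-central1-a

# Create a custom image from the disk
gcloud compute images create web-image --source-disk web-1 --source-disk-zone us-central1-a

# Create an instance template and a managed instance group from it
gcloud compute instance-templates create web-template --image web-image
gcloud compute instance-groups managed create web-group \
    --template web-template --size 2 --zone us-central1-a

# Create a network
gcloud compute networks create demo-net
```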
This is an intermediate level course because it assumes:
You have at least a basic understanding of the cloud
You’re at least familiar with general IT concepts
What You'll Learn
| Lecture | What you'll learn |
| --- | --- |
| Intro | What will be covered in this course |
| Getting Started | An introduction to the Google Cloud Platform |
| Networking | How to create and secure Cloud Networks |
| Disks and Images | An overview of disk types and images |
| Authorization and IAM | How to authenticate and authorize users |
| Disk Snapshots | How to use snapshots for point-in-time backups |
| Cloud Storage Overview | A refresher on Cloud Storage |
| Instance Groups | How to manage instances with managed and unmanaged groups |
| Cloud SQL Overview | A quick primer on how to use Cloud SQL |
| Startup and Shutdown Scripts | Using startup scripts to provision machines at boot time |
| Autoscaling | How to automatically add and remove instances |
| Load Balancing | How to balance traffic across instances |
| Putting It All Together | A demo of how to use some of the services we've learned about |
| Summary | A review of the course |
Welcome back. In this lesson, we'll be talking about load balancing. We'll cover the different types of load balancers, and then we'll create one to see how it's actually set up.
There are three types of load balancer: TCP, UDP, and HTTP. Let's start by talking about the options that Google classifies as network load balancers, which are the TCP and UDP load balancers.
Network load balancing allows you to balance the load of your systems based on the incoming IP protocol data, such as address, port and protocol type. They use forwarding rules that point to target pools, which list instances available for load balancing and define which type of health check should be used to determine if those instances are healthy. With network load balancing you have some options that you don't have with HTTP. For example, you can load balance based on protocols, such as SMTP or FTP, and you can also perform packet inspection, which isn't available for the HTTP load balancer. So, if you need to load balance an application that doesn't run over HTTP, then you can use either TCP or UDP.
The network load balancer works by distributing traffic among pools of instances inside of a single region. It uses forwarding rules that you create to determine which pool to send traffic to, and the load balancer uses health checks to ensure that it only sends traffic to instances that are considered healthy. The health check is based on an HTTP request. So, you'll need to ensure that your instance has at least a basic web server, even if the instance isn't being used for web workloads. Network load balancing also supports a couple of additional features, namely session affinity and auto scaling. Session affinity means that requests from a particular client will be continually directed to the same instance, and this is useful for applications that aren't stateless.
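The pieces described above — a target pool, an HTTP health check, and a forwarding rule — can be sketched with `gcloud`. This is an illustrative outline only; the names (`web-pool`, `basic-check`, and the instance names) are hypothetical, and the commands assume instances already exist in the region:

```shell
# Health check used by the target pool; it's HTTP-based even for
# non-web workloads, which is why each instance needs a basic web server
gcloud compute http-health-checks create basic-check

# Target pool listing the instances available for load balancing
gcloud compute target-pools create web-pool \
    --region us-central1 --http-health-check basic-check
gcloud compute target-pools add-instances web-pool \
    --instances web-1,web-2 --instances-zone us-central1-a

# Forwarding rule that directs incoming traffic to the pool
gcloud compute forwarding-rules create web-rule \
    --region us-central1 --port-range 80 --target-pool web-pool
```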
So network load balancing is useful for use cases, such as multiplayer game servers over UDP, or maybe load balancing SMTP servers, or FTP servers, or even load balancing an application using your own protocol over TCP or UDP. And you could use it for HTTP if you wanted to. However, because HTTP is such a common use case, Google has created the HTTP load balancer, and it offers some features that the network load balancer doesn't.
Let's cover what it offers. The HTTP load balancer distributes traffic among groups of instances based on proximity to the user and the request URL routing rules. It requires an instance group and it supports managed and unmanaged groups. And that means it supports auto scaling, however it's not required. It uses ports 80 and 8080 for HTTP and 443 for HTTPS. And, just like network load balancers, this one also supports session affinity, which is useful for legacy applications that aren't stateless. The HTTP load balancer also supports connection draining, which ensures that no new connections are made and that existing connections are preserved as long as possible before an instance is removed from the group.
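Session affinity and connection draining are both properties of the backend service. As a hedged sketch (the backend service name `webapp-backend` is hypothetical), they can be enabled with something like:

```shell
# GENERATED_COOKIE pins a client to the same instance via a cookie;
# the draining timeout gives existing connections time to finish
# before an instance is removed from the group
gcloud compute backend-services update webapp-backend --global \
    --session-affinity GENERATED_COOKIE \
    --connection-draining-timeout 60
```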
Let's see how to create an HTTP load balancer from inside of the console. We already have an instance group and it has a web application in it. From the Networking page, on the Load balancing tab, we can click on Create load balancer. And now, it wants to know which type of load balancer we want to create. We're going to use the HTTP load balancer, as we talked about. So, we're going to give it a name. We'll call ours webapp-load-balancer.
And now, we need a backend service. Backend services direct incoming traffic to one or more backends. Each backend is composed of an instance group and an additional serving capacity. Backend serving capacity can be based on CPU or requests per second. Each backend service also specifies which health checks will be performed against the available instances. The form for backend service already has a sub-form for a backend. So, we just need to fill it out. And each backend requires us to select an instance group. We're going to use the one that we've already created. And our backend runs on port 80. So, we'll leave that there by default. And we can select the balancing mode, which will be either CPU utilization or requests per second. We can also set the capacity for the backend, and in this case it would be 100% CPU utilization.
And now, we need a health check. The health check polls the instances attached to our backend service and makes sure that they're available to handle traffic. And with that done we can move on to the host path rules. This is where we can map a path to a backend. We only have the one backend so we don't have to edit anything here. Next is the frontend service. Here we can set HTTP or HTTPS. We can set the type of the IP address: static, ephemeral, et cetera. We can change the port. And if we're using HTTPS, then we would add our SSL cert, which gets terminated at the load balancer. Keep in mind, if we want the load balancer to also communicate with our backends via HTTPS, then we need to have our cert loaded on each instance as well.
Now, we can review the setup and, since everything looks good, let's click Create. We're going to jump forward to when it's complete so we can try this out. So, we're going to browse to the IP address of our load balancer. And there it is. There's our application running and serving traffic from our backend service.
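The same console steps can be sketched as a sequence of `gcloud` commands. This is an illustrative outline, not the course's exact method: it assumes an existing managed instance group, and names like `webapp-group` and `webapp-check` are hypothetical:

```shell
# Health check that polls instances on port 80
gcloud compute health-checks create http webapp-check --port 80

# Backend service using that health check
gcloud compute backend-services create webapp-backend \
    --protocol HTTP --health-checks webapp-check --global

# Attach the instance group as a backend, balancing on CPU utilization
# with a capacity of 100%
gcloud compute backend-services add-backend webapp-backend --global \
    --instance-group webapp-group --instance-group-zone us-central1-a \
    --balancing-mode UTILIZATION --max-utilization 1.0

# URL map (the host/path rules), HTTP proxy, and frontend forwarding rule
gcloud compute url-maps create webapp-load-balancer \
    --default-service webapp-backend
gcloud compute target-http-proxies create webapp-proxy \
    --url-map webapp-load-balancer
gcloud compute forwarding-rules create webapp-rule --global \
    --target-http-proxy webapp-proxy --ports 80
```

Once the forwarding rule is created, its external IP is what we browse to, just as in the console demo.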
Unlike some device-based load balancers, the cloud load balancer is a massively scalable, software-defined solution that doesn't require any pre-warming. The cloud load balancer is going to be a key component in any Compute Engine based, highly available workload, so take the time to test it out and get to know it.
About the Author
Ben Lambert is a software engineer and was previously the lead author for DevOps and Microsoft Azure training content at Cloud Academy. His courses and learning paths covered Cloud Ecosystem technologies such as DC/OS, configuration management tools, and containers. As a software engineer, Ben’s experience includes building highly available web and mobile apps. When he’s not building software, he’s hiking, camping, or creating video games.