Deploying and Implementing Networking Resources
Deploying and Implementing Compute Engine Resources
This course has been designed to teach you how to deploy network and compute resources on Google Cloud Platform. The content in this course will help prepare you for the Associate Cloud Engineer exam.
- Understand key networking and compute resources on Google Cloud Platform
- Explain the different networking and compute features commonly used on GCP
- Deploy key networking and compute resources on Google Cloud Platform
- Those who are preparing for the Associate Cloud Engineer exam
- Those looking to learn more about GCP networking and compute features
To get the most from this course, you should have some exposure to GCP resources, such as VPCs and Compute Engine instances. However, this is not essential.
Welcome back. Another skill that Google tests for on their exams is the ability to effectively load balance applications, using various types of load balancers available on GCP. What we're going to do here is configure some basic load balancing for IIS across two different VM instances.
In this demonstration, we'll configure TCP load balancing to publicly load balance port 80 across two IIS webservers, called WEB1 and WEB2. Now, what I've done ahead of time to prepare for this demo is spin up the two IIS servers. Each IIS instance displays the name of the web server when you browse to the instance. I've also ensured that the existing firewall in GCP allows port 80 to reach my VMs.
To get started with our load balancer configuration, let's browse to my instances. If I browse to the IP for each, I can see that Web1 displays WEB1 on the IIS page, and I can see Web2 displays WEB2 on the page. This tells me that IIS is working as it should on each specific instance.
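If you'd rather check the instances from a terminal than a browser, a quick curl against each one works just as well. The IP addresses below are placeholders; substitute the external IPs of your own instances:

```shell
# Sanity-check that IIS responds on each instance before load balancing.
# 203.0.113.10 and 203.0.113.11 are placeholder external IPs.
curl -s http://203.0.113.10/   # should return WEB1's page
curl -s http://203.0.113.11/   # should return WEB2's page
```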
So, now that we know IIS is good on each instance, let's get started on the load balancer deployment. To begin the deployment process, what I need to do is browse to the Load Balancing page, which is located here under Network Services. From here, I can begin the setup by clicking Create load balancer. Now, when we do that, we're presented with three options. We can deploy a layer-7 HTTP(S) load balancer, a layer-4 TCP load balancer, or a layer-4 UDP load balancer.
For this exercise, we're going to deploy the one in the middle, which is a TCP load balancer, so let's click Start configuration to get rolling here. As we get started here, we can see that we have a couple of different options for this load balancer. We can make the load balancer internet facing, or we can load balance only between VMs in my network. What we're going to do in this exercise is create an internet-facing load balancer. Another choice that I need to make here is whether to have my load balancer span multiple regions or to place the backend in a single region only. Since both of my VMs are in the same region, we'll choose the single region option and then click Continue.
At this point, I need to give my load balancer a name, so I'll call it myloadbalancer. In addition, what I need to do is configure the backend for the load balancer and the front end for the load balancer. The backend configuration specifies what the load balancer will be load balancing, and what load balancing rules it needs to follow.
The front-end configuration is where I define the public IP for the load balancer, and which ports to load balance. So, let's click Backend configuration. If you look at the name field here, the load balancer name that I already provided shows up and can't be modified. For my region, I'm going to deploy my load balancer to us-central1. Since I'm load balancing specific instances, rather than instance groups, I can specify those instances by choosing the select existing instances option, and then selecting my two web servers. We're not going to use a backup pool here and our failover ratio isn't really critical, so we'll leave these at their defaults.
Under Health check, what I need to do is create a new health check. This health check is used to determine which instances are alive on the backend. It's what prevents the load balancer from sending traffic to a downed instance. So I'll create a new health check here and I'll call it port80alive. I can leave the rest of the settings at their defaults. And then to finish the setup of my backend, I need to click the Save and Continue button. The blue circle with a check mark here tells me that my backend configuration was completed successfully. Now, what I have to do is configure my front end. To start off, I need to give my front-end rule a name, so I'll call this myfrontend. Now, I'm going to change the network service tier here to Standard, since I don't need Premium features for this exercise. And when I do that, it tells me that the Standard tier uses the same region as my backend, which is fine. For the IP here, I want to reserve a static IP address.
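If you prefer the command line, the backend pieces we just walked through in the console can be created with gcloud along these lines. This is a sketch, not the exact calls the console makes: the pool name mytargetpool and the zone us-central1-a are assumptions for illustration, while port80alive and the instance names match the demo:

```shell
# Legacy HTTP health check on port 80, used by target-pool-based TCP load balancers
gcloud compute http-health-checks create port80alive --port 80

# Target pool in us-central1 that uses the health check (pool name is hypothetical)
gcloud compute target-pools create mytargetpool \
    --region us-central1 \
    --http-health-check port80alive

# Add the two IIS instances to the pool (zone assumed to be us-central1-a)
gcloud compute target-pools add-instances mytargetpool \
    --instances web1,web2 \
    --instances-zone us-central1-a
```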
I could use an ephemeral address, but that's likely to cause problems if the address changes later. So, static it is. We'll call it mylbIP and then we'll reserve it. Since we're load balancing port 80 for this exercise, we'll specify port 80 here in the port box. And then we can click Done. After I've done so, I see the blue circle with the check mark that indicates the front-end config is successful as well. From here, I can click the review and finalize option here to ensure my settings are what I need them to be, and then I can click create to deploy the configured load balancer. We'll give this a few minutes to deploy and get up to speed and then we'll test it. On the load balancer screen, under the Backend column for the load balancer, we can see a green check mark that indicates the new load balancer is healthy.
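The front-end side has a rough gcloud equivalent as well. Again, treat this as a sketch: it assumes the backend lives in a target pool named mytargetpool, and it reserves the static address in the Standard tier to mirror the console choice:

```shell
# Reserve a regional static external IP in the Standard network tier
gcloud compute addresses create mylbip \
    --region us-central1 \
    --network-tier STANDARD

# Forwarding rule: send TCP port 80 on that IP to the target pool
gcloud compute forwarding-rules create myfrontend \
    --region us-central1 \
    --network-tier STANDARD \
    --address mylbip \
    --ports 80 \
    --target-pool mytargetpool
```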
So, now that the load balancer is configured, let's open a browser window and browse to the public IP of our load balancer. We can see that it returns the name of the web server that I was directed to. If I go in and shut down my web1 server here and then try again, we'll see that this time we're sent to the other web server. And we can see web2 is now listed. This tells me that load balancing is working as expected. Although I've shown you how to provision a TCP load balancer here, there are other options, as I mentioned earlier.
So, be sure to get in and play around with the different load balancer options to get a feel for how they work.
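The browser test above can also be scripted. Assuming the IIS pages contain the strings WEB1 and WEB2 as in the demo, a short curl loop shows which backend answers each request (the IP is a placeholder for the load balancer's static address):

```shell
# Hit the load balancer repeatedly and report which backend served each request.
# 203.0.113.20 is a placeholder for the load balancer's reserved static IP.
for i in 1 2 3 4 5; do
  curl -s http://203.0.113.20/ | grep -o 'WEB[12]'
done
```

Stopping one of the web servers mid-loop should make every subsequent response come from the surviving instance, which is the same failover behavior we saw in the browser.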
About the Author
Tom is a 25+ year veteran of the IT industry, having worked in environments as large as 40k seats and as small as 50 seats. Throughout the course of a long and interesting career, he has built an in-depth skill set that spans numerous IT disciplines. Tom has designed and architected small, large, and global IT solutions.
In addition to the Cloud Platform and Infrastructure MCSE certification, Tom also carries several other Microsoft certifications. His ability to see things from a strategic perspective allows Tom to architect solutions that closely align with business needs.
In his spare time, Tom enjoys camping, fishing, and playing poker.