Products and Services
There are a lot of options across the various cloud platforms that are well suited to running specific workloads, such as web applications. These include Google App Engine, AWS Elastic Beanstalk, and Azure App Services: Web Apps, among others.
However, there are still plenty of times when we need to set up our own infrastructure, and so cloud vendors offer IaaS (infrastructure as a service) options. Google provides Compute Engine, which allows us to create virtual machines, custom images, snapshots, networks, auto-scalers, and load balancers.
If we're going to create and run an application on the Google Cloud Platform, then understanding these system operations services is going to help us create highly available, highly scalable applications.
All the major cloud providers offer the ability to set up virtual machines, networks, auto-scalers, and load balancers. Where Google Cloud differs is in the speed of creating and starting up virtual machine instances, and in its massively scalable, software-based global load balancer, which doesn't require pre-warming. Google also offers per-minute billing for VM instances after the first 10 minutes.
So Google has a lot to offer. And if you're looking to learn more about Google Cloud system operations, then this may be the course for you.
What exactly will we cover in this course?
Course Objectives: Google Cloud Platform system operations
By the end of this course, you'll know:
How to use Compute Engine to create virtual machines
How to create disk snapshots
How to create images
How to create instance templates and groups
How to create networks
How to use the auto-scaler and load balancer
This is an intermediate level course because it assumes:
You have at least a basic understanding of the cloud
You’re at least familiar with general IT concepts
What You'll Learn
| Lecture | What you'll learn |
| --- | --- |
| Intro | What will be covered in this course |
| Getting Started | An introduction to the Google Cloud Platform |
| Networking | How to create and secure Cloud Networks |
| Disks and Images | An overview of disk types and images |
| Authorization and IAM | How to authenticate and authorize users |
| Disk Snapshots | How to use snapshots for point-in-time backups |
| Cloud Storage Overview | A refresher on Cloud Storage |
| Instance Groups | How to manage instances with managed and unmanaged groups |
| Cloud SQL Overview | A quick primer on how to use Cloud SQL |
| Startup and Shutdown Scripts | Using startup scripts to provision machines at boot time |
| Autoscaling | How to automatically add and remove instances |
| Load Balancing | How to balance traffic across instances |
| Putting It All Together | A demo of how to use some of the services we've learned about |
| Summary | A review of the course |
Welcome back. In this lesson, we'll talk about networking on the Google Cloud Platform. We'll start with a high-level overview of network fundamentals, and then we'll move on to cover firewall rules. We'll talk about network management, and finally, we'll take a look at how to connect to instances without an external IP address.
We have a lot to cover in this lesson, so let's start off by talking about network fundamentals. Networks on the Google Cloud Platform are a global resource, which means they're visible to all of the resources in our project. There are three supported protocols: TCP, UDP, and ICMP, and for most applications, this will probably be all we need.
Also, networks only support IPv4 at the moment. Every VM instance that we create belongs to a network. Now, you may be thinking back to when we created that Debian instance and trying to recall when we selected a network. In that example, we didn't explicitly select one. By default, every project that has the Compute Engine API enabled has a Google-created network, and if we don't specify a network for our instances, that Google-created default is used.
The networks for the Google Cloud Platform have evolved over time, so we currently have two different types of networks. We have the legacy networks, which use globally allocated IP addresses, and then we have the newer and recommended subnetworks, which regionally control the IP addresses available for instances in each subnet.
So if subnetworks are the recommended way of doing things, what are the benefits? Here are just a few. First, subnetworks allow you to regionally segment the network's IP address space into prefixes and control which prefix an instance's internal IP address is allocated from. Also, when using a VPN, subnetworks allow you to target VPN tunnels to a particular region, and this gives you additional control over how VPN routes are configured.
This also results in lower latency in cases where the VPN would previously have assumed the IP address range spanned all regions. Another benefit: the overall network IP address space doesn't have to be determined when you first create the project. And those are just three of the many benefits. Subnetworks allow for more control and for breaking out environments in a more defined way.
When we use subnetworks, we have two options for assigning an IP address range: auto subnetworks or custom subnetworks. And once we've selected which type we want for a given network, we can't change it. Auto mode selects the IP address prefixes automatically, while custom mode allows you to select any private IP address range that you want.
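As a rough sketch, creating a custom-mode network and a regional subnetwork with the gcloud CLI might look like this (the network name, region, and IP range are hypothetical, and the exact flag names can vary by gcloud version):

```shell
# Create a custom-mode network where we pick the subnet ranges ourselves;
# --subnet-mode=auto would have Google allocate prefixes instead
gcloud compute networks create my-network --subnet-mode=custom

# Add a regional subnetwork with a private prefix of our choosing
gcloud compute networks subnets create my-subnet-us \
    --network=my-network \
    --region=us-central1 \
    --range=10.10.0.0/20
```

Instances created in us-central1 on this network would then draw their internal IP addresses from the 10.10.0.0/20 prefix.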
So, with the subnetworks, we can create regional networks each with their own IP address prefix. Now once we have instances up and running, and even instances across different subnets, we'll need a way to control the flow of packets within our network. And that's what routes provide. With routes, we can specify where packets that are addressed to a specific range should be directed to.
Routes allow us to very easily control the flow of outgoing traffic from instances, letting us route everything through a proxy, or somewhere else, if we wanted to. For the most part, the default routes will handle our outgoing traffic, so unless we need something more advanced, we really won't need to create new routes.
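For instance, routing all outbound traffic from certain instances through a proxy VM could be sketched like this (all names here are hypothetical):

```shell
# Send all egress traffic from instances tagged "proxied" through a
# proxy VM instead of the default internet gateway; the lower priority
# value means this route wins over the default route
gcloud compute routes create route-via-proxy \
    --network=my-network \
    --destination-range=0.0.0.0/0 \
    --next-hop-instance=proxy-vm \
    --next-hop-instance-zone=us-central1-a \
    --priority=800 \
    --tags=proxied
```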
Okay, so once we have instances and networks, we'll need to start thinking about securing the perimeter. Firewall rules are global resources, and they enable us to allow or deny incoming or outgoing traffic on our network. The default network includes rules that allow ICMP, SSH, and RDP traffic, and it also allows traffic between instances on the network to flow unrestricted.
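You can inspect those Google-created rules with the gcloud CLI; the exact output varies, but as a sketch:

```shell
# List the firewall rules attached to the default network
gcloud compute firewall-rules list --filter="network=default"

# Typical Google-created rules include:
#   default-allow-icmp      ICMP from anywhere
#   default-allow-ssh       tcp:22 from anywhere
#   default-allow-rdp       tcp:3389 from anywhere
#   default-allow-internal  traffic between instances on the network
```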
Next up, let's shift gears and talk about identifying instances by hostname or IP address. Each instance has a Metadata server that will act as a DNS server.
The metadata server is something that we'll be talking about in depth in a later lesson. For now, just know that it stores the DNS entries for all network IP addresses on the local network and calls Google's public DNS server for entries outside of the network. If you want to communicate between servers using the fully qualified domain name, it follows the pattern of hostname.c.projectid.internal. Besides the fully qualified domain name, we also have an internal and an external IP address. Internal IP addresses can either be ephemeral, which means they don't stick around after they're no longer needed, or they can be set by a user when the instance is created.
When instances talk to each other on the network, they use the internal IP address. External IP addresses are used for addressing the instance from outside of the network. They can be ephemeral, or they can be reserved as static IP addresses, created beforehand and attached to instances.
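So, assuming a hypothetical project ID of my-project-id, one instance could reach another on the same network by its internal fully qualified domain name:

```shell
# Run from an instance on the same network; the metadata server
# resolves names following the pattern hostname.c.projectid.internal
ping -c 3 instance-2.c.my-project-id.internal
```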
Alright, by this point we know a bit about networking on the Google Cloud Platform. We know about networks and subnetworks, and we talked about routes and firewall rules. And we just talked about hostnames and internal and external IP addresses. Now let's talk a bit about how to manage all of that. It starts with enabling the Compute Engine API; doing that will create the default auto subnetwork and the firewall rules associated with it, allowing ICMP, SSH, and RDP traffic as well as internal network traffic.
Once that's enabled you can create additional subnet networks up to the quota limit and you can also delete subnet networks except for the last one. There needs to be at least one at all times. And you can also create firewall rules and routes which like subnet networks are subject to quota limits.
At a certain point you may end up with a lot of firewall rules, and to help make management easier, you can use tags. Now, this is a great feature. You can assign tags to a firewall rule and to virtual machine instances, and a firewall rule will then apply to any instance tagged with the same tag.
Here's an example. Imagine you have an instance running an FTP server, so you need port 21 open. You could create a tag and name it whatever you want; in our example we'll name it ftp-server. Now if you create a rule that opens up port 21, then you can tag that rule with the same value of ftp-server.
And because the instance and the rule share the same tag, that rule is gonna apply to that instance. And if we create new instances, then all we need to do to open up port 21 is apply that tag. So, while we're talking about firewall rules, it's worth pointing out that they're not limited to a single port.
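The FTP example above might look like this with the gcloud CLI (the network, instance, and zone names are just for illustration):

```shell
# Create a rule that opens port 21 for any instance tagged ftp-server
gcloud compute firewall-rules create allow-ftp \
    --network=my-network \
    --allow=tcp:21 \
    --target-tags=ftp-server

# Apply the same tag to an existing instance so the rule covers it
gcloud compute instances add-tags my-ftp-instance \
    --zone=us-central1-a \
    --tags=ftp-server
```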
Firewall rules allow us to filter traffic to a single network, and we can specify IP addresses as well as the ports we want to filter in a single rule. OK. In a previous lesson, we created a Linux VM that we connected to via SSH. That worked because we had an external IP address and we had firewall rules that opened up port 22.
If you only have the one instance, then this works out just fine. However, if you have multiple instances, and even multiple networks, this is no longer optimal, because you don't want to expose all of your instances to the outside world if it's really not required. This is a common scenario. So, there are options that allow us to connect to our instances without giving those instances external IP addresses.
The first option is to create a bastion. This serves as a gateway, allowing us to connect to it via SSH, and from there we can kind of springboard to the other servers we want to reach. This works because we give that bastion host an external IP address and we open up port 22 to it.
This workflow requires that we keep that bastion host hardened from the outside so that it stays secure. It is a good solution when we have cloud-native infrastructure. If, on the other hand, we already have a company network, then using a site-to-site VPN makes more sense: it allows us to connect the company network to the cloud network. So, a site-to-site VPN is our second option.
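A minimal bastion workflow, assuming a hypothetical bastion host that has an external IP address and port 22 open, might look like this:

```shell
# SSH into the bastion host, which has an external IP address
gcloud compute ssh bastion-host --zone=us-central1-a

# From the bastion, springboard to an internal-only instance
# using its internal hostname
ssh internal-instance-1
```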
The third option is a NAT gateway. If an instance doesn't have an external IP address, it can't communicate with anything outside its network. A NAT gateway allows traffic from those instances to flow through it, letting instances on the network communicate with the outside world.
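Sketching that setup: a VM with IP forwarding enabled, plus a route that sends tagged instances' outbound traffic through it (all names here are hypothetical):

```shell
# The NAT gateway VM must be allowed to forward packets
gcloud compute instances create nat-gateway \
    --zone=us-central1-a \
    --can-ip-forward

# Route outbound traffic from instances tagged "no-ip"
# through the NAT gateway
gcloud compute routes create no-ip-internet-route \
    --network=my-network \
    --destination-range=0.0.0.0/0 \
    --next-hop-instance=nat-gateway \
    --next-hop-instance-zone=us-central1-a \
    --priority=800 \
    --tags=no-ip
```

On the gateway VM itself you'd still need to configure IP masquerading (for example with iptables), which is part of why this option involves extra setup.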
This option requires a fair bit of setup, and the NAT gateway becomes a single point of failure. However, it's an option that may work in some cases. And the final option for connecting to an instance is via the serial console. This option allows us to use the web console, the gcloud command line, or an SSH client to connect to an instance via its serial port.
Using this does require that we enable the feature with a metadata key-value setting. However, should you need to connect to an instance to troubleshoot a one-off issue, this could be useful.
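Enabling and using the serial console might look like this (the instance name and zone are hypothetical):

```shell
# Enable interactive serial console access via a metadata key
gcloud compute instances add-metadata my-instance \
    --zone=us-central1-a \
    --metadata=serial-port-enable=true

# Connect to the instance's serial port
gcloud compute connect-to-serial-port my-instance \
    --zone=us-central1-a
```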
Alright, that's gonna wrap up this lesson. In our next lesson, we're going to cover Disks and Images. So, if you're ready to keep going, then let's get started with the next lesson.
About the Author
Ben Lambert is a software engineer and was previously the lead author for DevOps and Microsoft Azure training content at Cloud Academy. His courses and learning paths covered Cloud Ecosystem technologies such as DC/OS, configuration management tools, and containers. As a software engineer, Ben’s experience includes building highly available web and mobile apps. When he’s not building software, he’s hiking, camping, or creating video games.