Networks
Difficulty: Intermediate
Duration: 1h 8m
Students: 8247
Ratings: 4.7/5
Description

Google Cloud Platform (GCP) lets organizations take advantage of the powerful network and technologies that Google uses to deliver its own products. Global companies like Coca-Cola and cutting-edge technology stars like Spotify are already running sophisticated applications on GCP. This course will help you design an enterprise-class Google Cloud infrastructure for your own organization.

When you architect an infrastructure for mission-critical applications, not only do you need to choose the appropriate compute, storage, and networking components, but you also need to design for security, high availability, regulatory compliance, and disaster recovery. This course uses a case study to demonstrate how to apply these design principles to meet real-world requirements.

Learning Objectives

  • Map compute, storage, and networking requirements to Google Cloud Platform services
  • Create designs for high availability and disaster recovery
  • Use appropriate authentication, roles, service accounts, and data protection
  • Create a design to comply with regulatory requirements

Intended Audience

This course is intended for anyone looking to design and build an enterprise-class Google Cloud Platform infrastructure for their own organization.

Prerequisites

To get the most out of this course, you should have some basic knowledge of Google Cloud Platform.

Transcript

Now we know all of the components we want to use and we just need to connect them together with networks. Google provides what are called Virtual Private Clouds, or VPCs, but I’m just going to call them networks.

There are 5 layers in Google Cloud you can use to isolate and manage resources: organizations, folders, projects, networks, and subnetworks.

You aren’t required to have organizations or folders, but they can be useful, especially for large companies.

Projects are required, though. You use them to provide a level of separation between resources. Not only are resources in different projects unable to communicate with each other over internal addresses by default, but projects can also be tied to separate billing accounts. Projects also have separate security controls, so for example, you could give Bob in QA the highest level of access in the Test Environment project, but a lower level of access in the Production Environment project.
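As a rough sketch of what that per-project access might look like on the command line (driven from Python here; the project IDs, the user, and the role choices are made up for illustration, and the gcloud CLI is assumed to be installed and authenticated):

```python
# Hypothetical sketch: granting Bob broad access in a Test project but only
# read access in a Production project. Project IDs, the user, and the roles
# are illustrative, not from the course.
import subprocess

def gcloud(*args: str) -> None:
    """Run a gcloud command and fail loudly if it returns a non-zero exit code."""
    subprocess.run(["gcloud", *args], check=True)

# Broad access in the Test Environment project.
gcloud("projects", "add-iam-policy-binding", "test-environment-project",
       "--member=user:bob@example.com", "--role=roles/editor")

# Read-only access in the Production Environment project.
gcloud("projects", "add-iam-policy-binding", "prod-environment-project",
       "--member=user:bob@example.com", "--role=roles/viewer")
```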

Each project has one default network that comes with preset configurations and firewall rules to make it easier to get started, but you can customize it, or you can create up to 4 additional networks (for a total of 5). If 5 networks per project isn’t enough for you, then you can request a quota increase to support up to 15 networks in each project.
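For example, adding a second network to a project alongside the default one might look something like this (the project ID and network name are hypothetical):

```python
# Hypothetical sketch: creating an extra auto-mode network; Google creates one
# subnet per region for you. The project ID and network name are made up.
import subprocess

subprocess.run(["gcloud", "compute", "networks", "create", "second-network",
                "--project=example-project", "--subnet-mode=auto"], check=True)
```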

A network belongs to only one project, a subnet belongs to only one network, and an instance belongs to only one subnet.

Instances in the same subnet or even different subnets within the same network can communicate with each other. Subnetworks are used to group and manage resources.

A network spans all regions, but each subnet can only be in one region. A subnet allows you to define an IP address range and a default gateway for the instances you put in it. The IP address ranges of the different subnets must be private (such as addresses in 10.0.0.0/8) and must not overlap, but other than that, there are no restrictions on them. For example, they can be different sizes. They must be IPv4 addresses, though, because Compute Engine doesn’t support IPv6 yet.
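A minimal sketch of creating two such subnets, in different regions and with non-overlapping private ranges of different sizes (the network and subnet names are made up, and the network is assumed to be in custom mode):

```python
# Hypothetical sketch: two custom subnets in different regions of the same
# network, with private, non-overlapping ranges of different sizes.
import subprocess

def gcloud(*args: str) -> None:
    subprocess.run(["gcloud", *args], check=True)

gcloud("compute", "networks", "subnets", "create", "subnet-us",
       "--network=example-network", "--region=us-central1", "--range=10.0.0.0/20")

gcloud("compute", "networks", "subnets", "create", "subnet-eu",
       "--network=example-network", "--region=europe-west1", "--range=10.1.0.0/24")
```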

A network can have either automatic or custom subnets. With automatic, the subnets are created for you, one in each region. With custom, you create them yourself. If you discover that you need to customize a network with automatic subnets, then you can convert it to custom mode, but once you do, you cannot convert it back to automatic mode.
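If you do need to make that one-way conversion, it might look like this (the network name is hypothetical):

```python
# Hypothetical sketch: converting an auto-mode network to custom mode.
# This is a one-way change; it cannot be switched back to automatic.
import subprocess

subprocess.run(["gcloud", "compute", "networks", "update", "example-network",
                "--switch-to-custom-subnet-mode"], check=True)
```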

On the default network, instances in the same subnet can communicate with each other over any TCP or UDP port, as well as with ICMP. Instances in the same network can communicate with each other, regardless of which subnets they’re in, because Google Cloud creates routes between all of the subnets in a network and the default network’s firewall rules allow internal traffic. The default network also comes with rules that allow incoming ssh, rdp, and icmp traffic from any source.

If you don’t want instances in different subnets to be able to reach each other, then you can change the firewall rules to deny traffic between them.
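One way to do that is a higher-priority deny rule, roughly like this (the rule name, network, range, and priority value are all illustrative):

```python
# Hypothetical sketch: a firewall rule that blocks incoming traffic from one
# subnet's range to instances in the network. Names and ranges are made up.
import subprocess

subprocess.run([
    "gcloud", "compute", "firewall-rules", "create", "deny-from-subnet-eu",
    "--network=example-network",
    "--direction=INGRESS",
    "--action=DENY",
    "--rules=all",
    "--source-ranges=10.1.0.0/24",
    "--priority=900",  # a lower number is a higher priority, so this overrides allow rules at 1000
], check=True)
```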

Note that only the default network comes with predefined firewall rules. When you create a new network, it doesn’t have any firewall rules. However, the instances in that network will still be able to communicate with the Internet, assuming they have external IP addresses, because all outgoing traffic from instances is allowed. Only incoming traffic is blocked. And when an instance sends a request over the Internet, the incoming response is allowed, so two-way traffic is enabled at that point.
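So on a brand-new network you would add your own ingress rules, for example something like this (the rule names, network, and source ranges are illustrative):

```python
# Hypothetical sketch: opening ssh and icmp on a newly created network,
# which starts with no firewall rules at all.
import subprocess

def gcloud(*args: str) -> None:
    subprocess.run(["gcloud", *args], check=True)

gcloud("compute", "firewall-rules", "create", "example-allow-ssh",
       "--network=example-network", "--allow=tcp:22", "--source-ranges=0.0.0.0/0")

gcloud("compute", "firewall-rules", "create", "example-allow-icmp",
       "--network=example-network", "--allow=icmp", "--source-ranges=0.0.0.0/0")
```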

Each network includes a local DNS server so VM instances can refer to each other by name. The fully qualified domain name for an instance is [HOSTNAME].c.[PROJECT_ID].internal. This name is tied to the internal IP address of the instance. An instance does not know its own external IP address or name; that translation is handled elsewhere in the network.

To reach Internet resources, each VM needs an external IP address. An ephemeral external IP address is created for each VM by default, but an ephemeral address gets replaced with another one if you stop and restart the instance, so if you want an instance to always have the same IP address, then you need to assign a static IP address to it.

Since IPv4 addresses are a scarce resource, Google doesn’t want customers to waste them. So you’re not charged for having static IP addresses as long as you’re using them. But if a static IP address is not associated with a VM instance or if it’s associated with an instance that’s not running, then you’ll be charged for it.
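A sketch of reserving a static address and swapping it onto an existing instance in place of its ephemeral one (the address name, instance, zone, and access-config name are assumptions, not from the course):

```python
# Hypothetical sketch: reserve a regional static external IP, look up its value,
# then replace an instance's ephemeral external IP with it.
import subprocess

def gcloud(*args: str) -> str:
    """Run a gcloud command and return its stdout."""
    return subprocess.run(["gcloud", *args], check=True,
                          capture_output=True, text=True).stdout.strip()

# Reserve the static address and read back the assigned IP.
gcloud("compute", "addresses", "create", "web-static-ip", "--region=us-central1")
ip = gcloud("compute", "addresses", "describe", "web-static-ip",
            "--region=us-central1", "--format=get(address)")

# Swap the instance's ephemeral external IP for the reserved one.
# "External NAT" is the usual default access-config name, but check your instance.
gcloud("compute", "instances", "delete-access-config", "web-1",
       "--zone=us-central1-a", "--access-config-name=External NAT")
gcloud("compute", "instances", "add-access-config", "web-1",
       "--zone=us-central1-a", f"--address={ip}")
```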

If a VM needs to send requests to other Google services, such as Cloud Storage, then by default it has to do so using a public IP address rather than an internal one. This is problematic if you don’t want any of your internal network communications to go over the Internet. However, if you enable the Private Google Access option in a subnet, then VMs in that subnet can connect to Google services using internal IP addresses, so their requests will go over Google’s network rather than the Internet.
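Enabling that option on a subnet is a one-line change, roughly like this (the subnet name and region are illustrative):

```python
# Hypothetical sketch: turning on Private Google Access for one subnet so its
# VMs can reach services like Cloud Storage over internal IPs.
import subprocess

subprocess.run(["gcloud", "compute", "networks", "subnets", "update", "example-subnet",
                "--region=us-central1", "--enable-private-ip-google-access"], check=True)
```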

If you want instances in different projects to communicate with each other, then you have three options: the Internet, VPC Network Peering, or a Shared VPC. Connecting over the Internet is slower, less secure, and more expensive than the other two options, so it’s not usually the best choice.

The simplest alternative is VPC Network Peering. This allows two VPCs to connect over a private RFC 1918 space, that is, using non-routable internal IP addresses, such as 10.x.y.z. In other words, they don’t need public IP addresses, and they communicate over Google’s network. Not only can you do this for VPCs in different projects, but you can even use it to connect VPCs in different organizations. To make this work, both sides have to set up a peering association. If only one side sets up a peering association with the other VPC, then the networks won’t be able to communicate with each other. Also bear in mind that there can’t be any overlapping IP ranges in the two networks. You’ll notice in this example that the two ranges are not overlapping.
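A sketch of what that two-sided setup might look like (the project, network, and peering names are made up; each command has to be run by someone with access to that project, and the subnet ranges in the two networks must not overlap):

```python
# Hypothetical sketch: VPC Network Peering set up from both sides, which is
# required before the two networks can communicate.
import subprocess

def gcloud(*args: str) -> None:
    subprocess.run(["gcloud", *args], check=True)

# Side A: peer from network-a in project-a toward network-b in project-b.
gcloud("compute", "networks", "peerings", "create", "a-to-b",
       "--project=project-a", "--network=network-a",
       "--peer-project=project-b", "--peer-network=network-b")

# Side B: the matching peering in the other direction.
gcloud("compute", "networks", "peerings", "create", "b-to-a",
       "--project=project-b", "--network=network-b",
       "--peer-project=project-a", "--peer-network=network-a")
```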

A more complicated option is to use a Shared VPC. The idea is that instances in different projects can share the same network. This is kind of a weird idea. If you’ve put resources in different projects, you probably want them to be managed separately, so why would you get them to use the same network? In most cases, it’s to enforce security standards. For example, if you want to use the same firewall rules across all of your projects, then this is a good way to do that.

To set up a Shared VPC, you need to designate one of the projects as the host project and the others as service projects. The host project is the one that contains the Shared VPC. Instances in the service projects can use subnets in the Shared VPC. This is made possible by giving Service Project Admins the authority to create and manage instances in the Shared VPC but nothing more. Meanwhile, the Shared VPC Admins have full control over the network. Note that all of the projects in this arrangement have to be part of the same organization.
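A rough sketch of that host/service arrangement (the project IDs and the admin’s address are hypothetical):

```python
# Hypothetical sketch: designate a Shared VPC host project, attach a service
# project, and let a Service Project Admin use the shared network.
import subprocess

def gcloud(*args: str) -> None:
    subprocess.run(["gcloud", *args], check=True)

# Turn the host project into a Shared VPC host.
gcloud("compute", "shared-vpc", "enable", "host-project-id")

# Attach a service project to the host project.
gcloud("compute", "shared-vpc", "associated-projects", "add", "service-project-id",
       "--host-project=host-project-id")

# Let the service project's admins create instances that use the shared network.
gcloud("projects", "add-iam-policy-binding", "host-project-id",
       "--member=user:service-admin@example.com",
       "--role=roles/compute.networkUser")
```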

OK, we’ve gone over a lot of networking topics. Now how should we apply these concepts to GreatInside?

At a minimum, we should create separate projects for the Development, Test, and Production environments. Inside each project, we should stick with the default network. There’s no need to add any additional ones. We should also stick with automatic subnetworks. The only subnetwork we need right now is one in the US, such as us-central1, since we don’t currently have any plans to expand into other parts of the world. When GreatInside decides to add instances overseas, then they can be added to the other regional subnets.

The default firewall rules should also be fine, since they only allow internal traffic plus ssh, icmp, http, and https. We should remove the rule that allows rdp traffic in the Production network, though, since we don’t have any Windows instances in it.
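Removing that preset rule from the Production project might look like this (the project ID is made up; default-allow-rdp is the name Google gives the preset rule on the default network):

```python
# Hypothetical sketch: deleting the default network's rdp rule in the
# Production project. The project ID is illustrative.
import subprocess

subprocess.run(["gcloud", "compute", "firewall-rules", "delete", "default-allow-rdp",
                "--project=greatinside-prod", "--quiet"], check=True)
```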

We don’t want the Production, Development, and Test environments to be part of the same network, so we don’t need a Shared VPC. In fact, we don’t want them to communicate with each other at all, so we don’t need to use VPC Network Peering either.

By the way, you probably noticed that everything I’ve shown so far is only for the interior design application. I’m going to get into the details of how to set up the payment processing environment in the Legislation and Compliance lesson.

One last item is that we have to decide which components need external IP addresses. That’s easy in this case because the load balancer is the only one that needs an external IP address (and ideally it should be a static address). Users will connect to the web instances through the load balancer, so the web instances only need internal IP addresses, and for security reasons, that’s all they should have.

That does raise the question of how a system administrator could connect to them for troubleshooting, though. One way is to give your administrators access to the internal network by interconnecting it with the company’s on-premises network. That’s something that GreatInside has already requested, so let’s see how to do that. There are three ways: Cloud VPN, Cloud Interconnect, and Peering.

Cloud VPN lets you set up a virtual private network connection between your own network and Google Cloud. To do this, you need to have a peer VPN gateway in your own network and it needs to use IPsec to connect to the Cloud VPN Gateway and encrypt traffic. You can have multiple tunnels to a single VPN gateway.

By itself, Cloud VPN requires you to make changes to static routes on your tunnels manually. But if you use Google Cloud Router, then the routes will be updated dynamically using BGP (that is, Border Gateway Protocol). Network topology changes are propagated automatically.
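A sketch of adding a Cloud Router so routes are exchanged over BGP instead of being maintained by hand (the router name, region, and ASN are illustrative):

```python
# Hypothetical sketch: create a Cloud Router on the default network so VPN
# tunnel routes are learned dynamically via BGP.
import subprocess

subprocess.run(["gcloud", "compute", "routers", "create", "example-router",
                "--network=default", "--region=us-central1", "--asn=65001"],
               check=True)
```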

The second way to connect is called Cloud Interconnect. Instead of connecting over the Internet, you can use an enterprise-grade connection to Google's network edge. There are two ways to do this: Dedicated Interconnect and Partner Interconnect. If your internal network extends into a colocation facility where Google has a point of presence, then you can connect your network to Google’s. This is called Dedicated Interconnect. It’s a great solution that provides higher bandwidth and lower latency than a connection over the public internet. It’s a bit expensive, though, because the minimum bandwidth is 10 Gbps.

If you don’t have a presence in a supported colocation facility or you want to pay for a connection that’s smaller than 10 Gbps, you can use Partner Interconnect. With this option, you connect to a service provider that has a presence in a supported colocation facility. You can purchase a monthly contract for connections as small as 50 Mbps and as large as 10 Gbps. 

The third way is to use Peering. This is similar to Cloud Interconnect because you connect your network to Google’s network at a point of presence either directly (which is called Direct Peering) or through a service provider (which is called Carrier Peering). One big difference with peering is that it doesn’t cost anything. So why would anyone pay for Cloud Interconnect when they could peer with Google for free? Well, because with Cloud Interconnect you get a direct connection between your on-premises network and one of your VPCs in Google Cloud. You have full control over the routing between your networks. If you want to change a route, you can change it on your on-premises router, and it will be picked up by BGP. Although the peering option uses BGP, too, it’s done at the most basic level. It doesn’t create any custom routes in your VPC network.

Since we don’t have requirements for low latency and high availability between the company network and Google Cloud, we should go with Cloud VPN to connect. We should also use Cloud Router so network routes will be updated dynamically.

And that’s it for networks.

 

About the Author
Students: 217313
Courses: 98
Learning Paths: 164

Guy launched his first training website in 1995 and he's been helping people learn IT technologies ever since. He has been a sysadmin, instructor, sales engineer, IT manager, and entrepreneur. In his most recent venture, he founded and led a cloud-based training infrastructure company that provided virtual labs for some of the largest software vendors in the world. Guy’s passion is making complex technology easy to understand. His activities outside of work have included riding an elephant and skydiving (although not at the same time).