This Course explores how to implement virtual private clouds on the Google Cloud Platform. It starts off with an overview, where you'll be introduced to the key concepts and components that make up a virtual private cloud.
After covering basic VPC concepts and components, we'll dive into peering VPCs, shared VPCs, and VPC flow logs, including a hands-on demonstration of how to configure flow logs. We’ll also look at routing and network address translation, before moving on to Google Kubernetes Engine clusters. We’ll cover VPC-native clusters and alias IPs, as well as clustering with shared VPCs.
You’ll learn how to add authorized networks for GKE cluster master access and we finish off by looking at firewall rules. We’ll cover network tags, service accounts, and the importance of priority. You’ll also learn about ingress rules, egress rules, and firewall logs.
If you have any feedback related to this Course, feel free to contact us at support@cloudacademy.com.
Learning Objectives
- Get a foundational understanding of virtual private clouds on GCP
- Learn about VPC peering and sharing
- Learn about VPC flow logs and how to configure them
- Learn about routing in GCP and how to configure a static route
- Understand the pros and cons of VPC-native GKE clusters
- Learn about cluster network policies
- Understand how to configure and manage firewall rules in GCP
Intended Audience
This Course is intended for anyone who wants to learn how to implement virtual private clouds on the Google Cloud Platform.
Prerequisites
To get the most from this Course, you should already have experience with the public cloud and networking, as well as an understanding of GCP architecture.
Welcome to Shared VPC. In this lesson, I will provide you with an overview of shared VPCs. We will cover some of the key concepts that you need to be familiar with when configuring shared VPCs on Google Cloud Platform.
So, what exactly does shared VPC bring to the table?
Organizations can use shared VPC in situations where they need to connect resources from several different projects to a common virtual private cloud network, or VPC. Leveraging shared VPC allows resources in different projects to communicate securely via internal IP addresses rather than needing public IPs.
A shared VPC consists of a host project and several service projects that attach to the host project. VPC networks that exist within the host project are then referred to as shared VPC networks. Resources like compute engine instances, Google Kubernetes engine clusters, cloud functions, and other eligible resources within the service projects can then use the subnets within the shared VPC network to communicate with one another.
Visit the URL on your screen for a complete list of eligible resources that can be used in a shared VPC:
As flexible as shared VPC is, it can only be used to connect projects within the same organization. This means that the host project and the service projects cannot exist in different organizations. I should also mention that linked projects can reside in the same folder or in different folders. However, if they are in different folders, the admin needs to have Shared VPC Admin rights to both folders.
As I mentioned earlier, a VPC network that is defined within a host project and centrally shared with resources in the attached service projects is called a shared VPC network. These shared VPC networks can be auto mode or custom mode. Legacy networks, however, cannot be used in a shared VPC.
Enabling a host project causes all existing VPC networks within the project to become shared VPC networks. Additionally, new networks that are created within the host project automatically become shared VPC networks as well. The connection between a host project and its service projects is made at the project level. The subnets of shared VPC networks within a host project are accessible by service project admins.
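As a rough sketch of how this looks on the command line, the following gcloud commands enable a host project and attach a service project to it. The project IDs here are placeholders, and the account running the commands would need Shared VPC Admin rights:

```shell
# Enable Shared VPC on the host project (placeholder project IDs).
gcloud compute shared-vpc enable host-project-id

# Attach a service project to the host project.
gcloud compute shared-vpc associated-projects add service-project-a \
    --host-project host-project-id

# Confirm which service projects are attached to the host project.
gcloud compute shared-vpc list-associated-resources host-project-id
```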
Access control of a shared VPC is achieved through the use of organization policies and IAM permissions. You can use the organization policies to establish organization level, folder level, and project level controls. IAM roles can be used to delegate administration.
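To make the IAM delegation concrete, here is a hedged sketch of what it might look like. The organization ID, user emails, project ID, subnet name, and region are all placeholders; the roles (`compute.xpnAdmin` for Shared VPC Admin, `compute.networkUser` for subnet-level access) are the standard ones Google Cloud provides for this purpose:

```shell
# Grant Shared VPC Admin at the organization level (placeholder org ID and user).
gcloud organizations add-iam-policy-binding 123456789012 \
    --member user:net-admin@example.com \
    --role roles/compute.xpnAdmin

# Grant a service project admin access to a single shared subnet,
# rather than to every subnet in the host project.
gcloud compute networks subnets add-iam-policy-binding shared-subnet \
    --project host-project-id \
    --region us-west1 \
    --member user:dev-lead@example.com \
    --role roles/compute.networkUser
```

Binding `compute.networkUser` on an individual subnet, rather than on the whole host project, is how you keep service project teams scoped to only the subnets they should use.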
It should be pointed out that standard per-project VPC quotas apply to shared VPC host projects. Likewise, shared VPC networks are governed by the per-network and per-instance limits for VPC networks. As far as billing goes, charges for resources that are part of a shared VPC network are attributed to the service project where those resources are located.
When instances in different service projects that are attached to a host project using the same shared VPC network need to communicate, they do so using either ephemeral or reserved static internal IP addresses. It should be pointed out that these internal IP addresses are subject to any applicable firewall rules.
Ephemeral internal IP addresses can be automatically assigned from the range of available IPs in the selected shared subnet to instances within a service project, while static internal IP addresses can be reserved by the service project admin.
It’s important to point out that when reserving a static internal IP address, the IP address object itself needs to be created in the same service project as the resource that it will be assigned to. This is important to note because even though the IP address object must be created in the same service project as the resource using it, the IP address itself will actually come from the range of available IPs in the selected shared subnet of the shared VPC network.
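A minimal sketch of that reservation follows, assuming placeholder project IDs, a subnet named shared-subnet in the host project, and an address chosen from the shared subnet's range. Note that the command runs against the service project, while the `--subnet` flag points at the host project's shared subnet:

```shell
# Reserve a static internal IP in service project A; the address itself
# is drawn from a shared subnet defined in the host project.
gcloud compute addresses create app-internal-ip \
    --project service-project-a \
    --region us-west1 \
    --subnet projects/host-project-id/regions/us-west1/subnetworks/shared-subnet \
    --addresses 10.0.1.10
```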
I should also point out that external IP addresses that are defined in a host project can only be used by resources in the host project. They cannot be used by resources in the service projects. If resources in a service project require external IP addresses, they would need to be assigned from the set of external IPs for that service project.
I also want to touch on DNS as it relates to shared VPC. As far as internal DNS goes, it should be noted that virtual machines residing in the same service project can communicate with one another using the internal DNS names that Google Cloud creates automatically when those virtual machines are provisioned. Cloud DNS private zones can also be used in a shared VPC network. To do this, you need to create the private zone in the host project and then authorize access to the zone for the shared VPC network.
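As an illustrative sketch, creating such a private zone in the host project and authorizing it for the shared VPC network might look like this. The zone name, DNS name, project ID, and network name are all placeholders:

```shell
# Create a Cloud DNS private zone in the host project and authorize it
# for the shared VPC network (all names here are placeholders).
gcloud dns managed-zones create corp-internal \
    --project host-project-id \
    --dns-name corp.example.internal. \
    --description "Private zone for the shared VPC network" \
    --visibility private \
    --networks shared-vpc-net
```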
For a deep dive into the technical details of internal DNS and shared VPC, visit the URL that you see on your screen:
https://cloud.google.com/compute/docs/internal-dns#shared-vpc
Shared VPCs can also be used with load balancing. That said, to make this work, all necessary load-balancing components need to reside within the same project. They can either reside all in the same host project or all in the same service project. Shared VPC does not support splitting a load balancer, with some components in the host project and others in an attached service project.
The table on your screen identifies where specific load-balancing components should be created when using them with shared VPC.
To wrap things up for this lesson, let’s take a walk through a basic use case scenario for a shared VPC. The image on your screen depicts such a scenario.
Notice that in this example, the shared VPC admin has created a single host project and has attached two different service projects to it. In this scenario, we have instance A deployed in the us-west1 region. It resides in service project A and is connected to the 10.0.1.0/24 subnet within the shared VPC that’s been provisioned within the host project. The internal address of 10.0.1.3 for instance A comes from the pool of addresses for the 10.0.1.0/24 CIDR block defined in the host project.
Instance B, which is deployed in the us-east1 region, resides in service project B, and is attached to the 10.15.2.0/24 subnet of the shared VPC in the host project. The internal IP address of 10.15.2.4 for instance B comes from the pool of addresses for the 10.15.2.0/24 CIDR block defined in the host project.
Because they are attached to a Shared VPC, both instances can communicate with one another over private IP addresses.
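The addressing in this scenario can be sanity-checked with a few lines of Python using the standard-library ipaddress module. This simply verifies that each instance's internal IP falls within its shared subnet's range and that the two subnets don't overlap:

```python
import ipaddress

# Subnets defined in the host project's shared VPC network (from the scenario).
subnet_a = ipaddress.ip_network("10.0.1.0/24")   # used by instance A (us-west1)
subnet_b = ipaddress.ip_network("10.15.2.0/24")  # used by instance B (us-east1)

# Internal addresses assigned to the instances in the service projects.
instance_a = ipaddress.ip_address("10.0.1.3")
instance_b = ipaddress.ip_address("10.15.2.4")

# Each instance's address is drawn from the pool of its shared subnet...
assert instance_a in subnet_a
assert instance_b in subnet_b

# ...and the two subnets don't overlap, so routing between them is unambiguous.
assert not subnet_a.overlaps(subnet_b)
```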
Tom is a 25+ year veteran of the IT industry, having worked in environments as large as 40k seats and as small as 50 seats. Throughout the course of a long and interesting career, he has built an in-depth skillset that spans numerous IT disciplines. Tom has designed and architected small, large, and global IT solutions.
In addition to the Cloud Platform and Infrastructure MCSE certification, Tom also carries several other Microsoft certifications. His ability to see things from a strategic perspective allows Tom to architect solutions that closely align with business needs.
In his spare time, Tom enjoys camping, fishing, and playing poker.