Google Kubernetes Engine Clusters
Configuring and Managing Firewall Rules
This course explores how to implement virtual private clouds on the Google Cloud Platform. It starts off with an overview, where you'll be introduced to the key concepts and components that make up a virtual private cloud.
After covering basic VPC concepts and components, we'll dive into peering VPCs, shared VPCs, and VPC flow logs, including a hands-on demonstration of how to configure flow logs. We’ll also look at routing and network address translation, before moving on to Google Kubernetes Engine clusters. We’ll cover VPC-native clusters and alias IPs, as well as clustering with shared VPCs.
You’ll learn how to add authorized networks for GKE cluster master access and we finish off by looking at firewall rules. We’ll cover network tags, service accounts, and the importance of priority. You’ll also learn about ingress rules, egress rules, and firewall logs.
If you have any feedback related to this course, feel free to contact us at firstname.lastname@example.org.
- Get a foundational understanding of virtual private clouds on GCP
- Learn about VPC peering and sharing
- Learn about VPC flow logs and how to configure them
- Learn about routing in GCP and how to configure a static route
- Understand the pros and cons of VPC-native GKE clusters
- Learn about cluster network policies
- Understand how to configure and manage firewall rules in GCP
This course is intended for anyone who wants to learn how to implement virtual private clouds on the Google Cloud Platform.
To get the most from this course, you should already have experience with the public cloud and networking, as well as an understanding of GCP architecture.
Hello and welcome to firewall rules in GCP. In this lesson, we will take a look at VPC firewall rules. You’ll learn what they are and what they do.
VPC firewall rules are used to allow or deny connections to and from virtual machine instances. They are applied to a given project and network. In situations where you want to apply firewall rules across an entire organization, you can use firewall policies.
When you enable a set of VPC firewall rules, those firewall rules are always enforced, meaning they constantly protect instances that are connected to your VPC network. Firewall rules that you create are actually defined at the network level. However, connections are allowed and denied on a per-instance basis. This means that firewall rules can provide protection between instances and other networks as well as between individual instances that are connected to the same network.
To create a VPC firewall rule, you need to specify the VPC network that you wish to protect, along with some components to configure what the rule actually does. When you define a VPC firewall rule, you can target specific network traffic types that are based on the source, destination, protocol, and ports. We will touch on each of these firewall rule components, as well as some other components, in a little more detail in a few minutes.
There are several ways to create or modify VPC firewall rules. You can use the Google Cloud console if a GUI is your style or, if you’re more into the command line, you can use the gcloud command-line tool. You can also use the REST API to create and modify VPC firewall rules. When you create a firewall rule, you can use the target component of the rule to define which instances the rule should be applied to.
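As a quick sketch of the gcloud option, the command below creates a simple ingress rule. The network name (my-vpc), rule name, and tag are hypothetical, and the command assumes an authenticated gcloud environment with a project already set:

```shell
# Allow inbound SSH (TCP port 22) from anywhere to instances
# tagged "ssh-allowed" in a hypothetical network named "my-vpc".
gcloud compute firewall-rules create allow-ssh-example \
    --network=my-vpc \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:22 \
    --source-ranges=0.0.0.0/0 \
    --target-tags=ssh-allowed
```

You can then inspect the result with `gcloud compute firewall-rules describe allow-ssh-example`.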
Now, I should mention that in addition to the firewall rules you create, Google Cloud enforces some connection controls of its own. Certain traffic is always blocked, no matter what rules you configure. For example, Google Cloud will not allow certain egress traffic to leave a VPC network; SMTP traffic on port 25 is a good example of this. Conversely, certain traffic is always allowed. For instance, Google Cloud always permits communication between a virtual machine and its metadata server.
It should be pointed out that every network that gets created will have two implied firewall rules automatically defined. The implied allow egress rule, with its action of “allow”, allows all traffic out to the 0.0.0.0/0 destination, which basically means everywhere. The priority of the implied allow egress rule is the lowest possible, 65535. The implied deny ingress rule, with an action of “deny”, blocks all incoming connections. Like the egress rule, the priority of the ingress rule is 65535. However, instead of having a destination defined as 0.0.0.0/0, it is the source that is defined as 0.0.0.0/0.
While these implied rules cannot be removed, they can be overridden with custom rules that you create.
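Because the implied deny ingress rule sits at the lowest possible priority, any custom allow rule with a higher priority (a lower number) overrides it. As an illustrative sketch, with hypothetical names:

```shell
# Override the implied deny ingress rule for HTTP traffic.
# Priority 1000 takes precedence over the implied rule's 65535
# (lower number = higher priority).
gcloud compute firewall-rules create allow-http-example \
    --network=my-vpc \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:80 \
    --source-ranges=0.0.0.0/0 \
    --priority=1000
```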
Before we jump into the components that make up a firewall rule, let’s take a quick look at what pre-populated rules exist for the default network.
The table on your screen shows the four pre-populated rules for the default network. They include default-allow-internal, default-allow-ssh, default-allow-rdp, and default-allow-icmp.
The default-allow-internal rule allows ingress connections for all protocols and ports across instances in the network. The default-allow-ssh rule allows ingress connections on TCP port 22 from everywhere to all instances on the network. This rule allows SSH traffic, which is often used to remote into Linux VMs. The default-allow-rdp rule allows all inbound traffic on TCP port 3389 to all instances on the network. This allows remote RDP access to Windows VMs. Lastly, the default-allow-icmp rule allows ingress, or inbound, ICMP traffic from any source to any instance on the network. This is the rule that allows PING to reach VM instances.
Unlike the implied rules that cannot be deleted, these rules can be deleted and modified as necessary.
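If you want to review or remove these pre-populated rules from the command line, the following commands show one way to do it (deleting the RDP rule here is just an example, assuming you have no Windows VMs that need it):

```shell
# List the firewall rules attached to the default network.
gcloud compute firewall-rules list --filter="network:default"

# Delete one of the pre-populated rules if it is not needed.
gcloud compute firewall-rules delete default-allow-rdp
```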
So now that you know what firewall rules are and what they do, let’s tie everything up with the components that make up a firewall rule.
When you create a firewall rule, there are several components you need to configure. These components include direction of connection, priority, action, enforcement status, target, source, destination, protocol, and port.
The direction of connection can be ingress or egress in relation to the target. Simply stated, the ingress direction refers to connections that are sent from a source to the target. This is inbound traffic. The egress direction, conversely, refers to traffic sent from the target to a destination. This is outbound traffic.
The priority of a firewall rule can range from 0 to 65535. The lower the number, the higher the priority. The default priority when you create a new rule is 1000. Priority determines the order in which different firewall rules are evaluated. Higher priority rules, as you would expect, take precedence over lower priority rules. Rules that share the same priority and the same action produce the same result. However, if two rules share the same priority but have different actions, the deny rule takes precedence.
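As a sketch of how priority plays out in practice, the deny rule below (priority 900) is evaluated before an allow rule left at the default priority of 1000, so matching traffic is blocked. The network and rule names are hypothetical:

```shell
# A deny rule at priority 900 overrides any allow rule for the
# same traffic at the default priority of 1000 (lower number wins).
gcloud compute firewall-rules create deny-telnet-example \
    --network=my-vpc \
    --direction=INGRESS \
    --action=DENY \
    --rules=tcp:23 \
    --source-ranges=0.0.0.0/0 \
    --priority=900
```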
The action that is configured for a firewall rule is used to specify whether the rule blocks traffic or allows traffic whenever the rule is matched. There are two different actions to choose from. They include allow and deny. As you might guess, allow is used to permit connections that match the configured components of a rule. The deny action, conversely, blocks connections when a rule is matched.
There are two options to choose from when configuring enforcement for a rule. They include enabled or disabled. You would typically disable a rule while troubleshooting, or in situations where you need to grant temporary access to resources protected by the rule. A rule that is enabled, obviously, is a rule that will be evaluated. I should mention that all firewall rules are enabled by default when they are created. However, you can create a rule in the disabled state if you wish.
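A sketch of toggling enforcement with gcloud, using hypothetical rule names, might look like this:

```shell
# Temporarily disable a rule while troubleshooting...
gcloud compute firewall-rules update allow-ssh-example --disabled

# ...and re-enable it afterwards.
gcloud compute firewall-rules update allow-ssh-example --no-disabled

# A rule can also be created in the disabled state up front.
gcloud compute firewall-rules create staged-rule-example \
    --network=my-vpc \
    --allow=tcp:8080 \
    --disabled
```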
When you configure a firewall rule, you will need to specify a target. When configuring an ingress rule, the target parameter refers to the destination VM instances, GKE clusters, or App Engine flexible environment instances. When you configure an egress rule, the target refers to the source instances. Simply stated, the target of an ingress rule will apply to traffic arriving on an instance’s network interface in the VPC network. The target of an egress firewall rule will apply to all traffic leaving a VM instance’s network interface in the VPC network.
There are three options to choose from when specifying a target. These options include all instances in network, instances by target tags, and instances by target service accounts. When you choose all instances in network, the firewall rule will apply to all instances in the network. The instances by target tags option causes the firewall rule to apply only to instances with a matching network tag. Choosing the instances by target service accounts option causes the firewall rule to apply to only those instances that use a specific service account.
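The two narrower targeting options can be sketched as follows; the tag, service account, and project names are all hypothetical:

```shell
# Target by network tag: the rule applies only to instances
# carrying the "web" network tag.
gcloud compute firewall-rules create allow-web-by-tag \
    --network=my-vpc \
    --allow=tcp:80,tcp:443 \
    --target-tags=web

# Target by service account: the rule applies only to instances
# running as this (hypothetical) service account.
gcloud compute firewall-rules create allow-web-by-sa \
    --network=my-vpc \
    --allow=tcp:80,tcp:443 \
    --target-service-accounts=web-sa@my-project.iam.gserviceaccount.com
```

Omitting both options is equivalent to choosing all instances in the network.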
The source parameter only applies to ingress rules, while destination only applies to egress rules. When configuring an ingress rule, the target parameter that we discussed specifies the destination instances for traffic. This means you cannot configure a destination parameter for ingress rules. When configuring an egress rule, the target parameter refers to the source instances for the traffic. This means that you cannot configure a source parameter directly for an egress rule. When you configure a source parameter for an ingress rule, you can specify source IP ranges, source tags, or source service accounts as your source. You can also choose a combination of source IP ranges and source tags, or a combination of source IP ranges and source service accounts. If you do not specify a source when configuring an ingress rule, Google Cloud will define the source as any IP address, or 0.0.0.0/0.
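As an example of combining source types on an ingress rule, the sketch below accepts traffic that either comes from the given IP range or from instances carrying the given source tag (the range, tag, and names are hypothetical):

```shell
# Ingress rule matching traffic from a source IP range OR from
# instances tagged "app-tier" (the two source types form a union).
gcloud compute firewall-rules create allow-db-example \
    --network=my-vpc \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:5432 \
    --source-ranges=10.128.0.0/20 \
    --source-tags=app-tier
```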
As I mentioned previously, the destination parameter only applies to egress rules. When configuring the destination parameter, you can only specify IP address ranges. Such ranges can include addresses inside your VPC network as well as outside of it. If you do not specify a destination range, Google Cloud will set it to 0.0.0.0/0 for you. In other words, it will define the destination as all IP addresses.
And last but not least, when you configure a firewall rule, you will need to tell the rule what protocols and ports the rule should be scoped to. Protocols and ports apply to both ingress and egress rules, and you can specify individual protocols and ports or a combination of protocols and ports. If you leave out protocols and ports when configuring your rule, Google Cloud will apply the rule to all traffic on any protocol and any port. I should mention that Google Cloud firewall rules will use the port information to reference the destination port of a packet, not the source port. This means that for ingress firewall rules the destination ports will refer to the ports on systems identified in the rule’s target parameter. For egress firewall rules, the destination ports will refer to ports on systems that are identified in the rule’s destination parameter.
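Pulling the egress-side pieces together, the sketch below scopes an egress deny rule to a destination range and a destination port; the names and port choice are hypothetical:

```shell
# Egress rule: the destination can only be IP ranges, and the port
# refers to the destination port of the outbound traffic.
gcloud compute firewall-rules create deny-db-egress-example \
    --network=my-vpc \
    --direction=EGRESS \
    --action=DENY \
    --rules=tcp:3306 \
    --destination-ranges=0.0.0.0/0
```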
Tom is a 25+ year veteran of the IT industry, having worked in environments as large as 40,000 seats and as small as 50 seats. Throughout the course of a long and interesting career, he has built an in-depth skillset that spans numerous IT disciplines. Tom has designed and architected small, large, and global IT solutions.
In addition to the Cloud Platform and Infrastructure MCSE certification, Tom also carries several other Microsoft certifications. His ability to see things from a strategic perspective allows Tom to architect solutions that closely align with business needs.
In his spare time, Tom enjoys camping, fishing, and playing poker.