Configuring Kubernetes Clusters
Configuring Firewall Rules
Please note: this course has been replaced with an updated version which can be found here.
This course guides you through the key steps to configure a Google Cloud Platform virtual private cloud (VPC), which allows you to connect your GCP services with one another securely.
After a brief introduction, the course begins with how to set up and configure VPCs, including VPC peering and shared VPC. You'll learn how to configure routes, set up cloud NAT (network address translation), and configure VPC-native clusters in Kubernetes, before rounding off the course by looking at VPC firewalls. The topics in this course are accompanied by demonstrations on the platform in order to show you how these concepts apply to real-world scenarios.
If you have any feedback, questions, or queries relating to this course, please feel free to contact us at firstname.lastname@example.org.
- Configure Google Cloud Platform VPC resources
- Configure VPC peering and API access
- Create shared VPCs
- Configure internal static and dynamic routing, as well as NAT
- Configure and maintain Google Kubernetes Engine clusters
- Configure and maintain VPC firewalls
This course is intended for:
- Individuals who want to learn more about Google Cloud networking, who may also have a background in cloud networking with other public cloud providers
- Individuals who simply want to widen their knowledge of cloud technology in general
To get the most from this course, you should already have experience in public cloud and networking as well as an understanding of GCP architecture.
Okay, for the next section we're going to talk about firewall rule priorities. Now, this is a very important subject, simply because you have to have a firm understanding of how your firewall rules are going to be applied.
So, for this example, what I'm going to show you is that as I'm creating this firewall rule, we get down to the priority section, and as you can see here on the screen, the priority can range anywhere from 0 to 65,535. What this means is: the lower the number, the higher the priority.
So for example, if I have a rule at priority 49 that allows, let's say, any egress traffic from instances with the tag on-prem to port 80, that traffic is going to be allowed. It will work just fine.
At the same time, I could have another rule that's set to deny on port 80, but instead of priority 49, I'll set this one at 50 or even 500. That deny rule will not be applied, because it doesn't take precedence over the allow rule at 49.
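The two rules from this example could be created like this with the gcloud CLI (the network name `my-vpc` and the rule names are placeholders for this sketch; the tag and priorities mirror the example above):

```shell
# Allow rule at priority 49 (higher priority, since lower number wins)
gcloud compute firewall-rules create allow-egress-80 \
    --network=my-vpc \
    --direction=EGRESS \
    --action=ALLOW \
    --rules=tcp:80 \
    --destination-ranges=0.0.0.0/0 \
    --target-tags=on-prem \
    --priority=49

# Deny rule for the same traffic, but at priority 50 -- it loses to the
# allow rule above, so port 80 egress from on-prem-tagged instances works
gcloud compute firewall-rules create deny-egress-80 \
    --network=my-vpc \
    --direction=EGRESS \
    --action=DENY \
    --rules=tcp:80 \
    --destination-ranges=0.0.0.0/0 \
    --target-tags=on-prem \
    --priority=50
```

Swapping the two priority values would flip the outcome and block the traffic instead.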
So this is key: you can have the same rule set up both ways and ultimately control access to your instances based simply on the priority number.
This way, you can have, say, 10 rules, and each one could be a variation of the others, but because some are at a lower priority number and others at a higher one, certain rules will take precedence depending on that number. So it's very important to understand this, and it can be a little confusing at first. Even if you click on the question mark here, or hover over it, the rule of thumb is the same: the lower the number, the higher the priority.
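The evaluation logic described above can be sketched in a few lines of Python. This is a simplified model, not the GCP API: real firewall rules also match on protocol, direction, tags, and ranges, but the key idea (among matching rules, the lowest priority number wins) is the same.

```python
def effective_action(rules, port):
    """Return the action of the winning rule for traffic on `port`.

    Among all rules matching the port, the one with the lowest
    priority number takes precedence. Returns None if nothing
    matches (in real GCP, the implied rules would then apply).
    """
    matching = [r for r in rules if r["port"] == port]
    if not matching:
        return None
    winner = min(matching, key=lambda r: r["priority"])
    return winner["action"]

# The example from the lecture: allow at 49 beats deny at 500
rules = [
    {"name": "allow-egress-80", "port": 80, "priority": 49,  "action": "ALLOW"},
    {"name": "deny-egress-80",  "port": 80, "priority": 500, "action": "DENY"},
]

print(effective_action(rules, 80))  # ALLOW
```

Lowering the deny rule's priority number below 49 would make it the winner instead.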
So with that being said, I'd suggest you just play around with the priorities so you get a firm understanding of them. Let's move on to the next section.
Mark has many years of experience working with Google Cloud Platform and also holds eight GCP certifications.