
Kubernetes Network Policies

Contents

Course Introduction
  1. Introduction (Preview, 1m 46s)
Cluster Networking
  2. Namespaces (Preview, 8m 46s)
Course Conclusion
  5. Review (1m 26s)

The course is part of these learning paths

  • Google Professional Cloud Developer Exam Preparation
  • Google Cloud Platform for System Administrators
  • Google Cloud Platform for Developers
Overview

Difficulty: Intermediate
Duration: 32m
Students: 19
Rating: 5/5

Description

Developing for Kubernetes involves more than just building and deploying containers. This course covers how to manage communication both inside and outside of your Kubernetes cluster. You will learn how to organize your pods with namespaces, how to map IP addresses to a group of pods, and how to control communication with your pods using network policies.

If you have any comments or feedback, feel free to reach out to us at: support@cloudacademy.com.

Learning Objectives

  • Create and use namespaces
  • Connect to your pods using services
  • Define and enforce network policies

Intended Audience

  • Engineers who want to deploy applications on a Kubernetes cluster
  • People who want to get GCP certified (e.g. Professional Cloud Developer)

Prerequisites

  • Basic understanding of Kubernetes
  • Experience building and deploying containers
Transcript

So you should now understand how to use Kubernetes Services to make it easy to connect to your pods. A ClusterIP Service will enable access from inside the cluster, and a LoadBalancer Service will enable access from outside. But what if you only want to accept internal connections from a single namespace? Or what if you want to accept external connections, but only from a certain IP range? To cover these scenarios, you need to use Network Policies.

Kubernetes Network Policies allow you to control traffic at the IP address and port level. You can specify what types of access are allowed for each pod or namespace. By default, all traffic is allowed. So if you don't change anything, all your services are wide open. This is why I did not have to set any policies in the last demo. The default behavior makes it easy to test things, but if you are planning to run a cluster in a production environment, you will want to set some Network Policies.

As I said, by default no policies exist, and all access is allowed. However, once you create a Network Policy for a namespace, any connection not explicitly allowed by that policy will be denied. Let's say you had three pods called A, B and C in the same namespace. With no policies in place, all three pods can communicate with each other. However, if you were to create a Network Policy allowing Pod A to connect to Pod B, then only Pod A could connect to Pod B. Before the policy, Pod C was able to connect to Pod B as well. But once the new policy was added, connections from Pod C to B would be rejected. You would need to add a new Network Policy for Pod C to restore access.
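As a sketch of that scenario, a policy allowing only Pod A to reach Pod B might look like the manifest below. The labels `app: pod-a` and `app: pod-b` are assumptions for illustration; they are not from the course.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-a-to-b
spec:
  podSelector:
    matchLabels:
      app: pod-b          # the policy applies to Pod B
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: pod-a  # only Pod A may connect to Pod B
```

Once this policy exists in the namespace, Pod B only accepts connections matched by it, which is why Pod C is cut off.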

Now this isn't very intuitive, so keep this in mind when playing around with policies. Also, you should note that network policies are additive. This means they act like a whitelist, not a blacklist. You specify what is allowed and everything else is assumed to be disallowed. Now this is useful because it means network policies cannot conflict.

Network policies allow you to do things like set different policies per namespace. So if you have team-specific namespaces, you would probably want different policies depending upon what each team needs. Or, if you had a testing namespace and a production namespace, you would probably want much stricter policies on production than on testing.

Network policies work by specifying ingress rules for incoming connections and egress rules for outgoing connections. You could allow all incoming connections, but deny all outgoing connections. Or you could allow incoming connections but only from certain IPs. Or you could allow connections only to certain ports.

A web server container might have an "allow ingress to ports 80 and 443" policy. Remember that setting an allow policy on ports 80 and 443 would automatically deny all the other ports. A MySQL instance might have an "allow ingress from IP address 10.10.1.84 on port 3306" policy. This would allow only a specific back-end service to connect and make queries.
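Here is a hedged sketch of those two policies as manifests. The pod labels (`app: web`, `app: mysql`) and policy names are assumptions; the ports and IP address come from the example above.

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: web-allow-http-https
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
    - Ingress
  ingress:
    - ports:
        - protocol: TCP
          port: 80
        - protocol: TCP
          port: 443
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: mysql-allow-backend
spec:
  podSelector:
    matchLabels:
      app: mysql
  policyTypes:
    - Ingress
  ingress:
    - from:
        - ipBlock:
            cidr: 10.10.1.84/32   # only this back-end address
      ports:
        - protocol: TCP
          port: 3306
```

Note that in the MySQL policy the `from` and `ports` clauses sit in the same rule, so both conditions must match: the right source IP and the right port.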

Network Policies are effectively pod-level firewall rules. Please note that in order to have a working network flow between two pods, you need both a working egress policy on the source pod and a working ingress policy on the destination pod. If either the egress or ingress is blocked, then no connection can be established. Now, I previously mentioned that namespaces do not provide isolation. However, you can isolate namespaces using Network Policies.
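One common pattern for that kind of isolation (not shown in the course, but a standard use of the same mechanism) is to restrict a namespace so its pods only accept traffic from other pods in the same namespace. The namespace name `team-a` is hypothetical:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
  namespace: team-a        # hypothetical namespace name
spec:
  podSelector: {}          # applies to all pods in the namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}  # any pod, but only from this same namespace
```

A `podSelector` inside an ingress rule only matches pods in the policy's own namespace, which is what makes this an isolation boundary.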

So let me show you how to create a simple network policy. And I am going to continue using the same cluster that I created previously. I still have the two namespaces, the two Hello World apps, and two services. Now before I can start creating any policies, I first need to enable Network Policy enforcement on the cluster.

Now by default, network policies are disabled on GKE. There is an option you can select when you create the cluster to enable them. If you were going to create a new cluster, you could select "my-first-cluster" as a template and look at the settings. If you clicked on Networking, you would see an option to enable the Kubernetes Network Policy. However, I already have an existing cluster. So instead of creating a new one, I'm just going to enable network policies on that one.

So this is the command to enable the Network Policy add-on. It's gonna take a while to finish, so I am gonna skip ahead. Alright, that took a while, but it finally finished. Next, I need to recreate my cluster's node pools with network policy enforcement enabled. If I didn't recreate the node pools, my network policies would not have any effect.
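The commands are not shown in this transcript, but the documented gcloud flags for these two steps look like the following; the cluster name `my-first-cluster` is an assumption based on the demo.

```shell
# Step 1: enable the Network Policy add-on on the cluster (this is the
# long-running step the instructor skips ahead through):
gcloud container clusters update my-first-cluster \
    --update-addons=NetworkPolicy=ENABLED

# Step 2: enable enforcement on the nodes, which recreates the node pools:
gcloud container clusters update my-first-cluster \
    --enable-network-policy
```

Both steps are required: the first installs the policy controller, and the second recreates the nodes so the policies are actually enforced.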

Now, I have noticed that sometimes I run into an issue where I have to run this command multiple times before it actually works. So if you are trying to follow along, and you find your network policies are not actually being enforced, try running this command again. Things should work after that. So for my first policy, I am going to set it to allow all ingress traffic. Now this might seem a bit weird, because by default all traffic is already allowed. But trust me, this will help you understand how to read policies.

Now you can see that policies are structured in YAML format. Normally you would create the policy in a separate file, and then just pass in the file name to the command. But I am going to pass in the contents of the file directly via the command line so you can easily see which policies I am working with.

First, I want you to notice that this policy is named allow-all-ingress. Now that's just a name. I could have called it anything. The name is just for identification. So next, you can see a field called podSelector. This is what controls which pods will be affected by this policy. The empty curly braces means to select all pods. So this policy is going to affect all pods in the assigned namespace. You can use this field to filter out certain pods if you want. The policy types define what kinds of policy we will be setting. So here you can see I am setting an ingress policy. I could also specify an egress policy as well, if I wanted. And finally, I am passing in the ingress policy to be enforced.

Now this is the list of rules that will be applied. This ingress policy uses the same empty curly braces just like the pod selector. So this means that all ingress types will be allowed. Now remember that you specify what is allowed. You do not specify what is denied. Anything not allowed is automatically denied. So this policy will allow all ingress to all pods. Which of course, is the default. So once I apply this policy, it's actually not going to change anything.
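The exact file is shown on screen in the video, but based on the description above it should be close to this standard allow-all-ingress manifest:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all-ingress
spec:
  podSelector: {}     # empty braces: select all pods in the namespace
  policyTypes:
    - Ingress         # we are defining an ingress policy
  ingress:
    - {}              # one empty rule: allow all ingress traffic
```

The key detail is the single empty rule under `ingress`: a rule with no restrictions matches everything, so everything is allowed.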

All right, so now let me create a policy that will actually change something. First, let me delete the previous policy. And now I'm going to create a new policy. This new policy is going to do the opposite: it's going to deny all incoming connections. Now you might be thinking it looks exactly the same as the last one. But look again, very carefully. The name is different, but that's just a name; it does not actually affect the policy.

The pod selector is still the same. That means this policy will affect all pods in the namespace. And we also specified that we are setting an ingress policy, just like last time. However, this time we are not passing in an ingress policy. We told the policy that we would be defining an ingress policy, but we did not pass in any rules. So in effect, we told it to deny all ingress. Now if that sounds confusing, let me explain it a different way.
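Again the actual file is on screen in the video, but the described policy matches the standard deny-all-ingress pattern; the name `deny-all-ingress` is a guess:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
spec:
  podSelector: {}     # still all pods in the namespace
  policyTypes:
    - Ingress         # an ingress policy is enforced...
  # ...but no ingress rules are listed, so all ingress is denied
```

Compared with the previous policy, the only functional change is that the `ingress` list is missing entirely, so nothing matches and nothing is allowed in.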

Let's say the Network Policy is like a security guard. I tell the security guard to not let anyone in, unless their name is on the list I give him. And then, what if I do not give him a list? Effectively, the security guard cannot let anyone in. So this policy works in a very similar way. We are telling it to enforce an ingress policy, but not specifying the criteria for allowing anyone in. Which means no one can get in.

Alright, so let's test it and see if it actually works. I'll get the external IP address of the LoadBalancer Service and try to run the curl command. And I am not getting a response. It looks like my connection is being denied. So I can no longer access my pod, even when trying to connect through the LoadBalancer Service, because my new Network Policy is blocking all incoming connections. I should be able to restore access by removing the policy. See? Once I deleted the policy, I can access the pod again. So, you should now have a basic idea of how to create and apply policies.
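If you want to reproduce this test, the steps look roughly like this. The service name, policy name, and `<EXTERNAL-IP>` placeholder are assumptions; substitute the names from your own cluster.

```shell
# Find the external IP of the LoadBalancer Service:
kubectl get service hello-world-service   # note the EXTERNAL-IP column

# While the deny-all policy is active, this hangs or times out:
curl http://<EXTERNAL-IP>/

# Remove the policy to restore access:
kubectl delete networkpolicy deny-all-ingress

# Now the same request succeeds:
curl http://<EXTERNAL-IP>/
```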

I am done with this demo, but I encourage you to take some time and experiment yourself. There are a lot of things you can try, and the best way to gain deeper knowledge is to get some real experience.

About the Author

Daniel Mease
Google Cloud Content Creator

Daniel began his career as a Software Engineer, focusing mostly on web and mobile development. After twenty years of dealing with insufficient training and fragmented documentation, he decided to use his extensive experience to help the next generation of engineers.

Daniel has spent his most recent years designing and running technical classes for both Amazon and Microsoft. Today at Cloud Academy, he is working on building out an extensive Google Cloud training library.

When he isn’t working or tinkering in his home lab, Daniel enjoys BBQing, target shooting, and watching classic movies.