With many enterprises shifting to containers, and with Kubernetes the leading platform for deploying containerized applications, learning Kubernetes patterns is more essential than ever for application developers. Spreading this knowledge across all teams yields the best results, for developers in particular.
This Kubernetes Patterns for Application Developers course covers many of the Configuration, Multi-Container Pods, and Services & Networking objectives of the Certified Kubernetes Application Developer (CKAD) exam curriculum.
This course, comprising six expertly-instructed lectures, helps you prepare for the CKAD exam.
Learning Objectives
- Understand and use multi-container patterns to get the most out of Kubernetes Pods
- Learn how to control network access to applications running in Kubernetes
- Understand how Kubernetes Service Accounts provide access control for Pods
- Use the Kubernetes command-line tool kubectl to effectively overcome challenges you may face when working with Kubernetes
Intended Audience
This course is intended for application developers who are leveraging containers and using, or considering using, Kubernetes as a platform for deploying applications. However, significant parts of this course appeal to a broader audience of Kubernetes users. Individuals who may benefit from taking this course include:
- Application Developers
- DevOps Engineers
- Software Testers
- Kubernetes Certification Examinees
Prerequisites
To successfully navigate this course, we recommend having:
- Knowledge of the core Kubernetes resources, including Pods and Deployments
- Experience using the kubectl command-line tool to work with Kubernetes clusters
- An understanding of YAML and JSON file formats
Related Training Content
This course is part of the CKAD Exam Preparation Learning Path.
Kubernetes has several concepts relevant to networking. As an application developer, you should know these concepts and how they can be used to securely provide access to applications running in a cluster. This lesson begins by reviewing the basic networking model employed by Kubernetes. Then we will discuss services in more detail. The networking basics and services topics are also covered in other content here on Cloud Academy, so we will only review the key concepts of each. After that, we will discuss Kubernetes network policies for controlling the traffic that is allowed to and from pods. The basic building block in Kubernetes is a pod. The network model for a pod is IP-per-pod, meaning each pod is assigned one unique IP address in the cluster. Containers in the pod share the same IP address and can communicate with each other's ports using localhost. A pod is, of course, scheduled onto a node in the cluster. Any node can reach the pod by using its pod IP address. Other pods in the cluster can also reach the pod using the pod's IP address.
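As a quick illustration of the IP-per-pod model, here is a minimal sketch of a two-container pod; the names and images are illustrative, not taken from the course demo. The sidecar reaches the web container over localhost because both containers share the pod's network namespace.

```yaml
# Hypothetical two-container pod illustrating the IP-per-pod model.
# Both containers share one pod IP, so the sidecar can reach the
# web server at localhost:80.
apiVersion: v1
kind: Pod
metadata:
  name: shared-network-demo    # illustrative name
spec:
  containers:
  - name: web
    image: nginx:1.25
    ports:
    - containerPort: 80
  - name: sidecar
    image: busybox:1.36
    # Polls the web container over localhost every five seconds.
    command: ["sh", "-c",
      "while true; do wget -qO- http://localhost:80 > /dev/null && echo reached web via localhost; sleep 5; done"]
```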
This is thanks to whatever Kubernetes networking plugin you choose. The network plugin implements the Container Network Interface standard and enables pod-to-pod communication. But pods should be seen as ephemeral. They can be killed and restarted with a different IP. You may also have multiple replicas of a pod running, and you can't rely on a single pod IP to get the benefits of replication. This is where services come in. A service maintains a logical set of pod replicas. Usually these sets of pods are identified with labels. The diagram only includes one replica, but there could be many spread over many nodes. The service maintains a list of endpoints as pods are added to and removed from the set. The service can send requests to any of the pods in the set. Clients of the service now only need to know about the service rather than specific pods. Pods can discover services using environment variables, as long as the pod was created after the service, or by using the DNS add-on in a cluster. The DNS can resolve the service name, or the namespace-qualified service name, to the IP associated with the service. The IP given to a service is called the cluster IP, and ClusterIP is the most basic type of service.
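To make that concrete, here is a minimal sketch of a ClusterIP service that selects pods by label; the names, labels, and ports are assumptions for illustration, not taken from the course demo.

```yaml
# Hypothetical ClusterIP service; label and ports are illustrative.
apiVersion: v1
kind: Service
metadata:
  name: server            # resolvable in-cluster as "server", or as
                          # "server.<namespace>" via the DNS add-on
spec:
  type: ClusterIP         # the default service type
  selector:
    app: server           # endpoints are pods carrying this label
  ports:
  - port: 80              # port exposed on the service's cluster IP
    targetPort: 8888      # port the selected pods listen on
```

Because the endpoint list is recalculated as matching pods come and go, clients only ever need the stable service name or cluster IP.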
The cluster IP is only reachable from within the cluster. The kube-proxy cluster component that runs on each node is responsible for proxying requests for the service to one of the service's endpoints. The other types of services allow clients outside of the cluster to connect to the service. The first of those types is NodePort. NodePort causes a given port to be opened on every node in the cluster. A cluster IP is still given to the service, and any requests to the node port of any node are routed to the cluster IP. The next type of service that allows external access is LoadBalancer. The LoadBalancer type exposes the service externally through a cloud provider's load balancer. A LoadBalancer service also creates a cluster IP and a node port for the service. Requests to the load balancer are sent to the node port and routed to the cluster IP. Different features of cloud provider load balancers, such as connection draining and health checks, are configured using annotations on the service. The final type of service is ExternalName, and it is different in that it is enabled by DNS, not proxying.
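A NodePort service might look like the following sketch; the port values are made-up examples.

```yaml
# Hypothetical NodePort service; values are illustrative.
apiVersion: v1
kind: Service
metadata:
  name: server-nodeport
spec:
  type: NodePort          # also allocates a cluster IP
  selector:
    app: server
  ports:
  - port: 80              # cluster IP port
    targetPort: 8888      # pod port
    nodePort: 30080       # opened on every node (default range 30000-32767)
```

Changing `type: NodePort` to `type: LoadBalancer` (and omitting `nodePort` to let Kubernetes choose one) would additionally provision the cloud provider's load balancer in front of the node ports.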
You configure an ExternalName service with a DNS name, and requests for the service return a CNAME record with the external DNS name. This can be used for services running outside of Kubernetes, such as a database-as-a-service offering. Network policies in Kubernetes are rules that determine which groups of pods are allowed to communicate with each other and with other network endpoints. Network policies are similar to simple firewalls, or the security groups that control access to virtual machines running in a cloud. Network policies are namespaced resources, meaning that you can configure network policies independently for each Kubernetes namespace. Before we get into the details, there is an important caveat when it comes to network policies. The container network plugin running in your cluster must support network policies to get any of their benefits.
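Before turning to network policies, here is a minimal sketch of an ExternalName service; the service name and external DNS name are made-up examples.

```yaml
# Hypothetical ExternalName service; the external DNS name is made up.
apiVersion: v1
kind: Service
metadata:
  name: orders-db
spec:
  type: ExternalName
  # In-cluster lookups for "orders-db" return a CNAME record
  # pointing here; no cluster IP is allocated and no proxying occurs.
  externalName: db.example.com
```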
Otherwise, you will create network policies and there won't be anything to enforce them. In the worst case, you might think that you have secured access to an application when the pods are actually still open to requests from anywhere. There won't be any error messages when you create the policy; it will be created successfully but will simply have no effect. The cluster administrator can tell you whether the network plugin in your cluster supports network policies. Some examples of network plugins that support network policies are Calico and Romana. With that caveat out of the way, we can talk about two kinds of pods: isolated and non-isolated. A pod that is non-isolated allows traffic from any source. This is the default behavior. Once a pod is selected by a network policy, it becomes isolated. Pods are selected using labels, which are the core grouping primitive in Kubernetes. Let's see how to use network policies in a demo by writing several network policies and observing their effects. To begin with, I am running a cluster with the Calico network plugin installed. You can see that by checking the pods in the kube-system namespace and observing that several pods begin with calico.
Calico is one of the network plugins that support network policies. I have created three pods in the network-policy namespace. One is a server, and the other two are clients that send requests to the server every second. Client one is in the U.S. East region, while client two is in the U.S. West region. Both clients are able to send requests and get responses from the server. This can be seen by watching the logs of each client. I use the -f option to follow, or stream, the logs for each pod. New logs are generated every second acknowledging that a response was received from the server. Let's take a look at our first policy. Working our way down from the top, network policies are included in the networking API. This policy is made to allow traffic from the U.S. East region and is scoped to the network-policy namespace. In the spec, first there is a pod selector that selects the pods the policy applies to. Here we are using a matchLabels selector to select any pod with the app: server label. This applies to the single server pod that is running in the cluster. If the pod selector were empty, the policy would apply to all pods in the namespace. Next is the policy types list. The two allowed values are ingress, to indicate the policy applies to incoming traffic, and egress, to indicate the policy applies to outgoing traffic. You can include one or the other, or both, as in this case.
If, for example, you only included egress, then all ingress traffic would be allowed by the policy. Corresponding to the ingress policy type is the ingress list, which specifies rules for what traffic is allowed. Each rule includes a from list specifying sources and a ports list specifying allowed ports. If the from list is omitted, all sources of traffic are allowed on the specified ports. If the ports list is omitted, traffic on all ports is allowed from the specified sources. If both are omitted, all traffic on all ports is allowed. Source rules can be made of pod selectors, to select traffic based on pod labels; namespace selectors, to select based on the namespace of pods; or IP blocks, to select based on a range of IP addresses. This policy uses a pod selector to allow ingress traffic from pods with the us-east region label. Each item in the ports list can restrict the allowed traffic to a given port and protocol. In this example, TCP port 8888 is allowed because that is the port the server listens on. The egress mapping includes a "to" list of rules in the same format as the ingress rules. In this case, no rules are specified, so all outbound traffic is allowed. Let's create the policy and then check to verify that client one in the U.S. East region is allowed to communicate with the server. And it is, because responses are still being received. What if we check client two, which is in the U.S. West region? There are no new log messages being received.
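Putting the pieces described above together, the policy would look roughly like the following sketch; the policy name and exact label keys are assumptions based on the narration rather than the actual demo manifest.

```yaml
# Sketch of the ingress policy described above; names and labels are
# assumptions based on the narration, not the real demo files.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-us-east        # illustrative name
  namespace: network-policy  # scopes the policy to this namespace
spec:
  podSelector:
    matchLabels:
      app: server            # the policy applies to the server pod
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          region: us-east    # only pods labeled us-east may connect
    ports:
    - protocol: TCP
      port: 8888             # the port the server listens on
  egress:
  - {}                       # empty rule: all outbound traffic allowed
```

You could create it with `kubectl apply -f` and inspect the result with `kubectl describe networkpolicy`, as the lesson does next.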
The policy is doing its job. Besides doing basic tests like this, you can always use the describe command to check that the policy is doing what you expect. It outputs a description of the policy in a relatively easy-to-understand format. Now let's look at one more policy to see how IP block rules are configured. This policy is constructed to block outgoing traffic to a single IP address from app: server pods. This is an egress-only policy, so there is no ingress included in the policy types list. The egress list has a single IP block rule. There are two parts to the rule. The cidr field is required and sets a range of allowed IP addresses. CIDR is a notation for representing a block of IP addresses; 0.0.0.0/0 represents all IP addresses. If that were the complete rule, all outgoing traffic would be allowed. However, there is an except list. The except field is optional, but when included, it acts as a blacklist within the whitelist that the cidr field specifies. In this case, there's a single exception, which represents one IP address. That IP address happens to be the IP address of the client one pod. This is for demonstration purposes only. In general, you should use labels when selecting pods in your cluster because pods should be treated as ephemeral.
They can be terminated and brought up again with a different IP address, whereas labels remain the same. No ports list is specified, so all ports are allowed. Let's go ahead and create the policy. Now there are two policies being enforced. One allows incoming traffic from the U.S. East region, which allows client one's traffic in the current cluster. And the second policy denies outgoing traffic from the server to client one. Think about what you might expect to observe. Will either client pod receive responses from the server? Let's check. Client two still doesn't receive responses. What about client one? It is still receiving responses. That may come as a surprise. We never discussed how policies are combined. You should think of network policies as sets of allow rules that are added together. If any rule allows traffic, even if others deny it, the traffic is allowed. In this case, the allow-all egress rule in the first policy allows all egress, even though the second policy has blacklisted a specific IP. So to see the effect of the IP block rule, we can delete the first policy with the allow-all egress rule.
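Assembled from the narration, the second policy might look like this sketch; the policy name and the client one pod's IP address are placeholders, since the actual values aren't given in the lesson.

```yaml
# Sketch of the egress IP-block policy described above; the name and
# the client one pod IP (192.168.1.10 here) are made-up placeholders.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: block-client-one-egress  # illustrative name
  namespace: network-policy
spec:
  podSelector:
    matchLabels:
      app: server
  policyTypes:
  - Egress                   # egress-only: ingress is unaffected
  egress:
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0      # whitelist: all IP addresses...
        except:
        - 192.168.1.10/32    # ...except the client one pod's IP
    # no ports list, so all ports are allowed
```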
Now, if we check the logs for client one, to which egress is not allowed according to the policy being enforced, it is still able to receive responses. What is happening now? The rules apply to new connections. In the case of a client sending a request to the server, the connection is already in place and the response is sent back through the same existing connection. There is no new connection, so the network policy can't block the egress. However, if we create a connection from the server, say by running the ping command to ping the client one pod, we can see that there is no response. Contrast that with pinging client two, where we do get a response. To have a clear picture of how network policies work, it is important to remember how multiple policies are combined and that the rules apply to creating connections, not to traffic sent over established connections. That's all for this lesson on Kubernetes networking. We began with a review of basic networking principles. Then we paid some extra attention to the different types of services available in Kubernetes. Lastly, we discussed network policies and how they can restrict the traffic that is allowed to and from pods. In the next lesson, you will continue with the theme of security in Kubernetes and learn about service accounts. Start the next lesson when you are ready.
Logan has been involved in software development and research since 2007 and has been in the cloud since 2012. He is an AWS Certified DevOps Engineer - Professional, AWS Certified Solutions Architect - Professional, Microsoft Certified Azure Solutions Architect Expert, MCSE: Cloud Platform and Infrastructure, Google Cloud Certified Associate Cloud Engineer, Certified Kubernetes Security Specialist (CKS), Certified Kubernetes Administrator (CKA), Certified Kubernetes Application Developer (CKAD), and Certified OpenStack Administrator (COA). He earned his Ph.D. studying design automation and enjoys all things tech.