
Create K8s Network Policy Resources

The course is part of this learning path

Building and Deploying a Cloud Native Application
Overview
Difficulty: Advanced
Duration: 1h 26m
Students: 14
Rating: 5/5

Description

Introduction

This training course is designed to help you master the skills of deploying cloud native applications into Kubernetes.

Observe firsthand the end-to-end process of deploying a sample cloud native application into a Kubernetes cluster. By taking this course you'll not only see the skills required to perform a robust, enterprise-grade deployment into Kubernetes, but you'll also be able to apply them yourself, as all code and deployment assets are available for you to perform your own deployment:

https://github.com/cloudacademy/voteapp-frontend-react
https://github.com/cloudacademy/voteapp-api-go
https://github.com/cloudacademy/voteapp-k8s

Kubernetes Resources

This training course provides you with in-depth coverage and demonstrations of the following Kubernetes resources:

  1. Namespace
  2. Deployment/ReplicaSet
  3. Pod
  4. Service
  5. Ingress/Ingress Controller
  6. StatefulSet
    1. Persistent Volume
    2. Persistent Volume Claim
    3. Headless Service
  7. NetworkPolicy

Learning Objectives

What you'll learn:

  • Learn and understand the basic principles of deploying cloud native applications into a Kubernetes cluster
  • Understand how to set up and configure a locally provisioned Kubernetes cluster using Minikube
  • Understand how to work with and configure many of the key Kubernetes cluster resources, such as Pods, Deployments, and Services
  • And finally, you'll learn how to manage deployments and Kubernetes cluster resources through their full lifecycle.

Demonstration

This training course provides you with many hands-on demonstrations where you will observe firsthand how to:

  • Create and provision a Minikube Kubernetes cluster
  • Install the Cilium CNI plugin
  • Build and deploy Docker containers
  • Create and configure Kubernetes resources using kubectl

Prerequisites

  • A basic understanding of containers and containerisation
  • A basic understanding of software development and the software development life cycle
  • A basic understanding of networks and networking

Intended Audience

  • Anyone interested in learning Kubernetes
  • Software Developers interested in Kubernetes containerisation, orchestration, and scheduling
  • DevOps Practitioners

Transcript

- [Instructor] Okay, welcome back. If you recall, when we first provisioned our Minikube hosted Kubernetes cluster, we set it up to use CNI, and in particular the Cilium CNI implementation.

In this lecture we're going to create a set of Kubernetes resources that provide network ingress controls. We'll create a default deny-all network policy, followed by a Mongo-to-Mongo allow policy, and finally an API-to-Mongo allow policy. With these network policies in place, we can control and authorize the network traffic sent between pods. The Cilium CNI plugin supports network policies defined using layer 4 and/or layer 7 based rules.

In this demonstration we'll only be using layer 4 rules; more specifically, our rules will whitelist ingress network traffic based on pod label selectors. I'll explain this in more detail towards the end of the lecture. Okay, to start with, let's jump into the voteapp Kubernetes directory and again display the contents using the tree command. We'll first apply the default deny-all network policy. This will prevent all pod-to-pod network traffic, but only within the cloudacademy namespace, which is the namespace where all of our application pods exist. When we do this, the application will cease to function properly until we add in the remaining network policies. Okay, let's begin. I'll run the following command: kubectl apply -f, and I'll give it the default deny-all policy. As just mentioned, applying this policy breaks the application for various reasons.

To see this firsthand, let's jump back into the browser and reload the application. As expected, the application is unable to render the programming details for each language. Why is this so? Let's now reopen developer tools, and again filter on the AJAX traffic generated for the initial page load. Here we can see clearly that all 3 AJAX requests are in a pending state, and no response has been returned. This is a direct result of the default deny-all network policy that we just applied within the cloudacademy namespace.

Okay, let's now fix the application by applying all of the network policies contained within the netpol directory. I'll run the command: kubectl apply -f, and pass in the netpol directory name. Right, if we now retest from within the browser, it should all work again, as it does. This indicates that we are now successfully controlling traffic between pods within our cloudacademy namespace. This is an extremely useful security feature, useful not just with multi-tenanted clusters, but whenever you need to vet the traffic between pods within a single namespace or across namespaces.

Now before we finish, let's take a closer look at each of the 3 network policies that we just applied. I'll first clear the terminal. Next I'll use the vim editor and open the default deny-all policy like so. Here we can see that it has an empty pod selector, and a policy type of ingress only. This creates a default isolation policy in the cloudacademy namespace by selecting all pods and not allowing any ingress traffic to those pods. Note, this policy does not change the default egress behavior.

Okay, let's now close this policy, and next I'll open the Mongo-to-Mongo allow policy. Here we can see that the pod selector matches on role equal to db, and that the ingress pod selector also matches on role equal to db. This creates our policy which allows the 3 Mongo replicas to talk to each other, which is a requirement for replicating the data amongst themselves.

And finally, I'll open the API-to-Mongo allow policy. Here we can see that the pod selector matches on role equal to db, and that the ingress pod selector matches on role equal to api. This creates our policy which allows ingress traffic sent from the API pods into the MongoDB replica pods. Now, you may be wondering why there isn't a network policy explicitly defined to control traffic from the frontend pods into the API pods. Well, actually the frontend pods don't communicate directly with the API pods; remember that it is the browser which makes the AJAX requests to the API pods, through the Nginx Ingress Controller.

Okay, that completes this lecture. Our networking security posture has been strengthened, which is a bonus. Let's now move on and cover some useful troubleshooting tips for creating and deploying network policies.

About the Author


Jeremy is the DevOps Content Lead at Cloud Academy where he specializes in developing technical training documentation for DevOps.

He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 20+ years. In recent times, Jeremy has been focused on DevOps, Cloud, Security, and Machine Learning.

Jeremy holds professional certifications for both the AWS and GCP cloud platforms.