
Networking

Contents

  • Introduction (2m 36s)
  • Clusters
  • Configuration
  • Workloads (6m 30s)
  • Storage (6m 59s)
  • Security (4m 11s)
  • Demo
Overview
Difficulty: Intermediate
Duration: 55m
Students: 201
Rating: 4.6/5

Description

Kubernetes has become one of the most common container orchestration platforms. It has regular releases, a wide range of features, and is highly extensible. Managing a Kubernetes cluster requires a lot of domain knowledge, which is why services such as GKE exist. Certain aspects of a Kubernetes cluster vary based on the underlying implementation.

In this course, we’ll explore some of the ways that GKE implements a Kubernetes cluster. Having a basic understanding of how things are implemented will set the stage for further learning.

Learning Objectives

  • Learn how Google implements a Kubernetes cluster
  • Learn how GKE implements networking
  • Learn how GKE implements logging and monitoring
  • Learn how to scale both nodes and pods

Intended Audience

  • Engineers looking to understand basic GKE functionality

Prerequisites

To get the most out of this course, you should have a general knowledge of GCP, Kubernetes, Docker, and high availability.

Transcript

Hello and welcome. In this lesson, we'll be talking about GKE networking, specifically, VPC-native networking. By the end of this lesson, you'll be able to describe how a Kubernetes service abstracts a group of pods, you'll be able to list the types of services, you'll be able to describe how to enable HTTP load balancing, and you'll be able to describe how to perform service discovery with DNS.

At cluster creation, VPC-native clusters require an empty subnet and two secondary IP address ranges. The subnet is referred to as the node subnet because node IP addresses are drawn from it. The two secondary ranges are used for pods and services, allowing GKE to allocate IP addresses for our pods and services from those ranges.
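As a rough sketch of what that looks like at creation time (the cluster, region, subnet, and range names here are placeholders, not values from the course):

    # Create a VPC-native cluster: --enable-ip-alias selects VPC-native networking,
    # and the two named secondary ranges on the subnet are used for pods and services.
    gcloud container clusters create demo-cluster \
        --region us-central1 \
        --enable-ip-alias \
        --subnetwork demo-subnet \
        --cluster-secondary-range-name pods-range \
        --services-secondary-range-name services-range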

By default, Google will manage the secondary IP address ranges; however, we can optionally manage them ourselves if we need to. Recall that pods are the smallest unit of deployment within a Kubernetes cluster. They're our primary Kubernetes building block. When a pod is created, it gets an ephemeral IP address, which is bound to the pod's virtual network interface. Because the containers running inside of a pod share the same networking namespace, they can communicate with each other over localhost.
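Here's a minimal sketch of that shared namespace in action (the pod name and images are illustrative, not from the course):

    # Two containers in a single pod share one network namespace,
    # so the sidecar can reach nginx on localhost.
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: localhost-demo
    spec:
      containers:
      - name: web
        image: nginx
      - name: sidecar
        image: busybox:1.36
        command: ["sh", "-c", "while true; do wget -qO- http://localhost:80 >/dev/null && echo ok; sleep 10; done"]
    EOF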

So once a pod is created, it has an IP address and it's allowed to talk to all of the other pods on the network. By default, pod-to-pod communication is unfiltered. Now, with all of these different resources consuming IP addresses, there are some potential constraints, because the number of available addresses is finite.

Now, to help manage IP usage, GKE sets a built-in limit of 110 pods per node, and it allocates 256 IP addresses (a /24 range) to each node to use for its pods. Roughly, the goal is to have about twice as many IP addresses per node as the maximum number of pods, which allows pods to be created and replaced without any concern about colliding with addresses already in use.
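A quick sketch of the arithmetic and the knob involved (the cluster name is a placeholder, and the exact CIDR-to-limit mapping below is my reading of GKE's documented behavior rather than something stated in the course):

    # Default: up to 110 pods per node -> each node gets a /24 (256 addresses),
    # satisfying the "roughly 2x max pods" rule (2 x 110 = 220 <= 256).
    # Lowering the limit shrinks each node's pod range (e.g. 32 pods -> /26, 64 IPs),
    # leaving room for more nodes within the pod secondary range.
    gcloud container clusters create demo-cluster \
        --enable-ip-alias \
        --default-max-pods-per-node 32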

GKE does make it possible for us to specify a smaller limit, which in turn allows for more nodes in the cluster. So when pods are created they get an IP address, though these IP addresses are ephemeral. Pods will be created and removed as needed, meaning a pod's IP address is not a reliable means of interacting with pods over the network.

Kubernetes solves this with services. A service is an entry point for a group of pods. It works roughly like this: we create a deployment where we specify that the pods carry the label app with the value web. Then we create a service and specify that it consists of all pods that have the label app=web, and that allows the service to become an entry point for all of those pods, as in the sketch below.
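A minimal sketch of that pattern (the names and image are placeholders):

    # A deployment whose pods carry the label app=web, and a service
    # that selects those same pods by that label.
    kubectl apply -f - <<'EOF'
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
          - name: web
            image: nginx
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: web
    spec:
      selector:
        app: web
      ports:
      - port: 80
        targetPort: 80
    EOF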

A service provides a reliable IP address that can be used to interact with the specified pods. The IP address is pulled from the service IP address range and remains for the lifecycle of the service object. So unlike a pod's ephemeral IP address, a service has a guaranteed IP address that can be used to interact with a group of pods based on their label. The service doesn't need to know about the specific pods; it just needs to know what it's targeting based on the label.

There are multiple types of service, all of which serve as an entry point for interacting with pods over the network. The difference between them is how the service is going to be consumed. Some of the different types include ClusterIP, NodePort, and LoadBalancer. The ClusterIP type allocates an IP address from the service range that's reachable from any node inside of the cluster, so it's a single entry point for all of the pods that comprise the service, accessible from anywhere in the cluster. The NodePort type exposes the service on each node at a specified port, and it's only accessible from inside the network. It allows us to bind a specific port on each of our nodes and then access the service through a node's IP address on that port. The LoadBalancer type exposes the service using GCP's load balancers, which include internal and external network load balancers. The external variety provides an entry point from outside of our cluster to our service.
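As a sketch, the type field is the only thing that changes across these variants (omitting it gives you the default, ClusterIP); the name here is a placeholder:

    # Exposing the same app=web pods through a GCP network load balancer.
    # Swap type to NodePort, or omit it entirely for ClusterIP.
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Service
    metadata:
      name: web-external
    spec:
      type: LoadBalancer
      selector:
        app: web
      ports:
      - port: 80
        targetPort: 80
    EOF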

Using something like the LoadBalancer type covers a lot of use cases, though it is a layer 4 load balancer, which means there's no application-specific functionality like what you would see in an HTTP load balancer. Kubernetes does support HTTP load balancing with the use of a higher-level abstraction called an ingress resource. Ingress resources provide layer 7 HTTP functionality such as TLS termination, URL path-based routing, etc.

The GKE ingress implementation uses the HTTP load balancer. In order to use it, the cluster needs to have the HTTP load balancing add-on enabled, which GKE does by default; if you don't have it enabled already, you can enable it at any time, as sketched below.
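A hedged sketch of both steps, reusing the placeholder cluster and the web service from earlier (note that a GKE ingress also needs its backing service to be reachable by the load balancer, e.g. via NodePort or container-native load balancing):

    # Re-enable the HTTP load balancing add-on if it was turned off.
    gcloud container clusters update demo-cluster \
        --update-addons HttpLoadBalancing=ENABLED

    # A minimal ingress that routes all paths to the web service on port 80.
    kubectl apply -f - <<'EOF'
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: web-ingress
    spec:
      rules:
      - http:
          paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 80
    EOF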

When a service is created, it's given a unique hostname. Kubernetes generates these using a set of rules, and we can leverage these hostnames to help with service discovery. GKE provides a managed DNS add-on. The DNS service, named kube-dns, runs as a deployment that's exposed as a service, and the pods that we create are configured to use kube-dns as their name server. Because Kubernetes generates these service names for us, we're able to use the hostnames to interact with different services without hard-coding IP addresses, using DNS to look them up. If you're going to be building applications that run on Kubernetes, it's probably worth learning a bit more about service discovery with DNS.
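As a quick taste of that, service hostnames follow the pattern <service>.<namespace>.svc.cluster.local, and you can resolve one from inside the cluster like this (web is the placeholder service from earlier):

    # Run a throwaway pod and resolve the web service by its generated DNS name.
    kubectl run dns-test --rm -it --image=busybox:1.36 --restart=Never \
        -- nslookup web.default.svc.cluster.local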

Alright, that's going to wrap up this lesson. Thank you so much for watching and I will see you in the next lesson.

About the Author

Students: 47,542
Courses: 17
Learning paths: 18

Ben Lambert is a software engineer and was previously the lead author for DevOps and Microsoft Azure training content at Cloud Academy. His courses and learning paths covered Cloud Ecosystem technologies such as DC/OS, configuration management tools, and containers. As a software engineer, Ben’s experience includes building highly available web and mobile apps. When he’s not building software, he’s hiking, camping, or creating video games.