
Demonstration

Contents

  • Introduction (2m 36s)
  • Clusters
  • Configuration
  • Workloads (6m 30s)
  • Storage (6m 59s)
  • Security (4m 11s)
  • Demo
Overview

Difficulty: Intermediate
Duration: 55m
Students: 206
Rating: 4.6/5

Description

Kubernetes has become one of the most common container orchestration platforms. It has regular releases, a wide range of features, and is highly extensible. Managing a Kubernetes cluster requires a lot of domain knowledge, which is why services such as GKE exist. Certain aspects of a Kubernetes cluster vary based on the underlying implementation.

In this course, we’ll explore some of the ways that GKE implements a Kubernetes cluster. Having a basic understanding of how things are implemented will set the stage for further learning.

Learning Objectives

  • Learn how Google implements a Kubernetes cluster
  • Learn how GKE implements networking
  • Learn how GKE implements logging and monitoring
  • Learn how to scale both nodes and pods

Intended Audience

  • Engineers looking to understand basic GKE functionality

Prerequisites

To get the most out of this course, you should have a general knowledge of GCP, Kubernetes, Docker, and high availability.

Transcript

Hello and welcome. In this lesson, we're going to use the console to review some of the functionality that we've covered throughout the course, as well as some that we haven't yet seen.

I have a single-zone, three-node cluster already created. This page shows us the cluster-level configuration settings. Expanding the add-ons section at the bottom shows us the cluster-level add-ons, including the HTTP load balancer add-on. The bottom of this page displays the node pools, and we only have the default pool, so it's the only one showing here. It's running three nodes, each running Container-Optimized OS (COS). By clicking into the pool, we can drill down into specific pool-related information, including all of the nodes.
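The same cluster and node pool details can be pulled from the command line. A minimal sketch, assuming a cluster named `my-cluster` in zone `us-central1-a` (both names are hypothetical placeholders):

```shell
# Show cluster-level configuration, including enabled add-ons
gcloud container clusters describe my-cluster --zone us-central1-a

# List the node pools for the cluster (here, just the default pool)
gcloud container node-pools list --cluster my-cluster --zone us-central1-a
```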

Back on the clusters page, we can view the storage and nodes for the entire cluster using the corresponding tabs. This allows us to see all of our nodes at the cluster level, rather than at the pool level. Recall that we can deploy workloads in the console, though we're limited to deployments. Let's use the console to deploy a pod with a single NGINX container. NGINX is a web server, and it happens to be the default image listed in the console, which makes it easy to demonstrate because once it's running, we can check that it's working by attempting to view the landing page. With this deployed, we have three pod replicas, all running NGINX.
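The console deployment has a CLI equivalent. A sketch, assuming kubectl is already authenticated to the cluster and using `nginx` as the deployment name (the name is an assumption; `--replicas` on create requires a reasonably recent kubectl, otherwise follow up with `kubectl scale`):

```shell
# Create a deployment with three pod replicas, each running a single nginx container
kubectl create deployment nginx --image=nginx --replicas=3

# Verify the deployment and its three pods
kubectl get deployment nginx
kubectl get pods -l app=nginx
```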

Let's check out some of the actions that we can perform with this deployment. Using the actions link at the top, we could enable horizontal pod autoscaling, which requires us to set a minimum and maximum number of replicas as well as the metrics to scale on. Since pods use ephemeral IP addresses, we need to create a service in order to have a reliable IP address for our deployment.
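Horizontal pod autoscaling can also be enabled from the command line. A sketch, assuming the `nginx` deployment from earlier and CPU-based scaling; the minimum, maximum, and CPU threshold below are arbitrary example values:

```shell
# Scale between 1 and 5 replicas, targeting 80% average CPU utilization
kubectl autoscale deployment nginx --min=1 --max=5 --cpu-percent=80

# Inspect the resulting HorizontalPodAutoscaler
kubectl get hpa nginx
```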

So let's do that: let's create a service by clicking on the actions link and selecting expose. We'll set the service type to load balancer and create it. Now that it's created, we can inspect it on the services page. Clicking on the link shows the NGINX landing page, which tells us the service is working and routing our traffic to our pods. What we've created so far is a deployment with three pods, all running NGINX.
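The expose action maps to a single kubectl command. A sketch, again assuming the hypothetical `nginx` deployment; on GKE, creating a LoadBalancer service provisions an external load balancer, so the external IP can take a minute or two to appear:

```shell
# Expose the deployment behind a LoadBalancer service on port 80
kubectl expose deployment nginx --type=LoadBalancer --port=80

# Watch until the external IP is provisioned, then browse to it
kubectl get service nginx --watch
```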

We have a load balancer service, which created an external Layer 4 load balancer that provides us with a stable IP address for our deployment. Since NGINX is a web server, it speaks HTTP, which means we could benefit from an HTTP load balancer. Recall that GKE exposes the HTTP load balancer through an Ingress resource, which uses the HTTP load balancer add-on. And while we're not going to create one now, I did want to show it in the UI. The configuration page shows us the ConfigMaps and Secrets for the cluster, the storage page shows us PersistentVolumes and StorageClasses, and the object browser shows us information about Kubernetes objects.
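Although we don't create one in the demo, an Ingress backed by GKE's HTTP load balancer add-on looks roughly like this. A sketch, assuming the hypothetical `nginx` service from earlier is serving on port 80:

```shell
# Create an Ingress that routes all traffic to the nginx service;
# with the HTTP load balancer add-on enabled, GKE provisions an
# external HTTP load balancer for it
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  defaultBackend:
    service:
      name: nginx
      port:
        number: 80
EOF
```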

Back on the cluster page, under the cluster detail information, we can manually upgrade the cluster master by clicking the link and selecting the version, which, if you recall, ignores our maintenance window. The two links on the side here for applications and marketplace both open the same overlay. This allows us to launch containerized applications onto our cluster. These can be either Google-provided or third-party; some are free, some are paid. The idea is that it makes it easier for us to deploy new functionality to our clusters by using existing applications.
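The manual master upgrade can also be triggered from the CLI. A sketch using the same hypothetical cluster name and zone; the version string below is a placeholder you'd replace with one actually returned for your zone:

```shell
# List the valid master versions for the zone
gcloud container get-server-config --zone us-central1-a

# Upgrade the cluster master to a chosen version (placeholder shown)
gcloud container clusters upgrade my-cluster --master \
  --cluster-version 1.27.3-gke.100 --zone us-central1-a
```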

I created this cluster through the console, so what if I want to use kubectl to interact with it now? To do that, we have to authenticate. The console provides us with the exact command and parameters to use to authenticate to the cluster. Running a kubectl command while unauthenticated throws an error. After running the command we just copied, we can successfully run commands against the cluster as the gcloud user that we're currently authenticated as.
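The command the console hands us is `gcloud container clusters get-credentials`. A sketch, assuming the same hypothetical cluster name and zone:

```shell
# Fetch cluster credentials and write an entry into ~/.kube/config
gcloud container clusters get-credentials my-cluster --zone us-central1-a

# kubectl commands now run against the cluster as the authenticated gcloud user
kubectl get nodes
```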

We can view the different GKE logs in the log viewer, and like other logs, we can search and filter them. And we can view monitoring info inside of Stackdriver by viewing the Kubernetes dashboard. The dashboard allows us to drill down into additional info, starting at the infrastructure, workload, or service levels.
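The same GKE logs can be queried from the CLI. A sketch using the `k8s_container` resource type; on older clusters the legacy `container` resource type may apply instead:

```shell
# Read the ten most recent container log entries from the cluster
gcloud logging read 'resource.type="k8s_container"' --limit 10
```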

All right, we're going to start to wrap up here; we're not going to go into much more depth. The idea behind this demo was just to further solidify some of the things we've learned so far by seeing them in use. We've covered a lot throughout this course, and I hope this has given you a solid baseline for further study. If you haven't yet deployed workloads to a cluster, I recommend creating a small cluster and deploying something. Take care to pay attention to the number of nodes and the size of the nodes. Continuous learning in technology takes a lot of effort, so kudos to you for putting in the effort. I had a lot of fun creating this course, and I hope you've enjoyed it. Thank you so very much for watching, and I will see you in another course!

About the Author

Students: 47,663
Courses: 17
Learning paths: 18

Ben Lambert is a software engineer and was previously the lead author for DevOps and Microsoft Azure training content at Cloud Academy. His courses and learning paths covered Cloud Ecosystem technologies such as DC/OS, configuration management tools, and containers. As a software engineer, Ben’s experience includes building highly available web and mobile apps. When he’s not building software, he’s hiking, camping, or creating video games.