
GKE Infrastructure

Overview
Difficulty: Intermediate
Duration: 42m
Students: 344

Description

Course Description:

GKE is Google’s container management and orchestration service that allows you to run your containers at scale. GKE provides a managed Kubernetes infrastructure that gives you the power of Kubernetes while allowing you to focus on deploying and running your container clusters.

Intended audience:

  • This course is for developers or operations engineers looking to deploy containerized applications and/or services on Google’s Cloud Platform.
  • Viewers should have a basic understanding of containers. Some familiarity with the Google Cloud Platform will also be helpful but is not required.

Learning Objectives:

  • Describe the containerization functionality available on Google Cloud Platform.
  • Create and manage a container cluster in GKE.
  • Deploy your application to a GKE container cluster.

This Course Includes:

  • 45 minutes of high-definition video
  • Hands-on demos

What You'll Learn:

  • Course Intro: What to expect from this course
  • GKE Platform: In this lesson, we’ll start digging into Docker, Kubernetes, and the CLI.
  • GKE Infrastructure: Now that we understand a bit more about the platform, we’ll get into Docker images and Kubernetes orchestration.
  • Cluster Creation: In this lesson, we’ll walk through the steps necessary to create a GKE cluster.
  • Application and Service Publication: In this live demo we’ll create a Kubernetes pod.
  • Cluster Management: In this lesson, we’ll discuss how to update a cluster and manage access rights.
  • Summary: A wrap-up and summary of what we’ve learned in this course.

Transcript

Hello, and welcome back. In this section we're going to talk about GKE Infrastructure. There are two main infrastructure pieces, or concepts, that make up GKE: Docker images and Kubernetes are at the core of what makes GKE such a successful and versatile containerization platform.

Docker builds images automatically by reading the configuration commands in a Dockerfile. A Dockerfile is essentially configuration as code that describes the what, where, and how for your container. This configuration includes the base image to start from, networking details, which applications or services to deploy to the container, and what to run once your container initializes.
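As a minimal sketch of what such a Dockerfile might look like (the base image, port, and start command here are hypothetical examples, not taken from the course):

```dockerfile
# Base image to start from (hypothetical Node.js runtime)
FROM node:18-alpine

# Where the application lives inside the container
WORKDIR /app

# Deploy the application or service into the container
COPY . .
RUN npm install --production

# Networking detail: the port this container listens on
EXPOSE 8080

# What to run once the container initializes
CMD ["node", "server.js"]
```

Running `docker build` against a directory containing this file produces an image that GKE can then schedule and run.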

Orchestration is at the core of what Kubernetes is built for, and some top-level concepts of orchestration include Pods, Nodes, and the Master. Pods are the smallest deployable units; they can be created, scheduled, and managed. A Pod is a logical collection of containers that belong to an application, and each Pod is meant to run a single application. Nodes are workers that run tasks as delegated by the Master.

Nodes can run one or more Pods, and a Node provides an application-specific virtual host in a containerized environment. At the top level, the Master is the central control point that provides a unified view of the cluster, and a single Master node controls multiple child Nodes. Digging a little deeper, we get some additional Kubernetes concepts, like Replication Controllers, and a little more functionality on the Master.
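To make the Pod concept concrete, here is a rough sketch of a single-application Pod manifest; the names, labels, and image are hypothetical and not from the course:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-app              # hypothetical Pod name
  labels:
    app: hello                 # label other resources can use to select this Pod
spec:
  containers:
    - name: hello-container    # the single application container in this Pod
      image: gcr.io/my-project/hello-app:1.0   # hypothetical image location
      ports:
        - containerPort: 8080  # port the application listens on
```

The Master schedules a Pod like this onto one of the worker Nodes in the cluster.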

A Replication Controller is a resource at the Master that ensures the requested number of Pods is running on Nodes at all times. In addition to that, the Master also serves RESTful Kubernetes APIs that validate and configure Pods, Services, and Replication Controllers. Going in depth into Stackdriver and GKE is out of scope for this course, but it does deserve a mention so that you know it's available. Google Stackdriver provides extremely approachable and powerful monitoring, logging, and diagnostics. It gives you visibility into the health, performance, and availability of your containerized deployment, enabling you to resolve bugs and performance issues faster.
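As a rough sketch of how that looks in practice, a Replication Controller is declared with a replica count, a selector, and a Pod template, and the Master then keeps that many Pods running. The names and image below are hypothetical:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: hello-rc               # hypothetical controller name
spec:
  replicas: 3                  # the Master keeps three copies of the Pod running
  selector:
    app: hello                 # Pods carrying this label are managed by the controller
  template:                    # Pod template used to create or replace Pods
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello-container
          image: gcr.io/my-project/hello-app:1.0   # hypothetical image location
          ports:
            - containerPort: 8080
```

If a Node fails or a Pod is deleted, the controller creates a replacement so the cluster converges back to the requested count.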

Stackdriver is totally integrated into Google's Cloud Platform, so it's very friction-free from an adoption standpoint. In the next course, Adding Resiliency to GKE, we'll go into much more detail around logging and monitoring our containerized deployments. In the next section, we're going to start actually creating our cluster and dig into the tools and processes that go into that.

About the Author

Students: 378
Courses: 2

Steve is a consulting technology leader for Slalom Atlanta, a Microsoft Regional Director, and a Google Certified Cloud Architect. His focus for the past 5+ years has been IT modernization and cloud adoption with implementations across Microsoft Azure, Google Cloud Platform, AWS, and numerous hybrid/private cloud platforms. Outside of work, Steve is an avid outdoorsman spending as much time as possible outside hiking, hunting, and fishing with his family of five.