


Adding Resiliency to Google Cloud Container Engine Clusters (GKE)


Resiliency should be a key component of any production system. GKE provides numerous features for adding resiliency to containerized applications and services deployed on Google Cloud Platform, allowing them to serve traffic efficiently and reliably.

Intended audience

This course is for developers and operations engineers looking to expand their knowledge of GKE beyond the basics and start deploying resilient, production-quality containerized applications and services.


Viewers should have a good working knowledge of creating GKE container clusters and deploying images to those clusters.

Learning objectives

  • Understand the core concepts that make up resiliency.
  • Maintain availability when updating a cluster.
  • Make an existing cluster scalable.
  • Monitor running clusters and their nodes.

This Course Includes

  • 60 minutes of high-definition video
  • Hands-on demos

What You'll Learn

  • Course Intro: What to expect from this course
  • Resiliency Defined: The key components of resiliency.
  • Cluster Management: In this lesson, we’ll cover Rolling Updates, Resizing a Cluster, and Multi-zone Clusters.
  • Scalability: The topics covered in this lesson include Load Balancing Traffic and Autoscaling a Cluster.
  • Container Operations: In this lesson, we’ll demo Stackdriver monitoring and take a look at the Kubernetes dashboard.
  • Summary: A wrap-up and summary of what we’ve learned in this course.
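
For reference, the cluster-management and scalability operations covered in these lessons map to a handful of gcloud and kubectl commands. The following is a minimal sketch assuming a hypothetical cluster named `demo-cluster` in zone `us-central1-a` and a deployment named `web`; flag names reflect current gcloud releases and may differ slightly from the Container Engine era shown in the videos.

```shell
# Trigger a rolling update by changing a deployment's image,
# then watch the rollout complete (hypothetical image name).
kubectl set image deployment/web web=gcr.io/my-project/web:v2
kubectl rollout status deployment/web

# Manually resize the cluster's default node pool to 5 nodes.
gcloud container clusters resize demo-cluster \
    --zone us-central1-a --num-nodes 5

# Create a cluster whose nodes span multiple zones in a region.
gcloud container clusters create demo-cluster \
    --zone us-central1-a \
    --node-locations us-central1-a,us-central1-b,us-central1-c

# Expose the deployment behind a Google Cloud load balancer.
kubectl expose deployment web --type=LoadBalancer --port 80
```

Each command requires an authenticated Cloud SDK and an active GCP project; the hands-on demos walk through the same operations.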


So this concludes our course on adding resiliency to Google Cloud Container Engine clusters. As a brief recap: in this course, we discussed some of the core concepts and definitions that make up resiliency, including scalability.

We talked about how to maintain availability when updating or changing a cluster in several different ways. We talked about how to make an existing cluster scalable, both manually and by using the autoscaler with bounds. We also talked about the operations side of things: how to monitor running clusters and their nodes using Stackdriver as well as the Kubernetes dashboard.
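
As a concrete sketch of the autoscaling-with-bounds and monitoring steps recapped above (cluster, node-pool, and deployment names are hypothetical; flags reflect current gcloud/kubectl releases):

```shell
# Enable the cluster autoscaler on the default node pool,
# bounded between 2 and 10 nodes.
gcloud container clusters update demo-cluster \
    --zone us-central1-a --node-pool default-pool \
    --enable-autoscaling --min-nodes 2 --max-nodes 10

# Autoscale pods as well: target roughly 80% average CPU,
# keeping between 2 and 10 replicas of the deployment.
kubectl autoscale deployment web --min=2 --max=10 --cpu-percent=80

# Open a local proxy to reach the Kubernetes dashboard;
# node and cluster metrics also surface in Stackdriver.
kubectl proxy
```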

If you have any feedback or questions, please leave them in the comments section on the course landing page, and thank you again for attending this course.

About the Author

Steve is a consulting technology leader for Slalom Atlanta, a Microsoft Regional Director, and a Google Certified Cloud Architect. His focus for the past 5+ years has been IT modernization and cloud adoption with implementations across Microsoft Azure, Google Cloud Platform, AWS, and numerous hybrid/private cloud platforms. Outside of work, Steve is an avid outdoorsman spending as much time as possible outside hiking, hunting, and fishing with his family of five.