Adding Resiliency to Google Container Engine (GKE) Clusters
Resiliency should be a key component of any production system. GKE provides numerous features focused on adding resiliency to your deployed containerized applications and services, allowing them to serve your users efficiently and reliably.
This course is for developers or operations engineers looking to expand their knowledge of GKE beyond the basics and start deploying resilient, production-quality containerized applications and services.
Viewers should have a good working knowledge of creating GKE container clusters and deploying images to those clusters.
- Understand the core concepts that make up resiliency.
- Maintain availability when updating a cluster.
- Make an existing cluster scalable.
- Monitor running clusters and their nodes.
This Course Includes:
- 60 minutes of high-definition video
- Hands-on demos
What You'll Learn:
- Course Intro: What to expect from this course
- Resiliency Defined: The key components of resiliency.
- Cluster Management: In this lesson we’ll cover Rolling Updates, Resizing a Cluster, and Multi-zone Clusters.
- Scalability: The topics covered in this lesson include Load Balancing Traffic and Autoscaling a Cluster.
- Container Operations: In this lesson we’ll demo Stackdriver monitoring and take a look at the Kubernetes dashboard.
- Summary: A wrap-up and summary of what we’ve learned in this course.
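As a preview of the Cluster Management lesson, the snippet below sketches how a rolling update might be triggered and observed from the command line. This is an illustrative example only, not taken from the course demos: the Deployment name `web-app`, the project ID, and the image tag are hypothetical, and the commands assume `kubectl` is already authenticated against a GKE cluster.

```shell
# Assumes a Deployment named "web-app" already exists in the cluster
# and kubectl is configured to talk to your GKE cluster.

# Point the container at a new image tag; Kubernetes replaces pods
# incrementally, so some replicas keep serving traffic throughout.
kubectl set image deployment/web-app web-app=gcr.io/my-project/web-app:1.1

# Watch the rollout until every replica is updated and healthy.
kubectl rollout status deployment/web-app

# If the new version misbehaves, roll back to the previous revision.
kubectl rollout undo deployment/web-app
```

Because pods are replaced a few at a time rather than all at once, the service stays available for the duration of the update, which is the availability property this course focuses on.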
About the Author
Steve is a consulting technology leader for Slalom Atlanta, a Microsoft Regional Director, and a Google Certified Cloud Architect. His focus for the past 5+ years has been IT modernization and cloud adoption with implementations across Microsoft Azure, Google Cloud Platform, AWS, and numerous hybrid/private cloud platforms. Outside of work, Steve is an avid outdoorsman spending as much time as possible outside hiking, hunting, and fishing with his family of five.
Hello, and welcome to the Adding Resiliency to Google Container Engine Clusters course, from CloudAcademy. Resiliency should be a key component of any production system, and GKE provides numerous features focused on adding resiliency to your deployed containerized applications and services, allowing them to serve your users efficiently and reliably.
First, a brief introduction. My name is Steve Porter, and I'm currently a practice lead and cloud architect. You can learn more about me via my LinkedIn profile at the link below.
Okay, so moving on, who should attend this course? This course is designed to educate developers and operations engineers on the resiliency aspects of Google Container Engine, and it builds upon the basics of setting up clusters and deploying your applications and services to those clusters. Coming into this course, you should have a good working knowledge of containerization in general, and some exposure to deploying workloads to GKE specifically.
For this course we're gonna go through several topics.
The course is going to start out by talking about what resiliency and scalability really mean. We will then talk through all of the Container Engine functionality within GCP, as well as Kubernetes, that really enables high levels of resiliency and scalability.
Building upon that, this course will walk you through the various concepts, practices, and tools that you have at your disposal, to increase the resiliency of your container engine deployments. And specifically, we're gonna go through and understand the core concepts that make up resiliency, learn how to maintain availability when updating a cluster, learn how to make an existing cluster scalable, and then we'll learn how to monitor those running clusters and their nodes, so you can react appropriately.
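To give a concrete flavor of the scalability tools covered later, here is a hedged sketch of how an existing cluster might be made scalable with the `gcloud` CLI. The cluster name `demo-cluster`, the zone, and the node counts are hypothetical placeholders; the exact flags available depend on your gcloud SDK version, so treat this as a sketch rather than a definitive recipe.

```shell
# Hypothetical example: enable node autoscaling on an existing
# cluster's default node pool, letting GKE add or remove nodes
# as workload demand changes.
gcloud container clusters update demo-cluster \
  --zone us-central1-a \
  --enable-autoscaling --min-nodes 3 --max-nodes 10

# Alternatively, manually resize the cluster when you know the
# expected load ahead of time.
gcloud container clusters resize demo-cluster \
  --zone us-central1-a --num-nodes 5
```

Autoscaling handles unpredictable demand automatically, while a manual resize gives you direct control; both approaches are discussed in the Scalability lesson.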
Okay, so in the next section we're gonna go through and define what resiliency really means. We hope you enjoy this course, and please feel free to provide any feedback by emailing email@example.com.