Google Kubernetes Engine (GKE) is Google’s container management and orchestration service for running your containers at scale. GKE provides a managed Kubernetes infrastructure that gives you the power of Kubernetes while allowing you to focus on deploying and running your container clusters.
Intended Audience:
- This course is for developers or operations engineers looking to deploy containerized applications and/or services on Google’s Cloud Platform.
- Viewers should have a basic understanding of containers. Some familiarity with the Google Cloud Platform will also be helpful but is not required.
Learning Objectives:
- Describe the containerization functionality available on Google Cloud Platform.
- Create and manage a container cluster in GKE.
- Deploy your application to a GKE container cluster.
This Course Includes:
- 45 minutes of high-definition video
- Hands-on demos
What You'll Learn:
- Course Intro: What to expect from this course
- GKE Platform: In this lesson, we’ll start digging into Docker, Kubernetes, and the CLI tools.
- GKE Infrastructure: Now that we understand a bit more about the platform, we’ll get into Docker images and Kubernetes (K8s) orchestration.
- Cluster Creation: In this lesson, we’ll walk through the steps necessary to create a GKE cluster.
- Application and Service Publication: In this live demo we’ll create a Kubernetes pod.
- Cluster Management: In this lesson, we’ll discuss cluster updates and rights management.
- Summary: A wrap-up and summary of what we’ve learned in this course.
About the Author
Steve is a consulting technology leader for Slalom Atlanta, a Microsoft Regional Director, and a Google Certified Cloud Architect. His focus for the past 5+ years has been IT modernization and cloud adoption with implementations across Microsoft Azure, Google Cloud Platform, AWS, and numerous hybrid/private cloud platforms. Outside of work, Steve is an avid outdoorsman spending as much time as possible outside hiking, hunting, and fishing with his family of five.
Hello and welcome back. In this next section, we're going to discuss the GKE platform, and the core components that make up that platform that allow for containerization. The first of those is Docker, and what is Docker? So, Docker is a platform that allows us to consolidate our application and run it in a container. Using containers, everything required to make a piece of software run is packaged into an isolated container. This makes for efficient, lightweight, self-contained systems and guarantees that software will run the same way regardless of where it's deployed. And Docker has been gaining market share over the last several years.
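As a concrete illustration of that packaging idea, here is a minimal Dockerfile sketch for a small Node.js web app. The base image, file names, and port are illustrative assumptions, not something from the course:

```dockerfile
# Start from a small base image rather than a full OS install
FROM node:18-alpine

WORKDIR /app

# Install only the dependencies the app needs
COPY package*.json ./
RUN npm install --omit=dev

# Copy in the application code itself
COPY . .

EXPOSE 8080
CMD ["node", "server.js"]
```

Building this with `docker build -t my-app .` produces a self-contained image that runs the same way on a laptop or in a GKE cluster.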
This graphic takes a snapshot of data from Datadog, showing their customers' usage of Docker containers. You can clearly see the trend line moving up over the past several years, and Docker adoption is up 40% in just one year, from 2016 to 2017. Docker has really risen to lead the containerization space by equipping both developers and operators to easily create, manage, and monitor containerized solutions. Docker was also designed in a way that makes it extremely easy to integrate into most DevOps tools and workflows.
Now, unlike VMs, containers do not bundle a full operating system, right? They include only the libraries and settings required to make a specific piece of software work as needed. Containers use shared operating systems and resources, making them much more efficient than fully virtualized hardware. What you're able to do, then, is leave behind the VM junk that you don't need, leaving only a neat little package containing just your application and the dependencies it needs to function correctly.
Before we go any further, I want to talk a little bit about the GKE service itself. Container Engine is a powerful cluster management and orchestration system for running your Docker containers, and it schedules your containers into the cluster and manages them automatically based on requirements you define, such as CPU and memory. It's built on the open-source Kubernetes system, giving you the flexibility to take advantage of on-premises, hybrid, or public cloud infrastructure.
Now, it would certainly be possible to run your own containerization infrastructure on VMs, using GCP Compute Engine. GKE, on the other hand, is really preferred due to all of the management infrastructure you get for free and don't have to worry about. Kubernetes, which we'll talk about in more detail next, is directly integrated with GKE, and provides much of this benefit. GKE takes care of all the infrastructure management, so we get to focus solely on deploying and running our solutions.
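To give a sense of how little infrastructure work this leaves you, standing up a managed cluster is a couple of gcloud commands. This is a sketch only — the project ID, zone, and cluster name are placeholders, and it assumes you've authenticated with `gcloud auth login`:

```shell
# Point gcloud at your project and zone (placeholder values)
gcloud config set project my-gcp-project
gcloud config set compute/zone us-central1-a

# Create a three-node cluster; GKE provisions and manages
# the Kubernetes control plane for us
gcloud container clusters create demo-cluster --num-nodes=3

# Fetch credentials so kubectl can talk to the new cluster
gcloud container clusters get-credentials demo-cluster
```

Compare that to standing up your own Kubernetes control plane on Compute Engine VMs, which you'd then have to patch, upgrade, and monitor yourself.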
So, Kubernetes, which as I mentioned is at the core of GKE, is dubbed a planet-scale containerization system that allows you to run extremely scalable and resilient systems with ease. Kubernetes, known as K8s for short, takes care of the heavy lifting, orchestrating your containers in and out of service as needed based on health and load. K8s also handles everything from horizontal scaling and load balancing to rolling updates and maintaining maximum uptime. So, how do we interact with all of this?
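The scaling and rolling-update behavior described above is driven by declarative manifests. Here's a minimal Deployment sketch — the app name, labels, and image path are illustrative assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3               # K8s keeps three pods running at all times
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate     # replace pods gradually to maintain uptime
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: gcr.io/my-gcp-project/web:1.0   # illustrative image path
        ports:
        - containerPort: 8080
```

You declare the desired state (three healthy replicas), and Kubernetes continuously reconciles the cluster to match it, restarting or rescheduling pods as needed.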
We've got two CLIs that we use as our primary interfaces into GCP Container Engine. gcloud is the first that we'll talk about, and gcloud organizes commands into subgroups for the main functional areas within GCP, like compute, Dataflow, and, of course, containers. The gcloud CLI also provides global commands for tasks like IAM configuration and management, which span multiple functional areas. For GKE specifically, the gcloud CLI is used to do things like create and manage container builds, deploy and tear down containers, list and manipulate registry images, create and delete node pools, and get and list Container Engine clusters.
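A few of those gcloud operations look like this in practice. This is a sketch — the cluster and node-pool names are placeholders, and the commands assume an authenticated gcloud session:

```shell
# Get and list operations for clusters
gcloud container clusters list
gcloud container clusters describe demo-cluster

# List images in the project's container registry
gcloud container images list

# Create and delete node pools on an existing cluster
gcloud container node-pools create high-mem-pool --cluster=demo-cluster
gcloud container node-pools delete high-mem-pool --cluster=demo-cluster
```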
In addition to gcloud, we also have the kubectl CLI. This is the CLI specific to Kubernetes, and it's used when running commands against existing Kubernetes clusters. So, for instance, the kubectl CLI is used for creating objects; viewing and finding resources; and updating, patching, editing, scaling, and deleting those resources. It's also used for interacting with running pods, and for interacting with nodes and clusters. Okay, that's all for this section. In the next section, we're gonna talk about the GKE infrastructure, and these tools are gonna be what we use to successfully deploy our images and run them on GKE.
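Here's a sketch of representative kubectl commands for the operations just listed. Resource names, image paths, and the manifest file are placeholders, and the commands assume kubectl is already pointed at a cluster:

```shell
# Create objects from a manifest
kubectl apply -f deployment.yaml

# View and find resources
kubectl get pods
kubectl describe deployment web

# Scale and update resources
kubectl scale deployment web --replicas=5
kubectl set image deployment/web web=gcr.io/my-gcp-project/web:1.1

# Interact with a running pod
kubectl logs web-abc123
kubectl exec -it web-abc123 -- /bin/sh

# Inspect nodes and the cluster
kubectl get nodes
kubectl cluster-info
```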