Deploying and Implementing Kubernetes Engine Resources
Deploying and Implementing App Engine and Cloud Functions Resources
Deploying and Implementing Data Solutions
Deploying a Solution Using GCP Marketplace
Deploying Resources Using Deployment Manager
This course has been designed to teach you how to deploy and implement Google Cloud Platform solutions. The content in this course will help prepare you for the Associate Cloud Engineer exam.
- To learn how to deploy Kubernetes Engine resources on Google Cloud Platform
- To learn how to deploy and implement App Engine and Cloud Functions resources
- To learn how to use Cloud Launcher and Deployment Manager
- Those who are preparing for the Associate Cloud Engineer exam
- Those looking to learn more about GCP networking and compute features
To get the most from this course, you should have some exposure to GCP resources such as Kubernetes Engine, App Engine, Cloud Functions, Cloud Launcher, and Deployment Manager. However, this is not essential.
About the Author
Tom is a 25+ year veteran of the IT industry, having worked in environments as large as 40k seats and as small as 50 seats. Throughout the course of a long and interesting career, he has built an in-depth skill set that spans numerous IT disciplines. Tom has designed and architected small, large, and global IT solutions.
In addition to the Cloud Platform and Infrastructure MCSE certification, Tom also carries several other Microsoft certifications. His ability to see things from a strategic perspective allows Tom to architect solutions that closely align with business needs.
In his spare time, Tom enjoys camping, fishing, and playing poker.
Kubernetes Engine allows organizations to quickly develop and deploy containerized applications and services. It also makes it easy for organizations to update and manage those applications and services. Although stateless applications immediately come to mind when discussing containerized solutions, Kubernetes Engine isn't just for stateless applications. As a matter of fact, you can run databases in a Kubernetes cluster, and you can even attach persistent storage to a cluster.
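As a sketch of how persistent storage can be attached to workloads in a cluster, a PersistentVolumeClaim manifest like the one below asks GKE to dynamically provision a Compute Engine persistent disk. The claim name and size here are hypothetical:

```yaml
# Hypothetical example: requests a 10 GiB persistent disk
# that a database pod could later mount.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-data        # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce          # one node may mount the volume read-write
  resources:
    requests:
      storage: 10Gi          # GKE provisions the backing disk automatically
```

A stateful workload, such as a database StatefulSet, would then reference this claim in its pod spec to get durable storage that survives pod restarts.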
To leverage a Kubernetes Engine cluster, all you need to do is specify the compute, memory, and storage resources that your app container will need. Once you've done that, Kubernetes Engine will provision and manage the underlying resources automatically.
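In practice, those container requirements are declared as resource requests and limits in the pod specification. A minimal sketch, with hypothetical pod and image names, might look like this:

```yaml
# Hypothetical example: declaring the CPU and memory a container needs.
apiVersion: v1
kind: Pod
metadata:
  name: web-app                              # hypothetical pod name
spec:
  containers:
    - name: web
      image: gcr.io/my-project/web-app:1.0   # hypothetical image
      resources:
        requests:
          cpu: "250m"                        # a quarter of one vCPU
          memory: "256Mi"
        limits:
          cpu: "500m"
          memory: "512Mi"
```

The scheduler uses the requests to place the pod on a node with enough capacity, while the limits cap what the container can consume.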
You can create four types of clusters in Kubernetes Engine: zonal clusters, regional clusters, private clusters, and alpha clusters.
A zonal cluster runs in one or more compute zones within a region. Multi-zone clusters run their nodes across two or more compute zones within a single region. I should also point out that any given zonal cluster will run a single cluster master. Regional clusters run three cluster masters across three different compute zones.
Regional clusters also run nodes in two or more compute zones.
A private cluster can be a zonal cluster or a regional cluster. Either way, a private cluster hides its cluster master and nodes from the public internet by default.
The fourth type of Kubernetes cluster is the alpha cluster. The alpha cluster is an experimental zonal or regional cluster that runs with alpha Kubernetes features enabled.
It's important to note that alpha clusters expire after 30 days. It's also important to note that they cannot be upgraded, nor do they receive security updates. Most importantly, alpha clusters are not supported for production use.
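The cluster types above can be created from the command line as well as from the console. The following gcloud commands are a hedged sketch; the cluster names, locations, and CIDR range are hypothetical, and you should verify the flags against your gcloud version:

```shell
# Zonal cluster: a single cluster master in one compute zone.
gcloud container clusters create my-zonal-cluster \
    --zone us-central1-a

# Regional cluster: three masters, with nodes replicated
# across zones in the region.
gcloud container clusters create my-regional-cluster \
    --region us-central1

# Private cluster: master and nodes are hidden from
# the public internet by default.
gcloud container clusters create my-private-cluster \
    --zone us-central1-a \
    --enable-private-nodes \
    --enable-ip-alias \
    --master-ipv4-cidr 172.16.0.0/28
```

Note that private clusters require VPC-native networking, which is why the private cluster example enables IP aliasing.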
When you use the GCP console to create a Kubernetes cluster, you'll see all of the cluster templates that are available to you. The Standard template is selected by default.
The complete selection of templates includes the Standard cluster, Your first cluster, CPU intensive applications, Memory intensive applications, GPU accelerated computing, and Highly available.
As I mentioned, the Standard cluster template is the default selection.
Your first cluster is a small cluster that runs less powerful nodes. Some advanced features like autoscaling are disabled.
The CPU intensive applications template creates a cluster with nodes that offer more powerful multi-core CPUs than a standard cluster.
The Memory intensive applications template creates a cluster with moderately powerful multi-core CPUs and a large amount of memory.
GPU accelerated computing clusters feature a default node pool that's configured with less powerful nodes, along with an additional GPU-enabled node pool. Autoscaling is disabled by default on clusters built from the GPU accelerated computing template.
The Highly available template creates a cluster that's configured as a regional cluster. Cluster masters are available in each zone of a given region. Unlike the GPU accelerated computing cluster, autoscaling on a Highly available cluster is enabled by default. In addition, a maintenance window is enabled.
In the next lesson, I'll show you how to deploy a Kubernetes Engine cluster.