Section 1: Compute Resources
Section 2: Storage Options
Section 3: Networking Services
Google Cloud Platform has become one of the premier cloud providers on the market. It offers a rich catalog of services and massive global hardware scale comparable to AWS, as well as a number of Google-specific features and integrations. Getting started with GCP can seem daunting given its complexity. This course is designed to demystify the platform and help both novices and experienced engineers get started.
This course covers a range of topics with the goal of helping students pass the Google Associate Cloud Engineer certification exam. This section focuses on identifying relevant GCP services for specific use cases. The three areas of concern are compute, storage, and networking. Students will be introduced to GCP solutions relevant to those three critical components of cloud infrastructure. The course also includes three short practical demonstrations to help you get hands-on with GCP, both in the web console and using the command line.
By the end of this course, you should know GCP’s main compute, storage, and networking offerings, and you should know how to pick the right product for a given problem.
Learning Objectives
- Learn how to use Google Cloud compute, storage, and network services and determine which products are suitable for specific use cases

Intended Audience
- People looking to build applications on Google Cloud Platform
- People interested in obtaining the Google Associate Cloud Engineer certification

Prerequisites
To get the most out of this course, you should have a general knowledge of IT architectures.
Compute is one of Google's core offerings for businesses looking to run their apps in the cloud. It's your foundation: the hardware and CPU cycles needed to actually run your software. Now, under the compute umbrella are several specific services, products, and configuration options. For our purposes, though, we are going to focus on three main offerings, Compute Engine, App Engine, and Kubernetes Engine, and we'll briefly mention Cloud Functions and Cloud Run.
I'll start by giving you a quick high-level distinction between these three main offerings. Compute Engine is for deploying virtual machines, or VMs, much like Amazon EC2. App Engine is a serverless compute product for running code without managing the underlying servers. And Kubernetes Engine is for, you guessed it, running Kubernetes workloads for customers that need container orchestration.
So let's dig in a bit more on each service, starting with Compute Engine. As I've said, it is analogous to Amazon's EC2, but let's get a little more specific for folks who may not know AWS. Compute Engine is for deploying virtual machines. You can create a variety of virtual machine types with different amounts of CPU, storage, memory, and network bandwidth. You can also create VMs in different physical regions, as Google Cloud has data centers all around the world. A region is usually a part of a country, like US West or US East, and regions have names denoting their location, like us-west1. Each region contains multiple zones, such as a, b, and c, so the full location name for a VM could be something like us-west1-c, which denotes both the region and the zone.
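To make this concrete, here's a minimal sketch of creating a VM in a specific zone with the gcloud CLI. This assumes you have an authenticated project configured; the instance name is just an illustrative placeholder.

```shell
# Create a VM using a predefined machine type in zone c of the
# us-west1 region ("my-first-vm" is a placeholder name).
gcloud compute instances create my-first-vm \
    --zone=us-west1-c \
    --machine-type=n1-standard-1

# List your instances to confirm the zone placement.
gcloud compute instances list
```

Note how the `--zone` flag encodes both the region (us-west1) and the zone letter (c) in a single value.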
Now, Compute Engine offers a variety of predefined machine types with different operating systems and configurations to match different workloads. There's also the possibility of creating custom machine types if you need a specific CPU and memory setup. Billing for these virtual machines is done per second of usage, so you only pay for exactly what you use, and there are also discounts for sustained usage, as well as committed-use discounts where you commit upfront to long-running workloads. The lesson after this one will cover machine types and pricing in more detail.
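As a quick sketch, you can browse the predefined machine types available in a zone, or request a custom CPU/memory combination at creation time. The instance name here is a placeholder, and this assumes an authenticated project.

```shell
# See which predefined machine types are offered in a given zone.
gcloud compute machine-types list --filter="zone:us-west1-c"

# Create a VM with a custom machine type: 4 vCPUs and 8 GB of memory.
gcloud compute instances create my-custom-vm \
    --zone=us-west1-c \
    --custom-cpu=4 \
    --custom-memory=8GB
```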
Now let's switch over and talk a little bit about App Engine. App Engine is designed to let you deploy software without managing individual servers. Instead, you pick an environment and a runtime. Runtimes are language-specific, minimal container configurations; there are runtimes for things like Python 2.7, Python 3, Java, PHP, Node, Ruby, and Go. The environment is the underlying hardware, and when it comes to picking one, there are two types to choose from: the standard environment and the flexible environment. The App Engine standard environment only supports specific language runtimes, and you can't modify the environment after the fact. You can choose an instance class, which allocates a certain range of CPU, memory, and autoscaling ability; however, you can't SSH into the instances. It's more fixed. In essence, you can think of the standard environment as a sandbox: it spins up quickly and it can autoscale pretty effectively, but the main trade-off is limited ability to alter the environment over time. By contrast, the flexible App Engine environment does let you modify the runtime via Dockerfiles. If you know Dockerfiles, you know they're pretty flexible: you can set up SSH access, run pretty much any programming language, and access other Google Cloud resources and Compute Engine instances within the network. So in short, the flexible environment does less for you, but it gives you more control.
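To make the distinction concrete, here is a minimal, hypothetical app.yaml for each environment. The standard environment just names a supported runtime, while the flexible environment adds `env: flex` and can build from your own Dockerfile.

```yaml
# app.yaml for the standard environment: just pick a supported runtime.
runtime: python37

# --- By contrast, an app.yaml for the flexible environment ---
# runtime: custom   # build the runtime from your own Dockerfile
# env: flex
```

In either case, you would then deploy the app from its directory with `gcloud app deploy`.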
And then the third main Google Cloud compute service is GKE: Google Kubernetes Engine. If you're already running or planning to run a Kubernetes application, GKE is by far one of the best ways to do it. Kubernetes was originally designed by Google, so it's a first-class integration. The GKE platform gives you a very powerful API for managing a Kubernetes cluster. You can work in the console or via CLI tools to quickly get information about the cluster, change config, scale pods, alter access controls, and do all kinds of administrative tasks. Kubernetes, in general, is known for being one of the most powerful container orchestration frameworks available, but it is also known for being fairly complex and tough to debug. GKE removes much of this pain. With GKE you can do almost everything in the console with a mouse, and you get fantastic documentation, tutorials, quick-start guides, and very good instrumentation and logging. GKE runs on top of Compute Engine, using VM instances as GKE nodes; node is the Kubernetes term for a single compute resource. It's very easy to get GKE to intermingle with Compute Engine resources if necessary. If part of your application is in Kubernetes and part of it isn't, as long as it's all within Google Cloud, it's not a problem to get them to work together. Now, actually teaching Kubernetes itself is out of scope for this course, but we will briefly cover how to create a cluster in the demo.
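As a sketch of what cluster creation looks like (the cluster name is a placeholder, and this assumes an authenticated project with kubectl installed):

```shell
# Create a three-node GKE cluster; each node is a Compute Engine VM.
gcloud container clusters create my-cluster \
    --zone=us-west1-a \
    --num-nodes=3

# Fetch credentials so kubectl can talk to the new cluster.
gcloud container clusters get-credentials my-cluster --zone=us-west1-a

# The nodes listed here are also visible as VM instances in Compute Engine.
kubectl get nodes
```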
Now finally, I want to mention two other lightweight GCP compute options that are available for specialized use cases: Cloud Functions and Cloud Run. Cloud Functions are analogous to AWS Lambda. They let you define single-purpose, standalone functions that respond to specific events. This is again a serverless compute option, and it works well for circumstances where all you want to do is run one specific function in response to an event being watched, so it's a fairly targeted use case.
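For instance, deploying a single HTTP-triggered function might look like this sketch; the function name and runtime are illustrative, and the command assumes the function's source sits in the current directory.

```shell
# Deploy a function named "hello_world" from the current directory,
# triggered by HTTP requests; --runtime selects the language runtime.
gcloud functions deploy hello_world \
    --runtime=python37 \
    --trigger-http
```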
Now, Cloud Run is also a serverless option like Cloud Functions, but the difference here is that instead of running just code, you run containers. With Cloud Run, you invoke stateless containers via HTTP requests, or you can have them triggered in response to specifically monitored events. This is a very simple way of creating container-based workloads with all hardware considerations abstracted away; you don't even think about an environment as you do with App Engine. Now, as of the creation of this course, Cloud Run is still in beta, so we're not going to spend a lot of time on it, but it is definitely worth checking out.
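As a hedged sketch, deploying a Cloud Run service means pointing it at a container image and letting GCP handle the rest. Since Cloud Run was in beta at the time of this course, the command lived under the beta group; the service name and image path are placeholders.

```shell
# Deploy a stateless container image as an HTTP-serving Cloud Run service.
gcloud beta run deploy my-service \
    --image=gcr.io/my-project/my-image
```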
So Compute Engine, App Engine, Kubernetes Engine: those are your three main options. How do you know which to select when working on a Google Cloud application? It's probably easiest to start by thinking about GKE. If you're already using Kubernetes locally or with another cloud provider, then I highly recommend migrating to GKE. Creating a cluster and provisioning the needed storage and compute resources takes seconds, and you should be able to reuse much of your existing configuration. Compute Engine and App Engine are great options for everyone outside of the Kubernetes ecosystem.
When it comes to these two services, the real question is how much control you need. Maybe you like to have root access to your machines by default, or maybe you're migrating from EC2, or maybe you have an unusual environment not easily supported by App Engine. In those cases, I would recommend starting with Google Compute Engine. App Engine is generally a better choice for an earlier-stage project, particularly if you can use the standard environment with its various language runtimes. The App Engine flexible environment is more of a niche solution for scenarios where, for whatever reason, you don't want to use Compute Engine but you don't get enough control from the standard environment.
So, that's it. At a pretty high level, you now know enough about GCP compute offerings to be dangerous. You can differentiate GKE, Compute Engine, and App Engine, you know a little bit about Cloud Run and Cloud Functions, and you should be able to pick what is best for a given project. Very cool stuff. In the next lesson, we'll take a deeper dive into VM instance types and learn about cost and hardware optimization as we tour Compute Engine's menu of options. See you there.
Jonathan Bethune is a senior technical consultant working with several companies including TopTal, BCG, and Instaclustr. He is an experienced DevOps specialist, data engineer, and software developer. Jonathan has spent years mastering the art of system automation with a variety of different cloud providers and tools. Before he became an engineer, Jonathan was a musician and teacher in New York City. Jonathan is based in Tokyo, where he continues to work in technology and write for various publications in his free time.