Integrating GCP Services
Cloud platforms are continuing to grow and evolve. There was a time when cloud platforms consisted of a few core services: virtual machines, blob storage, relational databases, etc. Cloud platforms are now much more complex, with services being built on top of other services. Kubernetes Engine, for example, runs on top of Compute Engine and integrates with the Container Registry, load balancers, and other services. With so many services of varying levels of complexity, it can be overwhelming to develop cloud-based solutions.
Throughout this course, we’ll cover some of the topics that will help you to integrate your applications with Google Cloud Platform’s compute services and REST API.
If you have any feedback related to this course, please contact us at email@example.com.
- Implementing service discovery with Kubernetes Engine and Compute Engine
- Configuring applications with instance metadata
- Authenticating users with Identity Aware Proxy
- Using the CLI and Cloud Shell
- Integrating with the GCP API
- Developers looking to integrate with GCP compute services
To get the most out of this course, you should already have some development experience and an understanding of Google Cloud Platform.
Hello and welcome! If you're going to develop using Google Cloud, then it helps to know a little bit about the command line interface. Being able to script out common tasks saves a lot of effort. And once we're familiar with the CLI, Cloud Shell gives us a remotely accessible shell from which we can use it.
The CLI can be used at different levels of sophistication. By default, commands print their results in a human-readable format on the screen. However, the CLI also includes functionality that allows it to be used from inside of scripts. It allows us to filter results with the filter flag, format data with the format flag, and transform results with projections and transformations.
Let's see this in action on the command line. I'm pre-authenticated here. So let's go through some example commands that I have and see how these flags function.
The examples that I have here are contrived, though they'll help to highlight the functionality that can be used from inside of an app. Listing off the projects here shows the results printed to screen. And by using the filter flag, we can specify that we want to see only projects with the name of My First Project. And it returns just the two results here. Filters support different boolean operators. We're using the equality operator here; however, there are others, including two pattern matching operators: the simple pattern operator and the regular expression operator. The simple pattern allows for basic pattern matching without the need for a regex.
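As a rough sketch of the filter syntax being demonstrated (the project name and patterns here are placeholders):

```shell
# Equality operator: only projects named exactly "My First Project".
gcloud projects list --filter='name="My First Project"'

# Simple pattern operator (:) supports * wildcards without full regex.
gcloud projects list --filter='name:My*'

# Regular expression operator (~) for full regex matching.
gcloud projects list --filter='name~^My.*Project$'
```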
Filters can be combined with the logical operators NOT, AND, and OR. Using AND here specifies that we want to see all projects with a name starting with My that were created after the date specified here.
We'll talk more about this date formatting in a second. Though first, let's see the create dates for these projects. Okay, now that we know what to expect from this command, we should have one result. And here it is. So, we can filter values and even combine filters using NOT, AND, and OR.
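A sketch of the kind of combined filter being shown here; the name prefix and cutoff date are placeholders, and the date() transform normalizes createTime so it can be compared as a date:

```shell
# All projects whose name starts with "My" AND were created
# after a given (hypothetical) date.
gcloud projects list \
  --filter='name:My* AND createTime.date("%Y-%m-%d")>"2019-01-01"'
```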
So far I've been using the default format. Though notice, these results are in JSON format. There are multiple supported formats. When consuming results programmatically, we can use formats such as JSON, YAML, and CSV. There are also more human-readable formats, such as table.
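The same command can render its results in any of the supported formats via the format flag; a few variations, using the projects list command from the demo:

```shell
gcloud projects list --format=json
gcloud projects list --format=yaml
gcloud projects list --format='csv(projectId,name)'
gcloud projects list --format='table(projectId,name,createTime)'
```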
We can leverage different formats to obtain specific data. Notice here we can specify the resource-keys that we want returned in table format. The first two keys are simply the names, and this last one here uses a transformation to format the create date key. The CLI includes a set of built-in transformation functions that we can use. One of the built-in transformations is called format, which allows us to format nested data.
The resource key for quotas inside of Compute Engine exists as a list of objects, each containing a few resource keys. If we query the same data in table format and apply the format transform to just the quotas key, we can produce a table with just the metric, limit, and usage.
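A sketch of that nested format transform; the region name here is just an example:

```shell
# Render the region's quotas list as a nested table of
# metric, limit, and usage.
gcloud compute regions describe us-central1 \
  --format="table(quotas:format='table(metric,limit,usage)')"
```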
While this is effective, what we're basically doing is just flattening the quotas list. And since this is a common task, the CLI includes a flatten flag. Notice here, using the flatten flag with the CSV format, we get similar results with a cleaner syntax.
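The flatten version of the same query might look like this, again assuming the us-central1 region as an example:

```shell
# Flatten the quotas list so each quota becomes its own row,
# then select the keys we care about as CSV.
gcloud compute regions describe us-central1 \
  --flatten='quotas[]' \
  --format='csv(quotas.metric,quotas.limit,quotas.usage)'
```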
Formats might include required or optional parameters. Top-level parameters are specified using square brackets after the format name. Column-level settings are specified after the resource keys, and after any transforms.
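To illustrate both levels of parameters (the title text is a placeholder):

```shell
# Top-level attributes (box, title) go in square brackets after the
# format name; per-column attributes (label, the date transform's
# pattern) follow each resource key after a colon.
gcloud projects list \
  --format='table[box,title="My Projects"](name, projectId:label=ID, createTime.date("%Y-%m-%d"):label=CREATED)'
```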
Again, these are contrived examples. However, using this functionality allows us to fetch specific data which we can leverage programmatically. Once you're comfortable using the gcloud CLI, one of the easiest ways of obtaining access to it, is from Cloud Shell.
Cloud Shell is a Google Cloud service that provides us with a remote interactive shell. Here's what you need to know to get the most use out of Cloud Shell for development tasks. Cloud Shell runs on a Google-managed version of Debian Linux, and it runs on top of a small Compute Engine instance. Each user is given 5 gigabytes of persistent storage, which is mounted to their HOME directory. This means files outside of the HOME directory are not going to survive the termination of an instance, which happens after one hour of inactivity.
The underlying operating system is managed by Google and includes several common developer tools and languages; included among these is the Cloud SDK, which includes the gcloud CLI. There are multiple programming language environments installed, there are a few command line editors, and there's also a graphical text editor.
Cloud Shell does allow for some level of customization. Cloud Shell runs a startup script whenever the instance backing the shell starts up. When the startup script runs successfully, the file /google/devshell/customize_environment_done is touched, thereby updating its last modified date. By modifying the script we can make configuration changes, including the installation of new software.
As a note, if you modify the .bashrc file then it needs to include this snippet of code that sources the Google created bashrc file.
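A sketch of that guard, based on the file path Cloud Shell's documentation points at:

```shell
# Source Google's Cloud Shell bashrc first so the environment
# setup it performs still happens, then add customizations below.
if [ -f "/google/devshell/bashrc.google" ]; then
  source "/google/devshell/bashrc.google"
fi
```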
Cloud Shell uses tmux by default. Tmux is a terminal multiplexer, which is an overly fancy way to say that tmux allows us to split the terminal into multiple panes, letting us run multiple processes inside of a single window.
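For reference, the default tmux key bindings for pane management (the prefix key is Ctrl+b unless it's been remapped):

```shell
# Ctrl+b %   split the current pane vertically (side by side)
# Ctrl+b "   split the current pane horizontally (stacked)
# Ctrl+b o   cycle focus between panes
# Panes can also be split from the command line:
tmux split-window -h
```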
Cloud Shell gives us a few terminal configuration options, including the ability to set the font, the color palette, the copy/paste keys, and some other basic configuration details. Cloud Shell has another feature that I find interesting and wanted to share: it's called Open In Cloud Shell, and it allows for the creation of custom terminal-driven tutorials.
The way this works is you can craft a URL by specifying a few GET parameters, and when clicked, it kicks off the Sign-in process. Once authenticated and authorized and all that good stuff, users are presented with a tutorial of your creation, and optionally the file editor, as well as a Cloud Shell terminal.
The terminal runs inside a Docker container, and it clones the specified Git repo and branch. The terminal also optionally prints a file from the Git repo, if you specify one. The reason I call attention to this is that I can see a lot of potential use cases for it, including helping onboard new engineers by giving them a walk-through of something you're working on, or sharing a project with other engineers on the team so that they have some context around the thing you might be working on.
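A sketch of how such a URL might be assembled; the parameter names follow the Open in Cloud Shell feature's query-string conventions, while the repo, branch, and tutorial file below are placeholders:

```shell
# Build an Open in Cloud Shell link by appending GET parameters
# to the base URL. The repo URL here is hypothetical.
open_url='https://shell.cloud.google.com/cloudshell/open'
open_url="${open_url}?cloudshell_git_repo=https://github.com/example/my-repo"
open_url="${open_url}&cloudshell_git_branch=main"
open_url="${open_url}&cloudshell_tutorial=tutorial.md"
echo "$open_url"
```

Clicking a link like this kicks off sign-in, then drops the user into Cloud Shell with the repo cloned and the tutorial displayed.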
All right, with that, let's wrap up this lesson. Thank you so very much for watching, and I will see you in another lesson.
Ben Lambert is a software engineer and was previously the lead author for DevOps and Microsoft Azure training content at Cloud Academy. His courses and learning paths covered Cloud Ecosystem technologies such as DC/OS, configuration management tools, and containers. As a software engineer, Ben’s experience includes building highly available web and mobile apps. When he’s not building software, he’s hiking, camping, or creating video games.