Configuring Applications with Instance Metadata
Difficulty: Advanced
Duration: 43m
Students: 642
Ratings: 4.7/5
Description

Cloud platforms are continuing to grow and evolve. There was a time when cloud platforms consisted of a few core services: virtual machines, blob storage, relational databases, etc. Cloud platforms are now much more complex, with services being built on top of other services. Kubernetes Engine, for example, runs on top of Compute Engine and integrates with the Container Registry, load balancers, and other services. With so many services of varying levels of complexity, it can be overwhelming to develop cloud-based solutions. 

Throughout this course, we’ll cover some of the topics that will help you to integrate your applications with Google Cloud Platform’s compute services and REST API.

If you have any feedback related to this course, please contact us at support@cloudacademy.com.

Learning Objectives 

  • Implementing service discovery with Kubernetes Engine and Compute Engine 
  • Configuring applications with instance metadata 
  • Authenticating users with Identity Aware Proxy 
  • Using the CLI and Cloud Shell 
  • Integrating with the GCP API 

Intended Audience 

  • Developers looking to integrate with GCP compute services 

Prerequisites 

To get the most out of this course, you should already have some development experience and an understanding of Google Cloud Platform. 

Transcript

Hello and welcome. If you're going to run applications on Compute Engine, you'll likely want to provide your application with some configuration data at some point.

Now there are a lot of ways to accomplish this; however, Google has a built-in method called instance metadata. Every physical server that runs a Compute Engine instance has a metadata server located at 169.254.169.254. It's a nice and easy one to remember because it's just the same two numbers repeating. The metadata server is a key-value data store that contains project- and instance-level data, which is made available to the instances running on that physical host. The type of information it stores includes details such as SSH keys, the project ID, CPU details, attached disks, etc.

Google controls the values for system metadata; we don't have access to change them. In addition to the system data, Google allows us to store a limited amount of custom metadata. Google imposes limits on what we can store, including a project-wide size limit of 512 KB. That limit covers the total size of project and instance metadata, both system and custom values. They also impose some constraints on the size of a single entry, both key and value.

Custom metadata is set through the Compute Engine API, not directly through the metadata server. The way we consume this data is by querying the metadata server with HTTP GET requests. If a query fails, we get a non-200 status code. Successful queries can return one of two things: either a directory listing or a specific value. I wanna drill down on this further, because having a sense for how metadata is structured is going to make it a little easier to work with.
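For reference, setting custom metadata from the command line typically looks something like this with the gcloud CLI; the instance name, zone, keys, and values below are just placeholders:

    # Instance-level custom metadata (placeholder instance name and zone)
    gcloud compute instances add-metadata my-instance \
        --zone us-central1-a \
        --metadata appconfig=production

    # Project-level custom metadata (placeholder key and value)
    gcloud compute project-info add-metadata \
        --metadata environment=staging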

Regardless of how metadata is actually stored behind the scenes, the way we consume it is through the metadata server. The structure it uses looks very similar to a directory structure, which consists of files and/or directories. Though in the context of the metadata server, rather than files, we call them endpoints. Endpoints are the fully qualified key names for given data values. One way to tell that something is a directory is by looking for the trailing slash: all directories have a trailing slash, whereas endpoints don't.

There are two base-level directories, which correspond to instance-level and project-level metadata. To get the value of a specific entry, we specify the endpoint. To get the contents of a directory, we specify the directory, which, again, includes the trailing forward slash.
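To make the structure concrete, here's a rough sketch of those paths using the standard v1 prefix (metadata.google.internal resolves to 169.254.169.254):

    # Base-level directories -- note the trailing slashes
    http://metadata.google.internal/computeMetadata/v1/project/
    http://metadata.google.internal/computeMetadata/v1/instance/

    # Example endpoints -- no trailing slash, each returns a single value
    http://metadata.google.internal/computeMetadata/v1/project/project-id
    http://metadata.google.internal/computeMetadata/v1/instance/hostname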

The metadata server can return data as text or JSON and each system data endpoint has a default format based on what makes the most sense for that particular type of data. This is something we can override for the entire result set by passing the URL parameter of "alt" with a value of either text or JSON.

The metadata server also includes a parameter called "recursive," which will automatically drill down from the current level all the way to the endpoints. This makes it easier to obtain every entry under a given directory. If we pair this with the JSON-formatted output, it becomes pretty easy to read data programmatically from the metadata server.
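As a rough sketch of what those parameters look like appended to a URL (which format a given endpoint defaults to varies):

    # Request a specific format from an endpoint with the alt parameter
    http://metadata.google.internal/computeMetadata/v1/instance/machine-type?alt=text

    # Recursively return every entry under a directory (defaults to JSON output)
    http://metadata.google.internal/computeMetadata/v1/instance/?recursive=true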

Let's explore this further by actually interacting with the metadata server. Using curl will help demonstrate how to interact with the metadata server from inside of an application. First, the HTTP header "Metadata-Flavor" is required, and its value needs to be set to "Google."

Second, we can query the root-level data using this base URL. This query shows we have a few root directories, including project and instance. Recall that directories end with a forward slash; if you forget it, you won't get the desired results.
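As a sketch, that command and the kind of output to expect look roughly like this:

    # Query the root of the v1 metadata tree; the Metadata-Flavor header is required
    curl -H "Metadata-Flavor: Google" \
        "http://metadata.google.internal/computeMetadata/v1/"

    # Output includes the base-level directories, e.g.:
    # instance/
    # project/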

By setting the recursive parameter to true, we get all of the data from the specified directory, all the way down.

Using recursive implicitly sets the "alt" parameter to JSON. When exploring this data on the CLI for yourself, you can pipe it through Python's JSON tool, assuming you have Python 3 installed.
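Here's a rough sketch of the recursive query piped through Python's JSON tool:

    # Recursively fetch everything under instance/ and pretty-print the JSON
    curl -H "Metadata-Flavor: Google" \
        "http://metadata.google.internal/computeMetadata/v1/instance/?recursive=true" \
        | python3 -m json.tool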

Both the project and instance directories contain an attributes directory, and custom metadata resides under these attributes directories. Adding some instance metadata to the instance that I'm connected to gives us something to query. Now, listing the contents of the instance attributes shows our newly created "appconfig" endpoint, and by appending appconfig to the URL, we can see its value reflected here.
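The queries behind that look something like this (appconfig is just the example key created in this lesson):

    # List the instance's custom metadata keys
    curl -H "Metadata-Flavor: Google" \
        "http://metadata.google.internal/computeMetadata/v1/instance/attributes/"

    # Fetch the value of the appconfig key
    curl -H "Metadata-Flavor: Google" \
        "http://metadata.google.internal/computeMetadata/v1/instance/attributes/appconfig"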

With just what we know right now, we can have our application query the metadata server for a specific endpoint. Though there are cases where we might want an application to wait for changes to be made to some specific value; in that case, we can use the "wait_for_change" parameter by setting it to true. Now, if we make a change to the instance's metadata and switch tabs, we can see that the new value is reflected here.

We can also set the last_etag parameter to a specific ETag value, so that wait_for_change knows the last ETag we've seen. By setting the timeout_sec parameter, we can specify the number of seconds to wait before disconnecting.
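Here's a minimal sketch of a waiting request, using the appconfig key from earlier; the ETag for the current value is returned in the ETag response header (curl -i will display it), and the value below is a placeholder:

    # Block until appconfig changes, or until 360 seconds elapse
    curl -H "Metadata-Flavor: Google" \
        "http://metadata.google.internal/computeMetadata/v1/instance/attributes/appconfig?wait_for_change=true&last_etag=SOME_ETAG&timeout_sec=360"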

All right, that's gonna do it for this lesson. Hopefully, this has given you a glimpse into how you might use the metadata server inside your applications to fetch custom metadata and even wait for specific changes. All right, thanks so much for watching, and I'll see you in another lesson.


About the Author

Ben Lambert is a software engineer and was previously the lead author for DevOps and Microsoft Azure training content at Cloud Academy. His courses and learning paths covered Cloud Ecosystem technologies such as DC/OS, configuration management tools, and containers. As a software engineer, Ben’s experience includes building highly available web and mobile apps. When he’s not building software, he’s hiking, camping, or creating video games.