Integrating GCP Services
Cloud platforms are continuing to grow and evolve. There was a time when cloud platforms consisted of a few core services: virtual machines, blob storage, relational databases, etc. Cloud platforms are now much more complex, with services being built on top of other services. Kubernetes Engine, for example, runs on top of Compute Engine and integrates with the Container Registry, load balancers, and other services. With so many services of varying levels of complexity, it can be overwhelming to develop cloud-based solutions.
Throughout this course, we’ll cover some of the topics that will help you to integrate your applications with Google Cloud Platform’s compute services and REST API.
If you have any feedback related to this course, please contact us at firstname.lastname@example.org.
Learning Objectives

- Implementing service discovery with Kubernetes Engine and Compute Engine
- Configuring applications with instance metadata
- Authenticating users with Identity Aware Proxy
- Using the CLI and Cloud Shell
- Integrating with the GCP API
Intended Audience

- Developers looking to integrate with GCP compute services
Prerequisites

To get the most out of this course, you should already have some development experience and an understanding of Google Cloud Platform.
Hello and welcome. In this lesson, we're going to summarize what we've covered throughout the course. We covered a wide range of topics, starting with service discovery.
Kubernetes Engine uses the kube-dns add-on to manage DNS records in response to cluster events. This means that creating a service adds DNS records that we can use from inside of our containerized apps.
All services are given A records. Normal services resolve to the service's cluster IP address, while headless services resolve to the IP addresses of the pods that are in a ready state. And although a pod could crash just after we fetch the DNS record, this method of service discovery allows us to interact with only healthy pods.
Compute Engine instances are given Google-managed hostnames on the internal DNS server. We can use these records from inside of the network to interact with an instance. We can also put a load balancer in front of a managed instance group, and Google will add the internal DNS records for us.
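To make the naming patterns above concrete, here's a minimal sketch of the DNS names involved. It assumes the default Kubernetes cluster domain `cluster.local` and Compute Engine's zonal internal DNS format; the project, zone, and service names are illustrative placeholders.

```python
# Sketch of the DNS naming patterns described above.
# Assumes the default cluster domain "cluster.local" and zonal internal DNS;
# the names used below (project, zone, etc.) are made-up placeholders.

def k8s_service_dns(service: str, namespace: str = "default",
                    cluster_domain: str = "cluster.local") -> str:
    """A record name that kube-dns creates for a Service."""
    return f"{service}.{namespace}.svc.{cluster_domain}"

def gce_internal_hostname(instance: str, zone: str, project: str) -> str:
    """Internal DNS name Compute Engine assigns to an instance (zonal DNS)."""
    return f"{instance}.{zone}.c.{project}.internal"

print(k8s_service_dns("web", "prod"))
# web.prod.svc.cluster.local
print(gce_internal_hostname("app-1", "us-central1-a", "my-project"))
# app-1.us-central1-a.c.my-project.internal
```

From inside a pod or instance, resolving either of these names (with `socket.getaddrinfo`, for example) returns the healthy endpoints described above.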
Reading instance metadata allows us to configure applications on startup. The metadata server is a service that runs on each physical host running Compute Engine instances. Its address is 169.254.169.254. The metadata is broken into instance and project data, and we can also store a limited amount of custom project- and instance-level metadata.
Metadata is consumed over HTTP using GET requests; we specify the directory or endpoint that we want returned in the URL. The metadata server also supports a few query parameters that we can use to format the data, as well as to wait for changes.
Fetching metadata from inside of an application lets us do more than bootstrap an application with some initial configuration. It also allows us to listen for changes, which gives us near real-time configuration updates.
Building apps that use Identity-Aware Proxy gives us control over access to cloud-based internal applications. IAP checks the identity of the user making the request and, once they're authenticated, it checks whether they're authorized for that specific resource. Access is based on the IAP-secured Web App User role, which can be granted project-wide or on a specific resource.
IAP passes a JWT to our application via a request header, which we need to validate using the public key for the service. Once it's validated, we can use the identity information inside of the payload.
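To show what "the identity information inside of the payload" looks like, here's an illustrative sketch that decodes a JWT payload with only the standard library. To keep it runnable anywhere, the demo token is hand-built and unsigned; in a real IAP-protected app you would take the token from the request header and verify its signature against Google's published IAP public keys (for example with a JWT library) before trusting any claims. The claim values below are made up.

```python
# Illustrative only: decodes a JWT payload WITHOUT signature verification.
# A real app must verify the token's signature against Google's IAP public
# keys before trusting anything in it.
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Base64url-decode the middle (payload) segment of a JWT."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Demo with a hand-built, unsigned token (made-up claims):
claims = {"email": "user@example.com", "aud": "/projects/123/apps/demo"}
payload = base64.urlsafe_b64encode(
    json.dumps(claims).encode()).rstrip(b"=").decode()
token = f"header.{payload}.signature"

print(decode_jwt_payload(token)["email"])  # user@example.com
```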
The CLI is Google Cloud's tool of choice for scripted automation. The use of filters, formats, projections, and transformations allows it to return very specific data in very specific formats.
There are multiple formats, some intended for humans and some for code, and there are multiple transformations as well. These allow us to modify the returned results in different ways without the need for external tools or languages, though the CLI does pair nicely with scripting languages.
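As one example of that pairing, machine-readable output such as `--format=json` can be post-processed in a scripting language. The sketch below filters a sample instance list the way `--filter="status=RUNNING" --format="value(name)"` would inside the CLI; the JSON is a made-up, abbreviated sample standing in for the command's real stdout.

```python
# Sketch: post-processing `gcloud compute instances list --format=json`
# style output in Python. The JSON below is a made-up, abbreviated sample;
# on a real system you'd read it from the command's stdout.
import json

sample = json.loads("""
[
  {"name": "web-1", "status": "RUNNING",    "zone": "us-central1-a"},
  {"name": "web-2", "status": "TERMINATED", "zone": "us-central1-b"}
]
""")

# Roughly equivalent to --filter="status=RUNNING" --format="value(name)":
running = [i["name"] for i in sample if i["status"] == "RUNNING"]
print(running)  # ['web-1']
```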
Cloud Shell offers us a remote terminal. The underlying system is Google-managed: it runs Debian Linux, has five gigabytes of persistent storage, uses tmux by default, allows for file uploads and downloads, and has a basic graphical text editor. It also includes the Cloud SDK and multiple common programming languages.
The GCP REST APIs are specific to a given service. Each service consists of resources and methods, and the APIs tend to share standard parameters, among them parameters for pagination and field masking.
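The pagination pattern those standard parameters support can be sketched like this: request a page, then pass the returned `nextPageToken` back until no token is returned. The `fetch_page` function below is a stand-in for a real HTTP call (where a `fields` parameter would also mask the response), and its data is made up.

```python
# Sketch of the standard pagination pattern shared by the GCP REST APIs:
# keep requesting pages with pageToken until no nextPageToken comes back.
# fetch_page is a stand-in for a real HTTP GET with made-up data.
def fetch_page(page_token=None):
    """Stand-in for GET .../instances?fields=items(name),nextPageToken&pageToken=..."""
    pages = {
        None: {"items": [{"name": "web-1"}], "nextPageToken": "p2"},
        "p2": {"items": [{"name": "web-2"}]},  # last page: no token
    }
    return pages[page_token]

def list_all():
    names, token = [], None
    while True:
        resp = fetch_page(page_token=token)
        names += [i["name"] for i in resp.get("items", [])]
        token = resp.get("nextPageToken")
        if not token:
            return names

print(list_all())  # ['web-1', 'web-2']
```

The client libraries mentioned below handle this same loop (and field masking) for us.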
Programming directly against the API is allowed, though it's not encouraged. The client libraries are Google's recommended way to develop against the API. They're auto-generated from a service's discovery document, and they take care of much of the boilerplate code. And with that, we've hit the end.
Alright, it's time to wrap up this lesson and in turn, the course. I hope this course has been helpful to you. Thank you so very much for watching and I will see you in another course.
Ben Lambert is a software engineer and was previously the lead author for DevOps and Microsoft Azure training content at Cloud Academy. His courses and learning paths covered Cloud Ecosystem technologies such as DC/OS, configuration management tools, and containers. As a software engineer, Ben’s experience includes building highly available web and mobile apps. When he’s not building software, he’s hiking, camping, or creating video games.