Cloud platforms are continuing to grow and evolve. There was a time when cloud platforms consisted of a few core services: virtual machines, blob storage, relational databases, etc. Cloud platforms are now much more complex, with services being built on top of other services. Kubernetes Engine, for example, runs on top of Compute Engine and integrates with the Container Registry, load balancers, and other services. With so many services of varying levels of complexity, it can be overwhelming to develop cloud-based solutions.
Throughout this course, we’ll cover some of the topics that will help you to integrate your applications with Google Cloud Platform’s compute services and REST API.
If you have any feedback related to this course, please contact us at support@cloudacademy.com.
Learning Objectives
- Implementing service discovery with Kubernetes Engine and Compute Engine
- Configuring applications with instance metadata
- Authenticating users with Identity Aware Proxy
- Using the CLI and Cloud Shell
- Integrating with the GCP API
Intended Audience
- Developers looking to integrate with GCP compute services
Prerequisites
To get the most out of this course, you should already have some development experience and an understanding of Google Cloud Platform.
Hello, and welcome. In this lesson, we're going to talk about service discovery, specifically in the context of Google Cloud.
Let's start off with some definitions that will establish a shared context.
First, what is a service and why does it need to be discovered? For the sake of simplicity, in this context, we're going to say that a service is an abstract grouping of functionality, which can be interacted with over a network.
The first takeaway here being that in this context services interact over a network. The second takeaway is that a service could be just a single instance of an application running on a virtual machine, or it could be something more complex. If a service requires high availability then it may consist of multiple instances of an application, or even multiple instances of multiple applications.
If a service uses an elastic infrastructure, then the IP addresses of application instances may change throughout the life cycle of those instances. Meaning that using the specific IP addresses of these instances for communication is unreliable.
So if we can't count on the IP addresses remaining constant, then we need a single reliable entry point that represents the service itself. Now that's exactly what DNS already provides. It allows us to create records that map from a name to an IP address.
So now we're back to: why do services need discovery? After all, we can use DNS to get the IP addresses for instances. What's missing from this picture? Well, so far we haven't accounted for information such as port numbers, protocols, metadata, etc. There are 65,536 network ports, and each port can be bound by a single application at a time. Back in the old days, these numbers were assigned to specific services. For example, HTTP got 80, DNS got 53, HTTPS got 443, MySQL got 3306, etc.
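As a quick illustration, operating systems keep a database of these static assignments, and Python's socket module can look them up. This is a sketch that assumes a system with a standard /etc/services file:

```python
import socket

# Look up the registered "well-known" port for a protocol name.
# These are the static assignments described above; the results come
# from the OS services database (e.g. /etc/services on Linux).
print(socket.getservbyname("http", "tcp"))   # typically 80
print(socket.getservbyname("https", "tcp"))  # typically 443
```

Note this only covers the static, pre-assigned mappings; it tells us nothing about services bound to dynamic ports, which is the gap service discovery fills.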
Now, having a specific port for certain services works; it's been effective for years. Though with a limited pool of ports, it makes more sense if you can use them in a dynamic way. If applications could dynamically look up the details about a service, such as its port number and protocol, then they could be much more flexible. Let's visualize this with an example.
Imagine there's a service that we want to integrate into our application. The service is designed to run multiple instances of an application on the same host. So it attempts to bind to a port. And if it's already in use, then it goes to the next one until it finds one that it can use.
If we want our application to interact with this service, then we need to know not just the IP address, but we also need to know the port number. So this is one reason that we might want service discovery. In some circumstances, you might need some sort of specialized service discovery.
Some systems really will require that. However, the majority of systems are gonna fall into a more generalized category. And these generalized categories tend to have solutions with standards built around them. One of the more common standards for service discovery is to use DNS. And the reason being that DNS is an existing and fairly well understood technology.
DNS-based service discovery has a codified IETF standard, which uses SRV and TXT records. Services that leverage the specification can share IP addresses, port numbers, protocols, and even metadata.
The details can be found in RFC 6763. And if you haven't already, I recommend giving it a read. The reason I think reading this is valuable is that having an understanding of how generic DNS-based service discovery operates is gonna help provide a bit more context for any specific implementation you use down the road.
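As a sketch of the naming scheme RFC 6763 defines, a DNS-SD service instance name is built as <Instance>._<Service>._<Proto>.<Domain>, and the SRV record at that name carries the priority, weight, port, and target host, while a TXT record carries key=value metadata. The instance and domain names below are hypothetical:

```python
# Build an RFC 6763 (DNS-SD) service instance name:
#   <Instance> . _<Service> . _<Proto> . <Domain>
def dns_sd_name(instance: str, service: str, proto: str, domain: str) -> str:
    return f"{instance}._{service}._{proto}.{domain}"

print(dns_sd_name("web", "http", "tcp", "example.com"))
# web._http._tcp.example.com
```

Querying SRV and TXT records under that name is what lets a client learn the port, protocol, and metadata rather than just an IP address.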
Okay, so with all of the service discovery talk in mind, let's talk about service discovery in the context of Kubernetes Engine. Out of the box, Kubernetes supports multiple forms of service discovery. One form uses environment variables to expose service details. Another, and probably more common, method is the DNS-based approach.
Let's start by looking at DNS because it will help us to understand the environment variable-based approach in a little bit. Here's a quick primer on Kubernetes, if you're not fully up to speed. Kubernetes allows us as engineers to deploy containerized applications through an abstraction called a pod. A pod is just one or more containerized applications that will be run together.
Pods are temporary instances of our containerized apps, and if a pod is replaced, its IP address will change. A service, in the context of Kubernetes, is a Kubernetes API object named Service. Its job is to be a stable entry point for a collection of pods. A service can be normal or headless: a normal service allocates a cluster IP address that serves as the IP address for the service, while a headless service doesn't allocate a cluster IP.
The GKE implementation of Kubernetes uses the kube-dns add-on, which manages DNS entries for the cluster. All services get a DNS A record. The record's hostname follows the pattern servicename.namespace.svc.zone, where the service name is the name we gave our service in the spec. The namespace is also set in the spec; if we don't specify one ourselves, it defaults to the word default. The zone that GKE uses is cluster.local.
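That naming pattern is simple enough to sketch in code. This minimal helper (the service names used are hypothetical) builds the FQDN that kube-dns would answer for:

```python
# Build the A-record hostname kube-dns creates for a service:
#   <service>.<namespace>.svc.<zone>
def service_fqdn(service: str, namespace: str = "default",
                 zone: str = "cluster.local") -> str:
    return f"{service}.{namespace}.svc.{zone}"

print(service_fqdn("ow"))           # ow.default.svc.cluster.local
print(service_fqdn("owc", "prod"))  # owc.prod.svc.cluster.local
```

Pods in the same namespace can usually use just the bare service name, since the resolver's search path fills in the rest.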
If we request an A record for our service, it'll return the IP address or addresses. If the service is normal, the results will be just a single cluster IP. If the service is headless, the results will be a list of addresses for our pods. Let's visualize this a bit better. Here are two deployments on the same cluster. Imagine we create a service for each with these settings, the OW service has a cluster IP because that's the default and the OWC service is headless because it explicitly sets the cluster IP to none.
Once created, let's say that the OW service has a cluster IP address of 10.3.0.5, and each pod in the headless OWC service has its own IP address. When these services were created, an A record was added for each. The record for OW returns the cluster IP address; the record for the headless OWC service returns one address for each pod in a ready state.
So Kubernetes uses a specific hostname pattern to allow us to locate the IP addresses for our services. The Kubernetes service object also has a setting referred to as named ports. A named port allows us to refer to a port by name rather than by number. This is useful for cases such as a pod that exposes multiple ports, or to allow developers to change a service's port number without breaking it for its consumers.
When a service is created using named ports, a DNS SRV record is added. To query an SRV record, we need to know the port name, the protocol, and the service name. The results of the query include the priority, the weight, port number, and host name or names, if the service is headless.
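The SRV query name prepends the port name and the protocol, each with a leading underscore, to the service's regular hostname. Here's a small sketch of that pattern (the port and service names are hypothetical):

```python
# Build the SRV record name for a Kubernetes named port:
#   _<port-name>._<protocol>.<service>.<namespace>.svc.<zone>
def srv_query_name(port_name: str, protocol: str, service: str,
                   namespace: str = "default",
                   zone: str = "cluster.local") -> str:
    return f"_{port_name}._{protocol}.{service}.{namespace}.svc.{zone}"

print(srv_query_name("metrics", "tcp", "ow"))
# _metrics._tcp.ow.default.svc.cluster.local
```

A client that queries this name gets back the port number along with the target hostname(s), so it never has to hard-code the port.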
Okay, to summarize DNS-based service discovery for Kubernetes: the kube-dns add-on manages DNS records for services. All services have A records created; normal services return the cluster IP, and headless services return the IP addresses of the service's pods that are in a ready state.
Services using named ports have SRV records that include additional data for priority, weight, port, and host name or names if the service is headless.
When Kubernetes schedules a pod to run on a node, it includes a set of environment variables for each active service. Because these values aren't updated during the life cycle of a pod, they're not as reliable as the DNS-based approach, and therefore they're not recommended. Though it is a viable method in certain cases, so it's worth knowing about.
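As a sketch of what those variables look like: for a service named ow, Kubernetes sets variables such as OW_SERVICE_HOST and OW_SERVICE_PORT (the service name upper-cased, with dashes replaced by underscores). Here the environment is populated by hand purely for the demo:

```python
import os

# Simulate the variables Kubernetes would inject for a service named "ow".
os.environ["OW_SERVICE_HOST"] = "10.3.0.5"
os.environ["OW_SERVICE_PORT"] = "80"

def service_from_env(name: str):
    # Kubernetes upper-cases the service name and replaces dashes
    # with underscores when naming these variables.
    key = name.upper().replace("-", "_")
    host = os.environ[f"{key}_SERVICE_HOST"]
    port = int(os.environ[f"{key}_SERVICE_PORT"])
    return host, port

print(service_from_env("ow"))  # ('10.3.0.5', 80)
```

The catch the lesson mentions is visible here: the values are snapshotted at pod start, so a service created afterwards simply won't have variables in this pod.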
Service discovery methods are going to range in complexity, from simple to wildly complex depending on your actual needs. And I mention this because we're about to talk about service discovery with Compute Engine. Now, Compute Engine is a foundational cloud service that allows us to run applications on one or more virtual machines. The implication is that the systems we build using Compute Engine might be configured in wildly different ways, which means that your system may legitimately require some very specialized service discovery method. That could be something such as etcd, Consul, maybe Cloud DNS, or even something else.
Now, if we were to cover even one of these, it would take all day. So what I wanna do is just refine this down to a few basic methods that come prebuilt into Compute Engine that you could use for service discovery.
If a service is running on just a single virtual machine instance, then we can locate the IP address using DNS. Each instance has its own unique host name that we can resolve on the internal DNS zone. And Google manages these A records for us automatically.
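As an illustration of that pattern (the project and instance names below are hypothetical), the zonal internal DNS name Google maintains follows <instance>.<zone>.c.<project>.internal:

```python
# Build the zonal internal DNS hostname Compute Engine manages
# automatically for each instance:
#   <instance>.<zone>.c.<project>.internal
def zonal_internal_fqdn(instance: str, zone: str, project: str) -> str:
    return f"{instance}.{zone}.c.{project}.internal"

print(zonal_internal_fqdn("web-1", "us-central1-a", "my-project"))
# web-1.us-central1-a.c.my-project.internal
```

Resolving that name from inside the VPC returns the instance's internal IP, so other instances never need to hard-code it.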
Recall that in Kubernetes, a service is a collection of pods that serve the same purpose. With Compute Engine, instance groups kind of serve in the same role. Managed instance groups allow us to create collections of instances that serve the same purpose. Though without a load balancer, they don't have a single IP address to serve as that common entry point.
When using an internal load balancer in front of an instance group, Google will add A records for our service.
For use cases where you need something a little more specific, but you're not quite ready for an off-the-shelf solution that you have to configure and maintain, you can try Cloud DNS and implement private zones.
All right, with that, let's wrap up this lesson. Thank you so very much for watching and I will see you in another lesson.
Ben Lambert is a software engineer and was previously the lead author for DevOps and Microsoft Azure training content at Cloud Academy. His courses and learning paths covered Cloud Ecosystem technologies such as DC/OS, configuration management tools, and containers. As a software engineer, Ben’s experience includes building highly available web and mobile apps. When he’s not building software, he’s hiking, camping, or creating video games.