Deploying Applications, Services, and Cloud Functions
Modern software systems have become increasingly complex. Cloud platforms have helped tame some of the complexity by providing both managed and unmanaged services. So it’s no surprise that companies have shifted workloads to cloud platforms. As cloud platforms continue to grow, knowing when and how to use these services is important for developers.
This course is intended to help prepare individuals seeking to pass the Google Cloud Professional Cloud Developer Certification Exam. The Cloud Developer Certification requires a working knowledge of building cloud-native systems on GCP. That covers a wide variety of topics, from designing distributed systems to debugging apps with Stackdriver.
This course focuses on the third section of the exam overview, more specifically the first five points, which cover deploying applications using GCP compute services.
- Implement appropriate deployment strategies based on the target compute environment
- Deploy applications and services on Compute Engine and Google Kubernetes Engine
- Deploy an application to App Engine
- Deploy a Cloud Function
Intended Audience
- IT professionals who want to become cloud-native developers
- IT professionals preparing for Google’s Professional Cloud Developer Exam
Prerequisites
- Software development experience
- Docker experience
- Kubernetes experience
- GCP experience
Hello and welcome! In this lesson we'll be talking about Kubernetes Engine. The exam guide for the Professional Cloud Developer certification says that there are some things you'll need to know about GKE, specifically: deploying a GKE cluster, deploying a containerized application to GKE, configuring GKE application monitoring and logging, creating a load balancer for GKE instances, and building a container image using Cloud Build.
Now, if you're familiar with Kubernetes, then you know it is a non-trivial system. So, these bullet points seem deceptively simple. Let's examine each of them a bit further, starting with the topic of deploying a GKE cluster.
Deploying a GKE cluster is a reasonably simple process. It's one command with gcloud, and it's just a form if you're inside of the console. However, for a professional-level certification, high-level requirements such as being able to deploy a GKE cluster aren't usually trying to ensure that you've memorized a few key commands. So, when Google says that before you take the exam you should know how to deploy a GKE cluster, they're likely saying something such as, "You should have deployed a GKE cluster or two," probably more like 10 or 15. You should be thinking about automated deployments, which means you should be familiar with the gcloud command line. You should also be familiar with the functionality of Kubernetes. When they say you should be familiar with deploying containerized applications, they're saying you should have built Docker container images, you should have pushed images to a container registry, and you should be comfortable with the kubectl binary. Each bullet point is abstracting away a lot of information here, so let's try and unpack as much as we can.
This is a command you should be familiar with before the exam: gcloud container clusters create. One more time, that's gcloud container clusters create. Why not gcloud kubernetes cluster create? Why container, singular?
I'm guessing that's because it used to be called Container Engine, and so the container namespace just didn't change. Why is clusters plural? I don't know; I'm fairly certain that's just a sub-command naming convention. Now, all of these arguments are fairly self-descriptive, so this command will create a GKE cluster. It's a pretty vanilla deployment: a single-zone cluster using a small machine type, running COS (Container-Optimized OS). In a real-world application we'd likely have a cluster with more nodes running in multiple zones for increased availability, and ideally we'd use autoscaling here to help make sure that we hit that ideal utilization range, so that we're not over- or under-provisioning.
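The command itself was shown on screen rather than in the transcript; based on the description (single zone, small machine type, COS), it likely looks something like this sketch, where the cluster name, zone, and machine type are illustrative:

```shell
# Create a small, single-zone GKE cluster.
# --num-nodes defaults to 3 if omitted.
gcloud container clusters create demo-cluster \
  --zone us-central1-a \
  --machine-type g1-small \
  --image-type COS \
  --num-nodes 3
```

A multi-zone, autoscaled variant would add flags such as --node-locations and --enable-autoscaling with --min-nodes/--max-nodes.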
Recall that Stackdriver with Kubernetes is one of the exam objectives, so let's kick off that discussion now. The now-legacy Stackdriver monitoring and logging options have been replaced by Stackdriver Kubernetes Engine Monitoring, which centralizes the functionality and is meant to be a single pane of glass for GKE. Currently, you can use any of these options or none of them at all. To enable the newest incarnation of Stackdriver, you use the --enable-stackdriver-kubernetes flag. All right, we'll circle back to Stackdriver later; for now, let's talk about deploying containers.
The result of the clusters create command is a three-node cluster of small instances with Stackdriver enabled. If not specified, the default number of nodes is three, because three is the smallest effective cluster size that offers redundancy.
Kubernetes is all about running containers, and it's often necessary to run multiple containers together, which is why Kubernetes abstracts this into the concept of a pod. A pod is just one or more containers that will always be co-located. It's the smallest unit of deployment, and it's just a simple wrapper for containers. Creating pods directly is kind of like us as developers manually starting a web server process on a remote server: there's nothing there to ensure that it stays running, so it will run until it stops, and then that's it. Kubernetes has higher-level abstractions, which it calls controllers, that help with this.
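To make the pod concept concrete, here's a minimal, hypothetical pod manifest (the names are made up for illustration); note that nothing will recreate this pod if it dies:

```yaml
# pod.yaml - a bare pod; no controller will restart it if it's deleted.
apiVersion: v1
kind: Pod
metadata:
  name: web-pod
spec:
  containers:
    - name: nginx
      image: nginx
```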
A ReplicaSet is a type of controller that can ensure a specified number of pods are always running. ReplicaSets aren't recommended for direct use unless you understand a specific use case for them, and the reason is that ReplicaSets are still too low-level. They can ensure that you have the number of pods that you want, though they don't have any sort of control over deployments: if you wanted to update the pods to use a new image, you'd have to figure out for yourself how to deploy those new pods without interrupting the service. So, Kubernetes provides higher-level controllers such as Deployments. A Deployment manages ReplicaSets, which in turn manage pods, and it has built-in functionality for rolling updates. Deployments define the pods that should be running with a pod template specification; Kubernetes compares what currently exists against the pod template and rolls out new pods without downtime.
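As a sketch of what that rolling-update behavior looks like in practice (the Deployment name, container name, and image tags here are hypothetical):

```shell
# Change the image in the pod template; the Deployment creates a new
# ReplicaSet and gradually replaces old pods with new ones.
kubectl set image deployment/web-server nginx=nginx:1.18

# Watch the rollout progress, or roll back if something goes wrong.
kubectl rollout status deployment/web-server
kubectl rollout undo deployment/web-server
```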
Up to this point, the concepts that we've covered would allow us to deploy our container images, though it's still not clear how we would interact with them over the network. Every pod gets an ephemeral IP address. It's something we can use to interact with a pod directly, and for debugging that can be effective. However, Kubernetes has a concept called a Service, which can become an entry point for a group of pods. Services get an IP address and distribute traffic to a specific group of pods. The default Service type is ClusterIP, which creates an entry point for the specified pods that's reachable from anywhere inside the cluster. NodePort Services create an entry point that directs traffic from a specified port on each node to the pods, and LoadBalancer Services build on NodePort to add the additional functionality of load balancing.
The primary interface for actually using Kubernetes is the kubectl binary. It's part of Kubernetes, not just GKE; you can download it for yourself through the Kubernetes website, or you can install it as a gcloud component. Kubernetes uses desired state configuration, which means you specify what you want and Kubernetes works to make it so. Kubectl can manage resources directly on the command line or by defining the desired state in YAML files. Both work, though YAML allows the configuration to be version controlled, which is always good.
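For instance, installing kubectl as a gcloud component and pointing it at a GKE cluster might look like this (the cluster name and zone are illustrative):

```shell
# Install kubectl alongside the gcloud CLI.
gcloud components install kubectl

# Fetch credentials so kubectl can talk to our GKE cluster.
gcloud container clusters get-credentials demo-cluster --zone us-central1-a
```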
Let's perform two deployments. First we'll create an actual Deployment, which will use a ReplicaSet to create three copies of a pod. The pod includes one container, which just runs nginx. After that, we'll create a Service so that we can load balance traffic to the pods in our Deployment. Here's a look at the YAML for the Deployment. We're telling Kubernetes that we want three replicas of a pod with one container running nginx, which pulls from the public Docker registry. The command here is kubectl apply. Apply expects a YAML file, and for that we use the -f flag. Once this is complete, we can view our resources with the kubectl get and kubectl describe commands: get deployments lists all of the deployments, and describe will inspect an individual deployment. With these pods available, we can now interact with them over the network using a pod's IP address, that ephemeral IP address I mentioned previously. However, Services allow us to have a single entry point for a group of pods.
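The YAML shown on screen isn't reproduced in the transcript; based on the description (three nginx replicas, and the run: web-server label that the Service selects on later), it likely resembles this sketch:

```yaml
# deployment.yaml - three replicas of a single-container nginx pod.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-server
spec:
  replicas: 3
  selector:
    matchLabels:
      run: web-server
  template:
    metadata:
      labels:
        run: web-server
    spec:
      containers:
        - name: nginx
          image: nginx     # pulled from the public Docker registry
          ports:
            - containerPort: 80
```

Applied with `kubectl apply -f deployment.yaml`, then inspected with `kubectl get deployments` and `kubectl describe deployment web-server`.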
So, let's create one, and we can do that in the same file as our Deployment, though we could also create a separate file. The resource kind here is a Service of type LoadBalancer, and it maps port 80 to port 80 on the pods. Again, this is created with kubectl apply, just like everything else specified in YAML. This will create the Service in Kubernetes fairly quickly, though behind the scenes it's actually creating a load balancer for us so that we can make this accessible to the outside world. By creating this in Kubernetes, we're telling Kubernetes this is what we want; Kubernetes has said, "I understand," and it's up to Kubernetes to go figure that out. So, these sorts of commands, even if they respond and say the resource was created, may take a little while to actually implement whatever it is we've asked of them.
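Again, the on-screen YAML isn't in the transcript; a Service matching that description would look something like this sketch:

```yaml
# service.yaml - a LoadBalancer Service mapping port 80 to the pods.
apiVersion: v1
kind: Service
metadata:
  name: web-server
spec:
  type: LoadBalancer
  selector:
    run: web-server   # matches the label on the Deployment's pods
  ports:
    - port: 80
      targetPort: 80
```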
So, I'm going to jump a few minutes ahead, and here is our created Service. The external IP address is the IP of the load balancer for this Service, which will route traffic from the internet to our pods. A Service knows which pods to include based on the selector provided; in this case, it's using the label from the Deployment, run with the value of web-server, and it found our replicas. Because Kubernetes is label based, it allows for really interesting deployment options: similar things can share labels, and then we can differentiate where needed. A canary, for example, could get its own label, so that we could roll a canary out and see how it performs inside of a deployment. So, when you're thinking about deployments with Kubernetes, remember how powerful selectors are in allowing us to target specific pods that should all receive traffic from the same front-end source.
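As a hypothetical sketch of that canary pattern (all names here are made up): two Deployments can share an app label that the Service selects on, while a second label distinguishes the tracks:

```yaml
# The stable Deployment's pod template would carry:
#   labels: { app: web, track: stable }
# The canary Deployment's pod template would carry:
#   labels: { app: web, track: canary }
# A Service selecting only on the shared label sends traffic to both:
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web        # matches stable and canary pods alike
  ports:
    - port: 80
```

Widening or narrowing the selector is all it takes to include or exclude the canary pods from live traffic.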
Alright, with all of the moving parts in a Kubernetes cluster, there's a lot to monitor, and it's not just about applications; we also need to keep tabs on the cluster itself. So, let's finish our earlier discussion about Stackdriver. Stackdriver Kubernetes Engine Monitoring is a single pane of glass that displays cluster-level metrics, and from inside Stackdriver, on the Kubernetes resource, we get details for the whole cluster. Infrastructure details show metrics for the system processes, which run as pods, and we get workload details, including basic logs from a container's standard error and standard output. We can still use the log viewer to further inspect the logs, and there's a kubectl command that allows you to inspect the logs of current and previous pods; this assumes, of course, that you're writing to standard out or standard error. If you have a pod that's running more than one container and you need to see the logs for just one container, you can filter the logs by container name as well. If you don't yet have a lot of experience with Kubernetes or with logging from pods, then I recommend checking out the documentation on the Kubernetes website (https://kubernetes.io/docs/concepts/cluster-administration/logging/), especially the logging architectures section. If terms such as DaemonSet or sidecar are new to you, then it's probably worth a read.
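The kubectl command in question is kubectl logs; here are a few of its forms (the pod and container names are illustrative):

```shell
# Logs from a pod's (single) container.
kubectl logs web-server-7d4b9c-abcde

# Logs from the previous instance of a restarted container.
kubectl logs web-server-7d4b9c-abcde --previous

# Logs from one specific container in a multi-container pod.
kubectl logs web-server-7d4b9c-abcde -c nginx
```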
Alright, the container image that we deployed is from the public Docker registry. In production, we'll probably end up using our own images. GKE has some complementary services, namely Cloud Build and Container Registry. Cloud Build, as the name suggests, is a build service in the cloud, and Docker container images are one type of resource that it knows how to build.
Here's a Dockerfile for this demo. It uses a minimal Alpine base, and it creates a shell script that just loops every second and echoes to standard out. The gcloud builds submit command takes a tag name as input, takes the current context for Docker, and uploads the context and Dockerfile. It builds the image and then pushes it to the Container Registry. Now, this is really just a thin wrapper around Docker, allowing us to have images built in the cloud. Besides building Docker images from a Dockerfile, we can also create a build config where we specify the steps that should be taken for each phase of the build. Rather than specifying the Dockerfile, we could provide Cloud Build with the config file, and it will run through each of the steps and produce an artifact.
With this image built, we can use it as the image for our pod. So, let's use it in the console and actually deploy it. Using the select button, we can select the image and version; of course, we only have the one version in this case. Once the instance is running, it's going to echo some text to standard out, and because it's just piping that to standard out, Stackdriver Logging can pick it up without any additional configuration.
Alright, that's going to wrap this up. This isn't easy for everyone to pick up quickly. Kubernetes, no matter how well-engineered, is a complex system, and it's ever changing. Before taking the exam, make sure you're comfortable with deploying applications and creating Services, not just LoadBalancers but ClusterIPs and NodePorts for internal traffic.
Deploy containerized services, break stuff, try to diagnose it, and then fix it. Thank you so much for watching, and I will see you in another lesson.
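To recap the Cloud Build demo, the Dockerfile and build command described earlier might look like this sketch (the image name, tag, and [PROJECT_ID] placeholder are illustrative):

```shell
# Write the Dockerfile described in the demo: a minimal Alpine base
# running a loop that echoes to standard out every second.
cat > Dockerfile <<'EOF'
FROM alpine
CMD ["sh", "-c", "while true; do echo hello; sleep 1; done"]
EOF

# Build the image in the cloud and push it to Container Registry;
# replace [PROJECT_ID] with your GCP project ID.
gcloud builds submit --tag gcr.io/[PROJECT_ID]/looper:v1 .
```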
Ben Lambert is a software engineer and was previously the lead author for DevOps and Microsoft Azure training content at Cloud Academy. His courses and learning paths covered Cloud Ecosystem technologies such as DC/OS, configuration management tools, and containers. As a software engineer, Ben’s experience includes building highly available web and mobile apps. When he’s not building software, he’s hiking, camping, or creating video games.