Deploying Applications, Services, and Cloud Functions
NOTICE: This course is outdated and has been deprecated
Modern software systems have become increasingly complex. Cloud platforms have helped tame some of the complexity by providing both managed and unmanaged services. So it’s no surprise that companies have shifted workloads to cloud platforms. As cloud platforms continue to grow, knowing when and how to use these services is important for developers.
This course is intended to help prepare individuals seeking to pass the Google Cloud Professional Cloud Developer Certification Exam. The Cloud Developer Certification requires a working knowledge of building cloud-native systems on GCP. That covers a wide variety of topics, from designing distributed systems to debugging apps with Stackdriver.
This course focuses on the third section of the exam overview, specifically the points that cover deploying applications using GCP compute services.
Learning Objectives
- Implement appropriate deployment strategies based on the target compute environment
- Deploy applications and services on Compute Engine and Google Kubernetes Engine
- Deploy an application to App Engine
- Deploy a Cloud Function
Intended Audience
- IT professionals who want to become cloud-native developers
- IT professionals preparing for Google’s Professional Cloud Developer Exam
Prerequisites
- Software development experience
- Docker experience
- Kubernetes experience
- GCP experience
Hello and welcome. In this lesson, we'll be talking about four conceptual, zero-downtime deployment methods and how these concepts apply to Google Cloud's compute options. We're going to cover blue/green, rolling, and canary deployments, as well as traffic splitting. I'll describe each, and then we'll talk about how they relate to the different services.

Let's start with blue/green deployments. This one is conceptually simple, though not always in practice. You have two environments, arbitrarily called blue and green, and you can toggle between them. Imagine your green environment is currently serving traffic. You can use the blue environment to test out the latest version, and once you're happy with the results, you switch the traffic over to blue. Then you just repeat this process.

Rolling deployments progressively replace a resource with another version until everything has been updated. Imagine you have five resources all on version 100, and you want to roll out version 101 without impacting users. You update the resources one at a time, making sure there are no failures, until everything is up to date.

Canary deployments get their name from a mining practice: canaries were brought into coal mines because their death was an indicator of lethal gases. The process is similar in software, though without the potential ethical debates. A new version is introduced into the current group of resources and monitored. If there are problems, only a small portion of the total user base experiences them. Once everything's working as it should, once you're happy with the canary's success, that version can be fully deployed.

Traffic splitting diverts a portion of traffic to a different version of a resource, and there are a lot of use cases for this. However, imagine the classic A/B testing use case.
You have two versions that you want to see how users respond to, so you split the traffic between them, monitor for whatever it is you're looking for, and once you know which one is more successful, that's the version you fully deploy.
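To make the traffic-splitting idea concrete, here's a toy simulation of a 90/10 weighted splitter. This is purely illustrative (the percentages and request count are arbitrary, and no GCP service works this way internally); it just shows that a weighted random draw per request produces roughly the configured split over many requests.

```shell
# Simulate 1000 requests through a hypothetical 90/10 traffic splitter.
# Each request is independently routed: ~90% to "stable", ~10% to "canary".
counts=$(awk 'BEGIN {
  srand()
  stable = 0; canary = 0
  for (i = 0; i < 1000; i++) {
    if (rand() < 0.90) stable++   # 90% of requests hit the stable version
    else canary++                 # 10% of requests hit the canary version
  }
  print stable, canary
}')
stable=${counts% *}
canary=${counts#* }
echo "stable=$stable canary=$canary"
```

Because each request is routed independently, the exact counts vary from run to run, but the stable version always receives the overwhelming majority of traffic, which is exactly the property a canary rollout relies on.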
Okay, with these high-level ideas in mind, let's go through and see how the different services apply these methods.

We can start with Cloud Functions, because this is an easy one. At the moment, there are no native deployment mechanisms: we deploy a function, it becomes the current version, and that's it. While we could implement our own blue/green deployment, maybe using CNAME records, it's not built-in functionality. So let's move on to App Engine.

App Engine has built-in service versioning and traffic splitting. Traffic splitting allows for A/B testing as well as canary deployments, because you can divert a small percentage of traffic to the canary and monitor it for errors. If all goes well, we specify that 100% of the traffic should go to the canary version, and it becomes the live version. Versioning allows for blue/green deployments, because we can use a version-specific URL to test out changes, and then we cut over to that version when we're ready.

Compute Engine offers native support for rolling and canary deployments through managed instance groups. When creating or updating a managed instance group, you can specify a new template to deploy, as well as different deployment configuration settings, giving us a level of control over the way new instances are rolled out. The maxSurge property specifies how many instances over the group's target size are allowed to be created during the rollout; if you omit it, the default value is one. The maxUnavailable property specifies how many instances can be unavailable at a given time during the deployment. Because we can perform a rolling update based on a specific template, there isn't a built-in rollback mechanism, though there doesn't need to be: we can just specify a previous template and roll the group back to it.
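The App Engine and Compute Engine workflows described above can be sketched with gcloud. Names like `my-mig`, `my-template-v2`, the version labels, and the zone are placeholders, and flags may change between gcloud releases, so treat this as a sketch rather than a copy-paste recipe.

```shell
# --- App Engine: canary via traffic splitting ---
# Deploy a new version without routing traffic to it (blue/green-style:
# it gets a version-specific URL like https://v2-dot-PROJECT.appspot.com).
gcloud app deploy app.yaml --version=v2 --no-promote

# Canary: send 10% of traffic to v2, keep 90% on v1, then monitor.
gcloud app services set-traffic default \
    --splits=v1=0.9,v2=0.1 --split-by=random

# Happy with the canary? Cut over 100% of traffic.
gcloud app services set-traffic default --splits=v2=1

# --- Compute Engine: rolling update of a managed instance group ---
# Roll out a new template one instance at a time, never dipping below
# the target size (maxSurge=1, maxUnavailable=0).
gcloud compute instance-groups managed rolling-action start-update my-mig \
    --version=template=my-template-v2 \
    --max-surge=1 --max-unavailable=0 \
    --zone=us-central1-a

# "Rollback" is just another rolling update to the previous template.
gcloud compute instance-groups managed rolling-action start-update my-mig \
    --version=template=my-template-v1 \
    --zone=us-central1-a
```

Note that `rolling-action start-update` also accepts a `--canary-version` flag, which lets a managed instance group run the new template on only a portion of the group, which is the native canary mechanism mentioned above.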
Finally, let's talk about Kubernetes Engine and how it applies these different principles. This is an interesting one, because Kubernetes is a container orchestration platform, which means we have a lot of flexibility in the way containers are deployed, as well as in controlling the flow of traffic to them. So let's go through each deployment method and see how Kubernetes Engine supports it.

A best practice for application deployment with Kubernetes is to use a Deployment controller. Deployments specify how many copies of a pod should be running, and then it's up to Kubernetes to make it happen. If you specify that you want three replicas of a web application pod, Kubernetes will make sure you have three running. Behind the scenes, Deployments perform a rolling update, which does have a built-in rollback mechanism.

Canary deployments are possible by creating two Deployments: one is the production version and the other is the canary. The trick to getting this to work is to make sure that the front-end Service includes the pods from both Deployments. This is easy to do if we ensure the production and canary pods both carry the labels that the Service's selector matches on. It's through this same mechanism that we can do A/B testing, because we could create two Deployments with an equal number of pods and load balance the traffic evenly. And with some modification, this can even work for blue/green as well.

All right, that's going to do it for this lesson. I hope this primer on these different deployment methods has been helpful. Thank you so much for watching, and I will see you in another lesson.
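As a footnote to the lesson, the two-Deployments-one-Service canary pattern described above can be sketched in manifests. All names, images, and label values here are hypothetical; the key idea is that the Service selects only on `app: web`, so pods from both Deployments receive traffic, and the replica ratio (3 stable to 1 canary) roughly determines the traffic share.

```shell
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-prod
spec:
  replicas: 3
  selector:
    matchLabels: {app: web, track: stable}
  template:
    metadata:
      labels: {app: web, track: stable}   # "app: web" matches the Service
    spec:
      containers:
      - name: web
        image: gcr.io/my-project/web:1.0.0
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-canary
spec:
  replicas: 1                             # 1 of 4 pods, so roughly 25% of traffic
  selector:
    matchLabels: {app: web, track: canary}
  template:
    metadata:
      labels: {app: web, track: canary}   # shares "app: web", so it's included too
    spec:
      containers:
      - name: web
        image: gcr.io/my-project/web:1.1.0
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web        # no "track" key: selects pods from BOTH Deployments
  ports:
  - port: 80
    targetPort: 8080
EOF

# The built-in rollback for a Deployment's rolling update:
kubectl rollout undo deployment/web-prod
```

Promoting the canary is then just updating `web-prod` to the new image (triggering its rolling update) and scaling or deleting `web-canary`; for A/B testing, you'd give both Deployments equal replica counts instead.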
Ben Lambert is a software engineer and was previously the lead author for DevOps and Microsoft Azure training content at Cloud Academy. His courses and learning paths covered Cloud Ecosystem technologies such as DC/OS, configuration management tools, and containers. As a software engineer, Ben’s experience includes building highly available web and mobile apps. When he’s not building software, he’s hiking, camping, or creating video games.