If you’re going to work with modern software systems, then you can’t escape learning about cloud technologies. And that’s a rather broad umbrella. Across the three major cloud platform providers, there are a lot of different service options, and there’s a lot of value in all of them.
However, the area where I think Google Cloud Platform excels is providing elastic, fully managed services. Google Cloud Platform, to me, is the optimal cloud platform for developers. It provides so many services for building out highly available, highly scalable web applications and mobile back-ends.
For me personally, Google Cloud Platform has quickly become my favorite cloud platform. Now, opinions are subjective, but I’ll share why I like it so much.
I’ve worked as a developer for years, and for much of that time, I was responsible for getting my code into production environments and keeping it running. I worked on a lot of smaller teams where there were no operations engineers.
So, here’s what I like about the Google Cloud Platform: it allows me to think about the code and the features I need to develop without worrying about the operations side, because many of the service offerings are fully managed.
So services such as App Engine allow me to write my code, test it locally, run it through the CI/CD pipeline, and then deploy it. And once it’s deployed, for the most part, unless I’ve introduced some software bug, I don’t have to think about it. Google’s engineers keep it up and running and highly available. And having Google as your ops team is really cool!
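As a sketch of how simple that workflow can be, here's a hypothetical App Engine deployment. The runtime and file contents below are illustrative, not taken from the course:

```shell
# A minimal app.yaml for a Python app on the App Engine standard environment.
cat > app.yaml <<'EOF'
runtime: python39
handlers:
  - url: /.*
    script: auto
EOF

# Deploy; from here on, Google manages scaling and availability.
gcloud app deploy app.yaml --quiet
```

After the deploy finishes, `gcloud app browse` opens the running application in a browser.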
Another thing I really like is the ease of use of services such as BigQuery and the Machine Learning APIs. If you’ve ever worked with large datasets, you know that some queries take forever to run. BigQuery can query massive datasets in just seconds, which allows me to get the data I need quickly, so I can move on to other things.
And with the machine learning APIs, I can use a REST interface to do things like language translation or speech-to-text with ease. That lets me integrate these capabilities into my applications, which gives end-users a better experience.
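For example, the Cloud Translation REST API can be called with a plain HTTP request. This is just a sketch; the API key is a placeholder, and it assumes the Translation API is enabled on your project:

```shell
# Placeholder credential; a real key comes from the GCP console.
API_KEY="your-api-key"

# Translate a string from English to Spanish via the v2 REST endpoint.
curl -s "https://translation.googleapis.com/language/translate/v2?key=${API_KEY}" \
  -H "Content-Type: application/json" \
  -d '{"q": "Hello, world", "source": "en", "target": "es", "format": "text"}'
```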
So for me personally, I love that I can focus on building out applications and spend my time adding value to the end-users.
If you’re looking to learn the fundamentals of a platform that’s not only developer-friendly but cost-friendly, then this is the right course for you!
By the end of this course, you'll know:
- The purpose and value of each product and service
- How to choose an appropriate deployment environment
- How to deploy an application to App Engine, Kubernetes Engine, and Compute Engine
- The different storage options
- The value of Cloud Firestore
- How to get started with BigQuery
This is an intermediate-level course because it assumes:
- You have at least a basic understanding of the cloud
- You’re at least familiar with building and deploying code

This course is intended for anyone who would like to learn how to use Google Cloud Platform.
Welcome back to Google Cloud Platform: Fundamentals. I'm Ben Lambert and I'll be your instructor for this lesson. In this lesson, we'll talk about Compute Engine and Networking, as well as Operational Tools. Let's start out with Compute Engine. There are some applications and processes where platform as a service options are just not ideal.
For those times, we have Compute Engine. With Compute Engine, we get a highly capable infrastructure as a service offering. It allows us to quickly start up virtual machines, load balancers, and more. Let's jump right into the console and check out what Compute Engine offers. You can see on the left-hand side, we have VM instances, Instance groups, Instance templates, Disks, Snapshots, Images, Metadata, Health checks, Zones, Operations, Quotas, and Settings.
All right, wow, that's quite a lot of features. Let's start by creating a new VM, so we can see what that process looks like. Let's create an Ubuntu instance. We're gonna call it ubuntu-demo, and we could change the machine type if we wanted to, but we'll just leave it as the default for now.
We can also change the boot image. If we had our own uploaded images, they'd show here, or if we had a snapshot, it would show here, and since we don't have any unattached disks, we won't see anything under the Existing disks tab. Let's go back and just select Ubuntu 16.04 LTS. Now, even though it's a demo, I still like using the Long Term Support versions.
I just do that out of habit. We can change the Service account that the instance runs under, though we're gonna leave it as the default. We can optionally change the scopes, but again, we're just gonna leave the default here. We'll add a couple of firewall rules for allowing HTTP and HTTPS traffic. Now, let's click on the Management, disks, networking, SSH keys link.
This is a really cool feature. It allows us to specify a startup script that will run any time the instance starts, and that includes reboots as well. So let's paste in a little bash script that I had already copied to my clipboard. What it does is install the Apache web server and echo some content into the index.html file.
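The course doesn't show the exact script, but a minimal startup script along these lines would do the job (the HTML content here is illustrative):

```shell
#!/bin/bash
# Install Apache and write a simple page.
# As a startup script, this runs on every boot, including reboots.
apt-get update
apt-get install -y apache2
echo "<h1>Hello from Compute Engine</h1>" > /var/www/html/index.html
```

If you're working from the command line instead of the console, a script like this (saved as, say, startup.sh) can be attached with `gcloud compute instances create ... --metadata-from-file startup-script=startup.sh`.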
As we scroll down, we see we have some different options for things like preemptible instances. What this means is if Compute Engine requires the resources back, it can terminate our preemptible instance. This is good for things like batch processing jobs where any one node in the cluster could stop without ending the job.
It would just slow the process down. Preemptible instances are less expensive because they aren't a guaranteed resource. So, we'll leave that at the default, and we're gonna leave the auto restart and host maintenance settings at their defaults too, and let's create this instance. It will take a moment, and when it's done, the startup script will have installed Apache, and the web server will be up and running with the HTML that we specified.
Let's see. Yup, there it is. It's displaying the content that we set in our startup script. Let's SSH in and see what we have. It's gonna take a little bit to connect; it's gonna transfer the keys over. Okay, now that we're in, if we run the uname command, we can see that we have an Ubuntu instance, which is what we'd expect, and if we display the content of our index.html, it shows us the markup that we used in our startup script. Everything looks good here. Let's exit out of this.

Okay, next up, let's check out Instance groups, which allow us to organize our VMs and use these groups in load balancers. We can select existing VMs to add to a group or use a template. Let's check it out using existing VMs. Okay, we're gonna give it a name.
We're gonna call it web-app-group and we'll set a zone, and we'll select the existing instance, we're gonna select our Ubuntu instance, and let's create this. Now, if we check it out, we can see the details for our group. Let's create a template next. Templates give us a way to create VMs based on settings that we predefine.
We'll give it a name, and we'll change the image here, make sure that these ports are opened up, and I wanna paste in our startup script so we can have these as web servers, and let's create that. If we drill into it, we can see the details, and we can even create an instance group based off this template.
So every instance added to that group will use the template. Let's do that. Let's name it template-based-group and set the number of instances to two, so we'll always have at least two instances. We're gonna add a Health check. This allows instances that aren't healthy to be replaced with new ones. Let's create that.
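For reference, roughly the same template, health check, and managed group could be created from the command line. All of the names, the machine type, and the zone below are illustrative, and startup.sh is assumed to exist locally:

```shell
# Template with HTTP allowed (via a network tag) and our startup script.
gcloud compute instance-templates create web-template \
  --machine-type=e2-medium \
  --image-family=ubuntu-1604-lts --image-project=ubuntu-os-cloud \
  --tags=http-server \
  --metadata-from-file=startup-script=startup.sh

# Health check used to detect and replace unhealthy instances.
gcloud compute health-checks create http basic-check --port=80

# Managed group of two instances, built from the template.
gcloud compute instance-groups managed create template-based-group \
  --zone=us-central1-a --size=2 --template=web-template \
  --health-check=basic-check --initial-delay=300
```

The `--initial-delay` gives each new instance time to finish its startup script before health checking begins.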
Okay, let's loop back to this; we'll look at it again later when we get to load balancers. For now, let's move on. We're gonna check out Disks. We only have the one from our VM, so there's not much to see here. We have Snapshots, which allow us to create a snapshot of a disk. Let's snapshot our Ubuntu instance's disk.
Snapshot this and we'll leave the default encryption. Okay. This is gonna take a moment, though when it's done, we're gonna have a backup of our instance. Images show us a list of all of our VM images and we can even create our own if we needed to. Next up, let's check out Metadata. I really like this feature.
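The same snapshot can be taken from the command line; the disk name and zone here are assumptions (by default, a VM's boot disk shares the instance's name):

```shell
# Snapshot the boot disk of the ubuntu-demo VM as a backup.
gcloud compute disks snapshot ubuntu-demo \
  --zone=us-central1-a --snapshot-names=ubuntu-demo-backup
```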
We can set metadata that can be queried by any instance in the project. This gives us a central location to manage metadata that should apply to all instances. Then we have Health checks, which we saw previously. We have Zones, which lists all of the zones and any instances or disks that we have in those zones.
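From inside any VM in the project, that project-wide metadata can be read from the metadata server. The `environment` key below is a hypothetical example of something you might set in the Metadata page:

```shell
# Runs on a Compute Engine VM; the Metadata-Flavor header is required.
curl -s -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/project/attributes/environment"
```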
Here we have Operations, which again is just a log of things that have happened. And we have our Quotas, so we can see how close we are to hitting any caps. Now let's loop back to our instance groups. Let's see them in action by creating a load balancer under the Networking section. We start by clicking on Load Balancing and creating a new one.
Right off, we're given some options. We have HTTP, HTTPS, TCP, and UDP. This gives us the ability to load balance website traffic or lower-level network traffic. This sort of power is fantastic. We could use this for something like a video game backend. We could easily distribute the UDP traffic for a multiplayer backend.
We'll select HTTP for our use and we'll set the name and then create the configuration. It will use the template-based instance group that we created earlier and we'll just make sure that the Health check is selected. Notice on the bottom, we have the option Enable Cloud CDN. That's all we need to do to make sure our requests are cacheable via the CDN.
Now that's a very cool feature. And we can basically accept the defaults for the rest of these options since they don't need to change. Yeah, everything looks good. All right, let's create this. Now, if we open up the load balancer in here, we'll see that we have an IP address. Let's grab that IP address and see what happens when we browse to it.
Okay, we get an error message. Now, this isn't unexpected. It can happen from time to time because we have a lot of things that are getting started up, so we'll just give it a second and we'll try it again, and there we go. So we have two VMs that are being served up on the load balancer and if one of them should die, it would automatically be replaced, which is really cool.
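For reference, an equivalent HTTP load balancer can be assembled from the command line. All names here are illustrative, they assume the managed group and health check from earlier, and adding `--enable-cdn` to the backend service is what turns on Cloud CDN:

```shell
# Backend service wired to our managed instance group.
gcloud compute backend-services create web-backend \
  --protocol=HTTP --health-checks=basic-check --enable-cdn --global
gcloud compute backend-services add-backend web-backend \
  --instance-group=template-based-group \
  --instance-group-zone=us-central1-a --global

# URL map, HTTP proxy, and a global forwarding rule complete the picture.
gcloud compute url-maps create web-map --default-service=web-backend
gcloud compute target-http-proxies create web-proxy --url-map=web-map
gcloud compute forwarding-rules create web-rule \
  --global --target-http-proxy=web-proxy --ports=80
```

The forwarding rule's external IP address is what you'd browse to, just as in the console demo.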
So, Compute Engine gives us the raw building blocks to create our own networks and instances, along with a highly scalable software-based load balancer that doesn't require pre-warming. We've talked about load balancers and the CDN; now let's take a look at a couple of the other networking features, such as DNS and VPN.
Cloud DNS allows us to use Google's ultra-fast domain name service. Let's just create a fake zone here so we can see it. So we can easily create a zone and manage it here, and we'll just delete this because it's not really useful for our demo. Cloud VPN is a secure VPN that lets us connect our Compute Engine resources to our own networks.
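As a sketch, the same kind of zone and a record could be created with gcloud; the domain and IP address below are placeholders:

```shell
# Create a managed DNS zone for a domain we control.
gcloud dns managed-zones create demo-zone \
  --dns-name="example.com." --description="Demo zone"

# Add an A record, e.g. pointing at a load balancer's external IP.
gcloud dns record-sets create www.example.com. \
  --zone=demo-zone --type=A --ttl=300 --rrdatas="203.0.113.10"
```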
So, now that we've talked about Compute Engine and some of the networking components, let's talk about some of the operational tools. First, we have Cloud Functions. Now, if you're familiar with Lambda on AWS, then you probably already understand the concept. Cloud Functions is a platform as a service option that allows you to think about code without having to consider things like managing the servers your code runs on.
This allows you to have single-purpose code that can respond to events, such as a new instance being created or a file on cloud storage being changed. With this, you can identify files that have changed and trigger some sort of code to execute in response. So let's say you wanted to be notified when a PNG file is written to a specific bucket, you could listen for that and then you could trigger a function that creates thumbnails of those PNGs.
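A deployment along those lines might look like this from the command line. The function and bucket names are placeholders, and a local main.py would need to define a matching entry point:

```shell
# Deploy a function that fires whenever an object is finalized (written)
# in the given Cloud Storage bucket.
gcloud functions deploy thumbnailer \
  --runtime=python39 \
  --trigger-resource=my-image-bucket \
  --trigger-event=google.storage.object.finalize
```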
Next, there's Cloud Source Repositories, which allows developers to collaborate with all of the code housed inside your own private repo. The cool feature is that it integrates with Stackdriver Debugger, which allows you to debug your code on the Google Cloud Platform should the need arise. Speaking of Stackdriver: Stackdriver is a suite of logging, monitoring, and debugging tools that can integrate with services such as PagerDuty.
With the monitoring tools, you use endpoint checks to determine whether your web application and other internet-accessible services are running in the cloud. You can configure uptime checks based on URLs, groups, or resources, which are things like instances and load balancers. With Logging, you can view, search, and filter logs from your cloud and open-source application services. If you need to process the logs, maybe to look for a pattern that might indicate some sort of potential security risk, you can always export them to something like BigQuery, Cloud Storage, or Pub/Sub.

The Stackdriver Trace functionality provides latency sampling and reporting for App Engine, including per-URL statistics. The Error Reporting functionality analyzes and aggregates the errors in your cloud applications, and it lets you know when errors are detected.
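As a sketch, both the searching and the exporting can be done from the command line; the project and dataset names below are placeholders:

```shell
# Search recent error-level log entries across the project.
gcloud logging read 'severity>=ERROR' --limit=10

# Create a sink that exports matching logs to a BigQuery dataset
# for later analysis.
gcloud logging sinks create error-sink \
  bigquery.googleapis.com/projects/my-project/datasets/error_logs \
  --log-filter='severity>=ERROR'
```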
This is really cool. Alerts allow us to create policies that inform us when certain metrics hit a threshold or when health checks for uptime fail. Error Reporting integrates with Slack, Campfire, HipChat, and PagerDuty, so there's a lot of integration there. Debugger is another cool feature, and I really like this one.
We just kind of mentioned it a moment ago when we talked about the integration between source repos and Stackdriver. This allows you to inspect the state of your application in a production environment. So if you're seeing an issue in production that you can't reproduce locally, you can use this tool to find out what's happening.
So Stackdriver is really more than any one thing. It's a collection of different tools, all combined into one really cool service, and this makes it easier to know what's going on with your application. It will help you be proactive in identifying potential problems, and should you run into a problem, it'll help you react by quickly giving you the information you need to locate the source of the problem.
The last tool that we're gonna look at is the Deployment Manager, which is an infrastructure as code option. This is really cool. In fact, we used it earlier when we created that LAMP stack. Do you remember earlier in the course? We can create templates ourselves using the same technology, so check this out.
I'm gonna create a Jenkins instance, and the reason is I wanna show you the YAML template that powers it, because there's no reason we can't create our own and then upload it with gcloud. Check out the right-hand side here. It sets some variables and then goes through and provisions the resources it needs, and that's it.
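A minimal config in the same spirit might look like this. The names, zone, and machine type are illustrative, not the Jenkins template from the video:

```shell
# A tiny Deployment Manager config describing a single VM.
cat > vm.yaml <<'EOF'
resources:
  - name: demo-vm
    type: compute.v1.instance
    properties:
      zone: us-central1-a
      machineType: zones/us-central1-a/machineTypes/e2-medium
      disks:
        - boot: true
          autoDelete: true
          initializeParams:
            sourceImage: projects/ubuntu-os-cloud/global/images/family/ubuntu-1604-lts
      networkInterfaces:
        - network: global/networks/default
EOF

# Create the deployment; deleting it later tears down everything it created.
gcloud deployment-manager deployments create demo-deployment --config=vm.yaml
```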
It's a simple way to gain consistent deployments. So, Compute Engine has a lot to offer. If you need raw compute resources, Google's world-class infrastructure and customer-friendly pricing make it an easy choice to use Compute Engine. In our next lesson, we're gonna talk about Big Data and Machine Learning.
If you're ready, let's check that out.
Ben Lambert is a software engineer and was previously the lead author for DevOps and Microsoft Azure training content at Cloud Academy. His courses and learning paths covered Cloud Ecosystem technologies such as DC/OS, configuration management tools, and containers. As a software engineer, Ben’s experience includes building highly available web and mobile apps. When he’s not building software, he’s hiking, camping, or creating video games.