Google Cloud Platform: Fundamentals
If you’re going to work with modern software systems, then you can’t escape learning about cloud technologies. And that’s a rather broad umbrella. Across the three major cloud platform providers there are a lot of different service options, and there’s a lot of value in all of them.
However, the area where I think Google Cloud Platform excels is providing elastic, fully managed services. To me, Google Cloud Platform is the optimal cloud platform for developers. It provides so many services for building out highly available, highly scalable web applications and mobile back ends.
Google Cloud Platform has quickly become my personal favorite cloud platform. Now, opinions are subjective, but I’ll share why I like it so much.
I’ve worked as a developer for years, and for much of that time I was responsible for getting my code into production environments and keeping it running. I worked on a lot of smaller teams where there were no operations engineers.
So, here’s what I like about the Google Cloud Platform: it allows me to think about the code and the features I need to develop without worrying about the operations side, because many of the service offerings are fully managed.
So things such as App Engine allow me to write my code, test it locally, run it through the CI/CD pipeline, and then deploy it. And once it’s deployed, for the most part, unless I’ve introduced some software bug, I don’t have to think about it. Google’s engineers keep it up and running and highly available. And having Google as your ops team is really cool!
Another thing I really like is the ease of use of services such as BigQuery and the Machine Learning APIs. If you’ve ever worked with large datasets, you know that some queries take forever to run. BigQuery can query massive datasets in just seconds, which allows me to get the data I need quickly so I can move on to other things.
And with the Machine Learning APIs, I can use a REST interface to do things like language translation or speech-to-text with ease. That allows me to integrate these capabilities into my applications, which gives end users a better experience.
So for me personally, I love that I can focus on building out applications and spend my time adding value for end users.
If you’re looking to learn the fundamentals of a platform that’s not only developer-friendly but also cost-friendly, then this is the right course for you!
By the end of this course, you'll know:
- The purpose and value of each product and service
- How to choose an appropriate deployment environment
- How to deploy an application to App Engine, Container Engine, and Compute Engine
- The different storage options
- The value of Cloud Datastore
- How to get started with BigQuery
This is an intermediate-level course because it assumes:
- You have at least a basic understanding of the cloud
- You’re at least familiar with building and deploying code
What You'll Learn
Summary: A review of the course
| Lecture | What you'll learn |
| --- | --- |
| Intro | What will be covered in this course |
| Introducing Google Cloud Platform | An introduction to the Google Cloud Platform |
| Getting Started | A review of projects and permissions |
| App Engine and Cloud Datastore | An intro to the PaaS option for building web apps and the NoSQL database that works so well with App Engine |
| Cloud Storage Options | What options exist for data storage? |
| Container Engine | How do we run Docker containers in the cloud? |
| Compute Engine | The IaaS option on Google Cloud |
| Big Data and Machine Learning | What options exist for data processing and machine learning? |
Welcome back to Google Cloud Platform: Fundamentals. I'm Ben Lambert, and I'll be your instructor for this lesson. In this lesson, we're going to talk about Google Container Engine. We'll cover an introduction to containers, we'll talk about Kubernetes, and then we'll talk about how it all ties in to Container Engine, which is usually abbreviated GKE.
Containers are a popular topic right now. Docker, in particular, has made quite an impression on the technical community. Now currently, Container Engine uses Docker as its container of choice. It's possible that they'll add others in the future, such as Rocket; however, since it's using Docker currently, that's the container that we're going to talk about.
Before we dig into Container Engine, let's cover a bit about what containers are and why they're becoming so popular. So, what is a container? Virtual machines gave us the ability to install an entire, fully functional operating system running on a host operating system. This means that if you're running a Mac, you could run something like VirtualBox or VMware, and then you could install virtual machines of a different operating system.
You could run Linux and Windows and interact with them in the same way, as if they were installed directly on your computer. And that's because VirtualBox and VMware act as a virtual computer. Now these virtual computers work by emulating the hardware of the computer, but they do it with software. This allows virtual machines to access the hardware that they expect, and the virtual machine doesn't need to know that it's not interacting with actual hardware.
So a virtual machine allows you to take an entire operating system and run it on a virtual server, which is a fantastic thing. However, operating systems tend to be large, and they can take a while to boot up. And if you're a developer and you wanna share your code with someone else, you'll need to either provide instructions on how to get that code up and running or share it inside of a virtual machine.
Now if you share the virtual machine, do you share it with the code pre-loaded or with some sort of provisioning mechanism? So there are things that you have to consider. Because if you send someone a full VM and then you make changes to some of the code, you'll need to send that whole thing to them again.
Now, you may be wondering how any of this relates to containers, which is a good question. Virtual machines work because they pretend to be a computer's hardware. Now, containers are similar in many ways except they're lighter weight than virtual machines. Containers bundle up your application and any of the binaries and libraries that your code needs into a single unit that we call a container, and these containers share the kernel of the host operating system, and for the sake of a simplistic definition, we're gonna say that the kernel is the brain of your computer.
It's the kernel's job to manage memory and interact with devices such as your printer, speakers, and microphone. So if the kernel is the brain, then the rest of the OS is kind of the personality. It includes all the software that runs on a given OS, things such as the package manager, which make systems such as Ubuntu and Red Hat different personalities.
Docker is considered OS-level virtualization, whereas virtual machines are considered hardware-level virtualization. So Docker uses the kernel of the host operating system, which means a container doesn't need to be as large as a VM, because we don't need to package up everything; we just package the things we need.
We're packaging basically just the personality. Because Docker containers don't need to boot up an OS, they start as a process on the host operating system, so they start up much more quickly. So, containers allow us to build our code into a container, bundle it up, and have it deployed anywhere with the same results.
Docker containers allow us to create a consistent environment that contains everything our code needs to run. Because Docker does its virtualization at the operating-system level, it tends to use far fewer resources than virtual machines, since you don't have the overhead of each container running a full operating system.
The level of consistency, the ease of rollbacks, the speed of container start-up and deployment, and the ability to share container images through Docker registries all amount to a very powerful way to develop, deploy, and operate applications. So you might use containers for things like consistency between development, test, and production environments, since you'll be running your application code inside the exact same environment it was developed for.
Or if you need to migrate between on-premises and cloud providers, because containers are a great way to run the same code in multiple places. Let's actually create a container that we'll deploy later on in this lesson. In this directory, we have a Dockerfile and a server.js file. Let's look at the Dockerfile first.
At the top, it declares the base image that it's going to use; then it opens up port 8080, copies some source code into the image, and tells Docker what command to run on start-up. And if we switch over and look at the server.js file, you can see this is a fairly vanilla Node.js Hello, World!
application. Now before we containerize this, let's just make sure the Node app works by running it here. Since it's running on port 8080, we can use the web preview functionality of Cloud Shell. Okay, here's our simple Hello, World! app, so that's working. Now let's build the Docker container with the build command.
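The two files just described might look roughly like this. The lesson doesn't show their exact contents, so the base image tag and file bodies below are assumptions reconstructed from the narration.

```shell
# Recreate the assumed project files; names and contents are a sketch,
# not the course's exact sources.
mkdir -p ca-demo

# Dockerfile: base image, exposed port, source copy, start command.
cat > ca-demo/Dockerfile <<'EOF'
FROM node:4.4
EXPOSE 8080
COPY server.js .
CMD ["node", "server.js"]
EOF

# server.js: a vanilla Node.js "Hello, World!" HTTP server on port 8080.
cat > ca-demo/server.js <<'EOF'
var http = require('http');
http.createServer(function (req, res) {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello World!');
}).listen(8080);
EOF
```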
So this is the first time it's being built, so it's going to take a little bit longer, because it's pulling down the base images, the images that make up the Node container. Okay, so now that that's done, let's run it with the docker run command. Okay, now this should be up and running, and we can check that with the web preview, so let's check it once again, and great, it says "Hello World".
To show the containers that are running, we can use the docker ps command, and with the docker kill command we can pass in the container ID that we get from docker ps and kill this container. This will stop the container from running. So let's just verify that we've stopped it. Okay, great. So now it's no longer running, but that's how easy it is to get started once Docker is installed.
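The build, run, inspect, and kill cycle narrated above boils down to a handful of commands. The image name ca-demo is an assumption carried over from the lesson's later references.

```shell
# Build the image from the Dockerfile in the current directory.
docker build -t ca-demo .

# Run it detached, mapping container port 8080 to the host
# so Cloud Shell's web preview can reach it.
docker run -d -p 8080:8080 ca-demo

# List running containers, then stop ours by its container ID
# (substitute the ID printed by docker ps).
docker ps
docker kill <container-id>
```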
Now, once you have at least one container image, you'll need to start thinking about managing that container, and that's where Kubernetes comes in. Kubernetes is an open-source container management system. It'll handle things like deployment and scaling of containers. Kubernetes thinks more in terms of services than containers, allowing us to keep our services up and running regardless of how many containers are required.
It allows us to consider individual containers as just part of the service that our containers are providing. Kubernetes is based on the last decade of experience Google's had with containers, and it was built to work on public, private, and hybrid clouds, and this is cool because it helps avoid vendor lock-in.
So, if Docker is a container engine, and Kubernetes is a container management platform, what exactly does Google Container Engine provide? Container Engine is the combination of Docker, Kubernetes, and Google's expertise, resulting in a very powerful cluster management and orchestration system for our Docker containers.
It will schedule containers into the cluster, and manage them automatically based on the requirements that we define. These are things such as CPU and memory. And because it's built on an open-source platform, it gives us the flexibility of using on-premises, hybrid, and public cloud infrastructures with the same platform.
Now, there are a couple of additional services that work with Container Engine. The first is Container Builder. What it will do is allow us to build containers based off of application code that we have in cloud storage. And the other is Google Cloud Registry, which provides us with a private Docker container storage.
This allows us to upload our containers to the registry, and then we can download them to other systems, such as our CI server, to test them out and optionally deploy them. Let's actually deploy the container that we built earlier. First, we're going to need to create a cluster. So, we'll start by giving it a name, and we'll call it ca-cluster, and you can set the zone for the cluster here, though I'm going to leave this where it is.
You can also set the machine type for the underlying virtual machines. There's a setting here for a node image, which determines which operating system the underlying VMs will use. The new default is Google's Container-Optimized OS, referred to as COS here, and the other option is the container VM image, which is based on Debian 7.
Since Kubernetes is about providing you with a compute pool, this Totals section gives you an at-a-glance view of your CPU and memory, so adjusting the size of the cluster or the machine type will update these stats here. Okay, before I create this, notice the links at the bottom give you the REST or command-line info that you can grab and use to run the same steps from the command line.
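As a rough sketch, that command-line equivalent might look like the following. The zone, machine type, and node count here are illustrative assumptions, not the lesson's exact values.

```shell
# Create a Container Engine (GKE) cluster from the command line.
# All flag values are placeholders; adjust them to your project.
gcloud container clusters create ca-cluster \
  --zone us-central1-a \
  --machine-type n1-standard-1 \
  --num-nodes 3
```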
This can be really useful if you're integrating this sort of thing into a continuous integration or continuous delivery process. Okay, this will end up taking about seven minutes to run, so I'm going to fast-forward to once it's complete. Okay, we're back. Before getting started, let's click here on the Connect button, which provides a copy-and-paste-ready command that will set the credentials for the kubectl command.
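The Connect button's command is typically of this form; the project ID and zone below are placeholders.

```shell
# Fetch cluster credentials so kubectl can talk to the new cluster.
gcloud container clusters get-credentials ca-cluster \
  --zone us-central1-a --project my-project-id

# Sanity check: kubectl should now reach the cluster.
kubectl cluster-info
```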
So let's copy this, and let's paste it into our cloud shell instance. Great. Now, when we're ready to use the cube control command, we'll already be authenticated. Now, let's start by listing off the container images with Docker. So here we have the CA-demo container from earlier. In order to get Container Engine to see our container, we need to upload it to an accessible container registry.
First, let's tag our image, and for that, I'll paste in this command here that I've already copied to my clipboard. This command will tag the image so that we can push it up to the Container Registry. Listing the images again, you can see that we have another image listed; however, it has the same image ID as our ca-demo container.
Now, once it's tagged, we can push it to the registry, and for that we can use the gcloud docker command. So, I'm going to paste in another command here. Great. This is going to take a little while to run, so I'm going to fast-forward, and we're back. Okay, this ended up taking a few minutes. With the image uploaded to the Container Registry, we can now tell Kubernetes to run the container, and for that we'll use the kubectl run command.
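The tag-and-push steps might look roughly like this; my-project-id is a placeholder, and note that newer gcloud releases replace the gcloud docker wrapper with a credential helper.

```shell
# Tag the local image with its Container Registry destination.
docker tag ca-demo gcr.io/my-project-id/ca-demo:v1

# Push via the gcloud wrapper, as was current when this was recorded.
gcloud docker -- push gcr.io/my-project-id/ca-demo:v1

# On newer gcloud versions, configure a credential helper instead:
#   gcloud auth configure-docker
#   docker push gcr.io/my-project-id/ca-demo:v1
```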
So, let me paste this in. Okay, notice that it prints some text to the console saying that it created a deployment. Using the kubectl command, we can see the deployments, so let's list our deployments, and there we go. So now we have a deployment running; however, before we can view the Node application that's running in the container, we need some sort of public-facing endpoint for that container.
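The run-and-verify steps just described might be sketched like this. In the kubectl versions current when this course was recorded, kubectl run created a Deployment; modern kubectl creates a bare Pod instead, so kubectl create deployment is the present-day equivalent.

```shell
# Start the container on the cluster (older kubectl: creates a Deployment).
kubectl run ca-demo --image=gcr.io/my-project-id/ca-demo:v1 --port=8080

# Confirm the deployment exists.
kubectl get deployments
```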
For that, we need a service. If I run the get service subcommand of kubectl, you can see that we currently don't have one. So, let's create a new service by exposing the deployment. For this, I'll again paste in a pre-written command, and notice that it creates the service for us.
Let's re-run the get service command to see that it's listed here. Great, so it is listed, but notice the external IP address is listed as pending. This can take a few minutes and isn't abnormal, so don't worry if it takes a little while when you try this. I'm going to wait just a minute and try this again.
Okay, welcome back. I'm going to run this command again, see if our IP address is ready, and it is, great. So now that we have this, we can paste this into the browser, and we can add colon 8080 to the end of the URL and we should see our Hello, World! application, so let's try that. And there we go, great!
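Putting the expose-and-check steps together, a sketch of the commands, with placeholder names, might be:

```shell
# Expose the deployment behind a public load balancer service.
kubectl expose deployment ca-demo --type=LoadBalancer --port=8080

# Repeat until EXTERNAL-IP changes from <pending> to a real address.
kubectl get services

# Then hit the app on port 8080, substituting the external IP shown.
curl http://<external-ip>:8080
```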
So, at a high level, this is an overview of Container Engine. So, Container Engine gives us a platform for managing our Docker containers, and it uses the open-source Kubernetes platform for that management. So again, this is a really cool thing because once we gain these skills learning Kubernetes, we'll be able to work on any cloud platform using this tool.
In our next lesson, we're gonna take a look at the infrastructure-as-a-service offering, Compute Engine. So, if you're ready, let's dive in!
About the Author
Ben Lambert is the Director of Engineering and was previously the lead author for DevOps and Microsoft Azure training content at Cloud Academy. His courses and learning paths covered Cloud Ecosystem technologies such as DC/OS, configuration management tools, and containers. As a software engineer, Ben’s experience includes building highly available web and mobile apps.
When he’s not building the first platform to run and measure enterprise transformation initiatives at Cloud Academy, he’s hiking, camping, or creating video games.