
Understanding Kubernetes

Overview

Difficulty: Intermediate
Duration: 54m
Students: 193
Rating: 3.4/5
Description

In this course, we will explore some of the tools available to build and manage development environments intended for deployment on Google Cloud Platform products. We will also demonstrate how to easily push builds from our local machine to Google-hosted services.

We will start the course by covering the different types of development environments and their purposes. We will touch briefly on popular software methodologies and frameworks as they relate to choices in number and type of development environments.

This course will focus on container-based application development environments, tools, and services. We will first walk through installing and using Docker and Kubernetes on your local machine. Then we will explore how to push projects to Google Cloud Run and Google Kubernetes Engine.

Writing applications using Kubernetes or Cloud Run can be further streamlined with Google Cloud Code, which provides direct IDE support for development on these platforms. We will examine how to install and use Google Cloud Code with Visual Studio Code.

Learning Objectives

  • Understand the types of development environments and when to use them
  • Install a container-based local development environment
  • Add Google Cloud Code support to VS Code
  • Push code from a local development environment and run it on Google Cloud Platform using:
    • Google Cloud Run
    • Google Kubernetes Engine
    • Google Deployment Manager

Intended Audience

  • Programmers interested in developing containerized applications on Google Cloud Platform
  • Solo developers new to working on a development team
  • Anyone preparing for the Google Professional Cloud DevOps Engineer certification

Prerequisites

To get the most out of this course, you should:

  • Have a Google Cloud Platform account
  • Have Google Cloud SDK installed and initialized
  • Be familiar with IAM role management for GCP resources
  • Have Visual Studio Code, Python 3, and Git installed

Knowledge of Python would also be beneficial for scripting with GCP, but it's not essential.

Transcript

In the last video, we installed WSL 2 and Docker Desktop 3 on our local Windows 10 development machine. But how do we actually create and use Docker Containers now, and what exactly does Kubernetes do for us? Let's take a quick look at each of the main components to get a better understanding of how they all fit together.

To start with, we'll package our application code into a Container, which is sort of like a lightweight virtual machine. We do this by creating a Dockerfile that tells Docker how to build our application into a Container when we run the docker build command. This typically means using an existing base image as a starting point and then copying our project files in on top of it. With this approach, we can have our demo running in a Container with only a few lines in a Dockerfile.
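As a rough sketch of what that looks like (the base image, file names, and tag here are illustrative placeholders, not the course demo itself), a Dockerfile for a small Python application might contain:

    # Start from an existing public base image
    FROM python:3.9-slim
    # Copy our project files in on top of it
    WORKDIR /app
    COPY . .
    # Install dependencies and tell Docker how to start the application
    RUN pip install -r requirements.txt
    CMD ["python", "app.py"]

Running docker build -t demo-app . in the project folder then produces a Container image we can start locally with docker run demo-app.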

Containers have some key differences from virtual machines, one of them being that a Container typically runs only a single service. While we could run multiple services on a single virtual machine, we'll need multiple Containers working together to accomplish the same tasks. This has some benefits for both security and scalability, but for a complex application it can quickly turn into a very large number of Containers to manage.

This is where Kubernetes comes in. Kubernetes is a Container orchestration system that will help us manage all these Containers as our application grows in both complexity and number of users. Kubernetes groups the Containers that need to work closely together into a Pod; a simple application may fit in a single Pod, while a more complex one will be spread across several. A kubelet agent makes sure the Containers in each of our Pods are running and healthy, and a kube-proxy provides networking services to the Containers in our Pods. Together, these components make up a Kubernetes Node.
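If we enable the local Kubernetes cluster that Docker Desktop can run (a toggle in its settings), we can already poke at these pieces with a few kubectl commands; the Pod name below is just a placeholder:

    # List the Nodes in our local Cluster
    kubectl get nodes
    # List the Pods running our application's Containers
    kubectl get pods
    # Show details for one Pod, including its Containers and the Node it runs on
    kubectl describe pod <pod-name>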

Nodes are managed together by the Kubernetes Control Plane as a Cluster. The Control Plane itself is made up of a half-dozen more components that together make sure our application always has the right number of Nodes, and that the Pods on those Nodes are always running and working properly. This allows us to automatically add Nodes to meet increased demand for our application, and remove Nodes to reduce our operating costs during off-peak hours, without ever worrying about the underlying networking or server infrastructure.
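On Google Kubernetes Engine, for example, that kind of automatic scaling can be requested when the Cluster is created. The command below is purely illustrative, with placeholder names, zone, and limits rather than anything used later in the course:

    # Create a GKE cluster that autoscales between 1 and 3 Nodes based on demand
    gcloud container clusters create demo-cluster \
        --zone us-central1-a \
        --num-nodes 1 \
        --enable-autoscaling --min-nodes 1 --max-nodes 3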

For a simple application, this can all seem like an overwhelming amount of extra infrastructure, especially if we are starting with just a single Container, in a single Pod, in a single Node, in a single Cluster, as we are in our example. Kubernetes really only begins to shine as an application scales up in size and complexity, where we actually benefit from the fault-tolerance and high-availability features that are baked in. Unfortunately, it can all seem a bit cumbersome and unnecessary in our development environment.

Luckily, we don't really need to know all the inner details of every single Kubernetes component in order to work with it on our development machine. We just need to know how to make a Dockerfile and a Kubernetes YAML file for our application! The Dockerfile tells Docker how to build our application Container, and the YAML file tells Kubernetes how to manage it. As long as we understand the basic concepts and know how to create these two files, Kubernetes suddenly becomes quite easy to use! In the next video, we will learn how to build these two files so we can easily deploy nearly any project to a Kubernetes Cluster.
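To give a rough sense of what that YAML file looks like, here is a minimal manifest for a single-Container application; the names, image tag, and port are placeholders, not the files we will build in the next video:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: demo-app
    spec:
      replicas: 1                    # how many copies of our Pod to run
      selector:
        matchLabels:
          app: demo-app
      template:
        metadata:
          labels:
            app: demo-app
        spec:
          containers:
          - name: demo-app
            image: demo-app:latest   # the image built from our Dockerfile
            ports:
            - containerPort: 8080

Applying it with kubectl apply -f deployment.yaml asks Kubernetes to create the Pod and keep it running for us.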

 

About the Author
Arthur Feldkamp
IT Operations Manager and Cloud Administrator, Database and API Integrations Specialist
Students: 259
Courses: 2

Arthur spent seven years managing the IT infrastructure for a large entertainment complex in Arizona, where he oversaw all network and server equipment and updated many on-premises systems to cloud-based solutions with Google Cloud Platform. Arthur is also a PHP and Python developer who specializes in database and API integrations. He has written several WordPress plugins, created an SDK for the Infusionsoft API, and built a custom digital signage management system powered by Raspberry Pis. Most recently, Arthur has been building Discord bots and attempting to teach a Python AI program how to compose music.