Adding Kubernetes Support to a Project

Overview

Difficulty: Intermediate
Duration: 54m
Students: 45
Ratings: 5/5

Description

In this course, we will explore some of the tools available to build and manage development environments intended for deployment on Google Cloud Platform products. We will also demonstrate how to easily push builds from our local machine to Google-hosted services.

We will start the course by covering the different types of development environments and their purposes. We will touch briefly on popular software methodologies and frameworks as they relate to choices in number and type of development environments.

This course will focus on container-based application development environments, tools, and services. We will first walk through installing and using Docker and Kubernetes on your local machine. Then we will explore how to push projects to Google Cloud Run and Google Kubernetes Engine.

Writing applications using Kubernetes or Cloud Run can be further streamlined with Google Cloud Code, which provides direct IDE support for development on these platforms. We will examine how to install and use Google Cloud Code with Visual Studio Code.

Learning Objectives

  • Understand the types of development environments and when to use them
  • Install a container-based local development environment
  • Add Google Cloud Code support to VS Code
  • Push code from a local development environment and run on Google Cloud Platform using:
    • Google Cloud Run
    • Google Kubernetes Engine
    • Google Deployment Manager

Intended Audience

  • Programmers interested in developing containerized applications on Google Cloud Platform
  • Solo developers new to working on a development team
  • Anyone preparing for the Google Professional Cloud DevOps Engineer certification

Prerequisites

To get the most out of this course, you should:

  • Have a Google Cloud Platform account
  • Have Google Cloud SDK installed and initialized
  • Be familiar with IAM role management for GCP resources
  • Have Visual Studio Code, Python 3, and Git installed

Knowledge of Python would also be beneficial for scripting with GCP, but it's not essential.

Transcript

In the previous video, we learned the very basics about Kubernetes and how it manages containers for us.  In this video, we’ll see how all of that complexity is abstracted for us with the help of just a few simple configuration files.  This way, we can use Kubernetes on our local development projects, and even deploy to production Kubernetes environments, without needing to fully understand all the little things Kubernetes is actually doing under the hood.

We will start with a very simple Python “hello world” app, containing only a few lines of code to serve a webpage that returns “Hello world!” when we access it. Normally we would run this script directly and make sure we can see “Hello world!” when we access our localhost address in a browser. However, we need to make sure our app runs properly in an Ubuntu-based container, not on a Windows 10 desktop, since we want to match what our GCP production environment will use. We can do this by creating a Dockerfile for our “hello world” app, so that we can package it as a container that will run the same way regardless of hosting environment.
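For reference, a minimal version of such an app might look like the following sketch. The file name, function name, and port are illustrative, not the course's exact code.

    # main.py - a minimal Flask "hello world" app (illustrative, not the course's exact code)
    from flask import Flask

    app = Flask(__name__)

    @app.route("/")
    def hello():
        return "Hello world!"

    if __name__ == "__main__":
        # Running the script directly serves the page on localhost for a quick check in the browser.
        app.run(host="127.0.0.1", port=8080)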

The first line of our Dockerfile tells Docker to use the official GCP Python Runtime Docker Image as our starting point. This is an Ubuntu-based container image with Python already installed for us. The next few lines should be familiar to Python developers: they are console commands for creating a Python virtual environment and installing any dependencies required by our app using pip. The only requirements for our “hello world” app are Flask and gunicorn. We then add the folder with our application source code to the container. The last line in our Dockerfile tells the Docker container what command to run when it starts up, which would normally be the main script for our app. However, we are using gunicorn as a Web Server Gateway Interface (WSGI) server to make our Python app accessible from a web browser, so we will actually run gunicorn here and instruct it to serve our script on a specified port.
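Put together, a Dockerfile along these lines would cover those steps. This is a sketch, not the course's exact file: the public python:3.11-slim base image stands in for the Python runtime image used in the video, and the module name main:app and port 8080 are assumptions.

    # Start from a Python base image (stand-in for the Python runtime image used in the video).
    FROM python:3.11-slim

    # Create a Python virtual environment and put it first on the PATH.
    RUN python -m venv /env
    ENV PATH="/env/bin:$PATH"

    # Install the app's dependencies (Flask and gunicorn) with pip.
    COPY requirements.txt /app/requirements.txt
    RUN pip install --no-cache-dir -r /app/requirements.txt

    # Add the folder with our application source code to the container.
    COPY . /app
    WORKDIR /app

    # On startup, run gunicorn and have it serve our script (main.py) on port 8080.
    CMD ["gunicorn", "--bind", "0.0.0.0:8080", "main:app"]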

Now we can run the docker build command using our completed Dockerfile and watch as it generates our Docker image. Next, let's use the docker run command to launch our container, and we can see it working as expected in our browser. For more complex applications, it usually won't be possible to test the app in a single Docker container this easily, because the app will likely need several other containers running at the same time to provide everything it depends on. In most real-world scenarios, we'll need to group our containers into pods and run them in a Kubernetes cluster in order to properly test our containerized application.
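The commands look roughly like this; the image tag and port mapping are illustrative.

    # Build an image from the Dockerfile in the current directory and tag it.
    docker build -t hello-world-app .

    # Launch a container from that image, mapping container port 8080 to localhost:8080.
    docker run -p 8080:8080 hello-world-app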

Now that we know our simple “hello world” app works in a Docker container, let's package it for deployment to a Kubernetes cluster. Running our app on Kubernetes requires two additional YAML configuration files: a Deployment YAML, which defines the pods in our cluster, and a Service YAML, which defines network access to those pods. For simple local development with only a single pod in a single cluster, it isn't always necessary to define a Service configuration. It is also possible to combine multiple Deployments and Services into a single YAML file, and for our example here we have created very basic Deployment and Service configurations in one YAML file for our app.
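A stripped-down version of such a combined file might look like the sketch below; the resource names, replica count, image tag, and ports are illustrative, not the course's exact configuration.

    # Deployment: defines the pods in our cluster.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: hello-world
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: hello-world
      template:
        metadata:
          labels:
            app: hello-world
        spec:
          containers:
            - name: hello-world
              image: hello-world-app:latest
              ports:
                - containerPort: 8080
    ---
    # Service: defines network access to the pods selected by the app label.
    apiVersion: v1
    kind: Service
    metadata:
      name: hello-world
    spec:
      type: LoadBalancer
      selector:
        app: hello-world
      ports:
        - port: 80
          targetPort: 8080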

In the next video, we will learn how to use kubectl and minikube to deploy our app in a Kubernetes cluster on our local machine using the configuration files we just created.

About the Author
Arthur Feldkamp
IT Operations Manager and Cloud Administrator, Database and API Integrations Specialist
Students: 46
Courses: 1

Arthur spent seven years managing the IT infrastructure for a large entertainment complex in Arizona, where he oversaw all network and server equipment and migrated many on-premises systems to cloud-based solutions on Google Cloud Platform. Arthur is also a PHP and Python developer who specializes in database and API integrations. He has written several WordPress plugins, created an SDK for the Infusionsoft API, and built a custom digital signage management system powered by Raspberry Pis. Most recently, Arthur has been building Discord bots and attempting to teach a Python AI program how to compose music.