Defining Development Environments
Working with Kubernetes Locally
Integrating GCP into our Development Workflow
Deploying to Google Cloud Platform
In this course, we will explore some of the tools available to build and manage development environments intended for deployment on Google Cloud Platform products. We will also demonstrate how to easily push builds from our local machine to Google-hosted services.
We will start the course by covering the different types of development environments and their purposes. We will touch briefly on popular software methodologies and frameworks as they relate to choices in number and type of development environments.
This course will focus on container-based application development environments, tools, and services. We will first walk through installing and using Docker and Kubernetes on your local machine. Then we will explore how to push projects to Google Cloud Run and Google Kubernetes Engine.
Writing applications using Kubernetes or Cloud Run can be further streamlined with Google Cloud Code, which provides direct IDE support for development on these platforms. We will examine how to install and use Google Cloud Code with Visual Studio Code.
- Understand the types of development environments and when to use them
- Install a container-based local development environment
- Add Google Cloud Code support to VS Code
- Push code from a local development environment and run it on Google Cloud Platform using:
  - Google Cloud Run
  - Google Kubernetes Engine
  - Google Deployment Manager
- Programmers interested in developing containerized applications on Google Cloud Platform
- Solo developers new to working on a development team
- Anyone preparing for the Google Professional Cloud DevOps Engineer certification
To get the most out of this course, you should:
- Have a Google Cloud Platform account
- Have Google Cloud SDK installed and initialized
- Be familiar with IAM role management for GCP resources
- Have Visual Studio Code, Python 3, and Git installed
Knowledge of Python would also be beneficial for scripting with GCP, but it's not essential.
Following our last video, we now have both a Dockerfile and a Kubernetes YAML config file for our application. We tested the application locally as a standalone Docker container, and now we are ready to deploy it to a Kubernetes cluster. There are a few more tools we’ll need, and since we’re deploying to Google Cloud Platform, the easiest way to get them is through the Google Cloud SDK command line tool.
Using the `gcloud components install` command, we can add both kubectl and minikube to our local development environment. We can check which components we have installed, or see what other components are available, with the `gcloud components list` command. We can then keep our components up to date by running `gcloud components update` at any time.
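Put together, the component management steps above look like this (assuming the Google Cloud SDK is already installed and initialized):

```shell
# Install kubectl and minikube as Google Cloud SDK components.
gcloud components install kubectl minikube

# Show which components are installed and which are available.
gcloud components list

# Bring all installed components up to date.
gcloud components update
```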
Let’s take a quick look at kubectl, a command line tool that interacts with a part of the Kubernetes Control Plane, which we mentioned briefly in an earlier lecture. With kubectl we can easily issue commands to manage our Kubernetes clusters directly from the command line. In this case, we’ll be using kubectl to issue commands to a single node cluster managed by minikube on our local machine.
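A few representative kubectl commands give a feel for how it works (the deployment name here is illustrative):

```shell
# List the nodes in the current cluster -- minikube gives us exactly one.
kubectl get nodes

# List deployments and services in the default namespace.
kubectl get deployments
kubectl get services

# Inspect a specific deployment in detail (name is illustrative).
kubectl describe deployment hello
```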
The entire purpose of minikube is to make it fast and simple to create a single-node Kubernetes cluster on a local development machine. We want our development environment to closely replicate production, but during development we rarely require the more advanced container orchestration features found in Kubernetes. Minikube makes Kubernetes more accessible to developers without bogging them down in features they don’t need right away.
Let’s launch minikube by typing `minikube start` into PowerShell. This works from the command prompt too, but then you won’t get these cute little icons while it starts up. Next, if we type `minikube dashboard`, we’ll get a web-based interface to our local Kubernetes cluster, which is much easier than running a bunch of kubectl console commands to manually check on the status of our deployments and services. The dashboard also shows the equivalent kubectl commands for your actions, so you can learn kubectl while using the dashboard and don’t have to think of it as a crutch.
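The two commands above, for reference:

```shell
# Start a single-node local Kubernetes cluster.
minikube start

# Open the web-based dashboard in the default browser.
minikube dashboard
```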
Next, we need to make sure that when we type docker commands, we’re working with the Docker daemon inside our minikube cluster, rather than the Docker installation on our local host machine. If we run the `docker images` command now, we see the images from our local Docker installation, not the ones minikube is using. We can fix this by running the `minikube docker-env` command, which prints instructions for a follow-up command to run; the follow-up differs slightly depending on whether you are using PowerShell or the old school command prompt. After running that command, `docker images` shows that we are connected to our minikube instance instead of our localhost Docker instance. You will need to run through the `minikube docker-env` step every time you open a new terminal session, as each new session defaults back to your localhost Docker instance.
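Pointing a shell at minikube’s Docker daemon looks roughly like this; the exact follow-up command comes from the output of `minikube docker-env` itself:

```shell
# Print the environment variables needed to target minikube's Docker daemon.
minikube docker-env

# In a POSIX shell, apply them with:
eval $(minikube docker-env)

# In PowerShell, the suggested follow-up is instead along the lines of:
# & minikube docker-env --shell powershell | Invoke-Expression

# docker now talks to the daemon inside minikube.
docker images
```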
We are now ready to deploy our project to our local Kubernetes cluster using minikube. Let’s run the `docker build` command again to build our project into a container image inside minikube. Then we’ll run `kubectl create -f` and give it our hello.yaml file. We can see our hello deployment and service pop up in the minikube dashboard almost immediately. Now we can use the `minikube service` command to get a URL where we can access our app, and there we can see our “Hello, world!” app running from our local Kubernetes cluster using minikube.
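The deployment sequence above, end to end; the image tag and service name are illustrative and should match what hello.yaml declares:

```shell
# Build the image inside minikube's Docker daemon (tag is illustrative).
docker build -t hello:v1 .

# Create the deployment and service defined in hello.yaml.
kubectl create -f hello.yaml

# Print a URL for the service and open it (service name is illustrative).
minikube service hello
```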
While we now have a functional local Kubernetes development environment, constantly rebuilding containers and redeploying our changes to minikube can hamper our development workflow. In the next video, we’ll learn how to bring some Continuous Integration/Continuous Delivery tools into our development pipeline.
Arthur spent seven years managing the IT infrastructure for a large entertainment complex in Arizona where he oversaw all network and server equipment and updated many on-premise systems to cloud-based solutions with Google Cloud Platform. Arthur is also a PHP and Python developer who specializes in database and API integrations. He has written several WordPress plugins, created an SDK for the Infusionsoft API, and built a custom digital signage management system powered by Raspberry Pis. Most recently, Arthur has been building Discord bots and attempting to teach a Python AI program how to compose music.