Deploying WordPress using Deployment Manager and Helm Charts

In this course, you will learn how to simplify all of your application infrastructure into just a few configuration files and then deploy your code on Google Cloud Platform. We will start by learning the core concepts behind Infrastructure as Code, discussing security best practices when working with IaC, and comparing some popular orchestration and configuration management tools.

After that, we’ll implement version control for our IaC configurations, and learn how we can automate deployment updates using Cloud Build Triggers. Next, we’ll learn how to integrate Google Secret Manager to protect any sensitive data in our source code. Then to tie it all together, we’ll walk through Google Identity and Access Management and learn how to monitor our users and resources on GCP.

After learning the basics, we’ll build our own immutable server images with Google Cloud Build using both Docker and Packer. We’ll then explore a practical usage scenario and build a WordPress deployment with Google Deployment Manager. We’ll also deploy a similar WordPress configuration using Terraform to compare the two methods.

I will be working in Visual Studio Code during the demos for this course, but you are also free to use any other IDE you are comfortable with instead. A demo repository is provided along with this course, and while we will be working with some Python and PHP in our examples, you should still be able to follow along with the demos without any background in either of these languages.

Learning Objectives

  • Build server images from a configuration file using Docker and Packer.
  • Learn to use Google Cloud Build with third-party build steps.
  • Deploy application infrastructure from code using Google Deployment Manager and Terraform.
  • Understand templating systems for Infrastructure as Code.
  • Implement version control for Infrastructure as Code.
  • Learn to protect secret data in IaC configurations using Google IAM role management and Google Secret Manager.

Intended Audience

This course is intended for programmers interested in expanding their knowledge about modern cloud-based DevOps workflows. Learning to manage Infrastructure as Code is beneficial to solo developers who need to focus as much of their time as possible on their application code and not on managing infrastructure. Development teams will also benefit from this course, because implementing IaC provides consistent performance across deployments in multiple environments, which greatly simplifies project collaboration. This course will also help prepare the viewer for the Google Professional Cloud DevOps Engineer certification.


Prerequisites

  • You should already have a Google Cloud Platform account.
  • You should have Google Cloud SDK already installed and initialized.
  • You should already have Git installed, and be familiar with its use.
  • Demos will be shown in Visual Studio Code, but you're free to use any IDE you are familiar with.
  • Demos will utilize some Python and PHP, but you're not required to be fluent in either of these languages to follow along with the examples.


We've now learned a couple of different ways to build server images from a few simple configuration files. Our application may span multiple servers and databases, though, and we'll also need to specify how these resources connect to each other and which outside connections they may accept. We can create additional configuration files to use with Google Deployment Manager that define all of these resources and their relationships, which together make up the full operating environment required by our application.

Deployment Manager deployments are defined with a YAML configuration file. In its simplest form, this equates to working with Google service APIs directly, but instead of sending JSON over HTTP POST, we format the exact same information as YAML and pass it to Google Deployment Manager. The real benefit of Deployment Manager comes from adding templates to our YAML configuration so we can introduce variables and conditions that make our deployments easily reusable.
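
To make this concrete, here is a minimal sketch of a Deployment Manager configuration that creates a single Compute Engine instance. This is an illustration, not part of the course repository; the deployment name, resource name, zone, and machine type are all placeholders you would replace with your own values:

```shell
# Write a minimal Deployment Manager configuration to a file.
# All names and the zone below are illustrative placeholders.
cat > config.yaml <<'EOF'
resources:
- name: demo-vm
  type: compute.v1.instance
  properties:
    zone: us-central1-a
    machineType: zones/us-central1-a/machineTypes/e2-small
    disks:
    - deviceName: boot
      boot: true
      autoDelete: true
      initializeParams:
        sourceImage: projects/debian-cloud/global/images/family/debian-12
    networkInterfaces:
    - network: global/networks/default
EOF

# The deployment would then be created with (requires an authenticated gcloud SDK):
echo "gcloud deployment-manager deployments create demo-deployment --config config.yaml"
```

Each entry under `resources` maps onto a Google service API resource type, which is exactly why a plain configuration file is equivalent to calling the APIs yourself.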

Templates are written in Python, and can optionally use the Jinja template engine, which can improve the readability of your templates at the cost of some of the functionality of raw Python. You can use multiple templates in a deployment, however, and can mix Python and Jinja templates in the same deployment, so you're able to leverage the benefits of either method wherever appropriate. We can also define a schema for a template that describes all the properties used and the values permitted in that template.
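
As a tiny illustration of the Jinja side, a Deployment Manager template can reference environment variables such as the deployment name, and fills in `properties` from the calling configuration. The `zone` property here is a hypothetical example, not from the course repository:

```shell
# Write an illustrative Jinja template for Deployment Manager.
# The "zone" property is a hypothetical example.
cat > demo.jinja <<'EOF'
resources:
- name: {{ env["deployment"] }}-vm
  type: compute.v1.instance
  properties:
    zone: {{ properties["zone"] }}
EOF
```

When the deployment is created, Deployment Manager substitutes `env["deployment"]` with the deployment's name and `properties["zone"]` with the value supplied in the YAML configuration that imports this template.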

A schema file is not required, but it does make your templates much easier for other people to use, since it essentially serves as reference documentation for all the settings needed by the deployment. Schema files for Deployment Manager templates are typically also written in YAML, but can be written as JSON Schema as well. For our deployment, we want to create a cluster on Google Kubernetes Engine to run our WordPress Docker image. With GKE, we can enable autoscaling and load balancing so that new instances of our WordPress server are created to meet demand and spun down during off-peak times.
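
Circling back to schemas, here is a sketch of what a schema file for a hypothetical cluster template might look like. The property names and defaults are illustrative, not taken from the course repository:

```shell
# Write an illustrative schema for a hypothetical cluster template.
# Property names and defaults are examples only.
cat > cluster.jinja.schema <<'EOF'
info:
  title: GKE cluster template
  description: Creates a VPC-native GKE cluster for a WordPress deployment.

required:
- zone

properties:
  zone:
    type: string
    description: Zone in which to create the cluster.
  initialNodeCount:
    type: integer
    default: 1
    description: Number of nodes to create in the cluster.
EOF
```

Anyone reusing the template can now see at a glance which properties are required, which are optional, and what their defaults are.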

A common hurdle encountered with Kubernetes deployments is network management. Kubernetes manages its own networking for its pods, but those pods may also need access to other Google services on an internal network or require access to the public internet over an external network. Rather than defining all these network routes ourselves, we can tell Kubernetes to use alias IP address range assignments on our pods and take advantage of Virtual Private Cloud Network Peering. By creating a VPC-native cluster this way, Google handles all the routing to other Google resources for us, and we can create very simple firewall definitions to make our application web accessible.
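
Inside the cluster resource, making the cluster VPC-native comes down to a small fragment like the following. This is a sketch of the relevant GKE API fields only, with the rest of the cluster definition omitted:

```shell
# Fragment of a GKE cluster definition enabling alias IPs (VPC-native).
# Illustrative snippet only, not a complete cluster resource.
cat > ip-alias-fragment.yaml <<'EOF'
ipAllocationPolicy:
  useIpAliases: true        # pods receive alias IP ranges on the VPC
  createSubnetwork: true    # have Google create the secondary ranges for us
EOF
```

With alias IPs in place, pod traffic is routable on the VPC itself, which is what lets Google handle routing to other services for us.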

Kubernetes also has its own system of service accounts to manage authentication for its pods, which is completely separate from the Google service accounts and IAM roles we were looking at earlier. This can cause some confusion and difficulty with authentication issues when working with GKE. We can resolve this by connecting the two service account systems together by enabling Workload Identity on GKE. This lets us map a Google service account to a Kubernetes service account, after which we can easily manage access to other Google services from our Kubernetes app using Google IAM roles.

Let's start by double-checking that all the APIs we'll need are enabled for our project with this quick command. Then we need to edit the variables here in our project source code before we are ready to create our deployment. We can see our deployment.yaml file imports our deployment.jinja file, with properties to fill in the variable values. You only need to change these properties to match your own project. We can review the schema file to see what properties we need to set in our YAML file to deploy our Jinja template. We create the deployment with this command, which will take some time to run, but when it's done, we can see our new cluster available in the Cloud Console here.
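
The commands in this step look roughly like the following. The service list and deployment name are assumptions based on what this deployment uses, so match them to your own project:

```shell
# Services this deployment relies on (list is illustrative).
SERVICES="deploymentmanager.googleapis.com container.googleapis.com compute.googleapis.com sqladmin.googleapis.com"

# Enable the APIs, then create the deployment (requires an authenticated gcloud SDK):
echo "gcloud services enable ${SERVICES}"
echo "gcloud deployment-manager deployments create wordpress-cluster --config deployment.yaml"
```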

Our cluster is now deployed with Workload Identity and Config Connector enabled, but we still need to actually configure these components so our cluster knows which service accounts to use. To do this, we'll need to install kubectl, which gives us a command-line interface to our Kubernetes cluster. We can do this with a simple gcloud components install command.

Next, we run this get-credentials command, which fetches the Kubernetes credentials for our cluster, and sets our cluster as the current context for kubectl commands. Next we create a Google service account and give it the role of owner on our project, so Kubernetes will be permitted to create other Google resources in our deployment definitions. We then bind our Google service account to this Kubernetes service account, which was created in our cluster automatically when we enabled Config Connector, and we also assign it the workloadIdentityUser role here.
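
Sketched out with placeholder names (substitute your own project ID, cluster name, and zone; `cnrm-system` and `cnrm-controller-manager` are the namespace and Kubernetes service account that Config Connector creates), these steps look like:

```shell
# Placeholder identifiers; replace with your own values.
PROJECT_ID="my-project"
GSA="cnrm-system"                      # Google service account to create
KSA_NS="cnrm-system"                   # namespace created by Config Connector
KSA="cnrm-controller-manager"          # Kubernetes service account in that namespace
GSA_EMAIL="${GSA}@${PROJECT_ID}.iam.gserviceaccount.com"
WI_MEMBER="serviceAccount:${PROJECT_ID}.svc.id.goog[${KSA_NS}/${KSA}]"

# Point kubectl at the new cluster:
echo "gcloud container clusters get-credentials wordpress-cluster --zone us-central1-a"

# Create the Google service account and grant it the owner role on the project:
echo "gcloud iam service-accounts create ${GSA}"
echo "gcloud projects add-iam-policy-binding ${PROJECT_ID} --member serviceAccount:${GSA_EMAIL} --role roles/owner"

# Bind the Kubernetes service account to it via Workload Identity:
echo "gcloud iam service-accounts add-iam-policy-binding ${GSA_EMAIL} --member ${WI_MEMBER} --role roles/iam.workloadIdentityUser"
```

The `PROJECT_ID.svc.id.goog` member format is how Workload Identity identifies a Kubernetes service account to Google IAM.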

Next, check the configconnector.yaml file in our project and make sure the service account name matches the one you just created. We can then update our Kubernetes configuration by applying this file with a kubectl command so our cluster knows to authenticate using this new service account. We then just need to tell Config Connector how to organize any resources it creates. We can do this by creating a Kubernetes namespace, then adding an annotation that maps this namespace to our project. The last line sets the current context of our kubectl commands to also use our new namespace.
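
Roughly, and with a hypothetical namespace name, this sequence looks like:

```shell
NAMESPACE="wordpress"     # hypothetical namespace name
PROJECT_ID="my-project"   # placeholder project ID

# Apply the Config Connector settings, then create and annotate the namespace:
echo "kubectl apply -f configconnector.yaml"
echo "kubectl create namespace ${NAMESPACE}"
ANNOTATION="cnrm.cloud.google.com/project-id=${PROJECT_ID}"
echo "kubectl annotate namespace ${NAMESPACE} ${ANNOTATION}"

# Make this namespace the default for subsequent kubectl commands:
echo "kubectl config set-context --current --namespace ${NAMESPACE}"
```

The `cnrm.cloud.google.com/project-id` annotation is what tells Config Connector which Google project should own the resources created in that namespace.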

We can now simplify the rest of the configuration needed to deploy WordPress on our GKE cluster using Helm, a package manager for Kubernetes. And because we enabled the Config Connector add-on for our cluster, we can consume Google APIs from within our Kubernetes configurations to interact with other Google services.

Keep in mind that Config Connector requires Workload Identity for authentication to make use of these APIs and won't work without it. I have included this Helm chart from the Cloud Foundation Toolkit in the repository for this course. It deploys a WordPress server and uses Config Connector to create a Cloud SQL database for our website backend. We only need one edit here, to change the WordPress container image to the custom image we built with Docker earlier in this course.

Notice the plain-text password here; we could store this value as a Kubernetes Secret, which is also separate from Google Secret Manager secrets. We could connect the two systems together, like we did with the service accounts, by using Kubernetes External Secrets. This demo is probably complicated enough as it is, so we won't go through how to set up Kubernetes External Secrets here, but I have included a link in the course resources. If you plan to use Google Kubernetes Engine in production, you should definitely look into implementing this. For now, we just need to make sure the information here matches the secrets we made for our WordPress Docker container earlier in this course, either by updating our secrets to match our YAML file, or by updating our YAML file to use the same information stored in our secrets.
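
As a simpler stopgap than External Secrets, the password could at least live in a Kubernetes Secret rather than in plain text in the chart. The secret and key names here are illustrative and would need to match whatever names the chart's templates actually reference:

```shell
SECRET_NAME="wordpress-db"   # illustrative secret name
SECRET_KEY="password"        # illustrative key the chart would read

# Create the secret in the cluster instead of committing the value to source:
CMD="kubectl create secret generic ${SECRET_NAME} --from-literal=${SECRET_KEY}=CHANGE_ME"
echo "${CMD}"
```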

Now to deploy our chart, we just need Helm. There is a Cloud Build community image we could use to do this, which would allow us to include Helm in our CI/CD pipeline with a Cloud Build trigger like we demonstrated earlier in this course. Alternatively, you could download Helm and run it locally; it's literally just a single executable file. The extremely lazy could just leave this executable in the same directory as their Helm chart, but if you plan on using Helm for more than a single project, you should store the executable somewhere permanent and add that location to your PATH so the helm command resolves properly. With either approach, it's just a simple helm install command from there, entered from the command line if using Helm locally, or in a YAML build step if using Cloud Build. This will take ten to fifteen minutes to run and may spew out some errors before it's finished, so don't panic! We know it's working correctly if we see a Cloud SQL database appear here.
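
Assuming Helm is on your PATH, the local version is a one-liner, and you can watch progress with kubectl while the Cloud SQL instance provisions. The release name and chart path here are placeholders:

```shell
RELEASE="wordpress"           # hypothetical release name
CHART_DIR="./wordpress-gcp"   # assumed path to the chart in the course repository

# Install the chart, then watch the pods come up:
HELM_CMD="helm install ${RELEASE} ${CHART_DIR}"
echo "${HELM_CMD}"
echo "kubectl get pods --watch"
```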

When it's finally done, we'll find our custom WordPress image deployed on an auto-scaling, load-balanced Kubernetes cluster that implements IP aliases for networking and authenticates with a Google service account using Workload Identity. Our cluster contains a persistent disk for file storage and uses Cloud SQL for database storage through a SQL proxy container, all configured for us by Helm with the help of Config Connector.

You could expand on this deployment on your own, perhaps adding a Google Cloud Storage bucket to serve your static files. You could also reuse this Jinja template and schema file as part of your own deployments in the future whenever you need a Kubernetes cluster with current best practices already built in, or you could deploy a different Helm package on top of this deployment instead.

About the Author

Arthur spent seven years managing the IT infrastructure for a large entertainment complex in Arizona where he oversaw all network and server equipment and updated many on-premise systems to cloud-based solutions with Google Cloud Platform. Arthur is also a PHP and Python developer who specializes in database and API integrations. He has written several WordPress plugins, created an SDK for the Infusionsoft API, and built a custom digital signage management system powered by Raspberry Pis. Most recently, Arthur has been building Discord bots and attempting to teach a Python AI program how to compose music.