
Two-Tier Application Demo

This course is part of the learning path:

Solving Infrastructure Challenges with Terraform

Contents

Intro
  1. Course Introduction (Preview, 3m 28s)
Overview
  2. What is Terraform? (Preview, 15m 6s)
Terraform Parts
  4. Providers (8m 12s)
  5. Resources (7m 58s)
  6. State (4m 35s)
Summary
  8. Summary (2m 27s)
Overview
Difficulty: Intermediate
Duration: 1h 10m
Students: 1602
Rating: 4.6/5

Description

Overview

Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can help with multi-cloud by providing one workflow for all clouds. The infrastructure Terraform manages can be hosted on public clouds like Amazon Web Services, Microsoft Azure, and Google Cloud Platform, or on-prem in private clouds such as VMware vSphere, OpenStack, or CloudStack. Terraform treats infrastructure as code (IaC), so you never have to worry about your infrastructure drifting away from its desired configuration. If you like what you are hearing about Terraform, then this course is for you!

In this course, we’ll learn Terraform from the ground up. While building a strong foundation for you to solve real-world challenges with Terraform, you'll learn about its core concepts including HashiCorp Configuration Language, providers, resources, and state. The course concludes with a demo to illustrate how Terraform can be used to manage a practical infrastructure for deploying development and production versions of a two-tier application in Google's Cloud using Cloud SQL, Google Kubernetes Engine (GKE), and Kubernetes. The Terraform configuration files used in the course are all available in the course's GitHub repository.

Intended Audience

This course is for anyone who is interested in managing infrastructure in public, private, or hybrid clouds. Some roles that fit into that category are:

  • DevOps Engineers
  • IT Professionals
  • Cloud Engineers
  • Developers

Learning Objectives

After completing this course, you will be able to:

  • Describe what Terraform is
  • Write Terraform configuration files
  • Understand how Terraform integrates infrastructure sources
  • Manage multiple infrastructure environments with Terraform

Prerequisites

This is an intermediate level course that assumes:

  • You have prior experience with a scripting or programming language.

Course Agenda

Lesson | What you'll learn
Introduction | What will be covered in this course
What is Terraform? | Take a high-level look at what Terraform is and when to use it
Terraform Configuration | Understand the ins and outs of HashiCorp Configuration Language (HCL)
Providers | Discover how Terraform integrates various infrastructure sources
Resources | See how to configure parameters that are common to all resources
State | Learn how Terraform state connects your configuration with the real world
Two-Tier App Demo | See how to deploy a two-tier app in multiple environments with Terraform
Summary | Review the course and see what's next

Feedback

If you have thoughts or suggestions for this course, please contact Cloud Academy at support@cloudacademy.com.

Transcript

This lesson shows a more practical example of infrastructure you might set up to run an application. It draws on much of the knowledge gained in the course up to this point.

 

Agenda

The lesson will first introduce the application and the details of the architecture, and then we’ll focus on the configuration files and demonstrate how to build the infrastructure at the command line.

 

The Application

A well-known open-source two-tier application is WordPress.

 

WordPress is a very popular application for hosting websites and blogs. Many applications share a similar two-tier architecture, but I’ve chosen WordPress because it is well-known.

 

The Two-Tier Architecture

The application’s two-tier architecture includes a data tier that houses a MySQL database for persistence, and a frontend web tier that hosts the WordPress website that is written in PHP. Clients connect to the web tier and the web tier connects to the data tier to store and retrieve data.

 

Architecture Implementation (Dev)

There are many ways to implement the application’s two-tier architecture, each with pros and cons. Because this is a course on Terraform and not GCP, I won’t focus too much on the architecture implementation details. The implementation I’ve chosen could be used for real applications and includes a few GCP services, so you can see more resources if you inspect the Terraform configuration files. Aside from that, the implementation is somewhat arbitrary. With that said, let’s start by inspecting the data tier.

 

The data tier uses Cloud SQL, which includes support for managed MySQL databases.

 

There is one instance that runs MySQL version 5.7.
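As a rough sketch, a managed MySQL 5.7 instance like this could be declared in Terraform along these lines (the resource name, region, and machine tier here are illustrative assumptions, not the course's exact configuration):

```hcl
# Illustrative sketch; name, region, and tier are assumptions.
resource "google_sql_database_instance" "master" {
  name             = "wordpress-db"
  database_version = "MYSQL_5_7"
  region           = "us-central1"

  settings {
    tier = "db-n1-standard-1"
  }
}
```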

 

For the web tier, a Google Kubernetes Engine or GKE cluster is used. GKE uses containers to run the application.

 

There are two worker nodes in the cluster, spread across two zones. This might be overkill for a dev environment, but it brings your dev environment more in line with production. A Kubernetes replication controller creates multiple pods running WordPress containers. A Docker-provided WordPress and Apache web server container image is used by the pods that actually serve the website. The configuration uses resource attributes to pass the Cloud SQL database host information to configure the database WordPress uses.

 

A Kubernetes service creates a load balancer to balance requests between the zones, providing high availability in case of a zone outage. There are also additional resources that aren’t included in the diagram. The diagram shows what you need to know at a high level without getting into the details of Kubernetes.
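A sketch of such a service, assuming the WordPress pods carry an app = "wordpress" label (the names here are illustrative, not the course's exact configuration):

```hcl
# Illustrative sketch; metadata and selector names are assumptions.
resource "kubernetes_service" "wordpress" {
  metadata {
    name = "wordpress"
  }

  spec {
    selector {
      app = "wordpress"
    }

    port {
      port        = 80
      target_port = 80
    }

    # LoadBalancer provisions a GCP load balancer spanning the zones.
    type = "LoadBalancer"
  }
}
```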

 

There is one more resource that I’ll include, and that is a Google Cloud Storage bucket. The demo will illustrate how to use a GCS bucket as the backend for Terraform’s state.

 

Architecture Implementation (Prod)

We will use the same configuration for both the development and production environments by using conditions based on the active workspace. The prod environment has a few differences from the dev environment.

 

The production environment includes a failover replica of the database in a separate zone. This provides high availability in the data tier.

 

The production environment also increases the number of worker nodes in the Kubernetes cluster. To utilize all of the workers, the production configuration increases the number of pod replicas so that pods run on each worker, increasing the capacity to handle requests.

 

We’re almost ready to start the demo. But before we get started, I’m assuming your GCP account has the Google Cloud Storage, Google Cloud SQL, Google Compute Engine, and Google Kubernetes Engine APIs enabled. Terraform will give you a link to enable an API when you run plan if it isn’t enabled. You can also search for the API name in the console search bar to get to the page where you can enable it. And with that, let’s get started with the demo.

 

 

Demo

I’m inside of the gcp-demo directory, and there are two sets of configurations. The backend directory holds a configuration for the Google Cloud Storage remote state backend, and the two-tier directory contains the configuration for the WordPress application infrastructure. I’ve chosen to separate the backend configuration because it could be used for several projects and doesn’t necessarily have a lifespan tied to the two-tier application.

 

I’ll change into the backend directory and open the main.tf configuration file. It has our familiar google provider configuration as well as a GCS bucket resource named backend. That’s all we need to set up a remote backend for Terraform with GCS. In practice, you would want to configure access controls for different accounts, but for the demo I’m using the same owner account for everything.
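The backend configuration might look roughly like this (the project ID and bucket name are placeholders you would replace with your own, since GCS bucket names are globally unique):

```hcl
provider "google" {
  project = "your-project-id" # placeholder: use your own project
  region  = "us-central1"
}

# Bucket that will hold Terraform's remote state.
resource "google_storage_bucket" "backend" {
  name          = "your-terraform-state-bucket" # placeholder: must be globally unique
  location      = "US"
  force_destroy = true # allows deleting the bucket even if it contains state files
}
```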

 

Run init to prepare the working directory, then run apply to see the planned changes. The bucket is set to be created. I’ll accept the plan by entering yes, and in a couple of seconds it is ready to use. I’ll change over to the two-tier directory; give me a second to open all of the configuration files.

 

I’ve opened all the configuration files in the order I want to go through them. There is too much configuration to go through everything, and a lot of it is specific to GCP and Kubernetes. I’ll just focus on the parts that are of general interest when working with Terraform. If you are interested in how to configure GCP and Kubernetes resources, I’d encourage you to review the entire configuration after we finish the demo. You could also consider using modules to package reusable parts of the configuration instead of having several files, but that is outside the scope of this course.

 

The first file is main.tf which includes a terraform block to configure the GCS remote state backend. You provide the bucket name and can optionally include a prefix that will be added to any state files Terraform stores. That’s all that is needed to configure remote state on GCS.
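In outline, that terraform block looks something like this (the bucket name and prefix are placeholders):

```hcl
terraform {
  backend "gcs" {
    bucket = "your-terraform-state-bucket" # placeholder: the bucket created earlier
    prefix = "two-tier-app"                # optional; prepended to stored state paths
  }
}
```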

 

The file also includes the providers which are Google and Kubernetes. I’ve used the same project that I’ve used throughout the course, but you could use a variable to easily modify it. It’s probably a good idea to use separate projects for different environments as well, but for the sake of the demo I’ve used one so it’s easy to see all the resources in the GCP console if you want to look around at what gets created. You can also see examples of using variable and resource attribute interpolation syntax to configure provider keys. The Kubernetes provider uses attributes from the created GKE cluster in order to connect to the Kubernetes master.

 

The variables file is divided into sections for each main component of the infrastructure. Some things to notice are that I’ve set default values for all of the variables except the passwords for the database and the GKE master. Also note that strings are used when assigning boolean variables because of slightly different override behavior depending on where Boolean vars are set. HashiCorp recommends setting Boolean variable values as strings until first-class Boolean support is implemented. The last point is that maps are used for variables that change behavior based on the active workspace. One such variable controls the replicas setting of a Kubernetes replication controller so that more WordPress containers run in prod than in dev.
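These patterns might look roughly as follows (the variable names and default values are illustrative assumptions):

```hcl
# No default: the password must be supplied, e.g. via terraform.tfvars.
variable "db_password" {}

# Boolean expressed as a string, per the override-behavior caveat above.
variable "enable_failover" {
  default = "false"
}

# Map keyed by workspace name, looked up with terraform.workspace.
variable "web_replicas" {
  type = "map"

  default = {
    dev  = 2
    prod = 4
  }
}
```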

 

The values of the password variables have been set in the terraform.tfvars automatic variables file. Because the automatic variable file stores sensitive information, I have excluded it from version control using the .gitignore file. You need to create your own version of the file, or otherwise set the password variables, if you are following along. The README file of the repository has a reminder about this as well.

 

The cloudsql file configures master and replica SQL database instances running MySQL version 5.7. To control whether or not the replica should be present, a condition sets the count of the replica: if the environment is prod, there is one failover replica; otherwise there is no replica.
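The conditional count pattern looks roughly like this, assuming the older "${...}" interpolation style (resource names are illustrative, not the course's exact configuration):

```hcl
# Illustrative sketch using the "${...}" interpolation style.
resource "google_sql_database_instance" "replica" {
  # One failover replica in prod, none in any other workspace.
  count = "${terraform.workspace == "prod" ? 1 : 0}"

  name                 = "wordpress-db-replica"
  database_version     = "MYSQL_5_7"
  master_instance_name = "${google_sql_database_instance.master.name}"

  replica_configuration {
    failover_target = true
  }
}
```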

 

The GKE cluster uses the terraform.workspace interpolation to get the appropriate key from the map for the initial node count. This sets the appropriate number of workers based on the workspace.
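That lookup might be written along these lines (the cluster name, zones, and the node_counts map variable are assumptions for illustration):

```hcl
resource "google_container_cluster" "web" {
  name = "wordpress-cluster"
  zone = "us-central1-a"

  # Second zone so workers are spread across two zones.
  additional_zones = ["us-central1-b"]

  # Per-workspace worker count from a hypothetical map variable.
  initial_node_count = "${lookup(var.node_counts, terraform.workspace)}"
}
```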

 

The Kubernetes configuration creates the resources needed to run WordPress. I’ll highlight the replication controller. Notice that the replicas value is set using another map that uses the workspace to set the value. You can also see the Docker WordPress image that is used and that the container has the MySQL database host set through an environment variable whose value is interpolated from the cloudsql master’s ip_address attribute.
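A condensed sketch of that replication controller (the image tag, labels, and variable names are assumptions, not the course's exact configuration):

```hcl
resource "kubernetes_replication_controller" "wordpress" {
  metadata {
    name = "wordpress"
  }

  spec {
    # Per-workspace replica count from a map variable.
    replicas = "${lookup(var.web_replicas, terraform.workspace)}"

    template {
      container {
        name  = "wordpress"
        image = "wordpress:4-apache" # assumed tag for the Docker-provided image

        env {
          name = "WORDPRESS_DB_HOST"
          # Interpolates the Cloud SQL master's IP address attribute.
          value = "${google_sql_database_instance.master.ip_address.0.ip_address}"
        }
      }
    }
  }
}
```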

 

The outputs file configures the database master, Kubernetes master, and load balancer IP addresses as outputs. Although the configuration is fully operational, keep in mind that the configuration isn’t meant to be entirely production ready. As an exercise, you can improve the security of the infrastructure by enforcing SSL connections to the Cloud SQL database, only allowing traffic from specific networks, and other enhancements.
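The outputs might be declared roughly like this (the attribute paths are assumptions based on the Google and Kubernetes providers' documented attributes):

```hcl
output "db_master_ip" {
  value = "${google_sql_database_instance.master.ip_address.0.ip_address}"
}

output "kubernetes_master_ip" {
  value = "${google_container_cluster.web.endpoint}"
}

output "load_balancer_ip" {
  value = "${kubernetes_service.wordpress.load_balancer_ingress.0.ip}"
}
```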

 

We can focus on the command prompt now. I’ll start by initializing with terraform init. Notice that Terraform tells us it successfully configured the GCS backend before downloading the Google and Kubernetes plugins.

 

Now we will create a development workspace called dev using the new workspace subcommand. Once everything is working on the dev environment, we can create a production workspace. Notice that we don’t need to use the select subcommand to activate the dev workspace because new automatically selects the new workspace. In automation environments you may prefer to use the TF_WORKSPACE environment variable instead of the select command when switching between workspaces.

 

Next we create the plan and save it to a file called demo.tfplan. Scrolling through the plan, we can verify that no SQL database replica is set to be created. Then we run apply with the plan file as an argument instead of having apply create its own plan. Now Terraform will do its thing and create the resources. It takes a while, so I’ll skip ahead to when it’s complete.

 

The apply has finished, and we see the outputs printed at the end. That took around 10 minutes to complete. To access WordPress, we can use the load balancer IP. I’ll copy that and paste it into a browser tab. And here we see the initial WordPress configuration page. I’ll select English, enter some account information, then log in. Here we see the WordPress Dashboard. To demonstrate that it is operational, I’ll customize my website by selecting a new theme. That looks nice, so I’ll publish the changes. If I refresh, the new theme is applied and everything is working as we would expect.

 

We are ready for the production deployment. I’ll create a prod workspace at the command-line. Recall that the prod workspace will activate the failover replica in the configuration. If we make a plan, we should see it in the list of changes, and there it is.  I’ll go ahead and apply the changes for the production environment.

 

 

That took about 16 minutes to complete. You can see that the Cloud SQL replica finished last. This is a situation where you may want to explicitly configure a dependency so that the WordPress replication controller waits for the failover replica to be created, but it isn’t a problem for this demo. I’ll copy the IP of the production load balancer, and upon navigating to it we see the WordPress initialization page. I’ll leave configuring the production WordPress environment as an exercise for you. Instead, I’ll use the time to show a couple of things in the GCP console.

 

Switching over to the GCP console tab, I’ll search for storage so we can have a look at what’s stored in the remote state bucket. Diving down through the prefix subdirectories, we eventually get to the state files. Notice there are separate state files for each workspace. That is essentially all that Terraform does to manage multiple workspaces. I’ll open up a state file to show something: the passwords are stored in plain text. That is why you want to be careful with the access controls on the bucket storing the remote state.
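Assuming the two-tier-app prefix used earlier, the bucket's layout would look roughly like:

```
two-tier-app/default.tfstate   # default workspace (not used in this demo)
two-tier-app/dev.tfstate       # state for the dev workspace
two-tier-app/prod.tfstate      # state for the prod workspace
```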

 

If I navigate back to the console homepage, I can see that there are 6 instances in total, which matches the 2 GKE instances running in dev and the 4 GKE instances running in prod. There are also 3 Cloud SQL instances: a dev master, a prod master, and a prod replica. I’ll dive into the SQL instances page, and here we can see the prod replica operating as a failover of the prod master.

 

That’s all for this lesson where we demonstrated using workspaces for different environments to deploy a two-tier application. The terraform.workspace interpolation allowed us to modify the configuration based on the active workspace.

About the Author


Logan has been involved in software development and research since 2007 and has been in the cloud since 2012. He is an AWS Certified DevOps Engineer - Professional, AWS Certified Solutions Architect - Professional, Microsoft Certified Azure Solutions Architect Expert, MCSE: Cloud Platform and Infrastructure, Google Cloud Certified Associate Cloud Engineer, Certified Kubernetes Administrator (CKA), Certified Kubernetes Application Developer (CKAD), Linux Foundation Certified System Administrator (LFCS), and Certified OpenStack Administrator (COA). He earned his Ph.D. studying design automation and enjoys all things tech.
