Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can help with multi-cloud by providing one workflow for all clouds. The infrastructure Terraform manages can be hosted on public clouds like Amazon Web Services, Microsoft Azure, and Google Cloud Platform, or on-prem in private clouds such as VMware vSphere, OpenStack, or CloudStack. Terraform treats infrastructure as code (IaC) so you never have to worry about your infrastructure drifting away from its desired configuration. If you like what you are hearing about Terraform, then this course is for you!
In this course, we’ll learn Terraform from the ground up. While building a strong foundation for you to solve real-world challenges with Terraform, you'll learn about its core concepts including HashiCorp Configuration Language, providers, resources, and state. The course concludes with a demo to illustrate how Terraform can be used to manage a practical infrastructure for deploying development and production versions of a two-tier application in Google's Cloud using Cloud SQL, Google Kubernetes Engine (GKE), and Kubernetes. The Terraform configuration files used in the course are all available in the course's GitHub repository.
This course is for anyone who is interested in managing infrastructure in public, private, or hybrid clouds. Some roles that fit into that category are:
- DevOps Engineers
- IT Professionals
- Cloud Engineers
After completing this course, you will be able to:
- Describe what Terraform is
- Write Terraform configuration files
- Understand how Terraform integrates infrastructure sources
- Manage multiple infrastructure environments with Terraform
This is an intermediate-level course that assumes:
- You have prior experience with a scripting or programming language.
| Lesson | What you'll learn |
| --- | --- |
| Introduction | What will be covered in this course |
| What is Terraform? | Take a high-level look at what Terraform is and when to use it |
| Terraform Configuration | Understand the ins and outs of HashiCorp Configuration Language (HCL) |
| Providers | Discover how Terraform integrates various infrastructure sources |
| Resources | See how to configure parameters that are common to all resources |
| State | Learn how Terraform state connects your configuration with the real world |
| Two-Tier App Demo | See how to deploy a two-tier app in multiple environments with Terraform |
| Summary | Review the course and see what's next |
If you have thoughts or suggestions for this course, please contact Cloud Academy at email@example.com.
We have briefly touched on resources in previous lessons, but as the core of Terraform configurations, they deserve a lesson all on their own.
There are far too many resources to discuss in a single lesson, so instead I will discuss the aspects of resource configuration shared by all resources. This includes various meta-parameters, provisioner blocks, and lifecycle blocks.
Then to firm up some of the knowledge, I’ll demonstrate creating a resource with some of what we learned. Our first resource in the course!
A resource in Terraform is simply, "n. a component of your infrastructure" (https://www.terraform.io).
As mentioned before, Terraform resources can be anything from cloud virtual machine instances to GitHub repositories and DNS records. Each type of resource has its own attributes and arguments.
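As a sketch, a resource block declares a resource type followed by a local name, and its body sets the arguments for that type. The disk name and zone here are hypothetical values:

```hcl
# A minimal resource block: resource "<TYPE>" "<NAME>" { ... }
# The type determines which arguments are available.
resource "google_compute_disk" "default" {
  name = "my-disk"          # hypothetical disk name
  zone = "us-central1-a"    # hypothetical zone
  size = 10                 # size in GB
}
```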
But all resources do have several things in common, starting with meta-parameters. Meta-parameters are configuration keys available to all resources.
We’ve actually already seen an example of a resource meta-parameter when we described provider aliases. A resource can explicitly declare a provider to use with the provider key.
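A sketch of the provider meta-parameter, assuming a hypothetical aliased Google provider for a second region:

```hcl
# An aliased provider for a second region
provider "google" {
  alias  = "europe"
  region = "europe-west1"
}

resource "google_compute_disk" "eu_disk" {
  name = "eu-disk"
  zone = "europe-west1-b"

  # Explicitly select the aliased provider with the provider key
  provider = "google.europe"
}
```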
Another meta-parameter is depends_on, which is used to explicitly state resource dependencies. Most of the time, the implicit dependencies Terraform infers from interpolation references are enough to capture all the dependency information. You should rely on implicit dependencies when possible, but depends_on is available for the situations where dependencies aren’t implicitly captured. The syntax is to assign a list including strings of the format resource type dot resource name for the depended-upon resource. The resource will be created after and deleted before any depended-upon resources. The example shows how to depend on a Google Compute Engine instance named server.
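Along the lines of the example just described, a hedged sketch of depends_on (the resource names and zone are hypothetical):

```hcl
resource "google_compute_instance" "server" {
  # ... instance configuration elided ...
}

resource "google_compute_disk" "data" {
  name = "data-disk"
  zone = "us-central1-a"

  # Create this disk only after the server exists,
  # and destroy it before the server is destroyed.
  depends_on = ["google_compute_instance.server"]
}
```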
The count meta-parameter specifies how many copies of a resource to create. You may want to create several servers in a tier of your N-tier application, for example. The first example shows how to use a variable called num_servers to assign the count. The count is allowed to be zero. One scenario where this is useful is when you have boolean variables and interpolation conditions enabling or disabling resources. You can set count to zero when you don’t want the resource to be present. You don’t need to remove the resource configuration all together since you do want to have the resource sometimes.
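A sketch of both uses of count described above, with hypothetical variable and resource names:

```hcl
variable "num_servers" {
  default = 3
}

variable "enable_backup" {
  default = true
}

resource "google_compute_instance" "server" {
  # Create one copy per server in the tier
  count = "${var.num_servers}"
  # ... per-instance configuration elided ...
}

resource "google_compute_disk" "backup" {
  # Zero copies when the flag is false, so the resource
  # configuration can stay in the file even when disabled
  count = "${var.enable_backup ? 1 : 0}"
  name  = "backup-disk"
  zone  = "us-central1-a"
}
```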
The second example shows how to use an interpolation that is available when you specify a count. The count.index interpolation is set to the index of the resource copy, starting with zero for the first copy.
The last two examples show different methods for interpolating resource attributes with a count meta-parameter. The first shows how to add an index number after the resource name to target a specific copy. The second uses an asterisk to return a list of the attribute values for all copies. This is referred to as splat syntax. There are a few more meta-parameters that I’ll cover on their own slides.
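The indexing and splat interpolations just described might look like the following sketch, with hypothetical disk names and outputs:

```hcl
resource "google_compute_disk" "default" {
  count = 3
  name  = "disk-${count.index}"   # disk-0, disk-1, disk-2
  zone  = "us-central1-a"
}

# Target a single copy by adding its index after the resource name
output "first_disk_name" {
  value = "${google_compute_disk.default.0.name}"
}

# Splat syntax (*) returns a list of the attribute for all copies
output "all_disk_names" {
  value = "${google_compute_disk.default.*.name}"
}
```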
Resources can specify provisioners to run scripts when they are created or destroyed.
Resources are allowed to have zero or more provisioners.
The syntax for including a provisioner is to include a provisioner block in the resource’s configuration. The name of the provisioner must match the name of one of the available provisioners shown on the left. There are generic provisioners such as local-exec to run a script on the machine that is running Terraform, or remote-exec to connect to a remote machine and run a script there. For example, when an instance is created, you could use the remote-exec provisioner to set up a configuration management tool. Terraform isn’t a configuration management tool, and HashiCorp recommends using a tool that is built for configuration management to manage the software on machines. You can also use built-in support for configuration management tools like Chef or SaltStack. Of course, you may not need configuration management tools if you are using pre-configured images or containers. The file provisioner is used to copy files to instances.
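A hedged sketch of the provisioner block syntax, using hypothetical resource names and commands (remote-exec would also need connection details, covered shortly):

```hcl
resource "google_compute_instance" "server" {
  # ... instance configuration elided ...

  # Run a command locally, on the machine running Terraform
  provisioner "local-exec" {
    command = "echo ${self.name} created >> created.txt"
  }

  # Run commands on the newly created remote machine
  provisioner "remote-exec" {
    inline = [
      "sudo apt-get update",
    ]
  }
}
```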
Each type of provisioner has different configuration keys, but they all share a few. The when key is optional and can be set to create or destroy to indicate when the provisioner should run. create is the default. Destroy provisioners may be used to extract data or perform cleanup routines. To ensure a destroy-time provisioner is executed, you should set the count of the resource to zero. If you instead deleted the resource block, Terraform wouldn’t have the configuration of the provisioner.
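A sketch of a destroy-time provisioner using the when key, with a hypothetical disk name and command:

```hcl
resource "google_compute_disk" "default" {
  name = "my-disk"
  zone = "us-central1-a"

  # Runs when the resource is destroyed, not when it is created
  provisioner "local-exec" {
    when    = "destroy"
    command = "echo ${self.name} destroyed >> destroyed.txt"
  }
}
```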
Provisioners can also optionally specify a connection block. Many provisioners require accessing remote resources and the connection block allows you to configure how to connect either using SSH or WinRM.
For example, you can set the desired type, user, and password. As a best practice the password is set using a variable to avoid storing sensitive information in the configuration file. That’s everything I want to say about provisioners. We’ll see an example showing how to use the local-exec provisioner in the demo at the end of this lesson.
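A sketch of a connection block configured for SSH, with the password supplied through a variable as described; the variable and user names are hypothetical:

```hcl
# Supplied at run time so the password isn't stored in the file
variable "admin_password" {}

resource "google_compute_instance" "server" {
  # ... instance configuration elided ...

  provisioner "remote-exec" {
    inline = ["hostname"]

    connection {
      type     = "ssh"
      user     = "admin"
      password = "${var.admin_password}"
    }
  }
}
```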
All resources can also specify a lifecycle block to customize their lifecycle behavior. The lifecycle block can specify up to 3 optional keys.
Create before destroy controls whether a new resource is created before the old resource it replaces is destroyed. The default is false.
Prevent destroy, when set to true, will cause Terraform to throw an error any time an attempt is made to destroy the resource. The default is false.
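The two lifecycle keys discussed so far might be set like this (the disk name is hypothetical, and both values shown are for illustration):

```hcl
resource "google_compute_disk" "default" {
  name = "my-disk"
  zone = "us-central1-a"

  lifecycle {
    # Build the replacement before removing the old resource
    create_before_destroy = true
    # Set to true to make any destroy attempt fail with an error
    prevent_destroy = false
  }
}
```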
Ignore changes tells Terraform to ignore changes to the specified resource attributes. You may desire this behavior if, for example, Terraform created a route table that has routes dynamically populated. That’s all for the slide portion of this lesson. We’ll switch over to a demo that illustrates how to create Google Compute Engine disks using some of the meta-parameters we’ve discussed.
I’m here at the Google Cloud Terraform provider documentation page. In the sidebar, below the list of available data sources, you can find all of the resources available through the provider. As you can see, there are a lot of available resources, covering much of the available APIs for Google Cloud Platform. I’ll select the Google Compute disk resource.
Resource documentation usually includes an example followed by the arguments, which are the keys used to configure the resource, and resource attributes below that. We can see that the name and zone are required arguments and the others are optional.
I’ve written a configuration that will create 3 disks using the count meta-parameter. The resource also includes a local-exec provisioner. The local-exec provisioner doesn’t need a connection block because the script runs on the same machine that’s running Terraform. The command key specifies the command to run. In this case, I append the disk index followed by the disk’s self_link URI to a file. Notice you can use the self interpolation syntax to refer to the current resource. Usually provisioners are used in the context of creating and destroying virtual machines, but as you can see, provisioners can be used in any resource block.
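The exact configuration is in the course repository; based on the narration, a sketch of it might look like the following (the resource name, zone, and output file name are assumptions):

```hcl
resource "google_compute_disk" "default" {
  count = 3
  name  = "disk-${count.index}"
  zone  = "us-central1-a"

  # Append this disk's index and self_link URI to a local file
  provisioner "local-exec" {
    command = "echo ${count.index}: ${self.self_link} >> disks.txt"
  }
}
```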
I’ll run the apply command to have it generate a plan that I can reject or accept. The plan shows that three resources will be created. That looks good so I’ll enter yes to accept the plan. Terraform starts creating the three disks in parallel so they won’t necessarily finish in order. Now the apply has completed and we have three disks. Depending on the state of your GCP account, you may have received an error message because the Google Compute Engine API was disabled. If that’s the case, follow the instructions in the error message to enable the API. To check on the provisioner, I’ll output the disk file and there we see each disk ran the provisioner and added an entry to the file.
That’s all for the demo. To clean up the resources I’ll issue the destroy command. If you ever forget a command you can simply enter terraform and scan the list of commands. To show the available options for a command you can use the -help option. Destroy without any options will destroy all the resources, so I will simply enter destroy. Terraform prompts to accept the plan to destroy the three disks, enter yes and the disks are destroyed.
In this lesson, we learned about the meta-parameters shared by resources, including provider, depends_on, and count.
We also saw how provisioners can be used to run scripts when any resource is created or destroyed. There are a variety of provisioners to choose from, including local-exec and remote-exec to run scripts on the machine running Terraform or on a remote machine, respectively.
We learned how to use resource lifecycle blocks to configure the lifecycle behavior of resources. We demonstrated several of the concepts by creating Google Compute Engine disks.
In the next lesson we’ll uncover some of the internals of how Terraform manages infrastructure. Continue on to the next lesson to learn more about how Terraform works.
About the Author
Logan has been involved in software development and research since 2007 and has been in the cloud since 2012. He is an AWS Certified DevOps Engineer - Professional, AWS Certified Solutions Architect - Professional, Microsoft Certified Azure Solutions Architect Expert, MCSE: Cloud Platform and Infrastructure, Google Cloud Certified Associate Cloud Engineer, Certified Kubernetes Administrator (CKA), Certified Kubernetes Application Developer (CKAD), Linux Foundation Certified System Administrator (LFCS), and Certified OpenStack Administrator (COA). He earned his Ph.D. studying design automation and enjoys all things tech.