Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can help with multi-cloud deployments by providing one workflow for all clouds. The infrastructure Terraform manages can be hosted on public clouds like Amazon Web Services, Microsoft Azure, and Google Cloud Platform, or on-prem in private clouds such as VMware vSphere, OpenStack, or CloudStack. Terraform treats infrastructure as code (IaC) so you never have to worry about your infrastructure drifting away from its desired configuration. If you like what you are hearing about Terraform, then this course is for you!
In this course, we’ll learn Terraform from the ground up. While building a strong foundation for you to solve real-world challenges with Terraform, you'll learn about its core concepts including HashiCorp Configuration Language, providers, resources, and state. The course concludes with a demo to illustrate how Terraform can be used to manage a practical infrastructure for deploying development and production versions of a two-tier application in Google's Cloud using Cloud SQL, Google Kubernetes Engine (GKE), and Kubernetes. The Terraform configuration files used in the course are all available in the course's GitHub repository.
This course is for anyone who is interested in managing infrastructure in public, private, or hybrid clouds. Some roles that fit into that category are:
- DevOps Engineers
- IT Professionals
- Cloud Engineers
After completing this course, you will be able to:
- Describe what Terraform is
- Write Terraform configuration files
- Understand how Terraform integrates infrastructure sources
- Manage multiple infrastructure environments with Terraform
This is an intermediate-level course that assumes you have prior experience with a scripting or programming language.
| Lesson | What you'll learn |
|--------|-------------------|
| Introduction | What will be covered in this course |
| What is Terraform? | Take a high-level look at what Terraform is and when to use it |
| Terraform Configuration | Understand the ins and outs of HashiCorp Configuration Language (HCL) |
| Providers | Discover how Terraform integrates various infrastructure sources |
| Resources | See how to configure parameters that are common to all resources |
| State | Learn how Terraform state connects your configuration with the real world |
| Two-Tier App Demo | See how to deploy a two-tier app in multiple environments with Terraform |
| Summary | Review the course and see what's next |
If you have thoughts or suggestions for this course, please contact Cloud Academy at firstname.lastname@example.org.
For Terraform to work effectively, it must keep track of the state of the resources that it manages. This lesson is devoted to the topic of Terraform state.
We’ll cover where state is stored, what kind of information is stored in state, and workspaces, which are containers for state.
State maps real-world resources to your configuration, keeps track of metadata, and improves performance for large infrastructures. The state is the link between your configuration and the real world. By default, Terraform will refresh its state before any operation. The state is used to generate plans.
Terraform can persist state in several different backends. The image shows the supported backends. They include public cloud storage services such as Amazon S3, Azure Resource Manager storage accounts, and Google Cloud Storage. Enhanced backends allow remote operations, which appear to run locally even though they execute on a remote machine.
The default backend is the local backend which stores state in a file on the local filesystem.
The local backend stores state in Terraform’s working directory in a file named terraform.tfstate.
You can configure where the local backend stores the state file by adding a terraform block to your configuration and configuring the path in a local backend block as shown in the image.
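To sketch what such a configuration looks like, a minimal example of overriding the default state location follows (the path shown is an arbitrary example, not one from the course):

```hcl
# Override where the local backend stores state.
# By default, Terraform would use ./terraform.tfstate.
terraform {
  backend "local" {
    # Example path: keep state one directory above the working directory
    path = "../state/terraform.tfstate"
  }
}
```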
The contents are written in JSON.
Using the local filesystem can be problematic when working in teams because the state file is a frequent source of merge conflicts. Using remote state, where Terraform stores state in a remote store such as cloud storage, is recommended for teams.
Using remote state is also more secure because the data can be encrypted at rest, and Terraform only ever stores remote state in memory, never on disk. Requests for remote state are also encrypted using Transport Layer Security (TLS). Security is important because configurations can store sensitive information.
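As a hedged illustration of a remote backend, the following configures state storage in Amazon S3 with encryption at rest enabled; the bucket name, key, and region are made-up examples:

```hcl
terraform {
  backend "s3" {
    bucket  = "example-terraform-state"  # hypothetical bucket name
    key     = "app/terraform.tfstate"    # object key for this configuration's state
    region  = "us-east-1"
    encrypt = true                       # encrypt the state object at rest
  }
}
```

Backend configuration is initialized with `terraform init`, which prompts to migrate any existing local state to the new backend.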
You can also access remote state using data sources. This allows different projects to access a project’s state in a read-only fashion.
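A sketch of such a data source follows, using the S3 backend as an example (bucket, key, and the `vpc_id` output are all illustrative; the block-style `config` shown here matches older Terraform versions, while newer versions use `config = { ... }`):

```hcl
# Read another project's state in a read-only fashion
data "terraform_remote_state" "network" {
  backend = "s3"
  config {
    bucket = "example-terraform-state"
    key    = "network/terraform.tfstate"
    region = "us-east-1"
  }
}

# Outputs of the other configuration become attributes, e.g.:
#   "${data.terraform_remote_state.network.vpc_id}"
```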
Whatever backend you use, if it supports locking, Terraform will lock the state while an operation that could potentially write state changes is happening. This prevents state corruption.
What is Stored?
To see what kind of information gets stored, let’s inspect a terraform.tfstate file for a configuration that manages an instance.
The state includes top-level fields such as the version of the state format, the version of Terraform used, and a serial number that increments every time the state changes. We also see that the state stores a list of modules.
Scrolling down, we see that the module path for the configuration is root. Even though we never explicitly declared any modules, the configuration is automatically included in a module called root. Below the path are any outputs that are declared, followed by resources. The resources map has keys that are formed by joining the resource type and resource name with a dot, similar to the interpolation syntax for resource attributes. The type and a depends_on list of dependencies are stored. The depends_on list includes both implicit and explicit dependencies, although there are no dependencies in this case. The primary map stores the id, all of the attributes, and metadata for the resource. The attributes and metadata have been omitted in the image. There is also a tainted flag to indicate if the resource needs to be recreated on the next apply. I’ll leave it at that. You should have a good understanding of what state is after reviewing the state file.
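Putting those pieces together, a trimmed-down sketch of such a terraform.tfstate file might look like the following; the field values and the resource are invented for illustration:

```json
{
  "version": 3,
  "terraform_version": "0.11.7",
  "serial": 4,
  "modules": [
    {
      "path": ["root"],
      "outputs": {},
      "resources": {
        "google_compute_instance.example": {
          "type": "google_compute_instance",
          "depends_on": [],
          "primary": {
            "id": "example",
            "attributes": {},
            "meta": {},
            "tainted": false
          }
        }
      }
    }
  ]
}
```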
Workspaces are containers for state. You can use workspaces to manage infrastructure environments using the same configuration files. For example, you may want to create a workspace for every branch in a version control system. The local and cloud storage backends all support workspaces.
Workspaces are managed using the workspace command. You can create additional workspaces with the new subcommand, and switch between workspaces using the select subcommand. If you select a new workspace, there is no state until you apply the configuration. Any resources created in other workspaces still exist, you simply need to change workspaces to manage those resources.
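A quick CLI sketch of that workflow follows; the workspace name "dev" is an arbitrary example:

```shell
# Create a new workspace and switch to it
terraform workspace new dev

# List all workspaces; the current one is marked with an asterisk
terraform workspace list

# Switch back to the default workspace
terraform workspace select default
```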
You can interpolate the current workspace in configurations using terraform.workspace. This is useful for conditions that modify the configuration based on workspace, for example if you have a development and production workspace. It is also useful for creating unique names for the created resources. Modifying configurations based on workspace is best combined with other isolation techniques, such as splitting larger configurations into multiple remote state configurations that can be linked using data sources. We’ll gain experience with workspaces in the following lesson which builds a moderately sized infrastructure across multiple environments.
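As a hedged sketch of both techniques, the resource below uses terraform.workspace for a unique name and a workspace-based condition to size an instance differently in production (the resource, machine types, and zone are illustrative, and the `"${...}"` interpolation syntax matches older Terraform versions):

```hcl
resource "google_compute_instance" "app" {
  # Unique name per workspace, e.g. "app-dev" or "app-production"
  name = "app-${terraform.workspace}"

  # Larger machine type only in the production workspace
  machine_type = "${terraform.workspace == "production" ? "n1-standard-2" : "f1-micro"}"
  zone         = "us-central1-a"

  # ... remaining required arguments omitted for brevity
}
```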
That wraps up this relatively short lesson on Terraform state. If you are ready to use Terraform to build practical infrastructure, continue on to the next lesson.
Logan has been involved in software development and research since 2007 and has been in the cloud since 2012. He is an AWS Certified DevOps Engineer - Professional, AWS Certified Solutions Architect - Professional, Microsoft Certified Azure Solutions Architect Expert, MCSE: Cloud Platform and Infrastructure, Google Cloud Certified Associate Cloud Engineer, Certified Kubernetes Administrator (CKA), Certified Kubernetes Application Developer (CKAD), Linux Foundation Certified System Administrator (LFCS), and Certified OpenStack Administrator (COA). He earned his Ph.D. studying design automation and enjoys all things tech.