Terraform Introduction
Difficulty: Intermediate
Duration: 1h 41m
Students: 10127
Ratings: 4.3/5
Description

Terraform is an open source "Infrastructure as Code" tool, used by DevOps and SysOps engineers to codify their cloud infrastructure requirements.

In this course you'll learn about Terraform from the ground up, and how it can be used to codify infrastructure. Terraform can be used to provision infrastructure across multiple cloud providers, including AWS, which this course will focus on.

resource "aws_instance" " cloudacademy " {
ami = data.aws_ami.ubuntu.id
instance_type = var.instance_type
key_name = var.key_name 
subnet_id = aws_subnet.private.id
security_groups = [aws_security_group.webserver.id]
 
user_data =<<EOFF
#!/bin/bash
read -r -d '' META <<- EOF
CloudAcademy ♥ Terraform!
For any feedback, queries, or suggestions relating to this course
please contact us at:
Support: support@cloudacademy.com
LinkedIn: https://www.linkedin.com/in/jeremycook123
EOF
echo "$META"
EOFF

tags = {
Org = "CloudAcademy"
Course = "Terraform 1.0"
Author = "Jeremy Cook"
}
}

Learning Objectives

  • Learn about Terraform and how to use it to provision AWS infrastructure
  • Learn how to build and create Terraform configurations and modules
  • Learn how to use the Terraform CLI to launch and manage infrastructure on AWS

Intended Audience

  • Anyone interested in learning about Terraform, and the benefits of using it to codify infrastructure
  • Anyone interested in building and launching AWS infrastructure using Terraform
  • Anyone interested in deploying cloud native applications on AWS

Prerequisites

The following prerequisites would be useful for this course:

  • Knowledge of the AWS cloud platform and the various services within it, particularly VPC, EC2, and IAM
  • Basic system administration experience
  • Basic infrastructure and networking knowledge
  • Basic SysOps and/or DevOps knowledge

Resources

All Terraform configuration used within the provided demonstrations is located in GitHub here:

Transcript

Welcome back. In this lesson, I'll provide a high-level overview of Terraform, highlighting some of the more important features that you'll benefit from once adopted. After this lesson, you'll be able to answer questions like: What is Terraform? Why would you use it? And, in simplified terms, how does it work? Let's begin.

To begin with, Terraform is an open source infrastructure as code tool originally started by HashiCorp and contributed to by the open source community. HashiCorp is a company that specializes in producing tools and applications for DevOps, security and cloud computing infrastructure management. Terraform itself is a cloud agnostic infrastructure provisioning tool that helps to ease the burden of infrastructure builds and maintenance. Beyond the open source version of Terraform, which is installed locally, Terraform is also available in a cloud and enterprise edition.

Terraform Cloud is HashiCorp's managed service offering, and Terraform Enterprise is similar to Terraform Cloud, but is focused on being a self-hosted solution addressing the needs of data localization and operational security policies. Again, the intention of this course is to focus on the open source version of Terraform, using it to provision AWS infrastructure. Having said that, it is worthwhile acknowledging that Terraform can be used to provision multi-cloud deployments.

The infrastructure Terraform manages can be hosted on public clouds such as Amazon Web Services, Azure, and Google Cloud Platform. It can even be used for on-prem or private clouds such as OpenStack, VMware vSphere, or CloudStack. Terraform's infrastructure integrations also allow you to manage software and services, including databases like MySQL, source control systems like GitHub, configuration management tools like Chef and Puppet, and much more.

Currently, there are well over 100 publicly available infrastructure integrations. Before we dive deeper into Terraform itself, I'd like to step back and quickly review the concept of infrastructure as code, why it is important and why it has become so popular in recent times. Infrastructure as code allows us to codify our infrastructure requirements into machine readable definition files. In doing so, we are effectively creating executable documentation.

Anyone new to a project can examine the project's infrastructure as code templates and immediately understand the infrastructure configuration. By using code to generate infrastructure, the same environment can not only be recreated multiple times, it can be done so consistently, without error or unintentional divergence. Additionally, infrastructure as code can address environmental drift: situations where the infrastructure has drifted away from the initial day zero configuration.

Over time, the day one, day two, et cetera, infrastructure may encounter unintentional, or intentional but unapproved, changes. By comparing the current state of your infrastructure against your existing infrastructure as code templates, you can detect any drift and reset it back to the recorded baseline. With infrastructure as code, your templates can be stored in a version control system such as Git, allowing teams to collaborate on infrastructure. Team members can check out specific versions of code and create their own development or test environments. In the past, a pain point that often existed for developers, before moving to cloud infrastructure, was the delay encountered while operations teams had to budget, plan, create, and deliver physical infrastructure.

Now, with the elasticity of the cloud allowing resources to be created on demand, developers can instead provision the infrastructure they need, when they need it. All combined, these benefits make infrastructure as code not only useful, but a must-have tool, particularly as a SysOps and DevOps enabler.

So now that we know what infrastructure as code is, let's return to Terraform itself and begin to understand how it can be used to codify our infrastructure requirements. Terraform, the open source version, is packaged into a single executable file, lightweight and easy to install regardless of operating system. Once installed, you access its features via the terminal.

A typical infrastructure provisioning workflow involving Terraform goes like this. Iteratively, codify your infrastructure requirements into one or several Terraform configurations. Within your local terminal, use the Terraform tool to first validate and plan the infrastructure, and then later apply it. Later on in the project lifecycle, you can always use the Terraform destroy command to destroy your infrastructure if and when required.
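As a rough sketch, that workflow maps onto the following CLI commands:

terraform init      # one-time setup: download providers and modules, initialize the backend
terraform validate  # check the configuration for syntax and internal consistency
terraform plan      # preview the changes Terraform would make
terraform apply     # make the changes
terraform destroy   # tear everything down when no longer required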

Now, I'll go a lot deeper into each of these steps in the coming slides. As an infrastructure engineer, you write or modify Terraform template files for your infrastructure. These configuration files declare the desired state of your infrastructure. Later on, you can modify existing configuration files to declare how you want to change your existing infrastructure.

To keep you productive and from having to code every requirement from the ground up, Terraform provides, for your convenience, a public module registry from which you can import and leverage any number of modules. A module encapsulates related resources together, which, when combined, are used to achieve a particular requirement. During the demonstrations that I provide later on, I'll demonstrate both how to build your own modules and how to work with the modules available within the public Terraform module registry.
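As a quick illustration, consuming a registry module looks something like this; the source shown is the community VPC module from the public registry, while the version constraint and inputs are just examples:

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 3.0"  # example constraint: any 3.x release

  name = "cloudacademy-vpc"
  cidr = "10.0.0.0/16"
}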

When it comes to provisioning time, Terraform integrates with different cloud providers and/or other infrastructure vendors through the use of providers. A provider encapsulates all of the mechanics to connect, authenticate, and communicate with the infrastructure provider. This is one of the true benefits of using Terraform, that is, the potential for it to provision multi-cloud infrastructure.
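A minimal sketch of an AWS provider block, where the region shown is just an example and credentials are typically sourced from environment variables or the shared credentials file:

provider "aws" {
  region = "us-west-2"  # example region; credentials come from the environment or ~/.aws/credentials
}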

Once you're happy with your declared configuration, you can ask Terraform to generate an execution plan for it. The execution plan tells you what changes Terraform would need to make to bring your current infrastructure to the declared state in your configuration.

Now, if you accept the plan, you can then instruct Terraform to apply the changes. To do so, you proceed by using the apply command. The apply command can use the plan that you previously generated, or, if you don't provide a plan, apply can generate one for you and ask you to accept the plan before applying the changes. Terraform will then orchestrate the infrastructure API calls required to implement the infrastructure changes.
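For example, saving a plan and then applying exactly that saved plan looks like this; applying a saved plan file skips the interactive approval prompt:

terraform plan -out=tfplan  # write the execution plan to a file
terraform apply tfplan      # apply exactly the saved plan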

The Terraform Root Module is the entry point for all Terraform configuration. By convention, the Root Module gets populated with the following three Terraform config files that you produce: main.tf, variables.tf, and outputs.tf. However, as the complexity of your infrastructure requirements grows, you may refactor this initial arrangement in a number of ways. As already mentioned, it is only conventional to have the three previously named files. The Terraform configuration spread across these three files could in fact be collapsed and contained within a single file named anything you like, as long as it has the extension .tf.

Going in the opposite direction, the Terraform configuration within the Root Module could also be refactored by storing parts of it across and within additional subdirectories. Such subdirectories within the Root Module become nested modules.
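For example, a Root Module refactored with a single nested module might be laid out as follows (the webserver module name is purely illustrative):

.
├── main.tf
├── variables.tf
├── outputs.tf
└── modules/
    └── webserver/
        ├── main.tf
        ├── variables.tf
        └── outputs.tf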

Finally, depending on how you have configured your Terraform state setup, Terraform state files may also co-exist in the Root Module. This, however, is not the case when remote Terraform state has been configured; more on this later.

Another important concept to consider when setting up your initial Terraform environment is the concept of a Workspace. Terraform uses the concept of Workspaces to manage and separate multiple but different infrastructure environments using the same set of Terraform configuration files. This is particularly useful when you want to provision and mirror infrastructure for dev, test, or prod environments. With Workspaces, we can establish a Workspace per environment and then provision infrastructure specifically for that environment using the same Terraform configuration files. At a technical level, Workspaces isolate and manage multiple versions of Terraform state.

Workspaces are managed using the Workspace command. You can create additional Workspaces with the new subcommand and switch between Workspaces using the select subcommand. If you select a new Workspace, there is no state until you apply the configuration. Any resources created in other Workspaces still exist. As and when required, you can simply swap between Workspaces to manage resources assigned and provisioned within that Workspace.
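As a quick sketch (the dev and prod workspace names are just examples):

terraform workspace list         # list workspaces; the current one is marked with an asterisk
terraform workspace new dev      # create, and switch to, a new workspace named dev
terraform workspace select prod  # switch to an existing workspace named prod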

I'll now do a basic review of the three core files that are typically added to the Root Module, starting with the main.tf file. The main.tf file contains your core Terraform configuration, mostly the resources that you are declaring, which, when working with the AWS provider, at provisioning time will get converted into actual AWS cloud-hosted infrastructure resources, such as EC2 Instances. Over time, larger and more complex infrastructure setups might require you to go back and refactor and split up the contents of the main.tf file across multiple .tf files.

Next up is the variables.tf file. This is another file that, again, will often be added into the Root Module. The variables file contains all possible variables that are then referenced and used within the main.tf and/or other .tf files within the Root Module. When performing a terraform plan or terraform apply, the values assigned to each variable will be injected into any place the referenced variable name is used. Variables can be both typed and have default values, as seen here, although this is not mandatory.
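As a sketch, a typed variable with a default, matching the instance_type variable referenced in the earlier resource example, might be declared like this (the default shown is just an example):

variable "instance_type" {
  type        = string
  description = "EC2 instance type to launch"
  default     = "t3.micro"  # example default; can be overridden at plan/apply time
}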

As you'll see later on, there are multiple ways in which the defaults can be overwritten. Rounding out the last of the three conventional default files which are added to the Root Module to compose a simple Terraform configuration is the Outputs file. The Outputs file is where you configure any messages or data that you want to render out to the end user within the terminal at the end of an execution of the Terraform apply command.

Additionally, when using and embedding modules in a parent Terraform template, module outputs can be referenced within the parent Terraform template by using the module.<MODULE_NAME>.<OUTPUT_NAME> notation. This will be demonstrated later on within the demonstrations.
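As a sketch, an output declaration, and the corresponding parent-template reference to a module output, might look like this (the names shown are illustrative):

output "instance_public_ip" {
  description = "Public IP address of the provisioned EC2 instance"
  value       = aws_instance.cloudacademy.public_ip
}

# in a parent template, referencing an output exported by a module named "vpc":
# vpc_id = module.vpc.vpc_id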

As earlier mentioned, Terraform is a stateful application. It has been purposely designed to keep track of all infrastructure provisioned through it. All state tracked is stored inside of a Terraform State file. Having performed a Terraform apply, Terraform will capture and record the infrastructure state in two files, terraform.tfstate and terraform.tfstate.backup located in your working directory when working with local state.

The state is written in JSON format, meaning you can parse these files if required. These files represent Terraform's source of record, recording the last known state. The great thing about having Terraform track and maintain the last known state of your infrastructure is that it enables you to detect any drift or divergence. If you'd like to check and see if the state file still matches what you last built, you can use the terraform refresh command. Running this command will alert you to any detected change.
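As a quick sketch of that drift check:

terraform refresh  # reconcile the recorded state with the real-world infrastructure
terraform plan     # any divergence then shows up as proposed changes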

Terraform by default will store state on the local file system. However, you can update this configuration to store the state remotely, perhaps within a dedicated AWS S3 bucket. When using the local file system for state, this can become problematic when working in teams, since the state file is a frequent source of merge conflicts. When this occurs, consider using remote state instead.
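A minimal sketch of an S3 remote state backend, where the bucket, key, region, and table names are all illustrative:

terraform {
  backend "s3" {
    bucket         = "my-terraform-state"      # example bucket name
    key            = "prod/terraform.tfstate"  # path to the state object within the bucket
    region         = "us-west-2"               # example region
    encrypt        = true                      # encrypt the state object at rest
    dynamodb_table = "terraform-locks"         # optional DynamoDB table used for state locking
  }
}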

Using remote state is also considered more secure, since the data can be encrypted at rest, and Terraform only ever stores remote state in memory, never on disk. Requests for remote state are also encrypted during transit using Transport Layer Security, or TLS. Security is important because configurations can store secrets and sensitive information. You can also access remote state using data sources. This allows different projects to access a project's state in a read-only fashion.
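As a sketch, assuming the same illustrative bucket as above, reading another project's remote state via a data source looks like this:

data "terraform_remote_state" "network" {
  backend = "s3"
  config = {
    bucket = "my-terraform-state"         # example bucket name
    key    = "network/terraform.tfstate"  # the other project's state object
    region = "us-west-2"                  # example region
  }
}

# read-only access to that project's outputs, for example:
# subnet_id = data.terraform_remote_state.network.outputs.subnet_id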

Now, regardless of which backend you end up using for configuring Terraform state, if it supports locking, Terraform will lock the state while an operation that could potentially write state changes is happening. This is done to prevent state corruption.

Now, when it comes to connecting Terraform against a particular infrastructure provider, it's good to know that Terraform provides a public registry, located at registry.terraform.io, which contains a large collection of providers and modules. We'll talk about modules later on. But for now, it is providers which are used to integrate against an infrastructure provider's API.

When it comes to provisioning AWS infrastructure, you'll want to work with the latest version of the AWS provider. Providers are versioned to maintain compatibility with the infrastructure provider's API as it evolves over time. Once you have selected the AWS provider, the 'Use Provider' link, top right, provides an example of how to configure the AWS provider within your Terraform code.
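Pinning the provider version in your configuration might look like this; the constraint shown is just an example:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"  # example constraint: any 3.x release
    }
  }
}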

For the record, each available provider held within the registry provides comprehensive documentation, including examples of how to work with it. To access the documentation associated with the AWS provider, click on the documentation link, top right. All provider documentation is searchable, allowing you to quickly navigate to the required documentation. This example shows documentation specific to launching an AWS EC2 Instance. You can easily copy and paste the provided examples, thereby quickening the pace of development.

About the Author
Students: 143004
Labs: 69
Courses: 109
Learning Paths: 209

Jeremy is a Content Lead Architect and DevOps SME here at Cloud Academy where he specializes in developing DevOps technical training documentation.

He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 25+ years. In recent times, Jeremy has been focused on DevOps, Cloud (AWS, Azure, GCP), Security, Kubernetes, and Machine Learning.

Jeremy holds professional certifications for AWS, Azure, GCP, Terraform, and Kubernetes (CKA, CKAD, CKS).