
Terraform CLI Subcommands

Overview
Difficulty
Beginner
Duration
1h 41m
Students
3640
Ratings
4.3/5
Description

Terraform is an open source "Infrastructure as Code" tool, used by DevOps and SysOps engineers to codify their cloud infrastructure requirements.

In this course you'll learn about Terraform from the ground up, and how it can be used to codify infrastructure. Terraform can be used to provision infrastructure across multiple cloud providers, including AWS, which is the focus of this course.

resource "aws_instance" "cloudacademy" {
  ami           = data.aws_ami.ubuntu.id
  instance_type = var.instance_type
  key_name      = var.key_name
  subnet_id     = aws_subnet.private.id

  # vpc_security_group_ids is the correct attribute when launching into a VPC subnet
  vpc_security_group_ids = [aws_security_group.webserver.id]

  user_data = <<EOFF
#!/bin/bash
read -r -d '' META <<- EOF
CloudAcademy ♥ Terraform!
For any feedback, queries, or suggestions relating to this course
please contact us at:
Support: support@cloudacademy.com
LinkedIn: https://www.linkedin.com/in/jeremycook123
EOF
echo "$META"
EOFF

  tags = {
    Org    = "CloudAcademy"
    Course = "Terraform 1.0"
    Author = "Jeremy Cook"
  }
}

Learning Objectives

  • Learn about Terraform and how to use it to provision AWS infrastructure
  • Learn how to build and create Terraform configurations and modules
  • Learn how to use the Terraform CLI to launch and manage infrastructure on AWS

Intended Audience

  • Anyone interested in learning about Terraform, and the benefits of using it to codify infrastructure
  • Anyone interested in building and launching AWS infrastructure using Terraform
  • Anyone interested in deploying cloud native applications on AWS

Prerequisites

Prerequisites that would be considered useful for this course are:

  • Knowledge of the AWS cloud platform and the various services within it – particularly VPC, EC2, and IAM
  • Basic system administration experience
  • Basic infrastructure and networking knowledge
  • Basic SysOps and/or DevOps knowledge

Resources

All Terraform configuration used within the provided demonstrations is located in GitHub here:

Transcript

Welcome back. In this lesson, I'm going to review the Terraform CLI and its available subcommands. Particular focus will be placed on the main commands: init, validate, plan, apply, and destroy. Let's begin. When starting out with the Terraform CLI tool, check out the embedded help documentation. To do so, fire up your local terminal and type in terraform -help. This will display Terraform help regarding all of the available subcommands, which are grouped into those considered the main commands, followed by all remaining commands, which are less commonly used and/or intended for more advanced requirements.
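A trimmed example of what that help output looks like in a Terraform 1.x installation (exact wording and command list vary by version):

```shell
$ terraform -help
Usage: terraform [global options] <subcommand> [args]

Main commands:
  init          Prepare your working directory for other commands
  validate      Check whether the configuration is valid
  plan          Show changes required by the current configuration
  apply         Create or update infrastructure
  destroy       Destroy previously-created infrastructure

All other commands:
  console       Try Terraform expressions at an interactive command prompt
  fmt           Reformat your configuration in the standard style
  ...
```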

As seen on the slide, the main commands are considered to be init, validate, plan, apply, and destroy. These commands, which I'll go deeper into in the following slides, are the ones that you will often cycle through when you are iteratively developing and building out your Terraform infrastructure. The remaining subcommands, as seen here, are, as mentioned, less commonly used and intended more for advanced scenarios. Having said that, I'll call out a couple that I tend to use frequently.

Console. The console subcommand fires up a REPL (read-evaluate-print loop) interactive console. This allows you to test out and evaluate Terraform expressions, which is especially useful when experimenting or troubleshooting. The interactive console will actually use the available Terraform state during evaluations.
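A short illustrative console session evaluating built-in functions (expressions that reference resources will reflect whatever state your workspace holds):

```shell
$ terraform console
> max(5, 12, 9)
12
> cidrsubnet("10.0.0.0/16", 8, 2)
"10.0.2.0/24"
> exit
```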

Format. The fmt subcommand is useful to reformat and standardize the layout of your Terraform code.
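For example, running fmt in a working directory rewrites configuration files into the canonical style and prints the names of any files it changed:

```shell
# Rewrites *.tf files in place; prints nothing if everything is already canonical
$ terraform fmt
main.tf
```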

Output. The output subcommand can quickly re-render the output values for your root module.

Workspace. The workspace subcommand we covered earlier on in the course, but to reiterate, it is used to create and manage multiple workspace environments, which in turn can be used to build multiple versions of infrastructure from the same set of Terraform configurations. I'll now move on and do a deeper dive on the main commands, since they are the ones that you will use most often and will need to be confident with to build AWS infrastructure.
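A quick workspace example, using a hypothetical "staging" environment (creating a workspace also switches you into it, marked by the asterisk):

```shell
$ terraform workspace new staging
$ terraform workspace list
  default
* staging
$ terraform workspace select default
```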

Starting with the Terraform init command. This is a mandatory command required to initialize your Terraform workspace in the current directory. Before running the Terraform init subcommand, your root module directory will look something like the following. The key point here is the absence of the .terraform directory.

Now, if we were to examine a typical main.tf file that gets populated into the root module directory, it would look something like this. Here we can see that it has been configured with the AWS 3.55 provider, and that it is configured to authenticate by using a profile name, in this case the default profile. This tells Terraform to collect the AWS credentials from the current user's ~/.aws/credentials file, the same one that the AWS CLI uses and manages.
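The slide itself isn't reproduced here, but a minimal main.tf along these lines might look like the following. The region value is an assumption for illustration; only the provider version and profile are taken from the description above:

```shell
# Write a minimal main.tf matching the provider setup described above
cat > main.tf <<'EOF'
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "3.55.0"
    }
  }
}

provider "aws" {
  # "default" tells Terraform to read credentials from the current
  # user's ~/.aws/credentials file, the same file the AWS CLI manages
  profile = "default"
  region  = "us-east-1" # assumed region, not specified in the course
}
EOF
```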

Now, to initialize our current working directory, we enter the command terraform init. The initialization process will do a number of things for us. Firstly, Terraform reads our configuration files in the working directory to determine which plugins are necessary, searches for the installed plugins in several known locations, and then downloads the correct one, in this case the AWS 3.55 provider. It will also create a dependency lock file to record the versions of the plugins that we have initialized our working directory with. And finally, it will also pull down any external modules used and referenced within our remaining Terraform templates.
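An abridged init run against a configuration pinned to the AWS 3.55 provider looks roughly like this (exact output varies by Terraform version and platform):

```shell
$ terraform init

Initializing the backend...

Initializing provider plugins...
- Finding hashicorp/aws versions matching "3.55.0"...
- Installing hashicorp/aws v3.55.0...
- Installed hashicorp/aws v3.55.0 (signed by HashiCorp)

Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!
```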

If we were to rescan the current working directory after having performed a terraform init command, we would now see the updates made within it, namely the presence of the .terraform directory, which holds a copy of the configured provider and any referenced external modules. The current working directory also now contains a .terraform.lock.hcl file, which, as previously mentioned, is used to record the versions of the plugins that we have initialized our working directory with.

If we were to use the tree command on the .terraform directory, we would be able to see and examine its internal structure. In this case, we have configured a single provider, that being the AWS version 3.55 provider. Note here that the AWS provider file is itself an executable file. Terraform plugins, and in our case the AWS provider, are written in Go and are executable binaries invoked by Terraform Core over RPC.
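The layout below illustrates what tree typically shows after init on a Linux machine; the platform directory and binary name depend on your OS, architecture, and provider version:

```shell
$ tree .terraform
.terraform
└── providers
    └── registry.terraform.io
        └── hashicorp
            └── aws
                └── 3.55.0
                    └── linux_amd64
                        └── terraform-provider-aws_v3.55.0_x5
```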

In this example, you can also see the presence of the AWS VPC module, which has been truncated for brevity. As an FYI, the AWS VPC module is available in the Terraform public registry, and it's super useful for building out VPC configurations and related networking components very quickly.

Next up is the Terraform validate command. The Terraform validate command does just that. It validates all of your local Terraform configuration, making sure that it is syntactically correct, et cetera. It is often used immediately after any save operation on the configuration. In the example provided here, the current Terraform configuration has successfully passed and is therefore considered valid.

In the next example, the cidr_block attribute in the VPC resource has been intentionally commented out to cause an error. Rerunning the terraform validate command will cause it to flag the problem. In this case, it is flagged, as expected, with the "Error: Missing required argument" message, and importantly it highlights the offending Terraform configuration file and the problematic line number within it.
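A validation failure of this kind looks roughly as follows; the file name and line number shown here are illustrative, not taken from the course:

```shell
$ terraform validate
╷
│ Error: Missing required argument
│
│   on network.tf line 7, in resource "aws_vpc" "main":
│    7: resource "aws_vpc" "main" {
│
│ The argument "cidr_block" is required, but no definition was found.
╵
```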

Next up is the Terraform plan command. The terraform plan command is a dry-run command, which is typically run just before the apply command. When executing this command, Terraform is just telling us what it would do if we performed the apply command. Running this command acts as a safety check. Sometimes our assumptions of what an apply would do might be slightly or considerably wrong. Fingers crossed here this is not the case. Regardless, the plan command will highlight exactly what would happen, providing us with an execution plan that we can review before doing the real thing.

Whenever you run a plan or apply, Terraform reconciles three different data sources. One, what you wrote in your Terraform templates. Two, the current Terraform state file. And three, what infrastructure actually exists within the infrastructure provider. In the plan example shown here, the plan results are indicating that nine new resources would be added, zero would be changed, and zero would be destroyed.
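A plan run over a configuration like the one in this course ends with a summary of that reconciliation; the resource shown and the counts are illustrative:

```shell
$ terraform plan

  # aws_vpc.main will be created
  + resource "aws_vpc" "main" {
      + cidr_block = "10.0.0.0/16"
      ...
    }

Plan: 9 to add, 0 to change, 0 to destroy.
```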

Because Terraform is convergent, it will plan the fewest required actions to bring the infrastructure to the desired configuration. Terraform also considers dependencies to determine the order in which changes must be applied. The plan stage is relatively inexpensive compared to actually applying changes, so you can often use the plan command while developing your configuration to see what changes would need to take place.

Moving on to the Terraform apply command. The terraform apply command re-runs the plan, or execution plan, and assuming you approve it, will then provision the changes within the provider as per the plan. Now, if anything goes wrong, Terraform will not attempt to automatically roll back the infrastructure to the state it was in before running apply. This is because apply adheres to the plan. It won't delete your resources if the plan doesn't call for it.

To address the need for a rollback position, if you version control your infrastructure configuration code, and we strongly encourage you to do so, you can use a previous version of your configuration to roll back to. Alternatively, you can use the destroy or taint command to target components that need to be deleted or recreated respectively. By default, the apply command will always prompt you first for confirmation before applying the plan changes.

Auto confirmation can be set by attaching the -auto-approve parameter. This is useful as a workflow optimization when doing frequent small incremental changes, perhaps within a dev/test environment. Obviously, take care when considering doing this in production. When the apply command completes, it will report back the applied changes as those added, those changed, and those deleted. It will also render out any outputs that you have coded into your Terraform templates.
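An abridged non-interactive apply, with illustrative counts and an illustrative output value:

```shell
# Skips the interactive confirmation prompt; use with care outside dev/test
$ terraform apply -auto-approve
...
Apply complete! Resources: 9 added, 0 changed, 0 destroyed.

Outputs:

vpc_id = "vpc-0123456789abcdef0"
```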

In the example shown here, the output section shows the subnet IDs for subnets one and two, the VPC ID, and the public IP address for the web EC2 instance. Finally, we have the Terraform destroy command, which is used to tear down all Terraform managed infrastructure that you have codified. The terraform destroy command is clearly a destructive command, so care must be taken using it, particularly so in production.

When it comes to production environments, authorizing destructive operations via Terraform within your AWS account can be and should be controlled by an appropriately designed IAM policy. This policy would then be attached to an IAM user whose credentials are securely managed and available to only a select few. When the terraform destroy command runs, it will again plan for and report out the required deletion operations to remove all Terraform managed resources within your AWS account.

By default, the destroy command will always prompt you first for confirmation before applying the deletions. Auto confirmation can again be set by attaching the -auto-approve parameter, but this should only be done if you're a hundred percent certain of what the end result is, more so than ever for production environments. Similar to the plan and apply commands, the destroy command, once executed and completed, will report on the final number of resources that have been destroyed.
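An abridged destroy run showing the default confirmation prompt (resource counts are illustrative):

```shell
$ terraform destroy
...
Plan: 0 to add, 0 to change, 9 to destroy.

Do you really want to destroy all resources?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: yes
...
Destroy complete! Resources: 9 destroyed.
```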

About the Author
Students
99719
Labs
55
Courses
112
Learning Paths
91

Jeremy is a Content Lead Architect and DevOps SME here at Cloud Academy where he specializes in developing DevOps technical training documentation.

He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 25+ years. In recent times, Jeremy has been focused on DevOps, Cloud (AWS, GCP, Azure), Security, Kubernetes, and Machine Learning.

Jeremy holds professional certifications for AWS, GCP, Terraform, Kubernetes (CKA, CKAD, CKS).