Terraform 1.0 - Provisioning AWS Infrastructure

Terraform Language

Overview

Difficulty: Beginner
Duration: 1h 41m
Students: 1373
Rating: 4.5/5
Description

Terraform is an open source "Infrastructure as Code" tool, used by DevOps and SysOps engineers to codify their cloud infrastructure requirements.

In this course you'll learn about Terraform from the ground up, and how it can be used to codify infrastructure. Terraform can be used to provision infrastructure across multiple cloud providers including AWS which this course will focus on.

resource "aws_instance" "cloudacademy" {
  ami             = data.aws_ami.ubuntu.id
  instance_type   = var.instance_type
  key_name        = var.key_name
  subnet_id       = aws_subnet.private.id
  security_groups = [aws_security_group.webserver.id]

  user_data = <<EOFF
#!/bin/bash
read -r -d '' META <<- EOF
CloudAcademy ♥ Terraform!
For any feedback, queries, or suggestions relating to this course
please contact us at:
Support: support@cloudacademy.com
LinkedIn: https://www.linkedin.com/in/jeremycook123
EOF
echo "$META"
EOFF

  tags = {
    Org    = "CloudAcademy"
    Course = "Terraform 1.0"
    Author = "Jeremy Cook"
  }
}

Learning Objectives

  • Learn about Terraform and how to use it to provision AWS infrastructure
  • Learn how to build and create Terraform configurations and modules
  • Learn how to use the Terraform CLI to launch and manage infrastructure on AWS

Intended Audience

  • Anyone interested in learning about Terraform, and the benefits of using it to codify infrastructure
  • Anyone interested in building and launching AWS infrastructure using Terraform
  • Anyone interested in deploying cloud native applications on AWS

Prerequisites

Prerequisites which would be considered useful for this course are:

  • Knowledge of the AWS cloud platform and the various services within it – particularly VPC, EC2, and IAM
  • Basic system administration experience
  • Basic infrastructure and networking knowledge
  • Basic SysOps and/or DevOps knowledge

Resources

All Terraform configuration as used within the provided demonstrations is located in GitHub here:

Transcript

Welcome back. In this lesson, I'll review the more commonly used parts of the Terraform HCL language, which you'll require a good understanding of to codify your own Terraform infrastructure as code templates. Let's begin.

Formally, Terraform configuration is written using HCL, the HashiCorp Configuration Language: a human-friendly, readable, and writable syntax, perfect for codifying infrastructure requirements. HCL was designed to have a more clearly visible and defined structure when compared to other well-known configuration languages, such as JSON and YAML.

Now, at the top level, the HCL syntax comprises stanzas, or blocks, that define a variety of configurations available to Terraform. Blocks are composed of key-value pairs. Terraform accepts values of type string, number, Boolean, list, and map. Single-line comments start with a hash (#), while multi-line comments use an opening slash-asterisk (/*) and a closing asterisk-slash (*/).

An interpolated variable reference is constructed with the dollar-sign-and-curly-braces syntax, ${ }. For example, the Type tag in the provided example interpolates the variable named project. Single-line strings are written in double quotes, whereas multi-line strings are specified using the heredoc format. In this case, an opening EOF (end of file) character sequence is paired with a closing EOF character sequence; each line in between is considered part of the multi-line string. This multi-line string approach is often used to capture scripts within the user_data attribute when bootstrapping EC2 instances.
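The interpolation and heredoc syntax described above can be sketched as follows. The variable name project comes from the lesson; the resource and remaining attribute values are illustrative assumptions.

```hcl
variable "project" {
  default = "cloudacademy"
}

resource "aws_instance" "example" {
  ami           = "ami-12345678" # placeholder AMI ID
  instance_type = "t3.micro"

  # Single-line string containing an interpolated variable reference
  tags = {
    Type = "${var.project}-webserver"
  }

  # Multi-line string using the heredoc format; Terraform also
  # interpolates ${var.project} inside the heredoc
  user_data = <<EOF
#!/bin/bash
echo "Bootstrapping ${var.project}..."
EOF
}
```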

Maps are defined using curly braces and are a collection of key-value pairs. They are often used for creating variables that act as lookup tables. In the example provided here, an AMI lookup table has been created. The Terraform core program requires at least one provider to build anything. You can manually configure which versions of a provider you would like to use; if you leave this option out, Terraform will default to the latest available version of the provider.
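A sketch of a map variable acting as an AMI lookup table, together with explicit provider version pinning. The region keys, AMI IDs, and version constraint are illustrative assumptions, not the course's actual values.

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0" # pin a version rather than defaulting to latest
    }
  }
}

# Map variable used as a lookup table keyed by region
variable "ami_ids" {
  type = map(string)
  default = {
    us-east-1 = "ami-0aaaaaaaaaaaaaaaa"
    us-west-2 = "ami-0bbbbbbbbbbbbbbbb"
  }
}

variable "region" {
  default = "us-east-1"
}

resource "aws_instance" "web" {
  # Index into the map with the region key to resolve the AMI
  ami           = var.ami_ids[var.region]
  instance_type = "t3.micro"
}
```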

Remember to initialize the current working directory using the Terraform init command, which is required before attempting to perform a plan or apply. At the end of the day, it's all about provisioning resources within your infrastructure provider. The resource keyword is used to declare the type of resource you want to provision.

In the example given here, we are declaring two AWS resources, a VPC and a subnet. Each resource is then configured with its required and optional attributes. Note in this given example, the subnet resource utilizes and sets the count attribute, which is considered a meta argument within Terraform. It allows you to create multiple versions of the resource it is declared within.

In the example provided here, Terraform will create one subnet per availability zone. The outcome of applying this Terraform configuration will be an AWS VPC with public subnets deployed into each of the AZs across which the VPC spans. This type of syntax, although more abstract, is far more concise and compact than hand-writing each subnet individually.
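The count meta-argument pattern just described can be sketched like this. The AZ list and CIDR scheme are assumptions for illustration.

```hcl
variable "availability_zones" {
  default = ["us-east-1a", "us-east-1b", "us-east-1c"]
}

resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "public" {
  # count creates one instance of this resource per availability zone
  count             = length(var.availability_zones)
  vpc_id            = aws_vpc.main.id
  availability_zone = var.availability_zones[count.index]
  cidr_block        = "10.0.${count.index}.0/24"
}
```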

When declaring resources, the following layout is required: resource is the top-level keyword, followed immediately by the type and the name of the resource, both in double quotes. Although more recent versions of Terraform do not mandate double quoting either the type or the name, it is still considered idiomatic to do so. In fact, if you were to run the terraform fmt command to reformat your code, all unquoted resource types and names would become double quoted. The type represents the type of the resource to be provisioned.

In the two resource examples shown here, we are declaring types of aws_vpc and aws_subnet for an AWS VPC and an AWS subnet, respectively. The resource name is an arbitrary name that you come up with that you can then later use to refer to this instance of the resource. Every Terraform resource, regardless of type, is structured exactly the same way. This resource example demonstrates how to launch a single EC2 C5 instance type for the purposes of performing number-crunching, et cetera. Here the type is set to be an aws_instance, which represents an EC2 instance. The resource is then named NumberCruncher for lack of imagination.
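The layout just described, with the quoted type and the arbitrary resource name, can be sketched as below. The C5 instance size is an illustrative assumption; the AMI reference assumes a data source like the one discussed next.

```hcl
# resource <type> <name>: type aws_instance, name NumberCruncher
resource "aws_instance" "NumberCruncher" {
  ami           = data.aws_ami.ubuntu.id # resolved via a data source
  instance_type = "c5.large"             # compute-optimized C5 family
}
```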

Data sources are a way of querying an infrastructure provider for data about existing resources. Data sources, when declared, can leverage one or several filters to narrow down the returned data, to be more specific about the requirement at hand. In the example provided here, a data source is declared to return AMI IDs for all available Ubuntu 20.04 images in the current AWS region. If more than one image is discovered, the most recent one will be returned, because the most_recent attribute has been set to true.

The Ubuntu data source, as seen here, is then later used within the number cruncher AWS instance resource to specify its AMI. Using this type of approach instead of hard-coding the actual AMI ID directly within the AWS instance resource future-proofs your Terraform templates.
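A sketch of such an AMI data source, assuming Canonical's standard owner account ID and Ubuntu 20.04 AMI name pattern.

```hcl
data "aws_ami" "ubuntu" {
  most_recent = true             # return the newest matching image
  owners      = ["099720109477"] # Canonical's AWS account ID

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"]
  }
}

# Reference the discovered AMI ID instead of hard-coding it
resource "aws_instance" "NumberCruncher" {
  ami           = data.aws_ami.ubuntu.id
  instance_type = "c5.large"
}
```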

For example, consider the scenario of the Ubuntu 20.04 operating system being overhauled or patched by Canonical, perhaps due to a recently discovered security vulnerability. Having done this, they will likely also publish a new set of updated AMIs.

Now, the next time you perform a Terraform plan or apply, Terraform will detect that your existing instance or instances running the old AMI are out of date and can be relaunched with the newer, updated AMI. And when launching a brand-new environment, you'll always be safeguarded by the fact that the instances launched within it will be using the latest patched and up-to-date AMI.

In the second example of a data source, information about all available AZs for the current AWS region is queried for. The AZ data source is then referenced within the subnet resource being declared. Here, the availability zone attribute takes on the first value contained within the AZ's data source. Taking this approach helps to keep our Terraform templates generalized such that they can be reused easily across different AWS regions.
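The availability zones data source and the subnet that takes its first value might look like this; the CIDR block is an assumption.

```hcl
data "aws_availability_zones" "available" {
  state = "available"
}

resource "aws_subnet" "example" {
  vpc_id     = aws_vpc.main.id
  cidr_block = "10.0.1.0/24"
  # Take the first AZ returned for the current region, keeping the
  # template portable across regions
  availability_zone = data.aws_availability_zones.available.names[0]
}
```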

Variables are another technique to assist in keeping your Terraform configurations generalized and reusable for multiple requirements. The idiomatic practice is to store variables in a file named variables.tf. Variables can have default settings; if the default is omitted, the user will be prompted to enter a value. In the example provided here, we are declaring the variables that we intend to use, but haven't declared any default values. The declared variables can then be referenced from within the main.tf file and, for that matter, elsewhere in all other .tf files in the current directory.
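A sketch of what a variables.tf file might contain. The names mirror those used in the instance example at the top of this page; the types and default are illustrative assumptions. Because key_name has no default, Terraform will prompt for it unless a value is supplied another way.

```hcl
variable "instance_type" {
  type    = string
  default = "t3.micro"
}

variable "key_name" {
  type        = string
  description = "Name of an existing EC2 key pair"
  # no default: Terraform will prompt at runtime if no value is set
}
```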

It's important to understand that Terraform provides several ways in which you can set and override the default value for any and all declared variables. If multiple approaches are used together, then Terraform follows a defined precedence in terms of which one gets used. I'll now review them in order from highest priority to lowest priority, as displayed here.

Option one leverages command line variable flags: values defined on the command line with -var have the highest priority. Option two allows you to define your variable values within a terraform.tfvars file; if this file is detected, it will be used automatically. If required, you can have multiple distinctly named versions of the tfvars file. When you do so, you must declare which one is being used via the -var-file parameter. This approach is useful for altering the infrastructure provisioning process for, say, different environments.
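A sketch of a terraform.tfvars file assigning values to the variables declared in variables.tf; the values shown are illustrative assumptions.

```hcl
# terraform.tfvars: picked up automatically when present
instance_type = "t3.small"
key_name      = "my-keypair"
```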

Option three. Within the shell or terminal session from which the Terraform CLI is being used, you can set environment variables named with the following naming strategy: TF_VAR_ (in capitals) followed by the actual name of the variable itself, and then assign it a value.

Option four uses the default values stored against the declared variables within the variables.tf file. And lastly, option five, the lowest-priority option, is manual entry: you will be prompted to supply a value at runtime within the terminal during Terraform execution. Output values are like the return values of a Terraform module. The idiomatic practice is to store outputs in a file named outputs.tf.

Primarily outputs are used for the following two purposes. One, the root module uses outputs to print out values in the terminal for your convenience. In the example shown here, the public IP output would print out the AWS EC2 assigned public IP address to the terminal once the provisioning has completed. And two, a child module can use outputs to export a set of values which are required and used elsewhere within its parent module. From here, the parent module can then later pass these values as inputs to other child modules.
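The public IP output just mentioned might be declared in outputs.tf like this; the resource name matches the instance example at the top of this page.

```hcl
output "public_ip" {
  description = "Public IP address assigned by AWS to the instance"
  value       = aws_instance.cloudacademy.public_ip
}
```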

Modules are an abstraction that can be used to combine related resources together for reusability purposes. At implementation time, modules are containers of multiple related resources that are used together. A module consists of a collection of Terraform .tf files, all kept together in the same directory. Modules are the main way to package and reuse resource configurations within Terraform. Every Terraform configuration has at least one module, known as its root module, which consists of the resources defined in the .tf files in the main working directory.

A Terraform module, usually the root module of a configuration, can call other modules to include the resources into the configuration. A module that has been called by another module is often referred to as a child module. Child modules can be called multiple times within the same configuration, and multiple configurations can use the same child module.
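A sketch of a root module calling a child module. The module path and input variable are illustrative assumptions.

```hcl
module "network" {
  source = "./modules/network" # directory containing the child module

  vpc_cidr = "10.0.0.0/16"     # an input variable of the child module
}

# The child module's outputs can then be referenced by the parent,
# e.g. module.network.vpc_id, and passed as inputs to other modules
```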

Finally, as previously mentioned earlier in the course, Terraform has a public registry containing modules built by the Terraform community, all of which are available to cherry-pick from as and when required. Expressions are used to refer to or compute values within a configuration. The simplest expressions are just literal values, like the string "hello" or the number 5. But the Terraform language also allows more complex forms, such as references to data exported by resources, arithmetic, conditional evaluation, and expressions that utilize built-in functions.

In the provided example shown here, expressions are used to test whether the AWS security group variable is empty or not, and react accordingly. To round out the Terraform language introduction, Terraform includes a number of built-in functions that you can call from within your expressions, as just previously explained, to transform and combine values.
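A sketch of a conditional expression reacting to whether a security group variable is empty. The variable and resource names are illustrative assumptions.

```hcl
variable "security_group" {
  type    = string
  default = ""
}

resource "aws_security_group" "default" {
  name = "default-webserver"
}

resource "aws_instance" "web" {
  ami           = data.aws_ami.ubuntu.id
  instance_type = "t3.micro"

  # If no security group ID was supplied, fall back to the one
  # managed in this configuration; otherwise use the supplied ID
  vpc_security_group_ids = [
    var.security_group == "" ? aws_security_group.default.id : var.security_group
  ]
}
```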

The general syntax for function calls is a function name followed by parentheses containing a comma-separated list of input arguments. The available built-in Terraform functions, and there are many of them, allow you to perform infrastructure provisioning operations more dynamically. In the provided example here, three different built-in functions are used: length, cidrsubnet, and element, working together to codify the creation of multiple subnets for the scenario. The length function returns the length of a list.

In this example, it returns the availability zone count. The cidrsubnet function calculates a CIDR block string based on the inputs given, returning something like 10.0.0.0/24, 10.0.1.0/24, 10.0.2.0/24, and so on. Keep in mind that this function gets called multiple times, since this resource sets and uses the count meta-argument. The element function retrieves a single element from a list at the given index; if the given index is greater than the length of the list, the index is simply wrapped around.
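The three functions working together can be sketched as follows; the VPC CIDR and the exact data source arguments are assumptions.

```hcl
data "aws_availability_zones" "available" {
  state = "available"
}

resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "public" {
  # length returns the availability zone count
  count  = length(data.aws_availability_zones.available.names)
  vpc_id = aws_vpc.main.id

  # cidrsubnet("10.0.0.0/16", 8, 0) -> "10.0.0.0/24",
  # cidrsubnet("10.0.0.0/16", 8, 1) -> "10.0.1.0/24", and so on
  cidr_block = cidrsubnet(aws_vpc.main.cidr_block, 8, count.index)

  # element wraps around if the index exceeds the list length
  availability_zone = element(data.aws_availability_zones.available.names, count.index)
}
```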

About the Author
Jeremy Cook
Content Lead Architect
Students: 78993
Labs: 49
Courses: 110
Learning Paths: 61

Jeremy is a Content Lead Architect and DevOps SME here at Cloud Academy where he specializes in developing DevOps technical training documentation.

He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 25+ years. In recent times, Jeremy has been focused on DevOps, Cloud (AWS, GCP, Azure), Security, Kubernetes, and Machine Learning.

Jeremy holds professional certifications for AWS, GCP, and Kubernetes (CKA, CKAD, CKS).