AWS Simple VPC + EC2 Instance

Contents

  • Introduction
  • Terraform Introduction
  • Terraform CLI
  • Terraform Language
  • Wrap Up

Difficulty: Intermediate
Duration: 1h 41m
Students: 8604
Ratings: 4.3/5
Description

Terraform is an open source "Infrastructure as Code" tool, used by DevOps and SysOps engineers to codify their cloud infrastructure requirements.

In this course you'll learn about Terraform from the ground up, and how it can be used to codify infrastructure. Terraform can be used to provision infrastructure across multiple cloud providers, including AWS, which is the provider this course focuses on.

resource "aws_instance" " cloudacademy " {
ami = data.aws_ami.ubuntu.id
instance_type = var.instance_type
key_name = var.key_name 
subnet_id = aws_subnet.private.id
security_groups = [aws_security_group.webserver.id]
 
user_data =<<EOFF
#!/bin/bash
read -r -d '' META <<- EOF
CloudAcademy ♥ Terraform!
For any feedback, queries, or suggestions relating to this course
please contact us at:
Support: support@cloudacademy.com
LinkedIn: https://www.linkedin.com/in/jeremycook123
EOF
echo "$META"
EOFF

tags = {
Org = "CloudAcademy"
Course = "Terraform 1.0"
Author = "Jeremy Cook"
}
}

Learning Objectives

  • Learn about Terraform and how to use it to provision AWS infrastructure
  • Learn how to build and create Terraform configurations and modules
  • Learn how to use the Terraform CLI to launch and manage infrastructure on AWS

Intended Audience

  • Anyone interested in learning about Terraform, and the benefits of using it to codify infrastructure
  • Anyone interested in building and launching AWS infrastructure using Terraform
  • Anyone interested in deploying cloud native applications on AWS

Prerequisites

The following prerequisites would be considered useful for this course:

  • Knowledge of the AWS cloud platform and the various services within it – particularly VPC, EC2, and IAM
  • Basic system administration experience
  • Basic infrastructure and networking knowledge
  • Basic SysOps and/or DevOps knowledge

Resources

All Terraform configuration used within the provided demonstrations is located on GitHub here:

Transcript

Welcome back. In this demonstration, I'll show you how to create a simple AWS VPC spanning two availability zones. Two public subnets will be created, together with an internet gateway and a single route table. A t3.micro instance will be deployed and installed with Nginx for web serving. Security groups will be created and deployed to secure all network traffic between the various components. Let's begin.

Okay, so we're starting out in the following CloudAcademy GitHub repo. If you want to follow along, then I highly recommend you clone this repo locally. This repo contains four Terraform AWS infrastructure-based exercises.

In this demonstration, we'll perform exercise one. The AWS VPC architecture that we'll build in this exercise is shown here. Nice and simple for our first example. The Terraform root module will consist of the following. Jumping into Visual Studio Code, you can see that I have already git cloned the repo. As mentioned, the repo contains four exercises, and in this demo, we'll focus on the Terraform templates in the exercise one folder. I'll now proceed and open up the main.tf, outputs.tf, terraform.tfvars, and variables.tf files. Starting off with the main.tf file, I'll highlight the AWS provider configuration.

Now, when it comes to building your own configurations, remember that you can always copy and paste this block from the Terraform AWS provider online documentation. Most of the time, you will want to go with the latest version. With the AWS provider now configured, it's time to jump into the terminal and initialize the Terraform working directory. To accomplish this, I'll simply execute the terraform init command. This will download the required plugins, in this case, the AWS provider plugin.
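
For reference, a provider configuration copied from the documentation typically looks something like the following minimal sketch; the version constraint is an assumption consistent with the 3.55 plugin mentioned below, not necessarily the exact value used in the exercise:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }
}

provider "aws" {
  region = "us-west-2"
}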

Here you can see that this has now completed successfully. This is a one-time operation that is required before you perform any terraform plan or apply commands. Next, I'll do a directory listing to highlight the new .terraform directory and the .terraform.lock.hcl file that have been created as a result of executing the last command. I'll now use the tree command to highlight the internal structure of the .terraform directory. Here we can see the AWS 3.55 provider plugin binary that has been downloaded.

Okay, moving on, let's examine the Terraform configuration within the main.tf file. The first resource block that we declare is the one for the VPC. Here, we're initializing it with the CIDR block 10.0.0.0/16, the largest VPC address space we can create within AWS. I'll also tag it with the following tags for identification purposes.
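
As a rough sketch, the VPC resource block takes a shape like this; the resource name and tag values are illustrative assumptions:

resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"

  tags = {
    Name = "cloudacademy-vpc"
    Org  = "CloudAcademy"
  }
}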

Next up, I'm creating two public subnets, which will be provisioned within the previously declared VPC. Subnet one will be created with the first /24 block and subnet two will be created with the next /24 block. Subnet one will be deployed into the first AZ and subnet two will be deployed into the second AZ. Both AZs used in this example are stored in a list-based variable named availability_zones.
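
A minimal sketch of the two subnet declarations might look like the following, assuming the VPC resource is named main and indexing into the availability_zones variable; the resource names and exact /24 blocks are assumptions:

resource "aws_subnet" "public1" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.1.0/24"
  availability_zone = var.availability_zones[0]
}

resource "aws_subnet" "public2" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.0.2.0/24"
  availability_zone = var.availability_zones[1]
}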

Taking a look at the variables.tf file, we can see each of the declared variables that the root module takes as inputs. Highlighting the availability_zones variable, we can see that it is indeed typed as a list of string, but has no default value. Instead, the value is passed in via the terraform.tfvars file. Here we can see the two string values assigned to the availability_zones list: us-west-2a and us-west-2b.
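
In other words, the variable declaration and its terraform.tfvars assignment look roughly like this:

# variables.tf
variable "availability_zones" {
  type = list(string)
}

# terraform.tfvars
availability_zones = ["us-west-2a", "us-west-2b"]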

Okay, jumping back into the main.tf file, the next resource I'll highlight is the internet gateway. The internet gateway is required to facilitate internet traffic. Next, we declare a public route table containing a default route which will route outbound traffic through the internet gateway. This route table is then associated with both public subnets.
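
Put together, the internet gateway, route table, and its subnet associations might be sketched as follows; resource names are assumptions carried over from the earlier sketches:

resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.main.id
}

resource "aws_route_table" "public" {
  vpc_id = aws_vpc.main.id

  # Default route sending all outbound traffic via the internet gateway
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.igw.id
  }
}

resource "aws_route_table_association" "public1" {
  subnet_id      = aws_subnet.public1.id
  route_table_id = aws_route_table.public.id
}

resource "aws_route_table_association" "public2" {
  subnet_id      = aws_subnet.public2.id
  route_table_id = aws_route_table.public.id
}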

Next, we declare a security group to restrict inbound and outbound traffic to the EC2 instance that is to be declared later. This security group has two ingress rules and one egress rule. The first ingress rule allows inbound SSH traffic from my workstation's public IP address. This IP address is stored in a Terraform variable named workstation_ip.
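
Taken together with the port 80 ingress rule and the egress rule described in the next two paragraphs, a hedged sketch of such a security group, assuming workstation_ip holds a bare IPv4 address, might look like this:

resource "aws_security_group" "webserver" {
  name   = "webserver-sg"
  vpc_id = aws_vpc.main.id

  # SSH only from the workstation's public IP
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["${var.workstation_ip}/32"]
  }

  # HTTP from anywhere, for the Nginx web server
  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # Allow all outbound traffic so the instance can download packages
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}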

Now, the value for the workstation_ip variable is set using an environment variable in the terminal. Within the terminal, I'll run the command set, piped into grep, to search for, in capitals, TF_VAR. And here, we can see the value assigned to it. When the terraform plan or apply command is executed, Terraform will detect the presence of the environment variable and then use it within the configuration. The second ingress rule is used to allow inbound port 80 web traffic to the Nginx web server that we'll install on the EC2 instance.
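
In other words, the variable is declared without a default and its value arrives from the shell environment; the example export below is an illustrative assumption:

variable "workstation_ip" {
  type        = string
  description = "Workstation public IP address allowed to SSH in"
  # Supplied at plan/apply time via an environment variable, e.g.:
  #   export TF_VAR_workstation_ip="203.0.113.10"
}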

The single egress rule is required to allow the EC2 instance to connect out to the internet to pull down the Nginx package, which will be installed. The final resource is the EC2 instance itself, which will be bootstrapped, as mentioned, with the Nginx web server. The EC2 instance is configured using various settings stored within Terraform variables, such as its AMI, instance type, SSH key, subnet ID, and security groups. Additionally, the EC2 instance is configured to have a public IP address automatically assigned to it.

The Nginx web server is installed by virtue of configuring the user_data attribute, which in turn is configured with a multi-line string containing the bash install script. Note, the multi-line string uses heredoc format, encapsulated within a pair of EOF strings.
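
As a rough idea of what that attribute looks like, assuming an Ubuntu AMI (the exact script used in the exercise may differ):

  # Inside the aws_instance resource block: bootstrap Nginx via user data
  user_data = <<EOF
#!/bin/bash
apt-get update -y
apt-get install -y nginx
systemctl enable --now nginx
EOF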

Okay, at this stage, we're ready to jump back into the terminal and perform a plan and apply. Before I do, I just want to highlight how the AWS user credentials are managed. For this example, I'm setting the credentials using environment variables set within the terminal, as seen here.

Right, we are ready to run a terraform plan. In doing so, Terraform will generate an execution plan for us, highlighting what will be created and/or changed. Having reviewed this, we can proceed by running the terraform apply command, and in this case, I will auto-approve it by adding the -auto-approve parameter. So here we can see that we have begun the AWS infrastructure provisioning process. The time it takes to complete is entirely dependent on the number and type of AWS resources being launched.

While this is happening, let me show you the Terraform extensions that I have set up within Visual Studio Code. The first extension that I have installed is the HashiCorp Terraform extension. Now, one of the customizations that I've applied to this extension is the format-on-save option, setting it to true. With this set, the extension will automatically run the terraform fmt command on the Terraform code in the file you've just saved. This is super useful, as it keeps your code following best practices in terms of formatting and layout. For example, if I modify the main.tf file to have non-standard formatting, when I save it, the Terraform extension will automatically apply best-practice formatting: string quoting, indentation, et cetera.

Another thing to highlight is the snippet generation and IntelliSense options provided by the HashiCorp Terraform extension. Additionally, I can also pull up any number of snippets provided by the Terraform Doc Snippets extension, which I have also installed. In the example shown here, I'm entering the character sequence tf-AWS-resource to trigger the available AWS snippets. Each snippet has a preview of what to expect. In this example, if I go with the AMI snippet, I get the following AMI block pre-configured with the commonly used attributes, et cetera. As mentioned, these snippets are provided by the Terraform Doc Snippets extension.

Okay, let's do a little bit of clean-up and then head back over into the terminal, as our terraform apply command has now completed successfully. Here we can see that nine resources have been added and that we have several outputs indicating the subnet IDs, VPC ID, and the public IP address assigned to the Nginx web server instance. These particular outputs have been declared within the outputs.tf file. Let's now copy the public IP address and then perform a curl request to it. Excellent, our Nginx server is now up and running and has been able to respond to our HTTP GET request. This is a great result.
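
For reference, the output declarations in outputs.tf might look roughly like this; the output and resource names are assumptions:

output "vpc_id" {
  value = aws_vpc.main.id
}

output "public_subnet_ids" {
  value = [aws_subnet.public1.id, aws_subnet.public2.id]
}

output "nginx_public_ip" {
  value = aws_instance.cloudacademy.public_ip
}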

All of the AWS VPC setup and networking configuration was performed automatically for us by Terraform. We can now confidently jump into our browser and test out the same address, like so. And perfect, we get the default Nginx web page displayed. Heading over into the AWS console, we can examine the VPC section and view the newly provisioned VPC.

Likewise, the same for the EC2 instance. Here we can see the EC2 instance that is up and running. We can also navigate to the user data that was passed to it at launch time and, indeed, it contains the bash script that installed the Nginx web server, very cool. Back within the terminal, let's run a terraform refresh command. This will reconcile the local Terraform state with the actual infrastructure state and, again, print out the configured outputs. As expected, nothing has changed.

The next thing I will demonstrate is the concept of Terraform workspaces. Let's first consult the workspace command help by running the command terraform workspace --help. Here we can see each of the workspace subcommands. Let's examine the current list of workspaces. Here, we can see that we have just the default workspace.

Let's now create a new workspace named test. When we create a new workspace, we are swapped into it automatically. We can confirm this by using the show command to display the currently active workspace, and indeed, it is the test workspace. I'll now run terraform apply -auto-approve to provision new, identical AWS infrastructure, albeit managed within the local Terraform test workspace.

Again, the AWS provisioning has completed successfully. Copying the latest public IP address for the new Nginx web server, we can test it out to see if it's alive. Here, the HTTP request has failed. This is likely due to the fact that the Nginx web server is still in the process of warming up, that is, completing its boot and installation processes. Repeating the curl command should eventually result in a successful HTTP response, which it now does.

We can double check this by returning to our browser, and again, we are able to successfully pull up the Nginx web page. Jumping back into the AWS console, let's check that we have additional AWS infrastructure. Here in the EC2 console, we can see the presence of another EC2 instance. And likewise, back in the VPC console, we have a second identically configured VPC, very cool.

Okay, at this stage, we are now finished with this exercise. To minimize our ongoing AWS costs, we can tear down the test workspace infrastructure by running the command terraform destroy. Returning to the AWS console, I'll confirm that, indeed, the test infrastructure has now been removed. We can also repeat the same terraform destroy command in the default workspace. Let's do this now. And again, confirming that our AWS resources for the default workspace have been successfully removed, which they have. Okay, that now completes this Terraform exercise.

About the Author
Students: 133498
Labs: 68
Courses: 111
Learning Paths: 190

Jeremy is a Content Lead Architect and DevOps SME here at Cloud Academy where he specializes in developing DevOps technical training documentation.

He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 25+ years. In recent times, Jeremy has been focused on DevOps, Cloud (AWS, Azure, GCP), Security, Kubernetes, and Machine Learning.

Jeremy holds professional certifications for AWS, Azure, GCP, Terraform, and Kubernetes (CKA, CKAD, CKS).