

AWS Advanced VPC + ALB + EC2 Instances (v1)

Overview

Difficulty: Beginner
Duration: 1h 41m
Students: 1373
Rating: 4.5/5
Description

Terraform is an open-source "Infrastructure as Code" tool used by DevOps and SysOps engineers to codify their cloud infrastructure requirements.

In this course you'll learn about Terraform from the ground up and how it can be used to codify infrastructure. Terraform can provision infrastructure across multiple cloud providers, including AWS, which is the focus of this course.

resource "aws_instance" "cloudacademy" {
  ami                    = data.aws_ami.ubuntu.id
  instance_type          = var.instance_type
  key_name               = var.key_name
  subnet_id              = aws_subnet.private.id
  vpc_security_group_ids = [aws_security_group.webserver.id]

  user_data = <<EOFF
#!/bin/bash
read -r -d '' META <<- EOF
CloudAcademy ♥ Terraform!
For any feedback, queries, or suggestions relating to this course
please contact us at:
Support: support@cloudacademy.com
LinkedIn: https://www.linkedin.com/in/jeremycook123
EOF
echo "$META"
EOFF

  tags = {
    Org    = "CloudAcademy"
    Course = "Terraform 1.0"
    Author = "Jeremy Cook"
  }
}

Learning Objectives

  • Learn about Terraform and how to use it to provision AWS infrastructure
  • Learn how to build and create Terraform configurations and modules
  • Learn how to use the Terraform CLI to launch and manage infrastructure on AWS

Intended Audience

  • Anyone interested in learning about Terraform, and the benefits of using it to codify infrastructure
  • Anyone interested in building and launching AWS infrastructure using Terraform
  • Anyone interested in deploying cloud native applications on AWS

Prerequisites

The following prerequisites would be useful for this course:

  • Knowledge of the AWS cloud platform and the various services within it – particularly VPC, EC2, and IAM
  • Basic system administration experience
  • Basic infrastructure and networking knowledge
  • Basic SysOps and/or DevOps knowledge

Resources

All Terraform configuration used within the provided demonstrations is located in GitHub here:

Transcript

Welcome back. In this demonstration, I'll show you how to create an advanced AWS VPC spanning two Availability Zones, with both public and private subnets. An internet gateway and a NAT gateway will be deployed into it, and public and private route tables will be established. An Application Load Balancer will be installed within it, which will load balance incoming traffic across an auto scaling group of NGINX web servers. Again, security groups will be created and deployed to secure all network traffic between the various components. Let's begin.

As per the previous demonstration, all of the Terraform configuration demonstrated here is available online, this time in the exercise two folder within the repo. The AWS VPC architecture that we'll build in this exercise is shown here and is more advanced than the previous one by virtue of the VPC having public and private zones, combined with the introduction of an Application Load Balancer and an auto scaling group. Regardless, the Terraform configuration is still contained within a single root module.

Jumping into Visual Studio Code, I'll open up each of the Terraform config files. Starting off in the main.tf file, again, we have the AWS provider, which will allow us to provision our AWS infrastructure. As with all Terraform projects, before we can provision actual infrastructure, we first need to run the Terraform init command to initialize our working directory, which I'll do now.

Okay, that's kicked off and initializing. While that is happening, let me explain the key configuration changes introduced into the main.tf file. Here, we have a data source which captures information about the available Ubuntu AMIs. We'll leverage this later on in a launch template resource further down in this file. Next is our VPC resource for establishing the VPC. The only difference here is that it now receives the CIDR block from a variable. If we look at the variables.tf file, we can see the variable that is used to store the CIDR block. The terraform.tfvars file actually holds the default value, as seen here.
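The Ubuntu AMI data source described above typically looks something like the following sketch (the data source name, owner account, and filter values are assumptions, not taken from the course repo):

```hcl
# Hypothetical sketch: look up the most recent Ubuntu 20.04 AMI published by Canonical.
data "aws_ami" "ubuntu" {
  most_recent = true
  owners      = ["099720109463"] # Canonical's AWS account ID

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"]
  }
}
```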

Now, within the VPC, we'll establish public and private zones. Subnets one and two will be for the public zone and will have an attached route table that routes default traffic via an internet gateway. Subnets three and four, on the other hand, will be allocated to the private zone and will have an attached route table that routes default traffic through a managed NAT gateway. The subnet resource configuration demonstrates the use of an in-built Terraform function, in this case the cidrsubnet function, to calculate the CIDR block for the subnet itself. By doing so, we again make our main.tf file more flexible for future requirements.
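A public subnet using cidrsubnet might be sketched as follows (the resource names, variable name, and Availability Zone are assumptions for illustration):

```hcl
# Hypothetical sketch: carve a /24 public subnet out of the VPC's /16 CIDR block.
resource "aws_subnet" "public1" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = cidrsubnet(var.cidr_block, 8, 1) # e.g. 10.0.0.0/16 -> 10.0.1.0/24
  availability_zone       = "us-west-2a"
  map_public_ip_on_launch = true
}
```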

Now, to understand how the cidrsubnet function works, I'll jump over into the terminal and fire up the Terraform console. Next, I can simply copy across the expression that uses the function and evaluate it. Here we can see that it has returned the CIDR block 10.0.1.0/24. We can repeat this for the second subnet; this time it returns 10.0.2.0/24, and we can keep repeating this to understand how the CIDR blocks are being generated. This also demonstrates the usefulness of the terraform console command.
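The console session just described would look roughly like this, assuming the VPC CIDR block is 10.0.0.0/16:

```
$ terraform console
> cidrsubnet("10.0.0.0/16", 8, 1)
"10.0.1.0/24"
> cidrsubnet("10.0.0.0/16", 8, 2)
"10.0.2.0/24"
```

cidrsubnet adds the given number of new bits (here 8) to the /16 prefix to produce a /24, with the final argument selecting the subnet number within that range.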

Okay, moving down the main.tf file, next up is an Elastic IP resource. This will be used by the following NAT gateway resource; together they allow instances in the private zone to route traffic out to the internet. We'll need this because our auto scaling group of instances will reside in the private zone and must call out to the internet to download the NGINX packages for web serving.
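The Elastic IP and NAT gateway pairing can be sketched as below (resource names are assumptions; the vpc = true argument matches AWS provider versions contemporary with this Terraform 1.0 course):

```hcl
# Hypothetical sketch: an EIP consumed by a NAT gateway placed in a public subnet.
resource "aws_eip" "nat" {
  vpc = true
}

resource "aws_nat_gateway" "nat" {
  allocation_id = aws_eip.nat.id
  subnet_id     = aws_subnet.public1.id # the NAT gateway itself lives in the public zone
}
```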

Next up is our route table configuration. Separate route tables are established, one for each of our zones. The public route table routes traffic via the internet gateway, and the private route table routes traffic via the NAT gateway. The security group configuration has also been modified. Here we have separate security groups for both the web fleet and the Application Load Balancer. The web server security group, as seen here, should actually have its second ingress port 80 rule made more restrictive, to allow inbound port 80 traffic only from the Application Load Balancer nodes.
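A minimal sketch of the two route tables, assuming hypothetical resource names for the VPC, internet gateway, and NAT gateway:

```hcl
# Public zone: default route out via the internet gateway.
resource "aws_route_table" "public" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.main.id
  }
}

# Private zone: default route out via the NAT gateway.
resource "aws_route_table" "private" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.nat.id
  }
}
```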

For the record, I'll now make this change and commit it back to the repo for your benefit. The Application Load Balancer security group allows all inbound port 80 traffic from the internet. Additionally, it is required to have an egress rule to allow it to forward downstream traffic to the web fleet.
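The tightened ingress rule mentioned above would look something like this sketch, which references the ALB's security group rather than an open CIDR block (the security group name is an assumption):

```hcl
# Hypothetical sketch: allow inbound port 80 only from the ALB's security group.
ingress {
  from_port       = 80
  to_port         = 80
  protocol        = "tcp"
  security_groups = [aws_security_group.alb.id]
}
```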

Next up, we have configured an AWS launch template resource. This represents the launch configuration for the web fleet instances that are managed within an auto scaling group and sit behind the Application Load Balancer in the private zone. The AMI ID is pulled from the Ubuntu data source that we reviewed earlier towards the top of this file. The network_interfaces block declared here is used to attach the security group and to explicitly disable public IP address assignment.

Now, since recording this demo, I have replaced the network_interfaces block in favor of using the vpc_security_group_ids attribute to attach the security groups. This is more stable for the overall infrastructure when you reapply updates through Terraform. The launch template finally configures user data to bootstrap the instances with the NGINX web server. In this example, the actual bootstrapping script is stored externally in its own file and is pulled in using the in-built filebase64 function, which in turn uses string interpolation to inject the current module path. Here we can see the actual contents of the referenced ec2.userdata file.
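Putting those pieces together, the updated launch template might be sketched like so (the resource names are assumptions; the user data path matches the ec2.userdata file referenced above):

```hcl
# Hypothetical sketch: launch template using vpc_security_group_ids and external user data.
resource "aws_launch_template" "webserver" {
  name_prefix            = "webserver"
  image_id               = data.aws_ami.ubuntu.id
  instance_type          = var.instance_type
  vpc_security_group_ids = [aws_security_group.webserver.id]
  user_data              = filebase64("${path.module}/ec2.userdata")
}
```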

Next up, we have the Load Balancer. It's configured to be an internet-facing Application Load Balancer, is attached to the ALB security group, and is deployed across the two public subnets, spanning both Availability Zones for availability purposes. We then establish a web server target group, with the target group port set to port 80, the NGINX default listening port. Equally, the Application Load Balancer itself is configured to listen on port 80, such that it simply forwards traffic from port 80 down to port 80. This is configured via both a default action configured directly on the Application Load Balancer's listener and via a single listener rule, whose only condition matches the root path and forwards to the same target group.
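A condensed sketch of the ALB, target group, and port 80 listener described above (all resource and subnet names are assumptions):

```hcl
# Hypothetical sketch: internet-facing ALB forwarding port 80 to the web server target group.
resource "aws_lb" "alb" {
  name               = "alb"
  internal           = false
  load_balancer_type = "application"
  security_groups    = [aws_security_group.alb.id]
  subnets            = [aws_subnet.public1.id, aws_subnet.public2.id]
}

resource "aws_lb_target_group" "webserver" {
  name     = "webserver"
  port     = 80
  protocol = "HTTP"
  vpc_id   = aws_vpc.main.id
}

resource "aws_lb_listener" "http" {
  load_balancer_arn = aws_lb.alb.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.webserver.arn
  }
}
```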

Finally, an auto scaling group resource is configured to span the two privately zoned subnets, subnet three and subnet four. The desired, min, and max settings are all set to two, which will result in two instances always running; this is done for demonstration purposes only. The auto scaling group references the earlier reviewed launch template and is configured to register its instances as targets within the target group.
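The auto scaling group configuration described above can be sketched as follows (resource names are assumptions):

```hcl
# Hypothetical sketch: ASG of two instances in the private subnets, registered with the target group.
resource "aws_autoscaling_group" "webserver" {
  vpc_zone_identifier = [aws_subnet.private3.id, aws_subnet.private4.id]
  desired_capacity    = 2
  min_size            = 2
  max_size            = 2
  target_group_arns   = [aws_lb_target_group.webserver.arn]

  launch_template {
    id      = aws_launch_template.webserver.id
    version = "$Latest"
  }
}
```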

Okay, with the root module review now complete, let's head over to the terminal and launch the infrastructure by executing terraform apply. Okay, that has now completed successfully and we have several outputs printed out for our viewing. They include the Application Load Balancer DNS, the subnet IDs for each of the four subnets, and the VPC ID. I'll copy the Application Load Balancer DNS value and then call for an HTTP response, using the -I flag to indicate that I'm only interested in the HTTP headers for now. And excellent, it returns an HTTP 200 response code, indicating success, and the response appears to have originated from an NGINX web server, which is what we would expect.

From here, I'll jump over into my browser and browse to the Application Load Balancer, like so. And what would you know, we have a valid response back from our auto scaled group of NGINX web servers via the Application Load Balancer. A top result. To round out this exercise, I'll now examine the AWS infrastructure just provisioned. In the EC2 console, we can see that we indeed have two web server instances.

Navigating to the Load Balancer section, we can see our newly provisioned Application Load Balancer with the same DNS address, which we just used to browse to. We can also see that it has a single HTTP port 80 listener and if we click on the view rules link, we can observe the listener rules we have configured. In this case, our custom rule and the default rule both forward traffic downstream to the same target group, which in our case, is the web server auto scaling group.

Drilling into the target group, we can see that it has successfully registered both EC2 instances and, importantly, both are registered as healthy. Okay, that now concludes this demo. If you've been following along, please don't forget to run terraform destroy to tear down your AWS resources.

About the Author
Jeremy Cook
Content Lead Architect
Students
78993
Labs
49
Courses
110
Learning Paths
61

Jeremy is a Content Lead Architect and DevOps SME here at Cloud Academy where he specializes in developing DevOps technical training documentation.

He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 25+ years. In recent times, Jeremy has been focused on DevOps, Cloud (AWS, GCP, Azure), Security, Kubernetes, and Machine Learning.

Jeremy holds professional certifications for AWS, GCP, and Kubernetes (CKA, CKAD, CKS).