AWS Advanced VPC + ALB + Cloud Native Application
1h 41m

Terraform is an open source "Infrastructure as Code" tool, used by DevOps and SysOps engineers to codify their cloud infrastructure requirements.

In this course you'll learn about Terraform from the ground up, and how it can be used to codify infrastructure. Terraform can be used to provision infrastructure across multiple cloud providers including AWS which this course will focus on.

resource "aws_instance" " cloudacademy " {
ami =
instance_type = var.instance_type
key_name = var.key_name 
subnet_id =
security_groups = []
user_data =<<EOFF
read -r -d '' META <<- EOF
CloudAcademy ♥ Terraform!
For any feedback, queries, or suggestions relating to this course
please contact us at:
echo "$META"

tags = {
Org = "CloudAcademy"
Course = "Terraform 1.0"
Author = "Jeremy Cook"

Learning Objectives

  • Learn about Terraform and how to use it to provision AWS infrastructure
  • Learn how to build and create Terraform configurations and modules
  • Learn how to use the Terraform CLI to launch and manage infrastructure on AWS

Intended Audience

  • Anyone interested in learning about Terraform, and the benefits of using it to codify infrastructure
  • Anyone interested in building and launching AWS infrastructure using Terraform
  • Anyone interested in deploying cloud native applications on AWS


Prerequisites that would be considered useful for this course are:

  • Knowledge of the AWS cloud platform and the various services within it – particularly VPC, EC2, and IAM
  • Basic system administration experience
  • Basic infrastructure and networking knowledge
  • Basic SysOps and/or DevOps knowledge


All Terraform configuration used within the provided demonstrations is located in GitHub here:


Welcome back. In this demonstration I'll show you how to create an advanced AWS VPC to host and support a fully functioning cloud native application. The VPC in question will span two availability zones and have public and private subnets. An internet gateway and managed NAT gateways will be deployed into it, and public and private route tables will be established. An application load balancer will be installed, which will load balance incoming traffic across an auto scaling group of NGINX web servers installed with the cloud native application's front-end and API. A database instance running MongoDB will be installed in the private zone. Security groups will be created and deployed to secure all network traffic between the various components. For demonstration purposes only, the cloud native application that we will deploy, which consists of front-end and API components, will be deployed such that both components are on the same set of EC2 instances. This is done solely to reduce running costs.

Okay, let's begin. As per the previous demonstrations, all of the Terraform configuration demonstrated here is available online, this time in the exercise four folder within the repo. This particular demonstration will show you how to deploy a fully working end-to-end deployment of a cloud native application. This web app allows you to vote for your favorite programming language, with the votes being collected and stored in a MongoDB database. The voting data is sent from the front-end to an API using Ajax calls. The API will in turn read and write to the MongoDB database. An auto scaling group of app servers will be provisioned, and each app server will have both the front-end and API components deployed onto it. During provisioning time, each app server will pull down the latest release of the front-end and API components from GitHub. Jumping across to the front-end GitHub repo, you can see the source code that makes up the React-based front-end. The latest front-end release that gets pulled down and installed onto the instances will be this one.

Next I'll pull up the API GitHub repo. This contains the API as used by the vote app. The API is written in Go and, as earlier mentioned, provides an interface to read and write the voting data into and out of the MongoDB database. Again, at provisioning time, the app instances, as they get launched, will pull down and install the latest release of the API found here. For your benefit, the actual bootstrapping user data script, as used to bootstrap the app instances, is documented directly here. It's encoded within a Terraform cloud-init template config block. It is used to first install NGINX for web serving, then it pulls down the latest front-end release from GitHub, installing it into NGINX's default serving directory. Next, it pulls down the latest API release from GitHub and starts it up, pointing it at the MongoDB instance's assigned private IP address, connecting on port 27017.
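As a rough sketch only (the real script lives in the exercise four folder of the course repo; the release URLs and variable names below are illustrative assumptions), the bootstrap sequence just described might be rendered into user data along these lines:

```hcl
# Hypothetical sketch of the app instance bootstrap script, embedded
# in Terraform and later passed to the instances as user data.
locals {
  app_user_data = <<-EOF
    #!/bin/bash
    # Install NGINX for web serving
    apt-get update -y
    apt-get install -y nginx

    # Pull down the latest front-end release and unpack it into
    # NGINX's default serving directory
    curl -sL "$${FRONTEND_RELEASE_URL}" | tar -xz -C /var/www/html

    # Pull down the latest API release and start it, pointing it at
    # the MongoDB instance's private IP on port 27017
    curl -sL "$${API_RELEASE_URL}" -o /usr/local/bin/api
    chmod +x /usr/local/bin/api
    MONGO_CONN="mongodb://$${MONGODB_IP}:27017" /usr/local/bin/api &

    systemctl restart nginx
  EOF
}
```

Note the `$${...}` escaping, which stops Terraform from interpolating what are intended to be shell variables.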

The next diagram seen here highlights the application load balancer's target group setup. Here we are setting up two target groups, one for the front-end and one for the API. The front-end target group will listen on port 80, whereas the API target group will listen on port 8080. The application load balancer itself will be configured to listen on port 80 and will forward traffic downstream to either of the target groups based on some forwarding rules that will be configured. More on this later.

The end result is that incoming requests will be load balanced across the auto scaling group, which spans two availability zones for high availability purposes. Now, from a VPC point of view, the architecture that we'll build and leverage within this demonstration will again be the same as used in the previous three demonstrations: the VPC will span two AZs and have public and private zones. But this time the VPC and underlying networking components will be declared using AWS's VPC Terraform module, available within the public Terraform registry.

Within this exercise, the key Terraform objective that I want to demonstrate is how to modularize your Terraform configurations. As seen here, the project structure for this demonstration will be the following. When your Terraform configurations become large and complex, modularizing them helps to make them more maintainable and readable.

Okay, jumping into Visual Studio Code, within the terminal pane I run the tree command to again highlight the project structure. Here we can see the root module's files, and then beneath them we have a modules directory. Within this directory we have several child modules: an application module, a bastion module, a network module, a security module, and a storage module. Back within the root module directory, we also have a terraform.tfvars file for default variable values.

Let's now take a look at the root module's file. This is our Terraform entry point. Within it, we can see how references to the child modules are made, starting with the network module, which has been refactored out to contain all of the VPC networking config. Next is the security module; this has also been refactored out to contain all of the security group configuration. Note that the security module has a dependency configured on the networking module, since it needs and references the VPC ID, which is configured as an output on the networking module.
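As a minimal sketch (module names come from the project structure shown earlier; the arguments are illustrative assumptions), the root module wiring just described might look like this:

```hcl
# Root module: the network module is declared first
module "network" {
  source = "./modules/network"
}

# The security module references the network module's vpc_id output,
# which implicitly establishes the dependency between the two
module "security" {
  source = "./modules/security"

  vpc_id = module.network.vpc_id
}
```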

It's important to note that referencing values from other modules can only be done if that other module creates an output for it. We can see that this is the case for the network module by opening up its file and observing the fact that it has an output named vpc_id. While in the network module, let's look at its file. Here we can see that the enclosed configuration is very concise. In fact, it is contained all within a single VPC module block. This is one of the very cool things when working with custom modules: your ability to abstract away a lot of the underlying configuration. In this case, we simply configure the public subnet CIDR blocks and the private subnet CIDR blocks, enable the NAT gateway setting, and then that's enough for Terraform to be able to go away and create the VPC, subnets, routing tables, routing associations, internet gateway, NAT gateways, et cetera. Using this approach is very clean and super productive.
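A hedged sketch of how such a network module might wrap the public registry VPC module and expose its ID — the name, region, and CIDR values here are purely illustrative:

```hcl
# Wrap the community VPC module from the public Terraform registry
module "vpc" {
  source = "terraform-aws-modules/vpc/aws"

  name = "cloudacademy-vpc"
  cidr = "10.0.0.0/16"

  azs             = ["us-west-2a", "us-west-2b"]
  public_subnets  = ["10.0.1.0/24", "10.0.2.0/24"]
  private_subnets = ["10.0.101.0/24", "10.0.102.0/24"]

  # This is enough for the module to create the subnets, route tables,
  # route table associations, internet gateway, and NAT gateways
  enable_nat_gateway = true
}

# Expose the VPC ID so that other modules (e.g. security) can reference it
output "vpc_id" {
  value = module.vpc.vpc_id
}
```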

Okay, let's return back to the root module's file. Next stop is the bastion module. This module is designed to launch a jump box, allowing us to connect to the privately zoned instances: our ASG app fleet and MongoDB database. The bastion will be deployed into the first public subnet and will be secured with the bastion security group created within the security module. Opening the bastion module's file, the key configuration items to call out are that it explicitly declares the AMI ID to be this value, which in this case is an Ubuntu 20.04 image, and that it also requires the instance to have a public IP address automatically associated with it. Heading back to the root module's file.

Next up is the storage module. This encapsulates the config for the MongoDB database. Jumping into the storage module and looking inside its file, we can see a similar config to that used for the bastion. However, in this case, we obviously don't need a public IP address assigned to it. The Mongo instance leverages user data to install and configure the MongoDB service onto itself. The user data is pulled in by calling the inbuilt function filebase64, which reads in the contents of the file.
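A minimal sketch of such a storage instance, assuming hypothetical variable names and an install script filename for illustration:

```hcl
# MongoDB instance in a private subnet; no public IP is associated
resource "aws_instance" "mongodb" {
  ami           = var.ami_id
  instance_type = var.instance_type
  subnet_id     = var.private_subnet_id

  # filebase64 reads the install script from disk and
  # base64-encodes it in a single step
  user_data = filebase64("${path.module}/install_mongodb.sh")
}
```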

Opening the file, we can see the commands required to download and install the MongoDB package. We then write out some configuration to the file system and start up the MongoDB service. We then generate a db.setup.js file containing sample data, which, when called upon, will be used to populate the MongoDB database. Now, this setup for our database will be sufficient for demonstration purposes, but in a production environment you'd likely want to configure MongoDB as a replica set and perhaps have its data volumes stored on EBS volumes provisioned with high IOPS for better performance, scalability, availability, and redundancy purposes.

Moving back to our root module's file one last time, we have the application module, which contains all resources related to the application itself. In this case, it contains the application load balancer, the auto scaling group, the launch template, et cetera. This module clearly has dependencies on the network, security, and storage modules, and therefore has these dependencies explicitly declared in the depends_on list at the bottom.

Let's now jump over into the application module's file. Here we can see that it starts off with a data source for the Ubuntu 20.04 image. This is later referenced within the launch template block further down in the file. Next up is the template cloud-init config resource. This contains the same script documented in the repo within the exercise four folder. Again, I'll highlight a couple of the more important parts of this template resource, which, once rendered, is used as user data for the app instances.

On lines 33 and 34, string interpolation is used to embed two environment variables within the script. The first is the application load balancer's fully qualified domain name. This is required by the front-end, being used to tell the browser where to aim the voting API Ajax requests. The second is used to configure the API service with the MongoDB database's private IP address, allowing it to know where to read and write data.
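A sketch of what that interpolation might look like, assuming a `cloudinit_config` data source and resource names (`aws_lb.app`, `aws_instance.mongodb`) chosen here for illustration:

```hcl
# Render the bootstrap script, interpolating the ALB's DNS name and
# the MongoDB instance's private IP into it via string interpolation
data "cloudinit_config" "app" {
  part {
    content_type = "text/x-shellscript"
    content      = <<-EOF
      #!/bin/bash
      # Where the browser should aim its voting API Ajax requests
      export ALB_FQDN="${aws_lb.app.dns_name}"
      # Where the API reads and writes its voting data
      export MONGODB_IP="${aws_instance.mongodb.private_ip}"
    EOF
  }
}
```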

Lines 39 to 51 are used to pull down the latest release of the React-based front-end and unpack it into the default NGINX serving directory. Lines 53 to 62 are used to pull down the latest release of the compiled API. Note how it references the MongoDB private IP address environment variable previously configured on line 34.

Finally, with the front-end and API components installed and ready, the NGINX web server is started up. Next up is the app launch template. Nothing too special going on here, other than the fact that the user data is configured by calling the inbuilt base64encode function, which takes the rendered output of the previous cloud-init template and returns the base64-encoded version of it.
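Sketching this out, and assuming the rendered cloud-init template is exposed as a data source named `app` (an illustrative name), the launch template might look like:

```hcl
resource "aws_launch_template" "app" {
  name_prefix   = "app-"
  image_id      = data.aws_ami.ubuntu.id
  instance_type = var.instance_type

  # Take the rendered cloud-init output and base64-encode it,
  # as the launch template's user_data argument requires
  user_data = base64encode(data.cloudinit_config.app.rendered)
}
```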

Next up is the application load balancer resource. Again, nothing special here to call out, other than the fact that it is an application load balancer. Behind the application load balancer are two target groups: one for the front-end NGINX web server, configured on port 80 with the following health check configuration. The second target group is for the API, which is configured to listen on port 8080. The API target group's health check is configured to send its health checks to the /ok endpoint specific to the API. The application load balancer is configured with a single HTTP port 80 listener. This single listener services both the front-end and API requests on the outside.
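The two target groups just described might be declared along these lines (resource and name values are illustrative assumptions):

```hcl
# Front-end target group: NGINX serving on port 80
resource "aws_lb_target_group" "frontend" {
  name     = "frontend-tg"
  port     = 80
  protocol = "HTTP"
  vpc_id   = var.vpc_id
}

# API target group: the Go API listening on port 8080,
# health checked against its dedicated /ok endpoint
resource "aws_lb_target_group" "api" {
  name     = "api-tg"
  port     = 8080
  protocol = "HTTP"
  vpc_id   = var.vpc_id

  health_check {
    path = "/ok"
    port = "8080"
  }
}
```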

On the inside, two listener rules are created: one for the front-end HTTP requests, and one for the API Ajax requests. The front-end HTTP listener rule forwards to the front-end target group, and the API HTTP listener rule forwards to the API target group and has a lower priority value of 10, meaning it gets precedence. This is done so that we can detect any incoming API calls, as per the condition configuration, and then forward these directly to the API target group.
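As a hedged sketch of the listener and rule setup (the path pattern in the condition is an assumption; the actual matching condition is whatever the exercise four config declares):

```hcl
# Single port 80 HTTP listener; unmatched traffic falls through
# to the front-end target group by default
resource "aws_lb_listener" "http" {
  load_balancer_arn = aws_lb.app.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.frontend.arn
  }
}

# Lower priority values are evaluated first, so API calls are
# peeled off and sent straight to the API target group
resource "aws_lb_listener_rule" "api" {
  listener_arn = aws_lb_listener.http.arn
  priority     = 10

  action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.api.arn
  }

  condition {
    path_pattern {
      values = ["/languages*"] # illustrative match for the API routes
    }
  }
}
```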

Moving on to the auto scaling group resource, this is configured in much the same way as it was in the previous demos. The only difference here is that it is configured to register its instances into both the front-end and API target groups. Additionally, the auto scaling group references the launch template configured earlier and purposely grabs the latest version of it. Finally, a new data source is configured to scan for any EC2 instances tagged with the following tags, which map exactly to the tags specified in the app server's launch template. We filter for instances which are in either a pending or running state. This data source is set up with a dependency on the auto scaling group resource.
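Sketched in HCL (sizes, tag values, and resource names are illustrative assumptions), the ASG and the instance-scanning data source might look like this:

```hcl
resource "aws_autoscaling_group" "app" {
  min_size            = 2
  max_size            = 2
  desired_capacity    = 2
  vpc_zone_identifier = var.private_subnet_ids

  # Register each instance into BOTH target groups
  target_group_arns = [
    aws_lb_target_group.frontend.arn,
    aws_lb_target_group.api.arn,
  ]

  launch_template {
    id      = aws_launch_template.app.id
    version = aws_launch_template.app.latest_version # always the latest
  }
}

# Scan for the ASG's instances by tag and state; the explicit
# depends_on ensures the ASG exists before the scan runs
data "aws_instances" "app" {
  instance_tags = {
    Name = "app-instance" # illustrative; must match the launch template's tags
  }

  instance_state_names = ["pending", "running"]

  depends_on = [aws_autoscaling_group.app]
}
```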

Now, the reason for this last data source is to be able to report back to the terminal the auto scaling group instances' private IP addresses. Working backwards, if we first look at the application module's file, we can see that it contains a private_ips output, which references the instances data source. Next, going up and out to the root module's file, we can see that it has an output named application private IPs, which references the application module's private_ips output.
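The output chaining just described can be sketched as follows (the root-level output name is an assumption based on how it reads at apply time):

```hcl
# In the application module: expose the ASG instances' private IPs
output "private_ips" {
  value = data.aws_instances.app.private_ips
}

# In the root module: surface the child module's output at the top
# level so it prints in the terminal after terraform apply
output "application_private_ips" {
  value = module.application.private_ips
}
```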

Okay, let's now proceed and launch the setup. To do so, I need to first initialize our working directory using terraform init. Once initialization is complete, I'll proceed with the terraform apply command. Okay, fast forwarding to the point where the apply command has now completed successfully, here we can see the outputs, including the application load balancer FQDN, the application private IPs, the bastion public IP, and the MongoDB private IP. I'll take a copy of the application load balancer FQDN and curl it to see if we get back a valid response. And so far, so good.

Here we are receiving an HTTP 200 response code, indicating success. However, the acid test is to call it up within our browser, like so. And excellent, how good is this? The Vote App has successfully loaded within the browser. I can vote on various languages, which when I do results in Ajax calls going back to the application load balancer on port 80, with the application load balancer forwarding them downstream to the API target group listening on port 8080. We can view this traffic by bringing up the browser's developer tools and capturing the network traffic generated when clicking on any of the Vote buttons. Clicking on the vote network request, we can then view the HTTP request and response headers associated with it.

Moving on, let's take a look at the AWS EC2 console and view the instances running. Here we can see four instances: the Mongo database instance, the bastion instance, and the two auto scaling group managed front-end app instances. Peering into the load balancers section, we can see our application load balancer. Drilling into the listeners, we have the one port 80 listener configured. Clicking on its rule set, we can see the three rules that have been established. The highest priority rule is used to capture browser-initiated Ajax requests and route them directly to the API target group. The remaining lower priority rules are used to capture all other traffic and route it to the front-end target group.

Navigating back to target groups, we can observe both the API target group and the front-end target group. The API target group is configured on port 8080 and the front-end target group is configured on port 80. Drilling into the API target group, we can see that it has successfully registered the two ASG managed app instances, and both of them are in a healthy state, which we require. Then, drilling into the front-end target group, again we can observe that the same two instance IDs have been registered successfully.

Next, let's take a quick look at the VPC setup that has been provisioned by Terraform for us. Within the VPC console, we can see the CloudAcademy VPC. If we jump over into the subnets, we see the four subnets: two public and two private. These CIDR blocks match those explicitly configured within the network module's file on lines eight and nine. Within the NAT gateways view, if we filter on those that are in an available state, we can see that there are two, one per AZ.

Let's now jump back into the terminal, and we'll bounce over into the MongoDB instance to review the data that's been captured within it. To do this easily, I'll load the CloudAcademy demo SSH private key into my local ssh-agent, and then use the ssh -A parameter, enabling agent forwarding, and authenticate into the bastion host. From here, I'll bounce over to the MongoDB instance, like so.

Now that I'm on the MongoDB instance, I'll fire up the Mongo client using the mongo command, swap into the langdb database, and then execute the db.languages.find().pretty() command to display the data currently held in the languages collection. And here, indeed, we can see that the MongoDB database has captured the voting data that we sent to it. Let's now generate a few more votes within the browser and then again check for the captured data back within the database. And excellent, the data has been successfully transacted within MongoDB.

Okay, that concludes this demo. As per the other exercises, if you've been following along, please don't forget to perform a final Terraform destroy to tear down your AWS resources.

About the Author
Learning Paths

Jeremy is a Content Lead Architect and DevOps SME here at Cloud Academy where he specializes in developing DevOps technical training documentation.

He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 25+ years. In recent times, Jeremy has been focused on DevOps, Cloud (AWS, Azure, GCP), Security, Kubernetes, and Machine Learning.

Jeremy holds professional certifications for AWS, Azure, GCP, Terraform, Kubernetes (CKA, CKAD, CKS).