Understanding and using Terraformer

Contents

Infrastructure to Code with Terraformer
  • Course Introduction (Preview, 1m 58s)
  • Understanding and using Terraformer

Difficulty: Intermediate
Duration: 17m
Students: 879
Rating: 5/5

Description

In this course, we take a look at Terraformer, a CLI-level tool written in Go that allows you to easily Terraform already-existing resources in your environment. Performing the reverse of what Terraform is designed to do, it can be thought of as doing "Infrastructure to Code" rather than Infrastructure as Code (IaC).

Learning Objectives 

This course will enable you to:

  • Understand what Terraformer is
  • Configure Terraformer to work with Terraform and a specific Cloud Provider 
    • GCP is used in the demonstrations
  • Codify an existing set of infrastructure
  • Increase adoption of Infrastructure as Code amongst your team(s)

Prerequisites

  • Experience with Terraform and knowledge of IaC
  • Comfortable in a terminal environment
  • Comfortable with a cloud provider of your choice that Terraformer supports
    • AWS, GCP, Azure, Kubernetes, DigitalOcean, etc.

Intended audience

  • Those looking to gain insight into their cloud environment to prevent drift 
  • DevOps Engineers / Site-Reliability Engineers
  • Cloud Engineers

Resources

GitHub repository accompanying this course - https://github.com/cloudacademy/terraformer-examples

Terraformer Source - https://github.com/GoogleCloudPlatform/terraformer

Transcript

So let's first talk about what Terraformer is. Well, we know Terraform as a tool for building, changing, and versioning infrastructure safely and efficiently. It can be customized to your needs and environment, and it already supports the ability to import existing infrastructure on its own.

So if you're building out a new infrastructure set, team, or company, Terraform is the way to go about creating those resources. But what about the all-too-common situation of importing existing resources and infrastructure, resources that were created manually before you or your team knew about Terraform? As more and more people move to the cloud to solve their needs, they run into the problem of starting with what the provider hands them by default:

GUI menus that major cloud providers have built out, with a gazillion buttons and tasks to work through before achieving the same end result: a simple instance on the cloud provider's infrastructure.

Now, we know that Terraform can already import resources, but let's ask the question this way: do the resources created by Terraform's built-in import function generate the infrastructure-as-code files the way we want to see them written? And the answer is no, they don't. The native Terraform import tool will only generate the current state of the infrastructure that you choose to import, leaving you to write all of those TF or JSON files manually.
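
For instance, here's a minimal sketch of what a native import looks like for an instance like the ones in this demo (the resource address, project, and zone are illustrative assumptions, not the actual demo values). You have to hand-write the resource block yourself before Terraform will attach the real instance to it in state:

    # main.tf must already contain a resource block you wrote by hand:
    #   resource "google_compute_instance" "web_server_prod" { ... }
    # terraform import then only populates state, never the HCL:
    terraform import google_compute_instance.web_server_prod \
      projects/cloudacademy-prod/zones/us-central1-a/instances/web-server-prod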

This can be a very tedious process, and sure, you could have a little bit of fun hosting a company hackathon to drum up an in-house solution, but that's just counterintuitive to the DevOps mindset. This is where the superhero tool for Terraform comes in: Terraformer.

Terraformer is a CLI-level tool, written in Go, that generates already-created infrastructure as HashiCorp TF files, including the TF state file, and it typically writes everything out in a very nice, easy-to-read directory structure.
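
As a rough sketch, that output layout looks something like this (the exact paths depend on your provider, project, and the Terraformer version's path pattern):

    generated/
      google/
        cloudacademy-prod/
          instances/
            us-central1/
              instance.tf
              provider.tf
              outputs.tf
              terraform.tfstate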

It uses read-only permissions, which means it's only ever going to read your resources, never change them. It supports remote state sharing, so that you and your team can all work from the exact same state and see the exact same resources.

It has filtering abilities similar to Terraform's, and it has planning capabilities similar to Terraform's, meaning you can run a Terraformer plan and see what you're going to generate as code and as a TF state file before you actually generate anything.

Now, enough talk, let's check out an example. We're going to be using these versions for this example; they're also included in the resources, so don't worry. Let's check it out now.

Now, before we actually go and import some infrastructure, we need some infrastructure to see. These two instances were created manually through the Google Cloud UI. They have information associated with them that we can't see right now, such as labels, app information, and containers, and they're also not in any source control management.

So let's go run the terraformer import command against the instances resource to grab this information and codify it.

Okay, we're in the CLI, so let's run the Terraformer binary to import those two instances that we just saw, starting off with terraformer. Following that, we have import and then our declared cloud provider.

Now we need to tell Terraformer what to look for, so we specify that with the resources flag, giving it the supported resource name for those instances. Next, since Google structures all of its resources through projects, we need to specify the project, in this case the Cloud Academy prod project.

After that, we have our regions. The default is global, but since these instances were created in a specific region, we need to tell Terraformer to look in that region as well. So let's run it, using an invocation like the one below, and see what we get.
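
Put together, the invocation looks something like this (the project ID and region are assumptions based on the demo; substitute your own values):

    # Import existing GCE instances from one project and region as Terraform code
    terraformer import google \
      --resources=instances \
      --projects=cloudacademy-prod \
      --regions=us-central1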

Okay, let's break this down a little bit. Our first line is importing the project with the region us-central1. From there, we're importing those instances, and then you can see the refreshing state for those specific instances, which include our web-server-prod and our data-server-prod.

The next line denotes that Terraformer is attempting to connect to the remote state, if specified; we didn't specify one in this case. After that, we're saving the instances as well as saving the TF state for those instances, and that's done. So let's go check out an example of what Terraformer generated for our data-server-prod instance.

Okay, we've seen what the terraformer import command does once it's run, but what kind of code is generated afterwards? Well, you're looking at a sample right now. We can see our boot disk, labels, machine type, and metadata. In fact, there's a lot more information included, but I've kept this short for brevity. If you want to see the entire Terraform file generated for this instance and the web server instance, check out the GitHub repo associated with this course.
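
Since the on-screen sample isn't reproduced in this transcript, here is a minimal sketch of what such a generated file can look like, trimmed to the fields mentioned above (all values are illustrative, not the actual demo output):

    resource "google_compute_instance" "data_server_prod" {
      name         = "data-server-prod"
      machine_type = "n1-standard-1"
      zone         = "us-central1-a"

      # Boot disk, labels, and metadata captured from the live instance
      boot_disk {
        initialize_params {
          image = "debian-cloud/debian-10"
        }
      }

      labels = {
        environment = "prod"
      }

      metadata = {
        enable-oslogin = "true"
      }
    }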

Let's further break down the CLI example by exploring the commands, what they specifically mean, and some alternatives we could issue in their place.

We've seen a brief example of Terraformer, and now we're going to break it down, starting with those first three words: terraformer, import, and google. The first, terraformer, is just as simple as it sounds: we're just invoking the Terraformer binary.

From there we have import, which imports our cloud resources as Terraform files. It's also one of the four commands that can follow the invocation of terraformer. The others are help, which expands on a given command and describes the further options that can be added to it; plan; and version. plan acts very similarly to import, in that you can simply replace import with plan in the command. It also behaves much like Terraform's own plan: it generates what it thinks there is to be codified before actually importing the infrastructure, and it can then be acted upon with terraformer import plan followed by the path to the plan JSON. We're going to see an example of this later, so don't worry about memorizing all of it right now.

From there we have version, which just prints the version of Terraformer being invoked. Our last word is google, and this is just as straightforward as terraformer: it's our declared cloud provider, and Terraformer has an extensive list of providers that natively use the Terraform providers to make their API calls.

Okay, let's expand our Terraformer knowledge further by looking at flags, the main functionality behind Terraformer. There's only one requirement in the Terraformer CLI after we declare our cloud provider, and that is declaring the resources we want to codify.

With our example, we only specified the instances resource, denoting that we wanted to codify those resources for the project. If we wanted to add more resources to codify, all we would have to do is append them with commas, as shown below.
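
A sketch of a comma-separated resource list (the extra resource names here are examples drawn from Terraformer's supported GCP resources):

    # Codify instances plus firewall rules and networks in one run
    terraformer import google \
      --resources=instances,firewall,networks \
      --projects=cloudacademy-prod \
      --regions=us-central1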

If we wanted to scan the entire Cloud Academy prod project for all available resources to be codified, it would look something like the wildcard example below. Theoretically, we could set up a script to scan all supported resources to see if there's infrastructure created that we haven't seen yet from one of our developer teams. We could run this on a cadence, store the generated Terraform files and state files in a bucket, and gain complete visibility into the resources being used across our org, project, or team.
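
The scan-everything form uses Terraformer's wildcard resource selector (project ID again assumed):

    # "*" asks Terraformer to walk every supported resource type it knows about
    terraformer import google \
      --resources="*" \
      --projects=cloudacademy-prod \
      --regions=us-central1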

As stated before, there needs to be a flag set for projects when invoking Terraformer for GCP. The projects flag denotes the specific GCP project you wish to list resources for, and the regions flag in this example is used to limit the search to the project's resources created in a specific region.

Similar to resources, projects can be chained together to search for resources across multiple projects, improving the functionality of the CLI. The quick way to iterate through all your projects is to run a gcloud command that lists all projects, and then use the project IDs as the input for the projects flag, easily capturing the state and JSON files for all projects.
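
A hypothetical one-liner along those lines, joining every project ID from gcloud into a comma-separated value for the projects flag:

    # gcloud prints one project ID per line; paste joins them with commas
    terraformer import google \
      --resources=instances \
      --projects=$(gcloud projects list --format="value(projectId)" | paste -sd, -) \
      --regions=us-central1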

This will also help ensure that you don't accidentally input an incorrect project into your invocation. Terraformer, unfortunately, doesn't check that a project exists before running, and will generate false, empty state and provider files for the pseudo-project.

Let's take a look at Terraformer's plan.

We're back in the terminal, and here we're running the terraformer plan command instead of import. terraformer plan is identical to terraformer import except that you replace import with plan; the resources, projects, cloud provider, and regions are all the same. In fact, the only difference is that we're not going to see codifiable infrastructure afterwards. Instead, we're going to see a JSON-level structure of the resources that would be codified. That's what the last line you're seeing here shows: we're saving the plan file to the generated directory path.

From there, we're going to actually codify this infrastructure by running the terraformer import plan command. Since the plan is the only file that sits within the generated directory, we can just quickly tab-complete through to our Terraformer plan JSON.

From there, we can hit the return key and watch our plan JSON be rendered as Terraform files. And just like that, we have our instances codified, and our state codified as well.

Let's describe that plan command a little bit more. As we noticed, it's nearly identical to the import command, and after the resources are planned, the command to import the generated resources is as follows: terraformer import plan {PATH}/{TO}/plan.json. After it runs, the specified resources are created in exactly the same way they would have been by the import command.
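
Put together, the two-step workflow looks something like this (the plan.json path is illustrative; use the path Terraformer prints when it saves the plan):

    # Step 1: preview what would be codified, saving a plan.json
    terraformer plan google \
      --resources=instances \
      --projects=cloudacademy-prod \
      --regions=us-central1
    # Step 2: generate the TF files and state from that saved plan
    terraformer import plan generated/google/cloudacademy-prod/terraformer/plan.json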

Now that we have an understanding of the basic flags, let's take a look at some advanced invocations with advanced flags, starting with the bucket string.

The bucket string is used to declare the bucket the state will be pushed to, and it's used in conjunction with the state flag. From there, we have the compact flag. The compact flag will put all specified resources into a single resources.tf file instead of their respectively named TF files.

From there we have the connect flag, which specifies whether or not the command being run should attempt to connect to the specified remote state.

Moving on, we have the filter flag. Filtering allows us to narrow the import down to specific resources that we already know exist, in order to reduce API calls or to codify only particular resources from our provider.
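
As a sketch, the filter takes a resource type and one or more identifiers (the instance name here is hypothetical, and the exact filter syntax varies between Terraformer versions, so check the README for yours):

    # Only codify the one instance we care about
    terraformer import google \
      --resources=instances \
      --filter=instances=web-server-prod \
      --projects=cloudacademy-prod \
      --regions=us-central1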

Next, we have the traditional help flag which gives us information about our declared provider and command.

From there we have our output string flag. As the default states, this controls the format of the generated files; JSON and HashiCorp Configuration Language are the only supported formats at this time.

From there we have our path output string flag. This allows us to replace the default generated directory with one of our choosing. If we want to alter the directory structure itself, we can use the path pattern string flag, which alters the path layout for the generated code.

After the path pattern string flag, we have our regions flag, which we're familiar with; the default is global, and you can use -z as a shorthand for quicker region specificity. From there we have our resources flag, which can likewise be shortened to -r.

After that, we have our state flag, which tells Terraformer where to store the current TF state file. The default is local, and if bucket is used instead, you'll need to specify the bucket where the state will be dumped.
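
Here's a hypothetical invocation combining several of these advanced flags (the bucket name and output path are made up for illustration):

    # Compact output into resources.tf, write under ./codified,
    # and push state to a bucket instead of keeping it local
    terraformer import google \
      --resources=instances \
      --projects=cloudacademy-prod \
      --regions=us-central1 \
      --compact \
      --path-output=./codified \
      --state=bucket \
      --bucket=my-terraformer-state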

Lastly, we have our verbose flag, which is useful for debugging. If Terraformer isn't generating the infrastructure you requested or isn't working correctly, use the verbose flag to understand where it's failing. Before we close out this course on infrastructure to code, I want to provide you with some resources to explore more options with Terraformer, starting with the repo that accompanies this course.

This repo is designed specifically to provide examples, installation methods, and more to accompany this course. The next repo I want to point you to is the official Terraformer repo. It lists all of the supported resources, supported cloud providers, and more to get you up and running with your specific cloud provider and resources. I highly recommend checking it out, as it supports more up-to-date versions of Terraform, such as Terraform 0.13.

The last resource I want to share with you is our DevOps training library. If there's anything you want to learn about DevOps, such as Terraform, Docker, or Kubernetes, we've got it for you. We also have hands-on labs, which I highly encourage you to check out, so you can get practical experience in a safe environment.

That's it for this course. I want to thank you for running through it with me, Jonathan Lewey. If you have any questions, concerns, or feedback, don't hesitate to contact support@cloudacademy.com. Thank you.

About the Author

Students: 28470
Courses: 8
Learning Paths: 2

Jonathan Lewey is a DevOps Content Creator at Cloud Academy. With experience in the networking and operations side of the traditional information technology industry, he has also led the creation of applications for corporate integrations and served as a Cloud Engineer supporting developer teams. Jonathan holds a number of specialties, including Cisco Certified Network Associate (R&S / Sec), AWS Developer Associate, and AWS Solutions Architect certifications, and is certified in Project Management.