Atlassian BitBucket Pipelines and Terraform

Contents

DevOps Adoption Playbook - Part 2 - Intro
DevOps Adoption Playbook - Part 2 - Review

Difficulty: Advanced
Duration: 50m
Students: 2132
Ratings: 4.5/5
Description

In this course, we introduce you to the DevOps Playbook Part 2.

The DevOps Playbook Part 2 course continues with Books 8 through 12, covering Infrastructure as Code, Configuration Management, Continuous Delivery, Continuous Deployment, and Continuous Monitoring. Each book documents a required DevOps competency that you'll need to adopt and build skills in to be effective in DevOps.

  • Book 8 - Infrastructure as Code
  • Book 9 - Configuration Management
  • Book 10 - Continuous Delivery
  • Book 11 - Continuous Deployment
  • Book 12 - Continuous Monitoring

The DevOps Playbook Part 2 course includes two demonstrations in which we put into practice some of the DevOps theory presented.

  • Atlassian BitBucket Pipelines and Terraform
  • Atlassian BitBucket Pipelines Promoting Staging to Production

Note: the source code as used within these demonstrations is available at:
https://github.com/cloudacademy/devops/tree/master/DevOps-Adoption-Playbook

Transcript

- [Instructor] In this demo, we're going to introduce Terraform as a method for building infrastructure as code, and we're going to do so within our pipeline. So let's jump into our repository. This time, we have a new repository called terraform-test. Let's navigate into it. Here we can see the layout of the files within this repository. They're similar to those in the previous repository that we worked in. We'll jump into our terminal, and we'll run the tree command to see the layout. Here again we have a Dockerfile, which will be used to build our Docker image. We have our bitbucket-pipelines.yml file, which drives the overall pipeline. We have a new directory, gauntlet, for running our gauntlet attacks. Under our source directory, we have a React web app. Under the Terraform directory, we have our Terraform templates, which we'll use to do infrastructure as code, or programmable infrastructure.
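
Based on the narration, the repository layout looks roughly like the following. The directory and file names are inferred from the walkthrough, so treat this as an indicative sketch rather than the exact tree output:

    terraform-test/
    ├── Dockerfile                  # builds the Docker image for the web app
    ├── bitbucket-pipelines.yml     # drives the overall pipeline
    ├── gauntlet/                   # gauntlet attack definitions used in the later demo
    ├── src/                        # React web app source
    └── terraform/
        └── main.tf                 # infrastructure as code templates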

Next, we'll jump into Visual Studio Code, and we'll take a closer look at the directory structure of our project. So within Visual Studio Code, close the welcome message and we'll open up the explorer. Next, we'll select the app.js file, which is a React JSX file, a combination of JavaScript and HTML. We'll update the build version number to 2.0.0 and save this file. We'll jump back into our terminal and we'll do a git status. Here, we can see we have a number of modified files. We'll add them all: we'll do git commit, -a to add them, -m with the message, and this time, we'll use "latest updates 2.0.0", enter. Okay, next, we do git push, and this will push the changes up to our repository within Bitbucket. Okay, jumping into the pipeline, we can see that the pipeline is composed of a number of steps. The first step does a React build of our React web app. So this is going to run a number of npm commands to do things like transpiling the code from JSX into pure JavaScript. Taking a closer look at the React build step, we can see that the npm unit tests have just completed, and that a coverage report is presented.
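
The commit and push sequence from the terminal looks something like this (the commit message is taken from the narration; pushing to master is what triggers the pipeline):

    git status                                  # review the modified files
    git commit -a -m "latest updates 2.0.0"     # -a stages all modified tracked files
    git push                                    # push to Bitbucket and trigger the pipeline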

This is really useful and shows the concept of continuous testing. So we're still waiting for the React build to fully complete. We get a lot of output, which is very useful. So, again, this is doing the compilation phase of the React app, converting it from JSX into pure JavaScript, and doing minification on the CSS files. Under the hood, this step is using Webpack to run those processes. So give it a few more seconds, and that should complete, as it has. So from here, we move on to the next step, which is the Docker build, so this will create our Docker image. Now, this time, the Docker image is going to use an artifact that was created out of the first step. So you can see here the artifact has been downloaded and copied into the current build step. This particular build step is similar to the corresponding build step in the previous demo.
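
The React build step typically runs a sequence of npm commands along these lines. The exact scripts aren't shown in the demo, so the commands below are assumptions based on a standard Create React App style setup:

    npm install                       # restore dependencies
    CI=true npm test -- --coverage    # run the unit tests once and emit the coverage report
    npm run build                     # transpile JSX and minify JS/CSS via Webpack into build/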

So now it's doing the Docker build. And if we swap over to Docker Hub, and again, we do a refresh on this, we'll shortly see the new Docker image show up, which will mean it's been built and registered. So we're still building in the pipeline, and the Docker build has just completed, and it was successful. So now, again, if we go back to Docker Hub, and we take the commit ID, d132, we reload, and here we can see d132 as the start of the commit ID. So that's a good result: again, we've got our Docker image built within our pipeline and registered within Docker Hub. Okay, so now we're moving on to the infrastructure as code stage, where we're going to build an ECS cluster on AWS. The Terraform templates that we use build everything from the ground up, so we're building all of the networking, the VPC, the route tables, the security groups.
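
Inside that step, the image build and push would look roughly like this. The image is tagged with the commit ID via Bitbucket's built-in BITBUCKET_COMMIT variable; the repository name and the credential variable names are assumptions:

    export IMAGE=$DOCKER_HUB_USERNAME/terraform-test:$BITBUCKET_COMMIT
    docker build -t $IMAGE .                     # uses the Dockerfile and the React build artifact
    echo $DOCKER_HUB_PASSWORD | docker login -u $DOCKER_HUB_USERNAME --password-stdin
    docker push $IMAGE                           # registers the image in Docker Hub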

We're going to deploy a NAT gateway, and then we're going to move on and deploy an ECS cluster on top of that network. So again, we're using Terraform here as our technology for doing infrastructure as code. So you can see we're well underway, and a number of resources have been launched on AWS. Now, when we do this creation with no infrastructure in place, it will take longer than any subsequent run, because if we do another build later on, the infrastructure is already there, and Terraform is clever enough to know not to recreate it, which is a really useful feature. We'll go into how it does that later on. But let's take a look at ECS, and we'll refresh and see that our staging cluster has been launched and created, and under Task Definitions, we have a new task definition, staging app.

So Terraform has already done a lot of work quite quickly. So back within our pipeline, we can see that we're still building. We'll jump back into Visual Studio Code, and we'll take a closer look at the Terraform files. So we've got a main.tf file, which is the main Terraform file, and at the very top is a terraform block that configures the backend for maintaining state within an S3 bucket. We see here that we're building a VPC, and as we scroll down, we're building a number of networking resources on AWS. Here we have an AWS security group for our load balancer. Further down, we have an application load balancer. We have a target group. We have a couple of listeners for ports 80 and 443, HTTP and HTTPS, and then we have our ECS cluster. We have a task definition. The task definition requires Fargate, and then we have a service, an ECS service.
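
An abridged sketch of what that main.tf contains, following the narration. The resource names, CIDR range, and state bucket name are illustrative, and related resources such as the subnets, route tables, NAT gateway, target group, and HTTPS listener are omitted here:

    terraform {
      backend "s3" {
        bucket = "my-terraform-state"                # assumed bucket name
        key    = "terraform-test/terraform.tfstate"
        region = "us-east-1"
      }
    }

    resource "aws_vpc" "main" {
      cidr_block = "10.0.0.0/16"
    }

    resource "aws_security_group" "lb" {
      vpc_id = aws_vpc.main.id
      ingress {
        from_port   = 80
        to_port     = 80
        protocol    = "tcp"
        cidr_blocks = ["0.0.0.0/0"]
      }
    }

    resource "aws_lb" "main" {
      load_balancer_type = "application"
      security_groups    = [aws_security_group.lb.id]
      subnets            = aws_subnet.public[*].id   # public subnets defined elsewhere in the file
    }

    resource "aws_lb_listener" "http" {
      load_balancer_arn = aws_lb.main.arn
      port              = 80
      protocol          = "HTTP"
      default_action {
        type             = "forward"
        target_group_arn = aws_lb_target_group.app.arn   # target group defined elsewhere
      }
    }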

Again, the launch type is Fargate. So Fargate's going to manage all of the EC2 compute resources for us, but it will deploy them into the VPC that we've just created. And lastly, we create an AWS Route 53 alias record, which points to the application load balancer that we create within this Terraform template. So let's go back to our pipeline, and we'll scroll down, and we're almost there. In fact, I think we're finished; indeed we have. So our staging Terraform step has completed, and it's successful, and it took just under four minutes to build everything within AWS, which is quite amazing, considering what you get out of it. So let's click on the output, which is the application load balancer's default DNS name, provided by AWS, and, excellent, we've got our web app. We'll jump into developer tools, and we'll show you a couple of things. So here, we can see the CSS resource and the JavaScript resource, and if we have a look at the actual contents of those files, you can see, for example, the transpiled JavaScript, which has been minified as well. And likewise with the CSS file.
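
The ECS and DNS portion described here might be shaped roughly as follows. The names, CPU and memory sizes, and the DNS zone are assumptions, and the service's network and load balancer wiring is omitted; the desired count matches the two running tasks seen in the console:

    resource "aws_ecs_cluster" "staging" {
      name = "staging"
    }

    resource "aws_ecs_task_definition" "app" {
      family                   = "staging-app"
      requires_compatibilities = ["FARGATE"]
      network_mode             = "awsvpc"
      cpu                      = 256
      memory                   = 512
      container_definitions = jsonencode([{
        name         = "app"
        image        = "docker-hub-user/terraform-test:${var.image_tag}"   # tagged with the commit ID
        portMappings = [{ containerPort = 80 }]
      }])
    }

    resource "aws_ecs_service" "app" {
      name            = "app"
      cluster         = aws_ecs_cluster.staging.id
      task_definition = aws_ecs_task_definition.app.arn
      desired_count   = 2
      launch_type     = "FARGATE"
      # network_configuration and load_balancer blocks omitted for brevity
    }

    resource "aws_route53_record" "app" {
      zone_id = var.zone_id
      name    = "${var.dns_subdomain}.example.com"    # e.g. staging.<your-domain>
      type    = "A"
      alias {
        name                   = aws_lb.main.dns_name
        zone_id                = aws_lb.main.zone_id
        evaluate_target_health = true
      }
    }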

So this was all done by our first step within our pipeline. So going back into AWS, if we look at our ECS cluster, you can see we have one service and two running tasks. Again, within Task Definitions, if we drill into the task definition itself and flip to the JSON tab, we can see, for example, that the image attribute points back to our Docker image hosted in Docker Hub, with the same commit ID as the tag. So we'll go into Visual Studio Code, and we'll take a closer look at the bitbucket-pipelines.yml file. So under pipelines, we have branches: we have a feature branch, which we're not using at the moment, and we have a master branch. So master was the one that we committed to, so this is the branch that executed. So the first step was to do the React build, which calls a number of npm commands, resulting in an artifact, which is used by the next step.
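
In YAML, the branch structure and that first step are shaped roughly like this. The feature branch glob and the artifact path are assumptions, and the npm commands are the same ones sketched earlier:

    pipelines:
      branches:
        feature/*:                       # feature branch pipeline, not exercised in this demo
          - step:
              name: Build and test
              script:
                - cd src && npm install && CI=true npm test -- --coverage
        master:
          - step:
              name: React build
              caches:
                - node
              script:
                - cd src
                - npm install
                - CI=true npm test -- --coverage
                - npm run build
              artifacts:
                - src/build/**           # handed to the Docker build step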

So the next step was to do a Docker build, and this creates our Docker image with an image name involving the Bitbucket commit ID. We do the build, we do a Docker login to Docker Hub, and then we push our Docker image into Docker Hub. Again, we have a couple of environment variables that need to be set up to allow us to authenticate into Docker Hub. So if we go back into the pipeline, into settings, in addition to the environment variables for Docker Hub, there are a couple of other variables that we need to set up to allow Terraform to interact with AWS. So we've got our Docker Hub credentials, username and password, and then we've got our AWS credentials, an access key and a secret access key. Okay, so back within our bitbucket-pipelines.yml file, our staging Terraform step involves a deployment to staging, so we can deploy to test, staging, or production.
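
The Docker build step and the repository variables it relies on might look like this. The variable names DOCKER_HUB_USERNAME, DOCKER_HUB_PASSWORD, AWS_ACCESS_KEY_ID, and AWS_SECRET_ACCESS_KEY are assumptions based on what's shown in the settings screen; the AWS credentials aren't used in this step but are picked up by the Terraform steps via the standard AWS environment variables:

          - step:
              name: Docker build
              services:
                - docker                 # enables the Docker daemon for this step
              script:
                - export IMAGE=$DOCKER_HUB_USERNAME/terraform-test:$BITBUCKET_COMMIT
                - docker build -t $IMAGE .
                - echo $DOCKER_HUB_PASSWORD | docker login -u $DOCKER_HUB_USERNAME --password-stdin
                - docker push $IMAGE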

The trigger is set to automatic, so this will kick off as soon as the previous step finishes. We start off by doing a terraform init to initialize Terraform. We're using Terraform workspaces, so we've got a workspace for staging and a workspace for production. We select the staging one. We then do a terraform validate, followed by terraform apply. If we jump back into the main.tf Terraform file, as mentioned earlier, we're using an S3 backend to store the Terraform state file, which allows us to track the state. So you'll notice within our Terraform file, we set the DNS subdomain to staging, so this is pushed into the Route 53 alias record by Terraform. Following on, we have a production step where the trigger is set to manual, so you need to actually approve it before this step kicks off. Here, the DNS subdomain is set to prod, and the Terraform commands are similar. This time, we'll navigate to staging.dmlclouding.com, which is our Route 53 A record that Terraform has created for us. Next, we'll navigate back into the pipeline.
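
The staging and production Terraform steps then follow this pattern. The -auto-approve flag and the workspace names are assumptions (a non-interactive apply is needed inside a pipeline), the steps would run in an image with Terraform on the PATH, and the staging versus prod DNS subdomain is resolved inside the Terraform configuration per workspace:

          - step:
              name: Staging Terraform
              deployment: staging
              trigger: automatic           # kicks off as soon as the Docker build succeeds
              script:
                - cd terraform
                - terraform init
                - terraform workspace select staging
                - terraform validate
                - terraform apply -auto-approve
          - step:
              name: Production Terraform
              deployment: production
              trigger: manual              # must be approved before it runs
              script:
                - cd terraform
                - terraform init
                - terraform workspace select production
                - terraform validate
                - terraform apply -auto-approve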

And this time, we'll take a look at deployments. So here, under deployments, we have Staging and Production. If we click on Staging, we get a deployment summary of what's been pushed out to staging, which is the last commit that we pushed into the master branch. So, finally, let's jump back over into our devopsengineering Slack channel and just check to see whether our notification came through, and indeed it has. So we can see that the latest run of the pipeline, number 43, passed for the master branch.

So let's quickly summarize what we accomplished in this demonstration. We created a pipeline that consisted of multiple stages, or steps. The first step was to do a React build of our web app. The second step was to use Docker to build a Docker image. The third step leveraged infrastructure as code, using Terraform to build an ECS cluster on AWS and then launch our Docker image as containers on that cluster. This was all done through our pipeline, automatically and very quickly. Here, you can see how infrastructure as code can really empower DevOps automation, providing a very quick and reliable method to provision infrastructure. In the last demonstration, we'll promote staging right through to production, and run some gauntlet tests on our production infrastructure.

About the Author

Jeremy is a Content Lead Architect and DevOps SME here at Cloud Academy where he specializes in developing DevOps technical training documentation.

He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 25+ years. In recent times, Jeremy has been focused on DevOps, Cloud (AWS, Azure, GCP), Security, Kubernetes, and Machine Learning.

Jeremy holds professional certifications for AWS, Azure, GCP, Terraform, Kubernetes (CKA, CKAD, CKS).