DevOps Adoption Playbook - Part 2
In this course, we introduce you to the DevOps Playbook Part 2.
The DevOps Playbook Part 2 course continues with Books 8 through 12, covering Infrastructure as Code, Configuration Management, Continuous Delivery, Continuous Deployment, and Continuous Monitoring. Each book documents a required DevOps competency in which you'll need to establish skills to be effective in DevOps.
- Book 8 - Infrastructure as Code
- Book 9 - Configuration Management
- Book 10 - Continuous Delivery
- Book 11 - Continuous Deployment
- Book 12 - Continuous Monitoring
The DevOps Playbook Part 2 course includes two demonstrations where we put into practice some of the DevOps theory presented.
- Atlassian BitBucket Pipelines and Terraform
- Atlassian BitBucket Pipelines Promoting Staging to Production
Note: the source code used within these demonstrations is available at:
- [Instructor] In this demo, we're going to introduce Terraform as a method for building infrastructure as code, and we're going to do so within our pipeline. So let's jump into our repository. This time, we have a new repository called terraform-test. Let's navigate into it. Here we can see the layout of the files within this repository. They're similar to the previous repository that we worked in. We'll jump into our terminal, and we'll run the tree command to see the layout. Here again we have a Dockerfile, which will be used to build our Docker image. We have our bitbucket-pipelines.yml file, which drives the overall pipeline. We have a new directory, gauntlt, for running our Gauntlt attacks. Under our source directory, we have a React web app. Under the Terraform directory, we have our Terraform templates, which we'll use to do infrastructure as code, or programmable infrastructure.
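As a rough illustration, a Dockerfile for building and serving a React app of this kind might use a multi-stage build along these lines (a hedged sketch with hypothetical base images, not the repository's actual file):

```dockerfile
# Build stage: compile the React app (hypothetical sketch)
FROM node:18-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Serve stage: host the static build output with nginx
FROM nginx:alpine
COPY --from=build /app/build /usr/share/nginx/html
EXPOSE 80
```

The multi-stage pattern keeps Node and the build toolchain out of the final image, so only the static assets and a web server get shipped.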
So now it's doing the Docker build. And if we swap over to Docker Hub and do a refresh, we'll shortly see the new Docker image show up, which means it's been built and registered. So we're still building in the pipeline, and the Docker build has just completed, and it was successful. So now, again, if we go back to Docker Hub, and we take the commit ID, d132, we reload, and here we can see d132 at the start of the image tag. So that's a good result: again, we've got our Docker image built within our pipeline and registered within Docker Hub. Okay, so now we're moving on to the infrastructure as code stage, where we're going to build an ECS cluster on AWS. The Terraform templates that we use build everything from the ground up, so we're building all of the networking: the VPC, the route tables, the security groups.
We're going to deploy a NAT gateway, and then we're going to move on and deploy an ECS cluster on top of that network. So again, we're using Terraform here as our technology for doing infrastructure as code. You can see we're well underway, and a number of resources have been launched on AWS. When we first do this creation with no infrastructure in place, it takes longer than any subsequent run: if we do another build later on, because the infrastructure is already there, Terraform is clever enough to know not to recreate it, which is a really useful feature. The way that it does that we'll go into later on. But let's take a look at ECS; we'll refresh and see that our staging cluster has been launched and created, and under Task Definitions, we have a new task definition, staging app.
So Terraform has already done a lot of work quite quickly. Back within our pipeline, we can see that we're still building. We'll jump back into Visual Studio Code, and we'll take a closer look at the Terraform files. We've got a main.tf file, which is the main Terraform file, and at the very top is a terraform block for maintaining state within an S3 bucket. We see here that we're building a VPC, and as we scroll down, we're building a number of networking resources on AWS. Here we have an AWS security group for our load balancer. Further down, we have an application load balancer. We have a target group. We have a couple of listeners for ports 80 and 443, HTTP and HTTPS, and then we have our ECS cluster. We have a task definition; the task definition requires Fargate. And then we have a service, an ECS service.
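Pulling those pieces together, a condensed main.tf could look something like the following. This is a hedged sketch, not the course's exact template: the bucket name, image name, and `commit_id` variable are hypothetical, and the VPC, subnet, and load balancer resources are elided for brevity.

```hcl
terraform {
  backend "s3" {
    bucket = "my-terraform-state"          # hypothetical bucket name
    key    = "ecs-app/terraform.tfstate"
    region = "us-east-1"
  }
}

# ECS cluster, named per workspace (staging / production)
resource "aws_ecs_cluster" "app" {
  name = "${terraform.workspace}-cluster"
}

# Fargate task definition: the container image tag carries the commit ID
resource "aws_ecs_task_definition" "app" {
  family                   = "${terraform.workspace}-app"
  requires_compatibilities = ["FARGATE"]
  network_mode             = "awsvpc"
  cpu                      = 256
  memory                   = 512
  container_definitions = jsonencode([{
    name         = "app"
    image        = "myuser/terraform-test:${var.commit_id}"  # hypothetical repo/variable
    portMappings = [{ containerPort = 80 }]
  }])
}

resource "aws_ecs_service" "app" {
  name            = "${terraform.workspace}-service"
  cluster         = aws_ecs_cluster.app.id
  task_definition = aws_ecs_task_definition.app.arn
  desired_count   = 2
  launch_type     = "FARGATE"
}
```

Keeping state in S3, rather than locally, is what lets every pipeline run see what already exists and skip recreating it.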
So this was all done by our first step within our pipeline. Going back into AWS, if we look at our ECS cluster, you can see we have one service and two running tasks. Again, within Task Definitions, if we drill into the task definition itself and click the JSON tab, we can see, for example, that the image attribute points back to our Docker image hosted in Docker Hub, having the same commit ID as the tag. Next we'll go into Visual Studio Code, and we'll take a closer look at the bitbucket-pipelines.yml file. Under pipelines, we have branches: we have a feature branch, which we're not using at the moment, and we have a master branch. Master was the one that we committed to, so this is the branch that executed. The first step was to do the React build, which calls a number of npm commands, resulting in an artifact, which is used by the next step.
So the next step was to do a Docker build, and this creates our Docker image with an image name involving the Bitbucket commit ID. We do the build, we do a docker login to Docker Hub, and then we push our Docker image into Docker Hub. Again, we have a couple of environment variables that need to be set up to allow us to authenticate into Docker Hub. If we go back into the pipeline, into Settings, in addition to the environment variables for Docker Hub, there are a couple of other variables that we need to set up to allow Terraform to interact with AWS. So we've got our Docker Hub credentials, a username and password, and then we've got our AWS credentials, an access key and secret access key. Okay, so back within our bitbucket-pipelines.yml file, our staging Terraform step involves a deployment to staging; we can deploy to test, staging, or production.
The trigger is set to automatic, so this will kick off as soon as the previous step finishes. We start off by doing a terraform init to initialize Terraform. We're using Terraform workspaces, so we've got a workspace for staging and a workspace for production, and we select the staging one. We then do a terraform validate, followed by terraform apply. If we jump back into the main.tf Terraform file, as mentioned earlier, we're using an S3 backend to store the Terraform state file, which allows us to track the state. You'll notice within our Terraform file, we set the DNS subdomain to staging, so this is pushed into the Route 53 alias record by Terraform. Following on, we have a production step where the trigger is set to manual, so you need to actually approve this before the step kicks off. Here, the DNS subdomain is set to prod, and the Terraform commands are similar. This time, we'll navigate to staging.dmlclouding.com, which is our Route 53 A record that Terraform has created for us. Next, we'll navigate back into the pipeline.
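The pipeline walked through above can be sketched roughly as follows. The image and variable names are hypothetical placeholders, but the `deployment` and `trigger: manual` keys follow standard Bitbucket Pipelines conventions; this is not the course's exact file.

```yaml
# bitbucket-pipelines.yml (hedged sketch, hypothetical names)
pipelines:
  branches:
    master:
      - step:
          name: React build
          image: node:18
          script:
            - npm ci
            - npm run build
          artifacts:
            - build/**          # passed to the next step
      - step:
          name: Docker build and push
          services:
            - docker
          script:
            - docker build -t myuser/terraform-test:$BITBUCKET_COMMIT .
            - docker login -u $DOCKER_HUB_USERNAME -p $DOCKER_HUB_PASSWORD
            - docker push myuser/terraform-test:$BITBUCKET_COMMIT
      - step:
          name: Terraform staging
          deployment: staging    # runs automatically after the previous step
          image: hashicorp/terraform
          script:
            - cd terraform
            - terraform init
            - terraform workspace select staging
            - terraform validate
            - terraform apply -auto-approve
      - step:
          name: Terraform production
          deployment: production
          trigger: manual        # requires approval before this step kicks off
          image: hashicorp/terraform
          script:
            - cd terraform
            - terraform init
            - terraform workspace select production
            - terraform validate
            - terraform apply -auto-approve
```

Note how the staging and production steps differ only in workspace and trigger: staging deploys on every merge to master, while production waits for a human approval.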
And this time, we'll take a look at Deployments. Here, under Deployments, we have Staging and Production. If we click on Staging, we get a deployment summary of what's been pushed out to staging, which is the last commit that we pushed into the master branch. Finally, let's jump back over into our devopsengineering Slack channel and just check to see whether our notification came through, and indeed it has. We can see that the latest run of the pipeline, number 43, passed for the master branch.
So let's quickly summarize what we accomplished in this demonstration. We created a pipeline that consisted of multiple stages, or steps. The first step was to do a React build on our web app. The second step was to use Docker to build a Docker image. The third step leveraged infrastructure as code, using Terraform to build an ECS cluster on AWS and then launch our Docker image as containers on that cluster. This was all done through our pipeline, automatically and very quickly. Here, you can see how infrastructure as code can really empower DevOps automation, providing a very quick and reliable method to provision infrastructure. In the last demonstration, we'll promote staging right through to production and run some Gauntlt tests on our production infrastructure.
Jeremy is a Content Lead Architect and DevOps SME here at Cloud Academy, where he specializes in developing DevOps technical training documentation.
He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 25+ years. In recent times, Jeremy has been focused on DevOps, Cloud (AWS, GCP, Azure), Security, Kubernetes, and Machine Learning.
Jeremy holds professional certifications for AWS, GCP, and Kubernetes (CKA, CKAD, CKS).