DevOps Adoption Playbook - Part 2
In this course, we introduce you to the DevOps Playbook Part 2.
The DevOps Playbook Part 2 course continues with Books 8 through 12, covering Infrastructure as Code, Configuration Management, Continuous Delivery, Continuous Deployment, and Continuous Monitoring. Each book documents a required DevOps competency, one in which you'll need to establish skills to be effective in DevOps.
- Book 8 - Infrastructure as Code
- Book 9 - Configuration Management
- Book 10 - Continuous Delivery
- Book 11 - Continuous Deployment
- Book 12 - Continuous Monitoring
The DevOps Playbook Part 2 course includes two demonstrations where we put some of the DevOps theory presented into practice.
- Atlassian BitBucket Pipelines and Terraform
- Atlassian BitBucket Pipelines Promoting Staging to Production
Note: the source code as used within these demonstrations is available at:
- [Instructor] Okay, in this last demonstration, we're going to promote staging to production. We'll click on our Pipelines menu item to take us back into our pipeline. Here we can see our last successful pipeline build, number 43. We have the option of deploying it from the pipeline itself. Let's jump into our browser and navigate to the staging URL. When we last built it, we automatically created a staging subdomain off democloudinc.com. Jumping back into our terminal, we'll run a curl against the URL just to confirm that the staging deployment has worked successfully.
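The curl check described above can be sketched as a small shell helper. This is a hypothetical sketch (the helper name is an assumption, not from the demo): it curls a URL in silent mode, discards the body, and succeeds only when the HTTP response code is 200.

```shell
# check_200: hypothetical helper — curl in silent mode, keep only the
# response code, and succeed only when it is exactly 200.
check_200() {
  local code
  code=$(curl -s -o /dev/null -w "%{http_code}" "$1")
  [ "$code" = "200" ]
}
```

Against the staging environment this would be invoked as something like `check_200 https://staging.democloudinc.com` (the exact staging URL follows the subdomain convention mentioned in the demo, so treat it as an assumption).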
Again here, we can see that we've got an HTTP 200. Okay, so we've confirmed that staging is up and running. Let's go back to our Bitbucket environment and click on Deployments. What we're going to do now is promote staging to production. We click on the Promote button within the staging box, and then we click the Deploy button. This jumps us back into Pipelines, where we kick off the pipeline from where we last left it, meaning the production Terraform step should begin. Here we can see that the step is indeed starting, and we begin our Terraform setup. This time we run the terraform workspace select prod command to swap our workspace over to prod. The Terraform run has kicked off and a number of AWS resources are already being created. If we refresh the VPC window, we can see that we now have a prod VPC with a different IPv4 CIDR block. If we jump over into the Elastic Container Service, we should see our production cluster, and indeed we do. So we've got both a production and a staging ECS cluster.
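The workspace-driven difference between the staging and prod VPCs can be sketched in Terraform like this. The demo's actual code wasn't shown, so the variable names and CIDR ranges here are assumptions; the point is that `terraform.workspace` lets one set of infrastructure code produce per-environment resources.

```hcl
# Hypothetical sketch only — names and CIDRs are assumptions, not the
# demo's actual code. The selected workspace ("staging" or "prod")
# drives the VPC's CIDR block and naming.
locals {
  env = terraform.workspace

  cidr_by_env = {
    staging = "10.0.0.0/16"
    prod    = "10.1.0.0/16"   # different CIDR, as seen in the VPC console
  }
}

resource "aws_vpc" "main" {
  cidr_block = local.cidr_by_env[local.env]

  tags = {
    Name = "${local.env}-vpc"
  }
}
```

Running `terraform workspace select prod` before `terraform apply`, as in the demo, is what makes this same code stand up the prod VPC rather than touching staging.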
Let's jump into Visual Studio Code, and we'll take a look at the production Terraform step. Here the deployment is to the production environment, and the trigger is manual, so someone has to actually approve this, as we did when we promoted. You can see we've set the Terraform workspace to prod, we validate the Terraform, we set the subdomain to prod, and then we run terraform apply. When the Terraform completes, the following step will be to use Gauntlt to run some Gauntlt attacks against the production infrastructure. Here we're going to do a curl 200 attack, where really all we're doing is examining the response, and then we'll do an SSL protocol attack. So let's take a look at those attack files. The curl attack runs the curl command in silent mode and checks that the HTTP response is a 200.
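The pipeline structure described above — a production deployment step gated by a manual trigger, followed by a Gauntlt step — can be sketched in `bitbucket-pipelines.yml` along these lines. The step names, branch, and file paths are assumptions; only the `deployment: production` / `trigger: manual` pattern is what the demo shows.

```yaml
# Hypothetical sketch of the pipeline shape — step names, branch name,
# and attack file paths are assumptions, not the demo's actual file.
pipelines:
  branches:
    master:
      - step:
          name: Staging Terraform
          deployment: staging
          script:
            - terraform workspace select staging
            - terraform validate
            - terraform apply -auto-approve
      - step:
          name: Production Terraform
          deployment: production
          trigger: manual          # requires the Promote/Deploy click
          script:
            - terraform workspace select prod
            - terraform validate
            - terraform apply -auto-approve
      - step:
          name: Gauntlt Attacks
          script:
            - gauntlt attacks/curl_200.attack
            - gauntlt attacks/ssl_protocols.attack
```

The `trigger: manual` line is what holds the pipeline at the staging/production boundary until someone explicitly promotes, exactly the approval gate exercised in this demonstration.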
The specific curl command is this, and the hostname is replaced as the build runs. Looking at the SSL attack file, we can see that we use this command to attempt SSL negotiation against two protocols, SSLv2 and SSLv3. Here we're making sure that we don't have accepted connections, the reason being that these two protocols are now considered insecure. So let's jump back into our pipeline, and we can see that our production Terraform step is still running, but it's almost complete. Now we're creating the Route 53 A record as an alias pointer. Jumping back into ECS and refreshing, we can see that we've now got one service with two tasks running behind it. Going back to our pipeline, we can see that it's almost complete, and now it's done.
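Gauntlt attack files are written in Gherkin. The demo's exact files weren't reproduced here, so the following is a sketch of what the two attacks plausibly look like: the hostname, file names, and the choice of `sslyze` for the protocol check are assumptions.

```gherkin
# curl_200.attack — hypothetical sketch; the hostname is substituted
# at build time in the demo.
Feature: Verify the production endpoint returns HTTP 200

  Scenario: Curl the site and check the response code
    When I launch a "curl" attack with:
      """
      curl -s -o /dev/null -w "%{http_code}" https://prod.democloudinc.com
      """
    Then the output should contain:
      """
      200
      """
```

```gherkin
# ssl_protocols.attack — hypothetical sketch using Gauntlt's sslyze
# adapter; the demo's actual SSL tooling wasn't shown.
Feature: Ensure SSLv2 and SSLv3 are rejected

  Scenario: Attempt to negotiate the insecure protocols
    When I launch an "sslyze" attack with:
      """
      sslyze --sslv2 --sslv3 prod.democloudinc.com:443
      """
    Then the output should not contain:
      """
      Accepted
      """
```

The passing condition is inverted for the SSL attack: the test succeeds only when neither insecure protocol is accepted, matching the check described in the demo.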
Now the Gauntlt tests kick in, and they'll run against the production infrastructure. So let's watch this step. We do some pre-work, the tests execute, and we can take a look at each of them individually. The curl command used within this attack file simply looks for an HTTP response code of 200, and here we can see that the Gauntlt test has passed. Likewise, with the SSL protocol attack, we can see that indeed, our SSL endpoint doesn't negotiate SSLv2 or SSLv3.
Okay, let's quickly review what we accomplished in this demonstration. First, we promoted staging to production. This kicked off a production Terraform step, which again used infrastructure as code to set up our production environment. Once that was up and running, we ran a couple of Gauntlt attacks against it to test the stability and security of the environment. All up, this again shows how automation within a pipeline can support our DevOps adoption and make things happen very quickly.
Jeremy is a Content Lead Architect and DevOps SME here at Cloud Academy where he specializes in developing DevOps technical training documentation.
He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 25+ years. In recent times, Jeremy has been focused on DevOps, Cloud (AWS, GCP, Azure), Security, Kubernetes, and Machine Learning.
Jeremy holds professional certifications for AWS, GCP, Terraform, and Kubernetes (CKA, CKAD, CKS).