Implementing Continuous Delivery
This course provides you with an understanding of the concepts and principles surrounding Continuous Integration/Continuous Delivery (CI/CD) to help you prepare for the AWS Certified Developer - Associate exam.
- How to set up your development environment
- How version control works
- How to begin implementing testing in your environment
- The why and how of database schema migrations
- What Jenkins is and why you should care
- Define continuous delivery and continuous deployment
- Describe some of the code-level changes that will help support continuous delivery
- Describe the pros and cons for monoliths and microservices
- Explain blue / green & canary deployments
- Explain the pros and cons of mutable and immutable servers
- Identify some of the tools that are used for continuous delivery
Welcome back to Introduction to Continuous Delivery. I'm Ben Lambert, and I'll be your instructor for this lecture.
So far, we talked about what continuous delivery is in our opening lecture. Then we talked about what type of code changes might be required to ensure that our application is in a state that can be deployed at any time. We moved on to have a similar discussion regarding the application's architecture. We talked about monoliths and microservices. And then we talked just now about mutable versus immutable servers.
So now it's time to review some deployment strategies.
We'll start with blue-green deployments. This is also sometimes called red-black; however, the colors aren't important. They're just placeholders to represent a group of servers.
Now, I really like the blue-green deployment strategy, because it helps you deploy new releases while minimizing downtime. Here's how it works at a very high level. This isn't specific to any particular cloud vendor; it's a generic implementation, so you'll be able to implement this pattern with just about any cloud platform. You have two environments, named green and blue, and some sort of routing mechanism that chooses which one is live, based on which one it sends traffic to.
Let's say that green is currently live, and you want to deploy your latest changes. You'll deploy the latest build to the blue environment, and if everything looks good, you can tell the router to swap the traffic from green to blue. Now blue is live, and green has the previous version of our code. So, if we need to roll back, all we need to do is tell that router to switch back to green. This doesn't mean that you have two production environments running at all times, because all of your servers should be phoenix servers anyway. You should be able to have the second environment created whenever it's needed. And once the deployment is complete, and everything looks good, you can remove that environment. After all, in a worst-case scenario, you could always have the environment spun up with the previous version and have the router send traffic to that. So, blue-green allows you to swap out environments with the flip of a switch, and if something goes wrong, you can reset it by flipping that switch again.
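To make the switch concrete, here's a minimal, vendor-neutral sketch of the router idea: the live environment is just a single pointer, and a swap flips it. All of the names here are illustrative, not tied to any particular cloud platform.

```python
# Minimal sketch of the blue-green switch. The "router" is just a
# pointer to whichever environment is currently live.

class Router:
    def __init__(self, live):
        self.live = live  # "blue" or "green"

    def swap(self):
        # Flip all traffic to the other environment in one step.
        self.live = "green" if self.live == "blue" else "blue"


environments = {"green": "v1.0", "blue": "v1.1"}  # deployed versions
router = Router(live="green")

router.swap()                      # deploy looks good: send traffic to blue
print(environments[router.live])   # v1.1

router.swap()                      # something went wrong: roll back
print(environments[router.live])   # v1.0
```

The rollback is just the same operation applied again, which is what makes this strategy so quick to reverse.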
Let's look at how to deploy with blue-green deployments. In this example, we're going to perform a blue-green deployment by deploying the React.js to-do application. We'll use Amazon Web Services for this example; however, the principles are the same for any provider.
Here we have an AWS Elastic Load Balancer, and attached to it is one instance. This is the blue server group. Now, I'm only using one server in each group, but the number of servers doesn't really matter; you can have as many as you need. If we look at the Auto Scaling groups, we can see that we have two: one for green, and one for blue. And they pass a tag on to the instances that they create. So, blue passes on a color tag of blue, and green passes on a color tag of green, to any instances they create. These tags are how we'll identify which machines need to be updated.
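As a rough illustration of how tags identify each server group, here's a small sketch that filters instance records by a color tag. The records and IDs are hypothetical stand-ins for what a cloud API listing (such as EC2's DescribeInstances) would return.

```python
# Hypothetical instance records, as a stand-in for a cloud API response.
instances = [
    {"id": "i-0a1", "tags": {"color": "blue"}},
    {"id": "i-0b2", "tags": {"color": "green"}},
    {"id": "i-0c3", "tags": {"color": "green"}},
]

def by_color(instances, color):
    """Return the IDs of instances whose color tag matches the group."""
    return [i["id"] for i in instances if i["tags"].get("color") == color]

print(by_color(instances, "green"))  # ['i-0b2', 'i-0c3']
print(by_color(instances, "blue"))   # ['i-0a1']
```

A deploy job can use a lookup like this to decide which machines to update, regardless of how many instances each Auto Scaling group has created.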
Currently, the blue server is attached to the load balancer. If we look at the instance ID of the instance attached to the load balancer, and compare it to the EC2 instances listing, we can see that the tag is set to blue. If we look at individual servers, we can see that we currently have different versions of our application. We're currently looking at the green server directly. It's the one with the IP address ending in 67. Notice that below the text box, left aligned, we have a version number set to 1.0. And if we check out the blue server directly, it's set to 1.1. And the load balancer is pointing to our blue servers, so it also is 1.1. Now, if version 1.1 isn't working out for any reason, maybe it's throwing errors, we can roll it back to the previous version by swapping into our green environment.
Let's check that out.
So we select the environment we want to roll back to. And our Jenkins job is tasked with handling the rest. The blue one is now gone, and the green one is in its place. So if we refresh the page, we can see that our load balancer is now running version 1.0.
Let's restore that. Let's set it back to 1.1 by rolling it back again, and setting it to the blue environment this time.
After a few moments, the blue is added to the load balancer, and then the green one goes away. And when we check the browser, we can see that we're back to 1.1 for the load balancer.
Let's make a change to the version number and push it to GitHub, and then deploy the change through our green environment.
So if all goes well, green will be version 1.2, and blue will stay 1.1. We start by making a change in the text editor. Then we'll go to our command line, commit it, and push it to GitHub. When we go back and look at the user interface in Jenkins, we can see that the CI job has picked up on the changes and run its process to create an installer. And then the CD task runs, which is where things like automated acceptance tests will go.
Now that those have run, we can deploy using our deploy task. This task needs to know whether we're going to deploy to green or blue. Once we've selected the environment, we can run it. If we watch the console, we can see that this is going to take a little while. It's running through some Ansible tasks: it's reaching out to the green servers (again, in this case we only have the one), connecting in, pulling down our latest code from our app's Git repo, installing it, and then restarting the Apache web server. And then it adds the green server to the load balancer before removing the blue ones.
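The ordering of that last step matters: the new color is attached to the load balancer before the old one is removed, so there's never a moment with zero servers behind it. A tiny sketch of that idea, with illustrative names:

```python
def swap_into_load_balancer(attached, new_color, old_color):
    """Return the load balancer's attached groups after the swap.

    Attach the new group first, then detach the old one, so the
    load balancer always has at least one healthy group behind it.
    """
    attached = attached + [new_color]   # add green first...
    attached.remove(old_color)          # ...then remove blue
    return attached

print(swap_into_load_balancer(["blue"], "green", "blue"))  # ['green']
```

If the order were reversed (detach first, then attach), there would be a brief window where the load balancer has no targets at all, which defeats the purpose of a zero-downtime swap.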
If we have a look in the AWS console, we can see that the green server has been added, and then the blue one can go away. Looking at the browser we can confirm that the load balancer is pointing to the green server, and is up and running with version 1.2. Looking at the blue server directly, we can see that it's still serving 1.1, though not through the load balancer. The green is the one running version 1.2 and we've confirmed that it is attached to the load balancer.
So this is a basic blue-green implementation, and in this case, we're using a mutable server model. If we wanted to change it up to an immutable server model, the process would be similar; however, we'd need to include a step that's responsible for taking our code and baking it into the AMI. Then we could create a new launch configuration and add it to our Auto Scaling group, and from there, the process is pretty much the same.
So to summarize, blue-green, you have two environments, and you can direct traffic to them on the flip of a switch.
So, now that we've seen how blue-green works, let's look at a strategy called canary deployments.
Canary deployments are where you deploy the latest version of your software into a production environment, have a router select a group of users to route to it, and see how it behaves before making the decision to roll it out further or remove it entirely.
Canary deployments get their name from the practice of coal miners bringing canaries into the mines with them. Canaries, being more sensitive to the effects of toxic gases like carbon monoxide, served as an early warning system for the miners. As long as the canary was happy and healthy, the miners weren't at risk of carbon monoxide poisoning. However, if carbon monoxide was present, the canary would succumb to its effects and, sadly, die, alerting the miners to the threat. As sad as this was, these little heroes detected the threat early and potentially saved many lives.
So this style of deployment, like the practice it's named for, is about detecting problems early. Canary deployments require the application to be rolled out to a small number of users, and you get to choose who that group of users is. It could be a random sampling, or it could be a specific group, possibly based on geographic location or some other attribute. However you break it down, you start with a small group. And then you monitor the usage, looking for any problems. After all, it's supposed to be an early warning system.
If you monitor your environment, then you'll be able to automate the process of comparing the metrics of the baseline against the metrics of the canaries, and get a basic score for how the canaries are doing. If the score's high enough, based on the threshold that you set, then everything looks good. So if everything is going well, you can either choose to increase the roll out incrementally, or just fully deploy that version. And if things are not going well, then you can just remove those canary servers from that environment, or redirect traffic back exclusively to your production environment.
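As a rough sketch of that scoring idea, here's a minimal comparison of canary metrics against the baseline. This assumes a single error-rate metric and an example threshold of 0.8; real systems compare many metrics, and both the scoring function and the names here are illustrative.

```python
def canary_score(baseline_error_rate, canary_error_rate):
    """Score 1.0 means the canary matches or beats the baseline;
    lower scores mean the canary is doing worse."""
    if canary_error_rate <= baseline_error_rate:
        return 1.0
    return baseline_error_rate / canary_error_rate

def decide(score, threshold=0.8):
    # The threshold is a value you set for your own environment.
    return "continue rollout" if score >= threshold else "roll back"

print(decide(canary_score(0.01, 0.012)))  # slightly worse -> continue rollout
print(decide(canary_score(0.01, 0.05)))   # much worse -> roll back
```

A passing score lets you widen the rollout incrementally or deploy fully; a failing score means removing the canary servers or redirecting traffic back to the existing production environment, exactly as described above.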
Being an introductory course, we won't show canary deployments as an example like we did with the blue-green just a moment ago. However, we'll lightly walk through how to implement it for different cloud platforms in the future.
So, that's canary deployments at a high level. There are other deployment strategies, however, these are two common ones that will help to minimize downtime when you're deploying your software. And minimizing disruptions is a primary concern.
So, in our next lecture, we're going to talk about some of the tools out there that will help you implement a continuous delivery pipeline.
All right, let's get started.
Ben Lambert is a software engineer and was previously the lead author for DevOps and Microsoft Azure training content at Cloud Academy. His courses and learning paths covered Cloud Ecosystem technologies such as DC/OS, configuration management tools, and containers. As a software engineer, Ben’s experience includes building highly available web and mobile apps. When he’s not building software, he’s hiking, camping, or creating video games.