
Putting it All Together

Overview

Difficulty: Beginner
Duration: 1h 7m
Students: 3,039
Rating: 4.8/5

Description

Introduction to Continuous Delivery

There was a time when it was commonplace for companies to deploy new features on a bi-monthly, monthly, or in some cases even quarterly basis.

Long gone are the days when companies could deploy on such an extended schedule. Customers expect features to be delivered faster, and with higher quality. This is where continuous delivery comes in.

Continuous delivery is a way of building software such that it can be deployed to a specified environment whenever you want, deploying only the highest-quality versions to production, ideally with a single command or button push.

With deployments this easy, not only will you be able to deliver features to users faster, you'll also be able to fix bugs faster. And with all the layers of testing that exist between the continuous integration and continuous delivery processes, the software being delivered will be of higher quality.

Continuous delivery is not only for companies that are considered to be "unicorns," it's within the grasp of all of us. In this course, we'll take a look at what's involved with continuous delivery, and see an example.

This introductory course will be the foundation for future, more advanced courses that will dive into building a complete continuous delivery process. Before we can start trying to implement tools, we need to make sure that we have an understanding of the problem we need to solve. And we need to know what kinds of changes to our application may be required to support continuous delivery.

Understanding the aspects of the continuous delivery process can help developers and operations engineers to gain a more complete picture of the DevOps philosophy. Continuous delivery covers topics from development through deployment and is a topic that all software engineers should have experience with.

Course Objectives

By the end of this course, you'll be able to:

  • Define continuous delivery and continuous deployment
  • Describe some of the code-level changes that will help support continuous delivery
  • Describe the pros and cons for monoliths and microservices
  • Explain blue/green and canary deployments
  • Explain the pros and cons of mutable and immutable servers
  • Identify some of the tools that are used for continuous delivery

Intended Audience

This is a beginner level course for people with:

  • Development experience
  • Operations experience

Optional Pre-Requisites

What You'll Learn

Lecture: What you'll learn

  • Intro: What will be covered in this course
  • What is Continuous Delivery?: What continuous delivery is and why it's valuable
  • Coding for Continuous Delivery: What type of code changes may be required to support continuous delivery
  • Architecting for Continuous Delivery: What sort of architectural changes may be required to support continuous delivery
  • Mutable vs. Immutable Servers: The pros and cons of mutable and immutable servers
  • Deployment Methods: How we can get software to production without downtime
  • Continuous Delivery Tools: What sort of tools are available for creating a continuous delivery process
  • Putting it All Together: What a continuous delivery process looks like
  • Summary: A review of the course


If you have thoughts or suggestions for this course, please contact Cloud Academy at support@cloudacademy.com.

Transcript

Welcome back to Introduction to Continuous Delivery. I'm Ben Lambert and I'll be your instructor for this lecture.

In this lecture we're going to take a walk through a continuous delivery process that I've created and review the different steps.

I'm going to be using Jenkins, Ansible, and AWS for this demo. These are tools that I have come to leverage when building Linux-based projects. However, as we said in the previous lecture, they are by no means the only option. And they're not necessarily the best tools for every build. They're just the ones I've selected based on my preferences.

So let's start by looking at Jenkins. We'll start with our continuous integration job.

If the CI job runs successfully, then we know we're working from a tested code base. If you took the Introduction to Continuous Integration course, you may recall that the CI process is in charge of testing our code and making sure that it behaves how we expect. So we only trigger our continuous delivery process after CI succeeds. Our continuous delivery process starts out by deploying our code to a staging environment for testing. If you look at the line that starts with ansible-playbook, you can see that I'm deploying to the testing environment through Ansible.

Your continuous delivery process should start out by taking the OS installer package that your continuous integration process created, and deploying it for automated acceptance testing. By deploying to an environment that matches production, we can ensure that our tests are more accurate.

So I'm using Ansible however you can use whatever method you prefer. Let's check out the Ansible playbook and see how it does the deploy.

Starting off, you can see that it says deploy application {{ environment }}. Those brackets are template tags that Ansible will swap out for us with the environment variable that we pass in on the command line. Looking back at Jenkins, you can see that it's passed in via the --extra-vars flag. Line two is where things get interesting. It says hosts: tag_environment_ and then we have that same environment variable in our brackets.

This tells Ansible to look at every host, in this case our AWS servers, and find anything that has a tag named environment with a value equal to the environment variable we passed in earlier, in this case staging. So if you look at AWS, you can see we have a server with that tag set to staging. This is cool because we don't have to worry about IP addresses or anything like that. We can just tag a server with the environment set to staging and our script will provision it. This means we can dynamically create our testing environment whenever we need it and not have it running all the time.
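The playbook header being described might look something like the sketch below. This is an assumption based on the narration, not the actual file from the demo: the play name and tag-based host pattern follow what was described, and the environment variable would be supplied with something like ansible-playbook deploy.yml --extra-vars "environment=staging" (deploy.yml is a hypothetical filename).

```yaml
# Sketch of the playbook header described above. The EC2 dynamic
# inventory groups hosts by tag, so a server tagged environment=staging
# lands in the group "tag_environment_staging".
- name: deploy application {{ environment }}
  hosts: tag_environment_{{ environment }}
  become: true
```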

Then, going through the tasks, we start with loading the signing key for our apt repository and making sure that these new servers know where to look for our repo.

Next we update the apt cache and make sure that the Apache server is installed. Then we install the latest version of our application.

Keep in mind you could use a parameter here to pass in a specific version of your application if you wanted to. And we wrap it up by restarting the Apache service. Once that's all done, we can run our automated acceptance tests.
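The task list just described could be sketched roughly as follows. The module arguments, repository URLs, and package name here are assumptions for illustration; the transcript doesn't show the exact file.

```yaml
# Sketch of the tasks described above (URLs and package names are
# hypothetical placeholders).
tasks:
  - name: load the signing key for our apt repository
    apt_key:
      url: https://repo.example.com/signing-key.asc   # hypothetical key URL
      state: present

  - name: make sure the server knows where to look for our repo
    apt_repository:
      repo: "deb https://repo.example.com/apt stable main"   # hypothetical
      state: present

  - name: update the apt cache and make sure Apache is installed
    apt:
      name: apache2
      state: present
      update_cache: yes

  - name: install the latest version of our application
    apt:
      name: ourapp        # hypothetical package name; a version parameter
      state: latest       # could be passed in here instead of "latest"

  - name: restart the Apache service
    service:
      name: apache2
      state: restarted
```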

Now these tests can take a while because they run through the complete application stack by simulating a user sitting at the browser actually using our software.

This test takes the URL passed in, goes to that page, and looks for the text "to do". If it finds it, the test passes; if not, it fails. Once the acceptance tests have run, you can run other non-functional tests, things like load testing and security audits.
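The acceptance check just described could be sketched as a single Ansible task. This is an assumption for illustration only; the actual test in the demo isn't shown and may well use a browser-driven tool instead, since it simulates a user at the browser.

```yaml
# Hypothetical sketch: fetch the page at the URL passed in and fail
# if the expected text isn't present in the response body.
- name: check that the page contains the expected text
  uri:
    url: "{{ app_url }}"   # passed in, e.g. --extra-vars "app_url=http://staging.example.com/"
    return_content: yes
  register: page
  failed_when: "'to do' not in page.content"
```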

Up to this point you've run all kinds of functional tests, between the tests in your CI tasks and the acceptance tests here. Non-functional tests, however, are the ones that don't really care whether the software functions as the specification says. They care about things like: will the software stand up to the typical load it needs to handle, and does it pass basic security audits? Any automated tests that you can run here to try and show that the build is not production quality should be run.

Remember, we talked before about how the job of the CI server is to prove that the build is not production-ready. Automated tests help ensure that people aren't spending time on builds that aren't going to make it into production. Once all of your non-functional automated tests are complete and passing, we can allow people to perform any tests that they may need to. There are some tests that are still better for humans to perform.

You know who is really good at making sure that your responsive styles are working well? Humans. Making sure that buttons line up and that the logo is exactly where you want it, these are things that you could automate, however the time and effort that would go into it would be wasted.

So once you've run any manual testing and everything has passed then it becomes a business decision what to do next. At this point you haven't found anything wrong with the build so it can be promoted to production if that is what you choose.

Let's run through a complete process, from code to production. We're currently running version 1.2 in our production environment. So let's edit that file and commit it. What we're going to do is update the version number. This is something that would ideally be done as a build step, however I'm doing it manually here for the demo. So we're going to add a little bit of text and then commit it. And once it's committed, we'll push it to GitHub.

Now if we look at the Jenkins UI, we can watch it pick up the changes and run the continuous integration task. Notice the build button is blinking on the right side of the screen as it runs the CI task. When this is done, it's going to call the continuous delivery task if the CI task was successful.

The CD task takes a little bit of time to run. We can see that it's going through its motions if we look at the console. Here it's running through its deployment to staging, and then the acceptance tests. When it's done, we can see that staging has the latest version. And production, which is the version our load balancer is serving, is still running the previous version. Since everything looks good, we can now push our code to production.

So we need to select which environment we want to deploy to. Now, this is an okay method for demonstrating blue-green deployments, however keep in mind that if you implement this pattern yourself, you'll want the process to determine for itself which color is live and which one needs to be deployed. We can check which color is attached to the load balancer, and it looks like the answer is green. So that means we need to deploy to our blue environment. If we click the build button in the Jenkins UI, the process will start.

If we look at the console, we can see that Jenkins is running some of our Ansible tasks. Let's look at the playbook, and we'll look back at production when everything is deployed. Here you can see that the playbook looks a lot like the one from earlier when we deployed to staging. And that's because it basically is. The only real differences are the commands on lines 28 and 31, which are responsible for adding the new color to the load balancer and removing the old one. These playbooks are not as succinct as they could be; they could be one parameterized playbook. However, I thought this would be a more readable way to show them.

So what Jenkins does when we tell it to deploy is run through this playbook and set the color that we've told it to. Once the updates to the software are complete, it adds that color to the load balancer and deregisters the other. And since connection draining is enabled, the load balancer will take the time to wrap up any existing connections.
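Those register and deregister steps might look roughly like the sketch below, assuming a classic Elastic Load Balancer and Ansible's ec2_elb module. The variable names and load balancer name are hypothetical; the actual playbook from the demo isn't shown in full.

```yaml
# Hypothetical sketch of the blue-green cutover: attach the newly
# deployed color to the load balancer, then detach the old one.
# Connection draining lets in-flight requests finish before removal.
- name: register the new color's instance with the load balancer
  ec2_elb:
    instance_id: "{{ new_instance_id }}"   # hypothetical variable
    ec2_elbs: production-lb                # hypothetical ELB name
    state: present

- name: deregister the old color's instance from the load balancer
  ec2_elb:
    instance_id: "{{ old_instance_id }}"   # hypothetical variable
    ec2_elbs: production-lb
    state: absent
```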

Okay, let's go back to Jenkins. Our job here is complete, and if we look at the version running under the load balancer, we can see that it's 1.3. And if we go back to the green environment directly, that is the one with the IP address ending in .67, we can see it's still on 1.2. Our production environment stays running with no downtime. And that's my basic CD process.

In our final lecture we're going to summarize what we've covered in this course.

So if you're ready let's check out the final lecture.

About the Author

Students: 36,644
Courses: 29
Learning paths: 15

Ben Lambert is the Director of Engineering and was previously the lead author for DevOps and Microsoft Azure training content at Cloud Academy. His courses and learning paths covered Cloud Ecosystem technologies such as DC/OS, configuration management tools, and containers. As a software engineer, Ben’s experience includes building highly available web and mobile apps.

When he’s not building the first platform to run and measure enterprise transformation initiatives at Cloud Academy, he’s hiking, camping, or creating video games.