Implementing Continuous Delivery
This course provides you with an understanding of the concepts and principles surrounding Continuous Integration/Continuous Delivery (CI/CD) to help you prepare for the AWS Certified Developer - Associate exam.
- How to set up your development environment
- How version control works
- How to begin implementing testing in your environment
- The why and how of database schema migrations
- What Jenkins is and why you should care
- Define continuous delivery and continuous deployment
- Describe some of the code-level changes that will help support continuous delivery
- Describe the pros and cons for monoliths and microservices
- Explain blue/green and canary deployments
- Explain the pros and cons of mutable and immutable servers
- Identify some of the tools that are used for continuous delivery
Welcome back to Introduction to Continuous Integration. I'm Ben Lambert and I'll be your instructor for this lecture.
In this lecture we're going to look at the continuous integration process using Jenkins. We're going to pull down some code from GitHub, install any dependencies that we need, and run some tests and then report back the state of the build. Now, there are a lot of great tools out there that can create a continuous integration process. Jenkins is a pretty common tool and one that I like to use. And it just came out with a new version so we're going to check out version 2.0. Jenkins allows for several different types of projects.
We're going to take a look at freestyle projects and pipeline projects. And we're going to start with freestyle. Here we have a freestyle project to build and test a Django application. On the dashboard for the job we can see the build history. Let's check out its configuration. We start out with a project name. You should set some sort of standard for naming your projects so that jobs are consistent.
Next, we come to the source code repository. Because I'm using Git and it's a public repo, I don't need to set any credentials. However, for your repositories, which are probably private, you'll need to set the credentials in the settings. Next, we have build triggers. I'll be polling Git every minute for changes. However, we do have some options here. We can trigger a build remotely. We can watch another project and trigger the build when that project completes. We can have Git trigger a build post-commit with GitHub post-commit hooks. And we can check periodically for any changes.
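As a sketch, the one-minute polling trigger described above could also be declared from pipeline code rather than the UI. This is an illustration only, not the course project's configuration, and the exact trigger symbols can vary with your Jenkins and plugin versions:

```groovy
// Scripted-pipeline fragment: declare a one-minute SCM polling trigger.
// A post-commit webhook is more efficient, but polling needs no extra setup.
properties([
    pipelineTriggers([
        pollSCM('* * * * *')  // cron syntax: check Git for changes every minute
    ])
])
```

If you later switch to webhooks, this trigger block is the only thing you need to change, since the rest of the job definition stays the same.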
So, I'm going to use polling, but I recommend some form of post-commit triggers since it's more efficient. Though it also requires some additional setup and security so you'll have to consider those options. Next, we have the build environment options. For this I'm using the options to clean the workspace before we build and abort the build if it hangs. The clean workspace option allows us to have a higher confidence in the build because we're starting with a clean slate. Next, we have our build tasks. In this project we're going to start by running the build shell script. Its job is to install virtualenv and any Python libraries that we need. Then it runs any pending database migrations and runs the tests.
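Here's a rough sketch of what that build stage could look like as pipeline code. The script contents, paths, and commands below are hypothetical stand-ins for the course project's actual build script, not taken from it:

```groovy
node {
    stage('Build and Test') {
        // Start from a clean workspace so stale artifacts can't leak in,
        // mirroring the "clean workspace before build" option above.
        deleteDir()
        checkout scm

        // In practice this shell belongs in a script stored in version
        // control with the project; it's inlined here for illustration.
        sh '''
            virtualenv venv
            . venv/bin/activate
            pip install -r requirements.txt
            python manage.py migrate --noinput   # apply pending migrations
            python manage.py test                # run the test suite
        '''
    }
}
```

Because the shell commands live with the project, any developer can change the build without touching the Jenkins UI.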
Notice that the shell scripts are stored in version control and are part of the project. Keeping build scripts in version control will ensure that our continuous integration server doesn't become a snowflake server. That's a term that basically means a server is so unique it can't be reproduced easily. Now, we could stop this job here and report back the status to the developers. However, I want to show that it's possible to add in some additional non-functional tests that are useful at this stage. Again, the more useful testing that you run, and the earlier that you run it, the faster you'll be able to identify problems.
So I've added in a basic OWASP Top 10 scanner, which scans for a small set of known security vulnerabilities. Keep in mind, this is not a replacement for your security team, as we mentioned previously, but automated scans will help to ensure that you're catching some of these issues early on. I've added in a Python linter, as well, which can help us to determine code quality. Any other useful code-level testing that can be run here without taking up too much time is going to help improve your code base. Finally, we come to our post-build tasks. These usually include reports and notifications. And that's a basic freestyle job in Jenkins.
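Those extra checks might be sketched as a pipeline stage like this. The course doesn't name specific tools, so flake8 stands in for "a Python linter" here, and the scan script is a hypothetical wrapper around whatever OWASP Top 10 scanner you choose:

```groovy
node {
    stage('Static Analysis') {
        // flake8 is just one example of a Python linter; here we surface
        // style and quality issues without failing the build. You may
        // prefer to drop "|| true" and fail on lint errors instead.
        sh 'flake8 . || true'

        // Hypothetical wrapper script around an OWASP Top 10 scanner,
        // kept in version control alongside the rest of the build scripts.
        sh './scripts/security_scan.sh'
    }
}
```

Whether lint or scan failures should break the build is a team decision; the point is that they run on every commit, not occasionally.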
Next let's take a look at a pipeline job. You can see that the pipeline job already has a different dashboard than our freestyle. We're greeted with this pipeline view, showing the status of the last few builds. This at-a-glance pipeline view is a nice way to represent your tasks. It tells us how long each step takes, allowing us to identify where any bottlenecks may have occurred. When we look at the configuration it looks fairly similar at first. We get the same general info and build triggers, but then we're greeted by this new pipeline section. This is a cool feature.
Because it allows you to specify a source code repository and the name of the file that contains your pipeline code. And it will run that pipeline code when the build is triggered. In the pipeline code we use a Groovy DSL to define our jobs. Now, I'm not going all 1960s hippie on you, Groovy is a language that runs on the Java virtual machine. And a DSL is a Domain Specific Language. That means that it's a language used for a single purpose, unlike more general-purpose programming languages like Java, Python, C#, etc.
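A minimal file of that pipeline code, checked into the project's repository, might look something like this. The stage names and script paths are illustrative assumptions, not the course project's actual Jenkinsfile:

```groovy
// Pipeline code stored in version control with the project.
// Jenkins fetches this file from the repo and runs it when triggered.
node {
    stage('Checkout') {
        checkout scm               // pull down the code from Git
    }
    stage('Build') {
        sh './scripts/build.sh'    // hypothetical build script
    }
    stage('Test') {
        sh './scripts/test.sh'     // hypothetical test script
    }
}
```

Each `stage` call becomes one column in the pipeline view on the dashboard, which is how Jenkins can show you where time is being spent.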
In this file you can see that we have different stages defined and some code to handle each stage. We start with the same build and run tasks that we did in our freestyle job. And then we move on to using FPM to create a Debian installer package for our application. Once it's done, the command to copy our installer to an artifact repository is run. Now, in this example we're just using the Linux copy command, but you could use anything. You could store the files on S3 or in an artifact repo. This will be a different repository than your source code uses. Using source code repositories for this sort of thing is not optimal. Pipeline jobs allow developers and operations to treat the pipeline as code and store it with your source code. They can easily add things like new notifications or build steps without needing to use the Jenkins user interface. Now, don't get me wrong, user interfaces are great, but they can also become a bit of a bottleneck. So, pipelines are pretty powerful and we've only scratched the surface.
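The packaging and artifact steps just described could be sketched like this. The fpm flags, package name, and destination path are all assumptions for illustration; your project's values will differ:

```groovy
node {
    stage('Package') {
        // Use fpm to build a Debian installer package from a directory.
        // Name, version scheme, and source path are illustrative only.
        sh 'fpm -s dir -t deb -n myapp -v 1.0.${BUILD_NUMBER} ./app'
    }
    stage('Archive') {
        // As in the lecture, a plain Linux copy stands in for a real
        // artifact repository; uploading to S3 or a dedicated artifact
        // repo would work equally well and scale better.
        sh 'cp *.deb /var/artifacts/'
    }
}
```

Tying the package version to `BUILD_NUMBER` is one simple way to make every build produce a uniquely identifiable artifact.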
We're not going to go into any more detail in this course, though we likely will in future courses. So, that's a basic implementation of a continuous integration process with Jenkins. Developers check code into a version control system, such as Git, and the continuous integration server picks up on those changes. The continuous integration server should build the project, run the tests, and report back the status to the team.
Once a project has passed the tests, artifacts should be created and saved. Because the code was built and tested, this installable artifact has shown it's at least of sufficient quality to be deployed to a staging environment for additional rounds of testing. The deployment to a staging server and these additional tests are part of the continuous delivery process, which picks up where continuous integration leaves off. And it will be the subject of a future course.
Ben Lambert is a software engineer and was previously the lead author for DevOps and Microsoft Azure training content at Cloud Academy. His courses and learning paths covered Cloud Ecosystem technologies such as DC/OS, configuration management tools, and containers. As a software engineer, Ben’s experience includes building highly available web and mobile apps. When he’s not building software, he’s hiking, camping, or creating video games.