What is CD?
What's Involved in Continuous Delivery
Getting Software to Production
The Complete Picture
There was a time when it was commonplace for companies to deploy new features on a monthly, bi-monthly, or, in some cases, even quarterly basis.
Long gone are the days when companies could deploy on such an extended schedule. Customers expect features to be delivered faster, and with higher quality. And this is where continuous delivery comes in.
Continuous delivery is a way of building software so that it can be deployed to a specified environment whenever you want, deploying only the highest-quality versions to production, and ideally with one command or button push.
With deployments this easy, not only will you be able to deliver features to users faster, you'll also be able to fix bugs faster. And with all the layers of testing that exist between the continuous integration and continuous delivery processes, the software being delivered will be of higher quality.
Continuous delivery is not only for companies that are considered to be "unicorns," it's within the grasp of all of us. In this course, we'll take a look at what's involved with continuous delivery, and see an example.
This introductory course will be the foundation for future, more advanced courses, that will dive into building a complete continuous delivery process. Before we can start in on trying to implement tools, we need to make sure that we have an understanding of the problem we need to solve. And we need to know what kind of changes to our application may be required to support continuous delivery.
Understanding the aspects of the continuous delivery process can help developers and operations engineers to gain a more complete picture of the DevOps philosophy. Continuous delivery covers topics from development through deployment and is a topic that all software engineers should have experience with.
By the end of this course, you'll be able to:
- Define continuous delivery and continuous deployment
- Describe some of the code-level changes that will help support continuous delivery
- Describe the pros and cons for monoliths and microservices
- Explain blue/green and canary deployments
- Explain the pros and cons of mutable and immutable servers
- Identify some of the tools that are used for continuous delivery
This is a beginner level course for people with:
- Development experience
- Operations experience
What You'll Learn
| Lecture | What you'll learn |
| --- | --- |
| Intro | What will be covered in this course |
| What is Continuous Delivery? | What Continuous Delivery is and why it's valuable |
| Coding for Continuous Delivery | What type of code changes may be required to support continuous delivery |
| Architecting for Continuous Delivery | What sort of architectural changes may be required to support continuous delivery |
| Mutable vs. Immutable Servers | The pros and cons of mutable and immutable servers |
| Deployment Methods | How we can get software to production without downtime |
| Continuous Delivery Tools | What sort of tools are available for creating a continuous delivery process |
| Putting it All Together | What a continuous delivery process looks like |
| Summary | A review of the course |
If you have thoughts or suggestions for this course, please contact Cloud Academy at email@example.com.
Welcome back to Introduction to Continuous Delivery. I'm Ben Lambert, and I'll be your instructor for this lecture. In this lecture, we're going to talk about servers.
Unless your code is running in some serverless environment, you'll need servers to run it, which means you need to think about how you're going to manage those servers. In particular, how you manage getting your software running on them, and how you go about handling changes to your software.
Years ago, servers were set up and configured one at a time, by hand: people would load the operating system, get any required software running, and then manage changes manually. Over time, things started to get scripted out, adding a level of automation; repetitive tasks are boring, and scripting them saves a lot of time. Eventually, configuration management tools became the ideal way to manage server setups. Configuration management allows engineers to specify the desired state of a server, and the tool ensures that the server reaches that state.
Tools like Chef, Ansible, Puppet, and SaltStack perform this task quite well. They allow engineers to list, in code, the things that need to be installed, along with their versions, and the tools ensure that it happens. With these tools, server setup and configuration no longer require engineers to handle them manually, and they replaced one-off scripts, adding a level of consistency across the industry.
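The declarative, desired-state idea behind these tools can be sketched in a few lines of Python. This is a hypothetical illustration of the concept, not any real tool's API: we compare a desired package list against what's currently installed and compute only the actions needed to converge.

```python
# Hypothetical sketch of desired-state convergence, the core idea
# behind tools like Chef, Ansible, Puppet, and SaltStack.

def converge(desired, installed):
    """Return the actions needed to move a server from its current
    package set to the desired one. Running it again after it has
    converged is a no-op (idempotence), which is what makes these
    tools safe to re-run."""
    actions = []
    for pkg, version in desired.items():
        current = installed.get(pkg)
        if current is None:
            actions.append(("install", pkg, version))
        elif current != version:
            actions.append(("upgrade", pkg, version))
    for pkg in installed:
        if pkg not in desired:
            actions.append(("remove", pkg, None))
    return actions

desired = {"nginx": "1.24", "openssl": "3.0"}
installed = {"nginx": "1.18", "telnet": "0.17"}
print(converge(desired, installed))
# → [('upgrade', 'nginx', '1.24'), ('install', 'openssl', '3.0'), ('remove', 'telnet', None)]
```

The key property is that you describe *what* the server should look like, not *how* to get there; the tool works out the difference each time it runs.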
With tools like these, we no longer have to worry about snowflake servers, a term for a server that's unique and difficult to reproduce. Instead, we have phoenix servers, meaning that if a server were to die, it could be reborn. Having the ability to treat servers as disposable, because we can recreate them at will, gives us a lot of power. With such a wide range of configuration management tools available, we shouldn't have snowflake servers anymore.
All servers should be phoenix servers, and the configuration scripts should be under version control. So, if the ability to get a server into our desired state is good, because we're working off a known working configuration, then immutable servers are the next step in that evolution.
So what do I mean by immutable servers? First, let me explain what a mutable server is. A mutable server is one whose configuration and settings change over time: if you're updating the operating system or your software, adjusting firewall rules, or making really any change, it's a mutable server. An immutable server, then, is one whose settings don't change; the server is only ever replaced.
I want to make two points of clarification. First, when I talk about servers here I'm really referring to virtualized instances, I'm not necessarily suggesting that physical servers be treated as disposable.
And second, the term immutable, in the context of a server, is a bit of a misnomer, because things like memory and log files will change. What the term means is that once the configuration is set and everything is loaded, no other outside changes will be made.
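The difference between the two models can be captured in a toy sketch. Everything here is hypothetical (the class and function names aren't from any real tool); it just shows that a mutable deployment changes the running server in place, while an immutable deployment builds a replacement and leaves the original untouched.

```python
# Toy illustration of mutable vs. immutable deployments.
# Class and function names are hypothetical, not a real API.

class Server:
    def __init__(self, image_version):
        self.image_version = image_version

def deploy_mutable(server, new_version):
    # Change the running server in place; its history accumulates.
    server.image_version = new_version
    return server

def deploy_immutable(server, new_version):
    # Never touch the old server; build a replacement to swap in.
    return Server(new_version)

old = Server("v1")
same = deploy_mutable(old, "v2")           # same object, mutated
replacement = deploy_immutable(old, "v3")  # brand-new server
print(same is old, replacement is old)     # → True False
```

The immutable model trades in-place convenience for the guarantee that a running server is always exactly what came out of the build.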
So we have these two models, mutable and immutable. If you're using some form of configuration management, to either configure the server in the case of a mutable server, or to configure a base server image in the case of an immutable server, then both of these options are viable.
Both options can be deployed in a sustainable way; they'll probably just require slightly different tools. So what are the pros and cons of each? Let's start with mutable. As we talked about before, if you're using some form of configuration management, so that your servers can be easily configured into a known working state, then mutable servers are perfectly viable.
Here are some of the pros.
There are some fantastic tools out there for handling configuration management.
It can be useful for small projects or small teams that don't want the extra overhead of managing virtual machine images.
Next, ad hoc commands for things like security patches are relatively simple.
There are a lot of good resources for configuration management tools regarding deployments.
And finally, having configuration management scripts under version control allows shared ownership and the ability to roll back changes in the scripts when needed.
Here are some of the cons.
When an upgrade or deployment fails, the server can be left in a broken state, which means either troubleshooting what went wrong, or killing those instances and rebuilding from the previous configuration management settings.
Next, because we're making changes over time, we're not starting with a known working configuration each time we deploy.
And, depending on how we implement our configuration, we may have to use our configuration management tool to handle things like scaling, and we could lose out on some of the functionality that's built in to our cloud platform.
And finally, any change to the OS needs to be tested separately to ensure that nothing breaks. What I mean by this is, if we did something like remove an OS package that's no longer needed, we'd first need to try it out in a testing environment to ensure that it won't break anything in production.
So, let's talk about immutable. As we discussed previously, immutable servers are the natural evolution of configuration management. Once a server is in a known working state, we can snapshot it and consider it production-ready. This adds a lot of value, and it's become my method of choice.
Here are some of its pros.
Having the server in a known working state for each deployment gives us a higher level of trust in it.
Next, we can typically use deployment and scaling features that are available with our cloud platform, such as auto scaling with AWS.
Once a server image has been created, scaling out is a relatively quick process.
Next, if you attempt to deploy changes and they fail for any reason, it's just a matter of using the previous server images. And since ad hoc commands shouldn't be run, any operating system change kicks off the complete continuous delivery process, which means your OS changes are tested via your testing gates.
So, no additional testing process is required for OS level changes.
And here are some of the cons.
The build times are longer because you're merging a base operating system with your application and creating a server image based on that.
The baking process also creates server images that you need to store and manage.
Next, any changes in the operating system require a bake and redeploy, which again, can be more time consuming.
And finally, in addition to using configuration management tools to configure our base image, we'll need additional tools to handle the baking and deployment. Now this isn't a problem per se, however I don't like introducing new tools without a really good reason.
So, both options are viable, as long as you're using phoenix servers. I like to use an immutable server model because your code and operating system are tested together, and then, once it's working, it's basically shrink-wrapped: you're always using the exact same, well-tested configuration. And there are a lot of great tools out there that will help with this. Baking images does add some time to the deployment process upfront, but making changes go live, and scaling them, is pretty easy.
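The bake, deploy, and rollback flow described above can be sketched as a small pipeline. Everything here is hypothetical (the image names and function shapes are illustrations, not a real tool); in practice the baking might be done with a tool like Packer and the deployment through your cloud platform's scaling features.

```python
# Hypothetical sketch of an immutable bake/deploy/rollback flow.

def bake_image(base_os, app_version):
    """Combine a base OS with the application and 'shrink-wrap' the
    result as a versioned, reusable server image."""
    return f"{base_os}-app-{app_version}"

deployed_images = []  # history of what went live, newest last

def deploy(image):
    deployed_images.append(image)
    return image

def rollback():
    """A failed release is handled by redeploying the previous
    known-good image, rather than fixing servers in place."""
    deployed_images.pop()
    return deployed_images[-1]

deploy(bake_image("ubuntu-22.04", "1.0"))
deploy(bake_image("ubuntu-22.04", "1.1"))
print(rollback())  # → ubuntu-22.04-app-1.0
```

Because every deploy is a whole image, rolling back is just pointing back at the previous artifact; there's no broken half-upgraded server to repair.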
So, now that we've covered both mutable and immutable servers, we should take a look at actually deploying an application, and that's what we'll cover in our next lecture.
Ben Lambert is a software engineer and was previously the lead author for DevOps and Microsoft Azure training content at Cloud Academy. His courses and learning paths covered Cloud Ecosystem technologies such as DC/OS, configuration management tools, and containers. As a software engineer, Ben’s experience includes building highly available web and mobile apps. When he’s not building software, he’s hiking, camping, or creating video games.