There was a time when it was commonplace for companies to deploy new features on a monthly, bi-monthly, or, in some cases, even quarterly basis.
Long gone are the days when companies could deploy on such an extended schedule. Customers expect features to be delivered faster and with higher quality, and this is where continuous delivery comes in.
Continuous delivery is a way of building software such that it can be deployed to a specified environment whenever you want, with only the highest quality versions reaching production, and ideally with one command or button push.
With deployments this easy, not only will you be able to deliver features to users faster, you'll also be able to fix bugs faster. And with all the layers of testing that exist between the continuous integration and continuous delivery processes, the software being delivered will be of higher quality.
Continuous delivery is not only for companies that are considered to be 'unicorns', it's within the grasp of all of us. In this Course, we'll take a look at what's involved with continuous delivery, and see an example.
This introductory Course will be the foundation for future, more advanced Courses, that will dive into building a complete continuous delivery process. Before we can start trying to implement tools, we need to make sure that we have an understanding of the problem we need to solve. And we need to know what kind of changes to our application may be required to support continuous delivery.
Understanding the aspects of the continuous delivery process can help developers and operations engineers to gain a more complete picture of the DevOps philosophy. Continuous delivery covers topics from development through deployment and is a topic that all software engineers should have experience with.
Course Objectives
By the end of this Course, you'll be able to:
- Define continuous delivery and continuous deployment
- Describe some of the code-level changes that will help support continuous delivery
- Describe the pros and cons of monoliths and microservices
- Explain blue/green and canary deployments
- Explain the pros and cons of mutable and immutable servers
- Identify some of the tools that are used for continuous delivery
Intended Audience
This is a beginner-level Course for people with:
- Development experience
- Operations experience
Optional Prerequisites
What You'll Learn
| Lecture | What you'll learn |
|---|---|
| Intro | What will be covered in this Course |
| What is Continuous Delivery? | What Continuous Delivery is and why it's valuable |
| Coding for Continuous Delivery | What type of code changes may be required to support continuous delivery |
| Architecting for Continuous Delivery | What sort of architectural changes may be required to support continuous delivery |
| Mutable vs. Immutable Servers | The pros and cons of mutable and immutable servers |
| Deployment Methods | How we can get software to production without downtime |
| Continuous Delivery Tools | What sort of tools are available for creating a continuous delivery process |
| Putting it All Together | What a continuous delivery process looks like |
| Summary | A review of the Course |
If you have thoughts or suggestions for this Course, please contact Cloud Academy at support@cloudacademy.com.
Welcome back to Introduction to Continuous Delivery. I'm Ben Lambert, and I'll be your instructor for this lecture.
Just like at the source code level, modularity at the application architecture level helps to improve the ability to practice continuous delivery. In this lecture, we're going to talk about monoliths and microservices, and we'll compare the two.
It's not uncommon for web applications to start out as monoliths, which basically means that all of the modules that comprise your software are in one application. For context, this is the opposite of the microservices architecture, where different services are broken out into discrete, independently deployable applications. Amazon started out as a monolith, so did Netflix and Etsy, and while Amazon and Netflix have moved towards microservices, Etsy remains a monolith.
So, what's the difference? A monolith is a singular application that contains all of the modules needed to perform its job. Now, that doesn't mean that it can't reach out to external services, it just means that all of the logic is in the same application, and it's typically deployed as a whole.
Microservices are discrete services that serve a specific purpose, and they communicate at the API layer, allowing them to be replaced with anything else that implements that API. It's similar to how dependency injection in our previous lecture allowed software modules to be swapped out with anything that implemented that shared interface.
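To make that analogy concrete, here's a minimal sketch of dependency injection against a shared interface. The names (`PaymentGateway`, `FakeGateway`, `CheckoutService`) are hypothetical, not from the course; the point is that anything implementing the interface can be swapped in, just as a microservice can be replaced by anything that implements the same API.

```python
from abc import ABC, abstractmethod

class PaymentGateway(ABC):
    """Shared interface: any implementation can be swapped in."""
    @abstractmethod
    def charge(self, amount_cents: int) -> bool: ...

class FakeGateway(PaymentGateway):
    """Stand-in implementation used for local testing."""
    def charge(self, amount_cents: int) -> bool:
        return True

class CheckoutService:
    def __init__(self, gateway: PaymentGateway):
        # The dependency is injected, so the concrete gateway can be
        # replaced without touching this class.
        self.gateway = gateway

    def purchase(self, amount_cents: int) -> str:
        return "ok" if self.gateway.charge(amount_cents) else "declined"

print(CheckoutService(FakeGateway()).purchase(500))  # ok
```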
Now, you may be wondering how any of this falls under the course topic of continuous delivery. When it comes to deploying software often, a monolith can grow to a certain point where testing and deployments become very time consuming. Sure, they're automated, though they can still become a bottleneck. Since continuous delivery is about deploying higher quality software, there are implications here as well. Now, I'm not suggesting that either one is inherently higher quality than the other.
However, monoliths can cause technology lock-in, so older monoliths may be built on a technology stack that doesn't promote best practices for modern software development, and large code bases in general risk getting to the point where it seems easier for developers to get something working than to get it done correctly. Monoliths aren't alone here. Because microservices are not all built with the same technology stack, it's possible that the wrong tech for the job has been selected. However, in this case, refactoring should in theory be simpler.
The logical question becomes, when should you use a monolith, and when does it make sense for microservices? The short answer is, it depends. There's really no one right answer, however I can share some of my opinions based on my time as a developer.
For greenfield development, which is a term that basically means a new project, I like to recommend that people start with a monolith, and here's why. If you start by trying to break everything out into its own service at the beginning, you won't have a clear enough picture of where to start. You'll try to define your service boundaries as best you can. However, you don't know what you don't know. There's an expression that says hindsight is 20/20, which means that when looking at something after it's happened, things become obvious. After all is said and done, you have all the information you need to know what you should have done. Starting out with a monolith allows you to gain that hindsight as you go. Don't be afraid to develop something that you know will be replaced when you have a clearer understanding of what you need.
Once a monolith grows too large, it becomes more difficult to have a lot of different teams working on it in parallel. Monoliths can take a while to build and test as well, and they can cause technology lock-in. However, up until that point, a monolith does remain a valid option.
Now, if you already have a large monolith, or you're doing brownfield development, which is building around or on an existing application, this is where microservices start to become viable. Once you have an understanding of the application, its requirements, the requirements of the users, etc., you can start to identify the areas of the application that could be refactored out into their own services, or identify new functionality that should be created as a microservice. This is where you get to rethink the technology that's being used.
When considering breaking things out into their own tech stack, you can select the tool that's right for that particular task. Microservices are basically single-purpose applications that interact with the rest of the world through a well-established API, and because they're single-purpose, they tend to be on the smaller side, at least compared to a large monolith. Because of this, developers tend to like working on them more; unlike a monolith, they're easier to understand, because you can review them holistically and understand all of the code. Microservices are a natural extension of modular software development, because you can replace an entire unit of functionality as long as it implements the same APIs.
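Here's a rough sketch of what "single-purpose service with a well-defined API" can look like. It uses only Python's standard library, and the service, port, and data are hypothetical examples rather than anything prescribed by the course: one endpoint, one job, and nothing else.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# A single-purpose "price lookup" service: one endpoint, one job.
PRICES = {"sku-123": 1999, "sku-456": 4999}  # prices in cents

class PriceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # API contract: GET /prices/<sku> returns {"sku": ..., "price_cents": ...}
        _, _, sku = self.path.rpartition("/")
        if sku in PRICES:
            body = json.dumps({"sku": sku, "price_cents": PRICES[sku]}).encode()
            self.send_response(200)
        else:
            body = json.dumps({"error": "unknown sku"}).encode()
            self.send_response(404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8080), PriceHandler).serve_forever()
```

Because the whole contract fits in one screen of code, a developer new to the team can review the service holistically, and any replacement that honors the same endpoint can be dropped in behind it.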
However, because they're isolated and they interact at the API level, they can also become a black box. With a monolith, tracing your request through the entire stack tends to be fairly simple, and there are a lot of great tools out there that can help give visibility to the inner workings of your application. If you implement microservices, you need to make sure you keep that same ability. If you can't trace a request through its complete lifecycle, your ability to identify and resolve problems goes way down, and when bugs arise, as they inevitably will, you'll struggle to fix even the simplest of them, and you'll notice a drop in customer satisfaction because of this.
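One common way to keep that traceability is to attach a correlation ID to each request and pass it along on every downstream call and log line, so the logs from all services can be stitched back into one request timeline. This is a hedged sketch of the general technique; the header name, service URL, and log fields are illustrative assumptions, not part of the course material.

```python
import logging
import uuid
import urllib.request

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("orders")

def handle_request(headers: dict) -> None:
    # Reuse the caller's correlation ID, or mint one at the edge of the system.
    correlation_id = headers.get("X-Correlation-Id", str(uuid.uuid4()))
    log.info("correlation_id=%s event=order_received", correlation_id)

    # Forward the same ID on every downstream call so each service logs it too.
    req = urllib.request.Request(
        "http://inventory.internal/reserve",  # hypothetical downstream service
        headers={"X-Correlation-Id": correlation_id},
    )
    # urllib.request.urlopen(req)  # call skipped in this sketch; host doesn't exist
    log.info("correlation_id=%s event=inventory_called", correlation_id)

handle_request({})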
If you're going to start implementing microservices, make sure you carefully consider the API implementation. Once it's up and running, you'll need to be careful about making changes that break things for other services that depend on your API. When implementing microservices, strongly consider the tech stack you plan to use. Make sure you're not using something that's on the bleeding edge of technology, unless you can afford to wait for critical bugs in the tech to be fixed on the vendor's or community's schedule.
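One way that "being careful about breaking changes" often plays out is versioning the API, so existing consumers keep working while a new contract is introduced alongside the old one. The sketch below is only an illustration under that assumption; the routes and field names are made up.

```python
def get_customer_v1(customer):
    # Original contract: a single "name" field that existing consumers rely on.
    return {"id": customer["id"],
            "name": f'{customer["first"]} {customer["last"]}'}

def get_customer_v2(customer):
    # New contract exposed at /v2 with split name fields.
    # /v1 stays in place until its consumers have migrated.
    return {"id": customer["id"],
            "first_name": customer["first"],
            "last_name": customer["last"]}

ROUTES = {"/v1/customers": get_customer_v1, "/v2/customers": get_customer_v2}

record = {"id": 7, "first": "Ada", "last": "Lovelace"}
print(ROUTES["/v1/customers"](record))  # old consumers unaffected
print(ROUTES["/v2/customers"](record))  # new consumers opt in
```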
There's no perfect architecture. Even the best thing we have at any moment in time may not be suitable as technology continues to evolve. However, if you strive to build things in a modular and traceable way, then you'll be able to better adapt to future changes. So, at a certain size, monoliths become a bit of a bottleneck. They can take longer to build and test, and having teams build out different sections can impact other teams.
However, until you hit these limits, monoliths are a reasonable way to go. Once you do hit these limits, microservices can help break down the application into a more manageable set of services, allowing you to build, test, and deploy faster. However, they're not without their challenges, as we've talked about.
Microservices and monoliths are developed, deployed, and operated in similar, yet different ways. When considering your continuous delivery plan, you need to think about where your application is now and where it's going.
We talked about code-level changes that may be required, and we've talked about some architectural changes that might be required. In our next lecture, we're gonna talk about some of the infrastructure choices that you'll have to consider.
We're gonna talk about mutable versus immutable servers.
Okay, if you're ready, let's dive in.
Ben Lambert is a software engineer and was previously the lead author for DevOps and Microsoft Azure training content at Cloud Academy. His courses and learning paths covered Cloud Ecosystem technologies such as DC/OS, configuration management tools, and containers. As a software engineer, Ben’s experience includes building highly available web and mobile apps. When he’s not building software, he’s hiking, camping, or creating video games.