Advanced Deployment Techniques on AWS


The course is part of these learning paths:

- Solutions Architect – Professional Certification Preparation for AWS (2019)
- SysOps Administrator – Associate Certification Preparation for AWS - 2018
- DevOps Engineer – Professional Certification Preparation for AWS


Duration: 1h 11m


As modern software expectations increase the burden on DevOps delivery processes, engineers working on AWS need increasingly reliable and rapid deployment techniques. This course aims to teach students how to design their deployment processes to best fit business needs.

Learn how to: 
- Work with immutable architecture patterns
- Reason about rolling deployments
- Design canary deployments using AWS ELB, EC2, Route 53, and Auto Scaling
- Deploy arbitrarily complex systems on AWS using blue-green deployments
- Enable low-risk deployments with fast rollback using a number of techniques
- Run pre- and post-deploy whole-cloud test suites to validate DevOps deployment and architectural logic

If you have thoughts or suggestions for this course, please contact Cloud Academy at


Welcome to another Cloud Academy course. I'm your instructor, Andrew Templeton, and today we'll be walking through deployment on Amazon Web Services. Let's go over what we'll learn in this lecture.

Within this lecture, we will go over how we define deployment, with a specific focus on the cloud and Amazon Web Services in particular. Once we define deployment, we need to briefly talk about how modern requirements have changed the nature of deployments over time. Because the business requirements relating to deployment have changed, we also need to look at how those changes have altered the challenges that technical people face when running deployments. We should set some goals for our deployment systems so we know which aspects of our systems we should be improving as we move forward. After defining goals, we will discuss how to quantitatively measure process quality, so we know whether we are trending toward our deployment goals.

Then we'll briefly wrap up with what the course as a whole should teach you. All right, let's get started on our deployment introduction. By the end of this lecture, you should have a pretty good sense of what a deployment is and why we are focusing on deployments.

First, let's define deployment. If you were to try teaching yourself about software deployment and went onto Wikipedia to understand what it was, you'd find this definition: "Software deployment is all of the activities that make a software system available for use." Kind of dry. Now what does this mean in plain English? It means that software deployment is all of the logic, the sets of processes, and everything else that goes into making sure our software is usable by those who need to use it. We should stick with this simple definition, because in the cloud in particular, this is exactly what our deployments look like. We take a cloud that is empty and has no logic in it, add our code, which is our logic, run a deploy, and then we are presented with a running service that our users can use.

Okay, so very simply again: cloud, plus code, deployed gives you a running service. Now, this may seem like a very basic representation of a deployment, but it matters for how we will rethink deployments as we move into more advanced automation. So let's take a look at modern requirements for how we should expect deployments to go in the cloud. Unfortunately for us technical folks, expectations around running deployments have increased dramatically as software has become more sophisticated and reached the lives of more and more business and consumer users.

First off, we have shorter timelines. Back in the day, when we were releasing something like a word processing program, we would release the software onto physical media, like a floppy disk or a CD, and would have many, many months before we expected to make a change of any kind. Now, in the modern cloud era, while technology has enabled us greatly, it has also dramatically increased the expectation for how quickly we should be able to release a build. It is now part of our job, as software engineers, to reduce the amount of time it takes to put new features and software in the hands of users, whether they be business or consumer. We are also now expected to be able to deploy new software with zero downtime, particularly for large, consumer-facing sites. Consumers cannot tolerate maintenance windows anymore, so we cannot hide behind scheduled maintenance windows in the middle of the night, during which we take our infrastructure down and do any updates we want. We need to find ways to run these deployments without destroying the service's uptime.

And finally, modern software is expected to be released and usable globally, which means that in some cases we might need to release our software to geographically disparate data centers, and may face issues timing or synchronizing deployments across multiple clouds. All in all, the modern technologist must contend with significantly increased expectations for how software is deployed and maintained, as compared to years past. Not only are the expectations placed upon us greater than ever before, but we also have to contend with a constantly evolving set of tools.

For instance, Amazon Web Services ships more than 10 releases in an average week. We have problems with data consistency and availability as many companies move into more geographically disparate regions, such that we have to synchronize not only across geography, but also across time. Whenever we ship a new release, we need to make sure that we do not drop any data. We also have to deal with the distributed nature of systems. For instance, in a microservices architecture, we may need to upgrade one component of the system without upgrading the rest, and have this new, upgraded component work well with the un-upgraded parts of the system. We also have increased flexibility in our courses of action. Whereas we used to be able to deploy software only via CD (or, for very small programs, via the internet), now that everything is cloud-based and distributed we can deploy our software in many different ways, and we have to be able to choose which way is best. This is a lot to deal with.
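The mixed-version microservices point can be sketched in code. The payload shapes and field names below are invented for illustration; the idea is simply that during a rolling upgrade, a component may receive responses from both upgraded and un-upgraded instances of a neighboring service, and must tolerate both.

```python
# Hypothetical sketch: during a rolling deploy, a consumer may receive
# responses from both old (v1) and new (v2) instances of a producer
# service, so it must normalize both payload shapes.

def normalize_user(payload: dict) -> dict:
    """Accept both the v1 and v2 response shapes for a 'user' record."""
    if "full_name" in payload:
        # v2 shape: a single full_name field
        first, _, last = payload["full_name"].partition(" ")
    else:
        # v1 shape: separate first/last name fields
        first, last = payload["first_name"], payload["last_name"]
    return {"id": payload["id"], "first": first, "last": last}

# Responses from an un-upgraded and an upgraded producer instance:
v1_response = {"id": 7, "first_name": "Ada", "last_name": "Lovelace"}
v2_response = {"id": 7, "full_name": "Ada Lovelace"}

assert normalize_user(v1_response) == normalize_user(v2_response)
```

Tolerating both shapes in the consumer is what lets the producer's instances be replaced one at a time instead of all at once.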

Let's set up some easy-to-remember goals. Deployments should satisfy the three Rs. Rapid, meaning we should be able to deploy quickly in wall-clock time: when I click the deploy button, the system should go out as fast as possible, and I also shouldn't have to spend a lot of time preparing each build. Repeatable, meaning I should be able to kick my deployment process off, let it run by itself, and expect a working deployment at the end. And recoverable, meaning that if something goes wrong with, for instance, the application code or business logic I am deploying, I should be able to reverse the course of the deployment, in the middle of the deployment, and prevent end users from seeing anything break. How do we measure process quality as we move toward the three Rs? It's simple: an improved deployment process should decrease both real hours and person-hours, ideally to zero, in a few different ways.
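One way to read "repeatable and recoverable" is as a deployment runner that executes steps in order and, on any failure, automatically undoes the steps already applied in reverse order. This is a minimal, AWS-agnostic sketch, not the course's own tooling; the step names are invented for illustration.

```python
# Minimal sketch of a recoverable deployment: each step pairs an action
# with an undo, and a failure rolls back completed steps in reverse
# order so end users never see a half-applied deploy.

def run_deployment(steps):
    """steps: list of (name, action, undo) tuples. Returns True on success."""
    completed = []
    for name, action, undo in steps:
        try:
            action()
            completed.append((name, undo))
        except Exception:
            # Roll back everything that succeeded, most recent first.
            for _done_name, done_undo in reversed(completed):
                done_undo()
            return False
    return True

log = []
ok = run_deployment([
    ("upload build",  lambda: log.append("uploaded"),  lambda: log.append("deleted build")),
    ("shift traffic", lambda: 1 / 0,                   lambda: log.append("traffic restored")),
])
# The failing second step causes the first step's undo to run.
```

Because every step carries its own undo, the same runner handles both a clean end-to-end deploy and a mid-deploy abort without operator intervention.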

Firstly, we should be able to decrease our maintenance downtime to zero. That is, we should have no scheduled maintenance windows, nor any unscheduled maintenance windows where we're doing a fire drill to recover from issues during a new deployment. We should also be driving our deploy run time down to zero. That is, both the manual preparation I have to do before a deployment and the time the deployment takes after I kick it off should be driven toward zero. I should also be driving the amount of time it takes to change the deployment process itself down to zero. For instance, if I have a certain configuration for my cloud and the architecture changes a little, I should be able to update or modify my deployment process or script without spending a ton of time on preparation. And finally, I should be decreasing my time to revert a deployment down to zero, whether I want to revert due to issues found in the application code after testing or due to an issue within the deployment script itself; either way, I want to be able to abort as quickly as possible.
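The four measures above can be tracked as plain numbers per deploy. A small sketch, with invented field names, of aggregating them so a team can check whether each one is trending toward the zero target:

```python
# Hypothetical sketch: record the four deployment-quality measures for
# each release and average them, so the team can see whether each
# measure is trending toward zero.

from statistics import mean

deploys = [
    # All values in minutes, one dict per release.
    {"downtime": 30, "run_time": 45, "change_time": 120, "revert_time": 60},
    {"downtime": 0,  "run_time": 20, "change_time": 30,  "revert_time": 10},
    {"downtime": 0,  "run_time": 8,  "change_time": 5,   "revert_time": 2},
]

def trend(metric):
    """Average of a metric across recorded deploys; the goal is zero."""
    return mean(d[metric] for d in deploys)

for metric in ("downtime", "run_time", "change_time", "revert_time"):
    print(f"{metric}: {trend(metric):.1f} min (target: 0)")
```

In practice a team would chart these per release rather than average them, but even a flat average makes regressions in any of the four measures visible.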

In summary, this course will teach you how to make your Amazon Web Services deploys much better. Next up, we'll be talking about immutable infrastructure, an exciting new capability made possible by the cloud and the advanced automation systems within Amazon Web Services.

About the Author


Nothing gets me more excited than the AWS Cloud platform! Teaching cloud skills has become a passion of mine. I have been a software and AWS cloud consultant for several years. I hold all 5 possible AWS Certifications: Developer Associate, SysOps Administrator Associate, Solutions Architect Associate, Solutions Architect Professional, and DevOps Engineer Professional. I live in Austin, Texas, USA, and work as development lead at my consulting firm, Tuple Labs.