
Rolling Updates and Rollbacks


Course sections:

  • Course Introduction
  • Deploying Containerized Applications to Kubernetes
  • The Kubernetes Ecosystem
  • Course Conclusion

The course is part of these learning paths

Building, Deploying, and Running Containers in Production
Introduction to Kubernetes
Duration: 2h 12m


Kubernetes is a production-grade container orchestration system that helps you maximize the benefits of using containers. Kubernetes provides you with a toolbox to automate deploying, scaling, and operating containerized applications in production. This course will teach you all about Kubernetes including what it is and how to use it.

This course is paired with an Introduction to Kubernetes Playground lab that you can use to follow along with the course using your own Kubernetes cluster. The lab creates a Kubernetes cluster for you to use as we perform hands-on demos in the course. All of the commands that are used in the course are included in the lab to make it easy to follow along.

The source files used in this course are available in the course's GitHub repository.

Learning Objectives 

  • Describe Kubernetes and what it is used for
  • Deploy single and multiple container applications on Kubernetes
  • Use Kubernetes services to structure N-tier applications 
  • Manage application deployments with rollouts in Kubernetes
  • Ensure container preconditions are met and keep containers healthy
  • Manage configuration, sensitive, and persistent data in Kubernetes
  • Discuss popular tools and topics surrounding Kubernetes in the ecosystem

Intended Audience

This course is intended for:

  • Anyone deploying containerized applications
  • Site Reliability Engineers (SREs)
  • DevOps Engineers
  • Operations Engineers
  • Full Stack Developers


You should be familiar with:

  • Working with Docker and using it at the command line


August 27th, 2019 - Complete update of this course using the latest Kubernetes version and topics



The last topic we will discuss on deployments is how updates work. Kubernetes uses rollouts to update deployments. A Kubernetes rollout is the process of updating or replacing replicas with replicas matching a new deployment template. Changes may be configuration changes, such as changing environment variables or labels, or code changes, which result in updating the image key of the deployment template. In a nutshell, any change to the deployment's template triggers a rollout.

Deployments have different rollout strategies. Kubernetes uses rolling updates by default: replicas are updated in groups, instead of all at once, until the rollout completes. This allows service to continue uninterrupted while the update is being rolled out. However, keep in mind that during the rollout there will be pods using both the old and the new configuration, and the application should handle that gracefully. As an alternative, deployments can be configured to use the recreate strategy, which kills all the old template pods before creating the new ones. That, of course, incurs downtime for the application. We’ll focus on rolling updates in this course.
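The strategy is declared on the deployment spec itself. Here is a minimal sketch of just the relevant field, assuming the field names from the apps/v1 Deployment API (the rest of the manifest is omitted):

```yaml
spec:
  strategy:
    type: RollingUpdate   # the default: replicas are replaced in groups, no downtime
    # type: Recreate      # alternative: delete all old pods before creating new ones,
    #                     # which incurs downtime for the application
```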


We actually rolled out an update in the last lesson when we added the CPU request to the app tier deployment’s pod template. Scaling is orthogonal to rolling updates, so none of our scaling events created rollouts.


kubectl includes commands to conveniently check, pause, resume, and roll back rollouts. Let's see how all of this works.


We’ll use our deployments namespace again and focus on the app tier deployment. First, we will delete the existing autoscaling configuration.

kubectl delete -n deployments hpa app-tier

Autoscaling and rollouts are compatible, but for us to easily observe rollouts as they progress, we'll need many replicas in action. Deleting the autoscaler will help us with that. Next, let’s edit the app tier deployment with

kubectl edit -n deployments deployment app-tier


To jump down to the replicas, press

/ [space] 2

Then press A to start editing, press the right arrow to move the cursor after the 2, and enter backspace 1 0 to set the replicas to 10.

It'll be easier to see the rollout in action with a large number of replicas. 

Also remove the resource request by pressing

Escape to stop editing, then /resources to jump to the resources field and press d 3 d to delete the 3 lines comprising the resource request. 

This will avoid any potential problems with scheduling the replicas in case not all 10 of the CPU requests can be satisfied.

Press colon wq to write the file and quit then watch the deployment until all the replicas are ready.

watch -n 1 kubectl get -n deployments deployments app-tier 
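After these edits, the relevant portion of the deployment spec should look roughly like the sketch below (not the full manifest; the image placeholder stands in for whatever image the deployment already uses):

```yaml
spec:
  replicas: 10            # scaled up from 2 so the rollout is easier to observe
  template:
    spec:
      containers:
        - name: server
          image: <unchanged>   # the deployment's existing app image
          # the resources request block has been deleted so that all 10
          # replicas can be scheduled even on a small cluster
```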


Now it’s time to trigger a rollout. Open the app-tier deployment with kubectl edit. 

kubectl edit -n deployments deployment app-tier

From here we can see that the server added the default values for the deployment strategy: the type is RollingUpdate, and the corresponding maxSurge and maxUnavailable fields control the rate at which updates are rolled out. maxSurge specifies how many replicas over the desired total are allowed during a rollout. A higher surge allows new pods to be created without waiting for old ones to be deleted. maxUnavailable controls how many old pods can be deleted without waiting for new pods to be ready. We’ll keep the defaults of 25%. You may want to configure them to trade off the impact on availability or resource utilization against the speed of the rollout. For example, you can have all the new pods start immediately, but in the worst case you could have all the new pods and all the old pods consuming resources at the same time, effectively doubling the resource utilization for a short period.
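To make the percentages concrete, here is how the defaults work out for our 10 desired replicas. Per the Deployment API, percentage values round up for maxSurge and down for maxUnavailable:

```yaml
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 25%        # ceil(10 * 0.25) = 3 extra pods allowed -> at most 13 pods total
    maxUnavailable: 25%  # floor(10 * 0.25) = 2 pods may be down -> at least 8 pods ready
```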


With those fields out of the way, we can trigger a rollout. Remember that any change to the deployment's template triggers a rollout. Let’s change the name of the container from server to api by typing

:%s/ server/ api/

That command substitutes any occurrence of [space] server with [space] api, causing the name to change. This is just a non-functional change for us, but it still demonstrates the rollout functionality. Apply the change by entering

Colon wq to write the file and quit.

Then we can immediately watch the rollout status with kubectl if we're fast enough. 

kubectl rollout -n deployments status deployment app-tier

kubectl rollout status streams progress updates in real time. You'll see new replicas coming in and old replicas going out. Repeat this exercise until you see the entire flow. Experiment with the number of replicas, max surge, and max unavailable as you please.


Rollouts may also be paused and resumed. I’ll split my window into two to better illustrate what is going on. Enter

tmux

to start the terminal multiplexer and press ctrl+b followed by the percent symbol to split the terminal vertically into 2. To switch between the two terminals you can enter ctrl+b followed by the left or right arrow key. In the right terminal I’ll prepare the same rollout status command we used before so that I can watch the status changes as soon as we apply an update.

[right] kubectl rollout -n deployments status deployment app-tier

Now switch to the left terminal

Ctrl+b [left arrow]

And edit the app-tier deployment again

kubectl edit -n deployments deployment app-tier

Let’s change the container name again by entering

:%s/ api/ pause-me/

Next we will quickly write the file with :wq to apply the changes, then watch the rollout status in the right terminal and pause the rollout mid-flight in the left terminal.

:wq

Ctrl+b [right arrow]

Ctrl+b [left arrow]

kubectl rollout -n deployments pause deployment app-tier


Now the rollout is paused. But pausing won’t stop replicas that were created before the pause; they will continue to progress to ready. However, no new replicas will be created while the rollout is paused. We can try a few things at this point. One option is to inspect the new pods before deciding to continue or roll back. We’ll simply get the deployment

kubectl get deployments -n deployments app-tier

And say that everything is a-okay and opt to continue. We can use the rollout resume command for that

kubectl rollout -n deployments resume deployment app-tier

The rollout picks up right where it left off and goes about its business. I’ll stop the terminal multiplexer now by entering

Ctrl+b & y

So now consider that you found a bug in this new revision and need to roll back. kubectl rollout undo to the rescue:

kubectl rollout -n deployments undo deployment app-tier

This will roll back to the previous revision. You may also roll back to a specific revision: use kubectl rollout history to get a list of all revisions, then pass the specific revision to kubectl rollout undo.
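As a sketch of the specific-revision workflow (the revision number below is illustrative; these commands need a running cluster with our deployments namespace):

```
# list the recorded revisions for the deployment
kubectl rollout -n deployments history deployment app-tier

# roll back to revision 2, for example
kubectl rollout -n deployments undo deployment app-tier --to-revision=2
```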


That’s all for this demonstration of rolling updates and rollbacks, but before we move on

let’s scale back the app tier to one replica to give back some CPU resources

kubectl scale -n deployments deployment app-tier --replicas=1


Deployments and rollouts are very powerful constructs. Their features cover a large swath of use cases. 


Let's reiterate what we covered in this lesson. We learned that rollouts are triggered by updates to a deployment's template.

Kubernetes uses a rolling update strategy by default.

We also learned how to pause, resume, and undo rollouts of deployments.


There's still so much more we can do with deployments. Rollouts depend on container status. Kubernetes assumes that created containers are immediately ready and that the rollout should continue. This does not work in all cases. We may need to wait for a web server to accept connections. Here's another scenario: consider an application using a relational database. The containers may start, but the application will fail until the database and tables are created. These scenarios must be considered to build reliable applications. This is where probes and init containers come into the picture. We'll integrate probes and init containers in the next two lessons. Please join me there when you are ready.

About the Author

Logan has been involved in software development and research since 2007 and has been in the cloud since 2012. He is an AWS Certified DevOps Engineer - Professional, AWS Certified Solutions Architect - Professional, Microsoft Certified Azure Solutions Architect Expert, MCSE: Cloud Platform and Infrastructure, Google Cloud Certified Associate Cloud Engineer, Certified Kubernetes Administrator (CKA), Certified Kubernetes Application Developer (CKAD), Linux Foundation Certified System Administrator (LFCS), and Certified OpenStack Administrator (COA). He earned his Ph.D. studying design automation and enjoys all things tech.