
Rolling Updates and Rollbacks

Contents

Course Introduction
  • Introduction (PREVIEW, 3m 31s)
Deploying Containerized Applications to Kubernetes
  • Pods (14m 55s)
  • Services (7m 29s)
  • Probes (10m 34s)
  • Volumes (13m 20s)
The Kubernetes Ecosystem
Course Conclusion

The course is part of these learning paths:

  • Building, Deploying, and Running Containers in Production
  • Introduction to Kubernetes
Overview

Difficulty: Beginner
Duration: 2h 30m
Students: 14574
Ratings: 4.4/5
Description

Kubernetes is a production-grade container orchestration system that helps you maximize the benefits of using containers. Kubernetes provides you with a toolbox to automate deploying, scaling, and operating containerized applications in production. This course will teach you all about Kubernetes including what it is and how to use it.

This course is paired with an Introduction to Kubernetes Playground lab that you can use to follow along with the course using your own Kubernetes cluster. The lab creates a Kubernetes cluster for you to use as we perform hands-on demos in the course. All of the commands that are used in the course are included in the lab to make it easy to follow along.

Learning Objectives 

  • Describe Kubernetes and what it is used for
  • Deploy single and multiple container applications on Kubernetes
  • Use Kubernetes services to structure N-tier applications 
  • Manage application deployments with rollouts in Kubernetes
  • Ensure container preconditions are met and keep containers healthy
  • Learn how to manage configuration, sensitive, and persistent data in Kubernetes
  • Discuss popular tools and topics surrounding Kubernetes in the ecosystem

Intended Audience

This course is intended for:

  • Anyone deploying containerized applications
  • Site Reliability Engineers (SREs)
  • DevOps Engineers
  • Operations Engineers
  • Full Stack Developers

Prerequisites

You should be familiar with:

  • Working with Docker and comfortable using it at the command line

Source Code

The source files used in this course are available here:

Updates

August 27th, 2019 - Complete update of this course using the latest Kubernetes version and topics

May 7th, 2021 - Complete update of this course using the latest Kubernetes version and topics

 

Transcript

The last topic we will discuss on deployments is how updates work. Kubernetes uses rollouts to update deployments. A Kubernetes rollout is the process of updating or replacing replicas with new replicas matching a new deployment template. Changes may be configuration changes, such as environment variables or labels, or code changes, which update the image key of the deployment template. In a nutshell, any change to the deployment's template will trigger a rollout.

Deployments have different rollout strategies, and Kubernetes uses rolling updates by default. Replicas are updated in groups, instead of all at once, until the rollout is complete. This allows service to continue uninterrupted while the update is being rolled out. However, you need to consider that during the rollout there will be pods using both the old and the new configuration of the application, and your application should handle that gracefully.

As an alternative, deployments can also be configured to use the recreate strategy, which kills all of the old template pods before creating the new ones. That, of course, incurs downtime, so we're going to focus on rolling updates in this course. We actually already rolled out an update in the last lesson, when we added the CPU request to the app tier deployment's pod template.

Scaling is an orthogonal concept to rolling updates, so scaling events do not create rollouts. Kubectl includes commands to conveniently check, pause, resume, and roll back rollouts. Let's check those out now. We'll use our deployments namespace again and focus on the app tier deployment.

First, we will delete the existing autoscaling configuration. Autoscaling and rollouts are compatible, but to easily observe rollouts as they progress we'll need many replicas in action. Deleting the autoscaler will help us with that.
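A sketch of that cleanup step, assuming the namespace is named deployments and the HorizontalPodAutoscaler is named app-tier (both names are inferred from the narration):

```shell
# List autoscalers in the demo namespace to confirm the name
kubectl -n deployments get hpa

# Delete the autoscaler so the replica count stays under our control
kubectl -n deployments delete hpa app-tier
```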

Next, let's edit the app tier deployment with the following command. We'll jump down to replicas and change the value from 2 to 10. It'll be easier to see the rollout in action with a large number of replicas. Also remove the resource request: press Escape to stop editing, jump down to resources, and type 3dd to delete the three lines comprising the resource request. This avoids any potential problems with scheduling the replicas if all 10 of the CPU requests can't be satisfied. We'll write and quit now, and we'll watch the replicas with the Linux watch command.
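The edit-and-watch workflow above might look like this, assuming the deployment is named app-tier in the deployments namespace:

```shell
# Open the deployment in your editor; change replicas from 2 to 10,
# delete the three lines of the resource request, then save with :wq
kubectl -n deployments edit deployment app-tier

# Watch the replica counts update every second
watch -n 1 kubectl -n deployments get deployments
```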

Now it's time to trigger a rollout. Open the app tier deployment with kubectl edit. From here, we can see the server added default values for the deployment strategy. Specifically, the type is RollingUpdate, and the corresponding maxSurge specifies how many replicas over the desired total are allowed during a rollout. A higher surge allows new pods to be created without waiting for old ones to be deleted.

The maxUnavailable field controls how many old pods can be deleted without waiting for new pods to be ready. We'll keep the defaults of 25% for both. You may want to configure them to trade off the impact on availability or resource utilization against the speed of the rollout. For example, you can have all of the new pods start immediately, but in the worst case all of the new pods and all of the old pods consume resources at the same time, effectively doubling the resource utilization for a short period.
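For reference, the server-added defaults described above look roughly like this in the deployment spec (a config fragment, not a complete manifest):

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%        # extra pods allowed above the desired replica count
      maxUnavailable: 25%  # old pods that may be unavailable before new ones are ready
```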

With those fields out of the way, let's trigger a rollout. This command will replace the container name server with cloudacademy for all of our pods. This is just a nonfunctional change for us, but it will demonstrate the rollout functionality. We'll go ahead and apply this by writing and quitting. Then we can immediately watch the rollout status with kubectl, if we're fast enough. kubectl rollout status streams progress updates in real time; you'll see the new replicas coming in and the old replicas going out.
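Those two steps, sketched with the same assumed names (the app-tier deployment in the deployments namespace):

```shell
# Edit the pod template (e.g. rename the container) to trigger a rollout
kubectl -n deployments edit deployment app-tier

# Immediately stream rollout progress; the command exits when the rollout completes
kubectl -n deployments rollout status deployment app-tier
```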

Feel free to repeat this exercise until you see the entire flow, and experiment with the number of replicas, maxSurge, and maxUnavailable as you please. Rollouts may also be paused and resumed. I'm going to split my window into two to better illustrate what is going on, by entering tmux. This is a terminal multiplexer, and I'll press Control+B followed by the percent symbol to split the terminal vertically.

To switch between the two terminals, enter Control+B followed by the left or right arrow. In the right terminal, I'll prepare the same rollout status command we used before so that I can watch the status change as soon as we apply an update. Then we'll jump over to the left terminal and edit the app tier deployment again, changing the container name once more by entering the following.

Next, we'll quickly write the file to apply the changes, then watch the rollout status in the right terminal and pause the rollout mid-flight in the left terminal. Now the rollout is paused. Pausing won't stop replicas that were created before the pause; they will continue to progress to ready. However, no new replicas will be created while the rollout is paused. When we're ready to continue, the rollout resume command does exactly that: the rollout picks up right where it left off and goes about its business.
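The pause and resume commands narrated above, again with assumed names:

```shell
# Stop creating new replicas mid-rollout; existing new pods keep progressing to ready
kubectl -n deployments rollout pause deployment app-tier

# Continue the rollout from where it left off
kubectl -n deployments rollout resume deployment app-tier
```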

I'm going to stop the terminal multiplexer now with Control+B followed by x, confirming with y. Now consider that you found a bug in the new revision and need to roll back. kubectl has a handy command exactly for that: kubectl rollout undo, which rolls back to the previous revision. You can also roll back to a specific revision: use kubectl rollout history to get a list of all revisions, then grab the specific revision number and pass it to the undo command.
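A sketch of the rollback commands, assuming the app-tier deployment:

```shell
# Roll back to the previous revision
kubectl -n deployments rollout undo deployment app-tier

# List recorded revisions, then roll back to a specific one
kubectl -n deployments rollout history deployment app-tier
kubectl -n deployments rollout undo deployment app-tier --to-revision=1
```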

That's all for this demonstration of rolling updates and rollbacks. But before we move on, let's scale the app tier back to one replica to free up some CPU resources. Deployments and rollouts are very powerful constructs, and their features cover a large swath of use cases.
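Scaling back down can be done with a single command (names assumed as before):

```shell
# Return the app tier to a single replica to free CPU resources
kubectl -n deployments scale deployment app-tier --replicas=1
```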

Let's reiterate what we've covered in this lesson. We learned that rollouts are triggered by updates to a deployment's template, and that Kubernetes uses a rolling update strategy by default. We also learned that we can pause, resume, and undo rollouts of deployments. There's still more that we can do with deployments; in particular, rollouts depend on container status.

Kubernetes assumes that created containers are immediately ready and that the rollout should continue, but this does not work in all cases. We may need to wait for a web server to accept connections. Here's another scenario: consider an application using a relational database. The containers may start, but requests will fail until the database and its tables are created.

These scenarios must be considered to build reliable applications. This is where probes and init containers come into the picture. We'll take a look at integrating probes and init containers in our next two lessons.

About the Author
Avatar
Jonathan Lewey
DevOps Content Creator
Students
17211
Courses
8
Learning Paths
3

Jonathan Lewey is a DevOps Content Creator at Cloud Academy. With experience in networking and operations in the traditional information technology industry, he has also led the creation of applications for corporate integrations and served as a Cloud Engineer supporting developer teams. Jonathan holds a number of certifications, including Cisco Certified Network Associate (R&S / Sec), AWS Developer Associate, AWS Solutions Architect, and a certification in Project Management.