Measuring DevOps Success: What, Where, and How

The DevOps methodology combines technical and organizational practices, so it's difficult to simply assign a grade and say "our organization is a B+ on DevOps!" Things don't work that way. A better approach identifies intended outcomes and a measurable characteristic for each outcome. Consider a common scenario: a team adopts DevOps to increase its velocity. High velocity is a common trait among DevOps teams, and it can be measured, tracked, and used to prioritize decisions. In this post, we'll look at the traits and measurements of DevOps success.

Before we dive in, check out Cloud Academy's Recipe for DevOps Success webinar in collaboration with Capital One, and don't forget to have a look at Cloud Roster, the job role matrix that shows you which technologies DevOps professionals should be familiar with. If you're responsible for implementing DevOps practices at your company, we also suggest reading The Four Tactics for Cultural Change in DevOps Adoption.

DevOps in Four Metrics

The DevOps Handbook defines three primary principles in a DevOps value stream. First, the principle of flow calls for fast movement of work from development through to production. This manifests as continuous delivery (or deployment). Second, the principle of feedback calls for fast feedback from production back to development. The idea is to understand production operations, such as performance, end-user activity, or outages, and react accordingly. This manifests as tracking and monitoring real-time telemetry such as time series data, metrics, and logs. The third principle calls for experimentation and continuous learning. DevOps does not have an end state. It's like a game where the goal is to keep playing and get a little better each round. Practically speaking, this principle shows up as an organizational mindset, visible in the way team members approach their work, handle outages, and seek out improvements that may directly or indirectly impact other areas.

Accelerate correlates these ideas and technical practices with four metrics:

  1. Lead Time: The time from code committed to code running in production
  2. Deployment Frequency: How often deploys happen
  3. Mean Time to Recover (MTTR): How quickly the team restores service after a production impairment or outage
  4. Change Failure Rate: The percentage of deploys that result in service impairment or an outage

The first two metrics capture velocity and the second two measure quality. Applying the third principle (continuous experimentation and learning) should drive positive long term trends in all four metrics.

Measuring Lead Time

Lead time measures the time required to implement, test, and deliver a single piece of functionality. Measuring it requires starting a clock when development starts and stopping it when that code enters production. Gathering this data requires integrating with a feature/issue tracker, source control, or both. The final implementation depends on how your team works. Here's one possible implementation.

Consider a Kanban board with three columns: Todo, Work in Progress (WIP), and Done. WIP is defined as any kind of ongoing work. Done is defined as delivered in production. Each item on the board has an ID, and team members move cards across the board as work progresses. A developer moves a card from Todo to WIP when they begin work. Eventually, the change is deployed, and a product owner verifies the functionality in production and moves the card to Done. The team can integrate with the Kanban board's API to retrieve the timestamps when the card entered WIP and Done, subtract them, and record the value, as in the sketch below. Values should also be tagged with "feature" or "bug" for grouping and analysis.
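A minimal sketch of that calculation, assuming a hypothetical Kanban API that exposes each card's column transitions (the endpoint and field names here are illustrative, not any specific vendor's API):

```python
from datetime import datetime
import requests

BOARD_API = "https://kanban.example.com/api/cards"  # hypothetical endpoint


def lead_time_hours(card_id: str, api_token: str) -> dict:
    """Return the lead time for one card, tagged with its work type."""
    headers = {"Authorization": f"Bearer {api_token}"}

    resp = requests.get(f"{BOARD_API}/{card_id}/transitions", headers=headers)
    resp.raise_for_status()
    # Assumed shape: [{"column": "WIP", "at": "2020-09-01T09:00:00Z"}, ...]
    transitions = resp.json()

    entered_wip = min(t["at"] for t in transitions if t["column"] == "WIP")
    entered_done = max(t["at"] for t in transitions if t["column"] == "Done")

    fmt = "%Y-%m-%dT%H:%M:%SZ"
    delta = datetime.strptime(entered_done, fmt) - datetime.strptime(entered_wip, fmt)

    card = requests.get(f"{BOARD_API}/{card_id}", headers=headers).json()
    return {
        "card": card_id,
        "type": card.get("label", "feature"),  # "feature" or "bug" tag
        "lead_time_hours": delta.total_seconds() / 3600,
    }
```

The resulting records can be shipped to whatever metrics store the team already uses; the important part is that the start and stop points match the board's workflow.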

A second implementation could use pull requests. A developer opens a PR when they begin work and closes it when the change is verified in production. The lead time could be calculated from the first commit (or from when the PR was opened) to when it was merged (signifying that the code has been accepted into production), as sketched below. This approach depends on how the team works with branches.
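If your team follows that convention, a rough version of the measurement can be pulled from the GitHub REST API. This is only a sketch and assumes a PR is merged only after production verification, as described above:

```python
from datetime import datetime
import requests


def pr_lead_time_hours(owner: str, repo: str, number: int, token: str) -> float:
    """Lead time from a PR's first commit to its merge, in hours."""
    headers = {"Authorization": f"token {token}"}
    base = f"https://api.github.com/repos/{owner}/{repo}/pulls/{number}"

    pr = requests.get(base, headers=headers).json()
    if not pr.get("merged_at"):
        raise ValueError("PR has not been merged yet")

    commits = requests.get(f"{base}/commits", headers=headers).json()

    fmt = "%Y-%m-%dT%H:%M:%SZ"
    first_commit = min(
        datetime.strptime(c["commit"]["author"]["date"], fmt) for c in commits
    )
    merged = datetime.strptime(pr["merged_at"], fmt)
    return (merged - first_commit).total_seconds() / 3600
```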

The technical process for measuring lead time on your team is really an implementation detail. What matters is picking the start and stop points and automating the measurement from there. If it's not automated, it won't happen.

Measuring Deployment Frequency

Deployment frequency is the easiest of the bunch to measure. It's a simple counter incremented each time a deploy happens. Technically speaking, this could be pulled from your pipeline provider's API or tracked via a dedicated step at the end of the pipeline, as sketched below. Values should be tagged with the project and team member. Tracking this value yields a "deploys per day" metric. You can take it even further with "deploys per day per developer" once things really start to pick up.
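For example, a final pipeline step might push a counter increment to whatever metrics backend you already run. The sketch below assumes a StatsD daemon reachable from the build agent and GitLab-style CI environment variables; the hostname and metric naming are just one option:

```python
import os
import statsd  # pip install statsd


def record_deploy() -> None:
    """Increment a deploy counter, tagged by project and developer via the metric name."""
    client = statsd.StatsClient("statsd.internal", 8125)  # assumed StatsD host
    project = os.environ.get("CI_PROJECT_NAME", "unknown-project")
    developer = os.environ.get("CI_COMMIT_AUTHOR", "unknown-dev")
    client.incr(f"deploys.{project}.{developer}")


if __name__ == "__main__":
    record_deploy()
```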

You should also consider tracking a correlated metric: batch size, which strongly correlates with velocity. Smaller batches are easier to implement, test, and deploy. This is intuitive: compare a 10,000-line PR with a 1,000-line PR in code review. We know which one will be reviewed and merged faster. This metric could be measured as the total diff size of each PR.
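Measuring batch size can piggyback on the same source-control integration. As a sketch, the GitHub pull-request payload already exposes additions and deletions:

```python
import requests


def pr_batch_size(owner: str, repo: str, number: int, token: str) -> int:
    """Total lines changed (added + deleted) in a pull request."""
    pr = requests.get(
        f"https://api.github.com/repos/{owner}/{repo}/pulls/{number}",
        headers={"Authorization": f"token {token}"},
    ).json()
    return pr["additions"] + pr["deletions"]
```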

Measuring Mean Time to Recover

MTTR is measured as the time from when an impairment occurs until it's resolved, averaged across all incidents.

Any outage or paging software, such as PagerDuty, VictorOps, or Opsgenie, will provide this value for you. If you don't already have such software in place, you'll need to track the values somehow. Manual entry can suffice (assuming production issues are infrequent enough) until an automated system is available.
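Until then, even a spreadsheet export is enough to compute the number. A minimal sketch, assuming a CSV with one row per incident and ISO 8601 timestamps (the column names are only an example):

```python
import csv
from datetime import datetime


def mttr_minutes(path: str) -> float:
    """Average time to recover, in minutes, across all incidents in the CSV."""
    fmt = "%Y-%m-%dT%H:%M:%S"
    durations = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):  # expects "started" and "resolved" columns
            started = datetime.strptime(row["started"], fmt)
            resolved = datetime.strptime(row["resolved"], fmt)
            durations.append((resolved - started).total_seconds() / 60)
    return sum(durations) / len(durations) if durations else 0.0
```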

Measuring Change Failure Rate

Change failure rate is measured as total failed deploys divided by total deploys. The second metric (deployment frequency) already provides the denominator, which leaves the numerator. A simple implementation might just count the number of outages or production issues reported by monitoring or paging software (again, such as PagerDuty or VictorOps). That may be fine to start, but it ignores a subtle distinction: this metric should reveal flaws in your deployment pipeline, not in the outside world. Changes the team introduces and the outside world may both cause failures, but we're only interested in the former. Consider the case where your infrastructure provider goes down (yes, even AWS goes down, and it takes large portions of the internet with it). That shouldn't count against you. If you can, it's worth flagging production issues as such and tagging them with the related system. This data can be used to calculate both the global failure rate and the failure rate per system.
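Putting the two values together is simple division; the only care needed is excluding failures you didn't cause. A sketch, assuming each production issue record carries a tag identifying the responsible system:

```python
def change_failure_rate(deploys, incidents):
    """Failed deploys as a fraction of total deploys, ignoring external causes."""
    # Count only incidents attributed to our own changes,
    # not e.g. a cloud provider outage.
    deploy_caused = [i for i in incidents if i.get("cause") == "deploy"]
    return len(deploy_caused) / deploys if deploys else 0.0


# Example: 3 deploy-related failures out of 120 deploys -> 2.5%
incidents = [
    {"id": 1, "cause": "deploy"},
    {"id": 2, "cause": "aws-outage"},
    {"id": 3, "cause": "deploy"},
    {"id": 4, "cause": "deploy"},
]
print(f"{change_failure_rate(120, incidents):.1%}")  # 2.5%
```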

Conclusion: Setting High-Performance Targets

Now your team is equipped with the four metrics of IT performance. Let's see how those stack up against industry leaders. Accelerate's 2017 research reports the following values for high performers on each metric:

  • Deployment Frequency: On-demand (multiple deploys per day)
  • Lead Time: Less than an hour (Small batches required!)
  • MTTR: Less than one hour
  • Change Failure Rate: 0-15%

Your team may not hit these targets right away. However, the team can accomplish anything with a direction, measured progress, and strong leadership. These topics also dovetail with Site Reliability Engineering (SRE), the discipline of improving reliability and removing toil from everyday work. You'll find that good SREs care deeply about these numbers.

I'll leave you with a quote from Accelerate that emphasizes the importance of these metrics and debunks the false premise that going fast means sacrificing stability or quality¹:

Astonishingly, these results demonstrate that there is no tradeoff between improving performance and achieving higher levels of stability and quality. Rather, high performers do better at all of these measures. This is precisely what the Agile and Lean movements predict, but much dogma in our industry still rests on the false assumption that moving faster means trading off against other performance goals, rather than enabling and reinforcing them.

Now you're equipped with context and data. It's time to start. Cloud Academy has courses and learning paths to guide you toward DevOps success. The DevOps Fundamentals learning path will guide you through implementing the first principle (flow) and the second principle (feedback). After that, remember that DevOps has no end goal. Focus on iterating, experimenting, learning, and improving. Any team can achieve DevOps success.

  1. Forsgren, Nicole, Jez Humble, and Gene Kim. Accelerate: The Science of Lean Software and DevOps: Building and Scaling High Performing Technology Organizations. IT Revolution Press. Kindle Edition. ↩︎


Written by

Adam Hawkins

Passionate traveler (currently in Bangalore, India), trance addict, DevOps and continuous deployment advocate. I lead the SRE team at Saltside where we manage ~400 containers in production. I also manage Slashdeploy.

