How DevOps Transforms Software Testing

Testing is arguably the most important aspect of software development. Whether manual or automated, testing ensures the software works as expected. Broken software causes production outages, dissatisfied customers, refunds, decreased trust, or even complete financial collapse. Testing minimizes these negative consequences and, when done well, enables teams to reach increasingly higher quality thresholds.

DevOps transforms testing by promoting it to a first-class concern across every phase of the SDLC and by sharing the responsibility for it among all engineers. It also pushes engineers to ask where and how more aspects of their software can be tested. That shift affects team workflows and the deployment pipeline, and it encourages exploratory testing. This post covers how DevOps transforms the perspective on software quality and what it means in practice.

Fast Feedback with Trunk-Based Development

The DevOps Handbook defines DevOps with three principles and their associated feedback loops. The Principle of Flow builds a fast feedback loop from development to production by establishing an automated deployment pipeline with tests that check production fitness using trunk-based development.

Trunk-based development coupled with automated testing is the best way to achieve fast feedback from development to production, since it drives down batch sizes and ensures all changes are in working order. Adopting this workflow assumes that branches are short-lived and that every commit is tested.

Trunk-based development transforms testing workflows because work happens in a single shared place. Commits are small and easy to reason about, but any one of them can bring the deployment pipeline to a screeching halt, which is exactly why each must be tested. The workflow also avoids merge hell and long-lived development conflicts.

Here’s an example: performance testing can only happen against an integrated environment, so without trunk-based development those tests would happen far later in the process, where failures are much more costly. Trunk-based development enables a “shift left” for any kind of testing, providing faster feedback on build quality, enabling faster iterations, and ultimately increasing the frequency of production deploys.
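To make this concrete, here is a minimal sketch of a commit-stage script that a trunk-based pipeline could run on every push to trunk. The stage names, the pytest invocations, and the hypothetical scripts/perf_smoke.py are illustrative assumptions, not a prescription for any particular CI system.

```python
"""Commit-stage gate, run on every push to trunk (sketch only).

The commands below stand in for whatever test suites a team actually
maintains; perf_smoke.py is a hypothetical fast performance check.
"""
import subprocess
import sys

# Ordered cheapest-first so the pipeline fails as early as possible.
STAGES = [
    ("unit tests", ["pytest", "tests/unit", "-q"]),
    ("integration tests", ["pytest", "tests/integration", "-q"]),
    # Performance smoke test against the integrated build -- only possible
    # this early because trunk is always kept in a deployable state.
    ("performance smoke", ["python", "scripts/perf_smoke.py"]),
]


def main() -> int:
    for name, cmd in STAGES:
        print(f"--> {name}: {' '.join(cmd)}")
        result = subprocess.run(cmd)
        if result.returncode != 0:
            print(f"FAILED at {name}; this commit must not be promoted")
            return result.returncode
    print("Commit is green -- safe to promote toward production")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```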

This workflow forces teams to adopt a Definition of Done similar to the one found in the DevOps Handbook:

“At the end of each development interval, we must have integrated, tested, working, and potentially shippable code, demonstrated in a production-like environment, created from trunk using a one-click process, and validated with automated tests.”

The Definition of Done removes the need for separate test and stabilization phases towards the end of projects since testing happens continuously. Once testing is automated, teams can turn their attention to identifying and improving other quality indicators earlier in the deployment pipeline.

Continuous Security

Security and compliance checks have traditionally taken place at the end of development and have been done manually. Adopting DevOps integrates infosec into everyone’s daily work as part of the automated deployment pipeline. The shift left also causes teams to engage with infosec concerns as early as possible.

Today, it’s possible to test for and mitigate a host of infosec issues by adding the following checks to the deployment pipeline (a minimal wiring sketch follows the list):

  • Static analysis inspects the program without executing it, looking for problematic run-time behaviors, coding flaws, backdoors, and potentially malicious code like calls to exec. Examples of tools that perform static analysis include CodeClimate and Brakeman.
  • Dynamic analysis consists of tests executed while the program is running. These tests monitor aspects like system memory, functional behavior, response time, and overall performance, and they can probe for known security vulnerabilities. This type of testing can even run against a production environment. Examples of tools used for dynamic analysis include Arachni and OWASP ZAP.
  • Dependency scanning checks dependency code and executables for known vulnerabilities. Ruby’s bundler-audit is one example of a dependency scanner.
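As a rough illustration, here’s how those three classes of checks might be wired into a single pipeline stage. The tool choices and flags are assumptions for a Ruby web app (Brakeman, bundler-audit, and OWASP ZAP’s baseline scan, which ships with the ZAP Docker images); substitute whatever matches your stack.

```python
"""Security checks as a deployment pipeline stage (illustrative sketch).

Tool names, flags, and the staging URL are assumptions for a Ruby web
app; swap in the scanners that match your stack.
"""
import subprocess
import sys

CHECKS = [
    # Static analysis of the application source.
    ("static analysis (Brakeman)", ["brakeman", "-q"]),
    # Known-vulnerability scan of declared dependencies.
    ("dependency scan (bundler-audit)", ["bundle-audit", "check", "--update"]),
    # Dynamic analysis against a running test deployment (hypothetical URL);
    # zap-baseline.py is typically run from the ZAP Docker image.
    ("dynamic analysis (OWASP ZAP baseline)",
     ["zap-baseline.py", "-t", "https://staging.example.com"]),
]

failed = []
for name, cmd in CHECKS:
    print(f"--> {name}")
    if subprocess.run(cmd).returncode != 0:
        failed.append(name)

if failed:
    print("Security stage failed:", ", ".join(failed))
    sys.exit(1)
print("Security stage passed")
```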

Applying these kinds of tests provides fast, immediate feedback on a variety of possible infosec issues. The practice also frees up engineers to focus on other software quality practices. Here’s a story from Etsy on how the team took steps to proactively identify security issues in their production environment:

The engineering team added metrics for abnormal production events: core dumps and segmentation faults, database syntax errors (a signal of potential SQL injection attacks), suspicious SQL queries, and password resets. They graphed the results in real time and found they were being attacked far more often than they had thought. Here’s the project lead discussing the impact on their team:

“One of the results of showing this graph was that developers realized that they were being attacked all the time! And that was awesome, because it changed how developers thought about the security of their code as they were writing the code.”
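Instrumenting those events takes very little code. Below is a hypothetical sketch using a StatsD-style counter (StatsD itself came out of Etsy); the metric names, function signatures, and the statsd Python client are assumptions for illustration, not Etsy’s actual implementation.

```python
"""Emit security-relevant counters from the application (sketch).

Assumes the `statsd` Python client and a StatsD daemon on localhost;
metric names are illustrative. In practice the arguments would also be
logged alongside each counter for later investigation.
"""
from statsd import StatsClient

statsd = StatsClient(host="localhost", port=8125, prefix="production.security")


def record_sql_syntax_error(query: str) -> None:
    # A malformed query in production is a possible SQL injection probe.
    statsd.incr("sql_syntax_error")


def record_segfault(process_name: str) -> None:
    # Core dumps and segfaults are abnormal operational events worth counting.
    statsd.incr("segfault")


def record_password_reset(user_id: int) -> None:
    # A spike here can indicate account-takeover attempts.
    statsd.incr("password_reset")
```

Graphing counters like these in real time is what made the attack traffic visible to Etsy’s developers in the first place.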

Changing the organization’s relationship to its code changes how the code is tested, and a careful eye for software quality can confirm or refute a team’s assumptions. This example is not something typically associated with software testing, and that’s the point: DevOps changes the way the entire team approaches verifying and testing their software. Modern teams are going further and using fault injection techniques like chaos engineering to build more resilient systems.

Testing in Production

Netflix pioneered chaos engineering. The Principles of Chaos Engineering describes the practice as:

“…the discipline of experimenting on a distributed system in order to build confidence in the system’s capability to withstand turbulent conditions in production.”

The practice involves taking random (or targeted) destructive actions in a production environment to stress test its reliability. The simplest experiment is randomly killing production instances and observing how the system behaves. Other forms of chaos include increasing network latency or cutting off access to external systems.
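The “randomly kill an instance” experiment can be sketched in a few lines. The example below assumes AWS EC2 via boto3 and a hypothetical chaos=candidate tag marking instances that have opted in; real chaos tooling adds safeguards, scheduling, and blast-radius limits on top of this.

```python
"""Terminate one random, opted-in production instance (chaos sketch).

Assumes boto3 credentials are configured and that a hypothetical
`chaos=candidate` tag marks instances eligible for the experiment.
"""
import random

import boto3

ec2 = boto3.client("ec2")

# Find running instances that have opted in to chaos experiments.
reservations = ec2.describe_instances(
    Filters=[
        {"Name": "tag:chaos", "Values": ["candidate"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)["Reservations"]

instances = [i["InstanceId"] for r in reservations for i in r["Instances"]]

if instances:
    victim = random.choice(instances)
    print(f"Terminating {victim}; watch dashboards and alerts for recovery")
    ec2.terminate_instances(InstanceIds=[victim])
else:
    print("No chaos candidates found; nothing to do")
```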

This exercise not only builds more reliability into systems, it also teaches the team how to repair them. Michael Nygard refers to the “Volkswagen Microbus” paradox in Release It! (2nd Edition):

“You learn how to fix the things that often break. You don’t learn how to fix the things that rarely break. But that means when they do break, the situation is likely to be more dire. We want a continuous low level of breakage to make sure our system can handle the big things.”

Chaos engineering is hard to bucket under any single engineering skill set because it doesn’t fit one. The engineer must understand the system and its infrastructure, and have the engineering chops to back it all up. Resolving the faults it uncovers is also not a single person’s responsibility, but the team’s. Software testing is no longer focused purely on functional requirements; it is increasingly about identifying unknowns and verifying non-functional requirements. It may seem obvious that engineers should know how to repair their systems, but they can’t learn to do that without practice. Chaos engineering is an interesting approach to creating a regression test for those non-functional requirements.

The adoption of chaos engineering indicates how DevOps is transforming software testing and the team’s approach to ensuring high-quality software.

Future of Software Testing

DevOps shifts responsibility away from specific individuals to a shared responsibility model backed by automation. That’s significant news for those working on traditional QA teams, especially if they do manual testing or don’t have much software engineering experience. DevOps obviates the need for dedicated manual QA staff and pushes every team member to become a software engineer. All forms of automation require writing code, so traditional QA staff who don’t learn to code will be left out in the cold. DevOps replaces that face of QA with a more useful, analytical, and exploratory one.


Teams will always need engineers to explore ways to break their systems, since that is a fundamentally creative and experimental process. Experimentation and learning are key components of DevOps. The DevOps Handbook captures this in its “Third Way”:

“practices that create opportunities for learning, as quickly, frequently, cheaply, and as soon as possible. This includes creating learnings from accidents and failures, which are inevitable when we work within complex systems, as well as organizing and designing our systems of work so that we are constantly experimenting and learning, continually making our systems safer.”

Chaos engineering is an example of constant experimentation and learning from real-world operations to make systems safer, and DevOps orients software testing in a way that facilitates this. There’s something powerful there. Testing isn’t an activity confined to a specific team, feature, or part of an application. It goes wherever the deployment pipeline goes, be it infosec compliance, functional testing, or fault injection. It’s the tests’ job to keep the deployment pipeline moving and regression-free. No matter where you are in your DevOps journey, beginner or advanced, make sure you can write the code and tests to keep up.


Written by

Adam Hawkins

Passionate traveler (currently in Bangalore, India), trance addict, DevOps and continuous deployment advocate. I lead the SRE team at Saltside where we manage ~400 containers in production. I also manage Slashdeploy.

