Identifying & Recommending Quality Metrics

Welcome to Designing for Quality and Security with Azure DevOps. This course covers topics to help you learn how to design a quality strategy in Azure DevOps. It shows you how to analyze an existing quality environment and how to identify and recommend quality metrics, as well as what feature flags are and how to manage the feature flag lifecycle.

The course then moves on to technical debt and how to manage it, how to choose a team structure that optimizes quality, and how to handle performance testing. You'll look at some strategies for designing a secure development process and the steps you can take to inspect and validate both your codebase and infrastructure for compliance.

We'll wrap things up by covering strategies you can use to secure your development and coding environment, as well as recommended tools and practices that you can use to integrate infrastructure security validation.

If you have any questions, comments, or feedback relating to this course, feel free to contact us at

Learning Objectives

By the time you complete this course, you should have a good understanding of how to design for quality and security with Azure DevOps.

Intended Audience

This course is intended for:

  • IT professionals who are interested in earning the Microsoft Azure DevOps Solutions certification
  • DevOps professionals who work with Azure on a daily basis


To get the most from this course, you should have at least a basic understanding of DevOps concepts and of Microsoft Azure.


Azure DevOps allows organizations to deliver software faster and with higher quality. Oddly enough, these two goals are typically viewed as polar opposites: generally speaking, the faster an application is rolled out, the lower its quality, and the higher the quality of the app, the longer the development cycle. However, by leveraging Azure DevOps processes, developers can identify issues sooner, which in turn usually means that they can fix them sooner. The following key metrics directly relate to the quality of the code being produced, as well as the quality of the build and deployment process:

  • Failed builds percentage: the overall percentage of builds that are failing.
  • Failed deployments percentage: the overall percentage of deployments that are failing.
  • Ticket volume: the total volume of customer and bug tickets.
  • Bug bounce percentage: the percentage of customer or bug tickets that are being reopened.
  • Unplanned work percentage: the percentage of the overall work being performed that's unplanned.

Taking stock of these key metrics allows you to more easily identify and recommend quality metrics in your development environment.
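As a rough illustration, these metrics can be computed from raw pipeline and ticket data. The sketch below uses hypothetical in-memory records and field names (`result`, `reopened`, `planned` are assumptions for illustration); in practice you would pull this data from the Azure DevOps REST API or its Analytics views.

```python
def pct(part: int, whole: int) -> float:
    """Return part as a percentage of whole, rounded to one decimal (0.0 when whole is 0)."""
    return round(100 * part / whole, 1) if whole else 0.0

# Hypothetical sample data standing in for Azure DevOps query results.
builds = ["succeeded", "failed", "succeeded", "succeeded"]
deployments = ["succeeded", "succeeded", "failed", "succeeded", "succeeded"]
tickets = [
    {"type": "bug", "reopened": True},
    {"type": "bug", "reopened": False},
    {"type": "customer", "reopened": False},
]
work_items = [{"planned": True}, {"planned": True}, {"planned": False}]

# The five quality metrics discussed above.
metrics = {
    "failed_builds_pct": pct(builds.count("failed"), len(builds)),
    "failed_deployments_pct": pct(deployments.count("failed"), len(deployments)),
    "ticket_volume": len(tickets),
    "bug_bounce_pct": pct(sum(t["reopened"] for t in tickets), len(tickets)),
    "unplanned_work_pct": pct(sum(not w["planned"] for w in work_items), len(work_items)),
}
print(metrics)
```

Tracking these values over successive sprints, rather than as one-off snapshots, is what makes them useful as trend indicators for code and pipeline quality.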



Introduction - Identifying & Recommending Quality Metrics - Feature Flags - Technical Debt - Team Structures - Performance Testing - Inspecting & Validating Code Base for Compliance - Inspecting & Validating Infrastructure for Compliance - Secure Development & Coding - Infrastructure Security Validation Tools & Practices - Conclusion

About the Author

Tom is a 25+ year veteran of the IT industry, having worked in environments as large as 40k seats and as small as 50 seats. Throughout the course of a long and interesting career, he has built an in-depth skillset that spans numerous IT disciplines. Tom has designed and architected small, large, and global IT solutions.

In addition to the Cloud Platform and Infrastructure MCSE certification, Tom also carries several other Microsoft certifications. His ability to see things from a strategic perspective allows Tom to architect solutions that closely align with business needs.

In his spare time, Tom enjoys camping, fishing, and playing poker.