Welcome to Designing for Quality and Security with Azure DevOps. This course covers topics to help you learn how to design a quality strategy in Azure DevOps. It shows you how to analyze an existing quality environment and how to identify and recommend quality metrics, as well as what feature flags are and how to manage the feature flag lifecycle.
The course then moves on to technical debt and how to manage it, how to choose a team structure that optimizes quality, and how to handle performance testing. You'll look at some strategies for designing a secure development process and the steps you can take to inspect and validate both your codebase and infrastructure for compliance.
We'll wrap things up by covering strategies you can use to secure your development and coding environment, as well as recommended tools and practices that you can use to integrate infrastructure security validation.
If you have any questions, comments, or feedback relating to this course, feel free to contact us at email@example.com.
By the time you complete this course, you should have a good understanding of how to design for quality and security with Azure DevOps.
This course is intended for:
- IT professionals who are interested in earning the Microsoft Azure DevOps Solutions certification
- DevOps professionals who work with Azure on a daily basis
To get the most from this course, you should have at least a basic understanding of DevOps concepts and of Microsoft Azure.
Hi there, welcome to performance testing. Over the next few minutes we're going to look at some strategies for planning and implementing performance testing. Because users are sensitive to slow or malfunctioning applications, organizations need to ensure that they are incorporating performance testing and load testing procedures as part of the building and deployment of applications.
While quick deployment of applications and solutions is important, what's even more important is the ability to deliver well-designed and well-performing solutions that provide value to the end users and to the organization.
Before talking about performance testing, it's important to understand the difference between performance testing and load testing, because the two are often confused even though they are really two different concepts. Performance testing is used to see how responsive an application is, how stable it is, and how efficiently it uses the resources assigned to it. Load testing, on the other hand, is used to test how an application performs when it's run under load.
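To make that distinction concrete, here is a minimal sketch in Python using only the standard library. The `handle_request` function is a hypothetical stand-in for a real application operation; the course doesn't prescribe any particular tool, so this is purely illustrative of the two ideas.

```python
import time
import statistics
from concurrent.futures import ThreadPoolExecutor

def handle_request():
    """Hypothetical stand-in for a real application request."""
    time.sleep(0.01)  # simulate roughly 10 ms of work
    return True

def performance_test(samples=20):
    """Performance testing: how responsive is a single operation?"""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        handle_request()
        timings.append(time.perf_counter() - start)
    return statistics.mean(timings)  # average response time in seconds

def load_test(concurrent_users=50):
    """Load testing: how does the same operation behave with many users at once?"""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        results = list(pool.map(lambda _: handle_request(), range(concurrent_users)))
    elapsed = time.perf_counter() - start
    return elapsed, all(results)  # total wall-clock time, and whether every request succeeded
```

Notice that the performance test asks "how fast is one action?" while the load test asks "what happens when many actions arrive at once?" Real tooling answers the same two questions at much larger scale.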
While it's expected that an organization's testing team should focus on features when designing a solution, it should also be noted that performance is in many ways a feature itself.
A common issue that organizations run into is that performance testing only enters the equation after the development team is comfortable with the stability of the code. What should be happening instead is that the organization's performance testers start testing new code as soon as it's available. This ensures that feedback makes its way back to the developers sooner rather than later, allowing them to rectify issues more quickly.
During performance testing, you should be looking at how long user actions take to complete within an application. You should also be working out how much load the solution can handle before things start going south. This might require you to identify where the bottlenecks are as the number of users increases or as the size of the data grows. You may also want to ensure that the application or solution can run for long periods of time without issues like memory leaks, which can cause performance degradation.
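Two of the measurements above, timing user actions and watching for memory growth over a long run, can be sketched with the standard library alone. Everything here is illustrative: `search_catalog` is a hypothetical user action, and `tracemalloc` growth is only a rough signal of a leak, not proof of one.

```python
import time
import tracemalloc

def timed(action):
    """Wrap a user action and record how long each call takes (hypothetical helper)."""
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = action(*args, **kwargs)
        wrapper.timings.append(time.perf_counter() - start)
        return result
    wrapper.timings = []
    return wrapper

@timed
def search_catalog(term):
    # Stand-in for a real user action, such as searching a product catalog.
    return [item for item in ("alpha", "beta", "gamma") if term in item]

def soak_check(iterations=1000):
    """Run an action repeatedly and report net memory growth, a rough leak signal."""
    tracemalloc.start()
    before, _ = tracemalloc.get_traced_memory()
    for _ in range(iterations):
        search_catalog("a")
    after, _ = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return after - before  # bytes of growth; a steadily rising figure suggests a leak
```

A real soak test would run for hours or days rather than a loop of iterations, but the principle is the same: measure at the start, exercise the system, measure again, and investigate any steady upward trend.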
That all being said, it's important to utilize automated testing tools to perform this testing, because it's impractical to perform any sort of meaningful performance testing, or to integrate it into a DevOps pipeline, without a suite of automation tools.
Simply compiling a list of tools that you plan to use in your performance testing activities is not a plan. Sure, it's a good start, but there's more involved. You also need to sort out how you're going to configure your testing environment and what testing processes you're going to use. You'll also need to define success and failure: what will success look like, and what will failure look like?
There are also several questions that you need to answer. You'll need to understand the expectations of the business and the expectations of the target users. When creating a performance testing plan you'll also need to define the metrics that you plan to measure.
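Once you've defined the metrics you plan to measure, the success and failure criteria can be expressed as simple thresholds that a pipeline can check automatically. The metric names and threshold values below are purely illustrative assumptions; in practice they would come from the expectations of the business and the target users.

```python
# Hypothetical thresholds agreed with the business; names and values are illustrative.
THRESHOLDS = {
    "p95_response_ms": 800,   # 95th-percentile response time, in milliseconds
    "error_rate_pct": 1.0,    # maximum acceptable percentage of failed requests
    "throughput_rps": 100,    # minimum acceptable requests per second
}

def evaluate_run(measured):
    """Return (passed, failures) for one test run, compared against the thresholds."""
    failures = []
    if measured["p95_response_ms"] > THRESHOLDS["p95_response_ms"]:
        failures.append("p95 response time too high")
    if measured["error_rate_pct"] > THRESHOLDS["error_rate_pct"]:
        failures.append("error rate too high")
    if measured["throughput_rps"] < THRESHOLDS["throughput_rps"]:
        failures.append("throughput too low")
    return (not failures, failures)
```

The point of a gate like this is that "success" and "failure" stop being matters of opinion: a run either meets the agreed numbers or it doesn't, and the failing metrics are named explicitly.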
At the end of the day, it's critical that performance testing be included in your planning right from the start. If you'll be using a Kanban board, it might be a good idea to reserve a space near it for planning out your testing strategy. Any gaps in your testing strategy should be highlighted during iteration planning, and you should also sort out how you plan to monitor performance once a solution has been deployed.
- Introduction
- Identifying & Recommending Quality Metrics
- Feature Flags
- Technical Debt
- Team Structures
- Performance Testing
- Inspecting & Validating Code Base for Compliance
- Inspecting & Validating Infrastructure for Compliance
- Secure Development & Coding
- Infrastructure Security Validation Tools & Practices
- Conclusion
Tom is a 25+ year veteran of the IT industry, having worked in environments as large as 40k seats and as small as 50 seats. Throughout the course of a long and interesting career, he has built an in-depth skillset that spans numerous IT disciplines. Tom has designed and architected small, large, and global IT solutions.
In addition to the Cloud Platform and Infrastructure MCSE certification, Tom also carries several other Microsoft certifications. His ability to see things from a strategic perspective allows Tom to architect solutions that closely align with business needs.
In his spare time, Tom enjoys camping, fishing, and playing poker.