Securing with Secrets
Compliant Development Process
Configuration is an important aspect of determining an application’s behavior. Settings files often include sensitive information like passwords and API keys. In this course, we will look at how to protect that sensitive information while the app is being developed and when it is in production.
Azure’s App Configuration Service allows you to manage access to settings data, and we will see how to use it within a .NET application. We will look at using Azure Key Vault in conjunction with the App Configuration Service, and at how to access Azure Key Vault directly from your application and from apps running in a container within a Kubernetes cluster.
Next, we look at the idea of shifting security testing left within your development process, and at how we can automate security testing as part of implementing a compliant development process. Much of this will involve using extensions from the Azure Marketplace within your DevOps build pipeline.
This course contains numerous demonstrations from the Azure platform so that you can get a first-hand look at the topics we will be covering. If you have any feedback relating to this course, please contact us at email@example.com.
- Learn about app configuration
- Run and deploy apps with the Azure App Configuration service
- Use Azure Key Vault to store secrets and certificates
- Access Key Vault directly from your apps, including those running within a Kubernetes cluster
- Create a compliant development process by integrating code analyzers, branch policies, quality gates, open-source library scanning, and automated penetration testing into a build pipeline
- Intermediate-level developers, DevOps engineers, and product managers
- Anyone interested in learning how to implement secure app configurations and development pipelines
To get the most out of this course, you should have some pre-existing knowledge of software development and of using Microsoft Azure.
Penetration testing is really the last piece in the DevOps puzzle in terms of application testing and brings us full circle, or rather back to the right-hand end of the DevOps pipeline, so to speak. The question becomes, how can we automate penetration testing as part of our CI/CD process? There are many tools out there that will do this, but I'm going to look at one in particular and how to integrate it with the build pipeline.
You may recall that earlier in the course, when I talked about injection attacks, I said they were number one on the OWASP (Open Web Application Security Project) Top 10 list of security risks. Well, it turns out OWASP has a product called OWASP ZAP, which can perform both passive and active penetration testing on your web app.
Currently, OWASP ZAP is not provided as a service. But to give you an idea of how it works, you can download an executable, which I've done here, and I'm running it against my app service. The good people at Microsoft have created an extension that you can download for free into your Azure DevOps organization.
Let's go over to the marketplace and find that extension. I'll just search for OWASP ZAP. Here it is: the OWASP ZAP scanner. We'll have a quick look down the page, because there are some interesting scripts here that we're going to use later on for integrating the scanner's reports into our DevOps tests. I'll install it by clicking the Get It Free button and go back to my project's build pipeline.
So the first thing I need to do is add the OWASP ZAP scanner task. You have two choices of scan type: targeted, or scan on an agent. We are going to use a targeted scan, which means pointing the scanner at a URL. Scan on an agent is for when you want to scan an app running in a container on the build agent. Aggressive scan mode means active penetration testing, which simulates what a hacker might do to try to break into, or simply break, your website.
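As a rough sketch, the task configuration described above would look something like this in pipeline YAML. The task name (`owaspzap@1`) and the input names are assumptions based on the marketplace extension and may differ between versions, and the target URL is hypothetical:

```yaml
# Sketch of the OWASP ZAP scanner task (names assumed, verify against the
# installed extension's documentation).
steps:
  - task: owaspzap@1
    displayName: 'OWASP ZAP targeted scan'
    inputs:
      scantype: 'targetedScan'                          # scan a URL, not an agent container
      url: 'http://my-app-service.azurewebsites.net'    # hypothetical target
      aggressivemode: false                             # keep active scanning out of CI/CD
      port: '80'
```

In the classic pipeline editor, these inputs correspond to the fields filled in on the task's configuration pane.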
Failure threshold is the number of failures the scanner will tolerate before it fails the build. And of course, I'll be scanning on port 80, but you can change that if the need arises.

After the scan has run, we need to get hold of the reports, and we are going to do that with a Publish Build Artifacts task. I'm just going to give it a meaningful name, but I will also need to change the path to publish to $(Build.SourcesDirectory)/owaspzap. I don't need to change the publish location.

And while I remember: we shouldn't run aggressive mode OWASP ZAP scans during continuous integration and deployment. Aggressive, or active, mode is resource- and time-intensive, so it should be done as a scheduled job, perhaps overnight when people aren't working on the build pipeline and, if the application is in production, at times when it is likely not to be highly utilized.
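The publishing step uses the standard Publish Build Artifacts task; a minimal sketch, with the artifact name chosen here for illustration:

```yaml
# Publish the raw ZAP reports from the source directory as a build artifact.
steps:
  - task: PublishBuildArtifacts@1
    displayName: 'Publish OWASP ZAP reports'
    inputs:
      PathtoPublish: '$(Build.SourcesDirectory)/owaspzap'
      ArtifactName: 'owasp-zap-reports'   # illustrative name
```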
Having published the reports, this is where I want to go back to the marketplace page and grab those scripts to integrate the results into Azure DevOps testing. I'll copy the first one and paste it in, then grab the next script, which is another bash script, and paste that in. Then the last task is to publish the test results. Now that we've finished setting up the build script, we can save and run it. Let's go and have a look at the pipeline in action.
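Structurally, those last three steps are two inline bash tasks followed by a Publish Test Results task. The script bodies come from the extension's marketplace page and are elided here, and the converted results file name is an assumption:

```yaml
# Convert the ZAP report into test results Azure DevOps can display.
steps:
  - bash: |
      # first conversion script from the extension's marketplace page goes here
    displayName: 'Convert ZAP report'
  - bash: |
      # second script from the marketplace page goes here
    displayName: 'Generate test results file'
  - task: PublishTestResults@2
    displayName: 'Publish ZAP test results'
    inputs:
      testResultsFormat: 'NUnit'
      testResultsFiles: '**/Converted-OWASP-ZAP-Report.xml'   # assumed file name
```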
As I mentioned before, OWASP ZAP is not a service, so to get it to work as an extension within Azure DevOps, a build of the executable packaged in a container gets downloaded and run. OWASP maintains a regularly updated container image that is free to use. So we go to our artifacts under the build summary, and we can see the raw OWASP ZAP reports.
Let's have a look at the HTML report. Next, if we go into tests under the summary, we can see the results being integrated from OWASP ZAP. Let's have a look at the details. You can also create a bug work item directly from a test, with all the details from the report filled in, along with suggested solutions. So I'll just do that and assign it to the guy with the five empty coffee cups on his desk. And there we go.
Hallam is a software architect with over 20 years' experience across a wide range of industries. He began his software career as a Delphi/Interbase disciple but changed his allegiance to Microsoft with its deep and broad ecosystem. While Hallam has designed and crafted custom software using web, mobile, and desktop technologies, he believes good-quality, reliable data is the key to a successful solution. The challenge of quickly turning data into useful information for digestion by humans and machines has led Hallam to specialize in database design and process automation. Showing customers how to leverage new technology to change and improve their business processes is one of the key drivers keeping Hallam coming back to the keyboard.