Security has been a friction point when discussing DevOps. This stems from the assumption that DevOps teams move too fast to handle security concerns. This makes sense if Information Security (InfoSec) is separate from the DevOps value stream, or if development velocity exceeds the bandwidth for existing manual InfoSec processes. However, DevSecOps is a methodology that offers a different take by thinking about application and infrastructure security from the very beginning.
In this article, we’ll look at how teams can secure their DevOps environments with the DevSecOps methodology and provide training materials that will help you secure your applications and environments. If you aren’t already familiar with DevOps, Agile, and continuous integration/continuous delivery, the Cloud Academy Playbook provides an ideal starting point for any team looking to quickly absorb and get started using the fundamental practices.
DevSecOps increases system security in the same way it increases quality. Simply put, it’s wrong to assume that high velocity means less stability or security. DevSecOps mandates automation. That allows teams to automate quality control measures, such as replacing manual testing with automated testing. The same thinking applies to replacing manual InfoSec processes with more scalable and maintainable automated processes. Automating this work replaces the bottleneck on InfoSec team members (the source of that incorrect assumption) with a system of shared responsibility and automated enforcement.
Additionally, DevSecOps requires continuous delivery. Continuous delivery moves organizations from manual processes to automated deployment pipelines. That shift required teams to rethink their day-to-day implementation work. Security is no different.
Shift left with automation
Automation changes the relationship between developers and InfoSec. Previously, InfoSec tests were performed manually at the end of the process. DevSecOps shifts those checks to earlier in the process and moves from individual to shared responsibility. Adopting automation enables teams to add more checks for run-time security concerns and downstream compliance and auditing scenarios.
There are actions teams may take regardless of their DevSecOps maturity. First, integrate different forms of static analysis in the deployment pipeline. If nothing else, take this away from this article since it will increase the security of your system with minimal effort.
- Use static code analysis tools like SonarQube to vet code for known security holes (such as calls to unsafe or deprecated functions).
- Integrate dependency scanning to fail builds whose dependencies have known common vulnerabilities and exposures (CVEs). Exploits distributed as compromised libraries, such as those hosted on npm or RubyGems, are becoming more frequent.
- Integrate dependency scanning on deployment artifacts such as Docker images or VM images. System-level packages may be exploited as well. Don’t rely on upstream base images to ensure all packages are exploit free.
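The bullet points above boil down to one pattern: run a scanner in the pipeline and fail the build on findings. Here is a minimal sketch of that gate in Python. The JSON report shape (`vulnerabilities`, `severity`) is an assumption for illustration, not any specific scanner’s output format; adapt it to whatever tool you adopt.

```python
# Minimal sketch of a CI gate: fail the build when a dependency scanner's
# report contains findings at or above a severity threshold.
# The report structure here is assumed, not any specific tool's format.

SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def should_fail_build(report: dict, threshold: str = "high") -> bool:
    """Return True when any finding meets or exceeds the severity threshold."""
    limit = SEVERITY_RANK[threshold]
    return any(
        SEVERITY_RANK.get(finding.get("severity", "low"), 0) >= limit
        for finding in report.get("vulnerabilities", [])
    )

# Example report with a hypothetical critical finding:
report = {"vulnerabilities": [{"id": "CVE-2017-5638", "severity": "critical"}]}
block_release = should_fail_build(report)
```

In a pipeline, a non-zero exit code from a wrapper script around this check is what actually stops the deployment.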
There are multiple off-the-shelf tools in this area. Find one that fits your infrastructure and use it. Then, you can focus on larger goals. The next objectives may be:
- Run Open Web Application Security Project (OWASP) tools, such as the ZAP scanner, against a test environment as part of your deployment pipeline. This requires more effort since teams may need to stand up a dedicated environment, but Infrastructure-as-Code mitigates that cost drastically.
- Vet the licenses of dependencies against compliance standards. Shipping a dependency with an incompatible license may have severe legal implications, so guard against it at scale with automation.
- Produce a “bill of materials” for each build. This may include a list of all package versions, language versions, and licenses. This may aid auditing and compliance later on.
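A bill of materials like the one just described can be generated in a few lines. The sketch below uses Python’s standard `importlib.metadata` to list installed packages with their versions and declared licenses; a real pipeline would also record OS-level packages and build metadata, and likely emit a standard format such as SPDX or CycloneDX.

```python
import platform
from importlib import metadata

# Minimal sketch of a per-build "bill of materials": the language version plus
# every installed package with its version and declared license. Field names
# are illustrative; real pipelines often emit SPDX or CycloneDX instead.

def build_bom() -> dict:
    packages = []
    for dist in metadata.distributions():
        packages.append({
            "name": dist.metadata["Name"],
            "version": dist.version,
            "license": dist.metadata.get("License", "unknown"),
        })
    return {
        "language": f"python-{platform.python_version()}",
        "packages": sorted(packages, key=lambda p: (p["name"] or "")),
    }

bom = build_bom()
```

Archiving this document alongside each build artifact turns a painful audit question ("what exactly shipped in version X?") into a file lookup.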
Securing your system does not end with automation. You also must reconsider your architecture.
Consider the sensitive parts of your system: the data itself (especially personally identifiable information, or PII), secrets like connection strings, usernames, and passwords, admin systems, and even the deployment pipeline itself. How can you increase security in these areas?
You should secure sensitive information such as certificates, API keys, and passwords using something like Vault from HashiCorp. This is a step up from passing sensitive information via environment variables. Vault is an enterprise-grade solution that supports different access control methods and even secret rotation.
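Whatever backend you choose, it helps to centralize secret access behind one interface rather than scattering environment-variable lookups through the codebase. The sketch below is a hypothetical abstraction: a dict stands in for a managed backend such as Vault’s KV engine, with environment variables as a fallback during migration.

```python
import os

# Minimal sketch: route all secret lookups through one class. The dict backend
# is a stand-in for a real secret manager (e.g. HashiCorp Vault); the env-var
# fallback eases migration away from environment-based secrets.

class SecretStore:
    def __init__(self, backend=None):
        self._backend = backend or {}  # would be a Vault client in production

    def get(self, name: str) -> str:
        if name in self._backend:       # preferred: the managed backend
            return self._backend[name]
        value = os.environ.get(name)    # legacy fallback during migration
        if value is None:
            raise KeyError(f"secret {name!r} not found")
        return value

store = SecretStore(backend={"DB_PASSWORD": "s3cret"})
```

Once every lookup goes through one choke point, swapping the backend for a real client, adding audit logging, or enforcing rotation becomes a change in one place.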
You plan to rotate your secrets, right?
The same thought process applies to your infrastructure and deployment systems. Your infrastructure should be protected by time-sensitive access tokens limited to the necessary privileges (e.g., a whitelist instead of a blacklist). The same is true of your deployment pipeline itself. If you wrote an API to support a ChatOps-style workflow, then how is that secured? A typical setup involves calling /deploy from Slack, but that doesn’t prevent an attacker who takes over a computer from simply typing that into Slack. Sensitive operations should be confirmed with multi-factor authentication. It’s easier to add these access control mechanisms into your workflow sooner rather than later, so keep that in mind.
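The multi-factor confirmation step can be as simple as requiring a time-based one-time password (TOTP, RFC 6238) alongside the deploy command. The sketch below implements the core TOTP mechanism with the standard library for illustration; a production setup should use a vetted library and your identity provider rather than hand-rolled crypto plumbing.

```python
import hmac
import struct
import hashlib

# Minimal sketch of TOTP (RFC 6238) to confirm a sensitive ChatOps command,
# e.g. requiring "/deploy prod 123456" where 123456 is a fresh one-time code.

def totp(secret: bytes, timestamp: int, step: int = 30, digits: int = 6) -> str:
    counter = struct.pack(">Q", timestamp // step)            # time window counter
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                                # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def confirm_deploy(secret: bytes, submitted_code: str, now: int) -> bool:
    """Accept the current window or the previous one to tolerate clock drift."""
    return any(
        hmac.compare_digest(totp(secret, now - drift), submitted_code)
        for drift in (0, 30)
    )
```

With this gate in place, a stolen Slack session alone is no longer enough to trigger a deployment; the attacker would also need the operator’s second factor.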
You also need to adopt a more verbose logging strategy. Log all production events from the system itself and the supporting control systems (like application deploys or infrastructure updates). Include who and what with all events. It’s even better if log entries point to commits in version control. This information is vital in a security-related post-mortem, future audit, or compliance check.
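In practice, "who and what" works best as structured log entries rather than free-form text. The sketch below emits one JSON line per control-plane event, including a commit SHA for traceability back to version control; the field names are illustrative, not a standard.

```python
import json
import datetime

# Minimal sketch of structured audit logging: one JSON line per control-plane
# event recording who did what, to which system, and which commit it traces to.
# Field names are illustrative.

def audit_event(actor: str, action: str, target: str, commit_sha: str) -> str:
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "actor": actor,        # who triggered the event
        "action": action,      # what happened, e.g. "deploy"
        "target": target,      # which system or environment
        "commit": commit_sha,  # traceability back to version control
    }
    return json.dumps(entry)

line = audit_event("alice", "deploy", "payments-prod", "4f2a9c1")
```

Because each entry is machine-parseable, answering an auditor’s "who deployed to production in March?" becomes a query instead of an archaeology project.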
Also consider the data consumed and produced by your system. This is where regulations such as the GDPR interact with software architecture. First, take the necessary steps to encrypt data at rest. This mitigates the risk that compromised data is useful to a third party. Services such as Amazon S3 offer this functionality. Look for this feature in components added to your infrastructure. These components must fit into your general data retention strategy. You’ll need to consider how long to keep data, a strategy for purging it in response to a user request (i.e., the GDPR use case), and a way to tokenize all personally identifiable information.
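Tokenization deserves a concrete illustration. The idea is to replace PII with opaque tokens and keep the token-to-value mapping in a separate, access-controlled store; the in-memory dict below is a stand-in for that store. Deleting a mapping then effectively erases the PII from every downstream dataset that only holds tokens (the GDPR erasure case).

```python
import secrets

# Minimal sketch of PII tokenization. The dict is a stand-in for a separate,
# access-controlled token vault; downstream systems only ever see tokens.

class Tokenizer:
    def __init__(self):
        self._vault = {}  # token -> original value

    def tokenize(self, value: str) -> str:
        token = "tok_" + secrets.token_hex(16)  # opaque, unguessable token
        self._vault[token] = value
        return token

    def detokenize(self, token: str) -> str:
        return self._vault[token]

    def purge(self, token: str) -> None:
        """Honor an erasure request by destroying the mapping."""
        self._vault.pop(token, None)
```

A real implementation would also need deterministic tokenization for join keys and per-field access controls, but the core property is the same: the analytics copy of your data never contains the PII itself.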
Stay agile with DevOps
DevOps teams are well suited to adapt to changing requirements. Use that strength. Your environments are built with automation and Infrastructure as Code. That means you can accurately reproduce your environment at any point in time for auditing, compliance, or verification. You can also leverage the same skills to separate out infrastructure with different security requirements. Systems that handle financial data may be put into separate networks and deployed like any other system. Your team can stay on top of security by writing evil user stories. Attackers are users too, right?
These ideas don’t mean much in practice if you don’t have the skills to implement them. Cloud Academy has the courses to help you secure your applications and environments. GDPR and similar regulations will only become more important (we haven’t seen anything yet; wait until similar regulation comes to the U.S.). It’s in your and your organization’s best interest to stay ahead of the curve. Cloud Academy has an entire learning path dedicated to automating and implementing GDPR on AWS.
Access keys, permissions, and multi-factor confirmation are critical components of a secure environment. The AWS Access Key and Secret Management Learning Path demystifies the all-important IAM service and options for secret management using AWS services. Students will learn how to implement the access controls described in this article. Cloud Academy also offers a special course on Vault, developed in partnership with HashiCorp, if you want to implement the secret handling described in this post.
Cloud Academy’s library offers something for dedicated InfoSec engineers too. The Azure and AWS Learning Paths teach you how to take advantage of built-in security features and services that enable strong security practices that protect your cloud applications.
DevOps and security are continuous. There is no end state. This approach only works in a culture of shared responsibility. Aaron McKeown, Head of Security Engineering at Xero, along with Stuart Scott, AWS Lead at Cloud Academy, discuss this exact topic in The Shared Responsibility Model: Best Practices for Building a Culture of Security. Stuart and Aaron offer advice on developing and implementing a security program that encourages developers to innovate in security-enabled teams.
Changing the culture goes hand in hand with shifting security left. Mark Andersen, Director of DevOps at Capital One, and Jessica Clark, Scrum Master at Capital One Auto Finance, discuss the reality in a recent webinar.
If you are considering the AWS Certified Security – Specialty exam, then you’ll want to watch this webinar. Andy Larkin, Head of Content at Cloud Academy, and Stuart Scott, AWS Lead, discuss how to pass the exam and leverage AWS security services in the Cloud Academy Talk: AWS Security Specialist Certification and Beyond webinar.