DevSecOps: How to Secure DevOps Environments

Security has been a friction point when discussing DevOps. This stems from the assumption that DevOps teams move too fast to handle security concerns. This makes sense if Information Security (InfoSec) is separate from the DevOps value stream, or if development velocity exceeds the bandwidth for existing manual InfoSec processes. However, DevSecOps is a methodology that offers a different take by thinking about application and infrastructure security from the very beginning.

In this article, we’ll look at how teams can secure their DevOps environments with the DevSecOps methodology and provide training materials that will help you secure your applications and environments. If you aren’t already familiar with DevOps, Agile, and continuous integration/continuous delivery, the Cloud Academy Playbook provides an ideal starting point for any team looking to quickly absorb and start using the fundamental practices.

DevOps Playbook

DevSecOps methodology

DevSecOps increases system security in the same way it increases quality. Simply put, it’s wrong to assume that high velocity means less stability or security. DevSecOps mandates automation, which allows teams to automate quality control measures, such as replacing manual testing with automated testing. The same thinking applies to replacing manual InfoSec processes with more scalable and maintainable automated processes. Automating this work removes the bottleneck on InfoSec team members (the source of the incorrect assumption) and replaces it with a system of shared responsibility and automated enforcement.

Additionally, DevSecOps requires continuous delivery. Continuous delivery moves organizations from manual processes to automated deployment pipelines, and that shift requires rethinking day-to-day implementation. Security is no different.

Shift left with automation

Automation changes the relationship between developers and InfoSec. Previously, InfoSec tests were performed manually at the end of the process. DevSecOps shifts those checks to earlier in the process and moves from individual to shared responsibility. Adopting automation enables teams to add more checks for run-time security concerns and downstream compliance and auditing scenarios.

There are actions teams may take regardless of their DevSecOps maturity. First, integrate different forms of static analysis into the deployment pipeline. If you take nothing else away from this article, take this: static analysis increases the security of your system with minimal effort.

  1. Use static code analysis tools like SonarQube to vet code for known security holes (such as calls to exec). A minimal sketch of this kind of check follows this list.
  2. Integrate dependency scanning to fail builds that include dependencies with known Common Vulnerabilities and Exposures (CVEs). Exploits distributed as compromised libraries, like those hosted on npm or RubyGems, are becoming more frequent.
  3. Integrate dependency scanning on deployment artifacts such as Docker images or VM images. System-level packages may be exploited as well, so don’t rely on upstream base images to ensure all packages are exploit free.
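
Here is a minimal sketch of the first item: a toy static check that walks a Python source tree and fails a CI build when it finds calls to exec or eval. It illustrates the class of rule a tool like SonarQube enforces; it is not a replacement for one, and it assumes a Python codebase.

```python
"""Toy static analysis check: flag calls to exec/eval in a source tree."""
import ast
import pathlib
import sys

BANNED_CALLS = {"exec", "eval"}  # known-dangerous builtins to reject

def find_banned_calls(root: str) -> list[str]:
    findings = []
    for path in pathlib.Path(root).rglob("*.py"):
        tree = ast.parse(path.read_text(), filename=str(path))
        for node in ast.walk(tree):
            # Match direct calls like exec("...") or eval("...")
            if (isinstance(node, ast.Call)
                    and isinstance(node.func, ast.Name)
                    and node.func.id in BANNED_CALLS):
                findings.append(f"{path}:{node.lineno} call to {node.func.id}")
    return findings

if __name__ == "__main__":
    problems = find_banned_calls(sys.argv[1] if len(sys.argv) > 1 else ".")
    for problem in problems:
        print(problem)
    sys.exit(1 if problems else 0)  # nonzero exit fails the pipeline stage
```

Run it as a pipeline step (for example, python check_exec.py src/) and the nonzero exit code stops the build before insecure code ships.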

There are multiple off-the-shelf tools in this area. Find one that fits your infrastructure and use it. Then, you can focus on larger goals. The next objectives may be:

  • Run Open Web Application Security Project (OWASP) tests (such as the OWASP ZAP scanner) against a test environment as part of your deployment pipeline. This requires more effort since teams may need to stand up a dedicated environment, but Infrastructure as Code mitigates that cost drastically.
  • Vet the licenses of dependencies against compliance standards. Using an incorrect license may have severe legal implications, so guard against it at scale with automation.
  • Produce a “bill of materials” for each build. This may include a list of all package versions, language versions, and licenses, which aids auditing and compliance later on (a minimal sketch follows this list).
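
To illustrate the last item, here is a minimal sketch that emits a build “bill of materials” as JSON from whatever packages are installed in the build environment. Dedicated SBOM tools (CycloneDX, for example) produce richer output; the field names here are illustrative assumptions.

```python
"""Emit a simple build "bill of materials" as JSON."""
import json
import platform
from importlib import metadata

def build_bom() -> dict:
    packages = []
    for dist in metadata.distributions():
        packages.append({
            "name": dist.metadata["Name"],
            "version": dist.version,
            # Not every package declares a license; record the gap explicitly.
            "license": dist.metadata.get("License", "UNKNOWN"),
        })
    return {
        "language": f"python {platform.python_version()}",
        "packages": sorted(packages, key=lambda p: p["name"] or ""),
    }

if __name__ == "__main__":
    # Archive this output alongside the build artifact for later audits.
    print(json.dumps(build_bom(), indent=2))
```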

Securing your system does not end with automation. You also must reconsider your architecture.

Security-first thinking

Consider the sensitive parts of your system. This includes the data itself (especially if you handle personally identifiable information, or PII), secrets like connection strings, usernames, and passwords, admin systems, and even the deployment pipeline itself. How can you increase security in these areas?

You should secure sensitive information such as certificates, API keys, and passwords using something like Vault from HashiCorp. This is a step up from passing sensitive information via environment variables. Vault is an enterprise-grade solution that supports different access control methods and even secret rotation.

You plan to rotate your secrets, right?
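
To make this concrete, here is a minimal sketch of reading a database password from Vault with the hvac Python client. The mount point, secret path, and environment variables are illustrative assumptions about your Vault setup, not a prescribed layout.

```python
"""Read a secret from Vault instead of a plain environment variable."""
import os
import hvac

client = hvac.Client(
    url=os.environ["VAULT_ADDR"],     # e.g., https://vault.example.com:8200
    token=os.environ["VAULT_TOKEN"],  # short-lived token from your auth method
)

# Assumes a KV v2 secrets engine with a secret stored at secret/myapp/db
response = client.secrets.kv.v2.read_secret_version(path="myapp/db")
db_password = response["data"]["data"]["password"]
```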

The same thought process applies to your infrastructure and deployment systems. Your infrastructure should be protected by time-sensitive access tokens limited to the necessary privileges (e.g., a whitelist instead of a blacklist). The same is true of your deployment pipeline itself. If you wrote an API to support a ChatOps-style workflow, then how is that secured? A typical setup involves calling /deploy from Slack, but that doesn’t prevent an attacker who takes over a computer from simply typing that into Slack. Operations should be confirmed with multi-factor authentication. It’s easier to add these access control mechanisms into your workflow sooner rather than later, so keep that in mind.
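
As a sketch of what that confirmation might look like, here is a hypothetical deploy handler gated behind a time-based one-time password (TOTP) check using the pyotp library. The user store and handler shape are illustrative; in practice the TOTP secrets themselves belong in a secret store like Vault.

```python
"""Gate a ChatOps /deploy command behind a TOTP (MFA) check."""
import pyotp

# Hypothetical per-user TOTP secrets; store real ones in Vault, not in code.
USER_TOTP_SECRETS = {"alice": "JBSWY3DPEHPK3PXP"}

def handle_deploy(user: str, service: str, totp_code: str) -> str:
    secret = USER_TOTP_SECRETS.get(user)
    # Reject unknown users and stale or incorrect one-time codes.
    if secret is None or not pyotp.TOTP(secret).verify(totp_code):
        return f"Deploy of {service} denied: MFA check failed for {user}."
    # ... trigger the real deployment here ...
    return f"Deploy of {service} confirmed for {user}."
```

With this in place, typing /deploy into Slack is not enough on its own; the attacker would also need the engineer’s authenticator device.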

You also need to adopt a more verbose logging strategy. Log all production events from the system itself and from the supporting control systems (like application deploys or infrastructure updates). Include the who and the what with all events. It’s even better if log entries point to commits in version control. This information is vital in a security-related post-mortem, future audit, or compliance check.
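
Here is a minimal sketch of such an audit entry: a structured, machine-parseable log line that records the who, the what, and the commit behind a deploy. The field names are illustrative, not a standard schema.

```python
"""Write structured audit log entries for deploy events."""
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
logger = logging.getLogger("audit")

def log_deploy(actor: str, service: str, git_sha: str) -> None:
    logger.info(json.dumps({
        "event": "deploy",
        "actor": actor,       # who triggered the change
        "service": service,   # what changed
        "commit": git_sha,    # points back to version control
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }))

log_deploy("alice", "billing-api", "9f2c1ab")
```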

Also consider the data consumed and produced by your system. This is where regulations such as the GDPR interact with software architecture. First, take the necessary steps to encrypt data at rest. This mitigates the risk that compromised data is useful to a third party. Services such as Amazon S3 offer this functionality, and you should look for it in any component added to your infrastructure. These components must also fit into your general data retention strategy. You’ll need to consider how long to keep data, a strategy for purging it in response to a user request (i.e., the GDPR use case), and a way to tokenize all personally identifiable information.
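
As a small example of encrypting at rest, here is a boto3 sketch that uploads an object to S3 with server-side encryption enabled. The bucket name and key are placeholders, and SSE with KMS is just one of the encryption options S3 offers.

```python
"""Upload an object to S3 with server-side encryption at rest."""
import boto3

s3 = boto3.client("s3")
s3.put_object(
    Bucket="example-data-bucket",    # placeholder bucket name
    Key="exports/users.csv",
    Body=b"...",                     # the data being stored
    ServerSideEncryption="aws:kms",  # encrypt at rest with an AWS KMS key
)
```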

Stay agile with DevOps

DevOps teams are well suited to adapt to changing requirements. Use that strength. Your environments are built with automation and Infrastructure as Code, which means you can accurately reproduce your environment at any point in time for auditing, compliance, or verification. You can also leverage the same skills to separate out infrastructure with different security requirements. Systems that handle financial data may be put into separate networks and deployed like any other system. Your team can also stay on top of security by writing evil user stories. Attackers are users too, right?

Next steps

These ideas don’t mean much in practice if you don’t have the skills to implement them. Cloud Academy has the courses to help you secure your applications and environments. GDPR and similar regulations will only become more important (we haven’t seen anything yet; just wait until such regulation comes to the U.S.). It’s in your and your organization’s best interest to stay ahead of the curve. Cloud Academy has an entire learning path dedicated to automating and implementing GDPR on AWS.

Getting trained

Access keys, permissions, and multi-factor authentication are critical components of a secure environment. The AWS Access Key and Secret Management Learning Path demystifies the all-important IAM service and the options for secret management using AWS services. Students will learn how to implement the access controls described in this article. Cloud Academy also offers a special course on Vault, developed in partnership with HashiCorp, if you want to implement the secret handling described in this post.

Cloud Academy’s library offers something for dedicated InfoSec engineers too. The Azure and AWS Learning Paths teach you how to take advantage of built-in security features and services that enable strong security practices that protect your cloud applications.

DevOps and security are continuous. There is no end state, and this approach only works in a culture of shared responsibility. Aaron McKeown, Head of Security Engineering at Xero, along with Stuart Scott, AWS Lead at Cloud Academy, discuss this exact topic in The Shared Responsibility Model: Best Practices for Building a Culture of Security. Stuart and Aaron offer advice on developing and implementing a security program that encourages developers to innovate in security-enabled teams.

Changing the culture goes hand in hand with shifting security left. Mark Andersen, Director of DevOps at Capital One, and Jessica Clark, Scrum Master at Capital One Auto Finance, discuss the reality of doing so in a recent webinar.

Getting certified

If you are considering the AWS Certified Security Specialty exam, then you’ll want to watch this webinar. Andy Larkin, Head of Content at Cloud Academy, and Stuart Scott, AWS Lead, discuss how to pass the exam and leverage AWS security services in the Cloud Academy Talk: AWS Security Specialist Certification and Beyond webinar.

 


Written by

Adam Hawkins

Passionate traveler (currently in Bangalore, India), trance addict, DevOps and continuous deployment advocate. I lead the SRE team at Saltside, where we manage ~400 containers in production. I also manage Slashdeploy.
