The Convergence of DevOps

IT has changed over the past 10 years with the adoption of cloud computing, continuous delivery, and significantly better telemetry tools. These technologies have spawned an entirely new container ecosystem, demonstrated the importance of strong security practices, and catalyzed a new world of big data. Small and midsize businesses (SMBs) and enterprises alike now need to employ data engineers, data scientists, and security specialists. These roles may be siloed right now, but history tells us there can be a more collaborative path.

DevOps broke down the barrier between development and operations to create the best methodology for building and shipping software. InfoSec is the latest member to join the DevOps value stream. Take a look around the internet and you’ll find plenty of posts on integrating security and compliance concerns into a continuous delivery pipeline; The DevOps Handbook even dedicates an entire chapter to the topic, and I’ve written about continuous security on this blog. Dev, InfoSec, and Ops have become the “DevSecOps” you see splashed around the internet, but now we need a new movement, rooted in DevOps philosophy, to bring in data workers.

Calling for DevSecDataOps

So the name may not be great (you may have even seen it coming), but I’ll make my case for integrating all data-related activities into the DevOps value stream. My case begins with the second DevOps principle: The Principle of Feedback.

The Principle of Feedback requires teams to use automated telemetry to identify new production issues and verify the correctness of any new release. Let’s put aside the first clause and focus on the arguably more important second clause. First, I must clarify a common shortcoming: many teams ship changes to production and consider that “done”. That’s not “Done”; it’s still “Work In Progress”. “Done” means delivering the expected business value in production.

Imagine your team ships feature X to production. The product manager expects to see Y engagement on feature X and possible changes in business KPIs A, B, and C. The Principle of Feedback requires telemetry for Y, A, B, and C such that the team can confirm or deny within a reasonable time that feature X produced the expected outcome. Businesses live and die by this type of telemetry, and data engineers and data teams are becoming increasingly responsible for providing it. Thus, data workers are part of the critical path from business idea to delivered business value in production. Moreover, their workflows and processes must move at the speed of continuous delivery. In other words: it’s time to bring the data side into the DevOps value stream.
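Here’s a minimal sketch of what that release verification could look like. The metric names, thresholds, and the shape of the data are all hypothetical; in practice the observed values would come from your telemetry store (Datadog, Prometheus, a warehouse query, and so on).

```python
# A hypothetical post-release check: did feature X produce the expected
# engagement (Y) and KPI movement (A, B, C)? Metric names and minimums
# are placeholders for illustration.
EXPECTATIONS = {
    "feature_x.engagement_y": 1000.0,  # minimum engagement events
    "kpi.a": 0.95,
    "kpi.b": 0.90,
    "kpi.c": 0.80,
}

def verify_release(observed: dict[str, float]) -> bool:
    """Return True only when every expected metric meets its minimum."""
    ok = True
    for name, minimum in EXPECTATIONS.items():
        value = observed.get(name)
        if value is None or value < minimum:
            print(f"FAIL {name}: observed {value}, expected >= {minimum}")
            ok = False
    return ok

if __name__ == "__main__":
    # In practice, populate this dict from your telemetry store.
    observed = {"feature_x.engagement_y": 1200.0, "kpi.a": 0.97,
                "kpi.b": 0.85, "kpi.c": 0.82}
    print("release verified" if verify_release(observed) else "investigate")
```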

InfoSec and Compliance for Data Pipelines

I think it’s clear that the business KPIs and user engagement telemetry collected by data teams are critical to the business. It’s also clear to me that the principle of continuous security connects to compliance in data pipelines. I predicted in my previous post that GDPR (the General Data Protection Regulation) would be a big deal in 2019. Data warehouses and data lakes are potential sources of GDPR and other regulatory infractions; consider a retention policy, driven by GDPR’s right to erasure, that all user data must be deleted within 90 days of a user terminating service.

One solution is to deploy time-to-live telemetry on different types of data, with alerts for violations. Another is to add automated tests for the scripts that scrub user data and run those tests as part of the automated deployment pipeline. Hopefully, there’s already a set of automated tests for whatever transformation and munging goes on. If not, then this is the place to start building a deployment pipeline for the data processing system. Plus, a deployment pipeline is required for data teams to move at the speed of DevOps.
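Here’s a minimal sketch of the second approach. The schema and the scrub_terminated_users function are hypothetical stand-ins, with an in-memory SQLite table playing the part of the warehouse; a real pipeline would run the equivalent test against a staging copy of the data store.

```python
# A minimal sketch of an automated test for a user-data scrub script.
# The schema and scrub_terminated_users() are hypothetical stand-ins.
import sqlite3
from datetime import datetime, timedelta

RETENTION_DAYS = 90

def scrub_terminated_users(db: sqlite3.Connection) -> None:
    """Delete users terminated more than RETENTION_DAYS ago."""
    cutoff = datetime.utcnow() - timedelta(days=RETENTION_DAYS)
    db.execute(
        "DELETE FROM users WHERE terminated_at IS NOT NULL AND terminated_at < ?",
        (cutoff.isoformat(),),
    )

def test_scrub_removes_expired_users():
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE users (id INTEGER, terminated_at TEXT)")
    old = (datetime.utcnow() - timedelta(days=120)).isoformat()
    recent = (datetime.utcnow() - timedelta(days=10)).isoformat()
    db.executemany("INSERT INTO users VALUES (?, ?)",
                   [(1, old), (2, recent), (3, None)])  # 3 is still active
    scrub_terminated_users(db)
    remaining = {row[0] for row in db.execute("SELECT id FROM users")}
    assert remaining == {2, 3}  # only the expired user is gone
```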

Converge Data Engineering and DevOps Practices

Data teams can benefit from DevOps practices. Consider what happens when a new data scientist joins the team. That person needs an environment to build and test their models, which requires test data, workstation setup, and even cloud infrastructure setup. This calls for automation backed by infrastructure-as-code, a key DevOps practice, along with management of the test data and any other artifacts required to bootstrap a new environment. The environment may be something simple like a dedicated EC2 instance or a more complex pipeline of data streams and serverless Lambda functions. Regardless, the setup can and should be automated.
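As a minimal sketch of the simple case, here’s what bootstrapping a dedicated EC2 workstation might look like with boto3. The AMI ID, instance type, and tags are placeholder assumptions, and a production setup would more likely declare this in Terraform or CloudFormation so the whole environment is reproducible from code.

```python
# A minimal sketch: bootstrap a dedicated EC2 workstation for a new
# data scientist. The AMI ID, instance type, and tags are placeholders.
import boto3

def bootstrap_workstation(owner: str) -> str:
    """Launch a tagged workstation instance and return its ID."""
    ec2 = boto3.client("ec2")
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder: your team's base image
        InstanceType="m5.xlarge",
        MinCount=1,
        MaxCount=1,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [
                {"Key": "Owner", "Value": owner},
                {"Key": "Purpose", "Value": "data-science-workstation"},
            ],
        }],
    )
    return response["Instances"][0]["InstanceId"]
```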

Consider an architecture with a transactional system and a separate data processing system. The data system ingests data from the transactional system to produce reports, KPIs, and other real-time telemetry. Our imaginary feature X spans both systems: functional implementation changes in the transactional system, and processing or analysis changes in the data processing system. Both systems need to be developed simultaneously, tested alongside each other, and ultimately promoted together to production. Note the relationship between these two systems: the data system should not be tested in production, especially if its outputs drive business decisions. Technical issues should not prevent the team from achieving this; it just requires some automated elbow grease and collaboration.

Given that both systems are encapsulated in infrastructure-as-code, it should be possible to deploy each into an isolated, dedicated test environment, enabling smoke testing across both systems. A simple test triggers feature X and asserts on the availability of telemetry Y, A, B, and C, as sketched below. This small test eliminates an entire class of costly regressions, like misconfigured integration points and flat-out broken implementations. If the automated tests pass, then both systems can be promoted to production. That’s continuous delivery in a nutshell, also known as The Principle of Flow. And The Principle of Flow leads us back to where we started: The Principle of Feedback.
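Here’s that smoke test as a sketch. The trigger_feature_x and telemetry_available helpers are hypothetical: the first would exercise feature X in the transactional system’s test environment, the second would poll the data system’s test environment for a named metric.

```python
# A cross-system smoke test sketch. trigger_feature_x() and
# telemetry_available() are hypothetical helpers to be wired to the
# dedicated test environments for each system.
import time

TELEMETRY = ["engagement_y", "kpi_a", "kpi_b", "kpi_c"]

def trigger_feature_x() -> None:
    raise NotImplementedError("exercise feature X in the test environment")

def telemetry_available(metric: str) -> bool:
    raise NotImplementedError("query the data system's test environment")

def test_feature_x_produces_telemetry(timeout_s: int = 300) -> None:
    """Fail if any expected metric never shows up within the timeout."""
    trigger_feature_x()
    missing = set(TELEMETRY)
    deadline = time.time() + timeout_s
    while missing and time.time() < deadline:
        missing = {m for m in missing if not telemetry_available(m)}
        time.sleep(5)  # poll until all telemetry arrives or time runs out
    assert not missing, f"telemetry never arrived: {sorted(missing)}"
```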

Earlier, we set aside the first part of The Principle of Feedback. Now we must return and apply it to data pipelines. Data pipelines are just like any other IT component: at runtime, they can be impacted by operational conditions such as memory limits, CPU thrashing, network latency, disk capacity, and bandwidth saturation. There are known telemetry playbooks for common data pipeline components such as Kafka or Hadoop, along with known abnormal operational conditions, application-specific failure modes, and tripwires. Consider a data pipeline using Kafka: if there are no messages across the ingestion stream, then something is wrong. That’s a simple tripwire, and it covers integration points. Data stores and processing systems also require standard USE (Utilization, Saturation, Errors) metrics and relevant alerts. One example is disk capacity inside a data warehouse system: a known limit, say 85% utilization, can be defined to trigger an alert condition and a resolution. Again, applying these telemetry practices is a core DevOps concept.
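Here’s a minimal sketch of both checks, assuming the kafka-python client; topic names, paths, and thresholds are illustrative, and alert() is a placeholder for your paging or chat hook.

```python
# A minimal sketch of two data pipeline checks: a Kafka no-messages
# tripwire and a disk-utilization USE metric. Assumes the kafka-python
# client; names and thresholds are illustrative.
import shutil
import time

from kafka import KafkaConsumer, TopicPartition

def alert(message: str) -> None:
    print(f"ALERT: {message}")  # wire to PagerDuty, Slack, etc.

def check_ingestion_stream(bootstrap: str, topic: str, window_s: int = 60) -> None:
    """Tripwire: no new messages on the ingestion topic means something is wrong."""
    consumer = KafkaConsumer(bootstrap_servers=bootstrap)
    partitions = [TopicPartition(topic, p)
                  for p in consumer.partitions_for_topic(topic) or ()]
    before = consumer.end_offsets(partitions)
    time.sleep(window_s)  # sample end offsets across the observation window
    after = consumer.end_offsets(partitions)
    if all(after[p] == before[p] for p in partitions):
        alert(f"No messages on '{topic}' in the last {window_s}s")

def check_warehouse_disk(path: str = "/var/lib/warehouse", limit: float = 0.85) -> None:
    """USE metric: alert when disk utilization crosses the known limit."""
    usage = shutil.disk_usage(path)
    utilization = usage.used / usage.total
    if utilization >= limit:
        alert(f"Disk at {utilization:.0%} utilization (limit {limit:.0%})")
```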

There’s one remaining DevOps principle: The Principle of Continuous Learning and Experimentation. The entire IT organization must experiment and learn together. Integrating all members of the value stream is only possible if it’s attempted, and teams have to start somewhere. That may mean asking questions like “How can we test and deploy our product and data systems together?” or “How can we get more real-time data from our data pipeline?” Both are valid questions with many possible solutions. The best outcomes come from collaboration and experimentation, and with proper leadership and continuous learning, your organization will find the answers that fit it.

How to Shift to DevSecDataOps

Cloud Academy has a deep training catalog for anyone interested in development, security, operations and/or data. You can lead the convergence in your team or organization with a strong knowledge mix across these areas.

The DevOps Culture learning path teaches you to see things from a DevOps perspective and to bridge gaps in your organization. There’s also a lab on Building a Data Pipeline with DC/OS that connects data pipelines to ops and infrastructure. The library of data-oriented courses can get you started with AWS, Google Cloud Platform, or Azure.

Cloud Academy provides in-depth courses that will take you from zero to hero on infrastructure-as-code and configuration management tools:

  • Terraform (developed in coordination with HashiCorp)
  • Puppet (developed in coordination with Puppet)
  • Chef (developed in coordination with Chef)
  • Ansible (developed in coordination with Ansible — and my personal favorite!)

There’s also an introduction to continuous delivery course.

With Cloud Academy, you can learn the skills to make these essential changes, converging development, operations, InfoSec, and data engineering. Here’s my last question to you: How will you lead DevSecDataOps in your company?

Written by

Adam Hawkins

Passionate traveler (currently in Bangalore, India), trance addict, and DevOps and continuous deployment advocate. I lead the SRE team at Saltside, where we manage ~400 containers in production. I also manage Slashdeploy.

