In this post, I explore what DevOps is and why acquiring a basic understanding of its tenets is critical for advancing in today's IT environment.
Continuous customer satisfaction is the goal of implementing these tenets. Understanding DevOps is easier than you might think.
There is a lot of talk about “DevOps” in the technical community, and for all the talk we have yet to see a single definition agreed upon. One potential reason for the lack of a clear definition could be that no single solution will fit every company. If we look at the different proposed definitions, and the tools being branded as “DevOps tools,” we can start to see that DevOps is all about efficiently providing the customer with the best possible product.
There is no perfect model for the software development life cycle (SDLC). There are, however, a lot of different options for each phase of the SDLC that have been utilized successfully throughout the years.
Every so often a shift in the way we think about phases of the SDLC comes about. An individual or group analyzes years of experiences and distils them into a proposed solution, concept, or philosophy. Concepts like agile development, continuous integration (CI), and continuous delivery (CD) have helped companies move their code into production faster, more reliably, and with less downtime. These solutions support new thinking, offering valuable frameworks used at every development level, from beginners through expert gurus.
DevOps attempts to be one such philosophy. In fact, DevOps builds on these well-established concepts.
Before going further, you should understand how we're defining DevOps so that we share a common language and vocabulary. DevOps is a philosophy of the efficient development, deployment, and operation of the highest quality software possible.
This makes DevOps a holistic approach to continuous customer satisfaction. Continuous customer satisfaction (CCS) means keeping the largest possible share of your user base happy on an ongoing basis. This typically manifests as the fast delivery of newly requested features with the least amount of downtime. There is a trend in DevOps toward a "continuous everything" tone, and continuous customer satisfaction may seem like another generic addition to the "continuous" family, but it's actually a pretty powerful concept.
Like many aspects of cloud computing, there are certifications for DevOps. Andrew Templeton, who passed all five AWS exams at one time, has written a post about increasing your odds of passing the difficult AWS DevOps Pro Exam.
Continuous customer satisfaction represents a customer-centric approach to software. Customers who quickly receive the features they want, on a stable and secure platform, are generally satisfied with the overall experience. These "happy" clients are much more likely to become repeat customers, and some may go as far as recommending you to other potential customers.
So if your goal is continuous customer satisfaction, then the DevOps philosophy will help you achieve it. Since DevOps is a philosophy of the efficient development, deployment, and operation of the highest quality software possible, it has the intentional result of supporting continuous customer satisfaction.
When DevOps is properly adopted, it supports higher quality, faster lead time — that is, the time it takes a customer’s request to make it into production — greater stability, and increased security.
Because DevOps is a philosophy and not a solution, there is no concrete path to follow. This flexibility allows organizations to adopt the philosophy in a way that best supports them. The community around DevOps has proposed a few tenets that further define the philosophy. Here are some of the key ones, in no particular order.
Quantify everything first. You’ll thank me later.
There are metrics to be found at all stages of the DevOps pipeline. Review your existing processes to learn which of these metrics will be useful to you. To know whether your DevOps practices are having a positive impact, you need a good starting point to measure against.
From a business perspective, you should know how often you’re deploying to production. You should know how many of the deployments have resulted in outages or bugs with a measurable impact on the user base.
You should know the average time it takes your team to recover from outages. You should understand, at a glance, what your uptime is and whether you're meeting any SLAs you may be bound to. There are plenty of additional business-level metrics worth tracking, though each company will typically clarify those for itself and its teams.
The technical side of DevOps will value different metrics. Knowing how long your CI process takes is important. The average response time of your REST services or the number of concurrent users at any given time represents useful data that may change the way developers solve for specific problems.
Knowing how code is performing on the servers allows your engineers to quantify the impact code changes have on performance. This dovetails into understanding how your production servers are performing, and if you are over or under provisioned. Your operations team should have all the metrics they need to ensure that they are running the most elastic and secure infrastructure possible.
This tenet of "quantify everything" can feel a bit nebulous, because the volume of data is massive and growing at all different levels. Knowing what to track is crucial to any successful DevOps plan. In fact, if you're new to DevOps, here are a few key performance indicators you should track, to get you started:

- Deployment frequency: how often you ship to production
- Change failure rate: the percentage of deployments that result in outages or bugs with a measurable impact on users
- Mean time to recovery (MTTR): how long it takes your team to restore service after an outage
- Lead time: how long a customer's request takes to make it into production
Start by capturing as much info about your current process as possible. Once you feel you have a good handle on your current metrics, you’ll have something to measure your DevOps efforts against.
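As a rough illustration, the baseline KPIs above can be computed from deployment history. The record format and field names here are hypothetical; your CI/CD tooling will shape what the real data looks like:

```python
from datetime import datetime

# Hypothetical deployment records: (timestamp, caused_outage, minutes_to_recover)
deployments = [
    (datetime(2024, 1, 2), False, 0),
    (datetime(2024, 1, 9), True, 45),
    (datetime(2024, 1, 16), False, 0),
    (datetime(2024, 1, 23), True, 30),
    (datetime(2024, 1, 30), False, 0),
]

def kpis(records):
    """Compute baseline DevOps KPIs from a chronological deployment log."""
    total = len(records)
    failures = [r for r in records if r[1]]
    # Span of the observation window in days (at least 1 to avoid division by zero)
    days = (records[-1][0] - records[0][0]).days or 1
    return {
        "deploys_per_week": round(total / (days / 7), 2),
        "change_failure_rate": round(len(failures) / total, 2),
        "mttr_minutes": round(sum(r[2] for r in failures) / len(failures), 1)
        if failures else 0.0,
    }

print(kpis(deployments))
# → {'deploys_per_week': 1.25, 'change_failure_rate': 0.4, 'mttr_minutes': 37.5}
```

Even a simple script like this gives you a baseline to measure your DevOps efforts against as they mature.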
The journey of a thousand miles begins with the first step. Once you have a serious understanding of your metrics, the DevOps philosophy becomes increasingly clear.

Imagine there is a major art opening at a downtown museum and attendance is 5% of projections. Something is wrong. Picture a Google satellite image on your screen: what you see looks like a gray slab. You zoom out and find a building. Zoom out further, and a city block becomes apparent. Further still, and you realize the bridge over a river is blocked by an accident and traffic is stalled.

You've identified the problem and understand which parties must cooperate for an effective solution. The police should work with the fire department to clear the accident and care for any injured parties. The city road teams must inspect the bridge in case structural damage occurred. The museum should extend the opening's hours or reschedule it. The DevOps philosophy supports this type of analysis, discovery, and collaborative action.
Cultural Change, in this case, means breaking down silos.
The second component we'll discuss is cultural change. This means, at a minimum, that inter-departmental team collaboration is required. Depending on your experiences in tech, this may sound obvious or impossible. There has long been a culture of silos in which each team functions as a separate entity and acts with little to no collaboration with other teams.
Breaking down silos requires uniting every team that has a role in the SDLC early, and often. The collaboration should promote shortened and enhanced feedback loops. Once your teams are working together towards a common goal, they need to ensure that information moves to the individuals that need it, as quickly as possible.
The act of breaking down silos is more than just putting disparate teams into the same room. It requires cross-disciplinary training and evaluating the best uses of available skills. This might mean that instead of having your QA team test new features in isolation, they also advise engineers on ways to write better, more comprehensive tests.
The same may apply to operations: their roles might shift from the classic "this is our sandbox, and we choose who gets to play here" to more of a collaborative and/or oversight role. With the cloud offering ever more in the way of SaaS and PaaS, the traditional boundaries between developers and operations are quickly fading. All of this applies to security teams as well.
It’s not just for breakfast anymore.
Automation provides a level of consistency and efficiency that can't be achieved manually. Everything that can reasonably be automated should be. This isn't new; the CI and CD communities have been preaching it for a long time. However, operations teams have only recently started using the levels of automation that development teams traditionally have.
In addition to automating the CI and CD pipelines, operations must automate server setup and configuration. True automation requires that all servers performing the same function (web servers, database servers, and so on) run the same version of all required software, and that these servers share the exact same configuration, applied in reproducible ways.
Automation begins to simplify tasks like upgrading a software dependency due to a bug or security vulnerability. Consider the Heartbleed bug that impacted so many servers a couple of years back. If you were an operations person in charge of patching hundreds or thousands of servers and you didn't have some level of automation, you'd be stressed out. This may have been the type of event that would cause you to consider looking for a new job. However, if you could just point Ansible toward your servers and run a playbook to upgrade the version of OpenSSL, this kind of all-day nightmare would take five minutes to fix.
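As a sketch of what that playbook might look like (the `webservers` inventory group and service name are assumptions, and package names vary by distribution):

```yaml
# Hypothetical Ansible playbook: patch OpenSSL across a fleet of servers.
# "webservers" is an assumed inventory group; adjust names for your environment.
- hosts: webservers
  become: true
  tasks:
    - name: Upgrade OpenSSL to the latest available version
      ansible.builtin.package:
        name: openssl
        state: latest

    - name: Restart the web server so it picks up the patched library
      ansible.builtin.service:
        name: nginx  # assumed service name
        state: restarted
```

Because the same playbook runs against ten servers or ten thousand, the effort to patch the fleet stays constant no matter how large it grows.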
Quantify, collaborate, and automate: these components are not an exhaustive list, but rather a good starting point for "Going DevOps." Remember that DevOps is the philosophy of the efficient development, deployment, and operation of the highest quality software possible, and when done correctly, it can greatly improve the overall quality, stability, and security of your code.
This article provides an overview of DevOps thinking and action. Cloud Academy and I have just produced an Introduction to DevOps course that presents 10 lessons over 1 hour of video content. It is a non-technical review of the philosophy and the agreed-upon tenets of DevOps. If you enjoyed this post, I think you'll enjoy the course.
Vineet Badola, a regular Cloud Academy blog contributor, wrote an article last year titled Cloud DevOps: improve your application development life cycle, and it offers a different take on the above DevOps themes. There are a million ways to learn, and I suggest continuous training with Cloud Academy. They offer a free 7-day trial subscription where you can explore DevOps video courses, learning paths, and hands-on labs. They also present quiz questions for deeper learning and exam study.
Cloud Academy's learning paths provide a place to start and a clear direction to specific goals. We have a DevOps Engineer Professional Certification for AWS learning path that features 7 video courses with over 12 hours of content. It presents 2 hands-on labs and a quiz for practical application and testing your knowledge. Try it out and let us know what you think.