In this post, I explore what DevOps is and why acquiring a basic understanding of its tenets is critical for advancing in today’s IT environment. Continuous customer satisfaction is the goal of implementing these tenets, and understanding DevOps is easier than you might think.
There is a lot of talk about “DevOps” in the technical community, and for all the talk we have yet to see a single definition agreed upon. One potential reason for the lack of a clear definition could be that no single solution will fit every company. If we look at the different proposed definitions, and the tools being branded as “DevOps tools,” we can start to see that DevOps is all about efficiently providing the customer with the best possible product.
There is no perfect model for the software development life cycle (SDLC). There are, however, a lot of different options for each phase of the SDLC that have been utilized successfully throughout the years.
Every so often, a shift in the way we think about phases of the SDLC comes about. An individual or group analyzes years of experience and distills it into a proposed solution, concept, or philosophy. Concepts like agile development, continuous integration (CI), and continuous delivery (CD) have helped companies move their code into production faster, more reliably, and with less downtime. These solutions support new ways of thinking, offering valuable frameworks at every development level, from beginner through expert.
DevOps is a philosophy of the efficient development, deployment, and operation of the highest quality software possible.
DevOps attempts to be one such philosophy. In fact, DevOps builds on these well-established concepts.
Before going further, you should understand how we’re defining DevOps so that we share a common language and vocabulary.
DevOps is a philosophy of the efficient development, deployment, and operation of the highest quality software possible, which makes DevOps a holistic approach to continuous customer satisfaction. Continuous customer satisfaction (CCS) means keeping the largest possible share of your user base happy on an ongoing basis. This typically manifests as the fast delivery of newly requested features with the least amount of downtime. There is a trend in DevOps toward a “continuous everything” tone, and continuous customer satisfaction may seem like another generic addition to the “continuous” family, but it’s actually a pretty powerful concept.
Like many aspects of cloud computing, there are certifications for DevOps. Andrew Templeton, who passed all five AWS exams at one time, has written a post about increasing your odds of passing the difficult AWS DevOps Pro Exam.
Continuous customer satisfaction represents a customer-centric approach to software. Customers who quickly receive the features they want, on a stable and secure platform, are generally satisfied with the overall experience. These “happy” clients are much more likely to become repeat customers, and some may go as far as recommending you to other potential customers.
So if your goal is continuous customer satisfaction, then the DevOps philosophy will help you achieve it. Since DevOps is a philosophy of the efficient development, deployment, and operation of the highest quality software possible, it has the intentional result of supporting continuous customer satisfaction.
When DevOps is properly adopted, it supports higher quality, faster lead time — that is, the time it takes a customer’s request to make it into production — greater stability, and increased security.
Because DevOps is a philosophy and not a solution, there is no concrete path to follow. This flexibility allows organizations to adopt the philosophy in a way that best supports them. The community around DevOps has proposed a few tenets that further define the philosophy. Here are some of the key ones, in no particular order.
Quantify everything first. You’ll thank me later.
There are metrics to be found at all stages of the DevOps pipeline. It’s important to identify which of these metrics will be useful to you by reviewing your existing processes. To know whether your DevOps practices are having a positive impact, you need a good starting point to measure against.
From a business perspective, you should know how often you’re deploying to production. You should know how many of the deployments have resulted in outages or bugs with a measurable impact on the user base.
You should know the average time it takes your team to recover from outages. You should understand, at a glance, what your uptime is and whether you’re meeting any SLAs you may be bound to. There are plenty of additional business-level metrics worth tracking, though each company will typically define those for itself and its teams.
The technical side of DevOps will value different metrics. Knowing how long your CI process takes is important. The average response time of your REST services, or the number of concurrent users at any given time, is useful data that may change the way developers solve specific problems.
Knowing how the code is performing on the servers allows your engineers to quantify the impact code changes have on performance. This dovetails into understanding how your production servers are performing, and if you are over or under provisioned. Your operations team should have all the metrics they need to ensure that they are running the most elastic and secure infrastructure possible.
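As a rough sketch of the technical metrics described above, the snippet below summarizes service response times from a list of request durations. The sample data and the nearest-rank percentile choice are illustrative assumptions, not a prescription:

```python
import math

def summarize_latency(durations_ms):
    """Return (average, 95th-percentile) response time using nearest-rank."""
    ordered = sorted(durations_ms)
    avg = sum(ordered) / len(ordered)
    rank = max(1, math.ceil(0.95 * len(ordered)))  # nearest-rank percentile
    return avg, ordered[rank - 1]

# Hypothetical request durations in milliseconds, pulled from service logs.
requests = [112, 98, 130, 250, 101, 95, 480, 120, 110, 105]
avg_ms, p95_ms = summarize_latency(requests)
print(f"avg={avg_ms:.1f} ms, p95={p95_ms} ms")
```

A single slow outlier barely moves the average but shows up clearly in the 95th percentile, which is why tracking both tells you more than either alone.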
This tenet of “quantify everything” is a bit nebulous, because the volume of data is massive and growing at every level. Knowing what to track is crucial to any successful DevOps plan. If you’re new to DevOps, the metrics discussed above (deployment frequency, deployment failure rate, time to recover from outages, and uptime) are a few key performance indicators to get you started.
Start by capturing as much info about your current process as possible. Once you feel you have a good handle on your current metrics, you’ll have something to measure your DevOps efforts against.
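To illustrate what capturing a baseline might look like, the sketch below computes three of the business-level metrics mentioned above (deployment frequency, change failure rate, and mean time to recovery) from a hypothetical deployment log; the field names and sample data are invented for the example:

```python
from datetime import datetime

# Hypothetical deployment log; in practice this would come from your
# CI/CD system or deployment tracker.
deployments = [
    {"at": datetime(2024, 6, 3), "caused_outage": False},
    {"at": datetime(2024, 6, 5), "caused_outage": True},
    {"at": datetime(2024, 6, 10), "caused_outage": False},
    {"at": datetime(2024, 6, 12), "caused_outage": False},
]
# Minutes it took to recover from each outage (hypothetical).
outage_recovery_minutes = [42]

days_observed = (deployments[-1]["at"] - deployments[0]["at"]).days or 1
deploy_frequency = len(deployments) / days_observed  # deployments per day
change_failure_rate = sum(d["caused_outage"] for d in deployments) / len(deployments)
mttr = sum(outage_recovery_minutes) / len(outage_recovery_minutes)

print(f"{deploy_frequency:.2f} deploys/day, "
      f"{change_failure_rate:.0%} change failure rate, "
      f"MTTR {mttr:.0f} min")
```

Numbers like these, captured before you change anything, become the baseline you measure your DevOps efforts against.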
The journey of a thousand miles begins with the first step. Once you have a serious understanding of your metrics, the DevOps philosophy becomes increasingly clear. Imagine a major art opening at a downtown museum where attendance is 5% of projections. Something is wrong. Picture a Google satellite image on your screen. What you see looks like a gray slab. You zoom out and find a building. Zoom out further, and a city block becomes apparent. Further still, and you realize the bridge over a river is blocked by an accident and traffic is stalled. You’ve identified the problem and understand which parties must cooperate for an effective solution. The police should work with the fire department to clear the accident and care for any injured parties. The city road teams must inspect the bridge in case structural damage occurred. The art opening should extend its hours or reschedule. The DevOps philosophy supports this type of analysis, discovery, and collaborative action.
Cultural Change, in this case, means breaking down silos.
The second component we’ll discuss is cultural change. At a minimum, this means inter-departmental collaboration is required. Depending on your experiences in tech, this may sound either obvious or impossible. There has long been a culture of silos in which each team functions as a separate entity and acts with little to no collaboration with other teams.
Breaking down silos requires uniting every team that has a role in the SDLC, early and often. The collaboration should promote shortened and enhanced feedback loops. Once your teams are working together toward a common goal, they need to ensure that information moves to the individuals who need it as quickly as possible.
The act of breaking down silos is more than just putting disparate teams into the same room. It requires cross-disciplinary training and evaluating the best uses of available skills. This might mean that, instead of having your QA team work in isolation testing new features, the QA team also advises engineers on ways of writing better, more comprehensive tests.
The same may apply to operations; their role might shift from the classic “this is our sandbox, and we choose who gets to play here” to more of a collaborative and/or oversight role. With the cloud offering ever more in the way of SaaS and PaaS, the traditional boundaries between developers and operations are quickly fading. All of this applies to security teams as well.
It’s not just for breakfast anymore.
Automation provides a level of consistency and efficiency that can’t be achieved manually. Everything that can reasonably be automated should be. This isn’t new; CI and CD practitioners have been preaching it for a long time. However, operations teams have only recently started using the levels of automation that development teams traditionally have.
In addition to automating the CI and CD pipelines, operations must automate server setup and configuration. True automation requires that all servers performing the same function (web servers, database servers, and so on) run the same versions of all required software. These servers must share the exact same configuration, applied in a reproducible way.
Automation begins to simplify things like upgrading a software dependency due to a bug, or security vulnerability. Consider something like the Heartbleed bug that impacted so many servers a couple years back as an example. If you were an operations person in charge of patching hundreds or thousands of servers, and you didn’t have some level of automation, you’d be stressed out. This may have been the type of event that would cause you to consider looking for a new job. However, if you could just point Ansible toward your servers, and run a playbook to upgrade the version of OpenSSL, this kind of all-day nightmare would take you five minutes to fix.
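As a rough illustration of that Heartbleed scenario, a playbook along these lines could patch every managed server in one run. This is a sketch: the host group, package manager, and service name are assumptions, not a prescription.

```yaml
---
# Hypothetical playbook: upgrade OpenSSL across all managed hosts.
# Assumes Debian/Ubuntu hosts; swap the apt module for yum/dnf on RHEL.
- hosts: all
  become: true
  tasks:
    - name: Upgrade OpenSSL to the latest packaged version
      apt:
        name: openssl
        state: latest
        update_cache: true

    - name: Restart a service that links against OpenSSL (nginx as an example)
      service:
        name: nginx
        state: restarted
```

Run against your inventory with something like `ansible-playbook -i inventory patch_openssl.yml` (file names here are hypothetical), and the same change lands identically on every host.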
Quantify, collaborate, and automate: these components are not an exhaustive list, but rather a good starting point for “going DevOps.” Remember that DevOps is the philosophy of the efficient development, deployment, and operation of the highest quality software possible, and when done correctly, it can greatly improve the overall quality, stability, and security of your code.
This article provides an overview of DevOps thinking and action. Cloud Academy and I have just produced an Introduction to DevOps course that presents 10 lessons across 1 hour of video content. It is a non-technical review of the philosophy and the agreed-upon tenets of DevOps. If you enjoyed this post, I think you’ll engage with my course.
Vineet Badola, a regular Cloud Academy blog contributor, wrote an article last year titled Cloud DevOps: improve your application development life cycle, and it offers a different take on the DevOps themes above. There are a million ways to learn, and I suggest continuous training with Cloud Academy. They offer a free 7-day trial subscription where you can explore DevOps video courses, learning paths, and hands-on labs, along with quiz questions for deeper learning and exam study.
Cloud Academy’s learning paths provide a place to start and a clear direction toward specific goals. We have a DevOps Engineer Professional Certification for AWS learning path that features 7 video courses with over 12 hours of content, plus 2 hands-on labs and a quiz for practical application and testing your knowledge. Try it out and let us know what you think.