My name is Ben Lambert. I'm an instructor at Cloud Academy. Before this, I was a software engineer for many years. I've had the good fortune of working on a lot of really cool web and mobile projects at a lot of different web and mobile development companies. That's given me the freedom to pick the technologies I wanted to work on, so I've gotten to learn a lot of different technologies. It's been a lot of fun. Because I've worked at mostly smaller companies, I've also had the responsibility of handling a lot of the operations side of things, which has meant setting up a lot of continuous integration and continuous delivery servers, as well as running the production environment at some of these companies. This has allowed me to learn some operations while maintaining my core skill set as a developer.
We're here to talk about DevOps. We'll start with a definition, which will give us a shared language so that we're all talking about the same thing. Once we're done with that, we'll talk about who is practicing DevOps. After that, we'll get into a more detailed discussion about how things are changing for those of us that develop and operate software.
What problem is DevOps trying to solve? It's common in large companies to structure teams according to their role. You may have a development team, a QA team, a database team, an operations team, and a security team, among others. It's very common to have each of these teams working largely in isolation. Developers write code, and maybe some documentation on installing it, and send it off to the QA team. QA does some testing and goes back and forth with the developers to fix bugs. At some point, the code gets sent over the proverbial wall and handed off to operations to run in production.
Ignore the fact that most companies I've seen allocate a week or so for QA but no time for developers to fix bugs. This silo pattern is extremely inefficient. As developers, the only connection we have to our code running in production is a queue of tickets for production bug fixes; bug fixes are how we know what's going on with our code in production. Meanwhile, QA is spending a lot of time doing testing that should be automated. They're not using their time well, and management isn't utilizing QA's time efficiently. Operations has to support code written by developers who may not even know what the production environment looks like. That's a tough thing for operations folks.
The good news is that there's a better way, and that better way is DevOps. If you search for a definition of DevOps, you're going to see a lot of opinions. You'll see a lot of different definitions, but what you won't find is any formal, generally agreed-upon definition. The reason is that this offers companies the flexibility to leverage the tenets of DevOps without having to adhere to some strict definition.
Whenever you have strict definitions or manifestos, it creates this rigid environment where if you need to deviate from that definition, then you have people getting upset because they follow it to the letter, and that can create some friction among team members.
Now, I’m honestly not sure if DevOps was created with this flexibility by design or if it was just lucky. However, either way, it’s better for all of us. Since there’s no agreed-upon definition, I am going to share mine.
I call DevOps a philosophy of the efficient development, deployment, and operation of the highest quality software possible. The reason I call it a philosophy is that it’s a system of thinking with a primary concern on developing, deploying, and operating high quality software.
If we can consider development, deployment, and operations as a pipeline for your code to flow through, then DevOps is about looking at that pipeline holistically and with goals of making it more efficient and making it produce higher quality products. The logical question is then, how does DevOps help increase the efficiency of the pipeline and increase your software’s quality?
It encompasses both of these through some generally agreed-upon tenets.
By generally agreed-upon, what I mean is that there's some level of consensus in the technical community that these tenets are a good thing.
These tenets are often abbreviated as CAMS, which stands for culture, automation, measurement, and sharing. We're going to go through these one at a time, starting with culture.
Why is culture important to DevOps?
DevOps as a name is the combination of the abbreviations “dev” for development and “ops” for operations. Even the name suggests that DevOps is about bringing teams together.
DevOps culture is about breaking down the silos that prohibit teams from collaborating. As the level of collaboration increases, so does the potential for improved quality and efficiency.
What does that mean to break down the silos?
It means that not only do teams need to collaborate more but the company values may need to change.
That change tends to happen from the top down. Some of the important values are things like collaboration, quality, efficiency, security, empathy, transparency, and the list goes on with similar qualities. If your company doesn’t value these things, then it’s likely no amount of technology is going to help.
Take any of those as an example, say quality. If quality isn't a company value, then as an engineer, you likely won't get the time you need to create unit tests, integration tests, and automated acceptance tests. The same goes for the rest of the values we mentioned. If something isn't a company value, then it's not important to the company, and it tends to be ignored until it's time to place blame for something going wrong.
Automation (learn why automation is important to DevOps) removes all of the things that prevent us, as engineers, from delivering awesome new features. Our goal should be to automate everything that it makes sense to automate and nothing more. Once you start automating, it can be really easy to go overboard and try to automate absolutely everything. Automating things like user acceptance testing is usually more effort than it's worth. The place to start is by automating the continuous integration process. It should build your software and run the unit tests on each commit, and it should notify the team of either success or failure.
A failed build should result in holding off on any new commits that aren't intended to fix the build. The goal is to prioritize keeping your software in a working state over doing new work, but this is a tough thing to do. Next we have the automated continuous delivery process, which picks up where the CI process left off. For each successful build from the CI server, it should deploy that code to a staging environment, run the automated acceptance tests, and run any automated non-functional tests such as load testing and security audits. If all of that is successful, it should allow us to run any manual tests that we need to run.
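The CI stage described above can be sketched as a short script. This is a minimal illustration of the build-test-notify loop, not any particular CI server, and the build commands are placeholders:

```python
# Minimal sketch of the CI flow described above: run each step in order,
# stop at the first failure, and report the result. The actual commands
# ("make build", pytest) are placeholders for your own build and test steps.
import subprocess
import sys

def run_step(name, cmd):
    """Run one pipeline step; return True if it exited cleanly."""
    print(f"--- {name} ---")
    return subprocess.run(cmd).returncode == 0

def ci_pipeline(steps):
    """Run (name, command) steps in order; stop at the first failure."""
    for name, cmd in steps:
        if not run_step(name, cmd):
            return f"FAILED: {name}"  # this is where you'd notify the team
    return "SUCCESS"

if __name__ == "__main__":
    result = ci_pipeline([
        ("build", ["make", "build"]),
        ("unit tests", [sys.executable, "-m", "pytest"]),
    ])
    print(result)
    sys.exit(0 if result == "SUCCESS" else 1)
```

A real CI server adds triggers on each commit, history, and notifications, but the core loop is this simple: every step must pass, and a failure stops the line.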
This is different from the old-school way because people are only testing versions that have already passed all of these automated tests. It also allows us to catch a lot of the issues that shouldn't make it into production and that are easy to test for. Once the manual testing is complete, the build is considered releasable. From there, it's a business decision what to do with it. Maybe you deploy it, maybe you hold off and have a scheduled release. It really depends on the business factors.
Automation is an important part of getting the code into production quickly and efficiently but it’s also important for managing infrastructure. Being able to manage infrastructure in code is really one of my favorite parts of DevOps because you can codify how your infrastructure should be laid out. I think that’s incredibly valuable.
If you don't know what's really going on with your software and your infrastructure, you can't make informed decisions. Measurement is really important. Remember, we said in the opening that DevOps is about improving the efficiency of the development, deployment, and operations pipeline, so you need to know what's going on and how things are performing. In addition to knowing how your pipeline is performing, you also need to know how your infrastructure is behaving. Monitoring is very important.
Let's now talk about sharing, which is the S in CAMS. This one can seem like a vague concept. What it means is that you should be sharing the problems and solutions you're working on throughout your company (ideally, throughout the entire organization). When you run into challenges, talk to the different departments and the different people in your company, and also share how you solved those problems, so that everyone agrees on a shared solution and other teams don't have to reinvent the wheel. Consider sharing a facilitator for collaboration and transparency.
What is DevOps and why is it important?
The old-school ways just didn't scale well. They didn't produce the highest quality software possible, and they took too long to deliver too little. None of that supports experimentation very well, because it takes too long to do anything.
In contrast, DevOps scales better because you should be able to push a button and release to production, or any other environment, with little or no downtime. Because you're performing automated tests, starting with the most granular unit tests all the way up to acceptance tests, they serve as gates that prevent easily testable issues from making it all the way to production.
Companies practicing DevOps, even if they don't call it that, are deploying dozens, hundreds, or even thousands of times per day.
Because you're able to get code into production so quickly, it allows you to experiment. That can be A/B testing (How to A/B-Test Data-Driven Algorithms in the Cloud) or just creating a proof of concept, deploying it to a staging environment, and doing some exploratory testing. That's what's meant when you hear the term "fail fast."
If you can get a concept to the people that want to play with it and actually use it, they can tell you if they like it or if they don’t, and you can iterate very quickly. It’s not a problem if it’s a negative response, because you’ve spent so little time that you have very little invested in that. Failing fast is one of those things that DevOps will allow you to do.
Most of the companies that are able to deliver software quickly and scale well are practicing some form of DevOps. They may not call it DevOps, or anything at all, because some of these companies have grown organically into what we would now label as DevOps. One example is Etsy. I really like this example because I think most of us as engineers have worked on a platform like the one that Etsy started out with (for more, check out their engineering blog at https://codeascraft.com/).
In 2008, Etsy was structured in that siloed way that we talked about. Dev teams wrote code, DBAs handled database stuff, and ops teams deployed. They were deploying twice a week, which is pretty good even by today's standards. They were experiencing a lot of problems that I think are pretty common: they had all of their business logic in SQL stored procedures on one large database server. That was becoming a problem because they were generating a lot of site traffic. They recognized that things were rough and that they could do better, and they identified silos as a problem. The way they chose to solve it was with a designated operations engineer. What that meant was that each development team would have a designated ops person who would be part of all of their meetings and planning. That person helped the developers understand operational concerns, and served as an advocate for the project to the rest of the ops team. This gave developers some insight into production, and operations some insight into development.
Some of the things they started using to help improve efficiency were Chef for configuration management, which allowed them to provision servers and some of their infrastructure. They switched from Postgres to MySQL with master-master replication, and the replication was really important because it allowed them to scale their database horizontally. They started using an ORM to get business logic out of the database and into the code; there's a lot of value in having all of that logic under version control with the rest of your code. They started using feature flags to allow features that aren't complete to still be deployed without breaking anything.
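The feature-flag idea is worth a quick illustration. Here's a minimal sketch (the flag name and flag store are hypothetical, not Etsy's actual system) showing how an unfinished feature can ship to production while staying dark:

```python
# Minimal feature-flag sketch: unfinished code paths deploy alongside the
# old ones, and a flag decides which path runs. Flag names are hypothetical.
FLAGS = {"new_checkout": False}  # flipped to True when the feature is ready

def checkout(cart, flags=FLAGS):
    """Route to the new checkout flow only when its flag is on."""
    if flags.get("new_checkout"):
        return "new checkout flow"  # incomplete feature, deployed but dark
    return "old checkout flow"
```

Because the new path is off by default, deploying it can't break anyone, and turning the feature on (or back off) becomes a runtime decision rather than a redeploy.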
They have a self-service feature which allows developers to get a dev VM up and running quickly that mirrors the tech stack they're using. All of this resulted in a continuous integration and continuous delivery process that allowed them to deploy to production over 50 times a day, and they substantially increased their uptime. Five out of the last six months had 100% uptime, and one month, I think it was April 2016, had 99.95% uptime. That's according to pingdom.com. With the right people, processes, and tools, creating something like this is possible.
Etsy isn't alone in this: Disney, Netflix, Amazon, Adobe, Facebook, and IBM, among so many others, are all using these DevOps practices to efficiently deliver high-quality software, and their stories are all pretty similar. Check out in this lecture which companies practice DevOps.
What does this mean for us as engineers? How are things changing? For us as developers, sysadmins, and all the rest of the engineers, the cloud has changed everything. Back in the day, we were able to develop in local development environments that looked nothing like production, and most of them were cobbled together. Production was quite possibly a system that didn't need to scale horizontally, because the traffic demands for most of us just weren't that high. Nowadays, every software engineer, whether in development, operations, QA, or security, has so much more to know in order to stay relevant.
I had a coworker some time ago who was a junior developer, at least at the time, and he was asked to set up a production environment for a project he'd been working on. It was a .NET application hosted on AWS behind an Elastic Load Balancer. He set up the environment on an EC2 instance and everything looked good (learn here how to create your first Amazon EC2 instance in Windows). He cloned that server, added it to the load balancer, and started seeing some odd behavior.
He spent hours trying to figure out why things just weren't working. Locally, everything looked fine. Maybe he was too embarrassed to ask another developer, or maybe he was just really focused and lost track of time. The issue was that each server had its own session state and the load balancer wasn't using sticky sessions. He'd bounce between the two servers, and what he saw seemed like sporadic behavior. He'd never had to think about session state before and how it works in a load-balanced environment. He'd always done local development and handed the code off for somebody else to run. He learned really quickly when it was up to him to get it into production.
The reason I mentioned this is I think this example helps to show that there’s a lot of value in learning more than what’s considered your area of the software life cycle.
The following skills are just invaluable for anyone in the tech field and these are things that apply to developers, ops, QA, and security, and even management really.
First up is learn to code. I think this is crucial for all disciplines of software engineers. I’d like to recommend Python. I think it’s clean, it’s concise, and it’s multipurpose.
Python is widely used; you can build out a complex application, process large data sets, or just script out common operations or security tasks. If there's another language you're drawn to, that's fine. The important thing is to pick a language and learn the fundamentals. Once you know the fundamentals, moving to another language will be easier.
Every time you go to a new language, it will be easier and easier because you have all those features from the previous ones to inform and shape your understanding of the new one.
Knowing how to code will give you a new tool set for solving problems, as well as the ability to take existing open source tools and modify them to work for you.
As an example, consider some Python I found online: a snippet that prints off a list of S3 buckets from AWS. Even if you don't know how to code, you'd see that it reads a little bit like English. With a few lines of code, you're able to do a lot, and that's pretty powerful. Everything is being turned into code nowadays, so learning to code is invaluable.
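The original snippet isn't reproduced in this transcript, but a sketch along the same lines, using the boto3 AWS SDK, might look like this (the helper function and its name are my own additions for illustration):

```python
# Sketch of the kind of snippet described above: list S3 bucket names.
# Requires the boto3 library and configured AWS credentials to actually
# talk to AWS; the parsing helper works on any list_buckets-style dict.
def bucket_names(response):
    """Pull bucket names out of an S3 list_buckets-style response."""
    return [bucket["Name"] for bucket in response.get("Buckets", [])]

if __name__ == "__main__":
    import boto3  # third-party AWS SDK; assumed installed
    s3 = boto3.client("s3")
    for name in bucket_names(s3.list_buckets()):
        print(name)
```

Even if you've never written Python, the list comprehension reads close to the English sentence "the name of each bucket in the response."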
Learn configuration management tools
They're useful for ops because they let you provision servers and even entire environments.
For security, I like to use configuration management tools to script out phases of a penetration test. I use them to run scans against different machines, and having it all in an Ansible playbook allows me to keep it under source control. You could do the exact same thing in shell scripts; the reason I like a configuration management tool is that it allows me to orchestrate things from my computer, so I can have other servers running the scans against different systems. It just lets me boss around a bunch of computers from a central place. For QA, it's similar: configuration management tools are useful for orchestrating different tests from different servers. There's a lot of value there!
Let's take a look at an example of an Ansible playbook. This one uses Ansible to deploy an application from an apt repository. The playbook lets us deploy to any AWS EC2 server that has a tag named "environment" whose value equals whatever we pass on the command line. Knowing at least one of these tools is going to be really useful, and if you want to learn one, I recommend going to the official site for whichever one you want to dive into and starting with the documentation. Once you have a baseline for the tool itself, you'll have somewhere to start when diving in more deeply. Then you'll be able to get involved in the community and ask questions on the subreddits for any of these tools.
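The playbook itself isn't shown in this transcript, but a hedged sketch of one like it could look as follows. The package name and the `env` variable are assumptions, and the `tag_environment_*` host group assumes Ansible's EC2 dynamic inventory, which builds groups from instance tags:

```yaml
# Hypothetical playbook: install an app from an apt repo on every EC2 host
# whose "environment" tag matches the value passed on the command line:
#   ansible-playbook deploy.yml -e env=staging
- hosts: "tag_environment_{{ env }}"   # group from the EC2 dynamic inventory
  become: yes
  tasks:
    - name: Install the application package
      apt:
        name: myapp          # hypothetical package name
        state: latest
        update_cache: yes
```

The point is less the specific module and more the shape: hosts are selected by metadata rather than hard-coded names, and the desired state is declared rather than scripted step by step.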
Learn a continuous integration and continuous delivery tool. The concepts are roughly the same across these different tools; they're basically different forms of automation servers. Learning any of them will help you acclimate when your company inevitably starts using one, if they're not already. As a side note for the security engineers watching: these tend to be configured by people who don't have a strong focus on security, so when your company implements one, you'll want to perform a full audit. If you want to learn more about these tools, just like with the configuration management tools, go to the official website for the one you're interested in, start with the documentation, and move on from there.
Learn a containerization technology. You don't need to use it every day, and you don't need to be an expert, but containers are getting better all the time. If they continue to evolve, they will most likely become the method of choice for immutable deployments at most companies.
For ops, you'll be able to deploy immutable containers, allowing for potentially faster deployments. There's a bit more time spent upfront building these containers, but the actual deployment process tends to be a lot faster. The ability to roll back to a previous container is very valuable, as is the portability containers give you across different Linux distributions. Containers do currently come with some security concerns; it's not all fun and games. But learning about these potential security issues now, and keeping up with how containers evolve and solve these problems, will have you ready to secure them when they become more mainstream.
Learn a cloud platform from the perspective of a solutions architect. Even if you don't care to take the final step and get certified, I think this information is just invaluable. Knowing how to architect a cloud system as a developer will help you to know what tools are out there to help you build better systems. If you're working on an existing system, you'll be able to identify when a project isn't following current best practices for architecture.
For operations, this allows you to better understand where the interactions happen and where failures are likely to occur. I think it's exceptionally valuable to know what might fail because you know where the interactions are. For QA, it's very similar to ops: knowing where things might break allows you to help the developers pinpoint where things are going wrong. For security, this will help you better understand the attack surface and identify likely attack vectors.
If you want to learn more about this, check out the cloud vendors' documentation. If you're not into reading a bunch of documentation, we have a lot of really good courses that distill that documentation down to just the important parts. The upside is that once you've gone through the courses, you'll be ready to get certified, if you want to take that final step.
Deploy something and keep it running. Ideally, this would be something that you've written, but it doesn't have to be. Deploying something to a production-like environment is exceptionally valuable. When you deployed it, did you cause any downtime? You really need to understand why things work, so that when things work well maybe 90% of the time and there's some edge case that causes them to fail, you know where those edge cases are. Knowing why things work is as valuable as knowing why they broke.
To learn this, you can use our labs, which allow you to deploy things to production environments. They give you a chance to play around with AWS, see the console, and see how things work, and I think that's just invaluable. Once you've played around with some of the labs, you could also set up your own environment; it could be any cloud provider, and using the smallest compute instances they offer will either be free or very inexpensive. The educational value tends to be worth that little expense. Just make sure you shut down the instances when you're done; otherwise, you're going to keep getting charged for them.
Next, troubleshoot an outage. The ability to diagnose a problem is one of the best skills an engineer can have. By practicing fixing broken systems, you're going to gain a deeper understanding of not just that system but of how other systems are likely built.
Break something. The best way to build resilient systems is by practicing system failures. Netflix does this with tools like Chaos Monkey. You don't have to use Chaos Monkey; you can break things on your own or with other tools. Turn off components, see what breaks, see how resilient your system is. It will help you make sure your systems are redundant and self-healing.
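A chaos experiment doesn't need Netflix's tooling. Here's a minimal sketch of the idea, not Chaos Monkey itself; the service names and the systemctl call are assumptions about your environment:

```python
# Minimal chaos-testing sketch: stop one randomly chosen service and watch
# how the system copes. Service names and the systemctl call are hypothetical.
import random
import subprocess

SERVICES = ["myapp-web", "myapp-worker", "myapp-cache"]  # placeholder names

def pick_victim(services, rng=random):
    """Choose one service at random to disrupt."""
    return rng.choice(services)

if __name__ == "__main__":
    victim = pick_victim(SERVICES)
    print(f"Stopping {victim}; now check monitoring and user-facing behavior")
    subprocess.run(["sudo", "systemctl", "stop", victim], check=True)
```

Run something like this against a staging environment first; the payoff is finding the component whose loss your system doesn't survive before production finds it for you.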
Last on the list is to read tech blogs at least once a week. Most of us have full-time jobs, and finding the time to read a bunch of articles is challenging, but so much can change in the tech world in such a short time frame that this will help you keep up. There are a lot of wonderful places to find this information; I'm going to list a few that I like.
I think the Netflix tech blog at techblog.netflix.com is a great resource. Etsy has some great stuff at https://codeascraft.com. I think we at Cloud Academy have some awesome stuff too, covering a lot of different topics, from machine learning to cloud platforms like AWS, Google Cloud Platform, and Azure, to DevOps. AWS also has just wonderful stuff at https://aws.amazon.com/blogs. I think they're all valuable, and I want to hear from you on the community forums about what you're into.
I think the roles of developers are changing. It's becoming a little more obvious as we see things like the different serverless technologies, and I think these roles are changing all the more in smaller companies. With cloud platforms offering things like Google's App Engine and Amazon's Elastic Beanstalk, among others, we as developers are getting tasked with managing and running the code that we write in production. That doesn't mean that ops is going to go away, nor should it; I think ops is an invaluable role. But we need to understand how to keep our code running at scale.
The role of ops is changing too. More and more, I'm seeing operations being turned into site reliability engineering, which is a really cool role and, I think, a lot of fun. Being a site reliability engineer means that you're not only responsible for keeping large, complex systems up and running, you're also spending some time writing code.
The role of security is changing too. As companies start implementing continuous delivery pipelines, they're getting their code out there faster and faster; they're deploying code dozens of times per day, and security engineers are expected to keep up. That is not easy. As a security engineer in that continuous delivery process, you don't have the time to thoroughly, manually test every single build that makes it into production. You're going to have to help create the different security tests that are automated as part of the continuous integration and continuous delivery pipeline, which frees you up to do more in-depth manual penetration tests or security audits on some set schedule. The same applies to QA. QA has to help shape the tests that are going to be automated as part of the continuous delivery process. That will free up their time to focus their manual effort on the things that are more important to do manually than to automate.
Having this broader knowledge will allow you to shift more seamlessly between roles as roles start to morph and change. We're coming into a phase where developers need to know how to operate their code in production and operations needs to know how to code, so they can better understand the systems they're operating and create automation tools to better support them. Operations is having to write a lot of code at a lot of companies now as they shift into site reliability engineering. Knowing how to code is going to be invaluable.
The companies out there that are practicing DevOps are shifting teams around, cross-training them, and really just expecting more out of each member of the team. The results speak for themselves. Companies like Disney are getting great value out of this shift. I want to share a slide taken from a talk presented at a DevOps conference by Jason Cox, the Director of Systems Engineering at Disney. In his presentation, he talked about how the ops team transitioned into systems engineers because that was what was required of them to operate at Disney's scale.
Check out some of these numbers. I'm going to highlight the ones that I think stand out as particularly awesome, but feel free to pause and take a look. ABC is now able to scale from 300 to 700 servers in 15 to 30 minutes. That's just awesome. In the time it would take me to run out for some fast food and come back, they've basically doubled their server capacity! Their Parks and Resorts Online team was able to test their disaster recovery plan and deploy their entire tech stack in five minutes to over 200 servers in their disaster recovery data center. This is amazing. You can see that there's a lot of real-world value in practicing DevOps. Disney gained levels of efficiency that I just don't think would come from manual effort, because they have everything in code.
It's all becoming code. Almost everything in the modern tech stack is done in code. Obviously, the software that we write is code, and the automated tests that we write are code, but the ability to provision a server is also being done in code. You can use tools like Ansible, Chef, Puppet, and others in that space to specify how a server should be set up and provisioned, what settings it should have, what patch levels, and what software it should have installed. Then the tool goes and does this for you. That's just an amazing part of DevOps automation.
Infrastructure is being done in code too. You've probably heard the term infrastructure as code. What it means is that if it's all in code, you can create an entire infrastructure, provision all the servers, and get that environment live with the push of a single button. Having everything in code allows us as engineers to have it under version control, which means we can collaborate on things like infrastructure. I could script the install of, say, a Django application in an environment, then send off the scripts and have one of my colleagues handle the nginx portion. You could collaborate on these different pieces, then run it and see how it all performs. It's an exceptionally valuable thing.
Learning to code is important, and so is learning about the tools mentioned earlier. Everything is becoming code, and knowing how to code is going to be the new literacy. It's invaluable. Where do you go to learn more? We have some courses, including an introduction to DevOps which I created. The introduction to DevOps covers some of the stuff we talked about today.
Once you've completed that, move on to the introduction to continuous integration, which will teach you the fundamentals of continuous integration. Then you can move on to the introduction to continuous delivery. I think that one, once you're at that point, is incredibly valuable, because continuous delivery is a technique that every company is moving towards. Even if they're not there yet, it's the goal of a lot of software companies.
It’s incredibly valuable.
Before we wrap up, I want to make a couple of recommendations for required reading: just two books for now. Then we can talk on the community forums about what books you're reading and why you think they're valuable, and maybe get started with a recommended reading list for the community. First up is The Phoenix Project by Gene Kim, Kevin Behr, and George Spafford. This is a very entertaining read; I think it's just wonderful. It will help you better understand DevOps, but it's not some dry technical thing. It's a story about a company, and I don't want to spoil anything. Go out and read this book, it's wonderful!
I have just as much praise for the next book, Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation by Jez Humble and David Farley. It has some wonderful insights into the topic of continuous delivery, and it will get you thinking about what the continuous delivery process should and should not look like.
I think we've covered as much as we have time for. I want to read off a couple of questions from the community.
How is security integrated into DevOps?
Right now, security isn't really integrated at every company. Like I said, every company gets to do things differently; that's the beauty of DevOps being more of a philosophy. Some companies are integrating it with automated processes in the different phases of the pipeline. They might do static code analysis in the continuous integration process. They might do something a little more dynamic in the continuous delivery process, testing for the OWASP Top 10: SQL injection, cross-site scripting, misconfigurations, all of these things. You should try to automate as much of the security testing as you can in these different phases of the pipeline.
Is there any set of tools specifically for AWS that helps to deliver code? The answer is yes. If you're in the AWS console and you look under the developer tools, you'll see things like CodeDeploy. Even services like Elastic Beanstalk will help you deploy your code in a way that has minimal or no downtime; Elastic Beanstalk can use a blue-green deployment model that will help out.
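To make the blue-green idea concrete, here's a toy sketch of the mechanism, not Elastic Beanstalk's actual implementation: two identical environments exist, the new release goes to the idle one, and a deploy is just swapping which side receives traffic.

```python
# Toy model of blue-green deployment: release to the idle environment,
# then swap the traffic pointer. Not any real platform's API.
class BlueGreenRouter:
    def __init__(self):
        self.live, self.idle = "blue", "green"

    def deploy(self, version, environments):
        """Install the new version on the idle side, then point traffic at it."""
        environments[self.idle] = version
        self.live, self.idle = self.idle, self.live
        return self.live  # the side now serving traffic
```

Because the previous version keeps running on the now-idle side, rollback is just swapping the pointer back, which is a big part of why this model gives near-zero-downtime deploys.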
Where can I get real world experience with DevOps?
DevOps is a philosophy. It's a way of looking at the software development life cycle holistically, with a view towards efficiency and quality. Because it's a lens to see the whole process through, you can get the experience in development, sysadmin work, QA, or security. You could take any of those approaches, but once you're there, you really want to be considering the entire process.
If you can get into a company that doesn't have these silos, that's going to be your best option, I think. If you're at a company that's siloed, where each department is doing its own thing, dev writes code and hands it over to operations to run, then pick some tools, such as configuration management tools, and learn them on your own time as best you can. I know it's tough to find the time, but try to learn these principles, learn the cloud platforms from the perspective of a solutions architect, and learn to code. This will give you the knowledge you need to transition into working at a company that really practices DevOps the right way, if there is such a thing.
Which cloud platform do you think is most compatible with DevOps? I don't think there's any right answer here. I think you can use DevOps practices really well in all of them. If there's one that doesn't play well, I'm not familiar with it. I'll leave that to you: hit me up on the community forums and let me know if there's a cloud platform that just doesn't play well with DevOps.
Can you suggest which configuration management tool is better? Better is a very subjective thing, especially when we're talking about something like this. There are a lot of really good options no matter what platform you're on. If you're on Windows, you can look at Microsoft's Desired State Configuration tool, which is part of the PowerShell family. Whether you're on Windows or Linux, you have options: Ansible, Chef, Puppet, and SaltStack are the ones a lot of people know, and there's a reason for that. They're really good tools!
I like Ansible because I like the YAML playbooks. I think they offer a lot of power and are easy to write.
Behind the scenes, you can create a module in just about any language you want that will interact through the same YAML interface, or abstraction, I guess, is a better term.
I like Ansible because it runs on any Linux environment that has Python, and it doesn't require an agent to be installed on the client, so you can use it on just about every Linux platform. Chef is a great option too. If you already know Ruby, or you know a similar language like Python, and you feel comfortable programming with its DSL, it's a wonderful tool. I can't speak highly enough about it.