The DevOps Institute is a collaborative effort between recognized and experienced leaders in the DevOps, InfoSec, and ITSM space, and acts as a learning community for DevOps practices. This DevOps Foundations course has been developed in partnership with the DevOps Institute to provide you with a common understanding of DevOps goals, business value, vocabulary, concepts, and practices. By completing this course you will gain an understanding of the core DevOps concepts, the essential vocabulary, and the knowledge and principles that underpin DevOps practice.
This course is made up of 8 lectures and an assessment exam at the end. Upon completion of this course and the exam, students will be prepped and ready to sit the industry-recognized DevOps Institute Foundation certification exam.
- Recognize and explain the core DevOps concepts.
- Understand the principles and practices of infrastructure automation and infrastructure as code.
- Recognize and explain the core roles and responsibilities of a DevOps practice.
- Be prepared to sit the DevOps Institute Foundation certification exam after completing the course and assessment exam.
- Individuals and teams looking to gain an understanding and shared knowledge of core DevOps principles.
- A basic understanding of IT roles and responsibilities. We recommend completing the Considering a Career in Cloud Computing? learning path prior to taking this course.
- [Narrator] Hi and welcome back. In lecture six, we explore automation and architecting DevOps toolchains. First, we'll introduce continuous integration, continuous deployment, and infrastructure as code, then we'll give an overview of Cloud services, containers, and microservices, before delving into DevOps toolchains. So let's go through some of the terminology that tends to be important to automation and toolchains. First is the artifact, which is an element produced during software development. Application Programming Interface, or API, a set of protocols used to create applications for a specific operating system, or as an interface between modules or applications. Microservices, which is a software architecture composed of smaller modules that interact through APIs and can be updated without affecting the entire system. This is also known as loose coupling. Operating system virtualization is a method for splitting a server into multiple partitions called containers or virtual environments, in order to prevent applications from interfering with each other. Containers, a way of packaging software into a lightweight, standalone executable package that includes everything needed to run it. So the code, the runtime, the system tools, the system libraries, and all the settings. For development, shipment, and deployment, this makes things very easy. And open source, which is essentially software that is distributed with its source code so that end-user organizations and vendors can modify it for their own purposes. Now let's not forget machine learning, which is data analysis that uses algorithms to learn from data patterns. Now, DevOps is not just about automation, but automation is a common enabling practice found in organizations that are adopting a DevOps culture. DevOps extends and builds upon the practices of infrastructure as code, which was pioneered by Dr. Mark Burgess. 
Infrastructure as code enables the reconstruction of a business system from just a source code repository and an application data backup, even onto a bare-metal resource. The on-demand creation of environments and keeping those environments in sync is critical to running working systems. It's not just production environments that benefit from infrastructure as code; automation touches all aspects of operations. Every environment is someone's production environment, as it's ultimately where they do their work. So we need to have automation across all parts of the business. Now this chart from XebiaLabs is a great start to introducing the various platforms and services available in the DevOps space. It's a great visual representation of how many tools have been deployed to support continuous delivery in DevOps. There are specific use cases where services will best meet specific needs, and we will examine some of these services in more detail in subsequent DevOps playbooks. The important thing is, while we have a lot of choices, the main benefits come from people and teams collaborating around shared tools. So it is better to have a defined toolchain that everyone uses than a wider array of tools that only some people use. Now, this is often a challenge, but one which needs to be driven as part of the DevOps belief in collaboration. So it's very much a cultural shift. So does automation improve DevOps workflow, and what is the correlation between automation and high performance? From the 2017 State of DevOps Report, high performers had 46 times more frequent code deployments. And not surprisingly, that multiple has been dropping year on year as more organizations have introduced automation into their deployment practices. High performers also had 440 times faster lead times from commit to deploy, and 96 times faster mean time to recover from downtime. Now, this number is increasing, which shows a trend in the right direction. 
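As an aside, the report figures above are industry survey statistics, but a team can track its own versions of the same metrics. Here is a minimal sketch that computes lead time (commit to deploy) and mean time to recover from sample data; the event records, dates, and helper name are invented for illustration.

```python
# Hypothetical sketch: compute lead time and MTTR from (start, end) events.
from datetime import datetime

def mean_hours(pairs):
    """Average gap, in hours, between (start, end) datetime pairs."""
    gaps = [(end - start).total_seconds() / 3600 for start, end in pairs]
    return sum(gaps) / len(gaps)

# Invented sample data: two changes and two incidents.
commits_to_deploys = [
    (datetime(2017, 5, 1, 9, 0), datetime(2017, 5, 1, 11, 0)),    # 2h lead time
    (datetime(2017, 5, 2, 14, 0), datetime(2017, 5, 2, 18, 0)),   # 4h lead time
]
outages_to_recoveries = [
    (datetime(2017, 5, 3, 10, 0), datetime(2017, 5, 3, 10, 30)),  # 30 minutes
    (datetime(2017, 5, 4, 22, 0), datetime(2017, 5, 4, 23, 30)),  # 90 minutes
]

lead_time_hours = mean_hours(commits_to_deploys)    # 3.0
mttr_hours = mean_hours(outages_to_recoveries)      # 1.0
```

Tracking these numbers over time, rather than comparing against the report's headline multiples, is what shows whether added automation is actually helping.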
The change failure rate is also five times lower. So changes are a fifth as likely to fail, compared with the one-in-three ratio seen in the previous year. So what are the benefits of automation? Well, extensive use of automation is a hallmark of DevOps. Automation leads to faster lead times, more frequent releases, and fewer errors. Less turbulent releases, that are faster and more predictable, are only possible with automation. The same can be said for reducing time to recover when errors do occur. So technical staff should devote quality time to designing and deploying automation capabilities. Automation also frees people up to communicate and collaborate, and to work on solving business challenges, improving processes, and removing constraints, so they have the opportunity to identify, analyze, and mitigate risks. Ideally, we want to let systems manage systems, and let people manage the business. Now it's important to keep front of mind that the tools alone will not make us successful. It's the way we use them and the culture we build around them. Let's listen to John Okoro talk about the DevOps toolchain.
- [John] And bring things from an agile process of development, all the way to production. I will touch on some traditional tools that are also here, 'cause traditional teams can use the DevOps toolchain, but primarily I'm gonna focus on agile methods and the tools associated with them. So planning and project management will start here, and thanks to sources at InfoQ and UpGuard's e-book on the toolchain, as well as some of my own personal experience, we'll bring those in here. For project management, we look at Jira and Asana and Pivotal, Agile ALM tools. You're gonna see a lot of these, Trello and Kanban. Also for traditional teams, Microsoft Project may be a planning tool as well. Again, I am focusing on Agile teams, but you can use traditional tools as well. Now, let's take a look at what's next as we go from project management and planning in the DevOps toolchain to requirements. So requirements tools can be things as simple as Word, and spreadsheets, and wikis. And we can also look at other things along those lines. Now it's best to have something you can automate. Word documents may be tough, because you can't automate those so easily. You might look at user stories, which might be in your Agile ALM tool, which would be a way the Agile team would capture their requirements. Now, in issue tracking, we look at tools here. Could be Jira, which actually is an Agile ALM tool as well, it just depends on how you use it. But for tracking issues, you could also look at Zendesk, Visual Studio Online (VS Online), and other tools in that space. So while you track issues, make sure you're understanding what's happening in your overall environment. Now from a source control and versioning perspective, we're gonna look at quite a few familiar names. Names like Git and SVN, and Microsoft TFS, and CVS, all tools that could be potential version control. We'll also look, a little bit later, at the integration you have with CI tools, which is Continuous Integration. 
But for the moment, that's our source control and versioning. Now let's next take a look at the development environment. So the development environment itself has quite a few tools at play here, things like Vagrant and the Cloud9 IDE, and Codenvy. Docker also could actually be viewed as helping in the development space, by letting you containerize and virtualize some of your deployment, and even things in the development environment that you're deploying there as well. We'll look more at Docker a little bit later. But those are some different development environment tools for developing and automating different pieces there. Next, let's go over and take a look at Continuous Integration. Now Continuous Integration, a lot of people look at this as kind of a core area of DevOps. Of course the whole toolchain is important, but Continuous Integration is going to include things that allow us to do green builds, and make sure everything is building successfully. Companies like Microsoft and many Silicon Valley companies have done this for quite a long time. So we're looking at tools here like Drone.io, Jenkins and TeamCity, and Travis, and others. And one of the key points here is that these are going to actually integrate back into your SVN and your version control software; there should be a tight integration here. So this is going to be a place where we're actually going to see integration between our CI tooling and our source control so that we can do these continuous builds. So we're doing that, and then we're also looking next at configuration management. So we need to make sure that we have a consistent view of the configuration. We look at the manifest to be sure that everything that is configured is in line with what we expect in our environment. Tools here might be Puppet or Chef; these are agent-based configuration management tools. 
So they go out and they have a master and agents. Or Ansible and Salt, which are agentless. Also PowerShell DSC would be another one that might play in the configuration management space. Now, let's go on to monitoring. In the monitoring space, what we see is the traditional shell scripts, which you're going to see a lot. Of course we're looking at the configuration state. We're running and we're scripting some things to look and see that, and we might integrate that in. There are some other monitoring pieces and tools that can be used in the DevOps toolchain, things like Graphite, and Logstash, and Kibana to visualize some of that as well in the monitoring space, making sure that everything is really where we expect it to be, that our configuration is actually not deviating from the baseline, and we want to be able to maintain a baseline and standardize on that as well. Tools like UpGuard help us with that. So discovery is next, looking at our CMDB, which is our Configuration Management Database. Our CMDBs, or Configuration Management Databases, are going to help us to make sure that we actually have complete configuration state and we understand it, it's documented, and it's easy for us to understand if there's anything that's differing from the configuration that we expect. And so we look at our CMDBs in that space and are able to understand from that perspective. Now, we go on and we're going to take a look at deployment. Deployment's another key area of DevOps; a lot of people feel that the ability to deploy quickly and accurately without errors is really especially key in DevOps. So in this space of deployment, and again this is where you realize some of the benefits of Agile, fast development with Agile, but fast deployment with DevOps. So we're looking at things like Octopus Deploy, ThoughtWorks Go, Packer, and Docker, which of course can give us the containerized, virtualized deployment as well. So we have different tools here. 
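The baseline check John describes for configuration management and monitoring can be reduced to a simple idea: compare what a node reports against the manifest and flag any deviation. The following is a minimal sketch of that drift check; the function name, settings, and node data are invented for the example and don't come from any real tool.

```python
# Hypothetical configuration-drift check: report every setting on a node
# that deviates from the baseline manifest.

def find_drift(baseline: dict, observed: dict) -> dict:
    """Return {setting: (expected, actual)} for each deviation."""
    drift = {}
    for key, expected in baseline.items():
        actual = observed.get(key)
        if actual != expected:
            drift[key] = (expected, actual)
    return drift

# Invented manifest and one node's observed state.
baseline = {"ntp": "enabled", "ssh_root_login": "no", "pkg:nginx": "1.18"}
web01 = {"ntp": "enabled", "ssh_root_login": "yes", "pkg:nginx": "1.18"}

# web01 has drifted on one setting and should be flagged for remediation.
drift = find_drift(baseline, web01)
```

Real tools like Puppet or UpGuard go further by also remediating or alerting on the drift, but the compare-against-baseline loop is the core of it.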
And this also ties tightly to continuous delivery, which is kind of the goal that everybody has when they look at DevOps. And so now let's look at collaboration tools, which are also a part of our DevOps toolchain. We need to be able to communicate. We need to be able to understand what's happening at different points in time. So we look at things here like Campfire and Slack, which you've probably heard of, and IM tools. We're looking at things like IRC, you know, Internet Relay Chat of course. Skype and Lync, and HipChat from Atlassian. And we also have our blogs and our wikis and other things as well. So there are a lot of things that play in this space. Now, let's take a look at a few notes on the DevOps toolchain. The first thing I would like to note is that virtualization containers like Docker, allowing for consistent deployment in almost any environment, are becoming increasingly popular DevOps tools. Another thing is that this is not a comprehensive list that I'm giving you. The DevOps tool landscape is constantly changing, and there are new and updated tools introduced frequently. I highly recommend taking a look at different sources. As I said, I referenced the UpGuard e-book on this, I referenced InfoQ, and I referenced some of my own personal experience. There are a lot of other sources out there to look at, but stay abreast of what's the latest. [Narrator] So, DevOps automation practices. Now, when limiting tool choices, standardization ideally needs to be seen as a way of achieving success rather than just as a constraint. And a common approach with DevOps practices is to integrate a collection of task-specific tools. So, start with an understanding of the artifact and the information flow, whether that's done by value stream mapping or whatever system you use, and then begin mapping requirements to tool sets and features. The ultimate end goal is to have teams collaborating and coordinating around the same tools. 
So ensuring buy-in across multiple teams is very important. There may be some trial and error required to achieve that. Dev and Ops teams can better understand each other's requirements and working methods when they're trialing systems and software. So self-service can improve the timeliness of environment creation, for example. And test-driven development is a common practice in DevOps organizations, and that requires architecting software in a way that enables test automation. Many organizations also require developers to build the code that enables monitoring into each application as a standard practice. So technical staff should devote quality time to devising and deploying automation capabilities. Infrastructure as code is the practice of writing code to provision and configure infrastructure. Let's just delve into some of these high-level topics in a little more detail. We cover these in much more granularity in later lectures and learning paths. So if you're looking for more information on any of these topics, please check the appendix. We've got links to other courses and learning paths that go into these in way more detail. For now, Cloud computing is the practice of using remote services hosted on the internet to host applications, rather than having local services or servers in a private data center. Amazon Web Services, Microsoft Azure, and Google Cloud Platform are the most popular public Cloud platform providers. Some people use the term private Cloud, where IT services are provisioned over private IT infrastructure for the dedicated use of a single organization. Now Docker is the most commonly used container solution out there. Containers can hold applications and microservices. Microservices don't have to live in containers, but there are a lot of benefits to having microservices in containers. Artificial Intelligence is a huge growth area, and it can be applied to deploying, monitoring, and troubleshooting applications and infrastructure. 
It could really help streamline operations. Now there's a breadth of products and services which enable analysis and automation in infrastructure, as well as numerous business cases such as data mining and customer services where AI and machine learning can add a lot of value and business benefit. Now we have a wealth of courses on machine learning and Artificial Intelligence in the library. So see the appendix for more information on those if you would like to learn more about AI and machine learning. Okay, let's talk about communication, collaboration, and automation. Now there are aspects of collaboration and communication that can be automated. There's a variety of tools that support communication and collaboration, including platforms that provide multiple modes of communication from a single tool. So each organization should determine the tools that best suit the engagement and requirements for that organization. Real-time collaboration using common tools helps to improve flow, and ChatOps, a concept introduced by GitHub, is a great way of doing this. It involves using group chat rooms aided by a chat bot, which basically enables a shared backlog and allows teams to select improvement projects from an organizational rather than a local perspective. So a shared backlog can be used to prioritize the work that delivers the greatest organizational value, and helps to pay down technical debt. So let's review some of the steps we can take to improve DevOps automation. A good place to start is with repetitive tasks. Every time a task comes up for a second time, make a commitment to try and automate it, no matter how simple it is, and track the changes in version control. With each automated task, you will free up time that can be used to invest in automating more complex tasks. And one of the common patterns we want to encourage is to architect before automating. So always make sure that we review the architecture before we start automating the service. 
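The ChatOps pattern mentioned above can be made concrete with a few lines of code: a bot watches a shared chat room, parses commands, and runs the matching automation so everyone sees what was done and when. This sketch is illustrative only; the command names and handlers are invented, and a real bot (Hubot, for example) would connect to an actual chat platform.

```python
# Hypothetical ChatOps dispatcher: map "!command arg" chat messages to
# automation functions so operations happen in a shared, visible channel.

def deploy(env: str) -> str:
    return f"deploying to {env}... ok"   # stand-in for a real deploy job

def status(service: str) -> str:
    return f"{service}: healthy"         # stand-in for a real health check

COMMANDS = {"deploy": deploy, "status": status}

def handle_message(message: str) -> str:
    """Parse a '!command arg' chat message and run the matching handler."""
    if not message.startswith("!"):
        return ""  # ordinary chat, not addressed to the bot
    verb, _, arg = message[1:].partition(" ")
    handler = COMMANDS.get(verb)
    if handler is None:
        return f"unknown command: {verb}"
    return handler(arg)
```

Because every command and its result land in the room, the chat log doubles as an audit trail and a way for teammates to learn the tooling by watching it used.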
Identify the best patterns first. Let's put simplicity first; we don't need to automate bad processes. We want to list the requirements and match tools to those requirements carefully. We need to ensure products and services meet those requirements, as this helps us to encourage buy-in and cross-team collaboration. We need to be realistic about how long it will take to evolve our toolchain. So set realistic expectations, and don't be afraid to make changes if something is not working. Be realistic about the time it takes to build tools. Best of breed is usually the shortest route to success with DevOps toolchains. So now that we have a background knowledge of the composite parts of a DevOps toolchain, let's explore how we go about evaluating, selecting, and implementing DevOps toolchains. One way to enable market-oriented outcomes is for Operations to create a set of centralized platforms and tooling services that any Dev team can use to become more productive. A platform that provides a shared version control repository with pre-blessed security libraries, for example, or a deployment pipeline that automatically runs code quality and security scanning tools, and which deploys applications into known, good environments that already have production monitoring tools installed on them, is ideal. Now these shared services facilitate standardization, which enables engineers to quickly become productive, even if they switch between teams. If every production team chooses a different toolchain, engineers may have to learn an entirely new set of technologies to do their work, which puts team goals above global goals. Now here's an example of a deployment pipeline from Jez Humble and Dave Farley's book Continuous Delivery. This visualizes not only the stages of the deployment pipeline toolchain, but also the underlying process. The deployment pipeline is an automated process for managing all changes, from check-in to release. 
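A deployment pipeline like this can be modeled in miniature: a check-in triggers the stages in order, each stage reports a simple pass or fail, and the pipeline stops at the first failure so the team gets fast feedback. The stage functions below are toy stand-ins for real build and test tooling, invented for this sketch.

```python
# Toy deployment pipeline: version-control check-in triggers build and
# unit-test stages in order, with fail-fast pass/fail feedback per stage.

def build(changes):
    """Pretend build step: succeeds only if every changed file is code."""
    return all(name.endswith(".py") for name in changes)

def unit_tests(changes):
    """Pretend test step: fails if a known-broken test was committed."""
    return "broken_test.py" not in changes

PIPELINE = [("build", build), ("unit tests", unit_tests)]

def on_check_in(changes):
    """Run each stage on the committed changes; stop at the first failure."""
    feedback = []
    for name, stage in PIPELINE:
        passed = stage(changes)
        feedback.append(f"{name}: {'pass' if passed else 'FAIL'}")
        if not passed:
            break  # fail fast and feed the result back to the team
    return feedback
```

In a real pipeline the feedback list would be pushed into a chat room or dashboard, and later stages (acceptance tests, deployment to staging and production) would hang off the same trigger mechanism.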
And toolchains span silos and automate the deployment pipeline. So let's quickly walk through some of these steps. The first key success point is automating a version control trigger when code is checked in. This takes out one failure point immediately. We can then look to automate the build and unit tests. These can be run based on the version control trigger. The next opportunity for automation is feedback from the build and unit tests. These can be fed back to all teams with a simple pass or fail using ChatOps. We can then, of course, build triggers to commit, or to notify a delay for the next code push. There are a number of feedback triggers and feedback points that we can automate in this chain. So the DevOps toolchain is composed of the tools needed to support a DevOps continuous integration, continuous deployment, and continuous release and operations initiative. At an abstract level, a deployment pipeline is an automated manifestation of your process for getting software from version control into the hands of your users. And the deployment pipeline ensures that all code checked into version control is automatically built and tested in a production-like environment. So the interfaces of the toolchain deployment pipeline need to be administered and supported. So this process management, incident, problem, event monitoring, et cetera, needs to be thought through. The entire application stack and environment can be bundled into containers, which can enable unprecedented simplicity and speed across the entire deployment pipeline. Technology ecosystems are product platforms defined by core components made by the platform owner and complemented by applications. So let's look at a sample toolchain. This one is from the GSA, which is a U.S. Government agency. This is a good example of how various tools are adapted and integrated into a deployment pipeline to deliver the most business value. 
The tools and services are selected based on their suitability for the various stages of the chain. The tools should be swappable and should never become a dependency of the toolchain itself. An ecosystem is a set of businesses functioning as a unit, e.g. a DevOps ecosystem, and interacting with a shared market for software and services. Software ecosystems can also consist of a company providing a software platform and a community of external developers, and perhaps even external companies, providing functionality that extends the basic platform. Selecting tools that are part of an existing ecosystem makes things much easier, as they follow API standards, and it allows automation of one tool to kick off downstream work on the next tool. So let's review some of the elements in a DevOps toolchain. The deployment pipeline breaks the software delivery lifecycle into logical stages. Each stage provides the opportunity to verify and qualify new features from a different angle. It provides the team with fast feedback, and each stage provides visibility into the flow of changes. DevOps toolchains provide the capabilities needed to automate and expedite each stage. And typical toolchain elements can be requirements management, orchestration and visualization, version control management, continuous integration and builds, artifact management, containers and OS virtualization, test and environment automation, server configuration and deployment, system configuration management, alerts and alarms, and monitoring. Now we need to build our DevOps toolchain gradually. This is a recommended sequence for introducing increased levels of automation in the creation of a toolchain. Of course, some will happen simultaneously depending on the organization, and in large organizations there are likely to be multiple toolchains being built by multiple teams at different speeds with different tools. 
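The point above about tools being swappable, never a dependency of the toolchain itself, can be sketched in code: if every tool in a stage sits behind the same small interface, one vendor's tool can replace another's without rewriting the chain. The adapter classes here are hypothetical, not real product APIs.

```python
# Sketch of swappable toolchain stages behind a common interface.

class Stage:
    """Common interface every toolchain stage adapter implements."""
    def run(self, artifact: str) -> str:
        raise NotImplementedError

class BuildToolA(Stage):   # stand-in adapter for one CI server
    def run(self, artifact):
        return artifact + "+built"

class BuildToolB(Stage):   # a drop-in replacement from another vendor
    def run(self, artifact):
        return artifact + "+built"

class DeployTool(Stage):   # stand-in adapter for a deployment tool
    def run(self, artifact):
        return artifact + "+deployed"

def run_toolchain(stages, artifact):
    """Pass the artifact through each stage in order."""
    for stage in stages:
        artifact = stage.run(artifact)
    return artifact
```

Swapping BuildToolA for BuildToolB changes nothing downstream, which is exactly why tools that follow shared API standards within an ecosystem are easier to adopt and to retire.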
So at this point, the organization may wish to take the advice from the DevOps Handbook and ask the IT Operations team, or a dedicated infrastructure team, to design and deliver a DevOps toolchain as a central service. This can also be part of the transformation assets that are part of any transformation plan. They should be part of a common library. Now while it may seem as if organizations have a single deployment pipeline and toolchain, the more likely scenario is that organizations have multiple deployment pipelines and toolchains for their various software products and services. The key is to ensure that the pipelines and toolchains are not operating in complete isolation, particularly since there may be a need to interface between them. The risk is encouraging toolchain silos, or conflicts in the testing and production environments. Best practices for DevOps toolchains include provisioning by IT Operations or by an infrastructure squad, and architecting upfront with standardized tool categories. Use common sense to decide which tools teams should use when there are multiple tools on offer, based on what is already in use and what is liked by the organization as a whole, and make tools available to be consumed via self-service; that's always gonna make things easier. That brings us to the end of lecture six. I'll see you in lecture seven.
Andrew is fanatical about helping business teams gain the maximum ROI possible from adopting, using, and optimizing Public Cloud Services. Having built 70+ Cloud Academy courses, Andrew has helped over 50,000 students master cloud computing by sharing the skills and experiences he gained during 20+ years leading digital teams in code and consulting. Before joining Cloud Academy, Andrew worked for AWS and for AWS technology partners Ooyala and Adobe.