The DevOps Institute is a collaborative effort between recognized and experienced leaders in the DevOps, InfoSec, and ITSM spaces, and acts as a learning community for DevOps practices. This DevOps Foundations course has been developed in partnership with the DevOps Institute to provide you with a common understanding of DevOps goals, business value, vocabulary, concepts, and practices. By completing this course you will gain an understanding of the core DevOps concepts, the essential vocabulary, and the core knowledge and principles that underpin DevOps practices.
This course is made up of 8 lectures and an assessment exam at the end. Upon completion of this course and the exam, students will be prepped and ready to sit the industry-recognized DevOps Institute Foundation certification exam.
Learning Objectives
- Recognize and explain the core DevOps concepts.
- Understand the principles and practices of infrastructure automation and infrastructure as code.
- Recognize and explain the core roles and responsibilities of a DevOps practice.
- Be prepared to sit the DevOps Institute Foundation certification exam after completing the course and assessment exam.
Intended Audience
- Individuals and teams looking to gain an understanding and shared knowledge of core DevOps principles.
Prerequisites
- A basic understanding of IT roles and responsibilities. We recommend completing the Considering a Career in Cloud Computing? learning path prior to taking this course.
- [Instructor] Hello, and welcome to Lecture 3, Key DevOps Practices. In this lecture, we will learn to recognize and explain Continuous Testing, Continuous Integration, Delivery, and Deployment. Okay, let's start with Continuous Testing. So, Continuous Testing is the process of executing automated tests as part of the deployment pipeline, and that gives us immediate feedback on the business risks that might be associated with a software release candidate. Now, the sequencing of tests and the need for automation make Continuous Testing crucial to DevOps. Test plans and automation are essential to DevOps practices. Non-functional requirements can often be overlooked in testing plans. While the term non-functional may make them sound less important, these are the requirements that underpin the product and must be tested in development, in deployment, and beyond. So pay particular attention to the rising concept of "shifting left", where security, compliance, and other functional and non-functional requirements are tested during the development process. Ensuring that non-functional requirements are included as well is a key part of getting DevOps right. A version control repository is a repository where developers can commit and collaborate on their code. It also tracks historical versions and potentially identifies conflicting versions of the same code. So our first step towards building quality at the source is Continuous Integration. Continuous Integration refers to the continuous integration of multiple code branches into the trunk, also known as the master, ensuring that code passes unit tests before it reaches that master branch. In the context of Continuous Delivery in DevOps, Continuous Integration also mandates running on production-like environments, and passing acceptance and integration tests. These practices make it possible to detect when code changes break the system as early as possible.
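The unit-test gate at the heart of Continuous Integration can be sketched in a few lines of Python. This is an illustrative sketch, not part of the course: the `release_risk` function and its rule are assumptions standing in for a real test runner, but the idea is the one just described, namely that any failing automated test blocks the path to the master branch.

```python
def release_risk(failed_tests: int, total_tests: int) -> str:
    """Toy release-candidate gate: any failing test blocks the merge.

    A real CI server would get these numbers from the project's test
    runner (pytest, JUnit, go test, and so on); here they are inputs.
    """
    if total_tests == 0:
        raise ValueError("a release candidate must have automated tests")
    return "blocked" if failed_tests > 0 else "ok to merge"

# The fast feedback a developer sees on every commit:
assert release_risk(0, 120) == "ok to merge"
assert release_risk(1, 120) == "blocked"
```

The point of the sketch is the shape of the feedback loop: the check runs automatically on every commit, so a break is detected minutes after it is introduced rather than at the end of the project.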
The practice also highlights what caused the break and allows quick remediation before moving into production. Now, it's important to remember that waterfall methodologies can also take advantage of continuous integration and test-driven development practices. These are not specific to Agile, Lean, or to DevOps, per se. Continuous Delivery is achieved by continuously integrating the work done by the development team, building executables, and running automated tests on those executables to detect problems. Furthermore, you push the executables into increasingly production-like environments to ensure the software will work in production. You know you're doing Continuous Delivery when your software is deployable throughout its lifecycle, when your team prioritizes keeping the software deployable over working on new features, when anybody can get fast, automated feedback on the production readiness of their systems anytime somebody makes a change to them, and when you can perform push-button deployments of any version of the software to any environment on demand. The benefits of Continuous Delivery include reduced risk, demonstrable progress, and quicker access to user feedback. Now often there's a question around how you differentiate between release and deploy in your organization. It's really worth thinking through how you differentiate between a released state and a deployed state in your organization. Continuous Delivery is sometimes confused with Continuous Deployment. Continuous Delivery means that you are able to do frequent deployments but may choose not to, usually due to businesses preferring a slower rate of deployments, typically achieved by batching changes into releases. So Continuous Delivery requires that whenever anyone makes a change that causes an automated test to fail, breaking the deployment pipeline, developers stop the line and bring the system back into a deployable state.
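The "stop the line" rule just described can be sketched as a pipeline that runs its stages in order and halts at the first failure, so the system stays in its last known deployable state. The stage names and return strings below are illustrative assumptions, not a real CI product's API.

```python
from typing import Callable, List, Tuple

# A stage is a name plus a check that returns True on success,
# e.g. ("unit tests", run_unit_tests).
Stage = Tuple[str, Callable[[], bool]]

def run_pipeline(stages: List[Stage]) -> str:
    """Run each stage in order; stop the line as soon as one fails."""
    for name, check in stages:
        if not check():
            return f"pipeline broken at '{name}': stop the line and fix it"
    return "deployable: every stage passed"

# Example: the acceptance tests fail, so nothing later even runs.
result = run_pipeline([
    ("unit tests", lambda: True),
    ("acceptance tests", lambda: False),
    ("deploy to staging", lambda: True),
])
# result == "pipeline broken at 'acceptance tests': stop the line and fix it"
```

Halting at the first failure is what keeps the pipeline honest: later stages never run against a build that is already known to be broken, and fixing the break becomes the team's top priority.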
Organizations may even choose to have the Continuous Integration and Deployment system reject any changes, for example, code or environment commits, that would take the code out of a deployable state. Doing this continually, and throughout a development project, eliminates the common practice of having a separate integration and test phase at the end of the project, a phase that often gets compressed or skipped entirely, resulting in more technical debt and a downward spiral. Monitoring is important throughout the deployment lifecycle of the pipeline. In addition to monitoring the production services, we also need to monitor the pre-production environments, for example, our Dev, Test, and Staging environments. That way, we can detect and correct potential performance problems long before production, and we can also minimize the cost of correcting those problems.
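Monitoring every environment in the pipeline, not just production, can be sketched as running the same health check against Dev, Test, Staging, and Production so that a performance regression surfaces before it reaches users. The latency budget and environment names here are illustrative assumptions.

```python
LATENCY_BUDGET_MS = 250  # assumed service-level objective, for illustration

def check_environments(latencies_ms: dict) -> list:
    """Return the environments whose observed latency breaks the budget."""
    return [env for env, ms in sorted(latencies_ms.items())
            if ms > LATENCY_BUDGET_MS]

# Example: a regression shows up in Staging long before Production.
observed = {"dev": 90, "test": 110, "staging": 410, "production": 120}
assert check_environments(observed) == ["staging"]
```

Catching the regression in Staging is exactly the cost-minimizing effect described above: the fix is made before the change is promoted, rather than under incident pressure in production.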
- [Instructor 2] When multiple developers work on the same project, they'll usually be changing a shared master development branch at overlapping intervals. This overlap occurs because developers create parallel branches for their work, and then merge those branches in when features are complete. The branches they create for their work all start off as identical copies of the master branch, but, as the master branch changes over time, the code on an unmerged branch looks less and less like the current code on the master. When it's time to integrate their changes into the main code base, this inevitable divergence can cause lots of challenges that can introduce bugs, create bottlenecks, or even bring development to a complete halt. Continuous Integration, or CI, is a workflow strategy that helps ensure everyone's changes will integrate with the current version of the project. This lets you catch bugs, reduce merge conflicts, and have confidence your software is working. While the details may vary depending on your development environment, most CI systems feature the same basic tools and processes. In most scenarios, a team will practice CI in conjunction with automated testing, using a dedicated server or CI service. Whenever a developer adds new work to a branch, the server will automatically build and test the code to determine whether it works and can be integrated with the code on the main development branch. The CI server will produce output containing the results of the build and an indication of whether or not the branch passes all the requirements for integration into the main development branch. By exposing build and test information for every commit on every branch, CI paves the way for what's known as Continuous Delivery, or CD, as well as a related process called Continuous Deployment. So, what's the difference between Continuous Delivery and Continuous Deployment?
Continuous Delivery is the practice of developing software in such a way that you could release it at any time. When coupled with CI, Continuous Delivery lets you develop features with modular code in more manageable increments. Continuous Deployment is an extension of Continuous Delivery. It's a process that allows you to actually deploy newly developed features into production with confidence and experience little, if any, downtime. Now, let's take a look at how GitHub fits into this process, and we'll take it one step at a time, starting with CI. GitHub is like a clearing house for your code. Developers make changes locally and push those changes to GitHub when they want to share them with others. With CI, all of these changes need to get to the CI server, so it can determine whether or not they will integrate with the current main development branch, but how does it even know about them? GitHub uses what are called Webhooks to send messages to external systems about activity and events that occur in your projects. For each event type, you can specify the subscribers who should receive the message about the event. In this case, we can subscribe our CI server to receive a message anytime someone pushes code to a branch or opens a pull request on GitHub. The CI server will parse the message from GitHub, grab a current copy of the project, build the branch, and run the tests. When the CI server finishes its processes for the current commit, it sends a message to GitHub's status API, containing status information about the commit. GitHub uses that message to display information about the commit, and can even link back to more detailed information on the CI server. This helps give you a clear idea of which changes can be integrated into the main development branch, and which ones need a bit more work. Continuous Deployment works in a similar way. You can often configure your CI server to deploy branches as part of its processes.
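The webhook flow just described, where GitHub sends a push event, the CI server builds the commit, and a status is reported back, can be sketched as follows. The `"ref"` and `"after"` fields mirror GitHub's push-event payload, but `handle_push_event` and the `run_build` callback are illustrative stand-ins, not a real CI product's API.

```python
import json

def handle_push_event(payload: str, run_build) -> dict:
    """Parse a push webhook and produce a commit status for GitHub's status API.

    run_build(branch, sha) stands in for cloning the repo, building the
    branch, and running the tests; it returns True when everything passes.
    """
    event = json.loads(payload)
    branch = event["ref"].split("/")[-1]   # e.g. "refs/heads/main" -> "main"
    sha = event["after"]                   # the commit the CI server should build
    passed = run_build(branch, sha)
    return {
        "sha": sha,
        "state": "success" if passed else "failure",
        "context": "ci/build",
    }

# Example: a push whose build passes yields a "success" status for GitHub.
payload = '{"ref": "refs/heads/main", "after": "abc123"}'
status = handle_push_event(payload, lambda branch, sha: True)
# status == {"sha": "abc123", "state": "success", "context": "ci/build"}
```

The returned dictionary is what GitHub would render next to the commit, linking the green check or red cross in the UI back to the CI server's detailed build output.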
In a simple setup, anytime the master branch receives a new commit, the CI provider grabs a current copy of the project and deploys the master branch to production. The setup for this type of deployment will vary depending on your provider. If your project requires more flexibility, GitHub also exposes a deployments API that lets you create custom deployments from branches, tags, or commits. You can use the deployments API in conjunction with Webhooks, to automatically notify third-party systems, which can then retrieve a copy of the code from GitHub, and deploy the version you request to the environment you specify. So, let's review all of that one more time. Continuous Integration is a workflow strategy you can lean on to help you ensure new code will integrate into the current version of the software. Continuous Delivery is developing software that could be released at any time. GitHub puts your code at the center of your development ecosystem by serving as a clearinghouse that not only keeps track of changes but also communicates with other systems about those changes using Webhooks and APIs.
- [Instructor] This graphic is from the 2017 State of DevOps Report, and it is a great representation of the ability to increase IT and Business Performance through DevOps. It is a structural equation model. Each box in the figure represents a construct measured in the research, and each arrow represents a relationship between those constructs. To interpret the model, all arrows can be read using the words predicts, affects, drives, or impacts. So, for example, IT Performance predicts Organizational Performance, and if you see a minus next to one of those arrows, it means the relationship is negative. So Continuous Delivery negatively impacts Deployment Pain, meaning it reduces it. All arrows in the model represent statistically significant relationships; a very quick way to summarize how DevOps can improve your business process. Now Google created the concept of Site Reliability Engineering and codified it in the book of the same name. The ideal SRE candidate is a programmer who also has solid operational systems or network knowledge, and who likes to whittle down complex tasks. SRE and DevOps share the same foundational principles. Site Reliability Engineering is viewed by many, and cited in the Google SRE book, as a specific implementation of DevOps with some idiosyncratic extensions. SREs, being developers themselves, will naturally bring solutions that help remove the barriers between deployment teams, development teams, and operational teams. So what is Resilience Engineering? It is the intrinsic ability of a system to adjust its functioning prior to, during, or following changes and disturbances, so that it can sustain required operations under both expected and unexpected conditions. Resilience Engineering looks at how the organization functions as a whole, rather than in silos, and the best defense is a good offense. So if we take an aggressive, blameless, and systemic view of post-incident post-mortems, we can learn more and start to identify areas that can add to improvements.
We need to consider both human and technical elements in post-mortems, and systems must be stronger than their weakest link. Failure is the flip side of success. Let's look at DevSecOps. DevSecOps is a principle that embeds security as code by shifting it left, putting continuous security testing throughout the deployment pipeline. Another term, Rugged DevOps, was also used in the DevOps security space. DevSecOps has now established itself as the term most popularly used to describe DevOps with a particular emphasis on security-related outcomes. Some people are not fans of the term, as they believe it could create another silo, demanding that people learn another thing, or that it still doesn't encapsulate the spirit of what DevOps is. However, it's become a fairly well-established term and is expected to remain as long as security needs special attention, which of course it does. There are rumored to be over one million cybersecurity positions unfilled around the world, and some ratios say that for every 100 developers there are 10 IT operations people and one security person. DevSecOps breaks the security constraint by spreading the knowledge among humans and embedding it in systems. ChatOps is a communication approach that allows teams to collaborate and manage many aspects of their infrastructure, code, and data from a chatroom. ChatOps is great for synchronous communication, regardless of where people are, for problem-solving, and for creating a shared history, as communications are searchable. Now care must be taken to ensure information that needs to be kept secure is kept secure, and also that information that is relevant to deployments or best practices gets moved to more permanent locations, such as knowledge bases, wikis, or Google Docs. So what is Kanban? Kanban is a method of work that pulls the flow of work through a process at a manageable pace. In Japanese, kan means visual and ban means board. So Kanban is basically a pull system.
Teams pull work only when they are ready for it, in an effort to prevent overburdening of teams. That brings us to the end of Lecture 3. I will see you in Lecture 4.
Andrew is fanatical about helping business teams gain the maximum ROI possible from adopting, using, and optimizing Public Cloud Services. Having built 70+ Cloud Academy courses, Andrew has helped over 50,000 students master cloud computing by sharing the skills and experiences he gained during 20+ years leading digital teams in code and consulting. Before joining Cloud Academy, Andrew worked for AWS and for AWS technology partners Ooyala and Adobe.