
Conclusion


This course is part of the learning path: AZ-400 Exam Prep: Microsoft Azure DevOps Solutions
Overview

Difficulty: Intermediate
Duration: 38m
Students: 210
Rating: 4.3/5

Description

Azure DevOps is a tool for planning, auditing, version control, code integration, testing, artifact storage, and deployment. It is a primary tool for continuous integration and deployment. This course focuses on the most important parts of Azure Pipelines for implementing an end-to-end continuous integration strategy.

Build triggers are essential for automated code processing and will reduce the workload of any team governing the process. You will learn which build triggers exist, why you may want to use them, and a strategy for implementing them.

Hybrid builds allow for flexibility in workflow, security, and processes by integrating builds from more than one pipeline or tool.

Parallel builds will speed up processing of the workflow where it makes sense to do so. You will learn how to use parallel or multi-agent builds, when to use them, and what their benefits are.

Azure DevOps is but one tool in the CI ecosystem. This course will also touch upon other build tools as well as recommendations for their use and integration into your workflow.

To round out the course, we will set up a completely automated continuous integration (CI) workflow that will provide a foundation for a secure, repeatable, auditable, and complete CI solution for your projects.

Learning Objectives

  • Maximize automation strategy with build triggers
  • Understand hybrid build concepts
  • Speed up pipelines with parallel builds
  • Learn about build tools and Azure integration

Intended Audience

  • Anyone wanting to learn the continuous integration material for Microsoft's AZ-400 exam

Prerequisites

  • A basic understanding of workflow and the CI build pipeline process
  • A good understanding of the development lifecycle
  • It would be advantageous to have a basic understanding of YAML, although it's not required

Resources

The GitHub repository for this course is at https://github.com/cloudacademy/azure-continuous-integration-build.

Transcript

Build automation tools are extremely powerful but can be complex. Investing the effort to learn Azure DevOps, and to design and implement effective and efficient pipelines that automate tasks, will pay dividends.

Let's do a brief review of what was covered and pull the individual pieces together.

A good place to start is with build tools. There are many pipeline tools available today, each with characteristics that may or may not be desirable: some can run on-premises, others offer professional support. Whatever criteria must be met, due diligence is needed to find the tool that best fits your use case.

Azure DevOps is a robust, end-to-end planning, testing, and pipeline tool. Azure Pipelines follows the configuration-as-code convention, using an Azure Pipelines YAML file to declare the build criteria and implementation.

Let's take a look at an Azure Pipelines YAML file that has been prepared.

At the top of the file, we can find the trigger declaration. The pipeline is configured to trigger on the master and release branches. Specifying branches to include overrides the default of including all branches.
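As a sketch, the trigger declaration being described might look like the following (the exact branch names in the course repository may differ):

```yaml
trigger:
  branches:
    include:
      - master       # trigger on pushes to master
      - release/*    # '*' matches zero or more characters, e.g. release/1.0
```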

Triggers allow builds to be automatically started or excluded from starting based on matching criteria.

The branches syntax is not required when simply listing branches to trigger on; however, it is required when options are desired, such as excluding branches or using paths and tags.

Wildcards can be used for pattern matching. The asterisk matches zero or more characters and the question mark, which is not used in this example, matches a single character.

By using include, all other branches not listed or matched are implicitly excluded. The same goes for exclude: when it is used, all branches not listed are implicitly included.
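For illustration, a hypothetical exclude-only trigger leaves everything else implicitly included (the branch pattern here is an assumption, not from the course file):

```yaml
trigger:
  branches:
    exclude:
      - experimental/*   # these branches never trigger a build;
                         # all other branches are implicitly included
```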

Paths allow file paths to be included or excluded. When a directory is specified, the path also encompasses the children of that directory. The default is to include the root, but this declaration excludes development. By reaching deeper into the file structure, the main directory under development can be included; everything else under development remains excluded.
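A sketch of that paths declaration, using the development and development/main directories mentioned in the transcript:

```yaml
trigger:
  branches:
    include:
      - master
  paths:
    exclude:
      - development        # excludes development and all of its children
    include:
      - development/main   # reaches deeper to re-include this subdirectory
```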

Tags can also be specified, either under branches using Git syntax or under the tags object. All tags are included here, which is the opposite of the default.
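Including all tags, as described, could be declared along these lines (a sketch; the branch filter is illustrative):

```yaml
trigger:
  branches:
    include:
      - master
  tags:
    include:
      - '*'   # every tag triggers a build; by default, none would
```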

Keep in mind that branch conditions must be a match for the build to run and if tags or paths are specified, one of those must have criteria that matches for the build to run.

Pull request triggers allow branches and paths to be specified. This only applies to builds when pull requests are involved. Branches that are the target of the pull request, meaning the branch being merged into, can be specified to include or exclude. Although wildcards are not used here, they are very much applicable.

Paths are specified to include or exclude as well. The pull request paths are not required to be the same as the ones specified under normal trigger paths. It just happens to be the case in this example. The default is to include all pull request branches and the root path. Tags are not allowed in the pull request trigger specification.
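A pull request trigger along the lines described might be sketched as follows (branch and path names are illustrative):

```yaml
pr:
  branches:
    include:
      - master             # build PRs that target master
  paths:
    exclude:
      - development        # skip PR builds for changes under development
    include:
      - development/main
```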

Further down the file are the declared steps to be run. These steps can be logically grouped under jobs. Those jobs can be further grouped into stages. By default, every pipeline has one stage and one job. Although not the case in this example, if only one stage and job are needed then the syntax to explicitly define those can be omitted.

Here, there are two stages: validate_and_test and build_and_archive. Since there are multiple jobs, they would run in parallel if more than one agent were available and the dependsOn syntax were not used to force the build_and_archive stage to wait for the validate_and_test stage to complete successfully.
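A minimal skeleton of those two stages, with dependsOn forcing sequential execution (the job names and script steps are placeholders, not the course file's actual steps):

```yaml
stages:
  - stage: validate_and_test
    jobs:
      - job: test
        steps:
          - script: echo "lint and run unit tests"   # placeholder step

  - stage: build_and_archive
    dependsOn: validate_and_test   # wait for the previous stage to succeed
    jobs:
      - job: build
        steps:
          - script: echo "build and archive the artifact"   # placeholder step
```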

The dependsOn syntax can be used for jobs or stages. It is useful not only for establishing dependencies but also for optimizing the build. For example, there may be a case where a job could run in parallel, but the overhead of doing so is a detriment to the build. In that case, forcing the build to be sequential conserves resources.

The chain of results is also a consideration. Build steps typically operate on the results of prior steps. Using more than one build agent means that jobs must either be completely independent, or some other mechanism must be used to pass context from one job to another.

For example, if the initial job of a pipeline tests and builds the code, the next step may be to run integration tests on the built code. That could be done within the same job or in a separate job. If done in a separate job, the built application artifact must be made available to the downstream jobs. This can easily be done by storing the artifact at the end of the initial build job and then retrieving it in each successive job.

In fact, it is often not only the code artifact that needs to be stored; configuration details and metadata also need to be promoted. Artifact storage is certainly applicable to these details as well. This data can be written to one or more files and then stored as one or more artifacts for later consumption.
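One common way to pass results between jobs is with pipeline artifacts, using the built-in PublishPipelineArtifact and DownloadPipelineArtifact tasks; the job names, paths, and artifact name below are illustrative:

```yaml
jobs:
  - job: build
    steps:
      - script: echo "build app into $(Build.ArtifactStagingDirectory)"  # placeholder
      - task: PublishPipelineArtifact@1
        inputs:
          targetPath: '$(Build.ArtifactStagingDirectory)'
          artifact: 'app'   # artifact name for downstream jobs

  - job: integration_tests
    dependsOn: build
    steps:
      - task: DownloadPipelineArtifact@1
        inputs:
          artifact: 'app'
          path: '$(Pipeline.Workspace)/app'
      - script: echo "run integration tests against $(Pipeline.Workspace)/app"  # placeholder
```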

These principles also apply to hybrid builds since hybrid builds are essentially build jobs performed across multiple platforms.

Hybrid builds can be started on one platform and then finished on another platform, such as using Jenkins to build the artifact and then using Azure DevOps to thoroughly test the artifact. 

The build process needs to be broken into distinct jobs, while keeping in mind the requirements that prompted the use of hybrid builds in the first place.

There are several reasons for hybrid builds: security requirements, proprietary software, resource allocation, tooling availability, compliance requirements, optimization strategy, or any number of others.

One of the most important design decisions for hybrid builds is how the builds will be triggered. Most commonly this is done with webhooks, Git triggers, or API calls.
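As one illustration, an Azure Pipelines run can be started by an external system such as Jenkins via an incoming webhook resource; the webhook and service connection names here are hypothetical:

```yaml
resources:
  webhooks:
    - webhook: jenkinsBuildComplete          # hypothetical incoming webhook name
      connection: jenkinsServiceConnection   # hypothetical service connection
```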

Careful consideration must be given to what criteria defines a trigger, when it should trigger, how it should trigger, and what actions, if any, need to be performed upon failure of the build at critical points. This will ensure that the build process runs smoothly between platforms.

By understanding the advantages and limitations of build tools, and by applying the principles and techniques covered for build triggers, parallel builds, and hybrid builds, you will be well on your way to providing a foundation for secure, repeatable, auditable, and complete continuous integration solutions for your projects.

About the Author

Cory W. Cordell is an accomplished DevOps Architect, Software Engineer, and author. He started his DevOps career as a DevOps Engineer for a large bank where he helped implement DevOps practices and tooling and establish a DevOps culture. 

Cory then accepted a position with a global firm to build a DevOps department. He led a team of DevOps Engineers to establish best practices and train development teams on tooling and those practices. He worked to help development teams migrate their applications to Azure Kubernetes Service and establish pipelines to build, test, and deploy code. Realizing that a substantial gap existed in the toolchain, he developed an application to aid in infrastructure tracking and to provide UI abilities for teams to view application status for their software.

Cory is now enjoying working as a contractor and author.