Continuous Integration (CI) and Continuous Delivery (CD) enable teams to automate the building, testing, and deployment of software. CI/CD, along with DevOps practices, is attracting a lot of attention and playing an important part in the software development process. Efficient CI/CD strategies enable companies to deliver better value by reaching the market with shorter turnaround times, thereby increasing revenue and gaining market share.
CI/CD practices enable us to catch and resolve bugs and other issues at a much earlier stage, resulting in a significant reduction in the overall cost of software development.
In this course, you will learn the skills required to build CI/CD pipelines using tools such as Google Cloud Build, Google Container Registry, and Cloud Source Repositories. The course starts by showing you how to develop code in Cloud Shell and upload it to Cloud Source Repositories. It then guides you through the stages of a CI/CD pipeline that builds and deploys an application to GKE using Container Registry and Cloud Build.
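As a rough preview, the following commands sketch that end-to-end flow; the repository, project, and deployment names (demo-repo, my-project, demo-app) are illustrative placeholders, not from the course:

# Clone the application code from Cloud Source Repositories
gcloud source repos clone demo-repo
cd demo-repo

# Ask Cloud Build to build the image and push it to Container Registry
gcloud builds submit --tag gcr.io/my-project/demo-app:v1 .

# Roll the new image out to an existing GKE deployment
kubectl set image deployment/demo-app demo-app=gcr.io/my-project/demo-app:v1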
If you have any feedback relating to this course, please let us know at support@cloudacademy.com.
Learning Objectives
By the end of this course, you will know how to:
- Work with immutable artifacts
- Deploy immutable artifacts using Cloud Build
- Trigger builds
- Set up Cloud Build pipelines
Intended Audience
This course is suited to anyone interested in building CI/CD pipelines on Google Cloud Platform (GCP) using Cloud Build and Container Registry.
Prerequisites
To get the most out of this course, you should have a working knowledge of Docker, containers, and Kubernetes.
Resources
The source code used in this course can be obtained from the following GitHub repository: https://github.com/cloudacademy/source-code-pipeline-demo-repo
At some point, we have all heard the statement: "This code works perfectly fine on my machine but does not work in test, QA, pre-prod, prod, or some other environment." And almost every time, after hours or days of debugging, we discover that the developer's machine has slightly different versions of binaries, or slightly different configuration, than the other environments. This does not only happen when promoting code from a dev machine to another environment; it can also happen when promoting code from QA to pre-prod, or from pre-prod to production.
These situations are frustrating, but they can be avoided by following a few principles. The questions that come to mind are: How do we make sure that code can be deployed in a repeatable fashion? How do we get consistent results when deploying the same code in different environments? What is the best way to package my code? How do I promote my code or application from one environment to another? What is a good way to deploy an application?
This course addresses some of the problems mentioned above. We will cover these concepts in general terms, so you may need to make some adjustments to suit your own production environment.
First of all, let's define the term "artifact". In the DevOps world, an artifact is a packaged collection of source code, dependencies, configuration files, binaries, and so on, produced by some build process. Artifacts can take the form of a JAR file, a TAR archive, or a Docker image, among others.
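For instance, each of the following commands produces an artifact in one of those forms (the file and image names here are illustrative):

jar cf app.jar -C build/classes .     # Java archive built from compiled classes
tar czf app-1.0.0.tar.gz app/         # compressed TAR archive of an application directory
docker build -t demo-app:1.0.0 .      # Docker image built from a Dockerfile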
Let's say I have an application and I build its artifacts during deployment by fetching dependencies from the Internet. In this scenario, there is a good chance that we won't get identical artifacts in every environment, and this can happen for a variety of reasons. For example, one environment may point to a different version of a repository and therefore pull different versions of the dependencies, or new code may have been added to the repository while the deployment was in progress.
A good example is fetching the JDK: one environment may get OpenJDK while another gets Oracle JDK. In such a situation, the resulting artifacts may not be identical and may not behave in the same way. This problem can be solved using a principle called Immutable Artifacts. Let's break this term down and look at "Immutable" and "Artifact" separately.
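As a minimal sketch of why this drift happens, consider pulling a base image by a floating tag (openjdk:17 here is just an illustrative example): the tag can point to different builds over time, whereas an image digest identifies exactly one image.

# A floating tag may resolve to different images on different days:
docker pull openjdk:17

# Print the digest the tag currently resolves to; referencing that digest
# (for example in a Dockerfile FROM line) pins the exact same base image everywhere:
docker inspect --format='{{index .RepoDigests 0}}' openjdk:17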
We have already covered artifacts. So, what does "immutable" mean? An immutable object is one that cannot change, or be changed, over time; once produced, it cannot be modified. An Immutable Artifact, then, is an artifact that is produced once and never modified afterwards. This principle solves many of the problems that arise when different artifacts get deployed to different environments.
With Immutable Artifacts, we build an artifact once, test it, and gain confidence that it can be deployed to any environment, including production, and will produce the same results and behave in the same way. Immutable Artifacts can therefore be deployed in a repeatable fashion with consistent results.
In the Google Cloud Platform world, and for this course, when we talk about Immutable Artifacts we are generally referring to Docker images, so we will use the terms Docker image and Immutable Artifact interchangeably. For containerized applications, we build a Docker image and store it in a repository, which may be public or private depending on the use case. Whenever the code or a dependency changes, we build a new image and store that in the repository.
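As a hedged sketch of that workflow, the commands below build an image tagged with the Git commit it was built from and push it to Container Registry, so an existing tag never changes meaning (my-project and demo-app are placeholder names):

# Tag the image with the commit SHA so each artifact is traceable and tags are never reused:
COMMIT_SHA=$(git rev-parse --short HEAD)
docker build -t "gcr.io/my-project/demo-app:${COMMIT_SHA}" .

# Store the immutable artifact in Container Registry
# (assumes Docker is authenticated, e.g. via 'gcloud auth configure-docker'):
docker push "gcr.io/my-project/demo-app:${COMMIT_SHA}"

# Every environment then pulls and deploys exactly the same image:
docker pull "gcr.io/my-project/demo-app:${COMMIT_SHA}"

Because the tag encodes the commit, promoting the application from test to production means redeploying the same tag, never rebuilding.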
Pradeep Bhadani is an IT consultant with over nine years of experience who holds various AWS, GCP, and HashiCorp certifications. He is recognized as a HashiCorp Ambassador and a Google Developer Expert (GDE) in Cloud for his knowledge and contributions to the community.
He has extensive experience in building data platforms both in the cloud and on-premises through the use of DevOps strategies and automation. Pradeep is skilled at explaining technical concepts and helping teams and individuals upskill on the latest technologies.