Continuous Integration (CI) and Continuous Delivery (CD) enable teams to automate the building, testing, and deployment of software. CI/CD, along with DevOps practices, is attracting a lot of attention and playing an important part in the software development process. Efficient CI/CD strategies enable companies to deliver better value by reaching the market with shorter turnaround times, thereby increasing revenue and gaining market share.
CI/CD practices enable us to proactively resolve bugs, issues, and other problems at a much earlier stage, which significantly reduces the overall cost of software development.
In this course, you will learn the skills required for building CI/CD pipelines using tools such as Google Cloud Build, Google Container Registry, and Cloud Source Repositories. The course starts by showing you how to develop code in Cloud Shell and upload it to a Cloud Source Repository. It then guides you through the CI/CD pipeline stages to build and deploy an application to GKE using Container Registry and Cloud Build.
If you have any feedback relating to this course, please let us know at support@cloudacademy.com.
Learning Objectives
By the end of this course, you will know how to:
- Work with immutable artifacts
- Deploy immutable artifacts using Cloud Build
- Trigger builds
- Set up Cloud Build pipelines
Intended Audience
This course is suited to anyone interested in building CI/CD pipelines on Google Cloud Platform (GCP) using Cloud Build and Container Registry.
Prerequisites
To get the most out of this course, you should have a working knowledge of Docker, containers, and Kubernetes.
Resources
The source code used in this course can be obtained from the following GitHub repository: https://github.com/cloudacademy/source-code-pipeline-demo-repo
In the previous section, we learned about immutable artifacts, and we also learned that we can store them in some kind of repository. For Docker images specifically, we have a few options: DockerHub, which is a public registry; hosting our own private registry; or Google Container Registry, which is a private registry.
DockerHub is commonly used as a container registry, but it is a public registry and does not let us control access to our images. To achieve access control, we have to use a private registry, but deploying and managing a private Docker registry ourselves involves work such as installing the registry software, upgrading or patching the registry software stack, backing up storage, et cetera. This is operational overhead when hosting your own registry on-premises or on virtual machines in the cloud. That's where the Google Container Registry service comes in really handy.
Now let's move forward and learn more about Google Container Registry. We often use the short name GCR for Google Container Registry. Google Container Registry, or GCR, is a service provided on Google Cloud Platform that lets us store Docker images and share them with our team. Now let's look at some of the features of GCR.
First of all, it is a secure, private Docker registry, which means GCR lets us store Docker images in the cloud and also provides ways to control who can view or download the stored images. It has vulnerability scanning capabilities. We always want to use Docker images that are safe for our environment, and GCR scans Docker images for vulnerabilities using a constantly updated database. At the time of writing, it costs about $0.26 per scanned container image.
GCR has native Docker support. We can interact with Google Container Registry using the Docker commands we are already familiar with: we can push or pull images to or from Google Container Registry using the docker push and docker pull commands. We can lock risky or unsafe images. We can even define our own policy, based on which GCR can restrict access to unsafe images and prevent deployment of risky images to environments like Google Kubernetes Engine (GKE).
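As a quick illustration of that native support, the standard Docker CLI commands work unchanged against GCR. The image name here is hypothetical, and PROJECT-ID stands in for your own GCP project ID:

docker pull gcr.io/PROJECT-ID/my-app   # download an image from GCR
docker push gcr.io/PROJECT-ID/my-app   # upload a local image to GCR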
It gives fast access to images. GCR repositories are regional, and we can create a private repository near the compute service that uses it, which helps push and pull images quickly. For example, if our GKE cluster is running in a Europe region, we can create our GCR repository in the EU region so that transferring images and interacting with GCR stays at an optimal level.
GCR also supports auto-built images. Imagine we change our Docker image frequently and push new changes to a version control system, like GitHub or Cloud Source Repositories. GCR provides features that can auto-build new Docker images whenever there is a new commit in version control. We can also tag the Docker images stored in Google Container Registry.
A question that may come to mind is: where does Google Container Registry store images? The answer is that Google Container Registry, or GCR, uses Google Cloud Storage behind the scenes. When we push the first image to GCR, it creates a new Google Cloud Storage (GCS) bucket of the standard storage class and stores all the images of that repository in the bucket.
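If you want to see this for yourself, you can list the buckets in your project with gsutil. The bucket names below follow the naming pattern GCR used at the time of writing (artifacts.PROJECT-ID.appspot.com, with a region prefix for the regional hosts); your project ID will differ:

gsutil ls
# gs://artifacts.cicd-demo-1234.appspot.com/       <- backs gcr.io images
# gs://eu.artifacts.cicd-demo-1234.appspot.com/    <- backs eu.gcr.io images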
Here is what the GCR page looks like in the Google Cloud console. First of all, if you don't see this page, it could be that the GCR API is not enabled for your GCP project. Don't worry, we will see later in the course, through a demo, how to enable the GCR API.
Now, if we look closely at the screenshot, we do not see an option to create a repository or registry on the Google Container Registry service page. So the first question here is: how do I create a repository to store my Docker images? We just need to push the first Docker image, and we will see our repository on the console.
We will shortly learn how to push and pull Docker images using GCR. But before we dive into that, let's learn about the naming convention of Container Registry. In GCR, a registry name is composed of a host name and a project ID. The host name indicates the location where we want to store our Docker images, and the project ID is the GCP project ID.
At the time of writing, Google offers the following values for the host name: gcr.io, us.gcr.io, eu.gcr.io, and asia.gcr.io. If we choose gcr.io as our host name, Container Registry will store the images in the United States region. If we choose us.gcr.io as our host name, Google Container Registry will also store the images in the United States region.
A question quickly comes to mind: if gcr.io and us.gcr.io both store images in the US region, why are there two different host names? Well, the Google documentation says that Google may change the location of gcr.io in the future.
Moving forward: if we want our images to be stored in the EU region, we can choose eu.gcr.io, and if there is a need to keep images in the Asia region, we go for asia.gcr.io. Now let's say I want a container registry for my GCP project, cicd-demo-1234, in the EU region. The registry name would then look like eu.gcr.io/cicd-demo-1234.
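Putting the convention together, here is a rough sketch of how the pieces combine; the image name welcome and the tag v1 are just examples:

# General form: HOST-NAME/PROJECT-ID/IMAGE[:TAG]
gcr.io/cicd-demo-1234/welcome        # stored in the US
us.gcr.io/cicd-demo-1234/welcome     # also stored in the US
eu.gcr.io/cicd-demo-1234/welcome:v1  # stored in the EU, tagged v1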
Now let's learn about pushing an image to GCR and pulling an image from GCR via a demo. To interact with GCR, we need to make sure we have the Google Cloud SDK and Docker installed on the workstation. For this demo, we will use Google Cloud Shell, which comes with gcloud and Docker pre-installed. This is the landing page of the Google Cloud Platform, and here we see a dashboard of the selected GCP project. For this demo, we have a project called cicd-demo-1234.
Before we progress with pushing or pulling an image from GCR, we need to make sure that the GCR API is enabled. For this, click on the navigation bar and select APIs & Services. On this page, click on Enable APIs and Services. This takes us to the API Library page. In the search bar, search for "Google Container Registry API" and select Container Registry API.
If the API is already enabled for this project, we will see a green tick with "API enabled". If the API is not enabled, we will see an Enable button. For this project, the API is not enabled yet, so let's go ahead and click Enable. This activates the Google Container Registry API. Now the API is enabled.
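The console is one way to do this; if you prefer the command line, the same API can be enabled from Cloud Shell with gcloud, assuming you are authenticated against the right project:

gcloud services enable containerregistry.googleapis.com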
Now let's go back to the Google Cloud console dashboard and activate Google Cloud Shell by clicking the button on the blue bar at the top. This starts provisioning a Linux machine for the logged-in user. The machine has about 5 GB of capacity for the home directory, and data we put in the home directory persists between sessions. This workstation has the gcloud command pre-installed, and we can quickly check that by running gcloud auth list. It shows that my credentials are active.
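For reference, that check looks something like this; the account shown is a placeholder for whatever identity you are logged in with:

gcloud auth list
# ACTIVE  ACCOUNT
# *       student@example.com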
Now let's check if it has Docker pre-installed. To perform this check, I can simply run the command docker -v, and it prints the installed Docker version. Before we run further commands, let's maximize the Cloud Shell by clicking on this button.
Now I will run the command gcloud auth configure-docker to configure Docker to use gcloud credentials when interacting with GCR. This has now produced a config.json in the .docker directory. Let's make sure this location is referenced in the PATH environment variable by running an export PATH command.
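As a rough sketch of what that configuration step produces, the file registers gcloud as a Docker credential helper for the GCR host names; the exact contents may vary by SDK version:

cat ~/.docker/config.json
# {"credHelpers": {"gcr.io": "gcloud", "us.gcr.io": "gcloud",
#                  "eu.gcr.io": "gcloud", "asia.gcr.io": "gcloud"}}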
We will now move on and push a Docker image to GCR.
Currently, I do not have any Docker images on my workstation, so I will create a simple Docker image. I'm creating a new directory to keep the demo-related files, naming it demo01, under a cicd directory. In it, I'm creating a simple bash script, welcome.sh, which prints the message "Welcome to the CICD GCP course."
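A minimal sketch of those steps in Cloud Shell might look like this; the directory and file names follow the ones mentioned in the demo:

mkdir -p ~/cicd/demo01 && cd ~/cicd/demo01
cat > welcome.sh <<'EOF'
#!/bin/sh
echo "Welcome to the CICD GCP course."
EOF
chmod +x welcome.sh   # make the script executable so the container can run it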
Now let's containerize this bash script. Here I'm using the Alpine Docker image as the base, copying in our welcome.sh script, and specifying that the script should run on start, as sketched below. We will now build the Docker image locally by running the command docker build .
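The transcript doesn't show the exact Dockerfile, but based on that description it would look roughly like this, followed by the build:

cat > Dockerfile <<'EOF'
FROM alpine:latest
COPY welcome.sh /welcome.sh
ENTRYPOINT ["/welcome.sh"]
EOF
docker build .   # builds an untagged image; note the image ID in the output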
Upon a successful build, we see a message like "Successfully built" followed by the image ID. Here, 843de39d0a26 is the Docker image ID. As we learned previously, we need to follow certain naming standards when working with GCR.
For this demo, we want to keep the Docker image in the EU region, and as the ID above is not very human-readable, I want to name the image welcome. So the name of this Docker image becomes eu.gcr.io/cicd-demo-1234/welcome. To give it this name, we run the command docker tag 843de39d0a26 eu.gcr.io/cicd-demo-1234/welcome, where 843de39d0a26 is the image ID.
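Note that when no tag is given, Docker implicitly applies the tag latest. If we wanted to keep multiple versions side by side, we could tag explicitly; the tag v1 here is just an example:

docker tag 843de39d0a26 eu.gcr.io/cicd-demo-1234/welcome:v1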
Before we push this image to Google Container Registry, let's quickly look at the GCR service in the Google Cloud console. Here, we do not see any repository, because we have not pushed any Docker image to this GCP project yet. Now let's go back to our Cloud Shell and push the Docker image using the command docker push eu.gcr.io/cicd-demo-1234/welcome. We have successfully pushed our first Docker image to Google Container Registry. We will now go back to the GCR UI and check whether we can see this image. We see that there is a repository named eu.gcr.io/cicd-demo-1234 and a Docker image named welcome.
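Besides the console UI, the repository contents can also be checked from the command line with gcloud:

gcloud container images list --repository=eu.gcr.io/cicd-demo-1234
# NAME
# eu.gcr.io/cicd-demo-1234/welcome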
Now we will perform a pull from GCR. We will do this again on our Cloud Shell, but before we run the docker pull command, I'm going to remove all Docker images stored locally on the Cloud Shell. Running the command docker images -a shows that there are a few Docker images. Let's remove them by running the command docker rmi $(docker images -a -q).
If I now list the Docker images, I see an empty list. Now let's pull the welcome image from the GCR repository by running the command docker pull eu.gcr.io/cicd-demo-1234/welcome. We now have the Docker image named welcome on our Cloud Shell instance. We can see that interacting with GCR is easy and feels similar to interacting with any other Docker registry.
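As a final sanity check, we can run the pulled image; assuming the entrypoint executes welcome.sh as sketched earlier, it should print the welcome message:

docker run --rm eu.gcr.io/cicd-demo-1234/welcome
# Welcome to the CICD GCP course.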
With this demo, we conclude our Container Registry lecture. We will learn about Cloud Source Repositories in the next lecture.
Pradeep Bhadani is an IT consultant with over nine years of experience and holds various certifications related to AWS, GCP, and HashiCorp. He is recognized as a HashiCorp Ambassador and a GDE (Google Developers Expert) in Cloud for his knowledge of, and contributions to, the community.
He has extensive experience in building data platforms in the cloud as well as on-premises through the use of DevOps strategies and automation. Pradeep is skilled at explaining technical concepts and helping teams and individuals upskill on the latest technologies.