This course explores how to secure your deployment pipelines on GCP. We will cover the four main techniques for securely building and deploying containers using Google Cloud, and you will follow along with guided demonstrations in Google Cloud Platform so that you get a practical understanding of the techniques covered.
If you have any feedback relating to this course, please contact us at support@cloudacademy.com.
Learning Objectives
By completing this course, you will understand:
- The advantages of using Google managed base images
- How to detect security vulnerabilities in containers using Container Analysis
- How to create and enforce GKE deployment policies using Binary Authorization
- How to prevent unauthorized changes to production using IAM
Intended Audience
This course is intended for:
- Infrastructure/Release engineers interested in the basics of building a secure CI/CD pipeline in GCP
- Security professionals who want to familiarize themselves with some of the common security tools Google provides for container deployment
- Anyone taking the Google “Professional Cloud DevOps Engineer” certification exam
Prerequisites
To get the most out of this course, you should be familiar with:
- Building CI/CD pipelines
- Building containers and deploying them to Kubernetes
- Setting up IAM roles and policies
Now, you can let Google automatically take care of your base image, but you're still on the hook for securing the other layers. Luckily, Google has made detecting issues in your images easier by providing a service called Container Analysis. Google Container Analysis can perform vulnerability scans on your Linux amd64 images. Container Analysis supports both automatic scanning, via the Container Scanning API, and manual scanning, via the On-Demand Scanning API. Since you're billed for each image scanned, you can choose whichever method best suits your needs.
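For reference, enabling both scanning services from the gcloud CLI might look something like this (my-project is just a placeholder project ID):

```bash
# Enable automatic scanning of images pushed to Artifact Registry or Container Registry
gcloud services enable containerscanning.googleapis.com --project=my-project

# Enable manual scanning via the On-Demand Scanning API
gcloud services enable ondemandscanning.googleapis.com --project=my-project
```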
Automatic scanning performs vulnerability scans on container images stored in either Artifact Registry or Container Registry. Once enabled, a scan is triggered every time you push an image. Now, scans happen only once per image digest. That means that adding or modifying tags will not trigger new scans; only changing the contents of the image will.
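As a quick sketch, a push to Artifact Registry is all it takes to kick off an automatic scan; the repository path below is a placeholder, and it assumes Docker has already been authenticated against the registry host:

```bash
# One-time setup: let Docker authenticate to the Artifact Registry host
gcloud auth configure-docker us-docker.pkg.dev

# Build and push the image; pushing a new digest triggers an automatic vulnerability scan
docker build -t us-docker.pkg.dev/my-project/my-repo/my-app:v1 .
docker push us-docker.pkg.dev/my-project/my-repo/my-app:v1
```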
Vulnerability information is continuously updated for up to 30 days as new vulnerabilities are discovered, so you don't need to rescan your images every day to identify new defects. After 30 days, images are considered stale and their metadata is no longer updated.
Now, to extend the monitoring window, you'll need to push the image again, and it is possible to automate re-pushing containers every 30 days if you desire. With On-Demand Scanning, you can manually scan images in Container Registry, Artifact Registry, or even locally on your own machine. Results are available for up to 48 hours after the scan is complete.
Unlike automatic scans, this will not continuously monitor and notify you of newly discovered vulnerabilities. However, it is useful when you want full control over which images are scanned and when. It can also be used for scanning images locally before committing them to a registry.
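As a rough example of that local workflow, an on-demand scan might look like the following; the image name and output path are placeholders, and the scan command prints a scan resource name that you then pass to list-vulnerabilities:

```bash
# Scan an image sitting in the local Docker daemon and capture the scan resource name
gcloud artifacts docker images scan my-app:latest \
    --format='value(response.scan)' > /tmp/scan_id.txt

# List the vulnerabilities found by that scan (results are kept for up to 48 hours)
gcloud artifacts docker images list-vulnerabilities "$(cat /tmp/scan_id.txt)"
```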
Now, when the scan of an image is complete, a list of detected vulnerabilities is generated. This list includes the name of the package that contains the vulnerability, the severity (from critical to minimal), and whether a known fix is available. All results are stored as metadata, which can be accessed via API and used to make decisions during your deployment process. The database for Container Analysis is continuously updated every time a new security threat is identified. So even if an image came up clean in the past, a new update might now flag it as compromised.
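For example, pulling the stored vulnerability metadata for an image in Artifact Registry could be as simple as the describe command below (the image path is a placeholder):

```bash
# Show the vulnerability occurrences attached to this image's digest
gcloud artifacts docker images describe \
    us-docker.pkg.dev/my-project/my-repo/my-app:v1 \
    --show-package-vulnerability
```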
Now, Container Analysis won't detect every single possible security issue. As I said, it currently only supports Linux-related issues on the amd64 platform. Windows containers are not covered. But it can be an easy and effective way to identify the most common security problems.
Daniel began his career as a Software Engineer, focusing mostly on web and mobile development. After twenty years of dealing with insufficient training and fragmented documentation, he decided to use his extensive experience to help the next generation of engineers.
Daniel has spent his most recent years designing and running technical classes for both Amazon and Microsoft. Today at Cloud Academy, he is working on building out an extensive Google Cloud training library.
When he isn’t working or tinkering in his home lab, Daniel enjoys BBQing, target shooting, and watching classic movies.