
Understanding Anthos Config Management

Overview
Difficulty
Intermediate
Duration
54m
Students
129
Ratings
5/5
Description

Anthos is an enterprise-grade solution from Google aimed at nothing less than modernizing and unifying your entire server infrastructure, wherever it currently exists. Anthos encompasses a very broad spectrum of components, yet it’s still very new, so there isn’t a lot of good documentation and training material available for it yet. This can all make Anthos seem very daunting to learn, but this course aims to show you that the very purpose of Anthos is to simplify your infrastructure complexities for you.

Learning Objectives

  • Understand what Anthos is and does
  • Identify how Anthos fits in with other existing hybrid and multi-cloud solutions
  • Investigate options to modernize existing infrastructure configurations to use Anthos
  • Learn about the key components that make up Anthos, and how to configure them
  • Build and test a modern microservice application for Anthos on GCP
  • Create a CI/CD pipeline for deploying to Anthos

Intended Audience

  • Developers interested in learning about the latest in modern cloud-based development strategies

Prerequisites

  • Familiarity with Kubernetes and GKE
  • Have a Google Cloud Platform account
  • Have the Google Cloud SDK installed and initialized
  • Have Git installed

It is also highly recommended that you have Docker Desktop and Visual Studio Code installed.

Transcript

Understanding Anthos Config Management. A common complaint when working with Kubernetes is that because container orchestration is still quite new, security best practices when working with containerized workloads are not yet well known by many software developers or system administrators. Due to the nature of infrastructure as code, this means if we make a mistake configuring our container somewhere, that misconfiguration will be propagated to every instance of our application, on any cluster we deploy it to. This problem is amplified when considering a multi-cloud or hybrid cloud deployment where these same containers could also be running across different hosting environments simultaneously.

The good news is we can solve this problem with another bit of infrastructure as code using Anthos Config Management. With Anthos Config Management, we can apply a declarative model to our infrastructure security using standard Kubernetes definitions like Namespaces, labels, and annotations to select and enforce rules across our clusters. Anything we can write a Kubernetes YAML or JSON file for, we can also secure using Anthos Config Management. These configurations are saved in a central Git repository, which allows us to sync our changes across all of our Anthos clusters regardless of their location. Since it's just a standard Git repository, this also gives us the ability to roll back changes or even incorporate a full GitOps workflow into our infrastructure security management.
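To make the declarative model concrete, here is the kind of standard Kubernetes definition that could live in the central config repository and be synced to every enrolled cluster. This is a minimal sketch, not taken from the course demo: the repository path in the comment and the label values are hypothetical.

```yaml
# namespaces/prod/namespace.yaml  (hypothetical path inside the config repo)
# A plain Kubernetes Namespace definition; once committed, Anthos Config
# Management syncs it to all registered clusters, wherever they run.
apiVersion: v1
kind: Namespace
metadata:
  name: prod
  labels:
    environment: production
```

Because this is an ordinary file in an ordinary Git repository, reverting a bad change is just a `git revert` away.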

Now not only have we overcome a common problem managing container-based applications, but we can also take things a step further to create a significant optimization to our infrastructure management. Anthos has a Policy Controller that checks these configuration files and enforces their rules against every Kubernetes API request. With this, we can create guardrails for our applications, by defining security rules that are enforced on all of our containers across all of our Anthos deployments. As an example, let's make sure that we don't have any containers in our production Anthos environments running with root user access.

A common mistake made by developers new to working with containers is to give themselves full permissions in their development environment for convenience. This avoids pesky permissions problems during development, but it presents a massive security vulnerability that could compromise our entire application if it is propagated to our production environment. We'll use the Cloud Shell Editor from the Google Cloud Console for this demo. This is a great tool for administrators because it provisions a Linux environment already authenticated with our Google resources, all right inside our browser. This way we can easily make configuration changes from anywhere we have an internet connection, without having to worry about setting up a development environment.

First let's make a tutorial directory, then run this curl command in that directory. This command downloads a script that configures our Cloud Shell environment to work with our Anthos sample deployment. We can then load this environment using this source command. We can tell that everything is working correctly if this nomos status command now runs successfully. Let's also make sure we have our sample deployment repository configured correctly in our Cloud Shell with these quick git commands.
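The steps just described follow roughly this shape. This is an illustrative sketch only: the setup script URL and repository URL shown in the actual demo are not reproduced here, so the placeholders below are hypothetical.

```shell
# Create a working directory and fetch the environment setup script.
# The URL below is a placeholder, not the real script location from the demo.
mkdir tutorial && cd tutorial
curl -LO https://example.com/anthos-tutorial-setup.sh
source ./anthos-tutorial-setup.sh

# If Cloud Shell is configured correctly, nomos can report sync status
# for the enrolled clusters.
nomos status

# Make sure the sample config repository is cloned locally
# (repository URL is a placeholder).
git clone https://example.com/anthos-sample-deployment.git
cd anthos-sample-deployment
```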

We're now ready to create a constraint.yaml file with rules that tell Anthos to constrain privileged containers on any pod, or in other words, to prevent any of our pods from running with elevated permissions. We can copy and paste our YAML config into our Cloud Shell by wrapping it in a cat command, like this. We can then run nomos vet to verify our configuration file is valid. If no errors are returned, we can commit the change to our repository. We can run watch nomos status to confirm our changes are applied. We can also test that it's working by intentionally trying to deploy a pod with elevated privileges.
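A sketch of this sequence, assuming the shell is already inside the config repository checkout: the K8sPSPPrivilegedContainer kind comes from the Policy Controller constraint template library, while the file path, constraint name, and commit message here are illustrative.

```shell
# Write the constraint file via a heredoc, as described in the lecture.
cat <<EOF > constraint.yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sPSPPrivilegedContainer
metadata:
  name: no-privileged-containers
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
EOF

# Validate the repository's configuration before committing.
nomos vet

# Commit and push so Anthos Config Management syncs the new policy.
git add constraint.yaml
git commit -m "Constrain privileged containers"
git push

# Watch until all clusters report the change as synced.
watch nomos status
```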

Here is a YAML file to deploy a privileged nginx container. If we run kubectl on this file to try to deploy the pod, we are greeted with an error: the Policy Controller has denied the action because it now violates our security definitions. I have provided a link in the course resources to the Policy Controller template library, which gives you a starting point for many other common rules you might want to apply as guardrails to your infrastructure. Now that we understand the basics of creating and securing our containerized workloads, in the next lecture we'll explore how we can easily and securely network our application endpoints with Anthos Service Mesh.
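A pod definition along these lines can serve as the test case. This sketch is not the exact file from the demo; the pod and file names are illustrative, and the key line is the `privileged: true` security context that the constraint forbids.

```yaml
# nginx-privileged.yaml - a deliberately privileged pod used to test the guardrail
apiVersion: v1
kind: Pod
metadata:
  name: nginx-privileged
spec:
  containers:
    - name: nginx
      image: nginx
      securityContext:
        privileged: true   # the Policy Controller constraint should reject this
```

Applying it with `kubectl apply -f nginx-privileged.yaml` should be rejected at admission time, with an error message identifying the violated constraint.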

About the Author
Arthur Feldkamp
IT Operations Manager and Cloud Administrator, Database and API Integrations Specialist
Students
647
Courses
3

Arthur spent seven years managing the IT infrastructure for a large entertainment complex in Arizona where he oversaw all network and server equipment and updated many on-premise systems to cloud-based solutions with Google Cloud Platform. Arthur is also a PHP and Python developer who specializes in database and API integrations. He has written several WordPress plugins, created an SDK for the Infusionsoft API, and built a custom digital signage management system powered by Raspberry Pis. Most recently, Arthur has been building Discord bots and attempting to teach a Python AI program how to compose music.