
Understanding Anthos Service Mesh

Overview
Difficulty: Intermediate
Duration: 54m
Students: 213
Rating: 5/5
Description

Anthos is an enterprise-grade solution from Google aimed at nothing less than modernizing and unifying your entire server infrastructure, wherever it currently exists. Anthos encompasses a very broad spectrum of components, yet it's still very new, so there isn't much good documentation and training material available for it. This can make Anthos seem daunting to learn, but this course aims to show you that the very purpose of Anthos is to simplify your infrastructure complexities for you.

Learning Objectives

  • Understand what Anthos is and does
  • Identify how Anthos fits in with other existing hybrid and multi-cloud solutions
  • Investigate options to modernize existing infrastructure configurations to use Anthos
  • Learn about the key components that make up Anthos, and how to configure them
  • Build and test a modern microservice application for Anthos on GCP
  • Create a CI/CD pipeline for deploying to Anthos

Intended Audience

  • Developers interested in learning about the latest in modern cloud-based development strategies

Prerequisites

  • Familiarity with Kubernetes and GKE
  • Have a Google Cloud Platform account
  • Have the Google Cloud SDK installed and initialized
  • Have Git installed

It is also highly recommended that you have Docker Desktop and Visual Studio Code installed.

Transcript

Understanding Anthos Service Mesh. So far in this lecture group, we've learned the basics about using Anthos GKE to deploy our applications to containers, and we learned how we can use Anthos Config Management to help secure those containers. In this lecture, we'll learn how we can apply the same infrastructure as code approach to manage network configuration and observability for our application components using Anthos Service Mesh.

One common problem with a microservices architecture is that it greatly increases the complexity of our networking infrastructure. With a monolithic application, we may only need to worry about connections on various ports to a single network endpoint, while with microservices we can quickly end up with dozens or more network endpoints between all of our components. With Anthos Service Mesh, we can again use a simple YAML configuration file to define how all our containers interact with each other and with external network connections.

We can take a declarative approach, using Kubernetes namespaces, labels, and annotations to define how our application components are connected instead of defining explicit IPs or URLs in our code. We can even easily ensure mutual TLS (mTLS) is used between our microservices for improved security, without the hassle that's usually associated with setting up mTLS. Since this is again done with just a simple YAML configuration file, we can also employ a GitOps workflow here for managing our network configurations.
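As a concrete illustration of this name-based, declarative approach, a routing rule can reference a service purely by its Kubernetes name rather than an IP address. Here is a minimal sketch using Istio's VirtualService resource (the service name `frontend` and namespace `boa` are illustrative assumptions, not values from this course's sample files):

```yaml
# Route HTTP traffic to the "frontend" service by name, not by IP address.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: frontend
  namespace: boa
spec:
  hosts:
    - frontend            # Kubernetes service name, resolved by the mesh
  http:
    - route:
        - destination:
            host: frontend
```

Because this file lives alongside our other Kubernetes manifests, it can be versioned in Git and applied through the same GitOps workflow as the rest of our configuration.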

Our sample deployment uses some more advanced Jinja templating, so I have here just an example of what a basic Anthos Service Mesh YAML file might look like. The important thing is really just to name your services and assign them labels and a namespace, so you can easily apply other rules to them by simple name references rather than by network addresses. Here we see a YAML snippet we could apply to our Bank of Anthos cluster using the kubectl command-line tool to ensure all of the pods in our BOA namespace strictly enforce mTLS connections.
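A policy of the kind described here might look as follows. This is a sketch based on Istio's PeerAuthentication resource; the namespace name `boa` is assumed from the transcript:

```yaml
# Enforce strict mTLS for every workload in the "boa" namespace.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: enforce-mtls
  namespace: boa
spec:
  mtls:
    mode: STRICT   # reject any plaintext traffic to these pods
```

Applying it is a single command, e.g. `kubectl apply -f mtls-policy.yaml`.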

Anthos Service Mesh is based on the Istio service mesh, and I have included a link to the Istio documentation and some further Istio samples in the course resources. You can check these out for more details on customizing these YAML configuration files for your own needs. The way a service mesh actually works is by injecting a sidecar proxy container into each of our pods. Our application containers only talk to this proxy container for any network communication.
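Sidecar injection is itself enabled declaratively, typically by labeling a namespace. A minimal sketch (the `istio-injection: enabled` label is the generic Istio form; Anthos Service Mesh installs may instead use a revision label, which varies by install):

```yaml
# Label the namespace so the mesh automatically injects an istio-proxy
# sidecar container into every pod created in it.
apiVersion: v1
kind: Namespace
metadata:
  name: boa
  labels:
    istio-injection: enabled
```

Once the namespace is labeled, any new pods scheduled into it get the proxy sidecar without changes to the application's own deployment manifests.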

The proxy containers are then all connected to a single control plane, allowing them to easily share connections with each other or restrict those connections based on our YAML configuration. With Anthos Service Mesh, handling these restrictions is further simplified by integrating Google Cloud Identity and Access Management (IAM) with Kubernetes role-based access control (RBAC). Anthos Service Mesh not only makes it easy for us to configure and manage networking between our services and external networks, but also gives us observability into how our application components are actually communicating.
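Restrictions of this kind are expressed with Istio's AuthorizationPolicy resource. As a sketch, the following would allow only one workload's identity to call another (the namespace, workload label, and service-account name are illustrative assumptions, not values taken from the sample project):

```yaml
# Allow only the "frontend" service account to call pods
# labeled app=balancereader; all other callers are denied.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: balancereader-allow-frontend
  namespace: boa
spec:
  selector:
    matchLabels:
      app: balancereader
  rules:
    - from:
        - source:
            principals: ["cluster.local/ns/boa/sa/frontend"]
```

Because the policy matches callers by their mesh identity (the mTLS certificate tied to a Kubernetes service account) rather than by IP, it keeps working as pods are rescheduled across the cluster.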

If we click here to go to the Anthos Service Mesh page, we'll see a table with all the microservices running in our sample Bank of Anthos project. If we select BOA from the namespace dropdown, we'll see only the Bank of Anthos services and filter out all of the system services. This sample project includes a load generator utility that simulates some fake traffic to our application, so we have some data to look at here. We can click through here and get more information about any inbound and outbound connections to a particular service.

We can also go back and click here to get the services topology view, which gives us an actual visual representation of how our microservices are connected to each other. With this observability into our network communications, it's even possible for us to create service level indicators (SLIs) and define service level objectives (SLOs) for our application, and get alerted when our application components aren't behaving as expected.

We now have a basic understanding of how to deploy our application to containers with Anthos GKE, secure those containers with Anthos Config Management, and easily configure networking for our clusters with Anthos Service Mesh. We can do all this with just a Dockerfile and a few YAML files, completely abstracting all the difficult concepts so they are separated from our actual application code. Coming up in the final lecture group, we'll learn how to integrate Anthos development into our IDE and create a simple CI/CD pipeline to automate our deployment process.

About the Author

Arthur spent seven years managing the IT infrastructure for a large entertainment complex in Arizona where he oversaw all network and server equipment and updated many on-premise systems to cloud-based solutions with Google Cloud Platform. Arthur is also a PHP and Python developer who specializes in database and API integrations. He has written several WordPress plugins, created an SDK for the Infusionsoft API, and built a custom digital signage management system powered by Raspberry Pis. Most recently, Arthur has been building Discord bots and attempting to teach a Python AI program how to compose music.