Understanding Anthos components
Working with Anthos
Anthos is an enterprise-grade solution from Google aimed at nothing less than modernizing and unifying your entire server infrastructure, wherever it currently lives. Anthos encompasses a broad spectrum of components, yet it's still very new, so there isn't much good documentation or training material available for it yet. All of this can make Anthos seem daunting to learn, but this course aims to show you that the very purpose of Anthos is to simplify those infrastructure complexities for you.
- Understand what Anthos is and does
- Identify how Anthos fits in with other existing hybrid and multi-cloud solutions
- Investigate options to modernize existing infrastructure configurations to use Anthos
- Learn about the key components that make up Anthos, and how to configure them
- Build and test a modern microservice application for Anthos on GCP
- Create a CI/CD pipeline for deploying to Anthos
- Developers interested in learning about the latest in modern cloud-based development strategies
- Familiarity with Kubernetes and GKE
- Have a Google Cloud Platform account
- Have the Google Cloud SDK installed and initialized
- Have Git installed
It is also highly recommended that you have Docker Desktop and Visual Studio Code pre-installed as well.
Understanding Anthos GKE. Welcome back! In the first lecture group, we learned what Anthos is and how it's made up of many different components working together. We also talked about how Anthos relates to other cloud services and on-premise solutions. In this lecture group, we're going to move away from the Operations side and focus on the developer's experience of working with Anthos. While Anthos has many great features for migrating and modernizing applications and infrastructure, we'll first need to know how to build a modern microservices application for those features to be truly useful.
Through the rest of this course, we'll look at just the components a developer needs in order to build a microservices-based Anthos application. To start with, we'll need to enable Anthos on our GCP account. Then we'll deploy the Anthos Sample Deployment on our project. This sample deployment creates a demo application for a fictional Bank of Anthos that we can use to explore the main components we need to know when developing on Anthos. Remember that there is a free 30-day demo period for the Anthos service itself, but the resources created by this sample deployment running on Anthos still have costs associated with them.
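That first step of enabling Anthos can be sketched from the command line. This is a dry-run sketch only: the project ID is a placeholder (not from the course), and actually executing the commands requires an installed, initialized Cloud SDK.

```shell
# Dry-run sketch of enabling Anthos on a GCP project.
# "my-anthos-demo" is a placeholder project ID -- use your own.
PROJECT_ID="my-anthos-demo"
RUN=echo   # set RUN= (empty) to actually execute; requires gcloud

# Point the SDK at the demo project, then enable the Anthos API.
$RUN gcloud config set project "$PROJECT_ID"
$RUN gcloud services enable anthos.googleapis.com
```

With `RUN=echo` the commands are only printed, which is a convenient way to review what a script will do before running it for real.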
Please remember to clean up your project resources or delete your demo project after this course so you don't incur any unexpected charges. The central component to everything Anthos that we need to understand best is Google Kubernetes Engine. We'll orchestrate and manage our applications running as containers deployed to pods operating in clusters on GKE. From the Anthos Dashboard we can navigate to the Anthos Clusters page.
If you've used GKE before, this will look familiar because it's exactly the same as the GKE Clusters page. We could click Workloads from here to see what's actually running in our pods, but let's click Services & Ingress first and find the external endpoints exposed for our application. We can see the endpoint here on port 80, so we should be able to open it in our browser and see the web interface for our Bank of Anthos site. There are a lot of moving pieces here, and even just navigating around your clusters through the Anthos dashboard can seem a little overwhelming. No need to panic, though: remember that Anthos is a managed service, and this dashboard is mostly showing you all the complexity that it's managing for you.
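An external endpoint like the one on port 80 typically comes from a Kubernetes Service of type LoadBalancer. Here is a minimal sketch of such a manifest; the names, labels, and ports are illustrative, not taken from the sample deployment.

```yaml
# Hypothetical Service manifest -- names, labels, and ports are illustrative.
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: LoadBalancer    # asks GCP to provision an external IP
  selector:
    app: frontend       # routes traffic to pods carrying this label
  ports:
  - port: 80            # the external endpoint shown in the dashboard
    targetPort: 8080    # the port the container actually listens on
```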
As developers, all we need to know is how to package our code to run in a container by creating a Dockerfile for our application. We can then fully automate deployment of our updated application code to our GKE cluster whenever we commit changes, using some simple YAML configuration files, without really needing to know anything else about how Anthos actually handles those orchestration tasks for us. We'll go over building Docker containers from our code in more detail in the final lecture group. When building more complex applications, we'll probably require multiple containers as well as some other GCP resources. We can define all these components for the entire deployment together in another configuration file.
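As a sketch of what that packaging step looks like, here is a minimal hypothetical Dockerfile for a small Python web service. The file names, base image, and port are assumptions for illustration, not taken from the sample deployment.

```dockerfile
# Hypothetical Dockerfile for a small Python microservice.
FROM python:3.9-slim

WORKDIR /app

# Install dependencies first so this layer is cached between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy in the application code and declare the port it listens on.
COPY . .
EXPOSE 8080

CMD ["python", "main.py"]
```

Copying `requirements.txt` before the rest of the code is a common pattern: it lets Docker reuse the dependency layer on rebuilds where only the application code changed.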
We saw how we were able to easily install this Anthos Sample Deployment with Google Click to Deploy through the GCP Marketplace. This deployment example implements some more advanced Infrastructure as Code using Jinja templates. You can see these config files are very human readable, and the only real obstacle is going through the documentation to learn all the service names and settings you need to use with them. Simpler deployments can be managed with just basic YAML configuration files, but Jinja allows us to include variables and logic in our configurations that make them more easily redeployable.
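To give a flavor of the idea, a Deployment Manager template written in Jinja might parameterize a GKE cluster along these lines; the resource name and property names here are illustrative, not from the sample deployment.

```jinja
{# Hypothetical Deployment Manager template -- property names are illustrative. #}
resources:
- name: {{ properties["clusterName"] }}
  type: container.v1.cluster
  properties:
    zone: {{ properties["zone"] }}
    cluster:
      initialNodeCount: {{ properties["nodeCount"] }}
```

Because the cluster name, zone, and node count are Jinja variables rather than hard-coded values, the same template can be redeployed across projects and environments just by supplying different properties.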
Some links for learning more about creating these YAML and Jinja configuration files from scratch are included in the course resources. We'll just be working with the configuration files provided with the Anthos Sample Deployment for the rest of this course. In the next lecture, we'll take a look at how we can apply this same Infrastructure as Code approach to securing our containerized applications using Policy Controller with Anthos Config Management.
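As a small preview of that approach, Policy Controller constraints are themselves just YAML. This sketch uses the common K8sRequiredLabels constraint template from the underlying Gatekeeper project; the constraint name and required label are illustrative.

```yaml
# Hypothetical constraint -- the name and label are illustrative.
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: require-owner-label
spec:
  match:
    kinds:
    - apiGroups: [""]
      kinds: ["Namespace"]   # apply the policy to namespaces
  parameters:
    labels: ["owner"]        # every namespace must carry an "owner" label
```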
Arthur spent seven years managing the IT infrastructure for a large entertainment complex in Arizona where he oversaw all network and server equipment and updated many on-premise systems to cloud-based solutions with Google Cloud Platform. Arthur is also a PHP and Python developer who specializes in database and API integrations. He has written several WordPress plugins, created an SDK for the Infusionsoft API, and built a custom digital signage management system powered by Raspberry Pis. Most recently, Arthur has been building Discord bots and attempting to teach a Python AI program how to compose music.