Kubernetes has become one of the most common container orchestration platforms. It has regular releases, a wide range of features, and is highly extensible. Managing a Kubernetes cluster requires a lot of domain knowledge, which is why services such as GKE exist. Certain aspects of a Kubernetes cluster vary based on the underlying implementation.
In this course, we’ll explore some of the ways that GKE implements a Kubernetes cluster. Having a basic understanding of how things are implemented will set the stage for further learning.
- Learn how Google implements a Kubernetes cluster
- Learn how GKE implements networking
- Learn how GKE implements logging and monitoring
- Learn how to scale both nodes and pods
- Engineers looking to understand basic GKE functionality
To get the most out of this course, you should have a general knowledge of GCP, Kubernetes, Docker, and high availability.
Hello and welcome. In this lesson, we'll be reviewing some of the aspects of GKE that relate to security. The Kubernetes project follows the concepts defined in the Kubernetes cloud-native security documentation, and looking at this diagram, we can see that it uses a layered approach.
In addition to the security considerations built into Kubernetes, GKE also has some security mechanisms at different layers.
By the end of this lesson, you'll be able to describe how to become authenticated in order to interact with the cluster, describe how audit logging is implemented in GKE, and describe how the GKE Sandbox adds another layer of workload protection.
Let's imagine that you've just created a new cluster, it's up and running, and you're ready to deploy some workloads. You open the terminal, you use kubectl apply, and it fails. The error indicates that you're not authenticated, so what's next?
kubectl is just a command-line interface that we can use to interact with any cluster via its endpoint. Before we can interact with a cluster, we need to tell kubectl all about it, and the way we do that is through a file called kubeconfig. That file stores details about the different clusters that we want to interact with.
The kubeconfig file is local to the kubectl binary, and in some cases its entries are managed indirectly. For example, when running gcloud container clusters create, an entry is automatically added to the kubeconfig for us. In other cases, such as creating a cluster in the console, the kubeconfig isn't managed at all. In those cases, where we're not authenticated automatically, we need to create the kubeconfig entry by running the command gcloud container clusters get-credentials. Running this adds an entry for the given cluster to the kubeconfig and sets that cluster as our current context, which means the commands we issue will automatically target that cluster by default. These commands leverage the credentials of the currently authenticated gcloud user, which allows us to control project-level access with Cloud IAM.
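As a minimal sketch of that flow, assuming a zonal cluster named demo-cluster in us-central1-a (both names are hypothetical), the commands might look like this:

```shell
# Creating the cluster with gcloud adds a kubeconfig entry automatically.
gcloud container clusters create demo-cluster --zone us-central1-a

# For a cluster created elsewhere (e.g. in the console), fetch the
# credentials and kubeconfig entry manually.
gcloud container clusters get-credentials demo-cluster --zone us-central1-a

# Confirm which context kubectl will target by default.
kubectl config current-context
```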
In cases where we need more granular, object-level control, we can also use the built-in Kubernetes RBAC system to control access to specific objects in the cluster. So the way we authenticate against a cluster is really through the kubeconfig file, which we can manage using the gcloud container clusters commands.
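As a sketch of what that object-level control can look like, the Role and RoleBinding below grant read-only access to pods in the default namespace. The user email and object names are illustrative, not from the lesson:

```yaml
# Role: allows read-only operations on pods in the "default" namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
# RoleBinding: grants the Role to a specific (hypothetical) Google account.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: default
  name: read-pods
subjects:
- kind: User
  name: dev@example.com
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```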
Changing topics, let's talk about audit logging. Imagine you're seeing changes to the cluster and you're not sure how they happened, so what do you do? How do you investigate the who, what, when, where, and why of the incident? Kubernetes includes an audit log that tracks all calls sent to the API server. Audit logging answers the who, what, when, and where portion of that list; the why is up to us to investigate. GKE implements audit logging using Cloud Audit Logs and Stackdriver Logging. GKE can log both Admin Activity and Data Access logs. Admin Activity logging is enabled by default and doesn't add any additional cost, while Data Access logging is disabled by default since it does cost extra.
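As a sketch, Data Access logs are typically turned on through the project's IAM policy audit configuration. The snippet below is an illustrative auditConfigs entry that enables data-read and data-write logging for the GKE (container) API:

```yaml
# Illustrative auditConfigs fragment from a project's IAM policy,
# enabling Data Access logs for the GKE API.
auditConfigs:
- service: container.googleapis.com
  auditLogConfigs:
  - logType: DATA_READ
  - logType: DATA_WRITE
```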
Moving on, let's talk about the GKE Sandbox. Docker is not intrinsically a security mechanism. Docker containers share the host's kernel, which could put the host at risk in the event of a kernel exploit. The amount of risk depends on the type of workload. For workloads that might be higher risk, such as those that execute untrusted code, GKE has a feature called the GKE Sandbox, which serves as another layer of security for select use cases. It's implemented using Google's open-source project gVisor, which is a userspace implementation of the Linux kernel API. The sandbox is a feature that is enabled per node pool, though you can't enable it on the default node pool. The way it works is, roughly, that each pod in the pool uses its own gVisor kernel, which isolates the containers in the pod from the node's kernel. Like everything in tech, the sandbox has its trade-offs. In return for the added layer of isolation, you do have some restrictions. So if you feel you have a use case that would be ideal for this added isolation, make sure you fully evaluate those restrictions before implementing.
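As a sketch of how this is enabled, assuming an existing cluster named demo-cluster (the cluster and pool names are hypothetical), a sandboxed node pool is created with the --sandbox flag, and a pod opts in via the gvisor RuntimeClass:

```shell
# Create a node pool whose pods run inside the gVisor sandbox.
# (demo-cluster and gvisor-pool are hypothetical names.)
gcloud container node-pools create gvisor-pool \
  --cluster demo-cluster \
  --sandbox type=gvisor
```

A workload then requests the sandboxed runtime in its pod spec:

```yaml
# Pod spec that requests the sandboxed runtime via RuntimeClass.
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-pod
spec:
  runtimeClassName: gvisor
  containers:
  - name: app
    image: nginx
```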
Alright, that's going to wrap up this lesson. Thank you so very much and I will see you in the next lesson.
Ben Lambert is a software engineer and was previously the lead author for DevOps and Microsoft Azure training content at Cloud Academy. His courses and learning paths covered Cloud Ecosystem technologies such as DC/OS, configuration management tools, and containers. As a software engineer, Ben’s experience includes building highly available web and mobile apps. When he’s not building software, he’s hiking, camping, or creating video games.