Course Introduction
Overview of Kubernetes
Deploying Containerized Applications to Kubernetes
The Kubernetes Ecosystem
Course Conclusion
Kubernetes is a production-grade container orchestration system that helps you maximize the benefits of using containers. Kubernetes provides you with a toolbox to automate deploying, scaling, and operating containerized applications in production. This course will teach you all about Kubernetes including what it is and how to use it.
This course is paired with an Introduction to Kubernetes Playground lab that you can use to follow along with the course using your own Kubernetes cluster. The lab creates a Kubernetes cluster for you to use as we perform hands-on demos in the course. All of the commands that are used in the course are included in the lab to make it easy to follow along.
Learning Objectives
- Describe Kubernetes and what it is used for
- Deploy single and multiple container applications on Kubernetes
- Use Kubernetes services to structure N-tier applications
- Manage application deployments with rollouts in Kubernetes
- Ensure container preconditions are met and keep containers healthy
- Learn how to manage configuration, sensitive, and persistent data in Kubernetes
- Discuss popular tools and topics surrounding Kubernetes in the ecosystem
Intended Audience
This course is intended for:
- Anyone deploying containerized applications
- Site Reliability Engineers (SREs)
- DevOps Engineers
- Operations Engineers
- Full Stack Developers
Prerequisites
You should be familiar with:
- Working with Docker and comfortable using it at the command line
Source Code
The source files used in this course are available here:
Updates
August 27th, 2019 - Complete update of this course using the latest Kubernetes version and topics
May 7th, 2021 - Complete update of this course using the latest Kubernetes version and topics
The Kubernetes ecosystem is a very vibrant and healthy mixture of tools that can help you get more done and work more efficiently with Kubernetes. So I wanted to give you a sense of what's happening around the core of Kubernetes. There are really way too many tools and topics to choose from, so I've only selected a few that are popular and worth knowing about. So, just know that I'm touching the surface of this ocean.
The first tool that I'll mention is Helm. Helm is Kubernetes' package manager. You write packages called charts. Then you use the helm CLI to install and upgrade charts as releases on your cluster. Charts contain all the resources, like services and deployments, required to run a particular application. Helm charts make it easy to share complete applications built for Kubernetes. Helm charts can also be found on the Helm Hub, similar to how you would find Docker images on Docker Hub.
As an example of how you might use Helm, consider our sample application, where we used Redis for our data tier. Rather than building the data tier up from scratch and managing the resources ourselves, we could take advantage of the Redis charts available for Helm. There is a highly available Redis chart that removes the single point of failure in our example application. Charts can be installed with just a single Helm CLI command. Using available charts for running common applications can really free you up to focus on the applications that are core to your business. So definitely take a look at what's available before deciding to roll your own solution.
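As a sketch of what that single-command workflow looks like, the commands below install a Redis chart as a release. The repository URL, chart name, release name, and namespace are illustrative assumptions, not part of the course — check the Helm Hub for the current chart before using them.

```shell
# Add a chart repository and refresh the local chart index
# (the Bitnami repo is one common source of a Redis chart; verify before use)
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

# Install the chart as a release named "data-tier" in the "deployment" namespace
helm install data-tier bitnami/redis --namespace deployment

# Later, upgrade the release to pick up new chart versions or changed values
helm upgrade data-tier bitnami/redis --namespace deployment
```

The release name lets Helm track everything the chart created, so a single `helm upgrade` or `helm uninstall` operates on all of the chart's resources at once.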
Here's another example. You can create a chart for your entire microservice application, packaging up the services, deployments, everything, and then subsequently publish it so other members of your organization or the public can use it. The next tool I want to mention is Kustomize, with a K. Kustomize allows you to customize YAML manifests in Kubernetes. It can help you manage the complexity of your applications running inside of Kubernetes.
An obvious example is managing different environments, such as test and staging. We saw how we could use config maps and secrets to help with that, but Kustomize makes it even easier. Kustomize works by using a kustomization.yaml file that declares rules for customizing or transforming resource manifest files. The original manifests are untouched and remain usable as they are, which is a massive benefit compared to other tools that require templating in the manifests, rendering them unusable on their own.
Some examples of the kinds of rules you can create with Kustomize are generating config maps and secrets from files. In our data tier we had to write the contents of the Redis config file inside the config map data. With Kustomize you can generate it directly from the config file, rather than trying to keep track of both the config file and the config map and keep them in sync. You can also configure common fields across multiple resources. For example, you can set the namespace, labels, annotations, and name prefixes and suffixes using Kustomize itself. That makes it easy to customize around your organization's conventions without polluting the original manifest files. The original manifests remain pristine and easy to share, and also reusable in other situations.
In our example, we had to keep our tier labels and prefixes manually synchronized across all of the resources. And then we had to specify the namespace at the command line every time to avoid hard coding a namespace in the manifests. Kustomize can contain all of that complexity for us. The other thing that Kustomize can do is apply patches to any field in a manifest. Kustomize allows you to define a base group of resources and apply overlays that customize the base. This is an easy way to manage separate environments, for example by applying a dev name prefix and label for development environments.
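Pulling those ideas together, a kustomization.yaml for a dev environment might look like the sketch below. The file names, prefix, and label values are illustrative assumptions standing in for our example application's manifests.

```yaml
# kustomization.yaml -- a minimal sketch; names and files are illustrative
namespace: deployment        # set the namespace on every resource
namePrefix: dev-             # prefix every resource name for the dev environment
commonLabels:
  env: development           # add a common label to every resource

# Generate a ConfigMap directly from the Redis config file,
# instead of copying its contents into a manifest by hand
configMapGenerator:
  - name: redis-config
    files:
      - redis.conf

resources:                   # the untouched base manifests
  - data-tier.yaml
  - app-tier.yaml
```

The base manifests listed under `resources` stay free of any environment-specific values, so the same files can back a test or staging overlay with a different kustomization.yaml.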
Kustomize has been directly integrated with kubectl since Kubernetes 1.14. The kubectl kustomize command prints the customized resource manifests with the customizations defined in kustomization.yaml. To accept the customizations and realize them in your cluster, you can include the --kustomize or -k option with kubectl create or apply.
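In command form, that workflow looks like the following sketch; the directory path is an illustrative assumption for where a kustomization.yaml might live.

```shell
# Print the customized manifests without applying them,
# useful for reviewing what Kustomize will produce
kubectl kustomize ./overlays/dev

# Apply the customized resources to the cluster
# (-k is the short form of --kustomize)
kubectl apply -k ./overlays/dev
```

Previewing with `kubectl kustomize` first is a handy habit, since it shows the fully transformed manifests exactly as they would be sent to the cluster.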
The next one I'd like to discuss is Prometheus. Prometheus is an open source monitoring and alerting system. Prometheus is built from many components, but at its core is a server for pulling in time series metric data and storing it. Prometheus was originally inspired by an internal monitoring tool at Google called Borgmon, similar to how Kubernetes itself was inspired by the Borg project at Google. Given that history, it should come as no surprise that Prometheus is the de facto standard solution for monitoring Kubernetes.
Kubernetes components supply their own metrics in Prometheus format, making it easy to integrate. You can collect a lot more metrics with Prometheus than with the basic metrics server we used in this course, including metrics from outside of Kubernetes. There are also adapters that allow Kubernetes to get metrics from Prometheus, so you can do things like auto-scale pods based on custom metrics in Prometheus instead of only the CPU utilization that we saw in the course.
Prometheus has some built-in options for visualization, but it is commonly paired with Grafana to create visualizations and dashboards. Prometheus also lets you define alert rules to send out notifications. It's incredibly easy to install Prometheus in a cluster, and one way to do it is by using a Helm chart.
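A Helm-based install might look like the sketch below. The repository URL, chart name, release name, and namespace reflect the community-maintained prometheus-community project, but they are assumptions here — verify them against the chart's documentation before use.

```shell
# Add the community Prometheus chart repository and refresh the index
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

# Install Prometheus as a release named "monitoring",
# creating the namespace if it doesn't exist yet
helm install monitoring prometheus-community/prometheus \
  --namespace monitoring --create-namespace
```

This is exactly the kind of case where Helm shines: the chart wires up the Prometheus server, its configuration, and supporting resources so you don't have to author those manifests yourself.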
I'll round out our talk about all these tools with two members of the Kubernetes ecosystem that relate to the types of applications that you deploy on top of Kubernetes. The first is Kubeflow. Kubeflow is aimed at making deployment of machine learning workloads on Kubernetes simple, scalable, and portable.
Kubeflow is a complete machine learning stack. You can use it for complete end to end machine learning including building models, training them, and serving them all within Kubeflow. Being built on Kubernetes, you can deploy it anywhere and get all of the nice features that Kubernetes provides like auto-scaling. Definitely check out Kubeflow if your requirements involve machine learning.
And the last one is Knative. Knative is a platform built on top of Kubernetes for building, deploying, and managing serverless workloads. Serverless has gained a lot of momentum because it allows developers and companies to focus more on the code and less on the servers that run it. This trend started with AWS Lambda, which is synonymous with serverless. However, as the industry shifts to multi-cloud and avoiding vendor lock-in, solutions built on top of Kubernetes can be deployed anywhere. This gives you the portability that you would get with containers, but for your entire serverless platform. Knative is not the only game in town when it comes to serverless, but it does have the support of industry heavyweights like Google, IBM, and SAP.
This concludes our lesson about the Kubernetes ecosystem. We only touched on a few topics but I hope you can see the breadth of the ecosystem. I hope you can use some of these tools that we've seen but know that there's a lot more to explore on your own.
Jonathan Lewey is a DevOps Content Creator at Cloud Academy. With experience in the networking and operations side of the traditional Information Technology industry, he has also led the creation of applications for corporate integrations and served as a Cloud Engineer supporting developer teams. Jonathan has a number of specialties, including the Cisco Certified Network Associate (R&S / Sec), AWS Developer Associate, and AWS Solutions Architect certifications, and he is certified in Project Management.