
Kubernetes Ecosystem





Kubernetes is a production-grade container orchestration system that helps you maximize the benefits of using containers. Kubernetes provides you with a toolbox to automate deploying, scaling, and operating containerized applications in production. This course will teach you all about Kubernetes including what it is and how to use it.

This course is paired with an Introduction to Kubernetes Playground lab that you can use to follow along with the course using your own Kubernetes cluster. The lab creates a Kubernetes cluster for you to use as we perform hands-on demos in the course. All of the commands that are used in the course are included in the lab to make it easy to follow along.

The source files used in this course are available in the course's GitHub repository.

Learning Objectives 

  • Describe Kubernetes and what it is used for
  • Deploy single and multiple container applications on Kubernetes
  • Use Kubernetes services to structure N-tier applications 
  • Manage application deployments with rollouts in Kubernetes
  • Ensure container preconditions are met and keep containers healthy
  • Manage configuration data, secrets, and persistent data in Kubernetes
  • Discuss popular tools and topics surrounding Kubernetes in the ecosystem

Intended Audience

This course is intended for:

  • Anyone deploying containerized applications
  • Site Reliability Engineers (SREs)
  • DevOps Engineers
  • Operations Engineers
  • Full Stack Developers


You should be familiar with:

  • Working with Docker and comfortable using it at the command line


August 27th, 2019 - Complete update of this course using the latest Kubernetes version and topics



The Kubernetes ecosystem is vibrant, with a healthy mixture of tools that can help you get more done and work better with Kubernetes. I want to give you a sense of what's happening around the core of Kubernetes. There are far too many tools and topics to cover them all, so I've selected a few that are popular and worth knowing about. Just know that I'm really only scratching the surface.


The first tool that I'll mention is Helm (https://v3.helm.sh/). Helm is Kubernetes' package manager. You write packages called charts, then you use the Helm CLI to install and upgrade charts as releases on your cluster. Charts contain all the resources, like services and deployments, required to run a particular application. Helm charts make it easy to share complete applications built for Kubernetes. You can find charts on Helm Hub, similar to how you find Docker images on Docker Hub.


As an example of how you might use Helm, in our sample application we used Redis for our data tier. Rather than building the data tier up from scratch and managing the resources ourselves, we could take advantage of the Redis charts available for Helm. There is a highly available Redis chart that overcomes the single point of failure in our example application. Charts can be installed with a single Helm CLI command. Using available charts for running common applications can really free you up to focus on the applications that are core to your business, so definitely take a look at what's available before deciding to roll your own solution. Here's another example: you can create a chart for your entire microservice application, package up all the services, deployments, everything, and publish it so other members of your organization, or anyone, can use it.
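As a sketch of that workflow, installing a highly available Redis release might look like the commands below. The repository, chart, and value names here are illustrative assumptions; search Helm Hub for the chart you actually want and check its documentation for supported values.

```shell
# Add the repository that hosts the chart, then refresh the local index
# (repo name and URL are assumptions for this sketch).
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update

# Install the chart as a release named "data-tier" in a namespace,
# overriding a default value to enable multiple replicas.
helm install data-tier bitnami/redis \
  --namespace deployments \
  --set replica.replicaCount=3

# Later, upgrade the release in place, or roll back to a prior revision.
helm upgrade data-tier bitnami/redis --set replica.replicaCount=5
helm rollback data-tier 1
```

The release name ("data-tier") is how Helm tracks everything the chart created, so one command can upgrade or remove the entire data tier.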


Next I want to mention kustomize (https://github.com/kubernetes-sigs/kustomize), with a k. Kustomize allows you to customize YAML manifests in Kubernetes and can help you manage the complexity of your applications. An obvious example is managing different environments, such as test and stage. We saw how we could use ConfigMaps and Secrets to help with that, but kustomize makes it easier. Kustomize works by using a kustomization.yaml file that declares rules for customizing or transforming resource manifest files. The original manifests are untouched and remain usable as they are, which is a benefit compared to other tools that require templating in the manifests, rendering them unusable on their own.

Some examples of the kinds of rules you can create in kustomize are:

- Generating ConfigMaps and Secrets from files. In our data tier example we had to write the contents of the Redis config file inside the ConfigMap data. With kustomize you can generate it directly from the config file rather than trying to keep the config file and ConfigMap in sync.

- You can also configure common fields across multiple resources. For example, you can set the namespace, labels, annotations, and name prefixes and suffixes using kustomize. That makes it easy to customize around your organization's conventions without polluting the original manifest files. The original manifests remain pristine and easy to share and reuse in other situations. In our example we had to keep our tier labels and prefixes manually synchronized across all the resources, and had to specify the namespace at the command line every time to avoid hard-coding it in the manifests. Kustomize can contain all of that complexity for us.

- You can also apply patches to any field in a manifest. Kustomize allows you to define a base group of resources and apply overlays that customize the base. This is an easy way to manage separate environments, for example by applying a dev name prefix and label for a development environment.

Kustomize has been directly integrated with kubectl since Kubernetes 1.14. The kubectl kustomize command prints the resource manifests with the kustomizations defined in a kustomization.yaml file applied. To apply the kustomized resources to the cluster, include the --kustomize or -k option with kubectl create or kubectl apply.
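To make the rules above concrete, here is a minimal sketch of a kustomization.yaml covering generation, common fields, and the kubectl integration. The file names and values are hypothetical stand-ins for our sample application's manifests.

```shell
# Write a minimal kustomization.yaml (file names below are hypothetical).
cat > kustomization.yaml <<'EOF'
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

# Common fields applied to every resource, so the originals stay pristine:
namespace: deployments
namePrefix: dev-
commonLabels:
  tier: data

# Generate a ConfigMap directly from the Redis config file,
# instead of copying its contents into ConfigMap data by hand:
configMapGenerator:
  - name: redis-config
    files:
      - redis.conf

# The untouched original manifests to transform:
resources:
  - service.yaml
  - statefulset.yaml
EOF

# Preview the transformed manifests, then apply them to the cluster:
kubectl kustomize .
kubectl apply -k .
```

Because the transformations live entirely in kustomization.yaml, the same service.yaml and statefulset.yaml can be reused unmodified by other overlays.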


The next tool I want to discuss is Prometheus (https://prometheus.io/). Prometheus is an open-source monitoring and alerting system. It is made up of many components, but at its core is a server that pulls in time series metric data and stores it. Prometheus was originally inspired by an internal monitoring tool at Google called Borgmon, similar to how Kubernetes itself was inspired by the Borg project at Google. Given that history, it should come as no surprise that Prometheus is the de facto standard solution for monitoring Kubernetes. Kubernetes components supply all their own metrics in Prometheus format, making it easy to integrate. You can collect many more metrics with Prometheus than with the basic Metrics Server we used in the course, including metrics from outside of Kubernetes. There is also an adapter that allows Kubernetes to get metrics from Prometheus, so you can do things like autoscale pods based on custom Prometheus metrics rather than only the CPU utilization we saw in the course. Prometheus has some built-in options for visualization, but it is commonly paired with Grafana to create visualizations and dashboards. Prometheus also lets you define alert rules and send out notifications. It's easy to install Prometheus in a cluster, and one way to do it is with a Helm chart.
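One Helm-based install might look like the sketch below, using the community-maintained chart repository. The repo and chart names reflect the community project at the time of writing; verify them before use.

```shell
# Add the community chart repository and refresh the index.
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

# kube-prometheus-stack bundles Prometheus, Alertmanager, and Grafana
# dashboards preconfigured for Kubernetes.
helm install monitoring prometheus-community/kube-prometheus-stack \
  --namespace monitoring --create-namespace

# List the services the chart created, then port-forward the Prometheus
# one to browse its UI locally (service names follow the release name).
kubectl get svc --namespace monitoring
```

From there you can explore the Kubernetes metrics the stack scrapes out of the box, and layer on Grafana dashboards or alert rules as needed.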


I will round out our ecosystem discussion with two members of the ecosystem that relate to the types of applications you deploy on Kubernetes. The first is Kubeflow (https://www.kubeflow.org/). Kubeflow is aimed at making deployment of machine learning workflows on Kubernetes simple, scalable, and portable. Kubeflow is a complete machine learning stack: you can use it for end-to-end machine learning, including building models, training them, and serving them, all within Kubeflow. Being built on Kubernetes, you can deploy it anywhere and get all the nice features that Kubernetes provides, like autoscaling. Definitely check out Kubeflow if your requirements involve machine learning.


The final topic for this lesson is Knative (https://knative.dev/). Knative is a platform built on top of Kubernetes for building, deploying, and managing serverless workloads. Serverless has gained a lot of momentum because it allows developers and companies to focus more on their code and less on the servers that run it. The trend started with AWS Lambda, which has become synonymous with serverless. However, as the industry shifts toward multi-cloud and avoiding vendor lock-in, solutions built on top of Kubernetes have emerged that can be deployed anywhere, giving you the portability you get with containers for your entire serverless platform. Knative is not the only game in town when it comes to serverless on top of Kubernetes, but it does have the support of industry heavyweights like Google, IBM, and SAP.


This concludes our lesson about the Kubernetes ecosystem. We only touched on a few topics but I hope you can see the breadth of the ecosystem. I hope you can use some of the tools we’ve seen but know that there is a lot more to explore on your own.


This also concludes the instructional part of the course, so take a deep breath, and give yourself a pat on the back. Join me for the course conclusion in the next lesson.

About the Author

Logan has been involved in software development and research since 2007 and has been in the cloud since 2012. He is an AWS Certified DevOps Engineer - Professional, AWS Certified Solutions Architect - Professional, Microsoft Certified Azure Solutions Architect Expert, MCSE: Cloud Platform and Infrastructure, Google Cloud Certified Associate Cloud Engineer, Certified Kubernetes Administrator (CKA), Certified Kubernetes Application Developer (CKAD), Linux Foundation Certified System Administrator (LFCS), and Certified OpenStack Administrator (COA). He earned his Ph.D. studying design automation and enjoys all things tech.