Interested in knowing what Knative is and how it simplifies Kubernetes?
Knative is a general-purpose serverless orchestration framework that sits on top of Kubernetes, allowing you to create event-driven, autoscaled, and scale-to-zero applications.
This course introduces you to Knative, taking you through the fundamentals, particularly the Serving and Eventing components. Several hands-on demonstrations are provided in which you'll learn how to install Knative, and how to build and deploy serverless, event-driven, scale-to-zero workloads.
Knative runs on top of Kubernetes, and therefore you’ll need to have some existing knowledge and/or experience with Kubernetes. If you’re completely new to Kubernetes, please consider taking our dedicated Introduction to Kubernetes learning path.
For any feedback, queries, or suggestions relating to this course, please contact us at firstname.lastname@example.org.
By completing this course, you will:
- Learn about what Knative is and how to install, configure, and maintain it
- Learn about Knative Serving and Eventing components
- Learn how to deploy serverless event-driven workloads
- Learn how to work with and configure many of the key Knative cluster resources
This course is intended for:
- Anyone interested in learning about Knative and its fundamentals
- Software Engineers interested in learning about how to configure and deploy Knative serverless workloads into a Kubernetes cluster
- DevOps and SRE practitioners interested in understanding how to install, manage, and maintain Knative infrastructure
The following prerequisites will be useful for this course:
- A basic understanding of Kubernetes
- A basic understanding of containers, containerization, and serverless-based architectures
- A basic understanding of software development and the software development life cycle
- A basic understanding of networks and networking
The knative-demo GitHub repository used within this course can be found here:
Welcome back. In this lesson, I'm going to introduce you to Knative and review its background and history, what motivated its creation, its two core components, Serving and Eventing, and when and where you should consider using it yourself.
To begin with, Knative is a Kubernetes-based serverless framework which originated out of Google. Over time, additional support and contributions have been made by other vendors such as Red Hat, IBM, and Pivotal.
Now, the main idea and initiative driving Knative is that it combines two very popular concepts, serverless, together with container orchestration in the form of Kubernetes. To understand why fusing serverless into Kubernetes is such a good thing, I'll first briefly rewind and discuss each by itself.
Serverless is a popularized operating model, predominantly for the cloud, in which execution environments are provisioned dynamically just before the triggered serverless workload is executed. Execution and billing time, often measured in milliseconds, is kept to the minimum needed for the workload to complete.
When it comes to implementation, the serverless unit of deployment is typically a block of code; that is, you ship and deploy functions. The serverless platform or framework will more than likely use containers, which are lightweight, standalone, portable executable packages, to host and run these functions, ensuring functions can launch quickly and execute in isolation.
Various prebuilt containers with specific runtime engines and/or software frameworks are used to host and execute these functions.
You'll often see larger serverless designs using an eventing model to provide triggering and message passing capabilities. Events are used to trigger individual functions, passing messages between each other using the pub/sub pattern.
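The pub/sub triggering pattern described above can be sketched in a few lines of Python. This is a minimal in-memory illustration, not any particular serverless platform's API; the `Broker` class and topic names are hypothetical, purely for demonstration.

```python
# Minimal in-memory pub/sub sketch: an event published to a topic
# triggers every function subscribed to that topic.
from collections import defaultdict
from typing import Any, Callable


class Broker:
    """Hypothetical broker; real platforms provide this as a managed service."""

    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[Any], None]]] = defaultdict(list)

    def subscribe(self, topic: str, fn: Callable[[Any], None]) -> None:
        self._subscribers[topic].append(fn)

    def publish(self, topic: str, event: Any) -> None:
        # Each subscriber stands in for a serverless function that the
        # platform would spin up on demand to handle the event.
        for fn in self._subscribers[topic]:
            fn(event)


# Usage: one published event fans out to two independent "functions".
broker = Broker()
received = []
broker.subscribe("orders", lambda e: received.append(("bill", e)))
broker.subscribe("orders", lambda e: received.append(("ship", e)))
broker.publish("orders", {"id": 42})
```

The key property shown here is decoupling: the publisher knows nothing about the subscribers, which is what lets individual functions be added, removed, and scaled independently.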
Key benefits attributed to the serverless approach are: one, no server management is necessary; two, you only pay for the execution time; three, workloads can scale to zero, and no demand means nothing runs, which means no cost; four, serverless functions are stateless, which promotes scalability; and five, reduced operational overhead and costs. Examples of cloud-hosted serverless services are AWS Lambda, Azure Functions, and Google Cloud Functions.
So let's now take a look at Kubernetes and discuss its background and merits. Kubernetes, also originating out of Google, is widely considered to be the most popular container orchestrator and scheduler around at the moment. Kubernetes has gained significant industry adoption, being used by many enterprises to host their containerized workloads at scale.
Kubernetes provides a cluster-based platform into which you deploy workloads in deployment units called pods. A pod is the smallest unit of deployment or building block, and consists of one or multiple containers.
When building applications for Kubernetes, you declare the required state of your application in a number of manifest files which are then loaded into the cluster. The cluster scheduler and orchestrator then have the responsibility of provisioning and maintaining the declared state within the cluster, resulting in the distribution of pods across the cluster's worker nodes.
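To make the declarative model concrete, here is what a minimal manifest might look like; the names and image are placeholders, not taken from this course's demos. The cluster continuously reconciles toward the state declared here, in this case, two running replicas of the pod.

```yaml
# hello-deployment.yaml -- declares desired state; the scheduler keeps
# two pod replicas of this container running across the worker nodes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: example.com/hello:1.0   # placeholder image
        ports:
        - containerPort: 8080
```

A manifest like this would be loaded into the cluster with `kubectl apply -f hello-deployment.yaml`.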
Kubernetes has done a fantastic job of abstracting away and hiding the underlying cluster infrastructure and related networking. As an architect or developer, your only responsibility is to build container images and then deploy them into the cluster, declaring the necessary cluster resources such as pods, deployments, services, et cetera. This is really what Kubernetes excels at, providing a platform that schedules and orchestrates containers.
Therefore, in summary, Kubernetes is a general-purpose container management system that automates deploying, managing, and scaling your containerized applications. Now, what Kubernetes doesn't set out to address specifically is serverless and event-driven architectures. But it can provide some of the required building blocks to enable serverless deployments. Let's see how...
Enter Knative. Knative is a serverless event-driven framework which is deployed directly into Kubernetes, enabling Kubernetes to support serverless event-driven deployments and workloads. With Knative installed into Kubernetes, your cluster gets supercharged and can support not only long-running microservice-based workloads, but also event-driven, short-lived, scale-to-zero serverless workloads.
Knative combines many of the existing native Kubernetes operations and resources together into a set of higher level primitives which help developers to be more productive and self-sufficient.
Knative is composed of two distinct components, Serving and Eventing. These two components are most often installed together, but can be installed independently of each other if required.
I'll review both of these components individually in the coming lessons. But, for now, let me quickly explain the responsibility of each component. Serving provides request-driven compute that can scale to zero, and Eventing provides eventing and event-driven capabilities, encouraging asynchronous messaging flows.
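To give you an early feel for Serving, here is what a minimal Knative Service manifest looks like; the service name and image are placeholders rather than resources from this course's demos.

```yaml
# A Knative Service: note the much smaller surface area compared to a
# plain Kubernetes Deployment plus Service plus Ingress.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
      - image: example.com/hello:1.0   # placeholder image
```

From this single resource, Knative Serving creates the underlying routing and revision resources for you, and the autoscaler scales pods up from zero when requests arrive and back down to zero when the service is idle.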
Earlier versions of Knative supported a third component, the Build component. The Build component provided capabilities for building and packaging container images directly within Kubernetes itself. The latest versions of Knative have deprecated and removed the Build component in favour of Tekton. Tekton provides a richer feature set, including the concept of a pipeline, as often seen and used in more traditional CI/CD tools. Tekton is actually a project in its own right, and can be located at https://tekton.dev/.
Knative, as already explained, is installed into Kubernetes, and currently requires version 1.15 or greater. It requires a networking layer to be installed, which is most often chosen to be Istio. Other networking layers exist, such as Ambassador, Contour, Gloo, and Kourier, any of which can be swapped in to replace Istio. This course will focus on using Istio, since it's the promoted networking layer for Knative.
On top of the networking layer, the Knative Serving and Eventing components are then installed. As previously mentioned, Knative is typically configured with Istio acting as the networking layer. Istio in this context acts as an interface between the Knative components and the underlying Kubernetes platform. Understanding how Istio interfaces with Kubernetes is useful in understanding how the bigger picture works in terms of Knative, and can be helpful when troubleshooting routing issues.
Istio works by creating a service mesh that sits on top of Kubernetes. It implements a control plane and a data plane. The data plane intercepts container traffic being sent from one service to another and routes it via a proxy container. This Envoy-based proxy container is injected into the pod automatically at pod creation time, and from then on constantly receives routing information and instructions from the Istio control plane. So, with all of this in place, Istio provides Knative with the capabilities to perform dynamic network routing, load balancing, TLS encryption of data in transit, and traffic splitting, among several other abilities.
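The traffic-splitting capability just mentioned surfaces in Knative as a traffic block on the Service resource. The following is a hedged sketch; the service name, image, and revision names are hypothetical, and the exact revision names are generated by Knative at deploy time.

```yaml
# Canary-style rollout: the mesh routes 90% of requests to the previous
# revision and 10% to the newly deployed one.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
      - image: example.com/hello:2.0   # placeholder image for the new revision
  traffic:
  - revisionName: hello-00001          # hypothetical revision names
    percent: 90
  - revisionName: hello-00002
    percent: 10
```

Under the hood, the networking layer (Istio in this course) programs its proxies with these weights, which is why understanding the mesh helps when troubleshooting routing.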
So, as a quick summary, when you deploy Knative into your Kubernetes cluster, you get the additional following functionality: Serverless, Eventing, traffic routing, monitoring, security in the form of TLS endpoints, and policy enforcement.
Okay, that completes this lesson. In the next lesson, I'll focus on the Serving component, explaining how it works and how you use it. Go ahead and close this lesson and I'll see you shortly in the next one.
Jeremy is the DevOps Content Lead at Cloud Academy where he specializes in developing technical training documentation for DevOps.
He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 20+ years. In recent times, Jeremy has been focused on DevOps, Cloud, Security, and Machine Learning.
Jeremy holds professional certifications for both the AWS and GCP cloud platforms.