Interested in knowing what Knative is and how it simplifies Kubernetes?
Knative is a general-purpose serverless orchestration framework that sits on top of Kubernetes, allowing you to create event-driven, autoscaled, and scale-to-zero applications.
This course introduces you to Knative, taking you through the fundamentals, with a particular focus on the Serving and Eventing components. Several hands-on demonstrations show you how to install Knative and how to build and deploy serverless, event-driven, scale-to-zero workloads.
Knative runs on top of Kubernetes, and therefore you’ll need to have some existing knowledge and/or experience with Kubernetes. If you’re completely new to Kubernetes, please consider taking our dedicated Introduction to Kubernetes learning path.
For any feedback, queries, or suggestions relating to this course, please contact us at email@example.com.
By completing this course, you will:
- Learn about what Knative is and how to install, configure, and maintain it
- Learn about Knative Serving and Eventing components
- Learn how to deploy serverless event-driven workloads
- Learn how to work with and configure many of the key Knative cluster resources
- Anyone interested in learning about Knative and its fundamentals
- Software Engineers interested in learning about how to configure and deploy Knative serverless workloads into a Kubernetes cluster
- DevOps and SRE practitioners interested in understanding how to install, manage, and maintain Knative infrastructure
The following prerequisites will be helpful for this course:
- A basic understanding of Kubernetes
- A basic understanding of containers, containerization, and serverless-based architectures
- A basic understanding of software development and the software development life cycle
- A basic understanding of networks and networking
The knative-demo GitHub repository used within this course can be found here:
Welcome back! In this lesson, I'm going to introduce you to the Knative Eventing component, explaining how it works and how and when to use it. The eventing component provides features that enable you to create loosely-coupled event-driven asynchronous applications.
The Knative Eventing component provides an event-driven framework that enables you to build serverless applications which respond to events. With the Eventing component installed, you have the ability to:
- Create loosely coupled services
- Implement Pub/Sub-based messaging
- Consume CloudEvent messages delivered over HTTP POST
- Configure Eventing sources such as Kubernetes, GitHub, Cron, Kafka, Camel, SQS, and PubSub
- Create and configure Channels to act as forwarding stores, examples being In-Memory, NATS, Kafka, and PubSub
The Knative Eventing component provides the following specific resources:
- Source: represents a source of events.
- Consumer: consumes events. Both standard Kubernetes services and Knative services can be configured as consumers.
- Channel: defines a single event forwarding and persistence layer. Events in a channel are subscribed to using subscriptions or triggers.
- Broker: acts like a channel, but has triggers configured on it to filter and forward events to consumers.
- Trigger: filters and forwards broker events to consumers.
Using the previously defined Knative Eventing primitives enables you to build out various messaging pipelines that wire up your serverless applications. I'll now review several of the more interesting eventing patterns.
The source-to-sink eventing pattern provides a one-to-one mapping, sending events from a single event source to a receiving service.
The following example demonstrates how to create both the Source and the Sink. To start off, we create a PingSource which is configured to send a message every minute into the sink, configured to be the cloudacademy-service. The receiving cloudacademy-service is configured to simply launch a single pod which will listen on port 8080. The pod simply logs out the received message to standard out.
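A minimal sketch of what this source-to-sink pair might look like follows. The resource names, the message payload, and the event_display image path are illustrative assumptions, not the exact manifests used in the demonstration:

```yaml
# Hypothetical sketch: PingSource firing every minute into a Knative Service sink.
apiVersion: sources.knative.dev/v1
kind: PingSource
metadata:
  name: cloudacademy-pingsource
spec:
  schedule: "*/1 * * * *"            # cron schedule: once every minute
  contentType: "application/json"
  data: '{"message": "hello from pingsource"}'
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: cloudacademy-service
---
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: cloudacademy-service
spec:
  template:
    spec:
      containers:
        # event_display listens on port 8080 and logs each received
        # CloudEvent to standard out (image path may vary by release).
        - image: gcr.io/knative-releases/knative.dev/eventing/cmd/event_display
```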
With the previous two Knative resources deployed, the PingSource will automatically start sending messages to the cloudacademy-service. This can be confirmed by examining the kubectl logs view for the cloudacademy-service pod.
The Channel and Subscription eventing pattern is used to decouple the source from the destination endpoints. This pattern also provides the ability to fan out messages to multiple subscribers.
The following example demonstrates how to create an in-memory channel that has two subscribers attached to it. The in-memory channel is used here for demonstration purposes; be wary of using this type of channel in production, as it is nonpersistent. More robust, production-ready channel implementations exist, such as Kafka or PubSub.
The PingSource is again configured to send messages at one-minute intervals, this time into the in-memory channel.
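Sketched out, the channel and the PingSource that feeds it might look like the following (names and payload are illustrative assumptions):

```yaml
# Hypothetical sketch: an in-memory channel acting as the PingSource sink.
apiVersion: messaging.knative.dev/v1
kind: InMemoryChannel
metadata:
  name: cloudacademy-channel
---
apiVersion: sources.knative.dev/v1
kind: PingSource
metadata:
  name: cloudacademy-pingsource
spec:
  schedule: "*/1 * * * *"            # fire once every minute
  data: '{"message": "hello"}'
  sink:
    ref:
      apiVersion: messaging.knative.dev/v1
      kind: InMemoryChannel
      name: cloudacademy-channel
```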
For the subscription side, this time we launch two independent services, each of which again spins up a single logger pod that logs all received messages to standard out. The first service is named cloudacademy-service1, and the second is named cloudacademy-service2. Both service configurations are identical except for their names.
To complete the channel and subscription example, we need to create two Subscription resources, one for each of the services. A Subscription connects a receiver to the channel. The first Subscription wires up the cloudacademy-service1 service, and the second wires up the cloudacademy-service2 service.
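A sketch of one such Subscription follows; the second is identical apart from the name and the subscriber it references. Resource names are illustrative assumptions:

```yaml
# Hypothetical sketch: wire cloudacademy-service1 up to the in-memory channel.
apiVersion: messaging.knative.dev/v1
kind: Subscription
metadata:
  name: cloudacademy-subscription1
spec:
  channel:
    apiVersion: messaging.knative.dev/v1
    kind: InMemoryChannel
    name: cloudacademy-channel
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: cloudacademy-service1
```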
With the channel and subscription deployment in place, we can check to see whether the PingSource-generated messages have been fanned out and received by both service pods. We need to discover the individual pod names for each service. To do so, we simply query the running pods.
Now that we know each of the service pod names, we can examine the logging. The following logging displayed here is for the cloudacademy-service1 pod. We can clearly see that the PingSource-generated messages have been received every minute.
The following logging displayed here is for the cloudacademy-service2 pod. Again, we can clearly see that the PingSource-generated messages have been received every minute. This also highlights how a single message can be fanned out to multiple subscribers.
The Broker and Trigger eventing pattern also decouples the source from the destination endpoints; however, this pattern has the additional capability of targeting subscribers with filtered messages.
To work with the Broker and Trigger pattern, you first need to inject a Broker into the namespace where you'll also deploy your triggers and services. In the following example, we will create a cloudacademy namespace and then enable Knative Eventing injection on it.
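One way to sketch this step is a namespace manifest carrying the eventing injection label. Note this is an assumption on my part: the label key shown here is the one used by recent Knative releases (older releases used a different label), and it relies on the Eventing sugar controller being installed:

```yaml
# Hypothetical sketch: labelling the namespace so a default Broker is
# automatically created in it by the Knative Eventing sugar controller.
apiVersion: v1
kind: Namespace
metadata:
  name: cloudacademy
  labels:
    eventing.knative.dev/injection: enabled
```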
Having completed this, we can swap over to the cloudacademy namespace and then query for the broker resource, which will have been automatically deployed for us. Take note here of the broker's name, which is set to default, and the broker's URL, both of which will be referenced in the remaining setup.
Next, I'll again deploy a PingSource that simply generates a new message once per minute continuously, but this time configure it to be sent to the broker.
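A sketch of the broker-targeting PingSource follows; only the sink differs from the earlier examples. Names and payload remain illustrative assumptions:

```yaml
# Hypothetical sketch: PingSource now sends its one-per-minute message
# to the automatically injected default Broker instead of a channel.
apiVersion: sources.knative.dev/v1
kind: PingSource
metadata:
  name: cloudacademy-pingsource
  namespace: cloudacademy
spec:
  schedule: "*/1 * * * *"
  data: '{"message": "hello"}'
  sink:
    ref:
      apiVersion: eventing.knative.dev/v1
      kind: Broker
      name: default
```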
This example will use three identically configured services, again all of which are designed to launch a single pod that listens for messages and then simply logs them out to standard out. The following configuration displayed here is used to create the cloudacademy-service1 service.
The following configuration displayed here is used to create the cloudacademy-service2 service.
And the following configuration displayed here is used to create the third and final cloudacademy-service3 service.
Again all three services have identical configuration except for their service names.
We then create three triggers, one for each of the three services. The first two triggers are designed to target and filter on messages that come directly from the PingSource as per their filter attribute type. The following configuration displayed here is used to create the cloudacademy-trigger1 trigger.
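A sketch of the first trigger might look like the following. The filter type dev.knative.sources.ping is the CloudEvent type that PingSource emits; the resource names are illustrative assumptions:

```yaml
# Hypothetical sketch: forward only PingSource-typed events from the
# default broker to cloudacademy-service1.
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: cloudacademy-trigger1
  namespace: cloudacademy
spec:
  broker: default
  filter:
    attributes:
      type: dev.knative.sources.ping   # CloudEvent type emitted by PingSource
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: cloudacademy-service1
```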
The following configuration displayed here is used to create the cloudacademy-trigger2 trigger. Both triggers will forward a copy of the same PingSource message to their respective subscribers.
The third and final trigger has its filter attribute set to cloudacademy.app.blah. The subscriber for this trigger will be the service named cloudacademy-service3. I'll need to handcraft HTTP POST messages using the CloudEvents specification, which describes a common way to express event data. Let's see how this works.
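A sketch of this third trigger follows; it differs from the first two only in its filter type and subscriber:

```yaml
# Hypothetical sketch: forward only events whose CloudEvent type is
# cloudacademy.app.blah to cloudacademy-service3. PingSource events
# will NOT match this filter and are never delivered here.
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: cloudacademy-trigger3
  namespace: cloudacademy
spec:
  broker: default
  filter:
    attributes:
      type: cloudacademy.app.blah
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: cloudacademy-service3
```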
To enable us to handcraft messages and send them to the internal broker, we will need to spin up a utility pod that gives us access to the curl command. There are various ways this can be accomplished. I'll use the latest fedora image like so.
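One way to sketch such a utility pod is the following manifest (the pod name and sleep command are assumptions; the transcript's demo may instead use kubectl run):

```yaml
# Hypothetical sketch: a long-running fedora pod we can exec into
# to run curl against cluster-internal endpoints.
apiVersion: v1
kind: Pod
metadata:
  name: curler
  namespace: cloudacademy
spec:
  containers:
    - name: curler
      image: fedora:latest
      command: ["sleep", "infinity"]   # keep the pod alive for interactive use
```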
Let's quickly retrieve the broker's URL again. This is the URL that we will need to HTTP post our customized CloudEvent message to.
From here, it's just a case of entering the curler utility pod that we just spun up and, from within it, executing a curl command to perform an HTTP POST to the broker with a custom CloudEvent HTTP header specifying the event type. In this case, we set the Ce-Type header to the same value configured within the Trigger named cloudacademy-trigger3.
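The POST can be sketched as follows. This is an assumption-laden illustration: the broker URL shown is the conventional in-cluster address for a broker named default in the cloudacademy namespace (substitute the URL you retrieved earlier), and the header values follow the CloudEvents v1.0 binary-mode HTTP binding:

```shell
# Hypothetical broker URL -- replace with the URL returned by
# `kubectl get broker` in your cluster.
BROKER_URL="http://broker-ingress.knative-eventing.svc.cluster.local/cloudacademy/default"

# Compose the CloudEvents binary-mode POST. Ce-Type must match the
# trigger's filter attribute (cloudacademy.app.blah) for the event to
# be delivered to cloudacademy-service3.
CLOUDEVENT_POST="curl -v $BROKER_URL \
 -X POST \
 -H 'Ce-Id: say-hello-1' \
 -H 'Ce-Specversion: 1.0' \
 -H 'Ce-Type: cloudacademy.app.blah' \
 -H 'Ce-Source: curl.manual' \
 -H 'Content-Type: application/json' \
 -d '{\"message\": \"hello from the curler pod\"}'"

# Print the composed command; run it from inside the curler pod,
# where the broker's cluster-internal DNS name resolves.
echo "$CLOUDEVENT_POST"
```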
Okay with everything in place, we can review the logging as captured by each of the three services we previously set up. If we look at the logs for the cloudacademy-service1 pod, we can see clearly that it is receiving messages from the PingSource every minute at the tail end of the logging.
Likewise, if we look at the logs for the cloudacademy-service2 pod, we can clearly see that it too is receiving the PingSource-originated messages every minute at the tail end of the logging.
And finally if we examine the logs for the cloudacademy-service3 pod, we can see that the handcrafted HTTP Post CloudEvents message we posted via the broker from the curler utility pod has been received.
Okay, that concludes this lesson. In summary, you learned that the Knative Eventing component provides a number of middleware primitives that collectively help you to create event-driven, asynchronous, Kubernetes-hosted serverless workloads.
Go ahead and close this lesson and I'll see you shortly in the next one.
Jeremy is the DevOps Content Lead at Cloud Academy where he specializes in developing technical training documentation for DevOps.
He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 20+ years. In recent times, Jeremy has been focused on DevOps, Cloud, Security, and Machine Learning.
Jeremy holds professional certifications for both the AWS and GCP cloud platforms.