
Eventing - Creating and Configuring

Overview
Difficulty: Intermediate
Duration: 1h 1m
Students: 127
Rating: 4/5

Description

Interested in knowing what Knative is and how it simplifies Kubernetes?

Knative is a general-purpose serverless orchestration framework that sits on top of Kubernetes, allowing you to create event-driven, autoscaled, and scale-to-zero applications. 

This course introduces you to Knative, taking you through the fundamentals, particularly the components Serving and Eventing. Several hands-on demonstrations are provided in which you'll learn and observe how to install Knative, and how to build and deploy serverless event-driven scale-to-zero workloads.

Knative runs on top of Kubernetes, and therefore you’ll need to have some existing knowledge and/or experience with Kubernetes. If you’re completely new to Kubernetes, please consider taking our dedicated Introduction to Kubernetes learning path.

For any feedback, queries, or suggestions relating to this course, please contact us at support@cloudacademy.com.

Learning Objectives

By completing this course, you will: 

  • Learn about what Knative is and how to install, configure, and maintain it
  • Learn about Knative Serving and Eventing components
  • Learn how to deploy serverless event-driven workloads
  • Learn how to work with and configure many of the key Knative cluster resources

Intended Audience

  • Anyone interested in learning about Knative and its fundamentals
  • Software Engineers interested in learning about how to configure and deploy Knative serverless workloads into a Kubernetes cluster
  • DevOps and SRE practitioners interested in understanding how to install, manage, and maintain Knative infrastructure

Prerequisites

The following prerequisites will be useful for this course:

  • A basic understanding of Kubernetes
  • A basic understanding of containers, containerization, and serverless-based architectures
  • A basic understanding of software development and the software development life cycle
  • A basic understanding of networks and networking

Resources

The knative-demo GitHub repository used within this course can be found here:

https://github.com/cloudacademy/knative-demo

Transcript

Okay, we've finished with Knative Serving and we're now going to move on to step five and look at Knative Eventing. The first pattern we're going to look at is the source-to-sink pattern. The source-to-sink pattern involves a single source generating messages and sending them to a single sink; it's a one-to-one mapping. So step 5.1 is going to install the source and step 5.2 is going to install the sink. In 5.1, the source is a new resource of PingSource type, and this simply has a schedule, a cron-job-like schedule.

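For reference, a rough sketch of what the 5.1 manifest might look like is shown below. The actual file lives in the knative-demo repository, and the resource, namespace, and sink names used here (ping-source, cloudacademy, sink-service) are assumptions for illustration only.

kubectl apply -n cloudacademy -f - <<EOF
apiVersion: sources.knative.dev/v1
kind: PingSource
metadata:
  name: ping-source                # placeholder name
spec:
  schedule: "* * * * *"            # cron-like schedule: fire once every minute
  contentType: "application/json"
  data: '{"message": "Hello from PingSource"}'
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: sink-service           # placeholder: the Knative Service installed in 5.2
EOF
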
So this one's going to run every minute and it's simply going to send this piece of data to the sink. So let's copy 5.1 and execute it in the terminal, and then we'll copy 5.2, which is the sink, and run it also within the terminal. Okay, so the source and the sink are both installed. I'll clear the terminal and run the following command; actually, before I do, let me run kubectl get pods. We can see we've got two pods: the first pod is our sink, which is a Knative Serving service, and then we've got our source, which is triggered every minute.

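The sink itself is just a Knative Serving Service. A minimal sketch of such a sink, assuming it uses the upstream event_display sample image (the real service in the repository may well differ), would be:

kubectl apply -n cloudacademy -f - <<EOF
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: sink-service               # placeholder: matches the sink reference in the PingSource above
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-releases/knative.dev/eventing/cmd/event_display
EOF
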
Okay, so I'm going to run this command here, and this is going to grab this pod name as it is. We can see it here, and then I'm simply going to refer to it using the kubectl logs command, passing in that name. So here's the name, and I'm going to use --follow. So here we can see our message has been received by the sink. We'll leave that to run and we should see a new message turn up every minute. So again, just to be clear, this is the standard-out logging activity for this pod.

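For reference, the pod-name extraction and log-follow steps look roughly like this; the grep pattern and namespace are assumptions based on the names used above:

# grab the name of the sink pod (assumes the Knative Service is called sink-service)
SINK_POD=$(kubectl get pods -n cloudacademy -o name | grep sink-service)
echo $SINK_POD

# follow the app container's logs; a new message should appear every minute
kubectl logs -n cloudacademy --follow $SINK_POD -c user-container
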
While we're waiting for the next message to arrive, we can look at the timing: we can see the first one arrived at 46, the second one arrived at 47, and the one that's just arrived now arrived at 48, which is every minute. Okay, moving on to step 6. So the next eventing pattern we're going to look at is the pattern that is referred to as channel and subscription. In step 6.1 we need to install an in-memory channel. So our channel acts as our forwarding and persistence layer.

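The in-memory channel from 6.1 is a very small resource; a sketch (with an assumed name) looks like this:

kubectl apply -n cloudacademy -f - <<EOF
apiVersion: messaging.knative.dev/v1
kind: InMemoryChannel
metadata:
  name: demo-channel               # placeholder name
EOF
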
So we'll run that, and then we'll redeploy the PingSource, and the reason this time is we want to update the sink within it to send to our newly created in-memory channel. So what we're doing here is actually decoupling the source from the subscriber, using an intermediary, which is our in-memory channel. This time, under 6.3, I'm going to install two services, and when we get to 6.4, both of these services will be subscribing to the messages that are on the in-memory channel.

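Redeploying the PingSource in 6.2 really just means changing its sink reference so that it points at the channel rather than directly at a service, roughly like this (same assumed names as before):

kubectl apply -n cloudacademy -f - <<EOF
apiVersion: sources.knative.dev/v1
kind: PingSource
metadata:
  name: ping-source
spec:
  schedule: "* * * * *"
  contentType: "application/json"
  data: '{"message": "Hello from PingSource"}'
  sink:
    ref:
      apiVersion: messaging.knative.dev/v1
      kind: InMemoryChannel
      name: demo-channel           # the channel, not a service, is now the sink
EOF
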
Okay, so our two services have been deployed and now we're going to install and set up our two subscriptions for those services. Having done that, we've wired up service one and service two to receive the same message that gets put on the in-memory channel by the source, which is our PingSource. So again, if I run kubectl get pods, we can see we've got our service one as well as our service two and our single PingSource.

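Each subscription in 6.4 wires one service up to the channel; a sketch of the first one (service and channel names are assumptions, and a second near-identical Subscription would target service two) looks like this:

kubectl apply -n cloudacademy -f - <<EOF
apiVersion: messaging.knative.dev/v1
kind: Subscription
metadata:
  name: service-one-subscription   # placeholder name
spec:
  channel:
    apiVersion: messaging.knative.dev/v1
    kind: InMemoryChannel
    name: demo-channel
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: service-one            # placeholder: the first of the two subscribing services
EOF
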
So under 6.5, I'll extract out the two pod names and then echo them out, like so. So that's pod name one and pod name two. And then again we'll look at the logs and we'll follow them. So we're looking at the logs for the first pod, and again we can see the message. The first message arrived at 7:51, and then if I look at the second service, we can also see that it's received the same message. And we can also see that the timing is approximately the same.

So again, in summary, this has just demonstrated the channel and subscriber pattern, which is a single source sending messages to an intermediary in-memory channel, and then two subscribers receiving that same message. So we can see the third message has just arrived. If we go back to the first service, we can see the same three messages. So the two services are in sync in terms of receiving the same messages.

Okay, moving on to step seven, this time we're going to look at the last eventing pattern. This is the broker and trigger pattern. So this is very similar to the previous channel and subscriber pattern, except the broker and trigger brings in the ability to filter and target specific messages. So let's see how this works. So under step 7.1, the first thing we need to do is install and configure automatic broker injection on our cloudacademy namespace.

So I'll clear the terminal and run the command. And what this will do is create a broker within the namespace, and then I'll request the broker URL by running kubectl get broker. So we can see that is our Knative Eventing broker in the cloudacademy namespace. Moving on to step 7.2, we'll redeploy the PingSource again, and this time we can see that its sink is set to the broker. In step 7.3 we're going to install three services. The first two services will receive messages from the PingSource, and the third service is designed to receive a message from a pod that contains the curl utility.

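The automatic injection in 7.1 is typically achieved by labelling the namespace so that Knative Eventing creates a default broker in it; a sketch of those commands (assuming the namespace is literally named cloudacademy) is:

# label the namespace so a default broker is injected into it
kubectl label namespace cloudacademy eventing.knative.dev/injection=enabled

# confirm the broker exists and note its URL
kubectl get broker -n cloudacademy
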
Okay, now to set this up, we need to install three triggers. So as just mentioned, the first two triggers will be set up to receive and filter our messages from the PingSource, whereas the third trigger has got a custom filter attribute type, and I'll explain this as we go. So let's install the triggers; the triggers are now being created. In step 7.5, as mentioned, we need to spin up a utility pod, which will give us access to the curl utility, which will then give us the ability to craft HTTP POST requests to the third service, okay? We'll grab the broker URL; this is the same broker that we set up earlier on. We echo that out, and I'll put it in the bottom terminal as well. Okay, that's good.

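The third trigger in 7.4 is the interesting one: it filters on a custom CloudEvent type attribute, so only events carrying that type reach the third service. A sketch (resource names are assumptions; the filter value is the one mentioned later in the demo) looks like this:

kubectl apply -n cloudacademy -f - <<EOF
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: service-three-trigger      # placeholder name
spec:
  broker: default                  # the injected broker from 7.1
  filter:
    attributes:
      type: cloudacademy.app.blah  # only CloudEvents with this type attribute are delivered
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: service-three          # placeholder: the third service
EOF
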
So then, under 7.7, we're simply going to use the curl utility on the curler pod to send a CloudEvent message to our third service. So we'll send it off. So that message has been sent to the broker, and you can see there the broker has received it and responded with a 202 Accepted. Let's now examine the logs for each of our services. So the first two services will have received messages from the PingSource. These commands are just extracting the pod names for those services, and we'll grab the third service's pod name as well.

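The 7.7 request posts a CloudEvent directly to the broker's ingress URL from inside the curler pod. The exact URL and header values below are illustrative; the real ones come from the kubectl get broker output and the repository's instructions:

# run from inside the curler utility pod; the URL is the one reported by 'kubectl get broker'
curl -v "http://broker-ingress.knative-eventing.svc.cluster.local/cloudacademy/default" \
  -X POST \
  -H "Ce-Id: 1" \
  -H "Ce-Specversion: 1.0" \
  -H "Ce-Type: cloudacademy.app.blah" \
  -H "Ce-Source: curl-command" \
  -H "Content-Type: application/json" \
  -d '{"message": "message one"}'
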
Okay, we'll then echo each of these out. So that's the pod name for the first service, the pod name for the second service, and the pod name for the third service. So that's good. So if we look at the logs for the first service, we indeed see the generated messages coming from the PingSource every minute; we can see that. We'll then look at the logs for the second service, which should have the same events, and it does.

So again, messages coming from the same PingSource. And then if we look at the third service, we should see a message that came from our curl request, which we do. So that's a great result. So what that shows is the ability to set up triggers that filter on certain types of messages. So the third service is filtering on messages of type cloudacademy.app.blah.

Now, while we've still got the tail running on the third service, let's send another message from the bottom pane. So this time, message two, and Enter. This should show up in this log: the message first goes to the broker, and then the trigger for the third service filters on it and passes it to the third service. And we can see here our message did indeed arrive. Let's do one more message. And again, here's our third message.

Okay, so that is the end of all of the demonstrations, in which I showed you how to firstly install and set up Knative, then install, configure, and use Knative Serving, and then thirdly and finally install, configure, and use Knative Eventing.

About the Author
Students: 38832
Labs: 34
Courses: 93
Learning paths: 25

Jeremy is the DevOps Content Lead at Cloud Academy where he specializes in developing technical training documentation for DevOps.

He has a strong background in software engineering, and has been coding with various languages, frameworks, and systems for the past 20+ years. In recent times, Jeremy has been focused on DevOps, Cloud, Security, and Machine Learning.

Jeremy holds professional certifications for both the AWS and GCP cloud platforms.