This course explores Amazon EventBridge and how it can be used to construct architectures in the cloud using event-driven microservices. In this course, you will learn how to use EventBridge rules and targets to react to events. We'll then take a deeper dive into the service and learn how it differs from Amazon Kinesis and the Simple Notification Service.
- Create EventBridge rules and targets that can react to events created by multiple AWS sources as well as SaaS providers
- Understand how EventBridge could become a new way to architect your solutions with event-driven patterns in mind
- Anyone wanting to move from a monolithic cloud architecture to one composed of microservices that work based on event-driven systems
- Anyone who wants to learn more about EventBridge
To get the most out of this course, you should have a good understanding of architectures, cloud computing, basic monitoring, and some understanding of JSON. Knowledge of Amazon CloudWatch would also be beneficial.
Probably one of the most impressive features of EventBridge is access to EventBridge archives. An archive gives you a place where you can store events and easily replay them at a later time.
Events stay in the archive for as long as the retention period you set; after that time, they are discarded. You can, of course, keep your events in the archive indefinitely: there is no limit to how long you can retain events, and since they are text-based anyway, they don't take up much storage.
One of the most obvious benefits of archiving all of your events is disaster recovery. Say your database is corrupted or gets deleted somehow: if you have a record of all the events that took place, you can reconstruct exactly the state your database was in before the catastrophe.
Disaster recovery isn't the only reason to archive. Even something as simple as updating an app with a new feature or a new underlying system can be a great reason to replay events. It might let you extract new information from old data that you didn't have the ability to retrieve before.
Amazon has made it super easy to replay these archived events. The functionality is already built into the service: all you have to do is create a new replay, select the archive you wish to draw your events from, and select the destination you wish them to go to. At the moment you can only send them back to the same event bus they originally came from, but that's OK. I imagine there will be more updates to this feature to allow replay to many different targets... and of course, if you don't want to keep these events streaming back from the archive, you can always stop a replay at any time.
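To make the archive-and-replay idea concrete, here is a minimal in-memory sketch of it in Python. This is not the EventBridge API; the class and method names are illustrative only, and the retention check simply drops events older than the configured period before replaying the rest to a handler (standing in for the originating event bus).

```python
import time

class EventArchive:
    """A toy event archive with a retention period and replay (illustrative only)."""

    def __init__(self, retention_seconds=None):
        # retention_seconds=None mimics "keep events indefinitely"
        self.retention_seconds = retention_seconds
        self._events = []  # list of (timestamp, event) pairs

    def archive(self, event, now=None):
        now = time.time() if now is None else now
        self._events.append((now, event))

    def _expire(self, now):
        # Discard anything older than the retention period.
        if self.retention_seconds is None:
            return
        cutoff = now - self.retention_seconds
        self._events = [(t, e) for t, e in self._events if t >= cutoff]

    def replay(self, handler, now=None):
        # Replay every retained event, in original order, to a handler --
        # analogous to sending events back to the originating event bus.
        now = time.time() if now is None else now
        self._expire(now)
        for _, event in self._events:
            handler(event)

archive = EventArchive(retention_seconds=86400)  # keep events for one day
archive.archive({"detail-type": "OrderPlaced", "detail": {"order_id": 1}}, now=0)
archive.archive({"detail-type": "OrderPlaced", "detail": {"order_id": 2}}, now=90000)

replayed = []
archive.replay(replayed.append, now=100000)  # the first event has expired by now
print([e["detail"]["order_id"] for e in replayed])  # → [2]
```

The explicit `now` parameters exist only so the retention behaviour is easy to demonstrate; in real use the wall clock would drive expiry.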
When writing applications that deal with events and receive information from EventBridge, it's important to know the schema of the events that you are going to use. A schema describes the structure of an event and helps you understand what you can expect within the event in terms of attributes and data types.
For example, a customer review event might always contain two strings: one for customer name, and one for the review itself.
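As a sketch of what that buys you, here is a toy check of an event against its expected structure, assuming the customer-review example above. The schema format shown is illustrative, not the registry's actual format.

```python
# Expected shape of a customer review event: two string fields.
REVIEW_SCHEMA = {"customerName": str, "review": str}

def matches_schema(event, schema):
    """Return True if every schema attribute is present with the right type."""
    return all(
        key in event and isinstance(event[key], expected_type)
        for key, expected_type in schema.items()
    )

good = {"customerName": "Ada", "review": "Great service!"}
bad = {"customerName": "Ada", "review": 5}  # review should be a string

print(matches_schema(good, REVIEW_SCHEMA))  # → True
print(matches_schema(bad, REVIEW_SCHEMA))   # → False
```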
EventBridge has a schema registry built into the service, where you can see all the possible schemas that are available on your event bus. To help facilitate your development, every single AWS service that is available to EventBridge has a prebuilt schema in this registry for you to search through.
The registry allows you to browse all of the possible schemas by title or content. This search can match variable names within the schemas as well as the names of the services themselves.
At the moment, SaaS events do not have a prebuilt catalog available for each of their event types, but we can easily discover schemas based on these events. This is literally as easy as selecting one of your SaaS events – for example, a Zendesk new ticket event – and pressing the discover schema button.
Finally, the schema registry allows you to create your own custom event schemas from a JSON string, either one you write yourself or one provided by your custom service or application.
When developing for events and EventBridge, you have the option to generate code bindings that can be used within Visual Studio.
A code binding is simply an extension to the visual editor that brings in the schema and allows Visual Studio to check whether your variables are of the right type, and to expose attributes, making programming a lot easier.
Code bindings can greatly increase development speed and are available for Java, Python, and TypeScript. Bindings can be created for any of the AWS services already supported within EventBridge, as well as your own custom and discovered schemas.
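The gist of a code binding, in Python terms, is a typed class that mirrors the schema, so your editor can check attribute names and types for you. The sketch below is hand-written to illustrate the idea, not actual generated output; the class and field names are assumptions based on the customer-review example earlier.

```python
from dataclasses import dataclass

@dataclass
class CustomerReviewEvent:
    """Typed stand-in for a generated code binding (illustrative only)."""
    customer_name: str
    review: str

    @classmethod
    def from_event(cls, event):
        # Map the raw event payload onto typed, editor-checkable attributes.
        detail = event["detail"]
        return cls(customer_name=detail["customerName"], review=detail["review"])

raw = {
    "detail-type": "CustomerReview",
    "detail": {"customerName": "Ada", "review": "Great service!"},
}
typed = CustomerReviewEvent.from_event(raw)
print(typed.customer_name)  # → Ada
```

With a binding like this, a typo such as `typed.custmer_name` is flagged by the editor instead of failing at runtime.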
EventBridge vs SNS
Now, you may have noticed some similarities between EventBridge, which can push events to many subscribers, and the Amazon Simple Notification Service (SNS). And I would say you're 100% correct: there are many crossovers between these two services.
There are a few key differences that let you decide which is best for your solution. The Simple Notification Service is, as the name states, very simple: it operates on a limited set of parameters, but it does allow you to scale up to millions of subscribers. However, it doesn't have direct connectivity to software-as-a-service providers, and it doesn't provide as much routing capability as Amazon EventBridge does.
For example, it's extremely hard to have SNS trigger a Step Functions state machine compared to Amazon EventBridge.
And even though SNS scales almost infinitely, its filtering is limited to message attributes only, not the content within an event.
So if you are looking for a dead simple service that can handle a pub/sub architecture, go for SNS; but if you need a more complex and sophisticated approach, take a look at EventBridge.
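The filtering difference is easier to see side by side. Here is a simplified Python contrast, using exact-match rules only (real SNS filter policies and EventBridge event patterns both support richer operators): the SNS-style filter can only see the attributes published alongside a message, while the EventBridge-style pattern can reach into the nested event body itself.

```python
def sns_style_match(attributes, policy):
    # SNS-style: only the message's metadata attributes are visible to the filter.
    return all(attributes.get(k) == v for k, v in policy.items())

def eventbridge_style_match(event, pattern):
    # EventBridge-style: the pattern can descend into the event content.
    for key, expected in pattern.items():
        value = event.get(key)
        if isinstance(expected, dict):
            if not isinstance(value, dict) or not eventbridge_style_match(value, expected):
                return False
        elif value != expected:
            return False
    return True

event = {"source": "orders", "detail": {"status": "shipped", "region": "eu"}}

# SNS can route only on attributes published with the message:
print(sns_style_match({"source": "orders"}, {"source": "orders"}))  # → True

# EventBridge can route on fields inside the payload itself:
print(eventbridge_style_match(event, {"detail": {"status": "shipped"}}))  # → True
```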
EventBridge vs Kinesis
Kinesis actually does a fairly good job of being what EventBridge is: it's able to route events as well as work as event storage, which is ideal for processing real-time data at large scales.
One of the problems, however, is that there's a limit to the number of consumers that can connect to a single stream. Additionally, each consumer is responsible for filtering out any of the messages that come through Kinesis to determine what is important to it.
While they are a very close comparison, EventBridge really does allow you some fantastic flexibility when dealing with your SaaS Providers, so if that's more of your area of concern, do some more reading into EventBridge for your solution.
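That consumer-side burden can be sketched as follows. In this toy Python example (names are illustrative), every consumer receives every record on the shared stream and must discard what it doesn't care about, whereas an EventBridge rule would have filtered before delivery.

```python
# A shared stream: every consumer sees every record.
stream = [
    {"type": "order", "id": 1},
    {"type": "payment", "id": 2},
    {"type": "order", "id": 3},
]

def order_consumer(records):
    # Each consumer re-reads the whole stream and filters for itself.
    return [r for r in records if r["type"] == "order"]

def payment_consumer(records):
    return [r for r in records if r["type"] == "payment"]

print([r["id"] for r in order_consumer(stream)])    # → [1, 3]
print([r["id"] for r in payment_consumer(stream)])  # → [2]
```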
William Meadows is a passionately curious human currently living in the Bay Area in California. His career has included working with lasers, teaching teenagers how to code, and creating classes about cloud technology that are taught all over the world. His dedication to completing goals and helping others is what brings meaning to his life. In his free time, he enjoys reading Reddit, playing video games, and writing books.