70-534 - Design Advanced Applications

Implement messaging applications

Overview

Difficulty: Intermediate
Duration: 37m
Students: 630

Description

This course covers the Design advanced applications part of the 70-534 exam, which is worth 20–25% of the exam. The intent of the course is to help fill in any knowledge gaps that you might have and help prepare you for the exam.

Transcript

Welcome back. In this lesson, we'll be discussing messaging applications on Azure. In particular, we'll cover Service Bus and Event Hubs.

Azure Service Bus is a hosted messaging service that provides various communication channels between your applications. Service Bus provides a few options, namely queues, topics, and relays, though we won't be covering relays in this lesson.

Queues are a great messaging solution because you don't require the receiver to be available. Messages can be placed in the queue and then picked up by the receiver whenever that receiver becomes available. When the receivers are ready to process messages, queues will deliver them in a first-in, first-out pattern.

Because the queue is its own durable storage mechanism for messages, it allows you to build applications that are much more robust: you can queue up requests and then scale the receivers independently.
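The course itself doesn't show code, but as a rough illustration, here's a minimal sketch of sending to and receiving from a Service Bus queue using the azure-servicebus Python SDK; the connection string and queue name are placeholders I've made up.

```python
# Minimal sketch using the azure-servicebus Python SDK (v7.x).
# The connection string and queue name are placeholders.
from azure.servicebus import ServiceBusClient, ServiceBusMessage

CONN_STR = "<your-service-bus-connection-string>"
QUEUE_NAME = "orders"

with ServiceBusClient.from_connection_string(CONN_STR) as client:
    # Sender: place a message on the queue; the receiver doesn't
    # need to be online at this point.
    with client.get_queue_sender(queue_name=QUEUE_NAME) as sender:
        sender.send_messages(ServiceBusMessage("order-12345"))

    # Receiver: pick up messages whenever it becomes available.
    with client.get_queue_receiver(queue_name=QUEUE_NAME, max_wait_time=5) as receiver:
        for msg in receiver:
            print(str(msg))
            receiver.complete_message(msg)  # remove the message from the queue
```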

A few examples of common queue-based patterns are the queue-based load leveling pattern, the priority queue pattern, and the competing consumers pattern. Now, if you're not familiar with these patterns, I recommend that you check out Microsoft's Cloud Design Patterns documentation.

Here are some of the key properties that define the behavior of a queue. This list is far from complete, but it highlights some of the more significant settings. The first setting is the queue max size, which defines the message-holding capacity of a queue and can be set up to 5 GB.

The time to live defines how long a message should live before it's removed from the queue. There's also an option that will enable you to move expired messages to a dead-letter queue.

Another important setting is the lock duration, which is the length of time that a message remains unavailable to other receivers once a receiver issues a request, called a peek lock, on that message. A peek lock is a nondestructive read, meaning that the receiver can read the message while making it inaccessible to other readers.

Typically, the message will be deleted by the receiver after it's processed. The default lock duration for a Service Bus queue is 60 seconds. The next setting is sessions, which allows related messages to be grouped together, meaning that their delivery is guaranteed to be in order.
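To make the peek-lock flow concrete, here's a hedged sketch (not from the course) using the azure-servicebus Python SDK; the names and the process function are placeholders of my own.

```python
# Sketch of peek-lock semantics with the azure-servicebus Python SDK (v7.x).
# Connection string, queue name, and process() are placeholders.
from azure.servicebus import ServiceBusClient, ServiceBusReceiveMode

CONN_STR = "<your-service-bus-connection-string>"
QUEUE_NAME = "orders"

def process(msg):
    # Placeholder for your own processing logic.
    print(str(msg))

with ServiceBusClient.from_connection_string(CONN_STR) as client:
    receiver = client.get_queue_receiver(
        queue_name=QUEUE_NAME,
        receive_mode=ServiceBusReceiveMode.PEEK_LOCK,  # nondestructive read (the default mode)
    )
    with receiver:
        for msg in receiver.receive_messages(max_message_count=10, max_wait_time=5):
            try:
                process(msg)
                receiver.complete_message(msg)   # delete the message once it's processed
            except Exception:
                receiver.abandon_message(msg)    # release the lock so another receiver can retry
```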

And the last setting is the partitioning setting, which allows messages to be split across multiple brokers, enabling greater scalability.
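To tie those settings together, here's a rough sketch, not from the course, of how you might configure them when creating a queue with the azure-servicebus Python SDK's management client; all of the names and values are placeholder assumptions.

```python
# Sketch: configuring the queue settings discussed above with the
# azure-servicebus management client (v7.x). Names and values are placeholders.
from datetime import timedelta
from azure.servicebus.management import ServiceBusAdministrationClient

CONN_STR = "<your-service-bus-connection-string>"

admin = ServiceBusAdministrationClient.from_connection_string(CONN_STR)
admin.create_queue(
    "orders",
    max_size_in_megabytes=5120,                       # queue max size (here, 5 GB)
    default_message_time_to_live=timedelta(days=1),   # time to live for messages
    dead_lettering_on_message_expiration=True,        # move expired messages to the dead-letter queue
    lock_duration=timedelta(seconds=60),              # peek-lock duration
    requires_session=True,                            # enable sessions for grouped, in-order delivery
    enable_partitioning=True,                         # spread the queue across multiple brokers
)
```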

In addition to queues, Service Bus also offers topics and subscriptions, which are based on the publish-subscribe pattern, often called the Pub-Sub pattern. The way it works is that the existence of a message is broadcast, which is the publish part of Pub-Sub; and then any subscribers that are listening can receive that message, which is the subscribe part. Unlike a queue, which provides a one-to-one message delivery mechanism, subscriptions and topics provide a one-to-many delivery system.

When a message is published to a topic, it's copied to each subscription that's configured as a listener for that topic, and receivers read from those subscriptions to get the messages. You can also use filters to further restrict which messages are delivered from a topic to each subscription.
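Here's another hedged sketch, again using the azure-servicebus Python SDK, showing the one-to-many flow with a topic and a subscription; the topic and subscription names are placeholders, and the subscription is assumed to already exist.

```python
# Sketch: one-to-many delivery with a topic and a subscription,
# using the azure-servicebus Python SDK (v7.x). Names are placeholders.
from azure.servicebus import ServiceBusClient, ServiceBusMessage

CONN_STR = "<your-service-bus-connection-string>"
TOPIC = "order-events"
SUBSCRIPTION = "billing"   # each subscription receives its own copy of matching messages

with ServiceBusClient.from_connection_string(CONN_STR) as client:
    # Publish: the sender only knows about the topic, not about the subscribers.
    with client.get_topic_sender(topic_name=TOPIC) as sender:
        sender.send_messages(ServiceBusMessage("order-12345 created"))

    # Subscribe: each receiver reads from its own subscription.
    with client.get_subscription_receiver(topic_name=TOPIC,
                                          subscription_name=SUBSCRIPTION,
                                          max_wait_time=5) as receiver:
        for msg in receiver:
            print(str(msg))
            receiver.complete_message(msg)
```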

To make it easier to incorporate Service Bus into your architecture, it works with three protocols: HTTP, the Service Bus Messaging Protocol, which is typically abbreviated SBMP, and AMQP, the Advanced Message Queuing Protocol. AMQP and SBMP are more efficient protocols than HTTP since they maintain a connection to the Service Bus as long as the messaging factory exists.

However, HTTP is a firewall-friendly option that works with just about every platform and language. If you're going to use Service Bus, you'll need to consider its limitations and how to overcome them. That means you should consider your scaling.
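As a side note on transports: the azure-servicebus Python SDK speaks AMQP, and its firewall-friendly option is AMQP over WebSockets on port 443. Here's a minimal sketch of selecting it; this is my illustration, not part of the course.

```python
# Sketch: selecting the transport in the azure-servicebus Python SDK (v7.x).
# The connection string is a placeholder.
from azure.servicebus import ServiceBusClient, TransportType

CONN_STR = "<your-service-bus-connection-string>"

client = ServiceBusClient.from_connection_string(
    CONN_STR,
    transport_type=TransportType.AmqpOverWebsocket,  # tunnels AMQP over port 443; default is TransportType.Amqp
)
```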

When it comes to scaling Service Bus, there are a number of strategies. First, you can create additional namespaces to distribute the load, hosting various entities such as relays, topics, and queues in separate namespace instances.

Namespaces have a limit on the number of concurrent connections, so spreading the load across multiple namespaces will immediately increase the scalability. Another scaling strategy is to partition your entity instances, such as a queue, to increase overall throughput by removing the bottleneck of having just one message broker per queue.

You can also adjust the message sizes to optimize performance, depending on your scenario. You can likewise increase the number of entities, meaning the number of instances of your relays, queues, and topics, to distribute the workload without being constrained by the throughput limitations of a single instance.

When it comes to queues and topics, there's an adjustable storage size, ranging between one and five gigabytes at the time of this recording, which determines how much message data a queue can hold at any given time.

And finally, another strategy is batching, which involves sending and receiving multiple messages at the same time. This reduces the number of requests that senders and receivers make to the queue, which increases your overall throughput.
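For illustration, here's a hedged sketch of batched sends and receives with the azure-servicebus Python SDK; the queue name and payloads are placeholders.

```python
# Sketch: batching with the azure-servicebus Python SDK (v7.x).
# Queue name and payloads are placeholders.
from azure.servicebus import ServiceBusClient, ServiceBusMessage

CONN_STR = "<your-service-bus-connection-string>"
QUEUE_NAME = "orders"

with ServiceBusClient.from_connection_string(CONN_STR) as client:
    with client.get_queue_sender(queue_name=QUEUE_NAME) as sender:
        batch = sender.create_message_batch()
        for i in range(100):
            batch.add_message(ServiceBusMessage(f"order-{i}"))  # raises if the batch is full
        sender.send_messages(batch)  # one request instead of one hundred

    # Receiving also supports batching: ask for many messages per request.
    with client.get_queue_receiver(queue_name=QUEUE_NAME) as receiver:
        for msg in receiver.receive_messages(max_message_count=100, max_wait_time=5):
            receiver.complete_message(msg)
```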

Okay, let's switch gears to focus on Event Hubs. Azure Event Hubs is a highly scalable publish-subscribe service that can handle millions of events per second and make them available to multiple applications. Event Hubs allows you to process, reprocess, and analyze massive amounts of data and respond to variable traffic loads.

Unlike queues, Event Hubs doesn't have a concept of time to live for individual messages. There's no concept of transactions or dead-letter queues either. Instead, Event Hubs focuses on providing a low-latency, reliable, and durable event storage system that has the ability to replay data for processing and analysis.

This diagram is a simple view of Event Hubs. You can have multiple senders, for example, thousands of mobile applications, and they publish events to the Event Hub. When these messages arrive at the Event Hub, they're stored in one of the partitions, which distributes the message load, making it a scalable solution.
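As a concrete illustration, not from the course, here's a minimal sketch of publishing events with the azure-eventhub Python SDK; the connection string, hub name, partition key, and payloads are placeholders.

```python
# Sketch: publishing events to an Event Hub with the azure-eventhub Python SDK (v5.x).
# Connection string, hub name, partition key, and payloads are placeholders.
from azure.eventhub import EventHubProducerClient, EventData

CONN_STR = "<your-event-hubs-connection-string>"
HUB_NAME = "telemetry"

producer = EventHubProducerClient.from_connection_string(CONN_STR, eventhub_name=HUB_NAME)
with producer:
    # Events with the same partition key land in the same partition;
    # omitting it lets the service spread events across partitions.
    batch = producer.create_batch(partition_key="device-42")
    batch.add(EventData('{"temperature": 21.5}'))
    batch.add(EventData('{"temperature": 21.7}'))
    producer.send_batch(batch)
```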

And at the time of this recording, the minimum partition count is two and the maximum is 32. When it comes to scaling Event Hubs, there are some things to consider, just like there were with Service Bus. Aside from creating additional namespaces, the two key scaling tools are throughput units and partitions.

Throughput units are configured at the namespace level and are shared by all Event Hubs in that namespace. Purchasing additional throughput units increases the maximum capacity of the Event Hub as a whole, with each throughput unit representing one megabyte per second, or 1,000 events per second, of ingress and two megabytes per second of egress.

There's also partitioning, which, similar to queues, increases the maximum number of readers and the maximum throughput. Each partition is limited to one throughput unit, regardless of the number of purchased throughput units, meaning that partitioning is critical to any Event Hub scaling strategy. For example, to take advantage of four throughput units of ingress, you'd need at least four partitions, since each partition can use at most one.
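To show the reader side, here's a hedged sketch of a single reader pinned to one partition using the azure-eventhub Python SDK; in practice you'd run one reader per partition or use a checkpoint store, and all of the names here are placeholders.

```python
# Sketch: one reader per partition with the azure-eventhub Python SDK (v5.x).
# Connection string, hub name, and partition id are placeholders.
from azure.eventhub import EventHubConsumerClient

CONN_STR = "<your-event-hubs-connection-string>"
HUB_NAME = "telemetry"

def on_event(partition_context, event):
    # Each reader only sees the events stored in its own partition.
    print(partition_context.partition_id, event.body_as_str())

consumer = EventHubConsumerClient.from_connection_string(
    CONN_STR, consumer_group="$Default", eventhub_name=HUB_NAME
)
with consumer:
    # Read a single partition from the start of the stream (blocks until stopped).
    # Adding partitions is what lets you add more parallel readers.
    consumer.receive(on_event=on_event, partition_id="0", starting_position="-1")
```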

Okay, that's gonna wrap up this lesson. In the next lesson, I'll cover implementing applications for background processing. So, if you're ready to keep going, then I'll see you in the next lesson.

About the Author

Students: 32101
Courses: 29
Learning paths: 17

Ben Lambert is the Director of Engineering and was previously the lead author for DevOps and Microsoft Azure training content at Cloud Academy. His courses and learning paths covered Cloud Ecosystem technologies such as DC/OS, configuration management tools, and containers. As a software engineer, Ben’s experience includes building highly available web and mobile apps.

When he’s not building the first platform to run and measure enterprise transformation initiatives at Cloud Academy, he’s hiking, camping, or creating video games.