This course teaches you how to manage application and network services in the Azure ecosystem.
By the end of this course, you'll have gained a firm understanding of the key components that make up the Azure application and network services ecosystem. You will achieve the following learning objectives:
- How to use Azure Active Directory.
- How to apply networking strategies for Azure and communication services.
- How to use Redis Cache.
This course is intended for individuals who wish to pursue the Azure 70-532 certification.
You should have work experience with Azure and general cloud computing knowledge.
This Course Includes
- 1 hour and 10 minutes of high-definition video.
- Expert-led instruction and exploration of important concepts surrounding Azure application and network services.
What You Will Learn
- How to utilize Azure Active Directory.
- How to implement Azure communication strategies.
- How to take advantage of Redis caching.
Welcome back. In this section we'll summarize the scaling options and features available for Azure Service Bus.
When choosing a messaging pricing tier, Service Bus offers two options. The first is the Basic tier, which provides queues and event hubs only. It is limited to 100 concurrent connections and allows only one consumer group per event hub. This means that if you want parallel processing of event hub data, this tier will not suffice. The Standard tier provides the full feature set of queues, event hubs, topics and subscriptions, and relays. 1,000 concurrent connections are allowed, as well as multiple event hub consumer groups, enabling parallel processing.
The notification hub tier is separate from the Service Bus messaging tier. It lets us choose from three different service levels. Each level provides messaging to an unlimited number of devices, but has different limitations. The free tier provides up to one million messages per month, but does not provide auto-scale functionality. The basic tier provides 10 million messages per month for free with additional messages available for a fee along with auto-scale functionality. Lastly, the standard tier is simply the basic tier, plus a host of enterprise features that we won't get into here.
When scaling Service Bus, there are a number of strategies. Firstly, we can simply create additional namespaces to spread the load, hosting our various entities, such as relays, topics, or queues, in separate namespace instances. Namespaces have limitations, such as a cap on the number of concurrent connections, so spreading the load across multiple namespaces immediately increases our scaling potential. We can partition our entity instances, such as a queue, to increase overall throughput by removing the bottleneck of having just one message broker or backing store per queue. We can adjust message sizes to optimize performance depending on our scenario, or pay for additional throughput units to increase our capacity. And we can increase the number of entities, meaning the number of instances of our relays, queues, or topics, to distribute the workload without being constrained by the throughput limitation of a single instance.
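As a minimal sketch of the namespace-spreading idea, here is a hypothetical capacity-planning helper. It assumes the Standard tier's limit of 1,000 concurrent connections per namespace mentioned earlier; the function name and interface are illustrative, not part of any Azure SDK.

```python
import math

def namespaces_needed(concurrent_connections: int,
                      per_namespace_limit: int = 1000) -> int:
    """Minimum number of Service Bus namespaces required to keep
    every namespace under its concurrent-connection limit."""
    return math.ceil(concurrent_connections / per_namespace_limit)

# 2,500 expected connections against a 1,000-connection limit
# means the load must be spread across at least 3 namespaces.
print(namespaces_needed(2500))  # 3
```

The same ceiling-division shape applies to any per-namespace quota you are planning around, not just connections.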
When it comes to queues and topics, we have three key points that we touched on previously. We have an adjustable storage size, ranging between one and five gigabytes at the time of writing, which determines the maximum number of messages that can be present on the queue, for example, at any one time. We have batching, meaning sending multiple messages in a single push, and receivers taking more than one message off the queue at once. This reduces the volume of requests from senders and receivers that the queue has to handle, thereby increasing overall throughput. And lastly, we have partitioning. Increasing the number of partitions increases the number of message brokers, which means that the overall throughput is not limited to the performance cap of a single broker.
Partitioning also increases the maximum number of readers, again increasing throughput potential. And finally, let's recap event hub scaling. Aside from creating additional namespaces, our two key scaling tools are throughput units and partitions. Throughput units are configured at the namespace level and are shared by all the event hubs in the namespace. Purchasing additional throughput units increases the maximum capacity of the event hub as a whole, with each throughput unit representing one megabyte per second, or a thousand events per second, of ingress and two megabytes per second of egress. And lastly, partitioning, which, like queues, increases the maximum number of readers and the maximum throughput. Each partition is limited to one throughput unit of performance regardless of the number of purchased throughput units, meaning that partitioning is critical to any event hub scaling strategy. This concludes the Service Bus scaling overview.
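The throughput-unit arithmetic described above can be made concrete with a short sketch. It assumes the figures quoted in this section (1 MB/s ingress and 2 MB/s egress per throughput unit, each partition capped at one throughput unit); the function names are illustrative.

```python
def max_ingress_mb_per_s(throughput_units: int, partitions: int) -> int:
    """Usable ingress: 1 MB/s per TU, but each partition can draw
    on at most one TU, so partitions can become the bottleneck."""
    return min(throughput_units, partitions)

def max_egress_mb_per_s(throughput_units: int, partitions: int) -> int:
    """Usable egress: 2 MB/s per TU, bounded the same way."""
    return 2 * min(throughput_units, partitions)

# 4 TUs but only 2 partitions: ingress tops out at 2 MB/s,
# so buying more TUs without more partitions is wasted spend.
print(max_ingress_mb_per_s(4, 2))  # 2
# 4 TUs across 8 partitions: the TUs are now the limit.
print(max_egress_mb_per_s(4, 8))   # 8
```

This is exactly why the transcript calls partitioning critical: throughput units set the ceiling you pay for, but partitions determine how much of it you can actually use.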
Next we'll have a look at the topic of Service Bus monitoring.
About the Author
Isaac has been using Microsoft Azure for several years now, working across the various aspects of the service for a variety of customers and systems. He’s a Microsoft MVP and a Microsoft Azure Insider, as well as a proponent of functional programming, in particular F#. As a software developer by trade, he’s a big fan of platform services that allow developers to focus on delivering business value.