Azure Artificial Intelligence Services
Design for IoT
Design Messaging Solution Architectures
Design Media Service Solutions
This course is focused on creating practical solutions using Azure technologies in areas such as AI, messaging, the Internet of Things, and video media. This will require familiarity with dozens of Azure solutions.
This course will take you through all of the relevant technologies and ensure you know which ones to pick to solve specific problems. This course is for developers, engineering managers, and cloud architects looking to get a better understanding of Azure services.
Whether your app deals with artificial intelligence, managing IoT devices, video media, or push notifications for smartphones, Azure has an answer for every use case. This course will help you get the most out of your Azure account by preparing you to make use of many different solutions.
- Design solutions using Azure AI technologies
- Design solutions for IoT applications using Azure technologies
- Create a scalable messaging infrastructure using Azure messaging technologies
- Design media solutions using Azure media technologies and file encoding
People who want to become Azure cloud architects
General knowledge of IT architecture
Scalability is often your most critical challenge when designing cloud infrastructure. If we cannot tolerate traffic spikes or sudden growth, then we have been derelict in our duty. The first place to start when addressing scalability is to identify bottlenecks. Which components of our system are most vulnerable to catastrophic failure when usage patterns change?
With the notification and messaging systems we discussed in the previous lessons, there are, at a high level, three places we want to focus:

1. Messaging and event ingestion: the "on-ramp" into our system.
2. Message routing: getting ingested messages to the right place with minimal latency.
3. Message processing: once we are ready to act on a message, be it running some code or storing it somewhere, we can do so quickly.
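To make the three stages concrete, here is a purely local sketch of the ingest/route/process flow using Python's standard library. This is a conceptual illustration only: in a real Azure system, ingestion would be handled by Storage Queues or Service Bus, routing by Event Grid, and processing by Azure Functions or VMs. The topic names and message shapes are invented for the example.

```python
from queue import Queue

# Local stand-ins for the three stages discussed above. Not an Azure API;
# just a sketch of how messages flow from ingestion to routing to processing.

ingest_queue = Queue()                    # stage 1: the "on-ramp"
routes = {"telemetry": [], "alerts": []}  # stage 2: routed destinations

def ingest(message: dict) -> None:
    """Accept a raw message into the system."""
    ingest_queue.put(message)

def route_all() -> None:
    """Deliver each ingested message to its destination by topic."""
    while not ingest_queue.empty():
        msg = ingest_queue.get()
        routes[msg["topic"]].append(msg)

def process(topic: str) -> int:
    """'Process' routed messages by draining them; returns the count handled."""
    handled = len(routes[topic])
    routes[topic].clear()
    return handled

ingest({"topic": "telemetry", "value": 21.5})
ingest({"topic": "alerts", "value": "overheat"})
route_all()
print(process("telemetry"))  # -> 1
```

The point of separating the stages is that each one can be scaled independently, which is exactly how the Azure services below divide the work.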
So let’s start with message ingestion. We’ll address Azure Storage Queues and Azure Service Bus. For Storage Queues, you get Azure’s enterprise storage SLAs, which include certain performance guarantees. See the links for full details in the Azure documentation. Basically, unless you have an extremely high volume system, Azure should be able to handle your needs. Storage Queues can handle up to 20,000 messages per second per storage account, and up to 2,000 messages per second per individual queue.
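Those two targets imply a quick capacity check: how many queues, and how many storage accounts, do you need for a given throughput? A minimal sketch, using the per-queue and per-account numbers quoted in this lesson (verify them against the current Azure scalability-targets documentation before planning real capacity):

```python
import math

# Throughput targets quoted in this lesson; confirm against current Azure docs.
ACCOUNT_LIMIT_MPS = 20_000  # messages/sec per storage account
QUEUE_LIMIT_MPS = 2_000     # messages/sec per individual queue

def storage_queue_capacity_plan(target_mps: int) -> tuple[int, int]:
    """Return (queues, storage_accounts) needed to sustain target_mps."""
    queues = math.ceil(target_mps / QUEUE_LIMIT_MPS)
    accounts = math.ceil(target_mps / ACCOUNT_LIMIT_MPS)
    return queues, accounts

print(storage_queue_capacity_plan(5_000))   # -> (3, 1)
print(storage_queue_capacity_plan(50_000))  # -> (25, 3)
```

So a 50,000 messages-per-second workload would already need to be spread across multiple storage accounts, which is the kind of bottleneck worth identifying up front.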
With Service Bus, performance varies a bit depending on the tier you select. In both cases you will be able to handle larger message sizes than Storage Queues. The main difference between the Service Bus Premium and Standard tiers is performance predictability. The Premium tier guarantees consistent high throughput at a fixed price, with the ability to scale workloads up and down. The Standard tier is pay-as-you-go with variable latency and throughput. Also note that the size of an individual Service Bus queue is capped at 80 GB, whereas an Azure Storage account can hold up to 500 TB of queue data. Keep this in mind if you have an unusual use case.
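A small decision helper can make these trade-offs concrete. The 80 GB and 500 TB figures come from this lesson; the message-size limits in the comments (roughly 64 KB for Storage Queues, 256 KB for Service Bus Standard, 1 MB for Premium) are approximate and should be verified against the current Azure quotas documentation:

```python
def pick_queue_service(msg_kb: float, total_queue_gb: float) -> str:
    """Rough first-pass chooser based on the limits discussed above.

    Approximate message-size limits (verify against current Azure docs):
    Storage Queues ~64 KB, Service Bus Standard ~256 KB, Premium ~1 MB.
    Service Bus queues cap at 80 GB; a Storage account can hold up to 500 TB.
    """
    if total_queue_gb > 80:
        # Only Storage Queues can exceed Service Bus's 80 GB queue cap.
        return "Storage Queues" if msg_kb <= 64 else "no single fit"
    if msg_kb <= 64:
        return "Storage Queues"
    if msg_kb <= 256:
        return "Service Bus Standard"
    if msg_kb <= 1024:
        return "Service Bus Premium"
    return "no single fit"

print(pick_queue_service(32, 10))    # -> Storage Queues
print(pick_queue_service(200, 10))   # -> Service Bus Standard
print(pick_queue_service(512, 500))  # -> no single fit
```

The "no single fit" cases (large messages plus a huge backlog) are exactly the unusual use cases the lesson warns about, where you may need to rethink the design, for example by storing payloads in Blob Storage and queuing only references.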
So the basic scaling takeaway here is that with both services you get strong performance guarantees but you may need to think carefully about whether Service Bus or Storage Queues make more sense for your needs as there are key differences.
Now for routing messages we’re going to just briefly mention Event Grid. It has strong scalability guarantees as well and is great for serverless architectures that need to route data to various endpoints. Event Grid includes a 24-hour retry window with exponential backoff, which gives you some breathing room if there is a temporary issue with your system. It is also built to handle millions of events per second. Be aware, though, that the pricing model is pay-per-event. If cost is an issue, it could be cheaper to handle events with your own custom system and only pay for network bandwidth.
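To see why a 24-hour window with exponential backoff gives so much breathing room, here is a sketch of such a schedule. Event Grid's actual retry intervals are internal to Azure, so the base delay and doubling factor below are illustrative assumptions, not the real values:

```python
def retry_schedule(base_s: float = 10.0, factor: float = 2.0,
                   window_s: float = 24 * 3600) -> list[float]:
    """Cumulative retry times (seconds) under exponential backoff,
    stopping once the retry window is exhausted.

    Illustrative parameters only; Event Grid's real schedule differs.
    """
    times, delay, elapsed = [], base_s, 0.0
    while elapsed + delay <= window_s:
        elapsed += delay
        times.append(elapsed)
        delay *= factor
    return times

schedule = retry_schedule()
print(len(schedule))  # -> 13 attempts fit in the 24-hour window
print(schedule[:4])   # -> [10.0, 30.0, 70.0, 150.0]
```

The takeaway: doubling delays means only a dozen or so attempts span an entire day, so retries stay cheap for Azure while your endpoint gets many hours to recover.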
Finally, how do we scale our message processing? Well, that depends on exactly what is doing the processing. If we’re using Azure Functions, happily, we don’t need to worry about scaling: Azure will do it for us automatically as traffic increases. Be aware that you will of course be charged based on function usage. If we have some sort of app running on a VM doing the processing, then we may need to implement some form of autoscaling. Our endpoint could also be an Azure Logic Apps workflow. See the documentation link for details on its limitations. There are HTTP request and message size limits that cannot be raised. If your bottleneck is needing to run more actions in a logic app workflow, you can add nested workflows to get more done.
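For the VM case, autoscaling usually boils down to a rule like "add instances until the backlog can drain within N seconds." A minimal sketch of that rule, the kind you might express with Azure VM Scale Set autoscale settings; the per-instance throughput and limits here are invented for illustration:

```python
import math

def desired_instances(queue_depth: int,
                      per_instance_mps: int = 100,
                      drain_target_s: int = 60,
                      max_instances: int = 10) -> int:
    """Instances needed so the current backlog drains within drain_target_s.

    Parameters are illustrative assumptions, not Azure defaults.
    """
    capacity_per_instance = per_instance_mps * drain_target_s
    needed = math.ceil(queue_depth / capacity_per_instance) if queue_depth else 1
    # Always keep at least one instance; never exceed the configured cap.
    return max(1, min(needed, max_instances))

print(desired_instances(30_000))  # -> 5
print(desired_instances(0))       # -> 1
```

In practice you would feed this rule a queue-length metric (such as the approximate message count from a Storage Queue) and let the scale set adjust, but the core arithmetic is this simple.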
So that about does it for scaling. We have talked about ingesting, routing, and processing messages using Azure solutions. In many cases we get solid default guarantees from Azure. In some cases we may need to select specific service tiers or change our configuration. The nice thing is that Azure is such a mature and robust system that, unless you are Amazon or Google, you probably can get your work done with their solutions. So we will end it there for our section on messaging. Our last part of this course will focus on handling media with Azure services. See you there!
Logic Apps: https://docs.microsoft.com/en-us/azure/logic-apps/logic-apps-limits-and-config
Storage queues performance: https://docs.microsoft.com/en-us/azure/storage/common/storage-scalability-targets#azure-queue-storage-scale-targets
About the Author
Jonathan Bethune is a senior technical consultant working with several companies including TopTal, BCG, and Instaclustr. He is an experienced devops specialist, data engineer, and software developer. Jonathan has spent years mastering the art of system automation with a variety of different cloud providers and tools. Before he became an engineer, Jonathan was a musician and teacher in New York City. Jonathan is based in Tokyo where he continues to work in technology and write for various publications in his free time.