This course is made up of short, "byte-sized" whiteboard lessons showing you how to use some of the latest AWS services.
Currently we have short videos explaining:
What is an SQS First In First Out Queue and when should I use it?
Bastion hosts and NAT Gateway - What is the difference?
Amazon SQS versus Amazon SNS - What is the Difference?
Auto Scaling limits - what are they and how do I change them if I need to?
If you have thoughts or suggestions for this course, please contact Cloud Academy at email@example.com.
About the Author
Andrew is an AWS certified professional who is passionate about helping others learn how to use and gain benefit from AWS technologies. Andrew has worked for AWS and for AWS technology partners Ooyala and Adobe. His favorite Amazon leadership principle is "Customer Obsession", as everything AWS starts with the customer. Outside of work his passions are cycling and surfing, and having a laugh about the lessons learnt trying to launch two daughters and a few start-ups.
- Hi, CloudAcademy Ninjas. Let's do a quick talk on Simple Queue Service, and in particular the FIFO feature of Simple Queue Service. Last November, AWS released FIFO support for Simple Queue Service. FIFO stands for first-in, first-out, which means that a FIFO queue maintains the order in which messages are delivered out of an SQS queue. The FIFO queue feature also guarantees one-time delivery of a message, which is a pretty big deal if you've ever had to design systems where one-time delivery was a must-have feature. Prior to having FIFO support, we had to deal with the random order of SQS messages. That wasn't a deal breaker, but any ordering needed to be handled outside of SQS.

So let's walk through what Simple Queue Service does. Let's say we've got an Internet application in a VPC, and our application handles entries into a competition. We'll probably have one layer capturing the person's details, then another layer that handles image processing or something similar, and then at some point we'll write the result back to a database so that we store the information in persistent storage. We run our application on instances and, of course, scale those to meet demand using auto scaling groups, which is where the power of cloud technology really shines.

Now, one of our key design goals is always to decouple layers as much as possible, to reduce the state and dependency between application layers, and this is where Simple Queue Service becomes a really good ally. Simple Queue Service is essentially a highly available web service that guarantees to receive a message and then pass it to a message consumer.
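The decoupling idea above can be sketched locally with Python's standard `queue.Queue` standing in for SQS. This is purely an illustration of the pattern, not the SQS API (which you'd normally call over HTTPS via an SDK such as boto3); the `capture_entry` and `process_entry` names are made up for this example.

```python
import queue

# A local stand-in for an SQS queue sitting between two application layers.
entry_queue = queue.Queue()

def capture_entry(name):
    """Front-end layer: accept a competition entry and enqueue it."""
    entry_queue.put({"name": name})

def process_entry():
    """Back-end layer: pull the next entry off the queue and process it.

    Neither layer calls the other directly; the queue absorbs bursts and
    lets each layer scale independently.
    """
    entry = entry_queue.get()
    return f"processed entry for {entry['name']}"

capture_entry("Ada")
capture_entry("Grace")
print(process_entry())  # → processed entry for Ada
```

The point of the sketch is that the capture layer can keep accepting entries even if the processing layer is slow or briefly down, which is exactly the durability SQS provides between real application tiers.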
So it takes care of the backlog of requests that can build up between two layers, and we can put SQS in multiple places to give ourselves as much durability between our layers as possible. Simple features like the visibility timeout make it possible to limit how messages are consumed, because remember, we can't guarantee the order in which messages are received. That is simply because Simple Queue Service is a highly available service: it runs across multiple availability zones, and it's available in all regions. The downside of having such a highly available system is that we don't have any guarantee of the order in which messages come out.

However, we can do things to reduce duplication, like using the visibility timeout window. We can use long polling as well, which lets us say, wait x seconds before requesting another message, so that we reduce the risk of processing duplicated messages. And we've got a few other tools, like the dead letter queue, which lets us shuffle messages that aren't being processed into a separate queue so we can decide how to handle them. All of these are designed to help one layer talk to another in the most reliable and robust way.

So, what do we get with FIFO? FIFO queues guarantee one-time delivery of a message. That is quite a big deal if you've ever had to deal with messages that could be delivered more than once, and had to handle that in an extra application layer. So: one-time delivery, plus guaranteed ordering of messages. Let's have a quick think about how this works. You interact with FIFO SQS messages the same way you interact with standard SQS queue messages, but with a FIFO queue, the order in which messages are sent and received is strictly preserved.
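The visibility-timeout mechanic described above can be sketched with a small local simulation (again, a toy, not the SQS API): a received message becomes invisible for a window, and if the consumer doesn't delete it before the window expires, it becomes receivable again, which is exactly how duplicate processing can arise on a standard queue.

```python
import time

class VisibilityQueue:
    """Toy queue mimicking SQS visibility-timeout semantics locally."""

    def __init__(self, visibility_timeout=30.0):
        self.visibility_timeout = visibility_timeout
        self.messages = {}  # message id -> (body, visible_after timestamp)
        self.next_id = 0

    def send(self, body):
        self.messages[self.next_id] = (body, 0.0)  # visible immediately
        self.next_id += 1

    def receive(self):
        """Return the first visible message and hide it for the timeout window."""
        now = time.monotonic()
        for msg_id, (body, visible_after) in list(self.messages.items()):
            if now >= visible_after:
                self.messages[msg_id] = (body, now + self.visibility_timeout)
                return msg_id, body
        return None  # nothing visible right now

    def delete(self, msg_id):
        """Consumers must delete a message before the timeout or it reappears."""
        self.messages.pop(msg_id, None)

q = VisibilityQueue(visibility_timeout=60.0)
q.send("resize-image-42")
msg_id, body = q.receive()
assert q.receive() is None  # invisible while it's being processed
q.delete(msg_id)            # done: remove it so it can't be redelivered
```

If `delete` is never called, the message would become visible again after 60 seconds and another consumer could pick it up, which is why the transcript stresses sizing the visibility timeout to your actual processing time.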
A message is delivered once, and then remains available until a consumer processes and deletes it. This means that duplicates won't be created in the queue. That's pretty cool, so let's take a quick look at how it works.

An SQS queue is recognized as a FIFO queue if its name has a .fifo suffix. Okay, so that's the first difference: we tell SQS that we want a FIFO queue by giving its name a .fifo suffix. We also need to set the FifoQueue attribute to true. We use the same API actions as standard queues, and the methods for receiving and deleting messages, and even changing the visibility timeout, are the same for FIFO queues as they are for standard queues. However, when you send a message to a FIFO queue, you must specify a MessageGroupId. Now, that's important: the MessageGroupId cannot be blank. If the MessageGroupId is blank on a FIFO queue, the message won't be sent.

So how does SQS use this MessageGroupId? Messages within a group are ordered within that group. So if we have a MessageGroupId of 1, those messages are treated as a message group, and they'll be processed one by one, in strict order, relative to that group. Now if we send five other messages with a MessageGroupId of 2, those five messages will be ordered relative to each other, rather than to the queue as a whole. So ordering only applies within a message group; messages that belong to different message groups may be processed in a different order relative to each other. If you've got a system with multiple senders and multiple recipients, you should look to generate a unique MessageGroupId for each message. If you don't need to create multiple ordered message groups, just use the same MessageGroupId for all of your messages. When you receive FIFO messages, SQS will return as many messages with the same MessageGroupId as possible.
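The per-group ordering rules can be illustrated with another local simulation. The `.fifo` name check, the blank-`MessageGroupId` rejection, and the strict per-group ordering mirror the real rules described above, but the class itself is a sketch, not the boto3 API.

```python
from collections import defaultdict, deque

class FifoQueueSim:
    """Local sketch of FIFO-queue ordering: strict order inside a message
    group, no ordering guarantee between groups."""

    def __init__(self, name):
        # Mirrors the real rule: a FIFO queue name must end in ".fifo".
        assert name.endswith(".fifo"), "FIFO queue names need the .fifo suffix"
        self.name = name
        self.groups = defaultdict(deque)  # MessageGroupId -> ordered messages

    def send(self, body, message_group_id):
        if not message_group_id:
            raise ValueError("MessageGroupId cannot be blank on a FIFO queue")
        self.groups[message_group_id].append(body)

    def receive(self, message_group_id):
        """Deliver the oldest message in the group, preserving send order."""
        group = self.groups[message_group_id]
        return group.popleft() if group else None

q = FifoQueueSim("entries.fifo")
q.send("step-1", message_group_id="order-1")
q.send("step-2", message_group_id="order-1")
q.send("other-a", message_group_id="order-2")
assert q.receive("order-1") == "step-1"   # strict order within the group
assert q.receive("order-1") == "step-2"
assert q.receive("order-2") == "other-a"  # independent of group "order-1"
```

Notice that nothing constrains when "other-a" is delivered relative to the "order-1" messages; only ordering within a single group is guaranteed.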
FIFO messages use a MessageDeduplicationId key to manage deduplication of messages, and every message must have a unique MessageDeduplicationId. So if you've got 10 messages sent in succession to a FIFO queue, each with a distinct MessageDeduplicationId, SQS acknowledges the transmission, stores those messages, and then each message can be received and processed in the exact order in which the messages were transmitted. Basically it's a very, very simple service.

There are a few things you need to consider, though. One is that you can't convert a standard queue to a FIFO queue; if you want to convert one, you have to delete your standard queue and then create a new FIFO queue.

Next, the transaction rate. This is one of the differences: a FIFO queue supports up to 300 transactions per second, whereas a standard queue has nearly unlimited throughput, which is why a standard queue is generally a better option if you have a high-volume, bursty queue where you don't want to hit a top limit.

There's also a delay factor. FIFO queues support a per-queue delay, but not a per-message delay. So if your application sets a DelaySeconds of 10, for example, on each message, that's not going to work with a FIFO queue. You'll need to remove the per-message parameter and set a DelaySeconds of 10 parameter on the entire queue.

Now, with processing time, you need to make sure that there's enough time to process and delete a message. If you're not sure, extend the message visibility timeout to the maximum time it takes to process and delete the message. If you don't know how long it takes to process a message, specify the initial visibility timeout, plus the period of time after which you can check whether the message has been processed.
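How MessageDeduplicationId suppresses duplicates can be sketched locally too. One caveat on the sketch: the real queue only applies deduplication over a five-minute deduplication interval, whereas this toy version remembers every id it has ever seen.

```python
class DedupFifoSim:
    """Toy FIFO queue showing MessageDeduplicationId semantics: a resend
    carrying an already-seen deduplication id is acknowledged but not
    enqueued a second time."""

    def __init__(self):
        self.seen_dedup_ids = set()
        self.messages = []

    def send(self, body, dedup_id):
        if dedup_id in self.seen_dedup_ids:
            return False  # duplicate: accepted, but not stored again
        self.seen_dedup_ids.add(dedup_id)
        self.messages.append(body)
        return True

q = DedupFifoSim()
q.send("charge-card", dedup_id="txn-1001")
q.send("charge-card", dedup_id="txn-1001")  # retry of the same send
assert q.messages == ["charge-card"]        # stored exactly once
```

This is what makes FIFO queues attractive for things like payment processing: a producer can safely retry a send after a network error, reusing the same deduplication id, without creating a second copy of the message.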
All right, so if it's taking a long time to process messages and your visibility timeout is already set to a high value, consider adding a ReceiveRequestAttemptId to each receive message action. The other consideration, in terms of limits, is that there's an inflight message limit with FIFO queues: we can have up to 20,000 inflight messages in a FIFO queue. That sounds like a lot, but if you're dealing with a very high-volume queue, keep in mind that the inflight message limit for a standard queue is 120,000, so there's quite a difference there. A message is considered to be inflight if it has been received by a consumer but hasn't yet been deleted from the queue.

Unless you really need that guaranteed ordering, a standard queue is probably going to suit your use case better. But it's also good to know that if you do have an application, like an eCommerce application, that requires absolute ordering or once-only delivery of a message, then you've got FIFO queues. Okay, thanks Ninjas, see you next time.