This section of the SysOps Administrator - Associate learning path introduces you to automation and optimization services relevant to the SOA-C02 exam. We will review the service options available and learn how to apply these designs and solutions to specific design scenarios relevant to the exam.
Learning Objectives
- Understand how to decouple architecture using the Amazon Simple Notification Service (SNS) and the Amazon Simple Queue Service (SQS)
- Learn how AWS CloudFormation can be used to optimize and speed up your deployments using Infrastructure as Code (IaC)
Hello and welcome to this lecture, which will cover the Simple Queue Service, SQS. With the continuing growth of microservices and the cloud best practice of designing decoupled systems, it's imperative that developers have the ability to utilize a service or system that handles the delivery of messages between components. And this is where SQS comes in. SQS is a fully managed service offered by AWS that works seamlessly with serverless systems, microservices and any distributed architecture. Although it's simply a queuing service for messages between components, it does much more than that. It has the capability of sending, storing and receiving these messages at scale without dropping message data, as well as utilizing different queue types depending on requirements, and it includes additional features such as dead-letter queues. It is also possible to configure the service using the AWS Management Console, the AWS CLI or the AWS SDKs. Let me focus on some of the components to allow you to understand how the service is put together.
The service itself uses three different elements, two of which are part of your distributed system: the producers and the consumers. The third element is the actual queue, which is managed by SQS and is distributed across a number of SQS servers for resiliency. Let me explain how these components work together. The producer component of your architecture is responsible for sending messages to your queue. At this point, the SQS service stores the message across a number of SQS servers for resiliency within the specified queue. This ensures that the message remains in the queue should a failure occur with one of the SQS servers. Consumers are responsible for processing the messages within your queue. As a result, when the consumer element of your architecture is ready to process a message from the queue, the message is retrieved and is then marked as being processed by activating the visibility timeout on the message. This timeout ensures that the same message will not be read and processed by another consumer. When the message has been processed, the consumer then deletes the message from the queue.
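To make this concrete, here's a minimal sketch of a producer and a consumer using the AWS SDK for Python (boto3). The queue URL and the process() helper are illustrative placeholders, not part of the lecture's demonstration:

```python
import boto3

sqs = boto3.client("sqs")

# Placeholder queue URL -- substitute your own queue's URL.
QUEUE_URL = "https://sqs.eu-west-2.amazonaws.com/123456789012/my-queue"

def process(body):
    """Stand-in for your own message-processing logic."""
    print(f"Processing: {body}")

# Producer: send a message to the queue.
sqs.send_message(QueueUrl=QUEUE_URL, MessageBody="order-12345 created")

# Consumer: retrieve a message; SQS starts the visibility timeout now.
response = sqs.receive_message(
    QueueUrl=QUEUE_URL,
    MaxNumberOfMessages=1,
    WaitTimeSeconds=10,  # long polling cuts down on empty responses
)

for message in response.get("Messages", []):
    process(message["Body"])
    # Deleting the message tells SQS it was processed successfully.
    sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=message["ReceiptHandle"])
```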
Before moving on, I just want to point out a little more in relation to the visibility timeout. As I said, when a message is retrieved by a consumer, the visibility timeout is started. The default is 30 seconds, but it can be set as long as 12 hours. During this period, the consumer processes the message. If it fails to process the message, perhaps due to a communication error, the consumer will not send a delete message request back to SQS. As a result, if the visibility timeout expires before SQS receives the request to delete the message, the message will become available again in the queue for other consumers to process, and it will then appear as a new message to the queue. The value of your visibility timeout should be longer than it takes for your consumers to process your messages.
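As a sketch of how this looks in practice, a consumer can override the queue's default timeout when receiving a message, and extend it mid-processing if work takes longer than expected. The queue URL below is again a placeholder:

```python
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.eu-west-2.amazonaws.com/123456789012/my-queue"  # placeholder

# Override the queue's default visibility timeout (30s) for this receive.
response = sqs.receive_message(
    QueueUrl=QUEUE_URL,
    MaxNumberOfMessages=1,
    VisibilityTimeout=60,
)

for message in response.get("Messages", []):
    # If processing runs long, extend the timeout so no other consumer
    # receives this message before we finish (maximum 12 hours = 43,200s).
    sqs.change_message_visibility(
        QueueUrl=QUEUE_URL,
        ReceiptHandle=message["ReceiptHandle"],
        VisibilityTimeout=300,
    )
```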
I mentioned earlier that there are different types of queues: standard queues, first-in-first-out (FIFO) queues and dead-letter queues. Standard queues, which are the default queue type upon configuration, support at-least-once delivery of messages. This means that a message might actually be delivered to the queue more than once, which is largely down to the highly distributed number of SQS servers, and this can also make messages appear out of their original order of delivery. As a result, the standard queue will only offer a best effort when trying to preserve the ordering in which messages are sent by the producers. Standard queues also offer an almost unlimited number of transactions per second (TPS), making this queue type highly scalable.
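Because of this at-least-once behavior, consumers of standard queues are usually written to be idempotent, so a duplicate delivery does no harm. Here's a minimal sketch of one common approach, tracking already-processed message IDs; the in-memory set is purely illustrative:

```python
# In production you would use a durable store (e.g. DynamoDB) rather
# than an in-memory set, which is lost when the consumer restarts.
processed_ids = set()

def handle(message: dict) -> None:
    message_id = message["MessageId"]
    if message_id in processed_ids:
        return  # duplicate delivery from the standard queue -- safe to skip
    # ... process message["Body"] here ...
    processed_ids.add(message_id)
```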
If message ordering is critical to your solution, then standard queues might not be the right choice for you; instead, you would need to use first-in-first-out (FIFO) queues. This queue type is able to ensure the order of messages is maintained and that there is no duplication of messages within the queue. Unlike standard queues, FIFO queues do have a limited number of transactions per second. This defaults to 300 per second across send, receive and delete operations. If you use batching with SQS, then this increases to 3,000. Batching essentially allows you to perform actions against up to 10 messages at once within a single action.
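Here's a sketch of a batch send to a FIFO queue with boto3; the queue URL, group ID and deduplication IDs are assumptions for the example. Note that FIFO queue names must end in `.fifo`:

```python
import boto3

sqs = boto3.client("sqs")

# Placeholder URL -- FIFO queue names must end in ".fifo".
FIFO_QUEUE_URL = "https://sqs.eu-west-2.amazonaws.com/123456789012/orders.fifo"

# A batch carries up to 10 messages in one action, which is how
# batching lifts the FIFO figure from 300 to 3,000 TPS.
entries = [
    {
        "Id": str(i),                       # unique within this batch
        "MessageBody": f"order event {i}",
        "MessageGroupId": "orders",         # ordering is preserved per group
        "MessageDeduplicationId": f"event-{i}",  # not needed if content-based dedup is enabled
    }
    for i in range(10)
]

sqs.send_message_batch(QueueUrl=FIFO_QUEUE_URL, Entries=entries)
```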
So the key takeaways between the two queues are: for standard queues, you have unlimited throughput, at-least-once delivery and best-effort ordering; and for FIFO queues, you have high throughput, first-in-first-out delivery and exactly-once processing. For both queue types, it is also possible to enable encryption using server-side encryption via KMS. A dead-letter queue differs from the standard and FIFO queues in that it is not used as a source queue to hold messages submitted by producers. Instead, the dead-letter queue is used by the source queue to receive messages that fail processing for one reason or another. This could be the result of code within your application, corruption within the message, or simply missing information within a database that the message data relates to.
Either way, if a message can't be processed by a consumer after the specified maximum number of receives, the source queue will send the message to the dead-letter queue. This allows engineers to assess why the message failed, identify where the issue is and help prevent further messages from falling into the dead-letter queue. By viewing and analyzing the content of these messages, it might be possible to identify the problem and ascertain whether the issue exists from the producer or consumer perspective. One point to make with a dead-letter queue is that it must be configured as the same queue type as the source queue it is used with. For example, if the source queue is a standard queue, the dead-letter queue must also be a standard queue type. And similarly, for FIFO queues, the dead-letter queue must also be configured as a FIFO queue.
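To show how this is wired together outside the console, here's a sketch of attaching a dead-letter queue to a source queue by setting a redrive policy with boto3. The queue URL, ARN and maxReceiveCount value are illustrative:

```python
import json

import boto3

sqs = boto3.client("sqs")

# Both queues are placeholders and must be of the same queue type.
SOURCE_QUEUE_URL = "https://sqs.eu-west-2.amazonaws.com/123456789012/my-queue"
DLQ_ARN = "arn:aws:sqs:eu-west-2:123456789012:my-dead-letter-queue"

# After 5 failed receives, SQS moves the message to the dead-letter queue.
sqs.set_queue_attributes(
    QueueUrl=SOURCE_QUEUE_URL,
    Attributes={
        "RedrivePolicy": json.dumps(
            {"deadLetterTargetArn": DLQ_ARN, "maxReceiveCount": "5"}
        )
    },
)
```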
Before I end this lecture, I just want to show a quick demonstration of how to set up a queue and some of the configuration options available during this process. Okay, so in this demonstration, I'm gonna show you how to set up a queue in SQS. I'm currently at the AWS Management Console, and if I just search for SQS, we can see that the Simple Queue Service comes up. If I select that, this is the page that you'll get if you've never used SQS before; it's a splash screen that just gives you a bit of information about the service. From here, I just need to click on Get Started Now. And now it's going to ask us a few questions about the queue. Firstly, we need to enter a queue name, so I'm just gonna call this Cloud Academy. Then we have our region; currently I'm in the EU region, which is fine. And then down here we have our type of queue, so we can either have our standard queue or our FIFO queue. For this demonstration, I'm just going to stick with the standard queue type.
If I scroll down a bit further, we can see at the bottom here we have Cancel, Configure Queue, or Quick-Create Queue. I'm gonna select Configure Queue just so we can see some additional details. Here, you can set additional options such as the visibility timeout; the message retention period, which is how long SQS will keep the message if it isn't deleted; and other options such as the message size and delay delivery, et cetera. In the next section down here, we have the dead-letter queue settings. If we want to set up a dead-letter queue, we select this tick box here that says Use Redrive Policy. All the redrive policy does is simply define what conditions have to be met for a message to be moved to the dead-letter queue. Here, we'd enter the name of the dead-letter queue and also the maximum number of receives a message can have before it is sent to that queue. For this demonstration, I'm gonna leave the dead-letter queue blank. And then at the bottom here, we have the server-side encryption settings.
Now if we want to use SSE, we simply click on this tick box, and this will open up additional options to allow us to select a customer master key, a CMK. AWS will provide a default CMK to use with SQS, and as you can see, this aws/sqs key is the default master key that protects SQS messages when no other key is defined. We can see which account it's in and also the ARN. The data key reuse period is a time factor that's used to specify how long SQS can continue to use this key before it has to go back to KMS and request it again. When I'm happy with all the settings, I simply click on Create Queue. And there we have it. This is the dashboard of SQS, and we can see that we have our Cloud Academy queue here. At the bottom of the screen, we can see additional metadata about this queue, such as when it was created, last updated, et cetera. And that's it. That's how you set up an SQS queue. It's a very simple process, very intuitive and very quick and easy.
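For reference, the same settings walked through in this console demonstration can also be applied programmatically. Here's a sketch using boto3, assuming the default aws/sqs key and attribute values that mirror the console defaults:

```python
import boto3

sqs = boto3.client("sqs")

# Create a standard queue with settings comparable to the console demo.
response = sqs.create_queue(
    QueueName="CloudAcademy",
    Attributes={
        "VisibilityTimeout": "30",              # seconds (the default)
        "MessageRetentionPeriod": "345600",     # 4 days, expressed in seconds
        "KmsMasterKeyId": "alias/aws/sqs",      # the AWS managed default CMK
        "KmsDataKeyReusePeriodSeconds": "300",  # data key reuse period
    },
)

print(response["QueueUrl"])
```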
That now brings me to the end of this lecture which covered an introduction to the Simple Queue Service.
Stuart has been working within the IT industry for two decades covering a huge range of topic areas and technologies, from data center and network infrastructure design, to cloud architecture and implementation.
To date, Stuart has created 150+ courses relating to cloud computing, reaching over 180,000 students, mostly within the AWS category and with a heavy focus on security and compliance.
Stuart is a member of the AWS Community Builders Program for his contributions towards AWS.
He is AWS certified and accredited in addition to being a published author covering topics across the AWS landscape.
In January 2016, Stuart was awarded the ‘Expert of the Year Award 2015’ from Experts Exchange for his knowledge sharing of cloud services with the community.
Stuart enjoys writing about cloud technologies and you will find many of his articles within our blog pages.