Designing Highly Available, Cost Efficient Cloud Solutions
Developing Cloud Solutions
An introduction to the AWS components that help us develop highly available, cost-efficient solutions.
- Understand the core AWS services, uses, and basic architecture best practices
- Identify and recognize cloud architecture considerations, such as fundamental components and effective designs
Elasticity and Scalability
Regions and AZs
Amazon Elastic Load Balancer
Amazon Simple Queue Service
Amazon Elastic IP Addresses
Amazon Auto Scaling
Identify the appropriate techniques to code a cloud solution
Recognize and implement secure procedures for optimum cloud deployment and maintenance
Using Amazon SQS
Using Amazon SNS
Using Amazon SWF
Using Cross-Origin Resource Sharing (CORS)
If you have thoughts or suggestions for this course, please contact Cloud Academy at firstname.lastname@example.org.
Hi, and welcome to this lecture. Over the next few minutes, I will show you the Simple Queue Service. This overview will stress the main points that you need to be aware of in order to pass the exam.
Simple Queue Service is a robust messaging system that passes messages between the components of a system. SQS is a fundamental element when we are talking about loose coupling, or decoupled applications. It gives you a single place to store messages as part of a workflow. For example, an image-processing portal could receive pictures on one server, which sends them to an SQS queue, while another server polls that queue for messages; this second server then processes the image and sends it somewhere else. We'll talk about this more in another lecture. For now, this is enough for you to understand the main purpose of the Simple Queue Service.
Simple Queue Service is a distributed service. Your messages are going to be stored on multiple servers within a region. That's good for high availability, but it also introduces a few pitfalls.
For example, when you retrieve messages from your queue, it's possible to make a retrieve request and still not receive all the available messages, depending on your configuration. That and other behavioral anomalies can be explained by the distributed architecture of an SQS queue. And that's not the only potential problem. Let me show you some other considerations you must be aware of when using the Simple Queue Service. SQS makes an effort to preserve the order of the messages, but AWS does not guarantee that messages will be received in the order they were sent. If you need ordering, you can place ordering information inside the messages themselves and, during the polling process, use that information to process the messages in the correct order. Messages can be up to 256 KB in size. If you need to send larger payloads, for example a big image to be processed, you should store the image on S3 (or the data in DynamoDB) and place the location of the image inside the message instead.
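The pointer pattern for oversized payloads can be sketched in a few lines of Python. This is an illustrative helper, not part of any AWS SDK; the `build_message` name and the S3 URL are hypothetical, and only the 256 KB limit comes from the lecture:

```python
SQS_MAX_BYTES = 256 * 1024  # SQS message size limit discussed above

def build_message(payload: bytes, s3_url: str) -> dict:
    """Inline the payload if it fits in an SQS message;
    otherwise send only the S3 location of the stored object."""
    if len(payload) <= SQS_MAX_BYTES:
        return {"type": "inline", "data": payload.decode("utf-8", "replace")}
    # Too large for SQS: upload the object to S3 first, and let
    # only its location travel through the queue.
    return {"type": "pointer", "s3_url": s3_url}

small = build_message(b"thumbnail-ok", "s3://my-bucket/img.jpg")
large = build_message(b"x" * (300 * 1024), "s3://my-bucket/img.jpg")
```

The consumer that receives a "pointer" message would then fetch the object from S3 before processing it.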
A single message will be delivered at least once. Another consequence of distributed architectures is that sometimes there will be duplicated messages. AWS only guarantees that each message will be delivered at least once, but there are ways to identify duplicated messages. Take a look at the documentation for further information.
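One common way to cope with at-least-once delivery is to make the consumer idempotent, skipping any message ID it has already handled. This is a minimal sketch of that receiver-side idea, not an SQS API; the message shape and `process_once` helper are illustrative:

```python
def process_once(messages, handler, seen=None):
    """Handle each message at most once, even if SQS delivers duplicates."""
    seen = set() if seen is None else seen
    processed = []
    for msg in messages:
        if msg["id"] in seen:
            continue  # duplicate delivery, ignore it
        handler(msg)
        seen.add(msg["id"])
        processed.append(msg["id"])
    return processed

# The same message delivered twice is handled only once.
batch = [{"id": "m1", "body": "a"}, {"id": "m2", "body": "b"}, {"id": "m1", "body": "a"}]
handled = process_once(batch, handler=lambda m: None)
```

In a real application the set of seen IDs would live in shared storage, since duplicates can arrive at different consumers.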
You don't need to know those details to pass the exam, by the way. Retrieving messages from a queue is referred to as polling the queue. SQS has two ways of polling: long polling and short polling. Long polling is usually the best choice, because it waits until a message is available in the queue before sending a response. With long polling, a retrieve request returns with no messages for only two reasons: there are no messages available in the queue and the wait time expires, or the request times out. SQS is a service that is billed per request, so by enabling long polling you eliminate the empty responses that only generate cost for you. Short polling is the default behavior; it happens when the receive message wait time is set to zero.
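In code, long polling is just a receive request with a non-zero wait time. A minimal sketch of the parameters involved, kept as a plain helper so it runs without AWS credentials; the queue URL is a placeholder, and the boto3 call in the comment is how you would actually use it:

```python
def receive_kwargs(queue_url: str, wait_seconds: int = 20, batch: int = 10) -> dict:
    """Build parameters for an SQS ReceiveMessage call.
    WaitTimeSeconds > 0 enables long polling for that request
    (20 seconds is the maximum); 0 falls back to short polling."""
    if not 0 <= wait_seconds <= 20:
        raise ValueError("WaitTimeSeconds must be between 0 and 20")
    return {
        "QueueUrl": queue_url,
        "MaxNumberOfMessages": batch,
        "WaitTimeSeconds": wait_seconds,
    }

# With boto3 this would be used as:
#   sqs = boto3.client("sqs")
#   response = sqs.receive_message(**receive_kwargs(queue_url))
params = receive_kwargs("https://sqs.us-east-1.amazonaws.com/123456789012/test-queue")
```

Setting the wait time per request like this overrides the queue-level default for that call.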
Let's take a look at the AWS SQS console to show you where to define its configuration settings. Let's create a new queue. We first have to give our queue a name. After that, we have to define the properties for the queue.
The default visibility timeout is the time that a received message stays hidden from other consumers while it is being processed. It ensures that the message is visible only to the machine responsible for processing it. If something goes wrong and the message is not deleted, it becomes available in the queue again. You can also change this timeout directly on an individual message while it is being processed. The value must be between 0 seconds and 12 hours, so you can define the timeout that best suits your application's workflow. You shouldn't have any problem processing messages within the necessary time window, because you can extend the timeout for a single message if you need to.
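The visibility timeout behavior can be illustrated with a tiny in-memory toy. This simulates only the hide-then-reappear semantics described above; real SQS is a distributed service accessed through its API, and the short timings here exist just to make the example fast:

```python
import time

class TinyQueue:
    """In-memory toy illustrating visibility timeout semantics only."""

    def __init__(self, visibility_timeout: float):
        self.visibility_timeout = visibility_timeout
        self.messages = {}  # id -> (body, invisible_until)

    def send(self, msg_id, body):
        self.messages[msg_id] = (body, 0.0)

    def receive(self):
        now = time.monotonic()
        for msg_id, (body, invisible_until) in list(self.messages.items()):
            if invisible_until <= now:
                # Hide the message from other consumers while it is processed.
                self.messages[msg_id] = (body, now + self.visibility_timeout)
                return msg_id, body
        return None

    def delete(self, msg_id):
        # A consumer must delete the message after successful processing,
        # otherwise it reappears once the timeout expires.
        self.messages.pop(msg_id, None)

q = TinyQueue(visibility_timeout=0.05)
q.send("m1", "resize image")
first = q.receive()   # message is now hidden
hidden = q.receive()  # nothing else is visible
time.sleep(0.06)      # timeout expires without a delete
again = q.receive()   # the same message becomes visible again
```

This is exactly why a crashed worker doesn't lose the message: it simply becomes visible again for another worker to pick up.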
The message retention period is the time that a message will be available on a queue before it is automatically deleted.
The values can be between one minute and 14 days, and the default is four days. Next is the maximum message size. As we discussed before, it can be up to 256 KB. Then there is the delivery delay: you can set a default delay before your messages become available in the queue. The values can be between zero seconds and 15 minutes, and the default delay is zero. The receive message wait time is where you choose between long polling and short polling, as we discussed before. As you can see, here it is configured for short polling, so I will change this value to enable long polling and set the maximum wait time, which is 20 seconds.
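The console settings walked through above map directly onto SQS queue attributes. A sketch of the same configuration expressed as an attributes dictionary; the attribute names follow the SQS CreateQueue API, and the values mirror the choices made in this walkthrough:

```python
# Queue attributes as the SQS API expects them (all values are strings).
queue_attributes = {
    "VisibilityTimeout": "30",                     # seconds; 0 s to 12 h allowed
    "MessageRetentionPeriod": str(4 * 24 * 3600),  # 4 days (range: 1 min to 14 days)
    "DelaySeconds": "0",                           # delivery delay; 0 s to 15 min
    "ReceiveMessageWaitTimeSeconds": "20",         # 20 enables long polling
}

# With boto3 this would be applied at creation time:
#   sqs.create_queue(QueueName="test-queue", Attributes=queue_attributes)
```

The same dictionary also works with SetQueueAttributes to change an existing queue.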
The redrive policy is mostly for error handling. AWS provides support for dead letter queues, which are queues that other queues can target with messages that, for some reason, could not be successfully processed. Using one, you could for instance have a parallel workflow reading the messages on the dead letter queue and trying to determine the causes of the errors, and through this improve your workflow. We don't need that for our test queue, so I'll leave it unchecked. Now that we have configured all the settings, we can click on "Create Queue." With the queue created, let's go to the Permissions tab and assign some permissions to it. Currently, only IAM users from this account with permissions on SQS can interact with this queue. I'll be lazy here and add a rule that grants all rights to everyone, just to show you something. Remember the last lecture, when I said that all endpoints of an SNS topic must be confirmed? The way we allow an SNS topic to send messages to an SQS queue is by granting permissions on the queue. In our case, I'll grant permission to everybody, so it will work for SNS as well.
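When you configure a redrive policy through the API rather than the console, it is stored as a JSON string attribute on the source queue. A hedged sketch of what that looks like; the dead letter queue ARN and the receive count of 5 are placeholders, not values from this walkthrough:

```python
import json

# The ARN below is a placeholder for a dead letter queue you created beforehand.
dead_letter_arn = "arn:aws:sqs:us-east-1:123456789012:test-queue-dlq"

redrive_policy = json.dumps({
    "deadLetterTargetArn": dead_letter_arn,
    # After this many failed receives, the message moves to the dead letter queue.
    "maxReceiveCount": 5,
})

# With boto3:
#   sqs.set_queue_attributes(QueueUrl=url, Attributes={"RedrivePolicy": redrive_policy})
parsed = json.loads(redrive_policy)
```

A separate consumer can then poll the dead letter queue to inspect the failing messages, as described above.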
Let's go to the SNS console to the topic that we created in the last lecture and add a new subscription to this topic. We just need to select the proper protocol and paste the ARN of the SQS queue that we created. Now we are able to send messages to our SQS queue by publishing to this SNS topic. I'll quickly add some information in here and publish this message to show you how it will be displayed on SQS.
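The same subscription can be created through the API. A minimal sketch of the parameters for an SNS Subscribe call wiring the queue to the topic; both ARNs are placeholders standing in for the resources created in these lectures:

```python
# Parameters for sns.subscribe(); the "sqs" protocol takes the queue ARN
# as its endpoint, matching what we pasted into the console.
subscribe_params = {
    "TopicArn": "arn:aws:sns:us-east-1:123456789012:test-topic",
    "Protocol": "sqs",
    "Endpoint": "arn:aws:sqs:us-east-1:123456789012:test-queue",
}

# With boto3:
#   sns = boto3.client("sns")
#   sns.subscribe(**subscribe_params)
```

Remember that this only works because the queue's permissions allow SNS to send messages to it.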
Back to SQS, let's retrieve the messages in our queue. Just go to Queue Actions and start polling for messages. I've sent another message to this queue before. You can see that the messages are displayed gradually on the screen. Notice that the polling operation is still running even though we know that there are no new messages. We could delete the messages that we've already sent and we could open the message.
Here is the message on SQS. Note that the text has the same fields as the "Email-JSON" format I showed you in the last lecture, because the message was published through SNS. SQS messages can also carry custom attributes.
This can be very useful if your application does not use a format like JSON in the message body. Your application could define an attribute to hold, for example, a URL for a picture stored on S3, and use it to process the message.
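A sketch of what such a message looks like in code. The attribute shape follows the SQS SendMessage API; the attribute name `ImageUrl` and the S3 URL are illustrative choices, not anything defined in this course:

```python
# A message whose body is plain text, with the S3 location carried
# as a typed custom attribute instead of being embedded in the body.
message = {
    "MessageBody": "resize the picture referenced in the attributes",
    "MessageAttributes": {
        "ImageUrl": {
            "DataType": "String",
            "StringValue": "https://s3.amazonaws.com/my-bucket/picture.jpg",
        }
    },
}

# With boto3:
#   sqs.send_message(QueueUrl=url, **message)
```

The consumer reads the attribute by name, so the body stays free-form.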
Since we've already viewed the messages, let me delete them. This is it for this lecture. I hope you enjoyed it.
Head of Content
Andrew is an AWS certified professional who is passionate about helping others learn how to use and gain benefit from AWS technologies. Andrew has worked for AWS and for AWS technology partners Ooyala and Adobe. His favorite Amazon leadership principle is "Customer Obsession", as everything AWS starts with the customer. His passions outside work are cycling and surfing, and having a laugh about the lessons learnt trying to launch two daughters and a few start-ups.