This section of the Solution Architect Associate learning path introduces you to the core computing concepts and services relevant to the SAA-C03 exam. We start with an introduction to AWS compute services, explore the options available, and learn how to select and apply them to meet specific requirements.
Learning Objectives
- Learn the fundamentals of AWS compute services such as EC2, ECS, EKS, and AWS Batch
- Understand how load balancing and autoscaling can be used to optimize your workloads
- Learn about the AWS serverless compute services and capabilities
There are several ways to invoke a Lambda function. You can invoke it directly using the console, the CLI, or the SDKs, or it can be invoked automatically by a trigger, such as another AWS service.
No matter how you invoke a Lambda function, even if you’re invoking it directly through the Lambda service itself, you’re using the service’s API. Every invocation goes through the Lambda API, and this API provides three different models for invoking your function:
The first choice is the synchronous or push-based model. This follows the request/response model. For example, let’s say we have a service like API Gateway that gets a request from a client. API Gateway then sends that request to a backend, in this case, Lambda. API Gateway does this by making an invoke call to that function.
After the Lambda function executes, it then returns a response back to API Gateway, which returns the response to the client. Request goes out, response comes in.
For synchronous invocations, if the function fails, the trigger is responsible for retrying it. In some cases, this might mean there are no retries at all. For example, API Gateway can simply send the error message back to the client.
To invoke a Lambda function synchronously, you can set the invocation type using an AWS SDK, or you can use the CLI invoke command. This command’s --invocation-type parameter defaults to RequestResponse, which is a synchronous invocation, so you can either omit it or set it explicitly.
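Here’s a sketch of what this command looks like; the function name and payload are placeholders:

```
# Synchronous invocation: RequestResponse is the default invocation type,
# so the flag below is optional. The --cli-binary-format option is needed
# in AWS CLI v2 when passing a JSON string as the payload.
aws lambda invoke \
    --function-name my-function \
    --invocation-type RequestResponse \
    --cli-binary-format raw-in-base64-out \
    --payload '{ "key": "value" }' \
    response.json
```

The CLI waits for the function to finish executing and writes the function’s response to response.json.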
The second model is the asynchronous model, also called the event-based model. In this configuration, the response does not go back to the original service that invoked the Lambda function. In fact, there’s no path back up to the service that triggered the function to run, unless you write that logic yourself. This is common with Amazon S3 and Amazon Simple Notification Service.
For example, say you want to trigger a Lambda function to run once an object is placed in a bucket. You can do this, but the function won’t send a response back to S3 once it finishes executing unless you add that business logic yourself. The nice thing about the asynchronous model is that it handles retries if the function returns an error or is throttled. It also uses a built-in queue: any event sent to your function is placed on this queue first and then eventually sent to the function.
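As a concrete sketch of that S3 example, the trigger could be wired up like this; the bucket name, account ID, and ARNs are placeholders:

```
# Grant S3 permission to invoke the function.
aws lambda add-permission \
    --function-name my-function \
    --statement-id s3-invoke \
    --action lambda:InvokeFunction \
    --principal s3.amazonaws.com \
    --source-arn arn:aws:s3:::my-bucket

# Have S3 invoke the function asynchronously whenever an object is created.
aws s3api put-bucket-notification-configuration \
    --bucket my-bucket \
    --notification-configuration '{
      "LambdaFunctionConfigurations": [{
        "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:my-function",
        "Events": ["s3:ObjectCreated:*"]
      }]
    }'
```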
If any of these events can’t be processed for whatever reason, you can send the failed event to a dead letter queue, or use Lambda destinations to send a record of the invocation to another AWS service. A dead letter queue receives only the content of the event, whereas a destination record includes both the request context and payload and the response context and payload. While both are a great way to troubleshoot failed events, destinations is the more feature-rich option.
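Both options are configured on the function itself rather than on the event source. Here’s a sketch, assuming placeholder SQS queue ARNs:

```
# Dead letter queue: failed events arrive with only the event's content.
aws lambda update-function-configuration \
    --function-name my-function \
    --dead-letter-config TargetArn=arn:aws:sqs:us-east-1:123456789012:my-dlq

# Destination for failed asynchronous invocations: the record includes the
# request context and payload plus the response context and payload.
aws lambda put-function-event-invoke-config \
    --function-name my-function \
    --destination-config '{"OnFailure":{"Destination":"arn:aws:sqs:us-east-1:123456789012:my-failure-queue"}}'
```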
To invoke a Lambda function asynchronously, I can use the same invoke command, except this time I need to set the --invocation-type parameter to Event.
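Again, the function name and payload here are placeholders:

```
# Asynchronous invocation: Lambda queues the event and immediately returns
# a 202 status code instead of the function's response.
aws lambda invoke \
    --function-name my-function \
    --invocation-type Event \
    --cli-binary-format raw-in-base64-out \
    --payload '{ "key": "value" }' \
    response.json
```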
The last model is the stream model, also called the poll-based model. This is typically used when you need to poll messages out of stream- or queue-based services such as DynamoDB Streams, Kinesis Data Streams, and Amazon SQS. It’s pretty cool how this model works: the Lambda service runs a poller on your behalf, consumes the messages and data that come out of those services, and filters through them to invoke your Lambda function only on messages that match your use case. With this model, you need to create an event source mapping to process items from your stream or queue.
An event source mapping links your event source to your Lambda function so that the events generated from your event source will invoke your function. These mappings, the response you get back, the permissions you set up, the polling behavior, and even the event itself can be very different depending on the event source you’re using. However, the way you create event source mappings stays the same: you can do this by using the SDK or the CLI. Here is a sketch of creating one with the CreateEventSourceMapping CLI command, where the function name and DynamoDB stream ARN are placeholders:
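```
# Map a DynamoDB stream to the function. Records are batched until the
# batch size is met or the 5-second batching window expires.
aws lambda create-event-source-mapping \
    --function-name my-function \
    --event-source-arn arn:aws:dynamodb:us-east-1:123456789012:table/my-table/stream/2024-01-01T00:00:00.000 \
    --batch-size 100 \
    --maximum-batching-window-in-seconds 5 \
    --starting-position LATEST
```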
You can see that in this command, I’m creating a mapping between a DynamoDB stream and my Lambda function. I’m also specifying that the event source mapping should batch records together before sending them to my function. You can control this batching using the batch size and the batching window: when your batch size is met or the batching window reaches its maximum value, in this case 5 seconds, your Lambda function is invoked.
So which one of these models is right for you? Well, the easiest case is processing messages from a stream or queue: the best choice there is to create an event source mapping and use Lambda’s polling model.
The next case is if your application needs to wait for a response, then synchronous invocation is the best choice and can help you maintain order.
However, if you have a function that runs for a long time and whose caller doesn’t need to wait for a response, then invoking asynchronously is the preferable option, as it offers automatic retries, a built-in queue, and a dead letter queue for failed events.
Now - keep in mind that if an AWS service invokes your function, the ability to select an invocation type is removed. The service gets this choice instead and selects the invocation method for you. So all of these hard choices go away.
That’s it for this one - I’ll see you next time!
Stuart has been working within the IT industry for two decades, covering a huge range of topic areas and technologies, from data center and network infrastructure design to cloud architecture and implementation.
To date, Stuart has created 150+ cloud-related courses reaching over 180,000 students, mostly within the AWS category and with a heavy focus on security and compliance.
Stuart is a member of the AWS Community Builders Program for his contributions to the AWS community.
He is AWS certified and accredited in addition to being a published author covering topics across the AWS landscape.
In January 2016, Stuart was awarded the ‘Expert of the Year Award 2015’ by Experts Exchange for his knowledge sharing on cloud services within the community.
Stuart enjoys writing about cloud technologies and you will find many of his articles within our blog pages.