CloudAcademy

Using Lambda Functions in a Simple Application

Overview

Difficulty: Beginner
Duration: 39m
Students: 1008

Description

In this course, we explore some of the common use cases for serverless functions and get started with implementing serverless functions in a simple application.

Transcript

In this lesson, let's show Lambda in action using a custom application we'll call the S3 restaurant. Our simple application will allow a customer to order some food using a mobile app; the restaurant will receive the order, and when the order is ready, it will dispatch the order to a driver who will deliver it to the customer. Pretty simple, right? Let's think about how we could do this. We want a stateless architecture, so we can ensure that no service is dependent on another. We'd like it to be decoupled, so that we can add or remove services without impact. We'd like to have access from multiple services, and to automate things as much as possible. Let's review the functions we need to support in our architecture. A customer, using either an unseen mobile app or a consumer-facing website, will place an order. The order will be uploaded to an Amazon S3 bucket in JSON format. Amazon S3 will push an event to our Lambda function, which will do two things. First, it'll write the information to a DynamoDB orders table. Second, it will make a call to our web application, letting it know that a new order has arrived. Our web application will push the order into ElastiCache and then publish the event to all listening web clients. When the order has been fulfilled, we mark the order as ready, which then executes a Lambda function that sends a message via SNS to our delivery driver. Okay, so our strawman design is pretty basic. We've got Amazon S3 sending a trigger to Lambda to let us know that an order has been uploaded. We've got DynamoDB storing records for us, we've got our application pushing records to ElastiCache, which may not be the best way of doing it, and we can revise that in version two. And we've got Lambda triggering an SNS event to notify drivers that an order is ready to be delivered.
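To make the order flow concrete, the file uploaded to S3 might look something like this. The schema here is our own invention for illustration — the course never shows the actual file:

```json
{
  "customer": "Jane Doe",
  "items": [
    { "dish": "pad thai", "quantity": 2 }
  ],
  "deliveryAddress": "123 Example St",
  "placedAt": "2018-06-01T12:34:56Z"
}
```

The first Lambda function would later stamp this record with a unique order ID before writing it to DynamoDB.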
Now, I'm going to race ahead with most of these functions, as I want to use this as a way of introducing you to how you design, rather than going through the full design process itself. Please forgive me if you'd like more detail; we will get to that kind of detail in subsequent learning paths. Let's just use this as a way of introducing how we start thinking about using AWS Lambda functions in applications. The first Lambda function we create is responsible for processing the order that is uploaded to S3. When our function is invoked via an S3 event, we receive S3-specific arguments. We'll use the bucket name and key to get the contents of the uploaded file. Once retrieved, we add a unique identifier, build our content stream, and write to our orders DynamoDB table. Remember, we are running under our execution role, which has been given permission to read from our S3 bucket and write to our DynamoDB table, so we don't need to set up any access keys with our API calls. Once the order has been written to our table, we make the call to our web application method using the HTTP object. It would simplify our application if we could write and publish an entry directly into ElastiCache from our Lambda function, but if you recall from our previous lessons, Lambda functions run within an isolated container that is not connected to our VPC, so out of the box Lambda can interact with internet-connected resources and with the AWS services predefined as trigger points, but not with resources inside our private VPC. As we've seen, there are plenty of those endpoints predefined, but ElastiCache is currently not one of them. This is not such a problem, as we wouldn't want to have ElastiCache work as a trigger event anyway. But let's just clarify the distinction between services that can be used as Lambda triggers and services which we can access from within Lambda. So, to have Lambda talk to ElastiCache, we will need to provide access to our VPC. That is possible, but there are a few considerations, so let's think through our options.
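The shape of that first function can be sketched in Node.js-style JavaScript. The two small helpers below (extracting the bucket and key from the S3 event, and stamping the order with a unique identifier) follow the standard S3 event structure; the commented-out handler outline is an assumption about how the pieces fit together, not the course's actual code:

```javascript
// Pull the bucket name and object key out of an S3 put event.
// Records[0].s3.bucket.name and Records[0].s3.object.key is the
// documented shape of S3 event notifications.
function parseS3Event(event) {
  var record = event.Records[0];
  return {
    bucket: record.s3.bucket.name,
    key: record.s3.object.key
  };
}

// Stamp the raw order with a unique identifier before it is written
// to DynamoDB (the "orderId" field name is hypothetical).
function buildOrderItem(order, id) {
  var item = JSON.parse(JSON.stringify(order)); // copy, don't mutate input
  item.orderId = id;
  return item;
}

// Hedged handler outline — the s3/dynamodb clients and HTTP notify call
// are placeholders, not the course's actual code:
// exports.handler = function (event, context) {
//   var loc = parseS3Event(event);
//   s3.getObject({ Bucket: loc.bucket, Key: loc.key }, function (err, data) {
//     var item = buildOrderItem(JSON.parse(data.Body), newUniqueId());
//     // write item to the orders DynamoDB table,
//     // then notify the web application over HTTP
//   });
// };
```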
One way to allow Lambda to access ElastiCache could be to provide the subnet ID in our Amazon VPC and a VPC security group to allow the Lambda function to access resources in our VPC. Lambda uses this information to set up elastic network interfaces, or ENIs, that enable our function to connect securely to other resources within our private VPC. This means that the Lambda function execution role must have permissions to create, describe, and delete elastic network interfaces. To enable this access, we would create a new execution role and attach the AWSLambdaVPCAccessExecutionRole policy, which is available as a pre-baked managed policy. That policy grants permissions for the EC2 actions that AWS Lambda needs to manage ENIs; you can view this AWS managed policy in the IAM console. We then identify the subnet IDs and security groups and pass those as parameters when creating our function. Now, when our Lambda function is configured to run within a VPC, it will have the overhead of starting up an elastic network interface or interfaces. This means address resolution may be slightly delayed when trying to connect to network resources. Also keep in mind that you cannot use the internet gateway attached to your VPC: the ENI is assigned a private IP address, and it would have to have a public IP address to access the internet through your internet gateway. We also want to avoid DNS resolution of public host names from our VPC, as this can take several seconds to resolve, which can add several seconds of billable time onto your requests. And just FYI, you can't enable a Lambda function to access resources within a dedicated tenancy VPC. We could also choose the VPC from the VPC option box when creating our Lambda function from the console, and then select the subnet and security groups from the provided fields. Now, when you enable VPC access for a Lambda function, that function will only be able to access resources in that VPC.
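For reference, the AWSLambdaVPCAccessExecutionRole managed policy grants roughly the following EC2 and CloudWatch Logs permissions — reproduced from memory, so check the IAM console for the authoritative current version:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:CreateNetworkInterface",
        "ec2:DescribeNetworkInterfaces",
        "ec2:DeleteNetworkInterface",
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "*"
    }
  ]
}
```

The three `ec2:*NetworkInterface*` actions are what let Lambda create and tear down the ENIs that attach the function to your subnets.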
If a Lambda function needs to access both VPC resources and the public internet, then the VPC also needs to have a network address translation instance, or NAT, inside the VPC. So if your Lambda function does need internet access, do not attach it to a public subnet, or to a private subnet without internet access. Instead, attach it to a private subnet with internet access through a NAT instance or an Amazon VPC NAT gateway. All right, so we have a few overheads, dependencies, and security considerations if we want to allow Lambda to access our VPC. So, another method to access ElastiCache from our function could be to create a simple web function to accept the order and then publish it to ElastiCache. All right, so we're going to go with this design. It is probably not the best way of doing it, but it's the easiest way for our current scenario, and it gives us an opportunity to explore how we can work around some of the pre-configured triggers. Okay, so next, we want to instruct Lambda that we have completed our work via the context.done method call. Now, the context object and its context.done method are used in JavaScript, where the programming model is based on callbacks. You don't have to use context.done in Python, Java, or C#, because your function execution will reach the last line of your code and automatically end. And this example is a bit of a hack, as we're calling context.done as our last instruction, which is probably not how we would do this in a real-world scenario, because we'd want to call context.done inside our callback, once we're sure that the HTTP call is complete. Anyhow, let's keep going. Now, the context object is one of the arguments sent into our function by Lambda. When calling context.done, we specify two arguments. The first argument indicates whether the function was successful or not: a null value means success, and any other value tells Lambda that an error occurred.
Any non-null value is included in the CloudWatch log stream to help us determine what happened if there is a problem. The second argument is an optional string that will be written to the console regardless of success or failure. If we don't call context.done when our function is complete, our function may run longer than necessary. So, we want to make the call at the earliest possible moment, in a way that does not negatively affect the workload itself; that ensures we're only paying for the processing time we used. We have added a few libraries to assist with our workload, which means we will need to create a deployment package. Before we zip everything up, we need to run the npm install command to add the dependencies to our local folder. Once we zip up the entire folder, we can upload it to Lambda. Once uploaded, we need to specify the file to execute and our handler name. This tells Lambda what to execute, basically, as we've covered in our other lectures. We assign an execution role, and under advanced settings, we set the memory allocated to our function. We can also set the timeout before Lambda terminates the execution of our function. After choosing our options, we save the function, or save and invoke the function using a sample event. Our second Lambda function is responsible for publishing to an SNS topic. This function will accept an order with the ID, build a message, and publish it to the Simple Notification Service. We'll create this function within the inline editor, since we don't need any additional libraries. Once in the editor, we can change all of the same settings, except for the file to execute, which is not necessary given the inline code setting. We save the function and are now ready to test our system. Before we can run our test, we have to let Amazon S3 know we want to invoke Lambda when an event occurs.
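The second function can be sketched the same way. The message-building helper below is runnable; the message wording and the commented publish outline are our own guesses, since the course doesn't show the actual code:

```javascript
// Build the driver notification text for a fulfilled order.
// The wording and field names here are hypothetical.
function buildDriverMessage(order) {
  return 'Order ' + order.orderId + ' is ready for delivery to ' +
         order.deliveryAddress;
}

// Hedged outline of the publish step — sns would be an SNS client from
// the AWS SDK, and the topic ARN is deliberately elided:
// exports.handler = function (event, context) {
//   sns.publish({
//     TopicArn: 'arn:aws:sns:...',          // orders-ready topic
//     Message: buildDriverMessage(event)
//   }, function (err) {
//     context.done(err);                    // null on success
//   });
// };
```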
So under the properties for the S3 bucket, we add a notification, giving it a name, selecting the S3 event we want to trigger our function, selecting the Lambda recipient, and entering the ARN for the Lambda function that we want to execute and the invocation role. We could also do this using an S3 trigger, which we'd create and choose in the same way from the trigger menu. So, here is our test JSON order file. We just upload it to S3 via the S3 console, and depending on the available resources, it might take some time before our Lambda function is invoked. Once it has been invoked, we will see a marker showing up on a map here, and entries here in the orders table. So, we'll now mark the order as ready to be delivered. That will push a message to our Simple Notification Service topic. The web application will invoke the Lambda function via a call through the SDK. Once the Lambda function is complete, our theoretical driver will be notified via email. Here's the message to show what was sent. So, that shows how easy it is to create an application around the Lambda event stack. To summarize what we covered: we implemented a simple application, our S3 restaurant ordering application; we used Lambda to handle the ordering event; we used the Simple Notification Service to notify of an order being placed; and we used DynamoDB and ElastiCache to store and serve event data.
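The bucket notification we set up in the console corresponds to a configuration along these lines (the function name is hypothetical, and the account and region portions of the ARN are deliberately elided):

```json
{
  "LambdaFunctionConfigurations": [
    {
      "Id": "process-new-orders",
      "LambdaFunctionArn": "arn:aws:lambda:...:function:processOrder",
      "Events": ["s3:ObjectCreated:*"]
    }
  ]
}
```

The `s3:ObjectCreated:*` event fires for any object creation, which covers the order file being uploaded via the console or an app.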

About the Author

Students: 50438
Courses: 76
Learning paths: 28

Andrew is an AWS certified professional who is passionate about helping others learn how to use and gain benefit from AWS technologies. Andrew has worked for AWS and for AWS technology partners Ooyala and Adobe. His favorite Amazon leadership principle is "Customer Obsession", as everything AWS starts with the customer. Outside of work, his passions are cycling and surfing, and having a laugh about the lessons learnt trying to launch two daughters and a few start-ups.