Creating Our First Lambda Function
In this course, we will delve into implementing a series of AWS Lambda functions to help build our knowledge and familiarity with serverless computing.
Following this course, we will be able to explain and implement an AWS Lambda function that meets three common AWS Lambda use cases: API Gateway, S3, and DynamoDB. In this course, we will create a simple AWS Lambda function from scratch which will read data using the mock interface provided by AWS Lambda. Once we have the function working, we will connect the Lambda function to the API Gateway we created in the previous course. Along the way, we discuss launch parameters, versioning, and aliases.
- [Teacher] Hi, and welcome back to starting with serverless computing. In this lecture, we are going to create a serverless function using the AWS Lambda service. In our previous lecture, we designed and created two API endpoints using the AWS API Gateway. Those two endpoints display a list of items, or a specific item, from a published URL endpoint for us. Our endpoints currently display mock data only, using API Gateway's mock integration setting. Now we are going to create a new Lambda function that we can connect up to both of those endpoints. Let's go into the AWS console and begin by creating our AWS Lambda function. You can see here there are a number of function blueprints displayed. These blueprints, put together by AWS and a number of open-source contributors, are excellent time savers once you know your way around AWS Lambda, and you can easily customize them to your particular use case. I didn't find them a great place to start with Lambda, though, so let's build our first Lambda function from scratch so we get to see how things are put together. So let's choose a Blank Function from this screen. So what is it that we want to build here? Remember, our API can display an item from our catalog, or the full list of catalog items, using the /items and /items/{id} paths we created. Those paths are displaying mock data only, so we now want a function to return real data from our backend service. So we are going to create a new AWS Lambda function that will first check whether a specific item or the entire list of items is being requested, then return the full list of items or just that individual catalog item based on the request made. Now since we already created our API, we don't need to use the trigger menu that's displayed here to create a new Lambda-backed API Gateway resource.
This trigger list is really useful once you know your way around and you want to create a function and gateway from scratch. So let's call this function ItemsFunction, and we'll use the Python 2.7 runtime. You can use any language you like. The function will be very simple, without any specific dependencies or infrastructure requirements, so the default configuration should be enough for us. AWS Lambda provides a managed service, which means it takes out some of the complexity around networking and security groups, to name a few areas. That can really speed things up when we're deploying a solution, compared with setting up an entire EC2 machine. One of the benefits is that we can easily provision the access policies. We've got a couple of options here when we first launch our function. The first is the option to create a new role from a template, which makes it easy for us to stand up an IAM role that Lambda will use to run this service on our behalf. The other option is to use an existing role. So if we have one that we've used previously, or we've defined in IAM as a role for Lambda functions, then we can select that from the dropdown list. If you are doing labs in this learning path, then use the Choose an Existing Role option and choose the basic_execution role. If we don't have that role, then we need to create one. To do that from here, we just go Create New Role from Template, give it a name, and then use one of the policy templates provided in the next dropdown. Simple Microservice permissions would be a great example of a role that would suit this type of Lambda function. Under the advanced settings, we get to set how large we want this machine to be. We can scale the memory parameter up or down, and with memory comes CPU and network performance.
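For reference, the basic execution role described above needs little more than permission to write logs to CloudWatch. A minimal identity policy along those lines might look like the following sketch (the Simple Microservice template would add further permissions, such as DynamoDB access, which are omitted here):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    }
  ]
}
```

Attaching a policy like this to the role you select in the dropdown is all the function needs to run and log its output.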
So essentially, the higher the memory value, the higher the CPU rate and the better networking we'll get with this function container. How this all hangs together is that AWS is firing up a container and launching our function into it. There's no guarantee that when we fire up another one we'll be using the same container, so setting these variables right up front is important. You don't want to over-provision here, because by doing that you're going to increase your potential cost. Alright, so the logic for our function is quite straightforward. We'll define a static list of items - an items array with IDs 1, 2, and 3; let's make item 3 a highlighter. If this was a production environment, we would probably do this with a database, but this array will do us just fine for now. We'll then define our Lambda handler, declaring the event and context parameters. If no ID is given in the event, we return the whole list of items. If the ID is found, we return the corresponding item. And if neither of those two conditions occurs, we raise a NotFound exception. So we can confirm the function creation and then we can test it, great. The function is working, and a list of results is returned here. Finally, we'll choose No VPC in the options we have here. If we run it without a VPC, then the security and the networking of our function is handled by Lambda, which is what we want. The other option we have is to essentially break the glass on that and run our function within our own VPC. Okay, let's just remind ourselves of why we're using Lambda. It means we don't have to worry about provisioning an EC2 instance.
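Reconstructed from the narration above, the handler might look something like this sketch. It's written in Python 3 rather than the lecture's Python 2.7, and apart from the highlighter, the item names are assumptions for illustration:

```python
# Static catalog standing in for a real backend; in production this
# would come from a database such as DynamoDB.
items = [
    {"id": "1", "name": "pen"},
    {"id": "2", "name": "pencil"},
    {"id": "3", "name": "highlighter"},
]


def lambda_handler(event, context):
    """Return the full list, a single item by id, or raise if not found."""
    item_id = event.get("id")
    if not item_id:
        # No id supplied: return the whole catalog.
        return items
    for item in items:
        if item["id"] == item_id:
            # Matching id found: return just that item.
            return item
    # Lambda surfaces raised exceptions as invocation errors; API Gateway
    # can map this error message to a 404 response later on.
    raise Exception("NotFound: item %s does not exist" % item_id)
```

Testing with an empty event returns the whole list, while an event like `{"id": "3"}` returns only the highlighter, which matches the behavior we see in the console test.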
We don't have to set up the network access control lists, we don't have to set up the security groups, and we don't have to worry about auto-scaling the machine. So there's a lot of benefit that comes from not having to worry about those sorts of things. The minute that we break the glass, so to speak, and run it in our own VPC, we're going to have to think about those concerns. So there's a number of reasons why we wouldn't want to do that. There may be reasons why we do want to do that, and those are advanced use cases that we will explore later. For now, we will leave it outside with no VPC, let Lambda do all the hard work, and click the Next button. Now each Lambda function has a default $LATEST version, and this is the one you can always work with and edit. Once your code is stable enough, or whenever your code changes significantly, you can publish a new version. A simple incremental number will be assigned to the new version, and you'll be able to use and test any version, prior or current. As you can imagine, this is a really useful mechanism for keeping track of your function's history if you need to do a rollback for any reason. You can also provide different versions to different partners or customers. So versioning is very helpful. Now, you can only bind API Gateway resources and methods to a specific version of your Lambda function, so whenever you publish a new Lambda version, you'd need to update your API Gateway configuration as well. That's where the concept of an alias really comes into its own. An alias is a useful abstraction that allows you to reference a Lambda version without hard-coding a version number. So for example, you may want to create a production alias and then connect it to your API Gateway production stage. You can do the same with a dev alias and bind that to a development stage.
The best practice here is usually to reference an alias when configuring your API Gateway backend integrations. An unqualified reference points to the latest version, but you can configure it to use a specific version or alias, like we talked about, for particular use cases or exceptions, where perhaps one gateway that's available only to a certain group of machines or customers references a specific version. So here's an example of how the version and alias mappings work. We've got our ItemsFunction, which maps to the $LATEST version. We've got ItemsFunction:1, which maps to version 1. And, say, ItemsFunction:prod, which maps to our prod alias, which can also point to version 1. So let's proceed and create a new version and two new aliases for our new Lambda function. We click Actions, Publish New Version, and we have the option to insert a version description before we confirm it. Now we can create two new aliases: a dev alias, which maps to the $LATEST version, and a prod alias, which we map to the stable version we've just published, which should be version 1. Now keep in mind, we cannot modify published versions. We can only work with the latest version. By binding the dev alias to the latest version, we'll be able to quickly implement changes and test them without explicitly publishing new versions. Of course, we can always adapt this configuration to our specific needs. We can see both versions and aliases under the Qualifiers dropdown, which can also help you navigate and inspect them if you come back after a week or two and you've forgotten how you set things up. So now our Lambda function is implemented, tested, and configured with versions and aliases, and we can use it as our API Gateway backend. In the next step, we'll properly configure the API Gateway resources to use our new Lambda function. So that brings this lecture to a close.
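The version and alias indirection described above can be pictured as a simple lookup table: numbered versions are immutable snapshots of the code, while aliases are just movable pointers to a version. This toy model (plain Python, not the real AWS API; all names are illustrative) shows why repointing an alias is cheaper than rebinding API Gateway to a new version number:

```python
# Immutable published snapshots of a function, keyed by version.
# $LATEST is the only version you can keep editing.
versions = {
    "$LATEST": "code under active development",
    "1": "first stable snapshot",
}

# Aliases are movable pointers to versions.
aliases = {
    "dev": "$LATEST",  # dev always tracks the editable version
    "prod": "1",       # prod is pinned to the published snapshot
}


def resolve(qualifier):
    """Resolve a version number or alias name to the code it points at."""
    version = aliases.get(qualifier, qualifier)
    return versions[version]
```

With this shape, API Gateway stages only ever reference "dev" or "prod"; after publishing version 2, promoting it is a one-line change to the prod alias, with no gateway reconfiguration needed.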
I'll see you in the next one.
Andrew is fanatical about helping business teams gain the maximum ROI possible from adopting, using, and optimizing Public Cloud Services. Having built 70+ Cloud Academy courses, Andrew has helped over 50,000 students master cloud computing by sharing the skills and experiences he gained during 20+ years leading digital teams in code and consulting. Before joining Cloud Academy, Andrew worked for AWS and for AWS technology partners Ooyala and Adobe.