Demo: Creating a Lambda Function

Links and Resources

Creating a Deployment Package

Deployment Package Permissions

7-Zip for Windows

AWS: Overview of AWS Identity & Access Management (IAM)

Using AWS X-Ray to monitor a Node.js App deployed with Docker containers

AWS X-Ray 

AWS CloudTrail: An Introduction

Serverless Application Model

Test Events

Hello and welcome to this lecture where I want to cover Lambda functions in greater detail. We learned in the previous lecture that a Lambda function is comprised of your own code that you want Lambda to invoke as per the defined triggers. But how is the function created, and how do I upload my code? What else is involved with the creation of a Lambda function, and is there more to it than that? Prior to creating your function, you need to have some code to add to it. After all, that's a fundamental element of your function. Once you have successfully written your code, you are ready to import it into Lambda, and this is achieved by creating a deployment package. This deployment package will be either a zip or a jar file and will contain your code and any dependent libraries required. How you create these deployment packages is largely dependent on the programming language that you decide to use within your function.
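As a simple illustration, packaging a Python function with one pip dependency might look like the following from the command line. This is only a minimal sketch; the file name lambda_function.py and the requests dependency are hypothetical, and the exact steps vary by language.

    # Install the dependency into a build folder alongside the code
    # (hypothetical layout).
    pip install requests -t package/
    cp lambda_function.py package/

    # Zip the contents of the folder (not the folder itself) so the code
    # sits at the root of the archive, where Lambda expects to find it.
    cd package && zip -r ../package.zip . && cd ..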

For details on how to create these packages, please use the URL links listed above, respectively. Once your deployment package has been created, you'll need to modify the permissions against your zip file. This is because Lambda needs global read permissions on the code and any dependent libraries that you have included within the package. If the permissions are not set correctly, then there is a chance that the Lambda function may fail on execution. AWS provides a really good example of how to check and correct the permissions of your deployment package file, which can be found via the Deployment Package Permissions link above. It explains that if you are using Linux or Unix, then you can use zipinfo to check the permissions by running the following command against your package. The -r and subsequent dashes in the output indicate that these files are only readable by the file owner. As a result, your Lambda function could fail, as they are not set with global read permissions.
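As a sketch, checking a hypothetically named package.zip would look like this, with the permissions column of the output showing the pattern just described:

    zipinfo package.zip

    # An abbreviated example of an output line; the leading -r--------
    # shows the file is readable by the owner only:
    # -r--------  3.0 unx     1024 bx defN 18-Aug-01 12:00 lambda_function.py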

To rectify this and to set the correct permissions for AWS Lambda, you can run the following commands, sketched below. The first command ensures that the files in the package contents location have read and write permissions for the owner, in addition to read permissions for group and global. The second command simply pushes the equivalent permissions down to all of the directories. The result of these commands means that the permissions will be set correctly for your package. If you are running the Windows OS, then it is recommended that you use 7-Zip for Windows instead of zipinfo. To download 7-Zip, please visit the link above. If you write your code from within Lambda itself, then Lambda will create these deployment packages for you. If you author your code outside of Lambda, then once your code is written and packaged, you will need to upload it to the service. This can be done from within the AWS Console, the CLI, or the SDKs.
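A hedged sketch of those two commands, assuming the package contents have been extracted to a hypothetical /tmp/package-contents directory before being re-zipped:

    # Files: read/write for the owner, read for group and global (644).
    chmod 644 $(find /tmp/package-contents -type f)

    # Directories: the equivalent permissions, plus execute so they can
    # be traversed (755).
    chmod 755 $(find /tmp/package-contents -type d)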

For this course I shall be explaining how to configure Lambda functions through the AWS Management Console. Let's now go ahead and look at how to create our function, which will allow us to upload a deployment package. Okay, so I'm within my AWS Management Console, so let's go straight over to Lambda. And this is the initial splash screen that you'll get if you haven't got any functions created at the moment. So to get started, all you need to do is select Create function. And now we have three options along the top of the screen: Author from scratch, Blueprints, and AWS Serverless Application Repository. These allow you to choose how you want to start creating your function. If you author from scratch, you can write your code directly from within Lambda here. By doing so, your deployment packages will be created for you, as I explained earlier. Using a blueprint allows you to base your function on an existing template, which might save you time, and will also allow you to modify the code as required. As you can see, there are a number of different blueprints, and there are pages and pages of blueprints to choose from.

The Serverless Application Repository allows you to define and deploy serverless applications that have been created and shared by third parties, which are often partners of AWS. Similarly to the blueprints option, there's search functionality. So, for example, if you wanted to find all elements to do with CloudWatch, you could search for it, and as you can see, there are a number of matches here. To show you how to compile a function, we're going to select the Author from scratch option. At this point you're required to enter three pieces of data before you can create the function and continue to configure it. Firstly, the name: this is the name you wish to call the function, so let's just enter Myfunction. The runtime: from this dropdown you can select the programming language and version you'll be using for the code within your function, including the ability to create a custom runtime. The role section is an IAM role, which is required for AWS Lambda to assume in order to execute the code of your function.

You can select an existing role, create a new role from scratch, or create one based on a list of predefined templates supplied by AWS Lambda, as shown. For this demonstration I'm just going to select the Lambda basic execution role. Next, I just need to click on Create function. This then takes you to the function itself, where you can modify the code and make the necessary configuration changes for that function. To ensure you understand each element of the function, I'm going to explain each part, starting with the Designer window. Now as you can see, on the left-hand side of the window we are able to add triggers, and the right-hand side shows a graphical representation of how the function is built. So what is a trigger? Well, a trigger is essentially an operation from an event source that causes the function to invoke. As an example, we could add Amazon S3 as a trigger, and in this example we can see we have a number of options relating to S3. The configurable options will depend on which trigger you select. If I selected Kinesis, for example, then I would have options relating to the Kinesis stream instead of an S3 bucket. To complete your trigger configuration, you must supply additional information. And again, in this example, we can see that Lambda needs to know the bucket, event type, prefix, and suffix.

The bucket is straightforward: I simply need to select the bucket that the trigger relates to. The event type lists the actions that can be used to trigger the function. As you can see, there are a number of options for object created events, such as PUT, POST, COPY, et cetera, and here I can select which action should trigger the function. If I wanted the function to trigger every time a PUT action occurred within this bucket, then I could simply select PUT. Using the prefix and suffix options, I could then be more specific about when the function triggers, based on these values. When you have configured your trigger, it is then added to the visual area within the Designer window, as you can see. Once you've added the trigger, you'll notice that it says unsaved changes, so all you do is simply click on Save at the top right. The Designer window allows you to understand how the function operates in a simple visual graphic. The name of the function appears as the root of the graphic, triggers are added to the left, where we see S3, and any resources that the function role has access to will appear on the right-hand side. As you can see, Amazon CloudWatch is accessible by the role associated with this function. We can also look at the role's policy by clicking on the key image, and we can see here that the execution role has the ability to create log groups and log streams and add log events within CloudWatch. You may also notice that as well as the execution role policy, you will also see a function policy, which appears once you have configured a trigger for your function.
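For reference, when the console wires up a trigger like this, it is effectively granting S3 permission to invoke the function. A hedged sketch of the equivalent AWS CLI call, using a hypothetical bucket name, might be:

    # Allow the S3 service to invoke Myfunction, but only on behalf of
    # this specific bucket.
    aws lambda add-permission \
        --function-name Myfunction \
        --statement-id s3-invoke \
        --action lambda:InvokeFunction \
        --principal s3.amazonaws.com \
        --source-arn arn:aws:s3:::my-example-bucket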

Now our function policy simply specifies which AWS resources are allowed to invoke your function. In our example I added S3 as a trigger, and so Lambda automatically added a function policy much like the one sketched above. So to reiterate, the role execution policy determines which resources the function role has access to when the function is being run, and the function policy defines which AWS resources are allowed to invoke your function. Next in the configuration of your function comes the function code window, which consists of a number of different components. You have the code entry type: this is a dropdown list which allows you to select where to source your code from. The first option is to edit code inline, which allows you to write your code directly within the function itself using the code editor. The second is to upload a zip file, where you can upload the deployment package that was discussed previously, which will form the code of your function. And finally, you can upload a file from S3: if your code is stored on S3, then you can simply select this option to import it from there. The runtime section is another dropdown list that displays the same runtime options as before. This just gives you the opportunity to change it if need be, without recreating the function. The handler section: the handler is the entry point within your code, and it allows Lambda to invoke your code when the service executes the function on your behalf. Next we have the environment variables and the tags section.
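Pulling the runtime, role, handler, and deployment package together, creating the same function from the AWS CLI rather than the console might look like the following minimal sketch. The account ID, role name, and Python runtime here are hypothetical; the handler value follows the file.method convention, so lambda_function.lambda_handler points Lambda at the lambda_handler method inside lambda_function.py.

    aws lambda create-function \
        --function-name Myfunction \
        --runtime python3.7 \
        --role arn:aws:iam::123456789012:role/lambda-basic-execution \
        --handler lambda_function.lambda_handler \
        --zip-file fileb://package.zip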

Environment variables are key value pairs that allow you to incorporate variables into your function without embedding them directly into your code. So your code can reference these variables, allowing the code and its logic to be reused throughout the function's lifetime. For example, you would normally want to test your function prior to moving it into the production environment, and by using variables this is very easy to do. You could have the following key value pairs set up within your function, ready for testing. So, for example, let's enter S3 bucket as the key, and as the value I'll just put in testbucket. Now if the tests are successful, then you could simply change the value of the variable without changing the code, allowing you to keep the code logic exactly the same. So when moving your function into production, you would simply change the variable from testbucket to, let's say, productionbucket, and it's as simple as that. You will notice that the environment variable section also has an encryption configuration option, too. Now by default, AWS Lambda encrypts environment variables after the function has been deployed, using the AWS Lambda master key within that region via KMS, which is the Key Management Service. This data is then decrypted every time your function is invoked. However, if you are storing sensitive information within your variables, it is recommended you encrypt this data prior to deployment using the 'Enable helpers for encryption in transit' checkbox. To learn more about encryption and the KMS service, please see our existing course, How to use KMS encryption to protect your data.
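As a sketch, the same key value pair could be set from the CLI as follows. Note that environment variable keys cannot contain spaces, so the 'S3 bucket' key from the demo is written here as the hypothetical name S3_BUCKET:

    # Set the variable for testing.
    aws lambda update-function-configuration \
        --function-name Myfunction \
        --environment "Variables={S3_BUCKET=testbucket}"

    # When moving to production, only the value changes; the code and
    # its logic stay exactly the same.
    aws lambda update-function-configuration \
        --function-name Myfunction \
        --environment "Variables={S3_BUCKET=productionbucket}"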

Now looking at the tags: tags within functions are used in the same way as tags for other resources within AWS. They simply help you to group and identify resources. Again, they are key value pairs and can be used to help associate the function to a particular project, department, or solution, et cetera. Below tags we have six more configurable elements of a Lambda function: the execution role, basic settings, network, debugging and error handling, concurrency, and auditing and compliance. So let's start by looking at the execution role. This gives you the opportunity to change the role you selected on the previous screen. Again, to reiterate, this role is required for AWS Lambda to assume in order to execute the code of your function. Now within the basic settings section, you are able to determine the compute resource that you want to use to execute your code. The only element you are able to specify is the amount of memory used; AWS Lambda then calculates the CPU power required based on this selection. Lambda bases this calculation on the general purpose family of instances, and the slider goes from 128 megabytes all the way up to 3008 megabytes. Below this specification of memory, you can also modify the timeout for your function, and the timeout simply determines how long the function should run before it terminates.
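Both of these settings can also be adjusted from the CLI; a minimal sketch with hypothetical values:

    # 256 MB of memory and a 10-second timeout; Lambda derives the CPU
    # allocation from the memory setting.
    aws lambda update-function-configuration \
        --function-name Myfunction \
        --memory-size 256 \
        --timeout 10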

As a result, you should try and ascertain how long it will take for your code and function to run. The default limit on all new functions is three seconds. Using the configurable elements within this section, you are able to affect the overall cost of your Lambda function, so you might need to experiment a few times to get the most optimal performance for your requirements. The smaller the memory size, the cheaper the compute cost will be, and the lower the timeout, the less time the compute has to run, again reducing your costs. Looking at the network section: by default, AWS Lambda is only allowed to access resources that are accessible over the internet, for example, S3. Therefore, any resources that can only be accessed directly from within your VPC require additional configuration. So this network section provides you with the capability of allowing your function to access your resources via your VPC. When configured, AWS Lambda assigns ENIs, which are Elastic Network Interfaces, with private IP addresses to allow your function to reach your resources. When configured like this, it's important to note that the previous default ability of accessing publicly accessible resources over the internet is removed.

To overcome this, you must attach the function to a private subnet which has access to a NAT instance or a NAT gateway. Do not attach it to a public subnet; it should be within a private subnet for greater security and reduced exposure to external threats. Also, Lambda only assigns the function a private ENI and not a public address. The network section allows you to add the following. First, the VPC: from here you can select the VPC that the function will need to access resources within. Next, the subnets: here you can select at least one subnet that the function can operate in within your VPC, although for high availability and scalability you really should add an additional subnet. Under security groups, you can specify the security group for your function to use as a part of the VPC configuration. Once this information is added to your function, Lambda can then set up and configure ENIs as required to securely connect to your VPC resources.
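A hedged sketch of supplying this VPC information via the CLI, with hypothetical subnet and security group IDs:

    # Two subnets are supplied for high availability, as recommended above.
    aws lambda update-function-configuration \
        --function-name Myfunction \
        --vpc-config SubnetIds=subnet-11111111,subnet-22222222,SecurityGroupIds=sg-33333333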

You should also be aware of the limits on your selected subnets, as functions will fail if those subnets run out of IP addresses or ENIs. An important point to be aware of is that the execution role of the function will need specific permissions that allow it to operate within a VPC, and these include the permissions required to configure the ENIs, such as ec2:CreateNetworkInterface, ec2:DescribeNetworkInterfaces, and ec2:DeleteNetworkInterface. Moving over to the debugging and error handling window, you are able to configure two elements. The first is a DLQ, which is a dead letter queue resource. The DLQ resource can be either an SNS topic or an SQS queue, and it is used to receive payloads that were not processed due to a failed execution. In the event of your Lambda function failing for one reason or another, it will generate an exception, and these exceptions are handled differently depending on the invocation type at the time of execution. Now invocations can be either synchronous or asynchronous, and this is determined by the event source itself. Remember, an event source is a service that can be used to trigger your Lambda function. If the invocation was asynchronous and a failure occurred, then Lambda would automatically retry the event a further two times.
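As an aside, the DLQ just described can also be configured from the CLI; a minimal sketch assuming a hypothetical SQS queue ARN:

    # Send payloads from failed asynchronous invocations to an SQS queue.
    aws lambda update-function-configuration \
        --function-name Myfunction \
        --dead-letter-config TargetArn=arn:aws:sqs:us-east-1:123456789012:my-function-dlq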

Now if you have a DLQ resource configured for your function using either SNS or SQS, then the event payload will be sent to this dead letter queue to allow you to assess what could be causing the failure at a later date. If you didn't have a DLQ configured, then the payload and event would simply be deleted. Now synchronous invocations do not automatically retry failed attempts like asynchronous ones do; for synchronous invocations, the invoking application is responsible for all retries. The second element in this window is enable active tracing, which can be enabled or disabled simply via this checkbox. Activating this option integrates AWS X-Ray to trace the event sources that invoked your Lambda function, in addition to tracing other resources that are called upon in response to your Lambda function running. Moving across to concurrency. Concurrency relates to scaling: it effectively measures how many instances of your functions can be running at the same time. And by default, there is an unreserved account concurrency limit set to 1000, which means you can have 1000 concurrent executions of Lambda functions running at once.

Now depending on how many functions you have and what your Lambda functions are being used for, this may or may not be enough. If it's not enough, then you can raise a case with the AWS Support Center to get this soft limit increased for your account. However, you should bear in mind that having a limit can be a beneficial control to have in place, as this would stop your costs spiraling out of control if there was an issue with your function that allowed it to continually scale. There are two limits for the concurrent executions of functions: one that operates at your AWS account level, and one that operates at the function level. Firstly, the account level limit, which is the unreserved account concurrency, is essentially what I just explained, whereby you have a concurrency limit set to 1000 for all the Lambda functions that you have within a single region, and all of these functions share that pool of 1000 concurrent executions. A concurrent execution limit set at the function level, which is reserved concurrency, essentially reserves a pool from the unreserved account concurrency, so the reserved amount is then deducted from the unreserved limit.
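A minimal sketch of setting reserved concurrency at the function level from the CLI:

    # Reserve 150 concurrent executions for this function; this amount
    # is deducted from the account's unreserved pool.
    aws lambda put-function-concurrency \
        --function-name Myfunction \
        --reserved-concurrent-executions 150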

So for example, if I selected a reserved concurrent execution limit of 150, the unreserved limit would change from 1000 to 850. As a result, the shared pool of concurrent executions available to all of my other functions within my account would be reduced to 850. Setting this reserved concurrency provides a number of benefits. It ensures that your function will continue to run even if another function has a surge of requests, whereby it scales and utilizes a large quantity of your pool of concurrent executions, which could otherwise have prevented your function from running. Also, you might want to limit the number of times the function can run, as you may have a limitation on the downstream resources that the function calls upon. Just because your function can scale rapidly, it doesn't mean that the rest of your infrastructure can. Auditing and compliance: this section simply explains that AWS Lambda is integrated with AWS CloudTrail, which will record API calls triggered from your functions. Now CloudTrail is a great tool when it comes to auditing, compliance, and governance, and if you want more information on CloudTrail and how it works, then we do have a course on it, so please do a search for AWS CloudTrail: An Introduction. I've now covered all the options within the function that are presented within the different windows. There are, however, additional options that you can specify and configure, which you can see along the top of the screen when configuring your function. So let's take a look at these next. As we can see along the top of the screen, we have Throttle, Qualifiers, Actions, Select a test event, Test, and Save. So let me run through each of these, starting with Throttle. The Throttle option is closely linked to the concurrency setting that we just talked about.

By selecting this option, it sets the reserved concurrency limit of your function to zero. As you can see from this popup, setting the function throttle to zero will stop all future invocations of the function until you manually change the concurrency setting again. You can change it back to using the pool of unreserved concurrency by setting a manual concurrency limit greater than zero. Looking at the Lambda qualifiers: the qualifiers section allows you to change between different versions of your function. When you first create your function, the version is set as $LATEST. As you begin to make changes to your function and its code, you can, if necessary, save it as a new version, allowing you to revert to a previous version of your function at any point. You can also change to an alias of your function, which you can create under the Actions menu, so let's take a look at that now. Under the Actions menu you have four options. The first is to publish a new version. By utilizing the versioning option within Lambda, it's possible to create different versions of your function, and this is often used when taking a Lambda function through the different stages of development before creating your final production version. When you create a new version of your function, do bear in mind that you're not able to make any further configuration changes to that version.

So it essentially becomes immutable, and for every version of your function that is created, a new ARN is also created. The second option is to create an alias. Now an alias allows you to create a pointer to a specific version of your function. During the creation of an alias, you need to supply a name for the alias, a description, and the version of your function that you want the alias to point to. In addition to this, you can also specify a second version, allowing you to distribute weightings of traffic between the two versions; again, this is often used during testing. Unlike versions, an alias can be changed after it's created, and so if required you can point an alias to an alternative version. Much like each version of a function, an alias is also a resource, and so it has its own ARN, an Amazon Resource Name. Now the benefit of using an alias is that you can use the ARN of the alias within your configurations, such as your event source mappings, instead of individual version ARNs wherever they may be used. Event source mappings are associations between your Lambda function and your event sources, which, as we know and I keep reiterating, are AWS services used to trigger your functions. Therefore, when you create a new version of your function, you don't need to update the ARN within the mappings. Instead, you can simply update the alias to point to the new version, which simplifies your deployment and testing; I'll sketch the equivalent CLI commands for versions and aliases shortly. The third option is deleting a function. Now this action will remove the function from AWS Lambda.

You'll also get a message stating that the action will remove the associated code and the associated event source mappings, but that the logs and the Lambda role will not be deleted. Finally, exporting the function. Now this option allows you to export your Lambda function, which gives you the ability to redeploy it at a later stage, perhaps within a different AWS region. As a part of the export process, you can download both the AWS SAM file, where SAM stands for Serverless Application Model, and a deployment package of your function. When you download the deployment package, it will contain your function code and all of your dependent libraries, and it will be downloaded as a zip file. The SAM file will download as a YAML file. These files can then be used in conjunction with CloudFormation to deploy your serverless application elsewhere within your environment. For more information on the Serverless Application Model, please see the link above.
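As promised, here is a hedged sketch of the version and alias workflow described above, expressed as CLI commands rather than console actions; the alias name live and the version numbers are hypothetical:

    # Publish the current $LATEST code as an immutable, numbered version.
    aws lambda publish-version \
        --function-name Myfunction \
        --description "First production build"

    # Point an alias at that version; event source mappings can then
    # reference the alias ARN instead of a version ARN.
    aws lambda create-alias \
        --function-name Myfunction \
        --name live \
        --function-version 1

    # After publishing version 2, repoint the alias; none of the
    # mappings need updating.
    aws lambda update-alias \
        --function-name Myfunction \
        --name live \
        --function-version 2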

Now next to the Actions option, there is a Select a test event option, and this is a dropdown list which can display up to 10 tests for your function. These tests are user specific, meaning that different users working on the function can each perform their own individual and different tests against it. You may need to perform different tests against your function, perhaps for different triggers, and by setting up a different test for each trigger, this dropdown allows you to test your function against each option quickly and easily. Once you've selected your test, you can simply click on Test to run it. For more information on these test events, please take a look at the AWS blog post linked above. Now finally, the last option within the function is the Save button, which you saw me use earlier, and this will save all of the configuration changes that you've made during the creation and editing of your function. That has now brought me to the end of this lecture, which focused on the configurable elements of an AWS Lambda function.

About the Author

Stuart has been working within the IT industry for two decades covering a huge range of topic areas and technologies, from data center and network infrastructure design, to cloud architecture and implementation.

To date, Stuart has created 150+ courses relating to cloud computing, reaching over 180,000 students, mostly within the AWS category and with a heavy focus on security and compliance.

Stuart is a member of the AWS Community Builders Program for his contributions towards AWS.

He is AWS certified and accredited in addition to being a published author covering topics across the AWS landscape.

In January 2016 Stuart was awarded ‘Expert of the Year Award 2015’ from Experts Exchange for his knowledge share within cloud services to the community.

Stuart enjoys writing about cloud technologies and you will find many of his articles within our blog pages.