How Lambda works

Difficulty: Intermediate
Duration: 18m
Students: 1973

Description

Executing code in response to specific event triggers is a common need. In fact, event-driven programming has long been an important approach in software development. So if trigger-based responses are useful within an operating system, why not for cloud-based web applications? To answer precisely that need, Amazon recently released Lambda. Lambda launches AWS resources and executes snippets of code in response to predefined events, automating the management of all the necessary underlying infrastructure...and at a ridiculously low price.

In this course you'll get a complete introduction to AWS Lambda. Lambda is not a particularly complex service, but setting up IAM roles and security groups can be tricky. This tutorial, written by our expert Cloud Solutions Architect Kevin Felichko, will help you understand how to get started, and will also show you an interesting and practical use case.

 

Who should take this course

As this is an intermediate-level course, you are expected to have at least some experience with AWS and its core services. Knowing how to program, and in particular knowing the JavaScript language, will definitely help you follow the practical part.

If you want to learn more about the AWS services discussed here, check out our other AWS courses. Our Lambda quiz questions can also serve as a nice follow-up: you will be tested on what you learned in this video, while also learning more thanks to the documentation associated with each question.

Do you have questions on this course? Contact our cloud experts in our community forum.

About the Author

Kevin is a seasoned technologist with 15+ years of experience, mostly in software development. Recently, he has led several migrations from traditional data centers to AWS, resulting in over $100K a year in savings. His new projects take advantage of cloud computing from the start, which enables a faster time to market.

He enjoys sharing his experience and knowledge with others while constantly learning new things. He has been building elegant, high-performing software across many industries since high school. He currently writes apps in Node.js and iOS apps in Objective-C, and designs complex architectures for AWS deployments.

Kevin currently serves as Chief Technology Officer for PropertyRoom.com, where he leads a small, agile team.

In this lesson we're going to explain how Lambda works. We are working with Lambda in preview mode, and aspects of the service are subject to change before its final release to the general public. With that in mind, let's start by understanding Lambda terminology. In the Push model, an event source launches the Lambda function in response to an event. Events published via the Push model do not have a guaranteed order; currently, only S3 events are supported under the Push model. With the Pull model, Lambda retrieves events from another source, and it pulls the events in the order they are published.
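To make the Push model concrete, here is a minimal sketch of a Node.js handler receiving an S3 event that has been pushed to Lambda. The bucket and key fields reflect the standard S3 event record layout, and the handler shape and use of context.done() follow the preview-era Node.js programming model; the log messages are purely illustrative.

```javascript
// Minimal sketch of a Push-model handler: S3 publishes an event describing
// the uploaded object, and Lambda invokes this function with that event.
exports.handler = function(event, context) {
    // An S3 event contains one or more records; each identifies a bucket and key.
    event.Records.forEach(function(record) {
        var bucket = record.s3.bucket.name;
        var key = record.s3.object.key;
        console.log('Object uploaded:', bucket + '/' + key);
    });

    // Signal to Lambda that this invocation has finished.
    context.done(null, 'Processed ' + event.Records.length + ' record(s)');
};
```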

As of the Lambda preview, this model works only with DynamoDB Streams. A Function is the code we want to execute, and we have two options for setting one up: we can edit the code in the Lambda editor, or we can upload a deployment package containing our function and any other libraries. Using the inline editor, we can view and edit our function from the Lambda console. This is useful for functions that do not require libraries other than what is included in the execution environment, something we will describe later. The deployment package approach means we must view and edit the code in another editor, zip up the function and its libraries, and then upload the archive to Lambda.
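As a rough rule of thumb, the inline editor is enough when the function only needs modules already present in the execution environment, while anything else has to be bundled into a deployment package. A sketch of the difference, where aws-sdk is the module the environment provides and 'async' stands in for a hypothetical third-party dependency:

```javascript
// Fine for the inline editor: the aws-sdk module ships with the
// execution environment, so nothing needs to be bundled.
var AWS = require('aws-sdk');

// Would force a deployment package: 'async' is a hypothetical example of a
// third-party module that is NOT in the execution environment, so it would
// have to be zipped up alongside the function code (e.g. in node_modules).
var async = require('async');

exports.handler = function(event, context) {
    console.log('Handler loaded with both built-in and bundled modules');
    context.done(null, 'ok');
};
```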

This is necessary when we require libraries that are not included in the execution environment.

The Invocation Role grants an event source permission to communicate with the Lambda function. The permissions granted are slightly different based on the model being used. For the Push model, permission must be granted to the event source: an access policy gives the event source permission to call the Lambda InvokeAsync action, and the trust policy gives the event source permission to assume the role. For the Pull model, permission must be granted to Lambda to pull from the event source: the access policy grants Lambda permission to pull from the event source, while the trust policy grants Lambda permission to assume the role. This role is set up via IAM.

The Execution Role defines what resources the function has access to when running. For example, if we are invoking a function via an S3 event, the execution role needs to be given permission to access the S3 bucket in order to read the file; we make this possible via the access policy. When the function is executed, Lambda assumes the execution role, which is permitted via the trust policy that grants Lambda the right to do so. We set this role up the same way we do the invocation role.

When our Lambda function executes, it runs within its own container, not connected in any way to our VPC. We need to keep this in mind when designing how our workload will function, since it changes the way we access resources. The container runs on top of an EC2 instance with the amount of memory we allocate to the function, and based on that allocated memory the function receives a share of the CPU. The ratio of memory to CPU is the same as the general-purpose EC2 instance type; the Lambda documentation explains that 128 megabytes of memory equates to roughly a 6% CPU share.
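Here is a sketch of how an execution role like the one just described might be created with the AWS SDK for Node.js: the trust policy lets the Lambda service assume the role, and the access policy grants read access to the source bucket. The role name, policy name, and bucket name are made-up placeholders, not values from this course.

```javascript
var AWS = require('aws-sdk');
var iam = new AWS.IAM();

// Trust policy: allows the Lambda service to assume this execution role.
var trustPolicy = {
    Version: '2012-10-17',
    Statement: [{
        Effect: 'Allow',
        Principal: { Service: 'lambda.amazonaws.com' },
        Action: 'sts:AssumeRole'
    }]
};

// Access policy: lets the function read objects from the source bucket.
var accessPolicy = {
    Version: '2012-10-17',
    Statement: [{
        Effect: 'Allow',
        Action: ['s3:GetObject'],
        Resource: 'arn:aws:s3:::example-upload-bucket/*'   // placeholder bucket
    }]
};

iam.createRole({
    RoleName: 'lambda-execution-role-example',             // placeholder name
    AssumeRolePolicyDocument: JSON.stringify(trustPolicy)
}, function(err, data) {
    if (err) return console.error(err);

    iam.putRolePolicy({
        RoleName: 'lambda-execution-role-example',
        PolicyName: 's3-read-access',
        PolicyDocument: JSON.stringify(accessPolicy)
    }, function(err) {
        if (err) return console.error(err);
        console.log('Execution role ready:', data.Role.Arn);
    });
});
```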

The CPU share determines the amount of processing time allocated to the function, which ultimately affects things such as the latency between the invocation and the execution of the function. It's important to understand this trade-off between performance and cost when using Lambda with our workloads. There is no EC2 affinity, meaning that while the first execution might run on one EC2 instance during its lifetime, subsequent executions might run on any number of other EC2 instances.
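If we assume the memory-to-CPU ratio scales linearly from the documented figure of roughly 6% at 128 MB (an assumption; the documentation only gives that single data point), the rough CPU share at other memory settings works out as follows:

```javascript
// Rough extrapolation, assuming a linear memory-to-CPU ratio
// from the documented ~6% share at 128 MB.
var sharePer128MB = 6; // percent, per the Lambda documentation

[128, 256, 512, 1024].forEach(function(memoryMB) {
    var cpuShare = (memoryMB / 128) * sharePer128MB;
    console.log(memoryMB + ' MB -> ~' + cpuShare + '% CPU share');
});
```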

Lambda does not give us access to the underlying EC2 resources, such as the network interfaces or the file system, except for a temp folder. Even if the execution happens to run on the same EC2 instance as the last run, any files added to the temp folder or objects added to memory are no longer available. This means we need to ensure our functions are stateless. All Lambda functions have a few things in common when executed. Functions run on a Linux-based Amazon AMI with version 0.10.32 of Node.js. The AMI contains ImageMagick libraries that allow us to manipulate images, perfect for use cases such as functions that resize images uploaded to S3. The AWS SDK module is installed by default so that we can interact with our other AWS resources.
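A minimal sketch of a stateless function that uses the bundled AWS SDK and the temp folder: it downloads the uploaded object into /tmp, works on it there, and assumes nothing survives from previous invocations. The bucket/key extraction mirrors the S3 event layout shown earlier; the local path handling is illustrative.

```javascript
var AWS = require('aws-sdk');   // installed by default in the execution environment
var fs = require('fs');

var s3 = new AWS.S3();

exports.handler = function(event, context) {
    var bucket = event.Records[0].s3.bucket.name;
    var key = event.Records[0].s3.object.key;

    // /tmp is the only writable location, and its contents cannot be relied
    // upon between invocations -- treat every run as a clean slate.
    var localPath = '/tmp/' + key.replace(/\//g, '_');

    s3.getObject({ Bucket: bucket, Key: key }, function(err, data) {
        if (err) return context.done(err);

        fs.writeFile(localPath, data.Body, function(err) {
            if (err) return context.done(err);
            console.log('Downloaded', key, 'to', localPath);
            context.done(null, 'done');
        });
    });
};
```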

There are limits to the execution environment we should know about. The maximum number of processes and threads launched by a function is 1,024 in total, and the same goes for file descriptors. Lastly, we are limited to 512 megabytes of ephemeral storage. If we attempt to go above any of these limits, our function will fail immediately, and the exception will indicate that we exceeded the limits. It is our responsibility to take precautions to ensure our functions do not exceed these limits; one simple precaution is sketched at the end of this lesson. That concludes this lesson on how Lambda works; let's see Lambda in action in our next lesson.
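As mentioned, one simple precaution against the 512 MB ephemeral storage limit is to check an object's size with a HEAD request before pulling it into /tmp and to fail cleanly if it would not fit. The limit constant comes from this lesson; the rest of the sketch is illustrative.

```javascript
var AWS = require('aws-sdk');
var s3 = new AWS.S3();

var TMP_LIMIT_BYTES = 512 * 1024 * 1024;  // 512 MB ephemeral storage limit

exports.handler = function(event, context) {
    var bucket = event.Records[0].s3.bucket.name;
    var key = event.Records[0].s3.object.key;

    // Check the object size before downloading it into /tmp.
    s3.headObject({ Bucket: bucket, Key: key }, function(err, data) {
        if (err) return context.done(err);

        if (data.ContentLength > TMP_LIMIT_BYTES) {
            // Fail gracefully instead of blowing past the storage limit.
            return context.done(new Error('Object too large for /tmp: ' + data.ContentLength + ' bytes'));
        }

        // Safe to download and process the object from here.
        context.done(null, 'Object fits within the ephemeral storage limit');
    });
};
```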