An Overview of AWS Lambda
2h 49m

This section of the Solution Architect Associate learning path introduces you to the core computing concepts and services relevant to the SAA-C03 exam. We start with an introduction to AWS compute services, explore the options available, and learn how to select and apply AWS compute services to meet specific requirements. 


Learning Objectives

  • Learn the fundamentals of AWS compute services such as EC2, ECS, EKS, and AWS Batch
  • Understand how load balancing and autoscaling can be used to optimize your workloads
  • Learn about the AWS serverless compute services and capabilities

To really understand serverless compute, you have to first understand servers. For example, I want you to think about all the work that goes into running an EC2 instance: you have to install software, patch the instance, manage scaling and high availability, configure storage, and then write your code for your application and deploy it to the instance. And then you have to cry. Well, maybe not that last part - but that’s how I do it. 

Now think about that infrastructure maintenance and administration going away - enabling you to focus solely on your code and business logic. That’s the idea behind serverless. Now of course, this maintenance and server administration still exists behind the scenes; however, it’s no longer your job to do it - it becomes the service’s responsibility. 

The serverless compute service we’ll focus on in this course is called AWS Lambda. Understanding Lambda is the same as understanding almost any function in a piece of code. There are three major parts: 

  • The first piece is the input 

  • The second piece is the function, and

  • The last piece is the output. 

Let’s start with the function. Just as EC2 is made up of instances, Lambda is made up of functions. Functions are the code that you write that represents your business logic. In this function, you also configure other important details, such as permissions, environment variables, and the amount of power the function needs. The way you specify power is by choosing how much memory you want to allocate to your function. The service then uses this number and provides proportional amounts of CPU, network and disk I/O. 
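To make the function piece concrete, here is a minimal sketch of what a Lambda function looks like in Python. The handler name and event shape are illustrative - Lambda calls whichever handler you configure, passing in the input event and a context object:

```python
# A minimal Lambda handler sketch. Lambda invokes this entry point with
# the input event (a dict for JSON payloads) and a context object that
# carries runtime metadata, such as the remaining execution time.
def handler(event, context):
    # Your business logic goes here; this example just echoes a greeting.
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}
```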

To upload your code to the service, you can either write the code directly in the service itself or you can upload this code via a zip file or files stored in Amazon S3. The programming language you write your code in must match the runtime you select in the service. 

There are several options for runtimes. You can use a runtime that Lambda natively supports, such as Java, Go, PowerShell, Node.js, C#, Python, and Ruby. Or, if you want to use a language that isn’t in that list, you can choose to bring other languages by using the custom runtime API. So if you’re thrilled at the idea of running PHP or C in Lambda, the custom runtime feature is for you. 

Once you’ve uploaded or written your code - how does your code run? Well, it has to be invoked. This is where the first piece of the equation - the input - comes into play. There are several options for your function to be invoked: 

  • Your function could be invoked directly through the console, SDK, AWS toolkits, or through the CLI. 

  • It could be invoked using a function URL, which is a dedicated HTTPS endpoint you can enable for your Lambda function. 

  • Or it could be invoked automatically by a trigger, such as an AWS service or resource. These triggers will run your function in response to certain events or on a schedule that you specify. So if you want to run your function every day at 8 am, you can do that. 

When you invoke your function, you can pass in events for the function to process. If a service invokes your function, it can also pass in events - however, the service is responsible for structuring those events. For example, your code could run in response to a request from API Gateway or an S3 event, such as a PUT object API call. So once the PUT object API call is made, AWS Lambda will run your code, just as you’ve written it, using only the compute power you defined. 
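As a sketch of how a service-structured event arrives: an S3 PUT notification carries the bucket name and object key inside a `Records` array, following the documented S3 event notification format. The handler name below is illustrative:

```python
# Sketch of a handler for an S3 event notification. When S3 invokes the
# function after a PutObject call, the event contains a "Records" list
# holding the bucket name and object key of the object that was written.
def handle_s3_put(event, context):
    processed = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        processed.append(f"s3://{bucket}/{key}")
    return {"processed": processed}
```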

Then you have the third and final piece, which is the output. Once the function is triggered, and your code runs, your Lambda function can then make calls to downstream resources. This means that from your code, you can make API calls to other services like Amazon DynamoDB, Amazon SQS, Amazon SNS and more. 
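One way to sketch the output piece: the handler below writes each event to a DynamoDB-style table. In a real function the table would come from `boto3.resource("dynamodb").Table(...)`, whose `put_item` method accepts an `Item` dict; here the table object is passed in so the logic stays self-contained and testable without an AWS connection. All names are illustrative:

```python
# Sketch of a handler making a downstream call. In a real function, table
# would be a boto3 DynamoDB Table resource; injecting it keeps this sketch
# runnable without AWS credentials.
def make_handler(table):
    def handler(event, context):
        # put_item mirrors the boto3 Table.put_item(Item=...) signature.
        table.put_item(Item={"id": event["id"], "payload": event.get("payload", "")})
        return {"statusCode": 200}
    return handler
```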

When your Lambda function is triggered, the service automatically monitors your function through logs and metrics. You can additionally choose to write custom logging statements in your code that will help you identify if your code is operating as expected. These log streams act as a recording of the sequence of events from your function. Lambda also sends common metrics of your functions to CloudWatch for monitoring and alerting. 
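Custom logging statements can be as simple as calls to Python’s standard `logging` module (or even `print`); output written by the handler ends up in the function’s CloudWatch log stream. A sketch:

```python
import logging

# Lambda's Python runtime attaches a handler to the root logger, so
# standard logging calls (and plain print statements) appear in the
# function's CloudWatch log stream.
logger = logging.getLogger()
logger.setLevel(logging.INFO)

def handler(event, context):
    logger.info("received event with %d keys", len(event))
    # ... business logic ...
    logger.info("finished processing")
    return {"statusCode": 200}
```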

So what do you pay for with this service? 

You only pay for what you use - which means three things. 

  • 1. You are charged for the number of requests that you send to your function. 

  • 2. The Lambda function begins charging you when it is triggered and stops charging you when your code finishes executing - otherwise known as the duration it runs. Duration is rounded up to the nearest 1 millisecond of use. 

  • 3. You’re charged based on the amount of memory you allocate to your function. So if you allocate the maximum amount of memory, you’ll be charged for that. 
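As a back-of-the-envelope sketch of how those three factors combine - note the per-request and per-GB-second rates below are illustrative placeholders, not current AWS pricing; check the Lambda pricing page for real numbers:

```python
import math

# Illustrative cost model: requests + (memory in GB x billed duration).
# These rates are placeholder values, NOT current AWS pricing.
PRICE_PER_REQUEST = 0.20 / 1_000_000   # illustrative rate per request
PRICE_PER_GB_SECOND = 0.0000166667     # illustrative rate per GB-second

def monthly_cost(requests, avg_duration_ms, memory_mb):
    # Duration is billed rounded up to the nearest 1 millisecond.
    billed_ms = math.ceil(avg_duration_ms)
    gb_seconds = requests * (memory_mb / 1024) * (billed_ms / 1000)
    return requests * PRICE_PER_REQUEST + gb_seconds * PRICE_PER_GB_SECOND
```

For example, one million requests averaging 100 ms at 1024 MB of memory would yield one million GB-seconds times the duration fraction, plus the request charge.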

All right - I hope you’ve enjoyed this overview of AWS Lambda. That’s it for this one -  I’ll see you soon. 

About the Author

Stuart has been working within the IT industry for two decades covering a huge range of topic areas and technologies, from data center and network infrastructure design, to cloud architecture and implementation.

To date, Stuart has created 150+ courses relating to Cloud reaching over 180,000 students, mostly within the AWS category and with a heavy focus on security and compliance.

Stuart is a member of the AWS Community Builders Program for his contributions towards AWS.

He is AWS certified and accredited in addition to being a published author covering topics across the AWS landscape.

In January 2016 Stuart was awarded ‘Expert of the Year Award 2015’ from Experts Exchange for his knowledge share within cloud services to the community.

Stuart enjoys writing about cloud technologies and you will find many of his articles within our blog pages.