re:Invent news about AWS Lambda: we have already listed most of the AWS updates announced during the first and second keynote in Las Vegas. In this article, I’d like to focus on AWS Lambda.
The Serverless landscape will be evolving even faster during the next few months, and AWS is definitely contributing to the Serverless revolution.
Tim Wagner, general manager of both AWS Lambda and Amazon API Gateway, described the capabilities of a modern serverless platform. According to Wagner, such a platform should provide a broad set of features and components: a cloud logic layer, orchestration and state management, responsive data sources, an application modeling framework, a developer ecosystem, integration libraries, security and access control, and reliability and performance. Most of all, it must work at a global scale.
CI/CD for serverless applications
Continuous Integration and Continuous Delivery are finally much easier with the new native functionalities introduced this week.
Thanks to SAM (Serverless Application Model) and its integration in Amazon CloudFormation, Lambda’s Environment Variables and the new AWS CodeBuild, you can now achieve CI/CD with minimal effort.
SAM represents an open common language for describing the content of a serverless app. Since CloudFormation can speak this language, we finally have the tools to easily package and deploy Lambda-based applications.
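As a rough illustration of what that common language looks like, here is a minimal SAM template sketch for a single function (the resource name, handler path, and environment variable are illustrative, not from any official example):

```yaml
# template.yml -- minimal SAM sketch; resource and handler names are illustrative
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  HelloFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs4.3
      CodeUri: ./src
      Environment:
        Variables:
          STAGE: prod
```

The `Transform` line is what tells CloudFormation to expand the SAM shorthand into full CloudFormation resources at deploy time.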
CodeBuild, on the other hand, allows you to automate the building and testing process. As far as AWS Lambda is concerned, you will also be able to install and compile any additional dependencies of your functions without any manual installation and packaging. For example, you will run npm (for Node.js) or pip (for Python) in your CodeBuild buildspec file and dynamically bundle your code and dependencies together. Even more interesting, you can use your own Docker Image, in addition to all the other supported runtimes and the corresponding versions: Ubuntu base, Android, Java, Python, Ruby, Golang, and Node.js.
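A buildspec for this flow might look roughly like the following sketch (the bucket name and template file are placeholders, and the exact packaging command you use may differ):

```yaml
# buildspec.yml -- sketch of a CodeBuild build that bundles a Node.js function
version: 0.1
phases:
  install:
    commands:
      - npm install        # pull the function's dependencies
  build:
    commands:
      - npm test           # run the test suite before packaging
  post_build:
    commands:
      # upload the code bundle and rewrite the SAM template to point at it
      - aws cloudformation package --template-file template.yml --s3-bucket my-artifacts-bucket --output-template-file packaged.yml
```

The resulting `packaged.yml` can then be deployed by CloudFormation in a later pipeline stage.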
Also, by using AWS Lambda’s Environment Variables, you can easily update the runtime configuration without redeploying your function. Note, however, that variables are frozen once a function version is published, so in-place updates only work on the $LATEST version.
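A minimal sketch of the pattern in Python (the handler and variable names are illustrative): the function reads its configuration from the environment at invocation time, so the value can be changed through the Lambda console or API without shipping a new deployment package.

```python
import os

def handler(event, context):
    # Read configuration from the environment instead of hard-coding it,
    # so the runtime configuration can be updated without redeploying.
    table_name = os.environ.get("TABLE_NAME", "default-table")
    return {"table": table_name, "ok": True}
```
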
During Tim Wagner’s session, we learned how to react to GitHub updates and prepare the new Function to be deployed with CodeBuild, and then automatically deploy with CloudFormation and SAM.
#AWS CodeBuild – Fully Managed Build Service – https://t.co/w3PtIgJRLG
— Jeff Barr ☁️ (@jeffbarr) December 1, 2016
AWS Lambda Apps Diagnostics
AWS announced AWS X-Ray, a fully managed service for analyzing and debugging distributed apps. It’s only available in preview today, and AWS Lambda support will come soon as well.
This powerful new tool will allow you to better visualize your serverless app. You will gain visibility into events traveling through your services and be able to trace calls and timing from AWS Lambda functions to other AWS services.
With such a graphical and dynamic representation of your application topology, you can quickly find dependencies in your microservices and easily detect and diagnose missing events and throttles. Performance profiling will allow you to optimize each function and identify bottlenecks too.
New minor features
The following features didn’t gain as much attention during the official keynotes, but they will have an impact on your serverless apps:
- AT_TIMESTAMP Kinesis iterator: You can start reading a Kinesis stream at any point in time, not only from the newest or oldest records. This means that you can stop and restart processing without rewinding or losing any data.
- C# and .NET Core: The Lambda C# integration is based on netcoreapp 1.0 on Amazon Linux, with built-in logging and metrics. It already supports common AWS event types such as S3 and SNS, and it’s well integrated with Visual Studio.
- Dead-letter queue (DLQ): A new and reliable end-to-end event processing solution. You can send all unprocessed events to an SQS queue or an SNS topic and always preserve events even if there is an issue in your code. This can be configured on a per-function level, for all async invokes including S3 and SNS events.
- Aurora SQL triggers: You can now invoke AWS Lambda functions directly from your database code.
- Mobile Hub Enterprise Connectors: You can now use built-in SaaS connectors for Salesforce, Microsoft Dynamics, Marketo, HubSpot, Zendesk, QuickBooks, etc. Each connector runs as an AWS Lambda function, and you will be able to integrate your own.
- S3 per-object CloudTrail events: You can finally respond to GET events on Amazon S3 with Lambda.
- CloudWatch metric-to-logs links
- API Gateway pass-through mode: A new default mapping template will send the entire request to your Lambda function.
- API Gateway binary encoding: API Gateway now supports binary payloads and responses. This allows you to have multimedia input and output such as images or compressed files.
- API Gateway AWS Marketplace integration: You can now sell your APIs on the AWS Marketplace. This enables easy discovery and procurement for API consumers, API usage tracking, and automated billing.
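To make the dead-letter queue feature above more concrete, here is a hedged CloudFormation sketch of a function that routes its failed asynchronous invocations to an SQS queue (the queue ARN, bucket, and role are placeholders, not real resources):

```yaml
# CloudFormation sketch: send failed async invocations to an SQS queue
# (queue ARN, bucket, and role names are illustrative)
MyFunction:
  Type: AWS::Lambda::Function
  Properties:
    Handler: index.handler
    Runtime: nodejs4.3
    Role: !GetAtt LambdaRole.Arn
    Code:
      S3Bucket: my-bucket
      S3Key: function.zip
    DeadLetterConfig:
      TargetArn: arn:aws:sqs:us-east-1:123456789012:failed-events
```

Because the DLQ is configured per function, each function can have its own queue or topic for unprocessed events.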
New options for using AWS Lambda Functions
After the recent updates, there are a bunch of new places where you can run AWS Lambda functions. Here is a short list of new options:
- Lambda Bots and Amazon Lex: You can build interactive experiences with both text and speech, integrating Facebook and Mobile Hub. Slack and Twilio integrations are coming soon.
- Kinesis Firehose: Soon, you will be able to transform, audit, or aggregate Kinesis records on the fly with Lambda. It’s basically a new flexible buffering option, at scale.
- AWS Snowball Edge: Every new Snowball device will come with both storage and processing capabilities so that you can run local AWS Lambda functions to pre-process your data before it’s physically transferred to AWS.
- AWS Greengrass: With Greengrass (similar to Snowball Edge), you will be able to run local Lambda functions on potentially any device.
- Lambda@Edge: You can now execute Lambda functions locally at each CloudFront edge location to achieve very low-latency request/response. Unfortunately, it’s only available in preview today, and currently limited to Node.js, a 50 ms maximum execution time, and header transformations only. For now, you pay around $0.60 per million requests, with 4,000 free requests per month.
Choreographing Lambda Functions with AWS Step Functions
Since every function execution is supposed to be stateless, orchestrating and organizing multiple Lambda functions has been a tough task since day one.
So far, Lambda functions could only interact via hacks or complex workarounds that always required more work than they should have. Here are a few strategies used until today:
- Method call: This is the most naive method; you can definitely do better at modularity.
- Chaining: You can always let a Lambda Function asynchronously call another function, but this makes error handling and retry policies very tricky to manage, if not impossible.
- Databases: Storing the state somewhere is very easy to implement, but it always requires a lot of code and you risk degrading overall scalability.
- Queues: Using queues is a cloud-native approach and could lead to good results, but it still requires too much work and retry policies are difficult to implement.
A mature functions orchestration system should allow you to scale out without state loss while dealing with errors and timeouts. It should be easy to build and operate with built-in auditing.
With these requirements in mind, AWS released AWS Step Functions.
This new service is backed by Amazon Simple Workflow under the hood. It is already available in 5 regions and allows you to manage serverless application state using visual workflows. Step Functions is designed to scale up to billions of invocations, and it makes it easy to define finite-state machines (FSMs) using a JSON DSL, with the help of an intuitive visual representation (boxes and arrows). You can also use this Ruby gem to validate your JSON FSM.
The service comes with a set of ready-to-use blueprints and it allows you to handle plenty of use cases:
- Functions chaining: Create a sequence of functions and define custom input/output manipulation without changing each function’s source code.
- Parallel execution: Run multiple functions in parallel and wait for their execution before proceeding to the next step.
- Conditional selection: Execute specific functions based on data by defining a choice state that will check on the input variables and select the next function to run.
- Retry policies: Define a Retry field with multiple handlers based on given conditions, the maximum number of attempts, retry intervals, and backoff rates.
- Try/catch/finally: Define a Catch field with multiple handlers based on given conditions and specify the next function to run.
- Long-running tasks: You may need to run code for hours. Step Functions allows you to define a Poll step that will check the external execution status: either Success, Failure, or Heartbeat. You can bind polling to any activity, such as EC2 instances, containers, local servers, etc.
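Several of the use cases above can be combined in a single state machine definition. The sketch below builds a small Amazon States Language document in Python for readability: two chained Lambda tasks, with a retry policy on the first (the function ARNs and state names are illustrative placeholders):

```python
import json

# Sketch of an Amazon States Language definition: two chained Lambda
# tasks, with a retry policy on the first (ARNs and names illustrative).
definition = {
    "Comment": "Chain two functions with retries",
    "StartAt": "ExtractData",
    "States": {
        "ExtractData": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:extract",
            "Retry": [{
                "ErrorEquals": ["States.TaskFailed"],
                "IntervalSeconds": 2,
                "MaxAttempts": 3,
                "BackoffRate": 2.0,
            }],
            "Next": "StoreData",
        },
        "StoreData": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:store",
            "End": True,
        },
    },
}

# The JSON document is what you would paste into the Step Functions console.
state_machine_json = json.dumps(definition, indent=2)
```
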
These screenshots represent two of the available blueprints: choice state and parallel execution.
Technically, the state is persisted in JSON texts that pass from state to state. You can also configure each step with optional InputPath and ResultPath parameters, which allow you to pre-process the input and post-process the function output. This is a very powerful mechanism that allows you to implement generic functions and adapt the workflow to their input/output requirements in a very flexible way.
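To make the mechanism tangible, here is a deliberately simplified local illustration of the InputPath/ResultPath idea (this is not the service itself: real Step Functions uses JSONPath expressions, while this toy version only supports a single dictionary key):

```python
def apply_paths(state_input, task, input_path_key=None, result_path_key=None):
    # Simplified local illustration of InputPath/ResultPath semantics:
    # - input_path_key selects a sub-tree of the state input to feed the task
    # - result_path_key grafts the task result back into the original input
    selected = state_input[input_path_key] if input_path_key else state_input
    result = task(selected)
    if result_path_key:
        merged = dict(state_input)
        merged[result_path_key] = result
        return merged
    return result

# A generic function that knows nothing about the surrounding workflow:
double = lambda payload: {"value": payload["value"] * 2}

out = apply_paths({"value": 21, "meta": "keep-me"}, double,
                  result_path_key="result")
# `out` keeps the original input and adds the task result under "result"
```
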
One final note about pricing: You will pay $0.025 per thousand state transitions. In other words, $1 buys you 40,000 transitions, plus the cost of AWS Lambda itself. The Free Tier includes 4,000 free transitions per month.
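A quick back-of-the-envelope check of the pricing above, assuming the Free Tier allowance applies per account per month:

```python
PRICE_PER_1000_TRANSITIONS = 0.025  # USD, as announced at launch
FREE_TRANSITIONS = 4000             # monthly Free Tier allowance

def monthly_cost(transitions):
    # Only transitions beyond the Free Tier are billed.
    billable = max(0, transitions - FREE_TRANSITIONS)
    return billable / 1000 * PRICE_PER_1000_TRANSITIONS

# 44,000 transitions: the first 4,000 are free, the remaining 40,000 cost $1
```
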
I am personally very excited about the amount of news and innovation announced by AWS at re:Invent 2016 regarding serverless computing. There is a whole new set of use cases, performance improvements, code refactoring, and cost optimizations to experiment with and evaluate.
I’m looking forward to seeing how the ecosystem will react, especially the wide range of automation tools and frameworks that might be able to simplify their internal logic and quickly add new functionalities. Serverless keeps changing how we think about application development, and things are getting better every month.
Let us know what you like or dislike about the recent news, and how it will affect your next project. I’m sure many of you had new ideas and application scenarios, and I can’t wait to hear about them.