AWS Lambda and the Serverless Cloud
re:Invent news about AWS Lambda: we have already listed most of the AWS updates announced during the first and second keynote in Las Vegas. In this article, I’d like to focus on AWS Lambda.
The Serverless landscape will be evolving even faster during the next few months, and AWS is definitely contributing to the Serverless revolution.
Tim Wagner, general manager of both AWS Lambda and Amazon API Gateway, described the capabilities of a modern serverless platform. According to Wagner, such a platform should include a broad set of features and components, including a cloud logic layer, orchestration and state management, responsive data sources, an application modelling framework, a developer ecosystem, integration libraries, security and access control, reliability and performance, and most of all, it must work at global scale.
Continuous Integration and Continuous Delivery are finally much easier with the new native functionalities introduced this week.
Thanks to SAM (Serverless Application Model) and its integration in Amazon CloudFormation, Lambda’s Environment Variables and the new AWS CodeBuild, you can now achieve CI/CD with minimal effort.
SAM represents an open common language for describing the content of a serverless app. Since CloudFormation can speak this language, we finally have the tools to easily package and deploy Lambda-based applications.
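As a rough illustration, a minimal SAM template might look like the following (the function name, handler, and API path are hypothetical, chosen only for the sketch):

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  HelloFunction:                      # hypothetical resource name
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: python2.7
      CodeUri: ./src
      Events:
        HelloApi:                     # wires the function to an API Gateway endpoint
          Type: Api
          Properties:
            Path: /hello
            Method: get
```

CloudFormation expands this compact description into the full set of underlying resources (the Lambda function, the API Gateway API, permissions, and so on) at deploy time.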
CodeBuild, on the other hand, lets you automate the building and testing process. As far as AWS Lambda is concerned, you will also be able to install and compile any additional dependencies of your functions without any manual installation and packaging. For example, you can run npm (for Node.js) or pip (for Python) in your CodeBuild buildspec file and dynamically bundle your code and dependencies together. Even more interesting, you can use your own Docker image, in addition to all the other supported runtimes and the corresponding versions: Ubuntu base, Android, Java, Python, Ruby, Golang, and Node.js.
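For a Python function, a buildspec along these lines could install the dependencies and produce a deployable zip (the file names are assumptions for the sake of the example):

```yaml
version: 0.1
phases:
  install:
    commands:
      # Install dependencies next to the function code so they ship in the bundle
      - pip install -r requirements.txt -t .
  build:
    commands:
      # Package code and dependencies together for Lambda
      - zip -r function.zip .
artifacts:
  files:
    - function.zip
```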
Also, by using AWS Lambda’s Environment Variables, you can easily update the runtime configuration without redeploying your Function. Keep in mind, though, that environment variables become immutable once a Function version is published, so this only works while you are on the $LATEST version.
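A minimal sketch of how a handler might consume such a variable (the `TABLE_NAME` name is hypothetical, and the local invocation at the bottom only simulates what Lambda does for you):

```python
import os

# A Lambda handler that reads its configuration from an environment
# variable instead of hard-coding it (event/context are unused here).
def handler(event, context):
    table = os.environ.get("TABLE_NAME", "default-table")  # hypothetical variable
    return {"table": table}

# Simulate an invocation locally by setting the variable ourselves
os.environ["TABLE_NAME"] = "orders-prod"
print(handler({}, None)["table"])  # prints "orders-prod"
```

Changing `TABLE_NAME` in the Lambda console (or via the API) updates the behavior of `$LATEST` without uploading a new deployment package.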
During Tim Wagner’s session, we learned how to react to GitHub updates and prepare the new Function to be deployed with CodeBuild, and then automatically deploy with CloudFormation and SAM.
AWS announced AWS X-Ray, a fully managed service for analyzing and debugging distributed apps. It’s only available in preview today, and AWS Lambda support will come soon as well.
This powerful new tool will allow you to better visualize your serverless app. You will gain visibility into events traveling through your services and be able to trace calls and timing from AWS Lambda functions to other AWS services.
With such a graphical and dynamic representation of your application topology, you can quickly find dependencies in your microservices and easily detect and diagnose missing events and throttles. Performance profiling will allow you to optimize each function and identify bottlenecks too.
The following features didn’t gain as much attention during the official keynotes, but they will have an impact on your serverless apps:
After the recent updates, there are a bunch of new places where you can run AWS Lambda Functions. Here is a short list of new options:
Since every function execution is supposed to be stateless, orchestrating and organizing multiple Lambda functions has been a tough task since day one.
So far, Lambda functions could only interact via hacks or complex workarounds that always required more work than they should. Here are a few strategies used until today:
A mature function orchestration system should allow you to scale out without losing state while dealing with errors and timeouts. It should be easy to build and operate, with built-in auditing.
With these requirements in mind, AWS released AWS Step Functions.
This new service is backed by Amazon Simple Workflow under the hood. It is already available in five regions and it allows you to manage serverless application state using visual workflows. Step Functions is designed to scale up to billions of invocations, and it makes it easy to define finite-state machines (FSM) using a JSON DSL, with the help of an intuitive visual representation (boxes and arrows). You can also use this Ruby gem to validate your JSON FSM.
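To give a flavor of the JSON DSL, here is a sketch of a two-step state machine (the function names and the account ID in the ARNs are made up for the example):

```json
{
  "Comment": "A hypothetical two-step workflow",
  "StartAt": "ProcessOrder",
  "States": {
    "ProcessOrder": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:ProcessOrder",
      "Next": "NotifyUser"
    },
    "NotifyUser": {
      "Type": "Task",
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:NotifyUser",
      "End": true
    }
  }
}
```

Each `Task` state invokes a Lambda function, and the `Next`/`End` fields define the transitions that the visual editor renders as boxes and arrows.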
The service comes with a set of ready-to-use blueprints and it allows you to handle plenty of use cases:
Two of the available blueprints are worth highlighting: choice state and parallel execution.
Technically, the state is persisted in JSON texts that pass from state to state. You can also configure each step with optional InputPath and ResultPath parameters, which allow you to pre-process the input and post-process the function output. This is a very powerful mechanism that allows you to implement generic functions and adapt the workflow to their input/output requirements in a very flexible way.
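The pre-/post-processing idea can be sketched with a toy simulation (illustrative only: the real service uses JSONPath expressions, while here plain dictionary keys stand in for the paths):

```python
# A toy simulation of Step Functions' InputPath/ResultPath semantics.
def apply_paths(state_input, task, input_key=None, result_key=None):
    # InputPath: select a sub-document of the state input for the task
    task_input = state_input[input_key] if input_key else state_input
    result = task(task_input)
    # ResultPath: merge the task result back into the original input,
    # so downstream states see both the original data and the new result
    if result_key:
        merged = dict(state_input)
        merged[result_key] = result
        return merged
    return result

# A generic doubling function, adapted to the workflow's data shape
out = apply_paths({"order": {"qty": 3}}, lambda x: x["qty"] * 2,
                  input_key="order", result_key="total")
print(out)  # {'order': {'qty': 3}, 'total': 6}
```

This is why generic, reusable functions fit naturally into a workflow: the paths adapt the data to the function, not the other way around.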
One final note about pricing: you will pay $0.025 per thousand state transitions. In other words, $1 buys you 40,000 transitions, plus the cost of the AWS Lambda invocations themselves. The Free Tier includes 4,000 free transitions per month.
I am personally very excited about the amount of news and innovation announced by AWS at re:Invent 2016 regarding serverless computing. There is a whole new set of use cases, performance improvements, code refactoring, and cost optimizations to experiment with and evaluate.
I’m looking forward to seeing how the ecosystem will react, especially the wide range of automation tools and frameworks that might be able to simplify their internal logic and quickly add new functionalities. Serverless keeps changing how we think about application development, and things are getting better every month.
Let us know what you like or dislike about the recent news, and how it will affect your next project. I’m sure many of you had new ideas and application scenarios, and I can’t wait to hear about them.