What Is Serverless Computing?
In this group of lectures, we explore what serverless computing is and how using the computing resource as a service differs from traditional computing models.
Having an understanding of what cloud computing is will help you gain the most from this course, so if you feel unclear on the fundamentals of cloud computing, I recommend completing our What is Cloud Computing? course first.
Following this course, you will have an understanding of what serverless computing is, how serverless computing differs from traditional computing and a high-level understanding of how code functions can be provisioned and used as a service.
Recognize and explain what serverless computing is
Recognize and explain the benefits of serverless computing
Recognize and explain the design approaches for microservices
What is Serverless Computing? The concept of serverless computing was introduced commercially by Amazon Web Services around 2014 with the release of AWS Lambda. While cloud computing has made it possible for us to manage virtual computers and services, customers still had to be proficient with provisioning and managing compute resources. So with the release of AWS Lambda, Amazon Web Services went a step further in making cloud computing easier and more accessible by managing the underlying compute layer for us. AWS runs a code function for you without you needing to provision the machine and the operating system that runs that code.
Well, there are a number of serverless cloud platforms available, the most notable being Amazon Web Services, Microsoft Azure, and the Google Cloud Platform. Now each platform has its own services, solutions, and nuances, and we'll explore some of those once we understand the fundamental concepts of serverless. Let's just pause and clarify a couple of points about this idea of serverless computing.
First, there is still a server involved in the serverless model, but the cloud provider manages that compute resource, not us, so serverless computing is probably better described as Functions as a Service. Serverless computing is a bit like a car share service. You just want a vehicle to get you to your destination, whether that be just across town or across the country. It is expected you will drive carefully when using the vehicle, and you will report any damage. But you are not expected to pay for the car to be built or provisioned before you use it, nor to contribute to the cost of buying or maintaining the vehicle. We only pay for the time that we use the service.
So the second thing to keep in mind with this serverless model is that there are still things that need to be managed, but essentially, all those provisioning tasks that had to be done don't need to be done anymore. That means you've got more time to spend on delivering the application. You still need to deal with operations, but all those operations are pretty developer-friendly, meaning if you do want to do everything from the browser console or with some API, you can! You can author your code, version your code, and test your code in a very easy and user-friendly way.
How does serverless computing work? I'm going to use the AWS Lambda service as an example. Now, in short, with this service, you upload your code snippet to the service provider, which in this case is Amazon Web Services. The Lambda function can be triggered by another service, an HTTP endpoint, or via an application activity. The majority of serverless cloud providers use a container model to provision and manage the underlying infrastructure as a service to run our functions. If I happen to make 1,000 requests, then AWS scales the backend infrastructure to deal with that burst activity. That is not something that I have to manage or provision as a machine type, which is how I might have approached it if I was using EC2 to stand up a machine, install and configure an operating system, and then run my function application code.
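To make the idea concrete, here's a minimal sketch of what a Lambda function looks like in Python. The function name and the event shape are illustrative; the key point is that the service calls your handler with an event and a context for each invocation, and you never touch the machine it runs on.

```python
# A minimal sketch of a Lambda-style handler. The event shape here is
# illustrative; Lambda passes whatever the trigger delivers.
def handler(event, context):
    # Build a response from the incoming event payload.
    name = event.get("name", "world")
    return {"message": f"hello, {name}"}

# Locally, we can simulate an invocation by calling the handler directly.
result = handler({"name": "serverless"}, None)
print(result)
```

Uploaded to Lambda, only the `handler` definition is needed; the local call at the bottom just simulates what the service does for you.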
The benefit of using Functions as a Service is that all platforms will provide you with better scalability and economy, so when a function is invoked, the AWS Lambda service launches a container that is an execution environment based on the configuration settings that you've provided. AWS Lambda manages container creations and deletions, and there is no AWS Lambda API for you to actually manage the container. So it takes a little time for AWS to set up that container, do all the necessary bootstrapping, etc., which does add some latency each time a new Lambda function is invoked.
The Lambda function tries to reuse the container for subsequent invocations of the Lambda function; however, that reuse is not guaranteed. So after a Lambda function is executed, AWS Lambda maintains the container for some time in anticipation of another Lambda function being called, so basically, the service freezes the container after a Lambda function completes and then thaws that container for reuse, if Lambda chooses to reuse the container when the Lambda function is invoked again. So this option to reuse the container provides us with more scale and availability. The container reuse approach means that any declarations to your Lambda function code outside of the actual handler remain initialized, and that means the function can be optimized when it's invoked again.
If your Lambda function establishes a database connection, for example, instead of reestablishing the connection and incurring the latency that involves, the original connection is reused in subsequent invocations. You can add logic to your code to check if a connection already exists before creating a new one. Now, each container implemented by Lambda provides some disk space in the /tmp directory, which is currently 512 megabytes. The directory content remains there when the container is frozen, which provides you with some transient cache that can be used across multiple invocations. You can add extra code to check if the cache has the data that you want available in that storage.
Now, background processes or callbacks initiated by your Lambda function that didn't complete when the function ended resume if the Lambda service chooses to reuse that container. Naturally, if it doesn't reuse that container, they won't resume. So you should make sure any background processes or callbacks in your code are completed before the code exits. Now, that's more of a design consideration. When you write your AWS Lambda function code, don't assume that Lambda will always reuse the container. Lambda may simply create a new container instead of reusing an existing one, so you need to design your code to be state-independent, ideally storing state in a persistent data store such as DynamoDB.
So, a serverless function is really fast to implement and can be easily scaled, and you only pay for the processing time that you use. With the serverless model, rather than building and implementing a computer with an operating system to run a program, you just upload your code and specify how you want your code to be executed with a number of simple parameters, i.e. how much memory your function needs to run. You can set more parameters if you want, e.g. how long it will run before it times out. Once the function is loaded and active, you can execute the function using an API endpoint. Let's break it down a bit more.
So your code snippet can be as long or as short as you like, and you can upload the code as a compressed archive, or you can write or edit the code inline in the Lambda provisioning page. You can write your code in your preferred IDE using the Lambda IDE plugin or one of the SDKs. Lambda natively supports a number of language runtimes, so if you are using one of the available runtimes, your code will be run natively by the Lambda service. Now, our function is basically an inert object until we tell Lambda how to trigger or execute it. Let's think a little more about how we call and execute our code once it's loaded as a Lambda function.
AWS Lambda uses the invoke method to recognize an execution trigger. Lambda supports three types of invocation methods. There's RequestResponse, Event, and DryRun. Now, you can define how the function is invoked when the function is run on-demand. By default, the invoke API assumes the RequestResponse invocation type, so let's start with that one. The RequestResponse runs your function and returns a result back to the requester. If we run the RequestResponse type, the function is run in synchronous mode, i.e. the response is delivered in real time.
Now, the other invocation type is Event. There are two types of event invocation supported, a push and a pull, and there are two modes, synchronous and asynchronous. So the event invocation type allows you to map an event source, and that event source can be another core AWS service, e.g. Amazon S3, where, for example, Amazon S3 will push an event to Lambda when a file is uploaded or changed. This is called the push model.
All right, so Lambda also supports stream-based event sources such as DynamoDB or Kinesis, for example. And with a stream as an event source, Lambda polls the stream and invokes your Lambda function. Now, this is referred to as the pull model, and with the pull model, the event source mappings are maintained within AWS Lambda. So, AWS Lambda provides the relevant APIs to create and manage event source mappings. Event mapping is really useful, as there's a number of predefined event sources set up in Lambda already, most of the core services, in fact, so S3, DynamoDB, Kinesis, etc., and this makes it really easy.
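To show what the push model looks like from the function's side, here's a sketch of a handler that receives a standard S3 notification event. The bucket and object names are made up; the `Records` structure follows the S3 event document format.

```python
# Sketch of a handler for the S3 push model: S3 delivers an event document
# describing each changed object under the "Records" key.
def handler(event, context):
    touched = []
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        touched.append(f"{bucket}/{key}")
    return touched

# Simulated invocation with a minimal S3-style event (names are placeholders):
sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "my-bucket"}, "object": {"key": "photo.jpg"}}}
    ]
}
print(handler(sample_event, None))
```

In production, S3 builds and delivers that event for you; the handler only has to unpack it.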
Now, you can also use your own application as an event source. To do that, we use the invoke method on demand, and a Lambda function can be invoked from, say, your mobile application. So if the event source is our own application or service, that service uses the Lambda invoke API to send the event invocation type. Now, that application can call Lambda from a different account, but it does need a cross-account role with correct privileges to do that type of call.
So, just to clarify, when we use the push model, you can set the invocation event to be asynchronous or synchronous, and with the pull model, the invocation method is defined for us by the Lambda service. Okay, so those are the two main invocation models. We do have a third one, which is called DryRun. The DryRun parameter makes sure that Lambda does everything except execute your function. So this is really useful when you want to verify your setup. If you've got some sort of cross-account access requirement, and you need to check that your inputs are valid, then using the DryRun invocation means that everything bar the actual execution is run, and, of course, you'll trap any errors if any occur.
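The three invocation types come together in the Invoke API's `InvocationType` parameter. Here's a sketch that builds the request parameters in the shape boto3's `invoke` call expects; the function name is a placeholder and the network call itself is shown only in a comment.

```python
import json

# Build the parameters for the Lambda Invoke API. The three valid
# invocation types correspond to synchronous, asynchronous, and
# validation-only calls respectively.
def build_invoke_params(function_name, payload, invocation_type="RequestResponse"):
    assert invocation_type in ("RequestResponse", "Event", "DryRun")
    return {
        "FunctionName": function_name,       # placeholder name in examples
        "InvocationType": invocation_type,
        "Payload": json.dumps(payload),      # payloads are sent as JSON
    }

# The call itself would look like this (not executed here):
# import boto3
# client = boto3.client("lambda")
# response = client.invoke(**build_invoke_params("my-function", {}, "DryRun"))
```

Note the default mirrors the API's own default: omit the type and you get a synchronous RequestResponse call.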
Now, we can also invoke our function over HTTPS using a REST API. Now, that's a common use case, and we can do this by creating a custom REST API and endpoint with the Amazon API Gateway service. With that, we can map individual API operations such as GET or PUT, or we can create our own methods if we want and map those to specific Lambda functions. So when you send an HTTPS request to the API endpoint, the Amazon API Gateway service invokes the corresponding Lambda function, and using API Gateway, we're using the push model. API Gateway invokes the Lambda function by passing data in the request body as a parameter to that Lambda function.
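From the function's side, an API Gateway request arrives as an event with the request body as a string, and the handler returns a status code and body. Here's a sketch assuming API Gateway's proxy-style integration; the field names follow that event format.

```python
import json

# Sketch of a handler behind an API Gateway endpoint (proxy-style
# integration): the request body arrives as a JSON string in event["body"],
# and the response is a statusCode/headers/body structure.
def handler(event, context):
    body = json.loads(event.get("body") or "{}")
    name = body.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Simulated API Gateway request:
print(handler({"body": json.dumps({"name": "api"})}, None))
```

API Gateway turns that returned structure back into an HTTP response for the caller.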
Now, if you're wondering what a REST API is, don't worry. REST stands for Representational State Transfer, which, in not-so-plain English, means an interface that allows a requesting system to access and manipulate textual representations of web resources using a uniform and predefined set of stateless operations. Okay, so that is a lot of words. In plain English, REST interfaces do not require the requesting client to know anything about the structure of the API. The server only provides the information the requesting client needs to interact with the service. The RESTful approach makes the development of modern web applications much more flexible and maintainable. Serverless computing really helps us simplify. We create a function, upload our code snippet to the function handler, set the required parameters, and it's really easy to integrate with other services.
Okay, so that brings this lecture to a close. I'll see you in the next one where we start exploring some of the building blocks of Functions as a Service.
Head of Content
Andrew is an AWS certified professional who is passionate about helping others learn how to use and gain benefit from AWS technologies. Andrew has worked for AWS and for AWS technology partners Ooyala and Adobe. His favorite Amazon leadership principle is "Customer Obsession" as everything AWS starts with the customer. Passions outside work are cycling and surfing, and having a laugh about the lessons learnt trying to launch two daughters and a few start-ups.