What Is Serverless Computing?
In this group of lectures, we explore what serverless computing is and how using the computing resource as a service differs from traditional computing models.
Having an understanding of what cloud computing is will help you gain the most from this course, so if you feel unclear on the fundamentals of cloud computing, I recommend completing our What is Cloud Computing? course first.
Following this course, you will have an understanding of what serverless computing is, how it differs from traditional computing, and a high-level understanding of how code functions can be provisioned and used as a service.
Recognize and explain what serverless computing is
Recognize and explain the benefits of serverless computing
Recognize and explain the design approaches for microservices
So how are people using serverless computing? Well, there are three main patterns that seem to be emerging. The first is nanoservices, where we have one function per job: every single function deals with one single job. Now, that approach will result in us having to write a lot of functions. Each function is simpler, but ultimately they're going to be quite difficult to maintain because we're going to have a lot of them. Most likely our functions will also share a lot of dependencies, which means quite a lot of duplicated code.
Now, let's say we have a simple resource with read, create, and update operations: we're going to need one Lambda function for GET, one Lambda function for POST, and one Lambda function for PUT. That has a lot of overhead in your operations and code maintenance. So the second pattern is one that's more suitable to HTTP interfaces, in that we have one function and multiple jobs or cases. As an example, one serverless function could handle all the GET, POST, and PUT operations for your resource. Now, this allows you to have far fewer functions overall, and it's also much easier to share and organize your code. Plus, it will make your Lambda functions faster, because requests of any type help keep the shared function actively warm.
So if you have all your GET, POST, and PUT logic in one single Lambda function, your Lambda function will remain warm. Here's what that means: if a function isn't invoked within a certain timeframe, the underlying container can be dismissed. So if you don't call the POST operation for an hour, the next call could be slower because the container has to be reprovisioned to deliver the function. Creating one function with a number of interfaces is generally a more efficient way to create serverless functions.
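To make this second pattern concrete, here's a minimal sketch of a single handler that routes GET, POST, and PUT for one resource. The event shape loosely mirrors an API Gateway proxy event, and the in-memory `ITEMS` store is a stand-in for a real database; the names are illustrative, not from the course.

```python
import json

# One function, multiple jobs: a single Lambda-style handler that covers
# GET, POST, and PUT for one resource. ITEMS is an in-memory stand-in
# for a real data store such as DynamoDB.
ITEMS = {}

def handler(event, context=None):
    method = event.get("httpMethod", "GET")
    if method == "GET":
        key = (event.get("pathParameters") or {}).get("id")
        body = ITEMS.get(key)
        return {"statusCode": 200 if body else 404, "body": json.dumps(body)}
    if method in ("POST", "PUT"):
        item = json.loads(event["body"])
        ITEMS[item["id"]] = item
        return {"statusCode": 201 if method == "POST" else 200,
                "body": json.dumps(item)}
    return {"statusCode": 405, "body": json.dumps({"error": "method not allowed"})}
```

Because every request for this resource lands on the same function, any traffic to any of the methods helps keep the one underlying container warm.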
The third approach is what I'll label a new monolithic approach. That's where you manage all of the interaction between your cloud services and your clients with only a minimal number of functions, and then use an interface aggregator. There are a number of services in the market that can provide this consolidation layer. This approach, where everything is managed by one single function that aggregates and returns the right structure to your clients, is becoming popular, and the hope is that it will make managing and maintaining your APIs easier.
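The aggregator idea can be sketched as one function that fans out to several backends and returns a single combined structure. The fetcher functions below are hypothetical stand-ins for calls to other services or data stores, not part of the course material.

```python
# Aggregator pattern sketch: one entry point gathers data from several
# backends and shapes a single response for the client. The fetchers are
# hypothetical stand-ins for downstream service or database calls.
def fetch_profile(user_id):
    return {"id": user_id, "name": "example"}

def fetch_orders(user_id):
    return [{"order": 1}, {"order": 2}]

def aggregate_handler(event, context=None):
    user_id = event["userId"]
    # The client gets one consolidated structure instead of making
    # several round trips to separate endpoints.
    return {
        "profile": fetch_profile(user_id),
        "orders": fetch_orders(user_id),
    }
```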
So let's look at some of the considerations you might have if you're thinking about running a function as a service. Lambda functions are stateless, so they can scale quickly. More than one serverless function can be added to a single event source. Serverless is fast and will execute your code within milliseconds, and the serverless platform manages all of the compute resources required for your function and also provides built-in logging and monitoring through services like Amazon CloudWatch.
Let's have a look at some common examples of where functions as a service can be more efficient for parts, or all, of the processing we might need to complete. One good use case for function as a service is realtime stream processing. Let's think about how we would do this if we didn't have functions as a service, first off. Before being able to host a function to do the work, we'd have to stand up a server and provision it to run our application code. So to do this, we'd go into EC2, select a machine type and an Amazon Machine Image, select the memory, configure our storage, and configure our security groups and network access control lists, and then install the application runtime on an operating system, so that might be Java or Node or C#. We'd then write a trigger or watcher to poll a file system or an S3 bucket for when a file is added, run our conversion function, and then save the file back to the appropriate storage.
Now, doing this with AWS Lambda, our architecture is far less complex. We can take out the provisioning, management, and maintenance of the server. By using an AWS Lambda function, all the base infrastructure is provided for us by AWS as the service provider. The service provider runs our code on a highly available compute infrastructure and performs all the administration of the compute resources, and that includes the server itself, the operating system, all the maintenance and patching that goes with it, and more importantly, the capacity provisioning. That would otherwise generally have to be handled by us using an autoscaling rule or autoscaling group. We'd also have the code monitoring and logging, etc., to look after. So all we need to do using AWS Lambda is supply our code in one of the supported languages, and the service provider executes our code only when it's needed and scales the environment automatically, supporting anywhere from a few requests per day up to thousands per second.
So with these capabilities, you can use AWS Lambda to easily build data processing triggers around AWS services such as Amazon S3, Amazon DynamoDB, or Amazon Kinesis. As that streaming data is stored and processed, we can create our own backend, with AWS handling all of the scale, performance, and security. Now, we do get to set some parameters. We can set the memory allocation, and a larger memory allocation also brings an increase in processor and network performance. So we have just enough control over performance to increase that base layer if needed.
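As an illustration, raising a function's memory allocation is a one-line change with the AWS CLI; the function name here is hypothetical.

```shell
# Hypothetical function name; raising --memory-size also raises the
# CPU and network share allocated to the function.
aws lambda update-function-configuration \
    --function-name my-stream-processor \
    --memory-size 512
```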
So each Lambda instance comes with up to 512 megabytes of temporary storage in /tmp, and we can use that to write out anything we need to. So, we've got to ask ourselves, how does the cloud provider do all this? Behind the scenes, AWS is using containers, as we discussed. So when you request a Lambda function, AWS creates an instance of it within a container that's managed by AWS. The first request to the instance is generally a little slower than subsequent requests, but that difference is minimal.
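A quick sketch of using that scratch space: inside Lambda you'd write under /tmp, but the sketch below falls back to the local temp directory so it also runs outside Lambda. The function and its processing step are illustrative.

```python
import os
import tempfile

# Inside Lambda, scratch files go under /tmp (ephemeral, per-instance).
# tempfile.gettempdir() is used here so the sketch also runs locally.
SCRATCH_DIR = tempfile.gettempdir()

def transform_to_scratch(name, data):
    """Write intermediate output to ephemeral storage and return its path."""
    path = os.path.join(SCRATCH_DIR, name)
    with open(path, "w") as f:
        f.write(data.upper())  # stand-in for real processing work
    return path
```

Remember that this storage is tied to the container instance: if the container is dismissed, anything in /tmp goes with it, so it's only suitable for scratch data within a single invocation's lifetime.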
So realtime event data might, for example, be sent to Amazon Kinesis, which provides large-scale, durable storage of the events for 24 hours and also allows multiple AWS Lambda functions to process the same events. We might have two functions in AWS Lambda: one that processes incoming events and stores event data in a table in Amazon DynamoDB, which has low latency access, and you can provision the needed capacity of a DynamoDB table just by changing the configuration values. We might also have another Lambda function that stores incoming events in Amazon S3, which is more durable and cost-effective, and a great long-term storage solution.
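Kinesis delivers records to Lambda base64-encoded, so a processing function's first job is to decode each record before handing it on. Here's a minimal sketch; the `store` parameter is a stand-in for a DynamoDB or S3 write.

```python
import base64
import json

# Kinesis event records arrive base64-encoded. This handler decodes each
# record's payload and passes it to a downstream store (stubbed here --
# a real function would write to DynamoDB or S3 instead).
def process_kinesis_event(event, store):
    count = 0
    for record in event["Records"]:
        payload = base64.b64decode(record["kinesis"]["data"])
        store.append(json.loads(payload))
        count += 1
    return count
```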
So storing data on Amazon S3 makes the data easily accessible for downstream processing and any other analytics we want to run. So ultimately, Lambda is going to give us a much simpler architecture for this type of streaming application. One of the workflows we'll do in a later lecture is processing a file using an S3 bucket as our trigger. We'll upload a file to our bucket, and we'll have a Lambda function that uses S3 as its trigger point. So it's going to be an event invocation: it's going to use push, and it's going to be asynchronous, because that's the way that Lambda and S3 talk to each other. Once that invocation has occurred, our Lambda function code will compress the file we've uploaded, create a /zip folder, and then save that compressed file as a .zip archive into that new folder. That's a very common workflow, perfect for Lambda.
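The compression workflow can be sketched as follows. The S3 client is injected so the logic runs without AWS: it just needs simple get/put-style methods, which are a hypothetical simplified interface, not the real boto3 signatures a production handler would use.

```python
import io
import zipfile

# Sketch of the S3-triggered compression workflow. The s3 argument is a
# hypothetical client with get_object(bucket, key) and
# put_object(bucket, key, data) methods; a real handler would use boto3.
def zip_handler(event, s3):
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    body = s3.get_object(bucket, key)           # fetch the uploaded file
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.writestr(key, body)                  # compress it into an archive
    out_key = "zip/" + key + ".zip"
    s3.put_object(bucket, out_key, buf.getvalue())  # save to the /zip folder
    return out_key
```

The event shape mirrors the notification S3 pushes to Lambda: the bucket name and object key arrive inside the event, so the function needs no configuration of its own to know what to process.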
So that brings this lecture to a close. I'll see you in the next one.
Head of Content
Andrew is an AWS certified professional who is passionate about helping others learn how to use and gain benefit from AWS technologies. Andrew has worked for AWS and for AWS technology partners Ooyala and Adobe. His favorite Amazon leadership principle is "Customer Obsession" as everything AWS starts with the customer. Passions around work are cycling and surfing, and having a laugh about the lessons learnt trying to launch two daughters and a few start ups.