What Is Serverless Computing?
In this group of lectures, we explore what serverless computing is and how using the computing resource as a service differs from traditional computing models.
Having an understanding of what cloud computing is will help you gain the most from this course, so if you feel unclear on the fundamentals of cloud computing, I recommend completing our What is Cloud Computing? course first.
Following this course, you will have an understanding of what serverless computing is, how it differs from traditional computing, and a high-level understanding of how code functions can be provisioned and used as a service.
Recognize and explain what serverless computing is
Recognize and explain the benefits of serverless computing
Recognize and explain the design approaches for microservices
There are real advantages to developing applications where we don't need to manage the server we deploy the application on. Let's just think these through in a real-world scenario. The main benefit is zero administration. With a serverless application, you don't have to worry about the computer. You don't have to set up the processor or the memory. You don't have to worry about the disks the machine will use. You don't have to manage the server at all. With serverless computing, the cloud service provider takes on managing the computing environment for you. You pay for the time your function is executing, rather than the time a machine is provisioned for. So that straight away gives us the benefit of economy of scale.
Another benefit is the utility model. When we need to scale a function up or down, we just scale that single function. We don't need to scale an entire system, we don't need to scale a container, and we don't even need to scale an application. Another real benefit is that serverless has built-in fault tolerance and high availability by design.
Remember when we talked about provisioning our machines with auto scaling groups, setting up our network access control lists, setting up our security groups? That alone is enough of a headache, right? But then we have to think about high availability. Are we gonna run this across multiple Availability Zones? One of the core benefits I see is that serverless provides us with that high availability out of the box. We get multiple Availability Zones in each region to help protect our code against individual machine or data center failure. That means that functions running on the service provide predictable and reliable operational performance. That's saving us a lot of time and a lot of headaches. There are no maintenance windows or scheduled downtimes for us to worry about.
One key factor to keep in mind is that we need to take that microservice approach. With microservices, we're aiming to break down and decouple our functions into independent units. This is always going to be easier in a greenfield design. If we're looking at migrating an application and decoupling it into Lambda functions, we're always gonna have to do more work than if we're designing from scratch for a serverless microservice environment.
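To make the decoupling idea concrete, here's a minimal sketch of what two independent units might look like. The `(event, context)` signature is AWS Lambda's standard Python handler contract; the function names and event shapes are hypothetical, just to illustrate the single-purpose design.

```python
import json

# Two hypothetical single-purpose functions, each deployed and scaled
# independently -- the microservice approach described above.
# The (event, context) signature matches AWS Lambda's Python handler contract.

def resize_image_handler(event, context):
    """Handles only image resizing; knows nothing about notifications."""
    key = event["s3_key"]            # assumed event shape for this sketch
    width = event.get("width", 128)  # default thumbnail width
    # ... real resizing work would happen here ...
    return {"statusCode": 200,
            "body": json.dumps({"resized": key, "width": width})}

def notify_user_handler(event, context):
    """Handles only notifications; scales on its own, separate from resizing."""
    user = event["user_id"]
    return {"statusCode": 200,
            "body": json.dumps({"notified": user})}
```

Because each handler is its own deployable unit, a spike in image uploads scales only the resize function, leaving the notification function untouched, which is exactly the benefit of scaling a single function rather than a whole application.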
Now, if you follow the microservices-oriented design, you can really speed up your development and simplify your workflow. Just removing the need to provision and manage servers from your daily tasks takes out a lot of time. Another benefit is that serverless is gonna keep our costs down by charging us only for the execution time of our workloads, not for idle resources, as we do with provisioned compute. We're not paying for servers, we're only paying for invocations of our functions, which is a really positive model when you're talking to businesses about transaction times and helping them work out the actual cost of delivering a function.
Cost considerations should always be a factor in your design. The amount of compute you're billed for each month is gonna depend on the amount of memory we allocate for our code to run with. The lowest memory setting, 128 megabytes, costs far less per second of execution than a maximum memory of one gigabyte, which allocates significantly more processing power and is therefore gonna cost more.
Alright, so let's just think this through. We currently get 400,000 gigabyte-seconds per month before we get charged with AWS Lambda. Once we exceed the first million requests, we're then charged 20 cents for each additional batch of one million requests. If we exceed our free execution time, we're charged per 100 milliseconds of execution, with the rate determined by the memory allocated to each function. So the charging is basically done in fractions of a cent per 100 milliseconds, which, of course, is very compelling on paper.
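The pricing model described above can be sketched as a small calculation. The rates used here (a 400,000 GB-second and one-million-request free tier, $0.20 per additional million requests, roughly $0.00001667 per GB-second) reflect AWS Lambda's published pricing at the time, but AWS pricing changes, so treat the numbers as illustrative assumptions.

```python
# Assumed AWS Lambda rates at time of writing -- illustrative only.
FREE_REQUESTS = 1_000_000
FREE_GB_SECONDS = 400_000
PRICE_PER_MILLION_REQUESTS = 0.20
PRICE_PER_GB_SECOND = 0.00001667

def monthly_lambda_cost(requests, avg_duration_s, memory_gb):
    """Estimate one function's monthly bill after the free tier."""
    gb_seconds = requests * avg_duration_s * memory_gb
    billable_gb_s = max(0, gb_seconds - FREE_GB_SECONDS)
    billable_reqs = max(0, requests - FREE_REQUESTS)
    compute_cost = billable_gb_s * PRICE_PER_GB_SECOND
    request_cost = (billable_reqs / 1_000_000) * PRICE_PER_MILLION_REQUESTS
    return compute_cost + request_cost
```

For example, 2 million requests a month at one second each with 256 megabytes allocated works out to 500,000 GB-seconds; after the free tier that's 100,000 billable GB-seconds plus one million billable requests, or about $1.87 a month with these rates.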
Let's just think this out so that we have some good evaluation criteria. With serverless pricing, we're only paying for the time our function's running, right? So let's think through a couple of use cases we might use to compare it. Let's say we had a hosted website receiving 10,000 hits a day. Let's project 400 milliseconds of execution time per hit, with our functions set to 256 megabytes, so mid-range memory, and say we have two functions per page request. That's 600,000 requests per month, which is gonna cost us around 87 cents per month, which, on the current pricing model, would be around a tenth of the cost of running an EC2 t2.nano instance. If we're looking at our price comparison here using the AWS Simple Monthly Calculator, that's gonna be a far cheaper method than starting up a t2.nano and having to run and manage it.
Alright, so for low-end processing, absolutely. Let's say we have a scheduled job that runs every hour and requires quite a bit of crunching time, so we'll give it a whole gigabyte of RAM and let it run for up to two minutes. That's 720 requests per month, which on the Simple Monthly Calculator would come in at $1.44 a month for a Lambda function, which again is around a third of the cost of our t2.nano instance. So we're coming in cheaper, and these are very sketchy comparisons of course, but the comparison would shift in favor of EC2 if we had a function or script that took more than 60 seconds to complete and ran frequently at high CPU utilization. One consideration is that scripts can only run for a certain allocated time with Lambda. And if we're currently running three or four large or xlarge servers at high CPU utilization, Lambda may not provide a large difference in compute costs over those provisioned machines.
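The two back-of-the-envelope figures above can be checked with simple arithmetic, using an assumed rate of roughly $0.00001667 per GB-second and ignoring the free tier so we see the raw compute charge. With that rate, the scheduled job lands on the $1.44 quoted above; the website case lands near a dollar, the same ballpark as the quoted 87 cents (the exact figure depends on the pricing and rounding in effect at the time).

```python
# Assumed Lambda compute rate at time of writing -- illustrative only.
PRICE_PER_GB_SECOND = 0.00001667

def raw_compute_cost(requests, duration_s, memory_gb):
    """Raw monthly compute charge (free tier ignored)."""
    return requests * duration_s * memory_gb * PRICE_PER_GB_SECOND

# Website: 600,000 requests/month, 400 ms each, 256 MB allocated.
website = raw_compute_cost(600_000, 0.4, 0.25)  # ~ $1.00/month

# Scheduled job: 720 runs/month (hourly), 2 minutes each, 1 GB allocated.
batch = raw_compute_cost(720, 120, 1.0)         # ~ $1.44/month
```

The useful takeaway is the shape of the formula: cost scales linearly with request count, duration, and memory, so doubling any one of them doubles the compute charge.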
With serverless, we have to use that microservice approach to get the benefit of reduced maintenance time. However, if we do need to refactor or redeploy an application, if we're modernizing one for example, tread carefully, because we have to factor in the redevelopment and deployment time if our application code is complex. So we need to be really sure that this is gonna give us the other benefits beyond cost: scalability, reduced maintenance time, and, ideally, something that's going to be a little bit easier to look after going forward.
Just to close out, serverless is not designed for all workloads; some are better suited than others. Code that does not require human interaction is always going to be a good candidate for a serverless function.
Okay, so we've gone over what serverless computing is and how it works. We've looked at the building blocks of serverless computing. We've talked about how serverless differs from some of the other services that are provided, and when we might consider using serverless over one of those other services. And we've looked at some common use cases where serverless computing can be of real value. So in the following courses, let's get into how we stand up and use serverless applications.
Alright, that brings us to the end of this lecture. If you have any questions or you'd like to know more, please contact us at firstname.lastname@example.org. Thank you very much for your attention.
About the Author
Head of Content
Andrew is an AWS certified professional who is passionate about helping others learn how to use and gain benefit from AWS technologies. Andrew has worked for AWS and for AWS technology partners Ooyala and Adobe. His favorite Amazon leadership principle is "Customer Obsession" as everything AWS starts with the customer. Passions around work are cycling and surfing, and having a laugh about the lessons learnt trying to launch two daughters and a few start ups.