What are the Building Blocks of Serverless Computing?



In this group of lectures, we explore what serverless computing is and how using the computing resource as a service differs from traditional computing models. 

Having an understanding of what cloud computing is will help you gain the most from this course, so if you feel unclear on the fundamentals of cloud computing, I recommend completing our What is Cloud Computing? course first.

Following this course, you will have an understanding of what serverless computing is, how serverless computing differs from traditional computing, and a high-level understanding of how code functions can be provisioned and used as a service.

Learning Objectives
Recognize and explain what serverless computing is
Recognize and explain the benefits of serverless computing
Recognize and explain the design approaches for microservices



So what are the building blocks of functions as a service? The first building block of note is the RESTful API. REST is an architectural style for developing modern and user-friendly web services. So instead of defining custom methods and protocols, such as SOAP or WSDL, REST is based on HTTP as the transport protocol. HTTP is used to exchange textual representations of web resources across different systems using predefined methods, such as GET, POST, PUT, PATCH, and DELETE.

Now the standard representation format for REST is JSON, or JavaScript Object Notation, which is also the most convenient format for developing modern web applications since it maps natively onto JavaScript objects. The level of abstraction provided by a RESTful API should guarantee a uniform interface and a set of stateless interactions. This means that all the information necessary to process a request needs to be included in the request itself. The server doesn't need to tell the client anything about the structure behind that request, so the client and server operate in a stateless fashion where only what's requested is transmitted between the two. Each resource should eventually be cacheable by the client.
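To make that concrete, here's a minimal Python sketch of a stateless REST request. The resource path, token, and body here are hypothetical placeholders; the point is that everything the server needs travels inside the request itself, serialized as JSON.

```python
import json

# A stateless REST request carries everything the server needs to
# process it: the method, the resource path, auth, and a JSON body.
# (The path, token, and body below are illustrative placeholders.)
request = {
    "method": "POST",
    "path": "/orders",
    "headers": {
        "Authorization": "Bearer <token>",
        "Content-Type": "application/json",
    },
    "body": {"item": "book", "quantity": 2},
}

# JSON is the standard representation: plain text that maps directly
# onto JavaScript objects on the client side.
payload = json.dumps(request["body"])
print(payload)  # {"item": "book", "quantity": 2}

# Nothing is lost in a round-trip, and the server holds no session
# state between requests.
assert json.loads(payload) == request["body"]
```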

The second building block is the stateless function. Lambda functions use this RESTful interface, their event sources can be either push- or pull-based, and they can operate in synchronous or asynchronous modes. AWS Lambda supports both synchronous and asynchronous invocations of a Lambda function. You control the invocation type only when you invoke a Lambda function yourself, which is referred to as an on-demand invocation. As an example, your custom application invokes a Lambda function, or you manually invoke a Lambda function, say using the command line interface while running a test or something similar. You call the function using the Invoke operation, and you can specify the invocation type to be either synchronous or asynchronous.

Now, when you're using AWS services as an event source, the invocation type is predetermined for each service. You don't have to control the invocation type, as the event source predetermines it for you. So Amazon S3 always invokes a Lambda function asynchronously, and Amazon Cognito always invokes a Lambda function synchronously. That makes things a bit easier when you're using an AWS service as the event source for a Lambda function.

For stream-based AWS services, such as Amazon Kinesis and DynamoDB Streams, AWS Lambda polls the stream and invokes your Lambda function synchronously. A number of concurrent executions of a function can be in flight at any given time. You can estimate the concurrent execution count, but it will differ depending on whether or not your Lambda function is processing events from a stream-based event source. You can also create a Lambda function to process events from event sources other than stream-based ones, such as Amazon S3 or API Gateway.

Now each published event is a unit of work. Therefore, the number of events or requests these event sources publish influences the concurrency of that service. There is a formula you can use to estimate concurrent Lambda function invocations: events (or requests) per second multiplied by the average function duration. As an example, suppose a Lambda function processes Amazon S3 events, the function takes an average of three seconds to run, and Amazon S3 publishes 10 events a second; then we can estimate we'll have 30 concurrent executions of that Lambda function.
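That estimate is simple enough to express directly, using the figures from the example above:

```python
def estimated_concurrency(events_per_second, avg_duration_seconds):
    """Estimate concurrent Lambda executions for a non-stream event
    source: concurrency ~ events per second x average function duration.
    This is a planning estimate, not a guarantee from the service."""
    return events_per_second * avg_duration_seconds


# The lecture's example: S3 publishes 10 events per second and the
# function takes an average of 3 seconds to run.
print(estimated_concurrency(10, 3))  # 30
```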

Our third building block is the microservice design pattern. One key priority when designing around functions as a service is to take a microservice design approach so that you gain the most out of the serverless computing model. For this we need state independence and decoupled layers, so there's no dependency on other services, and to achieve this we need a few common building blocks.

First, there's the function itself. You need to be developing simple functions using natively supported languages. Secondly, we need RESTful APIs. RESTful APIs are a core building block for microservice design, as they give us reliable interfaces. The new model is to build richer clients with client-side frameworks, such as Angular.js, React.js, or Polymer.js. This way your web application can easily be distributed as a set of static assets, such as HTML pages, JavaScript files, and CSS files, and those will load dynamic content via an application programming interface, or API. This new architectural pattern allows you to separate business logic from your presentation layers, and at the same time your services will be easier to scale and reuse, and can eventually be consumed by more than one client, including mobile clients as an example. I suggest you explore the various differences in services and support on a case-by-case basis, as all of the providers are evolving rapidly and there's no one hard and fast rule for choosing one platform provider over another; they all have different ways of doing things and provide different services at different levels.

Now, in the AWS world, a typical configuration looks something like this. We'll have a static website hosted on Amazon S3 that can be distributed via Amazon CloudFront. We'll have a RESTful API implemented with HTTP endpoints exposed via the Amazon API Gateway service. We might have dynamic data stored in Amazon DynamoDB, or any other database-as-a-service alternative that best suits the use case. This is how you'd build a completely serverless web application, meaning we won't need to manage, patch, or maintain any server during the development and deployment workflow.
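As a minimal sketch of the function side of such an application, here's a Lambda handler shaped for API Gateway's Lambda proxy integration. The greeting logic is purely illustrative; a real application would typically read its dynamic data from DynamoDB via the AWS SDK rather than compute a message inline.

```python
import json


def handler(event, context):
    """Minimal Lambda handler for an API Gateway proxy integration.

    API Gateway passes the HTTP request in as `event`; the handler
    returns a dict that API Gateway translates back into an HTTP
    response. The "name" query parameter here is illustrative only.
    """
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }


# Simulate an API Gateway event locally, no server required:
response = handler({"queryStringParameters": {"name": "serverless"}}, None)
print(response["body"])  # {"message": "hello, serverless"}
```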

Now let's compare that to how we might build an application in a non-serverless environment. Our traditional model might look something like this. We would implement an Elastic Load Balancer to handle incoming requests from a public domain. The ELB would then pass requests to an Auto Scaling group, which may have a series of scaling rules to scale up or down based on our launch configuration, and which will provision EC2 machines.

Those Elastic Compute Cloud machines will be based on an AMI, or Amazon Machine Image, which contains an operating system, and each of those machines will need to be bootstrapped before it's provisioned. So we have to go through quite a lot of bootstrapping and provisioning to get each of these machines running in a way that will deliver even a simple application. We need to set our network access control lists to limit who and what can access our VPC. We need to set our security groups to limit requests to the services themselves. And just as importantly, we're going to have to set those auto scaling rules to ensure that the machines we provision can deal with any burst activity that might eventuate.

Overall, there's a lot of extra work when I just want to run a simple code snippet. The benefit of AWS Lambda is that it does all of that for us. So we might be wondering, how does serverless differ from our traditional computing models, where we have containers or services like Elastic Beanstalk, for example? Even though containers are highly scriptable, you are still responsible for maintaining them through their lifecycles. For example, Amazon ECS only provides runtime execution services; everything else is still in your hands. Lambda functions, on the other hand, are far more self-sufficient. So while Lambda has some features in common with the EC2 Container Service, it's obviously more than that as a service. If serverless computing's not the same as containers, is it more like a service such as Amazon Elastic Beanstalk? Although Lambda does provide a platform for developers, it's much simpler than Elastic Beanstalk. AWS Lambda inherits some features from the EC2 Container Service and others from Elastic Beanstalk, but it's conceptually distant from both. This type of processing isn't going to suit every deployment, but for non-human processing, it's often a far easier way to provision and deploy.

So that brings this lecture to a close. I'll see you in the next one where we start to explore some of the common use cases for serverless computing.

About the Author

Head of Content

Andrew is an AWS certified professional who is passionate about helping others learn how to use and gain benefit from AWS technologies. Andrew has worked for AWS and for AWS technology partners Ooyala and Adobe. His favorite Amazon leadership principle is "Customer Obsession" as everything AWS starts with the customer. Passions around work are cycling and surfing, and having a laugh about the lessons learnt trying to launch two daughters and a few start-ups.