This course covers the core learning objectives to meet the requirements of the 'Designing for disaster recovery & high availability - Level 1' skill
Learning Objectives:
- Understand the different AWS architecture design principles, such as design for failure, decoupled components, and event-driven architectures
- Understand how high availability is achieved through the use of multiple AWS Availability Zones
- Evaluate when to consider the use of multiple AWS Regions
- Understand at a high level the benefits of AWS Edge Locations
Hello and welcome to this lecture where we shall take an introductory look at AWS Lambda. AWS Lambda is a serverless compute service which has been designed to allow you to run your application code without having to manage and provision your own EC2 instances. This saves you having to maintain and administer an additional layer of technical responsibility within your solution. Instead, that responsibility is passed over to AWS to manage for you.
Essentially, serverless means that you do not need to worry about provisioning and managing your own compute resources to run your code; instead, this is managed and provisioned by AWS. AWS will start, scale, maintain, and stop the compute resources as required, for durations as short as a few milliseconds. Although it's named serverless, it does of course require servers, or at least compute power, to carry out your code requests, but because the AWS user does not need to be concerned with what's managing this compute power, or where it's provisioned from, it's considered serverless from the user's perspective.
If you don't have to spend time operating, managing, patching, and securing an EC2 instance, then you have more time to focus on the code of your application and its business logic, while at the same time optimizing costs. With AWS Lambda, you only ever have to pay for the compute power when Lambda is in use via Lambda functions. And I shall explain more on these later.
AWS Lambda charges for compute power per 100 milliseconds of use, only when your code is running, in addition to charging for the number of times your code runs. With sub-second metering, AWS Lambda offers a truly cost-optimized solution for your serverless environment. So how does it work? Well, there are essentially four steps to its operation.
Firstly, AWS Lambda needs to be aware of the code that you need to run, so you can either upload this code to AWS Lambda, or write it within the code editor that Lambda provides. Currently, AWS Lambda supports Node.js (JavaScript), Python, Java (Java 8 compatible), C# (.NET Core), Go, and also Ruby. It's worth mentioning that the code that you write or upload can also include other libraries. Once your code is within Lambda, you need to configure Lambda functions to execute your code upon specific triggers from supported event sources, such as S3. As an example, a Lambda function can be triggered when an S3 event occurs, such as an object being uploaded to an S3 bucket. Once the specific trigger is initiated during the normal operations of AWS, AWS Lambda will run your code, as per your Lambda function, using only the required compute power as defined. Later in this course I'll cover more on when and how this compute power is specified. AWS records the compute time in milliseconds and the quantity of Lambda functions run to ascertain the cost of the service.
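To make this flow more concrete, below is a minimal sketch of what such a function could look like in Python, responding to the kind of S3 upload trigger just described. The event structure follows the standard S3 event notification format; the handler name and return value are purely illustrative and are not taken from the course.

```python
# Minimal sketch of a Python Lambda handler invoked by an S3 "object created" event.
# It simply logs the bucket and key of each uploaded object.
import json


def lambda_handler(event, context):
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        print(f"New object uploaded: s3://{bucket}/{key}")  # output goes to CloudWatch Logs
    return {"statusCode": 200, "body": json.dumps("S3 event processed")}
```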
For an AWS Lambda application to operate, it requires a number of different elements. The following form the key constructs of a Lambda application. Lambda function. The Lambda function consists of your own code that you want Lambda to invoke as per defined triggers. Event source. Event sources are AWS services that can be used to trigger your Lambda functions, or, put another way, they produce the events that your Lambda function essentially responds to by invoking it. For a comprehensive list of these event sources, please see the following link on the screen. Trigger. The trigger is essentially an operation from an event source that causes the function to invoke, so essentially triggering that function. For example, an Amazon S3 put request could be used as a trigger. Downstream resources. These are the resources that are required during the execution of your Lambda function. For example, your function might need access to a specific SNS topic, or a particular SQS queue. They are not used as the source of the trigger; instead, they are the resources that the code within the function uses upon invocation.
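Before moving on to log streams, here is a short, illustrative sketch of a downstream resource in action: a Python Lambda function that publishes a notification to an SNS topic after handling its event. The topic ARN is a placeholder, and the function's execution role would need permission to publish to that topic; none of these specifics come from the course itself.

```python
# Illustrative sketch: a Lambda function using SNS as a downstream resource.
# The topic ARN below is a placeholder; the function's IAM role must allow sns:Publish.
import boto3

TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:example-topic"  # hypothetical ARN
sns = boto3.client("sns")


def lambda_handler(event, context):
    records = event.get("Records", [])
    sns.publish(
        TopicArn=TOPIC_ARN,
        Subject="Lambda function invoked",
        Message=f"Processed an event containing {len(records)} record(s).",
    )
    return {"status": "notification sent"}
```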
Log streams. In an effort to help you identify and troubleshoot issues with your Lambda function, you can add logging statements to your code to help you determine whether it is operating as expected; this output is captured in a log stream. A log stream is essentially a sequence of events that all come from the same function and are recorded in CloudWatch. In addition to log streams, Lambda also sends common metrics of your functions to CloudWatch for monitoring and alerting. At a high level, the configuration steps for creating a Lambda function via the AWS Management Console could consist of selecting a blueprint; AWS Lambda provides a large number of common blueprint templates, which are preconfigured Lambda functions. To save time writing your own code, you can select one of these blueprints and then customize it as necessary. An example of one of these blueprints is S3 get object, an Amazon S3 trigger that retrieves metadata for the object that has been updated. You then need to configure your triggers, and as I just explained, the trigger is an operation from an event source that causes the function to invoke; in my previous statement, I suggested an S3 put request. And then you need to finish configuring your function. This section requires you to either upload your code or edit it in-line, and it also requires you to define the required resources, the maximum execution timeout, the IAM role, and the handler name.
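To tie the log stream and blueprint concepts together, the sketch below is written in the spirit of the S3 get object blueprint: it retrieves the metadata of the object named in the trigger event and writes logging statements that end up in the function's CloudWatch log stream. The details are illustrative; as one example of a handler name, if this code were saved as lambda_function.py, the handler would be lambda_function.lambda_handler.

```python
# Sketch in the spirit of the "s3-get-object" blueprint: fetch metadata for the
# object referenced in the S3 trigger event, logging progress as we go.
# All logger output is captured in a CloudWatch log stream for this function.
import logging

import boto3

logger = logging.getLogger()
logger.setLevel(logging.INFO)

s3 = boto3.client("s3")


def lambda_handler(event, context):
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]
    logger.info("Fetching metadata for s3://%s/%s", bucket, key)

    # head_object returns the object's metadata without downloading its contents.
    response = s3.head_object(Bucket=bucket, Key=key)
    logger.info("Content type: %s, size: %s bytes",
                response["ContentType"], response["ContentLength"])
    return response["ContentType"]
```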
A key benefit of using AWS Lambda is that it is a highly scalable serverless service, coupled with fantastic cost optimization compared to EC2, as you are only charged for compute power while the code is running and for the number of functions called. More information on AWS Lambda and how to configure it in detail can be found in our following course. For your own hands-on experience with AWS Lambda, please take a look at our labs, which will guide you through how to create your first Lambda function.
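As a rough illustration of the billing model described earlier, with a per-invocation charge plus compute time billed in 100 millisecond increments, here is a small back-of-the-envelope cost estimate. The rates, invocation count, and memory size are placeholder figures chosen only to show the arithmetic; they are not current AWS pricing, so always check the AWS Lambda pricing page for real numbers.

```python
# Back-of-the-envelope Lambda cost estimate. All rates and usage figures below
# are illustrative placeholders, not official AWS pricing.
invocations = 3_000_000            # function invocations per month (example)
avg_duration_ms = 120              # average execution time per invocation
memory_gb = 0.5                    # memory allocated to the function (512 MB)

price_per_million_requests = 0.20  # example rate, USD
price_per_gb_second = 0.0000167    # example rate, USD

# Round each invocation up to the next 100 ms increment, as described above.
billed_ms = -(-avg_duration_ms // 100) * 100
gb_seconds = invocations * (billed_ms / 1000) * memory_gb

request_cost = (invocations / 1_000_000) * price_per_million_requests
compute_cost = gb_seconds * price_per_gb_second

print(f"Requests: ${request_cost:.2f}, Compute: ${compute_cost:.2f}, "
      f"Total: ${request_cost + compute_cost:.2f}")
```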
Stuart has been working within the IT industry for two decades covering a huge range of topic areas and technologies, from data center and network infrastructure design, to cloud architecture and implementation.
To date, Stuart has created 150+ courses relating to cloud, reaching over 180,000 students, mostly within the AWS category and with a heavy focus on security and compliance.
Stuart is a member of the AWS Community Builders Program for his contributions towards AWS.
He is AWS certified and accredited in addition to being a published author covering topics across the AWS landscape.
In January 2016 Stuart was awarded ‘Expert of the Year Award 2015’ from Experts Exchange for his knowledge share within cloud services to the community.
Stuart enjoys writing about cloud technologies and you will find many of his articles within our blog pages.