Amazon Compute Services
Compute Fundamentals for AWS offers an updated introduction to AWS's cornerstone compute services and provides a foundation as you build your AWS compute skills. It includes coverage of:
- Amazon Elastic Compute Cloud (EC2)
- Elastic Load Balancers (ELBs)
- Auto Scaling
- Amazon EC2 Container Registry and Services (ECR and ECS)
- AWS Elastic Beanstalk
- AWS Lambda
AWS Lambda is a compute service: you upload your code and the service runs it on your behalf using AWS's highly available infrastructure, without you having to provision or manage the underlying servers. It is a powerful service that can reduce the cost and complexity of running business processes on AWS. This introduction explains how it works, along with the pricing model. Currently, the service is available in the following regions.

Before you can add your function, you need to create two roles in IAM. The first is an invocation role, granting permission for events to invoke the function or for Lambda to pull from streams, depending on the model used. Lambda supports two models: push and pull. In the push model, the function is invoked by the service triggering the event; in the pull model, Lambda itself pulls events from various streams. The second role is the execution role, which grants access to the resources the Lambda function needs while it is running. AWS Lambda also supports resource policies, which are the recommended way to configure permissions for the push model: each Lambda function has a resource policy associated with it, and you add permissions to that policy allowing the event source, for example Amazon S3 or DynamoDB, to invoke the function. The permission model is an advanced topic, and the best approach depends on the function you are creating.

Once both roles have been created, you can add the function. Lambda functions can be added via the online editor or uploaded as a compressed file. The compressed file is required for functions that have dependencies not included with the Amazon-based image used by Lambda. Currently, functions can be written in Node.js, Java, or Python 2.7, and AWS has stated that more languages will be added over time.
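To make the push model concrete, here is a minimal sketch of a Python handler for an S3 event. The bucket and key names are hypothetical, and the event dictionary is fabricated locally so the handler can be run without any AWS setup; the real event delivered by S3 has the same `Records` structure with more fields.

```python
# Minimal Lambda handler sketch for an S3 push event.
# Bucket and key names below are illustrative, not from a real account.

def s3_handler(event, context):
    # An S3 event carries one or more records, each naming the
    # bucket and object key that triggered the invocation.
    results = []
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        results.append("processed s3://%s/%s" % (bucket, key))
    return results

# Local invocation with a fabricated event (no AWS required):
sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "example-bucket"},
                "object": {"key": "uploads/photo.jpg"}}}
    ]
}
print(s3_handler(sample_event, None))
```

When deployed, you would upload this file (or a zip containing it and its dependencies) and point the function configuration at the exported handler name.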
After you add the function, you will need to specify the function name as exported by the code, a file name if using a compressed file, the execution role, the amount of memory to allocate, and the timeout period. After saving the function, you can associate it with an event or stream. This is typically done via the service invoking the function. For example, S3 events on a bucket are set up from the bucket properties in the S3 Explorer, and functions reading from DynamoDB update streams are added in the table properties of the DynamoDB Console. Assuming all is well with the configuration, your Lambda function will execute as events occur. All of this setup can be done via the console, the command line tools, or the AWS SDKs.

When a Lambda function is invoked, execution time is scheduled on an EC2 instance running Amazon's Linux distribution. The best part is that you do not have to manage or configure the EC2 instance: you are simply renting time and resources, and the function is scheduled to execute. It may not run immediately, especially if other resources are in use. That latency depends on the memory allocated for the function; generally, the more memory allocated, the less latency you will encounter. When the function is loaded, it runs in its own context, separate from all other functions. The function can interact with your other services only if you have permitted access via the proper execution role, or if you are using the AWS API with an access key and secret key. Security was an important factor in building Lambda, and you can rest assured that your functions are safe from others.

Whether using the push or pull model, the function receives an argument containing information specific to the type of event, including the data that triggered it, such as an S3 bucket and file name. It also receives a context object used to communicate with Lambda.
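The context object can also be queried for the time remaining before the configured timeout. The sketch below illustrates that idea; since the real context object is only supplied by the Lambda runtime, a stand-in class with an assumed fixed deadline is used here purely so the handler can run locally.

```python
import time

class FakeContext(object):
    """Stand-in for the Lambda context object, for local runs only.
    The real object exposes get_remaining_time_in_millis() among other fields."""
    def __init__(self, timeout_ms):
        self._deadline = time.time() * 1000 + timeout_ms

    def get_remaining_time_in_millis(self):
        return max(0, int(self._deadline - time.time() * 1000))

def handler(event, context):
    # A defensive handler can check the remaining time before starting
    # an expensive step, rather than silently hitting the timeout.
    if context.get_remaining_time_in_millis() < 1000:
        return "skipped: not enough time left"
    return "processed event from %s" % event.get("source", "unknown")

print(handler({"source": "aws.s3"}, FakeContext(timeout_ms=3000)))
```

The `"source"` key here is illustrative; the actual shape of the event argument depends on the service that invoked the function.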
Your code then executes just as you defined it. Upon completion, you notify Lambda of the result via the context object: you can return a success indicator, or an error if something went wrong. If you do not make the call through the context object, your function will time out at the interval you specified, resulting in wasted time that ultimately affects cost.

The function logs the start time, end time, and a summary for every execution. Your function can add custom information to the log with a call to console.log. Logs are written to the CloudWatch Logs stream dedicated to the function, which is useful when you are trying to debug a Lambda function. You can also see metrics and create alarms based on those metrics. The metrics currently gathered are:
- Invocations, which measures the number of times a function is invoked in response to an event or API call.
- Errors, which measures the number of invocations that failed due to errors in the function (error code 4xx). This currently includes handled exceptions, unhandled exceptions that cause the code to exit, out-of-memory exceptions, timeouts, and permission errors. Note that this does not include invocations that fail because the invocation rate exceeded the default concurrent limits (error code 429), as these are captured by the throttles metric, nor failures due to internal service errors (error code 500).
- Duration, which measures the elapsed time from when the function code starts executing as a result of an invocation to when it stops executing.
- Throttles, which measures the number of Lambda function invocation attempts that were throttled because the invocation rate exceeded your concurrent limits (error code 429).
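In the Python runtime, the success-or-error outcome described above is expressed by returning a value or raising an exception: a normal return is reported as a successful invocation, while an uncaught exception is recorded as a failed one and feeds the errors metric. A hedged sketch (the event shape and `payload` key are illustrative):

```python
def job_handler(event, context):
    # Raising an exception is reported back to Lambda as a failed
    # invocation and counted in the errors metric (error code 4xx).
    if "payload" not in event:
        raise ValueError("missing payload")
    # print() output is appended to the function's CloudWatch Logs
    # stream, alongside the start/end/summary lines Lambda writes itself.
    print("START processing")
    count = len(event["payload"])
    print("END processed %d items" % count)
    # Returning normally signals a successful invocation.
    return {"status": "ok", "count": count}

print(job_handler({"payload": [1, 2, 3]}, None))
```

Raising early like this is cheaper than letting a bad invocation run to the timeout, since duration is billed either way.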
A combination of these metrics along with the log stream data can help you determine where bottlenecks and other issues may be occurring. This is especially useful together with the built-in ability to test the invocation of a function from within the AWS Lambda console before making the function live in production.

The following tables show the limits for AWS Lambda, which you need to take into consideration when building your function. The first table lists the runtime resource limits for a Lambda function per invocation. The second table lists the service limits for deploying a Lambda function, and the third shows the limits on a per-region basis.

AWS Lambda's pricing model involves two distinct, independent metrics: requests and total execution time in seconds. The first 1 million requests and up to 400,000 GB-seconds of compute time per month are included in the free tier. The number of free seconds depends on the amount of memory you have chosen to allocate for the function. If you have chosen the minimum amount of memory, 128 MB, you have 3.2 million seconds at your disposal; at the maximum of 1.5 GB, you have only 266,667 seconds. Above the free tier, each additional million requests costs $0.20, while the price per 100 milliseconds of execution varies depending on the memory allocated. Note that if your Lambda function uses other AWS services or transfers data, you will be billed for those services at their prevailing rates.

Lambda is perfect for custom microservices that do not require a dedicated EC2 instance, and its uses are wide-reaching: from simple data processing to real-time processing, and from mobile applications to IoT. As the service continues to evolve, you can expect further integration with events from more AWS services, support for new programming languages, and additional configuration options.
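The free-tier figures quoted above can be checked with a short calculation: the free compute allowance is 400,000 GB-seconds, so the number of free seconds is that allowance divided by the allocated memory in GB, and extra requests beyond the first million are billed at $0.20 per million. Per-100 ms compute prices vary by memory size and are omitted here.

```python
# Free-tier compute time: 400,000 GB-seconds per month,
# divided by the function's memory allocation in GB.
FREE_GB_SECONDS = 400000

def free_seconds(memory_mb):
    """Seconds of execution covered by the free tier at a given memory size."""
    return int(round(FREE_GB_SECONDS / (memory_mb / 1024.0)))

def extra_request_cost(requests):
    """Cost in dollars of requests beyond the first million, at $0.20/million."""
    billable = max(0, requests - 1000000)
    return 0.20 * billable / 1000000.0

print(free_seconds(128))            # minimum memory allocation
print(free_seconds(1536))           # maximum allocation (1.5 GB)
print(extra_request_cost(3000000))  # two million billable requests
```

This reproduces the figures in the text: 3.2 million free seconds at 128 MB, and roughly 266,667 at 1.5 GB.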
David's acknowledged hands-on experience in the IT industry has seen him speak at international conferences, operate in presales environments, and conduct design and delivery engagements.
David also has extensive experience in delivery operations, and has worked in the financial, mining, state government, federal government, and public sectors across Asia Pacific and the US.