AWS Compute Fundamentals
EC2 Auto Scaling
Elastic Load Balancing
ELB & Auto Scaling Summary
EC2 Instances for SAP Workloads on AWS
The course is part of this learning path
In this section of the AWS Certified: SAP on AWS Specialty learning path, we introduce you to the various Compute services currently available in AWS that are relevant to the PAS-C01 exam.
- Identify the various Compute services available in AWS
- Define the different families within AWS Compute services
- Identify the purpose of load balancers and Elastic Load Balancing
- Understand how Auto Scaling can enable your Compute resources to scale elastically based on varying levels of demand
- Identify supported EC2 instance types for SAP on AWS
The AWS Certified: SAP on AWS Specialty certification has been designed for anyone who has experience managing and operating SAP workloads. Ideally you’ll also have some exposure to the design and implementation of SAP workloads on AWS, including migrating these workloads from on-premises environments. Many exam questions require a solutions-architect level of knowledge across a range of AWS services, including AWS Compute services. All of the AWS Cloud concepts introduced in this course are explained and reinforced from the ground up.
Hello, and welcome to this final lecture, which highlights the key points from the previous lectures within this course. I began by giving an introduction to the service itself, explaining that AWS Lambda is a serverless compute service designed to run application code without having to manage and provision your own EC2 instances. You only pay for the compute power consumed while your Lambda functions are running, billed to the nearest millisecond, and in addition to compute power, you are also charged based on the number of times your code runs. To use Lambda, there are four steps required. You must either upload your code to Lambda, or write it within the code editor that Lambda provides. You then configure your Lambda function to execute upon specific triggers from supported event sources, and once a trigger is initiated, Lambda runs your code as defined by your function, using only the required compute power.
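That billing model (per-invocation charge plus per-millisecond compute charge scaled by memory) can be sketched as a small calculation. The rates below are illustrative assumptions for the sake of the example, not current AWS pricing:

```python
# Sketch of how Lambda billing combines per-request and per-duration charges.
# Both rates are assumed example values, not actual AWS pricing.
PRICE_PER_REQUEST = 0.20 / 1_000_000   # assumed USD per invocation
PRICE_PER_GB_SECOND = 0.0000166667     # assumed USD per GB-second of compute

def estimate_monthly_cost(invocations: int, avg_duration_ms: float, memory_mb: int) -> float:
    """Estimate one function's monthly bill from invocation count,
    average duration, and configured memory."""
    # Compute charge scales with both duration and allocated memory.
    gb_seconds = invocations * (avg_duration_ms / 1000) * (memory_mb / 1024)
    return invocations * PRICE_PER_REQUEST + gb_seconds * PRICE_PER_GB_SECOND

# e.g. 3 million invocations, averaging 120 ms, at 512 MB of memory:
cost = estimate_monthly_cost(3_000_000, 120, 512)
```

The point of the sketch is the shape of the formula: halving either the duration or the memory allocation halves the compute portion of the bill.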
And lastly, AWS records the compute time in milliseconds, along with the number of times your Lambda functions run, to calculate the cost of the service. Lambda is found within the AWS Management Console under the Compute category. A Lambda function is composed of your own code that you want Lambda to invoke. Event sources are AWS services that can be used to trigger your Lambda functions, downstream resources are resources that are required during the execution of your Lambda function, and log streams help you identify and troubleshoot issues with your Lambda function. Following this lecture, I then focused on Lambda functions themselves, explaining how to create them and what each of the configurable components is. This lecture covered a lot of elements, and in it we learned that Lambda supports the following languages: Node.js, Java, C#, Python, Go, PowerShell, and Ruby. You can import code into Lambda by creating a deployment package, and Lambda needs global read permissions on your deployment package to perform the import. You can upload your code using the Management Console, the AWS CLI, or the SDKs, and if you create your code from within Lambda itself, then Lambda creates the deployment package for you. There are three different options when creating a function: you can author it from scratch, use a blueprint, or use the Serverless Application Repository.
You must provide the name of your function, the runtime, and the IAM role to be used when creating your function. The designer window in the function allows you to configure triggers, and a trigger is an operation from an event source that causes the function to invoke. Configured triggers are then added to the designer window. To view policy information for the role execution policy and the function policy, you can select the key icon in the designer window. The role execution policy determines what resources the function's role has access to while the function is running, while the function policy defines which AWS resources are allowed to invoke your function. The function code window allows you to define, write, and import your code. The handler is the entry point within your code: it is what Lambda invokes when the service executes the function on your behalf.
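The handler entry point described above can be sketched as a minimal Python Lambda function. The function name `lambda_handler` and the event shape are illustrative assumptions; Lambda passes the triggering event and a runtime context object to whichever handler you configure:

```python
# A minimal Lambda handler sketch. Lambda calls this function on each
# invocation, passing the trigger's payload as `event` and runtime
# metadata (request ID, remaining time, etc.) as `context`.
import json

def lambda_handler(event, context):
    # The event dict's shape depends on the event source; "name" here
    # is just an illustrative field for this sketch.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

In the console, the handler setting would then be `<filename>.lambda_handler`, tying the configuration to this entry point.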
Environment variables are key-value pairs that allow you to incorporate variables into your function without hard-coding them directly into your code. By default, AWS Lambda encrypts your environment variables using KMS after the function has been deployed. Basic settings allow you to determine the compute resources used to execute your code, and you can only alter the amount of memory; AWS Lambda then calculates the CPU power itself, based on this selection. The function timeout determines how long the function should run before it is terminated. By default, AWS Lambda can only access resources that are accessible over the internet; accessing resources within your VPC requires additional configuration.
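Inside the function, those environment variables surface through the ordinary process environment. A short sketch, where the variable name `TABLE_NAME` is an illustrative assumption rather than anything from the course:

```python
# Reading a Lambda environment variable from within the handler.
# "TABLE_NAME" is an illustrative variable name for this sketch.
import os

def lambda_handler(event, context):
    # Values set in the function's configuration appear in os.environ,
    # so configuration stays out of the code itself.
    table = os.environ.get("TABLE_NAME", "default-table")
    return {"table": table}
```

Changing the variable in the function's configuration changes the behavior without redeploying the code.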
The execution role will need permissions to configure ENIs in your VPC. A dead-letter queue is used to receive payloads that were not processed due to a failed execution, and failed asynchronous invocations automatically retry the event a further two times. Synchronous invocations do not automatically retry failed attempts. Enabling active tracing integrates AWS X-Ray to trace the event sources that invoked your Lambda function, in addition to tracing other resources that were called upon in response to your Lambda function running. Concurrency measures how many instances of your function can be running at the same time, with a default unreserved concurrency of 1,000. AWS CloudTrail integrates with AWS Lambda, aiding with auditing and compliance.
Throttling sets the reserved concurrency limit of your function to zero, and will stop all future invocations of the function until you change the concurrency setting. Lambda qualifiers allow you to switch between versions or aliases of your function; when you create a new version of your function, you're no longer able to make any further configuration changes to it, making it immutable. An alias allows you to create a pointer to a specific version of your function. By exporting your function, you can redeploy it at a later stage, perhaps within a different AWS region, and by creating a test event, you can easily perform different tests against your function. Following this lengthy lecture on Lambda functions, I then expanded in greater detail on the use of event sources and mappings. Within this lecture, I explained that an event source is an AWS service that produces the events your Lambda function responds to by invoking it. Event sources can be either poll- or push-based, and at the time of writing this course, the poll-based event sources are Amazon Kinesis, Amazon SQS, and Amazon DynamoDB.
Push-based event sources cover all the remaining supported event sources. An event source mapping is the configuration that links your event source to your Lambda function. With push-based event sources, the mapping is maintained within the event source, while poll-based event source mappings are held within your Lambda function. When manually invoking a Lambda function, you can use the invoke option to call it either synchronously or asynchronously. Synchronous invocation enables you to assess the result of the function before moving on to the next operation, whereas asynchronous invocation can be used when there is no need to maintain an order of function execution. When event sources are used to invoke your function, the invocation type depends on the service: poll-based event sources always use a synchronous invocation type, but with push-based event sources, the invocation type varies by service. Finally, I explained how you can monitor and troubleshoot issues with your Lambda functions. During this lecture, we learned that statistics related to your Lambda functions are, by default, monitored by Amazon CloudWatch.
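The synchronous versus asynchronous invocation semantics described above can be illustrated in plain Python, without touching AWS at all. This is purely a conceptual sketch: `invoke_sync` stands in for Lambda's RequestResponse invocation type, where the caller waits for and can inspect the result, and `invoke_async` stands in for the Event type, where the caller fires and forgets:

```python
# Conceptual sketch of synchronous vs asynchronous invocation semantics.
# Pure Python stand-ins for Lambda's RequestResponse and Event types.
import queue
import threading

def invoke_sync(fn, event):
    # RequestResponse: the caller blocks and receives the result,
    # so it can assess the outcome before the next operation.
    return fn(event)

def invoke_async(fn, event, results: queue.Queue) -> threading.Thread:
    # Event: the caller hands the event off and continues immediately;
    # the result is delivered elsewhere and no ordering is guaranteed.
    t = threading.Thread(target=lambda: results.put(fn(event)))
    t.start()
    return t
```

The trade-off mirrors the lecture's point: choose the synchronous style when the next step depends on the result, and the asynchronous style when ordering does not matter.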
CloudWatch uses the following metrics: invocations, errors, dead letter errors, duration, throttles, iterator age, concurrent executions, and unreserved concurrent executions. In addition to these metrics, CloudWatch also gathers log data sent by Lambda, and each function relates to a different log group. The log group name is /aws/lambda/ followed by the function name, and it's possible to add custom logging statements to your function code, which are then sent to CloudWatch Logs. Common reasons why your function might not run relate to permissions: you should check your IAM role execution policy and function policy to ensure the correct access has been granted to run your function.
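Those custom logging statements can be added with Python's standard logging module; anything the handler logs ends up in the function's CloudWatch log group. A minimal sketch:

```python
# Custom logging from a Lambda handler. Each log line is shipped to the
# function's CloudWatch log group (/aws/lambda/<function-name>).
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    # Structured log statements like this make troubleshooting far easier
    # than digging through default runtime output alone.
    logger.info("Received event with %d top-level keys", len(event))
    return {"ok": True}
```

Each statement appears in a log stream within that group, alongside Lambda's own START, END, and REPORT lines.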
That now brings me to the end of this lecture, and to the end of this course. You should now have a greater understanding of AWS Lambda and how the service is configured and can be used within your environment to help create serverless applications using minimal compute resources. As mentioned at the start of this course, I recommend that you take the following labs to put this theoretical knowledge into practice: Introduction to AWS Lambda; Process Amazon S3 Events with AWS Lambda; Configure Amazon DynamoDB Triggers with AWS Lambda; and Create Scheduled Tasks with AWS Lambda. If you have any feedback on this course, positive or negative, please contact us by sending an email to firstname.lastname@example.org. Your feedback is greatly appreciated.
Stuart has been working within the IT industry for two decades covering a huge range of topic areas and technologies, from data center and network infrastructure design, to cloud architecture and implementation.
To date, Stuart has created 150+ courses relating to cloud computing, reaching over 180,000 students, mostly within the AWS category and with a heavy focus on security and compliance.
Stuart is a member of the AWS Community Builders Program for his contributions towards AWS.
He is AWS certified and accredited in addition to being a published author covering topics across the AWS landscape.
In January 2016 Stuart was awarded ‘Expert of the Year Award 2015’ from Experts Exchange for his knowledge share within cloud services to the community.
Stuart enjoys writing about cloud technologies and you will find many of his articles within our blog pages.