AWS Compute Fundamentals
EC2 Auto Scaling
Elastic Load Balancing
ELB & Auto Scaling Summary
EC2 Instances for SAP Workloads on AWS
The course is part of this learning path
In this section of the AWS Certified: SAP on AWS Specialty learning path, we introduce you to the various Compute services currently available in AWS that are relevant to the PAS-C01 exam.
- Identify the various Compute services available in AWS
- Define the different families within AWS Compute services
- Identify the purpose of load balancers and Elastic Load Balancing
- Understand how Auto Scaling can enable your Compute resources to scale elastically based on varying levels of demand
- Identify supported EC2 instance types for SAP on AWS
The AWS Certified: SAP on AWS Specialty certification has been designed for anyone who has experience managing and operating SAP workloads. Ideally, you'll also have some exposure to the design and implementation of SAP workloads on AWS, including migrating these workloads from on-premises environments. Many exam questions will require a solutions-architect level of knowledge of many AWS services, including AWS Compute services. All of the AWS Cloud concepts introduced in this course will be explained and reinforced from the ground up.
Hello and welcome to this lecture covering the different methods to help you monitor and troubleshoot issues that you may experience with your Lambda functions. Thankfully, monitoring statistics related to your Lambda functions within CloudWatch is configured by default, and this includes monitoring your functions as they are running. CloudWatch has the following metrics that are automatically populated by Lambda:
- Invocations. This counts how many times a function has been invoked and matches the number of billed requests that you are charged for.
- Errors. This counts the number of failed invocations of the function, for example as the result of a permissions error.
- DeadLetterErrors. This counts the number of times Lambda failed to write to the dead-letter queue, for example due to misconfigured resources or permission issues.
- Duration. This measures how long the function runs, in milliseconds, from the point of invocation to when it terminates its execution. This is also used for billing; Lambda billed duration is measured in one-millisecond increments.
- Throttles. This counts how many times an invocation of the function was throttled because the concurrency limit had been reached.
- IteratorAge. This is only used for stream-based invocations, such as those from Amazon Kinesis. It measures, in milliseconds, the time between when the last record in a batch was written to the stream and when Lambda received that batch.
- ConcurrentExecutions. This is a combined metric across all of the Lambda functions running within your AWS account, as well as for individual functions with a custom concurrency limit. It calculates the total sum of concurrent executions at any point in time.
- UnreservedConcurrentExecutions. Again, this is a combined metric for all of the functions in your account; it calculates the sum of the concurrency of functions without a custom concurrency limit at any given time.

By utilizing these metrics published into CloudWatch, you can maintain an overview of your functions and see whether you are experiencing any unexpected errors. Using the features of CloudWatch you can easily create a dashboard relating to your functions. For more information on CloudWatch and how to configure the service to get the most from your metrics, please see our existing course here. In addition to these metrics, CloudWatch also gathers log data sent by Lambda, which is very useful to drill into if you notice erroneous patterns emerging from your CloudWatch metrics. For each function that you have running, CloudWatch will create a separate log group. As you can see here, CloudWatch has automatically created two different log groups, one for each function.
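As a rough sketch of how you might pull these metrics programmatically rather than through the console, the following Python snippet builds a `get_metric_statistics` request against the `AWS/Lambda` namespace and sums the datapoints. The function name `MyNewFunction` is just the example name used later in this lecture; `daily_totals()` assumes boto3 and valid AWS credentials.

```python
import datetime

def metric_query(function_name, metric_name, hours=24):
    """Build the parameter set for cloudwatch.get_metric_statistics()."""
    end = datetime.datetime.now(datetime.timezone.utc)
    start = end - datetime.timedelta(hours=hours)
    return {
        "Namespace": "AWS/Lambda",   # all of the Lambda metrics above live here
        "MetricName": metric_name,   # e.g. Invocations, Errors, Throttles
        "Dimensions": [{"Name": "FunctionName", "Value": function_name}],
        "StartTime": start,
        "EndTime": end,
        "Period": 3600,              # one datapoint per hour
        "Statistics": ["Sum"],
    }

def daily_totals(function_name):
    """Sum the last day's Invocations, Errors and Throttles (needs AWS credentials)."""
    import boto3  # imported here so metric_query() stays dependency-free
    cloudwatch = boto3.client("cloudwatch")
    totals = {}
    for metric in ("Invocations", "Errors", "Throttles"):
        resp = cloudwatch.get_metric_statistics(**metric_query(function_name, metric))
        totals[metric] = sum(dp["Sum"] for dp in resp["Datapoints"])
    return totals
```

Calling `daily_totals("MyNewFunction")` against an account with that function would return a dictionary of 24-hour sums; the same query shape works for any of the per-function metrics listed above.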
Each log group name is prefixed with /aws/lambda/, so in this example we have the logs from the functions MyNewFunction and Testing. You can also add custom logging statements into your function code, which you can use to check that your function is operating correctly. These logging statements push data to CloudWatch Logs automatically, in addition to the managed messages that are sent by default. The statements used depend on the language your function is written in; if you are using Node.js, for example, a statement such as console.log() can be used to generate custom log entries. For more information on how to configure and set up these logging statements for each programming model, please see the following AWS link. When working with Lambda, people often come across similar issues, and I just want to spend a couple of minutes talking about these points to help you easily identify what might be causing the issue. Some of the common reasons why your functions might not run relate to permissions. As we know, an IAM role is required for Lambda to assume and execute the code of your function.
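The log group naming convention described above can be sketched in a couple of lines; the helper below is hypothetical, and the listing function assumes boto3 and AWS credentials.

```python
def lambda_log_group(function_name):
    """Return the CloudWatch Logs log group that Lambda creates for a function."""
    # Lambda log groups follow the fixed naming scheme /aws/lambda/<function name>.
    return "/aws/lambda/" + function_name

def list_lambda_log_groups():
    """List every Lambda-created log group in the account (needs AWS credentials)."""
    import boto3  # imported here so lambda_log_group() stays dependency-free
    logs = boto3.client("logs")
    groups = []
    # Filtering on the /aws/lambda/ prefix returns only Lambda's log groups.
    for page in logs.get_paginator("describe_log_groups").paginate(
        logGroupNamePrefix="/aws/lambda/"
    ):
        groups.extend(g["logGroupName"] for g in page["logGroups"])
    return groups
```

In the example from the lecture, `lambda_log_group("MyNewFunction")` gives `/aws/lambda/MyNewFunction`, matching the group CloudWatch created automatically.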
In addition to this role, we also have to create a function policy, which specifies which AWS resources are allowed to invoke your function. An error in either one of these components can cause your function to fail. If, for example, the role's execution policy is missing the permissions that allow your function to operate within a VPC by creating elastic network interfaces (ENIs), then your function will fail. Alternatively, if your function policy does not include the correct permissions to allow your push-based event source to trigger the function, then again you will receive a failure. Going back to CloudWatch and how its logs are used by Lambda: if the CloudWatch Logs permissions were removed from the execution role, your logs would not be published to CloudWatch and would fail to be created. Permissions and access to resources, governed by both your execution role and your function policy, are the most likely cause of a function failure. Be sure to check your policies and understand which policy is used for which purpose.
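To make the two failure modes above concrete, a minimal execution role policy might look something like the following sketch. The first statement grants the CloudWatch Logs permissions needed for log publishing; the second grants the EC2 actions Lambda uses to create ENIs inside a VPC. This is an illustration, not a complete or least-privilege policy; tighten the `Resource` entries for production use.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCloudWatchLogging",
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    },
    {
      "Sid": "AllowVpcEniManagement",
      "Effect": "Allow",
      "Action": [
        "ec2:CreateNetworkInterface",
        "ec2:DescribeNetworkInterfaces",
        "ec2:DeleteNetworkInterface"
      ],
      "Resource": "*"
    }
  ]
}
```

Removing either statement reproduces the corresponding failure described above: without the logs actions, no log entries are written; without the EC2 actions, a VPC-attached function cannot start.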
Stuart has been working within the IT industry for two decades covering a huge range of topic areas and technologies, from data center and network infrastructure design, to cloud architecture and implementation.
To date, Stuart has created 150+ courses relating to Cloud reaching over 180,000 students, mostly within the AWS category and with a heavy focus on security and compliance.
Stuart is a member of the AWS Community Builders Program for his contributions towards AWS.
He is AWS certified and accredited in addition to being a published author covering topics across the AWS landscape.
In January 2016 Stuart was awarded ‘Expert of the Year Award 2015’ from Experts Exchange for his knowledge share within cloud services to the community.
Stuart enjoys writing about cloud technologies and you will find many of his articles within our blog pages.