Understanding the fundamentals of AWS is critical if you want to deploy services and resources within the AWS Cloud. The Compute category of services provides the key resources that allow you to carry out computational work via a series of instructions used by applications and systems. These resources cover a range of different services and features:
- EC2 - Amazon Elastic Compute Cloud
- ECS - Amazon Elastic Container Service
- ECR - Amazon Elastic Container Registry
- EKS - Amazon Elastic Container Service for Kubernetes
- AWS Elastic Beanstalk
- AWS Lambda
- AWS Batch
- Amazon Lightsail
This course will provide the fundamental elements of all of these Compute services and features, allowing you to select the most appropriate service for your projects and implementations. Each has its own advantages, offering something of value that differs from the others, all of which will be discussed.
Topics covered within this course consist of:
- What is Compute: This lecture explains what 'Compute' is and what is meant by Compute resources and services
- Amazon Elastic Compute Cloud (EC2): This is one of the most common Compute services. As a result, this will likely be the longest lecture, as you will cover a lot of elements around EC2 to ensure you are aware of how it’s put together and how it works
- Amazon ECS (EC2 Container Service): Within this lecture you will gain a high-level overview of what the EC2 Container Service is and how it relates to Docker
- Amazon Elastic Container Registry: In this lecture you will consider how this service links closely with ECS to provide a secure location to store and manage your Docker images
- Amazon Elastic Container Service for Kubernetes (EKS): Here you will look at how EKS provides a managed service, allowing you to run Kubernetes across your AWS infrastructure without having to take care of running the Kubernetes control plane
- AWS Elastic Beanstalk: This lecture will provide an overview of the service, showing you how it’s used to automatically deploy applications using EC2 and a number of other AWS services
- AWS Lambda: This lecture covers the Lambda ‘serverless’ service, where you will explore what serverless means and how this service is used to run your own code in response to events
- AWS Batch: Here you will consider a high-level overview of this service that relates to Batch Computing
- Amazon Lightsail: Finally we will look at Amazon Lightsail, a Virtual Private Server solution used for small-scale projects and use cases
If you want to learn the differences between the different Compute services, then this course is for you!
With demonstrations provided, along with links to a number of our labs that allow you to gain hands-on experience in using many of these services, you will gain a solid understanding of the Compute services used within AWS.
If you have thoughts or suggestions for this course, please contact Cloud Academy at support@cloudacademy.com.
Resources referenced within this lecture:
Understanding AWS Lambda to Run and Scale your Code
Lab: Introduction to AWS Lambda
Lab: Process Amazon S3 events with AWS Lambda
Lab: Automating EBS snapshots with AWS Lambda
Transcript
Hello and welcome to this lecture where we shall take an introductory look at AWS Lambda. AWS Lambda is a serverless compute service which has been designed to allow you to run your application code without having to manage and provision your own EC2 instances. This saves you having to maintain and administer an additional layer of technical responsibility within your solution. Instead, that responsibility is passed over to AWS to manage for you.
Essentially, serverless means that you do not need to worry about provisioning and managing your own compute resources to run your code; instead, this is managed and provisioned by AWS. AWS will start, scale, maintain, and stop the compute resources as required, which can be for as little as a few milliseconds. Although it's named serverless, it does of course require servers, or at least compute power, to carry out your code requests, but because the AWS user does not need to be concerned with what's managing this compute power, or where it's provisioned from, it's considered serverless from the user perspective.
If you don't have to spend time operating, managing, patching, and securing an EC2 instance, then you have more time to focus on the code of your application and its business logic, while at the same time, optimizing costs. With AWS Lambda, you only ever have to pay for the compute power when Lambda is in use via Lambda functions. And I shall explain more on these later.
AWS Lambda charges for compute power per 100 milliseconds of use, only when your code is running, in addition to the number of times your code runs. With sub-second metering, AWS Lambda offers a truly cost-optimized solution for your serverless environment. So how does it work? Well, there are essentially four steps to its operation.
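To make this pricing model concrete, here is a small illustrative calculation in Python. The rates, invocation counts, and memory size below are assumptions for illustration only; always check the current AWS Lambda pricing page for actual figures.

```python
# Illustrative AWS Lambda cost estimate. The rates below are
# assumptions for illustration only; check the AWS pricing page
# for current figures.
invocations = 3_000_000          # function runs per month (assumed)
duration_ms = 200                # average duration, billed in 100 ms units
memory_gb = 0.5                  # 512 MB allocated to the function

price_per_million_requests = 0.20     # USD, assumed rate
price_per_gb_second = 0.00001667      # USD, assumed rate

# Round each invocation up to the nearest 100 ms billing unit.
billed_ms = ((duration_ms + 99) // 100) * 100
gb_seconds = invocations * (billed_ms / 1000) * memory_gb

request_cost = (invocations / 1_000_000) * price_per_million_requests
compute_cost = gb_seconds * price_per_gb_second

print(f"Requests: ${request_cost:.2f}  Compute: ${compute_cost:.2f}")
# Requests: $0.60  Compute: $5.00 (approximately)
```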
Firstly, AWS Lambda needs to be aware of the code that you need to run, so you can either upload this code to AWS Lambda, or write it within the code editor that Lambda provides. Currently, AWS Lambda supports Node.js (JavaScript), Python, Java (Java 8 compatible), C# (.NET Core), Go, and also Ruby. It's worth mentioning that the code that you write or upload can also include other libraries. Once your code is within Lambda, you need to configure Lambda functions to execute your code upon specific triggers from supported event sources, such as S3. As an example, a Lambda function can be triggered when an S3 event occurs, such as an object being uploaded to an S3 bucket. Once the specific trigger is initiated during the normal operations of AWS, AWS Lambda will run your code, as per your Lambda function, using only the required compute power as defined. Later in this course I'll cover more on when and how this compute power is specified. AWS records the compute time in milliseconds and the quantity of Lambda functions run to ascertain the cost of the service.
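As a minimal sketch of what such a function might look like, the Python handler below reports the bucket and key from an incoming S3 event. The function logic is hypothetical; the event structure follows the standard S3 event notification format.

```python
# A minimal AWS Lambda handler (Python) triggered by an S3 event.
# The event structure follows the standard S3 event notification format.
def lambda_handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Your application logic would go here; this sketch just reports
        # which object triggered the invocation.
        print(f"Object uploaded: s3://{bucket}/{key}")
    return {"status": "processed", "records": len(event["Records"])}
```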
For an AWS Lambda application to operate, it requires a number of different elements. The following form the key constructs of a Lambda application. Lambda function. The Lambda function is composed of your own code that you want Lambda to invoke as per defined triggers. Event source. Event sources are AWS services that can be used to trigger your Lambda functions, or put another way, they produce the events that your Lambda function essentially responds to by invoking it. For a comprehensive list of these event sources, please see the following link on the screen. Trigger. The Trigger is essentially an operation from an event source that causes the function to invoke, so essentially triggering that function. For example, an Amazon S3 put request could be used as a trigger. Downstream Resources. These are the resources that are required during the execution of your Lambda function. For example, your function might call upon accessing a specific SNS topic, or a particular SQS queue. So they are not used as the source of the trigger, but instead they are the resources used to execute the code within the function upon invocation.
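To illustrate the distinction between a trigger and a downstream resource, the sketch below extends the earlier handler so that an S3 put event (the trigger) causes a message to be published to an SNS topic (the downstream resource). The topic ARN is a hypothetical placeholder, and the function's IAM role would need sns:Publish permission on it.

```python
import json
import boto3

sns = boto3.client("sns")

# Hypothetical topic ARN; the function's IAM role needs sns:Publish on it.
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:example-topic"

def lambda_handler(event, context):
    # The S3 put event is the trigger; the SNS topic is a downstream
    # resource accessed while the function executes.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        sns.publish(
            TopicArn=TOPIC_ARN,
            Subject="New S3 object",
            Message=json.dumps({"bucket": bucket, "key": key}),
        )
```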
Log streams. In an effort to help you identify and troubleshoot issues with your Lambda function, you can add logging statements to your code to help you identify if it is operating as expected; these statements are written to a log stream. A log stream is essentially a sequence of events that all come from the same function and are recorded in CloudWatch. In addition to log streams, Lambda also sends common metrics of your functions to CloudWatch for monitoring and alerting. At a high level, the configuration steps for creating a Lambda function via the AWS Management Console consist of selecting a blueprint: AWS Lambda provides a large number of common blueprint templates, which are preconfigured Lambda functions. To save time writing your own code, you can select one of these blueprints and then customize it as necessary. An example of one of these blueprints is S3-get-object, an Amazon S3 trigger that retrieves metadata for the object that has been updated. You then need to configure your triggers; as I just explained, the trigger is an operation from an event source that causes the function to invoke, and in my previous statement, I suggested an S3 put request. Finally, you need to finish configuring your function. This section requires you to either upload your code or edit it in-line, and it also requires you to define the required resources, the maximum execution timeout, the IAM Role, and the Handler Name.
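These same settings can also be supplied programmatically rather than through the console. The boto3 sketch below creates a function with an execution timeout, IAM role, and handler name; the role ARN, deployment package path, and function name are placeholders for illustration only.

```python
import boto3

lambda_client = boto3.client("lambda")

# Placeholder values for illustration; substitute your own role ARN,
# deployment package, and function name.
with open("function.zip", "rb") as f:
    package = f.read()

response = lambda_client.create_function(
    FunctionName="example-s3-processor",
    Runtime="python3.9",
    Role="arn:aws:iam::123456789012:role/example-lambda-role",
    Handler="app.lambda_handler",   # file name . function name
    Code={"ZipFile": package},
    Timeout=30,                     # maximum execution timeout in seconds
    MemorySize=128,
)
print(response["FunctionArn"])
```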
A key benefit of using AWS Lambda is that it is a highly scalable serverless service, coupled with fantastic cost optimization compared to EC2, as you are only charged for compute power while the code is running and for the number of functions called. More information on AWS Lambda and how to configure it in detail can be found in the following course. For your own hands-on experience with AWS Lambda, please take a look at our labs, which will guide you through how to create your first Lambda function.
That now brings me to the end of this lecture covering AWS Lambda. Coming up next, I shall be discussing AWS Batch.
Stuart has been working within the IT industry for two decades covering a huge range of topic areas and technologies, from data center and network infrastructure design, to cloud architecture and implementation.
To date, Stuart has created 150+ courses relating to Cloud, reaching over 180,000 students, mostly within the AWS category and with a heavy focus on security and compliance.
Stuart is a member of the AWS Community Builders Program for his contributions towards AWS.
He is AWS certified and accredited in addition to being a published author covering topics across the AWS landscape.
In January 2016 Stuart was awarded ‘Expert of the Year Award 2015’ from Experts Exchange for sharing his knowledge of cloud services with the community.
Stuart enjoys writing about cloud technologies and you will find many of his articles within our blog pages.