AWS Batch
Description

Understanding the fundamentals of AWS is critical if you want to deploy services and resources within the AWS Cloud. The Compute category of services provides the key resources that allow you to carry out computational work via a series of instructions used by applications and systems. These resources cover a range of different services and features, these being:

  • EC2 - Amazon Elastic Compute Cloud 
  • ECS - Amazon Elastic Container Service
  • ECR - Amazon Elastic Container Registry
  • EKS - Amazon Elastic Container Service for Kubernetes
  • AWS Elastic Beanstalk
  • AWS Lambda
  • AWS Batch
  • Amazon Lightsail

This course will cover the fundamental elements of all of these Compute services and features, allowing you to select the most appropriate service for your project and implementation. Each has its own advantages, providing something of value that's different from the others, all of which will be discussed.

Topics covered within this course consist of:

  • What is Compute: This lecture explains what 'Compute' is and what is meant by Compute resources and services
  • Amazon Elastic Compute Cloud (EC2): This is one of the most common Compute services; as a result, this is likely to be the longest lecture, as you will cover a lot of elements of EC2 to ensure you are aware of how it's put together and how it works
  • Amazon ECS (EC2 Container Service): Within this lecture you will gain a high-level overview of what the EC2 Container Service is and how it relates to Docker
  • Amazon Elastic Container Registry (ECR): In this lecture you will see how this service links closely with ECS to provide a secure location to store and manage your Docker images
  • Amazon Elastic Container Service for Kubernetes (EKS): Here you will look at how EKS provides a managed service, allowing you to run Kubernetes across your AWS infrastructure without having to manage the Kubernetes control plane yourself
  • AWS Elastic Beanstalk: This lecture provides an overview of the service, showing you how it's used to automatically deploy applications using EC2 and a number of other AWS services
  • AWS Lambda: This lecture covers the Lambda 'serverless' service, where you will explore what serverless means and how this service is used to run your own code in response to events
  • AWS Batch: Here you will get a high-level overview of this service, which relates to Batch computing
  • Amazon Lightsail: Finally, we will look at Amazon Lightsail, a Virtual Private Server solution used for small-scale projects and use cases

If you want to learn the differences between these Compute services, then this course is for you!

With demonstrations provided, along with links to a number of our labs that give you hands-on experience with many of these services, you will gain a solid understanding of the Compute services used within AWS.

If you have thoughts or suggestions for this course, please contact Cloud Academy at support@cloudacademy.com.

Transcript

Hello, and welcome to this lecture where I'll provide a high-level overview of AWS Batch. As the name suggests, this service is used to manage and run Batch computing workloads within AWS. Before we go any further, I just want to quickly clarify what Batch computing is.

Batch computing is primarily used in specialist use cases that require a vast amount of compute power across a cluster of compute resources to complete batch processing by executing a series of jobs or tasks. Outside of a cloud environment, it can be very difficult to maintain and manage a batch computing system. It requires specific software and the ability to consume the required resources, which can be very costly. However, with AWS Batch, many of these constraints, administration activities, and maintenance tasks are removed. You can seamlessly create a cluster of compute resources that is highly scalable, taking advantage of the elasticity of AWS, coping with any level of batch processing while optimizing the distribution of the workloads. All provisioning, monitoring, maintenance, and management of the clusters themselves is taken care of by AWS, meaning there is no software for you to install or maintain yourself.

There are effectively five components that make up the AWS Batch service and will help you to start using it, these being: Jobs. A job is classed as a unit of work that is to be run by AWS Batch. For example, this could be a Linux executable file, an application within an ECS cluster, or a shell script. The jobs themselves run on EC2 instances as containerized applications. Each job can, at any one time, be in one of a number of different states, for example submitted, pending, running, or failed, among others. Job definitions. These define specific parameters for the jobs themselves. They dictate how the job will run and with what configuration. Some examples of these may be how many vCPUs to use for the container, which data volumes should be used, which IAM role should be used to allow AWS Batch to communicate with other AWS services, and which mount points to configure.
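
To make those job definition parameters concrete, here is a minimal sketch using boto3, the AWS SDK for Python. The definition name, container image, IAM role ARN, and volume paths below are hypothetical placeholders, and it uses the classic vcpus/memory container fields described in the lecture:

```python
import boto3

batch = boto3.client("batch", region_name="us-east-1")

# Register a container job definition. The vCPUs, memory, job role,
# data volume, and mount point mirror the parameters described above.
response = batch.register_job_definition(
    jobDefinitionName="example-transcode-job",  # hypothetical name
    type="container",
    containerProperties={
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/transcode:latest",
        "vcpus": 4,            # vCPUs allocated to the container
        "memory": 8192,        # memory in MiB
        "command": ["python", "process.py"],
        # IAM role that lets the job communicate with other AWS services
        "jobRoleArn": "arn:aws:iam::123456789012:role/ExampleBatchJobRole",
        "volumes": [{"name": "scratch", "host": {"sourcePath": "/tmp/scratch"}}],
        "mountPoints": [
            {"sourceVolume": "scratch", "containerPath": "/scratch", "readOnly": False}
        ],
    },
)
print(response["jobDefinitionArn"])
```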

Job queues. Jobs that are scheduled are placed into a job queue until they run. It's also possible to have multiple queues with different priorities if needed. One queue could be used for on-demand EC2 instances, and another could be used for spot instances. Both on-demand and spot instances are supported by AWS Batch, allowing you to optimize cost, and AWS Batch can even bid on your behalf for those spot instances.
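
As a rough sketch of the two-queue setup just described, the boto3 calls below create a higher-priority queue backed by an on-demand compute environment and a lower-priority queue backed by a spot environment. The queue and compute environment names are hypothetical placeholders:

```python
import boto3

batch = boto3.client("batch", region_name="us-east-1")

# A higher priority value means the scheduler serves this queue first.
batch.create_job_queue(
    jobQueueName="ondemand-queue",  # hypothetical name
    state="ENABLED",
    priority=10,
    computeEnvironmentOrder=[{"order": 1, "computeEnvironment": "ondemand-env"}],
)

# A lower-priority queue mapped to a spot-based compute environment.
batch.create_job_queue(
    jobQueueName="spot-queue",      # hypothetical name
    state="ENABLED",
    priority=1,
    computeEnvironmentOrder=[{"order": 1, "computeEnvironment": "spot-env"}],
)
```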

Job scheduling. The Job Scheduler takes control of when a job should be run and from which Compute Environment. Typically, it will operate on a first-in-first-out basis, and it will look at the different job queues that you have configured, ensuring that higher-priority queues are run first, assuming all dependencies of that job have been met.
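
To illustrate how jobs enter a queue and how the scheduler honors dependencies, here is a hedged boto3 sketch that submits two jobs, the second of which waits for the first to succeed. The job names are hypothetical, and the queue and job definition reuse the placeholder names from the earlier examples:

```python
import boto3

batch = boto3.client("batch", region_name="us-east-1")

# Submit a first job to the queue; it moves from SUBMITTED through
# PENDING and RUNNABLE before the scheduler runs it.
first = batch.submit_job(
    jobName="prepare-data",                # hypothetical name
    jobQueue="ondemand-queue",
    jobDefinition="example-transcode-job",
)

# The second job stays PENDING until its dependency has completed
# successfully, matching the scheduler behavior described above.
batch.submit_job(
    jobName="transcode-media",             # hypothetical name
    jobQueue="ondemand-queue",
    jobDefinition="example-transcode-job",
    dependsOn=[{"jobId": first["jobId"]}],
)
```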

Compute Environments. These are the environments containing the compute resources that carry out the job. An environment can be defined as managed or unmanaged. A managed environment means that the service itself will handle the provisioning, scaling, and termination of your Compute instances based on configuration parameters that you enter, such as the instance type and purchase method, for example on-demand or spot. This environment is then created as an Amazon ECS cluster. Unmanaged environments are provisioned, managed, and maintained by you, which allows greater customization. However, this requires greater administration and maintenance, and also requires you to create the Amazon ECS cluster that a managed environment would have created on your behalf.
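
A minimal boto3 sketch of creating a managed compute environment follows; the environment name, subnet and security group IDs, and role ARNs are hypothetical placeholders you would replace with your own:

```python
import boto3

batch = boto3.client("batch", region_name="us-east-1")

# A MANAGED environment: AWS Batch provisions, scales, and terminates
# the underlying EC2 instances (as an Amazon ECS cluster) for you.
batch.create_compute_environment(
    computeEnvironmentName="ondemand-env",  # hypothetical name
    type="MANAGED",
    state="ENABLED",
    computeResources={
        "type": "EC2",                 # purchase method; "SPOT" for spot instances
        "minvCpus": 0,
        "maxvCpus": 64,
        "desiredvCpus": 0,
        "instanceTypes": ["optimal"],  # let Batch choose suitable instance types
        "subnets": ["subnet-0example"],
        "securityGroupIds": ["sg-0example"],
        "instanceRole": "arn:aws:iam::123456789012:instance-profile/ecsInstanceRole",
    },
    serviceRole="arn:aws:iam::123456789012:role/AWSBatchServiceRole",
)
```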

If you have a requirement to run multiple jobs in parallel using Batch computing, for example to analyze financial risk models, perform media transcoding, or run engineering simulations, then AWS Batch would be a perfect solution.

About the Author

Stuart has been working within the IT industry for two decades, covering a huge range of topic areas and technologies, from data center and network infrastructure design to cloud architecture and implementation.

To date, Stuart has created 150+ courses relating to cloud computing, reaching over 180,000 students, mostly within the AWS category and with a heavy focus on security and compliance.

Stuart is a member of the AWS Community Builders Program for his contributions towards AWS.

He is AWS certified and accredited in addition to being a published author covering topics across the AWS landscape.

In January 2016, Stuart was awarded the 'Expert of the Year Award 2015' by Experts Exchange for his knowledge sharing on cloud services with the community.

Stuart enjoys writing about cloud technologies and you will find many of his articles within our blog pages.