
Which AWS Compute Service Do I Need?

With the ever-increasing and expanding service catalog developed by the engineers at AWS, it’s easy to get confused about which AWS Compute service you need and which you should be using for your deployments. “Which service offers the quickest deployment?”, “Which service offers the best managed solution?”, and “Which AWS Compute service do I need?” are some of the most frequently asked questions.

Whether you are looking for the right compute, storage, database, or networking service, there is an array to choose from, each with a unique list of benefits, use cases, methodologies, and mechanisms to suit your specific need. However, if you don’t know what’s available, you’re likely to operate less efficiently, consume more resources, and consequently incur greater costs.

In this post, we’ll explore the range of compute services available for AWS to help you choose the one that’s right for you.

Getting clear on “compute”

Before going any further, let’s clarify what ‘compute’ actually refers to so that we have an understanding of the services that fall into this category.

Compute resources can be considered the brains and processing power required by applications and systems to carry out computational tasks via a series of instructions.

Essentially, compute is closely related to common server components that many of you will be familiar with such as central processing units (CPUs) and random access memory (RAM). With that in mind, a physical server within a data center would be considered a compute resource as it may have multiple CPUs and many gigabytes of RAM to process instructions given by the operating system and applications.

Within AWS, compute resources can be consumed in different quantities, for different periods of time, and across a range of use cases offering a wide scope of performance options. Choosing the right AWS compute resource will really depend on your requirements, so understanding this is key.

With that in mind, you must first define the requirements for your solution: What are you trying to achieve? For example, you may just want to deploy a couple of instances to act as bastion hosts within the public subnet of your VPC, or provision a number of servers to act as a web tier receiving HTTP requests for your website. Or, you may need to deploy new applications using Docker within your AWS environment.

These scenarios all require compute resources in some form to implement the solution. However, each would be best implemented using a different compute service. Knowing this can save you time, money, and effort across your deployments.

AWS Compute options

The range of compute services available is growing all the time, with two of the most recent, Amazon Lightsail and AWS Batch, released at the end of 2016. The current AWS Compute category (at the time of this post) consists of six different services. Here is a high-level overview of each:

Amazon Elastic Compute Cloud (EC2)

EC2 is the most common compute service that AWS offers. It allows you to deploy a selection of on-demand instances offering a wide array of different performance benefits within your AWS environment. These can be scaled up and down as necessary.
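To make this concrete, here is a minimal sketch of how launching an on-demand instance might look with boto3, the AWS SDK for Python. The AMI ID, instance type, and counts below are hypothetical placeholders for illustration, not values from this post:

```python
# Sketch: assemble the parameters that boto3's EC2 run_instances call expects.
# The AMI ID and instance type are hypothetical examples.
def build_run_instances_params(ami_id, instance_type="t3.micro", count=1):
    """Build the request parameters for launching on-demand EC2 instances."""
    return {
        "ImageId": ami_id,          # the machine image to boot from
        "InstanceType": instance_type,
        "MinCount": count,          # launch fails if fewer can be provisioned
        "MaxCount": count,
    }

params = build_run_instances_params("ami-0123456789abcdef0")

# With AWS credentials configured, the actual launch would look like:
# import boto3
# ec2 = boto3.client("ec2")
# response = ec2.run_instances(**params)
```

Scaling up or down is then a matter of adjusting the instance type or count, or placing the instances behind an Auto Scaling group.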

EC2 Container Service (ECS)

This service allows you to run Docker-enabled applications packaged as containers across a cluster of EC2 instances without requiring you to manage a complex and administratively heavy cluster management system.  
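At the heart of ECS is the task definition, which describes the Docker container(s) to run on the cluster. Below is a minimal sketch expressed as the dict that boto3's register_task_definition call accepts; the family name, image, and resource limits are hypothetical:

```python
# Sketch: a minimal ECS task definition for a single Docker container,
# in the shape boto3's register_task_definition accepts.
# The family name and image are hypothetical examples.
task_definition = {
    "family": "web-app",                  # logical grouping for revisions
    "containerDefinitions": [
        {
            "name": "web",
            "image": "nginx:latest",      # Docker image to pull and run
            "memory": 256,                # hard memory limit in MiB
            "portMappings": [
                {"containerPort": 80, "hostPort": 80}
            ],
        }
    ],
}

# With credentials configured, registration would look like:
# import boto3
# ecs = boto3.client("ecs")
# ecs.register_task_definition(**task_definition)
```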

AWS Elastic Beanstalk

AWS Elastic Beanstalk is a managed service that will take your uploaded web application code and automatically provision and deploy the appropriate and necessary resources within AWS to make the web application operational. These resources can include other AWS services and features such as EC2 instances, Auto Scaling, application health monitoring, and Elastic Load Balancing.

AWS Lambda

AWS Lambda is a service that allows you to run your own code in response to event-driven triggers, consuming compute resources only for the milliseconds your code actually runs, within a highly available and scalable serverless environment. This makes it easy to build applications that respond quickly to new information.
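A Lambda function boils down to a handler that receives the triggering event and a runtime context. The sketch below is a minimal, hypothetical example; the event fields used here are an assumption for illustration, since each real trigger (S3, API Gateway, and so on) has its own event structure:

```python
# Sketch: a minimal, hypothetical Lambda handler. Lambda invokes this
# function with the triggering event (a dict) and a runtime context object.
import json

def handler(event, context):
    # Pull a field from the event; real events vary by trigger type.
    name = event.get("name", "world")
    # Return a response in the shape an API Gateway proxy integration expects.
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Because the handler is plain Python, it can be invoked and tested locally before being packaged and uploaded to Lambda.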

AWS Batch

This service is used to manage and run batch computing workloads. Batch computing typically requires a large amount of compute power spread across a cluster of resources, with the processing carried out by executing a series of jobs or tasks.
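In practice, work is submitted to AWS Batch against a job queue and a job definition. The sketch below assembles the parameters that boto3's submit_job call expects; the job, queue, and definition names are hypothetical, and an array job is used to fan the same definition out across many parallel tasks:

```python
# Sketch: build the parameters for boto3's Batch submit_job call.
# Job, queue, and definition names are hypothetical examples.
def build_submit_job_params(name, queue, definition, array_size=None):
    """Assemble a submit_job request; array jobs run many parallel copies."""
    params = {
        "jobName": name,
        "jobQueue": queue,           # queue the scheduler pulls from
        "jobDefinition": definition, # container image, vCPUs, memory, etc.
    }
    if array_size:
        # An array job runs `array_size` copies, each with its own index.
        params["arrayProperties"] = {"size": array_size}
    return params

job_params = build_submit_job_params(
    "nightly-render", "render-queue", "render-job:1", array_size=100
)

# With credentials configured, submission would look like:
# import boto3
# batch = boto3.client("batch")
# batch.submit_job(**job_params)
```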

Amazon Lightsail

Amazon Lightsail is essentially a virtual private server (VPS) backed by AWS infrastructure, much like an EC2 instance but with far fewer configuration steps during its creation. It has been designed to be simple, quick, and easy to use for small-scale use cases, such as small businesses or single users.

Other compute services

Some compute services have been created in response to the requirements and requests of the community. As the consumers of cloud resources, we want to be able to provision these quickly, reliably, and with minimal manual input to help reduce errors along the way.

AWS recognizes that not every implementation or solution that requires a compute resource fits the parameters and restrictions of existing services such as Amazon EC2 or AWS Elastic Beanstalk. New services are born and developed to meet and serve a specific compute request.

As other technologies such as Docker and serverless computing become more prevalent within the cloud computing environment, developing compute resources optimized for these technologies becomes a must. In fact, some of these technologies are only made practical by such purpose-built services. For example, AWS Lambda allows customers to take advantage of serverless computing. This continual evolution of services ensures that customers can take advantage of the latest technologies within AWS.

Other services have been designed for different purposes, such as enhanced deployment management, which brings convenience and simplicity to the customer; AWS Elastic Beanstalk is one example.

The solutions and resources provisioned by AWS Elastic Beanstalk could also be created manually using other services, together with your imported application code. With Elastic Beanstalk, however, the manual provisioning and configuration are taken care of by the service itself.

This is perfect for engineers who may not have the familiarity or skills with AWS that they need to deploy, monitor, and scale the correct environment to run their developed applications themselves.  Instead, this responsibility is passed on to AWS Elastic Beanstalk to deploy the correct infrastructure to run the uploaded code. This provides a simple, effective, and quick solution to the application deployment rollout.

As I mentioned previously, it’s important to understand which service options are available to you. Selecting the most appropriate service for your needs can help you to save money by reducing the amount of internal effort required from a personnel perspective alone. Using the AWS Elastic Beanstalk example above, if you moved from manual deployments to using this service, then time, efficiency, resources, and cost would all benefit, as responsibility for provisioning passes to the AWS service.

This ultimately allows you to spend more time in developing great new applications and less time on planning your deployment strategies.

Which AWS Compute service do I need?

So, when you find yourself asking “Which AWS Compute service do I need?”, here are some questions that you’ll need to answer: 

  1. What is the end goal for your deployment? Which aspect is most important to the solution: Is it deployment time, simplicity, management, security, responsibility, cost, or something else?  Knowing this will help you select the features that best meet your requirements.
  2. What are your compute requirements from a performance perspective? How much CPU, memory, and network bandwidth do you need? Although some services do not require all of this information, it’s still recommended that you know the minimum specifications for your application or service deployment.
  3. Do you know which AWS Compute options are available to you that are suitable for your deployment? If the answer is no, I recommend that you invest time and effort into gaining this knowledge as it will ultimately help you deploy a robust and cost-effective solution.

If you would like to know more about the AWS Compute services in greater detail, I highly recommend our “Compute Fundamentals for AWS” course.
On completion of the 90+ minute course, you will:

  • Understand compute resources
  • Be able to explain each of the compute resources used within AWS
  • Be able to select the most appropriate compute resource based on your requirements
  • Understand the benefits of Elastic Load Balancing and Auto Scaling and how they can work together to manage resource demand

The topics covered within this course include:

  • What is Compute?
  • Amazon Elastic Compute Cloud (EC2)
  • Elastic Load Balancing & Auto Scaling
  • Amazon ECS
  • AWS Elastic Beanstalk
  • AWS Lambda
  • AWS Batch
  • Amazon Lightsail

Written by

Stuart Scott

Stuart is the AWS content lead at Cloud Academy where he has created over 40 courses reaching tens of thousands of students. His content focuses heavily on cloud security and compliance, specifically on how to implement and configure AWS services to protect, monitor and secure customer data and their AWS environment.
