With the ever-increasing service catalog being developed by the engineers at AWS, it’s easy to get confused when it comes to understanding which AWS Compute service you need and which you should be using for your deployments. “Which service offers the quickest deployment?”, “Which service offers the best managed solution?”, and “Which AWS Compute service do I need?” are some of the most frequently asked questions.
Whether you are looking for the right compute, storage, database, or networking service, there is an array to choose from, each with a unique list of benefits, use cases, methodologies, and mechanisms to suit your specific need. However, if you don’t know what’s available, you’re likely to consume more resources than necessary and, as a result, incur greater inefficiencies and costs.
In this post, we’ll explore the range of compute services available on AWS to help you choose the one that’s right for you.
Getting clear on “compute”
Before going any further, let’s clarify what “compute” actually refers to so that we understand which services fall into this category.
Compute resources can be considered the brains and processing power required by applications and systems to carry out computational tasks via a series of instructions.
Essentially, compute is closely related to common server components that many of you will be familiar with such as central processing units (CPUs) and random access memory (RAM). With that in mind, a physical server within a data center would be considered a compute resource as it may have multiple CPUs and many gigabytes of RAM to process instructions given by the operating system and applications.
Within AWS, compute resources can be consumed in different quantities, for different periods of time, and across a range of use cases offering a wide scope of performance options. Choosing the right AWS compute resource will really depend on your requirements, so understanding this is key.
With that in mind, you must first define the requirements for your solution: What are you trying to achieve? For example, you may just want to deploy a couple of instances to act as Bastion Hosts within your public subnet of your VPC, or provision a number of servers to act as a web tier receiving HTTP requests for your website. Or, you may need to deploy new applications using Docker within your AWS environment.
These scenarios all require compute resources in some form to implement the solution. However, each would be best implemented using a different compute service. Knowing this can save you time, money, and effort across your deployments.
AWS Compute options
The range of compute services available is growing all the time, with two of the most recent, Amazon Lightsail and AWS Batch, released at the end of 2016. The current AWS Compute category (at the time of this post) consists of six different services. Here is a high-level overview of each:
Amazon Elastic Compute Cloud (EC2)
EC2 is the most common compute service that AWS offers. It allows you to deploy a selection of on-demand instances offering a wide array of different performance benefits within your AWS environment. These can be scaled up and down as necessary.
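To make this concrete, here is a minimal sketch of how an on-demand EC2 instance might be launched programmatically. It assumes Python with the boto3 SDK; the AMI ID, key pair name, and instance type are illustrative placeholders, not values from this post.

```python
# Sketch: assembling a request for EC2's RunInstances API.
# The AMI ID, key pair name, and instance type below are placeholders.

def build_run_instances_params(ami_id, instance_type="t2.micro", key_name=None):
    """Assemble keyword arguments for boto3's ec2.run_instances() call."""
    params = {
        "ImageId": ami_id,              # the machine image to boot from
        "InstanceType": instance_type,  # determines CPU, memory, and network performance
        "MinCount": 1,                  # launch exactly one instance
        "MaxCount": 1,
    }
    if key_name:
        params["KeyName"] = key_name    # SSH key pair used to access the instance
    return params

params = build_run_instances_params("ami-0123456789abcdef0", key_name="my-key")

# With the boto3 SDK installed and AWS credentials configured, the launch
# itself would look like:
#   import boto3
#   ec2 = boto3.client("ec2", region_name="us-east-1")
#   response = ec2.run_instances(**params)
```

Scaling up or down is then a matter of launching or terminating instances, or letting Auto Scaling do it for you.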
EC2 Container Service (ECS)
This service allows you to run Docker-enabled applications packaged as containers across a cluster of EC2 instances without requiring you to manage a complex and administratively heavy cluster management system.
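As a rough illustration, a container running on ECS is described by a task definition. The sketch below is in Python (assuming the boto3 SDK for the actual API call); the family name and image are placeholders, not values from this post.

```python
# Sketch: a minimal ECS task definition for one Docker container.
# The family name and image are illustrative placeholders.

def build_task_definition(family, image, memory_mib=256):
    """Assemble keyword arguments for boto3's ecs.register_task_definition()."""
    return {
        "family": family,                # logical name for this task definition
        "containerDefinitions": [
            {
                "name": family,
                "image": image,          # any Docker image, e.g. from Docker Hub or ECR
                "memory": memory_mib,    # hard memory limit (MiB) for the container
                "essential": True,       # the task stops if this container stops
            }
        ],
    }

task_def = build_task_definition("web-app", "nginx:latest")

# With the boto3 SDK installed and AWS credentials configured:
#   import boto3
#   ecs = boto3.client("ecs")
#   ecs.register_task_definition(**task_def)
```

ECS then schedules tasks built from this definition across the EC2 instances in your cluster.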
AWS Elastic Beanstalk
AWS Elastic Beanstalk is a managed service that takes your uploaded web application code and automatically provisions and deploys the appropriate resources within AWS to make the web application operational. These resources can include other AWS services and features such as EC2 instances, Auto Scaling, application health monitoring, and Elastic Load Balancing.
AWS Lambda
AWS Lambda is a service that allows you to run your own code in response to event-driven triggers, consuming compute resources only for the milliseconds your code executes, in a highly available and scalable serverless environment. This makes it easy to build applications that respond quickly to new information.
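As a minimal sketch of what such code might look like, here is a Lambda-style handler in Python; the event shape is a made-up example, not tied to any particular trigger:

```python
# Sketch: a minimal AWS Lambda handler (Python runtime).
# Lambda invokes the function named in your handler setting, passing the
# triggering event payload and a runtime context object.

def lambda_handler(event, context):
    """Respond to an event-driven trigger; the event shape is illustrative."""
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}

# Invoking it locally the way the Lambda runtime would (context unused here):
result = lambda_handler({"name": "AWS"}, None)
print(result["body"])  # Hello, AWS!
```

You pay only while the handler runs; there are no servers to provision or manage.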
AWS Batch
This service is used to manage and run batch computing workloads. Batch computing often requires a vast amount of compute power across a cluster of resources to complete processing by executing a series of jobs or tasks.
Amazon Lightsail
Amazon Lightsail is essentially a virtual private server (VPS) backed by AWS infrastructure, much like an EC2 instance but without as many configuration steps during its creation. It has been designed to be simple, quick, and easy to use for small-scale use cases by small businesses or single users.
Other compute services
Some compute services have been created in response to the requirements and requests of the community. As the consumers of cloud resources, we want to be able to provision these quickly, reliably, and with minimal manual input to help reduce errors along the way.
AWS recognizes that not every implementation or solution that requires a compute resource fits the parameters and restrictions of existing services such as Amazon EC2 or AWS Elastic Beanstalk. New services are developed to meet these specific compute requirements.
As other technologies such as Docker and serverless computing become more prevalent within the cloud computing environment, the need for compute resources optimized for these technologies becomes a must. In fact, some of these technologies are only practical because such optimized services exist. For example, AWS Lambda allows customers to take advantage of serverless computing. This continual evolution of services ensures that customers can take advantage of the latest technologies within AWS.
Other services, such as AWS Elastic Beanstalk, have been designed for different purposes, such as enhanced deployment management, bringing convenience and simplicity to the customer.
The solutions and resources that AWS Elastic Beanstalk provisions could be created manually using other services, with your application code imported by hand. Using Elastic Beanstalk, that manual provisioning and configuration are taken care of by the service itself.
This is perfect for engineers who may not have the familiarity or skills with AWS that they need to deploy, monitor, and scale the correct environment to run their developed applications themselves. Instead, this responsibility is passed on to AWS Elastic Beanstalk to deploy the correct infrastructure to run the uploaded code. This provides a simple, effective, and quick solution to the application deployment rollout.
As I mentioned previously, it’s important to understand which service options are available to you. Selecting the most appropriate service for your needs can save you money by reducing the amount of internal effort required, from a personnel perspective alone. Using the AWS Elastic Beanstalk example above, if you moved from manual deployments to using this service, then time, efficiency, resources, and cost would all benefit from passing the provisioning responsibility onto the AWS service.
This ultimately allows you to spend more time developing great new applications and less time planning your deployment strategies.
Which AWS Compute service do I need?
So, when you find yourself asking “Which AWS Compute service do I need?”, here are some questions that you’ll need to answer:
- What is the end goal for your deployment? Which aspect is most important to the solution: Is it deployment time, simplicity, management, security, responsibility, cost, or something else? Knowing this will help you select the features that best meet your requirements.
- What are your compute requirements from a performance perspective? How much CPU, memory, and network bandwidth do you need? Although some services do not require all of this information, it’s still recommended that you know the minimum specifications for your application or service deployment.
- Do you know which AWS Compute options are available to you that are suitable for your deployment? If the answer is no, I recommend that you invest time and effort into gaining this knowledge as it will ultimately help you deploy a robust and cost-effective solution.
If you would like to know more about the AWS Compute services in greater detail, I highly recommend our “Compute Fundamentals for AWS” course.
On completion of the 90+ minute course, you will:
- Understand compute resources
- Be able to explain each of the compute resources used within AWS
- Be able to select the most appropriate compute resource based on your requirements
- Understand the benefits of Elastic Load Balancing and Auto Scaling and how they can work together to manage resource demand
The topics covered within this course include:
- What is Compute?
- Amazon Elastic Compute Cloud (EC2)
- Elastic Load Balancing & Auto Scaling
- Amazon ECS
- AWS Elastic Beanstalk
- AWS Lambda
- AWS Batch
- Amazon Lightsail