With the ever-expanding service catalog being developed by the engineers at AWS, it’s easy to get confused when it comes to understanding which AWS Compute service you need and which service you should be using for your deployments. “Which service offers me the quickest deployment?”, “Which service offers the best managed solution?”, and “Which AWS Compute service do I need?” are some of the most frequently asked questions.
Whether you are looking for the right compute, storage, database, or networking service, there is an array to choose from, each with a unique list of benefits, use cases, methodologies, and mechanisms to suit your specific need. However, if you don’t know what’s available, you’re likely to incur inefficiencies and wasted resources, and consequently, greater costs.
In this post, we’ll explore the range of compute services available on AWS to help you choose the one that’s right for you.
Getting clear on “compute”
Before going any further, let’s clarify what ‘compute’ actually refers to so that we have an understanding of the services that fall into this category.
Compute resources can be considered the brains and processing power required by applications and systems to carry out computational tasks via a series of instructions.
Essentially, compute is closely related to common server components that many of you will be familiar with such as central processing units (CPUs) and random access memory (RAM). With that in mind, a physical server within a data center would be considered a compute resource as it may have multiple CPUs and many gigabytes of RAM to process instructions given by the operating system and applications.
Within AWS, compute resources can be consumed in different quantities, for different periods of time, and across a range of use cases offering a wide scope of performance options. Choosing the right AWS compute resource will really depend on your requirements, so understanding this is key.
With that in mind, you must first define the requirements for your solution: What are you trying to achieve? For example, you may just want to deploy a couple of instances to act as bastion hosts within the public subnet of your VPC, or provision a number of servers to act as a web tier receiving HTTP requests for your website. Or, you may need to deploy new applications using Docker within your AWS environment.
These scenarios all require compute resources in some form to implement the solution. However, each would be best implemented using a different compute service. Knowing this can save you time, money, and effort across your deployments.
AWS Compute options
The range of compute services available is growing all the time, with two of the most recent, Amazon Lightsail and AWS Batch, released at the end of 2016. The current AWS Compute category (at the time of this post) consists of six different services. Here is a high-level overview of each:
Amazon Elastic Compute Cloud (EC2)
EC2 is the most common compute service that AWS offers. It allows you to deploy a selection of on-demand instances offering a wide array of different performance benefits within your AWS environment. These can be scaled up and down as necessary.
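As a hedged sketch of what launching an on-demand instance looks like, the snippet below assembles the parameters you would pass to the EC2 RunInstances API via boto3 (the AWS SDK for Python). The AMI ID, instance type, key pair name, and helper function are hypothetical placeholders, not values from this post.

```python
# Sketch of launching an on-demand EC2 instance. The AMI ID and key
# pair name below are hypothetical placeholders.
def build_run_instances_params(ami_id, instance_type, key_name):
    """Assemble the keyword arguments for ec2.run_instances()."""
    return {
        "ImageId": ami_id,              # which machine image to boot
        "InstanceType": instance_type,  # CPU/RAM performance class
        "KeyName": key_name,            # SSH key pair for access
        "MinCount": 1,                  # launch exactly one instance
        "MaxCount": 1,
    }

params = build_run_instances_params("ami-0abcdef1234567890", "t3.micro", "my-key")

# With boto3 installed and AWS credentials configured, you would run:
#   import boto3
#   boto3.client("ec2").run_instances(**params)
print(params["InstanceType"])  # → t3.micro
```

Scaling up or down is then a matter of launching more instances (or larger instance types) and terminating the ones you no longer need, typically handled for you by Auto Scaling.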
EC2 Container Service (ECS)
This service allows you to run Docker-enabled applications packaged as containers across a cluster of EC2 instances without requiring you to manage a complex and administratively heavy cluster management system.
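To make the container model concrete, here is a hypothetical sketch of the task definition you would register with ECS to describe a Docker container. The family name, image, and helper function are placeholder assumptions, not part of the original post.

```python
# Sketch of an ECS task definition describing one Docker container.
# The family name and image are hypothetical placeholders.
def build_task_definition(family, image, cpu=256, memory=512):
    """Assemble the keyword arguments for ecs.register_task_definition()."""
    return {
        "family": family,  # logical name grouping revisions of this task
        "containerDefinitions": [
            {
                "name": family,
                "image": image,     # Docker image to pull and run
                "cpu": cpu,         # CPU units reserved (1024 = 1 vCPU)
                "memory": memory,   # hard memory limit in MiB
                "essential": True,  # task stops if this container stops
            }
        ],
    }

task_def = build_task_definition("web-app", "nginx:latest")
# With boto3 and credentials configured, you would register it with:
#   boto3.client("ecs").register_task_definition(**task_def)
print(task_def["containerDefinitions"][0]["image"])  # → nginx:latest
```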
AWS Elastic Beanstalk
AWS Elastic Beanstalk is a managed service that will take your uploaded web application code and automatically provision and deploy the appropriate and necessary resources within AWS to make the web application operational. These resources can include other AWS services and features such as EC2 instances, Auto Scaling, application health monitoring, and Elastic Load Balancing.
AWS Lambda
AWS Lambda is a service that allows you to run your own code, consuming compute resources only for the milliseconds it takes to respond to event-driven triggers, in a highly available and scalable serverless environment. This makes it easy to build applications that respond quickly to new information.
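In practice, a Lambda function is just a handler that receives the triggering event. The sketch below is a minimal, hypothetical example; real triggers such as S3 or API Gateway deliver their own event structures.

```python
# A minimal AWS Lambda handler. Lambda invokes this function with the
# triggering event (a dict) and a runtime context object.
def handler(event, context):
    # The "name" key is part of our hypothetical example payload.
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}

if __name__ == "__main__":
    # Simulate an invocation locally (the context object is unused here).
    print(handler({"name": "AWS"}, None))
    # → {'statusCode': 200, 'body': 'Hello, AWS!'}
```

You pay only for the time your handler actually runs, which is what makes the serverless model cost-effective for spiky, event-driven workloads.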
AWS Batch
This service is used to manage and run batch computing workloads. Batch computing requires a vast amount of compute power across a cluster of compute resources to complete batch processing by executing a series of jobs or tasks.
Amazon Lightsail
Amazon Lightsail is essentially a virtual private server (VPS) backed by AWS infrastructure, much like an EC2 instance but without as many configuration steps during its creation. It has been designed to be simple, quick, and easy to use for small-scale use cases, such as small businesses or single users.
Other compute services
Some compute services have been created in response to the requirements and requests of the community. As the consumers of cloud resources, we want to be able to provision these quickly, reliably, and with minimal manual input to help reduce errors along the way.
AWS recognizes that not every implementation or solution that requires a compute resource fits the parameters and restrictions of existing services such as Amazon EC2 or AWS Elastic Beanstalk. New services are born and developed to meet and serve a specific compute request.
As other technologies such as Docker and serverless computing become more prevalent within the cloud computing environment, the need to develop compute resources that are optimized for these technologies becomes a must. In fact, some of these technologies are only possible by doing so. For example, AWS Lambda allows customers to take advantage of serverless computing. This continual evolution of services ensures that customers can take advantage of the latest technologies within AWS.
Other services have been designed for different purposes, such as enhanced deployment management, which brings convenience and simplicity to the customer; AWS Elastic Beanstalk is one example.
The solutions and resources that AWS Elastic Beanstalk provisions could all be created manually using other services, with your application code imported by hand. With Elastic Beanstalk, that manual provisioning and configuration are taken care of by the service itself.
This is perfect for engineers who may not have the familiarity or skills with AWS that they need to deploy, monitor, and scale the correct environment to run their developed applications themselves. Instead, this responsibility is passed on to AWS Elastic Beanstalk to deploy the correct infrastructure to run the uploaded code. This provides a simple, effective, and quick solution to the application deployment rollout.
As I mentioned previously, it’s important to understand which service options are available to you. Selecting the most appropriate service for your needs can help you save money, if only by reducing the amount of internal effort required from a personnel perspective. Using the AWS Elastic Beanstalk example above, if you moved from manual deployments to using this service, then time, efficiency, resources, and cost would all benefit from passing the provisioning responsibility on to the AWS service.
This ultimately allows you to spend more time in developing great new applications and less time on planning your deployment strategies.
Which AWS Compute service do I need?
So, when you find yourself asking “Which AWS Compute service do I need?”, here are some questions you’ll need to answer:
- What is the end goal for your deployment? Which aspect is most important to the solution: Is it deployment time, simplicity, management, security, responsibility, cost, or something else? Knowing this will help you select the features that best meet your requirements.
- What are your compute requirements from a performance perspective? How much CPU, memory, and network bandwidth do you need? Although some services do not require all of this information, it’s still recommended that you know the minimum specifications for your application or service deployment.
- Do you know which AWS Compute options are available to you that are suitable for your deployment? If the answer is no, I recommend that you invest time and effort into gaining this knowledge as it will ultimately help you deploy a robust and cost-effective solution.
If you would like to know more about the AWS Compute services in greater detail, I highly recommend our “Compute Fundamentals for AWS” course.
On completion of the 90+ minute course, you will:
- Understand compute resources
- Be able to explain each of the compute resources used within AWS
- Be able to select the most appropriate compute resource based on your requirements
- Understand the benefits of Elastic Load Balancing and Auto Scaling and how they can work together to manage resource demand
The topics covered within this course include:
- What is Compute?
- Amazon Elastic Compute Cloud (EC2)
- Elastic Load Balancing & Auto Scaling
- Amazon ECS
- AWS Elastic Beanstalk
- AWS Lambda
- AWS Batch
- Amazon Lightsail