Amazon Elastic Inference – GPU Acceleration for Faster Inferencing

“Add GPU acceleration to any Amazon EC2 instance for faster inference at much lower cost (up to 75% savings)”

So you’ve just kicked off the training phase of your multilayered deep neural network. The training phase is leveraging Amazon EC2 P3 instances to keep the training time to a minimum, but it’s still going to take a while. With time in hand, you begin to contemplate what infrastructure you’ll use to run your inferences.

You’re already familiar with the merits of using GPUs for the training phase. GPUs can parallelize massive numbers of simple math computations, which makes them perfect for training neural networks. GPUs are more expensive to run per hour than CPUs, but because they can parallelize the number crunching, you don’t need to run them anywhere near as long as you would for the equivalent training performed on CPUs. In fact, training on GPUs can be orders of magnitude quicker. So a GPU may cost you more per hour, but you won’t need to run it anywhere near as long as you would a CPU. Beyond cost, training your models faster also lets you get them into production sooner to start performing inferences. So for the training phase, it makes complete sense to go with GPUs.

So your contemplation now focuses on whether to use GPU or CPU infrastructure to perform inferencing once training completes and your model is ready. We know that GPUs cost more per hour to run, and performing inferences through a trained neural network is far less taxing in terms of the computation required and the volume of data that needs to be ingested and processed. Therefore, CPUs seem to be the way to go. However, you know from past experience that over time your CPU-hosted inferencing tends to bottleneck under overwhelming demand, which makes you reconsider running the inferencing on GPUs, but then you need to budget the extra cost into the project. This dilemma of whether to use GPUs or CPUs for inferencing, with respect to both cost and performance, is all too familiar to many organizations. On EC2, the choice of GPU or CPU has been a fairly mutually exclusive decision made upfront. As of today, this is no longer the case.

Amazon Elastic Inference is a new service from AWS that allows you to complement your EC2 CPU instances with GPU acceleration, which is perfect for hosting your inferencing models. You can now select an appropriately sized CPU-based EC2 instance and boost its number-crunching ability with GPU processing. As with many other AWS services, you only pay for the actual accelerator hours you use. What this means is that you can get GPU processing power while paying up to 75% less than you would for an equivalently sized GPU EC2 instance.

See: https://aws.amazon.com/machine-learning/elastic-inference/
(You might also want to read up on this year’s announcements from re:Invent, particularly our blog post on how Amazon FSx for Lustre Makes High Performance Computing More Accessible.)

For starters, Amazon Elastic Inference is launching with three sizes of mixed-precision powered accelerators, each delivering teraflops of inference acceleration: eia1.medium, eia1.large, and eia1.xlarge.
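
If you want to confirm which accelerator types are available in your region, newer versions of the AWS CLI expose an elastic-inference command group. The following is a minimal sketch, assuming your CLI version includes it:

# Lists the Elastic Inference accelerator types available in the chosen region
aws elastic-inference describe-accelerator-types --region us-east-1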

Amazon Elastic Inference has been seamlessly integrated into both the AWS EC2 console and the AWS CLI. In the following EC2 console screenshot, attaching GPU acceleration is as simple as enabling the “Add an Elastic Inference accelerator” option:

The equivalent AWS CLI command looks like the following, noting that the existing API has been extended with a new optional elastic-inference-accelerator parameter:

aws ec2 run-instances \
    --image-id ami-00ffbd996ef2211e3 \
    --key-name DNN_Key \
    --security-group-ids sg-12345678 \
    --subnet-id subnet-12345678 \
    --instance-type c5.xlarge \
    --elastic-inference-accelerator Type=eia1.large \
    --iam-instance-profile Name="InferenceAcceleratorProfile"
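
Once the instance is up, one way to confirm the accelerator has been attached is to query the instance’s Elastic Inference accelerator associations. This is a sketch only, and the instance ID below is a placeholder:

# Shows the Elastic Inference accelerator(s) associated with the instance
aws ec2 describe-instances \
    --instance-ids i-0123456789abcdef0 \
    --query "Reservations[].Instances[].ElasticInferenceAcceleratorAssociations"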

The following list itemizes several prerequisites that need to be in place to leverage Amazon Elastic Inference:

  • An AWS PrivateLink (VPC interface) endpoint configured for Elastic Inference must be present (see the sketch after this list)
  • An IAM role with the necessary policies to connect to the Elastic Inference accelerator
  • Models built using TensorFlow, Apache MXNet, and/or ONNX
  • The latest AWS Deep Learning AMIs, which have been updated with Amazon Elastic Inference support baked directly into the TensorFlow and Apache MXNet deep learning frameworks
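
To give you a feel for the first two prerequisites, here is a minimal AWS CLI sketch that creates the interface (PrivateLink) endpoint for the Elastic Inference runtime service and grants an instance role permission to connect to the accelerator. All of the resource IDs and the role and policy names are placeholders, and you should confirm the exact endpoint service name for your region in the Elastic Inference documentation:

# All resource IDs and names below are placeholders; substitute your own.

# 1. Create the interface (PrivateLink) endpoint for the Elastic Inference runtime service
aws ec2 create-vpc-endpoint \
    --vpc-endpoint-type Interface \
    --vpc-id vpc-12345678 \
    --service-name com.amazonaws.us-east-1.elastic-inference.runtime \
    --subnet-ids subnet-12345678 \
    --security-group-ids sg-12345678

# 2. Allow the instance role to connect to the Elastic Inference accelerator
aws iam put-role-policy \
    --role-name InferenceAcceleratorRole \
    --policy-name ElasticInferenceConnect \
    --policy-document '{
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "elastic-inference:Connect",
            "Resource": "*"
        }]
    }'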

As you can see, with a few extra configuration options in place you can have the best of both worlds: CPU-hosted inferencing with GPU acceleration. You no longer need to spend time contemplating CPUs over GPUs – take both!

Another game changer in the machine learning space from AWS – give it a try and check out our Lab on Analyzing CPU vs GPU Performance for AWS Machine Learning.

Jeremy Cook

Jeremy is currently employed as a Cloud Researcher and Trainer, and operates within Cloud Academy's content provider team authoring technical training documentation for both the AWS and GCP cloud platforms. Jeremy has achieved the AWS Certified Solutions Architect - Professional and GCP Qualified Systems Operations Professional certifications.
