Amazon Elastic Inference – GPU Acceleration for Faster Inferencing

“Add GPU acceleration to any Amazon EC2 instance for faster inference at much lower cost (up to 75% savings)”

So you’ve just kicked off the training phase of your multilayered deep neural network. The training phase is leveraging Amazon EC2 P3 instances to keep the training time to a minimum, but it’s still going to take a while. With time in hand, you begin to contemplate what infrastructure you’ll use to run your inferences.

You’re already familiar with the merits of using GPUs for the training phase. GPUs can parallelize massive numbers of simple math computations, which makes them perfect for training neural networks. GPUs cost more per hour to run than CPUs, but because they parallelize the number crunching, you don’t need to run them for nearly as long as you would for equivalent training on CPUs. In fact, training on GPUs can be orders of magnitude quicker. So a GPU may cost more per hour, but you won’t need to run it anywhere near as long as you would a CPU. Cost aside, training your models faster also gets them into production sooner to start performing inferences. So for the training phase, it makes complete sense to go with GPUs.

Your contemplation now shifts to whether to use GPU or CPU infrastructure for inferencing once training completes and your model is ready. We know that GPUs cost more per hour to run, and performing inferences through a trained neural network is far less taxing in terms of the computation required and the volume of data that needs to be ingested and processed. CPUs therefore seem to be the way to go. However, you know from past experience that, over time, your CPU-hosted inferencing tends to bottleneck under heavy demand. That makes you reconsider running the inferencing on GPUs, but now you need to budget the extra cost into the project. This dilemma of GPUs versus CPUs for inferencing, with respect to both cost and performance, is all too familiar to many organizations. On EC2, choosing between a GPU and a CPU used to be a mutually exclusive, upfront decision. As of today, that is no longer the case.

Amazon Elastic Inference

Amazon Elastic Inference is a new AWS service that lets you complement your CPU-based EC2 instances with GPU acceleration, which is perfect for hosting your inferencing models. You can now select an appropriately sized CPU EC2 instance and boost its number-crunching ability with GPU processing. As with many other AWS services, you only pay for the accelerator hours you actually use. In practice, this means you can get GPU processing power at up to 75% less than the cost of running an equivalently sized GPU EC2 instance.

See: https://aws.amazon.com/machine-learning/elastic-inference/
(You might also want to read up on this year’s announcements from re:Invent, particularly our blog post on how Amazon FSx for Lustre Makes High Performance Computing More Accessible.)

To start with, Amazon Elastic Inference launches with three accelerator sizes, each rated in teraflops of mixed-precision performance: eia1.medium, eia1.large, and eia1.xlarge.

Elastic Inferencing GPU Types
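
If you would rather explore what is available from the command line, the AWS CLI also exposes an elastic-inference command group. The subcommands, the --location-type value, and the Region below are assumptions based on the published Elastic Inference API rather than anything shown in the walkthrough, so treat this as a rough sketch:

# List the Elastic Inference accelerator types and their characteristics (assumed subcommand)
aws elastic-inference describe-accelerator-types

# Check which accelerator types are offered in a particular Region (Region value is a placeholder)
aws elastic-inference describe-accelerator-offerings \
    --location-type region \
    --region us-west-2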

Amazon Elastic Inference has been seamlessly integrated into both the AWS EC2 console and the AWS CLI. In the following EC2 console screenshot, attaching GPU acceleration is as simple as enabling the “Add an Elastic Inference accelerator” option:

AWS EC2 Console - Elastic Inferencing

The equivalent AWS CLI command looks like the following; note that the existing run-instances command has been extended with a new, optional --elastic-inference-accelerator parameter:

aws ec2 run-instances \
    --image-id ami-00ffbd996ef2211e3 \
    --key-name DNN_Key \
    --security-group-ids sg-12345678 \
    --subnet-id subnet-12345678 \
    --instance-type c5.xlarge \
    --elastic-inference-accelerator Type=eia1.large \
    --iam-instance-profile Name="InferenceAcceleratorProfile"
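
Once the instance is up, you can check that the accelerator was attached. The instance ID below is a placeholder, and the ElasticInferenceAcceleratorAssociations field name is an assumption about the describe-instances output rather than something shown above:

# Verify the Elastic Inference accelerator association on the new instance (instance ID is a placeholder)
aws ec2 describe-instances \
    --instance-ids i-0123456789abcdef0 \
    --query 'Reservations[].Instances[].ElasticInferenceAcceleratorAssociations'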

The following list itemizes several prerequisites that need to be in place to leverage Amazon Elastic Inference:

  • An AWS PrivateLink endpoint configured for Elastic Inference must be present (a sketch of creating one follows below)
  • An IAM role with the necessary policies to connect to the Elastic Inference accelerator
  • Build your models using TensorFlow, Apache MXNet, and/or ONNX
  • Use the latest AWS Deep Learning AMIs, which have been updated with Amazon Elastic Inference support baked directly into the TensorFlow and Apache MXNet deep learning frameworks
Deep Learning AMIs - Elastic Inferencing Enabled
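
To give you a feel for the first two prerequisites, here is a rough sketch of creating the PrivateLink endpoint and granting the instance role permission to connect to its accelerator. The VPC, subnet, and security group IDs, the Region in the endpoint service name, and the role and policy names are illustrative assumptions, not values from the walkthrough above:

# Create a VPC interface endpoint for the Elastic Inference runtime (IDs and Region are placeholders)
aws ec2 create-vpc-endpoint \
    --vpc-endpoint-type Interface \
    --vpc-id vpc-12345678 \
    --subnet-ids subnet-12345678 \
    --security-group-ids sg-12345678 \
    --service-name com.amazonaws.us-west-2.elastic-inference.runtime

# Allow the instance role to connect to its Elastic Inference accelerator (role/policy names are placeholders)
aws iam put-role-policy \
    --role-name InferenceAcceleratorRole \
    --policy-name AllowElasticInferenceConnect \
    --policy-document '{
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Action": "elastic-inference:Connect",
        "Resource": "*"
      }]
    }'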

As you can see, with a few extra configuration options in place, you can have the best of both worlds: CPU-hosted inferencing with GPU acceleration. You no longer need to spend time weighing CPUs against GPUs: take both!

Another game changer in the machine learning space from AWS – give it a try and check out our Lab on Analyzing CPU vs GPU Performance for AWS Machine Learning.
