Analyze CPU vs. GPU Performance for AWS Machine Learning

For teams training complex machine learning models, time and cost are important considerations. In the cloud, different instance types can be employed to reduce the time required to process data and train models.

Graphics Processing Units (GPUs) offer significant advantages over CPUs for quickly processing the large amounts of data typical of machine learning projects. However, it's important to monitor utilization to make sure you are not over- or under-provisioning compute resources (and that you aren't paying too much for instances).
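As one illustration, on a GPU instance you can check utilization at any time with the `nvidia-smi` tool that ships with the NVIDIA driver. The sketch below is a minimal example, not the Lab's own material; it assumes `nvidia-smi` is on the instance's PATH (as it is on the AWS Deep Learning AMI), and the helper name `gpu_utilization` is just illustrative.

```python
import subprocess


def gpu_utilization():
    """Return current utilization and memory stats for each GPU via nvidia-smi.

    Assumes the NVIDIA driver and nvidia-smi are installed, as on the
    AWS Deep Learning AMI running on a GPU instance type.
    """
    output = subprocess.check_output(
        [
            "nvidia-smi",
            "--query-gpu=utilization.gpu,memory.used,memory.total",
            "--format=csv,noheader,nounits",
        ],
        text=True,
    )
    stats = []
    for line in output.strip().splitlines():
        util, mem_used, mem_total = (float(x) for x in line.split(", "))
        stats.append(
            {"gpu_util_pct": util, "mem_used_mib": mem_used, "mem_total_mib": mem_total}
        )
    return stats


if __name__ == "__main__":
    for i, s in enumerate(gpu_utilization()):
        print(
            f"GPU {i}: {s['gpu_util_pct']:.0f}% utilization, "
            f"{s['mem_used_mib']:.0f}/{s['mem_total_mib']:.0f} MiB memory"
        )
```

If utilization stays low during training, the workload is likely CPU- or I/O-bound and a smaller (or CPU-only) instance may be more cost-effective; sustained utilization near 100% suggests the GPU is the bottleneck and a larger instance could shorten training time.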

AWS offers several GPU instance types for its Elastic Compute Cloud (EC2) aimed at compute-intensive applications that demand high performance. For example, by using AWS's newest GPU instance type, P3, Airbnb has been able to iterate faster and reduce costs for its machine learning models that draw on multiple types of data sources.

Our new Lab “Analyzing CPU vs. GPU Performance for AWS Machine Learning” will help teams find the right balance between cost and performance when using GPUs for machine learning on AWS.

You will take control of a P2 instance to analyze CPU vs. GPU performance, and you will learn how to use the AWS Deep Learning AMI to start a Jupyter Notebook server, which can be used to share data and machine learning experiments.
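To give a flavor of the kind of comparison you'll run, here is a minimal sketch (not the Lab's own notebook) that times a large matrix multiplication on the CPU and, when one is available, on the GPU. It assumes TensorFlow 2.x is installed, as it is in the Deep Learning AMI's prebuilt environments; the matrix size and repeat count are arbitrary illustrative values.

```python
import time

import tensorflow as tf


def time_matmul(device, size=4000, repeats=10):
    """Average time for one large matrix multiplication on the given device."""
    with tf.device(device):
        a = tf.random.normal((size, size))
        b = tf.random.normal((size, size))
        tf.matmul(a, b).numpy()  # warm-up run: excludes one-time kernel/CUDA setup
        start = time.perf_counter()
        for _ in range(repeats):
            tf.matmul(a, b).numpy()  # .numpy() blocks until the result is ready
        return (time.perf_counter() - start) / repeats


cpu_time = time_matmul("/CPU:0")
print(f"CPU: {cpu_time:.3f} s per matmul")

if tf.config.list_physical_devices("GPU"):
    gpu_time = time_matmul("/GPU:0")
    print(f"GPU: {gpu_time:.3f} s per matmul (~{cpu_time / gpu_time:.1f}x faster)")
else:
    print("No GPU detected; running on a CPU-only instance.")
```

On a P2 or P3 instance you should typically see the GPU finish the same operation many times faster than the CPU, and that measured gap is what you can weigh against the difference in hourly instance cost.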

In this video, Lab author and Cloud Academy Researcher and Developer Logan Rakai gives you a sneak peek at what you'll learn in this Hands-on Lab.

[Video: Analyze CPU vs. GPU Performance for AWS Machine Learning - Cloud Academy]