Amazon Machine Learning: Use Cases and a Real Example in Python
For teams training complex machine learning models, time and cost are important considerations. In the cloud, different instance types can be employed to reduce the time required to process data and train models.
Graphics Processing Units (GPUs) offer significant advantages over CPUs for quickly processing the large amounts of data typical of machine learning projects. However, users need to know how to monitor utilization to avoid over- or under-provisioning compute resources (and to avoid paying for instance capacity they aren't using).
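As a minimal sketch of the kind of utilization monitoring described above, the snippet below queries the NVIDIA driver with `nvidia-smi` (which ships with NVIDIA's drivers on AWS GPU instances) and parses its CSV output. The function names are illustrative, not part of any AWS API:

```python
import csv
import io
import subprocess


def parse_gpu_utilization(csv_text):
    """Parse `nvidia-smi ... --format=csv` output into a list of dicts."""
    reader = csv.DictReader(io.StringIO(csv_text))
    # nvidia-smi pads CSV fields with spaces, so strip keys and values.
    return [{k.strip(): v.strip() for k, v in row.items()} for row in reader]


def sample_gpu_utilization():
    """Query current GPU and memory utilization (requires an NVIDIA GPU)."""
    output = subprocess.check_output(
        ["nvidia-smi",
         "--query-gpu=index,utilization.gpu,utilization.memory",
         "--format=csv"],
        text=True,
    )
    return parse_gpu_utilization(output)


# Example: parsing a captured nvidia-smi snapshot (no GPU required).
snapshot = (
    "index, utilization.gpu [%], utilization.memory [%]\n"
    "0, 87 %, 54 %\n"
)
print(parse_gpu_utilization(snapshot))
```

Sampling these figures periodically (for example, from cron or a CloudWatch custom metric) gives a simple basis for deciding whether an instance type is over- or under-provisioned.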
AWS offers several GPU instance types for its Elastic Compute Cloud (EC2) aimed at compute-intensive applications that demand high performance. For example, using AWS's newest GPU instance type, P3, Airbnb has been able to iterate faster and reduce costs for its machine learning models that draw on multiple data sources.
Our new Lab “Analyzing CPU vs. GPU Performance for AWS Machine Learning” will help teams find the right balance between cost and performance when using GPUs on AWS Machine Learning.
You will take control of a P2 instance to analyze CPU vs. GPU performance, and you will learn how to use the AWS Deep Learning AMI to start a Jupyter Notebook server, which can be used to share data and machine learning experiments.
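A CPU vs. GPU comparison of the sort the Lab walks through boils down to timing the same workload in both environments. The harness below is a framework-agnostic sketch, not the Lab's own code; the `matmul` toy workload simply stands in for a real training step:

```python
import time


def benchmark(fn, *args, repeats=3):
    """Run fn several times and return the best wall-clock time in seconds.

    Running the same harness on a CPU-only instance and on a GPU instance
    (e.g. a p2.xlarge) gives a like-for-like performance comparison.
    """
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn(*args)
        best = min(best, time.perf_counter() - start)
    return best


def matmul(n):
    """Pure-Python matrix multiply used here as a stand-in workload."""
    a = [[1.0] * n for _ in range(n)]
    b = [[2.0] * n for _ in range(n)]
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]


elapsed = benchmark(matmul, 100)
print(f"best of 3 runs: {elapsed:.4f} s")
```

Dividing the measured time by the instance's hourly price turns the raw numbers into the cost/performance trade-off the Lab is designed to explore.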
In this video, author and Cloud Academy Researcher and Developer Logan Rakai will give you a sneak peek at what you’ll learn in this Hands-on Lab.