
Google Prediction API: a Machine Learning black box for developers


Google Prediction API provides a RESTful interface to build Machine Learning models

This is my third article on how to build Machine Learning models in the Cloud. I previously explored Amazon Machine Learning and Azure Machine Learning – relative newcomers in the cloud data market. Google Prediction API, on the other hand, was released all the way back in 2011, and offers a very stable and simple way to train Machine Learning models via a RESTful interface, although it might seem less friendly if you generally prefer browser interfaces.
I am not going to explore the wide range of services offered by Google Cloud Platform here: you can browse the Developers Console by yourself for free, sign up for the Free Trial offered by Google ($300 in credit to use for 2 months), and check out Cloud Academy’s courses on Google Cloud Platform.

Google Prediction API: Machine Learning Black Box

We can define Google’s approach as a “black box”, since you get no control over what happens under the hood: your model configuration is restricted to specifying “Classification” vs. “Regression,” or providing a preprocessing PMML (Predictive Model Markup Language) file and a set of weighting parameters in the case of categorical models. That’s it.
Let me clarify a few basic concepts that will help you specifically with Google Prediction API:

  • You need Regression whenever your target output is a continuous numerical variable which may – or may not – span a specific range (e.g. the price of a car, the age of a person, etc.).
  • Classification is what you need whenever your target output can assume only a limited set of values, either numbers or strings, based on your application context.
  • Binary Classification is a special case in which your target output can assume only two values (let’s say True or False), for which simpler but more accurate models can be built. In some cases, building a set of binary models and combining their output might perform better than a single multi-class model.

On the other hand, your input features (your columns) can contain any type of data, although certain types are easier to work with (e.g. text analysis is clearly more complex than numerical regression).
The good news is that Google doesn’t impose any arbitrary constraints on your input data types or require any configuration process. All you need to do is format your dataset the right way. Think of it as a big table, where each row is an input vector and the first column is your target value.
You will need to upload a single CSV file, and Google Prediction API will take care of type detection, value normalization, feature selection, etc.
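For example, the first few rows of such a CSV file might look like this (the target label comes first, followed by the input features; the values below are illustrative and each row is truncated for readability):

#target label first, then the 560 input features (rows shortened here)
5,0.28858,-0.02029,-0.13290, ...
5,0.27842,-0.01641,-0.12352, ...
4,0.27965,-0.01946,-0.11346, ...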
[Image: Google Prediction API dataset]

Google Prediction API first step: uploading a dataset

The only Google Cloud service we need in order to use Google Prediction API is Cloud Storage, where we will store our dataset. You will not need to enable it on your Console, since it’s automatically enabled on every Google Cloud project.
First of all, we will create a new Project: you have to choose a name, an ID, and, optionally, the datacenter location.
[Image: Google Prediction API – storage project]

Just as we did for my previous articles on AmazonML and AzureML, we are going to train a model for HAR (Human Activity Recognition) using an open dataset built by UCI and freely available here.
The dataset is composed of more than 10,000 records, each one defined by 560 input features and one target column, which can take one of the following values: 1, 2, 3, 4, 5 and 6 (walking, walking upstairs, walking downstairs, sitting, standing, laying down).
Every record has been generated on a smartphone, using accelerometer and gyroscope data, and labelled manually based on the performer’s activity.
We are going to build a multi-class model to understand whether a given record of sensor data (generated in real time) can be reliably associated with walking, standing, sitting, laying, etc. Such a model might be useful for things like activity tracking and healthcare monitoring.
I have already gone through the process of manipulating the original dataset to create one single CSV file, since the original dataset has been split into smaller datasets (for training and testing) and input features have been separated from target values. You can find my Python script here.
The next step involves uploading the dataset file to Google Cloud Storage: you can do this from the Developers Console, clicking on “Storage > Cloud Storage > Storage Browser” on the side menu. Here you want to create a new bucket (i.e. a folder), select it, then upload your file. It will take a while (the file contains about 90MB of data). If you’ve got a slow network connection, you might try uploading only a smaller portion of the dataset: the model accuracy should still be pretty good with only 20% of our Ground Truth.
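If you prefer the command line, the same upload can be done with the gsutil tool (a sketch, assuming the machine-learning-dataset bucket name used in the code later on):

#create the bucket, then copy the dataset file into it
gsutil mb gs://machine-learning-dataset
gsutil cp dataset.csv gs://machine-learning-dataset/dataset.csv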
[Image: Google Prediction API – Cloud Storage]

How to use Google Prediction API

Unfortunately, Google Prediction API doesn’t provide any user-friendly Web interface, and almost every step beyond this point will be performed using Python scripts via an API call. If you really can’t stand coding, you might use the official APIs Explorer, but that’s not how you want to build your products, right?
[Image: Google Prediction API Explorer]

Before making real API calls, we need to enable Google Prediction API on our project. You will find it by clicking on “APIs & auth > APIs”: it’s the second-to-last item in the Google Cloud APIs list. Enabling an API is quite straightforward, and you need to do it only once for each of your projects. A single click on “Enable API” will do the job.
[Image: Google Prediction API list]
One last step: you need to create a new OAuth2 Client ID. Most Google APIs use OAuth2 for authentication: you can either create a Service account key (for server-to-server applications) or a Web Application Client. In our case, since we don’t need to work with our users’ data, we are going to use a server-to-server key.
You could also use a WebApp Client ID for a server-to-server application, but your code will end up slightly more complicated and you will need to go through the typical OAuth flow (either using your browser or copying and pasting OAuth codes into your terminal).
Let’s proceed. Click on “APIs & auth > Credentials” and “Create new Client ID”. Select “Service account” in the popup and confirm the creation. You will automatically download a JSON file containing the new Client ID data, including ‘client_email’ and ‘private_key’: we will open this file and use these two fields in our code.
[Image: Google Prediction API OAuth2]
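Here is a minimal sketch of how those two fields can be used to build an authenticated API client with the official Python libraries (the client_id.json file name and the Cloud Storage scope are my assumptions; adapt them to your project):

#build an authenticated Prediction API client (a sketch)
import json
from httplib2 import Http
from oauth2client.client import SignedJwtAssertionCredentials
from googleapiclient.discovery import build

#client_id.json is the Service account JSON file downloaded above
with open('client_id.json') as f:
    key = json.load(f)

credentials = SignedJwtAssertionCredentials(
    key['client_email'],
    key['private_key'],
    scope=[
        'https://www.googleapis.com/auth/prediction',
        'https://www.googleapis.com/auth/devstorage.read_only'
    ])
api = build('prediction', 'v1.6', http=credentials.authorize(Http()))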

Google Prediction API: Model Training Phase

We are finally ready to use Google Prediction API. I am going to work with their excellent official Python client – even if the documentation might sometimes be misleading. Also, I am going to show small code segments to focus on each sub-task, but you can find the whole script here.
As you can see from the official documentation, you can either use a Hosted Model or train your own. In order to train a new model, we are going to use the insert API method.
Every Google Prediction API method takes your project ID as its first parameter. The Trainedmodels.Insert method expects a body parameter containing a model ID (that you choose), your model type (classification or regression), and your dataset (either a Cloud Storage location or a set of instances).

#train a new classification model
api.trainedmodels().insert(project=project_id, body={
    'id': model_id,
    'storageDataLocation': 'machine-learning-dataset/dataset.csv',
    'modelType': 'CLASSIFICATION'
}).execute()

Optionally, you can specify the following parameters:

  • sourceModel: the ID of an existing model, in case you want to clone it.
  • storagePMMLLocation: a preprocessing file (PMML format).
  • utility: a weighting function for categorical models.

Of course the training phase is asynchronous and you will need to check your model status using the Trainedmodels.get method. As long as your model’s trainingStatus property is not “DONE”, you won’t be able to use it.
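A simple polling loop is enough here (a sketch, reusing the api, project_id and model_id objects defined above):

#poll every 10 seconds until the model training completes
import time

status = None
while status != 'DONE':
    time.sleep(10)
    model = api.trainedmodels().get(project=project_id, id=model_id).execute()
    status = model.get('trainingStatus')
print('Training status: %s' % status)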

Is your model good enough?

Now you could start generating new predictions right away, but you might want to analyze your model first, to see what kind of accuracy you can expect. You can call the Trainedmodels.analyze method to get a lot of useful information about your model.

#retrieve the new model's analysis
analysis = api.trainedmodels().analyze(project=project_id, id=model_id).execute()

This API call returns insights about your dataset (dataDescription), providing three numerical statistics for each input feature: count, mean and variance. These might be useful, but aren’t anything special. After all, you could have computed them by yourself without creating a new model.
What we really need is the modelDescription field, which contains a confusionMatrix structure. Although it’s not that easy to read in JSON format, this structure will tell you how your model behaves.
In order to compute it, Google had to split your dataset into two smaller sets: the first one was used to train the model, and the second one to evaluate it. If you do the math, based on the values and the dataset size (the row totals below add up to 1,043 records, roughly 10% of the 10,299 in our dataset), you will notice that Google applied a 90/10 split.

{
    'confusionMatrix': {
        '1': {
            '1': '166.00',
            '2': '0.00',
            '3': '1.00',
            '4': '0.00',
            '5': '0.00',
            '6': '0.00'
        },
        '2': {
            '1': '0.00',
            '2': '164.00',
            '3': '0.00',
            '4': '0.00',
            '5': '0.00',
            '6': '0.00'
        },
        '3': {
            '1': '0.00',
            '2': '2.00',
            '3': '158.00',
            '4': '0.00',
            '5': '0.00',
            '6': '0.00'
        },
        '4': {
            '1': '0.00',
            '2': '0.00',
            '3': '0.00',
            '4': '161.00',
            '5': '5.00',
            '6': '1.00'
        },
        '5': {
            '1': '0.00',
            '2': '0.00',
            '3': '0.00',
            '4': '9.00',
            '5': '180.00',
            '6': '0.00'
        },
        '6': {
            '1': '0.00',
            '2': '0.00',
            '3': '0.00',
            '4': '0.00',
            '5': '0.00',
            '6': '196.00'
        }
    },
    'confusionMatrixRowTotals': {
        '1': '167.00',
        '2': '164.00',
        '3': '160.00',
        '4': '167.00',
        '5': '189.00',
        '6': '196.00'
    },
    'modelinfo': {
        'kind': 'prediction#training'
    }
}

I admit that percentage values would have been easier to read, and some precision/recall statistics would also have been nice. You can always compute those by yourself, but let’s say that – very intuitively – you should see a lot of zeros around, with higher numbers on the main diagonal. Every non-zero value outside the main diagonal means that your model wrongly classified that many records.
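As a quick example, a few lines of Python can turn the raw counts of the analyze response into row percentages (reusing the analysis object from the snippet above):

#convert the confusion matrix raw counts into row percentages
matrix = analysis['modelDescription']['confusionMatrix']
totals = analysis['modelDescription']['confusionMatrixRowTotals']
for actual in sorted(matrix):
    row_total = float(totals[actual])
    row = ['%.2f%%' % (100 * float(matrix[actual][predicted]) / row_total)
           for predicted in sorted(matrix[actual])]
    print(actual, row)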
With this dataset and the applied data split, here is how the Confusion Matrix looks:

C. Matrix    Class 1    Class 2    Class 3    Class 4    Class 5    Class 6
Class 1      99.40%     0          0.60%      0          0          0
Class 2      0          100.00%    0          0          0          0
Class 3      0          1.25%      98.75%     0          0          0
Class 4      0          0          0          96.40%     3.00%      0.60%
Class 5      0          0          0          4.80%      95.20%     0
Class 6      0          0          0          0          0          100.00%

It is not too bad, considering the required effort and the total absence of configuration and data normalization. Google has successfully created a reliable model and we can now generate new predictions, based on new unlabelled data.

How to generate new Predictions

In order to simplify this demo, I am assuming that we have already computed every input feature on our smartphone, sent them to our server, and stored them in a local CSV file.
Therefore I am just reading the file and calling Trainedmodels.predict, which takes a csvInstance input, in the form of a simple list of values.

#activities dict
labels = {
	'1': 'walking', '2': 'walking upstairs', '3': 'walking downstairs',
	'4': 'sitting', '5': 'standing', '6': 'laying'
}
#read a new record from the local file (strip the trailing newline)
with open('record.csv') as f:
	record_str = f.readline().strip()
#obtain new prediction
prediction = api.trainedmodels().predict(project=project_id, id=model_id, body={
	'input': {
		'csvInstance': record_str.split(',')
	},
}).execute()
#retrieve classified label and reliability measures
label = prediction.get('outputLabel')
stats = prediction.get('outputMulti')
#show results
print("You are currently %s (class %s)." % (labels[label], label) )
print(stats)

This API call is pretty fast (compared with other ML services) and will return the following:

  • outputLabel: the predicted class (in our case the classified activity).
  • outputMulti: a list of reliability measures, one for each class.

If your model’s average accuracy is high enough, you can just take outputLabel as your prediction result. In case you can’t blindly trust your model – or if you can make more advanced decisions based on your application context – you may want to inspect outputMulti and make your final decision based on each class’s reliability measure.
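For example, a minimal sketch (reusing the stats list from the snippet above, where each entry contains a label and a score field) could pick the most reliable class explicitly:

#pick the class with the highest reliability score
best = max(stats, key=lambda s: float(s['score']))
print("Most reliable class: %s (score %s)" % (best['label'], best['score']))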

Google Prediction API: what’s next?

I believe Google’s black box has reached a pretty high level of abstraction for developers, although a more flexible dataset configuration and better analysis visualization would make the product easier to use for everyone, especially non-coders.
One nice feature is that you can always keep your model updated, adding new data on the fly without going through the whole training phase again. This is especially handy for systems that span long periods of time, since you can easily adapt your model to new data and conditions.
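This is done with the Trainedmodels.update method; here is a minimal sketch (new_record and new_label are hypothetical names for a fresh feature vector and its class):

#add a new labelled record to the trained model on the fly
api.trainedmodels().update(project=project_id, id=model_id, body={
    'csvInstance': new_record,  #hypothetical list of feature values
    'output': new_label  #hypothetical corresponding class label
}).execute()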
As far as speed and performance go, Google Prediction API seems like a great candidate for your real-time predictions. Compared with other ML services – and with this open dataset – it achieved the highest accuracy, took only a couple of minutes for training, and showed an average response time below 1.3 seconds per prediction.

Written by

Alex is a Software Engineer with a great passion for music and web technologies. He's experienced in web development and software design, with a particular focus on frontend and UX.
