Google Prediction API: a Machine Learning black box for developers

Google Prediction API provides a RESTful interface to build Machine Learning models

This is my third article on how to build Machine Learning models in the Cloud. I previously explored Amazon Machine Learning and Azure Machine Learning – relative newcomers in the cloud data market. Google Prediction API, on the other hand, was released all the way back in 2011, and offers a very stable and simple way to train Machine Learning models via a RESTful interface, although it might seem less friendly if you generally prefer browser interfaces.

I am not going to explore the wide range of services offered by Google Cloud Platform here: you can easily try the Developers Console by yourself for free, sign up for the Free Trial offered by Google ($300 in credit to use for 2 months), and check out Cloud Academy’s courses on Google Cloud Platform.

Google Prediction API: Machine Learning Black Box

We can define Google’s approach as a “black box”, since you get no control over what happens under the hood: your model configuration is restricted to specifying “Classification” vs. “Regression,” or providing a preprocessing PMML (Predictive Model Markup Language) file and a set of weighting parameters in the case of categorical models. That’s it.

Let me clarify a few basic concepts that will help you specifically with Google Prediction API:

  • You need Regression whenever your target output is a continuous numerical variable, which may – or may not – span a specific range (e.g. the price of a car, the age of a person, etc.).
  • Classification is what you need whenever your target output can assume only a limited set of values, either numbers or strings, based on your application context.
  • Binary Classification is a special case in which your target output can assume only two values (let’s say True or False), for which simpler but more accurate models can be built. In some cases, building a set of binary models and combining their output might perform better than a single multi-class model.

On the other hand, your input features (your columns) can contain any type of data, although certain types are easier to work with (e.g. text analysis is clearly more complex than numerical regression).

The good news is that Google doesn’t impose any arbitrary constraints on your input data types or require any configuration process. All you need to do is format your dataset the right way. Think of it as a big table, where each row is an input vector and the first column is your target value.

You will need to upload a single CSV file, and Google Prediction API will take care of type detection, value normalization, feature selection, etc.
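As a minimal sketch (with made-up feature values, not real HAR records), a well-formatted training file simply puts the target label in the first column of every row:

```python
import csv

# Each row: target label first, then the input features.
# The values below are made-up examples, not real HAR data.
rows = [
    ["1", 0.2808, -0.0203, -0.1329],  # "1" = walking
    ["4", 0.2571, -0.0233, -0.0147],  # "4" = sitting
]

with open("training_data.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)  # no header row: just raw data

with open("training_data.csv", newline="") as f:
    first_row = next(csv.reader(f))
```

The real HAR file has 560 feature columns per row, but the layout is identical.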

Google Prediction API Dataset

Google Prediction API first step: uploading a dataset

The only Google Cloud service we need in order to use Google Prediction API is Cloud Storage, where we will store our dataset. You will not need to enable it on your Console, since it’s automatically enabled on every Google Cloud project.

First of all we will create a new Project. You have to choose a name, an ID and, optionally, the datacenter location.

Google Prediction API - storage project


Just as we did for my previous articles on AmazonML and AzureML, we are going to train a model for HAR (Human Activity Recognition) using an open dataset built by UCI and freely available here.

The dataset is composed of more than 10,000 records, each one defined by 560 input features and one target column, which can take one of the following values: 1, 2, 3, 4, 5 and 6 (walking, walking upstairs, walking downstairs, sitting, standing, laying down).

Every record has been generated on a smartphone, using accelerometer and gyroscope data, and labelled manually based on the performer’s activity.

We are going to build a multi-class model to understand whether a given record of sensor data (generated in real time) can be reliably associated with walking, standing, sitting, laying, etc. Such a model might be useful for things like activity tracking and healthcare monitoring.

I have already gone through the process of manipulating the original dataset to create one single CSV file, since the original data had been split into smaller sets (for training and testing) and the input features had been separated from the target values. You can find my Python script here.

The next step involves uploading the dataset file to Google Cloud Storage: you can do this from the Developers Console, clicking on “Storage > Cloud Storage > Storage Browser” on the side menu. Here you want to create a new bucket (i.e. a folder), select it, then upload your file. It will take a while (the file contains about 90MB of data). If you’ve got a slow network connection, you might try to upload only a smaller portion of the dataset. The model accuracy should be pretty good with only 20% of our Ground Truth.

Google Prediction API - cloud storage
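If you do want to upload just a subsample, here is a quick hypothetical sketch (file names are placeholders) that keeps roughly one record in five:

```python
import random

def subsample(src_path, dst_path, fraction=0.2, seed=42):
    """Copy roughly `fraction` of the lines from src_path into dst_path."""
    rng = random.Random(seed)  # seeded for reproducibility
    kept = 0
    with open(src_path) as src, open(dst_path, "w") as dst:
        for line in src:
            if rng.random() < fraction:
                dst.write(line)
                kept += 1
    return kept

# Tiny self-contained demo; on the real dataset you would call
# subsample("har_dataset.csv", "har_dataset_20pct.csv") before uploading.
with open("toy.csv", "w") as f:
    for i in range(1000):
        f.write("%d,0.1,0.2\n" % i)

kept = subsample("toy.csv", "toy_20pct.csv")
```

Sampling line by line keeps memory usage flat even on the full 90MB file.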

How to use Google Prediction API

Unfortunately, Google Prediction API doesn’t provide any user-friendly Web interface, and almost every step beyond this point will be performed using Python scripts via an API call. If you really can’t stand coding, you might use the official APIs Explorer, but that’s not how you want to build your products, right?

Google Prediction API Explorer


Before making real API calls, we need to enable Google Prediction API on our project. You will find it by clicking on “APIs & auth > APIs”: it is the second-to-last item in the Google Cloud APIs list. Enabling an API is quite straightforward, and you only need to do it once for each of your projects. A single click on “Enable API” will do the job.

Google Prediction API list

One last step: you need to create a new OAuth2 Client ID. Most Google APIs use OAuth2 for authentication: you can either create a Service account key (for server-to-server applications) or a Web Application Client. In our case, since we don’t need to work with our users’ data, we are going to use a server-to-server key.

Alternatively, you could use a WebApp Client ID even for a server-to-server application, but your code would end up slightly more complicated and you would need to go through the typical OAuth flow (either using your browser or copying and pasting OAuth codes into your terminal).

Let’s proceed. Click on “APIs & auth > Credentials” and “Create new Client ID”. Select “Service account” in the popup and confirm the creation. You will automatically download a JSON file containing the new Client ID data, including ‘client_email’ and ‘private_key’: we will open this file and use these two fields in our code.

Google Prediction API oAuth2
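Here is a sketch of how those two fields can be used to build an authorized Prediction API client with the Python libraries of that era (oauth2client and google-api-python-client); the JSON file name is a placeholder, and note that newer oauth2client releases replaced SignedJwtAssertionCredentials with ServiceAccountCredentials (see the comments below this article):

```python
import json

def read_service_account(json_path):
    """Extract the two fields we need from the downloaded Client ID file."""
    with open(json_path) as f:
        data = json.load(f)
    return data["client_email"], data["private_key"]

def build_prediction_service(json_path):
    # Lazy imports: these require google-api-python-client, oauth2client
    # and httplib2 to be installed.
    import httplib2
    from apiclient.discovery import build
    from oauth2client.client import SignedJwtAssertionCredentials

    email, key = read_service_account(json_path)
    credentials = SignedJwtAssertionCredentials(
        email,
        key,
        scope="https://www.googleapis.com/auth/prediction",
    )
    return build("prediction", "v1.6", http=credentials.authorize(httplib2.Http()))

# service = build_prediction_service("my-client-id.json")  # placeholder file name
```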

Google Prediction API: Model Training Phase

We are finally ready to use Google Prediction API. I am going to work with the excellent official Python client – even though the documentation can sometimes be misleading. Also, I am going to show small code segments to focus on each sub-task, but you can find the whole script here.

As you can see from the official documentation, you can either use a Hosted Model or train your own models. In order to train a new model, we are going to use the insert API method.

Every Google Prediction API method takes your project ID as its first parameter. The Trainedmodels.insert method expects a body parameter containing a model ID (that you choose), your model type (classification or regression), and your dataset (either a Cloud Storage location or a set of instances).

Optionally, you can specify the following parameters:

  • sourceModel: the ID of an existing model, in case you want to clone it.
  • storagePMMLLocation: a preprocessing file (PMML format).
  • utility: a weighting function for categorical models.

Of course the training phase is asynchronous and you will need to check your model status using the Trainedmodels.get method. As long as your model’s trainingStatus property is not “DONE”, you won’t be able to use it.
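Putting it together, a hedged sketch of the training call plus a polling loop (the project ID, bucket and model ID are placeholders, and `service` is an authorized client as built earlier):

```python
import time

def make_training_body(model_id, storage_location):
    """Request body for trainedmodels().insert()."""
    return {
        "id": model_id,                           # chosen by you
        "modelType": "CLASSIFICATION",            # or "REGRESSION"
        "storageDataLocation": storage_location,  # "bucket-name/file.csv"
    }

def train_and_wait(service, project_id, body, poll_seconds=10):
    """Start training, then poll until trainingStatus is DONE."""
    api = service.trainedmodels()
    api.insert(project=project_id, body=body).execute()
    while True:
        status = api.get(project=project_id, id=body["id"]).execute()
        if status.get("trainingStatus") == "DONE":
            return status
        time.sleep(poll_seconds)

body = make_training_body("har-model", "my-bucket/har_dataset.csv")
# status = train_and_wait(service, "my-project-id", body)
```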

Is your model good enough?

Now you could start generating new predictions, but you might first want to analyze the model to see what kind of accuracy you can expect. You can call the Trainedmodels.analyze method to get a lot of useful information about your model.

This API call returns insights about your dataset (dataDescription), providing three numerical statistics for each input feature: count, mean and variance. These might be useful, but aren’t anything special. After all, you could have computed them by yourself without creating a new model.

What we really need is the modelDescription field. Indeed, it contains a confusionMatrix structure. Although it’s not that easy to read in a JSON format, this structure will tell you how your model behaves.

In order to compute it, Google had to split your dataset into two smaller sets: the first one was used to train the model, and the second one to evaluate it. If you do the math, based on the values and the dataset size, you will notice that Google applied a 90/10 split.

I admit that percentage values would have been easier to read, and some precision/recall statistics would also have been nice. You can always compute those by yourself, but let’s say that – very intuitively – you should see a lot of zeros, with higher numbers on the main diagonal. Every non-zero value outside the main diagonal means that your model misclassified that many records.

With this dataset and the applied data split, here is how the Confusion Matrix looks:

C. Matrix   Class 1   Class 2   Class 3   Class 4   Class 5   Class 6
Class 1      99.40%    0         0.60%     0         0         0
Class 2      0       100.00%     0         0         0         0
Class 3      0         1.25%    98.75%     0         0         0
Class 4      0         0         0        96.40%     3.00%     0.60%
Class 5      0         0         0         4.80%    95.20%     0
Class 6      0         0         0         0         0       100.00%

It is not too bad, considering the required effort and the total absence of configuration and data normalization. Google has successfully created a reliable model and we can now generate new predictions, based on new unlabelled data.
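As a sketch, this is how you could fetch the matrix and turn its raw counts into row percentages like the ones above (the helper is a hypothetical convenience, not part of the API; the exact value types in the response may vary, so counts are coerced to float):

```python
def row_percentages(confusion_matrix):
    """Convert a confusionMatrix (a dict of dicts of counts) into
    per-row percentages, rounded to two decimals."""
    result = {}
    for actual, row in confusion_matrix.items():
        total = sum(float(v) for v in row.values())
        result[actual] = {
            predicted: round(100.0 * float(v) / total, 2) if total else 0.0
            for predicted, v in row.items()
        }
    return result

# analysis = service.trainedmodels().analyze(
#     project="my-project-id", id="har-model").execute()
# matrix = analysis["modelDescription"]["confusionMatrix"]

# Toy matrix with made-up counts:
toy = {"1": {"1": "166.0", "2": "1.0"}, "2": {"1": "2.0", "2": "98.0"}}
pct = row_percentages(toy)
```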

How to generate new Predictions

In order to simplify this demo, I am assuming that we have already computed every input feature on our smartphone, sent it to our server, and stored it into a local CSV file.

Therefore I am just reading the file and calling Trainedmodels.predict, which takes a csvInstance input, in the form of a simple list of values.
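A hedged sketch of that loop (the project and model IDs are placeholders, and features.csv is the hypothetical local file mentioned above):

```python
import csv

def make_predict_body(feature_values):
    """Request body for trainedmodels().predict(): one unlabelled record."""
    return {"input": {"csvInstance": feature_values}}

def predict_file(service, project_id, model_id, csv_path):
    """Yield the predicted label for every record in the CSV file."""
    with open(csv_path) as f:
        for record in csv.reader(f):
            response = service.trainedmodels().predict(
                project=project_id, id=model_id,
                body=make_predict_body(record)).execute()
            yield response["outputLabel"]

body = make_predict_body([0.2808, -0.0203, -0.1329])  # made-up values
# labels = list(predict_file(service, "my-project-id", "har-model", "features.csv"))
```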

This API call is pretty fast (with respect to other ML services) and will return the following:

  • outputLabel: the predicted class (in our case the classified activity).
  • outputMulti: a list of reliability measures, one for each class.

If your model’s average accuracy is high enough, you can just take outputLabel as your prediction result. In case you can’t blindly trust your model – or if you can make more advanced decisions based on your application context – you may want to inspect outputMulti and make your final decision based on each class’ reliability measure.
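As a hypothetical example of that kind of decision logic, you could accept the top label only when its score clears a confidence threshold:

```python
def confident_label(output_multi, threshold=0.8):
    """Return the top label if its score clears the threshold, else None.
    output_multi is the list of {"label", "score"} dicts from the predict
    response; scores may come back as strings, so they are coerced."""
    best = max(output_multi, key=lambda item: float(item["score"]))
    if float(best["score"]) >= threshold:
        return best["label"]
    return None  # not confident enough: fall back to app-specific logic

# Made-up response fragment (sitting vs. standing are easy to confuse):
sample = [{"label": "4", "score": "0.91"}, {"label": "5", "score": "0.09"}]
top = confident_label(sample)
```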

Google Prediction API: what’s next?

I believe Google’s black box reached a pretty high level of abstraction for developers, although a more flexible dataset configuration and better analysis visualization would make the product easier to use for everyone, especially non-coders.

One nice feature is that you can keep your model updated by adding new data on the fly, without going through the whole training phase again. This is especially useful for systems that span long periods of time, since you can easily adapt your model to new data and conditions without the need for a new modelling phase.
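Those incremental updates go through the Trainedmodels.update method, whose body takes one new labelled record at a time; a hedged sketch (the IDs are placeholders):

```python
def make_update_body(label, feature_values):
    """Request body for trainedmodels().update(): one new labelled record."""
    return {"output": label, "csvInstance": feature_values}

body = make_update_body("2", [0.3005, -0.0122, -0.1178])  # made-up record
# service.trainedmodels().update(
#     project="my-project-id", id="har-model", body=body).execute()
```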

As far as speed and performance go, Google Prediction API seems like a great candidate for your real-time predictions. Compared with other ML services – and with this open dataset – it achieved the highest accuracy, taking only a couple of minutes for training, with an average response time below 1.3 seconds for real-time predictions.

Alex Casalboni

Alex is a Software Engineer with a great passion for music and web technologies, experienced in web development and software design, with a particular focus on frontend and UX. His sound and music engineering background allows him to deal with multimedia, signal processing, machine learning, AI and a lot of interesting tools, which are even more powerful when you merge them with the Cloud.


  • anonymus

    The above discussion was very helpful.
    I used this for regression but getting wrong predicted value.
    the predicted value is far away from the actual value.
    Please help

    • Hi,
      what dataset have you been using? (how many rows and columns)
      Can you share a small subsample of it, some code snippets or your model evaluation confusionMatrix?

  • Anonymous

    Hi Alex,

    Great article, thanks for posting. One question I’ve been having about Google’s Prediction API I haven’t found the answer to anywhere is how many models can you have trained? I am planning to use the API to classify some data with a major and minor classification, in a range of about 20 categories. Would you suggest training models for each category if that’s possible?

    • I don’t think there is any limitation on the maximum number of models you can train, while you have some on the # of predictions, updates and training size (Free Quota). Not such a problem if you go for paid usage.

      As far as your 20 categories, you may want to have 20 individual binary models but – assuming a linear distribution – in the average case you’d have to generate about 10 predictions (which might take several seconds, unless you perform the API call from within a Google datacenter).

      I would suggest to create a multi-class model first and see how it performs. In case it doesn’t perform well for a considerable number of classes, you still may want to use it as a first step (hopefully it will perform well for the most important classes), and go for additional binary predictions in order to increase the overall accuracy.

      Does it make sense to you? Let me how it goes.

  • deeplearning4j

    Do you know which algorithms they’re using? Logistic regression, etc?

    • Hi @deeplearning4j:disqus, apparently there is no way to know since it’s a complete black box. I guess you could try to take a guess by solving a set of problems with different approaches and compare the results to the Google’s ones, although I never took the time to investigate deeper. Let me know if you find out.

  • Idan Cohen

    Hi Alex, great article ! thanks for the information. After reviewing the 3 ML tools (amazon, azure and google), can you please summaries your comparison between the three?

  • Not sure if this was around when you wrote article but there is GUI of sorts with google spreadsheets. It’s very slow if you are doing a large dataset but an easy way to get started with the API – a

    • Hi @danvoell:disqus,
      thank you for mentioning the Google Spreadsheet Add On.

      Yes, it was already available back then, but I didn’t mention it as I was more interested in describing the API itself since – in most scenarios – you want to use the API as a real-time service and feed your model with new unseen data on the fly.

      As you said though, if you have a static and reasonably small dataset, the Google Spreadsheets Add On would be a much simpler solution, especially for non-technical users.

      It would be nice if the API natively accepted Google Spreadsheets or Google FusionTables as input, but I’m afraid the Google Cloud team is not going into that direction anytime soon.

  • genericid

    I just wanted to add that you can have a quick bite of Google Prediction API right in the browser by enabling the Google Smart Autofill Spreadsheets addon in Google Spreadsheets as described here at

    You will be able to get predictions for column values based on other columns! It’s free and it’s remarkably slow, but it can come handy and it’s really helpful at giving ML a go!

    • Hi @genericid:disqus, yes the add on has been mentioned in other comments too.

      It’s definitely a quick way to find out if the API can fit a useful model for your data, then I’d always recommend to build an actual model, both for reusability and performance concerns.

      • genericid

        Sure, of course… I think the main value of that addon is more of showing what ML is all about, rather than using it for actual work… but I will keep it in consideration in cases I have to work with big data, and I’ll see what happens… To the same regard, do you happen to know any other tools like this that can be used to quickly make use of a ML algorithm for learning purposes? Cheers!

        • Totally agree :)

          I can’t think of any other tool like this right now, but in the meanwhile you can have a look at our recent webinar: we discussed how we use ML in the Cloud at Cloud Academy. You will also find an interesting introduction to ML technologies, focused on what ML can do for you.

  • Teddy Morris-Knower

    Thanks for the super helpful and thorough walkthrough! I just wanted to note that Google moved the class SignedJwtAssertionCredentials to oauth2client.service_account.ServiceAccountCredentials, so for anyone following and getting an AttributeError: ‘module’ object has no attribute ‘SignedJwtAssertionCredentials’ error, look through this discussion for the fix.

    I was able to fix it by changing
    client.SignedJwtAssertionCredentials(email, key, scope=scope)
    to
    ServiceAccountCredentials.from_json_keyfile_name('service_account.json', scopes=scope)

    note that you also have to import ServiceAccountCredentials from oauth2client.service_account

    • Hi @ted@teddymorrisknower:disqus,

      thank you for the heads-up!

      I will add a note on my gist and update it asap.

  • Elvis Gomez

    I like the article, however a screenshot of the output would’ve been nice, to determine how well the model performed. I was certainly interested to see the output-multi screenshot, as this would show the different output parameters.

    Thanks for writing this!