
TensorFlow

This course is part of the Machine Learning on Google Cloud Platform learning path.

Contents

  • Introduction
  • Training Your First Neural Network
  • Improving Accuracy
  • Conclusion

Overview

Difficulty: Intermediate
Duration: 55m
Students: 1114
Rating: 4.9/5

Description

Machine learning is a hot topic these days and Google has been one of the biggest newsmakers. Recently, Google’s AlphaGo program beat the world’s No. 1 ranked Go player. That’s impressive, but Google’s machine learning is being used behind the scenes every day by millions of people. When you search for an image on the web or use Google Translate on foreign language text or use voice dictation on your Android phone, you’re using machine learning. Now Google has launched Cloud Machine Learning Engine to give its customers the power to train their own neural networks.

If you look in Google’s documentation for Cloud Machine Learning Engine, you’ll find a Getting Started guide. It gives a walkthrough of the various things you can do with ML Engine, but it says that you should already have experience with machine learning and TensorFlow first. Those are two very advanced subjects, which normally take a long time to learn, but I’m going to give you enough of an overview that you’ll be able to train and deploy machine learning models using ML Engine.

This is a hands-on course where you can follow along with the demos using your own Google Cloud account or a trial account.

Learning Objectives

  • Describe how an artificial neural network functions
  • Run a simple TensorFlow program
  • Train a model using a distributed cluster on Cloud ML Engine
  • Increase prediction accuracy using feature engineering and both wide and deep networks
  • Deploy a trained model on Cloud ML Engine to make predictions with new data

Resources

Updates

  • Nov. 16, 2018: Updated 90% of the lessons due to major changes in TensorFlow and Google Cloud ML Engine. All of the demos and code walkthroughs were completely redone.

Transcript

Before we get started learning about TensorFlow, I want to let you know about a useful resource for this course. Since we’ll be going through a lot of hands-on demos, I created a page where you’ll find all of the long commands you’ll need. You can copy and paste them instead of having to type them yourself. Some of the commands are extremely long, so I highly recommend using it. Go to https://github.com/cloudacademy/mlengine-intro.

 

TensorFlow is open-source software and its documentation is at tensorflow.org. One thing to keep in mind when you’re looking at TensorFlow examples is that TensorFlow has several different APIs. The low-level TensorFlow API gives you complete flexibility to build neural networks, but it requires a lot of coding and is harder to understand. Fortunately, there are quite a few high-level APIs, such as Keras and tf.estimator, that greatly simplify many tasks, require much less code, and are easier to understand. In fact, Google’s example code for ML Engine uses the tf.estimator API.

 

To get started, we’re going to create a neural network for a classic machine learning data set known as the Iris flower data set. It contains the lengths and widths of the petals and sepals of 150 irises: 50 samples of each of three species, Iris setosa, Iris versicolor, and Iris virginica. Back in 1936, the statistician Ronald Fisher developed a statistical model to distinguish the species from one another based on these four measurements.

 

First, we have to install TensorFlow, which is a Python library. You probably already have Python installed, but you’ll want to check the version. Cloud ML Engine supports Python 2.7 as well as Python 3.4 and higher, but it uses 2.7 by default. Google’s Getting Started tutorial also uses version 2.7, so that’s what we’re going to use in this course. You can see which version or versions you have installed by running “python -V” to check the Python 2 version and “python3 -V” to check the Python 3 version.
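For example:

    python -V     # reports the Python 2 version, e.g. "Python 2.7.x"
    python3 -V    # reports the Python 3 version, if one is installed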

 

OK, now there are a few different ways to install TensorFlow. If you want to keep your TensorFlow development isolated from the rest of your Python environment, then you can install it in a virtual Python environment. That way, you can install different versions of Python libraries than what you have in your native Python environment and won’t have to worry about the libraries needed for TensorFlow interfering with the libraries needed for your other Python applications or vice versa.

 

That’s the way I’m going to do it. To install the virtualenv tool, we need to use pip, the Python package installer. You can copy the command from the readme file in the GitHub repository for this course, if you’d like.

 

Now, use this command to install the virtualenv package. If you already have virtualenv installed, it’ll upgrade it. To create your virtual environment, type “virtualenv” and then what you want to call the environment. I’ll call it “mlenv”. Now you can go into that virtual environment by typing “source mlenv/bin/activate”. You can tell that we’re in that environment now because it says “mlenv” in brackets over here. If you need to exit the virtual environment at some point, type “deactivate”.
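In case you’re not following along on screen, the whole sequence looks roughly like this (the exact install command is also in the course’s readme file):

    pip install --upgrade virtualenv    # install the virtualenv package, or upgrade it if it's already there
    virtualenv mlenv                    # create a virtual environment named "mlenv"
    source mlenv/bin/activate           # enter the environment; the prompt will show "(mlenv)"
    deactivate                          # run this later whenever you want to leave the environment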

 

Alright, we’re finally ready to install TensorFlow. All you have to do is type “pip install tensorflow”. This will take a while, so I’ll fast forward.
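For reference, the install command is simply the following. I’ll also install pandas now, since we’ll see in a minute that the sample code imports it:

    pip install tensorflow    # install the latest TensorFlow release into the virtual environment
    pip install pandas        # the sample code also imports pandas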

 

OK, now let’s have a look at the sample code for the Iris dataset in the GitHub repository. The code does five things: it loads the data, constructs a neural network, trains the model by running lots of data through it, evaluates the accuracy, and classifies new samples. I’ll show you which parts of the code do these things.

 

First, there are a bunch of import statements. The first few are so that it will run in older versions of Python. You’ll notice that it imports pandas, so we need to install that, too.
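The exact import block in iris.py may differ a bit, but in TensorFlow’s classic iris example it looks something like this:

    # These __future__ imports are what let the script behave consistently on older Python versions.
    from __future__ import absolute_import
    from __future__ import division
    from __future__ import print_function

    import pandas as pd
    import tensorflow as tf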

 

OK, back to the code. Now, this whole big chunk here just reads in the data. It downloads the CSV files and then reads them into datasets called “train” and “test”. The first one contains 80% of the iris samples, and the second one contains the other 20%. That’s because you typically want to train your model on the majority of the data, usually 70-80% of it, and then evaluate its accuracy on data it hasn’t seen before, which is why you hold back the remaining 20-30%.
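Here’s a rough sketch of what that data-loading chunk does. The URLs and column names are the ones TensorFlow’s standard iris example uses, so the details in the course repo may differ slightly:

    TRAIN_URL = "http://download.tensorflow.org/data/iris_training.csv"
    TEST_URL = "http://download.tensorflow.org/data/iris_test.csv"
    COLUMNS = ['SepalLength', 'SepalWidth', 'PetalLength', 'PetalWidth', 'Species']

    # Download each CSV file (cached locally) and read it into a pandas DataFrame.
    train_path = tf.keras.utils.get_file(TRAIN_URL.split('/')[-1], TRAIN_URL)
    test_path = tf.keras.utils.get_file(TEST_URL.split('/')[-1], TEST_URL)
    train = pd.read_csv(train_path, names=COLUMNS, header=0)
    test = pd.read_csv(test_path, names=COLUMNS, header=0)

    # Split off the label column (the species) from the four feature columns.
    train_x, train_y = train, train.pop('Species')
    test_x, test_y = test, test.pop('Species')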

 

This next block is all you need to construct a neural network. That’s because we’re using the high-level API. If we were using TensorFlow’s regular API, it would take a lot more code.

 

The next line is what makes tf.estimator so powerful and easy to use. Instead of having to write the code to build a neural network, we can just tell it to build one for us. The first thing to note is that it’s called “DNNClassifier”. This says what type of neural network we want. Here’s a list of the different ones that are available.

 

There are two main types of estimators supported by tf.estimator: regressors and classifiers. A regressor is used to predict a single numeric value, such as the home value in my earlier example. A classifier is used to predict which class something falls into, such as which of the three species an iris is.
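For reference, the premade estimators available in tf.estimator at the time included the following (newer TensorFlow releases add more):

    tf.estimator.LinearClassifier
    tf.estimator.LinearRegressor
    tf.estimator.DNNClassifier
    tf.estimator.DNNRegressor
    tf.estimator.DNNLinearCombinedClassifier
    tf.estimator.DNNLinearCombinedRegressor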

 

The estimator we’re going to use is called DNNClassifier. DNN stands for Deep Neural Network. I’ll explain what that means after we run the program.

 

Alright, back to the code. To construct a DNNClassifier, you need to tell it what the feature columns look like, how many hidden units to use (which is a deep neural network concept I’ll explain later), and the number of classes, which is 3 in this case because we want it to classify the irises into 3 species. The feature columns are defined up here. This says that the features (which are the four length and width measurements of the flowers) are all numeric.

 

Normally, this part would take a lot more code, too, even with the high-level API, but since all of the features are numeric, we can get away with defining them with these few lines.
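Put together, the feature-column and classifier construction amounts to something like this (the two hidden layers of 10 nodes each are just the values Google’s example uses; hidden units are the concept I’ll explain later):

    # Each of the four measurements is a simple numeric feature.
    feature_columns = [tf.feature_column.numeric_column(key=key)
                       for key in train_x.keys()]

    # Ask tf.estimator to build a deep neural network classifier for us:
    # two hidden layers of 10 nodes each, and 3 output classes (one per species).
    classifier = tf.estimator.DNNClassifier(
        feature_columns=feature_columns,
        hidden_units=[10, 10],
        n_classes=3)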

 

This statement creates the DNNClassifier, that is, the neural network, but it doesn’t run it. To do that, we need to call its train method. You only need to provide two arguments: the function that will input the data (which is up here) and the number of times to run data through the model, which is set to 2,000 here.

 

Note that the input function specifies features and labels. The “label” gives the correct classification of a flower, that is, which of the three species the flower is. We need this so we can compare the model’s guess with the correct answer.
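In code, the training call is along these lines. Here, train_input_fn stands in for whatever the script’s training input function is actually called, and the batch size is illustrative:

    # Train for 2,000 steps, pulling a batch of features and labels
    # from the training input function at each step.
    classifier.train(
        input_fn=lambda: train_input_fn(train_x, train_y, batch_size=100),
        steps=2000)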

 

Once it’s done training the model, you need to run the test data through it to see how accurate the model is. The test data is the 20% of the original data that we held back so the classifier hasn’t seen it before. The code for this is the same as it was for the training run, except that we call the evaluate method instead of the train method, and we specify an input function that provides the test data instead of the training data.
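The evaluation call mirrors it, with eval_input_fn again standing in for the script’s test-data input function:

    # Run the held-back test data through the trained model and
    # collect metrics, including accuracy.
    eval_result = classifier.evaluate(
        input_fn=lambda: eval_input_fn(test_x, test_y, batch_size=100))
    print('Test accuracy: {accuracy:0.3f}'.format(**eval_result))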

 

When it’s done, we print the accuracy score. If it’s a good score, then we can be satisfied with our model and use it to classify new irises in the future. Normally, you’d want to save this model so you could use it in the future without having to run through the whole training process again, but this example script doesn’t do that. It just runs the model on three new irises to demonstrate how the predict method works. I’ll show you how to save a model later on in the course.

 

The predict method only needs one argument, which is the input function to use. Here, since there are only three new iris samples to classify, it just hardcodes the data in the script.

 

The predict method returns the classifications for the data you give it, which in this case, should be a Setosa, a Versicolor, and a Virginica.
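Sketched out, the prediction step looks something like this. The three hard-coded samples are the ones TensorFlow’s iris example uses, so the numbers in the course repo may differ:

    # Three new flowers to classify, one list entry per flower.
    predict_x = {
        'SepalLength': [5.1, 5.9, 6.9],
        'SepalWidth':  [3.3, 3.0, 3.1],
        'PetalLength': [1.7, 4.2, 5.4],
        'PetalWidth':  [0.5, 1.5, 2.1],
    }
    SPECIES = ['Setosa', 'Versicolor', 'Virginica']

    # predict() returns an iterator with one dictionary of results per sample.
    predictions = classifier.predict(
        input_fn=lambda: eval_input_fn(predict_x, labels=None, batch_size=100))

    for pred in predictions:
        class_id = pred['class_ids'][0]
        probability = pred['probabilities'][class_id]
        print(SPECIES[class_id], probability)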

 

OK, now that you’ve seen what the code is supposed to do, it’s time to run it. Go to GitHub. You can either download the repository as a zip file or do a “git clone” if you have git installed on your computer. I’m going to do a “git clone”. Then go into “mlengine-intro/iris/trainer”. If you downloaded the zip file, then the base directory will have a “-master” at the end of it. The script we just went through is in a file called iris.py, so run it with “python iris.py”.
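If you’re using git, the whole thing boils down to:

    git clone https://github.com/cloudacademy/mlengine-intro.git
    cd mlengine-intro/iris/trainer
    python iris.py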

 

First, it says that the accuracy it achieved on the test data was 96.7%, which is quite good. Then it says that the classifications for the three new iris samples are Setosa, Versicolor, and Virginica, which is what we expected, so the predict method worked.

 

Great. You’ve successfully trained a neural network to classify irises. In the next lesson, I’ll explain deep neural networks.

About the Author

Students: 14498
Courses: 41
Learning paths: 21

Guy launched his first training website in 1995 and he's been helping people learn IT technologies ever since. He has been a sysadmin, instructor, sales engineer, IT manager, and entrepreneur. In his most recent venture, he founded and led a cloud-based training infrastructure company that provided virtual labs for some of the largest software vendors in the world. Guy’s passion is making complex technology easy to understand. His activities outside of work have included riding an elephant and skydiving (although not at the same time).