
Conclusion

Overview

Difficulty: Advanced
Duration: 38m
Students: 528

Description

Once you know how to build and train neural networks using TensorFlow and Google Cloud Machine Learning Engine, what’s next? Before long, you’ll discover that prebuilt estimators and default configurations will only get you so far. To optimize your models, you may need to create your own estimators, try different techniques to reduce overfitting, and use custom clusters to train your models.

Convolutional Neural Networks (CNNs) are very good at certain tasks, especially recognizing objects in pictures and videos. In fact, they’re one of the technologies powering self-driving cars. In this course, you’ll follow hands-on examples to build a CNN, train it using a custom scale tier on Machine Learning Engine, and visualize its performance. You’ll also learn how to recognize overfitting and apply different methods to avoid it.

Learning Objectives

  • Build a Convolutional Neural Network in TensorFlow
  • Analyze a model’s training performance using TensorBoard
  • Identify cases of overfitting and apply techniques to prevent it
  • Scale a Cloud ML Engine job using a custom configuration

Intended Audience

  • Data professionals
  • People studying for the Google Certified Professional Data Engineer exam

Prerequisites

  • Experience building and training neural networks with TensorFlow and Google Cloud Machine Learning Engine

This Course Includes

  • Many hands-on demos

Resources

The GitHub repository for this course is at https://github.com/cloudacademy/ml-engine-doing-more.

Transcript

I hope you enjoyed learning more about TensorFlow and Machine Learning Engine. Let’s do a quick review of what you learned.


A convolutional neural network typically consists of convolutional, pooling, and fully-connected layers. A convolutional layer attempts to extract higher-level features from an image by applying weights to a sliding window, which results in a feature map. Zero-padding is often used to ensure that the feature map is the same size as the image. A convolutional layer applies a number of different weight matrices, called filters, against an image.
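For example, a convolutional layer in the tf.layers API (TensorFlow 1.x) might look like this; the input shape and filter settings below are illustrative, not the course's exact values:

    import tensorflow as tf

    # A batch of 28x28 grayscale images (this shape is an assumption).
    images = tf.placeholder(tf.float32, [None, 28, 28, 1])

    # 32 filters (weight matrices) of size 5x5 slide across each image.
    # padding='same' zero-pads the edges so each feature map stays 28x28.
    conv1 = tf.layers.conv2d(images, filters=32, kernel_size=5,
                             padding='same', activation=tf.nn.relu)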


A pooling layer performs dimensionality reduction, or downsampling, by sliding a window over each feature map, using a stride of more than one, and saving the highest value in the window.
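Continuing the sketch above, a max-pooling layer with a 2x2 window and a stride of 2 halves each feature map:

    # Slide a 2x2 window over each 28x28 feature map with a stride of 2,
    # keeping only the highest value in each window -> 14x14 output.
    pool1 = tf.layers.max_pooling2d(conv1, pool_size=2, strides=2)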


In a fully-connected layer, each neuron is connected to every neuron in the previous layer. This allows the network to consolidate all of the features from the whole image so it can make a prediction regarding what’s in the image.
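In the tf.layers API, that means flattening the pooled feature maps and feeding them through dense layers; the unit counts here are illustrative:

    # Flatten each image's feature maps into one long vector, then connect
    # every value to every neuron in a fully-connected (dense) layer.
    flat = tf.layers.flatten(pool1)
    dense = tf.layers.dense(flat, units=1024, activation=tf.nn.relu)

    # One output per class; 10 classes is an assumption for this sketch.
    logits = tf.layers.dense(dense, units=10)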


A convolutional layer also applies an activation function, which brings nonlinearity to the network. This allows it to model real-world images more accurately. Typically the ReLU activation function is used, which simply makes all negative values zero.
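As a tiny illustration (the input values are arbitrary):

    relu = tf.nn.relu(tf.constant([-3.0, 0.5, 2.0]))
    with tf.Session() as sess:
        print(sess.run(relu))  # [0.  0.5 2. ] -- negatives become zero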


You can easily build a convolutional model in TensorFlow using the tf.layers API. To turn it into a custom estimator, you pass your model function to the Estimator class.
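A minimal sketch of that pattern, with the layers abbreviated (the 'image' feature key and the model directory are placeholders, not the course's exact names):

    def model_fn(features, labels, mode):
        # Build the network with tf.layers as sketched above (abbreviated here).
        net = tf.layers.flatten(features['image'])
        logits = tf.layers.dense(net, units=10)
        if mode == tf.estimator.ModeKeys.PREDICT:
            return tf.estimator.EstimatorSpec(
                mode, predictions=tf.argmax(logits, axis=1))
        loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
        train_op = tf.train.GradientDescentOptimizer(0.001).minimize(
            loss, global_step=tf.train.get_global_step())
        return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)

    estimator = tf.estimator.Estimator(model_fn=model_fn, model_dir='/tmp/cnn_model')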


TensorBoard is a handy visualization tool that comes with TensorFlow. To use it, you just have to specify the location of the directory where the model’s checkpoints were saved.
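For example, if the estimator above saved its checkpoints to /tmp/cnn_model, you would point TensorBoard at that directory and open the URL it prints:

    tensorboard --logdir=/tmp/cnn_model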


A loss function estimates how far off a model’s predictions are from the truth. After every batch of data gets run through the model, the optimizer tries to reduce the loss by adjusting the weights. The amount it changes the weights is determined by the learning rate. If you set the learning rate too high, then the optimizer may not be able to find optimum values, but if you set it too low, then it will take a long time for the model to find optimum values. You can dramatically improve how quickly a model converges by using a sophisticated optimizer, such as the AdamOptimizer.
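Swapping the optimizer in the model_fn sketch above is a one-line change (0.001 happens to be AdamOptimizer's default learning rate):

    # Adam adapts the effective learning rate per weight, which often makes
    # the model converge much faster than plain gradient descent.
    optimizer = tf.train.AdamOptimizer(learning_rate=0.001)
    train_op = optimizer.minimize(loss, global_step=tf.train.get_global_step())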


Overfitting occurs when a model has much higher accuracy on the training data than on other data. Some of the techniques that can help reduce overfitting are: using more training data, using fewer features in your model, early stopping (that is, stopping a training run before overfitting occurs), regularization (which means keeping the weights close to zero), and dropout, which is where random neurons get removed during every training step, but are put back in during the evaluation phase.
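Dropout, for instance, is a single extra layer in the tf.layers API (the 40% rate below is illustrative):

    # Randomly zero out 40% of the dense layer's outputs on each training
    # step; training=False at evaluation time leaves every neuron in place.
    dropout = tf.layers.dropout(dense, rate=0.4,
                                training=(mode == tf.estimator.ModeKeys.TRAIN))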


Using more worker nodes can make an ML Engine job run much faster without costing more, in many cases. To create your own cluster configuration, use the custom scale tier and a config file that specifies the number and type of workers and parameter servers to deploy, as well as the type of master node. Using GPUs will not necessarily improve performance and using TPUs requires code changes and may not be fully supported yet.
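A config file for a custom scale tier might look like this; the machine types and counts are illustrative, not a recommendation:

    # config.yaml
    trainingInput:
      scaleTier: CUSTOM
      masterType: complex_model_m
      workerType: complex_model_m
      workerCount: 4
      parameterServerType: large_model
      parameterServerCount: 2

You then pass it to the job with the --config flag, for example: gcloud ml-engine jobs submit training my_job --config config.yaml (plus the usual packaging and region flags).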


Now you know how to build a convolutional neural network in TensorFlow, analyze a model’s training performance using TensorBoard, identify cases of overfitting and apply techniques to prevent it, and scale a Cloud ML Engine job using a custom configuration.


To learn more about Machine Learning Engine, you can read Google’s documentation. Also watch for new Google Cloud Platform courses on Cloud Academy, because we’re always publishing new courses. Please give this course a thumbs-up or thumbs-down rating. If you have any questions or comments, please let us know. Thanks and keep on learning!

About the Author

Students: 13,760
Courses: 41
Learning paths: 22

Guy launched his first training website in 1995 and he's been helping people learn IT technologies ever since. He has been a sysadmin, instructor, sales engineer, IT manager, and entrepreneur. In his most recent venture, he founded and led a cloud-based training infrastructure company that provided virtual labs for some of the largest software vendors in the world. Guy’s passion is making complex technology easy to understand. His activities outside of work have included riding an elephant and skydiving (although not at the same time).