
# Embeddings Continued


**Difficulty** Beginner

**Duration** 1h 2m

**Students** 208


### Description

Move on from what you learned studying the principles of recurrent neural networks, and how they can solve problems involving sequences, with this course on Improving Performance. Learn to improve the performance of your neural networks, starting with learning curves that allow you to ask the right questions, such as whether you need more data or a better model.

Further into the course, you will explore the fundamentals of batch normalization, dropout, and regularization.

This course also touches on data augmentation, which allows you to build new data from your existing training data, and culminates in hyperparameter optimization, a technique that helps you decide how to tune the external parameters of your network.

This course is made up of 13 lectures and three accompanying exercises. This Cloud Academy course is in collaboration with Catalit.

**Learning Objectives**

- Learn how to improve the performance of your neural networks.
- Learn the skills necessary to make executive decisions when working with neural networks.

**Intended Audience**

- It is recommended to complete the Introduction to Data and Machine Learning course before starting.

### Transcript

Hey guys, welcome back. In this video I'm going to show you how to use an embedding layer. As we've seen, embedding layers are useful when you're dealing with text, because they allow you to go from a very large space to a much more reasonable space. So the first thing you've got to do is import the embedding layer from the Keras layers module. Then, like we did when testing out the convolutional layers, we're going to build a very simple model with just an embedding layer. We'll have an input dimension of 100 and an output dimension of 2. So you see our model only has one embedding layer and it has 200 parameters, because essentially it's a fully connected layer from an input space of 100 nodes to an output space of 2 nodes. The difference between an embedding layer and a fully connected layer is that the embedding layer does not need a sparse vector with 100 coordinates; instead, it embeds a number between 0 and 99.
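The setup described above might be sketched as follows; this is a minimal reconstruction, not the exact notebook from the course, and it assumes a recent `tensorflow.keras` install:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding

# A model with a single embedding layer: integers in [0, 100)
# are mapped to 2-dimensional vectors.
model = Sequential([Embedding(input_dim=100, output_dim=2)])

# Build the layer so its weights exist; the second axis (here 4)
# is the sequence length and can be anything.
model.build(input_shape=(None, 4))

# The embedding table is 100 x 2, hence 200 trainable parameters.
model.summary()
```

The 200 parameters are just the 100-row, 2-column lookup table: one 2D vector per possible input integer.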

So if it has an input space of 100, it's able to interpret each number between 0 and 99. So here we pass a two-dimensional array with 12 numbers between 0 and 99, and what the embedding layer will do is convert each of these numbers into a two-dimensional vector. So let's look at the shape of the embedded output. As you can see, it still has three rows and four columns, but now each of the numbers went from being a number between 0 and 99 to being a vector with two coordinates. Let's have a look at it: yes, as you can see, each inner element now has two coordinates instead of being a single number. Thank you for watching, and see you in the next video.
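The lookup step described above can be sketched like this (a self-contained example; the specific integers in `X` are made up for illustration, and the course notebook may use different values):

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding

# Same one-layer model as before: 100 possible tokens, 2D embeddings.
model = Sequential([Embedding(input_dim=100, output_dim=2)])

# A 2D array: 3 rows, 4 columns, 12 integers between 0 and 99.
X = np.array([[71, 13, 64,  6],
              [21, 70, 52, 84],
              [ 0, 99, 45, 30]])

# Each integer is replaced by its 2D embedding vector,
# so the output shape is (3, 4, 2).
embedded = model.predict(X)
print(embedded.shape)
```

Note that the shape keeps the original (3, 4) layout and adds a trailing axis of size 2, one coordinate pair per input integer.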


I am a Data Science consultant and trainer. With Catalit I help companies acquire skills and knowledge in data science and harness machine learning and deep learning to reach their goals. With Data Weekends I train people in machine learning, deep learning and big data analytics. I served as lead instructor in Data Science at General Assembly and The Data Incubator and I was Chief Data Officer and co-founder at Spire, a Y-Combinator-backed startup that invented the first consumer wearable device capable of continuously tracking respiration and activity. I earned a joint PhD in biophysics at University of Padua and Université de Paris VI and graduated from Singularity University summer program of 2011.