Continue your journey into data and machine learning with this course from Cloud Academy.
Previous courses covered the core principles and foundations of Data and Machine Learning and explained best practices.
This course gives an informative introduction to deep learning and introduces neural networks.
This course is made up of 12 expertly instructed lectures along with 4 exercises and their respective solutions.
Please note: the Pima Indians Diabetes dataset can be found in this GitHub repository or on the Kaggle page mentioned throughout the course.
Learning Objectives
- Understand the core principles of deep learning
- Be able to work with the building blocks of neural networks: nodes, inputs, outputs, weights, biases, and activation functions
Intended Audience
- It would be advisable to complete the Intro to Data and Machine Learning course before starting.
Hello and welcome to this video on neural networks. In this video you will learn what neural networks are, and we will introduce a diagram notation to specify their architecture. Artificial neural networks take their name and inspiration from biology. Brain tissue is composed of cells called neurons. Neurons exchange signals with one another, forming a very dense and complex network. Artificial neural networks are simpler than the neural networks found in the brain, but they do share a couple of key architectural components. Of all the branches of a neuron, only one carries an output signal, while all the others are inputs listening to the signals from other neurons.
This is what biological neurons have in common with artificial neural networks. An artificial neural network is composed of nodes, and each node can have many inputs but only one output. In this respect, biology served as an inspiration for the design of artificial neural networks. Now we want to establish a diagram notation and draw the algorithms we've encountered so far as neural networks. Let's start with linear regression. We can represent the operation of linear regression as an artificial neural network like the one in this figure. This network has only one node, the output node, represented by the circle in the diagram.
This node is connected to the input x by a weight w. A second edge enters the node carrying the value of the parameter b, which, as we know by now, is called the bias. The output variable y is calculated by the node as y = xw + b. Now that we have a simple way to represent linear operations in a graph, let's extend the graph to the case of multiple input features. To do that we need to add multiple inputs, each with its own weight. The output node here is connected to the N inputs through N weights, and it is also connected to a bias parameter. The operation is the same as before: the output y = x · w + b, where we use vector notation because both x and w are vectors with N components. We can now visually represent linear regression with as many inputs as we like. My question for you is: how would you extend this graph to allow for a binary output?
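To make the diagram concrete, here is a minimal NumPy sketch of this single-node linear computation; the input values, weights, and bias below are illustrative assumptions, not taken from the course.

```python
# A minimal sketch of the single-node linear network, assuming NumPy;
# the inputs, weights, and bias are made-up illustrations.
import numpy as np

def linear_node(x, w, b):
    """Output node: y = x . w + b (dot product of inputs and weights, plus bias)."""
    return np.dot(x, w) + b

# Single-input case: y = x * w + b
print(linear_node(np.array([2.0]), np.array([3.0]), b=1.0))   # 7.0

# Multi-input case: x and w are vectors with N components (here N = 3),
# one weight per input edge in the diagram.
x = np.array([1.0, 2.0, 3.0])
w = np.array([0.5, -1.0, 2.0])
print(linear_node(x, w, b=0.1))   # 0.5 - 2.0 + 6.0 + 0.1 = 4.6
```

Note that the single-input case is just the N = 1 special case of the same dot product.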
Because if we can do that, we can also represent logistic regression. I'm sure you've guessed it: all we have to do is add the sigmoid function to the output of the node in order to map its output to the interval (0, 1), and therefore we will be predicting the probability of a binary outcome. The sigmoid function is just a special case of what is called an activation function. In fact, the first artificial neuron ever invented had a similar diagram to this one but a different activation function; it is called the perceptron. The perceptron is a binary classifier, but instead of using the smooth sigmoid activation function, it uses a step function.
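As a sketch of this step (again with made-up numbers), appending a sigmoid activation to the same linear node turns it into logistic regression:

```python
# The same node with a sigmoid activation, assuming NumPy;
# weights and inputs are illustrative, not from the course.
import numpy as np

def sigmoid(z):
    """Map any real-valued argument into the interval (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def logistic_node(x, w, b):
    """Linear combination followed by sigmoid: outputs a probability."""
    return sigmoid(np.dot(x, w) + b)

x = np.array([1.0, 2.0])
w = np.array([0.4, -0.3])
print(logistic_node(x, w, b=0.0))   # sigmoid(-0.2), roughly 0.45
```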
The step function is defined as y = 1 when its argument is greater than zero, and y = 0 otherwise, so it has a discontinuity at zero. We can even simplify our diagram notation, without losing information, by including the bias and the activation symbols in the node itself. Now that we've established a symbolic notation to describe both linear regression and logistic regression in a compact way, in the next video we will see how to expand it. In conclusion, in this video we've introduced the fact that neural networks are inspired by biology. We've introduced a few terms like nodes, inputs, outputs, activation functions, weights and biases, and we've explained the fact that each node has many inputs but only a single output. Finally, we've drawn a few diagrams and introduced a diagram notation that we will use throughout the rest of the course. So thank you for watching and see you in the next video.
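To close out this lecture's ideas in code, here is a minimal sketch of the perceptron with its step activation; as in the sketches above, the weights and inputs are illustrative assumptions rather than course material.

```python
# A sketch of the perceptron's step activation, assuming NumPy;
# the numbers are illustrative only.
import numpy as np

def step(z):
    """Step activation: 1 when the argument is greater than zero, 0 otherwise."""
    return np.where(z > 0, 1, 0)

def perceptron(x, w, b):
    """Binary classifier with a hard 0/1 output instead of a smooth probability."""
    return step(np.dot(x, w) + b)

x = np.array([1.0, 2.0])
w = np.array([0.4, -0.3])
print(perceptron(x, w, b=0.0))   # 0, since 0.4 - 0.6 = -0.2 is not > 0
```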
I am a Data Science consultant and trainer. With Catalit I help companies acquire skills and knowledge in data science and harness machine learning and deep learning to reach their goals. With Data Weekends I train people in machine learning, deep learning and big data analytics. I served as lead instructor in Data Science at General Assembly and The Data Incubator, and I was Chief Data Officer and co-founder at Spire, a Y-Combinator-backed startup that invented the first consumer wearable device capable of continuously tracking respiration and activity. I earned a joint PhD in biophysics at the University of Padua and Université de Paris VI and graduated from the Singularity University summer program in 2011.