While learning how to represent neural networks, we worked with single-layer networks: networks containing just one layer of neurons. To make a prediction, that single layer of neurons fed its results directly into the output neuron(s).
In this lesson, we'll explore how multi-layer networks (also known as deep neural networks) are better able to capture nonlinearity in the data. In a deep neural network, the layer of input neurons feeds into one or more intermediate layers of neurons. These intermediate layers are known as hidden layers, and they can learn more complex relationships in the data, enabling better predictions.
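To make the idea of an intermediate layer concrete, here is a minimal sketch of a forward pass through a network with one hidden layer. The weights, sizes, and ReLU activation are all illustrative assumptions, not the lesson's actual network:

```python
import numpy as np

# Hypothetical weights for a tiny network: 2 inputs -> 3 hidden neurons -> 1 output.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 3))  # input-to-hidden weights
b1 = np.zeros(3)
W2 = rng.normal(size=(3, 1))  # hidden-to-output weights
b2 = np.zeros(1)

def relu(z):
    # One common choice of nonlinear activation (an assumption here).
    return np.maximum(0, z)

def forward(X):
    hidden = relu(X @ W1 + b1)  # hidden layer applies a nonlinear activation
    return hidden @ W2 + b2     # output layer combines the hidden activations

X = np.array([[0.5, -1.2]])
print(forward(X).shape)  # one prediction per row of X
```

The nonlinearity applied in the hidden layer is what lets the network represent relationships a single layer of linear neurons cannot.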
Not only will you learn how to add hidden layers to a neural network, but you will also use scikit-learn to build and train networks with multiple hidden layers and varying nonlinear activation functions. Along the way, you will train both a logistic regression model and a neural network model with a hidden layer containing a single neuron.
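As a rough sketch of the comparison described above, the following code fits a logistic regression model and an `MLPClassifier` whose single hidden layer contains one neuron. The `make_moons` dataset, the logistic activation, and the hyperparameter values are assumptions for illustration, not the lesson's exact setup:

```python
from sklearn.datasets import make_moons
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

# Generate a small nonlinear dataset (assumed here; the lesson may differ).
X, y = make_moons(n_samples=200, noise=0.1, random_state=1)

# Baseline: plain logistic regression.
log_model = LogisticRegression()
log_model.fit(X, y)

# A neural network whose single hidden layer contains exactly one neuron.
mlp = MLPClassifier(hidden_layer_sizes=(1,), activation='logistic',
                    max_iter=2000, random_state=1)
mlp.fit(X, y)

print(log_model.score(X, y), mlp.score(X, y))
```

With only one hidden neuron, the network has limited extra capacity, so its accuracy often lands close to the logistic regression baseline; adding neurons is what unlocks the nonlinear structure.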
Instead of working with an external dataset, we'll generate the data ourselves, which gives us more control over the dataset's properties. In this lesson, we'll use scikit-learn's convenience functions to generate the data.
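For instance, scikit-learn's `sklearn.datasets` module includes generators that produce datasets that are not linearly separable. `make_circles` is one such convenience function (the lesson may use a different generator; the parameter values below are illustrative):

```python
from sklearn.datasets import make_circles

# Generate 300 points in two concentric circles: a binary classification
# problem that no linear decision boundary can separate cleanly.
X, y = make_circles(n_samples=300, noise=0.05, factor=0.5, random_state=7)

print(X.shape, y.shape)  # feature matrix and label vector
```

Because we choose parameters such as the sample count, the noise level, and the gap between classes, we control exactly how much nonlinearity the models must capture.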
As you work through each concept, you'll apply what you've learned from within your browser, so there's no need to use your own machine for the exercises. The Python environment inside this course includes answer checking, so you can make sure you've fully mastered each concept before moving on to the next.
1. Hidden Layers
2. Generating Data That Contains Nonlinearity
3. Hidden Layer With A Single Neuron
4. Training A Neural Network Using Scikit-learn
5. Hidden Layer With Multiple Neurons
6. Multiple Hidden Layers
7. Next Steps