Learn the basics of deep neural networks in our Deep Learning Fundamentals course. In this course, you'll use scikit-learn to build and train neural networks. You'll learn concepts such as graph theory, activation functions, and hidden layers, and how to classify images.
Then you'll dig deeper into the different kinds of nonlinear activation functions, such as the ReLU function and the hyperbolic tangent function, and discover how they enable neural networks to capture nonlinearity. You'll also learn how to add hidden layers and how they can make neural networks more powerful.
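To give a feel for the activation functions mentioned above, here is a minimal NumPy sketch of ReLU and the hyperbolic tangent (the function names are illustrative, not part of the course material):

```python
import numpy as np

def relu(x):
    # ReLU: passes positive values through unchanged, zeroes out negatives
    return np.maximum(0, x)

def tanh(x):
    # Hyperbolic tangent: squashes any input into the range (-1, 1)
    return np.tanh(x)

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(relu(x))  # negative inputs become 0, positive inputs pass through
print(tanh(x))  # all inputs squashed into (-1, 1)
```

Because both functions bend their input nonlinearly, stacking layers that use them lets a network represent curves and decision boundaries that a purely linear model cannot.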
At the end of the course, you'll complete a project in which you build a neural network to classify images of digits in the MNIST dataset. You'll also tweak your neural networks to perform better on handwriting recognition. This project is a chance for you to combine the skills you learned in this course and practice building neural networks using a typical deep learning workflow. It also serves as a portfolio project that you can showcase to future employers.
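A rough sketch of what that project workflow looks like in scikit-learn is below. It uses scikit-learn's small built-in 8x8 digits dataset as a stand-in for the full 28x28 MNIST images, and the layer size and iteration count are illustrative choices, not the course's exact settings:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Load the built-in 8x8 handwritten digits dataset (a small stand-in
# for the full MNIST dataset used in the course project)
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.2, random_state=0)

# A single hidden layer of 64 ReLU units -- an illustrative setting
model = MLPClassifier(hidden_layer_sizes=(64,), activation="relu",
                      max_iter=500, random_state=0)
model.fit(X_train, y_train)
print(f"test accuracy: {model.score(X_test, y_test):.3f}")
```

The same split-train-evaluate loop applies when you scale up to MNIST; tweaking `hidden_layer_sizes` and the activation function is how you experiment with making the network perform better.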
In this course, you'll work through the following lessons:
Fundamentals of Deep Learning
Representing Neural Networks
Learn the representation and key terminology behind neural networks.
Nonlinear Activation Functions
Learn about the different activation functions and how they enable neural networks to capture nonlinearity.
Hidden Layers
Learn about adding hidden layers to a neural network.
Building a Handwritten Digits Classifier
Learn the basics of image classification to build a handwritten digits classifier.