Deep Learning Fundamentals

Learn the basics of deep neural networks in our Deep Learning Fundamentals course, where you'll use scikit-learn to build and train neural networks. You'll learn concepts such as graph theory, activation functions, hidden layers, and image classification.

Then you'll dig deeper into the different kinds of nonlinear activation functions, such as the ReLU function and the hyperbolic tangent function, to discover how they enable neural networks to capture nonlinearity. You'll also learn how to add hidden layers and how doing so can make neural networks more powerful.
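To give a feel for the functions mentioned above, here is a minimal sketch of ReLU and the hyperbolic tangent using NumPy. The function names and sample inputs are illustrative, not from the course itself:

```python
import numpy as np

def relu(x):
    # ReLU zeroes out negative inputs and passes positive inputs through unchanged
    return np.maximum(0, x)

def tanh(x):
    # The hyperbolic tangent squashes any input into the range (-1, 1)
    return np.tanh(x)

inputs = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(relu(inputs))  # negatives become 0
print(tanh(inputs))  # values squashed into (-1, 1)
```

Both functions bend straight lines: composing them with linear layers is what lets a network fit curves that a purely linear model cannot.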

At the end of the course, you'll complete a project in which you will build a neural network to classify images of digits in the MNIST dataset. You'll also tweak your neural networks to perform better on handwriting recognition. This project is a chance for you to combine the skills you learned in this course and practice building neural networks using a typical deep learning workflow. It also serves as a portfolio project that you can showcase to future employers.
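A digit classifier of the kind described above can be sketched in a few lines with scikit-learn. Note the assumptions: this sketch uses scikit-learn's small built-in 8x8 digits dataset as a stand-in for the full MNIST dataset, and the layer sizes and hyperparameters are illustrative, not the course's:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Load 8x8 grayscale digit images (a small stand-in for MNIST)
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

# A network with a single hidden layer of 64 ReLU units
model = MLPClassifier(hidden_layer_sizes=(64,), activation="relu",
                      max_iter=500, random_state=0)
model.fit(X_train, y_train)
print(f"Test accuracy: {model.score(X_test, y_test):.3f}")
```

Tweaking the model, as the project asks, would mean experimenting with `hidden_layer_sizes`, the activation function, and the number of training iterations, then comparing test accuracy.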

By the end of this course, you'll be able to:

  • Understand how neural networks are represented.
  • Understand how adding hidden layers can provide improved model performance.
  • Understand how neural networks capture nonlinearity in the data.




Learn the Fundamentals of Deep Learning

Representing Neural Networks

Learn the representation and key terminology behind neural networks.

Nonlinear Activation Functions

Learn about the different activation functions and how they enable neural networks to capture nonlinearity.

Hidden Layers

Learn about adding hidden layers to a neural network.

Building A Handwritten Digits Classifier

Learn the basics of image classification to build a handwritten digits classifier.