Representing Neural Networks
In Machine Learning Fundamentals, Linear Regression, and our other machine learning courses, we explored machine learning models in depth and learned the difference between supervised and unsupervised machine learning.
In this first lesson of our deep learning course, we'll get familiar with an additional model: the artificial neural network. We'll focus on how neural networks are represented and how to express linear regression and logistic regression models in that representation. You'll learn important neural network concepts such as the feedforward function and the activation function.
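To preview these two concepts, here is a minimal sketch (using NumPy, with made-up example weights) of a single neuron: the feedforward function computes a weighted sum of the inputs plus a bias, and an activation function such as the sigmoid transforms that sum. With no activation (the identity), the neuron behaves like linear regression; with a sigmoid activation, it behaves like logistic regression.

```python
import numpy as np

def feedforward(X, weights, bias):
    # The feedforward step of a single neuron: a weighted sum of the
    # inputs plus a bias term (a linear combination).
    return X @ weights + bias

def sigmoid(z):
    # A nonlinear activation function that squashes any real number
    # into the interval (0, 1), which we can read as a probability.
    return 1.0 / (1.0 + np.exp(-z))

# Example inputs and parameters (chosen arbitrarily for illustration).
X = np.array([[1.0, 2.0],
              [3.0, 4.0]])
w = np.array([0.5, -0.25])
b = 0.1

linear_out = feedforward(X, w, b)   # identity activation: linear regression
logistic_out = sigmoid(linear_out)  # sigmoid activation: logistic regression

print(linear_out)    # [0.1 0.6]
print(logistic_out)  # values between 0 and 1
```

Later lessons in this course build on exactly this structure by stacking many such neurons into layers.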
In later lessons of this course, we’ll learn how to introduce nonlinearity in our networks, how to fit complex neural networks, and some real-world best practices.
Instead of working with an external data set, we'll generate the data ourselves, which gives us more control over its properties. In this lesson, we'll use scikit-learn's convenience functions to generate the data.
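As a quick sketch of what that looks like, scikit-learn provides `make_regression` and `make_classification` for generating synthetic regression and classification data sets (the sample counts and noise level below are illustrative choices, not the lesson's exact settings):

```python
from sklearn.datasets import make_regression, make_classification

# 100 samples with a single feature and some Gaussian noise on the target.
X_reg, y_reg = make_regression(
    n_samples=100, n_features=1, noise=10.0, random_state=0
)

# 100 samples with 2 informative features and 2 classes.
X_clf, y_clf = make_classification(
    n_samples=100, n_features=2, n_informative=2,
    n_redundant=0, random_state=0
)

print(X_reg.shape, y_reg.shape)  # (100, 1) (100,)
print(X_clf.shape, set(y_clf))   # (100, 2) {0, 1}
```

Passing `random_state` makes the generated data reproducible, which is handy when checking answers against a fixed result.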
As you work through each concept, you'll apply what you've learned from within your browser, so there's no need to use your own machine for the exercises. The Python environment inside this course includes answer checking, so you can make sure you've mastered each concept before moving on to the next.
Objectives

- Learn how neural networks are represented visually.
- Learn how to implement linear and logistic regression as neural networks.
- Learn the differences between common nonlinear activation functions.
Lesson Outline

- Nonlinear Models
- Introduction to Graphs
- Computational Graphs
- A Network That Performs Linear Regression
- Generating Regression Data
- Fitting A Linear Regression Network
- Generating Classification Data
- Implementing A Network That Performs Classification
- Next Steps