# The Linear Regression Model

In the Machine Learning Fundamentals course, we walked through the full machine learning workflow using the k-nearest neighbors algorithm. K-nearest neighbors works by finding similar, labeled examples from the training set for each instance in the test set and using them to predict the label.

K-nearest neighbors is known as an instance-based learning algorithm because it relies completely on previous instances to make predictions. The k-nearest neighbors algorithm doesn’t try to understand or capture the relationship between the feature columns and the target column. Now, we’re going to dig into a different way of making predictions using machine learning: linear regression.
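To make the instance-based idea concrete, here's a minimal sketch using scikit-learn's `KNeighborsRegressor` on a made-up, one-feature dataset (the data values are hypothetical, chosen only for illustration):

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Hypothetical training data: one feature column, numeric target.
X_train = np.array([[1.0], [2.0], [3.0], [10.0]])
y_train = np.array([1.0, 2.0, 3.0, 10.0])

# k-nearest neighbors simply stores the training instances; at prediction
# time it averages the targets of the k closest stored instances.
knn = KNeighborsRegressor(n_neighbors=3)
knn.fit(X_train, y_train)

# For x = 2.5, the 3 nearest training points are 1.0, 2.0, and 3.0,
# so the prediction is their mean target: (1 + 2 + 3) / 3 = 2.0.
print(knn.predict([[2.5]]))
```

Notice that the model never learns a relationship between feature and target: remove the stored training instances and it can predict nothing.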

In this lesson, we’ll provide an overview of how we use a linear regression model to make predictions. We’ll use scikit-learn for the model training process, so we can focus on gaining intuition for the model-based learning approach to machine learning. In later lessons in this course, we’ll dive into the math behind how a model is fit to the dataset, how to select and transform features, and more.
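As a preview of the model-based approach, here's a minimal sketch of training and predicting with scikit-learn's `LinearRegression`. The data is hypothetical, generated from the line y = 3x + 2 so the learned parameters are easy to verify:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical data lying exactly on y = 3x + 2.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([2.0, 5.0, 8.0, 11.0])

# Fitting learns the model's parameters (slope and intercept) from the
# data; prediction then uses only those parameters, not stored instances.
model = LinearRegression()
model.fit(X, y)

print(model.coef_[0], model.intercept_)  # learned slope and intercept: 3.0, 2.0
print(model.predict([[4.0]]))            # prediction for a new instance: [14.]
```

Unlike k-nearest neighbors, once the parameters are learned, the training data is no longer needed to make predictions.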

As always on Dataquest, this lesson features our interactive code-running system so you can write, run, and check your code all from within your web browser.

## Objectives

- Learn about parametric machine learning algorithms.
- Learn the basics of the linear regression model.

## Lesson Outline

- Instance-Based Learning vs. Model-Based Learning
- Introduction To The Data
- Simple Linear Regression
- Least Squares
- Using Scikit-Learn To Train And Predict
- Making Predictions
- Multiple Linear Regression
- Next Steps
- Takeaways
