MISSION 156

Evaluating Model Performance

In the previous mission, we introduced k-nearest neighbors and used it to write a function that predicts the optimal price for an Airbnb rental based on the number of people it can accommodate. In machine learning, a function like this is called a model: it outputs a prediction based on its input.
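For reference, here is a minimal sketch of the kind of univariate k-nearest neighbors function described above. The column names ('accommodates', 'price') and the choice of k = 5 are illustrative assumptions, not the course's exact code:

```python
import pandas as pd

def predict_price(new_accommodates, train_df, k=5):
    """Predict a rental price as the mean price of the k training listings
    whose 'accommodates' value is closest to new_accommodates."""
    distances = (train_df['accommodates'] - new_accommodates).abs()
    nearest_idx = distances.sort_values().index[:k]
    return train_df.loc[nearest_idx, 'price'].mean()
```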

Now you will learn how to test the quality of your model. In other words, how accurately does your model predict the price of an Airbnb rental? This process is known as validation, and it's an important step in any machine learning implementation because it ensures your model can make good predictions on new data.
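A common way to do this is holdout validation: split the data into a training set and a test set, make predictions for the test set using only the training set, and compare those predictions to the actual prices. Below is a rough sketch reusing the predict_price function above; the file name 'listings.csv' and the 75/25 split are assumptions for illustration:

```python
import pandas as pd

listings = pd.read_csv('listings.csv')               # hypothetical file name
shuffled = listings.sample(frac=1, random_state=1)   # shuffle before splitting

split_point = int(len(shuffled) * 0.75)
train_df = shuffled.iloc[:split_point]               # used to make predictions
test_df = shuffled.iloc[split_point:].copy()         # held out for evaluation

# Predict a price for every listing in the test set.
test_df['predicted_price'] = test_df['accommodates'].apply(
    lambda x: predict_price(x, train_df)
)
```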

You will learn how to quantify the quality of your predictions using error metrics. The error metrics we'll use are mean absolute error (MAE), mean squared error (MSE), and root mean squared error (RMSE). For each metric, we'll cover its advantages, disadvantages, and underlying assumptions, which will help you choose the right error metric to use when evaluating performance.
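Continuing the holdout sketch above, where test_df holds both the actual and the predicted prices, these metrics can be computed directly (the column names are the same illustrative assumptions as before):

```python
import numpy as np

errors = test_df['predicted_price'] - test_df['price']

mae = errors.abs().mean()    # mean absolute error: average size of the errors
mse = (errors ** 2).mean()   # mean squared error: penalizes large errors more heavily
rmse = np.sqrt(mse)          # root mean squared error: back in the original price units

print(f"MAE: {mae:.2f}  MSE: {mse:.2f}  RMSE: {rmse:.2f}")
```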

As with all our courses, you will be asked to apply what you’re learning in our in-browser app, which will also check your answers so you can ensure you've fully mastered each concept.

Objectives

  • Learn how to evaluate model accuracy using MSE and RMSE.
  • Learn how to compare MAE and RMSE values.

Mission Outline

1. Testing quality of predictions
2. Error Metrics
3. Mean Squared Error
4. Training another model
5. Root Mean Squared Error
6. Comparing MAE and RMSE
7. Next steps
8. Takeaways

Machine Learning Fundamentals

Course Info:

Beginner

The median completion time for this course is 7 hours.

This course requires a premium subscription. It includes five missions and one guided project, and it is the 17th course in the Data Scientist in Python path.

