MISSION 483

Evaluating Model Performance

In the last mission, we started learning the machine learning workflow. We explored Airbnb's Washington, D.C., listing data set and laid out the problem we want to solve with the data: given a new listing, how can we use the data to predict an acceptable rental price for it?

We ended the last mission by predicting a price for a single listing with three rooms. However, we currently have no way of knowing whether that prediction was any good.

In this mission, we'll follow up on this question and learn how to evaluate the performance of our k-nearest neighbors algorithm. We'll define what we mean by performance, and then look at how to calculate metrics to judge whether the model is "good" or not.
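One common way to turn "is the model good?" into a number is an error metric such as the root mean squared error (RMSE): the square root of the average squared difference between predicted and actual values. As a minimal sketch in base R, using made-up prices for five hypothetical listings:

```r
# Hypothetical predicted and actual nightly prices for five listings
predicted <- c(104, 120, 95, 160, 110)
actual    <- c(100, 130, 90, 150, 120)

# RMSE: square the errors, average them, then take the square root
rmse <- sqrt(mean((predicted - actual)^2))
rmse  # ≈ 8.26
```

Because the errors are squared before averaging, RMSE penalizes large misses more heavily than small ones, and the result is in the same units as the target (dollars here).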

We'll also start learning caret, an incredibly handy R library for creating machine learning models and automating the evaluation of their performance. Instead of coding everything by hand, we'll learn how to use caret to carry out the various steps of the machine learning workflow.
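To give a feel for where we're headed, here is a hedged sketch of that workflow using caret's actual API (`createDataPartition`, `train`, `predict`, `postResample`). The data frame `dc_listings` and the columns `price` and `accommodates` are illustrative stand-ins for the listing data, not names the mission has defined yet:

```r
library(caret)

# Split the listings into training and test sets (80/20)
train_index <- createDataPartition(dc_listings$price, p = 0.8, list = FALSE)
train_df <- dc_listings[train_index, ]
test_df  <- dc_listings[-train_index, ]

# Train a k-nearest neighbors model to predict price
knn_model <- train(price ~ accommodates,
                   data = train_df,
                   method = "knn")

# Predict on held-out listings and summarize the error (RMSE, R^2, MAE)
predictions <- predict(knn_model, newdata = test_df)
postResample(pred = predictions, obs = test_df$price)
```

Each step of this sketch corresponds to one of the outline sections below: setting up for training, training the algorithm, creating predictions, and evaluating them.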

Objectives

  • Learn about assessing model performance
  • Use the caret library to do machine learning in R

Mission Outline

  1. Introduction
  2. Judging performance
  3. Introducing the caret library
  4. Setting up for training
  5. Training the algorithm
  6. Creating predictions on the test data
  7. Evaluating predictions
  8. Summarizing errors into a single metric
  9. Next steps
  10. Takeaways

Course Info:

Intermediate

The median completion time for this course is 10 hours.

This course requires a premium subscription. It includes five missions and one guided project, and is part of the Data Analyst in R path.
