Model Selection and Tuning
In the lesson on feature selection, we optimized our Kaggle predictions by creating and selecting the features used to train our machine learning model.
Kaggle is a site where people create algorithms and compete against machine learning practitioners around the world. Your algorithm wins a competition if it's the most accurate on a particular dataset. Kaggle, together with this Kaggle Fundamentals course, gives you a fun way to practice your machine learning skills.
In this lesson, we're going to focus on optimizing the model itself to boost the accuracy of our predictions. To do this, we'll look at a process known as model selection: comparing candidate algorithms to find the one that gives the best predictions for your data.
As you work through each concept, you'll apply what you've learned from within your browser, so there's no need to use your own machine for the exercises. The Python environment inside this course includes answer checking, so you can make sure you've fully mastered each concept before moving on. In this lesson, you'll:
- Learn how the k-nearest neighbors and random forest algorithms work.
- Learn about hyperparameters and how to select the values that give the best predictions.
- Learn how to compare different algorithms to improve the accuracy of your predictions.
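To preview the hyperparameter optimization step, here is a minimal sketch of grid search with scikit-learn, the library this course uses. Synthetic data from `make_classification` stands in for the Titanic training set, and the range of `n_neighbors` values is an illustrative assumption, not the lesson's exact grid.

```python
# Sketch: tune k for k-nearest neighbors with grid search.
# Synthetic data is a stand-in for the real feature matrix.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

# Placeholder features and target (assumption: not the Titanic data).
X, y = make_classification(n_samples=300, n_features=8, random_state=1)

# GridSearchCV cross-validates the model for every candidate k
# and keeps the value with the best mean accuracy.
grid = GridSearchCV(
    KNeighborsClassifier(),
    param_grid={"n_neighbors": range(1, 21, 2)},
    cv=5,
)
grid.fit(X, y)

print(grid.best_params_)  # the k with the highest cross-validated accuracy
print(grid.best_score_)   # that k's mean cross-validation score
```

The same pattern works for any estimator: swap in a different model and parameter grid, and `GridSearchCV` handles the cross-validation loop for you.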
Lesson outline:

- Introducing Model Selection
- Training a Baseline Model
- Training a Model using K-Nearest Neighbors
- Exploring Different K Values
- Automating Hyperparameter Optimization with Grid Search
- Submitting K-Nearest Neighbors Predictions to Kaggle
- Introducing Random Forests
- Tuning our Random Forest Model with Grid Search
- Submitting Random Forest Predictions to Kaggle
- Next Steps
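The outline above boils down to one idea: train several algorithms on the same data, score each with cross-validation, and keep the best. As a hedged sketch of that comparison step, assuming scikit-learn and using synthetic data in place of the Titanic features:

```python
# Sketch: model selection by comparing cross-validated accuracy.
# Synthetic data is a stand-in for the real training set.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Placeholder features and target (assumption: not the Titanic data).
X, y = make_classification(n_samples=300, n_features=8, random_state=1)

# The two algorithms this lesson covers; hyperparameters are illustrative.
models = {
    "k-nearest neighbors": KNeighborsClassifier(n_neighbors=5),
    "random forest": RandomForestClassifier(n_estimators=100, random_state=1),
}

# Mean 5-fold cross-validation accuracy for each candidate model.
scores = {name: cross_val_score(model, X, y, cv=5).mean()
          for name, model in models.items()}

best_name = max(scores, key=scores.get)
print(scores)
print("selected model:", best_name)
```

Whichever model scores higher here would then be refit on the full training set and used to generate the predictions submitted to Kaggle.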