Machine Learning Driven Predictions of House Prices in Ames
Team Members: Aidan Au, Jacob Smith, and Jordan Hicks
Background & Research Question
When investing in a house, investors often want to maximize their return on investment by buying the house for less than it is worth. Of course, it isn't that simple: they also have to account for fees from middlemen, so it can be hard to determine how much money they'll actually make on a house. This is where machine learning comes in. With an accurate estimate of how much a house could reasonably sell for, an investor can determine whether, after the initial cost of the house and all fees are paid, they will make any profit, and whether the potential profit is worth the risk.
The goal of this project is to predict what a house in Ames, Iowa should sell for as closely as possible by minimizing the RMSE (Root Mean Squared Error), a measure of how far the model's predictions fall from the real values on average.
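Concretely, for $n$ houses with actual sale prices $y_i$ and predicted prices $\hat{y}_i$:

$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\hat{y}_i - y_i\right)^2}$$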
Data Source and Data Cleaning Process
All the data used here comes from the Ames, Iowa housing dataset on Kaggle, which is linked below.
Before we could select the best features, we had to deal with a lot of missing information, and some variables were described with text categories, which can't always be fed into a machine learning model without some pre-processing.
Our process was first to impute any missing information, then split the data into categorical and numerical variables, which we would dummify or Box-Cox transform, respectively. After this we standardized all of the data and performed feature selection. That's a lot to digest, so let's walk through each step of the process.
Data Imputation
The first step was to determine how to handle missing data. After examining the variables more closely, we found three different categories of variables and handled each accordingly (a short code sketch follows the list).
- Numerical variables: For these variables we found the median of all results in the column, and simply filled in the missing values with that.
- Categorical variables where the feature may not exist: For these cases, a missing value likely meant the feature did not apply, so we created a new category "None".
- Categorical variables where the feature almost certainly exists: When it was very unlikely that a house simply lacked the listed feature (such as the type of electrical system), we used the mode, the most common value in that column, as a prediction for what the house was likely to have.
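As a minimal sketch of these three strategies in pandas (the file name and column choices here are illustrative, not necessarily the exact ones used in the project):

```python
import pandas as pd

df = pd.read_csv("train.csv")  # the Kaggle training file

# 1. Numerical variables: fill missing values with the column median.
df["LotFrontage"] = df["LotFrontage"].fillna(df["LotFrontage"].median())

# 2. Categoricals where the feature may not exist: missing means "no feature".
df["GarageType"] = df["GarageType"].fillna("None")

# 3. Categoricals that almost certainly exist: fill with the mode.
df["Electrical"] = df["Electrical"].fillna(df["Electrical"].mode()[0])
```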
Feature Transformation
Numerical Variables
Many machine learning models perform better when their inputs are approximately normally distributed, so we used a Box-Cox transformation to reduce the skewness of the numerical variables.
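Here is one way this step could look, using `boxcox1p` from SciPy on the most skewed columns; the 0.75 skew cutoff and the lambda of 0.15 are illustrative choices rather than the project's exact settings:

```python
import numpy as np
import pandas as pd
from scipy.stats import skew
from scipy.special import boxcox1p

df = pd.read_csv("train.csv")
numeric_cols = df.select_dtypes(include=np.number).columns.drop(["Id", "SalePrice"])

# Measure the skewness of each numeric column and transform the highly skewed ones.
skewness = df[numeric_cols].apply(lambda col: skew(col.dropna()))
for col in skewness[skewness > 0.75].index:
    # boxcox1p transforms 1 + x, so zeros (e.g., houses with no basement) are safe.
    df[col] = boxcox1p(df[col], 0.15)
```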
Categorical Variables
In order to plug categorical variables into linear models, we needed to turn the different categories into numbers. We used dummification to create additional columns of zeros and ones indicating whether a given house has a specific feature. For example, instead of a single overall quality variable, a house would now have "low overall quality" and "high overall quality" columns, with a 1 in whichever column matches the feature it has.
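A toy illustration with pandas' `get_dummies` (the two-level quality variable here is simplified for the example):

```python
import pandas as pd

# Each category becomes its own 0/1 indicator column.
houses = pd.DataFrame({"OverallQual": ["low", "high", "low"]})
dummies = pd.get_dummies(houses, columns=["OverallQual"], dtype=int)
print(dummies)
#    OverallQual_high  OverallQual_low
# 0                 0                1
# 1                 1                0
# 2                 0                1
```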
Feature Selection
The raw data contained 79 explanatory variables describing 2,580 homes. After a forward-stepwise feature selection process, we ended up using 47 variables in our machine learning models. Some of the variables most strongly correlated with sale price were the gross living area, the house's overall quality rating, the total square footage of the basement, and the car capacity of the garage.
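One way to express this step is with scikit-learn's `SequentialFeatureSelector`; this is a hedged sketch, assuming `X` is the standardized feature matrix and `y` the target from the earlier steps, and the project's exact procedure may have differed:

```python
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LinearRegression

# Forward stepwise selection: greedily add the feature that most improves
# cross-validated RMSE, stopping once 47 features have been chosen.
selector = SequentialFeatureSelector(
    LinearRegression(),
    n_features_to_select=47,
    direction="forward",
    scoring="neg_root_mean_squared_error",
    cv=5,
)
selector.fit(X, y)
selected_columns = X.columns[selector.get_support()]
```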
Target Transformation
When we looked at the price distribution of houses, it turned out that we had a lot of rightward skew. This can cause our models to have trouble predicting the price of a house, so to counteract it, we decided to have the models predict the natural log of the price instead. This allowed us to predict prices within a roughly normal distribution and reduced the RMSE.
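In code, this transformation is a one-liner; `log1p` is a common variant of the plain natural log that also tolerates zeros, and predictions are converted back to dollars with its inverse, `expm1`:

```python
import numpy as np
import pandas as pd

df = pd.read_csv("train.csv")
y_log = np.log1p(df["SalePrice"])  # the target the models actually predict

# ... after fitting a model on (X, y_log):
# dollars = np.expm1(model.predict(X_new))  # invert the log transform
```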
Model Selection
Once we had finished all of the data cleaning and feature selection, we were finally ready to test different machine learning models. We split the data into a portion used to train each model and a portion held out to evaluate how the model performed; this split is what separates the "train" and "test" scores listed in the table below. For each model, we still had to figure out which parameters worked best for our data. To find these we used a combination of grid searching and the Optuna library, although those details aren't essential for understanding the results. Our goal was the lowest possible test RMSE. After testing several different models, we found these final scores (a sketch of the evaluation setup follows the table).
| Model | R² Train (10-fold CV) | R² Test | RMSE Train (10-fold CV) | RMSE Test | Rank (RMSE Test) |
|---|---|---|---|---|---|
| SVR (RBF/Gaussian kernel) | 93.0893% | 92.3379% | 0.100817 | 0.109607 | 1 |
| CatBoost | 92.9744% | 92.0410% | 0.101543 | 0.11171 | 2 |
| SVR (linear kernel) | 92.0530% | 91.8460% | 0.102742 | 0.11307 | 3 |
| Multiple Linear Regression | 92.8693% | 91.6591% | 0.101799 | 0.114359 | 4 |
| Ridge | 92.8705% | 91.6577% | 0.101742 | 0.114369 | 5 |
| Lasso | 92.8694% | 91.6581% | 0.101796 | 0.114394 | 6 |
| GBM | 91.4051% | 91.5348% | 0.112475 | 0.115208 | 7 |
| XGBoost | 91.3796% | 91.3506% | 0.112594 | 0.116455 | 8 |
| LightGBM | 91.0295% | 90.2929% | 0.115007 | 0.12337 | 9 |
| Random Forest | 90.0454% | 88.2390% | 0.121267 | 0.135796 | 10 |
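For reference, here is a minimal sketch of how one row of the table could be produced (shown for the RBF-kernel SVR; the parameter grid is illustrative, and `X` and `y_log` are assumed from the earlier steps):

```python
import numpy as np
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVR

X_train, X_test, y_train, y_test = train_test_split(
    X, y_log, test_size=0.2, random_state=42
)

grid = GridSearchCV(
    SVR(kernel="rbf"),
    param_grid={"C": [1, 10, 100], "gamma": ["scale", 0.01, 0.001]},
    scoring="neg_root_mean_squared_error",
    cv=10,  # 10-fold cross-validation, as in the table
)
grid.fit(X_train, y_train)

train_rmse = -grid.best_score_  # mean CV RMSE across the training folds
test_rmse = np.sqrt(mean_squared_error(y_test, grid.predict(X_test)))
print(f"CV train RMSE: {train_rmse:.6f}  |  test RMSE: {test_rmse:.6f}")
```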
Recommendations
As you can see from the table above, an SVR (Support Vector Regression) model with a Gaussian (RBF) kernel performs best at predicting house prices out of these models. The CatBoost model, a gradient boosting model built on decision trees, is also a good option. While an SVR model with a linear kernel also has one of the better scores, it can be very computationally expensive, so we can't recommend using it.
Sources
Dataset: https://www.kaggle.com/c/house-prices-advanced-regression-techniques
GitHub repo for the project: https://github.com/hzeig/ames-housing-predictions.git