Analyzing Data to Accurately Predict House Prices in Ames
The skills the author demonstrated here can be learned by taking the Data Science with Machine Learning bootcamp with NYC Data Science Academy.
Using Machine Learning to Democratize the Home-Buying Process
Buying a house is a complicated and stressful process that can leave first-time buyers feeling hopeless. Realtors often work off of intuition built over years of experience, which can lead to a lack of transparency. Meanwhile, brokers are not necessarily incentivized to help you find the fairest price. In 2018, data shows there were over $120 billion in residential transactions, which adds up to $7 billion in fees.
We can address these pain points by using machine learning techniques to generate fair valuations for those looking to buy or sell a home. Using the Kaggle Ames, IA data set of 1,460 homes with 80 features, we built a model to accurately predict house prices.
A Baseline 'Intuitive' Data Model to Predict House Price
There are common factors that we all think of when gauging house prices: square footage, lot size, or the number of bathrooms. Indeed, these features are significantly correlated with house price, so we built a baseline model on just those features to mimic a realtor's thought process. This linear model gives a reasonably good estimate when we compare the predicted price against the actual price across the whole dataset (Figure 1). However, some houses are underestimated. House #907, for example, is underestimated by $50,000!
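A minimal sketch of such a baseline is below. The post does not list the exact features used, so GrLivArea, LotArea, FullBath, and BedroomAbvGr stand in for the "intuitive" set here.

```python
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Kaggle Ames training data (house features plus SalePrice)
train = pd.read_csv("train.csv")

# Stand-in "intuitive" features (assumed; not the team's exact list)
baseline_features = ["GrLivArea", "LotArea", "FullBath", "BedroomAbvGr"]
X = train[baseline_features]
y = train["SalePrice"]

baseline = LinearRegression().fit(X, y)
print(cross_val_score(baseline, X, y, cv=5, scoring="r2").mean())
```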
Feature Selection and Engineering
To build a better prediction, we first log-transformed Sale Price to correct the skew.
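Whether the transform was log or log1p isn't stated in the post; a log1p transform (with expm1 to convert predictions back to dollars) is one common way to do it:

```python
import numpy as np

# Sale prices are right-skewed; log1p pulls in the long tail.
train["LogSalePrice"] = np.log1p(train["SalePrice"])

# np.expm1(prediction) converts a predicted log price back to dollars.
```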
We engineered new features such as the total number of bathrooms, which combines the number of full and half baths. We reduced the number of features by dropping redundant columns like Garage Area vs. Garage Size (Figure 2). Some text columns were converted to ordinal variables so our models could use them. The neighborhood is incredibly important when buying a home and often captures a significant amount of hidden information, such as school quality, so we dummified the 25 neighborhoods to keep that information.
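A rough sketch of these steps, using the Kaggle column names; the 0.5 weighting for half baths and the specific ordinal mapping are illustrative assumptions, not taken from the post.

```python
# Combine full and half baths into one engineered feature
# (the 0.5 weight for half baths is an assumption).
train["TotalBath"] = train["FullBath"] + 0.5 * train["HalfBath"]

# Convert a text quality rating into an ordinal variable.
qual_map = {"Po": 1, "Fa": 2, "TA": 3, "Gd": 4, "Ex": 5}
train["KitchenQualOrd"] = train["KitchenQual"].map(qual_map)

# One-hot encode ("dummify") the 25 neighborhoods.
train = pd.get_dummies(train, columns=["Neighborhood"], drop_first=True)
```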
Missingness and Imputation
Each column was checked for missingness, which can occur for several reasons. First, data can be purposefully missing, such as a house with no garage having missing values for its garage features. Second, data can be missing due to human error, in which case these values should be filled in using imputation.
We tested several imputation methods: random values, mean values, and K-Nearest Neighbors, which fills in the expected value using the most similar houses. All three methods significantly improved model accuracy compared to no imputation. They performed similarly to one another and produced the best results when outliers beyond 4.25 standard deviations were also imputed (Figure 3).
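As a sketch, the missingness check and the KNN variant could look like the following; the choice of n_neighbors=5 and the restriction to numeric columns are assumptions.

```python
from sklearn.impute import KNNImputer

# How many values are missing in each column?
missing = train.isnull().sum()
print(missing[missing > 0].sort_values(ascending=False))

# KNN imputation: fill each missing numeric value using the most similar houses.
numeric_cols = train.select_dtypes(include="number").columns
train[numeric_cols] = KNNImputer(n_neighbors=5).fit_transform(train[numeric_cols])
```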
Outliers
When you have 80 different things to look at for each house, how do you tell if some are oddballs? We used a dimensionality-reduction technique to collapse the data set into two dimensions so we could better visualize its spread. You can see that 99% of the variability is driven by a few outlying houses (Figure 4)! Once we pinpointed these anomalies, circled in the figure for clarity, we confirmed them as outliers using more rigorous statistical methods and then dropped them from our data set.
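The post doesn't name the technique; assuming it was something like PCA, the two-dimensional view could be produced as follows.

```python
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Standardize the (already imputed) numeric features and project onto 2 components.
X_scaled = StandardScaler().fit_transform(train[numeric_cols])
pca = PCA(n_components=2)
coords = pca.fit_transform(X_scaled)

plt.scatter(coords[:, 0], coords[:, 1], s=10)
plt.xlabel("Component 1")
plt.ylabel("Component 2")
plt.show()

# Share of the variability captured by each component.
print(pca.explained_variance_ratio_)
```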
Linear Model: Elastic Net
Our first model is a linear model, ElasticNet. We chose it because many of the features that determine house price, such as total square footage, have a linear relationship with price. This model has the added benefit of dropping irrelevant features through its regularization terms. We used a grid search to tune lambda, the size of the penalty, and rho, which controls the balance of Ridge vs. Lasso regularization. Because this model uses regularization, we standardized our data with sklearn's StandardScaler before fitting.
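A sketch of this setup with sklearn, where the post's lambda corresponds to alpha and its rho to l1_ratio; the grid values are illustrative, and X_train / y_log stand for the prepared feature matrix and log-transformed sale price.

```python
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Scale inside the pipeline so the penalty treats all features equally.
pipe = make_pipeline(StandardScaler(), ElasticNet(max_iter=10000))

param_grid = {
    "elasticnet__alpha": [1e-4, 1e-3, 1e-2, 1e-1, 1.0],   # "lambda"
    "elasticnet__l1_ratio": [0.1, 0.3, 0.5, 0.7, 0.9],    # "rho"
}
search = GridSearchCV(pipe, param_grid, cv=5,
                      scoring="neg_root_mean_squared_error")
search.fit(X_train, y_log)
print(search.best_params_)
```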
Our final model had a small lambda (1e-4) and a high rho (0.9), indicating that it dropped several non-informative features. Using this model, we see that the most important factors in house price are Square Footage, Lot Area, and certain neighborhoods (Stone Brook, etc.) (Figure 5).
However, this model cannot capture any non-linear relationships in the dataset such as the drop in house price following the 2008 housing crisis.
Tree-Based Model: XGBoost
Therefore, to capture these non-linear and less intuitive factors, we used a tree-based model, XGBoost. While this is a more complex model, we can still see which features are important. Unlike ElasticNet, XGBoost does not require scaling because tree-based models split the data at thresholds, which are unaffected by monotonic transformations. In this model, the most important features are qualitative ones like fireplace or kitchen quality (Figure 6). Both models have pros and cons, so is there a way to take the best of both worlds?
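A minimal sketch of the XGBoost side; the hyperparameters shown are illustrative rather than the tuned values from the project.

```python
from xgboost import XGBRegressor
import pandas as pd

# No scaling step needed here; tree splits ignore monotonic transformations.
xgb = XGBRegressor(n_estimators=1000, learning_rate=0.05, max_depth=3,
                   subsample=0.8, colsample_bytree=0.8, random_state=0)
xgb.fit(X_train, y_log)

# Feature importances behind a plot like Figure 6.
importances = pd.Series(xgb.feature_importances_, index=X_train.columns)
print(importances.sort_values(ascending=False).head(10))
```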
Ensembling
It turns out that different models perform differently across price ranges. We divided the house prices into 21 buckets; in Figure 7, each dot represents the winning model in each price range. ElasticNet performs better in the low price range ($105k and below), while XGBoost performs better in the high price range ($300k and above). A weighted average of both models does best in the middle range ($105k to $300k).
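One way to implement that blend is sketched below. The post doesn't specify how a house's price band is chosen at prediction time or what the middle-band weights are, so a provisional 50/50 average is assumed for both here; X_test is the prepared test feature matrix.

```python
import numpy as np

# Predictions from each model, converted from log space back to dollars.
pred_enet = np.expm1(search.predict(X_test))
pred_xgb = np.expm1(xgb.predict(X_test))

# Provisional blend used only to estimate which price band a house is in
# (assumed 50/50; the project's actual weights are not given in the post).
blend = 0.5 * pred_enet + 0.5 * pred_xgb

final = np.where(blend <= 105_000, pred_enet,   # low band: ElasticNet
        np.where(blend >= 300_000, pred_xgb,    # high band: XGBoost
                 blend))                        # middle band: weighted average
```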
After ensembling the models, we are able to get much closer to the true sale price of a home. House #907, which sold for $255,000 and was previously underestimated by $50,000, is now predicted at $254,634. Our overall error (root mean squared error on the log price, Kaggle's scoring metric) is 0.1256, placing our team in the top 25% of all submissions on Kaggle.
Future Work
Our model still has difficulty estimating very low- or very high-priced houses. Furthermore, it risks being overfit to the training data because of our ensembling method. Moving forward, we would like to feed our predictions into a random forest that first classifies houses as low-, medium-, or high-priced and then predicts price; this approach is more outlier-resistant than either model used here. We would also like to revisit feature engineering (market temperature, season) and incorporate more data to expand the predictive capacity of our modeling.