Using XGBoost to Predict House Prices
The skills I demoed here can be learned by taking the Data Science with Machine Learning bootcamp at NYC Data Science Academy.
LinkedIn | GitHub | Email | Data | Web App | Notebook
Introduction
"Location, location, location." The likelihood that you will hear that phrase if you are looking into purchasing a house, apartment, condo, or timeshare is .9999999999. (Yes, I performed that data study myself.) However, there are many other factors that contribute to the price of real estate, some of which do not relate to the quality of the house itself -- like the month in which it is sold. Additional factors include the unfinished square feet of basement area. Location itself can also be applied in ways that people do not necessarily anticipate, like the location of the home's garage.
All of these variables can add up to hundreds of features. Accordingly, arriving at the correct sale price involves some advanced statistical techniques. Who would want to sit down with paper and pencil to determine how all of those features interact to produce a home's market value? Not me, and I was only working with 79 features. This is where computers help, running through calculations that would take far too long to work out on paper -- but we do need to set them up by training them with models. The challenge of this exercise was selecting the best model to use to predict a home's price.
Objective
I was tasked with predicting house prices given a combination of 79 features. I did so mostly following the data science methodology. Using the sklearn.metrics module, I managed to attain the following metric scores in my train-test split:
Mean Squared Error: 395114426.0445745
Mean Absolute Error: 13944.044001807852
R-Squared: 0.908991109360274
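As a rough illustration of how scores like these come out of sklearn.metrics, here is a minimal sketch. It assumes the Kaggle training file is saved locally as train.csv and, for simplicity, keeps only the numeric columns rather than my full prepared feature set.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score
from xgboost import XGBRegressor

# Load the Kaggle training data (assumed to be saved locally as train.csv).
df = pd.read_csv("train.csv")

# Simplification for illustration: numeric columns only, identifier and target dropped.
X = df.drop(columns=["Id", "SalePrice"]).select_dtypes("number")
y = df["SalePrice"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# XGBoost handles the remaining missing values natively.
model = XGBRegressor(random_state=42)
model.fit(X_train, y_train)
y_pred = model.predict(X_test)

print("Mean Squared Error:", mean_squared_error(y_test, y_pred))
print("Mean Absolute Error:", mean_absolute_error(y_test, y_pred))
print("R-Squared:", r2_score(y_test, y_pred))
```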
However, my Kaggle submission was evaluated on Root-Mean-Squared-Error (RMSE) between the logarithm of the predicted value and the logarithm of the observed sales price. My score was 0.13244.
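For reference, here is a short sketch of that Kaggle metric, computed as the RMSE of the logged prices (the helper function name is mine, not Kaggle's):

```python
import numpy as np
from sklearn.metrics import mean_squared_error

def rmsle(y_true, y_pred):
    """RMSE between the logarithms of the observed and predicted sale prices."""
    return np.sqrt(mean_squared_error(np.log(y_true), np.log(y_pred)))

# Example, using the split and predictions from the earlier snippet:
# print(rmsle(y_test, y_pred))
```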
Mean absolute error is likely the easiest to interpret of the above metrics, being "the average of the absolute values of the errors" (Root-mean-square deviation - Wikipedia). In other words, my model's price predictions were off by about $13,944 on average.
Process
I will provide a simplified overview of the steps I took in order to reach my desired outcome. Feel free to visit my GitHub for a more thorough dive.
Business Understanding
This step determines the trajectory of one's project. Although my undertaking was purely academic in nature, there are conceivably several reasons why a similar goal would be set in the "real world." Perhaps an online real estate competitor that offers more accurate home value estimates than Zillow has entered the fray. Not wanting to lose market share, Zillow desires to revamp its home valuation model by utilizing features it had previously ignored, and by considering a wider array of data models. In any case, the objective is fairly straightforward.
Analytic Approach
The approach depends on the goal. Since I must predict sale prices, which are continuous quantities, this is a regression problem. If I were predicting labels or discrete values, I would have to utilize classification algorithms. There are different types of regression models. I know that tree-based regression models have typically performed well on similar problems, but I will have to see what the data looks like before I decide. Ultimately, I will evaluate different models and choose the one that performs best.
Data Requirements & Data Collection
The data has already been provided. If that were not the case, I would have to define the data requirements, determine the best way of collecting the data, and perhaps revise my definitions depending on whether the data could be used to fulfill the objective.
Data Understanding
This step encompasses exploratory data analysis. Reading relevant information about the data and conducting my own research to increase my domain knowledge were also necessary, as I did not perform the Data Requirements and Data Collection steps myself. The documentation that accompanied the dataset proved useful as it explained much of the missingness.
According to the paper (decock.pdf), the dataset describes "the sale of individual residential property in Ames, Iowa from 2006 to 2010." Its origins lie in the Ames City Assessor's Office, but its journey from that office to my computer was not direct. It had been modified by Dean De Cock, who is credited with popularizing this dataset for educational purposes in the hope of replacing the Boston Housing dataset, and then again by the community at Kaggle, the website from which I downloaded the data.
My work can be viewed in the Jupyter Notebook I created for this project (give it a few minutes to load). Here are some of the descriptive statistics I performed:
In order to view the feature distributions, I created histograms of the numerical (continuous) features and viewed the count distributions of the categorical features.
Continuous Features:
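A minimal sketch of that step, assuming the same train.csv file; the specific column and plotting choices here are illustrative rather than an exact copy of the notebook.

```python
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns

df = pd.read_csv("train.csv")

# Histograms for the numerical (continuous) features.
df.select_dtypes("number").hist(figsize=(16, 12), bins=30)
plt.tight_layout()
plt.show()

# Count distribution for one categorical feature (Neighborhood, as an example).
sns.countplot(y=df["Neighborhood"], order=df["Neighborhood"].value_counts().index)
plt.show()
```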
I visualized the missing values present in the dataset, then examined the relationships between the missing values and the sale price of the houses.
Next, I examined the correlation among the features with a heatmap.
I also examined the presence of outliers. Throughout this process I noted observations and potential steps I might take when I prepared the data.
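Here is a rough sketch of those checks, again assuming train.csv. The missingno and seaborn calls are my tool choices for this illustration; the notebook's exact plots may differ.

```python
import matplotlib.pyplot as plt
import missingno as msno
import pandas as pd
import seaborn as sns

df = pd.read_csv("train.csv")

# Visualize where values are missing across the features.
msno.matrix(df)
plt.show()

# Correlation heatmap over the numeric features.
corr = df.select_dtypes("number").corr()
sns.heatmap(corr, cmap="coolwarm", center=0)
plt.show()

# A quick outlier check: unusually large houses with low sale prices stand out here.
sns.scatterplot(x=df["GrLivArea"], y=df["SalePrice"])
plt.show()
```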
Data Preparation
During this stage, missing values, skewed features, outliers, redundant features, and multicollinearity are handled, and feature engineering is performed. As mentioned before, the documentation explained much of the missingness, removing the need to impute any of the missing data. I handled the missing values, removed some outliers, and encoded the categorical features. Dummy encoding was used for the nominal data and integer encoding for the ordinal data. Tree-based models are robust to outliers, multicollinearity, and skewed data, so I decided to utilize those models in order to avoid altering the data further.
Outliers visualized:
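Before moving on to modeling, here is a minimal sketch of the encoding step described above. The quality scale and the ExterQual column are just one example from the dataset; the full preparation covered every ordinal and nominal feature.

```python
import pandas as pd

df = pd.read_csv("train.csv")

# Integer-encode an ordinal feature using the quality scale shared by several Ames columns.
quality_scale = {"Po": 1, "Fa": 2, "TA": 3, "Gd": 4, "Ex": 5}
df["ExterQual"] = df["ExterQual"].map(quality_scale)

# Dummy-encode the remaining nominal (unordered) categorical features.
nominal_cols = df.select_dtypes("object").columns
df = pd.get_dummies(df, columns=nominal_cols, drop_first=True)
```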
Modeling
Modeling and evaluation go hand-in-hand, given that multiple models are generally created and the one that performs best is chosen. In light of the large number of features, a tree-based regression model is better suited than something like linear regression. I decided to utilize the XGBoost Python library due to its known advantages over the Gradient Boosting and Random Forest algorithms in the scikit-learn library. I then used Grid Search to determine the best parameters to use for each model.
The XGBRegressor took 4,162.4 minutes to complete, while the XGBRFRegressor took 8,049.0 minutes.
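A hedged sketch of that grid search, using the simplified numeric feature set from earlier; the grids I actually searched were larger than the one shown here.

```python
import pandas as pd
from sklearn.model_selection import GridSearchCV, train_test_split
from xgboost import XGBRegressor

df = pd.read_csv("train.csv")
X = df.drop(columns=["Id", "SalePrice"]).select_dtypes("number")  # simplified feature set
y = df["SalePrice"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Hypothetical grid, for illustration only.
param_grid = {
    "n_estimators": [500, 1000],
    "max_depth": [3, 5, 7],
    "learning_rate": [0.01, 0.05, 0.1],
}

# The same approach applies to XGBRFRegressor; only the estimator changes.
search = GridSearchCV(
    XGBRegressor(random_state=42),
    param_grid,
    scoring="neg_root_mean_squared_error",
    cv=5,
    n_jobs=-1,
)
search.fit(X_train, y_train)
print(search.best_params_, search.best_score_)
```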
Interestingly enough, the top features were First Floor in Square Feet and Lot Area in Square Feet for the gradient boosting model, and First Floor in Square Feet and Ground Living Area in Square Feet for the random forest model. The scoring metric used was negative root mean squared error. When scoring with R2 instead, the top feature was Ground Living Area in Square Feet, followed by Overall Quality, which rates the overall material and finish of the house.
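Continuing from the grid-search sketch above, feature importances can be pulled straight from the fitted estimator; this is the generic XGBoost attribute rather than the exact code in my notebook.

```python
import pandas as pd

# Feature importances from the best estimator found by the grid search above.
best_model = search.best_estimator_
importances = pd.Series(best_model.feature_importances_, index=X_train.columns)
print(importances.sort_values(ascending=False).head(10))
```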
Evaluation
While I was surprised that Overall Quality was not at the top, the importance of the features that measured the size of the house was in line with some of my expectations (look here and here). In other words, increasing the size of one's house will most assuredly increase its value.
I interactively explored my best performing model with ExplainerDashboard, an awesome library for building interactive dashboards that explain the inner workings of "black box" machine learning models. My web app, a stripped-down version of the dashboard, can be found here.
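A minimal sketch of launching such a dashboard with ExplainerDashboard, reusing the fitted model and hold-out split from the earlier snippets (the title string is my own placeholder):

```python
from explainerdashboard import RegressionExplainer, ExplainerDashboard

# Wrap the fitted model and the hold-out data, then serve the dashboard locally.
explainer = RegressionExplainer(best_model, X_test, y_test)
ExplainerDashboard(explainer, title="Ames House Price Explainer").run()
```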
I used Heroku, a free cloud application platform, to host my web app, alongside Kaffeine to keep it running. If that link does not work, you can go to my notebook and scroll to the bottom to view the dashboard. You can also visit my GitHub for the complete experience. My favorite feature of the dashboard is the ability to adjust the values of features and then generate a predicted house price. Doing so provides a more granular understanding of how the variables affect the final price. The library also comes with a number of other unique visualizations and features.
Conclusion
Experimenting with advanced regression techniques on real data in order to create an accurate predictive model was an informative experience. Zillow makes billions a year, which indicates that such models are valuable tools.