Predicting House Prices with XGBoost

Tyrone Wilkinson
Posted on May 4, 2021

LinkedIn | GitHub | Email | Data | Web App | Notebook

 

Introduction

“Location, location, location.” The likelihood that you will hear that phrase if you are looking into purchasing a house, apartment, condo, or timeshare is .9999999999. (Yes, I performed that study myself.) However, many other factors contribute to the price of real estate, some of which have nothing to do with the quality of the house itself, like the month in which it is sold or the unfinished square footage of the basement. Location itself can also apply in ways people do not necessarily anticipate, such as the location of the home’s garage.

All these variables can add up to hundreds of features, so arriving at the correct sale price involves some advanced statistical techniques. Who wants to sit down with paper and pencil and try to determine how so many features interact to set a home's market value? Not me, and I was only working with 79 features. This is where computers come in, running calculations that would take far too long to work out on paper, but we do need to set them up by training models. The challenge of this exercise was coming up with the best model for predicting a home's price.

 

Objective

I was tasked with predicting house prices given a combination of 79 features, which I did mostly by following the data science methodology. Using the sklearn.metrics module, I attained the following scores on my train-test split:

Mean Squared Error: 395114426.0445745

Mean Absolute Error: 13944.044001807852

R-Squared: 0.908991109360274

However, my Kaggle submission was evaluated on Root-Mean-Squared-Error (RMSE) between the logarithm of the predicted value and the logarithm of the observed sales price. My score was 0.13244.

Of the above metrics, mean absolute error is likely the easiest to interpret, being “the average of the absolute values of the errors” (Root-mean-square deviation - Wikipedia). Basically, my model predicts the price of a house to within about $13,944 on average.
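For reference, here is a minimal sketch of how those scores, plus the logarithmic RMSE Kaggle uses, can be computed with sklearn.metrics. The arrays y_test and y_pred are placeholders for the actual and predicted sale prices from my train-test split, not the exact variables in my notebook.

```python
# Sketch: scoring a regression model on a held-out split.
# y_test and y_pred are assumed arrays of actual and predicted sale prices.
import numpy as np
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

mse = mean_squared_error(y_test, y_pred)
mae = mean_absolute_error(y_test, y_pred)
r2 = r2_score(y_test, y_pred)

# Kaggle scores on the RMSE between the logs of predicted and observed prices.
rmsle = np.sqrt(mean_squared_error(np.log(y_test), np.log(y_pred)))

print(f"MSE:   {mse:.2f}")
print(f"MAE:   {mae:.2f}")
print(f"R^2:   {r2:.4f}")
print(f"RMSLE: {rmsle:.5f}")
```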

 

Process

Data Science Methodology

I will provide a simplified overview of the steps I took in order to achieve my desired outcome. Feel free to visit my GitHub for a more thorough dive.

 

Business Understanding

This step determines the trajectory of one's project. Although my undertaking was purely academic in nature, there are conceivably several reasons why a similar goal might be pursued in the “real world.” Perhaps an online real estate competitor has entered the fray, offering more accurate home value estimates than Zillow does. Not wanting to lose market share, Zillow wants to revamp its home valuation model by utilizing features it had previously ignored and by considering a wider array of data models. In any case, the objective is fairly straightforward.

 

Analytic Approach

The approach depends on the goal. Since I must predict sale prices, a continuous quantity, this is a regression problem. If I were predicting labels or discrete classes, I would have to utilize classification algorithms instead. There are different types of regression models. Tree-based regression models have typically performed well on similar problems, but I will have to see what the data looks like before I decide. Ultimately, I will evaluate different models and choose the one that performs best.

 

Data Requirements & Data Collection

The data has already been provided. If that were not the case, I would have to define the data requirements, determine the best way of collecting the data, and perhaps revise my definitions depending on whether the data could be used to fulfill the objective.

 

Data Understanding

This step encompasses exploratory data analysis. Reading relevant information about the data and conducting my own research to increase my domain knowledge were also necessary, as I did not perform the Data Requirements and Data Collection steps myself. The documentation that accompanied the dataset proved useful as it explained much of the missingness. 

According to the paper (decock.pdf), the dataset describes “the sale of individual residential property in Ames, Iowa from 2006 to 2010.” Its origins lie in the Ames City Assessor’s Office, but its journey from that office to my computer was not direct. It was first modified by Dean De Cock, who is credited with popularizing the dataset for educational purposes in hopes of replacing the Boston Housing dataset, and then again by the community at Kaggle, the website from which I downloaded the data.

All of my work can be viewed in the Jupyter Notebook I created for this project (give it a few minutes to load). Here I discuss some of the descriptive statistics I performed.

To view the feature distributions, I created histograms of the continuous numerical features and plotted the count distributions of the categorical features (see the sketch below).

Continuous Features:
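Below is a rough sketch of how such plots can be generated with pandas and matplotlib; the file name train.csv and the DataFrame name are assumptions for illustration, not the exact code from my notebook.

```python
# Sketch: distribution plots for the Ames training data.
import pandas as pd
import matplotlib.pyplot as plt

train = pd.read_csv("train.csv")  # assumed path to the Kaggle training file

# Histograms for the numeric features.
train.select_dtypes(include="number").hist(bins=30, figsize=(16, 12))
plt.tight_layout()
plt.show()

# Count distributions for the categorical features.
for col in train.select_dtypes(include="object").columns:
    train[col].value_counts().plot(kind="bar", title=col)
    plt.show()
```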

 

I visualized the missing values present in the dataset and then examined the relationships between the missing values and the sale price of the houses.
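Here is a simplified sketch of that exploration; `train` is assumed to be the loaded DataFrame, and the LotFrontage comparison is just an illustrative example of a feature with missing values.

```python
# Sketch: inspecting missingness and its relationship to SalePrice.
import matplotlib.pyplot as plt

# Count and plot missing values per feature.
missing = train.isnull().sum().sort_values(ascending=False)
missing[missing > 0].plot(kind="bar", figsize=(12, 4), title="Missing values per feature")
plt.show()

# Compare sale prices for rows with and without a given feature missing
# (LotFrontage is an illustrative choice).
print(train.groupby(train["LotFrontage"].isnull())["SalePrice"].describe())
```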

 

I examined the correlations among the features with a heatmap.
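A heatmap along these lines can be produced with seaborn; this is a sketch rather than the exact code in my notebook.

```python
# Sketch: correlation heatmap of the numeric features.
import seaborn as sns
import matplotlib.pyplot as plt

corr = train.select_dtypes(include="number").corr()
plt.figure(figsize=(14, 10))
sns.heatmap(corr, cmap="coolwarm", center=0)
plt.title("Correlation among numeric features")
plt.show()
```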

 

I also examined the presence of outliers. Throughout this process I noted observations and potential steps I might take when I prepared the data.

 

Data Preparation

During this stage, missing values, skewed features, outliers, redundant features, and multicollinearity are handled, and feature engineering is done. As I mentioned before, the documentation explained much of the missingness, removing the need to impute any of the missing data. I handled the missing values, removed some outliers, and encoded the categorical features. Dummy encoding was used for the nominal data and integer encoding for the ordinal data. Tree-based models are robust to outliers, multicollinearity, and skewed data, so I decided to rely on those models rather than alter the data further.
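The snippet below sketches the encoding idea with pandas; the quality scale shown is an illustrative ordinal mapping (the real one follows the dataset documentation), and `df` stands in for my cleaned DataFrame.

```python
# Sketch: integer-encode ordinal features, dummy-encode nominal features.
import pandas as pd

# Illustrative ordinal mapping for a quality rating such as ExterQual.
quality_scale = {"Po": 1, "Fa": 2, "TA": 3, "Gd": 4, "Ex": 5}
df["ExterQual"] = df["ExterQual"].map(quality_scale)

# Dummy encoding for the nominal features (drop_first avoids a redundant column per category).
df = pd.get_dummies(df, drop_first=True)
```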

Here are some of the obvious outliers I removed:

 

Modeling and Evaluation

These stages go hand-in-hand, given that multiple models are typically created in order to find the one that performs best. In light of the high number of features, a tree-based regression model would be better suited than something like linear regression. I decided to utilize the XGBoost Python library due to its known advantages over the Gradient Boosting and Random Forest algorithms in the scikit-learn library. I then used Grid Search to determine the best parameters to use in each model.
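The search looked roughly like the sketch below; the parameter grid shown is illustrative and much smaller than the one I actually ran, with X_train and y_train assumed to come from my train-test split.

```python
# Sketch: grid search over an XGBoost regressor with negative RMSE scoring.
from sklearn.model_selection import GridSearchCV
from xgboost import XGBRegressor

param_grid = {
    "n_estimators": [500, 1000],
    "max_depth": [3, 5],
    "learning_rate": [0.01, 0.05],
}

grid = GridSearchCV(
    XGBRegressor(objective="reg:squarederror", random_state=42),
    param_grid,
    scoring="neg_root_mean_squared_error",
    cv=5,
    n_jobs=-1,
)
grid.fit(X_train, y_train)
print(grid.best_params_, grid.best_score_)
```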

The XGBRegressor took 4,162.4 minutes to complete, while the XGBRFRegressor took 8,049.0 minutes.

Interestingly enough, the top features for each model were First Floor in Square Feet and Lot Area in Square Feet for the gradient-boosted model, and First Floor in Square Feet and Ground Living Area in Square Feet for the random forest. The scoring metric used was negative root mean squared error. When scoring with R², the top feature was Ground Living Area in Square Feet, followed by Overall Quality, which rates the overall material and finish of the house. While I was surprised that Overall Quality was not at the top, the importance of features that measured the size of the house was in line with some of my findings (look here and here). In other words, increasing the size of one's house will most assuredly increase its value.
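For context, feature importances can be read straight off the fitted estimator; this sketch reuses the hypothetical `grid` object from the snippet above.

```python
# Sketch: reading the top feature importances from the best estimator.
import pandas as pd

best_model = grid.best_estimator_
importances = pd.Series(best_model.feature_importances_, index=X_train.columns)
print(importances.sort_values(ascending=False).head(10))
```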

I interactively explored my best-performing model with ExplainerDashboard, an awesome library for building interactive dashboards that explain the inner workings of machine learning models. My web app, a stripped-down version of the dashboard, can be found here. I used Heroku, a free cloud application platform, to host my web app, alongside Kaffeine to keep it running. If that link does not work, you can go to my notebook and scroll all the way to the bottom to view the dashboard. You can also check out my GitHub for the complete experience. My favorite feature of the dashboard is the ability to adjust feature values and then generate a predicted house price, which provides a more granular understanding of how the variables affect the final price. The library also comes with a number of other unique visualizations and features. It is a must-use when working with “black box” models.
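Launching the dashboard takes only a few lines; this sketch assumes the fitted model and held-out test split from the earlier snippets, and the title is just an example.

```python
# Sketch: building an interactive explainer for the fitted regressor.
from explainerdashboard import RegressionExplainer, ExplainerDashboard

explainer = RegressionExplainer(best_model, X_test, y_test)
ExplainerDashboard(explainer, title="Ames House Prices").run()
```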

 

Conclusion

Experimenting with advanced regression techniques on real data in order to come up with the best prediction was an informative experience. Zillow makes billions a year, which indicates that a model that accurately predicts the sale price of a house would be a very valuable tool for a competitor or Zillow itself.

 

About Author

Tyrone Wilkinson

| Data Scientist | I love tackling interesting problems. With a degree in Computer Science and background in IT, I now leap into AI. Contact me if you want to change the world.
