Using XGBoost to Predict House Prices

Posted on May 4, 2021
The skills I demoed here can be learned through taking the Data Science with Machine Learning bootcamp at NYC Data Science Academy.

LinkedIn | GitHub | Email | Data | Web App | Notebook

Introduction

"Location, location, location." The likelihood that you will hear that phrase if you are looking into purchasing a house, apartment, condo, or timeshare is .9999999999. (Yes, I performed that data study myself.) However, many other factors contribute to the price of real estate, some of which do not relate to the quality of the house itself, like the month in which it is sold. Additional factors include the unfinished square footage of the basement. Location itself can also apply in ways that people do not necessarily anticipate, like the location of the home's garage.

All of these variables can add up to hundreds of features. Accordingly, arriving at the correct sale price involves some advanced statistical techniques. Who would want to sit down with paper and pencil to determine how all of those features interact to produce a home's market value? Not me, and I was only working with 79 features. This is where computers come in: they can run through calculations that would take far too long to work out on paper, but we do need to set them up by training models. The challenge of this exercise was selecting the best model for predicting home prices.


Objective

I was tasked with predicting house prices given a combination of 79 features. I did so mostly following the data science methodology. Using the sklearn.metrics module, I attained the following metric scores on my train-test split:

Mean Squared Error: 395114426.0445745

Mean Absolute Error: 13944.044001807852

R-Squared: 0.908991109360274

However, my Kaggle submission was evaluated on the root-mean-squared error (RMSE) between the logarithm of the predicted value and the logarithm of the observed sales price. My score was 0.13244.

Mean absolute error is likely the easiest of the above metrics to interpret, being "the average of the absolute values of the errors" (Root-mean-square deviation, Wikipedia). Basically, my model can predict the price of a house to within $13,944.04 on average.
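Here is a minimal sketch of how these scores can be computed with sklearn.metrics. The numbers and variable names are illustrative stand-ins, not taken from my notebook:

```python
import numpy as np
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

# Illustrative stand-ins for the held-out targets and model predictions.
y_test = np.array([208500.0, 181500.0, 223500.0, 140000.0])
preds  = np.array([201000.0, 175000.0, 230000.0, 152000.0])

mse = mean_squared_error(y_test, preds)
mae = mean_absolute_error(y_test, preds)
r2  = r2_score(y_test, preds)

# Kaggle scores this competition on RMSE between the logarithms of the
# predicted and observed sale prices.
log_rmse = np.sqrt(mean_squared_error(np.log(y_test), np.log(preds)))

print(f"MSE: {mse:,.2f}  MAE: {mae:,.2f}  R^2: {r2:.4f}  log-RMSE: {log_rmse:.5f}")
```

Taking logarithms first means the Kaggle metric penalizes relative errors, so missing by $10,000 on a cheap house hurts the score more than missing by $10,000 on an expensive one.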


Process

[Figure: the data science methodology cycle]

I will provide a simplified overview of the steps I took in order to reach my desired outcome. Feel free to visit my GitHub for a more thorough dive.


Business Understanding

This step determines the trajectory of one's project. Although my undertaking was purely academic in nature, there are conceivably several reasons why a similar goal would be set in the "real world." Perhaps an online real estate competitor has entered the fray offering more accurate home value estimates than Zillow does. Not wanting to lose market share, Zillow decides to revamp its home valuation model by utilizing features it had previously ignored and by considering a wider array of data models. In any case, the objective is fairly straightforward.


Analytic Approach

The approach depends on the goal. Since I must predict sale prices, which are quantities, I know this is a regression problem. If I were predicting labels or discrete categories, I would have to utilize classification algorithms. There are different types of regression models. I know that tree-based regression models have typically performed well on similar problems, but I will have to see what the data looks like before I decide. Ultimately, I will evaluate different models and choose the one that performs best, as sketched below.
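A minimal sketch of that kind of model bake-off, assuming the features have already been prepared; the toy data and the candidate models here are illustrative, not the ones from my project:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Toy data standing in for the prepared housing features and sale prices.
X, y = make_regression(n_samples=200, n_features=10, noise=10.0, random_state=0)

candidates = {
    "linear": LinearRegression(),
    "random forest": RandomForestRegressor(random_state=0),
    "gradient boosting": GradientBoostingRegressor(random_state=0),
}

# Score each candidate with 5-fold cross-validation and compare RMSEs.
for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5,
                             scoring="neg_root_mean_squared_error")
    print(f"{name}: RMSE = {-scores.mean():.2f}")
```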


Data Requirements & Data Collection

The data has already been provided. If that were not the case, I would have to define the data requirements, determine the best way of collecting the data, and perhaps revise my definitions depending on whether the data could be used to fulfill the objective.


Data Understanding

This step encompasses exploratory data analysis. Since I did not perform the Data Requirements and Data Collection steps myself, reading relevant information about the data and conducting my own research to increase my domain knowledge were also necessary. The documentation that accompanied the dataset proved useful, as it explained much of the missingness.

According to the paper (decock.pdf), the dataset describes "the sale of individual residential property in Ames, Iowa from 2006 to 2010." Its origins lie in the Ames City Assessor's Office, but its journey from that office to my computer was not direct. It had been modified by Dean De Cock, the individual credited with popularizing this dataset for educational purposes in hopes of replacing the Boston Housing dataset, and then again by the community at Kaggle, the website from which I downloaded the data.

My work can be viewed in the Jupyter Notebook I created for this project (give it a few minutes to load). Here are some of the descriptive analyses I performed:

In order to view the feature distributions, I created histograms of the numerical and continuous features and viewed the count distributions of the categorical features.

[Figure: histograms of the continuous features]

I visualized the missing values present in the dataset then examined the relationships between the missing values and the sale price of the houses.
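As a rough sketch of those two steps (the histograms and the missingness check), here is how they can be done with pandas, matplotlib, and seaborn. The tiny DataFrame is a stand-in for the real training data, using a few actual Ames column names:

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

# Tiny stand-in for the Ames training data; the real frame has 79 features
# and would come from something like pd.read_csv("train.csv").
df = pd.DataFrame({
    "LotFrontage": [65.0, np.nan, 68.0, np.nan, 84.0],
    "GrLivArea":   [1710, 1262, 1786, 1717, 2198],
    "SalePrice":   [208500, 181500, 223500, 140000, 250000],
})

# Count and rank the missing values per feature.
missing = df.isna().sum().sort_values(ascending=False)
print(missing[missing > 0])

# Heatmap of the missingness pattern: each highlighted cell is a missing entry.
sns.heatmap(df.isna(), cbar=False)
plt.show()

# Histograms of the numerical features to inspect their distributions.
df.hist(bins=30, figsize=(10, 6))
plt.show()
```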


I then visualized the correlation among the features with a heatmap.
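Continuing with the illustrative df from the previous sketch, a correlation heatmap takes only a few lines with seaborn:

```python
import matplotlib.pyplot as plt
import seaborn as sns

# Pairwise Pearson correlations among the numeric features
# (df as defined in the previous sketch).
corr = df.select_dtypes("number").corr()
sns.heatmap(corr, annot=True, fmt=".2f", cmap="coolwarm")
plt.show()
```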


I also examined the presence of outliers. Throughout this process I noted observations and potential steps I might take when I prepared the data.
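One common way to spot outlier candidates in this dataset is a scatter plot of living area against sale price; very large houses that sold cheaply stand out (again using the illustrative df from the sketches above):

```python
import matplotlib.pyplot as plt

# Points far from the main cloud (e.g., huge houses with low prices)
# are candidates for removal.
plt.scatter(df["GrLivArea"], df["SalePrice"], alpha=0.5)
plt.xlabel("GrLivArea (sq ft)")
plt.ylabel("SalePrice ($)")
plt.show()
```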


Data Preparation

During this stage, missing values, skewed features, outliers, redundant features, and multicollinearity are handled, and feature engineering is performed. As mentioned before, the documentation explained much of the missingness, removing the need to impute any of the missing data. I handled the missing values, removed some outliers, and encoded the categorical features. Dummy encoding was used for the nominal data and integer encoding for the ordinal data. Tree-based models are robust to outliers, multicollinearity, and skewed data, so I decided to utilize those models in order to avoid altering the data further.
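Here is a minimal sketch of those two encodings, using a couple of real Ames features for illustration; the quality-scale mapping is my own assumption, not necessarily the one from my notebook:

```python
import pandas as pd

# Toy frame with one nominal and one ordinal feature from the Ames data.
df = pd.DataFrame({
    "Neighborhood": ["CollgCr", "Veenker", "CollgCr"],   # nominal
    "ExterQual":    ["TA", "Gd", "Ex"],                  # ordinal
})

# Dummy (one-hot) encoding for nominal categories.
df = pd.get_dummies(df, columns=["Neighborhood"], drop_first=True)

# Integer encoding for ordinal categories, preserving their natural order.
quality_order = {"Po": 1, "Fa": 2, "TA": 3, "Gd": 4, "Ex": 5}
df["ExterQual"] = df["ExterQual"].map(quality_order)

print(df)
```

Integer encoding preserves the rank information in ordinal features like ExterQual, which one-hot encoding would discard.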

[Figure: outliers visualized]

Modeling

Modeling and evaluation go hand-in-hand, given that multiple models are generally created and the one that performs best is chosen. In light of the large number of features, a tree-based regression model is better suited than something like linear regression. I decided to utilize the XGBoost Python library due to its known advantages over the Gradient Boosting and Random Forest implementations in the scikit-learn library. I then used grid search to determine the best hyperparameters for each model.
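A stripped-down sketch of that search, with toy data and a far smaller parameter grid than I actually used; the grid values here are illustrative assumptions:

```python
from sklearn.datasets import make_regression
from sklearn.model_selection import GridSearchCV
from xgboost import XGBRegressor  # XGBRFRegressor can be swapped in the same way

# Toy data standing in for the encoded housing features.
X, y = make_regression(n_samples=200, n_features=20, noise=10.0, random_state=0)

# Illustrative grid; the real search covered many more combinations.
param_grid = {
    "n_estimators": [200, 500],
    "max_depth": [3, 5],
    "learning_rate": [0.05, 0.1],
}

search = GridSearchCV(
    XGBRegressor(random_state=0),
    param_grid,
    scoring="neg_root_mean_squared_error",  # the scoring metric I used
    cv=5,
)
search.fit(X, y)
print(search.best_params_, -search.best_score_)
```

With hundreds of dummy-encoded features and a full-sized grid, a search like this is what consumed the thousands of minutes reported below.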

The grid search with the XGBRegressor took 4,162.4 minutes to complete, while the XGBRFRegressor took 8,049.0 minutes.

Interestingly enough, the top features were First Floor in Square Feet and Lot Area in Square Feet for the gradient boosting model, and First Floor in Square Feet and Ground Living Area in Square Feet for the random forest model. The scoring metric used was negative root mean squared error. Using R-squared, the top feature was Ground Living Area in Square Feet, followed by Overall Quality, which rates the overall material and finish of the house.
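Extracting such a ranking is straightforward once a model is fitted. Continuing the grid-search sketch above (the feature names here are positional placeholders):

```python
import pandas as pd

# Rank features by the fitted model's importance scores.
best_model = search.best_estimator_
importances = pd.Series(
    best_model.feature_importances_,
    index=[f"feature_{i}" for i in range(X.shape[1])],
)
print(importances.sort_values(ascending=False).head(10))
```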

Evaluation

While I was surprised that Overall Quality was not at the top, the importance of features that measure the size of the house was in line with some of my expectations (look here and here). In other words, increasing the size of one's house will most assuredly increase its value.

I interactively explored my best performing model with ExplainerDashboard, an awesome library for building interactive dashboards that explain the inner workings of "black box" machine learning models. My web app, a stripped-down version of the dashboard, can be found here.
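Getting a basic dashboard running takes only a few lines. This is a self-contained sketch with a toy model standing in for my best estimator; ExplainerDashboard expects the features as a pandas DataFrame:

```python
import pandas as pd
from explainerdashboard import ExplainerDashboard, RegressionExplainer
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor

# Toy data and model standing in for the project's best estimator.
X, y = make_regression(n_samples=200, n_features=10, noise=10.0, random_state=0)
X = pd.DataFrame(X, columns=[f"feature_{i}" for i in range(X.shape[1])])
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = XGBRegressor(random_state=0).fit(X_train, y_train)

# Build the explainer on held-out data and launch the dashboard locally.
explainer = RegressionExplainer(model, X_test, y_test)
ExplainerDashboard(explainer, title="House Price Explainer").run(port=8050)
```

For a hosted deployment like mine, the dashboard is typically served through a production WSGI server rather than the built-in .run().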

I used Heroku, a free cloud application platform, to host my web app, alongside Kaffeine to keep it running. If that link does not work, you can go to my notebook and scroll to the bottom to view the dashboard. You can also visit my GitHub for the complete experience. My favorite feature of the dashboard is the ability to adjust feature values and then generate a predicted house price. Doing so provides a more granular understanding of how the variables affect the final price. The library also comes with a number of other unique visualizations and features.


Conclusion

Experimenting with advanced regression techniques on real data in order to create an accurate predictive model was an informative experience. Zillow generates billions of dollars in revenue each year, which indicates that such models are valuable tools.

About Author

Tyrone Wilkinson

| Data Scientist | I love tackling interesting problems. With a degree in Computer Science from Columbia University and IT experience spanning over five years, I now leap into AI. Contact me if you want to...