This Old House: Using ML to Guide Home Renovations

Posted on Apr 15, 2020
The skills demonstrated here can be learned through the Data Science with Machine Learning bootcamp at NYC Data Science Academy.

GitHub | LinkedIn | ResearchGate

Project Summary

Selling a house can be a uniquely stressful time for homeowners, particularly if the house is older or has been 'worn in' by children now full-grown and off to college. Owners may be tempted to renovate their aged house in an attempt to attract potential buyers and drive a higher selling price, yet it is unrealistic to upgrade the entire house. What features should be prioritized to maximize the selling price, and return on investment?

Using a dataset obtained from Kaggle.com, the current project aims to explore the housing market of Ames, Iowa, and utilize advanced machine learning algorithms to (1) accurately predict house selling prices, and (2) identify key features that most affect the selling price. The results of the present study found that an ensemble learner achieved the highest accuracy, with a mean predictive error of $8,500, placing 724th of 4455 submissions (top 16.25%) (Figure 1).

Key features contributing to the sale price, and which should thus be prioritized when renovating, include the condition of the kitchen, exterior, and basement.

Figure 1. Ranking as of 04/15/2020

Background

The datasets obtained from Kaggle.com comprised train and test CSV files containing 1460 and 1459 house sales in Ames, Iowa between 2006 and 2010, respectively. The datasets included over 80 variables describing the land property, neighborhood, surrounding features like adjacent railroads and alleys, and the house itself.

After wrangling the data and training machine learning algorithms on the training dataset, participants predict the sale prices of the houses in the test dataset and submit them to Kaggle, where the predictions are scored and ranked against all other submissions.

Methodology

An overview of the procedural steps necessary for the prediction of house prices is found in Figure 2. First, exploratory data analysis was performed to better understand the datasets used to predict the housing price. Next, feature engineering included identification and removal of outliers, imputing missing values, data transformation, and re-binning to reduce sparsity of categorical features.

Lastly, a diverse array of regularized linear and tree-based regression algorithms was used to predict house prices; more advanced blending and ensemble learning were applied to further improve the predictions.

Figure 2. Overview of procedural steps

Exploratory Data Analysis

General descriptive analysis of features was performed to better understand (1) the distribution of values per feature (Figure 3), and (2) the correlation between numeric variables (Figure 4). Of note, many features were skewed, which may distort the distribution of residual errors (a key assumption of regression analysis) and should be addressed during feature engineering.

Similarly, the high correlation between predictive features indicates multicollinearity, which violates another key assumption of regression analysis. For example, it is within reason to expect a relationship between the garage area and the number of cars it can hold. Therefore, regularized regression models should be used to minimize the effect of multicollinearity between the predictive features.

Figure 3. Descriptive analysis of features
Figure 4. Correlative analysis of numeric features
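As a sketch of these two checks, the skewness and correlation analysis might look like the following; the toy frame below is illustrative (the real analysis runs over the full 80+ Ames columns), but the column names come from the actual dataset:

```python
import numpy as np
import pandas as pd

# Toy stand-in for the Ames training data; values are made up.
train = pd.DataFrame({
    "GrLivArea":  [856, 1262, 1786, 1717, 2198, 1362],
    "GarageArea": [548, 460, 608, 642, 836, 480],
    "GarageCars": [2, 2, 2, 3, 3, 2],
    "SalePrice":  [208500, 181500, 223500, 140000, 250000, 143000],
})

numeric = train.select_dtypes(include=np.number)

# (1) Skewness per numeric feature: large absolute values flag
# candidates for a Box-Cox or log transform later.
skews = numeric.skew().sort_values(ascending=False)

# (2) Pairwise correlation: highly correlated predictor pairs
# (e.g. GarageArea vs. GarageCars) signal multicollinearity.
corr = numeric.corr()
print(corr.loc["GarageArea", "GarageCars"])
```

In practice the correlation matrix is usually rendered as a heatmap (as in Figure 4) rather than inspected cell by cell.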

Feature Engineering

Exploratory data analysis of the 80+ features led to the identification of several outliers in the training dataset (n=13) (Figure 5). Outliers were removed to improve the accuracy of the machine learning algorithms and reduce the leverage of individual observations. 

Figure 5. Identification and removal of outliers
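The post does not spell out its exact 13-point outlier rule, but a common criterion for this dataset is dropping very large houses that sold cheaply, since they exert high leverage on a regression fit. A minimal sketch, with an assumed illustrative threshold:

```python
import pandas as pd

# Toy stand-in for the Ames training frame (real column names, made-up values).
train = pd.DataFrame({
    "GrLivArea": [1500, 1800, 4676, 5642, 2100],
    "SalePrice": [180000, 210000, 184750, 160000, 240000],
})

# Drop high-leverage points: unusually large living area paired with a
# low sale price. The threshold is illustrative, not the post's exact rule.
mask = (train["GrLivArea"] > 4000) & (train["SalePrice"] < 300000)
train_clean = train.loc[~mask].reset_index(drop=True)
print(len(train_clean))  # 3 rows survive here
```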

Handling missing data was complicated by the fact that, for several features, NaN indicated that the house lacked the feature rather than that the data were missing. It was therefore imperative to handle the pseudo-missing and actually-missing data separately.

Figure 6 highlights the overall imputation strategy used in the present project. Pseudo-missing values were imputed with 'None' to indicate that the house, for example, did not have a pool. Actually-missing values were imputed when appropriate.

Of note, numerical and categorical missing values were imputed with the mean or mode value, respectively, per neighborhood. When imputing missing data, it is imperative to maintain the distribution of values. As highlighted in Figure 7, imputing by the mean value per neighborhood, for example for 'LotFrontage', minimally affects the distribution of the numeric features.

Figure 6. Imputation strategy
Figure 7. Effect of imputation by neighborhood
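The two-track imputation described above can be sketched as follows; the tiny frame is illustrative, but the column names and the 'None' / neighborhood-mean conventions follow the text:

```python
import pandas as pd

# PoolQC NaN means "no pool" (pseudo-missing); LotFrontage NaN is
# genuinely missing. Real column names, made-up values.
df = pd.DataFrame({
    "Neighborhood": ["NAmes", "NAmes", "NAmes", "CollgCr", "CollgCr"],
    "PoolQC":       [None,    "Gd",    None,    None,      None],
    "LotFrontage":  [70.0,    None,    80.0,    60.0,      None],
})

# Pseudo-missing: NaN encodes the absence of the feature itself.
df["PoolQC"] = df["PoolQC"].fillna("None")

# Actually-missing numeric: impute with the mean of the house's own
# neighborhood, which preserves the distribution better than a global mean.
df["LotFrontage"] = (
    df.groupby("Neighborhood")["LotFrontage"]
      .transform(lambda s: s.fillna(s.mean()))
)
```

Categorical gaps can be filled the same way with the neighborhood mode, e.g. `s.fillna(s.mode()[0])` inside the `transform`.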

Data Analysis

Initial exploratory data analysis revealed skewness that may affect the distribution of residuals, a key assumption of regression analysis. As such, both continuous and ordinal features were transformed using Box-Cox and power transformations. In particular, Figure 8 emphasizes the skewness of the response variable, 'SalePrice', and its improved distribution following a log transformation.

Figure 8. Log transformation of response variable, 'SalePrice'
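These transformations might be applied along the following lines; the fixed Box-Cox lambda of 0.15 is a common choice in Kaggle kernels for this dataset and is an assumption here, not the post's stated value:

```python
import numpy as np
import pandas as pd
from scipy.special import boxcox1p
from scipy.stats import skew

# Toy stand-in for the Ames training data.
train = pd.DataFrame({
    "SalePrice": [130000, 165000, 200000, 450000, 755000],
    "LotArea":   [8450, 9600, 11250, 21000, 63000],
})

# Response: log(1 + x) pulls in the long right tail of SalePrice.
y = np.log1p(train["SalePrice"])

# Predictors: Box-Cox(1 + x) transform for strongly skewed columns.
lam = 0.15  # assumed fixed lambda, not fitted here
skewed = [c for c in ["LotArea"] if abs(skew(train[c])) > 0.75]
for c in skewed:
    train[c] = boxcox1p(train[c], lam)
```

Note that because the model is trained on the log of the price, Kaggle submissions must invert the transform with `np.expm1` before submitting.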

Lastly, categorical features were dummified into discrete binary features, a necessary step for both regularized and tree-based regression algorithms. However, dummification expands the number of features, and thus the dimensionality of the dataset, limiting the detection of correlations between features. To minimize data sparsity, several categories were re-binned, as shown in Table 1. An example of re-binning is found in Figures 9 and 10.

Figures 9 and 10. Example of re-binning
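Re-binning followed by dummification might look like the sketch below; merging the railroad-related levels of 'Condition1' is an illustrative example, not the post's exact Table 1 mapping:

```python
import pandas as pd

df = pd.DataFrame({
    "Condition1": ["Norm", "Feedr", "RRAn", "RRAe", "Artery", "RRNn"],
})

# Re-bin sparse railroad-related levels into a single category before
# dummifying, so no dummy column ends up nearly all zeros.
rail = {"RRAn": "Railroad", "RRAe": "Railroad",
        "RRNn": "Railroad", "RRNe": "Railroad"}
df["Condition1"] = df["Condition1"].replace(rail)

# One binary column per remaining category.
dummies = pd.get_dummies(df, columns=["Condition1"])
print(list(dummies.columns))
```

Here re-binning cuts the six original levels down to four dummy columns.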

Machine Learning

Several regression algorithms were implemented, including regularized linear models (Lasso, Ridge, and Elastic-Net) and tree-based models (Gradient Boosting and XGBoost). Advanced techniques, namely average blending and ensemble learning (StackingCVRegressor), were implemented in an attempt to capitalize on the differences between the linear and tree-based models. The training and Kaggle-submission root-mean-square error scores of all models are found in Table 2.
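The post used mlxtend's StackingCVRegressor; as a close analogue, scikit-learn's built-in StackingRegressor is sketched below with a Lasso meta-model over synthetic data (XGBoost is omitted to keep the example dependency-free, and all hyperparameters are illustrative):

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, StackingRegressor
from sklearn.linear_model import ElasticNet, Lasso, Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the engineered Ames feature matrix.
X, y = make_regression(n_samples=300, n_features=20, noise=10.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Base learners' out-of-fold predictions feed a Lasso meta-model.
stack = StackingRegressor(
    estimators=[
        ("lasso", Lasso(alpha=0.001, max_iter=10000)),
        ("ridge", Ridge(alpha=1.0)),
        ("enet", ElasticNet(alpha=0.001, max_iter=10000)),
        ("gbm", GradientBoostingRegressor(random_state=0)),
    ],
    final_estimator=Lasso(alpha=0.001, max_iter=10000),
    cv=5,
)
stack.fit(X_tr, y_tr)
rmse = mean_squared_error(y_te, stack.predict(X_te)) ** 0.5
```

The meta-model learns how to weight each base learner, which is what lets the stack exploit the complementary errors of the linear and tree-based models.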

The ensemble learner of all five models, with a Lasso meta-model, achieved the highest accuracy, with an average predictive error of $8,500, placing 724th of 4455 submissions (top 16.25%) (Figure 1).


What Should Owners Upgrade to Improve the Sale Price?

Lasso regularized linear regression and the Gradient Boosting and XGBoost tree-based regression models provide key insight into the house features that most affect the sale price. The top 20 features by importance, as determined by the Lasso model, are found in Figure 11. Key upgradable features that most significantly affect the house price include the overall condition, kitchen quality, and functionality of the house.

The tree-based Gradient Boosting and XGBoost algorithms identify the overall quality, the total number of baths, and the quality of the kitchen, basement, and exterior (Figure 12). While the cumulative suggestions of the three models are not necessarily surprising to anyone who has ever purchased (or rented) a house, machine learning can be used to guide homeowners as to which renovations may be the most impactful.

Figure 11. Feature selection of the Lasso regularized regression algorithm
Figure 12. Feature importance of (left) Gradient Boosting and (right) XGBoost models
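The two ranking mechanisms behind these figures can be sketched as follows, on synthetic data; with the real data the columns would be the engineered Ames features (KitchenQual, OverallCond, and so on):

```python
import numpy as np
import pandas as pd
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import Lasso

# Synthetic stand-in for the engineered Ames feature matrix.
X, y = make_regression(n_samples=200, n_features=10, random_state=0)
cols = [f"feat_{i}" for i in range(X.shape[1])]

# Lasso: the magnitude of each nonzero coefficient ranks a feature;
# features with zero coefficients are dropped from the model entirely.
lasso = Lasso(alpha=1.0).fit(X, y)
lasso_rank = pd.Series(np.abs(lasso.coef_), index=cols).sort_values(ascending=False)

# Tree model: impurity-based feature_importances_ give an analogous ranking.
gbm = GradientBoostingRegressor(random_state=0).fit(X, y)
gbm_rank = pd.Series(gbm.feature_importances_, index=cols).sort_values(ascending=False)
```

Comparing the two rankings side by side, as the post does, helps separate features that matter under any model from artifacts of a single algorithm.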

Conclusions

The presented Kaggle competition was a successful exercise in data wrangling, exploratory data analysis, outlier removal, missing data imputation, and application of advanced machine learning techniques. An ensemble learner, composed of a diverse array of algorithms, achieved the highest accuracy, with a mean predictive error of $8,500, placing 724th of 4455 submissions (top 16.25%). Furthermore, kitchen, exterior, and basement quality were the most impactful features contributing to the final sale price of the house.

Homeowners looking to renovate their house before entering the market would be best served to upgrade the aforementioned features.

About the Author

Jon Harris is a certified Data Scientist and quantitative healthcare researcher with real-world experience in research methodologies, interpreting experimental results, statistical and machine learning modeling, and creating data-driven narratives for multi-level stakeholders. To learn more about the author, please see his LinkedIn profile; the code for this project is available on his GitHub account.
