Advanced Regression Modeling on House Prices

Ricky Yue and Jurgen De Jager
Posted on Sep 23, 2016

Introduction

The key question addressed in this blog is how we can better predict the sale prices of residential houses. The Ames Housing data set recently released on Kaggle is "a modernized and expanded version of the often cited Boston Housing dataset". It covers all recorded house sales in Ames, IA from January 2006 to July 2010. With 79 explanatory variables describing almost every feature of residential homes, we applied data imputation, feature engineering and machine learning to improve predictive accuracy on house prices.

The dataset contains 1460 observations in the training set and 1459 in the test set. There are 46 categorical variables (23 nominal and 23 ordinal) and 33 numeric variables. The training set includes the sale price as the response variable; the test set does not.

Time Series

It’s important to note that the housing price data range from early 2006 to mid-2010, a period that includes the subprime mortgage crisis and the economic recession that lasted from December 2007 to June 2009. We plotted the monthly median house sale price below and decomposed the time series into trend and seasonal components. As the trend panel shows, the monthly median sale price declined steadily from early 2008 until late 2009, suggesting that house sales in Ames were no exception to the influence of the mortgage crisis. Since the sale price series appears to follow a multiplicative model, Sale Price = Trend × Seasonality × Cyclicality × Irregularity, we derived a combined time series index from the trend and seasonality components:

TsIdx = TrendIdx × SeasonIdx / max(TrendIdx)

[Figure: time series decomposition of monthly median sale price into trend and seasonal components]

We considered using these time series indices as predictors, to test whether the state of the broader economy could help predict house sale prices.

Exploratory Data Analysis

Below are boxplots of some categorical variables against sale price. They are consistent with common sense: neighborhood, zoning, house quality and facilities help distinguish house values.

[Figure: boxplots of categorical variables vs. sale price]

Scatterplots of some numeric variables are shown below. Area-related features such as lot area, 1st floor square feet and 2nd floor square feet, as well as the year the house was built, show positive correlations with sale price.
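These correlations are easy to check with pandas. The column names below follow the Kaggle data dictionary (LotArea, 1stFlrSF, 2ndFlrSF, YearBuilt, SalePrice); the rows are illustrative stand-ins for the real train.csv:

```python
# Rank features by their correlation with SalePrice (illustrative rows).
import pandas as pd

df = pd.DataFrame({
    "LotArea":   [8450, 9600, 11250, 9550],
    "1stFlrSF":  [856, 1262, 920, 961],
    "2ndFlrSF":  [854, 0, 866, 756],
    "YearBuilt": [2003, 1976, 2001, 1915],
    "SalePrice": [208500, 181500, 223500, 140000],
})

# Pearson correlation of every numeric column with the response
corr = df.corr(numeric_only=True)["SalePrice"].sort_values(ascending=False)
print(corr)
```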

Feature Importance

[Figure: feature importance rankings]

Outliers

[Figure: outliers]

Modeling

We divided our modeling into two tracks: one aimed at high predictive accuracy, the other at preserving interpretability. We first discuss the modeling that focused on predictive accuracy. As a first step we tuned the parameters of all our base learners, using grid search to find the optimal values. Below are the optimal parameters for our Generalized Linear Model, Neural Network, Random Forest and Gradient Boosted Trees.
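The grid-search step can be sketched as follows. This is a stand-in using scikit-learn's `GridSearchCV` on a gradient boosted trees learner (the post's models were trained in H2O, and the grid values here are illustrative, not the ones the authors used):

```python
# Exhaustive grid search with 5-fold CV over an illustrative parameter grid.
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV

# Synthetic regression data in place of the Ames training set
X, y = make_regression(n_samples=200, n_features=10, noise=0.1, random_state=0)

param_grid = {
    "n_estimators": [100, 300],
    "max_depth": [3, 5],
    "learning_rate": [0.05, 0.1],
}
grid = GridSearchCV(
    GradientBoostingRegressor(random_state=0),
    param_grid,
    cv=5,
    scoring="neg_root_mean_squared_error",
)
grid.fit(X, y)
print(grid.best_params_)
```

Each base learner gets its own grid; the best parameter set per learner is then carried forward into the ensemble step.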


GLM

[Figure: optimal GLM parameters]

Neural Network

[Figure: optimal neural network parameters]

Random Forest

[Figure: optimal random forest parameters]

Gradient Boosted Trees

[Figure: optimal gradient boosted trees parameters]

Stacking

Next we used ensemble learning to combine our models. Ensemble methods use multiple learning algorithms to obtain better predictive performance than any of the constituent algorithms alone. Stacking is a broad class of algorithms that trains a second-level "metalearner" to ensemble a group of base learners; the variant implemented in H2O is called "super learning", "stacked regression" or simply "stacking." Unlike bagging and boosting, the goal in stacking is to ensemble strong, diverse learners. To train the ensemble we did the following:

  • Trained each of the L base algorithms on the training set.
  • Performed k-fold cross-validation on each of these learners and collected the cross-validated predicted values from each of the L algorithms.
  • Combined the N cross-validated predicted values from each of the L algorithms to form a new N x L matrix. This matrix, along with the original response vector, is called the "level-one" data.
  • Trained the metalearning algorithm on the level-one data.
  • Used the "ensemble model" consisting of the L base learning models and the metalearning model, to generate predictions on a test set.
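The steps above can be sketched with scikit-learn as a stand-in for the H2O ensemble, using `cross_val_predict` to build the N x L level-one matrix (base learners and grid values here are illustrative):

```python
# Stacking: build the level-one matrix from cross-validated predictions,
# then train a metalearner on it.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

X, y = make_regression(n_samples=300, n_features=15, noise=0.2, random_state=0)

# L base learners, trained on the training set
base_learners = [
    Ridge(alpha=1.0),
    RandomForestRegressor(n_estimators=100, random_state=0),
    GradientBoostingRegressor(random_state=0),
]

# k-fold cross-validated predictions from each learner, stacked column-wise
# into the N x L "level-one" data
level_one = np.column_stack(
    [cross_val_predict(m, X, y, cv=5) for m in base_learners]
)

# metalearner trained on the level-one data plus the original response
meta = Ridge().fit(level_one, y)

# at prediction time, refit the base learners on all the data and feed
# their test-set predictions to the metalearner
fitted = [m.fit(X, y) for m in base_learners]

def ensemble_predict(X_new):
    return meta.predict(np.column_stack([m.predict(X_new) for m in fitted]))

print(level_one.shape)
```

Using out-of-fold predictions for the level-one data is what keeps the metalearner from simply memorizing the base learners' training-set fit.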

[Figure: stacking architecture]

Model Averaging

Stacking did not give us the results we had hoped for, although it improved our score slightly and put us in the top 20% of participants. We therefore decided to use model averaging, a simple strategy in which you average the predictions of several models. Below is a simple visual representation.

[Figure: model averaging scheme]

[Figure: model averaging results]

Since this approach gave us significantly better results, we decided to include even more models in the averaging, placing more weight on the models we knew performed well.
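Weighted model averaging amounts to a weighted mean of each model's predictions. A minimal sketch (the model names, predictions and weights below are illustrative, not the ones we used):

```python
# Weighted average of per-model predictions; weights sum to 1, with more
# weight on the models known to perform well.
import numpy as np

preds = {
    "glm": np.array([200_000.0, 150_000.0, 310_000.0]),
    "rf":  np.array([210_000.0, 148_000.0, 305_000.0]),
    "gbm": np.array([205_000.0, 152_000.0, 300_000.0]),
}
weights = {"glm": 0.2, "rf": 0.3, "gbm": 0.5}

avg = sum(w * preds[name] for name, w in weights.items())
print(avg)
```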

[Figure: weighted model averaging scheme]

[Figure: weighted averaging results]

This approach pushed us up to number two on the Kaggle leaderboard.

About Authors

Ricky Yue


As a data enthusiast, Ricky loves to think about real-life issues in a quantitative way. He likes to talk about probability and alternatives. He is proud of his Bayesian skepticism, built on years of scientific training. He was...
Jurgen De Jager


Jurgen’s fascination with analytics and its applications, specifically within data science, led him to decide some time ago that this is the career path he wants to pursue after graduation. In anticipation of this, he has worked extensively...

Leave a Comment

Pallavi January 1, 2017
Hi, I really appreciate your time series approach to sale prices given the changing economic conditions. Can you please explain how you did the multivariate time series analysis? It would be very helpful if you could share just the time series decomposition code. Thanks
