Advanced Regression Modeling on House Prices

Ricky Yue and Jurgen De Jager
Posted on Sep 23, 2016

Introduction

The key question addressed in this blog is how we can better predict the sale prices of residential houses. The Ames Housing Price data set, recently released on Kaggle, is "a modernized and expanded version of the often cited Boston Housing dataset". It covers all recorded house sale prices in Ames, IA from January 2006 to July 2010. With 79 explanatory variables describing almost every feature of residential homes, we aimed to apply data imputation, feature engineering, and machine learning modeling to improve predictive accuracy on house prices.

The dataset contains 1460 observations in the training set and 1459 observations in the test set. There are 46 categorical variables (23 nominal and 23 ordinal) and 33 numeric variables. The training set includes the sale price as the response variable, while the test set does not.
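As a quick sketch of that tally (assuming the standard Kaggle file names train.csv and test.csv; the nominal vs. ordinal split still has to be read off the data dictionary, and some ordinal variables are stored as integers, so raw dtype counts won't exactly reproduce the 46/33 split):

```python
import pandas as pd

# Standard Kaggle Ames competition files (assumed names).
train = pd.read_csv("train.csv")
test = pd.read_csv("test.csv")

print(train.shape[0], "training rows,", test.shape[0], "test rows")

# Columns read in as `object` are categorical; the rest are numeric.
# Ordinal variables coded as integers (e.g. OverallQual) land in the
# numeric bucket here, so these counts differ from the 46/33 split above.
features = train.drop(columns=["Id", "SalePrice"])
print((features.dtypes == "object").sum(), "object columns,",
      (features.dtypes != "object").sum(), "numeric columns")
```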

Time Series

It’s important to note that the housing price data ranges from early 2006 to mid-2010. We should be aware that the subprime mortgage crisis happened during this period and contributed to the economic recession between December 2007 and June 2009. We drew the time series plot of monthly median house sale price below and decomposed the time series into trend and seasonality. As shown in the trend panel below, the monthly median sale price decreased steadily from early 2008 until late 2009, which indicates that house sales in Ames were no exception and were influenced by the mortgage crisis. We derived the trend index and seasonality index from the time series. Since the sale price series appears to follow a multiplicative model, Sale Price = Trend × Seasonality × Cyclicality × Irregularity, we calculated the time series index:

TsIdx = TrendIdx × SeasonIdx / max(TrendIdx)

[Figure: Monthly median sale price with its trend and seasonal decomposition]

We considered using those three time series indices as predictors to test whether the broader economy could help predict house sale prices.
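The post itself doesn't include code, but a minimal sketch of this decomposition with pandas and statsmodels (an assumption about tooling; YrSold and MoSold are the Kaggle column names) might look like:

```python
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

train = pd.read_csv("train.csv")  # Kaggle Ames training data

# Monthly median sale price, Jan 2006 - Jul 2010.
date = pd.to_datetime(train["YrSold"].astype(str) + "-" +
                      train["MoSold"].astype(str) + "-01")
monthly = (train.assign(date=date)
                .groupby("date")["SalePrice"].median()
                .asfreq("MS")
                .interpolate())  # fill any months with no recorded sales

# Multiplicative decomposition: price = trend * seasonal * residual.
decomp = seasonal_decompose(monthly, model="multiplicative", period=12)

# The time series index defined above:
#   TsIdx = TrendIdx * SeasonIdx / max(TrendIdx)
# (trend is NaN at the series edges due to the centered moving average)
ts_idx = decomp.trend * decomp.seasonal / decomp.trend.max()
```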

Exploratory Data Analysis

Below are boxplots of some categorical variables against sale price. They are consistent with common sense: neighborhood, zoning, house quality, and facilities can distinguish house values.

[Figure: Boxplots of categorical variables vs. sale price]

Scatterplots of some numeric variables are shown below. Area-related features such as lot area, 1st floor square feet, and 2nd floor square feet, as well as the year the house was built, show positive correlations with sale price.
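Those correlations are easy to verify directly; a minimal sketch (column names per the Kaggle data dictionary):

```python
import pandas as pd

train = pd.read_csv("train.csv")

# Correlation of selected area- and age-related features with sale price.
cols = ["LotArea", "1stFlrSF", "2ndFlrSF", "GrLivArea", "YearBuilt"]
corr = train[cols].corrwith(train["SalePrice"]).sort_values(ascending=False)
print(corr)
```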

Feature Importance

[Figure: Feature importance rankings]
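The post doesn't say how these importances were computed; one common approach, sketched here with a scikit-learn random forest (an assumption, not necessarily the authors' method):

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

train = pd.read_csv("train.csv")

# One-hot encode categoricals; crude median imputation for the sketch.
X = pd.get_dummies(train.drop(columns=["Id", "SalePrice"]), dtype=float)
X = X.fillna(X.median())
y = train["SalePrice"]

rf = RandomForestRegressor(n_estimators=500, random_state=42).fit(X, y)
importances = pd.Series(rf.feature_importances_, index=X.columns).nlargest(15)
print(importances)
```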

Outliers

[Figure: Outliers in the training data]

Modeling

We divided our modeling into two tracks: on one side we modeled to achieve high predictive accuracy, and on the other we modeled to preserve interpretability. We first discuss the modeling focused on high predictive accuracy. As a first step we tuned the parameters of all our base learners, using grid search to find the optimal values. Below are the optimal parameters for our Generalized Linear Model, Neural Network, Random Forest, and Gradient Boosted Trees.

GLM

[Screenshot: optimal GLM parameters]

Neural Network

[Screenshot: optimal Neural Network parameters]

Random Forest

[Screenshot: optimal Random Forest parameters]

Gradient Boosted Trees

[Screenshot: optimal Gradient Boosted Trees parameters]
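We tuned with H2O's grid search; as a stand-in, here is a minimal sketch of the same idea using scikit-learn's GridSearchCV on a gradient-boosted tree (the grid values are illustrative, not the tuned parameters shown above):

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV

train = pd.read_csv("train.csv")
X = pd.get_dummies(train.drop(columns=["Id", "SalePrice"]), dtype=float).fillna(0)
y = np.log1p(train["SalePrice"])  # Kaggle scores RMSE on the log of the price

# Illustrative grid; the actual tuned values appear in the screenshots above.
grid = {
    "n_estimators": [300, 500, 1000],
    "max_depth": [3, 5, 7],
    "learning_rate": [0.01, 0.05, 0.1],
}
search = GridSearchCV(GradientBoostingRegressor(random_state=42), grid,
                      scoring="neg_root_mean_squared_error", cv=5, n_jobs=-1)
search.fit(X, y)
print(search.best_params_, -search.best_score_)
```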

Stacking

Next we used ensemble learning to combine our models. Ensemble machine learning methods use multiple learning algorithms to obtain better predictive performance than could be obtained from any of the constituent learning algorithms alone. Stacking is a broad class of algorithms that involves training a second-level "metalearner" to ensemble a group of base learners. The type of ensemble learning implemented in H2O is called "super learning", "stacked regression", or "stacking". Unlike bagging and boosting, the goal in stacking is to ensemble a strong, diverse set of learners. In order to train the ensemble we did the following (a code sketch follows the list).

  • Trained each of the L base algorithms on the training set.
  • Performed k-fold cross-validation on each of these learners and collected the cross-validated predicted values from each of the L algorithms.
  • Combined the N cross-validated predictions (N = number of training observations) from each of the L algorithms to form a new N × L matrix. This matrix, along with the original response vector, is called the "level-one" data.
  • Trained the metalearning algorithm on the level-one data.
  • Used the "ensemble model" consisting of the L base learning models and the metalearning model, to generate predictions on a test set.
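We used H2O's implementation; the same level-one procedure can be sketched with scikit-learn's cross_val_predict, with base learners and metalearner as illustrative stand-ins for the tuned models above:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import ElasticNet, Ridge
from sklearn.model_selection import cross_val_predict

train = pd.read_csv("train.csv")
X = pd.get_dummies(train.drop(columns=["Id", "SalePrice"]), dtype=float).fillna(0)
y = np.log1p(train["SalePrice"])

# L base learners (stand-ins for the tuned GLM / NN / RF / GBM).
base_learners = [
    ElasticNet(alpha=0.001),
    RandomForestRegressor(n_estimators=500, random_state=42),
    GradientBoostingRegressor(n_estimators=500, learning_rate=0.05,
                              random_state=42),
]

# Steps 2-3: k-fold cross-validated predictions form the N x L
# "level-one" matrix.
level_one = np.column_stack(
    [cross_val_predict(m, X, y, cv=5) for m in base_learners])

# Step 4: train the metalearner on the level-one data.
meta = Ridge(alpha=1.0).fit(level_one, y)

# Step 5: refit each base learner on the full training set; test-set
# predictions are then meta.predict() applied to the column-stacked
# base-learner predictions on the test data.
for m in base_learners:
    m.fit(X, y)
```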

[Screenshot: stacked ensemble results]

Model Averaging

Stacking did not give us the results we hoped for: it improved our score only slightly, though it did put us in the top 20% of participants. We therefore decided to use model averaging, a simple strategy in which you average the predictions of several models. Below is a simple visual representation.

[Diagram: simple model averaging]

[Screenshot: model averaging results]
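A minimal sketch of the averaging itself (the prediction arrays are hypothetical stand-ins for each model's test-set output):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-ins for each model's test-set predictions.
glm_pred, nn_pred, rf_pred, gbm_pred = rng.normal(12, 0.4, size=(4, 1459))

# Simple model averaging: the unweighted mean across models.
avg_pred = np.mean([glm_pred, nn_pred, rf_pred, gbm_pred], axis=0)
```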

Since this approach gave us significantly better results, we decided to include even more models in the average, placing more weight on the models we knew performed well.

[Diagram: weighted model averaging]

[Screenshot: weighted averaging results]
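And the weighted variant (the weights here are illustrative; the post does not report the exact weights we used):

```python
import numpy as np

rng = np.random.default_rng(0)
preds = rng.normal(12, 0.4, size=(5, 1459))  # hypothetical per-model predictions

# Hand-picked weights favoring the stronger models (illustrative values).
weights = np.array([0.30, 0.25, 0.20, 0.15, 0.10])
weighted_pred = weights @ preds  # weighted average; weights sum to 1
```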

This approach pushed us up to number two on the Kaggle leaderboard.

About Authors

Ricky Yue

As a data enthusiast, Ricky loves to think about real-life issues in a quantitative way. He likes to talk about probabilities and alternatives. He's proud of his Bayesian skepticism, built on years of scientific training. He was...

Jurgen De Jager

Jurgen's fascination with analytics and its applications, specifically within data science, led him to decide some time ago that this is the career path he wants to pursue after graduation. In anticipation of this, he has worked extensively...

Comments

Pallavi January 1, 2017
Hi, Really appreciate your approach on time series analysis on sales price, due to changes in economic conditions. Can you please explain how did you do multivariate time-series analysis? It will be very helpful, if you could share just time-series decomposition codes? Thanks
