Accurately Predicting House Prices in Ames, Iowa

Using Machine Learning to Democratize the Home-Buying Process

Buying a house is a complicated and stressful process that can leave first-time buyers feeling hopeless. Realtors often work off of intuition built over years of experience, which can lead to a lack of transparency. Meanwhile, brokers are not necessarily incentivized to help you find the fairest price. In 2018, there were over $120 billion in residential transactions, which adds up to $7 billion in fees.

We can solve these pain points by using machine learning techniques to generate fair valuations for those looking to buy or sell a home. Using the Kaggle Ames, IA data set of 1,460 homes with 80 features, we built a model to accurately predict house prices.

A Baseline 'Intuitive' Model to Predict House Price

There are common factors that we all think of when gauging house prices: square footage, lot size, or the number of bathrooms. Indeed, these features are significantly correlated with sale price, so we built a baseline model on just these features to mimic a realtor's thought process. This linear model gives a reasonably good estimate when we compare the predicted price against the actual price across the whole dataset (Figure 1). However, some houses are underestimated: house #907, for example, is underestimated by $50,000!
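A minimal sketch of such a baseline model: ordinary least squares on a handful of obvious features. The data here is synthetic for illustration, standing in for the real square-footage, lot-size, and bathroom columns.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic stand-ins for the "intuitive" features.
rng = np.random.default_rng(0)
n = 200
sqft = rng.uniform(800, 3000, n)
lot = rng.uniform(5000, 20000, n)
baths = rng.integers(1, 4, n)
price = 100 * sqft + 2 * lot + 5000 * baths + rng.normal(0, 10000, n)

X = np.column_stack([sqft, lot, baths])
baseline = LinearRegression().fit(X, price)

# Houses with large negative residuals are under-estimated by the model.
residuals = price - baseline.predict(X)
```

Ranking houses by residual is how an under-estimated house like #907 would surface in practice.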

Feature Selection and Engineering

To build a better prediction, we first log-transformed Sale Price to correct its skew. We engineered new features such as the total number of bathrooms, which combines the number of full and half baths. We minimized the number of features by dropping redundant columns like Garage Area vs. Garage Size (Figure 2). Some text columns were converted to ordinal variables so that our models could use them. Neighborhood is incredibly important when buying a home and often captures a significant amount of hidden information like school quality, so we chose to dummify the 25 neighborhoods in order to keep that information.
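These steps can be sketched in pandas. The column names below (`SalePrice`, `FullBath`, `HalfBath`, `Neighborhood`) are the Kaggle data set's names; weighting half baths by 0.5 is one common convention and an assumption here, since the post only says the counts were combined.

```python
import numpy as np
import pandas as pd

# Toy frame; the real data set has 80 columns and 1,460 rows.
df = pd.DataFrame({
    "SalePrice": [208500, 181500, 223500],
    "FullBath": [2, 2, 2],
    "HalfBath": [1, 0, 1],
    "Neighborhood": ["CollgCr", "Veenker", "CollgCr"],
})

# Log-transform the target to correct the right skew.
df["LogSalePrice"] = np.log(df["SalePrice"])

# Engineer a combined bathroom count (half baths weighted by 0.5).
df["TotalBath"] = df["FullBath"] + 0.5 * df["HalfBath"]

# Dummify the neighborhoods so the models keep that information.
df = pd.get_dummies(df, columns=["Neighborhood"])
```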

Missingness and Imputation

Each column was checked for missingness, which can occur for several reasons. First, data can be purposefully missing, such as a house with no garage having missing values for Garage Area. Second, data can be missing due to human error, in which case these values should be filled in using imputation.

We tested several imputation methods: random values, mean values, and K-Nearest Neighbors, which fills in the expected value using the most similar houses. All three methods significantly improved model accuracy compared to no imputation, performed similarly to one another, and produced the best results when outliers beyond 4.25 standard deviations were also imputed (Figure 3).

Outliers

When you have 80 different things to look at for each house, how do you tell if some are oddballs? We used a technique to collapse the data set into two dimensions so we could better visualize its spread. You can see that 99% of the variability is driven by a few outlying houses (Figure 4)! Once we pinpointed these anomalies, circled in the figure for clarity, we confirmed them as outliers using more rigorous statistical methods and then dropped them from our data set.
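The post does not name the technique, but principal component analysis (PCA) is the standard way to collapse 80 features into two dimensions and is assumed in this sketch; a planted extreme row stands in for the outlying houses.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Synthetic 10-feature data with one planted extreme house.
rng = np.random.default_rng(1)
X = rng.normal(0, 1, size=(300, 10))
X[0] = 25.0  # an oddball that dominates the variance

# Collapse to two dimensions for visualization.
coords = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))

# Plotting coords[:, 0] vs coords[:, 1] makes the oddball obvious;
# numerically, it is the point farthest from the cloud's center.
dist = np.linalg.norm(coords - coords.mean(axis=0), axis=1)
outlier_idx = int(np.argmax(dist))
```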

Linear Model: Elastic Net

Our first model is a linear model called ElasticNet. We chose this model because many features that determine house price, like total square footage, have a linear relationship with it. This model has the added benefit of dropping irrelevant features through its regularization terms. We used grid search to tune lambda, the size of the penalty, and rho, which controls the balance of Ridge vs. Lasso regularization. Because this model employs regularization, we standardized our data using the StandardScaler from sklearn before fitting. Our final model had a small lambda (1e-4) and a high rho (0.9), indicating that it dropped several non-informative features. Using this model, we see that the most important factors in house price are square footage, lot area, and certain neighborhoods (Stone Brook, etc.) (Figure 5).
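A sketch of this tuning in scikit-learn, where lambda is called `alpha` and rho is `l1_ratio`. Synthetic regression data stands in for the house features; the grid values are illustrative, chosen to include the reported optimum (1e-4, 0.9).

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic data with only a few informative features,
# so the L1 part of the penalty has something to drop.
X, y = make_regression(n_samples=200, n_features=20, n_informative=5,
                       noise=5.0, random_state=0)

# Standardize inside the pipeline so scaling is refit per CV fold.
pipe = make_pipeline(StandardScaler(), ElasticNet(max_iter=10000))
grid = GridSearchCV(
    pipe,
    param_grid={
        "elasticnet__alpha": [1e-4, 1e-3, 1e-2, 1e-1],   # lambda
        "elasticnet__l1_ratio": [0.1, 0.5, 0.9],          # rho
    },
    cv=5,
)
grid.fit(X, y)
best = grid.best_params_
```

Putting the scaler inside the pipeline avoids leaking test-fold statistics into training, which matters whenever regularization strength is being tuned.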

However, this model cannot capture any non-linear relationships in the dataset, such as the drop in house prices following the 2008 housing crisis.

Tree-based Model: XGBoost

Therefore, to capture these non-linear and less intuitive factors, we used a tree-based model, XGBoost. While this is a more complex model, we can still see which features are important. Unlike ElasticNet, XGBoost does not require scaling, because tree-based models split the data at thresholds, which are unaffected by monotonic transformations. In this model, the most important features are more qualitative ones, like fireplace or kitchen quality (Figure 6). Both models have pros and cons, so is there a way to take the best of both worlds?

Ensembling

It turns out that different models perform differently in different price regions. We divided the house prices into 21 buckets; in Figure 7, each dot represents the winning model in that price range. ElasticNet performs better in the low-price region ($105k and below), while XGBoost performs better in the high-price region ($300k and above). A weighted average of both models does better in the medium-price region ($105k to $300k). After ensembling the models, we are able to get much closer to the true sale price of a home: house #907, which sold for $255,000 and was previously under-estimated by $50,000, is now predicted at $254,634. Our overall model error (the root mean squared error of the log price, Kaggle's metric) is 0.1256, placing our team in the top 25% of all submissions on Kaggle.
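The region-aware blend can be sketched as below. The $105k/$300k thresholds come from the write-up; bucketing by the blended estimate (since the true price is unknown at prediction time) and the 50/50 middle weight are assumptions for illustration.

```python
import numpy as np

def ensemble(pred_linear, pred_xgb, w_mid=0.5):
    """Pick ElasticNet low, XGBoost high, a weighted average between."""
    pred_linear = np.asarray(pred_linear, dtype=float)
    pred_xgb = np.asarray(pred_xgb, dtype=float)
    blend = w_mid * pred_linear + (1 - w_mid) * pred_xgb
    # Bucket each house by its blended estimate.
    return np.where(blend <= 105_000, pred_linear,
           np.where(blend >= 300_000, pred_xgb, blend))

combined = ensemble([100_000, 200_000, 350_000],
                    [110_000, 210_000, 340_000])
```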

Future Work

Our model still has difficulty estimating very low- or very high-priced houses. Furthermore, our model risks overfitting the training set due to our ensembling method. Moving forward, we would like to feed our predictions into a random forest that first classifies a house as low-, medium-, or high-priced and then predicts its price; this method is more outlier-resistant than either model used here. We would also like to revisit feature engineering (market temperature, season) and incorporate more data to expand the predictive capacity of our modeling.

About Authors

Josefa Sullivan

Josefa has a PhD in Neuroscience from the Icahn School of Medicine at Mount Sinai and a BA in Biochemistry & Molecular Biology from Boston University. Her interests include applying data science to the healthcare & biotech fields,...
Sunny Lee

Sunny graduated from Northwestern University with a double major in Economics and Statistics. She joined Goldman Sachs as a Sales Analyst in 2015 and took subsequent roles at the firm as a Fixed Income Trader and as a...
Michael Emmert

Michael Emmert graduated from The George Washington University in May of 2019 with a Bachelors degree in Mechanical Engineering. Through his Bachelors he gained skills in mathematics, communicating ideas to non-technical groups, data manipulation and trend identification as...
Vincent Ji

Vincent is a data scientist and a former research data associate at Bridgewater Associates. Prior to that, he was an associate at BlackRock, focusing on data analytics, business strategy, and implementation. He started his career as a management...
