Studying Data to Predict House Prices

Posted on Nov 26, 2018
The skills demonstrated here can be learned through the Data Science with Machine Learning bootcamp at NYC Data Science Academy.


House prices are affected by many features, such as home functionality, house area, kitchen condition, and garage quality. Purchasing a house is a lifetime investment that requires enough research to make the right decision at the right time. From the customer's point of view, this project aims to provide tools and data to help decide which houses are undervalued or overvalued on the basis of these features, so that buyers can keep some dollars in their pockets after purchasing a house.

From the company's point of view, the questions might include which conditions could be improved to sell a house at a better price and satisfy customers in their lifetime investment. These techniques can also be applied to machine learning competitions on sites like Kaggle and Zigbang, and they are in high demand in the job market, so another goal of our group was to master them. The primary goal of this project was to predict house prices in Ames, Iowa using various machine learning techniques.


For this project we used the Ames Housing dataset introduced by Professor Dean De Cock in 2011. Altogether, there are 2,919 observations (including the train and test sets) of housing sales in Ames, Iowa between 2006 and 2010, with 79 provided features.

Data pipeline: 

Beyond viewing the task as a student project, our team also aimed to build an automated process that would let a user predict sale prices in different settings going forward. To build a simple and efficient collaborative environment, each of us researched the data independently and, as ideas matured, deployed them into a version-controlled project directory (as shown in the figure below). We built two pipelines: a data-transforming pipeline composed of transformers such as a NaN remover, a scaler, and an outlier remover, and a model-building pipeline for predicting house prices. The general schematic of our pipelines for fast iteration is shown below:
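The two-pipeline structure described above can be sketched with scikit-learn's `Pipeline`. The transformer and model choices here are illustrative, not the project's exact classes:

```python
# Sketch of the two pipelines: a data-transforming stage of custom
# transformers feeding a model-building stage. Names are illustrative.
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import Lasso

class NanFiller(BaseEstimator, TransformerMixin):
    """One possible 'NaN remover': replace NaNs with the column median."""
    def fit(self, X, y=None):
        self.medians_ = np.nanmedian(X, axis=0)
        return self
    def transform(self, X):
        X = np.asarray(X, dtype=float).copy()
        rows, cols = np.where(np.isnan(X))
        X[rows, cols] = np.take(self.medians_, cols)
        return X

pipe = Pipeline([
    ("nan_filler", NanFiller()),     # data-transforming pipeline element
    ("scaler", StandardScaler()),    # data-transforming pipeline element
    ("model", Lasso(alpha=0.001)),   # model-building stage
])
```

Packaging the steps this way is what makes fast iteration possible: swapping a transformer or model is a one-line change, and the whole chain refits together.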


Simple but efficient cooperative environment

Pipelines for fast iteration

EDA visualization: In order to better understand the data, we started with exploratory data analysis. Here are some examples of our data visualizations; for more detailed EDA, please see the GitHub link below. We first checked a correlation plot to identify features that are highly correlated with one another.
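A minimal version of that correlation check, assuming the data is loaded into a pandas DataFrame (column names follow the Ames dataset; the values here are toy data):

```python
# Rank features by absolute correlation with the sale price.
import pandas as pd

df = pd.DataFrame({
    "OverallQual": [5, 6, 7, 8, 9],
    "GarageCars":  [1, 2, 2, 3, 3],
    "SalePrice":   [120000, 150000, 200000, 280000, 350000],
})
corr = df.corr()                                        # pairwise Pearson correlations
top = corr["SalePrice"].abs().sort_values(ascending=False)
print(top)  # SalePrice first, then the most correlated features
```

Plotting `corr` as a heatmap (e.g. with seaborn) gives the correlation map shown below.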



Correlation map between various household features

Here is an example of some box plots: the plot of SalePrice vs. OverallQuality is roughly linear, while the plot of SalePrice vs. GarageCars is not, due to outliers.

Box plots of SalePrice vs. OverallQuality and SalePrice vs. GarageCars

Automatic data transformer for outliers: To remove outliers, we first measured the Z-score, then created an effective strategy for removing them, and finally built a pipeline element to perform the removal, as shown below.
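One plausible shape for such a Z-score filter (the threshold here is a common default, not necessarily the project's tuned value):

```python
# Drop rows whose target value has an extreme Z-score.
import numpy as np

def remove_outliers(X, y, z_thresh=3.0):
    """Keep rows where |Z-score of y| <= z_thresh."""
    z = (y - y.mean()) / y.std()
    mask = np.abs(z) <= z_thresh
    return X[mask], y[mask]
```

Because this drops rows (not columns), it belongs in the training-data path only; the same filtering is not applied at prediction time.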

Here is an example of how we handled missing values (NaN) in the garage-related columns. We first tried removing NaN values with the automatic outlier remover, but it did not improve our accuracy, so 31 of the 35 columns containing NaN values were addressed manually.
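A sketch of that manual handling for the garage columns. In the Ames data, NaN in a garage feature usually means "no garage", so one reasonable convention (assumed here, column names follow the dataset) is "None" for categoricals and 0 for numerics:

```python
# Fill garage-related NaNs: "None" for categorical, 0 for numeric.
import pandas as pd

def fill_garage_nans(df):
    df = df.copy()  # leave the caller's frame untouched
    for col in ["GarageType", "GarageFinish", "GarageQual", "GarageCond"]:
        if col in df:
            df[col] = df[col].fillna("None")
    for col in ["GarageCars", "GarageArea", "GarageYrBlt"]:
        if col in df:
            df[col] = df[col].fillna(0)
    return df
```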

House Model:

To build our house model, we started with Lasso regression, which was fast and easy to apply and gave us a good result. We then tested other supervised learning methods such as Ridge, Elastic Net, SVR, Random Forest, and boosting, and compared the results. Finally, we ensembled the various models into a meta-model by stacking and averaging. The figure below illustrates our final house model.

Stacking: Stacking is a method that combines the predictions of several different models. Using this method, various ML algorithms can be combined to produce better predictions. It is a powerful approach because it can incorporate multiple models of different types, allowing the weaknesses of one model to be compensated by the strengths of others. The individual models are combined by a meta-regressor, an algorithm trained on their predictions.
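This can be sketched with scikit-learn's `StackingRegressor`; the base models and meta-regressor shown are illustrative choices, not necessarily the project's exact configuration:

```python
# Stacking: base models' out-of-fold predictions feed a meta-regressor.
from sklearn.ensemble import StackingRegressor, RandomForestRegressor
from sklearn.linear_model import Lasso, Ridge

stack = StackingRegressor(
    estimators=[
        ("lasso", Lasso(alpha=0.001)),
        ("ridge", Ridge(alpha=1.0)),
        ("rf", RandomForestRegressor(n_estimators=50, random_state=0)),
    ],
    final_estimator=Ridge(),  # the meta-regressor combining the base models
    cv=5,                     # out-of-fold predictions for the meta level
)
```

Using out-of-fold predictions (`cv=5`) to train the meta-regressor is what keeps the meta level from simply memorizing the base models' training-set fit.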

Averaged model: We also tried combining the models by assigning a weight to each and averaging their predictions. But the averaged model did not perform better than the stacked model, so we used the stacking approach for our final house model.
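A minimal weighted-averaging ensemble might look like this (the class name and weights are illustrative):

```python
# Weighted averaging: combine model predictions with fixed weights.
import numpy as np
from sklearn.base import BaseEstimator, RegressorMixin, clone

class AveragingRegressor(BaseEstimator, RegressorMixin):
    def __init__(self, models, weights):
        self.models = models
        self.weights = weights
    def fit(self, X, y):
        self.fitted_ = [clone(m).fit(X, y) for m in self.models]
        return self
    def predict(self, X):
        preds = np.column_stack([m.predict(X) for m in self.fitted_])
        return preds @ np.asarray(self.weights)  # weighted average
```

Unlike stacking, the weights here are fixed up front rather than learned by a meta-regressor, which is one reason it can underperform.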

Hyperparameter tuning: Tuning hyperparameters is a very important part of improving model performance. A poor choice of hyperparameters may cause the model to over-fit or under-fit the data. Three common tuning methods are random search, grid search, and Bayesian optimization. For our house model we used grid search, since plenty of references were available, it was easy to use, and it works well with our small data size. We employed 5-fold cross-validation with grid search to optimize the hyperparameters.
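The 5-fold grid search above, sketched for the Lasso model (the parameter grid values are illustrative, not the project's exact grid):

```python
# Exhaustive grid search with 5-fold cross-validation.
from sklearn.linear_model import Lasso
from sklearn.model_selection import GridSearchCV

grid = GridSearchCV(
    Lasso(max_iter=10000),
    param_grid={"alpha": [0.0001, 0.0005, 0.001, 0.005]},
    scoring="neg_root_mean_squared_error",  # matches the RMSE metric used below
    cv=5,
)
# After grid.fit(X, y), grid.best_params_ holds the winning alpha and
# grid.best_estimator_ is refit on the full training set.
```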

Results: The final results of the various models used for predicting sale prices are summarized in the table below:

Model           Train RMSE    Kaggle Score
ElasticNet      0.10879657    0.11684
Lasso           0.10869445    0.11691
Ridge           0.11043459    0.11562
SVR             0.11368618    0.12019
RandomForest    0.15893652    0.17354
XGBoost*        0.12115257    0.12965
LightGBM*       0.13731836    0.14344

We did not use XGBoost or LightGBM in our final model, as their accuracy was not better. The results for our final house model are presented in the table below.

Model                 Train RMSE    Kaggle Score
StackingRegressor     0.11069786    0.11554
AveragingRegressor    0.12458452    0.12066

Kaggle result: We placed in the top 3% of the Kaggle competition (as of 11/18/2018) and achieved the highest Kaggle score in the September 2018 cohort. Here is our Kaggle score distribution by number of submissions.

Kaggle score distribution

Conclusions: For our house model we used stacking, which performed better than the averaging model. A drawback of the stacking model is that it was not possible to analyze the effect of individual features on the house sale price. From this model, we draw the following conclusions:

  • Skewness, Z-scores, variable selection, and hyperparameter tuning were all very important
  • Feature engineering, such as filling in NaNs (with 0, the mode, or the median), binning, and applying domain knowledge, was the most important factor in predicting the house sale price
  • GBM/XGBoost, though powerful for producing good benchmark solutions, are not always the best choice for model fitting
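The skewness point above in code: a common treatment (assumed here, with 0.75 as a typical threshold rather than the project's exact value) is to log-transform highly skewed numeric features so linear models fit them better:

```python
# Log-transform numeric columns whose skewness exceeds a threshold.
import numpy as np
import pandas as pd
from scipy.stats import skew

def unskew(df, thresh=0.75):
    df = df.copy()
    num_cols = df.select_dtypes(include=[np.number]).columns
    skewed = [c for c in num_cols if abs(skew(df[c].dropna())) > thresh]
    df[skewed] = np.log1p(df[skewed])  # log(1 + x) keeps zeros valid
    return df
```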

Future directions: To improve this model's performance, we would consider the following in the future:

  • Tune the hyperparameters of the XGBoost and LightGBM models
  • Apply clustering analysis to create new features
  • Investigate more feature engineering possibilities
  • Obtain time-series event data to study the effect of the 2008 recession on house sale prices and predict the impact of a future recession

Our team: Basant Dhital, Jiwon Chan, and SangYon Choi completed this project. Please find all code for this project at the following GitHub link.

About Authors

Basant Dhital

Basant Dhital is a Physics Ph.D. with an excellent background in Mathematics and Statistics and demonstrated programming skills. During his Ph.D. research, he developed several algorithms to process and analyze NMR and other spectroscopic data. He developed a...

