Using Machine Learning Algorithms to Predict Sale Prices of Houses in Ames, Iowa

Posted on Jan 10, 2022

GitHub 

Authors: Sweta Prabha, Hadar Zeigerson & Ayelet Hillel

Introduction

Buying a home is one of the greatest investments an average American will make in their lifetime. The most efficient and least stressful way to purchase a home is to be well informed throughout the process. This project utilizes machine learning techniques to accurately predict the sale price of a house based on information about the house's features. The models are trained on sale data for over 2,500 houses in Ames, Iowa. Our team engineered a highly accurate predictive machine learning model (R^2 of 0.96 on unseen data) that can potentially be used by anyone interested in selling, building, flipping, or buying a home, from real estate agencies to homeowners debating whether to remodel their house before selling.

The Data

The dataset describes the sales of individual residential properties in Ames, Iowa from 2006 to 2010. It contains 2,580 observations and 81 explanatory variables involved in assessing home values, such as square footage, neighborhood, and number of bedrooms. The dataset was obtained from the Kaggle competition House Prices: Advanced Regression Techniques.

Exploratory Data Analysis (EDA)

A detailed EDA was conducted with the aim of gaining maximum insights into the data set and its underlying structure. The following list highlights some of our key findings:

  • Houses up to 2,000 sqft are the most desirable in Ames: price per square foot dropped for bigger houses, reflected in a negative exponent when price per square foot is regressed on gross area in a log-log linear model (see the sketch after this list).

  • Exterior quality, exterior condition, and kitchen quality were highly correlated with overall quality and condition, suggesting that these features are determining factors of sale price.

  • Seasonal trends: more houses are sold between May and July; however, sale prices themselves show only minor seasonal variation.

  • Neighborhood analysis: the NoRidge neighborhood has the biggest and most expensive houses of all neighborhoods, while NAmes has the largest number of houses for sale.
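As a minimal sketch of that log-log model (not the team's exact code), the following assumes the Kaggle column names SalePrice and GrLivArea and a locally saved train.csv:

```python
# Sketch of the log-log fit behind the first bullet above.
# Assumptions: Kaggle column names SalePrice and GrLivArea, and the
# competition CSV saved locally as train.csv.
import numpy as np
import pandas as pd
import statsmodels.api as sm

houses = pd.read_csv("train.csv")

# Regress log(price per sqft) on log(gross living area).
y = np.log(houses["SalePrice"] / houses["GrLivArea"])
X = sm.add_constant(np.log(houses["GrLivArea"]))
fit = sm.OLS(y, X).fit()

# A negative slope here means price per square foot declines with size.
print(fit.params)
```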

Data Processing 

  • Only normal sales of residential houses were included in the train and test sets (a code sketch of all of these rules follows the list)
  • Null values in categorical variables usually indicated that the house lacked the feature and were replaced with no_feature
  • Null values in continuous variables were replaced with 0
  • Very few houses (only 9) had pools, so those rows were deleted to avoid skewing the models
  • Only 3 houses had a second garage; those records were dropped as well
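The list above translates into a handful of pandas operations. The sketch below is one reading of those rules, assuming Ames data-dictionary column names (SaleCondition, MSZoning, PoolArea, MiscFeature); it is an illustration, not the team's verified pipeline:

```python
# Sketch of the cleaning rules listed above; the exact filters and
# column names are assumptions based on the Ames data dictionary.
import pandas as pd

def clean(df: pd.DataFrame) -> pd.DataFrame:
    df = df.copy()
    # Keep only normal sales (drops foreclosures, family sales, etc.)
    # of residentially zoned houses (the zoning filter is an assumption).
    df = df[df["SaleCondition"] == "Normal"]
    df = df[df["MSZoning"].isin(["RL", "RM", "RH", "FV"])]
    # Drop the rare pool and second-garage records.
    df = df[df["PoolArea"] == 0]
    df = df[df["MiscFeature"] != "Gar2"]
    # Missing categorical values generally mean the feature is absent.
    cat_cols = df.select_dtypes(include="object").columns
    df[cat_cols] = df[cat_cols].fillna("no_feature")
    # Missing continuous values are treated as zero.
    num_cols = df.select_dtypes(include="number").columns
    df[num_cols] = df[num_cols].fillna(0)
    return df
```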

Predictive Models 

Penalized Linear Regression

Regression modeling involves statistical model selection: finding the simplest model that provides the best predictive performance, which in turn requires selecting the most important features. Penalized regressions are popular because of their high prediction accuracy and computational efficiency; they regularize the regression coefficients by shrinking them towards zero. For feature engineering and the linear models, a dummified dataset was used. The alpha parameter was varied from 0.7 to 40.7 with a step size of 1. The age of each house was calculated as the difference between the year sold and the year built, which replaced two categorical year features with a single continuous one and eliminated many dummy variables. The Lasso model gave an R^2 of 0.96 on the test set and 0.96 on the training set. Although the accuracy was high, some features that matter in the housing-market context, such as building type and house style, were dropped by the model, so we decided to also try tree-based modeling techniques.
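A minimal sketch of this setup, reusing the hypothetical clean() helper from the data-processing section and assuming the Kaggle column names YrSold, YearBuilt, and SalePrice (the lack of feature scaling here is a simplification):

```python
# Sketch of the Lasso setup described above.
import numpy as np
import pandas as pd
from sklearn.linear_model import LassoCV

df = clean(pd.read_csv("train.csv"))

# House age as one continuous feature instead of two year columns.
df["Age"] = df["YrSold"] - df["YearBuilt"]
df = df.drop(columns=["YrSold", "YearBuilt"])

y = df.pop("SalePrice")
X = pd.get_dummies(df, drop_first=True)  # the dummified dataset

# Alpha grid from 0.7 to 40.7 in steps of 1, as in the text.
alphas = np.arange(0.7, 40.8, 1.0)
lasso = LassoCV(alphas=alphas, cv=5).fit(X, y)
print(lasso.alpha_, lasso.score(X, y))
```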

Tree-Based Models

  • Random Forest 

Random Forest Regression is a supervised learning model that uses ensemble learning for regression. Ensemble learning combines the predictions of multiple machine learning models to produce a better prediction than any single model.

Before building and evaluating our random forest model, we constructed a baseline model: a simple model we hoped to improve upon. We set the baseline predictions to the average sale price for each year in our data set. Having established a baseline, we built our model. To optimize the random forest, we conducted hyperparameter tuning: we started with a randomized search over a wide hyperparameter grid with K-Fold cross validation (CV) to narrow down the range for each hyperparameter, then used Grid Search with K-Fold CV focused on the most promising hyperparameter ranges (see the sketch below).
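The sketch below illustrates that two-stage search; the grids are illustrative rather than the ranges the team actually searched, and X and y are assumed to be the design matrix and target from the Lasso sketch above:

```python
# Two-stage tuning sketch: randomized search to narrow the ranges,
# then an exhaustive grid search. Grid values are illustrative only.
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV

rf = RandomForestRegressor(random_state=42)

# Stage 1: coarse randomized search with 5-fold CV.
wide = {
    "n_estimators": [100, 300, 500, 800],
    "max_depth": [None, 10, 20, 40],
    "min_samples_leaf": [1, 2, 4],
    "max_features": ["sqrt", 0.5, 1.0],
}
coarse = RandomizedSearchCV(rf, wide, n_iter=30, cv=5,
                            random_state=42).fit(X, y)

# Stage 2: fine grid search around the promising region.
narrow = {
    "n_estimators": [400, 500, 600],
    "max_depth": [15, 20, 25],
    "min_samples_leaf": [1, 2],
}
fine = GridSearchCV(rf, narrow, cv=5).fit(X, y)
print(fine.best_params_, fine.best_score_)
```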

[Table: final random forest tuning results]

  • Gradient Boosting

Gradient boosting is a supervised learning model that builds simple prediction models sequentially, where each model tries to predict the error left over by the previous one. We established baseline predictions by building a gradient boosting model with the default parameters. To improve upon this baseline model, we performed hyperparameter tuning: we used Grid Search with K-Fold cross validation (CV) to find the best hyperparameters, and evaluated our models by comparing the results to the baseline predictions (see the sketch below).
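Sketched below under the same assumptions (X and y from the earlier snippets, an illustrative grid): a default-parameter baseline followed by a grid search:

```python
# Gradient boosting sketch: default-parameter baseline, then grid
# search with 5-fold CV. Grid values are illustrative only.
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV

baseline = GradientBoostingRegressor(random_state=42).fit(X, y)

grid = {
    "n_estimators": [200, 500, 1000],
    "learning_rate": [0.01, 0.05, 0.1],
    "max_depth": [2, 3, 4],
}
search = GridSearchCV(GradientBoostingRegressor(random_state=42),
                      grid, cv=5).fit(X, y)
print(baseline.score(X, y), search.best_score_)
```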

[Table: final gradient boosting tuning results]

Model Selection 

Our penalized linear model outperformed the tree-based models, with an R^2 of 0.96 on the unseen dataset.

Conclusions and Future Work

This project laid the groundwork for building more accurate forecasts by surfacing insights about the most impactful drivers of housing prices in Ames, Iowa. Our predictions (R^2 of 0.96) can help anyone interested in investing in, buying, or selling a house in Ames, Iowa make better capital allocation decisions. Future work includes adding data on employment, education, and crime in order to identify additional key factors driving house prices in Ames, Iowa.

About the Author

Ayelet Hillel

Data Science Professional with experience in research alongside program management. I am passionate about developing data-driven solutions using statistical methodologies and programming languages including Python and R.