A New Hybrid Approach to Data-Driven House Price Predictions

Posted on Jul 30, 2022

The original source code for producing the web-based interactive dashboard in this article can be found here.

The original source code for the machine learning model which the dashboard utilized can be found here.

Interactive Data Analytics Dashboard

The following website, Modeling Home Prices and Value Added from Improvements in Ames, IA, was created with Streamlit, which automatically generates a web-based interactive dashboard from Python code. It used a 2020 dataset of house sale prices and their features to develop a machine learning prediction model built on ElasticNet, at roughly 92% accuracy (R2).

The following features were used to estimate the average price of a house in a particular area and neighborhood of the town:

  1. Number of Rooms
  2. Number of Bathrooms
  3. Number of Garage Car Spaces
  4. Ground Living Area (square feet)
  5. Lot Area (square feet)
  6. Lot Frontage (linear feet)
  7. Overall Quality of House (on a scale from 1 to 9)
  8. Approximate Age (years)
  9. Presence of Basement
  10. Presence of Paved Driveway
  11. Whether or Not Remodeled

The following proposed improvements were used to determine how much added value a particular house in a specific area and neighborhood would have if implemented:

  1. Remodeling Exterior Material
  2. Remodeling Kitchen
  3. Building Pool
  4. Finishing Basement
  5. Finishing Garage

A variety of machine learning models, including multiple linear, logistic, Huber, penalized (Ridge and Lasso), Random Forest, and Support Vector Machine (SVM) regression, were tried, but ElasticNet proved the most accurate and the easiest to implement. In the end, the combination of quantitative and ordinal categorical features that together determine the price of a house and its improvements made ElasticNet ideal for this task.
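
The model shoot-out described above can be sketched with scikit-learn's cross-validation. This is an illustrative comparison on synthetic data, not the project's actual pipeline; the feature matrix, coefficients, and hyperparameter values are stand-ins.

```python
# Hypothetical sketch: comparing candidate regressors by cross-validated R^2.
# The data and hyperparameters are illustrative, not from the Ames project.
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge, Lasso, ElasticNet, HuberRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))            # stand-in for the house features
y = X @ np.array([3.0, 1.5, -2.0, 0.5, 1.0]) + rng.normal(scale=0.5, size=200)

models = {
    "OLS": LinearRegression(),
    "Ridge": Ridge(alpha=1.0),
    "Lasso": Lasso(alpha=0.01),
    "ElasticNet": ElasticNet(alpha=0.01, l1_ratio=0.5),
    "Huber": HuberRegressor(),
    "RandomForest": RandomForestRegressor(n_estimators=100, random_state=0),
    "SVM": SVR(),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name:12s} mean R^2 = {scores.mean():.3f}")
```

On data like this, with a mostly linear signal, the penalized linear models should score at or near the top, mirroring the post's finding.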

Introduction

A house is typically one of the most expensive purchases a person makes. Such a purchase should be an informed choice, based on answers to questions about the following factors:

  • Lifestyle requirements
  • Location
  • Price

The project aimed to quantify these factors through machine learning.

Objectives

  1. Estimating the price of a house based on a few basic features.
  2. Pairing two machine learning algorithms to improve prediction.
  3. Estimating potential value increases from home improvements.

Methodology

Five different models were built to find the best house price prediction.

Three Linear Models

  1. Ridge Regression
  2. Lasso Regression
  3. ElasticNet Regression

One Tree Model

  1. Random Forest Regression

A Novel Approach

  1. Hybrid Model

Linear Model

During the exploratory data analysis phase, correlations between multiple house features and sale price were observed. Consequently, the first model applied multiple linear regression.

During this phase, the data were checked to confirm that they satisfied, or appeared to satisfy, the following five assumptions of linear regression:

  1. A linear relationship between the dependent and the independent variables
  2. Little or no correlation between the independent variables
  3. Constant variance of the residuals
  4. Normal distribution of the residuals
  5. Independence of observations
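
A couple of these assumptions can be checked programmatically. The sketch below is illustrative and uses synthetic data (the project used the Ames features): pairwise feature correlations probe assumption 2, and a Shapiro-Wilk test probes assumption 4.

```python
# Illustrative diagnostics for two of the linear-regression assumptions above,
# on synthetic data standing in for the Ames features.
import numpy as np
from scipy import stats
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=0.3, size=300)

model = LinearRegression().fit(X, y)
residuals = y - model.predict(X)

# Assumption 2: little multicollinearity -- inspect pairwise feature correlations.
corr = np.corrcoef(X, rowvar=False)
print("max off-diagonal |corr|:", np.abs(corr - np.eye(3)).max())

# Assumption 4: normally distributed residuals -- Shapiro-Wilk test.
stat, p = stats.shapiro(residuals)
print(f"Shapiro-Wilk p-value: {p:.3f}")  # a large p gives no evidence against normality
```

Constant variance (assumption 3) is usually judged from a residuals-vs-fitted plot rather than a single statistic.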

After feature selection and feature engineering, 42 features were chosen from the initial feature set for multiple linear regression. To improve the results, Ridge and Lasso penalized regressions were also applied. Finally, for further refinement, ElasticNet, a penalized regression model that combines the Ridge and Lasso penalties, was used.
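
ElasticNet blends the Lasso (L1) and Ridge (L2) penalties, which is why it can both shrink coefficients and zero out uninformative ones. A minimal sketch, assuming scikit-learn's `ElasticNetCV` and synthetic sparse data (the real model's features and the reported alpha are not reproduced here):

```python
# ElasticNet minimizes ||y - Xw||^2 / (2n) + alpha * (l1_ratio * ||w||_1
# + 0.5 * (1 - l1_ratio) * ||w||_2^2); l1_ratio=1 recovers Lasso, l1_ratio=0 Ridge.
# Data and candidate hyperparameters here are illustrative.
import numpy as np
from sklearn.linear_model import ElasticNetCV

rng = np.random.default_rng(2)
X = rng.normal(size=(150, 10))
true_w = np.zeros(10)
true_w[:3] = [2.0, -1.5, 1.0]            # sparse true signal: 3 of 10 features matter
y = X @ true_w + rng.normal(scale=0.2, size=150)

# ElasticNetCV picks alpha (and l1_ratio) by cross-validation.
enet = ElasticNetCV(l1_ratio=[0.2, 0.5, 0.8], cv=5).fit(X, y)
print("chosen alpha:", enet.alpha_, "l1_ratio:", enet.l1_ratio_)
print("nonzero coefficients:", np.sum(np.abs(enet.coef_) > 1e-6))
```

The L1 component tends to drive the seven irrelevant coefficients toward zero, which is one reason ElasticNet suited a feature set with many partially redundant house attributes.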

Tree-Based Model

Among the tree-based models, Random Forest was used to explore the predictive accuracy of non-linear models. It is an ensemble algorithm that aggregates the outputs of many decision trees to produce a final prediction. It was chosen over other tree-based models because it is robust to outliers and less prone to overfitting. Since linear patterns had been observed in the data, the linear models tended to perform better than Random Forest overall. On closer inspection of the results, however, the Random Forest sale price prediction was more accurate than the linear model's for some houses. This motivated a hybrid approach.

Hybrid Model

During the analysis of the linear and tree-based models, the linear model's predictions were sometimes more accurate, and at other times the tree-based model's predictions were closer to the actual sale price. In the following figure, the horizontal black line represents all observations; the blue boxes mark observations where the linear model performed better, and the green boxes mark those where the tree-based model performed better. If the better-performing model could be selectively applied to each observation based on its features, the most accurate results could be predicted.
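
The per-observation comparison described above can be sketched as follows. This is a synthetic illustration (the data-generating process, models, and hyperparameters are assumptions): on held-out houses, count how often the tree model's prediction is closer than the linear model's.

```python
# Sketch: per-observation winner between a linear and a tree model,
# on synthetic data that is mostly linear with a mild non-linearity.
import numpy as np
from sklearn.linear_model import ElasticNet
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
X = rng.normal(size=(500, 6))
# Mostly linear signal plus a mild non-linearity, so neither model always wins.
y = X @ np.ones(6) + 0.5 * np.sin(3 * X[:, 0]) + rng.normal(scale=0.3, size=500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
lin = ElasticNet(alpha=0.01).fit(X_tr, y_tr)
rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)

err_lin = np.abs(y_te - lin.predict(X_te))
err_rf = np.abs(y_te - rf.predict(X_te))
rf_wins = np.mean(err_rf < err_lin)
print(f"tree model closer on {rf_wins:.0%} of test houses")
```

Whenever this fraction is neither 0% nor 100%, routing each house to its better model could, in principle, beat either model alone.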

To achieve that, another prediction model that can predict which model will perform better for a particular house needed to be created.

The approach worked in the following two phases:

1. Predicting the Best Prediction Model

The aim of this phase was to build a classification model that could predict which machine learning algorithm would perform best for an input house. ElasticNet was the best linear model and Random Forest was the best tree-based model. First, a labelled dataset matching each house price prediction with its best performing model was generated. Next, a classifier was trained to predict the best model for sale price prediction. After testing Support Vector Machine (SVM), Logistic Regression, and Random Forest models, the Random Forest classifier was found to be the most accurate. Therefore, Random Forest was used again, this time to predict which of the two underlying models, the Linear Model (LM) or Random Forest (RF), would give the better sale price prediction.

2. Using the Resultant Model to Predict Sales Price

Once the best prediction model was identified for a particular house, that model was used to predict its sale price. Both steps were repeated for each new house. The figure below illustrates this process.
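
The two phases can be sketched together as follows. This is a synthetic illustration, not the project's code: the data are made up, and the labels are generated from out-of-fold predictions, an assumption on my part since the post does not specify how they were built. The classifier hyperparameters mirror those in the results table.

```python
# Sketch of the hybrid model: a Random Forest classifier predicts, per house,
# whether the linear or tree model will be closer, and the chosen model then
# predicts the price. All data and label-generation details are illustrative.
import numpy as np
from sklearn.linear_model import ElasticNet
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier
from sklearn.model_selection import train_test_split, cross_val_predict

rng = np.random.default_rng(5)
X = rng.normal(size=(600, 6))
y = X @ np.ones(6) + 0.5 * np.sin(3 * X[:, 0]) + rng.normal(scale=0.3, size=600)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

lin = ElasticNet(alpha=0.01).fit(X_tr, y_tr)
rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# Phase 1: label each training house with its better model, using out-of-fold
# predictions so the labels are not skewed by in-sample overfitting.
lin_oof = cross_val_predict(ElasticNet(alpha=0.01), X_tr, y_tr, cv=5)
rf_oof = cross_val_predict(
    RandomForestRegressor(n_estimators=100, random_state=0), X_tr, y_tr, cv=5)
labels = (np.abs(y_tr - rf_oof) < np.abs(y_tr - lin_oof)).astype(int)  # 1 = RF

selector = RandomForestClassifier(n_estimators=100, min_samples_leaf=10,
                                  criterion="entropy", random_state=0)
selector.fit(X_tr, labels)

# Phase 2: route each new house to whichever model the classifier picks.
use_rf = selector.predict(X_te).astype(bool)
hybrid_pred = np.where(use_rf, rf.predict(X_te), lin.predict(X_te))
print("hybrid predictions for the first 3 test houses:", hybrid_pred[:3].round(2))
```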

Each model pursued the following steps:

  1. On the cleaned data, dummy encoding was applied to nominal categorical variables and label encoding to ordinal ones.
  2. The dataset was split into training and testing data, where 80% was the training data, and 20% was the testing data.
  3. The splitting of the data was randomized by using a random seed. One hundred (100) random splits were used for each model, and the final result was the average over all of these training and testing splits.
  4. The training dataset was used to fit all of the models and the testing dataset was used to predict the house sales price.
  5. The prediction results obtained from all models were then evaluated using R-Squared (R2) and Root Mean Squared Error (RMSE). R2 determines the proportion of variance in the dependent variable that the independent variables can explain. RMSE, which is the square root of the mean of the square of all of the errors, measures the error in the predictions.
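The steps above can be sketched end to end. This is a compact illustration on synthetic data with made-up feature names: a nominal variable is dummy-encoded, then repeated random 80/20 splits are averaged for R2 and RMSE (10 seeds here instead of the project's 100, for brevity).

```python
# Sketch of the evaluation protocol: encode, split repeatedly, fit, score.
# Feature names, coefficients, and the alpha value are illustrative.
import numpy as np
import pandas as pd
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(3)
n = 400
df = pd.DataFrame({
    "Rooms": rng.integers(2, 9, n),
    "OverallQual": rng.integers(1, 10, n),
    "PavedDrive": rng.choice(["Y", "N"], n),   # nominal, so dummy-encoded
})
# Step 1: dummy-encode the nominal categorical variable.
X = pd.get_dummies(df, columns=["PavedDrive"], drop_first=True)
y = (20_000 * df["Rooms"] + 15_000 * df["OverallQual"]
     + 10_000 * (df["PavedDrive"] == "Y") + rng.normal(0, 10_000, n))

# Steps 2-5: repeated random 80/20 splits, averaging R2 and RMSE over seeds.
r2s, rmses = [], []
for seed in range(10):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                              random_state=seed)
    pred = ElasticNet(alpha=1.0, max_iter=10_000).fit(X_tr, y_tr).predict(X_te)
    r2s.append(r2_score(y_te, pred))
    rmses.append(np.sqrt(mean_squared_error(y_te, pred)))

print(f"mean R2 = {np.mean(r2s):.4f}, mean RMSE = {np.mean(rmses):.0f}")
```

Averaging over many random splits reduces the chance that a single lucky or unlucky split distorts the reported scores.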

Results

The table below shows the results obtained by each model. ElasticNet and the hybrid model both have significantly higher R2 values compared with the Random Forest model. The hybrid model was the best overall, although its R2 value was only slightly better than ElasticNet's.

Model                                               Test R2   Test RMSE   Hyperparameters
Linear Regression with ElasticNet Regularization    0.9180    0.1401      alpha = 4.1 x 10^5
Random Forest                                       0.8952    0.1592      n_estimators = 300, min_samples_leaf = 2, ccp_alpha = 0.045
Hybrid Model                                        0.9188    0.1422      criterion = 'entropy', min_samples_leaf = 10, n_estimators = 100

Conclusion

One of the important contributions of this project was the new hybrid model, which presented a new approach to ensembling multiple models to improve the accuracy of the results. This model predicts sale price with an R2 of 0.9188. Nevertheless, for this dataset, linear models performed better than tree-based models.

The skills the author demonstrated here can be learned by taking the Data Science with Machine Learning Bootcamp at NYC Data Science Academy.

About Author

Joydeep Chatterjee

Creative data scientist passionate about applying machine learning to physical processes, logistics, and business activities, blending a prior background in engineering and leadership to solve problems across multiple industries.

