A New Hybrid Approach to Data-Driven House Price Predictions
The original source code for producing the web-based interactive dashboard in this article can be found here.
The original source code for the machine learning model which the dashboard utilized can be found here.
Interactive Data Analytics Dashboard
The following website was created in Streamlit, which automatically generates a web-based interactive dashboard from Python code: Modeling Home Prices and Value Added from Improvements in Ames, IA. It uses a 2020 data set of house sale prices and their unique features to develop a machine learning prediction model, built on ElasticNet, with more than 93% accuracy.
The following features were used to estimate the average price of a house in a particular area and neighborhood of the town:
- Number of Rooms
- Number of Bathrooms
- Number of Garage Car Spaces
- Ground Living Area (square feet)
- Lot Area (square feet)
- Lot Frontage (linear feet)
- Overall Quality of House (on a scale from 1 to 9)
- Approximate Age (years)
- Presence of Basement
- Presence of Paved Driveway
- Whether or Not Remodeled
The following proposed improvements were used to determine how much added value a particular house in a specific area and neighborhood would have if implemented:
- Remodeling Exterior Material
- Remodeling Kitchen
- Building Pool
- Finishing Basement
- Finishing Garage
A variety of machine learning regression models, including multilinear, logistic, Huber, penalized (Ridge and Lasso), Random Forest, and Support Vector Machines (SVM), were tried, but ElasticNet proved to be the most accurate and the easiest to implement. Its ability to combine quantitative and ordinal categorical features in determining the overall price of a house and its improvements made it ideal for this task.
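As a minimal sketch of the winning approach, the following fits an ElasticNet price model on features like those listed above. The data, feature names, and coefficients here are invented for illustration; they are not the Ames data or the project's actual pipeline.

```python
import numpy as np
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.integers(3, 10, n),       # number of rooms
    rng.integers(1, 4, n),        # number of bathrooms
    rng.uniform(800, 3000, n),    # ground living area (sq ft)
    rng.integers(1, 10, n),       # overall quality (1 to 9)
])
# Synthetic price: a linear blend of the features plus noise
y = (20_000 * X[:, 0] + 15_000 * X[:, 1] + 80 * X[:, 2]
     + 10_000 * X[:, 3] + rng.normal(0, 10_000, n))

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)
# ElasticNet blends an L1 (Lasso) and an L2 (Ridge) penalty via l1_ratio
model = ElasticNet(alpha=1.0, l1_ratio=0.5, max_iter=10_000)
model.fit(X_train, y_train)
r2 = model.score(X_test, y_test)  # held-out R2
```

Because the synthetic target is genuinely linear in the features, the held-out R2 lands well above 0.9, mirroring the accuracy regime the article reports.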
A house is typically one of the most expensive purchases a person makes. Such a purchase should be an informed choice, based on answers to factors such as the following:
- Lifestyle requirements
The project aimed to quantify these factors through machine learning by:
- Estimating the price of a house based on a few basic features.
- Pairing two machine learning algorithms to improve prediction.
- Estimating potential value increases from home improvements.
Five different models were built in pursuit of the best house price prediction.
Three Linear Models
- Ridge Regression
- Lasso Regression
- ElasticNet Regression
One Tree Model
- Random Forest Regression
A Novel Approach
- Hybrid Model
During the exploratory data analysis phase, correlations between multiple house features and sale price were observed. Consequently, the first model applied multiple linear regression.
During this phase, the data were checked to confirm that they satisfied, at least approximately, the following five assumptions of linear regression:
- A linear relationship between the dependent and the independent variables
- Little or no correlation between the independent variables
- Constant variance of the residuals
- Normal distribution of the residuals
- Independence of observations
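The second assumption, little or no multicollinearity, can be checked with variance inflation factors (VIF). Below is a minimal NumPy sketch on invented feature columns; the names and the degree of correlation are illustrative, not taken from the Ames data.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
living_area = rng.uniform(800, 3000, n)
rooms = living_area / 350 + rng.normal(0, 0.5, n)  # strongly tied to area
lot_area = rng.uniform(5000, 20000, n)             # roughly independent
X = np.column_stack([living_area, rooms, lot_area])

def vif(X, j):
    """VIF of column j: 1 / (1 - R^2) from regressing X[:, j] on the rest."""
    y = X[:, j]
    others = np.delete(X, j, axis=1)
    A = np.column_stack([np.ones(len(y)), others])  # add an intercept
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    r2 = 1 - resid.var() / y.var()
    return 1.0 / (1.0 - r2)

vifs = [vif(X, j) for j in range(X.shape[1])]
# A VIF well above ~5-10 flags a predictor the others nearly determine.
```

Here the correlated area/rooms pair produces high VIFs, while the independent lot area stays near 1, illustrating how the check separates problematic predictors from harmless ones.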
After feature selection and feature engineering, 42 features were chosen from the initial feature set to apply multiple linear regression. To improve the results, Ridge and Lasso penalty regressions were also applied. Finally, for further refinement, ElasticNet, which is a penalized regression model that synthesizes both Ridge and Lasso, was used.
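The relationship between the three penalties can be seen directly in how they treat irrelevant coefficients. The sketch below uses synthetic data and arbitrary alpha values: Lasso's L1 penalty drives irrelevant coefficients exactly to zero, Ridge's L2 penalty only shrinks them, and ElasticNet blends the two through its `l1_ratio` parameter.

```python
import numpy as np
from sklearn.linear_model import ElasticNet, Lasso, Ridge

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 10))
y = 3 * X[:, 0] + 2 * X[:, 1] + rng.normal(0, 0.1, 200)  # only 2 real signals

lasso = Lasso(alpha=0.5).fit(X, y)                    # pure L1
ridge = Ridge(alpha=0.5).fit(X, y)                    # pure L2
enet = ElasticNet(alpha=0.5, l1_ratio=0.5).fit(X, y)  # half L1, half L2

def n_zero(model):
    """Count coefficients driven exactly to zero."""
    return int(np.sum(model.coef_ == 0))
```

On this data, `n_zero(lasso)` and `n_zero(enet)` report zeroed-out noise features, while `n_zero(ridge)` reports none, which is why ElasticNet can both select features and stabilize correlated ones.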
Among the tree-based models, Random Forest was used to explore the predictive accuracy of non-linear models. It is an ensemble algorithm that aggregates the results of many decision trees to output an optimal prediction. This model was chosen over other tree-based models because it is robust to outliers and has a lower risk of overfitting. Since linear patterns had been observed in the data, linear models would be expected to perform better than Random Forest. However, on closer inspection of the results, in some cases the Random Forest sale price prediction was more accurate than the linear model's. Therefore, a hybrid approach was warranted.
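A synthetic one-dimensional example illustrates why the tree model was worth trying: a Random Forest captures non-linear structure that a linear model misses entirely. This is an illustration of the general behavior, not the Ames data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.uniform(-3, 3, size=(600, 1))
y = 3 * X[:, 0] ** 2 + rng.normal(0, 0.5, 600)  # clearly non-linear target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
lin = LinearRegression().fit(X_tr, y_tr)
# rf.score(X_te, y_te) is near 1 here, while lin.score(X_te, y_te) is near 0
```

On the Ames data the situation is reversed for most houses, since the dominant patterns are linear; the point is that neither model family wins everywhere, which motivates the hybrid.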
During the analysis of the linear and tree-based models, predictions from the linear model were sometimes more accurate, and sometimes the tree-based model's predictions were closer to the actual sale price. In the following figure, the horizontal black line signifies all observations. The blue boxes indicate the observations where the linear model performed better, and the green boxes indicate those where the tree-based model performed better. If the model that performed better for observations with certain kinds of features could be selectively applied, the most accurate results could be obtained.
To achieve that, another prediction model that can predict which model will perform better for a particular house needed to be created.
The approach worked in the following two phases:
1. Predicting the Best Prediction Model
The aim of this phase was to build a classification model that could predict which machine learning algorithm would perform best for an input house. ElasticNet was the best linear model and Random Forest was the best tree-based model. First, a labelled dataset matching each house price prediction with its best-performing model was generated. Next, a classifier was trained to predict the best model for sale price prediction. After testing Support Vector Machine (SVM), Logistic Regression, and Random Forest models, the Random Forest classifier was found to be the most accurate. Therefore, Random Forest was used yet again, this time to predict the better of the two underlying models, the Linear Model (LM) and Random Forest (RF), for sale price prediction.
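A hypothetical sketch of this labelling-and-classification step follows; the data, alphas, and fold counts are illustrative, not the project's actual values. Out-of-fold predictions are used for labelling so that the forest's near-perfect in-sample fit does not bias the labels.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(4)
X = rng.normal(size=(400, 5))
# A mostly linear target with a mild non-linear component
y = (X @ np.array([3.0, -2.0, 1.0, 0.0, 0.0]) + 2 * np.sin(X[:, 3])
     + rng.normal(0, 0.3, 400))

# Out-of-fold predictions for each candidate model
lm_pred = cross_val_predict(ElasticNet(alpha=0.01), X, y, cv=5)
rf_pred = cross_val_predict(
    RandomForestRegressor(n_estimators=100, random_state=0), X, y, cv=5)

# Label 1 where the forest's absolute error is smaller, else 0
labels = (np.abs(rf_pred - y) < np.abs(lm_pred - y)).astype(int)

selector = RandomForestClassifier(n_estimators=100, random_state=0)
selector.fit(X, labels)
```

The trained `selector` is the "model that predicts the best model": given a new house's features, it outputs which of LM or RF to trust.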
2. Using the Resultant Model to Predict Sales Price
Once the best prediction model was identified, that model was used to predict the sale price of that particular house. For each new house, both steps were repeated. The figure below illustrates this process.
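End to end, the two phases can be sketched as follows. Everything here is a self-contained toy reconstruction on synthetic data: the trained classifier routes each new house to the base model it predicts will perform better, and that model prices the house.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(5)
X = rng.normal(size=(300, 4))
y = (X @ np.array([2.0, -1.0, 0.5, 0.0]) + np.sin(X[:, 3])
     + rng.normal(0, 0.2, 300))

# Base models fit on all training data
lm = ElasticNet(alpha=0.01).fit(X, y)
rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Phase 1: label each house by its better model, using out-of-fold errors
lm_oof = cross_val_predict(ElasticNet(alpha=0.01), X, y, cv=5)
rf_oof = cross_val_predict(
    RandomForestRegressor(n_estimators=100, random_state=0), X, y, cv=5)
labels = (np.abs(rf_oof - y) < np.abs(lm_oof - y)).astype(int)
selector = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)

def hybrid_predict(X_new):
    """Phase 2: per-house dispatch to the selected base model."""
    choice = selector.predict(X_new)  # 1 -> Random Forest, 0 -> ElasticNet
    return np.where(choice == 1, rf.predict(X_new), lm.predict(X_new))

preds = hybrid_predict(rng.normal(size=(10, 4)))
```

The dispatch in `hybrid_predict` is the whole trick: each house gets exactly one base model's prediction, chosen per house rather than globally.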
Each model pursued the following steps:
- On the cleaned data, dummy or label encoding was performed on all categorical variables based on the type of categorical variable.
- The dataset was split into training and testing data, where 80% was the training data, and 20% was the testing data.
- The splitting of data was randomized by using a random seed. One hundred (100) random splits were used for each model, and the final result was the average over all of these training and test splits.
- The training dataset was used to fit all of the models and the testing dataset was used to predict the house sales price.
- The prediction results obtained from all models were then evaluated using R-Squared (R2) and Root Mean Squared Error (RMSE). R2 determines the proportion of variance in the dependent variable that the independent variables can explain. RMSE, which is the square root of the mean of the square of all of the errors, measures the error in the predictions.
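The split-and-average evaluation described above can be sketched as follows. The data here are synthetic and the model is a placeholder; the real pipeline runs this loop per model on the Ames features.

```python
import numpy as np
from sklearn.linear_model import ElasticNet
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
X = rng.normal(size=(500, 6))
coefs = rng.normal(size=6) * 3
y = X @ coefs + rng.normal(0, 0.5, 500)

r2s, rmses = [], []
for seed in range(100):  # one hundred random 80/20 splits
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.2, random_state=seed)
    model = ElasticNet(alpha=0.01).fit(X_tr, y_tr)
    pred = model.predict(X_te)
    r2s.append(r2_score(y_te, pred))
    rmses.append(mean_squared_error(y_te, pred) ** 0.5)  # RMSE

mean_r2, mean_rmse = float(np.mean(r2s)), float(np.mean(rmses))
```

Averaging over many random splits smooths out the luck of any single split, which is why the article reports averaged R2 and RMSE rather than a single test score.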
The table below shows the results obtained by each model. ElasticNet and the hybrid model both had significantly higher R2 values compared to that of the Random Forest model. Therefore, the hybrid model was the best model, though its R2 value was only slightly better than ElasticNet's.
| Model | Test R2 | Test RMSE | Hyperparameters |
| --- | --- | --- | --- |
| Linear Regression with ElasticNet Regularization | 0.9180 | 0.1401 | Alpha = 4.1 x 10^5 |
One of the important contributions of this project was the new hybrid model, which presents a new approach to ensembling multiple models to improve the accuracy of the results. This model correctly predicts sale price with a 91.88% R2 value. Nevertheless, for this dataset, linear models performed better than tree-based models.