Home Sales Price Estimation with Machine Learning Regression Models
Introduction:
Real estate markets are constantly changing; accurate feature importance analysis and price prediction models are therefore essential to facilitate efficient transactions between buyers and sellers. Given an extensive and accurate dataset, data science methods and machine learning models can provide such a solution.
Background:
This project used a dataset of home sales records from Ames, Iowa, to better understand local home prices and their associated features. The goal was to explore the importance of features in local home sale records and develop a price prediction model by iterating through various machine learning regression methods. It was broken down into three deliverables: 1) exploratory data insights with visuals, 2) descriptive modeling for feature importance/ranking related to price, and 3) predictive machine learning modeling and pipeline objects.
Dataset and Preprocessing:
The Ames Housing dataset was sourced from a Kaggle competition and contains over 2,500 home sale records with 79 input features. These features describe home-specific attributes, such as square footage and finish type, and break down into 11 numeric categorical, 28 descriptive categorical, and 40 quantitative features. After loading the original dataset, 28 columns had missing values, so imputation was carried out. Nominal categorical columns were dummy encoded, and separate CSV files were created for the input features and for the single target feature, the sale price.
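The preprocessing steps above can be sketched as follows. The column names and values here are illustrative stand-ins, not the actual Ames schema; the imputation strategies (median for numeric, mode for categorical) are assumptions, since the write-up does not specify which methods were used.

```python
# Sketch of the described preprocessing: impute missing values, then
# dummy-encode nominal categorical columns and split off the target.
# Column names and imputation choices are illustrative assumptions.
import pandas as pd

df = pd.DataFrame({
    "GrLivArea": [1500, 1800, None, 2100],      # quantitative
    "KitchenQual": ["Gd", None, "TA", "Ex"],    # descriptive categorical
    "SalePrice": [180000, 210000, 160000, 250000],
})

# Impute: median for numeric, mode for categorical
df["GrLivArea"] = df["GrLivArea"].fillna(df["GrLivArea"].median())
df["KitchenQual"] = df["KitchenQual"].fillna(df["KitchenQual"].mode()[0])

# Separate the target from the inputs and dummy-encode nominal features
y = df.pop("SalePrice")
X = pd.get_dummies(df, columns=["KitchenQual"])

# Separate CSV files for inputs and target, as described above
X.to_csv("features.csv", index=False)
y.to_csv("sale_price.csv", index=False)
```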
Exploratory Data Analysis (EDA):
Initially, the dataset was inspected for the sale price distribution, which had a mean of $178,000 and a standard deviation of $75,000. Although the distribution was right-skewed, with most homes clustered at lower prices and a tail of higher-priced ones, it was close enough to normal that no further transformations were needed.
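A quick way to quantify this kind of skew check is the sample skewness statistic; the toy prices below are illustrative, not the actual Ames records.

```python
# Minimal skewness check on a right-skewed set of sale prices.
# Values are illustrative, not the actual Ames data.
import numpy as np
from scipy.stats import skew

prices = np.array([120, 140, 150, 160, 170, 180, 200, 250, 400, 550]) * 1000

print(skew(prices))  # positive => right-skewed (tail of expensive homes)

# When skew is severe, a common remedy is log-transforming the target:
log_prices = np.log1p(prices)
print(skew(log_prices))  # closer to zero after the transform
```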
Next, feature importance was analyzed by identifying the top five home features positively correlated with the sale price. This required two correlation measures: the Pearson coefficient for numerical features and a chi-squared statistic for categorical features. The top five features were reported for both types.
Top 5 Numeric Features:
- Overall Quality
- Living Area Size
- Exterior Quality
- Kitchen Quality
- Basement Size
Top 5 Categorical Features:
- Roof Material
- Neighborhood
- Building Type
- Garage Type
- Utilities
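The two-track correlation analysis above can be sketched as follows. The toy data is illustrative, and binning the sale price into quantiles to form the chi-squared contingency table is an assumption, since the write-up does not state how the continuous price was handled for that test.

```python
# Sketch of the two-track feature ranking: Pearson correlation for a
# numeric feature, chi-squared for a categorical one. The data and the
# quantile-binning of SalePrice are illustrative assumptions.
import pandas as pd
from scipy.stats import pearsonr, chi2_contingency

df = pd.DataFrame({
    "GrLivArea": [1200, 1500, 1800, 2100, 2400, 2700],
    "Neighborhood": ["A", "A", "B", "B", "C", "C"],
    "SalePrice": [130000, 150000, 185000, 210000, 260000, 300000],
})

# Numeric: Pearson coefficient against sale price
r, _ = pearsonr(df["GrLivArea"], df["SalePrice"])

# Categorical: chi-squared on a contingency table of the feature
# against sale price binned into quantiles
price_bins = pd.qcut(df["SalePrice"], q=3, labels=False)
table = pd.crosstab(df["Neighborhood"], price_bins)
chi2, p, dof, _ = chi2_contingency(table)

print(f"Pearson r = {r:.3f}, chi-squared = {chi2:.2f}")
```

Ranking each feature type by its respective statistic then yields the top-five lists above.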
Finally, a neighborhood pricing analysis was done to understand the geography of sales records and rank the neighborhoods relative to home sale price.
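The neighborhood ranking can be sketched as a group-by over median sale price; the neighborhoods and prices below are illustrative, not the project's results.

```python
# Sketch of the neighborhood pricing analysis: group sale records by
# neighborhood and rank by median sale price. Toy data, not Ames records.
import pandas as pd

sales = pd.DataFrame({
    "Neighborhood": ["NridgHt", "OldTown", "NridgHt", "OldTown", "CollgCr"],
    "SalePrice": [310000, 120000, 290000, 135000, 200000],
})

ranking = (
    sales.groupby("Neighborhood")["SalePrice"]
    .median()
    .sort_values(ascending=False)
)
print(ranking)
```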
Model Training and Evaluation:
Sale price prediction involved several machine learning regression methods and pipelines. Model hyperparameters were selected based on mean R-squared values from five-fold cross-validation, and mean absolute error (MAE) on the complete dataset was then used for final model evaluation and selection. Although more computationally expensive, five-fold cross-validation was intended to prevent overfitting and false model selection.
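This selection protocol can be sketched with scikit-learn: grid search scored by mean five-fold cross-validated R-squared, followed by MAE on the full dataset. The ridge model, grid values, and synthetic data are illustrative stand-ins.

```python
# Sketch of the selection protocol: choose hyperparameters by mean
# five-fold CV R-squared, then evaluate MAE on the complete dataset.
# Model, grid, and data are illustrative, not the project's exact setup.
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import mean_absolute_error

X, y = make_regression(n_samples=200, n_features=10, noise=10, random_state=0)

search = GridSearchCV(
    Ridge(),
    param_grid={"alpha": [0.1, 1.0, 10.0]},
    scoring="r2",
    cv=5,  # five-fold cross-validation to guard against overfitting
)
search.fit(X, y)

best = search.best_estimator_
mae = mean_absolute_error(y, best.predict(X))
print(f"CV R2 = {search.best_score_:.3f}, MAE = {mae:.1f}")
```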
After the ETL and EDA steps above, the modeling workflow began with a baseline multiple linear regression without penalization to establish baseline performance metrics. Three other regression models were then evaluated. Finally, a stacking model was assessed, using the Gradient Boosting, Random Forest, and Penalized Regression models configured as layer one.
Regression models:
- Penalized Regression w/Elastic Net (grid search)
- Gradient Boosting (grid search)
- Random Forest Regression w/Bagging
- Stacking Regression Model
- Penalized Regression w/Elastic Net (grid search)
Penalized regression with an elastic net combines Lasso (L1) and Ridge (L2) regularization to better weigh the model's important features while controlling overfitting and multicollinearity. It does this by minimizing a cost function that blends the L1 and L2 penalties. The modeling strategy cross-validated three L1 ratios and several alpha values with a grid search.
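A minimal sketch of that grid search, assuming illustrative `l1_ratio` and `alpha` values rather than the project's tuned ones:

```python
# Sketch of the elastic-net grid search: cross-validate a few l1_ratio
# values (the L1/L2 mix) and alpha strengths. Grid values are illustrative.
from sklearn.datasets import make_regression
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import GridSearchCV

X, y = make_regression(n_samples=200, n_features=20, noise=15, random_state=1)

grid = GridSearchCV(
    ElasticNet(max_iter=10000),
    param_grid={
        "l1_ratio": [0.2, 0.5, 0.8],   # balance of Lasso (L1) vs Ridge (L2)
        "alpha": [0.01, 0.1, 1.0],     # overall penalty strength
    },
    scoring="r2",
    cv=5,
)
grid.fit(X, y)
print(grid.best_params_)
```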
- Random Forest Regression w/Bagging
Random Forest regression is a decision-tree ensemble technique that combines multiple decision trees for accuracy, with tree depth as a core search parameter. The method relies on bagging: training individual decision trees on random subsets of the training data and then averaging their predictions to reduce overfitting. Grid search was again used to find the best model.
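A sketch of this step, assuming an illustrative parameter grid; note that bagging is built into scikit-learn's forest via `bootstrap=True`, which trains each tree on a random bootstrap sample.

```python
# Sketch of the random-forest step: bagging is built into the forest
# (bootstrap=True samples the training data per tree), and grid search
# tunes tree depth and forest size. Parameter values are illustrative.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

X, y = make_regression(n_samples=200, n_features=10, noise=10, random_state=2)

rf_search = GridSearchCV(
    RandomForestRegressor(bootstrap=True, random_state=0),
    param_grid={"max_depth": [4, 8, None], "n_estimators": [100, 300]},
    scoring="r2",
    cv=5,
)
rf_search.fit(X, y)
print(rf_search.best_params_)
```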
- Gradient Boosting
Gradient boosting trains a model by sequentially fitting weak learners, each focusing on the errors of the previous one, which optimizes prediction accuracy. The same hyperparameters were grid-searched here: tree depth and the number of trees/estimators. This model produced the highest accuracy of the individual models.
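A sketch of the boosting step over the two named hyperparameters; the grid values are illustrative, not the project's tuned settings.

```python
# Sketch of the gradient-boosting step: sequential weak learners, with
# grid search over tree depth and number of estimators as described.
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV

X, y = make_regression(n_samples=200, n_features=10, noise=10, random_state=3)

gb_search = GridSearchCV(
    GradientBoostingRegressor(random_state=0),
    param_grid={"max_depth": [2, 3, 4], "n_estimators": [100, 300]},
    scoring="r2",
    cv=5,
)
gb_search.fit(X, y)
print(gb_search.best_params_)
```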
- Stacking
Finally, a stacked machine learning model is an ensemble technique that combines multiple regression models. It was built from the three trained models above, utilized all 79 features, and produced the best mean cross-validated R-squared value.
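The stacking architecture can be sketched with scikit-learn's `StackingRegressor`: the three models form layer one and a simple linear model combines their predictions. The base-model hyperparameters and the linear meta-learner here are illustrative assumptions, not the project's tuned values.

```python
# Sketch of the stacking ensemble: elastic net, random forest, and
# gradient boosting as layer one, with a linear meta-learner.
# Hyperparameters are illustrative stand-ins for the tuned values.
from sklearn.datasets import make_regression
from sklearn.ensemble import (
    GradientBoostingRegressor,
    RandomForestRegressor,
    StackingRegressor,
)
from sklearn.linear_model import ElasticNet, LinearRegression
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=200, n_features=10, noise=10, random_state=4)

stack = StackingRegressor(
    estimators=[
        ("enet", ElasticNet(alpha=0.1, l1_ratio=0.5, max_iter=10000)),
        ("rf", RandomForestRegressor(n_estimators=100, random_state=0)),
        ("gb", GradientBoostingRegressor(random_state=0)),
    ],
    final_estimator=LinearRegression(),
)
scores = cross_val_score(stack, X, y, cv=5, scoring="r2")
print(f"mean CV R2 = {scores.mean():.3f}")
```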
Model Selection and Results:
After modeling several regression pipelines and ensembling them into a single stacked model, the stacking model had the highest R-squared value and the second-lowest mean absolute error. This workflow is an excellent example of how individual pipeline ML objects can be stacked together to improve accuracy.
Conclusion:
This project applied data science best practices and machine learning modeling to a home sales dataset from Ames, Iowa. Over 2,500 records and 79 features were analyzed. The dataset was first inspected and passed through a typical ETL process. It was then explored, and feature importance was assessed across the 79 features correlated with home sale price. Top features included living area size, overall quality, and roof material. Finally, an extensive machine learning modeling workflow was evaluated to find the best regression model for predicting home sale prices. The best model used an ensemble/stacked architecture built from the three best pipeline objects.