Data Driven Predictions of Ames, IA House Prices
The skills the authors demonstrated here can be learned through taking the Data Science with Machine Learning bootcamp at NYC Data Science Academy.
Introduction to Ames Housing Prices Data
Purchasing a house is something most of us will do at least once in our lifetime. When it comes to buying a home, one of the biggest factors is predicting the sale price of the house in order to decide whether the purchase is a good investment. In this project we use data on house sale prices from Ames, Iowa, and apply several types of machine learning models to predict sale price.
Data Pre-Processing
To address the missingness in the data, we imputed the missing values using several methods:
● Lot Frontage - imputed using KNN based on LotConfig and LotArea
● BsmtExposure/BsmtFinType2/Electrical - imputed by simple random sampling from the observed values
● NAs that actually describe a true category (e.g., no pool, no fence) were replaced with "No"
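The sketch below shows one way these imputation rules could be written with pandas and scikit-learn; the DataFrame name `ames` and the exact list of "true category" columns are assumptions for illustration.

```python
import numpy as np
from sklearn.impute import KNNImputer

# `ames`: DataFrame of the raw Ames training data (assumed loaded upstream).
rng = np.random.default_rng(0)

# Lot Frontage: KNN imputation, with LotConfig (label-encoded) and LotArea defining the neighbors.
knn_cols = ames[["LotFrontage", "LotArea"]].copy()
knn_cols["LotConfig"] = ames["LotConfig"].astype("category").cat.codes
ames["LotFrontage"] = KNNImputer(n_neighbors=5).fit_transform(knn_cols)[:, 0]

# BsmtExposure / BsmtFinType2 / Electrical: simple random sampling from the observed values.
for col in ["BsmtExposure", "BsmtFinType2", "Electrical"]:
    missing = ames[col].isna()
    ames.loc[missing, col] = rng.choice(ames.loc[~missing, col].to_numpy(), size=missing.sum())

# NAs that really mean "feature not present" become the category "No"
# (the column list here is illustrative, not exhaustive).
true_na_cols = ["PoolQC", "Fence", "FireplaceQu", "GarageQual", "GarageCond"]
ames[true_na_cols] = ames[true_na_cols].fillna("No")
```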
Feature Engineering of Data
To begin the feature engineering, the categorical features were first split into categorical nominal and categorical ordinal.
Categorical Ordinal
● Encoded on an integer scale starting at 1 (poor) through n, with higher values representing better or higher quality for each feature
○ ExterQual, ExterCond, HeatingQC, KitchenQual, BsmtQual, BsmtCond, FireplaceQu, GarageQual, GarageCond, BsmtFinType1, Functional, GarageFinish, PavedDrive, PoolQC, Fence
Categorical Nominal
● One-hot encoding was used to dummify the features below
○ LotConfig, Exterior1st, Exterior2nd, Foundation, MasVnrType, SaleCondition, BsmtExposure, Misc
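A minimal sketch of both encoding paths, continuing with the `ames` DataFrame assumed above; the exact integer mapping for the quality labels is an assumption.

```python
import pandas as pd

# Ordinal features: quality labels mapped onto an increasing integer scale
# (the exact mapping below is an assumption; "No" comes from the imputation step).
quality_scale = {"No": 0, "Po": 1, "Fa": 2, "TA": 3, "Gd": 4, "Ex": 5}
quality_cols = ["ExterQual", "ExterCond", "HeatingQC", "KitchenQual", "BsmtQual",
                "BsmtCond", "FireplaceQu", "GarageQual", "GarageCond", "PoolQC"]
for col in quality_cols:
    ames[col] = ames[col].map(quality_scale)

# Nominal features: one-hot encoded (dummified).
nominal_cols = ["LotConfig", "Exterior1st", "Exterior2nd", "Foundation",
                "MasVnrType", "SaleCondition", "BsmtExposure"]
ames = pd.get_dummies(ames, columns=nominal_cols)
```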
Additional Features
● Combined redundant features and created new features to offer a different perspective on the dataset
● TotalBath = BsmtFullBath + BsmtHalfBath/2 + FullBath + HalfBath/2
● Bedroom/Bathroom Ratio = BedroomAbvGr/TotalBath
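The engineered columns could be computed along these lines (the column name `BedroomBathRatio` is illustrative):

```python
import numpy as np

# Half baths count as half a bathroom.
ames["TotalBath"] = (ames["BsmtFullBath"] + ames["BsmtHalfBath"] / 2
                     + ames["FullBath"] + ames["HalfBath"] / 2)

# Bedroom/bathroom ratio; guard against division by zero for homes with no bathrooms.
ames["BedroomBathRatio"] = ames["BedroomAbvGr"] / ames["TotalBath"].replace(0, np.nan)
```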
Binning for Neighborhood
● Computed the average price per square foot for each neighborhood
● Transformed Neighborhood into an ordinal feature based on average price per sqft ranking
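A possible implementation of the neighborhood ranking, assuming price per square foot is based on above-ground living area (GrLivArea):

```python
# Average sale price per square foot of above-ground living area, per neighborhood.
price_per_sqft = ames["SalePrice"] / ames["GrLivArea"]
nbhd_rank = (price_per_sqft.groupby(ames["Neighborhood"]).mean()
             .rank(method="dense").astype(int))

# Replace the neighborhood label with its rank (1 = lowest average price per sqft).
ames["Neighborhood"] = ames["Neighborhood"].map(nbhd_rank)
```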
Feature Selection of Data
After the feature engineering, the distribution of each feature was examined. Columns with extreme skewness, listed below, were dropped.
Categorical Nominal
● Dropped based on distribution of classes
○ MSZoning, Street, Alley, LotShape, LandContour, Utilities, LandSlope, BldgType, HouseStyle, RoofStyle, RoofMatl, BsmtFinType2, Heating, Electrical, Condition1, Condition2, CentralAir, GarageType, SaleType, BsmtFinSF1, BsmtFinSF2, BsmtUnfSF, 1stFlrSF, 2ndFlrSF, LowQualFinSF, MSSubClass
● Binned classes with fewer than 100 observations into an "Other" category (see the sketch below)
○ Exterior1st, Exterior2nd, Foundation
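A small helper like the one below could perform that binning before the one-hot encoding step; the function name is hypothetical.

```python
import pandas as pd

def bin_rare_levels(s: pd.Series, min_count: int = 100) -> pd.Series:
    """Replace levels observed fewer than `min_count` times with 'Other'."""
    counts = s.value_counts()
    rare = counts[counts < min_count].index
    return s.where(~s.isin(rare), "Other")

for col in ["Exterior1st", "Exterior2nd", "Foundation"]:
    ames[col] = bin_rare_levels(ames[col])
```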
Regularized Regression
● Stepwise regression was applied to the full set of features
- Each feature was added or dropped based on its significance level (a sketch of this loop appears after this list)
● 33 features were picked based on the selection process
- OverallQual, GrLivArea, Neighborhood, TotalBsmtSF, BsmtExposure_Gd, KitchenQual, GarageCars, OverallCond, MasVnrArea, BsmtFinType1, SaleCondition_Partial, Fireplaces, MasVnrType_BrkFace, KitchenAbvGr, GarageYrBlt, LotConfig_CulDSac, BsmtExposure_NoBsmt, BsmtQual, TotRmsAbvGrd, WoodDeckSF, ScreenPorch, ExterQual, LotArea, BedroomAbvGr, Functional, Exterior1st_Plywood, YearRemodAdd, TotalBath, Bedroom.Bathroom, SaleCondition_Normal, LotConfig_FR2, GarageQual, YearBuilt
- R^2 = 0.844, Adjusted R^2 = 0.840, AIC = 34251.1597
● Compared these with the features selected by Lasso and kept the 29 features that appear in both feature selection processes
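A simplified sketch of such an add/drop loop using statsmodels, assuming a design matrix `X` and target `y`; the entry/removal p-value thresholds are illustrative, not the exact criteria used.

```python
import pandas as pd
import statsmodels.api as sm

def stepwise_select(X, y, enter=0.05, remove=0.10):
    """Greedy add/drop of predictors by p-value: a simplified stand-in for stepwise OLS."""
    selected = []
    while True:
        changed = False
        # Forward step: try each remaining feature, add the one with the lowest p-value.
        candidates = [c for c in X.columns if c not in selected]
        pvals = pd.Series(dtype=float)
        for c in candidates:
            fit = sm.OLS(y, sm.add_constant(X[selected + [c]])).fit()
            pvals[c] = fit.pvalues[c]
        if len(pvals) and pvals.min() < enter:
            selected.append(pvals.idxmin())
            changed = True
        # Backward step: drop the worst remaining predictor if it exceeds the removal threshold.
        model = sm.OLS(y, sm.add_constant(X[selected])).fit()
        if selected:
            worst = model.pvalues.drop("const").idxmax()
            if model.pvalues[worst] > remove:
                selected.remove(worst)
                changed = True
        if not changed:
            return selected, model
```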
Lasso Penalization
- Selected the best alpha (searched from 0.001 to 100) via 10-fold cross validation
- Identified features with non-zero coefficients at the best alpha, reducing the features from 76 to 38
- Further reduced the features from 38 to 29 based on BIC
- Split the train set with the reduced features into train/test sets to assess the spread of train/test R^2
- Cross validated the train set with the reduced features to further inspect the stability of R^2 (a scikit-learn sketch of this workflow appears after the results table)
| | Lasso | Lasso | Linear Regression |
|---|---|---|---|
| Hyperparameter | alpha = 15.70 | alpha = 0.001 | |
| # of features | 38 | 29 | 29 |
| Train R^2* | 0.8425 | 0.8402 | 0.8402 |
| Test R^2 | 0.8329 | 0.8319 | 0.8319 |
| Cross Validation** | 0.8441 | 0.8421 | 0.8421 |

*30% split for testing
**5-fold cross validation performed
Features for Regularized Regression:
LotArea, Neighborhood, OverallQual, OverallCond, MasVnrArea, ExterQual, BsmtQual, BsmtFinType1, TotalBsmtSF, GrLivArea, BedroomAbvGr, KitchenAbvGr, KitchenQual, TotRmsAbvGrd, Functional, Fireplaces, GarageYrBlt, GarageFinish, GarageCars, WoodDeckSF, 3SsnPorch, ScreenPorch, TotalBath, LotConfig_CulDSac, LotConfig_FR2, Exterior1st_Plywood, SaleCondition_Partial, BsmtExposure_Gd, BsmtExposure_NoBsmt
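The alpha search and the train/test/CV checks could be reproduced roughly as follows with scikit-learn; scaling inside a pipeline, the alpha grid granularity, and the random seed are assumptions.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X: engineered feature matrix (76 columns), y: SalePrice -- assumed prepared upstream.
alphas = np.logspace(-3, 2, 100)                      # search grid from 0.001 to 100
lasso = make_pipeline(StandardScaler(), LassoCV(alphas=alphas, cv=10))
lasso.fit(X, y)

best_alpha = lasso.named_steps["lassocv"].alpha_
nonzero = X.columns[lasso.named_steps["lassocv"].coef_ != 0]
print(f"best alpha = {best_alpha:.3f}, non-zero features = {len(nonzero)}")

# Spread of train/test R^2 on the reduced feature set (30% holdout), plus 5-fold CV.
X_tr, X_te, y_tr, y_te = train_test_split(X[nonzero], y, test_size=0.3, random_state=0)
refit = make_pipeline(StandardScaler(), LassoCV(alphas=alphas, cv=10)).fit(X_tr, y_tr)
print("train R^2:", refit.score(X_tr, y_tr), "test R^2:", refit.score(X_te, y_te))
print("5-fold CV R^2:", cross_val_score(refit, X[nonzero], y, cv=5).mean())
```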
Random Forest: Base Model / Feature Selection
- Used random forest regressor to rank feature importance for inclusion in tree-based models
- Trained initial random forest with all 75 features, and selected features for additional models based on importance:
- Trained model on 25 features with >1% feature importance
- From 25 feature model, identified 16 features with >2% importance
- In 16 feature model, isolated 8 features with >5% importance
- Even when restricted to only 8 features, a high degree of overfitting is evident from the large gap between train and validation error (see the table and importance-screening sketch below)
| # of Features | Train R^2 | Validation R^2 |
|---|---|---|
| 75 | 0.9775 | 0.8428 |
| 25 | 0.9805 | 0.8675 |
| 16 | 0.9803 | 0.8602 |
| 8 | 0.9764 | 0.8373 |
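A sketch of the importance-based screening, assuming `X` holds all 75 engineered features and `y` is SalePrice; the 500-tree forest and 30% holdout are illustrative choices.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_tr, y_tr)
print("train R^2:", rf.score(X_tr, y_tr), "validation R^2:", rf.score(X_val, y_val))

# Keep features above an importance threshold (1%, then 2%, then 5%) and refit.
importance = pd.Series(rf.feature_importances_, index=X.columns).sort_values(ascending=False)
keep = importance[importance > 0.01].index
rf_small = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_tr[keep], y_tr)
print("reduced model validation R^2:", rf_small.score(X_val[keep], y_val))
```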
Random Forest: Hyperparameter Tuning
- Evaluated variations in hyperparameters on 8 feature random forest to limit overfitting:
- # of trees: 100-1,000
- Max depth: 2-5
- Max features: 2-5
- A high degree of overfitting regardless of hyperparameter values suggests there are still too many features and/or the data is not well suited to random forest prediction (a sketch of the grid search follows)
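The grid search might look like this, where `X8` is assumed to hold the 8 selected features and `y` the sale price.

```python
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

# Grid over the hyperparameter ranges described above.
param_grid = {
    "n_estimators": [100, 250, 500, 1000],
    "max_depth": [2, 3, 4, 5],
    "max_features": [2, 3, 4, 5],
}
search = GridSearchCV(RandomForestRegressor(random_state=0), param_grid,
                      scoring="r2", cv=5, return_train_score=True)
search.fit(X8, y)
print(search.best_params_, round(search.best_score_, 4))
```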
Random Forest: Forward Feature Selection
- Ran a forward feature selection from the null model to isolate the most important features and minimize overfitting
- A high degree of overfitting exists with as few as two features, indicating that random forest may not be the best model for this problem
- Of the first five features in forward selection, only one (the bedroom/bathroom ratio) does not overlap with the features identified using regularized regression
- To account for the impact of the bedroom/bathroom ratio on sale price, the 5-feature random forest was selected for inclusion in the final model (results below; a sketch of the selection loop follows the table)
| n | Feature Added | Train R^2 | Validation R^2 |
|---|---|---|---|
| 1 | Overall Quality | 0.6820 | 0.6787 |
| 2 | Neighborhood | 0.7979 | 0.7390 |
| 3 | Above Ground Living Area | 0.9736 | 0.8174 |
| 4 | Bedroom / Bathroom Ratio | 0.9794 | 0.8531 |
| 5 | Total Basement Square Feet | 0.9811 | 0.8719 |
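A bare-bones version of the forward selection loop; the 300-tree forest and fixed 30% holdout are assumptions for illustration, with `X` and `y` assumed prepared upstream.

```python
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Greedy forward selection from the null model: at each step add the feature that
# most improves validation R^2.
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

selected, remaining = [], list(X.columns)
for _ in range(5):                       # stop after the five features reported above
    scores = {}
    for feat in remaining:
        rf = RandomForestRegressor(n_estimators=300, random_state=0)
        rf.fit(X_tr[selected + [feat]], y_tr)
        scores[feat] = rf.score(X_val[selected + [feat]], y_val)
    best = max(scores, key=scores.get)
    selected.append(best)
    remaining.remove(best)
    print(len(selected), best, round(scores[best], 4))
```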
Gradient Boosting
- Trained baseline gradient boosting regressor on all features to evaluate feature importance, with 30 features having >1% importance (high overlap with chosen linear regression features)
- Selected features using forward feature selection:
- Gradient boosting models show lower levels of overfitting than random forest at each number of features
- Selected the 8-feature model for inclusion in the final model (results below; a refit sketch follows the table)
| n | Feature Added | Train R^2 | Validation R^2 |
|---|---|---|---|
| 1 | Overall Quality | 0.6818 | 0.6783 |
| 2 | Above Ground Living Area | 0.8111 | 0.7779 |
| 3 | Neighborhood | 0.8445 | 0.8260 |
| 4 | Total Basement Square Feet | 0.8701 | 0.8396 |
| 5 | Total Bathrooms | 0.8771 | 0.8503 |
| 6 | Garage Cars | 0.8759 | 0.8500 |
| 7 | Kitchens Above Grade | 0.8754 | 0.8504 |
| 8 | Remodel Date | 0.8751 | 0.8552 |
| 9 | Sale Condition: Partial (New) | 0.8690 | 0.8486 |
| 10 | Number of Fireplaces | 0.8709 | 0.8495 |
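Refitting the chosen 8-feature gradient boosting model could look like the sketch below, assuming the column names match the engineered features from earlier steps and default boosting hyperparameters.

```python
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# The 8 features selected by forward selection (names assumed from the engineering steps).
features = ["OverallQual", "GrLivArea", "Neighborhood", "TotalBsmtSF",
            "TotalBath", "GarageCars", "KitchenAbvGr", "YearRemodAdd"]
X_tr, X_val, y_tr, y_val = train_test_split(ames[features], ames["SalePrice"],
                                            test_size=0.3, random_state=0)
gbm = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
print("train R^2:", gbm.score(X_tr, y_tr), "validation R^2:", gbm.score(X_val, y_val))
```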
Stacked Model
Lasso + GBM + Random Forest: R^2 = 0.985, MSE = 0.178
Lasso + GBM: R^2 = 0.903, MSE = 0.155
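One way to combine the base learners is scikit-learn's StackingRegressor; the specific estimators, hyperparameters, and linear meta-learner below are illustrative rather than the exact configuration behind the reported numbers, and `X`/`y` are assumed prepared with the reduced feature set.

```python
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor, StackingRegressor
from sklearn.linear_model import Lasso, LinearRegression
from sklearn.model_selection import cross_val_score

# Stack the three base models and blend their out-of-fold predictions with a linear meta-learner.
stack = StackingRegressor(
    estimators=[
        ("lasso", Lasso(alpha=0.001)),
        ("gbm", GradientBoostingRegressor(random_state=0)),
        ("rf", RandomForestRegressor(n_estimators=500, random_state=0)),
    ],
    final_estimator=LinearRegression(),
    cv=5,
)
print("stacked model CV R^2:", cross_val_score(stack, X, y, cv=5).mean())
```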
Future Exploration on Ames Housing Price Data
- Incorporate features from third-party sources such as mortgage interest rates or the unemployment rate.
- Attempt other imputation methodologies to assess whether they improve predictive performance.
- Explore dimension reduction techniques for feature selection purposes.