Using Data to Predict Housing Prices in Ames, Iowa
Kaggle competitions offer a great opportunity for those who want to practice and improve their data science skills. It's also always fun to play with different data sets that push you to explore and learn different techniques. It was for this reason that I decided to take on the 'House Prices: Advanced Regression Techniques' challenge. The goal is to predict housing prices in Ames, Iowa. This post will describe how I optimized my pricing model while following best practices that are expected outside the Kaggle environment. The general outline of the process was this:
- Understanding the Data
- Imputing Missing Values
- Feature Engineering/Dimension Reduction
- Fixing Skewness and Outliers
- Modeling
- Evaluation
Understanding Data Types
The data set consists of 79 features that serve as the predictors for 'SalePrice'. My first instinct was to look at all the variables and their data types. I saw that features like 'MSSubClass', 'MoSold' and 'YrSold' were integers but should actually be categories. I then changed them to object types so that I could convert all the necessary variables into categories in one go. I left the variable 'YearBuilt' as an integer due to its linear relationship with 'SalePrice'.
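The dtype pass above can be sketched in pandas. This is a minimal illustration on a toy frame (the column names come from the Ames data set, but the values here are made up):

```python
import pandas as pd

# Toy stand-in for the Ames training data; values are illustrative only.
df = pd.DataFrame({
    "MSSubClass": [20, 60, 50],       # numeric codes that are really categories
    "MoSold": [2, 7, 11],             # month sold: a label, not a quantity
    "YrSold": [2008, 2007, 2009],
    "YearBuilt": [1961, 1995, 1930],  # kept numeric: linear with SalePrice
})

# Cast the pseudo-numeric columns to object so a later bulk conversion to
# 'category' picks them up along with the true string columns.
for col in ["MSSubClass", "MoSold", "YrSold"]:
    df[col] = df[col].astype(object)

print(df.dtypes)
```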
There were many ordinal variables like 'ExterQual' and 'BsmtCond' with values such as "Excellent", "Good" and "Poor". In an effort to lower dimensionality during the dummification process, these variables were label encoded. This preserves the order of the values, with the assumption that each level is one unit better than the last.
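A simple way to do this label encoding is an explicit mapping, which keeps the quality scale from the data description (Po < Fa < TA < Gd < Ex) intact. A sketch:

```python
import pandas as pd

# Quality scale from the data description file: Po < Fa < TA < Gd < Ex.
quality_order = {"Po": 1, "Fa": 2, "TA": 3, "Gd": 4, "Ex": 5}

df = pd.DataFrame({"ExterQual": ["Gd", "TA", "Ex", "Fa"]})

# Map each ordinal label to its integer rank; unlike dummification, this
# costs zero extra columns and preserves the ordering.
df["ExterQual"] = df["ExterQual"].map(quality_order)
print(df["ExterQual"].tolist())  # [4, 3, 5, 2]
```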
Understanding Missingness
If a variable has TOO many missing values, we might as well just drop the whole column; imputing that many values runs too high a risk of misrepresenting the true population.
I arbitrarily decided to remove any columns with more than 20% missing values. As a result, I removed 'PoolQC', 'MiscFeature', 'Alley', 'Fence', and 'FireplaceQu'. I was close to removing 'LotFrontage' as well, but given the linear relationship between the logs of 'LotFrontage' and 'LotArea', its missing values were instead imputed from a regression of log('LotFrontage') on log('LotArea').
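That regression imputation can be sketched with a simple log-log fit. The data below is synthetic (the true LotFrontage/LotArea relationship is only assumed here), but the mechanics are the same: fit on the observed rows, predict the missing ones, back-transform.

```python
import numpy as np
import pandas as pd

# Synthetic stand-in: frontage grows roughly with the square root of area.
rng = np.random.default_rng(0)
lot_area = rng.uniform(5000, 20000, size=100)
lot_frontage = 2.0 * lot_area ** 0.5 + rng.normal(0, 2, size=100)
lot_frontage[::10] = np.nan  # punch some holes to impute

df = pd.DataFrame({"LotArea": lot_area, "LotFrontage": lot_frontage})

# Fit log(LotFrontage) ~ log(LotArea) on the observed rows...
obs = df["LotFrontage"].notna()
slope, intercept = np.polyfit(np.log(df.loc[obs, "LotArea"]),
                              np.log(df.loc[obs, "LotFrontage"]), deg=1)

# ...and fill the missing rows with the back-transformed prediction.
missing = ~obs
df.loc[missing, "LotFrontage"] = np.exp(
    intercept + slope * np.log(df.loc[missing, "LotArea"]))
```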
For other missing categorical values, the imputation depended on the variable. If it was a feature that a house may not necessarily have, such as 'GarageType', missing values were replaced with 'None'. On the other hand, if it was a feature that SHOULD have a value, like 'MSZoning', missing values were replaced with the mode of that variable. Then there are categorical features that have 'Other' as one of their values (this can be found in the data description file); for these, missing values were simply replaced with 'Other'. As for numerical features, missing values were imputed with the median of the training data.
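The three per-column strategies look like this in pandas (toy values, real column names):

```python
import pandas as pd

df = pd.DataFrame({
    "GarageType": ["Attchd", None, "Detchd"],  # absence is meaningful
    "MSZoning":  ["RL", "RL", None],           # every house must have a zone
    "MasVnrArea": [196.0, None, 0.0],          # numeric
})

# Feature a house can simply lack -> explicit 'None' level.
df["GarageType"] = df["GarageType"].fillna("None")

# Feature that must exist -> most frequent value (the mode).
df["MSZoning"] = df["MSZoning"].fillna(df["MSZoning"].mode()[0])

# Numeric feature -> median of the training data.
df["MasVnrArea"] = df["MasVnrArea"].fillna(df["MasVnrArea"].median())
```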
Feature Engineering/Dimension Reduction
Many of the existing features with less predictive power can be replaced with engineered ones that hopefully have more. For example, 'GarageYrBlt' and 'YearRemodAdd' were deleted and replaced with binary indicators for whether the house had a garage or was remodeled, together with the number of years between the build date and that event.
A house may also go for a higher price depending on whether it can accommodate the tenants well. Bath Capacity and Parking Capacity were engineered by dividing the total number of baths/garage spots by the number of bedrooms.
Finally, many of the variables that could be combined were. This resulted in features like Total Bathrooms, Total Floor SF, Total Porch SF, etc.
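The combination and capacity features above can be sketched as a few vectorized columns (weighting a half bath as half a bath is my reading of the usual convention, not something stated in the post):

```python
import pandas as pd

df = pd.DataFrame({
    "FullBath": [2, 1], "HalfBath": [1, 0],
    "BsmtFullBath": [1, 0], "BsmtHalfBath": [0, 1],
    "1stFlrSF": [856, 1262], "2ndFlrSF": [854, 0],
    "BedroomAbvGr": [3, 2], "GarageCars": [2, 1],
})

# Combine related columns; half baths count as half a bath.
df["TotalBathrooms"] = (df["FullBath"] + df["BsmtFullBath"]
                        + 0.5 * (df["HalfBath"] + df["BsmtHalfBath"]))
df["TotalFlrSF"] = df["1stFlrSF"] + df["2ndFlrSF"]

# Capacity ratios: how well the house accommodates its bedrooms.
df["BathCapacity"] = df["TotalBathrooms"] / df["BedroomAbvGr"]
df["ParkingCapacity"] = df["GarageCars"] / df["BedroomAbvGr"]
```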
The last effort to lower dimensionality was to remove any columns without much variation in their values. Features with zero or near-zero variance, which included 'Street', 'Utilities' and 'Condition2', add no value to a predictive model.
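One simple near-zero-variance filter is to drop any column whose most common value covers nearly all rows. The 99% cutoff below is a hypothetical choice for illustration, not the threshold used in the project:

```python
import pandas as pd

df = pd.DataFrame({
    "Street": ["Pave"] * 99 + ["Grvl"],         # almost constant
    "Utilities": ["AllPub"] * 100,              # constant
    "Neighborhood": ["NAmes", "CollgCr"] * 50,  # real variation
})

# Hypothetical rule: drop a column if its most frequent value covers >= 99% of rows.
threshold = 0.99
to_drop = [c for c in df.columns
           if df[c].value_counts(normalize=True).iloc[0] >= threshold]
df = df.drop(columns=to_drop)
print(to_drop)  # ['Street', 'Utilities']
```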
Using Data to Analyze Outliers
Outliers can make our model overfit and decrease its ability to generalize well. With visual inspection only, I removed observations that took away from the linearity of the feature plotted against SalePrice. These included features like 'LotFrontage' (removed obs > 250), 'GrLivArea' (removed obs > 4500) and 'TotalPorchSF' (removed obs > 700).
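Applying those visually chosen cutoffs is a one-line boolean filter (toy rows below; the cutoffs are the ones from the post, the data is not):

```python
import pandas as pd

df = pd.DataFrame({
    "GrLivArea": [1500, 2100, 5600, 1800],
    "LotFrontage": [60, 300, 80, 70],
    "SalePrice": [180000, 250000, 160000, 200000],
})

# Cutoffs chosen by eye from scatter plots against SalePrice.
df = df[(df["GrLivArea"] <= 4500) & (df["LotFrontage"] <= 250)]
print(len(df))  # 2 rows survive
```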
Using Data to Understand Skewness
For the dependent variable, we see a right-skewed distribution. In order to meet the assumptions of linear regression, we take the log to help ensure constant variance of the residuals.
However, taking the log of skewed independent variables can also be beneficial, and even kill two birds with one stone: (1) it may help create a stronger linear relationship with the dependent variable, and (2) it may serve as a way of dealing with outliers. Thus, I took log(x+1) of any numerical feature with a skew value greater than 0.6 or lower than -0.6.
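The skew rule translates directly to a loop over numeric columns with `np.log1p` (which is log(x+1)). The data here is synthetic, generated to mimic one skewed and one roughly symmetric feature:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "LotArea": rng.lognormal(9, 0.5, size=500),       # right-skewed
    "YearBuilt": rng.integers(1900, 2010, size=500),  # roughly symmetric
})

# log1p any numeric column whose skew falls outside [-0.6, 0.6].
for col in df.columns:
    if abs(df[col].skew()) > 0.6:
        df[col] = np.log1p(df[col])
```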
Using Data to Analyze Modeling
After the data set was dummified (resulting in 250 dimensions), it was time to model. Three regression models and three tree-based models were run, with GridSearchCV used to tune the hyperparameters of all six. Once tuned, they were all tested on a hold-out set (1/8 of the original training set). The results, in order of lowest hold-out set RMSE, are listed below:
- Elastic Net (.11345 RMSE)
- Tuned alpha, l1_ratio
- Lasso (.11350 RMSE)
- Tuned alpha
- Ridge (.11897 RMSE)
- Tuned alpha
- Xtreme Gradient Boost (.12411 RMSE)
- Tuned max depth, learning rate, n estimators, min child weight, gamma, colsample by tree, reg lambda, reg alpha
- Stochastic Gradient Boost (.12547 RMSE)
- Tuned max depth, learning rate, n estimators, min samples split, max features, min samples leaf
- Random Forest (.13589 RMSE)
- Tuned n estimators, max features, max depth, min samples split
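The tune-then-hold-out workflow looks like this for one of the six models. This is a sketch on synthetic data: the grid values and split seed are illustrative, only the tuned parameter names (alpha, l1_ratio) and the 1/8 hold-out come from the post.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import ElasticNet
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import GridSearchCV, train_test_split

# Synthetic regression data standing in for the dummified Ames matrix.
X, y = make_regression(n_samples=400, n_features=20, noise=10, random_state=0)

# Hold out 1/8 of the training data for the final model comparison.
X_tr, X_hold, y_tr, y_hold = train_test_split(X, y, test_size=1/8,
                                              random_state=0)

# Cross-validated grid search over alpha and l1_ratio (illustrative grid).
grid = GridSearchCV(
    ElasticNet(max_iter=10000),
    param_grid={"alpha": [0.001, 0.01, 0.1, 1.0],
                "l1_ratio": [0.1, 0.5, 0.9]},
    scoring="neg_root_mean_squared_error", cv=5)
grid.fit(X_tr, y_tr)

# Score the tuned model on the untouched hold-out set.
rmse = np.sqrt(mean_squared_error(y_hold, grid.predict(X_hold)))
print(grid.best_params_, round(rmse, 3))
```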
No surprise that the regularized linear regression models outperformed the tree-based models, as most of the features in the data set have a linear relationship with 'SalePrice'. Lasso being preferred over Ridge suggests it paid off to push the beta coefficients of many variables to zero.
Submission to Kaggle
Each of these models was then submitted to Kaggle. The results, in order of Kaggle's RMSE, are listed below:
- Lasso (.12449)
- Elastic Net (.12996)
- Ridge (.12996)
- Xtreme Gradient Boost (.13027)
- Gradient Boost (.13179)
- Random Forest (.14695)
Though the Kaggle scores were slightly higher than those from my hold-out set, the ranking of the models was largely consistent, except that Lasso overtook Elastic Net.
My last trump card to optimize my Kaggle score was to reap the advantages of each model by creating a stacked model. Since the Lasso model had the best hold-out set RMSE, it was used as the meta-model while the other five models were used as base models. The final Kaggle score for the stacked model was .12248, an improvement over all the other models, which ranked 990 out of 4634 contestants.
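The stacking idea can be sketched with scikit-learn's `StackingRegressor`. This is a reduced version of the setup described above (two base models instead of five, synthetic data, illustrative hyperparameters), with Lasso as the meta-model as in the post:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import Lasso, Ridge

# Synthetic regression data standing in for the prepared Ames matrix.
X, y = make_regression(n_samples=300, n_features=15, noise=5, random_state=0)

# Base models feed out-of-fold predictions to the Lasso meta-model.
stack = StackingRegressor(
    estimators=[("ridge", Ridge(alpha=1.0)),
                ("rf", RandomForestRegressor(n_estimators=50, random_state=0))],
    final_estimator=Lasso(alpha=0.001, max_iter=10000), cv=5)
stack.fit(X, y)
print(round(stack.score(X, y), 3))
```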
Conclusion
If there is one thing I learned from this project that I can pass on to any aspiring data scientist, it is that slight differences in data imputation can change EVERYTHING. How you decide to deal with missing data can either push your model to generalize well, or pull it into some imaginary world. There is almost a step-by-step approach to tuning your models, but when it comes to preparing your data, you really have to be creative. At some point in the future, I will definitely revisit this project and see how I can better impute and engineer more powerful predictors.