Analyzing Data to Understand Housing Prices in Ames
The skills demonstrated here can be learned through the Data Science with Machine Learning bootcamp at NYC Data Science Academy.
Will Han
Gary Lin
Background Information
Whether you are a home buyer, a home seller, or a house flipper, purchasing or selling a house is a major life event. It requires not only a pile of legal documents but also a huge lump sum of money. Many people work for years to save up for a down payment, and the mortgage often ends up being their largest month-to-month expense. Thus, extra caution and care go into analyzing house prices and data to avoid buying high or selling low.
To gain more insight into how house prices are determined, we decided to look at house price data in Ames, Iowa compiled by Dean De Cock. The data contained 79 explanatory features describing various aspects of residential homes, such as square footage, neighborhood, and many condition and quality measures for different parts of a home, including the kitchen, pool, and basement. The goals of this analysis were to:
- Develop a machine learning model to predict the sale price for each house in the data (train & test)
- Determine what features of a home impact housing prices the most and in what direction
Exploratory Data Analysis (EDA)
The data set consisted of 1,460 observations of the 79 explanatory variables plus SalePrice, the target variable in this analysis. In order to work with a cleaner set of data, we performed some feature selection, feature engineering, imputation, and outlier elimination.
Feature Selection & Engineering
As we looked through the features, we noticed that many were dominated by a single value. We decided to drop any feature whose largest category encompassed at least 90% of the data, on the assumption that such features would not provide much additional information to the machine learning models (a short code sketch of this rule follows the list). This resulted in dropping the following nine features:
- Street
- Alley
- Utilities
- Condition2
- RoofMatl
- Heating
- Electrical
- PoolQC
- MiscFeature
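Below is a minimal pandas sketch of the 90% rule; the helper name and the handling of missing values are our own choices, not taken from the original analysis:

```python
import pandas as pd

def drop_dominated_features(df: pd.DataFrame, threshold: float = 0.90) -> pd.DataFrame:
    """Drop columns whose most frequent value covers at least `threshold` of rows."""
    dominated = [
        col for col in df.columns
        if df[col].value_counts(dropna=False, normalize=True).iloc[0] >= threshold
    ]
    return df.drop(columns=dominated)

# e.g. houses = drop_dominated_features(houses)  # drops Street, Alley, Utilities, ...
```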
We checked this assumption by re-running some of the models with these nine features included; doing so did not improve our predictions. We also created additional features by combining existing ones (plausible definitions are sketched after the list):
- TotalSF (Total Sqft)
- SqftAbvGrd (Sqft above ground)
- TotalSqftPerRoom (Total Sqft per Room)
- SqftAbvGrdPerRoom (Sqft above ground per room)
- TotalBath (Total baths)
- GarageQC
- ExterQC
- BsmtQC
- OverallQC
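The post does not spell out the exact formulas behind these combined features, but using the standard Ames/Kaggle column names they plausibly look like the sketch below; treat every definition here as an assumption:

```python
# Plausible reconstructions of the combined features (assumptions, not the
# authors' exact formulas), using the standard Ames/Kaggle column names.
df["TotalSF"] = df["TotalBsmtSF"] + df["1stFlrSF"] + df["2ndFlrSF"]
df["SqftAbvGrd"] = df["1stFlrSF"] + df["2ndFlrSF"]
df["TotalSqftPerRoom"] = df["TotalSF"] / df["TotRmsAbvGrd"]
df["SqftAbvGrdPerRoom"] = df["SqftAbvGrd"] / df["TotRmsAbvGrd"]
df["TotalBath"] = (df["FullBath"] + 0.5 * df["HalfBath"]
                   + df["BsmtFullBath"] + 0.5 * df["BsmtHalfBath"])
# The combined quality scores (GarageQC, ExterQC, BsmtQC, OverallQC) would
# similarly merge the matching quality and condition columns after encoding.
```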
Since we were planning to use tree-based models from the scikit-learn library in Python, we needed to encode the categorical variables. An ordinal encoder was used for ordinal features (where the numerical order matters) to capture the relationship between values within a feature, whereas target encoding was used for non-ordinal features.
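As a sketch of what that encoding could look like (the post does not name the specific implementations, so the `category_encoders` package and the column subsets below are assumptions):

```python
from sklearn.preprocessing import OrdinalEncoder
import category_encoders as ce  # one common target-encoding implementation

# Ordinal features: the category order matters, so spell it out explicitly.
# Assumes missing-category values were already mapped to the string "None".
quality_order = ["None", "Po", "Fa", "TA", "Gd", "Ex"]  # worst -> best
ordinal_cols = ["KitchenQual", "BsmtQual", "GarageQual"]  # illustrative subset
enc = OrdinalEncoder(categories=[quality_order] * len(ordinal_cols))
X[ordinal_cols] = enc.fit_transform(X[ordinal_cols])

# Non-ordinal features: replace each category with a smoothed mean of the target.
nominal_cols = ["Neighborhood", "MSZoning"]  # illustrative subset
X[nominal_cols] = ce.TargetEncoder(cols=nominal_cols).fit_transform(X[nominal_cols], y)
```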
Imputation & Outlier Elimination
We then looked at null values in the data. We had to handle them carefully because many of the "NaN" values in the original data meant that the observation simply did not have a certain feature (pool, basement, etc.). So we first determined which features could have such "true" null values and set those to "None". After that, we imputed the remaining missing values as follows (a sketch follows the list):
- Numerical features: using median by neighborhood
- Categorical features: using mode by neighborhood
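A minimal sketch of this neighborhood-level imputation (the helper name and the guard against all-missing groups are our additions):

```python
import pandas as pd

def impute_by_neighborhood(df: pd.DataFrame) -> pd.DataFrame:
    df = df.copy()
    for col in df.columns[df.isna().any()]:
        if pd.api.types.is_numeric_dtype(df[col]):
            # Numerical: fill with the median within the house's neighborhood
            df[col] = df.groupby("Neighborhood")[col].transform(
                lambda s: s.fillna(s.median()))
        else:
            # Categorical: fill with the most common value in the neighborhood
            df[col] = df.groupby("Neighborhood")[col].transform(
                lambda s: s.fillna(s.mode().iloc[0]) if not s.mode().empty else s)
    return df
```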
Once imputations were completed, we created scatter plots of the features that we thought would be important (TotalSF, TotRmsAbvGrd, YearBuilt, LotArea, OverallQual, and SalePrice) as shown in Figure 1.
Figure 1
We saw some outliers in the TotalSF and LotArea columns and decided to remove them by dropping observations where TotalSF was over 7,500 and LotArea was over 10,000; this excluded six observations (a short filter sketch follows the figure). The updated scatter plot with the outliers removed is shown in Figure 2.
Figure 2
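In code, the cut-offs amount to a simple boolean filter; note that reading the rule as "either threshold" (OR) rather than "both" is our interpretation of the write-up:

```python
# Drop rows flagged by either cut-off (the AND/OR combination is our reading,
# not stated explicitly in the original analysis).
mask = (df["TotalSF"] > 7500) | (df["LotArea"] > 10000)
df = df[~mask]  # excludes the six outlying observations
```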
Correlation Matrix
Lastly, we created a Pearson correlation matrix using the features whose correlation with SalePrice exceeded 0.5, as shown in Figure 3 (a code sketch of this filtering step appears after the figure). The observations were all very intuitive, but the following were some highlights:
- TotalSF, OverallQual, and GrLivArea had the highest correlation with SalePrice
- Year built & year remodeled were less correlated with SalePrice than the square-footage and quality features
Figure 3
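The filtering step behind Figure 3 can be sketched as follows (the seaborn heatmap is an assumed visualization choice; the post does not name a plotting library):

```python
# Keep only features whose Pearson correlation with SalePrice exceeds 0.5,
# then inspect the full correlation matrix among those features.
corr = df.corr(numeric_only=True)
strong = corr.index[corr["SalePrice"] > 0.5]
corr_matrix = df[strong].corr()

# import seaborn as sns
# sns.heatmap(corr_matrix, annot=True)
```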
Model Testing
Given the large number of features in the data set, we decided to focus mainly on tree-based models. We did experiment with some linear models initially, but their performance was much worse than that of the tree-based models. So for the analysis, the following models were used (a construction sketch follows the list):
- Random Forest Regression
- Gradient Boosting Regression:
- Original Gradient Boosting Regression (GBR)
- Light Gradient Boosting Machine (LGBM) Regression
- Extreme Gradient Boosting (XGB) Regression
- Meta-Learner Models (using the four models above):
- Stacked Meta-Learner
- Weighted-Average Meta-Learner (Voting Regressor)
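Here is a construction sketch of the model lineup, including both meta-learners; all hyperparameters are left at defaults, and the Ridge final estimator for the stack is an assumption (the post does not say what was used):

```python
from sklearn.ensemble import (RandomForestRegressor, GradientBoostingRegressor,
                              StackingRegressor, VotingRegressor)
from sklearn.linear_model import Ridge
from lightgbm import LGBMRegressor
from xgboost import XGBRegressor

base_models = [
    ("rf", RandomForestRegressor(random_state=42)),
    ("gbr", GradientBoostingRegressor(random_state=42)),
    ("lgbm", LGBMRegressor(random_state=42)),
    ("xgb", XGBRegressor(random_state=42)),
]

# Stacked meta-learner: out-of-fold predictions of the base models feed a
# final estimator (Ridge is an assumed choice).
stack = StackingRegressor(estimators=base_models, final_estimator=Ridge(), cv=5)

# Weighted-average meta-learner: a plain (optionally weighted) blend of the
# base model predictions; pass weights=[...] to tune the blend.
vote = VotingRegressor(estimators=base_models)
```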
Since the data set has a limited number of observations (only about 1,400), we decided to perform 5-fold cross-validation for all models. Also, because the models have many hyperparameters, tuning was performed through Bayesian optimization with the hyperopt module, which had a much shorter runtime than a traditional grid search (a tuning sketch follows). When comparing the final model results, we used the root mean squared error (RMSE) between the log of the predicted value and the log of the observed value. The results of the six models are shown in Figure 4 below.
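The tuning loop could look roughly like the sketch below for the XGB model; the search space, `max_evals`, and the `X`/`y` variable names are illustrative assumptions, but the log-scale RMSE and 5-fold CV mirror the setup described above:

```python
import numpy as np
from hyperopt import fmin, tpe, hp, Trials
from sklearn.model_selection import cross_val_score
from xgboost import XGBRegressor

y_log = np.log1p(y)  # score RMSE on the log scale, as in the write-up

space = {  # illustrative search space; the actual ranges were not listed
    "max_depth": hp.choice("max_depth", [3, 4, 5, 6]),
    "learning_rate": hp.loguniform("learning_rate", np.log(0.01), np.log(0.3)),
    "n_estimators": hp.choice("n_estimators", [300, 500, 800]),
    "subsample": hp.uniform("subsample", 0.6, 1.0),
}

def objective(params):
    model = XGBRegressor(**params, random_state=42)
    # 5-fold CV; negate because scikit-learn returns a score to maximize
    return -cross_val_score(model, X, y_log, cv=5,
                            scoring="neg_root_mean_squared_error").mean()

best = fmin(fn=objective, space=space, algo=tpe.suggest,
            max_evals=100, trials=Trials())
```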
Figure 4
As shown in Figure 4, four metrics were measured: mean absolute error (MAE), average RMSE on the train set, the standard deviation of RMSE on the train set, and average RMSE on the test set provided by Dean De Cock. "Average" here refers to the mean across cross-validation folds. The model results can be summarized as follows:
- Average RMSE on the test set was our main focus, and XGB had the best result there
- The stacking meta-learner seemed to improve accuracy slightly on the train set, but XGB still had better accuracy on the test set
- GBR was the most consistent in terms of the standard deviation of RMSE
We then ran a few additional tests:
- Linear model using elastic-net: did not improve performance, possibly due to highly non-linear relationships between the features and the sale price
- Applying XGB on a restricted feature set (after removing less important features): the resulting RMSE was essentially the same
Feature Importance
Lastly, we looked at the feature importances of each of the four base models (excluding the meta-learners). We defined a feature as "important" if its importance was at least 10% of the maximum feature importance in that model; Figure 5 below shows the result for each model (a small helper sketch follows the figure).
Figure 5
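The 10%-of-max rule can be sketched with a small helper (the helper name is ours; all four model classes expose `feature_importances_` after fitting):

```python
import pandas as pd

def important_features(model, feature_names, frac_of_max=0.10):
    """Features whose importance is at least `frac_of_max` of the model's max."""
    imp = pd.Series(model.feature_importances_, index=feature_names)
    return imp[imp >= frac_of_max * imp.max()].sort_values(ascending=False)
```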
Then we created a bar graph of the consistently important features, i.e., those flagged as "important" in at least two of the models, as shown in Figure 6 below (a counting sketch follows the figure). The x-axis shows how many of the four models from Figure 5 considered the feature "important". According to the graph, the most important determinants of house price were house size and overall house quality.
Figure 6
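Counting how often each feature clears the threshold is then a one-pass tally; the fitted-model variable names below are assumptions carried over from the earlier sketches:

```python
from collections import Counter

# Tally how many of the four fitted models flag each feature as 'important',
# reusing the important_features() helper sketched above.
counts = Counter()
for model in [rf, gbr, lgbm, xgb]:  # assumed fitted-model names
    counts.update(important_features(model, X.columns).index)

consistent = {f: n for f, n in counts.items() if n >= 2}
```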
Narrowing the list down further to features that appeared "important" in at least three models, we created a Pearson correlation matrix, shown in Figure 7. Observations from the matrix were as follows:
- All of these important features were positively correlated with the sale price
- Total square footage, overall quality, and above-ground living area had the strongest correlations with sale price
- One exception was BsmtFinSF1, which was only weakly correlated
Figure 7
Conclusion & Key Take-Aways
After testing the various models, we confirmed that the XGB model produced the best accuracy in predicting house prices in Ames, Iowa. However, the feature importance analysis provided more actionable insights than the price predictions themselves:
- Square footage and various quality measures (basement, overall, and kitchen) appear to be the most important features across the four models tested
- Potential uses of the results:
- House flippers can use the information to determine where to use their renovation budget to add the most value to the property
- Homeowners who are trying to sell their property can focus on improving basement & kitchen quality before listing to get the most value out of the property