Studying Data to Predict House Prices
Introduction
The goal of this project was to use supervised machine learning techniques to predict the price of houses in Ames, Iowa. The dataset was provided by Kaggle, a popular website where data scientists compete and test their skills and knowledge.
The dataset provided around 80 features covering many aspects of each house, some of which help predict the sale price and some of which do not. Our team's approach was to derive meaning from the dataset through exploratory graphs and statistical methods, and then to apply different supervised machine learning algorithms to predict the house sale price.
Outline
- Exploratory Data Analysis (EDA)
  - Trends
  - Correlation
- Preprocessing & Feature Engineering
  - Data cleaning & Imputations
  - Ordinal Encoding
  - Log Transformations
  - Box-Cox Transformation
  - Label Encoding
  - One-Hot Encoding
- Models and Techniques
  - Linear Regression Models
  - Tree-based Models
- Results
- Future Improvements
Data Exploration
The dataset contained two CSV files (train.csv and test.csv). The training set had 1,460 observations and the test set had 1,459; the only key difference between the two was the absence of the sale price column in the test set. To predict the sale price of a house, we began by looking at factors like Neighborhood and Overall Quality.
As you can see below, both of these factors play important roles in housing prices. Neighborhood plays into the old saying about real estate being all about location, location, location, while Overall Quality shows that the higher the quality, the higher the sale price.
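For readers who want to reproduce this kind of view, here is a minimal sketch using the standard Kaggle column names (Neighborhood, OverallQual, SalePrice); it is illustrative rather than our exact plotting code:

```python
# Sketch of the comparison described above, assuming the standard Kaggle
# column names (Neighborhood, OverallQual, SalePrice); illustrative only.
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

train = pd.read_csv("train.csv")

fig, axes = plt.subplots(2, 1, figsize=(12, 10))

# Sale price by neighborhood: "location, location, location"
sns.boxplot(x="Neighborhood", y="SalePrice", data=train, ax=axes[0])
axes[0].tick_params(axis="x", labelrotation=90)

# Sale price by overall quality: higher quality tracks higher price
sns.boxplot(x="OverallQual", y="SalePrice", data=train, ax=axes[1])

plt.tight_layout()
plt.show()
```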
Data Cleaning & Feature Engineering
Below is a schematic of the data cleaning and feature engineering process that was performed on the datasets.
Data cleaning and feature engineering were performed to construct additional explanatory variables that could help predict the housing sale price. For this process, we combined the training and test datasets after dropping the sale price column. We first assessed features with missing values; columns with missing values were imputed as shown in the table below.
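A rough sketch of this combine-and-impute step follows (the specific fill rules below are illustrative, not the exact table we used):

```python
# Combine train and test, then impute missing values.
# The fill rules below are illustrative assumptions, not the exact table above.
import pandas as pd

train = pd.read_csv("train.csv")
test = pd.read_csv("test.csv")

# Drop the target before concatenating so both sets share the same columns
sale_price = train.pop("SalePrice")
full = pd.concat([train, test], axis=0, ignore_index=True)

# Inspect which columns have missing values
missing = full.isnull().sum()
print(missing[missing > 0].sort_values(ascending=False))

# For many categorical columns, NA simply means the feature is absent
for col in ["PoolQC", "Fence", "FireplaceQu", "GarageType", "BsmtQual"]:
    full[col] = full[col].fillna("None")

# Numeric counterparts of absent features are filled with 0
for col in ["GarageArea", "TotalBsmtSF", "MasVnrArea"]:
    full[col] = full[col].fillna(0)

# LotFrontage is commonly imputed with the neighborhood median
full["LotFrontage"] = full.groupby("Neighborhood")["LotFrontage"].transform(
    lambda s: s.fillna(s.median())
)
```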
Next, we engineered new features, either creating columns that would help predict the sale price or combining columns that contained redundant information.
Types of Feature Engineering
Three main types of feature engineering were performed:
- Columns such as Exterior1st and Exterior2nd seemed redundant, so we dummified them and combined the dummy columns. The same process was performed for Condition1 and Condition2.
- Several categorical predictors, such as masonry type and basement types of a house, have square footage information included in another column. We decided to combine the information by first dummifying columns MasVnrType, BsmtFinType1, and BsmtFinType2, then replacing the dummy variable with the actual square footage.
- Measured features, such as the square footage of an area, played a significant role in terms of correlation with the sale price. We engineered our own columns: Total FloorSF, Total Porch SF, and BsmtBath. Individually, each component had only a small impact on the sale price, but combining them created a much bigger impact. The graph below shows that the engineered Total FloorSF column has a strong correlation with the sale price, and a sketch of how these columns can be built follows.
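A minimal sketch of these engineered columns, assuming the standard Ames column names (the exact columns we summed may differ slightly):

```python
# Combine related square-footage and bathroom columns into single features.
# Column choices are an assumption based on the standard Ames field names.
import pandas as pd

def add_engineered_features(df: pd.DataFrame) -> pd.DataFrame:
    df = df.copy()
    # Total finished floor area: basement plus first and second floors
    df["TotalFloorSF"] = df["TotalBsmtSF"] + df["1stFlrSF"] + df["2ndFlrSF"]
    # All porch-type areas combined into one column
    df["TotalPorchSF"] = (
        df["OpenPorchSF"] + df["EnclosedPorch"]
        + df["3SsnPorch"] + df["ScreenPorch"]
    )
    # Basement bathrooms, counting half baths as 0.5
    df["BsmtBath"] = df["BsmtFullBath"] + 0.5 * df["BsmtHalfBath"]
    return df

full = add_engineered_features(full)  # applied to the combined frame from above
```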
Sale Price
Transforming the data was important due to the skewness of the overall data. The first transformation we performed was on the sale price. The figure below visually shows how skewed the data is and how a log transformation produces a more normal distribution of sale prices. We also applied the Box-Cox transformation to the predictor variables to bring them closer to a normal distribution.
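A rough sketch of both transformations, using log1p for the target and scipy's boxcox1p for the skewed predictors (the skewness cutoff and fixed lambda are illustrative choices):

```python
# Log-transform the target and Box-Cox-transform skewed numeric predictors.
import numpy as np
from scipy.stats import skew
from scipy.special import boxcox1p

# log(1 + SalePrice) gives a roughly normal target and matches the
# RMSE-of-log-price metric used on the Kaggle leaderboard
y = np.log1p(sale_price)

# boxcox1p handles zero-valued predictors such as empty porch areas
numeric_cols = full.select_dtypes(include=[np.number]).columns
skewness = full[numeric_cols].apply(lambda s: skew(s.dropna()))
skewed_cols = skewness[abs(skewness) > 0.75].index  # cutoff is an assumption

lam = 0.15  # fixed lambda for simplicity; it could also be fit per column
for col in skewed_cols:
    full[col] = boxcox1p(full[col], lam)
```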
Different types of encoding were performed on the categorical predictors because machine learning algorithms cannot process strings in their raw form. Three encoding approaches were used: one-hot encoding, label encoding, and ordinal encoding. Ordinal encoding was applied, before the Box-Cox transformation step, to predictors with an inherent ranking. After the Box-Cox transformation, the remaining categorical predictors were either all one-hot encoded or all label encoded: one-hot encoding was used for the regression techniques, and label encoding for the decision tree techniques.
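A minimal sketch of the three encodings (the ordinal mapping and the column groupings are illustrative assumptions):

```python
# Ordinal, one-hot, and label encoding of the categorical predictors.
import pandas as pd
from sklearn.preprocessing import LabelEncoder

# Ordinal encoding for predictors with an inherent ranking
# (this step happens before the Box-Cox transformation)
quality_scale = {"None": 0, "Po": 1, "Fa": 2, "TA": 3, "Gd": 4, "Ex": 5}
for col in ["ExterQual", "ExterCond", "BsmtQual", "KitchenQual", "HeatingQC"]:
    full[col] = full[col].map(quality_scale)

remaining_cat = full.select_dtypes(include="object").columns

# One-hot encoding: one 0/1 column per category, used with the regression models
full_onehot = pd.get_dummies(full, columns=remaining_cat)

# Label encoding: one integer column per predictor, used with the tree models
full_label = full.copy()
for col in remaining_cat:
    full_label[col] = LabelEncoder().fit_transform(full_label[col].astype(str))
```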
Models
We tried various types of models to understand which worked best for our dataset. We started with multiple linear regression, then lasso (L1), ridge (L2), and finally elastic net regression. Afterward, we used decision trees, random forest, gradient boosting, and XGBoost.
Lasso Regression
The regression techniques were used first because they were fairly easy to apply to our dataset and required little tuning to configure and optimize. We found that the different regression techniques produced very different results.
The regression technique that provided the best results was lasso regression. We believe L1 performed best because of how it handled our one-hot-encoded columns, shrinking unhelpful ones to zero and keeping the coefficients most strongly related to the sale price. Our team assumed that elastic net would outperform lasso because it combines the L1 and L2 penalties; however, this was not the case, and it produced an even worse CV score. The plot below shows the 20 most positive and 20 most negative coefficients in the linear regression model (a minimal sketch of the lasso fit follows the takeaways).
- Some neighborhoods have a strong positive impact on the model, and some neighborhoods have a strong negative impact
- TotalFlrSF and OverallQual are strong positive contributors to sale price
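A minimal sketch of the lasso fit, assuming the one-hot-encoded frame and log-price target from the earlier steps (the alpha grid is an illustrative choice, not our tuned value):

```python
# Cross-validated lasso on the one-hot-encoded predictors.
import numpy as np
import pandas as pd
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

n_train = len(y)
X_train = full_onehot.iloc[:n_train].fillna(0)  # fill any remaining gaps

# Lasso is sensitive to feature scale, so standardize before fitting
X_scaled = StandardScaler().fit_transform(X_train)

lasso = LassoCV(alphas=np.logspace(-4, -1, 50), cv=5).fit(X_scaled, y)

# Largest negative and positive coefficients, as in the plot above
coefs = pd.Series(lasso.coef_, index=X_train.columns).sort_values()
print(pd.concat([coefs.head(20), coefs.tail(20)]))
```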
Tree-Based Models
After our regression techniques, we switched over and implemented a number of tree-based models, such as random forest, gradient boosting, and XGBoost. The initial tree-based model was a single decision tree, which performed poorly; we used its result as a baseline to compare against the more complex tree-based models. Random forest, gradient boosting, and XGBoost had significantly better results, with XGBoost performing best. Below is the feature importance plot from the XGBoost model (a minimal sketch of the fit follows the takeaways).
- TotalFlrSF and OverallQual are again strong positive contributors
- Also confirms neighborhood as a strong predictor
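A minimal sketch of the XGBoost fit and importance plot, assuming the label-encoded frame (the hyperparameters are illustrative, not our tuned values):

```python
# XGBoost on the label-encoded predictors, plus its feature importance plot.
import xgboost as xgb
import matplotlib.pyplot as plt

X_train_tree = full_label.iloc[:n_train]  # XGBoost handles NaN natively

xgb_model = xgb.XGBRegressor(
    n_estimators=1000,
    learning_rate=0.05,
    max_depth=3,
    subsample=0.8,
    colsample_bytree=0.8,
)
xgb_model.fit(X_train_tree, y)

# Feature importance plot, analogous to the one shown above
xgb.plot_importance(xgb_model, max_num_features=20)
plt.show()
```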
The main reason for attempting so many modeling techniques was to see whether an ensemble could produce the best possible result. However, since lasso regression consistently yielded the best results in both cross-validation and Kaggle scores, ensembling all of the models was not worth the marginal improvement in score at the cost of losing model interpretability.
Data Results
The results below show that our Kaggle scores and cross-validation RMSE scores follow the same trend and are similar in range. None of our models were overfitting, and the lasso regression and XGBoost models had the best results.
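For reference, a sketch of the cross-validated RMSE we compare against the Kaggle score; because the target is log-transformed, this RMSE lines up with the leaderboard metric (RMSE of the log sale price):

```python
# K-fold cross-validated RMSE on the log-price target.
from sklearn.model_selection import cross_val_score

def cv_rmse(estimator, X, y, folds=5):
    scores = cross_val_score(
        estimator, X, y, cv=folds, scoring="neg_root_mean_squared_error"
    )
    return -scores.mean()

print("Lasso CV RMSE:  ", cv_rmse(lasso, X_scaled, y))
print("XGBoost CV RMSE:", cv_rmse(xgb_model, X_train_tree, y))
```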
Future Improvements
One idea for further improving our model accuracy would be to explore stacking or ensembling the individual models. In addition, we would like to further explore neighborhood effects on a couple of the important features we identified by using hierarchical linear regression. Our theory was that the clustering of neighborhoods played a bigger role in house prices than we initially thought, and hierarchical linear regression would have helped prove that theory right or wrong.
Another interesting topic for future improvement was the inclusion of time-series event data in our dataset. The 2008 recession must have had some impact, and our group wanted to see how it would have played out in our results. We also considered incorporating an economic index capturing things such as the economic status of the communities living in the area. Since Iowa State University sits in the middle of Ames, we noticed a trend in which houses near the college were typically lower in price, while those north of campus were higher.