# House Price Prediction using Machine Learning Techniques


## INTRODUCTION

This project predicts house prices in Ames, Iowa using machine learning regression methods. The data set comes from the Kaggle competition House Prices: Advanced Regression Techniques, and its 80 features were carefully reviewed and processed for more accurate house price prediction. The project covers an in-depth EDA, missing data imputation, feature engineering, and model building. Five machine learning regression models, Lasso, ElasticNet, Random Forest, Gradient Boosting, and XGBoost, were trained to predict house prices. In addition, the trained models were fed into a stacked model to maximize prediction accuracy.

## DATA EXPLORATION

The project began by exploring the features of the data set, which describe the presence of certain amenities, the number of rooms and garage spaces, the sizes of the various areas, house condition, age, and so on. Representative examples of the data exploration are described below.

### Distribution of Target Variable

First, the distribution of the target variable, *SalePrice*, was examined. As illustrated in Figure 1-(a), the majority of house prices range between $100,000 and $200,000, with a long tail stretching up to about $800,000. The distribution is right-skewed, which violates the normality assumption of linear models. To normalize the distribution, a Box-Cox transformation was applied; the result is shown in Figure 1-(b).

Figure 1. SalePrice distribution: (a) original; (b) after Box-Cox transformation
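A minimal sketch of this step, assuming the Kaggle training file `train.csv` and SciPy's Box-Cox implementation:

```python
import pandas as pd
from scipy import stats

train = pd.read_csv("train.csv")  # assumed: the Kaggle training file

print(f"Skewness before: {train['SalePrice'].skew():.2f}")  # right-skewed

# Box-Cox requires strictly positive values; SalePrice qualifies.
# stats.boxcox estimates the optimal lambda by maximum likelihood.
transformed, lam = stats.boxcox(train["SalePrice"])
print(f"Estimated lambda: {lam:.3f}")
print(f"Skewness after: {pd.Series(transformed).skew():.2f}")
```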

### Numerical Features Correlation

The features were reviewed and classified into numeric, ordinal, and nominal types. There are 38 numeric features in the training data set. To visualize the relationship between the numeric features and the target variable, a correlation matrix was created. The top nine features, *OverallQual*, *GrLivArea*, *GarageCars*, *GarageArea*, *TotalBsmtSF*, *1stFlrSF*, *FullBath*, *TotRmsAbvGrd*, and *YearBuilt*, are strongly correlated with *SalePrice*.

Figure 2. Correlation of Numeric Features
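A sketch of how such a ranking can be computed with pandas alone (again assuming `train.csv`):

```python
import pandas as pd

train = pd.read_csv("train.csv")  # assumed file name

# Correlations among the numeric features, including the target.
corr = train.select_dtypes(include="number").corr()

# The ten largest correlations with SalePrice (the first is SalePrice itself).
print(corr["SalePrice"].sort_values(ascending=False).head(10))
```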

## DATA PRE-PROCESSING

In this section, the following two tasks were conducted:

- Cleaning outliers
- Imputing missing data

### Cleaning Outliers

To remedy outliers, I decided to manually remove certain extreme values for a better fit. A scatter plot of *SalePrice* against *GrLivArea* was created and examined; *GrLivArea* has the highest correlation with *SalePrice* among the continuous numeric features. In Figure 3-(a), there are two extreme outliers on the bottom right of the plot: very large houses that sold at unusually low prices. They were safely removed from the data set, as illustrated in Figure 3-(b).

Figure 3. Outlier removal: (a) before; (b) after
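A minimal sketch of the removal; the exact thresholds are assumptions read off the scatter plot, not values stated in the text:

```python
import pandas as pd

train = pd.read_csv("train.csv")

# Assumed cutoffs: very large above-ground living area, unusually low price.
outliers = (train["GrLivArea"] > 4000) & (train["SalePrice"] < 300000)
print(f"Dropping {outliers.sum()} extreme outliers")
train = train[~outliers].reset_index(drop=True)
```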

### Imputing Missing Data

Missing values in the data set were examined next. Figure 4 illustrates the frequency of missing values in the training and test data sets.

Figure 4. Frequency of missing values in the training and test data

As a first step of missing data imputation, the description of each feature was carefully reviewed. Three numeric features, *MSSubClass*, *YrSold*, and *MoSold*, were then converted into categorical variables. Many features contain null values ('NA'), but in most cases 'NA' is a meaningful label (e.g., "no garage") rather than truly missing data, so the definition of 'NA' in each feature was reviewed and the values were replaced accordingly.

Specifically, the following imputations were performed (a pandas sketch follows the list):

- Missing values in *Functional*, *Electrical*, *KitchenQual*, *Exterior1st*, *Exterior2nd*, *SaleType*, *MSZoning*, and *LotFrontage* were imputed with the mode of each feature.
- Missing values in *GarageType*, *GarageFinish*, *GarageQual*, and *GarageCond* were filled with *No Garage*.
- Missing values in *BsmtFinType2*, *BsmtExposure*, *BsmtFinType1*, *BsmtCond*, and *BsmtQual* were imputed with *No Basement*.
- Missing values in *BsmtFinSF1*, *BsmtFinSF2*, *BsmtUnfSF*, *TotalBsmtSF*, *BsmtFullBath*, and *BsmtHalfBath* were filled with zero.
- Missing values in *GarageYrBlt* were filled with zero.
- Missing values in *MasVnrType* were imputed with *None*.
- Missing values in *MasVnrArea* were filled with zero.
- The *Utilities* feature was dropped because nearly all records share the same value, so it carries no predictive information.
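A minimal pandas sketch of these rules, assuming a DataFrame `df` that holds the combined train/test features:

```python
import pandas as pd

def impute_missing(df: pd.DataFrame) -> pd.DataFrame:
    """Apply the imputation rules listed above to the combined features."""
    # Fill with the most frequent value of each column.
    for col in ["Functional", "Electrical", "KitchenQual", "Exterior1st",
                "Exterior2nd", "SaleType", "MSZoning", "LotFrontage"]:
        df[col] = df[col].fillna(df[col].mode()[0])
    # 'NA' here means the house has no garage or no basement.
    for col in ["GarageType", "GarageFinish", "GarageQual", "GarageCond"]:
        df[col] = df[col].fillna("No Garage")
    for col in ["BsmtFinType2", "BsmtExposure", "BsmtFinType1",
                "BsmtCond", "BsmtQual"]:
        df[col] = df[col].fillna("No Basement")
    # Numeric columns where missing means the area or count is zero.
    for col in ["BsmtFinSF1", "BsmtFinSF2", "BsmtUnfSF", "TotalBsmtSF",
                "BsmtFullBath", "BsmtHalfBath", "GarageYrBlt", "MasVnrArea"]:
        df[col] = df[col].fillna(0)
    df["MasVnrType"] = df["MasVnrType"].fillna("None")
    return df.drop(columns=["Utilities"])  # near-constant, no predictive value
```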

## FEATURE ENGINEERING

This section summarizes the feature engineering techniques applied in this project.

**Creating New Features**

After completing the imputation of missing values, I added the following three features to the data set, which provide additional information relevant to house prices (a pandas sketch follows the list):

- TotalSF = TotalBsmtSF + 1stFlrSF + 2ndFlrSF
- TotalBathrooms = FullBath + 0.5 x HalfBath + BsmtFullBath + 0.5 x BsmtHalfBath
- Total_porch_sf = OpenPorchSF + 3SsnPorch + EnclosedPorch + ScreenPorch + WoodDeckSF
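These three definitions translate directly to pandas; `df` is the combined feature DataFrame from the imputation sketch above:

```python
import pandas as pd

def add_features(df: pd.DataFrame) -> pd.DataFrame:
    """Create the three engineered features from existing columns."""
    df["TotalSF"] = df["TotalBsmtSF"] + df["1stFlrSF"] + df["2ndFlrSF"]
    df["TotalBathrooms"] = (df["FullBath"] + 0.5 * df["HalfBath"]
                            + df["BsmtFullBath"] + 0.5 * df["BsmtHalfBath"])
    df["Total_porch_sf"] = (df["OpenPorchSF"] + df["3SsnPorch"]
                            + df["EnclosedPorch"] + df["ScreenPorch"]
                            + df["WoodDeckSF"])
    return df
```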

**Encoding Labels of Ordinal Features**

The ordinal categorical features were identified, and their labels were encoded as integers between 0 and n_classes - 1.
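This matches what scikit-learn's LabelEncoder does. A sketch, with an illustrative (not necessarily complete) list of ordinal columns and `df` as above:

```python
from sklearn.preprocessing import LabelEncoder

# Illustrative subset of the quality/condition columns, which are ordinal.
ordinal_cols = ["ExterQual", "ExterCond", "BsmtQual", "BsmtCond",
                "HeatingQC", "KitchenQual", "FireplaceQu",
                "GarageQual", "GarageCond"]
for col in ordinal_cols:
    # Maps each label to an integer in [0, n_classes - 1].
    df[col] = LabelEncoder().fit_transform(df[col].astype(str))
```

Note that LabelEncoder assigns integers alphabetically; an explicit mapping (e.g., Po < Fa < TA < Gd < Ex) would preserve the actual quality ordering.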

**Log Transformation**

The skewness of the numeric features was reviewed; 59 features have an absolute skewness greater than 0.75. The distributions of these features were checked visually. For example, *LotFrontage* and *LotArea* are not normally distributed, as shown in Figure 5. To fix this, a Box-Cox transformation (with λ = 0, i.e., log(x + 1)) was applied to normalize the distributions. Figure 6 shows the distributions of *LotFrontage* and *LotArea* after the transformation.

Figure 5. Skewed distributions of (a) *LotFrontage* and (b) *LotArea*

Figure 6. Distributions of (a) *LotFrontage* and (b) *LotArea* after Box-Cox transformation
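A sketch of this step using SciPy's `boxcox1p`, where λ = 0 reduces exactly to log(1 + x); `df` is the feature DataFrame as before:

```python
import pandas as pd
from scipy.special import boxcox1p

def fix_skew(df: pd.DataFrame, threshold: float = 0.75) -> pd.DataFrame:
    """Box-Cox-transform numeric features with |skewness| above the threshold."""
    numeric_cols = df.select_dtypes(include="number").columns
    skew = df[numeric_cols].skew().abs()
    for col in skew[skew > threshold].index:
        df[col] = boxcox1p(df[col], 0)  # lmbda=0 is log(1 + x); needs x >= 0
    return df
```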

**Dummy Features (OneHot Encoding)**

Some machine learning algorithms (e.g., Lasso) cannot operate on label data directly because they require all input and output variables to be numeric. The remaining (nominal) categorical features were therefore one-hot encoded with pandas' get_dummies function. This created an additional 225 features, bringing the total number of features used for model training to 304.
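This step is a one-liner on the `df` built up in the previous sketches:

```python
import pandas as pd

# Expands each remaining categorical column into one 0/1 indicator per level.
df = pd.get_dummies(df)
print(df.shape)  # should show 304 feature columns after the preceding steps
```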

## MODELING

As mentioned in the Introduction, Lasso, ElasticNet, Random Forest, Gradient Boosting, and XGBoost models were trained, and the trained models were combined into a stacked model using the StackingCVRegressor class from the mlxtend package, which improved on the prediction results of the five individual models.

The optimal hyperparameters of each model were tuned using GridSearchCV from the scikit-learn package in Python, which trains a model for every combination in a specified hyperparameter grid and evaluates each with cross-validation. In addition, the features were scaled for the two regularized linear models (Lasso and ElasticNet) because the scale of the features affects the regularization penalty.
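A sketch of this setup for the Lasso model; the grid values and the choice of RobustScaler are illustrative assumptions, not the exact configuration used:

```python
from sklearn.linear_model import Lasso
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import RobustScaler

# Scaling inside the pipeline keeps the regularization penalty comparable
# across features and avoids leaking validation data into the scaler.
pipe = make_pipeline(RobustScaler(), Lasso(max_iter=10000))
param_grid = {"lasso__alpha": [0.0001, 0.0005, 0.001, 0.005]}  # illustrative

search = GridSearchCV(pipe, param_grid, cv=5,
                      scoring="neg_root_mean_squared_error")
# search.fit(X_train, y_train)   # X_train, y_train: the processed training data
# print(search.best_params_, -search.best_score_)
```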

**StackingCVRegressor**

Stacking is an ensemble learning technique that combines multiple regression models via a meta-model; I selected XGBoost as the meta-model. As the mlxtend documentation puts it, "The StackingCVRegressor extends the standard stacking algorithm using out-of-fold predictions to prepare the input data for the level-2 regressor." The basic operation of the StackingCVRegressor is illustrated in Figure 7.

Source: http://rasbt.github.io/mlxtend/user_guide/regressor/StackingCVRegressor/

Figure 7. Illustration of StackingCVRegressor algorithm
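A sketch of the stack's construction; the base models' hyperparameters are omitted, and in practice each would use the values found by GridSearchCV:

```python
from mlxtend.regressor import StackingCVRegressor
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import ElasticNet, Lasso
from xgboost import XGBRegressor

stack = StackingCVRegressor(
    regressors=(Lasso(), ElasticNet(), RandomForestRegressor(),
                GradientBoostingRegressor(), XGBRegressor()),
    meta_regressor=XGBRegressor(),  # XGBoost as the level-2 model
    cv=5)                           # out-of-fold predictions via 5-fold CV
# stack.fit(X_train.values, y_train.values)  # mlxtend expects numpy arrays
```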


**Prediction Performance**

The models with the optimal hyperparameters were evaluated by comparing each model's predictions against held-out validation data, using the root mean square error (RMSE), which measures the difference between the predicted and observed values of *SalePrice*. Lower RMSE scores indicate better performance.
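The metric itself is simple to compute with scikit-learn; a minimal helper:

```python
import numpy as np
from sklearn.metrics import mean_squared_error

def rmse(model, X_val, y_val):
    """RMSE of a fitted model on held-out validation data."""
    return np.sqrt(mean_squared_error(y_val, model.predict(X_val)))
```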

Below are plots of the predicted versus observed values for each model. As expected, the StackingCVRegressor (i.e., the stacked model) performed better than the five individual models, although its result was very similar to that of XGBoost.

Figure 8. Model performance (predicted vs. observed values): (a) Lasso; (b) ElasticNet; (c) Random Forest; (d) Gradient Boosting; (e) XGBoost; (f) Stacking

**Feature Importance**

Figure 9 illustrates the top 20 features in the Random Forest, Gradient Boosting, and XGBoost models. These results help identify which features matter most; all three models showed that *OverallQual* and *TotalSF* are the two most important features.

Figure 9. Feature importance: (a) Random Forest; (b) Gradient Boosting; (c) XGBoost
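Tree-based models in scikit-learn and XGBoost expose these scores through the `feature_importances_` attribute; a sketch of how such a ranking can be pulled from any fitted model:

```python
import pandas as pd

def top_features(model, feature_names, n=20):
    """Top-n feature importances of a fitted tree-based model."""
    return pd.Series(model.feature_importances_, index=feature_names).nlargest(n)

# e.g. top_features(fitted_rf, X_train.columns)  # fitted_rf is hypothetical
```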

## CONCLUSIONS

The objective of this project was to build machine learning models to predict house prices in Ames, IA. Two regularized linear models (Lasso and ElasticNet), as well as Random Forest, Gradient Boosting, XGBoost, and Stacking models, were used for prediction. As expected, the Stacking model outperformed all of the others. While the stacked model does not directly expose the importance of individual features, Random Forest, Gradient Boosting, and XGBoost all showed that *OverallQual* and *TotalSF* are the two most important features.

When the developed model is used to predict house prices, a buyer should expect an error of approximately $13,000 in the estimate. It should also be noted that overall quality (*OverallQual*) and the total square footage of the house (*TotalSF*, the sum of *TotalBsmtSF*, *1stFlrSF*, and *2ndFlrSF*) are the two most important features affecting house prices.

Finally, the submitted predictions ranked in the top 17% of the Kaggle competition (806th out of 4,932).