House Price Prediction using Machine Learning Techniques

Posted on Oct 11, 2019

 

INTRODUCTION

This project was conducted to predict house prices in the city of Ames, Iowa using machine learning regression methods. The data set was collected from a Kaggle competition (House Prices: Advanced Regression Techniques), and its 80 features were carefully reviewed and processed for more accurate house price prediction. The project included an in-depth EDA, missing data imputation, feature engineering, and model building. Five machine learning regression models, Lasso, ElasticNet, Random Forest, Gradient Boosting, and XGBoost, were trained and applied to predict house prices. In addition, the trained models were combined into a stacked model to maximize prediction accuracy.

DATA EXPLORATION

The project began by exploring the features of the data set, which include the presence of certain amenities, the number of rooms and garages, the size of the various spaces, house condition, age, and so on. Some representative examples of the data exploration are described below.

Distribution of Target Variable

First, the distribution of the target variable, 'SalePrice', was examined. As illustrated in Figure 1-(a), the majority of house prices range between $100,000 and $200,000, with a long tail stretching up to about $800,000. The distribution is right-skewed, which violates the normality assumption underlying linear models. To normalize the distribution, a Box-Cox transformation was applied; the result is shown in Figure 1-(b).

(a) Original SalePrice distribution   

(b) After Box-Cox transformation 

 

Figure 1. SalePrice distribution
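A minimal sketch of this target transformation is shown below, assuming the Kaggle training file is available as train.csv (the file path is an assumption); scipy's stats.boxcox fits the transformation parameter automatically.

import pandas as pd
from scipy import stats

train = pd.read_csv("train.csv")  # Kaggle "House Prices" training data (path is an assumption)

print("Skewness before:", train["SalePrice"].skew())

# Box-Cox requires strictly positive values, which SalePrice satisfies;
# stats.boxcox returns the transformed array and the fitted lambda.
sale_price_bc, fitted_lambda = stats.boxcox(train["SalePrice"])
print("Skewness after:", pd.Series(sale_price_bc).skew(), "(lambda =", round(fitted_lambda, 3), ")")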

 

Numerical Features Correlation

The types of features were reviewed and classified into numeric, ordinal, and nominal features. There are 38 numeric features in the training data set. To aid visual understanding of the relationship between the numeric features and the target variable, a correlation matrix was created. The top nine features, OverallQual, GrLivArea, GarageCars, GarageArea, TotalBsmtSF, 1stFlrSF, FullBath, TotRmsAbvGrd, and YearBuilt, are strongly correlated with SalePrice.

Figure 2. Correlation of Numeric Features
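A sketch of how such a correlation matrix can be produced with pandas and seaborn is shown below; the column names follow the Kaggle data set, and the plotting details are assumptions.

import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

train = pd.read_csv("train.csv")  # path is an assumption

# Correlations of the numeric features, sorted by strength of association with SalePrice;
# the top 10 entries are SalePrice itself plus the nine most correlated features.
corr = train.corr(numeric_only=True)
top_features = corr["SalePrice"].abs().sort_values(ascending=False).head(10).index

# Heatmap of the correlations among those top features
sns.heatmap(train[top_features].corr(), annot=True, fmt=".2f", cmap="coolwarm")
plt.title("Correlation of top numeric features with SalePrice")
plt.show()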

 

DATA PRE-PROCESSING

In this section, the following two tasks were conducted.

  1. Cleaning outliers
  2. Imputing missing data

Cleaning Outliers

To remedy outliers, I decided to manually remove a few extreme cases for a better fit. A scatter plot showing the relationship between SalePrice and GrLivArea was created and examined; GrLivArea has the highest correlation with SalePrice among the continuous numeric features. In Figure 3-(a), there are two extreme outliers on the bottom right of the plot, houses with very large living areas but unusually low sale prices. They were therefore removed from the data set, as illustrated in Figure 3-(b).

(a) Before outlier removal (b) After outlier removal

Figure 3. Outlier Removal
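A minimal sketch of this filter is shown below; the cutoff values are illustrative assumptions based on the scatter plot, not the exact values used in the project.

import pandas as pd

train = pd.read_csv("train.csv")  # path is an assumption

# Drop the extreme GrLivArea outliers visible in Figure 3-(a):
# very large living area combined with an unusually low sale price.
outliers = train[(train["GrLivArea"] > 4000) & (train["SalePrice"] < 300000)].index
train = train.drop(outliers)
print(f"Removed {len(outliers)} outliers; {len(train)} rows remain")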

Imputing Missing Data 

Missing values in the data set were examined. Figure 4 illustrates the frequency of missing values in the training and test data sets.

 

Figure 4. Frequency of missing values in the training and test data

As a first step of missing data imputation, the description of each feature was carefully reviewed. Three numeric features, MSSubClass, YrSold, and MoSold, were then converted into categorical variables. Some features contain many 'NA' values; the meaning of 'NA' for each of these features was reviewed, and the values were replaced accordingly.

In addition, the following data imputations were performed; a condensed code sketch follows the list.

  1. Missing values in Functional, Electrical, KitchenQual, Exterior1st, Exterior2nd, SaleType, MSZoning, and LotFrontage were imputed with the mode of each feature.
  2. Missing values in GarageType, GarageFinish, GarageQual, and GarageCond were filled with zero.
  3. Missing values in BsmtFinType2, BsmtExposure, BsmtFinType1, BsmtCond, and BsmtQual were imputed with No Basement.
  4. Missing values in BsmtFinSF1, BsmtFinSF2, BsmtUnfSF, TotalBsmtSF, BsmtFullBath, and BsmtHalfBath were filled with zero.
  5. Missing values in GarageYrBlt were imputed with No Garage.
  6. Missing values in MasVnrType were imputed with None.
  7. Missing values in MasVnrArea were filled with zero.
  8. The Utilities feature was dropped because it would not help the prediction.
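Below is a condensed sketch of these imputation rules; it covers a representative subset of the features listed above, and applying the same steps to a combined train/test frame is an assumption.

import pandas as pd

df = pd.read_csv("train.csv")  # the same steps were applied to the test set as well (path is an assumption)

# Convert numeric codes that are really categories
for col in ["MSSubClass", "YrSold", "MoSold"]:
    df[col] = df[col].astype(str)

# Mode imputation (item 1 above)
for col in ["Functional", "Electrical", "KitchenQual", "Exterior1st",
            "Exterior2nd", "SaleType", "MSZoning", "LotFrontage"]:
    df[col] = df[col].fillna(df[col].mode()[0])

# Zero imputation for basement size/count features (item 4 above)
for col in ["BsmtFinSF1", "BsmtFinSF2", "BsmtUnfSF", "TotalBsmtSF",
            "BsmtFullBath", "BsmtHalfBath"]:
    df[col] = df[col].fillna(0)

# 'No Basement' / 'None' / zero labels for the remaining features (items 3, 6, and 7 above)
for col in ["BsmtFinType2", "BsmtExposure", "BsmtFinType1", "BsmtCond", "BsmtQual"]:
    df[col] = df[col].fillna("No Basement")
df["MasVnrType"] = df["MasVnrType"].fillna("None")
df["MasVnrArea"] = df["MasVnrArea"].fillna(0)

# Drop the uninformative Utilities column (item 8 above)
df = df.drop(columns=["Utilities"])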

 

FEATURE ENGINEERING

This section summarizes the feature engineering techniques applied in this project.

Creating New Features

After imputing the missing values, I decided to add the following three new features to the data set, which provide additional size-related information relevant to house prices (a code sketch follows the list).

  1. TotalSF = TotalBsmtSF + 1stFlrSF + 2ndFlrSF
  2. TotalBathrooms = FullBath + 0.5 x HalfBath + BsmtFullBath + 0.5 x BsmtHalfBath
  3. Total_porch_sf = OpenPorchSF + 3SsnPorch + EnclosedPorch + ScreenPorch + WoodDeckSF
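A sketch of these derived features, using the column names from the Kaggle data set (reading train.csv directly here is a simplifying assumption):

import pandas as pd

df = pd.read_csv("train.csv")  # path is an assumption

df["TotalSF"] = df["TotalBsmtSF"] + df["1stFlrSF"] + df["2ndFlrSF"]
df["TotalBathrooms"] = (df["FullBath"] + 0.5 * df["HalfBath"]
                        + df["BsmtFullBath"] + 0.5 * df["BsmtHalfBath"])
df["Total_porch_sf"] = (df["OpenPorchSF"] + df["3SsnPorch"] + df["EnclosedPorch"]
                        + df["ScreenPorch"] + df["WoodDeckSF"])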

Encoding Labels of Ordinal Features

The ordinal categorical features were identified, and their labels were encoded with integer values between 0 and n_classes-1.
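The post does not state which encoder was used; below is a minimal sketch using an explicit quality-scale mapping for a few of the ordinal features, which yields integer codes in the same 0 to n_classes-1 range. The specific feature list and mapping are assumptions.

import pandas as pd

df = pd.read_csv("train.csv")  # path is an assumption

# Quality-type features share the same ordered scale in the data description
quality_scale = {"Po": 0, "Fa": 1, "TA": 2, "Gd": 3, "Ex": 4}
for col in ["ExterQual", "ExterCond", "HeatingQC", "KitchenQual"]:
    df[col] = df[col].map(quality_scale)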

Log Transformation

The skewness of the numeric features was reviewed; 59 features have an absolute skewness greater than 0.75. The distributions of these features were then visually checked. For example, LotFrontage and LotArea are not normally distributed, as shown in Figure 5. To fix this, a Box-Cox transformation (which reduces to log(x+1) when λ = 0) was applied to normalize the distributions. Figure 6 shows the distributions of LotFrontage and LotArea after the transformation.

(a) LotFrontage (b) LotArea

Figure 5. Skewed distribution of LotFrontage and LotArea

 

(a) LotFrontage (b) LotArea

Figure 6. Distribution of LotFrontage and LotArea after Box-Cox transformation
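A sketch of the skewness check and transformation is shown below; using scipy's boxcox1p with λ = 0 (which equals log(1+x)) and the 0.75 threshold follows the text, while the file path and column selection are assumptions.

import numpy as np
import pandas as pd
from scipy.special import boxcox1p
from scipy.stats import skew

df = pd.read_csv("train.csv")  # path is an assumption

# Compute skewness of each numeric feature, excluding the id and target columns
numeric_cols = df.select_dtypes(include=[np.number]).columns.drop(["Id", "SalePrice"])
skewness = df[numeric_cols].apply(lambda x: skew(x.dropna()))
skewed_cols = skewness[skewness.abs() > 0.75].index

# boxcox1p(x, 0) is exactly log(1 + x)
for col in skewed_cols:
    df[col] = boxcox1p(df[col], 0)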

Dummy Features (OneHot Encoding)

Because some machine learning algorithms (e.g., Lasso) cannot operate on categorical labels directly and require all input and output variables to be numeric, the get_dummies function of pandas was applied to the remaining categorical features. This created an additional 225 features, bringing the total number of features used for model training to 304.
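A minimal sketch of this one-hot step; in the project the training and test sets were presumably encoded together so that both share the same dummy columns (an assumption here).

import pandas as pd

df = pd.read_csv("train.csv")  # path is an assumption

# One-hot encode the remaining categorical (object-typed) columns
df = pd.get_dummies(df)
print(df.shape)  # the column count grows as each category becomes its own indicator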

 

MODELING

As mentioned in the Introduction, Lasso, ElasticNet, Random Forest, Gradient Boosting, and XGBoost models were trained, and the trained models were used to create a stacked model. I trained the stacked model using the StackingCVRegressor class from the mlxtend package, which improved on the predictions of the five individual models.

The optimal hyperparameters of each model were tuned using GridSearchCV from the scikit-learn package in Python, which trains a model with cross-validation for every combination of hyperparameters in a specified grid. In addition, the features were normalized for the two regularized linear models (i.e., Lasso and ElasticNet) because the scale of the features affects the regularization penalty.
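A sketch of the tuning set-up for one of the regularized models is shown below; the parameter grid, the use of RobustScaler inside a Pipeline, and the placeholder data are assumptions, since the exact ranges are not given in the post.

import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import RobustScaler

# Placeholders for the prepared feature matrix and transformed target
X = np.random.rand(100, 10)
y = np.random.rand(100)

# Scaling is included in the pipeline so that the regularization penalty is applied fairly
pipeline = make_pipeline(RobustScaler(), Lasso(max_iter=10000))
param_grid = {"lasso__alpha": [0.0001, 0.0005, 0.001, 0.005, 0.01]}  # illustrative grid

search = GridSearchCV(pipeline, param_grid, cv=5,
                      scoring="neg_root_mean_squared_error")
search.fit(X, y)
print(search.best_params_, -search.best_score_)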

StackingCVRegressor

Stacking is an ensemble learning technique that combines multiple regression models via a meta-model; I selected XGBoost as the meta-model. According to the mlxtend documentation, "The StackingCVRegressor extends the standard stacking algorithm using out-of-fold predictions to prepare the input data for the level-2 regressor." The basic operation of the StackingCVRegressor is illustrated in Figure 7.

 

Source: http://rasbt.github.io/mlxtend/user_guide/regressor/StackingCVRegressor/

Figure 7. Illustration of StackingCVRegressor algorithm
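A sketch of the stacking set-up with mlxtend is shown below; the individual hyperparameters and the placeholder data are assumptions, not the tuned values from the project.

import numpy as np
from mlxtend.regressor import StackingCVRegressor
from sklearn.linear_model import Lasso, ElasticNet
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from xgboost import XGBRegressor

# Placeholders for the prepared features and transformed target
X = np.random.rand(200, 10)
y = np.random.rand(200)

lasso = Lasso(alpha=0.0005, max_iter=10000)
enet = ElasticNet(alpha=0.0005, l1_ratio=0.9, max_iter=10000)
rf = RandomForestRegressor(n_estimators=300, random_state=42)
gbr = GradientBoostingRegressor(n_estimators=300, random_state=42)
xgb = XGBRegressor(n_estimators=300, learning_rate=0.05, random_state=42)

# Out-of-fold predictions from the five base models feed the XGBoost meta-model
stack = StackingCVRegressor(regressors=(lasso, enet, rf, gbr, xgb),
                            meta_regressor=XGBRegressor(n_estimators=300, random_state=42))
stack.fit(X, y)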

 

Prediction Performance

The models with the optimal hyperparameters were evaluated by comparing each model's predictions with validation data, using the root mean square error (RMSE), which measures the differences between the predicted and observed values of SalePrice. Lower RMSE scores indicate better performance.
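A sketch of this evaluation step is shown below; the hold-out split and placeholder data are assumptions about how the validation set was obtained.

import numpy as np
from sklearn.linear_model import Lasso
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Placeholders for the prepared features and transformed target
X = np.random.rand(200, 10)
y = np.random.rand(200)

X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42)

model = Lasso(alpha=0.0005, max_iter=10000).fit(X_train, y_train)
rmse = np.sqrt(mean_squared_error(y_val, model.predict(X_val)))
print(f"Validation RMSE: {rmse:.4f}")  # lower is better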

Below are graphs of the predicted versus observed values for each model. As expected, the StackingCVRegressor (i.e., the stacked model) performed better than the other five models, although its result was very similar to that of XGBoost.

 

(a) Lasso (b) ElasticNet
(c) Random Forest (d) Gradient Boosting
(e) XGBoost (f) Stacking

Figure 8. Model Performance 

 

Feature Importance

Figure 9 illustrates the top 20 features of the Random Forest, Gradient Boosting, and XGBoost models. These results help identify which features are most significant: all three models indicate that OverallQual and TotalSF are the two most important features.

(a) Random Forest (b) Gradient Boosting
 
(c) XGBoost  

Figure 9. Feature Importance of Random Forest, Gradient Boosting, and XGBoost
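A sketch of how the top importances can be extracted from one of the tree-based models is shown below; the feature names and model settings are placeholders, not the project's actual design matrix.

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.ensemble import RandomForestRegressor

# Placeholder design matrix with named columns
X = pd.DataFrame(np.random.rand(200, 5),
                 columns=["OverallQual", "TotalSF", "GrLivArea", "GarageCars", "YearBuilt"])
y = np.random.rand(200)

rf = RandomForestRegressor(n_estimators=300, random_state=42).fit(X, y)

# Rank features by importance and plot the top 20 as a horizontal bar chart
importances = pd.Series(rf.feature_importances_, index=X.columns)
importances.nlargest(20).sort_values().plot(kind="barh")
plt.title("Top feature importances (Random Forest)")
plt.show()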

 

CONCLUSIONS

The objective of this project was to build machine learning models to predict house prices in Ames, IA. Two regularized linear models (i.e., Lasso and ElasticNet), as well as Random Forest, Gradient Boosting, XGBoost, and Stacking models, were used for prediction. As expected, the Stacking model outperformed all the other models. While the stacked model does not directly expose the importance of individual features, the Random Forest, Gradient Boosting, and XGBoost models all found OverallQual and TotalSF to be the two most important features.

When the developed model is used to predict house prices, buyers should expect an error of approximately $13,000 in the estimate. It should also be noted that overall quality (OverallQual) and total square footage (the sum of the basement (TotalBsmtSF), first-floor (1stFlrSF), and second-floor (2ndFlrSF) areas) are the two most important features affecting house prices.

Finally, the submitted predictions ranked in the top 17% (806th out of 4,932) of the Kaggle leaderboard. The code developed for this project can be found on GitHub.

 

 

 
