Predictive Modeling - House Prices Prediction

Source Code - https://github.com/Evinwlin/Kaggle_House_price
The skills the author demonstrated here can be learned by taking the Data Science with Machine Learning bootcamp with NYC Data Science Academy.
Introduction to House Prices and Real Estate
Real estate has traditionally been considered a sound investment because it can provide both passive income and long-term appreciation as property values rise over time. Many factors come into play in real estate, and each of them affects pricing. Historical house prices reveal both a property's past valuations and its growth in value. The goal of this research is to use machine learning techniques to investigate these variables and their impacts on property value, and to use that information to construct a predictive model for property prices.
Background
The data for this project comes from a Kaggle competition called House Prices: Advanced Regression Techniques. The dataset includes 81 variables and 2560 observations. The target variable is sale price, and the remaining 80 variables are used to construct a predictive model, with the goal of studying which variables potentially impact property values and using that information to predict house prices.
Table of Contents
- Data Cleaning
- Model Selection
- Outlier Removal and Feature Selection
- Model Performance
Data Cleaning - Exploratory Data Analysis of House Prices - Pt. 1
In statistics, exploratory data analysis is a common method for examining data characteristics; it helps uncover the underlying story the data is telling. The first step is to compute a few summary statistics and plots to explore the target variable in greater detail (Figure 1.1).
The target variable's mean and median are 180,921.2 and 163,000, respectively, as shown in the graph below. The histogram shows that the majority of property values fall roughly between $100k and $300k, and the distribution is positively (right) skewed, indicating some deviation from a normal distribution. This is also reflected in a skewness value of 1.88 and a kurtosis value of 6.54. The QQ-plot shows heavy tails, which indicates the presence of many outliers.
Note: A log transformation of sale price is needed, since the graph shows a deviation from a normal distribution.
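As a minimal sketch (assuming the Kaggle training file is available locally as train.csv and column names follow the competition's data dictionary), the distribution checks and log transformation described above could look like this:

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy import stats

# Load the Kaggle training data (file path is an assumption).
train = pd.read_csv("train.csv")

# Summary statistics for the target variable.
print(train["SalePrice"].describe())            # mean ~180,921, median ~163,000
print("Skewness:", train["SalePrice"].skew())   # ~1.88 (right skewed)
print("Kurtosis:", train["SalePrice"].kurt())   # ~6.54 (heavy tails)

# Histogram and QQ-plot against a normal distribution (as in Figure 1.1).
fig, axes = plt.subplots(1, 2, figsize=(10, 4))
train["SalePrice"].hist(bins=50, ax=axes[0])
stats.probplot(train["SalePrice"], plot=axes[1])
plt.show()

# Log transformation to pull the distribution closer to normal.
log_price = np.log1p(train["SalePrice"])
print("Skewness after log:", log_price.skew())
```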
Following that, an investigation of correlations was carried out (Figure 1.2). Examining the correlations between sale price and the predictor variables helps in comprehending the data and gives a sense of how the predictor variables could affect the target variable.

(Figure 1.2)
With further investigation, there are a total of 10 variables with a correlation coefficient above 0.5, which is considered moderately correlated with sale price. Among these 10 variables, overall quality and above-grade living area are considered highly correlated, with correlation coefficients above 0.7. Put simply, a one-unit increase in overall quality is associated with an increase in sale price.
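This correlation screening can be reproduced with a few lines of pandas (thresholds mirror the text; a recent pandas version with the numeric_only argument is assumed):

```python
# Correlation of each numeric predictor with SalePrice.
corr = (
    train.corr(numeric_only=True)["SalePrice"]
    .drop("SalePrice")
    .sort_values(ascending=False)
)

moderate = corr[corr.abs() > 0.5]   # moderately correlated (about 10 variables)
strong = corr[corr.abs() > 0.7]     # e.g. OverallQual and GrLivArea
print(moderate)
print(strong)
```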
Data Cleaning - Exploratory Data Analysis of House Prices - Pt. 2
The EDA's final step is to look for missing values (Figure 1.3). The data contains 34 variables with varying degrees of missingness. PoolQC (pool quality), MiscFeature (miscellaneous feature), Alley, Fence, FireplaceQu (fireplace quality), and LotFrontage (linear feet of street connected to the property) are the variables with the most missing values.

(Figure 1.3)
To remedy this issue, imputation is needed. For most of the missing values in categorical variables, the value "None" was imputed, and for most of the continuous variables, a mix of zero, mode, and median imputation was performed. A detailed summary is shown below (Figure 1.4).

(Figure 1.4)
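A minimal imputation sketch is shown below; the column groupings here are illustrative subsets of the summary in Figure 1.4, not the author's full list:

```python
# Categorical features where NA means the feature is absent -> fill with "None".
none_cols = ["PoolQC", "MiscFeature", "Alley", "Fence", "FireplaceQu"]
train[none_cols] = train[none_cols].fillna("None")

# Continuous features: zero where missing means "no such area" (assumed examples),
# mode for a categorical-like column, median elsewhere.
for col in ["TotalBsmtSF", "GarageArea"]:
    train[col] = train[col].fillna(0)
train["Electrical"] = train["Electrical"].fillna(train["Electrical"].mode()[0])

# LotFrontage: median imputation within each neighborhood.
train["LotFrontage"] = train.groupby("Neighborhood")["LotFrontage"].transform(
    lambda s: s.fillna(s.median())
)
```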
Model Selection and Performance Metrics for Predicting House Prices
The selected models are a combination of linear and non-linear models: Linear Regression, penalized regression (Ridge, Lasso, and Elastic-Net), Support Vector Regression, Random Forest Regression, Gradient Boosting Regression, and XGBoost. The scoring metrics used for examining model performance are mean squared error (MSE) and root mean squared error (RMSE); both summarize the size of the residuals, with RMSE expressed in the same units as the target.
Note – The goal of experimenting with various models is to see which one works best with the dataset. Linear regression and penalized regression are well suited to predicting a continuous target, and lasso regression gives an idea of how important each variable is. Among the non-linear models, random forest employs bagging, gradient boosting and XGBoost employ boosting, and SVR relies on kernel methods. It is worth testing whether these non-linear models can outperform the linear ones.
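A cross-validated comparison along these lines might look like the sketch below; the hyperparameter values are placeholders rather than the author's tuned settings, and X / y are simple numeric stand-ins for the cleaned feature matrix and log-transformed sale price:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge, Lasso, ElasticNet
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.model_selection import cross_val_score
from xgboost import XGBRegressor

# Numeric-only stand-ins derived from the cleaned training frame above.
X = train.select_dtypes(include="number").drop(columns="SalePrice").fillna(0)
y = np.log1p(train["SalePrice"])

models = {
    "Linear":        LinearRegression(),
    "Ridge":         Ridge(alpha=10),
    "Lasso":         Lasso(alpha=0.0005),
    "ElasticNet":    ElasticNet(alpha=0.0005, l1_ratio=0.5),
    "SVR":           SVR(C=10, epsilon=0.01),
    "RandomForest":  RandomForestRegressor(n_estimators=300, random_state=42),
    "GradientBoost": GradientBoostingRegressor(random_state=42),
    "XGBoost":       XGBRegressor(n_estimators=500, learning_rate=0.05, random_state=42),
}

for name, model in models.items():
    # scikit-learn returns negative MSE, so flip the sign before taking the root.
    mse = -cross_val_score(model, X, y, scoring="neg_mean_squared_error", cv=5)
    print(f"{name}: CV RMSE = {np.sqrt(mse).mean():.4f}")
```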
Linear Regression as Baseline Model - Outlier Removal and Feature Selection
Linear regression is notorious for being sensitive to outliers, because their presence pulls the fit away from the true underlying relationship. Thus, applying linear regression as a baseline model can help in the detection of outliers. Several detection methods were used in this process: studentized residuals, leverage, DFFITS, Cook's Distance, and Bonferroni's one-step correction. Cook's Distance was the best of the bunch, as it reduced the baseline model's mean squared error and AIC the most (Figure 1.5).

(Figure 1.5)
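As a sketch (not the author's exact implementation), Cook's Distance can be computed from a statsmodels OLS fit and used to filter influential points, with the common 4 / n rule of thumb as the cutoff:

```python
import statsmodels.api as sm

# Baseline OLS fit on the features assembled above.
ols = sm.OLS(y, sm.add_constant(X)).fit()

# Cook's Distance for every observation; points above 4 / n are flagged as influential.
cooks_d = ols.get_influence().cooks_distance[0]
keep = cooks_d < 4 / len(y)

X_clean, y_clean = X[keep], y[keep]
print(f"Removed {int((~keep).sum())} influential observations")
print("New AIC:", sm.OLS(y_clean, sm.add_constant(X_clean)).fit().aic)
```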
Feature selection is needed to improve the baseline model. Figure 1.6 illustrates that recursive feature elimination, out of the four feature selection methods tried, works best with the baseline model. The model's RMSE was further reduced to 0.0764, implying that the discrepancy between observed and predicted sale prices has shrunk even more.
Insights – Although the RMSE decreases, the R-squared remains at 0.9587. Because the R-squared is at the higher end of the spectrum, there is a real possibility of overfitting.

(Figure 1.6)
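A recursive-feature-elimination pass over the baseline model could be sketched with scikit-learn's RFECV; the cross-validation settings below are assumptions:

```python
from sklearn.feature_selection import RFECV
from sklearn.linear_model import LinearRegression

# Recursively drop the weakest feature and keep the subset with the best CV score.
rfe = RFECV(
    estimator=LinearRegression(),
    step=1,
    cv=5,
    scoring="neg_mean_squared_error",
)
rfe.fit(X_clean, y_clean)

selected = X_clean.columns[rfe.support_]
print(f"{rfe.n_features_} features kept out of {X_clean.shape[1]}")
```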
Model Performance
When RMSE was used as the performance metric, XGBoost outperformed the other models (Figure 1.7). Although XGBoost has a promising accuracy score, it takes over 30 minutes to train the model and grid-search its hyperparameters; reducing model complexity would be needed to improve training time. Furthermore, the boosting-based non-linear models appear to outperform the linear models: XGBoost and gradient boosting performed better than all of the linear models, although SVR and random forest underperformed.

(Figure 1.7)
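A hyperparameter grid search of the kind described above might look like this sketch; the parameter grid is illustrative (a much larger, exhaustive grid is what drives the long runtime):

```python
from sklearn.model_selection import GridSearchCV
from xgboost import XGBRegressor

param_grid = {
    "n_estimators": [500, 1000],
    "learning_rate": [0.01, 0.05],
    "max_depth": [3, 4],
    "subsample": [0.7, 1.0],
}

search = GridSearchCV(
    estimator=XGBRegressor(random_state=42),
    param_grid=param_grid,
    scoring="neg_mean_squared_error",
    cv=5,
    n_jobs=-1,
)
search.fit(X_clean, y_clean)

print("Best parameters:", search.best_params_)
print("Best CV RMSE:", (-search.best_score_) ** 0.5)
```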
Feature Importance of House Prices
One of the goals of this study is to find out what factors influence the sale price of a house. The best way to get that information is to use feature importance from models such as lasso, random forest, and gradient boosting. Feature importance shows how much each predictor variable contributes to the prediction of the target variable and ranks the variables by significance.
In random forest regression, the predictor variables are ranked by the proportion of the residual sum of squares that each variable reduces. The higher a variable's rank, the more influence it has on forecasting sale price (Figure 1.8).
The top three variables are listed below:
- Totalsf – Total square footage of a house, an engineered variable combining total basement, first-floor, and second-floor square footage.
- OverallQual – Ranking of overall quality of a house
- GrlivArea - Above grade (ground) living area square feet

(Figure 1.8)
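The ranking in Figure 1.8 can be reproduced in spirit from scikit-learn's impurity-based importances, which measure each feature's share of the total reduction in squared error across all splits (the engineered Totalsf column is assumed to exist in the cleaned feature matrix):

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rf = RandomForestRegressor(n_estimators=300, random_state=42)
rf.fit(X_clean, y_clean)

# Impurity-based importances sum to 1; higher means a larger share of the
# variance reduction, e.g. Totalsf, OverallQual, and GrLivArea in Figure 1.8.
importances = pd.Series(rf.feature_importances_, index=X_clean.columns)
print(importances.sort_values(ascending=False).head(10))
```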
Conclusion to Predicting House Prices
To summarize, leveraging regression techniques to predict house prices works well, particularly XGBoost regression with additional model tuning. That said, when thinking about home improvements to raise a house's potential value and obtain a better return, consider the following:
- Increasing total square footage and/or livable above-ground square footage by building additional parts or fixtures, such as a deck or balcony. (Totalsf, GrlivArea)
- Improving the condition of the house through remodeling, renovation, or landscaping. (OverallQual)
Future Work on Predicting House Prices
Extensive hyper-parameter tuning, better feature engineering, and the collection and incorporation of more data could all help improve model performance.
Here are a few instances:
- Employ feature engineering techniques, such as polynomial features, to develop new predictors (see the sketch after this list).
- Collect additional data to add to the current models.
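For example, polynomial feature expansion over a few strong predictors could be sketched as follows; the chosen columns are assumptions rather than the author's final design, and a recent scikit-learn version is assumed:

```python
import pandas as pd
from sklearn.preprocessing import PolynomialFeatures

base = X_clean[["OverallQual", "GrLivArea", "TotalBsmtSF"]]
poly = PolynomialFeatures(degree=2, include_bias=False)

# Adds squared terms and pairwise interactions as new candidate features.
expanded = pd.DataFrame(
    poly.fit_transform(base),
    columns=poly.get_feature_names_out(base.columns),
    index=base.index,
)
print(expanded.columns.tolist())
```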