Predictive Modeling - House Prices Prediction

Evin
Posted on May 14, 2021

Source Code - https://github.com/Evinwlin/Kaggle_House_price

Introduction

Real estate has traditionally been considered one of the finest investments, since it provides both passive income and long-term appreciation as its value rises over time. When it comes to real estate, several factors come into play, and each of them affects pricing. Historical prices reveal both a property's past valuations and its growth in value. The goal of this study is to use machine learning techniques to investigate these variables and their impacts on property value, and to use that information to construct a predictive model for property prices.

Background

The data for this project comes from the Kaggle competition House Prices: Advanced Regression Techniques. The dataset includes 81 variables and 2,560 observations. The target variable is sale price; the remaining 80 variables are used to construct a predictive model, with the goal of studying which variables have potential impacts on property values and using that information to predict house prices.

Table of contents

  • Data Cleaning
  • Model Selection
  • Outlier Removal and Feature Selection
  • Model Performance

Data Cleaning -- Exploratory Data Analysis

In statistics, exploratory data analysis (EDA) is a common method for examining a dataset's characteristics; it helps uncover the underlying story the data is telling. The first step is to compute summary statistics and plot the target variable in greater detail (Figure 1.1). The target variable's mean and median are 180,921.2 and 163,000, respectively. The histogram shows that the majority of property values fall between roughly $100k and $200k, and the distribution is positively skewed (right-skewed), indicating a deviation from a normal distribution. This is confirmed by a skewness of 1.88 and a kurtosis of 6.54. The QQ-plot shows heavy tails, which indicates the presence of many outliers.

Note: A log transformation of sale price is needed, since the graph shows deviation from a normal distribution.
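The log transformation mentioned in the note can be sketched as follows; this is a minimal example with made-up prices standing in for the Kaggle `SalePrice` column:

```python
import numpy as np
import pandas as pd

# Toy stand-in for the Kaggle SalePrice column (values are illustrative)
df = pd.DataFrame({"SalePrice": [100000, 129500, 163000, 214000, 755000]})

# log1p (log(1 + x)) pulls in the long right tail of a positively
# skewed distribution, moving it closer to normal
df["SalePrice_log"] = np.log1p(df["SalePrice"])

print(df["SalePrice"].skew(), df["SalePrice_log"].skew())
```

After the transform, the skewness should drop substantially, which is why skew is worth printing before and after.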

(Figure 1.1)

Following that, an investigation of correlations was carried out (Figure 1.2). Examining the correlations between sale price and the predictor variables helps in comprehending the data and gives a sense of how the predictors could affect the target variable.

(Figure 1.2)

With further investigation, a total of 10 variables have a correlation coefficient above 0.5 with sale price, which is considered moderately correlated. Among these 10, overall quality and above-grade living area are considered highly correlated, with coefficients above 0.7. In practical terms, a one-unit increase in overall quality is associated with an increase in sale price.
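Filtering predictors by their correlation with the target, as described above, can be sketched with pandas; the tiny frame below is an illustrative stand-in for the real training data, which has 80 predictors:

```python
import pandas as pd

# Toy stand-in for the Kaggle training frame (column names match the
# real dataset, but the values here are made up)
df = pd.DataFrame({
    "OverallQual": [5, 6, 7, 8, 9, 4],
    "GrLivArea":   [1200, 1500, 1800, 2400, 3000, 900],
    "YearBuilt":   [1950, 1970, 1995, 2005, 2010, 1940],
    "SalePrice":   [120000, 150000, 200000, 280000, 400000, 90000],
})

# Correlation of every numeric predictor with the target
corr = df.corr(numeric_only=True)["SalePrice"].drop("SalePrice")

# Keep the "moderately correlated" variables (|r| > 0.5)
strong = corr[corr.abs() > 0.5].sort_values(ascending=False)
print(strong)
```

On the real data this is where the 10 variables above the 0.5 threshold come from.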

The EDA's final step is to look for missing values (Figure 1.3). The data contains 34 variables with varying degrees of missingness. PoolQC (pool quality), MiscFeature (miscellaneous feature), Alley, Fence, FireplaceQu (fireplace quality), and LotFrontage (linear feet of street connected to the property) are the variables with the most missing values.

(Figure1.3)

To remedy this, imputation is needed. For most missing values in categorical variables, the string "None" was imputed; for most continuous variables, a mix of zero, mode, and median imputation was used. A detailed summary is shown below (Figure 1.4).

(Figure1.4)
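The imputation strategy above can be sketched in pandas; the frame below is a minimal example, with one column per imputation style (the column choices mirror the dataset, but which real columns got which treatment is summarized in Figure 1.4, not here):

```python
import pandas as pd

# Tiny frame mimicking the missingness patterns in the data
df = pd.DataFrame({
    "PoolQC":      [None, "Gd", None],        # categorical: NA means "no pool"
    "GarageArea":  [400.0, None, 600.0],      # continuous: zero where no garage
    "LotFrontage": [60.0, None, 80.0],        # continuous: median imputation
    "Electrical":  ["SBrkr", None, "SBrkr"],  # categorical: mode imputation
})

df["PoolQC"] = df["PoolQC"].fillna("None")
df["GarageArea"] = df["GarageArea"].fillna(0)
df["LotFrontage"] = df["LotFrontage"].fillna(df["LotFrontage"].median())
df["Electrical"] = df["Electrical"].fillna(df["Electrical"].mode()[0])

# After imputation, no missing values should remain
assert df.isna().sum().sum() == 0
```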

Model Selection and performance metrics

The selected models combine linear and non-linear approaches: Linear Regression; penalized regression (Ridge, Lasso, and Elastic Net); Support Vector Regression; Random Forest Regression; and Gradient Boosting Regression (including XGBoost). The scoring metrics used to examine model performance are mean squared error (MSE) and root mean squared error (RMSE); both provide insight into model performance by assessing the residuals and their standard deviation.

Note – The goal of experimenting with various models is to see which one works best with this dataset. Linear regression and penalized regression are well suited to predicting continuous variables, and lasso regression also gives an idea of how important each variable is. Non-linear models such as SVR, random forest, gradient boosting, and XGBoost employ techniques like bagging and boosting, so it is worth testing whether they can outperform the linear models.
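The model comparison described above can be sketched with scikit-learn's cross-validation utilities; this minimal version uses synthetic data and only a few of the listed models, so the numbers are illustrative, not the project's actual scores:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Lasso, LinearRegression, Ridge
from sklearn.model_selection import cross_val_score

# Synthetic regression data standing in for the housing features
X, y = make_regression(n_samples=200, n_features=10, noise=10, random_state=0)

models = {
    "linear": LinearRegression(),
    "ridge": Ridge(alpha=1.0),
    "lasso": Lasso(alpha=0.1),
    "random_forest": RandomForestRegressor(n_estimators=100, random_state=0),
}

# Score each model by cross-validated RMSE (lower is better)
results = {}
for name, model in models.items():
    mse = -cross_val_score(model, X, y, cv=5,
                           scoring="neg_mean_squared_error")
    results[name] = float(np.sqrt(mse.mean()))
    print(f"{name:>13}: RMSE = {results[name]:.2f}")
```

Ranking models by a common cross-validated RMSE, as here, is what makes the comparison in Figure 1.7 meaningful.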

Linear regression as baseline model - Outlier Removal and Feature Selection

Linear regression is notoriously sensitive to outliers, because their presence pulls the fit away from the true underlying relationship. Applying linear regression as a baseline model can therefore help in detecting outliers. Several detection methods were used in this process: studentized residuals, leverage, DFFITS, Cook's Distance, and the Bonferroni one-step correction. Cook's Distance was the best of the bunch, as it reduced the baseline model's mean squared error and AIC the most (Figure 1.5).

(Figure1.5)

Feature selection is needed to improve the baseline model. Figure 1.6 illustrates that, of the four feature-selection methods tried, recursive feature elimination works best with the baseline model. The model's RMSE was further reduced to 0.0764, implying that the discrepancy between observed and predicted sale prices has shrunk even more.


Insights – Although RMSE decreases, the R-squared remains at 0.9587. Because the R-squared is on the higher end of the spectrum, there is a real possibility of overfitting.

(Figure 1.6)
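Recursive feature elimination, the winning method above, is available in scikit-learn; this is a minimal sketch on synthetic data (the feature counts are illustrative, not the project's):

```python
from sklearn.datasets import make_regression
from sklearn.feature_selection import RFE
from sklearn.linear_model import LinearRegression

# 10 features, only 5 of which are actually informative
X, y = make_regression(n_samples=200, n_features=10, n_informative=5,
                       noise=5, random_state=0)

# RFE repeatedly fits the estimator and drops the weakest feature
selector = RFE(LinearRegression(), n_features_to_select=5)
selector.fit(X, y)

print(selector.support_)   # boolean mask: True = feature kept
print(selector.ranking_)   # 1 = selected; higher = eliminated earlier
```

The kept mask is then used to subset the design matrix before refitting the baseline model.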

Model performance

When RMSE was used as the performance metric, XGBoost outperformed the other models (Figure 1.7). Although XGBoost has a promising accuracy score, it takes over 30 minutes to train and grid-search its hyperparameters; reducing model complexity would be needed to shorten that training time. Furthermore, the non-linear models did not uniformly beat the linear ones: XGBoost and gradient boosting outperformed all linear models, while SVR and random forest underperformed.

(Figure1.7)
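The hyperparameter grid search that dominates the training time can be sketched as follows; this version uses scikit-learn's `GradientBoostingRegressor` as a stand-in for XGBoost (the search pattern is the same), and the grid values are illustrative:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV

X, y = make_regression(n_samples=200, n_features=10, noise=10, random_state=0)

# A tiny illustrative grid; the real search over a larger grid is what
# takes 30+ minutes, since cost grows with (combinations x CV folds)
param_grid = {
    "n_estimators": [100, 300],
    "max_depth": [2, 3],
    "learning_rate": [0.05, 0.1],
}

search = GridSearchCV(GradientBoostingRegressor(random_state=0),
                      param_grid, cv=3,
                      scoring="neg_root_mean_squared_error")
search.fit(X, y)
print(search.best_params_)
```

Shrinking the grid, or switching to a randomized search, is the usual way to trade a little accuracy for much shorter tuning time.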

Feature importance

One of the goals of this study is to find out which factors influence a house's sale price. The best way to get that information is to use feature importance from models like lasso, random forest, and gradient boosting, which shows how each predictor contributes to the prediction of the target variable and ranks the variables by significance. In random forest regression, predictors are ranked by the proportion of the residual sum of squares that splitting on each variable reduces. The higher a variable's rank, the more influence it has on forecasting sale price (Figure 1.8).

 The top three variables are listed below:

  1. TotalSF – Total square footage of the house, an engineered variable combining total basement, first-floor, and second-floor square footage.
  2. OverallQual – Ranking of overall quality of a house
  3. GrlivArea - Above grade (ground) living area square feet

 

(Figure1.8)
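Extracting the ranking behind Figure 1.8 is straightforward with a fitted random forest; this sketch uses synthetic data, with the real dataset's column names attached purely for illustration:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

# Synthetic data; the names below are illustrative stand-ins
X, y = make_regression(n_samples=300, n_features=5, n_informative=2,
                       random_state=0)
names = ["TotalSF", "OverallQual", "GrLivArea", "YearBuilt", "LotArea"]

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Importances sum to 1; sort descending to get the ranking
for name, imp in sorted(zip(names, rf.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name:>12}: {imp:.3f}")
```

On the real model, the top of this list is where TotalSF, OverallQual, and GrLivArea come from.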

Conclusion

To summarize, leveraging regression techniques to predict house prices works well, particularly XGBoost regression with additional model tuning. That said, when thinking about home improvements to raise a house's potential value and obtain a better return, consider the following:

  1. Increasing total square footage and/or livable above-ground square footage by building additions or fixtures such as a deck or balcony. (TotalSF, GrLivArea)
  2. Improving the condition of the house by remodeling, renovation, or landscaping. (OverallQual)

Future Work

Extensive hyper-parameter tuning, better feature engineering, and the collection and incorporation of more data could all help improve model performance.

Here are a few instances:

  1. Employ feature engineering techniques, such as polynomial features, to develop new variables.
  2. Collect additional data to add to the current models.
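The polynomial-feature idea in the first item can be sketched with scikit-learn; the two input values are arbitrary placeholders for a pair of scaled housing features:

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

# Two placeholder feature values, e.g. scaled GrLivArea and OverallQual
X = np.array([[2.0, 3.0]])

# Degree-2 expansion adds each square and the pairwise interaction:
# [x1, x2] -> [x1, x2, x1^2, x1*x2, x2^2]
poly = PolynomialFeatures(degree=2, include_bias=False)
print(poly.fit_transform(X))  # → [[2. 3. 4. 6. 9.]]
```

These derived columns then feed the existing models as extra candidate predictors.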

About Author

Evin


With a bachelor's degree in Finance and a bachelor's degree in Statistics, Wei(Evin) Lin is a certified data scientist. He has more than two years of finance and accounting internship experience in the area of sales and trade,...
