Predictive Modeling - House Prices Prediction

Posted on May 14, 2021

Source Code - https://github.com/Evinwlin/Kaggle_House_price

The skills the author demonstrated here can be learned by taking the Data Science with Machine Learning bootcamp at NYC Data Science Academy.

Introduction to House Prices and Real Estate

Real estate has traditionally been regarded as a sound investment, since it provides both passive income and long-term appreciation if the value rises over time. When it comes to real estate, there are several factors in play, each of which affects pricing. Historical house prices reveal a property's past valuations as well as its growth in value. The goal of this research is to use machine learning techniques to investigate various variables and their impacts on property value, and to use that information to construct a predictive model for property prices.

Background

The data for this project comes from a Kaggle competition called House Prices: Advanced Regression Techniques. The dataset includes 81 variables and 2560 observations. The target variable is sale price, and the remaining 80 variables are used to construct a predictive model, with the goal of studying variables that have potential impacts on property values and using that information to predict house prices.

Table of Contents

  • Data Cleaning
  • Model Selection
  • Outlier Removal and Feature Selection
  • Model Performance

Data Cleaning - Exploratory Data Analysis of House Prices - Pt. 1

In statistics, exploratory data analysis (EDA) is a common method for examining the characteristics of a dataset, and it helps to uncover the underlying story the data is telling. The first step is to compute a few summary statistics and draw a few graphs to explore the target variable in greater detail (Figure 1.1).

The target variable's mean and median are 180,921.2 and 163,000, respectively, as shown in the graphs below. The histogram shows that the majority of property values fall roughly in the $100k to $300k range, and that the distribution is positively (right) skewed, indicating some deviation from a normal distribution. This is confirmed by a skewness of 1.88 and a kurtosis of 6.54. The QQ-plot shows heavy tails, which indicates the presence of many outliers.

Note: A log transformation of sale price is needed, since the graphs show deviation from a normal distribution.

(Figure 1.1)
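As a quick illustration of these checks, the skewness and kurtosis statistics and the log transformation can be sketched with scipy. Note this runs on simulated right-skewed prices; the actual figures in the post come from the Kaggle data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated right-skewed sale prices standing in for the Kaggle column
prices = rng.lognormal(mean=12, sigma=0.4, size=1460)

skew_raw, kurt_raw = stats.skew(prices), stats.kurtosis(prices)
log_prices = np.log1p(prices)   # log(1 + x) also handles zeros safely
skew_log = stats.skew(log_prices)

print(f"raw: skew={skew_raw:.2f}, excess kurtosis={kurt_raw:.2f}")
print(f"log-transformed: skew={skew_log:.2f}")
```

After the log transformation, the skewness drops close to zero, which is why the model is trained on log sale price.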

Following that, an investigation of correlations was carried out (Figure 1.2). Examining the correlations between sale price and the predictor variables helps to better comprehend the data and gives a sense of how the predictor variables could affect the target variable.


(Figure 1.2)

Further investigation shows a total of 10 variables with a correlation coefficient above 0.5, which is considered moderately correlated with sale price. Among the 10 variables, overall quality and above-ground living area are considered highly correlated, with correlation coefficients above 0.7. This means that an increase in overall quality tends to be accompanied by an increase in sale price.
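This kind of correlation screen is a one-liner with pandas. A minimal sketch on simulated data (the column names mirror the Ames dataset, but the values here are made up with a built-in linear effect):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 200
# Simulated predictors with a deliberate linear effect on price
quality = rng.integers(1, 11, n).astype(float)
area = rng.normal(1500, 400, n)
price = 15000 * quality + 100 * area + rng.normal(0, 15000, n)
df = pd.DataFrame({"OverallQual": quality, "GrLivArea": area, "SalePrice": price})

# Correlation of every predictor with the target, filtered at |r| > 0.5
corr = df.corr()["SalePrice"].drop("SalePrice")
moderate = corr[corr.abs() > 0.5].sort_values(ascending=False)
print(moderate)
```

On the real data, the same filter is what yields the 10 moderately correlated variables mentioned above.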

Data Cleaning - Exploratory Data Analysis of House Prices - Pt. 2

The EDA's final step is to look for missing values (Figure 1.3). The data contains 34 variables with varying degrees of missingness. PoolQC (pool quality), MiscFeature (miscellaneous feature), Alley, Fence, FireplaceQu (fireplace quality), and LotFrontage (linear feet of street connected to the property) are the variables with the most missing values.


(Figure1.3)
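A missingness table like the one in the figure can be built directly from the DataFrame. A small sketch on a toy frame (the real one has 34 columns with missing values):

```python
import numpy as np
import pandas as pd

# Tiny stand-in for the housing data
df = pd.DataFrame({
    "PoolQC": [np.nan, np.nan, "Gd", np.nan],
    "LotFrontage": [65.0, np.nan, 80.0, 70.0],
    "SalePrice": [200000, 150000, 300000, 180000],
})

# Count and percentage of missing values per column, worst first
missing = df.isnull().sum().sort_values(ascending=False)
missing_pct = (missing / len(df) * 100).round(1)
summary = pd.concat([missing, missing_pct], axis=1, keys=["count", "percent"])
print(summary)
```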

To remedy this issue, imputation is needed. Most missing values in categorical variables were imputed with "None", while for most continuous variables a mix of zero, mode, and median imputation was performed. A detailed summary is shown below (Figure 1.4).


(Figure1.4)
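The three imputation strategies from the summary above can be sketched in a few lines of pandas (toy values; the real choices per column are in Figure 1.4):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "PoolQC": [np.nan, "Gd", np.nan],          # NA here means "no pool"
    "Electrical": ["SBrkr", np.nan, "SBrkr"],  # genuinely missing category
    "LotFrontage": [65.0, np.nan, 80.0],       # missing continuous value
})

# Categorical NA that encodes absence of the feature -> the string "None"
df["PoolQC"] = df["PoolQC"].fillna("None")
# Genuinely missing categorical -> mode imputation
df["Electrical"] = df["Electrical"].fillna(df["Electrical"].mode()[0])
# Continuous -> median imputation
df["LotFrontage"] = df["LotFrontage"].fillna(df["LotFrontage"].median())
```

The distinction in the first comment matters: for variables like PoolQC, a missing entry is information (no pool), not a measurement failure.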

Model Selection and Performance Metrics for Predicting House Prices

The selected models are a combination of linear and non-linear models. They are as follows: Linear Regression, penalized regression (Ridge, Lasso, and Elastic-net), Support Vector Regression, Random Forest Regression, Gradient Boosting Regression, and XGBoost. The scoring metrics used for examining model performance are mean squared error (MSE) and root mean squared error (RMSE); both provide insight into model performance by assessing the residuals and their standard deviation.

Note – The goal of experimenting with various models is to see which one works best with the dataset. Linear regression and penalized regression are well suited to predicting continuous variables, and lasso regression gives an idea of how important each variable is. Non-linear models like SVR, random forest, gradient boosting, and XGBoost employ techniques like bagging and boosting, so it is worth testing whether they can outperform the linear models.
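The comparison loop behind this experiment can be sketched with scikit-learn's cross-validation. This is a minimal stand-in on synthetic data, not the author's pipeline (XGBoost is omitted here so the sketch only depends on scikit-learn):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import Lasso, LinearRegression, Ridge
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

# Toy data standing in for the processed housing features
X, y = make_regression(n_samples=300, n_features=10, noise=10, random_state=0)

models = {
    "linear": LinearRegression(),
    "ridge": Ridge(alpha=1.0),
    "lasso": Lasso(alpha=0.1),
    "svr": SVR(),
    "random_forest": RandomForestRegressor(n_estimators=50, random_state=0),
    "gradient_boost": GradientBoostingRegressor(random_state=0),
}

# 5-fold cross-validated RMSE for each candidate model
rmse = {}
for name, model in models.items():
    mse = -cross_val_score(model, X, y, cv=5,
                           scoring="neg_mean_squared_error").mean()
    rmse[name] = np.sqrt(mse)

for name, score in sorted(rmse.items(), key=lambda kv: kv[1]):
    print(f"{name}: RMSE = {score:.2f}")
```

Using the same cross-validation splits for every model keeps the RMSE numbers comparable.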

Linear Regression as Baseline Model - Outlier Removal and Feature Selection

Linear regression is notorious for being sensitive to outliers, because their presence causes the fit to deviate from the true underlying relationship. Applying linear regression as a baseline model can therefore help in the detection of outliers. Several detection methods were used in this process: studentized residuals, leverage, DFFITS, Cook's Distance, and the Bonferroni one-step correction. Cook's Distance was the best of the bunch, as it reduced the baseline model's mean squared error and AIC the most (Figure 1.5).


(Figure1.5)

Feature selection is needed to improve the baseline model. Figure 1.6 illustrates that recursive feature elimination, out of the four feature selection methods, works best with the baseline model. The model's RMSE was further reduced to 0.0764, implying that the discrepancy between observed and predicted sale prices has shrunk even more.


Insights – Although the RMSE decreases, the R-squared remains at 0.9587. Because the R-squared is on the higher end of the spectrum, there is a good chance of overfitting.

(Figure 1.6)
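Recursive feature elimination is implemented in scikit-learn as `RFE`: it repeatedly fits the estimator and drops the weakest feature until the requested number remains. A small sketch on synthetic data (the feature count here is illustrative):

```python
from sklearn.datasets import make_regression
from sklearn.feature_selection import RFE
from sklearn.linear_model import LinearRegression

# 10 features, of which only 5 actually drive the target
X, y = make_regression(n_samples=200, n_features=10, n_informative=5,
                       noise=5, random_state=0)

selector = RFE(LinearRegression(), n_features_to_select=5).fit(X, y)
print(selector.support_)   # boolean mask of the kept features
print(selector.ranking_)   # 1 = selected, higher = eliminated earlier
```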

Model Performance

When RMSE was used as the performance metric, XGBoost outperformed the other models (Figure 1.7). Although XGBoost has a promising accuracy score, it takes over 30 minutes to train the model and grid-search for hyperparameters, so reducing model complexity is required to improve the training time. Furthermore, the non-linear models did not uniformly beat the linear ones: XGBoost and gradient boosting performed better than all linear models, while SVR and random forest underperformed.

(Figure1.7)
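The hyperparameter grid search that dominated the training time looks roughly like this. This sketch uses scikit-learn's `GradientBoostingRegressor` as a stand-in for XGBoost, with a deliberately tiny grid; the author's actual grid and parameter ranges are not given in the post.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV

X, y = make_regression(n_samples=200, n_features=8, noise=10, random_state=0)

# Small illustrative grid; the real search over a larger grid is what
# took 30+ minutes
param_grid = {
    "n_estimators": [50, 100],
    "max_depth": [2, 3],
    "learning_rate": [0.05, 0.1],
}
search = GridSearchCV(GradientBoostingRegressor(random_state=0), param_grid,
                      scoring="neg_root_mean_squared_error", cv=3).fit(X, y)
print(search.best_params_)
```

Search time grows multiplicatively with each added parameter value, which is why trimming the grid (or switching to randomized search) is the usual fix.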

 

Feature Importance of House Prices

One of the goals of this study is to find out what factors influence the sale value of the house. The best way to get that information is to use feature importance from models like lasso, random forest, and gradient boost. It shows how each predictor variable contributes to the prediction of the target variable and ranks each variable in order of significance.

In random forest regression, the predictor variables are ranked by the proportion of the residual sum of squares that each variable reduces. The higher a variable's rank, the more influence it has on forecasting sale price (Figure 1.8).

The top three variables are listed below:

  1. Totalsf – Total square footage of a house; an engineered variable combining total basement, first-floor, and second-floor square footage.
  2. OverallQual – Ranking of overall quality of a house
  3. GrlivArea - Above grade (ground) living area square feet
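The Totalsf variable in item 1 can be engineered in one line. A sketch assuming the Ames column names TotalBsmtSF, 1stFlrSF, and 2ndFlrSF, with toy values:

```python
import pandas as pd

df = pd.DataFrame({
    "TotalBsmtSF": [800, 0],
    "1stFlrSF": [1000, 1200],
    "2ndFlrSF": [700, 0],
})
# TotalSF: sum of basement, first-floor, and second-floor square footage
df["TotalSF"] = df["TotalBsmtSF"] + df["1stFlrSF"] + df["2ndFlrSF"]
print(df["TotalSF"].tolist())
```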


(Figure1.8)
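The ranking in Figure 1.8 comes from the random forest's built-in importances, which scikit-learn exposes as `feature_importances_`. A minimal sketch on synthetic features (the names here are placeholders, not the housing columns):

```python
import pandas as pd
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

# Toy features standing in for the housing predictors
X, y = make_regression(n_samples=300, n_features=5, n_informative=3,
                       random_state=0)
cols = [f"feat_{i}" for i in range(5)]

rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
# Importances are the normalized impurity (RSS) reduction per feature
importance = pd.Series(rf.feature_importances_, index=cols)
print(importance.sort_values(ascending=False))
```

Because the importances are normalized to sum to one, they can be read directly as each feature's share of the explained reduction in RSS.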

Conclusion to Predicting House Prices

To summarize, leveraging regression techniques to predict house prices works well, particularly XGBoost regression with additional model tuning. That said, when thinking about home improvements to raise a house's potential value and obtain a better return, consider the following:

  1. Increasing total square footage and/or livable above-ground square footage by building additions such as a deck or balcony. (Totalsf, GrlivArea)
  2. Improving the condition of the house by remodeling, renovation, or landscaping. (OverallQual)

Future Work on Predicting House Prices

Extensive hyper-parameter tuning, better feature engineering, and the collection and incorporation of more data could all help improve model performance.

Here are a few instances:

  1. Employ feature engineering techniques, such as polynomial features, to develop new variables.
  2. Collect additional data to add to the current models.
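For the first idea, scikit-learn's `PolynomialFeatures` generates squared and interaction terms from existing columns. A minimal sketch on two hypothetical feature values:

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

X = np.array([[2.0, 3.0]])  # two hypothetical feature values
poly = PolynomialFeatures(degree=2, include_bias=False)
# Expands [x1, x2] to [x1, x2, x1^2, x1*x2, x2^2]
print(poly.fit_transform(X))
```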

About Author

Evin

With a bachelor's degree in Finance and a bachelor's degree in Statistics, Wei (Evin) Lin is a certified data scientist. He has more than two years of finance and accounting internship experience in the area of sales and trading,...

