Machine Learning - Predicting Housing Prices in Ames, Iowa

Posted on Feb 7, 2020

Project Code | Presentation | Slides

Introduction

"Housing - humanity's simple, yet complex and timeless paradigm. Families and individuals, buyers and sellers, and businesses and investors all continually try to crack the code - finding the right housing price to balance one's life and future." Predicting housing prices is an invaluable, yet frustrating endeavor. In this project, we trained several machine learning models that use the features and attributes of a house to predict the sale price of houses in Ames, Iowa. We examined the features to determine which are important and which are not, developed multiple machine learning models, and compiled the results to take advantage of each model's strengths.

Data Description

The data was compiled by Dean De Cock and published on Kaggle - Advanced Regression Techniques. With 79 explanatory variables describing (almost) every aspect of residential homes in Ames, Iowa, and 2930 observations / houses, our goal is to predict the sale price for each house, evaluated on the Root-Mean-Squared-Error (RMSE) between the logarithm of the predicted value and the logarithm of the observed sale price. (Taking logs means that errors in predicting expensive houses and cheap houses affect the result equally.)
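The competition metric above can be written in a few lines. The sketch below is a minimal illustration with made-up prices, not the project's evaluation code; it shows why taking logs equalizes cheap and expensive houses:

```python
import numpy as np

def rmsle(y_true, y_pred):
    """Root-mean-squared error between the logs of predicted and observed prices."""
    return np.sqrt(np.mean((np.log(y_pred) - np.log(y_true)) ** 2))

# The same $10,000 miss counts for more on a cheap house than on an expensive one:
cheap = rmsle(np.array([100_000.0]), np.array([110_000.0]))
pricey = rmsle(np.array([500_000.0]), np.array([510_000.0]))
```

In log space only the *relative* error matters, so a 10% miss on a $100k house weighs the same as a 10% miss on a $500k house.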

Of the 79 explanatory variables, 51 are categorical and 28 are continuous. Each predictor can be grouped under one of the following aspects of a house: lot/land, location, age, appearance, external features, room/bathroom, kitchen, basement, roof, garage, and utilities.

Continuous variables: relate to various area dimensions such as the size of the living area, the basement and the porch;

Discrete variables: quantify the number of items occurring within the house, such as number of rooms, baths, kitchens, parking spots, etc;

Nominal variables: identify various types of dwellings, garages, materials, and environmental conditions;

Ordinal variables: rate various items within the property.

Further information about the variables can be found here.

We can make assumptions about predictors that might influence the housing price:

  • Which neighborhood is it located in? Is it in a good school zone? Is it a safe place to live? Will I have friendly neighbors in a harmonious community?
  • How old is the house?
  • How many rooms does it have? Can it accommodate my family and guests?
  • Does it have heating?
  • Does it have a swimming pool?
  • How many parking spots does it have?
  • How large is the lot?
  • How close is it to schools, public transportation, malls, etc.?
  • What type of dwelling is it?
  • How large are the inside and outside areas?

Data Exploration

To understand the dataset, we visualized the distribution of each variable and its correlation with the target variable, then removed outliers and transformed the target variable. As can be seen in the plots below, outliers exist, and some variables have a strong linear relationship with the target variable.


Log Transformation of Dependent Variable

A histogram and probability plot show a right-skewed distribution of the dependent variable, sale price, with most houses selling in the $100,000 to $200,000 range, as shown in the left plots below. The right skewness is caused by a small number of expensive houses alongside a concentration of cheaper ones. We therefore took the log of the sale price to make the distribution more symmetrical, as demonstrated in the right plots. The rationale behind the log transformation of the target variable is as follows:

  • Its multiplicative form allows a non-linear, and thus quite general, relationship between the variables.
  • Sale price is always greater than or equal to 0, which makes it a limited dependent variable that would otherwise require special techniques. Its log, however, can range over all real numbers, so those techniques become unnecessary.
  • It can reduce the effect of extreme outliers.
  • If we have a model that has heteroscedasticity / non-constant variance, the log transformation will suppress the variation in the target variable and therefore reduce the heteroscedasticity.
  • Similarly, it can make the errors' distribution more symmetric, for the sake of the normality assumption in linear regression.

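The transform itself is a one-liner. The sketch below uses toy prices (not the Ames data) and `np.log1p`/`np.expm1`, a common pairing since `expm1` exactly inverts `log1p` to turn log-space predictions back into dollars:

```python
import numpy as np
from scipy.stats import skew

# Toy prices standing in for SalePrice; one expensive house creates the skew.
prices = np.array([90_000, 120_000, 150_000, 160_000, 180_000, 450_000], dtype=float)

y_log = np.log1p(prices)   # train the model on log prices
back = np.expm1(y_log)     # invert predictions back to dollars

print(skew(prices), skew(y_log))  # skewness shrinks after the transform
```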

Removing Highly Influential Points in Features that are Highly Correlated with the Target Variable

Highly influential points are points that are both outliers and have high leverage. An outlier is a data point whose response does not follow the general trend of the rest of the data. A data point has high leverage if it has "extreme" predictor values. Highly influential points do not represent the general pattern of the data. They pollute a linear regression model by 'dragging' the fitted hyperplane 'up' or 'down,' making the model either overestimate or underestimate the true coefficients and intercept. This is particularly harmful in predictor variables that are highly correlated with the response, since those variables carry larger coefficients and influence the prediction the most.

Therefore, we filtered for the highly correlated predictor variables, then identified and removed the influential points in those variables.

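A toy example makes the 'dragging' effect concrete. The frame and thresholds below are illustrative, not the project's actual cutoffs; the point is that one huge, cheaply-sold house can mask an otherwise near-perfect linear relationship:

```python
import pandas as pd

# Toy frame mimicking two Ames columns; the 5,600 sq ft house sold cheap
# is both an outlier and a high-leverage point.
df = pd.DataFrame({
    "GrLivArea": [1500, 1800, 2100, 2400, 5600],
    "SalePrice": [180_000, 210_000, 250_000, 280_000, 160_000],
})

before = df.corr().loc["GrLivArea", "SalePrice"]

# Drop rows that defy the trend (thresholds chosen for this toy data).
mask = (df["GrLivArea"] > 4000) & (df["SalePrice"] < 300_000)
df_clean = df[~mask]

after = df_clean.corr().loc["GrLivArea", "SalePrice"]
print(before, after)  # weak/negative with the point, strong once removed
```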

Imputation of Missing Values

Overall, 6% of the data was missing, spread across 34 of the 79 predictor variables. Four predictor variables have over 80% missing values.


The test dataset has missing values in more variables than the training dataset does, so imputation had to be implemented in both. Let's first take a look at our imputation strategy for the training dataset.

We classified missing values as 'pseudo' and 'real' missing values. Pseudo missing values are not actually missing; instead, they indicate a house without the corresponding attribute. For instance, the NaNs in the variable PoolQC, which stands for swimming pool quality, represent houses without a swimming pool, which is common. So we imputed these pseudo missing values with strings like 'No PoolQC.' For the real missing values, we grouped the predictor's values by their labels in a related variable, took the mean, median, or mode of each group, and used that group statistic to fill the predictor's missing values.

For example, we imputed 'LotFrontage,' the linear feet of street connected to the property, by grouping its values by Neighborhood, taking each group's median, and using it to fill the gaps. We also drew on everyday experience and domain knowledge: for example, we imputed the feature 'Electrical' with the industry-standard electrical system ('Standard Circuit Breakers & Romex').

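Both strategies are short in pandas. The frame below is toy data (column names follow the Ames data dictionary) sketching the two imputation paths described above:

```python
import pandas as pd

# Toy frame; column names follow the Ames data dictionary.
df = pd.DataFrame({
    "Neighborhood": ["NAmes", "NAmes", "NAmes", "OldTown", "OldTown"],
    "LotFrontage": [60.0, 80.0, None, 50.0, None],
    "PoolQC": [None, "Gd", None, None, None],
})

# Pseudo missing: NaN in PoolQC just means the house has no pool.
df["PoolQC"] = df["PoolQC"].fillna("No PoolQC")

# Real missing: fill LotFrontage with the median of its Neighborhood group.
df["LotFrontage"] = df.groupby("Neighborhood")["LotFrontage"].transform(
    lambda s: s.fillna(s.median())
)
```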

Similarly, we imputed the test dataset.


Feature Engineering

As our linear model cannot handle mixed data types directly, we grouped the features into three categories - continuous, ordinal categorical, and nominal categorical - and transformed each group accordingly, as shown in the slide below.

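The three treatments can be sketched in a few lines. This is a minimal illustration on toy data, assuming an ordinal quality scale like the one in the Ames data dictionary (Po < Fa < TA < Gd < Ex):

```python
import pandas as pd

# Toy frame with one column of each kind: continuous, ordinal, nominal.
df = pd.DataFrame({
    "GrLivArea": [1500, 1800, 2100],              # continuous: keep as number
    "ExterQual": ["Gd", "TA", "Ex"],              # ordinal: map to ranks
    "Neighborhood": ["NAmes", "OldTown", "NAmes"],  # nominal: dummify
})

quality_rank = {"Po": 1, "Fa": 2, "TA": 3, "Gd": 4, "Ex": 5}
df["ExterQual"] = df["ExterQual"].map(quality_rank)
df = pd.get_dummies(df, columns=["Neighborhood"])
```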

When dummifying, every category forms its own column. Many categories, however, are neither numerous nor distinct enough to form meaningful variables, like 'Wall', 'OthW', and 'Floor' in the plot below.


So we grouped them as follows:

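One way to sketch that grouping, on a toy Heating column and with an illustrative count threshold (not necessarily the one the project used):

```python
import pandas as pd

# Toy Heating column: 'Wall', 'OthW', and 'Floor' are too rare to keep.
heating = pd.Series(["GasA"] * 90 + ["GasW"] * 8 + ["Wall", "OthW", "Floor"])

counts = heating.value_counts()
rare = counts[counts < 5].index                       # threshold is illustrative
grouped = heating.where(~heating.isin(rare), "Other")  # merge rare levels
dummies = pd.get_dummies(grouped, prefix="Heating")
```

Merging before dummifying keeps the design matrix from filling up with near-constant columns that a linear model cannot estimate reliably.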

Since linear regression works best when the residuals/errors are normal, transforming both the predictors and the target variable toward more symmetric distributions makes our model more robust in the sense of statistical inference. We therefore applied a Box-Cox transformation to features whose skewness exceeded a threshold.

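A sketch of that step on toy data, using `scipy.special.boxcox1p`; the skewness threshold (0.75) and the fixed lambda (0.15) are common choices in Ames-style notebooks, shown here as illustrative values rather than the project's tuned ones:

```python
import pandas as pd
from scipy.stats import skew
from scipy.special import boxcox1p

df = pd.DataFrame({
    "LotArea":   [8_000, 9_500, 11_000, 12_500, 90_000],  # heavily right-skewed
    "YearBuilt": [1950, 1965, 1980, 1995, 2005],          # roughly symmetric
})

# Transform only the features whose skewness exceeds the threshold.
skewness = df.apply(lambda col: skew(col))
to_fix = skewness[skewness.abs() > 0.75].index

before = skew(df["LotArea"])
for col in to_fix:
    df[col] = boxcox1p(df[col], 0.15)  # lam fixed by convention, not tuned here
```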

In addition, we created new variables based on our understanding of the data and domain knowledge. For instance, we added a feature representing the total square feet of the house:

attri['TotalSF'] = attri['TotalBsmtSF'] + attri['1stFlrSF'] + attri['2ndFlrSF']

Model Fitting

We first trained a Lasso regression model and, by selecting the optimal lambda, obtained a minimum test RMSE of 0.1075. We then performed feature selection with that lambda, dropping the features whose coefficients were shrunk to 0: 119 features in total, from both the training and test datasets.

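The tune-then-prune pattern can be sketched with scikit-learn's `LassoCV` on synthetic data standing in for the engineered Ames features (the real pipeline used the actual feature matrix, of course):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in: only 10 of the 50 columns carry signal.
X, y = make_regression(n_samples=300, n_features=50, n_informative=10,
                       noise=10.0, random_state=0)
X = StandardScaler().fit_transform(X)

# LassoCV searches for the optimal lambda (alpha) by cross-validation.
model = LassoCV(cv=5, random_state=0).fit(X, y)

# Drop every feature whose coefficient was shrunk exactly to zero.
kept = np.flatnonzero(model.coef_ != 0)
X_reduced = X[:, kept]
```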

After that, we re-tuned lambda on the reduced training dataset for both Lasso and Ridge, and got a slightly better result (RMSE) with Lasso.


And a significant improvement with Ridge.


We also trained Elastic Net regression models and found that Lasso gave the best result of the three.


Lastly, we trained two more models, a gradient boosting machine and an extreme gradient boosting (XGBoost) machine, and stacked all the regressors, weighting each by its best RMSE. This gave our best result: an RMSE of 0.1049.

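One simple way to turn per-model RMSEs into stacking weights is inverse-error weighting, normalized to sum to 1. The sketch below uses made-up RMSEs and predictions, not the project's actual scores, and the weighting scheme is an assumption about what "based on their least RMSE" means:

```python
import numpy as np

# Hypothetical CV RMSEs (log-price space) for each tuned regressor.
rmse = {"lasso": 0.107, "ridge": 0.112, "enet": 0.110, "gbm": 0.116, "xgb": 0.114}

# Weight each model by inverse RMSE, normalized so the weights sum to 1.
inv = {name: 1.0 / err for name, err in rmse.items()}
total = sum(inv.values())
weights = {name: w / total for name, w in inv.items()}

# Blend per-model predictions (toy values) with those weights.
preds = {name: np.array([11.9, 12.3, 12.0]) for name in rmse}
blend = sum(weights[name] * preds[name] for name in rmse)
```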

Within the most common price range, $50,000 to $200,000, this model achieved an average error of about $9,000.

Feature Importance

The top 10 predictor variables given by the feature importance score of gradient boosting machine are:

  1. TotalSF - engineered feature: total square feet of the house;
  2. OverallQual: rates the overall material and finish of the house;
  3. GrLivArea: above-ground living area square feet;
  4. YearBuilt: original construction date;
  5. KitchenQual: kitchen quality;
  6. TotalBsmtSF: total square feet of basement area;
  7. GarageArea: size of garage in square feet;
  8. YearRemodAdd: remodel date (same as construction date if no remodeling or additions);
  9. ExterQual: evaluates the quality of the material on the exterior;
  10. 1stFlrSF: first floor square feet.
 

Introduction to our Team 


 

About Author


Fred (Lefan) Cheng

Fred Cheng is a certified data scientist who is working on a Master’s Degree in Management and Systems with database technology specialization from New York University with a bachelor’s in business management minor in finance from The Chinese...
