Studying Data to Predict House Prices

Richie Bui, Kelly Ho, and Samuel Mao
Posted on Sep 15, 2018
The skills the authors demonstrated here can be learned through taking the Data Science with Machine Learning bootcamp with NYC Data Science Academy.

Introduction

The goal of this project was to use supervised machine learning techniques to predict the prices of houses located in Ames, Iowa. The dataset was provided by Kaggle, a popular website where data scientists come to compete and test their skills and knowledge.

The dataset provided around 80 different features covering many aspects of each house, some of which may help predict fluctuations in house prices and some of which may not. The strategic approach our team adopted was to derive meaning from the dataset through analytical graphs and statistical methods, and then to apply different supervised machine learning algorithms to predict the house sale price.

 

Outline

  1. Exploratory Data Analysis (EDA)
    1. Trends
    2. Correlation
  2. Preprocessing & Feature Engineering
    1. Data cleaning & Imputations
    2. Ordinal Encoding
    3. Log Transformations
    4. Box-Cox Transformation
    5. Label Encoding
    6. One-Hot Encoding
  3. Models and Techniques
    1. Linear Regression Models
    2. Tree-based Models
  4. Results
  5. Future Improvements

 

Data Exploration

 

The dataset contained two csv files (train.csv, test.csv). The training set had 1460 observations and the test set had 1459 observations. The only key difference between the two sets was the absence of the sale price column in the test set. In order to predict the sale price of a house, we began by looking at factors like Neighborhood and Overall Quality of the house.
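A minimal sketch of that setup in pandas, assuming the two Kaggle files sit in the working directory:

```python
import pandas as pd

# Load the Kaggle Ames housing files (local paths are assumed)
train = pd.read_csv("train.csv")
test = pd.read_csv("test.csv")

print(train.shape)   # (1460, 81): 79 features plus Id and SalePrice
print(test.shape)    # (1459, 80): SalePrice is absent from the test set

# Confirm that SalePrice is the only column missing from the test set
print(set(train.columns) - set(test.columns))   # {'SalePrice'}
```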

As you can see below, both these categories play important roles in housing prices. Neighborhood plays into the old saying about real estate being all about location, location, location, while Overall Quality shows that the higher the quality, the higher the overall price of the house.

[Figure: Sale price by Neighborhood and by Overall Quality]
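Plots along these lines can be reproduced with a short seaborn sketch; the styling here is illustrative and may differ from the charts above:

```python
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

train = pd.read_csv("train.csv")

fig, axes = plt.subplots(1, 2, figsize=(16, 5))

# Sale price by neighborhood, ordered by median price
order = train.groupby("Neighborhood")["SalePrice"].median().sort_values().index
sns.boxplot(data=train, x="Neighborhood", y="SalePrice", order=order, ax=axes[0])
axes[0].tick_params(axis="x", rotation=90)

# Sale price by overall quality rating (1 = very poor, 10 = very excellent)
sns.boxplot(data=train, x="OverallQual", y="SalePrice", ax=axes[1])

plt.tight_layout()
plt.show()
```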

Data Cleaning & Feature Engineering

 

Below is a schematic of the data cleaning and feature engineering process that was performed on the datasets.

[Figure: Schematic of the data cleaning and feature engineering workflow]

Data cleaning and feature engineering were performed to construct additional explanatory variables that could help predict the housing sale price. For this process, we combined the training and test datasets after dropping the sale price. We first assessed features with missing values; columns with missing values were imputed as shown in the table below.
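A rough sketch of the combine-and-assess step is below. The value each column actually received is listed in the imputation table, so the three fills shown are only illustrative guesses at typical choices for the Ames data:

```python
import pandas as pd

train = pd.read_csv("train.csv")
test = pd.read_csv("test.csv")

# Set the target aside and stack the two sets for a single cleaning pass
sale_price = train["SalePrice"]
full = pd.concat([train.drop(columns="SalePrice"), test], ignore_index=True)

# Assess which features have missing values, and how many
missing = full.isnull().sum().sort_values(ascending=False)
print(missing[missing > 0])

# Illustrative imputations only -- the full per-column rules are in the table above.
# For many Ames columns, NA means "feature absent" rather than "value unknown".
full["PoolQC"] = full["PoolQC"].fillna("None")      # no pool
full["GarageArea"] = full["GarageArea"].fillna(0)   # no garage
full["LotFrontage"] = full["LotFrontage"].fillna(
    full.groupby("Neighborhood")["LotFrontage"].transform("median"))
```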

Next, we engineered new features, either by creating additional columns that would help predict the sale price or by combining columns that contained redundant information.

Types of Feature Engineering

Three main types of feature engineering were performed:

  1. Columns such as Exterior1st and Exterior2nd seemed redundant, so we dummified them and combined the dummy columns. The same process was performed for Condition1 and Condition2.
  2. Several categorical predictors, such as masonry type and basement types of a house, have square footage information included in another column. We decided to combine the information by first dummifying columns MasVnrType, BsmtFinType1, and BsmtFinType2, then replacing the dummy variable with the actual square footage.
  3. Features that measured square footage played a significant role in terms of their correlation with the sale price. We engineered our own columns: Total FloorSF, Total Porch SF, and BsmtBath. Individually, each component had only a minor impact on the sale price, but combining these features created a much bigger impact. The graph below shows that the feature-engineered Total FloorSF column has a strong correlation with the sale price (a code sketch of these three steps follows the figure).

[Figure: Sale price vs. the engineered Total FloorSF]
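The code below is a minimal sketch of the three steps above, reusing the combined frame from the cleaning sketch. The column names TotalFloorSF, TotalPorchSF, and BsmtBath, and the exact components summed into each, follow the Ames data dictionary but are assumptions rather than the exact definitions behind the reported scores:

```python
import pandas as pd

# `full` is the combined train/test frame from the cleaning sketch above

# 1. Merge the dummies of the two redundant exterior-material columns
ext1 = pd.get_dummies(full["Exterior1st"], prefix="Ext", dtype=int)
ext2 = pd.get_dummies(full["Exterior2nd"], prefix="Ext", dtype=int)
exterior = ext1.add(ext2, fill_value=0).clip(upper=1)  # 1 if either column lists the material

# 2. Dummify a categorical column, then swap each 1 for the matching square footage
masonry = pd.get_dummies(full["MasVnrType"], prefix="MasVnr", dtype=int)
masonry = masonry.mul(full["MasVnrArea"], axis=0)      # dummy * area = area per veneer type

# 3. Combine related measurements into single, stronger predictors
full["TotalFloorSF"] = full["TotalBsmtSF"] + full["1stFlrSF"] + full["2ndFlrSF"]
full["TotalPorchSF"] = (full["OpenPorchSF"] + full["EnclosedPorch"]
                        + full["3SsnPorch"] + full["ScreenPorch"])
full["BsmtBath"] = full["BsmtFullBath"] + 0.5 * full["BsmtHalfBath"]

full = pd.concat([full, exterior, masonry], axis=1)
```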

Transforming the data was important due to the skewness of the overall data. The first transformation we performed was on the sale price. The figure below visually displays how skewed the sale price is and how a log transformation produces a more normal distribution. The other transformation we used was the Box-Cox transformation, applied to every predictor variable to ensure the predictors were normally distributed.
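A sketch of the two transformations is below. It uses the common shortcut of a fixed Box-Cox lambda applied only to the visibly skewed predictors; the skew threshold and lambda value are illustrative assumptions rather than the exact settings used here:

```python
import numpy as np
from scipy.stats import skew
from scipy.special import boxcox1p

# Log-transform the target to reduce its right skew
# (`sale_price` and `full` come from the cleaning sketch above)
log_price = np.log1p(sale_price)

# Box-Cox transform the numeric predictors
numeric_cols = full.select_dtypes(include=[np.number]).columns
skewness = full[numeric_cols].apply(lambda col: skew(col.dropna()))
skewed = skewness[skewness.abs() > 0.75].index   # illustrative threshold
full[skewed] = boxcox1p(full[skewed], 0.15)      # illustrative fixed lambda
```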

Different types of encoding were performed on the categorical predictors because machine learning algorithms cannot process strings or plain text in their raw form. Three encoding approaches were used: one-hot encoding, label encoding, and ordinal encoding. Ordinal encoding was performed before the Box-Cox transformation step on predictors with an inherent ranking. After the Box-Cox transformation, the remaining categorical predictors were either all one-hot encoded or all label encoded: one-hot encoding was used for the regression techniques, and label encoding was used for the tree-based techniques.
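A minimal sketch of the three encoding paths, again reusing the combined frame; the ordinal quality mapping shown is an illustrative assumption covering one column:

```python
import pandas as pd
from sklearn.preprocessing import LabelEncoder

# Ordinal encoding for predictors with an inherent ranking
# (mapping shown for one quality column; the same Po < Fa < TA < Gd < Ex
#  scale applies to several other Ames quality/condition columns)
quality_scale = {"None": 0, "Po": 1, "Fa": 2, "TA": 3, "Gd": 4, "Ex": 5}
full["KitchenQual"] = full["KitchenQual"].map(quality_scale)

# One-hot encode the remaining categoricals (used for the regression models)
full_onehot = pd.get_dummies(full)

# Label encode the same categoricals (used for the tree-based models)
full_label = full.copy()
for col in full_label.select_dtypes(include="object").columns:
    full_label[col] = LabelEncoder().fit_transform(full_label[col].astype(str))
```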

 

Models

We tried out various types of models to understand which one worked best for our dataset. We started with multiple linear regression, then lasso (L1), ridge (L2), and finally elastic net regression. Afterward, we used Decision Trees, Random Forest, Gradient Boost, and XGBoost.

Lasso Regression

The regression techniques were used first because they were fairly easy to apply to our dataset and didn't require much effort to configure and optimize. We found that the different regression techniques provided very different results.

The regression technique that provided the best results was lasso regression. We believe L1 performed best because, with so many one-hot encoded features, its penalty shrinks weak predictors to zero and keeps the coefficients most strongly related to the sale price. Our team assumed that elastic net would provide better results than the lasso because it combines the L1 and L2 penalties; however, this was not the case, and it produced an even worse CV score. The plot below shows the 20 largest positive and negative coefficients in the linear regression model; a code sketch of the lasso fit follows the bullet points.

 

  • Some neighborhoods have a strong positive impact on the model, and some neighborhoods have a strong negative impact
  • TotalFlrSF and OverallQual are strong positive contributors to the sale price
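A sketch of a lasso fit along these lines, assuming the one-hot encoded features and log-transformed target from the sketches above; the alpha grid and fold count are illustrative choices, not the settings behind the scores reported here:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LassoCV
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X = full_onehot.iloc[:len(log_price)]   # rows belonging to the training set
y = log_price

lasso = make_pipeline(StandardScaler(),
                      LassoCV(alphas=np.logspace(-4, -1, 50), cv=5, max_iter=50000))

rmse = -cross_val_score(lasso, X, y, cv=5, scoring="neg_root_mean_squared_error")
print(f"CV RMSE (log scale): {rmse.mean():.4f} +/- {rmse.std():.4f}")

# Refit on all training rows and inspect the strongest coefficients
lasso.fit(X, y)
coefs = pd.Series(lasso[-1].coef_, index=X.columns).sort_values()
print(coefs.tail(20))   # largest positive coefficients
print(coefs.head(20))   # largest negative coefficients
```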

Tree-Based Models

After our regression techniques, we switched over and implemented a number of tree-based models, namely Random Forest, Gradient Boost, and XGBoost. The first tree-based model we fit was a plain decision tree, which had a poor result; we used the decision tree results as a baseline to compare against the more complex tree-based models. Random Forest, Gradient Boost, and XGBoost had significantly better results, with XGBoost showing the best result. Below is the feature importance plot from the XGBoost model, followed by a code sketch of the fit.

  • TotalFlrSF and OverallQual are strong positive contributors
  • The plot also confirms Neighborhood as a strong predictor
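A sketch of an XGBoost fit and importance plot along these lines, assuming the label-encoded features from the sketches above; the hyperparameters are illustrative defaults, not the tuned values behind the reported scores:

```python
import matplotlib.pyplot as plt
import xgboost as xgb

X_tree = full_label.iloc[:len(log_price)]   # training rows, label-encoded
y = log_price

model = xgb.XGBRegressor(n_estimators=1000, learning_rate=0.05,
                         max_depth=3, subsample=0.8, colsample_bytree=0.8)
model.fit(X_tree, y)

# Feature importance plot analogous to the one discussed above
xgb.plot_importance(model, max_num_features=20)
plt.show()
```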

 

The main reason for attempting so many modeling techniques was to see whether an ensemble method could produce the best result possible. However, since lasso regression consistently yielded the best results across both cross-validation and Kaggle scores, ensembling all the models was not worth the marginal improvement in score at the cost of losing model interpretability.

 

Data Results

The results below show that our Kaggle scores and cross-validation RMSE scores follow the same trend and are similar in range. None of our models were overfitting, and the lasso regression and XGBoost models had the best results.


Future Improvements

A few ideas we had in mind to further improve model accuracy would be to explore stacking or ensembling the individual models. In addition, we would like to further explore neighborhood effects on a couple of the important features we identified by using hierarchical linear regression. We had a theory that the clustering of neighborhoods played a bigger role in house prices than we initially thought, and hierarchical linear regression would have helped prove that theory right or wrong; a sketch of the idea follows.
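As a sketch of what that follow-up might look like, the mixed-effects model below lets the intercept and the OverallQual effect vary by neighborhood; this is a hypothetical specification, not something we actually ran:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

train = pd.read_csv("train.csv")
train["LogPrice"] = np.log1p(train["SalePrice"])

# Random intercept and random OverallQual slope for each neighborhood
model = smf.mixedlm("LogPrice ~ OverallQual + GrLivArea",
                    data=train,
                    groups=train["Neighborhood"],
                    re_formula="~OverallQual")
result = model.fit()
print(result.summary())
```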

Another interesting topic for future improvement was the inclusion of time-series event data in our dataset. The 2008 recession cycle must have had some impact, and our group wanted to see how it would have played out in our results. We also had in mind implementing an economic index capturing things such as economic status and the communities living in an area. Since Iowa State University sits in the middle of Ames, Iowa, we noticed a trend: houses near the campus were typically lower in price, while those north of the campus were higher in price.

About Authors

Richie Bui

Richie Bui has experience collecting, processing, and capturing large datasets from working with Medtronic over the past 3 years, collaborating closely with colleagues in engineering, product management, and bio-statistics to gather information on the medical...

Kelly Ho

Kelly graduated from Cornell University with a Master of Engineering degree. She has three years of experience in analytics, statistical modeling, and providing data-driven recommendations for process improvement.

Samuel Mao

Samuel Mao is a data scientist with three years of experience using R and Python to develop models addressing needs across business functions. He also has demonstrated experience growing US/China cross border enterprise value.
