The Estate Saints

Objective:

Predicting the value of a house is a useful tool for both consumers and real estate practitioners. The quality of a house and its surrounding features can make or break a buyer's decision and drive the overall price. A buyer is always asking the same question: "Am I getting the best bang for my buck?" It's a no-brainer: you want to make sure you are getting your full money's worth, and why shouldn't you?

The Data:

The Ames, Iowa housing dataset was obtained from the Kaggle website and contains 79 features describing residential properties sold from 2006 to 2010. These features include the location of the property, property size, overall condition, roof style, and year built. The dataset also includes in-depth descriptions of the features collected and how to interpret each individual observation.

EDA:

To understand our collected data further, we started by identifying columns with a high proportion of missing values; features that did not seem important or add value became candidates to drop. We then used basic visualizations such as a correlation matrix to find columns that were highly collinear with one another, such as garage area and the number of cars a garage holds. Reducing this collinearity was necessary for our models to produce reliable results.

Figure 1. Correlation Matrix
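The collinearity check behind Figure 1 can be sketched as follows. This is a minimal illustration on synthetic data standing in for the Ames set; the pandas/NumPy tooling and the 0.8 threshold are our assumptions, not details from the original analysis:

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for the Ames data: GarageArea and GarageCars are
# built to be strongly correlated, mimicking the collinearity we saw.
rng = np.random.default_rng(0)
garage_cars = rng.integers(0, 4, size=200)
df = pd.DataFrame({
    "GarageCars": garage_cars,
    "GarageArea": garage_cars * 250 + rng.normal(0, 30, size=200),
    "LotArea": rng.normal(10_000, 2_000, size=200),
})

# Pearson correlation matrix -- the kind of table Figure 1 visualizes.
corr = df.corr()

# Flag pairs above a chosen threshold as candidates to drop.
threshold = 0.8
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
high_corr_pairs = [
    (row, col)
    for col in upper.columns
    for row in upper.index
    if abs(upper.loc[row, col]) > threshold
]
print(high_corr_pairs)
```

Only the upper triangle of the matrix is scanned so each pair is reported once.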

For further analysis, we used scatter plots and line graphs to see how certain features interacted with each other and to spot trends relevant to our overall goal of predicting what a house should be priced at. Our end goal is to drop features that are highly similar to others or contribute little to the model, reducing complexity and highlighting the most important predictors.

Figure 2. Scatter plots and line plots of various features

Lasso Regression Model:

Through our initial EDA, we saw that our data followed a relatively linear trend. We wanted a linear model, so we chose lasso regression to predict housing prices and reduce multicollinearity through its built-in feature selection. Lasso introduces a penalty term that regularizes the model, trading a small amount of training accuracy for lower variance. Through hyperparameter tuning, the lasso can shrink coefficients exactly to zero, which is why it doubles as a feature selection method.

Figure 3. Lasso Regression Model
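A minimal sketch of the coefficient-zeroing behavior described above, using scikit-learn's LassoCV on synthetic data; the tooling and data are illustrative assumptions, not our actual pipeline:

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

# Synthetic regression problem: only the first 3 of 10 features matter.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))
y = X[:, 0] * 5 + X[:, 1] * 3 + X[:, 2] * 2 + rng.normal(0, 0.5, size=300)

# Scaling matters for lasso: the L1 penalty treats all coefficients alike.
X_scaled = StandardScaler().fit_transform(X)

# LassoCV tunes the penalty strength (alpha) by cross-validation.
lasso = LassoCV(cv=5, random_state=0).fit(X_scaled, y)

# The L1 penalty drives coefficients of uninformative features toward
# exactly zero -- this is the built-in feature selection.
selected = np.flatnonzero(lasso.coef_)
print("nonzero coefficients at features:", selected)
```

The surviving (nonzero) coefficients mark the features the model considers informative.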

We split our data into a 70/30 training and test set to score the model and record its error when predicting house prices. Our model showed fairly high accuracy but also higher error than anticipated, a sign that it was overfitting. By dropping the features with high multicollinearity flagged by the lasso's feature selection, we were able to decrease the error while maintaining high accuracy.

Train R^2    Test R^2    Mean Squared Error
0.8861       0.8954      0.02530

Figure 4. Table of accuracy and error for the Lasso Regression model
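The 70/30 split-and-score workflow can be sketched as follows; synthetic data and a fixed alpha stand in for the Ames set and our tuned model, so the printed numbers will not match the table:

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the prepared Ames feature matrix.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 8))
y = X @ rng.normal(size=8) + rng.normal(0, 0.3, size=500)

# 70/30 train/test split, as in the text.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42
)

model = Lasso(alpha=0.01).fit(X_train, y_train)

# Score on both splits and record the test-set error.
train_r2 = model.score(X_train, y_train)
test_r2 = model.score(X_test, y_test)
mse = mean_squared_error(y_test, model.predict(X_test))
print(train_r2, test_r2, mse)
```

Comparing train and test R^2 is what surfaces overfitting: a large gap means the model memorized the training set.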

The client is everything to us, so we tested other models to see if we could outperform our first attempt. We chose an elastic net to compare against the lasso. The elastic net combines the lasso's penalty term with the one from ridge regression, and the relative weight of the two can be set through hyperparameter tuning. Our elastic net showed results similar to the lasso but a lower R^2 on the test data, so the lasso regression remained our best model for predicting house prices.

Train R^2    Test R^2    Mean Squared Error
0.8970       0.8893      0.02107

Figure 5. Table of accuracy and error for the Elastic Net model
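A sketch of the elastic net's combined penalty tuning, again on synthetic data; ElasticNetCV and the particular l1_ratio grid are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import ElasticNetCV

# Synthetic regression problem with two informative features.
rng = np.random.default_rng(2)
X = rng.normal(size=(300, 10))
y = X[:, 0] * 4 + X[:, 1] * 2 + rng.normal(0, 0.5, size=300)

# l1_ratio weights the two penalties: 1.0 is pure lasso, 0.0 pure ridge.
# ElasticNetCV tunes both alpha and l1_ratio by cross-validation.
enet = ElasticNetCV(
    l1_ratio=[0.1, 0.5, 0.9, 1.0], cv=5, random_state=0
).fit(X, y)

print("chosen l1_ratio:", enet.l1_ratio_, "chosen alpha:", enet.alpha_)
```

The chosen l1_ratio tells you which penalty the data favored: values near 1.0 mean the L1 (lasso) term is doing most of the work.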

OLS Regression Model:

Backward Elimination Wrapper Method

We used a wrapper method as the evaluation criterion when selecting which features to keep in the model. We fed all 79 features to the selected machine learning algorithm and, based on model performance, iteratively removed the features that contributed least, keeping those most strongly related to sale price. This helped us identify which features were important to our goal and which we should keep.
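One way to implement such a backward-elimination wrapper is scikit-learn's SequentialFeatureSelector; the sketch below runs on synthetic data and illustrates the technique rather than reproducing our exact code:

```python
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LinearRegression

# Synthetic example: 6 features, only the first 3 influence the target.
rng = np.random.default_rng(3)
X = rng.normal(size=(200, 6))
y = X[:, 0] * 3 + X[:, 1] * 2 + X[:, 2] + rng.normal(0, 0.2, size=200)

# Backward elimination: start from all features and repeatedly drop the
# one whose removal hurts cross-validated performance the least, until
# the requested number of features remains.
selector = SequentialFeatureSelector(
    LinearRegression(),
    n_features_to_select=3,
    direction="backward",
    cv=5,
).fit(X, y)

kept = np.flatnonzero(selector.get_support())
print("features kept:", kept)
```

Because each elimination step is scored by cross-validation, the wrapper judges features by model performance rather than by any single statistic.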

K-Fold Validation – 5 Splits

We then evaluated the model's performance with 5-fold cross-validation, scoring each fold with an error metric to estimate the model's accuracy.

Cross-validation scores: [0.88838404, 0.7851076, 0.84647181, 0.83752568, 0.69006605]
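A 5-split K-fold run like the one that produced the scores above can be sketched as follows; the data here is synthetic, so the fold scores will differ from ours, and the scikit-learn API is an assumption:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score

# Synthetic linear problem standing in for the Ames features.
rng = np.random.default_rng(4)
X = rng.normal(size=(250, 5))
y = X @ np.array([2.0, -1.0, 0.5, 3.0, 1.5]) + rng.normal(0, 0.4, size=250)

# 5-fold CV: the data is split into 5 parts; each part serves once as
# the held-out fold while the model trains on the other four.
cv = KFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(LinearRegression(), X, y, cv=cv)  # R^2 per fold

print("fold scores:", scores, "mean:", scores.mean())
```

A large spread across folds, like the 0.69 to 0.89 range we observed, suggests the model's performance depends on which observations land in the held-out fold.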

Random Forest Model:

We wanted to test a nonlinear model that could also rank feature importance, to compare against our previous models; we chose a random forest, which meets both criteria. The random forest is an ensemble method that fits many decision trees on random samples of the data. For a regression task like ours, each tree outputs a numeric prediction, and the forest averages them to produce its final prediction. By combining many largely uncorrelated trees, the ensemble turns a collection of weak models into one strong, stable model.

Train R^2    Test R^2    Root Mean Squared Error    Mean Squared Error    Mean Error
0.9666       0.8757      29225.4910                 854129322.44          18619.36

Figure 6. Table of accuracy and error for the random forest model

Our random forest model had high accuracy, especially on the training dataset. We were also able to identify some of the key features that drive someone to buy a home: overall quality, overall condition, and masonry veneer were among the most important features the random forest found.
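The ensemble and the importance ranking described above can be sketched like this, on synthetic data where one feature is built to dominate the target (standing in for overall quality):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic data: feature 0 dominates the target.
rng = np.random.default_rng(5)
X = rng.normal(size=(300, 5))
y = X[:, 0] * 5 + X[:, 1] + rng.normal(0, 0.3, size=300)

# An ensemble of decision trees, each fit on a bootstrap sample of the
# data; for regression the trees' predictions are averaged.
forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Impurity-based importances rank which features drive the prediction;
# they sum to 1 across all features.
importances = forest.feature_importances_
top_feature = int(np.argmax(importances))
print("importances:", importances, "top feature:", top_feature)
```

Sorting the importances is how a list like "overall quality, overall condition, masonry veneer" falls out of a fitted forest.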

Conclusion:

Our models determined which features add the most value to a house compared to others. Sellers and contractors can use this to increase a house's value by repairing or improving the features our models flag as price drivers. In this way, our models help increase real estate value for both the consumer and the supplier.

About Authors

Connor Haas

Looking for a new opportunity, I recently graduated from a data science fellowship in Manhattan. I am an electrical engineering graduate with a computer science minor. I worked as an Inside Sales Engineer in a small company that...
Abzal Bacchus

Hello! I am Abzal Bacchus and I am a Data Scientist Fellow at the New York City Data Science Academy. Feel free to contact me: www.linkedin.com/in/abzalb
Jason Hoffmeier

Jason Hoffmeier is a NYC Data Science fellow who currently resides in New York City. He has a Master's Degree in Systems Engineering from SUNY Binghamton, and has recently earned his Lean Six Sigma Black Belt for quality...
Jae Ko

Jae Ko is a NYC Data Science Fellow with a bachelor's in business administration from the University of Mary Washington. He has several years of professional experience in the financial industry as a financial advisor. His strong background...
