House Hunting with Data and Machine Learning

Our team did a thorough analysis of the Ames, Iowa housing dataset and trained several machine learning models to predict house prices. We also deployed one of these models in a web app to help potential buyers find the right house for their budget.

Data Science Background

The data was collected by Dean De Cock in 2011 as an alternative to the famous Boston Housing dataset for teaching linear regression. It consists of 2,580 sale prices for houses sold in Ames, Iowa from 2006 to 2010, along with over 80 explanatory variables for each house. The dataset can be found on Kaggle.

Our group was inspired by the popular "house hunter" style television shows, and we decided to use machine learning to help home buyers find their home. Our goal is to take someone's requirements for a house (size, number of bathrooms, etc.) and predict what that hypothetical house would cost. Our value proposition: if the predicted house does not match their budget, we can analyze what to add or remove to reach their budget and find their dream home!

Exploratory Data Analysis

To kick off the project, we set out to learn as much as we could about the 80 explanatory features in the dataset and how they relate to the sale price, so that we could use these features to inform a home buyer. Below are some of our findings.

When searching for a home, buyers tend to have optional additions in mind. We transformed many variables detailing the square footage of optional additions into binary variables indicating whether the house has the addition or not. This was done for the pool, fireplace, finished garage, porch, deck, and finished basement features. For each feature, we ran a two-tailed Student's t-test comparing sale prices of houses with and without it. Every feature was associated with a statistically significant increase in the home sale price. On average, having a pool was associated with a $78,000 difference in sale price, and having a fireplace or a finished garage also had a moderate positive correlation with the sale price.
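
For the curious, here is a minimal sketch of the kind of test we ran, assuming the data is loaded into a pandas DataFrame with the Ames dataset's raw PoolArea and SalePrice columns (the file name below is hypothetical):

```python
import pandas as pd
from scipy import stats

df = pd.read_csv("ames_housing.csv")  # hypothetical file name

# Turn raw square footage into a binary "has it or not" flag.
df["HasPool"] = (df["PoolArea"] > 0).astype(int)

# Two-tailed Student's t-test: do houses with a pool sell at a different price?
with_pool = df.loc[df["HasPool"] == 1, "SalePrice"]
without_pool = df.loc[df["HasPool"] == 0, "SalePrice"]
t_stat, p_value = stats.ttest_ind(with_pool, without_pool)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
print(f"mean difference: ${with_pool.mean() - without_pool.mean():,.0f}")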

Next, we examined the 10 features that describe the quality of the house, including overall quality. A box plot of overall quality against log sale price showed that as quality increased from 1 to 10, so did the log sale price, suggesting a linear trend between the two. To see how the overall quality feature was engineered from the individual quality metrics, we looked at the correlation between each metric and overall quality. We found that exterior quality, kitchen quality, and basement quality were all highly correlated (>0.6) with the overall quality.
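
The correlation check itself is a one-liner once the ordinal ratings are encoded as numbers; a sketch, with column names and rating codes assumed from the Ames data dictionary:

```python
# Map the dataset's ordinal rating codes to integers (codes per the data dictionary).
rating_map = {"Po": 1, "Fa": 2, "TA": 3, "Gd": 4, "Ex": 5}
quality_cols = ["ExterQual", "KitchenQual", "BsmtQual", "HeatingQC", "FireplaceQu"]
encoded = df[quality_cols].apply(lambda col: col.map(rating_map))

# Correlation of each individual quality metric with the 1-10 overall score.
print(encoded.corrwith(df["OverallQual"]).sort_values(ascending=False))
```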

Our secondary dataset contained Ames real estate data, which included the property addresses we used to find each house's latitude and longitude coordinates and its neighborhood. By merging the two datasets, we were able to compare features by neighborhood. We looked again at overall quality and asked: does the price sensitivity to quality depend on the neighborhood? We split the neighborhoods in half: those above the median price and those below. For the neighborhoods above the median, we found a sharper increase in sale price as overall quality increased, indicating that they are more sensitive to changes in quality.

There were two types of house styles in the dataset: ranch and colonial. Around 60% of the houses in the Ames housing dataset are ranch style. The popularity of each style differed by neighborhood and did not seem influenced by whether the neighborhood's sale prices were above or below the median. As Iowa State University (ISU) is the largest employer in Ames, we used the latitude and longitude coordinates with geopy to calculate the distance between each property and the university. The neighborhoods with the most convenient commute to ISU are SW ISU, Crawford, and Edwards.
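
Here is roughly how such a distance feature can be computed with geopy (the ISU coordinates and column names below are our assumptions):

```python
from geopy.distance import geodesic

# Approximate coordinates of Iowa State University's central campus (our assumption).
ISU_COORDS = (42.0267, -93.6465)

def distance_to_isu(lat, lon):
    """Great-circle distance, in miles, from a property to ISU."""
    return geodesic((lat, lon), ISU_COORDS).miles

# Latitude/Longitude columns come from the merged real estate dataset.
df["DistToISU"] = [distance_to_isu(lat, lon)
                   for lat, lon in zip(df["Latitude"], df["Longitude"])]
```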

In conclusion, as overall quality increased, so did the sale price, and more affluent neighborhoods were more sensitive to changes in quality. Having a fireplace or a finished garage was moderately correlated with sale price, and 60% of the houses in the dataset were ranch style. We used these findings to inform our feature selection and engineering for our models.

Pre-Processing

Data quality is an important topic in machine learning - the old adage of "garbage in, garbage out" applies here. Therefore, we cleaned the data to remove as many low-quality rows as possible. For instance, the data included home sales under foreclosure, houses that were severely damaged, and homes zoned for agriculture on 40 acres of land. None of these are appropriate for analyzing an average family home, so they were dropped. In total, 300 rows were removed from the data. This was acceptable because 2,250 rows still remained, which is plenty for data analysis and machine learning.
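
A sketch of the kind of filtering involved, with column names and category codes assumed from the Ames data dictionary and illustrative thresholds:

```python
# Illustrative filters; the exact rules we applied were tuned row by row.
clean = df[
    (df["SaleCondition"] == "Normal")   # drop foreclosures and other abnormal sales
    & (df["MSZoning"] != "A")           # drop agricultural zoning
    & (df["LotArea"] < 40 * 43560)      # drop extreme lots (43,560 sq ft per acre)
]
print(f"{len(df) - len(clean)} rows removed, {len(clean)} remaining")
```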

Tree-Based Machine Learning

The advantage of using tree-based machine learning models is early feature selection. By analyzing the features higher up in the decision tree, we can draw conclusions about which features are most important for predicting the house price. This is convenient when we have over 80 variables to consider.

Feature Engineering Pt. 1

We were guided in our feature selection and feature engineering by the information we acquired during exploratory data analysis. For instance, we created a curb appeal feature that was a combination of 11 variables: masonry veneer area, lot frontage, lot shape, land contour, lot configuration, land slope, roof style, roof material, exterior material, exterior quality, and exterior condition. We also used the latitude and longitude coordinates of each house to calculate the distance to the big college in town (Iowa State).
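
The exact recipe for the curb appeal score is a design choice; one hypothetical way to roll several exterior-related variables into a single score looks like this (a small subset of the 11 variables, with assumed column names):

```python
import pandas as pd

rating_map = {"Po": 1, "Fa": 2, "TA": 3, "Gd": 4, "Ex": 5}

# Four of the eleven components, with assumed column names.
parts = pd.DataFrame({
    "veneer": df["MasVnrArea"].fillna(0),
    "frontage": df["LotFrontage"].fillna(0),
    "ext_qual": df["ExterQual"].map(rating_map),
    "ext_cond": df["ExterCond"].map(rating_map),
})

# Standardize each component so they share a scale, then average into one score.
df["CurbAppeal"] = ((parts - parts.mean()) / parts.std()).mean(axis=1)
```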

Model Results

We used three different tree-based models: Random Forest, Gradient Boosting, and XGBoost. All three were tuned using grid searches with cross-validation after splitting the data into training and test sets, and all three performed similarly in terms of accuracy. The average test score was around 90%, meaning the model could, on average, predict a house's price to within about 90% of its sale price. We also saw a 6% gap between the train and test scores, so we may have had some overfitting. However, the Gradient Boosting model performed slightly better than the other two, with slightly higher train and test scores and a slightly lower gap.
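
As a sketch of the tuning setup (assuming a prepared feature matrix X and price target y; the hyperparameter values below are illustrative):

```python
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV, train_test_split

# Hold out a test set, then grid-search with cross-validation on the training set.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

param_grid = {
    "n_estimators": [200, 500],
    "max_depth": [2, 3, 4],
    "learning_rate": [0.05, 0.1],
}
search = GridSearchCV(GradientBoostingRegressor(random_state=42),
                      param_grid, cv=5, scoring="r2")
search.fit(X_train, y_train)

print("best params:", search.best_params_)
print("train R^2:", search.best_estimator_.score(X_train, y_train))
print("test R^2: ", search.best_estimator_.score(X_test, y_test))
```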

Feature Selection Pt. 1

Now that we had an accurate model, we could use the decision trees to analyze the important variables.

[Figure: the top three levels of a single decision tree from the Random Forest model]

The above diagram shows the top three levels of a single decision tree from the Random Forest model. Decision trees split on the features that return the largest information gain. In the tree above, the first split is on the overall quality feature, so the cut on this feature provided the most information. In the second level, the splits are on curb appeal and neighborhood. This reveals that for homes with overall quality below 7.5, curb appeal is the most important feature, while for homes with quality above 7.5, the most important feature is the neighborhood. Note: this could be insightful for a real estate expert, but it may be hard for new home buyers to interpret.

On the other hand, something anyone can interpret is a ranking of which features are the most important. The graph below shows the ranking of feature significance in predicting sale price, and we can use it to inform our next round of modeling.

[Figure: ranking of feature importance in predicting sale price]
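
Pulling that ranking out of a fitted scikit-learn ensemble is straightforward; a sketch, assuming `model` is the fitted forest and `X` the training features:

```python
import pandas as pd

# Impurity-based importances from the fitted tree ensemble, highest first.
importances = pd.Series(model.feature_importances_, index=X.columns)
print(importances.sort_values(ascending=False).head(10))
```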

Linear Regression

The advantage of using linear regression for house pricing is interpretability. As we saw earlier, tree-based models can be difficult to interpret, but a linear model outputs a formula of the familiar "y = m*x + b" form from elementary algebra. For example, a linear model to predict house price might generate a formula like "(house price) = $20*(square footage) + $1000*(number of bedrooms) + $500*(number of bathrooms)".
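
To make that concrete, here is a toy sketch that prints a fitted scikit-learn model in exactly that form (X and y are assumed to be a prepared feature matrix and price target):

```python
from sklearn.linear_model import LinearRegression

model = LinearRegression().fit(X, y)

# Print the fitted model in the same "m*x + b" form described above.
terms = " + ".join(f"${m:,.0f}*({x})" for m, x in zip(model.coef_, X.columns))
print(f"(house price) = {terms} + ${model.intercept_:,.0f}")
```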

Feature Engineering Pt. 2

In order to make the model useful to a home buyer, we had to create more variables based on typical house-buyer questions: Does it have a finished basement? Does it have a garage? Does it have a fireplace? These became binary variables, where "1" means the house has the feature and "0" means it does not.
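
Creating these flags from the raw square footage and count columns takes a few lines of pandas (column names assumed from the Ames data dictionary):

```python
# Each flag is 1 if the house has the feature and 0 otherwise.
df["HasFireplace"] = (df["Fireplaces"] > 0).astype(int)
df["HasGarage"] = (df["GarageArea"] > 0).astype(int)
df["HasFinishedBasement"] = (df["BsmtFinSF1"] > 0).astype(int)
df["HasDeck"] = (df["WoodDeckSF"] > 0).astype(int)
```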

Feature Selection Pt. 2

After combining the important features from the tree-based models with our new binary features, we had 40 features to consider for linear regression. We suspected that some of these features were correlated, which could be a problem: linear regression assumes the input variables are not strongly correlated with one another (multicollinearity), since correlated inputs make the coefficient estimates unstable.

In order to systematically drop redundant variables, we used a series of penalized linear regressions, specifically Lasso regression. In a Lasso regression, the model sets the coefficient of an input variable to zero if that variable is redundant. Using this strategy, we narrowed the number of features down to just 15.
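
A sketch of that selection step with scikit-learn's LassoCV, which picks the penalty strength by cross-validation (X and y are the prepared features and target):

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Standardize first so the L1 penalty treats every coefficient on the same scale,
# then let LassoCV choose the penalty strength by cross-validation.
lasso = make_pipeline(StandardScaler(), LassoCV(cv=5, random_state=42)).fit(X, y)

# Variables whose coefficients were driven to zero are dropped as redundant.
coefs = lasso.named_steps["lassocv"].coef_
kept = [name for name, c in zip(X.columns, coefs) if not np.isclose(c, 0)]
print(f"{len(kept)} features survive the penalty:", kept)
```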

Final Model

We found that the five most important features of a house in Ames, Iowa are:

  1. Total Living Area
  2. Whether or not it has central air
  3. Total Lot area
  4. Whether or not it has a finished basement
  5. Overall quality of the house (1-10 scale)

We used two scores to measure our model: the coefficient of determination (R^2) and the mean absolute error (MAE). The R^2 was 88%, which means that our model explains 88% of the variation in house price. The MAE was $11,600, which means that, on average, our model predicts within $11,600 of the sale price.
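
Scoring a fitted model this way takes two calls from sklearn.metrics (assuming predictions are in dollars and that final_model, X_test, and y_test are the held-out pieces):

```python
from sklearn.metrics import mean_absolute_error, r2_score

preds = final_model.predict(X_test)
print(f"R^2: {r2_score(y_test, preds):.2f}")               # share of price variation explained
print(f"MAE: ${mean_absolute_error(y_test, preds):,.0f}")  # average dollar error
```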

Web Application

Now that we had a linear model tuned for our target user, we needed a dashboard so home buyers could interact with it. We built the dashboard with the Python Dash library and deployed the app on Heroku. The app can be accessed here: https://ames-housing-app.herokuapp.com/
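
A minimal sketch of the app's skeleton (the real dashboard has many more inputs, and predict_price is a hypothetical helper wrapping the fitted model):

```python
from dash import Dash, Input, Output, dcc, html

app = Dash(__name__)
app.layout = html.Div([
    html.H1("Ames House Hunter"),
    dcc.Input(id="budget", type="number", placeholder="Budget ($)"),
    dcc.Slider(id="quality", min=1, max=10, step=1, value=5),
    html.Div(id="predicted-price"),
])

@app.callback(Output("predicted-price", "children"),
              Input("budget", "value"), Input("quality", "value"))
def update_price(budget, quality):
    price = predict_price(quality=quality)  # hypothetical wrapper around the fitted model
    gap = (budget or 0) - price
    status = "under" if gap >= 0 else "over"
    return f"Predicted price: ${price:,.0f} ({status} budget by ${abs(gap):,.0f})"

if __name__ == "__main__":
    app.run_server(debug=True)
```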

The dashboard allows users to enter their budget and specify the features they would like in their house, such as gross living area, lot area, neighborhood, and quality ratings.

The selected features become inputs to the linear model, and the output is a predicted house price. The dashboard compares the user's budget with the predicted price and recommends 10 changes the user can make to bring the predicted price closer to their budget. The recommendation system adds or removes features depending on whether the user is under or over budget and predicts a new price; the 10 changes that bring the predicted price closest to the budget are shown to the user.
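
In spirit, the recommender is a simple greedy search; a hypothetical sketch of the core loop (function and variable names are ours, not the production code):

```python
import pandas as pd

def recommend(base_features, budget, model, toggleable, k=10):
    """Rank single-feature changes by how close the new prediction lands to the budget."""
    candidates = []
    for feat in toggleable:
        trial = base_features.copy()
        trial[feat] = 1 - trial[feat]  # add the feature if absent, remove it if present
        price = model.predict(pd.DataFrame([trial]))[0]
        candidates.append((feat, price))
    # The k changes whose predicted prices land closest to the budget.
    return sorted(candidates, key=lambda c: abs(c[1] - budget))[:k]
```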

The skills I demoed here can be learned through taking the Data Science with Machine Learning bootcamp at NYC Data Science Academy.

About Authors

Stephen Kita

Stephen is a biomedical engineer who likes to work with data and develop innovative healthcare products. He is an excellent problem-solver with a diverse background in entrepreneurship.

Brandon Ryu

Experienced Engineer who enjoys using strong research and technical background to guide healthcare software products that aim to improve patient care around the world. Strong interdisciplinary professional who focuses on collaborating with users, stakeholders and developers to drive...

Anjali Pathak

Geetanjali Pathak is a graduate of the NYC Data Science Academy. Geetanjali holds a dual BA/BS (Baccalaureus Artium et Scientiae) degree in interdisciplinary studies (concentration in neuroscience) from the University of South Carolina Honors College. She is a...

Isabel Alvarez de Lugo

Isabel Alvarez de Lugo is an experienced data professional having worked in the Life Sciences and Real Estate sectors. As a data analyst at a fast-paced biotech accelerator, she led end-to-end development efforts to provide program outcomes and...
