Machine Learning Driven Predictions of House Prices in Ames

Posted on Dec 29, 2021

The skills the authors demonstrated here can be learned through the Data Science with Machine Learning bootcamp at NYC Data Science Academy.

Team Members: Aidan Au, Jacob Smith, and Jordan Hicks

Background & Research Question

When investing in a house, investors often want to maximize their return by buying the house for less than it is worth. Of course, it isn't that simple: they also have to account for fees from middlemen, so it can be hard to determine how much money they'll actually make on a house. This is where machine learning comes in. With an accurate estimate of how much a house could reasonably sell for, an investor can determine whether, after the initial cost of the house and all fees are paid, they will make any profit, and whether that potential profit is worth the risk.

The goal of this project is to predict what a house should sell for in Ames, Iowa as closely as possible by minimizing the RMSE (Root Mean Squared Error), a measure of how far the model's predictions fall from the real values on average.
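Formally, for $n$ houses with actual prices $y_i$ and predicted prices $\hat{y}_i$:

$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2}$$

Because we ultimately had the models predict log prices (see Target Transformation below), the RMSE values reported later in this post are on the log scale.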

Data Source and Data Cleaning Process

All of the data used here comes from the Ames, Iowa housing dataset on Kaggle, which is linked in the Sources section below.

Before we could select the best features, we had to deal with a lot of missing information, and many variables were described with words rather than numbers, which can't be fed into a machine learning model without some pre-processing.

Our process was to first impute any missing information, then split the data into categorical and numerical variables, which we would dummify or Box-Cox transform, respectively. After this we standardized all of the data and performed feature selection. That's a lot of information to digest, so let's explain each step of the process.

Data Imputation

The first step was to determine how to handle missing data. After examining the variables more closely, we found three different categories of variables and handled each accordingly (see the sketch after this list):

  • Numerical variables: For these, we filled in missing values with the median of the column.
  • Categorical variables where the feature may not exist: For these cases, a missing value likely meant the feature did not apply, so we created a new category "None".
  • Categorical variables where the feature almost certainly exists: When it was very unlikely that a house simply lacked the feature (such as the type of electrical system), we used the mode, or most common value in that column, as a prediction for what the house was likely to have.
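In code, this three-way imputation might look like the following sketch using pandas. The column names come from the Kaggle data, but the exact lists we used were longer:

```python
import pandas as pd

df = pd.read_csv("train.csv")  # the Kaggle Ames training data

# 1. Numerical variables: fill with the column median.
num_cols = df.select_dtypes(include="number").columns
df[num_cols] = df[num_cols].fillna(df[num_cols].median())

# 2. Categorical variables where NA means the feature is absent
#    (e.g. no pool, no alley access): add an explicit "None" category.
absent_cols = ["PoolQC", "Alley", "Fence", "FireplaceQu"]  # illustrative subset
df[absent_cols] = df[absent_cols].fillna("None")

# 3. Categorical variables that every house should have: fill with the mode.
for col in ["Electrical"]:
    df[col] = df[col].fillna(df[col].mode()[0])
```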

Feature Transformation

Numerical Variables

Many machine learning models work best with roughly normal distributions, so we used a Box-Cox transformation to reduce the skewness of all the numerical variables.
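Continuing the sketch above, a common way to do this uses scipy's boxcox1p on the most skewed columns. The 0.75 skewness threshold and fixed λ = 0.15 below are conventional choices for this dataset, not necessarily the exact values we used:

```python
from scipy.stats import skew
from scipy.special import boxcox1p

# Measure the skewness of each numeric column and transform the skewed ones.
skewness = df[num_cols].apply(lambda col: skew(col.dropna()))
skewed_cols = skewness[skewness.abs() > 0.75].index

for col in skewed_cols:
    # boxcox1p applies Box-Cox to (1 + x), so columns containing zeros are fine.
    df[col] = boxcox1p(df[col], 0.15)
```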

Categorical Variables

In order to plug categorical variables into linear models, we needed to turn the different categories into numbers. We used dummification to create additional columns of zeros and ones indicating whether a house has a specific feature. For example, instead of a single overall quality variable, a house would now have the variables "low overall quality" and "high overall quality", with a 1 in whichever column matched the feature it actually has.
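A toy example of dummification with pandas (the column name here is made up for illustration):

```python
import pandas as pd

houses = pd.DataFrame({"OverallQualBand": ["low", "high", "high"]})

# One new 0/1 column per category; each house gets a 1 in its matching column.
print(pd.get_dummies(houses, columns=["OverallQualBand"], dtype=int))
#    OverallQualBand_high  OverallQualBand_low
# 0                     0                    1
# 1                     1                    0
# 2                     1                    0
```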

Feature Selection

The raw data had 79 different explanatory variables, and 2580 different homes. After a forward-stepwise feature selection process, we ended up using 47 variables in our machine learning models. Some of the variables with the highest correlation to sale price were the gross living area, the house's overall quality rating, the total square footage of the basement, and the car capacity of the garage.
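Scikit-learn's SequentialFeatureSelector implements this kind of greedy forward search. The sketch below, continuing from the earlier snippets, shows the idea, though our actual selection procedure may have differed in its details:

```python
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LinearRegression

X = df.drop(columns="SalePrice")  # processed features from the steps above
y = df["SalePrice"]

# Greedy forward selection: start with zero features and repeatedly add the
# one that most improves cross-validated error, stopping at 47 features.
selector = SequentialFeatureSelector(
    LinearRegression(),
    n_features_to_select=47,
    direction="forward",
    scoring="neg_root_mean_squared_error",
    cv=10,
)
selector.fit(X, y)
X_selected = X.loc[:, selector.get_support()]
```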

Target Transformation

When we looked at the price distribution of houses, it turned out that we had a lot of rightward skew. This can make it harder for our models to predict the price of a house, so to counteract it, we decided to have the models predict the natural log of the price. This let us predict prices within a roughly normal distribution and reduced the RMSE.
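In code, the target transformation is a one-liner. The log1p/expm1 pair shown here is a common convention; a plain natural log works the same way, since all sale prices are positive:

```python
import numpy as np

# Train on log(1 + price); expm1 inverts predictions back to dollars.
y_log = np.log1p(df["SalePrice"])
# After fitting a model on y_log:
#   predicted_price = np.expm1(model.predict(X_new))
```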

Model Selection

Once we had finished all of the data cleaning and feature selection, we were finally ready to test out different machine learning models. We split the data into a portion we would train each model on and a portion we would use to evaluate how the model performed. Even then, we still had to figure out which hyperparameters worked best for the data we had.
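A sketch of that split, continuing from the earlier snippets (the 80/20 ratio and random seed are illustrative, not necessarily what we used):

```python
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(
    X_selected, y_log, test_size=0.2, random_state=42
)
```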

To find these hyperparameters we used a combination of grid searching and the Optuna library, although the details of those aren't crucial for understanding the results. The split above is what produces the difference between the "train" and "test" scores listed in the table below; we were looking for the lowest possible RMSE test score.
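For the curious, here is a minimal sketch of what tuning one model (the RBF-kernel SVR) with Optuna can look like; the search ranges are illustrative rather than the ones we actually used:

```python
import optuna
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

def objective(trial):
    # Sample candidate hyperparameters on a log scale.
    model = SVR(
        kernel="rbf",
        C=trial.suggest_float("C", 1e-2, 1e2, log=True),
        gamma=trial.suggest_float("gamma", 1e-4, 1.0, log=True),
        epsilon=trial.suggest_float("epsilon", 1e-3, 1.0, log=True),
    )
    # 10-fold cross-validated RMSE on the training portion.
    scores = cross_val_score(model, X_train, y_train, cv=10,
                             scoring="neg_root_mean_squared_error")
    return -scores.mean()

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=100)
print(study.best_params, study.best_value)
```

After testing several different models this way, we found these final scores: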

| Model (10-Fold CV) | R² Train | R² Test (10-Fold CV) | RMSE Train | RMSE Test | RMSE Test Rank |
|---|---|---|---|---|---|
| SVR (RBF Kernel/Gaussian) | 93.0893% | 92.3379% | 0.100817 | 0.109607 | 1 |
| CatBoost | 92.9744% | 92.0410% | 0.101543 | 0.11171 | 2 |
| SVR (Linear) | 92.0530% | 91.8460% | 0.102742 | 0.11307 | 3 |
| Multiple Linear Regression | 92.8693% | 91.6591% | 0.101799 | 0.114359 | 4 |
| Ridge | 92.8705% | 91.6577% | 0.101742 | 0.114369 | 5 |
| Lasso | 92.8694% | 91.6581% | 0.101796 | 0.114394 | 6 |
| GBM | 91.4051% | 91.5348% | 0.112475 | 0.115208 | 7 |
| XGBoost | 91.3796% | 91.3506% | 0.112594 | 0.116455 | 8 |
| LightGBM | 91.0295% | 90.2929% | 0.115007 | 0.12337 | 9 |
| Random Forest | 90.0454% | 88.2390% | 0.121267 | 0.135796 | 10 |

Recommendations

As you can see from the table above, an SVR (Support Vector Regression) model with a Gaussian (RBF) kernel performed best at predicting house prices out of these models. The CatBoost model, a gradient boosting method built on decision trees, is also a good option. While an SVR model with a linear kernel also has one of the higher scores, it can be very computationally expensive to train, so we can't recommend using it.
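Putting it together, a sketch of fitting the winning model with the best hyperparameters found by the tuning study above (since the features were already standardized during pre-processing, no extra scaling step is shown here):

```python
import numpy as np
from sklearn.svm import SVR

# Refit the RBF-kernel SVR with the tuned hyperparameters.
best_model = SVR(kernel="rbf", **study.best_params)
best_model.fit(X_train, y_train)

# Predictions come out on the log scale; expm1 converts them back to dollars.
predicted_prices = np.expm1(best_model.predict(X_test))
```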

Sources

Dataset: https://www.kaggle.com/c/house-prices-advanced-regression-techniques

Github repo for project: https://github.com/hzeig/ames-housing-predictions.git
