Studying Data to Predict Housing Prices in Ames, Iowa

The skills the authors demonstrated here can be learned through taking the Data Science with Machine Learning bootcamp with NYC Data Science Academy.

Introduction

For any home buyer, the first concern that comes to mind is the price of the house. Many factors influence a house's price, such as the size of the house, the type of garage, and the number of bedrooms. In this study, our purpose is to identify the factors that influence house sale prices in Ames, Iowa, and to predict the sale price using machine learning.

We used the dataset from a Kaggle competition, compiled by Dean De Cock. It includes 1,460 observations with 80 features for houses sold in Ames between 2006 and 2010.

To see our code, please click here.

Data Exploration and Cleaning

To create a successful model, we must first understand the data. So, we evaluated all features one by one to determine which would need to be manipulated in our data preparation step.


Correlation Between Sale Price and Features 

We created a correlation heat map to identify relationships between the sale price and other features. Then, we selected the most correlated features and created an additional heat map that is shown below:

[Figure: correlation heat map of the features most correlated with sale price]
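For readers who want to reproduce this step, the sketch below shows one way to build such a heat map with pandas and seaborn; the file path and the number of features kept are placeholders rather than our exact choices.

```python
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# Load the Kaggle training data (path is a placeholder)
df = pd.read_csv("train.csv")

# Correlation of every numeric feature with every other numeric feature
corr = df.select_dtypes(include="number").corr()

# Keep the features most correlated with SalePrice and plot a focused heat map
top = corr["SalePrice"].abs().sort_values(ascending=False).head(10).index
sns.heatmap(df[top].corr(), annot=True, fmt=".2f", cmap="coolwarm")
plt.title("Correlation of top features with SalePrice")
plt.show()
```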

Linear Regression

We used linear regression to examine the relationship between the size of the house (square footage), the age of the house, and its sale price (the dependent variable).

[Figure: linear regression of sale price on square footage and age]
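A minimal sketch of that regression with scikit-learn, assuming the standard Kaggle column names (GrLivArea, YearBuilt, YrSold, SalePrice); the file path is a placeholder.

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.read_csv("train.csv")              # placeholder path to the Kaggle training set

# Age of the house at the time of sale
df["Age"] = df["YrSold"] - df["YearBuilt"]

X = df[["GrLivArea", "Age"]]               # above-grade living area and age
y = df["SalePrice"]                        # dependent variable

lm = LinearRegression().fit(X, y)
print(dict(zip(X.columns, lm.coef_)))      # effect of size and age on price
print("R^2:", round(lm.score(X, y), 3))
```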

Data from Feature Engineering

We spent a lot of time looking at relationships between variables to understand collinearity and whether or not a variable would prove to be truly incremental to the performance of our model.

The scatter plot below shows the relationship between Living Area and Sale Price. We color-coded the scatter by neighborhood and observed that there are certain neighborhoods that collectively appear to have higher and lower prices.  

We originally decided to dummify the neighborhood feature into multiple classes, though we knew it would increase the dimensionality of our model.  Upon inspection, we found that certain neighborhoods appeared to explain sale price, while others did not. Could there be a way for us to capture this benefit without penalizing our model with added dimensionality?
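The sketch below illustrates both the color-coded scatter and the cost of dummification using pd.get_dummies; it is meant as an illustration of the trade-off rather than our exact code.

```python
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

df = pd.read_csv("train.csv")              # placeholder path

# Living area vs. sale price, color-coded by neighborhood
sns.scatterplot(data=df, x="GrLivArea", y="SalePrice",
                hue="Neighborhood", legend=False, s=15)
plt.show()

# Dummifying Neighborhood adds one column per neighborhood to the design matrix
nbhd_dummies = pd.get_dummies(df["Neighborhood"], prefix="Nbhd", drop_first=True)
print(nbhd_dummies.shape)                  # (1460, number of neighborhoods - 1)
```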

Sale Price vs Year Built

We refined our scatterplot, again color-coded by neighborhood, but this time capturing the year a home was built.  What we found was interesting: neighborhoods were constructed at roughly the same time, as can be seen by the vertical clustering of color. That made sense to us. As cities are built out, specific neighborhoods are created over time.  

Sale Price vs Square Footage

What’s more, as the graph below shows, we found better clustering, i.e., less dispersion, when we plotted Age than when we plotted Neighborhood.  So our insight here was that we didn’t need a broad neighborhood dummification. Essentially, YearBuilt, or the Age of the home, offered a convenient categorical representation of neighborhood.
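One simple way to check that intuition is to compare the in-sample fit of an Age-based model against a neighborhood-dummy model; the comparison below is an illustrative sketch, not the exact test we ran.

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.read_csv("train.csv")              # placeholder path
y = df["SalePrice"]

# Model 1: living area plus the age of the home
X_age = df[["GrLivArea"]].assign(Age=df["YrSold"] - df["YearBuilt"])

# Model 2: living area plus one-hot encoded neighborhood
X_nbhd = pd.concat([df[["GrLivArea"]],
                    pd.get_dummies(df["Neighborhood"], drop_first=True)], axis=1)

for name, X in [("Age", X_age), ("Neighborhood dummies", X_nbhd)]:
    r2 = LinearRegression().fit(X, y).score(X, y)
    print(f"{name}: {X.shape[1]} predictors, in-sample R^2 = {r2:.3f}")
```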

This analysis didn't always allow us to reduce variables. Below, we examine the relationship between Quality score and Remodel Age. The thinking here was that you should see an improvement in quality for a home that was remodeled in the last few years. Again, perhaps one variable could capture the same information.

Findings

When we looked further, we found that although both variables correlated nicely with the sale price, there was not much of a joint relationship between them: Quality Ratings were widely dispersed within each year remodeled. We therefore decided not to drop either variable, as each offered unique information to our model that the other could not.

This was mainly the process we went through, variable by variable, gradually but deliberately reducing our model size, eliminating collinearity wherever we thought we could detect it.  We used visualizations and statistical testing to help us with this.

Visualization and Testing

First, we combined variables. For example, we found 12 features characterizing Basements.  We determined that the Basement Finish Type variables were essentially capturing Basement Exposure, Basement Condition, and Basement Quality.  We used this as a flag for Basement Square Footage.

If the ranking was high, we aggregated the Basement Square Footage into actual living space.  If the ranking was low, we allocated the Basement Square Footage to another variable that we called Recreational Space, which was itself an aggregator of all non-living square footage including porch/deck/pool etc. We aggregated and converted the bathroom features into two variables: number of toilets (a representation of half bath) and number of showers (a representation of full bath).  
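The snippet below is one possible reading of that aggregation; the finish-type cut-off, and exactly which columns roll into Recreational Space, are illustrative assumptions rather than our final recipe.

```python
import pandas as pd

df = pd.read_csv("train.csv")              # placeholder path

# High basement finish -> count the finished area as living space,
# otherwise treat it as recreational space along with porch/deck/pool area.
good_finish = {"GLQ", "ALQ"}               # illustrative cut-off for a "high" finish type
is_living = df["BsmtFinType1"].isin(good_finish)

df["LivingSF"] = df["GrLivArea"] + df["BsmtFinSF1"].where(is_living, 0)
df["RecSF"] = (df["BsmtFinSF1"].where(~is_living, 0)
               + df["WoodDeckSF"] + df["OpenPorchSF"]
               + df["ScreenPorch"] + df["PoolArea"])

# Collapse the bathroom counts into showers (full baths) and toilets (half baths)
df["Showers"] = df["FullBath"] + df["BsmtFullBath"]
df["Toilets"] = df["HalfBath"] + df["BsmtHalfBath"]
```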

We examined value counts and variances in variables and rigorously debated their importance.   

Among the 1,460 observations in the data set, we noted that 1,459 of them had Utilities equal to AllPublic; 1,450 Street types were the same, 1,311 Land Contours were level, and only 91 alleys were either paved or gravel.  Our sense was that without enough variance, these variables would not add much to our model. In many cases, we created binomial flags or simply eliminated features altogether.
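A sketch of that check and of the resulting flags follows; which features we ultimately dropped versus flagged is a judgment call, and the columns below are just examples.

```python
import pandas as pd

df = pd.read_csv("train.csv")              # placeholder path

# How concentrated are these features?
for col in ["Utilities", "Street", "LandContour", "Alley"]:
    print(df[col].value_counts(dropna=False).head(2), "\n")

# Near-constant features: drop them or collapse them into binary flags
df = df.drop(columns=["Utilities", "Street"])               # essentially no variance
df["HasAlley"] = df["Alley"].notna().astype(int)            # paved/gravel alley vs. none
df["IsLevel"] = (df["LandContour"] == "Lvl").astype(int)    # level lot vs. not
df = df.drop(columns=["Alley", "LandContour"])
```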

With our remaining feature set, we used a Box-Cox transformation to help normalize our data. In some cases, such as Living Area, we found better results with a basic log transformation. The resulting variables approach Gaussian distributions, as can be seen below (blue is before transformation, red is after transformation).

[Figure: Living Area distribution before (blue) and after (red) transformation]
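A sketch of the transformations, assuming scipy for Box-Cox and numpy's log1p for Living Area; log-transforming the target itself is an assumption that lines up with Kaggle's RMSLE metric rather than something shown above.

```python
import numpy as np
import pandas as pd
from scipy import stats

df = pd.read_csv("train.csv")              # placeholder path

# Box-Cox requires strictly positive values; the lambda is fitted from the data
df["LotArea_bc"], lot_lambda = stats.boxcox(df["LotArea"])

# For Living Area, a plain log transform gave better results in our case
df["GrLivArea_log"] = np.log1p(df["GrLivArea"])

# The target is also log-transformed (assumption consistent with the RMSLE metric)
df["LogSalePrice"] = np.log1p(df["SalePrice"])
```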

After normalization, we identified some apparent outliers and excluded them from our analysis; some remain in Square Footage and many in Lot Area, even after the transformation.

Data Modeling/Result

We initially ran multiple linear regression with lasso and ridge penalization, as well as Naive Bayes, and found the best results with the multiple linear regression. Below is the cross-validation of ridge and lasso regression that allowed us to tune our hyperparameters.
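A sketch of that tuning with scikit-learn's LassoCV and RidgeCV; the penalty grid and the placeholder feature file are illustrative, not our actual values.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LassoCV, RidgeCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder: X is the fully numeric, engineered feature matrix; y is log SalePrice
X = pd.read_csv("features.csv")
y = np.log1p(pd.read_csv("train.csv")["SalePrice"])

alphas = np.logspace(-4, 2, 50)            # illustrative penalty grid

lasso = make_pipeline(StandardScaler(), LassoCV(alphas=alphas, cv=5)).fit(X, y)
ridge = make_pipeline(StandardScaler(), RidgeCV(alphas=alphas, cv=5)).fit(X, y)

print("Best lasso alpha:", lasso.named_steps["lassocv"].alpha_)
print("Best ridge alpha:", ridge.named_steps["ridgecv"].alpha_)
```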

In the following graph, we didn't see features dropping out at the same time, indicating that the variables were adding independent information.  We further modeled the data with and without the Overall Quality score, to determine the independent predictive value of that feature and ascertain multicollinearity.  

Our base model yielded an RMSLE (Root Mean Squared Logarithmic Error) that placed us in a respectable Kaggle position, but further refinement and additional modeling with XGBoost, LightGBM, and KRR (Kernel Ridge Regression) produced better results with smaller errors, allowing us to reach the top 20% of all Kaggle rankings. With additional time, we believe further feature engineering and additional experimentation with outlier removal could yield even better results.
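As a hedged example of how one of those models can be scored, the sketch below cross-validates an XGBoost regressor on the log target, where fold-level RMSE corresponds to the competition's RMSLE; the hyperparameters are illustrative, not our tuned values.

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import cross_val_score
from xgboost import XGBRegressor

# Placeholder engineered feature matrix and log-transformed target
X = pd.read_csv("features.csv")
y = np.log1p(pd.read_csv("train.csv")["SalePrice"])

# With a log target, root mean squared error of the fold predictions is the RMSLE
model = XGBRegressor(n_estimators=1000, learning_rate=0.05,
                     max_depth=3, subsample=0.8, random_state=0)

scores = cross_val_score(model, X, y, cv=5,
                         scoring="neg_root_mean_squared_error")
print("CV RMSLE: %.4f (+/- %.4f)" % (-scores.mean(), scores.std()))
```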

About Authors

Hong Yang (Jason) Wang

Certified Data Scientist skilled in cleaning, visualizing and interpreting data with machine learning and statistical analysis in R and Python. Team player who performs well under pressure, receptive to feedback, pays meticulous attention to detail and always keeps...

Marc Hasson

As an investment research professional, much of my work over the last 17 years has centered around developing a deep understanding of businesses based on senior management interactions, financial modeling, forecasting, and primary due diligence. Data has also been...
