Predicting Housing Prices in Ames, Iowa

Introduction
The real estate market is constantly changing with moving economic factors (supply, demand, cost of living index, unemployment rates, etc.). Real estate buyers and sellers must constantly adjust prices to stay competitive in the marketplace. To make better-informed pricing decisions, it is imperative to understand trends and be able to make predictions about the future.

Overview of Data
The primary dataset for this project was taken from Kaggle and contained 2,580 observations of house sales in Ames, Iowa. The dataset contained 80 features (37 numeric features and 43 categorical string features), with recorded sale price as the target variable. The sales observations span January 2006 to March 2010. A supplemental Ames real estate dataset with information up to 2020 was used to incorporate addresses and geo-coordinates for each observation.

EDA
We conducted an exploratory data analysis to view the characteristics of this dataset. First, we plotted a histogram to view the distribution of the sale price.

The figure above shows the distribution is skewed right due to a small number of higher-priced houses. When houses with sale prices greater than $300,000 are filtered out, the distribution appears approximately normal.
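As a rough sketch of this check, the skew can be computed directly with pandas; the log-normal prices below are only a synthetic stand-in for the Kaggle data:

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for SalePrice; the project loaded the Kaggle CSV instead.
rng = np.random.default_rng(0)
prices = pd.Series(np.exp(rng.normal(12, 0.4, 2580)), name="SalePrice")

print(f"skew (all homes):     {prices.skew():.2f}")
filtered = prices[prices <= 300_000]
print(f"skew (<= $300k only): {filtered.skew():.2f}")

# Histogram sketch (matplotlib):
# import matplotlib.pyplot as plt
# prices.plot.hist(bins=50); plt.xlabel("SalePrice"); plt.show()
```

Filtering out the right tail brings the skew down toward zero, matching the near-normal shape described above.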

Next, we made a correlation heat map of some of the numeric type features.

The lighter boxes represent higher correlation between the two corresponding features, and the darker boxes represent lower correlation. PoolArea appears to have the least correlation with SalePrice, while SalePrice was more strongly correlated with GrLivArea, TotalBsmtSF, 1stFlrSF, and GarageArea. We also noticed that GrLivArea was somewhat correlated with 2ndFlrSF and 1stFlrSF, which proved useful for feature engineering.
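The heat map boils down to a pairwise correlation matrix, which pandas computes directly; the columns below are synthetic stand-ins chosen to mimic the relationships described (GrLivArea driving SalePrice, PoolArea nearly uncorrelated):

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for a few of the numeric Ames columns.
rng = np.random.default_rng(1)
n = 500
gr_liv = rng.normal(1500, 400, n)
df = pd.DataFrame({
    "GrLivArea": gr_liv,
    "1stFlrSF": 0.6 * gr_liv + rng.normal(0, 150, n),
    "PoolArea": rng.choice([0, 0, 0, 0, 500], n) * 1.0,
    "SalePrice": 100 * gr_liv + rng.normal(0, 30_000, n),
})
corr = df.corr()
print(corr.round(2))

# Heat map sketch (seaborn):
# import seaborn as sns; sns.heatmap(corr)
```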

Next, we took a look at the average sale price over time.

We can see from the chart above that the prices were generally increasing until the market crashed in 2009 and the average sale price plummeted. We also took a look at how the seasons affect transactions.

The graph above shows that the average sale price oscillates seasonally, hitting its peak in the summer and reaching its trough in the winter. Similarly, the second graph above shows the number of homes sold by season. The number of sales spikes in the summer and reaches its lows in the winter.
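A minimal sketch of these time-based aggregations, assuming sale dates are reconstructed from month- and year-of-sale columns (the dates and prices here are synthetic stand-ins):

```python
import numpy as np
import pandas as pd

# Synthetic sales, Jan 2006 - Mar 2010, standing in for the real MoSold/YrSold data.
rng = np.random.default_rng(2)
dates = pd.date_range("2006-01", "2010-03", freq="MS")
sales = pd.DataFrame({
    "date": rng.choice(dates, 2000),
    "SalePrice": rng.normal(180_000, 40_000, 2000),
})

# Average sale price per month (the time-trend chart)
monthly_avg = sales.groupby(sales["date"].dt.to_period("M"))["SalePrice"].mean()
# Number of sales by calendar month (the seasonality chart)
seasonal_counts = sales.groupby(sales["date"].dt.month)["SalePrice"].count()
print(monthly_avg.head())
print(seasonal_counts)
```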

Finally, we took a look at the distribution of some of the numeric features.

We see that many features have a significant number of 0 values, meaning that many houses lack that particular feature. The observation that the numeric features are not normally distributed was important when we scaled features for training the models.
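The zero-heavy columns can be quantified by computing the share of zeros per feature; the toy columns below are illustrative stand-ins for the real numeric features:

```python
import numpy as np
import pandas as pd

# Stand-in numeric columns; the real ones come from the Kaggle dataset.
rng = np.random.default_rng(3)
df = pd.DataFrame({
    "PoolArea": rng.choice([0] * 99 + [512], 1000),   # almost no houses have a pool
    "2ndFlrSF": rng.choice([0, 700, 900], 1000),      # many one-story homes
    "GrLivArea": rng.normal(1500, 400, 1000),         # every house has living area
})

# Fraction of zero values per column, largest first
zero_share = (df == 0).mean().sort_values(ascending=False)
print(zero_share)
```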

Data Cleaning and Feature Engineering
In order to ensure the models provide accurate and interpretable results, several steps were taken to clean the data and engineer new features. First, we filtered out all sales that were not listed as a "normal" sale condition: 94% of the recorded sales were normal, and the non-normal sales tended to be outliers with inconsistent sale prices. All non-normal sales were removed from the dataset.

Next, we created methods to handle null and missing values. Every column in the dataset contained at least one missing value, and several had over 2,000 rows of null values. For each column, the team discussed whether it would be best to remove the feature or fill the null values with the column mean, zero, or some other value. For numerical columns such as BsmtFinSF, MasVnrArea, BsmtFullBath, BsmtHalfBath, GarageCars, and GarageArea, we replaced null values with 0, as a null value indicated that the house did not have the feature. For LotFrontage and GarageYrBlt, we filled null values with the column mean, as a null there usually reflected poor data quality rather than a missing feature. Lastly, for all categorical variables, we filled null values with the string "no_variable", which made these tags easy to use for dummifying and ordinal encoding in later stages of the project.
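The three fill rules might be sketched as below; the toy frame and the abbreviated column lists are ours, not the project's full groupings:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({  # toy rows standing in for the real data
    "GarageArea": [400.0, np.nan, 576.0],
    "LotFrontage": [60.0, np.nan, 80.0],
    "PoolQC": [np.nan, "Gd", np.nan],
})

# Rule 1: null means "house lacks the feature" -> fill with 0
zero_fill = ["GarageArea"]      # plus BsmtFinSF, MasVnrArea, ... in the project
df[zero_fill] = df[zero_fill].fillna(0)

# Rule 2: null reflects poor data quality -> fill with the column mean
mean_fill = ["LotFrontage"]     # plus GarageYrBlt
df[mean_fill] = df[mean_fill].fillna(df[mean_fill].mean())

# Rule 3: categorical nulls -> explicit "no_variable" tag for later encoding
df["PoolQC"] = df["PoolQC"].fillna("no_variable")
print(df)
```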

In addition to data cleaning, we also performed feature engineering in order to create helpful inputs and reduce multicollinearity. The first step of this process was to find correlated variables and determine if any of them could be combined. We decided to make four features: total bath, total living area (square feet), basement unfinished ratio, and total rooms above ground (excluding bedrooms). After creating these features, we removed the features used to create them in order to avoid multicollinearity. We then looked at ways to simplify our model further by removing unnecessary features. Using a correlation plot and a preliminary lasso model as reference, we decided to remove the following 23 features. Often, these were categorical features that only a small percentage of homes contained; for example, only 10 houses in the dataset had a pool, so we removed that feature altogether. The removed features were: GrLivArea, 1stFlrSF, 2ndFlrSF, BsmtFinSF1, BsmtFinSF2, BsmtUnfSF, FullBath, HalfBath, Street, Alley, Utilities, Condition2, RoofMatl, Heating, Electrical, LowQualFinSF, Kitchen, GarageCars, GarageCond, PoolArea, PoolQC, MiscFeature, MiscValue.
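The four engineered features could be constructed along these lines; the exact formulas (e.g., weighting half baths by 0.5) are our assumptions, as the write-up does not spell them out:

```python
import pandas as pd

df = pd.DataFrame({  # toy stand-in for the cleaned Ames data
    "FullBath": [2, 1], "HalfBath": [1, 0],
    "BsmtFullBath": [1, 0], "BsmtHalfBath": [0, 1],
    "GrLivArea": [1710, 1262], "TotalBsmtSF": [856, 1262],
    "BsmtUnfSF": [150, 284],
    "TotRmsAbvGrd": [8, 6], "BedroomAbvGr": [3, 3],
})

# Combine correlated raw columns into single engineered features
df["TotalBath"] = (df["FullBath"] + df["BsmtFullBath"]
                   + 0.5 * (df["HalfBath"] + df["BsmtHalfBath"]))
df["TotalLivArea"] = df["GrLivArea"] + df["TotalBsmtSF"]
df["BsmtUnfRatio"] = df["BsmtUnfSF"] / df["TotalBsmtSF"]
df["RoomsExBed"] = df["TotRmsAbvGrd"] - df["BedroomAbvGr"]

# Drop the raw inputs to reduce multicollinearity
df = df.drop(columns=["FullBath", "HalfBath", "BsmtFullBath",
                      "BsmtHalfBath", "GrLivArea", "BsmtUnfSF"])
print(df)
```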

After the feature engineering was completed, we ran a Cook's distance test to identify outlier observations within the data. Four observations with p-values < 0.05 were deemed outliers and removed from the dataset.

After the feature engineering and outlier testing was completed, we created four different subsegments of the data: a training sample and testing sample for our linear models, and a separate training sample and testing sample for our tree models. The linear models utilized one-hot encoding for categorical variables while the tree models utilized label encoding. Both pipelines used a MinMaxScaler for all numerical values, fit exclusively on the training data. After these datasets were created, we were prepared for modeling.
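A condensed sketch of the two preprocessing paths, using a toy frame with illustrative column names; note the MinMaxScaler is fit on the training split only:

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

# Toy frame standing in for the cleaned Ames data (column names illustrative)
df = pd.DataFrame({
    "Neighborhood": ["NAmes", "OldTown", "NAmes", "CollgCr"] * 25,
    "TotalLivArea": range(100),
    "SalePrice": range(100_000, 200_000, 1_000),
})
X, y = df.drop(columns="SalePrice"), df["SalePrice"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Linear models: one-hot (dummy) encode categoricals
X_tr_lin = pd.get_dummies(X_tr)
X_te_lin = pd.get_dummies(X_te).reindex(columns=X_tr_lin.columns, fill_value=0)

# Tree models: label encode categoricals instead
codes = {v: i for i, v in enumerate(X_tr["Neighborhood"].unique())}
X_tr_tree = X_tr.assign(Neighborhood=X_tr["Neighborhood"].map(codes))

# Scale numeric columns, fitting the scaler on the training split only
scaler = MinMaxScaler().fit(X_tr_lin[["TotalLivArea"]])
X_tr_lin["TotalLivArea"] = scaler.transform(X_tr_lin[["TotalLivArea"]]).ravel()
X_te_lin["TotalLivArea"] = scaler.transform(X_te_lin[["TotalLivArea"]]).ravel()
print(sorted(X_tr_lin.columns))
```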

Modeling
For the modeling portion of the project, two transformations of the dataset were used. A dummified dataset was used for the linear models (lasso, ridge, elastic net, support vector regression) and a label-encoded dataset was used for the tree-based models (random forest, AdaBoost, gradient boosting, CatBoost). The categorical-variable encoding was done after the train-test split to ensure the training set remained independent. The training process was consistent across all models: a grid search with 5 cross-validation folds was used for hyperparameter tuning, and once the optimal parameters for each model were found, the model was retrained on the full training set and evaluated on the test set. The metrics used to evaluate the models were the R^2 score and RMSE.
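The tuning loop might look like the following for the lasso model, using synthetic data in place of the processed Ames features (the alpha grid is illustrative):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import GridSearchCV, train_test_split

# Synthetic regression data standing in for the processed Ames features
X, y = make_regression(n_samples=300, n_features=20, noise=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# 5-fold grid search for hyperparameter tuning, shown here for lasso
grid = GridSearchCV(Lasso(max_iter=10_000),
                    param_grid={"alpha": [0.01, 0.1, 1.0, 10.0]},
                    cv=5, scoring="r2")
grid.fit(X_tr, y_tr)    # refit=True retrains the best model on all of X_tr
r2 = grid.score(X_te, y_te)
rmse = np.sqrt(mean_squared_error(y_te, grid.predict(X_te)))
print(f"best alpha={grid.best_params_['alpha']}, R^2={r2:.3f}, RMSE={rmse:.1f}")
```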

Summary of model evaluation results:

  • Lasso: R^2 = 0.9338
  • Ridge: R^2 = 0.9333
  • Elastic Net: R^2 = 0.9336
  • Random Forest: R^2 = 0.9113
  • AdaBoost: R^2 = 0.8454
  • Gradient Boost: R^2 = 0.9391
  • CatBoost: R^2 = 0.9421
  • Support Vector Regression: R^2 = 0.9491

Following the evaluation of individual models, the top six were chosen for an ensemble that averaged their predictions to produce a final overall prediction. The R^2 score of the final ensemble model was 0.9561.
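Prediction averaging can be sketched as below with four of the model families on synthetic data (the project averaged its six tuned models):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import Lasso, Ridge
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Synthetic data standing in for the processed Ames features
X, y = make_regression(n_samples=400, n_features=15, noise=15, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

# Average the test-set predictions across the fitted models
models = [Lasso(alpha=0.1), Ridge(), RandomForestRegressor(random_state=1),
          GradientBoostingRegressor(random_state=1)]
preds = np.mean([m.fit(X_tr, y_tr).predict(X_te) for m in models], axis=0)
print(f"ensemble R^2 = {r2_score(y_te, preds):.3f}")
```

Averaging tends to cancel the uncorrelated errors of the individual models, which is consistent with the ensemble outscoring each member here.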

Dashboard

Link to Tableau Dashboard

We designed a dashboard to give potential stakeholders a tool to visualize the data and view sale price predictions based on customizable feature inputs.

The dashboard was created in Tableau and uses filters to sort and aggregate the data, making it easier for users to visualize trends. Users can group features such as "Neighborhood" or "# of Bedrooms", and the dashboard returns the average sale price for the applied filter. This tool is useful for potential buyers, sellers, or realtors, who can easily click and aggregate the data for their needs.

We also created a "Prediction" tab that uses a linear model to predict the price of a home based on features a user can change (e.g., number of bedrooms, bathrooms, etc.). From the predicted home value, a user can determine whether a home listed in the area is over or under market value according to the model.

Conclusions/Next Steps
We feel this was a great first step in applying machine learning to the housing dataset to predict sale prices. There are areas we would love to revisit, rethink, and build on in this analysis:

Next Steps:

  • The data spans 2006-2010; we would like to add data from 2011-2021 so the analysis reflects more current trends.
  • Add more complex models, such as neural networks.
  • Explore additional models to combine with what we have, specifically stacked ensemble models that learn from the predictions of previous models.
  • Increase the feature inputs and predictive capability of the dashboard based on stakeholder feedback.
  • Add external data to the analysis, such as inflation, unemployment, CPI, housing market trends, and other economic factors.

About Authors

Hugh Goode

Hugh is a Data Scientist with a BS in Civil Engineering from The College of New Jersey and an MS in Engineering Management from Duke University. After 5 years as an engineer, he pivoted to pursue Data...

David Jhang

David has 10+ years in the financial investment industry in NYC. He is currently working at a Long/Short Equity Hedge Fund that focuses on TMT. He is also an aspiring Data Scientist at NYC Data Science Academy.

Jack Copeland

After graduating from the University of Virginia in 2019 with a degree in Computer Science, I went on to join Anheuser-Busch as a Global Management Trainee. I received cross functional training in sales, marketing, supply and more before...
