Predicting Ames Housing Prices

The skills demonstrated here can be learned by taking the Data Science with Machine Learning bootcamp at NYC Data Science Academy.

GitHub

Data Science Background

The ability to accurately estimate housing prices from housing specifications and neighborhood information is a valuable tool for anyone in the real estate market.  The goal of this project was to estimate housing prices in the city of Ames, Iowa as accurately as possible for a fictional startup real estate company.  Predictive accuracy took priority over the interpretability of influential features, but any information the model offered about feature importance was also welcome.

Ames is a small college town in Iowa with a population of approximately 58,965 in 2010.  As a college town, rentals make up a large share of the local housing market. The median age of the population was a young 23.8 years, and the median income for the region was around $48,100.

Data

The data used for this project covered 2,580 houses sold between 2006 and 2010. The dataset included 81 features, both qualitative and quantitative, as well as property addresses, which allowed us to find the latitude and longitude of each house.  We incorporated external data into the model, including each house's proximity to the nearest schools, parks, and recreational centers, as well as the number of these facilities within one mile, all computed from the latitude and longitude coordinates of each facility and each house.
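Distance features of this kind can be sketched with a haversine calculation on the coordinate pairs; the house and park coordinates below are hypothetical stand-ins, not points from the actual dataset:

```python
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance in miles between two (lat, lon) points."""
    r = 3958.8  # Earth radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical coordinates: one house and a few parks near Ames
house = (42.0308, -93.6319)
parks = [(42.0402, -93.6510), (42.0223, -93.6250), (41.9900, -93.6100)]

dists = [haversine_miles(*house, *park) for park in parks]
nearest_park = min(dists)                            # proximity feature
parks_within_one_mile = sum(d <= 1.0 for d in dists) # count feature
```

The same two features (nearest distance and count within one mile) would be computed per house for each facility type.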

Data Cleaning

The dataset had many missing values that had to be handled in different ways. We devised a data dictionary that specified, for each affected variable, whether its missing values should be replaced with 'None' or with 0: qualitative variables whose missing values indicated an absent feature received 'None', while the corresponding quantitative variables received 0.
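A minimal sketch of this fill rule in pandas; the columns and values shown here are illustrative, and the real data dictionary covered many more variables:

```python
import numpy as np
import pandas as pd

# Hypothetical slice of the dataset: a missing PoolQC means "no pool",
# and a missing GarageArea means "no garage" (area 0)
df = pd.DataFrame({
    'PoolQC': ['Ex', np.nan, np.nan],
    'GarageArea': [528.0, np.nan, 240.0],
})

# Data dictionary: qualitative absences become 'None', quantitative become 0
fill_values = {'PoolQC': 'None', 'GarageArea': 0}
df = df.fillna(value=fill_values)
```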

Some of the features used character values in place of ordinal rankings, so we created dictionaries to map the words to quantitative values in these cases.  For example: 

common_ranks_dict = {'None': 0, 'Po': 1, 'Fa': 2, 'TA': 3, 'Gd': 4, 'Ex': 5}

 

Data Dictionary

This dictionary was applied to Exterior Quality, Exterior Condition, Basement Quality, Basement Condition, Heating Quality and Condition, Kitchen Quality, Fireplace Quality, Garage Quality, and Garage Condition.
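Applying the dictionary to those columns can be sketched with pandas `map` on a toy frame; the column names follow the Ames codebook (e.g. `ExterQual`), but the values here are made up:

```python
import pandas as pd

common_ranks_dict = {'None': 0, 'Po': 1, 'Fa': 2, 'TA': 3, 'Gd': 4, 'Ex': 5}

# Toy example with two of the quality columns
df = pd.DataFrame({
    'ExterQual': ['TA', 'Gd', 'Ex'],
    'KitchenQual': ['Fa', 'TA', 'Gd'],
})
for col in ['ExterQual', 'KitchenQual']:
    df[col] = df[col].map(common_ranks_dict)
```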

Some features required unique dictionaries to map inputs to ordinal values. Basement Exposure, Garage Finish, Paved Driveway, Pool Quality and Condition, and Alley Access all fell into this category.

Exploratory Data Analysis

Based on initial research and background knowledge, we decided to look into the influence of neighborhoods on real estate pricing. As part of exploratory analysis, we could clearly identify that median housing prices varied by neighborhood. Twenty-six neighborhoods were included in the dataset with median sale prices ranging from $89,375 to $302,000, while still showing overlap between some of the neighborhoods (Figure 1).
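The per-neighborhood medians behind a figure like this come from a simple groupby; the mini-sample below is illustrative, while the real data spans 26 neighborhoods:

```python
import pandas as pd

# Hypothetical mini-sample of sales (neighborhood codes as in the Ames data)
df = pd.DataFrame({
    'Neighborhood': ['NAmes', 'NAmes', 'StoneBr', 'StoneBr', 'OldTown'],
    'SalePrice': [140000, 150000, 290000, 310000, 120000],
})
medians = df.groupby('Neighborhood')['SalePrice'].median().sort_values()
```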

Figure 1. Median sale prices of each neighborhood.

 

By mapping these sale prices, we could see clear groupings of more expensive houses to the north and south of Iowa State University, with less expensive housing located to the east of the school; distinct clusters emerged by location (Figure 2).


Figure 2. Each house datapoint, viewed by location and sale price.

Feature Engineering

We included a multiple linear regression model as an initial baseline, so we examined many of the features and identified transformations that would make each feature's relationship with sale price more linear. Many features were put through non-linear power transformations, such as squaring, cubing, or taking the square root. This proved very successful in improving the relationship between input and target, as the example of house age shows (Figure 3).

The difference in price between a new home and a 25-year-old home was much larger than the difference between a 75-year-old home and a 100-year-old home, so applying a square root transformation to house age brought its relationship with sale price much closer to linear.
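The effect can be illustrated on synthetic data that mimics this diminishing-returns pattern (the price formula below is made up for illustration, not fitted to the Ames data):

```python
import numpy as np

rng = np.random.default_rng(0)
age = rng.uniform(0, 100, 500)
# Synthetic prices: depreciation slows as the house ages, plus noise
price = 300000 - 20000 * np.sqrt(age) + rng.normal(0, 5000, 500)

corr_raw = np.corrcoef(age, price)[0, 1]
corr_sqrt = np.corrcoef(np.sqrt(age), price)[0, 1]
# The sqrt-transformed feature has the stronger (more negative) correlation
```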

Figure 3. Applying a square root transformation to some features, like 'House Age', created a more linear relationship between the feature and the target.

 

This same method was applied to several other features including Lot Area (Figure 4).

Figure 4. Applying a square-root transformation to Lot Area

 

In the real estate market, agents often use comparable sales, or "comps," to help estimate market value. We thought the model could benefit from this information as well. Accordingly, we engineered "comps" features for several important categorical features, including neighborhood, garage car capacity, building type, MS Zoning, and housing condition. For each category within each of these features, the median sale price of the houses in that group was calculated and added as a new feature.
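In pandas, a comps feature of this kind is a grouped median broadcast back onto each row; the toy values below are hypothetical:

```python
import pandas as pd

df = pd.DataFrame({
    'Neighborhood': ['NAmes', 'NAmes', 'StoneBr', 'StoneBr'],
    'SalePrice': [140000, 150000, 290000, 310000],
})
# "Comp" feature: each house gets the median sale price of its own group
df['NeighborhoodComp'] = df.groupby('Neighborhood')['SalePrice'].transform('median')
```

The same `groupby(...).transform('median')` pattern applies to garage capacity, building type, MS Zoning, and housing condition.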

Finally, we produced score features for several housing components, each defined as the product of a quality sub-feature and a condition sub-feature. This was done for the basement, the exterior, the garage, and the overall house.
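With the ordinal encodings in place, a score feature is just an element-wise product; the values below are illustrative:

```python
import pandas as pd

# Quality and condition already ordinal-encoded on the 0-5 ranks scale
df = pd.DataFrame({'GarageQual': [3, 4], 'GarageCond': [3, 5]})
df['GarageScore'] = df['GarageQual'] * df['GarageCond']
```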

Modeling

Once the new features were added and everything was cleaned and transformed, we were ready to model.  As we started the modeling phase of the project, we settled on R² as the metric of model success because we were focused simply on predictive accuracy rather than on the scale of the errors.

The baseline model was a simple multiple linear regression using all cleaned variables, which returned an R² of 90.66%. To improve on the baseline, we built two penalized regression models, Lasso and Elastic Net, and two decision tree ensembles, Random Forest and XGBoost.  Each model was tuned using 5-fold cross-validation on a 75/25 train-test split of the dataset.
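A sketch of this model comparison in scikit-learn, run here on synthetic data (XGBoost is omitted and the hyperparameter search is simplified relative to the actual tuning):

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import ElasticNetCV, LassoCV
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the cleaned and transformed Ames features
X, y = make_regression(n_samples=400, n_features=20, noise=10.0, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42  # the 75/25 split
)

models = {
    'Lasso': LassoCV(cv=5),            # 5-fold CV over the alpha path
    'ElasticNet': ElasticNetCV(cv=5),
    'RandomForest': RandomForestRegressor(n_estimators=100, random_state=42),
}
scores = {}
for name, model in models.items():
    model.fit(X_train, y_train)
    scores[name] = r2_score(y_test, model.predict(X_test))
```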

Evaluation

In the end, the models all produced similar results, but the Lasso model outperformed the others using all features. 

Lasso: R² = 92.50%

XGBoost: R² = 92.21%

Elastic Net: R² = 92.16%

Random Forest: R² = 91.09%

With the complete feature set in hand, we set out to remove major sources of multicollinearity and thereby shrink the feature set. We iteratively examined the features' Variance Inflation Factors (VIF) alongside their importances, taken from the coefficients of the Lasso model. Features with high VIF scores that were correlated with other features on a correlation matrix and that had low importance were removed, and the VIF scores were recalculated.

This process was repeated until none of the remaining features were highly correlated with the others, leaving a reduced set of 47 features. Applying the reduced feature set to the best model, Lasso, produced a final R² of 92.5% with far fewer features.
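The iterative VIF pruning loop can be sketched as follows, on synthetic data in which two features are nearly collinear (the real procedure also consulted the correlation matrix and Lasso importances before each drop, rather than dropping on VIF alone):

```python
import numpy as np
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
# x2 is almost a copy of x1, so both start with very high VIFs
x1 = rng.normal(size=200)
x2 = x1 + rng.normal(scale=0.05, size=200)
x3 = rng.normal(size=200)
X = pd.DataFrame({'x1': x1, 'x2': x2, 'x3': x3})

def prune_by_vif(X, threshold=10.0):
    """Iteratively drop the feature with the highest VIF above threshold."""
    X = X.copy()
    while True:
        vifs = pd.Series(
            [variance_inflation_factor(X.values, i) for i in range(X.shape[1])],
            index=X.columns,
        )
        if vifs.max() <= threshold:
            return X
        X = X.drop(columns=vifs.idxmax())

reduced = prune_by_vif(X)  # one of x1/x2 is dropped; x3 survives
```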

Discussion

Many of the features selected by the models showed similar traits. This was especially true for Lasso and Elastic Net, which selected nearly identical sets of features, although Elastic Net retained more neighborhood features.

For many of the top 20 features selected by the penalized regression models, the transformed version of a feature was chosen over its untransformed counterpart (Figure 5).  This was especially noticeable for the top five features, all of which were square root transformations, indicating that these features had diminishing returns as their values increased.

Conclusion

Generally, the externally added features related to parks, recreation centers, and nearby schools were dropped from the model because including them did not improve results.  Two of the top 20 features, however, were comparable-sale estimates, showing the added value of these engineered features; the "comps" based on MS Zoning and Neighborhood proved particularly significant.

Figure 5. Feature importances from the final Lasso model.

 

Overall, the model performed very well, providing accurate estimates of housing prices from housing features and comparable estimates. As we predicted, neighborhood was relevant to the estimates, but, more importantly, applying power transformations to features boosted scores the most. The model is now tuned for Ames, and our startup real estate client can use it to predict the sale price of a new listing from the available housing and neighborhood features.

 

GitHub

About Authors

Hayden Warren

Hayden is an NYC Data Science Academy Fellow with a B.S. in Mathematics from the University of Utah. He then went on to work as a math teacher and debate coach, coaching multiple state champions. During this time...

Julie Hemily

Julie has a background in Electrical and Biomedical Engineering, receiving her M.A.Sc at the University of British Columbia where she studied ultrasound and MRI Elastography. She loves working on a wide range of projects and has collected experience...

Niki Agrawal

Niki is a data science professional with 4+ years of data analysis experience in industry (digital health tech) and computational research (neuroscience, biomedical engineering). Niki enjoys applying creative and analytical thinking to solve real world problems with data....

