Data Scraping to Help Housing Price Prediction (Ames, Iowa)

The skills the students demoed here can be learned through taking the Data Science with Machine Learning bootcamp at NYC Data Science Academy.


We are a team of data scientists who explored the housing market of Ames, Iowa, between 2006 and 2010 to understand the nuances of house pricing, with the intention of building machine learning models that predict prices and future sales in comparable markets. We built on the knowledge and experience of successful developers and real estate agents to provide a more holistic and accurate approach to valuation, with the ultimate goal of helping our clients make better-informed investments of their time and money.



We based our research on the Ames, Iowa data set available on Kaggle, which provides up to 79 data points for every house sold in Ames over a five-year period. In order to eliminate redundancies and focus on the most important characteristics, we made hypotheses based on our business acumen and domain knowledge, and then pressure-tested our assumptions with rigorous statistical analysis.

Some of our initial assumptions about feature importance included:

  • Square footage of the home
  • Neighborhood
  • Age of the house
  • Quality of the house
  • Year of the sale
  • A garage may be a more important asset depending on the location/type of the house
  • A front yard may be a less important asset depending on the location/type of the house
  • Some other features, such as masonry type, may provide important insight at the higher end of the market

These are some of the principles that guided our investigation. Some were ultimately confirmed, and others were found to be less significant than we imagined.

We further verified our approach with machine learning methods that identified the most influential factors in determining the sale price of a house. Data visualization served as a major tool in our refinement and decision-making process.


Exploratory Data Analysis

A major assumption of ours as we built our models was that a linear relationship existed between the sale price of houses and the various features of the houses. In order to confirm that a linear model is appropriate for estimating sale price, there are several conditions our data must satisfy. In its raw form, our data did not satisfy all of them. We used data visualizations to help identify which conditions were being violated and to verify that they had been corrected after our manipulations. One example is the assumption that sale prices are normally distributed.

The graphic below demonstrates that sale prices are skewed and so do not satisfy the assumption of normality. When we take the natural logarithm of the sale prices, the distribution becomes much closer to normal. This transformation also resolved another issue, heteroskedasticity (non-constant variance of the errors), and allows us to be more confident in our statistical analyses and in the predictions of our model.

[Figure: distribution of sale prices. Left: not normally distributed. Right: closer to a normal distribution after the log transform.]
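The effect of the log transform on skewness can be sketched in a few lines. This is an illustrative example on simulated right-skewed prices standing in for the Kaggle `SalePrice` column, not the project's actual code:

```python
import numpy as np
from scipy import stats

# Simulated right-skewed sale prices standing in for the Kaggle
# "SalePrice" column (log-normal, as housing prices often are).
rng = np.random.default_rng(42)
sale_price = rng.lognormal(mean=12, sigma=0.4, size=1000)

raw_skew = stats.skew(sale_price)          # strongly positive
log_skew = stats.skew(np.log(sale_price))  # near zero after the transform

print(f"skew(raw) = {raw_skew:.2f}")
print(f"skew(log) = {log_skew:.2f}")
```

A skewness near zero after the transform is consistent with the roughly symmetric, bell-shaped histogram shown above.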

Other issues were resolved with a combination of discarding redundant data points and combining others to create more informative features. One such example is the creation of the characteristic "Age at Sale," which wasn't available in our original data frame but which we created from the information we did have. We suspected that the time since a house was last remodeled may have a greater impact on the sale price than the overall age of the house, and our analyses ultimately confirmed this suspicion.
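Derived features like these are simple column arithmetic. A minimal sketch with pandas, assuming the column names from the Kaggle data dictionary (`YrSold`, `YearBuilt`, `YearRemodAdd`) and hypothetical derived names:

```python
import pandas as pd

# Toy frame with the raw Ames columns the derived features come from
# (column names follow the Kaggle data dictionary).
df = pd.DataFrame({
    "YrSold":       [2006, 2008, 2010],
    "YearBuilt":    [1960, 2003, 1925],
    "YearRemodAdd": [1990, 2003, 2005],
})

# "Age at Sale": overall age of the house when it sold.
df["AgeAtSale"] = df["YrSold"] - df["YearBuilt"]
# Years since the last remodel -- the stronger predictor in our analysis.
df["YearsSinceRemod"] = df["YrSold"] - df["YearRemodAdd"]

print(df[["AgeAtSale", "YearsSinceRemod"]])
```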

Below, we can see that the longer it has been since a house was last remodeled, the less money it sells for.

Model Building


After proceeding with feature engineering, we separated our data into training and testing sets and fitted our first multiple linear regression (MLR) model using 18 features. As MLR models often tend to overfit, we next decided to use a regularized model such as lasso to reduce the variance of our model. Another advantage of the lasso model is its feature-selection property.
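The split-and-fit step looks roughly like the following. This is a sketch using scikit-learn on synthetic data (the post does not specify the tooling); the 18 features and the log-price target are simulated:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for 18 engineered features; the target plays the
# role of log(SalePrice), per the transform described earlier.
rng = np.random.default_rng(0)
X = rng.normal(size=(1460, 18))   # 1460 rows, as in the Kaggle training set
coef = rng.normal(size=18)
y = X @ coef + rng.normal(scale=0.1, size=1460)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

mlr = LinearRegression().fit(X_train, y_train)
print(f"train R^2: {mlr.score(X_train, y_train):.3f}")
print(f"test  R^2: {mlr.score(X_test, y_test):.3f}")
```

Comparing train and test scores is the quickest check for the overfitting that motivates moving to a regularized model.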

By tuning the hyperparameter lambda, we were able to drive some of the coefficients of our existing features to 0, thus reducing our initial list of 18 features to as few as 11.
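The coefficient-zeroing behavior can be demonstrated with scikit-learn's `Lasso` (where the `alpha` argument plays the role of lambda). This sketch uses synthetic data in which only 11 of 18 features actually matter, mirroring the reduction described above:

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.preprocessing import StandardScaler

# Synthetic design: 18 features, of which only the first 11 carry signal.
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 18))
true_coef = np.zeros(18)
true_coef[:11] = rng.uniform(0.5, 2.0, size=11)
y = X @ true_coef + rng.normal(scale=0.5, size=1000)

# Standardize so the L1 penalty treats every feature on the same scale.
X_std = StandardScaler().fit_transform(X)

# alpha is lasso's lambda: larger values push more coefficients exactly to 0.
lasso = Lasso(alpha=0.1).fit(X_std, y)
n_kept = int(np.sum(lasso.coef_ != 0))
print(f"features kept: {n_kept} of 18")
```

In practice the right alpha is usually chosen by cross-validation (e.g. `LassoCV`) rather than fixed by hand as here.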

We also explored AIC-based selection and random forest models, both of which were helpful in confirming the importance of the features we selected. Looking past the percentage of variance explained by each model, we selected the model with the lowest root mean squared error (RMSE).
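An RMSE comparison across candidate models can be set up as below. This is an illustrative scikit-learn sketch on synthetic *linear* data, so, as in our Ames results, the linear models should come out ahead of the random forest:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Lasso, LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Synthetic data with a truly linear signal in 11 features.
rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 11))
y = X @ rng.normal(size=11) + rng.normal(scale=0.2, size=1000)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=2)

models = {
    "MLR": LinearRegression(),
    "Lasso": Lasso(alpha=0.01),
    "RandomForest": RandomForestRegressor(n_estimators=100, random_state=2),
}
rmse = {}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    rmse[name] = mean_squared_error(y_te, pred) ** 0.5  # RMSE = sqrt(MSE)
    print(f"{name:>12}: RMSE = {rmse[name]:.3f}")
```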

Our focus was to retain as much interpretability as possible without sacrificing accuracy, but we trained some complex models for comparison to understand how much accuracy we were potentially giving up. Below is a side-by-side comparison of the different models we trained and how well they performed. We can see our MLR was the most successful, which is unsurprising given the linear relationship in our data.



Ultimately, our hypothesis that house prices follow a roughly linear relationship was confirmed. Key drivers of price from house to house include the overall quality of the house (aesthetics), house size (primarily interior), house age at the time of sale, and location (anyone surprised by this!?).

While not our best predictor of price, random forest models helped us build intuition about feature importance, and they performed reasonably well at prediction versus our simple linear regression model. However, because of their relatively low interpretability compared with linear models, random forests are not recommended for driving external client conversations, e.g., informing a real estate agent of the relative significance of lot size on purchase price.
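Reading feature importances out of a fitted forest is straightforward. A toy sketch with scikit-learn, where the feature names mimic Ames columns but the data is synthetic, with the first two features deliberately driving the target (as quality and size did in Ames):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)
names = ["OverallQual", "GrLivArea", "AgeAtSale", "LotArea"]
X = rng.normal(size=(800, 4))
# Only the first two features carry signal in this toy setup.
y = 3 * X[:, 0] + 2 * X[:, 1] + rng.normal(scale=0.5, size=800)

rf = RandomForestRegressor(n_estimators=200, random_state=3).fit(X, y)
for name, imp in sorted(zip(names, rf.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name:>12}: {imp:.3f}")
```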

Further research should pressure-test these conclusions across various market types, considering that the training data strictly represented Ames, IA, a college town near Des Moines, and not other market types. For instance, the complexity of Manhattan's residential real estate market may be better represented by another model type. Perhaps a more sophisticated, opaque model will be useful when trying to generalize a single model (or ensemble) to predict price across a wider variety of features and a larger dataset (i.e., national or global housing prices).

About Authors

Kisaki Watanabe

Data Scientist with strong consulting experience in data analytics/visualization and risk management, serving industries ranging from social networking services, gaming, and pharmaceuticals to media and advertising. Advanced skills in fraud investigation and trend projection/analysis with tools such as Tableau,...

Nillia Ekoue

Nillia graduated from Fairfield University with a Master's degree in Mathematics. Her background includes different exposure levels to Economics, Finance, and Mathematics. Her interests are in Healthcare, Education, Retail, and Finance and Insurance services.

Colin Ford

Data scientist, with 6 years of experience growing revenue at WeWork, LinkedIn, and Oracle.
