Using Data to Predict House Prices in Ames, Iowa

Posted on Jun 11, 2021

Data Science Background

This project utilizes a variety of machine learning techniques to predict house prices based on a Kaggle dataset of 79 explanatory variables. The data covers the sale of individual residential properties in Ames, Iowa from 2006-2010. The goal of the project is to forecast the prices of 1,459 homes as accurately as possible, as well as identify the features which have the greatest impact on sale price.


Part 1: Data Exploration, Cleaning, and Pre-processing

My first step was to read the documentation and contextualize the data. Real estate has a history of being cyclical, and the home sales detailed in this particular dataset take place amidst the 2008 Housing Crisis. The following boxplot shows sale prices broken down by month and year:

[Figure: boxplot of sale prices by month and year sold]

Interestingly, there is very little evidence of a pricing bubble or crash here, and very little seasonality overall. We can move on to further analysis.
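The seasonality check above boils down to grouping sale prices by year and month. The column names (YrSold, MoSold, SalePrice) come from the Ames dataset, but the data below is a synthetic stand-in for the real training set, used only to sketch the approach:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Synthetic stand-in for the Kaggle training set
train = pd.DataFrame({
    "YrSold": rng.integers(2006, 2011, size=500),   # 2006-2010
    "MoSold": rng.integers(1, 13, size=500),
    "SalePrice": rng.normal(180_000, 40_000, size=500).round(),
})

# Median sale price per (year, month) cell -- the numbers behind the boxplot
medians = train.groupby(["YrSold", "MoSold"])["SalePrice"].median()

# Year-over-year medians: a crash or bubble would show up as a large swing here
yearly = train.groupby("YrSold")["SalePrice"].median()
print(yearly)
```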

With the next plot, we see that overall quality shows a strong linear-like correlation with sale price.

[Figure: sale price by overall quality rating]

Above ground living area also shows a somewhat linear relationship to sale price, as shown in the scatterplot below. The documentation notes there are five outliers in the dataset, three of which are partial sales and two of which are unusually large sales. I removed the two points circled in orange, as those stood out significantly.
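One way to drop the two circled points is to filter on unusually large above ground living area combined with a low sale price. The cutoffs below are illustrative assumptions, as are the data rows; only the column names come from the dataset:

```python
import pandas as pd

# Illustrative rows; in the real project these come from train.csv
train = pd.DataFrame({
    "GrLivArea": [1500, 2200, 4676, 5642, 1800],
    "SalePrice": [180000, 250000, 184750, 160000, 210000],
})

# Drop the two unusually large sales that sold cheaply:
# very high above ground area but disproportionately low price
mask = (train["GrLivArea"] > 4000) & (train["SalePrice"] < 300000)
train = train[~mask].reset_index(drop=True)
```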

[Figure: scatterplot of sale price vs. above ground living area, with two outliers circled in orange]

Feature Selection (and De-Selection)

Now that I'm clued in on the linear relationships of some variables to sale price, I know that a linear regression may be a good candidate for model fitting. The next step is to try to reduce some of the dataset's dimensionality. I will look at a heatmap to identify variables that are highly correlated with each other, and remove as many redundant columns as possible.

[Figure: correlation heatmap of the numeric features]

The feature "GarageCars", for example, is highly correlated with "GarageArea", so we can drop "GarageCars" since it offers little new information. I dropped "GarageYrBlt", "TotRmsAbvGrd", "1stFlrSF", and "2ndFlrSF" for similar reasons.
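The redundancy check behind the heatmap is just a pairwise correlation matrix. The sketch below uses synthetic data in which garage capacity is almost fully determined by garage area, mimicking the real relationship:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
garage_area = rng.uniform(200, 900, 200)

# Synthetic data: GarageCars is nearly a function of GarageArea
df = pd.DataFrame({
    "GarageArea": garage_area,
    "GarageCars": (garage_area // 250).clip(0, 4),
    "SalePrice": garage_area * 200 + rng.normal(0, 5000, 200),
})

corr = df.corr()  # the numbers a heatmap would visualize

# GarageCars adds little beyond GarageArea, so drop it
df = df.drop(columns=["GarageCars"])
```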

Missing Value Imputation

Another important issue to address was the 29 columns in the dataset containing missing values. Out of those, 20 relate to optional "bonus features" of a house such as a pool, fireplace, or fence. I imputed zeros or "none" for these.

Eight of the remaining columns with missing values were categorical attributes dominated by one value (e.g., most values of "SaleType" were "Normal"). I imputed the mode for these columns. I then dropped "Utilities", since all of its values were identical except for one.
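Both imputation strategies are one-liners in pandas. The sketch below assumes a few illustrative rows; the column names are from the Ames dataset, and the principle is that for "bonus feature" columns a missing value really means the feature is absent:

```python
import pandas as pd

df = pd.DataFrame({
    "PoolQC":     [None, "Gd", None],     # bonus feature: NaN means "no pool"
    "GarageArea": [480.0, None, 240.0],   # bonus feature: NaN means 0 sq ft
    "SaleType":   ["WD", None, "WD"],     # dominated by one value
})

# Bonus-feature columns: impute "None" for categoricals, 0 for numerics
df["PoolQC"] = df["PoolQC"].fillna("None")
df["GarageArea"] = df["GarageArea"].fillna(0)

# Mode imputation for categorical columns dominated by a single value
df["SaleType"] = df["SaleType"].fillna(df["SaleType"].mode()[0])
```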

Finally, the last column requiring imputation was "LotFrontage", with 16.7% of its values missing. I considered dropping the variable, but reasoned that lot frontage could greatly affect a home's curb appeal and decided instead to impute the mode for each neighborhood.
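Neighborhood-level imputation can be done with a grouped transform; the rows below are illustrative, not real data:

```python
import pandas as pd

df = pd.DataFrame({
    "Neighborhood": ["NAmes", "NAmes", "NAmes", "OldTown", "OldTown"],
    "LotFrontage":  [60.0, 60.0, None, 50.0, None],
})

# Fill each missing LotFrontage with the mode of its own neighborhood
df["LotFrontage"] = df.groupby("Neighborhood")["LotFrontage"].transform(
    lambda s: s.fillna(s.mode()[0])
)
```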

Feature Engineering

I created the following new features:

  • HouseAge: for better interpretability of year built
  • YearsSinceRemod: for better interpretability of year remodeled
  • TotalBathrooms: consolidates the four columns for full, half, and basement baths
  • PorchSF: consolidates the square footage of the various porch styles
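The four engineered features above can be sketched as simple column arithmetic. The source column names come from the Ames dataset; the data rows and the 0.5 weighting of half baths are my assumptions:

```python
import pandas as pd

df = pd.DataFrame({
    "YrSold": [2008, 2009], "YearBuilt": [1998, 2004],
    "YearRemodAdd": [2005, 2004],
    "FullBath": [2, 1], "HalfBath": [1, 0],
    "BsmtFullBath": [1, 0], "BsmtHalfBath": [0, 1],
    "OpenPorchSF": [40, 0], "EnclosedPorch": [0, 20],
    "3SsnPorch": [0, 0], "ScreenPorch": [0, 10],
})

df["HouseAge"] = df["YrSold"] - df["YearBuilt"]
df["YearsSinceRemod"] = df["YrSold"] - df["YearRemodAdd"]
# Count each half bath as half a bathroom (an assumption)
df["TotalBathrooms"] = (df["FullBath"] + 0.5 * df["HalfBath"]
                        + df["BsmtFullBath"] + 0.5 * df["BsmtHalfBath"])
df["PorchSF"] = (df["OpenPorchSF"] + df["EnclosedPorch"]
                 + df["3SsnPorch"] + df["ScreenPorch"])
```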

Encoding Categorical Variables

Fourteen categorical variables were encoded ordinally. Most of these were quality scores that map naturally onto a numeric scale.

For linear models, I dummified the remaining categorical variables, including Month and Year sold.
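Both encodings can be sketched in a few lines. The quality-score mapping below follows the Ex/Gd/TA/Fa/Po levels the Ames data dictionary uses, though the exact numeric values assigned are an assumption:

```python
import pandas as pd

df = pd.DataFrame({
    "KitchenQual": ["Ex", "TA", "Gd"],
    "Neighborhood": ["NAmes", "OldTown", "NAmes"],
})

# Ordinal encoding for quality scores (levels from the data dictionary)
quality_map = {"Ex": 5, "Gd": 4, "TA": 3, "Fa": 2, "Po": 1, "None": 0}
df["KitchenQual"] = df["KitchenQual"].map(quality_map)

# Dummify (one-hot encode) nominal variables for the linear models
df = pd.get_dummies(df, columns=["Neighborhood"], drop_first=True)
```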

Variable Transformations

To increase the normality of some of the variables with high skew, I applied a log transformation, starting with the dependent variable.

[Figure: distribution of SalePrice before and after log transformation]

I also applied log transformations to explanatory variables with a skew level of 0.5 or higher.
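The skew rule can be applied programmatically; the sketch below uses log1p (which handles zeros safely) and illustrative values, with the 0.5 threshold taken from the text:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "SalePrice":   [100000, 150000, 200000, 800000],
    "LotArea":     [5000, 8000, 9000, 60000],   # right-skewed
    "OverallQual": [5, 6, 7, 8],                # roughly symmetric
})

# Log-transform the dependent variable first
df["SalePrice"] = np.log1p(df["SalePrice"])

# Then log-transform any predictor with skew of 0.5 or higher
predictors = ["LotArea", "OverallQual"]
skewed = [c for c in predictors if df[c].skew() >= 0.5]
df[skewed] = np.log1p(df[skewed])
```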

[Figure: skewed explanatory variables before and after log transformation]


Part 2: Model Fitting & Evaluation

I split the cleaned dataset into training and test sets and evaluated models using 10-fold cross-validation, with root mean squared error (RMSE) as the performance metric.
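This evaluation loop is standard in scikit-learn. The sketch below runs on a synthetic regression problem rather than the cleaned Ames data (so the RMSE values are not comparable to the project's log-scale scores):

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score, train_test_split

# Synthetic stand-in for the cleaned feature matrix and (log) sale price
X, y = make_regression(n_samples=300, n_features=20, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# 10-fold cross-validated RMSE on the training split
cv = KFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(LinearRegression(), X_train, y_train,
                         scoring="neg_root_mean_squared_error", cv=cv)
rmse = -scores.mean()
```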

[Figure: cross-validated RMSE by model]

The Lasso penalized linear regression performed best, with an RMSE of 0.1201 after tuning the model's alpha parameter.
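The post doesn't specify how alpha was tuned; one common approach is LassoCV, which searches a grid of alphas with internal cross-validation. A minimal sketch on synthetic data (scaling first, since Lasso is scale-sensitive):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LassoCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_regression(n_samples=300, n_features=20, noise=10.0, random_state=0)

# Search 50 alphas between 1e-4 and 10 with 10-fold internal CV
model = make_pipeline(
    StandardScaler(),
    LassoCV(alphas=np.logspace(-4, 1, 50), cv=10, random_state=0),
)
model.fit(X, y)
best_alpha = model.named_steps["lassocv"].alpha_
```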

Next, I took a look at the prediction plot and residuals.

Visually, both of these plots demonstrate a relatively effective model. We can move on to evaluating coefficients to extract additional insights. 

From the coefficient plot, we can see that the most important factor in determining sale price is above ground living area, which makes sense. After that is Overall Condition.

Additionally, six of the coefficients relate to a neighborhood, which suggests that location matters a lot to buyers in Ames.

One surprise here is that the largest negative coefficient belongs to the commercial zoning classification. This may require further investigation, especially since the coefficient is disproportionately large.
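Extracting the largest-magnitude coefficients, as in the coefficient plot, is straightforward once the Lasso is fit. The feature names below are placeholders, not the real Ames columns:

```python
import pandas as pd
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso

X, y = make_regression(n_samples=200, n_features=8, noise=5.0, random_state=1)
names = [f"feat_{i}" for i in range(8)]  # stand-ins for GrLivArea, OverallCond, ...

lasso = Lasso(alpha=0.1).fit(X, y)

# Sort coefficients by absolute size: the top entries drive the predictions
coefs = pd.Series(lasso.coef_, index=names).sort_values(key=abs, ascending=False)
print(coefs.head())
```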


The top takeaways from the model analysis are as follows:

  • A bigger house is not always a better house, but in Ames it is most likely a more expensive one. Homeowners can therefore sizably increase the value of their home by building an extension, where possible.
  • Neighborhoods have a significant impact on house price. Therefore, home buyers looking to save money on a nicer house should consider neighborhoods like Edwards and Old Town.

Opportunities for Further Analysis

My final predictions scored in the top 25% on the Kaggle leaderboard. There is potential to improve the score by further fine-tuning the model parameters.

In addition, we may get even better results by stacking or blending some of the models. The tradeoff is that this approach adds complexity and makes the final model harder to interpret.
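Stacking is built into scikit-learn. A minimal sketch of what such an ensemble might look like, with the base and meta learners chosen purely for illustration and synthetic data standing in for the Ames features:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import Lasso, Ridge
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=200, n_features=10, noise=8.0, random_state=0)

# Base models' out-of-fold predictions feed a Ridge meta-learner
stack = StackingRegressor(
    estimators=[("lasso", Lasso(alpha=0.1)),
                ("rf", RandomForestRegressor(n_estimators=50, random_state=0))],
    final_estimator=Ridge(),
)
scores = cross_val_score(stack, X, y,
                         scoring="neg_root_mean_squared_error", cv=5)
```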

Finally, feature engineering offers nearly endless possibilities. We could try binning some categorical variables, applying different transformations to skewed variables, or adding and dropping different combinations of features.

The skills demonstrated here can be learned through the Data Science with Machine Learning bootcamp at NYC Data Science Academy.


Additional details and code available on GitHub
