Using Data to Predict Sale Prices of Houses in Ames, Iowa
Authors: Sweta Prabha, Hadar Zeigerson & Ayelet Hillel
Introduction
Buying a home is one of the greatest investments an average American will make in their lifetime. The most efficient and least stressful way to purchase a home is to be well informed throughout the process. This project uses machine learning techniques to predict the sale price of a house from information about its features. The models are trained on sale data for over 2,500 houses in Ames, Iowa.
Our team engineered a highly accurate predictive machine learning model (R² of 0.96 on unseen data) that can potentially be used by anyone interested in selling, building, flipping, or buying a home, from real estate agencies to homeowners debating whether to remodel their house before selling.
The Data
The dataset describes sales of individual residential properties in Ames, Iowa from 2006 to 2010. It contains 2,580 observations and 81 explanatory variables involved in assessing home values, such as square footage, neighborhood, and number of bedrooms. The dataset was obtained from the Kaggle competition House Prices: Advanced Regression Techniques.
Exploratory Data Analysis (EDA)
A detailed EDA was conducted with the aim of gaining maximum insights into the data set and its underlying structure. The following list highlights some of our key findings:
- Houses up to 2,000 sq ft are the most desirable in Ames: price per square foot drops for larger houses, as indicated by a negative exponent in a log-log linear model of price per square foot versus gross living area (see the fit sketched after this list).
- Exterior quality, exterior condition, and kitchen quality were highly correlated with overall quality and condition, suggesting that these features are determining factors of sale price.
- Seasonal trends: more houses are sold between May and July; however, sale prices themselves show only minor seasonal variation.
- Neighborhood analysis: the NoRidge neighborhood has the largest and most expensive houses of all neighborhoods, while NAmes has the most houses for sale.
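The log-log fit behind the first finding can be reproduced in a few lines. This is a minimal sketch, not our exact analysis code: the file name "train.csv" is a hypothetical local copy of the data, and using GrLivArea as the gross-area column is an assumption based on the Kaggle data dictionary.

```python
import numpy as np
import pandas as pd

df = pd.read_csv("train.csv")  # hypothetical local copy of the Ames data

# Fit log(price per sq ft) = a + b * log(GrLivArea); a negative slope b
# means price per square foot falls as houses get bigger.
x = np.log(df["GrLivArea"])
y = np.log(df["SalePrice"] / df["GrLivArea"])
b, a = np.polyfit(x, y, deg=1)
print(f"slope = {b:.3f}")  # expected to be negative for this data
```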
Data Processing
- Only normal sales of residential houses were included in the train and test sets.
- Null values in categorical variables usually indicated a missing feature and were replaced by no_feature.
- Most null values in continuous variables were replaced by 0.
- Very few houses (only 9) had pools, so those rows were deleted to avoid skew from such a rare feature.
- Only 3 houses had a second garage, so those records were dropped as well (these steps are sketched below).
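A minimal sketch of this cleaning pipeline, assuming the Kaggle column names (SaleCondition, MSZoning, PoolArea, MiscFeature) and the same hypothetical file name as above; the exact residential zoning codes kept are an assumption:

```python
import pandas as pd

df = pd.read_csv("train.csv")  # hypothetical local copy of the Ames data

# Keep only normal sales of residential properties.
residential = ["RL", "RM", "RH", "FV"]
df = df[(df["SaleCondition"] == "Normal") & (df["MSZoning"].isin(residential))]

# Nulls in categorical columns usually mean the feature is absent.
cat_cols = df.select_dtypes(include="object").columns
df[cat_cols] = df[cat_cols].fillna("no_feature")

# Nulls in numeric columns become 0.
num_cols = df.select_dtypes(include="number").columns
df[num_cols] = df[num_cols].fillna(0)

# Drop the rare pool houses and second-garage records.
df = df[(df["PoolArea"] == 0) & (df["MiscFeature"] != "Gar2")]
```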
Predictive Models
Penalized Linear Regression
Regression modeling involves statistical model selection: finding the simplest model that provides the best predictive performance, which requires identifying the most important features. Penalized regressions are popular for their prediction accuracy and computational efficiency. Penalized linear regression regularizes the regression coefficients by shrinking them toward zero; the Lasso penalty can shrink coefficients exactly to zero, effectively performing feature selection. For feature engineering and the linear models, a dummified dataset was used, and the alpha parameter was varied from 0.7 to 40.7 with a step size of 1.
The age of each house was calculated as the difference between the year sold and the year built, which converted categorical year features into a single continuous feature and eliminated many dummy variables. The Lasso model gave an R² of 0.96 on both the training and test sets. Although the accuracy was high, some features that matter in the housing-market context, such as building type and house style, were dropped by the penalty, so we decided to also try tree-based models.
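A minimal sketch of this model, reusing the cleaned DataFrame df from the previous snippet. Treating the raw sale price as the target and the particular train/test split are assumptions, but the alpha grid matches the one described above:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LassoCV
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Replace the two year columns with a single continuous age feature.
df["Age"] = df["YrSold"] - df["YearBuilt"]
X = pd.get_dummies(df.drop(columns=["SalePrice", "YrSold", "YearBuilt"]))
y = df["SalePrice"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Alpha grid from the text: 0.7 to 40.7 in steps of 1.
alphas = np.arange(0.7, 41.7, 1.0)
lasso = make_pipeline(StandardScaler(), LassoCV(alphas=alphas, cv=5))
lasso.fit(X_tr, y_tr)
print("train R^2:", lasso.score(X_tr, y_tr))
print("test  R^2:", lasso.score(X_te, y_te))
```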
Tree-Based Models
- Random Forest
Random forest regression is a supervised learning model that uses ensemble learning for regression. Ensemble learning combines the predictions of multiple machine learning models to produce a better prediction than any single model.
Before building and evaluating our random forest model, we constructed a baseline: a simple model we hoped to improve upon. We set the baseline predictions to the average sale price for each year in our data set. Having established a baseline, we built our model and conducted hyperparameter tuning to optimize it.
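A minimal sketch of such a yearly-average baseline, reusing df from the cleaning snippet; the split and the use of mean absolute error as the comparison metric are assumptions:

```python
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

train_df, test_df = train_test_split(df, test_size=0.2, random_state=0)

# Predict each test house's price as the mean sale price for its sale year.
year_means = train_df.groupby("YrSold")["SalePrice"].mean()
baseline_pred = test_df["YrSold"].map(year_means)
print("baseline MAE:", mean_absolute_error(test_df["SalePrice"], baseline_pred))
```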
We started by running a randomized hyperparameter search with K-fold cross-validation (CV) to narrow down the range for each hyperparameter. To further improve our results, we used a grid search with K-fold CV focused on the most promising hyperparameter ranges.
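A minimal sketch of this two-stage search, assuming the X_tr and y_tr from the Lasso snippet; the hyperparameter grids shown are illustrative, not the ones we actually searched:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV

# Stage 1: broad randomized search with K-fold CV to locate promising ranges.
wide_grid = {
    "n_estimators": np.arange(100, 1001, 100),
    "max_depth": [None, 10, 20, 30],
    "max_features": ["sqrt", 0.3, 0.5],
    "min_samples_leaf": [1, 2, 4],
}
rand = RandomizedSearchCV(RandomForestRegressor(random_state=0), wide_grid,
                          n_iter=50, cv=5, random_state=0)
rand.fit(X_tr, y_tr)

# Stage 2: exhaustive grid search around the best randomized result.
best = rand.best_params_
narrow_grid = {
    "n_estimators": [best["n_estimators"]],
    "max_depth": [10, 20, 30],
    "min_samples_leaf": [best["min_samples_leaf"], best["min_samples_leaf"] + 1],
}
grid = GridSearchCV(RandomForestRegressor(random_state=0), narrow_grid, cv=5)
grid.fit(X_tr, y_tr)
print("best params:", grid.best_params_, "CV R^2:", grid.best_score_)
```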
(Table: final random forest tuning results.)
- Gradient Boosting
Gradient boosting is a supervised learning model that builds simple prediction models sequentially, where each new model tries to predict the residual error left by the previous ones. We established baseline predictions by fitting a gradient boosting model with its default parameters. To improve upon this baseline model, we performed hyperparameter tuning: we used grid search with K-fold cross-validation (CV) to find the best hyperparameters, and evaluated the tuned models by comparing their results to the baseline predictions.
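A minimal sketch of this workflow, again assuming the X_tr, y_tr, X_te, y_te split from the Lasso snippet and an illustrative grid:

```python
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV

# Default-parameter model serves as the baseline.
base = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
print("default-params test R^2:", base.score(X_te, y_te))

# Grid search with K-fold CV over a few key hyperparameters.
param_grid = {
    "n_estimators": [200, 500, 1000],
    "learning_rate": [0.01, 0.05, 0.1],
    "max_depth": [2, 3, 4],
}
grid = GridSearchCV(GradientBoostingRegressor(random_state=0), param_grid, cv=5)
grid.fit(X_tr, y_tr)
print("tuned test R^2:", grid.best_estimator_.score(X_te, y_te))
```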
(Table: final gradient boosting tuning results.)
Model Selection
Our linear model outperformed the tree-based models, scoring an R² of 0.96 on the unseen test set.
Conclusions and Future Work
This project laid the groundwork for more accurate forecasts by surfacing insights about the most impactful drivers of housing prices in Ames, Iowa. Our predictions (R² of 0.96) can help individuals interested in investing in, buying, or selling a house in Ames make better capital allocation decisions. Future work includes adding data on employment, education, and crime to identify additional key factors driving house prices in Ames, Iowa.