Using Data to Predict Ames, Iowa Housing Price

Posted on Jun 16, 2021

The skills the authors demonstrated here can be learned by taking the Data Science with Machine Learning bootcamp with NYC Data Science Academy.

Source code: Github

Introduction

For this project, the primary objective was to create and assess regression models to accurately predict house prices using a Kaggle competition data set.

The data set includes around 3000 records of house sales in Ames, Iowa between 2006 and 2010 and contains 79 explanatory variables detailing various aspects of residential homes, such as square footage, number of rooms, and sale year. The data is split equally into a training set, which will be used to create the model, and a test set, which will be used to test model performance. The general workflow to create the model will be as follows:

  1. Data preprocessing
  2. Exploratory data analysis/Feature Engineering
  3. Model training & hyperparameter tuning
  4. Model diagnostics & evaluation
  5. Result interpretation

Data preprocessing

The first step before implementing machine learning models was to preprocess the data by analyzing the different types of features in the dataset, imputing missing values, and removing outliers. These preprocessing steps are integral to model performance later on, as they improve the quality and interpretability of the dataset. Of the 80 variables in the data set, 23 were nominal, 23 were ordinal, 14 were discrete and 20 were continuous.

Among those, features with more than 80% of observations missing, such as ‘PoolQC’, ‘MiscFeature’, ‘Alley’ and ‘Fence’, were dropped completely. A careful reading of the documentation for the remaining variables showed that many ‘NA’ values corresponded to the absence of a feature rather than a truly missing value. Therefore, I replaced these ‘NA’ values with ‘None’ for categorical variables or 0 for numerical variables.
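This replacement step can be sketched with pandas on a toy frame (the two columns below are illustrative Ames fields, not the full set that was filled):

```python
import pandas as pd
import numpy as np

# Toy frame standing in for the Ames data; column names follow the data dictionary.
df = pd.DataFrame({
    "FireplaceQu": ["Gd", np.nan, "TA"],   # categorical: NaN means "no fireplace"
    "GarageArea": [548.0, np.nan, 0.0],    # numerical: NaN means "no garage"
})

# 'NA' in these columns encodes absence of the feature, not a missing value,
# so fill categoricals with 'None' and numericals with 0.
cat_cols = ["FireplaceQu"]
num_cols = ["GarageArea"]
df[cat_cols] = df[cat_cols].fillna("None")
df[num_cols] = df[num_cols].fillna(0)

print(df)
```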

For the ‘Lot Frontage’ feature, a KNN method was used to impute the missing values instead of the mean/mode, because mean/mode imputations ignore feature correlations and reduce the variance of the data. Once missing values were imputed, categorical features with string values were converted into ordinal variables by mapping them to integers.
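A minimal sketch of KNN imputation for ‘LotFrontage’ with scikit-learn's KNNImputer, on a toy frame (the companion columns and values are illustrative):

```python
import numpy as np
import pandas as pd
from sklearn.impute import KNNImputer

# Toy numeric frame; LotFrontage has gaps while correlated columns are complete.
df = pd.DataFrame({
    "LotFrontage": [65.0, np.nan, 70.0, 60.0, np.nan, 85.0],
    "LotArea":     [8450, 9600, 11250, 9550, 14260, 14115],
    "1stFlrSF":    [856, 1262, 920, 961, 1145, 796],
})

# KNNImputer fills each missing LotFrontage with the mean of its k nearest
# neighbours in feature space, so imputed values respect correlations
# that a plain mean/mode fill would ignore.
imputer = KNNImputer(n_neighbors=2)
imputed = pd.DataFrame(imputer.fit_transform(df), columns=df.columns)
print(imputed["LotFrontage"].round(1).tolist())
```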

For instance, evaluative features containing rankings from ‘Poor’ to ‘Excellent’ were mapped to 1-5 to enhance interpretability (i.e. {‘None’:0, 'Po':1, 'Fa':2, 'TA':3, 'Gd':4, 'Ex':5}). Furthermore, categorical features containing ordinal data (i.e. ‘LotShape’, ‘BldgType’, ‘BsmtExposure’) were converted into ordinal features accordingly by mapping them into a numerical format. Some numerical variables such as ‘MoSold’ and ‘YrSold’ were converted to string type since they do not truly reflect numerical properties.
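A sketch of both conversions with pandas (‘KitchenQual’ stands in for the evaluative features; the quality map is the one quoted above):

```python
import pandas as pd

# Quality rankings share one scale across features such as ExterQual and KitchenQual.
quality_map = {"None": 0, "Po": 1, "Fa": 2, "TA": 3, "Gd": 4, "Ex": 5}

df = pd.DataFrame({
    "KitchenQual": ["Gd", "TA", "Ex"],
    "MoSold": [2, 7, 11],
    "YrSold": [2008, 2007, 2010],
})

# Map evaluative strings to integers...
df["KitchenQual"] = df["KitchenQual"].map(quality_map)
# ...and cast MoSold/YrSold to strings, since their numbers are labels, not magnitudes.
df[["MoSold", "YrSold"]] = df[["MoSold", "YrSold"]].astype(str)

print(df.dtypes)
```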

Exploratory data analysis

Based on the density plot of the dependent (target) variable, Sale Price, it was immediately apparent that it was right skewed. Since normally distributed errors are one of the assumptions of linear regression, a log transformation was applied to correct the skewness, which yielded a much more normal distribution (as shown below).
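The effect of the log transform can be illustrated on synthetic right-skewed prices (log-normal draws standing in for the actual SalePrice data):

```python
import numpy as np

def sample_skew(x):
    """Third standardized moment: 0 for symmetric data, > 0 for a right tail."""
    x = np.asarray(x, dtype=float)
    return float(np.mean((x - x.mean()) ** 3) / x.std() ** 3)

rng = np.random.default_rng(0)
# Log-normal draws mimic the right-skewed SalePrice density (illustrative only).
prices = rng.lognormal(mean=12.0, sigma=0.4, size=1000)

raw_skew = sample_skew(prices)
log_skew = sample_skew(np.log(prices))

# The log pulls the long right tail in, leaving a near-symmetric distribution.
print(f"skew before log: {raw_skew:.2f}, after: {log_skew:.2f}")
```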


Once the Sale Price was transformed, a correlation matrix was used to identify multicollinearity among the variables (Figure 3). In order to reduce multicollinearity among the explanatory variables, several new features were created, each encapsulating the information of several others:

  * ‘Total Bath’ – total number of bathrooms in the house and basement
  * ‘Overall Score’ – combined overall quality and overall condition scores
  * ‘House Age’ – number of years from remodeling to sale
  * ‘Overall Porch’ – total square feet of porch area
  * ‘Basement Area’ – total square feet of basement area

In addition, new features indicating the presence of amenities such as a pool or a fireplace were created as binary variables, coded 0 (doesn’t exist) or 1 (exists).
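Assuming the standard Ames column names feed these engineered features, the combinations might be sketched as follows (the exact formulas, such as weighting half baths by 0.5, are my guesses rather than the post's):

```python
import pandas as pd

# Two toy rows with standard Ames column names.
df = pd.DataFrame({
    "FullBath": [2, 1], "HalfBath": [1, 0], "BsmtFullBath": [1, 0], "BsmtHalfBath": [0, 1],
    "OverallQual": [7, 5], "OverallCond": [5, 6],
    "YrSold": [2008, 2009], "YearRemodAdd": [2003, 1950],
    "OpenPorchSF": [61, 0], "EnclosedPorch": [0, 112], "ScreenPorch": [0, 0],
    "PoolArea": [0, 512], "Fireplaces": [1, 0],
})

# Combine collinear columns into single engineered features (half baths count as 0.5).
df["TotalBath"] = (df["FullBath"] + df["BsmtFullBath"]
                   + 0.5 * (df["HalfBath"] + df["BsmtHalfBath"]))
df["OverallScore"] = df["OverallQual"] + df["OverallCond"]
df["HouseAge"] = df["YrSold"] - df["YearRemodAdd"]
df["OverallPorch"] = df[["OpenPorchSF", "EnclosedPorch", "ScreenPorch"]].sum(axis=1)

# Binary presence flags for amenities.
df["HasPool"] = (df["PoolArea"] > 0).astype(int)
df["HasFireplace"] = (df["Fireplaces"] > 0).astype(int)
```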

Applying a correlation heatmap

Applying a correlation heatmap to the resultant engineered data reveals which features are most correlated to the sale price (SalePrice) and which are least correlated, according to Spearman’s correlation coefficients. Features such as Overall Quality and Above Ground Living Area were highly correlated with Sale Price, in contrast with the Month or Year Sold, which showed near zero correlation with Sale Price. Among the highly correlated variables, outliers more than 3 standard deviations above or below the mean were removed from the dataset, since outliers increase the error variance and could potentially skew our model predictions.
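The 3-standard-deviation trim might look like this sketch, applied to a synthetic column with one planted outlier:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
df = pd.DataFrame({"GrLivArea": rng.normal(1500, 300, 500)})
df.loc[0, "GrLivArea"] = 6000  # plant an extreme outlier

# Keep rows whose value lies within 3 standard deviations of the column mean.
col = df["GrLivArea"]
mask = (col - col.mean()).abs() <= 3 * col.std()
trimmed = df[mask]
print(len(df) - len(trimmed), "rows removed")
```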


For certain explanatory variables in the dataset, features that displayed more than 90% dominance in one category (i.e. 'Street', 'LandContour', 'Utilities') were dropped completely, as they would not provide any predictive/explanatory value in creating the model. For the ‘Neighborhood’ feature, the different areas were subdivided into ‘high’, ‘middle’ and ‘low’ priced neighborhoods to prevent too much expansion of the feature space after dummification. Once the categorical variables were processed appropriately, they were dummified in order to implement the linear regression models, meaning they were converted into a series of binary variables representing whether each category was true for a given house, yes (1) or no (0).
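A sketch of the tiering-plus-dummification step (the tier assignments below are hypothetical examples based on the neighborhoods named later, not the post's exact mapping):

```python
import pandas as pd

# Hypothetical tier assignment; the actual high/middle/low split was based on
# neighborhood price levels.
tier = {"NoRidge": "high", "NridgHt": "high", "StoneBr": "high",
        "Somerst": "middle", "Veenker": "middle", "Timber": "middle",
        "MeadowV": "low", "IDOTRR": "low"}

df = pd.DataFrame({"Neighborhood": ["NoRidge", "Somerst", "IDOTRR"]})
df["NeighborhoodTier"] = df["Neighborhood"].map(tier)

# Dummify: 3 tiers expand into 3 binary columns instead of ~25 neighborhood columns.
dummies = pd.get_dummies(df["NeighborhoodTier"], prefix="Nbhd")
print(dummies.columns.tolist())
```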

Model training 

We explored two types of models in this study: linear regression models and tree-based ensemble models. For linear regression, we took the log of the target variable, Sale Price, and trained the model with multiple linear regression and regularization models such as Ridge and Lasso, using a 70/30 train-validation split. All three linear models provided train-test scores of 0.90–0.91, MSE of approximately 0.013, and RMSE of approximately 0.114. Then the model was validated using 5-fold cross validation, yielding a mean cross validation score of approximately 0.88. Results of the models are summarized below:
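A minimal sketch of this training setup, run here on synthetic data rather than the engineered Ames features (so the printed scores will not match the post's):

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression, Ridge, Lasso
from sklearn.model_selection import train_test_split, cross_val_score

# Synthetic stand-in for the engineered features and log(SalePrice).
X, y = make_regression(n_samples=1400, n_features=30, noise=10, random_state=42)

# 70/30 train-validation split, as in the post.
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=42)

for name, model in [("OLS", LinearRegression()),
                    ("Ridge", Ridge(alpha=1.0)),
                    ("Lasso", Lasso(alpha=0.1))]:
    model.fit(X_tr, y_tr)
    # 5-fold cross validation, mirroring the post's validation step.
    cv = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name}: val R2 = {model.score(X_val, y_val):.3f}, "
          f"5-fold CV R2 = {cv.mean():.3f}")
```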


As we can see, all three linear models produced relatively consistent results, with Ridge regression performing slightly better than the rest. The model coefficients suggest that the overall score (positive), fireplace (positive), kitchen/garage quality (positive), neighborhood (negative) and house age (negative) are among the most significant variables affecting the overall sale price of the house.


Normal distribution and independence of residuals (error terms) from predictors are core assumptions of all regression models. The skewness and kurtosis analyses confirm that these assumptions hold, as they indicate that the error distributions are approximately normal with a mean of zero. The residual plot further confirms these assumptions, as we see a nice even distribution around zero.


The quantile-quantile (qq) plot

The quantile-quantile (qq) plot is a graphical technique for determining if two data sets come from populations with a common distribution. Here, a qq plot allows us to determine whether the residuals are normally distributed, and in this case, our plot is relatively straight between the 25%–75% quantile range. However, the qq plot indicates that the ridge model tends to underpredict house prices in the higher quantiles and overpredict those in the lower quantiles. Therefore, we conclude that the model is not as robust in predicting house prices when dealing with data points at the extremes of the price range.
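The straightness of the qq line can also be checked numerically with scipy's probplot, sketched here on synthetic residuals rather than the actual model's:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
residuals = rng.normal(0, 0.11, 1000)  # stand-in for the ridge model's residuals

# probplot returns theoretical vs. ordered sample quantiles plus a least-squares
# fit; r close to 1 means the quantiles lie on a straight line (i.e. normality).
(osm, osr), (slope, intercept, r) = stats.probplot(residuals, dist="norm")
print(f"QQ fit r = {r:.4f}")
```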


In order to quantify the effect of each feature in our regression model, we exponentiated each coefficient (since the target variable is in natural log form) to calculate the dollar change in the average house sale price given a unit change in a given explanatory variable. Results are shown in the table below:


For example, we can see that for a given unit change of HouseAge, the sale price decreases by $1160 while for a given unit change of the OverallScore or Fireplace, the sale price increases by $6617 and $5155 respectively.
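The arithmetic behind such figures can be sketched as follows; the reference price and coefficient values here are illustrative placeholders, not the fitted model's:

```python
import numpy as np

mean_price = 180_000  # illustrative reference price, not from the post

# Hypothetical ridge coefficients on the log(SalePrice) scale.
coefs = {"OverallScore": 0.036, "Fireplace": 0.028, "HouseAge": -0.0065}

# With a log target, a one-unit increase multiplies price by exp(beta),
# i.e. a dollar change of mean_price * (exp(beta) - 1) at the reference price.
for name, beta in coefs.items():
    dollars = mean_price * (np.exp(beta) - 1)
    print(f"{name}: {dollars:+,.0f} per unit")
```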

However, one drawback of the regression model is that it contains dummified features that obscure the interpretability of the explanatory variables. Therefore, in addition to the linear models implemented above, we also tried non-linear, tree-based models. Furthermore, there may be non-linear relationships in our data that tree models are able to capture, perhaps offering better performance than the linear models described above. RMSE scores for the non-linear models are summarized below:
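The tree-model comparison might be sketched like this on synthetic data; GradientBoostingRegressor stands in for XGBoost so the snippet runs without the xgboost package (swap in xgboost.XGBRegressor for the post's actual model):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=800, n_features=20, noise=15, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

# Fit each ensemble and compare validation RMSE, as in the post's model table.
for name, model in [("RandomForest", RandomForestRegressor(n_estimators=200, random_state=0)),
                    ("GradientBoosting", GradientBoostingRegressor(random_state=0))]:
    model.fit(X_tr, y_tr)
    rmse = np.sqrt(mean_squared_error(y_val, model.predict(X_val)))
    print(f"{name}: RMSE = {rmse:.2f}")
```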



These results can help inform decision-making at the business level. As stated above, the model can provide insight into the pricing of real estate assets: plug in a house's characteristics and the model returns a price. In addition, it can provide information on which features of a new house are most valuable to potential house buyers.

Among the models that we have explored in this project, we find that ridge regression performs best among the linear regression models while XGBoost works best among the nonlinear models, based on the RMSE and R^2 scores. Based on these model outputs, we conclude that the fireplace, total square footage, and garage size (GarageCars) are among the most significant positively correlated features affecting the sale price.

Choosing the optimal model

Choosing the optimal model ultimately depends on what we are trying to achieve through machine learning. For instance, if the primary goal is to predict the sale price of a house, a simple linear model in this case would suffice. To improve the predictive accuracy of the outliers observed in our data, one might consider using a stacked model combining ridge regression and XGBoost at the cost of model interpretability.
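Such a stacked model might look like the following sketch, using scikit-learn's StackingRegressor on synthetic data, with GradientBoostingRegressor standing in for XGBoost:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=600, n_features=15, noise=10, random_state=7)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=7)

# Stack ridge with a boosted-tree model; a ridge meta-learner blends their
# out-of-fold predictions, trading interpretability for accuracy.
stack = StackingRegressor(
    estimators=[("ridge", Ridge(alpha=1.0)),
                ("gbt", GradientBoostingRegressor(random_state=7))],
    final_estimator=Ridge(),
)
stack.fit(X_tr, y_tr)
print(f"stacked R2 = {stack.score(X_val, y_val):.3f}")
```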

However, predicting the sale price of a house isn't necessarily everything. In fact, knowing what factors influence the selling price of a house may well be more valuable than the predictions themselves from the seller’s standpoint. Therefore, an XGBoost model or even a random forest model (with better tuning) may be a better option if the goal is to enhance model interpretability.

The following are some of the insights gleaned from the feature importances output by each model, along with some recommendations for residential real estate buyers and sellers:

Linear Regression Models: Neighborhood, Foundation, Kitchen quality

Buyers: It may be an obvious point, but the neighborhood the house is in matters. Consider buying a house in a ‘high’ tier neighborhood such as Northridge, Northridge Heights or Stone Brook; ‘mid’ tier neighborhoods include Timberland, Veenker and Somerset. Furthermore, consider buying houses with a concrete or slab foundation rather than a wood or stone foundation, as the former are positively correlated with a higher sale price.

Sellers: Since sellers cannot change the neighborhood or the foundation of the house they currently live in, consider renovating the kitchen to improve kitchen quality. Overall quality matters as well, so consider landscaping or remodeling to improve the overall condition of the house.

Random Forests/Gradient Boosting Regressor: TotalArea, Overall quality, Living Room Area

Buyers: Consider houses with the largest above-ground square footage, especially those with the biggest living room area. More bathrooms are a plus. If such houses are too expensive, look for houses that can be expanded easily through renovation.

Sellers: Consider investing in a porch/patio addition, bumping out the kitchen, or adding more bathrooms to expand the square footage of the house. Expanding an existing garage can help drive up the home value, but these renovations are generally quite expensive.

XGBoost: Fireplace, Garage size, Total square ft area 

Buyers: In addition to a large total square footage, consider houses with a fireplace and a larger garage, as these features carry the most weight in the XGBoost model. If such houses are too expensive, look for houses that can be expanded easily through renovation.

Sellers: Consider investing in the installation of a fireplace and central air conditioning to fetch higher prices if possible. Otherwise, the same suggestions apply as those given for the random forest/gradient boosting regressor.


About Author

Thomas Kim

Goal-oriented data scientist with 4 years of quantitative background in biomedical research with a bachelors degree in Biology and a masters degree in Bioengineering. Demonstrated success in hypothesis testing, data analysis and visualization to communicate results to technical...

