Owning a Home: Fitting Towards Ames
Introduction
For most, owning a home is regarded as a key first step on the road to financial stability. Amid the turmoil of stocks and bonds, coins, and metals, a house is seen as the premier vehicle for long-term, multi-generational wealth growth. And such perceptions rest on a strong foundation: private property in the United States is enshrined and protected like nowhere else.
But one assumption goes unchecked: the promise of price stability. That promise has been voided in many parts of the country over the last 20 years, as the nation was knocked down by one financial crisis after another. However, one place - an ark coasting on the seas of financial turmoil - came through unaffected by the volatility in the housing market: Ames, Iowa.
By understanding Ames, we try to infer broader principles about what gives any city a stable housing market. And we will use those insights to construct a model that accurately predicts house sale prices.
And so we fit towards Ames.
Understanding Ames
The demographics, economy, quality of life, and virtually every other aspect of Ames are defined by Iowa State University (ISU), a large public research university. Over 75% of Ames residents either study or work at ISU, making the town essentially one large extended campus.
As in most college towns, the real estate market is defined by a very large proportion of rental properties. The income they generate ultimately depends on ISU's annual budget, which helps explain the remarkable stability of house prices (as well as transaction volumes) in Ames.
The Ames data was collected in 2006-2010, a very turbulent time in U.S. real estate (and the economy in general). However, none of that volatility reached Ames:
[placeholder for image of time data]
From a machine learning standpoint, such price stability means we cannot extract useful information from the timing of a transaction.
The last observation regarding the Ames real estate picture is that the dataset includes homes from very different price segments, ranging from upscale neighborhoods with $300,000 homes next to a golf course to neighborhoods with $100,000 homes next to the airport.
The neighborhood and the zoning category strongly influence the house prices and came up as important features in all our pricing models:
[placeholder for neighborhoods plot]
However, it is difficult for an algorithm to predict prices equally well across the entire spectrum. Our models perform better in the middle segments, where more transaction data is available. Note that it would be difficult for a human real estate agent to know all these segments equally well either; real estate agents tend to specialize.
As the plot shows, neighborhoods can be ordered by the median sale price of their homes, which suggests that the neighborhood a home is located in is a strong indicator of its price.
The overall quality of the home also has a natural, monotonically increasing relationship with the sale price.
Price Range
Using these three features (Neighborhood, MSZoning, and OverallQual), we create a new feature called PriceRange. PriceRange is computed by taking the median SalePrice of homes that share the same neighborhood, zoning, and overall quality rating, and splitting those group medians into quantile bins.
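Below is a minimal sketch of how such a feature could be built with pandas. The column names follow the standard Ames/Kaggle dataset; the three-bin split and the `add_price_range` helper are illustrative assumptions, not our exact pipeline.

```python
import pandas as pd

def add_price_range(train: pd.DataFrame) -> pd.DataFrame:
    """Illustrative sketch: bin homes by the median SalePrice of their
    (Neighborhood, MSZoning, OverallQual) group."""
    keys = ["Neighborhood", "MSZoning", "OverallQual"]
    # Median sale price of each group, broadcast back to every row.
    group_median = train.groupby(keys)["SalePrice"].transform("median")
    # Quantile-bin the group medians into low / mid / high price ranges.
    train["PriceRange"] = pd.qcut(group_median, q=3, labels=["low", "mid", "high"])
    return train
```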
As we can see in the scatter plot, PriceRange captures some indication of the overall price. A better way to illustrate the separability of the three classes is a boxplot:
For a given home, PriceRange carries no direct information about its own SalePrice. Yet it captures that information relatively well. We could have used more or different features when engineering this feature, and we will explore this topic below.
Curse of Dimensionality & Imputation
For our project, the test set was equal in size to the training set. This created problems for nontrivial feature engineering: when creating a new categorical feature based on binning, the test set might contain combinations that fall outside the bins built from the training set.
Adding more features would have produced cleaner separability. But we also had to minimize imputation on the test set by keeping the feature broad enough to capture the unknown price range of the test data.
We achieved a balance with the combination of Neighborhood, Zoning, and Overall Quality. These three features correlated well with the overall price while leaving only 88 values in the test set to be imputed.
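As a sketch of what that assignment might look like, the helper below maps each test home to the PriceRange of its training group; the neighborhood-only fallback for unseen combinations is a hypothetical imputation rule, not necessarily the one we used.

```python
import pandas as pd

def assign_price_range(train: pd.DataFrame, test: pd.DataFrame) -> pd.Series:
    """Map test homes to their training group's PriceRange, falling back
    to a coarser neighborhood-only lookup for unseen combinations."""
    keys = ["Neighborhood", "MSZoning", "OverallQual"]
    lookup = (train.groupby(keys)["PriceRange"]
                   .agg(lambda s: s.mode().iloc[0])
                   .reset_index())
    out = test.merge(lookup, on=keys, how="left")
    # Fallback: combinations absent from the training set get the
    # most common PriceRange of their neighborhood.
    nbhd_mode = train.groupby("Neighborhood")["PriceRange"].agg(
        lambda s: s.mode().iloc[0]
    )
    missing = out["PriceRange"].isna()
    out.loc[missing, "PriceRange"] = out.loc[missing, "Neighborhood"].map(nbhd_mode)
    return out["PriceRange"]
```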
Linear Models
There are four assumptions of the linear model:
1. The response is normally distributed.
2. There exists a linear relationship between the predictors and the response.
3. There are no interactions among the predictors (no multicollinearity).
4. The residual errors are independent of each other and have constant variance (homoscedasticity).
The first three points deal with a priori assumptions about the data and target. The fourth is necessarily model-dependent.
These four points are usually taken for granted, but we decided to explore the first three and test the fourth on the Ames dataset.
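One common way to check assumption #3, for example, is via variance inflation factors. The sketch below uses statsmodels on a few assumed numeric columns from the standard Ames dataset; it illustrates the check rather than reproducing our exact code.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# A few numeric predictors from the standard Ames dataset (assumed names).
X = sm.add_constant(train[["OverallQual", "GrLivArea", "TotalBsmtSF"]])

# VIF above ~10 is a common rule-of-thumb signal of multicollinearity.
vif = pd.Series(
    [variance_inflation_factor(X.values, i) for i in range(1, X.shape[1])],
    index=X.columns[1:],
)
print(vif)
```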
Testing Assumptions
Assumption #1: Normal Distribution
Here is the distribution of the response, SalePrice. Applying a log transformation brings it closer to a Gaussian distribution.
[side-by-side plots: untransformed response | log-transformed response]
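A quick way to quantify this effect (a sketch, assuming `train` is the Ames training DataFrame):

```python
import numpy as np
from scipy.stats import skew

# SalePrice is strongly right-skewed; log1p pulls it toward symmetry.
print("raw skew:", skew(train["SalePrice"]))
print("log skew:", skew(np.log1p(train["SalePrice"])))
```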
Assumption #2 & #3: Linearity & No Interactions
In a multiple linear regression model, each predictor's coefficient represents the change in the response for a unit change in Xi while holding all other Xj constant.
Let's examine this idea with our new feature.
Holding PriceRange constant, we see that the relationship between median Overall Quality and SalePrice is highly non-linear and varies from bin to bin.
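A sketch of this check with seaborn (the plotting approach is an assumption; the grouping mirrors the PriceRange construction above):

```python
import seaborn as sns
import matplotlib.pyplot as plt

# Median SalePrice vs. OverallQual, one line per PriceRange bin.
# Parallel lines would support "no interaction"; crossing or bending
# lines suggest the effect of quality depends on the price segment.
med = (train.groupby(["PriceRange", "OverallQual"])["SalePrice"]
            .median()
            .reset_index())
sns.lineplot(data=med, x="OverallQual", y="SalePrice", hue="PriceRange")
plt.show()
```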
Generalized Additive Model (GAM)
A GAM generalizes the linear model: the response is allowed to depend not only on a sum of linear functions, but on a sum of arbitrary smooth functions of the predictor variables.
We can think of the model as:

$$g(\mathbb{E}[Y]) = \beta_0 + s_1(x_1) + s_2(x_2) + \cdots + s_p(x_p)$$

where the transformed response, g(E[Y]) - the logarithm of the sale price in our case - is a sum of smooth functions of the predictors x_i - our housing features (age of the home, basement square footage, etc.).
By combining basis functions into the smooth terms s_i, a GAM can represent a large family of functional relationships (to do so, it relies on the assumption that the true relationship is likely to be smooth rather than wiggly).
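A minimal fitting sketch with the pygam library (one of several GAM implementations; the feature columns here are assumptions for illustration):

```python
import numpy as np
from pygam import LinearGAM, s

# Assumed numeric features; one spline term per column.
X = train[["HomeAge", "TotalBsmtSF", "OverallQual"]].values
y = np.log(train["SalePrice"].values)

# gridsearch() picks the smoothing strength (lambda) for each term.
gam = LinearGAM(s(0) + s(1) + s(2)).gridsearch(X, y)
gam.summary()
```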
Motivation
The reason we chose to explore this class of models was to investigate the performance of a model that doesn't make an a priori assumption of linearity. In fact, a GAM can be used to reveal and estimate non-linear effects of the predictors on the dependent variable.
Partial Dependence Plots
Since GAMs are additive, we can separate the effect of each predictor from the others and generate Partial Dependence Plots (PDPs). Each PDP plots a feature against the expected response while holding all other features at their median values.
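With pygam this is a short loop; the sketch below follows the library's standard recipe, continuing from the `gam` object fitted above.

```python
import matplotlib.pyplot as plt

# One partial-dependence panel per smooth term.
for i, term in enumerate(gam.terms):
    if term.isintercept:
        continue
    XX = gam.generate_X_grid(term=i)
    plt.plot(XX[:, term.feature], gam.partial_dependence(term=i, X=XX))
    plt.title(f"Partial dependence of term {i}")
    plt.show()
```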
Shown above, the GAM captures the slight hump in price associated with homes aged 40-50 years, corresponding roughly to those built in Ames during the 1970s.
[side-by-side PDPs: median Overall Condition shows a linear relationship with log SalePrice; median MSSubClass shows a nonlinear relationship with log SalePrice]
GAMs are equally flexible at capturing linear and nonlinear relationships. When the predictor-response relationship is clearly linear, the GAM adds no further complexity. And for ordinal features, where the numerical mapping is often arbitrary, it can capture the variance by fitting to the median values.
For comparison, here is how closely the GAM captured the nonlinear relationship between Overall Quality - broken out by PriceRange - and the log of the sale price.
Evaluation of Performance
GAM v Linear Model (LM)
[side-by-side plots: GAM predictions vs. true log prices | LM predictions vs. true log prices]
Both models seem to underperform for homes in the low price range, as shown by the noticeable dispersion of blue points in the lower left of both plots.
The residual plots show this more starkly. Plotted are the residuals of both the GAM and the linear model. Most residuals lie within the 95% confidence band, denoted by the dashed black lines. However, we notice that a higher proportion of low-price-range homes (relative to middle and high) falls outside the band.
[side-by-side plots: GAM residuals | LM residuals. Marker size scales with residual magnitude; most residuals of both models lie within the 95% confidence band (dashed black lines)]
Overall, these findings show that the GAM performs about as well as a linear model, with many features truly exhibiting approximately linear relationships with the response. There is some underperformance, but this likely reflects the limited feature set shared by both the GAM and the LM.
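As a closing sketch, here is how such a head-to-head comparison might be scored on held-out log prices (assumes the `X` and `y` arrays from the GAM snippet above; the split and metric are illustrative choices):

```python
import numpy as np
from pygam import LinearGAM, s
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)

lm = LinearRegression().fit(X_tr, y_tr)
gam = LinearGAM(s(0) + s(1) + s(2)).gridsearch(X_tr, y_tr)

# RMSE on the log scale, so errors are roughly relative price errors.
for name, model in [("LM", lm), ("GAM", gam)]:
    rmse = np.sqrt(mean_squared_error(y_te, model.predict(X_te)))
    print(f"{name} RMSE (log scale): {rmse:.4f}")
```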