Using Data to Predict House Prices in Ames, Iowa
Data Science Background
This project utilizes a variety of machine learning techniques to predict house prices based on a Kaggle dataset of 79 explanatory variables. The data covers the sale of individual residential properties in Ames, Iowa from 2006 to 2010. The goal of the project is to forecast the prices of 1,459 homes as accurately as possible, as well as to identify the features that have the greatest impact on sale price.
Part 1: Data Exploration, Cleaning, and Pre-processing
My first step was to read the documentation and contextualize the data. Real estate has a history of being cyclical, and the home sales detailed in this particular dataset take place amidst the 2008 Housing Crisis. The following boxplot shows sale prices broken down by month and year:
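For reference, here is a minimal sketch of how such a plot could be produced, assuming the training data is loaded from the Kaggle train.csv with its standard column names (MoSold, YrSold, SalePrice):

```python
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# Load the Kaggle training data (assumed file name and column names).
train = pd.read_csv("train.csv")

# Sale price distribution by month, colored by year sold.
plt.figure(figsize=(14, 6))
sns.boxplot(data=train, x="MoSold", y="SalePrice", hue="YrSold")
plt.title("Sale Price by Month and Year Sold")
plt.show()
```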
Interestingly, there is very little evidence of a pricing bubble or crash here, and very little seasonality overall. We can move on to further analysis.
With the next plot, we see that overall quality has a strong, roughly linear relationship with sale price.
Above ground living area also shows a somewhat linear relationship to sale price, as shown in the scatterplot below. The documentation notes there are five outliers in the dataset, three of which are partial sales and two of which are unusually large sales. I removed the two points circled in orange, as those stood out significantly.
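A sketch of that outlier removal, assuming the Kaggle column names; the thresholds below are my illustrative choice for isolating the two circled points:

```python
# Drop the two very large homes that sold for unusually low prices
# (thresholds are illustrative, chosen to isolate the circled points).
outliers = train[(train["GrLivArea"] > 4000) & (train["SalePrice"] < 300000)].index
train = train.drop(index=outliers)
```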
Feature Selection (and De-Selection)
Now that I'm clued in on the linear relationships of some variables to sale price, I know that a linear regression may be a good candidate for model fitting. The next step is to reduce some of the dataset's dimensionality. I will use a correlation heatmap to identify variables that are highly correlated with each other and remove as many redundant columns as possible.
The feature "GarageCars", for example, is highly correlated with "GarageArea". We can drop it since it doesn't offer any new information. I dropped "GarageYrBuilt", "TotRmsAbvGrd", "1stFlSF", and "2ndFlSF" for similar reasons.
Missing Value Imputation
Another important issue to address was the 29 columns in the dataset containing missing values. Out of those, 20 relate to optional "bonus features" of a house such as a pool, fireplace, or fence. I imputed zeros or "none" for these.
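A sketch of this imputation; the column lists below are illustrative examples of the roughly 20 "bonus feature" columns, not the full set:

```python
# Missing values in these optional-feature columns mean the feature is
# absent, so fill categoricals with "None" and numeric areas with 0.
none_cols = ["PoolQC", "MiscFeature", "Alley", "Fence", "FireplaceQu",
             "GarageQual", "GarageCond", "BsmtQual", "BsmtCond"]
zero_cols = ["MasVnrArea", "BsmtFinSF1", "BsmtFinSF2", "TotalBsmtSF", "GarageArea"]

train[none_cols] = train[none_cols].fillna("None")
train[zero_cols] = train[zero_cols].fillna(0)
```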
Eight of the remaining columns with missing values were categorical attributes dominated by one value (e.g., most values of "SaleType" were "WD"). I imputed the mode for these columns. I then dropped "Utilities", since all of its values were the same except for one.
Finally, the last column requiring imputation was "LotFrontage", with 16.7% of its values missing. I considered dropping the variable, but since lot frontage can greatly affect a home's curb appeal, I decided instead to impute the mode within each neighborhood.
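A sketch of these remaining imputations; the list of mode-imputed columns is illustrative, and the LotFrontage grouping assumes the Kaggle Neighborhood column:

```python
# Mode imputation for categorical columns dominated by a single value.
mode_cols = ["SaleType", "Electrical", "KitchenQual", "Functional", "MSZoning"]
for col in mode_cols:
    train[col] = train[col].fillna(train[col].mode()[0])

# "Utilities" is constant apart from a single row, so drop it outright.
train = train.drop(columns=["Utilities"])

# Fill LotFrontage with the most common value within each neighborhood.
train["LotFrontage"] = train.groupby("Neighborhood")["LotFrontage"].transform(
    lambda s: s.fillna(s.mode()[0])
)
```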
Feature Engineering
I created the following new features (a sketch of one way to derive them follows the list):
- HouseAge: a more interpretable version of the year built
- YearsSinceRemod: a more interpretable version of the year remodeled
- TotalBathrooms: consolidates the four columns for full, half, and basement baths
- PorchSF: consolidates the square footage of the various porch types
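One way these might be derived, assuming the Kaggle column names (the 0.5 weighting on half baths is my own assumption, not stated above):

```python
# Derive the new features from the original year, bath, and porch columns.
train["HouseAge"] = train["YrSold"] - train["YearBuilt"]
train["YearsSinceRemod"] = train["YrSold"] - train["YearRemodAdd"]
train["TotalBathrooms"] = (train["FullBath"] + 0.5 * train["HalfBath"]
                           + train["BsmtFullBath"] + 0.5 * train["BsmtHalfBath"])
train["PorchSF"] = (train["OpenPorchSF"] + train["EnclosedPorch"]
                    + train["3SsnPorch"] + train["ScreenPorch"])
```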
Encoding Categorical Variables
Fourteen categorical variables were encoded ordinally. Most of these were quality scores that were straightforward to map to numerical values.
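A sketch of the ordinal encoding, assuming the standard Ex/Gd/TA/Fa/Po quality scale; the column list is illustrative:

```python
# Map the quality/condition scales onto integers.
quality_map = {"Ex": 5, "Gd": 4, "TA": 3, "Fa": 2, "Po": 1, "None": 0}
ordinal_cols = ["ExterQual", "ExterCond", "BsmtQual", "BsmtCond", "HeatingQC",
                "KitchenQual", "FireplaceQu", "GarageQual", "GarageCond", "PoolQC"]
for col in ordinal_cols:
    train[col] = train[col].map(quality_map)
```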
For linear models, I dummified (one-hot encoded) the remaining categorical variables, including month and year sold.
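A sketch of the dummification, treating month and year sold as categories rather than numbers:

```python
# One-hot encode the remaining categoricals, including MoSold and YrSold.
train[["MoSold", "YrSold"]] = train[["MoSold", "YrSold"]].astype(str)
train = pd.get_dummies(train, drop_first=True)
```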
Variable Transformations
To bring the more heavily skewed variables closer to a normal distribution, I applied a log transformation, starting with the dependent variable.
I also applied log transformations to explanatory variables with a skew level of 0.5 or higher.
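A sketch of both transformations, using log1p and the 0.5 skewness threshold mentioned above:

```python
import numpy as np
from scipy.stats import skew

# Log-transform the target, then any numeric feature whose skewness
# exceeds 0.5 (log1p keeps zero values valid).
train["SalePrice"] = np.log1p(train["SalePrice"])

numeric_cols = train.select_dtypes(include=[np.number]).columns.drop("SalePrice")
skewed = [col for col in numeric_cols if skew(train[col].dropna()) > 0.5]
train[skewed] = np.log1p(train[skewed])
```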
Part 2: Model Fitting & Evaluation
I split the cleaned dataset into training and test sets and evaluated models using 10-fold cross-validation, with root mean squared error (RMSE) as the performance metric.
The Lasso-penalized linear regression performed best, with an RMSE of 0.1201 after tuning alpha.
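A sketch of the evaluation and tuning setup; the alpha grid and random seed are illustrative choices, not the exact values used:

```python
from sklearn.linear_model import LassoCV
from sklearn.model_selection import KFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X = train.drop(columns=["SalePrice"])
y = train["SalePrice"]

# Lasso with an internal alpha search, scored with 10-fold CV and RMSE
# on the log-transformed sale price.
lasso = make_pipeline(
    StandardScaler(),
    LassoCV(alphas=np.logspace(-4, -1, 50), cv=10, max_iter=10000),
)
scores = cross_val_score(
    lasso, X, y,
    cv=KFold(n_splits=10, shuffle=True, random_state=42),
    scoring="neg_root_mean_squared_error",
)
print(f"Mean CV RMSE: {-scores.mean():.4f}")
```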
Next, I took a look at the prediction plot and residuals.
Visually, both of these plots suggest a reasonably effective model. We can move on to evaluating the coefficients to extract additional insights.
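A sketch of how the coefficients can be pulled out of the fitted pipeline for plotting, continuing the snippet above:

```python
# Refit on the full training data and inspect the largest coefficients.
lasso.fit(X, y)
coefs = pd.Series(lasso.named_steps["lassocv"].coef_, index=X.columns)
top = coefs.reindex(coefs.abs().sort_values(ascending=False).index).head(20)
print(top)
```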
From the coefficient plot, we can see that the most important factor in determining sale price is above ground living area, which makes sense. After that is Overall Condition.
Additionally, six of the coefficients relate to a neighborhood, which suggests that location matters a great deal to buyers in Ames.
One surprise here is that the largest negative coefficient belongs to the commercial zoning classification. This may require further investigation, especially since the coefficient is disproportionately large.
Conclusion
The top takeaways from the model analysis are as follows:
- A bigger house is not always a better house, but in Ames it is very likely a more expensive one. Homeowners can therefore add considerable value to their home by building an addition, where feasible.
- Neighborhoods have a significant impact on house price. Home buyers looking to get a nicer house for less should consider neighborhoods like Edwards and Old Town.
Opportunities for Further Analysis
My final predictions placed in the top 25% of the Kaggle leaderboard. There is potential to improve the score by further fine-tuning the model hyperparameters.
In addition, stacking or blending some of the models may yield even better results; a rough sketch of what that could look like follows below. The tradeoff is that this approach adds complexity and makes the model harder to interpret.
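As a rough illustration of what that could look like (a possible extension, not something fitted in this project; the choice of base models is mine):

```python
from sklearn.ensemble import GradientBoostingRegressor, StackingRegressor
from sklearn.linear_model import Ridge

# Blend the Lasso pipeline with a boosted-tree model via a simple
# Ridge meta-learner.
stack = StackingRegressor(
    estimators=[("lasso", lasso), ("gbm", GradientBoostingRegressor(random_state=42))],
    final_estimator=Ridge(),
    cv=5,
)
stack_scores = cross_val_score(stack, X, y, cv=10,
                               scoring="neg_root_mean_squared_error")
print(f"Stacked mean CV RMSE: {-stack_scores.mean():.4f}")
```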
Finally, there are nearly endless possibilities when it comes to feature engineering: binning some categorical variables, trying different transformations on skewed variables, and adding or dropping different combinations of features.
The skills I demoed here can be learned through the Data Science with Machine Learning bootcamp at NYC Data Science Academy.
Additional details and code are available on GitHub.