Using Data to Forecast Zillow Rent Index
Project GitHub | Aditya's LinkedIn | Ryan's LinkedIn | Gabby's LinkedIn
Introduction
For our capstone project, we predicted the Zillow Rent Index (ZRI) three years into the future at the behest of our corporate partner Markerr, a real estate informatics company. We assembled multiple publicly available data sources, performed data cleaning and EDA, engineered features, and deployed machine learning models in pursuit of this goal, and our team was able to deliver insights that answered research questions valuable to our corporate partner. Our objectives for this project were to:
- Predict Zillow Rental Indices by ZIP code three years in the future
- Determine features that can predict future rent values in the absence of current rent prices
- Explore the importance of current rent prices for predicting future rent prices
Data
An important aspect of our project was finding valuable data from publicly available sources to pair with our ZRI data. Our goal was to find data with ZIP-code-level granularity and adequate coverage for the years we planned to fit models on. Below is a list of the data sources that made it into our final dataset.
- US Census American Community Survey (ACS)
- US Census Bureau, business count per ZIP code
- Homeland Infrastructure Foundation-Level Data (HIFLD)
- Multifamily Zillow Rent Index
Data Cleaning
Both of our main data sources, the ACS survey and the Zillow Rent Index, had significant issues with missing data. We dropped 364 of the 1,860 ZIP codes in the ZRI data that had more than 50% missing values, deeming the lack of information for these too great to serve our purposes. The remaining missing data points were addressed in one of two ways.
For missing values with known values on both sides (gaps in the interior of a ZIP code's time series), linear interpolation was used to fill in the information. For the significant amount of missing data at the beginning and end of the series, we used the data point twelve months ahead (for gaps at the start) or twelve months behind (for gaps at the end), adjusted for an assumed yearly rental growth rate of 4%, to compute values.
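A minimal pandas sketch of this fill logic, assuming the ZRI is arranged as one row per ZIP code and one column per month (all names and values below are illustrative, not the project's actual data):

```python
import numpy as np
import pandas as pd

# Hypothetical ZRI frame: one row per ZIP code, one monthly column.
zri = pd.DataFrame(
    np.random.uniform(1000, 3000, size=(5, 60)),
    index=[f"zip_{i}" for i in range(5)],
    columns=pd.date_range("2014-01-01", periods=60, freq="MS"),
)
zri.iloc[0, 10:14] = np.nan   # interior gap
zri.iloc[1, :6] = np.nan      # gap at the start of the series
zri.iloc[2, -6:] = np.nan     # gap at the end of the series

# Drop ZIP codes with more than 50% missing values.
zri = zri[zri.isna().mean(axis=1) <= 0.5]

# Interior gaps: linear interpolation along the time axis.
zri = zri.interpolate(axis=1, limit_area="inside")

ANNUAL_GROWTH = 0.04  # assumed yearly rental growth rate

# Leading gaps: discount the value twelve months ahead by the growth rate.
lead = zri.shift(-12, axis=1) / (1 + ANNUAL_GROWTH)
# Trailing gaps: grow the value from twelve months earlier.
trail = zri.shift(12, axis=1) * (1 + ANNUAL_GROWTH)

zri = zri.fillna(lead).fillna(trail)
```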
To deal with missing values in the ACS data, we started by dropping the seven columns with more than 20% missing values, leaving 245 features. The remaining missing values were then filled with the city average for the attribute in question. Through this cleaning process, we resolved the missingness issues in our chosen datasets and prepared the raw data for feature engineering.
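A sketch of those two ACS cleaning steps under the same caveats; the column names and city labels here are hypothetical:

```python
import numpy as np
import pandas as pd

# Hypothetical ACS frame: one row per ZIP code with a city label
# and a set of numeric survey features.
acs = pd.DataFrame({
    "zip": ["10001", "10002", "10003", "60601", "60602"],
    "city": ["New York", "New York", "New York", "Chicago", "Chicago"],
    "median_income": [85000, 91000, 92000, 78000, np.nan],
    "housing_units": [12000, 15000, np.nan, 9000, 11000],
    "mostly_missing": [np.nan, np.nan, np.nan, 1.0, np.nan],
})

# Drop feature columns with more than 20% missing values.
feature_cols = acs.columns.drop(["zip", "city"])
keep = [c for c in feature_cols if acs[c].isna().mean() <= 0.20]
acs = acs[["zip", "city"] + keep]

# Fill the remaining gaps with the city-level average of each feature.
acs[keep] = acs.groupby("city")[keep].transform(lambda s: s.fillna(s.mean()))
```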
Feature Engineering
We made numerous changes to our raw data during the feature engineering process, with 28 of our final 33 features undergoing some editing. In several instances, correlated features were combined to generate useful metrics with minimal duplicated information.
Also, many of the features in the ACS data depended strongly on the size or population of the ZIP code they described. For example, the feature "housing units" was a raw count of the housing units in a ZIP code. To make this a valuable attribute, we transformed it into housing units per capita, a measure of the supply of available housing that can be compared across ZIP codes.
We applied similar transformations to many other features, such as the percentage of housing units occupied by renters and the percentage of commuters with a commute time of over 45 minutes. This process yielded more useful features that exhibited lower multicollinearity.
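As a rough illustration, assuming hypothetical raw count columns, the per-capita and percentage transformations might look like this:

```python
import pandas as pd

# Hypothetical raw ACS counts per ZIP code (column names are illustrative).
raw = pd.DataFrame({
    "zip": ["10001", "60601"],
    "population": [23000, 15000],
    "housing_units": [14000, 9000],
    "renter_occupied_units": [9500, 5200],
    "occupied_units": [12500, 8300],
    "commuters_over_45_min": [4100, 2600],
    "total_commuters": [11000, 7400],
})

features = pd.DataFrame({"zip": raw["zip"]})
# Normalize raw counts so ZIP codes of different sizes are comparable.
features["housing_units_per_capita"] = raw["housing_units"] / raw["population"]
features["pct_renter_occupied"] = raw["renter_occupied_units"] / raw["occupied_units"]
features["pct_long_commute"] = raw["commuters_over_45_min"] / raw["total_commuters"]
```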
We also aggregated data from our other sources at the ZIP code level to create features such as the number of businesses, transit terminals, and hospitals per ZIP code. Our engineered feature set proved to have far more predictive power than the raw ACS census data, as shown in the modeling section of this report.
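A small sketch of that ZIP-level aggregation, again with illustrative inputs:

```python
import pandas as pd

# Hypothetical point-level records from the auxiliary sources,
# each tagged with the ZIP code it falls in.
businesses = pd.DataFrame({"zip": ["10001", "10001", "60601"]})
hospitals = pd.DataFrame({"zip": ["10001", "60601", "60601"]})

counts = (
    businesses.groupby("zip").size().rename("businesses").to_frame()
    .join(hospitals.groupby("zip").size().rename("hospitals"), how="outer")
    .fillna(0)
    .reset_index()
)
# The counts can then be merged onto the ZIP-level feature table, e.g.:
# features = features.merge(counts, on="zip", how="left")
```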
Principal Component Analysis
After thoroughly cleaning our data and engineering new features to limit duplicated information, significant multicollinearity concerns remained. The ACS data had multiple features that were clearly related, such as median income, the number of people earning below the poverty line, and the number of people commuting via public transportation.
To correct for the remaining multicollinearity in our transformed features, we conducted principal component analysis (PCA) to generate completely independent components while preserving the variance in the data. The graph below shows the ten features with the largest weights in the first principal component, giving an idea of where much of our data's variance lies.
Originally, we intended to use PCA to reduce the dimensionality of our data as well. However, we found we needed to keep most of the principal components to maintain adequate predictive power in our models.
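A minimal scikit-learn sketch of this PCA step on a stand-in feature matrix (the real project used the engineered ACS features; the names below are placeholders):

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Stand-in for the engineered ZIP-level feature matrix (33 columns).
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(500, 33)),
                 columns=[f"feature_{i}" for i in range(33)])

# Standardize first so no single feature dominates the components.
X_scaled = StandardScaler().fit_transform(X)

pca = PCA()  # keep all components; reduce only if the variance allows it
components = pca.fit_transform(X_scaled)

# Ten features with the largest absolute weights in the first component.
loadings = pd.Series(pca.components_[0], index=X.columns)
print(loadings.abs().sort_values(ascending=False).head(10))

# Cumulative explained variance shows how many components are needed.
print(np.cumsum(pca.explained_variance_ratio_))
```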
Models
The machine learning models we deployed in this project were lasso regression, stepwise regression, random forest, and gradient boosting. Below is a summary of their performance.
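As a hedged sketch, a cross-validated comparison of these model families could be set up with scikit-learn along the following lines; the data here is synthetic and stepwise selection is omitted for brevity:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import LassoCV
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the principal-component features (X) and the
# rent value three years ahead (y); the real project used its own data.
X, y = make_regression(n_samples=1000, n_features=33, noise=10.0, random_state=0)

models = {
    "lasso": LassoCV(cv=5),
    "random_forest": RandomForestRegressor(n_estimators=300, random_state=0),
    "gradient_boosting": GradientBoostingRegressor(random_state=0),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name}: mean R^2 = {scores.mean():.2f}")
```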
Gradient boosting was by far our best model, predicting rent prices three years out with an R² of 84%. This was a drastic improvement over models fit on our original dataset before feature engineering, which predicted rent prices with an R² of approximately 50%. However, our initial goal was to find a set of features that could outperform simply using current rent to predict future rent. To evaluate this, the graph below shows how well a single feature, current rent, performed in predicting rent three years out.
Clearly, even with our data collection and feature engineering, the best approach for predicting future rent is a linear model on current rent alone. The only instance where our feature-based model outperformed a rent-only model was with gradient boosting, where our features performed their best and the current-rent-only model performed its worst.
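For context, the rent-only baseline is just a one-feature linear regression; a sketch on synthetic numbers (the growth rate and noise level below are illustrative assumptions):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Hypothetical arrays: current rent per ZIP code and rent three years later.
rng = np.random.default_rng(0)
current_rent = rng.uniform(800, 4000, size=1000)
future_rent = current_rent * (1.04 ** 3) + rng.normal(0, 100, size=1000)

# A one-feature linear model: current rent as the only predictor.
X = current_rent.reshape(-1, 1)
scores = cross_val_score(LinearRegression(), X, future_rent, cv=5, scoring="r2")
print(f"current-rent-only baseline: mean R^2 = {scores.mean():.2f}")
```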
Analysis
To dig deeper into our second objective, determining features that can predict future rent values in the absence of current rent prices, we examined which features stood out in our model results. We ranked our linear model's features by their coefficients and our non-linear models' features by the feature importances from our random forest. The charts below list the top features that stood out when predicting rent values.
We can see that "median income" appears in both model formats, as income is a huge driving factor in an individual's ability to pay rent. The percentage of occupied rental units also has a strong linear relationship with future rent prices, which makes sense, as the supply of rental units affects their pricing.
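A sketch of how these two rankings could be pulled from fitted models (synthetic data; the features should share a common scale for coefficient magnitudes to be comparable):

```python
import numpy as np
import pandas as pd
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LassoCV

# Synthetic stand-in for the standardized feature matrix and target.
X, y = make_regression(n_samples=500, n_features=10, random_state=0)
feature_names = [f"feature_{i}" for i in range(10)]

# Linear model: rank features by the magnitude of their coefficients.
lasso = LassoCV(cv=5).fit(X, y)
coef_rank = pd.Series(np.abs(lasso.coef_), index=feature_names)

# Non-linear model: rank features by random forest impurity-based importance.
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)
imp_rank = pd.Series(rf.feature_importances_, index=feature_names)

print(coef_rank.sort_values(ascending=False).head(5))
print(imp_rank.sort_values(ascending=False).head(5))
```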
Conclusion
During the course of this project, we provided useful insights to our corporate partner, Markerr. We created models that predict rent by ZIP code three years in the future with a margin of error (RMSE) of $258 without using the current rent price, and about $180 with it. We also concluded that there is no replacement for using current rent to accurately predict future rent. Overall, this project succeeded in addressing our initial objectives as well as furthering our development in data processing and the implementation of machine learning.
Future Work
There are several extensions of this project we are interested in pursuing. First, we would like to increase the specificity of our modeling process by creating separate models on each of the top 20 regional real estate rental markets. The added specificity of these models would likely increase our ability to predict rent prices. A second interesting extension of this work would be to create a model that has a different quantity as the target variable.
We would like to build models whose dependent variable is the percent change in the rental index over the next three years, and believe this could be an interesting avenue to explore. In addition, we believe that engineering features that represent changes in quantities over time (such as the percentage change in population from a year ago, rather than the current population) might show increased predictive power and be valuable to a model with the percent change of the rental index as its target.
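A rough sketch of how such change-over-time features and a percent-change target could be constructed from a yearly panel (all column names are hypothetical):

```python
import numpy as np
import pandas as pd

# Hypothetical yearly panel: one row per ZIP code per year.
panel = pd.DataFrame({
    "zip": ["10001"] * 5 + ["60601"] * 5,
    "year": list(range(2014, 2019)) * 2,
    "population": np.random.uniform(10000, 30000, 10),
    "zri": np.random.uniform(1500, 3500, 10),
}).sort_values(["zip", "year"])

grouped = panel.groupby("zip")
# Change-over-time feature: percent change in population from a year ago.
panel["population_pct_change_1y"] = grouped["population"].pct_change()
# Alternative target: percent change in the rental index over the next three years.
panel["zri_pct_change_3y_ahead"] = grouped["zri"].shift(-3) / panel["zri"] - 1
```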