Using Data to Forecast Zillow Rent Index

Project GitHub | Aditya's LinkedIn | Ryan's LinkedIn | Gabby's LinkedIn


For our capstone project, we predicted the Zillow Rent Index (ZRI) three years into the future at the behest of our corporate partner Markerr, a real estate informatics company. We assembled multiple publicly available data sources, performed data cleaning and EDA, engineered features, and deployed machine learning models in pursuit of this goal. Our team was able to deliver significant insights on research questions valuable to our corporate partner. Our objectives for this project were to:

  1. Predict Zillow Rental Indices by ZIP code three years in the future
  2. Determine features that can predict future rent values in the absence of current rent prices
  3. Explore the importance of current rent prices for predicting future rent prices


An important aspect of our project was finding valuable data from publicly available sources to pair with our ZRI data. Our goal was to find data with ZIP-code-level granularity and adequate coverage of the years we planned to fit models on. Below is a list of the data sources that made it into our final dataset.

  • US Census American Community Survey (ACS) 
  • US Census Bureau, Business count per ZIP code 
  • Homeland Infrastructure Foundation-Level Data (HIFLD) 
  • Multifamily Zillow Rent Indexes 

Data Cleaning

Both of our main data sources, the ACS survey and the Zillow Rent Index, had significant issues with missing data. We dropped 364 of the 1,860 ZIP codes in the ZRI data that had more than 50% missing values, deeming the lack of information for these too great to serve our purposes. The remaining missing data points were addressed in one of two ways.

For missing values surrounded by known values (gaps in the interior of the series), linear interpolation was used to fill in the information. For the significant amount of missing data at the beginning and end of the series, we used the datapoint twelve months ahead or twelve months trailing (depending on whether the gap fell at the start or the end), adjusted for an assumed yearly rental growth rate of 4%. 
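The two fill strategies above can be sketched in pandas. This is a minimal illustration on a made-up monthly ZRI series for a single ZIP code; for simplicity it pro-rates the 4% annual growth monthly from the nearest known value, rather than looking exactly twelve months ahead.

```python
import numpy as np
import pandas as pd

# Hypothetical ZRI series with gaps at the start, interior, and end.
zri = pd.Series(
    [np.nan, np.nan, 1500.0, np.nan, 1520.0, 1530.0, np.nan, np.nan],
    index=pd.date_range("2014-01-01", periods=8, freq="MS"),
)

# Interior gaps: linear interpolation between known neighbors.
zri = zri.interpolate(method="linear", limit_area="inside")

GROWTH = 1.04                 # assumed 4% annual rent growth
monthly = GROWTH ** (1 / 12)  # equivalent monthly growth factor

first_idx = zri.first_valid_index()
last_idx = zri.last_valid_index()
for ts in zri.index:
    if ts < first_idx:
        # Leading gaps: discount the first known value back in time.
        months = (first_idx.year - ts.year) * 12 + (first_idx.month - ts.month)
        zri[ts] = zri[first_idx] / monthly ** months
    elif ts > last_idx:
        # Trailing gaps: grow the last known value forward.
        months = (ts.year - last_idx.year) * 12 + (ts.month - last_idx.month)
        zri[ts] = zri[last_idx] * monthly ** months

print(zri.round(2))
```

The `limit_area="inside"` argument restricts interpolation to interior gaps, so the growth-rate adjustment handles the edges explicitly.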

To deal with missing values in the ACS data, we started by dropping the seven columns with more than 20% missing values, leaving 245 features. The remaining missing values were then filled with the city average for the attribute in question. Through this cleaning process, we resolved the missingness issues in the datasets we chose to use and prepared the raw data for our feature engineering process.
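The ACS cleaning steps can be sketched as follows. The frame and column names here (`median_income`, `mostly_missing`) are illustrative stand-ins, not the actual ACS schema:

```python
import numpy as np
import pandas as pd

# Toy stand-in for the ACS table: one row per ZIP code.
acs = pd.DataFrame({
    "zip": ["10001", "10002", "10003", "60601", "60602"],
    "city": ["New York", "New York", "New York", "Chicago", "Chicago"],
    "median_income": [85000.0, np.nan, 95000.0, 70000.0, 72000.0],
    "mostly_missing": [np.nan, np.nan, np.nan, 1.0, np.nan],
})

# Step 1: drop columns with more than 20% missing values.
acs = acs.loc[:, acs.isna().mean() <= 0.20]

# Step 2: fill remaining gaps with the city average for that attribute.
numeric = acs.select_dtypes("number").columns
acs[numeric] = acs.groupby("city")[numeric].transform(lambda s: s.fillna(s.mean()))
print(acs)
```

The groupby-transform keeps the frame aligned row-for-row, so each ZIP code's gap is filled with its own city's mean rather than a global average.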

Feature Engineering

We made numerous changes to our raw data during the feature engineering process, with 28 of our final 33 features undergoing some transformation. In several instances, correlated features were combined to generate useful metrics with minimal duplicated information.

Also, many of the features in the ACS data depended strongly on the size or population of the ZIP code represented. For example, the feature ‘housing units’ was a raw count of the housing units in a ZIP code. To make this a valuable attribute, we transformed it into housing units per capita, a measure of the supply of available housing that can be compared across ZIP codes.

Similar processes were conducted on many other features such as the percent of housing units occupied by renters and the percentage of commuters with a commute time of over 45 minutes. This process yielded more useful features that exhibited lower multicollinearity.
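A minimal sketch of this normalization step, using hypothetical raw-count column names standing in for the real ACS fields:

```python
import pandas as pd

# Illustrative raw counts per ZIP code.
df = pd.DataFrame({
    "zip": ["10001", "60601"],
    "population": [25000, 12000],
    "housing_units": [14000, 8000],
    "renter_occupied_units": [9000, 5500],
    "long_commuters": [4000, 1500],
    "total_commuters": [16000, 7000],
})

# Normalize raw counts so ZIP codes of different sizes are comparable.
df["housing_units_per_capita"] = df["housing_units"] / df["population"]
df["pct_renter_occupied"] = df["renter_occupied_units"] / df["housing_units"]
df["pct_commute_over_45min"] = df["long_commuters"] / df["total_commuters"]

print(df[["zip", "housing_units_per_capita", "pct_renter_occupied"]])
```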

We also aggregated data from our other sources at the ZIP code level to create useful features such as the number of businesses, transit terminals, and hospitals per ZIP code. Our engineered feature set proved to have far more predictive power than the raw ACS census data, as will be shown in the modeling section of this report.

Principal Component Analysis 

After thoroughly cleaning our data and engineering new features to limit duplicated information in our dataset, significant multicollinearity concerns remained. The ACS data had multiple features that were clearly related, such as median income, the number of people earning below the poverty line, and the number commuting via public transportation.

To correct for the remaining multicollinearity in our transformed features, we conducted principal component analysis (PCA) to generate mutually uncorrelated components while preserving the variance in the data. The graph below shows the ten features with the largest weights in the first principal component, to give an idea of where much of our data’s variance lies. 

Originally, we intended to use PCA to reduce the dimensionality of our data as well. However, we found we needed to keep most of the principal components to maintain adequate predictive power in our models. 
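The PCA step, and the variance-retention check that led us to keep most of the components, can be sketched like this. The feature matrix here is synthetic (random data with one injected collinear pair standing in for our 33 engineered features):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 33))                   # stand-in for 33 features
X[:, 1] = X[:, 0] + 0.1 * rng.normal(size=500)   # inject collinearity

# Standardize, then rotate onto orthogonal principal components.
X_std = StandardScaler().fit_transform(X)
pca = PCA()                                      # keep all components for now
X_pca = pca.fit_transform(X_std)

# Components are uncorrelated by construction; check how many are
# needed to retain, say, 95% of the variance.
cum_var = np.cumsum(pca.explained_variance_ratio_)
n_keep = int(np.searchsorted(cum_var, 0.95) + 1)
print(f"components for 95% variance: {n_keep} of {X.shape[1]}")
```

With only one redundant feature, nearly all components must be kept to reach 95% variance, which mirrors what we found on the real data.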


The machine learning models we deployed in this project were lasso and stepwise regression, random forest, and gradient boosting. Below is a summary of their performances. 

Gradient boosting was by far our best model, predicting rent prices three years out with an R² of 0.84. This was a drastic improvement over our original dataset before feature engineering, which predicted rent prices with an R² of approximately 0.50. However, our initial goal was to find a set of features that could outperform simply using current rent to predict future rent. To evaluate this, the graph below shows how well a single feature, current rent, performed in predicting rent three years out.
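The model comparison above can be sketched with scikit-learn. The data here is synthetic (`make_regression` standing in for our engineered features and three-years-ahead ZRI target), so the scores illustrate the workflow rather than reproduce our results:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import Lasso
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the engineered feature matrix and target.
X, y = make_regression(n_samples=800, n_features=33, noise=25.0, random_state=0)

models = {
    "lasso": Lasso(alpha=1.0),
    "random_forest": RandomForestRegressor(n_estimators=100, random_state=0),
    "gradient_boosting": GradientBoostingRegressor(random_state=0),
}

# Cross-validated R^2 for each candidate model.
scores = {}
for name, model in models.items():
    scores[name] = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name:>18}: R^2 = {scores[name]:.2f}")
```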

Clearly, even with our feature collection and engineering, the best model for predicting future rent is a linear model on current rent alone. The only instance where our full feature set outperformed a rent-only model was with gradient boosting, where our features performed best and the current-rent-only model performed worst. 


To dig deeper into our second objective of determining features that can predict future rent values in the absence of current prices, we examined which features ranked highest across our models. We ranked features in our linear models by coefficient magnitude, and in our non-linear models by the feature importances from our random forest. The charts below list the top features that stood out when predicting rent values.

We can see that ‘Median Income’ appears in both model formats, as income is a major driver of an individual’s ability to pay rent. The percentage of occupied rental units also has a strong linear relationship with future rent prices, which makes sense, as the supply of rental units affects pricing. 
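Extracting both rankings is straightforward in scikit-learn. This sketch uses synthetic data and a hand-picked list of our engineered feature names as labels:

```python
import numpy as np
import pandas as pd
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

# Synthetic stand-in for five of our engineered features.
X, y = make_regression(n_samples=500, n_features=5, random_state=1)
features = ["median_income", "pct_renter_occupied", "housing_units_per_capita",
            "businesses_per_zip", "pct_commute_over_45min"]

# Linear model: rank features by absolute coefficient (X is standardized).
lin = LinearRegression().fit(X, y)
coef_rank = pd.Series(np.abs(lin.coef_), index=features).sort_values(ascending=False)

# Random forest: rank by impurity-based feature importances.
rf = RandomForestRegressor(n_estimators=100, random_state=1).fit(X, y)
imp_rank = pd.Series(rf.feature_importances_, index=features).sort_values(ascending=False)

print(coef_rank.head(3))
print(imp_rank.head(3))
```

Note that coefficient magnitudes are only comparable when the features share a scale, which is why we ranked on standardized inputs.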


Over the course of this project, we provided useful insights to our corporate partner, Markerr. We created models that predict rent by ZIP code three years into the future with a margin of error (RMSE) of $258 without using current rent prices, and about $180 with them. We also concluded that there is no substitute for current rent when accurately predicting future rent. Overall, this project succeeded in addressing our initial objectives as well as furthering our development in data processing and the implementation of machine learning.

Future Work

There are several extensions of this project we are interested in pursuing. First, we would like to increase the specificity of our modeling process by creating separate models for each of the top 20 regional real estate rental markets. The added specificity of these models would likely improve our ability to predict rent prices. A second interesting extension of this work would be to create a model with a different target variable.

We would like to build models that have the dependent variable set to the percent change in the rental index over the next three years, and believe this could be an interesting avenue to explore. In addition, we believe that engineering features which represent changes in quantities over time (such as the percentage change in population from a year ago, rather than the current population) might show increased predictive power and be valuable to a model that has the percent change of the rental index as its target. 
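The proposed percent-change target could be derived directly from the ZRI table; a minimal sketch on a hypothetical wide-format frame (one row per ZIP, one column per year):

```python
import pandas as pd

# Hypothetical wide ZRI table: one row per ZIP, one column per year.
zri = pd.DataFrame(
    {"2014": [1500.0, 1100.0], "2017": [1680.0, 1210.0]},
    index=["10001", "60601"],
)

# Target for the proposed follow-up model: percent change in the
# rental index over the three-year horizon.
zri["pct_change_3yr"] = (zri["2017"] - zri["2014"]) / zri["2014"] * 100

print(zri["pct_change_3yr"])  # 10001 -> 12.0, 60601 -> 10.0
```

The same `pct_change`-style transformation would apply to the candidate predictors (e.g. year-over-year population change) mentioned above.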

The skills we demonstrated here can be learned through the Data Science with Machine Learning bootcamp at NYC Data Science Academy.


About Authors

Gabrielle Klein

Recent graduate from the University of Chicago with a Master's in Applied and Computational Mathematics. Experience programming in Python, R, and Matlab. Passionate about all things math, looking forward to launching a career in data science.

Ryan Burakowski

Ryan Burakowski is a current NYC Data Science Academy fellow with experience in capital markets and a passion for working on difficult problems. He spent the last three years as a proprietary trader, traveling the world and living...

Aditya Jayasuri

Aditya is a recent Data Science graduate at NYC Data Science Academy with hopes of paving a new pathway in his career. A graduate from Drexel University with a B.S. in Entertainment & Arts Management previous experience includes...
