Data & Machine Learning Usage to Price Home Renovations for Real Estate Investment
Background - Pt. 1
Real estate investment is an area of cultural fascination; in particular, the art of “flipping” homes for profit has been portrayed in popular shows such as the aptly named “Flip or Flop,” “Flipping Out,” “Flip This House,” and more. However, these pop-culture-friendly portrayals often gloss over the financial realities of fixing and flipping a property. Homes flipped in 2019 represented 8.6% of all sales -- an all-time high. Yet the data show that median gross profit on flipped homes dropped 3.2% to $62,900, an eight-year low. What’s clear is that without expert knowledge and solid decision-making processes, prospective flippers risk negligible returns on huge investments of time and money.
Background - Pt. 2
Flippers face several obstacles and pain points when planning upgrades to a property. First, the carrying costs of a renovation make the process highly time-sensitive: utilities, insurance, interest on financing, and property taxes add up quickly. This means that, aside from ensuring any renovations add significant value to the home, investors must complete those upgrades efficiently. Second, a flip demands both technical expertise and capital -- experience in the local market, construction and renovation know-how, and financing. These factors all influence not just the selection and quality of features to upgrade, but also which house(s) have the most potential at the outset.
Data-driven decision making is therefore crucial to ensure that flippers position themselves to receive the maximum return on their investments. For this machine learning exercise, we took on the role of a firm providing that very service -- supplying clients with quantitative insights that, combined with more traditional domain knowledge, increase profit margins. Through our work, we knew we could help clients: 1) reduce the amount of time needed to make renovation decisions, and 2) prioritize projects based on their expected increase in property value.
Data Cleaning & Pre-Processing - Pt. 1
Using Kaggle’s Ames Housing Dataset, we applied machine learning models and analytics to estimate the sale price increases associated with various types of renovations and to identify which features of a house a flipper would likely receive better returns on by renovating. The dataset includes records of approximately 2,500 home sales in Ames, Iowa from 2006-2010, with an extensive number of features ranging from square footage and neighborhood to zoning information and more.
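For readers who want to follow along, a minimal loading sketch is below; the file name is an assumption based on the standard Kaggle download.

```python
import pandas as pd

# Load the Kaggle Ames Housing data. The file name/path is an assumption --
# point it at wherever the CSV was downloaded.
ames = pd.read_csv("train.csv")

print(ames.shape)                    # (number of sales, number of features)
print(ames["SalePrice"].describe())  # quick summary of the target variable
```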
Before training our models, significant pre-processing and cleaning was required to ensure we could make full use of the available data. Many missing values in the raw dataset were intentional: fields were left empty to indicate that a certain feature was missing from a house. For example, many variables grade the quality of various aspects of a basement, but some houses have no basement at all. While this makes intuitive sense to a human examining the dataset, a machine learning model cannot make that inference on its own and therefore requires a solution.
For many features, the ordinal nature of those variables permitted an easy fix: while encoding those variables (as discussed below), houses that were simply missing the feature altogether were assigned a “0” score at the bottom of the feature’s new numerical scale. Other missing values appeared to be genuinely unintentional. Since these occurred with negligible frequency relative to the overall number of rows, we replaced them with the mode (for categorical features) or the mean (for numerical features).
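A minimal sketch of this two-part treatment is below, continuing from the loading step above; the column lists are illustrative examples from the Ames data dictionary rather than the full lists we used.

```python
import pandas as pd

ames = pd.read_csv("train.csv")  # path is an assumption, as above

# Columns where NaN means "the house has no such feature" (an illustrative
# subset: basement, fireplace, and garage quality grades).
absent_means_none = ["BsmtQual", "BsmtCond", "FireplaceQu", "GarageQual", "GarageCond"]
ames[absent_means_none] = ames[absent_means_none].fillna("None")

# Genuinely missing values: fill with the mode for categorical columns and
# the mean for numerical columns.
for col in ames.columns[ames.isna().any()]:
    if ames[col].dtype == "object":
        ames[col] = ames[col].fillna(ames[col].mode()[0])
    else:
        ames[col] = ames[col].fillna(ames[col].mean())
```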
Data Cleaning & Pre-Processing - Pt. 2
Another challenge presented by the dataset was feature imbalance. Certain features were missing from so many homes that using them in any model would clearly be irresponsible and unhelpful. For example, because only a dozen or so homes in our dataset had pools, drawing any conclusions about the impact of having a pool (along with related features such as its size) from such a small sample simply isn’t possible. Other features, though, weren’t as clear-cut regarding whether we should include or exclude them.
What if a feature was missing in 50% of the homes rather than 80-90%? Was there any clear cut-off point? What if a variable was heavily imbalanced, but the houses that did have the feature provided valuable insight into a possible relationship with sale price? When these questions arose, our team carefully went feature by feature to understand the nature of the imbalance and determine whether the likely benefits of including it would outweigh the potential drawbacks.
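One simple diagnostic for these judgment calls -- a sketch of the idea rather than our exact workflow -- is to tabulate how often each feature is absent in the raw data and flag the sparsest columns for closer review.

```python
import pandas as pd

raw = pd.read_csv("train.csv")  # raw data, before any imputation; path is an assumption

# Fraction of homes in which each feature is absent (NaN) in the raw file.
absence_rate = raw.isna().mean().sort_values(ascending=False)
print(absence_rate.head(10))

# PoolQC, for example, is absent for all but a handful of homes, making
# pool-related columns clear candidates for exclusion. The 0.80 cutoff below
# is illustrative, not a hard rule; borderline features were reviewed by hand.
very_sparse = absence_rate[absence_rate > 0.80].index.tolist()
```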
Ordinal and categorical features also needed to be encoded as numerical values so our models could interpret them properly. Because label_encoder was not preserving the correct priority levels for our ordinal variables, we encoded them manually instead; non-ordinal features could be assigned dummy variables automatically.
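As a sketch of what that manual encoding might look like (the quality codes follow the Ames data dictionary; the specific column lists here are illustrative, not our full mappings):

```python
import pandas as pd

# Ames quality codes in ascending order, with "None" (feature absent) at the bottom.
quality_scale = {"None": 0, "Po": 1, "Fa": 2, "TA": 3, "Gd": 4, "Ex": 5}

# Continuing with the imputed `ames` dataframe from the earlier steps.
ordinal_cols = ["ExterQual", "ExterCond", "BsmtQual", "KitchenQual", "FireplaceQu"]
for col in ordinal_cols:
    ames[col] = ames[col].fillna("None").map(quality_scale)

# Non-ordinal categoricals are converted to dummy (one-hot) variables automatically.
nominal_cols = ["Neighborhood", "MSZoning", "HouseStyle"]
ames = pd.get_dummies(ames, columns=nominal_cols, drop_first=True)
```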
As an aside, the tiny scales these variables occupied after encoding (0s and 1s, 0-5s, etc.) made normalization of our dataset crucially important when training models later on; failing to normalize would have completely skewed the impact of those features when used in tandem with variables on far larger scales -- SalePrice, for example, is measured in the hundreds of thousands rather than single digits.
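To make that concrete, here is a hedged sketch assuming scikit-learn, with a regularized linear model used purely for illustration; scaling inside a pipeline keeps the encoded features on comparable footing.

```python
from sklearn.linear_model import Lasso
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Assumes the remaining columns of `ames` have been numerically encoded as above.
X = ames.drop(columns=["SalePrice"]).select_dtypes(include=["number", "bool"]).astype(float)
y = ames["SalePrice"]

# StandardScaler puts 0/1 dummies, 0-5 ordinal scores, and large-scale variables
# like square footage on a comparable footing before the model sees them, so no
# feature's influence is skewed purely by the units it happens to be measured in.
model = make_pipeline(StandardScaler(), Lasso(alpha=100.0))
model.fit(X, y)
```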