Data Analysis on Real Estate Investment
The skills demonstrated here can be learned through the Data Science with Machine Learning bootcamp at NYC Data Science Academy.
Real estate investment is an area of cultural fascination. In particular, the art of “flipping” homes for profit has been portrayed in popular shows such as the aptly named “Flip or Flop,” “Flipping Out,” “Flip this House,” and more. However, data shows these pop-culture-friendly portrayals often gloss over the financial reality of fixing & flipping a property.
Homes flipped in 2019 represented 8.6% of all sales -- an all-time high. However, the median gross profit on flipped homes dropped 3.2% to $62,900 -- an eight-year low. What's clear is that without expert knowledge and solid decision-making processes, prospective flippers risk receiving negligible returns on huge investments of time and money.
Flippers face several obstacles and pain points when planning upgrades to a property. First, carrying costs make a renovation highly time-sensitive: utilities, insurance, interest on financing, and property taxes add up quickly. This means that aside from ensuring any renovations add significant value to the home, investors must complete those upgrades efficiently.
Second, flipping requires technical expertise and capital -- experience in the local market, construction and renovation know-how, and financing. These factors influence not just which features to upgrade and to what quality, but also which house(s) have the most potential at the outset.
Data-driven decision making is crucial to ensure that flippers position themselves well to receive maximum return on investments. For this Machine Learning exercise, we took on the role of a firm providing that very service -- supplying clients with the quantitative insights they need to increase profit margins, especially when combined with more traditional domain knowledge. Through our work, we knew we could help clients: 1) reduce the amount of time needed to make renovation decisions, and 2) prioritize projects based on their potential to increase property value.
Using Kaggle’s Ames Housing Dataset, we applied machine learning models and analytics to estimate the sale price increases associated with various types of renovations, and to identify the features on which flippers are likely to receive the best returns when renovating. The dataset includes records of approximately 2,500 home sales in Ames, Iowa from 2006-2010, with an extensive number of features -- square footage, neighborhood, zoning information, and more.
Data Cleaning & Pre-Processing
Before training our models, significant pre-processing and cleaning was required to ensure we could make full use of all the available data. Many missing values were intentionally included in the raw dataset, left empty to represent that a certain feature was missing from a house. For example, many variables graded the quality of various aspects of a basement, but some houses had no basement at all. While this makes intuitive sense to a human examining the dataset, a machine learning model cannot do the same and therefore requires an alternative approach.
For many features, the ordinal nature of those variables permitted an easy fix. While encoding those variables (as discussed below), houses that were simply missing the feature altogether were assigned a “0” score on its new numerical scale. Other missing values appeared to be genuinely unintentional. Since these occurred with negligible frequency relative to the overall number of rows, we replaced them with the mode (for categorical features) or mean (for numerical features).
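As a minimal sketch of this two-track imputation, assuming a pandas DataFrame (the toy column names and values below are illustrative, not the actual Ames rows):

```python
import pandas as pd

# Toy frame standing in for the Ames data; values are invented for illustration.
df = pd.DataFrame({
    "BsmtQual":    ["Gd", None, "TA", "Gd"],             # None = house has no basement
    "MasVnrType":  ["BrkFace", None, "BrkFace", "Stone"], # genuinely missing value
    "LotFrontage": [65.0, None, 80.0, 75.0],              # genuinely missing value
})

# 1) "Missing" that really means "feature absent": fill with an explicit level
#    that the ordinal encoding step will later map to 0.
df["BsmtQual"] = df["BsmtQual"].fillna("None")

# 2) Genuinely missing values: mode for categoricals, mean for numericals.
df["MasVnrType"] = df["MasVnrType"].fillna(df["MasVnrType"].mode()[0])
df["LotFrontage"] = df["LotFrontage"].fillna(df["LotFrontage"].mean())
```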
Another challenge presented by the dataset was feature imbalance. Certain features were missing from so many homes that using them in any model would clearly be irresponsible and unhelpful. For example, because only a dozen or so homes in our dataset had pools, drawing conclusions about the impact of having a pool (along with related features such as its size) from such a small sample simply isn’t possible. Other features, though, weren’t as clear-cut regarding whether we should include or exclude them.
What if a feature was missing in 50% of the homes rather than 80-90%? Is there any clear cut-off point that should be used? What if a variable had a significant imbalance, but, when present, provided valuable insight into a possible relationship with the sale price? When these questions arose, our team carefully went feature-by-feature to understand the nature of the imbalance and determine whether the benefits of inclusion outweighed the potential drawbacks.
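The first step of that feature-by-feature review can be sketched as a simple missingness audit -- computing the fraction of homes missing each feature and flagging anything above a judgment-call threshold for manual review rather than dropping it automatically (the frame and threshold below are invented for illustration):

```python
import pandas as pd

# Illustrative frame: PoolQC is absent from almost every home, Fence from half.
df = pd.DataFrame({
    "PoolQC": ["Ex", None, None, None, None, None, None, None, None, None],
    "Fence":  ["MnPrv", None, "GdWo", None, "MnPrv", None, "GdPrv", None, "MnPrv", None],
    "SalePrice": [250, 180, 210, 190, 230, 175, 260, 185, 240, 170],
})

# Fraction of homes missing each feature, sorted worst-first.
missing_frac = df.isna().mean().sort_values(ascending=False)

# Features above a (judgment-call) threshold get flagged for manual review.
REVIEW_THRESHOLD = 0.40
flagged = missing_frac[missing_frac > REVIEW_THRESHOLD].index.tolist()
```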
Ordinal and categorical features also needed to be encoded into numerical values so our model could interpret them properly. Because scikit-learn’s LabelEncoder does not preserve the correct priority levels of our ordinal variables, they were encoded manually instead. Non-ordinal features could be assigned dummy variables automatically.
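A minimal sketch of this split treatment -- a hand-written mapping for ordinal grades and one-hot dummies for everything else (the quality abbreviations follow the Ames data dictionary; the example rows are invented):

```python
import pandas as pd

# Quality grades share this scale in the Ames data dictionary; "None" is the
# level filled in for houses missing the feature entirely.
QUALITY_SCALE = {"None": 0, "Po": 1, "Fa": 2, "TA": 3, "Gd": 4, "Ex": 5}

df = pd.DataFrame({
    "KitchenQual": ["Gd", "TA", "Ex", "Fa"],
    "MSZoning":    ["RL", "RM", "RL", "FV"],  # non-ordinal categorical
})

# Ordinal: map manually so the order Po < Fa < TA < Gd < Ex survives encoding.
df["KitchenQual"] = df["KitchenQual"].map(QUALITY_SCALE)

# Non-ordinal: one-hot dummy variables.
df = pd.get_dummies(df, columns=["MSZoning"])
```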
As an aside, the tiny scale these variables existed on after encoding (0’s & 1’s, 0-5’s, etc.) made normalization of our dataset crucially important when training models later on; failing to do so would have completely skewed the impact of those features when used in tandem with, for example, Sale Price on a scale of hundreds of thousands rather than single digits.
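To illustrate the scale problem, a minimal standardization sketch (the two columns -- an encoded 0-5 grade and square footage -- are invented stand-ins):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Two features on wildly different scales: an encoded quality grade (0-5)
# and living area in square feet.
X = np.array([
    [3,  912.0],
    [4, 1500.0],
    [5, 2400.0],
    [2,  800.0],
])

# Standardize so each column has mean 0 and unit variance before model fitting;
# otherwise the square-footage column would dominate any distance- or
# penalty-based model.
X_scaled = StandardScaler().fit_transform(X)
```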
Feature Selection
Because the dataset we eventually ended up with (after cleaning and encoding the raw data) had 280 features, it was clear that feature selection would significantly affect our models’ predictions. To ensure our choices supported accurate inferences, we took multiple approaches to the problem. First, we used multiple linear regression to perform selection; second, we built a tree-based model to gauge feature importance. We also analyzed a targeted, cleaned dataset limited to 59 features and compared it against our full cleaned dataset of 280.
Thankfully, both the targeted and full datasets converged on similar important features. Feature importance from our tree-based model also showed good overlap with the feature selection performed by multiple linear regression.
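A sketch of that cross-check on synthetic data -- an L1-penalized regression (LassoCV) zeroing out unhelpful coefficients alongside a random forest’s feature importances; the data generation here is invented purely so the two methods have a known answer to agree on:

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in: 200 homes, 6 features, only the first two drive price.
X = rng.normal(size=(200, 6))
y = 15000 * X[:, 0] + 9000 * X[:, 1] + rng.normal(scale=1000, size=200)

# Approach 1: L1-regularized regression shrinks unhelpful coefficients to zero.
lasso = LassoCV(cv=5, random_state=0).fit(X, y)
lasso_selected = set(np.flatnonzero(np.abs(lasso.coef_) > 1e-6))

# Approach 2: tree-based importance from a random forest.
forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
top_by_forest = set(np.argsort(forest.feature_importances_)[-2:])
```

When the two methods agree, as they did for our targeted and full datasets, that overlap gives more confidence than either method alone.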
Each feature coefficient from our LassoCV model represents the dollar increase in the target variable, SalePrice, per unit increase in that feature. With all other features held constant, this gives us a rough idea of which renovation features are valuable investments.
List of feature coefficients:
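A sketch of how such a dollar-per-unit list can be produced when the model is fit on standardized features: dividing each standardized coefficient by that feature’s standard deviation recovers dollars per raw unit. The data below is synthetic, with effect sizes borrowed from the figures discussed in this post purely for illustration:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
features = ["ExterQual", "KitchenQual", "Fireplaces"]

# Synthetic encoded features (quality grades, fireplace counts) and a price
# built from per-unit dollar effects roughly matching this post's figures.
X = pd.DataFrame({
    "ExterQual":   rng.integers(1, 6, size=300),
    "KitchenQual": rng.integers(1, 6, size=300),
    "Fireplaces":  rng.integers(0, 3, size=300),
}).astype(float)
true_effects = np.array([15115.0, 8966.0, 7041.0])
y = X.values @ true_effects + rng.normal(scale=2000.0, size=300)

# Fit on standardized features, then rescale each coefficient by the feature's
# standard deviation to recover dollars per one-unit increase.
scaler = StandardScaler().fit(X)
model = LassoCV(cv=5, random_state=1).fit(scaler.transform(X), y)
dollars_per_unit = pd.Series(model.coef_ / scaler.scale_, index=features)
```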
As house flippers, the top three most impactful features we control are:
- ExterQual – Exterior Quality
- KitchenQual – Kitchen Quality
- Fireplaces – Number of Fireplaces
Let’s take a look at whether they are worth renovating in Ames, Iowa.
Exterior Quality (ExterQual)
Renovation cost: $1,418 - $5,016
Average cost: $3,217
- Includes exterior painting, landscaping, door and window updates, porch railing, and decoration
Given the $15,115 increase in SalePrice for each unit increase in ExterQual, this feature has a high ROI, making it a worthwhile renovation.
Kitchen Quality (KitchenQual)
Renovation cost: $7,736 - $38,690
Average cost: $20,544
- Includes labor, materials, equipment, and project costs
For every unit increase in KitchenQual, there is an $8,966 increase in SalePrice. Given that the average cost in Ames is around $20k, renovating a kitchen by a single unit, i.e. from good to excellent quality, has a low ROI. We therefore encourage multi-unit renovations, i.e. from fair to excellent quality, as they offer a higher ROI.
Fireplaces
Installation cost: $92 - $1,000
Average cost: $439
For every fireplace installed in a home, SalePrice increases by $7,041. If installing another fireplace works with the house’s layout, it is a worthy investment.
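The reasoning above can be checked with simple arithmetic, taking ROI as the price gain per dollar of average cost (all figures come from this post; the break-even framing is our simplification, ignoring carrying costs and multi-unit cost savings):

```python
# Simple ROI check using the figures above: ROI = SalePrice gain / average cost.
renovations = {
    # name: (SalePrice gain per unit, average cost)
    "ExterQual (+1)":   (15115.0, 3217.0),
    "KitchenQual (+1)": (8966.0, 20544.0),
    "Fireplace (+1)":   (7041.0, 439.0),
}

roi = {name: gain / cost for name, (gain, cost) in renovations.items()}
```

On these numbers, a one-unit exterior upgrade returns roughly $4.70 per dollar spent and a fireplace roughly $16, while a one-unit kitchen upgrade returns only about $0.44 -- below break-even, which is why the multi-unit kitchen renovation is the recommendation above.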