Enhancing Analysis with Model Interpretability: A Real Estate Dashboard
Machine learning’s rising importance and ubiquity carry with them promises of profitability and efficiency. Basic machine learning models have never been simpler to create: Python and R offer easy-to-use libraries that handle the heavy lifting. But when the inner workings are abstracted away, they become inaccessible, hence the term “black box.” It's difficult to trust the result of a model that lacks transparency, and identifying model bias without the proper tools is impossible for a non-technical user. Model interpretability techniques address this obstacle by providing a transparent view into the model, making it accessible rather than a black box.
Model interpretability can be broadly divided into intrinsic interpretability and post-hoc interpretability. The former refers to interpretability that follows from a model’s structure: simple models such as MLR (Multiple Linear Regression) and decision trees are easy to communicate and inspect. Models that use ensemble methods or deep learning, however, are much harder to interpret. This increase in difficulty often brings a corresponding rise in performance, so an interpretation method for complex models is worthwhile. This is where post-hoc interpretability techniques come in. We will consider two methods: LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations).
LIME creates a local approximation around a single observation. The local approximation is typically a simple model (e.g. MLR or a decision tree) trained on perturbed data around the point of interest. The coefficients or structure of this simple model are then used to explain the complex model's behavior in a neighborhood of the observation. While LIME is fast and model-agnostic, it has limitations: it can only interpret one instance at a time, and its localized nature means it cannot provide broader insights about the model as a whole.
Above is an example of a LIME explanation for a random forest model predicting house price. Each bar corresponds to a feature split and its effect on the modeled price.
Picture a tangent line from differential calculus. The tangent line is a simple local approximation of a function at a point. It makes sense to consider that approximation near that point, but not far away from it. In LIME, the same holds. The simple model approximates the complex model behavior around a specific observation. However, far away from the observation, it’s not sensible to use this simple approximation in place of the complex model.
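To make this concrete, below is a minimal, self-contained sketch of the idea using the Python lime package. The random forest and the synthetic housing-style data are illustrative stand-ins for a real house-price model, not the model behind the figure above.

```python
# Sketch: explaining one prediction of a house-price regressor with LIME.
# The data and model here are synthetic stand-ins, purely for illustration.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
feature_names = ["sqft", "bedrooms", "year_built"]
X = np.column_stack([
    rng.uniform(800, 4000, 500),     # square footage
    rng.integers(1, 6, 500),         # bedrooms
    rng.integers(1950, 2020, 500),   # year built
])
y = 100 * X[:, 0] + 15000 * X[:, 1] + rng.normal(0, 20000, 500)  # toy prices

rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# LIME perturbs points around the chosen observation, fits a weighted
# linear model to the complex model's predictions on those perturbations,
# and reports the features driving the local approximation.
explainer = LimeTabularExplainer(X, feature_names=feature_names, mode="regression")
explanation = explainer.explain_instance(X[0], rf.predict, num_features=3)
print(explanation.as_list())         # [(feature rule, local weight), ...]
```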
On the other hand, SHAP provides a more comprehensive interpretation method. For each observation, SHAP calculates how much each feature contributes to the prediction by considering all possible combinations of features. The SHAP value of a feature roughly represents the amount by which that feature value changes the model output. For any observation, the average prediction over the dataset plus the sum of that observation's SHAP values (one per feature) equals the model's prediction for that observation.
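This additivity property is easy to verify in code. The sketch below uses a recent version of the Python shap library with an illustrative random forest fit on synthetic data (a stand-in for our house-price model) and checks that the expected value plus each row's SHAP values reconstructs the prediction.

```python
# Sketch: checking SHAP's additivity property on a tree-based regressor.
# Model and data are synthetic stand-ins for illustration only.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(500, 3))
y = 3 * X[:, 0] + X[:, 1] ** 2 + rng.normal(0, 0.1, 500)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)   # optimized explainer for tree ensembles
sv = explainer(X)                       # a shap.Explanation: values + base values

# prediction = average model output (base value) + sum of the row's SHAP values
reconstructed = sv.base_values + sv.values.sum(axis=1)
print(np.allclose(reconstructed, model.predict(X)))   # True, up to float error
```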
This approach's power comes from the fact that SHAP values have global relevance. Unlike LIME, SHAP values are measured on a consistent scale (the units of the model output), so explanations can be aggregated and compared across an entire dataset rather than one point at a time. SHAP is also model-agnostic, but it can be computationally expensive; for this reason, Python implementations provide explainers optimized for specific model architectures, such as tree ensembles.
Above is an example of a SHAP beeswarm plot, which looks at the distribution of SHAP values within a dataset.
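A plot like this takes only a couple of calls to the shap plotting API. The snippet below continues the sketch above, reusing the Explanation object `sv` computed there.

```python
# Sketch, continuing from the previous snippet: `sv` is the shap.Explanation
# computed for the illustrative random forest above.
import shap

# One dot per observation per feature; horizontal position is the SHAP value,
# color encodes the feature's value for that observation.
shap.plots.beeswarm(sv)

# Optional global summary: mean absolute SHAP value per feature.
shap.plots.bar(sv)
```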
It's important to exercise caution when interpreting the results provided by LIME and SHAP, as they reveal correlations rather than causal relationships. Misinterpretation can lead to incorrect conclusions and reinforce existing biases. Moreover, a technique called scaffolding, demonstrated by Slack et al., can manipulate these methods into producing misleading explanations, potentially hiding bias. Thus, while LIME and SHAP can offer valuable information, one should critically assess the explanations rather than accept potentially biased results as objective fact.
In practical applications such as real estate, models are used to identify candidate single-family residences (SFR) for purchase and rent. Cap rate, the rate of return on a property based on the income it is expected to generate, is central to this identification. For non-technical users, understanding overall model behavior and why a model suggests purchasing a property helps them trust their decisions. We created a dashboard in Plotly Dash focused on the Georgia SFR housing market. It employs SHAP values to provide model interpretation at zip-code, county, or state granularity. We built it to accommodate multiple models so that users can compare explanations; basic MLR models serve as templates within the dashboard.
The dataset was provided by Haystacks.ai and contained approximately 10,000 SFR observations throughout Georgia. As it is proprietary, we cannot grant access to the reader. We performed some minor data cleaning: filtering to single-family residences, dropping observations that were missing important data, imputing averages, and so on. We also added county-level school and crime statistics as additional features. To estimate cap rate, we used FMRs (Fair Market Rents) by number of bedrooms as the expected income and assumed an expected cost of 1% of the (predicted) property value.
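As a rough illustration, the pandas sketch below shows one way such a cap-rate estimate could be computed. The column names, the placeholder values, and the exact treatment of the 1% cost are assumptions for illustration, not the project's actual code.

```python
# Sketch: estimating cap rate from FMR income and an assumed 1%-of-value cost.
# Columns and values are hypothetical placeholders.
import pandas as pd

df = pd.DataFrame({
    "bedrooms": [2, 3, 4],
    "fmr_monthly": [1250, 1500, 1800],           # Fair Market Rent by bedroom count
    "predicted_price": [210000, 285000, 360000],
})

annual_income = 12 * df["fmr_monthly"]           # expected rental income
annual_cost = 0.01 * df["predicted_price"]       # assumed cost: 1% of predicted value
df["est_cap_rate"] = (annual_income - annual_cost) / df["predicted_price"]
print(df[["bedrooms", "est_cap_rate"]])
```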
The first page provides a choropleth map of the dataset, along with controls for the model and the metric used to color the map. This lets the user identify outlier zip codes or counties of interest.
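A county-level version of such a map can be sketched with Plotly Express. The snippet below uses Plotly's public counties GeoJSON and a few Georgia FIPS codes with placeholder metric values; it shows the mechanism rather than the dashboard's actual code.

```python
# Sketch: coloring Georgia counties by a chosen metric with Plotly Express.
# The metric values below are placeholders, not real results.
import json
import urllib.request

import pandas as pd
import plotly.express as px

url = "https://raw.githubusercontent.com/plotly/datasets/master/geojson-counties-fips.json"
with urllib.request.urlopen(url) as f:
    counties = json.load(f)                      # county boundaries keyed by FIPS

county_stats = pd.DataFrame({
    "fips": ["13121", "13089", "13067"],         # Fulton, DeKalb, Cobb (GA)
    "median_cap_rate": [0.052, 0.061, 0.048],    # placeholder values
})

fig = px.choropleth(
    county_stats,
    geojson=counties,
    locations="fips",
    color="median_cap_rate",
    scope="usa",
)
fig.update_geos(fitbounds="locations", visible=False)
fig.show()
```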
The next page shows information about the distribution of points across our model features. A scatter plot charts a chosen feature against price or cap rate. Selecting a dot updates the bar plots at the bottom, which compare the corresponding property to averages. On the right, a beeswarm plot summarizes the selected region, letting the user see how that region's features interact with the model.
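The click-to-compare interaction can be sketched with a small Dash callback. Everything below, from the toy DataFrame to the component IDs, is an illustrative stand-in for the dashboard's actual layout.

```python
# Sketch: clicking a scatter point updates a bar chart comparing that
# property to dataset averages. Data and component IDs are hypothetical.
import pandas as pd
import plotly.express as px
from dash import Dash, dcc, html, Input, Output

df = pd.DataFrame({
    "sqft": [1400, 2100, 2600, 3200],
    "bedrooms": [2, 3, 4, 4],
    "price": [215000, 298000, 345000, 410000],
})

app = Dash(__name__)
app.layout = html.Div([
    dcc.Graph(id="feature-scatter", figure=px.scatter(df, x="sqft", y="price")),
    dcc.Graph(id="comparison-bars"),
])

@app.callback(Output("comparison-bars", "figure"),
              Input("feature-scatter", "clickData"))
def compare_to_average(click_data):
    # Default to the first property until a point is clicked.
    idx = click_data["points"][0]["pointIndex"] if click_data else 0
    comparison = pd.DataFrame({
        "feature": ["sqft", "bedrooms", "price"],
        "selected": df.loc[idx, ["sqft", "bedrooms", "price"]].values,
        "average": df[["sqft", "bedrooms", "price"]].mean().values,
    })
    return px.bar(comparison, x="feature", y=["selected", "average"], barmode="group")

if __name__ == "__main__":
    app.run(debug=True)
```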
The last page is for further interpretation and property selection. Selecting a zip code in the upper left (by clicking its corresponding bar) determines the region from which the top 10 properties are extracted. Selecting a property then creates the SHAP waterfall plot in the upper right. In the example shown, the plot indicates that the property's modeled price was reduced by over $200k due to its square footage. Because model performance varies by region, a cutoff slider at the top restricts the view to regions with an R-squared of at least the chosen value.
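In the shap library, the waterfall view for a single selected property comes down to a few lines. The sketch below assumes a fitted tree model `model`, its feature matrix `X`, and a DataFrame `df` with hypothetical `zipcode` and `est_cap_rate` columns; none of these names come from the dashboard itself.

```python
# Sketch: top-10 properties in a chosen zip code, then a SHAP waterfall for
# the one the user selects. `model`, `X`, and `df` are assumed to exist and
# are hypothetical stand-ins for the dashboard's objects.
import shap

explainer = shap.TreeExplainer(model)
sv = explainer(X)                                  # SHAP values for every property

in_zip = df["zipcode"] == "30309"                  # example zip code chosen on the bar chart
top10 = df[in_zip].nlargest(10, "est_cap_rate")    # candidate properties to show

# The waterfall starts at the model's expected value and adds each feature's
# SHAP value until it reaches the selected property's prediction.
selected_pos = df.index.get_loc(top10.index[0])    # row position of the clicked property
shap.plots.waterfall(sv[selected_pos])
```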
LIME and SHAP offer the transparency needed for non-technical users to trust and understand their models. Still, caution is warranted, as these tools can be manipulated or misinterpreted, potentially concealing or reinforcing bias. The integration of SHAP into our Plotly Dash dashboard offers a compelling example of model interpretability in action within the Georgia SFR housing market. We hope it illustrates how SHAP provides insights at multiple levels of granularity in a form that is easy to comprehend. As people gain insight into their models, they can make more informed decisions, driving a more transparent and accountable AI landscape.