Enhancing Analysis with Model Interpretability: A Real Estate Dashboard

Posted on May 4, 2023

Machine learning’s rising importance and ubiquity carry with them promises of profitability and efficiency. Basic machine learning models have never been simpler to create: Python and R offer easy-to-use libraries that handle all of the heavy lifting. But when the inner workings are abstracted away, they become inaccessible, hence the term “black box.” It is difficult to trust the result of a model lacking transparency, and identifying model bias without the proper tools is nearly impossible for a non-technical user. Model interpretability techniques address this obstacle by opening a “transparent view” into the models, making them accessible rather than opaque.

Model interpretability can be broadly divided into intrinsic interpretability and post-hoc interpretability. The former refers to interpretability that comes from a model’s structure: simple models such as MLR (Multiple Linear Regression) and decision trees are easy to communicate and experiment with. Models built on ensemble methods or deep learning, however, are much harder to interpret. That increase in difficulty often brings a corresponding rise in performance, so interpretation methods for complex models are worthwhile. This is where post-hoc interpretability techniques come in. We will consider two: LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations).

LIME creates a local approximation around a single observation. The approximation is typically a simple model (e.g., MLR or a decision tree) trained on perturbed data around the point of interest. The coefficients or structure of this simple model are then used to explain the complex model's behavior in a neighborhood of the observation. While LIME is fast and model-agnostic, it has limitations: it interprets one instance at a time, and its localized nature means it cannot provide broader insights about the model as a whole.
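The perturb-and-fit idea can be sketched by hand in a few lines of NumPy, without the `lime` library. The black-box function, kernel width, and perturbation scale below are all illustrative assumptions, not the library's defaults:

```python
import numpy as np

rng = np.random.default_rng(0)

# A stand-in "black box": any nonlinear model of two features
def black_box(X):
    return np.sin(X[:, 0]) + X[:, 1] ** 2

x0 = np.array([0.5, 1.0])          # observation to explain

# 1. Perturb data around the point of interest
X_pert = x0 + rng.normal(scale=0.3, size=(500, 2))
y_pert = black_box(X_pert)

# 2. Weight perturbations by proximity to x0 (exponential kernel)
dists = np.linalg.norm(X_pert - x0, axis=1)
weights = np.exp(-(dists ** 2) / 0.25)

# 3. Fit a weighted linear model: the local surrogate
A = np.hstack([np.ones((len(X_pert), 1)), X_pert])
W = np.diag(weights)
coef = np.linalg.solve(A.T @ W @ A, A.T @ W @ y_pert)

intercept, local_effects = coef[0], coef[1:]
# local_effects approximates the black box's local slopes at x0,
# roughly [cos(0.5), 2.0] for this toy function
```

The surrogate's coefficients (`local_effects`) are the explanation: they say how each feature moves the prediction near `x0`, and nothing more.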

Above is an example of a LIME explanation for a random forest model predicting house price; it lists the feature splits and their corresponding effects on the modeled price.

Picture a tangent line from differential calculus. The tangent line is a simple local approximation of a function at a point. It makes sense to consider that approximation near that point, but not far away from it. In LIME, the same holds. The simple model approximates the complex model behavior around a specific observation. However, far away from the observation, it’s not sensible to use this simple approximation in place of the complex model.
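The analogy can be made concrete with a one-variable example, f(x) = x² and its tangent at x = 1 (the evaluation points are arbitrary):

```python
f = lambda x: x ** 2
# Tangent line to f at x = 1: slope f'(1) = 2, so T(x) = 1 + 2*(x - 1)
T = lambda x: 1 + 2 * (x - 1)

near, far = 1.1, 3.0
err_near = abs(f(near) - T(near))   # 0.01 -- good approximation nearby
err_far = abs(f(far) - T(far))      # 4.0  -- the approximation breaks down
```

The surrogate model in LIME behaves the same way: trustworthy near the explained observation, meaningless far from it.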

On the other hand, SHAP provides a more comprehensive interpretation method. For each observation, SHAP calculates how much each feature contributes to the prediction by considering all possible combinations of features. The SHAP value of a feature roughly represents the amount by which that feature's value changes the model output. SHAP values are additive: for any observation, the average predicted value across all observations plus the sum of that observation's SHAP values (one per feature) equals its prediction.
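This additivity property can be checked directly by computing exact Shapley values on a toy model via brute-force enumeration of feature coalitions (the model, background data, and observation below are illustrative assumptions; the `shap` library uses far faster approximations):

```python
import numpy as np
from itertools import combinations
from math import factorial

rng = np.random.default_rng(1)

# Toy "model" with an interaction between features 1 and 2
def model(X):
    return 3 * X[:, 0] + X[:, 1] * X[:, 2]

X_bg = rng.normal(size=(200, 3))   # background dataset
x = np.array([1.0, 2.0, 0.5])      # observation to explain

# v(S): expected model output when only features in S take x's values
def value(S):
    X_mix = X_bg.copy()
    for j in S:
        X_mix[:, j] = x[j]
    return model(X_mix).mean()

n = 3
phi = np.zeros(n)                  # exact Shapley value per feature
for j in range(n):
    others = [k for k in range(n) if k != j]
    for size in range(n):
        for S in combinations(others, size):
            w = factorial(size) * factorial(n - size - 1) / factorial(n)
            phi[j] += w * (value(S + (j,)) - value(S))

base = value(())                   # average prediction over the background
pred = model(x.reshape(1, -1))[0]
# Additivity: base + phi.sum() recovers pred exactly
```

The enumeration over all coalitions is why exact SHAP is exponential in the number of features, and why practical implementations approximate it or exploit model structure.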

This approach's power comes from the fact that SHAP values have global relevance: unlike LIME, SHAP explains entire distributions of points on a consistent scale. SHAP is also model-agnostic, but it can be computationally expensive, which is why some Python implementations of SHAP are optimized for specific model architectures.

Above is an example of a SHAP beeswarm plot, which shows the distribution of SHAP values across a dataset.

It is important to exercise caution when interpreting the results provided by LIME and SHAP, as they reveal correlations rather than causal relationships. Misinterpretation can lead to incorrect conclusions and reinforce existing biases. Moreover, a technique called scaffolding, demonstrated by Slack et al., can manipulate these methods into producing misleading explanations, potentially hiding bias. Thus, while LIME and SHAP can offer valuable information, one should critically assess the explanations to avoid accepting biased results as objective fact.

In practical applications such as real estate, models are used to identify candidate single-family residences (SFR) for purchase and rent. Cap rate, the rate of return on a property based on the income it is expected to generate, is crucial to this identification. For non-technical users, understanding overall model behavior, and why a model suggests purchasing a particular property, builds trust in their decisions. We created a dashboard in Plotly Dash focused on the Georgia SFR housing market. It employs SHAP values to provide model interpretation at zip code, county, or state granularity. We built it to support multiple models so that users can compare explanations; basic MLR models serve as templates within the dashboard.

The dataset was provided by Haystacks.ai and contained approximately 10,000 SFR observations throughout Georgia. As it is proprietary, we cannot grant access to the reader. We did some minor data cleaning: selecting SFR properties, deleting observations that were missing important data, imputing averages, etc. We also joined in school and crime statistics at the county level to add features. To estimate cap rate, we used FMRs (Fair Market Rents) by number of bedrooms as expected income and assumed an expected cost of 1% of the (predicted) property value.
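Under those assumptions the cap rate estimate reduces to a one-line formula. The FMR and predicted value below are made-up inputs, and we treat the 1% cost as an annual figure:

```python
def estimate_cap_rate(fmr_monthly, predicted_value):
    """Cap rate = net operating income / property value.

    Assumptions from the post: annual income is 12x the Fair Market
    Rent (FMR) for the bedroom count, and annual cost is 1% of the
    predicted property value.
    """
    annual_income = 12 * fmr_monthly
    annual_cost = 0.01 * predicted_value
    return (annual_income - annual_cost) / predicted_value

# e.g. a $250k predicted value renting at a hypothetical $1,400/mo FMR
rate = estimate_cap_rate(fmr_monthly=1400, predicted_value=250_000)
```

For those numbers the estimated cap rate is (16,800 − 2,500) / 250,000 ≈ 5.7%, the kind of figure the dashboard surfaces per property.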

The first page provides a choropleth map of the dataset, along with controls for the model and the metric used to color the map. This lets the user identify outlier zip codes or counties of interest.

The next page shows the distribution of points across our model features. A scatter plot charts a chosen feature against price or cap rate; selecting a point populates the bar plots below, which compare the corresponding property to regional averages. On the right, a beeswarm plot for the selected region lets the user see how that region interacts with the model.

The last page is for further interpretation and property selection. Selecting a zip code (by clicking its bar) in the upper left determines a region from which the top 10 properties are extracted. Selecting a property then creates a SHAP waterfall plot in the upper right; in the example shown, the property's modeled price lost over $200k due to its square footage. Because model performance varies by region, a cutoff slider at the top restricts the view to regions with an R-squared of at least the chosen value.

LIME and SHAP offer the transparency needed for non-technical users to trust and understand their models. Still, caution is warranted, as these tools can be manipulated or misinterpreted, potentially concealing or reinforcing bias. The integration of SHAP into our Plotly Dash dashboard offers a compelling example of model interpretability in action within the Georgia SFR housing market. We hope that it illustrates how SHAP provides insights at multiple resolutions and is easy to comprehend. As people gain insight into their models, they can make more informed decisions, driving a more transparent and accountable AI landscape. 

About Author

Daniel Erickson

I received my Ph.D. in mathematics from Oregon State University. I enjoy tackling complex problems, specifically those amenable to insight gleaned through data.
