Using Data to Detect Healthcare Fraud
Background
The National Health Care Anti-Fraud Association (NHCAA) estimates that financial losses due to healthcare fraud are in the tens of billions of dollars each year. Healthcare fraud translates to higher premiums, out-of-pocket expenses, and reduced benefits and coverage for consumers. It also leads to higher costs for employers providing benefits to employees. With this in mind, our efforts to use data science to reduce healthcare fraud could have a significantly positive impact on the industry.
Objectives
For this project, our primary goal was to create a machine learning algorithm to classify healthcare providers as either fraudulent or non-fraudulent. Secondly, we sought to identify the most important features that could be indicators of fraud. With these features in mind, we could recommend adjustments to optimize a health insurance company's screening process. Finally, we translated our findings into a dollar amount to project business cost savings and quantify the impact we could make.
Data Pre-Processing & EDA
Our dataset spans a full year of healthcare records (2009) and contains four separate parts: Beneficiaries, Inpatients, Outpatients, and Providers. Around 10% of the 5,410 providers are tagged as potentially fraudulent.
Our first step was to combine these four datasets so we could get a more complete picture while performing exploratory data analysis. Our main objective for this step was to uncover some initial insights on fraud vs. non-fraud trends.
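As a minimal sketch of this merging step, assuming Kaggle-style file names and join keys (BeneID for beneficiary demographics, Provider for the fraud labels), the combination in pandas might look like this:

```python
import pandas as pd

# File and column names are illustrative and may differ from the actual dataset.
beneficiaries = pd.read_csv("Train_Beneficiarydata.csv")
inpatient = pd.read_csv("Train_Inpatientdata.csv")
outpatient = pd.read_csv("Train_Outpatientdata.csv")
providers = pd.read_csv("Train.csv")  # provider IDs with fraud tags

# Stack inpatient and outpatient claims, flagging the claim type
inpatient["IsInpatient"] = 1
outpatient["IsInpatient"] = 0
claims = pd.concat([inpatient, outpatient], ignore_index=True)

# Attach beneficiary demographics to each claim, then the fraud tag by provider
claims = claims.merge(beneficiaries, on="BeneID", how="left")
claims = claims.merge(providers, on="Provider", how="left")
```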
As shown in the plot below, both types of patients are affected by fraud, but inpatients are susceptible to it at a higher rate. We can attempt to rationalize this by keeping in mind that inpatient hospital visits are typically longer and more expensive, and thus would result in a higher payout per claim to fraudulent providers.
We also see that fraudulent claims tend to be for older patients admitted for longer hospital stays. The plot below illustrates this trend with a higher concentration of blue dots near the top right.
Feature Selection & Engineering
We took findings like these into consideration when approaching feature engineering. Additionally, since fraud was tagged by provider, we aggregated several claim-level attributes to the provider level to match. For example, patient age became mean patient age per provider, and claim states were expressed as a count of distinct states per provider.
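For illustration, a provider-level aggregation along these lines can be done with a pandas groupby (the column names here are assumptions, not necessarily the exact ones in our code):

```python
# One row per provider; the fraud tag is constant within a provider
provider_features = claims.groupby("Provider").agg(
    mean_patient_age=("PatientAge", "mean"),        # mean patient age per provider
    n_states=("State", "nunique"),                  # count of distinct claim states
    n_claims=("ClaimID", "size"),                   # total claims submitted
    mean_reimbursed=("InscClaimAmtReimbursed", "mean"),
    fraud=("PotentialFraud", "first"),              # provider-level label
).reset_index()
```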
Feature engineering was part of an iterative process with modeling. Overall, we cycled through five rounds of feature adjustments based on the top coefficients from models and additional EDA. Our final cleaned dataset contained 27 new features.
Modeling
We ran the following binary classification models:
- Logistic Regression
- Linear Discriminant Analysis
- Gaussian Naive Bayes
- Support Vector Machine
- Random Forest
- Gradient Boosting
- XGBoost
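As a sketch of how this comparison might be wired up in scikit-learn (the feature matrix X and label vector y come from the provider-level table above; the model settings shown are defaults, not our tuned values):

```python
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from xgboost import XGBClassifier

X = provider_features.drop(columns=["Provider", "fraud"])
y = (provider_features["fraud"] == "Yes").astype(int)

models = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "LDA": LinearDiscriminantAnalysis(),
    "Gaussian NB": GaussianNB(),
    "SVM": SVC(),
    "Random Forest": RandomForestClassifier(random_state=42),
    "Gradient Boosting": GradientBoostingClassifier(random_state=42),
    "XGBoost": XGBClassifier(eval_metric="logloss"),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="recall")
    print(f"{name}: mean recall = {scores.mean():.3f}")
```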
We also tested out the following approaches to balancing our dataset:
- Setting a "balanced weight" model parameter
- Undersampling: Edited Nearest Neighbor & Random Undersampling
- Oversampling: Synthetic Minority Oversampling Technique (SMOTE) & Random Oversampling
SMOTE gave us the best results after some experimentation with hyperparameter tuning.
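A sketch of the SMOTE setup using imbalanced-learn; placing the resampler inside an imblearn pipeline ensures synthetic examples are generated only from training data, never from the held-out set:

```python
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, test_size=0.25, random_state=42
)

pipe = Pipeline([
    ("smote", SMOTE(random_state=42)),              # oversample the fraud class
    ("clf", RandomForestClassifier(random_state=42)),
])
pipe.fit(X_train, y_train)                          # SMOTE runs on training data only
```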
Preliminary Results
Because the space of possible feature, model, and hyperparameter combinations is effectively infinite, the modeling process involved significant trial and error. We evaluated our preliminary models using F1, recall, and precision scores, as well as ROC curves.
These plots demonstrate that Random Forest, Gradient Boosting, and XGBoost performed the best across all scoring metrics. We then ran a grid search to further tune hyperparameters for these three models.
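A sketch of that grid search for XGBoost, with SMOTE kept inside the pipeline so resampling touches only the training folds (the grid values are hypothetical, not the exact ranges we searched):

```python
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV
from xgboost import XGBClassifier

xgb_pipe = Pipeline([
    ("smote", SMOTE(random_state=42)),
    ("clf", XGBClassifier(eval_metric="logloss")),
])

param_grid = {
    "clf__n_estimators": [200, 400],
    "clf__max_depth": [3, 5, 7],
    "clf__learning_rate": [0.05, 0.1, 0.3],
}

search = GridSearchCV(xgb_pipe, param_grid, scoring="recall", cv=5, n_jobs=-1)
search.fit(X_train, y_train)
print(search.best_params_)
```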
Model Scoring Metrics
To evaluate our models, we assumed that false negatives are more costly than false positives, since a false positive can be addressed with a relatively quick investigation, whereas a false negative remains undetected. Thus, we used the F2 score, which emphasizes recall over precision. The following plot shows our final model scores.
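For reference, the F2 score is the beta = 2 case of the F-beta score, (1 + beta^2) · precision · recall / (beta^2 · precision + recall), so recall is weighted four times as heavily as precision. scikit-learn computes it directly:

```python
from sklearn.metrics import fbeta_score

# F-beta = (1 + b^2) * P * R / (b^2 * P + R); beta=2 weights recall 4x vs. precision
y_pred = search.predict(X_test)
print(f"F2 score: {fbeta_score(y_test, y_pred, beta=2):.3f}")
```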
In the end, XGBoost narrowly outperformed the other two models and scored close to 97% on all three metrics.
Ensembling
We made one final attempt to improve our outcome with model ensembling. To do this, we used a simple voting classifier and combined our top three models.
We also tuned the parameters of our Logistic Regression in an effort to bring in another model category, and tried adding it to the ensemble as well. Interestingly, both sets of ensembling results were worse than XGBoost on its own, so we felt satisfied choosing XGBoost as our final model.
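A sketch of the voting ensemble over the top three models; we show soft voting (averaging predicted probabilities), though a hard majority vote is the other simple option:

```python
from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier, VotingClassifier)
from xgboost import XGBClassifier

ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(random_state=42)),
        ("gb", GradientBoostingClassifier(random_state=42)),
        ("xgb", XGBClassifier(eval_metric="logloss")),
    ],
    voting="soft",  # average class probabilities across the three models
)
ensemble.fit(X_train, y_train)
```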
Model Insights
Now that we had a relatively high-performing model, our next step was to evaluate feature importance for additional insights. To do this, we performed additional EDA based on the values shown in the following plot.
Many of our most important features relate to diagnosis and procedure codes. We determined that, unsurprisingly, fraudulent claims are mostly assigned to the most commonly occurring codes. We found that duplicate claims and claims with no assigned physician are often fraudulent as well.
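As an illustration of where these rankings come from, the tuned XGBoost model exposes its importances directly (this pulls from the hypothetical grid-search pipeline sketched earlier):

```python
import pandas as pd

best_xgb = search.best_estimator_.named_steps["clf"]  # tuned XGBoost from the search
importances = pd.Series(best_xgb.feature_importances_, index=X_train.columns)
print(importances.sort_values(ascending=False).head(10))
```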
Cost Analysis
With all these findings in mind, our final objective was to project tangible cost savings for our model. We calculated the following figures to aid in this prediction:
- Average claim cost: $998
- Average number of claims per provider: 103
- Cost to investigate a claim: $58 (assuming it takes 2 hours to investigate a claim, and medical claims adjusters make $29/hour on average)
- Savings per year = ($998 × TP) − ($58 × (TP + FP)) − ($58 × FN)
We then scaled our XGBoost confusion matrix to match the dataset sample size, including adjustments for oversampling, to arrive at a projected savings of $5.2 million USD for the year.
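A sketch of the savings arithmetic, plugging placeholder confusion-matrix counts (not our actual scaled results) into the formula above:

```python
# Placeholder counts for illustration only
tp, fp, fn = 450, 60, 30

AVG_CLAIM_COST = 998      # average cost per claim ($)
INVESTIGATION_COST = 58   # 2 hours at $29/hour ($)

savings = (tp * AVG_CLAIM_COST
           - (tp + fp) * INVESTIGATION_COST
           - fn * INVESTIGATION_COST)
print(f"Projected yearly savings: ${savings:,}")
```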
Final Takeaways
Our analysis strongly reinforces that health insurance fraud is a complicated, multi-faceted issue. Even so, machine learning can make a significant dent in the problem and produce impressive cost savings. We recommend the following to any health insurance company:
- Consider establishing an extra checkpoint for claims that use the most common diagnosis and procedure codes
- Closely monitor any duplicate claims, as well as claims submitted with no physician, as these are common red flag indicators of fraud
- Fraudulent inpatient claims are significantly more prevalent than outpatient ones, so focus the majority of investigative resources on inpatient claims
Further Analysis
Finally, here are a few opportunities for additional analysis:
- We can further combine our models using more advanced stacking/ensembling techniques, incorporate other combinations of models, or assign different weights to each model within the ensemble.
- With more robust data, we can try classifying our fraud into more detailed categories such as duplicate claims, "upcoding", and billing for services never rendered. Health insurance companies will likely have different approaches for addressing these types of fraud. Fraudulent providers may also demonstrate different patterns based on what type of fraud they are committing.
- The Covid-19 pandemic has brought about a new and unique set of challenges for the healthcare industry. In fact, Covid-19 related healthcare fraud has become a category unto itself. It would be valuable to re-run analysis on more recent data to understand how fraud has evolved.
View our code on GitHub.