Data-driven Medicare Fraud Prediction
The skills the author demonstrated here can be learned by taking the Data Science with Machine Learning bootcamp with NYC Data Science Academy.
Project Information
David completed 'Machine Learning with R, Intermediate Level', with Vivian Zhang at the NYC Data Science Academy, an intensive course from Oct-Nov 2014.
- Contributed by David Russo
- This post is based on a final project submission, after having presented findings at a Demo Day on Dec 4th, 2014.
Videos
- Demo Day Video
Motivation
Perpetrators of Medicare fraud pick our very own pockets by stealing tax dollars from the government's Medicare system, thereby crippling our country's health care system. In 2009 the government developed a task force "to crack down on the people and organizations who abuse the system and cost Americans billions of dollars each year." The task force has successfully convicted more than 2,000 people of fraudulently billing Medicare for over $6 billion, but there are many more to be found, and analyzing the data is one step in the process.
To that end, the Centers for Medicare & Medicaid Services (CMS) has made a data set publicly available that contains "information on services and procedures provided to Medicare beneficiaries by physicians and other healthcare professionals." It is from the data in this set, covering all Medicare billings in the year 2012, that this project develops a method for flagging one type of potentially fraudulent doctor or supplier. I say "one type", because there are various ways in which the system is abused, and each would require a different approach to the analysis. To mention just a few:
- A cancer doctor was convicted of billing medically unnecessary treatments to 1,200 patients, to the tune of $62M in Medicare billings alone; many of the patients did not actually have cancer.
- A storefront wheelchair retailer in Los Angeles has a high percentage of customers in San Francisco, more than 350 miles away.
- Random scam artists dig up doctor and patient ID numbers and charge Medicare for anything and everything, funneling the money through a fictitious medical organization.
- During one particular office visit for one particular medical issue, many physicians bill patients for a handful of different services in addition to the one performed.
This project focuses on the last method of fraud above, because it is one that has affected the contributor personally. I'm not covered by Medicare, so in my case the abuse has been against me and my insurance company. Like the Medicare system, the health insurance industry also pays the price for these abuses, crippling the private side of the health care system as well.
Data Cleaning
As downloaded, the data set has 9.2M observations of 27 variables. Each observation provides information on one service that one provider billed to Medicare in 2012. Keep in mind that some variables removed as not relevant to this study would be highly relevant to the study of another type of Medicare fraud.
Variable Reduction:
- Remove provider address variables except for state (not relevant)
- Remove provider name variables (provider ID is the easier identifier)
- Remove 'credentials' variable ('provider type' is more relevant)
- Remove 'place of service' variable (redundant with 'entity code')
- Remove statistical variables for amounts allowed and amounts submitted (statistics on amounts paid are most relevant; a code sketch follows this list)
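A minimal sketch of this reduction, assuming the file has been read into a data frame MPD; the dropped column names follow the CMS Physician and Other Supplier PUF layout and should be verified against names(MPD) before running:
# Assumed CMS PUF column names for the dropped variables
drop_cols <- c( "nppes_provider_street1", "nppes_provider_street2",      # address
                "nppes_provider_city", "nppes_provider_zip",
                "nppes_provider_country",
                "nppes_provider_last_org_name",                          # names
                "nppes_provider_first_name", "nppes_provider_mi",
                "nppes_credentials", "place_of_service",
                "average_Medicare_allowed_amt", "stdev_Medicare_allowed_amt",
                "average_submitted_chrg_amt", "stdev_submitted_chrg_amt" )
MPD <- MPD[ , !( names( MPD ) %in% drop_cols ) ]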
Missing or Unusual Entries
- Delete the first observation (unusual npi and other data missing)
- Delete 7 observations with '.' as the 'entity' rather than I or O (see the sketch below)
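A minimal sketch of those two deletions, assuming the entity variable carries the CMS name nppes_entity_code used later in this post:
# Drop the first, malformed observation
MPD <- MPD[ -1, ]
# Keep only rows whose entity code is I or O (drops the 7 '.' rows)
MPD <- MPD[ MPD$nppes_entity_code %in% c( "I", "O" ), ]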
Data Transformation
The original data set is structured such that each observation represents a unique combination of healthcare provider (npi) and a particular service (hcpcs) performed by that provider. All variables represent relevant information pertaining to that combination. Therefore, the current data set gives us a summary of each service that each provider has performed in the year 2012.
More useful to this particular study would be a data set that sums the numerical quantities for each provider across all services they provided in that year, rather than breaking it out by service.
Copy the data.frame to a data.table, because taking sums over and subsetting such a large data.frame causes performance problems.
install.packages( "data.table" )
library( data.table )
MPD_T <- data.table( MPD )
# Per-provider totals: services, unique-patient counts, Medicare payment,
# and the average number of services per patient
DOC_TOTALS <- MPD_T[ , list( num_services = sum( line_srvc_cnt ),
                             num_patients = sum( bene_unique_cnt ),
                             total_payment = sum( average_Medicare_payment_amt * line_srvc_cnt ),
                             serv_per_patient = sum( line_srvc_cnt ) / sum( bene_unique_cnt ) ),
                     by = "npi" ]
Using the dplyr package, join the non-numeric variables from the original data set into the transformed data set by their common npi.
DOC_TOTALS <- as.data.frame( DOC_TOTALS )
# Collect the non-numeric provider attributes (cbind coerces everything to character)
add <- cbind( MPD$nppes_provider_gender, MPD$nppes_entity_code, MPD$nppes_provider_state, MPD$provider_type, MPD$npi )
add <- as.data.table( add )
names( add ) <- c( "gender", "entity", "state", "specialty", "npi" )
Reduce the add data set to unique npi's by keeping only non-duplicates:
add_reduce <- subset(add, !duplicated(add$npi))
DOC_TOTALS$npi <- as.character( DOC_TOTALS$npi )
library( dplyr )
DOC_INFO <- left_join( DOC_TOTALS, add_reduce, by="npi", copy=T )
Create a binary dummy variable, 1 = possible fraud and 0 = no fraud, based on whether a provider's serv_per_patient is above the .90 quantile for its specialty (serv_tail_per_spec, computed in the Predictive Modeling section below).
DOC_INFO$fraud <- ifelse(DOC_INFO$serv_per_patient > DOC_INFO$serv_tail_per_spec, 1, 0)
DOC_INFO$fraud <- factor(DOC_INFO$fraud)
(tb1 <- table(DOC_INFO$fraud))
prop.table(tb1)
88K (10%) of the npi's are tagged as potentially fraudulent on the basis of being above the .90 quantile for services per patient within their specialty in the year 2012.
Variables of the transformed data set:
- Provider Identifier
- Entity ( I or O for Individual or Organization )
- Gender ( blank when entity = O )
- State
- Specialty ( of provider's largest # of services )
- Total Number of Services Provided
- Total Number of Distinct Patients
- Average Number of Services per Patient
- Total Amount Paid from Medicare
- .90 Quantile of 'Services per Patient' for each Specialty ( serv_tail_per_spec )
- Fraud Flag (1 or 0)
Descriptive Statistics
Study each provider's average services per patient statistic:
summary( DOC_TOTALS$serv_per_patient )
    Min.  1st Qu.   Median     Mean  3rd Qu.     Max.
   0.769    1.082    1.472    2.981    2.277 3669.000
Across providers, the mean is about 3 services per patient, which sounds reasonable, but the max shows that there is a provider billing an average of 3,669 services per patient! WOW! Let's look at a plot showing the average services per patient for each provider.
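The original plot is not reproduced here; a minimal sketch of one way to draw it (the log scale is my assumption, to keep the heavy right tail visible):
# One point per provider: average services billed per patient
plot( DOC_TOTALS$serv_per_patient, log = "y",
      xlab = "Provider index",
      ylab = "Avg services per patient (log scale)" )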
OK, so the density of averages is concentrated down at the bottom, but there looks to be a lot of providers with a really high average number of services billed per patient. Let's do some calculations.
nrow(DOC_TOTALS[DOC_TOTALS$serv_per_patient > 5,]) / nrow(DOC_TOTALS)
* Only 12.5% of providers bill an average of over 5 services/patient
nrow(DOC_TOTALS[DOC_TOTALS$serv_per_patient > 10,]) / nrow(DOC_TOTALS)
* Only 5% of providers bill an average of over 10 services/patient
nrow(DOC_TOTALS[DOC_TOTALS$serv_per_patient > 20,]) / nrow(DOC_TOTALS)
* Only 1% of providers bill an average of over 20 services/patient
So it's actually only a small percentage of these 880K providers that have billed an excessive number of services per patient. Only 1% billed over 20 services per patient. And yet the max is well over 3,000! Let's look at the details of that provider.
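A sketch of one way to pull that provider's record, first from the per-provider summary and then from the original billing lines:
# Provider with the highest average services per patient
top_npi <- DOC_TOTALS$npi[ which.max( DOC_TOTALS$serv_per_patient ) ]
DOC_TOTALS[ DOC_TOTALS$npi == top_npi, ]
# Inspect that provider's original billing lines for entity, type, and service
MPD[ MPD$npi == top_npi, ]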
It's an Organization not an Individual, a Supplier not a Doctor, and it billed 51,372 times in 2012, but for only 14 patients and only 1 service (a hemophilia drug/treatment), an average of about 3,670 services per patient. This looks suspicious.
Let's see if there is a common thread among the highest providers. We'll look at those providers that are in the 1,000, 200, and 100 services per patient "Clubs".
This plot shows that those providers with an average of 1,000 or more services billed per patient are all organizations and all suppliers.
This plot shows those providers with an average of 200 or more services billed per patient, still a very high amount, and as previously shown, relatively few providers put up these numbers. These providers are both organizations and individuals, and not just suppliers. They are from a number of specialties, but only 17% of all specialties are represented.
Among those in the 100 club, still only a fraction of 1% of providers, more specialties are represented, but still only 35% of all specialties.
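A sketch of how these "club" subsets might be built from DOC_INFO before plotting; the last line shows one way the 17% and 35% specialty shares could be computed:
# Subset providers by average services billed per patient
club_1000 <- DOC_INFO[ DOC_INFO$serv_per_patient >= 1000, ]
club_200  <- DOC_INFO[ DOC_INFO$serv_per_patient >= 200, ]
club_100  <- DOC_INFO[ DOC_INFO$serv_per_patient >= 100, ]
# Fraction of all specialties represented in a given club
length( unique( club_200$specialty ) ) / length( unique( DOC_INFO$specialty ) )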
Predictive Modeling
Our data does not have a response variable telling us which providers were fraudulent. If it did, I would be able to develop models of the variable relationships that produce those results, and I would be able to predict which new providers may also be fraudulent. So I will now assign a fraud variable to our data. A positive in this variable will indicate those providers that are suspicious enough to be investigated for fraud, rather than providers that are actually fraudulent.
The fraud flag will essentially be based on a provider's average services per patient being above the .90 quantile, but the descriptive statistics above show that I need to do this with respect to specialty. Therefore, here I create a data frame showing, for each specialty, the .90 quantile cutoff of serv_per_patient.
library( reshape2 )
docMelt <- melt( DOC_INFO, id = "specialty", measure.vars = "serv_per_patient" )
# .90 quantile of serv_per_patient within each specialty
upper.10 <- dcast( docMelt, specialty ~ variable, quantile, .90 )
# Rename the quantile column to the name used in the fraud flag below
names( upper.10 )[2] <- "serv_tail_per_spec"
Merge the quantile cutoffs into the DOC_INFO data frame:
DOC_INFO <- merge(DOC_INFO, upper.10, by="specialty")
Create the binary dummy variable 'fraud':
serv_per_patient above the specialty's .90 quantile --> investigate for fraud
serv_per_patient below the specialty's .90 quantile --> do not investigate for fraud
DOC_INFO$fraud <- ifelse( DOC_INFO$serv_per_patient > DOC_INFO$serv_tail_per_spec, 1, 0 )
Now I can model on the fraud variable.
Separate into training (60%) and test (40%) sets:
library( caret )
# createDataPartition from caret produces a stratified 60/40 split
trainIndex <- createDataPartition( DOC_INFO$fraud, p = 0.6, list = FALSE, times = 1 )
Train <- DOC_INFO[ trainIndex, ]
Test <- DOC_INFO[ -trainIndex, ]
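A quick sanity check, my addition here, that the 90/10 class balance survives the split in both sets:
prop.table( table( Train$fraud ) )
prop.table( table( Test$fraud ) )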
The following models are variations on the Decision Tree and Logistic Regression model types.
Model 1: Decision Tree
library( rpart )
library( rattle )  # for fancyRpartPlot
treemodel_1 <- rpart( fraud ~ total_payment + specialty + num_services + num_patients + gender + entity + state, data = Train, control = rpart.control( minbucket = 1 ) )
fancyRpartPlot( treemodel_1 )
Here is the printed tree. The variables the tree actually splits on are num_services, num_patients, and specialty; refitting the model using only those three variables gives the same results, so the reduced version would be used.
1) root 528385 52841 0 (0.90, 0.10)
  2) num_services< 6181.5 498178 42204 0 (0.92, 0.08)
    4) num_services< 1745.05 399669 28171 0 (0.93, 0.07) *
    5) num_services>=1745.05 98509 14033 0 (0.86, 0.14)
      10) num_patients>=1061.5 70477 5451 0 (0.92, 0.08) *
      11) num_patients< 1061.5 28032 8582 0 (0.69, 0.31)
        22) specialty=Addiction Medicine,Allergy/Immunology,Ambulance Service Supplier, ... (0.86, 0.14) *
        23) specialty=All Other Suppliers,Ambulatory Surgical Center,Anesthesiology, ... (0.41, 0.59)
          46) num_patients>=790.5 6236 2424 0 (0.61, 0.39)
            92) num_services< 2415.5 4181 654 0 (0.84, 0.16) *
            93) num_services>=2415.5 2055 285 1 (0.14, 0.86) *
          47) num_patients< 790.5 4087 449 1 (0.11, 0.89) *
  3) num_services>=6181.5 30207 10637 0 (0.65, 0.35)
    6) num_patients>=3302.5 16552 3167 0 (0.81, 0.19) *
    7) num_patients< 3302.5 13655 6185 1 (0.45, 0.55)
      14) specialty=Allergy/Immunology,Ambulance Service Supplier,Clinical Laboratory, ... (0.72, 0.28) *
      15) specialty=All Other Suppliers,Ambulatory Surgical Center,Anesthesiology, ... (0.22, 0.78) *
Tree Model 1 Prediction
newdoc1 <- data.frame( total_payment = 100000, specialty = 'Anesthesiology', num_services = 150, num_patients = 50, gender = 'M', entity = 'O', state = 'CA')
predict( treemodel_1, newdata = newdoc1 )
Results:
        0          1
0.9295142 0.07048583
The model predicts no-fraud with a 93% probability.
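For a hard class label instead of class probabilities, rpart's predict method also accepts type = "class":
predict( treemodel_1, newdata = newdoc1, type = "class" )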
Model 2: Decision Tree
I began to think that since I had to manufacture the fraud variable that I am modeling/predicting for, I should only use variables that did not actually go into calculating the outcome. So here I remove 'num_services' and 'num_patients':
treemodel_2 <- rpart( fraud ~ total_payment + gender + entity + state + specialty, data = Train, control = rpart.control( minbucket = 1 ) )
treemodel_2
n= 528385
1) root 528385 52841 0 (0.8999953 0.1000047) *
Only the root node results and no branches form, so none of these variables produces a worthwhile split.
Tree Model 2 Prediction
newdoc2 <- data.frame( total_payment = 100000, gender = 'M', entity = 'O', state = 'CA', specialty = 'Anesthesiology')
predict( treemodel_2, newdata = newdoc2 )
        0         1
0.8999953 0.1000047
A 90% predicted probability of no-fraud, which is just the overall no-fraud base rate at the root.
Model 3: Decision Tree
Here I use the same variables as in Model 2, but I decrease the complexity parameter cp from the default .01 to .005, which admits splits that improve the fit by less than the default would allow.
treemodel_3 <- rpart( fraud ~ total_payment + gender + entity + state + specialty, data = Train, control = rpart.control( cp = .005 ) )
fancyRpartPlot( treemodel_3 )
Here the printed tree shows that the 'total_payment' and 'specialty' variables now produce splits.
1) root 528385 52841 0 (0.89999527 0.10000473)
  2) total_payment< 158519.1 462316 38800 0 (0.91607472 0.08392528) *
  3) total_payment>=158519.1 66069 14041 0 (0.78747976 0.21252024)
    6) specialty=Addiction Medicine,Allergy/Immunology,Ambulance Service, ... 64220 12837 0 (0.80010900 0.19989100) *
    7) specialty=All Other Suppliers,Anesthesiology,Audiologist, ... 1849 645 1 (0.34883721 0.65116279) *
Tree Model 3 Prediction
predict( treemodel_3, newdata = newdoc2 )
        0          1
0.9160747 0.08392528
A 92% predicted probability of no-fraud.
Model 1: Logistic Regression
logmodel_1 <- glm( fraud ~ total_payment + gender + entity + state + specialty, data = Train, family = 'binomial' )
summary( logmodel_1 )
                  Estimate   Std. Error   z value   Pr(>|z|)
(Intercept)     -9.164e+00    8.444e+01    -0.109   0.913573
total_payment    8.122e-07    1.913e-08    42.460    < 2e-16 ***
genderF         -1.367e+00    2.561e-01    -5.339   9.34e-08 ***
genderM         -1.034e+00    2.560e-01    -4.041   5.33e-05 ***
Total payment and gender are strongly significant, which is different from the tree models. Many specialties showed significance, but many did not. State did not show significance.
Logistic Model 1 Prediction
logmodel_1.pre <- predict( logmodel_1, type = 'response' )
pre_1 <- ifelse( logmodel_1.pre > 0.5, 1, 0 )
(table1 <- table( pre_1, Train$fraud ))
pre_1         0        1
    0   475,203   52,612
    1       341      229
sum(diag(table1))/sum(table1)
90% overall success
Now run the model on the Test data set
log_pred_test1 <- predict( logmodel_1, newdata = Test, type = 'response' )
pre_1 <- ifelse( log_pred_test1 > 0.5, 1, 0 )
(table1 <- table( pre_1, Test$fraud ))
pre_1         0        1
    0   316,875   35,131
    1       153       96
sum(diag(table1))/sum(table1)
90% overall success on Test data set as well
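One caveat worth adding: with a 90/10 class split, 90% accuracy is also what the trivial "never flag" rule achieves, so sensitivity and precision on the flagged class are more telling. A sketch computed from the test confusion matrix above:
# table1: rows = predicted (0/1), columns = actual (0/1)
sens <- table1[ 2, 2 ] / sum( table1[ , 2 ] )  # flagged frauds / all actual frauds
prec <- table1[ 2, 2 ] / sum( table1[ 2, ] )   # flagged frauds / everything flagged
c( sensitivity = sens, precision = prec )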
Model 2: Logistic Regression
Reduction of variables based on the previous results, using only 'total_payment' and 'gender':
logmodel_2 <- glm( fraud ~ total_payment + gender, data = Train, family = 'binomial' )
summary( logmodel_2 )
                  Estimate   Std. Error    z value   Pr(>|z|)
(Intercept)     -2.375e+00    1.931e-02   -122.974     <2e-16 ***
total_payment    6.322e-07    1.581e-08     39.997     <2e-16 ***
genderF         -4.050e-02    2.094e-02     -1.934     0.0531 .
genderM          2.066e-01    1.981e-02     10.429     <2e-16 ***
Shows that total_payment and genderM are strongly significant, while genderF is only marginally significant (p ≈ 0.053).
Logistic Model 2 Prediction
logmodel_2.pre <- predict( logmodel_2, type = 'response' )
pre_2 <- ifelse( logmodel_2.pre > 0.5, 1, 0 )
(table2 <- table( pre_2, Train$fraud ))
pre_2         0        1
    0   475,254   52,686
    1       290      155
sum(diag(table2))/sum(table2)
90% overall success
Now run the model on the Test data set
log_pred_test2 <- predict( logmodel_2, newdata = Test, type = 'response' )
pre_2 <- ifelse( log_pred_test2 > 0.5, 1, 0 )
(table2 <- table( pre_2, Test$fraud ))
pre_2         0        1
    0   475,254   52,686
    1       290      155
sum(diag(table2))/sum(table2)
90% overall success on Test data set as well