NYC Data Science Academy| Blog

Anomaly Detection with Fraudulent Healthcare Providers

Zack Zbar, Lu Yu and Patrice Kontchou
Posted on Jul 3, 2020
The skills demonstrated here can be learned through the Data Science with Machine Learning Bootcamp at NYC Data Science Academy.

LinkedIn |  GitHub |  Email | Data | Web App

Introduction

Healthcare insurance fraud is not common, but unfortunately it does exist. According to the National Health Care Anti-Fraud Association, health care fraud costs around $68 billion annually in the US alone. Because fraudulent claims make up only a fraction of the industry's total revenue, identifying fraudulent activity in healthcare is a practice of anomaly detection.

In this project, we were given a folder of datasets: patient data, plus inpatient (IP) and outpatient (OP) claims data, where each record represents a claim submitted by a healthcare provider to an insurance company. We also had a list of providers with a column indicating whether each should be flagged as fraudulent or not.

Our task was to study the datasets we were presented with and to identify providers who were submitting potentially fraudulent claims.

A major challenge in our case was that our task was to identify potentially fraudulent providers, but our data was at the patient and claims level. Therefore, we needed to use this patient and claims data to aggregate features by provider. In other words, we had to make a new dataset with one row of information per provider, built from all of that provider's claims and the patients they served. With this new provider-based dataset, we could analyze the providers and detect the potentially fraudulent anomalies.
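As a sketch of that aggregation step, a pandas groupby can roll claim-level rows up to one row per provider. The column and feature names here are illustrative, not the exact ones from the claims data:

```python
import pandas as pd

# Toy claim-level table; real column names in the claims data may differ.
claims = pd.DataFrame({
    "Provider": ["P1", "P1", "P2", "P2", "P2"],
    "ReimbursedAmt": [100, 300, 50, 70, 60],
    "BeneID": ["B1", "B2", "B3", "B3", "B4"],
})

# Aggregate claim-level rows into one row per provider.
provider_features = claims.groupby("Provider").agg(
    n_claims=("BeneID", "size"),
    n_patients=("BeneID", "nunique"),
    mean_reimbursed=("ReimbursedAmt", "mean"),
).reset_index()

print(provider_features)
```

Each provider-level feature in the real project was built in this fashion, from either the claims or the patients a provider served.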

Feature Generation

Our provider-based table ended up with 45 generated features, created during the data analysis phase of our project based on what seemed important for identifying fraudulent providers. Whenever we noticed a difference in distribution between fraudulent and non-fraudulent providers, we came up with a feature to capture that information.

We'll take you through a small sample of our features to give an idea of the kind of information we were looking for.

1. Claim Duration

One simple, yet effective feature that we generated was Duration Mean. For each provider, this is the average of the difference between each Claim Start Date and Claim End Date. In the healthcare industry, this is often referred to as Length of Stay.

While Duration on OP claims was mostly zero (which makes sense, because OP interactions by definition take place within one day), the Duration Mean on IP claims showed much more variance between providers, and even showed different distributions between clean and potentially fraudulent providers. The plot below shows the distributions of IP Duration Mean, with the orange line representing potentially fraudulent providers and the blue line non-fraudulent providers.

[Figure: distributions of IP Duration Mean; orange = potentially fraudulent providers, blue = non-fraudulent providers]
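The Duration Mean feature can be sketched as follows (the date column names are illustrative stand-ins for the ones in the claims data):

```python
import pandas as pd

# Toy claims table; ClaimStartDt / ClaimEndDt are illustrative column names.
claims = pd.DataFrame({
    "Provider": ["P1", "P1", "P2"],
    "ClaimStartDt": ["2019-01-01", "2019-01-10", "2019-02-01"],
    "ClaimEndDt": ["2019-01-05", "2019-01-12", "2019-02-01"],
})

# Length of stay in days for each claim.
claims["Duration"] = (
    pd.to_datetime(claims["ClaimEndDt"]) - pd.to_datetime(claims["ClaimStartDt"])
).dt.days

# Average duration per provider (Duration Mean).
duration_mean = claims.groupby("Provider")["Duration"].mean()
print(duration_mean)
```

A same-day OP visit yields a duration of zero, which is why this feature carries signal mainly on the IP side.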

2. Number of Claims & Average Reimbursement 

Another interesting pattern found in the data and then used as a feature was the Number of Claims and Average Reimbursed Claim amount per provider. When looking at each group (OP and IP) of providers independently, it appeared that potentially fraudulent providers had a large number of claims with a low Average Reimbursed Amount as illustrated in the two figures below. 

[Figures: Number of Claims vs. Average Reimbursed Amount per provider, for OP and IP claims]

3. Duplication of Claims 

One common type of fraud is to submit duplicated claims, where one claim duplicates the key features (diagnosis and procedure codes) of another claim. 

We found that the following combination of codes is the minimal set necessary to define duplicated claims: admit diagnosis code + diagnosis codes 1-4 + procedure code 1.

More restrictive code combinations did not yield fewer claim duplications, while less restrictive combinations yielded more. Under this definition, only 3.5% of IP claims were duplicates of at least one other claim, versus 44.3% of OP claims.

This drastic difference is likely explained by the distinct nature of IP and OP claims. OP claims usually have 0 procedure codes and 1-2 diagnosis codes, while IP claims usually have 1 procedure code and 9 diagnosis codes. Therefore, it's much more likely for an OP claim to "duplicate" another. The features we generated included the number and ratio of duplicated claims, plus a yes-no flag of whether the provider had any duplicated claims at all.

[Figure: duplication of claims, IP vs. OP]

Next, we determined that the two duplication features (IP and OP) showed distinguishing distribution patterns between potentially fraudulent and clean providers, justifying their inclusion in the provider feature table and their use in machine learning. For IP claims, potentially fraudulent providers showed a much higher and more spread-out duplication ratio than clean providers. However, the difference was indistinguishable for OP claims, suggesting that the IP duplication features would contribute much more to the detection of fraudulent providers than the OP ones.
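The duplication features described above can be sketched with pandas' duplicated flag; the code columns here are made-up stand-ins for the admit/diagnosis/procedure codes:

```python
import pandas as pd

# Toy claims; AdmitDx, Dx1, Proc1 stand in for the full key-code combination.
claims = pd.DataFrame({
    "Provider": ["P1", "P1", "P2"],
    "AdmitDx": ["A", "A", "B"],
    "Dx1": ["x", "x", "y"],
    "Proc1": [1, 1, 2],
})

key_cols = ["AdmitDx", "Dx1", "Proc1"]
# keep=False marks every member of a duplicate group, not just the repeats.
claims["is_dup"] = claims.duplicated(subset=key_cols, keep=False)

dup_ratio = claims.groupby("Provider")["is_dup"].mean()  # ratio of duplicated claims
has_dup = dup_ratio.gt(0)                                # yes-no duplication flag
```

`keep=False` matters here: with the default setting, the first claim in each duplicate group would go unflagged and the ratio would be understated.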

Machine Learning

1. Methods to deal with class imbalance

One of the major issues with anomaly detection is that the target class is highly imbalanced, which must be dealt with before training a machine learning model.

To handle the class imbalance, we employed different methods at the corresponding execution levels. During the train-test split, we turned on the stratify argument so that the data is split in a stratified fashion, ensuring the class proportions are maintained in both train and test sets. When training our Logistic Regression and Random Forest models, we set the class_weight argument to "balanced". For Gradient Boosting and the Support Vector Classifier, we used the synthetic minority oversampling technique (SMOTE) to achieve balanced training data.

Another approach we used to address class imbalance was to evaluate model performance with recall rather than accuracy. Recall measures the true positive rate, the proportion of actual positives that the model identifies. By maximizing recall, we cast the widest net possible, which fosters confidence in our model's ability to catch as many fraudulent providers as possible.
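These imbalance-handling steps can be sketched on synthetic data as below (SMOTE, which comes from the separate imbalanced-learn package, is omitted; stratified splitting, class_weight="balanced", and recall scoring are shown):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

# Synthetic imbalanced data: roughly 90% negative, 10% positive.
X, y = make_classification(n_samples=400, weights=[0.9, 0.1], random_state=0)

# stratify=y keeps the class proportions equal in train and test.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)

# class_weight="balanced" reweights the loss by inverse class frequency.
clf = LogisticRegression(class_weight="balanced", max_iter=1000).fit(X_tr, y_tr)

# Score with recall (true positive rate) rather than accuracy.
rec = recall_score(y_te, clf.predict(X_te))
```

Without stratification, a random split of rare positives can leave the test set with almost none of them, making recall estimates unstable.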

2. Logistic Regression (LR)

Our first machine learning model was a penalized logistic regression. This allowed us to accomplish two tasks at once: make predictions on our binary target variable and analyze the results with logistic regression, while also performing empirical feature selection with a lasso penalty term.

After standardizing our features with StandardScaler, we used GridSearchCV with the scoring metric set to recall to find the optimal parameter C. The best estimator from GridSearchCV gave us a fairly good score and did not show signs of overfitting: the recall score on training data was .90, and on test data .89.

For our secondary purpose of using penalized logistic regression (empirical feature selection), our model performed well again. It reduced 45 features down to 9, giving us insight into the most important features for determining whether a provider should be flagged as potentially fraudulent. As we'll see later, many of our 9 remaining features were also selected by Random Forest as the most important features, which made us more confident in reporting those as the most valuable features for answering our research question.
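A sketch of this lasso-penalized pipeline on synthetic data (45 features, as in our provider table; the C grid values are illustrative, not the exact ones we searched):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the 45-feature provider table.
X, y = make_classification(n_samples=300, n_features=45, n_informative=8, random_state=0)

# The l1 penalty drives uninformative coefficients exactly to zero.
pipe = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l1", solver="liblinear", class_weight="balanced"),
)

grid = GridSearchCV(
    pipe,
    {"logisticregression__C": [0.01, 0.1, 1.0]},
    scoring="recall",
    cv=3,
).fit(X, y)

coefs = grid.best_estimator_.named_steps["logisticregression"].coef_.ravel()
n_kept = int((coefs != 0).sum())  # features surviving the lasso penalty
```

The nonzero coefficients are the empirically selected features; smaller C means a stronger penalty and fewer survivors.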

3. Random Forest (RF) 

We chose to try a Random Forest Classifier on this dataset because, by design, it behaves somewhat like lasso logistic regression in that it yields both a model and a ranking of the most important features. This result, compared against our logistic regression, would let us compare important features between the two models.

Once fitted, our best model from the grid search scored 0.94 on the train dataset and 0.70 on the test. Even though it was more overfit than Logistic Regression, both models reported almost the same set of important features, which we will discuss later on.
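The feature ranking can be read straight off a fitted forest; a sketch on the same kind of synthetic 45-feature data (hyperparameter values are illustrative):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for the 45-feature provider table.
X, y = make_classification(n_samples=300, n_features=45, n_informative=8, random_state=0)

rf = RandomForestClassifier(
    n_estimators=200, class_weight="balanced", random_state=0
).fit(X, y)

# Impurity-based importances, normalized to sum to 1; take the top 9
# to compare against the 9 features the lasso kept.
top9 = np.argsort(rf.feature_importances_)[::-1][:9]
```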

4. Gradient Boosting Classifier (GBC)

Next we explored gradient boosting classifiers, as they are known to perform better than Random Forest on imbalanced data. This model fits each new tree to the residuals of the previous trees, hence the reduced model bias. Hyperparameters tuned included max_features, min_samples_split, and n_estimators (as in Random Forest), as well as the learning rate.

During the first round of grid search, the least overfit parameters were chosen as the center for a second round. Train and test scores showed an increasing trend as the hyperparameters changed. There was a train-test gap in scores, but it was very small (< 0.03) and narrowly distributed (std 0.006). Overall, the Gradient Boosting Classifier fit the data well (0.99) and predicted the classification relatively precisely (0.93).
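The grid search itself can be sketched as follows; the grid values below are illustrative, not the ones we actually used, and the first-round winners would seed the second, finer grid:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV

# Synthetic imbalanced stand-in for the provider data.
X, y = make_classification(n_samples=300, weights=[0.85, 0.15], random_state=0)

# Round-one grid over the hyperparameters named above.
param_grid = {
    "learning_rate": [0.05, 0.1],
    "n_estimators": [50, 100],
    "min_samples_split": [2, 10],
}

grid = GridSearchCV(
    GradientBoostingClassifier(random_state=0),
    param_grid,
    scoring="recall",
    cv=3,
).fit(X, y)

best = grid.best_params_  # center of the second-round grid
```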

5. Support Vector Classifier (SVC)

Finally, we tried a Support Vector Classifier using linear and nonlinear kernels. With the nonlinear kernels (poly, sigmoid, rbf), unlike the Gradient Boosting Classifier, the scores of the grid search models oscillated mostly between 0.8 and 0.9. The best model had a train score of 0.85 and a test score of 0.83. With the linear kernel, which was very computationally expensive and time-consuming, the final train and test scores were 0.88 and 0.84, not much higher than those of the nonlinear kernels.

6. Model Comparison & Feature Selection

Based on test recall scores, the performance of models is as follows: 

GBC (.93) > LR (.89) > SVC (.84) > RF (.7). 

Gradient Boosting Classifier performed the best, followed closely by simple Logistic Regression. Except for Random Forest, the other models did not show much overfitting, meaning their train and test scores were close.

[Figure: train and test recall scores by model]

Bringing our focus back to the most important features for detecting fraudulent healthcare providers, we concentrated on Logistic Regression and Random Forest. The 9 features selected by Logistic Regression turned out to be highly consistent with the most important features selected by Random Forest: 7 features overlapped.

[Figure: overlap of important features selected by Logistic Regression and Random Forest]

Many of those features correlated with the size of a provider. In general, bigger providers handle more patients and claims, larger revenue (deductible + reimbursement), more claims per patient, and more inpatients coming from different states. Our models suggested that such providers tend to be fraudulent. Intuitively, bigger providers possess more resources to conduct fraud, and their high volume of claims gives them cover to bury fraudulent claims and avoid detection.

Our models also highlighted certain features that are not only red flags of fraudulence but also indications of the type of fraud that may have been committed. Two major types of fraud are fabrication (creating a claim to represent services that never happened) and misrepresentation (exaggerating or distorting rendered services).

For example, Revenue per Patient and Average IP Duration are strongly correlated with a high probability of fraud. These features, if associated with a truly fraudulent claim, would point to misrepresentation of actual services. On the other hand, Number of Claims per Patient and Duplicate IP Claims could represent fabricated claims, likely based on popular or existing information from the same or another provider.

Conclusion

At the outset of the project, we were uncertain how our strategy of creating a new dataset of provider-based generated features would play out, but in the end we were impressed not only with the results of our models but also with the interpretations they provided. With in-depth data analysis and a comprehensive approach to machine learning, we created models that predicted fraudulent providers well on unseen data, and we identified the features to look out for when investigating potentially fraudulent providers.

About Authors

Zack Zbar

Certified Data Scientist with a background in consulting, bringing the mix of technical expertise and communication skills to make insights heard. Experienced in analytics, project management, and public speaking. Highly competent with business, academic, and creative writing. Organized...

Lu Yu

Certified data scientist with a Ph.D. in biology and experience in genomic sequencing data analysis. Specialized in machine learning, big data, and deep learning. A detail-oriented and goal-driven researcher that is also organized in project management. Confident in...

Patrice Kontchou

Certified Data Scientist with a Masters in Software Engineering (concentration in Artificial Intelligence). Enthusiastic and self-motivated, backed by professional experience driving business insight from terabytes of data using visualization, statistical analysis and machine learning. Strong discipline and leadership...
