BNP Paribas: Expediting the Insurance Claim Process

Posted on Mar 31, 2016

Contributed by Matt Samelson. He is currently in the NYC Data Science Academy 12-week full-time Data Science Bootcamp program taking place between January 11th and April 1st, 2016. This post is based on his final class project, the capstone, due in the 12th week of the program.

BNP Paribas Cardif, a leading personal insurance provider, turned to the data science community for input on expediting its claims process.

In the insurance world (or at least at this company), some claims can be paid fairly quickly, while others require more thorough attention before a payment is made.  The driver for rapid claims processing is obvious: client satisfaction.  Given a data set containing claims, numerous variables, and an indicator specifying whether each claim was paid in a slow or expedited manner, data scientists were charged with building a predictive model that minimizes log loss on the dataset.

Simplistically speaking, logarithmic loss, or simply log loss, is a classification loss function. Minimizing log loss is roughly equivalent to maximizing the accuracy of the classifier, with the added property that confident but wrong predictions are penalized heavily.  A good but fairly quantitative article on log loss can be found here.
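For concreteness, here is a minimal R sketch of the binary log loss computation, using the standard trick of clipping predicted probabilities to avoid taking the log of zero:

```r
# Binary log loss: average negative log-likelihood of the true 0/1 labels
# under the predicted probabilities.
log_loss <- function(actual, predicted, eps = 1e-15) {
  predicted <- pmin(pmax(predicted, eps), 1 - eps)  # clip away from 0 and 1
  -mean(actual * log(predicted) + (1 - actual) * log(1 - predicted))
}

# Confident correct predictions give a small loss; confident wrong ones a large loss
log_loss(c(1, 0, 1), c(0.9, 0.1, 0.8))   # ~0.14
log_loss(c(1, 0, 1), c(0.1, 0.9, 0.2))   # ~2.07
```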

The Data

The training data comprised 129 variables and 114,321 observations.  All variables were anonymized.  The breakdown by variable type was as follows:

  • 108 Continuous Variables
  • 18  Factor Variables in Character Format
  • 4  Variables in Integer format

Factor variables in character format had multiple class levels.  The number of levels varied widely, from 3 to 18,211, but was mostly under 10.

A detailed view of the data can be found at the bottom of this post.

Pre-Processing

The summary information on 129 variables is far too voluminous to publish here.  Furthermore, the information gleaned from basic EDA (exploratory data analysis) was far from informative.  Suffice it to say, all variables were potential contributors to a predictive model.

That said, missingness was a substantial issue with this data set.  The figure below illustrates the abundance of incomplete data among variables in the data set:

[Figure: aggregation plot of missing data, by variable (left) and by missingness pattern (right)]

The histogram on the left illustrates that, in many instances, variables were missing from over 40% of the observations.

The pattern chart on the right illustrates that nearly half of the observations (rows of data in the dataset) were missing all but four variables.

[Figure: histogram of the number of missing values per observation]
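The plots above appear consistent with the output of the aggr() function from the VIM package. A minimal sketch of how such missingness diagnostics can be produced, assuming the training data has been loaded into a data frame named train:

```r
library(VIM)

# Left panel: proportion of missing values per variable;
# right panel: the combinations (patterns) of missingness across rows
aggr(train, numbers = TRUE, sortVars = TRUE,
     labels = names(train), cex.axis = 0.5)

# Distribution of missing values per observation (matches the second figure)
hist(rowSums(is.na(train)),
     main = "Missing values per observation",
     xlab = "Number of missing variables")
```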

Analysis

I elected to generate boosted trees using the XGBoost package in R to 1) maximize predictive power with a non-parametric model and 2) retain model interpretability.  Non-parametric models are generally considered more accurate but less interpretable.  XGBoost is beneficial in that it offers analytical features that help make models interpretable while maintaining robustness.

Under time constraints I conducted the analysis by 1) eliminating only a single variable ("v22", a categorical variable with 18,000+ levels that is computationally expensive and of dubious predictive value) and 2) handling missingness by imputing a simple "filler" value (-999).
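A minimal sketch of this preprocessing, assuming the data sits in a data frame named train; encoding the character factors as integer codes is an additional assumption about how the categorical variables were made numeric for XGBoost:

```r
# Drop the high-cardinality categorical variable
train$v22 <- NULL

# Encode remaining character columns as integer codes (assumed step)
char_cols <- names(train)[sapply(train, is.character)]
for (col in char_cols) {
  train[[col]] <- as.integer(as.factor(train[[col]]))
}

# Impute every remaining missing value with the sentinel -999
train[is.na(train)] <- -999
```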

Using the train method of the R caret package, which conveniently supports grid searches and cross-validation, the search ran on my hardware for a laborious 24 hours.  The model tested a grid of 27 parameter combinations using 5-fold cross-validation: three candidate values for each of three parameters, eta (learning rate), maximum tree depth, and number of rounds (trees), giving 3^3 = 27 combinations.  The details of this process are not shown here (a sketch of the setup appears after the list below), but it yielded the following results:

  • eta:  .01
  • maximum tree depth:  8
  • nrounds (trees): 2000
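A sketch of how such a search can be set up with caret, assuming the preprocessed predictors and a 0/1 target column named target; the candidate grid values (other than the winning combination) and the fixed secondary parameters are assumptions, not the original settings:

```r
library(caret)

# 5-fold cross-validation, scored by log loss
ctrl <- trainControl(method = "cv", number = 5,
                     classProbs = TRUE, summaryFunction = mnLogLoss)

# Three values for each of eta, max_depth and nrounds -> 3^3 = 27 combinations;
# the remaining xgbTree tuning parameters are held fixed
grid <- expand.grid(eta = c(0.01, 0.05, 0.10),
                    max_depth = c(4, 6, 8),
                    nrounds = c(500, 1000, 2000),
                    gamma = 0, colsample_bytree = 1,
                    min_child_weight = 1, subsample = 1)

set.seed(1)
fit <- train(x = train[, setdiff(names(train), "target")],
             y = factor(train$target, labels = c("slow", "fast")),
             method = "xgbTree",
             trControl = ctrl, tuneGrid = grid,
             metric = "logLoss", maximize = FALSE)

fit$bestTune   # eta = 0.01, max_depth = 8, nrounds = 2000 in this analysis
```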

Using these parameters, I then used the xgb.train function from the XGBoost package to train a final model and make predictions.  A sketch of this step appears below.
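The original code listing is not reproduced here; the sketch below shows one way this step could look with the tuned parameters, assuming a preprocessed data frame train, a 0/1 label vector target, and a similarly preprocessed test set:

```r
library(xgboost)

# Hold out part of the training data as an evaluation set
set.seed(1)
idx    <- sample(nrow(train), 0.8 * nrow(train))
dtrain <- xgb.DMatrix(as.matrix(train[idx, ]),  label = target[idx])
deval  <- xgb.DMatrix(as.matrix(train[-idx, ]), label = target[-idx])

params <- list(objective = "binary:logistic",
               eval_metric = "logloss",
               eta = 0.01, max_depth = 8)

model <- xgb.train(params = params, data = dtrain, nrounds = 2000,
                   watchlist = list(train = dtrain, eval = deval),
                   print_every_n = 100)

# Predicted probabilities for the unlabeled test set
pred <- predict(model, xgb.DMatrix(as.matrix(test)))
```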

The model reported the following results during training.  The log-loss values on the evaluation set (held out from the training data) and on the portion of the training data used to fit the model were as follows:

  • Logloss Validation:  0.459305 (2000th tree)
  • Logloss Train: 0.345169 (2000th tree)

Results

The model yielded the following result when run against the supplied unlabeled test data:

  • Logloss Unknown:  0.46133

These results are likely among the best for a single non-parametric model.  Comparison against other modelers at the Kaggle site that was the source of this data indicates the best-performing models achieved log-loss values in the .42 range.  Those models are known to be highly complex ensembles that are largely uninterpretable.

Interpretation

Interpreting a boosted tree model with numerous variables is extremely difficult. Rather than present a full and complicated printout of the model, I 1) stand on its high predictive value and 2) highlight only the most important model factors.

I take this approach because interested consumers likely care most about performance and the most important variables, as opposed to a long and complicated presentation about tree structures.

Model performance is addressed above.  As for feature importance, the 20 most important variables in the boosted tree model are illustrated in the chart below:

[Figure: top 20 variables by gain, grouped into importance clusters]

The concept of gain is a bit complex to explain fully here. Suffice it to say that, in its most basic form, gain is a measure of the explanatory value a particular variable brings to the model (the improvement in the objective contributed by splits on that variable).

The figure above illustrates variable importance with an added layer of clustering. Essentially, the clustering groups variables by importance, so instead of discussing 20 variables individually we can discuss the importance of four groups.
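A sketch of how such an importance chart can be produced, assuming the model and training data frame from the earlier sketches; the xgb.ggplot.importance helper groups bars into clusters of similar importance:

```r
library(xgboost)

# Gain-based importance of each feature in the fitted model
imp <- xgb.importance(feature_names = colnames(train), model = model)
head(imp, 20)

# Top 20 variables by gain, grouped into importance clusters
xgb.ggplot.importance(imp, top_n = 20, n_clusters = 4)
```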

Clearly, variable v51 is by far the most important in this boosted tree model. The second most important is variable v67. Collectively, we can say that variables v23 through v115 in the figure are "third" most important and variables v126 through v100 are "fourth" most important.

Conclusion

Data science is a trade-off. In most instances the predictive power of a model is as important as its interpretability. One can have a highly accurate model with limited interpretability, a highly interpretable model with limited predictive power, or some balance between the two. The particular needs of the assignment govern this trade-off.

BNP Paribas clearly wants a model with predictive accuracy for economic purposes, yet interpretation is also important for other business purposes. Both are available in the approach presented in this post.

 

About Author

Matt Samelson

Matt Samelson is a data scientist and leader passionate about "hands-on" problem-solving using statistical analysis, predictive analytics, and visualization. He has a track record of driving incremental business improvements and a background in management, consulting, and quantitative research....