BNP Paribas: Expediting the Insurance Claim Process
Contributed by Matt Samelson. He is currently enrolled in the NYC Data Science Academy 12-week full-time Data Science Bootcamp program taking place from January 11th to April 1st, 2016. This post is based on his final class project, the capstone, due in the 12th week of the program.
BNP Paribas Cardif, a leading personal insurance provider, turned to the data science community for input on expediting its claims process.
In the insurance world (or at least at this company), some claims can be paid fairly quickly, while others require more thorough attention before payment is made. The driver for rapid claims processing is obvious: client satisfaction. Given a data set containing claims, numerous variables, and an indicator specifying whether each claim was paid in a slow or an expedited manner, data scientists were charged with building a predictive model that minimizes log loss on the dataset.
Simplistically speaking, Logarithmic Loss, or simply Log Loss, is a classification loss function. Minimizing the Log Loss is basically equivalent to maximizing the accuracy of the classifier. A good but fairly quantitative article on log loss can be found here.
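For concreteness, the binary log loss averages -(y*log(p) + (1-y)*log(1-p)) over all observations, where y is the true label and p the predicted probability. A minimal R sketch (the helper name and example values are mine, not from the competition):

```r
# Binary log loss: actual is 0/1, predicted is the predicted probability of class 1.
log_loss <- function(actual, predicted, eps = 1e-15) {
  predicted <- pmin(pmax(predicted, eps), 1 - eps)  # clip to avoid log(0)
  -mean(actual * log(predicted) + (1 - actual) * log(1 - predicted))
}

# Confident, correct predictions yield a small loss
log_loss(actual = c(1, 0, 1), predicted = c(0.9, 0.1, 0.8))  # ~0.145
```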
The Data
Training data comprised 129 variables and 114,321 observations. All variables were anonymized. The breakdown by variable type was as follows:
- 108 Continuous Variables
- 18 Factor Variables in Character Format
- 4 Variables in Integer format
Factor variables in character format had multiple class levels. The number of levels varied widely, from 3 to 18,211, though most had fewer than 10.
A detailed view of the data can be found at the bottom of this post.
Pre-Processing
The summary information on 129 variables is far too voluminous to publish here. Furthermore, the information gleaned from basic EDA (exploratory data analysis) was far from informative. Suffice it to say, all variables were potential contributors to a predictive model.
That said, missingness was a substantial issue with this data set. The figure below illustrates the abundance of incomplete data among variables in the data set:
The histogram on the left illustrates that, in many instances, variables were missing from over 40% of the observations.
The pattern chart on the right illustrates that nearly half of the observations (rows of data in the dataset) were missing all but four variables.
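The figure itself is not reproduced here, but a chart of this kind can be generated with the VIM package (my assumption; the post does not name the tool used), roughly as follows:

```r
# Sketch: VIM's aggr() draws a histogram of the proportion of missing values
# per variable (left panel) and the missingness patterns across observations
# (right panel). The file name "train.csv" is assumed.
library(VIM)

train <- read.csv("train.csv", na.strings = c("", "NA"), stringsAsFactors = FALSE)

aggr(train, prop = TRUE, numbers = FALSE, sortVars = TRUE)
```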
Analysis
I elected to generate boosted trees using the XGBoost package in R in order to 1) maximize predictive power with a non-parametric model and 2) retain model interpretability. Non-parametric models are generally considered more accurate but less interpretable. XGBoost is beneficial in that it offers analytical features that help make models interpretable while maintaining robustness.
Under time constraints, I conducted the analysis by 1) eliminating only a single variable ("v22", a categorical variable with 18,000+ levels that is computationally expensive and of dubious predictive value) and 2) handling missingness by imputing a simple "filler" value (-999).
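A sketch of these two steps, under the assumption that the raw data lives in a data frame called train (file and column names other than "v22" are placeholders of mine):

```r
# Load the raw training data; treat empty strings as missing.
train <- read.csv("train.csv", na.strings = c("", "NA"), stringsAsFactors = FALSE)

# 1) Drop v22, the categorical variable with 18,000+ levels.
train$v22 <- NULL

# XGBoost needs numeric input, so encode the remaining character columns as integers.
is_char <- sapply(train, is.character)
train[is_char] <- lapply(train[is_char], function(x) as.integer(as.factor(x)))

# 2) Impute every remaining missing value with the filler value -999.
train[is.na(train)] <- -999
```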
Using the "train" method from the R caret package, which conveniently supports grid searches and cross-validation, my hardware ran for a laborious 24 hours. The model tested a grid of 27 parameter combinations using 5-fold cross-validation: three candidate values for each of three parameters (eta, i.e. the learning rate; maximum tree depth; and the number of rounds, i.e. trees), giving 3^3 = 27 combinations. The details of this process are not shown in full (a sketch of the tuning setup follows the parameter list below), but it yielded the following results:
- eta: .01
- maximum tree depth: 8
- nrounds (trees): 2000
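An illustrative sketch of the tuning setup. The grid values other than the winning eta, maximum depth, and nrounds are placeholders, the remaining xgbTree parameters are held fixed because caret expects them in the grid, and the object names ("train", "ID", "target") carry over from the assumptions in the pre-processing sketch above:

```r
library(caret)
library(xgboost)

# Predictors and outcome; caret needs factor levels that are valid R names.
train_x <- train[, setdiff(names(train), c("ID", "target"))]
train_y <- factor(make.names(train$target))  # e.g. "X0" / "X1"

ctrl <- trainControl(method = "cv", number = 5,
                     classProbs = TRUE, summaryFunction = mnLogLoss)

grid <- expand.grid(eta = c(0.01, 0.05, 0.1),
                    max_depth = c(4, 6, 8),
                    nrounds = c(500, 1000, 2000),
                    gamma = 0, colsample_bytree = 1,
                    min_child_weight = 1, subsample = 1)

fit <- train(x = train_x, y = train_y, method = "xgbTree",
             metric = "logLoss", trControl = ctrl, tuneGrid = grid)

fit$bestTune  # best combination found by 5-fold cross-validation
```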
Using these tuned parameters, I used the xgb.train function from the XGBoost package to train a final model and generate predictions.
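The original code is not reproduced here; the following is my reconstruction of this step under the assumptions already noted (the 80/20 eval split and object names such as test_x are illustrative):

```r
library(xgboost)

# Hold out an eval set from the training data to monitor log loss during training.
set.seed(42)
idx <- sample(nrow(train_x), floor(0.8 * nrow(train_x)))

dtrain <- xgb.DMatrix(as.matrix(train_x[idx, ]),  label = train$target[idx])
deval  <- xgb.DMatrix(as.matrix(train_x[-idx, ]), label = train$target[-idx])

params <- list(objective = "binary:logistic", eval_metric = "logloss",
               eta = 0.01, max_depth = 8)

bst <- xgb.train(params = params, data = dtrain, nrounds = 2000,
                 watchlist = list(train = dtrain, eval = deval),
                 print_every_n = 100)

# Predicted probabilities for the unlabeled test set (test_x is assumed to be
# pre-processed in exactly the same way as the training data).
pred <- predict(bst, as.matrix(test_x))
```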
The results obtained during training (with an eval set drawn from the training data to monitor performance and the remaining training data used to fit the model) were as follows:
- Logloss Validation: 0.459305 (2000th tree)
- Logloss Train: 0.345169 (2000th tree)
Results
The model yielded the following result when run against the supplied set of unlabeled data:
- Logloss Unknown: 0.46133
These results are likely among the best for a single non-parametric model. Comparison against other modelers on the Kaggle site that was the source of this data indicates that the best-performing models achieved log loss values in the .42 range. Those models are known to be highly complex ensembles that are largely uninterpretable.
Interpretation
Interpreting a boosted tree model with numerous variables is extremely difficult. Rather than present a full and complicated printout of the model, I 1) rely on the model's strong predictive performance and 2) highlight only the most important model features.
I take this approach because interested consumers are likely most interested in performance and in the most important variables, as opposed to a long and complicated presentation about tree structures.
Model performance is addressed above. Going a bit overboard on feature importance, I present the 20 most important variables in the boosted tree model in the chart below:
The concept of gain is a bit too complex to explain fully here. Suffice it to say that, in its most basic form, gain is a measure of the explanatory value a particular variable brings to the model.
The figure above illustrates variable importance with an added layer of clustering. Essentially, the clustering "groups" variables in terms of importance. So, instead of talking about 20 variables individually, we can discuss the importance of four groups.
Clearly, variable v51 is by far the most important in this boosted tree model. The second most important is variable v67. Collectively, we can say that variables v23 through v115 in the figure are "third" most important and variables v126 through v100 are "fourth" most important.
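A chart like the one above can be produced with XGBoost's built-in importance utilities; the exact call used for the figure is my assumption, and the clustered version relies on the Ckmeans.1d.dp package:

```r
library(xgboost)

# Importance table (gain, cover, frequency) for the trained model
imp <- xgb.importance(feature_names = colnames(train_x), model = bst)
head(imp, 20)  # top 20 variables ranked by gain

# Bar chart of the top 20 variables, grouped into four clusters of similar
# importance (requires the Ckmeans.1d.dp package for the clustering step).
xgb.ggplot.importance(imp, top_n = 20, n_clusters = 4)
```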
Conclusion
Data science involves trade-offs. In most instances, the predictive power of a model is as important as its interpretability. One can have a highly accurate model with limited interpretability, a highly interpretable model with limited predictive power, or some balance between the two. The particular needs of the assignment govern this trade-off.
BNP Paribas clearly wants a model with predictive accuracy for economic purposes, yet interpretation is also important for other business reasons. Both are available in the approach presented in this post.