Predicting Insurance Claim Severity

Introduction

In October 2016, Allstate launched a Kaggle competition challenging competitors to predict the severity of insurance claims on the basis of 131 different variables. Better understanding the future cost, or severity, of a claim is of utmost importance to an insurance company and would enable Allstate to price its plans more effectively. Additionally, knowing the relative importance of different variables would allow the company to evaluate potential customers more efficiently.

For this competition, we applied various strategies, models, and algorithms to predict the severity of an insurance claim. As we will discuss, we utilized a variety of supervised machine learning methods, including multiple linear regression, ridge and lasso regression, random forest, gradient boosting machines (GBM), and neural networks. We then used ensembling to combine our models and arrive at more accurate predictions.

Exploring the Data

One of the challenges within the competition was that the 131 variables provided by Allstate were anonymized, meaning there was no explanation as to what the various columns described. In all, there were 72 binary categorical variables, 43 non-binary categorical variables (with 3 to 326 levels), 14 continuous variables, and one dependent variable, “loss”. The company provided a training dataset with 188,318 rows and a testing dataset with 125,546 rows.

We first visualized the loss variable, which ranged from 0.65 to 125,000. The full histogram was hard to decipher, however, due to many outliers with high “loss” values. When plotting only the first 95% of the data, we were able to see the distribution more clearly, although the underlying data was still heavily skewed to the right.

[Figure: histograms of the loss variable, full range and first 95%]

To remove the skewness, we applied a log transformation to the loss variable, which approximately normalized its distribution, as seen below.
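The effect of this transformation can be sketched in Python (our analysis used R; the lognormal sample below is a simulated stand-in for the actual loss column):

```python
import numpy as np
from scipy.stats import skew

# Simulated stand-in for the "loss" column: a heavy right tail,
# qualitatively similar to the real claim-severity distribution
rng = np.random.default_rng(0)
loss = rng.lognormal(mean=7.5, sigma=0.8, size=10_000)

print(f"skew before: {skew(loss):.2f}")      # strongly right-skewed
log_loss = np.log(loss)                      # the log transformation
print(f"skew after:  {skew(log_loss):.2f}")  # roughly symmetric
```

On the real data we saw the same pattern: a heavily right-skewed histogram before the transformation and a roughly bell-shaped one after.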

[Figure: histogram of the log-transformed loss variable]

Preprocessing

To prepare the data for analysis, we first combined the training and testing datasets to account for several categorical levels that appeared in test.csv but not in train.csv. Because the categorical variables had many levels, we created dummy columns for each level with binary values of 0 or 1. To reduce the number of new columns, however, we only created dummies for levels that comprised at least 2% of a variable's observations. Lastly, we applied a log transformation to the response column in order to normalize its distribution.
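The 2% cutoff for dummy columns can be sketched in pandas (the original pipeline was written in R, and the column name `cat1` below is made up for illustration):

```python
import pandas as pd

def limited_dummies(df, col, min_frac=0.02):
    """One-hot encode `col`, keeping only levels that make up at
    least `min_frac` of the observations; rarer levels get no column."""
    freq = df[col].value_counts(normalize=True)
    common = freq[freq >= min_frac].index
    # Levels below the cutoff are masked to NaN, which get_dummies drops
    dummies = pd.get_dummies(df[col].where(df[col].isin(common)), prefix=col)
    return dummies.astype(int)

# Toy example: level "C" appears in only 1% of rows, so it gets no column
s = pd.Series(["A"] * 60 + ["B"] * 39 + ["C"] * 1, name="cat1")
out = limited_dummies(s.to_frame(), "cat1")
print(list(out.columns))  # ['cat1_A', 'cat1_B']
```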

[Figure: preprocessing workflow]

The resulting training dataset had a total of 280 columns consisting of 265 binary variables, the 14 original continuous variables, and the log-transformed loss variable.

Supervised Methods

I. Multiple Linear Regression Model

To get a sense of our data and obtain a baseline against which to compare our other models, we first ran a multiple linear regression model using the R caret package. However, we noticed that due to the approach we took in preprocessing our data, the resulting matrix of predictors was rank deficient. This prompted us to try several linear regression models to address the multicollinearity in the data and the attendant problems of matrix invertibility and unreliable confidence intervals.

Our original model included all the variables. To try to solve the rank deficiency problem, we ran a second model which excluded seven variables that had resulted in NA coefficients in the first model. Excluding these variables, however, did not solve the original problem. Finally, a third model excluded all the variables dropped in the second model as well as all the variables that had failed to reach significance at the 90% confidence level in the first model.
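The rank deficiency can be illustrated with a toy design matrix in Python: when one column is a linear combination of others, as happens when a full set of dummy columns sums to the intercept column (the dummy-variable trap), the matrix has fewer independent columns than total columns, and R's lm reports NA coefficients for the redundant ones:

```python
import numpy as np

# Toy design matrix with a redundant column: x3 = x1 + x2, so the
# matrix is rank deficient and X'X cannot be inverted
rng = np.random.default_rng(1)
x1 = rng.normal(size=50)
x2 = rng.normal(size=50)
X = np.column_stack([np.ones(50), x1, x2, x1 + x2])

rank = np.linalg.matrix_rank(X)
print(rank, X.shape[1])  # rank 3 < 4 columns -> rank deficient
```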

This model resulted in an adjusted R² of approximately 0.52, a cross-validation RMSE of 0.51, and a mean absolute error (MAE) of 1249.45 on the test dataset. Inspecting the related diagnostic plots, we were able to ascertain that the errors followed a roughly normal distribution. Similarly, we noticed no distinctive pattern in the scatter plot of the residuals against the fitted values, suggesting that the residuals also had constant variance. These diagnostics gave us confidence in the validity of the model's F-test; however, given the modest accuracy of its predictions, we proceeded to investigate further models.

[Figure: linear regression diagnostic plots]

II. Ridge

We next ran a ridge regression, cross-validating over a coarse grid of lambda values and then cross-validating again over a finer range. However, the ridge model returned a very small lambda, on the order of one ten-thousandth. Such a small lambda implies a near-zero shrinkage penalty, yielding results very close to those of the linear model with all variables included. This model achieved an RMSE of 0.507 and an MAE of 1232.
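A minimal sketch of the two-stage lambda search, using scikit-learn on simulated data (scikit-learn calls the penalty `alpha` rather than lambda, and the grids below are illustrative, not the ones we used):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import RidgeCV

# Simulated stand-in for the preprocessed training data
X, y = make_regression(n_samples=500, n_features=20, noise=10.0, random_state=0)

# Stage 1: coarse grid spanning several orders of magnitude
coarse = RidgeCV(alphas=np.logspace(-4, 4, 9), cv=10).fit(X, y)

# Stage 2: finer grid centered on the best coarse value
fine = RidgeCV(alphas=np.linspace(coarse.alpha_ / 10, coarse.alpha_ * 10, 50),
               cv=10).fit(X, y)
print(f"selected penalty: {fine.alpha_:.5f}")
```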

III. Lasso

In addition, we ran a lasso model with 10-fold cross-validation. The model returned a similarly small shrinkage penalty of 0.0007140295, again suggesting that the lasso fit would yield predictions very close to those of the multiple linear regression model. The model produced an RMSE of 0.507 and an MAE of 1248, performing similarly to the multiple linear regression model but slightly worse than the ridge regression.
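A comparable lasso sketch with 10-fold cross-validation (again scikit-learn on simulated data; the lasso's distinguishing feature is that a large enough penalty zeroes out coefficients entirely, performing variable selection):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LassoCV

# Simulated data with some uninformative columns the lasso could drop
X, y = make_regression(n_samples=500, n_features=30, n_informative=10,
                       noise=5.0, random_state=0)

# LassoCV searches its own path of penalty values with 10-fold CV
lasso = LassoCV(cv=10, random_state=0).fit(X, y)
n_zero = int(np.sum(lasso.coef_ == 0))
print(f"chosen penalty: {lasso.alpha_:.5f}, zeroed coefficients: {n_zero}")
```

A penalty as small as the one we observed leaves almost all coefficients non-zero, which is why the lasso's predictions tracked the plain linear model so closely.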

IV. GBM

Another stand-alone model that we evaluated for predicting the loss variable was gradient boosting. With this method, we tried to build a more accurate model out of an ensemble of sequentially fitted decision trees by adjusting the following parameters:

  • The number of iterations: n.trees
  • The depth of each tree: interaction.depth
  • The learning rate: shrinkage
  • The minimum terminal node size: n.minobsinnode

The main challenge with GBM is finding the best mix of parameters, especially the choice of n.trees and shrinkage. As with all our previous models, we used the caret package to make parameter selection easier. Caret enables parameter tuning through a tuning grid supplied during training. The tuning grid can take multiple values for each parameter and train the model over every combination of parameter values. We trained multiple models to eventually arrive at the best combination of parameters.

We noticed that at an interaction depth of 10 and 500 trees, MAE was minimized without overfitting the validation set. Interaction depths greater than ten made the model overfit early in the iteration process. The lowest MAE we achieved with the best tuning parameters was 1161.49.
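The tuning-grid approach can be sketched with scikit-learn's GridSearchCV on simulated data (the grid below mirrors caret's gbm parameters under scikit-learn's names; the candidate values are illustrative, not the ones we used):

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GridSearchCV

X, y = make_regression(n_samples=300, n_features=15, noise=10.0, random_state=0)

# Each key maps to a list of candidate values; every combination is tried
grid = {
    "n_estimators": [50, 150],      # caret: n.trees
    "max_depth": [2, 4],            # caret: interaction.depth
    "learning_rate": [0.05, 0.1],   # caret: shrinkage
    "min_samples_leaf": [5, 10],    # caret: n.minobsinnode
}
search = GridSearchCV(GradientBoostingRegressor(random_state=0), grid,
                      scoring="neg_mean_absolute_error", cv=3).fit(X, y)
print(search.best_params_)
```

As in caret, the search trains one model per grid cell per fold and reports the combination with the best cross-validated score.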

Ensembling

As we have seen, none of the individual models performed particularly well in predicting loss. However, by working to reduce the MAE of each model, we were able to derive valuable insights. To achieve better predictive performance, we next proceeded to combine these individual models in an ensemble.

To stack our different models, we used the H2O and H2O Ensemble packages. Stacking in H2O works by training multiple base learners on the original dataset, referred to as the “level-zero” data. The base learners can be many different algorithms, each with its own parameters. Each base learner computes its own predictions on the level-zero data. Column-binding these predictions produces the “level-one” data. Another learning algorithm, called the “meta-learner,” is then trained on this level-one data, regressing the original response variable on the base learners' predictions to produce a combined prediction that is better than each of the individual models.
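The level-zero/level-one construction can be sketched by hand in Python (our ensembles used H2O; the base learners, their sizes, and the data below are illustrative):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

X, y = make_regression(n_samples=300, n_features=10, noise=15.0, random_state=0)

# Level-zero: each base learner produces out-of-fold predictions on X
base_learners = [
    RandomForestRegressor(n_estimators=100, random_state=0),
    GradientBoostingRegressor(random_state=0),
]
level_one = np.column_stack([
    cross_val_predict(m, X, y, cv=5) for m in base_learners
])  # shape (300, 2): one column of predictions per base learner

# Meta-learner: regress the response on the column-bound predictions
meta = Ridge().fit(level_one, y)
print(level_one.shape, meta.coef_.shape)
```

Using out-of-fold predictions for the level-one data is what keeps the meta-learner from simply rewarding whichever base learner overfits the training set most.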

The Best Ensemble

Using this framework, we tried a range of combinations of base learners and meta-learners. We started by testing ensembles featuring the default linear model, random forest, gradient boosting machine, and neural network, coupling them with linear, random forest, GBM, and neural network meta-learners in turn. At this stage, the ensembles featuring a GBM meta-learner scored best, with an MAE of 1142.

Next, we adjusted some of the parameters for the base and meta-learners and added more base learners of the same type but with different parameter values. In this step, we obtained our best result with an ensemble that included three GBM models with different numbers of trees and one customized model of each of the other base learner types. This ensemble yielded an MAE of 1125.

Finally, we tried eliminating some of the weaker-performing base learners, such as the linear and random forest models, in favor of multiple GBM and neural net base learners. Our best-scoring ensemble used four gradient boosting machine base learners, five neural net base learners, and a ridge regression as the meta-learner, yielding an MAE of 1118.

Conclusions

Looking back on the experience, one conclusion we reached was that building a high-scoring ensemble is as much an art as it is a science, and that parameter tuning is a central part of the enterprise. We also noticed that bigger ensembles tend to perform better, even when they include base learners of the same type with identical tuning parameters. For instance, one of our ensembles included three gradient boosting machine and three neural net base learners, using the parameters that had yielded the best MAE among the individual GBM and neural net models, along with the same ridge regression meta-learner; it still produced a worse MAE than our best, larger ensemble, even though the GBM and ridge models had the same parameter tunings.

Finally, from our perspective, the manner in which preprocessing is conducted plays a crucial role in the ability to build a model capable of yielding highly accurate predictions.

About Authors

Cristina Andronescu

Cristina is a recent MIT graduate with a background in quantitative social science. Over the past five years, Cristina has been involved in various experimental and quasi-experimental research projects inside academia and as part of program evaluation work...

Oamar Gianan

Oamar Gianan has about 15 years of experience in the information technology industry primarily in cloud computing. He developed a passion for data analysis by working on infrastructure where big data is processed. Before moving to New York,...

James Lee

James Lee is currently a Data Analyst at Facebook via Crystal Equation and a Master's in Data Science student at the University of Washington. He has a background in Economics and Mathematics from New York University, and has...

Joseph van Bemmelen

Joseph van Bemmelen worked in equity research for Stifel Nicolaus, a mid-sized investment bank, for close to two years before joining NYCDSA. In his role, he wrote reports on publicly traded companies and worked extensively with financial models...
