Predictive Analytics - Allstate Claims Severity

Posted on Jul 27, 2017

Introduction

The Allstate Claims Severity challenge was a Kaggle competition held in 2016. The task was to perform predictive analytics to estimate claim severity. A training dataset containing 116 categorical variables and 14 continuous variables was provided; each row represented a claim and included, along with the predictor variables, the actual loss on that claim.

The training dataset contains 188,318 claim records. The prediction task was to estimate the loss for each of the 126,546 claims in the test dataset.

Scoring was based on the accuracy of the losses predicted for a separate test file, which did not include the loss column.

 

My Workflow

Training

  1. Load Train Data
  2. Data Visualization
  3. Data Preprocessing
  4. Model Selection
    1. Feature Selection
    2. Model Training
    3. Model Validation
    4. Execution of model on new data
  5. Ensembling of models
  6. Extreme Gradient Boosting

Prediction

  1. Load Test Data
  2. Data Preprocessing
  3. Model Execution on Test data
  4. Creation and submission of the prediction

Loading of data and visualizations

library(caret); library(ggplot2)  # used below for partitioning, training, and plotting
allstate <- read.csv("./train.csv")
hist(allstate$loss)

Loss Distribution in Train File

We see that the loss is heavily right-skewed, with most claims concentrated at small values and a long tail of large losses. This has an implication for model training: I found that taking the logarithm of the loss made the distribution much closer to normal, which helped produce better predictions.
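The logloss field used throughout the rest of the post is simply the log of the loss column. A minimal sketch of the transform (log1p would be a safe alternative if any losses were zero):

allstate$logloss <- log(allstate$loss)  # log-transform the target
hist(allstate$logloss)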

Log of Loss - Allstate Train Data

To further visualize the fields, a for loop was used to chart the log-loss distribution against each of the categorical variables.

catvars <- paste("cat", 1:116, sep="")
# One chart per categorical variable: log-loss distribution filled by category level
for (catvar in catvars) {
   p <- ggplot(allstate, aes_string("logloss", fill = catvar)) + geom_histogram(binwidth = 1)
   print(p)
}


Partitioning of the data

We are going to take 80% of the train data and use it for our model training. The remaining 20% of the data will be used to determine how well our model worked.

set.seed(1234)
# define an 80%/20% train/test split of the dataset
split=0.80
trainIndex <- createDataPartition(allstate$id, p=split, list=FALSE)
data_train <- allstate[ trainIndex,]
data_test <- allstate[-trainIndex,]

 

Feature Selection

To keep the models tractable, we need to take a subset of the available features. I chose to run an rpart model on the entire dataset and use its variable importance to identify the fields to keep.

catfactors <- paste("cat", 1:116, sep="")
contfactors <- paste("cont", 1:14, sep="")
formula <- reformulate(termlabels = c(catfactors, contfactors), response = "logloss")
# Fit a single decision tree and rank the variables by importance
modelFit <- train(formula, data = allstate, method = "rpart")
varImp(modelFit)

cat80D  100.00
cat80B   99.75
cat12B   78.30
cat79D   75.14
cat79B   59.11
cat10B   18.35
cat1B    18.27
cat81D   16.09
cat81B   14.15

 

Execution of various models

The following models were tried, and the best scores were obtained with the XGBoost model. Its training call is shown below, followed by a sketch of the equivalent calls for the other two.

  1. Linear Regression
  2. Rpart
  3. Xgboost

 

catfactors <- c("cat80", "cat12", "cat79", "cat10", "cat1", "cat81")
formula <- reformulate(termlabels = c(catfactors, contfactors), response = "logloss")

# 10-fold cross-validation
controlParameters <- trainControl(method = "cv", number = 10,
                                  savePredictions = TRUE, verboseIter = TRUE)

# A single tuning combination for xgbLinear
parametersGrid <- expand.grid(nrounds = 100, lambda = 0.5, alpha = 0.5, eta = 0.1)

model.xgboost <- train(formula, data = data_train, method = "xgbLinear",
                       trControl = controlParameters, tuneGrid = parametersGrid)
summary(model.xgboost)
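For comparison, the other two models in the list above can be trained through the same caret interface by changing only the method argument. This is a minimal sketch that reuses the formula, data_train, and controlParameters objects defined above, with hyperparameters left at caret's defaults:

# Linear regression and rpart via the same caret workflow (sketch)
model.lm    <- train(formula, data = data_train, method = "lm",
                     trControl = controlParameters)
model.rpart <- train(formula, data = data_train, method = "rpart",
                     trControl = controlParameters)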

Validating the model

# Validating our model on the 20% hold-out set
x_test <- data_test[c(catfactors, contfactors)]
y_test <- data_test[, "logloss"]
predictions <- predict(model.xgboost, x_test)
str(predictions)
head(y_test)
hist(predictions)
# Computing RMSE and R2
caret::RMSE(pred = predictions, obs = y_test)
caret::R2(pred = predictions, obs = y_test)
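Since the model predicts on the log scale, it can also be useful to back-transform and inspect the error in the original loss units. A minimal sketch, mirroring the exp() transform applied to the test-file predictions below:

# Back-transform to the original loss scale and compute mean absolute error
pred_loss <- exp(predictions)
actual_loss <- exp(y_test)
mean(abs(pred_loss - actual_loss))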

 

Predictions on the test file

testFile <- read.csv("./test.csv")
out_test <- testFile
id <- out_test$id
str(id)

# Predict on the log scale, then transform back to the original loss scale
logloss <- predict(model.xgboost, out_test)
loss <- exp(logloss)

head(loss)
hist(loss)

Creating the submission file


out_file <- cbind(id, loss)
head(out_file)
options(scipen = 999)  # Avoid scientific notation in the written file
write.csv(out_file, file = "./submit.csv", row.names = FALSE)

Challenges faced

The data is highly anonymized, which prevented any meaningful feature engineering on top of the variables provided. Using domain knowledge to fine-tune the models was almost impossible.

The large number of features slowed model training considerably. In the interest of creating a model of practical use, I had to select a subset of features, and the lack of domain information increased the risk of dropping informative variables.

Results

I was able to learn how to apply a variety of models; the caret package makes switching between them a breeze. I also started using RMarkdown as a documentation tool while developing models.

 

 

About Author

Smitha Mathew

Technology enthusiast with attention to detail and global exposure. She is a self-motivated problem solver with experience analyzing data and deriving meaningful statistical information. Her goal is to make a positive difference in people's lives...
