Kaggle's Allstate claims severity data challenge

Posted on Jun 11, 2017

Kaggle hosted a competition called Allstate Claims Severity. Allstate provided training and test data containing both continuous and categorical features; a portion of the data is shown below.

Id  Cat1  Cat2  Cat3  Cont2     Cont3     Cont4     Cont5     loss
1   A     B     B     0.245921  0.187583  0.789639  0.310061  2213.18
2   A     B     A     0.737068  0.592681  0.614134  0.885834  1283.60

Data set

Categorical features were prefixed with “cat” and continuous features with “cont”. The counts of each type are shown in the table below.

Type         Count
Categorical  116
Continuous   15

Categorical data:

The categorical features varied widely in the number of levels they contained: the smallest had 2 levels and the largest had 326.
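
A quick way to inspect the number of levels per categorical feature, assuming the training data is in a pandas DataFrame (the file path and variable names below are illustrative):

```python
import pandas as pd

train = pd.read_csv("train.csv")  # Kaggle training file; path is illustrative

# Number of distinct levels in each categorical feature
cat_cols = [c for c in train.columns if c.startswith("cat")]
cardinality = train[cat_cols].nunique().sort_values()

print(cardinality.head())  # features with the fewest levels (2 in this dataset)
print(cardinality.tail())  # features with the most levels (up to 326)
```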

Continuous data:

The distributions of the continuous features were also plotted and examined.
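
A sketch for plotting those distributions with pandas and matplotlib (the bin count and figure size are arbitrary choices):

```python
import pandas as pd
import matplotlib.pyplot as plt

train = pd.read_csv("train.csv")
cont_cols = [c for c in train.columns if c.startswith("cont")]

# One histogram per continuous feature
train[cont_cols].hist(bins=50, figsize=(14, 10))
plt.tight_layout()
plt.show()
```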

Feature Engineering

Analysis showed that the dataset had multicollinearity, which, if left uncorrected, could lead to an overfit model. To address it, the Variance Inflation Factor (VIF) was used to identify predictors that were highly correlated with other predictors; predictors with a VIF greater than 5 were removed.
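
A minimal sketch of this filtering step using statsmodels, assuming the continuous predictors sit in a pandas DataFrame; the function name and iterative drop-and-recompute loop are my own illustration, not the original code:

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

def drop_high_vif(df, threshold=5.0):
    """Iteratively drop the predictor with the highest VIF until all are <= threshold."""
    cols = list(df.columns)
    while len(cols) > 1:
        X = sm.add_constant(df[cols])  # intercept term for a fair VIF computation
        vifs = pd.Series(
            [variance_inflation_factor(X.values, i + 1) for i in range(len(cols))],
            index=cols,
        )
        if vifs.max() <= threshold:
            break
        cols.remove(vifs.idxmax())     # drop the most collinear predictor and recompute
    return cols

# Illustrative usage on the continuous predictors
# cont_cols = [c for c in train.columns if c.startswith("cont")]
# kept_cont = drop_high_vif(train[cont_cols], threshold=5.0)
```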

Another recurring issue was that a few categorical predictors did not have the same set of levels in the training and test sets. These predictors were also removed.

Once these predictors were removed, the remaining categorical features were converted to numerical form using one-hot encoding.
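
A sketch of these two steps, assuming the raw data live in pandas DataFrames named train and test (hypothetical names):

```python
import pandas as pd

train = pd.read_csv("train.csv")
test = pd.read_csv("test.csv")

cat_cols = [c for c in train.columns if c.startswith("cat")]

# Drop categorical predictors whose levels differ between train and test
mismatched = [c for c in cat_cols
              if set(train[c].unique()) != set(test[c].unique())]
train = train.drop(columns=mismatched)
test = test.drop(columns=mismatched)

# One-hot encode the remaining categorical features
kept_cats = [c for c in cat_cols if c not in mismatched]
train = pd.get_dummies(train, columns=kept_cats)
test = pd.get_dummies(test, columns=kept_cats)
```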


Multiple Regression, Lasso, Ridge, and Elastic Net

Multiple linear regression was straightforward and produced a model with R² = 0.4817. It seemed natural to also try ridge, lasso, and elastic net; however, comparing the MSE of all the models showed that ridge regression performed best.

Technique    MSE
Ridge        4589096
Lasso        11667780
Elastic Net  11847863
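
A sketch of this comparison with scikit-learn; the regularization strengths and the train/validation split are placeholders, not the values used in the original analysis:

```python
from sklearn.linear_model import LinearRegression, Ridge, Lasso, ElasticNet
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

# Assumed to exist from the preprocessing steps above (hypothetical names):
#   X -- one-hot encoded categoricals combined with the VIF-filtered continuous features
#   y -- the "loss" target column
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=42)

models = {
    "OLS": LinearRegression(),
    "Ridge": Ridge(alpha=1.0),           # alpha values here are illustrative
    "Lasso": Lasso(alpha=0.1),
    "Elastic Net": ElasticNet(alpha=0.1, l1_ratio=0.5),
}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_val)
    print(f"{name}: MSE={mean_squared_error(y_val, pred):.0f}, "
          f"R2={r2_score(y_val, pred):.4f}")
```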


XGBoost

Despite ridge being the clear winner among the linear models, its MSE was still too high, so XGBoost was tried to see whether a better model could be built. The XGBoost parameters were set as follows:

eta          0.01
gamma        0.175
max_depth    2
lambda       1
alpha        0
objective    "multi:softprob"
eval_metric  "mlogloss"
nround       5000


First, the best number of boosting rounds was selected through cross-validation. The best iteration reached train-mlogloss 0.114459 ± 0.000355 and test-mlogloss 0.124675 ± 0.001603, and the resulting model had a test error of 0.0314.
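
A sketch of how the best iteration could be selected with xgb.cv and then used to train the final model, with the parameters listed above. Since "multi:softprob" and "mlogloss" imply a multiclass target (and require num_class), this sketch assumes the loss values were binned into discrete classes; X, y_class, and num_class are hypothetical names:

```python
import xgboost as xgb

params = {
    "eta": 0.01,
    "gamma": 0.175,
    "max_depth": 2,
    "lambda": 1,
    "alpha": 0,
    "objective": "multi:softprob",
    "eval_metric": "mlogloss",
    "num_class": num_class,  # required by multi:softprob; depends on how loss was binned
}

dtrain = xgb.DMatrix(X, label=y_class)  # y_class: hypothetical binned version of the loss target

# Cross-validate to find the best number of boosting rounds
cv = xgb.cv(params, dtrain, num_boost_round=5000, nfold=5,
            early_stopping_rounds=50, verbose_eval=100)
best_rounds = len(cv)  # rows returned correspond to the best iteration under early stopping

# Train the final model with the selected number of rounds
model = xgb.train(params, dtrain, num_boost_round=best_rounds)
```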

Conclusion

When the predictions were submitted to Kaggle, the multiple linear regression model scored better than XGBoost. At the time it was baffling how that was possible, and due to lack of time I was unable to figure it out. Revisiting the project, I realized that multiple linear regression converts categorical features into usable predictors on its own, and that, because of an intermediate model I was generating, I had forgotten to combine the one-hot encoded categorical data with the continuous features for the other models. This could explain the unexpected results. With more time, I would have rerun the models with the corrected feature set.

About Author


Tariq Khaleeq

Tariq Khaleeq has a background in Bioinformatics and completed his masters from Saarland University, Germany. In his master thesis, he worked on prediction of non coding genes in breast cancer. After his masters he co-founded a company where...
