Predicting Higgs Boson Signal

Danli Zeng, Diego De Lazzari, and Linlin Cheng
Posted on Sep 2, 2016

The Higgs boson was the last hold-out particle remaining hidden during the quest to check the accuracy of the Standard Model of physics. After decades of searching, CERN officially announced the confirmation of the Higgs boson on March 14, 2013. This Kaggle competition is about building a machine learning algorithm to separate signal from background noise using simulated ATLAS experiment data.

The code can be found here.


The Missingness Pattern

The first thing that caught our eye while reading the data was the enormous number of -999s in both the training and the test datasets. So we did a quick check using the aggr function from the VIM package.
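A minimal base-R version of that check looks like this: recode the -999 sentinel to NA and compute the per-column missingness ratio (the data frame below is a three-column stand-in for the real 30-plus-column dataset; column names mirror the competition's, the values are made up).

```r
# Recode the -999 sentinel to NA, then compute each column's missingness ratio
# (a base-R stand-in for the summary side of VIM::aggr). Toy data only.
train <- data.frame(
  DER_mass_MMC   = c(125.2, -999, 110.7, 98.4),
  PRI_jet_all_pt = c(-999, -999, 57.2, 203.9),
  PRI_jet_num    = c(0, 0, 1, 3)
)
train[train == -999] <- NA          # sentinel -> NA
miss_ratio <- colMeans(is.na(train))
round(miss_ratio, 2)
```

On the real data this is what reveals ratios ranging from roughly .15 to above .70 for the affected columns.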

The graph below shows the missingness pattern of the training dataset. On the left is the ratio of missingness for each column, ranging from .15 to more than .70 (the x-axis ticks may be a little hard to read, but don't worry, we'll come back to them later). On the right, each row represents a possible combination of missing variables. From this graph alone we can tell that several variables behave oddly: a group of seven variables is always missing together, sometimes joined by a group of three others. The only variable that is potentially MCAR (missing completely at random) is DER_mass_MMC (the narrow red bar at the far left of the combination graph).

[Figure: per-column missingness ratios (left) and missingness combinations (right)]


Since the ticks in the previous graph were intentionally shrunk to fit the space and are hard to read, we double-checked the missingness and discovered an interesting fact: the missingness pattern is strongly correlated with the value of the variable PRI_jet_num! When it equals 0 or 1, a whole group of variables is missing entirely. Meanwhile DER_mass_MMC, the variable we suspected to be MCAR, has a small amount of missingness in every case.
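That relationship can be seen with a simple cross-tabulation of missingness against PRI_jet_num; the sketch below uses a toy two-column frame (in the real set, the jet-related columns are -999 whenever PRI_jet_num is too small for them to be defined).

```r
# Cross-tabulate missingness of one jet-related column against PRI_jet_num.
# Toy data: the column is -999 exactly when PRI_jet_num is 0 or 1.
df <- data.frame(
  PRI_jet_num           = c(0, 1, 2, 3, 0, 2),
  PRI_jet_subleading_pt = c(-999, -999, 44.7, 61.2, -999, 30.1)
)
df$PRI_jet_subleading_pt[df$PRI_jet_subleading_pt == -999] <- NA
tab <- table(jet = df$PRI_jet_num,
             missing = is.na(df$PRI_jet_subleading_pt))
tab
```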

[Figure: missingness pattern by value of PRI_jet_num]


Data Distribution

Next we did some EDA, looking at the data distribution and correlation to further explore the structure of our data. Shown below are box plots for all the variables. We found that the means of some of them differ substantially between background and signal, and that many of the features have outliers; in the end, we decided to remove only the extreme ones (those inside the red circles).
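A sketch of what "remove only the extreme ones" means in practice: cut a feature at very wide quantiles so the bulk of the tail survives and only absurd values go. The 0.1% / 99.9% cutoffs below are our illustrative choice, not something fixed by the competition.

```r
# Keep the tail, drop only the extreme outliers: cut at wide quantiles.
set.seed(1)
x <- c(rnorm(1000), 500)               # one absurd outlier planted at 500
q <- quantile(x, c(0.001, 0.999))      # very wide cutoffs
keep <- x >= q[1] & x <= q[2]
sum(!keep)                             # number of observations dropped
```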

[Figure: box plots of all variables, background vs. signal]

As for correlation, we removed all the NAs (i.e., the -999s) before drawing the graph; otherwise many of the features would appear highly correlated. The final graph shows some multicollinearity, which we think is acceptable since many of the features are actually derived from others (those with names starting with "DER"), but we kept this graph in case we wanted to use any linear models later.
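Why dropping the sentinels first matters: two columns that are both -999 on the same rows look strongly correlated even if their real values are unrelated. A minimal sketch, with made-up numbers, of computing correlation on complete pairs only:

```r
# Correlation after recoding -999 to NA; use = "pairwise.complete.obs"
# keeps only rows where both values are observed. Toy two-column example.
d <- data.frame(a = c(1, 2, 3, -999, 5),
                b = c(2, 4, 6, 8, -999))
d[d == -999] <- NA
cor(d$a, d$b, use = "pairwise.complete.obs")   # computed on rows 1-3 only
```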

[Figure: feature correlation matrix]


Feature Selection

We also ran a PCA (principal component analysis) to help us reduce the dimensionality and aid our feature selection.

Since PCA does not accept NAs as input, we performed random imputation before the analysis, simply because the box plots showed that many of the variables are highly skewed, so mean imputation would not be a good choice.
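Random imputation here just means replacing each NA with a value drawn from the observed values of the same column, which preserves the column's skewed empirical distribution (unlike mean imputation, which piles mass at one point). A base-R sketch:

```r
# Random imputation: fill each NA with a value sampled (with replacement)
# from the observed values of the same column.
random_impute <- function(x) {
  miss <- is.na(x)
  x[miss] <- sample(x[!miss], sum(miss), replace = TRUE)
  x
}

set.seed(42)
v <- c(1.2, NA, 3.4, NA, 100)   # skewed toy column
random_impute(v)
```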

The scree plot suggests 11 PCs, but looking at the result on the right, we found it impractical to predict from the principal components alone, since those 11 explain only 70% of the variance. In terms of feature selection, every variable contributes at least something to at least one of the PCs, so we could not drop any of them lightly. Thus we quickly moved on to other solutions.
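The scree check itself is a one-liner on top of prcomp: cumulate the component variances and see how many PCs you need to reach a target fraction. Sketch on random toy data (real numbers will differ):

```r
# Scree check with prcomp: cumulative proportion of variance explained.
set.seed(7)
X <- matrix(rnorm(200 * 10), ncol = 10)   # toy 10-feature matrix
p <- prcomp(X, scale. = TRUE)
cum_var <- cumsum(p$sdev^2) / sum(p$sdev^2)
which(cum_var >= 0.70)[1]                 # PCs needed to explain 70% of variance
```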

[Figures: PCA scree plot (left) and component loadings (right)]

There is another popular method for examining feature importance: tree algorithms. We fitted a random forest on the complete cases and plotted the importance graph (the little yellow thunderbolt marks variables with missing values). DER_mass_MMC clearly stands out in terms of importance; what we consider even more valuable, however, is that many of the missing-in-bulk variables are not trivial at all. Our data cleaning strategy thus came down to:

  • Remove the extreme outliers
  • For small amounts of missingness, use random imputation
  • Keep as many variables as possible, unless it really doesn't make sense to use them
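The three steps above can be sketched as one per-column cleaning function. Everything here is illustrative: the quantile cutoffs are our choice, outliers are set to NA and then imputed rather than dropped as rows, and with tiny toy data the strict quantile cutoffs also clip the sample min/max.

```r
# Per-column cleaning sketch: sentinel -> NA, extreme values -> NA,
# then random imputation from the remaining observed values.
clean_column <- function(x, lower = 0.001, upper = 0.999) {
  x[x == -999] <- NA                                  # step 0: sentinel -> NA
  q <- quantile(x, c(lower, upper), na.rm = TRUE)
  x[!is.na(x) & (x < q[1] | x > q[2])] <- NA          # step 1: extreme outliers
  miss <- is.na(x)
  x[miss] <- sample(x[!miss], sum(miss), replace = TRUE)  # step 2: impute
  x
}

set.seed(3)
clean_column(c(-999, 1, 2, 3, 1e6, 2.5))
```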

[Figure: random forest variable importance]


Modeling - First & Second Try 

We split the training data into three subsets by the value of PRI_jet_num. When it is 0 or 1, we deleted the variables that are completely missing; when it is 2 or 3, we imputed the small portion of NAs. For each subset, we tuned three models: random forest, XGBoost, and AdaBoost (we used only tree-based models, given the multicollinearity mentioned above and the lack of computing time for complex algorithms such as SVMs and neural networks), and ensembled them using caretStack.

At test time, we likewise subset the data and feed each piece into its corresponding model.
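The routing of test rows to models can be sketched in a few lines of base R; the grouping (0/1 vs. 2/3) simplifies our three-way split into two for brevity, and the model names are stand-ins.

```r
# Route each test row to the model trained on its PRI_jet_num group.
# Hypothetical model names; only the routing logic is shown.
route <- function(jet_num) ifelse(jet_num <= 1, "model_01", "model_23")

test_jets <- c(0, 1, 2, 3, 0)
split(seq_along(test_jets), route(test_jets))   # row indices per model
```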

[Figure: modeling pipeline, subset by PRI_jet_num]

We actually tried a simple ensemble of only AdaBoost and XGBoost before stacking everything together.

  • AdaBoost (iter = 100, nu = .03, maxdepth = 10; accuracy after thresholding 80%, best threshold at p = 0.75) on the full raw dataset => AMS 3.45
  • XGBoost (usual parameters; accuracy after thresholding 80%) => AMS 3.55
  • Ensemble: (AdaBoost + XGBoost)/2 (predictions on unseen data averaged, then rounded; threshold at p = 0.8) => AMS 3.57
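The averaging step in the last bullet is just elementwise arithmetic on the two probability vectors, followed by the p = 0.8 cutoff (the probabilities below are made up):

```r
# Two-model ensemble: average the predicted signal probabilities,
# then threshold at p = 0.8 to label signal ("s") vs. background ("b").
p_ada <- c(0.92, 0.40, 0.85, 0.10)   # toy AdaBoost probabilities
p_xgb <- c(0.88, 0.55, 0.70, 0.20)   # toy XGBoost probabilities
p_avg <- (p_ada + p_xgb) / 2
label <- ifelse(p_avg > 0.8, "s", "b")
label
```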

By simply averaging the two models' predictions, we achieved an AMS (approximate median significance, the official evaluation metric of this competition) score of 3.57. However, after we stacked the models with a GBM (gradient boosting machine) meta-model, the score dropped to 3.26.
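For reference, the AMS metric itself is a closed-form function of s (the sum of weights of true positives) and b (the sum of weights of false positives), with a regularization term b_r = 10 in the official evaluation:

```r
# Approximate median significance (AMS), the competition's metric:
# AMS = sqrt( 2 * ( (s + b + b_r) * ln(1 + s / (b + b_r)) - s ) ), b_r = 10.
ams <- function(s, b, b_r = 10) {
  sqrt(2 * ((s + b + b_r) * log(1 + s / (b + b_r)) - s))
}

ams(s = 10, b = 100)   # toy weighted counts
```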



Modeling - Third Try

We noticed that something might be wrong with our stacking method, so we switched to another approach: manually stacking meta-features.
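In outline, manual stacking means turning the base models' predictions into a new feature matrix and fitting a meta-model on it. The sketch below uses simulated probabilities and a logistic regression as the meta-model (a stand-in for the GBM we tried); every name and number here is illustrative.

```r
# Manual stacking sketch: base-model predictions become meta-features,
# a simple meta-model is fit on top. Toy data throughout.
set.seed(9)
y <- rbinom(200, 1, 0.5)                       # toy signal/background labels
meta <- data.frame(
  p_rf  = plogis(y + rnorm(200)),              # pretend out-of-fold RF probs
  p_xgb = plogis(y + rnorm(200)),              # pretend out-of-fold XGB probs
  y     = y
)
meta_model <- glm(y ~ p_rf + p_xgb, data = meta, family = binomial)
preds <- predict(meta_model, type = "response")
head(round(preds, 3))
```

In a real pipeline, the meta-features must be out-of-fold predictions (each base model predicting on data it was not trained on), otherwise the meta-model overfits to leaked labels.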

[Figure: manually stacking meta-features]

About Authors

Danli


Danli Zeng is a young professional with 5 years' experience in marketing and media. She specialized in integrated media planning and ROMI analysis for the FMCG industry. Having worked on the agency, media, and client sides, she...
Diego De Lazzari


Researcher, developer and data scientist. Diego De Lazzari is an applied physicist with a rather diverse background. He spent 8 years in applied research, developing computational models in the field of Plasma Physics (Nuclear Fusion) and Geophysics. As...
