Predicting Higgs Boson Signal
The Higgs boson was the last missing particle in the quest to verify the Standard Model of particle physics. After decades of searching, CERN officially announced the confirmation of the Higgs boson on March 14, 2013. This Kaggle competition is about building a machine learning algorithm to separate signal from background noise using selected, simulated ATLAS experiment data.
The code can be found here.
The Missingness Pattern
The first thing that caught our eye while reading the data was the enormous number of -999s in both the training and test datasets. So we did a quick check using the aggr function in the VIM package.
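Our actual check used R's VIM package, but the idea is simple enough to sketch in plain Python (column names and sample values below are illustrative):

```python
from collections import Counter

def missing_ratio_per_column(rows, sentinel=-999.0):
    """Return the fraction of sentinel-coded missing values per column.

    `rows` is a list of dicts (e.g. from csv.DictReader after numeric
    conversion); -999 is the sentinel the ATLAS data uses for missing values.
    """
    counts = Counter()
    total = len(rows)
    for row in rows:
        for col, val in row.items():
            if val == sentinel:
                counts[col] += 1
    return {col: counts[col] / total for col in rows[0]}

# Tiny made-up sample (column names follow the competition's naming):
sample = [
    {"DER_mass_MMC": -999.0, "PRI_jet_num": 0.0},
    {"DER_mass_MMC": 125.0,  "PRI_jet_num": 2.0},
    {"DER_mass_MMC": 118.4,  "PRI_jet_num": 1.0},
    {"DER_mass_MMC": -999.0, "PRI_jet_num": 3.0},
]
ratios = missing_ratio_per_column(sample)
```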
The graph below shows the missingness pattern of the training dataset. On the left is the ratio of missingness for each column, ranging from .15 to more than .70 (the x-axis ticks may be a bit hard to read, but don't worry, we'll come back to them later). On the right, each row represents a possible combination of missing variables. From this graph alone we can tell that several variables behave oddly: there is a group of 7 variables that are always missing together, sometimes joined by a group of 3 other variables. The only variable that is potentially MCAR (missing completely at random) is DER_mass_MMC (the narrow red bar at the far left of the combination graph).
Since the ticks in the previous graph were intentionally minimized to fit the space and are therefore hard to read, we double-checked the missingness and discovered an interesting fact: the missingness pattern is strongly correlated with the value of the variable PRI_jet_num! When it equals 0 or 1, a whole group of variables is missing entirely, while DER_mass_MMC, the variable we suspected to be MCAR, shows a small amount of missingness in every case.
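That cross-check boils down to: group the rows by PRI_jet_num and list the columns that are entirely -999 within each group. A minimal sketch with made-up sample rows:

```python
from collections import defaultdict

def missing_by_jet_num(rows, sentinel=-999.0):
    """For each value of PRI_jet_num, list the columns that are
    always sentinel-coded missing within that group of rows."""
    groups = defaultdict(list)               # jet_num -> rows
    for row in rows:
        groups[row["PRI_jet_num"]].append(row)
    always_missing = {}
    for jet, grp in groups.items():
        cols = [c for c in grp[0]
                if c != "PRI_jet_num"
                and all(r[c] == sentinel for r in grp)]
        always_missing[jet] = sorted(cols)
    return always_missing

# Illustrative rows only; the real data has ~30 columns.
sample = [
    {"PRI_jet_num": 0, "DER_mass_jet_jet": -999.0, "DER_mass_MMC": 125.0},
    {"PRI_jet_num": 0, "DER_mass_jet_jet": -999.0, "DER_mass_MMC": -999.0},
    {"PRI_jet_num": 2, "DER_mass_jet_jet": 304.2,  "DER_mass_MMC": 118.0},
]
result = missing_by_jet_num(sample)
```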
Next we looked at some EDA, namely data distributions and correlations, to further explore the structure of our data. Shown below are box plots for all the variables. We found that the means of some of them differ substantially between background and signal. Many of the features also have outliers; however, we ultimately decided to remove only the extreme ones (those inside the red circles).
As for correlation, we removed all the NAs (i.e., -999s) before drawing the graph; otherwise many of the features would appear highly correlated. The final graph shows some multicollinearity, which we think is acceptable since many of the features are derived from others (those with names starting with "DER"), but we kept this graph in case we wanted to use any linear models later.
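The complete-case trick is simply: drop any row where either column is -999, then correlate what remains. A sketch, with Pearson correlation hand-rolled to stay dependency-free:

```python
import math

def pearson(xs, ys):
    """Plain Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def corr_complete_cases(a, b, sentinel=-999.0):
    """Correlate two columns using only rows where neither is missing."""
    pairs = [(x, y) for x, y in zip(a, b)
             if x != sentinel and y != sentinel]
    xs, ys = zip(*pairs)
    return pearson(xs, ys)

# Toy columns: b is exactly 2*a on the complete cases,
# so the correlation should come out as 1.
a = [1.0, 2.0, -999.0, 4.0, 5.0]
b = [2.0, 4.0, 6.0, 8.0, -999.0]
r = corr_complete_cases(a, b)
```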
We also ran a PCA (Principal Component Analysis) to help reduce dimensionality and aid feature selection.
Since PCA cannot take NAs as input, we chose random imputation before the analysis, simply because the box plots showed that many of the variables are highly skewed, so mean imputation might not be a good choice.
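Random imputation here means drawing replacements from the observed values of the same column, which preserves a skewed distribution in a way mean imputation cannot. A minimal sketch:

```python
import random

def random_impute(column, sentinel=-999.0, seed=0):
    """Replace sentinel-coded missing values with random draws from the
    observed values of the same column (hot-deck-style random imputation).
    The fixed seed is just for reproducibility of this sketch."""
    rng = random.Random(seed)
    observed = [v for v in column if v != sentinel]
    return [rng.choice(observed) if v == sentinel else v for v in column]

col = [1.2, -999.0, 3.4, 5.6, -999.0]
imputed = random_impute(col)
```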
The scree plot suggests 11 PCs, but looking at the result on the right, we found it impractical to predict from the eigenvectors, since those 11 components explain only 70% of the variance. In terms of feature selection, every variable contributes at least something to at least one of the PCs, so we could not drop any of them lightly. We therefore moved on quickly to other solutions.
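Reading a scree plot this way amounts to finding the smallest number of components whose cumulative share of total variance reaches a target. A sketch with hypothetical eigenvalues (a long, flat tail like ours means many PCs are needed even for a modest target):

```python
def n_components_for(variances, target=0.70):
    """Smallest number of leading principal components whose cumulative
    share of total variance reaches `target`. `variances` are the
    eigenvalues, in descending order, as read off a scree plot."""
    total = sum(variances)
    cum = 0.0
    for k, v in enumerate(variances, start=1):
        cum += v
        if cum / total >= target:
            return k
    return len(variances)

# Hypothetical eigenvalues, not the competition's actual ones:
eigs = [5.0, 3.0, 2.0, 1.5, 1.5, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
k = n_components_for(eigs, target=0.70)
```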
There is another popular way to examine feature importance: tree algorithms. We fitted a random forest on the complete cases and plotted the importance graph (the little yellow thunderbolt indicates a variable with missing values). DER_mass_MMC really stands out in terms of importance; even more valuable, however, is that many of the missing-in-bulk variables are not trivial at all. Our data cleaning strategy therefore came down to:
- Kick out the extreme outliers
- For small amounts of missingness, do random imputation
- Keep as many variables as we can, unless it really doesn't make sense to use them
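The first step of that strategy can be sketched as an IQR-based filter. We never spelled out the exact cutoff for "extreme" in the post, so the k = 10 rule below is a hypothetical stand-in:

```python
def drop_extreme_outliers(rows, col, k=10.0, sentinel=-999.0):
    """Drop rows whose value in `col` lies more than k IQRs outside the
    quartiles. k = 10 is a hypothetical choice for 'extreme'; missing
    values are passed through untouched."""
    vals = sorted(v for v in (r[col] for r in rows) if v != sentinel)
    q1 = vals[len(vals) // 4]
    q3 = vals[(3 * len(vals)) // 4]
    lo, hi = q1 - k * (q3 - q1), q3 + k * (q3 - q1)
    return [r for r in rows if r[col] == sentinel or lo <= r[col] <= hi]

# 20 well-behaved values plus one absurd outlier:
rows = [{"x": float(i)} for i in range(1, 21)] + [{"x": 1e6}]
cleaned = drop_extreme_outliers(rows, "x")
```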
Modeling - First & Second Try
We subsetted the training data into 3 pieces by the value of PRI_jet_num. When it is 0 or 1, we deleted the variables that are completely missing; when it is 2 or 3, we imputed the small portion of NAs. For each subset of the data, we tuned 3 models, namely random forest, XGBoost and AdaBoost (we used only tree-based models, considering the multicollinearity mentioned above and the lack of computing time for complex algorithms like SVM and neural networks), and ensembled them using caretStack.
At test time, we likewise subset the data and plug each piece into its corresponding model.
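The routing logic for both training and test data can be sketched as follows (assuming, as the three-way split implies, that jet_num values 2 and 3 share one model):

```python
def split_by_jet_num(rows):
    """Route rows into the three subsets described above:
    PRI_jet_num == 0, == 1, and in {2, 3} (the latter two share a model,
    an assumption based on the three-way split in the post)."""
    groups = {0: [], 1: [], 2: []}
    for row in rows:
        key = min(int(row["PRI_jet_num"]), 2)   # fold 3 into the 2-group
        groups[key].append(row)
    return groups

rows = [{"PRI_jet_num": j} for j in [0, 1, 2, 3, 0, 2]]
groups = split_by_jet_num(rows)
```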
We actually did a simple ensemble with only AdaBoost and XGBoost before stacking everything together.
- ADABOOST (iter = 100, nu = .03, maxdepth = 10; accuracy after thresholding 80%, best threshold at p = 0.75) on the full raw dataset => AMS 3.45
- XGBOOST (usual parameters, accuracy after thresholding 80%) => AMS 3.55
- ENSEMBLE: (ADABOOST + XGBOOST)/2 (predictions on unseen data averaged and then rounded, threshold at p = 0.8) => AMS 3.57
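The simple ensemble in the last bullet is just a probability average with a tuned cutoff:

```python
def average_ensemble(p_ada, p_xgb, threshold=0.8):
    """Average two models' predicted signal probabilities and apply the
    tuned cutoff: label 's' (signal) above the threshold, else 'b'."""
    return ["s" if (a + x) / 2 > threshold else "b"
            for a, x in zip(p_ada, p_xgb)]

# Toy probabilities standing in for the two models' outputs:
labels = average_ensemble([0.9, 0.7, 0.95], [0.85, 0.6, 0.9])
```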
By simply averaging the two models' predictions, we achieved an AMS (approximate median significance, the official evaluation metric of this competition) score of 3.57. However, after we stacked the models using a GBM (Gradient Boosting Machine) meta-model, the score dropped to 3.26.
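For reference, the metric from the competition's evaluation page is AMS = sqrt(2 * ((s + b + b_reg) * ln(1 + s / (b + b_reg)) - s)) with b_reg = 10, where s and b are the weighted counts of true and false positives among the events labeled signal:

```python
import math

def ams(s, b, b_reg=10.0):
    """Approximate median significance as defined by the competition.
    s: weighted sum of true positives; b: weighted sum of false
    positives; b_reg = 10 is the regularization term."""
    return math.sqrt(2.0 * ((s + b + b_reg)
                            * math.log(1.0 + s / (b + b_reg)) - s))

# Illustrative magnitudes only, not our actual submission counts:
score = ams(s=300.0, b=7000.0)
```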
Modeling - Third Try
We suspected something was wrong with our stacking method, so we switched to another approach: manually stacking meta-features.