
Higgs Boson Signal Detection

Spencer James Stebbins, Tyler Knutson, Amy Tzu-Yu Chen, Gregory Domingo and Chia-An (Anne) Chen
Posted on Sep 6, 2016

Code: GitHub


Preface

The discovery of the long-awaited Higgs boson was announced on July 4, 2012 and confirmed six months later; in 2013 it was recognized with a number of prestigious awards, including the Nobel Prize in Physics. For physicists, however, the discovery of a new particle marks the beginning of a long and difficult quest to measure its characteristics and determine whether it fits the current model of nature. To explore how advanced machine learning methods might improve the discovery significance of the experiment, CERN, the European Organization for Nuclear Research, partnered with the data science competition site Kaggle to launch the Higgs Boson Machine Learning Challenge. Our team took on this challenge and, with limited background in particle physics, devised a systematic approach: first understand the dataset, then test individual machine learning methods, and expand on our findings from there.


Understanding the Dataset

Missing Data

Only 68,114 of the 250,000 observations in the training data are complete cases, or just 27.2% of the dataset. With that much missingness, imputation is a red flag. Since we do not have a strong background in particle physics, we had to rely on methods that handle missing data well, such as XGBoost.
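
For concreteness, a minimal R sketch of this check. It assumes the Kaggle training file is saved as "training.csv" and uses the fact that the challenge data encodes missing values as -999.0:

train <- read.csv("training.csv")
num_cols <- sapply(train, is.numeric)
# Recode the -999.0 sentinel values in the numeric columns as NA
train[num_cols][train[num_cols] == -999.0] <- NA
mean(complete.cases(train))    # fraction of complete rows, roughly 0.272
colMeans(is.na(train))         # share of missing values per variable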

Correlation Plot

The correlation plot shows strong relationships among some of the variables. We can see high correlations between some of the DER (derived) variables and some of the PRI (primary) variables.

[Figure: correlation plot of the training variables]
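
A plot like this can be produced in a few lines of R; the corrplot package here is our illustrative choice, not necessarily the tool behind the figure:

library(corrplot)
corr <- cor(train[num_cols], use = "pairwise.complete.obs")
corrplot(corr, method = "color", order = "hclust", tl.cex = 0.6)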

Table Plot

The table plot was designed to handle large datasets and is a very powerful visual tool for understanding distinguishing factors within each variable and across variables.

The table plot divides the observations into 100 equal groups, which are plotted as rows. The mean for each group of observations is displayed as a line, and the accompanying standard deviation is displayed as a dark blue bar surrounding the mean.

In the table plot below, the results are sorted by the outcome ("s" or "b"). One can immediately see that in some of the variables the means and/or standard deviations are quite different. The table plot directs us to the variables that appear to be the most discriminating in distinguishing "signal" from "background" observations.
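
A table plot of this kind can be drawn with R's tabplot package; a minimal sketch, assuming the outcome column is named "Label" (argument conventions per the tabplot documentation, so adjust for your version):

library(tabplot)
# 100 row-bins, sorted by the outcome column ("s" or "b")
tableplot(train, sortCol = Label, nBins = 100)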

Standardized Variable Distribution

Based on the table plot, and supported by calculations of mean and standard deviation differentials, we selected a reduced set of 10 variables that we deemed the most important predictors and ran our model on them. It is worth noting that 9 of the 10 variables chosen were DER variables.

Principal Component Analysis

As part of exploratory data analysis, we employed Principal Component Analysis (PCA), an unsupervised method, to determine whether it is practical to reduce dimensions. The first principal component explained only 23% of the variance. Based on the scree plot, we reduced the data to 10 principal components, but even then only 75% of the variance was explained. PCA showed us that it would be difficult to eliminate variables without sacrificing prediction accuracy.
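
A sketch of this analysis with base R's prcomp, dropping the identifier and weight columns and keeping complete cases (column names follow the Kaggle data):

feats <- setdiff(names(train)[num_cols], c("EventId", "Weight"))
pca <- prcomp(na.omit(train[feats]), center = TRUE, scale. = TRUE)
summary(pca)                                # proportion of variance explained
screeplot(pca, npcs = 20, type = "lines")   # scree plot for choosing components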

[Figure: principal component analysis]

A Note On AMS, Weights, and Thresholds

Approximate Median Significance (AMS) is the metric by which Kaggle evaluates entries for the Higgs Boson Machine Learning Challenge. In a nutshell, through the event weights the AMS awards positive points for true positives (predicting "s" when the outcome really is "s") and negative points for false positives (predicting "s" when the outcome really is "b"), except that the penalty for a false positive is roughly twice the magnitude of the reward for a true positive. This is why the optimal results for our models use a threshold of around 15%, the threshold being the percentage of "s" predictions out of the 550,000 total predictions submitted to Kaggle for evaluation. Compare 15% to the 34% of "s" labels in the training data and you can see the effect of the double penalty for false positives.
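
For reference, the official evaluation formula from the challenge documentation, wrapped as an R function. Here s and b are the sums of the weights of true positives and false positives, with a constant regularization term b_r = 10:

# AMS = sqrt( 2 * ( (s + b + b_r) * log(1 + s / (b + b_r)) - s ) )
ams <- function(s, b, b_r = 10) {
  sqrt(2 * ((s + b + b_r) * log(1 + s / (b + b_r)) - s))
}

# Weighted AMS for a vector of "s"/"b" predictions against the truth
ams_score <- function(pred, truth, w) {
  s <- sum(w[pred == "s" & truth == "s"])   # weighted true positives
  b <- sum(w[pred == "s" & truth == "b"])   # weighted false positives
  ams(s, b)
}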

Testing Individual Models

Random Forest

The first individual model we tested was a random forest, as this model is typically quite effective on large datasets, can work with many features, and quickly gives a sense of which variables are most important. We trained this model under the following conditions (sketched in code after the list):

  • 250,000 training samples
  • AMS rather than accuracy as the evaluation metric
  • 2-fold cross-validation, repeated once
  • 3 different "mtry" values (2, 16, 30)
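
A sketch of this setup using caret. The feature frame train_feats and the summary function amsSummary are our illustrative names; a real run would need amsSummary to compute the weighted AMS from held-out predictions:

library(caret)
library(randomForest)
ctrl <- trainControl(method = "repeatedcv", number = 2, repeats = 1,
                     classProbs = TRUE, summaryFunction = amsSummary)
rf_fit <- train(Label ~ ., data = train_feats, method = "rf",
                metric = "AMS", maximize = TRUE,
                tuneGrid = expand.grid(mtry = c(2, 16, 30)),
                trControl = ctrl)
varImpPlot(rf_fit$finalModel)    # the variable importance plot below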

As expected, the variable importance plot below clearly demonstrates the relevance of the mass-related variables, particularly the "DER_mass_MMC" feature. This is consistent with the findings in the previous section.

[Figure: variable importance plot]

Unfortunately, the best AMS score produced on the test dataset was 2.10, which is well outside the top 50% of the Kaggle leaderboard. Given the large gap between the random forest's performance and the top models in the competition, we decided to move on to other individual models rather than continuing to tune the random forest.

Neural Network

After the random forest, we attempted a more complex model: a neural network. One of the biggest problems with neural networks, and with most black-box prediction methods, is that tuning is difficult because accuracy does not necessarily improve linearly with the tuning parameters. When constructing the neural network for this data, the choice of network topology and complexity was therefore a pressing concern, and we sought a systematic approach to finding the network topology and the probability threshold for choosing signal over background that together yield the highest possible AMS score.

To achieve this, we built a function containing a simple for-loop that creates a neural network model with one hidden layer, increasing the number of hidden nodes in that layer with each iteration. Nested within each node-determining iteration is another loop that finds the threshold value, searched at intervals of 0.025, that produces the highest AMS score given the topology set in the parent loop. Through this methodology we were able to ascertain the number of hidden nodes on a single hidden layer, together with the threshold, that yielded the highest AMS score; a sketch of the search follows.
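
A condensed sketch of the two nested loops with the nnet package. Here train_sub, val_x, val_y, and val_w are our names for the training split, validation features, labels, and weights, and ams_score is the function defined above:

library(nnet)
results <- data.frame()
for (h in 1:20) {                              # grow the single hidden layer
  fit <- nnet(Label ~ ., data = train_sub, size = h,
              maxit = 200, MaxNWts = 10000, trace = FALSE)
  p <- predict(fit, val_x, type = "raw")       # predicted P(signal)
  for (t in seq(0.025, 0.975, by = 0.025)) {   # threshold grid in 0.025 steps
    pred <- ifelse(p > t, "s", "b")
    results <- rbind(results, data.frame(
      size = h, threshold = t, AMS = ams_score(pred, val_y, val_w)))
  }
}
results[which.max(results$AMS), ]              # best (size, threshold) pair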

[Figure: neural network console output]

When testing this function on sample sizes of 10,000 and below, the process worked flawlessly: with each increasingly complex network topology, the function would pick a different threshold that maximized the AMS score on the sampled training and validation sets. However, when using the method on the entire training data, every iteration of increasing hidden nodes on the single hidden layer chose the same optimal threshold of 1, classifying every observation in the validation set as background and producing a poor AMS score of 0.599. So why was this happening? Let's investigate some possibilities...

Recall from the section above, where we sought to understand the data, that we discovered:

  1. Some of the features are highly correlated.
  2. Missingness in some features often depends on whether another feature has data or not.
  3. The first principal component explained only 23% of the variance, and even the first 10 principal components together explained only 75%, implying that the data are collinear and hard to reduce in dimension.
  4. The ratio of signal to background is very skewed.

These observations suggest that complex dependencies among the highly correlated features govern the classification of the scarce signal against the background, which would explain why the neural network we engineered performed poorly and always chose to classify the entire dataset as background. Given such interdependence among features, a network with a single hidden layer may never capture the relationship and will therefore always yield a poor AMS score. As a remedy, in the future we may want to optimize not only the threshold and the width of a single hidden layer but also the number of hidden layers in the topology. Perhaps a deeper topology would allow the network to capture the interdependent relationships between features, better model the data, and yield a higher AMS score. The next complex model we chose to train on the data was XGBoost.

XGBoost

Before building the model, we split the training dataset provided on Kaggle into our own sub-training and sub-test sets. Considering the extremely unbalanced ratio of signal to background counts, we utilized the sample.split() function in R to ensure that the sub-training and sub-test sets would have the same signal/background ratio. In other words, we avoided a scenario where the sub-training set might contain 85% background while the sub-test set contained 99% background.
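
A minimal sketch with the caTools package (the 80/20 split ratio here is illustrative):

library(caTools)
set.seed(0)
in_train  <- sample.split(train$Label, SplitRatio = 0.8)
sub_train <- train[in_train, ]
sub_test  <- train[!in_train, ]
# Both subsets keep the same signal/background proportions
prop.table(table(sub_train$Label))
prop.table(table(sub_test$Label))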

The error of a model can be decomposed into bias and variance. The random forest, which has relatively low bias and high variance because its fully grown decision trees are fit in parallel, gave us a mediocre result, so we moved on to a boosting model, which accepts higher bias in exchange for lower variance by growing trees sequentially.

The XGBoost model yielded a fairly good result with default parameters. We then constructed our own cross-validation function to grid-search for the tuning parameters giving the highest accuracy and AMS score.

After running 1,050 models, we found that the parameters associated with the highest training AMS score were eta = 0.1, max_depth = 9, and nrounds = 85.
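
A sketch of the fit with those parameters, continuing from the split above (XGBoost treats NA entries, our recoded -999.0 sentinels, as missing by default):

library(xgboost)
feats  <- setdiff(names(sub_train), c("EventId", "Weight", "Label"))
dtrain <- xgb.DMatrix(as.matrix(sub_train[feats]),
                      label = as.numeric(sub_train$Label == "s"))
params <- list(objective = "binary:logistic", eta = 0.1, max_depth = 9)
bst <- xgb.train(params = params, data = dtrain, nrounds = 85)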

[Figure: XGBoost grid search results]

This submission's ranking on the Kaggle leaderboard wasn't as high as we expected, indicating potential overfitting. By slightly adjusting the parameters to eta = 0.1, max_depth = 10, and nrounds = 75, we improved the AMS score to almost 3.7.

[Figure: Kaggle AMS score]

This submission placed us in the top 100. The rationale behind the improvement is that increasing the maximum depth strengthened each individual tree, while decreasing the number of boosting rounds eased the overfitting.

Takeaways and Next Steps

Takeaways

  • EDA indicates that mass-related variables are the most important
  • EDA shows that the means and standard deviations of many variables differ between signal and background events
  • The tuned XGBoost model yielded the highest AMS score of our tested models

Next Steps

  • Continue to tune the neural network model
  • Attempt an ensemble of the best neural network and XGBoost models
  • Attempt other ensemble combinations

About Authors

Spencer James Stebbins

Spencer Stebbins was the Lead Software Engineer at Dorian LPG; a NYSE listed company and the second largest LPG ship owner in the world. While at Dorian, Spencer was the project manager and lead engineer of DORIS; an...

Tyler Knutson

Tyler Knutson has over ten years of experience in the strategy consulting industry, primarily focused on solving problems in the US healthcare sector. With a Bachelors in finance and international business from the University of Minnesota, Tyler has...

Amy Tzu-Yu Chen

Amy Tzu-Yu Chen is a recent college graduate who earned her BS in Statistics with three minors in German, Japanese, and Urban/Regional Studies from University of California, Los Angeles (UCLA). As a statistician, she is deeply passionate about...

Gregory Domingo

Gregory Domingo built his career in the financial services industry (fixed income research and fixed income portfolio management) in New York and moved back to the Philippines in 1995. He has since been involved in senior management positions in both...

Chia-An (Anne) Chen

Anne Chen has a Masters degree in Bioengineering from the University of Pennsylvania. Prior to working at a biotech startup developing a liver cancer diagnosis device, Anne researched and evaluated open-source Electronic Health Records software for small-scale hospitals...

