
Interpreting Transformers for Jet Tagging

Authors:
Wang, Aaron
Gandrakota, Abijith
Ngadiuba, Jennifer
Sahu, Vivekanand
Bhatnagar, Priyansh
Khoda, Elham E
Duarte, Javier
Publication Year:
2024

Abstract

Machine learning (ML) algorithms, particularly attention-based transformer models, have become indispensable for analyzing the vast data generated by particle physics experiments like ATLAS and CMS at the CERN LHC. Particle Transformer (ParT), a state-of-the-art model, leverages particle-level attention to improve jet-tagging tasks, which are critical for identifying particles resulting from proton collisions. This study focuses on interpreting ParT by analyzing attention heat maps and particle-pair correlations on the $\eta$-$\phi$ plane, revealing a binary attention pattern in which each particle attends to at most one other particle. At the same time, we observe that ParT varies its focus on important particles and subjets depending on the decay channel, indicating that the model learns traditional jet substructure observables. These insights enhance our understanding of the model's internal workings and learning process, offering potential avenues for improving the efficiency of transformer architectures in future high-energy physics applications.

Comment: Accepted at the Machine Learning and the Physical Sciences Workshop, NeurIPS 2024
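To make the attention analysis concrete, below is a minimal sketch of how per-head attention maps can be extracted from a standard PyTorch multi-head attention layer and inspected for the concentrated, one-to-one pattern described in the abstract. The layer, tensor shapes, and the `mha` setup are illustrative assumptions, not ParT's actual architecture or API.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for one ParT particle-attention block;
# ParT's real implementation differs in architecture and inputs.
num_particles, embed_dim, num_heads = 32, 64, 8
mha = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)

x = torch.randn(1, num_particles, embed_dim)  # toy particle embeddings

# average_attn_weights=False keeps the per-head attention maps
# instead of returning their average over heads.
_, attn = mha(x, x, x, need_weights=True, average_attn_weights=False)
# attn shape: (batch, heads, query_particle, key_particle)

# "Binary" attention check: for each query particle, how much of its
# attention mass lands on its single most-attended key particle?
top_weight = attn.max(dim=-1).values
print("mean top-1 attention mass:", top_weight.mean().item())
# Values near 1.0 would match the one-to-one pattern reported for ParT;
# a randomly initialized layer like this one stays far more diffuse.
```

Plotted as heat maps over (query, key) particle pairs, or projected onto the $\eta$-$\phi$ coordinates of the attended pairs, such maps are the raw material for the interpretability study summarized above.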

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2412.03673
Document Type:
Working Paper