
Machine Learning Pipelines with Modern Big Data Tools for High Energy Physics

Authors :
Migliorini, Matteo
Castellotti, Riccardo
Canali, Luca
Zanetti, Marco
Source :
Comput Softw Big Sci 4, 8 (2020)
Publication Year :
2019

Abstract

The effective utilization at scale of complex machine learning (ML) techniques for HEP use cases poses several technological challenges, most importantly on the actual implementation of dedicated end-to-end data pipelines. A solution to these challenges is presented, which allows training neural network classifiers using solutions from the Big Data and data science ecosystems, integrated with tools, software, and platforms common in the HEP environment. In particular, Apache Spark is exploited for data preparation and feature engineering, running the corresponding (Python) code interactively on Jupyter notebooks. Key integrations and libraries that make Spark capable of ingesting data stored in ROOT format and accessed via the XRootD protocol are described and discussed. Training of the neural network models, defined using the Keras API, is performed in a distributed fashion on Spark clusters using BigDL with Analytics Zoo, and also using TensorFlow, notably for distributed training on CPU and GPU resources. The implementation and the results of the distributed training are described in detail in this work.

Comment: This is a pre-print of an article published in Computing and Software for Big Science. The final authenticated version is available online at https://rdcu.be/b4Wk9
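
The abstract describes an end-to-end shape: Spark-based data preparation from ROOT files read over XRootD, a neural network classifier defined with the Keras API, and distributed training. The following is a minimal sketch of that shape, not the authors' implementation: the data source name "root" (as provided by a connector such as spark-root), the XRootD URL, the column names, the network layout, and the use of tf.distribute.MirroredStrategy in place of the BigDL/Analytics Zoo training path are all assumptions made for illustration.

    # A minimal sketch of the pipeline shape described in the abstract.
    # Assumptions (not from the paper): the "root" data source name, the XRootD
    # URL, column names, the network layout, and MirroredStrategy as a stand-in
    # for the multi-node BigDL / Analytics Zoo and TensorFlow setups discussed.
    import numpy as np
    import tensorflow as tf
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("hep-ml-pipeline").getOrCreate()

    # 1) Data preparation: read ROOT files over XRootD with a Spark data source
    #    such as spark-root (connector JARs must be on the Spark classpath).
    events = (spark.read.format("root")
              .load("root://eos.example.org//eos/path/events.root"))

    # Simple feature engineering: keep high-level features and the label.
    dataset = events.selectExpr("hlf_features as features", "label").dropna()

    # Collect a small sample to the driver for this illustration; at scale the
    # data would stay distributed (e.g. written to Parquet/TFRecord files).
    pdf = dataset.limit(100000).toPandas()
    x = np.stack(pdf["features"].to_numpy())
    y = pdf["label"].to_numpy()

    # 2) Model definition with the Keras API.
    def build_model(n_inputs):
        model = tf.keras.Sequential([
            tf.keras.layers.Dense(50, activation="relu", input_shape=(n_inputs,)),
            tf.keras.layers.Dense(20, activation="relu"),
            tf.keras.layers.Dense(3, activation="softmax"),  # 3-class classifier
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        return model

    # 3) Distributed training: a single-node multi-GPU strategy shown here as a
    #    stand-in for the distributed CPU/GPU training described in the paper.
    strategy = tf.distribute.MirroredStrategy()
    with strategy.scope():
        model = build_model(x.shape[1])

    model.fit(x, y, batch_size=1024, epochs=5, validation_split=0.1)
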

Details

Database :
arXiv
Journal :
Comput Softw Big Sci 4, 8 (2020)
Publication Type :
Report
Accession number :
edsarx.1909.10389
Document Type :
Working Paper
Full Text :
https://doi.org/10.1007/s41781-020-00040-0