1. Etalumis: Bringing Probabilistic Programming to Scientific Simulators at Scale
- Authors
Philip H. S. Torr, Lukas Heinrich, Mingfei Ma, Xiaohui Zhao, Lawrence Meadows, Saeid Naderiparizi, Kyle Cranmer, Lei Shao, Frank Wood, Jialin Liu, Gilles Louppe, Bradley Gram-Hansen, Wahid Bhimji, Atılım Güneş Baydin, Victor W. Lee, Prabhat, and Andreas Munk
- Subjects
FOS: Computer and information sciences; Machine Learning (cs.LG); Statistics - Machine Learning (stat.ML); Performance (cs.PF); ACM classes I.2.6, G.3, J.2; MSC 68T37, 68T05, 62P35; probabilistic programming; Bayesian inference; Markov chain Monte Carlo; deep learning; artificial intelligence; scalability
- Abstract
Probabilistic programming languages (PPLs) are receiving widespread attention for performing Bayesian inference in complex generative models. However, applications to science remain limited because of the impracticality of rewriting complex scientific simulators in a PPL, the computational cost of inference, and the lack of scalable implementations. To address these limitations, we present a novel PPL framework that couples directly to existing scientific simulators through a cross-platform probabilistic execution protocol and provides Markov chain Monte Carlo (MCMC) and deep-learning-based inference compilation (IC) engines for tractable inference. To guide IC inference, we perform distributed training of a dynamic 3DCNN--LSTM architecture with a PyTorch-MPI-based framework on 1,024 32-core CPU nodes of the Cori supercomputer with a global minibatch size of 128k, achieving a performance of 450 Tflop/s through enhancements to PyTorch. We demonstrate a Large Hadron Collider (LHC) use case with the C++ Sherpa simulator and achieve the largest-scale posterior inference in a Turing-complete PPL.
- Comment
14 pages, 8 figures
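The abstract's distributed-training claim (PyTorch-MPI data parallelism across many CPU ranks, with a large global minibatch) can be illustrated with the `torch.distributed` MPI backend. The sketch below is illustrative only and is not the authors' code: the toy model stands in for their dynamic 3DCNN--LSTM inference network, and the dataset, batch size, and hyperparameters are placeholder assumptions.

```python
# Minimal sketch, NOT the authors' implementation: data-parallel training with
# the torch.distributed MPI backend, mirroring the PyTorch-MPI style of setup
# the abstract describes. Model, data, and sizes are placeholders.
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler


def main():
    # Each MPI rank runs this script; mpirun supplies rank and world size.
    dist.init_process_group(backend="mpi")
    rank, world_size = dist.get_rank(), dist.get_world_size()

    # Toy stand-in for the dynamic 3DCNN--LSTM inference network.
    model = DDP(nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 10)))

    # Synthetic data; per-rank batch size * world_size gives the global
    # minibatch (e.g. 128 per rank * 1,024 ranks = 128k in the paper's scale).
    data = TensorDataset(torch.randn(4096, 64), torch.randint(0, 10, (4096,)))
    sampler = DistributedSampler(data, num_replicas=world_size, rank=rank)
    loader = DataLoader(data, batch_size=128, sampler=sampler)

    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for epoch in range(2):
        sampler.set_epoch(epoch)  # reshuffle shards consistently across ranks
        for x, y in loader:
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()       # DDP all-reduces gradients across ranks here
            opt.step()
        if rank == 0:
            print(f"epoch {epoch}: last-batch loss {loss.item():.4f}")


if __name__ == "__main__":
    main()
```

Launched as `mpirun -n <ranks> python train.py` (assuming a PyTorch build with MPI support), each rank processes its own data shard, so the effective global minibatch is the per-rank batch size times the number of ranks.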
- Published
2019