1. Benchmarking energy consumption and latency for neuromorphic computing in condensed matter and particle physics
- Author
Kösters, D.J., Kortman, B.A., Boybat, I., Ferro, E., Dolas, S., de Austri, R.R., Kwisthout, J.H.P., Hilgenkamp, J.W.M., Rasing, T.H.M., Riel, H., Sebastian, A., Caron, S., and Mentink, J.H.
- Abstract
The massive use of artificial neural networks (ANNs), increasingly popular in many areas of scientific computing, rapidly increases the energy consumption of modern high-performance computing systems. An appealing and possibly more sustainable alternative is provided by novel neuromorphic paradigms, which directly implement ANNs in hardware. However, little is known about the actual benefits of running ANNs on neuromorphic hardware for use cases in scientific computing. Here, we present a methodology for measuring the energy cost and compute time for inference tasks with ANNs on conventional hardware. In addition, we have designed an architecture for these tasks and estimate the same metrics based on a state-of-the-art analog in-memory computing (AIMC) platform, one of the key paradigms in neuromorphic computing. Both methodologies are compared for a use case in quantum many-body physics in two-dimensional condensed matter systems and for anomaly detection at 40 MHz rates at the Large Hadron Collider in particle physics. We find that AIMC can achieve up to one order of magnitude shorter computation times than conventional hardware at an energy cost that is up to three orders of magnitude smaller. This suggests great potential for faster and more sustainable scientific computing with neuromorphic hardware.
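The abstract mentions measuring compute time for ANN inference on conventional hardware. As a minimal illustrative sketch (not the paper's actual benchmark), the latency side of such a measurement can be approximated by timing repeated forward passes of a small network; the layer sizes, weights, and run count below are arbitrary assumptions chosen for demonstration only.

```python
import time
import numpy as np

# Hypothetical tiny two-layer ANN; sizes and random weights are
# illustrative assumptions, not the architectures from the paper.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((64, 128))
W2 = rng.standard_normal((128, 10))
x = rng.standard_normal((1, 64))

def infer(x):
    h = np.maximum(x @ W1, 0.0)  # ReLU hidden layer
    return h @ W2                # linear output layer

# Warm up once, then average wall-clock latency over repeated runs.
infer(x)
n_runs = 1000
t0 = time.perf_counter()
for _ in range(n_runs):
    out = infer(x)
latency_us = (time.perf_counter() - t0) / n_runs * 1e6
print(f"mean inference latency: {latency_us:.1f} us")
```

Energy measurement, by contrast, requires hardware counters or external power meters and cannot be captured by a timing loop like this; the AIMC estimates in the paper rely on device-level modeling of the analog platform.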
- Published
- 2023