1. Optical training of large-scale Transformers and deep neural networks with direct feedback alignment
- Authors
Ziao Wang, Kilian Müller, Matthew Filipovich, Julien Launay, Ruben Ohana, Gustave Pariente, Safa Mokaadi, Charles Brossollet, Fabien Moreau, Alessandro Cappelli, Iacopo Poli, Igor Carron, Laurent Daudet, Florent Krzakala, and Sylvain Gigan
- Subjects
Computer Science - Emerging Technologies, Condensed Matter - Disordered Systems and Neural Networks, Computer Science - Machine Learning, Physics - Applied Physics, Physics - Optics
- Abstract
Modern machine learning relies almost exclusively on dedicated electronic hardware accelerators. Photonic approaches, with their low power consumption and high operation speed, are increasingly considered for inference but, to date, remain mostly limited to relatively basic tasks. Meanwhile, the training of deep and complex neural networks, overwhelmingly performed through backpropagation, remains a significant limitation on the size, and consequently the performance, of current architectures, as well as a major compute and energy bottleneck. Here, we experimentally implement a versatile and scalable training algorithm, called direct feedback alignment, on a hybrid electronic-photonic platform. An optical processing unit performs the large-scale random matrix multiplications at the heart of this algorithm at speeds of up to 1500 TeraOps. We optically train some of the most recent deep learning architectures, including Transformers with more than 1B parameters, and obtain good performance on both language and vision tasks. We study the compute scaling of our hybrid optical approach and demonstrate a potential advantage for ultra-deep and wide neural networks, opening a promising route to sustaining the exponential growth of modern artificial intelligence beyond traditional von Neumann approaches.
- Comment
19 pages, 4 figures
- Published
2024
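
The abstract names direct feedback alignment (DFA) as the training algorithm, whose central operation is projecting the output error through large fixed random matrices. Below is a minimal NumPy sketch of one DFA update on a small two-hidden-layer MLP, included only to illustrate the algorithm's structure: the layer sizes, tanh activations, and softmax cross-entropy loss are assumptions chosen for the example, not details from the paper, and on the paper's hybrid platform the fixed random projections (`B1`, `B2` here) are performed optically rather than in NumPy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative layer sizes (hypothetical; the paper trains far larger models).
d_in, d_h, d_out = 784, 512, 10

# Trainable forward weights.
W1 = rng.normal(0.0, 1.0 / np.sqrt(d_in), (d_h, d_in))
W2 = rng.normal(0.0, 1.0 / np.sqrt(d_h), (d_h, d_h))
W3 = rng.normal(0.0, 1.0 / np.sqrt(d_h), (d_out, d_h))

# Fixed random feedback matrices: drawn once, never trained. Applying them
# to the output error (B @ e below) is the large random matrix
# multiplication that the optical processing unit accelerates in the paper.
B1 = rng.normal(0.0, 1.0 / np.sqrt(d_out), (d_h, d_out))
B2 = rng.normal(0.0, 1.0 / np.sqrt(d_out), (d_h, d_out))

def dtanh(a):
    """Derivative of tanh evaluated at pre-activation a."""
    return 1.0 - np.tanh(a) ** 2

def dfa_step(x, y, lr=1e-2):
    """One DFA update on a batch: x is (d_in, n), y is one-hot (d_out, n)."""
    global W1, W2, W3

    # Standard forward pass.
    a1 = W1 @ x;  h1 = np.tanh(a1)
    a2 = W2 @ h1; h2 = np.tanh(a2)
    logits = W3 @ h2
    p = np.exp(logits - logits.max(axis=0))
    p /= p.sum(axis=0)

    # Global error signal at the output (softmax + cross-entropy gradient).
    e = p - y

    # DFA: each hidden layer receives the *output* error through its own
    # fixed random projection, instead of the backpropagated chain of
    # transposed forward weights, so the layer updates are independent.
    delta2 = (B2 @ e) * dtanh(a2)
    delta1 = (B1 @ e) * dtanh(a1)

    n = x.shape[1]
    W3 -= lr * (e @ h2.T) / n
    W2 -= lr * (delta2 @ h1.T) / n
    W1 -= lr * (delta1 @ x.T) / n
```

Because the feedback matrices are fixed and random, they need not be stored or updated with high precision, which is what makes an analog optical implementation of this single dominant operation attractive in the paper's setting.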