
Training Deep Architectures Without End-to-End Backpropagation: A Survey on the Provably Optimal Methods

Authors:
Shiyu Duan
Jose C. Principe
Source:
IEEE Computational Intelligence Magazine. 17:39-51
Publication Year:
2022
Publisher:
Institute of Electrical and Electronics Engineers (IEEE), 2022.

Abstract

This tutorial paper surveys provably optimal alternatives to end-to-end backpropagation (E2EBP), the de facto standard for training deep architectures. Modular training refers to strictly local training without end-to-end forward or backward passes, i.e., dividing a deep architecture into several nonoverlapping modules and training them separately without any end-to-end operation. Between the fully global E2EBP and the strictly local modular training, there are weakly modular hybrids that perform training without the backward pass only. These alternatives can match or surpass the performance of E2EBP on challenging datasets such as ImageNet, and they are gaining increasing attention primarily because they offer practical advantages over E2EBP, which will be enumerated herein. In particular, they allow for greater modularity and transparency in deep learning workflows, aligning deep learning with mainstream computer science engineering practice, which heavily exploits modularization for scalability. Modular training has also revealed novel insights about learning and has further implications for other important research domains. Specifically, it induces natural and effective solutions to some important practical problems such as data efficiency and transferability estimation.

Comment: Accepted by IEEE Computational Intelligence Magazine
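To make the distinction concrete, below is a minimal PyTorch sketch of strictly modular training: the architecture is split into two nonoverlapping modules, each trained with its own local auxiliary head, so no gradient ever crosses a module boundary. This is purely illustrative and is not the specific provably optimal methods the paper surveys; the module shapes, the auxiliary linear heads, and the training loop are all assumptions made for the example.

```python
# Illustrative sketch only: strictly modular training with local auxiliary
# heads (NOT the paper's specific algorithm). No end-to-end backward pass
# occurs; each module sees gradients only from its own local loss.
import torch
import torch.nn as nn

# Two nonoverlapping modules that together form the deep architecture
# (hypothetical sizes, chosen for an MNIST-shaped dummy batch).
modules = [
    nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU()),
    nn.Sequential(nn.Linear(256, 128), nn.ReLU()),
]
# Hypothetical local heads: map each module's output to class logits so the
# module can be trained with a purely local supervised loss.
heads = [nn.Linear(256, 10), nn.Linear(128, 10)]
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(32, 1, 28, 28)       # dummy inputs
y = torch.randint(0, 10, (32,))      # dummy labels

inputs = x
for module, head in zip(modules, heads):
    opt = torch.optim.SGD(
        list(module.parameters()) + list(head.parameters()), lr=0.1
    )
    for _ in range(100):             # train this module in isolation
        opt.zero_grad()
        z = module(inputs)
        loss_fn(head(z), y).backward()  # gradient stays inside the module
        opt.step()
    # Freeze the trained module; its detached output feeds the next module,
    # so no end-to-end operation links the modules during training.
    inputs = module(inputs).detach()
```

The `detach()` call is what enforces strict locality: each module is trained to convergence on the frozen representation produced by its predecessor. The weakly modular hybrids mentioned above relax this by keeping the end-to-end forward pass while still avoiding the end-to-end backward pass.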

Details

ISSN:
1556-6048 and 1556-603X
Volume:
17
Database:
OpenAIRE
Journal:
IEEE Computational Intelligence Magazine
Accession number:
edsair.doi.dedup.....97ab320ef9110d5a9859f3bf512b6c94