Conditional computation in neural networks: principles and research trends
- Source :
- Intelligenza Artificiale, vol. Pre-press, pp. 1-16, 2024
- Publication Year :
- 2024
Abstract
- This article summarizes principles and ideas from the emerging area of applying *conditional computation* methods to the design of neural networks. In particular, we focus on neural networks that can dynamically activate or deactivate parts of their computational graph conditioned on their input. Examples include the dynamic selection of, e.g., input tokens, layers (or sets of layers), and sub-modules inside each layer (e.g., channels in a convolutional filter). We first provide a general formalism to describe these techniques in a uniform way. Then, we introduce three notable implementations of these principles: mixture-of-experts (MoE) networks, token selection mechanisms, and early-exit neural networks. The paper aims to provide a tutorial-like introduction to this growing field. To this end, we analyze the benefits of these modular designs in terms of efficiency, explainability, and transfer learning, with a focus on emerging application areas ranging from automated scientific discovery to semantic communication.
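To make the abstract's central idea concrete, here is a minimal sketch of a top-k mixture-of-experts layer in PyTorch, where a learned gate activates only a subset of experts for each input so the remaining experts contribute no compute. This is an illustrative assumption, not code from the paper; all names (`SimpleMoE`, `num_experts`, `top_k`) are hypothetical.

```python
# Illustrative sketch only (not the paper's implementation): a minimal
# top-k mixture-of-experts layer. Inactive experts are simply skipped,
# which is the "conditional computation" the abstract describes.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleMoE(nn.Module):
    def __init__(self, dim: int, num_experts: int = 4, top_k: int = 2):
        super().__init__()
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.ReLU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        ])
        self.gate = nn.Linear(dim, num_experts)  # input-conditioned router
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, dim). Gate scores decide which experts run per sample.
        scores = self.gate(x)                           # (batch, num_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)  # keep the k best experts
        weights = F.softmax(weights, dim=-1)            # normalize over the selected ones
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e                # samples routed to expert e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out

x = torch.randn(8, 16)
moe = SimpleMoE(dim=16)
print(moe(x).shape)  # torch.Size([8, 16])
```

The early-exit networks mentioned in the abstract can be sketched in the same hedged spirit: intermediate classifier heads let confident inputs skip the remaining layers at inference time. Again, the names (`EarlyExitNet`) and the confidence-threshold rule are assumptions for illustration, not the paper's method.

```python
# Illustrative sketch only: an early-exit classifier that halts at the first
# intermediate head whose confidence clears a threshold, so "easy" inputs
# never pay for the deeper blocks.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EarlyExitNet(nn.Module):
    def __init__(self, dim: int, num_classes: int, depth: int = 3, threshold: float = 0.9):
        super().__init__()
        self.blocks = nn.ModuleList([nn.Linear(dim, dim) for _ in range(depth)])
        self.heads = nn.ModuleList([nn.Linear(dim, num_classes) for _ in range(depth)])
        self.threshold = threshold

    @torch.no_grad()
    def forward(self, x: torch.Tensor):
        # Inference: stop as soon as a head is confident enough.
        for i, (block, head) in enumerate(zip(self.blocks, self.heads)):
            x = torch.relu(block(x))
            probs = F.softmax(head(x), dim=-1)
            if probs.max() >= self.threshold:
                return probs, i  # exited early after block i
        return probs, len(self.blocks) - 1  # reached the final head

net = EarlyExitNet(dim=16, num_classes=10)
probs, exit_layer = net(torch.randn(1, 16))
```

In both sketches the computational graph executed per input depends on that input, which is the unifying formalism the paper develops; the token selection mechanisms it also covers follow the same gating pattern, applied to input tokens rather than experts or layers.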
- Subjects :
- Computer Science - Machine Learning
- Computer Science - Artificial Intelligence
Details
- Database :
- arXiv
- Publication Type :
- Report
- Accession Number :
- edsarx.2403.07965
- Document Type :
- Working Paper
- Full Text :
- https://doi.org/10.3233/IA-240035