1. Flash Inference: Near Linear Time Inference for Long Convolution Sequence Models and Beyond
- Authors
Costin-Andrei Oncescu, Sanket Purandare, Stratos Idreos, and Sham Kakade
- Subjects
Computer Science - Machine Learning, Computer Science - Artificial Intelligence
- Abstract
While transformers have been at the core of most recent advancements in sequence generative models, their computational cost remains quadratic in sequence length. Several subquadratic architectures have been proposed to address this; some, including long convolution sequence models (LCSMs) such as Hyena, achieve subquadratic training but remain quadratic during inference. We propose a method that speeds up exact LCSM inference to quasilinear $O(L\log^2 L)$ time, identify the key properties that make this possible, and propose a general framework that exploits them. Our approach, inspired by previous work on relaxed polynomial interpolation, is based on a tiling that reduces memory movement and shares computation. It has the added benefit of allowing almost complete parallelization across layers of the position-mixing part of the architecture. Empirically, we provide a proof-of-concept implementation for Hyena, which achieves up to a $1.6\times$ end-to-end improvement over standard inference via a $50\times$ improvement within the position-mixing part.
- Comment
15 pages, 9 figures, 5 algorithms
- Published
2024
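The abstract above alludes to relaxed polynomial interpolation: contributions of a long convolution filter are tiled into power-of-two blocks that are flushed exactly when their input blocks complete. Below is a minimal NumPy sketch of that classic online-convolution tiling, under stated assumptions: `online_lcsm_conv` and `next_token` are hypothetical names, `np.convolve` stands in for the cached FFT block products that give the quasilinear bound, and none of this reproduces the paper's Hyena implementation.

```python
import numpy as np

def online_lcsm_conv(h: np.ndarray, next_token, L: int) -> np.ndarray:
    """Online causal convolution y[n] = sum_{k=0..n} h[k] * x[n-k], where the
    token x[n] may depend on earlier outputs (autoregressive generation).

    h is a full-length filter (len(h) == L), as in LCSMs where the implicit
    filter spans the sequence. Naive online evaluation costs O(L^2); the
    power-of-two tiling below costs O(L log^2 L) when the block products are
    computed with (cached) FFTs instead of np.convolve.
    """
    x = np.zeros(L)
    y = np.zeros(L)
    acc = np.zeros(2 * L)  # contributions of past tokens to future outputs
    for n in range(L):
        # Flush every tile whose input block [n - p, n) just completed:
        # x[n-p : n] times filter slice h[p : 2p], for each p = 2^j dividing n.
        p = 1
        while p <= n and n % p == 0:
            xb = x[n - p : n]                 # finished input block of length p
            hb = h[p : min(2 * p, L)]         # matching filter slice [p, 2p)
            block = np.convolve(xb, hb)       # stand-in for an FFT block product
            acc[n : n + len(block)] += block  # earliest index touched is exactly n
            p *= 2
        # Autoregressive step: the new token may condition on the ready prefix.
        x[n] = next_token(acc[n], n)
        y[n] = acc[n] + h[0] * x[n]           # add the diagonal term h[0] * x[n]
    return y
```

As a sanity check, with a `next_token` that ignores the accumulated past (e.g., `lambda past, n: float(n + 1)`), the output matches the offline `np.convolve(x, h)[:L]`; the tiling only changes when each contribution is computed, which is what enables the layer-parallel, low-memory-movement inference the abstract describes.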