1. On Logical Extrapolation for Mazes with Recurrent and Implicit Networks
- Authors
- Brandon Knutson, Amandin Chyba Rabeendran, Michael Ivanitskiy, Jordan Pettyjohn, Cecilia Diniz-Behn, Samy Wu Fung, and Daniel McKenzie
- Subjects
- Computer Science - Machine Learning, Statistics - Machine Learning
- Abstract
Recent work has suggested that certain neural network architectures, particularly recurrent neural networks (RNNs) and implicit neural networks (INNs), are capable of logical extrapolation. That is, one may train such a network on easy instances of a specific task and then apply it successfully to more difficult instances of the same task. In this paper, we revisit this idea and show that (i) the capacity for extrapolation is less robust than previously suggested. Specifically, in the context of a maze-solving task, we show that while INNs (and some RNNs) are capable of generalizing to larger maze instances, they fail to generalize along axes of difficulty other than maze size. (ii) Models that are explicitly trained to converge to a fixed point (e.g. the INN we test) are likely to do so when extrapolating, while models that are not (e.g. the RNN we test) may exhibit more exotic limiting behaviour, such as limit cycles, even when they correctly solve the problem. Our results suggest that (i) further study is needed into why such networks extrapolate easily along certain axes of difficulty yet struggle with others, and (ii) analyzing the dynamics of extrapolation may yield insights into designing more efficient and interpretable logical extrapolators.
- Published
- 2024
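
The abstract's distinction between converging to a fixed point and falling into a limit cycle can be made concrete with a small sketch. The snippet below is not the paper's code; `classify_limit_behaviour`, `step_fn`, the tolerances, and the toy update maps are illustrative assumptions. It simply iterates a single update map, as one would repeatedly apply a recurrent or implicit block at test time, and inspects the tail of the trajectory.

```python
# Minimal sketch (assumed names and tolerances, not the authors' implementation):
# iterate z_{k+1} = step_fn(z_k) and classify the limiting behaviour as a fixed
# point, a short limit cycle, or neither.
import numpy as np

def classify_limit_behaviour(step_fn, z0, n_iters=500, tol=1e-5, max_period=20):
    """Iterate the update map from z0 and inspect the end of the trajectory."""
    trajectory = [np.asarray(z0, dtype=float)]
    for _ in range(n_iters):
        trajectory.append(step_fn(trajectory[-1]))

    z_last = trajectory[-1]
    # Fixed point: successive iterates stop changing.
    if np.linalg.norm(z_last - trajectory[-2]) < tol:
        return "fixed point"
    # Limit cycle: the final state recurs with some small period p > 1.
    for p in range(2, max_period + 1):
        if np.linalg.norm(z_last - trajectory[-1 - p]) < tol:
            return f"limit cycle (period {p})"
    return "no convergence detected"

if __name__ == "__main__":
    # Contractive affine map: settles to the fixed point z* = 2.
    print(classify_limit_behaviour(lambda z: 0.5 * z + 1.0, np.array([10.0])))
    # Sign flip: oscillates forever, a period-2 limit cycle.
    print(classify_limit_behaviour(lambda z: -z, np.array([1.0])))
```

In practice, `step_fn` would be the trained network's recurrent block applied to the latent state for a fixed maze input, and the same trajectory bookkeeping would distinguish the fixed-point behaviour expected of an INN from the cyclic behaviour the abstract reports for some RNNs.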