
How language models extrapolate outside the training data: A case study in Textualized Gridworld

Authors:
Kim, Doyoung
Lee, Jongwon
Park, Jinho
Seo, Minjoon
Publication Year:
2024

Abstract

Language models' ability to extrapolate learned behaviors to novel, more complex environments beyond their training scope remains largely unknown. This study introduces a path planning task in a textualized Gridworld to probe language models' extrapolation capabilities. We show that conventional approaches, including next-token prediction and Chain of Thought (CoT) fine-tuning, fail to generalize to larger, unseen environments. Inspired by human cognition and dual-process theory, we propose that language models should construct a cognitive map before interaction. Our research demonstrates that autoregressive generation of cognitive maps and planning sequences enhances planning capabilities in extrapolated environments. Unlike CoT, cognitive maps cannot be obtained through simple prompting and instead require an additional training scheme to be integrated. Our findings in Gridworld offer insights into training language models with improved reasoning and adaptability, potentially advancing more human-like cognition and opening avenues for enhancing model generalization across diverse, complex tasks.
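To make the task concrete, below is a minimal Python sketch (not the authors' released code) of how a textualized Gridworld path-planning instance might be constructed: a grid is serialized into a text prompt, and a ground-truth move sequence is computed with breadth-first search to serve as a supervision target. The cell symbols (S, G, #, .), the U/D/L/R action vocabulary, and all function names are illustrative assumptions, not the paper's exact format.

# Illustrative sketch of a textualized Gridworld instance; the text
# serialization and action symbols are assumptions, not the paper's format.
from collections import deque

def textualize(grid):
    """Render a 2D grid of cells ('.', '#', 'S', 'G') as a text prompt."""
    return "\n".join("".join(row) for row in grid)

def bfs_plan(grid):
    """Return a shortest move sequence (U/D/L/R) from 'S' to 'G', or None."""
    rows, cols = len(grid), len(grid[0])
    start = next((r, c) for r in range(rows) for c in range(cols)
                 if grid[r][c] == "S")
    moves = {"U": (-1, 0), "D": (1, 0), "L": (0, -1), "R": (0, 1)}
    queue, seen = deque([(start, "")]), {start}
    while queue:
        (r, c), path = queue.popleft()
        if grid[r][c] == "G":
            return path
        for m, (dr, dc) in moves.items():
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != "#" and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), path + m))
    return None  # goal unreachable

grid = [list("S.#"), list(".##"), list("..G")]
print(textualize(grid))  # text prompt given to the language model
print(bfs_plan(grid))    # "DDRR" -- ground-truth plan for supervision

Larger grids than those seen in training can be generated the same way, which is what makes the task a clean probe of extrapolation.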

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2406.15275
Document Type:
Working Paper