Cognitive Modeling with Scaffolded LLMs: A Case Study of Referential Expression Generation
- Authors
Tsvilodub, Polina; Franke, Michael; and Carcassi, Fausto
- Subjects
Computer Science - Computation and Language
- Abstract
To what extent can LLMs be used as part of a cognitive model of language generation? In this paper, we approach this question by exploring a neuro-symbolic implementation of an algorithmic cognitive model of referential expression generation by Dale & Reiter (1995). The symbolic task analysis implements the generation as an iterative procedure that scaffolds symbolic and gpt-3.5-turbo-based modules. We compare this implementation to an ablated model and a one-shot LLM-only baseline on the A3DS dataset (Tsvilodub & Franke, 2023). We find that our hybrid approach is cognitively plausible and performs well in complex contexts, while allowing for more open-ended modeling of language generation in a larger domain.
- Comment
11 pages, 3 figures, 2 algorithms; to appear at the ICML 2024 workshop on Large Language Models and Cognition
- Published
2024
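
The abstract describes scaffolding Dale & Reiter's (1995) incremental algorithm for referential expression generation with an LLM-based module. Below is a minimal sketch of that incremental property-selection loop with a placeholder verbalization step; the property names, the preference order, and the `verbalize_with_llm` helper are illustrative assumptions, not the paper's actual implementation, which interleaves symbolic and gpt-3.5-turbo-based modules in ways the abstract does not detail.

```python
# Sketch of Dale & Reiter's (1995) incremental algorithm: iterate over
# properties in a fixed preference order and keep a property only if it
# rules out at least one remaining distractor, stopping once the target
# is uniquely identified.

from typing import Dict, List


def incremental_algorithm(
    target: Dict[str, str],
    distractors: List[Dict[str, str]],
    preference_order: List[str],
) -> Dict[str, str]:
    """Select target properties that jointly exclude all distractors."""
    selected: Dict[str, str] = {}
    remaining = list(distractors)
    for prop in preference_order:
        value = target.get(prop)
        if value is None:
            continue
        # Keep the property only if it excludes at least one distractor.
        if any(d.get(prop) != value for d in remaining):
            selected[prop] = value
            remaining = [d for d in remaining if d.get(prop) == value]
        if not remaining:
            break
    return selected


def verbalize_with_llm(selected: Dict[str, str]) -> str:
    # Placeholder for an LLM call (e.g., gpt-3.5-turbo) that turns the
    # selected property-value pairs into a natural-language expression.
    return "the " + " ".join(selected.values())


if __name__ == "__main__":
    target = {"color": "red", "shape": "cube", "size": "small"}
    distractors = [
        {"color": "blue", "shape": "cube", "size": "small"},
        {"color": "red", "shape": "sphere", "size": "large"},
    ]
    props = incremental_algorithm(target, distractors, ["color", "shape", "size"])
    print(verbalize_with_llm(props))  # e.g., "the red cube"
```

In this sketch the symbolic part (property selection) stays deterministic, while only the final surface realization is delegated to the LLM; the paper's scaffolding presumably distributes more of the iterative procedure across LLM-based modules.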