
Do Large Language Models Solve ARC Visual Analogies Like People Do?

Authors:
Opiełka, Gustaw
Rosenbusch, Hannes
Vijverberg, Veerle
Stevenson, Claire E.
Publication Year:
2024

Abstract

The Abstraction and Reasoning Corpus (ARC) is a visual analogical reasoning test designed for humans and machines (Chollet, 2019). We compared human and large language model (LLM) performance on a new child-friendly set of ARC items. Results show that both children and adults outperform most LLMs on these tasks. Error analysis revealed a similar "fallback" solution strategy in LLMs and young children, where part of the analogy is simply copied. In addition, we found two other error types: one based on seemingly grasping key concepts (e.g., Inside-Outside) and the other based on simple combinations of the analogy input matrices. On the whole, "concept" errors were more common in humans, and "matrix" errors were more common in LLMs. This study sheds new light on LLM reasoning ability and on the extent to which error analyses and comparisons with human development can help us understand how LLMs solve visual analogies.

Comment: Changes (based on CogSci 2024 reviewers): shortened the Introduction; added a table summarizing children's performance across ages; added theoretical discussion to the Discussion section; corrected plot naming; small clarifications in the Methods section.
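To make the "copy" fallback concrete, here is a minimal, hypothetical Python sketch (not the authors' code or data): it flags a response to a toy ARC-style analogy A : B :: C : ? as a copy error when the answer merely reproduces one of the input matrices instead of applying the transformation rule. The matrices and the rule are invented for illustration only.

    # Hypothetical sketch: detecting a "copy" fallback on a toy ARC-style item.
    from typing import List

    Matrix = List[List[int]]

    def is_copy_fallback(answer: Matrix, a: Matrix, b: Matrix, c: Matrix) -> bool:
        """Return True if the answer is a verbatim copy of an input matrix."""
        return answer in (a, b, c)

    # Invented toy item: the rule is "flood-fill the grid with C's top-left colour".
    a = [[1, 0], [0, 0]]
    b = [[1, 1], [1, 1]]
    c = [[2, 0], [0, 0]]
    correct = [[2, 2], [2, 2]]

    print(is_copy_fallback(c, a, b, c))        # True  -> scored as a "copy" error
    print(is_copy_fallback(correct, a, b, c))  # False -> rule was applied

Responses that are neither copies nor correct would then be sorted into the other categories the abstract describes ("concept" or "matrix" errors); that fuller scoring logic is left out here.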

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2403.09734
Document Type:
Working Paper