
Do Large Language Models Know What Humans Know?

Authors :
Trott S
Jones C
Chang T
Michaelov J
Bergen B
Source :
Cognitive science [Cogn Sci] 2023 Jul; Vol. 47 (7), pp. e13309.
Publication Year :
2023

Abstract

Humans can attribute beliefs to others. However, it is unknown to what extent this ability results from an innate biological endowment or from experience accrued through child development, particularly exposure to language describing others' mental states. We test the viability of the language exposure hypothesis by assessing whether models exposed to large quantities of human language display sensitivity to the implied knowledge states of characters in written passages. In pre-registered analyses, we present a linguistic version of the False Belief Task to both human participants and a large language model, GPT-3. Both are sensitive to others' beliefs, but while the language model significantly exceeds chance behavior, it does not perform as well as the humans, nor does it explain the full extent of their behavior, despite being exposed to more language than a human would encounter in a lifetime. This suggests that while statistical learning from language exposure may in part explain how humans develop the ability to reason about the mental states of others, other mechanisms are also responsible.
(© 2023 The Authors. Cognitive Science published by Wiley Periodicals LLC on behalf of Cognitive Science Society (CSS).)

Details

Language :
English
ISSN :
1551-6709
Volume :
47
Issue :
7
Database :
MEDLINE
Journal :
Cognitive science
Publication Type :
Academic Journal
Accession number :
37401923
Full Text :
https://doi.org/10.1111/cogs.13309