
NegotiationToM: A Benchmark for Stress-testing Machine Theory of Mind on Negotiation Surrounding

Authors:
Chan, Chunkit
Jiayang, Cheng
Yim, Yauwai
Deng, Zheye
Fan, Wei
Li, Haoran
Liu, Xin
Zhang, Hongming
Wang, Weiqi
Song, Yangqiu
Publication Year:
2024

Abstract

Large Language Models (LLMs) have sparked substantial interest and debate concerning their potential emergence of Theory of Mind (ToM) ability. Current ToM evaluations mostly test models on machine-generated data or game settings prone to shortcuts and spurious correlations, and therefore do not assess machine ToM in real-world human interaction scenarios. This creates a pressing demand for new real-world scenario benchmarks. We introduce NegotiationToM, a new benchmark designed to stress-test machine ToM in real-world negotiation settings, covering multi-dimensional mental states (i.e., desires, beliefs, and intentions). Our benchmark builds upon the Belief-Desire-Intention (BDI) agent modeling theory, and we conduct the necessary empirical experiments to evaluate large language models. Our findings demonstrate that NegotiationToM is challenging for state-of-the-art LLMs, as they consistently perform significantly worse than humans, even when employing the chain-of-thought (CoT) method.

Comment: Accepted to EMNLP 2024 Findings. Dataset: https://github.com/HKUST-KnowComp/NegotiationToM
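To make the evaluation setup concrete, the sketch below shows one hypothetical way to frame a NegotiationToM-style multiple-choice question about a participant's mental state with a chain-of-thought prompt. The dialogue, question, options, and helper functions (`build_cot_prompt`, `score_answer`) are illustrative assumptions, not the paper's actual data or evaluation code; the released dataset is linked in the comment above.

```python
# Hypothetical sketch of framing a negotiation ToM question for
# chain-of-thought evaluation. The dialogue and options are placeholders,
# not items from the NegotiationToM dataset.

def build_cot_prompt(dialogue: str, question: str, options: list[str]) -> str:
    """Assemble a CoT prompt asking about one mental-state dimension
    (desire, belief, or intention) of a negotiation participant."""
    option_block = "\n".join(f"({chr(65 + i)}) {opt}" for i, opt in enumerate(options))
    return (
        "Read the negotiation dialogue and answer the question.\n\n"
        f"Dialogue:\n{dialogue}\n\n"
        f"Question: {question}\n"
        f"Options:\n{option_block}\n\n"
        "Let's think step by step, then give the final answer as a single letter."
    )

def score_answer(model_output: str, gold_letter: str) -> bool:
    """Naive exact-match check on the final answer letter (illustrative only)."""
    cleaned = model_output.strip().upper()
    return cleaned.endswith(gold_letter.upper()) or f"({gold_letter.upper()})" in cleaned

if __name__ == "__main__":
    prompt = build_cot_prompt(
        dialogue="Agent A: I really need the firewood.\nAgent B: I can trade it for extra water.",
        question="What does Agent A desire most?",
        options=["Firewood", "Water", "Food"],
    )
    print(prompt)  # send this prompt to an LLM of your choice, then score its reply
```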

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2404.13627
Document Type: Working Paper