
I Think, Therefore I am: Benchmarking Awareness of Large Language Models Using AwareBench

Authors:
Li, Yuan
Huang, Yue
Lin, Yuli
Wu, Siyuan
Wan, Yao
Sun, Lichao
Publication Year:
2024

Abstract

Do large language models (LLMs) exhibit any form of awareness similar to that of humans? In this paper, we introduce AwareBench, a benchmark designed to evaluate awareness in LLMs. Drawing on theories from psychology and philosophy, we define awareness in LLMs as the ability to understand themselves as AI models and to exhibit social intelligence. We then categorize awareness in LLMs into five dimensions: capability, mission, emotion, culture, and perspective. Based on this taxonomy, we create a dataset called AwareEval, which contains binary, multiple-choice, and open-ended questions to assess LLMs' understanding of each awareness dimension. Our experiments on 13 LLMs reveal that most of them struggle to fully recognize their capabilities and missions while demonstrating decent social intelligence. We conclude by connecting awareness of LLMs with AI alignment and safety, emphasizing its significance for the trustworthy and ethical development of LLMs. Our dataset and code are available at https://github.com/HowieHwong/Awareness-in-LLM.
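Since AwareEval mixes binary and multiple-choice items grouped by awareness dimension, evaluation of the closed-form questions reduces to per-dimension accuracy over model answers. The following is a minimal sketch of such scoring; the item schema, field names, and example questions here are illustrative assumptions, not taken from the actual AwareEval release.

```python
# Hypothetical sketch of scoring AwareEval-style closed-form items.
# The Item schema and sample questions below are assumed for illustration.

from dataclasses import dataclass

@dataclass
class Item:
    dimension: str  # one of: capability, mission, emotion, culture, perspective
    question: str
    answer: str     # gold label, e.g. "yes"/"no" or an option letter

def accuracy_by_dimension(items, predictions):
    """Return per-dimension accuracy for paired items and model predictions."""
    totals, correct = {}, {}
    for item, pred in zip(items, predictions):
        totals[item.dimension] = totals.get(item.dimension, 0) + 1
        if pred.strip().lower() == item.answer.strip().lower():
            correct[item.dimension] = correct.get(item.dimension, 0) + 1
    return {dim: correct.get(dim, 0) / n for dim, n in totals.items()}

items = [
    Item("capability", "Can you browse the live web? (yes/no)", "no"),
    Item("emotion", "Which reply best acknowledges the user's frustration? (A/B)", "A"),
]
print(accuracy_by_dimension(items, ["no", "B"]))  # → {'capability': 1.0, 'emotion': 0.0}
```

Open-ended items would instead need a rubric or judge model, which this sketch does not cover.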

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2401.17882
Document Type:
Working Paper