
The Hallucinations Leaderboard -- An Open Effort to Measure Hallucinations in Large Language Models

Authors:
Hong, Giwon
Gema, Aryo Pradipta
Saxena, Rohit
Du, Xiaotang
Nie, Ping
Zhao, Yu
Perez-Beltrachini, Laura
Ryabinin, Max
He, Xuanli
Fourrier, Clémentine
Minervini, Pasquale
Publication Year:
2024

Abstract

Large Language Models (LLMs) have transformed the Natural Language Processing (NLP) landscape with their remarkable ability to understand and generate human-like text. However, these models are prone to "hallucinations": outputs that do not align with factual reality or the input context. This paper introduces the Hallucinations Leaderboard, an open initiative to quantitatively measure and compare the tendency of each model to produce hallucinations. The leaderboard uses a comprehensive set of benchmarks focusing on different aspects of hallucinations, such as factuality and faithfulness, across various tasks, including question-answering, summarisation, and reading comprehension. Our analysis provides insights into the performance of different models, guiding researchers and practitioners in choosing the most reliable models for their applications.

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2404.05904
Document Type:
Working Paper