
Benchmarking Uncertainty Quantification Methods for Large Language Models with LM-Polygraph

Authors:
Vashurin, Roman
Fadeeva, Ekaterina
Vazhentsev, Artem
Rvanova, Lyudmila
Tsvigun, Akim
Vasilev, Daniil
Xing, Rui
Sadallah, Abdelrahman Boda
Grishchenkov, Kirill
Petrakov, Sergey
Panchenko, Alexander
Baldwin, Timothy
Nakov, Preslav
Panov, Maxim
Shelmanov, Artem
Publication Year: 2024

Abstract

Uncertainty quantification (UQ) is a critical component of machine learning (ML) applications. The rapid proliferation of large language models (LLMs) has stimulated researchers to seek efficient and effective approaches to UQ for text generation. As with other ML models, LLMs are prone to making incorrect predictions, in the form of "hallucinations" whereby claims are fabricated or low-quality outputs are generated for a given input. UQ is a key element in dealing with these challenges. However, research to date on UQ methods for LLMs has been fragmented, in terms of both the literature on UQ techniques and evaluation methods. In this work, we tackle this issue by introducing a novel benchmark that implements a collection of state-of-the-art UQ baselines, and provides an environment for controllable and consistent evaluation of novel UQ techniques over various text generation tasks. Our benchmark also supports the assessment of confidence normalization methods in terms of their ability to provide interpretable scores. Using our benchmark, we conduct a large-scale empirical investigation of UQ and normalization techniques across nine tasks, and identify the most promising approaches.

Code: https://github.com/IINemo/lm-polygraph

Comment: Roman Vashurin, Ekaterina Fadeeva, Artem Vazhentsev contributed equally
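As context for the kind of baselines such a benchmark covers, the simplest sequence-level UQ scores rely only on the model's own token probabilities, e.g. the negative mean log-likelihood (log-perplexity) of the generated tokens. The sketch below illustrates that idea with Hugging Face transformers; it is a minimal illustration of this baseline family under assumed placeholders (model name "gpt2", an arbitrary prompt), and is not the LM-Polygraph API itself.

# Minimal sketch of a sequence-level UQ baseline: negative mean token
# log-likelihood of the model's own greedy generation (log-perplexity).
# Illustration only; not the LM-Polygraph API. Model name and prompt are
# placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; any causal LM works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

prompt = "The capital of Australia is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    out = model.generate(
        **inputs,
        max_new_tokens=20,
        do_sample=False,
        return_dict_in_generate=True,
        output_scores=True,
    )

# Log-probability of each generated token under the model.
transition_scores = model.compute_transition_scores(
    out.sequences, out.scores, normalize_logits=True
)

# Higher value = lower average token likelihood = more uncertain.
uncertainty = -transition_scores[0].mean().item()
generated = tokenizer.decode(
    out.sequences[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(f"generation: {generated!r}")
print(f"negative mean log-likelihood: {uncertainty:.3f}")

Information-based scores like this require no extra model calls, which is why they are common baselines; more elaborate techniques (e.g. sampling-based consistency or density-based methods) trade additional compute for better calibration.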

Details

Database: arXiv
Publication Type: Report
Accession Number: edsarx.2406.15627
Document Type: Working Paper