
Influences on LLM Calibration: A Study of Response Agreement, Loss Functions, and Prompt Styles

Authors:
Xia, Yuxi
de Araujo, Pedro Henrique Luz
Zaporojets, Klim
Roth, Benjamin
Publication Year:
2025

Abstract

Calibration, the alignment between model confidence and prediction accuracy, is critical for the reliable deployment of large language models (LLMs). Existing work neglects to measure how well calibration methods generalize to other prompt styles and to LLMs of different sizes. To address this, we define a controlled experimental setting covering 12 LLMs and four prompt styles. We additionally investigate whether incorporating the response agreement of multiple LLMs and an appropriate loss function can improve calibration performance. Concretely, we build Calib-n, a novel framework that trains an auxiliary model for confidence estimation, aggregating responses from multiple LLMs to capture inter-model agreement. To optimize calibration, we integrate focal and AUC surrogate losses alongside binary cross-entropy. Experiments across four datasets demonstrate that both response agreement and focal loss improve calibration over baselines. We find that few-shot prompts are the most effective for auxiliary-model-based methods, and that auxiliary models maintain robust calibration across accuracy variations, outperforming LLMs' internal probabilities and verbalized confidences. These insights deepen the understanding of the factors influencing LLM calibration, supporting reliable deployment of LLMs in diverse applications.

Comment: 24 pages, 11 figures, 8 tables
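The focal loss named in the abstract down-weights easy, well-classified examples so training focuses on hard ones. A minimal sketch of the standard binary focal loss applied to a predicted confidence and a correctness label (the function name and `gamma` default are illustrative, not taken from the paper's implementation):

```python
import math

def focal_loss(p: float, y: int, gamma: float = 2.0) -> float:
    """Binary focal loss for a predicted confidence p in (0, 1) and a
    correctness label y in {0, 1}. The (1 - p_t)^gamma factor shrinks the
    loss on confident, correct predictions; gamma = 0 recovers plain
    binary cross-entropy."""
    p_t = p if y == 1 else 1.0 - p
    return -((1.0 - p_t) ** gamma) * math.log(p_t)

# A confident, correct prediction incurs far less loss than a
# confident, wrong one of the same stated confidence.
easy = focal_loss(0.95, 1)   # near zero
hard = focal_loss(0.95, 0)   # large
```

With `gamma = 0` the factor `(1 - p_t)^gamma` is 1 and the expression reduces to binary cross-entropy, which is why the abstract describes focal loss as being integrated "alongside" BCE: the two differ only in that modulating factor.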

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2501.03991
Document Type:
Working Paper