
On Task Performance and Model Calibration with Supervised and Self-Ensembled In-Context Learning

Authors:
Li, Chengzu
Zhou, Han
Glavaš, Goran
Korhonen, Anna
Vulić, Ivan
Publication Year:
2023

Abstract

Following the standard supervised fine-tuning (SFT) paradigm, in-context learning (ICL) has emerged as an efficient approach, propelled by recent advancements in large language models (LLMs), yielding promising performance across various tasks in few-shot data setups. However, both paradigms are prone to the critical problem of overconfidence (i.e., miscalibration), especially in such limited-data setups. In this work, we deliver an in-depth analysis of the behavior of different learning methods from the perspectives of both performance and calibration, as well as their interplay. Through extensive controlled experiments, we find that simultaneous gains in both task performance and calibration are difficult to achieve, and that miscalibration persists across all learning methods in low-resource scenarios. To address this challenging trade-off between performance and calibration, we then investigate the potential of self-ensembling techniques applied at different modeling stages (e.g., variations of in-context examples, variations in prompts, or different ensembling strategies). We show that self-ensembling is feasible for SFT as well as ICL, yielding better-calibrated predictions with comparable or even better task performance. Our work sheds light on which learning paradigm to choose and how to enhance both task performance and calibration of LLMs.

Comment: 9 pages, 4 figures, 5 tables (20 pages, 5 figures, 13 tables including references and appendices)
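As an illustrative sketch only (not the authors' implementation), probability-level self-ensembling can be approximated by averaging a model's predictive distributions over several prompt or in-context-example variations, and miscalibration can then be quantified with the standard expected calibration error (ECE). The probs_per_prompt input below is a hypothetical stand-in for per-prompt class probabilities produced by an LLM.

import numpy as np

def self_ensemble(prob_sets):
    # Average class-probability distributions obtained from several
    # prompt / in-context-example variations (probability-level ensembling).
    # Each element of prob_sets has shape (n_examples, n_classes).
    return np.mean(np.stack(prob_sets, axis=0), axis=0)

def expected_calibration_error(probs, labels, n_bins=10):
    # Standard ECE: bin predictions by confidence and compare average
    # confidence with accuracy within each bin, weighted by bin size.
    confidences = probs.max(axis=1)
    predictions = probs.argmax(axis=1)
    accuracies = (predictions == labels).astype(float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            ece += mask.mean() * abs(accuracies[mask].mean() - confidences[mask].mean())
    return ece

# Hypothetical usage: probs_per_prompt is a list of (n_examples, n_classes)
# arrays, one per prompt variation, produced by the LLM.
# ensembled = self_ensemble(probs_per_prompt)
# print(expected_calibration_error(ensembled, gold_labels))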

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2312.13772
Document Type:
Working Paper