
Trusted Source Alignment in Large Language Models

Authors:
Bashlovkina, Vasilisa
Kuang, Zhaobin
Matthews, Riley
Clifford, Edward
Jun, Yennie
Cohen, William W.
Baumgartner, Simon
Publication Year:
2023

Abstract

Large language models (LLMs) are trained on web-scale corpora that inevitably include contradictory factual information from sources of varying reliability. In this paper, we propose measuring an LLM property called trusted source alignment (TSA): the model's propensity to align with content produced by trusted publishers in the face of uncertainty or controversy. We present FactCheckQA, a TSA evaluation dataset based on a corpus of fact checking articles. We describe a simple protocol for evaluating TSA and offer a detailed analysis of design considerations including response extraction, claim contextualization, and bias in prompt formulation. Applying the protocol to PaLM-2, we find that as we scale up the model size, the model performance on FactCheckQA improves from near-random to up to 80% balanced accuracy in aligning with trusted sources.
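The abstract outlines the shape of the evaluation protocol: prompt the model with claims drawn from fact-checking articles, extract a verdict from its response, and score agreement with the trusted publisher's verdict using balanced accuracy. The sketch below illustrates one way such scoring could be wired up. It is an assumption-laden illustration, not the paper's actual protocol: query_model, extract_answer, and the prompt wording are hypothetical stand-ins, and the FactCheckQA data format shown here is invented for the example.

    # Minimal sketch (not the authors' exact protocol) of a TSA-style evaluation:
    # prompt a model with fact-checked claims, parse a Yes/No answer, and score
    # agreement with the trusted publisher's verdict via balanced accuracy.

    from sklearn.metrics import balanced_accuracy_score

    def query_model(prompt: str) -> str:
        """Hypothetical placeholder for a call to the LLM under evaluation."""
        raise NotImplementedError

    def extract_answer(response: str) -> int | None:
        """Map a free-form response to 1 (claim true), 0 (claim false), or None."""
        text = response.strip().lower()
        if text.startswith("yes"):
            return 1
        if text.startswith("no"):
            return 0
        return None  # unparseable responses are dropped from scoring

    def evaluate_tsa(claims: list[dict]) -> float:
        """claims: [{"claim": str, "verdict": bool}, ...] from fact-check articles."""
        y_true, y_pred = [], []
        for item in claims:
            prompt = (
                "Is the following claim true? Answer Yes or No.\n"
                f"Claim: {item['claim']}"
            )
            answer = extract_answer(query_model(prompt))
            if answer is not None:
                y_true.append(int(item["verdict"]))
                y_pred.append(answer)
        # Balanced accuracy averages recall over true and false claims,
        # so a model cannot score well by always agreeing or always disagreeing.
        return balanced_accuracy_score(y_true, y_pred)

Balanced accuracy is the natural choice here because fact-check corpora tend to be skewed toward debunked (false) claims, and a plain accuracy score would reward a model that simply rejects everything.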

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2311.06697
Document Type:
Working Paper