
ContextCite: Attributing Model Generation to Context

Authors:
Cohen-Wang, Benjamin
Shah, Harshay
Georgiev, Kristian
Madry, Aleksander
Publication Year:
2024

Abstract

How do language models use information provided as context when generating a response? Can we infer whether a particular generated statement is actually grounded in the context, a misinterpretation, or fabricated? To help answer these questions, we introduce the problem of context attribution: pinpointing the parts of the context (if any) that led a model to generate a particular statement. We then present ContextCite, a simple and scalable method for context attribution that can be applied on top of any existing language model. Finally, we showcase the utility of ContextCite through three applications: (1) helping verify generated statements, (2) improving response quality by pruning the context, and (3) detecting poisoning attacks. We provide code for ContextCite at https://github.com/MadryLab/context-cite.
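
For readers who want to experiment with the released code, the sketch below shows one plausible way to invoke the package from the linked repository. It assumes the ContextCiter class, its from_pretrained constructor, and its response and get_attributions accessors match the usage documented in the repository's README; the model name, context, and query strings are placeholders, not examples from the paper.

from context_cite import ContextCiter

# Placeholder Hugging Face model; any causal language model the package
# supports should work here (an assumption; check the repository README).
model_name = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"

# Placeholder context and query, standing in for a real document and question.
context = (
    "ContextCite attributes a language model's generated statements "
    "back to the specific parts of the context that induced them."
)
query = "What does ContextCite do?"

# Wrap the model together with the context and query.
cc = ContextCiter.from_pretrained(model_name, context, query)

# The generated response to the query, conditioned on the context.
print(cc.response)

# Attribution scores for each context source, highest-scoring first.
print(cc.get_attributions(as_dataframe=True, top_k=3))

The attribution scores indicate how much each part of the context contributed to the generated response, which is what enables the applications the abstract lists, such as verifying statements and pruning the context.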

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2409.00729
Document Type: Working Paper