
KITAB: Evaluating LLMs on Constraint Satisfaction for Information Retrieval

Authors:
Abdin, Marah I
Gunasekar, Suriya
Chandrasekaran, Varun
Li, Jerry
Yuksekgonul, Mert
Peshawaria, Rahee Ghosh
Naik, Ranjita
Nushi, Besmira
Publication Year:
2023

Abstract

We study the ability of state-of-the-art models to answer constraint satisfaction queries for information retrieval (e.g., 'a list of ice cream shops in San Diego'). In the past, such queries were considered tasks that could only be solved via web search or knowledge bases. More recently, large language models (LLMs) have demonstrated initial emergent abilities on this task. However, many current retrieval benchmarks are either saturated or do not measure constraint satisfaction. Motivated by rising concerns around factual incorrectness and hallucinations in LLMs, we present KITAB, a new dataset for measuring the constraint satisfaction abilities of language models. KITAB consists of book-related data across more than 600 authors and 13,000 queries, and also offers an associated dynamic data collection and constraint verification approach for acquiring similar test data for other authors. Our extended experiments on GPT-4 and GPT-3.5 characterize and decouple common failure modes across dimensions such as information popularity, constraint types, and context availability. Results show that in the absence of context, models exhibit severe limitations as measured by irrelevant information, factual errors, and incompleteness, many of which are exacerbated as information popularity decreases. While context availability mitigates irrelevant information, it does not help with satisfying constraints, pointing to fundamental barriers to constraint satisfaction. We open-source our contributions to foster further research on improving the constraint satisfaction abilities of future models.
Comment: 23 pages
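The abstract mentions an associated constraint verification approach for checking candidate book lists against query constraints, along with failure measures such as irrelevant information and incompleteness. As a rough illustration only (not the authors' released pipeline; the constraint kinds, helper names, and metric definitions below are assumptions), verifying one model answer for a single query might look like this:

```python
# Illustrative sketch, not the KITAB authors' code: check a model's book-list
# answer against a simple constraint and a ground-truth set, then compute
# rough irrelevance / completeness rates. All names here are hypothetical.
from dataclasses import dataclass


@dataclass
class Constraint:
    kind: str     # e.g. "starts_with" or "published_between" (assumed kinds)
    value: tuple  # constraint payload


def satisfies(title: str, year: int, c: Constraint) -> bool:
    """Check a single (title, year) pair against one constraint."""
    if c.kind == "starts_with":
        return title.lower().startswith(c.value[0].lower())
    if c.kind == "published_between":
        lo, hi = c.value
        return lo <= year <= hi
    raise ValueError(f"unknown constraint kind: {c.kind}")


def score_answer(answer, ground_truth, constraint):
    """Return simple per-query rates.

    answer: list of (title, year) pairs produced by the model
    ground_truth: set of titles by the queried author that satisfy the
                  constraint (assumed to come from the dataset)
    """
    relevant = [t for t, y in answer if satisfies(t, y, constraint)]
    irrelevance = 1 - len(relevant) / max(len(answer), 1)
    completeness = len(set(relevant) & ground_truth) / max(len(ground_truth), 1)
    return {"irrelevance": irrelevance, "completeness": completeness}


if __name__ == "__main__":
    c = Constraint("starts_with", ("T",))
    answer = [("The Remains of the Day", 1989), ("Never Let Me Go", 2005)]
    truth = {"The Remains of the Day", "The Unconsoled"}
    print(score_answer(answer, truth, c))  # {'irrelevance': 0.5, 'completeness': 0.5}
```

The two per-query rates loosely mirror the failure dimensions named in the abstract (irrelevant information and incompleteness); the paper's actual verification and metrics may differ.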

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2310.15511
Document Type:
Working Paper