1. An Evaluation Framework for Attributed Information Retrieval using Large Language Models
- Author
Djeddal, Hanane, Erbacher, Pierre, Toukal, Raouf, Soulier, Laure, Pinel-Sauvagnat, Karen, Katrenko, Sophia, and Tamine, Lynda
- Subjects
Computer Science - Information Retrieval
- Abstract
With the growing success of Large Language Models (LLMs) in information-seeking scenarios, search engines are now adopting generative approaches to provide answers along with in-line citations as attribution. While existing work focuses mainly on attributed question answering, in this paper, we target information-seeking scenarios, which are often more challenging due to the open-ended nature of the queries and the size of the label space in terms of the diversity of candidate attributed answers per query. We propose a reproducible framework to evaluate and benchmark attributed information seeking, using any backbone LLM and different architectural designs: (1) Generate, (2) Retrieve then Generate, and (3) Generate then Retrieve. Experiments using HAGRID, an attributed information-seeking dataset, show the impact of different scenarios on both the correctness and attributability of answers.
- Published
2024
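The three architectural designs named in the abstract map onto simple pipelines. Below is a minimal, illustrative Python sketch of those pipelines, not the authors' implementation: the `llm` and `retriever` callables are hypothetical placeholders for any backbone LLM and retrieval component.

```python
# Illustrative sketch (not the paper's code) of the three architectural designs
# named in the abstract. `llm` and `retriever` are hypothetical callables:
# the LLM maps a prompt to generated text, the retriever maps a query to
# a ranked list of passages.

from typing import Callable, List, Tuple

LLM = Callable[[str], str]              # prompt -> generated text
Retriever = Callable[[str], List[str]]  # query -> ranked passages


def generate(query: str, llm: LLM) -> Tuple[str, List[str]]:
    """(1) Generate: answer from the LLM alone, with no retrieved attribution."""
    return llm(f"Answer the question: {query}"), []


def retrieve_then_generate(query: str, llm: LLM, retriever: Retriever,
                           k: int = 3) -> Tuple[str, List[str]]:
    """(2) Retrieve then Generate: condition the answer on top-k passages and cite them in-line."""
    passages = retriever(query)[:k]
    context = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    prompt = (
        "Using the numbered passages below, answer the question and cite "
        f"passages in-line as [n].\n{context}\n\nQuestion: {query}"
    )
    return llm(prompt), passages


def generate_then_retrieve(query: str, llm: LLM, retriever: Retriever,
                           k: int = 3) -> Tuple[str, List[str]]:
    """(3) Generate then Retrieve: answer first, then retrieve passages post hoc as attribution."""
    answer = llm(f"Answer the question: {query}")
    evidence = retriever(answer)[:k]  # attribute the generated answer, not the query
    return answer, evidence
```

In this reading, the designs differ only in when (and on what) retrieval is performed, which is what drives the trade-off between answer correctness and attributability studied in the paper.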