
Can Users Detect Biases or Factual Errors in Generated Responses in Conversational Information-Seeking?

Authors :
Łajewska, Weronika
Balog, Krisztian
Spina, Damiano
Trippas, Johanne
Publication Year :
2024

Abstract

Information-seeking dialogues span a wide range of questions, from simple factoid queries to complex ones that require exploring multiple facets and viewpoints. When performing exploratory searches in unfamiliar domains, users may lack background knowledge and struggle to verify the system-provided information, making them vulnerable to misinformation. We investigate the limitations of response generation in conversational information-seeking systems, highlighting potential inaccuracies, pitfalls, and biases in the responses. The study addresses the problem of query answerability and the challenge of response incompleteness. Our user studies explore how these issues impact user experience, focusing on users' ability to identify biased, incorrect, or incomplete responses. We design two crowdsourcing tasks to assess user experience with different system response variants, highlighting critical issues to be addressed in future conversational information-seeking research. Our analysis reveals that it is easier for users to detect response incompleteness than query answerability, and that user satisfaction is mostly associated with response diversity, not factual correctness.

Comment: Extended version of the paper that appeared in the Proceedings of the 2024 Annual International ACM SIGIR Conference on Research and Development in Information Retrieval in the Asia Pacific Region (SIGIR-AP '24)

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2410.21529
Document Type :
Working Paper
Full Text :
https://doi.org/10.1145/3673791.3698409