
Effects of Explanation Types on User Satisfaction and Performance in Human-agent Teams.

Authors :
Lavender, Bryan
Abuhaimed, Sami
Sen, Sandip
Source :
International Journal on Artificial Intelligence Tools. May 2024, Vol. 33, Issue 3, p1-22. 22p.
Publication Year :
2024

Abstract

Automated agents, with rapidly increasing capabilities and ease of deployment, will assume increasingly key and decisive roles in our societies. We will encounter and work together with such agents in diverse domains, and even in peer roles. To be trusted and to coordinate seamlessly, these agents will be expected and required to explain their decision making, behaviors, and recommendations. We are interested in developing mechanisms that human-agent teams can use to maximally leverage the relative strengths of human and automated reasoners. In particular, we focus on ad hoc teams, in which team members start to collaborate, often in response to emergencies or short-term opportunities, without significant prior knowledge of each other. In this study, we use virtual ad hoc teams, each consisting of a human and an agent, that collaborate over a few episodes, where each episode requires them to complete a set of tasks chosen from available task types. Team members are initially unaware of their partners' capabilities for the available task types, and the agent task allocator must adapt the allocation process to maximize team performance. In collaborative teams of humans and agents, it is important to establish user confidence and satisfaction as well as to produce effective team performance. Explanations can increase user trust in agent team members and in team decisions. The focus of this paper is on analyzing how explanations of task allocation decisions influence both user performance and the human workers' perspective, including factors such as motivation and satisfaction. We evaluate different explanation types, such as positive, strength-based explanations and negative, weakness-based explanations, to understand (a) how satisfaction and performance improve when explanations are presented, and (b) how factors such as confidence, understandability, motivation, and explanatory power correlate with satisfaction and performance.
We run experiments on the CHATboard platform that allows virtual collaboration over multiple episodes of task assignments, with MTurk workers. We present our analysis of the results and conclusions related to our research hypotheses. [ABSTRACT FROM AUTHOR]

Details

Language :
English
ISSN :
0218-2130
Volume :
33
Issue :
3
Database :
Academic Search Index
Journal :
International Journal on Artificial Intelligence Tools
Publication Type :
Academic Journal
Accession number :
176779976
Full Text :
https://doi.org/10.1142/S0218213024600042