
How Much Reliability Is Enough? A Context-Specific View on Human Interaction With (Artificial) Agents From Different Perspectives

Authors :
Appelganc, Ksenia
Rieger, Tobias
Roesler, Eileen
Manzey, Dietrich
Source :
Journal of Cognitive Engineering and Decision Making; December 2022, Vol. 16 Issue: 4 p207-221, 15p
Publication Year :
2022

Abstract

Tasks classically performed by human–human teams in today’s workplaces are increasingly given to human–technology teams instead. The role of technology is played not only by classic decision support systems (DSSs) but increasingly by artificial intelligence (AI). Reliability is a key factor influencing trust in technology. Therefore, we investigated the reliability participants require in order to perceive the support agents (human, AI, and DSS) as “highly reliable.” We then examined how trust differed between these highly reliable agents. Whilst there is a range of research identifying trust as an important determinant in human–DSS interaction, the question is whether these findings are also applicable to the interaction between humans and AI. To study these issues, we conducted an experiment (N = 300) with two different tasks usually performed by dyadic teams (loan assignment and x-ray screening), from two different perspectives (i.e., working together with the agent or being evaluated by it). In contrast to our hypotheses, the required reliability when working together was equal regardless of the agent. Nevertheless, participants trusted the human more than an AI or DSS. They also required an AI to be more reliable than a human when it was used to evaluate them, illustrating the importance of perspective.

Details

Language :
English
ISSN :
15553434
Volume :
16
Issue :
4
Database :
Supplemental Index
Journal :
Journal of Cognitive Engineering and Decision Making
Publication Type :
Periodical
Accession number :
ejs60331445
Full Text :
https://doi.org/10.1177/15553434221104615