Examining the effects of power status of an explainable artificial intelligence system on users' perceptions.
- Source :
- Behaviour & Information Technology; Apr2022, Vol. 41 Issue 5, p946-958, 13p, 2 Diagrams, 8 Charts
- Publication Year :
- 2022
-
Abstract
- Contrary to the traditional concept of artificial intelligence, explainable artificial intelligence (XAI) aims to provide explanations for prediction results and to make users perceive the system as reliable. However, despite its importance, only a few studies have investigated how the explanations of an XAI system should be designed. This study investigates how people attribute perceived ability to XAI systems based on perceived attributional qualities, and how the power status of the XAI and anthropomorphism affect the attribution process. In a laboratory experiment, participants (N = 500) read a scenario about using an XAI system with either lower or higher power status and reported their perceptions of the system. Results indicated that an XAI system with a higher power status caused users to perceive the outputs of the XAI system as more controllable by intention, and higher perceived stability and uncontrollability resulted in greater confidence in the system's ability. The effect of perceived controllability on perceived ability was moderated by the extent to which participants anthropomorphised the system. Several design implications for XAI systems are suggested based on our findings. [ABSTRACT FROM AUTHOR]
Details
- Language :
- English
- ISSN :
- 0144-929X
- Volume :
- 41
- Issue :
- 5
- Database :
- Complementary Index
- Journal :
- Behaviour & Information Technology
- Publication Type :
- Academic Journal
- Accession number :
- 156709143
- Full Text :
- https://doi.org/10.1080/0144929X.2020.1846789