'That's (not) the output I expected!' On the role of end user expectations in creating explanations of AI systems
- Source :
- Artificial Intelligence, 298
- Publication Year :
- 2021
- Publisher :
- Elsevier BV, 2021.
Abstract
- Research in the social sciences has shown that expectations are an important factor in explanations as used between humans: rather than explaining the cause of an event per se, the explainer will often address another event that did not occur but that the explainee might have expected. For AI-powered systems, this finding suggests that explanation-generating systems may need to identify such end user expectations. In general, this is a challenging task, not least because users often keep their expectations implicit; there is thus a need to investigate the importance of such an ability. In this paper, we report an empirical study with 181 participants who were shown outputs from a text classifier along with an explanation of why the system chose a particular class for each text. Explanations were either factual, explaining why the system produced a certain output, or counterfactual, explaining why the system produced one output instead of another. Our main hypothesis was that explanations should align with end user expectations; that is, a factual explanation should be given when the system's output is in line with end user expectations, and a counterfactual explanation when it is not. We find that factual explanations are indeed appropriate when expectations and output match. When they do not, neither factual nor counterfactual explanations appear appropriate, although we do find indications that our counterfactual explanations contained at least some necessary elements. Overall, this suggests that it is important for systems that create explanations of AI systems to infer what outputs the end user expected, so that factual explanations can be generated at the appropriate moments. At the same time, this information is, by itself, not sufficient to also create appropriate explanations when the output and user expectations do not match.
This is somewhat surprising given investigations of explanations in the social sciences, and will need more scrutiny in future studies.
- Subjects :
- Counterfactual thinking
Linguistics and Language
Scrutiny
Computer science
AI systems
02 engineering and technology
User expectations
Language and Linguistics
Task (project management)
Text classifiers
Mental models
Empirical research
Artificial Intelligence
020204 information systems
0202 electrical engineering, electronic engineering, information engineering
Machine behaviour
Factual
Class (computer programming)
Explanations
Classification (of information)
Event (computing)
End user
Human-AI interaction
Expectations
Cognitive artificial intelligence
Human Computer Interaction
Human-Computer Interaction (interaction design)
End users
Empirical studies
Explainable AI
020201 artificial intelligence & image processing
Counterfactual
Generating system
Behavioral research
Contrastive
Cognitive psychology
Details
- ISSN :
- 0004-3702
- Volume :
- 298
- Database :
- OpenAIRE
- Journal :
- Artificial Intelligence
- Accession number :
- edsair.doi.dedup.....855fb340ca39d9e9b01e960b0031724a