1. Explaining the Most Probable Explanation
- Author
-
Butz, R.S.; Hommersom, Arjen; van Eekelen, Marko; Ciucci, Davide; Pasi, Gabriella; Vantaggi, Barbara (Department of Computer Science; RS-Research Line Resilience, part of the RS-Research Program Learning and Innovation in Resilient Systems (LIRS); Academic Field Technology)
- Subjects
Computer science, Most probable explanation, Bayesian networks, Inference, Argumentation theory, Context (language use), Task (project management), Domain (software engineering), Subject-matter expert, Artificial intelligence, Natural language
- Abstract
The use of Bayesian networks has been shown to be powerful for supporting decision making, for example in a medical context. A particularly useful inference task is the most probable explanation (MPE), which provides the most likely assignment to all the random variables that is consistent with the given evidence. A downside of the MPE solution is that it is static and not very informative for (medical) domain experts. To overcome this problem, we were inspired by recent research results on augmenting Bayesian networks with argumentation theory. We use arguments to generate explanations of the MPE solution in natural language, making it more understandable for the domain expert. Moreover, the approach allows decision makers to further explore explanations of different scenarios, providing more insight into why certain alternative explanations are considered less probable than the MPE solution.
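To illustrate the MPE task the abstract describes, the following is a minimal sketch of MPE inference by exhaustive enumeration over a toy Bayesian network (the classic Rain / Sprinkler / GrassWet example). The network structure, all probability values, and the function names are illustrative assumptions, not taken from the paper; real MPE solvers use more efficient algorithms such as max-product variable elimination.

```python
from itertools import product

# Toy network: Rain -> Sprinkler, (Rain, Sprinkler) -> GrassWet.
# All numbers below are made up for illustration.
P_rain = {True: 0.2, False: 0.8}                      # P(R)
P_sprinkler = {True: {True: 0.01, False: 0.4},        # P(S=s | R=r), keyed [s][r]
               False: {True: 0.99, False: 0.6}}
P_wet_true = {                                        # P(W=True | R, S)
    (True, True): 0.99, (True, False): 0.8,
    (False, True): 0.9, (False, False): 0.0,
}

def joint(r, s, w):
    """Full joint probability P(R=r, S=s, W=w) via the chain rule."""
    pw = P_wet_true[(r, s)] if w else 1.0 - P_wet_true[(r, s)]
    return P_rain[r] * P_sprinkler[s][r] * pw

def mpe(evidence_w=True):
    """Most probable assignment to (Rain, Sprinkler) given W = evidence_w.

    Enumerates every assignment to the non-evidence variables and
    returns the one maximizing the joint probability -- exactly the
    'most likely assignment consistent with the given evidence'.
    """
    return max(product([True, False], repeat=2),
               key=lambda rs: joint(rs[0], rs[1], evidence_w))
```

With this parameterization, observing wet grass yields the MPE (Rain=False, Sprinkler=True): the sprinkler, not rain, is the most probable explanation. The paper's contribution is precisely to explain, in natural language, why such an assignment beats its alternatives.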
- Published
- 2018