
Advancing Interactive Explainable AI via Belief Change Theory

Authors:
Rago, Antonio
Martinez, Maria Vanina
Publication Year:
2024

Abstract

As AI models become ever more complex and intertwined in humans' daily lives, greater levels of interactivity of explainable AI (XAI) methods are needed. In this paper, we propose the use of belief change theory as a formal foundation for operators that model the incorporation of new information, i.e., user feedback in interactive XAI, into logical representations of data-driven classifiers. We argue that this type of formalisation provides a framework and a methodology to develop interactive explanations in a principled manner, providing warranted behaviour and favouring the transparency and accountability of such interactions. Concretely, we first define a novel, logic-based formalism to represent explanatory information shared between humans and machines. We then consider real-world scenarios for interactive XAI, with different prioritisations of new and existing knowledge, where our formalism may be instantiated. Finally, we analyse a core set of belief change postulates, discussing their suitability for our real-world settings and pointing to particular challenges that may require the relaxation or reinterpretation of some of the theoretical assumptions underlying existing operators.

Comment: 9 pages. To be published at KR 2024
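The abstract does not include the paper's formalism itself, but the core idea of a revision operator that incorporates user feedback while keeping a classifier's explanatory knowledge consistent can be sketched concretely. The Python toy below is a minimal sketch under assumptions made here, not the paper's logic-based formalism: the propositional atoms, the encoding of beliefs as Python predicates, and the "keep a maximal consistent subsequence of old beliefs" strategy are all hypothetical choices for illustration.

# A minimal, illustrative belief revision sketch in the spirit of
# AGM-style belief change; all names and the strategy are assumptions,
# not the formalism of this paper.
from itertools import product

ATOMS = ["rainy", "wet_grass"]  # hypothetical propositional atoms

def consistent(beliefs):
    """True iff some truth assignment over ATOMS satisfies every belief."""
    for values in product([False, True], repeat=len(ATOMS)):
        v = dict(zip(ATOMS, values))
        if all(b(v) for b in beliefs):
            return True
    return False

def revise(base, new):
    """Prioritised revision: the new belief always survives (the 'success'
    postulate), and old beliefs are kept only while consistency holds.
    Principled operators use better selection; this linear scan is a toy."""
    kept = [new]
    for old in base:
        if consistent(kept + [old]):
            kept.append(old)
    return kept

# Existing explanatory knowledge attributed to a classifier:
base = [
    lambda v: (not v["rainy"]) or v["wet_grass"],  # rainy -> wet_grass
    lambda v: v["rainy"],                          # rainy
]
# User feedback in interactive XAI contradicts the model: the grass is dry.
feedback = lambda v: not v["wet_grass"]

revised = revise(base, feedback)
print(consistent(revised))  # True: the conflicting belief 'rainy' was dropped

Because the scan visits old beliefs in order, a belief's position in the base acts as its priority; reordering the base is one crude way to model the abstract's point that different prioritisations of new and existing knowledge yield different revision behaviours.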

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2408.06875
Document Type:
Working Paper