51. HiTZ@Antidote: Argumentation-driven Explainable Artificial Intelligence for Digital Medicine
- Authors
Rodrigo Agerri, Iñigo Alonso, Aitziber Atutxa, Ander Berrondo, Ainara Estarrona, Iker Garcia-Ferrero, Iakes Goenaga, Koldo Gojenola, Maite Oronoz, Igor Perez-Tejedor, German Rigau, and Anar Yeginbergenova
- Subjects
Computer Science - Computation and Language; Computer Science - Artificial Intelligence
- Abstract
Providing high-quality explanations for AI predictions based on machine learning is a challenging and complex task. To work well it requires, among other factors: selecting a proper level of generality/specificity for the explanation; considering assumptions about the explanation beneficiary's familiarity with the AI task under consideration; referring to the specific elements that contributed to the decision; making use of additional knowledge (e.g., expert evidence) which might not be part of the prediction process; and providing evidence that supports negative hypotheses. Finally, the system needs to formulate the explanation in a clearly interpretable, and possibly convincing, way. Given these considerations, ANTIDOTE fosters an integrated vision of explainable AI, where low-level characteristics of the deep learning process are combined with higher-level schemes characteristic of human argumentation. ANTIDOTE will exploit cross-disciplinary competences in deep learning and argumentation to support a broader and innovative view of explainable AI, where the need for high-quality explanations for deliberation on clinical cases is critical. As a first result of the project, we publish the Antidote CasiMedicos dataset to facilitate research on explainable AI in general, and on argumentation in the medical domain in particular.
- Comment
To appear in SEPLN 2023: 39th International Conference of the Spanish Society for Natural Language Processing
- Published
2023
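For readers who want to inspect the released Antidote CasiMedicos dataset, a minimal sketch using the Hugging Face `datasets` library is shown below. The entry only names the dataset, so the Hub identifier `HiTZ/casimedicos-exp` used here is an assumption; check the HiTZ organization on the Hugging Face Hub for the actual published name and splits.

```python
from datasets import load_dataset

# NOTE: the identifier below is a hypothetical placeholder; the entry
# above only names "Antidote CasiMedicos", so verify the published
# identifier on the HiTZ organization page of the Hugging Face Hub.
dataset = load_dataset("HiTZ/casimedicos-exp")

print(dataset)               # show the available splits and their features
split = next(iter(dataset))  # name of the first available split
print(dataset[split][0])     # print the first clinical case record
```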