
Towards explainable, compliant and adaptive human-automation interaction

Authors:
Gallina, Barbara
Pacaci, G.
Johnson, D.
McKeever, S.
Hamfelt, A.
Costantini, S.
Dell’Acqua, P.
Crisan, G.-C.
Publication Year:
2021

Abstract

AI-based systems use trained machine learning models to make important decisions in critical contexts. The EU guidelines for trustworthy AI emphasise respect for human autonomy, prevention of harm, fairness, and explicability. Many successful machine learning methods, however, deliver opaque models whose reasons for a decision remain unclear to the end user, so accountability and trust are difficult to ascertain. In this position paper, we focus on AI systems that are expected to interact with humans and propose our visionary reference architecture, ECA-HAI-RefArch (Explainable, Compliant and Adaptive Human-Automation Interaction). ECA-HAI-RefArch supports building intelligent systems in which humans and AIs form teams that learn not only from data but also from each other by playing “serious games”, enabling continuous improvement of the overall system. Finally, conclusions are drawn.

Details

Database:
OAIster
Notes:
English
Publication Type:
Electronic Resource
Accession number:
edsoai.on1337537576
Document Type:
Electronic Resource