
Applying and verifying an explainability method based on policy graphs in the context of reinforcement learning

Authors :
Dmitry Gnatyshak
Antoni Climent
Sergio Alvarez-Napagao
Universitat Politècnica de Catalunya. Doctorat en Intel·ligència Artificial
Universitat Politècnica de Catalunya. Departament de Ciències de la Computació
Barcelona Supercomputing Center
Universitat Politècnica de Catalunya. KEMLG - Grup d'Enginyeria del Coneixement i Aprenentatge Automàtic
Source :
UPCommons. Portal del coneixement obert de la UPC, Universitat Politècnica de Catalunya (UPC), CCIA
Publication Year :
2021
Publisher :
IOS Press, 2021.

Abstract

Advances in explainability techniques are highly relevant to the field of Reinforcement Learning (RL), and their application can benefit the development of intelligent agents that are understandable by humans and able to cooperate with them. For Deep RL, some approaches already exist in the literature, but a common problem is that it can be difficult to determine whether the explanations generated for an agent truly reflect the behaviour of the trained agent. In this work we apply an approach to explainability based on the construction of a Policy Graph (PG) that represents the agent’s behaviour. Our main contribution is a way to measure the similarity between the explanations and the agent’s behaviour: we build another agent that follows a policy derived from the explainability method and compare the behaviour of both agents.
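As a rough illustration of the idea summarised above, the sketch below shows one way a policy graph could be built from logged transitions and turned into a surrogate agent whose behaviour can then be compared with the original agent's. This is not the authors' implementation: the `discretize` helper, the function names, and the action-agreement metric are assumptions introduced here for illustration only.

```python
# Minimal sketch (assumed interfaces, not the paper's code): build a policy
# graph from logged (state, action, next_state) transitions, derive a
# surrogate agent from it, and compare its choices with the original agent's.
from collections import defaultdict
import random


def build_policy_graph(trajectories, discretize):
    """Count how often each action was taken in each discretized state."""
    action_counts = defaultdict(lambda: defaultdict(int))
    for episode in trajectories:
        for state, action, _next_state in episode:
            action_counts[discretize(state)][action] += 1
    return action_counts


def pg_policy(action_counts, discretize, fallback_actions):
    """Return a policy that picks the most frequent recorded action for a state."""
    def act(state):
        counts = action_counts.get(discretize(state))
        if not counts:
            # State never seen in the logged trajectories: fall back to a random action.
            return random.choice(fallback_actions)
        return max(counts, key=counts.get)
    return act


def behaviour_agreement(agent_act, pg_act, states):
    """One possible similarity measure: fraction of states where both agents agree."""
    matches = sum(agent_act(s) == pg_act(s) for s in states)
    return matches / len(states)
```

In this sketch, `discretize` stands in for whatever mapping from raw observations to predicate-like discrete states is used to build the graph, and `behaviour_agreement` is only one simple way to compare the two agents; the paper's actual comparison may differ.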

Details

Language :
English
Database :
OpenAIRE
Journal :
UPCommons. Portal del coneixement obert de la UPC, Universitat Politècnica de Catalunya (UPC), CCIA
Accession number :
edsair.doi.dedup.....78581ca70f27de3b7769d9a65d228a2d