
Using Decision Trees produced by Generative Adversarial Imitation Learning to give insight into black box Reinforcement Learning models

Authors :
Meijer, Caspar (author)
Publication Year :
2022

Abstract

Machine learning models are increasingly being used in fields that have a direct impact on human lives. These models are often black boxes, and their lack of transparency and trust is holding back their adoption. To increase transparency and trust, this research investigates whether imitation learning, specifically Generative Adversarial Imitation Learning (GAIL), can be used to give insight into black-box models by extracting decision trees. To this end, GAIL was extended so that it can extract decision trees. The extracted decision trees were then evaluated in terms of performance, fidelity, behavior, and interpretability in three different environments. We find that GAIL is able to extract decision trees with high fidelity and can give insightful information about the expert models. Further research can address more complex environments and black-box models, other surrogate models, and possibilities for more specific local insights.

CSE3000 Research Project
Computer Science and Engineering
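To illustrate the general idea of a decision-tree surrogate for a black-box policy, the following minimal Python sketch fits a shallow tree to state-action pairs collected from an opaque expert and reports fidelity (agreement with the expert). It uses plain behavioral cloning rather than the GAIL-based extraction described in the abstract, and the environment, the expert_policy heuristic, and the tree depth are illustrative assumptions, not details from the thesis.

    # Simplified sketch: decision-tree surrogate of a black-box policy via behavioral cloning.
    # Assumptions: Gymnasium's CartPole-v1 and a toy expert_policy stand in for the paper's
    # black-box RL model; the thesis itself uses a GAIL-based extraction instead.
    import gymnasium as gym
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    env = gym.make("CartPole-v1")

    def expert_policy(obs):
        # Hypothetical black-box expert; a trivial heuristic used only for illustration.
        return int(obs[2] + 0.5 * obs[3] > 0.0)

    # Collect (state, action) pairs by rolling out the black-box expert.
    states, actions = [], []
    for _ in range(50):
        obs, _ = env.reset()
        done = False
        while not done:
            act = expert_policy(obs)
            states.append(obs)
            actions.append(act)
            obs, _, terminated, truncated, _ = env.step(act)
            done = terminated or truncated

    # Fit a shallow decision tree as an interpretable surrogate policy.
    tree = DecisionTreeClassifier(max_depth=3).fit(np.array(states), np.array(actions))

    # Fidelity: how often the surrogate agrees with the expert on the collected states.
    fidelity = tree.score(np.array(states), np.array(actions))
    print(f"fidelity on expert states: {fidelity:.3f}")
    print(export_text(tree, feature_names=["cart_pos", "cart_vel", "pole_angle", "pole_vel"]))

The printed tree can then be read directly as a set of if-then rules over the state features, which is the kind of insight into the expert model that the abstract refers to.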

Details

Database :
OAIster
Notes :
English
Publication Type :
Electronic Resource
Accession number :
edsoai.on1296537554
Document Type :
Electronic Resource