
Inverse Reinforcement Learning with Missing Data

Authors :
Mai, Tien
Nguyen, Quoc Phong
Low, Kian Hsiang
Jaillet, Patrick
Publication Year :
2019

Abstract

We consider the problem of recovering an expert's reward function with inverse reinforcement learning (IRL) when there are missing/incomplete state-action pairs or observations in the demonstrated trajectories. Missing trajectory data arises in many situations, e.g., when GPS signals from vehicles moving on a road network are intermittent. In this paper, we propose a tractable approach to directly compute the log-likelihood of demonstrated trajectories with incomplete/missing data. Our algorithm efficiently handles a large number of missing segments in the demonstrated trajectories: it performs training with incomplete data by solving a sequence of systems of linear equations, and the number of such systems does not depend on the number of missing segments. Empirical evaluation on a real-world dataset shows that our training algorithm outperforms other conventional techniques.
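To make the likelihood idea concrete, below is a minimal, purely illustrative Python sketch of marginalizing one missing segment with a single linear solve instead of enumerating unobserved paths. It assumes the expert's behaviour is summarized by a known state-transition matrix P (an assumption made here for illustration only; the paper works with reward parameters and demonstrated trajectories directly), and every name in the snippet is hypothetical rather than taken from the paper.

import numpy as np

def missing_segment_loglik(P, s_before, s_after):
    # Log-probability that a trajectory leaving state s_before next reaches
    # state s_after, marginalizing over all unobserved intermediate states.
    # Computed by one linear solve (absorption probability with s_after
    # treated as absorbing), not by enumerating missing paths.
    n = P.shape[0]
    # Keep only states that can still move; other absorbing states would
    # make (I - Q) singular and can never lead to s_after anyway.
    absorbing = {s for s in range(n) if P[s, s] == 1.0 and s != s_after}
    transient = [s for s in range(n) if s != s_after and s not in absorbing]
    Q = P[np.ix_(transient, transient)]   # moves among transient states
    b = P[transient, s_after]             # one-step jumps into s_after
    h = np.linalg.solve(np.eye(len(transient)) - Q, b)
    return float(np.log(h[transient.index(s_before)]))

# Toy example: states 0-2 are intermediate, 3 is the next observed state,
# 4 is an absorbing "off-route" sink; the segment between 0 and 3 is missing.
P = np.array([[0.0, 0.6, 0.4, 0.0, 0.0],
              [0.0, 0.0, 0.5, 0.4, 0.1],
              [0.0, 0.3, 0.0, 0.6, 0.1],
              [0.0, 0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 0.0, 1.0]])
print(missing_segment_loglik(P, s_before=0, s_after=3))  # approx. -0.183

The point mirrored here is the one highlighted in the abstract: the cost of the solve does not grow with the number of unobserved steps inside the missing segment.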

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.1911.06930
Document Type :
Working Paper