
Learning Causal State Representations of Partially Observable Environments

Authors:
Zhang, Amy
Lipton, Zachary C.
Pineda, Luis
Azizzadenesheli, Kamyar
Anandkumar, Anima
Itti, Laurent
Pineau, Joelle
Furlanello, Tommaso
Publication Year:
2019

Abstract

Intelligent agents can cope with sensory-rich environments by learning task-agnostic state abstractions. In this paper, we propose an algorithm to approximate causal states, which are the coarsest partition of the joint history of actions and observations in partially observable Markov decision processes (POMDPs). Our method learns approximate causal state representations from RNNs trained to predict subsequent observations given the history. We demonstrate that these learned state representations are useful for learning policies efficiently in reinforcement learning problems with rich observation spaces. We connect causal states with causal feature sets from the causal inference literature, and provide theoretical guarantees on the optimality of the continuous version of this causal state representation under Lipschitz assumptions by proving equivalence to bisimulation, a relation between behaviorally equivalent systems. This yields lower bounds on the optimal value function of the learned representation, which are tight under certain assumptions. Finally, we empirically evaluate causal state representations on multiple partially observable tasks and compare against prior methods.

35 pages, 8 figures
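To make the approach described in the abstract concrete, the sketch below shows a recurrent next-observation predictor whose hidden state can serve as an approximate causal state representation. This is a minimal PyTorch-style sketch, not the authors' implementation; the names NextObsPredictor and train_step are illustrative, and the paper's full method involves more than this predictive backbone.

    # Minimal sketch (not the paper's code): an RNN trained to predict the
    # next observation from the joint action-observation history. Its hidden
    # state is used as an approximate causal state. All names are illustrative.
    import torch
    import torch.nn as nn

    class NextObsPredictor(nn.Module):
        def __init__(self, obs_dim, act_dim, hidden_dim=128):
            super().__init__()
            # The GRU consumes (observation, action) pairs, i.e. the history.
            self.rnn = nn.GRU(obs_dim + act_dim, hidden_dim, batch_first=True)
            # Predict the next observation from the current hidden state.
            self.head = nn.Linear(hidden_dim, obs_dim)

        def forward(self, obs, act, h0=None):
            # obs: (batch, T, obs_dim); act: (batch, T, act_dim)
            x = torch.cat([obs, act], dim=-1)
            states, hT = self.rnn(x, h0)       # states: approximate causal states per step
            pred_next_obs = self.head(states)  # predictions for the next observation
            return pred_next_obs, states, hT

    def train_step(model, optimizer, obs, act, next_obs):
        # Minimize next-observation prediction error over the history.
        pred, states, _ = model(obs, act)
        loss = nn.functional.mse_loss(pred, next_obs)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

A downstream policy can then be trained on the learned states (rather than raw observations), which is the sense in which the representation supports efficient policy learning in rich observation spaces.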

Details

Language:
English
Database:
OpenAIRE
Accession number:
edsair.doi.dedup.....c228ec489cf5b71b4046ac0b31ea2614