
Learning and Querying Fast Generative Models for Reinforcement Learning

Authors :
Buesing, Lars
Weber, Theophane
Racaniere, Sebastien
Eslami, S. M. Ali
Rezende, Danilo
Reichert, David P.
Viola, Fabio
Besse, Frederic
Gregor, Karol
Hassabis, Demis
Wierstra, Daan
Publication Year :
2018

Abstract

A key challenge in model-based reinforcement learning (RL) is to synthesize computationally efficient and accurate environment models. We show that carefully designed generative models that learn and operate on compact state representations, so-called state-space models, substantially reduce the computational cost of predicting the outcomes of action sequences. Extensive experiments establish that state-space models accurately capture the dynamics of Atari games from the Arcade Learning Environment, learned from raw pixels. Because state-space models are fast while maintaining high accuracy, their application in RL becomes feasible: we show that agents which query these models for decision making outperform strong model-free baselines on the game MSPACMAN, highlighting the potential of learned environment models for planning.
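The core idea in the abstract, rolling an action sequence forward in a compact latent state rather than in pixel space, can be sketched in a few lines. This is a hypothetical toy illustration, not the paper's actual architecture: the encoder, the linear latent dynamics, and the decoder below are all stand-in assumptions chosen only to show why latent rollouts are cheap (observations are decoded once at the end, not at every step).

```python
# Toy sketch of a latent (state-space) rollout. All functions here are
# hypothetical placeholders for learned networks.

def encode(observation):
    """Hypothetical encoder: compress an observation to a 2-d state."""
    return [sum(observation) / len(observation), max(observation)]

def transition(state, action):
    """Hypothetical latent dynamics: next state from (state, action)."""
    return [state[0] + 0.1 * action, 0.9 * state[1] + 0.1 * action]

def decode(state):
    """Hypothetical decoder: expand a state back to observation size."""
    return [state[0]] * 4

def rollout(observation, actions):
    """Predict the outcome of an action sequence entirely in latent space.

    Only the final state is decoded; skipping per-step pixel rendering
    is the source of the computational savings the abstract describes.
    """
    state = encode(observation)
    for a in actions:
        state = transition(state, a)
    return decode(state)

predicted = rollout([0.2, 0.4, 0.6, 0.8], actions=[1, 0, 1])
print(predicted)
```

An agent that queries such a model for planning would call `rollout` once per candidate action sequence and score the decoded predictions, which stays cheap as long as the latent state is small.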

Subjects

Computer Science - Learning

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.1802.03006
Document Type :
Working Paper