
Visual Semantic Planning using Deep Successor Representations

Authors:
Zhu, Yuke
Gordon, Daniel
Kolve, Eric
Fox, Dieter
Fei-Fei, Li
Gupta, Abhinav
Mottaghi, Roozbeh
Farhadi, Ali
Publication Year: 2017

Abstract

A crucial capability of real-world intelligent agents is their ability to plan a sequence of actions to achieve their goals in the visual world. In this work, we address the problem of visual semantic planning: the task of predicting a sequence of actions from visual observations that transform a dynamic environment from an initial state to a goal state. Doing so entails knowledge about objects and their affordances, as well as actions and their preconditions and effects. We propose learning these through interaction with a visual and dynamic environment. Our proposed solution bootstraps reinforcement learning with imitation learning. To ensure cross-task generalization, we develop a deep predictive model based on successor representations. Our experiments show near-optimal results across a wide range of tasks in the challenging THOR environment.

Comment: ICCV 2017 camera ready
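
The cross-task generalization the abstract attributes to successor representations stems from factoring the action-value function into task-agnostic successor features psi(s, a) and task-specific reward weights w, so that Q(s, a) = psi(s, a) . w. The following is a minimal tabular sketch of that idea in Python; it illustrates the general technique rather than the paper's deep model, and all names (N_STATES, GAMMA, td_update, and so on) are hypothetical.

import numpy as np

N_STATES, N_ACTIONS = 5, 2
GAMMA, ALPHA = 0.95, 0.1

# Tabular successor features: psi[s, a] estimates the expected discounted
# sum of one-hot state indicators encountered after taking action a in state s.
psi = np.zeros((N_STATES, N_ACTIONS, N_STATES))
w = np.random.randn(N_STATES)  # hypothetical task-specific reward weights, r(s) = w[s]

def td_update(s, a, s_next, a_next):
    # One TD(0) update of the successor features for the transition (s, a) -> s_next.
    phi = np.eye(N_STATES)[s]  # one-hot feature of the current state
    target = phi + GAMMA * psi[s_next, a_next]
    psi[s, a] += ALPHA * (target - psi[s, a])

def q_value(s, a):
    # The task reward enters only through w; psi can be reused across tasks.
    return psi[s, a] @ w

# Example: one update on a hypothetical transition (s=0, a=1) -> s'=3, then a Q query.
td_update(s=0, a=1, s_next=3, a_next=0)
print(q_value(0, 1))

Because psi is learned independently of w, retargeting the agent to a new task amounts to swapping in new reward weights, which is the decoupling that makes successor representations attractive for transfer across tasks.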

Details

Database: arXiv
Publication Type: Report
Accession Number: edsarx.1705.08080
Document Type: Working Paper