
Learning Manipulation by Predicting Interaction

Authors:
Zeng, Jia
Bu, Qingwen
Wang, Bangjun
Xia, Wenke
Chen, Li
Dong, Hao
Song, Haoming
Wang, Dong
Hu, Di
Luo, Ping
Cui, Heming
Zhao, Bin
Li, Xuelong
Qiao, Yu
Li, Hongyang
Publication Year:
2024

Abstract

Representation learning approaches for robotic manipulation have flourished in recent years. Due to the scarcity of in-domain robot data, prevailing methodologies tend to leverage large-scale human video datasets to extract generalizable features for visuomotor policy learning. Despite the progress achieved, prior endeavors disregard the interactive dynamics that capture behavior patterns and physical interaction during the manipulation process, resulting in an inadequate understanding of the relationship between objects and the environment. To this end, we propose a general pre-training pipeline that learns Manipulation by Predicting the Interaction (MPI) and enhances the visual representation. Given a pair of keyframes representing the initial and final states, along with a language instruction, our algorithm predicts the transition frame and detects the interaction object, respectively. These two learning objectives achieve superior comprehension of "how-to-interact" and "where-to-interact". We conduct a comprehensive evaluation on several challenging robotic tasks. The experimental results demonstrate that MPI improves on the previous state of the art by 10% to 64% on real-world robot platforms as well as in simulation environments. Code and checkpoints are publicly shared at https://github.com/OpenDriveLab/MPI.

Comment: Accepted to RSS 2024. Project page: https://github.com/OpenDriveLab/MPI
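To make the two pre-training objectives concrete, below is a minimal PyTorch sketch of the pipeline the abstract describes: encode the initial and final keyframes together with a language instruction, then supervise one head to predict the transition frame ("how-to-interact") and another to detect the interaction object ("where-to-interact"). All module names, shapes, and loss choices here are illustrative assumptions, not the authors' implementation; the actual code is at https://github.com/OpenDriveLab/MPI.

import torch
import torch.nn as nn

class MPISketch(nn.Module):
    """Hypothetical two-objective pre-training model, sketched from the abstract."""
    def __init__(self, embed_dim: int = 256):
        super().__init__()
        # Shared visual encoder for both keyframes (assumption: a small CNN
        # stands in for the paper's vision backbone).
        self.visual_encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=4, stride=4), nn.ReLU(),
            nn.Conv2d(32, embed_dim, kernel_size=4, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Language instruction assumed pre-embedded to a fixed-size vector.
        self.lang_proj = nn.Linear(512, embed_dim)
        fused = 3 * embed_dim  # initial + final + language features
        # Objective 1 ("how-to-interact"): predict the transition frame,
        # decoded to a low-resolution image purely for brevity.
        self.frame_head = nn.Sequential(
            nn.Linear(fused, 3 * 32 * 32), nn.Unflatten(1, (3, 32, 32)))
        # Objective 2 ("where-to-interact"): detect the interaction object
        # as a normalized bounding box (x1, y1, x2, y2).
        self.box_head = nn.Sequential(nn.Linear(fused, 4), nn.Sigmoid())

    def forward(self, frame_init, frame_final, lang_embed):
        z = torch.cat([
            self.visual_encoder(frame_init),
            self.visual_encoder(frame_final),
            self.lang_proj(lang_embed),
        ], dim=-1)
        return self.frame_head(z), self.box_head(z)

# Toy pre-training step combining both losses on random stand-in data.
model = MPISketch()
init_f = torch.randn(2, 3, 128, 128)   # initial-state keyframes
final_f = torch.randn(2, 3, 128, 128)  # final-state keyframes
lang = torch.randn(2, 512)             # instruction embeddings
gt_frame = torch.randn(2, 3, 32, 32)   # ground-truth transition frames
gt_box = torch.rand(2, 4)              # ground-truth interaction boxes

pred_frame, pred_box = model(init_f, final_f, lang)
loss = nn.functional.mse_loss(pred_frame, gt_frame) \
     + nn.functional.l1_loss(pred_box, gt_box)
loss.backward()

The design point the sketch illustrates is that both heads share one fused representation of the state transition and the instruction, so the visual encoder is pushed to capture interactive dynamics rather than static appearance alone.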

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2406.00439
Document Type:
Working Paper