
Exploring Visual Pre-training for Robot Manipulation: Datasets, Models and Methods

Authors :
Jing, Ya
Zhu, Xuelin
Liu, Xingbin
Sima, Qie
Yang, Taozheng
Feng, Yunhai
Kong, Tao
Publication Year :
2023

Abstract

Visual pre-training with large-scale real-world data has made great progress in recent years, showing great potential for robot learning with pixel observations. However, recipes for visual pre-training on robot manipulation tasks have yet to be established. In this paper, we thoroughly investigate the effects of visual pre-training strategies on robot manipulation tasks from three fundamental perspectives: pre-training datasets, model architectures, and training methods. Several significant experimental findings that are beneficial for robot learning are provided. Further, we propose a visual pre-training scheme for robot manipulation termed Vi-PRoM, which combines self-supervised and supervised learning. Concretely, the former employs contrastive learning to acquire underlying patterns from large-scale unlabeled data, while the latter aims to learn visual semantics and temporal dynamics. Extensive experiments on robot manipulation in various simulation environments and on a real robot demonstrate the superiority of the proposed scheme. Videos and more details can be found at https://explore-pretrain-robot.github.io.

Comment: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2023
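As a concrete illustration of the combined objective the abstract describes, the sketch below pairs a contrastive (InfoNCE) term on unlabeled image views with supervised heads for visual semantics and temporal dynamics. The encoder, head modules, batch keys, and loss weights are illustrative assumptions for a generic setup of this kind, not the authors' actual Vi-PRoM implementation.

```python
# Minimal sketch: contrastive pre-training plus supervised semantics and
# temporal-dynamics terms. All module and key names are hypothetical.
import torch
import torch.nn.functional as F


def info_nce(z1, z2, temperature=0.1):
    """InfoNCE contrastive loss between two augmented views (B x D each)."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature           # B x B similarity matrix
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)      # positives lie on the diagonal


def pretrain_step(encoder, sem_head, dyn_head, batch, w_sem=1.0, w_dyn=1.0):
    """One combined step: contrastive + semantic + temporal-dynamics losses."""
    # Self-supervised term: two augmented views of the same frames.
    z1 = encoder(batch["view1"])
    z2 = encoder(batch["view2"])
    loss_con = info_nce(z1, z2)

    # Supervised semantics: e.g. classify the object/scene label of a frame.
    loss_sem = F.cross_entropy(sem_head(z1), batch["sem_label"])

    # Supervised temporal dynamics: e.g. predict the temporal relation
    # between a frame and a later frame from the same video clip.
    z_next = encoder(batch["future_frame"])
    loss_dyn = F.cross_entropy(
        dyn_head(torch.cat([z1, z_next], dim=1)), batch["dyn_label"]
    )

    return loss_con + w_sem * loss_sem + w_dyn * loss_dyn
```

The weighted sum reflects the paper's high-level recipe (self-supervised plus supervised signals); the specific pretext tasks and weighting here are placeholders one would tune per dataset.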

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2308.03620
Document Type :
Working Paper