1. Sparsh: Self-supervised touch representations for vision-based tactile sensing
- Authors
Carolina Higuera, Akash Sharma, Chaithanya Krishna Bodduluri, Taosha Fan, Patrick Lancaster, Mrinal Kalakrishnan, Michael Kaess, Byron Boots, Mike Lambeta, Tingfan Wu, and Mustafa Mukadam
- Subjects
Computer Science - Robotics
- Abstract
In this work, we introduce general-purpose touch representations for the increasingly accessible class of vision-based tactile sensors. Such sensors have led to many recent advances in robot manipulation, as they markedly complement vision, yet today's solutions often rely on task- and sensor-specific handcrafted perception models. Collecting real data at scale with task-centric ground-truth labels, like contact forces and slip, is a challenge further compounded by sensors of various form factors that differ in aspects like lighting and gel markings. To tackle this, we turn to self-supervised learning (SSL), which has demonstrated remarkable performance in computer vision. We present Sparsh, a family of SSL models that can support various vision-based tactile sensors, alleviating the need for custom labels through pre-training on 460k+ tactile images with masking and self-distillation in pixel and latent spaces. We also build TacBench to facilitate standardized benchmarking across sensors and models, comprising six tasks that range from comprehending tactile properties to enabling physical perception and manipulation planning. In evaluations, we find that SSL pre-training for touch representations outperforms task- and sensor-specific end-to-end training by 95.1% on average over TacBench, and that Sparsh (DINO) and Sparsh (IJEPA) are the most competitive, indicating the merits of learning in latent space for tactile images. Project page: https://sparsh-ssl.github.io/
- Comment
Conference on Robot Learning (CoRL), 2024
- Published
2024
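
The abstract's mention of pre-training with masking and self-distillation in latent space can be made concrete with a minimal sketch. The snippet below is not the authors' implementation; `TactileEncoder`, the mask ratio, and the EMA schedule are hypothetical stand-ins for the general objective the paper describes: a student encoder sees a masked tactile image, is trained to predict the teacher's latents for the masked patches, and the teacher is updated as an exponential moving average of the student.

```python
# Illustrative sketch only (assumed names, not the Sparsh codebase):
# masked self-distillation in latent space with an EMA teacher.
import copy
import torch
import torch.nn as nn

class TactileEncoder(nn.Module):
    """Toy patch encoder standing in for a ViT over tactile images."""
    def __init__(self, patch_dim=768, embed_dim=256):
        super().__init__()
        self.proj = nn.Sequential(nn.Linear(patch_dim, embed_dim), nn.GELU(),
                                  nn.Linear(embed_dim, embed_dim))

    def forward(self, patches):           # patches: (B, N, patch_dim)
        return self.proj(patches)         # latents: (B, N, embed_dim)

student = TactileEncoder()
teacher = copy.deepcopy(student)          # EMA teacher, never backpropagated
for p in teacher.parameters():
    p.requires_grad_(False)
predictor = nn.Linear(256, 256)           # predicts masked-patch latents
opt = torch.optim.AdamW(
    list(student.parameters()) + list(predictor.parameters()), lr=1e-4)

def train_step(patches, mask_ratio=0.5, ema=0.996):
    B, N, _ = patches.shape
    mask = torch.rand(B, N) < mask_ratio        # True = masked patch
    with torch.no_grad():
        target = teacher(patches)               # teacher sees the full image
    visible = patches * (~mask).unsqueeze(-1)   # zero out masked patches
    pred = predictor(student(visible))          # student predicts latents
    loss = (pred - target)[mask].pow(2).mean()  # loss only on masked patches
    opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():                       # EMA update of the teacher
        for pt, ps in zip(teacher.parameters(), student.parameters()):
            pt.mul_(ema).add_(ps, alpha=1 - ema)
    return loss.item()

# One step on random stand-in "tactile patches" (batch 4, 196 patches).
print(train_step(torch.randn(4, 196, 768)))
```

Predicting in latent space rather than reconstructing pixels is, per the abstract, what distinguishes the best-performing variants (Sparsh (DINO) and Sparsh (IJEPA)): the loss compares encoder representations instead of raw tactile images.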