
Moving Object Segmentation: All You Need Is SAM (and Flow)

Authors :
Xie, Junyu
Yang, Charig
Xie, Weidi
Zisserman, Andrew
Publication Year :
2024

Abstract

The objective of this paper is motion segmentation -- discovering and segmenting the moving objects in a video. This is a much-studied area with numerous careful, and sometimes complex, approaches and training schemes, including: self-supervised learning, learning from synthetic datasets, object-centric representations, amodal representations, and many more. Our interest in this paper is to determine if the Segment Anything Model (SAM) can contribute to this task. We investigate two models for combining SAM with optical flow that harness the segmentation power of SAM with the ability of flow to discover and group moving objects. In the first model, we adapt SAM to take optical flow, rather than RGB, as an input. In the second, SAM takes RGB as an input, and flow is used as a segmentation prompt. These surprisingly simple methods, without any further modifications, outperform all previous approaches by a considerable margin in both single- and multi-object benchmarks. We also extend these frame-level segmentations to sequence-level segmentations that maintain object identity. Again, this simple model outperforms previous methods on multiple video object segmentation benchmarks.

Comment: Project Page: https://www.robots.ox.ac.uk/~vgg/research/flowsam/
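
To make the second model concrete, the sketch below shows one possible way to prompt an off-the-shelf SAM with a point derived from optical flow, using Meta's segment_anything package. This is not the authors' FlowSAM implementation (see the project page for that); the single-peak prompt heuristic, the checkpoint path, and the precomputed flow field are assumptions made here purely for illustration.

# Hedged sketch, not the authors' FlowSAM code: use optical flow to derive a
# point prompt for a standard SAM predictor, roughly in the spirit of model 2.
# Assumes a (H, W, 3) uint8 RGB frame and a precomputed (H, W, 2) flow field.
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

def segment_moving_object(rgb_frame, flow, sam_checkpoint="sam_vit_h.pth"):
    # Locate the region of strongest motion and use its peak as a point prompt.
    flow_mag = np.linalg.norm(flow, axis=-1)            # (H, W) motion magnitude
    y, x = np.unravel_index(np.argmax(flow_mag), flow_mag.shape)

    # Standard SAM promptable-segmentation interface.
    sam = sam_model_registry["vit_h"](checkpoint=sam_checkpoint)
    predictor = SamPredictor(sam)
    predictor.set_image(rgb_frame)                      # RGB input, as in the second model
    masks, scores, _ = predictor.predict(
        point_coords=np.array([[x, y]]),                # flow-derived (x, y) point prompt
        point_labels=np.array([1]),                     # 1 = foreground point
        multimask_output=True,
    )
    return masks[np.argmax(scores)]                     # keep the highest-scoring mask

A fuller treatment would use richer flow-derived prompts (e.g. multiple points or boxes per moving region) and link the per-frame masks across time to maintain object identity, as the abstract describes.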

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2404.12389
Document Type :
Working Paper