
EdgeTAM: On-Device Track Anything Model

Authors :
Zhou, Chong
Zhu, Chenchen
Xiong, Yunyang
Suri, Saksham
Xiao, Fanyi
Wu, Lemeng
Krishnamoorthi, Raghuraman
Dai, Bo
Loy, Chen Change
Chandra, Vikas
Soran, Bilge
Publication Year :
2025

Abstract

On top of the Segment Anything Model (SAM), SAM 2 further extends its capability from image to video inputs through a memory bank mechanism and achieves remarkable performance compared with previous methods, making it a foundation model for the video segmentation task. In this paper, we aim to make SAM 2 much more efficient so that it runs even on mobile devices while maintaining comparable performance. Despite several works optimizing SAM for better efficiency, we find they are insufficient for SAM 2 because they all focus on compressing the image encoder, while our benchmark shows that the newly introduced memory attention blocks are also a latency bottleneck. Given this observation, we propose EdgeTAM, which leverages a novel 2D Spatial Perceiver to reduce the computational cost. In particular, the proposed 2D Spatial Perceiver encodes the densely stored frame-level memories with a lightweight Transformer that contains a fixed set of learnable queries. Given that video segmentation is a dense prediction task, we find that preserving the spatial structure of the memories is essential, so the queries are split into global-level and patch-level groups. We also propose a distillation pipeline that further improves performance without inference overhead. As a result, EdgeTAM achieves 87.7, 70.0, 72.3, and 71.7 J&F on DAVIS 2017, MOSE, SA-V val, and SA-V test, respectively, while running at 16 FPS on iPhone 15 Pro Max.

Comment: Code will be released at https://github.com/facebookresearch/EdgeTAM
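The abstract's description of the 2D Spatial Perceiver suggests a Perceiver-style cross-attention in which a fixed set of learnable queries compresses the dense frame-level memory, with patch-level queries restricted to local windows to preserve spatial structure. Below is a minimal PyTorch sketch of that idea; the class name, shapes, default sizes, and the shared attention module are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn


class SpatialPerceiver2D(nn.Module):
    """Hypothetical sketch of the 2D Spatial Perceiver idea: a fixed set of
    learnable queries cross-attends to dense frame-level memory features.
    Global queries see the whole frame; patch queries are each tied to one
    local window so the compressed memory keeps its 2D layout. All names,
    shapes, and the shared attention weights are assumptions for
    illustration, not the authors' implementation."""

    def __init__(self, dim=256, n_global=64, patch_grid=8, n_heads=8):
        super().__init__()
        self.patch_grid = patch_grid
        # Learnable query sets: one free-form global group and one query
        # per spatial window in a patch_grid x patch_grid layout.
        self.global_queries = nn.Parameter(torch.randn(n_global, dim))
        self.patch_queries = nn.Parameter(torch.randn(patch_grid ** 2, dim))
        # A single cross-attention block shared by both groups
        # (a simplification; separate blocks would also be plausible).
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, memory, h, w):
        """memory: (B, h*w, dim) dense frame-level features; h and w are
        assumed divisible by patch_grid."""
        B, _, D = memory.shape
        # Global queries attend over the entire frame.
        gq = self.global_queries.unsqueeze(0).expand(B, -1, -1)
        g_out, _ = self.attn(gq, memory, memory)
        # Patch queries attend only within their local window, which
        # preserves the spatial structure of the compressed memory.
        G = self.patch_grid
        wh, ww = h // G, w // G
        x = memory.view(B, G, wh, G, ww, D).permute(0, 1, 3, 2, 4, 5)
        x = x.reshape(B * G * G, wh * ww, D)
        pq = self.patch_queries.unsqueeze(1).repeat(B, 1, 1)  # (B*G*G, 1, D)
        p_out, _ = self.attn(pq, x, x)
        p_out = p_out.reshape(B, G * G, D)
        # Fixed-size output: downstream memory attention now scales with
        # n_global + patch_grid**2 tokens instead of h*w.
        return self.norm(torch.cat([g_out, p_out], dim=1))
```

Under these assumed defaults, a 64x64 memory map would be compressed from 4096 tokens to 64 global plus 64 patch tokens per frame, which is where the memory-attention latency savings described in the abstract would come from.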

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2501.07256
Document Type :
Working Paper