
Tokenize Anything via Prompting

Authors :
Pan, Ting
Tang, Lulu
Wang, Xinlong
Shan, Shiguang
Publication Year :
2023

Abstract

We present a unified, promptable model capable of simultaneously segmenting, recognizing, and captioning anything. Unlike SAM, we aim to build a versatile region representation in the wild via visual prompting. To achieve this, we train a generalizable model with massive segmentation masks, e.g., SA-1B masks, and semantic priors from a pre-trained CLIP model with 5 billion parameters. Specifically, we construct a promptable image decoder by adding a semantic token to each mask token. The semantic token is responsible for learning the semantic priors in a predefined concept space. Through joint optimization of segmentation on mask tokens and concept prediction on semantic tokens, our model exhibits strong regional recognition and localization capabilities. For example, an additional 38M-parameter causal text decoder trained from scratch sets a new record with a CIDEr score of 164.7 on the Visual Genome region captioning task. We believe this model can be a versatile region-level image tokenizer, capable of encoding general-purpose region context for a broad range of visual perception tasks. Code and models are available at https://github.com/baaivision/tokenize-anything.

Comment: code, model, and demo: https://github.com/baaivision/tokenize-anything
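The decoder design described in the abstract (a semantic token paired with each mask token, trained jointly for segmentation and concept prediction) can be illustrated with a minimal PyTorch-style sketch. This is not the authors' implementation; all module names, dimensions, the concept-space size, and the loss weighting are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PromptableDecoderHead(nn.Module):
    """Sketch of a promptable head: a mask token for segmentation and a paired
    semantic token for concept prediction (hypothetical structure)."""
    def __init__(self, dim=256, num_concepts=2560, mask_hw=64):
        super().__init__()
        self.mask_hw = mask_hw
        # One learnable mask token and one paired semantic token per prediction.
        self.mask_token = nn.Parameter(torch.randn(1, dim))
        self.semantic_token = nn.Parameter(torch.randn(1, dim))
        # Assumed projections: mask token -> per-pixel mask logits,
        # semantic token -> logits over a predefined concept space.
        self.mask_proj = nn.Linear(dim, mask_hw * mask_hw)
        self.concept_proj = nn.Linear(dim, num_concepts)

    def forward(self, region_feat):
        # region_feat: (B, dim) prompt-conditioned region embedding from the image decoder.
        mask_logits = self.mask_proj(region_feat + self.mask_token)            # (B, H*W)
        concept_logits = self.concept_proj(region_feat + self.semantic_token)  # (B, num_concepts)
        return mask_logits.view(-1, self.mask_hw, self.mask_hw), concept_logits

def joint_loss(mask_logits, concept_logits, gt_mask, gt_concept, w_seg=1.0, w_cls=1.0):
    # Joint optimization: segmentation supervised on mask tokens,
    # concept prediction supervised on semantic tokens (weights are assumptions).
    seg = F.binary_cross_entropy_with_logits(mask_logits, gt_mask)
    cls = F.cross_entropy(concept_logits, gt_concept)
    return w_seg * seg + w_cls * cls

In this sketch, the concept-prediction branch would be supervised against targets distilled from the pre-trained CLIP model mentioned in the abstract; the segmentation branch would be supervised with SA-1B-style masks.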

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2312.09128
Document Type :
Working Paper