
Rethinking Prior Information Generation with CLIP for Few-Shot Segmentation

Authors:
Wang, Jin
Zhang, Bingfeng
Pang, Jian
Chen, Honglong
Liu, Weifeng
Publication Year:
2024

Abstract

Few-shot segmentation remains challenging due to the limited labeling information available for unseen classes. Most previous approaches extract high-level feature maps from a frozen visual encoder and compute pixel-wise similarity as the key prior guidance for the decoder. However, such a prior representation suffers from coarse granularity and poor generalization to new classes, since these high-level feature maps carry an obvious category bias. In this work, we propose to replace the visual prior representation with visual-text alignment capacity to capture more reliable guidance and enhance model generalization. Specifically, we design two kinds of training-free prior information generation strategies that utilize the semantic alignment capability of the Contrastive Language-Image Pre-training (CLIP) model to locate the target class. Moreover, to acquire more accurate prior guidance, we build a high-order relationship of attention maps and use it to refine the initial prior information. Experiments on both the PASCAL-5^i and COCO-20^i datasets show that our method achieves a substantial improvement and reaches new state-of-the-art performance.

Comment: Accepted by CVPR 2024; camera-ready version.
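The two ideas sketched in the abstract can be illustrated in a few lines of PyTorch. The snippet below is a minimal, hypothetical sketch, not the authors' implementation: `clip_prior_map` assumes patch-level visual features and a class text embedding from a CLIP-like encoder and scores each patch by cosine similarity, while `refine_prior` assumes a self-attention map over the same patches and uses a second-order relation (`attn @ attn`) as one plausible reading of "high-order relationship of attention maps". The paper's actual strategies and refinement differ in detail.

```python
import torch
import torch.nn.functional as F


def clip_prior_map(patch_feats: torch.Tensor, text_emb: torch.Tensor) -> torch.Tensor:
    """Training-free prior: cosine similarity between CLIP-style patch
    features (N, D) and a class text embedding (D,), min-max normalized
    to [0, 1]. Hypothetical helper names; not the paper's exact method."""
    patch_feats = F.normalize(patch_feats, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    sim = patch_feats @ text_emb                      # (N,) cosine similarities
    return (sim - sim.min()) / (sim.max() - sim.min() + 1e-6)


def refine_prior(prior: torch.Tensor, attn: torch.Tensor) -> torch.Tensor:
    """Refine a prior map (N,) with a higher-order attention relation.
    attn: (N, N) self-attention over patches. attn @ attn is used here
    as an assumed second-order relation; the paper may define it differently."""
    high_order = attn @ attn                          # second-order patch relation
    high_order = high_order / (high_order.sum(-1, keepdim=True) + 1e-6)
    refined = high_order @ prior                      # propagate prior along relations
    return (refined - refined.min()) / (refined.max() - refined.min() + 1e-6)
```

In use, `clip_prior_map` would be called once per query image and target class name, and its output reshaped to an H×W map and fed to the decoder as guidance after refinement.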

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2405.08458
Document Type:
Working Paper