
ATPrompt: Textual Prompt Learning with Embedded Attributes

Authors:
Li, Zheng
Song, Yibing
Zhao, Penghai
Cheng, Ming-Ming
Li, Xiang
Yang, Jian
Publication Year:
2024

Abstract

Textual-based prompt learning methods primarily employ multiple learnable soft prompts and hard class tokens in a cascading manner as text prompt inputs, aiming to align image and text (category) spaces for downstream tasks. However, current training is restricted to aligning images with predefined known categories and cannot be associated with unknown categories. In this work, we propose utilizing universal attributes as a bridge to enhance the alignment between images and unknown categories. Specifically, we introduce an Attribute-embedded Textual Prompt learning method for vision-language models, named ATPrompt. This approach expands the learning space of soft prompts from the original one-dimensional category level into the multi-dimensional attribute level by incorporating multiple universal attribute tokens into the learnable soft prompts. Through this modification, we transform the text prompt from a category-centric form to an attribute-category hybrid form. To finalize the attributes for downstream tasks, we propose a differentiable attribute search method that learns to identify representative and suitable attributes from a candidate pool summarized by a large language model. As an easy-to-use plug-in technique, ATPrompt can seamlessly replace the existing prompt format of textual-based methods, offering general improvements at a negligible computational cost. Extensive experiments on 11 datasets demonstrate the effectiveness of our method.

Comment: Technical Report. Project Page: https://zhengli97.github.io/ATPrompt/
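To make the attribute-category hybrid prompt form concrete, the following is a minimal PyTorch sketch, not the authors' released code: learnable soft tokens are interleaved with fixed attribute-word embeddings and the hard class token to build the text-encoder input. The attribute choices ("color", "shape"), embedding dimension, and token counts are illustrative assumptions.

```python
# Hedged sketch of an attribute-embedded text prompt in the spirit of ATPrompt.
# Soft (learnable) token groups are interleaved with fixed attribute-word
# embeddings, ending with the hard class token. Shapes/names are illustrative.

import torch
import torch.nn as nn

class AttributeEmbeddedPrompt(nn.Module):
    def __init__(self, embed_dim=512, n_soft=4, num_attributes=2):
        super().__init__()
        # One group of learnable soft tokens per attribute, plus one for the category.
        self.soft_tokens = nn.ParameterList(
            [nn.Parameter(torch.randn(n_soft, embed_dim) * 0.02)
             for _ in range(num_attributes + 1)]
        )

    def forward(self, attr_embeds, class_embed):
        # attr_embeds: list of (L_i, D) embeddings of fixed attribute words.
        # class_embed: (L_c, D) embedding of the hard class token(s).
        parts = []
        for soft, attr in zip(self.soft_tokens, attr_embeds):
            parts.extend([soft, attr])                      # [soft][attribute] blocks
        parts.extend([self.soft_tokens[-1], class_embed])   # final [soft][class] block
        return torch.cat(parts, dim=0)                      # sequence for the text encoder

# Usage with dummy embeddings standing in for a CLIP-style token embedding table:
prompt = AttributeEmbeddedPrompt()
attr_embeds = [torch.randn(1, 512), torch.randn(1, 512)]  # e.g. "color", "shape"
class_embed = torch.randn(1, 512)                          # e.g. "dog"
sequence = prompt(attr_embeds, class_embed)
print(sequence.shape)  # torch.Size([15, 512]): three (4 soft + 1 word) blocks
```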
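The differentiable attribute search can likewise be pictured as a softmax relaxation over the candidate pool, in the style of differentiable architecture search. The sketch below is an assumption-laden illustration of that idea, not the paper's exact procedure: a learnable logit per candidate attribute is trained end to end, and the top-scoring attributes are kept afterward. The pool size, dimensions, and `finalize` helper are hypothetical.

```python
# Hedged sketch of differentiable attribute selection over an LLM-summarized
# candidate pool. A learnable logit per candidate is softmax-relaxed so the
# selection is trainable by gradient descent; top-k candidates are kept after
# the search phase. All names and shapes are illustrative.

import torch
import torch.nn as nn

class DifferentiableAttributeSearch(nn.Module):
    def __init__(self, candidate_embeds):
        super().__init__()
        # candidate_embeds: (K, D) embeddings of K candidate attribute words.
        self.register_buffer("candidates", candidate_embeds)
        self.alpha = nn.Parameter(torch.zeros(candidate_embeds.size(0)))

    def forward(self):
        # Soft selection: a convex combination of candidates, trainable end to end.
        weights = torch.softmax(self.alpha, dim=0)   # (K,)
        return weights @ self.candidates             # (D,) mixed attribute embedding

    def finalize(self, top_k=2):
        # After search, keep the indices of the top-k attributes by learned weight.
        return torch.topk(torch.softmax(self.alpha, dim=0), top_k).indices

# Usage: 5 hypothetical candidates from an LLM; pick the best 2 after training.
pool = torch.randn(5, 512)
search = DifferentiableAttributeSearch(pool)
mixed = search()                   # used in the prompt during the search phase
chosen = search.finalize(top_k=2)  # attributes to embed in the final prompt
```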

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2412.09442
Document Type: Working Paper