Dual-Dependency Attention Transformer for Fine-Grained Visual Classification.

Authors:
Cui, Shiyan
Hui, Bin
Source:
Sensors (1424-8220). Apr 2024, Vol. 24, Issue 7, p2337. 23p.
Publication Year:
2024

Abstract

Vision transformers (ViTs) are widely used in visual tasks such as fine-grained visual classification (FGVC). However, self-attention, the core module of vision transformers, incurs quadratic computational and memory complexity, and the sparse-attention and local-attention approaches most researchers currently adopt are ill suited to FGVC tasks, which require dense feature extraction and global dependency modeling. To address this challenge, we propose a dual-dependency attention transformer that decouples global token interactions into two paths: a position-dependency attention path based on the intersection of two types of grouped attention, and a semantic-dependency attention path based on dynamic central aggregation. This design enhances high-quality semantic modeling of discriminative cues while reducing the cost of attention to linear complexity. In addition, we develop discriminative enhancement strategies that increase the sensitivity of high-confidence discriminative cue tracking through a knowledge-based representation approach. Experiments on three datasets, NABirds, CUB, and Dogs, show that the method is well suited to fine-grained image classification and strikes a balance between computational cost and performance. [ABSTRACT FROM AUTHOR]
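As a rough illustration of the decoupling described in the abstract, the following PyTorch sketch pairs a grouped-attention position path with a center-based semantic path. It is a minimal sketch inferred from the abstract alone: all module and parameter names (DualDependencyAttention, GroupedAttention, CenterAttention, group_size, num_centers) and the specific regrouping scheme are hypothetical stand-ins, not the authors' implementation.

# Hypothetical sketch of a dual-path ("dual-dependency") attention block.
# Names and design details are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn

class GroupedAttention(nn.Module):
    """Self-attention restricted to fixed-size token groups (linear in N)."""
    def __init__(self, dim, num_heads, group_size):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.group_size = group_size

    def forward(self, x):                      # x: (B, N, C), N divisible by group_size
        B, N, C = x.shape
        g = self.group_size
        x = x.reshape(B * N // g, g, C)        # split the sequence into groups
        x, _ = self.attn(x, x, x)              # attention only within each group
        return x.reshape(B, N, C)

class CenterAttention(nn.Module):
    """Semantic path: tokens exchange information through a few aggregation centers."""
    def __init__(self, dim, num_heads, num_centers):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_centers, dim))
        self.pool = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.broadcast = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x):                      # x: (B, N, C)
        B = x.size(0)
        c = self.centers.unsqueeze(0).expand(B, -1, -1)
        c, _ = self.pool(c, x, x)              # centers dynamically aggregate tokens, O(N*K)
        out, _ = self.broadcast(x, c, c)       # tokens read global semantics back, O(N*K)
        return out

class DualDependencyAttention(nn.Module):
    """Decouples global interaction into a position path and a semantic path."""
    def __init__(self, dim=384, num_heads=6, group_size=49, num_centers=16):
        super().__init__()
        # position path: two grouped attentions whose groupings intersect
        self.local = GroupedAttention(dim, num_heads, group_size)
        self.strided = GroupedAttention(dim, num_heads, group_size)
        self.semantic = CenterAttention(dim, num_heads, num_centers)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):                      # x: (B, N, C)
        B, N, C = x.shape
        g = self.local.group_size
        pos = self.local(x)                    # attention inside contiguous groups
        # strided regrouping, so each second-pass group intersects every first-pass group
        idx = torch.arange(N, device=x.device).reshape(g, N // g).t().reshape(-1)
        pos = self.strided(pos[:, idx])[:, idx.argsort()]
        sem = self.semantic(x)                 # global semantics at linear cost
        return self.proj(pos + sem)

x = torch.randn(2, 196, 384)                   # e.g. 14x14 patch tokens
y = DualDependencyAttention()(x)
print(y.shape)                                 # torch.Size([2, 196, 384])

Each grouped-attention pass costs O(N * group_size) and the center path costs O(N * num_centers), so the whole block scales linearly with the token count N, consistent with the linear complexity the abstract claims for the proposed model.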

Details

Language:
English
ISSN:
1424-8220
Volume:
24
Issue:
7
Database:
Academic Search Index
Journal:
Sensors (1424-8220)
Publication Type:
Academic Journal
Accession Number:
176594705
Full Text:
https://doi.org/10.3390/s24072337