SFT: Few-Shot Learning via Self-Supervised Feature Fusion With Transformer
- Author
Jit Yan Lim, Kian Ming Lim, Chin Poo Lee, and Yong Xuan Tan
- Subjects
Few-shot learning, self-supervised learning, contrastive learning, feature fusion, transformer, Electrical engineering. Electronics. Nuclear engineering, TK1-9971
- Abstract
The few-shot learning paradigm aims to generalize to unseen tasks from limited samples. However, a focus solely on class-level discrimination may fall short of robust generalization, especially when instance diversity and discriminability are neglected. This study introduces a metric-based few-shot approach, named Self-supervised Feature Fusion with Transformer (SFT), which integrates self-supervised learning with a transformer. SFT addresses the limitations of previous approaches by employing two distinct self-supervised tasks in separate models during pre-training, thus enhancing both instance diversity and discriminability in the feature space. The training process unfolds in two stages: pre-training and transfer learning. In pre-training, each model is trained on its specific self-supervised task to harness the benefits of an enhanced feature space. In the subsequent transfer learning stage, the model weights are frozen and the models act as feature extractors. The features from both models are fused using a feature fusion technique and transformed into task-specific features by a transformer, boosting discrimination on unseen tasks. The fused features enable the model to learn a well-generalized representation, effectively tackling the challenges posed by few-shot tasks. The proposed SFT method achieves state-of-the-art results on three benchmark datasets for few-shot image classification.
- Published
2024
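The abstract describes a pipeline of two frozen self-supervised feature extractors, a feature fusion step, a transformer that adapts the fused features to each task, and metric-based classification. Below is a minimal illustrative sketch of such a pipeline in PyTorch. All module names, dimensions, the concatenation-based fusion, and the prototype metric are assumptions for illustration, not the authors' exact implementation.

```python
import torch
import torch.nn as nn


class SFTHead(nn.Module):
    """Fusion + transformer head over features from two frozen extractors."""

    def __init__(self, feat_dim_a=512, feat_dim_b=512, d_model=256,
                 n_heads=4, n_layers=2):
        super().__init__()
        # Project the concatenated (fused) features into the transformer width.
        self.fuse = nn.Linear(feat_dim_a + feat_dim_b, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                           batch_first=True)
        # The transformer adapts fused features to the current episode/task.
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, feats_a, feats_b):
        # feats_*: (episode_size, feat_dim) from the two frozen extractors.
        fused = self.fuse(torch.cat([feats_a, feats_b], dim=-1))
        # Treat the whole episode as one sequence so attention can share
        # task context across support and query samples.
        return self.encoder(fused.unsqueeze(0)).squeeze(0)


def prototype_logits(support, support_labels, query, n_way):
    # Metric-based classification: negative distance to class prototypes.
    prototypes = torch.stack(
        [support[support_labels == c].mean(dim=0) for c in range(n_way)]
    )
    return -torch.cdist(query, prototypes)  # (n_query, n_way)


if __name__ == "__main__":
    # Stand-ins for the two self-supervised backbones (assumed dims).
    extractor_a = nn.Sequential(nn.Flatten(), nn.Linear(3 * 84 * 84, 512)).eval()
    extractor_b = nn.Sequential(nn.Flatten(), nn.Linear(3 * 84 * 84, 512)).eval()
    for p in list(extractor_a.parameters()) + list(extractor_b.parameters()):
        p.requires_grad = False  # weights frozen during transfer learning

    head = SFTHead()
    n_way, n_shot, n_query = 5, 1, 15
    images = torch.randn(n_way * (n_shot + n_query), 3, 84, 84)
    labels = torch.arange(n_way).repeat_interleave(n_shot + n_query)

    with torch.no_grad():
        fa, fb = extractor_a(images), extractor_b(images)
    feats = head(fa, fb)

    # First n_shot samples of each class form the support set.
    support_mask = (torch.arange(len(labels)) % (n_shot + n_query)) < n_shot
    logits = prototype_logits(feats[support_mask], labels[support_mask],
                              feats[~support_mask], n_way)
    print(logits.shape)  # torch.Size([75, 5])
```

Freezing the extractors keeps the representations learned from the two self-supervised tasks intact, so only the fusion layer and transformer head would be trained on few-shot episodes; this mirrors the two-stage (pre-training, then transfer learning) scheme the abstract outlines.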