1. SRL‐ProtoNet: Self‐supervised representation learning for few‐shot remote sensing scene classification
- Author
- Bing Liu, Hongwei Zhao, Jiao Li, Yansheng Gao, and Jianrong Zhang
- Subjects
- image classification, natural scenes, remote sensing, Computer applications to medicine. Medical informatics, R858-859.7, Computer software, QA76.75-76.765
- Abstract
Deep learning methods classify remote sensing scenes well when large amounts of labelled data are available, but they struggle to generalise to classification tasks with limited data. Few-shot learning allows neural networks to classify unseen categories from only a handful of labelled samples. Currently, episodic tasks based on meta-learning can effectively perform few-shot classification, and training an encoder capable of strong representation learning has become an important component of few-shot learning. An end-to-end few-shot remote sensing scene classification model based on ProtoNet and self-supervised learning is proposed. The authors design the Pre-prototype for a more discrete feature space and better integration with self-supervised learning, and propose the ProtoMixer for higher-quality prototypes with a global receptive field. The authors' method outperforms existing state-of-the-art self-supervised methods on three widely used benchmark datasets: UC-Merced, NWPU-RESISC45, and AID. Compared with the previous state-of-the-art performance, in the one-shot setting the method improves accuracy by 1.21%, 2.36%, and 0.84% on AID, UC-Merced, and NWPU-RESISC45, respectively; in the five-shot setting it improves by 0.85%, 2.79%, and 0.74% on AID, UC-Merced, and NWPU-RESISC45, respectively.
- Published
- 2024
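As context for the abstract above, the following is a minimal sketch of the standard prototypical-network episode that SRL-ProtoNet builds on: class prototypes are computed as the mean of support embeddings, and query images are classified by their distance to those prototypes. The encoder, tensor shapes, and function names are illustrative assumptions; the paper's Pre-prototype and ProtoMixer modules and its self-supervised objective are not specified in this record and are not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def protonet_episode(encoder: nn.Module,
                     support_x: torch.Tensor, support_y: torch.Tensor,
                     query_x: torch.Tensor, n_way: int) -> torch.Tensor:
    """One N-way K-shot episode: classify queries against class prototypes."""
    z_support = encoder(support_x)   # (n_way * k_shot, d) support embeddings
    z_query = encoder(query_x)       # (n_query, d) query embeddings

    # Prototype = mean embedding of each class's support samples.
    prototypes = torch.stack(
        [z_support[support_y == c].mean(dim=0) for c in range(n_way)]
    )                                # (n_way, d)

    # Negative squared Euclidean distance to each prototype is the class logit.
    logits = -torch.cdist(z_query, prototypes).pow(2)
    return F.log_softmax(logits, dim=1)

# Hypothetical usage: a 5-way 1-shot episode with a dummy linear encoder.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 84 * 84, 64))
support_x = torch.randn(5, 3, 84, 84)
support_y = torch.arange(5)
query_x = torch.randn(15, 3, 84, 84)
log_probs = protonet_episode(encoder, support_x, support_y, query_x, n_way=5)
```

Per the abstract, the authors' Pre-prototype would act on the support embeddings before the prototype computation, and the ProtoMixer would refine the resulting prototypes with a global receptive field before the distance step; those modules are not shown in this sketch.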