Multimodal Feature-Guided Pretraining for RGB-T Perception
- Source:
- IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, Vol. 17, pp. 16041-16050 (2024)
- Publication Year:
- 2024
- Publisher:
- IEEE, 2024.
Abstract
- Wide-range multiscale object detection for multispectral scene perception from a drone perspective is challenging. Previous RGB-T perception methods directly use backbones pretrained on RGB data for thermal-infrared feature extraction, leading to an unexpected domain shift. We propose a novel multimodal feature-guided masked reconstruction pretraining method, named M2FP, aimed at learning transferable representations for drone-based RGB-T environmental perception tasks without domain bias. This article makes two key contributions. 1) We design a cross-modal feature interaction module in M2FP that encourages the modality-specific backbones to actively learn cross-modal feature representations and avoids modality bias. 2) We design a global-aware feature interaction and fusion module suitable for various downstream tasks, which enhances the model's environmental perception from a global perspective in wide-range drone-based scenes. We fine-tune M2FP on a drone-based object detection dataset (DroneVehicle) and a semantic segmentation dataset (Kust4K). On both tasks M2FP achieves state-of-the-art performance, improving on the second-best methods by 1.8% in mean average precision and 0.9% in mean intersection over union, respectively.
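The core mechanics of masked reconstruction pretraining referenced in the abstract can be sketched as follows. This is an illustrative toy in plain Python, not the paper's implementation: the patch representation, function names, and the per-patch MSE are all assumptions chosen to show the idea of masking patches and computing a reconstruction loss only on the masked positions.

```python
import random


def mask_patches(patches, mask_ratio, seed=0):
    """Randomly hide a fraction of patches (MAE-style masking, illustrative).

    Returns the visible patches (fed to the encoder) and the indices
    of the masked patches (which the decoder must reconstruct).
    """
    rng = random.Random(seed)
    n_mask = int(len(patches) * mask_ratio)
    masked_idx = set(rng.sample(range(len(patches)), n_mask))
    visible = [p for i, p in enumerate(patches) if i not in masked_idx]
    return visible, sorted(masked_idx)


def reconstruction_loss(pred, target, masked_idx):
    """Mean squared error computed only on the masked patch positions.

    In a feature-guided setup, `target` would be features from the other
    modality's backbone rather than raw pixels (an assumption here).
    """
    diffs = [(pred[i] - target[i]) ** 2 for i in masked_idx]
    return sum(diffs) / len(diffs)
```

A typical pretraining step would mask thermal patches, encode the visible ones, predict the masked positions, and penalize only the reconstruction error at those positions; perfect predictions yield a loss of zero.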
Details
- Language:
- English
- ISSN:
- 1939-1404 and 2151-1535
- Volume:
- 17
- Database:
- Directory of Open Access Journals
- Journal:
- IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing
- Publication Type:
- Academic Journal
- Accession number:
- edsdoj.bf1b8e5e33d948039c20deb46e22562f
- Document Type:
- Article
- Full Text:
- https://doi.org/10.1109/JSTARS.2024.3454054