A transformer-based approach empowered by a self-attention technique for semantic segmentation in remote sensing.
- Source :
- Heliyon [Heliyon] 2024 Apr 12; Vol. 10 (8), pp. e29396. Date of Electronic Publication: 2024 Apr 12 (Print Publication: 2024).
- Publication Year :
- 2024
Abstract
- Semantic segmentation of Remote Sensing (RS) images involves the classification of each pixel in a satellite image into distinct and non-overlapping regions or segments. This task is crucial in various domains, including land cover classification, autonomous driving, and scene understanding. While deep learning has shown promising results, there is limited research that specifically addresses the challenge of processing fine details in RS images while also considering the high computational demands. To tackle this issue, we propose a novel approach that combines convolutional and transformer architectures. Our design incorporates convolutional layers with a low receptive field to generate fine-grained feature maps for small objects in very high-resolution images. On the other hand, transformer blocks are utilized to capture contextual information from the input. By leveraging convolution and self-attention in this manner, we reduce the need for extensive downsampling and enable the network to work with full-resolution features, which is particularly beneficial for handling small objects. Additionally, our approach eliminates the requirement for vast datasets, which is often necessary for purely transformer-based networks. In our experimental results, we demonstrate the effectiveness of our method in generating local and contextual features using convolutional and transformer layers, respectively. Our approach achieves a mean Dice score of 80.41%, outperforming other well-known techniques such as UNet, the Fully Convolutional Network (FCN), the Pyramid Scene Parsing Network (PSPNet), and the recent Convolutional vision Transformer (CvT) model, which achieved mean Dice scores of 78.57%, 74.57%, 73.45%, and 62.97%, respectively, under the same training conditions and using the same training dataset.
- Competing Interests: The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
- (© 2024 The Author(s).)
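The abstract pairs small-receptive-field convolutions (to keep fine-grained, full-resolution features) with transformer self-attention (to add global context). As a rough illustration only, the following is a minimal PyTorch sketch of that kind of hybrid; the class name `ConvTransformerSegmenter`, the layer widths, and the token-flattening fusion step are assumptions made here for clarity, not the paper's actual architecture.

```python
# Illustrative sketch of a convolution + self-attention hybrid of the kind the
# abstract describes. This is NOT the authors' published model: names, sizes,
# and the fusion strategy are assumptions.
import torch
import torch.nn as nn


class ConvTransformerSegmenter(nn.Module):
    """Small-kernel convolutions preserve full-resolution, fine-grained
    features; a transformer encoder adds global context via self-attention."""

    def __init__(self, in_channels=3, embed_dim=64, num_classes=6,
                 num_heads=4, num_layers=2):
        super().__init__()
        # Convolutional stem with 3x3 kernels and stride 1: a low receptive
        # field and no downsampling, so small objects in very high-resolution
        # imagery are not lost.
        self.conv_stem = nn.Sequential(
            nn.Conv2d(in_channels, embed_dim, kernel_size=3, padding=1),
            nn.BatchNorm2d(embed_dim),
            nn.ReLU(inplace=True),
            nn.Conv2d(embed_dim, embed_dim, kernel_size=3, padding=1),
            nn.BatchNorm2d(embed_dim),
            nn.ReLU(inplace=True),
        )
        # Transformer encoder over the flattened spatial grid captures
        # long-range contextual information.
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=num_heads,
            dim_feedforward=embed_dim * 4, batch_first=True)
        self.transformer = nn.TransformerEncoder(encoder_layer, num_layers)
        # Per-pixel classification head at full resolution.
        self.head = nn.Conv2d(embed_dim, num_classes, kernel_size=1)

    def forward(self, x):
        feats = self.conv_stem(x)                   # (B, C, H, W)
        b, c, h, w = feats.shape
        tokens = feats.flatten(2).transpose(1, 2)   # (B, H*W, C)
        tokens = self.transformer(tokens)           # global self-attention
        feats = tokens.transpose(1, 2).reshape(b, c, h, w)
        return self.head(feats)                     # (B, num_classes, H, W)


if __name__ == "__main__":
    model = ConvTransformerSegmenter(num_classes=6)
    dummy = torch.randn(1, 3, 128, 128)             # a small RS image patch
    print(model(dummy).shape)                       # torch.Size([1, 6, 128, 128])
```

A practical implementation would also need positional information and a more memory-efficient attention scheme for large H×W grids; the sketch uses vanilla self-attention purely to show the division of labor between local convolutional features and global context.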
Details
- Language :
- English
- ISSN :
- 2405-8440
- Volume :
- 10
- Issue :
- 8
- Database :
- MEDLINE
- Journal :
- Heliyon
- Publication Type :
- Academic Journal
- Accession number :
- 38665569
- Full Text :
- https://doi.org/10.1016/j.heliyon.2024.e29396