5 results for "remote sensing segmentation"
Search Results
2. A new framework for improving semantic segmentation in aerial imagery.
- Authors
- Shuke He, Chen Jin, Lisheng Shu, Xuzhi He, Mingyi Wang, and Gang Liu
- Subjects
- REMOTE sensing, SPATIAL resolution, ORTHOPHOTOGRAPHY, TAPESTRY, DEEP learning
- Abstract
High spatial resolution (HSR) remote sensing imagery presents a rich tapestry of foreground-background intricacies, rendering semantic segmentation in aerial contexts a formidable and vital undertaking. At its core, this challenge revolves around two pivotal questions: 1) mitigating background interference and enhancing foreground clarity, and 2) accurately segmenting dense clusters of small objects. Conventional semantic segmentation methods primarily cater to the segmentation of large-scale objects in natural scenes, yet they often falter when confronted with aerial imagery's characteristic traits, such as vast background areas, diminutive foreground objects, and densely clustered targets. In response, we propose a novel semantic segmentation framework tailored to overcome these obstacles. To address the first challenge, we leverage PointFlow modules in tandem with the Foreground-Scene (F-S) module. PointFlow modules act as a barrier against extraneous background information, while the F-S module fosters a symbiotic relationship between the scene and foreground, enhancing clarity. For the second challenge, we adopt a dual-branch structure termed disentangled learning, comprising Foreground Precedence Estimation and Small Object Edge Alignment (SOEA). Our foreground saliency guided loss directs the training process by prioritizing foreground examples and challenging background instances. Extensive experimentation on the iSAID and Vaihingen datasets validates the efficacy of our approach. Not only does our method surpass prevailing generic semantic segmentation techniques, but it also outperforms state-of-the-art remote sensing segmentation methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
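As a reader's aside: the "foreground saliency guided loss" mentioned in the abstract above can be pictured as a cross-entropy that up-weights foreground pixels and hard background pixels. The sketch below is purely illustrative; the function name, weight values, and threshold are assumptions, not the paper's actual formulation.

```python
import numpy as np

def foreground_saliency_guided_loss(probs, labels, fg_weight=2.0, hard_bg_thresh=0.3):
    """Illustrative weighted binary cross-entropy.

    probs  -- (N,) predicted foreground probabilities per pixel
    labels -- (N,) ground-truth foreground mask in {0, 1}
    Foreground pixels get a fixed weight boost; background pixels are
    treated as "hard" (and up-weighted) when the model wrongly assigns
    them a high foreground probability.
    """
    eps = 1e-7
    probs = np.clip(probs, eps, 1 - eps)
    # per-pixel binary cross-entropy
    bce = -(labels * np.log(probs) + (1 - labels) * np.log(1 - probs))
    # weight: fg_weight for foreground, 1.5 for hard background, 1.0 otherwise
    weights = np.where(labels == 1, fg_weight,
                       np.where(probs > hard_bg_thresh, 1.5, 1.0))
    return float(np.mean(weights * bce))
```

With weights of 1.0 everywhere this reduces to plain mean binary cross-entropy, so the weighted variant is always at least as large on the same predictions.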
3. DDRNet: Dual-Domain Refinement Network for Remote Sensing Image Semantic Segmentation
- Authors
- Zhenhao Yang, Fukun Bi, Xinghai Hou, Dehao Zhou, and Yanping Wang
- Subjects
- Foreground saliency enhancement, frequency domain, remote sensing segmentation, small objects, Ocean engineering, TC1501-1800, Geophysics. Cosmic physics, QC801-809
- Abstract
Semantic segmentation is crucial for interpreting remote sensing images. The segmentation performance has been significantly improved recently with the development of deep learning. However, complex background samples and small objects greatly increase the challenge of the semantic segmentation task for remote sensing images. To address these challenges, we propose a dual-domain refinement network (DDRNet) for accurate segmentation. Specifically, we first propose a spatial and frequency feature reconstruction module, which separately utilizes the characteristics of the frequency and spatial domains to refine the global salient features and the fine-grained spatial features of objects. This process enhances the foreground saliency and adaptively suppresses background noise. Subsequently, we propose a feature alignment module that selectively couples the features refined from both domains via cross-attention, achieving semantic alignment between frequency and spatial domains. In addition, a meticulously designed detail-aware attention module is introduced to compensate for the loss of small objects during feature propagation. This module leverages cross-correlation matrices between high-level features and the original image to quantify the similarities among objects belonging to the same category, thereby transmitting rich semantic information from high-level features to small objects. The results on multiple datasets validate that our method outperforms the existing methods and achieves a good compromise between computational overhead and accuracy.
- Published
- 2024
- Full Text
- View/download PDF
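As a reader's aside: the feature alignment module in the abstract above couples spatial-domain and frequency-domain features via cross-attention. A minimal single-head sketch of that coupling, with spatial features as queries and frequency features as keys/values, is given below; all names and shapes are assumptions for illustration, not DDRNet's implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_align(spatial_feats, freq_feats):
    """Illustrative cross-attention between two feature sets.

    spatial_feats -- (n_spatial_tokens, dim) queries
    freq_feats    -- (n_freq_tokens, dim) keys and values
    Each spatial token is re-expressed as a convex combination of
    frequency-domain tokens, aligning the two domains.
    """
    d = spatial_feats.shape[-1]
    scores = spatial_feats @ freq_feats.T / np.sqrt(d)
    attn = softmax(scores, axis=-1)   # rows sum to 1
    return attn @ freq_feats
```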
4. Trans-Diff: Heterogeneous Domain Adaptation for Remote Sensing Segmentation With Transfer Diffusion
- Authors
- Yuhan Kang, Jie Wu, Qiang Liu, Jun Yue, and Leyuan Fang
- Subjects
- Cross-domain prompt, diffusion model, heterogeneous domain adaptation, remote sensing segmentation, Ocean engineering, TC1501-1800, Geophysics. Cosmic physics, QC801-809
- Abstract
Domain adaptation has been demonstrated to be an important technique to reduce the expensive annotation costs for remote sensing segmentation. However, for remote sensing images (RSIs) acquired from different imaging modalities with significant differences, a model trained on one modality can hardly be utilized for images of other modalities. This leads to a greater challenge in domain adaptation, called heterogeneous domain adaptation (HDA). To address this issue, we propose a novel method called transfer diffusion (Trans-Diff), which is the first work to explore the diffusion model for HDA remote sensing segmentation. The proposed Trans-Diff constructs cross-domain unified prompts for the diffusion model. This approach enables the generation of images from different modalities with specific semantics, leading to efficient HDA segmentation. Specifically, we first propose an interrelated semantic modeling method to establish semantic interrelation between heterogeneous RSIs and annotations in a high-dimensional feature space and extract the unified features as the cross-domain prompts. Then, we construct a semantic guidance diffusion model to further improve the semantic guidance of images generated with the cross-domain prompts, which effectively facilitates the semantic transfer of RSIs from source modality to target modality. In addition, we design an adaptive sampling strategy to dynamically regulate the generated images' stylistic consistency and semantic consistency. This can effectively reduce the cross-domain discrepancies between different modalities of RSIs, ultimately significantly improving the HDA remote sensing segmentation performance. Experimental results demonstrate the superior performance of Trans-Diff over advanced methods on several heterogeneous RSI datasets.
- Published
- 2024
- Full Text
- View/download PDF
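As a reader's aside: the "adaptive sampling strategy" in the abstract above dynamically balances stylistic and semantic consistency during generation. One common way to realize such a schedule in diffusion sampling is to weight semantics more at early (noisy) timesteps and style more at late timesteps; the toy function below illustrates only that scheduling idea and is not Trans-Diff's actual strategy.

```python
def adaptive_consistency_weight(style_score, semantic_score, t, num_steps):
    """Illustrative timestep-dependent blend of two consistency scores.

    t ranges over 0 .. num_steps-1. Early steps weight semantic
    consistency; as sampling progresses, stylistic consistency
    gradually takes over.
    """
    progress = t / max(num_steps - 1, 1)          # 0.0 at start, 1.0 at end
    return (1.0 - progress) * semantic_score + progress * style_score
```

A guidance term scaled by this blended score would then steer each reverse-diffusion step toward whichever consistency criterion matters most at that stage.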
5. RSSFormer: Foreground Saliency Enhancement for Remote Sensing Land-Cover Segmentation.
- Authors
- Rongtao Xu, Changwei Wang, Jiguang Zhang, Shibiao Xu, Weiliang Meng, and Xiaopeng Zhang
- Subjects
- REMOTE sensing, TASK analysis, SPATIAL resolution, NOISE
- Abstract
High spatial resolution (HSR) remote sensing images contain complex foreground-background relationships, which makes remote sensing land-cover segmentation a special semantic segmentation task. The main challenges come from large-scale variation, complex background samples, and an imbalanced foreground-background distribution. These issues make recent context modeling methods sub-optimal due to the lack of foreground saliency modeling. To handle these problems, we propose a Remote Sensing Segmentation framework (RSSFormer), comprising an Adaptive Transformer Fusion Module, a Detail-aware Attention Layer, and a Foreground Saliency Guided Loss. Specifically, from the perspective of relation-based foreground saliency modeling, our Adaptive Transformer Fusion Module can adaptively suppress background noise and enhance object saliency when fusing multi-scale features. Our Detail-aware Attention Layer then extracts detail and foreground-related information via the interplay of spatial attention and channel attention, which further enhances foreground saliency. From the perspective of optimization-based foreground saliency modeling, our Foreground Saliency Guided Loss guides the network to focus on hard samples with low foreground saliency responses, achieving balanced optimization. Experimental results on the LoveDA, Vaihingen, Potsdam, and iSAID datasets validate that our method outperforms existing general semantic segmentation methods and remote sensing segmentation methods, and achieves a good compromise between computational overhead and accuracy. Our code is available at https://github.com/Rongtao-Xu/RepresentationLearning/tree/main/RSSFormer-TIP2023. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
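As a reader's aside: the abstract above describes an "interplay of spatial attention and channel attention." A common, generic way to compose the two over a (C, H, W) feature map is sketched below; this is an illustrative composition in that spirit, not RSSFormer's Detail-aware Attention Layer.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_spatial_attention(feats):
    """Illustrative channel-then-spatial attention over (C, H, W) features.

    Channel attention (from global average pooling) decides which feature
    maps matter; spatial attention (from a cross-channel mean) then decides
    which pixels matter. Both gates lie in (0, 1), so the output never
    exceeds the input in magnitude.
    """
    # channel gate: one weight per feature map
    chan = sigmoid(feats.mean(axis=(1, 2)))        # shape (C,)
    out = feats * chan[:, None, None]
    # spatial gate: one weight per pixel
    spat = sigmoid(out.mean(axis=0))               # shape (H, W)
    return out * spat[None, :, :]
```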
Discovery Service for Jio Institute Digital Library