
Flooded Infrastructure Change Detection in Deeply Supervised Networks Based on Multi-Attention-Constrained Multi-Scale Feature Fusion.

Authors :
Qin, Gang
Wang, Shixin
Wang, Futao
Li, Suju
Wang, Zhenqing
Zhu, Jinfeng
Liu, Ming
Gu, Changjun
Zhao, Qing
Source :
Remote Sensing. Nov2024, Vol. 16 Issue 22, p4328. 17p.
Publication Year :
2024

Abstract

Flood disasters are frequent and sudden and have significant chain effects, seriously damaging infrastructure. Remote sensing images provide a means for timely flood emergency monitoring. When floods occur, emergency management agencies need to respond quickly and assess the damage. However, manual evaluation takes a significant amount of time; in current commercial applications, the post-disaster flood extent vector is overlaid directly on land cover data. On the one hand, land cover data are not updated in time, resulting in the misjudgment of disaster losses; on the other hand, since buildings block floodwater, the above methods cannot detect flooded buildings. Automated change-detection methods can effectively alleviate these problems. However, the ability of change-detection architectures and deep learning models to characterize flooded buildings and roads is unclear. This study specifically evaluated the performance of different change-detection architectures and different deep learning models for the change detection of flooded buildings and roads in very-high-resolution remote sensing images. At the same time, a plug-and-play, multi-attention-constrained, deeply supervised high-dimensional and low-dimensional multi-scale feature fusion (MSFF) module is proposed. The MSFF module was extended to different deep learning models. Experimental results showed that models with the embedded MSFF module perform better than the baseline models, demonstrating that MSFF can be used as a general multi-scale feature fusion component. After MSFF was introduced into FloodedCDNet, the change-detection accuracy for flooded buildings and roads reached a maximum of 69.1% MIoU with data augmentation. This demonstrates its effectiveness and robustness in identifying change regions and categories in very-high-resolution remote sensing images. [ABSTRACT FROM AUTHOR]
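The 69.1% MIoU figure reported above averages the per-class intersection-over-union across the change categories. A minimal sketch of how the metric is typically computed, using NumPy and hypothetical toy label maps (the class IDs below are illustrative, not the paper's label scheme):

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union across classes.

    pred, target: integer label maps of the same shape.
    Classes absent from both maps are skipped so they do not
    distort the average.
    """
    ious = []
    for c in range(num_classes):
        p = pred == c
        t = target == c
        union = np.logical_or(p, t).sum()
        if union == 0:  # class absent from both maps: skip
            continue
        inter = np.logical_and(p, t).sum()
        ious.append(inter / union)
    return float(np.mean(ious))

# Toy 2x4 maps: 0 = unchanged, 1 = flooded building, 2 = flooded road
pred   = np.array([[0, 1, 1, 2], [0, 0, 2, 2]])
target = np.array([[0, 1, 2, 2], [0, 0, 2, 2]])
print(mean_iou(pred, target, 3))  # prints 0.75
```

In practice the per-class counts are accumulated over the whole test set before dividing, rather than averaging per-image scores.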

Details

Language :
English
ISSN :
2072-4292
Volume :
16
Issue :
22
Database :
Academic Search Index
Journal :
Remote Sensing
Publication Type :
Academic Journal
Accession number :
181203559
Full Text :
https://doi.org/10.3390/rs16224328