1. A Remote Sensing Knowledge-Aware and Multi-Scale Feature Fusion Network for Ground Object Extraction from Multi-Source Data.
- Author
-
龚健雅, 张展, 贾浩巍, 周桓, 赵元昕, and 熊汉江
- Subjects
- *OPTICAL radar, *LIDAR, *REMOTE sensing, *PIXELS, *INTERPOLATION, *VIDEO coding
- Abstract
In recent years, automatic ground object extraction from remote sensing images has been dramatically advanced by fully convolutional networks (FCNs). Fusing high-resolution images and light detection and ranging (LiDAR) data within FCNs is an effective way to improve extraction accuracy and robustness; however, existing FCN-based fusion networks still face challenges in efficiency and accuracy. Methods: We propose a knowledge-aware and multi-scale fusion network (KMFNet) for robust and accurate ground object extraction. The network encoder incorporates a knowledge-aware module to better exploit remote sensing knowledge between pixels, and a series-parallel hybrid convolution module is developed to enhance multi-scale representative features of ground objects. Moreover, the network decoder uses a gradual bilinear interpolation strategy to obtain fine-grained extraction results. Results: We evaluate KMFNet, implemented in the LuoJiaNET framework, against four current mainstream ground object extraction methods (GRRNet, V-FuseNet, DLR and Res-U-Net) on the ISPRS 2D semantic segmentation datasets. The comparative evaluation shows that KMFNet obtains the best overall accuracy, improving it over the other four methods by 3.20% and 2.82% on average on the ISPRS-Vaihingen and ISPRS-Potsdam datasets, respectively. Conclusions: KMFNet achieves the best extraction results by capturing intrinsic pixel relationships and strengthening the multi-scale representative and detailed features of ground objects. It shows great potential for high-precision remote sensing mapping applications. [ABSTRACT FROM AUTHOR]
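The abstract does not give implementation details for the decoder's gradual bilinear interpolation strategy, but the general idea — upsampling coarse feature maps in repeated 2× bilinear steps rather than one large jump, so detail is restored progressively — can be sketched as below. This is a minimal illustrative sketch in NumPy, not the authors' code; the function names and the align-corners sampling convention are assumptions.

```python
import numpy as np

def bilinear_resize(img, out_h, out_w):
    """Bilinearly resample a 2D array to (out_h, out_w), align-corners style."""
    in_h, in_w = img.shape
    # Sample coordinates in the input grid.
    ys = np.linspace(0, in_h - 1, out_h)
    xs = np.linspace(0, in_w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, in_h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, in_w - 1)
    wy = (ys - y0)[:, None]   # vertical interpolation weights
    wx = (xs - x0)[None, :]   # horizontal interpolation weights
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

def gradual_upsample(feat, target_h, target_w):
    """Upsample a coarse feature map in repeated 2x bilinear steps
    (a gradual strategy), finishing with one final resize if needed."""
    h, w = feat.shape
    while h * 2 <= target_h and w * 2 <= target_w:
        h, w = h * 2, w * 2
        feat = bilinear_resize(feat, h, w)
    if (h, w) != (target_h, target_w):
        feat = bilinear_resize(feat, target_h, target_w)
    return feat
```

In a real decoder each 2× step would typically be followed by convolution on features fused from the two input modalities; here only the interpolation schedule itself is shown.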
- Published
- 2022