
Semantic Labeling in Very High Resolution Images via a Self-Cascaded Convolutional Neural Network

Authors :
Liu, Yongcheng
Fan, Bin
Wang, Lingfeng
Bai, Jun
Xiang, Shiming
Pan, Chunhong
Publication Year :
2018

Abstract

Semantic labeling for very high resolution (VHR) images in urban areas is of significant importance in a wide range of remote sensing applications. However, many confusing manmade objects and intricate fine-structured objects make it very difficult to obtain both coherent and accurate labeling results. For this challenging task, we propose a novel deep model with convolutional neural networks (CNNs), i.e., an end-to-end self-cascaded network (ScasNet). Specifically, for confusing manmade objects, ScasNet improves the labeling coherence with sequential global-to-local context aggregation. Technically, multi-scale contexts are captured on the output of a CNN encoder, and then they are successively aggregated in a self-cascaded manner. Meanwhile, for fine-structured objects, ScasNet boosts the labeling accuracy with a coarse-to-fine refinement strategy, which progressively refines the target objects using the low-level features learned by the CNN's shallow layers. In addition, to correct the latent fitting residual caused by multi-feature fusion inside ScasNet, a dedicated residual correction scheme is proposed, which greatly improves the effectiveness of ScasNet. Extensive experimental results on three public datasets, including two challenging benchmarks, show that ScasNet achieves state-of-the-art performance.

Comment: accepted by ISPRS Journal of Photogrammetry and Remote Sensing, 2017
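The abstract describes two mechanisms: sequential global-to-local aggregation of multi-scale contexts on the encoder output, and a residual correction applied after each feature fusion. The following is a minimal, illustrative PyTorch sketch of that aggregation pattern only; the dilated-convolution context branches, channel sizes, and module names are assumptions chosen for clarity, not the authors' actual ScasNet implementation.

```python
# Illustrative sketch of self-cascaded, global-to-local context aggregation
# with residual correction after each fusion. All design details below
# (dilated 3x3 branches, channel counts, correction block) are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ContextBranch(nn.Module):
    """Captures context at one scale via a dilated 3x3 convolution."""

    def __init__(self, channels, dilation):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3,
                              padding=dilation, dilation=dilation)

    def forward(self, x):
        return F.relu(self.conv(x))


class SelfCascadedAggregation(nn.Module):
    """Aggregates multi-scale contexts sequentially, from the most global
    context (largest dilation) toward the most local (smallest dilation)."""

    def __init__(self, channels, dilations=(24, 18, 12, 6)):
        super().__init__()
        self.branches = nn.ModuleList(
            [ContextBranch(channels, d) for d in dilations])
        # One small residual-correction block per fusion step, intended to
        # counteract the latent fitting residual introduced by fusing
        # features from different scales.
        self.correct = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
                nn.Conv2d(channels, channels, 3, padding=1))
            for _ in dilations[1:]])

    def forward(self, feat):
        contexts = [b(feat) for b in self.branches]  # global -> local order
        agg = contexts[0]
        for ctx, corr in zip(contexts[1:], self.correct):
            fused = agg + ctx           # fuse the next, more local context
            agg = fused + corr(fused)   # residual correction of the fusion
        return agg


# Usage on a dummy encoder feature map (e.g., 512 channels at 1/16 resolution):
feat = torch.randn(1, 512, 32, 32)
out = SelfCascadedAggregation(512)(feat)
print(out.shape)  # torch.Size([1, 512, 32, 32])
```

The cascade keeps the running aggregate at the feature map's resolution, so the coarse-to-fine refinement with shallow-layer features mentioned in the abstract could be attached afterwards as a separate decoder stage; that part is omitted here.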

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.1807.11236
Document Type :
Working Paper
Full Text :
https://doi.org/10.1016/j.isprsjprs.2017.12.007