
Deep Learning for SAR-Optical Image Matching

Authors:
Lloyd Haydn Hughes
Tatjana Burgmann
Stefan Auer
Nina Merkle
Michael Schmitt
Source:
IGARSS
Publication Year:
2019

Abstract

The automatic matching of corresponding regions in remote sensing imagery acquired by synthetic aperture radar (SAR) and optical sensors is a crucial prerequisite for many data fusion endeavors such as target recognition, image registration, or 3D reconstruction by stereogrammetry. Driven by the success of deep learning in conventional optical image matching, we have recently carried out extensive research on deep matching for SAR-optical multi-sensor image pairs. In this paper, we summarize our findings, including different concepts based on (pseudo-)siamese convolutional neural network architectures, hard negative mining, alternative formulations of the underlying loss function, and the creation of artificial images by generative adversarial networks. Based on data from state-of-the-art remote sensing missions such as TerraSAR-X, PRISM, WorldView-2, and Sentinel-1/2, we show what is already possible today, while highlighting challenges to be tackled by future research.
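
To make the (pseudo-)siamese idea mentioned in the abstract concrete, the sketch below shows a minimal pseudo-siamese patch-matching network in PyTorch. It is an illustrative assumption, not the authors' actual architecture: the two convolutional streams deliberately do not share weights (reflecting the very different imaging characteristics of SAR and optical data), and all layer sizes, the 64x64 single-channel patch format, and the binary match/no-match head are choices made here purely for demonstration.

```python
# Illustrative sketch of a pseudo-siamese CNN for SAR-optical patch matching.
# NOT the architecture from the paper; layer sizes and patch format are assumed.
import torch
import torch.nn as nn

def conv_stream(in_channels: int) -> nn.Sequential:
    """One convolutional feature-extraction branch (weights are NOT shared)."""
    return nn.Sequential(
        nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),                          # 64x64 -> 32x32
        nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),                          # 32x32 -> 16x16
        nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1),                  # -> 128-dim descriptor
        nn.Flatten(),
    )

class PseudoSiameseMatcher(nn.Module):
    """Separate streams for SAR and optical patches, fused by a small head
    that outputs a match / no-match logit."""
    def __init__(self):
        super().__init__()
        self.sar_stream = conv_stream(in_channels=1)   # SAR amplitude patch
        self.opt_stream = conv_stream(in_channels=1)   # optical (grayscale) patch
        self.head = nn.Sequential(
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, sar_patch, opt_patch):
        f_sar = self.sar_stream(sar_patch)
        f_opt = self.opt_stream(opt_patch)
        return self.head(torch.cat([f_sar, f_opt], dim=1))

# Usage sketch: binary cross-entropy on corresponding / non-corresponding pairs.
model = PseudoSiameseMatcher()
sar = torch.randn(8, 1, 64, 64)            # dummy SAR patches
opt = torch.randn(8, 1, 64, 64)            # dummy optical patches
labels = torch.randint(0, 2, (8, 1)).float()
loss = nn.BCEWithLogitsLoss()(model(sar, opt), labels)
```

In practice, the negative (non-corresponding) pairs in each training batch would be selected by a hard negative mining strategy rather than drawn at random, and the binary cross-entropy term could be replaced by alternative loss formulations, both of which are among the concepts summarized in the paper.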

Details

Language:
English
Database:
OpenAIRE
Journal:
IGARSS
Accession number:
edsair.doi.dedup.....d7aa03ac8c4c874f39ca55d4a0286785