A multi-sensor fusion framework based on coupled residual convolutional neural networks


Bibliographic Details
Main authors: Li, Hao (author), Ghamisi, Pedram (author), Rasti, Behnood (author), Wu, Zhaoyan (author), Shapiro, Aurelie (author), Schultz, Michael (author), Zipf, Alexander (author)
Document type: Article (Journal)
Language: English
Published: 26 June 2020
In: Remote Sensing
Year: 2020, Volume: 12, Issue: 12
ISSN:2072-4292
DOI:10.3390/rs12122067
Online access: Publisher, licensed access, full text: https://doi.org/10.3390/rs12122067
Publisher, licensed access, full text: https://www.mdpi.com/2072-4292/12/12/2067
Author statement: Hao Li, Pedram Ghamisi, Behnood Rasti, Zhaoyan Wu, Aurelie Shapiro, Michael Schultz and Alexander Zipf
Description
Abstract: Multi-sensor remote sensing image classification has been considerably improved by deep learning feature extraction and classification networks. In this paper, we propose a novel multi-sensor fusion framework for the fusion of diverse remote sensing data sources. The novelty of this paper is grounded in three important design innovations: (1) a unique adaptation of coupled residual networks to address multi-sensor data classification; (2) a smart auxiliary training scheme that adjusts the loss function to address classification with limited samples; and (3) a unique design of the residual blocks to reduce the computational complexity while preserving the discriminative characteristics of multi-sensor features. The proposed classification framework is evaluated using three different remote sensing datasets: the urban Houston University datasets (including Houston 2013 and the training portion of Houston 2018) and the rural Trento dataset. The proposed framework achieves high overall accuracies of 93.57%, 81.20%, and 98.81% on Houston 2013, the training portion of Houston 2018, and the Trento dataset, respectively. Additionally, the experimental results demonstrate considerable improvements in classification accuracies compared with the existing state-of-the-art methods.
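The full method is described only in the linked paper. As a rough, illustrative sketch of the coupled-residual idea mentioned in the abstract (two sensor branches passed through a residual block with shared weights, then fused), the following minimal NumPy example may help; all function names, shapes, and the dense layers standing in for convolutions are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    # Two linear maps with a ReLU in between; the input is added back
    # via an identity shortcut, the defining trait of a residual block.
    return relu(x + relu(x @ w1) @ w2)

def coupled_fusion(x_sensor_a, x_sensor_b, w1, w2):
    # "Coupled" here means both sensor branches reuse the same weights,
    # so one shared embedding is learned for both modalities; the branch
    # outputs are then concatenated into a fused feature vector.
    f_a = residual_block(x_sensor_a, w1, w2)
    f_b = residual_block(x_sensor_b, w1, w2)
    return np.concatenate([f_a, f_b], axis=-1)

rng = np.random.default_rng(0)
d = 8                                       # illustrative feature size
w1 = rng.standard_normal((d, d)) * 0.1      # shared weights, branch-agnostic
w2 = rng.standard_normal((d, d)) * 0.1
x_a = rng.standard_normal((1, d))           # e.g. hyperspectral features
x_b = rng.standard_normal((1, d))           # e.g. LiDAR/SAR features
fused = coupled_fusion(x_a, x_b, w1, w2)
print(fused.shape)  # (1, 16)
```

Weight sharing keeps the parameter count of the two-branch network close to that of a single branch, which is one way a design like this can reduce computational complexity while still processing both sensors.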
Description: Viewed on 07.09.2020
Description: Online resource