Benchmarking the robustness of semantic segmentation models with respect to common corruptions


Bibliographic Details
Main Authors: Kamann, Christoph (Author), Rother, Carsten (Author)
Document Type: Article (Journal)
Language: English
Published: 2021
In: International journal of computer vision
Year: 2021, Volume: 129, Issue: 2, Pages: 462-483
ISSN:1573-1405
DOI:10.1007/s11263-020-01383-2
Online Access: Publisher, free of charge, full text: https://doi.org/10.1007/s11263-020-01383-2
Author Statement: Christoph Kamann, Carsten Rother
Description
Abstract: When designing a semantic segmentation model for a real-world application, such as autonomous driving, it is crucial to understand the robustness of the network with respect to a wide range of image corruptions. While there are recent robustness studies for full-image classification, we are the first to present an exhaustive study for semantic segmentation, based on many established neural network architectures. We utilize almost 400,000 images generated from the Cityscapes dataset, PASCAL VOC 2012, and ADE20K. Based on the benchmark study, we gain several new insights. Firstly, many networks perform well with respect to real-world image corruptions, such as a realistic PSF blur. Secondly, some architecture properties significantly affect robustness, such as a Dense Prediction Cell, which was designed to maximize performance on clean data only. Thirdly, the generalization capability of semantic segmentation models depends strongly on the type of image corruption. Models generalize well to image noise and image blur, but not to digitally corrupted data or weather corruptions.
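The benchmark described in the abstract evaluates models on synthetically corrupted copies of standard datasets. As a rough illustration of how such a corrupted image might be generated, the sketch below applies additive Gaussian noise at graded severity levels; the severity-to-sigma mapping here is an illustrative assumption, not the paper's exact settings.

```python
import numpy as np

def gaussian_noise(image: np.ndarray, severity: int = 1) -> np.ndarray:
    """Corrupt an (H, W, C) float image in [0, 1] with additive Gaussian
    noise, one of the common corruption types used in such benchmarks.
    The sigma values per severity level are illustrative assumptions."""
    sigmas = [0.04, 0.06, 0.08, 0.09, 0.10]
    sigma = sigmas[severity - 1]
    noisy = image + np.random.normal(0.0, sigma, size=image.shape)
    # Clip back to the valid intensity range after adding noise.
    return np.clip(noisy, 0.0, 1.0)

# Example: corrupt a dummy uniform-gray 4x4 RGB image at severity 3.
img = np.full((4, 4, 3), 0.5)
out = gaussian_noise(img, severity=3)
```

A full benchmark would sweep all corruption types (noise, blur, digital, weather) and severity levels over each dataset, then compare mean segmentation quality on corrupted versus clean inputs.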
Description: Published online: 30 September 2020
Accessed: 9 November 2021
Description: Online Resource