Channel and spatial attention based deep object co-segmentation


Bibliographic Details
Main authors: Chen, Jia (author), Chen, Yasong (author), Li, Weihao (author), Ning, Guoqin (author), Tong, Mingwen (author), Hilton, Adrian (author)
Document type: Article (Journal)
Language: English
Published: 23 October 2020
In: Knowledge-Based Systems
Year: 2021, Volume: 211, Pages: 1-10
ISSN:1872-7409
DOI:10.1016/j.knosys.2020.106550
Online access: Publisher (license required), full text: https://doi.org/10.1016/j.knosys.2020.106550
Publisher (license required), full text: https://www.sciencedirect.com/science/article/pii/S0950705120306791
Author statement: Jia Chen, Yasong Chen, Weihao Li, Guoqin Ning, Mingwen Tong, Adrian Hilton
Description
Abstract: Object co-segmentation is a challenging task that aims to segment the common objects in multiple images simultaneously. In general, solving this problem requires finding information shared by instances of the same object; across varied scenes, the common objects in different images may share only the same semantic information. In this paper, we propose a deep object co-segmentation method based on channel and spatial attention, which combines an attention mechanism with a deep neural network to enhance the common semantic information. A Siamese encoder-decoder structure is used for this task. First, the encoder network extracts low-level and high-level features from an image pair. Second, an improved attention mechanism in the channel and spatial domains enhances the multi-level semantic features of the common objects. Then, the decoder module accepts the enhanced feature maps and generates the masks for both images. Finally, we evaluate our approach on datasets commonly used for the co-segmentation task; the experimental results show that it achieves competitive performance.
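The record itself contains no code. As a rough illustration of the channel- and spatial-attention idea the abstract describes, the following is a simplified, parameter-free toy in NumPy (a CBAM-style sketch, not the authors' actual network; the function names, tensor shapes, and the additive average/max gating are all assumptions made for this example):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat):
    # feat: (C, H, W). Re-weight each channel by a gate computed from
    # its globally average- and max-pooled responses.
    avg = feat.mean(axis=(1, 2))          # (C,)
    mx = feat.max(axis=(1, 2))            # (C,)
    gate = sigmoid(avg + mx)              # (C,), values in (0, 1)
    return feat * gate[:, None, None]

def spatial_attention(feat):
    # feat: (C, H, W). Re-weight each spatial position by a gate computed
    # from the channel-wise average and max maps.
    avg = feat.mean(axis=0)               # (H, W)
    mx = feat.max(axis=0)                 # (H, W)
    gate = sigmoid(avg + mx)              # (H, W), values in (0, 1)
    return feat * gate[None, :, :]

# Apply channel attention first, then spatial attention, as in CBAM.
feat = np.random.rand(8, 4, 4).astype(np.float32)
out = spatial_attention(channel_attention(feat))
print(out.shape)  # (8, 4, 4)
```

In the actual method, such gates would be produced by small learned sub-networks inside the Siamese encoder-decoder, and applied to the features of both images so that the shared semantic channels and regions are emphasized before decoding the two masks.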
Description: Viewed on 08.02.2021
Description: Online resource