Deep unsupervised learning of visual similarities

Bibliographic Details
Main Authors: Sanakoyeu, Artsiom (Author), Bautista, Miguel (Author), Ommer, Björn (Author)
Format: Article (Journal)
Language:English
Published: 31 January 2018
In: Pattern recognition
Year: 2018, Volume: 78, Pages: 331-343
DOI:10.1016/j.patcog.2018.01.036
Online Access:Publisher, full text: https://doi.org/10.1016/j.patcog.2018.01.036
Publisher, full text: http://www.sciencedirect.com/science/article/pii/S0031320318300293
Author Notes:Artsiom Sanakoyeu, Miguel A. Bautista, Björn Ommer
Description
Summary:Exemplar learning of visual similarities in an unsupervised manner is a problem of paramount importance to computer vision. In this context, however, the recent breakthrough in deep learning has not yet unfolded its full potential. With only a single positive sample, a great imbalance between one positive and many negatives, and unreliable relationships between most samples, the training of Convolutional Neural Networks (CNNs) is impaired. In this paper, we use weak estimates of local similarities and propose a single optimization problem to extract batches of samples with mutually consistent relations. Conflicting relations are distributed over different batches, and similar samples are grouped into compact groups. Learning visual similarities is then framed as a sequence of categorization tasks. The CNN consolidates transitivity relations within and between groups and learns a single representation for all samples without the need for labels. The proposed unsupervised approach shows competitive performance on detailed posture analysis and object classification.
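The grouping step described in the summary can be illustrated with a minimal sketch. This is not the paper's actual optimization: the function name and the greedy mutual-consistency rule below are hypothetical stand-ins, assuming a precomputed pairwise similarity matrix from the weak local estimates. The idea shown is only that samples are admitted to a group when they are similar to every current member, so conflicting relations end up in different groups.

```python
def build_surrogate_classes(sim, threshold=0.5):
    """Greedily partition samples into compact groups of mutually
    similar exemplars (a simplified stand-in for the paper's single
    optimization problem over batches with consistent relations).

    sim: square matrix (list of lists) of pairwise similarities.
    Returns a list of groups, each a list of sample indices.
    """
    n = len(sim)
    unassigned = set(range(n))
    groups = []
    while unassigned:
        seed = min(unassigned)
        group = [seed]
        # Admit a sample only if it is similar to EVERY current member,
        # so each group is mutually consistent and compact.
        for j in sorted(unassigned - {seed}):
            if all(sim[j][m] >= threshold for m in group):
                group.append(j)
        unassigned -= set(group)
        groups.append(group)
    return groups


# Toy example: two blocks of mutually similar samples.
sim = [
    [1, 1, 1, 0, 0, 0],
    [1, 1, 1, 0, 0, 0],
    [1, 1, 1, 0, 0, 0],
    [0, 0, 0, 1, 1, 1],
    [0, 0, 0, 1, 1, 1],
    [0, 0, 0, 1, 1, 1],
]
groups = build_surrogate_classes(sim)
print(groups)  # → [[0, 1, 2], [3, 4, 5]]
```

Each resulting group can then serve as a surrogate class label, so the subsequent CNN training reduces to a standard sequence of classification tasks, as the summary describes.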
Item Description:Viewed on 16.05.2019
Physical Description:Online Resource