Improving deep metric learning by divide and conquer

Bibliographic Details
Main Authors: Sanakoyeu, Artsiom (Author), Ma, Pingchuan (Author), Tschernezki, Vadim (Author), Ommer, Björn (Author)
Format: Article (Journal)
Language: English
Published: [01 November 2022]
In: IEEE Transactions on Pattern Analysis and Machine Intelligence
Year: 2022, Volume: 44, Issue: 11, Pages: 8306-8320
ISSN: 1939-3539
DOI: 10.1109/TPAMI.2021.3113270
Online Access: Publisher, full text: http://dx.doi.org/10.1109/TPAMI.2021.3113270
Author Notes: Artsiom Sanakoyeu, Pingchuan Ma, Vadim Tschernezki, Björn Ommer
Description
Summary: Deep metric learning (DML) is a cornerstone of many computer vision applications. It aims at learning a mapping from the input domain to an embedding space, where semantically similar objects are located nearby and dissimilar objects far from one another. The target similarity on the training data is defined by the user in the form of ground-truth class labels. However, while the embedding space learns to mimic the user-provided similarity on the training data, it should also generalize to novel categories not seen during training. Besides the user-provided ground-truth training labels, many additional visual factors (such as viewpoint changes or shape peculiarities) exist and imply different notions of similarity between objects, affecting generalization to images unseen during training. However, existing approaches usually learn a single embedding space directly on all available training data, struggle to encode all the different types of relationships, and do not generalize well. We propose to build a more expressive representation by jointly splitting the embedding space and the data hierarchically into smaller sub-parts. We successively focus on smaller subsets of the training data, reducing their variance and learning a different embedding subspace for each data subset. Moreover, the subspaces are learned jointly to cover not only the intricacies, but also the breadth of the data. Only after that do we build the final embedding from the subspaces in the conquering stage. The proposed algorithm acts as a transparent wrapper that can be placed around arbitrary existing DML methods. Our approach significantly improves upon the state of the art on image retrieval, clustering, and re-identification tasks evaluated on the CUB200-2011, CARS196, Stanford Online Products, In-shop Clothes, and PKU VehicleID datasets.
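The abstract describes the divide-and-conquer procedure only in prose. The sketch below is a minimal, hypothetical PyTorch rendering of that idea, not the authors' reference implementation: the choice of K-means for the data split, the linear "learner" heads, and the names DivideAndConquerEmbedder and assign_clusters are assumptions made here for illustration.

```python
import torch
import torch.nn as nn
from sklearn.cluster import KMeans


class DivideAndConquerEmbedder(nn.Module):
    """Backbone followed by K embedding sub-spaces ("learners").

    Divide: training images are clustered into K groups in the current embedding
    space; each learner sees only its own cluster, i.e. a lower-variance subset
    of the data, and maps features to one slice of the full embedding.
    Conquer: the K slices are concatenated into the final embedding.
    """

    def __init__(self, backbone: nn.Module, feat_dim: int, emb_dim: int, k: int):
        super().__init__()
        assert emb_dim % k == 0, "embedding dimension must split evenly across learners"
        self.backbone = backbone
        self.k = k
        self.learners = nn.ModuleList(
            [nn.Linear(feat_dim, emb_dim // k) for _ in range(k)]
        )

    def forward(self, x, learner=None):
        feats = self.backbone(x)
        if learner is not None:  # divide stage: train one sub-space on its cluster
            return self.learners[learner](feats)
        # conquer stage: concatenate all sub-spaces into the full embedding
        return torch.cat([f(feats) for f in self.learners], dim=1)


def assign_clusters(model, loader, k, device="cpu"):
    """Re-cluster the training set in the current full embedding space."""
    model.eval()
    embeddings = []
    with torch.no_grad():
        for images, _ in loader:
            embeddings.append(model(images.to(device)).cpu())
    embeddings = torch.cat(embeddings).numpy()
    # one cluster id per training image; each id selects the learner for that image
    return KMeans(n_clusters=k, n_init=10).fit_predict(embeddings)
```

Under these assumptions, training would draw each mini-batch from a single cluster c, compute model(images, learner=c), and apply any standard DML loss (triplet, margin, proxy-based, etc.), re-running assign_clusters periodically so the partition follows the evolving embedding; the underlying loss is left unchanged, which reflects the "transparent wrapper" property the abstract claims.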
Item Description: Date of publication: 16 September 2021
Viewed on 09.12.2022
Physical Description: Online Resource