Learning 6D object pose estimation using 3D object coordinates

Bibliographic Details
Main Authors: Brachmann, Eric; Krull, Alexander; Michel, Frank; Gumhold, Stefan; Shotton, Jamie; Rother, Carsten
Format: Conference Paper (Book Chapter)
Language: English
Published: 2014
In: Computer Vision - ECCV 2014
Year: 2014, Pages: 536-551
DOI: 10.1007/978-3-319-10605-2_35
Online Access: Publisher, licensed access, full text: https://doi.org/10.1007%2F978-3-319-10605-2_35
Publisher, licensed access, full text: https://link.springer.com/chapter/10.1007%2F978-3-319-10605-2_35
Description
Summary: This work addresses the problem of estimating the 6D pose of specific objects from a single RGB-D image. We present a flexible approach that can deal with generic objects, both textured and texture-less. The key new concept is a learned intermediate representation in the form of a dense 3D object coordinate labelling paired with a dense class labelling. We show that for a common dataset with texture-less objects, where template-based techniques are suitable and state of the art, our approach is slightly superior in terms of accuracy. We also demonstrate the benefits of our approach over template-based techniques in terms of robustness to varying lighting conditions. Towards this end, we contribute a new ground truth dataset with 10k images of 20 objects, each captured under three different lighting conditions. We demonstrate that our approach scales well with the number of objects and can run quickly.
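A rough picture of the approach described in the summary: a learned model (a random forest in the paper) labels every pixel with an object class and with a 3D coordinate on the object's surface, and 6D pose hypotheses are then generated from small sets of these dense correspondences and verified against the remaining pixels. The sketch below illustrates only that hypothesize-and-verify stage and is not the authors' implementation; it assumes the predicted object coordinates (obj_xyz) and the matching camera-space points back-projected from the depth channel (cam_xyz) are already available, and all names and thresholds are illustrative.

import numpy as np

def kabsch(P, Q):
    # Least-squares rigid transform (R, t) with Q ~ R @ P + t (Kabsch algorithm).
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)               # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cQ - R @ cP

def ransac_pose(obj_xyz, cam_xyz, iters=256, inlier_thresh=0.02, seed=0):
    # obj_xyz: (N, 3) predicted 3D object coordinates for pixels labelled as the object.
    # cam_xyz: (N, 3) corresponding camera-space points back-projected from depth.
    rng = np.random.default_rng(seed)
    best, best_count = None, -1
    for _ in range(iters):
        idx = rng.choice(len(obj_xyz), size=3, replace=False)  # minimal 3-point sample
        R, t = kabsch(obj_xyz[idx], cam_xyz[idx])
        resid = np.linalg.norm(obj_xyz @ R.T + t - cam_xyz, axis=1)
        inliers = resid < inlier_thresh                        # metres
        if inliers.sum() > best_count:
            best_count = inliers.sum()
            best = kabsch(obj_xyz[inliers], cam_xyz[inliers])  # refit on all inliers
    return best  # (R, t) mapping object coordinates into the camera frame

In the paper itself, hypotheses are scored with an energy that combines depth, object coordinate, and class-label agreement, and the best hypotheses are refined; the plain inlier counting above merely stands in for that step.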
Item Description: Viewed on 14.09.2020
Physical Description: Online Resource
ISBN: 9783319106052