A super-resolution framework for high-accuracy multiview reconstruction

Bibliographic details
Authors: Goldlücke, Bastian; Aubry, Mathieu; Kolev, Kalin; Cremers, Daniel
Document type: Journal article
Language: English
Published: 2014
In: International Journal of Computer Vision
Year: 2014, Volume: 106, Issue: 2, Pages: 172-191
ISSN: 1573-1405
DOI: 10.1007/s11263-013-0654-8
Online access: Resolving system (subscription required), full text: https://doi.org/10.1007/s11263-013-0654-8
Publisher (subscription required), full text: https://link.springer.com/article/10.1007%2Fs11263-013-0654-8
Statement of responsibility: Bastian Goldlücke, Mathieu Aubry, Kalin Kolev, Daniel Cremers
Description
Abstract: We present a variational framework to estimate super-resolved texture maps on a 3D geometry model of a surface from multiple images. Given the calibrated images and the reconstructed geometry, the proposed functional is convex in the super-resolution texture. Using a conformal atlas of the surface, we transform the model from the curved geometry to the flat charts and solve it using state-of-the-art and provably convergent primal-dual algorithms. In order to improve image alignment and the quality of the texture, we extend the functional to also optimize for a normal displacement map on the surface as well as the camera calibration parameters. Since the sub-problems for displacement and camera parameters are non-convex, we revert to relaxation schemes in order to robustly estimate a minimizer via sequential convex programming. Experimental results confirm that the proposed super-resolution framework allows the recovery of textured models with a significantly higher level of detail than the individual input images.
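To make the convex part of the pipeline concrete, the following is a minimal, self-contained sketch of a Chambolle-Pock-style primal-dual iteration for a TV-regularized super-resolution energy on a single flat chart. It is not the authors' implementation: the image-formation operators (an integer shift followed by box-filter downsampling), the per-view shifts, and all parameter values are illustrative assumptions standing in for the calibrated camera projections, point-spread functions, and conformal-atlas parametrization used in the paper.

```python
# Minimal sketch (NOT the authors' code): primal-dual solver for
#   min_x  sum_i 0.5*||A_i x - b_i||^2 + lam*||grad x||_{2,1}
# on one flat chart, with hypothetical operators A_i = downsample(shift_i(x)).
import numpy as np

def grad(x):
    # Forward differences with Neumann boundary conditions.
    gx = np.zeros_like(x); gy = np.zeros_like(x)
    gx[:-1, :] = x[1:, :] - x[:-1, :]
    gy[:, :-1] = x[:, 1:] - x[:, :-1]
    return gx, gy

def div(gx, gy):
    # Negative adjoint of grad (discrete divergence).
    dx = np.zeros_like(gx); dy = np.zeros_like(gy)
    dx[0, :] = gx[0, :]; dx[1:-1, :] = gx[1:-1, :] - gx[:-2, :]; dx[-1, :] = -gx[-2, :]
    dy[:, 0] = gy[:, 0]; dy[:, 1:-1] = gy[:, 1:-1] - gy[:, :-2]; dy[:, -1] = -gy[:, -2]
    return dx + dy

def downsample(x, s):
    # Box-average s x s blocks (stand-in for the true sampling kernel).
    h, w = x.shape
    return x.reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def upsample_adj(y, s):
    # Adjoint of the box-average downsampling.
    return np.kron(y, np.ones((s, s))) / (s * s)

def A(x, shift, s):       # hypothetical image-formation operator for one view
    return downsample(np.roll(x, shift, axis=(0, 1)), s)

def A_adj(y, shift, s):   # its adjoint
    return np.roll(upsample_adj(y, s), (-shift[0], -shift[1]), axis=(0, 1))

def superres_pd(bs, shifts, s, lam=0.05, n_iter=300):
    H, W = bs[0].shape[0] * s, bs[0].shape[1] * s
    x = np.zeros((H, W)); x_bar = x.copy()
    px = np.zeros((H, W)); py = np.zeros((H, W))           # dual of grad
    qs = [np.zeros_like(b) for b in bs]                    # duals of data terms
    L = np.sqrt(8.0 + len(bs) / s**2)                      # bound on ||K||
    tau = sigma = 1.0 / L
    for _ in range(n_iter):
        # Dual ascent + proximal steps.
        gx, gy = grad(x_bar)
        px += sigma * gx; py += sigma * gy
        norm = np.maximum(1.0, np.sqrt(px**2 + py**2) / lam)
        px /= norm; py /= norm                             # project onto |p| <= lam
        for i, (b, sh) in enumerate(zip(bs, shifts)):
            qs[i] = (qs[i] + sigma * (A(x_bar, sh, s) - b)) / (1.0 + sigma)
        # Primal descent + over-relaxation.
        x_old = x
        x = x + tau * div(px, py)
        for q, sh in zip(qs, shifts):
            x -= tau * A_adj(q, sh, s)
        x_bar = 2 * x - x_old
    return x

# Usage on synthetic data: four shifted, downsampled views of a random texture.
rng = np.random.default_rng(0)
truth = rng.random((64, 64))
shifts = [(0, 0), (1, 0), (0, 1), (1, 1)]
bs = [A(truth, sh, 2) + 0.01 * rng.standard_normal((32, 32)) for sh in shifts]
x_hat = superres_pd(bs, shifts, 2)
```

The step sizes are derived from a bound on the operator norm of the stacked gradient and image-formation operators, which is what makes the primal-dual scheme provably convergent; on real data the shift-plus-box-filter model would be replaced by the chart-to-image warps and blur kernels induced by the calibrated cameras.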
Description: Published online: 25 August 2013
Accessed on 06.10.2020
Description: Online resource