Virtual LiDAR simulation as a high performance computing challenge: toward HPC HELIOS++

Bibliographic Details
Main authors: Esmorís Pena, Alberto M. (author); Yermo, Miguel (author); Weiser, Hannah (author); Winiwarter, Lukas (author); Höfle, Bernhard (author); Rivera, Francisco F. (author)
Document type: Article (Journal)
Language: English
Published: 30 September 2022
In: IEEE Access
Year: 2022, Volume: 10, Pages: 105052-105073
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2022.3211072
Online access: Publisher, licensed access, full text: https://dx.doi.org/10.1109/ACCESS.2022.3211072
Author statement: Alberto M. Esmorís, Miguel Yermo, Hannah Weiser, Lukas Winiwarter, Bernhard Höfle, and Francisco F. Rivera
Description
Summary: The software HELIOS++ simulates the laser scanning of a given virtual scene that can be composed of different spatial primitives and 3D meshes of distinct granularity. The high computational cost of this type of simulation software demands efficient computational solutions. Classical GPU-based solutions are not well suited when the scene is composed of irregular geometries combining different primitives and physics models, because these lead to divergent computation branches. In this paper, we explore parallelization strategies based on static and dynamic workload balancing, together with heuristic optimization strategies, to speed up the ray-tracing process based on a k-dimensional tree (KDT). Using HELIOS++ as our case study, we analyze the performance of our algorithms on different parallel computers, including the CESGA FinisTerrae-II supercomputer. There is a significant performance boost in all cases, with the decrease in computation time ranging from 89.5% to 99.4%. Our results show that the proposed algorithms can boost the performance of any software that relies heavily on a KDT or a similar data structure, as well as software that spends most of its time computing with only a few synchronization barriers. Hence, the algorithms presented in this paper improve performance whether run on personal computers or supercomputers.
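The summary refers to static and dynamic workload balancing around a KD-tree (KDT) based ray-tracing loop. The minimal C++ sketch below illustrates one possible form of dynamic balancing only; it is not the authors' implementation, and the pulse count, chunk size, and the traceRay stub are illustrative assumptions. Worker threads repeatedly claim the next chunk of laser pulses from a shared atomic counter, so threads that land on cheap scene regions simply process more chunks.

// Minimal illustrative sketch (not HELIOS++ code): dynamic workload
// balancing of laser pulses across threads; in a real simulator each
// pulse would traverse a KD-tree of the scene.
#include <algorithm>
#include <atomic>
#include <cstddef>
#include <iostream>
#include <thread>
#include <vector>

struct Ray { double ox, oy, oz, dx, dy, dz; };

// Stub for the KD-tree intersection query; a real simulator would traverse
// the tree and intersect scene primitives here.
double traceRay(const Ray &r) { return r.dx + r.dy + r.dz; }

int main() {
    const std::size_t numPulses = 1000000; // simulated pulses (assumption)
    const std::size_t chunkSize = 4096;    // pulses per work unit (assumption)
    std::vector<Ray> pulses(numPulses, Ray{0, 0, 0, 1, 1, 1});
    std::vector<double> hits(numPulses, 0.0);

    // Dynamic balancing: a shared counter hands out chunks on demand, so
    // faster threads take more chunks than threads stuck in costly regions.
    std::atomic<std::size_t> nextChunk{0};
    auto worker = [&]() {
        while (true) {
            std::size_t begin = nextChunk.fetch_add(chunkSize);
            if (begin >= numPulses) break;
            std::size_t end = std::min(begin + chunkSize, numPulses);
            for (std::size_t i = begin; i < end; ++i)
                hits[i] = traceRay(pulses[i]);
        }
    };

    unsigned n = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> threads;
    for (unsigned t = 0; t < n; ++t) threads.emplace_back(worker);
    for (auto &t : threads) t.join();
    std::cout << "Traced " << numPulses << " pulses on " << n << " threads\n";
    return 0;
}

A static-balancing variant would instead assign each thread a fixed contiguous range of pulses up front, which is cheaper to coordinate but more sensitive to irregular scene geometry.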
Description: Published online 30 September 2022; article version 10 October 2022
Accessed: 9 December 2022
Description: Online resource