Analyzing GPU-controlled communication with dynamic parallelism in terms of performance and energy


Bibliographic Details
Main Authors: Oden, Lena (Author), Klenk, Benjamin (Author), Fröning, Holger (Author)
Format: Article (Journal)
Language: English
Published: 29 March 2016
In: Parallel computing
Year: 2016, Volume: 57, Pages: 125-134
ISSN: 1872-7336
DOI: 10.1016/j.parco.2016.02.005
Online Access: Publisher, full text: http://dx.doi.org/10.1016/j.parco.2016.02.005
Publisher, full text: http://www.sciencedirect.com/science/article/pii/S0167819116300011
Author Notes: Lena Oden, Benjamin Klenk, Holger Fröning
Description
Summary: Graphics Processing Units (GPUs) are widely used in high-performance computing due to their high computational power and high performance per watt. However, one of the main bottlenecks of GPU-accelerated cluster computing is the data transfer between distributed GPUs, which affects not only performance but also power consumption. The most common way to utilize a GPU cluster is a hybrid model, in which the GPU accelerates the computation while the CPU handles the communication. This approach always requires a dedicated CPU thread, which consumes additional CPU cycles and therefore increases the power consumption of the complete application. In recent work we have shown that the GPU is able to control the communication independently of the CPU. However, GPU-controlled communication raises several problems, the main one being intra-GPU synchronization: because GPU thread blocks are non-preemptive, issuing communication requests from within a GPU kernel can easily result in a deadlock. In this work we show how dynamic parallelism solves this problem. GPU-controlled communication in combination with dynamic parallelism allows the control flow of multi-GPU applications to be kept on the GPU, bypassing the CPU completely. Other in-kernel synchronization methods cause massive performance losses due to the forced serialization of the GPU thread blocks. Although the performance of applications using GPU-controlled communication is still slightly worse than that of hybrid applications, we show that performance per watt increases by up to 10% while still using commodity hardware.
Item Description: Viewed on 03.07.2017
Physical Description: Online Resource