cCUDA: effective co-scheduling of concurrent kernels on GPUs

Bibliographic Details
Main Authors: Shekofteh, S. Kazem (Author), Noori, Hamid Reza (Author), Naghibzadeh, Mahmoud (Author), Fröning, Holger (Author), Yazdi, Hadi Sadoghi (Author)
Format: Article (Journal)
Language: English
Published: 2020
In: IEEE transactions on parallel and distributed systems
Year: 2020, Volume: 31, Issue: 4, Pages: 766-778
ISSN: 1558-2183
DOI: 10.1109/TPDS.2019.2944602
Online Access: Publisher, license required: https://doi.org/10.1109/TPDS.2019.2944602
Author Notes:S.-Kazem Shekofteh, Hamid Noori, Mahmoud Naghibzadeh, Holger Fröning, Hadi Sadoghi Yazdi
Description
Summary: While GPUs are by now omnipresent in scientific and technical computing, they continue to evolve as processors. An important recent feature is the ability to execute multiple kernels concurrently via streams. However, experiments show that various parameters, including the behavior of the kernels, the order of kernel launches, and other execution configurations such as the number of concurrent thread blocks, can lead to different execution times for concurrent kernel execution. Since kernels may have different resource requirements, they can be classified into different classes, traditionally either memory-bound or compute-bound. However, a kernel may belong to different classes on different hardware, depending on the available hardware resources. In this paper, the notion of kernel mix intensity is introduced. Based on it, a scheduling framework called concurrent CUDA (cCUDA) is proposed to co-schedule concurrent kernels more efficiently. It first profiles and ranks kernels with different execution behaviors, then takes kernel resource requirements into account to partition the thread blocks of different kernels and overlap them, better utilizing GPU resources. Experimental results on real hardware demonstrate execution-time improvements of up to 1.86x, and an average speedup of 1.28x, across a wide range of kernels. cCUDA is available at https://github.com/kshekofteh/cCUDA.
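As a point of reference for the stream-based concurrency the abstract describes, a minimal CUDA sketch of launching two kernels on separate streams might look as follows. This illustrates only the underlying CUDA feature, not the authors' cCUDA framework; the kernel names, sizes, and loop counts are illustrative assumptions.

```cuda
#include <cuda_runtime.h>

// Two toy kernels with different resource profiles: a memory-bound copy
// and a compute-bound arithmetic loop (hypothetical examples, not from the paper).
__global__ void memBound(const float *in, float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i];
}

__global__ void computeBound(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float x = data[i];
        for (int k = 0; k < 1000; ++k) x = x * 1.0001f + 0.0001f;
        data[i] = x;
    }
}

int main() {
    const int n = 1 << 20;
    float *a, *b, *c;
    cudaMalloc(&a, n * sizeof(float));
    cudaMalloc(&b, n * sizeof(float));
    cudaMalloc(&c, n * sizeof(float));

    // Each kernel is enqueued on its own stream, so the hardware scheduler
    // may overlap their execution when resources allow.
    cudaStream_t s1, s2;
    cudaStreamCreate(&s1);
    cudaStreamCreate(&s2);

    memBound<<<(n + 255) / 256, 256, 0, s1>>>(a, b, n);
    computeBound<<<(n + 255) / 256, 256, 0, s2>>>(c, n);

    cudaDeviceSynchronize();
    cudaStreamDestroy(s1);
    cudaStreamDestroy(s2);
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

Whether the two kernels actually run concurrently depends on resource availability on the device, which is exactly the scheduling problem the paper addresses.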
Item Description:Date of Publication: 30 September 2019
Viewed on 21 Apr 2020
Physical Description:Online Resource