A simple model for portable and fast prediction of execution time and power consumption of GPU Kernels

Bibliographic Details
Main Authors: Braun, Lorenz (Author), Nikas, Sotirios (Author), Song, Chen (Author), Heuveline, Vincent (Author), Fröning, Holger (Author)
Format: Article (Journal)
Language: English
Published: 2021
In: ACM Transactions on Architecture and Code Optimization
Year: 2021, Volume: 18, Issue: 1, Pages: 1-25
ISSN: 1544-3973
DOI: 10.1145/3431731
Online Access: Publisher, license required, full text: https://doi.org/10.1145/3431731
Author Notes: Lorenz Braun (Institute of Computer Engineering, Heidelberg University, Germany), Sotirios Nikas, Chen Song, and Vincent Heuveline (Engineering Mathematics and Computing Lab, Heidelberg University, Germany), Holger Fröning (Institute of Computer Engineering, Heidelberg University, Germany)
Description
Summary: Characterizing compute kernel execution behavior on GPUs for efficient task scheduling is a non-trivial task. We address this with a simple model enabling portable and fast predictions among different GPUs using only hardware-independent features. This model is built based on random forests using 189 individual compute kernels from benchmarks such as Parboil, Rodinia, Polybench-GPU, and SHOC. Evaluation of the model performance using cross-validation yields a median Mean Average Percentage Error (MAPE) of 8.86-52.0% for time and 1.84-2.94% for power prediction across five different GPUs, while latency for a single prediction varies between 15 and 108 ms.
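The MAPE metric reported in the summary follows the usual definition (mean of absolute relative errors, in percent). A minimal sketch of how such a score is computed; the kernel timings below are made-up illustrative values, not data from the paper:

```python
def mape(measured, predicted):
    """Mean absolute percentage error (in percent) between two sequences."""
    if len(measured) != len(predicted) or not measured:
        raise ValueError("inputs must be non-empty and of equal length")
    return 100.0 * sum(abs(m - p) / abs(m)
                       for m, p in zip(measured, predicted)) / len(measured)

# Hypothetical kernel execution times in ms: measured vs. model-predicted.
measured = [12.0, 8.0, 40.0, 5.0]
predicted = [13.2, 7.6, 36.0, 5.5]
print(f"MAPE: {mape(measured, predicted):.2f}%")  # prints "MAPE: 8.75%"
```

In the paper's evaluation, this score is computed per GPU under cross-validation and the median over kernels is reported.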
Item Description: Publication date: December 2020
Accessed: 31 March 2021
Physical Description: Online Resource