Joint program and layout transformations to enable convolutional operators on specialized hardware based on constraint programming


Bibliographic Details
Main Authors: Rieber, Dennis (Author); Acosta, Axel (Author); Fröning, Holger (Author)
Format: Article (Journal)
Language: English
Published: 2022
In: ACM Transactions on Architecture and Code Optimization
Year: 2022, Volume: 19, Issue: 1, Pages: 1-26
ISSN: 1544-3973
DOI: 10.1145/3487922
Online Access: Publisher, free of charge, full text: https://doi.org/10.1145/3487922
Publisher, free of charge, full text: https://dl.acm.org/doi/10.1145/3487922
Description
Summary: The success of Deep Artificial Neural Networks (DNNs) in many domains created a rich body of research concerned with hardware accelerators for compute-intensive DNN operators. However, implementing such operators efficiently with complex hardware intrinsics such as matrix multiply is a task not yet automated gracefully. Solving this task often requires joint program and data layout transformations. First solutions to this problem have been proposed, such as TVM, UNIT, or ISAMIR, which work on a loop-level representation of operators and specify data layout and possible program transformations before the embedding into the operator is performed. This top-down approach creates a tension between exploration range and search space complexity, especially when also exploring data layout transformations such as im2col, channel packing, or padding. In this work, we propose a new approach to this problem. We created a bottom-up method that allows the joint transformation of both computation and data layout based on the found embedding. By formulating the embedding as a constraint satisfaction problem over the scalar dataflow, every possible embedding solution is contained in the search space. Adding additional constraints and optimization targets to the solver generates the subset of preferable solutions. An evaluation using the VTA hardware accelerator with the Baidu DeepBench inference benchmark shows that our approach can automatically generate code competitive to reference implementations. Further, we show that dynamically determining the data layout based on intrinsic and workload is beneficial for hardware utilization and performance. In cases where the reference implementation has low hardware utilization due to its fixed deployment strategy, we achieve a geomean speedup of up to 2.813×, while individual operators can improve by as much as 170×.
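The im2col layout transformation named in the summary is the standard trick for mapping convolution onto a matrix-multiply intrinsic. The following is a minimal NumPy sketch, not the paper's implementation; the single-channel shapes and function names are illustrative only:

```python
import numpy as np

def im2col(x, kh, kw):
    """Unroll every kh x kw patch of a single-channel image x (H, W)
    into a column, so a valid convolution becomes one matrix multiply."""
    H, W = x.shape
    oh, ow = H - kh + 1, W - kw + 1
    cols = np.empty((kh * kw, oh * ow))
    for i in range(oh):
        for j in range(ow):
            cols[:, i * ow + j] = x[i:i + kh, j:j + kw].ravel()
    return cols, (oh, ow)

def conv2d_via_matmul(x, k):
    """Valid 2D convolution (cross-correlation, as usual in DNNs) as a GEMM."""
    cols, (oh, ow) = im2col(x, *k.shape)
    return (k.ravel() @ cols).reshape(oh, ow)

def conv2d_direct(x, k):
    """Reference: direct sliding-window computation."""
    kh, kw = k.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

x = np.arange(25, dtype=float).reshape(5, 5)
k = np.ones((3, 3))
assert np.allclose(conv2d_via_matmul(x, k), conv2d_direct(x, k))
```

The trade-off the summary alludes to is visible here: im2col duplicates input elements across columns (extra memory traffic) in exchange for a dataflow that a matrix-multiply intrinsic can consume directly, which is why choosing the layout per intrinsic and workload matters.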
Item Description: Published online: December 6, 2021
Viewed on: November 21, 2022
Physical Description:Online Resource