ARTS: an approximate reduced tree and segmentation-based multiplier

Bibliographic Details
Main Authors: Kelayeh, Mahla Salehi Sheikhali (Author), Divsalar, Sahand (Author), Vahdat, Shaghayegh (Author), Taherinejad, Nima (Author)
Format: Article (Journal)
Language: English
Published: February 2026
In: Future generation computer systems
Year: 2026, Volume: 175, Pages: 1-12
ISSN: 1872-7115
DOI: 10.1016/j.future.2025.108098
Online Access: Resolving system, license required, full text: https://doi.org/10.1016/j.future.2025.108098
Publisher, license required, full text: https://www.sciencedirect.com/science/article/pii/S0167739X25003929
Author Notes: Mahla Salehi Sheikhali Kelayeh, Sahand Divsalar, Shaghayegh Vahdat, Nima TaheriNejad
Description
Summary: Due to the increasing use of machine learning applications in daily human life, efficient hardware implementation of these applications has recently become a serious challenge. Multipliers are among the most prevalent, and at the same time most expensive (from a hardware perspective), arithmetic units used in such applications. In this paper, a novel approximate multiplier, called ARTS, is designed based on the idea of dividing the input operands into different static segments and performing certain steps of the calculation approximately, using simplified reduction trees to sum up the partial products. ARTS manifests significant improvements in hardware characteristics: namely, 68.6%, 16.5%, and 60% improvements in power, delay, and area are achieved with respect to an exact 8-bit Wallace multiplier, while up to 59.8%, 37.2%, and 52.8% improvements are obtained compared to other state-of-the-art (SoTA) approximate multipliers. The efficiency of ARTS is assessed in image processing and DNN applications. ARTS shows up to 91.4% and 28.3% better PSNR and 52.4% and 20.5% better SSIM in image multiplication and Sobel edge detection applications, respectively, compared to other SoTA approximate multipliers. In DNN applications, ARTS exhibits outstanding performance, achieving up to 84.8% higher classification accuracy compared to SoTA approximate designs with similar hardware characteristics. Additionally, when compared to SoTA designs offering comparable accuracy, ARTS achieves this performance with up to 191% lower energy consumption.
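Note: The abstract only sketches the underlying idea of splitting operands into static segments and simplifying the partial-product reduction. The short Python sketch below illustrates that general idea under stated assumptions (8-bit operands, 4-bit high/low segments, the least significant cross-segment product dropped as the approximation); it is not the authors' actual ARTS design, whose segment choice and reduction trees are defined in the paper itself.

    def approx_multiply_static_segment(a, b, seg=4):
        """Generic static-segmentation approximate multiply (illustrative only).

        Each operand is split into a high and a low segment at a fixed (static)
        boundary. Only the three most significant partial products are summed,
        emulating a simplified reduction tree; the low-by-low product is dropped.
        """
        mask_lo = (1 << seg) - 1
        a_hi, a_lo = a >> seg, a & mask_lo
        b_hi, b_lo = b >> seg, b & mask_lo
        # Keep a_hi*b_hi and the two cross terms; omit a_lo*b_lo (cheapest to drop).
        return (a_hi * b_hi << (2 * seg)) + ((a_hi * b_lo + a_lo * b_hi) << seg)

    # Example: 171 * 205 = 35055 exactly; the sketch returns 34912
    # (error equals the omitted a_lo*b_lo term, here 11 * 13 = 143).
    print(approx_multiply_static_segment(171, 205))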
Item Description: "Available online 22 August 2025"
Viewed on 28 January 2026
Physical Description: Online resource