Rapid convolutional neural networks for Gram-stained image classification at inference time on mobile devices: empirical study from transfer learning to optimization

Bibliographic Details
Main Authors: Kim, Hee Eun (Author), Maros, Máté E. (Author), Siegel, Fabian (Author), Ganslandt, Thomas (Author)
Format: Article (Journal)
Language: English
Published: 4 November 2022
In: Biomedicines
Year: 2022, Volume: 10, Issue: 11, Pages: 1-12
ISSN:2227-9059
DOI:10.3390/biomedicines10112808
Online Access: Publisher, free of charge, full text: https://doi.org/10.3390/biomedicines10112808
Publisher, free of charge, full text: https://www.mdpi.com/2227-9059/10/11/2808
Author Notes: Hee E. Kim, Máté E. Maros, Fabian Siegel, Thomas Ganslandt
Description
Summary: Despite the emergence of mobile health and the success of deep learning (DL), deploying production-ready DL models to resource-limited devices remains challenging. In particular, the inference speed of DL models becomes relevant. We aimed to accelerate inference time for Gram-stained analysis, which is a tedious and manual task involving microorganism detection on whole slide images. Three DL models were optimized in three steps (transfer learning, pruning, and quantization) and then evaluated on two Android smartphones. Most convolutional layers (≥80%) had to be retrained for adaptation to the Gram-stained classification task. The combination of pruning and quantization demonstrated its utility to reduce the model size and inference time without compromising model quality. Pruning mainly contributed to model size reduction by 15×, while quantization reduced inference time by 3× and decreased model size by 4×. The combination of the two reduced the baseline model size by an overall factor of 46×. Optimized models were smaller than 6 MB and were able to process one image in <0.6 s on a Galaxy S10. Our findings demonstrate that methods for model compression are highly relevant for the successful deployment of DL solutions to resource-limited devices.
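
A minimal sketch of the compression pipeline summarized above (transfer learning, then pruning, then quantization) is given below. The backbone (ResNet-18), the 30% sparsity level, the two-class head, and the file name are illustrative assumptions for this sketch, not settings reported in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune
from torchvision import models

# Transfer-learning step: start from an ImageNet-pretrained backbone and
# replace the classifier head for the Gram-stained classification task.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # e.g., Gram-positive vs. Gram-negative

# Pruning step: remove the 30% smallest-magnitude weights in every conv layer
# (30% is an assumed value for illustration).
for module in model.modules():
    if isinstance(module, nn.Conv2d):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")  # bake the pruning mask into the weights

# Quantization step: post-training dynamic quantization of the linear layers
# (int8 weights), shrinking the model and speeding up inference on CPU.
model.eval()
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

# Package for mobile inference via TorchScript; conversion to an
# Android-friendly runtime would follow from here.
example = torch.randn(1, 3, 224, 224)
scripted = torch.jit.trace(quantized, example)
scripted.save("gram_stain_classifier.pt")
```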
Item Description: Viewed on 17 July 2023
Physical Description: Online Resource