
Advancing On-Device Neural Network Training with TinyPropv2: Dynamic, Sparse, and Efficient Backpropagation

  • This study introduces EmbeddedTrain, an innovative algorithm optimized for on-device learning in deep neural networks, specifically designed for low-power microcontroller units. EmbeddedTrain refines sparse backpropagation by dynamically adjusting the level of sparsity, including the ability to selectively skip training steps. This feature significantly lowers computational effort without substantially compromising accuracy. Our comprehensive evaluation across diverse datasets (CIFAR-10, CIFAR-100, Flower, Food, Speech Command, MNIST, HAR, and DCASE2020) reveals that EmbeddedTrain achieves near-parity with full training, with an average accuracy drop of only around 1% in most cases; for instance, the drop is only 0.82% on CIFAR-10 and 1.07% on CIFAR-100. In terms of computational effort, EmbeddedTrain shows a marked reduction, requiring as little as 10% of the effort needed for full training in some scenarios, and it consistently outperforms other sparse training methodologies. These findings underscore EmbeddedTrain's capacity to manage computational resources efficiently while maintaining high accuracy, positioning it as an advantageous solution for advanced embedded device applications in the IoT ecosystem.
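For readers who want a concrete picture of the mechanism the abstract describes, the following is a minimal NumPy sketch of top-k sparse backpropagation with selective step skipping, for a single dense softmax layer. It is an illustration under stated assumptions, not the TinyPropv2/EmbeddedTrain reference implementation: the single-layer setup, the fixed k_frac sparsity fraction, and the skip_loss threshold are hypothetical stand-ins (the paper adapts the sparsity level dynamically at run time).

import numpy as np

def softmax(z):
    # Numerically stable softmax over a 1-D logit vector.
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def sparse_train_step(w, b, x, y, k_frac=0.1, skip_loss=0.05, lr=0.01):
    """One sparse backpropagation step for a single dense softmax layer.
    w: (d, c) weights, b: (c,) biases, x: (d,) input, y: (c,) one-hot label.
    Returns (loss, skipped). Illustrative sketch, not the paper's code."""
    p = softmax(x @ w + b)
    loss = -np.log(p[y.argmax()] + 1e-12)   # cross-entropy loss

    # Selective step skipping: if this example is already well
    # classified, skip the backward pass entirely.
    if loss < skip_loss:
        return loss, True

    # Output-layer error for softmax + cross-entropy.
    err = p - y

    # Sparse backpropagation: keep only the k largest-magnitude error
    # components (fixed fraction here; chosen dynamically in the paper).
    k = max(1, int(k_frac * err.size))
    keep = np.argsort(np.abs(err))[-k:]
    sparse_err = np.zeros_like(err)
    sparse_err[keep] = err[keep]

    # The update touches only the k selected output columns, so an
    # embedded implementation can compute just those columns.
    w -= lr * np.outer(x, sparse_err)
    b -= lr * sparse_err
    return loss, False

Because only k of the c error components are nonzero, the weight update costs on the order of d·k multiply-accumulates instead of d·c, which is where the kind of computational savings the abstract reports would come from in this sketch.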

Metadata
Document Type: Conference Proceeding
Conference Type: Conference Article
Citation Link: https://opus.hs-offenburg.de/9586
Bibliographic Information
Title (English): Advancing On-Device Neural Network Training with TinyPropv2: Dynamic, Sparse, and Efficient Backpropagation
Conference: International Joint Conference on Neural Networks (30 June - 05 July 2024 : Yokohama, Japan)
Author: Marcus Rüb, Axel Sikora, Daniel Mueller-Gritschneder
Year of Publication: 2024
Publisher: IEEE
First Page: 1
Last Page: 8
Parent Title (English): IJCNN 2024 : Conference Proceedings
ISBN: 979-8-3503-5931-2 (Electronic)
ISBN: 979-8-3503-5932-9 (Print on Demand)
ISSN: 2161-4407 (Electronic)
ISSN: 2161-4393 (Print on Demand)
DOI: https://doi.org/10.1109/IJCNN60899.2024.10650122
Language: English
Content Information
Institutes: Fakultät Elektrotechnik, Medizintechnik und Informatik (EMI) (from 04/2019)
Collections of the Offenburg University: Bibliografie
Formal Information
Relevance for "Jahresbericht über Forschungsleistungen": 5-fold | Conference Contribution
Open Access: Closed
Licence (German): Urheberrechtlich geschützt (protected by copyright)