
TinyProp - Adaptive Sparse Backpropagation for Efficient TinyML On-device Learning

Training deep neural networks using backpropagation is very memory- and computationally intensive. This makes it difficult to run on-device learning or to fine-tune neural networks on tiny, embedded devices such as low-power microcontroller units (MCUs). Sparse backpropagation algorithms try to reduce the computational load of on-device learning by training only a subset of the weights and biases. Existing approaches use a static number of weights to train. A poor choice of this so-called backpropagation ratio either limits the computational gain or can lead to severe accuracy losses. In this paper we present TinyProp, the first sparse backpropagation method that dynamically adapts the backpropagation ratio during on-device training at each training step. TinyProp induces a small calculation overhead to sort the elements of the gradient, which does not significantly impact the computational gains. TinyProp works particularly well for fine-tuning already trained networks on MCUs, which is a typical use case for embedded applications. On three typical datasets, MNIST, DCASE2020 and CIFAR10, TinyProp is 5 times faster than non-sparse training with an average accuracy loss of 1%. On average, TinyProp is 2.9 times faster than existing static sparse backpropagation algorithms, and the accuracy loss is reduced on average by 6% compared to a typical static setting of the backpropagation ratio.
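To make the idea in the abstract concrete, the following is a minimal sketch of sparse backpropagation with a per-step backpropagation ratio: the layer's weight gradient is sorted by magnitude and only the top fraction of entries is applied. The names (SparseDenseLayer, adapt_ratio) are hypothetical, and the adaptation rule shown (scaling the ratio with the current output error) is only an illustrative assumption; the abstract does not specify TinyProp's actual rule.

```python
# Sketch of sparse backpropagation with an adaptive backpropagation ratio.
# Not the authors' implementation; the adaptation rule below is an assumption.
import numpy as np

rng = np.random.default_rng(0)

class SparseDenseLayer:
    def __init__(self, n_in, n_out):
        self.W = rng.standard_normal((n_in, n_out)) * 0.1
        self.b = np.zeros(n_out)

    def forward(self, x):
        self.x = x
        return x @ self.W + self.b

    def backward_sparse(self, grad_out, ratio, lr=0.01):
        # Full weight gradient for this layer.
        grad_W = self.x.T @ grad_out
        # Keep only the largest-magnitude fraction `ratio` of gradient entries;
        # the selection (sorting) step is the small overhead mentioned in the abstract.
        k = max(1, int(ratio * grad_W.size))
        threshold = np.partition(np.abs(grad_W).ravel(), -k)[-k]
        mask = np.abs(grad_W) >= threshold
        self.W -= lr * grad_W * mask       # update only the selected weights
        self.b -= lr * grad_out.sum(0)     # biases are cheap, update them fully
        return grad_out @ self.W.T          # gradient passed to the previous layer

def adapt_ratio(error, lo=0.05, hi=1.0):
    # Assumption: a larger current error -> backpropagate through more weights.
    return float(np.clip(lo + (hi - lo) * error, lo, hi))

# Toy usage: one training step on random regression data.
layer = SparseDenseLayer(8, 4)
x = rng.standard_normal((16, 8))
y = rng.standard_normal((16, 4))
err = layer.forward(x) - y
ratio = adapt_ratio(np.mean(np.abs(err)))
layer.backward_sparse(err / len(x), ratio)
```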

Metadata
Document Type: Conference Proceeding
Conference Type: Conference Article
Citation link: https://opus.hs-offenburg.de/8302
Bibliographic Information
Title (English): TinyProp - Adaptive Sparse Backpropagation for Efficient TinyML On-device Learning
Conference: International Conference on Artificial Intelligence and Power Engineering (1st : 3rd to 5th March 2023 : Tokyo, Japan)
Author: Marcus Rüb, Daniel Maier, Daniel Mueller-Gritschneder, Axel Sikora
Year of Publication: 2023
Date of first Publication: 2023/08/17
Place of publication: Tokyo
First Page: 1
Last Page: 7
DOI: https://doi.org/10.48550/arXiv.2308.09201
Language: English
Content Information
Institutes: Research / ivESK - Institut für verlässliche Embedded Systems und Kommunikationselektronik
Fakultät Elektrotechnik, Medizintechnik und Informatik (EMI) (from 04/2019)
Institutes: Bibliography
Tags: Neural networks; TinyML; efficient training; sparse backpropagation
Formal Details
Relevance: Conference contribution: h5-index < 30
Open Access: Open Access / Diamond
Licence: Creative Commons - CC BY-NC-ND - Attribution - NonCommercial - NoDerivatives 4.0 International