
A Continual and Incremental Learning Approach for TinyML On-device Training Using Dataset Distillation and Model Size Adaption

  • A new algorithm for incremental learning in the context of Tiny Machine Learning (TinyML) is presented, optimized for low-performance and energy-efficient embedded devices. TinyML is an emerging field that deploys machine learning models on resource-constrained devices such as microcontrollers, enabling intelligent applications like voice recognition, anomaly detection, predictive maintenance, and sensor data processing in environments where traditional machine learning models are not feasible. The algorithm solves the challenge of catastrophic forgetting through the use of knowledge distillation to create a small, distilled dataset. The novelty of the method is that the size of the model can be adjusted dynamically, so that the complexity of the model can be adapted to the requirements of the task. This offers a solution for incremental learning in resource-constrained environments, where both model size and computational efficiency are critical factors. Results show that the proposed algorithm offers a promising approach for TinyML incremental learning on embedded devices. The algorithm was tested on five datasets: CIFAR10, MNIST, CORE50, HAR, and Speech Commands. The findings indicate that, despite using only 43% of the floating-point operations (FLOPs) of a larger fixed model, the algorithm experienced a negligible accuracy loss of just 1%. In addition, the presented method is memory-efficient: while state-of-the-art incremental learning is usually very memory-intensive, the method requires only 1% of the original dataset.
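
The abstract describes the method only at a high level; the following minimal sketch in PyTorch illustrates the general rehearsal-with-a-distilled-buffer idea. All names (DistilledBuffer, train_increment, per_class) are hypothetical, the per-class averaging merely stands in for a real learned dataset-distillation step, and the paper's dynamic model-size adaptation is not shown. This is an assumption-laden illustration, not the authors' implementation.

import torch
import torch.nn.functional as F

class DistilledBuffer:
    """Tiny replay memory holding a few synthetic examples per class."""
    def __init__(self, per_class: int = 10):
        self.per_class = per_class
        self.images, self.labels = [], []

    def distill(self, xs: torch.Tensor, ys: torch.Tensor):
        # Stand-in for real dataset distillation: average a random
        # subset per class into one compact prototype example.
        for c in ys.unique():
            xc = xs[ys == c]
            idx = torch.randperm(len(xc))[: self.per_class]
            self.images.append(xc[idx].mean(dim=0, keepdim=True))
            self.labels.append(torch.tensor([int(c)]))

    def batch(self):
        return torch.cat(self.images), torch.cat(self.labels)

def train_increment(model, opt, new_x, new_y, buffer, epochs=1):
    """One incremental step: new-task loss plus rehearsal on the buffer."""
    replay = buffer.batch() if buffer.images else None
    for _ in range(epochs):
        opt.zero_grad()
        loss = F.cross_entropy(model(new_x), new_y)
        if replay is not None:  # rehearsal term mitigates forgetting
            old_x, old_y = replay
            loss = loss + F.cross_entropy(model(old_x), old_y)
        loss.backward()
        opt.step()
    buffer.distill(new_x, new_y)  # compress the new task into memory

The design point the sketch reflects is memory: the buffer stores only a handful of synthetic examples per class, on the order of the 1% of the original dataset reported in the abstract, which is what makes rehearsal plausible on a microcontroller-class device.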

Metadata
Document Type:Conference Proceeding
Conference Type:Conference article
Citation link: https://opus.hs-offenburg.de/9587
Bibliographic Information
Title (English):A Continual and Incremental Learning Approach for TinyML On-device Training Using Dataset Distillation and Model Size Adaption
Conference:International Conference on Industrial Cyber-Physical Systems (7th : 12-15 May 2024 : St. Louis, MO, USA)
Author:Marcus Rüb, Philipp Tuchel, Axel Sikora, Daniel Mueller-Gritschneder
Year of Publication:2024
Publisher:IEEE
First Page:1
Last Page:8
Parent Title (English):2024 IEEE 7th International Conference on Industrial Cyber-Physical Systems (ICPS)
ISBN:979-8-3503-6301-2 (Electronic)
ISBN:979-8-3503-6302-9 (Print on Demand)
ISSN:2769-3899 (Electronic)
ISSN:2769-3902 (Print on Demand)
DOI:https://doi.org/10.1109/ICPS59941.2024.10639989
Language:English
Content Information
Institutes:Fakultät Elektrotechnik, Medizintechnik und Informatik (EMI) (since 04/2019)
Research:ivESK - Institut für verlässliche Embedded Systems und Kommunikationselektronik
Collections of the Offenburg University:Bibliografie
Tag:Edge AI; Embedded AI; Neural networks; efficient training
Formal Information
Relevance for "Jahresbericht über Forschungsleistungen":single-weighted | Conference contribution
Open Access: Closed 
Licence (German):Urheberrechtlich geschützt (copyright protected)