To demonstrate how deep learning can be applied to industrial applications with limited training data, we apply deep learning methodologies in three different applications. In this paper, we perform unsupervised deep learning using variational autoencoders and show that federated learning is a communication-efficient machine learning concept that protects data privacy. As an example, variational autoencoders are used to cluster and visualize data from a microelectromechanical systems (MEMS) foundry. Federated learning is applied to a predictive maintenance scenario based on the C-MAPSS dataset.
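The aggregation step at the heart of this concept can be sketched in a few lines. The following is a minimal FedAvg-style weight-averaging illustration in Python/NumPy; the `fed_avg` helper, the client weights, and the sample counts are hypothetical and not taken from the paper itself.

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Weighted average of client model weights (FedAvg-style sketch).

    client_weights: list with one entry per client, each a list of
                    np.ndarray layer weights from that client's local model.
    client_sizes:   number of local training samples per client, used
                    as averaging weights.
    """
    total = sum(client_sizes)
    n_layers = len(client_weights[0])
    averaged = []
    for layer in range(n_layers):
        # Sum each client's layer weights, scaled by its share of the data.
        avg = sum(w[layer] * (n / total)
                  for w, n in zip(client_weights, client_sizes))
        averaged.append(avg)
    return averaged

# Hypothetical example: three clients, each with a 2-layer model.
rng = np.random.default_rng(0)
clients = [[rng.normal(size=(4, 4)), rng.normal(size=(4,))] for _ in range(3)]
sizes = [100, 250, 50]
global_weights = fed_avg(clients, sizes)
print(global_weights[0].shape, global_weights[1].shape)  # (4, 4) (4,)
```

Only model weights travel between clients and server in this scheme, which is what makes it communication-efficient and keeps the raw training data local.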
Objective: Dickkopf 3 (DKK3) has been identified as a urinary biomarker. Values above 4000 pg/mg creatinine (Cr) have been linked with a higher risk of short-term decline of kidney function (J Am Soc Nephrol 29: 2722–2733). However, as of today, there is little experience with DKK3 as a risk marker in everyday clinical practice. We used algorithm-based data analysis to evaluate the predictive value of DKK3 in a cohort from a large single center in Germany.
Method: DKK3 was measured in all CKD patients in our center from October 1st, 2018 to December 31st, 2019, together with estimated GFR (eGFR) and urinary albumin/creatinine ratio (UACR). Kidney transplant patients were excluded. Repeated measurements of all parameters were performed until the end of follow-up on December 31st, 2021. Data analysis was performed using MD-Explorer (BioArtProducts, Rostock, Germany) and Python with multiple libraries. Linear regression models were fitted per patient for DKK3, eGFR and UACR. The models were compared with a two-sided Kolmogorov-Smirnov test.
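As a rough illustration of this kind of analysis pipeline, the sketch below fits a per-patient linear regression of eGFR over time and compares the slope distributions of the two DKK3 groups with a two-sided Kolmogorov-Smirnov test. The column names, data layout, and threshold handling are assumptions made for illustration; the abstract does not specify the actual implementation.

```python
import numpy as np
import pandas as pd
from scipy.stats import ks_2samp

# Hypothetical long-format table: one row per patient visit.
df = pd.DataFrame({
    "patient_id": [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
    "months":     [0, 6, 12] * 4,
    "egfr":       [30, 28, 27, 25, 20, 14, 35, 34, 33, 18, 15, 10],
    "dkk3_base":  [2500] * 3 + [6000] * 3 + [3000] * 3 + [8000] * 3,
})

def egfr_slope(group):
    # Least-squares slope of eGFR vs. time (ml/min/1.73 m^2 per month).
    return np.polyfit(group["months"], group["egfr"], 1)[0]

slopes = df.groupby("patient_id").apply(egfr_slope)
baseline_dkk3 = df.groupby("patient_id")["dkk3_base"].first()

low  = slopes[baseline_dkk3 <= 4000]   # DKK3 <= 4000 pg/mg Cr at baseline
high = slopes[baseline_dkk3 > 4000]    # DKK3 >  4000 pg/mg Cr at baseline

# Two-sided Kolmogorov-Smirnov test on the two slope distributions.
stat, p = ks_2samp(low, high)
print(f"KS statistic {stat:.3f}, p-value {p:.3f}")
```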
Results: 1206 DKK3 measurements were performed in 1103 patients (621 male, age 70 yrs, eGFR 29.41 ml/min/1.73 m², UACR 800 mg/g). 134 patients died during follow-up. Mean DKK3 was 2905 pg/mg Cr (max. 20000, 75th percentile 3800). 121 patients had DKK3 > 4000. At the end of follow-up, 7% of patients with DKK3 < 4000 (initial eGFR 17.6) versus 39.6% of patients with DKK3 > 4000 (initial eGFR 15.7) underwent dialysis. Compared to eGFR and UACR at baseline, DKK3 > 4000 performed best at predicting eGFR loss over the following 12 months.
Conclusion: In this cohort of CKD patients, DKK3 > 4000 at baseline predicted the eGFR slope better than eGFR or UACR at baseline. DKK3 > 4000 reflected a higher risk of progression towards ESRD in patients with similar baseline eGFR levels.
Training deep neural networks with backpropagation is highly memory- and compute-intensive. This makes it difficult to run on-device learning or to fine-tune neural networks on tiny embedded devices such as low-power microcontroller units (MCUs). Sparse backpropagation algorithms try to reduce the computational load of on-device learning by training only a subset of the weights and biases. Existing approaches train a static number of weights; a poor choice of this so-called backpropagation ratio either limits the computational gain or can lead to severe accuracy losses. In this paper, we present TinyProp, the first sparse backpropagation method that dynamically adapts the backpropagation ratio at each step of on-device training. TinyProp introduces a small computational overhead for sorting the elements of the gradient, which does not significantly affect the overall gains. TinyProp works particularly well for fine-tuning pre-trained networks on MCUs, a typical use case for embedded applications. On three typical datasets, MNIST, DCASE2020 and CIFAR10, TinyProp is 5 times faster than non-sparse training with an average accuracy loss of 1%. On average, TinyProp is 2.9 times faster than existing static sparse backpropagation algorithms, and the accuracy loss is reduced by 6% on average compared to a typical static setting of the backpropagation ratio.
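The core idea, updating only the largest gradient components, can be sketched compactly. The following NumPy sketch applies a top-k update for a single dense layer and adapts the ratio per step from the captured gradient mass; the adaptation rule and all names here are illustrative assumptions, not TinyProp's published algorithm.

```python
import numpy as np

def sparse_update(w, grad, ratio, lr=0.01):
    """Update only the top `ratio` fraction of weights by |gradient|.

    Sketch of sparse backpropagation for one layer: only the
    largest-magnitude gradient entries are applied to the weights.
    """
    k = max(1, int(ratio * grad.size))
    flat = np.abs(grad).ravel()
    # Indices of the k largest-magnitude gradient entries.
    idx = np.argpartition(flat, -k)[-k:]
    mask = np.zeros(grad.size, dtype=bool)
    mask[idx] = True
    w = w.copy()
    w.ravel()[mask] -= lr * grad.ravel()[mask]
    # Fraction of the total gradient mass captured by the selected
    # entries; an illustrative proxy for judging the current ratio.
    captured = flat[idx].sum() / (flat.sum() + 1e-12)
    return w, captured

def adapt_ratio(ratio, captured, target=0.9, step=0.05):
    # Hypothetical adaptation rule: grow the ratio while the selection
    # captures less than `target` of the gradient mass, else shrink it.
    return float(np.clip(ratio + (step if captured < target else -step),
                         0.05, 1.0))

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 32))
ratio = 0.25
for step_i in range(5):
    grad = rng.normal(size=w.shape)  # stand-in for a real backprop gradient
    w, captured = sparse_update(w, grad, ratio)
    ratio = adapt_ratio(ratio, captured)
    print(f"step {step_i}: ratio={ratio:.2f}, captured={captured:.2f}")
```

The selection step costs one partial sort of the gradient per layer, which matches the abstract's point that the sorting overhead is small relative to the savings from skipping most weight updates.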