High-tech running shoes and spikes ("super-footwear") are currently being debated in sports. There is direct evidence that distance running super shoes improve running economy; however, it is not well established to what extent world-class performances are affected over the range of track and road running events.
This study examined publicly available performance datasets of annual best track and road performances for evidence of potential systematic performance effects following the introduction of super footwear. The analysis was based on the 100 best performances per year for men and women in outdoor events from 2010 to 2022, provided by the world governing body of athletics (World Athletics).
We found evidence of progressing improvements in track and road running performances after the introduction of super distance running shoes in 2016 and super spike technology in 2019. This evidence is more pronounced for distances longer than 1500 m in women and longer than 5000 m in men. Women seem to benefit more from super footwear in distance running events than men.
While the observational study design limits causal inference, this study provides a database on potential systematic performance effects following the introduction of super shoes/spikes in track and road running events in world-class athletes. Further research is needed to examine the underlying mechanisms and, in particular, potential sex differences in the performance effects of super footwear.
This thesis evaluates and compares current Full-Stack JavaScript technologies. Through extensive research on the state of the art of JavaScript and its related frameworks, different aspects of Full-Stack Development are analysed to judge the popularity of technologies.
The language JavaScript and the idea of Full-Stack Development are presented along with the functionality of different frameworks. The JavaScript runtime Node.js was examined and identified as the most influential JavaScript technology, one that opened up many opportunities.
The technology stacks MERN, MEAN and MEVN were investigated, all featuring the base technologies Node.js, MongoDB and Express.js. It was found that the front-end framework has the greatest influence on which Full-Stack variant can be chosen. The comparison criteria between the technology stacks were learning curve, maintainability, modularity and media integration. These criteria were extracted from research and a questionnaire conducted with students of the University of Applied Sciences Offenburg.
To test and experience a Full-Stack JavaScript application, the game RemArrow, based on the 1978 electronic game Simon, was designed and implemented. The comparison against the predefined criteria shows that the MERN stack with React.js is the easiest to learn and promises the most potential. Emerging JavaScript technologies and their popularity depend heavily on the industry and the skill set of the developer.
In conclusion, the concept of Full-Stack Development is currently of great interest and more than just a trend. It has the potential to become a new kind of web development and part of the curriculum taught at universities. Expert knowledge is needed, but there is high demand and much potential for Full-Stack JavaScript developers.
During the coronavirus crisis, mechanical engineering labs had to be offered in digital form at short notice. For this purpose, digital twins of more complex test benches in the field of fluid energy machines were used in the mechanical engineering course, with which the students were able to interact remotely to obtain measurement data. The concept of each lab was revised with regard to its implementation as a remote laboratory. Real-world labs could be fully replaced by remote labs, and student perceptions of the remote labs were mostly positive. This paper explains the concept and design of the digital twins and the lab, as well as the layout, procedure, and results of the accompanying evaluation. However, the implementation of the digital twins to date does not yet include features addressing the tactile experience of working in real-world labs.
Featherweight Generic Go (FGG) is a minimal core calculus modeling the essential features of the programming language Go. It includes support for overloaded methods, interface types, structural subtyping and generics. The most straightforward semantic description of the dynamic behavior of FGG programs is to resolve method calls based on runtime type information of the receiver.
This article shows a different approach by defining a type-directed translation from FGG to an untyped lambda-calculus. The translation of an FGG program provides evidence for the availability of methods as additional dictionary parameters, similar to the dictionary-passing approach known from Haskell type classes. Then, method calls can be resolved by a simple lookup of the method definition in the dictionary.
Every program in the image of the translation has the same dynamic semantics as its source FGG program. The proof of this result is based on a syntactic, step-indexed logical relation. The step-index ensures a well-founded definition of the relation in the presence of recursive interface types and recursive methods.
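The dictionary-passing idea can be illustrated outside of FGG with a minimal Python sketch (all names here are hypothetical; the paper's actual translation targets an untyped lambda-calculus): instead of resolving a method call from runtime type information of the receiver, the method is looked up in a dictionary passed as an extra parameter.

```python
# A minimal sketch of dictionary-passing method resolution
# (illustrative only; names such as "Stringer" are hypothetical).

def make_stringer_dict_int():
    # Dictionary witnessing that ints implement a hypothetical Stringer interface.
    return {"String": lambda receiver: f"int({receiver})"}

def make_stringer_dict_pair():
    # A second witness, for pairs, with its own method implementation.
    return {"String": lambda receiver: f"pair({receiver[0]}, {receiver[1]})"}

def describe(value, stringer_dict):
    # The call site never inspects the receiver's runtime type:
    # the method is simply looked up in the dictionary parameter.
    return stringer_dict["String"](value)

print(describe(42, make_stringer_dict_int()))       # int(42)
print(describe((1, 2), make_stringer_dict_pair()))  # pair(1, 2)
```

This mirrors the dictionary-passing approach known from Haskell type classes, where the compiler inserts the appropriate dictionary at each call site.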
CNN-based deep learning models for disease detection have become popular recently. We compared the binary classification performance of eight prominent deep learning models: DenseNet121, DenseNet169, DenseNet201, EfficientNet-b0, EfficientNet-lite4, GoogleNet, MobileNet, and ResNet18 on a combined pulmonary chest X-ray dataset. Despite their widespread application to medical images in different fields, there remains a knowledge gap in determining their relative performance when applied to the same dataset, a gap this study aimed to address. The dataset combined data from Shenzhen, China (CH) and Montgomery, USA (MC). We trained each model for binary classification, calculated different parameters of the mentioned models, and compared them. All models were trained with the same training parameters to maintain a controlled comparison environment. At the end of the study, we found a distinct difference in performance among the models when applied to the pulmonary chest X-ray dataset, with DenseNet169 achieving a precision of 89.38 percent and MobileNet 92.2 percent.
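Precision is the headline metric in this comparison. As a reminder of what is being compared, here is a minimal pure-Python sketch of precision and recall computed from binary predictions (the labels below are toy values, not the study's data):

```python
def binary_metrics(y_true, y_pred):
    # Confusion counts for the positive class (e.g. "disease present").
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0  # correctness of positive calls
    recall = tp / (tp + fn) if tp + fn else 0.0     # coverage of true positives
    return precision, recall

# Toy example (not the study's data):
p, r = binary_metrics([1, 1, 0, 0, 1], [1, 0, 0, 1, 1])
print(f"precision={p:.2f}, recall={r:.2f}")  # precision=0.67, recall=0.67
```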
Garbage in, Garbage out: How does ambiguity in data affect state-of-the-art pedestrian detection?
(2024)
This thesis investigates the critical role of data quality in computer vision, particularly in the realm of pedestrian detection. The proliferation of deep learning methods has emphasised the importance of large datasets for model training, while the quality of these datasets is equally crucial. Ambiguity in annotations, arising from factors like mislabelling, inaccurate bounding box geometry and annotator disagreements, poses significant challenges to the reliability and robustness of pedestrian detection models and their evaluation. This work explores the effects of ambiguous data on model performance, with a focus on identifying and separating ambiguous instances by employing an ambiguity measure based on annotator estimations of object visibility and identity. Through careful experimentation and analysis, trade-offs emerged between data cleanliness and representativeness, and between noise removal and retention of valuable data, elucidating their impact on performance metrics like the log-average miss rate, recall and precision. Furthermore, a strong correlation between ambiguity and occlusion was discovered, with higher ambiguity corresponding to greater occlusion prevalence. The EuroCity Persons dataset served as the primary dataset, revealing a significant proportion of ambiguous instances: approximately 8.6% ambiguity in the training set and 7.3% in the validation set. Results demonstrated that removing ambiguous data improves the log-average miss rate, particularly by reducing false positive detections. Augmenting the training data with samples from neighbouring classes enhanced recall but diminished precision. Correction of erroneous false positives and false negatives significantly impacts model evaluation results, as evidenced by shifts in the ECP leaderboard rankings.
By systematically addressing ambiguity, this thesis lays the foundation for enhancing the reliability of computer vision systems in real-world applications, motivating the prioritisation of developing robust strategies to identify, quantify and address ambiguity.
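An ambiguity measure built from annotator estimates of visibility and identity could take the following shape. The scoring rule below is an illustrative assumption for exposition, not the thesis's actual measure:

```python
def ambiguity_score(visibility_votes, identity_votes):
    """Disagreement-based ambiguity in [0, 1] from per-annotator estimates.

    visibility_votes: estimated visible fraction per annotator (0..1).
    identity_votes:   1 if the annotator judges the object a pedestrian, else 0.
    NOTE: this scoring rule is an illustrative assumption, not the thesis's measure.
    """
    n = len(identity_votes)
    # Identity disagreement: 0 when annotators are unanimous, 1 on a 50/50 split.
    pos = sum(identity_votes) / n
    identity_disagreement = 1.0 - abs(2.0 * pos - 1.0)
    # Visibility disagreement: spread of the visibility estimates.
    visibility_spread = max(visibility_votes) - min(visibility_votes)
    return 0.5 * (identity_disagreement + visibility_spread)

# Unanimous annotators -> low ambiguity; split votes -> high ambiguity.
print(ambiguity_score([0.9, 0.85, 0.9], [1, 1, 1]))  # low (close to 0)
print(ambiguity_score([0.2, 0.8], [1, 0]))           # high (close to 1)
```

Instances scoring above a chosen threshold would then be separated out before training or evaluation.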
This work addresses the conceptualization, design, and implementation of an Application Programming Interface (API) for the Common Security Advisory Framework (CSAF) 2.0, introducing a third method for distributing CSAF documents alongside the two already existing ones. Neither of the existing methods supports flexible queries or filtering, which makes it difficult for operators of software and hardware to use CSAF. An API is intended to simplify this process and thus advance the automation goal of CSAF.
First, it is evaluated whether the current standard allows the implementation of an API. Any conflicts are highlighted and suggestions for standard adaptations are made. Based on these results, the API is designed to meet the previously defined requirements. Subsequently, a proof of concept is successfully developed according to the design and extensively tested with specially prepared test data. Finally, the results and the necessary standard adjustments are summarized and justified.
The conceptual design and the implementation were completed successfully, although some routes of the proof of concept could not be fully implemented.
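The kind of server-side filtering such an API enables can be sketched in a few lines of Python. The advisory records and field names below are simplified stand-ins (a real CSAF 2.0 document is a far richer JSON structure), and the query parameters are hypothetical:

```python
# Sketch of query-based filtering over CSAF-like advisory metadata.
# Field names and records are simplified stand-ins, not real CSAF 2.0 documents.

advisories = [
    {"id": "VENDOR-2023-001", "severity": "critical", "product": "RouterOS"},
    {"id": "VENDOR-2023-002", "severity": "moderate", "product": "SwitchOS"},
    {"id": "VENDOR-2024-003", "severity": "critical", "product": "SwitchOS"},
]

def query(docs, **filters):
    # Return every document whose fields match all given filters exactly.
    return [d for d in docs if all(d.get(k) == v for k, v in filters.items())]

hits = query(advisories, severity="critical", product="SwitchOS")
print([d["id"] for d in hits])  # ['VENDOR-2024-003']
```

An operator could thus retrieve exactly the advisories relevant to deployed products, instead of downloading and scanning the full document set.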
With the rising necessity of explainable artificial intelligence (XAI), we see an increase in task-dependent XAI methods on varying abstraction levels. XAI techniques on a global level explain model behavior and on a local level explain sample predictions. We propose a visual analytics workflow to support seamless transitions between global and local explanations, focusing on attributions and counterfactuals in time series classification. In particular, we adapt local XAI techniques (attributions) developed for traditional datasets (images, text) to analyze time series classification, a data type that is typically less intelligible to humans. To generate a global overview, we apply local attribution methods to the data, creating explanations for the whole dataset. These explanations are projected onto two dimensions, depicting model behavior trends, strategies, and decision boundaries. To further inspect the model decision-making as well as potential data errors, a what-if analysis facilitates hypothesis generation and verification on both the global and local levels. We continuously collected and incorporated expert user feedback, as well as insights based on their domain knowledge, resulting in a tailored analysis workflow and system that tightly integrates time series transformations into explanations. Lastly, we present three use cases, verifying that our technique enables users to (1) explore data transformations and feature relevance, (2) identify model behavior and decision boundaries, as well as (3) reason about misclassifications.
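The "local attributions everywhere, then project" step can be sketched with NumPy. The toy classifier, the occlusion-style attribution, and the PCA projection below are simplified assumptions chosen for self-containment, not the paper's actual methods:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy time series dataset: 100 samples, 32 time steps each.
X = rng.normal(size=(100, 32))
w = np.zeros(32)
w[10:15] = 1.0  # a toy linear "classifier" that only looks at steps 10..14

def score(x):
    return float(x @ w)

def occlusion_attribution(x):
    # Local attribution: score drop when each time step is zeroed out.
    base = score(x)
    return np.array([base - score(np.where(np.arange(len(x)) == i, 0.0, x))
                     for i in range(len(x))])

# Global overview: attributions for the whole dataset, projected onto
# two dimensions via PCA (singular value decomposition of centered data).
A = np.stack([occlusion_attribution(x) for x in X])
A_centered = A - A.mean(axis=0)
_, _, vt = np.linalg.svd(A_centered, full_matrices=False)
projection = A_centered @ vt[:2].T  # (100, 2) map of model behavior

print(projection.shape)                # (100, 2)
print(np.abs(A[:, :10]).max() == 0.0)  # True: irrelevant steps get zero attribution
```

Clusters in such a projection hint at distinct model strategies; individual points can then be drilled into with the local, sample-level view.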
The robust scheduling problem is a major decision problem addressed in the literature, especially for remanufacturing systems; it is complex because of the high uncertainty and complex constraints involved. Generally, existing approaches are dedicated to specific processes and do not enable the quick and efficient generation and evaluation of schedules. With the emergence of the Industry 4.0 paradigm, data availability is now considered an opportunity to facilitate the decision-making process. In this study, a data-driven decision-making process is proposed to treat the robust scheduling problem of remanufacturing systems in uncertain environments. In particular, this process generates simulation models based on a data-driven modeling approach. A robustness evaluation approach is proposed to answer several decision questions. An application of the decision process to an industrial case of a remanufacturing system is presented, illustrating the impact of robustness evaluation results on real-life decisions.
Electrochemical pressure impedance spectroscopy (EPIS) is an emerging tool for the diagnosis of polymer electrolyte membrane fuel cells (PEMFC). It is based on analyzing the frequency response of the cell voltage with respect to an excitation of the gas-phase pressure. Several experimental studies in the past decade have shown the complexity of EPIS signals, and so far there is no agreement on the interpretation of EPIS features. The present study contributes to shedding light on the physicochemical origin of EPIS features by using a combination of pseudo-two-dimensional modeling and analytical interpretation. Using static simulations, the contributions of cathode equilibrium potential, cathode overpotential, and membrane resistance to the quasi-static EPIS response are quantified. Using model reduction, the EPIS responses of individual dynamic processes are predicted and compared to the response of the full model. We show that the EPIS signal of the PEMFC studied here is dominated by the humidifier. The signal is further analyzed by using transfer functions between various internal cell states and the outlet pressure excitation. We show that the EPIS response of the humidifier is caused by an oscillating oxygen molar fraction due to an oscillating mass flow rate.
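The quantity analyzed in EPIS can be written as a pressure-to-voltage transfer function. A generic formulation (notation assumed here for illustration, not taken verbatim from the study) is:

```latex
% Generic EPIS transfer function (notation assumed for illustration):
% harmonic pressure excitation and resulting cell voltage response.
p(t) = \bar{p} + \Delta p \, \sin(\omega t), \qquad
V(t) = \bar{V} + \Delta V \, \sin\bigl(\omega t + \varphi(\omega)\bigr),
\qquad
Z_{\mathrm{EPIS}}(\omega) = \frac{\Delta V}{\Delta p} \, e^{\,i \varphi(\omega)}
```

The amplitude ratio ΔV/Δp and the phase shift φ(ω) are the two features whose frequency dependence carries the diagnostic information discussed in these studies.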
Electrochemical pressure impedance spectroscopy (EPIS) has received the attention of researchers as a method to study mass transport processes in polymer electrolyte membrane fuel cells (PEMFC). It is based on analyzing the cell voltage response to a harmonic excitation of the gas phase pressure in the frequency domain. Several experiments with a single-cell fuel cell have shown that the spectra contain information in the frequency range typical for mass transport processes and are sensitive to specific operating conditions and structural fuel cell parameters. To further benefit from the observed features, it is essential to identify why they occur, which to date has not yet been accomplished. The aim of the present work, therefore, is to identify causal links between internal processes and the corresponding EPIS features.
To this end, the study follows a model-based approach, which allows the analysis of internal states that are not experimentally accessible. The PEMFC model is a pseudo-2D model, which connects the mass transport along the gas channel with the mass transport through the membrane electrode assembly. A modeling novelty is the consideration of the gas volume inside the humidifier upstream of the fuel cell inlet, which proves to be crucial for the reproduction of EPIS. The PEMFC model is parametrized to a 100 cm² single cell of the French project partner, who provided the experimental EPIS results reproduced and interpreted in the present study.
The simulated EPIS results show good agreement with the experiments at current densities ≤ 0.4 A cm–2, where they allow a further analysis of the observed features. At the lowest excitation frequency of 1 mHz, the dynamic cell voltage response approaches the static pressure-voltage response. In the simulated frequency range between 1 mHz and 100 Hz, the cell voltage oscillation is found to strongly correlate with the partial pressure oscillation of oxygen, whereas the influence of the water pressure is limited to the low-frequency region.
The two prominent EPIS features, namely the strong increase of the cell voltage oscillation and the increase of phase shift with frequency, can be traced back via the oxygen pressure to the oscillation of the inlet flow rate. The phenomenon of the oscillating inlet flow rate is a consequence of the pressure change of the gas phase inside the humidifier and increases with frequency. This important finding enables the interpretation of experimentally observed EPIS trends for a variation of operational and structural fuel cell parameters by tracing them back to the influence of the oscillating inlet flow rate.
The separate simulation of the time-dependent processes of the PEMFC model through model reduction shows their individual influence on EPIS. The sluggish process of the water uptake by the membrane is visible below 0.1 Hz, while the charge and discharge of the double layer becomes visible above 1 Hz. The gas transport through the gas diffusion layer is only visible above 100 Hz. The simulation of the gas transport through the gas channel without consideration of the humidifier becomes visible above 1 Hz. With consideration of the humidifier, the gas transport through the gas channel is visible throughout the frequency range. The strong similarity of the spectra considering the humidifier with the spectra of the full model setup shows the dominant influence of the humidifier on EPIS.
A promising observation is the change in the amplitude relationship between the cell voltage and the oxygen partial pressure oscillation as a function of the oxygen concentration in the catalyst layer. At a frequency where the influence of oxygen pressure on the cell voltage is dominant, for example at 1 Hz, the amplitude of the cell voltage oscillation could be used to indirectly measure the oxygen concentration in the catalyst layer.
The identification of vulnerabilities is an important element in the software development life cycle to ensure the security of software. While vulnerability identification based on source code is a well-studied field, identifying vulnerabilities on the basis of a binary executable without the corresponding source code is more challenging. Recent research has shown how such detection can be achieved by deep learning methods. However, that particular approach is limited to the identification of only 4 types of vulnerabilities. We therefore analyze to what extent the identification of a larger variety of vulnerabilities can be covered. To this end, a supervised deep learning approach using recurrent neural networks is applied to vulnerability detection based on binary executables. The underlying basis is a dataset with 50,651 samples of vulnerable code in the form of a standardized LLVM Intermediate Representation. The vectorised features of a Word2Vec model are used to train different variations of three basic architectures of recurrent neural networks (GRU, LSTM, SRNN). A binary classification model was established for detecting the presence of an arbitrary vulnerability, and a multi-class model was trained for identifying the exact vulnerability; they achieved out-of-sample accuracies of 88% and 77%, respectively. Differences in the detection of different vulnerabilities were also observed, with non-vulnerable samples being detected with a particularly high precision of over 98%. Thus, the methodology presented allows an accurate detection of 23 (compared to 4) vulnerabilities.
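Of the three architectures, the simple RNN (SRNN) is the easiest to sketch. The NumPy forward pass below runs over a sequence of Word2Vec-style token vectors and ends in a binary "vulnerable / not vulnerable" score; all dimensions and weights are hypothetical stand-ins, not the paper's trained model:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical dimensions: 100-dim Word2Vec token vectors, 16 hidden units.
embed_dim, hidden_dim = 100, 16
W_xh = rng.normal(scale=0.1, size=(hidden_dim, embed_dim))  # input-to-hidden
W_hh = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))  # hidden-to-hidden
w_out = rng.normal(scale=0.1, size=hidden_dim)               # readout weights

def srnn_binary_score(token_vectors):
    # Simple recurrent network: h_t = tanh(W_xh x_t + W_hh h_{t-1}).
    h = np.zeros(hidden_dim)
    for x in token_vectors:
        h = np.tanh(W_xh @ x + W_hh @ h)
    # Sigmoid over the final hidden state -> probability of "vulnerable".
    return 1.0 / (1.0 + np.exp(-(w_out @ h)))

# A toy "sequence of LLVM IR token embeddings" (random stand-ins).
sequence = rng.normal(size=(30, embed_dim))
p_vulnerable = srnn_binary_score(sequence)
print(0.0 < p_vulnerable < 1.0)  # True
```

GRU and LSTM cells replace the single tanh update with gated updates; the multi-class variant would replace the sigmoid readout with a softmax over the vulnerability types.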
In recent years, the topic of embedded machine learning has become very popular in AI research. With the help of various compression techniques such as pruning and quantization, it became possible to run neural networks on embedded devices. These techniques have opened up a whole new application area for machine learning, ranging from smart products such as voice assistants to smart sensors that are needed in robotics. Despite the achievements in embedded machine learning, efficient algorithms for training neural networks in constrained domains are still lacking. Training on embedded devices will open up further fields of application. Efficient training algorithms would enable federated learning on embedded devices, in which the data remains where it was collected, or retraining of neural networks in different domains. In this paper, we summarize techniques that make training on embedded devices possible. We first describe the need and requirements for such algorithms. Then we examine existing techniques that address training in resource-constrained environments, as well as techniques that are also suitable for training on embedded devices, such as incremental learning. Finally, we discuss which problems and open questions still need to be solved in these areas.
Training deep neural networks using backpropagation is very memory- and computationally intensive. This makes it difficult to run on-device learning or to fine-tune neural networks on tiny embedded devices such as low-power microcontroller units (MCUs). Sparse backpropagation algorithms try to reduce the computational load of on-device learning by training only a subset of the weights and biases. Existing approaches use a static number of weights to train. A poor choice of this so-called backpropagation ratio either limits the computational gain or can lead to severe accuracy losses. In this paper we present TinyProp, the first sparse backpropagation method that dynamically adapts the backpropagation ratio during on-device training for each training step. TinyProp induces a small calculation overhead to sort the elements of the gradient, which does not significantly impact the computational gains. TinyProp works particularly well for fine-tuning trained networks on MCUs, which is a typical use case for embedded applications. For the three typical datasets MNIST, DCASE2020 and CIFAR10, we are 5 times faster compared to non-sparse training, with an accuracy loss of on average 1%. On average, TinyProp is 2.9 times faster than existing static sparse backpropagation algorithms, and the accuracy loss is reduced on average by 6% compared to a typical static setting of the backpropagation ratio.
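The core mechanism, updating only the top fraction of weights by gradient magnitude, with the fraction chosen per step, can be sketched as follows. This is a conceptual NumPy illustration, not the authors' MCU implementation, and the adaptation rule used here is a simplified assumption:

```python
import numpy as np

def sparse_update(weights, grads, ratio, lr=0.01):
    """Apply the update only to the top `ratio` fraction of gradients by magnitude."""
    k = max(1, int(ratio * grads.size))
    flat = np.abs(grads).ravel()
    threshold = np.partition(flat, -k)[-k]  # k-th largest gradient magnitude
    mask = np.abs(grads) >= threshold       # select only the largest gradients
    return weights - lr * grads * mask, int(mask.sum())

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 8))
g = rng.normal(size=(8, 8))

# Simplified dynamic adaptation (an assumption, not TinyProp's actual rule):
# scale the ratio with the relative gradient norm, so "easy" steps with small
# gradients backpropagate through fewer weights.
ratio = min(1.0, 0.1 * np.linalg.norm(g) / (np.linalg.norm(w) + 1e-12))
w_new, n_updated = sparse_update(w, g, ratio)

print(n_updated <= g.size)                       # True: only a subset was trained
print(np.count_nonzero(w_new - w) == n_updated)  # True: the rest stayed frozen
```

Recomputing the ratio every step is what distinguishes this scheme from static sparse backpropagation, where the same fixed fraction is used throughout training.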
Background
Internal tibial loading is influenced by modifiable factors with implications for the risk of stress injury. Runners encounter varied surface steepness (gradients) when running outdoors and may adapt their speed according to the gradient. This study aimed to quantify tibial bending moments and stress at the anterior and posterior peripheries when running at different speeds on surfaces of different gradients.
Methods
Twenty recreational runners ran on a treadmill at 3 different speeds (2.5 m/s, 3.0 m/s, and 3.5 m/s) and gradients (level: 0%; uphill: +5%, +10%, and +15%; downhill: –5%, –10%, and –15%). Force and marker data were collected synchronously throughout. Bending moments were estimated at the distal third centroid of the tibia about the medial–lateral axis by ensuring static equilibrium at each 1% of stance. Stress was derived from bending moments at the anterior and posterior peripheries by modeling the tibia as a hollow ellipse. Two-way repeated-measures analyses of variance were conducted using both functional and discrete statistical analyses.
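The hollow-ellipse model reduces to standard beam-bending relations. A sketch of the governing equations (symbol names assumed here for illustration):

```latex
% Bending stress at the anterior/posterior periphery of the tibia,
% modeled as a hollow elliptical cross-section (symbols assumed):
\sigma = \frac{M \, c}{I_{\mathrm{ML}}}, \qquad
I_{\mathrm{ML}} = \frac{\pi}{4} \left( a\,b^{3} - a_i\,b_i^{3} \right)
```

where M is the bending moment about the medial–lateral axis, c the perpendicular distance from the centroid to the anterior or posterior periphery, a and b the outer semi-axes, and a_i and b_i the inner semi-axes of the elliptical cross-section.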
Results
There were significant main effects for running speed and gradient on peak bending moments and peak anterior and posterior stress. Higher running speeds resulted in greater tibial loading. Running uphill at +10% and +15% resulted in greater tibial loading than level running. Running downhill at –10% and –15% resulted in reduced tibial loading compared to level running. There was no difference between +5% or –5% and level running.
Conclusion
Running at faster speeds and uphill on gradients ≥+10% increased internal tibial loading, whereas slower running and downhill running on gradients ≥–10% reduced internal loading. Adapting running speed according to the gradient could be a protective mechanism, providing runners with a strategy to minimize the risk of tibial stress injuries.
This thesis explores the feasibility and optimization of a solar-thermal sorption system designed mainly to provide cooling but also capable of heating. Through the development of a black-box model using Python programming, the study delves into the system's performance under various operation modes. Simulation results reveal the effectiveness of adaptive control strategies and pre-heating stages in optimizing efficiency, particularly in cooling modes. In heating assessments, superior performance is observed when utilizing the outdoor coil as the heat source for the heat pump. Challenges related to operational temperature bands are addressed by proposing parallel connections of the heat pump and outdoor coil to enhance performance. Future research directions include refining the regression models and incorporating real-time measurement data for improved accuracy, as well as extending the simulation duration for comprehensive evaluations. This study contributes valuable insights into the system's capabilities and applications, laying the groundwork for advancements in heat-driven integrated sustainable energy systems.
Artificial Intelligence (AI) can potentially transform many aspects of modern society in various ways, including automation of tasks, personalization of products and services, diagnosis of diseases and their treatment, transportation, and safety and security in public spaces. Recently, AI technology has been transforming the financial industry, offering new ways to analyse data and automate processes, reduce costs, increase efficiency, and provide more personalized services to customers. However, it has also raised important ethical and regulatory questions that need to be addressed by the industry and society as a whole. The aim of the Erasmus+ project Transversal Skills in Applied Artificial Intelligence - TSAAI (KA220-HED - Cooperation Partnerships in higher education) has been to establish a training platform that incorporates teaching guidelines based on a curriculum covering different areas of application of AI technology. In this work, we focus on applying AI models in the financial and insurance sectors.
eLetter on the article "Condiciones neuropsiquiátricas y probable causa de muerte de Maurice Ravel" by Gómez-Carvajal AM, Botero-Meneses JS, Palacios-Espinosa X and Palacios-Sánchez L, published in Iatreia 35(3), pages 341-8 (DOI: https://doi.org/10.17533/udea.iatreia.154).
Artificial intelligence (AI), and in particular machine learning algorithms, are of increasing importance in many application areas, but interpretability and understandability, as well as responsibility, accountability, and fairness of the algorithms' results, all crucial for increasing humans' trust in the systems, are still largely missing. Big industrial players, including Google, Microsoft, and Apple, have become aware of this gap and recently published their own guidelines for the use of AI in order to promote fairness, trust, interpretability, and other goals. Interactive visualization is one of the technologies that may help to increase trust in AI systems. During the seminar, we discussed the requirements for trustworthy AI systems as well as the technological possibilities provided by interactive visualizations to increase human trust in AI.
We have developed a methodology for the systematic generation of a large image dataset of macerated wood references, which we used to generate image data for nine hardwood genera. This is the basis for a substantial approach to automate, for the first time, the identification of hardwood species in microscopic images of fibrous materials by deep learning. Our methodology includes a flexible pipeline for easy annotation of vessel elements. We compare the performance of different neural network architectures and hyperparameters. Our proposed method performs similarly well to human experts. In the future, this will improve controls on global wood fiber product flows to protect forests.