Refine
Document Type
- Conference Proceeding (29)
- Article (reviewed) (19)
- Article (unreviewed) (17)
- Doctoral Thesis (5)
- Report (4)
- Letter to Editor (1)
- Study Thesis (1)
Conference Type
- Conference paper (23)
- Conference abstract (3)
- Conference poster (1)
- Other (1)
Is part of the Bibliography
- yes (76)
Keywords
- Deep Learning (6)
- Higher education didactics (3)
- machine learning (3)
- Advanced Footwear Technology (2)
- Alexander von Humboldt (2)
- Artificial Intelligence (2)
- Biomechanics (2)
- Digital twin (2)
- Exercise Science (2)
Institute
- Fakultät Maschinenbau und Verfahrenstechnik (M+V) (25)
- Fakultät Elektrotechnik, Medizintechnik und Informatik (EMI) (since 04/2019) (24)
- INES - Institut für nachhaltige Energiesysteme (13)
- Fakultät Medien (M) (since 22.04.2021) (12)
- Fakultät Wirtschaft (W) (11)
- IMLA - Institute for Machine Learning and Analytics (9)
- ivESK - Institut für verlässliche Embedded Systems und Kommunikationselektronik (4)
- IBMS - Institute for Advanced Biomechanics and Motion Studies (since 16.11.2022) (3)
- ACI - Affective and Cognitive Institute (2)
- IfTI - Institute for Trade and Innovation (2)
Open Access
- Diamond (76)
With the rising necessity of explainable artificial intelligence (XAI), we see an increase in task-dependent XAI methods on varying abstraction levels. XAI techniques on a global level explain model behavior and on a local level explain sample predictions. We propose a visual analytics workflow to support seamless transitions between global and local explanations, focusing on attributions and counterfactuals for time series classification. In particular, we adapt local XAI techniques (attributions) that were developed for traditional datasets (images, text) to analyze time series classification, a data type that is typically less intelligible to humans. To generate a global overview, we apply local attribution methods to the data, creating explanations for the whole dataset. These explanations are projected onto two dimensions, depicting model behavior trends, strategies, and decision boundaries. To further inspect the model's decision-making as well as potential data errors, a what-if analysis facilitates hypothesis generation and verification on both the global and local levels. We continuously collected and incorporated expert user feedback, as well as insights based on their domain knowledge, resulting in a tailored analysis workflow and system that tightly integrates time series transformations into explanations. Lastly, we present three use cases, verifying that our technique enables users to (1) explore data transformations and feature relevance, (2) identify model behavior and decision boundaries, and (3) identify the reasons for misclassifications.
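To make the global-overview step concrete, the following minimal Python sketch computes a per-sample attribution for every time series (here via a crude occlusion heuristic) and projects the resulting attribution matrix onto two dimensions. The attribution method, API names, and projection choice are illustrative assumptions, not the authors' implementation.

import numpy as np
from sklearn.decomposition import PCA

def occlusion_attribution(model, x):
    """Crude local attribution: occlude one time step at a time and
    record the drop in the predicted class score."""
    base = model.predict_proba(x[None])[0]          # assumes an sklearn-style classifier
    cls = base.argmax()
    attr = np.zeros(len(x))
    for t in range(len(x)):
        perturbed = x.copy()
        perturbed[t] = 0.0                          # occlude time step t
        attr[t] = base[cls] - model.predict_proba(perturbed[None])[0][cls]
    return attr

def global_overview(model, X):
    """Stack per-sample attributions and project them to 2D, giving a
    global view of model behavior trends and decision boundaries."""
    A = np.stack([occlusion_attribution(model, x) for x in X])
    return PCA(n_components=2).fit_transform(A)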
The COVID-19 pandemic, a unique and devastating respiratory disease outbreak, has affected global populations as the disease spreads rapidly. Recent Deep Learning breakthroughs may improve COVID-19 prediction and forecasting as a tool for precise and fast detection; however, current methods are still being examined to achieve higher accuracy and precision. This study analyzed a collection of 8,055 CT image samples, 5,427 of which were COVID-19 cases and 2,628 non-COVID. The 9,544 X-ray samples included 4,044 COVID-19 patients and 5,500 non-COVID cases. The most accurate models are MobileNet V3 (97.872%), DenseNet201 (97.567%), and GoogleNet Inception V1 (97.643%). High accuracy indicates that these models make many correct predictions, and the other evaluation metrics are also high for MobileNet V3 and DenseNet201. An extensive evaluation using accuracy, precision, and recall allows a comprehensive comparison of the predictive models, which this study improves by combining loss optimization with scalable batch normalization. Our analysis shows that these tactics improve model performance and resilience for advancing COVID-19 prediction and detection, and it demonstrates how Deep Learning can improve disease handling. The methods we suggest would help healthcare systems, policymakers, and researchers make educated decisions to reduce COVID-19 and other contagious diseases.
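Since the abstract gives no implementation details, the following is only a minimal transfer-learning sketch of how such a binary COVID-19/non-COVID classifier could be fine-tuned in PyTorch; the pretrained weights, learning rate, and loss are assumptions, not the study's actual setup.

import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained MobileNet V3 and replace the final classifier
# layer with a two-class head (COVID-19 vs. non-COVID).
model = models.mobilenet_v3_large(weights=models.MobileNet_V3_Large_Weights.IMAGENET1K_V1)
model.classifier[3] = nn.Linear(model.classifier[3].in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, labels):
    model.train()                 # keeps batch-norm statistics updating while fine-tuning
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()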
Convolutional neural networks (CNN) define the state-of-the-art solution on many perceptual tasks. However, current CNN approaches largely remain vulnerable against adversarial perturbations of the input that have been crafted specifically to fool the system while being quasi-imperceptible to the human eye. In recent years, various approaches have been proposed to defend CNNs against such attacks, for example by model hardening or by adding explicit defence mechanisms. In the latter case, a small “detector” is included in the network and trained on the binary classification task of distinguishing genuine data from data containing adversarial perturbations. In this work, we propose a simple and light-weight detector, which leverages recent findings on the relation between networks’ local intrinsic dimensionality (LID) and adversarial attacks. Based on a re-interpretation of the LID measure and several simple adaptations, we surpass the state-of-the-art on adversarial detection by a significant margin and reach almost perfect results in terms of F1-score for several networks and datasets. Sources available at: https://github.com/adverML/multiLID
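For context, detectors in this family build on the classic maximum-likelihood LID estimate from k-nearest-neighbor distances (Ma et al., 2018). The sketch below shows that baseline estimator only; it is background, not the paper's exact multiLID feature extraction.

import numpy as np
from sklearn.neighbors import NearestNeighbors

def lid_mle(batch_activations, reference_activations, k=20):
    """Estimate LID for each row of `batch_activations` against a reference
    batch, using its k nearest neighbors in activation space."""
    nn_index = NearestNeighbors(n_neighbors=k).fit(reference_activations)
    dists, _ = nn_index.kneighbors(batch_activations)  # shape (n, k), ascending
    dists = np.maximum(dists, 1e-12)                   # guard against log(0)
    # MLE estimator: lid = -1 / mean_i(log(r_i / r_k))
    return -1.0 / np.mean(np.log(dists / dists[:, -1:]), axis=1)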
Artificial Intelligence (AI) can potentially transform many aspects of modern society in various ways, including automation of tasks, personalization of products and services, diagnosis of diseases and their treatment, transportation, safety, and security in public spaces. Recently, AI technology has been transforming the financial industry, offering new ways to analyse data and automate processes, reduce costs, increase efficiency, and provide more personalized services to customers. However, it has also raised important ethical and regulatory questions that need to be addressed by the industry and society as a whole. The aim of the Erasmus+ project Transversal Skills in Applied Artificial Intelligence - TSAAI (KA220-HED - Cooperation Partnerships in higher education) has been to establish a training platform that incorporates teaching guidelines based on a curriculum covering different areas of application of AI technology. In this work, we focus on applying AI models in the financial and insurance sectors.
In the past, running shoes were categorized as either motion control, cushioned, or minimal footwear. Today, these categories blur and are no longer clearly defined. Moreover, with advances in manufacturing processes, it is possible to create individualized running shoes that incorporate features meeting individual biomechanical and experiential needs. However, specific ways to individualize footwear to reduce individual injury risk are poorly understood. Therefore, the purpose of this scoping review was to provide an overview of (1) footwear design features that have the potential for individualization; (2) human biomechanical variability as a theoretical foundation for individualization; and (3) the literature on differential responses to footwear design features between selected groups of individuals. The review focuses exclusively on reducing running-related risk factors for overuse injuries. We included English-language studies on adults that analyzed (1) potential interaction effects between footwear design features and subgroups of runners or covariates (e.g., age, gender) for running-related biomechanical risk factors or injury incidences, or (2) footwear perception for a systematically modified footwear design feature. Most of the included articles (n = 107) analyzed male runners. Several footwear design features (e.g., midsole characteristics, upper, outsole profile) show potential for individualization. However, the overall body of literature addressing individualized footwear solutions and their potential to reduce biomechanical risk factors is limited. Future studies should leverage more extensive data collections considering relevant covariates and subgroups while systematically modifying isolated footwear design features to inform footwear individualization.
Training deep neural networks using backpropagation is very memory and computationally intensive. This makes it difficult to run on-device learning or to fine-tune neural networks on tiny, embedded devices such as low-power micro-controller units (MCUs). Sparse backpropagation algorithms try to reduce the computational load of on-device learning by training only a subset of the weights and biases. Existing approaches use a static number of weights to train. A poor choice of this so-called backpropagation ratio either limits the computational gain or can lead to severe accuracy losses. In this paper we present TinyProp, the first sparse backpropagation method that dynamically adapts the backpropagation ratio during on-device training for each training step. TinyProp induces a small calculation overhead to sort the elements of the gradient, which does not significantly impact the computational gains. TinyProp works particularly well for fine-tuning trained networks on MCUs, which is a typical use case for embedded applications. On three typical datasets (MNIST, DCASE2020, and CIFAR10), TinyProp is 5 times faster than non-sparse training with an average accuracy loss of 1%. On average, TinyProp is 2.9 times faster than existing static sparse backpropagation algorithms, and the accuracy loss is reduced on average by 6% compared to a typical static setting of the backpropagation ratio.
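As an illustration of the mechanism, the NumPy sketch below updates only the weights with the largest gradient magnitudes for a single layer. In TinyProp itself the ratio is adapted dynamically at each training step; the fixed-ratio selection here only shows the sparse-update idea, not the paper's adaptation rule.

import numpy as np

def sparse_update(W, grad_W, ratio, lr=0.01):
    """Update only the top `ratio` fraction of weights, ranked by |gradient|."""
    k = max(1, int(ratio * grad_W.size))
    flat = np.abs(grad_W).ravel()
    threshold = np.partition(flat, -k)[-k]   # k-th largest gradient magnitude
    mask = np.abs(grad_W) >= threshold       # selects roughly k entries
    W -= lr * grad_W * mask                  # all other weights stay untouched
    return W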
Following the traditional paradigm of convolutional neural networks (CNNs), modern CNNs manage to keep pace with more recent, for example transformer-based, models by not only increasing model depth and width but also the kernel size. This results in large amounts of learnable model parameters that need to be handled during training. While following the convolutional paradigm with the according spatial inductive bias, we question the significance of learned convolution filters. In fact, our findings demonstrate that many contemporary CNN architectures can achieve high test accuracies without ever updating randomly initialized (spatial) convolution filters. Instead, simple linear combinations (implemented through efficient 1×1 convolutions) suffice to effectively recombine even random filters into expressive network operators. Furthermore, these combinations of random filters can implicitly regularize the resulting operations, mitigating overfitting and enhancing overall performance and robustness. Conversely, retaining the ability to learn filter updates can impair network performance. Lastly, although we only observe relatively small gains from learning 3×3 convolutions, the learning gains increase proportionally with kernel size, owing to the non-idealities of the independent and identically distributed (i.i.d.) nature of default initialization techniques.
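A minimal PyTorch sketch of this idea: a frozen, randomly initialized spatial convolution whose responses are recombined by a learnable 1×1 convolution. Channel sizes are illustrative; this is a schematic reading of the approach, not the authors' code.

import torch.nn as nn

class LinearCombinationBlock(nn.Module):
    """Frozen random spatial filters + learned 1x1 linear combination."""
    def __init__(self, in_ch, out_ch, hidden_ch, kernel_size=3):
        super().__init__()
        self.spatial = nn.Conv2d(in_ch, hidden_ch, kernel_size,
                                 padding=kernel_size // 2, bias=False)
        self.spatial.weight.requires_grad = False   # filters stay random, never updated
        self.pointwise = nn.Conv2d(hidden_ch, out_ch, 1, bias=False)  # learned LC

    def forward(self, x):
        return self.pointwise(self.spatial(x))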
Public export credits and trade insurance require a global framework of institutions, rules and regulations to avoid subsidies and a race to the bottom. The extensive modernisation of the Arrangement on Officially Supported Export Credits (Arrangement) of the Organisation for Economic Co-operation and Development intends to re-level the playing field. This Practitioner Commentary describes the demand for adequate government interventions, considers the need for the reform and discusses key aspects of the new Arrangement. We argue that there is a breakthrough in several important areas such as tenors, repayment terms and green finance. However, we also find that the modernisation falls short in areas such as the interplay between different rulebooks, pre-shipment instruments' regulations and climate action.
The impact of the circular economy on sustainable development: A European panel data approach
(2022)
The circular economy (CE) has attracted considerable attention because of its potential to help achieve sustainable development (SD). This paper presents a comprehensive analysis of the effect of the CE on the three dimensions of SD at the country level. We analysed the impact of each CE source of value (renewable energy, reuse, repair, recycling) and the influence of an overall factor-analysis-derived measure of the CE on the economic, environmental and social dimensions of SD. The aim was to compare the individual impacts and outcomes of the CE and its sources of value in a single study. Panel data analysis was performed using a sample of 25 European countries for the period 2010 to 2019. The findings show a major impact of the CE on achieving SD, which has positive effects on the economy, environment and society. However, the results show that the impact of each CE value source on the three SD dimensions varies. While renewable energies and reuse reduce the impact on the environment, recycling has no effect, and repair increases GHG emissions. However, repair is the only CE source with a positive economic impact at the country level. Finally, renewable energy, repair and recycling reduce unemployment. Decision makers should conduct impact analysis to design suitable, efficient and targeted measures depending on each country's specific objectives.
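Methodologically, analyses of this kind are typically set up as fixed-effects panel regressions. The Python sketch below shows one hypothetical setup using the third-party linearmodels package; the file, variable names, and specification are illustrative placeholders, not the authors' actual data or model.

import pandas as pd
from linearmodels.panel import PanelOLS

df = pd.read_csv("ce_panel.csv")                 # hypothetical input file
df = df.set_index(["country", "year"])           # entity/time MultiIndex

# Regress one SD outcome (e.g., GHG emissions) on the CE value sources,
# controlling for country and year fixed effects.
model = PanelOLS(
    df["ghg_emissions"],
    df[["renewable_energy", "reuse", "repair", "recycling"]],
    entity_effects=True,
    time_effects=True,
)
print(model.fit(cov_type="clustered", cluster_entity=True).summary)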
The Humboldt Portal has been designed and implemented as part of an ongoing research project to develop an information system on the Internet to share the documents and rare books of Alexander von Humboldt, a 19th century German scientist and explorer, who viewed the natural world holistically and described the harmony of nature among the diversity of the physical world. Even after more than two centuries he is admired for his ability to see the natural world and human nature in the context of a complex network of relationships. The design and implementation of the Humboldt Portal are also oriented to support further research on Humboldt’s intellectual perspective.
Although all of Humboldt's works can be found on the internet as digitized documents, the complexity and internal interconnectivity of his vision of nature cannot be adequately represented by digitized papers or scanned documents in digital libraries alone.
As a consequence, a dedicated portal for Humboldt's documents was developed, which extends the standards of digital libraries and offers a technical approach for the adequate presentation of highly interconnected data.
Due to continuous scientific and literary research, new insights and requirements for the digital presentation of Humboldt documents are constantly emerging, so this article can only provide a summary of the concepts realized so far. Consequently, the design and implementation of the Humboldt Portal are both a result of a continuing research project and oriented to support further research on Humboldt's intellectual holistic perspective, which anticipated the systems approach of the last century.
Harnessing the overall benefits of the latest advancements in artificial intelligence (AI) requires the extensive collaboration of academia and industry. These collaborations promote innovation and growth while reinforcing the practical usefulness of newer technologies in real life. The purpose of this article is to outline the challenges faced during cross-collaboration between academia and industry. These challenges are also inspected with the help of an ongoing project titled “Quality Assurance of Machine Learning Applications” (Q-AMeLiA), in which three universities cooperate with five industry partners to make the product risk of AI-based products visible. Further, we discuss the hurdles and the key challenges in machine learning (ML) technology transfer from academia to industry with respect to robustness, simplicity, and safety. These challenges are an outcome of the lack of common standards and metrics and of missing regulatory considerations when state-of-the-art (SOTA) technology is developed in academia. The use of biased datasets involves ethical concerns that might lead to unfair outcomes when the ML model is deployed in production. The advancement of AI in small and medium-sized enterprises (SMEs) requires common standardization of concepts more than algorithmic breakthroughs. In this paper, in addition to the general challenges, we also discuss domain-specific barriers for five different domains, i.e., object detection, hardware benchmarking, continual learning, action recognition, and industrial process automation, and highlight the steps necessary for successfully managing cross-sectoral collaborations between academia and industry.
The accurate diagnosis of state of charge (SOC) and state of health (SOH) is of utmost importance for battery users and battery manufacturers. State diagnosis is commonly based on measuring battery current and using it in Coulomb counters or as input for a current-controlled model. Here we introduce a new algorithm based on measuring battery voltage and using it as input for a voltage-controlled model. We demonstrate the algorithm using fresh and pre-aged lithium-ion battery single cells operated under well-defined laboratory conditions on full cycles, shallow cycles, and a dynamic battery electric vehicle load profile. We show that both SOC and SOH are accurately estimated using a simple equivalent circuit model. The new algorithm is self-calibrating, is robust with respect to cell aging, makes it possible to estimate SOH from arbitrary load profiles, and is numerically simpler than state-of-the-art model-based methods.
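The abstract only names the approach; as an illustration, the Python sketch below shows a minimal voltage-controlled estimator for a purely ohmic equivalent circuit: the measured terminal voltage drives the model, the model current follows from the OCV difference over the series resistance, and SOC is integrated from that model current. The OCV curve, resistance, and capacity are invented example values, not the paper's parameterization.

import numpy as np

ocv_soc = np.linspace(0.0, 1.0, 11)                 # SOC grid
ocv_v = np.array([3.0, 3.3, 3.45, 3.55, 3.6, 3.65,  # hypothetical OCV curve [V]
                  3.7, 3.8, 3.95, 4.05, 4.2])

def step(soc, v_meas, dt, r0=0.02, q_ah=2.5):
    """Advance SOC by one sample using the measured voltage as model input."""
    ocv = np.interp(soc, ocv_soc, ocv_v)    # open-circuit voltage at current SOC
    i_model = (ocv - v_meas) / r0           # model current (discharge positive)
    soc -= i_model * dt / (q_ah * 3600.0)   # Coulomb integration of model current
    return soc, i_model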
Background
Internal tibial loading is influenced by modifiable factors with implications for the risk of stress injury. Runners encounter varied surface steepness (gradients) when running outdoors and may adapt their speed according to the gradient. This study aimed to quantify tibial bending moments and stress at the anterior and posterior peripheries when running at different speeds on surfaces of different gradients.
Methods
Twenty recreational runners ran on a treadmill at 3 different speeds (2.5 m/s, 3.0 m/s, and 3.5 m/s) and gradients (level: 0%; uphill: +5%, +10%, and +15%; downhill: –5%, –10%, and –15%). Force and marker data were collected synchronously throughout. Bending moments were estimated at the distal third centroid of the tibia about the medial–lateral axis by ensuring static equilibrium at each 1% of stance. Stress was derived from bending moments at the anterior and posterior peripheries by modeling the tibia as a hollow ellipse. Two-way repeated-measures analyses of variance were conducted using both functional and discrete statistical analyses.
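For reference, the hollow-ellipse model yields the peripheral bending stress via standard beam theory; the expression below uses generic symbols and is a textbook formulation, not the study's subject-specific parameterization.

\[
\sigma = \frac{M\,y}{I},
\qquad
I = \frac{\pi}{4}\left(a\,b^{3} - a_{i}\,b_{i}^{3}\right),
\]

where M is the bending moment about the medial–lateral axis, y is the distance from the neutral axis to the anterior or posterior periphery, a and b are the outer semi-axes, and a_i and b_i the inner semi-axes of the hollow elliptical cross-section.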
Results
There were significant main effects for running speed and gradient on peak bending moments and peak anterior and posterior stress. Higher running speeds resulted in greater tibial loading. Running uphill at +10% and +15% resulted in greater tibial loading than level running. Running downhill at –10% and –15% resulted in reduced tibial loading compared to level running. There was no difference between +5% or –5% and level running.
Conclusion
Running at faster speeds and uphill on gradients ≥+10% increased internal tibial loading, whereas slower running and downhill running on gradients ≥–10% reduced internal loading. Adapting running speed according to the gradient could be a protective mechanism, providing runners with a strategy to minimize the risk of tibial stress injuries.
In recent years, social robots have become a trending topic. Indeed, robots that communicate with us and mimic human behavior patterns are fascinating. However, while there is a massive body of research on their design and acceptance in different fields of application, their market potential has rarely been investigated. As their future integration into society may have a vast disruptive potential, this work aims at shedding light on the market potential, focusing on the assistive health domain. A study with 197 persons from Italy (age: M = 67.87; SD = 8.87) and Germany (age: M = 62.15; SD = 6.14) investigated cultural acceptance, desired functionalities, and purchase preferences. The participants filled in a questionnaire after watching a video illustrating some examples of social robots. Surprisingly, the individual perception of health status, social status, and nationality hardly influenced the attitude towards social robots, although the German group was somewhat more reluctant about the idea of using them. Instead, there were significant correlations with most dimensions of the Almere model (such as perceived enjoyment, sociability, usefulness, and trustworthiness). Also, technology acceptance was strongly correlated with the individual readiness to invest money. However, as most persons consider social robots to be “Assistive Technological Devices” (ATDs), they expected that their provision should mirror the usual practices followed in the two countries for such devices. Thus, to facilitate the future visibility and adoption of social robots by both individuals and health care organisations, policy makers would need to start integrating them into official ATD databases.
We aim to debate, and eventually be able to carefully judge, how realistic the following statement of a young computer scientist is: “I would like to become an ethically correctly acting offensive cybersecurity expert”. The objective of this article is neither to judge what is good and what is wrong behavior nor to present an overall solution to ethical dilemmas. Instead, the goal is to become aware of the various personal moral dilemmas a security expert may face during their work life. For this, a total of 14 cybersecurity students from HS Offenburg were asked to evaluate several case studies according to different ethical frameworks. The results and particularities are discussed, considering different ethical frameworks. We emphasize that different ethical frameworks can lead to different preferred actions and that the moral understanding of the frameworks may differ even from student to student.
The use of biochar is an important tool to improve soil fertility, reduce the negative environmental impacts of agriculture, and build up terrestrial carbon sinks. However, crop yield increases through biochar amendment have not been shown consistently for fertile soils under a temperate climate. Recent studies show that biochar is more likely to increase crop yields when applied in combination with nutrients as a biochar-based fertilizer. Here, we focused on the root-zone amendment of biochar combined with mineral fertilizers in a greenhouse trial with white cabbage (Brassica oleracea convar. Capitata var. Alba) cultivated in a nutrient-rich silt loam soil originating from the temperate climate zone (Bavaria, Germany). Biochar was applied at a low dosage (1.3 t ha−1). The biochar was placed either as a concentrated hotspot below the seedling or mixed into the soil in the root zone, representing a mixture of biochar and soil in the planting basin. The nitrogen fertilizer (ammonium nitrate or urea) was either applied on the soil surface or loaded onto the biochar, representing a nitrogen-enhanced biochar. On average, a 12% yield increase in dry cabbage heads was achieved with biochar plus fertilizer compared to the fertilized control without biochar. The most consistent positive yield responses were observed with a hotspot root-zone application of nitrogen-enhanced biochar, showing a maximum 21% increase in dry cabbage-head yield. Belowground biomass and root architecture suggested a decrease in the fine-root content in these treatments compared to treatments without biochar and with soil-mixed biochar. We conclude that the hotspot amendment of a nitrogen-enhanced biochar in the root zone can optimize the growth of white cabbage by providing a nutrient depot in close proximity to the plant, enabling an efficient nutrient supply. The amendment of low doses in the root zone of annual crops could become an economically interesting application option for biochar in the temperate climate zone.
The contribution of the RoofKIT student team to the SDE 21/22 competition is the extension of an existing café in Wuppertal, Germany, creating new functions and living space for the building with simultaneous energetic upgrading. A demonstration unit was built representing a small cut-out of this extension. The energy concept was thoroughly simulated by the student team in seminars using Modelica. The system mainly uses solar energy via PVT collectors as the heat source for a brine-water heat pump (space heating and hot water). Energy storage (thermal and electrical) is installed to decouple generation and consumption. Simulation results confirm that carbon neutrality is achieved for the building operation, with both consumption and generation around 60 kWh/(m²·a).
Modern CNNs learn the weights of vast numbers of convolutional operators. In this paper, we raise the fundamental question of whether this is actually necessary. We show that even in the extreme case of only randomly initializing and never updating spatial filters, certain CNN architectures can be trained to surpass the accuracy of standard training. By reinterpreting the notion of pointwise (1×1) convolutions as an operator to learn linear combinations (LC) of frozen (random) spatial filters, we are able to analyze these effects and propose a generic LC convolution block that allows tuning of the linear combination rate. Empirically, we show that this approach not only allows us to reach high test accuracies on CIFAR and ImageNet but also has favorable properties regarding model robustness, generalization, sparsity, and the total number of necessary weights. Additionally, we propose a novel weight-sharing mechanism, which allows a single weight tensor to be shared between all spatial convolution layers to massively reduce the number of weights.
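A hedged PyTorch sketch of the weight-sharing mechanism mentioned above: all spatial convolutions reuse one shared, frozen, random filter tensor, while each layer keeps its own learnable 1×1 linear combination. This is one plausible reading of the abstract, not the authors' implementation.

import torch
import torch.nn as nn

class SharedFilterConv(nn.Module):
    """Spatial conv reusing a single shared filter bank, followed by a
    per-layer learnable 1x1 linear combination (LC) of the responses."""
    def __init__(self, shared_filters, in_ch, out_ch):
        super().__init__()
        self.shared_filters = shared_filters    # (n_filters, 1, k, k), frozen
        self.in_ch = in_ch
        self.pointwise = nn.Conv2d(in_ch * shared_filters.shape[0], out_ch, 1)

    def forward(self, x):
        k = self.shared_filters.shape[-1]
        # apply every shared filter to every input channel (depthwise style)
        w = self.shared_filters.repeat(self.in_ch, 1, 1, 1)
        y = nn.functional.conv2d(x, w, padding=k // 2, groups=self.in_ch)
        return self.pointwise(y)

# one frozen random tensor shared by all spatial convolution layers
shared = torch.randn(8, 1, 3, 3)
layer1 = SharedFilterConv(shared, in_ch=3, out_ch=16)
layer2 = SharedFilterConv(shared, in_ch=16, out_ch=32)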