Garbage in, Garbage out: How does ambiguity in data affect state-of-the-art pedestrian detection?
(2024)
This thesis investigates the critical role of data quality in computer vision, particularly in pedestrian detection. The proliferation of deep learning methods has emphasised the importance of large datasets for model training, yet the quality of these datasets is equally crucial. Ambiguity in annotations, arising from factors like mislabelling, inaccurate bounding box geometry and annotator disagreement, poses significant challenges to the reliability and robustness of pedestrian detection models and their evaluation. This work explores the effects of ambiguous data on model performance, focusing on identifying and separating ambiguous instances with an ambiguity measure that utilizes annotator estimations of object visibility and identity. Systematic experimentation and analysis revealed trade-offs between data cleanliness and representativeness, and between noise removal and retention of valuable data, elucidating their impact on performance metrics such as the log average miss rate, recall and precision. Furthermore, a strong correlation between ambiguity and occlusion was discovered, with higher ambiguity corresponding to greater occlusion prevalence. The EuroCity Persons dataset served as the primary dataset, revealing a significant proportion of ambiguous instances: approximately 8.6% in the training set and 7.3% in the validation set. Results demonstrated that removing ambiguous data improves the log average miss rate, particularly by reducing false positive detections. Augmenting the training data with samples from neighbouring classes enhanced recall but diminished precision. Correcting erroneously annotated false positives and false negatives significantly impacts model evaluation results, as evidenced by shifts in the ECP leaderboard rankings. By systematically addressing ambiguity, this thesis lays the foundation for enhancing the reliability of computer vision systems in real-world applications, motivating the prioritisation of robust strategies to identify, quantify and address ambiguity.
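The log average miss rate used above as the headline metric is the standard pedestrian-detection benchmark quantity; as background, a minimal sketch of how it is conventionally computed (the geometric mean of miss rates sampled at nine false-positives-per-image points spaced logarithmically between 0.01 and 1; function and variable names are ours, not the thesis's code):

```python
import numpy as np

def log_average_miss_rate(miss_rates, fppi):
    """LAMR as commonly defined: geometric mean of the miss rate sampled
    at nine FPPI reference points spaced evenly in log space over [0.01, 1].
    `fppi` must be sorted ascending, `miss_rates` aligned with it."""
    refs = np.logspace(-2.0, 0.0, num=9)
    sampled = []
    for ref in refs:
        idx = np.where(fppi <= ref)[0]      # last operating point below ref
        sampled.append(miss_rates[idx[-1]] if idx.size else 1.0)
    sampled = np.clip(sampled, 1e-10, 1.0)  # guard against log(0)
    return float(np.exp(np.mean(np.log(sampled))))

# toy detector curve: miss rate falls as more false positives are allowed
fppi = np.array([0.005, 0.01, 0.05, 0.1, 0.5, 1.0])
miss = np.array([0.60, 0.50, 0.30, 0.20, 0.10, 0.08])
print(f"LAMR = {log_average_miss_rate(miss, fppi):.3f}")
```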
This thesis addresses the use of reinforcement learning in the information-gathering phase of a penetration test. Core problems in the approaches of previous scientific work on the topic are analysed, and practical solutions to these obstacles are presented and implemented. The thesis thus demonstrates an exemplary implementation of a reinforcement learning agent for automating the information-gathering phase of a penetration test and presents solutions to existing problems in this area.
This work is embedded in the requirements of Herrenknecht AG regarding the security of its tunnel boring machine network. Practical results of the self-developed reinforcement learning model in Herrenknecht AG's tunnel-boring-machine test network are presented.
With the expansion of IoT devices into many aspects of our lives, the security of such systems has become an important challenge. Unlike conventional computer systems, any IoT security solution should consider the constraints of these systems, such as limitations in computational capability, memory, connectivity, and power consumption. Physical Unclonable Functions (PUFs), with their special characteristics, were introduced to satisfy these security needs while respecting the mentioned constraints. They exploit the uncontrollable yet reproducible variations of an underlying component for security applications such as identification, authentication, and communication security. Since IoT devices are typically low cost, it is important to reuse existing elements in their hardware (for instance sensors, ADCs, etc.) instead of adding extra cost for dedicated PUF hardware. Micro-electromechanical system (MEMS) devices are widely used in IoT systems as sensors and actuators. In this thesis, a comprehensive study of the potential application of MEMS devices as PUF primitives is provided. A MEMS PUF leverages the uncontrollable variations in the parameters of MEMS elements to derive secure keys for cryptographic applications. Experimental and simulation results show that our proposed MEMS PUFs are capable of generating enough entropy for a complex key generation, while their responses show low fluctuation under different environmental conditions.
Keeping in mind that PUF responses are prone to change in the presence of noise and environmental variations, it is critical to derive reliable keys from the PUF while using the maximum entropy at the same time. In the second part of this thesis, we elaborate on different key generation schemes and their advantages and drawbacks. We propose the PUF output positioning (POP) and integer linear programming (ILP) methods, which are novel methods for grouping the PUF outputs in order to maximize the extracted entropy. To implement these methods, the key enrollment and key generation algorithms are presented. The proposed methods are then evaluated by applying them to the responses of the MEMS PUF, where it is shown in practice that they outperform other existing PUF key generation methods.
The final part of this thesis is dedicated to the application of the MEMS PUF as a security solution for IoT systems. We select the mutual authentication of IoT devices and their backend system, and propose two lightweight authentication protocols based on MEMS PUFs. The presented protocols undergo a comprehensive security analysis to show their eligibility for use in IoT systems. As a result, the output of this thesis is a lightweight security solution based on MEMS PUFs that adds very little hardware cost overhead.
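The POP and ILP grouping methods are specific to the thesis and are not reproduced here; as background for the key enrollment and key generation step, a minimal sketch of the classic code-offset ("fuzzy extractor") scheme that PUF key generation commonly builds on, using a simple repetition code (all names and parameters are illustrative):

```python
import numpy as np

REP = 5   # repetition factor: each key bit is spread over 5 PUF bits

def enroll(puf_bits, key_bits):
    """Enrollment: helper data = PUF response XOR repetition-coded key.
    The helper data can be stored publicly; it reveals no key bit as long
    as the PUF response carries enough entropy."""
    return puf_bits ^ np.repeat(key_bits, REP)

def reconstruct(noisy_puf_bits, helper):
    """Reconstruction: XOR the re-measured (noisy) response with the helper
    data, then majority-vote within each repetition block."""
    coded = (noisy_puf_bits ^ helper).reshape(-1, REP)
    return (coded.sum(axis=1) > REP // 2).astype(np.uint8)

rng = np.random.default_rng(0)
key = rng.integers(0, 2, 16, dtype=np.uint8)
puf = rng.integers(0, 2, 16 * REP, dtype=np.uint8)   # enrollment response
helper = enroll(puf, key)

noisy = puf.copy()
noisy[[0, 6, 12, 23, 34, 45]] ^= 1    # a few bit flips, max one per block
assert np.array_equal(reconstruct(noisy, helper), key)
print("key reconstructed despite a noisy PUF response")
```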
This report examines exporters' challenges and possible solutions for public intervention to promote foreign trade. Based on fieldwork conducted in Georgia, we explore which policy approaches can help to stimulate Georgian exports further. Our outcomes show that exporters face substantial barriers such as navigating complex trade regulations, lack of knowledge about target markets, trade finance gaps, as well as new export promotion programs (EPPs) in competitor countries. Other upper-middle-income countries can learn from our results that exporters can significantly benefit from a comprehensive export promotion strategy combined with an ecosystem-based “team” approach. EPPs related to awareness and capacity building in Georgia should be part of this strategy, focusing on challenges such as a lack of knowledge about trade practices and international business skills. Other EPPs must help to mitigate related market failures, as information gathering is costly and firms have no incentive to share this information with competitors. Furthermore, targeted marketing support and customer matchmaking can answer Georgian exporters' challenges, such as lack of market access and low sector visibility. Our results also show that public intervention through financial support and risk mitigation is essential for firms with an international orientation. The rich outcomes provide significant value for other upper-middle-income countries by exploring Georgia's contemporary circumstances in depth, based on extensive interviews and document analysis. Limitations include the primarily qualitative nature of our data; further research could involve a quantitative study with a diverse range of sectors.
Ultra-low-power passive telemetry systems for industrial and biomedical applications have gained much popularity lately. Reducing the power consumption and size of the circuits poses critical challenges in ultra-low-power circuit design. Biotelemetry applications like leakage detection in silicone breast implants require small, low-power electronics. In this doctoral thesis, the design, simulation, and measurement of a programmable mixed-signal System-on-Chip (SoC) called General Application Passive Sensor Integrated Circuit (GAPSIC) is presented. Owing to its low power consumption, GAPSIC is capable of completely passive operation. Such a batteryless passive system has lower maintenance complexity and is also free from battery-related health hazards. With a die area of 4.92 mm² and a maximum analog power consumption of 592 µW, GAPSIC has one of the best figures of merit among similar state-of-the-art SoCs. Regarding possible applications, GAPSIC can read out and digitally transmit the signals of resistive sensors for pressure or temperature measurements. Additionally, GAPSIC can measure electrocardiogram (ECG) signals and conductivity.
The design of GAPSIC complies with the International Organization for Standardization (ISO) 15693 / NFC (near field communication) Type 5 standard for radio frequency identification (RFID), corresponding to the carrier frequency of 13.56 MHz. A passive transponder developed with GAPSIC comprises an external memory and very few other external components, such as an antenna and sensors. The passive tag antenna and reader antenna use inductive coupling for communication and energy transfer, which enables passive operation. A passive tag developed with GAPSIC can communicate with an NFC-compatible smart device or an ISO 15693 RFID reader. The external memory contains the programmable application-specific firmware.
As a mixed-signal SoC, GAPSIC includes both analog and digital circuitry. The analog block includes a power management unit, an RFID/NFC communication unit, and a sensor readout unit. The digital block includes an integrated 32-bit microcontroller, developed by the Hochschule Offenburg ASIC design center, and digital peripherals. A 16-kilobyte random-access memory and a 16-kilobyte read-only memory constitute the GAPSIC internal memory. GAPSIC is fabricated in a one-poly, six-metal 0.18 µm CMOS process.
The design of GAPSIC includes two stages. In the first stage, a standalone RFID/NFC frontend chip with a power management unit, an RFID/NFC communication unit, a clock regenerator unit, and a field detector unit was designed. In the second stage, the remaining functional blocks were integrated with the blocks of the RFID/NFC frontend chip for the final integration of GAPSIC. To reduce power consumption, conventional low-power design techniques were applied extensively, such as multiple power supplies and operating the complementary metal-oxide-semiconductor (CMOS) transistors in the sub-threshold region, alongside further innovative circuit designs.
An overvoltage protection circuit, a power rectifier, a bandgap reference circuit, and two low-dropout (LDO) voltage regulators constitute the power management unit of GAPSIC. The overvoltage protection circuit uses a novel method where three stacked transistor pairs shunt the extra voltage. In the power rectifier, four rectifier units are arranged in parallel, which is a unique approach. The four parallel rectifier units provide the optimal choice in terms of voltage drop and the area required.
The communication unit is responsible for RFID/NFC communication and incorporates demodulation and load modulation circuitry. The demodulator circuit comprises an envelope detector, a high-pass filter, and a comparator. Following a new approach, the bandgap reference circuit itself acts as the load for the envelope detector circuit, which minimizes circuit complexity and area. For the communication between the reader and the RFID/NFC tag, amplitude-shift keying (ASK) is used to modulate signals, where the smallest modulation index can be as low as 10%. A novel technique involving a comparator with a preset offset voltage effectively demodulates the ASK signal. With an effective die area of 0.7 mm² and power consumption of 107 µW, the standalone RFID/NFC frontend chip has the best figure-of-merit compared to the state-of-the-art frontend chips reported in the relevant literature. A passive RFID/NFC tag developed with the standalone frontend chip and with temperature and pressure sensors demonstrates the full passive operational capability of the frontend chip. An NFC reader device using a custom-built Android-based application reads out the sensor data from the passive tag.
The sensor readout circuit consists of a channel selector with two differential and four single-ended inputs, followed by a programmable-gain instrumentation amplifier. The entire sensor readout part remains deactivated when not in use. The internal memory stores the measured offset voltage of the instrumentation amplifier, and firmware removes this offset from the measured sensor signal. A 12-bit successive approximation register (SAR) analog-to-digital converter (ADC) based on a charge redistribution architecture converts the measured sensor signal to a digital value. The digital peripherals include a serial peripheral interface, four timers, RFID/NFC interfaces, sensor readout unit interfaces, and the 12-bit SAR logic.
Two sets of studies with custom-made NFC tag antennas for biomedical applications were conducted to ascertain their compatibility with GAPSIC. The first study involved link efficiency measurements of NFC tag antennas and an NFC reader antenna with porcine tissue. In a separate experiment, the effect of a ferrite core, compared to an air core, on the antenna coupling factor was investigated. With the ferrite core, the coupling factor increased fourfold.
Among the state-of-the-art SoCs published in recent scientific articles, GAPSIC is the only passive programmable SoC with a power management unit, an RFID/NFC communication interface, a sensor readout circuit, a 12-bit SAR ADC, and an integrated 32-bit microcontroller. This doctoral research includes the preliminary study of three passive RFID tags designed with discrete components for biomedical and industrial applications like measurements of temperature, pH, conductivity, and oxygen concentration, along with leakage detection in silicone breast implants. Besides its small size and low power consumption, GAPSIC is suitable for each of the biomedical and industrial applications mentioned above due to the integrated high-performance microcontroller, the robust programmable instrumentation amplifier, and the 12-bit analog-to-digital converter. Furthermore, the simulation and measurement data show that GAPSIC is well suited for the design of a passive tag to monitor arterial blood pressure in patients experiencing Peripheral Artery Disease (PAD), which is proposed in this doctoral thesis as an exemplary application of the developed system.
Decarbonisation Strategies in Energy Systems Modelling: APV and e-tractors as Flexibility Assets
(2023)
This work presents an analysis of the impact of introducing Agrophotovoltaic technologies and electric tractors into Germany’s energy system. Agrophotovoltaics involves installing photovoltaic systems in agricultural areas, allowing for dual usage of the land for both energy generation and food production. Electric tractors, which are agricultural machinery powered by electric motors, can also function as energy storage units, providing flexibility to the grid. The analysis includes a sensitivity study to understand how the availability of agricultural land influences Agrophotovoltaic investments, followed by the examination of various scenarios that involve converting diesel tractors to electric tractors. These scenarios are based on the current CO2 emission reduction targets set by the German Government, aiming for a 65% reduction below 1990 levels by 2030 and achieving zero emissions by 2045. The results indicate that approximately 3% of available agricultural land is necessary to establish a viable energy mix in Germany. Furthermore, the expansion of electric tractors tends to reduce the overall system costs and enhances the energy-cost-efficiency of Agrophotovoltaic investments.
Atrial fibrillation is the most common tachycardic cardiac arrhythmia worldwide. The heart loses its normofrequent sinus rhythm and no longer beats regularly, but too fast and irregularly. Atrial fibrillation is usually not a life-threatening arrhythmia, but it can lead to a stroke. The causes of this arrhythmia are re-entrant or focal excitations in the left atrium, which mainly originate from one or more pulmonary veins. The standard therapeutic procedure for atrial fibrillation is pulmonary vein isolation.
This bachelor's thesis therefore deals with the modelling of different left atrial focus models and intracardiac electrode catheters for the diagnosis and termination of atrial fibrillation by means of pulmonary vein isolation in the Offenburg heart rhythm model according to Schalk, Krämer and Benke, which was realized in CST Studio Suite.
First, the various left atrial focal fibrillation sources were modelled and then simulated. One simulation each was carried out with left atrial focal fibrillation sources originating from a single pulmonary vein, from two, or from all four. A further simulation was created with (real-world) biosignals. These simulations made the electrical excitation sequence visible. Subsequently, the catheters for diagnostics and for pulmonary vein isolation were modelled and integrated into the existing Offenburg heart rhythm model. The diagnostic catheters were 10-pole Lasso® catheters, two variants of the PentaRay® NAV eco catheter, and 4-pole "OSYPKA FINDER pure®" diagnostic catheters. The ablation catheters were two variants of a pentaspline basket catheter and the HELIOSTAR™ ablation balloon. Finally, different variants of pulmonary vein isolation procedures were modelled, and the left atrial focal fibrillation sources were simulated after isolation of the pulmonary veins.
We aim to debate, and eventually be able to carefully judge, how realistic the following statement of a young computer scientist is: “I would like to become an ethically correctly acting offensive cybersecurity expert”. The objective of this article is neither to judge what is good and what is wrong behavior nor to present an overall solution to ethical dilemmas. Instead, the goal is to become aware of the various personal moral dilemmas a security expert may face during their working life. For this, a total of 14 cybersecurity students from HS Offenburg were asked to evaluate several case studies according to different ethical frameworks. The results and particularities are discussed in light of the different frameworks. We emphasize that different ethical frameworks can lead to different preferred actions and that the moral understanding of the frameworks may differ even from student to student.
Artificial intelligence (AI), and in particular machine learning algorithms, are of increasing importance in many application areas, but interpretability and understandability, as well as responsibility, accountability, and fairness of the algorithms' results, all crucial for increasing human trust in the systems, are still largely missing. Big industrial players, including Google, Microsoft, and Apple, have become aware of this gap and recently published their own guidelines for the use of AI in order to promote fairness, trust, interpretability, and other goals. Interactive visualization is one of the technologies that may help to increase trust in AI systems. During the seminar, we discussed the requirements for trustworthy AI systems as well as the technological possibilities provided by interactive visualizations to increase human trust in AI.
CNN-based deep learning models for disease detection have recently become popular. We compared eight prominent deep learning models (DenseNet121, DenseNet169, DenseNet201, EfficientNet-b0, EfficientNet-lite4, GoogleNet, MobileNet, and ResNet18) for their binary classification performance on a combined pulmonary chest X-ray dataset. Despite the widespread application of these models to medical images in different fields, a knowledge gap remains in determining their relative performance when applied to the same dataset, a gap this study aimed to address. The dataset combined data from Shenzhen, China (CH) and Montgomery, USA (MC). We trained each model for binary classification, calculated various performance parameters, and compared them. All models were trained with the same training parameters to maintain a controlled comparison environment. We found distinct performance differences among the models on the pulmonary chest X-ray dataset, with DenseNet169 reaching 89.38 percent and MobileNet 92.2 percent precision.
The COVID-19 pandemic, a unique and devastating respiratory disease outbreak, has affected global populations as the disease spreads rapidly. Recent deep learning breakthroughs may improve COVID-19 prediction and forecasting as tools for precise and fast detection; however, current methods are still being examined to achieve higher accuracy and precision. This study analyzed a collection of 8055 CT image samples, 5427 of which were COVID cases and 2628 non-COVID. The 9544 X-ray samples included 4044 COVID patients and 5500 non-COVID cases. The most accurate models are MobileNet V3 (97.872 percent), DenseNet201 (97.567 percent), and GoogleNet Inception V1 (97.643 percent). The high accuracy indicates that these models can make many correct predictions, and the other metrics are also high for MobileNetV3 and DenseNet201. An extensive evaluation using accuracy, precision, and recall allows a comprehensive comparison, improving the predictive models by combining loss optimization with scalable batch normalization. Our analysis shows that these tactics improve model performance and resilience for advancing COVID-19 prediction and detection, and it shows how deep learning can improve disease handling. The suggested methods would help healthcare systems, policymakers, and researchers make informed decisions to reduce the impact of COVID-19 and other contagious diseases.
The use of artificial intelligence continues to impact a broad variety of domains, application areas, and people. However, interpretability, understandability, responsibility, accountability, and fairness of the algorithms' results, all crucial for increasing human trust in the systems, are still largely missing. The purpose of this seminar is to understand how these components factor into a holistic view of trust. Further, this seminar seeks to identify design guidelines and best practices for building interactive visualization systems that calibrate trust.
With the rising necessity of explainable artificial intelligence (XAI), we see an increase in task-dependent XAI methods at varying abstraction levels. XAI techniques on a global level explain model behavior, while those on a local level explain sample predictions. We propose a visual analytics workflow to support seamless transitions between global and local explanations, focusing on attributions and counterfactuals for time series classification. In particular, we adapt local XAI techniques (attributions) that were developed for traditional data types (images, text) to time series classification, a data type that is typically less intelligible to humans. To generate a global overview, we apply local attribution methods to the data, creating explanations for the whole dataset. These explanations are projected onto two dimensions, depicting model behavior trends, strategies, and decision boundaries. To further inspect the model's decision-making as well as potential data errors, a what-if analysis facilitates hypothesis generation and verification on both the global and local levels. We constantly collected and incorporated expert user feedback, as well as insights based on their domain knowledge, resulting in a tailored analysis workflow and system that tightly integrates time series transformations into explanations. Lastly, we present three use cases, verifying that our technique enables users to (1) explore data transformations and feature relevance, (2) identify model behavior and decision boundaries, and (3) uncover the reasons for misclassifications.
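As an illustration of the global-overview step described above (local attributions projected to two dimensions), a minimal sketch with stand-in attribution vectors; the actual attribution methods, projection choice, and interface of the workflow are not reproduced:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# stand-in local explanations: one attribution vector per time series sample
# (in the workflow these come from attribution methods applied to the
# classifier on every sample of the dataset)
attributions = rng.normal(size=(500, 120))
labels = rng.integers(0, 2, size=500)

# project all local explanations to two dimensions for the global overview;
# clusters in this plane hint at model strategies and decision boundaries
proj = PCA(n_components=2).fit_transform(attributions)
for c in (0, 1):
    centre = proj[labels == c].mean(axis=0)
    print(f"class {c}: projection centre at {np.round(centre, 2)}")
```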
Modern CNNs are learning the weights of vast numbers of convolutional operators. In this paper, we raise the fundamental question if this is actually necessary. We show that even in the extreme case of only randomly initializing and never updating spatial filters, certain CNN architectures can be trained to surpass the accuracy of standard training. By reinterpreting the notion of pointwise (1×1) convolutions as an operator to learn linear combinations (LC) of frozen (random) spatial filters, we are able to analyze these effects and propose a generic LC convolution block that allows tuning of the linear combination rate. Empirically, we show that this approach not only allows us to reach high test accuracies on CIFAR and ImageNet but also has favorable properties regarding model robustness, generalization, sparsity, and the total number of necessary weights. Additionally, we propose a novel weight sharing mechanism, which allows sharing of a single weight tensor between all spatial convolution layers to massively reduce the number of weights.
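A minimal PyTorch sketch of the linear-combination idea: a frozen, randomly initialized spatial convolution whose outputs are recombined by a learnable pointwise (1×1) convolution. This is our simplified rendering, not the paper's exact LC block:

```python
import torch
import torch.nn as nn

class LCConv(nn.Module):
    """Frozen random spatial filters + learnable 1x1 linear combinations."""
    def __init__(self, in_ch, out_ch, expansion=2, kernel_size=3):
        super().__init__()
        mid = out_ch * expansion   # number of frozen random filters
        self.spatial = nn.Conv2d(in_ch, mid, kernel_size,
                                 padding=kernel_size // 2, bias=False)
        self.spatial.weight.requires_grad_(False)   # never updated
        self.combine = nn.Conv2d(mid, out_ch, 1, bias=False)  # learned LC

    def forward(self, x):
        return self.combine(self.spatial(x))

block = LCConv(16, 32)
x = torch.randn(1, 16, 8, 8)
print(block(x).shape)   # torch.Size([1, 32, 8, 8])
trainable = sum(p.numel() for p in block.parameters() if p.requires_grad)
print(f"trainable params: {trainable}")  # only the 1x1 combination weights
```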
Socially assistive robots (SARs) are becoming more prevalent in everyday life, emphasizing the need to make them socially acceptable and aligned with users' expectations. Robots' appearance impacts users' behaviors and attitudes towards them. Therefore, product designers choose visual qualities to give the robot a character and to imply its functionality and personality. In this work, we sought to investigate the effect of cultural differences on Israeli and German designers' perceptions and preferences regarding the suitable visual qualities of SARs in four different contexts: a service robot for an assisted living/retirement residence facility, a medical assistant robot for a hospital environment, a COVID-19 officer robot, and a personal assistant robot for domestic use. Our results indicate that Israeli and German designers share similar perceptions of visual qualities and most of the robotics roles. However, we found differences in the perception of the COVID-19 officer robot's role and, by that, its most suitable visual design. This work indicates that context and culture play a role in users' perceptions and expectations; therefore, they should be taken into account when designing new SARs for diverse contexts.
Recent advances in spiked shoe design, characterized by increased longitudinal stiffness, thicker midsole foams, and reconfigured geometry, are considered to improve sprint performance. However, no empirical data on the effects of advanced spike technology on maximal sprinting speed (MSS) have been published yet. Consequently, we assessed MSS via ‘flying 30m’ sprints of 44 trained male (PR: 10.32 s - 12.08 s) and female (PR: 11.56 s - 14.18 s) athletes, wearing both traditional and advanced spikes in a randomized, repeated measures design. The results revealed a statistically significant increase in MSS of 1.21% on average when using advanced spike technology. Notably, 87% of participants showed improved MSS with the advanced spikes. A cluster analysis revealed that athletes with higher MSS may benefit to a greater extent. However, individual responses varied widely, suggesting the influence of multiple factors that need detailed exploration. Coaches and athletes are therefore advised to interpret the promising performance enhancements cautiously and to evaluate critically whether advanced spike technology is appropriate for their athletes.
High-tech running shoes and spikes ("super-footwear") are currently being debated in sports. There is direct evidence that distance running super shoes improve running economy; however, it is not well established to which extent world-class performances are affected over the range of track and road running events.
This study examined publicly available performance datasets of annual best track and road performances for evidence of potential systematic performance effects following the introduction of super footwear. The analysis was based on the 100 best performances per year for men and women in outdoor events from 2010 to 2022, provided by the world governing body of athletics (World Athletics).
We found evidence of progressing improvements in track and road running performances after the introduction of super distance running shoes in 2016 and super spike technology in 2019. This evidence is more pronounced for distances longer than 1500 m in women and longer than 5000 m in men. Women seem to benefit more from super footwear in distance running events than men.
While the observational study design limits causal inference, this study provides a database on potential systematic performance effects following the introduction of super shoes/spikes in track and road running events in world-class athletes. Further research is needed to examine the underlying mechanisms and, in particular, potential sex differences in the performance effects of super footwear.
Following the traditional paradigm of convolutional neural networks (CNNs), modern CNNs manage to keep pace with more recent, for example transformer-based, models by not only increasing model depth and width but also the kernel size. This results in large amounts of learnable model parameters that need to be handled during training. While following the convolutional paradigm with the according spatial inductive bias, we question the significance of learned convolution filters. In fact, our findings demonstrate that many contemporary CNN architectures can achieve high test accuracies without ever updating randomly initialized (spatial) convolution filters. Instead, simple linear combinations (implemented through efficient 1×1 convolutions) suffice to effectively recombine even random filters into expressive network operators. Furthermore, these combinations of random filters can implicitly regularize the resulting operations, mitigating overfitting and enhancing overall performance and robustness. Conversely, retaining the ability to learn filter updates can impair network performance. Lastly, although we only observe relatively small gains from learning 3×3 convolutions, the learning gains increase proportionally with kernel size, owing to the non-idealities of the independent and identically distributed (i.i.d.) nature of default initialization techniques.
We have developed a methodology for the systematic generation of a large image dataset of macerated wood references, which we used to generate image data for nine hardwood genera. This is the basis for a substantial approach to automate, for the first time, the identification of hardwood species in microscopic images of fibrous materials by deep learning. Our methodology includes a flexible pipeline for easy annotation of vessel elements. We compare the performance of different neural network architectures and hyperparameters. Our proposed method performs similarly well to human experts. In the future, this will improve controls on global wood fiber product flows to protect forests.
State-of-the-art models for pixel-wise prediction tasks such as image restoration, image segmentation, or disparity estimation involve several stages of data resampling, in which the resolution of feature maps is first reduced to aggregate information and then sequentially increased to generate a high-resolution output. Several previous works have investigated the artifacts that are invoked during downsampling, and diverse remedies have been proposed that improve prediction stability and even robustness for image classification. However, the equally relevant artifacts that arise during upsampling have been discussed much less. This is significant, as upsampling and downsampling face fundamentally different challenges: while aliases and artifacts during downsampling can be reduced by blurring feature maps, the emergence of fine details is crucial during upsampling, so blurring is not an option and dedicated operations need to be considered. In this work, we are the first to explore the relevance of context during upsampling by employing convolutional upsampling operations with increasing kernel size while keeping the encoder unchanged. We find that increased kernel sizes can in general improve the prediction stability in tasks such as image restoration or image segmentation, while a block that allows for a combination of small-size kernels for fine details and large-size kernels for artifact removal and increased context yields the best results.
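A sketch of the kind of decoder block the findings above suggest, combining a small kernel for fine details with a large kernel for context after upsampling (an illustrative reading of the idea, not the authors' exact architecture):

```python
import torch
import torch.nn as nn

class DualKernelUp(nn.Module):
    """Upsample, then merge a small-kernel branch (fine detail) with a
    large-kernel branch (context / artifact removal)."""
    def __init__(self, ch, small=3, large=11):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode="bilinear",
                              align_corners=False)
        self.fine = nn.Conv2d(ch, ch, small, padding=small // 2)
        self.context = nn.Conv2d(ch, ch, large, padding=large // 2)

    def forward(self, x):
        x = self.up(x)
        return self.fine(x) + self.context(x)

x = torch.randn(1, 8, 16, 16)
print(DualKernelUp(8)(x).shape)   # torch.Size([1, 8, 32, 32])
```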
Entity Matching (EM) defines the task of learning to group objects by transferring semantic concepts from example groups (=entities) to unseen data. Despite the general availability of image data in the context of many EM problems, most currently available EM algorithms rely solely on (textual) metadata. In this paper, we introduce the first publicly available large-scale dataset for "visual entity matching", based on a production-level use case in the retail domain. Using scanned advertisement leaflets, collected over several years from different European retailers, we provide a total of ~786k manually annotated, high-resolution product images containing ~18k different individual retail products which are grouped into ~3k entities. The annotation of these product entities is based on a price comparison task, where each entity forms an equivalence class of comparable products. Following a first baseline evaluation, we show that the proposed "visual entity matching" constitutes a novel learning problem which cannot be solved sufficiently well by standard image-based classification and retrieval algorithms. Instead, novel approaches that can transfer example-based visual equivalence classes to new data are needed. The aim of this paper is to provide a benchmark for such algorithms.
Information about the dataset, evaluation code and download instructions is provided at https://www.retail-786k.org/.
The increasing diffusion of rapidly developing AI technologies led to the idea of an experiment combining TRIZ-based automated idea generation with the natural language processing tool ChatGPT, using the chatbot to interpret the automatically generated elementary solution principles. The article explores the opportunities and benefits of a novel AI-enhanced approach to teaching systematic innovation, analyses the learning experience, identifies the factors that affect students' innovation and problem-solving performance, and highlights the main difficulties students face, especially in interdisciplinary problems.
Inner Congo
(2023)
This research-creation project, part of the DE\GLOBALIZE artistic research cycle presented at the #IFM2022 Conference, investigates the complexities of Congo violence, care, and colonialism. Drawing on Michel Serres' metaphor of the great estuaries, the study explores the topology of interactive documentaries, blending theory, emotion, and personal experiences. Accessible through the interactive web documentation at http://deglobalize.com, the platform offers a media-archaeological archive for speculative ethnography, enabling the forensic processing of single documents in line with actor-network theory.
Artificial Intelligence (AI) can potentially transform many aspects of modern society in various ways, including automation of tasks, personalization of products and services, diagnosis of diseases and their treatment, transportation, safety, and security in public spaces, etc. Recently, AI technology has been transforming the financial industry, offering new ways to analyse data and automate processes, reduce costs, increase efficiency, and provide more personalized services to customers. However, it also raised important ethical and regulatory questions that need to be addressed by the industry and society as a whole. The aim of the Erasmus+ project Transversal Skills in Applied Artificial Intelligence - TSAAI (KA220-HED - Cooperation Partnerships in higher education) has been to establish a training platform that will incorporate teaching guidelines based on a curriculum covering different areas of application of AI technology. In this work, we will focus on applying AI models in the financial and insurance sectors.
Enhancing engineering creativity with automated formulation of elementary solution principles
(2023)
The paper describes a method for the automated formulation of elementary creative stimuli for product or process design at different levels of abstraction and in different engineering domains. The experimental study evaluates the impact of structured automated idea generation on inventive thinking in engineering design and compares it with previous experimental studies in educational and industrial settings. The outlook highlights the benefits of using automated ideation in the context of AI-assisted invention and innovation.
Training deep neural networks using backpropagation is very memory- and computationally intensive. This makes it difficult to run on-device learning or to fine-tune neural networks on tiny embedded devices such as low-power microcontroller units (MCUs). Sparse backpropagation algorithms try to reduce the computational load of on-device learning by training only a subset of the weights and biases. Existing approaches use a static number of weights to train; a poor choice of this so-called backpropagation ratio either limits the computational gain or can lead to severe accuracy losses. In this paper we present TinyProp, the first sparse backpropagation method that dynamically adapts the backpropagation ratio during on-device training for each training step. TinyProp induces a small calculation overhead to sort the elements of the gradient, which does not significantly impact the computational gains. TinyProp works particularly well for fine-tuning trained networks on MCUs, a typical use case for embedded applications. On three typical datasets (MNIST, DCASE2020, and CIFAR10), TinyProp is 5 times faster than non-sparse training with an average accuracy loss of 1%. On average, TinyProp is 2.9 times faster than existing static sparse backpropagation algorithms, and the accuracy loss is reduced on average by 6% compared to a typical static setting of the backpropagation ratio.
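A toy sketch of the underlying mechanism: per training step, sort the gradient entries by magnitude and update only the top fraction of weights. TinyProp's actual rule for adapting the backpropagation ratio is not reproduced; the rule below is a placeholder:

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=1000)                 # flattened weights of one layer

def sparse_update(w, grad, ratio, lr=0.01):
    """Update only the top-`ratio` fraction of weights by |gradient|."""
    k = max(1, int(ratio * grad.size))
    top = np.argpartition(np.abs(grad), -k)[-k:]   # indices of largest grads
    w[top] -= lr * grad[top]
    return w

for step in range(5):
    grad = rng.normal(size=w.size)        # stand-in for a real gradient
    # placeholder dynamic rule: spend more effort when gradients are large
    ratio = min(1.0, 2.0 * np.abs(grad).mean() / np.abs(grad).max())
    w = sparse_update(w, grad, ratio)
    print(f"step {step}: backpropagation ratio = {ratio:.2f}")
```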
In the story "Die Schule" (original title: "The Fun They Had") from 1954, the Russian-American scientist and science fiction author Isaac Asimov describes what school looks like in the year 2157, or more precisely: that there are no schools at all anymore. Next to their room in their parents' house, every child has a small schoolroom in which they are taught by a mechanical teacher (a machine with a screen and a slot for inserting homework). This teaching machine is perfectly tuned to the abilities of the individual child and can teach it optimally. The catch: machines can break. Eleven-year-old Margie is quizzed on geography by her mechanical teacher again and again, but receives worse grades each time. Her mother sees this and calls the school inspector to repair the mechanical teacher.
The visualization of program execution is a central aid for programming beginners, making it easier to understand code flow and supporting their first steps in software development. This master's thesis presents a generic framework tailored to the needs of beginners, focusing on a simple, comprehensible, yet correct representation of program execution. The framework integrates the Debug Adapter Protocol in order to address and use the debuggers of different languages.
First, the requirements for the generic framework are discussed. Existing approaches to visualizing program execution are then examined and analysed in detail. The implementation of the framework is subsequently described in detail, with particular emphasis on extensibility to different languages.
To evaluate the suitability of the framework, several exercises from the first module of the Applied Computer Science degree program at Hochschule Offenburg, in the respective programming language, are considered. The results show that the framework can handle these exercises and display them correctly and comprehensibly.
Due to its performance, the field of deep learning has gained a lot of attention, with neural networks succeeding in areas like Computer Vision (CV), Natural Language Processing (NLP), and Reinforcement Learning (RL). However, high accuracy comes at a computational cost, as larger networks require longer training times and no longer fit onto a single GPU. To reduce training costs, researchers are looking into the dynamics of different optimizers in order to find ways to make training more efficient. Resource requirements can be limited by reducing model size during training or by designing more efficient models that improve accuracy without increasing network size.
This thesis combines eigenvalue computation and high-dimensional loss surface visualization to study different optimizers and deep neural network models. Eigenvectors of different eigenvalues are computed, and the loss landscape and optimizer trajectory are projected onto the plane spanned by those eigenvectors. A new parallelization method for the stochastic Lanczos method is introduced, resulting in faster computation and thus enabling high-resolution videos of the trajectory and second-order information during neural network training. Additionally, the thesis presents, for the first time, the loss landscape between two minima along with the eigenvalue density spectrum at intermediate points.
Secondly, this thesis presents a regularization method for Generative Adversarial Networks (GANs) that uses second-order information. The gradient during training is modified by subtracting the eigenvector direction of the largest eigenvalue, preventing the network from falling into the steepest minima and avoiding mode collapse. The thesis also shows the full eigenvalue density spectra of GANs during training.
Thirdly, this thesis introduces ProxSGD, a proximal algorithm for neural network training that guarantees convergence to a stationary point and unifies multiple popular optimizers. Proximal gradients are used to find a closed-form solution to the problem of training neural networks with smooth and non-smooth regularizations, resulting in better sparsity and more efficient optimization. Experiments show that ProxSGD can find sparser networks while reaching the same accuracy as popular optimizers.
Lastly, this thesis unifies sparsity and neural architecture search (NAS) through the framework of group sparsity. Group sparsity is achieved through ℓ2,1-regularization during training, allowing for filter and operation pruning to reduce model size with minimal sacrifice in accuracy. By grouping multiple operations together, group sparsity can be used for NAS as well. This approach is shown to be more robust while still achieving competitive accuracies compared to state-of-the-art methods.
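The proximal steps mentioned for ProxSGD and the ℓ2,1 group sparsity have standard closed forms; a minimal sketch (soft-thresholding for ℓ1, group soft-thresholding for ℓ2,1; all values illustrative):

```python
import numpy as np

def prox_l1(w, t):
    """Soft-thresholding: argmin_x 0.5*||x - w||^2 + t*||x||_1."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def prox_group_l21(W, t):
    """Group soft-thresholding (one group per row): rows whose l2 norm
    falls below t are zeroed out entirely, enabling filter pruning."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    return W * np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0)

# one proximal-SGD step: gradient step on the smooth loss,
# then the prox of the non-smooth regularizer in closed form
w = np.array([0.9, -0.05, 0.3])
grad = np.array([0.1, 0.1, -0.2])
lr, lam = 0.5, 0.3
print(prox_l1(w - lr * grad, lr * lam))   # the small middle weight is zeroed

W = np.array([[0.6, 0.8], [0.03, 0.04]])  # two filter groups
print(prox_group_l21(W, 0.1))             # the weak second group is pruned
```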
Convolutional neural networks (CNN) define the state-of-the-art solution on many perceptual tasks. However, current CNN approaches largely remain vulnerable against adversarial perturbations of the input that have been crafted specifically to fool the system while being quasi-imperceptible to the human eye. In recent years, various approaches have been proposed to defend CNNs against such attacks, for example by model hardening or by adding explicit defence mechanisms. Thereby, a small “detector” is included in the network and trained on the binary classification task of distinguishing genuine data from data containing adversarial perturbations. In this work, we propose a simple and light-weight detector, which leverages recent findings on the relation between networks’ local intrinsic dimensionality (LID) and adversarial attacks. Based on a re-interpretation of the LID measure and several simple adaptations, we surpass the state-of-the-art on adversarial detection by a significant margin and reach almost perfect results in terms of F1-score for several networks and datasets. Sources available at: https://github.com/adverML/multiLID
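Local intrinsic dimensionality has a standard maximum-likelihood estimate from nearest-neighbour distances, which detectors of this kind build on; a minimal sketch (the multiLID re-interpretation and the trained detector itself are not reproduced):

```python
import numpy as np

def lid_mle(x, data, k=20):
    """Standard maximum-likelihood LID estimate of a point x from the
    distances to its k nearest neighbours in `data`:
    lid = -1 / mean_i log(r_i / r_k)."""
    d = np.linalg.norm(data - x, axis=1)
    r = np.sort(d)[:k]                     # k smallest distances
    return -1.0 / np.mean(np.log(r / r[-1]))

rng = np.random.default_rng(0)
# points on a 2-d plane embedded in 10-d space: the estimate should be near 2
flat = np.zeros((2000, 10))
flat[:, :2] = rng.normal(size=(2000, 2))
print(f"estimated LID: {lid_mle(flat[0], flat[1:]):.2f}")
```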
In this paper, we describe the first publicly available fine-grained product recognition dataset based on leaflet images. Using advertisement leaflets, collected over several years from different European retailers, we provide a total of 41.6k manually annotated product images in 832 classes. Further, we investigate three different approaches for this fine-grained product classification task: Classification by Image, by Text, and by Image and Text. The approach "Classification by Text" uses the text extracted directly from the leaflet product images. We show that the combination of image and text as input improves the classification of visually difficult-to-distinguish products. The final model reaches an accuracy of 96.4% with a Top-3 score of 99.2%. We release our code at https://github.com/ladwigd/Leaflet-Product-Classification.
The mathematical representation of data in the Spherical Harmonic (SH) domain has recently regained interest in the machine learning community. This technical report gives an in-depth introduction to the theoretical foundation and practical implementation of SH representations, summarizing works on rotation-invariant and equivariant features, as well as convolutions and exact correlations of signals on spheres. In extension, these methods are then generalized from scalar SH representations to Vectorial Harmonics (VH), providing the same capabilities for 3d vector fields on spheres.
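As an illustration of the rotation-invariant features the report covers: the per-degree power spectrum of an SH expansion does not change under rotations of the sphere. A minimal numerical sketch using SciPy's spherical harmonics, with coarse quadrature on a grid (illustrative only; function names and the grid are ours):

```python
import numpy as np
from scipy.special import sph_harm

# coarse quadrature grid over the sphere
theta = np.linspace(0, 2 * np.pi, 80)    # azimuth
phi = np.linspace(0, np.pi, 40)          # polar angle
T, P = np.meshgrid(theta, phi)
dA = np.sin(P) * (theta[1] - theta[0]) * (phi[1] - phi[0])

def power_spectrum(f, l_max=4):
    """Per-degree SH power p_l = sum_m |<f, Y_lm>|^2 (rotation invariant)."""
    return [
        sum(abs(np.sum(f * np.conj(sph_harm(m, l, T, P)) * dA)) ** 2
            for m in range(-l, l + 1))
        for l in range(l_max + 1)
    ]

f = sph_harm(1, 2, T, P).real            # a signal concentrated at degree 2
print(np.round(power_spectrum(f), 3))    # energy appears (approximately) in p_2 only
```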
Background
Internal tibial loading is influenced by modifiable factors with implications for the risk of stress injury. Runners encounter varied surface steepness (gradients) when running outdoors and may adapt their speed according to the gradient. This study aimed to quantify tibial bending moments and stress at the anterior and posterior peripheries when running at different speeds on surfaces of different gradients.
Methods
Twenty recreational runners ran on a treadmill at 3 different speeds (2.5 m/s, 3.0 m/s, and 3.5 m/s) and gradients (level: 0%; uphill: +5%, +10%, and +15%; downhill: –5%, –10%, and –15%). Force and marker data were collected synchronously throughout. Bending moments were estimated at the distal third centroid of the tibia about the medial–lateral axis by ensuring static equilibrium at each 1% of stance. Stress was derived from bending moments at the anterior and posterior peripheries by modeling the tibia as a hollow ellipse. Two-way repeated-measures analyses of variance were conducted using both functional and discrete statistical analyses.
Results
There were significant main effects for running speed and gradient on peak bending moments and peak anterior and posterior stress. Higher running speeds resulted in greater tibial loading. Running uphill at +10% and +15% resulted in greater tibial loading than level running. Running downhill at –10% and –15% resulted in reduced tibial loading compared to level running. There was no difference between +5% or –5% and level running.
Conclusion
Running at faster speeds and uphill on gradients ≥+10% increased internal tibial loading, whereas slower running and downhill running on gradients ≥–10% reduced internal loading. Adapting running speed according to the gradient could be a protective mechanism, providing runners with a strategy to minimize the risk of tibial stress injuries.
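The hollow-ellipse model named in the Methods reduces to textbook beam bending; a minimal sketch of the peripheral-stress computation (the dimensions and bending moment below are placeholders, not the study's participant data):

```python
import numpy as np

def hollow_ellipse_stress(M, a_out, b_out, a_in, b_in):
    """Peak bending stress at the outer periphery of a hollow elliptical
    cross-section (sigma = M * c / I), bending about the a-axis.
    a_*: semi-axes along the medial-lateral bending axis [m]
    b_*: semi-axes along the anterior-posterior direction [m]"""
    I = np.pi * (a_out * b_out**3 - a_in * b_in**3) / 4.0  # second moment of area
    return M * b_out / I   # stress at the anterior/posterior periphery

# placeholder tibial dimensions and bending moment
M = 150.0   # sagittal-plane bending moment [N*m]
sigma = hollow_ellipse_stress(M, a_out=0.011, b_out=0.013,
                              a_in=0.005, b_in=0.007)
print(f"peripheral stress ~ {sigma / 1e6:.1f} MPa")
```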
Self-tests in learning management systems (LMS) allow students to assess their own learning progress. In contrast to the submission and correction of fully worked-out solutions, LMS predominantly use single-choice answer selection. Following the didactic approach "Physik durch Informatik" (physics through computer science), learners instead enter their solutions into the LMS in a programming language, which facilitates automated feedback and promotes reaching a higher competence level. Ten LMS self-tests were created in which the solutions to a textbook exercise were queried both by input in a programming language and, for a control group, by single-choice selection. Results from the first use of these self-tests in the physics course of the biotechnology degree program are presented.
Artificial intelligence (AI) permeates our lives more and more. Students are increasingly confronted with AI applications in everyday life and at university. Hochschule Offenburg is therefore anchoring AI-related courses in its curricula in order to support students in acquiring AI competence.
This contribution presents a concept for developing courses based on the idea of pedagogical making to foster AI competence in higher education. The concept is made concrete by a module on chatbots, whose teaching content is developed in an interdisciplinary way from different perspectives.
Go is a programming language with a static type system, released in 2009. Since version 1.18, generics have been part of the language. In the de facto standard compiler, their translation is implemented via monomorphization, which brings some advantages but also disadvantages. For this reason, this thesis examines an alternative translation strategy for generics in Go and implements it in a new compiler for Featherweight Generic Go, a subset of Go. The result is an almost fully working compiler that emits Racket code. An evaluation of the performance of this translation strategy is still pending.
The variable refrigerant flow (VRF) system is one of the best heating, ventilation, and air conditioning (HVAC) systems thanks to its ability to provide thermal comfort inside buildings. At the same time, these systems are among the most energy-consuming systems in the building sector. It is therefore crucial to size the system well according to the building's cooling and heating needs and the indoor temperature fluctuations. Although many researchers have studied the optimization of building energy performance considering heating or cooling needs, using air handling units, radiant floor heating, and direct expansion valves, few studies have considered multi-objective optimization using only the thermostat setpoints of VRF systems for both cooling and heating. The main aim of this study is therefore to conduct a sensitivity analysis and a multi-objective optimization strategy for a residential building containing a variable refrigerant flow system, to evaluate the effect of the building performance on energy consumption and improve the building's energy efficiency. The numerical model was based on the EnergyPlus, jEPlus, and jEPlus+EA simulation engines. The approach used in this paper allowed us to reach significant quantitative energy savings by varying the cooling and heating setpoints and scheduling scenarios. This approach could be applied to several HVAC systems to reduce building energy consumption.
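As an illustration of the selection step in such a multi-objective strategy, a minimal sketch that extracts the Pareto front from candidate setpoint schedules scored on two objectives; in the study such scores come from EnergyPlus simulations driven by jEPlus+EA, here they are random stand-ins:

```python
import numpy as np

rng = np.random.default_rng(2)
# stand-in results: each candidate setpoint schedule scored by
# (annual energy use [kWh], discomfort hours)
candidates = rng.uniform([3000, 50], [6000, 400], size=(60, 2))

def pareto_front(points):
    """Keep the points not dominated in both objectives (lower is better)."""
    keep = []
    for i, p in enumerate(points):
        dominated = np.any(np.all(points <= p, axis=1) &
                           np.any(points < p, axis=1))
        if not dominated:
            keep.append(i)
    return points[keep]

front = pareto_front(candidates)
print(f"{len(front)} non-dominated schedules out of {len(candidates)}")
```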
In recent years, recommender systems have become increasingly important. These systems are mostly designed for e-commerce and often do not take the current context of the user into account. Recommender systems, however, can be used not only in e-commerce but also in healthcare. The goal of this bachelor's thesis is to develop a recommender system that better accounts for the user's current context (chat history, demographic data). To this end, this thesis covers the design and prototypical implementation of a context-aware recommender system for an existing healthcare chatbot. The recommender system designed and developed in this thesis is intended to relieve employees in the healthcare and social sectors and to provide them with helpful and thematically relevant information. Based on defined requirements, a concept for the recommender system was developed and partially implemented as a prototype. Finally, the prototype was evaluated against the requirements. In addition, a technical evaluation and a user evaluation were carried out, comparing the implemented prototype with existing systems. In the user evaluation, the text passages recommended by the prototype achieved a significantly higher thematic match with the chat data.
In recent years, a veritable hype has arisen around cryptocurrencies, and they have become hard to ignore in society, politics, and the economy. Despite their high volatility and the risk for investors, cryptocurrencies are sometimes seen as an alternative to conventional currencies. This raises the question of whether attitudes towards cryptocurrencies, and investment behaviour, are based on a careful weighing of arguments. The research question posed was: "Are buying behaviour and attitudes towards cryptocurrencies based on intensive engagement with actual arguments or on superficial cues?" To investigate this, a survey with 283 participants was conducted. Based on theoretical considerations from attitude research using the Elaboration Likelihood Model, a questionnaire was designed to empirically measure the influence of elaboration on attitude and buying behaviour. A causal analysis using a structural equation model revealed a partially significant influence of elaboration-determining variables on attitude and buying behaviour. However, an examination of the questionnaire's quality criteria using exploratory and confirmatory factor analysis did not yield satisfactory results regarding reliability and validity. The results of the causal model should therefore be treated with caution. Future research could revise the structure of the elaboration and attitude constructs measured by the questionnaire in order to achieve better reliability and validity and thus make more precise statements about the actual relationships between the constructs.
Public export credits and trade insurance require a global framework of institutions, rules and regulations to avoid subsidies and a race to the bottom. The extensive modernisation of the Arrangement on Officially Supported Export Credits (Arrangement) of the Organisation for Economic Co-operation and Development intends to re-level the playing field. This Practitioner Commentary describes the demand for adequate government interventions, considers the need for the reform and discusses key aspects of the new Arrangement. We argue that there is a breakthrough in several important areas such as tenors, repayment terms and green finance. However, we also find that the modernisation falls short in areas such as the interplay between different rulebooks, pre-shipment instruments' regulations and climate action.
Additive manufacturing enables the production of lightweight and resilient components with extensive design freedom. In the low-cost sector, material extrusion (e.g., Fused Deposition Modeling, FDM) has been the main method to date, since robust 3D printers and inexpensive materials (polymer filaments) can be used. However, FDM printing times are very long, and dimensional and surface quality is limited. Recently, new processes from the field of vat polymerization have entered the market. Masked stereolithography (mSLA), for example, offers a significant improvement in component quality and build speed at still reasonable cost through the use of resins and large-area curing. Currently, only limited knowledge is available on the optimal design of components for this young process. In this contribution, design guidelines are developed to determine the possibilities and limitations of mSLA from a design point of view. For this purpose, a number of test geometries are designed and investigated to obtain systematic insights into important design features such as wall thickness, grooves, and holes. In addition, typical problems in additive manufacturing, such as the design of overhangs and fits or the hollowing of components, are investigated. The evaluation of practical 3D printing tests thus provides important parameters that can be transferred into design guidelines for components additively manufactured using mSLA.
eLetter on the article "Condiciones neuropsiquiátricas y probable causa de muerte de Maurice Ravel" by Gómez-Carvajal AM, Botero-Meneses JS, Palacios-Espinosa X, and Palacios-Sánchez L, published in Iatreia 35(3), pages 341-8 (DOI: https://doi.org/10.17533/udea.iatreia.154).
A balcony photovoltaic (PV) system, also known as a micro-PV system, is a small PV system consisting of one or two solar modules with an output of 100–600 Wp and a corresponding inverter that uses standard plugs to feed the renewable energy into the house grid. In the present study we demonstrate the integration of a commercial lithium-ion battery into a commercial micro-PV system. We first show simulations over one year at one-second time resolution, which we use to assess the influence of battery and PV size on self-consumption, self-sufficiency, and the annual cost savings. We then develop and operate experimental setups using two different architectures for integrating the battery into the micro-PV system. In the passive hybrid architecture, the battery is connected electrically in parallel to the PV module. In the active hybrid architecture, an additional DC-DC converter is used. Both architectures include measures to prevent the module inverter from performing maximum power point tracking on the battery. The resulting PV/battery/inverter systems with 300 Wp PV and a 555 Wh battery were tested in continuous operation over three days under real solar irradiance conditions. Both architectures maintained stable operation and demonstrated the shift of PV energy from the day into the night. System efficiencies comparable to a reference system without battery were observed. This study therefore demonstrates the feasibility of both active and passive coupling architectures.
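The two metrics named above reduce to simple energy balances on time-series data. The following sketch computes them on invented one-second profiles and ignores the battery, which would shift additional PV energy into the evening; it illustrates the metrics and is not the simulation code used in the study.

    # Sketch: self-consumption and self-sufficiency from PV and load profiles.
    # pv and load are invented one-second power series in watts over one day.
    import numpy as np

    rng = np.random.default_rng(0)
    pv = np.clip(rng.normal(150, 80, 86400), 0, None)     # placeholder PV power
    load = np.clip(rng.normal(200, 60, 86400), 50, None)  # placeholder house load

    dt_h = 1.0 / 3600.0                         # one-second steps in hours
    e_pv = pv.sum() * dt_h                      # generated PV energy in Wh
    e_load = load.sum() * dt_h                  # consumed energy in Wh
    e_used = np.minimum(pv, load).sum() * dt_h  # PV energy consumed on the spot

    print(f"self-consumption: {e_used / e_pv:.1%}")    # share of PV used locally
    print(f"self-sufficiency: {e_used / e_load:.1%}")  # share of demand covered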
Electrochemical pressure impedance spectroscopy (EPIS) is an emerging tool for the diagnosis of polymer electrolyte membrane fuel cells (PEMFC). It is based on analyzing the frequency response of the cell voltage with respect to an excitation of the gas-phase pressure. Several experimental studies in the past decade have shown the complexity of EPIS signals, and so far there is no agreement on the interpretation of EPIS features. The present study contributes to shedding light on the physicochemical origin of EPIS features by using a combination of pseudo-two-dimensional modeling and analytical interpretation. Using static simulations, the contributions of cathode equilibrium potential, cathode overpotential, and membrane resistance to the quasi-static EPIS response are quantified. Using model reduction, the EPIS responses of individual dynamic processes are predicted and compared to the response of the full model. We show that the EPIS signal of the PEMFC studied here is dominated by the humidifier. The signal is further analyzed by using transfer functions between various internal cell states and the outlet pressure excitation. We show that the EPIS response of the humidifier is caused by an oscillating oxygen molar fraction due to an oscillating mass flow rate.
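Stated generically, the quantity analyzed in EPIS is a transfer function between the complex amplitudes of the voltage response and the pressure excitation. The following definition is a sketch consistent with the description above, not a formula quoted from the paper:

    Z_{\mathrm{EPIS}}(\omega) = \frac{\hat{U}(\omega)}{\hat{p}(\omega)} = \left| Z_{\mathrm{EPIS}}(\omega) \right| \, e^{j \varphi(\omega)}

where \hat{U}(\omega) and \hat{p}(\omega) are the complex amplitudes of the cell-voltage and gas-phase pressure oscillations at angular frequency \omega, and \varphi(\omega) is the phase shift between them.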
Electrochemical pressure impedance spectroscopy (EPIS) has received the attention of researchers as a method to study mass transport processes in polymer electrolyte membrane fuel cells (PEMFC). It is based on analyzing the cell voltage response to a harmonic excitation of the gas phase pressure in the frequency domain. Several experiments with a single-cell fuel cell have shown that the spectra contain information in the frequency range typical for mass transport processes and are sensitive to specific operating conditions and structural fuel cell parameters. To further benefit from the observed features, it is essential to identify why they occur, which to date has not yet been accomplished. The aim of the present work, therefore, is to identify causal links between internal processes and the corresponding EPIS features.

To this end, the study follows a model-based approach, which allows the analysis of internal states that are not experimentally accessible. The PEMFC model is a pseudo-2D model, which connects the mass transport along the gas channel with the mass transport through the membrane electrode assembly. A modeling novelty is the consideration of the gas volume inside the humidifier upstream of the fuel cell inlet, which proves to be crucial for the reproduction of EPIS. The PEMFC model is parametrized to a 100 cm² single cell of the French project partner, who provided the experimental EPIS results reproduced and interpreted in the present study.

The simulated EPIS results show a good agreement with the experiments at current densities ≤ 0.4 A cm⁻², where they allow a further analysis of the observed features. At the lowest excitation frequency of 1 mHz, the dynamic cell voltage response approaches the static pressure-voltage response. In the simulated frequency range between 1 mHz and 100 Hz, the cell voltage oscillation is found to strongly correlate with the partial pressure oscillation of oxygen, whereas the influence of the water pressure is limited to the low-frequency region.

The two prominent EPIS features, namely the strong increase of the cell voltage oscillation and the increase of phase shift with frequency, can be traced back via the oxygen pressure to the oscillation of the inlet flow rate. The phenomenon of the oscillating inlet flow rate is a consequence of the pressure change of the gas phase inside the humidifier and increases with frequency. This important finding enables the interpretation of experimentally observed EPIS trends for a variation of operational and structural fuel cell parameters by tracing them back to the influence of the oscillating inlet flow rate.

The separate simulation of the time-dependent processes of the PEMFC model through model reduction shows their individual influence on EPIS. The sluggish process of the water uptake by the membrane is visible below 0.1 Hz, while the charge and discharge of the double layer becomes visible above 1 Hz. The gas transport through the gas diffusion layer is only visible above 100 Hz. Without consideration of the humidifier, the gas transport through the gas channel becomes visible above 1 Hz; with consideration of the humidifier, it is visible throughout the frequency range. The strong similarity of the spectra considering the humidifier with the spectra of the full model setup shows the dominant influence of the humidifier on EPIS.

A promising observation is the change in the amplitude relationship between the cell voltage and the oxygen partial pressure oscillation as a function of the oxygen concentration in the catalyst layer. At a frequency where the influence of oxygen pressure on the cell voltage is dominant, for example at 1 Hz, the amplitude of the cell voltage oscillation could be used to indirectly measure the oxygen concentration in the catalyst layer.
The goal of the PRYSTINE project was to realize fail-operational 360° surround perception for highly automated driving in urban and rural environments, based on robust radar and lidar sensor fusion and control functions.

The subproject "Design of the system architecture of radar sensors based on identified scenarios" focused on the development of a future-proof RF-CMOS-based radar system characterized by high robustness and fault tolerance combined with reduced cost, chip area, and power consumption.

Within this subproject, Offenburg University was involved both in the specification and design of a system architecture for a novel RF-CMOS-based radar chip and in the subsequent investigation and validation of the high-resolution radar sensor realized in the project.
Total Cost of Ownership (TCO) is a key tool for gaining a complete understanding of the costs associated with an investment, as it covers not only the initial acquisition costs but also the long-term costs related to operation, maintenance, depreciation, and other factors. In the context of the cement industry, TCO is especially important due to the complexity of the production processes and the wide variety of components and machinery involved.

For this reason, this study conducts a TCO analysis for the cement industry with the objective of showing the different components of the cost of production. This analysis allows readers to gain knowledge about these costs and supports informed decisions on the adoption of technologies and practices that reduce costs in the long run and improve operational efficiency.

In particular, this study seeks to give visibility to technologies and practices that enable the reduction of carbon emissions in cement production, thus contributing to the sustainability of the industry and the protection of the environment. By being at the forefront of sustainability issues, the cement industry can advance environmentally friendly technologies and enable the development of people and industry.

Oxyfuel technology has been selected as the carbon capture solution for the cement industry due to its practical applicability, low costs, and straightforward adaptation of non-capture processes. The adoption of this technology allows for a significant reduction in CO2 emissions, which is a crucial factor in achieving sustainability in the cement manufacturing process.

Carbon capture and storage technologies represent a high investment and increase the cost of production; according to the comparison conducted, however, Oxyfuel technology is among the most economically viable options, with the lowest cost per amount of CO2 captured. The higher production cost is offset by a technical advantage: the carbon capture efficiency of this technology reaches 90%. This level of efficiency reduces taxes on CO2 emissions, helping make the cement manufacturing process sustainable.
Gamification is used in many areas, including the education sector, to increase motivation and performance. This contribution describes the design, implementation, and evaluation of a gamification concept for the lecture "Software Engineering" at Offenburg University. According to the lecturers' intention, gamification should promote continuous and deeper engagement with the topics of the lecture and have a positive influence on students' motivation in order to support the learning process. Central to the gamification design are voluntary participation, the perceived relevance of the learning content, and a goal-oriented use of gamification elements. The developed concept was realized in the learning platform Moodle, used over three semesters, and evaluated in parallel. The results of these evaluations show that students used the gamified course intensively, often over the entire semester, and completed a large number of exercises on their own initiative.
The topic of this master's thesis is "Camera Stream Solution – Market Overview, Solution Approaches, Prototype". The thesis realizes a video streaming solution for the Herrenknecht platform CONNECTED. It deals with the screen capture of navigation and control screens on tunnel boring machines and the transmission of these recordings to the cloud. Ultimately, the recordings can be played back in near real time as a video stream in a video player.

First, the fundamentals of data transmission on the Internet and of streaming are explained. Subsequently, a market overview of various streaming components is given, and several solution approaches are presented and compared on the basis of selected criteria. The next step covers the implementation of a prototype. Among other things, it uses ffmpeg for screen capture and encoding as well as the streaming protocols RTMP (Real Time Messaging Protocol) and HLS (HTTP Live Streaming). The architecture also includes the development of a REST API and a REST client in C#.

The project delivers a "true" streaming solution for the customer platform CONNECTED that provides a video stream at 24 frames per second, replacing the previous display of screenshots on the platform.
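The capture-and-publish step of such a pipeline can be sketched by driving ffmpeg from a small script. The input device (gdigrab/desktop is Windows-specific), the ingest URL, and the use of Python instead of the thesis' C# components are assumptions for illustration:

    # Minimal sketch: capture the screen with ffmpeg and push it to an RTMP
    # endpoint, from which a server could repackage it as HLS.
    import subprocess

    cmd = [
        "ffmpeg",
        "-f", "gdigrab",          # screen-capture input device on Windows
        "-framerate", "24",       # 24 frames per second, as in the thesis
        "-i", "desktop",
        "-c:v", "libx264",        # H.264 video encoding
        "-preset", "veryfast",    # favor encoding speed for live streaming
        "-f", "flv",              # RTMP transports an FLV container
        "rtmp://example.com/live/machine-01",  # placeholder ingest URL
    ]
    subprocess.run(cmd, check=True)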
The Humboldt Portal has been designed and implemented as part of an ongoing research project to develop an information system on the Internet to share the documents and rare books of Alexander von Humboldt, a 19th century German scientist and explorer, who viewed the natural world holistically and described the harmony of nature among the diversity of the physical world. Even after more than two centuries he is admired for his ability to see the natural world and human nature in the context of a complex network of relationships. The design and implementation of the Humboldt Portal are also oriented to support further research on Humboldt’s intellectual perspective.
Although all of Humboldt's works can be found on the internet as digitized documents, the complexity and internal inter-connectivity of his vision of nature cannot be adequately represented only by digitized papers or scanned documents in digital libraries.
As a consequence, a specific portal for Humboldt's documents was developed, which extends the standards of digital libraries and offers a technical approach for the adequate presentation of highly interconnected data.
Due to continuous scientific and literary research, new insights and requirements for the digital presentation of Humboldt documents are constantly emerging, so this article can only provide a summary of the concepts realized so far. Consequently, the design and implementation of the Humboldt Portal are both the outcome of a continuing research project and oriented to support further research on Humboldt's holistic intellectual perspective, which anticipated the systems approach of the last century.
Alexander von Humboldt, a German scientist and explorer of the 19th century, viewed the natural world holistically and described the harmony of nature among the diversity of the physical world as a conjoining between all physical disciplines. He noted in his diary: “Everything is interconnectedness.”
The main feature of Humboldt’s pioneering work was later named “Humboldtian science”, meaning the accurate study of interconnected real phenomena in order to find a definite law and a dynamic cause.
Following Humboldt's idea of nature, an Internet edition of his works must preserve the author’s original intention, retain an awareness of all relevant works, and still adhere to the requirements of scholarly edition.
To date, however, the highly unconventional form of his publications has hindered awareness and comprehensive study of Humboldt's works.
Digital libraries should supply dynamic links to sources, maps, images, graphs and relevant texts. New forms of interaction and synthesis between humanistic texts and scientific observation need to be created.
Information technology is the only way to do justice to the broad range of visions, descriptions and the idea of nature of Humboldt’s legacy. It finally leads to virtual research environments as an adequate concept to redesign our digital archives, not only for Humboldt’s documents, but for all interconnected data.
Automatic Identification of Travel Locations in Rare Books - Object Oriented Information Management
(2017)
The digital content of the Internet is growing exponentially and mass digitization of printed media opens access to literature, in particular the genre of travel literature from the 18th and 19th century, which consists of diaries or travel books describing routes, observations or inspirations. The identification of described locations in the digital text is a long-standing challenge which requires information technology to supply dynamic links to sources by new forms of interaction and synthesis between humanistic texts and scientific observations.
Using object-oriented information technology, a prototype of a software tool was developed that makes it possible to automatically identify geographic locations and travel routes mentioned in rare books. The information objects contain properties such as names and classification codes for populated places, streams, mountains, and regions. Together with the latitude and longitude of every single location, it is possible to geo-reference this information so that all processed and filtered datasets can be displayed by a map application. This method has already been used in the Humboldt Digital Library to present Alexander von Humboldt's maps and was tested in a case study to prove the correctness and reliability of the automatic identification of locations, based on the work of Alexander von Humboldt and Johann Wolfgang von Goethe.
The results reveal numerous errors due to misspellings, changed location names, and terms identical to location names. On the other hand, it becomes clear that the results of automatic object detection and recognition can be improved by error-free and comprehensive sources. As a result, an increase in the quality and usability of the service can be expected, along with more options to detect unknown locations in the descriptions of rare books.
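For illustration, the core matching step of such a tool, looking up tokens from a digitized text in a gazetteer of named places with coordinates, can be sketched as follows; the entries, the crude normalization, and the exact-match rule are deliberately simplistic assumptions rather than the actual object-oriented design:

    # Toy sketch: identify known locations in a travel text via a gazetteer.
    # The entries (name, feature class, latitude, longitude) are invented.
    gazetteer = {
        "quito": ("populated place", -0.22, -78.51),
        "orinoco": ("stream", 8.62, -62.25),
        "chimborazo": ("mountain", -1.47, -78.82),
    }

    text = "From Quito we travelled towards the Chimborazo."

    found = []
    for token in text.lower().replace(",", " ").replace(".", " ").split():
        if token in gazetteer:                 # naive exact-match lookup
            feature, lat, lon = gazetteer[token]
            found.append((token, feature, lat, lon))

    # Each hit is geo-referenced and could be displayed by a map application.
    print(found)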
In the 19th century, Alexander von Humboldt explored nature and conceived a new vision of it that still influences the way we understand the world. Humboldt believed in the importance of accurate measurements and precise description of observations. His vision of nature included not only facts but also emotions.

Today, smart solutions are being developed using computer technology, which will influence our relationship to nature, our handling of the complexity and diversity of nature itself, and the influence of technology on society. Can we avoid a new form of "colonialism" when a network of supercomputers creates a smarter world?
Learning to Walk With Toes
(2020)
This paper explains how a model-free (with respect to the robot model and the behavior to learn) approach can facilitate learning to walk from scratch. It is applied to a simulated Nao robot with toes. Results show an improvement of 30% in speed compared to a model without toes and also compared to our model-based approach, but with less stability.
In modern industrial automation systems, IT security can no longer be ignored. Cryptographic protection measures are required to protect network traffic. A common measure is the use of digital certificates for authorization and authentication. However, deploying certificates to end devices in a secure and controlled manner requires a public key infrastructure (PKI). Such PKIs have so far received little attention in the context of industrial automation. The Institute of Reliable Embedded Systems and Communication Electronics at Offenburg University offers a possible solution, based on a central unit called the Credentialing Entity. A demonstrator of this concept has already been implemented in the widely used systems programming languages C and C++.

This thesis investigates the use of the modern, memory-safe programming language Rust for systems programming as an alternative to the domain leaders C/C++, using the implementation of the Credentialing Entity as an example. Aspects such as the advantages of Rust, its ecosystem, and its interoperability with C/C++ are examined.
This bachelor's thesis deals with testing a biosignal amplifier developed at TU München for recording auditory evoked potentials (AEPs). The goal of the project is to characterize this amplifier, in particular to verify whether it can record AEPs and amplify them by adjustable factors. For this purpose, MATLAB software was implemented that outputs acoustic stimuli via a sound card and headphones while simultaneously reading in the potentials registered by the amplifier, averaging them, and displaying them graphically.

First experiments were carried out with the Loop Back Box from Interacoustics, a resonant circuit that simulates an artificial patient. These test series showed that real signals are measured. Subsequently, measurements on test subjects were performed with the amplifier, together with reference measurements using the Eclipse from Interacoustics. In all measurement series, the curves of the two systems were highly similar. In particular, the latency of wave Jewett V, the largest measured amplitude, was almost identical. However, the amplitude values do not match: while the Jewett V amplitude reached about 1 µV in measurements with the Eclipse, the amplitude measured with the amplifier was only one to two nanovolts, i.e., the gain is about a factor of one thousand lower than that of the Eclipse.

Based on these findings, hardware optimizations were evaluated and discussed.
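The averaging step described above is the standard way to extract evoked potentials: many stimulus-locked epochs are averaged so that uncorrelated noise cancels roughly with 1/sqrt(N). The sketch below reproduces the principle on synthetic data in Python; the thesis implements it in MATLAB, and all numbers are invented.

    # Sketch: stimulus-locked averaging of synthetic AEP epochs.
    import numpy as np

    fs = 16000                                   # sampling rate in Hz
    t = np.arange(0, 0.010, 1 / fs)              # 10-ms analysis window
    aep = 1e-6 * np.exp(-((t - 0.0057) / 0.0006) ** 2)  # toy wave V near 5.7 ms

    rng = np.random.default_rng(1)
    epochs = aep + 20e-6 * rng.standard_normal((2000, t.size))  # noisy sweeps

    average = epochs.mean(axis=0)                # noise shrinks ~ 1/sqrt(N)
    print(f"peak at {t[average.argmax()] * 1e3:.1f} ms, "
          f"amplitude {average.max() * 1e6:.2f} uV")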
Planning, construction, and commissioning of a pilot-scale flash evaporation plant
(2023)
The goal of this thesis is to plan, build, and commission a pilot-scale plant for continuous flash evaporation. The starting point is the flash evaporation plant "EVERDA" of Offenburg University, which had been operated in batch mode.

To achieve this goal, the following tasks were carried out:

• Compilation of the thermodynamic calculation equations.

• Description of flash evaporation in batch operation.

• Assessment of the EVERDA batch-mode plant components with regard to their reusability for continuous operation.

• Definition of the boundary conditions for continuous operation.

• Development and description of a detailed plant concept for continuous operation.

• Process simulations based on the new concept.

• Sizing of the components based on the boundary conditions and the simulation results.

• Design and assembly of the electrical wiring for the plant control system.

• Construction and commissioning of the plant.
DE\GLOBALIZE
(2022)
The artistic research cycle DE\GLOBALIZE is a media ecological search movement for the terrestrial. After examining matters of fact in India (2014-18), matters of concern in Egypt (2016-2019) and matters of care in the Upper Rhine (2018-22), the focus turns toward matters of violence in the Congo (2022). From matter to mater, mother-earth, the garden to exploitation. From science, water and climate to migration, oppression and extermination.
The long-term research is accessible through interactive web documentation. The platform serves as a continuous media-archaeological archive for a speculative ethnography. The relational structure of the videographic essay enables the forensic processing of single documents in the sense of actor-network theory.
The subject of the presentation at IFM is a field trip to the Congo planned for March 2022, which will focus on the ambivalence of violence and care in collaboration with local artists. The field trip is based on the postcolonial reflection luderitzcargo by the author from 1996, in which a freight container was transformed into a translocal cinema in Namibia.
Through the journey to Congo, a group of media artists, a psychotherapist, a theater dramaturg, a filmmaker and a philosopher intend to explore the political, technological and psycho-geographic borders. By artistic interventions with locals, we want to interfere with relational string figures as part of the new Earth Politics. They are focusing on the displaced consumption of resources which are hard-fought and guarantee prosperity in the global north. The so-called ghost acreages are repressed and justified as part of a civilizational mission. With this trip, we want to confront our self-lies with the ones of our hosts. We want to confront ourselves with the foreign, the dark and the displaced ghosts within ourselves. In the presentation at the #IFM2022 Conference, the platform DE\GLOBALIZE will be problematized itself as an example of epistemic violence for the ethnographic memory of (Western) knowledge.
We are not the missionaries but the perplexed travellers. In our search movement, we are dealing with psychoanalysis, video, performance and trance. As disoriented white men we try the reversal of Black Skin, White Masks by Frantz Fanon without blackfacing. We will not only care about the sensitivity of our skin but that of our g/hosts and the one of mother earth.
Running shoes were categorized either as motion control, cushioned, or minimal footwear in the past. Today, these categories blur and are not as clearly defined. Moreover, with the advances in manufacturing processes, it is possible to create individualized running shoes that incorporate features that meet individual biomechanical and experiential needs. However, specific ways to individualize footwear to reduce individual injury risk are poorly understood. Therefore, the purpose of this scoping review was to provide an overview of (1) footwear design features that have the potential for individualization; (2) human biomechanical variability as a theoretical foundation for individualization; (3) the literature on the differential responses to footwear design features between selected groups of individuals. These purposes focus exclusively on reducing running-related risk factors for overuse injuries. We included studies in the English language on adults that analyzed: (1) potential interaction effects between footwear design features and subgroups of runners or covariates (e.g., age, gender) for running-related biomechanical risk factors or injury incidences; (2) footwear perception for a systematically modified footwear design feature. Most of the included articles (n = 107) analyzed male runners. Several footwear design features (e.g., midsole characteristics, upper, outsole profile) show potential for individualization. However, the overall body of literature addressing individualized footwear solutions and the potential to reduce biomechanical risk factors is limited. Future studies should leverage more extensive data collections considering relevant covariates and subgroups while systematically modifying isolated footwear design features to inform footwear individualization.
Featherweight Generic Go (FGG) is a minimal core calculus modeling the essential features of the programming language Go. It includes support for overloaded methods, interface types, structural subtyping and generics. The most straightforward semantic description of the dynamic behavior of FGG programs is to resolve method calls based on runtime type information of the receiver.
This article shows a different approach by defining a type-directed translation from FGG to an untyped lambda-calculus. The translation of an FGG program provides evidence for the availability of methods as additional dictionary parameters, similar to the dictionary-passing approach known from Haskell type classes. Then, method calls can be resolved by a simple lookup of the method definition in the dictionary.
Every program in the image of the translation has the same dynamic semantics as its source FGG program. The proof of this result is based on a syntactic, step-indexed logical relation. The step-index ensures a well-founded definition of the relation in the presence of recursive interface types and recursive methods.
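The dictionary-passing idea can be mimicked in any untyped setting: a method call on an interface value becomes a lookup in an explicitly passed dictionary of functions. The sketch below illustrates the general technique in Python; it is not the article's formal translation into the lambda-calculus.

    # Sketch of dictionary-passing: evidence that a type "implements Shape"
    # is a dict from method names to functions, passed as an extra argument.

    def area_circle(c):
        return 3.14159 * c["radius"] ** 2

    def area_square(s):
        return s["side"] ** 2

    circle_dict = {"Area": area_circle}   # dictionary for the circle struct
    square_dict = {"Area": area_square}   # dictionary for the square struct

    # The translated function resolves the call by dictionary lookup instead
    # of by runtime type information of the receiver.
    def total_area(shape_dict, shapes):
        return sum(shape_dict["Area"](s) for s in shapes)

    print(total_area(circle_dict, [{"radius": 1.0}, {"radius": 2.0}]))
    print(total_area(square_dict, [{"side": 3.0}]))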
The importance of machine learning has been increasing dramatically for years. From assistance systems to production optimisation to support for the health sector, almost every area of daily life and industry comes into contact with machine learning. Besides all the benefits that ML brings, the lack of transparency and the difficulty of establishing traceability pose major risks. While there are solutions that make the training of machine learning models more transparent, traceability is still a major challenge, as is ensuring the identity of a model. Unnoticed modification of a model is a further danger when using ML. One solution is to create an ML birth certificate and an ML family tree secured by blockchain technology. Important information about training and about changes to the model through retraining can be stored in a blockchain and accessed by any user, creating more security and traceability for an ML model.
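One way to realize such a birth certificate and family tree is a hash chain over model fingerprints, where each retraining appends a block that commits to its predecessor. The sketch below shows this principle with SHA-256; it is an assumed minimal design, not the concrete scheme of the paper.

    # Sketch: a minimal hash chain recording a model's training history.
    import hashlib, json, time

    def block(prev_hash, payload):
        body = {"prev": prev_hash, "time": time.time(), "payload": payload}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        return {**body, "hash": digest}

    # "Fingerprint" of the model weights (here a placeholder byte string).
    weights_fp = hashlib.sha256(b"model-weights-v1").hexdigest()

    chain = [block("0" * 64, {"event": "birth", "weights": weights_fp})]
    chain.append(block(chain[-1]["hash"], {"event": "retraining", "dataset": "batch-2"}))

    # Any unnoticed modification of a recorded entry breaks the chain of hashes.
    print(chain[-1]["hash"])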
The identification of vulnerabilities is an important element of the software development life cycle to ensure the security of software. While vulnerability identification based on source code is a well-studied field, identifying vulnerabilities on the basis of a binary executable without the corresponding source code is more challenging. Recent research has shown how such detection can be achieved by deep learning methods. However, that particular approach is limited to the identification of only 4 types of vulnerabilities. We therefore analyze to what extent the identification of a larger variety of vulnerabilities can be covered. To this end, a supervised deep learning approach using recurrent neural networks is applied to vulnerability detection based on binary executables. The underlying basis is a dataset with 50,651 samples of vulnerable code in the form of a standardized LLVM Intermediate Representation. The vectorized features of a Word2Vec model are used to train different variations of three basic architectures of recurrent neural networks (GRU, LSTM, SRNN). A binary classification model was established for detecting the presence of an arbitrary vulnerability, and a multi-class model was trained for the identification of the exact vulnerability; these achieved out-of-sample accuracies of 88% and 77%, respectively. Differences in the detection of different vulnerabilities were also observed, with non-vulnerable samples being detected with a particularly high precision of over 98%. Thus, the methodology presented allows an accurate detection of 23 (compared to 4) vulnerabilities.
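A minimal model along the described lines could embed token sequences of the intermediate representation and classify them with a GRU; the layer sizes, the trainable embedding standing in for Word2Vec vectors, and the 24-class head (23 vulnerabilities plus a non-vulnerable class) are illustrative assumptions.

    # Sketch: a GRU classifier over embedded token sequences, in the spirit
    # of the described approach; sizes and the class count are illustrative.
    import torch
    import torch.nn as nn

    class VulnClassifier(nn.Module):
        def __init__(self, vocab_size=10000, embed_dim=100, hidden=128, classes=24):
            super().__init__()
            # The paper uses Word2Vec vectors; a trainable embedding stands
            # in for them here.
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.gru = nn.GRU(embed_dim, hidden, batch_first=True)
            self.head = nn.Linear(hidden, classes)  # 23 vulnerabilities + benign

        def forward(self, tokens):
            x = self.embed(tokens)
            _, h = self.gru(x)        # final hidden state summarizes the sample
            return self.head(h[-1])

    model = VulnClassifier()
    logits = model(torch.randint(0, 10000, (4, 200)))  # batch of 4 sequences
    print(logits.shape)                                 # torch.Size([4, 24])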
During the coronavirus crisis, labs in mechanical engineering had to be offered in digital form at short notice. For this purpose, digital twins of more complex test benches in the field of fluid energy machines were used in the mechanical engineering course, with which the students were able to interact remotely to obtain measurement data. The concept of the respective lab was revised with regard to its implementation as a remote laboratory. Fortunately, real-world labs could be fully replaced by remote labs, and student perceptions of the remote labs were mostly positive. This paper explains the concept and design of the digital twins and the lab as well as the layout, procedure, and finally the results of the accompanying evaluation. However, the implementation of the digital twins to date does not yet include features that address the tactile experience of working in real-world labs.
The impact of the circular economy on sustainable development: A European panel data approach
(2022)
The circular economy (CE) has attracted considerable attention because of its potential to help achieve sustainable development (SD). This paper presents a comprehensive analysis of the effect of the CE on the three dimensions of SD at the country level. We analysed the impact of each CE source of value (renewable energy, reuse, repair, recycling) and the influence of an overall factor-analysis-derived measure of the CE on the economic, environmental and social dimensions of SD. The aim was to compare the individual impacts and outcomes of the CE and its sources of value in a single study. Panel data analysis was performed using a sample of 25 European countries for the period 2010 to 2019. The findings show a major impact of the CE on achieving SD, which has positive effects on the economy, environment and society. However, the results show that the impact of each CE value source on the three SD dimensions varies. While renewable energies and reuse reduce the impact on the environment, recycling has no effect, and repair increases GHG emissions. However, repair is the only CE source with a positive economic impact at the country level. Finally, renewable energy, repair and recycling reduce unemployment. Decision makers should conduct impact analysis to design suitable, efficient and targeted measures depending on each country's specific objectives.
In this paper, we study the runtime performance of symmetric cryptographic algorithms on an embedded ARM Cortex-M4 platform. Symmetric cryptographic algorithms can serve to protect the integrity and optionally, if supported by the algorithm, the confidentiality of data. A broad range of well-established algorithms exists, where the different algorithms typically have different properties and come with different computational complexity. On deeply embedded systems, the overhead imposed by cryptographic operations may be significant. We execute the algorithms AES-GCM, ChaCha20-Poly1305, HMAC-SHA256, KMAC, and SipHash on an STM32 embedded microcontroller and benchmark the execution times of the algorithms as a function of the input lengths.
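The shape of such a benchmark can be reproduced on a PC with the Python 'cryptography' package, timing authenticated encryption as a function of input length; the paper measures on an STM32 Cortex-M4, so the setup below only mirrors the methodology, not the platform, and its absolute numbers are not comparable.

    # Sketch: timing AES-GCM and ChaCha20-Poly1305 over several input sizes.
    import os, time
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM, ChaCha20Poly1305

    algos = {
        "AES-GCM": AESGCM(os.urandom(16)),
        "ChaCha20-Poly1305": ChaCha20Poly1305(os.urandom(32)),
    }

    for name, aead in algos.items():
        for size in (16, 256, 4096):
            data, nonce = os.urandom(size), os.urandom(12)
            t0 = time.perf_counter()
            for _ in range(1000):
                aead.encrypt(nonce, data, None)   # authenticated encryption
            dt = (time.perf_counter() - t0) / 1000
            print(f"{name:18s} {size:5d} B  {dt * 1e6:8.1f} us/op")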
In recent years, the topic of embedded machine learning has become very popular in AI research. With the help of various compression techniques such as pruning and quantization, it became possible to run neural networks on embedded devices. These techniques have opened up a whole new application area for machine learning, ranging from smart products such as voice assistants to smart sensors needed in robotics. Despite the achievements in embedded machine learning, efficient algorithms for training neural networks in constrained domains are still lacking. Training on embedded devices would open up further fields of application: efficient training algorithms would enable federated learning on embedded devices, in which the data remains where it was collected, or retraining of neural networks in different domains. In this paper, we summarize techniques that make training on embedded devices possible. We first describe the need and requirements for such algorithms. Then we examine existing techniques that address training in resource-constrained environments as well as techniques that are also suitable for training on embedded devices, such as incremental learning. At the end, we also discuss which problems and open questions still need to be solved in these areas.
The integration of additive manufacturing processes into the teaching of students is an important prerequisite for the further dissemination of this new technology. In this context, design for additive manufacturing (DfAM) is of particular importance. For this reason, this paper presents an approach in which a connection is made between methodical product development and practical implementation by AM. Using a model racing car as an example, students independently develop significant improvements to particular assemblies. A final evaluation shows that the students significantly improved their skills and competencies.
In the development of new vehicles, increasing customer comfort requirements and rising safety regulations often result in an increase in weight. Nevertheless, in order to meet the demand for reduced fuel consumption, it is necessary within the product development process to implement complex and filigree lightweight structures. This contribution therefore addresses the potential of generatively designed components for fiber-reinforced additive manufacturing (FRAM). Several commercial systems for this application are currently available on the market; therefore, a comparison of the systems is first made to determine a suitable one. Then, a highly stressed and safety-relevant chassis component of a race car is generatively designed and manufactured using FRAM. A matrix with short fiber reinforcement and additional long fiber reinforcement with carbon fibers is applied. Finally, tensile tests are carried out to check the mechanical properties. In addition, relevant properties such as weight and cost are determined in order to compare them with conventionally developed and manufactured components.
Harnessing the overall benefits of the latest advancements in artificial intelligence (AI) requires the extensive collaboration of academia and industry. These collaborations promote innovation and growth while enforcing the practical usefulness of newer technologies in real life. The purpose of this article is to outline the challenges faced during cross-collaboration between academia and industry. These challenges are also inspected with the help of an ongoing project titled "Quality Assurance of Machine Learning Applications" (Q-AMeLiA), in which three universities cooperate with five industry partners to make the product risk of AI-based products visible. Further, we discuss the hurdles and the key challenges in machine learning (ML) technology transfer from academia to industry with respect to robustness, simplicity, and safety. These challenges are an outcome of the lack of common standards and metrics and of missing regulatory considerations when state-of-the-art (SOTA) technology is developed in academia. The use of biased datasets involves ethical concerns that might lead to unfair outcomes when the ML model is deployed in production. The advancement of AI in small and medium-sized enterprises (SMEs) requires common standardization of concepts rather than algorithm breakthroughs. In this paper, in addition to the general challenges, we also discuss domain-specific barriers for five different domains, i.e., object detection, hardware benchmarking, continual learning, action recognition, and industrial process automation, and highlight the steps necessary for successfully managing cross-sectoral collaborations between academia and industry.
This paper has the objective of creating a framework for a different cultural dimension of corporate entrepreneurship leading to corporate entrepreneurial culture (CEC). The analysis of CEC is based on a review of existing concepts of organisational culture and entrepreneurship. They are combined to create a framework of CEC, including macro- and microlevels and examples of subcultures. Core ideas of the framework are validated by qualitative interviews with ten experts. The identified organisational category of the CEC framework is defined by the levels of micro-cultures or subcultures and includes the upper levels of the hierarchy, including the industry level. Geographic categories such as regional or national culture are also part of the system. The individual category of the CEC framework is characterised by competencies (including aspects such as motivation, creativity, mobilising others, coping with uncertainty, teamwork and social competencies) and entrepreneurial personalities. The results of the interviews show the importance of these individual competencies for a lively CEC. The different levels, such as national and professional cultures, as a dimension of the organisational category of the framework are also confirmed by the interviews. The findings indicate that the individual category of CEC could be used for job satisfaction or engagement and the degree of CEC of an organisation could be defined and developed by the organisational category. The identified framework contributes to an understanding of this complex topic and supports companies in the implementation of entrepreneurial ideas in different organisational contexts.
Design and implementation of an information platform for presenting smart home applications
(2022)
The subject of this bachelor's thesis is the design and implementation of a digital information platform for presenting smart home applications and how they work.

The basis for this is the "Smarte Caravan (SmaC)" built and operated by the Hahn-Schickard-Gesellschaft für angewandte Forschung, in which several sensors and actuators from the smart home domain have been installed for demonstration purposes.

To create the digital information platform, the various options for the interactive presentation of information content are therefore first analyzed and compared.

Subsequently, a collection of applications and scenarios to be integrated into the interactive presentation is compiled.

The result is an interactive smart home environment through which various smart home functionalities can be experienced and information about them can be obtained.
The contribution of the RoofKIT student team to the SDE 21/22 competition is the extension of an existing café in Wuppertal, Germany, to create new functions and living space for the building with simultaneous energetic upgrading. A demonstration unit is built representing a small cut-out of this extension. The developed energy concept was thoroughly simulated by the student team in seminars using Modelica. The system uses mainly solar energy via PVT collectors as the heat source for a brine-water heat pump (space heating and hot water). Energy storage (thermal and electrical) is installed to decouple generation and consumption. Simulation results confirm that carbon neutrality is achieved for the building operation, consuming and generating around 60 kWh/m²a.
The robust scheduling problem is a major decision problem addressed in the literature, especially for remanufacturing systems; it is complex because of the high uncertainty and complex constraints involved. Generally, the existing approaches are dedicated to specific processes and do not enable the quick and efficient generation and evaluation of schedules. With the emergence of the Industry 4.0 paradigm, data availability is now considered an opportunity to facilitate the decision-making process. In this study, a data-driven decision-making process is proposed to treat the robust scheduling problem of remanufacturing systems in uncertain environments. In particular, this process generates simulation models based on a data-driven modeling approach. A robustness evaluation approach is proposed to answer several decision questions. An application of the decision process to an industrial case of a remanufacturing system is presented, illustrating the impact of robustness evaluation results on real-life decisions.
Impact of a halt to Russian energy imports on climate targets in Germany
(2022)
A halt to imports of Russian energy sources into Germany is currently the subject of intense discussion. We want to support this discussion by showing how the German electricity system can manage with low energy imports in the short term and which measures are necessary to still meet the climate targets. The results of such an energy transition scenario with reduced import dependency are computed with the energy system model MyPyPSA-Ger. The key findings are that a rapid expansion of renewable energies and storage technologies

• significantly reduces the dependency of the German electricity system on energy imports,

• does not entail substantial imports of natural gas, hard coal, or mineral oil even in the long term,

• achieves the 1.5-degree target in the electricity system, going beyond the climate targets of the German federal government.
In recent years, social robots have become a trending topic. Indeed, robots that communicate with us and mimic human behavior patterns are fascinating. However, while there is a massive body of research on their design and acceptance in different fields of application, their market potential has rarely been investigated. As their future integration in society may have a vast disruptive potential, this work aims at shedding light on the market potential, focusing on the assistive health domain. A study with 197 persons from Italy (age: M = 67.87; SD = 8.87) and Germany (age: M = 62.15; SD = 6.14) investigates cultural acceptance, desired functionalities, and purchase preferences. The participants filled in a questionnaire after watching a video illustrating some examples of social robots. Surprisingly, the individual perception of health status, social status, and nationality hardly influenced the attitude towards social robots, although the German group was somewhat more reluctant to the idea of using them. Instead, there were significant correlations with most dimensions of the Almere model (such as perceived enjoyment, sociability, usefulness, and trustworthiness). Also, technology acceptance was strongly correlated with the individual readiness to invest money. However, as most persons consider social robots as "Assistive Technological Devices" (ATDs), they expect their provision to mirror the usual practices followed in the two countries for such devices. Thus, to facilitate social robots' future visibility and adoption by both individuals and health care organisations, policy makers would need to start integrating them into official ATD databases.
When people with hearing loss are provided with different devices in each ear, these devices usually have different processing latencies. This leads to static temporal offsets between both ears in the order of several milliseconds. This thesis measured effects of such offsets in stimulation timing on mechanisms of binaural hearing, such as sound localization and speech understanding in noise in hearing-impaired and normal-hearing listeners.
Nowadays, decarbonisation of the energy system is one of the main concerns for most governments. Renewable energy technologies, such as rooftop photovoltaic systems and home battery storage systems, are making the energy system more decentralised. As a consequence, new energy business models are emerging, e.g., peer-to-peer energy trading. This new concept provides an online marketplace where direct energy exchange can occur between its participants. The purpose of this study is to conduct a content analysis of the existing literature, ongoing research projects, and companies related to peer-to-peer energy trading. From this review, the most important aspects and journal papers are assessed, discussed, and classified. It was found that the different energy market types were named in various ways, and a standard terminology for the several peer-to-peer market types and the different actors involved is proposed. Additionally, by grouping the most important attributes of peer-to-peer energy trading projects, the entry barriers and scalability potential are assessed using a characterisation matrix.
Lithium-ion batteries show strongly nonlinear behaviour regarding the battery current and state of charge. Therefore, the modelling of lithium-ion batteries is complex. Combining physical and data-driven models in a grey-box model can simplify the modelling. Our focus is on using neural networks, especially neural ordinary differential equations, for grey-box modelling of lithium-ion batteries. A simple equivalent circuit model serves as a basis for the grey-box model. Unknown parameters and dependencies are then replaced by learnable parameters and neural networks. We use experimental full-cycle data and data from pulse tests of a lithium iron phosphate cell to train the model. Finally, we test the model against two dynamic load profiles: one consisting of half cycles and one dynamic load profile representing a home-storage system. The dynamic response of the battery is well captured by the model.
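A minimal grey-box cell of the kind described could keep the equivalent-circuit structure and let a small network learn the open-circuit-voltage curve; the architecture, layer sizes, and parameter values below are illustrative assumptions, not the model of the paper.

    # Sketch of a grey-box battery model: a physical R-int equivalent circuit
    # with the open-circuit voltage replaced by a small neural network.
    import torch
    import torch.nn as nn

    class GreyBoxCell(nn.Module):
        def __init__(self, capacity_ah=2.5):
            super().__init__()
            self.capacity_ah = capacity_ah
            self.r0 = nn.Parameter(torch.tensor(0.02))  # learnable resistance
            self.ocv = nn.Sequential(                   # learnable OCV(SOC)
                nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 1)
            )

        def forward(self, soc, current):
            # Physical part: terminal voltage = OCV(SOC) - R0 * I
            v = self.ocv(soc.unsqueeze(-1)).squeeze(-1) - self.r0 * current
            # Coulombic SOC dynamics, usable as the RHS of a neural ODE solver.
            dsoc_dt = -current / (self.capacity_ah * 3600.0)
            return v, dsoc_dt

    cell = GreyBoxCell()
    v, dsoc = cell(torch.tensor([0.8]), torch.tensor([1.0]))
    print(v.item(), dsoc.item())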
The coronavirus semesters required transferring the mathematics bridging courses into a digital teaching format. Especially at the start of their studies, personal support and a sense of social belonging play a particularly important role for students. The particular challenge in the transfer to a digital format was therefore to compensate for the loss of the usual opportunities for getting to know each other and communicating that arise in face-to-face formats, for example during breaks or in conversations with seatmates. This paper presents the extent to which this transfer to a digital format succeeded. The digital bridging course concept was also transferred into a didactic design pattern so that its structured and comprehensible presentation facilitates transfer and the comparability of results.
Physik durch Informatik
(2022)
Self-tests in learning management systems (LMS) allow students to assess their own learning progress. The didactic concept Physik durch Informatik (PDI, "physics through computer science") is characterized by the use of a programming language for entering solutions to mathematics and physics exercises. In contrast to entering solutions as numerical values or via multiple choice, implementing a solution in a programming language requires a higher level of competence.
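An invented example of the PDI task format: instead of typing a number into the LMS, the student submits a small program whose output is checked against a tolerance.

    # Invented example of a PDI-style exercise: compute the range of a
    # projectile in code instead of entering the numeric answer.
    import math

    def projectile_range(v0, angle_deg, g=9.81):
        """Range of a projectile launched from the ground, no air resistance."""
        angle = math.radians(angle_deg)
        return v0 ** 2 * math.sin(2 * angle) / g

    # The LMS self-test would call the function and compare the result
    # against the expected value within a tolerance.
    print(projectile_range(20.0, 45.0))  # ~40.77 m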
This work addresses the conceptualization, design, and implementation of an Application Programming Interface (API) for the Common Security Advisory Framework (CSAF) 2.0, introducing another method for distributing CSAF documents in addition to the two already existing methods. The existing methods allow neither flexible queries nor filtering, which makes it difficult for operators of software and hardware to use CSAF. An API is intended to simplify this process and thus advance the automation goal of CSAF.
First, it is evaluated whether the current standard allows the implementation of an API. Any conflicts are highlighted and suggestions for standard adaptations are made. Based on these results, the API is designed to meet the previously defined requirements. Subsequently, a proof of concept is successfully developed according to the design and extensively tested with specially prepared test data. Finally, the results and the necessary standard adjustments are summarized and justified.
The conceptual design and the implementation were successfully completed. However, during the implementation of the proof of concept, some routes could not be fully implemented.
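To make the idea of flexible queries concrete, the sketch below exposes a filtering endpoint over a toy advisory store; the route name, query parameter, and data model are invented and may well differ from the API designed in the thesis.

    # Hypothetical sketch of a filtering endpoint for CSAF documents.
    from flask import Flask, jsonify, request

    app = Flask(__name__)

    ADVISORIES = [  # placeholder CSAF documents, reduced to a few fields
        {"id": "EX-2023-0001", "severity": "critical", "product": "example-plc"},
        {"id": "EX-2023-0002", "severity": "moderate", "product": "example-hmi"},
    ]

    @app.get("/advisories")
    def list_advisories():
        severity = request.args.get("severity")  # /advisories?severity=critical
        hits = [a for a in ADVISORIES if severity in (None, a["severity"])]
        return jsonify(hits)

    if __name__ == "__main__":
        app.run()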
This thesis evaluates and compares current full-stack JavaScript technologies. Through extensive research on the state of the art of JavaScript and its related frameworks, different aspects of full-stack development are analysed to judge the popularity of technologies.

The language JavaScript and the idea of full-stack development are presented together with the functionality of different frameworks. The JavaScript runtime Node.js was examined and identified as the most influential JavaScript technology, one that opened up many opportunities.

The technology stacks MERN, MEAN, and MEVN were investigated, featuring the base technologies Node.js, MongoDB, and Express.js. It was discovered that front-end frameworks have the most influence on which variant of full-stack can be chosen. Comparison criteria between the technology stacks were the learning curve, maintainability, modularity, and media integration. These criteria were extracted from research and from a questionnaire conducted with students of the University of Applied Sciences Offenburg.

For the purposes of testing and experiencing a full-stack JavaScript application, the game RemArrow, based on the 1970s game Simon, was designed and implemented. The comparison with predefined criteria shows that the MERN stack with React.js is the easiest to learn and promises the most potential. Emerging JavaScript technologies and their popularity depend heavily on the industry and the skill set of the developer.

In conclusion, it can be established that the concept of full-stack development is currently very interesting and more than just a trend. It has the potential to become a new kind of web development and part of the curriculum taught at universities. Expert knowledge is needed, but there is high demand and much potential for full-stack JavaScript developers.
The accurate diagnosis of state of charge (SOC) and state of health (SOH) is of utmost importance for battery users and for battery manufacturers. State diagnosis is commonly based on measuring battery current and using it in Coulomb counters or as input for a current-controlled model. Here we introduce a new algorithm based on measuring battery voltage and using it as input for a voltage-controlled model. We demonstrate the algorithm using fresh and pre-aged lithium-ion battery single cells operated under well-defined laboratory conditions on full cycles, shallow cycles, and a dynamic battery electric vehicle load profile. We show that both SOC and SOH are accurately estimated using a simple equivalent circuit model. The new algorithm is self-calibrating, is robust with respect to cell aging, allows to estimate SOH from arbitrary load profiles, and is numerically simpler than state-of-the-art model-based methods.
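The contrast drawn above can be illustrated with two textbook baselines: a current-driven Coulomb counter, which drifts with sensor bias, and a voltage-driven estimate that inverts an OCV(SOC) lookup after correcting for the ohmic drop. Both are simplified stand-ins, not the authors' algorithm, and all numbers are invented.

    # Sketch: Coulomb counting vs. a voltage-driven SOC estimate.
    import numpy as np

    soc_grid = np.linspace(0.0, 1.0, 11)
    ocv_grid = 3.0 + 1.2 * soc_grid            # toy linear OCV curve in volts

    def coulomb_count(soc0, current_a, dt_s, capacity_ah):
        # Integrate the measured current (positive = discharge).
        return soc0 - np.cumsum(current_a) * dt_s / (capacity_ah * 3600.0)

    def soc_from_voltage(v_meas, current_a, r0=0.02):
        # Correct the terminal voltage for the ohmic drop, then invert OCV(SOC).
        ocv = v_meas + r0 * current_a
        return np.interp(ocv, ocv_grid, soc_grid)

    current = np.full(3600, 1.0)               # 1 A discharge for one hour
    print(coulomb_count(0.9, current, 1.0, 2.5)[-1])  # drifts with sensor bias
    print(soc_from_voltage(3.9, 1.0))                 # self-calibrating reading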
Smart Home Security
(2022)
Interoperability between communication standards in the smart home environment is becoming an ever greater challenge due to the growing number of devices and manufacturers. Users must pay close attention that no problems arise from the use of multiple communication standards. Often, only devices from the same manufacturer or from a group of manufacturers are fully compatible, which inevitably creates vendor lock-in. In addition, the already established standards have numerous known security vulnerabilities that attackers can exploit if the standards are implemented carelessly.

The new communication standard Matter by the Connectivity Standards Alliance (CSA) promises to solve these problems. Matter builds on the existing protocols WiFi, Bluetooth, and Thread and already counts many of the large smart home manufacturers, such as Google, Amazon, Apple, Philips, and Signify, among its partners. Moreover, end-user security is intended to be a fundamental principle of its development. According to the CSA, the final release of the Matter standard is expected in autumn 2022.

This bachelor's thesis aims to examine Matter for weaknesses in the area of information security on the basis of the specification draft and the already public reference implementation. To assess the conceptual security measures, the thesis considers, among other things, how Matter works, the assessment of the threat landscape, the security principles chosen for its development, and the role of data protection.

Subsequently, existing security vulnerabilities and weaknesses in the communication protocols used are examined, and practical attacks against Matter are carried out on the basis of this analysis: both a replay attack and a deauthentication attack are performed against the reference implementation.

Finally, the thesis addresses the question of whether Matter can offer a sufficient level of security and create an advantage for users.
The use of biochar is an important tool to improve soil fertility, reduce the negative environmental impacts of agriculture, and build up terrestrial carbon sinks. However, crop yield increases by biochar amendment were not shown consistently for fertile soils under temperate climate. Recent studies show that biochar is more likely to increase crop yields when applied in combination with nutrients to prepare biochar-based fertilizers. Here, we focused on the root-zone amendment of biochar combined with mineral fertilizers in a greenhouse trial with white cabbage (Brassica oleracea convar. Capitata var. Alba) cultivated in a nutrient-rich silt loam soil originating from the temperate climate zone (Bavaria, Germany). Biochar was applied at a low dosage (1.3 t ha−1). The biochar was placed either as a concentrated hotspot below the seedling or it was mixed into the soil in the root zone, representing a mixture of biochar and soil in the planting basin. The nitrogen fertilizer (ammonium nitrate or urea) was either applied on the soil surface or loaded onto the biochar, representing a nitrogen-enhanced biochar. On average, a 12% yield increase in dry cabbage heads was achieved with biochar plus fertilizer compared to the fertilized control without biochar. The most consistent positive yield responses were observed with a hotspot root-zone application of nitrogen-enhanced biochar, showing a maximum 21% dry cabbage-head yield increase. Belowground biomass and root architecture suggested a decrease in the fine-root content in these treatments compared to treatments without biochar and with soil-mixed biochar. We conclude that the hotspot amendment of a nitrogen-enhanced biochar in the root zone can optimize the growth of white cabbage by providing a nutrient depot in close proximity to the plant, enabling efficient nutrient supply. The amendment of low doses in the root zone of annual crops could become an economically interesting application option for biochar in the temperate climate zone.