With recent developments in the Ukrainian-Russian conflict, many are discussing Germany's dependency on fossil fuel imports in its energy system and how the country can reduce that dependency. With its wide-ranging consumption sectors, the electricity sector is the natural place to start. Recent reports show that the German federal government already intends to achieve a fully renewable electricity supply by 2035 while exploiting all possible clean power options. This was published in the federal government's climate emergency program (Easter Package) in early 2022. The aim of this package is to initiate a rapid transition and decarbonization of the electricity sector. The Easter Package expects an enormous growth of renewable energies to a completely new level, with at least 80% of gross electricity consumption covered by renewables and an extensive, broad deployment of different generation technologies on various scales. This paper discusses this ambitious plan, outlines some insights into this large and rapidly accelerating step, and shows how much Germany will need in order to achieve this milestone towards a fully green electricity supply. Different scenarios and shares of renewables are investigated in order to elaborate on the climate-neutral goal for the electricity sector, brought forward to 2035. The results point out some promising aspects of achieving 100% renewable power, with massive investments in both generation and storage technologies.
Following their success in visual recognition tasks, Vision Transformers (ViTs) are increasingly employed for image restoration. As a few recent works claim that ViTs for image classification also have better robustness properties, we investigate whether this improved adversarial robustness extends to image restoration. We consider the recently proposed Restormer model, as well as NAFNet and the "Baseline network", which are both simplified versions of Restormer. We use Projected Gradient Descent (PGD) and CosPGD for our robustness evaluation. Our experiments are performed on real-world images from the GoPro dataset for image deblurring. Our analysis indicates that, contrary to what is advocated in ViT image-classification works, these models are highly susceptible to adversarial attacks. We attempt an easy fix and improve their robustness through adversarial training. While this yields a significant increase in robustness for Restormer, results on the other networks are less promising. Interestingly, we find that the design choices in NAFNet and the Baseline network, which were based on i.i.d. performance rather than robust generalization, seem to be at odds with model robustness.
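As context for the evaluation method named above, here is a minimal sketch of an L∞ PGD attack as commonly applied to restoration models, where the loss is taken against the clean target image; the model, loss, and hyperparameters are illustrative, not the paper's exact configuration.

```python
# Minimal L-infinity PGD sketch (illustrative; not the paper's exact setup).
import torch

def pgd_attack(model, x, target, loss_fn, eps=8/255, alpha=2/255, steps=10):
    """Gradient-ascent steps on the loss, projected back into the eps-ball."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), target)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()               # ascent step
            x_adv = x + (x_adv - x).clamp(-eps, eps)          # project to eps-ball
            x_adv = x_adv.clamp(0, 1)                         # stay in valid image range
    return x_adv.detach()
```

For deblurring, `loss_fn` would typically be an L1 or MSE loss between the restored output and the sharp ground-truth image.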
State-of-the-art models for pixel-wise prediction tasks such as image restoration, image segmentation, or disparity estimation involve several stages of data resampling, in which the resolution of feature maps is first reduced to aggregate information and then sequentially increased to generate a high-resolution output. Several previous works have investigated the artifacts that are introduced during downsampling, and diverse cures have been proposed that help improve prediction stability and even robustness for image classification. However, the equally relevant artifacts that arise during upsampling have been discussed less. This is significant, as upsampling and downsampling face fundamentally different challenges: while aliases and artifacts during downsampling can be reduced by blurring feature maps, the emergence of fine details is crucial during upsampling, so blurring is not an option and dedicated operations need to be considered. In this work, we are the first to explore the relevance of context during upsampling by employing convolutional upsampling operations with increasing kernel size while keeping the encoder unchanged. We find that increased kernel sizes can generally improve prediction stability in tasks such as image restoration or image segmentation, while a block that combines small kernels for fine details with large kernels for artifact removal and increased context yields the best results.
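The block described above combines small and large kernels during upsampling; a simplified PyTorch sketch of that idea (not the authors' exact architecture) could look like this:

```python
# Illustrative decoder block: parallel small- and large-kernel upsampling
# branches fused by a 1x1 convolution.
import torch
import torch.nn as nn

class MixedKernelUpsample(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # small kernel preserves fine detail; large kernel adds context
        self.up_small = nn.ConvTranspose2d(channels, channels, kernel_size=2, stride=2)
        self.up_large = nn.ConvTranspose2d(channels, channels, kernel_size=8, stride=2, padding=3)
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, x):
        # both branches produce the same 2x-upsampled resolution
        return self.fuse(torch.cat([self.up_small(x), self.up_large(x)], dim=1))
```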
Soiling is an important issue in the renewable energy sector, since it can result in significant yield losses, especially in regions with high pollution or dust levels. To mitigate the impact of soiling on photovoltaic (PV) plants, it is essential to regularly monitor and clean the panels, as well as to develop accurate soiling predictions that can inform cleaning strategies and enhance the overall performance of PV power plants. This research focuses on the problem of soiling loss in photovoltaic power plants and the potential to improve the accuracy of soiling predictions. The study examines how soiling affects the efficiency and productivity of the modules and how soiling can be measured and predicted using machine learning (ML) algorithms. The research includes analyzing real data from large-scale ground-mounted PV sites and comparing different soiling measurement methods. Deviations between the measured and expected soiling loss values were observed for some projects in southern Spain; thus, the main goal of this work is to develop machine learning models that predict soiling more accurately. The developed models have a low mean square error (MSE), indicating their accuracy and suitability for predicting soiling rates. The study also investigates the impact of different cleaning strategies on the performance of PV power plants and provides a powerful application to predict both the soiling and the number of cleaning cycles.
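As an illustration of the modeling step, the following sketch fits a regression model to synthetic soiling data and reports the MSE; the feature names and model choice are assumptions, not the study's actual pipeline.

```python
# Hedged sketch: predict a daily soiling rate from weather-like features.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((500, 4))  # e.g. [PM10, rainfall, humidity, wind speed] (illustrative)
y = 0.1 * X[:, 0] - 0.05 * X[:, 1] + 0.01 * rng.standard_normal(500)  # synthetic rate

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("MSE:", mean_squared_error(y_te, model.predict(X_te)))
```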
Femtosecond (fs) time-resolved magneto-optics is applied to investigate laser-excited ultrafast dynamics of one-dimensional nickel gratings on fused silica and silicon substrates for a wide range of periodicities Λ = 400–1500 nm. Multiple surface acoustic modes with frequencies up to a few tens of GHz are generated. Nanoscale acoustic wavelengths Λ/n have been identified as nth spatial harmonics of the Rayleigh surface acoustic wave (SAW) and the surface skimming longitudinal wave (SSLW), with acoustic frequencies and lifetimes in agreement with theoretical calculations. Resonant magnetoelastic excitation of the ferromagnetic resonance (FMR) by the SAW's third spatial harmonic and, most interestingly, fingerprints of the parametric resonance at 1/2 the SAW frequency have been observed. Numerical solutions of the Landau–Lifshitz–Gilbert (LLG) equation magnetoelastically driven by complex polychromatic acoustic fields quantitatively reproduce all resonances at once. Thus, our results provide a solid experimental and theoretical base for a quantitative understanding of ultrafast fs-laser-driven magnetoacoustics and for tailoring magnetic-grating-based metasurfaces at the nanoscale.
Purpose
Although start-ups have gained increasing scholarly attention, we lack sufficient understanding of their entrepreneurial strategic posture (ESP) in emerging economies. The purpose of this study is to examine the processes of ESP of new technology venture start-ups (NTVs) in an emerging market context.
Design/methodology/approach
In line with grounded theory guidelines and the inductive research traditions, the authors adopted a qualitative approach involving 42 in-depth semi-structured interviews with Ghanaian NTV entrepreneurs to gain a comprehensive analysis at the micro-level on the entrepreneurs' strategic posturing. A systematic procedure for data analysis was adopted.
Findings
From the authors' analysis of Ghanaian NTVs, the authors derived a three-stage model to elucidate the nature and process of ESP: Phase I, spotting and exploiting market opportunities; Phase II, identifying initial advantages; and Phase III, ascertaining and responding to change.
Originality/value
The study contributes to advancing research on ESP by explicating the process through which informal ties and networks are utilised by NTVs and NTVs' founders to overcome extreme resource constraints and information vacuums in contexts of institutional voids. The authors depart from past studies in demonstrating how such ties can be harnessed in spotting and exploiting market opportunities by NTVs. On this basis, the paper makes original contributions to ESP theory and practice.
Investigation on Bowtie Antennas Operating at Very Low Frequencies for Ground Penetrating Radar
(2023)
The efficiency of Ground Penetrating Radar (GPR) systems significantly depends on the antenna performance as the signal has to propagate through lossy and inhomogeneous media. GPR antennas should have a low operating frequency for greater penetration depth, high gain and efficiency to increase the receiving power and should be compact and lightweight for ease of GPR surveying. In this paper, two different designs of Bowtie antennas operating at very low frequencies are proposed and analyzed.
This paper presents a system that uses a multi-stage AI analysis method for determining the condition and status of bicycle paths using machine learning methods. The approach for analyzing bicycle paths includes three stages of analysis: detection of the road surface, investigation of the condition of the bicycle paths, and identification of substrate characteristics. In this study, we focus on the first stage of the analysis. This approach employs a low-threshold data collection method using smartphone-generated video data for image recognition, in order to automatically capture and classify surface condition and status.
For the analysis, convolutional neural networks (CNNs) are employed. CNNs have proven effective in image recognition tasks and are particularly well suited for analyzing the surface condition of bicycle paths, as they can identify patterns and features in images. By training the CNN on a large dataset of images with known surface conditions, the network learns to identify common features and patterns and to classify them reliably.
The results of the analysis are then displayed on digital maps and can be utilized in areas such as bicycle logistics, route planning, and maintenance. This can improve safety and comfort for cyclists while promoting cycling as a mode of transportation. It can also assist authorities in maintaining and optimizing bicycle paths, leading to a more sustainable and efficient transportation system.
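To make the classification stage described above concrete, a minimal CNN for surface-condition classification might look like the following sketch; the architecture and class labels are illustrative, as the paper does not specify its exact network.

```python
# Minimal sketch of a CNN surface-condition classifier (illustrative only).
import torch.nn as nn

n_classes = 4  # e.g. asphalt, paving stones, gravel, unpaved (assumed labels)

surface_cnn = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, n_classes),
)
```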
In this paper, the J-integral is derived for temperature-dependent elastic–plastic materials described by incremental plasticity. It is implemented using the equivalent domain integral method for the assessment of three-dimensional cracks based on the results of finite-element calculations. The J-integral considers contributions from inhomogeneous temperature fields and temperature-dependent elastic and plastic material properties, as well as from gradients in the plastic strains and the hardening variables. Different energy densities are considered, the Helmholtz free energy and the stress-working density, providing a physical meaning of the J-integral as a fracture criterion for crack growth. Results obtained for a plate with two different crack configurations, each loaded by a cool-down thermal shock, show domain-independence of the incremental J-integral for different energy densities, even for high temperature gradients and significant temperature-dependence of the yield stress and the hardening exponent in the presence of large-scale yielding. Hence, the derived J-integral is an appropriate parameter for the assessment of cracks in thermomechanically loaded components.
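For reference, the classical two-dimensional contour form of the J-integral, which the derivation above generalizes with domain terms for temperature gradients and plastic dissipation, reads:

```latex
J = \int_{\Gamma} \left( W \,\mathrm{d}y \;-\; T_i \,\frac{\partial u_i}{\partial x} \,\mathrm{d}s \right)
```

where W is the energy density (here the Helmholtz free energy or the stress-working density), T_i the traction vector along the contour Γ around the crack tip, and u_i the displacements.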
Ultra-low-power passive telemetry systems for industrial and biomedical applications have gained much popularity lately. The reduction of the power consumption and size of the circuits poses critical challenges in ultra-low-power circuit design. Biotelemetry applications like leakage detection in silicone breast implants require low-power, small-size electronics. In this doctoral thesis, the design, simulation, and measurement of a programmable mixed-signal System-on-Chip (SoC) called General Application Passive Sensor Integrated Circuit (GAPSIC) is presented. Owing to the low power consumption, GAPSIC is capable of completely passive operation. Such a batteryless passive system has lower maintenance complexity and is also free from battery-related health hazards. With a die area of 4.92 mm² and a maximum analog power consumption of 592 µW, GAPSIC has one of the best figures of merit compared to similar state-of-the-art SoCs. Regarding possible applications, GAPSIC can read out and digitally transmit the signals of resistive sensors for pressure or temperature measurements. Additionally, GAPSIC can measure electrocardiogram (ECG) signals and conductivity.
The design of GAPSIC complies with the International Organization for Standardization (ISO) 15693/NFC (near field communication) 5 standard for radio frequency identification (RFID), corresponding to the frequency range of 13.56 MHz. A passive transponder developed with GAPSIC comprises an external memory and only a few other external components, such as an antenna and sensors. The passive tag antenna and reader antenna use inductive coupling for communication and energy transfer, which enables passive operation. A passive tag developed with GAPSIC can communicate with an NFC-compatible smart device or an ISO 15693 RFID reader. The external memory contains the programmable application-specific firmware.
As a mixed-signal SoC, GAPSIC includes both analog and digital circuitry. The analog block of GAPSIC includes a power management unit, an RFID/NFC communication unit, and a sensor readout unit. The digital block includes an integrated 32-bit microcontroller, developed by the Hochschule Offenburg ASIC design center, and digital peripherals. A 16-kilobyte random-access memory and a 16-kilobyte read-only memory constitute GAPSIC's internal memory. GAPSIC is fabricated in a one-poly, six-metal 0.18 µm CMOS process.
The design of GAPSIC includes two stages. In the first stage, a standalone RFID/NFC frontend chip with a power management unit, an RFID/NFC communication unit, a clock regenerator unit, and a field detector unit was designed. In the second stage, the rest of the functional blocks were integrated with the blocks of the RFID/NFC frontend chip for the final integration of GAPSIC. To reduce the power consumption, conventional low-power design techniques, such as multiple power supplies and the operation of complementary metal-oxide-semiconductor (CMOS) transistors in the sub-threshold region, were applied extensively, alongside further innovative circuit designs.
An overvoltage protection circuit, a power rectifier, a bandgap reference circuit, and two low-dropout (LDO) voltage regulators constitute the power management unit of GAPSIC. The overvoltage protection circuit uses a novel method where three stacked transistor pairs shunt the extra voltage. In the power rectifier, four rectifier units are arranged in parallel, which is a unique approach. The four parallel rectifier units provide the optimal choice in terms of voltage drop and the area required.
The communication unit is responsible for RFID/NFC communication and incorporates demodulation and load modulation circuitry. The demodulator circuit comprises an envelope detector, a high-pass filter, and a comparator. Following a new approach, the bandgap reference circuit itself acts as the load for the envelope detector circuit, which minimizes the circuit complexity and area. For the communication between the reader and the RFID/NFC tag, amplitude-shift keying (ASK) is used to modulate signals, where the smallest modulation index can be as low as 10%. A novel technique involving a comparator with a preset offset voltage effectively demodulates the ASK signal. With an effective die area of 0.7 mm² and a power consumption of 107 µW, the standalone RFID/NFC frontend chip has the best figures of merit compared to state-of-the-art frontend chips reported in the relevant literature. A passive RFID/NFC tag developed with the standalone frontend chip and equipped with temperature and pressure sensors demonstrates the full passive operational capability of the frontend chip. An NFC reader device using a custom-built Android-based application software reads out the sensor data from the passive tag.
The sensor readout circuit consists of a channel selector with two differential and four single-ended inputs with a programmable-gain instrumentation amplifier. The entire sensor readout part remains deactivated when not in use. The internal memory stores the measured offset voltage of the instrumentation amplifier, where a firmware code removes the offset voltage from the measured sensor signal. A 12-bit successive approximation register (SAR) type analog-to-digital-converter (ADC) based on a charge redistribution architecture converts the measured sensor data to a digital value. The digital peripherals include a serial peripheral interface, four timers, RFID/NFC interfaces, sensor readout unit interfaces, and 12-bit SAR logic.
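The successive-approximation principle behind the 12-bit SAR ADC mentioned above can be sketched behaviorally as a binary search against a DAC level; this illustrates the principle only, not the chip's charge-redistribution implementation.

```python
# Behavioral sketch of a 12-bit SAR conversion loop (illustrative).
def sar_adc(v_in, v_ref=1.8, bits=12):
    code = 0
    for bit in reversed(range(bits)):
        trial = code | (1 << bit)                  # tentatively set current bit
        if v_in >= (trial / (1 << bits)) * v_ref:  # comparator decision vs. DAC level
            code = trial                           # keep the bit
    return code

print(sar_adc(0.9))  # half of the 1.8 V reference -> 2048 (half-scale code)
```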
Two sets of studies with custom-made NFC tag antennas for biomedical applications were conducted to ascertain their compatibility with GAPSIC. The first study involved link efficiency measurements of NFC tag antennas and an NFC reader antenna with porcine tissue. In a separate experiment, the effect of a ferrite core, compared to an air core, on the antenna coupling factor was investigated. With the ferrite core, the coupling factor increased fourfold.
Among the state-of-the-art SoCs published in recent scientific articles, GAPSIC is the only passive programmable SoC with a power management unit, an RFID/NFC communication interface, a sensor readout circuit, a 12-bit SAR ADC, and an integrated 32-bit microcontroller. This doctoral research includes the preliminary study of three passive RFID tags designed with discrete components for biomedical and industrial applications like measurements of temperature, pH, conductivity, and oxygen concentration, along with leakage detection in silicone breast implants. Besides its small size and low power consumption, GAPSIC is suitable for each of the biomedical and industrial applications mentioned above due to the integrated high-performance microcontroller, the robust programmable instrumentation amplifier, and the 12-bit analog-to-digital converter. Furthermore, the simulation and measurement data show that GAPSIC is well suited for the design of a passive tag to monitor arterial blood pressure in patients experiencing Peripheral Artery Disease (PAD), which is proposed in this doctoral thesis as an exemplary application of the developed system.
Team description papers of magmaOffenburg are incremental in the sense that each year we address a different topic of our team and the tools around our team. In this year’s team description paper we focus on the architecture of the software. It is a main factor for being able to keep the code maintainable even after 15 years of development. We also describe how we make sure that the code follows this architecture.
The invention relates to a device for the biological methanation of CO and/or CO2 by means of methanogenic microorganisms through the conversion of H2 and CO and/or CO2. The device comprises a gassing column and a degassing column, each with a bottom side and a top side opposite the bottom side; a medium containing methanogenic microorganisms provided in the gassing column and the degassing column; a feed device for introducing a gas containing H2 into the medium of the gassing column; a discharge device for removing a gas containing CH4 from the degassing column; a connecting line between the gassing column and the degassing column near the bottom sides; a pump for transferring medium via the connecting line from the gassing column into the degassing column; and a return line between the gassing column and the degassing column near the top sides for returning medium from the degassing column into the gassing column. The invention also relates to a method for the biological methanation of CO and/or CO2 in such a device by means of methanogenic microorganisms contained in a medium provided in the device, wherein the medium is circulated between a gassing column and a degassing column, the columns being connected by a connecting line near their bottom sides and by a return line near the top sides opposite the bottom sides, wherein the medium moves downward in the gassing column and upward in the degassing column, and wherein a gas containing H2 is fed into the medium in the gassing column.
Landing heel first has been associated with elevated external knee abduction moments (KAM), potentially increasing the risk of sustaining a non-contact ACL injury. Apart from the foot strike angle, the knee valgus angle (VAL) and the vertical center-of-mass velocity at initial ground contact (IC) have been associated with increased KAM in females across different sidestep cuts. While real-time biofeedback training has proven effective for gait retraining [4], the highly dynamic, non-cyclical nature of cutting maneuvers makes real-time feedback unsuitable and alternative approaches necessary. This study aimed to assess the efficacy of immediate software-aided feedback on cutting technique in reducing KAM during handball-specific cutting maneuvers.
This paper presents the new Deep Reinforcement Learning (DRL) library RL-X and its application to the RoboCup Soccer Simulation 3D League and classic DRL benchmarks. RL-X provides a flexible and easy-to-extend codebase with self-contained single directory algorithms. Through the fast JAX-based implementations, RL-X can reach up to 4.5x speedups compared to well-known frameworks like Stable-Baselines3.
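The speedups stem from JAX's jit compilation of the training step; a generic sketch of that style (illustrative of the approach, not RL-X's actual API) is shown below.

```python
# Generic sketch of a jit-compiled update step in JAX (illustrative only).
import jax
import jax.numpy as jnp

def loss_fn(params, obs, target):
    pred = jnp.tanh(obs @ params["w"] + params["b"])  # toy policy head
    return jnp.mean((pred - target) ** 2)

@jax.jit  # the whole update compiles to one fused XLA program
def update(params, obs, target, lr=1e-3):
    grads = jax.grad(loss_fn)(params, obs, target)
    return jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)

params = {"w": jnp.zeros((8, 2)), "b": jnp.zeros(2)}
params = update(params, jnp.ones((32, 8)), jnp.zeros((32, 2)))
```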
The use of renewable energy sources for heating and cooling in buildings today offers the best opportunities to avoid the use of fossil fuels and the associated climate-damaging emissions. However, unlike fossil fuels, renewable energy sources such as solar radiation are not available at the push of a button but occur uncontrollably depending on the weather, the location of the building and the time of year. Their use is free of charge, but complex converters and systems usually have to be installed to exploit them. These must be carefully planned and operated in order to avoid unnecessary costs and to generate the maximum possible yield. Regenerative energy systems are usually integrated into existing conventional systems. When designing the control and regulation equipment, it is crucial to automate the systems in such a way that renewable energy sources are used first and the share of fossil energy sources is minimized.
Automation devices or automation stations (AS) take on the task of controlling, regulating, monitoring and, if necessary, optimizing building systems and their components (e.g. pumps, compressors, fans) based on recorded process variables. For this purpose, a wide range of control and regulation methods is used, from simple on/off controllers through classic PID controllers to higher-order controllers such as adaptive, model-predictive, or knowledge-based controllers.
Starting with a brief introduction to automation technology (Sect. 7.1), the chapter goes into the structure and functionality of the usual compact controllers using the application examples of solar thermal systems and heat pump systems (Sect. 7.2). Finally, the integration of system automation into a higher-level building automation system and into the building management system is described using specific application examples (Sect. 7.3).
This central book chapter details the implementation of automation for solar domestic hot water systems, solar-assisted building heating, rooms, solar cooling systems, heat pump heating systems, geothermal systems, and thermally activated building component systems. Hydraulic and automation diagrams are used to explain how the automation of these systems works. A detailed insight into the engineering and technical interrelationships involved in the use of these systems, as well as the use of simulation tools, enables effective control and regulation. System characteristic curves and systematic procedures support automation engineers in their tasks.
Renewable energy sources such as solar radiation, geothermal heat and ambient heat are available for energy conversion. With the help of special converters, such as solar collectors, geothermal probes and chillers, these resources can be put to use. They collect the energy and convert it to a temperature level high enough to be suitable for heating purposes. In the case of refrigeration machines, a distinction is made between electrically and thermally driven machines.
In this paper we present the concept of the "KI-Labor Südbaden" to support regional companies in the use of AI technologies. The approach is based on the "Periodic Table of AI" and is extended with new dimensions for sustainability and for the impact of AI on the working environment. It is illustrated on the basis of three real-world use cases: 1. the detection of humans in low-resolution infrared (IR) images for collaborative robotics; 2. the use of machine data from specifically designed vehicles; 3. state-of-the-art Large Language Models (LLMs) applied to internal company documents. We explain the use cases, thereby demonstrating how to apply the Periodic Table of AI to structure AI applications.
Marketing and sales have high expectations of new methods such as Big Data, artificial intelligence, machine learning, and predictive analytics. But following the “garbage in—garbage out” principle, the results leave much to be desired. The reason is often insufficient quality in the underlying customer data. This article sheds light on this problem using the data quality and value pyramid as an example. The higher up the value-added pyramid the data is located, the higher its quality and the more value it generates for a company. In addition, we show how the use of monitoring systems, such as a data quality scorecard, makes data quality visible and improvements measurable. In this way, the actual value of data for companies becomes obvious and manageable.
There is an ongoing debate about the use and scope of Clayton M. Christensen's idea of disruptive innovation, including the question of whether it is a management buzz phrase or a valuable theory. This discussion raises the general question of how innovation in the field of management theories and concepts finds its way to the different target groups. This conceptual paper combines the different concepts of the creation and dissemination of management trends in a basic framework, based on a short review of models for the dissemination of management ideas. This framework allows an analysis of the character of new management ideas like disruptive innovation. By measuring the impact of the theory on the academic sphere using bibliometric statistics of the number of academic publications on Google Scholar and Scopus and a meta-analysis of research papers, we show the significant influence of disruptive innovation beyond a pure management fad.
Due to its performance, the field of deep learning has gained a lot of attention, with neural networks succeeding in areas like Computer Vision (CV), Natural Language Processing (NLP), and Reinforcement Learning (RL). However, high accuracy comes at a computational cost, as larger networks require longer training time and no longer fit onto a single GPU. To reduce training costs, researchers are looking into the dynamics of different optimizers in order to find ways to make training more efficient. Resource requirements can be limited by reducing model size during training or by designing more efficient models that improve accuracy without increasing network size.
This thesis combines eigenvalue computation and high-dimensional loss surface visualization to study different optimizers and deep neural network models. Eigenvectors of different eigenvalues are computed, and the loss landscape and optimizer trajectory are projected onto the plane spanned by those eigenvectors. A new parallelization method for the stochastic Lanczos method is introduced, resulting in faster computation and thus enabling high-resolution videos of the trajectory and second-order information during neural network training. Additionally, the thesis presents the loss landscape between two minima along with the eigenvalue density spectrum at intermediate points for the first time.
Secondly, this thesis presents a regularization method for Generative Adversarial Networks (GANs) that uses second-order information. The gradient during training is modified by subtracting the eigenvector direction of the biggest eigenvalue, preventing the network from falling into the steepest minima and avoiding mode collapse. The thesis also shows the full eigenvalue density spectra of GANs during training.
Thirdly, this thesis introduces ProxSGD, a proximal algorithm for neural network training that guarantees convergence to a stationary point and unifies multiple popular optimizers. Proximal gradients are used to find a closed-form solution to the problem of training neural networks with smooth and non-smooth regularizations, resulting in better sparsity and more efficient optimization. Experiments show that ProxSGD can find sparser networks while reaching the same accuracy as popular optimizers.
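The closed-form proximal updates mentioned above can be illustrated with the well-known soft-thresholding operator for an ℓ1 penalty; this is a generic sketch of a proximal step, not the thesis' full ProxSGD algorithm.

```python
# Soft-thresholding: the proximal operator of lam * ||w||_1.
import torch

def prox_l1(w, lam):
    return torch.sign(w) * torch.clamp(w.abs() - lam, min=0.0)

w = torch.tensor([0.8, -0.03, 0.002])
grad = torch.tensor([0.1, 0.0, -0.2])
w = prox_l1(w - 0.1 * grad, lam=0.01)  # gradient step, then prox -> exact zeros appear
```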
Lastly, this thesis unifies sparsity and neural architecture search (NAS) through the framework of group sparsity. Group sparsity is achieved through ℓ2,1-regularization during training, allowing for filter and operation pruning to reduce model size with minimal sacrifice in accuracy. By grouping multiple operations together, group sparsity can be used for NAS as well. This approach is shown to be more robust while still achieving competitive accuracies compared to state-of-the-art methods.
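A minimal sketch of the ℓ2,1 penalty over convolution filters, which drives whole filters to zero and thereby yields group sparsity (toy shapes, not the thesis' training setup):

```python
# l2,1 penalty: one L2 norm per output filter, summed over filters.
import torch

def l21_penalty(conv_weight):
    # conv_weight shape: (out_channels, in_channels, k, k)
    return conv_weight.flatten(1).norm(dim=1).sum()

w = torch.randn(16, 3, 3, 3, requires_grad=True)  # toy conv weight
penalty = 1e-4 * l21_penalty(w)
penalty.backward()  # gradients push entire filters toward zero
```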
The use of artificial intelligence continues to impact a broad variety of domains, application areas, and people. However, interpretability, understandability, responsibility, accountability, and fairness of the algorithms' results - all crucial for increasing humans' trust into the systems - are still largely missing. The purpose of this seminar is to understand how these components factor into the holistic view of trust. Further, this seminar seeks to identify design guidelines and best practices for how to build interactive visualization systems to calibrate trust.
The paper focuses on the activities of the International Year of Light and Optical Technologies 2015 (IYL), its impact on life, science, art, culture, education and outreach, and its importance in promoting the objectives of sustainable development. It describes our activities carried out in the run-up to and during the IYL, and reports on the generic projects that led to the success of the IYL. The success of the IYL is illustrated by examples and statistics. Building on the potential and success of the IYL, the impact and genesis of the International Day of Light (IDL) is presented. Impressions from the opening ceremony of the IYL at UNESCO headquarters in Paris and the inaugural ceremony of the IDL are then covered. A second focus is placed on the interdisciplinary media projects realized by the students of our university and dedicated to these events. Finally, an analysis of the impact and legacy of the IYL and IDL is presented.
In this paper we report on further success of our work to develop a multi-method energy optimization based on a digital twin concept. The twin serves to replicate production processes of different kinds of production companies, including complex energy systems, and to test market interactions, which are then used for model-predictive optimization. The presented work reports on the performed flexibility assessment, leading to a flexibility audit with a list of measures, and on the impact of the energy optimizations made in relation to interactions with the local power grid, i.e., the exchange node of the low-voltage distribution grid. The analysis and continuous exploration of flexibilities, as well as the exchange with energy markets, require a "guide" for continuous optimization; a further tool, the Flexibility Survey and Control Panel, supports decision-making on the day-ahead horizon for real production plants as well as investment planning to improve machinery, staff schedules, and production infrastructure.
The present work describes an extension of current slope estimation for parameter estimation of permanent magnet synchronous machines operated at inverters. The area of operation for current slope estimation in the individual switching states of the inverter is limited due to measurement noise, bandwidth limitation of the current sensors and the commutation processes of the inverter's switching operations. Therefore, a minimum duration of each switching state is necessary, limiting the final area of operation of a robust current slope estimation. This paper presents an extension of existing current slope estimation algorithms resulting in a greater area of operation and a more robust estimation result.
In order to attract new students, German universities must provide quick and easy access to relevant information. A chatbot can help increase the efficiency of academic advising for prospective students. In this study we evaluate the acceptance and effects of chatbots in German student-university communication. We conducted a qualitative UX study with the chatbot prototype of Offenburg University of Applied Sciences (HSO) in order to determine which features are particularly relevant and which requirements users have. The results show that acceptance increases if the chatbot offers quick and adequate assistance; furthermore, our participants preferred an informal communication style and valued friendly and helpful personality traits in chatbots.
The technique of laser ultrasonics perfectly meets the need for noncontact, noninvasive, nondestructive mechanical probing of nanometer- to millimeter-size samples. However, this technique is limited to the excitation of low-amplitude strains, below the threshold for optical damage of the sample. In the context of strain engineering of materials, alternative optical techniques enabling the excitation of high-amplitude strains in a nondestructive optical regime are needed. We introduce here a nondestructive method for laser-shock wave generation based on additive superposition of multiple laser-excited strain waves. This technique enables strain generation up to mechanical failure of a sample at pump laser fluences below optical ablation or melting thresholds. We demonstrate the ability to generate nonlinear surface acoustic waves (SAWs) in Nb-SrTiO3 substrates, with associated strains in the percent range and pressures up to 3 GPa at 1 kHz repetition rate and close to 10 GPa for several hundred shocks. This study paves the way for the investigation of a host of high-strain SAW-induced phenomena, including phase transitions in conventional and quantum materials, plasticity and a myriad of material failure modes, chemistry and other effects in bulk samples, thin layers, and two-dimensional materials.
Polyarticulated active prostheses constitute a promising solution for upper limb amputees. The bottleneck for their adoption, though, is the lack of intuitive control. In this context, machine learning algorithms based on pattern recognition from electromyographic (EMG) signals represent a great opportunity for naturally operating prosthetic devices, but their performance is strongly affected by the selection of input features. In this study, we investigated different combinations of 13 EMG-derived features obtained from EMG signals of healthy individuals performing upper limb movements and tested their performance for movement classification using an Artificial Neural Network. We found that the input data (i.e., the set of input features) can be reduced by more than 50% without any loss in accuracy, while diminishing the computing time required to train the classifier. Our results indicate that input features must be properly selected in order to optimize prosthetic control.
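For illustration, three of the time-domain features typically found in such EMG feature sets can be computed as follows; the study's exact 13-feature set is not reproduced here.

```python
# Common time-domain EMG features over a signal window (illustrative subset).
import numpy as np

def emg_features(window):
    mav = np.mean(np.abs(window))                                # mean absolute value
    rms = np.sqrt(np.mean(window ** 2))                          # root mean square
    zc = np.sum(np.diff(np.signbit(window).astype(int)) != 0)    # zero crossings
    return np.array([mav, rms, zc])

print(emg_features(np.sin(np.linspace(0, 20, 1000))))
```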
During pyrolysis, biomass is carbonized in the absence of oxygen to produce biochar, with heat and/or electricity as co-products, making pyrolysis one of the promising negative emission technologies for reaching climate goals worldwide. This paper presents a simplified representation of pyrolysis and analyzes the impact of this technology on the energy system. Results show that pyrolysis can enable zero emissions at lower costs by changing the unit commitment of the power plants; e.g. conventional power plants are used differently, as their emissions are compensated by biochar. Additionally, pyrolysis can enhance the flexibility of energy systems: there is a correlation between the electricity generated by pyrolysis and the installed hydrogen capacity, with hydrogen being used less when pyrolysis is present. The results indicate that pyrolysis, which is already available on the market, integrates well into the energy system and has a promising potential to sequester carbon.
Seismic data processing involves techniques to deal with undesired effects that occur during acquisition and pre-processing. These effects mainly comprise coherent artefacts such as multiples, non-coherent signals such as electrical noise, and loss of signal information at the receivers that leads to incomplete traces. In this work, we employ a generative solution, since it can explicitly model complex data distributions and hence lead to a better decision-making process. In particular, we introduce diffusion models for multiple removal. To that end, we run experiments on synthetic and real data, and we compare the deep diffusion performance with standard algorithms. We believe that our pioneering study not only demonstrates the capability of diffusion models, but also opens the door to future research integrating generative models into seismic workflows.
Energy efficiency and hygrothermal performance of hemp clay walls for Moroccan residential buildings
(2023)
Hemp-based building envelopes have gained significant popularity in developed countries, and the trend of constructing houses with hemp-clay blocks is now spreading to developing countries like Morocco. Investigating the hygrothermal behavior of such structures under actual climate conditions is essential for advancing and promoting this sustainable practice. This paper presents an in-depth experimental characterization of a commercial hemp-clay brick that has been exposed to the outdoor environment for four years, in addition to field measurements on a building-scale demonstration prototype. Additionally, the study simulates 17 representative cities to assess the hygrothermal performance and energy-saving potential in each of Morocco's six climate zones, using the EnergyPlus engine. The experimental campaign's findings demonstrate excellent regulation of indoor air temperature and relative humidity, leading to satisfactory levels of thermal comfort within hemp-clay wall buildings. This is attributed to the material's good thermal conductivity and excellent moisture buffering capacity (found to be 0.31 W/(m·K) and 2.25 g/(m²·%RH), respectively). The energy simulation findings also point to significant energy savings, with cooling and heating energy reductions ranging from 27.7% to 47.5% and from 33.7% to 79.8%, respectively, compared to traditional Moroccan buildings.
To improve a building's energy efficiency, many parameters should be assessed, including the building envelope, energy loads, occupation, and HVAC systems. Fenestration is among the most important variables impacting residential building indoor temperatures. It is therefore crucial to use the most energy-efficient window glazing in buildings to reduce energy consumption while providing visual daylight comfort and thermal comfort. Many studies have focused on improving building energy efficiency through the building envelope or the heating, ventilation, and cooling systems, but only a few have examined the effect of glazing on building energy consumption. Thus, this paper studies the influence of different glazing types on the building's heating and cooling energy consumption. A real case-study building located in a semi-arid climate was used. The building energy model was built with the OpenStudio simulation engine, and the building indoor temperature was calibrated using ASHRAE's statistical indices. A comparative analysis was then conducted using seven different types of windows, including single, double, and triple glazing filled with air or argon. Triple-glazed and double-glazed windows with argon-filled cavities offer 37% and 32% annual energy savings, respectively. The methodology developed in this paper could be useful for further studies seeking to improve building energy efficiency through optimal window glazing.
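As a back-of-envelope illustration of why glazing matters, steady-state transmission loss scales as Q = U·A·ΔT; the U-values below are typical catalogue figures, not values from the paper.

```python
# Rough transmission-loss comparison for different glazing types (illustrative).
glazings = {"single": 5.8, "double_argon": 1.1, "triple_argon": 0.6}  # U in W/(m^2 K), assumed
A, dT = 12.0, 15.0  # window area in m^2, indoor-outdoor temperature difference in K

for name, U in glazings.items():
    print(f"{name}: {U * A * dT:.0f} W")
```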
Seismic data processing relies on multiples attenuation to improve inversion and interpretation. Radon-based algorithms are often used for multiples and primaries discrimination. Deep learning, based on convolutional neural networks (CNNs), has shown encouraging applications for demultiple that could mitigate Radon-based challenges. In this work, we investigate new strategies to train a CNN for multiples removal based on different loss functions. We propose combined primaries and multiples labels in the loss for training a CNN to predict primaries, multiples, or both simultaneously. Moreover, we investigate two distinctive training methods for all the strategies: UNet based on minimum absolute error (L1) training, and adversarial training (GAN-UNet). We test the trained models with the different strategies and methods on 400 synthetic data. We found that training to predict multiples, including the primaries …
An important step in seismic data processing to improve inversion and interpretation is multiples attenuation. Radon-based algorithms are often used for discriminating primaries and multiples. Recently, deep learning (DL) based on convolutional neural networks (CNNs) has shown promising results for demultiple that could mitigate the challenges of Radon-based methods. In this work, we investigate new strategies to train a CNN for multiples removal based on different loss functions. We propose combining primaries and multiples labels in the loss for training a CNN to predict primaries, multiples, or both simultaneously. We evaluate the performance of the CNNs trained with the different strategies on 400 clean and noisy synthetic data, considering three metrics. We found that training a CNN to predict the multiples and then subtracting them from the input image is the most effective strategy for demultiple. Furthermore, including the primaries labels as a constraint during the training of multiples prediction improves the results. Finally, we test the strategies on a field dataset. The CNNs trained with the different strategies report competitive results on real data compared with Radon demultiple. As a result, effectively trained CNN models can potentially replace Radon-based demultiple in existing workflows.
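The combined-label idea above can be sketched as an L1 loss on the predicted multiples plus a constraint on the implied primaries (input minus predicted multiples); the weighting is illustrative, not the papers' tuned setting.

```python
# Sketch of a combined demultiple loss (illustrative weighting).
import torch
import torch.nn.functional as F

def demultiple_loss(pred_multiples, data, true_primaries, true_multiples, w=0.5):
    loss_m = F.l1_loss(pred_multiples, true_multiples)
    loss_p = F.l1_loss(data - pred_multiples, true_primaries)  # primaries constraint
    return loss_m + w * loss_p

d = torch.randn(1, 1, 64, 64)   # toy shot gather
m = torch.randn_like(d)         # toy multiples
print(demultiple_loss(m, d, d - m, m))  # perfect prediction -> 0
```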
In this work the nonlinear behavior of layered surface acoustic wave (SAW) resonators is studied with the help of finite element (FE) computations. The full calculations depend strongly on the availability of accurate tensor data. While accurate material data exist for linear computations, the complete sets of higher-order material constants needed for nonlinear simulations are still not available for relevant materials. To overcome this problem, scaling factors were used for each available nonlinear tensor. The approach here considers piezoelectricity, dielectricity, electrostriction, and elasticity constants up to the fourth order. These factors act as a phenomenological estimate for incomplete tensor data. Since no set of fourth-order material constants for LiTaO3 is available, an isotropic approximation for the fourth-order elastic constants was applied. As a result, it was found that the fourth-order elastic tensor is dominated by a single fourth-order Lamé constant. With the help of the FE model, derived in two different but equivalent ways, we investigate the nonlinear behavior of a SAW resonator with a layered material stack. The focus was set on third-order nonlinearity. Accordingly, the modeling approach is validated using measurements of third-order effects in test resonators. In addition, the acoustic field distribution is analyzed.
Sustainable Production
(2023)
Plastics are used today in many areas of the automotive, aerospace and mechanical engineering industries due to their lightweight potential and ease of processing. Additive manufacturing is applied more and more frequently, as it offers a high degree of design freedom and eliminates the need for complex tools. However, the application of additively manufactured plastic components has so far been limited due to their comparatively low strength. For this reason, processes have been developed that additionally reinforce the plastic matrix with fibers made of high-strength materials. However, the resulting components are composites of different materials produced from fossil raw materials, which are difficult to recycle and generally not biodegradable.
Therefore, this paper explores the potential of new composite materials whose matrix consists of a bio-based plastic. In this investigation, it is assumed that the matrix is reinforced with natural fibers to significantly increase the strength. This material should offer a lightweight yet strong structure and be biodegradable after use under controlled conditions. First, the state of the art in the use of bio-based materials in 3D printing is presented. To determine the economic boundary conditions, the growth potential of bio-based materials is analyzed. The recycling prospects for bio-based plastics are also highlighted, and the greenhouse gas emissions and land use to be expected when using bio-based materials are estimated. Finally, the degradability of the composites is discussed.
Currently, immersive technologies are enjoying great popularity. This trend is reflected in technological advances and the emergence of new products for the mass market, such as augmented reality glasses. The range of applications for immersive technologies is growing with more efficient and affordable technology and growing student adoption. Especially in education, their use will improve existing learning methods. Immersive applications use visual, audio and haptic sensors to fully engage the user in a virtual environment. This impression is reinforced by realistic visualizations and the opportunity for interaction. In particular, augmented reality is characterized by a high degree of integration between reality and the inserted virtual objects. An augmented interactive simulation for determining the specific charge of the electron is used as an example to demonstrate how such immersion can be created for users. A virtual Helmholtz coil is used to measure and calculate the e/m constant. The voltage at the cathode for generating the electron beam, as well as the voltage driving the homogeneous magnetic field that deflects the beam, can be varied by haptic user input. Based on these voltages, an immersive virtual electron beam is calculated and visualized. In this paper, the authors present the conceptual steps of this immersive application and address the challenges associated with designing and developing an augmented and interactive simulation.
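The underlying physics of the simulated experiment combines the energy gained in the accelerating field with the circular deflection in the coil field; assuming the standard Helmholtz-coil relations, this gives:

```latex
eU = \tfrac{1}{2} m v^{2}, \qquad e v B = \frac{m v^{2}}{r}
\quad\Rightarrow\quad \frac{e}{m} = \frac{2U}{B^{2} r^{2}},
\qquad B = \left(\tfrac{4}{5}\right)^{3/2} \frac{\mu_0 N I}{R}
```

where U is the accelerating voltage, r the measured beam radius, and B the field of a Helmholtz pair with N turns, current I, and coil radius R.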
Assessing the robustness of deep neural networks against out-of-distribution inputs is crucial, especially in safety-critical domains like autonomous driving, but also in safety systems where malicious actors can digitally alter inputs to circumvent safety guards. However, designing effective out-of-distribution tests that encompass all possible scenarios while preserving accurate label information is a challenging task. Existing methodologies often entail a compromise between attack variety and constraint levels, and sometimes sacrifice both. As a first step towards a more holistic robustness evaluation of image classification models, we introduce an attack method based on image solarization that is conceptually straightforward yet avoids jeopardizing the global structure of natural images, independent of the intensity. Through comprehensive evaluations of multiple ImageNet models, we demonstrate the attack's capacity to degrade accuracy significantly, provided it is not integrated into the training augmentations. Interestingly, even then, no full immunity to accuracy deterioration is achieved. In other settings, the attack can often be simplified into a black-box attack with model-independent parameters. Defenses against other corruptions do not consistently extend to be effective against our specific attack.
Project website: https://github.com/paulgavrikov/adversarial_solarization
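Conceptually, solarization simply inverts every pixel above a threshold, and sweeping the threshold controls the attack strength; a minimal sketch using Pillow (illustrative, not the repository's code):

```python
# Solarization in one call: invert every pixel above the threshold.
from PIL import Image, ImageOps

img = Image.new("RGB", (64, 64), (200, 80, 30))  # stand-in image
solarized = ImageOps.solarize(img, threshold=128)
```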
Following the traditional paradigm of convolutional neural networks (CNNs), modern CNNs manage to keep pace with more recent, for example transformer-based, models by not only increasing model depth and width but also the kernel size. This results in large amounts of learnable model parameters that need to be handled during training. While following the convolutional paradigm with the according spatial inductive bias, we question the significance of learned convolution filters. In fact, our findings demonstrate that many contemporary CNN architectures can achieve high test accuracies without ever updating randomly initialized (spatial) convolution filters. Instead, simple linear combinations (implemented through efficient 1×1 convolutions) suffice to effectively recombine even random filters into expressive network operators. Furthermore, these combinations of random filters can implicitly regularize the resulting operations, mitigating overfitting and enhancing overall performance and robustness. Conversely, retaining the ability to learn filter updates can impair network performance. Lastly, although we only observe relatively small gains from learning 3×3 convolutions, the learning gains increase proportionally with kernel size, owing to the non-idealities of the independent and identically distributed (i.i.d.) nature of default initialization techniques.
Modern CNNs are learning the weights of vast numbers of convolutional operators. In this paper, we raise the fundamental question if this is actually necessary. We show that even in the extreme case of only randomly initializing and never updating spatial filters, certain CNN architectures can be trained to surpass the accuracy of standard training. By reinterpreting the notion of pointwise (1×1) convolutions as an operator to learn linear combinations (LC) of frozen (random) spatial filters, we are able to analyze these effects and propose a generic LC convolution block that allows tuning of the linear combination rate. Empirically, we show that this approach not only allows us to reach high test accuracies on CIFAR and ImageNet but also has favorable properties regarding model robustness, generalization, sparsity, and the total number of necessary weights. Additionally, we propose a novel weight sharing mechanism, which allows sharing of a single weight tensor between all spatial convolution layers to massively reduce the number of weights.
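A simplified sketch of the linear-combination idea, with frozen random 3×3 filters recombined by a learnable 1×1 convolution (not the papers' full LC block):

```python
# Frozen random spatial filters + learnable 1x1 linear combination.
import torch.nn as nn

class LCConv(nn.Module):
    def __init__(self, in_ch, out_ch, n_random=32):
        super().__init__()
        self.random_conv = nn.Conv2d(in_ch, n_random, 3, padding=1, bias=False)
        self.random_conv.weight.requires_grad_(False)  # spatial filters stay frozen
        self.mix = nn.Conv2d(n_random, out_ch, 1)      # learnable linear combination

    def forward(self, x):
        return self.mix(self.random_conv(x))
```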
It is common practice to apply padding prior to convolution operations to preserve the resolution of feature maps in Convolutional Neural Networks (CNNs). While many alternatives exist, this is often achieved by adding a border of zeros around the inputs. In this work, we show that adversarial attacks often result in perturbation anomalies at the image boundaries, which are the areas where padding is used. Consequently, we aim to provide an analysis of the interplay between padding and adversarial attacks and seek an answer to the question of how different padding modes (or their absence) affect adversarial robustness in various scenarios.
Neural networks have a number of shortcomings. Among the most severe is the sensitivity to distribution shifts, which allows models to be easily fooled into wrong predictions by small perturbations of the inputs that are often imperceptible to humans and do not have to carry semantic meaning. Adversarial training poses a partial solution to this issue by training models on worst-case perturbations. Yet, recent work has also pointed out that the reasoning in neural networks differs from that of humans: humans identify objects by shape, while neural nets mainly employ texture cues. For example, a model trained on photographs will likely fail to generalize to datasets containing sketches. Interestingly, it was also shown that adversarial training seems to favorably increase the shift toward shape bias. In this work, we revisit this observation and provide an extensive analysis of this effect on various architectures, the common L2- and L∞-training, and Transformer-based models. Further, we provide a possible explanation for this phenomenon from a frequency perspective.
In this contribution, we present a novel 3D printed multi-material, electromagnetic vibration harvester. The harvester is based on a cantilever design and utilizes an embedded constantan wire within a matrix of polyethylene terephthalate glycol (PETG). A prototype has been manufactured with a combination of a fused filament fabrication (FFF) printer and a robot with a custom-made tool.
We revisit the quantitative analysis of the ultrafast magnetoacoustic experiment in a freestanding nickel thin film by Kim and Bigot [J.-W. Kim and J.-Y. Bigot, Phys. Rev. B 95, 144422 (2017)] by applying our recently proposed approach of magnetic and acoustic eigenmode decomposition. We show that applying our modeling to the analysis of time-resolved reflectivity measurements allows for the determination of amplitudes and lifetimes of standing perpendicular acoustic phonon resonances with unprecedented accuracy. The acoustic damping is found to scale as ∝ ω² for frequencies up to 80 GHz, and the peak amplitudes reach 10⁻³. The experimentally measured magnetization dynamics for different orientations of an external magnetic field agree well with numerical solutions of magnetoelastically driven magnon harmonic oscillators. Symmetry-based selection rules for magnon-phonon interactions predicted by our modeling approach allow for the unambiguous discrimination between spatially uniform and nonuniform modes, as confirmed by comparing the resonantly enhanced magnetoelastic dynamics simultaneously measured on opposite sides of the film. Moreover, the separation of timescales for (early) rising and (late) decreasing precession amplitudes provides access to the magnetic (Gilbert) and acoustic damping parameters in a single measurement.
While most ultrafast time-resolved optical pump-probe experiments in magnetic materials reveal the spatially homogeneous magnetization dynamics of ferromagnetic resonance (FMR), here we explore the magneto-elastic generation of GHz-to-THz frequency spin waves (exchange magnons). Using analytical magnon oscillator equations, we apply time-domain and frequency-domain approaches to quantify the results of ultrafast time-resolved optical pump-probe experiments in free-standing ferromagnetic thin films. Simulations show excellent agreement with the experiment, provide acoustic and magnetic (Gilbert) damping constants and highlight the role of symmetry-based selection rules in phonon-magnon interactions. The analysis is extended to hybrid multilayer structures to explore the limits of resonant phonon-magnon interactions up to THz frequencies.
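Schematically, each magnon mode in such an analysis behaves as a driven harmonic oscillator; a generic form (coefficients depend on the mode and geometry, so this is a sketch rather than the papers' exact equations) is:

```latex
\ddot{m}_k + 2\Gamma_k \dot{m}_k + \omega_k^2 m_k = b\,\epsilon_k(t)
```

with mode amplitude m_k, damping rate Γ_k (containing the Gilbert constant), magnon frequency ω_k, magnetoelastic coupling b, and the projected acoustic strain ε_k(t) as the drive.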
In this paper, we propose an approach for gait phase detection on flat and inclined surfaces that can be used for an ankle-foot orthosis and the humanoid robot Sweaty. To cover different use cases, we use a rule-based algorithm, which offers the required flexibility and real-time capability. The inputs of the algorithm are inertial measurement unit and ankle joint angle signals. We show that the gait phases with the orthosis worn by a human participant and with Sweaty are reliably recognized by the algorithm, provided the transition conditions are adapted. For example, the specificity for human gait on flat surfaces is 92 %. For the robot Sweaty, 95 % of gait cycles are fully recognized. Furthermore, the algorithm allows the determination of the inclination angle of the ramp: the sensors of the orthosis yield 6.9° and those of the robot Sweaty 7.7° when walking onto the reference ramp with a slope angle of 7.9°.
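A rule-based gait-phase detector of the kind described above can be sketched as a small state machine over sensor thresholds; the phase names and thresholds here are placeholders, not the paper's tuned transition conditions.

```python
# Toy rule-based gait-phase state machine (placeholder thresholds).
def next_phase(phase, ankle_angle, gyro_pitch):
    if phase == "stance" and gyro_pitch > 1.0:       # foot starts rotating forward
        return "pre_swing"
    if phase == "pre_swing" and ankle_angle < -5.0:  # toe-off reached
        return "swing"
    if phase == "swing" and gyro_pitch < -1.0:       # heel strike decelerates foot
        return "stance"
    return phase

phase = "stance"
phase = next_phase(phase, ankle_angle=2.0, gyro_pitch=1.5)  # -> "pre_swing"
```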