State-of-the-art models for pixel-wise prediction tasks such as image restoration, image segmentation, or disparity estimation involve several stages of data resampling, in which the resolution of feature maps is first reduced to aggregate information and then sequentially increased to generate a high-resolution output. Several previous works have investigated artifacts that are introduced during downsampling, and diverse remedies have been proposed that help improve prediction stability and even robustness for image classification. However, the equally relevant artifacts that arise during upsampling have received far less attention. This is particularly relevant because upsampling and downsampling face fundamentally different challenges: while aliases and artifacts during downsampling can be reduced by blurring feature maps, the emergence of fine details is crucial during upsampling. Blurring is therefore not an option, and dedicated operations need to be considered. In this work, we are the first to explore the relevance of context during upsampling by employing convolutional upsampling operations with increasing kernel size while keeping the encoder unchanged. We find that increased kernel sizes can in general improve prediction stability in tasks such as image restoration or image segmentation, while a block that combines small kernels for fine details with large kernels for artifact removal and increased context yields the best results.
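The experimental knob studied here, the kernel size of a convolutional upsampling operation, can be sketched in a toy 1D setting: zero-insertion doubles the resolution, and the following convolution determines how much context each output sample draws on. This is an illustration of the mechanism, not the paper's architecture:

```python
import numpy as np

def conv_upsample_1d(x, kernel):
    """Upsample a 1D signal by 2 via zero insertion, then convolve.

    The kernel size controls the context available to each output sample,
    mimicking a (transposed) convolutional upsampling step.
    """
    up = np.zeros(2 * len(x))
    up[::2] = x                      # zero-stuffing doubles the resolution
    return np.convolve(up, kernel, mode="same")

x = np.array([1.0, 2.0, 3.0, 4.0])
small = conv_upsample_1d(x, np.array([0.5, 1.0, 0.5]))   # 3-tap: local detail
large = conv_upsample_1d(x, np.ones(7) / 3.5)            # 7-tap: more context
```

With the 3-tap kernel the result is plain linear interpolation; larger kernels trade localization for context, which is exactly the tension the abstract describes.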
The design of control systems in large-scale CPV power plants will be more challenging in the future. Reasons are the increasing size of power plants, the requirements of grid operators, new functions, and new technological trends in industrial automation and communication technology. Concepts and products from fixed-mounted PV can only partly be adopted, since control systems for sun-tracking installations are considerably more complex due to the higher number of controllable entities. The objective of this paper is to deliver design considerations for next-generation control systems. To this end, the work identifies new applications of future control systems, categorized into operation, monitoring, and maintenance domains. The key requirements of the technical system and the application layer are identified. In the results section, new strategies such as a more decentralized architecture are proposed and design criteria are derived. The contribution of this paper should allow manufacturers and research institutes to consider the design criteria in current development and to target further research on new functions and control strategies precisely.
This paper presents the new Deep Reinforcement Learning (DRL) library RL-X and its application to the RoboCup Soccer Simulation 3D League and classic DRL benchmarks. RL-X provides a flexible and easy-to-extend codebase with self-contained single directory algorithms. Through the fast JAX-based implementations, RL-X can reach up to 4.5x speedups compared to well-known frameworks like Stable-Baselines3.
All business is local
(2016)
With economic weight shifting toward net zero, now is the time for ECAs, Exim-Banks, and PRIs to lead. Despite previous successes, aligning global economic governance with climate goals requires additional activities across export finance and investment insurance institutions. The new research project initiated by Oxford University, ClimateWorks Foundation, and Mission 2020, together with other practitioners and academics from institutions such as Atradius DSB, Columbia University, EDC, FMO and Offenburg University, focuses on reshaping future trade and investment governance in light of climate action. The idea of a ‘Berne Union Net Zero Club’ is an important item in a potential package of reforms. This can include realigning mandates and corporate strategies, principles of intervention, as well as ECA, Exim-Bank and PRI operating models in order to accelerate the net zero transformation. Full transparency regarding Berne Union members’ activities would be an excellent starting point. We invite all interested parties in the sector to come together to chart our own path to net zero.
Electrolyte-Gated Field-Effect Transistors Based on Oxide Semiconductors: Fabrication and Modeling
(2017)
Objective: This paper deals with the design and optimization of mechatronic devices.
Introduction: Compared with existing works, the design approach presented in this paper aims to integrate optimization into the design phase of complex mechatronic systems in order to increase the efficiency of the method.
Methods: To solve this problem, a novel mechatronic system design approach has been developed that takes the multidisciplinary aspect into account and treats optimization as a tool that can be used within the embodiment design process to build mechatronic solutions from a set of solution concepts designed with innovative or routine design methods.
Conclusions: This approach has then been applied to the design and optimization of a wind turbine system that can autonomously supply a mountain cottage.
This paper focuses on the effects of differential mode delay (DMD) on the bandwidth of multimode optical fibres. First, an analytical solution for the computation of the differential mode delay is presented. The electrical field of each mode is calculated by the numerical solution of the Helmholtz equation. Based on this solution, the modal power distribution as well as the fibre's impulse response under different launching conditions can be obtained.
Next, the refractive-index profile of two fibres is modelled on the basis of DMD measurements. It is shown that these measurements provide enough information to predict the fibre's propagation characteristics under different launch conditions (excitation conditions).
Geothermal Energy in Germany
(2009)
During pyrolysis, biomass is carbonised in the absence of oxygen to produce biochar, with heat and/or electricity as co-products, making pyrolysis one of the promising negative emission technologies to reach climate goals worldwide. This paper presents a simplified representation of pyrolysis and analyses the impact of this technology on the energy system. Results show that pyrolysis can achieve zero emissions at lower cost by changing the unit commitment of the power plants: conventional power plants are operated differently, as their emissions are compensated by biochar. Additionally, pyrolysis can enhance the flexibility of energy systems, as a correlation appears between the electricity generated by pyrolysis and the installed hydrogen capacity, with hydrogen being used less when pyrolysis is deployed. The results indicate that pyrolysis, which is available on the market, integrates well into the energy system with a promising potential to sequester carbon.
Generative adversarial networks (GANs) provide state-of-the-art results in image generation. However, despite being so powerful, they still remain very challenging to train. This is in particular caused by their highly non-convex optimization space, leading to a number of instabilities. Among them, mode collapse stands out as one of the most daunting. This undesirable event occurs when the model can only fit a few modes of the data distribution, while ignoring the majority of them. In this work, we combat mode collapse using second-order gradient information. To do so, we analyse the loss surface through its Hessian eigenvalues, and show that mode collapse is related to the convergence towards sharp minima. In particular, we observe how the eigenvalues of the generator G are directly correlated with the occurrence of mode collapse. Finally, motivated by these findings, we design a new optimization algorithm called nudged-Adam (NuGAN) that uses spectral information to overcome mode collapse, leading to empirically more stable convergence properties.
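The link between mode collapse and sharp minima rests on estimating Hessian eigenvalues of the loss. A minimal sketch of that machinery, power iteration on finite-difference Hessian-vector products, demonstrated on a toy quadratic loss rather than a GAN and unrelated to the actual NuGAN update rule:

```python
import numpy as np

def hvp(grad_fn, w, v, eps=1e-5):
    # Finite-difference Hessian-vector product:
    # H v ~ (grad(w + eps*v) - grad(w - eps*v)) / (2*eps)
    return (grad_fn(w + eps * v) - grad_fn(w - eps * v)) / (2 * eps)

def top_hessian_eigenvalue(grad_fn, w, iters=100, seed=0):
    """Power iteration on H; the Rayleigh quotient measures sharpness."""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=w.shape)
    for _ in range(iters):
        v = hvp(grad_fn, w, v)
        v /= np.linalg.norm(v)
    return v @ hvp(grad_fn, w, v)

# Toy quadratic loss 0.5 * w^T A w with known eigenvalues {1, 3}
A = np.array([[3.0, 0.0], [0.0, 1.0]])
lam = top_hessian_eigenvalue(lambda w: A @ w, np.zeros(2))
```

Only gradient evaluations are needed, which is what makes such spectral diagnostics feasible for large models.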
Generative adversarial networks are the state-of-the-art approach to learned synthetic image generation. Although early successes were mostly unsupervised, bit by bit this trend has been superseded by approaches based on labelled data. These supervised methods allow a much finer-grained control of the output image, offering more flexibility and stability. Nevertheless, the main drawback of such models is the necessity of annotated data. In this work, we introduce a novel framework that benefits from two popular learning techniques, adversarial training and representation learning, and takes a step towards unsupervised conditional GANs. In particular, our approach exploits the structure of a latent space (learned by the representation learning) and employs it to condition the generative model. In this way, we break the traditional dependency between condition and label, substituting the latter by unsupervised features coming from the latent space. Finally, we show that this new technique is able to produce samples on demand while keeping the quality of its supervised counterpart.
Deep generative models have recently achieved impressive results for many real-world applications, successfully generating high-resolution and diverse samples from complex datasets. As a consequence, fake digital content has proliferated, raising concern and spreading distrust in image content, and leading to an urgent need for automated ways to detect these AI-generated fake images.
Despite the fact that many face editing algorithms seem to produce realistic human faces, upon closer examination they do exhibit artifacts in certain domains which are often hidden to the naked eye. In this work, we present a simple way to detect such fake face images - so-called DeepFakes. Our method is based on a classical frequency-domain analysis followed by a basic classifier. Compared to previous systems, which need to be fed large amounts of labeled data, our approach shows very good results using only a few annotated training samples and even achieves good accuracies in fully unsupervised scenarios. For the evaluation on high-resolution face images, we combined several public datasets of real and fake faces into a new benchmark: Faces-HQ. Given such high-resolution images, our approach reaches a perfect classification accuracy of 100% when it is trained on as few as 20 annotated samples. In a second experiment, on the medium-resolution images of the CelebA dataset, our method achieves 100% accuracy in the supervised and 96% in the unsupervised setting. Finally, evaluating low-resolution video sequences of the FaceForensics++ dataset, our method achieves 91% accuracy in detecting manipulated videos.
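A classical frequency-domain analysis of the kind described here typically reduces an image to the azimuthally averaged power spectrum of its 2D FFT, a compact 1D feature that can feed a basic classifier. A minimal sketch (the binning details are our assumption, not taken from the paper):

```python
import numpy as np

def azimuthal_power_spectrum(img, n_bins=32):
    """Average the 2D FFT log-power spectrum over rings of equal radius.

    The resulting 1D curve exposes the high-frequency artifacts that
    generative models tend to leave behind.
    """
    f = np.fft.fftshift(np.fft.fft2(img))
    power = np.log1p(np.abs(f) ** 2)
    h, w = img.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h / 2, x - w / 2)
    bins = np.linspace(0, r.max() + 1e-9, n_bins + 1)
    idx = np.digitize(r.ravel(), bins) - 1
    sums = np.bincount(idx, weights=power.ravel(), minlength=n_bins)
    counts = np.maximum(np.bincount(idx, minlength=n_bins), 1)
    return sums / counts

rng = np.random.default_rng(0)
feat = azimuthal_power_spectrum(rng.normal(size=(64, 64)))
```

Because the feature is so low-dimensional, even a linear classifier or a simple threshold can separate real from generated images in favorable settings.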
The term attribute transfer refers to the task of altering images in such a way that the semantic interpretation of a given input image is shifted towards an intended direction, which is quantified by semantic attributes. Prominent example applications are photorealistic changes of facial features and expressions, like changing the hair color, adding a smile, enlarging the nose, or altering the entire context of a scene, like transforming a summer landscape into a winter panorama. Recent advances in attribute transfer are mostly based on generative deep neural networks, using various techniques to manipulate images in the latent space of the generator.
In this paper, we present a novel method for the common sub-task of local attribute transfer, where only parts of a face have to be altered in order to achieve semantic changes (e.g. removing a mustache). In contrast to previous methods, where such local changes have been implemented by generating new (global) images, we propose to formulate local attribute transfer as an inpainting problem. By removing and regenerating only parts of images, our Attribute Transfer Inpainting Generative Adversarial Network (ATI-GAN) is able to utilize local context information to focus on the attributes while keeping the background unmodified, yielding visually sound results.
Recent studies have shown remarkable success in image-to-image translation for attribute transfer applications. However, most existing approaches are based on deep learning and require an abundant amount of labeled data to produce good results, therefore limiting their applicability. In the same vein, recent advances in meta-learning have led to successful implementations with limited available data, allowing so-called few-shot learning.
In this paper, we address this limitation of supervised methods by proposing a novel approach based on GANs. These are trained in a meta-training manner, which allows them to perform image-to-image translations using just a few labeled samples from a new target class. This work empirically demonstrates the potential of training a GAN for few-shot image-to-image translation on hair color attribute synthesis tasks, opening the door to further research on generative transfer learning.
In this preliminary report, we present a simple but very effective technique to stabilize the training of CNN-based GANs. Motivated by recently published methods using frequency decomposition of convolutions (e.g. Octave Convolutions), we propose a novel convolution scheme to stabilize the training and reduce the likelihood of a mode collapse. The basic idea of our approach is to split convolutional filters into additive high- and low-frequency parts, while shifting weight updates from low to high during the training. Intuitively, this method forces GANs to learn low-frequency coarse image structures before descending into fine (high-frequency) details. Our approach is orthogonal and complementary to existing stabilization methods and can simply be plugged into any CNN-based GAN architecture. First experiments on the CelebA dataset show the effectiveness of the proposed method.
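The additive split can be illustrated in a few lines. Here the low-frequency part of a kernel is approximated by its mean, a crude projection onto a blur; the decomposition actually used in the paper may differ:

```python
import numpy as np

def split_kernel(k):
    """Split a conv kernel into additive low- and high-frequency parts."""
    low = np.full_like(k, k.mean())   # coarse, low-frequency component
    high = k - low                    # fine, high-frequency residual
    return low, high

def scheduled_kernel(k, alpha):
    """alpha=0 -> purely low-frequency; alpha=1 -> full original kernel.

    Ramping alpha up over training shifts emphasis from coarse structure
    to fine detail, the schedule the abstract describes.
    """
    low, high = split_kernel(k)
    return low + alpha * high

k = np.array([[0., 1., 0.], [1., -4., 1.], [0., 1., 0.]])  # Laplacian kernel
```

By construction the two parts sum back to the original kernel, so at alpha = 1 the network recovers its full expressiveness.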
Our media-artistic performances and installations, INTERCORPOREAL SPLITS (2010–2013), BUZZ (2014–2015), WASTELAND (2015–2016), as well as our new collaboration with Bruno Latour, DE\GLOBALIZE (2018–2020), are not just about polyphony. Here, however, we rediscover them under this heading, thus giving them a new twist, while mapping out issues, mechanisms and functional modes of the polyphonic.
Assessing the robustness of deep neural networks against out-of-distribution inputs is crucial, especially in safety-critical domains like autonomous driving, but also in safety systems where malicious actors can digitally alter inputs to circumvent safety guards. However, designing effective out-of-distribution tests that encompass all possible scenarios while preserving accurate label information is a challenging task. Existing methodologies often entail a compromise between attack variety and constraint levels, and sometimes both. As a first step towards a more holistic robustness evaluation of image classification models, we introduce an attack method based on image solarization that is conceptually straightforward yet avoids jeopardizing the global structure of natural images, independent of the intensity. Through comprehensive evaluations of multiple ImageNet models, we demonstrate the attack's capacity to degrade accuracy significantly, provided it is not integrated into the training augmentations. Interestingly, even then, no full immunity to accuracy deterioration is achieved. In other settings, the attack can often be simplified into a black-box attack with model-independent parameters. Defenses against other corruptions do not consistently extend to be effective against our specific attack.
Project website: https://github.com/paulgavrikov/adversarial_solarization
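Solarization itself is a one-line point operation: pixels above a threshold are inverted while the rest of the image, and hence its global structure, stays untouched. A sketch (the threshold handling and value range are our assumptions):

```python
import numpy as np

def solarize(img, threshold):
    """Invert all pixel values at or above the threshold (values in [0, 1]).

    Only bright regions flip; darker regions, and with them the global
    image structure, are left unchanged.
    """
    return np.where(img >= threshold, 1.0 - img, img)

img = np.array([[0.1, 0.5],
                [0.8, 1.0]])
out = solarize(img, 0.5)
```

Because the operation has a single, model-independent parameter, sweeping the threshold is what turns it into the simple black-box attack mentioned above.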
Following the traditional paradigm of convolutional neural networks (CNNs), modern CNNs manage to keep pace with more recent, for example transformer-based, models by not only increasing model depth and width but also the kernel size. This results in large amounts of learnable model parameters that need to be handled during training. While following the convolutional paradigm with the according spatial inductive bias, we question the significance of learned convolution filters. In fact, our findings demonstrate that many contemporary CNN architectures can achieve high test accuracies without ever updating randomly initialized (spatial) convolution filters. Instead, simple linear combinations (implemented through efficient 1×1 convolutions) suffice to effectively recombine even random filters into expressive network operators. Furthermore, these combinations of random filters can implicitly regularize the resulting operations, mitigating overfitting and enhancing overall performance and robustness. Conversely, retaining the ability to learn filter updates can impair network performance. Lastly, although we only observe relatively small gains from learning 3×3 convolutions, the learning gains increase proportionally with kernel size, owing to the non-idealities of the independent and identically distributed (i.i.d.) nature of default initialization techniques.
Modern CNNs learn the weights of vast numbers of convolutional operators. In this paper, we raise the fundamental question of whether this is actually necessary. We show that even in the extreme case of only randomly initializing and never updating spatial filters, certain CNN architectures can be trained to surpass the accuracy of standard training. By reinterpreting the notion of pointwise (1×1) convolutions as an operator to learn linear combinations (LC) of frozen (random) spatial filters, we are able to analyze these effects and propose a generic LC convolution block that allows tuning of the linear combination rate. Empirically, we show that this approach not only allows us to reach high test accuracies on CIFAR and ImageNet but also has favorable properties regarding model robustness, generalization, sparsity, and the total number of necessary weights. Additionally, we propose a novel weight sharing mechanism, which allows sharing of a single weight tensor between all spatial convolution layers to massively reduce the number of weights.
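The central observation, that a learned 1×1 combination of frozen random spatial filters can express useful operators, can be verified in closed form: with enough random 3×3 filters, the 9-dimensional filter space is almost surely spanned, so least squares recovers any target filter exactly. A sketch that solves for the mixing coefficients directly instead of training them:

```python
import numpy as np

rng = np.random.default_rng(0)

# A bank of frozen, randomly initialized 3x3 filters ...
n_filters = 16
bank = rng.normal(size=(n_filters, 3, 3))

# ... and a target filter one would normally learn spatially.
target = np.array([[1., 0., -1.],
                   [2., 0., -2.],
                   [1., 0., -1.]])  # Sobel edge filter

# The 1x1 convolution only learns mixing coefficients; here we obtain
# them in closed form via least squares rather than gradient descent.
A = bank.reshape(n_filters, -1).T          # shape (9, n_filters)
coeffs, *_ = np.linalg.lstsq(A, target.ravel(), rcond=None)
recombined = (coeffs[:, None, None] * bank).sum(axis=0)
```

Since 16 random filters overdetermine the 9-dimensional space, the recombined filter matches the target up to numerical precision, which is the intuition behind training only the linear-combination weights.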
The energy supply of Offenburg University of Applied Sciences (HS OG) was changed from separate generation to trigeneration in 2007/2008. Trigeneration was installed for supplying heat, cooling and electrical power at HS OG. In this paper, the trigeneration process and its modes of operation, along with the layout of the energy facility at HS OG, are described. Special emphasis is given to the operation schemes and control strategies of the operation modes: winter mode, transition mode and summer mode. The components used in the energy facility are also outlined. Monitoring and data analysis of the energy system were carried out after the commissioning of trigeneration in the period from 2008 to 2011. Thus, valuable performance data was obtained.
Fix your downsampling ASAP! Be natively more robust via Aliasing and Spectral Artifact free Pooling
(2023)
Convolutional neural networks encode images through a sequence of convolutions, normalizations and non-linearities as well as downsampling operations into potentially strong semantic embeddings. Yet, previous work showed that even slight mistakes during sampling, leading to aliasing, can be directly attributed to the networks' lack in robustness. To address such issues and facilitate simpler and faster adversarial training, [12] recently proposed FLC pooling, a method for provably alias-free downsampling - in theory. In this work, we conduct a further analysis through the lens of signal processing and find that such current pooling methods, which address aliasing in the frequency domain, are still prone to spectral leakage artifacts. Hence, we propose aliasing and spectral artifact-free pooling, short ASAP. While only introducing a few modifications to FLC pooling, networks using ASAP as downsampling method exhibit higher native robustness against common corruptions, a property that FLC pooling was missing. ASAP also increases native robustness against adversarial attacks on high and low resolution data while maintaining similar clean accuracy or even outperforming the baseline.
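The underlying idea of FLC pooling, downsampling by keeping only the lowest spatial frequencies of a feature map, can be sketched in a few lines of numpy. The spectral-leakage handling that distinguishes ASAP is deliberately omitted from this minimal sketch:

```python
import numpy as np

def flc_downsample(x):
    """Downsample a 2D map by 2 by cropping its centered spectrum.

    Discarding everything above the new Nyquist frequency is provably
    alias-free; window/leakage effects (the issue ASAP addresses) are
    ignored here.
    """
    h, w = x.shape                       # assumes even h and w
    F = np.fft.fftshift(np.fft.fft2(x))
    ch, cw = h // 2, w // 2
    crop = F[ch - h // 4: ch + h // 4, cw - w // 4: cw + w // 4]
    # Rescale by 4 to compensate for the 4x smaller ifft2 normalization
    return np.real(np.fft.ifft2(np.fft.ifftshift(crop))) / 4

x = np.ones((8, 8))
y = flc_downsample(x)
```

A constant input is a useful sanity check: all its energy sits at DC, so the downsampled map must stay constant with the same amplitude.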
Motivated by the recent trend towards the usage of larger receptive fields for more context-aware neural networks in vision applications, we aim to investigate how large these receptive fields really need to be. To facilitate such study, several challenges need to be addressed, most importantly: (i) We need to provide an effective way for models to learn large filters (potentially as large as the input data) without increasing their memory consumption during training or inference, (ii) the study of filter sizes has to be decoupled from other effects such as the network width or number of learnable parameters, and (iii) the employed convolution operation should be a plug-and-play module that can replace any conventional convolution in a Convolutional Neural Network (CNN) and allow for an efficient implementation in current frameworks. To facilitate such models, we propose to learn not spatial but frequency representations of filter weights as neural implicit functions, such that even infinitely large filters can be parameterized by only a few learnable weights. The resulting neural implicit frequency CNNs are the first models to achieve results on par with the state-of-the-art on large image classification benchmarks while executing convolutions solely in the frequency domain and can be employed within any CNN architecture. They allow us to provide an extensive analysis of the learned receptive fields. Interestingly, our analysis shows that, although the proposed networks could learn very large convolution kernels, the learned filters practically translate into well-localized and relatively small convolution kernels in the spatial domain.
We introduce an open-source Python framework named PHS (Parallel Hyperparameter Search) that enables hyperparameter optimization of any arbitrary Python function on numerous compute instances. This is achieved with minimal modifications inside the target function. Possible applications are expensive-to-evaluate numerical computations that strongly depend on hyperparameters, such as machine learning. Bayesian optimization is chosen as a sample-efficient method to propose the next query set of parameters.
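The basic idea, treating any Python function of hyperparameters as a black box to be queried repeatedly, can be sketched as follows. For brevity this is plain random search, not the PHS API and not the Bayesian optimization that PHS actually uses:

```python
import random

def target(lr, batch_size):
    # Stand-in for an expensive computation, e.g. training a model;
    # lower is better, with an optimum near lr=0.01, batch_size=64.
    return (lr - 0.01) ** 2 + (batch_size - 64) ** 2 / 1e4

def random_search(fn, space, n_trials=200, seed=0):
    """Query fn at random points of the search space; keep the best."""
    rng = random.Random(seed)
    best = None
    for _ in range(n_trials):
        params = {k: rng.uniform(*bounds) for k, bounds in space.items()}
        score = fn(**params)
        if best is None or score < best[0]:
            best = (score, params)
    return best

best_score, best_params = random_search(
    target, {"lr": (1e-4, 0.1), "batch_size": (16, 256)})
```

A Bayesian optimizer replaces the uniform sampling with a surrogate model that proposes the next query set, which is what makes the search sample-efficient for expensive targets.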
Pressure dynamics in metal-oxygen (metal-air) batteries: a case study on sodium superoxide cells
(2014)
Electrochemical reactions in metal–oxygen batteries come along with the consumption or release of gaseous oxygen. We present a novel methodology for investigating electrode reactions and transport phenomena in metal–oxygen batteries by measuring the pressure dynamics in an enclosed gas reservoir above the oxygen electrode. The methodology is exemplified by a room-temperature sodium–oxygen battery forming sodium superoxide (NaO2) in an electrolyte of diethylene glycol dimethyl ether (diglyme) and sodium trifluoromethanesulfonate (NaOSO2CF3, NaOTf). The experiments are supported by microkinetic simulations with a one-dimensional multiphysics continuum model. During galvanostatic cycling over 30 cycles, a constant oxygen consumption/release rate is observed upon discharge/charge. The number of transferred electrons per oxygen molecule is calculated as 1.01 ± 0.02 and 1.03 ± 0.02 for discharge and charge, respectively, confirming the nature of the oxygen reaction product as superoxide O2–. The same ratio is observed in cyclic voltammetry experiments at low scan rates (<1 mV/s). However, at higher scan rates, the ratio increases as a result of oxygen transport limitations in the electrolyte. We introduce electrochemical pressure impedance spectroscopy (EPIS) for simultaneously analyzing current, voltage, and pressure of electrochemical cells. Pressure recording significantly increases the sensitivity of the impedance toward oxygen transport properties of the porous electrode systems. In addition, we report experimental data on the diffusion coefficient and solubility of oxygen in electrolyte solutions as important parameters for the microkinetic models.
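The electrons-per-O2 figure follows from combining Faraday's law with the ideal gas law applied to the reservoir's pressure change. A worked sketch with illustrative numbers, not the paper's measured values:

```python
# Electrons transferred per O2 molecule from the galvanostatic charge
# passed and the pressure change in an enclosed gas reservoir.
F = 96485.0        # C/mol, Faraday constant
R = 8.314          # J/(mol K), universal gas constant

def electrons_per_o2(current_A, time_s, dP_Pa, V_m3, T_K):
    n_o2 = dP_Pa * V_m3 / (R * T_K)       # moles of O2 consumed (ideal gas)
    n_e = current_A * time_s / F          # moles of electrons passed
    return n_e / n_o2

# Example: 1 mA for 1 h, with a pressure drop of ~9.24 kPa in a 10 mL
# reservoir at 298 K (values chosen to illustrate a superoxide-like ratio)
ratio = electrons_per_o2(1e-3, 3600.0, dP_Pa=9244.0, V_m3=10e-6, T_K=298.0)
```

A ratio near 1 e-/O2 is the signature of superoxide formation; ratios near 2 would instead point to peroxide.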
Phenolic compounds, such as flavonoids and phenolic acids, are very important substances that occur in various medicinal plants. They show different pharmacological activities which might be useful in the therapy of many diseases. Phenolic compounds have achieved an increasing interest over the last years because these compounds are easily oxidized and, thus, act as strong antioxidants. We present the chemiluminescence of different phenolic compounds measured directly on high-performance thin-layer chromatography LiChrospher® plates using the oxalic acid derivative bis(2,4,6-trichlorophenyl) oxalate (TCPO) in conjunction with H2O2. Our results indicate that chemiluminescence intensity increases with an ascending number of phenolic groups in the molecule. The method can be used to detect phenolic compounds in beverages like coffee, tea, and wine.
The COVID-19 pandemic, a unique and devastating respiratory disease outbreak, has affected global populations as the disease spreads rapidly. Recent Deep Learning breakthroughs may improve COVID-19 prediction and forecasting as tools for precise and fast detection; however, current methods are still being refined to achieve higher accuracy and precision. This study analyzed a collection of 8055 CT image samples, 5427 of which were COVID-19 cases and 2628 non-COVID. The 9544 X-ray samples included 4044 COVID-19 patients and 5500 non-COVID cases. The most accurate models are MobileNetV3 (97.872 percent), DenseNet201 (97.567 percent), and GoogleNet Inception V1 (97.643 percent). These high accuracies indicate that the models make reliable predictions, with precision and recall also high for MobileNetV3 and DenseNet201. An extensive evaluation using accuracy, precision, and recall allows a comprehensive comparison and improves the predictive models by combining loss optimization with scalable batch normalization. Our analysis shows that these tactics improve model performance and resilience for COVID-19 prediction and detection and demonstrates how Deep Learning can improve disease handling. The methods we suggest would help healthcare systems, policymakers, and researchers make educated decisions to reduce the impact of COVID-19 and other contagious diseases.
In order to make material design processes more efficient in the future, the underlying multidimensional process parameter spaces must be systematically explored using digitalisation techniques such as machine learning (ML) and digital simulation. In this paper we briefly review essential concepts for the digitalisation of electrodeposition processes, with a special focus on chromium plating from trivalent electrolytes.
Additive manufacturing (AM), and in particular 3D multi-material printing, offers completely new production possibilities thanks to the freedom of design and the simultaneous processing of several materials in one component. Today's CAD systems for product development are volume-based and therefore cannot adequately implement the multi-material approach. Voxel-based CAD systems offer the advantage that a component can be divided into many voxels, and different materials and functions can be assigned to these voxels. In this contribution, two voxel-based CAD systems are analyzed in order to simplify AM at the voxel level with different materials. To this end, a set of suitable criteria for evaluating voxel-based CAD systems is developed and applied. The results of a technical-economic comparison show the differences between the voxel-based systems and disclose their disadvantages compared to conventional CAD systems. In order to overcome these disadvantages, a new method is presented that enables the voxelization of a component in a simple way, based on a conventional CAD model. The process chain of this new method is demonstrated using a typical component from product design. The results of this implementation of the new method are illustrated and analyzed.
Specific prototypes of sedimentation field flow fractionation devices (SdFFF) have been developed with relative success for cell sorting. However, no data are available to compare these apparatuses with commercial ones. In order to compare them with other devices mainly used for non-biological species, biocompatible systems were used to develop separations of standard particles (latex: 3–10 μm, of different size dispersities). In order to enhance size-dependent separations, channels of reduced thickness were used (80 and 100 μm), and channel/carrier-phase equilibration procedures were necessary. For sample injection, the use of inlet tubing linked to the FFF accumulation wall, common for cell sorting, can be extended to latex species when they are eluted in the Steric Hyperlayer elution mode. This avoids any primary relaxation steps (stop-flow injection procedure), simplifying series of elution processing. Mixtures composed of four different monodispersed latex beads can be eluted in 6 min with 100 μm channel thickness.
The three-wavelength extinction method (3-WEM) was applied for the on-line particle analysis of suspensions of monodisperse latex beads and polydisperse metal oxide particles of industrial interest. Comparative measurements were performed by photon correlation spectroscopy (PCS). The data of latex particles obtained by 3-WEM and PCS are in good agreement with the manufacturer's values. Also, the values of oxide particles measured by means of the two techniques are in reasonable agreement despite the irregular particle shape. Discrepancies are observed by comparing the oxide particle size results with those of scanning electron microscopy, which is due to the broad sample distributions and shape irregularities.
The M-Bus protocol (EN 13757) is in widespread use for metering applications within home area and neighborhood area networks, but lacks a strict specification. This may lead to incompatibilities in real-life installations and to problems in the deployment of new M-Bus networks. This paper presents the development of a novel testbed to emulate physical Metering Bus (M-Bus) networks with different topologies and to allow the flexible verification of real M-Bus devices in real-world scenarios. The testbed is designed to support device manufacturers and service technicians in testing and analyzing their devices within a specific network before installation. The testbed is fully programmable, allowing flexible changes of network topologies, cable lengths and types. It is easy to use, as only the master and slave devices have to be physically connected. This allows multiple tests to be performed autonomously, including automated regression tests. The testbed is available to other researchers and developers. We invite companies and research institutions to use this M-Bus testbed to increase the common knowledge and real-world experience.
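As one concrete example of the per-device checks such a testbed can automate, an M-Bus long frame (EN 13757-2) can be validated against its link-layer layout: start byte 0x68, two matching length bytes, a checksum over the C, A, CI and user-data fields modulo 256, and the stop byte 0x16. A sketch:

```python
def verify_mbus_long_frame(frame: bytes) -> bool:
    """Minimal validity check for an M-Bus long frame (EN 13757-2)."""
    if len(frame) < 9 or frame[0] != 0x68 or frame[3] != 0x68:
        return False                       # missing start bytes
    length = frame[1]
    if frame[2] != length or len(frame) != length + 6:
        return False                       # length byte must appear twice
    payload = frame[4:4 + length]          # C-field, A-field, CI-field, data
    checksum = sum(payload) % 256          # arithmetic sum modulo 256
    return frame[-2] == checksum and frame[-1] == 0x16

# SND_UD (C=0x53) to address 5 with CI=0x51 and no user data;
# checksum = (0x53 + 0x05 + 0x51) % 256 = 0xA9
frame = bytes([0x68, 0x03, 0x03, 0x68, 0x53, 0x05, 0x51, 0xA9, 0x16])
```

Running such checks on frames captured from an emulated bus segment is exactly the kind of automated regression test the testbed enables.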
The mathematical representation of data in the Spherical Harmonic (SH) domain has recently regained increasing interest in the machine learning community. This technical report gives an in-depth introduction to the theoretical foundations and practical implementation of SH representations, summarizing works on rotation-invariant and equivariant features, as well as convolutions and exact correlations of signals on spheres. These methods are then generalized from scalar SH representations to Vectorial Harmonics (VH), providing the same capabilities for 3D vector fields on spheres.
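A standard rotation-invariant feature of the kind summarized here is the per-degree power spectrum P_l = Σ_m |c_{l,m}|²: a rotation acts on the 2l+1 coefficients of degree l through a unitary Wigner-D matrix, leaving P_l unchanged. A quick numerical check, with a random unitary matrix standing in for a Wigner-D matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

# Complex SH coefficients c_{l,m} of a single degree l = 2 (5 coefficients)
l = 2
c = rng.normal(size=2 * l + 1) + 1j * rng.normal(size=2 * l + 1)

# Rotation-invariant power spectrum entry P_l = sum_m |c_{l,m}|^2
power = np.sum(np.abs(c) ** 2)

# Simulate a rotation: Wigner-D matrices are unitary, so any unitary
# matrix (here the Q factor of a complex Gaussian matrix) preserves P_l.
Q, _ = np.linalg.qr(rng.normal(size=(5, 5)) + 1j * rng.normal(size=(5, 5)))
power_rotated = np.sum(np.abs(Q @ c) ** 2)
```

The same invariance carries over to the vectorial-harmonic setting, since rotations again act unitarily within each degree.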