A benchmark analysis of Long Range (LoRa™) Communication at 2.45 GHz for safety applications
(2014)
The Raman spectra from the chemical compounds toluene and cyclohexane obtained using a Fourier Transform (FT)-Raman spectrometer prototype have been contrasted with the Raman spectra of the same materials collected with two different commercial FT-Raman devices. The FT-Raman spectrometer consists of a Michelson interferometer, a self-designed photon counter and a reference photo-detector. The evaluation of the spectral information, contrary to the commercial devices that commonly use the zero-crossing method, is carried out by re-sampling the Raman scattering and by accurately extracting the optical path information of the Michelson interferometer. The FT-Raman arrangement has been built using conventional parts without disregarding the spectral frequency precision that such FT-Raman instruments usually deliver. No additional complex hardware components or costly software modules have been included in this FT-Raman device. The main Raman lines from the spectra obtained with the three FT-Raman devices have been compared with the Raman lines from the standard Raman spectra of the two materials. The values obtained using the FT-Raman spectrometer prototype show a frequency accuracy comparable to that obtained with the commercial devices, without the need for a large investment. Although the proposed FT-Raman prototype cannot be directly compared to the latest generation of commercial FT-Raman spectrometers, such a device could give an opportunity to users who require high frequency precision in their spectral analysis but have rather scarce resources.
The M-Bus protocol (EN 13757) is in widespread use for metering applications within home area and neighborhood area networks, but lacks a strict specification. This may lead to incompatibilities in real-life installations and to problems in the deployment of new M-Bus networks. This paper presents the development of a novel testbed to emulate physical Metering Bus (M-Bus) networks with different topologies and to allow the flexible verification of real M-Bus devices in real-world scenarios. The testbed is designed to support device manufacturers and service technicians in the test and analysis of their devices within a specific network before installation. The testbed is fully programmable, allowing flexible changes of network topologies, cable lengths and cable types. It is easy to use, as only the master and slave devices have to be physically connected. This makes it possible to autonomously perform multiple tests, including automated regression tests. The testbed is available to other researchers and developers. We invite companies and research institutions to use this M-Bus testbed to increase the common knowledge and real-world experience.
A Survey of Channel Measurements and Models for Current and Future Railway Communication Systems
(2016)
Featherweight Generic Go (FGG) is a minimal core calculus modeling the essential features of the programming language Go. It includes support for overloaded methods, interface types, structural subtyping and generics. The most straightforward semantic description of the dynamic behavior of FGG programs is to resolve method calls based on runtime type information of the receiver.
This article shows a different approach by defining a type-directed translation from FGG to an untyped lambda-calculus. The translation of an FGG program provides evidence for the availability of methods as additional dictionary parameters, similar to the dictionary-passing approach known from Haskell type classes. Then, method calls can be resolved by a simple lookup of the method definition in the dictionary.
Every program in the image of the translation has the same dynamic semantics as its source FGG program. The proof of this result is based on a syntactic, step-indexed logical relation. The step-index ensures a well-founded definition of the relation in the presence of recursive interface types and recursive methods.
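The dictionary-passing translation described above can be illustrated with a minimal Python sketch (a hypothetical toy, not the paper's formal translation to the untyped lambda-calculus): the evidence for "this value implements these methods" is an explicit dictionary argument, and a method call becomes a plain lookup.

```python
# Toy illustration of dictionary-passing method resolution.
# Instead of resolving a method call via the receiver's runtime type,
# the caller is handed a dictionary mapping method names to implementations.

def make_point_dict():
    # "Dictionary" witnessing that point values support a Stringer-like method.
    return {"string": lambda self: f"({self['x']}, {self['y']})"}

def describe(value, dictionary):
    # The method call is resolved by a simple dictionary lookup;
    # no runtime type inspection of the receiver is needed.
    return dictionary["string"](value)

point = {"x": 1, "y": 2}
print(describe(point, make_point_dict()))  # -> (1, 2)
```

This mirrors the dictionary-passing style known from Haskell type classes: the dictionary is threaded through as an ordinary parameter wherever the interface is required.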
We present a videodensitometric quantification method for methadone in syrup, separated by thin-layer chromatography (TLC). The quantification is based on a derivatization reaction with Dragendorff reagent. Measurements were carried out using a 16-bit flatbed scanner. The range of linearity covers two orders of magnitude when the Kubelka-Munk expression is used for data transformation. The separation method is inexpensive, fast, and reliable.
The Advanced Innovation Design Approach (AIDA) is a holistic methodology for enhancing the innovative and competitive capability of industrial companies. AIDA can be considered an open mindset and an individually adaptable range of powerful innovation techniques, such as a comprehensive front-end innovation process, advanced innovation methods, the best tools and methods of the TRIZ methodology, organizational measures for accelerating innovation, IT solutions for Computer-Aided Innovation, and other innovation methods elaborated over the recent decade in industry and academia.
Recent advances in spiked shoe design, characterized by increased longitudinal stiffness, thicker midsole foams, and reconfigured geometry, are considered to improve sprint performance. However, no empirical data on the effects of advanced spike technology on maximal sprinting speed (MSS) have been published so far. Consequently, we assessed MSS via ‘flying 30 m’ sprints of 44 trained male (PR: 10.32 s - 12.08 s) and female (PR: 11.56 s - 14.18 s) athletes, wearing both traditional and advanced spikes in a randomized, repeated-measures design. The results revealed a statistically significant increase in MSS of 1.21% on average when using advanced spike technology. Notably, 87% of participants showed improved MSS with the use of advanced spikes. A cluster analysis revealed that athletes with higher MSS may benefit to a greater extent. However, individual responses varied widely, suggesting the influence of multiple factors that need detailed exploration. Therefore, coaches and athletes are advised to interpret the promising performance enhancements cautiously and to critically evaluate the appropriateness of advanced spike technology for their athletes.
Financing trade and development sustainably will be crucial for Africa. Enhanced collaboration between multilateral development banks, development finance institutions and ECAs could greatly boost intra-regional trade. Furthermore, setting up a ‘level playing field’ on the continent will allow governments to make strategic interventions for successful export credits and trade finance solutions, fostering growth through trade. African trade is already showing signs of rebounding from the coronavirus-induced recession. Through concerted, co-operative and continent-wide efforts, drawing on the knowledge and resources of all types of institutions and policy experts, Africa will continue to grow confidently and quickly into its increasingly important role as an engine of economic growth and global trade.
All business is local
(2016)
Objective: This paper deals with the design and the optimization of mechatronic devices.
Introduction: In contrast to existing works, the design approach presented in this paper aims to integrate optimization into the design phase of complex mechatronic systems in order to increase the efficiency of this method.
Methods: To solve this problem, a novel mechatronic system design approach has been developed that takes the multidisciplinary aspect into account and considers optimization as a tool that can be used within the embodiment design process to build mechatronic solutions from a set of solution concepts designed with innovative or routine design methods.
Conclusions: This approach has then been applied to the design and optimization of a wind turbine system that can be implemented to autonomously supply a mountain cottage.
The communication technologies for automatic meter reading (smart metering) and for energy production and distribution networks (smart grid) have the potential to be one of the first really highly scaled machine-to-machine (M2M) applications. During the last years two very promising developments around the wireless part of smart grid communication were initialized, which possibly have an impact on markets far beyond Europe and far beyond energy automation. Besides the specifications of the Open Metering System (OMS) Group, the German Federal Office for Information Security (Bundesamt für Sicherheit in der Informationstechnik, BSI) has designed a protection profile (PP) and a technical directive (TR) for the communication unit of an intelligent measurement system (smart meter gateway), which were released in March 2013. This design uses state-of-the-art technologies and prescribes their implementation in real-life systems. At first sight the expenditures for the prescribed solutions seem to be significant. But in the long run, this path is inevitable and comes with strategic advantages.
Motivated by the recent trend towards the usage of larger receptive fields for more context-aware neural networks in vision applications, we aim to investigate how large these receptive fields really need to be. To facilitate such study, several challenges need to be addressed, most importantly: (i) We need to provide an effective way for models to learn large filters (potentially as large as the input data) without increasing their memory consumption during training or inference, (ii) the study of filter sizes has to be decoupled from other effects such as the network width or number of learnable parameters, and (iii) the employed convolution operation should be a plug-and-play module that can replace any conventional convolution in a Convolutional Neural Network (CNN) and allow for an efficient implementation in current frameworks. To facilitate such models, we propose to learn not spatial but frequency representations of filter weights as neural implicit functions, such that even infinitely large filters can be parameterized by only a few learnable weights. The resulting neural implicit frequency CNNs are the first models to achieve results on par with the state-of-the-art on large image classification benchmarks while executing convolutions solely in the frequency domain and can be employed within any CNN architecture. They allow us to provide an extensive analysis of the learned receptive fields. Interestingly, our analysis shows that, although the proposed networks could learn very large convolution kernels, the learned filters practically translate into well-localized and relatively small convolution kernels in the spatial domain.
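The core idea of parameterizing filters as neural implicit functions in the frequency domain can be sketched in NumPy (a simplified illustration under stated assumptions, not the authors' implementation: a tiny randomly initialized MLP stands in for the learned implicit function, and a single channel is shown):

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny MLP mapping a 2-D frequency coordinate to a (real, imag) filter value.
# In the paper's setting these few weights would be learned; here they are
# random placeholders to show the mechanics.
W1, b1 = rng.normal(size=(2, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 2)), np.zeros(2)

def implicit_filter(size):
    # Sample the implicit function on a size x size frequency grid.
    freqs = np.stack(np.meshgrid(
        np.linspace(-0.5, 0.5, size),
        np.linspace(-0.5, 0.5, size), indexing="ij"), axis=-1)
    h = np.tanh(freqs @ W1 + b1)
    out = h @ W2 + b2
    return out[..., 0] + 1j * out[..., 1]

# The same few MLP weights parameterize filters of any resolution,
# decoupling filter size from parameter count.
small, large = implicit_filter(8), implicit_filter(64)

# Convolution executed purely in the frequency domain.
image = rng.normal(size=(64, 64))
filtered = np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.ifftshift(large)))
print(small.shape, large.shape, filtered.shape)
```

The key property shown: a filter as large as the input (64 x 64) costs no more parameters than an 8 x 8 one, since both are samples of the same implicit function.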
We have developed a methodology for the systematic generation of a large image dataset of macerated wood references, which we used to generate image data for nine hardwood genera. This is the basis for a substantial approach to automate, for the first time, the identification of hardwood species in microscopic images of fibrous materials by deep learning. Our methodology includes a flexible pipeline for easy annotation of vessel elements. We compare the performance of different neural network architectures and hyperparameters. Our proposed method performs similarly well to human experts. In the future, this will improve controls on global wood fiber product flows to protect forests.
In the modern knowledge-based and digital economy, the value of knowledge is growing relative to other assets and new intellectual property is being created at an ever-increasing rate. Therefore, the ability to find non-trivial solutions, systematically generate new concepts, and create intellectual property rapidly become crucial to achieving competitive advantage and leveraging the intellectual potential of organizations.
Generative adversarial networks (GANs) provide state-of-the-art results in image generation. However, despite being so powerful, they still remain very challenging to train. This is in particular caused by their highly non-convex optimization space leading to a number of instabilities. Among them, mode collapse stands out as one of the most daunting ones. This undesirable event occurs when the model can only fit a few modes of the data distribution, while ignoring the majority of them. In this work, we combat mode collapse using second-order gradient information. To do so, we analyse the loss surface through its Hessian eigenvalues, and show that mode collapse is related to the convergence towards sharp minima. In particular, we observe how the eigenvalues of the G are directly correlated with the occurrence of mode collapse. Finally, motivated by these findings, we design a new optimization algorithm called nudged-Adam (NuGAN) that uses spectral information to overcome mode collapse, leading to empirically more stable convergence properties.
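Analysing the loss surface through Hessian eigenvalues, as described above, is typically done with power iteration on Hessian-vector products. A minimal sketch on a toy quadratic loss (an illustrative stand-in, not the GAN setting or the NuGAN algorithm itself):

```python
import numpy as np

def top_hessian_eigenvalue(hvp, dim, iters=100, seed=0):
    # Power iteration using only Hessian-vector products (hvp), the
    # standard trick for probing neural network loss surfaces where the
    # full Hessian is too large to form explicitly.
    rng = np.random.default_rng(seed)
    v = rng.normal(size=dim)
    v /= np.linalg.norm(v)
    for _ in range(iters):
        hv = hvp(v)
        v = hv / np.linalg.norm(hv)
    # Rayleigh quotient of the converged direction.
    return v @ hvp(v)

# Toy quadratic loss L(w) = 0.5 * w^T H w with a known diagonal Hessian,
# so the dominant eigenvalue (4.0) is known in advance.
H = np.diag([4.0, 1.0, 0.5])
eig = top_hessian_eigenvalue(lambda v: H @ v, dim=3)
print(round(eig, 3))  # -> 4.0
```

A large, growing top eigenvalue signals convergence towards a sharp minimum, which is the quantity the abstract links to mode collapse.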
The energy supply of Offenburg University of Applied Sciences (HS OG) was changed from separate generation to trigeneration in 2007/2008. Trigeneration was installed to supply heat, cooling and electrical power at HS OG. In this paper, the trigeneration process and its modes of operation, along with the layout of the energy facility at HS OG, are described. Special emphasis is given to the operation schemes and control strategies of the operation modes: winter mode, transition mode and summer mode. The components used in the energy facility are also outlined. Monitoring and data analysis of the energy system were carried out after the commissioning of trigeneration in the period from 2008 to 2011. Thus, valuable performance data were obtained.
CNN-based deep learning models for disease detection have become popular recently. We compared eight prominent deep learning models: DenseNet121, DenseNet169, DenseNet201, EfficientNet-b0, EfficientNet-lite4, GoogleNet, MobileNet, and ResNet18 for their binary classification performance on a combined pulmonary chest X-ray dataset. Despite their widespread application to medical images in different fields, there remains a knowledge gap in determining their relative performance when applied to the same dataset, a gap this study aimed to address. The dataset combined data from Shenzhen, China (CH) and Montgomery, USA (MC). We trained the models for binary classification, calculated different parameters of the mentioned models, and compared them. All models were trained with the same training parameters to maintain a controlled comparison environment. At the end of the study, we found a distinct difference in performance among the models when applied to the pulmonary chest X-ray image dataset, where DenseNet169 achieved 89.38 percent and MobileNet 92.2 percent precision.
The identification of vulnerabilities is an important element in the software development life cycle to ensure the security of software. While vulnerability identification based on source code is a well-studied field, the identification of vulnerabilities on the basis of a binary executable without the corresponding source code is more challenging. Recent research has shown how such detection can be achieved by deep learning methods. However, that particular approach is limited to the identification of only 4 types of vulnerabilities. Subsequently, we analyze to what extent we can cover the identification of a larger variety of vulnerabilities. To this end, a supervised deep learning approach using recurrent neural networks for vulnerability detection based on binary executables is used. The underlying basis is a dataset with 50,651 samples of vulnerable code in the form of a standardized LLVM Intermediate Representation. The vectorised features of a Word2Vec model are used to train different variations of three basic architectures of recurrent neural networks (GRU, LSTM, SRNN). A binary classification model was established for detecting the presence of an arbitrary vulnerability, and a multi-class model was trained for the identification of the exact vulnerability; they achieved out-of-sample accuracies of 88% and 77%, respectively. Differences in the detection of different vulnerabilities were also observed, with non-vulnerable samples being detected with a particularly high precision of over 98%. Thus, the methodology presented allows an accurate detection of 23 (compared to 4) vulnerabilities.
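The pipeline of embedded instruction tokens fed through a recurrent network can be sketched as follows (a toy SRNN with random placeholder weights and a made-up four-token vocabulary, only to show the data flow, not the paper's trained model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy embedding table standing in for Word2Vec vectors of LLVM IR tokens.
vocab = {"load": 0, "store": 1, "call": 2, "ret": 3}
emb = rng.normal(size=(len(vocab), 8))

# Minimal simple RNN (SRNN) followed by a sigmoid read-out; weights are
# random placeholders, scaled down for numerical stability.
Wx = rng.normal(size=(8, 16)) * 0.1
Wh = rng.normal(size=(16, 16)) * 0.1
Wo = rng.normal(size=(16,)) * 0.1

def predict_vulnerable(tokens):
    # Run the token sequence through the RNN; the final hidden state is
    # mapped to a probability of the sample being vulnerable.
    h = np.zeros(16)
    for tok in tokens:
        h = np.tanh(emb[vocab[tok]] @ Wx + h @ Wh)
    return 1.0 / (1.0 + np.exp(-(h @ Wo)))

p = predict_vulnerable(["load", "call", "store", "ret"])
print(0.0 <= p <= 1.0)  # -> True
```

The multi-class variant described in the abstract would replace the scalar read-out with a softmax over the 23 vulnerability classes.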
Socially assistive robots (SARs) are becoming more prevalent in everyday life, emphasizing the need to make them socially acceptable and aligned with users' expectations. Robots' appearance impacts users' behaviors and attitudes towards them. Therefore, product designers choose visual qualities to give the robot a character and to imply its functionality and personality. In this work, we sought to investigate the effect of cultural differences on Israeli and German designers' perceptions and preferences regarding the suitable visual qualities of SARs in four different contexts: a service robot for an assisted living/retirement residence facility, a medical assistant robot for a hospital environment, a COVID-19 officer robot, and a personal assistant robot for domestic use. Our results indicate that Israeli and German designers share similar perceptions of visual qualities and most of the robotics roles. However, we found differences in the perception of the COVID-19 officer robot's role and, by that, its most suitable visual design. This work indicates that context and culture play a role in users' perceptions and expectations; therefore, they should be taken into account when designing new SARs for diverse contexts.
An interlaboratory comparison was carried out to evaluate the effectiveness of a method based on HPTLC in which reagent-free derivatization is followed by UV/fluorescence detection. The method was tested for the determination of sucralose (C12H19Cl3O8; (2R,3R,4R,5S,6R)-2-[(2R,3S,4S,5S)-2,5-bis(chloromethyl)-3,4-dihydroxyoxolan-2-yl]oxy-5-chloro-6-(hydroxymethyl)oxane-3,4-diol; CAS Registry No. 56038-13-2) in carbonated and still beverages at the proposed European regulatory limits. For still beverages, a portion of the sample was diluted with methanol-water. For carbonated beverages, a portion of the sample was degassed in an ultrasonic bath before dilution. Turbid beverages were filtered after dilution through an HPLC syringe filter. The separation of sucralose was performed by direct application on amino-bonded (NH2) silica gel HPTLC plates (no cleanup needed) with the mobile phase acetonitrile-water. Sucralose was determined after reagent-free derivatization at 190 degrees C; it was quantified by measurements of both UV absorption and fluorescence. The samples, both spiked and containing sucralose, were sent to 14 laboratories in five different countries. Test portions of a sample found to contain no sucralose were spiked at levels of 30.5, 100.7, and 299 mg/L. Recoveries ranged from 104.3 to 124.6% and averaged 112% for determination by UV detection; recoveries ranged from 98.4 to 101.3% and averaged 99.9% for determination by fluorescence detection. On the basis of the results for spiked samples (blind duplicates at three levels), as well as sucralose-containing samples (blind duplicates at three levels and one split level), the RSDr values ranged from 10.3 to 31.4% for determinations by UV detection and from 8.9 to 15.9% for determinations by fluorescence detection. The RSDR values ranged from 13.5 to 31.4% for determinations by UV detection and from 8.9 to 20.7% for determinations by fluorescence detection.
(1) Background: Little is known about the baroque composer Domenico Scarlatti (1685-1757), whose life was centred behind closed doors at the royal court in Spain. There are no reports about his illnesses. From his compositions, mainly for harpsichord, an outstanding virtuosity can be read. (2) Case Presentation: In this case report, the only known oil painting of Domenico Scarlatti is presented, on which he is about 50 years old. In it one recognizes conspicuous hands with hints of watch glass nails and drumstick fingers. (3) Discussion: Whether Scarlatti had chronic hypoxia of peripheral body regions as a sign of, e.g., bronchial cancer or a severe heart disease, is not known. (4) Conclusions: The above-mentioned signs recorded in the oil painting, even if they were not interpretable at that time, are clearly represented and recorded for us and are open to diagnostic discussion from today's point of view.
In an extensive research project, we have assessed the application of different service models by export credit agencies (ECAs) and export-import banks (EXIMs). We conducted interviews with 35 representatives of ECAs and EXIMs from 27 countries. The question guiding this study is: How do ECAs and EXIMs adopt public service models for supporting exporters? We conducted a holistic multiple case study, investigating if and how these organisations apply public service models developed by Schedler and Guenduez, and which roles of the state are relevant. We find that there is a variety of different service models used by ECAs and EXIMs, and that the service model approaches have great potential to learn from each other and innovate existing services.
High-tech running shoes and spikes ("super-footwear") are currently being debated in sports. There is direct evidence that distance running super shoes improve running economy; however, it is not well established to what extent world-class performances are affected over the range of track and road running events.
This study examined publicly available performance datasets of annual best track and road performances for evidence of potential systematic performance effects following the introduction of super footwear. The analysis was based on the 100 best performances per year for men and women in outdoor events from 2010 to 2022, provided by the world governing body of athletics (World Athletics).
We found evidence of progressing improvements in track and road running performances after the introduction of super distance running shoes in 2016 and super spike technology in 2019. This evidence is more pronounced for distances longer than 1500 m in women and longer than 5000 m in men. Women seem to benefit more from super footwear in distance running events than men.
While the observational study design limits causal inference, this study provides a database on potential systematic performance effects following the introduction of super shoes/spikes in track and road running events in world-class athletes. Further research is needed to examine the underlying mechanisms and, in particular, potential sex differences in the performance effects of super footwear.
In thin-layer chromatography the development step distributes the sample throughout the layer, a process which strongly affects the reflection signals. The essential requirement for quantitative thin-layer chromatography is not a constant sample concentration but a constant sample distribution in each sample spot. This makes evaporation of the mobile phase extremely important, because all tracks of a TLC plate must be dried uniformly. This paper shows that quantitative TLC is possible even if the concentration of the sample is not constant throughout the layer or if the distribution of the sample is not known. With uniform sample distribution, classical Kubelka-Munk theory is valid for isotropic scattering only. In the absence of this constraint classical Kubelka-Munk theory must be extended to situations where scattering is asymmetric. This can be achieved by modification of the original Kubelka-Munk equation. An extended theory is presented which is not only capable of describing asymmetrical scattering in TLC layers but also includes a formula for absorption and fluorescence in diode-array TLC. With this new theory all the different formulas for diode-array thin-layer chromatographic evaluation are combined in one expression.
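The classical Kubelka-Munk transformation that the extended theory above generalizes is short enough to state directly (a sketch of the textbook form K/S = (1 - R)^2 / (2R), not the paper's extended asymmetric-scattering expression):

```python
def kubelka_munk(reflectance):
    # Classical Kubelka-Munk transformation K/S = (1 - R)^2 / (2 R),
    # used to linearize diffuse reflectance data for quantitative TLC.
    # R is the relative diffuse reflectance, 0 < R <= 1.
    return (1.0 - reflectance) ** 2 / (2.0 * reflectance)

# A reflectance of 1.0 (no absorption) maps to 0; lower reflectance
# (stronger absorption) yields larger K/S values.
for r in (1.0, 0.5, 0.1):
    print(r, round(kubelka_munk(r), 3))
```

The classical form assumes isotropic scattering; the extension described in the abstract relaxes exactly that assumption.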
Assessing the robustness of deep neural networks against out-of-distribution inputs is crucial, especially in safety-critical domains like autonomous driving, but also in safety systems where malicious actors can digitally alter inputs to circumvent safety guards. However, designing effective out-of-distribution tests that encompass all possible scenarios while preserving accurate label information is a challenging task. Existing methodologies often entail a compromise in the variety or the constraint levels of attacks, and sometimes even both. In a first step towards a more holistic robustness evaluation of image classification models, we introduce an attack method based on image solarization that is conceptually straightforward yet avoids jeopardizing the global structure of natural images, independent of the intensity. Through comprehensive evaluations of multiple ImageNet models, we demonstrate the attack's capacity to degrade accuracy significantly, provided it is not integrated into the training augmentations. Interestingly, even then, no full immunity to accuracy deterioration is achieved. In other settings, the attack can often be simplified into a black-box attack with model-independent parameters. Defenses against other corruptions do not consistently extend to be effective against our specific attack.
Project website: https://github.com/paulgavrikov/adversarial_solarization
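The underlying image operation is classical solarization: inverting pixel values above a threshold. A minimal sketch (the plain image transform only; the attack's threshold search and evaluation protocol are in the paper and repository):

```python
import numpy as np

def solarize(image, threshold=128):
    # Invert all 8-bit pixel values above the threshold; pixels at or
    # below the threshold are left untouched, which is why the global
    # structure of the image is largely preserved.
    image = np.asarray(image, dtype=np.uint8)
    return np.where(image > threshold, 255 - image, image).astype(np.uint8)

img = np.array([[10, 200], [128, 255]], dtype=np.uint8)
print(solarize(img))  # bright pixels 200 and 255 become 55 and 0
```

Because the transform depends on a single scalar threshold, sweeping it yields the model-independent black-box attack parameters mentioned in the abstract.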
Electrolyte-Gated Field-Effect Transistors Based on Oxide Semiconductors: Fabrication and Modeling
(2017)
Bluetooth Low Energy extends the Bluetooth standard in version 4.0 for ultra-low energy applications through the extensive usage of low-power sleep periods, which are inherently difficult in frequency-hopping technologies. This paper gives an introduction to the specifics of the Bluetooth Low Energy protocol, shows a sample implementation in which an embedded device is controlled by an Android smartphone, and presents the results of timing and current consumption measurements.
Our media-artistic performances and installations, INTERCORPOREAL SPLITS (2010–2013), BUZZ (2014–2015), WASTELAND (2015–2016), as well as our new collaboration with Bruno Latour, DE\GLOBALIZE (2018–2020), are not just about polyphony. Here, however, we rediscover them under this heading, thus giving them a new twist, while mapping out issues, mechanisms and functional modes of the polyphonic.
Specific prototypes of sedimentation field flow fractionation (SdFFF) devices have been developed with relative success for cell sorting. However, no data are available to compare these apparatuses with commercial ones. In order to compare with other devices mainly used for non-biological species, biocompatible systems were used to develop separations of standard particles (latex: 3–10 μm, of different size dispersities). In order to enhance size-dependent separations, channels of reduced thickness were used (80 and 100 μm) and channel/carrier-phase equilibration procedures were necessary. For sample injection, the use of inlet tubing linked to the FFF accumulation wall, common for cell sorting, can be extended to latex species when they are eluted in the Steric Hyperlayer elution mode. It avoids any primary relaxation steps (stop-flow injection procedure), simplifying series of elution processing. Mixtures composed of four different monodispersed latex beads can be eluted in 6 min with 100 μm channel thickness.
Fix your downsampling ASAP! Be natively more robust via Aliasing and Spectral Artifact free Pooling
(2023)
Convolutional neural networks encode images through a sequence of convolutions, normalizations and non-linearities as well as downsampling operations into potentially strong semantic embeddings. Yet, previous work showed that even slight mistakes during sampling, leading to aliasing, can be directly attributed to the networks' lack of robustness. To address such issues and facilitate simpler and faster adversarial training, [12] recently proposed FLC pooling, a method for provably alias-free downsampling - in theory. In this work, we conduct a further analysis through the lens of signal processing and find that such current pooling methods, which address aliasing in the frequency domain, are still prone to spectral leakage artifacts. Hence, we propose aliasing and spectral artifact-free pooling, short ASAP. While only introducing a few modifications to FLC pooling, networks using ASAP as a downsampling method exhibit higher native robustness against common corruptions, a property that FLC pooling was missing. ASAP also increases native robustness against adversarial attacks on high and low resolution data while maintaining similar clean accuracy or even outperforming the baseline.
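The frequency-domain downsampling idea underlying FLC pooling can be sketched in NumPy (a simplified single-channel illustration in the spirit of the method, without the windowing modifications that ASAP adds against spectral leakage):

```python
import numpy as np

def frequency_lowpass_pool(feature_map):
    # Alias-free 2x downsampling in the frequency domain: transform the
    # feature map, keep only the central low-frequency quadrant, and
    # transform back. Discarding high frequencies before subsampling is
    # what prevents aliasing.
    h, w = feature_map.shape
    spec = np.fft.fftshift(np.fft.fft2(feature_map))
    ch, cw = h // 2, w // 2
    crop = spec[ch - h // 4: ch + h // 4, cw - w // 4: cw + w // 4]
    # Divide by 4 to compensate for the smaller inverse-FFT normalization,
    # preserving the overall intensity scale.
    return np.real(np.fft.ifft2(np.fft.ifftshift(crop))) / 4.0

x = np.random.default_rng(0).normal(size=(32, 32))
print(frequency_lowpass_pool(x).shape)  # -> (16, 16)
```

The hard rectangular crop shown here is exactly the step that can introduce spectral leakage artifacts, which is the issue the ASAP modifications target.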
Diode-array planar chromatography is a versatile tool for the identification of pharmaceutical substances. In this paper thirty-three compounds with benzodiazepine properties were investigated and the separation conditions for silica gel HPTLC plates and three mobile phases were optimized. Diode-array HPTLC makes it possible to identify all the compounds with high certainty down to a level of 20 ng. An algorithm for spectral recognition, which is combined with the RF values from the three separation steps into one fit factor, is presented. This set of data is unique for each of the compounds investigated and enables unequivocal identification. The method is rapid, inexpensive, and sensitive down to a level of 20 ng mL−1.
Formal Description of Inductive Air Interfaces Using Thévenin's Theorem and Numerical Analysis
(2014)
With the development of new integrated circuits to interface radio frequency identification protocols, inductive air interfaces have become more and more important. Near field communication not only enables communication, but also makes it possible to transfer power wirelessly and to build passive devices for logistical and medical applications. In this way, power management on the transponder becomes more and more relevant. A designer has to optimize power consumption as well as energy harvesting from the magnetic field. This paper discusses a model with simple equations to improve transponder antenna matching. Furthermore, a new numerical analysis technique is presented to calculate the coupling factors, inductances, and magnetic fields of multi-antenna systems.
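One standard building block for numerically computing the coupling of inductive antennas is the mutual inductance of two loops via the Neumann double integral. A minimal sketch for two coaxial circular loops (a generic textbook approach under stated assumptions, not the paper's specific technique; loop radii, distance and discretization are illustrative):

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability in H/m

def mutual_inductance(r1, r2, distance, n=400):
    # Neumann formula M = mu0/(4*pi) * closed double integral of
    # dl1 . dl2 / |r1 - r2|, evaluated by discretizing both loops
    # into n straight segments each and summing.
    phi = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    p1 = np.stack([r1 * np.cos(phi), r1 * np.sin(phi), np.zeros(n)], axis=1)
    p2 = np.stack([r2 * np.cos(phi), r2 * np.sin(phi),
                   np.full(n, distance)], axis=1)
    d1 = np.roll(p1, -1, axis=0) - p1  # segment vectors dl1
    d2 = np.roll(p2, -1, axis=0) - p2  # segment vectors dl2
    total = 0.0
    for i in range(n):
        dist = np.linalg.norm(p2 - p1[i], axis=1)
        total += np.sum(d2 @ d1[i] / dist)
    return MU0 / (4.0 * np.pi) * total

# The coupling factor then follows as k = M / sqrt(L1 * L2).
print(mutual_inductance(0.05, 0.05, 0.05) > 0)  # -> True
```

As expected physically, the computed mutual inductance is positive for co-oriented loops and decreases as the loop separation grows.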