High-tech running shoes and spikes ("super-footwear") are currently being debated in sports. There is direct evidence that distance running super shoes improve running economy; however, it is not well established to what extent world-class performances are affected over the range of track and road running events.
This study examined publicly available performance datasets of annual best track and road performances for evidence of potential systematic performance effects following the introduction of super footwear. The analysis was based on the 100 best performances per year for men and women in outdoor events from 2010 to 2022, provided by the world governing body of athletics (World Athletics).
We found evidence of progressing improvements in track and road running performances after the introduction of super distance running shoes in 2016 and super spike technology in 2019. This evidence is more pronounced for distances longer than 1500 m in women and longer than 5000 m in men. Women seem to benefit more from super footwear in distance running events than men.
While the observational study design limits causal inference, this study provides a database on potential systematic performance effects following the introduction of super shoes/spikes in track and road running events in world-class athletes. Further research is needed to examine the underlying mechanisms and, in particular, potential sex differences in the performance effects of super footwear.
Objective: To identify and evaluate the evidence of the most relevant running-related risk factors (RRRFs) for running-related overuse injuries (ROIs) and to suggest future research directions.
Design: Systematic review considering prospective and retrospective studies. (PROSPERO_ID: 236832)
Data sources: PubMed and Connected Papers. The search was performed in February 2021.
Eligibility criteria: English language. Studies on participants whose primary sport is running addressing the risk for the seven most common ROIs and at least one kinematic, kinetic (including pressure measurements), or electromyographic RRRF. An RRRF needed to be identified in at least one prospective or two retrospective studies.
Results: Sixty-two articles fulfilled our eligibility criteria. Levels of evidence for specific ROIs ranged from conflicting to moderate evidence. Running populations and methods applied varied considerably between studies. While some RRRFs appeared for several ROIs, most RRRFs were specific for a particular ROI. The biomechanical measurements performed in many studies would have allowed for consideration of many more RRRFs than have been reported, highlighting a potential for more effective data usage in the future.
Conclusion: This study offers a comprehensive overview of RRRFs for the most common ROIs, which might serve as a starting point to develop ROI-specific risk profiles of individual runners. Future work should use macroscopic (big data) approaches involving long-term data collections in the real world and microscopic approaches involving precise stress calculations using recent developments in biomechanical modelling. However, consensus on data collection standards (including the quantification of workload and stress tolerance variables and the reporting of injuries) is warranted.
A benchmark analysis of Long Range (LoRa™) communication at 2.45 GHz for safety applications
(2014)
Rotation of an elastic medium gives rise to a shift of the frequency of its acoustic modes, i.e., the time-periodic vibrations that exist in it. This frequency shift is investigated by applying perturbation theory in the regime of small ratios of the rotation velocity to the frequency of the acoustic mode. In an expansion of the relative frequency shift in powers of this ratio, upper bounds are derived for the first-order and the second-order terms. The derivation of the theoretical upper bounds of the first-order term is presented for linear vibration modes as well as for stable nonlinear vibrations with periodic time dependence that can be represented by a Fourier series.
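The structure of this expansion can be written compactly (the symbols below are our own notation, not necessarily the paper's; Ω denotes the rotation rate, ω the mode frequency, and C₁, C₂ the derived upper bounds):

```latex
\frac{\Delta\omega}{\omega}
  = c_1\,\frac{\Omega}{\omega}
  + c_2\left(\frac{\Omega}{\omega}\right)^{2}
  + \mathcal{O}\!\left(\left(\frac{\Omega}{\omega}\right)^{3}\right),
\qquad |c_1| \le C_1, \quad |c_2| \le C_2 .
```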
Mice and rats make up 95% of all animals used in medical research and drug discovery and development. Monitoring of physiological functions such as ECG, blood pressure, and body temperature over the entire period of an experiment is often required. Restraining the animals in order to obtain these data can cause great inconvenience. The use of telemetric systems solves this problem and provides more reliable results. However, these devices are mostly equipped with batteries, which limit the time of operation, or they use passive power supplies, which restricts the operating range. The semi-passive telemetric implant presented here is based on RFID technology and overcomes these obstacles. The device is inductively powered using the magnetic field of a common RFID reader device underneath the cage, but is also able to operate for several hours autonomously. Being independent of the battery capacity, it is possible to use the implant over a long period of time or to re-use the device several times in different animals, thus avoiding the disadvantages of existing systems and reducing the costs of purchase and refurbishment.
Formal Description of Inductive Air Interfaces Using Thévenin's Theorem and Numerical Analysis
(2014)
With the development of new integrated circuits to interface radio frequency identification protocols, inductive air interfaces have become more and more important. Near-field communication not only carries data, but also makes it possible to transfer power wirelessly and to build passive devices for logistical and medical applications. As a result, power management on the transponder becomes more and more relevant. A designer has to optimize power consumption as well as energy harvesting from the magnetic field. This paper discusses a model with simple equations to improve transponder antenna matching. Furthermore, a new numerical analysis technique is presented to calculate the coupling factors, inductances, and magnetic fields of multi-antenna systems.
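As a small illustration of antenna matching in such inductive interfaces, the following sketch computes the tuning capacitance that brings a transponder coil into resonance with the 13.56 MHz HF carrier (standard LC resonance circuit theory, not an equation from this paper; the coil value is a hypothetical example):

```python
import math

F0 = 13.56e6  # HF RFID carrier frequency in Hz (ISO 14443 / ISO 15693)

def tuning_capacitance(inductance_h: float) -> float:
    # LC resonance condition f0 = 1 / (2*pi*sqrt(L*C)), solved for
    # the capacitance C that tunes a coil of inductance L to f0.
    return 1.0 / ((2 * math.pi * F0) ** 2 * inductance_h)

# Example: a hypothetical 2.5 uH transponder coil.
c = tuning_capacitance(2.5e-6)
print(f"tuning capacitance: {c * 1e12:.1f} pF")  # about 55 pF
```

In practice the matching network must also account for coil resistance and the chip's input capacitance, which is where the paper's more detailed model comes in.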
A Survey of Channel Measurements and Models for Current and Future Railway Communication Systems
(2016)
Bluetooth Low Energy extends the Bluetooth standard in version 4.0 for ultra-low-energy applications through the extensive usage of low-power sleeping periods, which is inherently difficult in frequency hopping technologies. This paper gives an introduction to the specifics of the Bluetooth Low Energy protocol, shows a sample implementation in which an embedded device is controlled by an Android smartphone, and presents the results of timing and current consumption measurements.
In the area of cloud computing, judging the fulfillment of service-level agreements on a technical level is gaining more and more importance. To support this, we introduce privacy-preserving set relations such as inclusiveness and disjointness based on Bloom filters. We propose to compose them in a slightly different way by applying a keyed hash function. Besides discussing the correctness of set relations, we analyze how this impacts the privacy of the sets' content as well as providing privacy on the sets' cardinality. Indeed, our solution proposes to bring another layer of privacy on the sizes. We are particularly interested in how the overlapping bits of a Bloom filter impact the privacy level of our approach. We concretely apply our solution to a use case of cloud security audit on access control and present our results with real-world parameters.
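The core idea can be sketched as follows (a minimal illustration assuming HMAC-SHA256 as the keyed hash and illustrative filter parameters; the paper's exact construction and parameters may differ): inclusiveness holds when every bit set in one filter is also set in the other, and disjointness when no bits overlap, in both cases up to the usual Bloom filter false positives.

```python
import hashlib
import hmac

M = 256  # filter size in bits (illustrative choice)
K = 4    # number of hash functions (illustrative choice)

def keyed_positions(element: str, key: bytes):
    # Derive K bit positions from a keyed hash (HMAC-SHA256), so only
    # holders of the key can construct or interpret a filter.
    for i in range(K):
        digest = hmac.new(key, f"{i}:{element}".encode(), hashlib.sha256).digest()
        yield int.from_bytes(digest[:4], "big") % M

def bloom(elements, key: bytes) -> int:
    # Encode a set as an M-bit Bloom filter (stored as an int).
    bits = 0
    for e in elements:
        for pos in keyed_positions(e, key):
            bits |= 1 << pos
    return bits

def is_subset(filter_a: int, filter_b: int) -> bool:
    # Inclusiveness: every bit set in A is also set in B
    # (correct up to the usual Bloom filter false positives).
    return filter_a & ~filter_b == 0

def is_disjoint(filter_a: int, filter_b: int) -> bool:
    # Disjointness: no overlapping bits between the two filters.
    return filter_a & filter_b == 0

key = b"shared-audit-key"
a = bloom({"alice", "bob"}, key)
b = bloom({"alice", "bob", "carol"}, key)
print(is_subset(a, b))  # True: {alice, bob} is a subset of {alice, bob, carol}
```

Because the bit positions depend on the secret key, a party without the key learns neither the elements nor, directly, the cardinality from a filter; the paper analyzes how overlapping bits affect this privacy level.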
Featherweight Generic Go (FGG) is a minimal core calculus modeling the essential features of the programming language Go. It includes support for overloaded methods, interface types, structural subtyping and generics. The most straightforward semantic description of the dynamic behavior of FGG programs is to resolve method calls based on runtime type information of the receiver.
This article shows a different approach by defining a type-directed translation from FGG to an untyped lambda-calculus. The translation of an FGG program provides evidence for the availability of methods as additional dictionary parameters, similar to the dictionary-passing approach known from Haskell type classes. Then, method calls can be resolved by a simple lookup of the method definition in the dictionary.
Every program in the image of the translation has the same dynamic semantics as its source FGG program. The proof of this result is based on a syntactic, step-indexed logical relation. The step-index ensures a well-founded definition of the relation in the presence of recursive interface types and recursive methods.
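The dictionary-passing idea behind the translation can be illustrated outside of Go; the following Python fragment (our own illustration, not the article's formal translation) replaces dynamic dispatch on runtime type information with an explicit dictionary lookup:

```python
# Source-style semantics: resolve the method from the receiver's
# runtime type (dynamic dispatch).
class Square:
    def __init__(self, side):
        self.side = side
    def area(self):
        return self.side ** 2

def area_dynamic(shape):
    return shape.area()  # looked up via the receiver's runtime type

# Translated style: the caller passes a dictionary that witnesses the
# availability of the interface's methods; a call is a plain lookup.
def area_translated(dict_shape, value):
    return dict_shape["area"](value)

square_dict = {"area": lambda s: s["side"] ** 2}

print(area_dynamic(Square(3)))                    # 9
print(area_translated(square_dict, {"side": 3}))  # 9
```

The dictionary plays the role of the evidence parameter in the article's translation, much like the dictionaries known from Haskell type classes.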
An interlaboratory comparison was carried out to evaluate the effectiveness of a method based on HPTLC in which reagent-free derivatization is followed by UV/fluorescence detection. The method was tested for the determination of sucralose (C12H19Cl3O8; (2R,3R,4R,5S,6R)-2-[(2R,3S,4S,5S)-2,5-bis(chloromethyl)-3,4-dihydroxyoxolan-2-yl]oxy-5-chloro-6-(hydroxymethyl)oxane-3,4-diol; CAS Registry No. 56038-13-2) in carbonated and still beverages at the proposed European regulatory limits. For still beverages, a portion of the sample was diluted with methanol-water. For carbonated beverages, a portion of the sample was degassed in an ultrasonic bath before dilution. Turbid beverages were filtered after dilution through an HPLC syringe filter. The separation of sucralose was performed by direct application on amino-bonded (NH2) silica gel HPTLC plates (no cleanup needed) with the mobile phase acetonitrile-water. Sucralose was determined after reagent-free derivatization at 190 °C; it was quantified by measurements of both UV absorption and fluorescence. The samples, both spiked and containing sucralose, were sent to 14 laboratories in five different countries. Test portions of a sample found to contain no sucralose were spiked at levels of 30.5, 100.7, and 299 mg/L. Recoveries ranged from 104.3 to 124.6% and averaged 112% for determination by UV detection; recoveries ranged from 98.4 to 101.3% and averaged 99.9% for determination by fluorescence detection. On the basis of the results for spiked samples (blind duplicates at three levels), as well as sucralose-containing samples (blind duplicates at three levels and one split level), the RSDr values ranged from 10.3 to 31.4% for determinations by UV detection and from 8.9 to 15.9% for determinations by fluorescence detection. The RSDR values ranged from 13.5 to 31.4% for determinations by UV detection and from 8.9 to 20.7% for determinations by fluorescence detection.
We present a videodensitometric quantification method for methadone in syrup, separated by thin-layer chromatography (TLC). The quantification is based on a derivatization reaction with Dragendorff reagent. Measurements were carried out using a 16-bit flatbed scanner. The range of linearity covers two orders of magnitude when the Kubelka-Munk expression is used for data transformation. The separation method is inexpensive, fast, and reliable.
Diode-array planar chromatography is a versatile tool for the identification of pharmaceutical substances. In this paper, thirty-three compounds with benzodiazepine properties were investigated, and the separation conditions for silica gel HPTLC plates and three mobile phases were optimized. Diode-array HPTLC makes it possible to identify all the compounds with high certainty down to a level of 20 ng. An algorithm for spectral recognition is presented, which combines the spectral match with the RF values from the three separation steps into one fit factor. This set of data is unique for each of the compounds investigated and enables unequivocal identification. The method is rapid, inexpensive, and sensitive down to a level of 20 ng mL−1.
In thin-layer chromatography, the development step distributes the sample throughout the layer, a process which strongly affects the reflection signals. The essential requirement for quantitative thin-layer chromatography is not a constant sample concentration but a constant sample distribution in each sample spot. This makes evaporation of the mobile phase extremely important, because all tracks of a TLC plate must be dried uniformly. This paper shows that quantitative TLC is possible even if the concentration of the sample is not constant throughout the layer or if the distribution of the sample is not known. With uniform sample distribution, classical Kubelka-Munk theory is valid for isotropic scattering only. In the absence of this constraint, classical Kubelka-Munk theory must be extended to situations where scattering is asymmetric. This can be achieved by modification of the original Kubelka-Munk equation. An extended theory is presented which is not only capable of describing asymmetric scattering in TLC layers but also includes a formula for absorption and fluorescence in diode-array TLC. With this new theory, all the different formulas for diode-array thin-layer chromatographic evaluation are combined in one expression.
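For reference, the classical Kubelka-Munk relation that the extended theory generalizes links the absorption coefficient K, the scattering coefficient S, and the reflectance R∞ of an optically thick layer (standard textbook form, quoted here for orientation):

```latex
\frac{K}{S} = \frac{\left(1 - R_\infty\right)^{2}}{2\,R_\infty}
```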
The communication technologies for automatic meter reading (smart metering) and for energy production and distribution networks (smart grid) have the potential to be one of the first really highly scaled machine-to-machine (M2M) applications. During the last years, two very promising developments around the wireless part of smart grid communication were initiated, which possibly have an impact on markets far beyond Europe and far beyond energy automation. Besides the specifications of the Open Metering System (OMS) Group, the German Federal Office for Information Security (Bundesamt für Sicherheit in der Informationstechnik, BSI) has designed a protection profile (PP) and a technical directive (TR) for the communication unit of an intelligent measurement system (smart meter gateway), which were released in March 2013. This design uses state-of-the-art technologies and prescribes their implementation in real-life systems. At first sight, the expenditures for the prescribed solutions seem significant. But in the long run, this path is inevitable and comes with strategic advantages.
CNN-based deep learning models for disease detection have become popular recently. We compared the binary classification performance of eight prominent deep learning models (DenseNet121, DenseNet169, DenseNet201, EfficientNet-b0, EfficientNet-lite4, GoogLeNet, MobileNet, and ResNet18) on a combined pulmonary chest X-ray dataset. Despite their widespread application to medical images in different fields, there remains a knowledge gap in determining their relative performance when applied to the same dataset, a gap this study aimed to address. The dataset combined data from Shenzhen, China (CH) and Montgomery, USA (MC). We trained each model for binary classification, calculated different performance parameters for the mentioned models, and compared them. All models were trained with the same training parameters to maintain a controlled comparison environment. At the end of the study, we found a distinct difference in performance among the models when applied to the pulmonary chest X-ray image dataset, where DenseNet169 achieved a precision of 89.38 percent and MobileNet a precision of 92.2 percent.
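The precision values quoted above follow the standard definition; as a minimal sketch (with hypothetical confusion counts, not the study's data):

```python
def precision(tp: int, fp: int) -> float:
    # Precision = TP / (TP + FP): the fraction of positive
    # predictions that are actually positive.
    return tp / (tp + fp)

# Hypothetical confusion counts for two models on the same test set.
print(f"model A precision: {precision(tp=830, fp=70):.3f}")  # 0.922
print(f"model B precision: {precision(tp=810, fp=96):.3f}")
```

Comparing models on identical test data with identical metrics, as done here, is what makes the reported percentages directly comparable.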
In contrast to their traditional, non-interactive counterparts, interactive dynamic visualisations allow users to adapt their form and content to their individual cognitive skills and needs. Provided that the interactive features allow for intuitive use without increasing cognitive load, interactive videos should therefore lead to more efficient forms of learning. This notion was tested in an experimental study in which participants learned to tie four nautical knots of different complexity by watching either non-interactive or interactive videos. The results show that in the interactive condition, participants used the interactive features such as stopping, replaying, reversing, or changing speed to adapt the pace of the video demonstration. This led to an uneven distribution of their attention and cognitive resources across the videos, which was more pronounced for the difficult knots. Consequently, users of non-interactive video presentations needed substantially more time than users of the interactive videos to acquire the necessary skills for tying the knots.
With the rising necessity of explainable artificial intelligence (XAI), we see an increase in task-dependent XAI methods on varying abstraction levels. XAI techniques on a global level explain model behavior and on a local level explain sample predictions. We propose a visual analytics workflow to support seamless transitions between global and local explanations, focusing on attributions and counterfactuals for time series classification. In particular, we adapt local XAI techniques (attributions) that were developed for traditional datasets (images, text) to analyze time series classification, a data type that is typically less intelligible to humans. To generate a global overview, we apply local attribution methods to the data, creating explanations for the whole dataset. These explanations are projected onto two dimensions, depicting model behavior trends, strategies, and decision boundaries. To further inspect the model decision-making as well as potential data errors, a what-if analysis facilitates hypothesis generation and verification on both the global and local levels. We constantly collected and incorporated expert user feedback, as well as insights based on their domain knowledge, resulting in a tailored analysis workflow and system that tightly integrates time series transformations into explanations. Lastly, we present three use cases, verifying that our technique enables users to (1) explore data transformations and feature relevance, (2) identify model behavior and decision boundaries, and (3) identify the reasons for misclassifications.
The identification of vulnerabilities is an important element of the software development life cycle to ensure the security of software. While vulnerability identification based on source code is a well-studied field, the identification of vulnerabilities on the basis of a binary executable without the corresponding source code is more challenging. Recent research has shown how such detection can be achieved by deep learning methods. However, that particular approach is limited to the identification of only four types of vulnerabilities. Subsequently, we analyze to what extent we could cover the identification of a larger variety of vulnerabilities. For this purpose, a supervised deep learning approach using recurrent neural networks for vulnerability detection based on binary executables is used. The underlying basis is a dataset of 50,651 samples of vulnerable code in the form of a standardized LLVM Intermediate Representation. The vectorised features of a Word2Vec model are used to train different variations of three basic architectures of recurrent neural networks (GRU, LSTM, SRNN). A binary classification model was established for detecting the presence of an arbitrary vulnerability, and a multi-class model was trained for the identification of the exact vulnerability; these achieved an out-of-sample accuracy of 88% and 77%, respectively. Differences in the detection of different vulnerabilities were also observed, with non-vulnerable samples being detected with a particularly high precision of over 98%. Thus, the methodology presented allows an accurate detection of 23 (compared to 4) vulnerabilities.
Multi-agent systems are a subject of continuously increasing interest in the applied technical sciences. Smart grids are one evolving field of application. Numerous smart grid projects with various interpretations of multi-agent systems as a new control concept arose in the last decade. Although several theoretical definitions of the term ‘agent’ exist, there is a lack of practical understanding that might be improved by clearly distinguishing agent technologies from other state-of-the-art control technologies. In this paper, we clarify the differences between controllers, optimizers, learning systems, and agents. Further, we review the most recent smart grid projects and contrast their interpretations with our understanding of agents and multi-agent systems. We point out that multi-agent systems applied in the smart grid can add value when they are understood as fully distributed networks of control entities embedded in dynamic grid environments, able to operate in a cooperative manner and to automatically (re-)configure themselves.
Micro-cracks give rise to non-analytic behavior of the stress-strain relation. For the case of a homogeneous spatial distribution of aligned flat micro-cracks, the influence of this property of the stress-strain relation on harmonic generation is analyzed for Rayleigh waves and for acoustic wedge waves with the help of a simple micromechanical model adopted from the literature. For the efficiencies of harmonic generation of these guided waves, explicit expressions are derived in terms of the corresponding linear wave fields. The initial growth rate of the second harmonic, i.e., the acoustic nonlinearity parameter, has been evaluated numerically for steel as the matrix material. The growth rate of the second harmonic of Rayleigh waves has also been determined for micro-crack distributions with random orientation, using a model expression for the strain energy in terms of strain invariants known in a geophysical context.
Hybrid SPECT/US
(2014)
(1) Background: Little is known about the baroque composer Domenico Scarlatti (1685-1757), whose life was centred behind closed doors at the royal court in Spain. There are no reports about his illnesses. From his compositions, mainly for harpsichord, an outstanding virtuosity can be read. (2) Case Presentation: In this case report, the only known oil painting of Domenico Scarlatti is presented, on which he is about 50 years old. In it one recognizes conspicuous hands with hints of watch glass nails and drumstick fingers. (3) Discussion: Whether Scarlatti had chronic hypoxia of peripheral body regions as a sign of, e.g., bronchial cancer or a severe heart disease, is not known. (4) Conclusions: The above-mentioned signs recorded in the oil painting, even if they were not interpretable at that time, are clearly represented and recorded for us and are open to diagnostic discussion from today's point of view.
The aim of this data collection is to reinforce the evidence of SCS effectiveness in treating neuropathic chronic pain. The very low percentage of undesired side effects or complications reported in our case series suggests that all implants should be performed by similarly well-trained and experienced professionals.
The Raman spectra of the chemical compounds toluene and cyclohexane obtained using a Fourier transform (FT)-Raman spectrometer prototype have been contrasted with the Raman spectra of the same materials collected with two different commercial FT-Raman devices. The FT-Raman spectrometer consists of a Michelson interferometer, a self-designed photon counter, and a reference photodetector. The evaluation of the spectral information, contrary to the commercial devices that commonly use the zero-crossing method, is carried out by re-sampling the Raman scattering and by accurately extracting the optical path information of the Michelson interferometer. The FT-Raman arrangement has been built using conventional parts without disregarding the spectral frequency precision that such FT-Raman instruments usually deliver. No additional complex hardware components or costly software modules have been included in this FT-Raman device. The main Raman lines from the spectra obtained with the three FT-Raman devices have been compared with the Raman lines from the standard Raman spectra of these two materials. The values obtained using the FT-Raman spectrometer prototype have shown a frequency accuracy comparable to that obtained with the commercial devices, without the need for a large investment. Although the proposed FT-Raman prototype cannot be directly compared to the latest generation of commercial FT-Raman spectrometers, such a device could give an opportunity to users who require high frequency precision in their spectral analysis but have rather scarce resources.
We have developed a methodology for the systematic generation of a large image dataset of macerated wood references, which we used to generate image data for nine hardwood genera. This is the basis for a substantial approach to automate, for the first time, the identification of hardwood species in microscopic images of fibrous materials by deep learning. Our methodology includes a flexible pipeline for easy annotation of vessel elements. We compare the performance of different neural network architectures and hyperparameters. Our proposed method performs similarly well to human experts. In the future, this will improve controls on global wood fiber product flows to protect forests.
Today's traffic support environments are distributed by nature. In many cases, the monitoring, control, and guidance of traffic is effected by a federation of coordinating centers, often managed by different organizations, using differing local IT technology and system architecture. Despite the federative character of such systems, maintenance of a consistent overall traffic state is indispensable for safe operation. This project develops a new type of middleware supporting federative systems in the domain of Air Traffic Control (ATC), using OMG's DDS (Data Distribution Service) standard as a contributor.
Running shoes were categorized either as motion control, cushioned, or minimal footwear in the past. Today, these categories blur and are not as clearly defined. Moreover, with the advances in manufacturing processes, it is possible to create individualized running shoes that incorporate features that meet individual biomechanical and experiential needs. However, specific ways to individualize footwear to reduce individual injury risk are poorly understood. Therefore, the purpose of this scoping review was to provide an overview of (1) footwear design features that have the potential for individualization; (2) human biomechanical variability as a theoretical foundation for individualization; (3) the literature on the differential responses to footwear design features between selected groups of individuals. These purposes focus exclusively on reducing running-related risk factors for overuse injuries. We included studies in the English language on adults that analyzed: (1) potential interaction effects between footwear design features and subgroups of runners or covariates (e.g., age, gender) for running-related biomechanical risk factors or injury incidences; (2) footwear perception for a systematically modified footwear design feature. Most of the included articles (n = 107) analyzed male runners. Several footwear design features (e.g., midsole characteristics, upper, outsole profile) show potential for individualization. However, the overall body of literature addressing individualized footwear solutions and the potential to reduce biomechanical risk factors is limited. Future studies should leverage more extensive data collections considering relevant covariates and subgroups while systematically modifying isolated footwear design features to inform footwear individualization.
Convolutional neural networks (CNNs) define the state-of-the-art solution on many perceptual tasks. However, current CNN approaches largely remain vulnerable against adversarial perturbations of the input that have been crafted specifically to fool the system while being quasi-imperceptible to the human eye. In recent years, various approaches have been proposed to defend CNNs against such attacks, for example by model hardening or by adding explicit defence mechanisms. In the latter case, a small “detector” is included in the network and trained on the binary classification task of distinguishing genuine data from data containing adversarial perturbations. In this work, we propose a simple and light-weight detector, which leverages recent findings on the relation between networks' local intrinsic dimensionality (LID) and adversarial attacks. Based on a re-interpretation of the LID measure and several simple adaptations, we surpass the state of the art in adversarial detection by a significant margin and reach almost perfect results in terms of F1-score for several networks and datasets. Sources available at: https://github.com/adverML/multiLID
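The LID measure underlying such detectors is commonly estimated from nearest-neighbour distances with a maximum-likelihood (Hill-type) estimator; the following sketch (our own illustration with synthetic data, not the repository's code) shows the estimator at work:

```python
import math
import random

def lid_mle(distances):
    # Maximum-likelihood (Hill-type) LID estimate from the sorted
    # k-nearest-neighbour distances r_1 <= ... <= r_k:
    #   LID ~= -( (1/k) * sum_i log(r_i / r_k) )^(-1)
    r = sorted(distances)
    k = len(r)
    return -1.0 / (sum(math.log(x / r[-1]) for x in r) / k)

# Synthetic check: for points filling a d-dimensional neighbourhood,
# the estimate should come out near d.
random.seed(0)
d = 5
points = [[random.gauss(0.0, 1.0) for _ in range(d)] for _ in range(2000)]
dists = sorted(math.sqrt(sum(c * c for c in p)) for p in points)[:100]
print(f"estimated LID: {lid_mle(dists):.1f}")
```

Adversarial examples tend to exhibit elevated LID estimates relative to genuine data, which is the signal a detector of this kind feeds to its binary classifier.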
Silicon edges as one-dimensional waveguides for dispersion-free and supersonic leaky wedge waves
(2012)
Acoustic waves guided by the cleaved edge of a Si(111) crystal were studied using a laser-based angle-tunable transducer for selectively launching isolated wedge or surface modes. A supersonic leaky wedge wave and the fundamental wedge wave were observed experimentally and confirmed theoretically. Coupling of the supersonic wave to shear waves is discussed, and its leakage into the surface acoustic wave was observed directly. The velocity and penetration depth of the wedge waves were determined by contact-free optical probing. Thus, a detailed experimental and theoretical study of linear one-dimensional guided modes in silicon is presented.
In the modern knowledge-based and digital economy, the value of knowledge is growing relative to other assets and new intellectual property is being created at an ever-increasing rate. Therefore, the ability to find non-trivial solutions, systematically generate new concepts, and create intellectual property rapidly become crucial to achieving competitive advantage and leveraging the intellectual potential of organizations.
Using patent information for identification of new product features with high market potential
(2014)
The paper conceptualizes a systemic approach for enhancing the innovative and competitive capacity of industrial companies (named the Advanced Innovation Design Approach, AIDA), including analysis, optimization, and further development of the innovation process and promotion of the innovation climate in industrial companies. The innovation process is understood as a holistic stage-gate system comprising the following typical phases with feedback loops and simultaneous auxiliary or follow-up processes: uncovering of solution-neutral customer needs, technology and market trends; identification of the needs and problems with high market potential and formulation of the innovation tasks and strategy; idea generation and problem solving; evaluation and enhancement of solution ideas; creation of innovation concepts based on solution ideas; evaluation of the innovation concepts; and implementation, validation, and market launch of chosen innovation concepts. The article presents the current state of innovation research and discusses the actual status of the innovation process in the industrial environment. It defines future research tasks for amplifying the innovation process with self-configuration, self-optimization, self-diagnostics, and intelligent information processing and communication.
The Advanced Innovation Design Approach is a holistic methodology for enhancing the innovative and competitive capability of industrial companies. AIDA can be considered an open mindset and an individually adaptable range of the strongest innovation techniques, such as a comprehensive front-end innovation process, advanced innovation methods, the best tools and methods of the TRIZ methodology, organizational measures for accelerating innovation, IT solutions for Computer-Aided Innovation, and other innovation methods elaborated over the past decade in industry and academia.
The European TRIZ Association ETRIA acts as a connecting link between scientific institutions, universities and other educational organizations, industrial companies, and individuals concerned with conceptual and practical questions relating to the organization of the innovation process, invention methods, and innovation knowledge. In the meantime, more than 1,000 TFC papers and presentations by scientists, educators, and practitioners from all over the world are available on the official ETRIA website. Numerous research projects were supported or funded by the European Commission.
Socially assistive robots (SARs) are becoming more prevalent in everyday life, emphasizing the need to make them socially acceptable and aligned with users' expectations. Robots' appearance impacts users' behaviors and attitudes towards them. Therefore, product designers choose visual qualities to give the robot a character and to imply its functionality and personality. In this work, we sought to investigate the effect of cultural differences on Israeli and German designers' perceptions and preferences regarding the suitable visual qualities of SARs in four different contexts: a service robot for an assisted living/retirement residence facility, a medical assistant robot for a hospital environment, a COVID-19 officer robot, and a personal assistant robot for domestic use. Our results indicate that Israeli and German designers share similar perceptions of visual qualities and most of the robotics roles. However, we found differences in the perception of the COVID-19 officer robot's role and, by that, its most suitable visual design. This work indicates that context and culture play a role in users' perceptions and expectations; therefore, they should be taken into account when designing new SARs for diverse contexts.
Entity Matching (EM) defines the task of learning to group objects by transferring semantic concepts from example groups (=entities) to unseen data. Despite the general availability of image data in the context of many EM problems, most currently available EM algorithms rely solely on (textual) meta data. In this paper, we introduce the first publicly available large-scale dataset for "visual entity matching", based on a production-level use case in the retail domain. Using scanned advertisement leaflets, collected over several years from different European retailers, we provide a total of ~786k manually annotated, high-resolution product images containing ~18k different individual retail products which are grouped into ~3k entities. The annotation of these product entities is based on a price comparison task, where each entity forms an equivalence class of comparable products. Following a first baseline evaluation, we show that the proposed "visual entity matching" constitutes a novel learning problem which cannot be sufficiently solved using standard image-based classification and retrieval algorithms. Instead, novel approaches are needed which allow transferring example-based visual equivalence classes to new data. The aim of this paper is to provide a benchmark for such algorithms.
Information about the dataset, evaluation code and download instructions are provided under https://www.retail-786k.org/.
Recent advances in spiked shoe design, characterized by increased longitudinal stiffness, thicker midsole foams, and reconfigured geometry, are considered to improve sprint performance. However, no empirical data on the effects of advanced spike technology on maximal sprinting speed (MSS) have been published so far. Consequently, we assessed MSS via ‘flying 30 m’ sprints of 44 trained male (PR: 10.32 s - 12.08 s) and female (PR: 11.56 s - 14.18 s) athletes, wearing both traditional and advanced spikes in a randomized, repeated measures design. The results revealed a statistically significant increase in MSS of 1.21% on average when using advanced spike technology. Notably, 87% of participants showed improved MSS with the use of advanced spikes. A cluster analysis unveiled that athletes with higher MSS may benefit to a greater extent. However, individual responses varied widely, suggesting the influence of multiple factors that need detailed exploration. Therefore, coaches and athletes are advised to interpret the promising performance enhancements cautiously and to evaluate critically whether advanced spike technology is appropriate for their athletes.
In an extensive research project, we have assessed the application of different service models by export credit agencies (ECAs) and export-import banks (EXIMs). We conducted interviews with 35 representatives of ECAs and EXIMs from 27 countries. The question guiding this study is: How do ECAs and EXIMs adopt public service models for supporting exporters? We conducted a holistic multiple case study, investigating if and how these organisations apply public service models developed by Schedler and Guenduez, and which roles of the state are relevant. We find that there is a variety of different service models used by ECAs and EXIMs, and that the service model approaches have great potential to learn from each other and innovate existing services.
Risk aversion, financing and real services
The Global CEO Survey was launched in 2015 by researchers from Offenburg University, the University of Westminster and the London School of Economics and Political Science (LSE) to better understand and discover what factors influence exporters’ demand for credit insurance. Although some scholars have discussed aspects of corporate insurance demand with regard to exporters, there is limited research concerning the demand for export credit insurance associated with firm-specific factors. Only a few empirical studies support existing theories on corporate insurance demand and export credits. This project investigates and fills the relevant gap in research on official export credit insurance demand.
Excellent organisations require targeted strategies to implement their vision and mission, deploying a stakeholder-focused approach. As part of evidence-based policy making, it is a common approach to measure the results of government financing vehicles. A state-of-the-art method in quantitative benchmarking that overcomes the challenge of considering multiple inputs and outputs is Data Envelopment Analysis (DEA). Descriptive statistics and explorative-qualitative approaches are also applied in a modern ECA benchmarking model to substantiate DEA results and put them into perspective. This enabler-result model provides a holistic view and allows the identification of top-performing ECAs and Exim-Banks, providing the opportunity for inefficient institutions to learn from their most productive peers. This best-practice approach to strategic benchmarking enables senior management to develop and implement a cutting-edge strategy and increase value for key stakeholders.
Creating growth through trade is an important part of the policy approach of many economies. For decades, many member countries of the Organisation for Economic Co-operation and Development (OECD) have cooperated in a fair competition for the benefit of their national exporters. The countries’ official export credit agencies (ECAs) have established and jointly improved rules and regulations for export credit and political risk insurance. However, new players such as China, Russia or other fast developing countries have now joined the list of top exporting nations. As these countries have established their own ECAs, there is a need to introduce rules and regulations on global standards for financial terms as well as truly international norms ensuring ‘ethical’ trading behaviour.
But what will government support for foreign trade look like in the future? Will global standards for export credit and political risk insurance become reality by 2020? And how will strict rules and regulations for officially supported export credits and FDI regarding ethics, human rights and the environment impact growth through trade in general, or exporters in particular? These are the questions addressed by the thirty-eight contributions to Global Policy’s third eBook entitled ‘The Future of Foreign Trade Support – Setting Global Standards for Export Credit and Political Risk Insurance’, guest edited by Andreas Klasen and Fiona Bannert.
Financing trade and development sustainably will be crucial for Africa. Enhanced collaboration between multilateral development banks, development finance institutions and ECAs could greatly enhance intra-regional trade. Furthermore, setting up a ‘level playing field’ on the continent will allow governments to make strategic interventions for successful export credits and trade finance solutions, fostering growth through trade. African trade is already showing signs of rebounding from the coronavirus-induced recession. Through concerted, co-operative and continent-wide efforts, drawing on the knowledge and resources of all types of institutions and policy experts, Africa will continue to grow confidently and quickly into its increasingly important role as an engine of economic growth and global trade.
Multiple Object Tracking (MOT) is a long-standing task in computer vision. Current approaches based on the tracking-by-detection paradigm either require some sort of domain knowledge or supervision to associate data correctly into tracks. In this work, we present an unsupervised multiple object tracking approach based on visual features and minimum cost lifted multicuts. Our method is based on straightforward spatio-temporal cues that can be extracted from neighboring frames in an image sequence without supervision. Clustering based on these cues enables us to learn the required appearance invariances for the tracking task at hand and to train an autoencoder to generate suitable latent representations. The resulting latent representations can thus serve as robust appearance cues for tracking, even over large temporal distances where no reliable spatio-temporal features can be extracted. We show that, despite being trained without using the provided annotations, our model provides competitive results on the challenging MOT Benchmark for pedestrian tracking.
The mathematical representation of data in the Spherical Harmonic (SH) domain has recently regained interest in the machine learning community. This technical report gives an in-depth introduction to the theoretical foundations and practical implementation of SH representations, summarizing work on rotation-invariant and equivariant features as well as convolutions and exact correlations of signals on spheres. These methods are then generalized from scalar SH representations to Vectorial Harmonics (VH), providing the same capabilities for 3D vector fields on spheres.
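As a minimal illustration of the rotation-invariant features surveyed in the report, the sketch below (an illustrative example, not code from the report) uses the real spherical harmonics of degree l = 1, which are linear in the Cartesian components of a unit vector; summing the squared coefficients within a degree yields a "power spectrum" value that is unchanged under rotation.

```python
import numpy as np

def sh_l1(v):
    """Real spherical harmonics of degree l = 1 at a unit vector v = (x, y, z).
    Up to a common constant they are proportional to (y, z, x)."""
    c = np.sqrt(3.0 / (4.0 * np.pi))
    return c * np.array([v[1], v[2], v[0]])

def power_l1(v):
    """Per-degree power: sum over m of squared coefficients -> rotation invariant."""
    return float(np.sum(sh_l1(v) ** 2))

def rot_z(v, angle):
    """Rotate a 3-vector about the z-axis."""
    ca, sa = np.cos(angle), np.sin(angle)
    return np.array([ca * v[0] - sa * v[1], sa * v[0] + ca * v[1], v[2]])

v = np.array([0.6, 0.0, 0.8])  # unit vector
# The power spectrum is identical before and after rotation.
assert abs(power_l1(v) - power_l1(rot_z(v, 1.234))) < 1e-12
```

For l = 1 this invariance is elementary: the coefficients are proportional to the vector's components, so their sum of squares equals the squared norm, which any rotation preserves; the report's general construction extends this to all degrees.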
The M-Bus protocol (EN13757) is in widespread use for metering applications within home area and neighborhood area networks, but lacks a strict specification. This may lead to incompatibilities in real-life installations and to problems in the deployment of new M-Bus networks. This paper presents the development of a novel testbed to emulate physical Metering Bus (M-Bus) networks with different topologies and to allow the flexible verification of real M-Bus devices in real-world scenarios. The testbed is designed to support device manufacturers and service technicians in the test and analysis of their devices within a specific network before installation. The testbed is fully programmable, allowing flexible changes of network topologies, cable lengths and cable types. It is easy to use, as only the master and slave devices have to be physically connected. This makes it possible to autonomously perform multiple tests, including automated regression tests. The testbed is available to other researchers and developers. We invite companies and research institutions to use this M-Bus testbed to increase the common knowledge and real-world experience.
The three-wavelength extinction method (3-WEM) was applied for the on-line particle analysis of suspensions of monodisperse latex beads and polydisperse metal oxide particles of industrial interest. Comparative measurements were performed by photon correlation spectroscopy (PCS). The data of latex particles obtained by 3-WEM and PCS are in good agreement with the manufacturer’s values. The values of oxide particles measured by means of the two techniques are also in reasonable agreement despite the irregular particle shape. Discrepancies are observed when comparing the oxide particle size results with those of scanning electron microscopy, which is due to the broad sample distributions and shape irregularities.
Specific prototypes of sedimentation field flow fractionation devices (SdFFF) have been developed with relative success for cell sorting. However, no data are available to compare these apparatus with commercial ones. In order to compare with other devices mainly used for non-biological species, biocompatible systems were used for standard particle (latex: 3–10 μm of different size dispersities) separation development. In order to enhance size dependent separations, channels of reduced thickness were used (80 and 100 μm) and channel/carrier-phase equilibration procedures were necessary. For sample injection, the use of inlet tubing linked to the FFF accumulation wall, common for cell sorting, can be extended to latex species when they are eluted in the Steric Hyperlayer elution mode. It avoids any primary relaxation steps (stop flow injection procedure) simplifying series of elution processing. Mixtures composed of four different monodispersed latex beads can be eluted in 6 min with 100 μm channel thickness.
Additive manufacturing (AM), and in particular 3D multi-material printing, offers completely new production possibilities thanks to the degrees of freedom in design and the simultaneous processing of several materials in one component. Today's CAD systems for product development are volume-based and therefore cannot adequately implement the multi-material approach. Voxel-based CAD systems offer the advantage that a component can be divided into many voxels, and different materials and functions can be assigned to these voxels. In this contribution, two voxel-based CAD systems are analyzed in order to simplify AM at the voxel level with different materials. To this end, a set of suitable criteria for evaluating voxel-based CAD systems is developed and applied. The results of a technical-economic comparison show the differences between the voxel-based systems and disclose their disadvantages compared to conventional CAD systems. In order to overcome these disadvantages, a new method is presented as an approach that enables the voxelization of a component in a simple way, based on a conventional CAD model. The process chain of this new method is demonstrated using a typical component from product design. The results of this implementation of the new method are illustrated and analyzed.
In order to make material design processes more efficient in the future, the underlying multidimensional process parameter spaces must be systematically explored using digitalisation techniques such as machine learning (ML) and digital simulation. In this paper we briefly review essential concepts for the digitalisation of electrodeposition processes, with a special focus on chromium plating from trivalent electrolytes.
The COVID-19 pandemic, a unique and devastating respiratory disease outbreak, has affected global populations as the disease spreads rapidly. Recent Deep Learning breakthroughs may improve COVID-19 prediction and forecasting as tools for precise and fast detection; however, current methods are still being refined to achieve higher accuracy and precision. This study analyzed a collection of 8,055 CT image samples, 5,427 of which were COVID-19 cases and 2,628 non-COVID, and 9,544 X-ray samples comprising 4,044 COVID-19 patients and 5,500 non-COVID cases. The most accurate models are MobileNet V3 (97.872 percent), DenseNet201 (97.567 percent), and GoogleNet Inception V1 (97.643 percent). High accuracy indicates that these models make many correct predictions; other metrics are also high for MobileNet V3 and DenseNet201. An extensive evaluation using accuracy, precision, and recall allows a comprehensive comparison; in this study, the predictive models are improved by combining loss optimization with scalable batch normalization. Our analysis shows that these tactics improve model performance and resilience for advancing COVID-19 prediction and detection, and demonstrates how Deep Learning can improve disease handling. The methods we suggest would help healthcare systems, policymakers, and researchers make informed decisions to reduce the impact of COVID-19 and other contagious diseases.
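The accuracy, precision, and recall used in evaluations like the one above follow directly from the confusion-matrix counts. A minimal sketch (an illustrative helper, not the study's code; the example counts are made up):

```python
def classification_metrics(tp, fp, fn, tn):
    """Standard binary-classification metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)   # fraction of correct predictions
    precision = tp / (tp + fp)                   # how trustworthy a positive call is
    recall = tp / (tp + fn)                      # how many actual positives are found
    return accuracy, precision, recall

# Hypothetical counts: 90 true positives, 10 false positives,
# 5 false negatives, 95 true negatives.
acc, prec, rec = classification_metrics(90, 10, 5, 95)  # acc 0.925, prec 0.9
```

Reporting all three together matters because accuracy alone can hide a poor trade-off on imbalanced data such as the CT/X-ray splits described above.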
Phenolic compounds, such as flavonoids and phenolic acids, are very important substances that occur in various medicinal plants. They show different pharmacological activities which might be useful in the therapy of many diseases. Phenolic compounds have attracted increasing interest in recent years because they are easily oxidized and thus act as strong antioxidants. We present the chemiluminescence of different phenolic compounds measured directly on high-performance thin-layer chromatography LiChrospher® plates using the oxalic acid derivative bis(2,4,6-trichlorophenyl) oxalate (TCPO) in conjunction with H2O2. Our results indicate that chemiluminescence intensity increases with an ascending number of phenolic groups in the molecule. The method can be used to detect phenolic compounds in beverages like coffee, tea, and wine.
Pressure dynamics in metal-oxygen (metal-air) batteries: a case study on sodium superoxide cells
(2014)
Electrochemical reactions in metal–oxygen batteries come along with the consumption or release of gaseous oxygen. We present a novel methodology for investigating electrode reactions and transport phenomena in metal–oxygen batteries by measuring the pressure dynamics in an enclosed gas reservoir above the oxygen electrode. The methodology is exemplified by a room-temperature sodium–oxygen battery forming sodium superoxide (NaO2) in an electrolyte of diethylene glycol dimethyl ether (diglyme) and sodium trifluoromethanesulfonate (NaOSO2CF3, NaOTf). The experiments are supported by microkinetic simulations with a one-dimensional multiphysics continuum model. During galvanostatic cycling over 30 cycles, a constant oxygen consumption/release rate is observed upon discharge/charge. The number of transferred electrons per oxygen molecule is calculated to be 1.01 ± 0.02 and 1.03 ± 0.02 for discharge and charge, respectively, confirming the nature of the oxygen reaction product as superoxide O2–. The same ratio is observed in cyclic voltammetry experiments at low scan rates (<1 mV/s). However, at higher scan rates, the ratio increases as a result of oxygen transport limitations in the electrolyte. We introduce electrochemical pressure impedance spectroscopy (EPIS) for simultaneously analyzing current, voltage, and pressure of electrochemical cells. Pressure recording significantly increases the sensitivity of impedance toward oxygen transport properties of the porous electrode systems. In addition, we report experimental data on the diffusion coefficient and solubility of oxygen in electrolyte solutions as important parameters for the microkinetic models.
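The electrons-per-O2 ratio reported above follows from combining Faraday's law (charge passed → moles of electrons) with the ideal gas law applied to the pressure change in the enclosed reservoir (Δp → moles of O2). A hedged sketch of that calculation, using illustrative numbers rather than data from the paper:

```python
F = 96485.33  # Faraday constant, C/mol
R = 8.314     # universal gas constant, J/(mol K)

def electrons_per_o2(current_a, time_s, dp_pa, volume_m3, temp_k):
    """Electrons transferred per O2 molecule consumed or released.

    current_a * time_s is the charge passed; dp_pa is the magnitude of the
    pressure change in the gas reservoir of the given volume and temperature.
    """
    n_electrons = current_a * time_s / F          # Faraday's law
    n_o2 = dp_pa * volume_m3 / (R * temp_k)       # ideal gas law
    return n_electrons / n_o2

# Illustrative discharge: 1 mA for 1 h with a ~9.24 kPa pressure drop
# in a 10 mL headspace at 298 K -> ratio close to 1, i.e. superoxide.
ratio = electrons_per_o2(1e-3, 3600.0, 9244.0, 10e-6, 298.0)
```

A ratio near 1 indicates a one-electron reduction of O2 (superoxide), while a ratio near 2 would point to peroxide formation, which is how the pressure measurement identifies the reaction product.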
We introduce an open-source Python framework named PHS (Parallel Hyperparameter Search) that enables hyperparameter optimization of arbitrary Python functions across numerous compute instances. This is achieved with minimal modifications to the target function. Typical applications are expensive-to-evaluate numerical computations that depend strongly on hyperparameters, such as machine learning. Bayesian optimization is chosen as a sample-efficient method to propose the next query set of parameters.
Fix your downsampling ASAP! Be natively more robust via Aliasing and Spectral Artifact free Pooling
(2023)
Convolutional neural networks encode images through a sequence of convolutions, normalizations, non-linearities and downsampling operations into potentially strong semantic embeddings. Yet previous work showed that even slight mistakes during sampling, leading to aliasing, can be directly linked to a network's lack of robustness. To address such issues and facilitate simpler and faster adversarial training, [12] recently proposed FLC pooling, a method for provably alias-free downsampling - in theory. In this work, we conduct a further analysis through the lens of signal processing and find that such current pooling methods, which address aliasing in the frequency domain, are still prone to spectral leakage artifacts. Hence, we propose aliasing- and spectral-artifact-free pooling, ASAP for short. While introducing only a few modifications to FLC pooling, networks using ASAP as their downsampling method exhibit higher native robustness against common corruptions, a property that FLC pooling lacked. ASAP also increases native robustness against adversarial attacks on high- and low-resolution data while maintaining similar clean accuracy or even outperforming the baseline.
Motivated by the recent trend towards the usage of larger receptive fields for more context-aware neural networks in vision applications, we aim to investigate how large these receptive fields really need to be. To facilitate such study, several challenges need to be addressed, most importantly: (i) We need to provide an effective way for models to learn large filters (potentially as large as the input data) without increasing their memory consumption during training or inference, (ii) the study of filter sizes has to be decoupled from other effects such as the network width or number of learnable parameters, and (iii) the employed convolution operation should be a plug-and-play module that can replace any conventional convolution in a Convolutional Neural Network (CNN) and allow for an efficient implementation in current frameworks. To facilitate such models, we propose to learn not spatial but frequency representations of filter weights as neural implicit functions, such that even infinitely large filters can be parameterized by only a few learnable weights. The resulting neural implicit frequency CNNs are the first models to achieve results on par with the state-of-the-art on large image classification benchmarks while executing convolutions solely in the frequency domain and can be employed within any CNN architecture. They allow us to provide an extensive analysis of the learned receptive fields. Interestingly, our analysis shows that, although the proposed networks could learn very large convolution kernels, the learned filters practically translate into well-localized and relatively small convolution kernels in the spatial domain.
The energy supply of Offenburg University of Applied Sciences (HS OG) was changed from separate generation to trigeneration in 2007/2008. Trigeneration was installed for supplying heat, cooling and electrical power at HS OG. In this paper, the trigeneration process and its modes of operation, along with the layout of the energy facility at HS OG, are described. Special emphasis is given to the operation schemes and control strategies of the operating modes: winter mode, transition mode and summer mode. The components used in the energy facility are also outlined. Monitoring and data analysis of the energy system were carried out after the commissioning of trigeneration in the period from 2008 to 2011, yielding valuable performance data.
Assessing the robustness of deep neural networks against out-of-distribution inputs is crucial, especially in safety-critical domains like autonomous driving, but also in safety systems where malicious actors can digitally alter inputs to circumvent safety guards. However, designing effective out-of-distribution tests that encompass all possible scenarios while preserving accurate label information is a challenging task. Existing methodologies often entail a compromise between variety and constraint levels for attacks and sometimes even both. In a first step towards a more holistic robustness evaluation of image classification models, we introduce an attack method based on image solarization that is conceptually straightforward yet avoids jeopardizing the global structure of natural images independent of the intensity. Through comprehensive evaluations of multiple ImageNet models, we demonstrate the attack's capacity to degrade accuracy significantly, provided it is not integrated into the training augmentations. Interestingly, even then, no full immunity to accuracy deterioration is achieved. In other settings, the attack can often be simplified into a black-box attack with model-independent parameters. Defenses against other corruptions do not consistently extend to be effective against our specific attack.
Project website: https://github.com/paulgavrikov/adversarial_solarization
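Image solarization itself is a simple pointwise operation: pixel values at or above a threshold are inverted, which preserves the global structure of the image while strongly shifting its statistics. A minimal sketch following the common PIL-style convention (the attack's exact parameterization is not taken from the paper):

```python
import numpy as np

def solarize(image, threshold=128):
    """Invert all uint8 pixel values greater than or equal to the threshold."""
    image = np.asarray(image, dtype=np.uint8)
    return np.where(image >= threshold, 255 - image, image).astype(np.uint8)

# Dark pixels pass through unchanged; bright pixels are mirrored around 255.
pixels = np.array([0, 100, 128, 200, 255], dtype=np.uint8)
out = solarize(pixels)
```

Sweeping the threshold gives a one-parameter family of corruptions, which is what makes the operation attractive as a black-box attack with model-independent parameters.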
Following the traditional paradigm of convolutional neural networks (CNNs), modern CNNs manage to keep pace with more recent, for example transformer-based, models by not only increasing model depth and width but also the kernel size. This results in large amounts of learnable model parameters that need to be handled during training. While following the convolutional paradigm with the according spatial inductive bias, we question the significance of learned convolution filters. In fact, our findings demonstrate that many contemporary CNN architectures can achieve high test accuracies without ever updating randomly initialized (spatial) convolution filters. Instead, simple linear combinations (implemented through efficient 1×1 convolutions) suffice to effectively recombine even random filters into expressive network operators. Furthermore, these combinations of random filters can implicitly regularize the resulting operations, mitigating overfitting and enhancing overall performance and robustness. Conversely, retaining the ability to learn filter updates can impair network performance. Lastly, although we only observe relatively small gains from learning 3×3 convolutions, the learning gains increase proportionally with kernel size, owing to the non-idealities of the independent and identically distributed (i.i.d.) nature of default initialization techniques.
Modern CNNs are learning the weights of vast numbers of convolutional operators. In this paper, we raise the fundamental question if this is actually necessary. We show that even in the extreme case of only randomly initializing and never updating spatial filters, certain CNN architectures can be trained to surpass the accuracy of standard training. By reinterpreting the notion of pointwise (1×1) convolutions as an operator to learn linear combinations (LC) of frozen (random) spatial filters, we are able to analyze these effects and propose a generic LC convolution block that allows tuning of the linear combination rate. Empirically, we show that this approach not only allows us to reach high test accuracies on CIFAR and ImageNet but also has favorable properties regarding model robustness, generalization, sparsity, and the total number of necessary weights. Additionally, we propose a novel weight sharing mechanism, which allows sharing of a single weight tensor between all spatial convolution layers to massively reduce the number of weights.
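The linear-combination idea can be sketched with plain NumPy: spatial responses come from frozen random filters, and the only "trainable" weights are the 1×1 mixing coefficients across those responses. This is a conceptual illustration of the mechanism, not the paper's LC block (which lives inside full CNN architectures with padding, strides and non-linearities):

```python
import numpy as np

rng = np.random.default_rng(0)

def correlate_valid(x, kernel):
    """'valid' 2-D correlation of a single-channel image with a 3x3 kernel."""
    h, w = x.shape
    out = np.empty((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(x[i:i + 3, j:j + 3] * kernel)
    return out

def lc_block(x, frozen_filters, mix):
    """Frozen random spatial filters followed by learnable 1x1 combinations."""
    responses = np.stack([correlate_valid(x, k) for k in frozen_filters])  # (F, H-2, W-2)
    # A 1x1 convolution is exactly a per-pixel linear combination over channels.
    return np.tensordot(mix, responses, axes=([1], [0]))                   # (C_out, H-2, W-2)

frozen = rng.standard_normal((4, 3, 3))   # random spatial filters, never updated
mix = rng.standard_normal((2, 4))         # the only parameters one would train
features = lc_block(rng.standard_normal((8, 8)), frozen, mix)  # shape (2, 6, 6)
```

Only the small `mix` matrix would receive gradient updates during training; the spatial filters stay fixed at their random initialization, mirroring the setup the abstract describes.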