Analysing and predicting the advance rate of a tunnel boring machine (TBM) in hard rock is integral to tunnelling project planning and execution. It has been applied in the industry for several decades with varying success. Most prediction models are based on or designed for large-diameter TBMs, and much research has been conducted on related tunnelling projects. However, only a few models incorporate information from projects with an outer diameter smaller than 5 m, and no penetration prediction model for pipe jacking machines exists to date. In contrast to large TBMs, small-diameter TBMs and their projects have received little attention in research. In general, they are characterised by distinctive features, including insufficient geotechnical information, sometimes rather short drive lengths, special machine designs and partially competing lining methods like pipe jacking and segment lining. A database which covers most of the parameters mentioned above has been compiled to investigate the performance of small-diameter TBMs in hard rock. In order to provide sufficient geological and technical variance, this database contains 37 projects with 70 geotechnically homogeneous areas. Besides the technical parameters, important geotechnical data like lithological information, unconfined compressive strength, tensile strength and point load index are included and evaluated. The analysis shows that segment lining TBMs have considerably higher penetration rates in similar geological and technical settings, mostly due to their design parameters. Different methodologies for predicting TBM penetration, including state-of-the-art models from the literature as well as newly derived regression and machine learning models, are discussed and deployed for backward modelling of the projects contained in the database.
New ranges of application for small-diameter tunnelling in several industry-standard penetration models are presented, and new approaches for the penetration prediction of pipe jacking machines in hard rock are proposed.
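A regression model of the kind mentioned above can be sketched minimally. The power-law form, the data points, and all resulting coefficients below are illustrative assumptions for demonstration only, not values from the project database:

```python
import numpy as np

# Hypothetical sketch: fit penetration = a * UCS^b by linearising to
# log(p) = log(a) + b*log(UCS) and solving a least-squares problem.
# The UCS/penetration pairs are made-up illustrative values.
ucs = np.array([50.0, 80.0, 120.0, 160.0, 200.0])   # unconfined compressive strength, MPa
pen = np.array([12.0, 9.0, 6.5, 5.0, 4.2])          # penetration rate, mm/rev

X = np.column_stack([np.ones_like(ucs), np.log(ucs)])
coef, *_ = np.linalg.lstsq(X, np.log(pen), rcond=None)
log_a, b = coef                                     # b < 0: harder rock, lower penetration

def predict_penetration(ucs_mpa):
    """Predicted penetration rate (mm/rev) for a given UCS."""
    return np.exp(log_a) * ucs_mpa ** b
```

Real models would add further predictors (tensile strength, point load index, machine parameters) and be validated against held-out projects.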
Analysis of Amplitude and Phase Errors in Digital-Beamforming Radars for Automotive Applications
(2020)
Fundamentally, automotive radar sensors with Digital-Beamforming (DBF) use several transmitter and receiver antennas to measure the direction of the target. However, hardware imperfections, tolerances in the feeding lines of the antennas, coupling effects as well as temperature changes and ageing will cause amplitude and phase errors. These errors can lead to misinterpretation of the data and result in hazardous actions of the autonomous system. First, the impact of amplitude and phase errors on angular estimation is discussed and analyzed by simulations. The results are compared with the measured errors of a real radar sensor. Further, a calibration method is implemented and evaluated by measurements.
Analysis of Miniaturized Printed Flexible RFID/NFC Antennas Using Different Carrier Substrates
(2020)
Antennas for Radio Frequency Identification (RFID) provide benefits for high frequencies (HF) and wireless data transmission via Near Field Communication (NFC) and many other applications. In this case, various requirements for the design of the reader and transmitter antennas must be met in order to achieve a suitable transmission quality. In this work, a miniaturized cost-effective RFID/NFC antenna for a microelectronic measurement system is designed and printed on different flexible carrier substrates using a new and low-cost Direct Ink Writing (DIW) technology. Various practical aspects such as reflection and impedance magnitude as well as the behavior of the printed RFID/NFC antennas are analyzed and compared to an identical copper-based antenna of the same size. The results are presented in this paper. Furthermore, the problems during the printing process itself on the different substrates are evaluated. The effects of the characteristics on the antenna under kink-free bending tests are examined and subsequently long-term measurements are carried out.
Solar energy plays a central role in the energy transition. Clouds generate large local fluctuations in the generation output of photovoltaic systems, which is a major problem for energy systems such as microgrids, among others. For an optimal design of a power system, this work analyzed the variability using a spatially distributed sensor network at Stuttgart Airport. It has been shown that the spatial distribution partially reduces the variability of solar radiation. A tool was also developed to estimate the output power of photovoltaic systems using irradiation time series and assumptions about the photovoltaic sites. For days with high fluctuations of the estimated photovoltaic power, different energy system scenarios were investigated. It was found that the approach can be used to obtain a more realistic representation of aggregated PV power taking spatial smoothing into account, and that the resulting PV power generation profiles provide a good basis for energy system design considerations like battery sizing.
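The spatial-smoothing effect described above can be illustrated with a toy model (not the authors' tool): site-specific cloud noise partially averages out when irradiance from several distributed sites is aggregated, while the shared weather component remains. All numbers are synthetic assumptions:

```python
import numpy as np

# Synthetic sketch: each site sees a common weather signal plus
# independent local cloud noise; aggregating sites reduces variability.
rng = np.random.default_rng(0)
n_sites, n_steps = 10, 5000

common = rng.normal(0.0, 1.0, n_steps)              # shared by all sites
local = rng.normal(0.0, 1.0, (n_sites, n_steps))    # site-specific clouds
irradiance = 600.0 + 100.0 * (0.3 * common + 0.7 * local)  # W/m^2, synthetic

single_site_std = irradiance[0].std()               # variability at one site
aggregate_std = irradiance.mean(axis=0).std()       # variability of the mean
assert aggregate_std < single_site_std              # spatial smoothing
```

Only the independent (local) part of the variance shrinks with the number of sites; the correlated part sets a floor, which is why smoothing is partial.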
The applicability of local magnetic field characteristics for more precise localization of subjects and/or objects in indoor environments, such as railway stations, airports, exhibition halls, showrooms, or shopping centers, is considered. An investigation has been carried out to find out whether and how low-cost magnetic field sensors and mobile robot platforms can be used to create maps that improve the accuracy and robustness of later navigation with smartphones or other devices.
We consider a local group of agents that exchange time-series data values and compute an approximation of the mean value across all agents. An agent, represented by a node, knows all local neighbor nodes in the same group and holds the contact information of nodes in other groups. The nodes interact with each other in synchronous rounds to exchange updated time-series data values using the random call communication model. The amount of data exchanged between agent-based sensors in the local group network affects the accuracy of the aggregation function results. At each time step, an agent-based sensor can update its input data value and send the updated value to the group head node, which forwards it to all group members in the same group. Grouping nodes in peer-to-peer networks shows an improvement in Mean Squared Error (MSE).
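The basic idea of distributed mean approximation under a random call model can be sketched as pairwise gossip averaging. This is a deliberately simplified illustration; the grouped protocol with head nodes described above differs in detail:

```python
import random

# Hedged sketch of pairwise gossip averaging under a random call model.
# Each round, a node calls one random peer and both replace their values
# with the pairwise mean. The global sum is preserved exactly by every
# exchange, so all values converge to the true mean of all agents.
def gossip_average(values, rounds=200, seed=1):
    vals = list(values)
    rng = random.Random(seed)
    n = len(vals)
    for _ in range(rounds):
        i = rng.randrange(n)
        j = rng.randrange(n)
        if i != j:
            m = (vals[i] + vals[j]) / 2.0
            vals[i] = vals[j] = m
    return vals

estimates = gossip_average([10.0, 20.0, 30.0, 40.0])
# every node's estimate approaches the global mean (25.0 here)
```

Time-varying inputs, as in the abstract, would re-inject fresh values each step, so accuracy then depends on how fast gossip mixes relative to the update rate.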
Apache Hadoop is a well-known open-source framework for storing and processing huge amounts of data. This paper shows the usage of the framework within a project of the university in cooperation with a semiconductor company. The goal of this project was to supplement the existing data landscape by the facilities of storing and analyzing the data on a new Apache Hadoop based platform.
Motivated by the recent trend towards the usage of larger receptive fields for more context-aware neural networks in vision applications, we aim to investigate how large these receptive fields really need to be. To facilitate such study, several challenges need to be addressed, most importantly: (i) We need to provide an effective way for models to learn large filters (potentially as large as the input data) without increasing their memory consumption during training or inference, (ii) the study of filter sizes has to be decoupled from other effects such as the network width or number of learnable parameters, and (iii) the employed convolution operation should be a plug-and-play module that can replace any conventional convolution in a Convolutional Neural Network (CNN) and allow for an efficient implementation in current frameworks. To facilitate such models, we propose to learn not spatial but frequency representations of filter weights as neural implicit functions, such that even infinitely large filters can be parameterized by only a few learnable weights. The resulting neural implicit frequency CNNs are the first models to achieve results on par with the state-of-the-art on large image classification benchmarks while executing convolutions solely in the frequency domain and can be employed within any CNN architecture. They allow us to provide an extensive analysis of the learned receptive fields. Interestingly, our analysis shows that, although the proposed networks could learn very large convolution kernels, the learned filters practically translate into well-localized and relatively small convolution kernels in the spatial domain.
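The frequency-domain convolutions mentioned above rest on the convolution theorem: a (circular) convolution in the spatial domain is an element-wise product in the frequency domain. A minimal numeric check, independent of the paper's actual architecture:

```python
import numpy as np

# Convolution theorem demo: spatial circular convolution equals an
# element-wise product of 2D FFTs. The kernel here is as large as the
# input, which is exactly the regime such models can exploit.
rng = np.random.default_rng(0)
image = rng.normal(size=(8, 8))
kernel = rng.normal(size=(8, 8))

# Frequency-domain path: FFT both, multiply pointwise, inverse FFT.
freq_result = np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(kernel)).real

# Direct circular convolution for comparison.
direct = np.zeros_like(image)
for u in range(8):
    for v in range(8):
        for i in range(8):
            for j in range(8):
                direct[u, v] += image[i, j] * kernel[(u - i) % 8, (v - j) % 8]

assert np.allclose(freq_result, direct)
```

Parameterizing the kernel directly in the frequency domain (e.g. by a small implicit network) keeps the parameter count independent of the effective spatial filter size.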
Wireless communication networks are crucial for enabling megatrends like the Internet of Things (IoT) and Industry 4.0. However, testing these networks can be challenging due to the complex network topology and RF characteristics, requiring a multitude of scenarios to be tested. To address this challenge, the authors developed and extended an automated testbed called Automated Physical TestBed (APTB). This testbed provides the means to conduct controlled tests, analyze coexistence, emulate multiple propagation paths, and model dependable channel conditions. Additionally, the platform supports test automation to facilitate efficient and systematic experimentation. This paper describes the extended architecture, implementation, and performance evaluation of the APTB testbed. The APTB testbed provides a reliable and efficient solution for testing wireless communication networks under various scenarios. The implementation and performance verification of the testbed demonstrate its effectiveness and usefulness for researchers and industry practitioners.
We have developed a methodology for the systematic generation of a large image dataset of macerated wood references, which we used to generate image data for nine hardwood genera. This is the basis for a substantial approach to automate, for the first time, the identification of hardwood species in microscopic images of fibrous materials by deep learning. Our methodology includes a flexible pipeline for easy annotation of vessel elements. We compare the performance of different neural network architectures and hyperparameters. Our proposed method performs similarly well to human experts. In the future, this will improve controls on global wood fiber product flows to protect forests.
In the presented approach, the point of impact of the dart is to be determined by cross-correlating audio signals. The impact of the dart produces a characteristic sound, which is converted into electrical signals by several microphones placed in a defined arrangement around the dartboard. Using the speed of sound and the time differences with which the sound wave reaches the individual microphones, the point of impact is then to be calculated.
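The core step, estimating the time difference of arrival (TDOA) between two microphone signals by cross-correlation, can be sketched as follows; the signal model and sample rate are illustrative assumptions:

```python
import numpy as np

# Hedged sketch: the lag at which the cross-correlation of two
# microphone signals peaks gives their relative time delay.
fs = 48_000                       # assumed sampling rate, Hz
true_delay = 17                   # delay of mic 2 vs. mic 1, in samples

rng = np.random.default_rng(0)
impact = rng.normal(size=256)     # stand-in for the dart's impact sound
mic1 = np.concatenate([impact, np.zeros(64)])
mic2 = np.roll(mic1, true_delay)  # same sound arriving later at mic 2

# Full cross-correlation; the peak position encodes the relative delay.
xcorr = np.correlate(mic2, mic1, mode="full")
lag = np.argmax(xcorr) - (len(mic1) - 1)

time_difference = lag / fs        # seconds; multiplied by the speed of
                                  # sound (~343 m/s) this yields a path-
                                  # length difference for the position solver
```

With delays from several microphone pairs, the impact point follows from intersecting the resulting hyperbolic curves (multilateration).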
Bach, Gas, Strom und Wasser
(2022)
In many application domains, in particular automotives, guaranteeing a very low failure rate is crucial to meet functional and safety standards. Especially, reliable operation of memory components such as SRAM cells is of essential importance. Due to aggressive technology downscaling, process and runtime variations significantly impact manufacturing yield as well as functionality. For this reason, a thorough memory failure rate assessment is imperative for correct circuit operation and yield improvement. In this regard, Monte Carlo simulations have been used as the conventional method to estimate the variability induced failure rate of memory components. However, Monte Carlo methods become infeasible when estimating rare events such as high-sigma failure rates. To this end, Importance Sampling methods have been proposed which reduce the number of required simulations substantially. However, existing methods still suffer from inaccuracies and high computational efforts, in particular for high-sigma problems. In this paper, we fill this gap by presenting an efficient mixture Importance Sampling approach based on Bayesian optimization, which deploys a surface model of the objective function to find the most probable failure points. Its advantages include constant complexity independent of the dimensions of design space, the potential to find the global extrema, and higher trustworthiness of the estimated failure rate by accurately exploring the design space. The approach is evaluated on a 6T-SRAM cell as well as a master-slave latch based on a 28nm FDSOI process. The results show an improvement in accuracy, resulting in up to 63× better accuracy in estimating failure rates compared to the best state-of-the-art solutions on a 28nm technology node.
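The motivation for Importance Sampling in this setting can be made concrete with a toy example (not the paper's Bayesian-optimised mixture method): estimating a rare tail probability by sampling from a proposal centred on the failure region and re-weighting by the likelihood ratio:

```python
import math
import random

# Illustrative sketch: estimate P(X > 4) for a standard normal
# "performance metric", a ~3e-5 tail probability that naive Monte Carlo
# would almost never hit with a comparable sample budget.
def importance_sampling_tail(threshold=4.0, n=50_000, seed=2):
    rng = random.Random(seed)
    shift = threshold                # proposal centred on the failure region
    total = 0.0
    for _ in range(n):
        x = rng.gauss(shift, 1.0)    # draw from the shifted proposal N(shift, 1)
        if x > threshold:
            # likelihood ratio N(0,1)/N(shift,1) re-weights each failure sample
            total += math.exp(-x * x / 2.0 + (x - shift) ** 2 / 2.0)
    return total / n

estimate = importance_sampling_tail()
exact = 0.5 * math.erfc(4.0 / math.sqrt(2.0))  # analytic tail probability
```

About half of the proposal samples land in the failure region, so the estimator concentrates quickly; the paper's contribution is finding good proposal locations automatically in high-dimensional design spaces.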
In the last decade, deep learning models for condition monitoring of mechanical systems increasingly gained importance. Most of the previous works use data of the same domain (e.g., bearing type) or of a large amount of (labeled) samples. This approach is not valid for many real-world scenarios from industrial use-cases where only a small amount of data, often unlabeled, is available.
In this paper, we propose, evaluate, and compare a novel technique based on an intermediate domain, which creates a new representation of the features in the data and abstracts the defects of rotating elements such as bearings. The results based on an intermediate domain related to characteristic frequencies show an improved accuracy of up to 32 % on small labeled datasets compared to the current state-of-the-art in the time-frequency domain.
Furthermore, a Convolutional Neural Network (CNN) architecture is proposed for transfer learning. We also propose and evaluate a new approach for transfer learning, which we call Layered Maximum Mean Discrepancy (LMMD). This approach is based on the Maximum Mean Discrepancy (MMD) but extends it by considering the special characteristics of the proposed intermediate domain. The presented approach outperforms the traditional combination of Hilbert–Huang Transform (HHT) and S-Transform with MMD on all datasets for unsupervised as well as for semi-supervised learning. In most of our test cases, it also outperforms other state-of-the-art techniques.
This approach is capable of using different types of bearings in the source and target domain under a wide variation of the rotation speed.
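For reference, the quantity that LMMD extends can be sketched as the standard biased Maximum Mean Discrepancy estimate with a Gaussian kernel; the bandwidth and sample data are illustrative assumptions:

```python
import numpy as np

# Minimal sketch of (biased) squared MMD between a source and a target
# sample, the domain-discrepancy measure underlying MMD-based transfer.
def gaussian_kernel(a, b, sigma=1.0):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mmd2(source, target, sigma=1.0):
    """Biased estimate of squared MMD between two samples."""
    k_ss = gaussian_kernel(source, source, sigma).mean()
    k_tt = gaussian_kernel(target, target, sigma).mean()
    k_st = gaussian_kernel(source, target, sigma).mean()
    return k_ss + k_tt - 2.0 * k_st

rng = np.random.default_rng(0)
same = mmd2(rng.normal(0, 1, (100, 4)), rng.normal(0, 1, (100, 4)))
shifted = mmd2(rng.normal(0, 1, (100, 4)), rng.normal(2, 1, (100, 4)))
# a domain shift yields a clearly larger discrepancy than matched samples
```

In transfer learning this term is minimized alongside the task loss so that source- and target-domain features align; the layered variant described above additionally accounts for the structure of the intermediate domain.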
Biodegradable metals have entered the implant market in recent years, but still do not show fully satisfactory degradation behaviour and mechanical properties. In contrast, it has been shown that pure molybdenum has an excellent combination of the required properties in this respect. We report on PM based screen printing of thin-walled molybdenum tubes as a processing step for medical stent manufacture. We also present data on the in vivo degradation and biocompatibility of molybdenum. The degradation of molybdenum wires implanted in the aorta of rats was evaluated by SEM and EDX. Biocompatibility was assessed by histological investigation of organs and analysis of molybdenum levels in tissue extracts and body fluids. Degradation rates of up to 13.5 μm/y were observed after 12 months. No histological changes or elevated molybdenum levels in organ tissues were observed. In summary, the results further underline that molybdenum is a highly promising biodegradable metallic material.
Complex tourism products with intangible service components are difficult to explain to potential customers. This research elaborates the use of virtual reality (VR) in the field of shore excursions. A theoretical research model based on the technology acceptance model was developed, and hypotheses were proposed. Cruise passengers were invited to test 360° excursion images on a landing page. Data was collected using an online questionnaire. Finally, data was analyzed using the PLS-SEM method. The results provide theoretical implications on technology acceptance model (TAM) research in the field of cruise tourism. Furthermore, the results and implications indicate the potential of virtual 360° shore excursion presentations for the cruise industry.
To achieve the limitation of global warming to 1.5 degrees Celsius agreed in the Paris Climate Agreement, the energy transition must be driven forward much more vigorously than before. The C/sells showcase in the largest of the SINTEG model regions took on this challenge. Over four years, 56 partners from the energy industry, science and politics in Baden-Württemberg, Bavaria and Hesse worked to establish a cellular energy system. They developed model solutions for a successful energy transition. In more than 30 demonstration cells as well as nine participation cells, the so-called C/sells Citys, it was demonstrated how an information system enables the intelligent organisation of power supply grids and the regionalised trading of energy and flexibilities.
Objective: To quantify the effect of inhaled 5% carbon-dioxide/95% oxygen on EEG recordings from patients in non-convulsive status epilepticus (NCSE).
Methods: Five children of mixed aetiology in NCSE were given high flow of inhaled carbogen (5% carbon dioxide/95% oxygen) using a face mask for maximum 120s. EEG was recorded concurrently in all patients. The effects of inhaled carbogen on patient EEG recordings were investigated using band-power, functional connectivity and graph theory measures. Carbogen effect was quantified by measuring effect size (Cohen's d) between "before", "during" and "after" carbogen delivery states.
Results: Carbogen's apparent effect on EEG band-power and network metrics for the "before-during" and "before-after" inhalation comparisons was inconsistent across the five patients.
Conclusion: The changes in different measures suggest a potentially non-homogeneous effect of carbogen on the patients' EEG. Different aetiology and duration of the inhalation may underlie these non-homogeneous effects. Tuning the carbogen parameters (such as ratio between CO2 and O2, duration of inhalation) on a personalised basis may improve seizure suppression in future.
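The effect-size measure named in the methods, Cohen's d with a pooled standard deviation, can be sketched as follows; the band-power values are illustrative stand-ins, not patient data:

```python
import math

# Sketch of Cohen's d between two states (e.g. band-power "before" vs
# "during" inhalation), using the pooled standard deviation.
def cohens_d(x, y):
    nx, ny = len(x), len(y)
    mx = sum(x) / nx
    my = sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)   # sample variances
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    pooled_sd = math.sqrt(((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2))
    return (mx - my) / pooled_sd

before = [4.1, 3.8, 4.4, 4.0, 4.2]   # illustrative band-power values
during = [3.2, 3.5, 3.0, 3.4, 3.1]
effect = cohens_d(before, during)    # positive d indicates a decrease
```

Conventionally, |d| around 0.2 is read as a small effect, 0.5 medium, and 0.8 or above large.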
In recent times, 5G has found applications in several public as well as private networks. There is a growing need to make it compatible with diverse services without compromising security. Current security options for authenticating devices into a home network are 5G Authentication and Key Agreement (5G-AKA) and Extensible Authentication Protocol (EAP)-AKA'. However, for specific use cases such as private networks, more customizable and convenient authentication mechanisms are required. The current mobile networks use authentication based only on SIM cards, but as 5G is being applied in fields like IIoT and automation, even in Non-Public-Networks (NPNs), there is a need for a simpler method of authentication. Certificate-based authentication is one such mechanism that is passwordless and works solely on the information present in the digital certificate that the user holds. The paper suggests an authentication mechanism that performs certificate-based mutual authentication between the UE and the home network. The proposed concept identifies both the user and the network with digital certificates and intends to carry out primary authentication with their help. In this work we study presently available authentication protocols for 5G networks, both theoretically and experimentally, in hardware as well as virtual environments. Based on this analysis, a series of proposed steps for certificate-based primary authentication is presented.
The manufacturing of conventional electronics has become a highly complicated process, which requires intensive investment. In this context, printed electronics keeps attracting attention from both academia and industry. The primary reason is the simplification of the manufacturing process via additive printing technology such as ink-jet printing. Consequently, advantages are realized such as on-demand fabrication, minimal material waste and versatile choice of substrate materials. Central to the development of printed electronic circuits are printed transistors. Recently, metal oxide semiconductors such as indium oxide have become promising materials for the fabrication of printed transistors due to their high charge mobility. Furthermore, electrolyte-gating also provides benefits such as the low-voltage operation in sub-1 V regime due to the large gate capacitance provided by electrical double layers. This opens new possibilities to fabricate printed devices and circuits for niche applications.
To facilitate the design and fabrication of printed circuits, the development of compact models is necessary. However, most current works have focused on the static behavior of transistors, while an in-depth understanding of other characteristics such as the dynamic or noise behavior is missing. To this end, the purpose of this work is a comprehensive study of the capacitance and noise properties of inkjet-printed electrolyte-gated thin-film transistors (EGT) based on indium oxide semiconductors. Proper modeling approaches are also proposed to capture the electrical behaviour accurately, which can be further utilized to enable advanced analysis of digital, analog and mixed-signal circuits.
In this work, the capacitance of EGTs is characterized using voltage-dependent impedance spectroscopy. Intrinsic and extrinsic effects are carefully separated by using de-embedding test structures. Also, a dedicated equivalent circuit model is established to offer accurate simulations of the measured frequency response of the gate impedance. Based on that, it is revealed that top-gated EGTs have the potential to reach operation frequency in the kHz regime with proper optimizations of materials and printing process. Furthermore, a Meyer-like model is proposed to accurately capture the capacitance-voltage characteristics of the lumped terminal capacitance. Both parasitic and nonquasi-static effects are considered. This further enables the AC and transient analysis of complex circuits in circuit simulators.
Subsequently, the noise properties in the field of printed electronics are studied. Low-frequency noise of EGTs is characterized using a reliable experimental setup. By examining measured noise spectra of the drain current at various gate voltages, number fluctuation with correlated mobility fluctuation is determined as the primary noise mechanism. Based on that, the normalized flat-band voltage noise can be identified as the key performance metric; at only 1.08 × 10^-7 V² µm², it is significantly lower than in other thin-film technologies based on dielectric gating and semiconductors such as IZO and IGZO. A plausible reason could be the large gate capacitance offered by the electrical double layers. This renders EGT technology useful for low-noise and sensitive applications such as sensor periphery circuits.
Last but not least, various circuit designs based on EGT technology are proposed, including basic digital circuits such as inverters and ring oscillators. Their performance metrics such as the propagation delay and power consumption are extensively characterized. Also, the first design of a printed full-wave rectifier is presented by using diode-connected EGTs, which features near-zero threshold voltage. As a consequence, the presented rectifier can effectively process input voltage with a small amplitude of 100 mV and a cut-off frequency of 300 Hz, which is particularly attractive for the application domain of energy harvesting. Additionally, the previously established capacitance models are verified on those circuits, which provide a satisfactory agreement between the simulation and measurement data.
A circuit arrangement of a motor vehicle includes a high-voltage battery for storing electrical energy, an electric machine for driving the motor vehicle, a converter via which high-voltage direct current voltage provided by the high-voltage battery is convertible into high-voltage alternating current voltage for operating the electric machine, and a charging connection for providing electrical energy for charging the high-voltage battery. The converter is a three-stage converter having a first switch unit which is assigned to a first phase of the electric machine. The first switch unit has two switch groups connected in series which each have two insulated-gate bipolar transistors (IGBTs) connected in series, where a connection is disposed between the IGBTs of one of the two switch groups, which connection is electrically connected directly to a line of the charging connection.
Currently, many theoretical as well as practically relevant questions towards the transferability and robustness of Convolutional Neural Networks (CNNs) remain unsolved. While ongoing research efforts are engaging these problems from various angles, in most computer vision related cases these approaches can be generalized to investigations of the effects of distribution shifts in image data. In this context, we propose to study the shifts in the learned weights of trained CNN models. Here we focus on the properties of the distributions of dominantly used 3×3 convolution filter kernels. We collected and publicly provide a dataset with over 1.4 billion filters from hundreds of trained CNNs, using a wide range of datasets, architectures, and vision tasks. In a first use case of the proposed dataset, we can show highly relevant properties of many publicly available pre-trained models for practical applications: I) We analyze distribution shifts (or the lack thereof) between trained filters along different axes of meta-parameters, like visual category of the dataset, task, architecture, or layer depth. Based on these results, we conclude that model pre-training can succeed on arbitrary datasets if they meet size and variance conditions. II) We show that many pre-trained models contain degenerated filters which make them less robust and less suitable for fine-tuning on target applications. Data & Project website: https://github.com/paulgavrikov/cnn-filter-db.
Generative adversarial networks (GANs) provide state-of-the-art results in image generation. However, despite being so powerful, they still remain very challenging to train. This is in particular caused by their highly non-convex optimization space leading to a number of instabilities. Among them, mode collapse stands out as one of the most daunting ones. This undesirable event occurs when the model can only fit a few modes of the data distribution, while ignoring the majority of them. In this work, we combat mode collapse using second-order gradient information. To do so, we analyse the loss surface through its Hessian eigenvalues, and show that mode collapse is related to the convergence towards sharp minima. In particular, we observe how the eigenvalues of the generator's Hessian are directly correlated with the occurrence of mode collapse. Finally, motivated by these findings, we design a new optimization algorithm called nudged-Adam (NuGAN) that uses spectral information to overcome mode collapse, leading to empirically more stable convergence properties.
Transformer models have recently attracted much interest from computer vision researchers and have since been successfully employed for several problems traditionally addressed with convolutional neural networks. At the same time, image synthesis using generative adversarial networks (GANs) has drastically improved over the last few years. The recently proposed TransGAN is the first GAN using only transformer-based architectures and achieves competitive results when compared to convolutional GANs. However, since transformers are data-hungry architectures, TransGAN requires data augmentation, an auxiliary super-resolution task during training, and a masking prior to guide the self-attention mechanism. In this paper, we study the combination of a transformer-based generator and convolutional discriminator and successfully remove the need of the aforementioned required design choices. We evaluate our approach by conducting a benchmark of well-known CNN discriminators, ablate the size of the transformer-based generator, and show that combining both architectural elements into a hybrid model leads to better results. Furthermore, we investigate the frequency spectrum properties of generated images and observe that our model retains the benefits of an attention based generator.
Time Sensitive Networking (TSN) provides mechanisms to enable deterministic and real-time networking in industrial networks. Configuration of these mechanisms is key to fully deploy and integrate TSN in the networks. The IEEE 802.1Qcc standard has proposed different configuration models to implement a TSN configuration. Up until now, TSN and its configuration have been explored mostly for Ethernet-based industrial networks. However, they are still considered "work-in-progress" for wireless networks. This work focuses on the fully centralized model and describes a generic concept to enable the configuration of TSN mechanisms in wireless industrial networks. To this end, a configuration entity is implemented to configure the wireless end stations to satisfy their requirements. The proposed solution is then validated with the Digital Enhanced Cordless Telecommunication ultra-low energy (DECT ULE) wireless communication protocol.
The present invention relates to open-loop and closed-loop control units for extracorporeal circulatory support, to systems comprising such an open-loop and closed-loop control unit, and to corresponding methods. An open-loop and closed-loop control unit (10) for extracorporeal circulatory support is proposed, which is configured to receive a measurement of an ECG signal (12) of a supported patient over a predefined period of time, wherein the ECG signal (12) comprises multiple data points for each time point within a heart cycle. The open-loop and closed-loop control unit (10) comprises an evaluation unit (100) which is configured to evaluate the data points for at least one time point in a spatial and/or temporal manner and to determine at least one amplitude change (14) within the heart cycle based on the evaluated data points. The open-loop and closed-loop control unit (10) is further configured to output an open-loop and/or closed-loop signal (16) for extracorporeal circulatory support at a predefined point in time after the at least one amplitude change (14).
This paper presents an extended version of a previously published Bayesian algorithm for automatically correcting the positions of equipment on a map while simultaneously localizing the trajectory of a mobile object (SLAM) in an underground mine environment represented by an undirected graph. The proposed extended SLAM algorithm requires far less preliminary data on possible equipment positions and uses an additional resample-move algorithm to significantly improve overall performance.
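The sequential Monte Carlo machinery underlying such an algorithm can be illustrated with a minimal bootstrap particle filter; this is our own toy on a 1-D state, far simpler than the paper's graph-based formulation and its resample-move step.

```python
# Toy bootstrap particle filter with resampling: predict, weight, resample.
# Our illustration of the sequential Monte Carlo idea behind the described
# SLAM algorithm; the paper's graph-based variant is considerably richer.

import math, random

rng = random.Random(42)

def particle_filter(observations, n_particles=500, motion_std=1.0, obs_std=1.0):
    particles = [rng.gauss(0, 5) for _ in range(n_particles)]
    estimates = []
    for z in observations:
        # Predict: propagate each particle through the motion model.
        particles = [p + rng.gauss(0, motion_std) for p in particles]
        # Weight: likelihood of the observation under each particle.
        weights = [math.exp(-0.5 * ((z - p) / obs_std) ** 2) for p in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        # Resample: draw a new particle set proportional to the weights.
        particles = rng.choices(particles, weights=weights, k=n_particles)
        estimates.append(sum(particles) / n_particles)
    return estimates

# Track a hidden position drifting at +0.5 per step, observed with noise.
truth = [0.5 * t for t in range(30)]
obs = [x + rng.gauss(0, 1.0) for x in truth]
est = particle_filter(obs)
print(f"final estimate {est[-1]:.2f} vs truth {truth[-1]:.2f}")
```

The resample step combats weight degeneracy; the paper's resample-move extension additionally applies an MCMC move after resampling to restore particle diversity.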
Printed electronics technology is a key enabler for smart sensors, soft robotics, and wearables. Inkjet-printed electrolyte-gated field-effect transistor (EGFET) technology is a promising candidate for such applications due to its low-power operation, high field-effect mobility, and on-demand fabrication. Unlike conventional silicon-based technologies, inkjet-printed electronics is an additive manufacturing process in which multiple layers are printed on top of each other to realize functional devices such as transistors and their interconnections. Because of this additive process, the technology offers only a limited number of routing layers. To route complex circuits, insulating crossovers are printed at the intersections of routing paths to isolate them. A crossover can alter the electrical properties of a circuit depending on its specific location on a routing path. In this work, we propose a crossover-aware placement and routing (COPnR) methodology for inkjet-printed circuits by integrating the crossover constraints into our design framework. Our placement methodology is based on a state-of-the-art evolutionary algorithm, while the routing optimization is done with a genetic algorithm. The proposed methodology is compared with industry-standard placement and routing (PnR) tools. On average, it yields 38% fewer crossovers and 94% fewer failing paths than the industrial PnR tools applied to printed circuit designs.
It seems to be a widespread impression that the use of strong cryptography inevitably imposes a prohibitive burden on industrial communication systems, at least as far as real-time requirements in cyclic fieldbus communication are concerned. AES-GCM is a leading cryptographic algorithm for authenticated encryption, which protects data against disclosure and manipulation. We study the use of both hardware- and software-based implementations of AES-GCM. Through simulations as well as measurements on an FPGA-based prototype setup, we gain and substantiate an important insight: for devices with a 100 Mbps full-duplex link, a single low-footprint AES-GCM hardware engine can deterministically cope with the worst-case computational load, i.e., even if the device maintains the maximum number of cyclic communication relations with individual cryptographic keys. Our results show that hardware support for AES-GCM in industrial fieldbus components can actually be very lightweight.
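The sizing argument can be made plausible with back-of-envelope arithmetic: compare the AES block rate demanded by saturated full-duplex 100 Mbps traffic with what a modest pipelined engine delivers. The engine clock and cycles-per-block figures below are our own illustrative assumptions, not measurements from the paper.

```python
# Back-of-envelope estimate (illustrative, not from the paper) of the
# worst-case AES-GCM workload on a 100 Mbps full-duplex link.

LINK_RATE_BPS = 100e6          # 100 Mbps per direction
DIRECTIONS = 2                 # full duplex: encrypt and decrypt concurrently
AES_BLOCK_BITS = 128

# Worst case: every bit on the wire is AES-GCM-protected payload.
worst_case_bps = LINK_RATE_BPS * DIRECTIONS
blocks_per_second = worst_case_bps / AES_BLOCK_BITS

# Assumed low-footprint engine: 100 MHz clock, one 128-bit block every
# 11 cycles (roughly one AES-128 round per cycle plus overhead).
ENGINE_CLOCK_HZ = 100e6
CYCLES_PER_BLOCK = 11
engine_blocks_per_second = ENGINE_CLOCK_HZ / CYCLES_PER_BLOCK

print(f"required:  {blocks_per_second:,.0f} blocks/s")
print(f"available: {engine_blocks_per_second:,.0f} blocks/s")
print("engine keeps up:", engine_blocks_per_second >= blocks_per_second)
```

Under these assumptions the engine has several-fold headroom over line rate, which is consistent with the paper's conclusion that a single low-footprint engine suffices.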
Current Harmonics Control Algorithm for inverter-fed Nonlinear Synchronous Electrical Machines
(2023)
Current harmonics are a well-known challenge in electrical machines. They can be undesirable, as they can cause instabilities in the control, generate additional losses, and lead to torque ripple with audible noise. However, they can also be generated deliberately by newer methods in order to improve machine behavior. In this paper, an algorithm for controlling current harmonics is proposed. It can be described as a combination of distinct PI controllers for defined angles of the machine, with repetitive-control characteristics over whole revolutions. The controller design is explained, and the points where linearization is necessary are identified. Furthermore, the limits of the approach are analyzed and, for validation, measurement results from a permanent-magnet synchronous machine on the test bench are presented.
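The core idea of PI controllers bound to defined machine angles with repetitive behaviour across revolutions can be sketched as follows. This is our own minimal illustration under simplifying assumptions (a static disturbance model, integral action only), not the paper's controller.

```python
# Minimal sketch (our illustration, not the paper's implementation) of
# angle-binned PI control: one integrator state per rotor-angle bin, with
# states persisting across revolutions (repetitive-control behaviour).

import math

class AngleBinnedPI:
    def __init__(self, n_bins: int, kp: float, ki: float):
        self.kp, self.ki, self.n_bins = kp, ki, n_bins
        self.integ = [0.0] * n_bins      # one integrator state per angle bin

    def bin(self, angle: float) -> int:
        return int(angle * self.n_bins) % self.n_bins

    def correction(self, angle: float) -> float:
        return self.integ[self.bin(angle)]

    def update(self, angle: float, error: float, dt: float) -> float:
        k = self.bin(angle)
        self.integ[k] += self.ki * error * dt     # repetitive integral part
        return self.kp * error + self.integ[k]    # proportional + integral

# Cancel a synthetic 6th-harmonic current ripple over repeated revolutions.
n, dt = 60, 1e-3
ctrl = AngleBinnedPI(n_bins=n, kp=0.0, ki=100.0)
max_residual = 1.0
for rev in range(300):
    max_residual = 0.0
    for step in range(n):
        angle = step / n                                  # in revolutions
        ripple = math.sin(2 * math.pi * 6 * angle)        # disturbance
        error = ripple - ctrl.correction(angle)           # residual ripple
        ctrl.update(angle, error, dt)
        max_residual = max(max_residual, abs(error))
print(f"max residual ripple in last revolution: {max_residual:.2e}")
```

Because each bin's integrator is revisited once per revolution, the residual ripple decays geometrically from revolution to revolution, which is the repetitive-control characteristic the abstract describes.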
Data Science
(2019)
Like no other term, data science currently stands for the analysis of large volumes of data using analytical concepts from machine learning and artificial intelligence. Now that big data has entered general awareness, and in particular has been made accessible within companies, technologies and methods for its analysis are needed wherever classical business intelligence reaches its limits.
This book offers a comprehensive introduction to data science and its practical relevance for companies. It also addresses the integration of data science into an existing business intelligence ecosystem. Individual contributions explain fields of activity and methods as well as role and organizational models which, in interplay with concepts and architectures, shape data science. In addition to the fundamentals, the topics covered include:
- Data science and artificial intelligence
- Conception and development of data-driven products
- Deep learning
- Self-service in the data science environment
- Data privacy and questions of digital ethics
- Customer churn with Keras/TensorFlow and H2O
- Profitability considerations in the selection and development of data science solutions
- Predictive maintenance
- Scrum in data science projects
Numerous use cases and practical examples provide insights into current experience with data science projects and allow readers to transfer these lessons directly into their daily work.
Data Science
(2021)
Know-how for data scientists
• A clear, application-oriented introduction
• Numerous use cases and practical examples from a wide range of industries
• Highlights the potential as well as possible pitfalls
Like no other term, data science currently stands for the analysis of large volumes of data using analytical concepts from machine learning and artificial intelligence. Now that big data has entered general awareness, and in particular has been made accessible within companies, technologies and methods for its analysis are needed wherever classical business intelligence reaches its limits.
This book offers a comprehensive introduction to data science and its practical relevance for companies. It also addresses the integration of data science into an existing business intelligence ecosystem. Individual contributions explain fields of activity and methods as well as role and organizational models which, in interplay with concepts and architectures, shape data science.
This second, revised edition has been extended with new topics such as feature selection and deep reinforcement learning, as well as a new case study.
Fused Filament Fabrication (FFF) is a widespread additive manufacturing technology, mostly in the field of printable polymers. The use of filaments filled with metal particles for the manufacture of metallic parts by FFF presents specific challenges regarding debinding and sintering. For aluminium and its alloys, the sintering temperature range overlaps with the temperature range of thermal decomposition of many commonly used "backbone" polymers, which provide stability to the green parts. Moreover, the high oxygen affinity of aluminium necessitates the use of special sintering regimes and alloying strategies. It is therefore challenging to achieve both low porosity and low levels of oxygen and carbon impurities at the same time. Feedstocks compatible with the special requirements of aluminium alloys were developed. We present results from the investigation of debinding/sintering regimes by Fourier-transform infrared spectroscopy (FTIR)-based in-situ process gas analysis and discuss optimized thermal treatment strategies for Al-based FFF.
Seismic data processing involves techniques to deal with undesired effects that occur during acquisition and pre-processing. These effects mainly comprise coherent artefacts such as multiples, non-coherent signals such as electrical noise, and loss of signal information at the receivers, which leads to incomplete traces. In this work, we employ a generative solution, since it can explicitly model complex data distributions and hence lead to a better decision-making process. In particular, we introduce diffusion models for multiple removal. To that end, we run experiments on synthetic and real data and compare the deep diffusion performance with standard algorithms. We believe that our pioneering study not only demonstrates the capability of diffusion models, but also opens the door to future research on integrating generative models into seismic workflows.
To demonstrate how deep learning can be applied to industrial applications with limited training data, deep learning methodologies are used in three different applications. In this paper, we perform unsupervised deep learning utilizing variational autoencoders and demonstrate that federated learning is a communication-efficient concept for machine learning that protects data privacy. As an example, variational autoencoders are utilized to cluster and visualize data from a microelectromechanical systems (MEMS) foundry. Federated learning is used in a predictive maintenance scenario using the C-MAPSS dataset.
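The privacy and communication argument for federated learning can be shown in miniature: clients exchange only model parameters, never raw data, and a server averages them (the FedAvg idea). The one-parameter model and data below are our own toy, unrelated to the paper's C-MAPSS setup.

```python
# Minimal illustration (our toy, not the paper's setup) of federated
# averaging: clients run local gradient steps on private data and share
# only their model parameter; the server computes a weighted average.

def local_step(w, data, lr=0.1):
    """One gradient step of a 1-parameter least-squares model y = w*x."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def fed_avg(client_weights, client_sizes):
    total = sum(client_sizes)
    return sum(w * n for w, n in zip(client_weights, client_sizes)) / total

# Two clients with private data drawn from the same ground truth w* = 3.
client_data = [
    [(1.0, 3.0), (2.0, 6.0)],
    [(0.5, 1.5), (1.5, 4.5), (3.0, 9.0)],
]
w_global = 0.0
for _ in range(50):
    local_weights = [local_step(w_global, d) for d in client_data]
    w_global = fed_avg(local_weights, [len(d) for d in client_data])
print(f"global weight after 50 rounds: {w_global:.3f}")
```

Per communication round only one scalar per client crosses the network, while the raw (x, y) pairs stay on the clients, which is precisely the communication-efficiency and privacy property claimed above.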
An important step in seismic data processing for improving inversion and interpretation is multiples attenuation. Radon-based algorithms are often used to discriminate between primaries and multiples. Recently, deep learning (DL) based on convolutional neural networks (CNNs) has shown promising demultiple results that could mitigate the challenges of Radon-based methods. In this work, we investigate different strategies for training a CNN for multiples removal based on different loss functions. We propose combining primaries and multiples labels in the loss when training a CNN to predict primaries, multiples, or both simultaneously. We evaluate the performance of the CNNs trained with the different strategies on 400 clean and noisy synthetic examples, considering three metrics. We find that training a CNN to predict the multiples and then subtracting them from the input image is the most effective strategy for demultiple. Furthermore, including the primaries labels as a constraint during the training of multiples prediction improves the results. Finally, we test the strategies on a field dataset. The CNNs trained with the different strategies deliver results on real data that are competitive with Radon demultiple. Consequently, effectively trained CNN models can potentially replace Radon-based demultiple in existing workflows.
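The combined objective described above can be sketched as follows: the network predicts multiples, and the loss additionally constrains the primaries implied by subtracting that prediction from the input. The weighting scheme and all names are our assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch of a combined demultiple training objective: penalise
# both the multiples prediction error and the error of the primaries implied
# by subtraction (input - predicted multiples). Our assumption, not the
# paper's exact loss.

def mse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def combined_demultiple_loss(inp, m_hat, m_true, p_true, alpha=0.5):
    """inp = p_true + m_true per sample; alpha weights the primaries term."""
    p_hat = [x - m for x, m in zip(inp, m_hat)]   # primaries by subtraction
    return (1 - alpha) * mse(m_hat, m_true) + alpha * mse(p_hat, p_true)

# Toy 1-D traces: the input trace is primaries plus multiples.
p_true = [0.0, 1.0, 0.0, -0.5]
m_true = [0.2, 0.0, 0.3, 0.0]
inp = [p + m for p, m in zip(p_true, m_true)]

perfect = combined_demultiple_loss(inp, m_true, m_true, p_true)
biased = combined_demultiple_loss(inp, [m + 0.1 for m in m_true], m_true, p_true)
print(perfect, biased)
```

A constant bias in the multiples prediction is penalised twice, once directly and once through the corrupted primaries, which mirrors the abstract's finding that the primaries constraint improves multiples prediction.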
This paper describes a thorough analysis of using PPO to learn kick behaviors with simulated NAO robots in the SimSpark environment. The analysis includes an investigation of the influence of PPO hyperparameters, network size, training setups, and performance in real games. We believe this work improves the state of the art mainly in four respects: first, the kicks are learned with a toed version of the NAO robot; second, we improve reliability with respect to the kickable area and the avoidance of falls; third, the kick can be parameterized with the desired distance and direction as inputs to the deep network; and fourth, the approach allows the learned behavior to be integrated seamlessly into soccer games. The result is a significant improvement in the general level of play.
This article presents a historical case report by the Italian criminal anthropologist Cesare Lombroso (1835–1909), who remains well known far beyond Italy's borders to this day. In the case report, the notorious and psychologically conspicuous thief Pietro Bersone is convicted with the aid of a so-called hydrosphygmograph, a technical device novel at the time that could record the pulse non-invasively. Lombroso was probably one of the first, if not the very first, to anticipate the idea of the "lie detector" through the use of such a device. The passage presented here, from Lombroso's book "Neue Fortschritte in den Verbrecherstudien" ("New Advances in Criminal Studies"), is therefore also a remarkable find for the history of polygraphy.
A physical unclonable function (PUF) is a hardware circuit that produces a random sequence based on its manufacturing-induced intrinsic characteristics. Over the past decade, silicon-based PUFs have been extensively studied as a security primitive for identification and authentication. The emerging field of printed electronics (PE) enables novel application fields within the Internet of Things (IoT) and smart sensors. In this paper, we design and evaluate a printed differential circuit PUF (DiffC-PUF). The simulation data are verified by Monte Carlo analysis. Our design is highly scalable while consisting of a small number of printed transistors. Furthermore, we determine the best operating point by varying the PUF challenge configuration and analyzing the PUF security metrics in order to achieve high robustness. At the best operating point, the results show a reliability of 98.37% and a uniqueness of 50.02%. This analysis also provides useful and comprehensive insights into the design of hybrid or fully printed PUF circuits. In addition, the proposed printed DiffC-PUF core has been fabricated in electrolyte-gated field-effect transistor technology to verify our design in hardware.
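The two reported metrics have standard definitions that can be computed directly: reliability from the intra-device Hamming distance across repeated evaluations, uniqueness from the pairwise inter-device Hamming distance. The 8-bit responses below are made up for illustration.

```python
# Illustration (with made-up responses) of the two PUF metrics reported
# above: reliability = 100% minus the average intra-device fractional
# Hamming distance; uniqueness = average inter-device fractional Hamming
# distance, both in percent.

def hamming_frac(a: str, b: str) -> float:
    return sum(x != y for x, y in zip(a, b)) / len(a)

def reliability(reference: str, reevaluations: list) -> float:
    intra = sum(hamming_frac(reference, r) for r in reevaluations)
    return 100.0 * (1 - intra / len(reevaluations))

def uniqueness(responses: list) -> float:
    pairs = [(i, j) for i in range(len(responses))
             for j in range(i + 1, len(responses))]
    inter = sum(hamming_frac(responses[i], responses[j]) for i, j in pairs)
    return 100.0 * inter / len(pairs)

# Three hypothetical 8-bit PUF instances; chip A re-evaluated twice,
# with one flipped bit in the second run.
chipA, chipB, chipC = "10110010", "01100111", "11010100"
print(f"reliability: {reliability(chipA, ['10110010', '10110011']):.2f}%")
print(f"uniqueness:  {uniqueness([chipA, chipB, chipC]):.2f}%")
```

The ideal values are 100% reliability and 50% uniqueness, which makes the paper's 98.37% and 50.02% close to the theoretical optimum.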
In recent years, Physical Unclonable Functions (PUFs) have gained significant attention in the Internet of Things (IoT) for security applications such as cryptographic key generation and entity authentication. PUFs extract the uncontrollable production characteristics of physical devices to generate unique fingerprints for security applications. One common approach to designing PUFs is to exploit the intrinsic features of sensors and actuators, such as MEMS elements, which typically exist in IoT devices. This work presents the Cantilever-PUF, a PUF based on a specific MEMS device: the aluminum nitride (AlN) piezoelectric cantilever. We show the variations in electrical parameters of AlN cantilevers, such as resonance frequency, electrical conductivity, and quality factor, that result from uncontrollable manufacturing process variations. These variations, along with high thermal and chemical stability and compatibility with silicon technology, make the AlN cantilever a strong candidate for PUF design. We present a cantilever design that magnifies the effect of manufacturing process variations on the electrical parameters. To verify our findings, simulation results obtained with the Monte Carlo method are provided. The results confirm the suitability of the AlN cantilever as a basic PUF device for security applications. Finally, we present an architecture in which the designed Cantilever-PUF serves as a security anchor for PUF-enabled device authentication as well as communication encryption.
Frequently occurring short-term orders for manufactured products require high machine availability. This requirement increases the importance of predictive maintenance solutions for the bearings used in machines. Among others, there are hybrid solutions that rely on a physical model; for their use, knowing the different degradation stages of bearings is essential. To provide this knowledge, this research analyzes the underlying failure mechanisms of these stages both theoretically and in a practical example based on the well-known FEMTO dataset used for the IEEE PHM 2012 Data Challenge. In addition, it shows for which use cases low-frequency accelerometers are sufficient. The analysis reveals that the degradation stages toward the end of bearing life can also be detected with low-frequency accelerometers. Furthermore, it highlights the importance of high-frequency accelerometers for detecting bearing faults in early degradation stages. These aspects have so far received little attention from industry and research, despite offering considerable cost-saving potential.
Recently, adversarial attacks on image classification networks by the AutoAttack (Croce and Hein, 2020b) framework have drawn a lot of attention. While AutoAttack has shown a very high attack success rate, most defense approaches focus on network hardening and robustness enhancements, such as adversarial training. With this approach, the currently best-reported method withstands about 66% of adversarial examples on CIFAR10. In this paper, we investigate the spatial and frequency domain properties of AutoAttack and propose an alternative defense. Instead of hardening a network, we detect adversarial attacks during inference and reject manipulated inputs. Based on a rather simple and fast analysis in the frequency domain, we introduce two detection algorithms. The first is a black-box detector that operates only on the input images and achieves a detection accuracy of 100% on the AutoAttack CIFAR10 benchmark and 99.3% on ImageNet, for epsilon = 8/255 in both cases. The second is a white-box detector using an analysis of CNN feature maps, likewise reaching detection rates of 100% and 98.7% on the same benchmarks.
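The underlying intuition of the black-box detector can be shown in a deliberately simplified 1-D toy: adversarial perturbations tend to add high-frequency energy, so thresholding the high-frequency share of the spectrum can flag manipulated inputs. The cutoff, signal, and perturbation below are our own assumptions and are far simpler than the paper's detector.

```python
# Toy 1-D illustration (our own, much simpler than the paper's detector) of
# frequency-domain attack detection: compare the high-frequency energy share
# of a smooth signal against a perturbed version.

import cmath, math, random

def dft_mag(x):
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) for k in range(n)]

def high_freq_share(x, cutoff_frac=0.25):
    """Fraction of spectral energy in the top `cutoff_frac` of the band."""
    mag = dft_mag(x)
    half = len(x) // 2                        # non-redundant half spectrum
    cutoff = int(half * (1 - cutoff_frac))
    total = sum(m * m for m in mag[1:half])   # ignore the DC bin
    high = sum(m * m for m in mag[cutoff:half])
    return high / total

n = 64
clean = [math.sin(2 * math.pi * 2 * t / n) for t in range(n)]   # smooth signal
rng = random.Random(0)
attacked = [c + 0.3 * rng.choice([-1, 1]) for c in clean]       # perturbation

print(f"clean: {high_freq_share(clean):.3f}  "
      f"attacked: {high_freq_share(attacked):.3f}")
```

A detector would then simply reject inputs whose high-frequency share exceeds a calibrated threshold; the paper's method performs a comparable analysis on images and CNN feature maps.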
Detecting Images Generated by Deep Diffusion Models using their Local Intrinsic Dimensionality
(2023)
Diffusion models have recently been applied successfully to the visual synthesis of strikingly realistic images. This raises strong concerns about their potential for malicious purposes. In this paper, we propose using the lightweight multi Local Intrinsic Dimensionality (multiLID) method, originally developed in the context of detecting adversarial examples, for the automatic detection of synthetic images and the identification of the corresponding generator networks. In contrast to many existing detection approaches, which often work only for GAN-generated images, the proposed method provides close to perfect detection results in many realistic use cases. Extensive experiments on known and newly created datasets demonstrate that the proposed multiLID approach is superior in diffusion detection and model identification. Since the empirical evaluations in recent publications on the detection of generated images often focus mainly on the "LSUN-Bedroom" dataset, we further establish a comprehensive benchmark for the detection of diffusion-generated images, including samples from several diffusion models with different image sizes. The code for our experiments is provided at https://github.com/deepfake-study/deepfake-multiLID.