Many different methods, such as screen printing, gravure, flexography, and inkjet, have been employed to print electronic devices. Depending on the type and performance of the devices, processing is done at low or high temperature using precursor- or particle-based inks. Depending on these processing details, devices can be fabricated on flexible or non-flexible substrates, according to their temperature stability. Furthermore, in order to reduce the operating voltage, printed devices rely on high-capacitance electrolytes rather than on dielectrics. Printing resolution and speed are two of the most challenging parameters for printed electronics. High-resolution printing produces small printed devices and high integration densities with minimal materials consumption. However, most printing methods have resolutions between 20 and 50 μm. Printing resolutions close to 1 μm have also been achieved with optimized process conditions and better printing technology.
The final physical dimensions of the devices pose severe limitations on their performance. For example, the channel length limits the operating frequency of thin-film transistors (TFTs), which is inversely proportional to the square of the channel length. Consequently, short channels are favorable not only for high-frequency applications but also for high-density integration. The need to reduce this dimension to substantially smaller sizes than those possible with today's printers can be fulfilled either by developing alternative printing or stamping techniques, or by alternative transistor geometries. The development of a polymer pen lithography technique allows the parallel printing of a large number of devices in one step to be scaled up, including the successive printing of different materials. The introduction of an alternative transistor geometry, namely the vertical Field Effect Transistor (vFET), is based on the idea of using the film thickness as the channel length instead of the lateral dimensions of the printed structure, thus reducing the channel length by orders of magnitude. The improvements in printing technologies and the possibilities offered by nanotechnological approaches can result in unprecedented opportunities for the Internet of Things (IoT) and many other applications. The vision of printing functional materials, and not only colors as in conventional paper printing, is attractive to many researchers and industries because of the added opportunities when using flexible substrates such as polymers and textiles. Additionally, the reduction of costs opens new markets. The range of processing techniques covers laterally structured and large-area printing technologies; thermal, laser, and UV annealing; as well as bonding techniques.
Materials, such as conducting, semiconducting, dielectric and sensing materials, rigid and flexible substrates, protective coating, organic, inorganic and polymeric substances, energy conversion and energy storage materials constitute an enormous challenge in their integration into complex devices.
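The inverse-square dependence of operating frequency on channel length can be illustrated with a textbook approximation of the transit frequency, f_T ≈ μV/(2πL²); the mobility and voltage values below are arbitrary placeholders, not figures from the text.

```python
# Textbook approximation of the TFT transit frequency, f_T = mu*V/(2*pi*L^2).
# Mobility and voltage are placeholder values; only the 1/L^2 scaling matters.
import math

def transit_frequency(mobility_cm2_per_Vs, voltage_V, channel_length_um):
    mu = mobility_cm2_per_Vs * 1e-4      # cm^2/(V*s) -> m^2/(V*s)
    L = channel_length_um * 1e-6         # um -> m
    return mu * voltage_V / (2 * math.pi * L ** 2)

# Shrinking the channel from 20 um to 1 um raises f_T by a factor of 20^2.
f_20 = transit_frequency(1.0, 5.0, 20.0)
f_1 = transit_frequency(1.0, 5.0, 1.0)
print(f_1 / f_20)  # -> 400 (i.e. 20^2)
```

The other factors cancel in the ratio, which is why shrinking the channel length is such an effective lever for high-frequency applications.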
This thesis deals with agile methods for designing a software architecture. Approaches to requirements elicitation based on topic-specific literature were researched and applied. Matching the requirements, architecture and documentation tools were chosen that are intended to facilitate the design of the architecture as well as the implementation of the required software for creating and running load tests on software-based long-term archiving systems. An existing software system that had previously performed this task was considered as the basis for a redevelopment; it was analyzed to this end but rejected for stated reasons. In the design phase, a solution strategy was determined and the structure of the architecture was planned and documented. The feasibility of the model was demonstrated by means of an exemplary data flow. Based on a freely available architecture documentation template, documentation of the concept was created that is intended to enable a quick start into the agile development phase.
The invention relates to a circuit arrangement (10) for a motor vehicle, comprising a high-voltage battery (12) for storing electrical energy, at least one electric machine (14) for driving the motor vehicle, a power converter (16) by means of which the high-voltage DC voltage provided by the high-voltage battery (12) can be converted into a high-voltage AC voltage for operating the electric machine (14), and a charging port (20) for supplying electrical energy for charging the high-voltage battery (12). The power converter (16) is designed as a three-level converter and comprises at least one switch unit (46) assigned to a phase (u) of the electric machine (14), which consists of two series-connected switch groups (52, 54), each comprising two series-connected IGBTs (T11, T12, T13, T14), wherein a terminal (64) is arranged between the IGBTs (T11, T12) of one of the switch groups (52, 54) and is directly electrically connected to a line (34) of the charging port (20).
In industry, the monitoring of industrial plants ensures that highly automated processes can run smoothly. Usually, the focus is on monitoring the machines themselves; the Ethernet-based communication lines for data exchange (e.g. Profinet) are currently not yet part of continuous monitoring. The physical connections are checked here as well, but often only at the time of commissioning, when the plant is not yet integrated into the overall system, or during a maintenance cycle, when the machine is taken out of operation for the duration of the maintenance. As a result, machine failures due to missing cable monitoring are becoming ever more likely, especially today, when Ethernet is increasingly used as the basis for industrial communication. To counteract this, a new measurement method was designed, implemented, and validated in the Ko2SiBus project that can be integrated cost-effectively into new or existing systems. To demonstrate its suitability, the project results were implemented in prototypes and demonstrators that can serve both as stand-alone and as integration solutions.
Modeling of Random Variations in a Switched Capacitor Circuit based Physically Unclonable Function
(2020)
The Internet of Things (IoT) is expanding to a wide range of fields such as home automation, agriculture, environmental monitoring, industrial applications, and many more. Securing tens of billions of interconnected devices in the near future will be one of the biggest challenges. IoT devices are often constrained in terms of computational performance, area, and power, which demand lightweight security solutions. In this context, hardware-intrinsic security, particularly physically unclonable functions (PUFs), can provide lightweight identification and authentication for such devices. In this paper, random capacitor variations in a switched capacitor PUF circuit are used as a source of entropy to generate unique security keys. Furthermore, a mathematical model based on the ordinary least square method is developed to describe the relationship between random variations in capacitors and the resulting output voltages. The model is used to filter out systematic variations in circuit components to improve the quality of the extracted secrets.
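As an illustration of the modeling idea (our own sketch, not the paper's actual model), ordinary least squares can be used to strip a systematic gradient from measured output voltages so that only the random, device-specific variation remains; the gradient, noise level, and thresholding step below are assumptions.

```python
# Hedged sketch: separate a systematic trend from random per-device
# variation in PUF output voltages via ordinary least squares (OLS).
import numpy as np

rng = np.random.default_rng(0)
positions = np.arange(32, dtype=float)        # e.g. capacitor index on the die
systematic = 0.01 * positions                 # assumed process gradient
random_part = rng.normal(0.0, 0.05, size=32)  # device-specific entropy source
voltages = 1.0 + systematic + random_part     # synthetic measurements

# OLS fit of a linear trend: V ~ b0 + b1 * position
X = np.column_stack([np.ones_like(positions), positions])
beta, *_ = np.linalg.lstsq(X, voltages, rcond=None)

# Residuals retain only the random variation used as the entropy source.
residuals = voltages - X @ beta
bits = (residuals > 0).astype(int)            # simple thresholding to key bits
print(bits.shape)
```

The residual sign bits would then serve as raw key material, with the systematic component filtered out.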
Industrial communication used to be characterized by relatively restricted, closed fieldbus systems. With the increasing opening of automation networks through horizontal and vertical integration in production plants, dangerous attack surfaces emerge that can lead to the theft of production secrets, to manipulation, or to the complete shutdown of production processes. This results in fundamentally new requirements for data security, which must be met with innovative solution approaches.
The goal of the research project "SecureField" was to investigate and demonstrate the feasibility and applicability of the "(D)TLS-over-Anything" approach, and to prepare a toolbox for defining and implementing corresponding security solutions. As a long-established standard in the IT world, the (Datagram) Transport Layer Security ((D)TLS) protocol in combination with an industry- and automation-compatible public-key infrastructure (PKI) proved to be a highly promising way to achieve data security in the OT domain as well. The project particularly addressed SMEs, for which in-house development work in this field is often too costly and technically as well as economically too risky.
With "SecureField", results were achieved on several levels. First, a comprehensive and generic concept for the end-to-end protection of communication paths and protocols in the industrial environment was developed in the course of the project. This concept consists of a generic communication model and a generic authentication model.
Modern society is striving more than ever for digital connectivity, everywhere and at any time, giving rise to megatrends such as the Internet of Things (IoT). Already today, 'things' communicate and interact autonomously with each other and are managed in networks. In the future, people, data, and things will be interlinked, which is also referred to as the Internet of Everything (IoE). Billions of devices will be ubiquitously present in our everyday environment and connected over the Internet.
As an emerging technology, printed electronics (PE) is a key enabler for the IoE offering novel device types with free form factors, new materials, and a wide range of substrates that can be flexible, transparent, as well as biodegradable. Furthermore, PE enables new degrees of freedom in circuit customizability, cost-efficiency as well as large-area fabrication at the point of use.
These unique features of PE complement conventional silicon-based technologies. Additive manufacturing processes enable the realization of many envisioned applications such as smart objects, flexible displays, wearables in health care, and green electronics, to name but a few.
From the perspective of the IoE, interconnecting billions of heterogeneous devices and systems is one of the major challenges to be solved. Complex high-performance devices interact with highly specialized lightweight electronic devices, such as smartphones and smart sensors. Data is often measured, stored, and shared continuously with neighboring devices or in the cloud. Thereby, the abundance of data being collected and processed raises privacy and security concerns.
Conventional cryptographic operations are typically based on deterministic algorithms requiring high circuit and system complexity, which makes them unsuitable for lightweight devices.
Many applications exist where strong cryptographic operations are not required, e.g., device identification and authentication. In these cases, the security level mainly depends on the quality of the entropy source and the trustworthiness of the derived keys. Statistical properties such as the uniqueness of the keys are of great importance to precisely distinguish between single entities.
In the past decades, hardware-intrinsic security, particularly physically unclonable functions (PUFs), attracted a lot of attention as a means of providing security features for IoT devices. PUFs use their inherent manufacturing variations to derive device-specific unique identifiers, comparable to fingerprints in biometrics.
The potential of this technology includes the use of a true source of randomness, on-demand key derivation, and inherent key storage.
Combining this potential with the unique features of PE technology opens up new opportunities to bring security to lightweight electronic devices and systems. Although PE is still far from being mature and as reliable as silicon technology, in this thesis we show that PE-based PUFs are promising candidates to provide key derivation suitable for device identification in the IoE.
This thesis is thus primarily concerned with the development, investigation, and assessment of PE-based PUFs to provide security functionalities to resource-constrained printed devices and systems.
As a first contribution of this thesis, we introduce the scalable PE-based Differential Circuit PUF (DiffC-PUF) design to provide secure keys to be used in security applications for resource constrained printed devices. The DiffC-PUF is designed as a hybrid system architecture incorporating silicon-based and inkjet-printed components. We develop an embedded PUF platform to enable large-scale characterization of silicon and printed PUF cores.
In the second contribution of this thesis, we fabricate silicon PUF cores based on discrete components and perform statistical tests under realistic operating conditions. A comprehensive experimental analysis on the PUF security metrics is carried out. The results show that the silicon-based DiffC-PUF exhibits nearly ideal values for the uniqueness and reliability metrics. Furthermore, the identification capabilities of the DiffC-PUF are investigated and it is shown that additional post-processing can further improve the quality of the identification system.
In the third contribution of this thesis, we first introduce an evaluation workflow to simulate PE-based DiffC-PUFs, also called hybrid PUFs. To this end, we introduce a Python-based simulation environment to investigate the characteristics and variations of printed PUF cores based on Monte Carlo (MC) simulations. The simulation results show that the security metrics to be expected from the fabricated devices are close to ideal at the best operating point.
Secondly, we employ fabricated printed PUF cores for statistical tests under varying operating conditions including variations in ambient temperature, relative humidity, and supply voltage. The evaluations of the uniqueness, bit aliasing, and uniformity metrics are in good agreement with the simulation results. The experimentally determined mean reliability value is relatively low, which can be explained by the missing passivation and encapsulation of the printed transistors. The investigation of the identification capabilities based on the raw PUF responses shows that the pure hybrid PUF is not suitable for cryptographic applications, but qualifies for device identification tasks.
The final contribution switches to the perspective of an attacker. To judge the security capabilities of the hybrid PUF, a comprehensive security analysis in the manner of a cryptanalysis is performed. The analysis of the entropy of the hybrid PUF shows that its vulnerability against model-based attacks mainly depends on the selected challenge building method. Furthermore, an attack methodology is introduced to assess the performance of different mathematical cloning attacks on the basis of eavesdropped challenge-response pairs (CRPs). To clone the hybrid PUF, a sorting algorithm is introduced and compared with commonly used supervised machine learning (ML) classifiers, including logistic regression (LR), random forest (RF), and multi-layer perceptron (MLP).
The results show that the hybrid PUF is vulnerable against model-based attacks. The sorting algorithm benefits from shorter training times compared to the ML algorithms. If the eavesdropped CRPs are erroneous, the ML algorithms outperform the sorting algorithm.
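A toy version of such a model-based attack can be sketched as follows (illustrative only, not the thesis setup): a logistic-regression model is trained on eavesdropped challenge-response pairs of a simple linear-threshold "PUF"; the dimensions and sample counts are made up.

```python
# Toy model-based attack: fit logistic regression to eavesdropped CRPs of a
# linear-threshold "PUF" and measure how well unseen responses are predicted.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
weights = rng.normal(size=16)                     # hidden device variation
challenges = rng.integers(0, 2, size=(2000, 16))  # eavesdropped challenges
responses = (challenges @ weights > 0).astype(int)

attack = LogisticRegression(max_iter=1000).fit(challenges[:1500], responses[:1500])
accuracy = attack.score(challenges[1500:], responses[1500:])
print(accuracy > 0.9)  # a linearly separable PUF model is easily cloned
```

The high accuracy on held-out CRPs mirrors the vulnerability the thesis reports: once the response is a simple function of the challenge, eavesdropping suffices to clone the device.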
This thesis covers first tests and the commissioning of a new test rig at QMK GmbH. The test rig is intended to be able to calibrate power shunt resistors. Power shunt resistors are usually rather large and heavy, so that the heat generated can be compensated by the large amount of material. Their contact surfaces are correspondingly large, so that the contact resistance at the interfaces is as low as possible. The resistance value itself is very small; standard resistors here lie in the range of 10 to 100 Ω. Measuring such small resistances as accurately as possible requires considerable technical effort; as a rule, this is realized with a four-wire measurement. Power shunt resistors, however, are generally operated at rather high currents in the range of 100 to 10 000 A. This is also to be implemented with the new test rig: the resistors are to be calibrated using high currents of up to 2 kA, so that the operating range applicable to the device under test can be reproduced, taking its self-heating into account. For this application, a test rig was developed that can supply 2 kA and determine the resistance of the calibration item with the aid of an accurate, calibrated reference resistor. Described in measurement terms, a constant-current source generates a direct current that flows through both resistors connected in series. The current is therefore identical in both resistors and can be determined. At the same time, the voltage drop across each resistor is measured. With the determined current, the "unknown" resistance of the calibration item can then be calculated via Ohm's law. This value is compared with the nominal value from its data sheet and assessed in a report, taking the rig's own measurement uncertainty into account. After the measurement or calibration, the results are summarized in a certificate issued to the customer.
The goal of this work is to develop and evaluate a calibration facility that complies with the guidelines and principles of the DAkkS, or at least serves as a basis for accreditation by the DAkkS. First and foremost, the calibration facility is to make it possible to perform and assess ISO calibrations in accordance with the ISO 9001 standard.
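The described ratio measurement reduces to two applications of Ohm's law; a minimal sketch with invented example values (the reference resistance and voltage drops are not from the text):

```python
# Minimal sketch of the described ratio measurement: the same DC current
# flows through the reference resistor and the device under calibration,
# so R_x = U_x / I with I = U_ref / R_ref. All values below are made up.
def calibrate_shunt(u_ref_V, r_ref_ohm, u_dut_V):
    current_A = u_ref_V / r_ref_ohm   # current from the reference drop
    return u_dut_V / current_A        # Ohm's law for the device under test

# Example: 0.1 mOhm reference with a 0.2 V drop implies 2 kA;
# 0.3 V measured across the calibration item.
r_x = calibrate_shunt(0.2, 0.0001, 0.3)
print(r_x)  # -> 0.00015 (0.15 mOhm)
```

In the real rig, the comparison with the data-sheet nominal value and the measurement-uncertainty budget would follow this calculation.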
In this work, we evaluate two different image clustering objectives, k-means clustering and correlation clustering, in the context of Triplet Loss induced feature space embeddings. Specifically, we train a convolutional neural network to learn discriminative features by optimizing two popular versions of the Triplet Loss in order to study their clustering properties under the assumption of noisy labels. Additionally, we propose a new, simple Triplet Loss formulation, which shows desirable properties with respect to formal clustering objectives and outperforms the existing methods. We evaluate all three Triplet Loss formulations for k-means and correlation clustering on the CIFAR-10 image classification dataset.
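For reference, the standard margin-based formulation, one of the popular Triplet Loss variants the abstract alludes to, can be sketched as follows; the embeddings and margin value are made up:

```python
# Standard margin-based Triplet Loss: L = max(0, ||a-p||^2 - ||a-n||^2 + m).
# Anchor/positive/negative embeddings and the margin are illustrative only.
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    d_pos = np.sum((anchor - positive) ** 2)  # anchor-positive distance
    d_neg = np.sum((anchor - negative) ** 2)  # anchor-negative distance
    return max(0.0, d_pos - d_neg + margin)

a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])      # same class, close to the anchor
n = np.array([1.0, 0.0])      # different class, far from the anchor
print(triplet_loss(a, p, n))  # -> 0.0 (the margin is already satisfied)
```

Minimizing this loss pulls same-class embeddings together and pushes different-class embeddings apart, which is exactly the geometry k-means and correlation clustering exploit.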
The term attribute transfer refers to the task of altering images in such a way that the semantic interpretation of a given input image is shifted in an intended direction, which is quantified by semantic attributes. Prominent example applications are photorealistic changes of facial features and expressions, such as changing the hair color, adding a smile, or enlarging the nose, or altering the entire context of a scene, such as transforming a summer landscape into a winter panorama. Recent advances in attribute transfer are mostly based on generative deep neural networks, using various techniques to manipulate images in the latent space of the generator.
In this paper, we present a novel method for the common sub-task of local attribute transfer, where only parts of a face have to be altered in order to achieve semantic changes (e.g. removing a mustache). In contrast to previous methods, where such local changes have been implemented by generating new (global) images, we propose to formulate local attribute transfer as an inpainting problem. Removing and regenerating only parts of images, our Attribute Transfer Inpainting Generative Adversarial Network (ATI-GAN) is able to utilize local context information to focus on the attributes while keeping the background unmodified, resulting in visually convincing results.
Generative adversarial networks are the state-of-the-art approach towards learned synthetic image generation. Although early successes were mostly unsupervised, bit by bit, this trend has been superseded by approaches based on labelled data. These supervised methods allow a much finer-grained control of the output image, offering more flexibility and stability. Nevertheless, the main drawback of such models is the necessity of annotated data. In this work, we introduce a novel framework that benefits from two popular learning techniques, adversarial training and representation learning, and takes a step towards unsupervised conditional GANs. In particular, our approach exploits the structure of a latent space (learned by the representation learning) and employs it to condition the generative model. In this way, we break the traditional dependency between condition and label, substituting the latter by unsupervised features coming from the latent space. Finally, we show that this new technique is able to produce samples on demand while keeping the quality of its supervised counterpart.
Generative adversarial networks (GANs) provide state-of-the-art results in image generation. However, despite being so powerful, they still remain very challenging to train. This is in particular caused by their highly non-convex optimization space leading to a number of instabilities. Among them, mode collapse stands out as one of the most daunting ones. This undesirable event occurs when the model can only fit a few modes of the data distribution, while ignoring the majority of them. In this work, we combat mode collapse using second-order gradient information. To do so, we analyse the loss surface through its Hessian eigenvalues, and show that mode collapse is related to the convergence towards sharp minima. In particular, we observe how the eigenvalues of the generator G are directly correlated with the occurrence of mode collapse. Finally, motivated by these findings, we design a new optimization algorithm called nudged-Adam (NuGAN) that uses spectral information to overcome mode collapse, leading to empirically more stable convergence properties.
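The kind of spectral probe this analysis relies on can be illustrated with power iteration for the largest Hessian eigenvalue; the toy 2×2 matrices below merely stand in for sharp versus flat loss curvature and are not from the paper.

```python
# Power iteration for the largest eigenvalue of a (toy) Hessian.
# In practice this is done with Hessian-vector products via autodiff;
# here explicit 2x2 matrices stand in for the curvature of the loss.
import numpy as np

def largest_eigenvalue(H, iters=100):
    v = np.ones(H.shape[0])
    for _ in range(iters):
        v = H @ v
        v /= np.linalg.norm(v)
    return v @ H @ v                  # Rayleigh quotient at convergence

H_sharp = np.array([[50.0, 0.0], [0.0, 1.0]])  # sharp minimum: large eigenvalue
H_flat = np.array([[0.5, 0.0], [0.0, 0.1]])    # flat minimum: small eigenvalue
print(largest_eigenvalue(H_sharp))  # -> ~50.0
print(largest_eigenvalue(H_flat))   # -> ~0.5
```

A large top eigenvalue signals a sharp minimum, which is the curvature regime the abstract associates with mode collapse.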
Continuous monitoring of Ethernet lines prevents machine failures in industry. At present, however, suitable methods for carrying out this monitoring on a large scale are lacking. In the Ko²SiBus project, a cost-effective method for the continuous monitoring of Ethernet lines was therefore developed.
From Ideas to Products
(2020)
Analysis of Miniaturized Printed Flexible RFID/NFC Antennas Using Different Carrier Substrates
(2020)
Antennas for Radio Frequency Identification (RFID) provide benefits for high frequencies (HF) and wireless data transmission via Near Field Communication (NFC) and many other applications. In this case, various requirements for the design of the reader and transmitter antennas must be met in order to achieve a suitable transmission quality. In this work, a miniaturized, cost-effective RFID/NFC antenna for a microelectronic measurement system is designed and printed on different flexible carrier substrates using a new and low-cost Direct Ink Writing (DIW) technology. Various practical aspects such as reflection and impedance magnitude, as well as the behavior of the printed RFID/NFC antennas, are analyzed and compared to an identical copper-based antenna of the same size. The results are presented in this paper. Furthermore, problems during the printing process itself on the different substrates are evaluated. The antenna characteristics under kink-free bending tests are examined, and long-term measurements are subsequently carried out.
Time Sensitive Networking (TSN) provides mechanisms to enable deterministic and real-time networking in industrial networks. Configuration of these mechanisms is key to fully deploying and integrating TSN in the networks. The IEEE 802.1Qcc standard has proposed different configuration models to implement a TSN configuration. Up until now, TSN and its configuration have been explored mostly for Ethernet-based industrial networks; for wireless networks, they are still considered work in progress. This work focuses on the fully centralized model and describes a generic concept to enable the configuration of TSN mechanisms in wireless industrial networks. To this end, a configuration entity is implemented to configure the wireless end stations to satisfy their requirements. The proposed solution is then validated with the Digital Enhanced Cordless Telecommunication ultra-low energy (DECT ULE) wireless communication protocol.
The authentication of electronic devices based on the individual shapes of the correlograms of their internal electric noise is well known. Specific physical differences in the components, for example caused by variations in production quality, cause specific electrical signals, i.e. electric noise, in the electronic device. This information can be obtained, and the specific differences of individual devices identified, using an embedded analog-to-digital converter (ADC). These investigations confirm the possibility of identifying and authenticating electronic devices using bit templates calculated from the sequence of values of the normalized autocorrelation function of the noise. Experiments have been performed using personal computers. The probability of correct identification and authentication increases with increasing noise recording duration. As a result of these experiments, an accuracy of 98.1% was achieved for a 1-second-long registration of EM for the set of investigated computers.
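One plausible reading of the described template construction (our own sketch, with invented parameters such as the template length) is to take the sign of the normalized autocorrelation at successive lags:

```python
# Sketch: derive a bit template from the normalized autocorrelation
# function of a recorded noise signal. Template length is an assumption.
import numpy as np

def bit_template(noise, n_bits=16):
    x = noise - noise.mean()
    # normalized autocorrelation for lags 1..n_bits
    acf = np.array([np.dot(x[:-k], x[k:]) for k in range(1, n_bits + 1)])
    acf /= np.dot(x, x)
    return (acf > 0).astype(int)   # sign of each lag value -> one template bit

rng = np.random.default_rng(42)
noise = rng.normal(size=4096)      # stand-in for a recorded noise trace
template = bit_template(noise)
print(len(template))
```

Two recordings from the same device would then be matched by comparing such templates, e.g. via Hamming distance.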
The development of Internet of Things (IoT) embedded devices is proliferating, especially in smart home automation systems. However, these devices unfortunately impose overhead on the IoT network. The Internet Engineering Task Force (IETF) has therefore introduced IPv6 over Low-Power Wireless Personal Area Networks (6LoWPAN) to provide a solution to this constraint. 6LoWPAN is an Internet Protocol (IP) based communication approach that allows each device to connect to the Internet directly; as a result, power consumption is reduced. However, the limited data transmission frame size of the IPv6 Routing Protocol for Low-power and Lossy Networks (RPL) causes routing overhead and consequently degrades network performance in terms of Quality of Service (QoS), especially in large networks. Therefore, HRPL was developed to enhance the RPL protocol and minimize the redundant retransmissions that cause routing overhead. We introduce the T-Cut Off Delay to set the limit of the delay and the H field to respond to actions taken within the T-Cut Off Delay. This paper presents a comparative performance assessment of HRPL between simulation and a real-world scenario (the 6LoWPAN Smart Home System (6LoSH) testbed) to validate the HRPL functionalities. Our results show that HRPL successfully reduced the routing overhead when implemented in 6LoSH. The observed difference in Control Traffic Overhead (CTO) packets between the experiments is 7.1%, and in convergence time 9.3%. Further research is recommended for the following metrics: latency, Packet Delivery Ratio (PDR), and throughput.
During the day-to-day operation of localization systems in mines, the technical staff tends to rearrange radio equipment incorrectly: the positions of devices may not be accurately marked on a map, or the marked positions may not correspond to the truth. This can lead to positioning inaccuracies and errors in the operation of the localization system. This paper presents two Bayesian algorithms for the automatic correction of equipment positions on the map using trajectories restored by inertial measurement units mounted on mobile objects such as pedestrians and vehicles. As a basis, a predefined map of the mine, represented as an undirected weighted graph, was used as input. The algorithms were implemented using the Simultaneous Localization and Mapping (SLAM) approach. The results prove that both methods are capable of detecting the misplacement of access points and of providing corresponding corrections. The discrete Bayesian filter outperforms the unscented Kalman filter, but requires more computational power.
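The discrete Bayesian filter mentioned above rests on the generic textbook update step, sketched here in isolation (the state space and likelihood values are invented and are not the paper's map-correction algorithm):

```python
# Generic discrete Bayes-filter measurement update: multiply a prior over
# discrete states by a measurement likelihood and renormalize.
def bayes_update(prior, likelihood):
    posterior = [p * l for p, l in zip(prior, likelihood)]
    s = sum(posterior)
    return [p / s for p in posterior]

prior = [0.25, 0.25, 0.25, 0.25]        # e.g. candidate access-point positions
likelihood = [0.1, 0.7, 0.1, 0.1]       # measurement favors position 1
print(bayes_update(prior, likelihood))  # -> [0.1, 0.7, 0.1, 0.1]
```

Repeating this update along restored trajectories is what lets the filter converge on the true device placement.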
RETIS – Real-Time Sensitive Wireless Communication Solution for Industrial Control Applications
(2020)
Ultra-Reliable Low-Latency Communication (URLLC) has always been a vital component of many industrial applications. This paper proposes a new wireless URLLC solution called RETIS, which is suitable for factory automation and fast process control applications, where low latency, low jitter, and high data exchange rates are mandatory. In the paper, we describe the communication protocol as well as the hardware structure of the network nodes for implementing the required functionality. Many techniques enabling fast, reliable wireless transmissions are used: short Transmission Time Interval (TTI), Time-Division Multiple Access (TDMA), MIMO, optional duplicated data transfer, Forward Error Correction (FEC), and an ACK mechanism. Preliminary tests show that reliable end-to-end latency down to 350 μs and a packet exchange rate up to 4 kHz can be reached (using quadruple MIMO and standard IEEE 802.15.4 PHY at 250 kbit/s).
The number of use cases for autonomous vehicles is increasing day by day, especially in commercial applications. One important application of autonomous vehicles can be found in the parcel delivery sector. Here, autonomous cars can massively help to reduce delivery effort and time by actively supporting the courier. One important component, of course, is the autonomous vehicle itself. Nevertheless, besides the autonomous vehicle, a flexible and secure communication architecture is also a crucial component impacting the overall performance of such a system, since it must allow continuous interactions between the vehicle and the other components of the system. The communication system must provide a reliable and secure architecture that is still flexible enough to remain practical and to address several use cases. In this paper, a robust communication architecture for such autonomous fleet-based systems is proposed. The architecture provides reliable communication between different system entities while keeping those communications secure. The architecture uses different technologies such as Bluetooth Low Energy (BLE), cellular networks, and Low Power Wide Area Network (LPWAN) to achieve its goals.
This paper presents a novel low-jitter interface between a low-cost integrated IEEE 802.11 chip and an FPGA. It is designed to be part of system hardware for ultra-precise synchronization between wireless stations. On the physical level, it uses the Wi-Fi chip's coexistence signal lines and UART frame encoding. On this basis, we propose an efficient communication protocol providing precise timestamping of incoming frames and internal diagnostic mechanisms for detecting communication faults. At the same time, it is simple enough to be implemented both in a low-cost FPGA and in commodity IEEE 802.11 chip firmware. The results of computer simulations show that the developed FPGA implementation of the proposed protocol can precisely timestamp incoming frames and detect most communication errors even under high interference. The probability of undetected errors was investigated. The results of this analysis are significant for the development of novel wireless synchronization hardware.
With the increasing degree of interconnectivity in industrial factories, security is increasingly becoming the most important stepping stone towards wide adoption of the Industrial Internet of Things (IIoT). This paper summarizes the most important aspects of a keynote given at the DESSERT 2020 conference. It highlights ongoing and open research activities at different levels, from novel cryptographic algorithms over security protocol integration and testing to security architectures covering the full lifetime of devices and systems, and includes an overview of the research activities at the authors' institute.
Analysis of Amplitude and Phase Errors in Digital-Beamforming Radars for Automotive Applications
(2020)
Fundamentally, automotive radar sensors with Digital Beamforming (DBF) use several transmitter and receiver antennas to measure the direction of a target. However, hardware imperfections, tolerances in the feeding lines of the antennas, coupling effects, as well as temperature changes and ageing cause amplitude and phase errors. These errors can lead to misinterpretation of the data and result in hazardous actions of the autonomous system. First, the impact of amplitude and phase errors on angular estimation is discussed and analyzed by simulations. The results are compared with the measured errors of a real radar sensor. Furthermore, a calibration method is implemented and evaluated by measurements.
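The effect of per-channel amplitude and phase errors on angle estimation can be illustrated with a small simulation; the array size, error magnitudes, and the simple beamforming-scan estimator below are our own assumptions, not the paper's setup:

```python
import numpy as np

def ula_steering(angle_deg, n_ant, d=0.5):
    """Steering vector of a uniform linear array (element spacing d in wavelengths)."""
    k = np.arange(n_ant)
    return np.exp(2j * np.pi * d * k * np.sin(np.radians(angle_deg)))

def estimate_angle(snapshot, n_ant, scan=np.linspace(-60, 60, 2401)):
    """Conventional beamforming scan: pick the angle with maximal response."""
    power = [np.abs(ula_steering(a, n_ant).conj() @ snapshot) for a in scan]
    return scan[int(np.argmax(power))]

n_ant, true_angle = 8, 20.0
clean = ula_steering(true_angle, n_ant)

# hypothetical per-channel errors (e.g. feeding-line tolerances, ageing)
rng = np.random.default_rng(0)
amp_err = 1 + 0.1 * rng.standard_normal(n_ant)          # ~10 % amplitude ripple
phase_err = np.radians(5) * rng.standard_normal(n_ant)  # ~5 deg phase ripple
distorted = amp_err * np.exp(1j * phase_err) * clean

print(estimate_angle(clean, n_ant))      # recovers the true angle
print(estimate_angle(distorted, n_ant))  # biased by the channel errors
```

Calibration, as evaluated in the paper, amounts to estimating and inverting exactly these per-channel complex gain factors.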
Due to rapidly increasing storage consumption worldwide, as well as the expectation of continuous availability of information, the complexity of administration in today's data centers is growing steadily. Integrated techniques for monitoring hard disks can increase the reliability of storage systems. However, these techniques often lack intelligent data analysis to enable predictive maintenance. To solve this problem, machine learning algorithms can be used to detect potential failures in advance and prevent them. In this paper, an unsupervised model for predicting hard disk failures based on Isolation Forest is proposed. Furthermore, a method is presented that can deal with highly imbalanced datasets, as the experiment on the Backblaze benchmark dataset demonstrates.
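A minimal sketch of the idea, using scikit-learn's IsolationForest on hypothetical SMART-like features; the feature choice, data, and contamination value are illustrative assumptions, not the paper's configuration:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# hypothetical features: (reallocated sectors, temperature, seek error rate)
healthy = rng.normal(loc=[5, 35, 0.1], scale=[2, 3, 0.05], size=(500, 3))
failing = rng.normal(loc=[80, 50, 0.9], scale=[10, 5, 0.1], size=(5, 3))
X = np.vstack([healthy, failing])

# unsupervised: fit on the full, highly imbalanced data without any labels
clf = IsolationForest(contamination=0.01, random_state=0).fit(X)
pred = clf.predict(X)           # -1 = anomaly, +1 = normal

print((pred[-5:] == -1).sum())  # most of the failing drives are flagged
```

Because Isolation Forest never sees labels, the extreme class imbalance of disk-failure data does not hurt training, which is what makes it attractive for this task.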
The recent successes and widespread application of compute-intensive machine learning and data analytics methods have been boosting the usage of the Python programming language on HPC systems. While Python provides many advantages for its users, it was not designed with a focus on multi-user environments or parallel programming, making it quite challenging to maintain stable and secure Python workflows on an HPC system. In this paper, we analyze the key problems induced by the usage of Python on HPC clusters and sketch appropriate workarounds for efficiently maintaining multi-user Python software environments, securing and restricting the resources of Python jobs, and containing Python processes, with a focus on deep learning applications running on GPU clusters.
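One common building block for restricting the resources of a Python job is POSIX rlimits, sketched below; the specific limits and the child job are illustrative assumptions, and production HPC setups would typically rely on cgroups or the batch scheduler instead:

```python
import resource
import subprocess
import sys

def run_limited(cmd, mem_bytes, cpu_seconds):
    """Run a child job with hard address-space and CPU-time limits (POSIX only)."""
    def apply_limits():
        # applied in the child process just before exec
        resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
    return subprocess.run(cmd, preexec_fn=apply_limits)

# a job that tries to allocate ~2 GiB fails under a 512 MiB limit
job = [sys.executable, "-c", "x = bytearray(2 * 1024**3)"]
result = run_limited(job, mem_bytes=512 * 1024**2, cpu_seconds=10)
print(result.returncode)  # non-zero: the allocation was refused
```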
In this work, a method for the estimation of current slopes induced by inverters operating interior permanent magnet synchronous machines is presented. After the derivation of the estimation algorithm, the requirements for a suitable sensor setup in terms of accuracy, dynamics, and electromagnetic interference are discussed. The boundary conditions for the estimation algorithm are presented with respect to application within high-power traction systems. The estimation algorithm is implemented on a field programmable gate array (FPGA). The moving least-squares algorithm offers the advantage that it does not operate on stored sample vectors, so not every measured value has to be stored. Accumulating sums of the measured values instead leads to a significant reduction of the required storage units and thus decreases the hardware requirements. The algorithm is designed to be calculated within the dead time of the inverter. Appropriate countermeasures for disturbances and hardware restrictions are implemented, and the results are discussed afterwards.
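The storage-saving idea, keeping only running sums instead of the sample vectors, can be sketched as follows; the sample values and the slope are made up for illustration, and the FPGA implementation in the paper is fixed-point rather than Python:

```python
def slope_from_sums(n, sum_t, sum_y, sum_tt, sum_ty):
    """Least-squares slope computed from accumulated sums only (no sample storage)."""
    denom = n * sum_tt - sum_t ** 2
    return (n * sum_ty - sum_t * sum_y) / denom

# accumulate the sums sample by sample, as a hardware implementation would
n = sum_t = sum_y = sum_tt = sum_ty = 0
for k in range(8):                              # hypothetical samples of a rising current
    t, y = k * 1e-6, 2.0 + 3.0 * (k * 1e-6)     # true slope: 3.0 A/s
    n += 1
    sum_t += t; sum_y += y; sum_tt += t * t; sum_ty += t * y

print(slope_from_sums(n, sum_t, sum_y, sum_tt, sum_ty))  # ≈ 3.0
```

Only five accumulators are needed regardless of the number of samples, which is the storage reduction the paper exploits.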
Method for controlling a device, in particular, a prosthetic hand or a robotic arm (US20200327705A1)
(2020)
A method for controlling a device, in particular a prosthetic hand or a robotic arm, includes using an operator-mounted camera to detect at least one marker positioned on or in relation to the device. Starting from the detection of the at least one marker, a predefined movement of the operator together with the camera is detected and is used to trigger a corresponding action of the device. The predefined movement of the operator is detected in the form of a line of sight by means of camera tracking. A system for controlling a device, in particular a prosthetic hand or a robotic arm, includes a pair of AR glasses adapted to detect the at least one marker and to detect the predefined movement of the operator.
Purpose
This work presents a new monocular peer-to-peer tracking concept overcoming the distinction between tracking tools and tracked tools for optical navigation systems. A marker model concept based on marker triplets combined with a fast and robust algorithm for assigning image feature points to the corresponding markers of the tracker is introduced. Also included is a new and fast algorithm for pose estimation.
Methods
A peer-to-peer tracker consists of seven markers, which can be tracked by other peers, and one camera, which is used to track the position and orientation of other peers. The special marker layout enables a fast and robust algorithm for assigning image feature points to the correct markers. The iterative pose estimation algorithm is based on point-to-line matching with Lagrange–Newton optimization and does not rely on initial guesses. Uniformly distributed quaternions in 4D (the vertices of a hexacosichoron, the 600-cell) are used as starting points and always provide the global minimum.
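For illustration, the 120 vertices of the hexacosichoron (600-cell) used as uniformly distributed starting quaternions can be generated from the standard geometric construction; this sketch is our own reconstruction, not the authors' code:

```python
import itertools
import math
import numpy as np

PHI = (1 + math.sqrt(5)) / 2  # golden ratio

def even_permutations(values):
    """Yield the 12 even permutations (alternating group A4) of a 4-tuple."""
    for perm in itertools.permutations(range(4)):
        inversions = sum(perm[i] > perm[j] for i in range(4) for j in range(i + 1, 4))
        if inversions % 2 == 0:
            yield tuple(values[k] for k in perm)

def hexacosichoron_vertices():
    """The 120 unit quaternions forming the vertices of the 600-cell."""
    verts = set()
    for i in range(4):                                       # 8 axis vertices
        for s in (1.0, -1.0):
            v = [0.0] * 4
            v[i] = s
            verts.add(tuple(v))
    for signs in itertools.product((0.5, -0.5), repeat=4):   # 16 diagonal vertices
        verts.add(signs)
    base = (PHI / 2, 0.5, 1 / (2 * PHI), 0.0)                # 96 golden-ratio vertices
    for s in itertools.product((1, -1), repeat=3):
        signed = (s[0] * base[0], s[1] * base[1], s[2] * base[2], 0.0)
        verts.update(even_permutations(signed))
    return np.array(sorted(verts))

starts = hexacosichoron_vertices()
print(len(starts))   # 120 starting rotations for the optimizer
```

Running the local Lagrange–Newton optimization from each of these well-spread rotations is what lets the method dispense with an initial pose guess.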
Results
Experiments have shown that the marker assignment algorithm robustly assigns image feature points to the correct markers even under challenging conditions. The pose estimation algorithm works fast, robustly and always finds the correct pose of the trackers. Image processing, marker assignment, and pose estimation for two trackers are handled in less than 18 ms on an Intel i7-6700 desktop computer at 3.4 GHz.
Conclusion
The new peer-to-peer tracking concept is a valuable approach to a decentralized navigation system that offers more freedom in the operating room while providing accurate, fast, and robust results.
Three quotations serve as an entry point into the discourse on civilian network technologies, mobile devices, online services, and the question of how the "church of the future" can position itself (at least from a media-studies perspective). The juxtaposition of the positions they represent is intended to show the benefits and consequences, for the individual as well as for society, of the increasingly complete penetration of (almost) all areas of life by digital technology.
Background: Disturbed synchronization of the ventricular contraction can cause advanced systolic heart failure in affected patients, which can often be explained by a left bundle branch block (LBBB). If patients do not respond to medication, they are treated with a cardiac resynchronization therapy (CRT) system. The aim of this study was to integrate His bundle pacing into the Offenburg heart rhythm model in order to visualize the electrical pacing field generated by His bundle pacing.
Methods: Modelling and electrical field simulation were performed with the software CST (Computer Simulation Technology) from Dassault Systèmes. CRT with biventricular pacing is achieved by an apical right ventricular electrode and an additional left ventricular electrode, which is floated into the coronary sinus. This conventional type of biventricular pacing leads to a reduction of the left ventricular ejection fraction. Furthermore, the non-responder rate of CRT therapy is about one third of CRT patients.
Results: His bundle pacing represents a physiological alternative to conventional cardiac pacing and cardiac resynchronization. An electrode implanted in the His bundle emits a stronger electrical pacing field than that of conventional cardiac pacemakers. Pacing of the His bundle was performed with the Medtronic SelectSecure 3830 electrode at pacing voltage amplitudes of 3 V, 2 V, and 1.5 V in combination with a pacing pulse duration of 1 ms.
Conclusions: Compared to conventional cardiac pacemaker pacing, His bundle pacing is capable of bridging LBBB conduction disorders in the left ventricle. The His bundle pacing electrical field is able to spread via the physiological pathway in the right and left ventricles for CRT with a narrow QRS-complex in the surface ECG.
In the presented approach, the point of impact of the dart is to be determined by cross-correlating audio signals. The impact of the dart produces a characteristic sound, which is converted into electrical signals by several microphones arranged in a specific configuration around the dartboard. Using the speed of sound and the time differences with which the sound wave reaches the individual microphones, the point of impact is then calculated.
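The time-difference estimation step can be sketched with a plain cross-correlation; the sampling rate, burst model, and delay below are illustrative assumptions:

```python
import numpy as np

def tdoa(sig_a, sig_b, fs):
    """Time difference of arrival between two microphone signals via cross-correlation."""
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = np.argmax(corr) - (len(sig_b) - 1)  # positive: sig_a lags sig_b
    return lag / fs

fs = 48_000                                             # assumed sampling rate
impact = np.random.default_rng(1).standard_normal(256)  # impact noise burst
delay_samples = 12                                      # sound reaches mic B later

mic_a = np.concatenate([impact, np.zeros(100)])
mic_b = np.concatenate([np.zeros(delay_samples), impact, np.zeros(100 - delay_samples)])

dt = tdoa(mic_b, mic_a, fs)
print(dt * 343.0)   # path-length difference in metres (c = 343 m/s)
```

With time differences from several microphone pairs, the impact point follows from multilateration on the known dartboard geometry.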
Through an implementation and a subsequent meaningful evaluation, the visual-inertial mapping and localization system maplab is analyzed. Mapping and localization are based on the detection of environmental features. Besides creating individual maps, it is also possible to merge several maps, allowing extensive areas to be mapped and used for further data analysis. The execution and assessment of the results in different application scenarios show that maplab is particularly well suited to mapping rooms and small building complexes. Map merging further offers the option of increasing the information content of maps, which improves the effectiveness of subsequent localization. With growing map size, however, geometric inconsistencies increase.
Generative convolutional deep neural networks, e.g. popular GAN architectures, rely on convolution-based up-sampling methods to produce non-scalar outputs like images or video sequences. In this paper, we show that common up-sampling methods, known as up-convolution or transposed convolution, cause the inability of such models to reproduce the spectral distributions of natural training data correctly. This effect is independent of the underlying architecture, and we show that it can be used to easily detect generated data like deepfakes with up to 100% accuracy on public benchmarks. To overcome this drawback of current generative models, we propose to add a novel spectral regularization term to the training optimization objective. We show that this approach not only allows training spectrally consistent GANs that avoid high-frequency errors, but also that a correct approximation of the frequency spectrum has positive effects on the training stability and output quality of generative networks.
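The spectral mismatch can be made visible with an azimuthally averaged 1D power spectrum, the standard reduction used in this line of work; the toy white-noise images and the naive nearest-neighbour up-sampling below are illustrative assumptions, not the paper's GAN outputs:

```python
import numpy as np

def radial_power_spectrum(img, n_bins=32):
    """Azimuthally averaged power spectrum of a grayscale image."""
    power = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = img.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h / 2, x - w / 2)
    bins = np.linspace(0, r.max(), n_bins + 1)
    idx = (np.digitize(r.ravel(), bins) - 1).clip(0, n_bins - 1)
    total = np.bincount(idx, weights=power.ravel(), minlength=n_bins)
    count = np.bincount(idx, minlength=n_bins)
    return total / count

rng = np.random.default_rng(0)
white = rng.standard_normal((64, 64))                   # stand-in for natural statistics
small = rng.standard_normal((32, 32))
upsampled = small.repeat(2, axis=0).repeat(2, axis=1)   # naive 2x up-sampling

spec_ref = radial_power_spectrum(white)
spec_up = radial_power_spectrum(upsampled)
print(spec_up[-5:].mean() < spec_ref[-5:].mean())       # high frequencies are distorted
```

A spectral regularization term, as proposed in the paper, penalizes exactly this kind of deviation of the generated spectrum from the training-data spectrum.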
Multiple Object Tracking (MOT) is a long-standing task in computer vision. Current approaches based on the tracking-by-detection paradigm either require some sort of domain knowledge or supervision to associate data correctly into tracks. In this work, we present an unsupervised multiple object tracking approach based on visual features and minimum cost lifted multicuts. Our method builds on straightforward spatio-temporal cues that can be extracted from neighboring frames in an image sequence without supervision. Clustering based on these cues enables us to learn the required appearance invariances for the tracking task at hand and to train an autoencoder to generate suitable latent representations. The resulting latent representations can thus serve as robust appearance cues for tracking, even over large temporal distances where no reliable spatio-temporal features can be extracted. We show that, despite being trained without the provided annotations, our model delivers competitive results on the challenging MOT benchmark for pedestrian tracking.
We introduce an open-source Python framework named PHS (Parallel Hyperparameter Search) that enables hyperparameter optimization of any arbitrary Python function on numerous compute instances. This is achieved with minimal modifications inside the target function. Possible applications include expensive-to-evaluate numerical computations that strongly depend on hyperparameters, such as machine learning. Bayesian optimization is chosen as a sample-efficient method to propose the next query set of parameters.
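A minimal sketch of the overall pattern, proposing a query set of hyperparameters and evaluating it on parallel workers, is shown below; the objective function, the random proposals (a Bayesian optimizer would propose these instead), and the thread pool are illustrative assumptions and not the PHS API:

```python
import concurrent.futures as cf
import random

def target(x, y):
    """Hypothetical expensive objective depending on hyperparameters x and y."""
    return (x - 0.3) ** 2 + (y + 0.1) ** 2

# propose a query set of candidate hyperparameters
rng = random.Random(0)
queries = [{"x": rng.uniform(-1, 1), "y": rng.uniform(-1, 1)} for _ in range(16)]

# evaluate the candidates on parallel workers
with cf.ThreadPoolExecutor(max_workers=4) as pool:
    futures = {pool.submit(target, **q): q for q in queries}
    results = [(futures[f], f.result()) for f in cf.as_completed(futures)]

best_params, best_loss = min(results, key=lambda r: r[1])
print(best_params, best_loss)
```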
Diffracted waves carry high-resolution information that can help interpret fine structural details at a scale smaller than the seismic wavelength. However, the diffracted energy tends to be weak compared to the reflected energy and is also sensitive to inaccuracies in the migration velocity, making the identification of its signal challenging. In this work, we present an innovative workflow to automatically detect scattering points in the migration dip-angle domain using deep learning. By taking advantage of the different kinematic properties of reflected and diffracted waves, we separate the two types of signals by migrating the seismic amplitudes to dip-angle gathers using prestack depth imaging in the local angle domain. Convolutional neural networks are a class of deep learning algorithms able to learn to extract spatial information from the data in order to identify its characteristics; they have become the method of choice for supervised pattern recognition problems. In this work, we use wave-equation modelling to create a large and diversified dataset of synthetic examples to train a network to identify the probable positions of scattering objects in the subsurface. After giving an intuitive introduction to diffraction imaging and deep learning and discussing some pitfalls of the methods, we evaluate the trained network on field data and demonstrate the validity and good generalization performance of our algorithm. We successfully identify diffraction points with high accuracy and high resolution, including those with a low signal-to-noise-and-reflection ratio. We also show how our method allows us to quickly scan through high-dimensional data consisting of several versions of a dataset migrated with a range of velocities, in order to overcome the strong effect of incorrect migration velocity on the diffracted signal.
Extracting horizon surfaces from key reflections in a seismic image is an important step of the interpretation process. Interpreting a reflection surface in a geologically complex area is a difficult and time-consuming task that requires an understanding of the 3D subsurface geometry. Common methods to help automate the process are based on tracking waveforms in a local window around manual picks. Those approaches often fail when the wavelet character lacks lateral continuity or when reflections are truncated by faults. We formulate horizon picking as a multiclass segmentation problem and solve it by supervised training of a 3D convolutional neural network. We design an efficient architecture that analyzes the data over multiple scales while keeping memory and computational needs at a practical level. To allow for uncertainties in the exact location of the reflections, we use a probabilistic formulation to express the horizons' positions. By using a masked loss function, we give interpreters flexibility when picking the training data. Our method allows experts to interactively improve the picking results by fine-tuning the network in the more complex areas. We also show how our algorithm can be used to extend horizons to the prestack domain by following reflections across offset planes, even in the presence of residual moveout. We validate our approach on two field datasets and show that it yields accurate results on nontrivial reflectivity while being trained from a workable amount of manually picked data. Initial training of the network takes approximately one hour, and the fine-tuning and prediction on a large seismic volume take at most a minute.
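The masked-loss idea, computing the loss only over voxels the interpreter actually picked, can be sketched as follows; the binary cross-entropy and the toy 2D slice are illustrative assumptions, while the paper uses a multiclass 3D formulation:

```python
import numpy as np

def masked_bce(pred, target, mask):
    """Binary cross-entropy averaged only over labeled voxels (mask == 1)."""
    eps = 1e-7
    p = np.clip(pred, eps, 1 - eps)
    loss = -(target * np.log(p) + (1 - target) * np.log(1 - p))
    return (loss * mask).sum() / mask.sum()

# a tiny 4x4 slice: only six voxels were picked by the interpreter
pred = np.full((4, 4), 0.5)                                # untrained network output
target = np.zeros((4, 4)); target[1, :3] = 1               # horizon along row 1
mask = np.zeros((4, 4)); mask[1, :3] = 1; mask[3, :3] = 1  # picked voxels only

print(masked_bce(pred, target, mask))   # → ln 2 ≈ 0.693, over the 6 picked voxels
```

Unlabeled voxels contribute no gradient, so interpreters are free to pick sparsely and only where they are confident.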
Diffracted waves carry high-resolution information that can help interpret fine structural details at a scale smaller than the seismic wavelength. Because of the low signal-to-noise ratio of diffracted waves, it is challenging to preserve them during processing and to identify them in the final data. Traditionally, diffractions are therefore picked manually; however, this task is tedious and often prohibitive, so current attention is given to domain adaptation. These methods aim to transfer knowledge from a labeled domain used to train the model and then infer on the real, unlabeled data. In this regard, it is common practice to create a synthetic labeled training dataset, followed by testing on unlabeled real data. Unfortunately, such a procedure may fail due to the gap between the synthetic and the real distribution, since synthetic data quite often oversimplifies the problem, and consequently transfer learning becomes a hard and non-trivial procedure. Furthermore, deep neural networks are characterized by their high sensitivity to cross-domain distribution shift. In this work, we present a deep learning model that builds a bridge between both distributions by creating a semi-synthetic dataset that fills the gap between the synthetic and real domains. More specifically, our proposal is a feed-forward, fully convolutional neural network for image-to-image translation that allows inserting synthetic diffractions while preserving the original reflection signal. A series of experiments validates that our approach produces convincing seismic data containing the desired synthetic diffractions.
Machine learning (ML) has become highly relevant in applications across all industries, and specialists in the field are urgently sought. As it is a highly interdisciplinary field, requiring knowledge in computer science, statistics, and the relevant application domain, experts are hard to find. Large corporations can sweep the job market by offering high salaries, which makes the situation for small and medium enterprises (SME) even worse, as they usually lack the capacities both for attracting specialists and for qualifying their own personnel. In order to meet the enormous demand for ML specialists, universities now teach ML in specifically designed degree programs as well as within established programs in science and engineering. While the teaching almost always uses practical examples, these are often somewhat artificial or outdated, as real data from real companies is usually not available. The approach reported in this contribution aims to tackle the above challenges in an integrated course combining three independent aspects: first, teaching key ML concepts to graduate students from a variety of existing degree programs; second, qualifying working professionals from SME for ML; and third, applying ML to real-world problems faced by those SME. The course was carried out in two trial periods within a government-funded project at a university of applied sciences in south-west Germany. The region is dominated by SME, many of which are world leaders in their industries. Participants were students from different graduate programs as well as working professionals from several SME based in the region. The first phase of the course (one semester) covered the fundamental concepts of ML, such as exploratory data analysis, regression, classification, clustering, and deep learning. In this phase, student participants and working professionals were taught in separate tracks.
Students attended regular classes and lab sessions (but were also given access to e-learning materials), whereas the professionals learned exclusively in a flipped classroom scenario: they were given access to e-learning units (video lectures and accompanying quizzes) for preparation, while face-to-face sessions were dominated by lab experiments applying the concepts. Prior to the start of the second phase, participating companies were invited to submit real-world problems that they wanted to solve with the help of ML. The second phase consisted of practical ML projects, each tackling one of the problems and worked on by a mixed team of both students and professionals for the period of one semester. The teams were self-organized in the ways they preferred to work (e.g. remote vs. face-to-face collaboration), but also coached by one of the teaching staff. In several plenary meetings, the teams reported on their status as well as challenges and solutions. In both periods, the course was monitored and extensive surveys were carried out. We report on the findings as well as the lessons learned. For instance, while the program was very well-received, professional participants wished for more detailed coverage of theoretical concepts. A challenge faced by several teams during the second phase was a dropout of student members due to upcoming exams in other subjects.
Background: This paper presents a novel approach for a hand prosthesis consisting of a flexible, anthropomorphic, 3D-printed replacement hand combined with a commercially available motorized orthosis that allows gripping.
Methods: A 3D light scanner was used to produce a personalized replacement hand. The wrist of the replacement hand was printed of rigid material; the rest of the hand was printed of flexible material. A standard arm liner was used to enable the user’s arm stump to be connected to the replacement hand. With computer-aided design, two different concepts were developed for the scanned hand model: In the first concept, the replacement hand was attached to the arm liner with a screw. The second concept involved attaching with a commercially available fastening system; furthermore, a skeleton was designed that was located within the flexible part of the replacement hand.
Results: 3D multi-material printing of the two different hands was unproblematic and inexpensive. The printed hands had approximately the weight of a real hand. When testing the replacement hands with the orthosis, convincing everyday functionality could be demonstrated. For example, it was possible to grip and lift a 1 L water bottle. In addition, a pen could be held, making writing possible.
Conclusions: This first proof-of-concept study encourages further testing with users.
Since 2009, the team "magmaOffenburg" has participated in the 3D simulation league of RoboCup. The quality of the learned motion sequences is a key factor for success in tournaments. So far, genetic algorithms have been used to learn and optimize a wide variety of actions. In this work, the deep reinforcement learning algorithm Proximal Policy Optimization is used to learn specific movements. To gain an understanding of its influential parameters, factors such as parallel learning, hyperparameters, network topology, the size of the observation space, and asynchronous learning are evaluated on the task of kicking from a standing position. Based on the results of this evaluation, the learned kick was significantly improved and replaced its genetically learned counterpart in games. Furthermore, the findings were evaluated on the task of learning to walk, and similarities and differences between the two learning problems were identified.
The precise positioning of mobile systems is a prerequisite for any autonomous behavior, in industrial environments as well as in field robotics. The paper describes the setup of an experimental platform and its use for the evaluation of simultaneous localization and mapping (SLAM) algorithms. Two approaches are compared. First, a local method based on point cloud matching and the integration of inertial measurement units is evaluated; subsequent matching makes it possible to create a three-dimensional point cloud that can be used as a map in subsequent runs. The second approach is a full SLAM algorithm based on graph relaxation models, incorporating the full sensor suite of odometry, inertial sensors, and 3D laser scan data.
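The core of point cloud matching, the closed-form rigid alignment solved in each iteration of ICP-style matching, can be sketched as follows; the scan data and the noise-free, known correspondences are illustrative assumptions:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rigid transform (Kabsch algorithm) aligning src points onto dst."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    H = (src - mu_s).T @ (dst - mu_d)        # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    D = np.eye(src.shape[1])
    D[-1, -1] = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t

rng = np.random.default_rng(3)
scan_a = rng.uniform(-5, 5, size=(100, 3))   # hypothetical 3D laser scan points
theta = np.radians(10)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
scan_b = scan_a @ R_true.T + np.array([0.5, -0.2, 0.0])

R_est, t_est = best_rigid_transform(scan_a, scan_b)
print(np.allclose(R_est, R_true))            # recovered rotation
```

In a real pipeline, correspondences are unknown and are re-estimated between alignment steps, with inertial measurements providing the initial guess.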
A novel approach for the synchronization and calibration of a camera and an inertial measurement unit (IMU) in the research-oriented visual-inertial mapping and localization framework maplab is presented. Mapping and localization are based on detecting different features in the environment. In addition to the possibility of creating individual maps, the included algorithms allow merging maps to increase mapping accuracy and obtain large-scale maps. Furthermore, the algorithms can be used to optimize the collected data. The preliminary results show that, after appropriate calibration and synchronization, maplab can be used efficiently for mapping, especially in rooms and small building environments.
In this contribution, we propose a system setup for the detection and classification of objects in autonomous driving applications. The recognition algorithm is based on deep neural networks operating in the 2D image domain. The results are combined with data from a stereo camera system to finally incorporate the 3D object information into our mapping framework. The detection system runs locally on the onboard CPU of the vehicle. Several network architectures are implemented and evaluated with respect to accuracy and run-time demands for the given camera and hardware setup.