Background: Transesophageal left atrial (LA) pacing and transesophageal LA ECG recording are semi-invasive techniques for the diagnosis and therapy of supraventricular rhythm disturbances. Cardiac resynchronization therapy (CRT) with right atrial (RA) sensed biventricular pacing is an established therapy for heart failure patients with reduced left ventricular (LV) ejection fraction, sinus rhythm and interventricular electrical desynchronization.
Purpose: The aim of the study was to evaluate electromagnetic and voltage pacing fields of the combination of RA pacing, LA pacing and biventricular pacing in patients with long interatrial and interventricular electrical desynchronization.
Methods: The modelling and electromagnetic simulations of transesophageal LA pacing in combination with RA pacing and biventricular pacing were set up and analyzed with the CST (Computer Simulation Technology) software. Different electrodes were modelled in order to simulate different types of bipolar pacing in the 3D-CAD Offenburg heart rhythm model: the bipolar Solid S (Biotronik) electrode was modelled for RA pacing and right ventricular (RV) pacing, the Attain 4194 (Medtronic) for LV pacing, and the TO8 (Osypka) multipolar esophageal electrode with hemispheric electrodes for LA pacing.
Results: The electromagnetic pacing simulations were performed with pacemaker amplitudes of 3 V for RA pacing, 1.5 V for RV pacing, 50 V for LA pacing and 3 V for LV pacing, with pacing impulse durations of 0.5 ms for RA, RV and LV pacing and 10 ms for LA pacing. The atrioventricular pacing delay after RA pacing was 140 ms. The pacing modes AAI, VVI, DDD, DDD0V and DDD0D were evaluated for the analysis of the electric pacing field propagation of pacemaker, CRT and LA pacing. The pacing results were compared at minimum (LOW) and maximum (HIGH) mesh parameter settings. While the LOW setting produced fewer tetrahedra and less accurate results, the HIGH setting produced more tetrahedra and therefore more accurate results.
Conclusions: The simulation of the combination of transesophageal LA pacing with RA sensed biventricular pacing is possible with the Offenburg heart rhythm model. The new temporary 4-chamber pacing method may be an additional useful method for CRT non-responders with a long interatrial electrical delay.
Towards a Formal Verification of Seamless Cryptographic Rekeying in Real-Time Communication Systems
(2022)
This paper makes two contributions to the verification of communication protocols by transition systems. Firstly, it presents a model of a cyclic communication protocol as a synchronized network of transition systems. This protocol enables seamless cryptographic rekeying embedded into cyclic messages. Secondly, we verify the protocol using the model checking technique.
One of the main requirements of spatially distributed Internet of Things (IoT) solutions is to have networks with wide coverage that connect many low-power devices. Low-Power Wide-Area Networks (LPWAN) and Cellular IoT (cIoT) networks are promising candidates in this space. LPWAN approaches such as LoRaWAN, SigFox and MIOTY are based on enhanced physical layer (PHY) implementations to achieve long range. Narrowband versions of cellular networks, such as Narrowband IoT (NB-IoT) and Long-Term Evolution for Machines (LTE-M), offer reduced bandwidth and simplified node and network management mechanisms. Since the underlying use cases come with various requirements, it is essential to perform a comparative analysis of the competing technologies. This article provides a systematic performance measurement and comparison of LPWAN and NB-IoT technologies in a unified testbed, and also discusses the necessity of future fifth-generation (5G) LPWAN solutions.
Machine learning (ML) has become highly relevant in applications across all industries, and specialists in the field are urgently sought. As it is a highly interdisciplinary field, requiring knowledge in computer science, statistics and the relevant application domain, experts are hard to find. Large corporations can sweep the job market by offering high salaries, which makes the situation for small and medium-sized enterprises (SME) even worse, as they usually lack the capacity both for attracting specialists and for qualifying their own personnel. In order to meet the enormous demand for ML specialists, universities now teach ML in specifically designed degree programs as well as within established programs in science and engineering. While the teaching almost always uses practical examples, these are somewhat artificial or outdated, as real data from real companies is usually not available. The approach reported in this contribution aims to tackle the above challenges in an integrated course, combining three independent aspects: first, teaching key ML concepts to graduate students from a variety of existing degree programs; second, qualifying working professionals from SME for ML; and third, applying ML to real-world problems faced by those SME. The course was carried out in two trial periods within a government-funded project at a university of applied sciences in south-west Germany. The region is dominated by SME, many of which are world leaders in their industries. Participants were students from different graduate programs as well as working professionals from several SME based in the region. The first phase of the course (one semester) consisted of the fundamental concepts of ML, such as exploratory data analysis, regression, classification, clustering and deep learning. In this phase, student participants and working professionals were taught in separate tracks.
Students attended regular classes and lab sessions (but were also given access to e-learning materials), whereas the professionals learned exclusively in a flipped classroom scenario: they were given access to e-learning units (video lectures and accompanying quizzes) for preparation, while face-to-face sessions were dominated by lab experiments applying the concepts. Prior to the start of the second phase, participating companies were invited to submit real-world problems that they wanted to solve with the help of ML. The second phase consisted of practical ML projects, each tackling one of the problems and worked on by a mixed team of both students and professionals for the period of one semester. The teams were self-organized in the ways they preferred to work (e.g. remote vs. face-to-face collaboration), but also coached by one of the teaching staff. In several plenary meetings, the teams reported on their status as well as challenges and solutions. In both periods, the course was monitored and extensive surveys were carried out. We report on the findings as well as the lessons learned. For instance, while the program was very well-received, professional participants wished for more detailed coverage of theoretical concepts. A challenge faced by several teams during the second phase was a dropout of student members due to upcoming exams in other subjects.
A novel approach for the synchronization and calibration of a camera and an inertial measurement unit (IMU) in the research-oriented visual-inertial mapping and localization framework maplab is presented. Mapping and localization are based on detecting different features in the environment. In addition to the possibility of creating single-case maps, the included algorithms allow merging maps to increase mapping accuracy and obtain large-scale maps. Furthermore, the algorithms can be used to optimize the collected data. The preliminary results show that, after appropriate calibration and synchronization, maplab can be used efficiently for mapping, especially in rooms and small building environments.
Diffracted waves carry high-resolution information that can help interpret fine structural details at a scale smaller than the seismic wavelength. Because of the low signal-to-noise ratio of diffracted waves, it is challenging to preserve them during processing and to identify them in the final data. The traditional approach is therefore to pick the diffractions manually. However, such a task is tedious and often prohibitive; thus, current attention is given to domain adaptation. These methods aim to transfer knowledge from a labeled domain on which the model is trained, and then infer on the real, unlabeled data. In this regard, it is common practice to create a synthetic labeled training dataset, followed by testing on unlabeled real data. Unfortunately, such a procedure may fail due to the gap between the synthetic and the real distribution, since synthetic data quite often oversimplifies the problem, and consequently transfer learning becomes a hard and non-trivial procedure. Furthermore, deep neural networks are characterized by their high sensitivity to cross-domain distribution shift. In this work, we present a deep learning model that builds a bridge between both distributions by creating a semi-synthetic dataset that fills in the gap between the synthetic and real domains. More specifically, our proposal is a feed-forward, fully convolutional neural network for image-to-image translation that inserts synthetic diffractions while preserving the original reflection signal. A series of experiments validate that our approach produces convincing seismic data containing the desired synthetic diffractions.
Despite the success of convolutional neural networks (CNNs) in many computer vision and image analysis tasks, they remain vulnerable to so-called adversarial attacks: small, crafted perturbations in the input images can lead to false predictions. A possible defense is to detect adversarial examples. In this work, we show how analysis of input images and feature maps in the Fourier domain can be used to distinguish benign test samples from adversarial images. We propose two novel detection methods: our first method employs the magnitude spectrum of the input images to detect an adversarial attack. This simple and robust classifier can successfully detect adversarial perturbations of three commonly used attack methods. The second method builds upon the first and additionally extracts the phase of the Fourier coefficients of feature maps at different layers of the network. With this extension, we are able to improve adversarial detection rates compared to state-of-the-art detectors on five different attack methods. The code for the methods proposed in the paper is available at github.com/paulaharder/SpectralAdversarialDefense
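The idea behind the first detection method can be illustrated with a short sketch. The snippet below is a minimal toy example under assumed data (not the authors' implementation): it computes the 2D Fourier magnitude spectrum of small grayscale images and trains a simple linear classifier to separate smooth "benign" samples from noise-perturbed "adversarial" ones.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def magnitude_spectrum(img):
    """Return the log magnitude of the shifted 2D Fourier spectrum."""
    f = np.fft.fftshift(np.fft.fft2(img))
    return np.log1p(np.abs(f))

rng = np.random.default_rng(0)
# Toy stand-ins: "benign" images are smooth, "adversarial" ones carry
# added broadband noise, mimicking the spectral footprint of a perturbation.
benign = [np.outer(np.sin(np.linspace(0, 3, 16)), np.cos(np.linspace(0, 3, 16)))
          + 0.01 * rng.standard_normal((16, 16)) for _ in range(40)]
advers = [b + 0.3 * rng.standard_normal((16, 16)) for b in benign]

# Flattened magnitude spectra serve as the detector's feature vectors.
X = np.array([magnitude_spectrum(i).ravel() for i in benign + advers])
y = np.array([0] * len(benign) + [1] * len(advers))

detector = LogisticRegression(max_iter=1000).fit(X, y)
print(f"training accuracy: {detector.score(X, y):.2f}")
```

The second method follows the same pattern but would use the phase of feature-map Fourier coefficients as additional features.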
Due to the rapidly increasing storage consumption worldwide, as well as the expectation of continuous availability of information, the complexity of administration in today's data centers is growing steadily. Integrated techniques for monitoring hard disks can increase the reliability of storage systems. However, these techniques often lack intelligent data analysis to perform predictive maintenance. To solve this problem, machine learning algorithms can be used to detect potential failures in advance and prevent them. In this paper, an unsupervised model for predicting hard disk failures based on Isolation Forest is proposed. Consequently, a method is presented that can deal with highly imbalanced datasets, as an experiment on the Backblaze benchmark dataset demonstrates.
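The appeal of Isolation Forest here is that it needs no failure labels, which sidesteps the class imbalance. A minimal sketch with synthetic stand-in telemetry (the feature values below are invented for illustration, not Backblaze SMART attributes):

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Toy telemetry: many healthy disks, very few failing ones,
# mimicking the heavy class imbalance of real disk-failure data.
healthy = rng.normal(loc=0.0, scale=1.0, size=(990, 5))
failing = rng.normal(loc=6.0, scale=1.0, size=(10, 5))
X = np.vstack([healthy, failing])

# Unsupervised training: no failure labels are required.
model = IsolationForest(contamination=0.01, random_state=0).fit(X)
pred = model.predict(X)  # -1 = anomaly (suspected failure), 1 = normal

n_flagged = int((pred == -1).sum())
print(f"flagged {n_flagged} of {len(X)} disks as anomalous")
```

Points that are easy to isolate (few random splits suffice) receive low scores and are flagged, which is exactly the behavior wanted when failures are rare outliers in the telemetry.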
The twin concept is increasingly used for optimization tasks in the context of Industry 4.0 and digitization. It can also help small and medium-sized enterprises (SME) to exploit their energy flexibility potential and to achieve added value through appropriate energy marketing. At the same time, this use of flexibility helps to realize a climate-neutral energy supply with high shares of renewable energies. The digital twin reflects real production, power flows and market influences as a computer model, which makes it possible to simulate and optimize on-site interventions and interactions with the energy market without disturbing the real production processes. This paper describes the development of a generic model library that maps flexibility-relevant components and processes of SME, thus simplifying the creation of a digital twin. The paper also includes the development of an experimental twin consisting of SME hardware components and a PLC-based SCADA system. The experimental twin provides a laboratory environment in which the digital twin can be tested, further developed and demonstrated on a laboratory scale. Concrete implementations of such a digital twin and experimental twin are described as examples.
The number of use cases for autonomous vehicles is increasing day by day, especially in commercial applications. One important application of autonomous vehicles can be found in the parcel delivery sector. Here, autonomous cars can massively help to reduce delivery effort and time by actively supporting the courier. One important component is, of course, the autonomous vehicle itself. Nevertheless, besides the autonomous vehicle, a flexible and secure communication architecture is also a crucial key component impacting the overall performance of such a system, since it is required to allow continuous interactions between the vehicle and the other components of the system. The communication system must provide a reliable and secure architecture that is still flexible enough to remain practical and to address several use cases. In this paper, a robust communication architecture for such autonomous fleet-based systems is proposed. The architecture provides reliable communication between different system entities while keeping those communications secure. It uses different technologies, such as Bluetooth Low Energy (BLE), cellular networks and Low-Power Wide-Area Networks (LPWAN), to achieve its goals.
Most machine learning methods require careful selection of hyper-parameters in order to train a high-performing model with good generalization abilities. Hence, several automatic selection algorithms have been introduced to overcome the tedious manual (trial and error) tuning of these parameters. Due to its very high sample efficiency, Bayesian Optimization over a Gaussian Process model of the parameter space has become the method of choice. Unfortunately, this approach suffers from cubic compute complexity due to the underlying Cholesky factorization, which makes it very hard to scale beyond a small number of sampling steps. In this paper, we present a novel, highly accurate approximation of the underlying Gaussian Process. Reducing its computational complexity from cubic to quadratic allows an efficient strong scaling of Bayesian Optimization while outperforming the previous approach in optimization accuracy. First experiments show a speedup by a factor of 162 on a single node and a further speedup by a factor of 5 in a parallel environment.
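To see where the cubic cost comes from, consider the standard exact GP regression step: fitting requires a Cholesky factorization of the n×n kernel matrix, which is O(n³) in the number of observed samples. A minimal sketch of that baseline (the kernel, data and length-scale below are illustrative choices, not the paper's approximation):

```python
import numpy as np

def rbf(a, b, ls=1.0):
    """Squared-exponential kernel matrix between two 1D point sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

rng = np.random.default_rng(0)
X = np.sort(rng.uniform(0, 5, 20))          # observed hyper-parameter samples
y = np.sin(X) + 0.05 * rng.standard_normal(20)  # noisy objective values

K = rbf(X, X) + 1e-6 * np.eye(len(X))  # jitter for numerical stability
L = np.linalg.cholesky(K)              # the O(n^3) step the paper targets
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))

Xs = np.linspace(0, 5, 50)
mean = rbf(Xs, X) @ alpha              # GP posterior mean at candidate points

print(f"max abs error vs sin(x): {np.max(np.abs(mean - np.sin(Xs))):.3f}")
```

In Bayesian Optimization this posterior is refit after every new sample, so the factorization cost accumulates with the number of sampling steps, which is precisely what motivates replacing it with a cheaper approximation.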
RETIS – Real-Time Sensitive Wireless Communication Solution for Industrial Control Applications
(2020)
Ultra-Reliable Low Latency Communications (URLLC) has always been a vital component of many industrial applications. The paper proposes a new wireless URLLC solution called RETIS, which is suitable for factory automation and fast process control applications where low latency, low jitter and high data exchange rates are mandatory. In the paper, we describe the communication protocol as well as the hardware structure of the network nodes implementing the required functionality. Many techniques enabling fast, reliable wireless transmissions are used: short Transmission Time Interval (TTI), Time-Division Multiple Access (TDMA), MIMO, optional duplicated data transfer, Forward Error Correction (FEC) and an ACK mechanism. Preliminary tests show that a reliable end-to-end latency down to 350 μs and a packet exchange rate up to 4 kHz can be reached (using quadruple MIMO and the standard IEEE 802.15.4 PHY at 250 kbit/s).
With the increasing degree of interconnectivity in industrial factories, security is increasingly becoming the most important stepping-stone towards the wide adoption of the Industrial Internet of Things (IIoT). This paper summarizes the most important aspects of a keynote at the DESSERT2020 conference. It highlights ongoing and open research activities on different levels, ranging from novel cryptographic algorithms through security protocol integration and testing to security architectures for the full lifetime of devices and systems. It includes an overview of the research activities at the authors' institute.
The recent successes and widespread application of compute-intensive machine learning and data analytics methods have been boosting the usage of the Python programming language on HPC systems. While Python provides many advantages for its users, it was not designed with a focus on multi-user environments or parallel programming, making it quite challenging to maintain stable and secure Python workflows on an HPC system. In this paper, we analyze the key problems induced by the usage of Python on HPC clusters and sketch appropriate workarounds for efficiently maintaining multi-user Python software environments, securing and restricting the resources of Python jobs, and containing Python processes, with a focus on Deep Learning applications running on GPU clusters.
Autonomous driving is disrupting the automotive industry as we know it today. For this, fail-operational behavior is essential in the sense, plan and act stages of the automation chain in order to handle safety-critical situations autonomously, which is not yet achieved with state-of-the-art approaches. The European ECSEL research project PRYSTINE realizes Fail-operational Urban Surround perceptION (FUSION) based on robust Radar and LiDAR sensor fusion and control functions in order to enable safe automated driving in urban and rural environments. This paper showcases some of the key exploitable results (e.g., novel Radar sensors, innovative embedded control and E/E architectures, pioneering sensor fusion approaches, AI-controlled vehicle demonstrators) achieved up to its final year (year 3).
Neuromorphic computing systems have demonstrated many advantages for popular classification problems while requiring significantly fewer computational resources. In this paper, we present the design, fabrication and training of a programmable neuromorphic circuit based on printed electrolyte-gated field-effect transistors (EGFETs). Based on a printable neuron architecture involving several resistors and one transistor, the proposed circuit can realize multiply-add and activation functions. The functionality of the circuit, i.e. the weights of the neural network, can be set during a post-fabrication step in the form of resistors printed into the crossbar. Besides the fabrication of a programmable neuron, we also provide a learning algorithm, tailored to the requirements of the technology and the proposed programmable neuron design, which is verified through simulations. The proposed neuromorphic circuit operates at 5 V and occupies 385 mm² of area.
Printed electronics (PE) offers flexible, extremely low-cost, on-demand hardware due to its additive manufacturing process, enabling emerging ultra-low-cost applications, including machine learning applications. However, the large feature sizes in PE limit the complexity of a machine learning classifier (e.g., a neural network (NN)) in PE. Stochastic computing neural networks (SC-NNs) can reduce area in silicon technologies, but still require complex designs due to unique implementation tradeoffs in PE. In this paper, we propose a printed mixed-signal system which substitutes complex and power-hungry conventional stochastic computing (SC) components with printed analog designs. The printed mixed-signal SC consumes only 35% of the power and requires only 25% of the area of a conventional 4-bit NN implementation. We also show that the proposed mixed-signal SC-NN provides good accuracy on popular neural network classification problems. We consider this work an important step towards the realization of printed SC-NN hardware for near-sensor processing.
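The stochastic computing principle that makes such hardware simple can be sketched in a few lines: values in [0, 1] are encoded as the probability of a 1 in a bitstream, and multiplication of two independent streams reduces to a single AND gate per bit. The following is a toy software illustration of that principle, not the paper's printed mixed-signal design:

```python
import numpy as np

def to_stream(p, n, rng):
    """Encode probability p as a length-n stochastic bitstream."""
    return (rng.random(n) < p).astype(np.uint8)

def sc_multiply(a, b, n=10000, seed=0):
    """Stochastic-computing multiply: bitwise AND of two independent streams.

    The fraction of 1s in the AND-ed stream approximates a * b.
    """
    rng = np.random.default_rng(seed)
    sa = to_stream(a, n, rng)
    sb = to_stream(b, n, rng)
    return float((sa & sb).mean())

result = sc_multiply(0.6, 0.5)
print(f"0.6 * 0.5 ≈ {result:.3f}")  # approximately 0.30
```

The trade-off is visible here: precision grows only with stream length, which is why the paper replaces the power-hungry digital stream-processing components with compact printed analog equivalents.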
Physically Unclonable Functions (PUFs) are hardware-based security primitives which allow for inherent device fingerprinting. To this end, the intrinsic variation of imperfectly manufactured systems is exploited to generate device-specific, unique identifiers. With printed electronics (PE) joining the Internet of Things (IoT), hardware-based security for novel PE-based systems is of increasing importance. Furthermore, PE offers the possibility of split manufacturing, which mitigates the risk of PUF response readout by third parties before commissioning. In this paper, we investigate a printed PUF core as an intrinsic variation source for the generation of unique identifiers from a crossbar architecture. The printed crossbar PUF is verified by simulation of an 8×8-cell crossbar, which can be utilized to generate 32-bit-wide identifiers. Further focus is on limiting factors regarding printed devices, such as increased parasitics due to novel materials, and on required control logic specifications. The simulation results highlight that the printed crossbar PUF is capable of generating close-to-ideal unique identifiers at the investigated feature size. As a proof of concept, a 2×2-cell printed crossbar PUF core is fabricated and electrically characterized.
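"Close-to-ideal unique identifiers" is conventionally quantified by uniqueness: the average pairwise fractional Hamming distance between the identifiers of different devices, ideally 50%. A toy sketch of that metric, modeling manufacturing variation simply as random comparator thresholds (an assumption for illustration, unrelated to the printed crossbar's actual device physics):

```python
import numpy as np

def generate_id(rng, n_bits=32):
    """Simulate one device: random process variation decides each response bit."""
    variation = rng.standard_normal(n_bits)  # intrinsic manufacturing variation
    return (variation > 0).astype(np.uint8)  # comparator-style bit extraction

rng = np.random.default_rng(1)
ids = [generate_id(rng) for _ in range(50)]  # 50 simulated devices

# Uniqueness: mean pairwise fractional Hamming distance (ideal value: 0.5)
dists = [np.mean(a != b) for i, a in enumerate(ids) for b in ids[i + 1:]]
uniqueness = float(np.mean(dists))
print(f"uniqueness: {uniqueness:.3f}")
```

A real evaluation would additionally check reliability (the same device reproducing its identifier across temperature and supply-voltage variation), which is where the increased parasitics of printed devices matter.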