Disturbances of the cardiac conduction system that cause reentry mechanisms above the atrioventricular (AV) node are induced by at least one accessory pathway with different conducting properties and refractory periods. This work aims to further develop the existing and continuously expanding Offenburg heart rhythm model in order to visualise the most common supraventricular reentry tachycardias and thereby provide a better understanding of the cause of each reentry mechanism.
The visualization of heart rhythm disturbances and atrial fibrillation therapy allows the optimization of new cardiac catheter ablations. With the simulation software CST (Computer Simulation Technology, Darmstadt), electromagnetic and thermal simulations can be carried out to analyze and optimize different heart rhythm disturbances and cardiac catheters for pulmonary vein isolation. Another form of visualization is provided by haptic, three-dimensional print models. These models can be produced using an additive manufacturing method such as 3D printing. The aim of the study was to produce a 3D print of the Offenburg heart rhythm model with a representation of an atrial fibrillation ablation procedure in order to improve the visualization of simulated cardiac catheter ablation.
The basis of the 3D print was the Offenburg heart rhythm model and the associated simulation of cryoablation of the pulmonary vein. The thermal simulation shows the pulmonary vein isolation of the left inferior pulmonary vein with the cryoballoon catheter Arctic Front Advance™ from Medtronic. After running the simulation, the thermal propagation during the procedure was shown in the form of different colors. The three-dimensional print models were constructed on the basis of the described simulation in a CAD program. Four different 3D printers were available for this purpose in a rapid prototyping laboratory at the University of Applied Sciences Offenburg. Two different printing processes were used: first, a binder jetting printer with polymer gypsum and, second, a multi-material printer with photopolymer. A final print model with an additional representation of the esophagus and an internal esophagus catheter was also prepared for printing.
With the help of the thermal simulation results and their subsequent evaluation, it was possible to draw conclusions about the propagation of the cold emanating from the catheter in the myocardium and the surrounding tissue. Measurements showed that the temperature drops to 25 °C only 3 mm from the balloon surface into the myocardium. The simulation model was printed using two 3D printing methods. Both methods, as well as the different printing materials, offer different advantages and disadvantages. While the first model made of polymer gypsum can be produced quickly and cheaply, the second model made of photopolymer takes five times longer to produce and was twice as expensive. On the other hand, the second model offers significantly better properties and was more durable overall. All relevant parts, especially the balloon catheter and the conduction, are realistically represented. Only the thermal propagation in the form of different colors is not shown on this model.
Three-dimensional heart rhythm models as well as virtual simulations allow a very good visualization of complex cardiac rhythm therapy and atrial fibrillation treatment methods. The printed models can be used for optimization and demonstration of cryoballoon catheter ablation in patients with atrial fibrillation.
Internet of Things (IoT) applications have become progressively more in demand, most notably on embedded devices (EDs). However, the devices differ in their computational capabilities, memory, and energy resources when connecting to the Internet via Wireless Sensor Networks (WSNs). The WSNs that form the bulk of an IoT deployment therefore require a dedicated set of technologies and protocols. To this end, IPv6 over Low-Power Wireless Personal Area Networks (6LoWPAN) was designed by the Internet Engineering Task Force (IETF) as a standard network for EDs. Nevertheless, communication between EDs over 6LoWPAN requires appropriate routing protocols to achieve efficient Quality of Service (QoS). Among the routing protocols for 6LoWPAN networks, RPL is considered the best, but its Energy Consumption (EC) and Routing Overhead (RO) are considerably high when it is deployed in a large network. Therefore, this paper proposes HRPL, an enhancement of the RPL protocol that reduces EC and RO. We present the performance of RPL and HRPL in terms of EC, Control Traffic Overhead (CTO), and latency, based on simulations of a 6LoWPAN network in a fixed environment using the COOJA simulator. The results show that HRPL achieves better performance in all tested topologies in terms of EC and CTO. However, the latency of HRPL improves over RPL only in the chain topology. We found that further research is required to study the relationship between latency and packet transmission load in order to optimize EC.
Seismic data often has missing traces due to technical acquisition or economic constraints, yet a complete dataset is crucial for several processing and inversion techniques. Deep learning algorithms based on convolutional neural networks (CNNs) offer alternative solutions that overcome limitations of traditional interpolation methods, e.g. data-regularity and linearity assumptions. There are two different paradigms of CNN methods for seismic interpolation. The first, so-called deep prior interpolation (DPI), trains a CNN to map random noise to a complete seismic image using only the decimated image itself. The second, referred to as the standard deep learning method, trains a CNN to map a decimated seismic image to a complete one using a dataset of complete and artificially decimated images. In this research, we systematically compare the performance of both methods for different quantities of regular and irregular missing traces using four datasets, and we evaluate the results using five well-known metrics. We found that the DPI method performs better than the standard method if the percentage of missing traces is low (10%), whereas the standard method performs better if the level of decimation is high (50%).
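The regular and irregular decimation setups compared above can be illustrated with a small sketch (hypothetical helper names, pure Python): one function builds a keep-mask over traces for either decimation pattern, and a DPI-style loss compares a reconstruction to the data only at the observed traces.

```python
import random

def decimation_mask(n_traces, fraction_missing, regular=True, seed=0):
    """Boolean keep-mask over traces: True means the trace is present."""
    if regular:
        # drop every k-th trace so roughly `fraction_missing` are missing
        k = round(1 / fraction_missing)
        return [i % k != 0 for i in range(n_traces)]
    rng = random.Random(seed)
    missing = set(rng.sample(range(n_traces), int(n_traces * fraction_missing)))
    return [i not in missing for i in range(n_traces)]

def masked_mse(reconstruction, observed, mask):
    """DPI-style loss: penalize mismatch only at non-decimated traces,
    so the network is free to fill in the missing ones."""
    kept = [(r - o) ** 2 for r, o, m in zip(reconstruction, observed, mask) if m]
    return sum(kept) / len(kept)
```

In the actual DPI method this masked loss drives the CNN that maps random noise to the complete image; the sketch only shows how the decimation enters the objective.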
Radio frequency identification (RFID) antennas are popular for high frequency (HF) RFID, energy transfer, and near field communication (NFC) applications. Particularly for wireless measurement systems, RFID/NFC technology is a good option for implementing a wireless communication interface. In this context, the design of the corresponding reader and transmitter antennas plays a major role in achieving suitable transmission quality. This work proves the feasibility of rapid prototyping of an RFID/NFC antenna used for wireless communication and energy harvesting at the required frequency of 13.56 MHz. A novel, low-cost direct ink writing (DIW) technology utilizing highly viscous silver nanoparticle ink is used for this process. This paper describes the development and analysis of low-cost printed flexible RFID/NFC antennas on cost-effective substrates for a microelectronic vital parameter measurement system. Furthermore, we compare the measured technical parameters with those of existing copper-based counterparts on an FR4 substrate.
The monitoring of industrial environments ensures that highly automated processes run without interruption. However, even if the industrial machines themselves are monitored, the communication lines are currently not continuously monitored in today's installations. They are usually checked only during maintenance intervals or in case of error. In addition, the cables or connected machines usually have to be removed from the system for the duration of the test. To overcome these drawbacks, we have developed and implemented cost-efficient, continuous signal monitoring for Ethernet-based industrial bus systems. Several methods have been developed to assess the quality of the cable; these can be classified as either passive or active. Active methods are not suitable if interruption of the communication is undesired. Passive methods, on the other hand, require oversampling, which calls for expensive hardware. In this paper, a novel passive method combined with undersampling, targeting cost-efficient hardware, is proposed.
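Why undersampling can suffice for passive monitoring can be illustrated with a sketch (hypothetical helper, pure Python, not the paper's method): sampling a periodic signal far below its frequency still sweeps through all signal phases over time, so amplitude statistics such as the RMS remain recoverable with cheap hardware.

```python
import math

def undersampled_rms(signal_fn, signal_freq, sample_rate, n_samples):
    """Estimate the RMS of a periodic signal sampled far below its
    frequency; with an incommensurate sample rate the aliased samples
    sweep through all phases, so the amplitude statistic survives."""
    acc = 0.0
    for k in range(n_samples):
        v = signal_fn(2 * math.pi * signal_freq * (k / sample_rate))
        acc += v * v
    return math.sqrt(acc / n_samples)
```

For a 1 MHz sine sampled at roughly 1 kHz, the estimate converges to the true RMS of about 0.707 times the amplitude, even though the waveform itself cannot be reconstructed.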
The Go programming language is an increasingly popular language but some of its features lack a formal investigation. This article explains Go's resolution mechanism for overloaded methods and its support for structural subtyping by means of translation from Featherweight Go to a simple target language. The translation employs a form of dictionary passing known from type classes in Haskell and preserves the dynamic behavior of Featherweight Go programs.
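The dictionary-passing idea can be mirrored in a small Python sketch (hypothetical names; the paper's translation targets a formal language and Featherweight Go, which this does not reproduce): a structural interface value becomes a pair of the receiver and a dictionary mapping method names to top-level functions, so method calls become dictionary lookups.

```python
# Structurally, anything with an `area` method satisfies a `Shape`
# interface. The translation replaces an interface value by a pair
# (receiver, dictionary of top-level method functions).

def square_area(s):
    return s["side"] ** 2

def circle_area(c):
    return 3.14159 * c["radius"] ** 2

SQUARE_DICT = {"area": square_area}
CIRCLE_DICT = {"area": circle_area}

def total_area(shapes):
    """`shapes` holds (value, dictionary) pairs: the dictionary-passing
    translation of a list of structural `Shape` interface values."""
    return sum(dic["area"](val) for val, dic in shapes)

shapes = [({"side": 2}, SQUARE_DICT), ({"radius": 1}, CIRCLE_DICT)]
```

The dictionary plays the role a Haskell type-class dictionary plays: it carries exactly the methods the interface promises, resolved once at the point where the concrete value is injected into the interface type.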
In recent years, predictive maintenance tasks, especially for bearings, have become increasingly important. Solutions for these use cases concentrate on the classification of faults and the estimation of the Remaining Useful Life (RUL). As of today, these solutions suffer from a lack of training samples. In addition, these solutions often require high-frequency accelerometers, incurring significant costs. To overcome these challenges, this research proposes a combined classification and RUL estimation solution based on a Convolutional Neural Network (CNN) and a Long Short-Term Memory (LSTM) network. This solution relies on a hybrid feature extraction approach, making it especially appropriate for low-cost accelerometers with low sampling frequencies. In addition, it uses transfer learning to be suitable for applications with only a few training samples.
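The kind of features a hybrid feature-extraction stage might compute from a low-sampling-rate accelerometer can be sketched as follows (an illustrative choice of classic bearing-health statistics, not necessarily the features used in the paper):

```python
import math

def time_domain_features(samples):
    """Classic bearing-condition indicators that remain computable
    from a low-cost, low-sampling-rate accelerometer signal."""
    n = len(samples)
    mean = sum(samples) / n
    centered = [x - mean for x in samples]
    rms = math.sqrt(sum(x * x for x in samples) / n)
    std = math.sqrt(sum(x * x for x in centered) / n)
    kurtosis = (sum(x ** 4 for x in centered) / n) / (std ** 4)
    crest = max(abs(x) for x in samples) / rms
    return {"rms": rms, "kurtosis": kurtosis, "crest_factor": crest}
```

Rising kurtosis and crest factor are widely used early indicators of impulsive bearing faults; such handcrafted statistics can complement the learned CNN features in a hybrid approach.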
The desire to connect more and more devices and to make them more intelligent and more reliable is driving the need for the Internet of Things more than ever. Such IoT edge systems require sound security measures against cyber-attacks, since they are interconnected, spatially distributed, and operational for extended periods of time. One of the most important security requirements in many industrial IoT applications is the authentication of the devices. In this paper, we present a mutual authentication protocol based on Physical Unclonable Functions (PUFs), in which challenge-response pairs are used for both device and server authentication. Moreover, a session key can be derived by the protocol in order to secure the communication channel. We show that our protocol is secure against machine learning, replay, man-in-the-middle, cloning, and physical attacks. Moreover, it is shown that the protocol incurs smaller computational, communication, storage, and hardware overheads compared to similar works.
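The challenge-response idea behind such protocols can be sketched as follows (pure Python; the PUF is simulated by a keyed hash standing in for the unclonable hardware, and only the device-authentication half plus session-key derivation are shown — the paper's full protocol also authenticates the server and resists the listed attacks, which this sketch does not claim to do):

```python
import hmac, hashlib, secrets

class SimulatedPUF:
    """Stand-in for a hardware PUF: a keyed hash whose secret key
    plays the role of the unclonable physical variation."""
    def __init__(self):
        self._secret = secrets.token_bytes(32)

    def response(self, challenge: bytes) -> bytes:
        return hmac.new(self._secret, challenge, hashlib.sha256).digest()

def authenticate_device(device_puf, server_crp_table):
    """The server challenges the device with an enrolled challenge;
    a matching response authenticates it, and both sides can then
    derive the same session key from the shared response."""
    challenge, expected = next(iter(server_crp_table.items()))
    device_resp = device_puf.response(challenge)
    if not hmac.compare_digest(device_resp, expected):
        return None  # device failed authentication
    return hashlib.sha256(b"session" + device_resp).digest()
```

Enrollment stores challenge-response pairs at the server; at run time the physical device alone can reproduce the response, so cloning the firmware does not help an attacker.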
Cryptographic protection of messages requires frequent updates of the symmetric cipher key used for encryption and decryption. Protocols of legacy IT security, like TLS, SSH, or MACsec, implement rekeying under the assumption that, first, application data exchange is allowed to stall occasionally and, second, dedicated control messages to orchestrate the process can be exchanged. In real-time automation applications, the first is generally prohibitive, while the second may induce problematic traffic patterns on the network. We present a novel seamless rekeying approach that can be embedded into cyclic application data exchanges. Although the approach is agnostic to the underlying real-time communication system, we developed a demonstrator emulating the widespread industrial Ethernet system PROFINET IO and successfully applied this rekeying mechanism there.
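One way to realize rekeying without stalling the exchange or sending control frames — a sketch under the assumption that both ends ratchet the key forward at an agreed cycle count, not necessarily the paper's mechanism — is a synchronized key ratchet driven by the cyclic schedule itself:

```python
import hmac, hashlib

def next_key(current_key: bytes, epoch: int) -> bytes:
    """Derive the next symmetric key from the current one; both ends
    ratchet at the same agreed cycle count, so no dedicated control
    messages and no pause in the cyclic data exchange are needed."""
    return hmac.new(current_key, b"rekey" + epoch.to_bytes(8, "big"),
                    hashlib.sha256).digest()

def run_cycles(initial_key: bytes, cycles: int, rekey_interval: int):
    """Simulate sender and receiver ratcheting in lockstep."""
    tx_key = rx_key = initial_key
    epoch = 0
    for cycle in range(1, cycles + 1):
        if cycle % rekey_interval == 0:
            epoch += 1
            tx_key = next_key(tx_key, epoch)
            rx_key = next_key(rx_key, epoch)
        assert tx_key == rx_key  # keys stay synchronized without control frames
    return tx_key
```

Because the derivation is one-way, compromise of a later key does not reveal earlier ones; the real challenge, which the paper addresses, is keeping both ends in lockstep despite lost or delayed cycles.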
When designing and installing indoor positioning systems, several interrelated tasks have to be solved to find an optimal placement of the access points. For this purpose, a mathematical model for a predefined number of access points indoors is presented. Two iterative algorithms for minimizing the localization error of a mobile object are described. Both algorithms use a local search technique and signal level probabilities. Previously recorded signal-strength maps were used in the computer simulation.
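The local search idea can be sketched with a toy surrogate objective (mean distance from each reference point to its nearest access point; the paper's algorithms use signal-level probabilities and recorded signal-strength maps instead, so this is an illustration of the search loop only):

```python
import random

def localization_error(ap_positions, reference_points):
    """Toy surrogate: mean distance from each reference point to its
    nearest access point."""
    total = 0.0
    for rx, ry in reference_points:
        total += min(((rx - ax) ** 2 + (ry - ay) ** 2) ** 0.5
                     for ax, ay in ap_positions)
    return total / len(reference_points)

def local_search(ap_positions, reference_points, step=1.0, iters=200, seed=0):
    """Iteratively move one access point to a neighboring position
    whenever that move reduces the error."""
    rng = random.Random(seed)
    best = list(ap_positions)
    best_err = localization_error(best, reference_points)
    for _ in range(iters):
        i = rng.randrange(len(best))
        dx, dy = rng.choice([(-step, 0), (step, 0), (0, -step), (0, step)])
        candidate = list(best)
        candidate[i] = (best[i][0] + dx, best[i][1] + dy)
        err = localization_error(candidate, reference_points)
        if err < best_err:
            best, best_err = candidate, err
    return best, best_err
```

Accepting only improving moves guarantees monotone progress but can stall in local minima, which is why the paper compares two iterative variants.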
In this work, a method for the estimation of current slopes induced by inverters operating interior permanent magnet synchronous machines is presented. After the derivation of the estimation algorithm, the requirements for a suitable sensor setup in terms of accuracy, dynamics, and electromagnetic interference are discussed. The boundary conditions for the estimation algorithm are presented with respect to application within high-power traction systems. The estimation algorithm is implemented on a field programmable gate array (FPGA). The moving least-squares algorithm offers the advantage that it does not depend on stored sample vectors, so not every measured value has to be kept. Summing the measured values instead leads to a significant reduction in the required storage units and thus decreases the hardware requirements. The algorithm is designed to be calculated within the dead time of the inverter. Appropriate countermeasures for disturbances and hardware restrictions are implemented, and the results are discussed.
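The storage argument can be made concrete: a least-squares slope fit needs only a handful of running sums, never the sample vector itself. A sketch (pure Python with hypothetical names; an FPGA implementation would use fixed-point accumulator registers):

```python
class RunningSlopeEstimator:
    """Least-squares slope of samples (t_k, i_k) maintained as running
    sums, so no sample vector has to be stored - only five accumulators."""
    def __init__(self):
        self.n = self.st = self.si = self.sti = self.stt = 0.0

    def add(self, t, i):
        self.n += 1
        self.st += t        # sum of times
        self.si += i        # sum of currents
        self.sti += t * i   # sum of products
        self.stt += t * t   # sum of squared times
    def slope(self):
        # closed-form least-squares slope from the accumulated sums
        denom = self.n * self.stt - self.st ** 2
        return (self.n * self.sti - self.st * self.si) / denom
```

Each new sample costs a few multiply-accumulate operations, so the slope can be evaluated within a single inverter dead time regardless of how many samples were taken.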
This paper presents a model predictive control (MPC) based approach for the peak-shaving application of a battery in a photovoltaic (PV) battery system connected to a rural low-voltage grid. The goals of the MPC are to shave the peaks in the PV feed-in and the grid power consumption and, at the same time, to maximize the use of the battery. The prosumer benefits from maximum use of the self-produced electricity; the grid benefits from reduced peaks in the PV feed-in and the grid power consumption. This would allow an increase in the PV hosting and load hosting capacity of the grid.
The paper presents the mathematical formulation of the optimal control problem along with a cost-benefit analysis. The MPC implementation scheme in the laboratory and the experimental results are also presented. The results show that the MPC is able to track deviations from the weather forecast and operate the battery by solving the optimal control problem to handle these deviations.
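The peak-shaving objective can be illustrated with a deliberately simplified, step-by-step sketch (hypothetical names, pure Python); a real MPC instead solves an optimization problem over the whole forecast horizon and re-solves it as the forecast deviates:

```python
def shave_peaks(forecast_net_power, limit, capacity, soc0=0.0):
    """Greedy sketch of peak shaving: positive values are PV surplus
    (feed-in), negative values are grid consumption. The battery absorbs
    surplus above +limit and covers demand below -limit, within its
    state-of-charge bounds."""
    soc = soc0
    grid = []
    for p in forecast_net_power:
        if p > limit:                    # feed-in peak: charge the battery
            charge = min(p - limit, capacity - soc)
            soc += charge
            p -= charge
        elif p < -limit:                 # consumption peak: discharge
            discharge = min(-limit - p, soc)
            soc -= discharge
            p += discharge
        grid.append(p)
    return grid, soc
```

The greedy rule can empty or fill the battery at the wrong time; looking ahead over the forecast horizon, as the MPC does, avoids exactly that failure mode.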
As cyber-attacks and functional safety requirements increase in Operational Technology (OT), implementing security measures becomes crucial. The IEC/IEEE 60802 draft standard addresses security convergence in Time-Sensitive Networking (TSN) for industrial automation. We present the standard's security architecture and its goals of establishing end-to-end security with resource access authorization in OT systems. We compare the standard to our abstract, technology-independent model for the management of cryptographic credentials during the lifecycles of OT systems. Additionally, we implemented the processes, mechanisms, and protocols needed for IEC/IEEE 60802 and extended the architecture with public key infrastructure (PKI) functionalities to support complete security management processes.
A Novel Approach of High Dynamic Current Control of Interior Permanent Magnet Synchronous Machines
(2019)
Harmonic effects in permanent magnet synchronous machines with high power density can hardly be handled by traditional PI current controllers due to their limited bandwidth. As a consequence, current and ultimately torque ripples appear. In this paper, a new deadbeat current controller architecture is presented that is capable of counteracting these harmonic effects. The new control algorithm, here named "Hybrid Deadbeat Controller", combines the stability and low steady-state errors of common PI regulators with the high dynamics of deadbeat control. The proposed algorithm is capable of either compensating the current harmonics to obtain smoother currents or tracking a varying reference value to achieve a smoother torque. The information needed to calculate the optimal reference currents is based on an online parameter estimation feeding an optimization algorithm to achieve an optimal torque output, and will be investigated in future research. In order to ensure the stability of the controller over the whole operating range, even under the influence of effects changing the system's parameters, this work also focuses on the robustness of the hybrid deadbeat controller.
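The deadbeat principle underlying the hybrid controller can be sketched for a discretized first-order current plant i[k+1] = a·i[k] + b·u[k] (an illustrative scalar model, not the machine model used in the paper): choosing the voltage so that i[k+1] equals the reference drives the error to zero in a single step, assuming perfect model knowledge — which is exactly why the paper pairs it with a PI part for robustness.

```python
def deadbeat_voltage(i_now, i_ref, a, b):
    """Deadbeat law for the plant i[k+1] = a*i[k] + b*u[k]:
    choose u so that i[k+1] equals i_ref in one step."""
    return (i_ref - a * i_now) / b

# With exact model knowledge the reference is reached in a single cycle.
a, b = 0.95, 0.5
i = 0.0
i = a * i + b * deadbeat_voltage(i, 10.0, a, b)
```

With a parameter mismatch between controller and plant, the one-step convergence degrades; a slower PI loop then removes the remaining steady-state error, which is the motivation for the hybrid structure.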
This paper presents a novel low-jitter interface between a low-cost integrated IEEE 802.11 chip and an FPGA. It is designed to be part of system hardware for ultra-precise synchronization between wireless stations. On the physical level, it uses the Wi-Fi chip's coexistence signal lines and UART frame encoding. On this basis, we propose an efficient communication protocol providing precise timestamping of incoming frames and internal diagnostic mechanisms for detecting communication faults. At the same time, it is simple enough to be implemented both in a low-cost FPGA and in commodity IEEE 802.11 chip firmware. The results of computer simulation show that the developed FPGA implementation of the proposed protocol can precisely timestamp incoming frames and detect most communication errors even under high interference. The probability of undetected errors was also investigated. The results of this analysis are significant for the development of novel wireless synchronization hardware.
The following describes a new method for estimating the parameters of an interior permanent magnet synchronous machine (IPMSM). For the estimation, the current slopes caused by the switching of the inverter are used to determine the unknowns of the system equations of the electrical machine. The angle and current dependence of the machine parameters is linearized within a PWM cycle. By considering the different switching states of the inverter, several system equations can be derived and a solution can be found within one PWM cycle. The use of test signals and filter-based approaches is avoided. The derived algorithm is explained and validated with measurements on a test bench.
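The idea of solving the machine equations from different switching states can be sketched on a simplified scalar RL model u = R·i + L·di/dt (the IPMSM model in the paper is multidimensional and angle-dependent; this is only an illustration of the principle): two measured current slopes under two inverter voltages within one PWM cycle yield two linear equations in the unknowns R and L.

```python
def estimate_rl(u1, i1, s1, u2, i2, s2):
    """Solve u = R*i + L*di/dt from two inverter switching states,
    where s1, s2 are the current slopes measured in each state."""
    det = i1 * s2 - i2 * s1
    R = (u1 * s2 - u2 * s1) / det
    L = (i1 * u2 - i2 * u1) / det
    return R, L
```

No test signal is injected: the excitation comes entirely from the voltage steps the PWM inverter applies anyway, which is the core of the approach described above.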
In recent years, the topic of embedded machine learning has become very popular in AI research. With the help of various compression techniques such as pruning and quantization, it has become possible to run neural networks on embedded devices. These techniques have opened up a whole new application area for machine learning, ranging from smart products such as voice assistants to smart sensors needed in robotics. Despite the achievements in embedded machine learning, efficient algorithms for training neural networks in constrained domains are still lacking. Training on embedded devices would open up further fields of application: efficient training algorithms would enable federated learning on embedded devices, in which the data remains where it was collected, or retraining of neural networks in different domains. In this paper, we summarize techniques that make training on embedded devices possible. We first describe the need and requirements for such algorithms. Then we examine existing techniques that address training in resource-constrained environments, as well as techniques that are also suitable for training on embedded devices, such as incremental learning. Finally, we discuss which problems and open questions remain to be solved in these areas.
Ensuring that software applications present their users the most recent version of data is not trivial. Self-adjusting computations are a technique for automatically and efficiently recomputing output data whenever some input changes.
This article describes the software architecture of a large, commercial software system built around a framework for coarse-grained self-adjusting computations in Haskell. It discusses advantages and disadvantages based on long-term experience. The article also presents a demo of the system and explains the API of the framework.
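A minimal sketch of the self-adjusting idea (pure Python with hypothetical class names; the framework described here is in Haskell and far more sophisticated): a computation caches its result, records which inputs it actually read, and recomputes only after one of those inputs changes.

```python
class Input:
    """A mutable input cell that knows which computations read it."""
    def __init__(self, value):
        self.value = value
        self.dependents = []

    def set(self, value):
        if value != self.value:
            self.value = value
            for thunk in self.dependents:
                thunk.invalidate()

    def get(self, reader=None):
        if reader is not None and reader not in self.dependents:
            self.dependents.append(reader)  # record the dynamic dependency
        return self.value

class Thunk:
    """A computation that caches its result and recomputes only after
    an input it actually read has changed."""
    def __init__(self, fn):
        self.fn = fn
        self.valid = False
        self.cached = None
        self.runs = 0  # counts real recomputations, for illustration

    def invalidate(self):
        self.valid = False

    def get(self):
        if not self.valid:
            self.cached = self.fn(self)
            self.valid = True
            self.runs += 1
        return self.cached
```

Repeated reads return the cache; only after an input changes does the next read trigger a recomputation, which is the efficiency argument for keeping derived output data current automatically.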
Multiple Object Tracking (MOT) is a long-standing task in computer vision. Current approaches based on the tracking-by-detection paradigm require either some sort of domain knowledge or supervision to associate data correctly into tracks. In this work, we present a self-supervised multiple object tracking approach based on visual features and minimum cost lifted multicuts. Our method relies on straightforward spatio-temporal cues that can be extracted from neighboring frames in an image sequence without supervision. Clustering based on these cues enables us to learn the appearance invariances required for the tracking task at hand and to train an autoencoder to generate suitable latent representations. The resulting latent representations can thus serve as robust appearance cues for tracking, even over large temporal distances where no reliable spatio-temporal features can be extracted. We show that, despite being trained without the provided annotations, our model provides competitive results on the challenging MOT benchmark for pedestrian tracking.