Subspace clustering aims to find all clusters in all subspaces of a high-dimensional data space. We present a massively data-parallel approach that can be run on graphics processing units. It extends a previous density-based method that scales well with the number of dimensions. Its main computational bottleneck consists of (sequentially) generating a large number of minimal cluster candidates in each dimension and using hash collisions in order to find matches of such candidates across multiple dimensions. Our approach parallelizes this process by removing previous interdependencies between consecutive steps in the sequential generation process and by applying a very efficient parallel hashing scheme optimized for GPUs. This massive parallelization gives up to 70x speedup for
the bottleneck computation when it is replaced by our approach and run on current GPU hardware. We note that depending on data size and choice of parameters, the parallelized part of the algorithm can take different percentages of the overall runtime of the clustering process, and thus, the overall clustering speedup may vary significantly between different cases. However, even
in our "worst-case" test, a small dataset where the computation makes up only a small fraction of the overall clustering time, our parallel approach still yields a speedup of more than 3x for the complete run of the clustering process. Our method could also be combined with parallelization of other parts of the clustering algorithm, with an even higher potential gain in processing speed.
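The candidate-matching bottleneck can be illustrated in miniature. The sketch below is plain Python, not the paper's GPU-parallel scheme; the dimensions, candidates, and bucket layout are invented for illustration. It hashes each dimension's minimal cluster candidates and treats cross-dimension hash collisions as potential subspace matches:

```python
# Each candidate is the frozenset of point IDs forming a dense 1-D region
# (hypothetical data; real inputs would come from the density-based step).
candidates = {
    0: [frozenset({1, 2, 3}), frozenset({7, 8, 9})],   # dimension 0
    1: [frozenset({1, 2, 3}), frozenset({4, 5, 6})],   # dimension 1
    2: [frozenset({1, 2, 3})],                         # dimension 2
}

buckets = {}  # hash bucket -> list of (dimension, candidate)
for dim, cands in candidates.items():
    for cand in cands:
        buckets.setdefault(hash(cand), []).append((dim, cand))

# A collision across different dimensions suggests that the same point set
# is dense in several dimensions; candidates are re-compared to rule out
# spurious hash collisions.
matches = [
    entries for entries in buckets.values()
    if len({d for d, _ in entries}) > 1
    and all(c == entries[0][1] for _, c in entries)
]
print(len(matches))  # the {1,2,3} candidate matches across dimensions 0, 1, 2
```

The sequential variant would emit candidates one by one; the paper's contribution is decoupling the generation steps so that hashing and bucketing can run massively in parallel on a GPU.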
In automotive parking scenarios, where the curb must be detected and classified as traversable or not, radars play an important role. Several approaches to estimating the target height have already been proposed in other works. This paper assesses and compares two methods. The first is based on Angle of Arrival (AoA) estimation of the input signals of multiple antennas using the Multiple-Input-Multiple-Output (MIMO) principle. The second method uses the geometry of the multipath propagation of the radar echo signal for a single antenna input. In this work, a modified calculation of the curb height based on the second method is proposed. The theory behind the approach is proven mathematically, and its effectiveness is demonstrated by evaluating measurements with a 77 GHz Frequency Modulated Continuous Wave (FMCW) radar. The performance of the introduced method is evaluated with the mean square error (MSE) in the proposed scenario. This method, using only one antenna input, produced up to 3.4 times better results for curb height detection in comparison with former methods.
Towards a Formal Verification of Seamless Cryptographic Rekeying in Real-Time Communication Systems
(2022)
This paper makes two contributions to the verification of communication protocols with transition systems. Firstly, it presents a model of a cyclic communication protocol as a synchronized network of transition systems. This protocol enables seamless cryptographic rekeying embedded into cyclic messages. Secondly, we test the protocol using the model checking verification technique.
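The core of the model-checking step can be pictured as an explicit-state reachability search. The toy model below is not the paper's protocol model; its lag-based state space and transitions are invented to illustrate how a safety property ("the receiver never falls two key generations behind") is checked:

```python
from collections import deque

# State: how many key generations the receiver lags behind the sender.
# Transitions (invented for illustration, not the paper's model):
#   rekey  -> the sender activates the next key (only allowed when in sync)
#   update -> the receiver adopts the announced key
def successors(lag):
    if lag == 0:
        yield lag + 1   # rekey
    if lag > 0:
        yield 0         # update

BAD = 2                  # two generations behind: messages undecryptable

# Breadth-first exploration of all reachable states from lag = 0
seen, frontier = {0}, deque([0])
while frontier:
    for nxt in successors(frontier.popleft()):
        if nxt not in seen:
            seen.add(nxt)
            frontier.append(nxt)

print(BAD not in seen)   # True: the safety property holds in this toy model
```

A real model checker explores a synchronized product of several such transition systems and also checks temporal properties, but the reachability core is the same.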
This paper presents an extended version of a previously published Bayesian algorithm for the automatic correction of equipment positions on a map with simultaneous mobile object trajectory localization (SLAM) in an underground mine environment represented by an undirected graph. The proposed extended SLAM algorithm requires much less preliminary data on possible equipment positions and uses an additional resample-move algorithm to significantly improve the overall performance.
Due to its potential to improve the efficiency of energy supply, smart energy metering (SEM) has become an area of interest with the surge of the Internet of Things (IoT). SEM entails remote monitoring and control of the sensors and actuators associated with the energy supply system. This provides a flexible platform to conceive and implement new data-driven Demand Side Management (DSM) mechanisms. The IoT enablement allows the data to be gathered and analyzed at the requisite granularity. In addition to efficient use of energy resources and provisioning of power, developing countries face the additional challenge of a temporal mismatch in generation capacity and load factors. This leads to widespread deployment of inefficient and expensive Uninterruptible Power Supply (UPS) solutions for limited power provisioning during the resulting blackouts. Our proposed “Soft-UPS” allows dynamic matching of load and generation through managed curtailment. This eliminates inefficiencies in the energy and power value chain and allows a data-driven approach to solving a widespread problem in developing countries, simultaneously reducing both the upfront and running costs of conventional UPS and storage. A scalable and modular platform is proposed and implemented in this paper. The architecture employs the “WiMODino” using LoRaWAN with a “Lite Gateway” and an SQLite repository for data storage. Role-based access to the system through an Android application has also been demonstrated for monitoring and control.
Investigation of the Angle Dependency of Self-Calibration in Multiple-Input-Multiple-Output Radars
(2021)
Multiple-Input-Multiple-Output (MIMO) is a key technology for improving the angular (spatial) resolution of radars. In MIMO radars, amplitude and phase errors in the antenna elements lead to an increased sidelobe level and a misalignment of the mainlobe, degrading the performance of the antenna channels. Firstly, this paper presents an analysis of the effect of amplitude and phase errors on the angular spectrum using Monte-Carlo simulations. Then, the results are compared with measurements. Finally, error correction with a self-calibration method is proposed and its angle dependency is evaluated. It is shown that the values of the errors change with the incident angle, which calls for an angle-dependent calibration.
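The Monte-Carlo part of such an analysis can be sketched compactly. The following is not the paper's simulation; the array size, error magnitudes, and sidelobe search grid are assumptions, but it shows how random amplitude and phase errors perturb the peak sidelobe level of a uniform linear array:

```python
import cmath, math, random

random.seed(0)
N = 8  # array elements, half-wavelength spacing assumed (hypothetical setup)

def array_factor(theta_deg, amp_err, phs_err):
    # |sum of element responses| for a broadside-steered uniform array
    u = math.pi * math.sin(math.radians(theta_deg))  # phase step for d = lambda/2
    return abs(sum((1 + amp_err[n]) * cmath.exp(1j * (n * u + phs_err[n]))
                   for n in range(N)))

def peak_sidelobe(amp_err, phs_err):
    # Peak sidelobe relative to the mainlobe (mainlobe region below 15 deg skipped)
    mainlobe = array_factor(0.0, amp_err, phs_err)
    return max(array_factor(t, amp_err, phs_err) for t in range(15, 90)) / mainlobe

zero = [0.0] * N
ideal = peak_sidelobe(zero, zero)  # about -12.8 dB for an 8-element uniform array

# Monte-Carlo trials: +-10% amplitude and +-10 degree phase errors per element
trials = [peak_sidelobe([random.uniform(-0.1, 0.1) for _ in range(N)],
                        [math.radians(random.uniform(-10, 10)) for _ in range(N)])
          for _ in range(200)]
frac = sum(t > ideal for t in trials) / len(trials)
print(frac)  # fraction of error draws that raise the peak sidelobe level
```

A self-calibration method would estimate the per-element errors and divide them out before beamforming; the paper's point is that these estimates themselves vary with the incident angle.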
Estimation of Scattering and Transfer Parameters in Stratified Dispersive Tissues of the Human Torso
(2021)
The aim of this study is to understand the effect of the various layers of biological tissues on electromagnetic radiation in a certain frequency range. Understanding these effects could prove crucial in the development of dynamic imaging systems under operating environments during catheter ablation in the heart. As the catheter passes through arterial paths into the region of interest inside the heart through the aorta, a three-dimensional localization of the catheter is required. In this paper, a study of the detection of the catheter using electromagnetic waves is presented. For this purpose, an appropriate model for the layers of the human torso is defined and simulated both without and with an inserted electrode.
IoT networks are increasingly used as entry points for cyberattacks, as they often offer low security levels, as they may allow the control of physical systems, and as they potentially also open access to other IT networks and infrastructures. Existing intrusion detection systems (IDS) and intrusion prevention systems (IPS) mostly concentrate on legacy IT networks. Nowadays, they come with a high degree of complexity and adaptivity, including the use of artificial intelligence. It is only recently that these techniques have also been applied to IoT networks. In this paper, we present a survey of machine learning and deep learning methods for intrusion detection, and we investigate how previous works have used federated learning for IoT cybersecurity. For this, we present an overview of IoT protocols and potential security risks. We also report the techniques and datasets used in the studied works, discuss the challenges of using ML, DL, and FL for IoT cybersecurity, and provide future insights.
The Go programming language is an increasingly popular language but some of its features lack a formal investigation. This article explains Go's resolution mechanism for overloaded methods and its support for structural subtyping by means of translation from Featherweight Go to a simple target language. The translation employs a form of dictionary passing known from type classes in Haskell and preserves the dynamic behavior of Featherweight Go programs.
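Although the translation targets a Go-like calculus, the essence of dictionary passing can be sketched in Python. This is not Featherweight Go's formal translation; the interface, methods, and values below are invented for illustration:

```python
# Source-level (structural) view: any value with an area method would
# satisfy a hypothetical Shape interface. After translation, the interface
# becomes an explicit dictionary of functions and each method call becomes
# a dictionary lookup, as with Haskell type-class dictionaries.

def square_area(recv):
    return recv["side"] ** 2

def circle_area(recv):
    return 3.14159 * recv["radius"] ** 2

square_dict = {"area": square_area}  # Shape dictionary for squares
circle_dict = {"area": circle_area}  # Shape dictionary for circles

def total_area(shapes):
    # Each value travels with its dictionary; no runtime type inspection needed
    return sum(d["area"](v) for v, d in shapes)

result = total_area([({"side": 2}, square_dict),
                     ({"radius": 1}, circle_dict)])
print(result)  # 4 + 3.14159
```

The point of the translation is that structural subtyping and overload resolution are made explicit: the compiler decides statically which dictionary accompanies each value, while the program's dynamic behavior is preserved.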
We describe a prototype for power line communication for grid monitoring. The PLC receiver is used to gain information about the PLC channel and the current state of the power grid. The PLC receiver uses the communication signal to obtain an accurate estimate of the current channel and provides information which can be used as a basis for further processing with the aim to detect partial discharges and other anomalies in the grid. This monitoring of the power grid takes advantage of existing PLC infrastructure and uses the data signals, which are transmitted anyway, to obtain a real-time measurement of the channel transfer function and the received noise signal. Since this signal is sampled at a high sampling rate compared to simpler measurement sensors, it contains valuable information about possible degradations in the grid which need to be addressed. While channel measurements are based on a received PLC signal, information about partial discharges or other sources of interference can be gathered by a PLC receiver in the absence of a transmit signal. A prototype based on Software Defined Radio has been developed, which implements the simultaneous communication and sensing for a power grid.
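Estimating the channel from data signals that are transmitted anyway rests on a standard identity: with a known transmit block, the per-bin ratio of received to transmitted spectra recovers the channel transfer function. A minimal sketch (the pilot values and the two-tap channel are invented; the prototype's SDR processing chain is not reproduced):

```python
import cmath

def dft(x):
    # Naive DFT, adequate for a short illustrative block
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

# Known transmitted block (chosen so every DFT bin is non-zero)
tx = [2.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
# Hypothetical two-tap channel, applied as a circular convolution
h = [1.0, 0.5]
rx = [sum(h[m] * tx[(n - m) % len(tx)] for m in range(len(h)))
      for n in range(len(tx))]

# Per-bin channel estimate from the data signal: H[k] = RX[k] / TX[k]
TX, RX = dft(tx), dft(rx)
H_est = [R / T for R, T in zip(RX, TX)]
H_true = dft(h + [0.0] * (len(tx) - len(h)))   # true frequency response of h

err = max(abs(a - b) for a, b in zip(H_est, H_true))
print(err < 1e-9)   # True: the estimate matches the channel up to rounding
```

In practice the receiver averages such estimates over many noisy frames; deviations of the estimated transfer function or of the residual noise over time are the hooks for detecting partial discharges and other grid anomalies.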
The term “attribute transfer” refers to the task of altering images in such a way that the semantic interpretation of a given input image is shifted in an intended direction, quantified by semantic attributes. Prominent example applications are photorealistic changes of facial features and expressions, like changing the hair color, adding a smile, or enlarging the nose, and alterations of the entire context of a scene, like transforming a summer landscape into a winter panorama. Recent advances in attribute transfer are mostly based on generative deep neural networks, using various techniques to manipulate images in the latent space of the generator. In this paper, we present a novel method for the common sub-task of local attribute transfer, where only parts of a face have to be altered in order to achieve semantic changes (e.g. removing a mustache). In contrast to previous methods, where such local changes have been implemented by generating new (global) images, we propose to formulate local attribute transfer as an inpainting problem. By removing and regenerating only parts of images, our “Attribute Transfer Inpainting Generative Adversarial Network” (ATI-GAN) is able to utilize local context information to focus on the attributes while keeping the background unmodified, resulting in visually sound results.
The following describes a new method for estimating the parameters of an interior permanent magnet synchronous machine (IPMSM). For the estimation of the parameters, the current slopes caused by the switching of the inverter are used to determine the unknowns of the system equations of the electrical machine. The angle and current dependence of the machine parameters are linearized within a PWM cycle. By considering the different switching states of the inverter, several system equations can be derived and a solution can be found within one PWM cycle. The use of test signals and filter-based approaches is avoided. The derived algorithm is explained and validated with measurements on a test bench.
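The idea of solving system equations from switching-state current slopes can be shown on a deliberately simplified model. The sketch below uses a plain RL winding instead of the full angle- and current-dependent IPMSM equations; all numbers are invented for illustration:

```python
# Simplified winding model: di/dt = (u - R*i) / L
# Two inverter switching states apply two different voltages u at (nearly)
# the same current i, yielding two linear equations  s*L + i*R = u
# in the unknowns L and R (the paper's full IPMSM model is not reproduced).

R_true, L_true, i = 0.5, 1e-3, 10.0      # ohms, henry, amperes (hypothetical)

def slope(u):
    # Current slope produced by the active voltage u of a switching state
    return (u - R_true * i) / L_true

u1, u2 = 12.0, -12.0                     # two switching-state voltages
s1, s2 = slope(u1), slope(u2)            # "measured" current slopes

# Solve the 2x2 linear system for the machine parameters
L_est = (u1 - u2) / (s1 - s2)
R_est = (u1 - s1 * L_est) / i
print(round(L_est, 6), round(R_est, 6))  # recovers L = 0.001 and R = 0.5
```

With more unknowns (d/q inductances, flux linkage, angle dependence), correspondingly more switching states within one PWM cycle provide the additional equations, which is what makes test-signal injection unnecessary.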
The nonlinear behavior of inverters is mainly influenced by the interlocking and switching times of the semiconductors. In the following work, a method is presented that enables online identification of the switching times of the semiconductors. This information allows a compensation of the nonlinear behavior and a reduction of the interlocking time, and can be used for diagnostic purposes. First, the method is derived theoretically by considering the different switching cases of the inverter and the identification possibilities they offer. The method is then extended so that the entire module is taken into account. Furthermore, a possible theoretical implementation is shown. After the methodology has been investigated with respect to possible limitations, boundary conditions, and real hardware, an implementation in the FPGA is performed. Finally, the results are presented and discussed, and further improvements are outlined.
Printed electronics (PE) offers flexible, extremely low-cost, and on-demand hardware due to its additive manufacturing process, enabling emerging ultra-low-cost applications, including machine learning applications. However, large feature sizes in PE limit the complexity of a machine learning classifier (e.g., a neural network (NN)) in PE. Stochastic computing neural networks (SC-NNs) can reduce area in silicon technologies, but still require complex designs due to unique implementation tradeoffs in PE. In this paper, we propose a printed mixed-signal system which substitutes complex and power-hungry conventional stochastic computing (SC) components with printed analog designs. The printed mixed-signal SC design consumes only 35% of the power and requires only 25% of the area of a conventional 4-bit NN implementation. We also show that the proposed mixed-signal SC-NN provides good accuracy for popular neural network classification problems. We consider this work an important step towards the realization of printed SC-NN hardware for near-sensor processing.
Physically Unclonable Functions (PUFs) are hardware-based security primitives which allow for inherent device fingerprinting. To this end, the intrinsic variation of imperfectly manufactured systems is exploited to generate device-specific, unique identifiers. With printed electronics (PE) joining the Internet of Things (IoT), hardware-based security for novel PE-based systems is of increasing importance. Furthermore, PE offers the possibility of split-manufacturing, which mitigates the risk of PUF response readout by third parties before commissioning. In this paper, we investigate a printed PUF core as an intrinsic variation source for the generation of unique identifiers from a crossbar architecture. The printed crossbar PUF is verified by simulation of an 8×8-cell crossbar, which can be utilized to generate 32-bit wide identifiers. Further focus is on limiting factors of printed devices, such as increased parasitics due to novel materials, and on the required control logic specifications. The simulation results highlight that the printed crossbar PUF is capable of generating close-to-ideal unique identifiers at the investigated feature size. As a proof of concept, a 2×2-cell printed crossbar PUF core is fabricated and electrically characterized.
The system presented here combines the new concept of peer-to-peer navigation with the use of augmented reality to support bedside placement of external ventricular drains. The very compact and accurate overall system comprises a patient tracker with an integrated camera, augmented-reality glasses with a camera, and a puncture needle or pointer with two trackers, which is used to record the patient's anatomy. The exact position and orientation of the puncture needle are computed with the aid of the recorded landmarks and displayed to the surgeon on the patient via the augmented-reality glasses. The methods for calibrating the static transformations between the patient tracker and its attached camera, and between the trackers of the puncture needle, are crucial for accuracy and are presented here. The overall system was successfully tested in vitro, confirming the utility of a peer-to-peer navigation system.
The twin concept is increasingly used for optimization tasks in the context of Industry 4.0 and digitization. It can also help small and medium-sized enterprises (SME) to exploit their energy flexibility potential and to achieve added value through appropriate energy marketing. At the same time, this use of flexibility helps to realize a climate-neutral energy supply with high shares of renewable energies. The digital twin reflects real production, power flows, and market influences as a computer model, which makes it possible to simulate and optimize on-site interventions and interactions with the energy market without disturbing the real production processes. This paper describes the development of a generic model library that maps flexibility-relevant components and processes of SME, thus simplifying the creation of a digital twin. The paper also includes the development of an experimental twin consisting of SME hardware components and a PLC-based SCADA system. The experimental twin provides a laboratory environment in which the digital twin can be tested, further developed, and demonstrated on a laboratory scale. Concrete implementations of such a digital twin and experimental twin are described as examples.
Correlation Clustering, also called the minimum cost Multicut problem, is the process of grouping data by pairwise similarities. It has proven to be effective on clustering problems where the number of classes is unknown. However, not only is the Multicut problem NP-hard, but an undirected graph G with n vertices representing single images has at most n(n-1)/2 edges, making it challenging to implement correlation clustering for large datasets. In this work, we propose Multi-Stage Multicuts (MSM) as a scalable approach for image clustering. Specifically, we solve minimum cost Multicut problems across multiple distributed compute units. Our approach not only allows us to solve problem instances which are too large to fit into the shared memory of a single compute node, but it also achieves significant speedups while preserving the clustering accuracy at the same time. We evaluate our proposed method on the CIFAR10 …
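The minimum cost Multicut objective itself is compact: a clustering pays the cost of every edge whose endpoints land in different clusters. The brute-force toy below is not the paper's distributed MSM solver, and the graph and costs are invented, but it makes the objective concrete:

```python
from itertools import product

# Tiny instance: 4 nodes, signed edge costs; an edge's cost is paid
# when its endpoints end up in different clusters (costs are invented).
edges = {(0, 1): -3.0, (1, 2): 2.0, (2, 3): -1.0, (0, 2): 2.5, (1, 3): 0.5}
n = 4

def multicut_cost(labels):
    # Clustering cost = total cost of edges cut by the partition;
    # enumerating label assignments enforces cut consistency for free
    # (you cannot cut (0,1) while keeping 0-2 and 1-2 joined).
    return sum(c for (u, v), c in edges.items() if labels[u] != labels[v])

best = min(product(range(n), repeat=n), key=multicut_cost)
print(multicut_cost(best))  # here the optimum groups {0,2} and {1,3}
```

This enumeration is exponential in n, which is exactly why a scalable decomposition across compute units is needed for graphs built from large image datasets.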
For the past few years, Low Power Wide Area Networks (LPWAN) have emerged as key technologies for the connectivity of many applications in the Internet of Things (IoT), combining low data rates with strict cost and energy restrictions. LoRa/LoRaWAN in particular enjoys high visibility on today's markets because of its good performance and its open community. Originally, LoRa was designed for operation within the Sub-GHz ISM bands for Industrial, Scientific and Medical applications. However, at the end of 2018, a LoRa-based solution in the 2.4 GHz ISM band was presented, promising higher bandwidths and higher data rates. Furthermore, it overcomes the limited duty cycle prescribed by the regulations in the ISM bands and therefore also opens doors to many novel application fields. Also, due to higher bandwidths and shorter transmission times, the use of alternative MAC layer protocols becomes very interesting, e.g. for TDMA-based approaches. Within this paper, we propose a system architecture with 2.4 GHz LoRa components combining two aspects. On the one hand, we present a design and an implementation of a 2.4 GHz LoRaWAN solution that can be seamlessly integrated into existing LoRaWAN back-hauls. On the other hand, we describe a deterministic setup using a Time Slotted Channel Hopping (TSCH) approach as defined in the IEEE 802.15.4-2015 standard for industrial applications. Finally, measurements show the performance of the system.
It seems to be a widespread impression that the use of strong cryptography inevitably imposes a prohibitive burden on industrial communication systems, at least inasmuch as real-time requirements in cyclic fieldbus communications are concerned. AES-GCM is a leading cryptographic algorithm for authenticated encryption, which protects data against disclosure and manipulations. We study the use of both hardware and software-based implementations of AES-GCM. By simulations as well as measurements on an FPGA-based prototype setup we gain and substantiate an important insight: for devices with a 100 Mbps full-duplex link, a single low-footprint AES-GCM hardware engine can deterministically cope with the worst-case computational load, i.e., even if the device maintains a maximum number of cyclic communication relations with individual cryptographic keys. Our results show that hardware support for AES-GCM in industrial fieldbus components may actually be very lightweight.
The aim of this work is the application and evaluation of a method to visually detect markers at a distance of up to five meters and determine their real-world position. Combinations of cameras and lenses with different parameters were studied to determine the optimal configuration. Based on this configuration, camera images were taken after proper calibration. These images are then transformed into a bird's eye view using a homography matrix. The homography matrix is calculated with four point pairs as well as with coordinate transformations. The obtained images show the ground plane undistorted, making it possible to convert a pixel position into a real-world position with a conversion factor. The proposed approach helps to effectively create data sets for training neural networks for navigation purposes.
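The final pixel-to-world step can be sketched directly: apply the homography to a pixel in homogeneous coordinates, dehomogenize, and scale by the conversion factor. The matrix and factor below are invented placeholders for values that would come from calibration and the four point pairs:

```python
# Hypothetical image -> bird's-eye homography and metric scale
# (in the described method these come from calibration, not from here).
H = [[1.0, 0.2, -50.0],
     [0.0, 1.5, -80.0],
     [0.0, 0.001, 1.0]]
METERS_PER_PIXEL = 0.01  # hypothetical scale of the bird's-eye image

def pixel_to_world(u, v):
    # Homogeneous transform: [x', y', w'] = H @ [u, v, 1], then dehomogenize
    x, y, w = (row[0] * u + row[1] * v + row[2] for row in H)
    return (x / w * METERS_PER_PIXEL, y / w * METERS_PER_PIXEL)

x_m, y_m = pixel_to_world(320, 240)
print(round(x_m, 3), round(y_m, 3))  # ground-plane position in meters
```

Because the bird's-eye view shows the ground plane undistorted, the scale factor is constant across the image, which is what makes the simple per-pixel conversion valid.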
The applicability of local magnetic field characteristics for more precise localization of subjects and/or objects in indoor environments, such as railway stations, airports, exhibition halls, showrooms, or shopping centers, is considered. An investigation has been carried out to find out whether and how low-cost magnetic field sensors and mobile robot platforms can be used to create maps that improve the accuracy and robustness of later navigation with smartphones or other devices.
Object Detection and Mapping with Unmanned Aerial Vehicles Using Convolutional Neural Networks
(2021)
Significant progress has been made in the field of deep learning through intensive research over the last decade. So-called convolutional neural networks are an essential component of this research. In this type of neural network, the mathematical convolution operator is used to extract characteristics or anomalies. The purpose of this work is to investigate the extent to which aerial recordings and flight data of Unmanned Aerial Vehicles (UAVs) can, under certain initial settings, be fed into the architecture of a neural network in order to detect and map an object. Using the calculated contours or dimensions of the so-called bounding boxes, the position of the objects can be determined relative to the current UAV location.
Cryptographic protection of messages requires frequent updates of the symmetric cipher key used for encryption and decryption. Protocols of legacy IT security, like TLS, SSH, or MACsec, implement rekeying under the assumption that, first, application data exchange is allowed to stall occasionally and, second, dedicated control messages to orchestrate the process can be exchanged. In real-time automation applications, the first is generally prohibitive, while the second may induce problematic traffic patterns on the network. We present a novel seamless rekeying approach which can be embedded into cyclic application data exchanges. Although the approach is agnostic to the underlying real-time communication system, we developed a demonstrator emulating the widespread industrial Ethernet system PROFINET IO and successfully used this rekeying mechanism with it.
Sustainable chemical processes should be designed to combine technological advantages and progress with lower safety risks and minimal environmental impact, for example through the reduction of raw material, energy, and water consumption and the avoidance of hazardous waste and pollution with toxic chemical agents. A number of novel eco-friendly chemical technologies have been developed in recent decades with the help of eco-innovation approaches and methods such as Life Cycle Analysis, Green Process Engineering, Process Intensification, Process Design for Sustainability, and others. An emerging approach to sustainable process design in process engineering builds on innovative solutions inspired by nature. However, the implementation of eco-friendly technologies often faces secondary ecological problems. The study postulates that the eco-inventive principles identified in natural systems make it possible to avoid secondary eco-problems and proposes to apply these principles for sustainable design in chemical process engineering. The research work critically examines how this approach differs from biomimetics, which is commonly used to copy natural systems. The application of nature-inspired eco-design principles is illustrated with the example of a sustainable technology for the extraction of nickel from pyrophyllite.
The proposed method includes: identification and documentation of the elementary TRIZ inventive principles from the TRIZ body of knowledge; extension and enhancement of the inventive principles through patent and technology analysis, avoiding overlapping and redundant principles; classification and adaptation of the principles to at least the following categories: working medium, target object, useful action, harmful effect, environment, information, field, substance, time, and space; and assignment of the elementary inventive principles to at least the following underlying engineering domains: universal, design, mechanical, acoustic, thermal, chemical, electromagnetic, intermolecular, biological, and data processing. The method further includes classification of the abstraction level of the elementary principles; definition of a statistical ranking of principles for different problem types and for specific engineering or non-technical domains; definition of strategies for selecting sets of principles with high solution potential for predefined problems; automated semantic transformation of the elementary inventive principles into solution ideas; and evaluation of automatically generated ideas and their transformation into innovation or inventive concepts.
In this work, we evaluate two different image clustering objectives, k-means clustering and correlation clustering, in the context of Triplet Loss induced feature space embeddings. Specifically, we train a convolutional neural network to learn discriminative features by optimizing two popular versions of the Triplet Loss in order to study their clustering properties under the assumption of noisy labels. Additionally, we propose a new, simple Triplet Loss formulation, which shows desirable properties with respect to formal clustering objectives and outperforms the existing methods. We evaluate all three Triplet loss formulations for K-means and correlation clustering on the CIFAR-10 image classification dataset.
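For reference, the standard hinge form of the Triplet Loss can be written in a few lines. The paper's novel third formulation is not reproduced here, and the example embeddings are invented:

```python
import math

def euclid(a, b):
    # Euclidean distance between two embedding vectors
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def triplet_loss(anchor, positive, negative, margin=0.2):
    # Hinge formulation: the positive must be closer to the anchor than
    # the negative by at least `margin`, otherwise a penalty is incurred
    return max(0.0, euclid(anchor, positive) - euclid(anchor, negative) + margin)

anchor, pos, neg = [0.0, 0.0], [0.3, 0.4], [1.0, 0.0]
print(triplet_loss(anchor, pos, neg))  # 0.0: constraint already satisfied
print(triplet_loss(anchor, neg, pos))  # positive farther than negative: penalized
```

Minimizing this over many (anchor, positive, negative) triples pulls same-class embeddings together and pushes different-class embeddings apart, which is what makes the learned feature space amenable to k-means and correlation clustering afterwards.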
Despite the success of convolutional neural networks (CNNs) in many computer vision and image analysis tasks, they remain vulnerable against so-called adversarial attacks: Small, crafted perturbations in the input images can lead to false predictions. A possible defense is to detect adversarial examples. In this work, we show how analysis in the Fourier domain of input images and feature maps can be used to distinguish benign test samples from adversarial images. We propose two novel detection methods: Our first method employs the magnitude spectrum of the input images to detect an adversarial attack. This simple and robust classifier can successfully detect adversarial perturbations of three commonly used attack methods. The second method builds upon the first and additionally extracts the phase of Fourier coefficients of feature-maps at different layers of the network. With this extension, we are able to improve adversarial detection rates compared to state-of-the-art detectors on five different attack methods. The code for the methods proposed in the paper is available at github.com/paulaharder/SpectralAdversarialDefense
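The first detector's intuition, that adversarial perturbations leave a footprint in the high-frequency part of the magnitude spectrum, can be sketched with a naive DFT. The image, perturbation, and frequency cut-off below are invented; this is not the paper's classifier:

```python
import cmath, random

def dft2(img):
    # Naive 2-D DFT magnitude spectrum (adequate for tiny images)
    N = len(img)
    return [[abs(sum(img[y][x] * cmath.exp(-2j * cmath.pi * (u * x + v * y) / N)
                     for y in range(N) for x in range(N)))
             for u in range(N)] for v in range(N)]

def high_freq_energy(mag):
    # Sum magnitudes of bins far from the low-frequency corners
    N = len(mag)
    return sum(mag[v][u] for v in range(N) for u in range(N)
               if min(u, N - u) + min(v, N - v) >= N // 2)

random.seed(1)
N = 8
smooth = [[(x + y) / (2 * N) for x in range(N)] for y in range(N)]  # benign-like ramp
noisy = [[smooth[y][x] + random.uniform(-0.2, 0.2)                  # perturbed image
          for x in range(N)] for y in range(N)]

hf_smooth = high_freq_energy(dft2(smooth))
hf_noisy = high_freq_energy(dft2(noisy))
print(hf_noisy > hf_smooth)  # True: the perturbation raises high-frequency energy
```

A detector would feed such spectral statistics (and, for the second method, the Fourier phase of intermediate feature maps) into a classifier trained to separate benign from adversarial inputs.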
Transformer models have recently attracted much interest from computer vision researchers and have since been successfully employed for several problems traditionally addressed with convolutional neural networks. At the same time, image synthesis using generative adversarial networks (GANs) has drastically improved over the last few years. The recently proposed TransGAN is the first GAN using only transformer-based architectures and achieves competitive results when compared to convolutional GANs. However, since transformers are data-hungry architectures, TransGAN requires data augmentation, an auxiliary super-resolution task during training, and a masking prior to guide the self-attention mechanism. In this paper, we study the combination of a transformer-based generator and a convolutional discriminator and successfully remove the need for the aforementioned design choices. We evaluate our approach by conducting a benchmark of well-known CNN discriminators, ablate the size of the transformer-based generator, and show that combining both architectural elements into a hybrid model leads to better results. Furthermore, we investigate the frequency spectrum properties of generated images and observe that our model retains the benefits of an attention-based generator.
Autonomous driving is disrupting the automotive industry as we know it today. For this, fail-operational behavior is essential in the sense, plan, and act stages of the automation chain in order to handle safety-critical situations autonomously, which is not reached with state-of-the-art approaches. The European ECSEL research project PRYSTINE realizes Fail-operational Urban Surround perceptION (FUSION) based on robust Radar and LiDAR sensor fusion and control functions in order to enable safe automated driving in urban and rural environments. This paper showcases some of the key exploitable results (e.g., novel Radar sensors, innovative embedded control and E/E architectures, pioneering sensor fusion approaches, AI-controlled vehicle demonstrators) achieved up to its final year, year 3.
Diffracted waves carry high-resolution information that can help interpret fine structural details at a scale smaller than the seismic wavelength. Because of the low signal-to-noise ratio of diffracted waves, it is challenging to preserve them during processing and to identify them in the final data. Manually picking the diffractions is therefore the traditional approach, but such a task is tedious and often prohibitive, so current attention is given to domain adaptation. These methods aim to transfer knowledge from a labeled domain in order to train the model and then infer on the real, unlabeled data. In this regard, it is common practice to create a synthetic labeled training dataset, followed by testing on unlabeled real data. Unfortunately, such a procedure may fail due to the gap between the synthetic and the real distribution, since synthetic data quite often oversimplifies the problem, and consequently the transfer learning becomes a hard and non-trivial procedure. Furthermore, deep neural networks are characterized by their high sensitivity towards cross-domain distribution shift. In this work, we present a deep learning model that builds a bridge between both distributions, creating a semi-synthetic dataset that fills the gap between the synthetic and real domains. More specifically, our proposal is a feed-forward, fully convolutional neural network for image-to-image translation that allows us to insert synthetic diffractions while preserving the original reflection signal. A series of experiments validate that our approach produces convincing seismic data containing the desired synthetic diffractions.
This paper presents a novel low-jitter interface between a low-cost integrated IEEE 802.11 chip and an FPGA. It is designed to be part of system hardware for ultra-precise synchronization between wireless stations. On the physical level, it uses the Wi-Fi chip's coexistence signal lines and UART frame encoding. On this basis, we propose an efficient communication protocol providing precise timestamping of incoming frames and internal diagnostic mechanisms for detecting communication faults. At the same time, it is simple enough to be implemented both in a low-cost FPGA and in commodity IEEE 802.11 chip firmware. The results of computer simulation show that the developed FPGA implementation of the proposed protocol can precisely timestamp incoming frames as well as detect most communication errors even under high interference. The probability of undetected errors was investigated. The results of this analysis are significant for the development of novel wireless synchronization hardware.
The recent successes and widespread application of compute-intensive machine learning and data analytics methods have been boosting the usage of the Python programming language on HPC systems. While Python provides many advantages for the users, it has not been designed with a focus on multi-user environments or parallel programming, making it quite challenging to maintain stable and secure Python workflows on an HPC system. In this paper, we analyze the key problems induced by the usage of Python on HPC clusters and sketch appropriate workarounds for efficiently maintaining multi-user Python software environments, securing and restricting resources of Python jobs, and containing Python processes, while focusing on Deep Learning applications running on GPU clusters.
The development of Internet of Things (IoT) embedded devices is proliferating, especially in smart home automation systems. Unfortunately, however, these devices impose overhead on the IoT network. The Internet Engineering Task Force (IETF) has therefore introduced the IPv6 Low-Power Wireless Personal Area Network (6LoWPAN) to address this constraint. 6LoWPAN is an Internet Protocol (IP) based communication scheme that allows each device to connect to the Internet directly; as a result, power consumption is reduced. However, the limited data transmission frame size of the IPv6 Routing Protocol for Low-power and Lossy Networks (RPL) causes routing overhead and consequently degrades the performance of the network in terms of Quality of Service (QoS), especially in a large network. Therefore, HRPL was developed to enhance the RPL protocol and minimize the redundant retransmissions that cause the routing overhead. We introduced the T-Cut Off Delay to set the limit of the delay and the H field to respond to actions taken within the T-Cut Off Delay. This paper presents a comparative performance assessment of HRPL between simulation and a real-world scenario (the 6LoWPAN Smart Home System (6LoSH) testbed) to validate the HRPL functionalities. Our results show that HRPL successfully reduced the routing overhead when implemented in 6LoSH. The observed Control Traffic Overhead (CTO) packet difference between the experiments is 7.1%, and the convergence time difference is 9.3%. Further research is recommended for these metrics: latency, Packet Delivery Ratio (PDR), and throughput.
The authentication method of electronic devices, based on individual forms of correlograms of their internal electric noises, is well known. Specific physical differences in the components, for example caused by variations in production quality, cause specific electrical signals, i.e. electric noise, in the electronic device. It is possible to obtain this information and to identify the specific differences of individual devices using an embedded analog-to-digital converter (ADC). These investigations confirm the possibility of identifying and authenticating electronic devices using bit templates calculated from the sequence of values of the normalized autocorrelation function of the noise. Experiments have been performed using personal computers. The probability of correct identification and authentication increases with increasing noise recording duration. As a result of these experiments, an accuracy of 98.1% was achieved for a 1-second-long registration of EM for the set of investigated computers.
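The template idea described above (quantizing the normalized autocorrelation function into bits and comparing templates by Hamming distance) can be illustrated with a small sketch. This is not the authors' implementation: the lag count, the median thresholding, and the synthetic "devices" (white noise shaped by device-specific filters) are assumptions for illustration only.

```python
import numpy as np

def bit_template(noise, lags=32):
    """Quantize the normalized autocorrelation of a noise record into bits."""
    x = noise - noise.mean()
    acf = np.array([np.dot(x[:-k], x[k:]) / np.dot(x, x)
                    for k in range(1, lags + 1)])
    return (acf > np.median(acf)).astype(np.uint8)  # one bit per lag

def hamming(a, b):
    """Number of differing template bits; small for the same device."""
    return int(np.sum(a != b))

rng = np.random.default_rng(0)
# Hypothetical "devices": white noise shaped by device-specific filters,
# so each device has a characteristic autocorrelation
kernel_a = [0.9 ** i for i in range(16)]
kernel_b = [(-0.8) ** i for i in range(16)]
dev_a = np.convolve(rng.normal(size=4096), kernel_a, mode="same")
dev_b = np.convolve(rng.normal(size=4096), kernel_b, mode="same")

t_a1 = bit_template(dev_a[:2048])   # first recording of device A
t_a2 = bit_template(dev_a[2048:])   # second recording of device A
t_b = bit_template(dev_b[:2048])    # recording of device B
```

Authentication would then compare a fresh template against an enrolled one and accept the device if the Hamming distance stays below a threshold; longer recordings give more stable autocorrelation estimates, which matches the abstract's observation that accuracy grows with recording duration.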
Time Sensitive Networking (TSN) provides mechanisms to enable deterministic and real-time networking in industrial networks. Configuration of these mechanisms is key to fully deploying and integrating TSN in such networks. The IEEE 802.1Qcc standard has proposed different configuration models for implementing a TSN configuration. Up until now, TSN and its configuration have been explored mostly for Ethernet-based industrial networks; for wireless networks they are still considered work in progress. This work focuses on the fully centralized model and describes a generic concept to enable the configuration of TSN mechanisms in wireless industrial networks. To this end, a configuration entity is implemented to configure the wireless end stations so as to satisfy their requirements. The proposed solution is then validated with the Digital Enhanced Cordless Telecommunication ultra-low energy (DECT ULE) wireless communication protocol.
Analysis of Amplitude and Phase Errors in Digital-Beamforming Radars for Automotive Applications
(2020)
Fundamentally, automotive radar sensors with Digital-Beamforming (DBF) use several transmitter and receiver antennas to measure the direction of the target. However, hardware imperfections, tolerances in the feeding lines of the antennas, coupling effects as well as temperature changes and ageing will cause amplitude and phase errors. These errors can lead to misinterpretation of the data and result in hazardous actions of the autonomous system. First, the impact of amplitude and phase errors on angular estimation is discussed and analyzed by simulations. The results are compared with the measured errors of a real radar sensor. Further, a calibration method is implemented and evaluated by measurements.
With the increasing degree of interconnectivity in industrial factories, security is increasingly becoming the most important stepping stone towards wide adoption of the Industrial Internet of Things (IIoT). This paper summarizes the most important aspects of a keynote given at the DESSERT 2020 conference. It highlights ongoing and open research activities on the different levels, from novel cryptographic algorithms over security protocol integration and testing to security architectures for the full lifetime of devices and systems. It includes an overview of the research activities at the authors' institute.
RETIS – Real-Time Sensitive Wireless Communication Solution for Industrial Control Applications
(2020)
Ultra-Reliable Low Latency Communications (URLLC) has always been a vital component of many industrial applications. The paper proposes a new wireless URLLC solution called RETIS, which is suitable for factory automation and fast process control applications, where low latency, low jitter, and high data exchange rates are mandatory. In the paper, we describe the communication protocol as well as the hardware structure of the network nodes for implementing the required functionality. Several techniques enabling fast, reliable wireless transmissions are used: short Transmission Time Interval (TTI), Time-Division Multiple Access (TDMA), MIMO, optional duplicated data transfer, Forward Error Correction (FEC), and an ACK mechanism. Preliminary tests show that reliable end-to-end latency down to 350 μs and packet exchange rates up to 4 kHz can be reached (using quadruple MIMO and standard IEEE 802.15.4 PHY at 250 kbit/s).
Machine learning (ML) has become highly relevant in applications across all industries, and specialists in the field are sought urgently. As it is a highly interdisciplinary field, requiring knowledge in computer science, statistics and the relevant application domain, experts are hard to find. Large corporations can sweep the job market by offering high salaries, which makes the situation for small and medium enterprises (SME) even worse, as they usually lack the capacities both for attracting specialists and for qualifying their own personnel. In order to meet the enormous demand for ML specialists, universities now teach ML in specifically designed degree programs as well as within established programs in science and engineering. While the teaching almost always uses practical examples, these are often somewhat artificial or outdated, as real data from real companies is usually not available. The approach reported in this contribution aims to tackle the above challenges in an integrated course, combining three independent aspects: first, teaching key ML concepts to graduate students from a variety of existing degree programs; second, qualifying working professionals from SME for ML; and third, applying ML to real-world problems faced by those SME. The course was carried out in two trial periods within a government-funded project at a university of applied sciences in south-west Germany. The region is dominated by SME, many of which are world leaders in their industries. Participants were students from different graduate programs as well as working professionals from several SME based in the region. The first phase of the course (one semester) consisted of the fundamental concepts of ML, such as exploratory data analysis, regression, classification, clustering, and deep learning. In this phase, student participants and working professionals were taught in separate tracks.
Students attended regular classes and lab sessions (but were also given access to e-learning materials), whereas the professionals learned exclusively in a flipped classroom scenario: they were given access to e-learning units (video lectures and accompanying quizzes) for preparation, while face-to-face sessions were dominated by lab experiments applying the concepts. Prior to the start of the second phase, participating companies were invited to submit real-world problems that they wanted to solve with the help of ML. The second phase consisted of practical ML projects, each tackling one of the problems and worked on by a mixed team of both students and professionals for the period of one semester. The teams were self-organized in the ways they preferred to work (e.g. remote vs. face-to-face collaboration), but also coached by one of the teaching staff. In several plenary meetings, the teams reported on their status as well as challenges and solutions. In both periods, the course was monitored and extensive surveys were carried out. We report on the findings as well as the lessons learned. For instance, while the program was very well-received, professional participants wished for more detailed coverage of theoretical concepts. A challenge faced by several teams during the second phase was a dropout of student members due to upcoming exams in other subjects.
Novel manufacturing technologies, such as printed electronics, may enable future applications for the Internet of Everything like large-area sensor devices, disposable security, and identification tags. Printed physically unclonable functions (PUFs) are promising candidates to be embedded as hardware security keys into lightweight identification devices. We investigate hybrid PUFs based on a printed PUF core. The statistics on the intra- and inter-hamming distance distributions indicate a performance suitable for identification purposes. Our evaluations are based on statistical simulations of the PUF core circuit and the thereof generated challenge-response pairs. The analysis shows that hardware-intrinsic security features can be realized with printed lightweight devices.
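The intra-/inter-Hamming-distance evaluation mentioned above can be mimicked with a toy statistical simulation. The uniform "process variation" model, the read-noise level, and the bit count are illustrative assumptions, not the authors' printed PUF core model.

```python
import numpy as np

rng = np.random.default_rng(1)
n_dev, n_bits = 8, 128
# Per-device analog values from (assumed) process variation; threshold at 0.5
variation = rng.random((n_dev, n_bits))

def read_response(dev, noise_std=0.02):
    """One noisy evaluation of a device: analog value + read noise, thresholded."""
    return (variation[dev] + rng.normal(0.0, noise_std, n_bits) > 0.5).astype(int)

def hd(a, b):
    """Hamming distance between two response bit vectors."""
    return int(np.sum(a != b))

# Intra-HD: repeated reads of the same device (should be near 0)
intra = np.mean([hd(read_response(d), read_response(d)) for d in range(n_dev)])
# Inter-HD: reads of different devices (should be near n_bits / 2)
inter = np.mean([hd(read_response(i), read_response(j))
                 for i in range(n_dev) for j in range(i + 1, n_dev)])
```

A PUF suitable for identification shows exactly the separation the abstract reports: intra-HD close to zero (reproducible responses) and inter-HD close to half the bit count (unique responses).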
The precise positioning of mobile systems is a prerequisite for any autonomous behavior, in an industrial environment as well as in field robotics. The paper describes the setup of an experimental platform and its use for the evaluation of simultaneous localization and mapping (SLAM) algorithms. Two approaches are compared. First, a local method based on point cloud matching and the integration of inertial measurement units is evaluated. Subsequent matching makes it possible to create a three-dimensional point cloud that can be used as a map in subsequent runs. The second approach is a full SLAM algorithm based on graph relaxation models, incorporating the full sensor suite of odometry, inertial sensors, and 3D laser scan data.
Neuromorphic computing systems have demonstrated many advantages for popular classification problems with significantly less computational resources. We present in this paper the design, fabrication, and training of a programmable neuromorphic circuit based on printed electrolyte-gated field-effect transistors (EGFETs). Based on a printable neuron architecture involving several resistors and one transistor, the proposed circuit can realize multiply-add and activation functions. The functionality of the circuit, i.e. the weights of the neural network, can be set during a post-fabrication step in the form of resistors printed onto the crossbar. Besides the fabrication of a programmable neuron, we also provide a learning algorithm, tailored to the requirements of the technology and the proposed programmable neuron design, which is verified through simulations. The proposed neuromorphic circuit operates at 5 V and occupies 385 mm² of area.
A novel approach for the synchronization and calibration of a camera and an inertial measurement unit (IMU) in the research-oriented visual-inertial mapping and localization framework maplab is presented. Mapping and localization are based on detecting different features in the environment. In addition to the possibility of creating single-case maps, the included algorithms allow merging maps to increase mapping accuracy and obtain large-scale maps. Furthermore, the algorithms can be used to optimize the collected data. The preliminary results show that, after appropriate calibration and synchronization, maplab can be used efficiently for mapping, especially in rooms and small building environments.
During the day-to-day exploitation of localization systems in mines, the technical staff tends to rearrange radio equipment incorrectly: positions of devices may not be accurately marked on a map, or their marked positions may not correspond to the truth. This situation may lead to positioning inaccuracies and errors in the operation of the localization system. This paper presents two Bayesian algorithms for the automatic correction of equipment positions on the map using trajectories restored by inertial measurement units mounted on mobile objects, such as pedestrians and vehicles. As a basis, a predefined map of the mine represented as an undirected weighted graph was used as input. The algorithms were implemented using the Simultaneous Localization and Mapping (SLAM) approach. The results prove that both methods are capable of detecting misplacement of access points and of providing corresponding corrections. The discrete Bayesian filter outperforms the unscented Kalman filter, which, however, requires more computational power.
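As a generic illustration of the discrete Bayesian filter mentioned above, the sketch below reduces the mine graph to a one-dimensional chain of five map nodes. The motion kernel and the measurement likelihood are made-up numbers for the example, not the paper's model.

```python
import numpy as np

def predict(belief, kernel):
    """Motion update: diffuse the belief along the chain of map nodes."""
    return np.convolve(belief, kernel, mode="same")

def correct(belief, likelihood):
    """Measurement update: reweight by the likelihood and renormalize."""
    posterior = belief * likelihood
    return posterior / posterior.sum()

belief = np.full(5, 0.2)                            # uniform prior over 5 nodes
belief = predict(belief, [0.1, 0.8, 0.1])           # object may move one node/step
likelihood = np.array([0.05, 0.05, 0.1, 0.7, 0.1])  # measurement favors node 3
belief = correct(belief, likelihood)
```

Iterating predict/correct along a restored IMU trajectory concentrates the belief on the node where the equipment actually is, which is the mechanism behind the position corrections in the abstract.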
Due to the rapidly increasing storage consumption worldwide, as well as the expectation of continuous availability of information, the complexity of administration in today's data centers is growing steadily. Integrated techniques for monitoring hard disks can increase the reliability of storage systems. However, these techniques often lack intelligent data analysis to enable predictive maintenance. To solve this problem, machine learning algorithms can be used to detect potential failures in advance and prevent them. In this paper, an unsupervised model for predicting hard disk failures based on Isolation Forest is proposed. In addition, a method is presented that can deal with the highly imbalanced datasets, as the experiment on the Backblaze benchmark dataset demonstrates.
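The isolation idea behind such a model can be sketched in a few lines: anomalous samples are separated from the bulk of the data by fewer random splits. This is a deliberately minimal re-implementation with made-up two-dimensional "disk attribute" data, not the paper's model or the Backblaze features.

```python
import numpy as np

def isolation_path(x, X, rng, max_depth=10):
    """Random axis-aligned splits until sample x is isolated from X."""
    depth = 0
    while depth < max_depth and len(X) > 1:
        j = rng.integers(X.shape[1])            # pick a random attribute
        lo, hi = X[:, j].min(), X[:, j].max()
        if lo == hi:
            break
        split = rng.uniform(lo, hi)             # pick a random split value
        X = X[X[:, j] < split] if x[j] < split else X[X[:, j] >= split]
        depth += 1
    return depth

def isolation_score(x, X, seed=0, n_trees=100):
    """Mean path length over many random trees; lower = more anomalous."""
    rng = np.random.default_rng(seed)
    return float(np.mean([isolation_path(x, X, rng) for _ in range(n_trees)]))

rng = np.random.default_rng(42)
healthy = rng.normal(0.0, 1.0, size=(200, 2))   # made-up healthy-disk attributes
failing = np.array([10.0, 10.0])                # one drifting, failure-prone disk
```

Because no failure labels are needed, the score works directly on the highly imbalanced data the abstract mentions: the rare failing disks simply receive shorter average isolation paths than the healthy majority.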
Three quotations serve as an entry point into the discourse on civilian network technologies, mobile devices, online services, and the question of how the "church of the future" can position itself (at least from a media-studies perspective). The juxtaposition of the positions they represent is intended to illustrate the benefits and consequences, for the individual as well as for society, of the increasingly complete penetration of (almost) all areas of life by digital technology.
In this work, a method for the estimation of current slopes induced by inverters operating interior permanent magnet synchronous machines is presented. After the derivation of the estimation algorithm, the requirements for a suitable sensor setup in terms of accuracy, dynamics, and electromagnetic interference are discussed. The boundary conditions for the estimation algorithm are presented with respect to application within high-power traction systems. The estimation algorithm is implemented on a field programmable gate array (FPGA). This moving least-squares algorithm offers the advantage that it does not depend on stored sample vectors, so not every measured value has to be kept. The summation of all measured values leads to a significant reduction of the required storage units and thus decreases the hardware requirements. The algorithm is designed to be calculated within the dead time of the inverter. Appropriate countermeasures for disturbances and hardware restrictions are implemented. The results are discussed afterwards.
Wireless synchronization of industrial controllers is a challenging task in environments where wired solutions are not practical. The best solutions proposed so far to solve this problem require rather expensive and highly specialized FPGA-based devices. With this work we counter the trend by introducing a straightforward approach to synchronize a fairly cheap IEEE 802.11 integrated wireless chip (IWC) with external devices. More specifically, we demonstrate how we can reprogram the software running in the 802.11 IWC of the Raspberry Pi 3B and transform the receiver input potential of the wireless transceiver into a triggering signal for an external inexpensive FPGA. Experimental results show a mean-square synchronization error of less than 496 ns, while the absolute synchronization error does not exceed 6 μs. The jitter of the output signal that we obtain after synchronizing the clock of the external device did not exceed 5.2 μs throughout the whole measurement campaign. Even though we do not set new records in terms of accuracy, we do in terms of complexity, cost, and availability of the required components: all these factors make the proposed technique very promising for the deployment of large-scale low-cost automation solutions.
Wireless communication technologies play a major role in enabling megatrends like the Internet of Things (IoT) and Industry 4.0. The Narrowband Wireless WAN (NBWWAN) was introduced to meet the long-range and low-power requirements of spatially distributed wireless communication use cases. These networks introduce additional challenges in testing because the network topology and RF characteristics become particularly complex, and thus a multitude of different scenarios must be tested. This paper describes an infrastructure for the automated testing of radio communication and for systematic measurements of the network performance of NBWWAN.
One of the main requirements of spatially distributed Internet of Things (IoT) solutions is to have networks with wide coverage that connect many low-power devices. Low-Power Wide-Area Networks (LPWAN) and Cellular IoT (cIoT) networks are promising candidates in this space. LPWAN approaches such as LoRaWAN, SigFox, and MIOTY are based on enhanced physical layer (PHY) implementations to achieve long range. Narrowband versions of cellular networks, such as Narrowband IoT (NB-IoT) and Long-Term Evolution for Machines (LTE-M), offer reduced bandwidth and simplified node and network management mechanisms. Since the underlying use cases come with various requirements, it is essential to perform a comparative analysis of competing technologies. This article provides systematic performance measurements and a comparison of LPWAN and NB-IoT technologies in a unified testbed, and also discusses the necessity of future fifth-generation (5G) LPWAN solutions.
Modeling of Random Variations in a Switched Capacitor Circuit based Physically Unclonable Function
(2020)
The Internet of Things (IoT) is expanding into a wide range of fields such as home automation, agriculture, environmental monitoring, industrial applications, and many more. Securing tens of billions of interconnected devices in the near future will be one of the biggest challenges. IoT devices are often constrained in terms of computational performance, area, and power, which demands lightweight security solutions. In this context, hardware-intrinsic security, particularly physically unclonable functions (PUFs), can provide lightweight identification and authentication for such devices. In this paper, random capacitor variations in a switched capacitor PUF circuit are used as a source of entropy to generate unique security keys. Furthermore, a mathematical model based on the ordinary least squares method is developed to describe the relationship between the random variations in the capacitors and the resulting output voltages. The model is used to filter out systematic variations in circuit components to improve the quality of the extracted secrets.
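The role of the ordinary least squares model, separating systematic from random variation, can be illustrated with a toy fit. The sensitivity coefficients, the noise-free linear data, and the dimensions below are invented for the example and are not taken from the actual circuit.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200
dC = rng.normal(0.0, 0.01, size=(n, 3))    # relative capacitor deviations
beta_true = np.array([1.5, -0.8, 0.3])     # assumed voltage sensitivities
v = 0.9 + dC @ beta_true                   # output voltage: offset + linear part

# OLS fit: estimate the systematic offset and the sensitivities jointly
X = np.column_stack([np.ones(n), dC])
beta_hat, *_ = np.linalg.lstsq(X, v, rcond=None)

# Subtract the systematic component; what remains is the device-unique part
residual = v - beta_hat[0]
```

Removing the fitted systematic offset leaves only the contribution of the random capacitor variations, which is the entropy the PUF keys are derived from.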
The number of use cases for autonomous vehicles is increasing day by day, especially in commercial applications. One important application of autonomous vehicles can be found in the parcel delivery sector. Here, autonomous cars can massively help to reduce delivery effort and time by actively supporting the courier. One key component is of course the autonomous vehicle itself. Nevertheless, besides the autonomous vehicle, a flexible and secure communication architecture is also a crucial component impacting the overall performance of such a system, since it is required to allow continuous interactions between the vehicle and the other components of the system. The communication system must provide a reliable and secure architecture that is still flexible enough to remain practical and to address several use cases. In this paper, a robust communication architecture for such autonomous fleet-based systems is proposed. The architecture provides reliable communication between the different system entities while keeping those communications secure. The architecture uses different technologies such as Bluetooth Low Energy (BLE), cellular networks, and Low Power Wide Area Networks (LPWAN) to achieve its goals.
Wireless sensor networks have found their way into a wide range of applications, among which environmental monitoring systems have attracted the increasing interest of researchers. The main challenges for these applications are scalability of the network size and energy efficiency of the spatially distributed nodes. Nodes are mostly battery-powered and spend most of their energy budget on the radio transceiver module. In normal operation modes, most energy is spent waiting for incoming frames. So-called Wake-On-Radio (WOR) technology helps to optimize the trade-offs between energy consumption, communication range, implementation complexity, and response time. We previously proposed a new protocol called SmartMAC that makes use of such WOR technology. Furthermore, it provides the possibility to balance the energy consumption between sender and receiver nodes depending on the use case. Based on several calculations and simulations, it was predicted that the SmartMAC protocol is significantly more efficient than other schemes proposed in recent publications, while preserving a certain backward compatibility with standard IEEE 802.15.4 transceivers. To verify this prediction, we implemented the SmartMAC protocol for a given hardware platform. This paper compares the real-time performance of the SmartMAC protocol against simulation results and shows that the measured values are very close to the estimated values. We therefore believe that the proposed MAC algorithm outperforms all other Wake-on-Radio MACs.
Among the major health hazards for people in large urban agglomerations is the increase in particulate matter (PM) concentration. Traditional systems for PM monitoring have a great number of drawbacks, the main ones being economical: they relate to installation costs and never-ending periodical maintenance expenses. Such systems are installed, but their number is limited; given the growth of population, cities, and industrial areas, there is a need for even more information on air quality, because PM changes non-linearly, has a wide range, and stems from different sources. In this paper, we propose an approach based on low-cost sensor nodes for real-time measurement and acquisition of information about the PM concentration. The adoption of this approach allows a detailed study of the intensities of pollution and its sources. The system is powered by a PV module. The power supply unit is designed using model-based design, a new approach to prototyping power-operated electronic devices with guaranteed performance.
This paper presents an approach for implementing an automated hit detection and score calculation system for a steel dartboard using a standard webcam. First, the rectilinear field separations of the dartboard are described mathematically by means of line slopes and are then stored. These slopes serve as the basis for the later score calculation. In addition, thrown darts have to be detected, and the pixel at which the dart hits the dartboard has to be determined. Once this information is known, a comparison is made using the line slopes, allowing the field number of the hit to be determined. The decision between a single, double, or triple hit is made by evaluating the defined colors on the dartboard. All these functions are then packaged in a Matlab GUI.
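The field lookup that the stored slopes implement can equally be sketched via hit angles (shown in Python rather than Matlab). The sector ordering is the standard dartboard layout; the ring radii in millimetres are typical steel-board values and an assumption here, and note that the paper itself decides the ring by color rather than by radius.

```python
import math

# Standard dartboard sector numbers, clockwise starting at the top (20)
SECTORS = [20, 1, 18, 4, 13, 6, 10, 15, 2, 17, 3, 19, 7, 16, 8, 11, 14, 9, 12, 5]

def sector(x, y):
    """Sector number for a hit at (x, y) relative to the board center, y up."""
    angle = math.degrees(math.atan2(x, y)) % 360.0   # clockwise from 12 o'clock
    return SECTORS[int((angle + 9.0) // 18.0) % 20]  # each wedge spans 18 degrees

def multiplier(r_mm):
    """Ring multiplier from the hit radius (typical radii assumed)."""
    if 99.0 <= r_mm <= 107.0:       # triple ring
        return 3
    if 162.0 <= r_mm <= 170.0:      # double ring
        return 2
    return 1

def score(x_mm, y_mm):
    return sector(x_mm, y_mm) * multiplier(math.hypot(x_mm, y_mm))
```

In the camera pipeline, (x, y) would be the dart-tip pixel after mapping image coordinates to board coordinates via the calibration described in the abstract.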
The paper describes a systematic approach for precise short-time cloud coverage prediction based on an optical system. We present a distinct pre-processing stage that uses a model-based clear-sky simulation to enhance the cloud segmentation in the images. The images are acquired by a sky imager system with a fish-eye lens to cover a maximum area. After a calibration step, the image is rectified to enable linear prediction of cloud movement. In a subsequent step, the clear-sky model is estimated on actual high dynamic range images and combined with a threshold-based approach to segment clouds from sky. In the final stage, a multi-hypothesis linear tracking framework estimates cloud movement, velocity, and the possible coverage of a given photovoltaic power station. We employ a Kalman filter framework that operates efficiently on the rectified images. The evaluation on real-world data suggests a high coverage prediction accuracy above 75%.
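The tracking stage can be illustrated with a minimal constant-velocity Kalman filter on a single cloud centroid. The state model, noise covariances, and pixel measurements below are illustrative assumptions; the paper's multi-hypothesis framework would run several such filters in parallel.

```python
import numpy as np

dt = 1.0                                       # one frame between sky images
F = np.array([[1, 0, dt, 0],                   # state: [px, py, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)      # only the centroid position is measured
Q = 0.01 * np.eye(4)                           # process noise (assumed)
R = 1.0 * np.eye(2)                            # measurement noise (assumed)

x = np.zeros(4)
P = 10.0 * np.eye(4)

def kf_step(x, P, z):
    x, P = F @ x, F @ P @ F.T + Q                    # predict
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)     # Kalman gain
    return x + K @ (z - H @ x), (np.eye(4) - K @ H) @ P

for t in range(1, 11):                          # centroid drifting (2, 1) px/frame
    x, P = kf_step(x, P, np.array([2.0 * t, 1.0 * t]))

forecast = np.linalg.matrix_power(F, 5) @ x     # predicted position 5 frames ahead
```

Extrapolating the state a few frames ahead, as in the last line, is what lets the system predict when a tracked cloud will cover the photovoltaic station.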
The fast and cost-effective manufacturing of tools for thermoforming is an essential requirement for shortening the development time of products. Thus, additive processes are used increasingly in tooling for the thermoforming of plastic sheets. However, a disadvantage of many additive methods is that they are highly cost-intensive, since complex systems based on laser technology and expensive metal powders are needed. Therefore, this paper examines how to work with more economical additive methods, e.g. Binder Jetting, to manufacture tools that provide sufficient strength for thermoforming. The use of comparatively low-priced inkjet technology for the layer construction and a polymer plaster as material can be expected to result in significant cost reductions. Based on a case study using a cowling (engine bonnet) for an Unmanned Aerial Vehicle (UAV), the development of a complex tool for thermoforming is demonstrated. The objective of this study is to produce a tool for a complex-shaped component in small numbers and high quality, in a short time and at reasonable cost. Within the tooling process, integrated vacuum channels are implemented in additive tooling without the need for additional post-processing (for example, drilling). In addition, special technical challenges, such as the demolding of undercuts or the parting of the tool, are explained. All process steps from tool design to the use of the additively manufactured tool are analyzed. Based on the manufacturing of a small series of cowlings for a UAV made of plastic sheets (ABS), it is shown that Binder Jetting offers sufficient mechanical and thermal strength for additive tooling. In addition, an economic evaluation of the tool manufacturing and a detailed consideration of the required manufacturing times for the different process steps are carried out. Finally, a comparison is made with conventional and alternative additive methods of tooling.
When designing and installing indoor positioning systems, several interrelated tasks have to be solved to find an optimum placement of the access points. For this purpose, a mathematical model for a predefined number of access points indoors is presented. Two iterative algorithms for the minimization of the localization error of a mobile object are described. Both algorithms use a local search technique and signal level probabilities. Previously registered signal strength maps were used in the computer simulation.
Narrowband IoT (NB-IoT), a radio access technology for the cellular Internet of Things (cIoT), is gaining traction due to attractive system parameters, new proposals in the 3rd Generation Partnership Project (3GPP) Release 14 for reduced power consumption, and ongoing world-wide deployment. As per 3GPP, the low-power and wide-area use cases in the 5G specification will be addressed by the early NB-IoT and Long-Term Evolution for Machines (LTE-M) based technologies. Since these cIoT networks will operate in a spatially distributed environment, there are various challenges to be addressed for tests and measurements of these networks. To meet these requirements, unified emulated and field testbeds for NB-IoT networks were developed and used for extensive performance measurements. This paper analyses the results of these measurements with regard to RF coverage, signal quality, latency, and protocol consistency.
The monitoring of industrial environments ensures that highly automated processes run without interruption. However, even if the industrial machines themselves are monitored, the communication lines are currently not continuously monitored in today's installations. They are usually checked only during maintenance intervals or in case of error. In addition, the cables or connected machines usually have to be removed from the system for the duration of the test. To overcome these drawbacks, we have developed and implemented cost-efficient and continuous signal monitoring for Ethernet-based industrial bus systems. Several methods have been developed to assess the quality of the cable. These methods can be classified as either passive or active. Active methods are not suitable if interruption of the communication is undesired. Passive methods, on the other hand, require oversampling, which calls for expensive hardware. In this paper, a novel passive method combined with undersampling, targeting cost-efficient hardware, is proposed.
Enabling ultra-low latency is one of the major drivers for the development of future cellular networks to support delay-sensitive applications including factory automation, autonomous vehicles, and the tactile internet. Narrowband Internet of Things (NB-IoT) is a 3rd Generation Partnership Project (3GPP) Release 13 standardized cellular network currently optimized for massive Machine Type Communication (mMTC). To reduce the latency in cellular networks, 3GPP has proposed latency reduction techniques that include Semi-Persistent Scheduling (SPS) and short Transmission Time Interval (sTTI). In this paper, we investigate the potential of adopting both techniques in NB-IoT networks and provide a comprehensive performance evaluation. We first analyze these techniques and then implement them in an open-source network simulator (NS3). Simulations are performed with a focus on the Cat-NB1 User Equipment (UE) category to evaluate the uplink user-plane latency. Our results show that SPS and sTTI have the potential to greatly reduce the latency in NB-IoT systems. We believe that both techniques can be integrated into NB-IoT systems to position NB-IoT as a preferred technology for low-data-rate Ultra-Reliable Low-Latency Communication (URLLC) applications before 5G has been fully rolled out.
Low latency communication is essential to enable mission-critical machine-type communication (MTC) use cases in cellular networks. Factory and process automation are major areas that require such low latency communication. In this paper, we investigate the potential of adopting the semi-persistent scheduling (SPS) latency reduction technique in narrowband LTE (NB-LTE) networks and provide a comprehensive performance evaluation. First, we investigate and implement SPS in an open-source network simulator (NS3). We perform simulations with a focus on LTE-M and Narrowband IoT (NB-IoT) systems and evaluate the impact of the SPS technique on the uplink latency of these narrowband systems in realistic industrial automation scenarios. The performance gain of adopting SPS is analyzed and the results are compared with legacy dynamic scheduling. Our results show that SPS has the potential to reduce the latency of cellular Internet of Things (cIoT) networks. We believe that SPS can be integrated into LTE-M and NB-IoT systems to support low-latency industrial applications.
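The latency advantage of SPS comes from removing the scheduling-request/grant handshake from the critical path: with pre-allocated periodic resources, the UE only waits for its next configured transmission occasion. The following sketch illustrates this with purely hypothetical timing values (none of them are 3GPP-specified numbers):

```python
# Illustrative comparison of uplink latency components under dynamic
# scheduling vs. semi-persistent scheduling (SPS).
# All millisecond values below are hypothetical placeholders.

def dynamic_scheduling_latency(sr_wait_ms, grant_ms, alignment_ms, tx_ms):
    """Dynamic scheduling: the UE first sends a scheduling request (SR),
    then waits for an uplink grant before it can transmit."""
    return sr_wait_ms + grant_ms + alignment_ms + tx_ms

def sps_latency(sps_period_ms, alignment_ms, tx_ms):
    """SPS: resources are pre-allocated periodically, so the UE waits
    (half a period on average) for the next configured occasion."""
    return sps_period_ms / 2 + alignment_ms + tx_ms

dyn = dynamic_scheduling_latency(sr_wait_ms=5, grant_ms=4, alignment_ms=1, tx_ms=2)
sps = sps_latency(sps_period_ms=4, alignment_ms=1, tx_ms=2)
# With these illustrative numbers, SPS shortens the critical path because
# the SR/grant exchange is eliminated entirely.
```
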
Internet of Things (IoT) applications have become progressively in-demand, most notably for embedded devices (ED). However, such devices differ in computational capabilities, memory usage, and energy resources when connecting to the Internet via Wireless Sensor Networks (WSNs). The WSNs that form the bulk of IoT deployments therefore require a new, well-defined set of technologies and protocols. To this end, IPv6 over Low-Power Wireless Personal Area Networks (6LoWPAN) was designed by the Internet Engineering Task Force (IETF) as a standard network for ED. Nevertheless, the communication between ED and 6LoWPAN requires appropriate routing protocols to achieve efficient Quality of Service (QoS). Among the protocols of the 6LoWPAN network, RPL is considered the best; however, its Energy Consumption (EC) and Routing Overhead (RO) are considerably high when it is deployed in a large network. Therefore, this paper proposes HRPL to enhance the RPL protocol by reducing EC and RO. We present the performance of RPL and HRPL in terms of EC, Control Traffic Overhead (CTO) and latency, based on simulations of a 6LoWPAN network in a fixed environment using the COOJA simulator. The results show that the HRPL protocol achieves better performance in all tested topologies in terms of EC and CTO. However, the latency of HRPL improves over RPL only in the chain topology. We find that further research is required to study the relationship between latency and packet transmission load in order to optimize EC.
Formal Description of Use Cases for Industry 4.0 Maintenance Processes Using Blockchain Technology
(2019)
Maintenance processes in Industry 4.0 applications try to achieve a high degree of quality to reduce the downtime of machinery. The monitoring of executed maintenance activities is challenging, as multiple stakeholders are involved in complex production setups. Full transparency of the different activities and of the state of the machine can therefore only be achieved if these stakeholders trust each other. Distributed ledger technologies, like Blockchain, are thus promising candidates for supporting such applications. The goal of this paper is a formal description of business and technical interactions between non-trustful stakeholders in the context of Industry 4.0 maintenance processes using distributed ledger technologies. It also covers the integration of smart contracts for the automated triggering of activities.
This paper presents a model predictive control (MPC) based approach for the peak-shaving application of a battery in a photovoltaic (PV) battery system connected to a rural low-voltage grid. The goals of the MPC are to shave the peaks in the PV feed-in and the grid power consumption and, at the same time, to maximize the use of the battery. The prosumer benefits from the maximum use of self-produced electricity; the grid benefits from the reduced peaks in the PV feed-in and the grid power consumption. This would allow an increase in the PV hosting and load hosting capacity of the grid.
The paper presents the mathematical formulation of the optimal control problem along with a cost-benefit analysis. The MPC implementation scheme in the laboratory and the experimental results are also presented. The results show that the MPC is able to track the deviation in the weather forecast and operate the battery by solving the optimal control problem to handle this deviation.
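The peak-shaving idea behind the controller can be illustrated with a greatly simplified, rule-based battery dispatch. The paper itself solves a proper MPC optimal control problem over a forecast horizon; the clipping rule, all limits and the load profile below are hypothetical stand-ins:

```python
# Greatly simplified, rule-based sketch of battery peak shaving.
# The real system uses MPC with forecasts; here the battery simply
# clips grid exchange to +/- limit_kw within its power and SoC bounds.

def peak_shave(net_load_kw, cap_kwh, p_max_kw, soc0_kwh, limit_kw, dt_h=1.0):
    """net_load_kw: consumption minus PV production per step
    (negative values = feed-in). Returns the resulting grid profile."""
    soc = soc0_kwh
    grid = []
    for p in net_load_kw:
        # battery power needed to keep |grid| <= limit (positive = discharge)
        want = max(p - limit_kw, min(p + limit_kw, 0.0))
        # respect the power rating ...
        b = max(-p_max_kw, min(p_max_kw, want))
        # ... and the state-of-charge bounds (can't over-charge/-discharge)
        b = max(-(cap_kwh - soc) / dt_h, min(soc / dt_h, b))
        soc -= b * dt_h
        grid.append(p - b)
    return grid

# Hypothetical daily profile: midday PV feed-in peak, evening load peak
profile = [1.0, -6.0, -6.0, 5.0, 4.0]
g = peak_shave(profile, cap_kwh=10, p_max_kw=3, soc0_kwh=2, limit_kw=3)
```

Note how the battery charges during the feed-in peak and discharges during the load peak, which is exactly the dual benefit (to grid and prosumer) described in the abstract.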
Printed electronics (PE) is a fast-growing technology with promising applications in wearables, smart sensors and smart cards, since it provides mechanical flexibility as well as low-cost, on-demand and customizable fabrication. To secure the operation of these applications, True Random Number Generators (TRNGs) are required to generate unpredictable bits for cryptographic functions and padding. However, since the additive fabrication process of PE circuits results in high intrinsic variation due to the random dispersion of the printed inks on the substrate, constructing a printed TRNG is challenging. In this paper, we exploit the additive, customizable fabrication feature of inkjet printing to design a TRNG based on electrolyte-gated field effect transistors (EGFETs). The proposed memory-based TRNG circuit can operate at low voltages (≤ 1 V) and is hence suitable for low-power applications. We also propose a flow which tunes the printed resistors of the TRNG circuit to mitigate the overall process variation, so that the generated bits are mostly based on the random noise in the circuit, providing truly random behaviour. The results show that the overall process variation of the TRNGs is mitigated by a factor of 110, and the simulated TRNGs pass the National Institute of Standards and Technology Statistical Test Suite.
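As an illustration of the kind of statistical check such a TRNG must pass, the sketch below implements the frequency (monobit) test from the NIST Statistical Test Suite mentioned in the abstract; Python's PRNG merely stands in for the printed TRNG's bitstream:

```python
import math
import random

# NIST STS "frequency (monobit)" test: a large imbalance between ones
# and zeros yields a small p-value; the stream passes if p >= 0.01.
# The bit source here is Python's PRNG, a stand-in for TRNG output.

def monobit_p_value(bits):
    """p-value of the monobit test for a sequence of 0/1 bits."""
    s = sum(1 if b else -1 for b in bits)       # map bits to +/-1 and sum
    s_obs = abs(s) / math.sqrt(len(bits))       # normalized test statistic
    return math.erfc(s_obs / math.sqrt(2))      # two-sided tail probability

random.seed(0)
bits = [random.getrandbits(1) for _ in range(10000)]
p = monobit_p_value(bits)                       # pass if p >= 0.01
```

A heavily biased stream (e.g. mostly zeros, as an untuned high-variation circuit might produce) drives the p-value toward zero and fails the test, which is why the resistor-tuning flow matters.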
Printed Electronics is perceived to have a major impact in the fields of smart sensors, the Internet of Things and wearables. Especially low-power printed technologies such as electrolyte-gated field effect transistors (EGFETs) using solution-processed inorganic materials and inkjet printing are very promising in such application domains. In this paper, we discuss a modeling approach to describe the variations of printed devices. Incorporating these models and design flows into our previously developed printed design system allows for robust circuit design. Additionally, we propose a reliability-aware routing solution for printed electronics technology based on the technology constraints in printing crossovers. The proposed methodology was validated on multiple benchmark circuits and can be easily integrated with the design automation tool-set.
Smart Home and Smart Building applications are a growing market. An increasing challenge is to design energy-efficient Smart Home applications in order to achieve sustainable and green homes. Using the example of the development of an indoor smart gardening system with wireless monitoring and automated watering, this paper discusses in particular the design of energy-autonomous sensors and actuators for home automation. The most important part of the presented smart gardening system is a 3D-printed smart flower pot for single plants. The smart flower pot integrates a water reservoir for automated plant irrigation and electronics for monitoring important plant parameters and the water level of the reservoir. Energy harvesting with solar cells enables energy-autonomous operation of the flower pot. A low-power wireless interface, also integrated in the flower pot, and an external gateway based on a Raspberry Pi 3 enable wireless networking of multiple such flower pots. The gateway is used for evaluating the plant parameters and as a user interface. Particular attention is given to the architecture of the energy-autonomous wireless flower pot, because fully energy-autonomous sensors and actuators for home automation cannot be implemented without special concepts for the energy supply and the overall electronics.
Radio frequency identification (RFID) antennas are popular for high frequency (HF) RFID, energy transfer and near field communication (NFC) applications. Particularly for wireless measurement systems, RFID/NFC technology is a good option for implementing a wireless communication interface. In this context, the design of the corresponding reader and transmitter antennas plays a major role in achieving suitable transmission quality. This work proves the feasibility of the rapid prototyping of an RFID/NFC antenna, which is used for wireless communication and energy harvesting at the required frequency of 13.56 MHz. A novel and low-cost direct ink writing (DIW) technology utilizing highly viscous silver nanoparticle ink is used for this process. This paper describes the development and analysis of low-cost printed flexible RFID/NFC antennas on cost-effective substrates for a microelectronic vital parameter measurement system. Furthermore, we compare the measured technical parameters with existing copper-based counterparts on an FR4 substrate.
The high peak power in comparison to the average transmit power is one of the major long-standing problems in multicarrier modulation and is known as the PAPR (peak-to-average power ratio) problem. Many PAPR reduction methods have been devised and their comparison is usually based on the complementary cumulative distribution function (CCDF) of the PAPR. While this comparison is straightforward and easy to compute, its relationship with system performance metrics like the (uncoded) BER or the word error rate (WER) for coded systems is considerably more involved. We evaluate the impact of the PAPR on performance metrics like uncoded BER, EVM (error vector magnitude), mutual information and the WER for soft decoding. In this context, we find that system performance is not necessarily degraded by an increasing PAPR. We show that a high number of subcarriers, despite the corresponding high PAPR, is actually not a problem for the system performance and provide a simple explanation for this seemingly counter-intuitive fact.
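For illustration, the PAPR and its CCDF can be estimated empirically from randomly drawn OFDM symbols; the subcarrier count, modulation and threshold below are illustrative choices, not values taken from the paper:

```python
import numpy as np

# Empirical PAPR of random QPSK OFDM symbols and a one-point CCDF
# estimate. N (subcarriers), the symbol count and the 8 dB threshold
# are illustrative.

rng = np.random.default_rng(0)

def papr_db(symbols_freq):
    """PAPR (in dB) of one OFDM symbol given its frequency-domain symbols."""
    x = np.fft.ifft(symbols_freq)          # time-domain OFDM symbol
    p = np.abs(x) ** 2                     # instantaneous power
    return 10 * np.log10(p.max() / p.mean())

N = 256                                    # number of subcarriers
qpsk = (rng.choice([-1, 1], (10000, N))
        + 1j * rng.choice([-1, 1], (10000, N))) / np.sqrt(2)
papr = np.array([papr_db(s) for s in qpsk])

# Empirical CCDF at one threshold: P(PAPR > 8 dB)
ccdf_8db = np.mean(papr > 8.0)
```

Sweeping the threshold over a range of dB values yields the full CCDF curve on which PAPR-reduction methods are conventionally compared.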
Most machine learning methods require careful selection of hyper-parameters in order to train a high-performing model with good generalization abilities. Hence, several automatic selection algorithms have been introduced to overcome the tedious manual (trial and error) tuning of these parameters. Due to its very high sample efficiency, Bayesian Optimization over a Gaussian Process model of the parameter space has become the method of choice. Unfortunately, this approach suffers from cubic computational complexity due to the underlying Cholesky factorization, which makes it very hard to scale beyond a small number of sampling steps. In this paper, we present a novel, highly accurate approximation of the underlying Gaussian Process. Reducing its computational complexity from cubic to quadratic allows efficient strong scaling of Bayesian Optimization while outperforming the previous approach in optimization accuracy. First experiments show speedups by a factor of 162 on a single node and a further speedup by a factor of 5 in a parallel environment.
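The cubic bottleneck referred to above is the Cholesky factorization of the n × n kernel matrix in exact Gaussian Process inference, as sketched below (kernel choice, noise level and data are illustrative, not the paper's approximation):

```python
import numpy as np

# Exact GP posterior mean/variance via Cholesky factorization of the
# n x n kernel matrix -- the O(n^3) step that limits Bayesian
# Optimization to few sampling steps. RBF kernel and data are toys.

def rbf(a, b, ls=1.0):
    """Squared-exponential kernel between 1-D point sets a and b."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(x, y, xq, noise=1e-6):
    K = rbf(x, x) + noise * np.eye(len(x))
    L = np.linalg.cholesky(K)                  # O(n^3) factorization
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    ks = rbf(x, xq)
    mean = ks.T @ alpha                        # posterior mean at queries
    v = np.linalg.solve(L, ks)
    var = rbf(xq, xq).diagonal() - np.sum(v * v, axis=0)
    return mean, var

x = np.array([0.0, 1.0, 2.0])                  # observed hyper-parameters
y = np.sin(x)                                  # observed objective values
mean, var = gp_posterior(x, y, np.array([1.0]))
```

Each new Bayesian Optimization step grows n by one and refactorizes K, which is why replacing this step with a quadratic-cost approximation directly determines how far the optimization can scale.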
Current training methods for deep neural networks boil down to very high-dimensional and non-convex optimization problems which are usually solved by a wide range of stochastic gradient descent methods. While these approaches tend to work in practice, there are still many gaps in the theoretical understanding of key aspects like convergence and generalization guarantees, which are induced by the properties of the optimization surface (loss landscape). In order to gain deeper insights, a number of recent publications have proposed methods to visualize and analyze these optimization surfaces. However, the computational cost of these methods is very high, making it hardly possible to use them on larger networks. In this paper, we present the GradVis Toolbox, an open-source library for efficient and scalable visualization and analysis of deep neural network loss landscapes in TensorFlow and PyTorch. Introducing more efficient mathematical formulations and a novel parallelization scheme, GradVis allows plotting 2D and 3D projections of optimization surfaces and trajectories, as well as high-resolution second-order gradient information for large networks.
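A common way to obtain such 2D loss-surface projections (not necessarily GradVis's exact formulation) is to evaluate the loss on a plane spanned by two random directions around the trained parameters; the sketch below uses a toy quadratic loss in place of a real network loss:

```python
import numpy as np

# 2D loss-landscape projection: sample the loss on a plane through the
# trained parameter vector, spanned by two normalized random directions.
# A quadratic toy loss stands in for an actual network's loss.

rng = np.random.default_rng(1)

def loss(theta):
    return float(np.sum(theta ** 2))        # toy loss, minimum at 0

theta0 = np.zeros(50)                       # "trained" parameters
d1 = rng.standard_normal(50)
d2 = rng.standard_normal(50)
d1 /= np.linalg.norm(d1)                    # normalize both directions
d2 /= np.linalg.norm(d2)

alphas = np.linspace(-1, 1, 21)
surface = np.array([[loss(theta0 + a * d1 + b * d2) for b in alphas]
                    for a in alphas])       # 21x21 grid, ready to plot
```

The grid evaluation is embarrassingly parallel (one loss evaluation per grid point), which is precisely what makes parallelization schemes attractive for this kind of visualization on large networks.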
Recent deep learning based approaches have shown remarkable success on object segmentation tasks. However, there is still room for further improvement. Inspired by generative adversarial networks, we present a generic end-to-end adversarial approach, which can be combined with a wide range of existing semantic segmentation networks to improve their segmentation performance. The key element of our method is to replace the commonly used binary adversarial loss with a high-resolution pixel-wise loss. In addition, we train our generator in a stochastic weight averaging fashion, which further enhances the predicted output label maps, leading to state-of-the-art results. We show that this combination of pixel-wise adversarial training and weight averaging leads to significant and consistent gains in segmentation performance compared to the baseline models.
A Novel Approach of High Dynamic Current Control of Interior Permanent Magnet Synchronous Machines
(2019)
Harmonic effects in permanent magnet synchronous machines with high power density are hardly manageable for traditional PI current controllers due to their limited bandwidth. As a consequence, current and, ultimately, torque ripples appear. In this paper, a new deadbeat current controller architecture is presented which is capable of countering the effects of these harmonics. This new control algorithm, here named “Hybrid-Deadbeat-Controller”, combines the stability and the low steady-state errors offered by common PI regulators with the high dynamics offered by deadbeat control. A novel control algorithm is thus proposed, capable of either compensating the current harmonics in order to obtain smoother currents, or of tracking a varying reference value to achieve a smoother torque. The information needed to calculate the optimal reference currents is based on an online parameter estimation feeding an optimization algorithm to achieve an optimal torque output, and will be investigated in future research. In order to ensure the stability of the controller over the whole operating range, even under the influence of effects changing the system's parameters, this work also focuses on the robustness of the hybrid deadbeat controller.
Background: High-frequency ablation is an established method for the treatment of tachycardic arrhythmias. Ablation with high-frequency current leads to the targeted heat destruction of myocardial tissue at specific sites and thus prevents the pathological propagation of excitation through these structures.
Purpose: The aim of this study was to simulate heat propagation during RF ablation with modeled electrodes in different sizes and materials. The simulation was performed on atrioventricular node re-entry tachycardia (AVNRT), atrioventricular re-entry tachycardia (AVRT) and atrial flutter (AFL).
Methods: Using the modeling and simulation software CST, ablation catheters with 4 mm and 8 mm tip electrodes were modeled in both gold and platinum. The designed catheters correspond to the manufacturers' specifications of Medtronic, Biotronik and Osypka. The catheters were integrated into the Offenburg heart rhythm model to simulate and compare the heat propagation during an ablation application, which also takes into account the blood flow in the four heart chambers. A power of 5 W - 40 W was simulated for the 4 mm electrodes and a power of 50 W - 80 W for the 8 mm electrodes.
Results: During the simulated HF ablation application, the temperature at the ablation electrode was measured at different powers. It is 40.67°C at 5 W, 44.34°C at 10 W, 51.76°C at 20 W, 59.0°C at 30 W, and 66.33°C at 40 W. The measured temperature during the 40 W application is 39.5°C at 0.5 mm depth in the myocardium and 37.5°C at 2 mm depth.
In the simulation, the 8 mm platinum electrode reached an ablation temperature of 72.85°C at its tip at an applied power of 60 W; the temperature was 39.5°C at a depth of 5 mm and 37.5°C at a depth of 2 mm. In contrast, the 8 mm gold electrode reached a temperature of 64.66°C at the same power. This is due to the thermal properties of gold, which has a better thermal conductivity than platinum.
Conclusions: CST offers the possibility to carry out static and dynamic simulations of a heart model and the ablation electrodes integrated in it during an HF ablation. By varying electrode sizes and materials, therapy methods for the treatment of AVNRT, AVRT and AFL can be optimized.
Background: Pulmonary vein isolation (PVI) using cryoballoon catheters is a recognized method for the treatment of atrial fibrillation (AF). This method offers a shorter treatment duration in contrast to classical therapy with high-frequency (HF) ablation.
Purpose: The aim of this study was to integrate different cryoballoon catheters and an HF catheter into a heart rhythm model and to compare them by means of static and dynamic electromagnetic and thermal simulations of their use in AF.
Methods: The cryoballoon catheters from Medtronic and the HF ablation catheter from Osypka were modelled virtually with the aid of manufacturer specifications and the CST (Computer Simulation Technology, Darmstadt) simulation program. The cryoballoon catheter was located in the lower left pulmonary vein of the virtual heart rhythm model for the realization of pulmonary vein isolation (PVI) by cryoenergy. The simulated temperature at the balloon surface was -50°C during the simulation.
Results: During a simulated 20 second application of a cryoballoon catheter at -50°C, a temperature of -24°C was measured at a depth of 0.5 mm in the myocardium. At a depth of 1 mm the temperature was -3°C, at 2 mm depth 18°C and at 3 mm depth 29°C. Under the 15 second application of an RF catheter with an 8 mm electrode and a power of 5 W at 420 kHz, the temperature at the tip of the electrode was 110°C. At a depth of 0.5 mm in the myocardium, the temperature was 75°C, at a depth of 1 mm 58°C, at 2 mm depth 45°C and at 3 mm depth 38°C.
Conclusions: The simulation of temperature profiles during the virtual application of several catheter models in the heart rhythm model allows the static and dynamic simulation of PVI by cryoballoon ablation and RF ablation. The three-dimensional simulation can be used to improve ablation applications by creating a model in personalized cardiac rhythm therapy from MRI or CT data of a heart and finding a favourable position for ablation of AF.
Background: Transesophageal left atrial (LA) pacing and transesophageal LA ECG recording are semi-invasive techniques for the diagnosis and therapy of supraventricular rhythm disturbances. Cardiac resynchronization therapy (CRT) with right atrial (RA) sensed biventricular pacing is an established therapy for heart failure patients with reduced left ventricular (LV) ejection fraction, sinus rhythm and interventricular electrical desynchronization.
Purpose: The aim of the study was to evaluate electromagnetic and voltage pacing fields of the combination of RA pacing, LA pacing and biventricular pacing in patients with long interatrial and interventricular electrical desynchronization.
Methods: The modelling and electromagnetic simulations of transesophageal LA pacing in combination with RA pacing and biventricular pacing were set up and analyzed with the CST (Computer Simulation Technology) software. Different electrodes were modelled in order to simulate different types of bipolar pacing in the 3D-CAD Offenburg heart rhythm model: the bipolar Solid S (Biotronik) electrode was modelled for RA pacing and right ventricular (RV) pacing, Attain 4194 (Medtronic) for LV pacing and the TO8 (Osypka) multipolar esophageal electrode with hemispheric electrodes for LA pacing.
Results: The electromagnetic pacing simulations were performed with pacemaker amplitudes of 3 V for RA pacing, 1.5 V for RV pacing, 50 V for LA pacing and 3 V for LV pacing, with a pacing impulse duration of 0.5 ms for RA, RV and LV pacing and 10 ms for LA pacing. The atrioventricular pacing delay after RA pacing was 140 ms. The different pacing modes AAI, VVI, DDD, DDD0V and DDD0D were evaluated for the analysis of the electric pacing field propagation of pacemaker, CRT and LA pacing. The pacing results were compared at minimum (LOW) and maximum (HIGH) parameter settings. While the LOW setting produced fewer tetrahedra and less accurate results, the HIGH setting produced more tetrahedra and therefore more accurate results.
Conclusions: The simulation of the combination of transesophageal LA pacing with RA sensed biventricular pacing is possible with the Offenburg heart rhythm model. The new temporary 4-chamber pacing method may be an additional useful method in CRT non-responders with a long interatrial electrical delay.