This paper provides a comprehensive overview of approaches to the determination of isocontours and isosurfaces from given data sets. Different algorithms are reported in the literature for this purpose, originating from various application areas such as computer graphics or medical imaging. In all these applications, the challenge is to extract surfaces with a specific isovalue, so-called isosurfaces, from a given characteristic. These different application areas have given rise to solution approaches that each solve the problem of isocontouring in their own way. Based on the literature, four dominant methods can be identified: marching cubes algorithms, tessellation-based algorithms, surface nets algorithms, and ray tracing algorithms. With regard to their application, the methods are mainly used in the fields of medical imaging, computer graphics, and the visualization of simulation results. In our work, we provide a broad yet compact overview of the common methods currently used for isocontouring with respect to certain criteria and their individual limitations. In this context, we discuss the individual methods and identify possible future research directions in the field of isocontouring.
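All four method families named above share one core step: finding where the isovalue crosses a cell edge. A minimal sketch of that step, assuming the scalar field varies linearly along each edge (the function name and signature are illustrative, not taken from any of the surveyed algorithms):

```python
def edge_crossing(p0, p1, v0, v1, iso):
    """Locate the isovalue crossing on the edge between corners p0 and p1.

    p0, p1: corner coordinates (tuples); v0, v1: scalar values at the corners.
    Returns the interpolated crossing point, or None if the edge is not crossed.
    """
    if (v0 - iso) * (v1 - iso) > 0:  # both corners on the same side of the isovalue
        return None
    if v0 == v1:  # degenerate: the whole edge lies on the isovalue
        return p0
    t = (iso - v0) / (v1 - v0)  # linear interpolation parameter in [0, 1]
    return tuple(a + t * (b - a) for a, b in zip(p0, p1))
```

Marching-cubes-style methods apply this per cell edge and then connect the resulting points into contour segments or surface triangles according to a case table.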
"Ad fontes!"
Francesco Petrarca (1301–1374)
In the beginning, there was an idea: the reconstruction of the first "Iron Hand" of the Franconian imperial knight Götz von Berlichingen (1480–1562). We found that with this historical prosthesis, simple actions for daily use, such as holding a wine glass, a mobile phone, a bicycle handlebar grip, a horse’s reins, or some grapes, are possible without effort. Controlling this passive artificial hand, however, is based on the help of a healthy second hand.
Introduction: Subjects with mild to moderate hearing loss today often receive hearing aids (HA) with open-fitting (OF). In OF, direct sound reaches the eardrums with minimal damping. Due to the required processing delay in digital HA, the amplified HA sound follows some milliseconds later. This process occurs in both ears symmetrically in bilateral HA provision and is likely to have no or minor detrimental effect on binaural hearing. However, the delayed and amplified sound are only present in one ear in cases of unilateral hearing loss provided with one HA. This processing alters interaural timing differences in the resulting ear signals.
Methods: In the present study, an experiment with normal-hearing subjects to investigate speech intelligibility in noise with direct and delayed sound was performed to mimic unilateral and bilateral HA provision with OF.
Results: The outcomes reveal that these delays affect speech reception thresholds (SRT) in the unilateral OF simulation when presenting speech and noise from different spatial directions. A significant decrease in the median SRT from –18.1 to –14.7 dB SNR is observed when typical HA processing delays are applied. On the other hand, SRT was independent of the delay between direct and delayed sound in the bilateral OF simulation.
Discussion: The significant effect emphasizes the development of rapid processing algorithms for unilateral HA provision.
Analysing and predicting the advance rate of a tunnel boring machine (TBM) in hard rock is integral to tunnelling project planning and execution. It has been applied in the industry for several decades with varying success. Most prediction models are based on or designed for large-diameter TBMs, and much research has been conducted on related tunnelling projects. However, only a few models incorporate information from projects with an outer diameter smaller than 5 m, and no penetration prediction model for pipe jacking machines exists to date. In contrast to large TBMs, small-diameter TBMs and their projects have received little attention in research. In general, they are characterised by distinctive features, including insufficient geotechnical information, sometimes rather short drive lengths, special machine designs, and partially concurring lining methods like pipe jacking and segment lining. A database which covers most of the parameters mentioned above has been compiled to investigate the performance of small-diameter TBMs in hard rock. In order to provide sufficient geological and technical variance, this database contains 37 projects with 70 geotechnically homogeneous areas. Besides the technical parameters, important geotechnical data such as lithological information, unconfined compressive strength, tensile strength, and point load index are included and evaluated. The analysis shows that segment lining TBMs have considerably higher penetration rates in similar geological and technical settings, mostly due to their design parameters. Different methodologies for predicting TBM penetration, including state-of-the-art models from the literature as well as newly derived regression and machine learning models, are discussed and deployed for backward modelling of the projects contained in the database.
New ranges of application for small-diameter tunnelling in several industry-standard penetration models are presented, and new approaches for the penetration prediction of pipe jacking machines in hard rock are proposed.
Precisely synchronized communication is a major precondition for many industrial applications. At the same time, hardware cost and power consumption need to be kept as low as possible in the Internet of Things (IoT) paradigm. While many wired solutions on the market achieve these requirements, wireless alternatives are an interesting field for research and development. This article presents a novel IEEE 802.11n/ac wireless solution, exhibiting several advantages over state-of-the-art competitors. It is based on a market-available wireless System on a Chip with modified low-level communication firmware combined with a low-cost field-programmable gate array. By achieving submicrosecond synchronization accuracy, our solution outperforms the precision of low-cost products by almost four orders of magnitude. Based on inexpensive hardware, the presented wireless module is up to 20 times cheaper than software-defined-radio solutions with comparable timing accuracy. Moreover, it consumes three to five times less power. To back up our claims, we report data that we collected with a high sampling rate (2000 samples per second) during an extended measurement campaign of more than 120 h, which makes our experimental results far more representative than others reported in the literature. Additional support is provided by the size of the testbed we used during the experiments, composed of a hybrid network with nine nodes divided into two independent wireless segments connected by a wired backbone. In conclusion, we believe that our novel Industrial IoT module architecture will have a significant impact on the future technological development of high-precision time-synchronized communication for the cost-sensitive industrial IoT market.
Blockchain interoperability: the state of heterogeneous blockchain-to-blockchain communication
(2023)
Blockchain technology has been increasingly adopted over the past few years since the introduction of Bitcoin, with several blockchain architectures and solutions being proposed. Most proposed solutions have been developed in isolation, without a standard protocol or cryptographic structure to work with. This has led to the problem of interoperability, where solutions running on different blockchain platforms are unable to communicate, limiting the scope of use. With blockchains being adopted in a variety of fields such as the Internet of Things, it is expected that the problem of interoperability, if not addressed quickly, will stifle technology advancement. This paper presents the current state of interoperability solutions proposed for heterogeneous blockchain systems. A look is taken at interoperability solutions not only for cryptocurrencies, but also for general data-based use cases. Current open issues in heterogeneous blockchain interoperability are presented. Additionally, some possible research directions are presented to enhance and extend the existing blockchain interoperability solutions. It was discovered that, though there are a number of proposed solutions in the literature, few have seen real-world implementation. The lack of blockchain-specific standards has slowed the progress of interoperability. It was also realized that most of the proposed solutions are developed targeting cryptocurrency-based applications.
This paper presents an overview of EREMI, a two-year project funded under ERASMUS+ KA203, and its results. The project team’s main objective was to develop and validate an advanced interdisciplinary higher education curriculum, which includes lifelong learning components. The curriculum focuses on enhancing resource efficiency in the manufacturing industry and optimising poorly or non-digitised industrial physical infrastructure systems. The paper also discusses the results of the project, highlighting the successful achievement of its goals. EREMI effectively supports the transition to Industry 5.0 by preparing a common European pool of future experts. Through comprehensive research and collaboration, the project team has designed a curriculum that equips students with the necessary skills and knowledge to thrive in the evolving manufacturing landscape. Furthermore, the paper explores the significance of EREMI’s contributions to the field, emphasising the importance of resource efficiency and system optimisation in industrial settings. By addressing the challenges posed by under-digitised infrastructure, the project aims to drive sustainable and innovative practices in manufacturing. All five project partner organisations have been actively engaged in offering relevant educational content and framework for decentralised sustainable economic development in regional and national contexts through capacity building at a local level. A crucial element of the added value is the new channel for obtaining feedback from students. The survey results, which are outlined in the paper, offer valuable insights gathered from students, contributing to the continuous improvement of the project.
Physical unclonable functions (PUFs) are increasingly generating attention in the field of hardware-based security for the Internet of Things (IoT). A PUF, as its name implies, is a physical element with a special and unique inherent characteristic and can act as the security anchor for authentication and cryptographic applications. Keeping in mind that PUF outputs are prone to change in the presence of noise and environmental variations, it is critical to derive reliable keys from the PUF while using the maximum entropy at the same time. In this work, the PUF output positioning (POP) method is proposed, a novel method for grouping the PUF outputs in order to maximize the extracted entropy. To achieve this, an offset is introduced as helper data, which is used to relax the constraints considered for the grouping of PUF outputs, deriving more entropy while reducing the number of secret key error bits. To implement the method, the key enrollment and key generation algorithms are presented. Based on a theoretical analysis of the achieved entropy, it is proven that POP can maximize the achieved entropy while respecting the constraints imposed to guarantee the reliability of the secret key. Moreover, a detailed security analysis is presented, which shows the resilience of the method against cyber-security attacks. The findings of this work are evaluated by applying the method to a hybrid printed PUF, where it is practically shown that the proposed method outperforms other existing group-based PUF key generation methods.
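The enrollment/generation pattern with public helper data can be illustrated with a simple repetition-code sketch. This is not the POP method itself (which groups outputs to maximize entropy); function names, the repetition code, and the error model are illustrative assumptions only:

```python
def enroll(response, key_bits, rep=3):
    """Enrollment: encode each key bit as `rep` copies and XOR the codeword
    with the reference PUF response, producing public helper data."""
    code = [b for b in key_bits for _ in range(rep)]
    return [c ^ r for c, r in zip(code, response)]

def reconstruct(noisy_response, helper, rep=3):
    """Key generation: XOR the helper data with a (noisy) re-measured response
    and majority-vote each group of `rep` bits to correct bit errors."""
    code = [h ^ r for h, r in zip(helper, noisy_response)]
    return [int(sum(code[i:i + rep]) > rep // 2)
            for i in range(0, len(code), rep)]
```

Any scheme of this family trades entropy for reliability; the point of grouping methods is to shift that trade-off so that more of the PUF's entropy survives into the key.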
Design and Implementation of a Camera-Based Tracking System for MAV Using Deep Learning Algorithms
(2023)
In recent years, the advancement of micro-aerial vehicles has been rapid, leading to their widespread utilization across various domains due to their adaptability and efficiency. This research paper focuses on the development of a camera-based tracking system specifically designed for low-cost drones. The primary objective of this study is to build a system capable of detecting objects and locating them on a map in real time. Detection and positioning are achieved solely through the drone’s camera and sensors. To accomplish this goal, several deep learning algorithms are assessed and adopted based on their suitability for the system. Object detection is based upon a single-shot detector architecture chosen for maximum computation speed, and tracking is based upon deep neural-network-based features combined with an efficient sorting strategy. Subsequently, the developed system is evaluated using diverse metrics to determine its detection and tracking performance. To further validate the approach, the system is employed in the real world to show its possible deployment. For this, two distinct scenarios were chosen to adjust the algorithms and system setup: a search and rescue scenario with user interaction and precise geolocalization of missing objects, and a livestock control scenario, showing the capability of surveying individual members and keeping track of their number and area. The results demonstrate that the system is capable of operating in real time, and the evaluation verifies that the implemented system enables precise and reliable determination of detected object positions. The ablation studies prove that object identification through small variations in phenotypes is feasible with our approach.
An in-depth study of U-net for seismic data conditioning: Multiple removal by moveout discrimination
(2024)
Seismic processing often involves suppressing multiples that are an inherent component of collected seismic data. Elaborate multiple prediction and subtraction schemes such as surface-related multiple removal have become standard in industry workflows. In cases of limited spatial sampling, low signal-to-noise ratio, or conservative subtraction of the predicted multiples, the processed data frequently suffer from residual multiples. To tackle these artifacts in the postmigration domain, practitioners often rely on Radon-transform-based algorithms. However, such traditional approaches are both time-consuming and parameter-dependent, making them relatively complex. In this work, we present a deep learning-based alternative that provides competitive results while reducing the complexity of its usage and, hence, simplifying its applicability. Our proposed model demonstrates excellent performance when applied to complex field data, despite being exclusively trained on synthetic data. Furthermore, extensive experiments show that our method can preserve the inherent characteristics of the data, avoiding undesired oversmoothed results, while removing the multiples from seismic offset or angle gathers. Finally, we conduct an in-depth analysis of the model, where we pinpoint the effects of the main hyperparameters on real-data inference, and we probabilistically assess its performance from a Bayesian perspective. In this study, we put particular emphasis on helping the user reveal the inner workings of the neural network and attempt to unbox the model.
Seismic data processing involves techniques to deal with undesired effects that occur during acquisition and pre-processing. These effects mainly comprise coherent artefacts such as multiples, non-coherent signals such as electrical noise, and loss of signal information at the receivers that leads to incomplete traces. In the past years, there has been a remarkable increase in machine-learning-based solutions that address the aforementioned issues. In particular, deep-learning practitioners have usually relied on heavily fine-tuned, customized discriminative algorithms. Although these methods can provide solid results, they seem to lack semantic understanding of the provided data. Motivated by this limitation, in this work we employ a generative solution, as it can explicitly model complex data distributions and hence lead to a better decision-making process. In particular, we introduce diffusion models for three seismic applications: demultiple, denoising, and interpolation. To that end, we run experiments on synthetic and real data, and we compare the diffusion performance with standardized algorithms. We believe that our pioneering study not only demonstrates the capability of diffusion models, but also opens the door to future research integrating generative models into seismic workflows.
Neural networks tend to overfit the training distribution and perform poorly on out-of-distribution data. A conceptually simple solution lies in adversarial training, which introduces worst-case perturbations into the training data and thus improves model generalization to some extent. However, it is only one ingredient towards generally more robust models and requires knowledge about the potential attacks or inference-time data corruptions during model training. This paper focuses on the native robustness of models that can learn robust behavior directly from conventional training data without out-of-distribution examples. To this end, we study the frequencies in learned convolution filters. Clean-trained models often prioritize high-frequency information, whereas adversarial training enforces a shift of focus to low-frequency details during training. By mimicking this behavior through frequency regularization of the learned convolution weights, we achieve improved native robustness to adversarial attacks, common corruptions, and other out-of-distribution tests. Additionally, this method leads to more favorable shifts in decision-making towards low-frequency information, such as shapes, which inherently aligns more closely with human vision.
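The idea of measuring how much of a filter's energy sits in high frequencies can be sketched as follows. This toy diagnostic is not the paper's exact regularizer; the metric, the block size, and the function name are illustrative assumptions, and a training loop could add such a quantity to its loss as a penalty term:

```python
import numpy as np

def high_freq_energy_ratio(kernel, keep=1):
    """Fraction of a k x k convolution kernel's 2D-FFT energy that falls
    outside the centered (2*keep+1)^2 low-frequency block."""
    spec = np.fft.fftshift(np.fft.fft2(kernel))  # DC bin moved to the center
    power = np.abs(spec) ** 2
    c = kernel.shape[0] // 2
    low = power[c - keep:c + keep + 1, c - keep:c + keep + 1].sum()
    total = power.sum()
    return float((total - low) / total)
```

A constant (pure low-pass) kernel yields a ratio near zero, while a kernel with sharp spatial variation yields a ratio close to one; penalizing the ratio pushes learned filters towards low-frequency behavior.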
Predictive control has great potential in the home energy management domain. However, such controls need reliable predictions of the system dynamics as well as energy consumption and generation, and the actual implementation in the real system is associated with many challenges. This paper presents the implementation of predictive controls for a heat pump with thermal storage in a real single-family house with a photovoltaic rooftop system. The predictive controls make use of a novel cloud camera-based short-term solar energy prediction and an intraday prediction system that includes additional data sources. In addition, machine learning methods were used to model the dynamics of the heating system and predict loads using extensive measured data. The results of the real and simulated operation will be presented.
The increasingly stringent CO2 emissions standards require innovative solutions in the vehicle development process. One possibility to reduce CO2 emissions is the electrification of powertrains. The resulting increased complexity, as well as the increased competition and time pressure make the use of simulation software and test benches indispensable in the early development phases. This publication therefore presents a methodology for test bench coupling to enable early testing of electrified powertrains. For this purpose, an internal combustion engine test bench and an electric motor test bench are virtually interconnected. By applying and extending the Distributed Co-Simulation Protocol Standard for the presented hybrid electric powertrain use case, real-time-capable communication between the two test benches is achieved. Insights into the test bench setups, and the communication between the test benches and the protocol extension, especially with regard to temperature measurements, enable the extension to be applied to other powertrain or test bench configurations. The shown results from coupled test bench operations emphasize the applicability. The discussed experiences from the test bench coupling experiments complete the insights.
With the function RooTri(), we present a simple and robust calculation method for the approximation of the intersection points of a scalar field given as an unstructured point cloud with a plane oriented arbitrarily in space. The point cloud is approximated to a surface consisting of triangles whose edges are used for computing the intersection points. The function contourc() of Matlab is taken as a reference. Our experiments show that the function contourc() produces outliers that deviate significantly from the defined nominal value, while the quality of the results produced by the function RooTri() increases with finer resolution of the examined grid.
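The core geometric step such a method relies on can be sketched for a single triangle: intersecting its edges with a plane given as normal · x = c. The function below is illustrative only and is not the actual RooTri() API:

```python
def triangle_plane_points(tri, normal, c):
    """Intersection points of a triangle's edges with the plane normal·x = c.

    tri: three 3D vertices (tuples); normal: plane normal vector; c: offset.
    Returns 0 or 2 points for a triangle in general position.
    """
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    d = [dot(v, normal) - c for v in tri]  # signed distances to the plane
    pts = []
    for i, j in ((0, 1), (1, 2), (2, 0)):
        if d[i] * d[j] < 0:  # edge endpoints lie on opposite sides
            t = d[i] / (d[i] - d[j])  # parameter of the crossing on the edge
            pts.append(tuple(a + t * (b - a) for a, b in zip(tri[i], tri[j])))
    return pts
```

Applied to every triangle of the approximated surface, the collected segments form the isocontour of the point cloud in the chosen plane.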
Featherweight Generic Go (FGG) is a minimal core calculus modeling the essential features of the programming language Go. It includes support for overloaded methods, interface types, structural subtyping, and generics. The most straightforward semantic description of the dynamic behavior of FGG programs is to resolve method calls based on runtime type information of the receiver. This article shows a different approach by defining a type-directed translation from FGG− to an untyped lambda calculus, where FGG− includes all features of FGG except type assertions. The translation of an FGG− program provides evidence for the availability of methods as additional dictionary parameters, similar to the dictionary-passing approach known from Haskell type classes. Method calls can then be resolved by a simple lookup of the method definition in the dictionary. Every program in the image of the translation has the same dynamic semantics as its source FGG− program. The proof of this result is based on a syntactic, step-indexed logical relation, where the step index ensures a well-founded definition of the relation in the presence of recursive interface types and recursive methods. Although non-deterministic, the translation is coherent.
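The dictionary-passing idea can be illustrated outside the formal calculus (here in Python as a stand-in for the untyped lambda calculus; all names are illustrative): instead of resolving a method from the receiver's runtime type, the caller passes a dictionary mapping method names to implementations, and a method call becomes a plain lookup:

```python
def format_with(dict_, value):
    """A 'method call' in the translated program: look the method up in the
    dictionary passed as evidence, then apply it to the receiver value."""
    return dict_["format"](value)

# Each dictionary plays the role of the evidence the translation inserts
# for one implementing type.
int_formatter = {"format": lambda v: f"int({v})"}
str_formatter = {"format": lambda v: f"str({v})"}
```

The translation's job is precisely to thread the right dictionary to each call site, so that no runtime type information is needed to dispatch.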
In this paper, the performance of different continuous-time and discrete-time models of the electrical subsystem of induction machines and permanent-magnet synchronous machines, as well as methods based on them for decoupling the direct and quadrature axis components of the stator current, are investigated and compared. The focus here is on inverter-fed, pulse width modulated drives when operated with a relatively large product of stator frequency and sampling time, where significant differences between the models and decoupling methods used come to light. Recommendations for a discrete-time model to be used uniformly in the future are made, as well as statements on whether feedforward or feedback decoupling structures are better suited and whether state controllers improve decoupling measures for very steep speed ramps. Simulation studies and measurement results support the statements made above.
A balcony photovoltaic (PV) system, also known as a micro-PV system, is a small PV system consisting of one or two solar modules with an output of 100–600 Wp and a corresponding inverter that uses standard plugs to feed the renewable energy into the house grid. In the present study we demonstrate the integration of a commercial lithium-ion battery into a commercial micro-PV system. We firstly show simulations over one year with one second time resolution which we use to assess the influence of battery and PV size on self-consumption, self-sufficiency and the annual cost savings. We then develop and operate experimental setups using two different architectures for integrating the battery into the micro-PV system. In the passive hybrid architecture, the battery is in parallel electrical connection to the PV module. In the active hybrid architecture, an additional DC-DC converter is used. Both architectures include measures to avoid maximum power point tracking of the battery by the module inverter. Resulting PV/battery/inverter systems with 300 Wp PV and 555 Wh battery were tested in continuous operation over three days under real solar irradiance conditions. Both architectures were able to maintain stable operation and demonstrate the shift of PV energy from the day into the night. System efficiencies were observed comparable to a reference system without battery. This study therefore demonstrates the feasibility of both active and passive coupling architectures.
Subjects utilizing a cochlear implant (CI) in one ear and a hearing aid (HA) on the contralateral ear suffer from mismatches in stimulation timing due to different processing latencies of both devices. This device delay mismatch leads to a temporal mismatch in auditory nerve stimulation. Compensating for this auditory nerve stimulation mismatch by compensating for the device delay mismatch can significantly improve sound source localization accuracy. One CI manufacturer has already implemented the possibility of mismatch compensation in its current fitting software. This study investigated if this fitting parameter can be readily used in clinical settings and determined the effects of familiarization to a compensated device delay mismatch over a period of 3–4 weeks. Sound localization accuracy and speech understanding in noise were measured in eleven bimodal CI/HA users, with and without a compensation of the device delay mismatch. The results showed that sound localization bias improved to 0°, implying that the localization bias towards the CI was eliminated when the device delay mismatch was compensated. The RMS error was improved by 18% with this improvement not reaching statistical significance. The effects were acute and did not further improve after 3 weeks of familiarization. For the speech tests, spatial release from masking did not improve with a compensated mismatch. The results show that this fitting parameter can be readily used by clinicians to improve sound localization ability in bimodal users. Further, our findings suggest that subjects with poor sound localization ability benefit the most from the device delay mismatch compensation.
Electrolyte-gated transistors (EGTs) represent an interesting alternative to conventional dielectric-gating to reduce the required high supply voltage for printed electronic applications. Here, a type of ink-jet printable ion-gel is introduced and optimized to fabricate a chemically crosslinked ion-gel by self-assembled gelation, without additional crosslinking processes, e.g., UV-curing. For the self-assembled gelation, poly(vinyl alcohol) and poly(ethylene-alt-maleic anhydride) are used as the polymer backbone and chemical crosslinker, respectively, and 1-ethyl-3-methylimidazolium trifluoromethanesulfonate ([EMIM][OTf]) is utilized as an ionic species to ensure ionic conductivity. The as-synthesized ion-gel exhibits an ionic conductivity of ≈5 mS cm−1 and an effective capacitance of 5.4 µF cm−2 at 1 Hz. The ion-gel is successfully employed in EGTs with an indium oxide (In2O3) channel, which shows on/off-ratios of up to 1.3 × 106 and a subthreshold swing of 80.62 mV dec−1.
Deep learning approaches are becoming increasingly important for the estimation of the Remaining Useful Life (RUL) of mechanical elements such as bearings. This paper proposes and evaluates a novel transfer learning-based approach for RUL estimations of different bearing types with small datasets and low sampling rates. The approach is based on an intermediate domain that abstracts features of the bearings based on their fault frequencies. The features are processed by convolutional layers. Finally, the RUL estimation is performed using a Long Short-Term Memory (LSTM) network. The transfer learning relies on a fixed-feature extraction. This novel deep learning approach successfully uses data of a low-frequency range, which is a precondition to use low-cost sensors. It is validated against the IEEE PHM 2012 Data Challenge, where it outperforms the winning approach. The results show its suitability for low-frequency sensor data and for efficient and effective transfer learning between different bearing types.
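The intermediate domain described above abstracts vibration features around the bearings' characteristic fault frequencies. These follow from standard bearing geometry (the formulas below are textbook relations, not the paper's exact feature pipeline; parameter names are illustrative):

```python
import math

def fault_frequencies(n, fr, d, D, phi=0.0):
    """Characteristic bearing fault frequencies from geometry.

    n: number of rolling elements; fr: shaft rotation frequency [Hz];
    d: rolling-element diameter; D: pitch diameter; phi: contact angle [rad].
    """
    ratio = (d / D) * math.cos(phi)
    return {
        "BPFO": n / 2 * fr * (1 - ratio),   # ball pass frequency, outer race
        "BPFI": n / 2 * fr * (1 + ratio),   # ball pass frequency, inner race
        "FTF": fr / 2 * (1 - ratio),        # fundamental train (cage) frequency
        "BSF": D / (2 * d) * fr * (1 - ratio ** 2),  # ball spin frequency
    }
```

Expressing spectral features relative to these frequencies is what makes them comparable across bearing types, which is the precondition for the transfer learning described above.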
Gas Analysis and Optimization of Debinding and Sintering Processes for Metallic Binder-Based AM*
(2022)
Binder-based additive manufacturing processes for metallic AM components in a wide range of applications usually use organic binders and process-related additives that must be thermally removed before sintering. Debinding processes are typically parameterized empirically and are thus far from the optimum. Since debinding based on thermal decomposition of organic components and the subsequent thermochemical reactions between process atmosphere and metal powder materials makes uncomplicated parameterization difficult, in-situ instrumentation was introduced at Fraunhofer IFAM. This measurement method relies on infrared spectroscopy and mass spectrometry in various furnace concepts to understand the gaseous decomposition processes of organic components and the subsequent thermochemical reactions between the carrier gas atmosphere and the metal part, as well as their kinetics. This method enables an efficient optimization of the temperature-time profiles and the required atmosphere composition to realize dense AM components with low contamination. In the paper, the optimization strategy is presented, and the achievable properties are illustrated using a fused filament fabrication (FFF) component example made of 316L stainless steel.
Inadequate mechanical compliance of orthopedic implants can result in excessive strain of the bone interface, and ultimately, aseptic loosening. It is hypothesized that a fiber-based biometal with adjustable anisotropic mechanical properties can reduce interface strain, facilitate continuous remodeling, and improve implant survival under complex loads. The biometal is based on strategically layered sintered titanium fibers. Six different topologies are manufactured. Specimens are tested under compression along three orthogonal axes, as well as under 3-point bending and torsion until failure. Biocompatibility testing involves murine osteoblasts. Osseointegration is investigated by micro-computed tomography and histomorphometry after implantation in a metaphyseal trepanation model in sheep. The material demonstrates compressive yield strengths of up to 50 MPa and anisotropy correlating closely with fiber layout. Samples with 75% porosity are both stronger and stiffer than those with 85% porosity. The highest bending modulus is found in samples with parallel fiber orientation, while the highest shear modulus is found in cross-ply layouts. Cell metabolism and morphology indicate uncompromised biocompatibility. Implants demonstrate robust circumferential osseointegration in vivo after 8 weeks. The biometal introduced in this study demonstrates anisotropic mechanical properties similar to bone, and excellent osteoconductivity and feasibility as an orthopedic implant material.
Titanium and stainless steel are commonly known as osteosynthesis materials with high strength and good biocompatibility. However, they have the big disadvantage that a second operation for hardware removal is necessary. Although resorbable systems made of polymers or magnesium are increasingly used, they show some severe adverse foreign body reactions or unsatisfying degradation behavior. Therefore, we started to investigate molybdenum as a potential new biodegradable material for osteosynthesis in craniomaxillofacial surgery. To characterize molybdenum as a biocompatible material, we performed in vitro assays in accordance with ISO Norm 10993-5. In four different experimental setups, we showed that pure molybdenum and molybdenum rhenium alloys do not lead to cytotoxicity in human and mouse fibroblasts. We also examined the degradation behavior of molybdenum by carrying out long-term immersion tests (up to 6 months) with molybdenum sheet metal. We showed that molybdenum has sufficient mechanical stability over at least 6 months for implants on the one hand and is subject to very uniform degradation on the other. The results of our experiments are very promising for the development of new resorbable osteosynthesis materials for craniomaxillofacial surgery based on molybdenum.
Blockchain-IIoT integration into industrial processes promises greater security, transparency, and traceability. However, this advancement faces significant storage and scalability issues with existing blockchain technologies. Each peer in the blockchain network maintains a full copy of the ledger, which is updated through consensus. This full replication approach places a burden on the storage space of the peers and would quickly outstrip the storage capacity of resource-constrained IIoT devices. Various solutions utilizing compression, summarization, or different storage schemes have been proposed in the literature, and the use of cloud resources for blockchain storage has been studied extensively in recent years. Nonetheless, block selection, i.e., identifying the blocks to be transferred to the cloud, remains a substantial challenge in integrating cloud resources with the blockchain. This paper proposes a deep reinforcement learning (DRL) approach to the block selection problem: the multi-objective optimization of block selection is converted into a Markov decision process (MDP), and a simulated blockchain environment is designed for training and testing. We utilize two DRL algorithms, Advantage Actor-Critic (A2C) and Proximal Policy Optimization (PPO), to solve the block selection problem and analyze their performance gains. PPO and A2C achieve 47.8% and 42.9% storage reduction on the blockchain peer compared to the full replication approach of conventional blockchain systems. Even the slower DRL algorithm, A2C, achieves a run time 7.2 times shorter than the benchmark evolutionary algorithms used in earlier works, which validates the gains introduced by the DRL algorithms. The simulation results further show that our DRL algorithms provide an adaptive and dynamic solution for the time-sensitive blockchain-IIoT environment.
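The block selection setting described above can be illustrated with a toy sketch: blocks carry a size and an access frequency, and a simple greedy baseline decides which blocks to offload to the cloud. The environment, reward weights, and threshold policy below are illustrative assumptions, not the paper's MDP formulation or DRL implementation.

```python
import random

class BlockSelectionEnv:
    """Toy blockchain environment: each block has a size and an access frequency."""

    def __init__(self, n_blocks=100, seed=0):
        rng = random.Random(seed)
        # (size in KB, access frequency in queries/hour) -- illustrative values
        self.blocks = [(rng.randint(1, 64), rng.random()) for _ in range(n_blocks)]

    def step_reward(self, size, freq, offload):
        # Reward trades off local storage cost against cloud query latency.
        if offload:
            return -2.0 * freq   # queries to offloaded blocks are slower
        return -0.05 * size      # local storage on the peer is scarce

def greedy_policy(env, threshold=0.5):
    """Offload blocks that are large relative to how often they are queried."""
    return [freq / size < threshold / 64 for size, freq in env.blocks]

env = BlockSelectionEnv()
actions = greedy_policy(env)  # True = transfer block to the cloud
local_kb = sum(size for (size, _), off in zip(env.blocks, actions) if not off)
total_kb = sum(size for size, _ in env.blocks)
print(f"storage kept on peer: {local_kb}/{total_kb} KB")
```

A DRL agent such as A2C or PPO would replace the fixed threshold with a learned policy that maximizes the cumulative reward over such decisions.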
The integration of Internet of Things devices onto the Blockchain implies an increase in the transactions that occur on the Blockchain, thus increasing the storage requirements.
A solution approach is to leverage cloud resources for storing blocks within the chain. The paper therefore proposes two solutions to this problem. The first is an improved hybrid architecture design that uses containerization to create a side chain on a fog node for the devices connected to it, together with an Advanced Time‑variant Multi‑objective Particle Swarm Optimization Algorithm (AT‑MOPSO) for determining the optimal number of blocks to transfer to the cloud for storage. This algorithm uses time‑variant weights for the velocity of the particle swarm optimization and the non‑dominated sorting and mutation schemes from NSGA‑III. The proposed algorithm was compared with results from the original MOPSO algorithm, the Strength Pareto Evolutionary Algorithm (SPEA‑II), the Pareto Envelope‑based Selection Algorithm with region‑based selection (PESA‑II), and NSGA‑III. The proposed AT‑MOPSO showed better results than these algorithms in cloud storage cost and query probability optimization. Importantly, AT‑MOPSO achieved 52% energy efficiency compared to NSGA‑III.
To demonstrate applicability to a real‑world Blockchain system, the BISS industrial Blockchain architecture was adapted and modified to show how AT‑MOPSO can be used with existing Blockchain systems and the benefits it provides.
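The time-variant weighting idea at the core of AT-MOPSO can be sketched with a minimal single-objective PSO in which the inertia weight decays linearly over the iterations, shifting the swarm from exploration to exploitation. The objective function, bounds, and coefficients below are illustrative assumptions, not the multi-objective algorithm of the paper.

```python
import random

def pso_minimize(f, dim=2, n_particles=20, iters=100, seed=1):
    """Single-objective PSO with a linearly decaying (time-variant) inertia weight."""
    rng = random.Random(seed)
    w_max, w_min, c1, c2 = 0.9, 0.4, 1.5, 1.5  # illustrative coefficients
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=f)[:]
    for t in range(iters):
        # Time-variant inertia weight: large early (explore), small late (exploit)
        w = w_max - (w_max - w_min) * t / (iters - 1)
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pos[i]) < f(gbest):
                    gbest = pos[i][:]
    return gbest

sphere = lambda x: sum(v * v for v in x)
best = pso_minimize(sphere)
print(best, sphere(best))
```

AT-MOPSO additionally maintains a Pareto archive and applies the non-dominated sorting and mutation schemes of NSGA-III, which are omitted in this sketch.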
Aerosol particles play an important role in the climate system by absorbing and scattering radiation and influencing cloud properties. They are also one of the biggest sources of uncertainty in climate modeling. Many climate models do not include aerosols in sufficient detail due to computational constraints. To represent key processes, aerosol microphysical properties and processes have to be accounted for. This is done in the ECHAM-HAM (European Center for Medium-Range Weather Forecast-Hamburg-Hamburg) global climate aerosol model using the M7 microphysics, but high computational costs make it very expensive to run at finer resolution or for a longer time. We aim to use machine learning to emulate the microphysics model at sufficient accuracy while reducing the computational cost through fast inference. The original M7 model is used to generate input–output pairs on which a neural network (NN) is trained. We are able to learn the variables’ tendencies, achieving an average R² score of 77.1%. We further explore methods to inform and constrain the NN with physical knowledge to reduce mass violation and enforce mass positivity. On a graphics processing unit (GPU), we achieve a speed-up of more than 64 times compared to the original model.
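One generic way to enforce mass positivity and limit mass violation when applying NN-predicted tendencies is to clip negative masses and rescale the result to the intended mass budget. This is a hedged sketch of that idea with illustrative values, not necessarily the constraint scheme used with the M7 emulator.

```python
import numpy as np

def apply_tendencies(masses, tendencies, dt=1.0):
    """Update species masses, clip negatives, and rescale to conserve the mass budget."""
    updated = masses + dt * tendencies
    updated = np.clip(updated, 0.0, None)              # enforce positivity
    target_total = masses.sum() + dt * tendencies.sum()  # intended total mass
    if updated.sum() > 0:
        updated *= max(target_total, 0.0) / updated.sum()  # restore mass budget
    return updated

masses = np.array([1.0, 0.5, 0.1])
tend = np.array([-0.3, 0.1, -0.2])  # raw update would drive the last species negative
out = apply_tendencies(masses, tend)
print(out, out.sum())
```

The clipped-and-rescaled masses remain non-negative while their total still matches the sum implied by the predicted tendencies.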
Many commonly well-performing convolutional neural network models have been shown to be susceptible to input data perturbations, indicating low model robustness. To reveal model weaknesses, adversarial attacks are specifically optimized to generate small, barely perceivable image perturbations that flip the model prediction. Robustness against such attacks can be gained by using adversarial examples during training, which in most cases reduces the measurable model attackability. Unfortunately, this technique can lead to robust overfitting, which results in non-robust models. In this paper, we analyze adversarially trained, robust models in the context of a specific network operation, the downsampling layer, and provide evidence that robust models have learned to downsample more accurately and suffer significantly less from downsampling artifacts, i.e., aliasing, than baseline models. In the case of robust overfitting, we observe a strong increase in aliasing and propose a novel early stopping approach based on the measurement of aliasing.
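The aliasing effect at the heart of this analysis can be demonstrated in a few lines: downsampling a high-frequency signal by plain striding folds its energy onto a lower frequency, whereas low-pass filtering before striding suppresses it. The signal and the simple box filter are illustrative choices.

```python
import numpy as np

n = 256
t = np.arange(n)
x = np.sin(2 * np.pi * 100 * t / n)  # 100 cycles: above Nyquist after 4x downsampling

strided = x[::4]                                             # naive downsampling
blurred = np.convolve(x, np.ones(4) / 4, mode="same")[::4]   # low-pass, then stride

# Largest spectral magnitude (ignoring the DC bin)
peak = lambda sig: np.abs(np.fft.rfft(sig))[1:].max()
print("aliased peak:", peak(strided), "filtered peak:", peak(blurred))
```

The strided signal shows a strong spurious peak at an aliased frequency, while the pre-filtered version attenuates it substantially; aliasing measures of this kind underlie the proposed early stopping criterion.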
In this paper, a concept for an anthropomorphic replacement hand cast with silicone with an integrated sensory feedback system is presented. In order to construct the personalized replacement hand, a 3D scan of a healthy hand was used to create a 3D-printed mold using computer-aided design (CAD). To allow for movement of the index and middle fingers, a motorized orthosis was used. Information about the applied force for grasping and the degree of flexion of the fingers is registered using two pressure sensors and one bending sensor in each movable finger. To integrate the sensors and additional cavities for increased flexibility, the fingers were cast in three parts, separately from the rest of the hand. A silicone adhesive (Silpuran 4200) was evaluated for joining the individual parts afterwards. For this, tests with different geometries were carried out. Furthermore, different test series for the secure integration of the sensors were performed, including measurements of the registered sensor information. Based on these findings, skin-toned individual fingers and a replacement hand with integrated sensors were created. Using Silpuran 4200, it was possible to integrate the needed cavities and to place the sensors securely into the hand while retaining full flexion using a motorized orthosis. The measurements under different loadings and while grasping various objects showed that it is possible to realize such a sensory feedback system in a replacement hand. As a result, it can be stated that the cost-effective realization of a personalized, anthropomorphic replacement hand with an integrated sensory feedback system is possible using 3D scanning and 3D printing. By integrating smaller sensors, the risk of damaging the sensors through movement could be decreased.
Positioning mobile systems with high accuracy is a prerequisite for intelligent autonomous behavior, both in industrial environments and in field robotics. This paper describes the setup of a robotic platform and its use for the evaluation of simultaneous localization and mapping (SLAM) algorithms. The setup was implemented with a Husky A200 mobile robot and a LiDAR (light detection and ranging) sensor. For verification of the proposed setup, different scan matching methods for odometry determination were tested in indoor and outdoor environments. An assessment of the accuracy of the baseline 3D-SLAM system and the selected evaluation system is presented by comparing different scenarios and test situations. It was shown that hdl_graph_slam in combination with the OS1 LiDAR and the scan matching algorithms FAST_GICP and FAST_VGICP achieves good mapping results with accuracies of up to 2 cm.
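At the core of scan matching methods such as (G)ICP is, in each iteration, the closed-form rigid alignment of matched point pairs. The following is a minimal SVD-based (Kabsch) solver for the 2-D case; the point set and the true transform are an illustrative example rather than real LiDAR data.

```python
import numpy as np

def rigid_align(src, dst):
    """Return rotation R and translation t minimizing ||R @ p + t - q|| over pairs."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)          # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

rng = np.random.default_rng(0)
src = rng.normal(size=(30, 2))                   # synthetic "scan" points
theta = 0.4
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
dst = src @ R_true.T + np.array([1.0, -2.0])     # rotated and translated copy
R, t = rigid_align(src, dst)
print(np.allclose(R, R_true), np.allclose(t, [1.0, -2.0]))
```

Full ICP variants iterate this step together with nearest-neighbor correspondence search and, in GICP, per-point covariance weighting.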
An Overview of Technologies for Improving Storage Efficiency in Blockchain-Based IIoT Applications
(2022)
Since the inception of blockchain-based cryptocurrencies, researchers have been fascinated with the idea of integrating blockchain technology into other fields, such as health and manufacturing. Despite the benefits of blockchain, which include immutability, transparency, and traceability, certain issues that limit its integration with IIoT still linger. One of these prominent problems is the storage inefficiency of the blockchain. Due to the append-only nature of the blockchain, the growth of the blockchain ledger inevitably leads to high storage requirements for blockchain peers. This poses a challenge for its integration with the IIoT, where high volumes of data are generated at a relatively faster rate than in applications such as financial systems. Therefore, there is a need for blockchain architectures that deal effectively with the rapid growth of the blockchain ledger. This paper discusses the problem of storage inefficiency in existing blockchain systems, how this affects their scalability, and the challenges that this poses to their integration with IIoT. This paper explores existing solutions for improving the storage efficiency of blockchain–IIoT systems, classifying these proposed solutions according to their approaches and providing insight into their effectiveness through a detailed comparative analysis and examination of their long-term sustainability. Potential directions for future research on the enhancement of storage efficiency in blockchain–IIoT systems are also discussed.
Note: In lieu of an abstract, this is an excerpt from the first page.
Recently, we reported the three-dimensional computer-aided design (3D-CAD) reconstruction of the first “Iron Hand” of the famous Franconian knight, Götz von Berlichingen (1480–1562), who lost his right hand by a cannon ball splinter injury in 1504 in the War of the Succession of Landshut [...]
In asymmetric treatment of hearing loss, processing latencies of the modalities typically differ. This often alters the reference interaural time difference (ITD) (i.e., the ITD at 0° azimuth) by several milliseconds. Such changes in reference ITD have been shown to influence sound source localization in bimodal listeners provided with a hearing aid (HA) in one ear and a cochlear implant (CI) in the contralateral ear. In this study, the effect of changes in reference ITD on speech understanding, especially spatial release from masking (SRM), was explored in normal-hearing subjects. Speech reception thresholds (SRT) were measured in ten normal-hearing subjects for reference ITDs of 0, 1.75, 3.5, 5.25 and 7 ms with spatially collocated (S0N0) and spatially separated (S0N90) sound sources. Further, the cues for separation of target and masker were manipulated to measure the effect of a reference ITD on unmasking by A) ITDs and interaural level differences (ILDs), B) ITDs only and C) ILDs only. A blind equalization-cancellation (EC) model was applied to simulate all measured conditions. SRM decreased significantly in conditions A) and B) when the reference ITD was increased: in condition A) from 8.8 dB SNR on average at 0 ms reference ITD to 4.6 dB at 7 ms, in condition B) from 5.5 dB to 1.1 dB. In condition C) no significant effect was found. These results were accurately predicted by the applied EC model. The outcomes show that interaural processing latency differences should be considered in asymmetric treatment of hearing loss.
The growth of the Internet of Things (IoT) calls for secure solutions for industrial applications. The security of the IoT can potentially be improved by blockchain. However, blockchain technology suffers from scalability issues, which hinder its integration with the IoT. Solutions to blockchain’s scalability issues, such as minimizing the computational complexity of consensus algorithms or the blockchain storage requirements, have received attention. However, to realize the full potential of blockchain in the IoT, the inefficiencies of its inter-peer communication must also be addressed. For example, blockchain uses a flooding technique to share blocks, resulting in duplicates and inefficient bandwidth usage. Moreover, blockchain peers use a random neighbor selection (RNS) technique to decide which other peers to exchange blockchain data with. As a result, the peer-to-peer (P2P) topology formation limits the effective achievable throughput. This paper provides a survey of the state-of-the-art network structures and communication mechanisms used in blockchain and establishes the need for network-based optimization. Additionally, it discusses the blockchain architecture and its layers, categorizes the existing literature into these layers, and surveys the state-of-the-art optimization frameworks, analyzing their effectiveness and ability to scale. Finally, this paper presents recommendations for future work.
Commercial simulators can reproduce electrocardiograms (ECGs) of normal and diseased heart rhythms only in simplified waveforms and with a low number of channels. With the presented project, the variety of digitally archived ECGs recorded during electrophysiological examinations is to be made usable as original analogue signals for research and teaching purposes through the development of a special printed circuit board for the Raspberry Pi mini-computer.
Industrial companies can use blockchain to assist them in resolving their trust and security issues. In this research, we provide a fully distributed blockchain-based architecture for the industrial IoT, relying on trust management and reputation to enhance nodes’ trustworthiness. The purpose of this contribution is to introduce our system architecture and to show how network access for users can be secured with dynamic authorization management. All decisions in the system are made by consensus of trustful nodes and are fully distributed. The remarkable feature of this system architecture is that the influence of a node’s power is lowered depending on its Proof of Work (PoW) and Proof of Stake (PoS), and a node’s significance and authority are determined by its behavior in the network.
This influence is based on game theory and an incentive mechanism for reputation between nodes. The system design can be deployed on legacy machines, which means that security and distributed systems can be put in place at low cost on industrial systems. Although numerical results are not yet available, this work, which addresses the open questions around the majority problem through a game-theoretic mechanism and a trust management system, indicates how the industrial IoT and existing blockchain frameworks that rely only on the power of PoW and PoS can be secured more effectively.
Significant progress in the development and commercialization of electrically conductive adhesives has been made. This makes shingling a very attractive approach for solar cell interconnection. In this study, we investigate the shading tolerance of two types of solar modules based on shingle interconnection: first, the already commercialized string approach, and second, the matrix technology where solar cells are intrinsically interconnected in parallel and in series. An experimentally validated LTspice model predicts major advantages for the power output of the matrix layout under partial shading. Diagonal as well as random shading of a 1.6 m² solar module is examined. Power gains of up to 73.8 % for diagonal shading and up to 96.5 % for random shading are found for the matrix technology compared to the standard string approach. The key factor is an increased current extraction due to lateral current flows. Especially under minor shading, the matrix technology benefits from an increased fill factor as well. Under diagonal shading, we find the probability of parts of the matrix module being bypassed to be reduced by 40 % in comparison to the string module. Consequently, the overall risk of hotspot occurrence in matrix modules is decreased significantly.
A versatile liquid metal (LM) printing process is presented that enables the fabrication of various fully printed devices, such as intra- and interconnect wires, resistors, diodes, transistors, and basic circuit elements such as inverters, and that is process-compatible with other digital printing and thin-film structuring methods for integration. For this, a glass-capillary-based direct-write method for printing LMs such as eutectic gallium alloys is demonstrated, exploring the potential of fully printed LM-enabled devices. Examples of successful device fabrication include resistors, p–n diodes, and field-effect transistors. The device functionality and the ease of a single integrated fabrication flow show that the potential of LM printing far exceeds the mere interconnection of conventional electronic devices in printed electronics.
Objective: To quantify the effect of inhaled 5% carbon-dioxide/95% oxygen on EEG recordings from patients in non-convulsive status epilepticus (NCSE).
Methods: Five children of mixed aetiology in NCSE were given a high flow of inhaled carbogen (5% carbon dioxide/95% oxygen) through a face mask for a maximum of 120 s. EEG was recorded concurrently in all patients. The effects of inhaled carbogen on the patients' EEG recordings were investigated using band-power, functional connectivity and graph theory measures. The carbogen effect was quantified by measuring the effect size (Cohen's d) between the "before", "during" and "after" carbogen delivery states.
Results: Carbogen's apparent effect on EEG band-power and network metrics for the "before-during" and "before-after" inhalation comparisons was inconsistent across the five patients.
Conclusion: The changes in different measures suggest a potentially non-homogeneous effect of carbogen on the patients' EEG. Different aetiology and duration of the inhalation may underlie these non-homogeneous effects. Tuning the carbogen parameters (such as ratio between CO2 and O2, duration of inhalation) on a personalised basis may improve seizure suppression in future.
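The effect-size quantification described above can be sketched as follows: Cohen's d with a pooled standard deviation, applied to per-epoch band-power values "before" versus "during" inhalation. The sample values are illustrative, not patient data.

```python
import math

def cohens_d(a, b):
    """Cohen's d between two samples using the pooled standard deviation."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # unbiased variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    pooled = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled

before = [4.1, 3.8, 4.4, 4.0, 4.2]   # illustrative band-power values
during = [3.2, 3.0, 3.5, 3.1, 3.4]
print(round(cohens_d(before, during), 2))
```

Conventionally, |d| around 0.2, 0.5, and 0.8 is read as a small, medium, and large effect, respectively.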
Emerging applications in soft robotics, wearables, smart consumer products and IoT devices benefit from soft materials and flexible substrates in conjunction with electronic functionality. Due to high production costs and conformity restrictions, rigid silicon technologies do not meet the application requirements in these new domains. However, whenever signal processing becomes too comprehensive, silicon technology must still be used for the high-performance computing unit. At the same time, designing everything in flexible or printed electronics using conventional digital logic is not yet feasible due to the limitations of printed technologies in terms of performance, power and integration density. We propose instead to exploit the strengths of neuromorphic computing architectures, namely their homogeneous topologies, few building blocks and analog signal processing, by mapping them to an inkjet-printed hardware architecture. Demonstrating non-linear elements beyond weighted aggregation has remained a challenge. In this work, we demonstrate printed hardware building blocks such as inverter-based comprehensive weight representation and resistive crossbars, as well as printed transistor-based activation functions. In addition, we present a learning algorithm developed to train the proposed printed NCS architecture under the specific requirements and constraints of the technology.
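The two primitives named above, crossbar-based weighted aggregation and a non-linear activation, can be sketched numerically as follows. The conductance values are illustrative, and tanh stands in for the measured transfer curve of a printed inverter-based activation; this is not a model of the fabricated devices.

```python
import numpy as np

def crossbar_layer(v_in, G):
    """Column currents of a resistive crossbar: weighted sum of input voltages."""
    return v_in @ G               # Kirchhoff current summation per output column

def activation(i_col, gain=3.0):
    """Saturating, inverter-like transfer curve (tanh as an illustrative stand-in)."""
    return np.tanh(gain * i_col)

G = np.array([[0.2, 0.8],
              [0.5, 0.1],
              [0.3, 0.4]])        # 3 inputs x 2 outputs (conductances, arbitrary units)
v = np.array([1.0, -0.5, 0.25])   # input voltages
out = activation(crossbar_layer(v, G))
print(out)
```

Stacking such weighted-sum/activation pairs yields the feed-forward NCS topology that the proposed learning algorithm is trained on.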
Fifth-generation (5G) cellular mobile networks are expected to support mission-critical low-latency applications in addition to mobile broadband services, whereas fourth-generation (4G) cellular networks are unable to support Ultra-Reliable Low Latency Communication (URLLC). It is nevertheless interesting to understand which latency requirements can be met with both 4G and 5G networks. In this paper, we (1) discuss the components contributing to the latency of cellular networks, (2) evaluate control-plane and user-plane latencies for current-generation narrowband cellular networks and point out potential improvements to reduce the latency of these networks, and (3) present, implement and evaluate latency reduction techniques for latency-critical applications. The two techniques we identified, the short transmission time interval and semi-persistent scheduling, are very promising, as they shorten the delay in processing received information in both the control and data planes. We then analyze the potential of latency reduction techniques for URLLC applications. To this end, we implement these techniques in the long term evolution (LTE) module of the ns-3 simulator and evaluate their performance in two different application fields: industrial automation and intelligent transportation systems. Our detailed simulation results indicate that LTE can satisfy the low-latency requirements of a large range of use cases in each field.
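The effect of the two techniques can be made concrete with a back-of-the-envelope latency budget: semi-persistent scheduling removes the scheduling-request/grant cycle, and a shorter transmission time interval shrinks the transmission component. All component durations below are illustrative assumptions for the sketch, not measured or standardized values.

```python
def uplink_latency_ms(tti_ms, scheduled=True, sr_period_ms=10.0, processing_ms=3.0):
    """Toy one-way uplink latency model (all durations are illustrative)."""
    # Dynamic scheduling: wait for a scheduling-request opportunity plus grant exchange;
    # semi-persistent scheduling (scheduled=False) skips this cycle entirely.
    grant_delay = sr_period_ms / 2 + 2 * tti_ms if scheduled else 0.0
    return grant_delay + tti_ms + processing_ms   # grant + transmission + processing

legacy = uplink_latency_ms(tti_ms=1.0, scheduled=True)          # 1 ms legacy TTI
short_tti_sps = uplink_latency_ms(tti_ms=0.143, scheduled=False)  # 2-symbol short TTI + SPS
print(legacy, short_tti_sps)
```

Even in this crude model, the combination of the two techniques cuts the toy latency by a factor of several, which is the qualitative behavior the simulations quantify.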
It is important to minimize the unscheduled downtime of machines caused by outages of machine components in highly automated production lines. In machine tools such as grinding machines, the bearings inside the spindles are among the most critical components. In the last decade, research has increasingly focused on the fault detection of bearings, and the rise of machine learning concepts has further intensified interest in this area. However, to date there is no one-size-fits-all solution for the predictive maintenance of bearings. Most research so far has considered only individual bearing types at a time.
This paper gives an overview of the most important approaches for bearing-fault analysis in grinding machines. The analysis presented in this paper has two main parts. The first part presents the classification of bearing faults, which includes the detection of unhealthy conditions, the position of the fault (e.g., at the inner or the outer ring of the bearing) and its severity, i.e., the size of the fault. The second part presents the prediction of remaining useful life, which is important for estimating the productive use of a component before a potential failure, optimizing replacement costs and minimizing downtime.
In the last decade, deep learning models for the condition monitoring of mechanical systems have gained importance. Most previous works use data from the same domain (e.g., the same bearing type) or a large amount of (labeled) samples. This assumption does not hold for many real-world industrial use cases, where only a small amount of data, often unlabeled, is available.
In this paper, we propose, evaluate, and compare a novel technique based on an intermediate domain, which creates a new representation of the features in the data and abstracts the defects of rotating elements such as bearings. The results based on an intermediate domain related to characteristic frequencies show an improved accuracy of up to 32 % on small labeled datasets compared to the current state-of-the-art in the time-frequency domain.
Furthermore, a Convolutional Neural Network (CNN) architecture is proposed for transfer learning. We also propose and evaluate a new approach for transfer learning, which we call Layered Maximum Mean Discrepancy (LMMD). This approach is based on the Maximum Mean Discrepancy (MMD) but extends it by considering the special characteristics of the proposed intermediate domain. The presented approach outperforms the traditional combination of Hilbert–Huang Transform (HHT) and S-Transform with MMD on all datasets for unsupervised as well as for semi-supervised learning. In most of our test cases, it also outperforms other state-of-the-art techniques.
This approach is capable of using different types of bearings in the source and target domain under a wide variation of the rotation speed.
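The criterion that the proposed LMMD extends can be sketched as the standard Gaussian-kernel MMD between source- and target-domain feature samples (here the biased V-statistic for brevity). Kernel bandwidth and the sample data are illustrative; LMMD additionally exploits the structure of the intermediate domain, which is not reproduced here.

```python
import numpy as np

def mmd_rbf(X, Y, gamma=1.0):
    """Biased MMD^2 estimate between samples X and Y with an RBF kernel."""
    def k(A, B):
        d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
        return np.exp(-gamma * d)
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=(100, 2))       # source-domain features
tgt_near = rng.normal(0.0, 1.0, size=(100, 2))  # target domain, same distribution
tgt_far = rng.normal(3.0, 1.0, size=(100, 2))   # shifted target domain
print(mmd_rbf(src, tgt_near), mmd_rbf(src, tgt_far))
```

In transfer learning, this quantity is added to the training loss so that the network learns feature representations under which source and target domains become indistinguishable.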
In recent years, physically unclonable functions (PUFs) have gained significant traction in IoT security applications, such as cryptographic key generation and entity authentication. PUFs extract the uncontrollable production characteristics of different devices to generate unique fingerprints for security applications. When generating PUF-based secret keys, the reliability and entropy of the keys are vital factors. This study proposes a novel method for generating PUF-based keys from a set of measurements. First, it formulates the group-based key generation problem as an optimization problem and solves it using integer linear programming (ILP), which guarantees finding the optimal solution. Then, a novel scheme for the extraction of keys from groups is proposed, which we call positioning syndrome coding (PSC). The use of ILP and the introduction of PSC facilitate the generation of high-entropy keys with low error-correction costs. These new methods were tested by applying them to the output of a capacitor network PUF. The results confirm the suitability of ILP and PSC for generating high-quality keys.
Evaluation of Deep Learning-Based Neural Network Methods for Cloud Detection and Segmentation
(2021)
This paper presents a systematic approach for accurate short-time cloud coverage prediction based on a machine learning (ML) approach. Using a newly built omnidirectional ground-based sky camera system, local training and evaluation data sets were created. These were used to train several state-of-the-art deep neural networks for object detection and segmentation. For this purpose, the camera generated a full hemispherical image every 30 min over two months in daylight conditions with a fish-eye lens. From this data set, a subset of images was selected for training and evaluation according to various criteria. Deep neural networks based on the two-stage R-CNN architecture were trained and compared with a U-Net segmentation approach implemented by CloudSegNet. All chosen deep networks were then evaluated and compared with respect to the local conditions.
Interpreting seismic data requires the characterization of a number of key elements such as the position of faults and main reflections, presence of structural bodies, and clustering of areas exhibiting a similar amplitude versus angle response. Manual interpretation of geophysical data is often a difficult and time-consuming task, complicated by lack of resolution and presence of noise. In recent years, approaches based on convolutional neural networks have shown remarkable results in automating certain interpretative tasks. However, these state-of-the-art systems usually need to be trained in a supervised manner, and they suffer from a generalization problem. Hence, it is highly challenging to train a model that can yield accurate results on new real data obtained with different acquisition, processing, and geology than the data used for training. In this work, we introduce a novel method that combines generative neural networks with a segmentation task in order to decrease the gap between annotated training data and uninterpreted target data. We validate our approach on two applications: the detection of diffraction events and the picking of faults. We show that when transitioning from synthetic training data to real validation data, our workflow yields superior results compared to its counterpart without the generative network.
Advanced controllers such as model predictive control (MPC) are considered necessary to exploit the technical flexibility of a building polygeneration system in support of the rapidly expanding renewable electricity grid. Such controllers can handle multiple inputs and outputs, uncertainties in forecast data, and plant constraints, among other features. One of the main issues identified in the literature regarding the deployment of these controllers is the lack of experimental demonstrations using standard components and communication protocols. In this original work, the economic-MPC-based optimal scheduling of a real-world heat pump-based building energy plant is demonstrated, and its performance is evaluated against two conventional controllers. The demonstration includes the steps required to integrate an optimization-based supervisory controller into a typical building automation and control system with off-the-shelf HVAC components, using state-of-the-art algorithms to solve a mixed-integer quadratic program. Technological benefits in terms of fewer constraint violations and hardware-friendly operation with MPC were identified. Additionally, a strong dependency of the economic benefits on the type of load profile, the system design and the controller parameters was identified. Future work on the quantification of these benefits, the application of machine learning algorithms, and the study of forecast deviations is also proposed.
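The receding-horizon scheduling idea can be sketched in miniature: at each step, enumerate on/off heat-pump schedules over a short horizon, pick the cheapest schedule that keeps the room temperature within comfort bounds, apply only its first decision, and repeat. The plant model, prices, and bounds are illustrative assumptions; the paper's controller solves a mixed-integer quadratic program instead of enumerating.

```python
from itertools import product

def simulate(T, on, t_amb=5.0, gain=1.5, loss=0.1):
    """Toy first-order thermal model: heating gain minus loss to ambient."""
    return T + gain * on - loss * (T - t_amb)

def mpc_step(T, prices, horizon=4, t_min=20.0, t_max=24.0):
    """Exhaustive search over on/off plans; return the first action of the cheapest feasible plan."""
    best_cost, best_u = float("inf"), 0
    for plan in product([0, 1], repeat=horizon):
        temp, cost, ok = T, 0.0, True
        for u, p in zip(plan, prices):
            temp = simulate(temp, u)
            cost += p * u
            if not (t_min <= temp <= t_max):
                ok = False
                break
        if ok and cost < best_cost:
            best_cost, best_u = cost, plan[0]
    return best_u

T, prices = 21.0, [0.30, 0.10, 0.10, 0.30, 0.30, 0.10, 0.10, 0.30]
trace = []
for k in range(4):                      # receding horizon: re-optimize every step
    u = mpc_step(T, prices[k:k + 4])
    T = simulate(T, u)
    trace.append((u, round(T, 2)))
print(trace)
```

The enforced comfort band is precisely the kind of constraint handling that distinguishes MPC from the conventional rule-based controllers it is compared against.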
The increasing use of artificial intelligence (AI) technologies across application domains has prompted our society to pay closer attention to AI’s trustworthiness, fairness, interpretability, and accountability. In order to foster trust in AI, it is important to consider the potential of interactive visualization, and how such visualizations help build trust in AI systems. This manifesto discusses the relevance of interactive visualizations and makes the following four claims: i) trust is not a technical problem, ii) trust is dynamic, iii) visualization cannot address all aspects of trust, and iv) visualization is crucial for human agency in AI.
With many advances in sensor technology and the Internet of Things, the Vehicle Ad Hoc Network (VANET) is entering a new generation. VANET’s current technical challenges are deploying a decentralized architecture and protecting privacy. Because blockchain is decentralized and distributed and provides mass storage and protection against manipulation, this paper designs a new decentralized architecture using Blockchain technology, called Blockchain-based VANET. Blockchain-based VANET can effectively resolve centralized problems and mutual distrust between VANET units. To achieve this, the blockchain needs to be made scalable enough to run for VANET. In this system, our focus is on the reliability of incoming messages on the network. Vehicles check the validity of received messages using the proposed Bayesian formula for the trust management system and information saved in the Blockchain. Then, based on the validation result, each vehicle computes a rating for each message type and message source vehicle. Vehicles upload the computed ratings to Roadside Units (RSUs), which calculate the net reliability value. Finally, the RSUs generate blocks using a sharding consensus mechanism, including the net reliability value as a transaction. In this system, all RSUs collaboratively maintain the latest updated Blockchain. Our experimental results show that the proposed system is effective, scalable and dependable for data gathering, computing, organization and retrieval of trust values in VANET.
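A Bayesian trust update of the kind described above can be sketched with the common Beta-reputation model: a vehicle's trust is the posterior mean of a Beta(alpha, beta) distribution over message validity, updated with each validated or invalidated message. The paper's specific formula may differ; the prior and outcome sequence here are illustrative.

```python
def update_trust(alpha, beta, message_valid):
    """One Beta-reputation update: count valid/invalid messages, return posterior mean."""
    if message_valid:
        alpha += 1
    else:
        beta += 1
    trust = alpha / (alpha + beta)   # posterior mean of Beta(alpha, beta)
    return alpha, beta, trust

alpha, beta = 1, 1                   # uniform prior: no information about the vehicle
for outcome in [True, True, False, True, True]:   # validation results of five messages
    alpha, beta, trust = update_trust(alpha, beta, outcome)
print(round(trust, 3))
```

Ratings of this kind, aggregated per message type and source vehicle, are what the RSUs combine into the net reliability value stored on the Blockchain.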
Patients with focal ventricular tachycardia are at risk of hemodynamic failure, and if no treatment is provided the mortality rate can exceed 30%. Therefore, medical professionals must be adequately trained in the management of these conditions. To achieve the best treatment, the origin of the abnormality should be known, as well as the course of the disease. This study provides an opportunity to visualize various focal ventricular tachycardias using the Offenburg heart rhythm model. Modeling and simulation of focal ventricular tachycardias in the Offenburg heart rhythm model were performed using CST (Computer Simulation Technology) software from Dassault Systèmes. A bundle of nerve tissue in different regions of the left and right ventricle was defined as the focus in the existing heart rhythm model; this served as the origin of the focal excitation sites. For the simulations, the heart rhythm model was divided into a mesh consisting of 5,354,516 tetrahedra, which is required to calculate the electric field lines. The simulations in the Offenburg heart rhythm model successfully represented the progression of focal ventricular tachycardia in the heart using the calculated electric field lines. The simulation results were realized as an animated sequence of images running in real time at a frame rate of 20 frames per second. By changing the frame rate, these simulations can additionally be produced at different speeds. The Offenburg heart rhythm model thus allows the visualization of focal ventricular arrhythmias using computer simulations.