Blockchain-IIoT integration into industrial processes promises greater security, transparency, and traceability. However, this advancement faces significant storage and scalability issues with existing blockchain technologies. Each peer in the blockchain network maintains a full copy of the ledger, which is updated through consensus. This full replication approach places a burden on the storage space of the peers and would quickly outstrip the storage capacity of resource-constrained IIoT devices. Various solutions utilizing compression, summarization, or different storage schemes have been proposed in the literature. The use of cloud resources for blockchain storage has been extensively studied in recent years. Nonetheless, block selection, i.e., identifying the blocks to be transferred to the cloud, remains a substantial challenge in integrating cloud resources with blockchain. This paper proposes a deep reinforcement learning (DRL) approach to the block selection problem, converting its multi-objective optimization into a Markov decision process (MDP). We design a simulated blockchain environment for training and testing our proposed DRL approach. We utilize two DRL algorithms, Advantage Actor-Critic (A2C) and Proximal Policy Optimization (PPO), to solve the block selection problem and analyze their performance gains. PPO and A2C achieve 47.8% and 42.9% storage reduction on the blockchain peer, respectively, compared to the full replication approach of conventional blockchain systems. Even the slowest DRL algorithm, A2C, achieves a run-time 7.2 times shorter than the benchmark evolutionary algorithms used in earlier works, which validates the gains introduced by the DRL algorithms. The simulation results further show that our DRL algorithms provide an adaptive and dynamic solution for the time-sensitive blockchain-IIoT environment.
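The abstract compresses the method into a few sentences; the sketch below shows one plausible way to wire it up with off-the-shelf tooling. The toy environment (per-block features of size, age, and access frequency; a reward trading storage freed against retrieval cost) and all parameters are invented assumptions, not the paper's simulator, and stable-baselines3 stands in for whatever DRL implementation the authors used.

```python
# Hypothetical sketch: block selection as an MDP, trained with PPO or A2C
# from stable-baselines3. All dynamics, features, and reward weights are
# illustrative assumptions, not the paper's actual simulated environment.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO, A2C

N_BLOCKS = 16  # candidate blocks considered per decision step (assumed)

class BlockSelectionEnv(gym.Env):
    """Toy blockchain peer: choose which blocks to offload to the cloud."""

    def __init__(self):
        # Per-block features: size, age, recent access frequency (normalized).
        self.observation_space = spaces.Box(
            low=0.0, high=1.0, shape=(N_BLOCKS, 3), dtype=np.float32)
        # Binary offloading decision per candidate block.
        self.action_space = spaces.MultiBinary(N_BLOCKS)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.blocks = self.np_random.random((N_BLOCKS, 3)).astype(np.float32)
        self.t = 0
        return self.blocks, {}

    def step(self, action):
        act = action.astype(bool)
        # Reward: storage freed on the peer, penalized by an expected
        # cloud-retrieval cost for frequently accessed blocks (assumed weight).
        storage_freed = self.blocks[act, 0].sum()
        retrieval_cost = (self.blocks[act, 2] * 2.0).sum()
        reward = float(storage_freed - retrieval_cost)
        self.blocks = self.np_random.random((N_BLOCKS, 3)).astype(np.float32)
        self.t += 1
        return self.blocks, reward, self.t >= 100, False, {}

env = BlockSelectionEnv()
model = PPO("MlpPolicy", env, verbose=0)  # swap PPO for A2C to compare
model.learn(total_timesteps=10_000)
```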
An Overview of Technologies for Improving Storage Efficiency in Blockchain-Based IIoT Applications
(2022)
Since the inception of blockchain-based cryptocurrencies, researchers have been fascinated with the idea of integrating blockchain technology into other fields, such as health and manufacturing. Despite the benefits of blockchain, which include immutability, transparency, and traceability, certain issues that limit its integration with the IIoT still linger. One of these prominent problems is the storage inefficiency of the blockchain. Due to the append-only nature of the blockchain, the growth of the blockchain ledger inevitably leads to high storage requirements for blockchain peers. This poses a challenge for its integration with the IIoT, where high volumes of data are generated at a considerably faster rate than in applications such as financial systems. Therefore, there is a need for blockchain architectures that deal effectively with the rapid growth of the blockchain ledger. This paper discusses the problem of storage inefficiency in existing blockchain systems, how this affects their scalability, and the challenges that this poses to their integration with the IIoT. This paper explores existing solutions for improving the storage efficiency of blockchain–IIoT systems, classifying these proposed solutions according to their approaches and providing insight into their effectiveness through a detailed comparative analysis and an examination of their long-term sustainability. Potential directions for future research on the enhancement of storage efficiency in blockchain–IIoT systems are also discussed.
In this paper, a concept for an anthropomorphic replacement hand cast in silicone with an integrated sensory feedback system is presented. To construct the personalized replacement hand, a 3D scan of a healthy hand was used to create a 3D-printed mold using computer-aided design (CAD). To allow for movement of the index and middle fingers, a motorized orthosis was used. Information about the force applied for grasping and the degree of flexion of the fingers is registered using two pressure sensors and one bending sensor in each movable finger. To integrate the sensors and additional cavities for increased flexibility, the fingers were cast in three parts, separately from the rest of the hand. A silicone adhesive (Silpuran 4200) was evaluated for joining the individual parts afterwards; for this, tests with different geometries were carried out. Furthermore, different test series for the secure integration of the sensors were performed, including measurements of the information registered by the sensors. Based on these findings, skin-toned individual fingers and a replacement hand with integrated sensors were created. Using Silpuran 4200, it was possible to integrate the needed cavities and to place the sensors securely into the hand while retaining full flexion using a motorized orthosis. The measurements during different loadings and while grasping various objects proved that it is possible to realize such a sensory feedback system in a replacement hand. As a result, it can be stated that the cost-effective realization of a personalized, anthropomorphic replacement hand with an integrated sensory feedback system is possible using 3D scanning and 3D printing. By integrating smaller sensors, the risk of damaging the sensors through movement could be decreased further.
Subjects using a cochlear implant (CI) in one ear and a hearing aid (HA) in the contralateral ear experience mismatches in stimulation timing due to the different processing latencies of the two devices. This device delay mismatch leads to a temporal mismatch in auditory nerve stimulation. Compensating for the device delay mismatch, and thereby for the auditory nerve stimulation mismatch, can significantly improve sound source localization accuracy. One CI manufacturer has already implemented the possibility of mismatch compensation in its current fitting software. This study investigated whether this fitting parameter can be readily used in clinical settings and determined the effects of familiarization with a compensated device delay mismatch over a period of 3–4 weeks. Sound localization accuracy and speech understanding in noise were measured in eleven bimodal CI/HA users, with and without compensation of the device delay mismatch. The results showed that the sound localization bias improved to 0°, implying that the localization bias towards the CI was eliminated when the device delay mismatch was compensated. The RMS error improved by 18%, although this improvement did not reach statistical significance. The effects were acute and did not improve further after 3 weeks of familiarization. For the speech tests, spatial release from masking did not improve with a compensated mismatch. The results show that this fitting parameter can be readily used by clinicians to improve sound localization ability in bimodal users. Further, our findings suggest that subjects with poor sound localization ability benefit the most from the device delay mismatch compensation.
In asymmetric treatment of hearing loss, the processing latencies of the two modalities typically differ. This often alters the reference interaural time difference (ITD) (i.e., the ITD at 0° azimuth) by several milliseconds. Such changes in reference ITD have been shown to influence sound source localization in bimodal listeners provided with a hearing aid (HA) in one ear and a cochlear implant (CI) in the contralateral ear. In this study, the effect of changes in reference ITD on speech understanding, especially spatial release from masking (SRM), was explored in normal-hearing subjects. Speech reception thresholds (SRT) were measured in ten normal-hearing subjects for reference ITDs of 0, 1.75, 3.5, 5.25, and 7 ms with spatially collocated (S0N0) and spatially separated (S0N90) sound sources. Further, the cues for separating target and masker were manipulated to measure the effect of a reference ITD on unmasking by A) ITDs and interaural level differences (ILDs), B) ITDs only, and C) ILDs only. A blind equalization-cancellation (EC) model was applied to simulate all measured conditions. SRM decreased significantly in conditions A) and B) when the reference ITD was increased: in condition A) from 8.8 dB SNR on average at 0 ms reference ITD to 4.6 dB at 7 ms, and in condition B) from 5.5 dB to 1.1 dB. In condition C), no significant effect was found. These results were accurately predicted by the applied EC model. The outcomes show that interaural differences in processing latency should be considered in the asymmetric treatment of hearing loss.
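As a brief aside on the metric itself: SRM is the gain in speech reception threshold obtained by spatially separating target and masker, i.e., the SRT difference between the collocated and separated conditions. The snippet below is a minimal illustration; the individual SRT values are invented, and only their difference matches the 8.8 dB SRM reported for condition A at 0 ms reference ITD.

```python
# Illustrative only: SRM as the SRT difference between collocated (S0N0)
# and spatially separated (S0N90) conditions. The SRT values themselves are
# assumed; only their difference matches the abstract's 8.8 dB figure.
srt_s0n0 = -5.0    # dB SNR, collocated target and masker (assumed)
srt_s0n90 = -13.8  # dB SNR, masker moved to 90 deg azimuth (assumed)
srm = srt_s0n0 - srt_s0n90
print(f"SRM = {srm:.1f} dB")  # -> SRM = 8.8 dB
```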
The growth of the Internet of Things (IoT) calls for secure solutions for industrial applications. The security of the IoT can potentially be improved by blockchain. However, blockchain technology suffers from scalability issues, which hinders its integration with the IoT. Solutions to blockchain's scalability issues, such as minimizing the computational complexity of consensus algorithms or reducing blockchain storage requirements, have received attention. However, to realize the full potential of blockchain in the IoT, the inefficiencies of its inter-peer communication must also be addressed. For example, blockchain uses a flooding technique to share blocks, resulting in duplicates and inefficient bandwidth usage. Moreover, blockchain peers use a random neighbor selection (RNS) technique to decide which other peers to exchange blockchain data with. As a result, the peer-to-peer (P2P) topology formation limits the effectively achievable throughput. This paper provides a survey of the state-of-the-art network structures and communication mechanisms used in blockchain and establishes the need for network-based optimization. Additionally, it discusses the blockchain architecture and its layers, categorizes the existing literature into these layers, and provides a survey of state-of-the-art optimization frameworks, analyzing their effectiveness and ability to scale. Finally, this paper presents recommendations for future work.
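To make the bandwidth inefficiency concrete, the sketch below simulates block propagation by flooding over a topology formed by random neighbor selection and counts duplicate transmissions. The peer count, fanout, and single-shot forwarding model are simplifying assumptions, not a faithful model of a real blockchain network.

```python
# Minimal sketch (assumed parameters): flooding over an RNS topology,
# counting how many transmissions are redundant duplicates.
import random

N_PEERS, FANOUT = 50, 4

# Each peer picks FANOUT random neighbors (random neighbor selection).
neighbors = {p: random.sample([q for q in range(N_PEERS) if q != p], FANOUT)
             for p in range(N_PEERS)}

received, frontier, transmissions = {0}, [0], 0
while frontier:
    nxt = []
    for peer in frontier:
        for n in neighbors[peer]:
            transmissions += 1          # every forward costs bandwidth
            if n not in received:       # forwards to known holders are wasted
                received.add(n)
                nxt.append(n)
    frontier = nxt

duplicates = transmissions - (len(received) - 1)
print(f"peers reached: {len(received)}/{N_PEERS}, "
      f"transmissions: {transmissions}, duplicates: {duplicates}")
```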
This paper provides a comprehensive overview of approaches to the determination of isocontours and isosurfaces from given data sets. Different algorithms have been reported in the literature for this purpose, originating from various application areas such as computer graphics and medical imaging. In all these applications, the challenge is to extract surfaces with a specific isovalue, so-called isosurfaces, from the given data. These different application areas have given rise to solution approaches that all solve the problem of isocontouring in their own way. Based on the literature, the following four dominant methods can be identified: marching cubes algorithms, tessellation-based algorithms, surface nets algorithms, and ray tracing algorithms. With regard to their application, the methods are mainly used in the fields of medical imaging, computer graphics, and the visualization of simulation results. In our work, we provide a broad yet compact overview of the methods currently used for isocontouring, evaluated against certain criteria and with respect to their individual limitations. In this context, we discuss the individual methods and identify possible future research directions in the field of isocontouring.
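As a quick illustration of the first of these method families, the snippet below uses scikit-image's marching cubes implementation to extract an isosurface at a chosen isovalue from a synthetic scalar field; the field, grid resolution, and isovalue are arbitrary assumptions for demonstration.

```python
# Sketch: isosurface extraction with marching cubes (scikit-image).
# The scalar field and isovalue are assumptions chosen for illustration.
import numpy as np
from skimage import measure

# Synthetic scalar field: signed distance to a sphere of radius 0.5.
x, y, z = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
field = np.sqrt(x**2 + y**2 + z**2) - 0.5

# Extract the isosurface at isovalue 0 (the sphere's surface) as a
# triangle mesh: vertices, faces, per-vertex normals, and field values.
verts, faces, normals, values = measure.marching_cubes(field, level=0.0)
print(f"{len(verts)} vertices, {len(faces)} triangles")
```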
With the function RooTri(), we present a simple and robust calculation method for approximating the intersection points of a scalar field, given as an unstructured point cloud, with an arbitrarily oriented plane in space. The point cloud is approximated by a surface consisting of triangles, whose edges are used for computing the intersection points. Matlab's function contourc() is taken as a reference. Our experiments show that contourc() produces outliers that deviate significantly from the defined nominal value, while the quality of the results produced by RooTri() increases with finer resolution of the examined grid.
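The following Python sketch re-creates the basic idea (it is not the authors' Matlab implementation): triangulate the point cloud, then linearly interpolate along each triangle edge whose endpoints lie on opposite sides of the plane. The test field and plane are assumptions.

```python
# Hedged re-implementation sketch of the RooTri() idea, not the original code:
# triangulate an unstructured point cloud, then intersect each triangle edge
# with a plane via linear interpolation of the signed distance.
import numpy as np
from scipy.spatial import Delaunay

def plane_intersections(points, n, d):
    """points: (N, 3) cloud; plane defined by n . p = d with unit normal n."""
    tri = Delaunay(points[:, :2])      # triangulate over the xy-projection
    dist = points @ n - d              # signed distance of each vertex
    hits = []
    for simplex in tri.simplices:
        for i, j in ((0, 1), (1, 2), (2, 0)):
            a, b = simplex[i], simplex[j]
            da, db = dist[a], dist[b]
            if da * db < 0:            # edge endpoints straddle the plane
                t = da / (da - db)     # linear interpolation parameter
                hits.append(points[a] + t * (points[b] - points[a]))
    # Note: edges shared by two triangles yield duplicate points; a real
    # implementation would deduplicate them.
    return np.asarray(hits)

# Example: scalar field z = sin(x) * cos(y) sampled at random (x, y) points.
rng = np.random.default_rng(0)
xy = rng.uniform(-2, 2, (2000, 2))
cloud = np.column_stack([xy, np.sin(xy[:, 0]) * np.cos(xy[:, 1])])
pts = plane_intersections(cloud, n=np.array([0.0, 0.0, 1.0]), d=0.3)  # z = 0.3
print(len(pts), "intersection points")
```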
An in-depth study of U-net for seismic data conditioning: Multiple removal by moveout discrimination
(2024)
Seismic processing often involves suppressing multiples that are an inherent component of collected seismic data. Elaborate multiple prediction and subtraction schemes such as surface-related multiple removal have become standard in industry workflows. In cases of limited spatial sampling, low signal-to-noise ratio, or conservative subtraction of the predicted multiples, the processed data frequently suffer from residual multiples. To tackle these artifacts in the postmigration domain, practitioners often rely on Radon transform-based algorithms. However, such traditional approaches are both time-consuming and parameter dependent, making them relatively complex. In this work, we present a deep learning-based alternative that provides competitive results while reducing the complexity of its usage and hence simplifying its applicability. Our proposed model demonstrates excellent performance when applied to complex field data, despite being trained exclusively on synthetic data. Furthermore, extensive experiments show that our method can preserve the inherent characteristics of the data, avoiding undesired oversmoothed results, while removing the multiples from seismic offset or angle gathers. Finally, we conduct an in-depth analysis of the model, in which we pinpoint the effects of the main hyperparameters on real-data inference and probabilistically assess its performance from a Bayesian perspective. Throughout this study, we put particular emphasis on helping the user understand the inner workings of the neural network and attempt to unbox the model.
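For readers unfamiliar with the architecture, the sketch below shows a minimal PyTorch U-Net operating on image-like gathers. The depth, channel counts, and input size are illustrative assumptions and do not reproduce the paper's actual network or training setup.

```python
# Minimal U-Net sketch (assumed architecture details). Input: a gather with
# residual multiples; output: the predicted multiple-free gather.
import torch
import torch.nn as nn

def block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc1, self.enc2 = block(1, 32), block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = block(128, 64)     # 128 in: upsampled + skip connection
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = block(64, 32)
        self.out = nn.Conv2d(32, 1, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.out(d1)            # predicted multiple-free gather

gather = torch.randn(1, 1, 128, 128)   # (batch, channel, time, offset)
print(TinyUNet()(gather).shape)        # -> torch.Size([1, 1, 128, 128])
```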
Seismic data processing involves techniques for dealing with undesired effects that occur during acquisition and pre-processing. These effects mainly comprise coherent artefacts such as multiples, non-coherent signals such as electrical noise, and loss of signal information at the receivers that leads to incomplete traces. In recent years, there has been a remarkable increase in machine-learning-based solutions addressing these issues. In particular, deep-learning practitioners have usually relied on heavily fine-tuned, customized discriminative algorithms. Although these methods can provide solid results, they seem to lack a semantic understanding of the provided data. Motivated by this limitation, in this work we employ a generative solution, as it can explicitly model complex data distributions and hence yield a better decision-making process. In particular, we introduce diffusion models for three seismic applications: demultiple, denoising, and interpolation. To that end, we run experiments on synthetic and real data, and we compare the diffusion performance with standardized algorithms. We believe that our pioneering study not only demonstrates the capability of diffusion models but also opens the door to future research integrating generative models into seismic workflows.
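As background on the generative mechanism, the snippet below sketches the DDPM-style forward (noising) process that diffusion models learn to invert. The linear beta schedule and tensor shapes are assumptions, and the learned reverse network, which would be conditioned for demultiple, denoising, or interpolation, is omitted.

```python
# Illustrative sketch (assumptions: linear beta schedule, DDPM-style forward
# process) of how a diffusion model gradually noises a seismic patch; the
# learned reverse denoising network is omitted for brevity.
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)            # linear noise schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)   # cumulative product alpha_bar_t

def q_sample(x0, t):
    """Sample x_t ~ q(x_t | x_0) in closed form."""
    noise = torch.randn_like(x0)
    a = alphas_bar[t].sqrt()
    s = (1.0 - alphas_bar[t]).sqrt()
    return a * x0 + s * noise, noise

x0 = torch.randn(1, 1, 128, 128)  # stand-in for a clean seismic patch
xt, eps = q_sample(x0, t=500)     # heavily noised version at step t = 500
# A denoiser would be trained to predict `eps` from (xt, t), then applied
# step by step in reverse to generate clean data conditioned on observations.
```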