The mobile device industries are subject to rapid change, driven by technological advances and dynamic consumer behaviour. Understanding the mobile device markets is therefore an important step in the analysis phase of mobile application development. In this paper, a brief description of the different markets is given, followed by an analysis of those features of the market leaders' devices that are important in the development of mobile web applications. Finally, approaches are proposed to deal with mobile device diversity.
The growing demand for active implantable medical devices requires data and/or power links between the implant and the outside world. Every implant has to be encapsulated from the body by a specific housing, and one of the most common materials used is titanium or a titanium alloy. Titanium has the necessary properties in terms of mechanical and chemical stability and biocompatibility. However, its electrical conductivity presents a challenge for the electromagnetic transmission of data and power. This paper presents a fast and practical method to determine the necessary transmission parameters for titanium-encapsulated implants. The basic transformer transmission model is used with measured or calculated key values for the inductances. These are then expanded with correction factors to determine the behavior with the encapsulation. The correction factors are extracted from finite element method simulations, which also enable the analysis of the magnetic field distribution inside the housing. The simulated transmission properties are very close to the measured values. Additionally, based on lumped elements and the magnetic field distribution, the influential parameters are discussed in the paper. The parameter discussion describes how to enhance the transmitted power, data rate, or distance, or how to reduce the size of the necessary coils. Finally, an example application demonstrates the usage of the methods.
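As a rough illustration of the correction-factor idea described in this abstract, the following sketch applies FEM-derived factors to air-core inductance values to estimate the coupling with the titanium housing. All coil values and correction factors here are invented for the example, not taken from the paper.

```python
# Hypothetical sketch of the transformer transmission model with
# encapsulation correction factors. Values are illustrative assumptions.
import math

def coupling_factor(L1, L2, M):
    """Coupling coefficient k of the two link coils."""
    return M / math.sqrt(L1 * L2)

def encapsulated_link(L1, L2, M, k_corr_L, k_corr_M):
    """Apply FEM-derived correction factors to the air-core values
    to approximate the behaviour with the titanium housing."""
    L1_enc = L1 * k_corr_L
    L2_enc = L2 * k_corr_L
    M_enc = M * k_corr_M
    return L1_enc, L2_enc, M_enc, coupling_factor(L1_enc, L2_enc, M_enc)

# Illustrative values: 10 uH coils, 1 uH mutual inductance; eddy
# currents in the titanium wall mainly reduce the mutual coupling.
L1e, L2e, Me, k = encapsulated_link(10e-6, 10e-6, 1e-6,
                                    k_corr_L=0.95, k_corr_M=0.60)
print(f"k with encapsulation: {k:.3f}")
```

Lower coupling directly limits the transferable power, which is why the abstract discusses coil size, distance, and data rate as a trade-off.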
Electrolyte-gated transistors (EGTs) represent an interesting alternative to conventional dielectric gating to reduce the high supply voltage required for printed electronic applications. Here, an ink-jet-printable ion-gel is introduced and optimized to fabricate a chemically crosslinked ion-gel by self-assembled gelation, without additional crosslinking processes, e.g., UV-curing. For the self-assembled gelation, poly(vinyl alcohol) and poly(ethylene-alt-maleic anhydride) are used as the polymer backbone and chemical crosslinker, respectively, and 1-ethyl-3-methylimidazolium trifluoromethanesulfonate ([EMIM][OTf]) is utilized as an ionic species to ensure ionic conductivity. The as-synthesized ion-gel exhibits an ionic conductivity of ≈5 mS cm−1 and an effective capacitance of 5.4 µF cm−2 at 1 Hz. The ion-gel is successfully employed in EGTs with an indium oxide (In2O3) channel, which show on/off ratios of up to 1.3 × 106 and a subthreshold swing of 80.62 mV dec−1.
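The subthreshold swing quoted above (mV per decade of drain current) can be extracted from a transfer curve as the steepest slope of V_GS versus log10(I_D). A minimal sketch, with made-up illustrative data points rather than the paper's measurements:

```python
# Hedged sketch: extracting the subthreshold swing SS = dV_GS / d(log10 I_D)
# from transfer-curve samples. The data points are illustrative only.
import math

def subthreshold_swing(vgs, ids):
    """Return SS in mV/dec from the steepest subthreshold region,
    assuming ids increases monotonically with vgs."""
    slopes = []
    for (v1, i1), (v2, i2) in zip(zip(vgs, ids), zip(vgs[1:], ids[1:])):
        decades = math.log10(i2) - math.log10(i1)
        if decades > 0:
            slopes.append((v2 - v1) / decades * 1e3)  # mV per decade
    return min(slopes)

# Illustrative transfer-curve samples (V, A)
vgs = [0.0, 0.1, 0.2, 0.3]
ids = [1e-11, 1e-10, 5e-10, 1e-9]
print(f"SS = {subthreshold_swing(vgs, ids):.1f} mV/dec")
```

A swing near the reported ~80 mV/dec indicates efficient gate coupling, which is the point of the high-capacitance ion-gel.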
Knight Götz von Berlichingen (1480–1562) lost his right hand distal to the wrist due to a cannonball splinter injury in 1504 in the Landshut War of Succession, at the age of 24. Early on, Götz commissioned a gunsmith to build the first “Iron Hand,” in which the artificial thumb and two finger blocks could be moved in their basic joints by a spring mechanism and released by a push button. Some years later, probably around 1530, a second “Iron Hand” was built, in which the fingers could be moved passively in all joints. In this review, the 3D computer-aided design (CAD) reconstructions and 3D multi-material polymer replica prints of the first “Iron Hand,” which were developed in recent years at Offenburg University, are presented. Even by today’s standards, the first “Iron Hand”—as the replicas show—demonstrates sophisticated mechanics and well-thought-out functionality, and it still offers inspiration and food for discussion when it comes to the question of an artificial prosthetic replacement for a hand. It is also outlined how some of the ideas of this mechanical passive prosthesis can be translated into a modern motorized active prosthetic hand by using simple, commercially available electronic components.
In this editorial, a topic for general discussion in the field of upper-limb neuroprosthetics is addressed: which way—invasive or non-invasive—is the right one for the future development of neuroprosthetic concepts. At present, two groups of research priorities (namely the invasive versus the non-invasive approach) seem to be emerging, without a closer look at the wishes, but also the concerns, of the patients. This piece is intended to stimulate discussion on this question.
A novel method for quasi-continuous tar monitoring in hot syngas from biomass gasification is reported. A very small syngas stream is extracted from the gasifier output, and the oxygen demand for tar combustion is determined by a well-defined dosage of synthetic air. Assuming the total oxidation of all combustible components at the Pt electrode of a lambda probe, the difference of the residual oxygen concentrations from successive operations with and without tar condensation represents the oxygen demand. From laboratory experiments with H2/N2/naphthalene model syngas, the linear sensitivity and a lower detection limit of about 70 ± 5 mg/m3 were estimated, and very good long-term stability can be expected. This extremely sensitive and robust monitoring concept was evaluated further by extracting a small, constant flow of hot syngas as a sample (9 L/h) using a Laval nozzle combined with a metallic filter (a sintered metal plate with 10 µm pore diameter) and a gas pump in the cold zone. The first laboratory tests of this setup—which is appropriate for field applications—confirmed the excellent analysis results. However, the field tests on monitoring the tar in syngas from a woodchip-fueled gasifier demonstrated that determining the oxygen demand by successive estimation of the oxygen concentration with and without tar trapping is not possible with sufficient accuracy, due to the continuous variation of the syngas composition. A method is proposed for how this constraint can be overcome.
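The measurement principle reduces to a simple difference: the residual-O2 reading with the tar trap active, minus the reading without it, gives the oxygen demand attributable to the tar. A minimal sketch, where the concentrations are invented illustrative values, not data from the paper:

```python
# Illustrative sketch of the tar-monitoring principle: the residual-O2
# difference between operation with and without the tar trap gives the
# oxygen demand of the tar fraction. Readings below are assumptions.
def tar_oxygen_demand(o2_with_trap, o2_without_trap):
    """Oxygen demand of the tar (vol%), from residual-O2 readings of
    the lambda probe with/without tar condensation upstream."""
    return o2_with_trap - o2_without_trap

demand = tar_oxygen_demand(o2_with_trap=2.4, o2_without_trap=1.9)
print(f"tar oxygen demand: {demand:.2f} vol%")
```

This also makes the field-test problem visible: if the syngas composition drifts between the two successive readings, the difference no longer isolates the tar contribution.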
Simulation-based studies for operational energy system analysis play a significant role in the evaluation of various new technologies and concepts in the energy grid. Various modelling approaches already exist, and in this paper four models representing these approaches are compared in two real-world hybrid energy system scenarios. The models, namely TransiEnt, µGRiDS, and OpSim (including pandaprosumer and mosaic), are classified into component-oriented or system-oriented approaches, as deduced from the literature research. The methodology section describes their differences under standard conditions and the parameterization necessary to create a framework facilitating the closest possible comparison. A novel methodology for scenario generation is also explained. The results help to quantify the primary differences between these approaches that are also identified in the literature, and to qualify the influence of model accuracy for application in a system-wide analysis. It is shown that a simplified model may be sufficient for the system-oriented approach, especially when the objective is optimization-based control or planning. However, from a field-level operational point of view, the differences in the time series show the importance of the component-oriented approaches.
In asymmetric treatment of hearing loss, the processing latencies of the two modalities typically differ. This often alters the reference interaural time difference (ITD) (i.e., the ITD at 0° azimuth) by several milliseconds. Such changes in reference ITD have been shown to influence sound source localization in bimodal listeners provided with a hearing aid (HA) in one ear and a cochlear implant (CI) in the contralateral ear. In this study, the effect of changes in reference ITD on speech understanding, especially spatial release from masking (SRM), was explored in normal-hearing subjects. Speech reception thresholds (SRT) were measured in ten normal-hearing subjects for reference ITDs of 0, 1.75, 3.5, 5.25 and 7 ms with spatially collocated (S0N0) and spatially separated (S0N90) sound sources. Further, the cues for separation of target and masker were manipulated to measure the effect of a reference ITD on unmasking by A) ITDs and interaural level differences (ILDs), B) ITDs only and C) ILDs only. A blind equalization-cancellation (EC) model was applied to simulate all measured conditions. SRM decreased significantly in conditions A) and B) when the reference ITD was increased: in condition A) from 8.8 dB SNR on average at 0 ms reference ITD to 4.6 dB at 7 ms, and in condition B) from 5.5 dB to 1.1 dB. In condition C) no significant effect was found. These results were accurately predicted by the applied EC model. The outcomes show that interaural processing latency differences should be considered in the asymmetric treatment of hearing loss.
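SRM as used above is simply the SRT benefit of spatially separating target and masker. A minimal sketch; the individual SRT values are illustrative (only the resulting 8.8 dB SRM for condition A at 0 ms is taken from the abstract):

```python
# Hedged sketch of spatial release from masking (SRM): the SRT gain when
# the masker moves from collocated (S0N0) to separated (S0N90).
# The SRT values are illustrative, chosen so the SRM matches the
# 8.8 dB reported for condition A at 0 ms reference ITD.
def spatial_release(srt_collocated, srt_separated):
    """SRM in dB: SRT(S0N0) minus SRT(S0N90); a lower SRT is better,
    so positive SRM means spatial separation helps."""
    return srt_collocated - srt_separated

srm = spatial_release(srt_collocated=-2.0, srt_separated=-10.8)
print(f"SRM = {srm:.1f} dB")
```

The study's finding is then that this difference shrinks (to 4.6 dB in condition A) as the reference ITD grows to 7 ms.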
The growth of the Internet of Things (IoT) calls for secure solutions for industrial applications. The security of IoT can potentially be improved by blockchain. However, blockchain technology suffers from scalability issues, which hinder its integration with IoT. Solutions to blockchain’s scalability issues, such as minimizing the computational complexity of consensus algorithms or reducing blockchain storage requirements, have received attention. However, to realize the full potential of blockchain in IoT, the inefficiencies of its inter-peer communication must also be addressed. For example, blockchain uses a flooding technique to share blocks, resulting in duplicates and inefficient bandwidth usage. Moreover, blockchain peers use a random neighbor selection (RNS) technique to decide on other peers with whom to exchange blockchain data. As a result, the peer-to-peer (P2P) topology formation limits the effective achievable throughput. This paper provides a survey on the state-of-the-art network structures and communication mechanisms used in blockchain and establishes the need for network-based optimization. Additionally, it discusses the blockchain architecture and its layers, categorizes the existing literature into these layers, and provides a survey on the state-of-the-art optimization frameworks, analyzing their effectiveness and ability to scale. Finally, this paper presents recommendations for future work.
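The flooding inefficiency mentioned above can be made concrete with a toy simulation: in a random-neighbor topology, counting how many times each peer receives the same block exposes the duplicate transmissions that waste bandwidth. The topology size and neighbor count are arbitrary assumptions for illustration:

```python
# Toy illustration of block flooding over a random-neighbor-selection
# (RNS) topology. Each peer forwards a block to all its neighbors the
# first time it receives it; later copies are duplicates.
import random
from collections import deque

random.seed(1)
N_PEERS, N_NEIGHBORS = 20, 4

# RNS: each peer picks a few other peers uniformly at random.
neighbors = {p: random.sample([q for q in range(N_PEERS) if q != p],
                              N_NEIGHBORS)
             for p in range(N_PEERS)}

# Flood one block starting from peer 0.
received = {p: 0 for p in range(N_PEERS)}
queue = deque([0])
while queue:
    peer = queue.popleft()
    received[peer] += 1
    if received[peer] == 1:        # forward only on first receipt
        queue.extend(neighbors[peer])

duplicates = sum(c - 1 for c in received.values() if c > 0)
print(f"{duplicates} duplicate deliveries of one block")
```

Even in this tiny network, most deliveries are redundant copies, which is the overhead the surveyed network-layer optimizations try to reduce.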
Aerosol particles play an important role in the climate system by absorbing and scattering radiation and by influencing cloud properties. They are also one of the biggest sources of uncertainty in climate modeling. Many climate models do not include aerosols in sufficient detail due to computational constraints. To represent key processes, aerosol microphysical properties and processes have to be accounted for. This is done in the ECHAM-HAM (European Center for Medium-Range Weather Forecasts-Hamburg-Hamburg) global climate aerosol model using the M7 microphysics, but high computational costs make it very expensive to run at finer resolution or for longer times. We aim to use machine learning to emulate the microphysics model at sufficient accuracy and to reduce the computational cost by being fast at inference time. The original M7 model is used to generate input–output pairs on which a neural network (NN) is trained. We are able to learn the variables’ tendencies, achieving an average R² score of 77.1%. We further explore methods to inform and constrain the NN with physical knowledge to reduce mass violation and enforce mass positivity. On a graphics processing unit (GPU), we achieve a speed-up of more than 64× compared to the original model.
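One simple way to enforce the mass positivity mentioned above is to clip each predicted tendency so that no mass variable can be driven below zero within a timestep. The sketch below illustrates that idea only; the variable layout, values, and timestep are assumptions, not the ECHAM-HAM/M7 interface:

```python
# Hedged sketch of a mass-positivity constraint on emulated tendencies:
# clip each tendency so that state + dt * tendency stays non-negative.
# All values are illustrative, not model output.
def enforce_positivity(state, tendency, dt=1.0):
    """Return tendencies clipped so no mass variable goes negative."""
    return [max(t, -s / dt) for s, t in zip(state, tendency)]

state = [4.0, 0.5, 2.0]        # current aerosol masses (arbitrary units)
raw   = [-5.0, -0.1, 1.0]      # NN-predicted tendencies
safe  = enforce_positivity(state, raw)
updated = [s + t for s, t in zip(state, safe)]
print(updated)                 # no negative masses after the step
```

Clipping is only one option; the paper's broader point is that such physical constraints can be built into or around the emulator to reduce mass violations.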