With the surge in global data consumption driven by the proliferation of the Internet of Things (IoT), remote monitoring and control is becoming increasingly popular, with applications ranging from emergency response in remote regions to the monitoring of environmental parameters. Mesh networks are being employed to alleviate a number of issues associated with single-hop communication, such as limited area coverage and range, low reliability, and high energy consumption. Low-power Wireless Personal Area Networks (LoWPANs) are being used to help realize and broaden the applicability of IoT. In this paper, we present the design and test of IEEE 802.15.4-compliant smart IoT nodes with multi-hop routing. We first discuss the features of the software stack and the hardware design choices that resulted in high RF output power, and then present field test results of different baseline network topologies in both rural and urban settings to demonstrate the deployability and scalability of our solution.
In the domain of printed electronics (PE), field-effect transistors (FETs) with an oxide semiconductor channel are very promising. In particular, the high gate capacitance of composite solid polymer electrolytes (CSPEs) used as gate insulators ensures extremely low voltage requirements. Besides high gate capacitance, such CSPEs are proven to be easily printable, stable in air over wide temperature ranges, and possess high ion conductivity. However, these CSPEs can be sensitive to moisture, especially in printed thin films with a high surface-to-volume ratio. In this paper, we provide a comprehensive experimental study on the effect of humidity on CSPE-gated single transistors. At the circuit level, the performance of ring oscillators (ROs) has been compared under various humidity conditions. The experimental results of the electrolyte-gated FETs (EGFETs) demonstrate rather comparable currents across humidity levels between 30% and 90%. However, the shifted transistor parameters lead to a significant change in the RO frequency behavior. The study in this paper shows the need for an impermeable encapsulation of the CSPE-gated FETs to ensure identical performance under all humidity conditions.
Printed electrolyte-gated oxide electronics is an emerging electronic technology in the low-voltage regime (≤1 V). Whereas in the past mainly dielectrics have been used for gating the transistors, many recent approaches employ the advantages of solution-processable solid polymer electrolytes or ion gels, which provide high gate capacitances produced by a Helmholtz double layer, allowing for low-voltage operation. Herein, with special focus on work performed at KIT, recent advances in building electronic circuits based on indium oxide, n-type electrolyte-gated field-effect transistors (EGFETs) are reviewed. When integrated into ring oscillator circuits, a digital performance ranging from 250 Hz at 1 V up to 1 kHz is achieved. Sequential circuits such as memory cells are also demonstrated. More complex circuits are feasible but remain challenging, also because of the high variability of the printed devices. However, the device-inherent variability can even be exploited in security circuits such as physically unclonable functions (PUFs), which output a reliable and unique, device-specific digital response signal. As an overall advantage of the technology, all the presented circuits can operate at very low supply voltages (0.6 V), which is crucial for low-power printed electronics applications.
Current training methods for deep neural networks boil down to very high-dimensional and non-convex optimization problems, which are usually solved by a wide range of stochastic gradient descent methods. While these approaches tend to work in practice, there are still many gaps in the theoretical understanding of key aspects like convergence and generalization guarantees, which are induced by the properties of the optimization surface (loss landscape). In order to gain deeper insights, a number of recent publications proposed methods to visualize and analyze the optimization surfaces. However, the computational cost of these methods is very high, making it hardly possible to use them on larger networks. In this paper, we present the GradVis Toolbox, an open source library for efficient and scalable visualization and analysis of deep neural network loss landscapes in TensorFlow and PyTorch. Introducing more efficient mathematical formulations and a novel parallelization scheme, GradVis allows plotting 2D and 3D projections of optimization surfaces and trajectories, as well as high-resolution second-order gradient information for large networks.
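The core recipe behind such 2D loss-landscape projections can be sketched independently of the toolbox. Below, a toy quadratic loss stands in for a real network loss, and the two random normalized directions and the grid resolution are illustrative assumptions, not the GradVis API:

```python
import numpy as np

# Toy stand-in for a network loss: a positive-definite quadratic bowl.
rng = np.random.default_rng(0)
A = rng.normal(size=(10, 10))
H = A @ A.T + np.eye(10)            # positive-definite "Hessian"
w_star = rng.normal(size=10)        # "trained" parameter vector

def loss(w):
    d = w - w_star
    return 0.5 * d @ H @ d

# Two random, normalized directions span the projection plane.
d1 = rng.normal(size=10); d1 /= np.linalg.norm(d1)
d2 = rng.normal(size=10); d2 /= np.linalg.norm(d2)

# surface[i, j] = loss(w* + a_i * d1 + b_j * d2); plotting this grid as a
# contour or 3D plot gives the familiar loss-landscape picture.
alphas = np.linspace(-1.0, 1.0, 25)
betas = np.linspace(-1.0, 1.0, 25)
surface = np.array([[loss(w_star + a * d1 + b * d2) for b in betas]
                    for a in alphas])
print(surface.shape)  # (25, 25); the minimum sits at the grid centre
```

For real networks the cost is dominated by one forward pass per grid point, which is what makes more efficient formulations and parallelization worthwhile.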
We present our twenty years of experience in the live broadcasting of astronomical events, with the main focus on total lunar eclipses. Our efforts were motivated by the great impact and high number of viewers of these events. Visitors from over a hundred countries watched our live broadcasts. Our viewer record was set on July 27, 2018, with the live transmission of the total lunar eclipse from the Feldberg, the highest mountain in the Black Forest, attracting nearly half a million viewers in five hours.
An especially challenging activity was the live observation of the Mercury transit on 9 May 2016, which we presented as ‘live astronomy’ with a hands-on telescope. The main goal of this event was to awaken our students’ enthusiasm for optics and astronomy.
Furthermore, we report on our experiences with the photography of optical phenomena such as polar lights and green flash.
Art and Photonics
(2019)
In this paper we report on our continuous efforts to apply optics and photonics in art. This results in interdisciplinary projects which sometimes lead to concrete art installations.
We presented some of these projects at the UNESCO headquarters in Paris, at the opening ceremony of the International Year of Light and the inaugural ceremony of the International Day of Light.
Some newer projects, such as “A Maze: Ingenious Pipes” and “The Power of Your Eyes,” are also presented in this paper.
After the successful International Year of Light 2015, the idea of sustainability gained increasing prominence. After a preparatory year, the International Day of Light was launched for the first time on 16 May 2018. The event was marked with a public celebration at the UNESCO headquarters in Paris. In this paper we present our projects dedicated to the International Day of Light in Paris. Together with a group of students from our university, we had the special opportunity to be integrated into the program of the opening ceremony at UNESCO in Paris. With our interdisciplinary projects we have tried to build a bridge between optics, photonics, art and media installations.
As part of the design education at Offenburg University, the teaching of technical documentation is continuously optimised. In this study, numerous mechanical engineering students, aged 19 to 29, are observed using eye tracking technology and a video camera while performing various design exercises. The aim of the study is to enhance the students’ ability to read, understand and analyse complex engineering drawings. In one experiment, the students are asked to perform the “cube perspective test” after Stumpf and Fay to assess their capacity for mental rotation as part of spatial visualization ability. Furthermore, the students are asked to prepare and give micro presentations on a topic related to their studies. Students have a maximum of 100 s for these presentations. Thus, they can practise presenting important information in a short amount of time, show their rhetorical skills and demonstrate their acquisition of basic knowledge. During the presentation, the eye movements of a few selected students are recorded to analyse their information acquisition. In a further test, the students’ eye movements are analysed while they read an engineering drawing that consists of multiple views. All the spatial connections have to be inferred from the different component views. Using these and their acquired knowledge, the students are asked to identify the correct representation of a component view. Furthermore, the subjects describe the function of an assembly, a parallel gripper, and are then asked to mentally disassemble the assembly in order to replace a damaged cylindrical pin. Simultaneously, they are filmed with a video camera to record which expressions the students use for the individual technical terms. The evaluation of the eye movements shows that the increasing digitalisation of society and the use of electronic devices in everyday life lead to fast and only selective perceptual behaviour, and that students feel insecure when dealing with technical drawings. The analysis of the videos shows a mostly non-technical and inaccurate manner of expression and a poor use of technical terms. The transferability of the achieved results to other technical tasks is part of further investigations.
A Novel Approach of High Dynamic Current Control of Interior Permanent Magnet Synchronous Machines
(2019)
Harmonic effects in permanent magnet synchronous machines with high power density can hardly be handled by traditional PI current controllers due to their limited bandwidth. As a consequence, current and ultimately torque ripples appear. In this paper, a new deadbeat current controller architecture is presented which is capable of counteracting the effects of these harmonics. This new control algorithm, here named “Hybrid Deadbeat Controller”, combines the stability and low steady-state errors offered by common PI regulators with the high dynamics offered by deadbeat control. The proposed algorithm is thus capable of either compensating the current harmonics to obtain smoother currents or tracking a varying reference value to achieve a smoother torque. The information needed to calculate the optimal reference currents is based on an online parameter estimation feeding an optimization algorithm to achieve an optimal torque output, and will be investigated in future research. In order to ensure the stability of the controller over the whole operating area, even under the influence of effects changing the system’s parameters, this work also focuses on the robustness of the hybrid deadbeat controller.
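The deadbeat principle itself can be illustrated on a single-phase R-L load: invert the discrete plant model so that the commanded voltage drives the current to its reference in one sampling step. This is a minimal sketch with assumed parameter values, not the paper's hybrid dq-frame controller:

```python
# Deadbeat current control on a forward-Euler R-L plant model.
R, L, Ts = 0.5, 1e-3, 1e-4          # ohm, henry, seconds (assumed values)

def plant_step(i, v):
    """Discrete R-L plant: di/dt = (v - R*i)/L, one forward-Euler step."""
    return i + Ts * (v - R * i) / L

def deadbeat_voltage(i, i_ref):
    """Invert the plant model: voltage that reaches i_ref in one sample."""
    return R * i + L * (i_ref - i) / Ts

i, i_ref = 0.0, 10.0
for _ in range(3):
    i = plant_step(i, deadbeat_voltage(i, i_ref))
print(i)  # converges to the 10 A reference in a single step (exact model match)
```

In practice the plant model never matches exactly, which is why the paper blends the deadbeat action with a PI term and studies robustness against parameter changes.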
More than 200 years ago, the scientist Alexander von Humboldt, fascinated by nature and the phenomena he observed, noted in his travel diaries that "everything is interconnectedness". The view of nature has become much more detailed through the knowledge of phenomena and natural processes, leading to the more precise view of nature shaped by Humboldt. Technological progress and the artificial intelligence of highly developed computer systems are upsetting this view and changing the established world view through a new, unprecedented interaction between man and machine. Thus, we need digital axioms and comprehensive rules and laws for such autonomously acting systems that govern the interaction between cybernetic systems and biological individuals. This digital humanism should encompass our relationship to nature, our handling of the complexity and diversity of nature, and the technological influences on society, in order to avoid technical colonialism through supercomputers.
Dissertation D. Dongol
This paper presents a model predictive control (MPC) based approach for the peak-shaving application of a battery in a photovoltaic (PV) battery system connected to a rural low-voltage grid. The goals of the MPC are to shave the peaks in the PV feed-in and the grid power consumption and, at the same time, to maximize the use of the battery. The prosumer benefits from the maximized use of self-produced electricity; the grid benefits from the reduced peaks in the PV feed-in and the grid power consumption. This would allow an increase in the PV hosting and load hosting capacity of the grid.
The paper presents the mathematical formulation of the optimal control problem along with a cost-benefit analysis. The MPC implementation scheme in the laboratory and experimental results are also presented. The results show that the MPC is able to track deviations in the weather forecast and operate the battery by solving the optimal control problem to handle these deviations.
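The peak-shaving objective can be illustrated with a deliberately simplified, rule-based battery dispatch; the threshold, capacity, and load profile below are made-up numbers, and a greedy per-step rule stands in for the receding-horizon optimization actually used in the paper:

```python
# Greedy peak-shaving sketch: a battery clips PV feed-in and grid draw to a
# threshold. (The paper solves a receding-horizon optimal control problem;
# this rule-based stand-in only illustrates the peak-shaving objective.)
P_LIMIT = 2.0      # kW threshold for both feed-in and consumption (assumed)
E_MAX = 5.0        # kWh usable battery capacity (assumed)
DT = 1.0           # hours per step

def shave(net_load, soc):
    """net_load > 0: consumption peak; net_load < 0: PV feed-in peak."""
    grid = net_load
    if net_load > P_LIMIT:                      # discharge to shave demand
        discharge = min(net_load - P_LIMIT, soc / DT)
        grid, soc = net_load - discharge, soc - discharge * DT
    elif net_load < -P_LIMIT:                   # charge to shave feed-in
        charge = min(-P_LIMIT - net_load, (E_MAX - soc) / DT)
        grid, soc = net_load + charge, soc + charge * DT
    return grid, soc

soc, peaks = 2.0, []
for p in [-4.0, -3.0, 1.0, 3.5, 4.0]:           # toy net-load profile in kW
    g, soc = shave(p, soc)
    peaks.append(g)
print(peaks)  # every grid power stays within the ±2 kW threshold
```

The MPC replaces this greedy rule with an optimization over a forecast horizon, which is what lets it prepare the battery's state of charge for peaks it has not yet seen.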
Sweaty has already participated four times in RoboCup soccer competitions (Adult Size) and came second three times. While in 2016 Sweaty needed a lot of luck to reach the final, in 2017 Sweaty was a serious adversary in the preliminary rounds. In 2018 Sweaty made it to the final with some lack of experience and room for improvement, but not without a chance. This paper describes the intended improvements of the humanoid adult-size robot Sweaty in order to qualify for the RoboCup 2019 adult size competition.
Deep generative models have recently achieved impressive results for many real-world applications, successfully generating high-resolution and diverse samples from complex datasets. As a consequence, fake digital content has proliferated, raising growing concern and spreading distrust in image content, leading to an urgent need for automated ways to detect these AI-generated fake images.
Despite the fact that many face editing algorithms seem to produce realistic human faces, upon closer examination they do exhibit artifacts in certain domains which are often hidden to the naked eye. In this work, we present a simple way to detect such fake face images, so-called DeepFakes. Our method is based on a classical frequency-domain analysis followed by a basic classifier. Compared to previous systems, which need to be fed large amounts of labeled data, our approach showed very good results using only a few annotated training samples and even achieved good accuracies in fully unsupervised scenarios. For the evaluation on high-resolution face images, we combined several public datasets of real and fake faces into a new benchmark: Faces-HQ. Given such high-resolution images, our approach reaches a perfect classification accuracy of 100% when it is trained on as few as 20 annotated samples. In a second experiment, on the medium-resolution images of the CelebA dataset, our method achieves 100% accuracy in the supervised and 96% in the unsupervised setting. Finally, on the low-resolution video sequences of the FaceForensics++ dataset, our method achieves 91% accuracy in detecting manipulated videos.
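The frequency-domain pipeline can be sketched in a few lines: take the 2D FFT, azimuthally average the power spectrum into a 1D profile, and classify on the high-frequency tail. The synthetic "real" and "fake" images below are illustrative stand-ins, not faces, and the radius bins are chosen for this toy example only:

```python
import numpy as np

def azimuthal_average(img):
    """2D FFT power spectrum reduced to a 1D profile over frequency radius."""
    psd = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = psd.shape
    y, x = np.indices((h, w))
    r = np.hypot(x - w // 2, y - h // 2).astype(int)
    # Mean power per integer radius bin (binned sums / bin counts).
    return np.bincount(r.ravel(), psd.ravel()) / np.bincount(r.ravel())

xx, yy = np.meshgrid(np.arange(64), np.arange(64))
real = np.sin(2 * np.pi * 3 * xx / 64)        # smooth, low-frequency image
fake = real + 0.3 * (-1.0) ** (xx + yy)       # same image + Nyquist artifact

p_real, p_fake = azimuthal_average(real), azimuthal_average(fake)
hi = slice(40, 46)                            # upper frequency-radius bins
print(p_fake[hi].mean() > p_real[hi].mean())  # True: the artifact shows up here
```

A classifier (or, in the unsupervised case, a simple threshold) on such spectral profiles is all the detection pipeline then needs.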
Recent deep learning based approaches have shown remarkable success on object segmentation tasks. However, there is still room for further improvement. Inspired by generative adversarial networks, we present a generic end-to-end adversarial approach, which can be combined with a wide range of existing semantic segmentation networks to improve their segmentation performance. The key element of our method is to replace the commonly used binary adversarial loss with a high-resolution pixel-wise loss. In addition, we train our generator in a stochastic weight averaging fashion, which further enhances the predicted output label maps, leading to state-of-the-art results. We show that this combination of pixel-wise adversarial training and weight averaging leads to significant and consistent gains in segmentation performance compared to the baseline models.
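Stochastic weight averaging itself is a small mechanism: maintain a running average of the weights visited along the training trajectory and use the averaged weights for inference. A scalar toy version, with made-up snapshot values, looks like this:

```python
# Sketch of stochastic weight averaging (SWA): average the weights captured
# along the training trajectory. (Toy scalar "weights" stand in for the
# generator's parameter tensors.)
def swa_update(avg, w, n):
    """Incorporate the n-th snapshot w into the running average avg."""
    return (avg * n + w) / (n + 1)

snapshots = [1.0, 3.0, 2.0, 4.0]   # weights captured at successive epochs
avg = snapshots[0]
for n, w in enumerate(snapshots[1:], start=1):
    avg = swa_update(avg, w, n)
print(avg)  # 2.5, the mean of the visited weights
```

For a real generator the same update is applied elementwise to every parameter tensor, typically only over the last phase of training.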
Recent studies have shown remarkable success in image-to-image translation for attribute transfer applications. However, most existing approaches are based on deep learning and require abundant labeled data to produce good results, which limits their applicability. In the same vein, recent advances in meta-learning have led to successful implementations with limited available data, allowing so-called few-shot learning.
In this paper, we address this limitation of supervised methods by proposing a novel approach based on GANs. These are trained in a meta-training manner, which allows them to perform image-to-image translations using just a few labeled samples from a new target class. This work empirically demonstrates the potential of training a GAN for few-shot image-to-image translation on hair color attribute synthesis tasks, opening the door to further research on generative transfer learning.
In this preliminary report, we present a simple but very effective technique to stabilize the training of CNN-based GANs. Motivated by recently published methods using frequency decomposition of convolutions (e.g. Octave Convolutions), we propose a novel convolution scheme to stabilize the training and reduce the likelihood of a mode collapse. The basic idea of our approach is to split convolutional filters into additive high- and low-frequency parts, while shifting weight updates from low to high during the training. Intuitively, this method forces GANs to learn low-frequency coarse image structures before descending into fine (high-frequency) details. Our approach is orthogonal and complementary to existing stabilization methods and can simply be plugged into any CNN-based GAN architecture. First experiments on the CelebA dataset show the effectiveness of the proposed method.
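One reading of that idea in a few lines of NumPy: decompose each kernel into a low-frequency part and a high-frequency residual, and blend them with a factor that ramps from 0 to 1 over training. The per-kernel mean as "low-pass" and the linear ramp are illustrative simplifications, not the exact scheme of the paper:

```python
import numpy as np

# Split a convolution kernel k into k = low + high, where "low" is a local
# average of k and "high" the residual. During training the effective kernel
# k_eff = low + alpha * high ramps alpha from 0 to 1, so early updates act
# only on coarse (low-frequency) structure.
def split_kernel(k):
    low = np.full_like(k, k.mean())   # crude low-pass: per-kernel mean
    return low, k - low               # residual carries the high frequencies

def blended(k, alpha):
    low, high = split_kernel(k)
    return low + alpha * high

rng = np.random.default_rng(0)
k = rng.normal(size=(3, 3))
k0 = blended(k, 0.0)                  # start of training: pure low-pass part
k1 = blended(k, 1.0)                  # end of training: full kernel restored
print(np.allclose(k1, k), np.ptp(k0) == 0.0)  # True True
```

A smoother low-pass (e.g. a Gaussian blur of the kernel) and a tuned alpha schedule would be the natural refinements in an actual GAN training loop.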
Background: High-frequency ablation is an established method for the treatment of tachycardic arrhythmias. Ablation with high-frequency current leads to the targeted thermal destruction of myocardial tissue at specific sites and thus prevents the pathological propagation of excitation through these structures.
Purpose: The aim of this study was to simulate heat propagation during RF ablation with modeled electrodes in different sizes and materials. The simulation was performed on atrioventricular node re-entry tachycardia (AVNRT), atrioventricular re-entry tachycardia (AVRT) and atrial flutter (AFL).
Methods: Using the modeling and simulation software CST, ablation catheters with 4 mm and 8 mm tip electrodes were modeled in both gold and platinum. The designed catheters correspond to the manufacturers' specifications of Medtronic, Biotronik and Osypka. The catheters were integrated into the Offenburg heart rhythm model to simulate and compare the heat propagation during an ablation application, which also takes into account the blood flow in the four heart chambers. Powers of 5 W - 40 W were simulated for the 4 mm electrodes and powers of 50 W - 80 W for the 8 mm electrodes.
Results: During the simulated HF ablation application, the temperature at the ablation electrode was measured at different powers. It is 40.67°C at 5 W, 44.34°C at 10 W, 51.76°C at 20 W, 59.0°C at 30 W, and 66.33°C at 40 W. The measured temperature during the 40 W application is 39.5°C at 0.5 mm depth in the myocardium and 37.5°C at 2 mm depth.
In the simulation, the 8 mm platinum electrode reached an ablation temperature of 72.85°C at its tip at an applied power of 60 W; the temperature was 39.5°C at a depth of 5 mm and 37.5°C at a depth of 2 mm. In contrast, the 8 mm gold electrode reached a temperature of 64.66°C at the same power. This is due to the thermal properties of gold, which has a better thermal conductivity than platinum.
Conclusions: CST offers the possibility to carry out a static and dynamic simulation of a heart model and the ablation electrodes integrated in it during an HF ablation. By varying the electrode sizes and materials, therapy methods for the treatment of AVNRT, AVRT and AFL can be optimized.
We present a planar chromatographic separation method for the compounds caffeine, artemisinin, and equol, separated on high-performance thin-layer chromatography (HPTLC) silica gel plates. As solvents for separation, methyl t-butyl ether and cyclohexane (1:1, V/V) have been used for equol, cyclohexane and ethyl acetate (7:3, V/V) for artemisinin, and ethyl acetate and acetone (7:3, V/V) for caffeine. After separation, the plate was scanned with a very specific time of flight-direct analysis in real time-mass spectrometry (TOF-DART-MS) system using the (M + 1)+ signals of equol, artemisinin, and caffeine. The (M + 1)+ peak of artemisinin at 283.13 m/z is clearly detectable, which is proof that DART-MS is applicable for the quantitative determination of rather unstable molecules. The planar set-up of DART source, HPTLC plate and detector inlet in a line showed higher sensitivities compared to desorption at an angle. The optimal detector voltage increases with the molar mass of the analyte; thus, an individual determination of the optimal detector voltage setting for each analyte is recommended to achieve the best possible measurement conditions. In conclusion, DART-MS detection in combination with an HPTLC separation allows very specific quantification of all three compounds.
Narrowband IoT (NB-IoT) as a radio access technology for the cellular Internet of Things (cIoT) is gaining traction due to attractive system parameters, new proposals in the 3rd Generation Partnership Project (3GPP) Release 14 for reduced power consumption, and ongoing world-wide deployment. As per 3GPP, the low-power and wide-area use cases in the 5G specification will be addressed by the early NB-IoT and Long-Term Evolution for Machines (LTE-M) based technologies. Since these cIoT networks will operate in a spatially distributed environment, there are various challenges to be addressed for tests and measurements of these networks. To meet these requirements, unified emulated and field testbeds for NB-IoT networks were developed and used for extensive performance measurements. This paper analyses the results of these measurements with regard to RF coverage, signal quality, latency, and protocol consistency.
Printed electronics (PE) is a fast growing technology with promising applications in wearables, smart sensors and smart cards, since it provides mechanical flexibility and low-cost, on-demand and customizable fabrication. To secure the operation of these applications, True Random Number Generators (TRNGs) are required to generate unpredictable bits for cryptographic functions and padding. However, since the additive fabrication process of PE circuits results in high intrinsic variation due to the random dispersion of the printed inks on the substrate, constructing a printed TRNG is challenging. In this paper, we exploit the additive customizable fabrication feature of inkjet printing to design a TRNG based on electrolyte-gated field-effect transistors (EGFETs). The proposed memory-based TRNG circuit can operate at low voltages (≤ 1 V) and is hence suitable for low-power applications. We also propose a flow which tunes the printed resistors of the TRNG circuit to mitigate its overall process variation, so that the generated bits are mostly based on the random noise in the circuit, providing truly random behaviour. The results show that the overall process variation of the TRNGs is mitigated by a factor of 110, and the simulated TRNGs pass the National Institute of Standards and Technology Statistical Test Suite.
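The effect of the tuning flow can be illustrated with a toy model: each generated bit is the sign of thermal noise plus a fixed process-variation offset. Untuned cells produce biased bits; once the offset is compensated, the noise alone decides. All numbers here are illustrative, not measured EGFET data:

```python
import numpy as np

# Toy TRNG cell: bit = sign(noise + process offset). Untuned, the fixed
# offset biases every bit; after tuning (offset compensated), the bits are
# driven by the random noise alone.
rng = np.random.default_rng(42)
offset = 0.8                       # process-variation offset in sigma units
noise = rng.normal(size=100_000)   # per-evaluation thermal noise samples

bits_untuned = (noise + offset) > 0
bits_tuned = noise > 0             # tuning has cancelled the offset

# Fraction of ones: biased (~0.79) without tuning vs. ~0.50 with tuning.
print(round(bits_untuned.mean(), 2), round(bits_tuned.mean(), 2))
```

A real evaluation would then run the NIST Statistical Test Suite on the resulting bitstream rather than just checking the mean.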
Printed electronics (PE) circuits have several advantages over silicon counterparts for the applications where mechanical flexibility, extremely low-cost, large area, and custom fabrication are required. The custom (personalized) fabrication is a key feature of this technology, enabling customization per application, even in small quantities due to low-cost printing compared with lithography. However, the personalized and on-demand fabrication, the non-standard circuit design, and the limited number of printing layers with larger geometries compared with traditional silicon chip manufacturing open doors for new and unique reverse engineering (RE) schemes for this technology. In this paper, we present a robust RE methodology based on supervised machine learning, starting from image acquisition all the way to netlist extraction. The results show that the proposed RE methodology can reverse engineer the PE circuits with very limited manual effort and is robust against non-standard circuit design, customized layouts, and high variations resulting from the inherent properties of PE manufacturing processes.
Printed electronics can benefit from the deployment of electrolytes as gate insulators, which enables a high gate capacitance per unit area (1–10 μF cm−2) due to the formation of electrical double layers (EDLs). Consequently, electrolyte-gated field-effect transistors (EGFETs) attain high charge-carrier densities already in the sub-volt regime, allowing for low-voltage operation of circuits and systems. This article presents a systematic study of lumped terminal capacitances of printed electrolyte-gated transistors under various dc bias conditions. We perform voltage-dependent impedance measurements and separate extrinsic components from the lumped terminal capacitance.
The proposed Meyer-like capacitance model, which also accounts for the nonquasi-static (NQS) effect, agrees well with experimental data. Finally, to verify the model, we implement it in Verilog-A and simulate the transient response of an inverter and a ring oscillator circuit. Simulation results are in good agreement with the measurement data of fabricated devices.
Electrolyte-gated, printed field-effect transistors exhibit high charge carrier densities in the channel and thus high on-currents at low operating voltages, allowing for the low-power operation of such devices. This behavior is due to the high area-specific capacitance of the device, in which the electrolyte takes the role of the dielectric layer of classical architectures. In this paper, we investigate intrinsic double-layer capacitances of ink-jet printed electrolyte-gated inorganic field-effect transistors in both in-plane and top-gate architectures by means of voltage-dependent impedance spectroscopy. By comparison with de-embedding structures, we separate the intrinsic properties of the double-layer capacitance at the transistor channel from parasitic effects and deduce accurate estimates for the double-layer capacitance based on an equivalent circuit fitting. Based on these results, we have performed simulations of the electrolyte cutoff frequency as a function of electrolyte and gate resistances, showing that the top-gate architecture has the potential to reach the kilohertz regime with proper optimization of materials and printing process. Our findings additionally enable accurate modeling of the frequency-dependent capacitance of electrolyte/ion gel-gated devices as required in the small-signal analysis in circuit simulation.
Robots and automata are key elements of every vision and forecast of life in the near and distant future. However, robots and automata also have a long history, which reaches back into antiquity. Today most historians think that one of the key roles of robots and automata was to amaze or even terrify the audience: They were designed to express something mythical, magical, and not explainable. Moreover, the visions of robots and their envisioned fields of application reflect the different societies. Therefore, this short history of robotics and (especially) anthropomorphic automata aims to give an overview of several historical periods and their perspective on the topic. In a second step, this work aims to encourage readers to reflect on the recent discussion about fields of application as well as the role of robotics today and in the future.
A novel Bluetooth Low Energy advertising scan algorithm is presented for hybrid radios that are additionally capable of measuring energy on Bluetooth channels, e.g. because they need to be compliant with IEEE 802.15.4. Scanners applying this algorithm can achieve a low latency whilst consuming only a fraction of the power that existing mechanisms require at a similar latency. Furthermore, the power consumption scales with the incoming network traffic and, in contrast to the existing mechanisms, scanners can operate without any frame loss under ideal network conditions. The algorithm does not require any changes to advertisers and hence stays compatible with existing devices. Performance evaluation via simulation and experiments on real hardware shows a 37 percent lower power consumption compared to the best existing scan setting, while even achieving a slightly lower latency. This shows that the algorithm can be used to improve the quality of service of connection-less Bluetooth communication or to reduce the connection establishment time of connection-oriented communication.
We present a novel approach that utilizes BLE packets sent from generic BLE-capable radios to synthesize an FSK-(like) addressable wake-up packet. A wake-up receiver system was developed from off-the-shelf components to detect these packets. It makes use of two differential signal paths separated by passive band-pass filters. After the rectification of each channel, a differential amplifier compares the signals and the resulting wake-up signal is evaluated by an AS3933 wake-up receiver IC. Overall, the combination of these techniques yields a BLE-compatible wake-up system which is more robust than traditional OOK wake-up systems, increasing the wake-up range while still maintaining a low energy budget. The proof-of-concept setup achieved a sensitivity of -47.8 dBm at a power consumption of 18.5 µW during passive listening. The system has a latency of 31.8 ms at a symbol rate of 1437 Baud.
Tryptamines can occur naturally in plants, mushrooms, microbes, and amphibians. Synthetic tryptamines are sold as new psychoactive substances (NPS) because of their hallucinogenic effects. When it comes to NPS, metabolism studies are of crucial importance due to the lack of pharmacological and toxicological data. Different approaches can be taken to study the in vitro and in vivo metabolism of xenobiotics. The zygomycete fungus Cunninghamella elegans (C. elegans) can be used as a microbial model for the study of drug metabolism. The current study investigated the biotransformation of four naturally occurring and synthetic tryptamines [N,N‐dimethyltryptamine (DMT), 4‐hydroxy‐N‐methyl‐N‐ethyltryptamine (4‐HO‐MET), N,N‐diallyl‐5‐methoxytryptamine (5‐MeO‐DALT) and 5‐methoxy‐N‐methyl‐N‐isopropyltryptamine (5‐MeO‐MiPT)] in C. elegans after incubation for 72 hours. Metabolites were identified using liquid chromatography–high resolution–tandem mass spectrometry (LC–HR–MS/MS) with a quadrupole time‐of‐flight (QqTOF) instrument. The results were compared to already published data on these substances. C. elegans was capable of producing all major biotransformation steps: hydroxylation, N‐oxide formation, carboxylation, deamination, and demethylation. On average, 63% of the phase I metabolites reported in the literature could also be detected in C. elegans. Additionally, metabolites specific to C. elegans were identified. Therefore, C. elegans is a suitable complementary model to other in vitro or in vivo methods to study the metabolism of naturally occurring or synthetic tryptamines.
Background: Transesophageal left atrial (LA) pacing and transesophageal LA ECG recording are semi-invasive techniques for the diagnosis and therapy of supraventricular rhythm disturbances. Cardiac resynchronization therapy (CRT) with right atrial (RA) sensed biventricular pacing is an established therapy for heart failure patients with reduced left ventricular (LV) ejection fraction, sinus rhythm and interventricular electrical desynchronization.
Purpose: The aim of the study was to evaluate electromagnetic and voltage pacing fields of the combination of RA pacing, LA pacing and biventricular pacing in patients with long interatrial and interventricular electrical desynchronization.
Methods: The modelling and electromagnetic simulations of transesophageal LA pacing in combination with RA pacing and biventricular pacing were set up and analyzed with the CST (Computer Simulation Technology) software. Different electrodes were modelled in order to simulate different types of bipolar pacing in the 3D-CAD Offenburg heart rhythm model: the bipolar Solid S (Biotronik) electrode was modelled for RA pacing and right ventricular (RV) pacing, the Attain 4194 (Medtronic) for LV pacing and the TO8 (Osypka) multipolar esophageal electrode with hemispheric electrodes for LA pacing.
Results: The electromagnetic pacing simulations were performed with pacemaker amplitudes of 3 V for RA pacing, 1.5 V for RV pacing, 50 V for LA pacing and 3 V for LV pacing, with a pacing impulse duration of 0.5 ms for RA, RV and LV pacing and 10 ms for LA pacing. The atrioventricular pacing delay after RA pacing was 140 ms. The different pacing modes AAI, VVI, DDD, DDD0V and DDD0D were evaluated for the analysis of the electric pacing field propagation of pacemaker, CRT and LA pacing. The pacing results were compared at minimum (LOW) and maximum (HIGH) parameter settings. While the LOW setting produced fewer tetrahedra and thus less accurate results, the HIGH setting produced many tetrahedra and therefore more accurate results.
Conclusions: The simulation of the combination of transesophageal LA pacing with RA sensed biventricular pacing is possible with the Offenburg heart rhythm model. The new temporary 4-chamber pacing method may be an additional useful method in CRT non-responders with a long interatrial electrical delay.
In recent years, the energy consumed by building facilities has become considerable. Efficient local energy management is vital to deal with building power demand penalties. This operation becomes complex when a hybrid energy system is included in the power system. This study proposes a new energy management scheme between a photovoltaic (PV) system, a Battery Energy Storage System (BESS) and the power network in a building by controlling the PV/BESS inverter. The strategy is based on explicit model predictive control (MPC) to find an optimal power flow in the building for one day ahead. The control algorithm is based on a simple power flow equation and weather forecasts. A cost function is then formulated and optimised using a genetic-algorithm-based solver. The objective is to reduce the energy imported from the grid while preventing saturation and depletion of the BESS. By including further targets in the control policy, such as dynamic energy prices and BESS degradation, MPC can dramatically improve the efficiency of the overall building power system. The strategy was implemented and tested successfully using MATLAB/SimPowerSystems software; compared to classical hysteresis management, MPC yielded a 10% reduction in energy cost and a 25% improvement in BESS lifetime.
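The day-ahead power-flow balance at the core of such a scheme can be sketched in a few lines. The parameter values, the function name and the simple rule-based dispatch below are illustrative assumptions for clarity; the paper itself optimises a cost function with a genetic-algorithm-based solver rather than this greedy rule.

```python
def dispatch_day_ahead(load_kw, pv_kw, capacity_kwh=10.0, soc0_kwh=5.0,
                       p_max_kw=3.0, dt_h=1.0):
    """Day-ahead building power balance with a PV/BESS system (sketch).

    Battery power > 0 means discharge (supplies the building),
    < 0 means charge (absorbs PV surplus). Returns per-step grid
    import (negative = export) and battery state of charge.
    """
    soc = soc0_kwh
    grid, socs = [], []
    for load, pv in zip(load_kw, pv_kw):
        net = load - pv                        # > 0: demand, < 0: PV surplus
        # battery covers demand / absorbs surplus, within power rating
        p_bess = max(-p_max_kw, min(p_max_kw, net))
        p_bess = min(p_bess, soc / dt_h)                     # no deep discharge
        p_bess = max(p_bess, -(capacity_kwh - soc) / dt_h)   # no overcharge
        soc -= p_bess * dt_h
        grid.append(net - p_bess)              # remainder from/to the grid
        socs.append(soc)
    return grid, socs
```

A genuine MPC formulation would replace the greedy rule by optimising the whole horizon at once, which lets it anticipate the forecasted PV surplus instead of reacting step by step.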
High temperature components in internal combustion engines and exhaust systems must withstand severe mechanical and thermal cyclic loads throughout their lifetime. The combination of thermal transients and mechanical load cycling results in a complex evolution of damage, leading to thermomechanical fatigue (TMF) of the material. Analytical tools are increasingly employed by designers and engineers for component durability assessment well before any hardware testing. The DTMF model for TMF life prediction, which assumes that micro-crack growth is the dominant damage mechanism, is capable of providing reliable predictions for a wide range of high-temperature components and materials in internal combustion engines. Thus far, the DTMF model has employed a local approach where surface stresses, strains, and temperatures are used to compute damage for estimating the number of cycles for a small initial defect or micro-crack to reach a critical length. In the presence of significant gradients of stresses, strains, and temperatures, the use of surface field values could lead to very conservative estimates of TMF life when compared with reported lives from hardware testing. As an approximation of gradient effects, a non-local approach of the DTMF model is applied. This approach considers through-thickness fields where the micro-crack growth law is integrated through the thickness considering these variable fields. With the help of software tools, this method is automated and applied to components with complex geometries and fields. It is shown, for the TMF life prediction of a turbocharger housing, that the gradient correction using the non-local approach leads to more realistic life predictions and can distinguish between surface cracks that may arrest or propagate through the thickness and lead to component failure.
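The through-thickness integration described above can be written compactly. The symbols below follow the general DTMF literature and are assumptions here, not formulas taken from this paper: micro-crack growth per cycle is proportional to the crack length, and in the non-local variant the damage term varies along the crack path.

```latex
% Local DTMF model: crack growth proportional to crack length a,
% giving the number of cycles to grow from a_0 to the critical length a_f
\frac{\mathrm{d}a}{\mathrm{d}N} = D_{\mathrm{TMF}}\, a
\quad\Longrightarrow\quad
N_f = \frac{1}{D_{\mathrm{TMF}}}\,\ln\frac{a_f}{a_0}

% Non-local variant: stresses, strains and temperatures vary through the
% thickness, so the growth law is integrated with a depth-dependent damage term
N_f = \int_{a_0}^{a_f} \frac{\mathrm{d}a}{D_{\mathrm{TMF}}(a)\, a}
```

When the fields decay away from the surface, the integrand grows with depth, which is why the non-local estimate is less conservative than evaluating surface values only.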
Neuroprosthetics 2.0
(2019)
New employees are supposed to quickly understand their tasks and internal processes and to become familiar with their colleagues. This process is called “onboarding” and is still mainly realized by organizational methods from human resource management, such as introductory events or special employee sessions. Software tools and especially mobile applications are an innovative means to provide onboarding support in a modern, even remote, way. In this paper we analyze how the use of gamification can enhance onboarding processes. Firstly, we describe a mobile onboarding application specifically developed for the young, technically literate generations Y and Z, who are just about to start their career. Secondly, we report on a study with 98 students and young employees. We found that participants enjoyed the gamified application. They especially appreciated the feature “Team Bingo”, which facilitates social integration and teambuilding. Based on the OCEAN personality model (“Big Five”), the personality traits agreeableness and openness revealed significant correlations with a preference for the gamified onboarding application.
The paper describes a systematic approach for precise short-time cloud coverage prediction based on an optical system. We present a distinct pre-processing stage that uses a model-based clear-sky simulation to enhance the cloud segmentation in the images. The images come from a sky imager system with fish-eye lens optics to cover a maximum area. After a calibration step, the image is rectified to enable linear prediction of cloud movement. In a subsequent step, the clear-sky model is estimated on actual high dynamic range images and combined with a threshold-based approach to segment clouds from sky. In the final stage, a multi-hypothesis linear tracking framework estimates cloud movement, velocity and the possible coverage of a given photovoltaic power station. We employ a Kalman filter framework that efficiently operates on the rectified images. The evaluation on real-world data suggests a coverage prediction accuracy above 75%.
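A constant-velocity Kalman filter of the kind used for such tracking can be sketched per image axis. This is a generic textbook filter, not the paper's implementation; the state layout, the noise values and the class name are assumptions.

```python
class CloudTracker1D:
    """Minimal constant-velocity Kalman filter for one image axis.

    State is [position, velocity] in rectified-image coordinates; the
    measurement is the observed cloud position. Process noise q and
    measurement noise r are illustrative values.
    """

    def __init__(self, x0, q=1e-3, r=1.0):
        self.x = [x0, 0.0]                     # [position, velocity]
        self.P = [[1.0, 0.0], [0.0, 1.0]]      # state covariance
        self.q, self.r = q, r

    def step(self, z, dt=1.0):
        # Predict: x = F x, P = F P F^T + Q, with F = [[1, dt], [0, 1]]
        px = self.x[0] + dt * self.x[1]
        pv = self.x[1]
        P = self.P
        p00 = P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + self.q
        p01 = P[0][1] + dt * P[1][1]
        p10 = P[1][0] + dt * P[1][1]
        p11 = P[1][1] + self.q
        # Update with position measurement z (H = [1, 0])
        s = p00 + self.r
        k0, k1 = p00 / s, p10 / s              # Kalman gain
        y = z - px                             # innovation
        self.x = [px + k0 * y, pv + k1 * y]
        self.P = [[(1 - k0) * p00, (1 - k0) * p01],
                  [p10 - k1 * p00, p11 - k1 * p01]]
        return self.x
```

Running one such filter per axis yields a velocity estimate from which the time until a cloud shadow reaches the power station can be extrapolated linearly.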
This paper presents an approach for implementing an automated hit detection and score calculation system for a steel dartboard using a standard webcam. First, the rectilinear field separations of the dartboard are described mathematically by means of line slopes and are then stored. These slopes serve as a basis for later score calculation. In addition, thrown darts have to be detected and the pixel at which the dart hits the dartboard has to be determined. When this information is known, a comparison is made using the line slopes, allowing the field number of the hit to be detected. The decision for a single, double or triple hit is made by evaluating the defined colors on the dartboard. All these functions are then packaged in a Matlab GUI.
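The score lookup itself can be illustrated with standard board geometry. The sketch below uses polar coordinates and the official ring radii instead of the paper's line-slope and colour evaluation; the sector order and radii are standard dartboard values, and the function name is hypothetical.

```python
import math

# Sector base values clockwise from the top (12 o'clock position)
SECTORS = [20, 1, 18, 4, 13, 6, 10, 15, 2, 17, 3, 19, 7, 16, 8, 11, 14, 9, 12, 5]

def dart_score(x_mm, y_mm):
    """Score for a hit at (x, y) in mm relative to the board centre, y up."""
    r = math.hypot(x_mm, y_mm)
    if r <= 6.35:                      # inner bull
        return 50
    if r <= 15.9:                      # outer bull
        return 25
    if r > 170.0:                      # off the board
        return 0
    # Angle measured clockwise from the top; sector 20 spans -9..+9 degrees
    theta = math.degrees(math.atan2(x_mm, y_mm)) % 360.0
    base = SECTORS[int(((theta + 9.0) % 360.0) // 18.0)]
    if 99.0 <= r <= 107.0:             # triple ring
        return 3 * base
    if 162.0 <= r <= 170.0:            # double ring
        return 2 * base
    return base
```

In the paper's webcam setting, the (x, y) input would come from the detected dart-tip pixel after camera calibration, and the ring decision is made via colour rather than radius.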
Avoiding collisions between a robot arm and any obstacle in its path is essential to human-robot collaboration. Multiple systems are available that can detect obstacles in the robot's way before and after a collision. These systems work well in different areas surrounding the robot. One area that is difficult to handle is the area that is hidden by the robot arm. This paper focuses on pick and place maneuvers, especially on obstacle detection between the robot arm and the table that the robot is located on. It introduces the use of single-pixel time-of-flight sensors to detect obstacles directly from the robot arm. The proposed approach reduces the complexity of the problem by locking axes of the robot that are not needed for the pick and place movement. The comparison of simulated results and laboratory measurements shows concordance.
This paper describes the concept and some results of the project "Menschen Lernen Maschinelles Lernen" (Humans Learn Machine Learning, ML2) of the University of Applied Sciences Offenburg. It brings together students of different courses of study and practitioners from companies on the subject of Machine Learning. A mixture of blended learning and practical projects ensures a tight coupling of machine learning theory and application. The paper details the phases of ML2 and mentions two successful example projects.
Virtual reality in the hotel industry: assessing the acceptance of immersive hotel presentation
(2019)
In the hotel industry, it is crucial to reduce the inherent information asymmetry with regard to the goods offered. This asymmetry can be minimised through the use of smartphone-based virtual reality applications (SBVRs), which allow virtual simulation of real experiences and thus enable more efficient information retrieval. The aim of the study is to determine for the first time the user acceptance of these immersive hotel presentations for assessing the performance of a travel accommodation. For this purpose, the Technology Acceptance Model (TAM) was used to explain the acceptance behaviour for this new technology. A virtual reality application was specially developed, in which the participants could explore a hotel virtually. A total of 569 participants took part in the study. The structural equation model and the hypotheses were tested using a Partial Least Squares (PLS) analysis. The results illustrate that the immersive product experience leads to more efficient information gathering. The perceived usefulness significantly affects the attitude towards using the technology as well as the intention to use it. In contrast to the traditional TAM, the perceived ease of use of SBVRs has no effect on the perceived usefulness or attitude towards using the technology.
Purpose
The purpose of this study is to investigate the effects of telepresence while using a smartphone-based virtual reality system (SBVR) to explore a hotel virtually and to determine the influence of this immersive experience on the booking intention of the potential customer.
Design/methodology/approach
Within the scope of this study, a conceptual research model was developed which covered utilitarian and hedonic aspects of the user experience of SBVRs and showed their relevance for the booking intention. A virtual reality application was programmed especially for the study, in which the test persons were able to virtually explore a hotel complex. A total of 569 people participated in the study. A questionnaire was used for the data collection. The structural equation modelling and hypothesis verification were carried out using the partial least squares method.
Findings
The immersive feeling of telepresence increases the perceived enjoyment and usefulness of the potential customer. In addition, the user's curiosity is aroused by the telepresence, which also significantly increases the perceived enjoyment as well as the perceived usefulness. The hedonic and utilitarian value of the virtual hotel experience increases the probability that the customer will book the travel accommodation.
Research limitations/implications
The virtual reality application developed for the study is based on static panoramic images and does not contain audio-visual elements (e.g. sound, video, animation). Audio-visual elements might increase the degree of immersion and could therefore be investigated in future research.
Practical implications
The results of the study show that the SBVR is a suitable marketing tool to present hotels in an informative and entertaining way, and can thereby increase sales and profits.
Originality/value
For the first time, this study investigates the potential of SBVRs for the virtual product presentation of hotels and provides empirical evidence that the availability of this innovative form of presentation leads to a higher booking intention.
For e-commerce retailers it is crucial to present their products both informatively and attractively. Virtual reality (VR) systems represent a new marketing tool that supports customers in their decision-making process and offers an extraordinary product experience. Despite these advantages, the use of this technology for e-commerce retailers is also associated with risks, namely cybersickness. The aim of the study is to investigate the occurrence of cybersickness in the context of the customer’s perceived enjoyment and the perceived challenge of a VR product presentation. Based on a conceptual research framework, a laboratory study with 533 participants was conducted to determine the influence of these factors on the occurrence of cybersickness. The results demonstrate that the perceived challenge has a substantially stronger impact on the occurrence of cybersickness, which can only be partially reduced by perceived enjoyment. When realizing VR applications in general and VR product presentations in particular, e-commerce retailers should therefore first minimize possible challenges instead of focusing primarily on entertainment aspects of such applications.
Hot working tools are subjected to complex thermal and mechanical loads during service. Locally, the stresses can exceed the material’s yield strength in highly loaded areas. During production, this causes cyclic plastic deformation and thus thermomechanical fatigue, which can significantly shorten the lifetime of hot working tools. To sustain these high loads, hot working tools are typically made of tempered martensitic hot work tool steels. While the annealing temperatures of the tool steels usually lie in the range of 400 to 600 °C, the steels may experience even higher temperatures during hot working, resulting in softening of the material due to changes in microstructure. Therefore, the temperature-dependent cyclic mechanical properties of the frequently used hot work tool steel 1.2367 (X38CrMoV5-3) after tempering are investigated in this work. To this end, hardness measurements are performed. Furthermore, the Institute of Forming Technology and Machines (IFUM) provides test results from cyclic tests at temperatures ranging from 20 °C (room temperature) to 650 °C. To describe the observed time- and temperature-dependent softening during tempering, a kinetic model for the evolution of the mean size of secondary carbides based on Ostwald ripening is developed. In addition, both mechanism-based and phenomenological relationships for the cyclic mechanical properties of the Ramberg-Osgood model depending on carbide size and temperature are proposed. The stress-strain hysteresis loops measured at different temperatures and after different heat treatments can be well described with the proposed kinetic and mechanical model. Furthermore, the model is suitable for integration in advanced mechanism-based lifetime models. However, since the Ramberg-Osgood model is not suitable for finite element implementation, a temperature-dependent incremental cyclic plasticity model is presented as well.
Thus, softening due to particle coarsening can be accounted for in the finite element method (FEM). To this end, the kinetic model is coupled with a cyclic plasticity model including kinematic hardening. The plasticity model is implemented via subroutines in the finite element program ABAQUS for implicit integration (subroutine UMAT) and explicit integration (subroutine VUMAT). The implemented model is used for the simulation of an exemplary hot working process to assess the effects of softening due to particle coarsening. It shows that thermal softening at high temperatures, acting over a long time on a mechanically highly loaded area, has a great influence. If this influence is not considered in tool design, an unexpected tool failure might occur, bringing production to a standstill.
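The coupling described in these two paragraphs rests on a classical coarsening law. In hedged form, with symbols assumed from the general Ostwald-ripening and Ramberg-Osgood literature rather than taken verbatim from the paper, the mean carbide radius and the cyclic stress-strain curve can be written as:

```latex
% LSW-type Ostwald ripening: the mean carbide radius grows with time t at
% temperature T, with an Arrhenius-type rate constant
\bar{r}^{3}(t) - \bar{r}_{0}^{3} = k(T)\, t,
\qquad k(T) = k_0 \exp\!\left(-\frac{Q}{R\,T}\right)

% Ramberg-Osgood cyclic stress-strain curve; the cyclic hardening
% coefficient K' and exponent n' are taken to depend on the current
% mean carbide size and temperature
\frac{\Delta\varepsilon}{2} = \frac{\Delta\sigma}{2E}
 + \left(\frac{\Delta\sigma}{2\,K'(\bar{r},T)}\right)^{1/n'(\bar{r},T)}
```

Coarser carbides weaken precipitation strengthening, so larger $\bar{r}$ lowers the cyclic strength parameters, which is how long hot exposures translate into the softening observed in the hysteresis loops.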
Besides conventional CAD systems, new cloud-based CAD systems have also been available for some years. These CAD systems, designed according to the principle of software as a service (SaaS), differ in some important features from conventional CAD systems. They are operated via a browser, and it is not necessary to install the software on a computer. The CAD data is stored in the cloud and not on a local computer or central server. This new approach should also facilitate the sharing and management of data. Finally, many of these new CAD systems are available as freeware for education purposes, so universities can save license costs. This contribution examines newly developed, cloud-based CAD systems. In the context of a case study, the application of these new CAD systems is investigated in the training of engineers in design education. The students compare a conventional and a cloud-based CAD system as part of an exercise in designing and 3D modelling a pinion shaft. Subsequently, the students produce a drawing with different views of the pinion shaft. This assessment evaluates different criteria such as user-friendliness, tutorial support and installation effort.
The ability to change aerodynamic parameters of airfoils during flight can potentially save energy as well as reduce the noise made by unmanned aerial vehicles (UAVs) due to the sharp edges of the airfoil and its rudders. In this paper, an approach for the design of an adaptive wing using a multi-material 3D printer is shown. In multi-material 3D printing, up to six different materials can be combined in one component. Thus, the user can determine the mixture and the spatial arrangement of this “digital material” in advance in the pre-processing software. First, the theoretical benefits of adaptive wings are shown, and already existing adaptive wings and concepts are explicated within a literature review. Then the additive manufacturing process using photopolymer jetting and its capability to print multiple materials in one part are demonstrated. Within the scope of a case study, an adaptive wing is developed and the necessary steps for the product development and their implementation in CAD are presented. This contribution covers the requirements for different components and sections of an adaptive wing designed for additive manufacturing using multiple materials as well as the single steps of development with its different approaches until the final design of the adaptive wing. The developed wing section is simulated, and qualitative tests in a wind tunnel are carried out with the wing segment. Finally, the additively manufactured wing segment is evaluated under technical and economic aspects.
Additive manufacturing processes have developed significantly in recent years. Currently, new generative processes are coming onto the market. Likewise, the number of available materials that can be processed using additive processes is steadily increasing. Therefore, an important task is to integrate these new processes and materials into the university education of engineers. Due to the rapid change and the constant development in the field of additive manufacturing, a pure transfer of knowledge is not expedient, because it becomes obsolete very quickly. Rather, the students should be enabled to use their skills in such a way that they can always handle new technologies and materials independently and meaningfully.
In this paper, therefore, a new course is developed in which the students largely independently work with additive manufacturing processes. For this purpose, teams of four to five students from different technical programs are formed. The teams have the task of developing and manufacturing a product using additive processes. The goal is to create a powerful product by taking into account the optimization of costs and use of resources.
As an example, the development and additive manufacturing of an ornithopter (an aircraft that flies by flapping its wings) will be presented in this contribution. The students have to analyze and optimize the mechanics and aerodynamics of the aircraft. In addition, the rules for production-oriented design must be determined and applied. Furthermore, they should assess the costs and material consumption during development and production.
This contribution shows how the students have achieved the different learning outcomes. In addition, it becomes clear how the students independently acquired and applied their knowledge in development, design and additive manufacturing. Also, it will be demonstrated how much time the students spent on learning the different technologies.
The development of new processes and materials for additive manufacturing is currently progressing rapidly. In order to use the advantages of additive manufacturing, however, product development and design must also be adapted to these new processes. Structural optimization is therefore a suitable approach. To achieve the best results in lightweight design, it is important to have an approach that reduces the volume in the unloaded regions and considers the restrictions and characteristics of the additive manufacturing process. In this contribution, a case study using a humanoid robot is presented. The pelvis module of a humanoid robot is optimized regarding its weight and stiffness. Furthermore, an integrated design is implemented in order to reduce the number of parts and the screw connections. The manufacturing uses a new aluminum-based material that has been specially developed for use in additive manufacturing and lightweight construction. For the additive manufacturing by means of the Selective Laser Melting (SLM) process, different restrictions and the assembly concepts of the humanoid robot have to be taken into account. These restrictions have to be considered in the setting of the individual parameters and target functions of the structural optimization. As a result, a framework is presented that shows the steps of the redesign and the optimization of the pelvis module. In order to achieve high accuracy with the product, the redesign of the pelvis module is demonstrated with regard to mechanical and thermal postprocessing. Finally, the redesigned part and the different assembly concepts are compared to analyze the economic and technical effects of the optimization.
Direct Digital Manufacturing of Architectural Models using Binder Jetting and Polyjet Modeling
(2019)
Today, architectural models are an important tool for illustrating drawn-on plans or computer-generated virtual models and making them understandable. In addition to the conventional methods for the manufacturing of physical models, a wide range of processes for Direct Digital Manufacturing (DDM) has spread rapidly in recent years. In order to facilitate the application of these new methods for architects, this contribution examines which technical and economic results are possible using 3D printed architectural models. Within a case study, it will be shown on the basis of a multi-storey detached house which kind of data preparation is necessary. The DDM of architectural models will be demonstrated using two widespread techniques and the resulting costs will be compared.
The fast and cost-effective manufacturing of tools for thermoforming is an essential requirement to shorten the development time of products. Thus, additive processes are used increasingly in tooling for thermoforming of plastic sheets. However, a disadvantage of many additive methods is that they are highly cost-intensive, since complex systems based on laser technology and expensive metal powders are needed. Therefore, this paper examines how to work with more economical additive methods, e.g. Binder Jetting, to manufacture tools which provide sufficient strength for thermoforming. The use of comparatively low-priced inkjet technology for the layer construction and a polymer plaster as material can be expected to result in significant cost reductions. Based on a case study using a cowling (engine bonnet) for an Unmanned Aerial Vehicle (UAV), the development of a complex tool for thermoforming is demonstrated. The objective of this study is to produce a tool for a complex-shaped component in small numbers and high quality in a short time and at reasonable cost. Within the tooling process, integrated vacuum channels are implemented in additive tooling without the need for additional post-processing (for example, drilling). In addition, special technical challenges, such as the demolding of undercuts or the parting of the tool, are explained. All process steps from tool design to the use of the additively manufactured tool are analyzed. Based on the manufacturing of a small series of cowlings for a UAV made of plastic sheets (ABS), it is shown that Binder Jetting offers sufficient mechanical and thermal strength for additive tooling. In addition, an economic evaluation of the tool manufacturing and a detailed consideration of the required manufacturing times for the different process steps are carried out. Finally, a comparison is made with conventional and alternative additive methods of tooling.
The monitoring of industrial environments ensures that highly automated processes run without interruption. However, even if the industrial machines themselves are monitored, the communication lines are currently not continuously monitored in today's installations. They are usually checked only during maintenance intervals or in case of error. In addition, the cables or connected machines usually have to be removed from the system for the duration of the test. To overcome these drawbacks, we have developed and implemented a cost-efficient and continuous signal monitoring of Ethernet-based industrial bus systems. Several methods have been developed to assess the quality of the cable. These methods can be classified as either passive or active. Active methods are not suitable if interruption of the communication is undesired. Passive methods, on the other hand, require oversampling, which calls for expensive hardware. In this paper, a novel passive method combined with undersampling targeting cost-efficient hardware is proposed.
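The key idea behind undersampling is spectral folding: a tone above the Nyquist frequency still appears at a predictable alias frequency, so a slow (and therefore cheap) ADC can observe a fast periodic signal. The helper below is a generic illustration of that folding rule, not code from the monitoring system described here; the function name is hypothetical.

```python
def alias_frequency(f_signal_hz: float, f_sample_hz: float) -> float:
    """Frequency at which an undersampled tone appears after folding.

    The spectrum folds at multiples of f_sample/2, so the observed
    frequency is the distance of f_signal to the nearest integer
    multiple of the sampling rate.
    """
    f = f_signal_hz % f_sample_hz
    return min(f, f_sample_hz - f)

# Example: a 100 MHz bus-signal component sampled at only 16 MS/s
# folds down to 4 MHz, where low-cost hardware can digitize it.
```

The trade-off is ambiguity: many input frequencies map to the same alias, which is acceptable here because the nominal signal spectrum of the bus is known in advance.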
Provides a state-of-the-art overview of international trade policy research
The Handbook of Global Trade Policy offers readers a comprehensive resource for the study of international trade policy, governance, and financing. This timely and authoritative work presents contributions from a team of prominent experts that assess the policy implications of recent academic research on the subject. Discussions of contemporary research in fields such as economics, international business, international relations, law, and global politics help readers develop an expansive, interdisciplinary knowledge of 21st century foreign trade.
Accessible for students, yet relevant for practitioners and researchers, this book expertly guides readers through essential literature in the field while highlighting new connections between social science research and global policy-making. Authoritative chapters address new realities of the global trade environment, global governance and international institutions, multilateral trade agreements, regional trade in developing countries, value chains in the Pacific Rim, and more. Designed to provide a well-rounded survey of the subject, this book covers financing trade such as export credit arrangements in developing economies, export insurance markets, climate finance, and recent initiatives of the World Trade Organization (WTO). This state-of-the-art overview:
• Integrates new data and up-to-date research in the field
• Offers an interdisciplinary approach to examining global trade policy
• Introduces fundamental concepts of global trade in an understandable style
• Combines contemporary economic, legal, financial, and policy topics
• Presents a wide range of perspectives on current issues surrounding trade practices and policies
The Handbook of Global Trade Policy is a valuable resource for students, professionals, academics, researchers, and policy-makers in all areas of international trade, economics, business, and finance.
Open markets, international trade and foreign direct investments are a source of prosperity in challenging times. This Special Section looks at developed economies and emerging markets, also taking into account the role of trade for impactful capacity-building in least developed countries (LDCs). Specific emphasis is placed on financing economic development and trade, analysing what roles trade and development finance should play in the quest for an efficient mobilisation of private capital for growth, trade and development.
Excellent organisations require targeted strategies to implement their vision and mission, deploying a stakeholder-focused approach. As part of evidence-based policy making, it is a common approach to measure the results of government financing vehicles. A state-of-the-art method in quantitative benchmarking to overcome the challenge of considering multiple inputs and outputs is Data Envelopment Analysis (DEA). Descriptive statistics and explorative-qualitative approaches are also applied in a modern ECA benchmarking model to substantiate DEA results and put them into perspective. This enabler-result model provides a holistic view and makes it possible to identify top-performing ECAs and Exim-Banks, providing the opportunity for inefficient institutions to learn from their most productive peers. This best-practice approach to strategic benchmarking enables senior management to develop and implement a cutting-edge strategy and increase value for key stakeholders.
What emotional effects does gamification have on users who work or learn with repetitive tasks? In this work, we use biosignals to analyze these affective effects of gamification. After a brief discussion of related work, we describe the implementation of an assistive system augmenting work by projecting elements for guidance and gamification. We also show how this system can be extended to analyze users' emotions. In a user study, we analyze both biosignals (facial expressions and electrodermal activity) and regular performance measures (error rate and task completion time).
For the performance measures, the results confirm known effects such as increased speed and a slightly increased error rate. In addition, the analysis of the biosignals provides strong evidence for two major affective effects: the gamification of work and learning tasks elicits highly significantly more positive emotions and increases emotionality overall. The results inform the design of assistive systems that are aware of the physical as well as the affective context.
In this article the high-temperature behavior of a cylindrical lithium iron phosphate/graphite lithium-ion cell is investigated numerically and experimentally by means of differential scanning calorimetry (DSC), accelerating rate calorimetry (ARC), and external short circuit test (ESC). For the simulations a multi-physics multi-scale (1D+1D+1D) model is used. Assuming a two-step electro-/thermochemical SEI formation mechanism, the model is able to qualitatively reproduce experimental data at temperatures up to approx. 200 °C. Model assumptions and parameters could be evaluated via comparison to experimental results, where the three types of experiments (DSC, ARC, ESC) show complementary sensitivities towards model parameters. The results underline that elevated-temperature experiments can be used to identify parameters of the multi-physics model, which then can be used to understand and interpret high-temperature behavior. The resulting model is able to describe nominal charge/discharge operation behavior, long-term calendaric aging behavior, and short-term high-temperature behavior during extreme events, demonstrating the descriptive and predictive capabilities of physicochemical models.
Finding clusters in high-dimensional data is a challenging research problem. Subspace clustering algorithms aim to find clusters in all possible subspaces of the dataset, where a subspace is a subset of the dimensions of the data. However, the exponential increase in the number of subspaces with the dimensionality of the data renders most of these algorithms inefficient as well as ineffective. Moreover, these algorithms have data dependencies ingrained in the clustering process, which makes parallelization difficult and inefficient. SUBSCALE is a recent subspace clustering algorithm that is scalable with the number of dimensions and contains independent processing steps which can be exploited through parallelism. In this paper, we aim to leverage the computational power of widely available multi-core processors to improve the runtime performance of the SUBSCALE algorithm. The experimental evaluation shows linear speedup. Moreover, we develop an approach using graphics processing units (GPUs) for fine-grained data parallelism to accelerate the computation further. First tests of the GPU implementation show very promising results.
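The property exploited here, that the per-dimension processing steps are independent, can be sketched in a few lines. This is a toy stand-in, not the SUBSCALE algorithm itself: the 1-D density step, the `eps` and `min_pts` parameters, and the sample data are all illustrative.

```python
from concurrent.futures import ThreadPoolExecutor

def dense_units_1d(values, eps=0.5, min_pts=2):
    # Toy density step on one dimension: groups of points whose neighbours
    # lie within eps; stands in for the independent per-dimension work.
    pts = sorted(values)
    groups, cur = [], [pts[0]]
    for a, b in zip(pts, pts[1:]):
        if b - a <= eps:
            cur.append(b)
        else:
            if len(cur) >= min_pts:
                groups.append(cur)
            cur = [b]
    if len(cur) >= min_pts:
        groups.append(cur)
    return groups

# Each column (dimension) is processed independently, so the calls can be
# farmed out to worker threads or processes without coordination.
data = [(0.1, 5.0), (0.2, 5.1), (0.3, 9.0), (3.0, 9.1)]
columns = list(zip(*data))
with ThreadPoolExecutor() as ex:
    per_dim = list(ex.map(dense_units_1d, columns))
print(per_dim)
```

Because the per-dimension results are combined only afterwards, the same pattern maps directly onto multi-core CPUs, and with finer-grained decomposition onto GPUs.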
Radio frequency identification (RFID) antennas are popular for high-frequency (HF) RFID, energy transfer, and near field communication (NFC) applications. Particularly for wireless measurement systems, RFID/NFC technology is a good option for implementing a wireless communication interface. In this context, the design of the corresponding reader and transmitter antennas plays a major role in achieving suitable transmission quality. This work proves the feasibility of rapidly prototyping an RFID/NFC antenna, which is used for wireless communication and energy harvesting at the required frequency of 13.56 MHz. A novel and low-cost direct ink writing (DIW) technology utilizing highly viscous silver nanoparticle ink is used for this process. This paper describes the development and analysis of low-cost printed flexible RFID/NFC antennas on cost-effective substrates for a microelectronic vital parameter measurement system. Furthermore, we compare the measured technical parameters with those of existing copper-based counterparts on an FR4 substrate.
Many sectors, such as finance, medicine, manufacturing, and education, use blockchain applications to profit from the unique bundle of characteristics of this technology. Blockchain technology (BT) promises benefits in trustability, collaboration, organization, identification, credibility, and transparency. In this paper, we conduct an analysis in which we show how open science can benefit from this technology and its properties. For this, we determined the requirements of an open science ecosystem and compared them with the characteristics of BT to show that the technology is suitable as an infrastructure. We also review the literature and promising blockchain-based projects for open science to describe the current research situation. To this end, we examine the projects in particular for their relevance and contribution to open science and then categorize them according to their primary purpose. Several of them already provide functionalities that can have a positive impact on current research workflows. BT thus offers promising possibilities for use in science, but why is it then not used on a large scale in that area? To answer this question, we point out various shortcomings, challenges, unanswered questions, and research potentials that we found in the literature and identified during our analysis. These topics shall serve as starting points for future research to foster BT for open science and beyond, especially in the long term.
As engineering graduates and specialists frequently lack the advanced skills and knowledge required to run eco-innovation systematically, the paper proposes a new teaching method and appropriate learning materials in the field of eco-innovation and evaluates the learning experience and outcomes. This programme is aimed at strengthening students' skills and motivation to identify and creatively overcome secondary eco-contradictions in cases where additional environmental problems appear as negative side effects of eco-friendly solutions.
Based on a literature analysis and their own investigations, the authors propose to introduce a manageable number of eco-innovation tools into a standard one-semester design course in process engineering, with particular focus on the identification of eco-problems in existing technologies, the selection of appropriate new process intensification technologies (knowledge-based engineering), and systematic ideation and problem solving (knowledge-based innovation and invention).
The proposed educational approach equips students with advanced knowledge, skills, and competences in the field of eco-innovation. Analysis of the students' work allows one to recommend simple-to-use tools for fast application in process engineering, such as process mapping, a database of eco-friendly process intensification technologies, and up to 20 of the strongest inventive operators for solving environmental problems. For the majority of students in the survey, even the small workload strengthened their self-confidence and skills in eco-innovation.
Economic growth and ecological problems motivate industries to apply eco-friendly technologies and equipment. However, environmental impact, together with energy and material consumption, remains the main negative implication of technological progress in process engineering. Based on extensive patent analysis, this paper assigns more than 250 identified eco-innovation problems and requirements to 14 general eco-categories, with energy consumption and losses, air pollution, and acidification as the top issues. It defines primary eco-engineering contradictions, which arise when eco-problems appear as negative side effects of new technologies, and secondary eco-engineering contradictions, which arise when eco-friendly solutions have new environmental drawbacks. The study conceptualizes a correlation matrix between the eco-requirements for the prediction of typical eco-contradictions, using the example of processes involving solids handling. Finally, it summarizes major eco-innovation approaches, including Process Intensification in process engineering, and chronologically reviews 66 papers on eco-innovation adapting the TRIZ methodology. Based on an analysis of 100 eco-patents, 58 process intensification technologies, and the literature, the study identifies 20 universal TRIZ inventive principles and sub-principles that have a higher value for environmental innovation.
The 40 Altshuller Inventive Principles with their numerous sub-principles have remained for decades the most frequently applied tool of the Theory of Inventive Problem Solving (TRIZ) for systematic idea generation. However, their application often requires a concentrated, creative, and abstract way of thinking that can be fairly challenging for newcomers to TRIZ. This paper describes an approach to reducing the abstraction level of inventive sub-principles and presents the results of an idea generation experiment conducted with three groups of undergraduate and graduate students from different years of study in mechanical and process engineering. The students were asked to generate and record their individual ideas for three design problems using a pre-defined set of classical and modified sub-principles within 10 minutes. The overall outcomes of the experiment support the assumption that the less abstract wording of the modified sub-principles leads to a higher number of ideas. The distribution of ideas among the fields of MATCHEM-IBD (Mechanical, Acoustic, Thermal, Chemical, Electrical, Magnetic, Intermolecular, Biological and Data processing) differs significantly between groups using modified and abstract sub-principles.
Classification of TRIZ Inventive Principles and Sub-Principles for Process Engineering Problems
(2019)
The paper proposes a classification approach for the 40 Inventive Principles with an extended set of 160 sub-principles for process engineering, based on a thorough analysis of 155 process intensification technologies, 200 patent documents, 6 industrial case studies applying TRIZ, and other sources. The authors define problem-specific sub-principle groups as a more precise and productive ideation technique, adaptable to a large diversity of problem situations, and finally examine the anticipated variety of ideation using the 160 sub-principles with the help of MATCEM-IBD fields.
Growing demands for cleaner production and higher eco-efficiency in process engineering require a comprehensive analysis of the technical and environmental outcomes for customers and society. Moreover, unexpected additional technical or ecological drawbacks may appear as negative side effects of new environmentally friendly technologies. The paper conceptualizes a comprehensive approach for the analysis and ranking of engineering and ecological requirements in process engineering in order to anticipate secondary problems in eco-design and to avoid compromising the environmental or technological goals. For this purpose, the paper presents a method based on the integration of the Quality Function Deployment approach with Importance-Satisfaction Analysis for requirements ranking. The proposed method comprehensively identifies and classifies the potential engineering and eco-engineering contradictions through analysis of the correlations within requirements groups such as stakeholder requirements (SRs) and technical requirements (TRs), and additionally through the cross-relationships between SRs and TRs.
Process engineering industries are now facing growing economic pressure and society's demands to improve their production technologies and equipment, making them more efficient and environmentally friendly. However, unexpected additional technical and ecological drawbacks may appear as negative side effects of new environmentally friendly technologies. Thus, in their efforts to intensify upstream and downstream processes, industrial companies require systematic aid to avoid compromising their ecological impact. The paper conceptualises a comprehensive approach for eco-innovation and eco-design in process engineering. The approach combines the advantages of Process Intensification as Knowledge-Based Engineering (KBE), the inventive tools of Knowledge-Based Innovation (KBI), and the main principles and best practices of Eco-Design and Sustainable Manufacturing. It includes a correlation matrix for the identification of eco-engineering contradictions, a process mapping technique for problem definition, a database of Process Intensification methods and equipment, as well as a set of the strongest inventive operators for eco-ideation.
Smart Home and Smart Building applications are a growing market. An increasing challenge is to design energy-efficient Smart Home applications to achieve sustainable and green homes. Using the example of the development of an Indoor Smart Gardening system with wireless monitoring and automated watering, this paper discusses in particular the design of energy-autonomous sensors and actuators for home automation. The most important part of the presented Smart Gardening system is a 3D-printed smart flower pot for single plants. The smart flower pot integrates a water reservoir for automated plant irrigation and electronics for monitoring important plant parameters and the water level of the reservoir. Energy harvesting with solar cells enables energy-autonomous operation of the flower pot. A low-power wireless interface, also integrated in the flower pot, and an external gateway based on a Raspberry Pi 3 enable wireless networking of multiple such flower pots. The gateway is used for evaluating the plant parameters and as a user interface. Particular attention is paid to the architecture of the energy-autonomous wireless flower pot, because fully energy-autonomous sensors and actuators for home automation cannot be implemented without special concepts for the energy supply and the overall electronics.
Among the major hazards for the health of people in large urban agglomerations is the increasing concentration of particulate matter (PM). Traditional systems for PM monitoring have a great number of drawbacks, but the main issues are economic, relating to installation costs and never-ending periodic maintenance expenses. Installations of such systems do exist, but their number is limited; given the growth of population, cities, and industrial areas, there is an even greater need for information on air quality, because PM concentration changes non-linearly, varies over a wide range, and has different sources. In this paper, we propose an approach based on low-cost sensor nodes for real-time measurement of the PM concentration. The adoption of this approach allows for a detailed study of the intensity of pollution and its sources. The system is powered by a PV module. The power supply unit is designed using model-based design, a new approach to prototyping power-operated electronic devices with guaranteed performance.
In this article we outline the model development planned within the joint project "Model-based city planning and application in climate change" (MOSAIK). The MOSAIK project has been funded by the German Federal Ministry of Education and Research (BMBF) within the framework "Urban Climate Under Change" ([UC]2) since 2016. The aim of MOSAIK is to develop a highly efficient, modern, and high-resolution urban climate model that can be applied for building-resolving simulations of large cities such as Berlin (Germany). The new urban climate model will be based on the well-established large-eddy simulation code PALM, which already has numerous features related to this goal, such as an option for prescribing Cartesian obstacles. In this article we will outline those components that will be added or modified in the framework of MOSAIK. Moreover, we will discuss the everlasting issue of the acquisition of suitable geographical information as input data and the underlying requirements from the model's perspective.
Modeling and simulation play a key role in analyzing the complex electrochemical behavior of lithium-ion batteries. We present the development of a thermodynamic and kinetic modeling framework for intercalation electrochemistry within the open-source software Cantera. Instead of using equilibrium potentials and single-step Butler-Volmer kinetics, Cantera is based on molar thermodynamic data and mass-action kinetics, providing a physically-based and flexible means for complex reaction pathways. Herein, we introduce a new thermodynamic class for intercalation materials into the open-source software. We discuss the derivation of molar thermodynamic data from experimental half-cell potentials, and provide practical guidelines. We then demonstrate the new class using a single-particle model of a lithium cobalt oxide/graphite lithium-ion cell, implemented in MATLAB. With the present extensions, Cantera provides a platform for the lithium-ion battery modeling community both for consistent thermodynamic and kinetic models and for exchanging the required thermodynamic and kinetic parameters. We provide the full MATLAB code and parameter files as supplementary material to this article.
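The derivation of molar thermodynamic data from measured half-cell potentials, which the abstract mentions, rests on a standard thermodynamic identity. A minimal sketch follows; the example potential value is illustrative, and Cantera's actual input format and fitting procedure are not reproduced here.

```python
F = 96485.33212  # Faraday constant in C/mol

def delta_g_intercalation(u_half_cell, z=1):
    # For an intercalation reaction transferring z electrons, the molar Gibbs
    # free energy change follows from the half-cell potential U (vs. Li/Li+)
    # as Delta_g = -z * F * U.
    return [-z * F * u for u in u_half_cell]

# e.g. a graphite-like potential plateau near 0.12 V vs. Li/Li+:
print(delta_g_intercalation([0.12])[0])  # about -11.6 kJ/mol
```

Applying this pointwise to a measured half-cell potential curve yields the stoichiometry-dependent molar Gibbs energy from which mass-action kinetics can be parameterized.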
The measurement of the active material volume fraction in composite electrodes of lithium-ion battery cells is difficult due to the small (sub-micrometer) and irregular structure and multi-component composition of the electrodes, particularly in the case of blend electrodes. State-of-the-art experimental methods such as focused ion beam/scanning electron microscopy (FIB/SEM) and subsequent image analysis require expensive equipment and significant expertise. We present here a simple method for identifying active material volume fractions in single-material and blend electrodes, based on the comparison of experimental equilibrium cell voltage curve (open-circuit voltage as function of charge throughput) with active material half-cell potential curves (half-cell potential as function of lithium stoichiometry). The method requires only (i) low-current cycling data of full cells, (ii) cell opening for measurement of electrode thickness and active electrode area, and (iii) literature half-cell potentials of the active materials. Mathematical optimization is used to identify volume fractions and lithium stoichiometry ranges in which the active materials are cycled. The method is particularly useful for model parameterization of either physicochemical (e.g., pseudo-two-dimensional) models or equivalent circuit models, as it yields a self-consistent set of stoichiometric and structural parameters. The method is demonstrated using a commercial LCO–NCA/graphite pouch cell with blend cathode, but can also be applied to other blends (e.g., graphite–silicon anode).
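The core of the identification, matching a measured open-circuit-voltage curve against half-cell potential curves by optimizing stoichiometry-window parameters, can be sketched with synthetic data. Everything below is illustrative (linear toy potentials, a brute-force grid search instead of a proper optimizer, and made-up window values); it demonstrates only the fitting idea, not the paper's parameterization.

```python
def u_pos(y):
    # Illustrative (made-up) cathode half-cell potential vs. stoichiometry y.
    return 4.2 - 0.8 * y

def u_neg(x):
    # Illustrative (made-up) anode half-cell potential vs. stoichiometry x.
    return 0.4 - 0.3 * x

def full_cell_ocv(q, y0, dy, x0, dx):
    # q in [0, 1] is the normalized charge throughput; each electrode's
    # stoichiometry moves linearly through its window while cycling.
    return u_pos(y0 + q * dy) - u_neg(x0 + q * dx)

# Synthetic "measurement" with a known ground-truth cathode window.
true_params = dict(y0=0.45, dy=0.50, x0=0.85, dx=-0.75)
qs = [i / 50 for i in range(51)]
measured = [full_cell_ocv(q, **true_params) for q in qs]

def fit_cathode_window(qs, v_meas, x0, dx, steps=100):
    # Brute-force least squares over (y0, dy); the paper uses mathematical
    # optimization over all electrodes, which this grid search stands in for.
    best = None
    for i in range(steps + 1):
        for j in range(1, steps + 1):
            y0, dy = i / steps, j / steps
            if y0 + dy > 1.0:
                continue
            err = sum((full_cell_ocv(q, y0, dy, x0, dx) - v) ** 2
                      for q, v in zip(qs, v_meas))
            if best is None or err < best[0]:
                best = (err, y0, dy)
    return best[1], best[2]

y0_fit, dy_fit = fit_cathode_window(qs, measured,
                                    true_params["x0"], true_params["dx"])
print(y0_fit, dy_fit)  # recovers the true window (0.45, 0.5)
```

Combined with the measured electrode thickness and area, the fitted stoichiometry windows translate into the active material volume fractions.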
Medical devices accompany our everyday lives, whether in critical health situations, in significant moments concerning our health, or during routine checkups. To ensure flawless operation and error-free results, it is essential to test applications and devices. Operating errors pose high risks for patients' health [33]. The presented research project, Professional UX, therefore identifies signals and irritations caused by the interaction with a certain device by analyzing facial expression, voice, and eye-tracking data during user experience tests. In addition, this paper provides information on typical errors of interactive applications, based on an empirical lab-based survey and the evaluated results. The described procedure for user experience tests and the subsequent analysis can also be applied to other fields and supports the optimization of products and systems.
Top-level staff prefer to live in urban areas with a well-developed social infrastructure. This is a common problem for excellent companies ("hidden champions") in rural areas: even if they can provide the services qualified applicants appreciate for daily living, they fail to attract them because important facts are not presented sufficiently in social media or on the corporate website. This is especially true for applicants with families. The contribution of this paper is four-fold: we provide an overview of the current state of online recruiting activities of hidden champions (1). Based on this corpus, we describe the applicant service gap for company information in rural communes (2). A study on user experience (UX) identifies the applicants' wishes and needs, focusing on a family-oriented information system on living conditions in rural areas (3). Finally, we present the results of an online survey on the value of such information systems with more than 200 participants (4).
Apache Hadoop is a well-known open-source framework for storing and processing huge amounts of data. This paper shows the usage of the framework within a university project conducted in cooperation with a semiconductor company. The goal of this project was to supplement the existing data landscape with facilities for storing and analyzing the data on a new Apache Hadoop-based platform.
Background: Pulmonary vein isolation (PVI) using cryoballoon catheters is a recognized method for the treatment of atrial fibrillation (AF). This method offers a shorter treatment duration compared with classical high-frequency (HF) ablation therapy.
Purpose: The aim of this study was to integrate different cryoballoon catheters and an HF catheter into a heart rhythm model and to compare them by means of static and dynamic electromagnetic and thermal simulation of their use in the treatment of AF.
Methods: The cryoballoon catheters from Medtronic and the HF ablation catheter from Osypka were modelled virtually with the aid of manufacturer specifications and the CST (Computer Simulation Technology, Darmstadt) simulation program. The cryoballoon catheter was located in the lower left pulmonary vein of the virtual heart rhythm model for the realization of PVI by cryoenergy. The balloon surface temperature was set to -50°C during the simulation.
Results: During a simulated 20-second application of a cryoballoon catheter at -50°C, a temperature of -24°C was measured at a depth of 0.5 mm in the myocardium. At a depth of 1 mm the temperature was -3°C, at 2 mm 18°C, and at 3 mm 29°C. During the 15-second application of an RF catheter with an 8 mm electrode and a power of 5 W at 420 kHz, the temperature at the tip of the electrode was 110°C. At a depth of 0.5 mm in the myocardium the temperature was 75°C, at a depth of 1 mm 58°C, at 2 mm 45°C, and at 3 mm 38°C.
Conclusions: The simulation of temperature profiles during the virtual application of several catheter models in the heart rhythm model allows the static and dynamic simulation of PVI by cryoballoon ablation and RF ablation. The three-dimensional simulation can be used to improve ablation applications by creating a personalized cardiac rhythm model from MRI or CT data of a heart and finding a favourable position for the ablation of AF.
Oxidation of the nickel electrode is a severe aging mechanism of solid oxide fuel cells (SOFC) and solid oxide electrolyzer cells (SOEC). This work presents a modeling study of safe operating conditions with respect to nickel oxide formation. Microkinetic reaction mechanisms for thermochemical and electrochemical nickel oxidation are integrated into a 2D multiphase model of an anode-supported solid oxide cell. The local oxidation propensity can be separated into four regimes. Simulations show that the thermochemical pathway generally dominates the electrochemical pathway. As a consequence, as long as fuel utilization is low, cell operation considerably below the electrochemical oxidation limit of 0.704 V is possible without the risk of reoxidation.
Printed systems spark immense interest in industry, and for several parts, such as solar cells or radio frequency identification antennas, printed products are already available on the market. This has led to intense research; however, printed field-effect transistors (FETs) and the logic circuits derived from them still have not been sufficiently developed to be adopted by industry. One of the reasons for this is the lack of control over the threshold voltage during production. In this work, we show an approach to adjust the threshold voltage (Vth) in printed electrolyte-gated FETs (EGFETs) with high accuracy by doping indium-oxide semiconducting channels with chromium. Despite high doping concentrations achieved by a wet chemical process during precursor ink preparation, good on/off ratios of more than five orders of magnitude could be demonstrated. The synthesis process is simple, inexpensive, and easily scalable; it leads to depletion-mode EGFETs that are fully functional at operating potentials below 2 V and allows us to increase Vth by approximately 0.5 V.
Low-latency communication is essential to enable mission-critical machine-type communication use cases in cellular networks. Factory and process automation are major areas that require such low-latency communication. In this paper, we investigate the potential of adopting the semi-persistent scheduling (SPS) latency reduction technique in narrowband LTE (NB-LTE) networks and provide a comprehensive performance evaluation. First, we investigate and implement SPS in an open-source network simulator (NS3). We perform simulations with a focus on LTE-M and Narrowband IoT (NB-IoT) systems and evaluate the impact of the SPS technique on the uplink latency of these narrowband systems in real industrial automation scenarios. The performance gain of adopting SPS is analyzed and the results are compared with legacy dynamic scheduling. Our results show that SPS has the potential to reduce the latency of cellular Internet of Things (cIoT) networks. We believe that SPS can be integrated into LTE-M and NB-IoT systems to support low-latency industrial applications.
Enabling ultra-low latency is one of the major drivers for the development of future cellular networks to support delay-sensitive applications including factory automation, autonomous vehicles, and the tactile internet. Narrowband Internet of Things (NB-IoT) is a 3rd Generation Partnership Project (3GPP) Release 13 standardized cellular network currently optimized for massive Machine Type Communication (mMTC). To reduce the latency in cellular networks, 3GPP has proposed latency reduction techniques that include Semi-Persistent Scheduling (SPS) and short Transmission Time Interval (sTTI). In this paper, we investigate the potential of adopting both techniques in NB-IoT networks and provide a comprehensive performance evaluation. We first analyze these techniques and then implement them in an open-source network simulator (NS3). Simulations are performed with a focus on the Cat-NB1 User Equipment (UE) category to evaluate the uplink user-plane latency. Our results show that SPS and sTTI have the potential to greatly reduce the latency in NB-IoT systems. We believe that both techniques can be integrated into NB-IoT systems to position NB-IoT as a preferred technology for low-data-rate Ultra-Reliable Low-Latency Communication (URLLC) applications before 5G has been fully rolled out.
This book, now in its second, completely revised and updated edition, offers a critical approach to the challenging interpretation of the latest research data obtained using functional neuroimaging in whiplash injury. Such a comprehensive guide to recent and current international research in the field is more necessary than ever, given that the confusion regarding the condition and the medicolegal discussions surrounding it have increased further despite the publication of much literature on the subject. In recent decades especially the functional imaging methods – such as single-photon emission tomography, positron emission tomography, functional MRI, and hybrid techniques – have demonstrated a variety of significant brain alterations. Functional Neuroimaging in Whiplash Injury - New Approaches covers all aspects, including the imaging tools themselves, the various methods of image analysis, different atlas systems, and diagnostic and clinical aspects. The book will help physicians, patients and their relatives and friends, and others to understand this condition as a disease.
In this paper, pathophysiologically interrelated deactivation/activation phenomena are set out using the example of whiplash injury. These phenomena may have been underestimated in previous positron emission tomography studies, as their focus was on hypoperfusion rather than hyperperfusion. In addition, statistical parametric mapping analysis of cerebral studies is normally tuned to obvious clusters of difference rather than fine-tuned to specific areas of interest.
The Baroque composer Johann Sebastian Bach (1685–1750) has left us with many puzzles. The well-known oil painting by Elias Gottlob Haußmann is the only painting for which Bach actually posed in person. According to this portrait, Bach must have been quite obese. The cheeks and nose are flushed – possibly signs of hypertension – and the eyelids are narrow – a sign of myopia. Furthermore, there is a thinning of the lateral third of the right eyebrow, which is known as Hertoghe's sign, and there are indications of periorbital edema. Both signs are compatible with hypothyroidism. Bach might have been suffering from type-2 diabetes as the origin of his final illness; the obituary reports two cataract surgeries by the oculist John Taylor in March/April 1750 and, four months later, "apoplexy" followed by a high fever, of which Bach died. It may be speculated, however, that Bach's entire illness was the result of his presumed obesity, possibly in combination with hypothyroidism.
Commentary on the article "Arthur Willis Goodspeed" by Otto Glasser, published in Science Vol. 98, Issue 2540, p. 219 (doi.org/10.1126/science.98.2536.125).
The high peak power in comparison to the average transmit power is one of the major long-standing problems in multicarrier modulation and is known as the PAPR (peak-to-average power ratio) problem. Many PAPR reduction methods have been devised, and their comparison is usually based on the complementary cumulative distribution function (CCDF) of the PAPR. While this comparison is straightforward and easy to compute, its relationship with system performance metrics like the (uncoded) bit error rate (BER) or the word error rate (WER) for coded systems is considerably more involved. We evaluate the impact of the PAPR on performance metrics such as the uncoded BER, the error vector magnitude (EVM), the mutual information, and the WER for soft decoding. In this context, we find that system performance is not necessarily degraded by an increasing PAPR. We show that a high number of subcarriers, despite the corresponding high PAPR, is actually not a problem for the system performance and provide a simple explanation for this seemingly counter-intuitive fact.
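The PAPR and its CCDF, the comparison metric discussed above, can be computed directly. A minimal pure-Python sketch follows; the subcarrier count, constellation, and ensemble size are illustrative, and a practical evaluation would use an oversampled IFFT, which this direct DFT omits.

```python
import cmath
import math
import random

def papr_db(symbols):
    # PAPR of one OFDM symbol: peak instantaneous power of the time-domain
    # signal over its average power, computed via a direct inverse DFT.
    n = len(symbols)
    x = [sum(s * cmath.exp(2j * math.pi * k * t / n)
             for k, s in enumerate(symbols)) / n
         for t in range(n)]
    p = [abs(v) ** 2 for v in x]
    return 10 * math.log10(max(p) / (sum(p) / n))

random.seed(0)
N = 64  # number of subcarriers (illustrative)
qpsk = [1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]
paprs = [papr_db([random.choice(qpsk) for _ in range(N)])
         for _ in range(100)]

def ccdf(paprs, threshold_db):
    # Empirical CCDF: fraction of symbols whose PAPR exceeds the threshold.
    return sum(p > threshold_db for p in paprs) / len(paprs)

print(ccdf(paprs, 8.0))  # probability of exceeding 8 dB for this ensemble
```

A single active subcarrier gives a constant-envelope signal and hence 0 dB PAPR, while random loading of many subcarriers drives the peaks up; the CCDF summarizes how often such peaks occur.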
In numerical calculations, guided acoustic waves, localized in two spatial dimensions, have been shown to exist and their properties have been investigated in three different geometries, (i) a half-space consisting of two elastic media with a planar interface inclined to the common surface, (ii) a wedge made of two elastic media with a planar interface, and (iii) the free edge of an elastic layer between two quarter-spaces or two wedge-shaped pieces of a material with elastic properties and density differing from those of the intermediate layer.
For the special case of Poisson media forming systems (i) and (ii), the existence ranges of these 1D guided waves in parameter space have been determined and found to strongly depend on the inclination angle between surface and interface in case (i) and the wedge angle in case (ii). In a system of type (ii) made of two materials with strong acoustic mismatch and in systems of type (iii), leaky waves have been found with a high degree of spatial localization of the associated displacements, although the two materials constituting these structures are isotropic.
Both the fully guided and the leaky waves analyzed in this work could find applications in non-destructive evaluation of composite structures and should be accounted for in geophysical prospecting, for example.
A critical comparison is presented of the two computational approaches employed, namely a semi-analytical finite element scheme and a method based on an expansion of the displacement field in a double series of special functions.
Most machine learning methods require careful selection of hyper-parameters in order to train a high-performing model with good generalization abilities. Hence, several automatic selection algorithms have been introduced to overcome the tedious manual (trial and error) tuning of these parameters. Due to its very high sample efficiency, Bayesian Optimization over a Gaussian Process model of the parameter space has become the method of choice. Unfortunately, this approach suffers from cubic computational complexity due to the underlying Cholesky factorization, which makes it very hard to scale beyond a small number of sampling steps. In this paper, we present a novel, highly accurate approximation of the underlying Gaussian Process. Reducing its computational complexity from cubic to quadratic allows an efficient strong scaling of Bayesian Optimization while outperforming the previous approach in optimization accuracy. First experiments show a speedup by a factor of 162 on a single node and a further speedup by a factor of 5 in a parallel environment.
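The cubic bottleneck referred to above is the Cholesky factorization of the n × n kernel matrix needed for the Gaussian Process posterior. A minimal sketch of the standard factorization (not the paper's approximation) makes the O(n³) triple loop visible; the example matrix is illustrative.

```python
def cholesky(a):
    # Standard Cholesky factorization A = L * L^T for a symmetric
    # positive-definite matrix; the nested loops give the O(n^3) cost
    # that dominates exact Gaussian Process regression.
    n = len(a)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = (a[i][i] - s) ** 0.5
            else:
                L[i][j] = (a[i][j] - s) / L[j][j]
    return L

# e.g. a small SPD matrix (standing in for a 2-point kernel matrix):
L = cholesky([[4.0, 2.0], [2.0, 3.0]])
print(L)  # [[2.0, 0.0], [1.0, 1.414...]]
```

Since each new sample enlarges the kernel matrix, an exact GP refactorizes at this cost in every optimization step, which is why approximations with quadratic cost matter for scaling.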
Printed electronics is perceived to have a major impact in the fields of smart sensors, the Internet of Things, and wearables. Especially low-power printed technologies such as electrolyte-gated field-effect transistors (EGFETs) using solution-processed inorganic materials and inkjet printing are very promising in such application domains. In this paper, we discuss a modeling approach to describe the variations of printed devices. Incorporating these models and design flows into our previously developed printed design system allows for robust circuit design. Additionally, we propose a reliability-aware routing solution for printed electronics technology based on the technology constraints in printing crossovers. The proposed methodology was validated on multiple benchmark circuits and can be easily integrated with the design automation tool-set.
A car is only useful when it runs properly, but keeping a car running is getting more and more complex. Car service providers need deep knowledge of the technical details of the different car models. Car producers, on the other hand, try to keep this information in their ownership. Digital data collection takes place every second over the car's product life cycle, and the data are stored on the car producers' servers. The contribution of this paper is three-fold: we provide an overview of the current concepts of intelligent order assistant technologies (I). This corpus is used to arrive at a more precise description of the specific service performance aspects (II). Finally, a representative empirical study with German motor mechanics helps to evaluate the wishes and needs regarding an intelligent order assistant in the garage (III).
With the growing share of renewable energies in the electricity supply, transmission and distribution grids have to be adapted. A profound understanding of the structural characteristics of distribution grids is essential to define suitable strategies for grid expansion. Many countries have a large number of distribution system operators (DSOs) whose standards vary widely, which contributes to coordination problems during peak load hours. This study contributes to targeted distribution grid development by classifying DSOs according to their remuneration requirement. To examine the amendment potential, structural and grid development data from 109 distribution grids in South-Western Germany are collected, referring to publications of the respective DSOs. The resulting database is assessed statistically to identify clusters of DSOs according to the fit of demographic requirements and grid-construction status, and thus to identify development needs that would enable a broader use of regenerative energy resources. Three alternative algorithms are explored for this task. The study finds the novel Gauss-Newton algorithm optimal for analysing the fit of grid conditions to regional requirements, and it successfully identifies grids with remuneration needs. It is superior to the previously used K-Means algorithm. The method developed here is transferable to other areas for grid analysis and targeted, cost-efficient development.
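As a point of reference, the K-Means baseline the study compares against can be sketched in a few lines. The two structural features and all data below are synthetic placeholders for illustration only, not the collected DSO data or the study's feature set.

```python
import numpy as np

# Illustrative K-Means baseline for grouping DSOs by two structural
# features (e.g., load density vs. line length per customer).
# Data are synthetic; real grid data would replace them.

rng = np.random.default_rng(0)
dsos = np.vstack([rng.normal([1.0, 1.0], 0.1, (20, 2)),   # one synthetic group
                  rng.normal([3.0, 2.0], 0.1, (20, 2))])  # a second group

def kmeans(X, k=2, iters=25):
    centers = X[[0, len(X) // 2]].astype(float)  # deterministic init (k = 2 here)
    for _ in range(iters):
        # assign each operator to its nearest cluster center
        labels = np.argmin(((X[:, None] - centers[None])**2).sum(-1), axis=1)
        # recompute centers as cluster means
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels, centers

labels, centers = kmeans(dsos)
```

A least-squares fit such as Gauss-Newton instead minimizes the residual between a parametric model of grid requirements and the observed data, which is the kind of fit-based classification the study reports as superior.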
Protecting software from illegal access, intentional modification or reverse engineering is an inherently difficult practical problem involving code obfuscation techniques and real-time cryptographic protection of code. In traditional systems a secure element (the "dongle") is used to protect software. However, this approach suffers from several technical and economic drawbacks, such as the dongle being lost or broken.
We present a system that provides such dongles as a cloud service, and more importantly, provides the required cryptographic material to control access to software functionality in real-time.
This system is developed as part of an ongoing nationally funded research project and is now entering a first trial stage with stakeholders from different industrial sectors.
The development of secure software systems is of ever-increasing importance. While software companies often invest large amounts of resources into the upkeep and general security of large-scale applications in production, they appear to neglect threat modeling in the earlier stages of the software development lifecycle. When applied during the design phase of development, and continuously throughout development iterations, threat modeling can help to establish a "Secure by Design" approach. This approach allows issues relating to IT security to be found early during development, reducing the need for later improvement and thus saving resources in the long term. In this paper the current state of threat modeling is investigated. This investigation drove the derivation of requirements for the development of a new threat modeling framework and tool, called OVVL. OVVL utilizes concepts of established threat modeling methodologies, as well as functionality not available in existing solutions.
Model-based analysis of Electrochemical Pressure Impedance Spectroscopy (EPIS) for PEM Fuel Cells
(2019)
Electrochemical impedance spectroscopy (EIS) is a widely-used diagnostic technique to characterize electrochemical processes. It is based on the dynamic analysis of two electrical observables, that is, current and voltage. Electrochemical cells with gaseous reactants or products, in particular fuel cells, offer an additional observable, that is, the gas pressure. The dynamic coupling of current or voltage with gas pressure gives rise to a number of additional impedance definitions, for which we have previously introduced the term electrochemical pressure impedance spectroscopy (EPIS) [1,2]. EPIS shows a particular sensitivity towards transport processes of gas-phase or dissolved species, in particular, diffusion coefficients and transport pathway lengths. It is as such complementary to standard EIS, which is mainly sensitive towards electrochemical processes. First EPIS experiments on PEM fuel cells have recently been shown [3].
We present a detailed modeling and simulation analysis of EPIS of a PEM fuel cell. We use a 1D+1D continuum model of a fuel/air channel pair with GDL and MEA. Backpressure is dynamically varied, and the resulting simulated oscillation in cell voltage is evaluated to yield the EPIS signal Z_(V/p_ca). Results are obtained for different transport situations of the fuel cell, giving rise to very complex EPIS shapes in the Nyquist plot. This complexity shows the necessity of model-based interpretation of the complex EPIS shapes. Based on the simulation results, specific features in the EPIS spectra can be assigned to different transport domains (gas channel, GDL, membrane water transport).
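The EPIS quantity itself is a complex transfer function from the pressure perturbation to the voltage response. The following sketch extracts such a transfer function from time series at a single excitation frequency; all signal amplitudes and the phase lag are assumed values for illustration, not results from the paper's 1D+1D model.

```python
import numpy as np

# Hedged illustration of the EPIS definition Z_{V/p} = dV / dp:
# complex ratio of the voltage response to a sinusoidal back-pressure
# perturbation at the excitation frequency. Signals are synthetic.

f = 1.0                              # excitation frequency, Hz
t = np.linspace(0, 10, 4001)[:-1]    # 10 full periods, uniform sampling
p = 50.0 * np.sin(2 * np.pi * f * t)           # pressure perturbation, Pa
v = 2e-3 * np.sin(2 * np.pi * f * t - 0.3)     # voltage response, V (lags)

def complex_amplitude(x, t, f):
    # project the signal onto e^{-i 2 pi f t} (single-frequency DFT)
    return 2.0 * np.mean(x * np.exp(-2j * np.pi * f * t))

Z = complex_amplitude(v, t, f) / complex_amplitude(p, t, f)
mag, phase = abs(Z), np.angle(Z)     # here 4e-5 V/Pa at -0.3 rad
```

Sweeping the excitation frequency and plotting the resulting Z values in the complex plane yields the Nyquist-plot shapes discussed above.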
Spinal cord stimulation (SCS) is the most commonly used technique of neurostimulation. It involves the stimulation of the spinal cord and is therefore used to treat chronic pain. The existing esophageal catheters are used for temperature monitoring during an electrophysiology study with ablation and transesophageal echocardiography. The aim of the study was to model the spine and new esophageal electrodes for the transesophageal electrical pacing of the spinal cord, and to integrate them in the Offenburg heart rhythm model for the static and dynamic simulation of transesophageal neurostimulation. The modeling and simulation were both performed with the electromagnetic and thermal simulation software CST (Computer Simulation Technology, Darmstadt). Two new esophageal catheters were modelled as well as a thoracic spine based on the dimensions of a human skeleton. The simulation of directed transesophageal neurostimulation is performed using the esophageal balloon catheter with an electric pacing potential of 5 V and a trapezoidal signal. A potential of 4.33 V can be measured directly at the electrode, 3.71 V in the myocardium at a depth of 2 mm, 2.68 V in the thoracic vertebra at a depth of 10 mm, 2.1 V in the thoracic vertebra at a depth of 50 mm and 2.09 V in the spinal cord at a depth of 70 mm. The relation between the voltage delivered to the electrodes and the voltage applied to the spinal cord is linear. Virtual heart rhythm and catheter models as well as the simulation of electrical pacing fields and electrical sensing fields allow the static and dynamic simulation of directed transesophageal electrical pacing of the spinal cord. The 3D simulation of the electrical sensing and pacing fields may be used to optimize transesophageal neurostimulation.
Cast aluminum alloys are frequently used as materials for cylinder head applications in internal combustion gasoline engines. These components must withstand severe cyclic mechanical and thermal loads throughout their lifetime. Reliable computational methods allow for accurate estimation of stresses, strains, and temperature fields and lead to more realistic Thermomechanical Fatigue (TMF) lifetime predictions. With accurate numerical methods, the components could be optimized via computer simulations and the number of required bench tests could be reduced significantly. These types of alloys are normally optimized for peak hardness from a quenched state that maximizes the strength of the material. However, due to high-temperature exposure in service or under test conditions, the material experiences an over-ageing effect that leads to a significant reduction in its strength. To numerically account for ageing effects, the Shercliff & Ashby ageing model is combined with a Chaboche-type viscoplasticity model available in the finite-element program ABAQUS by defining field variables. The constitutive model with ageing effects is correlated with uniaxial cyclic isothermal tests in the T6 state and the over-aged state, as well as with thermomechanical tests. In addition, the mechanism-based TMF damage model (DTMF) is calibrated for both the T6 and the over-aged state. Both the constitutive and the damage model are applied to a cylinder head component, simulating several cycles of an engine dynamometer test. The effects of including ageing in both models are shown.
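Ageing models of this family rest on an Arrhenius-type temperature-compensated time, which maps an exposure history to an equivalent duration at a reference ageing temperature. The activation energy and temperatures below are placeholder assumptions for illustration, not the calibrated values from the paper.

```python
import math

# Sketch of Arrhenius-type temperature-compensated ageing time, as used
# in Shercliff-Ashby-style models: exposure at temperature T is mapped
# to an equivalent time at a reference ageing temperature.
# Q, T_REF and the exposure history are assumed placeholder values.

Q = 130e3        # activation energy, J/mol (assumed)
R = 8.314        # gas constant, J/(mol K)
T_REF = 453.15   # reference ageing temperature, K (180 C, assumed)

def equivalent_time(segments):
    """segments: list of (duration_h, temperature_K) exposure steps."""
    return sum(dt * math.exp(-Q / (R * T) + Q / (R * T_REF))
               for dt, T in segments)

# 100 h at 200 C counts as considerably more than 100 h at the
# 180 C reference, i.e. the material over-ages faster when hotter:
t_eq = equivalent_time([(100.0, 473.15)])
```

In a coupled FE analysis, such an ageing state would be carried as a field variable that degrades the strength parameters of the viscoplasticity model, which matches the field-variable coupling described above.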
Wireless sensor networks have found their way into a wide range of applications, among which environmental monitoring systems have attracted increasing interest from researchers. The main challenges for these applications are scalability of the network size and energy efficiency of the spatially distributed nodes. Nodes are mostly battery-powered and spend most of their energy budget on the radio transceiver module. In normal operation modes, most energy is spent waiting for incoming frames. So-called Wake-On-Radio (WOR) technology helps to optimize trade-offs between energy consumption, communication range, implementation complexity and response time. We previously proposed a protocol called SmartMAC that makes use of this WOR technology. Furthermore, it makes it possible to balance the energy consumption between sender and receiver nodes depending on the use case. Based on several calculations and simulations, it was predicted that the SmartMAC protocol would be significantly more efficient than other schemes proposed in recent publications, while preserving a certain backward compatibility with standard IEEE 802.15.4 transceivers. To verify this prediction, we implemented the SmartMAC protocol on a given hardware platform. This paper compares the real-time performance of the SmartMAC protocol against simulation results and shows that the measured values are very close to the estimated ones. We therefore believe that the proposed MAC algorithm outperforms other Wake-on-Radio MACs.
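The energy argument behind wake-on-radio can be illustrated with a simple duty-cycle model of average receiver current. All current draws and timing parameters below are assumed figures for illustration, not SmartMAC measurements.

```python
# Back-of-the-envelope model of the trade-off described above:
# a receiver listening continuously versus one using periodic
# wake-on-radio (WOR) channel sniffs. All figures are assumptions.

RX_CURRENT_MA = 15.0      # radio in receive mode (assumed)
SLEEP_CURRENT_MA = 0.002  # deep sleep (assumed)
SNIFF_MS = 2.0            # duration of one WOR channel sniff (assumed)
WAKE_INTERVAL_MS = 500.0  # time between sniffs (assumed)

def avg_current_always_on():
    # idle listening: the radio draws receive current all the time
    return RX_CURRENT_MA

def avg_current_wor():
    # time-weighted average of short sniffs and long sleep phases
    duty = SNIFF_MS / WAKE_INTERVAL_MS
    return duty * RX_CURRENT_MA + (1 - duty) * SLEEP_CURRENT_MA

saving = avg_current_always_on() / avg_current_wor()
```

Lengthening the wake interval cuts the average current further but increases the worst-case response time, since a sender must transmit its wake-up preamble until the receiver's next sniff; this is exactly the balance between sender and receiver cost that such a protocol can tune per use case.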