In this TDP, we describe a new tool created for testing the strategy layer of our soccer-playing agents. It is a complete 2D simulator that simulates games based on the decisions of 22 agents. With this tool, debugging the decision and strategy layer of our agents is much more efficient than before, thanks to various interaction methods and complete control over the simulation.
In the future, the tool could also serve as a means to run simulations of game series much faster than with the 3D simulator. This way, the impact of different play strategies could be evaluated much more quickly than before.
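The core of such a tool is a plain step loop that advances the match state from the agents' decisions. The sketch below is only an illustration of this idea; the class names, the decision interface, the field dimensions and the movement model are our own assumptions, not the tool's actual code:

```python
import math
import random

FIELD_LENGTH, FIELD_WIDTH = 30.0, 20.0  # assumed field size in meters

class Agent:
    """One of the 22 simulated players; decide() returns a unit direction."""
    def __init__(self, x, y):
        self.x, self.y = x, y

    def decide(self, ball):
        # Placeholder strategy layer: walk straight toward the ball.
        dx, dy = ball[0] - self.x, ball[1] - self.y
        dist = math.hypot(dx, dy) or 1.0
        return dx / dist, dy / dist

def simulate(agents, ball, steps=100, dt=0.1, speed=1.0):
    """Advance the match state step by step from the agents' decisions."""
    for _ in range(steps):
        for a in agents:
            vx, vy = a.decide(ball)
            a.x += vx * speed * dt
            a.y += vy * speed * dt
    return agents

agents = [Agent(random.uniform(0, FIELD_LENGTH), random.uniform(0, FIELD_WIDTH))
          for _ in range(22)]
simulate(agents, ball=(15.0, 10.0))
```

Because every step is driven purely by the agents' decisions, a debugger can pause the loop, inspect any agent's state and replay a situation, which is what makes a strategy layer testable in isolation.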
Automatic Identification of Travel Locations in Rare Books - Object Oriented Information Management
(2017)
The digital content of the Internet is growing exponentially and mass digitization of printed media opens access to literature, in particular the genre of travel literature from the 18th and 19th century, which consists of diaries or travel books describing routes, observations or inspirations. The identification of described locations in the digital text is a long-standing challenge which requires information technology to supply dynamic links to sources by new forms of interaction and synthesis between humanistic texts and scientific observations.
Using object-oriented information technology, a prototype of a software tool is developed which makes it possible to automatically identify geographic locations and travel routes mentioned in rare books. The information objects contain properties such as names and classification codes for populated places, streams, mountains and regions. Together with the latitude and longitude of every single location, it is possible to geo-reference this information so that all processed and filtered datasets can be displayed by a map application. This method has already been used in the Humboldt Digital Library to present Alexander von Humboldt’s maps and was tested in a case study to prove the correctness and reliability of the automatic identification of locations, based on the work of Alexander von Humboldt and Johann Wolfgang von Goethe.
The results reveal numerous errors due to misspellings, changed location names, and common terms that coincide with location names. On the other hand, it becomes very clear that the results of automatic object detection and recognition can be improved by error-free and comprehensive sources. Consequently, an increase in the quality and usability of the service can be expected, accompanied by more options to detect unknown locations in the descriptions of rare books.
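The matching step behind such a tool can be pictured as a lookup of text spans against a gazetteer of information objects carrying the properties named above. The sketch below is purely illustrative; its entries, property names and matching rule are invented and far simpler than the prototype described here:

```python
# Toy gazetteer: name -> information object with class code and coordinates.
# All entries and values are illustrative assumptions.
GAZETTEER = {
    "Quito":      {"class": "populated place", "lat": -0.22, "lon": -78.51},
    "Orinoco":    {"class": "stream",          "lat":  8.62, "lon": -62.25},
    "Chimborazo": {"class": "mountain",        "lat": -1.47, "lon": -78.82},
}

def identify_locations(text):
    """Return georeferenced information objects for place names found in text."""
    hits = []
    for name, props in GAZETTEER.items():
        if name in text:
            hits.append({"name": name, **props, "offset": text.index(name)})
    # Order hits by text position so they can be joined into a travel route.
    return sorted(hits, key=lambda h: h["offset"])

route = identify_locations(
    "From Quito the expedition followed the Orinoco before ascending Chimborazo.")
```

Even this naive exact-string rule hints at the error sources reported above: a misspelled or historically renamed place simply produces no hit, and a common word equal to a place name produces a false one.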
Technology and computer applications influence our daily lives, and questions arise concerning the role of artificial intelligence and decision-making algorithms. There are warning voices claiming that computers can, in theory, emulate human intelligence and even exceed it. This paper points out that a replacement of humans by computers is unlikely, because human thinking is characterized by cognitive heuristics and emotions, which cannot simply be implemented in machines operating with algorithms, procedural data processing or artificial neural networks. However, we are going to share our responsibilities with superior computer systems that track and survey all of our digital activities, while we have no insight into the decision-making processes inside the machines. It is shown that we need a new digital humanism defining rules of computer responsibility to avoid digital totalism and the comprehensive monitoring and controlling of individuals on planet Earth.
Objective: This paper deals with the design and the optimization of mechatronic devices.
Introduction: Compared with existing works, the design approach presented in this paper aims to integrate optimization into the design phase of complex mechatronic systems in order to increase the efficiency of this method.
Methods: To solve this problem, a novel mechatronic system design approach has been developed in order to take the multidisciplinary aspect into account and to treat optimization as a tool that can be used within the embodiment design process to build mechatronic solutions from a set of solution concepts designed with innovative or routine design methods.
Conclusions: This approach was then applied to the design and optimization of a wind turbine system that can be implemented to autonomously supply a mountain cottage.
Objectives: Speech recognition on the telephone poses a challenge for patients with cochlear implants (CIs) due to a reduced bandwidth of transmission. This trial evaluates a home-based auditory training with telephone-specific filtered speech material to improve sentence recognition. Design: Randomised controlled parallel double-blind. Setting: One tertiary referral centre. Participants: A total of 20 postlingually deafened patients with CIs. Main outcome measures: Primary outcome measure was sentence recognition assessed by a modified version of the Oldenburg Sentence Test filtered to the telephone bandwidth of 0.3-3.4 kHz. Additionally, pure tone thresholds, recognition of monosyllables and subjective hearing benefit were acquired at two separate visits before and after a home-based training period of 10-14 weeks. For training, patients received a CD with speech material, either unmodified for the unfiltered training group or filtered to the telephone bandwidth in the filtered group. Results: Patients in the unfiltered training group achieved an average sentence recognition score of 70.0%±13.6% (mean±SD) before and 73.6%±16.5% after training. Patients in the filtered training group achieved 70.7%±13.8% and 78.9%±7.0%, a statistically significant difference (P=.034, t10 =2.292; two-way RM ANOVA/Bonferroni). An increase in the recognition of monosyllabic words was noted in both groups. The subjective benefit was positive for filtered and negative for unfiltered training. Conclusions: Auditory training with specifically filtered speech material provided an improvement in sentence recognition on the telephone compared to training with unfiltered material.
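Restricting speech material to the 0.3-3.4 kHz telephone band is, in essence, a band-pass filtering step. The following sketch uses a single textbook (RBJ audio-EQ-cookbook) band-pass biquad centered on that band; this is our own coarse stand-in for illustration, not the filter actually used to produce the training CDs:

```python
import math

def telephone_bandpass(samples, fs=8000.0, f_lo=300.0, f_hi=3400.0):
    """Band-pass a signal to roughly the 0.3-3.4 kHz telephone band
    using one RBJ-cookbook biquad (a deliberately coarse approximation)."""
    f0 = math.sqrt(f_lo * f_hi)      # geometric center frequency
    q = f0 / (f_hi - f_lo)           # quality factor from the bandwidth
    w0 = 2.0 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2.0 * q)
    b0, b2 = alpha, -alpha           # band-pass numerator (b1 = 0)
    a0, a1, a2 = 1.0 + alpha, -2.0 * math.cos(w0), 1.0 - alpha
    out = []
    x1 = x2 = y1 = y2 = 0.0
    for x in samples:                # direct-form I difference equation
        y = (b0 * x + b2 * x2 - a1 * y1 - a2 * y2) / a0
        x2, x1 = x1, x
        y2, y1 = y1, y
        out.append(y)
    return out

def tone(freq, fs=8000.0, n=8000):
    return [math.sin(2.0 * math.pi * freq * i / fs) for i in range(n)]

def rms(samples):
    tail = samples[len(samples) // 2:]   # skip the filter transient
    return math.sqrt(sum(v * v for v in tail) / len(tail))

in_band = rms(telephone_bandpass(tone(1000.0)))  # near the passband center
hum = rms(telephone_bandpass(tone(50.0)))        # mains hum, below the band
```

A single biquad rolls off gently; real telephone-band simulation would use a steeper filter, but the sketch shows the principle of removing the spectral information CI users lose on the phone.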
The following contribution deals with the growth of cracks in low-cycle fatigue (LCF) and thermomechanical fatigue (TMF) tested specimens of Inconel 718, measured using the replica method. The specimens are loaded with different strain rates. The material shows a significantly higher crack growth rate if the strain rate is decreased. Electron backscatter diffraction (EBSD) is adopted to identify the failure mechanism and the misorientation relationship of failed grain boundaries in secondary cracks. The analyzed cracks propagated mainly transgranularly, but intergranular failure can also be observed in some areas. It is found that grain boundaries with a coincidence site lattice (CSL) boundary structure are generally less susceptible to intergranular failure than grain boundaries with random misorientation. To model the experimentally identified crack behavior, an existing model for fatigue crack growth based on the mechanism of time-dependent elastic-plastic crack tip blunting is enhanced to describe environmental effects based on the mechanism of oxygen diffusion at the crack tip. For the diffusion process, the temperature-dependent parabolic diffusion law is assumed. As a result, the time-dependent cyclic crack tip opening displacement (DCTOD) is used as a representative value to describe both mechanisms. Thus, most of the included model parameters characterize the deformation behavior of the material and can be determined by independent material tests. With the determined material properties, the proposed model describes the experimentally measured crack growth curves very well. The model is validated based on predictions of the number of cycles to failure of LCF as well as in-phase and out-of-phase TMF tests in the temperature range between room temperature and 650 °C.
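The interplay of the two mechanisms can be pictured with a toy cycle-by-cycle integration of a crack-growth law combining a blunting term (proportional to the DCTOD) with a parabolic oxygen-diffusion term whose ingress per cycle grows with cycle duration. Every functional form and parameter value below is invented for illustration and is not a model parameter identified in this work:

```python
import math

R_GAS = 8.314  # universal gas constant, J/(mol K)

def cycles_to_length(a0, a_crit, dctod, temp_k, cycle_time_s,
                     beta=0.3, d0=1e-10, q_act=2.0e5):
    """Count cycles for a crack to grow from a0 to a_crit under
    da/dN = beta * DCTOD + sqrt(D(T) * t_cycle). Illustrative only."""
    diff = d0 * math.exp(-q_act / (R_GAS * temp_k))  # Arrhenius diffusivity
    ox_per_cycle = math.sqrt(diff * cycle_time_s)    # parabolic diffusion depth
    a, n = a0, 0
    while a < a_crit:
        a += beta * dctod + ox_per_cycle
        n += 1
    return n

# A lower strain rate means a longer cycle, hence more oxygen ingress per
# cycle and fewer cycles to reach the critical crack length.
n_fast = cycles_to_length(1e-4, 1e-3, 1e-7, 923.0, 10.0)     # fast cycling
n_slow = cycles_to_length(1e-4, 1e-3, 1e-7, 923.0, 1000.0)   # slow cycling
```

The sketch reproduces the qualitative trend reported above (higher crack growth rate at lower strain rate) purely through the time-dependent diffusion term.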
Cast iron materials are used for cylinder heads of heavy-duty internal combustion engines. These components must withstand severe cyclic mechanical and thermal loads throughout their service life. While high-cycle fatigue (HCF) is dominant for the material in the water jacket region, the combination of thermal transients with mechanical load cycles results in thermomechanical fatigue (TMF) of the material in the fire deck region, even including superimposed TMF and HCF loads. Increasing the efficiency of the engines directly leads to increasing combustion pressure and temperature and, thus, to lower safety margins for the currently used cast iron materials, or alternatively to the need for superior cast iron materials. In this paper (Part I), the TMF properties of the lamellar graphite cast iron GJL250 and the vermicular graphite cast iron GJV450 are characterized in uniaxial tests, and a mechanism-based model for TMF life prediction is developed for both materials. The model can be used to estimate the fatigue life of components by means of finite-element calculations (Part II of the paper) and supports engineers in finding the appropriate material and design. Furthermore, the effect of the elastic, plastic and creep properties of the materials on the fatigue life can be evaluated with the model. However, for material selection, the thermophysical properties, which largely control the thermal stresses in the component, must also be considered. Hence, the need for integral concepts for material characterization and selection from a multitude of existing and soon-to-be-developed cast iron materials is discussed.
The electrical field (E-field) of the biventricular (BV) stimulation is important for the success of cardiac resynchronization therapy (CRT) in patients with cardiac insufficiency and widened QRS complex.
The aim of the study was to model different pacing and ablation electrodes and to integrate them into a heart model for the static and dynamic simulation of BV stimulation and HF ablation in atrial fibrillation (AF).
The modeling and simulation were carried out using the electromagnetic simulation software CST. Five multipolar left ventricular (LV) electrodes, four bipolar right atrial (RA) electrodes, two right ventricular (RV) electrodes and one HF ablation catheter were modelled. A selection was integrated into the heart rhythm model (Schalk, Offenburg) for the electrical field simulation. The simulation of an AV node ablation at CRT was performed with RA, RV and LV electrodes and an integrated ablation catheter with an 8 mm gold tip.
The BV stimulation was performed simultaneously with an amplitude of 3 V at the LV electrode and 1 V at the RV electrode, with a pulse width of 0.5 ms each. The far-field potential was 32.86 mV at the RA electrode tip and 185.97 mV at a distance of 1 mm from the RA electrode tip. AV node ablation was simulated with an applied power of 5 W at 420 kHz at the distal ablation electrode. The temperature was 103.87 °C at the catheter tip after 5 s of ablation time and 37.61 °C at a distance of 2 mm inside the myocardium. After 15 s, the temperatures were 118.42 °C and 42.13 °C, respectively.
Virtual heart and electrode models as well as the simulations of electrical fields and temperature profiles allow the static and dynamic simulation of atrial synchronous BV stimulation and HF ablation at AF and could be used to optimize the CRT and AF ablation.
A complete thermomechanical fatigue (TMF) life prediction methodology is developed for predicting the TMF life of cast iron cylinder heads for efficient heavy duty internal combustion engines. The methodology uses transient temperature fields as thermal loads for the non-linear structural finite-element analysis (FEA). To obtain reliable stress and strain histories in the FEA for cast iron materials, a time and temperature dependent plasticity model which accounts for viscous effects, non-linear kinematic hardening and tension-compression asymmetry is required. For this purpose a unified elasto-viscoplastic Chaboche model coupled with damage is developed and implemented as a user material model (USERMAT) in the general purpose FEA program ANSYS. In addition, the mechanism-based DTMF model for TMF life prediction developed in Part I of the paper is extended to three-dimensional stress states under transient non-proportional loading conditions. The material properties of the plasticity model are determined for lamellar graphite cast iron GJL250 and vermicular graphite cast iron GJV450 from isothermal and non-isothermal uniaxial tests. The methodology is applied to obtain a TMF life prediction on two cast iron cylinder heads for heavy duty diesel engine applications made from both cast iron materials. It is shown that the life predictions using the developed methodology correlate very well with observed lives from two bench tests in terms of location as well as number of cycles to failure.
The ability to detect a target signal masked by noise is improved in normal-hearing listeners when interaural phase differences (IPDs) between the ear signals exist either in the masker or in the signal. To improve binaural hearing in bilaterally implanted cochlear implant (BiCI) users, a coding strategy providing the best possible access to IPDs is highly desirable. Outcomes of a previous study (Zirn, Arndt et al. 2016) revealed that a subset of BiCI users showed improved IPD detection thresholds with the fine structure processing strategy FS4 compared to the constant rate strategy HDCIS using narrowband stimuli. In contrast, little difference between the coding strategies was found for broadband stimuli with regard to binaural speech intelligibility level differences (BILD) as an estimate of binaural unmasking. Compared to normal-hearing listeners (7.5 ± 1.2 dB), BILD were small in BiCI users (around 0.5 dB with both coding strategies).
In the present work, we investigated the influence of binaural fitting parameters on BILD. In our cohort of BiCI users, many were implanted with electrode arrays differing in length between the left and right side. Because this length difference typically corresponded to the distance of two electrode contacts, the first modification of bilateral fitting was a tonotopic adjustment by deactivating the most apical electrode contact on the side with the more deeply inserted array (tonotopic approach).
The second modification was the isolation of the residual, most apical electrode contacts by deactivating the basally adjacent electrode contact on each side (tonotopic sparse approach). Applying these modifications, the BILD improved by up to 1.5 dB.
Our university carries out various research projects. Among others, the Schluckspecht project is an interdisciplinary effort on different ultra-efficient car concepts for international contests. Besides the engineering work, one part of the project deals with real-time data visualization. In order to increase the efficiency of the vehicle, online monitoring of the runtime parameters is necessary. The driving parameters of the vehicle are transmitted to a processing station via a wireless network connection. We plan to use an augmented reality (AR) application to visualize different data on top of the view of the real car. By utilizing a mobile Android or iOS device, a user can interactively view various real-time and statistical data. The car and its components are meant to be augmented with various additional information, whereby that information should appear at the correct position of the components. An engine, for example, could show the current rpm and consumption values; a battery, on the other hand, could show the current charge level. The goal of this paper is to evaluate different possible approaches and their suitability, and to expand our application to other projects at our university.
Since their early days, space communications have been among the strongest driving applications for the development of error-correcting codes. Indeed, space-to-Earth telemetry (TM) links have extensively exploited advanced coding schemes, from convolutional codes to Reed-Solomon codes (also in concatenated form) and, more recently, from turbo codes to low-density parity-check (LDPC) codes. The efficiency of these schemes has been extensively proved in several papers and reports. The situation is somewhat different for Earth-to-space telecommand (TC) links. Space TCs must reliably convey control information as well as software patches from Earth control centers to scientific payload instruments and engineering equipment onboard (O/B) spacecraft. The success of a mission may be compromised by an error corrupting a TC message: a detected error causing no execution or, even worse, an undetected error causing a wrong execution. This imposes strict constraints on the maximum acceptable detected and undetected error rates.
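A concrete example of the error-detection side of this problem is the 16-bit cyclic redundancy check carried by CCSDS telecommand transfer frames (polynomial 0x1021, all-ones preset). The sketch below implements that CRC; the frame content is, of course, invented:

```python
def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
    """CRC-16, polynomial 0x1021, all-ones preset, MSB-first (no reflection),
    as used for frame error detection in CCSDS telecommand transfer frames."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021 if crc & 0x8000 else crc << 1) & 0xFFFF
    return crc

frame = b"telecommand segment"   # illustrative payload, not a real TC frame
fcs = crc16_ccitt(frame)

# A single corrupted bit changes the checksum, so the receiver detects the
# error and can refuse execution instead of executing a wrong command.
corrupted = bytes([frame[0] ^ 0x01]) + frame[1:]
```

A CRC only detects errors; the undetected-error-rate constraints mentioned above concern the rare patterns a given polynomial cannot distinguish, which is why the detection code is combined with error-correcting coding on the link.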
Over the last few years, our students have become increasingly unhappy. Sometimes they stop attending lectures and even seem not to know how to behave correctly. It feels like they are going on strike. Consequently, drop-out rates are sky-rocketing. The lecturers/professors are not happy either, adopting an “I-don’t-care” attitude.
An interdisciplinary, international team set out to find out: (1) What are the students unhappy about? Why is it becoming so difficult for them to cope? (2) What does the “I-don’t-care” attitude of professors actually mean? What do they care or not care about? (3) How far do the views of the parties correlate? Could some kind of mutual understanding be achieved?
The findings indicate that, at least at our universities, there is a rather long way to go from “Engineering versus Pedagogy” to “Engineering Pedagogy”.
Finding clusters in high-dimensional data is a challenging research problem. Subspace clustering algorithms aim to find clusters in all possible subspaces of the dataset, where a subspace is a subset of the dimensions of the data. However, the exponential increase in the number of subspaces with the dimensionality of the data renders most of the algorithms inefficient as well as ineffective. Moreover, these algorithms have data dependencies ingrained in the clustering process; thus, parallelization becomes difficult and inefficient. SUBSCALE is a recent subspace clustering algorithm which is scalable with the number of dimensions and contains independent processing steps which can be exploited through parallelism. In this paper, we aim to leverage, firstly, the computational power of widely available multi-core processors to improve the runtime performance of the SUBSCALE algorithm. The experimental evaluation has shown linear speedup. Secondly, we are developing an approach using graphics processing units (GPUs) for fine-grained data parallelism to accelerate the computation further. First tests of the GPU implementation show very promising results.
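The "independent processing steps" that make SUBSCALE amenable to parallelism can be illustrated with the per-dimension density computation, which has no data dependencies between dimensions and therefore fans out naturally to separate cores. The code below is a simplified stand-in for SUBSCALE's actual dense-unit generation, not its implementation:

```python
from concurrent.futures import ThreadPoolExecutor

def dense_units_1d(values, width=1.0, min_pts=3):
    """Find dense 1-D intervals in one dimension via simple histogramming.
    This per-dimension step is independent of all other dimensions."""
    cells = {}
    for v in values:
        c = int(v // width)
        cells[c] = cells.get(c, 0) + 1
    return sorted(c for c, n in cells.items() if n >= min_pts)

def parallel_dense_units(data, workers=4):
    """Process every dimension concurrently; map() preserves dimension order."""
    columns = list(zip(*data))  # one tuple of values per dimension
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(dense_units_1d, columns))

data = [(0.1, 5.2), (0.3, 5.4), (0.2, 9.9), (4.0, 5.1), (4.1, 7.7), (0.4, 5.3)]
units = parallel_dense_units(data)
```

For CPU-bound work at scale, a process pool (or the GPU path mentioned above) would replace the thread pool; the point here is only that the per-dimension tasks are embarrassingly parallel.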
We present a two-dimensional (2D) planar chromatographic separation of estrogenic active compounds on RP-18 W (Merck, 1.14296) phase. A mixture of 8 substances was separated using a solvent mix consisting of hexane, ethyl acetate, and acetone (55:15:10, v/v) in the first direction and of acetone and water (15:10, v/v) in the second direction. Separation was performed on an RP-18 W plate over a distance of 70 mm. This 2D separation method can be used to quantify 17α-ethinylestradiol (EE2) in an effect-directed analysis, using the yeast strain Saccharomyces cerevisiae BJ3505. The test strain (according to McDonnell) contains the estrogen receptor. Its activation by estrogen active compounds is measured by inducing the reporter gene lacZ, which encodes the enzyme β-galactosidase. This enzyme activity is determined on the plate by using the fluorescent substrate MUG (4-methylumbelliferyl-β-d-galactopyranoside).
We present a two-dimensional (2D) planar chromatographic separation method for phytoestrogenic active compounds on RP-18 W (Merck, 1.14296) phase. It could be shown that an ethanolic extract of liquorice (Glycyrrhiza glabra) roots contains four phytoestrogenic active compounds. In the first direction, a solvent mix of hexane, ethyl acetate, and acetone (45:15:10, v/v) was used; in the second direction, a mix of acetone and water (15:10, v/v). After separation, a modified yeast estrogen screen (YES) test was applied, using the yeast strain Saccharomyces cerevisiae BJ3505. The test strain (according to McDonnell) contains the estrogen receptor. Its activation by estrogen active compounds is measured by inducing the reporter gene lacZ, which encodes the enzyme β-galactosidase. This enzyme activity is determined on the plate by using the fluorescent substrate MUG (4-methylumbelliferyl-β-d-galactopyranoside). The enzyme can also hydrolyse X-β-Gal (5-bromo-4-chloro-3-indoxyl-β-d-galactopyranoside) into β-galactose and 5-bromo-4-chloro-3-indoxyl. The indoxyl compound is oxidized by oxygen, forming the deep-blue dye 5,5β-dibromo-4,4β-dichloro-indigo, which allows phytoestrogenic activity to be detected more specifically in the presence of natively fluorescing compounds.
eLetter on the article "How hair can reveal a history" by Hanae Armitage & Nala Rogers, published in Science, Vol. 351, Issue 6278, p. 1134 (doi.org/10.1126/science.351.6278.1134)
eLetter on the article "Hybrid EEG/EOG-based brain/neural hand exoskeleton restores fully independent daily living activities after quadriplegia" by Surjo R. Soekadar et al., published in Science Robotics, Vol. 1, No. 1 (DOI: 10.1126/scirobotics.aag3296)
The need to measure basic aerosol parameters has increased dramatically in the last decade. This is due mainly to their harmful effect on the environment and on public health. Legislation requires that particle emissions and ambient levels, workplace particle concentrations and exposure to them are measured to confirm that the defined limits are met and the public is not exposed to harmful concentrations of aerosols.
In this paper, we show that a model-free approach to learning behaviors in joint space can be successfully used to utilize the toes of a humanoid robot. Keeping the approach model-free makes it applicable to any kind of humanoid robot, or any robot in general. Here, we focus on the benefit for robots with toes, which is otherwise difficult to exploit. The task was to learn different kick behaviors on simulated Nao robots with toes in the RoboCup 3D soccer simulator. As a result, the robot learned to step onto its toes for a kick that performs 30% better than the same kick learned without toes.
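Model-free learning in joint space can be as simple as stochastic hill climbing directly on the behavior's parameters, with the simulator providing the reward. The sketch below replaces the simulator with a toy fitness function; the parameter count, fitness shape and optimizer are illustrative assumptions, not the learning setup actually used:

```python
import random

def kick_quality(params):
    """Stand-in fitness: in the real setup, the reward would come from
    running the kick in the RoboCup 3D simulator. Peak at an arbitrary optimum."""
    optimum = [0.6, -0.3, 0.9]
    return -sum((p - o) ** 2 for p, o in zip(params, optimum))

def learn(n_iters=500, step=0.1, seed=1):
    """Model-free hill climbing on joint-space parameters: perturb, evaluate,
    keep the candidate only if it scores better."""
    rng = random.Random(seed)
    best = [rng.uniform(-1, 1) for _ in range(3)]
    best_q = kick_quality(best)
    for _ in range(n_iters):
        cand = [p + rng.gauss(0, step) for p in best]
        q = kick_quality(cand)
        if q > best_q:
            best, best_q = cand, q
    return best, best_q

params, quality = learn()
```

Because the optimizer only ever queries the reward, nothing about the robot's kinematics (toes included) needs to be modeled, which is exactly what makes the approach transferable across robot morphologies.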
Battery degradation is a complex physicochemical process that strongly depends on operating conditions. We present a model-based analysis of lithium-ion battery degradation in a stationary photovoltaic battery system. We use a multi-scale multi-physics model of a graphite/lithium iron phosphate (LiFePO4, LFP) cell including solid electrolyte interphase (SEI) formation. The cell-level model is dynamically coupled to a system-level model consisting of photovoltaics (PV), inverter, load, grid interaction, and energy management system, fed with historic weather data. Simulations are carried out for two load scenarios, a single-family house and an office tract, over annual operation cycles with one-minute time resolution. As a key result, we show that the charging process causes a peak in degradation rate due to electrochemical charge overpotentials. The main drivers of cell ageing are therefore not only a high state of charge (SOC), but the charging process leading towards a high SOC. We also show that the load situation not only influences system parameters like self-sufficiency and self-consumption, but also has a significant impact on battery ageing. We assess a reduced charge cut-off voltage as an ageing-mitigation strategy.
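The key finding, that the charging process itself drives degradation more than a high SOC alone, can be illustrated with a toy side-reaction rate whose overpotential term grows with the charging current. All functional forms and parameter values below are invented for illustration and are not taken from the multi-scale model:

```python
import math

def sei_rate(soc, charging_current, k0=1e-7, alpha=2.0, beta=5.0):
    """Illustrative SEI side-reaction rate: grows with state of charge and,
    more strongly, with the charge overpotential (here proxied by current)."""
    return k0 * math.exp(alpha * soc) * math.exp(beta * max(charging_current, 0.0))

def capacity_fade(profile, dt=60.0):
    """Integrate the side-reaction rate over a (soc, current) profile
    sampled at one-minute resolution."""
    return sum(sei_rate(soc, cur) * dt for soc, cur in profile)

# One hour resting at high SOC vs. one hour actively charging toward it:
rest = [(0.9, 0.0)] * 60
charge = [(0.5 + 0.4 * i / 59, 0.5) for i in range(60)]
```

Even though the resting profile sits at the higher SOC throughout, the overpotential term makes the charging hour age the toy cell more, mirroring the degradation-rate peak reported above.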
The direct methanol fuel cell (DMFC) is a promising option for backup power systems and for the power supply of portable devices. However, from the modeling point of view, liquid-feed DMFCs are challenging systems due to the complex electrochemistry, the inherent two-phase transport and the effect of methanol crossover. In this paper, we present a physical 1D cell model describing the relevant processes for DMFC performance, ranging from electrochemistry on the surface of the catalyst up to transport on the cell level. A two-phase flow model is implemented describing the transport in the gas diffusion layer and catalyst layer on the anode side. Electrochemistry is described by elementary steps for the reactions occurring at the anode and cathode, including adsorbed intermediate species on the platinum and ruthenium surfaces. Furthermore, a detailed membrane model including methanol crossover is employed. The model is validated using polarization curves, methanol crossover measurements and impedance spectra. It permits the analysis of both steady-state and transient behavior with a high level of predictive capability. Steady-state simulations are used to investigate the open-circuit voltage as well as the overpotentials of the anode, cathode and electrolyte. Finally, the transient behavior after current interruption is studied in detail.
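The shape of such a model's steady-state output can be pictured with a zero-dimensional polarization sketch: cell voltage as open-circuit voltage minus anode and cathode overpotentials (with crossover acting as a parasitic cathode current) and an ohmic loss. This is a drastic simplification of the 1D elementary-kinetics model above, and every parameter value is illustrative, not fitted:

```python
import math

F, R, T = 96485.0, 8.314, 333.0  # Faraday constant, gas constant, 60 °C

def cell_voltage(j, ocv=0.8, j0_a=1e-3, j0_c=1e-2, r_ohm=0.25, j_cross=0.02):
    """Toy DMFC polarization (current density j in A/cm^2). Methanol
    crossover adds a parasitic current j_cross at the cathode."""
    eta_a = (R * T / (0.5 * F)) * math.asinh(j / (2.0 * j0_a))
    eta_c = (R * T / (0.5 * F)) * math.asinh((j + j_cross) / (2.0 * j0_c))
    return ocv - eta_a - eta_c - r_ohm * j

# Polarization curve: voltage drops monotonically with current density.
curve = [(j / 100.0, cell_voltage(j / 100.0)) for j in range(1, 40)]
```

Even this toy form shows why crossover depresses the open-circuit region: the cathode already carries the parasitic current at zero external load.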
This book offers a compendium of best practices in game dynamics. It covers a wide range of dynamic game elements, from player behavior and artificial intelligence to procedural content generation. Such dynamics make virtual worlds more lively and realistic, and they also create the potential for moments of amazement and surprise. In many cases, game dynamics are driven by a combination of random seeds, player records and procedural algorithms. Games can even incorporate the player’s real-world behavior to create dynamic responses. The best practices illustrate how dynamic elements improve the user experience and increase the replay value.
The book draws upon interdisciplinary approaches; researchers and practitioners from Game Studies, Computer Science, Human-Computer Interaction, Psychology and other disciplines will find this book to be an exceptional resource of both creative inspiration and hands-on process knowledge.
Defining Recrutainment: A Model and a Survey on the Gamification of Recruiting and Human Resources
(2017)
Recrutainment is a hybrid word combining recruiting and entertainment. It describes the combination of human resources activities and gamification: concepts and methods from game design are now used to assess and select future employees. Beyond this area, recrutainment is also applied to internal processes like professional development, or even to marketing campaigns. This paper’s contribution has four components: (1) we provide a conceptual background, leading to a more precise definition of recrutainment; (2) we develop a new model for analyzing solutions in recrutainment; (3) we present a corpus of 42 applications and use the new model to assess their strengths and potentials; (4) we provide a bird’s-eye view of the state of the art in recrutainment and show the current weighting of gamification and recruiting aspects.
Applications that help us maintain focus on our work are called “Zenware” (from concentration and Zen). While form factors, use cases and functionality vary, all these applications have a common goal: creating uninterrupted, focused attention on the task at hand. The rise of such tools exemplifies the users’ desire to control their attention in a context of omnipresent distraction. In expert interviews, we investigate approaches to attention management at the workplace of knowledge workers. To gain a broad understanding, we use judgement sampling in interviews with experts from several disciplines. We especially explore how focus and flow can be stimulated. Our contribution has four components: a brief overview of the state of the art (1), a presentation of the results (2), strategies for coping with digital distractions and design guidelines for future Zenware (3), and an outlook on the overall potential in digital work environments (4).
Gamifying rehabilitation is an efficient way to improve motivation and exercise frequency. However, between flow theory, self-determination theory and Bartle's player types, there is much room for speculation regarding the mechanics required for successful gamification, which in turn leads to increased motivation. For our study, we selected a gamified solution for motion training (an exergame) whose playful design elements are extremely simple. The contribution is threefold: we show best practices from the state of the art, present a study analyzing the effects of simple gamification mechanics on a quantitative and on a qualitative level, and discuss strategies for playful design in therapeutic movement games.
Additive manufacturing processes have evolved rapidly in recent years and now offer a wide range of manufacturing technologies and workable materials, ranging from plastics and metals to paper and even polymer-plaster composites. Due to the layer-by-layer build-up of components, additive processes have, in comparison with conventional manufacturing processes, the advantage of design freedom, that is, the simple implementation of complex geometries. Moreover, additive processes offer the advantage of reduced resource consumption, since essentially only the material required for the actual component is consumed and no waste in the form of chips is produced. In order to exploit these advantages, the potentials of additive manufacturing and the requirements of sustainable design must already be observed in the product development process. The components and products must therefore be designed so that as little build and support material as possible is required for generative production, and thus few resources are consumed. In addition, all steps of the additive manufacturing process must be properly considered, including post-processing. This allows components to be designed so that, for instance, the effort for removing the support structure is considerably reduced, which leads to a significant reduction in manufacturing time and thus energy consumption. The implementation of these potentials in product development can be demonstrated by means of a multi-stage model. A case study shows how this model is applied in the training of Master's students in the field of product development. In a workshop, the students work as a group on the task of developing a miniature racing car under the rules of sustainable design, in compliance with the boundary conditions of additive manufacturing. In this case, Fused Deposition Modelling (FDM) using plastics as a building material is applied.
The results show how the students have dealt with the different requirements and how they have implemented them in product development and in the subsequent additive manufacturing.
The present-day methods of numerical simulation offer a great variety of options for optimizing metal forming processes. Although it is possible to simulate complex forming processes, the results are typically available only as 2D projections on screens. Some forming processes have reached a level of complexity beyond spatial intuition, which makes it necessary to use physical 3D representations to develop a deeper understanding of the material flow, microstructural processes, process and design limits, or to design the required tooling. Physical 3D models can be produced in a short amount of time using 3D printing and indexed with a wide range of colors. In this paper, the additive manufacturing of 3D color models based on simulation results is explored by means of examples from metal forming. Different 3D-printing processes are compared on the basis of quality as well as technical and economic criteria. Further examples from the fields of joining by upset-bulging of tubes and of microstructure simulation are also analyzed. This paper discusses the possibilities offered by the rapid progress and wide availability of 3D printers for the design and optimization of complex metal forming processes.
Architecture models are an essential component of the development process and enable a physical representation of virtual designs. In addition to the conventional methods of model production using the machining of models made of wood, metal, plastic or glass, a number of additive manufacturing processes are now available. These new processes enable the additive manufacturing of architectural models directly from CAAD or BIM data. However, the boundary conditions applicable to the ability to manufacture models with additive manufacturing processes must also be considered. Such conditions include the minimum wall thickness, which depends on the applied additive manufacturing process and the materials used. Moreover, the need for the removal of support structures after the additive manufacturing process must also be considered. In general, a change in the scale of these models is only possible with very high effort. In order to allow these restrictions to be adequately incorporated into the CAAD model, this contribution develops a parametrized CAAD model that allows such boundary conditions to be modified and adapted while complying with the scale. The usability of this new method is illustrated and explained in detail in a case study. In addition, this article addresses the applied additive manufacturing processes, including subsequent post-processing.
Implementation of lightweight design in the product development process of unmanned aerial vehicles
(2017)
The development and manufacturing of unmanned aerial vehicles (UAVs) require a multitude of design rules. Additive manufacturing (AM) processes provide a number of significant advantages over conventional production methods, particularly for implementing requirements with regard to lightweight construction and sustainability. A new, promising approach is presented in which very light structural elements are combined with a ribbed construction and covered with an attached foil. This contribution develops and presents a development process that is based on various development cycles. Such cycles differ in their effort and scope within the overall development, and may comprise only one part of the development process, or the entire development process. The applicability of this development process is demonstrated within the framework of a comprehensive case study. The aim is to develop an additively manufactured product that is as light as possible in the form of a UAV, along with a sustainable manufacturing process for it. Finally, the results of this case study are analyzed with regard to the improvement of lightweight construction.
Background: The electrical field (E-field) of the biventricular (BV) stimulation is important for the success of cardiac resynchronization therapy (CRT) in patients with cardiac insufficiency and widened QRS complex. The 3D modeling allows the simulation of CRT and high frequency (HF) ablation.
Purpose: The aim of the study was to model different pacing and ablation electrodes and to integrate them into a heart model for the static and dynamic simulation of atrial and BV stimulation and high frequency (HF) ablation in atrial fibrillation (AF).
Methods: The modeling and simulation was carried out using the electromagnetic simulation software CST (CST Darmstadt). Five multipolar left ventricular (LV) electrodes, one epicardial LV electrode, four bipolar right atrial (RA) electrodes, two right ventricular (RV) electrodes and one HF ablation catheter were modeled. Selected electrodes were integrated into the Offenburg heart rhythm model for the electrical field simulation. The simulation of an AV node ablation at CRT was performed with RA, RV and LV electrodes and integrated ablation catheter with an 8 mm gold tip.
Results: The right atrial stimulation was performed with an amplitude of 1.5 V and a pulse width of 0.5 ms. The far-field potentials generated by the atrial stimulation were perceived by the right and left ventricular electrodes. The far-field potential at a distance of 1 mm from the right ventricular electrode tip was 36.1 mV. The far-field potential at a distance of 1 mm from the left ventricular electrode tip was 37.1 mV. The RV and LV stimulation were performed simultaneously at an amplitude of 3 V at the LV electrode and 1 V at the RV electrode, each with a pulse width of 0.5 ms. The far-field potentials generated by the BV stimulation could be perceived by the RA electrode. The far-field potential at the RA electrode tip was 32.86 mV. AV node ablation was simulated with an applied power of 5 W at 420 kHz and 10 W at 500 kHz at the distal 8 mm ablation electrode.
Conclusions: Virtual heart and electrode models as well as the simulations of electrical fields and temperature profiles allow the static and dynamic simulation of atrial synchronous BV stimulation and HF ablation at AF. The 3D simulation of the electrical field and temperature profile may be used to optimize the CRT and AF ablation.
Process engineering focuses on the design, operation, control and optimization of chemical, physical and biological processes and has applications in many industries. Process Intensification is the key development approach in modern process engineering. The proposed Advanced Innovation Design Approach (AIDA) combines a holistic innovation process with the systematic analysis and problem-solving tools of the theory of inventive problem solving (TRIZ). The present paper conceptualizes the application of AIDA in the field of process engineering, especially in combination with Process Intensification. It defines the AIDA innovation algorithm for process engineering and describes process mapping, problem ranking, and concept design techniques. The approach has been validated in several industrial case studies. The presented research work is part of the European project “Intensified by Design® platform for the intensification of processes involving solids handling”.
The collection of selected papers of the TRIZ Future Conference 2017 is available in open access and is included in the Innovator, the journal of the European TRIZ Association.
The growing complexity of RF front-ends, which support carrier aggregation and a growing number of frequency bands, leads to tightened nonlinearity requirements for all sub-components. The generation of third-order intermodulation products (IMD3) is a typical problem caused by the non-linearity of SAW devices. In the present work, we investigate temperature-compensated (TC) SAW devices on Lithium Niobate-rot128YX. An accurate FEM simulation model [1] is employed, which allows a better understanding of the origin of nonlinearities in such acoustic devices.
Elastic constants of components are usually determined by tensile tests in combination with ultrasonic experiments. However, these properties may change due to e.g. mechanical treatments or service conditions during their lifetime. Knowledge of the actual material parameters is key to the determination of quantities like residual stresses present in the medium. In this work the acoustic nonlinearity parameter (ANP) for surface acoustic waves is examined through the derivation of an evolution equation for the amplitude of the second harmonic. Given a certain depth profile of the third-order elastic constants, the dependence of the ANP with respect to the input frequency is determined and on the basis of these results, an appropriate inversion method is developed. This method is intended for the extraction of the depth dependence of the third-order elastic constants of the material from second-harmonic generation and guided wave mixing experiments, assuming that the change in the linear Rayleigh wave velocity is small. The latter assumption is supported by a 3D-FEM model study of a medium with randomly distributed microcracks as well as theoretical works on this topic in the literature.
Spectral analysis of signal averaging electrocardiography in atrial and ventricular tachyarrhythmias
(2017)
Background: Targeting complex fractionated atrial electrograms detected by automated algorithms during ablation of persistent atrial fibrillation has produced conflicting outcomes in previous electrophysiological studies. The aim of the investigation was to evaluate atrial and ventricular high frequency fractionated electrical signals with signal averaging technique.
Methods: Signal-averaged electrocardiography (ECG) is a high-resolution ECG technique that eliminates interference and noise in the recorded ECG. The algorithm uses an automatic ECG trigger function for signal-averaged transthoracic, transesophageal and intracardiac ECG signals with novel LabVIEW software (National Instruments, Austin, Texas, USA). For spectral analysis we used the fast Fourier transform in combination with spectro-temporal mapping and wavelet transformation to obtain detailed information about the frequency and intensity of high-frequency atrial and ventricular signals.
Results: Spectral-temporal mapping and wavelet transformation of the signal averaged ECG allowed the evaluation of high frequency fractionated atrial signals in patients with atrial fibrillation and high frequency ventricular signals in patients with ventricular tachycardia. The analysis in the time domain evaluated fractionated atrial signals at the end of the signal averaged P-wave and fractionated ventricular signals at the end of the QRS complex. The analysis in the frequency domain evaluated high frequency fractionated atrial signals during the P-wave and high frequency fractionated ventricular signals during QRS complex. The combination of analysis in the time and frequency domain allowed the evaluation of fractionated signals during atrial and ventricular conduction.
Conclusions: Spectral analysis of signal-averaged electrocardiography with novel LabVIEW software can be utilized to evaluate atrial and ventricular conduction delays in patients with atrial fibrillation and ventricular tachycardia. Complex fractionated atrial electrograms may be useful parameters to evaluate electrical cardiac arrhythmogenic signals in atrial fibrillation ablation.
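The signal-averaging and spectral-analysis steps described above can be sketched in a few lines of numerical code. The following Python fragment is an illustrative sketch only (the study used LabVIEW, not this code): a weak high-frequency component, standing in for fractionated signals, is buried in noisy repetitions of a simulated beat, recovered by averaging, and exposed by an FFT power spectrum. All signal parameters are assumptions.

```python
import numpy as np

fs = 1000.0                 # sampling rate in Hz (assumed)
t = np.arange(0, 0.3, 1 / fs)

rng = np.random.default_rng(0)
# Simulated beat: a 10 Hz "P-wave" plus a weak 150 Hz fractionated component,
# buried in noise; 200 noisy repetitions stand in for the triggered beats.
beat = np.sin(2 * np.pi * 10 * t) + 0.05 * np.sin(2 * np.pi * 150 * t)
beats = beat + 0.5 * rng.standard_normal((200, t.size))

averaged = beats.mean(axis=0)          # signal averaging suppresses noise
spectrum = np.abs(np.fft.rfft(averaged)) ** 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)

# After averaging, the 150 Hz component dominates the high-frequency band.
hf_band = (freqs > 100) & (freqs < 200)
peak_freq = freqs[hf_band][np.argmax(spectrum[hf_band])]
print(peak_freq)
```

Averaging N repetitions reduces uncorrelated noise by a factor of √N, which is what makes the low-amplitude high-frequency content detectable in the spectrum at all.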
Heart rhythm model and simulation of electrophysiological studies and high-frequency ablations
(2017)
Background: The target of the study was to create an accurate anatomic CAD heart rhythm model (HRM), and to show its usefulness for cardiac electrophysiological studies and high-frequency ablations. The method is gentler on patients' health and has the potential to replace clinical studies due to its high efficiency regarding time and costs.
Methods: All natural heart components of the new HRM were based on MRI records, which guaranteed electronic functionality. The software CST was used for the construction, while CST’s material library assured genuine tissue properties. It should be applicable to simulate different heart rhythm diseases as well as various diffusions of electromagnetic fields, caused by electrophysiological conduction, inside the heart tissue.
Results: It was possible to simulate sinus rhythm and fourteen different heart rhythm disturbances with different atrial and ventricular conduction delays. The simulated biological excitation of the healthy and diseased HRM was plotted by simulated electrodes of a four-polar right atrial catheter, a six-polar His bundle catheter, a ten-polar coronary sinus catheter, a four-polar ablation catheter and an eight-polar transesophageal left cardiac catheter. Accordingly, six variants were rebuilt and inserted into the anatomic HRM in order to establish heart catheters for ECG monitoring and HF ablation. The HF ablation catheters made it possible to simulate various types of heart rhythm disturbance ablations with different HF ablation catheters and also provided a functional visualisation of tissue heating. The use of a tetrahedral meshing HRM made it possible to store the results faster and with considerable space savings. The smart meshing function reduced unnecessarily high resolutions for coarse structures.
Conclusions: The new HRM for EPS simulation may be additionally useful for the simulation of heart rhythm disturbances, cardiac pacing and HF ablation, and for locating and identifying complex fractionated signals within the atrium during atrial fibrillation HF ablation.
Background: Cardiac resynchronization therapy (CRT) with biventricular (BV) pacing is an established therapy for heart failure (HF) patients (P) with sinus rhythm, reduced left ventricular (LV) ejection fraction (EF) and electrical ventricular desynchronization. The aim of the study was to evaluate electrical interventricular delay (IVD) and left ventricular delay (LVD) in right ventricular (RV) pacemaker pacing before upgrading to CRT BV pacing.
Methods: HF P (n=11, age 69.0 ± 7.9 years, 1 female, 10 males) with DDD pacemaker (n=10), DDD defibrillator (n=1), RV pacing, New York Heart Association (NYHA) class 3.0 ± 0.2 and 24.5 ± 4.9 % LVEF were measured by surface ECG and transesophageal bipolar LV ECG before upgrading to CRT defibrillator (n=8) and CRT pacemaker (n=3). IVD was measured between onset of QRS in the surface ECG and onset of LV signal in the transesophageal ECG. LVD was measured between onset and offset of LV signal in the transesophageal ECG. CRT atrioventricular (AV) and BV pacing delay were optimized by impedance cardiography.
Results: Interventricular and intraventricular desynchronization in RV pacemaker pacing were 228.2 ± 44.8 ms QRS duration, 86.5 ± 32.8 ms IVD, 94.4 ± 23.8 ms LVD, 2.6 ± 0.8 QRS-IVD ratio with correlation between IVD and QRS-IVD ratio (r=-0.668, P=0.0248), and 2.3 ± 0.7 QRS-LVD ratio. The LVEF-IVD ratio was 0.3 ± 0.1 with correlation between IVD and LVEF-IVD ratio (r=-0.8063, P=0.00272) and between QRS duration and LVEF-IVD ratio (r=-0.7251, P=0.01157). Optimal sensing and pacing AV delays were 128.3 ± 24.8 ms after atrial sensing (n=6) and 173.3 ± 40.4 ms after atrial pacing (n=3). The optimal BV pacing delay was -4.3 ± 11.3 ms between LV and RV pacing (n=7). During 30.4 ± 29.6 months of CRT follow-up, the NYHA class improved from 3.1 ± 0.2 to 2.2 ± 0.3.
Conclusions: Transesophageal electrical IVD and LVD in RV pacemaker pacing may be additionally useful ventricular desynchronization parameters to improve patient selection for upgrading RV pacemaker pacing to CRT BV pacing.
Background: The electrical field (E-field) of the biventricular (BV) stimulation is essential for the success of cardiac resynchronization therapy (CRT) in patients with cardiac insufficiency and widened QRS complex. 3D modeling allows the simulation of CRT and high frequency (HF) ablation.
Purpose: The aim of the study was to model different pacing and ablation electrodes and to integrate them into a heart model for the static and dynamic simulation of BV stimulation and HF ablation in atrial fibrillation (AF).
Methods: The modeling and simulation was carried out using the electromagnetic simulation software. Five multipolar left ventricular (LV) electrodes, one epicardial LV electrode, four bipolar right atrial (RA) electrodes, two right ventricular (RV) electrodes and one HF ablation catheter were modeled. Different models of electrodes were integrated into a heart rhythm model for the electrical field simulation (fig.1). The simulation of an AV node ablation at CRT was performed with RA, RV and LV electrodes and integrated ablation catheter with an 8 mm gold tip.
Results: The RV and LV stimulation were performed simultaneously at an amplitude of 3 V at the LV electrode and 1 V at the RV electrode, each with a pulse width of 0.5 ms. The far-field potentials generated by the BV stimulation were perceived by the RA electrode. The far-field potential at the RA electrode tip was 32.86 mV. A far-field potential of 185.97 mV resulted at a distance of 1 mm from the RA electrode tip. AV node ablation was simulated with an applied power of 5 W at 420 kHz at the distal 8 mm ablation electrode. The temperature after 5 s ablation time was 103.87 °C at the catheter tip, 44.17 °C from the catheter tip in the myocardium and 37.61 °C at a distance of 2 mm. After 10 s, the temperatures at the three measuring points described above were 107.33 °C, 50.87 °C and 40.05 °C, and after 15 s they were 118.42 °C, 55.75 °C and 42.13 °C.
Conclusions: Virtual heart and electrode models as well as the simulations of electrical fields and temperature profiles allow the static and dynamic simulation of atrial synchronous BV stimulation and HF ablation at AF. The 3D simulation of the electrical field and temperature profile may be used to optimize the CRT and AF ablation.
Electrochemical impedance spectroscopy (EIS) is a widely-used diagnostic technique to characterize electrochemical processes. It is based on the dynamic analysis of two electrical observables, that is, current and voltage. Electrochemical cells with gaseous reactants or products (e.g., fuel cells, metal/air cells, electrolyzers) offer an additional observable, that is, the gas pressure. The dynamic coupling of current and/or voltage with gas pressure gives rise to a number of additional impedance definitions, for which we have introduced the term electrochemical pressure impedance spectroscopy (EPIS) [1,2]. EPIS shows a particular sensitivity towards transport processes of gas-phase or dissolved species, in particular, diffusion coefficients and transport pathway lengths. It is as such complementary to standard EIS, which is mainly sensitive towards electrochemical processes. This sensitivity can be exploited for model parameterization and validation. A general analysis of EPIS is presented, which shows the necessity of model-based interpretation of the complex EPIS shapes in the Nyquist plot (cf. Figure). We then present EPIS simulations for two different electrochemical cells: (1) a sodium/oxygen battery cell and (2) a hydrogen/air fuel cell. We use 1D or 2D electrochemical and transport models to simulate current excitation/pressure detection or pressure excitation/voltage detection. The results are compared to first EPIS experimental data available in literature [2,3].
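The coupling of an electrical excitation to a pressure response can be illustrated with a small numerical sketch. The following Python fragment is not the cited EPIS model: it assumes a hypothetical first-order transport lag between cell current and gas pressure, and extracts a single complex impedance point by demodulating both signals at the excitation frequency. All parameter values are illustrative assumptions.

```python
import numpy as np

fs, f_exc = 100.0, 0.5             # sampling and excitation frequency in Hz
t = np.arange(0, 60, 1 / fs)       # 30 full excitation periods
tau = 1.0                          # assumed transport time constant in s

# Sinusoidal current excitation and the pressure response of a
# first-order system 1 / (1 + j*w*tau): reduced gain, lagging phase.
w = 2 * np.pi * f_exc
current = np.sin(w * t)
gain = 1 / np.hypot(1, w * tau)
phase = -np.arctan(w * tau)
pressure = gain * np.sin(w * t + phase)

# Complex demodulation: project both signals onto exp(-j*w*t)
ref = np.exp(-1j * w * t)
I = 2 * np.mean(current * ref)
P = 2 * np.mean(pressure * ref)
Z_epis = P / I                     # one point of the pressure impedance

print(abs(Z_epis), np.degrees(np.angle(Z_epis)))
```

Sweeping the excitation frequency and repeating the demodulation yields the full spectrum whose shape in the Nyquist plot would then be interpreted with the transport models described above.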
Simulation-based degradation assessment of lithium-ion batteries in a hybrid electric vehicle
(2017)
The insufficient lifetime of lithium-ion batteries is one of the major cost drivers for mobile applications. The battery pack in vehicles is one of the most expensive single components and must practically be excluded from premature replacement (i.e., replacement before the life span of the other components ends). Battery degradation is a complex physicochemical process that strongly depends on operating conditions and environment. We present a simulation-based analysis of lithium-ion battery degradation during operation with a standard PHEV test cycle. We use detailed multiphysics (extended Newman-type) cell models that allow the assessment of local electrochemical potential, species and temperature distributions as driving forces for degradation, including solid electrolyte interphase (SEI) formation [1]. Fig. 1 shows an exemplary test cycle and the predicted resulting spatially-averaged SEI formation rate. We apply a time-upscaling approach to extrapolate the degradation analysis over long time scales, keeping physical accuracy while allowing end-of-life assessment [2]. Results are presented for lithium-ion battery cells with graphite/LFP chemistry. The behavior of these cells in terms of degradation propensity, performance, state of charge and other internal states is predicted during long-term cycling. State of health (SOH) is quantified as capacity fade and internal resistance increase as a function of operation time.
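The idea of extrapolating per-cycle degradation over long time scales can be shown with a deliberately simplified sketch. The following Python fragment is not the cited multiphysics model: it assumes a diffusion-limited SEI fade law growing with the square root of charge throughput, with made-up parameters, and evaluates when the common 80% end-of-life criterion is reached.

```python
import numpy as np

q_per_cycle = 1.0          # normalized charge throughput per test cycle
k_sei = 2e-3               # assumed fade coefficient per sqrt(cycle)

def capacity_after(n_cycles):
    """Remaining relative capacity after n_cycles, sqrt-law SEI fade."""
    return 1.0 - k_sei * np.sqrt(n_cycles * q_per_cycle)

# End-of-life check: 80 % remaining capacity is a common criterion.
cycles = np.arange(0, 20001, 100)
soh = capacity_after(cycles)
eol = cycles[np.argmax(soh <= 0.8)]
print(eol)   # first simulated cycle count at or below 80 % SOH
```

The time-upscaling approach in the paper serves exactly this purpose: the expensive electrochemical simulation is evaluated on representative cycles, and the resulting degradation rates are propagated over thousands of cycles instead of simulating each one.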
Practical bottlenecks associated with the commercialization of lithium-air cells include capacity limitation and low cycling efficiency. The origin of such losses can be traced to complex electrochemical side reactions and reactant mass transport losses [1]. The efforts to minimize such losses include the exploration of various electrolytes with additives [2], and cell component geometry and material design. Given the wide range of options for such materials, it is almost impractical to experimentally set up and characterize all those cells. Consequently, modeling and simulation studies are efficient alternatives to analyze spatially and temporally resolved cell behavior for various combinations of materials [3]. In this study, with the help of a two-dimensional multiphysics model, we have focused on the effect of electrode and electrolyte interaction (electrochemistry), choice of electrolyte (species transport), and electrode geometry (electrode design) on the performance of a lithium-air button cell. Figure 1a shows the schematics of the 2D axisymmetric computational domain. A comparative analysis of five different electrolytes was performed while focusing on the 2D distribution of local current density and the concentration of electrochemically active species in the cell, that is, O2 and Li+. Using two different cathode configurations, namely a flooded electrode and a gas diffusion electrode (GDE) [4], at different cathode thicknesses, the effect of cell geometry and electrolyte saturation on cell performance was explored. Further, a detailed discussion on electrode volume utilization (cf. Figure 1b) is presented via changes in the active volume of the cathode that produces 90% of the total current with the cell current density for different combinations of electrolyte saturations and cathode thicknesses.
The building sector is one of the main consumers of energy. Therefore, heating and cooling concepts using renewable energy sources are becoming increasingly important. For this purpose, low-temperature systems such as thermo-active building systems (TABS) are particularly suitable. This paper presents results of the use of a novel adaptive and predictive computation method based on multiple linear regression (AMLR) for the control of TABS in a passive seminar building. Detailed comparisons are shown between the standard TABS and AMLR strategies over a period of nine months each. In addition to the reduction of thermal energy use by approx. 26% and a significant reduction of the TABS pump operation time, this paper focuses on investment savings in a passive seminar building through the use of the AMLR strategy. This includes the reduction of the peak power of the chilled beams (auxiliary system) as well as a simplification of the TABS hydronic circuit and the saving of an external temperature sensor. The AMLR proves its practicality by learning from historical building operation, by dealing with forecasting errors, and by being easy to integrate into a building automation system.
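The regression core behind an AMLR-type predictive controller can be sketched in a few lines. The following Python fragment is an illustrative assumption, not the paper's model: the thermal load is regressed on weather features from (here synthetic) historical operation and then predicted from a forecast; adaptive use would simply refit the coefficients as new operating data arrive.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 24 * 30                                   # one month of hourly history
t_amb = 10 + 8 * rng.random(n)                # ambient temperature in °C
solar = 500 * rng.random(n)                   # solar irradiance in W/m^2

# Synthetic "true" building response used to generate training data
load = 40 - 1.5 * t_amb + 0.02 * solar + rng.normal(0, 0.5, n)

# Multiple linear regression: load ~ bias + t_amb + solar
X = np.column_stack([np.ones(n), t_amb, solar])
coef, *_ = np.linalg.lstsq(X, load, rcond=None)

# Predict tomorrow's load from a weather forecast [bias, °C, W/m^2]
forecast = np.array([1.0, 14.0, 300.0])
print(forecast @ coef)
```

Because the model is a plain least-squares fit, refitting on a rolling window of operating data is cheap, which is what makes such a method practical inside a building automation system.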
Three real-lab trigeneration microgrids are investigated in non-residential environments (educational, office/administrational, companies/production) with a special focus on domain-specific load characteristics. For accurate load forecasting on such a local level, a priori information on scheduled events has been combined with statistical insight from historical load data (capturing information on consumer behavior that is not explicitly known). The load forecasts are then used as data input for (predictive) energy management systems that are implemented in the trigeneration microgrids. In real-world applications, these energy management systems must especially be able to carry out a number of safety and maintenance operations on components such as the battery (e.g. gassing) or the CHP unit (e.g. regular test runs). Therefore, energy management systems should combine heuristics with advanced predictive optimization methods. To reduce the effort in IT infrastructure, the main and safety-relevant management process steps are carried out on site using a Smart & Local Energy Controller (SLEC), assisted by locally measured signals or operator-given information serving as default and external inputs for any advanced optimization. Heuristic aspects for the local fine adjustment of energy flows are presented.
Designing Authentic Emotions for Non-Human Characters. A Study Evaluating Virtual Affective Behavior
(2017)
While human emotions have been researched for decades, designing authentic emotional behavior for non-human characters has received less attention. However, virtual behavior not only affects game design, but also allows creating authentic avatars or robotic companions. After a discussion of methods to model and recognize emotions, we present three characters with a decreasing level of human features and describe how established design techniques can be adapted for such characters. In a study, 220 participants assessed these characters' emotional behavior, focusing on the emotion "anger". We want to determine how reliably users can recognize emotional behavior when characters increasingly neither look nor behave like humans. A secondary aim is to determine whether gender has an impact on the competence in emotion recognition. The findings indicate that there is an area of insecure attribution of virtual affective behavior not distant from but close to human behavior. We also found that, at least for anger, men and women assess emotional behavior equally well.
This work demonstrates the potentials of procedural content generation (PCG) for games, focusing on the generation of specific graphic props (reefs) in an explorer game. We briefly portray the state-of-the-art of PCG and compare various methods to create random patterns at runtime. Taking a step towards the game industry, we describe an actual game production and provide a detailed pseudocode implementation showing how Perlin or Simplex noise can be used efficiently. In a comparative study, we investigate two alternative implementations of a decisive game prop: once created traditionally by artists and once generated by procedural algorithms. 41 test subjects played both implementations. The analysis shows that PCG can create a user experience that is significantly more realistic and at the same time perceived as more aesthetically pleasing. In addition, the ever-changing nature of the procedurally generated environments is preferred with high significance, especially by players aged 45 and above.
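The paper's pseudocode is not reproduced here; as an illustration of the underlying idea, the following Python sketch implements 1D value noise with smoothstep interpolation, a simpler relative of the Perlin/Simplex noise named above. The hashing scheme and parameters are illustrative assumptions. The key property for PCG is shown at the bottom: the same seed always regenerates the same pattern, so props can be recreated at runtime without storing them.

```python
import math
import random

def value_noise(x, seed=42):
    """Smooth, deterministic pseudo-random value in [0, 1) at position x."""
    x0 = math.floor(x)
    frac = x - x0

    def lattice(i):
        # Deterministic "hash" of a lattice point into [0, 1)
        return random.Random(i * 2654435761 + seed).random()

    t = frac * frac * (3.0 - 2.0 * frac)          # smoothstep fade curve
    return lattice(x0) * (1.0 - t) + lattice(x0 + 1) * t

# Same seed -> same terrain profile: reproducibility is what lets a game
# regenerate identical procedural props on every run.
profile = [value_noise(x * 0.1) for x in range(50)]
assert profile == [value_noise(x * 0.1) for x in range(50)]
```

Gradient noise (Perlin) and Simplex noise refine this scheme by interpolating pseudo-random gradients instead of values, which removes visible grid artifacts at little extra cost.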
This chapter portrays the historical and mathematical background of dynamic and procedural content generation (PCG). We portray and compare various PCG methods and analyze which mathematical approach is suited for typical applications in game design. In the next step, a structural overview of games applying PCG as well as types of PCG is presented. As abundant PCG content can be overwhelming, we discuss context-aware adaptation as a way to adapt the challenge to individual players’ requirements. Finally, we take a brief look at the future of PCG.
We present the design outline of a context-aware interactive system for smart learning in the STEM curriculum (science, technology, engineering, and mathematics). It is based on a gameful design approach and enables "playful coached learning" (PCL): a learning process enriched by gamification but also close to the learner's activities and emotional setting. After a brief introduction on related work, we describe the technological setup, the integration of projected visual feedback and the use of object and motion recognition to interpret the learner's actions. We explain how this combination enables rapid feedback and why this is particularly important for correct habit formation in practical skills training. In a second step, we discuss gamification methods and analyze which are best suited for the PCL system. Finally, emotion recognition, a major element of the final PCL design not yet implemented, is briefly outlined.
The aim of the smart grid is to achieve a more efficient, distributed and secure supply of energy than the traditional power grid by using a bidirectional information flow between the grid agents (e.g. generator nodes, customers). One of the key optimization problems in the smart grid is to produce power among generator nodes with minimum cost while meeting the customer demand, known as the Economic Dispatch Problem (EDP). In recent years, many distributed approaches to solve the EDP have been proposed. However, protecting the privacy-sensitive data of individual generator nodes has been largely overlooked in the existing solutions. In this work, we show an attack against an existing auction-based EDP protocol considering a non-colluding semi-honest adversary. We also briefly introduce our approach to a practical privacy-preserving EDP solution as work in progress.
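For readers unfamiliar with the underlying optimization problem, the (non-private, centralized) EDP with quadratic generator costs has a well-known closed-form solution via the equal-incremental-cost condition. The Python sketch below ignores the privacy layer and generator limits entirely; the cost parameters are illustrative.

```python
def economic_dispatch(b, c, demand):
    """Dispatch minimizing total quadratic cost C_i(P) = a_i + b_i*P + c_i*P^2.

    At the optimum every unit runs at the same marginal cost lambda:
        b_i + 2*c_i*P_i = lam  =>  P_i = (lam - b_i) / (2*c_i),
    and lam follows from the power balance sum(P_i) = demand.
    """
    inv = [1.0 / (2.0 * ci) for ci in c]
    lam = (demand + sum(bi * s for bi, s in zip(b, inv))) / sum(inv)
    return [(lam - bi) * s for bi, s in zip(b, inv)]

# Two illustrative generators serving a 500 kW demand
P = economic_dispatch(b=[2.0, 3.0], c=[0.01, 0.02], demand=500.0)
print(P, sum(P))   # outputs meet demand exactly
```

The privacy problem arises precisely because solving this requires the cost coefficients (or bids derived from them) of every generator, which is the sensitive data the cited protocol is supposed to protect.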
For the RoboCup Soccer AdultSize League, the humanoid robot Sweaty uses a single fully convolutional neural network to detect and localize the ball, opponents and other features on the field of play. This neural network can be trained from scratch in a few hours and is able to perform in real time within the constraints of the computational resources available on the robot. The time it takes to process an image is approximately 11 ms. Balls and goal posts are recalled in 99% of all cases (94.5% for all objects), accompanied by a false detection rate of 1.2% (5.2% for all). The object detection and localization helped Sweaty to become a finalist at RoboCup 2017 in Nagoya.
One of the challenges in humanoid robotics is motion control. Interacting with humans requires impedance control algorithms, as well as tackling the problem of the closed kinematic chains which occur when both feet touch the ground. However, pure impedance control for totally autonomous robots is difficult to realize, as this algorithm needs very precise sensors for the force and speed of the actuated parts, as well as very high sampling rates for the controller input signals. Both requirements lead to a complex and heavyweight design, resulting in heavy machines that are unusable in RoboCup Soccer competitions.
A lightweight motor controller was developed that can be used for admittance and impedance control as well as for model predictive control algorithms to further improve the gait of the robot.
Comparing anomalies and exceptions to multilateral dysfunction across a number of spheres of world politics, the book chapter explores pathways through and beyond gridlock in trade. It provides a vital new perspective on world politics as well as a practical guide for positive change in global policy.
Risk aversion, financing and real services
(2017)
The Global CEO Survey was launched in 2015 by researchers from Offenburg University, the University of Westminster and the London School of Economics and Political Science (LSE) to better understand and discover what factors influence exporters’ demand for credit insurance. Although some scholars have discussed aspects of corporate insurance demand with regard to exporters, there is limited research concerning the demand for export credit insurance associated with firm-specific factors. Only few empirical studies support existing theories on corporate insurance demand and export credits. This project investigates and fills the relevant gap concerning official export credit insurance demand.
In this paper we present the implementation of a model-predictive controller (MPC) for real-time control of a cable-robot-based motion simulator. The controller computes control inputs such that a desired acceleration and angular velocity at a defined point in the simulator's cabin are tracked while satisfying the constraints imposed by the working space and the allowed cable forces of the robot. In order to fully use the simulator's capabilities, we propose an approach that includes the motion platform actuation in the MPC model. The tracking performance and computation time of the algorithm are investigated in computer simulations. Furthermore, for motion simulation scenarios where the reference trajectories are not known beforehand, we derive an estimate of how much motion simulation fidelity can maximally be improved by any reference prediction scheme compared to the case where no prediction scheme is applied.
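The receding-horizon principle behind any MPC can be illustrated with a toy example. The following Python sketch is emphatically not the cable-robot controller: it tracks a position reference with an unconstrained double-integrator model, solves a finite-horizon least-squares problem at each step, and applies only the first computed input before re-solving. All dynamics and horizon parameters are assumptions for the sketch.

```python
import numpy as np

dt, N = 0.1, 10                       # step size and prediction horizon
A = np.array([[1, dt], [0, 1]])       # position/velocity dynamics
B = np.array([[0.5 * dt**2], [dt]])

def mpc_step(x, ref):
    """Return the first input of the horizon tracking position refs."""
    # Free response of the position over the horizon: (A^(k+1) x)[0]
    Phi = np.vstack([(np.linalg.matrix_power(A, k + 1) @ x)[:1]
                     for k in range(N)])
    # Influence of input u_j on position at step k+1: (A^(k-j) B)[0]
    G = np.zeros((N, N))
    for k in range(N):
        for j in range(k + 1):
            G[k, j] = (np.linalg.matrix_power(A, k - j) @ B)[0, 0]
    u, *_ = np.linalg.lstsq(G, ref - Phi.ravel(), rcond=None)
    return u[0]

x = np.zeros(2)
ref = np.ones(N)                      # step reference on position
for _ in range(40):                   # receding horizon: re-solve each step
    u0 = mpc_step(x, ref)
    x = A @ x + (B * u0).ravel()
print(x[0])                           # position should approach 1
```

A real controller like the one in the paper additionally enforces inequality constraints (workspace, cable force limits), which turns each step into a constrained quadratic program instead of a plain least-squares solve.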
Micro gas turbines (MGTs) are regarded as combined heat and power (CHP) units which offer high fuel utilization and low emissions. They are applied in decentralized energy neration.
To facilitate the planning process of energy systems, particularly in the context of the increasing application of optimization techniques, there is a need for easy-to-parametrize component models of sufficient accuracy which allow fast computation. In this paper, a model is proposed in which the non-linear part-load characteristics of the MGT are linearized using physical insight into the working principles of turbomachinery. Further, it is shown that the model can be parametrized with the data usually available in spec sheets. With this model, a uniform description of MGTs from several manufacturers covering an electrical power range from 30 kW to 333 kW can be obtained. The MGT model was implemented in Modelica/Dymola. The resulting MGT system model, comprising further heat exchangers and hydraulic components, was validated against the experimental data of a 65 kW MGT from a trigeneration energy system.
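The spec-sheet-based linearization described in this abstract can be illustrated with a small sketch. This is an assumed two-point affine fuel map fitted to nominal and minimum-load operating points; the paper's actual model structure and parameter values are not reproduced here, and all numbers below are hypothetical:

```python
def mgt_fuel_map(p_nom_kw, eta_nom, p_min_kw, eta_min):
    """Affine part-load fuel map of a micro gas turbine, fitted to two
    spec-sheet points: (nominal power, nominal electrical efficiency)
    and (minimum power, part-load efficiency). Returns fuel input in kW
    as a function of electrical output in kW."""
    f_nom = p_nom_kw / eta_nom         # fuel input at nominal load
    f_min = p_min_kw / eta_min         # fuel input at minimum load
    slope = (f_nom - f_min) / (p_nom_kw - p_min_kw)
    offset = f_nom - slope * p_nom_kw  # zero-load losses, by extrapolation
    return lambda p_el_kw: offset + slope * p_el_kw
```

For a hypothetical 65 kW unit with 29 % nominal and 20 % minimum-load electrical efficiency, such a map reproduces both calibration points exactly and interpolates linearly in between.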
Radiation is an important means of heat transfer inside an electric arc furnace (EAF). To gain insight into the complex processes of heat transfer inside the EAF vessel, not only radiation from the surfaces but also emission and absorption of the gas phase and the dust cloud need to be considered. Furthermore, the radiative heat exchange depends on the geometrical configuration, which changes continuously throughout the process. The present paper introduces a system model of the EAF which takes into account the radiative heat transfer between the surfaces and the participating medium. This is attained by the development of a simplified geometrical model, the use of a weighted-sum-of-gray-gases model, and a simplified consideration of dust radiation. The simulation results were compared with the data of real EAF plants available in the literature.
Polygeneration systems are a key technology for the reduction of primary energy usage and emissions. High costs, a lack of flexibility, and the effort required for parameterization hinder the wide usage of modeling tools during their conceptual design. This paper describes how planning tools can be structured for the conceptual design phase, where only little information is available to the planner. A library concept was developed using the principles of object-oriented modeling to address the flexibility issue. With respect to cost and expandability, the open-source modeling language Modelica was chosen. Furthermore, easy-to-parameterize component models were developed. In addition to the improved library concept and novel component models, an easy-to-adapt control concept is proposed. The component models were validated and the applicability of the library was demonstrated by means of an example. It was shown that the data usually obtained from spec sheets are sufficient to parameterize the models. In addition, the control concept was verified.
Simulation-based degradation assessment of lithium-ion batteries in a hybrid electric vehicle
(2017)
Multi-scale thermo-electrochemical modelling of aging mechanisms in an LFP/graphite lithium-ion cell
(2017)
Passive hybridization of battery cell and photovoltaic cell: modeling and experimental validation
(2017)
Heart rhythm model and simulation of electrophysiological studies and high-frequency ablations
(2017)
Background: The simulation of complex cardiologic structures has the potential to replace clinical studies due to its high efficiency regarding time and costs. Furthermore, the method is gentler on patients’ health than conventional approaches. The aim of the study was to create an anatomic CAD heart rhythm model (HRM) as accurate as possible and to show its usefulness for cardiac electrophysiological studies (EPS) and high-frequency (HF) ablations.
Methods: All natural heart components of the new HRM were based on MRI records, which guaranteed electronic functionality. The software CST (Computer Simulation Technology, Darmstadt) was used for the construction, while CST’s material library ensured genuine tissue properties. The model should make it possible to simulate different heart rhythm diseases as well as various diffusions of electromagnetic fields, caused by electrophysiological conduction, inside the heart tissue.
Results: It was possible to simulate normal sinus rhythm and fourteen different heart rhythm disturbances with different atrial and ventricular conduction delays. The simulated biological excitation of the healthy and diseased HRM was recorded by simulated electrodes of a four-polar right atrial catheter, a six-polar His bundle catheter, a ten-polar coronary sinus catheter, a four-polar ablation catheter and an eight-polar transesophageal left cardiac catheter (Fig.). Accordingly, six variables were rebuilt and inserted into the anatomic HRM in order to establish heart catheters for ECG monitoring and HF ablation. The HF ablation catheters made it possible to simulate various types of heart rhythm disturbance ablations with different HF ablation catheters and also provided a functional visualisation of tissue heating. The use of a tetrahedral meshing HRM made it possible to store the results faster with considerable space savings. The smart meshing function reduced unnecessarily high resolutions for coarse structures.
Conclusions: The new HRM for EPS simulation may additionally be useful for the simulation of heart rhythm disturbances, cardiac pacing and HF ablation, and for locating and identifying complex fractionated signals within the atrium during atrial fibrillation HF ablation.
This book has emerged from lectures and courses given in recent years by the authors at their universities and shows how theoretical concepts of Business Intelligence are applied in SAP BW on HANA.
The authors developed a set of case studies guiding the student through the complete process of building an end-to-end BI system, based on a simple but realistic business scenario. The cases are designed in such a way that the application of many concepts such as staging, core data warehouse, data mart, reporting, etc., in SAP BW on HANA is introduced and demonstrated step by step.
Target Audience:
The cases are primarily designed for SAP BW beginners who want a first introduction and hands-on experience with the latest version of BW on HANA. We briefly touch on the general concepts of Business Intelligence and Data Warehousing. These concepts are discussed in many excellent books on the market, which we do not want to replace. The reader should either already be familiar with these concepts or be willing to use the references we provide. Also, this book can NOT replace a complete consultant training for BW, but it can serve as a starting point for a journey into the world of SAP BW on HANA.
Economic growth is usually driven by improvements in productivity, economic efficiency, trade and innovation. Increasing efficiency means to produce larger output using the same amount of factors for production such as raw materials, labour, and capital. However, regardless of the driver, growth is often investment-hungry and it is not rare to find an economy with potential for growth but lacking locally available investment. In this scenario, Foreign Direct Investment (FDI) can fill the gap between investment needed to promote economic growth and locally available investments.
Comparison of Time Warping Algorithms for Rail Vehicle Velocity Estimation in Low Speed Scenarios
(2017)
In the past two decades, much has been published on whiplash injury, yet both the confusion regarding the condition and the medicolegal discussion about it have increased. In this paper, functional imaging research results are summarized using the MRIcroGL3D visualization software and assembled into an image comprising regions of cerebral activation and deactivation.
Microscale trigeneration systems are highly flexible in their operation and thus offer the technical possibility of peak-load shifting in building demand-side management. However, to harness their potential, modern control methods such as model predictive control must be implemented for their optimal scheduling. In the literature, the need for experimental investigation of microscale trigeneration systems to identify typical characteristics of the components and their interactions has been identified. On a real-life setup, control-specific information on the components is collected, and lessons learnt during commissioning of the equipment are shared. The data are analysed to derive the vital characteristics of the system and will be used for creating models of the components that can be utilised for optimal control.
Modelling and Simulation of Microscale Trigeneration Systems Based on Real-Life Experimental Data
(2017)
For the shift of the energy grid towards a smarter, decentralised system, flexible microscale trigeneration systems will play an important role due to their ability to support demand-side management in buildings. However, to harness their potential, modern control methods like model predictive control must be implemented for their optimal scheduling and control. To implement such supervisory control methods, first, simple analytical models representing the behaviour of the components need to be developed. At the Institute of Energy System Technologies in Offenburg, we have built a real-life microscale trigeneration plant and present in this paper the models based on experimental data. These models are qualitatively validated, and their future application to the optimal scheduling problem is briefly motivated.
Within this work, the benefits of using predictive control methods for the operation of Adsorption Cooling Machines (ACMs) are shown in a simulation study. Since the internal control decisions of series-manufactured ACMs often cannot be influenced, the work focuses on the optimized scheduling of an ACM, considering its internal functioning as well as forecasts of load and driving energy occurrence. For illustration, an assumed solar thermal climate system is introduced and a system model suitable for use with gradient-based optimization methods is developed. The results of a system simulation using a conventional scheme for ACM scheduling are compared to those of a predictive, optimization-based scheduling approach for the same exemplary scenario of load and driving energy occurrence. The benefits of the latter approach are shown, and future steps toward applying these methods in system control are addressed.
Time-of-Flight Cameras Enabling Collaborative Robots for Improved Safety in Medical Applications
(2017)
Human-robot collaboration is being used more and more in industrial applications and is finding its way into medical applications. Industrial robots used for human-robot collaboration cannot detect obstacles from a distance. This paper introduces the idea of using wireless technology to connect a Time-of-Flight camera to off-the-shelf industrial robots. This way, the robot can detect obstacles at a distance of up to five meters. Connecting Time-of-Flight cameras to robots increases safety in human-robot collaboration by detecting obstacles before a collision. After reviewing the state of the art, the authors elaborate the different requirements for such a system. The Time-of-Flight camera from Heptagon is able to work at a range of up to five meters and can connect to the control unit of the robot via a wireless connection.
In safety-critical applications, wireless technologies are not widely spread, mainly due to reliability and latency requirements. In this paper, a new wireless architecture is presented which allows the latency and reliability to be customized for every single participant within the network. The architecture allows for building up a network of inhomogeneous participants with different reliability and latency requirements. The TDMA scheme used, with TDD as the duplex method, is economical with resources; therefore, participants with different processing and energy resources are able to participate.
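The per-participant latency customization in a TDMA frame can be sketched roughly as follows. This is a hypothetical greedy slot allocator, not the architecture's actual scheduler; the participant names, the frame layout, and the placement heuristic are illustrative assumptions:

```python
def assign_slots(requirements, frame_len):
    """Place each participant into a TDMA frame so that the gap between
    its transmissions never exceeds its latency budget (in slots).
    The most demanding participants are placed first, at even spacing.
    Assumes frame_len is a multiple of each chosen step so the spacing
    also holds across the frame boundary. Returns the frame as a list
    of names, or None if the requirements cannot be met."""
    frame = [None] * frame_len
    for name, max_gap in sorted(requirements.items(), key=lambda kv: kv[1]):
        step = min(max_gap, frame_len)
        for offset in range(step):
            slots = range(offset, frame_len, step)
            if all(frame[i] is None for i in slots):
                for i in slots:
                    frame[i] = name
                break
        else:
            return None  # no conflict-free placement found
    return frame
```

For example, a four-slot frame serving one participant that must transmit every 2 slots and two participants that must transmit every 4 slots is feasible, while two participants both demanding every slot are rejected.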
In medical applications, wireless technologies are not widely spread. Today they are mainly used in non-latency-critical applications where reliability can be guaranteed through retransmission protocols and error correction mechanisms. With retransmission protocols, latency on the disturbed, shared wireless channel increases; therefore, retransmission protocols are not sufficient for replacing latency-critical wired connections within operating rooms, such as foot switches. Today’s research aims to improve reliability through the physical characteristics of the wireless channel, using diversity methods and more robust modulation. In this paper, an architecture for building up a reliable network is presented. The architecture offers devices with different reliability, latency and energy consumption requirements the possibility to participate. Furthermore, reliability, latency and energy consumption are scalable for every single participant.
Biological in situ methanation: Gassing concept and feeding strategy for enhanced performance
(2017)
The expansion of fluctuating renewable electricity production from wind and solar energy requires huge storage capacities. Power-to-gas (PtG) can contribute to tackling that issue via a two-step process: the electrolytic production of hydrogen and a subsequent methanation step (with additional CO2). The resulting fully grid-compatible methane, also known as synthetic natural gas (SNG), can be both stored and transported in the vast existing natural gas infrastructure.
To overcome the current major drawbacks of PtG, its relatively low efficiency and high costs, we developed an improved method for the methanation step. Our approach is a further development of the biological in situ methanation of hydrogen in biogas plants. Because this strategy directly uses internal residual CO2 from the biogas process in the biogas plant, neither additional external CO2 nor special reactors are needed. Thus, PtG is combined with the production of an upgraded, highly methane-rich raw biogas.
However, the low solubility of hydrogen in aqueous solutions and the exploitation of the maximum biological production rates remain an engineering challenge for high-performance biological in situ methanation.
In our experiments, a setup with membrane gassing turned out to be the most promising to ensure sufficient gas-liquid mass transfer of the hydrogen. Monitoring of hydrogenotrophic and aceticlastic archaea showed some adaptation of these microbial subgroups to the hydrogen feed.
In order to achieve high methane concentrations of more than 90 % in the raw biogas, a CO2-controlled hydrogen feed flow rate is suggested. For methane concentrations below 90 %, a simple current-controlled hydrogen supply can be applied.
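The two feeding regimes suggested above can be sketched as a simple switching rule. This is a toy illustration only; the actual controller, measured signals, and thresholds used in the experiments are not specified here, and the function and parameter names are hypothetical:

```python
def h2_feed_rate(ch4_fraction, co2_flow, electrolyzer_flow):
    """Select the H2 feed rate for biological in situ methanation.
    Below 90 % CH4 in the raw biogas, the (current-controlled)
    electrolyzer output is fed directly; at or above 90 %, the feed is
    throttled to the stoichiometric demand of the residual CO2
    (CO2 + 4 H2 -> CH4 + 2 H2O), i.e. CO2-controlled operation.
    All flows are in the same unit, e.g. mol/h."""
    if ch4_fraction < 0.90:
        return electrolyzer_flow
    return min(electrolyzer_flow, 4.0 * co2_flow)
```

The factor of four follows from the Sabatier reaction stoichiometry: four moles of hydrogen are consumed per mole of CO2 converted to methane.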
Gaps in basic math knowledge are among the biggest obstacles to a successful start in university. Students starting their studies in STEM disciplines display significant diversity, “math anxiety” is a widespread phenomenon, and the transition to a self-determined way of studying presents a huge challenge. Universities offer support measures such as preparatory courses. Over the years, Offenburg University realized that with increased diversity, traditional ways of teaching in front of the class have become inefficient. The majority of the students remained inactive and just listened to the teachers’ explanations and the few active participants’ answers.
Since 2013, our new course concept has fostered a shift from teaching to active learning on a large scale, involving several hundred participants in our on-site preparatory math courses. This switch to broad active practicing, however, must go hand in hand with individual support for an increasingly diverse student body. Meanwhile, students bring along their mobile devices, and the training app TeachMatics serves as a facilitator. The course concept has been very well received by both students and teachers.
The Bluetooth community is in the process of developing mesh technology. This is highly promising, as Bluetooth is widely available in smartphones and tablet PCs, allowing easy access to the Internet of Things. In this paper, we investigate the performance of Bluetooth-enabled mesh networking to identify its strengths and weaknesses. A demonstrator for this protocol has been implemented using the Fruity Mesh protocol implementation. Extensive test cases have been executed to measure performance, reliability, power consumption and delay. For this, an Automated Physical Testbed (APTB), which emulates the physical channels, has been used. The results of these measurements are considered useful for real implementations of Bluetooth mesh, not only for home and building automation but also for industrial automation.
Electrolyte-Gated Field-Effect Transistors Based on Oxide Semiconductors: Fabrication and Modeling
(2017)
The Advanced Innovation Design Approach (AIDA) is a holistic methodology for enhancing the innovative and competitive capability of industrial companies. AIDA can be considered an open mindset and an individually adaptable range of strong innovation techniques: a comprehensive front-end innovation process, advanced innovation methods, the best tools and methods of the TRIZ methodology, organizational measures for accelerating innovation, IT solutions for Computer-Aided Innovation, and other innovation methods elaborated in the recent decade in industry and academia.
Background: R-wave synchronised atrial pacing is an effective temporary pacing therapy in infants with postoperative junctional ectopic tachycardia. In the technique currently used, adverse short or long intervals between atrial pacing and ventricular sensing (AP–VS) may be observed during routine clinical practice.
Objectives: The aim of the study was to analyse outcomes of R-wave synchronised atrial pacing and the relationship between maximum tracking rates and AP–VS intervals.
Methods: Calculated AP–VS intervals were compared with those predicted by an experienced pediatric cardiologist.
Results: A maximum tracking rate (MTR) set 10 bpm higher than the heart rate (HR) may result in undesirably short AP–VS intervals (minimum 83 ms). An MTR set 20 bpm above the HR is the hemodynamically better choice (minimum 96 ms). The effect of either setting on the AP–VS interval could not be predicted by experienced observers. In our newly proposed technique, the AP–VS interval approaches 95 ms for HR > 210 bpm and 130 ms for HR < 130 bpm. The progression is linear and strictly decreasing (−0.4 ms/bpm) between the two extreme levels.
Conclusions: Adjusting the AP–VS interval in the currently used technique is complex and may imply unfavorable pacemaker settings. A new pacemaker design is advisable to allow direct control of the AP–VS interval.
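The proposed AP–VS progression can be written as a small piecewise-linear function. This is a sketch reconstructed from the numbers stated in the abstract (130 ms floor rate, −0.4 ms/bpm slope, 95 ms plateau); the exact clamping points are assumptions consistent with those extremes, not the study's published formula:

```python
def ap_vs_interval(hr_bpm):
    """AP-VS interval in ms for the newly proposed technique:
    constant 130 ms below 130 bpm, then strictly decreasing at
    -0.4 ms/bpm, clamped at 95 ms for high heart rates."""
    if hr_bpm <= 130:
        return 130.0
    return max(95.0, 130.0 - 0.4 * (hr_bpm - 130))
```

For instance, at 180 bpm the sketch yields 110 ms, midway between the two plateau values.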
Computing Aggregates on Autonomous, Self-organizing Multi-Agent System: Application "Smart Grid"
(2017)
Decentralized data aggregation plays an important role in estimating the state of the smart grid, allowing the determination of meaningful system-wide measures (such as the current power generation, consumption, etc.) to balance the power in the grid environment. Data aggregation is often practicable only if the aggregation is performed effectively. However, many existing approaches are lacking in terms of fault tolerance. We present an approach to construct a robust self-organizing overlay by exploiting the heterogeneous characteristics of the nodes and interlinking the most reliable nodes to form a stable unstructured overlay. The network structure can recover from random state perturbations in finite time and tolerates substantial message loss. Our approach is inspired by biological and sociological self-organizing mechanisms.
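Decentralized aggregation of the kind described can be illustrated with a push-sum gossip sketch. This is a generic textbook scheme, not the authors' algorithm; the topology, node names, and parameters below are illustrative assumptions:

```python
import random

def gossip_average(values, neighbors, rounds=60, seed=1):
    """Push-sum gossip averaging: every node holds a (sum, weight)
    pair; in each round it keeps half of both and pushes the other
    half to a randomly chosen neighbor. The ratio sum/weight then
    converges to the global average at every node, without any
    central coordinator."""
    rng = random.Random(seed)
    s = {n: float(v) for n, v in values.items()}
    w = {n: 1.0 for n in values}
    for _ in range(rounds):
        for node in list(values):
            peer = rng.choice(neighbors[node])
            s[node] *= 0.5          # keep half of the mass ...
            w[node] *= 0.5
            s[peer] += s[node]      # ... and push the other half
            w[peer] += w[node]
    return {n: s[n] / w[n] for n in values}
```

Because total sum and total weight are conserved by every exchange, the scheme tolerates asynchronous, unevenly scheduled interactions, which is one reason gossip-style protocols suit heterogeneous overlays like the one the abstract describes.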