In this TDP we describe a new tool created for testing the strategy layer of our soccer-playing agents. It is a complete 2D simulator that simulates games based on the decisions of 22 agents. With this tool, debugging the decision and strategy layer of our agents is much more efficient than before, thanks to various interaction methods and complete control over the simulation.
In the future, the tool could also serve as a means to run simulations of game series much faster than with the 3D simulator. This way, the impact of different play strategies could be evaluated much faster than before.
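A headless simulation loop of this kind can be sketched as follows; the agent logic, field dimensions, and all class and function names here are invented placeholders, not the actual tool's API:

```python
import random

class Agent:
    """Hypothetical stand-in for one of the 22 strategy-layer agents:
    it maps an observed ball position to a movement decision."""
    def __init__(self, x, y):
        self.x, self.y = x, y

    def decide(self, ball):
        # Placeholder strategy: take one unit step toward the ball.
        bx, by = ball
        dx = 1 if bx > self.x else -1
        dy = 1 if by > self.y else -1
        return dx, dy

def run_game(agents, steps=1000, seed=0):
    """Advance all agents for a fixed number of discrete 2D time steps
    and return the (static) ball position used in this run."""
    rng = random.Random(seed)
    ball = (rng.uniform(0, 100), rng.uniform(0, 60))
    for _ in range(steps):
        for agent in agents:
            dx, dy = agent.decide(ball)
            agent.x += dx
            agent.y += dy
    return ball

agents = [Agent(x=5 * i, y=30) for i in range(22)]
ball = run_game(agents)
```

Because no rendering is involved, loops like this can step through thousands of game situations per second, which is what makes fast batch evaluation of strategy variants plausible.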
Automatic Identification of Travel Locations in Rare Books - Object Oriented Information Management
(2017)
The digital content of the Internet is growing exponentially, and the mass digitization of printed media opens access to literature, in particular the genre of travel literature of the 18th and 19th centuries, which consists of diaries and travel books describing routes, observations and inspirations. The identification of the locations described in the digital text is a long-standing challenge, requiring information technology that supplies dynamic links to sources through new forms of interaction and synthesis between humanistic texts and scientific observations.
Using object-oriented information technology, a prototype software tool was developed that makes it possible to automatically identify geographic locations and travel routes mentioned in rare books. The information objects contain properties such as names and classification codes for populated places, streams, mountains and regions. Together with the latitude and longitude of every single location, this information can be geo-referenced so that all processed and filtered datasets can be displayed by a map application. This method has already been used in the Humboldt Digital Library to present Alexander von Humboldt’s maps, and it was tested in a case study based on the work of Alexander von Humboldt and Johann Wolfgang von Goethe to prove the correctness and reliability of the automatic identification of locations.
The results reveal numerous errors due to misspellings, changed location names, and terms identical to location names. On the other hand, it becomes very clear that the results of automatic object detection and recognition can be improved by error-free and comprehensive sources. An increase in the quality and usability of the service can therefore be expected, accompanied by more options for detecting unknown locations in the descriptions of rare books.
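The core identification step can be illustrated with a toy gazetteer lookup; the place names, coordinates and function names below are invented for the example, and a real system would draw on a full gazetteer with far richer matching:

```python
import re

# Hypothetical mini-gazetteer: name -> (latitude, longitude, feature class).
GAZETTEER = {
    "Quito": (-0.22, -78.51, "populated place"),
    "Chimborazo": (-1.47, -78.82, "mountain"),
    "Orinoco": (8.62, -62.25, "stream"),
}

def identify_locations(text):
    """Return (name, lat, lon, class) for every gazetteer entry found
    as a whole word in the text, in order of first occurrence."""
    hits = []
    for name, (lat, lon, cls) in GAZETTEER.items():
        match = re.search(r"\b" + re.escape(name) + r"\b", text)
        if match:
            hits.append((match.start(), name, lat, lon, cls))
    return [(n, la, lo, c) for _, n, la, lo, c in sorted(hits)]

route = identify_locations(
    "From Quito the expedition turned toward Chimborazo, "
    "then followed the Orinoco."
)
```

Exact whole-word matching of this kind also shows why misspellings and historical name changes, as noted above, cause misses: the lookup finds only strings that match a gazetteer entry verbatim.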
Technology and computer applications influence our daily lives, and questions arise concerning the role of artificial intelligence and decision-making algorithms. There are warning voices that computers can, in theory, emulate human intelligence, and even exceed it. This paper points out that a replacement of humans by computers is unlikely, because human thinking is characterized by cognitive heuristics and emotions, which cannot simply be implemented in machines operating with algorithms, procedural data processing or artificial neural networks. However, we are going to share our responsibilities with superior computer systems, which track and survey all of our digital activities, whereas we have no idea of the decision-making processes inside the machines. It is shown that we need a new digital humanism defining rules of computer responsibility to avoid digital totalism and the comprehensive monitoring and controlling of individuals on planet Earth.
Objective: This paper deals with the design and the optimization of mechatronic devices.
Introduction: Compared with existing works, the design approach presented in this paper aims to integrate optimization into the design phase of complex mechatronic systems in order to increase the efficiency of this method.
Methods: To solve this problem, a novel mechatronic system design approach has been developed that takes the multidisciplinary aspect into account and treats optimization as a tool that can be used within the embodiment design process to build mechatronic solutions from a set of solution concepts designed with innovative or routine design methods.
Conclusions: This approach was then applied to the design and optimization of a wind turbine system that can be implemented to autonomously supply a mountain cottage.
Singapore’s success in transforming itself from a poor, vulnerable economy to one of the richest countries in the world (IMF, 2016) is nothing short of inspirational to many small economies around the globe. Given its lack of resources, Singapore relied upon foreign investors to fuel its growth, not only through cash injections into the economy in the form of Foreign Direct Investment (FDI) but also to help upgrade its skills and technological stock. This study looks at how Singapore inspired many Multi-National Corporations (MNCs) to pour large sums of investment into this small, ailing city-state and whether this approach can be generalized to other economies, especially Oman.
In a bid to explain the large flow of capital into an economy, this study then reviews the most prominent literature in the field since Macdougall (1958) first laid the groundwork for subsequent theories of FDI. Based on the review of several previous studies, the most significant determinants of FDI were found to be government policy and political stability, the inflation rate as a proxy for economic stability, the quality of infrastructure and institutions, the market size of the host country, openness to trade, tax policies and access to low-cost factors of production.
Through a case study method with an inductive approach, this study finds that Singapore excels in all of the determinants of FDI except the market size of the host country and access to low-cost factors of production. However, it more than compensates for these shortcomings with its strategic geographical location and numerous bilateral and regional trade agreements that give it access to markets around the region. Oman, like Singapore, ranks well in many of these determinants, which makes it a potential destination for investment. However, the sultanate could attract more interest from MNCs to help its growth by optimizing its policies to lower existing barriers, easing immigration laws to meet the short-term skill shortage, allowing 100 percent foreign ownership, allowing more liberal property rights, working to improve its corruption perception and opting for more trade agreements to gain easy access to larger markets. Moreover, the economy’s heavy reliance on hydrocarbon exports is seen as a major risk by investors, as it creates an economic vulnerability which could potentially overshadow many other benefits of investing in the sultanate. Besides the aforementioned determinants, a lot also depends on the success of Oman’s diversification plans.
Objectives: Speech recognition on the telephone poses a challenge for patients with cochlear implants (CIs) due to the reduced transmission bandwidth. This trial evaluates a home-based auditory training with telephone-specific filtered speech material to improve sentence recognition. Design: Randomised, controlled, parallel, double-blind. Setting: One tertiary referral centre. Participants: A total of 20 postlingually deafened patients with CIs. Main outcome measures: The primary outcome measure was sentence recognition, assessed by a modified version of the Oldenburg Sentence Test filtered to the telephone bandwidth of 0.3-3.4 kHz. Additionally, pure-tone thresholds, recognition of monosyllables and subjective hearing benefit were acquired at two separate visits before and after a home-based training period of 10-14 weeks. For training, patients received a CD with speech material, either unmodified for the unfiltered training group or filtered to the telephone bandwidth for the filtered group. Results: Patients in the unfiltered training group achieved an average sentence recognition score of 70.0%±13.6% (mean±SD) before and 73.6%±16.5% after training. Patients in the filtered training group achieved 70.7%±13.8% and 78.9%±7.0%, a statistically significant difference (P = .034, t10 = 2.292; two-way RM ANOVA/Bonferroni). An increase in the recognition of monosyllabic words was noted in both groups. The subjective benefit was positive for filtered and negative for unfiltered training. Conclusions: Auditory training with specifically filtered speech material provided an improvement in sentence recognition on the telephone compared to training with unfiltered material.
The following contribution deals with the growth of cracks in low-cycle fatigue (LCF) and thermomechanical fatigue (TMF) tested specimens of Inconel 718, measured using the replica method. The specimens are loaded with different strain rates. The material shows a significantly higher crack growth rate if the strain rate is decreased. Electron backscatter diffraction (EBSD) is adopted to identify the failure mechanism and the misorientation relationship of failed grain boundaries in secondary cracks. The analyzed cracks propagated mainly transgranularly, but intergranular failure can also be observed in some areas. It is found that grain boundaries with a coincidence site lattice (CSL) boundary structure are generally less susceptible to intergranular failure than grain boundaries with random misorientation. To model the experimentally identified crack behavior, an existing model for fatigue crack growth based on the mechanism of time-dependent elastic-plastic crack tip blunting is enhanced to describe environmental effects based on the mechanism of oxygen diffusion at the crack tip. For the diffusion process, the temperature-dependent parabolic diffusion law is assumed. As a result, the time-dependent cyclic crack tip opening displacement (DCTOD) is used as a representative value to describe both mechanisms. Thus, most of the included model parameters characterize the deformation behavior of the material and can be determined by independent material tests. With the determined material properties, the proposed model describes the experimentally measured crack growth curves very well. The model is validated based on predictions of the number of cycles to failure for LCF as well as in-phase and out-of-phase TMF tests in the temperature range between room temperature and 650 °C.
Cast iron materials are used as materials for cylinder heads for heavy duty internal combustion engines. These components must withstand severe cyclic mechanical and thermal loads throughout their service life. While high-cycle fatigue (HCF) is dominant for the material in the water jacket region, the combination of thermal transients with mechanical load cycles results in thermomechanical fatigue (TMF) of the material in the fire deck region, even including superimposed TMF and HCF loads. Increasing the efficiency of the engines directly leads to increasing combustion pressure and temperature and, thus, lower safety margins for the currently used cast iron materials, or alternatively the need for superior cast iron materials. In this paper (Part I), the TMF properties of the lamellar graphite cast iron GJL250 and the vermicular graphite cast iron GJV450 are characterized in uniaxial tests, and a mechanism-based model for TMF life prediction is developed for both materials. The model can be used to estimate the fatigue life of components by means of finite-element calculations (Part II of the paper) and supports engineers in finding the appropriate material and design. Furthermore, the effect of the elastic, plastic and creep properties of the materials on the fatigue life can be evaluated with the model. However, for material selection, the thermophysical properties, which to a large extent control the thermal stresses in the component, must also be considered. Hence, the need for integral concepts for material characterization and selection from a multitude of existing and soon-to-be developed cast iron materials is discussed.
The electrical field (E-field) of the biventricular (BV) stimulation is important for the success of cardiac resynchronization therapy (CRT) in patients with cardiac insufficiency and widened QRS complex.
The aim of the study was to model different pacing and ablation electrodes and to integrate them into a heart model for the static and dynamic simulation of BV stimulation and HF ablation in atrial fibrillation (AF).
The modeling and simulation were carried out using the electromagnetic simulation software CST. Five multipolar left ventricular (LV) electrodes, four bipolar right atrial (RA) electrodes, two right ventricular (RV) electrodes and one HF ablation catheter were modelled. A selection was integrated into the heart rhythm model (Schalk, Offenburg) for the electrical field simulation. The simulation of an AV node ablation at CRT was performed with RA, RV and LV electrodes and an integrated ablation catheter with an 8 mm gold tip.
BV stimulation was performed simultaneously with an amplitude of 3 V at the LV electrode and 1 V at the RV electrode, with a pulse width of 0.5 ms each. The far-field potential was 32.86 mV at the RA electrode tip and 185.97 mV at a distance of 1 mm from the RA electrode tip. AV node ablation was simulated with an applied power of 5 W at 420 kHz at the distal ablation electrode. The temperature was 103.87 °C at the catheter tip after 5 s ablation time and 37.61 °C at a distance of 2 mm inside the myocardium. After 15 s, the respective temperatures were 118.42 °C and 42.13 °C.
Virtual heart and electrode models as well as the simulations of electrical fields and temperature profiles allow the static and dynamic simulation of atrial-synchronous BV stimulation and HF ablation in AF and could be used to optimize CRT and AF ablation.
A complete thermomechanical fatigue (TMF) life prediction methodology is developed for predicting the TMF life of cast iron cylinder heads for efficient heavy duty internal combustion engines. The methodology uses transient temperature fields as thermal loads for the non-linear structural finite-element analysis (FEA). To obtain reliable stress and strain histories in the FEA for cast iron materials, a time- and temperature-dependent plasticity model which accounts for viscous effects, non-linear kinematic hardening and tension-compression asymmetry is required. For this purpose, a unified elasto-viscoplastic Chaboche model coupled with damage is developed and implemented as a user material model (USERMAT) in the general-purpose FEA program ANSYS. In addition, the mechanism-based DTMF model for TMF life prediction developed in Part I of the paper is extended to three-dimensional stress states under transient non-proportional loading conditions. The material properties of the plasticity model are determined for the lamellar graphite cast iron GJL250 and the vermicular graphite cast iron GJV450 from isothermal and non-isothermal uniaxial tests. The methodology is applied to obtain TMF life predictions for two cast iron cylinder heads for heavy duty diesel engine applications made from both cast iron materials. It is shown that the life predictions using the developed methodology correlate very well with the observed lives from two bench tests, in terms of location as well as number of cycles to failure.
The ability to detect a target signal masked by noise is improved in normal-hearing listeners when interaural phase differences (IPDs) between the ear signals exist either in the masker or in the signal. To improve binaural hearing in bilaterally implanted cochlear implant (BiCI) users, a coding strategy providing the best possible access to IPDs is highly desirable. Outcomes of a previous study (Zirn, Arndt et al. 2016) revealed that a subset of BiCI users showed improved IPD detection thresholds with the fine structure processing strategy FS4 compared to the constant-rate strategy HDCIS using narrowband stimuli. In contrast, little difference between the coding strategies was found for broadband stimuli with regard to binaural speech intelligibility level differences (BILD) as an estimate of binaural unmasking. Compared to normal-hearing listeners (7.5 ± 1.2 dB), BILD were small in BiCI users (around 0.5 dB with both coding strategies).
In the present work, we investigated the influence of binaural fitting parameters on BILD. In our cohort of BiCI users, many were implanted with electrode arrays that differed in length between the left and right sides. Because this length difference typically corresponded to the distance between two electrode contacts, the first modification of the bilateral fitting was a tonotopic adjustment by deactivating the most apical electrode contact on the side with the more deeply inserted array (tonotopic approach).
The second modification was the isolation of the residual most apical electrode contacts by deactivating the basally adjacent electrode contact on each side (tonotopic sparse approach). With these modifications, the BILD improved by up to 1.5 dB.
Our university carries out various research projects. Among others, the Schluckspecht project is interdisciplinary work on different ultra-efficient car concepts for international contests. Besides the engineering work, one part of the project deals with real-time data visualization. In order to increase the efficiency of the vehicle, online monitoring of the runtime parameters is necessary. The driving parameters of the vehicle are transmitted to a processing station via a wireless network connection. We plan to use an augmented reality (AR) application to visualize different data on top of the view of the real car. Using a mobile Android or iOS device, a user can interactively view various real-time and statistical data. The car and its components are meant to be augmented with various additional information, with that information appearing at the correct position on the components. An engine, for example, could show the current rpm and consumption values; a battery could show the current charge level. The goal of this paper is to evaluate different possible approaches and their suitability, and to expand our application to other projects at our university.
Since their early days, space communications have been among the strongest driving applications for the development of error-correcting codes. Indeed, space-to-Earth telemetry (TM) links have extensively exploited advanced coding schemes, from convolutional codes to Reed-Solomon codes (also in concatenated form) and, more recently, from turbo codes to low-density parity-check (LDPC) codes. The efficiency of these schemes has been extensively proved in several papers and reports. The situation is somewhat different for Earth-to-space telecommand (TC) links. Space TCs must reliably convey control information as well as software patches from Earth control centers to scientific payload instruments and engineering equipment onboard (O/B) spacecraft. The success of a mission may be compromised by an error corrupting a TC message: a detected error causing no execution or, even worse, an undetected error causing a wrong execution. This imposes strict constraints on the maximum acceptable detected and undetected error rates.
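The detected/undetected distinction rests on error-detecting checksums such as cyclic redundancy checks. The sketch below uses the generic CRC-16-CCITT polynomial and an invented frame layout purely for illustration; it is not the exact CCSDS TC frame format:

```python
def crc16_ccitt(data: bytes, poly=0x1021, init=0xFFFF) -> int:
    """Bitwise CRC-16-CCITT over a byte string (illustrative, not optimized)."""
    crc = init
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            # Shift left; on carry-out, fold in the generator polynomial.
            crc = ((crc << 1) ^ poly) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def encode(frame: bytes) -> bytes:
    """Append the 16-bit check value to a (hypothetical) TC frame."""
    return frame + crc16_ccitt(frame).to_bytes(2, "big")

def check(received: bytes) -> bool:
    """True if the received frame's check value matches its content."""
    frame, tail = received[:-2], received[-2:]
    return crc16_ccitt(frame).to_bytes(2, "big") == tail

tc = encode(b"SET MODE SAFE")
corrupted = bytes([tc[0] ^ 0x01]) + tc[1:]   # single-bit channel error
```

A 16-bit CRC detects all single-bit errors and all but roughly 2^-16 of random corruption patterns; the residual undetected fraction is exactly the kind of figure the strict TC error-rate constraints are written against.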
In the course of the last few years, our students have become increasingly unhappy. Sometimes they stop attending lectures and even seem not to know how to behave appropriately. It feels like they are going on strike. Consequently, drop-out rates are sky-rocketing. The lecturers and professors are not happy either, adopting an “I-don’t-care” attitude.
An interdisciplinary, international team set out to find answers: (1) What are the students unhappy about? Why is it becoming so difficult for them to cope? (2) What does the “I-don’t-care” attitude of professors actually mean? What do they care or not care about? (3) How far do the views of the parties correlate? Could some kind of mutual understanding be achieved?
The findings indicate that, at least at our universities, there is rather a long way to go from “Engineering versus Pedagogy” to “Engineering Pedagogy”.
Finding clusters in high-dimensional data is a challenging research problem. Subspace clustering algorithms aim to find clusters in all possible subspaces of a dataset, where a subspace is a subset of the dimensions of the data. However, the exponential increase in the number of subspaces with the dimensionality of the data renders most of these algorithms inefficient as well as ineffective. Moreover, these algorithms have data dependencies ingrained in the clustering process, so parallelization becomes difficult and inefficient. SUBSCALE is a recent subspace clustering algorithm that is scalable with the number of dimensions and contains independent processing steps which can be exploited through parallelism. In this paper, we aim to leverage, firstly, the computational power of widely available multi-core processors to improve the runtime performance of the SUBSCALE algorithm. The experimental evaluation has shown linear speedup. Secondly, we are developing an approach using graphics processing units (GPUs) for fine-grained data parallelism to accelerate the computation further. First tests of the GPU implementation show very promising results.
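The independence that makes this parallelization possible can be sketched as follows. This is a simplified stand-in for SUBSCALE's per-dimension step (the full algorithm additionally combines 1-D dense units across dimensions via signatures), and threads stand in here for the multi-core workers; all function names are invented:

```python
from concurrent.futures import ThreadPoolExecutor

def dense_units_1d(values, eps=0.5, tau=3):
    """Find maximal groups of >= tau point indices whose sorted values are
    chained by gaps <= eps -- a simplified per-dimension density scan."""
    if not values:
        return []
    order = sorted(range(len(values)), key=lambda i: values[i])
    groups, current = [], [order[0]]
    for prev, idx in zip(order, order[1:]):
        if values[idx] - values[prev] <= eps:
            current.append(idx)
        else:
            if len(current) >= tau:
                groups.append(sorted(current))
            current = [idx]
    if len(current) >= tau:
        groups.append(sorted(current))
    return groups

def dense_units_all(points, eps=0.5, tau=3):
    """Each dimension is processed independently, so the scans can run
    concurrently without any synchronization between workers."""
    dims = list(zip(*points))  # column-major view: one tuple per dimension
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda col: dense_units_1d(list(col), eps, tau), dims))

points = [(0.0, 9.0), (0.1, 9.1), (0.2, 5.0), (3.0, 9.2), (3.1, 0.0)]
units = dense_units_all(points, eps=0.5, tau=3)
```

Because no worker reads another worker's state, the speedup scales with the number of dimensions and cores, which is consistent with the linear speedup reported above.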
We present a two-dimensional (2D) planar chromatographic separation of estrogenic active compounds on an RP-18 W (Merck, 1.14296) phase. A mixture of 8 substances was separated using a solvent mixture of hexane, ethyl acetate and acetone (55:15:10, v/v) in the first direction and of acetone and water (15:10, v/v) in the second direction. Separation was performed on an RP-18 W plate over a distance of 70 mm. This 2D separation method can be used to quantify 17α-ethinylestradiol (EE2) in an effect-directed analysis, using the yeast strain Saccharomyces cerevisiae BJ3505. The test strain (according to McDonnell) contains the estrogen receptor. Its activation by estrogen-active compounds is measured by induction of the reporter gene lacZ, which encodes the enzyme β-galactosidase. This enzyme activity is determined on the plate using the fluorescent substrate MUG (4-methylumbelliferyl-β-d-galactopyranoside).
We present a two-dimensional (2D) planar chromatographic separation method for phytoestrogenic active compounds on an RP-18 W (Merck, 1.14296) phase. It could be shown that an ethanolic extract of liquorice (Glycyrrhiza glabra) roots contains four phytoestrogenic active compounds. In the first direction, a solvent mixture of hexane, ethyl acetate and acetone (45:15:10, v/v) was used; in the second direction, one of acetone and water (15:10, v/v). After separation, a modified yeast estrogen screen (YES) test was applied, using the yeast strain Saccharomyces cerevisiae BJ3505. The test strain (according to McDonnell) contains the estrogen receptor. Its activation by estrogen-active compounds is measured by induction of the reporter gene lacZ, which encodes the enzyme β-galactosidase. This enzyme activity is determined on the plate using the fluorescent substrate MUG (4-methylumbelliferyl-β-d-galactopyranoside). The enzyme can also hydrolyse X-β-Gal (5-bromo-4-chloro-3-indoxyl-β-d-galactopyranoside) into β-galactose and 5-bromo-4-chloro-3-indoxyl. The indoxyl compound is oxidized by oxygen, forming the deep-blue dye 5,5′-dibromo-4,4′-dichloro-indigo, which allows phytoestrogenic activity to be detected more specifically in the presence of natively fluorescing compounds.
eLetter on the article "How hair can reveal a history" by Hanae Armitage & Nala Rogers, published in Science, Vol. 351, Issue 6278, p. 1134 (doi.org/10.1126/science.351.6278.1134)
eLetter on the article "Hybrid EEG/EOG-based brain/neural hand exoskeleton restores fully independent daily living activities after quadriplegia" by Surjo R. Soekadar et al., published in Science Robotics, Vol. 1, No. 1 (DOI: 10.1126/scirobotics.aag3296)
The need to measure basic aerosol parameters has increased dramatically in the last decade, mainly due to their harmful effects on the environment and on public health. Legislation requires that particle emissions and ambient levels, workplace particle concentrations and exposure to them be measured to confirm that the defined limits are met and that the public is not exposed to harmful concentrations of aerosols.
In this paper we show that a model-free approach to learning behaviors in joint space can successfully be used to utilize the toes of a humanoid robot. Keeping the approach model-free makes it applicable to any kind of humanoid robot, or any robot in general. Here we focus on the benefit for robots with toes, which is otherwise difficult to exploit. The task was to learn different kick behaviors on simulated Nao robots with toes in the RoboCup 3D soccer simulator. As a result, the robot learned to step on its toe for a kick that performs 30% better than the same kick learned without toes.
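The model-free idea can be sketched with a simple stochastic hill climber over a joint-space parameter vector. The reward function below is a synthetic stand-in (in the real setup it would run the kick on the simulated Nao and measure the ball's travel), and all names and the target values are invented for the example:

```python
import random

def kick_distance(params):
    """Hypothetical black-box reward with its optimum at (0.8, 0.3, 0.5);
    a real evaluation would execute the kick in the 3D simulator."""
    target = (0.8, 0.3, 0.5)
    return 10.0 - sum((p - t) ** 2 for p, t in zip(params, target))

def hill_climb(reward, dim=3, iters=300, step=0.1, seed=1):
    """Model-free optimization: perturb the joint-space parameters and
    keep a perturbation whenever the measured reward improves."""
    rng = random.Random(seed)
    best = [rng.uniform(0, 1) for _ in range(dim)]
    best_r = reward(best)
    for _ in range(iters):
        cand = [p + rng.gauss(0, step) for p in best]
        r = reward(cand)
        if r > best_r:
            best, best_r = cand, r
    return best, best_r

params, score = hill_climb(kick_distance)
```

Nothing in the loop refers to the robot's kinematics or dynamics, which is exactly what makes such approaches transferable to any robot whose behaviors can be parameterized and evaluated.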
Battery degradation is a complex physicochemical process that strongly depends on operating conditions. We present a model-based analysis of lithium-ion battery degradation in a stationary photovoltaic battery system. We use a multi-scale multi-physics model of a graphite/lithium iron phosphate (LiFePO4, LFP) cell including solid electrolyte interphase (SEI) formation. The cell-level model is dynamically coupled to a system-level model consisting of photovoltaics (PV), inverter, load, grid interaction, and energy management system, fed with historic weather data. Simulations are carried out for two load scenarios, a single-family house and an office tract, over annual operation cycles with one-minute time resolution. As a key result, we show that the charging process causes a peak in degradation rate due to electrochemical charge overpotentials. The main drivers for cell ageing are therefore not only a high state of charge (SOC), but the charging process leading towards high SOC. We also show that the load situation not only influences system parameters like self-sufficiency and self-consumption, but also has a significant impact on battery ageing. We assess a reduced charge cut-off voltage as an ageing mitigation strategy.
One of the practical bottlenecks associated with the commercialization of lithium-air cells is the choice of an appropriate electrolyte that provides the required combination of cell performance, cyclability and safety. With the help of a two-dimensional multiphysics model, we attempt to narrow down the electrolyte choice by providing insights into the effect of the transport properties of the electrolyte, electrode saturation (flooded versus gas diffusion), and electrode thickness on the single-discharge performance of a lithium-air button cell cathode for five different electrolytes, including water, ionic liquid, carbonate, ether, and sulfoxide. The 2D distribution of the local current density and the concentrations of electrochemically active species (O2 and Li+) in the cathode is also discussed with respect to electrode saturation. Furthermore, the efficacy of species transport in the cathode is quantified by introducing two parameters: firstly, a transport efficiency that gives local insight into the distribution of mass transfer losses, and secondly, an active electrode volume that gives global insight into the cathode volume utilization at different current densities. A detailed discussion is presented toward understanding the design-induced performance limitations in a Li-air button cell prototype.
The DMFC is a promising option for backup power systems and for the power supply of portable devices. However, from the modeling point of view, liquid-feed DMFCs are challenging systems due to the complex electrochemistry, the inherent two-phase transport and the effect of methanol crossover. In this paper we present a physical 1D cell model describing the processes relevant for DMFC performance, ranging from electrochemistry on the surface of the catalyst up to transport on the cell level. A two-phase flow model is implemented describing the transport in the gas diffusion layer and catalyst layer on the anode side. Electrochemistry is described by elementary steps for the reactions occurring at anode and cathode, including adsorbed intermediate species on the platinum and ruthenium surfaces. Furthermore, a detailed membrane model including methanol crossover is employed. The model is validated using polarization curves, methanol crossover measurements and impedance spectra. It permits the analysis of both steady-state and transient behavior with a high level of predictive capability. Steady-state simulations are used to investigate the open circuit voltage as well as the overpotentials of anode, cathode and electrolyte. Finally, the transient behavior after current interruption is studied in detail.
Lithium-ion batteries show a complex thermo-electrochemical performance and aging behavior. This paper presents a modeling and simulation framework that is able to describe both multi-scale heat and mass transport and complex electrochemical reaction mechanisms. The transport model is based on a 1D + 1D + 1D (pseudo-3D or P3D) multi-scale approach for intra-particle lithium diffusion, electrode-pair mass and charge transport, and cell-level heat transport, coupled via boundary conditions and homogenization approaches. The electrochemistry model is based on the use of the open-source chemical kinetics code CANTERA, allowing flexible multi-phase electrochemistry to describe both main and side reactions such as SEI formation. A model of gas-phase pressure buildup inside the cell upon aging is added. We parameterize the model to reflect the performance and aging behavior of a lithium iron phosphate (LiFePO4, LFP)/graphite (LiC6) 26650 battery cell. Performance (0.1–10 C discharge/charge at 25, 40 and 60°C) and calendaric aging experimental data (500 days at 30°C and 45°C and different SOC) from literature can be successfully reproduced. The predicted internal cell states (concentrations, potential, temperature, pressure, internal resistances) are shown and discussed. The model is able to capture the nonlinear feedback between performance, aging, and temperature.
This book offers a compendium of best practices in game dynamics. It covers a wide range of dynamic game elements, ranging from player behavior and artificial intelligence to procedural content generation. Such dynamics make virtual worlds more lively and realistic, and they also create the potential for moments of amazement and surprise. In many cases, game dynamics are driven by a combination of random seeds, player records and procedural algorithms. Games can even incorporate the player’s real-world behavior to create dynamic responses. The best practices illustrate how dynamic elements improve the user experience and increase the replay value.
The book draws upon interdisciplinary approaches; researchers and practitioners from Game Studies, Computer Science, Human-Computer Interaction, Psychology and other disciplines will find this book to be an exceptional resource of both creative inspiration and hands-on process knowledge.
Defining Recrutainment: A Model and a Survey on the Gamification of Recruiting and Human Resources
(2017)
Recrutainment is a hybrid word combining recruiting and entertainment. It describes the combination of activities in human resources and gamification. Concepts and methods from game design are now used to assess and select future employees. Beyond this area, recrutainment is also applied to internal processes like professional development or even marketing campaigns. This paper’s contribution has four components: (1) we provide a conceptual background, leading to a more precise definition of recrutainment; (2) we develop a new model for analyzing solutions in recrutainment; (3) we present a corpus of 42 applications and use the new model to assess their strengths and potentials; (4) we provide a bird’s eye view on the state of the art in recrutainment and show the current weighting of gamification and recruiting aspects.
Applications helping us to maintain focus on work are called “Zenware” (from concentration and Zen). While form factors, use cases and functionality vary, all these applications have a common goal: creating uninterrupted, focused attention on the task at hand. The rise of such tools exemplifies the users’ desire to control their attention within the context of omnipresent distraction. In expert interviews, we investigate approaches to attention management at the workplace of knowledge workers. To gain a broad understanding, we use judgement sampling in interviews with experts from several disciplines. We especially explore how focus and flow can be stimulated. Our contribution has four components: a brief overview of the state of the art (1), a presentation of the results (2), strategies for coping with digital distractions and design guidelines for future Zenware (3) and an outlook on the overall potential in digital work environments (4).
Gamifying rehabilitation is an efficient way to improve motivation and exercise frequency. However, between flow theory, self-determination theory and Bartle's player types, there is much room for speculation regarding the mechanics required for successful gamification, which in turn leads to increased motivation. For our study, we selected a gamified solution for motion training (an exergame) in which the playful design elements are extremely simple. The contribution is three-fold: we show best practices from the state of the art, present a study analyzing the effects of simple gamification mechanics on a quantitative and a qualitative level, and discuss strategies for playful design in therapeutic movement games.
Additive manufacturing processes have evolved rapidly in recent years and now offer a wide range of manufacturing technologies and workable materials. These range from plastics and metals to paper and even polymer-plaster composites. Due to the layer-by-layer buildup of the components, additive processes have the advantage of design freedom in comparison with conventional manufacturing processes, i.e. complex geometries can be implemented easily. Moreover, additive processes offer the advantage of reduced resource consumption, since essentially only the material required for the actual component is consumed and no waste in the form of chips is produced. In order to exploit these advantages, the potentials of additive manufacturing and the requirements of sustainable design must already be observed in the product development process. Components and products must therefore be designed so that as little construction and support material as possible is required for the generative production, and thus few resources are consumed. In addition, all steps of the additive manufacturing process must be considered properly, including post-processing. This allows components to be designed so that, for instance, the effort for removing the support structure is considerably reduced, which leads to a significant reduction in manufacturing time and thus energy consumption. The implementation of these potentials in product development can be demonstrated by means of a multiple-stage model. A case study shows how this model is applied in the training of Master students in the field of product development. In a workshop, the students work as a group on the task of developing a miniature racing car under the rules of sustainable design, in compliance with the boundary conditions of additive manufacturing. In this case, Fused Deposition Modelling (FDM) using plastics as a building material is applied.
The results show how the students have dealt with the different requirements and how they have implemented them in product development and in the subsequent additive manufacturing.
The present-day methods of numerical simulation offer a great variety of options for optimizing metal forming processes. Although it is possible to simulate complex forming processes, the results are typically available only as 2D projections on screens. Some forming processes have reached a level of complexity beyond the level of spatial sense, which makes it necessary to use physical 3D representations to develop a deeper understanding of the material flow, microstructural processes, process and design limits, or to design the required tooling. Physical 3D models can be produced in a short amount of time using 3D printing, and indexed with a wide range of colors. In this paper, the additive manufacturing of 3D color models based on simulation results is explored by means of examples from metal forming. Different 3D-printing processes are compared on the basis of quality as well as technical and economic criteria. Other examples from the fields of joining by upset bulging of tubes and microstructure simulation are also analyzed. This paper discusses the possibilities offered by the rapid progress and wide availability of 3D printers for the design and optimization of complex metal forming processes.
Architecture models are an essential component of the development process and enable a physical representation of virtual designs. In addition to the conventional methods of model production by machining models made of wood, metal, plastic or glass, a number of additive manufacturing processes are now available. These new processes enable the additive manufacturing of architectural models directly from CAAD or BIM data. However, the boundary conditions that govern whether models can be produced with additive manufacturing processes must also be considered. Such conditions include the minimum wall thickness, which depends on the applied additive manufacturing process and the materials used. Moreover, the need to remove support structures after the additive manufacturing process must also be considered. In general, a change in the scale of these models is only possible with very high effort. In order to allow these restrictions to be adequately incorporated into the CAAD model, this contribution develops a parametrized CAAD model that allows such boundary conditions to be modified and adapted while complying with the scale. The usability of this new method is illustrated and explained in detail in a case study. In addition, this article addresses the additive manufacturing processes, including subsequent post-processing.
Implementation of lightweight design in the product development process of unmanned aerial vehicles
(2017)
The development and manufacturing of unmanned aerial vehicles (UAVs) require a multitude of design rules. Additive manufacturing (AM) processes provide a number of significant advantages over conventional production methods, particularly for implementing requirements with regard to lightweight construction and sustainability. A new, promising approach is presented in which very light structural elements are combined in a ribbed construction with an attached foil covering. This contribution develops and presents a development process that is based on various development cycles. Such cycles differ in their effort and scope within the overall development, and may comprise only one part of the development process or the entire development process. The applicability of this development process is demonstrated within the framework of a comprehensive case study. The aim is to develop an additively manufactured product that is as light as possible, in the form of a UAV, along with a sustainable manufacturing process for it. Finally, the results of this case study are analyzed with regard to the improvement of lightweight construction.
The paper addresses the needs of universities regarding the qualification of students as future R&D specialists in efficient techniques for successfully running the innovation process. In comparison with practicing engineers, students often demonstrate lower motivation in learning systematic inventive techniques, such as the TRIZ methodology, and prefer random brainstorming for idea generation. The quality of the obtained solutions also depends on the completeness of the problem analysis, which is more complex and time-consuming in the case of interdisciplinary systems. The paper briefly describes a one-semester course of 60 hours on new product development with the Advanced Innovation Design Approach and the TRIZ methodology, in which a typical industrial innovation process for one selected interdisciplinary mechatronic product is modelled.
Background: The electrical field (E-field) of the biventricular (BV) stimulation is important for the success of cardiac resynchronization therapy (CRT) in patients with cardiac insufficiency and widened QRS complex. The 3D modeling allows the simulation of CRT and high frequency (HF) ablation.
Purpose: The aim of the study was to model different pacing and ablation electrodes and to integrate them into a heart model for the static and dynamic simulation of atrial and BV stimulation and high frequency (HF) ablation in atrial fibrillation (AF).
Methods: The modeling and simulation were carried out using the electromagnetic simulation software CST (CST, Darmstadt). Five multipolar left ventricular (LV) electrodes, one epicardial LV electrode, four bipolar right atrial (RA) electrodes, two right ventricular (RV) electrodes and one HF ablation catheter were modeled. Selected electrodes were integrated into the Offenburg heart rhythm model for the electrical field simulation. The simulation of an AV node ablation during CRT was performed with RA, RV and LV electrodes and an integrated ablation catheter with an 8 mm gold tip.
Results: The right atrial stimulation was performed with an amplitude of 1.5 V and a pulse width of 0.5 ms. The far-field potentials generated by the atrial stimulation were perceived by the right and left ventricular electrodes. The far-field potential at a distance of 1 mm from the right ventricular electrode tip was 36.1 mV; at a distance of 1 mm from the left ventricular electrode tip, it was measured as 37.1 mV. The RV and LV stimulation were performed simultaneously with an amplitude of 3 V at the LV electrode and 1 V at the RV electrode, with a pulse width of 0.5 ms each. The far-field potentials generated by the BV stimulation could be perceived by the RA electrode. The far-field potential at the RA electrode tip was 32.86 mV. AV node ablation was simulated with an applied power of 5 W at 420 kHz and 10 W at 500 kHz at the distal 8 mm ablation electrode.
Conclusions: Virtual heart and electrode models, as well as the simulations of electrical fields and temperature profiles, allow the static and dynamic simulation of atrial synchronous BV stimulation and HF ablation in AF. The 3D simulation of the electrical field and temperature profile may be used to optimize CRT and AF ablation.
Process engineering focuses on the design, operation, control and optimization of chemical, physical and biological processes and has applications in many industries. Process Intensification is a key development approach in modern process engineering. The proposed Advanced Innovation Design Approach (AIDA) combines a holistic innovation process with the systematic analysis and problem-solving tools of the theory of inventive problem solving (TRIZ). The present paper conceptualizes the application of AIDA in the field of process engineering, especially in combination with Process Intensification. It defines the AIDA innovation algorithm for process engineering and describes process mapping, problem ranking, and concept design techniques. The approach has been validated in several industrial case studies. The presented research work is part of the European project “Intensified by Design® platform for the intensification of processes involving solids handling”.
The collection of selected papers of the TRIZ Future Conference 2017 is open access and is included in the Innovator, the journal of the European TRIZ Association.
The growing complexity of RF front-ends, which support carrier aggregation and a growing number of frequency bands, leads to tightened nonlinearity requirements for all sub-components. The generation of third-order intermodulation products (IMD3) is a typical problem caused by the nonlinearity of SAW devices. In the present work, we investigate temperature-compensated (TC) SAW devices on Lithium Niobate-rot128YX. An accurate FEM simulation model [1] is employed, which allows a better understanding of the origin of nonlinearities in such acoustic devices.
Elastic constants of components are usually determined by tensile tests in combination with ultrasonic experiments. However, these properties may change due to e.g. mechanical treatments or service conditions during their lifetime. Knowledge of the actual material parameters is key to the determination of quantities like residual stresses present in the medium. In this work the acoustic nonlinearity parameter (ANP) for surface acoustic waves is examined through the derivation of an evolution equation for the amplitude of the second harmonic. Given a certain depth profile of the third-order elastic constants, the dependence of the ANP with respect to the input frequency is determined and on the basis of these results, an appropriate inversion method is developed. This method is intended for the extraction of the depth dependence of the third-order elastic constants of the material from second-harmonic generation and guided wave mixing experiments, assuming that the change in the linear Rayleigh wave velocity is small. The latter assumption is supported by a 3D-FEM model study of a medium with randomly distributed microcracks as well as theoretical works on this topic in the literature.
Spectral analysis of signal averaging electrocardiography in atrial and ventricular tachyarrhythmias
(2017)
Background: Targeting complex fractionated atrial electrograms detected by automated algorithms during ablation of persistent atrial fibrillation has produced conflicting outcomes in previous electrophysiological studies. The aim of the investigation was to evaluate atrial and ventricular high-frequency fractionated electrical signals with a signal averaging technique.
Methods: Signal-averaged electrocardiography (ECG) is a high-resolution ECG technique used to eliminate interference noise in the recorded ECG. The algorithm uses an automatic ECG trigger function for signal-averaged transthoracic, transesophageal and intracardiac ECG signals with novel LabVIEW software (National Instruments, Austin, Texas, USA). For spectral analysis, we used the fast Fourier transform in combination with spectro-temporal mapping and the wavelet transform to obtain detailed information about the frequency and intensity of high-frequency atrial and ventricular signals.
Results: Spectro-temporal mapping and wavelet transformation of the signal-averaged ECG allowed the evaluation of high-frequency fractionated atrial signals in patients with atrial fibrillation and high-frequency ventricular signals in patients with ventricular tachycardia. The analysis in the time domain evaluated fractionated atrial signals at the end of the signal-averaged P-wave and fractionated ventricular signals at the end of the QRS complex. The analysis in the frequency domain evaluated high-frequency fractionated atrial signals during the P-wave and high-frequency fractionated ventricular signals during the QRS complex. The combination of time- and frequency-domain analysis allowed the evaluation of fractionated signals during atrial and ventricular conduction.
Conclusions: Spectral analysis of signal-averaged electrocardiography with novel LabVIEW software can be utilized to evaluate atrial and ventricular conduction delays in patients with atrial fibrillation and ventricular tachycardia. Complex fractionated atrial electrograms may be useful parameters for evaluating arrhythmogenic electrical cardiac signals in atrial fibrillation ablation.
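The core pipeline described in the abstract — averaging time-aligned epochs to suppress uncorrelated noise, then computing an FFT-based power spectrum — can be sketched in a few lines. This is an illustrative sketch only, not the study's LabVIEW implementation; the 40 Hz component, sampling rate, noise level and epoch count are arbitrary assumptions chosen for demonstration:

```python
import numpy as np

def signal_average(epochs):
    """Average time-aligned epochs; uncorrelated noise shrinks ~ 1/sqrt(N)."""
    return np.mean(epochs, axis=0)

def power_spectrum(signal, fs):
    """One-sided FFT power spectrum of a real-valued signal."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs, np.abs(spectrum) ** 2

# Synthetic example: a 40 Hz component buried in strong noise.
fs = 1000.0                       # sampling rate in Hz (assumed)
t = np.arange(0, 1.0, 1.0 / fs)   # one 1-second epoch
rng = np.random.default_rng(0)
epochs = np.array([np.sin(2 * np.pi * 40.0 * t)
                   + rng.normal(0.0, 2.0, t.size)
                   for _ in range(200)])

averaged = signal_average(epochs)           # noise suppressed ~14x
freqs, power = power_spectrum(averaged, fs)
peak_hz = freqs[np.argmax(power[1:]) + 1]   # dominant non-DC frequency
```

With 200 averaged epochs, the residual noise is small enough that the spectral peak lands on the 40 Hz component even though each single epoch is noise-dominated; this is the same rationale that motivates signal averaging before spectral analysis of low-amplitude fractionated signals.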