The 40 Altshuller Inventive Principles with their numerous sub-principles have remained for decades the most frequently applied tool of the Theory of Inventive Problem Solving (TRIZ) for systematic idea generation. However, their application often requires a concentrated, creative and abstract way of thinking that can be fairly challenging for newcomers to TRIZ. This paper describes an approach to reducing the abstraction level of inventive sub-principles and presents the results of an idea generation experiment conducted with three groups of undergraduate and graduate students from different years of study in mechanical and process engineering. The students were asked to generate and record their individual ideas for three design problems within 10 minutes, using a pre-defined set of classical and modified sub-principles. The overall outcomes of the experiment support the assumption that the less abstract wording of the modified sub-principles leads to a higher number of ideas. The distribution of ideas across the MATCHEM-IBD fields (Mechanical, Acoustic, Thermal, Chemical, Electrical, Magnetic, Intermolecular, Biological and Data processing) differs significantly between groups using modified and abstract sub-principles.
Classification of TRIZ Inventive Principles and Sub-Principles for Process Engineering Problems
(2019)
The paper proposes a classification approach for the 40 Inventive Principles with an extended set of 160 sub-principles for process engineering, based on a thorough analysis of 155 process intensification technologies, 200 patent documents, 6 industrial case studies applying TRIZ, and other sources. The authors define problem-specific sub-principle groups as a more precise and productive ideation technique, adaptable to a large diversity of problem situations, and finally examine the anticipated variety of ideation using the 160 sub-principles with the help of the MATCEM-IBD fields.
Growing demands for cleaner production and higher eco-efficiency in process engineering require a comprehensive analysis of the technical and environmental outcomes for customers and society. Moreover, unexpected additional technical or ecological drawbacks may appear as negative side effects of new environmentally friendly technologies. The paper conceptualizes a comprehensive approach for the analysis and ranking of engineering and ecological requirements in process engineering in order to anticipate secondary problems in eco-design and to avoid compromising the environmental or technological goals. For this purpose, the paper presents a method based on the integration of the Quality Function Deployment approach with the Importance-Satisfaction Analysis for requirements ranking. The proposed method identifies and comprehensively classifies the potential engineering and eco-engineering contradictions through analysis of correlations within requirements groups such as stakeholder requirements (SRs) and technical requirements (TRs), and additionally through cross-relationships between SRs and TRs.
As engineering graduates and specialists frequently lack the advanced skills and knowledge required to run eco-innovation systematically, the paper proposes new learning materials and educational tools in the field of eco-innovation and evaluates the learning experience and outcomes. The programme is aimed at strengthening students' skills and motivation to identify and creatively overcome secondary eco-contradictions in case additional environmental problems appear as negative side effects of eco-friendly solutions. The paper evaluates the efficiency of the proposed interdisciplinary tool for systematic eco-innovation, including creative semi-automatic knowledge-based idea generation and concept development. It analyses the learning experience and identifies the factors that impact the eco-innovation performance of the students.
Economic growth and ecological problems have pushed industries to switch to eco-friendly technologies. However, environmental impact is still often neglected since production efficiency remains the main concern. Patent analysis in the field of process engineering shows that, on the one hand, some eco-issues appear as secondary problems of the new technologies, and on the other hand, eco-friendly solutions often show lower efficiency or performance capability. The study categorizes typical environmental problems and eco-contradictions in the field of process engineering involving solids handling and identifies underlying inventive principles that have a higher value for environmental innovation. Finally, 42 eco-innovation methods adapting TRIZ are chronologically presented and discussed.
As engineering graduates and specialists frequently lack the advanced skills and knowledge required to run eco-innovation systematically, the paper proposes a new teaching method and appropriate learning materials in the field of eco-innovation and evaluates the learning experience and outcomes. The programme is aimed at strengthening students' skills and motivation to identify and creatively overcome secondary eco-contradictions in case additional environmental problems appear as negative side effects of eco-friendly solutions.
Based on a literature analysis and their own investigations, the authors propose to introduce a manageable number of eco-innovation tools into a standard one-semester design course in process engineering, with particular focus on the identification of eco-problems in existing technologies, the selection of appropriate new process intensification technologies (knowledge-based engineering), and systematic ideation and problem solving (knowledge-based innovation and invention).
The proposed educational approach equips students with advanced knowledge, skills and competences in the field of eco-innovation. Analysis of the students' work allows one to recommend simple-to-use tools for fast application in process engineering, such as process mapping, a database of eco-friendly process intensification technologies, and up to 20 of the strongest inventive operators for solving environmental problems. For the majority of students in the survey, even the small workload strengthened their self-confidence and skills in eco-innovation.
Enhancing engineering creativity with automated formulation of elementary solution principles
(2023)
The paper describes a method for the automated formulation of elementary creative stimuli for product or process design at different levels of abstraction and in different engineering domains. The experimental study evaluates the impact of structured automated idea generation on inventive thinking in engineering design and compares it with previous experimental studies in educational and industrial settings. The outlook highlights the benefits of using automated ideation in the context of AI-assisted invention and innovation.
Cross-industry innovation is commonly understood as the identification of analogies and the interdisciplinary transfer or copying of technologies, processes, technical solutions, working principles or models between industrial sectors. In general, creative thinking in analogies is one of the efficient ideation techniques. However, engineering graduates and specialists frequently lack the skills to think across industry boundaries systematically. To overcome this drawback, an easy-to-use method based on five analogies has been evaluated through its application by students and engineers in numerous experiments and industrial case studies. The proposed analogies help to identify and resolve engineering contradictions and to apply approaches of the Theory of Inventive Problem Solving (TRIZ) and biomimetics. The paper analyses the outcomes of the systematized analogies-based ideation and shows that its performance grows continuously with engineering experience. It defines metrics for ideation efficiency and an ideation performance function.
The paper addresses the needs of universities regarding the qualification of students as future R&D specialists in efficient techniques for successfully running the innovation process. It briefly describes the programme of a novel one-semester course of 150 hours in new product development and inventive problem solving with the TRIZ methodology, offered to master's students at the Beuth University of Applied Sciences in Berlin. The paper outlines a multi-source educational approach, which includes a new product development project (about 50% of the complete course), theory, practical work, and self-learning with software tools for computer-aided innovation, and demonstrates examples of the students' work. The research part analyses the learning experience, identifies the factors that impact the innovation and problem-solving performance of the students, and underlines the main difficulties faced by the students in the course. It describes a method for measuring education efficiency and compares the results with educational experience in industry. The presented results can help universities to establish education in new product development or to improve its performance.
CONTEXT
The paper addresses the needs of small and medium-sized businesses regarding the qualification of R&D specialists in interdisciplinary cross-industry innovation, which promises a considerable reduction of investments and R&D expenditures. Cross-industry innovation is commonly understood as the identification of analogies and the transfer of technologies, processes, technical solutions, working principles or business models between industrial sectors. However, engineering graduates and specialists frequently lack the advanced skills and knowledge required to run interdisciplinary innovation across industry boundaries.
PURPOSE
The study compares the efficiency of cross-industry innovation methods in a one-semester project-oriented course. It identifies the individual challenges and preferred working techniques of students with different prior knowledge, sets of experiences, and cultural contexts, which require attention from engineering educators.
APPROACH
Two parallel one-semester courses were offered to the mechanical and process engineering students enrolled in bachelor's and master's degree programmes at the faculty of mechanical and process engineering. The students from different years of study worked in 12 teams of 3 to 6 persons each on different innovation projects, spending two hours a week in the classroom and, on average, an additional two hours weekly on their project research. Students' feedback and self-assessments concerning gained skills, efficiency of the learned tools and intermediate findings were documented, analysed, and discussed regularly throughout the course.
RESULTS
Analysis of numerous student projects makes it possible to compare and select the tools most appropriate for finding cross-industry solutions, such as thinking in analogies, web monitoring, function-oriented search, databases of technological effects and processes, special creativity techniques and others. The utilization of the learned skills in practical innovation work strengthens the motivation of students and enhances their entrepreneurial competences. The suggested learning course and the given recommendations help facilitate the sustainable education of ambitious specialists.
CONCLUSIONS
Structured cross-industry innovation can be successfully run as a systematic process and learned in a one-semester course. The choice of preferred working techniques made by the students is affected by their prior knowledge in science, practical experience, and cultural context. Major outcomes of the students' innovation projects, such as feasibility, novelty and customer value of the concepts, are primarily influenced by the students' engineering design skills, prior knowledge of the technologies, and industrial or business experience.
The comprehensive assessment method includes 80 innovation performance parameters and 10 key indicators of innovation capability, such as innovation process performance, innovating system performance, market and customer orientation, technology orientation, creativity, leadership, communication and knowledge management, risk and cost management, innovative climate, and innovation competences. The cross-industry study identifies parameters critical for innovation success and reveals different innovation performance patterns in companies.
The paper addresses the needs of universities regarding the qualification of students as future R&D specialists in efficient techniques for successfully running the innovation process. In comparison with engineers, students often demonstrate lower motivation in learning systematic inventive techniques, such as the TRIZ methodology, and prefer random brainstorming for idea generation. The quality of the obtained solutions also depends on the completeness of the problem analysis, which is more complex and time-consuming in the case of interdisciplinary systems. The paper briefly describes a one-semester course of 60 hours in new product development with the Advanced Innovation Design Approach and the TRIZ methodology, in which a typical industrial innovation process for one selected interdisciplinary mechatronic product is modelled.
Internal crowdsourcing-based ideation within a company can be defined as the involvement of its staff (specialists, managers, and other employees) in proposing solution ideas for a pre-defined problem. This paper addresses the question of how many participants of the company-internal ideation process are required to nearly reach the ideation limit for problems with a finite number of workable solutions. To answer the research question, the author proposes a set of metrics and a non-linear ideation performance function with a positive decreasing slope and an ideation limit for closed-ended problems. Three series of experiments helped to explore the relationships between the metric attributes and resulted in a mathematical model which allows companies to predict the productivity metrics of their crowdsourcing ideation activities, such as the quantity of different ideas and the ideation limit, as a function of the number of contributors, their average personal creativity and the ideation efficiency of a contributors' group.
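The qualitative behaviour described above, a performance function with a positive, decreasing slope that approaches a finite ideation limit, can be illustrated with a minimal sketch. The saturating-exponential form, the function name, and all numbers below are illustrative assumptions, not the paper's actual model.

```python
def expected_ideas(n_contributors, ideation_limit, personal_creativity):
    """Illustrative saturating ideation curve (assumed form, not the paper's
    fitted model): each additional contributor uncovers a fixed fraction
    (personal_creativity) of the ideas still remaining below the limit."""
    return ideation_limit * (1.0 - (1.0 - personal_creativity) ** n_contributors)

# With an assumed limit of 40 workable solutions and creativity 0.1, the
# curve rises with a positive, decreasing slope and approaches the limit.
curve = [expected_ideas(n, 40, 0.1) for n in (1, 5, 20, 100)]
```

Any function with these two properties (monotone increase, bounded above) would serve the same illustrative purpose; the exponential is simply the most common closed-ended saturation model.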
The proposed method includes the identification and documentation of elementary TRIZ inventive principles from the TRIZ body of knowledge; the extension and enhancement of inventive principles through patent and technology analysis, avoiding overlapping and redundant principles; the classification and adaptation of principles to at least the following categories: working medium, target object, useful action, harmful effect, environment, information, field, substance, time, and space; and the assignment of the elementary inventive principles to at least the following underlying engineering domains: universal, design, mechanical, acoustic, thermal, chemical, electromagnetic, intermolecular, biological, and data processing. The method further includes the classification of the abstraction level of the elementary principles, the definition of a statistical ranking of principles for different problem types and specific engineering or non-technical domains, the definition of strategies for selecting principle sets with high solution potential for predefined problems, the automated semantic transformation of the elementary inventive principles into solution ideas, and the evaluation of automatically generated ideas and their transformation into innovation or inventive concepts.
Disturbances of the cardiac conduction system causing reentry mechanisms above the atrioventricular (AV) node are induced by at least one accessory pathway with different conducting properties and refractory periods. This work aims to further develop the already existing and continuously expanding Offenburg heart rhythm model in order to visualise the most common supraventricular reentry tachycardias and to provide a better understanding of the cause of the respective reentry mechanisms.
Interaction with and capturing information from the surroundings is dominated by vision and hearing. Haptics, on the other hand, widens the bandwidth and could also replace senses (sense switching) for the impaired. Haptic technologies are often limited to point-wise actuation. Here, we show that actuation in two-dimensional matrices instead creates a richer input. We describe the construction of a full-body garment for haptic communication with a distributed actuating network. The garment is divided into attachable-detachable panels, or add-ons, that can each carry a two-dimensional matrix of actuating haptic elements. Each panel adds to the enhanced sensory capability of the human-garment system, so that together a 720° system is formed. The spatial separation of the panels across different body locations supports the semantic and theme-wise separation of conversations conveyed by haptics. It also achieves directional faithfulness, that is, maintaining any directional information about a distal stimulus in the haptic input.
A polarization mode dispersion measurement set-up based on a Mach-Zehnder interferometer was realized. Measurements were carried out on short highly birefringent fibers and on long standard telecommunication single-mode fibers. In order to ensure highly accurate results, special emphasis was placed on the evaluation of the interference pattern. The procedure is described in detail and practical measurement results are presented.
Significant improvements in module performance are possible via the implementation of multi-wire electrodes. This is economically sound as long as the mechanical yield of the production is maintained. While flat ribbons have a relatively large contact area over which to exert forces onto the solar cell, wires with a round cross section reduce this contact area considerably, in theory to an infinitely thin line. Therefore, the local stresses induced by the electrodes might increase to a point where mechanical production yields suffer unacceptably.
In this paper, we assess this issue by an analytical mechanical model as well as experiments with an encapsulant-free N.I.C.E. test setup. From these, we can derive estimations for the relationship between lay-up accuracy and expected breakage losses. This paves the way for cost-optimized choices of handling equipment in industrial N.I.C.E.-wire production lines.
Radio frequency identification (RFID) antennas are popular for high frequency (HF) RFID, energy transfer and near field communication (NFC) applications. Particularly for wireless measurement systems, RFID/NFC technology is a good option for implementing a wireless communication interface. In this context, the design of the corresponding reader and transmitter antennas plays a major role in achieving suitable transmission quality. This work proves the feasibility of the rapid prototyping of an RFID/NFC antenna, which is used for wireless communication and energy harvesting at the required frequency of 13.56 MHz. A novel, low-cost direct ink writing (DIW) technology utilizing highly viscous silver nanoparticle ink is used for this process. This paper describes the development and analysis of low-cost printed flexible RFID/NFC antennas on cost-effective substrates for a microelectronic vital parameter measurement system. Furthermore, we compare the measured technical parameters with those of existing copper-based counterparts on an FR4 substrate.
Subspace clustering aims to find all clusters in all subspaces of a high-dimensional data space. We present a massively data-parallel approach that can be run on graphics processing units. It extends a previous density-based method that scales well with the number of dimensions. Its main computational bottleneck consists of (sequentially) generating a large number of minimal cluster candidates in each dimension and using hash collisions in order to find matches of such candidates across multiple dimensions. Our approach parallelizes this process by removing previous interdependencies between consecutive steps in the sequential generation process and by applying a very efficient parallel hashing scheme optimized for GPUs. This massive parallelization gives up to a 70x speedup for the bottleneck computation when it is replaced by our approach and run on current GPU hardware. We note that, depending on data size and choice of parameters, the parallelized part of the algorithm can take different percentages of the overall runtime of the clustering process, and thus the overall clustering speedup may vary significantly between different cases. However, even in our "worst-case" test, a small dataset where the computation makes up only a small fraction of the overall clustering time, our parallel approach still yields a speedup of more than 3x for the complete run of the clustering process. Our method could also be combined with parallelization of other parts of the clustering algorithm, with an even higher potential gain in processing speed.
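The hash-collision step at the heart of this bottleneck can be sketched in a few lines. The sketch below is a sequential, illustrative rendering of the idea, not the GPU-parallel implementation: every point receives a large random key, a candidate's signature is the sum of its points' keys, and candidates with identical point sets therefore collide in the hash table across dimensions. All names and the toy data are assumptions for illustration.

```python
import random
from collections import defaultdict

def match_candidates_across_dimensions(candidates_per_dim, num_points, seed=0):
    """Illustrative signature-based matching: identical point sets produce
    identical key sums, so they land in the same hash-table bucket even
    when they were generated in different dimensions."""
    rng = random.Random(seed)
    keys = [rng.getrandbits(64) for _ in range(num_points)]
    table = defaultdict(list)
    for dim, candidates in enumerate(candidates_per_dim):
        for cand in candidates:                    # cand: frozenset of point ids
            signature = sum(keys[p] for p in cand)
            table[signature].append((dim, cand))
    # Collisions spanning several dimensions hint at a subspace cluster.
    return {sig: entries for sig, entries in table.items() if len(entries) > 1}

# Toy example: points {0, 1, 2} form a dense candidate in dimensions 0 and 2.
cands = [[frozenset({0, 1, 2})], [frozenset({3, 4})], [frozenset({0, 1, 2})]]
hits = match_candidates_across_dimensions(cands, num_points=5)
```

The GPU version described in the abstract removes the sequential dependency between candidate-generation steps and replaces this dictionary with a parallel hashing scheme; the collision principle, however, is the same.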
Finding clusters in high-dimensional data is a challenging research problem. Subspace clustering algorithms aim to find clusters in all possible subspaces of the dataset, where a subspace is a subset of the dimensions of the data. However, the exponential increase in the number of subspaces with the dimensionality of the data renders most of the algorithms inefficient as well as ineffective. Moreover, these algorithms have data dependencies ingrained in the clustering process, so parallelization becomes difficult and inefficient. SUBSCALE is a recent subspace clustering algorithm which is scalable with the dimensions and contains independent processing steps which can be exploited through parallelism. In this paper, we aim to leverage, firstly, the computational power of widely available multi-core processors to improve the runtime performance of the SUBSCALE algorithm. The experimental evaluation has shown linear speedup. Secondly, we are developing an approach using graphics processing units (GPUs) for fine-grained data parallelism to accelerate the computation further. First tests of the GPU implementation show very promising results.
During the day-to-day operation of localization systems in mines, the technical staff tends to rearrange radio equipment incorrectly: positions of devices may not be accurately marked on a map, or the marked positions may not correspond to the truth. This situation may lead to positioning inaccuracies and errors in the operation of the localization system. This paper presents two Bayesian algorithms for the automatic correction of the positions of the equipment on the map using trajectories restored by inertial measurement units mounted on mobile objects, such as pedestrians and vehicles. As a basis, a predefined map of the mine represented as an undirected weighted graph was used as input. The algorithms were implemented using the Simultaneous Localization and Mapping (SLAM) approach. The results prove that both methods are capable of detecting the misplacement of access points and of providing corresponding corrections. The discrete Bayesian filter outperforms the unscented Kalman filter, which, however, requires more computational power.
This paper presents an extended version of a previously published Bayesian algorithm for the automatic correction of the positions of equipment on a map with simultaneous mobile-object trajectory localization (SLAM) in an underground mine environment represented by an undirected graph. The proposed extended SLAM algorithm requires much less preliminary data on possible equipment positions and uses an additional resample-move algorithm to significantly improve the overall performance.
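The discrete Bayesian filter used in these two papers boils down, at each step, to a multiply-and-normalise update of a belief over candidate positions (graph nodes). The sketch below shows only that generic update step under assumed names and toy numbers; the papers' actual measurement models and graph handling are much richer.

```python
def discrete_bayes_update(prior, likelihoods):
    """One generic discrete Bayes filter step over candidate access-point
    positions: weight the prior belief of each candidate node by the
    measurement likelihood and renormalise to a probability distribution."""
    posterior = [p * l for p, l in zip(prior, likelihoods)]
    total = sum(posterior)
    if total == 0.0:          # degenerate measurement: keep the prior belief
        return list(prior)
    return [p / total for p in posterior]

# Uniform belief over three candidate nodes; an assumed measurement (e.g. an
# IMU-restored trajectory passing near node 1) favours the middle node.
belief = discrete_bayes_update([1/3, 1/3, 1/3], [0.1, 0.8, 0.1])
```

Repeating this update along the restored trajectories is what lets the filter converge on corrected equipment positions.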
In medical applications, wireless technologies are not widespread. Today they are mainly used in non-latency-critical applications where reliability can be guaranteed through retransmission protocols and error correction mechanisms. Using retransmission protocols on the disturbed, shared wireless channel increases latency. Therefore, retransmission protocols are not sufficient for replacing latency-critical wired connections within operating rooms, such as foot switches. Today's research aims to improve reliability through the physical characteristics of the wireless channel by using diversity methods and more robust modulation. In this paper, an architecture for building up a reliable network is presented. The architecture offers the possibility for devices with different reliability, latency and energy consumption requirements to participate. Furthermore, reliability, latency and energy consumption are scalable for every single participant.
In this paper, we describe the first publicly available fine-grained product recognition dataset based on leaflet images. Using advertisement leaflets collected over several years from different European retailers, we provide a total of 41.6k manually annotated product images in 832 classes. Further, we investigate three different approaches to this fine-grained product classification task: classification by image, by text, and by image and text. The "Classification by Text" approach uses the text extracted directly from the leaflet product images. We show that the combination of image and text as input improves the classification of visually difficult-to-distinguish products. The final model achieves an accuracy of 96.4% with a Top-3 score of 99.2%. We release our code at https://github.com/ladwigd/Leaflet-Product-Classification.
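One common way to combine the two modalities, and only an illustrative assumption here, since the abstract does not specify the fusion mechanism, is late fusion: normalise each modality's embedding and concatenate them so a shared classifier head sees both image and text cues. The function name and toy vectors below are invented for illustration.

```python
def fuse_features(image_vec, text_vec):
    """Illustrative late fusion (assumed, not the paper's stated method):
    L2-normalise each modality's embedding so neither dominates by scale,
    then concatenate into one joint feature vector."""
    def l2_normalise(v):
        norm = sum(x * x for x in v) ** 0.5
        return [x / norm for x in v] if norm > 0 else list(v)
    return l2_normalise(image_vec) + l2_normalise(text_vec)

# A visually ambiguous product: weak image evidence, distinctive leaflet text.
fused = fuse_features([0.2, 0.1], [3.0, 4.0])
```

Whatever the actual architecture, the reported gain on visually similar products comes from exactly this kind of complementarity between the two inputs.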
New frontiers of supraventricular tachycardia and atrial flutter evaluation and catheter ablation
(2012)
Radiofrequency catheter ablation (RFCA) has revolutionized treatment for tachyarrhythmias and has become first-line therapy for some tachycardias. Although developed in the 1980s and widely applied in the 1990s, the technique is still in development. Transesophageal atrial pacing (TAP) can be used for the initiation and termination of supraventricular tachycardia (SVT).
Methods: The paroxysmal SVTs comprise a wide spectrum of disorders including, in descending order of frequency, atrial flutter, atrioventricular (AV) nodal reentry, Wolff-Parkinson-White syndrome, and atrial tachycardia. While not life-threatening in most cases, they may cause important symptoms, such as palpitations, chest discomfort, breathlessness, anxiety, and syncope, which significantly impair quality of life. Medical therapy has variable efficacy, and most patients are not rendered free of symptoms. Research over the past several decades has revealed fundamental mechanisms involved in the initiation and maintenance of all of these arrhythmias. Knowledge of mechanisms has in turn led to highly effective surgical and catheter-based treatments. The supraventricular arrhythmias and their treatment are described in this report. SVT initiation was analysed with programmed TAP in 49 patients with palpitations (age 47 ± 17 years, 24 females, 25 males).
Results: In comparison to antiarrhythmic drug therapy, radiofrequency catheter ablation is the better choice in most cases for patients suffering from atrial flutter, atrioventricular nodal reentry, atrioventricular reentry and atrial tachycardia. TAP SVT initiation was possible in 23 patients before RFCA. The atrial cycle length of SVT was 320 ± 59 ms. We initiated AV nodal reentrant tachycardia (AVNRT, n=15), atrial tachycardia (AT, n=6) and AV reentrant tachycardia with Kent pathway conduction (AVRT, n=2) before RFCA.
Conclusions: Radiofrequency catheter ablation is a successful and safe method to cure most patients with paroxysmal supraventricular tachycardias. TAP allowed initiation and termination of SVT especially in outpatients.
Active safety systems for advanced driver assistance act within a complex, dynamic traffic environment featuring various sensor systems which detect the vehicle's surroundings and interior. This paper describes recent progress towards a performance evaluation of car-to-car (C2C) communication for active safety systems, in particular for crash constellation prediction. The methodology introduced in this work is designed to evaluate the impact of different sensors on the accuracy of a crash constellation prediction algorithm. The benefit of C2C communication (viewed as a virtual sensor) within a sensor data fusion architecture for pre-crash collision prediction is explored. To this end, a simulation environment for accident scenario analysis, reproducing real-world sensor behaviour, is designed and implemented. Performance evaluation results show that C2C increases confidence in the estimated position of the oncoming vehicle. With C2C enhancement, the given accuracy in time-to-collision (TTC) estimation is achievable about 110 ms earlier for moderate velocities in the TTC range of 0.5 s to 0.2 s. The uncertainty in the vehicle position prediction at the time of collision can be reduced by about half by integrating C2C communication into the sensor data fusion.
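Time-to-collision itself is a simple kinematic quantity; the sketch below shows only the usual constant-relative-velocity estimate, while the paper's fusion architecture is of course far more elaborate. The function name and numbers are illustrative assumptions.

```python
def time_to_collision(gap_m, closing_speed_mps):
    """Constant-velocity TTC estimate: remaining gap divided by closing speed.
    Returns None when the vehicles are not closing in on each other."""
    if closing_speed_mps <= 0.0:
        return None
    return gap_m / closing_speed_mps

# A 5 m gap closing at 10 m/s yields a TTC of 0.5 s, the upper edge of the
# critical TTC window (0.5 s down to 0.2 s) examined in the evaluation.
ttc = time_to_collision(5.0, 10.0)
```

Gaining roughly 110 ms inside a window this short is substantial, which is why the C2C virtual sensor pays off for pre-crash prediction.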
ECG simulators available on the market imitate the electric activity of the heart in a simplified manner. Thus, they are suitable for education purposes but not really for testing the algorithms for recognizing complex arrhythmias needed in pacemakers and implantable defibrillators. In particular, reliable discrimination between various morphologies of atrial and ventricular fibrillation needs simulators providing native electrograms of different patients' heart rhythm events. This explains the necessity of developing an ECG simulator providing high-resolution native intracardiac and surface electrograms of in-vivo rhythm events. In this paper we demonstrate an approach to an ECG simulator based on a consumer multichannel soundcard and a corresponding software application for a laptop computer. This Live-ECG Simulator is able to handle invasive electrogram recordings from electrophysiological studies and send the data to a modified external soundcard for subsequent digital-to-analog conversion. The hardware is completed with an electronic circuit providing level adjustment to adapt the output amplitude to the input conditions of several cardiac implants.
Modelling detailed chemistry in lithium-ion batteries: Insight into performance, ageing and safety
(2018)
Multi-scale thermo-electrochemical modelling of aging mechanisms in an LFP/graphite lithium-ion cell
(2017)
The electrical field (E-field) of biventricular (BV) stimulation is important for the success of cardiac resynchronization therapy (CRT) in patients with cardiac insufficiency and a widened QRS complex.
The aim of the study was to model different pacing and ablation electrodes and to integrate them into a heart model for the static and dynamic simulation of BV stimulation and HF ablation in atrial fibrillation (AF).
The modeling and simulation were carried out using the electromagnetic simulation software CST. Five multipolar left ventricular (LV) electrodes, four bipolar right atrial (RA) electrodes, two right ventricular (RV) electrodes and one HF ablation catheter were modelled. A selection was integrated into the heart rhythm model (Schalk, Offenburg) for the electrical field simulation. The simulation of an AV node ablation at CRT was performed with RA, RV and LV electrodes and an integrated ablation catheter with an 8 mm gold tip.
BV stimulation was performed simultaneously with an amplitude of 3 V at the LV electrode and 1 V at the RV electrode, each with a pulse width of 0.5 ms. The far-field potential was 32.86 mV at the RA electrode tip and 185.97 mV at a distance of 1 mm from it. AV node ablation was simulated with an applied power of 5 W at 420 kHz at the distal ablation electrode. After 5 s of ablation, the temperature was 103.87 °C at the catheter tip and 37.61 °C at a distance of 2 mm inside the myocardium; after 15 s, the temperatures were 118.42 °C and 42.13 °C, respectively.
Virtual heart and electrode models, together with the simulations of electrical fields and temperature profiles, allow the static and dynamic simulation of atrial-synchronous BV stimulation and HF ablation in AF and could be used to optimize CRT and AF ablation.
Background: The electrical field (E-field) of the biventricular (BV) stimulation is essential for the success of cardiac resynchronization therapy (CRT) in patients with cardiac insufficiency and widened QRS complex. 3D modeling allows the simulation of CRT and high frequency (HF) ablation.
Purpose: The aim of the study was to model different pacing and ablation electrodes and to integrate them into a heart model for the static and dynamic simulation of BV stimulation and HF ablation in atrial fibrillation (AF).
Methods: Modeling and simulation were carried out using the electromagnetic simulation software. Five multipolar left ventricular (LV) electrodes, one epicardial LV electrode, four bipolar right atrial (RA) electrodes, two right ventricular (RV) electrodes and one HF ablation catheter were modeled. Different electrode models were integrated into a heart rhythm model for the electrical field simulation (fig. 1). The simulation of an AV node ablation during CRT was performed with RA, RV and LV electrodes and an integrated ablation catheter with an 8 mm gold tip.
Results: RV and LV stimulation were performed simultaneously with an amplitude of 3 V at the LV electrode and 1 V at the RV electrode, each with a pulse width of 0.5 ms. The far-field potentials generated by the BV stimulation were perceived by the RA electrode: 32.86 mV at the RA electrode tip and 185.97 mV at a distance of 1 mm from it. AV node ablation was simulated with an applied power of 5 W at 420 kHz at the distal 8 mm ablation electrode. After 5 s of ablation, the temperature was 103.87 °C at the catheter tip, 44.17 °C beyond the catheter tip in the myocardium and 37.61 °C at a distance of 2 mm. After 10 s, the temperatures at these three measuring points were 107.33 °C, 50.87 °C and 40.05 °C, and after 15 s they were 118.42 °C, 55.75 °C and 42.13 °C.
Conclusions: Virtual heart and electrode models, together with the simulations of electrical fields and temperature profiles, allow the static and dynamic simulation of atrial-synchronous BV stimulation and HF ablation in AF. The 3D simulation of the electrical field and temperature profile may be used to optimize CRT and AF ablation.
A smart energy concept was designed and implemented for a cluster of five existing multi-family houses; it combines heat pumps, photovoltaic (PV) modules and combined heat and power (CHP) units to achieve energy- and cost-efficient operation. Measurement results from the first year of operation show that local power generation by the PV modules and the CHP unit improves electrical self-sufficiency by reducing electricity imports from the grid. In winter, when the CHP unit operates continuously for long periods, the entire electricity demand of the heat pump and 91 % of the total electricity demand of the neighborhood are supplied locally; in summer, only 53 % is generated within the neighborhood. A specifically developed energy management system (EMS) is intended to further increase this share. The CO2 emissions for heating and electricity of the neighborhood amount to 18.4 kg/(m2a). Compared to the previous energy system consisting of gas boilers (29.1 kg/(m2a)), savings of 37 % are achieved, with electricity consumption from the grid reduced by 65 %. In the second construction stage, an additional heat pump, CHP unit and PV modules will be added. The measurement results indicate that the final district energy system is likely to achieve the ambitious CO2 reduction goal of 50 % and further increase the self-sufficiency of the district.
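The reported savings figure can be verified with a few lines of arithmetic. The emission values (18.4 vs. 29.1 kg CO2/(m2a)) are taken directly from the abstract; the helper function name is ours.

```python
def relative_savings(new_value, old_value):
    """Relative reduction of new_value compared to old_value, in percent."""
    return (1.0 - new_value / old_value) * 100.0

# CO2 emissions in kg/(m2 a): new smart energy concept vs. previous gas boilers
co2_savings = relative_savings(18.4, 29.1)  # ~36.8 %, i.e. the reported 37 %
```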
This work documents the rising acceptance of social robots for healthcare, as well as their growing economic potential, from 2017 to 2021. The comparison is based on two studies in the active assisted living (AAL) community. We first provide a brief overview of social robotics and a discussion of the economic potential of social health robots. We found that, despite the huge potential for robotic support in healthcare and domestic routines, social robots still lack the functionality to access that potential. At the same time, the study exemplifies a rise in acceptance: all health-related activities are more accepted in 2021 than in 2017, most of them with high statistical significance. Regarding the economic perspective, we found that respondents are aware of the influence of cultural, spiritual, or religious beliefs. Most experts (57 %), having a European background, expect the state or the government to be the key driver for establishing social robots in health, and significantly prefer leasing or renting a social health robot to buying one. Nevertheless, we speculate that a global financial elite might be the first to adopt social robots.
The number of impaired persons is rising, as a result of both regular degradation with age and psychological problems like burnout. Sheltered work organizations aim to reintegrate impaired persons into work environments and prepare them for re-entry into the regular job market.
For both elderly and impaired persons, it is crucial to quickly assess their abilities and to identify limits and potentials, and thus to find work processes suitable for their skill profile.
This work focuses on the analysis and comparison of software tools that assess the abilities of persons with impairments. We describe two established generic tools (CANTAB, Cogstate), analyze a less widely known specialized tool (Hamet) and present a new gamified tool (GATRAS).
Finally, we present a study with 20 participants with impairments, comparing the tools against a ground truth baseline generated by a real-world assembly task.
Gamifying rehabilitation is an efficient way to improve motivation and exercise frequency. However, between flow theory, self-determination theory or Bartle's player types there is much room for speculation regarding the mechanics required for successful gamification, which in turn leads to increased motivation. For our study, we selected a gamified solution for motion training (an exergame) where the playful design elements are extremely simple. The contribution is three-fold: we show best practices from the state of the art, present a study analyzing the effects of simple gamification mechanics on a quantitative and on a qualitative level and discuss strategies for playful design in therapeutic movement games.
Designing Authentic Emotions for Non-Human Characters. A Study Evaluating Virtual Affective Behavior
(2017)
While human emotions have been researched for decades, designing authentic emotional behavior for non-human characters has received less attention. However, virtual behavior not only affects game design, but also allows creating authentic avatars or robotic companions. After a discussion of methods to model and recognize emotions, we present three characters with a decreasing level of human features and describe how established design techniques can be adapted for such characters. In a study, 220 participants assessed these characters' emotional behavior, focusing on the emotion "anger". We want to determine how reliably users can recognize emotional behavior when characters increasingly neither look nor behave like humans. A secondary aim is to determine whether gender has an impact on the competence in emotion recognition. The findings indicate that there is an area of insecure attribution of virtual affective behavior not distant from, but close to, human behavior. We also found that, at least for anger, men and women assess emotional behavior equally well.
In this work we provide an overview of gamification, i.e. the application of methods from game design to enrich non-gaming processes. The contribution is divided into six subsections: an introduction focusing on the progression of gamification through the hype cycle in recent years (1), a brief introduction to gamification mechanics (2), an overview of the state of the art in established areas (3), a discussion of more recent attempts at gamification in service and production (4), the ethical implications (5) and the future perspectives (6) of gamified business processes. Gamification has been successfully applied in the domains of education (serious games) and health (exergames) and is spreading to other areas. In recent years there have been various attempts to “gamify” business processes. While the first efforts date back as far as the collection of miles in frequent flyer programs, we portray some of the more recent and comprehensive software-based approaches in the service industry, e.g. the gamification of processes in sales and marketing. We discuss their accomplishments as well as their social and ethical implications. Finally, a very recent approach is presented: the application of gamification in the domain of industrial production. We discuss the special requirements in this domain and the effects on the business level and on the users. We conclude with a prognosis on the future development of gamification.
With major intellectual properties there is a long tradition of cross-media value chains -- usually starting with books and comics, then transgressing to film and TV and finally reaching interactive media like video games. In recent years the situation has changed: (1) smaller productions start to establish cross media value chains; (2) there is a trend from sequential towards parallel content production. In this work we describe how the production of a historic documentary takes a cross media approach right from the start. We analyze how this impacts the content creation pipelines with respect to story, audience and realization. The focus of the case study is the impact on the production of a documentary game. In a second step we reflect on the experiences gained so far and derive recommendations for future small-scale cross media productions.
We present the design outline of a context-aware interactive system for smart learning in the STEM curriculum (science, technology, engineering, and mathematics). It is based on a gameful design approach and enables "playful coached learning" (PCL): a learning process enriched by gamification but also close to the learner's activities and emotional setting. After a brief introduction on related work, we describe the technological setup, the integration of projected visual feedback and the use of object and motion recognition to interpret the learner's actions. We explain how this combination enables rapid feedback and why this is particularly important for correct habit formation in practical skills training. In a second step, we discuss gamification methods and analyze which are best suited for the PCL system. Finally, emotion recognition, a major element of the final PCL design not yet implemented, is briefly outlined.
What emotional effects does gamification have on users who work or learn with repetitive tasks? In this work, we use biosignals to analyze these affective effects of gamification. After a brief discussion of related work, we describe the implementation of an assistive system that augments work by projecting elements for guidance and gamification. We also show how this system can be extended to analyze users' emotions. In a user study, we analyze both biosignals (facial expressions and electrodermal activity) and regular performance measures (error rate and task completion time).
For the performance measures, the results confirm known effects like increased speed and a slightly increased error rate. In addition, the analysis of the biosignals provides strong evidence for two major affective effects: the gamification of work and learning tasks incites highly significantly more positive emotions and increases emotionality altogether. The results contribute to the design of assistive systems that are aware of the physical as well as the affective context.
In this work, we investigate how gamification can be integrated into work processes in the automotive industry. The contribution contains five parts: (1) An introduction showing how gamification has become increasingly common, especially in education, health and the service industry. (2) An analysis on the state of the art of gamified applications, discussing several best practices. (3) An analysis of the special requirements for gamification in production, regarding both external norms and the mindset of workers in this domain. (4) An overview of first approaches towards a gamification of production, focusing on solutions for impaired workers in sheltered work organizations. (5) A study with a focus group of instructors at two large car manufacturers. Based on the presentation of three potential designs for the gamification of production, the study investigates the general acceptance of gamification in modern production and determines which design is best suited for future implementations.
Gamification implies the application of methods and design patterns from gaming to non-gaming areas like learning or working. We applied an existing gamification design to production processes in an organization which provides sheltered employment for impaired persons. In contrast to existing work, we investigated not only a short period but a complete workday to measure the effects on the work performance. The study indicates that gamification has (1) a negative effect on workers with considerable cognitive impairments, (2) no significant effect on workers with medium cognitive impairments and (3) a positive effect on workers with mild cognitive impairments.
Deafblindness is a condition that limits communication capabilities primarily to the haptic channel. In the EU-funded project SUITCEYES we design a system which allows haptic and thermal communication via soft interfaces and textiles. Based on user needs and informed by disability studies, we combine elements from smart textiles, sensors, semantic technologies, image processing, face and object recognition, machine learning, affective computing, and gamification. In this work, we present the underlying concepts and the overall design vision of the resulting assistive smart wearable.
Applications helping us to maintain focus on work are called “Zenware” (from concentration and Zen). While form factors, use cases and functionality vary, all these applications have a common goal: creating uninterrupted, focused attention on the task at hand. The rise of such tools exemplifies the users’ desire to control their attention within a context of omnipresent distraction. In expert interviews, we investigate approaches to attention management at the workplace of knowledge workers. To gain a broad understanding, we use judgement sampling in interviews with experts from several disciplines. We especially explore how focus and flow can be stimulated. Our contribution has four components: a brief overview of the state of the art (1), a presentation of the results (2), strategies for coping with digital distractions and design guidelines for future Zenware (3), and an outlook on the overall potential in digital work environments (4).
Tactile Navigation with Checkpoints as Progress Indicators?: Only when Walking Longer Straight Paths
(2020)
Persons with both vision and hearing impairments have to rely primarily on tactile feedback, which is frequently used in assistive devices. We explore the use of checkpoints as a way to give them feedback during navigation tasks. Particularly, we investigate how checkpoints can impact performance and user experience. We hypothesized that individuals receiving checkpoint feedback would take less time and perceive the navigation experience as superior to those who did not receive such feedback. Our contribution is two-fold: a detailed report on the implementation of a smart wearable with tactile feedback (1), and a user study analyzing its effects (2). The results show that in contrast to our assumptions, individuals took considerably more time to complete routes with checkpoints. Also, they perceived navigating with checkpoints as inferior to navigating without checkpoints. While the quantitative data leave little room for doubt, the qualitative data open new aspects: when walking straight and not being "overwhelmed" by various forms of feedback in succession, several participants actually appreciated the checkpoint feedback.
Towards a gamification of industrial production: a comparative study in sheltered work environments
(2015)
Using video game elements to improve user experience and user engagement in non-game applications is called "gamification". This method of enriching human-computer interaction has been applied successfully in education, health and general business processes. However, it has not been established in industrial production so far.
After discussing the requirements specific for the production domain we present two workplaces augmented with gamification. Both implementations are based on a common framework for context-aware assistive systems but exemplify different approaches: the visualization of work performance is complex in System 1 and simple in System 2.
Based on two studies in sheltered work environments with impaired workers, we analyze and compare the systems' effects on work and on workers. We show that gamification leads to a speed-accuracy tradeoff if no quality-related feedback is provided. Another finding is that there is a highly significant rise in acceptance if a straightforward visualization approach for gamification is used.
Design approaches for the gamification of production environments: a study focusing on acceptance
(2015)
Gamification is an ever more popular method to increase motivation and user experience in real-world settings. It is widely used in the areas of marketing, health and education. However, in production environments, it is a new concept. To be accepted in the industrial domain, it has to be seamlessly integrated in the regular work processes.
In this work we make the following contributions to the field of gamification in production: (1) we analyze the state of the art and introduce domain-specific requirements; (2) we present two implementations gamifying production based on alternative design approaches; (3) these are evaluated in a sheltered work organization. The comparative study focuses on acceptance, motivation and perceived happiness.
The results reveal that a pyramid design showing each work process as a step on the way towards a cup at the top is strongly preferred to a more abstract approach where the processes are represented by a single circle and two bars.
We present the design of a system combining augmented reality (AR) and gamification to support elderly persons’ rehabilitation activities. The system is attached to the waist; it collects detailed movement data and at the same time augments the user’s path by projections. The projected AR-elements can provide location-based information or incite movement games. The collected data can be observed by therapists. Based on this data, the challenge level can be more frequently adapted, keeping up the patient’s motivation. The exercises can involve cognitive elements (for mild cognitive impairments), physiological elements (rehabilitation), or both. The overall vision is an individualized and gamified therapy. Thus, the system also offers application scenarios beyond rehabilitation in sports. In accordance with the methodology of design thinking, we present a first specification and a design vision based on inputs from business experts, gerontologists, physiologists, psychologists, game designers, cognitive scientists and computer scientists.
The Effect of Gamification on Emotions - The Potential of Facial Recognition in Work Environments
(2015)
Gamification means using video game elements to improve user experience and user engagement in non-game services and applications. This article describes the effects when gamification is used in work contexts. Here we focus on industrial production. We describe how facial recognition can be employed to measure and quantify the effect of gamification on the users’ emotions.
The quantitative results show that gamification significantly reduces both task completion time and error rate. However, the results concerning the effect on emotions are surprising: without gamification there are not only more unhappy expressions (as expected) but, surprisingly, also more happy expressions. Both findings are statistically highly significant.
We think that redundant production work generally involves more (negative) emotions. Without gamification, happy and unhappy expressions balance each other; gamification, in contrast, seems to shift the spectrum of moods towards “relaxed”. Especially in work environments, such a calm attitude is a desirable effect on the users. Thus, our findings support the use of gamification.
Social robots are robots interacting with humans not only in collaborative settings, but also in personal settings like domestic services and healthcare. Some social robots simulate feelings (companions) while others just help lifting (assistants). However, they often incite both fascination and fear: what abilities should social robots have and what should remain exclusive to humans? We provide a historical background on the development of robots and related machines (1), discuss examples of social robots (2) and present an expert study on their desired future abilities and applications (3) conducted within the Forum of the European Active and Assisted Living Programme (AAL). The findings indicate that most technologies required for the social robots' emotion sensing are considered ready. For care robots, the experts approve health-related tasks like drawing blood while they prefer humans to do nursing tasks like washing. On a larger societal scale, the acceptance of social robots increases highly significantly with familiarity, making health robots and even military drones more acceptable than sex robots or child companion robots for childless couples. Accordingly, the acceptance of social robots seems to decrease with the level of face-to-face emotions involved.
Innovative technologies and concepts will emerge as we move towards a more dynamic, service-based, market-driven infrastructure, where energy efficiency and savings can be facilitated by interactive distribution networks. A new generation of fully interactive Information and Communication Technologies (ICT) infrastructure has to be developed to support the optimal exploitation of the changing, complex business processes and to enable the efficient functioning of the deregulated energy market for the benefit of citizens and businesses. The architecture of such distributed system landscapes must be designed and validated, standards need to be created and widely supported, and comprehensive, reliable IT applications will need to be implemented. The collaboration between a smart house and a smart grid is a promising approach which, with the help of ICT can fully unleash the capabilities of the smart electricity network.
Analysis of Amplitude and Phase Errors in Digital-Beamforming Radars for Automotive Applications
(2020)
Fundamentally, automotive radar sensors with Digital-Beamforming (DBF) use several transmitter and receiver antennas to measure the direction of the target. However, hardware imperfections, tolerances in the feeding lines of the antennas, coupling effects as well as temperature changes and ageing will cause amplitude and phase errors. These errors can lead to misinterpretation of the data and result in hazardous actions of the autonomous system. First, the impact of amplitude and phase errors on angular estimation is discussed and analyzed by simulations. The results are compared with the measured errors of a real radar sensor. Further, a calibration method is implemented and evaluated by measurements.
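To illustrate why per-channel amplitude and phase errors matter for angular estimation, the following sketch (our own construction, not taken from the paper) scans a conventional beamformer over an ideal uniform-linear-array snapshot and over the same snapshot distorted by random phase errors; the mismatch reduces the coherent beamforming gain and can bias the angle estimate.

```python
import numpy as np

N_ANT = 8       # virtual receive channels
D = 0.5         # element spacing in wavelengths
ANGLES = np.linspace(-90, 90, 1801)  # 0.1 deg search grid

def steering(theta_deg):
    """Steering vector of an ideal uniform linear array."""
    phase = 2j * np.pi * D * np.arange(N_ANT) * np.sin(np.radians(theta_deg))
    return np.exp(phase)

def beamform(snapshot):
    """Return (estimated angle, peak magnitude) of a conventional beamformer scan."""
    spectrum = np.abs(np.array([steering(a).conj() @ snapshot for a in ANGLES]))
    i = int(np.argmax(spectrum))
    return ANGLES[i], spectrum[i]

theta_true = 10.0
x_ideal = steering(theta_true)                        # error-free snapshot
rng = np.random.default_rng(0)
phase_err = np.radians(rng.normal(0.0, 10.0, N_ANT))  # assumed 10 deg RMS phase error
x_err = x_ideal * np.exp(1j * phase_err)              # snapshot with hardware errors
```

For the error-free snapshot the scan peaks exactly at the true angle with the full array gain of 8; with the distorted snapshot the peak is strictly lower, which in a real sensor translates into loss of SNR and angular accuracy — hence the need for the calibration method the paper evaluates.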
This paper describes a taxonomy for assessing and comparing different implementations of master data objects. A systematic breakdown of core entities provides a framework to distinguish four categories of master data objects: independent objects, dependent objects, relational objects, and reference objects that serve to attribute information. This supports the preparation of data migrations from one system to another.
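The four categories could be represented, for instance, as a simple enumeration when classifying objects during migration preparation; the type name below is our own illustration, not from the paper.

```python
from enum import Enum

class MasterDataCategory(Enum):
    """The four subdividing categories of master data objects from the taxonomy."""
    INDEPENDENT = "independent object"
    DEPENDENT = "dependent object"
    RELATIONAL = "relational object"
    REFERENCE = "reference object (serves to attribute information)"
```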
Machine-to-machine (M2M) communication is continuously extending to new application fields. Smart metering in particular has the potential to become the first truly large-scale M2M application. Although distributed meter devices will in future mainly be connected via dedicated primary communication protocols such as ZigBee or Wireless M-Bus, a major percentage of all meters will be connected via point-to-point communication using GPRS or UMTS platforms. Such meter devices therefore have to be extremely cost- and energy-efficient, especially if they are battery-based and powered for several years by a single battery. This paper presents the development of an automated measurement unit for power and time, so that energy characteristics can be recorded. The measurement unit includes a hardware platform for the device under test (DUT) and a database-based software environment for smooth execution and analysis of the measurements.
Flexible Three-dimensional Camera-based Reconstruction and Calibration of Tracked Instruments
(2016)
Navigated instruments commonly include applied parts, e.g. burrs or saw blades, that need to be calibrated with respect to the attached or integrated tracker. Since this calibration has to be very precise, it is often performed by the manufacturer. However, due to the great variety of instruments and the option to exchange the applied parts (e.g. burrs), there is a definite demand for flexible and generic calibration techniques. Furthermore, in the medical field there is also a need for calibrating sterile instruments. We propose a new and flexible camera-based calibration technique that addresses these demands by working contactlessly, precisely, and generically for a large variety of tracked instruments. This is realized using one or more tracked cameras which are calibrated with respect to an attached or integrated tracker. The tracked instrument is rotated in front of the camera(s), and its 3D geometry and surface are reconstructed from the 2D images in the coordinate system of the attached or integrated tracker. The 3D geometry of the navigated instrument was reconstructed with an accuracy of better than 0.2 mm. The radius of a sphere-shaped instrument was reconstructed with an RMS deviation of 0.015 mm.
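One standard way to recover a radius from reconstructed surface points, such as those of the sphere-shaped instrument, is an algebraic least-squares sphere fit. The paper does not disclose its exact fitting method, so the sketch below is a generic illustration under that assumption.

```python
import numpy as np

def fit_sphere(points):
    """Algebraic least-squares sphere fit to 3D points.

    Uses the identity ||p||^2 = 2 p.c + k with k = r^2 - ||c||^2, which is
    linear in the center c and the auxiliary unknown k.
    """
    points = np.asarray(points, dtype=float)
    A = np.hstack([2.0 * points, np.ones((len(points), 1))])
    b = (points ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center, k = sol[:3], sol[3]
    radius = float(np.sqrt(k + center @ center))
    return center, radius
```

On noise-free points the fit recovers center and radius exactly; on noisy reconstructions it minimizes the algebraic residual, which is usually a good starting point for a geometric refinement.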
This work describes a non-parametric camera-based method for the calibration of Optical See-Through Glasses (OSTG). Existing works model the optical system through perspective projection and parametric functions. In the border areas of the displays, such models are often inadequate. Moreover, rigid calibration patterns that produce only a small number of non-equidistant point correspondences are used. In order to overcome these disadvantages, every single display pixel is calibrated individually. Error-prone user interaction is avoided by using cameras placed behind the displays of the OSTG. The displays show a shifting pattern that is used to calculate the pixels' locations. A camera mounted rigidly on the OSTG is used to find the relations between the system components. The obtained results show better accuracies than in previous works and prove that a second calibration step for user adaptation is necessary for high-accuracy applications.
This work describes a camera-based method for the calibration of optical See-Through Glasses (STGs). A new calibration technique is introduced that calibrates every single display pixel of the STGs in order to overcome the disadvantages of a parametric model. Compared to a parametric model, a non-parametric one has the advantage that it can also map arbitrary distortions. The new generation of STGs using waveguide-based displays [5] will exhibit higher arbitrary distortions due to the characteristics of their optics. First tests show better accuracies than in previous works. Because the cameras are placed behind the displays of the STGs, no error-prone user interaction is necessary. It is shown that a high-accuracy tracking device is not necessary for a good calibration. A camera mounted rigidly on the STGs is used to find the relations between the system components. Furthermore, this work elaborates on the necessity of a second, subsequent calibration step which adapts the STGs to a specific user. First tests support the theory that this subsequent step is necessary.
AV delay (AVD) optimization is mandatory in cardiac resynchronization therapy (CRT) for heart failure, and several time-consuming methods exist. We initiated the development of a left-atrial electrogram (LAE) feature for the Biotronik ICS3000 programmer. It can be used to approximate the optimal AV delay in CRT patients with pacing systems irrespective of make and model. Using this feature, we studied the share of interatrial conduction intervals (IACT) in the individual echo AVD of 45 CRT patients (34 m, 11 f, mean age 69 ± 6 yrs). The percentage of IACT in the optimal echo AVD was 44.5 ± 22.1 % for VDD and 70.7 ± 10.9 % for DDD operation. In all patients, the optimal echo AVD exceeded the individual IACT by 52.5 ± 33.3 ms on average (p < 0.001). Therefore, if AV delay optimization is not possible or not practicable in CRT patients, the AVD should be approximated by individually measuring the IACT and adding about 50 ms.
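The resulting rule of thumb translates into a single line: approximate the optimal AVD as the individually measured IACT plus roughly 50 ms (the study's mean surplus was 52.5 ms). The function below is a hypothetical illustration of that arithmetic, not a clinical tool.

```python
def approximate_avd(iact_ms, surplus_ms=50.0):
    """Approximate the optimal AV delay (ms) from the measured interatrial
    conduction time (IACT), using the ~50 ms mean surplus reported above."""
    return iact_ms + surplus_ms
```

For a measured IACT of 120 ms, this yields an approximated AVD of about 170 ms.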
3D Bin Picking with an innovative powder filled gripper and a torque controlled collaborative robot
(2023)
A new and innovative powder-filled gripper concept is introduced for picking parts out of a box without a camera system guiding the robot to the part. The gripper is a combination of an inflatable skin and a powder inside. In the unjammed condition, the powder is soft and can adjust to the geometry of the part to be handled. By applying a vacuum to the inflatable skin, the powder jams and solidifies into the shape the gripper was brought into before the vacuum was applied. This physical principle is used to pick parts. The flexible skin of the gripper adjusts to all kinds of shapes and can therefore be used to realize 3D bin picking. With the help of a force-controlled robot, the gripper can be pushed with a consistent force onto varying positions depending on the filling level of the box. A KUKA LBR iiwa with joint torque sensors in all of its seven axes was used to achieve a constant contact pressure, which is the basic criterion for a robust picking process.
Non-fluoroscopic Imaging with MRT/CT Image Integration - Catheter Positioning with Double Precision
(2014)
Introduction: When antiarrhythmic drug therapy has failed, different approaches to pulmonary vein isolation are considered a reasonable option in the treatment of atrial fibrillation. It is performed predominantly by radiofrequency catheter ablation. As the individual anatomy of the left atrium and the pulmonary veins differs considerably, accurate visualization of these structures during catheter positioning is essential. Using a non-fluoroscopic electroanatomic mapping system with image integration, electroanatomic mapping can be combined with highly detailed anatomical MRT or CT information on complex left atrial structures. This may facilitate catheter navigation during ablation for atrial fibrillation.
Methods: The CARTO XP electroanatomic system was used in a biomedical engineering student project to practice image integration on anonymized data of real patients who underwent pulmonary vein isolation by CARTO XP and an MRT/CT procedure. Using the image integration software, MRT or CT images were imported into the CARTO XP system. The next step was segmentation of the acquired images, which involves dividing the images into different regions in order to select the structures of interest. In clinical routine, this segmentation has to be performed before catheter ablation. Then, the segmented images were aligned with the reconstructed electroanatomic maps. This consists of several steps, including selection of the left atrium, scaling of the reconstructed geometry, fusion of the structures using landmarks, and optimization of the integration by adjusting the reconstructed geometry of the left atrium.
Results: During the three-month project period, image integration was trained on 13 patients undergoing catheter ablation for atrial fibrillation. Within this period, the time required for the process decreased from about 90 minutes at the beginning to about 35 minutes per patient at the end.
Conclusion: Image integration into non-fluoroscopic electroanatomic map is a sophisticated tool in cardiac radiofrequency catheter ablation. Intensive training is necessary to control the procedure.
The sharp rise in electricity and oil prices due to the war in Ukraine has caused fluctuations in the results of the previous study on the economic analysis of electric buses. This paper shows how the increase in fuel prices affects the implementation of electric buses. It constructs a Total Cost of Ownership (TCO) model for the transition to electric buses in the small-to-mid-size city of Offenburg. The future development of costs is estimated, and a projection based on learning curves is carried out. This study introduces a new future prospect by presenting the latest data based on previous research. Through the new TCO result, the cost differences between the existing diesel bus and the electric bus are updated, and the future prospects for the economic feasibility of the electric bus in a small and mid-size city are presented.
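A TCO comparison of this kind typically adds the purchase cost to the discounted yearly operating costs. The sketch below illustrates the structure of such a calculation; all figures and the discount rate are invented for illustration and are not the paper's data:

```python
def total_cost_of_ownership(capex, annual_opex, years, discount_rate=0.03):
    """Net-present-value TCO: purchase cost plus discounted annual
    operating costs (energy or fuel, maintenance) over the service life."""
    npv_opex = sum(annual_opex / (1 + discount_rate) ** t
                   for t in range(1, years + 1))
    return capex + npv_opex

# Illustrative trade-off: higher purchase price vs. lower operating costs.
diesel = total_cost_of_ownership(capex=250_000, annual_opex=80_000, years=12)
electric = total_cost_of_ownership(capex=500_000, annual_opex=45_000, years=12)
print(diesel > electric)
```

With these invented figures the lower operating costs outweigh the higher purchase price over twelve years; changing the discount rate or energy prices shifts the break-even point.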
In many application areas, Deep Reinforcement Learning (DRL) has led to breakthroughs. In Curriculum Learning, the machine learning algorithm is not presented with examples randomly, but in a meaningful order of increasing difficulty. This has been used in many application areas to further improve the results of learning systems or to reduce their learning time. Such approaches range from learning plans created manually by domain experts to those created automatically; the automated creation of learning plans is one of the biggest challenges. In this work, we investigate an approach in which a trainer learns in parallel and analogously to the student in order to automatically create a learning plan for the student during this Double Deep Reinforcement Learning (DDRL). Three reward functions based on the learner's reward, Friendly, Adversarial, and Dynamic, are compared. The evaluation domain is kicking with variable distance, direction and relative ball position in the SimSpark simulated soccer environment. As a result, Statistic Curriculum Learning (SCL) performs better than a random curriculum with respect to training time and result quality. DDRL reaches a quality comparable to the baseline and significantly outperforms it in shorter trainings in the distance-direction subdomain, reducing the number of required training cycles by almost 50%.
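The three trainer reward schemes can be illustrated with a small sketch. The exact definitions below are plausible assumptions derived from the scheme names, not the paper's formulas:

```python
def trainer_reward(student_reward, mode, prev_student_reward=0.0):
    """Reward for the task-setting trainer agent, derived from the
    student's reward under one of three schemes:
      friendly    - trainer gains when the student succeeds,
      adversarial - trainer gains when the student struggles,
      dynamic     - trainer gains when the student improves."""
    if mode == "friendly":
        return student_reward
    if mode == "adversarial":
        return -student_reward
    if mode == "dynamic":
        return student_reward - prev_student_reward
    raise ValueError(f"unknown mode: {mode}")

# A student improving from 0.2 to 0.5 yields a positive dynamic reward.
print(trainer_reward(0.5, "dynamic", prev_student_reward=0.2))
```

Under the dynamic scheme the trainer is rewarded for picking tasks at the edge of the student's ability, which is the usual motivation for learned curricula.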
Active participation of industrial enterprises in electricity markets - a generic modeling approach
(2021)
Industrial enterprises represent a significant portion of electricity consumers with the potential of providing demand-side energy flexibility from their production processes and on-site energy assets. Methods are needed for the active and profitable participation of such enterprises in the electricity markets especially with variable prices, where the energy flexibility available in their manufacturing, utility and energy systems can be assessed and quantified. This paper presents a generic model library equipped with optimal control for energy flexibility purposes. The components in the model library represent the different technical units of an industrial enterprise on material, media, and energy flow levels with their process constraints. The paper also presents a case study simulation of a steel-powder manufacturing plant using the model library. Its energy flexibility was assessed when the plant procured its electrical energy at fixed and variable electricity prices. In the simulated case study, flexibility use at dynamic prices resulted in a 6% cost reduction compared to a fixed-price scenario, with battery storage and the manufacturing system making the largest contributions to flexibility.
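The effect of variable prices on flexibility use can be illustrated with a simple shiftable-load scheduler. This greedy sketch, with hypothetical prices and limits and no process constraints, is far simpler than the optimal control in the model library:

```python
def schedule_flexible_load(prices, energy_kwh, max_kwh_per_hour):
    """Greedy schedule: fill the cheapest hours first, up to the
    per-hour limit, until the total energy demand is met."""
    plan = [0.0] * len(prices)
    remaining = energy_kwh
    for hour in sorted(range(len(prices)), key=lambda h: prices[h]):
        take = min(max_kwh_per_hour, remaining)
        plan[hour] = take
        remaining -= take
        if remaining <= 0:
            break
    if remaining > 0:
        raise ValueError("demand exceeds schedulable capacity")
    return plan

# Hypothetical day-ahead prices in EUR/kWh; the cheapest hours
# (indices 3 and 1) absorb the 6 kWh of shiftable demand.
print(schedule_flexible_load([0.30, 0.10, 0.25, 0.08], 6, 4))
```

Real industrial flexibility additionally respects material flows, storage states, and process constraints, which is what the model library's optimal control captures.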
Diffracted waves carry high-resolution information that can help interpret fine structural details at a scale smaller than the seismic wavelength. Because of the low signal-to-noise ratio of diffracted waves, it is challenging to preserve them during processing and to identify them in the final data. The traditional approach is therefore to pick the diffractions manually. However, this task is tedious and often prohibitive; thus, current attention is given to domain adaptation. These methods aim to transfer knowledge from a labeled domain used to train the model, and then infer on the real unlabeled data. In this regard, it is common practice to create a synthetic labeled training dataset, followed by testing on unlabeled real data. Unfortunately, such a procedure may fail due to the gap between the synthetic and the real distribution: synthetic data quite often oversimplifies the problem, and consequently the transfer learning becomes a hard and non-trivial procedure. Furthermore, deep neural networks are characterized by their high sensitivity towards cross-domain distribution shift. In this work, we present a deep learning model that builds a bridge between both distributions, creating a semi-synthetic dataset that fills in the gap between the synthetic and real domains. More specifically, our proposal is a feed-forward, fully convolutional neural network for image-to-image translation that allows inserting synthetic diffractions while preserving the original reflection signal. A series of experiments validates that our approach produces convincing seismic data containing the desired synthetic diffractions.
The recent successes and widespread application of compute-intensive machine learning and data analytics methods have been boosting the usage of the Python programming language on HPC systems. While Python provides many advantages for the users, it has not been designed with a focus on multi-user environments or parallel programming, making it quite challenging to maintain stable and secure Python workflows on an HPC system. In this paper, we analyze the key problems induced by the usage of Python on HPC clusters and sketch appropriate workarounds for efficiently maintaining multi-user Python software environments, securing and restricting resources of Python jobs, and containing Python processes, while focusing on Deep Learning applications running on GPU clusters.
Due to the rapidly increasing storage consumption worldwide, as well as the expectation of continuous availability of information, the complexity of administration in today's data centers is growing constantly. Integrated techniques for monitoring hard disks can increase the reliability of storage systems. However, these techniques often lack intelligent data analysis to perform predictive maintenance. To solve this problem, machine learning algorithms can be used to detect potential failures in advance and prevent them. In this paper, an unsupervised model for predicting hard disk failures based on Isolation Forest is proposed. A method is presented that can deal with highly imbalanced datasets, as the experiment on the Backblaze benchmark dataset demonstrates.
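The isolation principle behind the model can be sketched from scratch in a few lines: anomalous samples are separated by random splits after unusually short paths, so short average path lengths translate into high anomaly scores. This is a simplified one-dimensional toy, not the paper's model and not scikit-learn's `IsolationForest`:

```python
import math
import random

EULER = 0.5772156649015329  # Euler-Mascheroni constant

def _c(n):
    """Average path length of an unsuccessful BST search --
    the standard isolation-forest normalization term."""
    if n <= 1:
        return 0.0
    return 2.0 * (math.log(n - 1) + EULER) - 2.0 * (n - 1) / n

def _path_length(x, data, depth, max_depth, rng):
    """Follow random splits until x is isolated or the depth cap is hit."""
    if depth >= max_depth or len(data) <= 1:
        return depth + _c(len(data))
    lo, hi = min(data), max(data)
    if lo == hi:
        return depth + _c(len(data))
    split = rng.uniform(lo, hi)
    side = [v for v in data if (v < split) == (x < split)]
    return _path_length(x, side, depth + 1, max_depth, rng)

def anomaly_score(x, data, n_trees=200, seed=0):
    """Score in (0, 1); values close to 1 indicate likely anomalies."""
    rng = random.Random(seed)
    max_depth = math.ceil(math.log2(max(len(data), 2)))
    mean_path = sum(_path_length(x, data, 0, max_depth, rng)
                    for _ in range(n_trees)) / n_trees
    return 2.0 ** (-mean_path / _c(len(data)))

# 20 healthy readings plus one extreme outlier (e.g. a SMART attribute):
readings = [i / 10 for i in range(20)] + [10.0]
print(anomaly_score(10.0, readings) > anomaly_score(0.5, readings))  # -> True
```

Because the model needs no failure labels at training time, it sidesteps the extreme class imbalance that plagues supervised failure prediction.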
The authors present an abiotically catalyzed glucose fuel cell and demonstrate its application as an energy-harvesting power source for a cardiac pacemaker. This is enabled by an optimized DC-DC converter operating at 40% conversion efficiency, which surpasses commercial low-power DC-DC converters. The required fuel cell surface area can thus be reduced from about 125 cm² to 18 cm², which would allow its direct integration onto the pacemaker casing.
In this paper we present the implementation of a model-predictive controller (MPC) for real-time control of a cable-robot-based motion simulator. The controller computes control inputs such that a desired acceleration and angular velocity at a defined point in the simulator's cabin are tracked while satisfying constraints imposed by the working space and allowed cable forces of the robot. In order to fully use the simulator's capabilities, we propose an approach that includes the motion platform actuation in the MPC model. The tracking performance and computation time of the algorithm are investigated in computer simulations. Furthermore, for motion simulation scenarios where the reference trajectories are not known beforehand, we derive an estimate of how much motion simulation fidelity can maximally be improved by any reference prediction scheme compared to the case when no prediction scheme is applied.
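A receding-horizon controller of this kind can be sketched for a one-dimensional double integrator: minimize the acceleration-tracking error over a short horizon subject to workspace (position) and actuator (input) limits, then apply only the first input. The brute-force enumeration below is a toy stand-in for the real cable-force-constrained optimization, with all numbers invented:

```python
import itertools

def mpc_step(x, v, ref_acc, dt=0.1, horizon=3,
             u_levels=(-2.0, -1.0, 0.0, 1.0, 2.0),
             x_bounds=(-1.0, 1.0)):
    """One receding-horizon step for a 1-D double integrator:
    enumerate all discretized input sequences, keep those that stay
    inside the workspace, pick the one that best tracks the desired
    acceleration, and return its first input."""
    best_u, best_cost = 0.0, float("inf")
    for seq in itertools.product(u_levels, repeat=horizon):
        px, pv, cost, feasible = x, v, 0.0, True
        for u in seq:
            pv += u * dt           # velocity update
            px += pv * dt          # position update
            if not (x_bounds[0] <= px <= x_bounds[1]):
                feasible = False   # leaves the workspace: discard
                break
            cost += (u - ref_acc) ** 2
        if feasible and cost < best_cost:
            best_cost, best_u = cost, seq[0]
    return best_u

# From rest in the middle of the workspace, the reference is tracked exactly.
print(mpc_step(x=0.0, v=0.0, ref_acc=1.0))  # -> 1.0
```

Near the workspace boundary the enumeration automatically trades tracking accuracy for feasibility, which is the essential behavior of the constrained MPC described above; a real implementation would solve a quadratic program instead of enumerating inputs.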
The monitoring of industrial environments ensures that highly automated processes run without interruption. However, even if the industrial machines themselves are monitored, the communication lines are currently not continuously monitored in today's installations. They are usually checked only during maintenance intervals or in case of error. In addition, the cables or connected machines usually have to be removed from the system for the duration of the test. To overcome these drawbacks, we have developed and implemented a cost-efficient and continuous signal monitoring of Ethernet-based industrial bus systems. Several methods have been developed to assess the quality of the cable. These methods can be classified as either passive or active. Active methods are not suitable if interruption of the communication is undesired. Passive methods, on the other hand, require oversampling, which calls for expensive hardware. In this paper, a novel passive method combined with undersampling targeting cost-efficient hardware is proposed.
Live streaming of events over an IP network as a catalyst in media technology education and training
(2020)
The paper describes how students are involved in applied research when setting up the technology and running a live event. Real-time IP transmission in broadcast environments via fiber optics will become increasingly important in the future. Therefore, it is necessary to create a platform in this area where students can learn how to handle IP infrastructure and fiber optics. With this in mind, we have built a fully functional TV control room that is completely IP-based. The authors present the steps in the development of the project and show the advantages of the proposed digital solutions. The IP network fosters synergy between the teams involved: the participants of the robot competition and the members of the media team. These results are presented in the paper. Our activities aim to awaken enthusiasm for research and technology in young people. Broadcasts of live events are a good opportunity for "hands-on" activities.
Temperature regulation is an important component of modern high-performance single-core and multi-core processors. Especially high operating frequencies and architectures with an increasing number of monolithically integrated transistors result in high power dissipation and, since processor chips convert the consumed electrical energy into thermal energy, in high operating temperatures. High operating temperatures of processors can have drastic consequences regarding chip reliability, processor performance, and leakage currents. External components like fans or heat spreaders can help to reduce the processor temperature, with the disadvantage of additional costs and reduced reliability. Therefore, software-based algorithms for dynamic temperature management are an attractive alternative, well known as Dynamic Thermal Management (DTM). However, the existing approaches to DTM do not take into account the requirements of real-time embedded computing, which is the objective of the given project. The first steps are the profiling and thermal modeling of the system, which is reported in this paper for a Freescale i.MX6Q quad-core microprocessor. An analytical model is developed and verified by an extensive set of measurement runs.
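A common starting point for such thermal modeling is a lumped first-order RC element, where the die temperature relaxes towards the ambient temperature plus the product of power and thermal resistance. The sketch below uses purely illustrative parameters, not values identified for the i.MX6Q:

```python
def simulate_temperature(power_w, r_th=1.5, c_th=8.0, t_amb=25.0,
                         dt=0.1, steps=600):
    """Explicit-Euler simulation of the first-order thermal RC model
    C * dT/dt = P - (T - T_amb) / R.
    Returns the die-temperature trace for constant power dissipation;
    the steady state approaches T_amb + P * R."""
    temp, trace = t_amb, []
    for _ in range(steps):
        temp += dt * (power_w - (temp - t_amb) / r_th) / c_th
        trace.append(temp)
    return trace

# With P = 10 W and R = 1.5 K/W the temperature settles near 40 degC.
trace = simulate_temperature(power_w=10.0)
print(round(trace[-1], 1))
```

A DTM policy built on such a model would throttle the clock whenever the predicted temperature approaches a critical threshold; the real paper identifies the model parameters from measurement runs instead of assuming them.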
Experiences with a telecare platform integration of ZigBee sensors into a middleware platform
(2012)
Today, thermoforming moulds are mostly produced using conventional mould-building technologies (e.g. milling and drilling) and are made of metal (e.g. aluminium or steel) or hardwood. The tools thus produced are very robust, but are only cost-effective in mass production. For the production of small batches of thermoformed parts, there is a need for moulds which can be produced quickly and economically. A new approach which significantly reduces the production time and cost is the 3D printing process (3DP). The use of this technology to produce thermoforming moulds offers many new options in the geometries which can be manufactured, and in manufacturing time and costs. In a case study of a thermoformed part (a scaled automotive model), the pre-processing of the CAD model of a mould is demonstrated. The mould can be printed within a few hours, and is sufficiently heat-resistant for moulding processes. The important advantages of moulds printed in 3D, in comparison to moulds built using conventional technologies, are the ability to create any shape of channels for the vacuum and the simplification in the production of tool mock-ups. This paper also discusses the economics of the technique, such as a comparison of material costs and manufacturing costs in relation to conventional production technologies and materials.
Besides conventional CAD systems, new, cloud-based CAD systems have also been available for some years. These CAD systems, designed according to the principle of software as a service (SaaS), differ in some important features from conventional CAD systems. They are operated via a browser, and it is not necessary to install the software on a computer. The CAD data is stored in the cloud and not on a local computer or central server. This new approach should also facilitate the sharing and management of data. Finally, many of these new CAD systems are available as freeware for education purposes, so universities can save license costs. The chances and risks of cloud-based systems are first analyzed in this paper. Then two leading cloud-based CAD systems are researched. In the process, the technical performance range these new systems offer for product development is checked and reviewed. For this purpose, various criteria are worked out and the CAD software is evaluated using these criteria. In addition, the criteria are weighted by their importance for design education. This allows one to conclude which capabilities the different CAD systems offer for use in education.
Implementation of lightweight design in the product development process of unmanned aerial vehicles
(2017)
The development and manufacturing of unmanned aerial vehicles (UAVs) require a multitude of design rules. Here, additive manufacturing (AM) processes provide a number of significant advantages over conventional production methods, particularly for implementing requirements with regard to lightweight construction and sustainability. A new, promising approach is presented in which very light structural elements are combined with a ribbed construction and an attached foil covering. This contribution develops and presents a development process that is based on various development cycles. Such cycles differ in their effort and scope within the overall development, and may comprise only one part of the development process, or the entire development process. The applicability of this development process is demonstrated within the framework of a comprehensive case study. The aim is to develop an additively manufactured product that is as light as possible in the form of a UAV, along with a sustainable manufacturing process for this product. Finally, the results of this case study are analyzed with regard to the improvement of lightweight construction.
A number of design rules must be adhered to in the development and manufacturing of unmanned aerial vehicles. Here, additive manufacturing, particularly in the implementation of requirements with respect to lightweight construction and sustainability, offers several advantages compared to conventional manufacturing methods. Therefore, this article will primarily introduce and compare current concepts for sustainable design using additive manufacturing. These consist, above all, of the production of complete fuselages and wings by means of rapid prototyping or rapid tooling. In addition, a new concept is introduced in which a UAV can be realized using AM through the combination of very light components and a resource-saving manufacturing method. In this process, a three-dimensional spaceframe is used in combination with a covering in the construction of the wing. The development process for sustainable design using additive manufacturing is analyzed, and the results are explained by means of concrete case studies. In conclusion, the results of these case studies are compared to the state of the art regarding wing span load.
The fast and cost-effective manufacturing of tools for thermoforming is an essential requirement to shorten the development time of products. Thus, additive processes are used increasingly in tooling for the thermoforming of plastic sheets. However, a disadvantage of many additive methods is that they are highly cost-intensive, since complex systems based on laser technology and expensive metal powders are needed. Therefore, this paper examines how to work with lower-cost additive methods, e.g. Binder Jetting, to manufacture tools which provide sufficient strength for thermoforming. The use of comparatively low-priced inkjet technology for the layer construction and a polymer plaster as material can be expected to result in significant cost reductions. Based on a case study using a cowling (engine bonnet) for an Unmanned Aerial Vehicle (UAV), the development of a complex tool for thermoforming is demonstrated. The objective of this study is to produce a tool for a complex-shaped component in small numbers and high quality, in a short time and at reasonable cost. Within the tooling process, integrated vacuum channels are implemented in additive tooling without the need for additional post-processing (for example, drilling). In addition, special technical challenges, such as the demolding of undercuts or the parting of the tool, are explained. All process steps from tool design to the use of the additively manufactured tool are analyzed. Based on the manufacturing of a small series of cowlings for a UAV made of plastic sheets (ABS), it is shown that Binder Jetting offers sufficient mechanical and thermal strength for additive tooling. In addition, an economic evaluation of the tool manufacturing and a detailed consideration of the required manufacturing times for the different process steps are carried out. Finally, a comparison is made with conventional and alternative additive methods of tooling.
Due to globalization and the resulting increase in competition on the market, products must be produced more and more cheaply, especially in series production, because buyers expect new variants or even completely new products in ever shorter cycles. Injection molding is the most important production process for manufacturing plastic components in large quantities. However, the conventional production of a mold is extremely time-consuming and costly, which contradicts the fast pace of the market. Additive tooling is an area of application of additive manufacturing which, in the field of injection molding, is preferably used for the prototype production of mold inserts. This allows injection molding tools to be produced faster and more cheaply than through the subtractive manufacturing of metal tools. Material Jetting processes using polymers (MJT-UV/P), also called Polyjet Modeling (PJM), have great potential for use in additive tooling. Due to the poorer mechanical and thermal properties compared to conventional mold insert materials, e.g. steel or aluminum, the previously used design principles cannot be applied. Accordingly, new design guidelines are necessary, which are developed in this paper. The necessary information is obtained with the help of a systematic literature review. The design guidelines are compiled in a uniform design guide, which is structured according to the design process of injection molds. The guidelines refer not only to the constructive design of the injection mold or the polymer mold insert, but to the entire design process, and describe the four phases of planning, conception, development and realization. Particular attention is paid to the special geometric designs of a polymer mold insert and the thermomechanical properties of the mold insert materials. As a result, design guidelines are available that are adapted to the special requirements of additive tooling of mold inserts made of plastics for injection molding.
Additive manufacturing (AM), or 3D printing (3DP), has become increasingly widespread and widely applied over the past years. Along with that, the necessity rises for training courses which impart the knowledge required for product development with 3D printing. This article introduces a "Rapid Prototyping" workshop which conveys to students the technical and creative knowledge for product development using additive manufacturing. In this workshop, various 3D printers are initially installed and put into operation from self-assembly kits during the training course. Afterwards, the students use databases to select and download components suitable for 3D printing on the basis of defined criteria. Lastly, the students develop several assembly kits independently and establish design guidelines based on their experience. The students likewise learn to estimate and evaluate economic boundaries such as costs and delivery times. Using various self-assembly kits is a new approach; these are up to date with current technology and offer features such as additional nozzles for support material and heated build platforms. Moreover, a comprehensive evaluation of the training success is conducted. The students' level of knowledge in various areas is determined and compared with surveys taken before and after the workshops. Additionally, cost and delivery time estimates and knowledge of databases are determined through specific questions.
In the development of new vehicles, increasing customer comfort requirements and rising safety regulations often result in an increase in weight. Nevertheless, in order to meet the demand for reduced fuel consumption, it is necessary within the product development process to implement complex and filigree lightweight structures. This contribution therefore addresses the potential of generatively developed components for fiber-reinforced additive manufacturing (FRAM). Currently, several commercial systems for this application are available on the market. Therefore, a comparison of the systems is first made to determine a suitable system. Then, a highly stressed and safety-relevant chassis component of a race car is generatively designed and manufactured using FRAM. A matrix with short-fiber reinforcement and additional long-fiber reinforcement with carbon fibers is applied. Finally, tensile tests are carried out to verify the mechanical properties. In addition, relevant properties such as weight and cost are determined in order to compare them with conventionally developed and manufactured components.
Direct Digital Manufacturing of Architectural Models using Binder Jetting and Polyjet Modeling
(2019)
Today, architectural models are an important tool for illustrating drawn-on plans or computer-generated virtual models and making them understandable. In addition to the conventional methods for the manufacturing of physical models, a wide range of processes for Direct Digital Manufacturing (DDM) has spread rapidly in recent years. In order to facilitate the application of these new methods for architects, this contribution examines which technical and economic results are possible using 3D-printed architectural models. Within a case study, it will be shown on the basis of a multi-storey detached house which kind of data preparation is necessary. The DDM of architectural models will be demonstrated using two widespread techniques and the resulting costs will be compared.
In addition to traditional methods in product development, the increasing availability of two new technologies, namely additive manufacturing (AM, e.g. 3D-printing) and reverse engineering (RE) by means of 3D-scanning, offers new opportunities in product development processes today. However, to date only very few approaches exist that include these new technologies systematically in the education of students in the field of product development. This paper explores several ways in which AM and RE can productively be used in education. New to this approach is, on the one hand, that the students assemble and install the 3D-printers themselves, and on the other hand, that they are introduced to an approach that combines 3D-scanning followed by 3D-printing. Different case studies demonstrate that students in design education are able to autonomously research and realize technical possibilities and limitations of these technologies, as well as economic parameters and constraints.
Additive Manufacturing and Reverse Engineering have increasingly been gaining in importance over the past years. This paper investigates the current status of the implementation of these new technologies in design education and also identifies current shortcomings. Then it develops two new approaches for the teaching of the necessary expertise for the design of 3D-printed components and illustrates these with case studies. First, a workshop is presented in which students gain a broad understanding for the functionalities of additive manufacturing and the creative possibilities and limits of this process, through the assembly and installation of a 3D-printer. A second new approach is the combination of reverse engineering and 3D-printing. Thereby, students learn how to deal with this complex process chain. The result of these new approaches can e.g. be seen in the design guidelines for Additive Manufacturing, which were developed by the students themselves. At the same time, the students are able to estimate opportunities and limits of both technologies. Finally, the success of the new course contents and form is reviewed by an evaluation by the students.
This paper presents a new approach for teaching competence in additive manufacturing to engineering students in product development. Particularly new to this approach is the combination of the students' autonomous assembly and commissioning of a 3D-printer with the independent development of design guidelines for this new technology. This way the students are able to gain first practical experience with data preparation, the additive manufacturing process itself, and the required post-treatment of the 3D-printed parts. To allow the students a significantly deeper insight into the functioning of 3D-printing, a new approach was developed in the Rapid Prototyping workshop, in the course of which the students first assemble a construction kit for a 3D-printer themselves and then commission the printer. This enables the students to gain a better understanding of the functionality and configuration of additive manufacturing. In a next step, the students used the 3D-printers they had constructed themselves to produce components taken from a database. Finally, the experiences of the students in the course of the workshop are evaluated to review the effectiveness of the new approach.