This paper describes the Sweaty II adult-size humanoid robot, which aims to qualify for the RoboCup 2017 adult-size humanoid competition. Sweaty came second in the RoboCup 2016 adult-size league. The paper describes the main characteristics of Sweaty that made this success possible, as well as the improvements that have been made or are planned for RoboCup 2017.
In this TDP we describe a new tool created for testing the strategy layer of our soccer playing agents. It is a complete 2D simulator that simulates the games based on the decisions of 22 agents. With this tool, debugging the decision and strategy layer of our agents is much more efficient than before due to various interaction methods and complete control over the simulation.
In the future, the tool could also serve as a measure to run simulations of game series much faster than with the 3D simulator. This way, the impact of different play strategies could be evaluated much faster than before.
Technology and computer applications influence our daily lives, and questions arise concerning the role of artificial intelligence and decision-making algorithms. There are warning voices claiming that computers can, in theory, emulate human intelligence, and even exceed it. This paper argues that a replacement of humans by computers is unlikely, because human thinking is characterized by cognitive heuristics and emotions, which cannot simply be implemented in machines operating with algorithms, procedural data processing or artificial neural networks. However, we are going to share our responsibilities with superior computer systems, which track and survey all of our digital activities, while we have no insight into the decision-making processes inside the machines. It is shown that we need a new digital humanism that defines rules for computer responsibilities, in order to avoid digital totalitarianism and the comprehensive monitoring and control of individuals across the planet.
Qualitative research, artistic research and research-based learning combine insight from practice and experience. Through an autoethnography of the author's own listening workshop, as well as of the culture in other studios, the still-young interdiscipline of sound (studies) is tested and deepened, with impulses for practice and theory: from today's still little-known a/r/tography towards a future A/R/Tophony — artistic research in music — as well as through sound composition, radio art and visual music.
This contribution develops an electricity market simulation model for the model-based analysis of the provision of flexibility on the electricity and balancing power markets in Germany. The model represents two central, parallel competitive markets in which actors can trade by determining their bids individually. The bidding logic developed for this purpose is explained in detail, with a focus on the flexibility of fossil-thermal power plants. A subsequent comparison with real market prices shows that the chosen methodology and bidding logic reflect the existing market and its outcomes appropriately, so that in the future a wide range of flexibility scenarios can be analyzed and statements about their effects on the market and its actors can be made.
The paper describes the hardware and software architecture of the developed multi-MEMS-sensor prototype module, consisting of an ARM Cortex-M4 STM32F446 microcontroller unit, five 9-axis inertial measurement units MPU9255 (3D accelerometer, 3D gyroscope, 3D magnetometer and temperature sensor) and a BMP280 barometer. The module is also equipped with a WiFi wireless interface (Espressif ESP8266 chip). The module is constructed in the form of a truncated pyramid: the inertial sensors are mounted on a special base at different angles to each other to eliminate hardware sensor drifts and to provide the capability for self-calibration. The module fuses the information obtained from all types of inertial sensors (acceleration, rotation rate, magnetic field and air pressure) in order to calculate orientation and trajectory. It can be used as an Inertial Measurement Unit, Vertical Reference Unit, or Attitude and Heading Reference System.
The low cost and small size of MEMS inertial sensors allow their combination into a multi-sensor module in order to improve performance. However, the different linear accelerations measured at different places on a rotating rigid body have to be considered for the proper fusion of the measurements. The measurement errors of MEMS inertial sensors include deterministic imperfections as well as random noise. The gain in accuracy from using multiple sensors depends strongly on the correlation between the errors of the different sensors. Although sensor fusion usually assumes that the measurement errors of different sensors are uncorrelated, estimation theory shows that for the combination of sensors of the same type a negative correlation is actually more beneficial. We therefore describe some important and often neglected considerations for the combination of several sensors, and also present some preliminary results regarding the correlation of measurements from a simple multi-sensor setup.
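The estimation-theoretic point about error correlation can be made concrete with a short sketch (illustrative only; the numbers are ours, not from the module): the variance of the average of n equally noisy sensors with noise variance sigma2 and pairwise error correlation rho is sigma2 * (1 + (n - 1) * rho) / n, so negative correlation improves on the usual 1/n gain.

```python
def variance_of_mean(sigma2: float, n: int, rho: float) -> float:
    """Variance of the average of n sensors with equal noise variance
    sigma2 and pairwise error correlation rho (classical result):
        Var = sigma2 * (1 + (n - 1) * rho) / n
    rho is bounded below by -1 / (n - 1)."""
    return sigma2 * (1.0 + (n - 1) * rho) / n

# Five gyroscopes with unit noise variance (hypothetical values):
uncorrelated = variance_of_mean(1.0, 5, 0.0)    # 0.2 -> the usual 1/n gain
positive     = variance_of_mean(1.0, 5, 0.5)    # 0.6 -> correlation eats the gain
negative     = variance_of_mean(1.0, 5, -0.25)  # 0.0 -> errors cancel at the bound
```

The last line shows why a slightly negative correlation between same-type sensors is preferable: at rho = -1/(n-1) the averaged error vanishes entirely.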
A novel approach to a testbed for embedded networking nodes has been conceptualized and implemented. It is based on the use of virtual nodes in a PC environment, where each node executes the original embedded code. Different nodes run in parallel and are connected via so-called virtual interfaces. The presented approach is very efficient and allows a simple description of test cases without the need for a network simulator. Furthermore, it speeds up the process of developing new features.
Due to climate change and the scarcity of water reservoirs, the monitoring and control of irrigation systems is becoming a major focal area for researchers in Cyber-Physical Systems (CPS). Wireless Sensor Networks (WSNs) are rapidly finding their way into the field of irrigation and play a key role as the data-gathering technology in the domain of IoT and CPS. They are efficient for reliable monitoring, giving farmers an edge in taking precautionary measures. However, designing an energy-efficient WSN system requires a cross-layer effort, and energy-aware routing protocols play a vital role in the overall energy optimization of a WSN. In this paper, we propose a new hierarchical routing protocol suitable for large-area environmental monitoring, such as the large-scale irrigation network in the Punjab province of Pakistan. The proposed protocol resolves the issues faced by traditional multi-hop routing protocols such as LEACH, M-LEACH and I-LEACH, and enhances the lifespan of each WSN node, which results in an increased lifespan of the whole network. We used the open-source NS3 simulator for simulation purposes, and the results indicate that our proposed modifications yield an average 27.8% increase in the lifespan of the overall WSN compared to the existing protocols.
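For reference, the baseline LEACH cluster-head election that the proposed protocol improves upon can be sketched as follows (a minimal sketch of the classic threshold rule from Heinzelman et al.; the paper's own modifications are not reproduced here):

```python
import random

def leach_threshold(p: float, r: int) -> float:
    """Classic LEACH cluster-head election threshold for round r,
    given the desired cluster-head fraction p. The epoch length is
    1/p rounds; the threshold rises as the epoch progresses so that
    every node serves as cluster head once per epoch on average."""
    return p / (1.0 - p * (r % int(1.0 / p)))

def elects_cluster_head(p: float, r: int, was_head_this_epoch: bool) -> bool:
    """A node that already served as cluster head in the current
    epoch uses a threshold of 0; otherwise it draws a uniform random
    number and compares it against the round's threshold."""
    if was_head_this_epoch:
        return False
    return random.random() < leach_threshold(p, r)
```

With p = 0.1, the threshold starts at 0.1 in round 0 and grows to 1.0 by round 9, which is the rotation behavior that energy-aware variants such as the proposed protocol refine.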
Climate change and the resultant scarcity of water are becoming major challenges for countries around the world. With the advent of Wireless Sensor Networks (WSN) in the last decade and the relatively new concept of the Internet of Things (IoT), embedded systems developers are now designing control and automation systems that are lower in cost and more sustainable than the existing telemetry systems for monitoring. The Indus river basin in Pakistan has one of the world's largest irrigation systems, and it is extremely challenging to design a low-cost embedded system for the monitoring and control of waterways that can last for decades. In this paper, we present a hardware design and performance evaluation of a smart water metering solution that is IEEE 802.15.4-compliant. The results show that our hardware design is as powerful as the reference design but allows for additional flexibility both in hardware and in firmware. The indigenously designed solution has a power-added efficiency (PAE) of 24.7% and is expected to last for 351 and 814 days for nodes with and without a power amplifier (PA), respectively. Similarly, the results show that communication in the 434 MHz band over more than 3 km can be supported, which is an important stepping stone towards designing a complete coverage solution for large-scale waterways.
The electrical field (E-field) of the biventricular (BV) stimulation is important for the success of cardiac resynchronization therapy (CRT) in patients with cardiac insufficiency and widened QRS complex.
The aim of the study was to model different pacing and ablation electrodes and to integrate them into a heart model for the static and dynamic simulation of BV stimulation and HF ablation in atrial fibrillation (AF).
The modeling and simulation were carried out using the electromagnetic simulation software CST. Five multipolar left ventricular (LV) electrodes, four bipolar right atrial (RA) electrodes, two right ventricular (RV) electrodes and one HF ablation catheter were modelled. A selection of these was integrated into the heart rhythm model (Schalk, Offenburg) for the electrical field simulation. The simulation of an AV node ablation during CRT was performed with RA, RV and LV electrodes and an integrated ablation catheter with an 8 mm gold tip.
The BV stimulation was performed simultaneously with an amplitude of 3 V at the LV electrode and 1 V at the RV electrode, each with a pulse width of 0.5 ms. The far-field potential was 32.86 mV at the RA electrode tip and 185.97 mV at a distance of 1 mm from the RA electrode tip. AV node ablation was simulated with an applied power of 5 W at 420 kHz at the distal ablation electrode. After 5 s of ablation, the temperature was 103.87 °C at the catheter tip and 37.61 °C at a distance of 2 mm inside the myocardium; after 15 s, the temperatures were 118.42 °C and 42.13 °C, respectively.
Virtual heart and electrode models, together with the simulations of electrical fields and temperature profiles, allow the static and dynamic simulation of atrial-synchronous BV stimulation and HF ablation in AF, and could be used to optimize CRT and AF ablation.
A novel approach to a test environment for embedded networking nodes has been conceptualized and implemented. Its basis is the use of virtual nodes in a PC environment, where each node executes the original embedded code. Different nodes run in parallel, connected via so-called virtual channels. The environment allows the behavior of the virtual channels as well as the overall topology to be modified at runtime in order to virtualize real-life networking scenarios. The presented approach is very efficient and allows a simple description of test cases without the need for a network simulator. Furthermore, it speeds up the process of developing new features and supports the identification of bugs in wireless communication stacks. In combination with powerful test execution systems, it is possible to create a continuous development and integration flow.
eTPL: An Enhanced Version of the TLS Presentation Language Suitable for Automated Parser Generation
(2017)
The specification of the Transport Layer Security (TLS) protocol defines its own presentation language used for the purpose of semi-formally describing the structure and on-the-wire format of TLS protocol messages. This TLS Presentation Language (TPL) is more expressive and concise than natural language or tabular descriptions, but as a result of its limited objective has a number of deficiencies. We present eTPL, an enhanced version of TPL that improves its expressiveness, flexibility, and applicability to non-TLS scenarios. We first define a generic model that describes the parsing of binary data. Based on this, we propose language constructs for TPL that capture important information which would otherwise have to be picked manually from informal protocol descriptions. Finally, we briefly introduce our software tool etpl-tool which reads eTPL definitions and automatically generates corresponding message parsers in C++. We see our work as a contribution supporting sniffing, debugging, and rapid-prototyping of wired and wireless communication systems.
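To illustrate the kind of parsing a generated message parser performs (a minimal Python sketch for illustration only — the etpl-tool itself emits C++, and the field names here are ours), the fixed five-byte TLS record header that TPL describes (uint8 type, uint8 major, uint8 minor, uint16 length; RFC 5246) can be decoded like this:

```python
import struct
from typing import NamedTuple

class TLSRecordHeader(NamedTuple):
    content_type: int   # e.g. 22 = handshake, 23 = application data
    major: int          # protocol version, major part (3 for TLS)
    minor: int          # protocol version, minor part (3 for TLS 1.2)
    length: int         # length in bytes of the following fragment

def parse_record_header(data: bytes) -> TLSRecordHeader:
    """Parse the fixed 5-byte TLS record header. '!BBBH' is
    network byte order: three uint8 fields and one uint16."""
    if len(data) < 5:
        raise ValueError("need at least 5 bytes for a TLS record header")
    content_type, major, minor, length = struct.unpack("!BBBH", data[:5])
    return TLSRecordHeader(content_type, major, minor, length)

# A TLS 1.2 handshake record announcing a 512-byte fragment:
hdr = parse_record_header(bytes([22, 3, 3]) + (512).to_bytes(2, "big"))
# hdr.content_type == 22, hdr.length == 512
```

The value of eTPL is precisely that such unpack-and-validate code need not be written by hand but can be generated from the (enhanced) presentation-language definition.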
The ability to detect a target signal masked by noise is improved in normal-hearing listeners when interaural phase differences (IPDs) between the ear signals exist either in the masker or in the signal. To improve binaural hearing in bilaterally implanted cochlear implant (BiCI) users, a coding strategy providing the best possible access to IPDs is highly desirable. Outcomes of a previous study (Zirn, Arndt et al. 2016) revealed that a subset of BiCI users showed improved IPD detection thresholds with the fine structure processing strategy FS4 compared to the constant-rate strategy HDCIS using narrowband stimuli. In contrast, little difference between the coding strategies was found for broadband stimuli with regard to binaural speech intelligibility level differences (BILD) as an estimate of binaural unmasking. Compared to normal-hearing listeners (7.5 ± 1.2 dB), BILDs were small in BiCI users (around 0.5 dB with both coding strategies).
In the present work, we investigated the influence of binaural fitting parameters on the BILD. In our cohort of BiCI users, many were implanted with electrode arrays differing in length between the left and right sides. Because this length difference typically corresponded to the distance between two electrode contacts, the first modification of the bilateral fitting was a tonotopic adjustment achieved by deactivating the most apical electrode contact on the side with the deeper-inserted array (tonotopic approach).
The second modification was the isolation of the residual, most apical electrode contacts by deactivation of the basally adjacent electrode contact on each side (tonotopic sparse approach). Applying these modifications, BILD improved by up to 1.5 dB.
The normal-hearing auditory system is able to use interaural time and phase differences for improved signal detection in noise. This phenomenon is often referred to as binaural unmasking and is effective both for simple signals such as sinusoids and for speech signals in noise. Previous studies have shown that binaural unmasking can, to a limited extent, also be observed in bilateral CI users (Zirn et al., 2016).
Recent results show that binaural unmasking is sensitive to the bilateral CI fitting. The effect can be modulated by tonotopic matching and by isolating an apical fine-structure channel. In this way, increases in binaural unmasking of up to 1.5 dB are possible compared to the conventional CI fitting. However, the influence of the CI fitting varies considerably between individuals.
The three major manufacturers of cochlear implant (CI) systems allow clinical audiologists to check the microphone characteristics of most CI speech processors. For this purpose, monitoring headphones can be connected to these speech processors, and the microphone(s), including part of the signal pre-processing, can be listened to. The CI manufacturers do not provide precise specifications as to which stimuli, at which level, and according to which criterion this check should be performed. On the basis of this check, the audiologist is then supposed to judge the functioning of the microphones, and thus decide whether the speech processor in question should be sent back to the manufacturer or not.
To objectify the CI speech processor microphone check, we developed a test box with which all current CI speech processors of the three major manufacturers that offer a monitoring output can be tested. The box was produced by 3D printing. The speech processor under test is mounted in the measurement box and exposed to defined test signals (sinusoids of different frequencies) via a built-in loudspeaker. The microphone signal is led out via the cable of the monitoring headphones and transformed by a shifting-and-scaling circuit into a voltage range suitable for AD conversion with a microcontroller (an ATmega1280 on an Arduino Mega). The same microcontroller outputs the sinusoids through the loudspeaker via a custom-built DA converter. Signal acquisition and playback each run at a sampling rate of 38.5 kHz. For each frequency, the RMS value determined over several periods of the test signal is compared with the RMS value measured at that frequency with a new reference processor. The measurement results are displayed graphically on a screen.
A first data collection is currently underway with CI speech processors that attracted attention subjectively in the clinic and are subsequently examined in the measurement box. The goal is to determine realistic thresholds for critical deviations from the reference RMS values. Hit and false-alarm rates of the subjective check will then be determined in a further step.
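The RMS comparison performed by the test box can be sketched as follows (a simplified Python model rather than the microcontroller firmware; the 3 dB pass threshold is a placeholder, since realistic thresholds are exactly what the ongoing data collection is meant to determine):

```python
import math

def rms(samples) -> float:
    """Root-mean-square value of a block of signal samples,
    taken over several periods of the test sinusoid."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def microphone_ok(measured_rms: float, reference_rms: float,
                  max_deviation_db: float = 3.0) -> bool:
    """Compare the processor's RMS at one test frequency against the
    RMS measured with a new reference processor. The deviation is
    expressed in dB; max_deviation_db is a hypothetical threshold."""
    deviation_db = 20.0 * math.log10(measured_rms / reference_rms)
    return abs(deviation_db) <= max_deviation_db
```

A processor whose microphone output has dropped to a tenth of the reference level (-20 dB) would be flagged as conspicuous, while small level differences within the threshold would pass.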
Our university carries out various research projects. Among others, the Schluckspecht project is interdisciplinary work on different ultra-efficient car concepts for international contests. Besides the engineering work, one part of the project deals with real-time data visualization. In order to increase the efficiency of the vehicle, online monitoring of the runtime parameters is necessary. The driving parameters of the vehicle are transmitted to a processing station via a wireless network connection. We plan to use an augmented reality (AR) application to visualize different data on top of the view of the real car. Using a mobile Android or iOS device, a user can interactively view various real-time and statistical data. The car and its components are to be augmented with various additional information, which should appear at the correct position of the respective component. An engine, for example, could show the current rpm and consumption values; a battery could show the current charge level. The goal of this paper is to evaluate different possible approaches and their suitability, and to expand our application to other projects at our university.
Over the course of the last few years, our students have become increasingly unhappy. Sometimes they stop attending lectures and even seem not to know how to behave appropriately. It feels as if they are going on strike. Consequently, drop-out rates are sky-rocketing. The lecturers/professors are not happy either, adopting an "I-don't-care" attitude.
An interdisciplinary, international team set out to find answers: (1) What are the students unhappy about? Why is it becoming so difficult for them to cope? (2) What does the "I-don't-care" attitude of professors actually mean? What do they care or not care about? (3) How far do the views of the parties correlate? Could some kind of mutual understanding be achieved?
The findings indicate that, at least at our universities, there is rather a long way to go from “Engineering versus Pedagogy” to “Engineering Pedagogy”.
How to make the lecture "Technische Mechanik 1 – Statik" (Engineering Mechanics 1 – Statics) dynamic for everyone involved
(2017)
Lecturers perceive manifold changes, especially among first-year students: prior knowledge, receptiveness and the ability to concentrate are becoming increasingly heterogeneous. The lecture "Technische Mechanik 1" responded constructively to this by changing its procedure and structure. Exercises and their solutions are at the center of the teaching. In addition to the lecturer as an active agent, every student is integrated into the course over the semester and must present individual solutions to the distributed exercises. Through "learning from a model", the students thereby further develop their methodological and subject-specific skills. To make the relevance of the covered topics clear to the students, special exercises with a real-world context were developed. Surveys show that the students benefit from the diverse interactive learning offerings and transfer the acquired competencies to other learning situations as well.
Finding clusters in high-dimensional data is a challenging research problem. Subspace clustering algorithms aim to find clusters in all possible subspaces of the dataset, where a subspace is a subset of the dimensions of the data. But the exponential increase in the number of subspaces with the dimensionality of the data renders most algorithms inefficient as well as ineffective. Moreover, these algorithms have an ingrained data dependency in the clustering process; thus, parallelization becomes difficult and inefficient. SUBSCALE is a recent subspace clustering algorithm that is scalable with the number of dimensions and contains independent processing steps that can be exploited through parallelism. In this paper, we aim to leverage, first, the computational power of widely available multi-core processors to improve the runtime performance of the SUBSCALE algorithm. The experimental evaluation has shown linear speedup. Second, we are developing an approach using graphics processing units (GPUs) for fine-grained data parallelism to accelerate the computation further. First tests of the GPU implementation show very promising results.
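The coarse-grained parallelism exploited here can be sketched with Python's multiprocessing (a toy stand-in only: the per-dimension density step below is a hypothetical simplification, not the actual SUBSCALE computation — the point is that independent per-dimension tasks map cleanly onto a worker pool):

```python
from multiprocessing import Pool

def cluster_dimension(task):
    """Toy stand-in for one independent processing step: find dense
    1-D regions in a single dimension by grouping sorted values that
    lie within eps of their neighbor (hypothetical criterion)."""
    dim, values = task
    values, eps = sorted(values), 0.5
    clusters, current = [], [values[0]]
    for v in values[1:]:
        if v - current[-1] <= eps:
            current.append(v)
        else:
            if len(current) >= 2:
                clusters.append(current)
            current = [v]
    if len(current) >= 2:
        clusters.append(current)
    return dim, clusters

if __name__ == "__main__":
    data = {0: [0.1, 0.3, 5.0, 5.2], 1: [1.0, 9.0]}
    with Pool(2) as pool:
        # each dimension is processed by a separate worker
        results = dict(pool.map(cluster_dimension, data.items()))
```

Because the per-dimension steps share no state, the speedup scales with the number of cores — the same independence that the GPU version exploits at a finer grain.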
The need to measure basic aerosol parameters has increased dramatically in the last decade. This is due mainly to their harmful effect on the environment and on public health. Legislation requires that particle emissions and ambient levels, workplace particle concentrations and exposure to them are measured to confirm that the defined limits are met and the public is not exposed to harmful concentrations of aerosols.
The approach presented in this work allows the localization of rail vehicles in topological maps using only an eddy current sensor system (WSS). Localization primarily requires identifying the track currently being travelled, for which different features stored in a map are used, as well as the distance travelled, which is determined by counting the passed sleepers. These features are extracted from the WSS signal by means of specially defined virtual sensors and matched against the reference data of the topological map using a Bayesian formalism. This virtual-sensor-based approach allows the sensor signal processing to be parallelized and sensors to be integrated flexibly into the localization system. The ability to detect switches with a hit rate of 99% allows the vehicle position to be tracked over the entire route using only the measurement data supplied by the WSS.
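The Bayesian matching of virtual-sensor features against the map can be sketched as a simple belief update over candidate tracks (an illustrative sketch only; the track names and likelihood values are invented):

```python
def bayes_update(prior: dict, likelihoods: dict) -> dict:
    """One Bayesian update of the belief over candidate tracks, given
    per-track likelihoods of the feature just reported by a virtual
    sensor (e.g. 'switch detected'). Both arguments are dicts keyed
    by track id; the posterior is normalized to sum to 1."""
    unnorm = {t: prior[t] * likelihoods[t] for t in prior}
    z = sum(unnorm.values())
    return {t: v / z for t, v in unnorm.items()}

belief = {"track_A": 0.5, "track_B": 0.5}
# the map expects a switch at this position only on track A,
# and the sensor system reports one:
belief = bayes_update(belief, {"track_A": 0.99, "track_B": 0.01})
```

Repeating this update for each observed feature concentrates the belief on the track actually travelled, which is how a per-feature hit rate such as 99% for switches translates into reliable track identification over a whole journey.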
For precise indoor positioning, for example in railway stations or shopping centres, the project described here investigates the extent to which local magnetic fields can be used to increase accuracy and robustness. To this end, it is examined whether and how low-cost magnetic field sensors and mobile robot platforms can be used to create maps that enable later navigation, for example with smartphones or other mobile devices.
In this paper we show that a model-free approach to learning behaviors in joint space can successfully be used to utilize the toes of a humanoid robot. Keeping the approach model-free makes it applicable to any kind of humanoid robot, or to robots in general. Here we focus on the benefit for robots with toes, which is otherwise difficult to exploit. The task was to learn different kick behaviors on simulated Nao robots with toes in the RoboCup 3D soccer simulator. As a result, the robot learned to step onto its toe for a kick that performs 30% better than the same kick learned without toes.
Applications helping us to maintain the focus on work are called “Zenware” (from concentration and Zen). While form factors, use cases and functionality vary, all these applications have a common goal: creating uninterrupted, focused attention on the task at hand. The rise of such tools exemplifies the users’ desire to control their attention within the context of omnipresent distraction. In expert interviews we investigate approaches in the context of attention-management at the workplace of knowledge workers. To gain a broad understanding, we use judgement sampling in interviews with experts from several disciplines. We especially explore how focus and flow can be stimulated. Our contribution has four components: a brief overview on the state of the art (1), a presentation of the results (2), strategies for coping with digital distractions and design guidelines for future Zenware (3) and an outlook on the overall potential in digital work environments (4).
Gamifying rehabilitation is an efficient way to improve motivation and exercise frequency. However, between flow theory, self-determination theory and Bartle's player types, there is much room for speculation regarding the mechanics required for successful gamification, which in turn leads to increased motivation. For our study, we selected a gamified solution for motion training (an exergame) in which the playful design elements are extremely simple. The contribution is three-fold: we show best practices from the state of the art, present a study analyzing the effects of simple gamification mechanics on a quantitative and on a qualitative level, and discuss strategies for playful design in therapeutic movement games.
In recent years, additive manufacturing processes have developed rapidly and now present a high-performance alternative to conventional manufacturing methods. In particular, they offer previously hardly imaginable design freedom, i.e., the realization of complex forms and geometries. This capability can, for example, be applied in the development of especially light but still load-bearing components in automotive engineering. In addition, additive manufacturing seldom produces waste material, which benefits the sustainable production of components. Until now, this design freedom has barely been used in the construction of technical components and products, because both specific design guidelines for additive manufacturing and complex strength calculations must be observed simultaneously. In order to fully exploit the potential of additive manufacturing, the method of topology optimization, based on FEM simulation, suggests itself. With this method, components that are precisely matched to their loads and especially light, and thereby resource-saving, can be produced. Current literature indicates that this method is used in automotive manufacturing to reduce weight and improve the stability of both individual parts and assemblies. This contribution studies how this development method can be applied, using the example of a brake mount from an experimental vehicle. The conventional design is improved in several steps by means of a simulation tool for topology optimization; in an additional processing step, the resulting component is smoothed. Finally, the component is generatively manufactured by means of selective laser melting, while models for demonstrating the process are manufactured using binder jetting. It is also determined how this weight reduction affects the CO2 emissions of a vehicle in use.
Additive manufacturing processes have evolved rapidly in recent years and now offer a wide range of manufacturing technologies and workable materials, ranging from plastics and metals to paper and even polymer-plaster composites. Due to the layer-by-layer build-up of components, additive processes have the advantage over conventional manufacturing processes of design freedom, i.e., the simple realization of complex geometries. Moreover, additive processes offer reduced resource consumption, since essentially only the material required for the actual component is consumed and no waste in the form of chips is produced. In order to exploit these advantages, the potentials of additive manufacturing and the requirements of sustainable design must already be considered in the product development process. The components and products must therefore be designed so that as little build and support material as possible is required for generative production, and thus few resources are consumed. In addition, all steps of the additive manufacturing process, including post-processing, must be properly considered. This allows components to be designed so that, for instance, the effort for removing the support structure is considerably reduced, which leads to a significant reduction in manufacturing time and thus energy consumption. The implementation of these potentials in product development can be demonstrated by means of a multi-stage model. A case study shows how this model is applied in the training of Master's students in the field of product development. In a workshop, the students work as a group on the task of developing a miniature racing car under the rules of sustainable design while complying with the boundary conditions of additive manufacturing. In this case, fused deposition modelling (FDM) with plastics as the building material is applied.
The results show how the students have dealt with the different requirements and how they have implemented them in product development and in the subsequent additive manufacturing.
Present-day methods of numerical simulation offer a great variety of options for optimizing metal forming processes. Although it is possible to simulate complex forming processes, the results are typically available only as 2D projections on screens. Some forming processes have reached a level of complexity beyond spatial imagination, which makes it necessary to use physical 3D representations to develop a deeper understanding of the material flow, microstructural processes, and process and design limits, or to design the required tooling. Physical 3D models can be produced in a short amount of time using 3D printing and indexed with a wide range of colors. In this paper, the additive manufacturing of 3D color models based on simulation results is explored by means of examples from metal forming. Different 3D-printing processes are compared on the basis of quality as well as technical and economic criteria. Further examples from the fields of joining by upset-bulging of tubes and of microstructure simulation are also analyzed. The paper discusses the possibilities offered by the rapid progress and wide availability of 3D printers for the design and optimization of complex metal forming processes.
Architecture models are an essential component of the development process and provide a physical representation of virtual designs. In addition to the conventional methods of model production by machining models from wood, metal, plastic or glass, a number of additive manufacturing processes are now available. These new processes enable the additive manufacturing of architectural models directly from CAAD or BIM data. However, the boundary conditions that determine whether a model can be manufactured with a given additive manufacturing process must also be considered. Such conditions include the minimum wall thickness, which depends on the applied additive manufacturing process and the materials used. Moreover, the need to remove support structures after the additive manufacturing process must be considered. In general, a change in the scale of these models is only possible with very high effort. In order to incorporate these restrictions adequately into the CAAD model, this contribution develops a parametrized CAAD model that allows such boundary conditions to be modified and adapted while complying with the scale. The usability of this new method is illustrated and explained in detail in a case study. In addition, this article addresses the additive manufacturing processes used, including subsequent post-processing.
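A minimal sketch of how such a scale-dependent manufacturability check might look; the process names and minimum wall thicknesses below are illustrative assumptions, not values from the contribution:

```python
# Illustrative minimum printable wall thicknesses per AM process (assumed values).
MIN_WALL_MM = {"binder_jetting": 2.0, "fdm": 1.0, "sla": 0.6}

def printable_walls(wall_thicknesses_mm, scale, process):
    """Return the walls (real-world dimensions in mm) that fall below the
    process-dependent minimum wall thickness once reduced to model scale.

    wall_thicknesses_mm -- wall thicknesses of the real building in mm
    scale               -- model scale, e.g. 1/100
    process             -- key into MIN_WALL_MM
    """
    t_min = MIN_WALL_MM[process]
    return [t for t in wall_thicknesses_mm if t * scale < t_min]

# A 300 mm outer wall at scale 1:100 becomes 3 mm in the model and is printable
# on all three assumed processes, while a 115 mm partition wall scales to
# 1.15 mm and would fail on binder jetting.
too_thin = printable_walls([300, 115], 1 / 100, "binder_jetting")
```

A parametrized CAAD model can run such a check per boundary condition and, e.g., thicken undersized walls while keeping the overall scale.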
Implementation of lightweight design in the product development process of unmanned aerial vehicles
(2017)
The development and manufacturing of unmanned aerial vehicles (UAVs) require a multitude of design rules. Additive manufacturing (AM) processes provide a number of significant advantages over conventional production methods, particularly for implementing requirements with regard to lightweight construction and sustainability. A new, promising approach is presented in which very light structural elements are combined with a ribbed construction and covered with foil. This contribution develops and presents a development process that is based on various development cycles. Such cycles differ in their effort and scope within the overall development and may comprise only one part of the development process or the entire development process. The applicability of this development process is demonstrated within the framework of a comprehensive case study. The aim is to develop an additively manufactured product, in the form of a UAV, that is as light as possible, along with a sustainable manufacturing process for this product. Finally, the results of the case study are analyzed with regard to the improvement of lightweight construction.
Many highly stressed components must be provided with intersecting notches to fulfill their design purpose. Because of the mutual interaction, the notch effect of this type of multiple notch follows different laws than that of single notches. Advancing the theory of load capacity calculation for highly stressed machine elements makes it necessary to study the effect of intersecting notches in depth. In 1949, Thum and Svenson [1] developed an approximation method for estimating the stress concentration factor of a bar with intersecting notches under tensile load. This method is used in many textbooks. From today's perspective, the suitability of the results obtained with this approach urgently requires review, so Thum's method is scrutinized here. Using the finite element method (FEM), this contribution presents new results for bars under tensile stress with a semicircular groove and a superimposed transverse bore. These show that the calculation according to [1] has gaps: for the current state of development, the approach constitutes an approximation hypothesis subject to excessively large deviations.
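For orientation, estimates of the Thum/Svenson type are commonly quoted as the product of the individual stress concentration factors; a minimal sketch under that multiplicative assumption, with illustrative values not taken from the paper:

```python
def kt_superimposed(kt_groove, kt_bore):
    """Product-type estimate for superimposed (intersecting) notches:
    the combined stress concentration factor is approximated as the
    product of the individual factors. The FEM results discussed above
    indicate that such estimates can deviate considerably."""
    return kt_groove * kt_bore

# Illustrative (hypothetical) values: a semicircular groove with Kt = 1.6
# and a transverse bore with Kt = 2.2 would combine to
kt = kt_superimposed(1.6, 2.2)  # 3.52
```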
Process engineering focuses on the design, operation, control and optimization of chemical, physical and biological processes and has applications in many industries. Process Intensification is the key development approach in modern process engineering. The proposed Advanced Innovation Design Approach (AIDA) combines a holistic innovation process with the systematic analytical and problem-solving tools of the theory of inventive problem solving (TRIZ). The present paper conceptualizes the application of AIDA in the field of process engineering, especially in combination with Process Intensification. It defines the AIDA innovation algorithm for process engineering and describes process mapping, problem ranking and concept design techniques. The approach has been validated in several industrial case studies. The presented research work is part of the European project “Intensified by Design® platform for the intensification of processes involving solids handling”.
The collection of selected papers of the TRIZ Future Conference 2017 is available in open access and is included in the Innovator, the journal of the European TRIZ Association.
The growing complexity of RF front-ends, which must support carrier aggregation and a growing number of frequency bands, leads to tightened nonlinearity requirements for all sub-components. The generation of third-order intermodulation products (IMD3) is a typical problem caused by the nonlinearity of SAW devices. In the present work, we investigate temperature-compensated (TC) SAW devices on 128° rotated YX-cut lithium niobate. An accurate FEM simulation model [1] is employed, which allows a better understanding of the origin of nonlinearities in such acoustic devices.
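The IMD3 frequencies themselves follow from the standard two-tone relations 2f1 − f2 and 2f2 − f1; a minimal sketch with illustrative carrier frequencies:

```python
def imd3_frequencies(f1_hz, f2_hz):
    """Third-order intermodulation products of a two-tone excitation:
    2*f1 - f2 and 2*f2 - f1. They fall close to the carriers themselves,
    which is why IMD3 is hard to filter out in an RF front-end."""
    return (2 * f1_hz - f2_hz, 2 * f2_hz - f1_hz)

# Two tones 1 MHz apart (illustrative values, not from the paper):
lo, hi = imd3_frequencies(1900e6, 1901e6)  # -> 1899 MHz and 1902 MHz
```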
Elastic constants of components are usually determined by tensile tests in combination with ultrasonic experiments. However, these properties may change during the component's lifetime, e.g. due to mechanical treatments or service conditions. Knowledge of the actual material parameters is key to determining quantities such as the residual stresses present in the medium. In this work, the acoustic nonlinearity parameter (ANP) for surface acoustic waves is examined through the derivation of an evolution equation for the amplitude of the second harmonic. Given a certain depth profile of the third-order elastic constants, the dependence of the ANP on the input frequency is determined, and on the basis of these results an appropriate inversion method is developed. This method is intended for extracting the depth dependence of the third-order elastic constants of the material from second-harmonic generation and guided-wave mixing experiments, assuming that the change in the linear Rayleigh wave velocity is small. The latter assumption is supported by a 3D-FEM model study of a medium with randomly distributed microcracks as well as by theoretical works on this topic in the literature.
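For orientation, a classic second-harmonic estimate of an acoustic nonlinearity parameter is sketched below in its textbook longitudinal-wave form; this is shown only to illustrate the scaling (A2 grows linearly with propagation distance in the undistorted regime), not the dedicated surface-wave evolution equation derived in the paper:

```python
import math

def beta_from_second_harmonic(a1, a2, freq_hz, velocity, distance):
    """Textbook longitudinal-wave estimate of the acoustic nonlinearity
    parameter from second-harmonic generation:
        beta = 8 * A2 / (k**2 * x * A1**2),  with k = 2*pi*f / c.
    Since A2 grows linearly with distance x, beta itself is independent
    of x, which is what makes it a material parameter."""
    k = 2 * math.pi * freq_hz / velocity
    return 8 * a2 / (k**2 * distance * a1**2)
```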
Nondestructive methods for measuring residual stresses require, depending on the chosen method, knowledge of certain coupling constants. For ultrasonic measurement methods, these are, besides the second-order elastic constants (SOEC), above all the third-order elastic constants (TOEC). Elastic constants of solid metallic components are usually determined in tensile tests; to obtain the TOEC, these are combined with ultrasonic measurement methods. External influences, such as mechanical post-treatment of the components under investigation, can however change these constants, which must consequently be determined directly on the altered material. Using simulations, the propagation of the second harmonic and of the nonlinearly generated surface waves in wave-mixing experiments is analyzed, and the acoustic nonlinearity parameter (ANP), i.e. the coupling parameter, is computed from the amplitude evolution. In particular, it is investigated how a given depth profile of the TOEC affects the ANP (forward problem) and to what extent an existing TOEC depth profile can be inferred from measurements of the ANP (inverse problem). It is also discussed how local changes of the SOEC can influence the ANP and how large these changes may be for the TOEC still to be determinable. These investigations were carried out on the basis of a 3D-FEM model with randomly oriented microcracks. The numerical calculations also show good agreement with an analytical model known from the literature and extended for this problem, which can account for the lattice nonlinearity in addition to the crack-induced nonlinearity.
Spectral analysis of signal averaging electrocardiography in atrial and ventricular tachyarrhythmias
(2017)
Background: Targeting complex fractionated atrial electrograms detected by automated algorithms during ablation of persistent atrial fibrillation has produced conflicting outcomes in previous electrophysiological studies. The aim of the investigation was to evaluate atrial and ventricular high frequency fractionated electrical signals with a signal-averaging technique.
Methods: Signal-averaged electrocardiography (ECG) is a high-resolution ECG technique that eliminates interference noise in the recorded ECG. The algorithm uses an automatic ECG trigger function for signal-averaged transthoracic, transesophageal and intracardiac ECG signals with novel LabVIEW software (National Instruments, Austin, Texas, USA). For spectral analysis we used the fast Fourier transform in combination with spectro-temporal mapping and wavelet transformation to obtain detailed information about the frequency and intensity of high frequency atrial and ventricular signals.
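A minimal sketch of the two generic ingredients named above, coherent averaging of trigger-aligned beats followed by an FFT-based spectrum; the parameters and synthetic signal are assumptions for illustration, not the authors' LabVIEW implementation:

```python
import numpy as np

def average_beats(beats):
    """Average trigger-aligned ECG segments; uncorrelated noise drops by
    roughly sqrt(N) while the repetitive cardiac signal is preserved."""
    return np.mean(np.asarray(beats), axis=0)

def spectrum(signal, fs_hz):
    """One-sided amplitude spectrum of the averaged signal."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs_hz)
    amps = np.abs(np.fft.rfft(signal)) / len(signal)
    return freqs, amps

# Synthetic example: a 40 Hz high-frequency component buried in noise.
rng = np.random.default_rng(0)
t = np.arange(0, 0.5, 1 / 1000.0)            # 500 ms window at 1 kHz sampling
clean = np.sin(2 * np.pi * 40 * t)           # repetitive 40 Hz component
beats = [clean + rng.normal(0.0, 1.0, t.size) for _ in range(200)]
avg = average_beats(beats)                   # noise suppressed ~sqrt(200)-fold
freqs, amps = spectrum(avg, 1000.0)          # spectral peak emerges at 40 Hz
```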
Results: Spectro-temporal mapping and wavelet transformation of the signal-averaged ECG allowed the evaluation of high frequency fractionated atrial signals in patients with atrial fibrillation and of high frequency ventricular signals in patients with ventricular tachycardia. The analysis in the time domain evaluated fractionated atrial signals at the end of the signal-averaged P-wave and fractionated ventricular signals at the end of the QRS complex. The analysis in the frequency domain evaluated high frequency fractionated atrial signals during the P-wave and high frequency fractionated ventricular signals during the QRS complex. The combination of time-domain and frequency-domain analysis allowed the evaluation of fractionated signals during atrial and ventricular conduction.
Conclusions: Spectral analysis of signal-averaged electrocardiography with novel LabVIEW software can be utilized to evaluate atrial and ventricular conduction delays in patients with atrial fibrillation and ventricular tachycardia. Complex fractionated atrial electrograms may be useful parameters for evaluating arrhythmogenic electrical cardiac signals in atrial fibrillation ablation.
Heart rhythm model and simulation of electrophysiological studies and high-frequency ablations
(2017)
Background: The target of the study was to create an accurate anatomic CAD heart rhythm model (HRM) and to show its usefulness for cardiac electrophysiological studies and high-frequency ablations. The method is gentler on the patients' health and has the potential to replace clinical studies due to its high efficiency in terms of time and cost.
Methods: All natural heart components of the new HRM were based on MRI records, which guaranteed electronic functionality. The software CST was used for the construction, while CST's material library assured genuine tissue properties. The model is intended to simulate different heart rhythm diseases as well as the diffusion of electromagnetic fields, caused by electrophysiological conduction, inside the heart tissue.
Results: It was possible to simulate sinus rhythm and fourteen different heart rhythm disturbances with different atrial and ventricular conduction delays. The simulated biological excitation of the healthy and diseased HRM was plotted by the simulated electrodes of a four-polar right atrial catheter, a six-polar His bundle catheter, a ten-polar coronary sinus catheter, a four-polar ablation catheter and an eight-polar transesophageal left cardiac catheter. Accordingly, six variables were rebuilt and inserted into the anatomic HRM in order to establish heart catheters for ECG monitoring and HF ablation. The HF ablation catheters made it possible to simulate various types of heart rhythm disturbance ablations with different HF ablation catheters and also provided a functional visualisation of tissue heating. The use of tetrahedral meshing in the HRM made it possible to store the results faster while saving considerable space. The smart meshing function avoided unnecessarily high resolutions for coarse structures.
Conclusions: The new HRM for EPS simulation may be additionally useful for the simulation of heart rhythm disturbances, cardiac pacing and HF ablation, and for locating and identifying complex fractionated signals within the atrium during atrial fibrillation HF ablation.
Background: Cardiac resynchronization therapy (CRT) with biventricular (BV) pacing is an established therapy for heart failure (HF) patients (P) with sinus rhythm, reduced left ventricular (LV) ejection fraction (EF) and electrical ventricular desynchronization. The aim of the study was to evaluate electrical interventricular delay (IVD) and left ventricular delay (LVD) in right ventricular (RV) pacemaker pacing before upgrading to CRT BV pacing.
Methods: HF P (n=11, age 69.0 ± 7.9 years, 1 female, 10 males) with DDD pacemaker (n=10), DDD defibrillator (n=1), RV pacing, New York Heart Association (NYHA) class 3.0 ± 0.2 and 24.5 ± 4.9 % LVEF were measured by surface ECG and transesophageal bipolar LV ECG before upgrading to CRT defibrillator (n=8) and CRT pacemaker (n=3). IVD was measured between onset of QRS in the surface ECG and onset of LV signal in the transesophageal ECG. LVD was measured between onset and offset of LV signal in the transesophageal ECG. CRT atrioventricular (AV) and BV pacing delay were optimized by impedance cardiography.
Results: Interventricular and intraventricular desynchronization in RV pacemaker pacing were 228.2 ± 44.8 ms QRS duration, 86.5 ± 32.8 ms IVD, 94.4 ± 23.8 ms LVD, 2.6 ± 0.8 QRS-IVD ratio with correlation between IVD and QRS-IVD ratio (r=-0.668, P=0.0248), and 2.3 ± 0.7 QRS-LVD ratio. The LVEF-IVD ratio was 0.3 ± 0.1, with correlation between IVD and LVEF-IVD ratio (r=-0.8063, P=0.00272) and between QRS duration and LVEF-IVD ratio (r=-0.7251, P=0.01157). Optimal sensing and pacing AV delays were 128.3 ± 24.8 ms after atrial sensing (n=6) and 173.3 ± 40.4 ms after atrial pacing (n=3). The optimal BV pacing delay was -4.3 ± 11.3 ms between LV and RV pacing (n=7). During 30.4 ± 29.6 months of CRT follow-up, the NYHA class improved from 3.1 ± 0.2 to 2.2 ± 0.3.
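As a reading aid, the dimensionless ratios reported above are consistent with dividing the respective group means; note that the mean of per-patient ratios need not equal the ratio of the means, which plausibly explains the small offset in the QRS-LVD ratio:

```python
# Group means taken from the results above; rounding matches the one
# decimal place reported.
qrs_ms, ivd_ms, lvd_ms, lvef_pct = 228.2, 86.5, 94.4, 24.5

qrs_ivd_ratio = round(qrs_ms / ivd_ms, 1)     # 2.6, as reported
qrs_lvd_ratio = round(qrs_ms / lvd_ms, 1)     # 2.4 vs. the reported 2.3 +/- 0.7
lvef_ivd_ratio = round(lvef_pct / ivd_ms, 1)  # 0.3, as reported
```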
Conclusions: Transesophageal electrical IVD and LVD in RV pacemaker pacing may be additionally useful ventricular desynchronization parameters to improve patient selection for upgrading RV pacemaker pacing to CRT BV pacing.
Background: The electrical field (E-field) of the biventricular (BV) stimulation is essential for the success of cardiac resynchronization therapy (CRT) in patients with cardiac insufficiency and widened QRS complex. 3D modeling allows the simulation of CRT and high frequency (HF) ablation.
Purpose: The aim of the study was to model different pacing and ablation electrodes and to integrate them into a heart model for the static and dynamic simulation of BV stimulation and HF ablation in atrial fibrillation (AF).
Methods: The modeling and simulation were carried out using electromagnetic simulation software. Five multipolar left ventricular (LV) electrodes, one epicardial LV electrode, four bipolar right atrial (RA) electrodes, two right ventricular (RV) electrodes and one HF ablation catheter were modeled. The different electrode models were integrated into a heart rhythm model for the electrical field simulation (Fig. 1). The simulation of an AV node ablation during CRT was performed with RA, RV and LV electrodes and an integrated ablation catheter with an 8 mm gold tip.
Results: The RV and LV stimulation were performed simultaneously at an amplitude of 3 V at the LV electrode and 1 V at the RV electrode, each with a pulse width of 0.5 ms. The far-field potentials generated by the BV stimulation were perceived by the RA electrode. The far-field potential at the RA electrode tip was 32.86 mV. A far-field potential of 185.97 mV resulted at a distance of 1 mm from the RA electrode tip. AV node ablation was simulated with an applied power of 5 W at 420 kHz at the distal 8 mm ablation electrode. The temperature at the catheter tip was 103.87 °C after 5 s ablation time, 44.17 °C from the catheter tip in the myocardium and 37.61 °C at a distance of 2 mm. After 10 s, the temperature at the three measuring points described above was 107.33 °C, 50.87 °C and 40.05 °C, and after 15 s it was 118.42 °C, 55.75 °C and 42.13 °C.
Conclusions: Virtual heart and electrode models, together with the simulation of electrical fields and temperature profiles, allow the static and dynamic simulation of atrial-synchronous BV stimulation and HF ablation in AF. The 3D simulation of the electrical field and temperature profile may be used to optimize CRT and AF ablation.
Electrochemical impedance spectroscopy (EIS) is a widely-used diagnostic technique to characterize electrochemical processes. It is based on the dynamic analysis of two electrical observables, that is, current and voltage. Electrochemical cells with gaseous reactants or products (e.g., fuel cells, metal/air cells, electrolyzers) offer an additional observable, the gas pressure. The dynamic coupling of current and/or voltage with gas pressure gives rise to a number of additional impedance definitions, for which we have introduced the term electrochemical pressure impedance spectroscopy (EPIS) [1,2]. EPIS shows a particular sensitivity towards transport processes of gas-phase or dissolved species, in particular diffusion coefficients and transport pathway lengths. It is thus complementary to standard EIS, which is mainly sensitive towards electrochemical processes. This sensitivity can be exploited for model parameterization and validation. A general analysis of EPIS is presented, which shows the necessity of model-based interpretation of the complex EPIS shapes in the Nyquist plot (cf. Figure). We then present EPIS simulations for two different electrochemical cells: (1) a sodium/oxygen battery cell and (2) a hydrogen/air fuel cell. We use 1D or 2D electrochemical and transport models to simulate current excitation/pressure detection or pressure excitation/voltage detection. The results are compared to the first EPIS experimental data available in the literature [2,3].
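A minimal sketch of the basic EPIS evaluation idea, taking the complex ratio of the pressure response to the current excitation at the excitation frequency; the signal model and numbers are illustrative assumptions, not the authors' simulation code:

```python
import numpy as np

def complex_amplitude(x, t, f_hz):
    """Complex Fourier coefficient of x(t) at frequency f (lock-in style),
    assuming t spans an integer number of periods."""
    return 2.0 * np.mean(x * np.exp(-2j * np.pi * f_hz * t))

def pressure_impedance(current, pressure, t, f_hz):
    """Z_EPIS(f) = p~/I~: amplitude ratio and phase shift between the
    pressure response and the sinusoidal current excitation."""
    return complex_amplitude(pressure, t, f_hz) / complex_amplitude(current, t, f_hz)

# Synthetic example: current excitation with a delayed pressure response.
t = np.arange(0, 10, 1e-3)                        # 10 s record at 1 kHz sampling
f = 1.0                                           # 1 Hz excitation frequency
current = 0.1 * np.sin(2 * np.pi * f * t)         # A
pressure = 2.0 * np.sin(2 * np.pi * f * t - 0.7)  # Pa, phase-lagged response
z = pressure_impedance(current, pressure, t, f)   # |z| = 20 Pa/A, phase -0.7 rad
```

Sweeping the excitation frequency and plotting z in the complex plane yields the Nyquist-plot shapes whose model-based interpretation the abstract argues for.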