Refine
Year of publication
- 2017 (105)
Document Type
- Conference Proceeding (40)
- Article (reviewed) (24)
- Article (unreviewed) (13)
- Part of a Book (10)
- Other (8)
- Book (5)
- Contribution to a Periodical (2)
- Doctoral Thesis (1)
- Letter to Editor (1)
- Report (1)
Conference Type
- Konferenzartikel (34)
- Konferenz-Abstract (5)
- Sonstiges (1)
Has Fulltext
- no (105)
Is part of the Bibliography
- yes (105)
Keywords
- Games (4)
- Gamification (4)
- Computer Games (3)
- Computerspiele (3)
- Game Design (3)
- CST (2)
- Götz von Berlichingen (2)
- HF-Ablation (2)
- Human Resources (2)
- Procedural Content Generation (2)
- Trade Policy (2)
- Adaptive predictive control (1)
- Affective Computing (1)
- Anlagenaufwandszahl (1)
- Anti-Windup-Maßnahmen (1)
- Anwendungen: Regelung in der solaren Wärmeversorgung (1)
- Arbeitswissenschaft (1)
- Bauteilaktivierung (1)
- Beobachter (1)
- Betriebsführung (1)
- CRT (1)
- Cis-Platin (1)
- Computer Aided Design (CAD) (1)
- Computer-Aided Design (1)
- Context-Awareness (1)
- Controlling (1)
- Convolutional Neural Network (1)
- Data mining (1)
- Digitalisierung (1)
- Discrete-time state controllers (1)
- Dissens (1)
- Electrolyte-gated transistors (1)
- Elektronik (1)
- Emotions (1)
- Feldeffekt (1)
- Flugdatenregistriergerät (1)
- Früherkennung (1)
- GPU computing (1)
- Gamifizierung (1)
- Halbleiter (1)
- Hand (1)
- Handprothese (1)
- Handprothese, Funktionsprüfung (1)
- Herzschrittmachertherapie (1)
- Innovationsfinanzierung (1)
- Internationales Steuerrecht (1)
- Konferenz (1)
- Kontextbewusstsein (1)
- Künstlerische Forschung, Taktilität, Medienökologie, Zwischenkörperlichkeit, Philosophie, Leiblichkeit, Interface, Experimentalsystem (1)
- Learning (1)
- Lernen (1)
- Ljapunow-Funktionen (1)
- Lyapunov functions (1)
- Management (1)
- Many-core architectures (1)
- Marfan-Syndrom (1)
- Microgrids (1)
- Multi-core architectures (1)
- Multimaterial-3-D-Druck (1)
- Multiple regression (1)
- Netzwerk (1)
- PCG (1)
- Paganini (1)
- Perlen (1)
- Printed Electronics (1)
- Procedural Content (1)
- Produkt-Controlling (1)
- Produktion (1)
- Public Policy (1)
- Recruiting (1)
- Recrutainment (1)
- Rede (1)
- Rehabilitation (1)
- Risikomanagement (1)
- Robotersteuerung (1)
- Robotics (1)
- SAP Business Warehouse (1)
- Serendipity (1)
- Smart Grid Operation (1)
- Social Engineer (1)
- Spiele (1)
- Stabilität (1)
- Stellgrößenbegrenzungen (1)
- Steuerbarkeit (1)
- Streustrahlen (1)
- Subspace clustering (1)
- TRIZ, Kreativitätsmethodiken (1)
- Terrestrisches Laserscanning (1)
- Thermal comfort (1)
- Thermo-activate building system (TABS) (1)
- Transistortechnologie (1)
- Trigeneration (1)
- Ultraschall-Computertomographie (1)
- Zeitdiskrete PI-Zustandsregler (1)
- Zenware (1)
- agent based systems (1)
- anti-windup methods (1)
- bio-inspired models (1)
- controllability (1)
- distributed computing (1)
- limitation of in-put variables (1)
- nanotechnology (1)
- observers (1)
- oxide electronics (1)
- oxide semiconductors (1)
- performance of ring oscillators (1)
- printed electronics (1)
- reliability (1)
- self-organizing networks (1)
- stability (1)
- transistor model (1)
- „Eiserne Hand“, Götz von Berlichingen (1)
Institute
- Fakultät Elektrotechnik und Informationstechnik (E+I) (bis 03/2019) (35)
- Fakultät Medien und Informationswesen (M+I) (bis 21.04.2021) (33)
- Fakultät Wirtschaft (W) (24)
- Fakultät Maschinenbau und Verfahrenstechnik (M+V) (13)
- ivESK - Institut für verlässliche Embedded Systems und Kommunikationselektronik (10)
- ACI - Affective and Cognitive Institute (8)
- INES - Institut für nachhaltige Energiesysteme (5)
- WLRI - Work-Life Robotics Institute (5)
- IfTI - Institute for Trade and Innovation (3)
- Zentrale Einrichtungen (2)
Open Access
- Closed Access (105)
Am Buffet des Lebens
(2017)
Message co chairmen
(2017)
In this study, a high-performance controller is proposed for single-phase grid-tied energy storage systems (ESSs). To control power factor and current harmonics and to manage time-shifting of energy, the ESS is required to have low steady-state error and fast transient response. It is well known that fast controllers often lack the required steady-state accuracy, and a trade-off is inevitable. A hybrid control system is therefore presented that combines a simple yet fast proportional-derivative controller with a repetitive controller, a type of learning controller with small steady-state error that is suitable for applications with periodic grid current waveforms. This results in an improved system with a distortion-free, high-power-factor grid current. The proposed controller model is developed and its design parameters are presented. A stability analysis for the proposed system is provided, and the theoretical analysis is verified through stability, transient, and steady-state simulations.
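The hybrid scheme described above can be sketched as follows: a fast proportional-derivative (PD) term handles transients, while a repetitive controller (RC) learns the periodic residual error over one fundamental period of N samples. This is a minimal illustration, not the paper's controller; all gains and the period length are made-up assumptions.

```python
class HybridPDRepetitiveController:
    """Sketch: PD action for fast transients + repetitive action that
    accumulates a correction for each sample position within one period."""

    def __init__(self, kp=1.0, kd=0.1, kr=0.5, period_samples=100):
        self.kp, self.kd, self.kr = kp, kd, kr
        self.n = period_samples
        self.prev_error = 0.0
        self.rc_memory = [0.0] * period_samples  # one fundamental period
        self.k = 0

    def update(self, error):
        # PD part: fast, but leaves a steady-state error
        pd = self.kp * error + self.kd * (error - self.prev_error)
        self.prev_error = error
        # RC part: reuse and refine the correction learned one period ago
        idx = self.k % self.n
        rc = self.rc_memory[idx] + self.kr * error
        self.rc_memory[idx] = rc  # stored for the same phase next period
        self.k += 1
        return pd + rc
```

Driven with a persistent periodic error, the RC memory grows period over period, which is how the scheme removes the steady-state error the PD part alone would leave.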
Exploiting Dissent: Towards Fuzzing-based Differential Black Box Testing of TLS Implementations
(2017)
The Transport Layer Security (TLS) protocol is one of the most widely used security protocols on the internet. Yet implementations of TLS keep suffering from bugs and security vulnerabilities. In large part this is due to the protocol's complexity, which makes implementing and testing TLS notoriously difficult. In this paper, we present our work on using differential testing as an effective means to detect issues in black-box implementations of the TLS handshake protocol. We introduce a novel fuzzing algorithm for generating large and diverse corpora of mostly-valid TLS handshake messages. When stimulating TLS servers that expect a ClientHello message, we find messages generated with our algorithm to induce more response discrepancies and to achieve higher code coverage than those generated with American Fuzzy Lop, TLS-Attacker, or NEZHA. In particular, we apply our approach to OpenSSL, BoringSSL, WolfSSL, mbedTLS, and MatrixSSL, and find several real implementation bugs, among them a serious vulnerability in MatrixSSL 3.8.4. Our findings also point to imprecisions in the TLS specification. We see the approach presented in this paper as a first step towards fully interactive differential testing of black-box TLS protocol implementations. Our software tools are publicly available as open source projects.
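The core idea of differential black-box testing described above can be sketched in a few lines: feed the same (possibly fuzzed) input to several implementations and flag any disagreement in their responses for inspection. The "implementations" below are hypothetical stand-ins, not real TLS stacks.

```python
def find_discrepancies(inputs, implementations):
    """Return (input, responses) pairs where implementations disagree.

    implementations: dict mapping a name to a callable that takes one
    input message and returns a response (any hashable value).
    """
    findings = []
    for msg in inputs:
        responses = {name: impl(msg) for name, impl in implementations.items()}
        if len(set(responses.values())) > 1:  # at least two answers differ
            findings.append((msg, responses))
    return findings
```

In the paper's setting, the responses would be the servers' reactions to a fuzzed ClientHello; a discrepancy points at a bug in at least one implementation, or at ambiguity in the specification itself.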
In planning and operating practice, there is still considerable uncertainty regarding the operation of thermo-active building systems, and in particular of thermally inert concrete core activation. Although these systems are widely used in new non-residential buildings, no uniform operating strategy has established itself to date. Instead, building owners and users regularly complain about room temperatures that are too high or too low during the transitional seasons and during weather changes, as well as about generally poor controllability. In contrast, monitoring projects repeatedly demonstrate a high level of thermal comfort in these buildings. Evidently, subjectively perceived comfort and objectively measured comfort differ here. At the same time, heating and cooling concepts with surface tempering are particularly energy-efficient when the control concept is adapted to their thermal inertia. A good control strategy thus ensures high thermal comfort while keeping energy use as low as possible. The calculation method based on system expenditure figures (following DIN V 18599) offers a good way to assess system concepts including their operating strategy. This makes it possible to find an operating strategy for the thermo-active building system that is adapted to the building and to evaluate it in a uniform manner.
The Bluetooth community is in the process of developing mesh technology. This is highly promising, as Bluetooth is widely available in smartphones and tablet PCs, allowing easy access to the Internet of Things. In this paper, we investigate the performance of Bluetooth-enabled mesh networking in order to identify its strengths and weaknesses. A demonstrator for this protocol has been implemented using the Fruity Mesh protocol implementation. Extensive test cases have been executed to measure performance, reliability, power consumption, and delay. For this, an Automated Physical Testbed (APTB), which emulates the physical channels, has been used. The results of these measurements are useful for real deployments of Bluetooth mesh, not only for home and building automation, but also for industrial automation.
Erfinderisches Problemlösen mit TRIZ : Zielbeschreibung, Problemdefinition und Lösungspriorisierung
(2017)
The theory of inventive problem solving, TRIZ, is a systematic body of assumptions, rules, methods, and tools for the innovative improvement of systems such as products, processes, services, or organizations. This guideline explains TRIZ tools and methods that are used in particular in the "goal description", "problem definition", and "solution prioritization" phases of the problem-solving process. The level of detail of the description allows the tools and methods to be assessed with regard to their purpose, results, and mode of operation. The description of each method and tool contains concrete statements about the objective and outcome of its use.
Background: R-wave synchronised atrial pacing is an effective temporary pacing therapy in infants with postoperative junctional ectopic tachycardia. In the currently used technique, adverse short or long intervals between atrial pacing and ventricular sensing (AP–VS) may be observed during routine clinical practice.
Objectives: The aim of the study was to analyse outcomes of R-wave synchronised atrial pacing and the relationship between maximum tracking rates and AP–VS intervals.
Methods: Calculated AP–VS intervals were compared with those predicted by experienced pediatric cardiologists.
Results: A maximum tracking rate (MTR) set 10 bpm above the heart rate (HR) may result in undesirably short AP–VS intervals (minimum 83 ms). An MTR set 20 bpm above the HR is the hemodynamically better choice (minimum 96 ms). The effect of either setting on the AP–VS interval could not be predicted by experienced observers. In our newly proposed technique, the AP–VS interval approaches 95 ms for HR > 210 bpm and 130 ms for HR < 130 bpm. The progression is linear and strictly decreasing (−0.4 ms/bpm) between the two extreme levels.
Conclusions: Adjusting the AP–VS interval in the currently used technique is complex and may imply unfavorable pacemaker settings. A new pacemaker design allowing direct control of the AP–VS interval is advisable.
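The AP–VS progression reported above can be written out as a simple piecewise-linear function: constant 130 ms below 130 bpm, constant 95 ms above 210 bpm, and a strictly decreasing linear segment in between (interpolation gives about −0.44 ms/bpm, close to the quoted −0.4 ms/bpm). This is an illustrative reconstruction from the two endpoints, not the authors' exact formula.

```python
def ap_vs_interval_ms(heart_rate_bpm):
    """Piecewise-linear AP-VS interval sketch from the reported endpoints."""
    lo_hr, hi_hr = 130.0, 210.0   # bpm
    lo_ms, hi_ms = 130.0, 95.0    # ms at / beyond the endpoints
    if heart_rate_bpm <= lo_hr:
        return lo_ms
    if heart_rate_bpm >= hi_hr:
        return hi_ms
    slope = (hi_ms - lo_ms) / (hi_hr - lo_hr)  # negative: interval shortens
    return lo_ms + slope * (heart_rate_bpm - lo_hr)
```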
The three Skype performances discussed in this essay, which we carried out between 2010 and 2013, do not focus on the separating aspects of the acting subjects and their media, but pursue a radically embodied, techno-ecological perspective. At their core, we investigate extended phenotypes that arise through the continuous parasitization of our organisms by the electronic channels. Seen this way, Skype does not connect separate persons and places; rather, through the experience of presence conveyed by voice, skin, and rhythm, among other things, they form a shared environment and temporarily open up a 'third space' or 'third body'. Through medialization, human, machine, and environment develop an emergent relatedness of organic and inorganic milieus, a tactile/haptic-medial, embodied ecology. This is a central assumption of our series of experiments.
Produkt-Controlling
(2017)
A central building block of marketing is the multifaceted product policy. The following contribution first outlines how product policy fits into the target system of marketing and of the company. Product controlling is understood as the targeted support of management tasks in the context of product policy by means of suitable instruments, instruments that can be assigned both to the product development phase and to the market cycle phase. It becomes apparent that there is an extensive set of methods that support marketing management and ensure marketing effectiveness and efficiency. The complexity of product controlling also results from the need to adequately incorporate pricing, quality, and brand policy information into target monitoring.
Computing Aggregates on Autonomous, Self-organizing Multi-Agent System: Application "Smart Grid"
(2017)
Decentralized data aggregation plays an important role in estimating the state of the smart grid, allowing the determination of meaningful system-wide measures (such as the current power generation, consumption, etc.) to balance the power in the grid environment. Data aggregation is practicable only if it is performed effectively. However, many existing approaches are lacking in terms of fault tolerance. We present an approach to construct a robust self-organizing overlay by exploiting the heterogeneous characteristics of the nodes and interlinking the most reliable nodes to form a stable unstructured overlay. The network structure can recover from random state perturbations in finite time and tolerates substantial message loss. Our approach is inspired by biological and sociological self-organizing mechanisms.
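A classic aggregation primitive such an overlay can support is gossip-based averaging: nodes repeatedly pair up and average their values, converging to the global mean without any central coordinator. The toy below illustrates the principle only; the pairing scheme and round count are assumptions, not the paper's protocol.

```python
import random

def gossip_average(values, rounds=200, seed=42):
    """Pairwise gossip averaging toward the global mean (fixed seed
    for reproducibility); the sum of all values is conserved."""
    rng = random.Random(seed)
    state = list(values)
    for _ in range(rounds):
        i, j = rng.sample(range(len(state)), 2)  # random pair exchange
        avg = (state[i] + state[j]) / 2.0
        state[i] = state[j] = avg
    return state
```

Because each exchange conserves the pair's sum, the global total (e.g. total power consumption) is preserved while every node's estimate converges to the mean.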
With an increasing number of flexible AC transmission system (FACTS) devices in operation, such as the highly versatile unified power flow controller (UPFC), AC/DC transmission flexibility and power system stability face unprecedented challenges. This paper introduces the user-defined modeling (UDM) method into the UPFC dynamic modeling process to deal with these demanding requirements of power system operation. The approach is verified using the leading-edge stability analysis software DSATools™ in the IEEE 39-bus benchmark system. The characteristics of steady-state and dynamic responses are compared and analyzed under different conditions. Furthermore, the simulation results prove the feasibility and effectiveness of the proposed UPFC model in terms of both the independent regulation of power flow and the improvement of transient stability.
In the course of the last few years, our students have become increasingly unhappy. Sometimes they stop attending lectures and even seem not to know how to behave appropriately. It feels as if they are going on strike. Consequently, drop-out rates are sky-rocketing. The lecturers and professors are not happy either, adopting an "I-don't-care" attitude.
An interdisciplinary, international team set out to find answers: (1) What are the students unhappy about? Why is it becoming so difficult for them to cope? (2) What does the "I-don't-care" attitude of professors actually mean? What do they care or not care about? (3) How far do the views of the two parties correlate? Could some kind of mutual understanding be achieved?
The findings indicate that, at least at our universities, there is rather a long way to go from “Engineering versus Pedagogy” to “Engineering Pedagogy”.
Three real-lab trigeneration microgrids are investigated in non-residential environments (educational, office/administrative, companies/production) with a special focus on domain-specific load characteristics. For accurate load forecasting on such a local level, a priori information on scheduled events has been combined with statistical insight from historical load data (capturing information on consumer behavior that is not explicitly known). The load forecasts are then used as data input for the (predictive) energy management systems implemented in the trigeneration microgrids. In real-world applications, these energy management systems must in particular be able to carry out a number of safety and maintenance operations on components such as the battery (e.g. gassing) or the CHP unit (e.g. regular test runs). Therefore, energy management systems should combine heuristics with advanced predictive optimization methods. To reduce the effort in IT infrastructure, the main and safety-relevant management process steps are executed on site using a Smart & Local Energy Controller (SLEC), assisted by locally measured signals or operator-given information serving as defaults and by external inputs for any advanced optimization. Heuristic aspects for the local fine adjustment of energy flows are presented.
In 1504, the German knight Gottfried ("Götz") von Berlichingen lost his right hand. Already during his convalescence he considered replacing the hand, and soon afterwards he commissioned the first hand prosthesis, the so-called "Iron Hand". Years later, the more elaborate second "Iron Hand" was built. We reconstructed the first prosthesis using 3D computer-aided design (CAD), based on earlier literature data from Quasigroch (1982). To this end, some dimensions had to be adapted and a few assumptions made for the CAD model. The historical passive prosthesis of Götz von Berlichingen is of interest for modern neuroprosthetics, as it could represent an alternative to complex invasive brain-machine interface concepts where such concepts are not necessary, not possible, or not desired by the patient.
Streustrahlung in der Ultraschall-Computertomographie zur Verifizierung der Echtheit von Perlen
(2017)
This paper describes a new analysis method developed to distinguish real from fake pearls using non-ionizing, non-destructive ultrasound computed tomography (USCT): in the Shepp-Logan-filtered USCT time-of-flight image, a fake pearl shows irregular, asymmetric scattering of ultrasound, whereas the pattern in a natural pearl is regular and symmetric. We strongly assume that pattern recognition of ultrasound scattering can play an important role not only in verifying pearls, but also in testing other materials and tissues in (bio-)medical engineering. Furthermore, and most importantly, this new approach could be helpful for a variety of clinical diagnoses using high-resolution 3D USCT, such as the detection of X-ray-negative micro-calcifications in early breast cancer. Moreover, looking at scattering patterns in dedicated positron emission tomography systems may promote new developments in nuclear medicine diagnostics.
The building sector is one of the main consumers of energy. Therefore, heating and cooling concepts based on renewable energy sources are becoming increasingly important. For this purpose, low-temperature systems such as thermo-active building systems (TABS) are particularly suitable. This paper presents results of the use of a novel adaptive and predictive computation method based on multiple linear regression (AMLR) for the control of TABS in a passive seminar building. Detailed comparisons between the standard TABS and AMLR strategies are shown over a period of nine months each. In addition to a reduction of thermal energy use by approx. 26% and a significant reduction of the TABS pump operation time, this paper focuses on the investment savings enabled in a passive seminar building by the AMLR strategy. These include a reduction of the peak power of the chilled beams (the auxiliary system), a simplification of the TABS hydronic circuit, and the saving of an external temperature sensor. The AMLR proves its practicality by learning from historical building operation, by coping with forecasting errors, and by being easy to integrate into a building automation system.
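At the core of an approach like AMLR is an ordinary least-squares fit of a linear model to historical operation data. The sketch below solves the normal equations in pure Python; the feature choice (e.g. outdoor temperature and solar gain standing in for the paper's actual inputs) and the data are illustrative assumptions.

```python
def fit_linear(features, targets):
    """Least squares for y = w0 + w1*x1 + ... via the normal equations."""
    rows = [[1.0] + list(x) for x in features]  # prepend intercept column
    n = len(rows[0])
    # Build X^T X and X^T y
    xtx = [[sum(r[i] * r[j] for r in rows) for j in range(n)] for i in range(n)]
    xty = [sum(r[i] * y for r, y in zip(rows, targets)) for i in range(n)]
    # Gaussian elimination with partial pivoting
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(xtx[r][col]))
        xtx[col], xtx[pivot] = xtx[pivot], xtx[col]
        xty[col], xty[pivot] = xty[pivot], xty[col]
        for r in range(col + 1, n):
            f = xtx[r][col] / xtx[col][col]
            for c in range(col, n):
                xtx[r][c] -= f * xtx[col][c]
            xty[r] -= f * xty[col]
    # Back substitution
    w = [0.0] * n
    for r in range(n - 1, -1, -1):
        w[r] = (xty[r] - sum(xtx[r][c] * w[c] for c in range(r + 1, n))) / xtx[r][r]
    return w

def predict(w, x):
    return w[0] + sum(wi * xi for wi, xi in zip(w[1:], x))
```

Refitting the weights on a sliding window of recent operation data is what makes such a regression "adaptive": the controller keeps learning from the building's own history.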
Polygeneration systems are a key technology for reducing primary energy usage and emissions. High costs, lack of flexibility, and the effort required for parameterization hinder the wide use of modeling tools during conceptual design. This paper describes how planning tools can be structured for the conceptual design phase, in which only little information is available to the planner. A library concept was developed using the principles of object-oriented modeling to address the flexibility issue. With respect to cost and expandability, the open-source modeling language Modelica was chosen. Furthermore, easy-to-parameterize component models were developed. In addition to the improved library concept and novel component models, an easy-to-adapt control concept is proposed. The component models were validated and the applicability of the library was demonstrated by means of an example. It was shown that the data usually obtained from spec sheets are sufficient to parameterize the models. In addition, the control concept was confirmed to work as intended.
In this paper we present the implementation of a model-predictive controller (MPC) for real-time control of a cable-robot-based motion simulator. The controller computes control inputs such that a desired acceleration and angular velocity at a defined point in the simulator's cabin are tracked while satisfying the constraints imposed by the working space and the allowed cable forces of the robot. In order to fully use the simulator's capabilities, we propose an approach that includes the motion platform actuation in the MPC model. The tracking performance and computation time of the algorithm are investigated in computer simulations. Furthermore, for motion simulation scenarios where the reference trajectories are not known beforehand, we derive an estimate of how much motion simulation fidelity can maximally be improved by any reference prediction scheme compared to the case where no prediction scheme is applied.
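The receding-horizon principle behind any MPC can be shown on a deliberately tiny example: for a scalar integrator, pick the input sequence over a short horizon that best tracks a reference under input bounds, apply only the first input, then repeat at the next step. This is a toy for intuition only; the cable-robot controller above involves full rigid-body dynamics and cable-force constraints solved with proper optimization, not the brute-force search used here.

```python
import itertools

def mpc_step(x, reference, horizon=3, candidates=(-1.0, -0.5, 0.0, 0.5, 1.0)):
    """One receding-horizon step for the scalar integrator x+ = x + u,
    with inputs restricted to a small candidate set (input constraint)."""
    best_u, best_cost = 0.0, float("inf")
    for seq in itertools.product(candidates, repeat=horizon):
        cost, state = 0.0, x
        for u in seq:
            state += u                                  # integrator dynamics
            cost += (state - reference) ** 2 + 0.01 * u ** 2
        if cost < best_cost:
            best_cost, best_u = cost, seq[0]            # apply only the first input
    return best_u
```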
Radiation is an important means of heat transfer inside an electric arc furnace (EAF). To gain insight into the complex processes of heat transfer inside the EAF vessel, not only radiation from the surfaces but also emission and absorption by the gas phase and the dust cloud need to be considered. Furthermore, the radiative heat exchange depends on the geometrical configuration, which changes continuously throughout the process. The present paper introduces a system model of the EAF which takes into account the radiative heat transfer between the surfaces and the participating medium. This is attained by the development of a simplified geometrical model, the use of a weighted-sum-of-gray-gases model, and a simplified treatment of dust radiation. The simulation results were compared with data from real EAF plants available in the literature.
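The weighted-sum-of-gray-gases (WSGG) model mentioned above approximates the emissivity of a participating gas as a weighted sum of a few fictitious gray gases, eps = sum_i a_i * (1 - exp(-k_i * p * L)). The sketch below shows the formula only; the weights and absorption coefficients are made-up placeholders, whereas real ones come from fitted WSGG correlations for the gas composition at hand.

```python
import math

def wsgg_emissivity(weights, absorption_coeffs, partial_pressure, path_length):
    """Total emissivity from the WSGG superposition; the remaining weight
    (1 - sum of weights) belongs to the transparent 'clear gas'."""
    assert sum(weights) <= 1.0 + 1e-9
    return sum(a * (1.0 - math.exp(-k * partial_pressure * path_length))
               for a, k in zip(weights, absorption_coeffs))
```

Emissivity grows monotonically with the pressure-path-length product, approaching the sum of the gray-gas weights for an optically thick path.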
The need to measure basic aerosol parameters has increased dramatically in the last decade. This is due mainly to their harmful effect on the environment and on public health. Legislation requires that particle emissions and ambient levels, workplace particle concentrations and exposure to them are measured to confirm that the defined limits are met and the public is not exposed to harmful concentrations of aerosols.
Climate change and the resultant scarcity of water are becoming major challenges for countries around the world. With the advent of Wireless Sensor Networks (WSN) in the last decade and the relatively new concept of the Internet of Things (IoT), embedded systems developers are now working on designing control and automation systems that are lower in cost and more sustainable than the existing telemetry systems for monitoring. The Indus river basin in Pakistan has one of the world's largest irrigation systems, and it is extremely challenging to design a low-cost embedded system for monitoring and control of waterways that can last for decades. In this paper, we present a hardware design and performance evaluation of a smart water metering solution that is IEEE 802.15.4-compliant. The results show that our hardware design is as powerful as the reference design, but allows for additional flexibility both in hardware and in firmware. The indigenously designed solution has a power-added efficiency (PAE) of 24.7% and is expected to last for 351 and 814 days for nodes with and without a power amplifier (PA), respectively. Similarly, the results show that broadband communication (434 MHz) over more than 3 km can be supported, which is an important stepping stone towards a complete coverage solution for large-scale waterways.
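The power-added efficiency figure quoted above follows the standard definition PAE = (RF output power − RF input power) / DC power consumed. A one-line sketch, with example values chosen purely for illustration (they are not the paper's measurements):

```python
def power_added_efficiency(p_rf_out_w, p_rf_in_w, p_dc_w):
    """PAE of a power amplifier: net RF power added per watt of DC supply."""
    return (p_rf_out_w - p_rf_in_w) / p_dc_w
```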
The IEEE 1588 precision time protocol (PTP) is a time synchronization protocol with sub-microsecond precision primarily designed for wired networks. In this letter, we propose wireless precision time protocol (WPTP) as an extension to PTP for multi-hop wireless networks. WPTP significantly reduces the convergence time and the number of packets required for synchronization without compromising on the synchronization accuracy.
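The PTP mechanism that WPTP extends rests on a four-timestamp exchange: t1 = master send, t2 = slave receive, t3 = slave send, t4 = master receive. Assuming a symmetric path, the slave's clock offset and the one-way delay follow directly from the timestamps, as sketched below.

```python
def ptp_offset_and_delay(t1, t2, t3, t4):
    """Standard IEEE 1588 delay request-response computation
    (symmetric-path assumption)."""
    offset = ((t2 - t1) - (t4 - t3)) / 2.0  # slave clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2.0   # one-way path delay
    return offset, delay
```

Reducing how many such exchanges (and forwarded packets) are needed across multiple wireless hops is precisely where the proposed WPTP extension claims its gains.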
Due to climate change and the scarcity of water reservoirs, monitoring and control of irrigation systems is becoming a major focal area for researchers in Cyber-Physical Systems (CPS). Wireless Sensor Networks (WSNs) are rapidly finding their way into the field of irrigation and play a key role as data-gathering technology in the domain of IoT and CPS. They enable reliable monitoring, giving farmers an edge in taking precautionary measures. However, designing an energy-efficient WSN system requires a cross-layer effort, and energy-aware routing protocols play a vital role in the overall energy optimization of a WSN. In this paper, we propose a new hierarchical routing protocol suitable for large-area environmental monitoring, such as the large-scale irrigation network in the Punjab province of Pakistan. The proposed protocol resolves the issues faced by traditional multi-hop routing protocols such as LEACH, M-LEACH, and I-LEACH, and enhances the lifespan of each WSN node, which results in an increased lifespan of the whole network. We used the open-source NS3 simulator for our experiments, and the results indicate that our proposed modifications yield an average 27.8% increase in the lifespan of the overall WSN compared to the existing protocols.
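For context on the LEACH family mentioned above: in LEACH, each node that has not yet served as cluster head in the current epoch elects itself with threshold T(n) = p / (1 − p · (r mod 1/p)), where p is the desired cluster-head fraction and r the round number, so that the duty rotates evenly and every node serves once per epoch.

```python
def leach_threshold(p, round_number, was_cluster_head_this_epoch):
    """LEACH cluster-head election threshold T(n); nodes that already
    served this epoch are excluded (threshold 0)."""
    if was_cluster_head_this_epoch:
        return 0.0
    return p / (1.0 - p * (round_number % int(1.0 / p)))
```

The threshold rises over the epoch (reaching 1.0 in the last round), guaranteeing that the remaining eligible nodes eventually take their turn, which is the load-balancing property the hierarchical protocols above build on.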
Stability proofs for control loops whose manipulated variables can reach their limits, and whose controllers contain integrators or other dynamic elements as well as anti-windup measures, are usually very laborious. However, for PI state controllers designed with the multi-stage method presented in [1] for plants that are linear except for the input saturation, very helpful general stability statements can be made which considerably simplify the concrete stability proof for the overall system, even when observers are included. This contribution presents the underlying relationships, based on controllability considerations, for discrete-time control loops, and derives from them, by way of example using Lyapunov functions, a simple controller formula for state controllers that remain stable even under input saturation. An example from electrical drive engineering illustrates the applicability of the presented method.
eTPL: An Enhanced Version of the TLS Presentation Language Suitable for Automated Parser Generation
(2017)
The specification of the Transport Layer Security (TLS) protocol defines its own presentation language used for the purpose of semi-formally describing the structure and on-the-wire format of TLS protocol messages. This TLS Presentation Language (TPL) is more expressive and concise than natural language or tabular descriptions, but as a result of its limited objective has a number of deficiencies. We present eTPL, an enhanced version of TPL that improves its expressiveness, flexibility, and applicability to non-TLS scenarios. We first define a generic model that describes the parsing of binary data. Based on this, we propose language constructs for TPL that capture important information which would otherwise have to be picked manually from informal protocol descriptions. Finally, we briefly introduce our software tool etpl-tool which reads eTPL definitions and automatically generates corresponding message parsers in C++. We see our work as a contribution supporting sniffing, debugging, and rapid-prototyping of wired and wireless communication systems.
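To illustrate what a generated message parser of the kind etpl-tool emits might do (sketched here in Python rather than the tool's actual C++ output): binary formats described in TPL-style notations typically consist of fixed fields followed by length-prefixed payloads. The record layout below is hypothetical.

```python
import struct

def parse_record(data):
    """Parse a hypothetical record: 1-byte type, 2-byte big-endian
    length, then `length` bytes of payload."""
    msg_type, length = struct.unpack_from("!BH", data, 0)
    payload = data[3:3 + length]
    if len(payload) != length:
        raise ValueError("truncated record")
    return {"type": msg_type, "payload": payload}
```

Generating such parsers mechanically from an eTPL description removes the hand-written, error-prone length and bounds checks, which is the paper's motivation.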
The paper describes the hardware and software architecture of the developed multi-MEMS-sensor prototype module, consisting of an ARM Cortex-M4 STM32F446 microcontroller unit, five 9-axis inertial measurement units MPU9255 (3D accelerometer, 3D gyroscope, 3D magnetometer, and temperature sensor), and a BMP280 barometer. The module is also equipped with a WiFi wireless interface (Espressif ESP8266 chip). The module is constructed in the form of a truncated pyramid. The inertial sensors are mounted on a special base at different angles to each other to eliminate hardware sensor drifts and to provide the capability for self-calibration. The module fuses the information obtained from all types of inertial sensors (acceleration, rotation rate, magnetic field, and air pressure) in order to calculate orientation and trajectory. It can be used as an inertial measurement unit, vertical reference unit, or attitude and heading reference system.
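A minimal example of the kind of gyro/accelerometer fusion such a module performs is the complementary filter: integrate the gyro rate (accurate short-term, but drifting) and correct it with the accelerometer's gravity-based angle (noisy, but drift-free). This is a generic textbook sketch, not the module's actual algorithm; the blend factor alpha is an illustrative assumption.

```python
import math

def complementary_pitch(prev_pitch, gyro_rate, accel_x, accel_z, dt, alpha=0.98):
    """One complementary-filter update of the pitch angle (radians)."""
    gyro_pitch = prev_pitch + gyro_rate * dt      # integrates, but drifts
    accel_pitch = math.atan2(accel_x, accel_z)    # noisy, but drift-free
    return alpha * gyro_pitch + (1.0 - alpha) * accel_pitch
```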
The low cost and small size of MEMS inertial sensors allow their combination into a multi-sensor module in order to improve performance. However, the different linear accelerations measured at different places on a rotating rigid body have to be considered for proper fusion of the measurements. The measurement errors of MEMS inertial sensors include deterministic imperfections as well as random noise. The gain in accuracy from using multiple sensors depends strongly on the correlation between the errors of the different sensors. Although in sensor fusion it is usually assumed that the measurement errors of different sensors are uncorrelated, estimation theory shows that for a combination of sensors of the same type a negative correlation is actually more beneficial. We therefore describe some important and often neglected considerations for the combination of several sensors, and present some preliminary results regarding the correlation of measurements from a simple multi-sensor setup.
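The estimation-theoretic point above has a compact form: averaging two sensors of equal error variance sigma^2 and error correlation rho yields a fused variance of sigma^2 * (1 + rho) / 2. Negative correlation thus reduces the fused variance below the uncorrelated case, down to zero for rho = −1.

```python
def fused_variance(sigma2, rho):
    """Variance of the mean of two equally noisy sensors whose errors
    have correlation coefficient rho (-1 <= rho <= 1)."""
    return sigma2 * (1.0 + rho) / 2.0
```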
A novel approach to a test environment for embedded networking nodes has been conceptualized and implemented. Its basis is the use of virtual nodes in a PC environment, where each node executes the original embedded code. Different nodes run in parallel, connected via so-called virtual channels. The environment allows modifying the behavior of the virtual channels as well as the overall topology at runtime in order to virtualize real-life networking scenarios. The presented approach is very efficient and allows a simple description of test cases without the need for a network simulator. Furthermore, it speeds up the development of new features and supports the identification of bugs in wireless communication stacks. In combination with powerful test execution systems, it is possible to create a continuous development and integration flow.
Heart rhythm model and simulation of electrophysiological studies and high-frequency ablations
(2017)
Background: The simulation of complex cardiologic structures has the potential to replace clinical studies owing to its high efficiency in terms of time and cost. Furthermore, the method is gentler on patients' health than conventional approaches. The aim of the study was to create an anatomic CAD heart rhythm model (HRM) that is as accurate as possible, and to show its usefulness for cardiac electrophysiological studies (EPS) and high-frequency (HF) ablations.
Methods: All natural heart components of the new HRM were based on MRI records, which guaranteed electronic functionality. The software CST (Computer Simulation Technology, Darmstadt) was used for the construction, while CST's material library ensured genuine tissue properties. The model should be applicable to simulating different heart rhythm diseases as well as various distributions of electromagnetic fields, caused by electrophysiological conduction, inside the heart tissue.
Results: It was possible to simulate normal sinus rhythm and fourteen different heart rhythm disturbances with different atrial and ventricular conduction delays. The simulated biological excitation of the healthy and diseased HRM was plotted by simulated electrodes of a four-polar right atrial catheter, a six-polar His bundle catheter, a ten-polar coronary sinus catheter, a four-polar ablation catheter, and an eight-polar transesophageal left cardiac catheter (Fig.). Accordingly, six variables were rebuilt and inserted into the anatomic HRM in order to establish heart catheters for ECG monitoring and HF ablation. The HF ablation catheters made it possible to simulate various types of heart rhythm disturbance ablations with different HF ablation catheters, and also provided a functional visualization of tissue heating. The use of tetrahedral meshing in the HRM made it possible to store results faster with considerable space savings. The smart meshing function avoided unnecessarily high resolution for coarse structures.
Conclusions: The new HRM for EPS simulation may additionally be useful for the simulation of heart rhythm disturbances, cardiac pacing and HF ablation, and for locating and identifying complex fractionated signals within the atrium during atrial fibrillation HF ablation.
Comparing anomalies and exceptions to multilateral dysfunction across a number of spheres of world politics, the book chapter explores pathways through and beyond gridlock in trade. It provides a vital new perspective on world politics as well as a practical guide for positive change in global policy.
Quo Vadis Freihandel?
(2017)
Background: The electrical field (E-field) of the biventricular (BV) stimulation is essential for the success of cardiac resynchronization therapy (CRT) in patients with cardiac insufficiency and widened QRS complex. 3D modeling allows the simulation of CRT and high frequency (HF) ablation.
Purpose: The aim of the study was to model different pacing and ablation electrodes and to integrate them into a heart model for the static and dynamic simulation of BV stimulation and HF ablation in atrial fibrillation (AF).
Methods: The modeling and simulation were carried out using electromagnetic simulation software. Five multipolar left ventricular (LV) electrodes, one epicardial LV electrode, four bipolar right atrial (RA) electrodes, two right ventricular (RV) electrodes and one HF ablation catheter were modeled. Different electrode models were integrated into a heart rhythm model for the electrical field simulation (fig.1). The simulation of an AV node ablation at CRT was performed with RA, RV and LV electrodes and an integrated ablation catheter with an 8 mm gold tip.
Results: The RV and LV stimulation were performed simultaneously at an amplitude of 3 V at the LV electrode and 1 V at the RV electrode, each with a pulse width of 0.5 ms. The far-field potentials generated by the BV stimulation were detected by the RA electrode. The far-field potential at the RA electrode tip was 32.86 mV; a far-field potential of 185.97 mV resulted at a distance of 1 mm from the RA electrode tip. AV node ablation was simulated with an applied power of 5 W at 420 kHz at the distal 8 mm ablation electrode. After 5 s of ablation, the temperature was 103.87 °C at the catheter tip, 44.17 °C in the myocardium near the catheter tip and 37.61 °C at a distance of 2 mm. After 10 s, the temperatures at the three measuring points described above were 107.33 °C, 50.87 °C and 40.05 °C, and after 15 s they were 118.42 °C, 55.75 °C and 42.13 °C.
Conclusions: Virtual heart and electrode models, together with the simulation of electrical fields and temperature profiles, allow the static and dynamic simulation of atrial-synchronous BV stimulation and of HF ablation in AF. The 3D simulation of the electrical field and temperature profile may be used to optimize CRT and AF ablation.
Non-destructive methods for measuring residual stresses require, depending on the chosen method, knowledge of certain coupling constants. For ultrasonic measurement methods these are, besides the second-order elastic constants (SOEC), above all the third-order elastic constants (TOEC). Elastic constants of solid metallic components are usually determined in tensile tests; to determine the TOEC, these tests are combined with ultrasonic measurement methods. However, external influences such as mechanical post-treatment of the components under investigation can change these constants, which must consequently be determined directly on the modified material. Using simulations, the propagation of the second harmonic and of nonlinearly generated surface waves in wave-mixing experiments is analyzed, and the acoustic nonlinearity parameter (ANP), i.e. the coupling parameter, is calculated from the amplitude evolution. In particular, it is investigated how a given depth profile of the TOEC influences the ANP (forward problem) and to what extent a depth profile of the TOEC can be inferred from measurements of the ANP (inverse problem). Furthermore, it is discussed how local changes of the SOEC can affect the ANP and how large these changes may be while still allowing the TOEC to be determined. These investigations were carried out on the basis of a 3D FEM model with randomly oriented microcracks. The numerical calculations also show good agreement with an analytical model known from the literature and extended for this problem, which can account for lattice nonlinearity in addition to crack-induced nonlinearity.
Elastic constants of components are usually determined by tensile tests in combination with ultrasonic experiments. However, these properties may change due to e.g. mechanical treatments or service conditions during their lifetime. Knowledge of the actual material parameters is key to the determination of quantities like residual stresses present in the medium. In this work the acoustic nonlinearity parameter (ANP) for surface acoustic waves is examined through the derivation of an evolution equation for the amplitude of the second harmonic. Given a certain depth profile of the third-order elastic constants, the dependence of the ANP with respect to the input frequency is determined and on the basis of these results, an appropriate inversion method is developed. This method is intended for the extraction of the depth dependence of the third-order elastic constants of the material from second-harmonic generation and guided wave mixing experiments, assuming that the change in the linear Rayleigh wave velocity is small. The latter assumption is supported by a 3D-FEM model study of a medium with randomly distributed microcracks as well as theoretical works on this topic in the literature.
The growing complexity of RF front-ends, which support carrier aggregation and a growing number of frequency bands, leads to tightened nonlinearity requirements for all sub-components. The generation of third-order intermodulation products (IMD3) is a typical problem caused by the nonlinearity of SAW devices. In the present work, we investigate temperature-compensated (TC) SAW devices on 128°-rotated YX lithium niobate. An accurate FEM simulation model [1] is employed, which allows a better understanding of the origin of nonlinearities in such acoustic devices.
Wedge acoustic waves in solids are the third fundamental type of waves, after bulk and surface waves, whose pulses propagate without change of shape (there is no dispersion). The system of an elastic wedge can be obtained from the system of an elastic half-space by "cutting" it along some plane, and the system of an elastic half-space can in turn be obtained from an elastic medium distributed in space by the same method; therefore, the relations between surface and bulk waves should largely recur when wedge and surface waves are considered. For example, the existence of fast pseudo-surface waves in an elastic half-space, which radiate energy into bulk waves as they propagate, has an analogue for the elastic wedge: pseudo-wedge waves, which radiate both bulk and surface waves as they propagate, were discovered quite recently. On the other hand, distinctive features should also emerge in this sequence of bulk, surface and wedge waves. While surface waves differ from bulk waves in that they are localized on a two-dimensional surface (bulk waves are not localized), wedge waves are localized along a one-dimensional surface (a line): the edge of the wedge. Wedge waves are guided acoustic waves that propagate without diffraction losses; they also exhibit no dispersion, since the system of an infinite elastic wedge contains no parameter with the dimension of length.
In conclusion, the main results of the work are summarized as follows:
1. Using the method of Laguerre functions, the dynamic response function to an impulsive line source (Green's function) was constructed for Lamb's problem in a half-space, and the convergence and stability of this construction were studied. It was shown that in the limiting case the constructed dynamic response function coincides with the classical Green's function for this problem.
2. Based on the results of the previous item, the Green's function for an elastic wedge was constructed (together with the density-of-states function at the edge, which coincides with the diagonal components of the Green's function); with its help, pulses of pseudo-wedge waves could be identified in experimental curves.
3. For certain wedge configurations in anisotropic elastic media (tetragonal crystals), a criterion for the existence of wedge waves was obtained from the characteristics of the surface waves propagating on the faces of the configurations under study; in some cases the wedge waves could also be classified by symmetry type.
4. A theory was developed describing the shapes of wedge-wave pulses under different generation regimes: ablative and thermoelastic.
5. A second-order nonlinear theory for wedge waves was presented. Numerical calculations of the kernel function of the evolution equation for wedge waves were carried out for silicon wedges with one face coinciding with the (111) surface (the cleavage plane) and with arbitrary orientation of the second face.
6. The fundamental differences between nonlinear wedge waves and nonlinear bulk and surface waves were described, and a numerical simulation of the evolution of a wedge-wave pulse was carried out, which showed agreement between theory and experiment.
7. Soliton-type solutions for wedge waves were obtained. Soliton interactions and the properties of soliton decay were considered.
Electrolyte-Gated Field-Effect Transistors Based on Oxide Semiconductors: Fabrication and Modeling
(2017)
A novel testbed concept for embedded networking nodes has been designed and implemented. It is based on the use of virtual nodes in a PC environment, where each node executes the original embedded code. Different nodes run in parallel and are connected via so-called virtual interfaces. The presented approach is very efficient and allows a simple description of test cases without the need for a network simulator. Furthermore, it speeds up the development of new features.
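The virtual-interface idea can be sketched in a few lines: nodes run as parallel threads and exchange frames over in-memory queues instead of real network hardware. The class and method names below are hypothetical illustrations, not taken from the actual testbed:

```python
import queue
import threading

class VirtualInterface:
    """A point-to-point link between two virtual nodes,
    realized as a pair of message queues (hypothetical sketch)."""
    def __init__(self):
        self.a_to_b = queue.Queue()
        self.b_to_a = queue.Queue()

class VirtualNode(threading.Thread):
    """Runs a node's code in its own thread; frames arrive on rx
    and replies are sent on tx, so no network simulator is needed."""
    def __init__(self, name, rx, tx, handler):
        super().__init__(daemon=True)
        self.name, self.rx, self.tx, self.handler = name, rx, tx, handler

    def run(self):
        while True:
            frame = self.rx.get()
            if frame is None:          # sentinel: shut the node down
                break
            answer = self.handler(frame)
            if answer is not None:
                self.tx.put(answer)

# Example test case: node B echoes frames back with an "ack:" prefix.
link = VirtualInterface()
node_b = VirtualNode("B", link.a_to_b, link.b_to_a, lambda f: b"ack:" + f)
node_b.start()
link.a_to_b.put(b"ping")
reply = link.b_to_a.get(timeout=1)
print(reply)                           # b'ack:ping'
link.a_to_b.put(None)                  # stop the node
```

A real test case would load the compiled embedded code into each node; the sketch only shows how the parallel nodes and virtual links fit together.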
Additive manufacturing processes have developed rapidly in recent years and currently present a high-performance alternative to conventional manufacturing methods. In particular, they offer previously hardly imaginable design freedom, i.e. the implementation of complex forms and geometries. This capability can, for example, be applied in the development of especially light yet load-bearing components in automotive engineering. In addition, additive manufacturing seldom produces waste material, which benefits the sustainable production of components. Until now, this design freedom was barely used in the construction of technical components and products because both specific design guidelines for additive manufacturing and complex strength calculations must be observed simultaneously. To fully exploit the potential of additive manufacturing, the method of topology optimization, based on FEM simulation, suggests itself. With this method, components that are precisely matched to their loads and are especially light, and thereby resource-saving, can be produced. Current literature indicates that this method is used in automotive manufacturing to reduce weight and improve the stability of both individual parts and assemblies. This contribution studies how this development method can be applied using the example of a brake mount from an experimental vehicle. The conventional design is improved in several steps by means of a simulation tool for topology optimization. In an additional processing step, the developed component is smoothed. Finally, the component is generatively manufactured by means of selective laser melting; models for demonstrating the process are manufactured using binder jetting. It is also determined how this weight reduction affects the CO2 emissions of a vehicle in use.
Additive manufacturing processes have evolved rapidly in recent years and now offer a wide range of manufacturing technologies and workable materials, ranging from plastics and metals to paper and even polymer-plaster composites. Due to the layer-by-layer build-up of components, additive processes have, in comparison with conventional manufacturing processes, the advantage of design freedom, i.e. the simple implementation of complex geometries. Moreover, additive processes offer reduced resource consumption, since essentially only the material required for the actual component is consumed and no waste in the form of chips is produced. To use these advantages, the potentials of additive manufacturing and the requirements of sustainable design must already be observed in the product development process. Components and products must be designed so that as little construction and support material as possible is required for generative production, and therefore few resources are consumed. All steps of the additive manufacturing process, including post-processing, must also be considered properly. This allows components to be designed so that, for instance, the effort for removing the support structure is considerably reduced, which leads to a significant reduction in manufacturing time and thus energy consumption. The implementation of these potentials in product development can be demonstrated by means of a multi-stage model. A case study shows how this model is applied in the training of Master's students in the field of product development. In a workshop, the students work as a group on the task of developing a miniature racing car under the rules of sustainable design, in compliance with the boundary conditions of additive manufacturing. In this case, Fused Deposition Modelling (FDM) using plastics as the building material is applied.
The results show how the students have dealt with the different requirements and how they have implemented them in product development and in the subsequent additive manufacturing.
Defining Recrutainment: A Model and a Survey on the Gamification of Recruiting and Human Resources
(2017)
Recrutainment is a hybrid word combining recruiting and entertainment. It describes the combination of human resources activities and gamification: concepts and methods from game design are now used to assess and select future employees. Beyond this area, recrutainment is also applied to internal processes like professional development or even marketing campaigns. This paper’s contribution has four components: (1) we provide a conceptual background, leading to a more precise definition of recrutainment; (2) we develop a new model for analyzing solutions in recrutainment; (3) we present a corpus of 42 applications and use the new model to assess their strengths and potentials; (4) we provide a bird’s eye view of the state of the art in recrutainment and show the current weighting of gamification and recruiting aspects.
Applications that help us maintain focus on work are called “Zenware” (from Zen and software). While form factors, use cases and functionality vary, all these applications have a common goal: creating uninterrupted, focused attention on the task at hand. The rise of such tools exemplifies users’ desire to control their attention within a context of omnipresent distraction. In expert interviews we investigate approaches to attention management at the workplace of knowledge workers. To gain a broad understanding, we use judgement sampling in interviews with experts from several disciplines. We especially explore how focus and flow can be stimulated. Our contribution has four components: a brief overview of the state of the art (1), a presentation of the results (2), strategies for coping with digital distractions and design guidelines for future Zenware (3) and an outlook on the overall potential in digital work environments (4).
This chapter portrays the historical and mathematical background of dynamic and procedural content generation (PCG). We present and compare various PCG methods and analyze which mathematical approach is suited for typical applications in game design. In the next step, a structural overview of games applying PCG as well as types of PCG is presented. As abundant PCG content can be overwhelming, we discuss context-aware adaptation as a way to adapt the challenge to individual players’ requirements. Finally, we take a brief look at the future of PCG.
Battery degradation is a complex physicochemical process that strongly depends on operating conditions. We present a model-based analysis of lithium-ion battery degradation in a stationary photovoltaic battery system. We use a multi-scale multi-physics model of a graphite/lithium iron phosphate (LiFePO4, LFP) cell including solid electrolyte interphase (SEI) formation. The cell-level model is dynamically coupled to a system-level model consisting of photovoltaics (PV), inverter, load, grid interaction, and energy management system, fed with historic weather data. Simulations are carried out for two load scenarios, a single-family house and an office tract, over annual operation cycles with one-minute time resolution. As a key result, we show that the charging process causes a peak in degradation rate due to electrochemical charge overpotentials. The main drivers of cell ageing are therefore not only a high state of charge (SOC), but the charging process leading towards high SOC. We also show that the load situation not only influences system parameters like self-sufficiency and self-consumption, but also has a significant impact on battery ageing. We assess a reduced charge cut-off voltage as an ageing mitigation strategy.
The DMFC is a promising option for backup power systems and for the power supply of portable devices. However, from the modeling point of view, liquid-feed DMFCs are challenging systems due to the complex electrochemistry, the inherent two-phase transport and the effect of methanol crossover. In this paper we present a physical 1D cell model describing the processes relevant for DMFC performance, ranging from electrochemistry on the surface of the catalyst up to transport on the cell level. A two-phase flow model is implemented describing the transport in the gas diffusion layer and catalyst layer on the anode side. Electrochemistry is described by elementary steps for the reactions occurring at anode and cathode, including adsorbed intermediate species on the platinum and ruthenium surfaces. Furthermore, a detailed membrane model including methanol crossover is employed. The model is validated using polarization curves, methanol crossover measurements and impedance spectra. It permits the analysis of both steady-state and transient behavior with a high level of predictive capability. Steady-state simulations are used to investigate the open circuit voltage as well as the overpotentials of anode, cathode and electrolyte. Finally, the transient behavior after current interruption is studied in detail.
This book offers a compendium of best practices in game dynamics. It covers a wide range of dynamic game elements ranging from player behavior over artificial intelligence to procedural content generation. Such dynamics make virtual worlds more lively and realistic and they also create the potential for moments of amazement and surprise. In many cases, game dynamics are driven by a combination of random seeds, player records and procedural algorithms. Games can even incorporate the player’s real-world behavior to create dynamic responses. The best practices illustrate how dynamic elements improve the user experience and increase the replay value.
The book draws upon interdisciplinary approaches; researchers and practitioners from Game Studies, Computer Science, Human-Computer Interaction, Psychology and other disciplines will find this book to be an exceptional resource of both creative inspiration and hands-on process knowledge.
Gamifying rehabilitation is an efficient way to improve motivation and exercise frequency. However, among flow theory, self-determination theory and Bartle's player types, there is much room for speculation regarding the mechanics required for successful gamification, which in turn leads to increased motivation. For our study, we selected a gamified solution for motion training (an exergame) whose playful design elements are extremely simple. The contribution is three-fold: we show best practices from the state of the art, present a study analyzing the effects of simple gamification mechanics on a quantitative and on a qualitative level, and discuss strategies for playful design in therapeutic movement games.
Designing Authentic Emotions for Non-Human Characters. A Study Evaluating Virtual Affective Behavior
(2017)
While human emotions have been researched for decades, designing authentic emotional behavior for non-human characters has received less attention. However, virtual behavior not only affects game design, but also allows the creation of authentic avatars or robotic companions. After a discussion of methods to model and recognize emotions, we present three characters with a decreasing level of human features and describe how established design techniques can be adapted for such characters. In a study, 220 participants assess these characters' emotional behavior, focusing on the emotion "anger". We want to determine how reliably users can recognize emotional behavior when characters increasingly do not look and behave like humans. A secondary aim is to determine whether gender has an impact on competence in emotion recognition. The findings indicate that there is an area of insecure attribution of virtual affective behavior not distant from, but close to, human behavior. We also found that, at least for anger, men and women assess emotional behavior equally well.
This work demonstrates the potentials of procedural content generation (PCG) for games, focusing on the generation of specific graphic props (reefs) in an explorer game. We briefly portray the state-of-the-art of PCG and compare various methods to create random patterns at runtime. Taking a step towards the game industry, we describe an actual game production and provide a detailed pseudocode implementation showing how Perlin or Simplex noise can be used efficiently. In a comparative study, we investigate two alternative implementations of a decisive game prop: once created traditionally by artists and once generated by procedural algorithms. 41 test subjects played both implementations. The analysis shows that PCG can create a user experience that is significantly more realistic and at the same time perceived as more aesthetically pleasing. In addition, the ever-changing nature of the procedurally generated environments is preferred with high significance, especially by players aged 45 and above.
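The noise-based generation described above can be illustrated with a compact sketch. For brevity it uses value noise, a simpler relative of the Perlin/Simplex noise named in the paper, and the function and parameter names are illustrative rather than taken from the actual game code:

```python
import math
import random

def value_noise_2d(x, y, seed=0):
    """Smooth 2-D value noise: pseudo-random values at integer lattice
    points, blended with a smoothstep curve (a simpler relative of
    Perlin noise; illustrative sketch, not the authors' implementation)."""
    def lattice(ix, iy):
        # deterministic pseudo-random value in [0, 1) per lattice point
        rnd = random.Random((ix * 73856093) ^ (iy * 19349663) ^ seed)
        return rnd.random()
    def fade(t):
        # smoothstep easing, as used in classic Perlin noise
        return t * t * (3 - 2 * t)
    x0, y0 = math.floor(x), math.floor(y)
    tx, ty = fade(x - x0), fade(y - y0)
    # bilinear interpolation between the four surrounding lattice values
    top = lattice(x0, y0) * (1 - tx) + lattice(x0 + 1, y0) * tx
    bot = lattice(x0, y0 + 1) * (1 - tx) + lattice(x0 + 1, y0 + 1) * tx
    return top * (1 - ty) + bot * ty

# A reef prop could be placed wherever the noise exceeds a threshold,
# yielding organic, ever-changing patterns from a single seed:
reef_mask = [[value_noise_2d(x * 0.3, y * 0.3) > 0.6 for x in range(8)]
             for y in range(8)]
```

Because the noise is a pure function of position and seed, the same reef layout can be regenerated at runtime without storing it, which is the core economy of PCG props.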
Gamification, the playful enrichment of activities, is enjoying growing popularity. Particularly in the areas of health (exergames) and learning (serious games, edutainment) there are numerous successful applications. Gamification is less widespread, however, in work processes. Although there are successful approaches in the service sector (e.g. in call centers), the field of industrial production was not addressed until a few years ago.
This chapter gives an overview of the development of gamification and presents the state of the art. We derive general requirements for gamification in production environments and present two new approaches from current research. Their acceptance is examined in a study with trainers from the automotive industry. The results show an overall positive attitude towards the gamification of production and a very high acceptance, particularly of the pyramid design.
We present the design outline of a context-aware interactive system for smart learning in the STEM curriculum (science, technology, engineering, and mathematics). It is based on a gameful design approach and enables "playful coached learning" (PCL): a learning process enriched by gamification but also close to the learner's activities and emotional setting. After a brief introduction on related work, we describe the technological setup, the integration of projected visual feedback and the use of object and motion recognition to interpret the learner's actions. We explain how this combination enables rapid feedback and why this is particularly important for correct habit formation in practical skills training. In a second step, we discuss gamification methods and analyze which are best suited for the PCL system. Finally, emotion recognition, a major element of the final PCL design not yet implemented, is briefly outlined.
EuGH "comtech"
(2017)
Since their early days, space communications have been among the strongest driving applications for the development of error correcting codes. Indeed, space-to-Earth telemetry (TM) links have extensively exploited advanced coding schemes, from convolutional codes to Reed-Solomon codes (also in concatenated form) and, more recently, from turbo codes to low-density parity-check (LDPC) codes. The efficiency of these schemes has been extensively proved in several papers and reports. The situation is somewhat different for Earth-to-space telecommand (TC) links. Space TCs must reliably convey control information as well as software patches from Earth control centers to scientific payload instruments and engineering equipment onboard (O/B) spacecraft. The success of a mission may be compromised by an error corrupting a TC message: a detected error causing no execution or, even worse, an undetected error causing a wrong execution. This imposes strict constraints on the maximum acceptable detected and undetected error rates.
Our university carries out various research projects. Among others, the Schluckspecht project is interdisciplinary work on different ultra-efficient car concepts for international contests. Besides the engineering work, one part of the project deals with real-time data visualization. To increase the efficiency of the vehicle, online monitoring of the runtime parameters is necessary. The driving parameters of the vehicle are transmitted to a processing station via a wireless network connection. We plan to use an augmented reality (AR) application to visualize different data on top of the view of the real car. Using a mobile Android or iOS device, a user can interactively view various real-time and statistical data. The car and its components are augmented with additional information, which should appear at the correct position on the components. The engine, for example, could show the current rpm and consumption values, while the battery could show the current charge level. The goal of this paper is to evaluate different possible approaches and their suitability, and to expand our application to other projects at our university.
We present a two-dimensional (2D) planar chromatographic separation of estrogenic active compounds on RP-18 W (Merck, 1.14296) phase. A mixture of 8 substances was separated using a solvent mix consisting of hexane, ethyl acetate, acetone (55:15:10, v/v) in the first direction and of acetone and water (15:10, v/v) in the second direction. Separation was performed on an RP-18 W plate over a distance of 70 mm. This 2D-separation method can be used to quantify 17α-ethinylestradiol (EE2) in an effect-directed analysis, using the yeast strain Saccharomyces cerevisiae BJ3505. The test strain (according to McDonnell) contains the estrogen receptor. Its activation by estrogen active compounds is measured by inducing the reporter gene lacZ which encodes the enzyme β-galactosidase. This enzyme activity is determined on plate by using the fluorescent substrate MUG (4-methylumbelliferyl-β-d-galactopyranoside).
Finding clusters in high-dimensional data is a challenging research problem. Subspace clustering algorithms aim to find clusters in all possible subspaces of the dataset, where a subspace is a subset of the dimensions of the data. But the exponential increase in the number of subspaces with the dimensionality of the data renders most algorithms inefficient as well as ineffective. Moreover, these algorithms have ingrained data dependencies in the clustering process, so parallelization becomes difficult and inefficient. SUBSCALE is a recent subspace clustering algorithm that is scalable with the dimensions and contains independent processing steps which can be exploited through parallelism. In this paper, we aim to leverage, first, the computational power of widely available multi-core processors to improve the runtime performance of the SUBSCALE algorithm; the experimental evaluation has shown linear speedup. Second, we are developing an approach using graphics processing units (GPUs) for fine-grained data parallelism to accelerate the computation further. First tests of the GPU implementation show very promising results.
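The independence that SUBSCALE exploits can be sketched as follows: the 1-D dense regions of each dimension can be computed without any shared state, so one task per dimension can be farmed out to a process pool. The function and parameter names are illustrative, not the authors' implementation:

```python
from multiprocessing import Pool

def dense_units_1d(args):
    """Find maximal groups of points that are epsilon-close in one
    dimension and contain at least minpts points. Each dimension is
    independent, which is what makes this step embarrassingly parallel
    (illustrative sketch of the idea, not the SUBSCALE code)."""
    dim, values, eps, minpts = args
    order = sorted(range(len(values)), key=lambda i: values[i])
    units, current = [], [order[0]]
    for prev, cur in zip(order, order[1:]):
        if values[cur] - values[prev] <= eps:
            current.append(cur)
        else:
            if len(current) >= minpts:
                units.append((dim, tuple(sorted(current))))
            current = [cur]
    if len(current) >= minpts:
        units.append((dim, tuple(sorted(current))))
    return units

if __name__ == "__main__":
    data = [[0.1, 5.0], [0.2, 5.1], [0.3, 9.0], [4.0, 5.2]]
    tasks = [(d, [row[d] for row in data], 0.5, 2) for d in range(2)]
    with Pool() as pool:          # one independent task per dimension
        per_dim = pool.map(dense_units_1d, tasks)
    print(per_dim)
```

Because each task touches only its own dimension's values, the same decomposition maps naturally onto GPU threads for the fine-grained variant mentioned above.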
Modelling and Simulation of Microscale Trigeneration Systems Based on Real- Life Experimental Data
(2017)
For the shift of the energy grid towards a smarter, decentralised system, flexible microscale trigeneration systems will play an important role due to their ability to support demand-side management in buildings. However, to harness their potential, modern control methods like model predictive control must be implemented for their optimal scheduling and control. To implement such supervisory control methods, simple analytical models representing the behaviour of the components first need to be developed. At the Institute of Energy System Technologies in Offenburg we have built a real-life microscale trigeneration plant, and in this paper we present models based on experimental data. These models are qualitatively validated, and their future application to the optimal scheduling problem is briefly motivated.
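A typical "simple analytical model" of the kind described is a first-order lag fitted to measured step responses. The sketch below shows the shape of such a model for one component; the parameter values are made up for illustration and are not the fitted plant data:

```python
def first_order_step(y, u, tau, gain, dt):
    """One explicit Euler step of the first-order lag y' = (gain*u - y)/tau,
    a common analytical component model for scheduling-oriented simulation
    (illustrative; tau and gain would be identified from plant data)."""
    return y + dt * (gain * u - y) / tau

# Simulate e.g. a CHP unit's thermal output ramping to full modulation:
y, trace = 0.0, []
for _ in range(600):                       # 600 s with dt = 1 s
    y = first_order_step(y, u=1.0, tau=120.0, gain=10.0, dt=1.0)
    trace.append(y)
# after five time constants the output is close to the 10 kW steady state
```

Models of this low order keep the optimal-scheduling problem tractable for model predictive control, which is exactly why they are preferred over detailed physical models in the supervisory layer.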
This book has emerged from lectures and courses given in recent years by the authors at their universities and shows how theoretical concepts of Business Intelligence are applied in SAP BW on HANA.
The authors developed a set of case studies guiding the student through the complete process of building an end-to-end BI system, based on a simple but realistic business scenario. The cases are designed in such a way that the application of many concepts such as staging, core data warehouse, data mart, reporting, etc., in SAP BW on HANA is introduced and demonstrated step by step.
Target Audience:
The cases are primarily designed for SAP BW beginners who want a first introduction to, and hands-on experience with, the latest version of BW on HANA. We briefly touch on the general concepts of Business Intelligence and Data Warehousing. These concepts are discussed in many excellent books on the market, which we do not want to replace; the reader should either already be familiar with them or be willing to use the references we provide. Also, this book can NOT replace a complete consultant training for BW, but it can serve as a starting point for a journey into the world of SAP BW on HANA.
Konzeption und Durchführung der Evaluation einer virtuellen Lernumgebung: Methodenlehre-Baukasten
The MLBK is freely available to learners and teachers at the URL http://www.methodenlehre-baukasten.de. Development work is still ongoing to add missing exercises, correct faulty ones, and replace parts of modules with which we are not yet satisfied. User feedback is very welcome to drive this development process forward. For the future, a business model is being developed that is intended to cover the maintenance, support and further development of the system and to ensure its sustainable availability.
Time-of-Flight Cameras Enabling Collaborative Robots for Improved Safety in Medical Applications
(2017)
Human-robot collaboration is being used more and more in industry and is finding its way into medical applications. Industrial robots that are used for human-robot collaboration cannot detect obstacles from a distance. This paper introduces the idea of using wireless technology to connect a Time-of-Flight camera to off-the-shelf industrial robots. This way, the robot can detect obstacles at a distance of up to five meters. Connecting Time-of-Flight cameras to robots increases safety in human-robot collaboration by detecting obstacles before a collision occurs. After reviewing the state of the art, the authors elaborate the requirements for such a system. The Time-of-Flight camera from Heptagon is able to operate at a range of up to five meters and can connect to the control unit of the robot via a wireless link.
In safety-critical applications, wireless technologies are not widespread, mainly due to reliability and latency requirements. This paper presents a new wireless architecture that allows the latency and reliability to be customized for every single participant in the network. The architecture supports building a network of heterogeneous participants with differing reliability and latency requirements. The TDMA scheme used, with TDD as the duplex method, is economical with resources, so participants with different processing and energy resources are able to take part.
In medical applications, wireless technologies are not widespread. Today they are mainly used in non-latency-critical applications, where reliability can be guaranteed through retransmission protocols and error-correction mechanisms. Because the shared wireless channel is subject to interference, retransmission protocols increase latency; they are therefore not sufficient for replacing latency-critical wired connections in operating rooms, such as foot switches. Current research aims to improve reliability through the physical characteristics of the wireless channel, using diversity methods and more robust modulation. This paper presents an architecture for building a reliable network. The architecture allows devices with different reliability, latency, and energy-consumption requirements to participate, and reliability, latency, and energy consumption are scalable for every single participant.
Safe Detection of Humans in Human-Robot Collaboration Using Time-of-Flight Cameras
(2017)
Manual assembly is facing increasing variant diversity and decreasing lot sizes, combined with rising demands on quality and productivity. Static assembly instructions reach their limits here. But which systems offer suitable assistance for manual assembly? This article classifies and evaluates assembly assistance systems available on the market in order to assess their suitability. Their support potential is then examined and evaluated using the example of a workstation.
The advantages of one-piece flow have shifted the boundaries between assembly organisation forms towards the flow principle. This article presents quantitative criteria for flow-line suitability as a decision aid from the perspective of assembly organisation. In addition to suitability based on the overlap of final assembly operations, fluctuation-based and distribution-based flow-line suitability are introduced. Their application is demonstrated by means of an algorithm and by results from a practical example.