Refine
Year of publication
- 2017 (40)
Document Type
- Conference Proceeding (40)
Conference Type
- Conference article (34)
- Conference abstract (5)
- Other (1)
Has Fulltext
- no (40)
Is part of the Bibliography
- yes (40)
Keywords
- CST (2)
- Gamification (2)
- HF-Ablation (2)
- Affective Computing (1)
- Arbeitswissenschaft (1)
- CRT (1)
- Computer Games (1)
- Context-Awareness (1)
- Data mining (1)
- Elektronik (1)
Institute
- Fakultät Elektrotechnik und Informationstechnik (E+I) (bis 03/2019) (16)
- Fakultät Medien und Informationswesen (M+I) (bis 21.04.2021) (10)
- Fakultät Wirtschaft (W) (9)
- ivESK - Institut für verlässliche Embedded Systems und Kommunikationselektronik (8)
- Fakultät Maschinenbau und Verfahrenstechnik (M+V) (5)
- ACI - Affective and Cognitive Institute (4)
- WLRI - Work-Life Robotics Institute (4)
- Zentrale Einrichtungen (1)
Open Access
- Closed Access (40)
Our university carries out various research projects. Among others, the project Schluckspecht is an interdisciplinary effort on different ultra-efficient car concepts for international contests. Besides the engineering work, one part of the project deals with real-time data visualization: in order to increase the efficiency of the vehicle, online monitoring of the runtime parameters is necessary. The driving parameters of the vehicle are transmitted to a processing station via a wireless network connection. We plan to use an augmented reality (AR) application to visualize different data on top of the view of the real car. Using a mobile Android or iOS device, a user can interactively view various real-time and statistical data. The car and its components are meant to be augmented with additional information, which should appear at the correct position of the respective component. An engine, for example, could show the current rpm and consumption values; a battery could show the current charge level. The goal of this paper is to evaluate different possible approaches and their suitability, and to expand our application to other projects at our university.
Over the last few years, our students have become increasingly unhappy. Sometimes they stop attending lectures and even seem not to know how to behave correctly. It feels as if they are going on strike. Consequently, drop-out rates are sky-rocketing. The lecturers/professors are not happy either, adopting an “I-don’t-care” attitude.
An interdisciplinary, international team set out to find answers: (1) What are the students unhappy about? Why is it becoming so difficult for them to cope? (2) What does the “I-don’t-care” attitude of professors actually mean? What do they care or not care about? (3) How far do the views of the two parties correlate? Could some kind of mutual understanding be achieved?
The findings indicate that, at least at our universities, there is rather a long way to go from “Engineering versus Pedagogy” to “Engineering Pedagogy”.
Finding clusters in high-dimensional data is a challenging research problem. Subspace clustering algorithms aim to find clusters in all possible subspaces of the dataset, where a subspace is a subset of the dimensions of the data. However, the exponential increase in the number of subspaces with the dimensionality of the data renders most of these algorithms inefficient as well as ineffective. Moreover, these algorithms have data dependencies ingrained in the clustering process, so parallelization becomes difficult and inefficient. SUBSCALE is a recent subspace clustering algorithm which scales with the number of dimensions and contains independent processing steps that can be exploited through parallelism. In this paper, we aim to leverage, firstly, the computational power of widely available multi-core processors to improve the runtime performance of the SUBSCALE algorithm. The experimental evaluation has shown linear speedup. Secondly, we are developing an approach using graphics processing units (GPUs) for fine-grained data parallelism to accelerate the computation further. First tests of the GPU implementation show very promising results.
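The per-dimension independence that makes SUBSCALE amenable to parallelism can be illustrated with a small sketch. The function names and the `eps`/`minpts` parameters below are our own illustrative choices, not the paper's API: dense 1-D point groups are computed for each dimension in isolation, so each per-dimension call could be dispatched to its own core.

```python
# Sketch of the independent per-dimension density step (assumed structure,
# not the published SUBSCALE implementation).

def dense_units_1d(values, eps, minpts):
    """Group point indices whose values form chains of eps-close neighbours."""
    if not values:
        return []
    order = sorted(range(len(values)), key=lambda i: values[i])
    groups, current = [], [order[0]]
    for prev, idx in zip(order, order[1:]):
        if values[idx] - values[prev] <= eps:
            current.append(idx)
        else:
            if len(current) >= minpts:
                groups.append(current)
            current = [idx]
    if len(current) >= minpts:
        groups.append(current)
    return groups

def dense_units_per_dimension(points, eps=0.5, minpts=3):
    # Each dimension is processed independently -- the data-parallel step
    # that multi-core or GPU execution can exploit.
    dims = len(points[0])
    return {d: dense_units_1d([p[d] for p in points], eps, minpts)
            for d in range(dims)}
```

Because no iteration of the dictionary comprehension reads another's result, the loop body maps directly onto a worker pool or a GPU kernel.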
The need to measure basic aerosol parameters has increased dramatically in the last decade. This is due mainly to their harmful effect on the environment and on public health. Legislation requires that particle emissions and ambient levels, workplace particle concentrations and exposure to them are measured to confirm that the defined limits are met and the public is not exposed to harmful concentrations of aerosols.
Applications helping us to maintain the focus on work are called “Zenware” (from concentration and Zen). While form factors, use cases and functionality vary, all these applications have a common goal: creating uninterrupted, focused attention on the task at hand. The rise of such tools exemplifies the users’ desire to control their attention within the context of omnipresent distraction. In expert interviews we investigate approaches in the context of attention-management at the workplace of knowledge workers. To gain a broad understanding, we use judgement sampling in interviews with experts from several disciplines. We especially explore how focus and flow can be stimulated. Our contribution has four components: a brief overview on the state of the art (1), a presentation of the results (2), strategies for coping with digital distractions and design guidelines for future Zenware (3) and an outlook on the overall potential in digital work environments (4).
Gamifying rehabilitation is an efficient way to improve motivation and exercise frequency. However, between flow theory, self-determination theory and Bartle's player types, there is much room for speculation regarding the mechanics required for successful gamification, which in turn leads to increased motivation. For our study, we selected a gamified solution for motion training (an exergame) whose playful design elements are extremely simple. The contribution is three-fold: we show best practices from the state of the art, present a study analyzing the effects of simple gamification mechanics on a quantitative and on a qualitative level, and discuss strategies for playful design in therapeutic movement games.
In recent years, additive manufacturing processes have developed rapidly. They currently present a high-performance alternative to conventional manufacturing methods. In particular, they offer previously hardly imaginable design freedom, i.e. the implementation of complex forms and geometries. This capability can, for example, be applied in the development of especially light but still load-bearing components in automotive engineering. In addition, waste material is seldom produced in additive manufacturing, which benefits the sustainable production of components. Until now, this design freedom was barely used in the construction of technical components and products because both specific design guidelines for additive manufacturing and complex strength calculations must be observed simultaneously. Yet in order to take full advantage of the potential of additive manufacturing, the method of topology optimization, based on FEM simulation, is a natural choice. With this method, components that are precisely matched to their loads and especially light, and thereby also resource-saving, can be produced. Current literature indicates that this method is used in automotive manufacturing for reducing weight and improving the stability of both individual parts and assemblies. This contribution studies how this development method can be applied using the example of a brake mount from an experimental vehicle. Here, the conventional design is improved in several steps by means of a simulation tool for topology optimization. In an additional processing step, the component thus developed is smoothed. Finally, the component is generatively manufactured by means of selective laser melting technology; models for demonstrating the process are manufactured using binder jetting. It is also determined how this weight reduction affects the CO2 emissions of a vehicle in use.
Additive manufacturing processes have evolved rapidly in recent years and now offer a wide range of manufacturing technologies and workable materials, ranging from plastics and metals to paper and even polymer plaster composites. Due to the layer-by-layer build-up of the components, the additive processes have, in comparison with conventional manufacturing processes, the advantage of freedom of design, that is, the simple implementation of complex geometries. Moreover, the additive processes provide the advantage of a reduced consumption of resources, since essentially only the material required for the actual component is consumed and no waste in the form of chips is produced. In order to use these advantages, the potentials of additive manufacturing and the requirements of sustainable design must already be observed in the product development process. The design of the components and products must therefore be made such that as little construction and support material as possible is required for the generative production, so that few resources are consumed. Also, all steps of the additive manufacturing process, including post-processing, must be considered properly. This allows components to be designed so that, for instance, the effort for removing the support structure is considerably reduced, which leads to a significant reduction in manufacturing time and thus energy consumption. The implementation of these potentials in product development can be demonstrated by means of a multiple-stage model. A case study shows how this model is applied in the training of Master students in the field of product development. In a workshop, the students work as a group on the task of developing a miniature racing car under the rules of sustainable design, in compliance with the boundary conditions of additive manufacturing. In this case, Fused Deposition Modelling (FDM) using plastics as the building material is applied.
The results show how the students have dealt with the different requirements and how they have implemented them in product development and in the subsequent additive manufacturing.
The growing complexity of RF front-ends, which support carrier aggregation and a growing number of frequency bands, leads to tightened nonlinearity requirements for all sub-components. The generation of third-order intermodulation products (IMD3) is a typical problem caused by the nonlinearity of SAW devices. In the present work, we investigate temperature-compensating (TC) SAW devices on Lithium Niobate-rot128YX. An accurate FEM simulation model [1] is employed, which allows a better understanding of the origin of nonlinearities in such acoustic devices.
Elastic constants of components are usually determined by tensile tests in combination with ultrasonic experiments. However, these properties may change due to e.g. mechanical treatments or service conditions during their lifetime. Knowledge of the actual material parameters is key to the determination of quantities like residual stresses present in the medium. In this work the acoustic nonlinearity parameter (ANP) for surface acoustic waves is examined through the derivation of an evolution equation for the amplitude of the second harmonic. Given a certain depth profile of the third-order elastic constants, the dependence of the ANP with respect to the input frequency is determined and on the basis of these results, an appropriate inversion method is developed. This method is intended for the extraction of the depth dependence of the third-order elastic constants of the material from second-harmonic generation and guided wave mixing experiments, assuming that the change in the linear Rayleigh wave velocity is small. The latter assumption is supported by a 3D-FEM model study of a medium with randomly distributed microcracks as well as theoretical works on this topic in the literature.
Non-destructive methods for measuring residual stresses require, depending on the chosen method, knowledge of certain coupling constants. In the case of ultrasonic measurement methods, these are, besides the second-order elastic constants (SOEC), above all the third-order elastic constants (TOEC). Elastic constants of solid, metallic components are usually determined in tensile tests; to determine the TOEC, these are combined with ultrasonic measurement methods. However, external influences, such as mechanical post-treatments of the components under investigation, can change these constants, which must consequently be determined directly on the altered material. With the help of simulations, the propagation of the second harmonic and of the nonlinearly generated surface waves in wave-mixing experiments is analyzed, and the acoustic nonlinearity parameter (ANP), i.e. the coupling parameter, is calculated from the amplitude evolution. In particular, it is investigated what influence a given depth profile of the TOEC has on the ANP (forward problem) and to what extent a present depth profile of the TOEC can be inferred from measurements of the ANP (inverse problem). Furthermore, it is discussed what influence local changes of the SOEC can have on the ANP and how large these changes may be while still allowing the TOEC to be determined. These investigations were carried out on the basis of a 3D-FEM model with randomly oriented microcracks. The numerical calculations also show good agreement with an analytical model known from the literature and extended for this problem, which can account for the lattice nonlinearity in addition to the crack-induced nonlinearity.
Background: The electrical field (E-field) of the biventricular (BV) stimulation is essential for the success of cardiac resynchronization therapy (CRT) in patients with cardiac insufficiency and widened QRS complex. 3D modeling allows the simulation of CRT and high frequency (HF) ablation.
Purpose: The aim of the study was to model different pacing and ablation electrodes and to integrate them into a heart model for the static and dynamic simulation of BV stimulation and HF ablation in atrial fibrillation (AF).
Methods: The modeling and simulation were carried out using electromagnetic simulation software. Five multipolar left ventricular (LV) electrodes, one epicardial LV electrode, four bipolar right atrial (RA) electrodes, two right ventricular (RV) electrodes and one HF ablation catheter were modeled. The different electrode models were integrated into a heart rhythm model for the electrical field simulation (fig. 1). The simulation of an AV node ablation during CRT was performed with RA, RV and LV electrodes and an integrated ablation catheter with an 8 mm gold tip.
Results: RV and LV stimulation were performed simultaneously at an amplitude of 3 V at the LV electrode and 1 V at the RV electrode, each with a pulse width of 0.5 ms. The far-field potentials generated by the BV stimulation were detected by the RA electrode. The far-field potential at the RA electrode tip was 32.86 mV; a far-field potential of 185.97 mV resulted at a distance of 1 mm from the RA electrode tip. AV node ablation was simulated with an applied power of 5 W at 420 kHz at the distal 8 mm ablation electrode. After 5 s ablation time, the temperature was 103.87 °C at the catheter tip, 44.17 °C in the myocardium adjacent to the catheter tip and 37.61 °C at a distance of 2 mm. After 10 s, the temperatures at the three measuring points described above were 107.33 °C, 50.87 °C and 40.05 °C, and after 15 s they were 118.42 °C, 55.75 °C and 42.13 °C.
Conclusions: Virtual heart and electrode models as well as simulations of electrical fields and temperature profiles allow the static and dynamic simulation of atrial-synchronous BV stimulation and HF ablation in AF. The 3D simulation of the electrical field and temperature profile may be used to optimize CRT and AF ablation.
Three real-lab trigeneration microgrids are investigated in non-residential environments (educational, office/administrational, companies/production) with a special focus on domain-specific load characteristics. For accurate load forecasting on such a local level, a-priori information on scheduled events is combined with statistical insight from historical load data (capturing information on consumer behavior that is not explicitly known). The load forecasts are then used as data input for (predictive) energy management systems that are implemented in the trigeneration microgrids. In real-world applications, these energy management systems must in particular be able to carry out a number of safety and maintenance operations on components such as the battery (e.g. gassing) or the CHP unit (e.g. regular test runs). Therefore, energy management systems should combine heuristics with advanced predictive optimization methods. To reduce the effort in IT infrastructure, the main and safety-relevant management process steps are performed on site by a Smart & Local Energy Controller (SLEC), assisted by locally measured signals or operator-given information as defaults and external inputs for any advanced optimization. Heuristic aspects for local fine adjustment of energy flows are presented.
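The combination of a-priori event information with a statistical baseline can be sketched in a few lines. The data structures and numbers below are illustrative assumptions, not the forecasting model actually deployed in the microgrids: a per-hour mean from historical daily profiles serves as the statistical part, and known scheduled events add corrections on top.

```python
# Minimal sketch: statistical baseline from history + a-priori scheduled
# events (illustrative structures, not the deployed forecasting system).
from statistics import mean

def hourly_baseline(history):
    """history: list of 24-element daily load profiles (kW) -> per-hour mean."""
    return [mean(day[h] for day in history) for h in range(24)]

def forecast(history, scheduled_events):
    """scheduled_events: {hour: extra_load_kW} known in advance, e.g. a
    lecture block or a production shift, added on top of the baseline."""
    base = hourly_baseline(history)
    return [base[h] + scheduled_events.get(h, 0.0) for h in range(24)]
```

In this split, the baseline captures the implicit consumer behaviour while the event dictionary carries the explicitly known schedule.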
Designing Authentic Emotions for Non-Human Characters. A Study Evaluating Virtual Affective Behavior
(2017)
While human emotions have been researched for decades, designing authentic emotional behavior for non-human characters has received less attention. However, virtual behavior not only affects game design, but also allows creating authentic avatars or robotic companions. After a discussion of methods to model and recognize emotions, we present three characters with a decreasing level of human features and describe how established design techniques can be adapted for such characters. In a study, 220 participants assessed these characters' emotional behavior, focusing on the emotion "anger". We want to determine how reliably users can recognize emotional behavior as characters increasingly do not look and behave like humans. A secondary aim is to determine whether gender has an impact on competence in emotion recognition. The findings indicate that there is an area of insecure attribution of virtual affective behavior not distant from but close to human behavior. We also found that, at least for anger, men and women assess emotional behavior equally well.
We present the design outline of a context-aware interactive system for smart learning in the STEM curriculum (science, technology, engineering, and mathematics). It is based on a gameful design approach and enables "playful coached learning" (PCL): a learning process enriched by gamification but also close to the learner's activities and emotional setting. After a brief introduction on related work, we describe the technological setup, the integration of projected visual feedback and the use of object and motion recognition to interpret the learner's actions. We explain how this combination enables rapid feedback and why this is particularly important for correct habit formation in practical skills training. In a second step, we discuss gamification methods and analyze which are best suited for the PCL system. Finally, emotion recognition, a major element of the final PCL design not yet implemented, is briefly outlined.
In this paper we present the implementation of a model-predictive controller (MPC) for real-time control of a cable-robot-based motion simulator. The controller computes control inputs such that a desired acceleration and angular velocity at a defined point in the simulator's cabin are tracked while satisfying the constraints imposed by the working space and the allowed cable forces of the robot. In order to fully use the simulator's capabilities, we propose an approach that includes the motion platform actuation in the MPC model. The tracking performance and computation time of the algorithm are investigated in computer simulations. Furthermore, for motion simulation scenarios where the reference trajectories are not known beforehand, we derive an estimate of how much motion simulation fidelity can maximally be improved by any reference prediction scheme compared to the case when no prediction scheme is applied.
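The receding-horizon idea behind such a controller can be illustrated with a deliberately tiny stand-in: a 1-D double integrator whose acceleration should track a reference while a hard position bound plays the role of the workspace constraint. This is our own toy sketch using brute-force enumeration over a short horizon, not the paper's cable-robot model or its optimization solver.

```python
# Toy receding-horizon controller (illustrative only): enumerate all input
# sequences over the horizon, discard those violating the position bound,
# pick the sequence with the best acceleration tracking, apply its first input.
from itertools import product

def mpc_step(x, v, a_ref, horizon=3, inputs=(-1.0, 0.0, 1.0),
             pos_limit=1.0, dt=0.1):
    """Return the first input of the best admissible input sequence."""
    best_cost, best_u0 = float("inf"), 0.0
    for seq in product(inputs, repeat=horizon):
        xi, vi, cost, feasible = x, v, 0.0, True
        for a in seq:
            vi += a * dt
            xi += vi * dt
            if abs(xi) > pos_limit:      # "workspace" constraint
                feasible = False
                break
            cost += (a - a_ref) ** 2     # acceleration tracking error
        if feasible and cost < best_cost:
            best_cost, best_u0 = cost, seq[0]
    return best_u0
```

A real implementation would replace the enumeration with a quadratic program solved at every sampling instant, but the structure (predict, constrain, optimize, apply first input) is the same.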
Heart rhythm model and simulation of electrophysiological studies and high-frequency ablations
(2017)
Background: The simulation of complex cardiologic structures has the potential to replace clinical studies due to its high efficiency regarding time and costs. Furthermore, the method is gentler on the patients' health than the conventional approaches. The aim of the study was to create an anatomic CAD heart rhythm model (HRM) as accurate as possible, and to show its usefulness for cardiac electrophysiological studies (EPS) and high-frequency (HF) ablations.
Methods: All natural heart components of the new HRM were based on MRI records, which guaranteed its electronic functionality. The software CST (Computer Simulation Technology, Darmstadt) was used for the construction, while CST's material library assured genuine tissue properties. It should be possible to simulate different heart rhythm diseases as well as the various diffusions of electromagnetic fields, caused by electrophysiological conduction, inside the heart tissue.
Results: It was possible to simulate normal sinus rhythm and fourteen different heart rhythm disturbances with different atrial and ventricular conduction delays. The simulated biological excitation of the healthy and diseased HRM was recorded by the simulated electrodes of a four-polar right atrial catheter, a six-polar His bundle catheter, a ten-polar coronary sinus catheter, a four-polar ablation catheter and an eight-polar transesophageal left cardiac catheter (Fig.). Accordingly, six variables were rebuilt and inserted into the anatomic HRM in order to establish heart catheters for ECG monitoring and HF ablation. The HF ablation catheters made it possible to simulate ablations of various types of heart rhythm disturbances with different HF ablation catheters, and also provided a functional visualisation of tissue heating. The use of tetrahedral meshing for the HRM allowed results to be stored faster and with considerable space savings. The smart meshing function reduced unnecessarily high resolutions for coarse structures.
Conclusions: The new HRM for EPS simulation may additionally be useful for the simulation of heart rhythm disturbances, cardiac pacing and HF ablation, and for locating and identifying complex fractionated signals within the atrium during atrial fibrillation HF ablation.
Modelling and Simulation of Microscale Trigeneration Systems Based on Real-Life Experimental Data
(2017)
For the shift of the energy grid towards a smarter, decentralised system, flexible microscale trigeneration systems will play an important role due to their ability to support demand-side management in buildings. However, to harness their potential, modern control methods like model predictive control must be implemented for their optimal scheduling and control. To implement such supervisory control methods, simple analytical models representing the behaviour of the components first need to be developed. At the Institute of Energy System Technologies in Offenburg we have built a real-life microscale trigeneration plant and present in this paper the component models based on experimental data. These models are qualitatively validated, and their future application to the optimal scheduling problem is briefly motivated.
Safe Detection of Humans in Human-Robot Collaboration with Time-of-Flight Cameras
(2017)
In safety-critical applications, wireless technologies are not widely spread. This is mainly due to reliability and latency requirements. In this paper a new wireless architecture is presented which allows customizing the latency and reliability for every single participant within the network. The architecture allows building up a network of inhomogeneous participants with different reliability and latency requirements. The TDMA scheme used, with TDD as the duplex method, is gentle on resources, so participants with different processing and energy resources are able to take part.
In medical applications, wireless technologies are not widely spread. Today they are mainly used in non-latency-critical applications where reliability can be guaranteed through retransmission protocols and error correction mechanisms. Using retransmission protocols within the disturbed, shared wireless channel increases latency, so retransmission protocols are not sufficient for replacing latency-critical wired connections within operating rooms, such as foot switches. Today's research aims to improve reliability through the physical characteristics of the wireless channel by using diversity methods and more robust modulation. In this paper an architecture for building up a reliable network is presented. The architecture offers devices with different reliability, latency and energy consumption requirements the possibility to participate. Furthermore, reliability, latency and energy consumption are scalable for every single participant.
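One way such per-participant scaling of latency and reliability could be realized in a TDMA superframe is to give each participant a service period (its latency requirement) and a number of consecutive slots per service (retransmission headroom for reliability). The scheduler below is purely our own illustrative policy, not the architecture proposed in the paper.

```python
# Illustrative TDMA superframe builder (assumed policy, not the paper's):
# smaller `period` = tighter latency; larger `repeats` = more reliability.

def build_superframe(participants, frame_slots=16):
    """participants: list of (name, period, repeats).
    `period` is how often (in slots) the node must be served;
    `repeats` is how many slots it gets per service."""
    frame = [None] * frame_slots
    # Serve the most latency-demanding (smallest period) participants first.
    for name, period, repeats in sorted(participants, key=lambda p: p[1]):
        for start in range(0, frame_slots, period):
            placed, slot = 0, start
            while placed < repeats and slot < frame_slots:
                if frame[slot] is None:
                    frame[slot] = name
                    placed += 1
                slot += 1
    return frame
```

A latency-critical device such as a foot switch would get a short period with a single slot, while a battery-powered sensor could accept a long period but several repeated slots.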
The Bluetooth community is in the process of developing mesh technology. This is highly promising, as Bluetooth is widely available in smartphones and tablet PCs, allowing easy access to the Internet of Things. In this paper, we investigate the performance of Bluetooth-enabled mesh networking to identify its strengths and weaknesses. A demonstrator for this protocol has been implemented using the Fruity Mesh protocol implementation. Extensive test cases have been executed to measure the performance, reliability, power consumption and delay. For this, an Automated Physical Testbed (APTB), which emulates the physical channels, has been used. The results of these measurements are considered useful for real implementations of Bluetooth mesh; not only for home and building automation, but also for industrial automation.
Computing Aggregates on Autonomous, Self-organizing Multi-Agent System: Application "Smart Grid"
(2017)
Decentralized data aggregation plays an important role in estimating the state of the smart grid, allowing the determination of meaningful system-wide measures (such as the current power generation, consumption, etc.) to balance the power in the grid environment. Data aggregation is only practicable if the aggregation is performed effectively. However, many existing approaches are lacking in terms of fault tolerance. We present an approach to construct a robust self-organizing overlay by exploiting the heterogeneous characteristics of the nodes and interlinking the most reliable nodes to form a stable unstructured overlay. The network structure can recover from random state perturbations in finite time and tolerates substantial message loss. Our approach is inspired by biological and sociological self-organizing mechanisms.
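A standard primitive for the kind of decentralized aggregation described above is randomized pairwise gossip: two random neighbours repeatedly average their local values, and every node converges to the network-wide mean without any central coordinator. The sketch below shows this textbook primitive, not the paper's specific overlay construction.

```python
# Randomized pairwise gossip averaging (textbook primitive, not the
# paper's overlay): the global mean is invariant under each exchange.
import random

def gossip_average(values, rounds=1000, seed=1):
    """Each round, two random nodes replace their values by the pair's mean;
    all values converge towards the network-wide average."""
    vals = list(values)
    rng = random.Random(seed)
    n = len(vals)
    for _ in range(rounds):
        i, j = rng.randrange(n), rng.randrange(n)
        if i != j:
            m = (vals[i] + vals[j]) / 2.0
            vals[i] = vals[j] = m
    return vals
```

In a smart-grid setting the value could be a node's local generation or consumption; multiplying the converged mean by the (known or likewise gossiped) node count yields the system-wide total.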
With an increasing number of flexible AC transmission system (FACTS) devices in operation, like the most versatile of them, the unified power flow controller (UPFC), AC/DC transmission flexibility and power system stability face unprecedented challenges. This paper introduces the user-defined modeling (UDM) method into the UPFC dynamic modeling process to deal with the challenging requirements of power system operation. This has been verified using the leading-edge stability analysis software DSATools™ in the IEEE 39-bus benchmark system. The characteristics of steady-state and dynamic responses are compared and analyzed under different conditions. Furthermore, simulation results prove the feasibility and effectiveness of the proposed UPFC model in terms of both the independent regulation of power flow and the improvement of transient stability.
Message co-chairmen
(2017)
The paper describes the hardware and software architecture of the developed multi-MEMS-sensor prototype module, consisting of an ARM Cortex-M4 STM32F446 microcontroller unit, five 9-axis inertial measurement units MPU9255 (3D accelerometer, 3D gyroscope, 3D magnetometer and temperature sensor) and a BMP280 barometer. The module is also equipped with a WiFi wireless interface (Espressif ESP8266 chip). The module is constructed in the form of a truncated pyramid. The inertial sensors are mounted on a special base at different angles to each other to eliminate hardware sensor drifts and to provide the capability for self-calibration. The module fuses the information obtained from all types of inertial sensors (acceleration, rotation rate, magnetic field and air pressure) in order to calculate orientation and trajectory. It may be used as an Inertial Measurement Unit, Vertical Reference Unit or Attitude and Heading Reference System.
Climate change and the resultant scarcity of water are becoming major challenges for countries around the world. With the advent of Wireless Sensor Networks (WSN) in the last decade and the relatively new concept of the Internet of Things (IoT), embedded systems developers are now working on designing control and automation systems that are lower in cost and more sustainable than the existing telemetry systems for monitoring. The Indus river basin in Pakistan has one of the world's largest irrigation systems, and it is extremely challenging to design a low-cost embedded system for monitoring and control of waterways that can last for decades. In this paper, we present a hardware design and performance evaluation of a smart water metering solution that is IEEE 802.15.4-compliant. The results show that our hardware design is as powerful as the reference design, but allows for additional flexibility both in hardware and in firmware. The indigenously designed solution has a power added efficiency (PAE) of 24.7%, and nodes are expected to last for 351 and 814 days with and without a power amplifier (PA), respectively. Similarly, the results show that broadband communication (434 MHz) over more than 3 km can be supported, which is an important stepping stone for designing a complete coverage solution for large-scale waterways.
Due to climate change and the scarcity of water reservoirs, monitoring and control of irrigation systems is becoming a major focal area for researchers in Cyber-Physical Systems (CPS). Wireless Sensor Networks (WSNs) are rapidly finding their way into the field of irrigation and play a key role as the data-gathering technology in the domain of IoT and CPS. They are efficient for reliable monitoring, giving farmers an edge in taking precautionary measures. However, designing an energy-efficient WSN system requires a cross-layer effort, and energy-aware routing protocols play a vital role in the overall energy optimization of a WSN. In this paper, we propose a new hierarchical routing protocol suitable for large-area environmental monitoring such as the large-scale irrigation network in the Punjab province of Pakistan. The proposed protocol resolves the issues faced by traditional multi-hop routing protocols such as LEACH, M-LEACH and I-LEACH, and enhances the lifespan of each WSN node, which results in an increased lifespan of the whole network. We used the open-source NS3 simulator for simulation purposes, and the results indicate that our proposed modifications yield an average 27.8% increase in the lifespan of the overall WSN compared to the existing protocols.
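For readers unfamiliar with the LEACH family the abstract builds on, the core mechanism is a rotating, probabilistic cluster-head election: each node elects itself with a threshold probability that rises over a rotation epoch, and recent cluster heads sit out until the epoch ends. The sketch below shows this classic threshold from the literature; it is background for the protocols named above, not the proposed protocol itself.

```python
# Classic LEACH cluster-head election (literature background, not the
# paper's proposed protocol).
import random

def leach_threshold(p, r):
    """LEACH threshold T(n) for round r with desired cluster-head fraction p.
    Nodes that served as head in the current 1/p-round epoch use T(n) = 0."""
    return p / (1 - p * (r % int(1 / p)))

def elect_cluster_heads(node_ids, recent_heads, p=0.1, r=0, rng=None):
    rng = rng or random.Random(0)
    t = leach_threshold(p, r)
    return [n for n in node_ids
            if n not in recent_heads and rng.random() < t]
```

The rotation spreads the energy-expensive cluster-head duty evenly across nodes, which is exactly the lever that LEACH variants (and the protocol proposed here) tune to extend network lifespan.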
eTPL: An Enhanced Version of the TLS Presentation Language Suitable for Automated Parser Generation
(2017)
The specification of the Transport Layer Security (TLS) protocol defines its own presentation language used for the purpose of semi-formally describing the structure and on-the-wire format of TLS protocol messages. This TLS Presentation Language (TPL) is more expressive and concise than natural language or tabular descriptions, but as a result of its limited objective has a number of deficiencies. We present eTPL, an enhanced version of TPL that improves its expressiveness, flexibility, and applicability to non-TLS scenarios. We first define a generic model that describes the parsing of binary data. Based on this, we propose language constructs for TPL that capture important information which would otherwise have to be picked manually from informal protocol descriptions. Finally, we briefly introduce our software tool etpl-tool which reads eTPL definitions and automatically generates corresponding message parsers in C++. We see our work as a contribution supporting sniffing, debugging, and rapid-prototyping of wired and wireless communication systems.
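The parser-generation idea can be illustrated with a small sketch: a declarative field list, in the spirit of a TPL/eTPL structure definition, is turned into a parsing function mechanically. The real etpl-tool emits C++ parsers from eTPL input; the Python below is only our own illustration of the concept, using the TLS record header layout (type, version, length) from the TLS specification.

```python
# Illustrative parser generation from a declarative field list
# (concept sketch only; the actual etpl-tool generates C++).
import struct

FORMATS = {"uint8": "B", "uint16": "H", "uint32": "I"}

def make_parser(fields):
    """fields: list of (name, type) pairs -> function(bytes) -> dict."""
    def parse(data):
        out, off = {}, 0
        for name, ftype in fields:
            if ftype == "uint24":            # TLS uses 24-bit length fields
                out[name] = int.from_bytes(data[off:off + 3], "big")
                off += 3
            else:
                fmt = ">" + FORMATS[ftype]   # network byte order
                (out[name],) = struct.unpack_from(fmt, data, off)
                off += struct.calcsize(fmt)
        return out
    return parse

# A TLS record header: content type (1 byte), version (2), length (2).
parse_record = make_parser([("type", "uint8"),
                            ("version", "uint16"),
                            ("length", "uint16")])
```

The step from this sketch to eTPL is to capture, in the language itself, the information (variants, length-prefixed vectors, context-dependent fields) that here would otherwise be hand-coded into `parse`.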
A novel approach to a test environment for embedded networking nodes has been conceptualized and implemented. Its basis is the use of virtual nodes in a PC environment, where each node executes the original embedded code. Different nodes run in parallel, connected via so-called virtual channels. The environment allows modifying the behavior of the virtual channels as well as the overall topology during runtime to virtualize real-life networking scenarios. The presented approach is very efficient and allows a simple description of test cases without the need for a network simulator. Furthermore, it speeds up the process of developing new features and supports the identification of bugs in wireless communication stacks. In combination with powerful test execution systems, it is possible to create a continuous development and integration flow.
The low cost and small size of MEMS inertial sensors allow their combination into a multi-sensor module in order to improve performance. However, the different linear accelerations measured at different places on a rotating rigid body have to be considered for a proper fusion of the measurements. The measurement errors of MEMS inertial sensors include deterministic imperfections, but also random noise. The gain in accuracy from using multiple sensors depends strongly on the correlation between the errors of the different sensors. Although for sensor fusion it is usually assumed that the measurement errors of different sensors are uncorrelated, estimation theory shows that for the combination of sensors of the same type a negative correlation is actually more beneficial. We therefore describe some important and often neglected considerations for the combination of several sensors and also present some preliminary results with regard to the correlation of measurements from a simple multi-sensor setup.
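The estimation-theory claim about negative correlation follows directly from the variance of the mean of n identical sensors with pairwise error correlation rho: Var = sigma^2 * (1 + (n - 1) * rho) / n. The one-liner below encodes this standard formula (it is textbook estimation theory, not code from the paper) and makes the benefit of negative correlation explicit.

```python
# Variance of the average of n identical sensors with per-sensor noise
# variance sigma2 and pairwise error correlation rho (standard result):
#   Var(mean) = sigma2 * (1 + (n - 1) * rho) / n

def fused_variance(sigma2, n, rho):
    return sigma2 * (1.0 + (n - 1) * rho) / n
```

For two sensors, uncorrelated errors halve the variance, rho = -1 cancels the noise entirely, and rho = +1 yields no gain at all, which is why the correlation structure of a multi-sensor module matters so much.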
A novel approach to a testbed for embedded networking nodes has been conceptualized and implemented. It is based on the use of virtual nodes in a PC environment, where each node executes the original embedded code. Different nodes run in parallel and are connected via so-called virtual interfaces. The presented approach is very efficient and allows a simple description of test cases without the need for a network simulator. Furthermore, it speeds up the process of developing new features.