Estimation of Scattering and Transfer Parameters in Stratified Dispersive Tissues of the Human Torso
(2021)
The aim of this study is to understand the effect of the various layers of biological tissue on electromagnetic radiation in a certain frequency range. Understanding these effects could prove crucial for the development of dynamic imaging systems operating during catheter ablation in the heart. As the catheter passes through the aorta and along arterial paths into the region of interest inside the heart, a three-dimensional localization of the catheter is required. This paper presents a study on the detection of the catheter by means of electromagnetic waves. To this end, an appropriate model of the layers of the human torso is defined and simulated both with and without an inserted electrode.
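A layered tissue model of this kind can be explored, at least qualitatively, with the classic transfer-matrix method for stratified media. The sketch below is not from the paper: it assumes lossless, non-dispersive layers at normal incidence (real tissue layers are dispersive with complex permittivity), and all layer values are illustrative.

```python
import numpy as np

def layer_matrix(n, d, wavelength):
    """Characteristic matrix of one homogeneous layer (normal incidence)."""
    k = 2 * np.pi * n / wavelength          # wavenumber inside the layer
    return np.array([[np.cos(k * d), 1j * np.sin(k * d) / n],
                     [1j * n * np.sin(k * d), np.cos(k * d)]])

def reflection(layers, wavelength, n_in=1.0, n_out=1.0):
    """Reflection coefficient of a stack of (refractive index, thickness) layers."""
    M = np.eye(2, dtype=complex)
    for n, d in layers:
        M = M @ layer_matrix(n, d, wavelength)
    (m11, m12), (m21, m22) = M
    num = n_in * m11 + n_in * n_out * m12 - m21 - n_out * m22
    den = n_in * m11 + n_in * n_out * m12 + m21 + n_out * m22
    return num / den

# hypothetical two-layer "tissue" stack in front of a higher-index half-space
r = reflection([(1.5, 0.004), (2.0, 0.002)], wavelength=0.03, n_out=3.0)
```

A quarter-wave layer with index n between media n_in and n_out cancels the reflection when n² = n_in·n_out, which gives a quick sanity check of the matrix algebra.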
Duplicate detection, search, and consolidation for customer and business-partner data, so-called "identity resolution", is a prerequisite for successful customer relationship management and customer experience management, as well as for risk management to minimize fraud risks and comply with regulatory requirements, and for many other use cases. These systems, however, are highly complex and must be tailored individually to customer-specific requirements. Learning-based methods offer great potential for automating this adaptation. In this contribution we present learning-based methods, practical for an SME, for the automatic configuration of business rules in duplicate detection systems. We developed facilities that enable domain users to adapt and configure the match system to individual business rules (e.g., relocation detection, blocklist matching) in an example-driven way. The developed methods were evaluated and integrated into a prototype solution. We were able to show that our machine learning method improved the business rules created by a domain expert for the duplicate detection system "identity", while also reducing the time required to do so.
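As a minimal illustration of example-driven rule configuration (not the "identity" system itself), one can learn the decision threshold of a single similarity-based matching rule from labeled duplicate/non-duplicate pairs. All record values and the use of `difflib` are hypothetical stand-ins.

```python
from difflib import SequenceMatcher

def similarity(a, b):
    """Normalized string similarity between two records (0..1)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def learn_threshold(labeled_pairs):
    """Pick the threshold that classifies the labeled example pairs
    (record_a, record_b, is_duplicate) with the fewest mistakes."""
    scores = [(similarity(a, b), dup) for a, b, dup in labeled_pairs]
    candidates = sorted({s for s, _ in scores})
    return max(candidates,
               key=lambda t: sum((s >= t) == dup for s, dup in scores))

# hypothetical training examples provided by a domain user
pairs = [
    ("Meier, Hans", "Meyer, Hans", True),
    ("Meier, Hans", "Schulz, Petra", False),
    ("Musterstr. 1", "Musterstrasse 1", True),
    ("Musterstr. 1", "Bahnhofstr. 9", False),
]
threshold = learn_threshold(pairs)
```

A production identity-resolution system would combine many such rules over typed fields (names, addresses, dates); the sketch only shows the example-driven calibration idea.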
With major intellectual properties there is a long tradition of cross-media value chains -- usually starting with books and comics, then moving on to film and TV, and finally reaching interactive media like video games. In recent years the situation has changed: (1) smaller productions have started to establish cross-media value chains; (2) there is a trend from sequential towards parallel content production. In this work we describe how the production of a historic documentary takes a cross-media approach right from the start. We analyze how this impacts the content creation pipelines with respect to story, audience and realization. The focus of the case study is the impact on the production of a documentary game. In a second step we reflect on the experiences gained so far and derive recommendations for future small-scale cross-media productions.
Towards a gamification of industrial production: a comparative study in sheltered work environments
(2015)
Using video game elements to improve user experience and user engagement in non-game applications is called "gamification". This method of enriching human-computer interaction has been applied successfully in education, health and general business processes. However, it has not been established in industrial production so far.
After discussing the requirements specific for the production domain we present two workplaces augmented with gamification. Both implementations are based on a common framework for context-aware assistive systems but exemplify different approaches: the visualization of work performance is complex in System 1 and simple in System 2.
Based on two studies in sheltered work environments with impaired workers, we analyze and compare the systems' effects on work and on workers. We show that gamification leads to a speed-accuracy tradeoff if no quality-related feedback is provided. Another finding is that there is a highly significant rise in acceptance if a straightforward visualization approach for gamification is used.
With projectors and depth cameras getting cheaper, assistive systems in industrial manufacturing are becoming increasingly ubiquitous. As these systems are able to continuously provide feedback using in-situ projection, they are well suited for supporting impaired workers in assembling products. However, so far little research has been conducted to understand the effects of projected instructions on impaired workers. In this paper, we identify common visualizations used by assistive systems for impaired workers and introduce a simple contour visualization. Through a user study with 64 impaired participants, we compare the different visualizations to a control group using no visual feedback in a real-world assembly scenario, i.e. assembling a clamp. Furthermore, we introduce a simplified version of the NASA-TLX questionnaire designed for impaired participants. The results reveal that the contour visualization is significantly better in terms of the participants' perceived mental load and perceived performance. Further, participants made fewer errors and were able to assemble the clamp faster with the contour visualization than with a video visualization, a pictorial visualization, or no visual feedback.
Design approaches for the gamification of production environments: a study focusing on acceptance
(2015)
Gamification is an ever more popular method to increase motivation and user experience in real-world settings. It is widely used in the areas of marketing, health and education. However, in production environments, it is a new concept. To be accepted in the industrial domain, it has to be seamlessly integrated in the regular work processes.
In this work we make the following contributions to the field of gamification in production: (1) we analyze the state of the art and introduce domain-specific requirements; (2) we present two implementations gamifying production based on alternative design approaches; (3) these are evaluated in a sheltered work organization. The comparative study focuses on acceptance, motivation and perceived happiness.
The results reveal that a pyramid design showing each work process as a step on the way towards a cup at the top is strongly preferred to a more abstract approach where the processes are represented by a single circle and two bars.
In this work we provide an overview of gamification, i.e. the application of methods from game design to enrich non-gaming processes. The contribution is divided into six subsections: an introduction focusing on the progression of gamification through the hype cycle in recent years (1), a brief introduction to gamification mechanics (2), and an overview of the state of the art in established areas (3). The focus is a discussion of more recent attempts at gamification in service and production (4). We also discuss the ethical implications (5) and the future perspectives (6) of gamified business processes. Gamification has been successfully applied in the domains of education (serious games) and health (exergames) and is spreading to other areas. In recent years there have been various attempts to “gamify” business processes. While the first efforts date back as far as the collection of miles in frequent flyer programs, we portray some of the more recent and comprehensive software-based approaches in the service industry, e.g. the gamification of processes in sales and marketing. We discuss their accomplishments as well as their social and ethical implications. Finally, a very recent approach is presented: the application of gamification in the domain of industrial production. We discuss the special requirements in this domain and the effects on the business level and on the users. We conclude with a prognosis on the future development of gamification.
Did you know that for each banana bunch harvested, the complete plant must be cut down as well? In Brazil alone, 440 million banana plants are planted annually. With an average weight of 30 kg per banana plant, this amounts to about 13.5 million tons of banana residues per year. Although there are some projects that use these residues for the production of valuable products (e.g. fibers for textile and paper production), most of this organic waste material is unused and left for composting on the farmland.
The basic idea of this project is to evaluate this organic waste material for converting it to a renewable and CO2 neutral fuel. Therefore, the different parts of the banana plant (heart, leaves and pseudo stem) were analyzed regarding their biogas potential (specific biogas yield and biogas production kinetics). In further studies the effect of mechanical and enzymatic pretreatments of the different parts of the plants was investigated. This examination could then be the basis for an energetic usage of this organic residue.
The biogas batch experiments were performed according to the German guideline VDI 4630 in 2-L batch reactors at 37 °C. As biogas substrates, the heart, the leaves and the pseudo stem of the banana plant residues were used, with and without enzymatic/mechanical pretreatment.
The different parts of the banana plants result in a specific biogas production yield in the range of 260-470 norm liters per kg organic dry mass.
To determine the influence of the mechanical pretreatment (particle size 1-15 mm) on the biogas production kinetics, kinetic constants were defined and calculated. Reducing the particle size improves the biogas production kinetics. Further experiments will show whether the results from the batch experiments can be transferred to a continuously fed biogas reactor. The experiments on enzymatic pretreatment are still ongoing.
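Kinetic constants for batch digestion are commonly obtained by fitting a first-order model B(t) = B_max · (1 − e^(−k·t)) to the cumulative biogas yield. The paper does not specify its fitting procedure; the sketch below uses a simple log-linearization on synthetic data as one plausible approach.

```python
import math

def first_order_k(times, yields, b_max):
    """Least-squares fit of the first-order rate constant k in
    B(t) = b_max * (1 - exp(-k*t)), via the linearization
    -ln(1 - B/b_max) = k*t (regression line forced through the origin)."""
    xs, ys = [], []
    for t, b in zip(times, yields):
        if 0 < b < b_max:                       # only points the model can explain
            xs.append(t)
            ys.append(-math.log(1.0 - b / b_max))
    # slope through the origin: k = sum(x*y) / sum(x*x)
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# hypothetical batch data: days vs. accumulated biogas (norm liters per kg oDM)
days = [2, 5, 10, 20, 30]
b_max = 470.0                                   # upper end of the reported range
gas = [b_max * (1 - math.exp(-0.15 * t)) for t in days]  # synthetic, k = 0.15 1/d
k = first_order_k(days, gas, b_max)
```

With real measurements, B_max itself is uncertain and is usually co-estimated by nonlinear regression rather than fixed in advance.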
In a semi-autonomic cloud auditing architecture, we integrated privacy-enhancing mechanisms [15] by applying the public-key version of the somewhat homomorphic encryption (SHE) scheme from [4]. It turns out that the performance of the SHE scheme can be significantly improved by carefully deriving the relevant crypto parameters from the concrete cloud auditing use cases for which the scheme serves as a privacy-enhancing approach. We provide a generic algorithm for finding good SHE parameters with respect to a given use-case scenario by analyzing and taking into consideration the security, correctness and performance of the scheme. To show the relevance of the proposed algorithm, we apply it to two predominant cloud auditing use cases.
Covert and side channels, as well as techniques to establish them in cloud computing, have been a focus of research for quite some time. However, not many concrete mitigation methods have been developed, and even fewer have been adapted and concretely implemented by cloud providers. We therefore recently proposed C³-Sched, a CPU-scheduling-based approach to mitigate L2 cache covert channels. Instead of flushing the cache on every context switch, we schedule trusted virtual machines to create noise which prevents potential covert channels. Additionally, our approach aims at preserving performance by utilizing existing instead of artificial workload, while reducing covert-channel-related cache flushes to cases where not enough noise has been achieved. In this work we evaluate the cache covert-channel mitigation and the performance impact of our integration of C³-Sched into the XEN credit scheduler. Moreover, we compare it to naive solutions and more competitive approaches.
Electronic door signs for displaying information are now widespread, especially in public buildings. They range from tablet-based door signs to PC-based door signs with an external screen. Most of these systems are operated at 230 V. Given the large number of door signs in public buildings, this can lead to significant energy consumption. This paper presents the development of an energy-self-sufficient door sign based on an e-paper display. The door sign can be configured via a smartphone app and an NFC interface. Particular attention is paid to the low-power hardware design of the electronics and to energy-related aspects.
Environmentally friendly implementation of new technologies and eco-innovative solutions often faces additional secondary ecological problems. On the other hand, existing biological systems show a lower environmental impact than human-made products or technologies. The paper defines a research agenda for the identification of underlying eco-inventive principles used in natural systems created through evolution. Finally, the paper proposes a comprehensive method for capturing eco-innovation principles in biological systems, in addition and complementary to existing biomimetic methods and the TRIZ methodology, and illustrates it with an example.
Cross-industry innovation is commonly understood as identification of analogies and interdisciplinary transfer or copying of technologies, processes, technical solutions, working principles or models between industrial sectors. In general, creative thinking in analogies belongs to the efficient ideation techniques. However, engineering graduates and specialists frequently lack the skills to think across the industry boundaries systematically. To overcome this drawback an easy-to-use method based on five analogies has been evaluated through its applications by students and engineers in numerous experiments and industrial case studies. The proposed analogies help to identify and resolve engineering contradictions and apply approaches of the Theory of Inventive Problem Solving TRIZ and biomimetics. The paper analyses the outcomes of the systematized analogies-based ideation and outlines that its performance continuously grows with the engineering experience. It defines metrics for ideation efficiency and ideation performance function.
This book constitutes the refereed proceedings of the 20th International TRIZ Future Conference, TFC 2020, held online at the University Cluj-Napoca, Romania, in October 2020 and sponsored by the International Federation for Information Processing.
The 34 chapters were carefully peer-reviewed and selected from 91 conference submissions. They are organized in the following thematic sections: computing TRIZ; education and pedagogy; sustainable development; tools and techniques of TRIZ for enhancing design; TRIZ and system engineering; TRIZ and complexity; and cross-fertilization of TRIZ for innovation management.
Sustainable design of equipment for process intensification requires a comprehensive and correct identification of relevant stakeholder requirements, design problems and tasks crucial for innovation success. Combining the principles of the Quality Function Deployment with the Importance-Satisfaction Analysis and Contradiction Analysis of requirements gives an opportunity to define a proper process innovation strategy more reliably and to develop an optimal process intensification technology with less secondary engineering and ecological problems.
Time-Sensitive Networking (TSN) provides mechanisms to enable deterministic and real-time networking in industrial networks. Configuration of these mechanisms is key to fully deploying and integrating TSN in such networks. The IEEE 802.1Qcc standard has proposed different configuration models to implement a TSN configuration. Up until now, TSN and its configuration have been explored mostly for Ethernet-based industrial networks; for wireless networks they are still considered work in progress. This work focuses on the fully centralized model and describes a generic concept to enable the configuration of TSN mechanisms in wireless industrial networks. To this end, a configuration entity is implemented to configure the wireless end stations so as to satisfy their requirements. The proposed solution is then validated with the Digital Enhanced Cordless Telecommunications ultra-low energy (DECT ULE) wireless communication protocol.
The authentication of electronic devices based on the individual shapes of the correlograms of their internal electrical noise is a well-known method. Specific physical differences in the components -- for example, caused by variations in production quality -- cause specific electrical signals, i.e. electrical noise, in the electronic device. This information can be obtained, and the specific differences of the individual devices identified, using an embedded analog-to-digital converter (ADC). These investigations confirm the possibility of identifying and authenticating electronic devices using bit templates calculated from the sequence of values of the normalized autocorrelation function of the noise. Experiments have been performed using personal computers. The probability of correct identification and authentication increases with increasing noise recording duration. In these experiments, an accuracy of 98.1% was achieved for a 1-second-long registration of EM for the set of investigated computers.
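The described pipeline (normalized autocorrelation, then quantization into a bit template, then template comparison) can be sketched as follows. The noise data is synthetic, and the median-based quantizer is a simple stand-in for the paper's actual template construction.

```python
import numpy as np

def bit_template(noise, lags=64):
    """Quantize the normalized autocorrelation function of a noise
    recording into a bit template (1 where the ACF exceeds its median)."""
    x = noise - noise.mean()
    acf = np.array([np.dot(x[:len(x) - k], x[k:]) for k in range(1, lags + 1)])
    acf /= np.dot(x, x)                       # normalize by lag-0 energy
    return (acf > np.median(acf)).astype(np.uint8)

def hamming(a, b):
    """Fraction of differing template bits (0 = identical templates)."""
    return float(np.mean(a != b))

rng = np.random.default_rng(0)
# hypothetical device noise: white noise shaped by a device-specific filter
device_a = np.convolve(rng.normal(size=20000), [1.0, 0.6, 0.2], mode="same")
device_b = np.convolve(rng.normal(size=20000), [1.0, -0.4, 0.3], mode="same")
t_a = bit_template(device_a[:10000])
t_b = bit_template(device_b[:10000])
```

In an authentication setting, a freshly recorded template would be compared against enrolled templates and accepted if the Hamming distance stays below a decision threshold; as the abstract notes, longer recordings make the ACF estimate, and hence the template, more stable.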
The development of Internet of Things (IoT) embedded devices is proliferating, especially in smart home automation systems. Unfortunately, these devices impose overhead on the IoT network. The Internet Engineering Task Force (IETF) has therefore introduced the IPv6 Low-Power Wireless Personal Area Network (6LoWPAN) standard to address this constraint. 6LoWPAN is an Internet Protocol (IP) based communication scheme that allows each device to connect to the Internet directly; as a result, power consumption is reduced. However, the limited transmission frame size of the IPv6 Routing Protocol for Low-power and Lossy Networks (RPL) causes routing overhead and consequently degrades network performance in terms of Quality of Service (QoS), especially in large networks. HRPL was therefore developed to enhance RPL and minimize the redundant retransmissions that cause this routing overhead. We introduced the T-Cut Off Delay to set the limit of the delay, and the H field to respond to actions taken within the T-Cut Off Delay. This paper presents a comparative performance assessment of HRPL between simulation and a real-world scenario (the 6LoWPAN Smart Home System (6LoSH) testbed) to validate the HRPL functionalities. Our results show that HRPL successfully reduced the routing overhead when implemented in 6LoSH. The observed difference between the experiments is 7.1% for Control Traffic Overhead (CTO) packets and 9.3% for convergence time. Further research is recommended for the following metrics: latency, Packet Delivery Ratio (PDR), and throughput.
During the day-to-day operation of localization systems in mines, the technical staff tends to rearrange radio equipment incorrectly: positions of devices may not be accurately marked on a map, or the marked positions may not correspond to reality. This can lead to positioning inaccuracies and errors in the operation of the localization system. This paper presents two Bayesian algorithms for the automatic correction of equipment positions on the map, using trajectories restored by inertial measurement units mounted on mobile objects such as pedestrians and vehicles. As a basis, a predefined map of the mine, represented as an undirected weighted graph, was used as input. The algorithms were implemented following the Simultaneous Localization and Mapping (SLAM) approach. The results prove that both methods are capable of detecting misplaced access points and providing corresponding corrections. The discrete Bayesian filter outperforms the unscented Kalman filter, although it requires more computational power.
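The discrete Bayesian filter idea can be illustrated on a heavily simplified 1D version of the problem: a belief over candidate access-point positions along a corridor is updated with range observations taken along a restored pedestrian trajectory. All numbers are hypothetical, and the graph-based mine map is reduced to a line of cells.

```python
import math

def bayes_update(prior, likelihoods):
    """One discrete Bayesian filter step: multiply the prior cell-wise
    by the observation likelihood and renormalize."""
    post = [p * l for p, l in zip(prior, likelihoods)]
    s = sum(post)
    return [p / s for p in post]

def range_likelihood(cells, ped_pos, measured, sigma=2.0):
    """Gaussian likelihood of a range measurement for each candidate
    access-point cell along a 1D corridor."""
    return [math.exp(-0.5 * ((abs(c - ped_pos) - measured) / sigma) ** 2)
            for c in cells]

cells = list(range(50))                      # candidate AP positions (m)
belief = [1.0 / len(cells)] * len(cells)     # uniform prior: map position untrusted
# pedestrian positions from the restored trajectory, ranging to the true AP at 30 m
for ped in [0, 10, 20, 40]:
    belief = bayes_update(belief, range_likelihood(cells, ped, abs(30 - ped)))
best = max(range(len(cells)), key=lambda i: belief[i])
```

A single range is ambiguous (two cells match it), which is why fusing observations from several trajectory points collapses the belief onto the true position.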
RETIS – Real-Time Sensitive Wireless Communication Solution for Industrial Control Applications
(2020)
Ultra-Reliable Low-Latency Communication (URLLC) has always been a vital component of many industrial applications. This paper proposes a new wireless URLLC solution called RETIS, which is suitable for factory automation and fast process control applications, where low latency, low jitter, and high data exchange rates are mandatory. We describe the communication protocol as well as the hardware structure of the network nodes implementing the required functionality. Several techniques enabling fast, reliable wireless transmissions are used: a short Transmission Time Interval (TTI), Time-Division Multiple Access (TDMA), MIMO, optional duplicated data transfer, Forward Error Correction (FEC), and an ACK mechanism. Preliminary tests show that a reliable end-to-end latency down to 350 μs and a packet exchange rate up to 4 kHz can be reached (using quadruple MIMO and the standard IEEE 802.15.4 PHY at 250 kbit/s).
The number of use cases for autonomous vehicles is increasing day by day, especially in commercial applications. One important application of autonomous vehicles is parcel delivery, where autonomous cars can massively reduce delivery effort and time by actively supporting the courier. One key component is, of course, the autonomous vehicle itself. Nevertheless, a flexible and secure communication architecture is an equally crucial component impacting the overall performance of such a system, since it is required to allow continuous interaction between the vehicle and the other components of the system. The communication system must provide a reliable and secure architecture that is still flexible enough to remain practical and to address several use cases. In this paper, a robust communication architecture for such autonomous fleet-based systems is proposed. The architecture provides reliable communication between the different system entities while keeping this communication secure. It uses different technologies such as Bluetooth Low Energy (BLE), cellular networks, and Low Power Wide Area Networks (LPWAN) to achieve these goals.
This paper presents a novel low-jitter interface between a low-cost integrated IEEE 802.11 chip and an FPGA. It is designed to be part of system hardware for ultra-precise synchronization between wireless stations. On the physical level, it uses the Wi-Fi chip's coexistence signal lines and UART frame encoding. On this basis, we propose an efficient communication protocol providing precise timestamping of incoming frames and internal diagnostic mechanisms for detecting communication faults, while remaining simple enough to be implemented both in a low-cost FPGA and in commodity IEEE 802.11 chip firmware. The results of computer simulation show that the developed FPGA implementation of the proposed protocol can precisely timestamp incoming frames and detect most communication errors, even under high interference. The probability of undetected errors was investigated. The results of this analysis are significant for the development of novel wireless synchronization hardware.
With the increasing degree of interconnectivity in industrial factories, security is becoming the most important stepping stone towards wide adoption of the Industrial Internet of Things (IIoT). This paper summarizes the most important aspects of a keynote given at the DESSERT 2020 conference. It highlights ongoing and open research activities on the different levels, from novel cryptographic algorithms over security protocol integration and testing to security architectures for the full lifetime of devices and systems. It includes an overview of the research activities at the authors' institute.
Analysis of Amplitude and Phase Errors in Digital-Beamforming Radars for Automotive Applications
(2020)
Automotive radar sensors with Digital Beamforming (DBF) fundamentally use several transmit and receive antennas to measure the direction of a target. However, hardware imperfections, tolerances in the antenna feeding lines, coupling effects, as well as temperature changes and aging cause amplitude and phase errors. These errors can lead to misinterpretation of the data and result in hazardous actions of the autonomous system. First, the impact of amplitude and phase errors on angular estimation is discussed and analyzed by simulations. The results are compared with the measured errors of a real radar sensor. Further, a calibration method is implemented and evaluated by measurements.
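The effect of such channel errors on angle estimation can be reproduced in a few lines: a conventional (Bartlett) beamformer estimates the direction of arrival from an ideal and from a distorted array snapshot. This is a generic illustration, not the paper's simulation setup; the error magnitudes (1 dB ripple, ±20° phase) are hypothetical.

```python
import numpy as np

def doa_estimate(snapshot, n_ant, d=0.5, grid=None):
    """Bartlett (delay-and-sum) direction-of-arrival estimate for a
    uniform linear array with element spacing d (in wavelengths)."""
    if grid is None:
        grid = np.linspace(-90, 90, 1801)          # 0.1 deg search grid
    k = np.arange(n_ant)
    # steering matrix: one column per hypothesis angle
    A = np.exp(2j * np.pi * d * np.outer(k, np.sin(np.radians(grid))))
    power = np.abs(A.conj().T @ snapshot) ** 2
    return grid[np.argmax(power)]

n_ant, true_angle = 8, 20.0
k = np.arange(n_ant)
clean = np.exp(2j * np.pi * 0.5 * k * np.sin(np.radians(true_angle)))

# hypothetical per-channel errors: ~1 dB amplitude ripple, up to 20 deg phase error
rng = np.random.default_rng(1)
gain = 10 ** (rng.uniform(-0.5, 0.5, n_ant) / 20)
phase = np.radians(rng.uniform(-20, 20, n_ant))
distorted = clean * gain * np.exp(1j * phase)

err = abs(doa_estimate(distorted, n_ant) - doa_estimate(clean, n_ant))
```

Besides the angle bias, uncalibrated errors mainly raise the sidelobe level, which is what a calibration step (as evaluated in the paper) corrects.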
The Metering Bus, also known as M-Bus, is a European standard (EN 13757-3) for reading out metering devices such as electricity, water, gas, or heat meters. Although real-life M-Bus networks can reach a significant size and complexity, only very simple protocol analyzers are available to observe and maintain such networks. In order to provide developers and installers with the ability to analyze the real bus signals easily, a web-based monitoring tool for the M-Bus has been designed and implemented. Combined with a physical bus interface, it allows for measuring and recording the bus signals. To this end, a circuit was first developed which transforms the voltage- and current-modulated M-Bus signals into a voltage signal that can be read by a standard ADC and processed by an MCU. The bus signals and packets are displayed by a web server, which analyzes and classifies the frame fragments. As an additional feature, an oscilloscope functionality is included in order to visualize the physical signal on the bus. This paper describes the development of the read-out circuit for the wired M-Bus and the data recovery.
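For reference, the kind of frame such an analyzer classifies is the M-Bus long frame: start byte 0x68, a doubled length field, 0x68 again, the C/A/CI header, user data, an arithmetic checksum over the body, and the stop byte 0x16. A minimal parser sketch (the example frame content is made up):

```python
def parse_long_frame(frame):
    """Parse and validate an M-Bus long frame:
    0x68, L, L, 0x68, C, A, CI, user data, checksum, 0x16."""
    if frame[0] != 0x68 or frame[3] != 0x68 or frame[-1] != 0x16:
        raise ValueError("bad frame delimiters")
    length = frame[1]
    if frame[2] != length:
        raise ValueError("length fields disagree")
    body = frame[4:4 + length]                 # C, A, CI and user data
    if sum(body) & 0xFF != frame[4 + length]:  # arithmetic checksum mod 256
        raise ValueError("checksum mismatch")
    return {"control": body[0], "address": body[1],
            "ci": body[2], "data": bytes(body[3:])}

# hypothetical response frame from primary address 5 with two user-data bytes
payload = bytes([0x08, 0x05, 0x72, 0x12, 0x34])
frame = (bytes([0x68, len(payload), len(payload), 0x68]) + payload
         + bytes([sum(payload) & 0xFF, 0x16]))
info = parse_long_frame(frame)
```

A real monitoring tool would additionally handle short frames, single-character acknowledgements (0xE5), and the variable data structure carried behind the CI field.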
Partial substitution of Al atoms with Sc in wurtzite AlN crystals increases the piezoelectric constants. This leads to an increased electromechanical coupling, which is required for high bandwidths in piezo-acoustic filters. The crystal bonds in Al1-xScxN (AlScN) are softened as a function of the Sc atomic percentage x, leading to a reduction of the phase velocity in the film. Combining high-Sc-content AlScN films with high-velocity substrates favors higher-order guided surface acoustic wave (SAW) modes [1]. This study investigates higher-order SAW modes in epitaxial AlScN on sapphire (Al2O3). Their dispersion for Pt-metallized epitaxial AlScN films on Al2O3 was computed for two different propagation directions. The computed phase velocity dispersion branches were experimentally verified by the characterization of fabricated SAW resonators. The results indicated four wave modes for the propagation direction (0°, 0°, 0°), featuring 3D-polarized displacement fields. The sensitivity of the wave modes to the elastic constants of AlScN was investigated. It was shown that, due to the 3D polarization of the waves, all elastic constants have an influence on the phase velocity and can be measured by suitable weighting functions in material constant extraction procedures.
Laser ultrasound was used to determine dispersion curves of surface acoustic waves on a Si (001) surface covered by AlScN films with a scandium content between 0 and 41%. By including off-symmetry directions for wavevectors, all five independent elastic constants of the film were extracted from the measurements. Results for their dependence on the Sc content are presented and compared to corresponding data in the literature, obtained by alternative experimental methods or by ab-initio calculations.
Due to the rapidly increasing storage consumption worldwide, as well as the expectation of continuous availability of information, the complexity of administration in today's data centers is growing constantly. Integrated techniques for monitoring hard disks can increase the reliability of storage systems. However, these techniques often lack intelligent data analysis to enable predictive maintenance. To solve this problem, machine learning algorithms can be used to detect potential failures in advance and prevent them. In this paper, an unsupervised model for predicting hard disk failures based on Isolation Forest is proposed. Furthermore, a method is presented that can deal with highly imbalanced datasets, as the experiment on the Backblaze benchmark dataset demonstrates.
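The core idea of Isolation Forest, that anomalies are separated from the rest of the data by fewer random splits than normal points, can be sketched in a few dozen lines. This toy version uses random synthetic "SMART-style" features, not the Backblaze data or the paper's model.

```python
import numpy as np

def grow_tree(X, rng, depth=0, max_depth=8):
    """Build one isolation tree: random feature, random split value."""
    if depth >= max_depth or len(X) <= 1:
        return {"size": len(X)}
    f = rng.integers(X.shape[1])
    lo, hi = X[:, f].min(), X[:, f].max()
    if lo == hi:
        return {"size": len(X)}
    split = rng.uniform(lo, hi)
    mask = X[:, f] < split
    return {"f": f, "split": split,
            "left": grow_tree(X[mask], rng, depth + 1, max_depth),
            "right": grow_tree(X[~mask], rng, depth + 1, max_depth)}

def path_length(tree, x, depth=0):
    """Depth at which x is isolated, with the usual correction for
    leaves that still contain several points."""
    if "size" in tree:
        n = tree["size"]
        if n <= 1:
            return depth
        return depth + 2 * (np.log(n - 1) + 0.5772) - 2 * (n - 1) / n
    branch = "left" if x[tree["f"]] < tree["split"] else "right"
    return path_length(tree[branch], x, depth + 1)

def anomaly_score(forest, x):
    """Shorter average path => easier to isolate => more anomalous."""
    return -np.mean([path_length(t, x) for t in forest])

rng = np.random.default_rng(42)
healthy = rng.normal(0, 1, size=(500, 4))      # hypothetical healthy-disk features
forest = [grow_tree(healthy, rng) for _ in range(50)]
normal_score = anomaly_score(forest, np.zeros(4))
outlier_score = anomaly_score(forest, np.full(4, 8.0))  # failing-disk pattern
```

Training on (mostly) healthy disks only is what makes the approach unsupervised and robust to the extreme class imbalance of failure data.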
The recent successes and widespread application of compute-intensive machine learning and data analytics methods have been boosting the usage of the Python programming language on HPC systems. While Python provides many advantages for its users, it has not been designed with a focus on multi-user environments or parallel programming, making it quite challenging to maintain stable and secure Python workflows on an HPC system. In this paper, we analyze the key problems induced by the usage of Python on HPC clusters and sketch appropriate workarounds for efficiently maintaining multi-user Python software environments, securing and restricting the resources of Python jobs, and containing Python processes, with a focus on deep learning applications running on GPU clusters.
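One generic building block for restricting the resources of Python jobs is POSIX resource limits applied to child processes. The snippet below is an illustration of the idea, not the paper's actual setup, and assumes a POSIX system; real clusters typically enforce this via the batch scheduler or cgroups instead.

```python
import resource
import subprocess
import sys
import textwrap

def run_limited(code, cpu_seconds=5, mem_bytes=512 * 1024 * 1024):
    """Run a Python snippet in a child process with hard CPU-time and
    address-space limits, so a runaway job cannot exhaust the node."""
    def set_limits():
        # applied in the child between fork() and exec()
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
        resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))
    return subprocess.run([sys.executable, "-c", textwrap.dedent(code)],
                          preexec_fn=set_limits, capture_output=True, text=True)

ok = run_limited("print('hello from a contained job')")
# exceeding the address-space limit fails the allocation, not the node
oom = run_limited("x = bytearray(2 * 1024 ** 3)")
```

The over-allocation in the second job raises a `MemoryError` inside the child, so it exits with a nonzero return code while the parent process is unaffected.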
In this work a method for the estimation of current slopes induced by inverters operating interior permanent magnet synchronous machines is presented. After the derivation of the estimation algorithm, the requirements for a suitable sensor setup in terms of accuracy, dynamics and electromagnetic interference are discussed. The boundary conditions for the estimation algorithm are presented with respect to application within high-power traction systems. The estimation algorithm is implemented on a field-programmable gate array (FPGA). The moving least-squares algorithm offers the advantage that it does not depend on sample vectors, so not every measured value has to be stored. Accumulating sums of the measured values leads to a significant reduction of the required storage units and thus decreases the hardware requirements. The algorithm is designed to be calculated within the dead time of the inverter. Appropriate countermeasures for disturbances and hardware restrictions are implemented. The results are discussed afterwards.
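The storage-saving trick, keeping only running sums instead of sample vectors, follows directly from the closed-form least-squares slope. A sketch with illustrative units and values (not the paper's fixed-point FPGA implementation, which would also use a finite window within the inverter dead time):

```python
class RunningSlope:
    """Least-squares slope di/dt over all samples seen so far, kept as
    five running sums -- no vector of measured values is stored."""
    def __init__(self):
        self.n = self.sx = self.sy = self.sxx = self.sxy = 0.0

    def add(self, t, i):
        self.n += 1
        self.sx += t
        self.sy += i
        self.sxx += t * t
        self.sxy += t * i

    def slope(self):
        # closed-form least-squares slope from the accumulated sums
        d = self.n * self.sxx - self.sx ** 2
        return (self.n * self.sxy - self.sx * self.sy) / d

est = RunningSlope()
for k in range(8):                  # synthetic samples on an exact current ramp
    t = k * 1e-3                    # hypothetical time stamps (s)
    est.add(t, 50.0 * t + 0.1)      # slope 50 A/s; the offset does not affect it
di_dt = est.slope()
```

On an FPGA the five accumulators map to a handful of registers and multiply-accumulate units, which is the hardware reduction the abstract refers to.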
Three quotations serve as an entry point into the discourse on civil network technologies, mobile devices, online services, and the question of how the "church of the future" can position itself (at least from a media studies perspective). Contrasting the positions represented in these quotations is intended to show the benefits and consequences, for the individual as well as for society, of the increasingly complete penetration of (almost) all areas of life by digital technology.
Additive manufacturing is a rapidly growing manufacturing process for which many new processes and materials are currently being developed. Its biggest advantage is that almost any shape can be produced, even where conventional manufacturing methods reach their limits. Furthermore, a lot of material is saved, because the part is created in layers and only as much material is used as necessary. In contrast, in machining processes it is not uncommon for more than half of the material to be removed and disposed of. Recently, new additive manufacturing processes have come onto the market that enable the manufacturing of components using the FDM process with fiber reinforcement. This opens up new possibilities for optimizing components in terms of their strength while at the same time increasing sustainability by reducing material consumption and waste. Within the scope of this work, different types of test specimens are designed, manufactured and examined. The test specimens are tensile specimens, which are used both for standardized tensile tests and for examining a practical component from automotive engineering used in a student project. This project is a vehicle designed to compete in the Shell Eco-marathon, one of the world's largest energy efficiency competitions. The aim is to design a vehicle that covers a certain distance with as little fuel as possible. Accordingly, it is desirable to manufacture the components with the lowest possible weight while still ensuring the required rigidity. To achieve this, the use of fiber-reinforced 3D-printed parts is particularly suitable due to their high rigidity. In particular, the joining technology for connecting conventionally and additively manufactured components is developed. Finally, the economic efficiency was assessed, and guidelines for the design of components and joining elements were created.
In addition, it could be shown that the additive manufacturing of the component could be implemented faster and more sustainably than the previous conventional manufacturing.
Background: A disturbed synchronization of the ventricular contraction can cause advanced systolic heart failure in affected patients, which can often be attributed to a left bundle branch block (LBBB). If patients do not respond to medication, they are treated with a cardiac resynchronization therapy (CRT) system. The aim of this study was to integrate His bundle pacing into the Offenburg heart rhythm model in order to visualize the electrical pacing field generated by His bundle pacing.
Methods: Modelling and electrical field simulation were performed with the software CST (Computer Simulation Technology) from Dassault Systèmes. CRT with biventricular pacing is achieved by an apical right ventricular electrode and an additional left ventricular electrode, which is floated into the coronary sinus. This conventional type of biventricular pacing leads to a reduction of the left ventricular ejection fraction. Furthermore, about one third of CRT patients are non-responders to the therapy.
Results: His bundle pacing represents a physiological alternative to conventional cardiac pacing and cardiac resynchronization. An electrode implanted in the His bundle emits a stronger electrical pacing field than the electrical pacing field of conventional cardiac pacemakers. The pacing of the His bundle was performed by the Medtronic Select Secure 3830 electrode with pacing voltage amplitudes of 3 V, 2 V and 1.5 V in combination with a pacing pulse duration of 1 ms.
Conclusions: Compared to conventional cardiac pacemaker pacing, His bundle pacing is capable of bridging LBBB conduction disorders in the left ventricle. The His bundle pacing electrical field is able to spread via the physiological pathway in the right and left ventricles for CRT with a narrow QRS-complex in the surface ECG.
As part of the research project Professional UX, Offenburg University, together with the software company Dr. Hornecker in Freiburg, is developing an innovative system solution that makes it possible to capture and interpret the emotions users experience while using interactive applications, based on facial expressions, voice, and gaze. The aim of the study is to identify indicators that allow perceived stimuli to be matched precisely to the emotions they trigger. As soon as negative emotions such as anger or uncertainty occur, they can be recorded, and the irritating stimulus can subsequently be eliminated. The project team has developed a first prototype of the Professional UX system solution, consisting of hardware and software, which makes it possible to perform UX measurements during user interaction and to evaluate them automatically with the help of AI.
Automotive service suppliers are keen to invent products that help to reduce particulate matter pollution substantially, but governments worldwide are not yet ready to make retrofitting such devices a statutory requirement. The objective of our research is to develop a strategy, based on user needs, for introducing these devices to the market. The contribution of this paper is three-fold: we provide an overview of the current options for reducing particulate matter pollution (I). This corpus is used to arrive at a more precise description of the specific needs and wishes of target groups (II). Finally, a representative empirical study via social media channels with German car owners will help to develop a strategy for introducing retrofit devices into the German market (III).
Reaching customers through dialog marketing campaigns is becoming more and more difficult. This is a common problem of companies and marketing agencies worldwide: information overload, multi-channel communication and a confusing variety of offers make it hard to gain the attention of the target group. The contribution of this paper is four-fold: we provide an overview of the current state of print dialog marketing activities and trends (I). Based on this corpus, we identify the main key performance indicators of dialog marketing customer interaction (II). A qualitative user experience study identifies customer wishes and needs, focusing on lottery offers for senior citizens (III). Finally, we evaluate the success of two different dialog marketing campaigns with 20,000 clients and compare the key performance indicators of the original hands-on, experience-based print mailings with user-experience-tested and optimized mailings (IV).
An Empirical Study of Explainable AI Techniques on Deep Learning Models For Time Series Tasks
(2021)
Decision explanations of machine learning black-box models are often generated by applying Explainable AI (XAI) techniques. However, many proposed XAI methods produce unverified outputs. Evaluation and verification are usually achieved with a visual interpretation by humans on individual images or text. In this preregistration, we propose an empirical study and benchmark framework to apply attribution methods for neural networks developed for images and text data on time series. We present a methodology to automatically evaluate and rank attribution techniques on time series using perturbation methods to identify reliable approaches.
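As an illustration of how a perturbation check can rank attributions, here is a minimal sketch using a toy linear "model" instead of a neural network; the weights, indices, and the gradient-times-input attribution are assumptions for demonstration only, not the benchmark framework from the study:

```python
import numpy as np

def perturbation_score(model, x, attribution, n=3):
    """Zero the n time steps with the highest |attribution| and return the
    absolute change of the model output. A faithful attribution should
    produce a larger change than one pointing at irrelevant steps."""
    top = np.argsort(np.abs(attribution))[-n:]
    x_pert = x.copy()
    x_pert[top] = 0.0
    return abs(model(x) - model(x_pert))

# Toy linear "model" in which only time steps 3, 17 and 30 matter.
weights = np.zeros(50)
weights[[3, 17]] = 1.0
weights[30] = -1.0

def model(x):
    return float(x @ weights)

x = np.arange(50, dtype=float)
faithful = perturbation_score(model, x, x * weights)  # gradient * input

useless_attr = np.zeros(50)
useless_attr[[0, 1, 2]] = 1.0                         # points only at irrelevant steps
useless = perturbation_score(model, x, useless_attr)
```

Perturbing the steps flagged by the faithful attribution changes the output strongly, while perturbing the irrelevant steps leaves it untouched — the ordering a reliable attribution method should exhibit.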
In the approach presented here, the point of impact of the dart is determined by cross-correlating audio signals. The impact of the dart produces a characteristic sound, which is converted into electrical signals by several microphones placed in a defined arrangement around the dartboard. Using the speed of sound and the time differences with which the sound wave reaches the individual microphones, the point of impact can then be calculated.
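As an illustration of the time-difference-of-arrival idea, a minimal cross-correlation sketch; the sample rate, signal shapes, and the 25-sample offset are illustrative assumptions, not measurements from the described setup:

```python
import numpy as np

def estimate_delay(delayed, reference, sample_rate):
    """Estimate how much later `delayed` received the transient than
    `reference`, via the peak of the full cross-correlation."""
    corr = np.correlate(delayed, reference, mode="full")
    lag = np.argmax(corr) - (len(reference) - 1)
    return lag / sample_rate

# Synthetic impact transient arriving 25 samples later at the second microphone.
fs = 48_000                                   # assumed sample rate in Hz
transient = np.exp(-np.arange(200) / 20.0)    # decaying "impact" sound
mic_a = np.zeros(1000); mic_a[100:300] = transient
mic_b = np.zeros(1000); mic_b[125:325] = transient

delay = estimate_delay(mic_b, mic_a, fs)
```

Multiplying such a pairwise delay by the speed of sound (roughly 343 m/s in air) yields the path-length difference to the two microphones; several such differences constrain the impact point.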
The visual-inertial mapping and localization system maplab is analyzed by means of an implementation and a subsequent in-depth evaluation. Mapping and localization are based on the detection of environmental features. In addition to creating individual maps, it is also possible to merge several maps, allowing large areas to be mapped and used for further data analysis. Carrying out and evaluating the results in different application scenarios shows that maplab is particularly suited to mapping rooms and small building complexes. Map merging further offers the option of increasing the information content of maps, which improves the effectiveness of subsequent localization. With growing map size, however, geometric inconsistencies increase.
This paper explains the realization of a concept for research-oriented photonics education. Using the example of the integration of an actual PhD project, it is shown how students are familiarized with the topic of research and scientific work in the first semesters. Typical research activities are included as essential parts of the learning process. Research should be made visible and tangible for the students. The authors will present all aspects of the learning environment, their impressions and experiences with the implemented scenario, as well as first evaluation results of the students.
Live streaming of events over an IP network as a catalyst in media technology education and training
(2020)
The paper describes how students are involved in applied research when setting up the technology and running a live event. Real-time IP transmission in broadcast environments via fiber optics will become increasingly important in the future. Therefore, it is necessary to create a platform in this area where students can learn how to handle IP infrastructure and fiber optics. With this in mind, we have built a fully functional TV control room that is completely IP-based. The authors present the steps in the development of the project and show the advantages of the proposed digital solutions. The IP network proves to be a synergy between the involved teams: participants of the robot competition and the members of the media team. These results are presented in the paper. Our activities aim to awaken enthusiasm for research and technology in young people. Broadcasts of live events are a good opportunity for "hands on" activities.
Astronomical phenomena fascinate people from the very beginning of mankind up to today. In this paper the authors will present their experience with photography of astronomical events. The main focus will be on aurora borealis, comet Neowise, total lunar eclipses and how mobile devices open up new possibilities to observe the green flash. Our efforts were motivated by the great impact and high number of viewers of these events. Visitors from over a hundred countries watched our live broadcasts.
Furthermore, we report on our experiences with the photography of optical phenomena such as polar lights (Fig. 1), comet Neowise with a Delta Aquariids meteor (Fig. 11), and lunar eclipses (Fig. 12).
Human-Robot Collaboration (HRC) has developed rapidly in recent years with the help of collaborative lightweight robots. An important prerequisite for HRC is a safe gripper system. This results in a new field of application in robotics, mainly in supporting activities in assembly and care. Currently, there is a variety of grippers that show recognizable weaknesses in terms of flexibility, weight, safety and price.
By means of Additive manufacturing (AM) gripper systems can be developed which can be used multifunctionally, manufactured quickly and customized. In addition, the subsequent assembly effort can be reduced due to the integration of several components to a complex component. An important advantage of AM is the new freedom in designing products. Thus, components using lightweight design can be produced. Another advantage is the use of 3D multi-material printing, wherein a component with different material properties and also functions can be realized.
This contribution presents the possibilities of AM considering HRC requirements. First of all, the topic of human-robot interaction with regard to additive manufacturing is explained on the basis of a literature review. In addition, the development steps of the HRC gripper through to assembly are explained. The knowledge acquired regarding AM is especially emphasized here. Furthermore, an application example of the HRC gripper is considered in detail, and the gripper and its components are evaluated and optimized with respect to their function. Finally, a technical and economic evaluation is carried out. As a result, it is possible to additively manufacture a multifunctional and customized human-robot collaboration gripping system. Both the costs and the weight were significantly reduced. Due to the low weight of the gripping system, only about 13% of the payload of the robot used is utilized.
Injection mold inserts are usually produced by machining. In recent years, however, additive manufacturing of these tools has also proven practical. Agility plays an increasingly important role in product development today. To reveal the potential of additive tooling in the context of agile prototyping, and the differences from conventional manufacturing methods, quotations for the production of several mold inserts by CNC and HSC machining as well as by additive manufacturing are requested and compared with regard to procurement costs and lead times. The technical differences are also evaluated. From these two considerations, a profile of the three manufacturing methods can be derived, which is intended to support the application-specific selection of a process.
Additive manufacturing (AM) or 3D printing (3DP) has become a widespread new technology in recent years and is now used in many areas of industry. At the same time, there is an increasing need for training courses that impart the knowledge required for product development in 3D printing. In this article, a workshop on “Rapid Prototyping” is presented, which is intended to provide students with the technical and creative knowledge for product development in the field of AM. Today, additive manufacturing is an important part of teaching for the training of future engineers. In a detailed literature review, the advantages and disadvantages of previous approaches to training students are examined and analyzed. On this basis, a new approach is developed in which the students analyze and optimize a given product in terms of additive manufacturing. The students use two different 3D printers to complete this task. In this way, the students acquire the skills to work independently with different processes and materials. With this new approach, the students learn to adapt the design to different manufacturing processes and to observe the restrictions of different materials. The results of these courses are evaluated through feedback in a presentation and a questionnaire.
Efficient collaborative robotic applications need a combination of speed and separation monitoring, and power and force limiting operations. While most collaborative robots have built-in sensors for power and force limiting operations, there are none with built-in sensor systems for speed and separation monitoring. This paper proposes a system for speed and separation monitoring directly from the gripper of the robot. It can monitor separation distances of up to three meters. We used single-pixel Time-of-Flight sensors to measure the separation distance between the gripper and the next obstacle perpendicular to it. This is the first system capable of measuring separation distances of up to three meters directly from the robot's gripper.
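A much-simplified sketch of the speed-and-separation decision such a sensor system feeds: the protective-distance formula below is a reduced, ISO/TS 15066-style term, and all parameter values are illustrative assumptions, not numbers from the paper:

```python
def min_protective_distance(v_human, v_robot, t_react, t_stop, margin):
    """Reduced protective separation distance: ground covered by the human
    while the robot reacts and stops, plus robot motion during reaction,
    plus a fixed safety margin. Speeds in m/s, times in s, result in m."""
    return v_human * (t_react + t_stop) + v_robot * t_react + margin

def must_stop(measured_distance, **params):
    """True if the measured ToF distance undercuts the protective distance."""
    return measured_distance < min_protective_distance(**params)

# Illustrative parameters (assumptions, not values from the paper).
params = dict(v_human=1.6, v_robot=0.5, t_react=0.1, t_stop=0.3, margin=0.2)

far = must_stop(3.0, **params)    # reading at the sensor's maximum range
near = must_stop(0.8, **params)   # human has closed in
```

With these numbers the protective distance comes out at about 0.89 m, so a 3 m reading permits motion while a 0.8 m reading triggers a stop.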
Generative convolutional deep neural networks, e.g. popular GAN architectures, are relying on convolution based up-sampling methods to produce non-scalar outputs like images or video sequences. In this paper, we show that common up-sampling methods, i.e. known as up-convolution or transposed convolution, are causing the inability of such models to reproduce spectral distributions of natural training data correctly. This effect is independent of the underlying architecture and we show that it can be used to easily detect generated data like deepfakes with up to 100% accuracy on public benchmarks. To overcome this drawback of current generative models, we propose to add a novel spectral regularization term to the training optimization objective. We show that this approach not only allows to train spectral consistent GANs that are avoiding high frequency errors. Also, we show that a correct approximation of the frequency spectrum has positive effects on the training stability and output quality of generative networks.
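The spectral distributions in question are commonly summarized by an azimuthally averaged power spectrum. A minimal numpy sketch follows; the neighbour-averaging filter merely stands in for the low-pass character of generated images and is an assumption for illustration, not the paper's method:

```python
import numpy as np

def radial_power_spectrum(img, n_bins=20):
    """Azimuthally averaged power spectrum of a grayscale image: the 1D
    statistic typically used to compare spectral distributions."""
    power = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = img.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h / 2, x - w / 2)          # radial frequency per pixel
    bins = np.linspace(0, r.max(), n_bins + 1)
    idx = np.clip(np.digitize(r.ravel(), bins) - 1, 0, n_bins - 1)
    sums = np.bincount(idx, weights=power.ravel(), minlength=n_bins)
    counts = np.bincount(idx, minlength=n_bins)
    return sums / np.maximum(counts, 1)

rng = np.random.default_rng(0)
noisy = rng.normal(size=(64, 64))                                    # full-spectrum image
smooth = (noisy + np.roll(noisy, 1, 0) + np.roll(noisy, 1, 1)) / 3   # low-passed version

spec_noisy = radial_power_spectrum(noisy)
spec_smooth = radial_power_spectrum(smooth)
```

The high-frequency (outermost) bins of the low-passed image carry clearly less power — the kind of systematic gap that makes up-convolution artifacts detectable.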
Multiple Object Tracking (MOT) is a long-standing task in computer vision. Current approaches based on the tracking by detection paradigm either require some sort of domain knowledge or supervision to associate data correctly into tracks. In this work, we present a self-supervised multiple object tracking approach based on visual features and minimum cost lifted multicuts. Our method is based on straight-forward spatio-temporal cues that can be extracted from neighboring frames in an image sequences without supervision. Clustering based on these cues enables us to learn the required appearance invariances for the tracking task at hand and train an AutoEncoder to generate suitable latent representations. Thus, the resulting latent representations can serve as robust appearance cues for tracking even over large temporal distances where no reliable spatio-temporal features can be extracted. We show that, despite being trained without using the provided annotations, our model provides competitive results on the challenging MOT Benchmark for pedestrian tracking.
Diffracted waves carry high-resolution information that can help in interpreting fine structural details at a scale smaller than the seismic wavelength. Because of the low signal-to-noise ratio of diffracted waves, it is challenging to preserve them during processing and to identify them in the final data. The traditional approach is therefore to pick diffractions manually. However, such a task is tedious and often prohibitive, so current attention is given to domain adaptation. These methods aim to transfer knowledge from a labeled domain to train the model and then infer on the real, unlabeled data. In this regard, it is common practice to create a synthetic labeled training dataset, followed by testing on unlabeled real data. Unfortunately, such a procedure may fail due to the gap between the synthetic and the real distribution, since synthetic data quite often oversimplifies the problem, and consequently transfer learning becomes a hard and non-trivial procedure. Furthermore, deep neural networks are characterized by their high sensitivity towards cross-domain distribution shift. In this work, we present a deep learning model that builds a bridge between both distributions, creating a semi-synthetic dataset that fills in the gap between the synthetic and real domains. More specifically, our proposal is a feed-forward, fully convolutional neural network for image-to-image translation that allows inserting synthetic diffractions while preserving the original reflection signal. A series of experiments validate that our approach produces convincing seismic data containing the desired synthetic diffractions.
This paper describes a comparative study of two tactile systems supporting navigation for persons with little or no visual and auditory perception. The efficacy of a tactile head-mounted device (HMD) was compared to that of a wearable device, a tactile belt. A study with twenty participants showed that the participants took significantly less time to complete a course when navigating with the HMD, as compared to the belt.
Machine learning (ML) has become highly relevant in applications across all industries, and specialists in the field are sought urgently. As it is a highly interdisciplinary field, requiring knowledge in computer science, statistics and the relevant application domain, experts are hard to find. Large corporations can sweep the job market by offering high salaries, which makes the situation for small and medium enterprises (SME) even worse, as they usually lack the capacities both for attracting specialists and for qualifying their own personnel. In order to meet the enormous demand for ML specialists, universities now teach ML in specifically designed degree programs as well as within established programs in science and engineering. While the teaching almost always uses practical examples, these are somewhat artificial or outdated, as real data from real companies is usually not available. The approach reported in this contribution aims to tackle the above challenges in an integrated course, combining three independent aspects: first, teaching key ML concepts to graduate students from a variety of existing degree programs; second, qualifying working professionals from SME for ML; and third, applying ML to real-world problems faced by those SME. The course was carried out in two trial periods within a government-funded project at a university of applied sciences in south-west Germany. The region is dominated by SME, many of which are world leaders in their industries. Participants were students from different graduate programs as well as working professionals from several SME based in the region. The first phase of the course (one semester) covers the fundamental concepts of ML, such as exploratory data analysis, regression, classification, clustering, and deep learning. In this phase, student participants and working professionals were taught in separate tracks.
Students attended regular classes and lab sessions (but were also given access to e-learning materials), whereas the professionals learned exclusively in a flipped classroom scenario: they were given access to e-learning units (video lectures and accompanying quizzes) for preparation, while face-to-face sessions were dominated by lab experiments applying the concepts. Prior to the start of the second phase, participating companies were invited to submit real-world problems that they wanted to solve with the help of ML. The second phase consisted of practical ML projects, each tackling one of the problems and worked on by a mixed team of both students and professionals for the period of one semester. The teams were self-organized in the ways they preferred to work (e.g. remote vs. face-to-face collaboration), but also coached by one of the teaching staff. In several plenary meetings, the teams reported on their status as well as challenges and solutions. In both periods, the course was monitored and extensive surveys were carried out. We report on the findings as well as the lessons learned. For instance, while the program was very well-received, professional participants wished for more detailed coverage of theoretical concepts. A challenge faced by several teams during the second phase was a dropout of student members due to upcoming exams in other subjects.
Short-term load forecasting (STLF) has played a key role in the electricity sector for several decades, due to the need to align energy generation with demand and the financial risk connected with forecasting errors. Following the top-down approach, forecasts are calculated for aggregated load profiles, i.e. the sum of individual loads from consumers belonging to a balancing group. Due to emerging flexible loads, STLF of individual factories is becoming increasingly relevant. Their load profiles are typically more stochastic than aggregated ones, which imposes new requirements on forecasting methods and tools following a bottom-up approach. The increasing digitalization in industry, with enhanced data availability as well as smart metering, is an enabler for improved load forecasts. There is a need for STLF tools processing live data with a high temporal resolution in the minute range. Furthermore, behind-the-meter (BTM) data from various sources such as submetering and production planning should be integrated into the models. In this case, STLF becomes a big data problem, so machine learning (ML) methods are required. The research project “GaIN” investigates the improvement of the STLF quality of an energy utility using BTM data and innovative ML models. As a review, this paper describes the project scope, proposes a detailed definition for a benchmark, and evaluates the readiness of existing STLF methods to fulfil the described requirements.
The review highlights that recent STLF investigations focus on ML methods; hybrid models in particular are gaining importance. ML can outperform classical methods in terms of automation degree and forecasting accuracy. Nevertheless, the potential for improving forecasting accuracy with ML models depends on the underlying data and the types of input variables. The methods described in the analyzed publications only partially fulfil the tool requirements for STLF at company level. There is still a need to develop suitable ML methods that integrate the expanded data base in order to improve load forecasts at company level.
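As a point of reference for such benchmarks, a naive persistence baseline with the usual MAPE accuracy metric can be sketched as follows; the synthetic 15-minute load profile is an illustrative assumption, not project data:

```python
import numpy as np

def mape(actual, forecast):
    """Mean absolute percentage error, a common STLF accuracy metric."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return float(np.mean(np.abs((actual - forecast) / actual)) * 100)

def persistence_forecast(load, horizon):
    """Naive seasonal baseline: the next day repeats the last observed day."""
    return load[-horizon:]

# Two days of a synthetic 15-minute load profile (96 steps per day).
t = np.arange(96)
day1 = 100 + 30 * np.sin(2 * np.pi * t / 96)   # observed day, in kW
day2 = 102 + 30 * np.sin(2 * np.pi * t / 96)   # next day, slightly higher base load

forecast = persistence_forecast(day1, horizon=96)
error = mape(day2, forecast)                   # roughly 2 % for this toy profile
```

Any ML model proposed for company-level STLF would have to beat such a trivial baseline on the benchmark before its added complexity is justified.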
The interaction between agents in multiagent-based control systems requires peer-to-peer communication between agents, avoiding central control. The sensor nodes represent agents and produce measurement data at every time step. The nodes exchange time series data over the peer-to-peer network in order to calculate an aggregation function for solving a problem cooperatively. We investigate the aggregation process of averaging time series data of nodes in a peer-to-peer network by using the grouping algorithm of Cichon et al. 2018. Nodes communicate whether data is new and map data values according to their sizes into a histogram. This map message consists of the subintervals and of vectors for estimating nodes joining and leaving a subinterval. At each time step, the nodes communicate with each other in synchronous rounds to exchange map messages until the network converges to a common map message. Each node calculates the average value of the time series data produced by all nodes in the network by using the histogram algorithm. The relative error between the output of averaging time series data and the ground-truth average value in the network decreases as the size of the network increases. We perform simulations which show that the approximate histogram method provides a reasonable approximation of time series data.
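The following sketch illustrates only the general principle of decentralized averaging, using a deterministic neighbour-gossip stand-in; it is not the histogram-based algorithm of Cichon et al., and the node readings are invented for illustration:

```python
def gossip_average(values, sweeps=50):
    """Deterministic neighbour gossip: in each sweep, adjacent nodes replace
    both of their values with the pair's mean. The total sum is preserved,
    so all values converge toward the global average without any central
    coordinator."""
    vals = list(values)
    for _ in range(sweeps):
        for i in range(len(vals) - 1):
            m = (vals[i] + vals[i + 1]) / 2
            vals[i] = vals[i + 1] = m
    return vals

readings = [10.0, 0.0, 4.0, 2.0]        # one sensor reading per node
consensus = gossip_average(readings)    # every entry approaches the mean, 4.0
```

The histogram scheme described above replaces the pairwise exchange with compact map messages, but the convergence goal — every node ending up with the network-wide average — is the same.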
In this work, we propose to solve privacy-preserving set relations performed by a third party in an outsourced configuration. We argue that solving the disjointness relation based on Bloom filters is a new contribution, in particular by adding another layer of privacy on the sets' cardinality. We propose to compose the set relations in a slightly different way by applying a keyed hash function. Besides discussing the correctness of the set relations, we analyze how this impacts the privacy of the sets' content as well as providing privacy on the sets' cardinality. We are particularly interested in how overlapping bits in the Bloom filters impact the privacy level of our approach. Finally, we present our results with real-world parameters in two concrete scenarios.
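A minimal sketch of a Bloom filter whose bit positions are derived with a keyed hash, so that only key holders can map elements to positions; HMAC-SHA256 is an assumed instantiation here, and the parameters m and k are illustrative, not those of the paper:

```python
import hmac
import hashlib

class KeyedBloom:
    """Bloom filter whose k positions per element come from HMAC-SHA256,
    keyed with a secret shared by the set owners."""
    def __init__(self, key, m=256, k=4):
        self.key, self.m, self.k = key, m, k
        self.bits = 0                      # m-bit filter as a Python int

    def _positions(self, item):
        for i in range(self.k):
            digest = hmac.new(self.key, f"{i}:{item}".encode(),
                              hashlib.sha256).digest()
            yield int.from_bytes(digest[:4], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p

    def __contains__(self, item):
        return all(self.bits >> p & 1 for p in self._positions(item))

def likely_disjoint(a, b):
    """No shared set bits implies disjoint sets; shared elements always
    produce shared bits (there are no false negatives for overlap)."""
    return a.bits & b.bits == 0

key = b"shared-secret"
fa, fc = KeyedBloom(key), KeyedBloom(key)
fa.add("alice"); fa.add("bob")
fc.add("alice")                            # shares one element with fa
overlap_detected = not likely_disjoint(fa, fc)
```

Because the same key maps "alice" to the same positions in both filters, the bitwise AND is non-zero whenever the sets overlap; a zero AND indicates disjointness up to the filter's false-positive rate.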
The Effect of Gamification on Emotions - The Potential of Facial Recognition in Work Environments
(2015)
Gamification means using video game elements to improve user experience and user engagement in non-game services and applications. This article describes the effects when gamification is used in work contexts. Here we focus on industrial production. We describe how facial recognition can be employed to measure and quantify the effect of gamification on the users’ emotions.
The quantitative results show that gamification significantly reduces both task completion time and error rate. However, the results concerning the effect on emotions are surprising. Without gamification there are not only more unhappy expressions (as expected) but surprisingly also more happy expressions. Both findings are statistically highly significant.
We think that in repetitive production work there are generally more (negative) emotions involved. Without gamification, happy and unhappy expressions balance each other. In contrast, gamification seems to shift the spectrum of moods towards “relaxed”. Especially for work environments, such a calm attitude is a desirable effect on the users. Thus, our findings support the use of gamification.
Video game developers continuously increase the degree of detail and realism in games to create more human-like characters. But increasing human-likeness becomes a problem with regard to the Uncanny Valley phenomenon, which predicts negative feelings of people towards artificial entities. We developed an avatar creation system to examine preferences towards parametrized faces and to explore, with regard to the Uncanny Valley phenomenon, how people design faces that they like or reject. Based on the 3D model of the Caucasian average face, 420 participants generated 1341 faces of positively and negatively associated concepts of both genders. The results show that some characteristics associated with the Uncanny Valley are used to create villains or repulsive faces. Heroic faces get attractive features but are rarely and only slightly stylized. A voluntarily designed face is very similar to the heroine. This indicates a tendency of users to design feminine and attractive but still credible faces.
The precise positioning of mobile systems is a prerequisite for any autonomous behavior, in an industrial environment as well as for field robotics. The paper describes the set up for an experimental platform and its use for the evaluation of simultaneous localization and mapping (SLAM) algorithms. Two approaches are compared. First, a local method based on point cloud matching and integration of inertial measurement units is evaluated. Subsequent matching makes it possible to create a three-dimensional point cloud that can be used as a map in subsequent runs. The second approach is a full SLAM algorithm, based on graph relaxation models, incorporating the full sensor suite of odometry, inertial sensors, and 3D laser scan data.
A novel approach for synchronization and calibration of a camera and an inertial measurement unit (IMU) in the research-oriented visual-inertial mapping and localization framework maplab is presented. Mapping and localization are based on detecting different features in the environment. In addition to the possibility of creating individual maps, the included algorithms allow merging maps to increase mapping accuracy and obtain large-scale maps. Furthermore, the algorithms can be used to optimize the collected data. The preliminary results show that, after appropriate calibration and synchronization, maplab can be used efficiently for mapping, especially in rooms and small building environments.
In this contribution, we propose a system setup for the detection and classification of objects in autonomous driving applications. The recognition algorithm is based upon deep neural networks operating in the 2D image domain. The results are combined with data from a stereo camera system to finally incorporate the 3D object information into our mapping framework. The detection system runs locally on the onboard CPU of the vehicle. Several network architectures are implemented and evaluated with respect to accuracy and run-time demands for the given camera and hardware setup.
A Gamified and Adaptive Learning System for Neurodivergent Workers in Electronic Assembling Tasks
(2020)
Learning and work-oriented assistive systems are often designed to fit the workflow of neurotypical workers. Neurodivergent workers and individuals with learning disabilities often present cognitive and sensorimotor characteristics that are better accommodated with personalized learning and working processes. Therefore, we designed an adaptive learning system that combines an augmented interaction space with user-sensitive virtual assistance to support step-by-step guidance for neurodivergent workers in electronic assembling tasks. Gamified learning elements were also included in the interface to provide self-motivation and praise whenever users progress in their learning and work achievements.
Nowadays, the vast majority of Europeans use smartphones. However, touch displays are still not accessible to everyone. Individuals with deafblindness, for example, often face difficulties in accessing vision-based touchscreens. Moreover, they typically have few financial resources, which increases the need for customizable, low-cost assistive devices. In this work-in-progress, we present four prototypes made from low-cost, everyday materials that make modern pattern lock mechanisms more accessible to individuals with vision impairments or even with deafblindness. Two out of four prototypes turned out to be functional tactile overlays for accessing digital 4-by-4 grids that are regularly used to encode dynamic dot patterns. In future work, we will conduct a user study investigating whether these two prototypes can make dot-based pattern lock mechanisms more accessible for individuals with visual impairments or deafblindness.
Deafblindness, a form of dual sensory impairment, significantly impacts communication, access to information and mobility. Independent navigation and wayfinding are main challenges faced by individuals living with combined hearing and visual impairments. We developed a haptic wearable that provides sensory substitution and navigational cues for users with deafblindness by conveying vibrotactile signals onto the body. Vibrotactile signals on the waist area convey directional and proximity information collected via a fisheye camera attached to the garment, while semantic information is provided with a tapping system on the shoulders. A playful scenario called “Keep Your Distance” was designed to test the navigation system: individuals with deafblindness were “secret agents” that needed to follow a “suspect”, but they should keep an optimal distance of 1.5 meters from the other person to win the game. Preliminary findings suggest that individuals with deafblindness enjoyed the experience and were generally able to follow the directional cues.
Co-Designing Assistive Tools to Support Social Interactions by Individuals Living with Deafblindness
(2020)
Deafblindness is a dual sensory impairment that affects many aspects of life, including mobility, access to information, communication, and social interactions. Furthermore, individuals living with deafblindness are at high risk of social isolation. Therefore, we identified opportunities for applying assistive tools to support social interactions through co-ideation activities with members of the deafblind community. This work presents our co-design approach, lessons learned, and directions for designing meaningful assistive tools for dual sensory loss.
Capturing information from the surroundings and interacting with them is dominated by vision and hearing. Haptics, on the other hand, widens this bandwidth and can also substitute for impaired senses (sense switching). Haptic technologies are often limited to point-wise actuation. Here, we show that actuation in two-dimensional matrices instead creates a richer input. We describe the construction of a full-body garment for haptic communication with a distributed actuating network. The garment is divided into attachable-detachable panels or add-ons, each of which can carry a two-dimensional matrix of actuating haptic elements. Each panel adds to an enhanced sensory capability of the human-garment system, so that together a 720° system is formed. The spatial separation of the panels across different body locations supports the semantic and thematic separation of conversations conveyed by haptics. It also achieves directional faithfulness, i.e. maintaining any directional information about a distal stimulus in the haptic input.
Tactile Navigation with Checkpoints as Progress Indicators?: Only when Walking Longer Straight Paths
(2020)
Persons with both vision and hearing impairments have to rely primarily on tactile feedback, which is frequently used in assistive devices. We explore the use of checkpoints as a way to give them feedback during navigation tasks. In particular, we investigate how checkpoints can impact performance and user experience. We hypothesized that individuals receiving checkpoint feedback would take less time and perceive the navigation experience as superior to those who did not receive such feedback. Our contribution is two-fold: (1) a detailed report on the implementation of a smart wearable with tactile feedback, and (2) a user study analyzing its effects. The results show that, in contrast to our assumptions, individuals took considerably more time to complete routes with checkpoints. They also perceived navigating with checkpoints as inferior to navigating without them. While the quantitative data leave little room for doubt, the qualitative data open new aspects: when walking straight and not being "overwhelmed" by various forms of feedback in succession, several participants actually appreciated the checkpoint feedback.
Wow, You Are Terrible at This!: An Intercultural Study on Virtual Agents Giving Mixed Feedback
(2020)
While the effects of virtual agents in terms of likeability, uncanniness, etc. are well explored, it is unclear how their appearance and the feedback they give affect people's reactions. Is critical feedback from an agent embodied as a mouse or a robot taken less seriously than from a human agent? In an intercultural study with 120 participants from Germany and the US, participants had to find hidden objects in a game and received feedback on their performance from virtual agents with different appearances. As some levels were designed to be unsolvable, critical feedback was unavoidable. We hypothesized that feedback would be taken more seriously the more human the agent looked. Also, we expected the subjects from the US to react more sensitively to criticism. Surprisingly, our results showed that the agents' appearance did not significantly change the participants' perception. Also, while we found highly significant differences in inspirational and motivational effects as well as in perceived task load between the two cultures, the reactions to criticism were contrary to expectations based on established cultural models. This work improves our understanding of how affective virtual agents should be designed, both with respect to culture and to dialogue strategies.
Deafblindness, also known as dual sensory loss, is the combination of sight and hearing impairments of such an extent that it becomes difficult for one sense to compensate for the other. Communication issues are a key concern for the Deafblind community. We present the design and technical implementation of the Tactile Board: a mobile Augmentative and Alternative Communication (AAC) device for individuals with deafblindness. The Tactile Board allows text and speech to be translated into vibrotactile signs that are displayed in real time to the user via a haptic wearable. Our aim is to facilitate communication for the deafblind community, creating opportunities for these individuals to initiate and engage in social interactions with other people without the direct need for an intervener.
Novel manufacturing technologies such as printed electronics may enable future applications for the Internet of Everything, like large-area sensor devices, disposable security, and identification tags. Printed physically unclonable functions (PUFs) are promising candidates to be embedded as hardware security keys into lightweight identification devices. We investigate hybrid PUFs based on a printed PUF core. The statistics of the intra- and inter-device Hamming distance distributions indicate a performance suitable for identification purposes. Our evaluations are based on statistical simulations of the PUF core circuit and the challenge-response pairs generated from it. The analysis shows that hardware-intrinsic security features can be realized with printed lightweight devices.
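The intra- and inter-device Hamming distance statistics mentioned in this abstract can be illustrated with a minimal sketch; the bit strings below are invented stand-ins for PUF responses, not data from the paper:

```python
# Sketch: intra- vs. inter-device Hamming distance statistics for PUF
# responses. The bit strings are illustrative stand-ins, not measured data.
from itertools import combinations

def hamming(a: str, b: str) -> int:
    """Number of differing bit positions between two equal-length bit strings."""
    return sum(x != y for x, y in zip(a, b))

# Repeated readouts of the same device (ideally near-identical)
device_a_readouts = ["1011010011", "1011010111", "1011010011"]
# One readout each from different devices (ideally ~50% apart)
devices = ["1011010011", "0100110101", "1110001010"]

intra = [hamming(a, b) for a, b in combinations(device_a_readouts, 2)]
inter = [hamming(a, b) for a, b in combinations(devices, 2)]

n_bits = len(devices[0])
print("mean intra-HD (fraction):", sum(intra) / len(intra) / n_bits)
print("mean inter-HD (fraction):", sum(inter) / len(inter) / n_bits)
```

A PUF is considered usable for identification when the intra-device fraction stays near 0 (reproducibility) while the inter-device fraction is near 0.5 (uniqueness).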
Neuromorphic computing systems have demonstrated many advantages for popular classification problems with significantly fewer computational resources. In this paper, we present the design, fabrication and training of a programmable neuromorphic circuit based on printed electrolyte-gated field-effect transistors (EGFETs). Based on a printable neuron architecture involving several resistors and one transistor, the proposed circuit can realize multiply-add and activation functions. The functionality of the circuit, i.e. the weights of the neural network, can be set in a post-fabrication step by printing resistors onto the crossbar. Besides the fabrication of a programmable neuron, we also provide a learning algorithm tailored to the requirements of the technology and the proposed programmable neuron design, which is verified through simulations. The proposed neuromorphic circuit operates at 5 V and occupies an area of 385 mm².
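The multiply-add-plus-activation operation of such a resistor-crossbar neuron can be sketched as follows; the resistor values and the tanh nonlinearity are illustrative assumptions, not the fabricated design:

```python
# Sketch of the multiply-add-plus-activation a resistive crossbar neuron
# performs: each input voltage is weighted by a printed resistor's
# conductance (G = 1/R), the currents superpose on the row wire, and a
# nonlinearity follows. Resistances and tanh are illustrative assumptions.
import math

def neuron(voltages, resistances_ohm, bias=0.0):
    # Weighted sum: currents through the resistors add up on the shared wire.
    current = sum(v / r for v, r in zip(voltages, resistances_ohm))
    # tanh as a stand-in for the transistor's activation behaviour
    return math.tanh(current + bias)

out = neuron([1.0, 0.5, -0.2], [10e3, 5e3, 20e3])
print(out)
```

Reprogramming the neuron corresponds to printing different resistor values, i.e. changing `resistances_ohm` rather than any stored digital weights.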
The possibility of digitally linking geographic locations with tasks, challenges or learning materials has inspired a variety of applications beyond mathematics education as well. This paper presents an exemplary selection of such applications and attempts to systematize their technical, organizational and conceptual design elements. The discussion is intended to provide guidance for creating math trails and for the further development of technical solutions for teaching.
The TriRhenaTech alliance presents a collection of accepted papers of the cancelled tri-national 'Upper-Rhine Artificial Intelligence Symposium' planned for 13th May 2020 in Karlsruhe. The TriRhenaTech alliance is a network of universities in the Upper-Rhine Trinational Metropolitan Region comprising the German universities of applied sciences in Furtwangen, Kaiserslautern, Karlsruhe, and Offenburg, the Baden-Wuerttemberg Cooperative State University Loerrach, the French university network Alsace Tech (comprising 14 'grandes écoles' in the fields of engineering, architecture and management) and the University of Applied Sciences and Arts Northwestern Switzerland. The alliance's common goal is to reinforce the transfer of knowledge, research, and technology, as well as the cross-border mobility of students.
Design engineers in mechanical engineering frequently face the problem of combining highly preloaded bolted joints with continuous corrosion protection. Current standards and guidelines do not yet provide sufficient answers to this. As part of a collaborative industrial research project, Offenburg University is investigating the influence of organic coatings on the preload force, particularly at elevated ambient temperatures. This paper presents first results on the influence of the single-layer thickness of the coating system.
OVVL (the Open Weakness and Vulnerability Modeller) is a tool and methodology supporting threat modeling in the early stages of the secure software development lifecycle. We provide an overview of OVVL (https://ovvl.org), its data model and browser-based UI. We also discuss initial experiments on how threats identified in the design phase can be aligned with later activities in the software lifecycle (issue management and security testing).
Threat modelling is an accepted technique to identify general threats as early as possible in the software development lifecycle. Our previous work presented an open-source framework and web-based tool (OVVL) for automating threat analysis on software architectures using STRIDE. However, one open problem is that available threat catalogues are either too general or proprietary to a certain domain (e.g. .Net). Another problem is that a threat analyst should not only be presented (repeatedly) with a list of all possible threats, but should also receive automated support for prioritizing them. This paper presents an approach to dynamically generate individual threat catalogues on the basis of the established CWE and the related CVE databases. Roughly 60% of this threat catalogue generation can be done by identifying and matching certain key values. To map the remaining 40% of our data (~50,000 CVE entries), we use the already mapped 60% of our dataset to train a model for supervised machine-learning-based text classification. The resulting complete dataset allows us to identify possible threats for each individual architectural element and automatically provide an initial prioritization. Our dataset as well as a supporting Jupyter notebook are openly available.
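A toy version of the supervised text-classification step could look as follows; the abstract does not name the classifier, so a multinomial naive Bayes is assumed here, and the CVE-style descriptions and CWE labels are invented:

```python
# Minimal supervised text classifier in the spirit of the CVE->CWE mapping
# step: train on already-mapped CVE descriptions, then predict a CWE for an
# unmapped one. Naive Bayes is an assumption; data and labels are invented.
import math
from collections import Counter, defaultdict

train = [
    ("buffer overflow in packet parser allows code execution", "CWE-119"),
    ("heap overflow when copying oversized buffer", "CWE-119"),
    ("sql injection in login form query", "CWE-89"),
    ("unsanitized input used in sql statement", "CWE-89"),
]

def tokenize(text):
    return text.lower().split()

# Collect word occurrences per class
class_words = defaultdict(list)
for text, label in train:
    class_words[label].extend(tokenize(text))

vocab = {w for words in class_words.values() for w in words}

def predict(text):
    scores = {}
    for label, words in class_words.items():
        counts = Counter(words)
        prior = sum(1 for _, l in train if l == label) / len(train)
        score = math.log(prior)
        for w in tokenize(text):
            # Laplace smoothing over the shared vocabulary
            score += math.log((counts[w] + 1) / (len(words) + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

print(predict("stack buffer overflow in image decoder"))  # CWE-119
```

In the paper's setting the training pairs would come from the ~60% of CVE entries that could be mapped to CWE classes by key-value matching.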
The PHOTOPUR project aims to develop a photocatalytic process, a type of Advanced Oxidation Process (AOP), for the elimination of plant protection products (PPP) from the cleaning water used to wash sprayers. At INES, a PV-based energy supply for the photocatalytic cleaning system was developed within the framework of two bachelor theses and assembled as a demonstration unit. The system was then extended step by step with further process automation features and connected to a remote operating device. The final system is now available as a mobile unit mounted on a lab table. The latest step was the photocatalytic reactor module, which completed the first PHOTOPUR prototype. The system is currently undergoing an intensive testing phase with performance checks at the consortium partners. First results give an overview of its successful operation.
Well-designed and informative product presentations can support consumers in making purchase decisions. There are plenty of facts and details about a product of interest; however, emotions are also an important aspect of the purchase decision. The unique visualization opportunities of virtual reality (VR) can give users of VR applications the feeling of being there (telepresence). The applications can intensely engage them in a flow experience, comprising the four dimensions of enjoyment, curiosity, focused attention and control. In this work, we claim that VR product presentations can create subjective product experiences for consumers and motivate them to reuse this innovative type of product presentation in the future by immersing them in a virtual world and causing them to interact with it. To verify the conceptual model, a study was conducted with 551 participants who explored a VR hotel application. The results indicate that VR product presentations evoke positive emotions among consumers. The virtual experience made potential customers focus their attention on the virtual world and aroused their curiosity to get more information about the product in an enjoyable way. In contrast to the theoretical assumption, control did not influence the users' behavioral intention to reuse VR product presentations. We conclude that VR product presentations create a feeling of telepresence, which leads to a flow experience that contributes to the behavioral intention of users to reuse VR product presentations in the future.
High-Voltage Mixed-Current Transmission (HMÜ): A Complement to Existing Transmission Technologies?
(2019)
In mixed-current transmission, a direct current is directly superimposed on an alternating current, so that AC and DC are carried on the same conductor.
This would allow the existing three-phase transmission lines of the transmission grid to be reused.
By superimposing direct current on existing overhead lines, theoretically up to 50% more active power can be expected for short lines (<150 km), and roughly a doubling of the transmittable active power for long transmission distances (>300 km).
From a theoretical point of view, mixed-current transmission is a geometric addition of all current and voltage components, which increases the conductor-earth voltage without affecting the line-to-line voltage.
Moreover, the transmission of reactive currents becomes unnecessary, since operating the lines of the three-phase AC transmission (HDÜ) grid at their natural load is advisable.
The theoretical considerations were proven mathematically, and the technical implementation was demonstrated and confirmed with a 1:1000 model system.
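The superposition argument can be illustrated numerically: a common DC offset added to all phase conductors raises the conductor-earth voltage but cancels out of the phase-to-phase voltage. The voltage levels below are arbitrary example values, not those of the model system:

```python
# Numeric illustration of the superposition idea behind mixed-current
# transmission: a common DC offset on all phase conductors raises each
# conductor-earth voltage but cancels out of the line-to-line voltage.
# Values are illustrative, not from the 1:1000 model system.
import math

U_DC = 100.0       # superposed DC component per conductor, kV
U_AC_PEAK = 100.0  # AC peak of the conductor-earth voltage, kV

def phase_voltage(t, phase_shift):
    return U_DC + U_AC_PEAK * math.sin(2 * math.pi * 50 * t + phase_shift)

t = 0.003  # some instant, seconds
u_a = phase_voltage(t, 0.0)
u_b = phase_voltage(t, -2 * math.pi / 3)

# Conductor-earth peak grows to U_DC + U_AC_PEAK ...
print("max conductor-earth voltage:", U_DC + U_AC_PEAK)
# ... while the line-to-line voltage u_a - u_b contains no DC term:
print("u_ab:", u_a - u_b)
```

This is why the insulation to earth, not the line-to-line insulation, limits how much DC can be superimposed.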
The need for the logistics sector to timely respond to the increasing requirements of a globalised and digitalised world relies greatly on the competences and skills of its labour force. It becomes therefore essential to reinforce the cooperation between universities and business partners in the logistics and supply chain management fields across the European region and to build a logistics knowledge cluster supported by a communication and collaboration platform to foster continuous learning, skill acquisition and experience sharing anytime, anywhere. In this paper we focus on designing the conceptual and technical framework for a communication and collaboration platform with the aim to establish the communication pipelines between the partner institutions, facilitating user interactions and exchange, leading to the creation of new knowledge and innovation in the logistics field. This framework is based on the requirements of the three main stakeholders: students, lecturers and companies, and consists of four functional areas defined according to the platform operational requirements. A working prototype of the platform was developed using the Moodle learning management system and its core tools to determine its applicability and possible enhancement requirements. In the next stages of the project some additional tools like a knowledge base and the integration of the partners' learning management systems to form the logistics knowledge cluster will be implemented.
One of the main requirements of spatially distributed Internet of Things (IoT) solutions is to have networks with wide coverage that connect many low-power devices. Low-Power Wide-Area Networks (LPWAN) and Cellular IoT (cIoT) networks are promising candidates in this space. LPWAN approaches such as LoRaWAN, SigFox and MIOTY are based on enhanced physical layer (PHY) implementations to achieve long range. Narrowband versions of cellular networks, such as Narrowband IoT (NB-IoT) and Long-Term Evolution for Machines (LTE-M), offer reduced bandwidth and simplified node and network management mechanisms. Since the underlying use cases come with various requirements, it is essential to perform a comparative analysis of the competing technologies. This article provides a systematic performance measurement and comparison of LPWAN and NB-IoT technologies in a unified testbed, and discusses the necessity of future fifth-generation (5G) LPWAN solutions.
Wireless communication technologies play a major role in enabling megatrends like the Internet of Things (IoT) and Industry 4.0. The Narrowband Wireless WAN (NBWWAN) was introduced to meet the long-range and low-power requirements of spatially distributed wireless communication use cases. These networks introduce additional challenges in testing, because the network topology and RF characteristics become particularly complex and thus a multitude of different scenarios must be tested. This paper describes an infrastructure for automated testing of radio communication and for systematic measurements of the network performance of NBWWAN.
Wireless synchronization of industrial controllers is a challenging task in environments where wired solutions are not practical. The best solutions proposed so far to solve this problem require rather expensive and highly specialized FPGA-based devices. With this work we counter the trend by introducing a straightforward approach to synchronize an inexpensive IEEE 802.11 integrated wireless chip (IWC) with external devices. More specifically, we demonstrate how to reprogram the software running in the 802.11 IWC of the Raspberry Pi 3B and transform the receiver input potential of the wireless transceiver into a triggering signal for an external, inexpensive FPGA. Experimental results show a mean-square synchronization error of less than 496 ns, while the absolute synchronization error does not exceed 6 μs. The jitter of the output signal obtained after synchronizing the clock of the external device did not exceed 5.2 μs throughout the whole measurement campaign. Even though we do not set new records in terms of accuracy, we do in terms of complexity, cost, and availability of the required components: all these factors make the proposed technique very promising for the deployment of large-scale, low-cost automation solutions.
Plant oils may be used as a sustainable, nearly CO2-neutral fuel for diesel engines. This work experimentally investigates the particulate and gaseous emissions of diesel engines fuelled with different non-esterified, pure plant oils. The data are collected from three engines: a) a common-rail 1.7-liter passenger car engine from Opel AG, b) a 12.8-liter truck engine from Volvo, and c) a truck engine from MAN AG.
The emissions of the MAN engine have been used to perform Ames tests to analyze possible health impacts of plant oil operation. Finally, all emission results with plant oils have been compared to traditional gas oils.
Non-esterified plant oils are gaining ecological and economic importance, particularly in the EU, where it is intended to increase the share of renewable energies. Plant oils do not require any chemical treatment and thus do not cause secondary pollution. The importance of plant oil will increase in Germany for mobile and stationary applications. The co-generation of heat and power is subsidized by the German "Erneuerbare-Energien-Gesetz" and the "Kraft-Wärme-Kopplungsgesetz" when renewable fuels such as plant oils are used.
Plant oils have a much higher viscosity than conventional gas oil. It is mandatory to decrease the oil's viscosity by heating it prior to injection, to ensure proper injection and to avoid engine damage due to coke formation in the combustion chamber and at the injection nozzle. The German Weihenstephan quality standard (RK-Qualitätsstandard 05/2000) for rapeseed oil should be followed when it is used as diesel fuel. The chemical composition of plant oils differs appreciably from that of diesel fuels derived from mineral oils, which also suggests different emission behavior.
Since direct-current high-energy shock fulguration was initially performed in the mid-1980s, ablation of cardiac arrhythmias has come into widespread use. Today the most frequently used energy source for catheter ablation is radio frequency (RF). It was the German engineer Peter Osypka who made the HAT 100 available as the first simple commercial RF ablator.
Nevertheless, in the first years of ablation, physicians were effectively working in the dark. Since then, with an increasing understanding of arrhythmia mechanisms at both the atrial and ventricular levels, this curative technology has made tremendous progress. Now, thanks to crucial improvements in RF ablation generators and temperature and contact-force sensor catheters, in combination with non-fluoroscopic electroanatomical mapping technologies, computerized temperature- and impedance-controlled radiofrequency catheter ablation can be used to cure all types of arrhythmias, including atrial and ventricular fibrillation. For the latter, cooled ablation with saline-irrigated catheters has developed into a widely used standard method. This procedure, resulting in pulmonary vein isolation, requires transseptal puncture and is technically demanding. Nevertheless, it has been shown to be more effective than antiarrhythmic drug therapy.
While the earliest RF ablations were performed with non-steerable catheters, today steerable sensor catheters are used, with or without external and internal cooling, and with tip lengths of 4 mm or 8 mm. Further innovations, such as the integration of mapping and cardiac imaging, give exact information about the number of pulmonary veins and their branching patterns and help to correlate electrical signals with anatomical structures.
Magnetic navigation has significantly improved the success rates and safety of catheter ablation. Thus, in most cases RF catheter ablation has developed from an alternative to drug therapy into the first therapeutic choice for the treatment of supraventricular arrhythmias, providing low complication rates.
In the future, robotic navigation will further simplify procedures and reduce the radiation exposure of this curative approach.
Introduction: Despite many developments in recent years, radiofrequency ablation of rhythm disorders is a safe but still complex procedure that requires special experience and expertise from physicians and biomedical engineers. Thus, there is a need for special training to become familiar with the different equipment and to explain several effects that can be observed during clinical routine.
Methods: The Offenburg University of Applied Sciences offers a biomedical engineering study path specialized in the fields of cardiology, electrophysiology and cardiac electronic implants. Its Peter Osypka Institute for Pacing and Ablation provides teaching following the slogan "Learning by watching, touching and adjusting" and conducts numerous training courses for students as well as young physicians interested in electrophysiology and radiofrequency ablation.
Results: In-vitro training is provided using the Osypka HAT 200 and HAT300s, the Stockert EPshuttle and SmartAblate systems, the Boston EPT-1000XP and Maestro 3000, and the Radionics RFG-3E cardiac radiofrequency ablation generators. All of them require different handling as well as special accessories such as catheter connection cables or boxes and back plates. The participants are trained in the setup of temperature, power and cut-off impedance depending on the ablation catheter. Furthermore, troubleshooting in hard- and software is part of the program. Performing procedures on pork or animal protein and using physiological saline solution to simulate the blood flow, participants can study the influence of contact force and impedance on lesion geometry and learn to avoid adverse effects like "plops". Numerous catheter types are available: 4 mm tip, 8 mm standard and gold tip, and open and closed irrigated-tip ablation catheters from different companies. The experiments are completed by measuring the lesion size depending on the catheter type and ablation settings used.
Conclusion: In-vitro training in radiofrequency ablation is a challenge for biomedical engineering students and young physicians.
Introduction: Patient selection for cardiac resynchronization therapy (CRT) requires quantification of the left ventricular conduction delay (LVCD). After implantation of biventricular pacing systems, individual AV delay (AVD) programming is essential to ensure hemodynamic response. To exclude adverse effects, the AVD should exceed the individual implant-related interatrial conduction time (IACT). As a result of a pilot study, we proposed the development of a programmer-based transoesophageal left heart electrogram (LHE) recording to simplify both LVCD and IACT measurement. This feature was implemented into the Biotronik ICS3000 programmer together with a simultaneous 3-channel surface ECG.
Methods: A 5F oesophageal electrode was perorally applied in 44 heart failure CRT-D patients (34 m, 10 f, 65±8 yrs., QRS=162±21 ms). In the position of maximum left ventricular deflection, the oesophageal LVCD was measured between the onset of QRS in the surface ECG and the oesophageal left ventricular deflection. Then, in the position of maximum left atrial deflection (LA), the IACT in VDD operation (As-LA) was calculated as the difference between the programmed AV delay and the measured interval from the onset of the left atrial deflection to the ventricular stimulus in the oesophageal electrogram. The IACT in DDD operation (Ap-LA) was measured between the atrial stimulus and LA.
Results: The LVCD of the CRT patients was characterized by a minimum of 47 ms and a mean of 69±23 ms. As-LA and Ap-LA were found to be 41±23 ms and 125±25 ms on average, respectively. In 7 patients (15.9%), IACT measurement in DDD operation uncovered adverse AVDs when left at factory settings; in these cases, Ap-LA exceeded the factory AVD. In 6 patients (13.6%), the IACT in VDD operation was less than or equal to 10 ms, indicating the need for a short AVD.
Conclusion: Response to CRT requires a distinct LVCD and AVD optimization. The ICS3000 oesophageal LHE feature can be utilized to measure the LVCD in order to justify selection for CRT. IACT measurement simplifies AV delay optimization in patients with CRT systems irrespective of their make and model.
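The interval arithmetic described in the Methods can be sketched as follows, with invented example values in milliseconds:

```python
# Sketch of the interval arithmetic from the Methods: the interatrial
# conduction time in VDD mode (As-LA) is the programmed AV delay minus the
# measured interval from onset of the left atrial deflection (LA) to the
# ventricular stimulus. All values are invented examples, in milliseconds.
programmed_avd_ms = 120
la_to_vstim_ms = 80          # measured in the oesophageal electrogram

as_la_ms = programmed_avd_ms - la_to_vstim_ms   # IACT in VDD operation
print("As-LA:", as_la_ms, "ms")

# Programming rule stated in the Introduction: the AV delay should exceed
# the implant-related interatrial conduction time to exclude adverse effects.
assert programmed_avd_ms > as_la_ms
```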
In-vivo and in-vitro comparison of implant-based CRT optimization - What do new algorithms provide?
(2011)
Introduction: In cardiac resynchronization therapy (CRT), individual AV delay (AVD) optimization can effectively increase hemodynamics and reduce the non-responder rate. Accurate, automatic and easily comprehensible algorithms for the follow-up are desirable. QuickOpt is the first attempt at a semi-automatic intracardiac electrogram (IEGM) based AVD algorithm. We aimed to compare its accuracy and usefulness through in-vitro and in-vivo studies.
Methods: Using the programmable ARSI-4 four-chamber heart rhythm and IEGM simulator (HKP, Germany), the QuickOpt feature of an Epic HF system (St. Jude, USA) was tested in-vitro with simulated atrial IEGM amplitudes between 0.3 and 3.5 mV during both manual and automatic atrial sensing between 0.2 and 1.0 mV. Subsequently, QuickOpt was performed in-vivo in 21 heart failure patients with implanted biventricular defibrillators. The results of the algorithm for VDD and DDD stimulation were compared with echo AV delay optimization.
Results: In-vitro simulations demonstrated a QuickOpt measuring accuracy of ±8 ms. Depending on the atrial IEGM amplitude, the algorithm proposed optimal AVDs between 90 and 150 ms for VDD and between 140 and 200 ms for DDD operation, respectively. In-vivo, the QuickOpt difference between individual AVDs in DDD and VDD mode was either 50 ms (20 pts) or 40 ms (1 pt). QuickOpt and echo AVDs differed by 41±25 ms (7–90 ms) in VDD and by 18±24 ms (17–50 ms) in DDD operation. The individual echo AVD difference between both modes was 73±20 ms (30–100 ms).
Conclusion: The study demonstrates the value of in-vitro studies. It predicted QuickOpt deficiencies regarding IEGM-amplitude-dependent AVD proposals constrained to fixed individual differences between DDD and VDD mode. Consequently, in-vivo, the algorithm provided AVDs of predominantly longer duration than echo in both modes. Accepting echo individualization as the gold standard, QuickOpt should not be used alone to optimize the AVD in CRT patients.
Introduction: To simplify AV delay (AVD) optimization in cardiac resynchronization therapy (CRT), we previously reported that the hemodynamically optimal AVD for VDD and DDD mode CRT pacing can be approximated by individually measuring the implant-related interatrial conduction time (IACT) in the oesophageal electrogram (LAE) and adding about 50 ms. The programmer-based St. Jude QuickOpt algorithm utilizes this finding. By automatically measuring the IACT in VDD operation, it predicts the sensed AVD by adding either 30 ms or 60 ms. The paced AVD is strictly 50 ms longer than the sensed AVD. As a consequence of these variations, several studies identified distinct inaccuracies of QuickOpt. We therefore aimed to seek better approaches to automate AVD optimization.
Methods: In a study of 35 heart failure patients (27 m, 8 f, age 67±8 y) with Insync III Marquis CRT-D systems, we recorded telemetric electrograms between the left ventricular electrode and the superior vena cava shock coil (LVtip/SVC = LVCE) simultaneously with the LAE. In the LVCE, we measured the intervals As-Pe in VDD and Ap-Pe in DDD operation between the right atrial sense event (As) or atrial stimulus (Ap), respectively, and the end of the atrial activity (Pe). As-Pe and Ap-Pe were compared with As-LA and Ap-LA in the LAE, respectively.
Results: The end of the left atrial activity in the LVCE could clearly be recognized in 35/35 patients in VDD and 29/35 patients in DDD operation. We found mean intervals As-LA of 40.2±24.5 ms and Ap-LA of 124.3±20.6 ms. As-Pe was 94.8±24.1 ms and Ap-Pe was 181.1±17.8 ms. Comparing the sums As-LA + 50 ms with the duration of As-Pe, and Ap-LA + 50 ms with the duration of Ap-Pe, the differences were only 4.7±9.2 ms and 4.2±8.6 ms, respectively. Thus, the hemodynamically optimal timing of the ventricular stimulus can be triggered by automatically detecting Pe in the LVCE.
Conclusion: Based on the minimal deviations between the LAE and LVCE approaches, we proposed that manufacturers utilize the LVCE in order to automate individual AVD optimization in CRT pacing.
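The closeness of the two measurement approaches can be re-checked from the group means reported in the Results (differences of means need not match the reported mean differences exactly, since the DDD values cover only 29 of 35 patients):

```python
# Quick check of the reported group means: adding ~50 ms to the interatrial
# conduction time measured in the oesophageal electrogram (As-LA, Ap-LA)
# should roughly reproduce the end of atrial activity seen in the
# LVtip/SVC channel (As-Pe, Ap-Pe). Means taken from the Results above.
as_la, ap_la = 40.2, 124.3   # ms, oesophageal electrogram (LAE)
as_pe, ap_pe = 94.8, 181.1   # ms, telemetric LVCE

print("VDD deviation:", as_pe - (as_la + 50))   # ~ 4.6 ms
print("DDD deviation:", ap_pe - (ap_la + 50))   # ~ 6.8 ms
```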
We present a concept for the biological methanation of hydrogen directly in biogas reactors, in which membrane gassing can raise the methane content of the biogas to >96%. Maintaining an optimal pH range and avoiding H2 accumulation are essential for reaching such high methane levels. If the methane formation rate is limited by the actual anaerobic degradation of the biomass, an external supply of CO2 for further methane formation is also conceivable. The process is to be further optimized and tested in practice in the biogas plant of a regional cheese dairy, within a project funded by the Deutsche Bundesstiftung Umwelt. The intended combination of decentralized waste recycling and on-site energy generation at a food-processing company, integrated into an intelligent renewable energy concept, is expected to provide additional value.
This work discusses several use cases of post-mortem mobile device tracking in which privacy is required, e.g. due to client-confidentiality agreements and the sensitivity of data from government agencies as well as mobile telecommunication providers. We argue that our proposed Bloom-filter-based privacy approach is a valuable technical building block for the arising General Data Protection Regulation (GDPR) requirements in this area. In short, we apply a solution based on the Bloom filter data structure that allows a third party to perform privacy-preserving set relations on a mobile telco's access logfile, or on other mobile access logfiles from harvesting parties, without revealing any other mobile users in the proximity of a mobile base station while still allowing perpetrators to be tracked.
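A minimal Bloom-filter sketch of this privacy idea, with illustrative parameters and invented device IDs (not the paper's construction):

```python
# Minimal Bloom-filter sketch of the privacy idea: only the filter bits of
# devices seen at a base station are shared, so a third party can test a
# suspect device ID for membership without learning the other IDs.
# Filter size, hash scheme and device IDs are illustrative assumptions.
import hashlib

M_BITS = 1024   # filter size
K_HASHES = 3    # number of hash functions

def _positions(item: str):
    for i in range(K_HASHES):
        digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
        yield int(digest, 16) % M_BITS

def add(bits: set, item: str):
    bits.update(_positions(item))

def maybe_contains(bits: set, item: str) -> bool:
    # False positives are possible, false negatives are not.
    return all(p in bits for p in _positions(item))

seen_at_cell = set()
for device_id in ["device-0001", "device-0002"]:
    add(seen_at_cell, device_id)

print(maybe_contains(seen_at_cell, "device-0001"))  # True
print(maybe_contains(seen_at_cell, "device-9999"))  # very likely False
```

The possibility of false positives is exactly what gives the non-suspect users in the log plausible deniability.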
Printed electronics (PE) is a fast-growing technology with promising applications in wearables, smart sensors and smart cards, since it provides mechanical flexibility and low-cost, on-demand, customizable fabrication. To secure the operation of these applications, True Random Number Generators (TRNGs) are required to generate unpredictable bits for cryptographic functions and padding. However, since the additive fabrication process of PE circuits results in high intrinsic variation due to the random dispersion of the printed inks on the substrate, constructing a printed TRNG is challenging. In this paper, we exploit the additive, customizable fabrication feature of inkjet printing to design a TRNG based on electrolyte-gated field effect transistors (EGFETs). The proposed memory-based TRNG circuit can operate at low voltages (≤ 1 V) and is hence suitable for low-power applications. We also propose a flow which tunes the printed resistors of the TRNG circuit to mitigate its overall process variation, so that the generated bits are based mostly on the random noise in the circuit, providing truly random behaviour. The results show that the overall process variation of the TRNGs is reduced by a factor of 110, and the simulated TRNGs pass the National Institute of Standards and Technology Statistical Test Suite.
Printed electronics is expected to have a major impact in the fields of smart sensors, the Internet of Things and wearables. Low-power printed technologies, in particular electrolyte-gated field effect transistors (EGFETs) using solution-processed inorganic materials and inkjet printing, are very promising in such application domains. In this paper, we discuss a modeling approach to describe the variations of printed devices. Incorporating these models and design flows into our previously developed printed design system allows for robust circuit design. Additionally, we propose a reliability-aware routing solution for printed electronics technology based on the technology constraints in printing crossovers. The proposed methodology was validated on multiple benchmark circuits and can be easily integrated into the design automation tool set.
Radio frequency identification (RFID) antennas are popular for high frequency (HF) RFID, energy transfer and near field communication (NFC) applications. Particularly for wireless measurement systems, RFID/NFC technology is a good option for implementing a wireless communication interface. In this context, the design of the corresponding reader and transmitter antennas plays a major role in achieving suitable transmission quality. This work proves the feasibility of the rapid prototyping of an RFID/NFC antenna used for wireless communication and energy harvesting at the required frequency of 13.56 MHz. A novel, low-cost direct ink writing (DIW) technology utilizing highly viscous silver nanoparticle ink is used for this process. This paper describes the development and analysis of low-cost printed flexible RFID/NFC antennas on cost-effective substrates for a microelectronic vital-parameter measurement system. Furthermore, we compare the measured technical parameters with those of existing copper-based counterparts on an FR4 substrate.
Smart Home and Smart Building applications are a growing market. An increasing challenge is to design energy-efficient Smart Home applications in order to achieve sustainable, green homes. Using the example of the development of an indoor Smart Gardening system with wireless monitoring and automated watering, this paper discusses in particular the design of energy-autonomous sensors and actuators for home automation. The central component of the presented Smart Gardening system is a 3D-printed smart flower pot for single plants. The smart flower pot integrates a water reservoir for automated plant irrigation and electronics for monitoring important plant parameters and the water level of the reservoir. Energy harvesting with solar cells enables the flower pot to operate energy-autonomously. A low-power wireless interface, also integrated into the flower pot, and an external gateway based on a Raspberry Pi 3 enable wireless networking of multiple such flower pots. The gateway is used for evaluating the plant parameters and as a user interface. Particular attention is given to the architecture of the energy-autonomous wireless flower pot, because fully energy-autonomous sensors and actuators for home automation cannot be implemented without special concepts for the energy supply and the overall electronics.
Process engineering industries now face growing economic pressure and societal demands to improve their production technologies and equipment, making them more efficient and environmentally friendly. However, unexpected additional technical and ecological drawbacks may appear as negative side effects of new environmentally friendly technologies. Thus, in their efforts to intensify upstream and downstream processes, industrial companies require systematic aid to avoid compromising their ecological impact. The paper conceptualises a comprehensive approach to eco-innovation and eco-design in process engineering. The approach combines the advantages of Process Intensification as Knowledge-Based Engineering (KBE), inventive tools of Knowledge-Based Innovation (KBI), and the main principles and best practices of Eco-Design and Sustainable Manufacturing. It includes a correlation matrix for the identification of eco-engineering contradictions, a process mapping technique for problem definition, a database of Process Intensification methods and equipment, and a set of the strongest inventive operators for eco-ideation.
As engineering graduates and specialists frequently lack the advanced skills and knowledge required to run eco-innovation systematically, the paper proposes a new teaching method and appropriate learning materials in the field of eco-innovation and evaluates the learning experience and outcomes. The programme aims at strengthening students' skills and motivation to identify and creatively overcome secondary eco-contradictions in cases where additional environmental problems appear as negative side effects of eco-friendly solutions.
Based on a literature analysis and their own investigations, the authors propose introducing a manageable number of eco-innovation tools into a standard one-semester design course in process engineering, with particular focus on the identification of eco-problems in existing technologies, the selection of appropriate new process intensification technologies (knowledge-based engineering), and systematic ideation and problem solving (knowledge-based innovation and invention).
The proposed educational approach equips students with advanced knowledge, skills and competences in the field of eco-innovation. Analysis of the students' work allows one to recommend simple-to-use tools for fast application in process engineering, such as process mapping, a database of eco-friendly process intensification technologies, and up to 20 of the strongest inventive operators for solving environmental problems. For the majority of students in the survey, even this small workload strengthened their self-confidence and skills in eco-innovation.