The overview of public key infrastructure based security approaches for vehicular communications
(2015)
Modern transport infrastructure is becoming a full member of a globally connected network. Leading vehicle manufacturers have already triggered a development process whose output will open a new horizon of possibilities for consumers and developers by providing a new communication entity, the car, thus enabling Car2X communications. Although some available systems already give vehicles certain possibilities to communicate, most of them are considered insufficiently secured. Over the last 15 years a number of large research projects funded by the European Union and the US government were started and concluded, after which a set of standards was published prescribing a common architecture for Car2X and vehicle on-board communications. This work concentrates on combining inner and outer vehicular communications with the use of a Public Key Infrastructure (PKI).
Extended Performance Measurements of Scalable 6LoWPAN Networks in an Automated Physical Testbed
(2015)
IPv6 over Low power Wireless Personal Area Networks, also known as 6LoWPAN, is more and more becoming a de facto standard for such communications in the Internet of Things, be it in the field of home and building automation, of industrial and process automation, or of smart metering and environmental monitoring. For all of these applications, scalability is a major precondition, as the complexity of the networks continuously increases. To serve this growing number of connected nodes, various 6LoWPAN implementations are available. One of them was developed by the authors' team and was tested on an Automated Physical Testbed for Wireless Systems at the Laboratory Embedded Systems and Communication Electronics of Offenburg University of Applied Sciences, which allows the flexible setup and full control of arbitrary topologies. It also supports time-varying topologies and thus helps to measure the performance of the RPL implementation. The results of the measurements prove an excellent stability and a very good short- and long-term performance, also under dynamic conditions. In all measurements there is an advantage of at least 10% with regard to the average times, like global repair time; the advantage with regard to average values can reach up to 30%. Moreover, it can be shown that the performance predictions from other papers are consistent with the executed real-life implementations.
In many scientific studies lens experiments are part of the curriculum. The conducted experiments are meant to give the students a basic understanding of the laws of optics and their applications. Most of the experiments need special hardware such as an optical bench, light sources, apertures and different lens types. Therefore it is not possible for the students to conduct any of the experiments outside of the university's laboratory. Simple optical software simulators enabling the students to virtually perform lens experiments already exist, but are mostly desktop or web-browser based.
Augmented Reality (AR) is a special case of mediated and mixed reality concepts, where computers are used to add, subtract or modify one's perception of reality. As a result of the success and widespread availability of handheld mobile devices such as tablet computers and smartphones, mobile augmented reality applications are easy to use. Augmented reality can easily be used to visualize a simulated optical bench. The students can interactively modify properties such as lens type, lens curvature, lens diameter, lens refractive index and the positions of the instruments in space. Light rays can be visualized and promote an additional understanding of the laws of optics. An AR application like this is ideally suited to prepare the actual laboratory sessions and/or recap the teaching content.
The authors will present their experience with handheld augmented reality applications and their possibilities for light and optics experiments without the need for specialized optical hardware.
The demand for wireless solutions in industrial applications has been increasing since the early nineties. This trend is not only ongoing, it is further pushed by developments in the area of software stacks, like the latest Bluetooth Low Energy stack, as well as by new chip designs and powerful, highly integrated electronic hardware. The acceptance of wireless technologies as a possible solution for industrial applications has overcome the entry barrier [1]. The first step towards seeing wireless as a standard for many industrial applications is almost accomplished. Nevertheless, there is almost no acceptance of wireless technology for safety applications. One highly challenging and demanding requirement is still unsolved: the aspect of safety and robustness. These topics have been addressed in many cases, but always in a similar manner. WirelessHART, as an example, addresses this topic with redundant, so-called multiple propagation paths and frequency hopping to cope with interference and the loss of network participants. So far the pure peer-to-peer link is rarely investigated, and few safety solutions are available. One product, called LoRa™, can be seen as one possible solution to address this lack of safety within wireless links. This paper focuses on the safety performance evaluation of a modem chip design. The use of diverse and redundant wireless technologies like LoRa can lead to an increased acceptance of wireless in safety applications. Many measurements in real industrial applications have been carried out to be able to benchmark the new chip in terms of the safety aspects. The results of this research can help to raise the level of confidence in wireless. In this paper, the term "safety" is used for data transmission reliability.
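As a back-of-the-envelope illustration of "safety as transmission reliability", the residual (undetected) error rate can be bounded by the measured packet error rate times the CRC miss probability, in the style of black-channel arguments. The numbers and the CRC-32 miss-rate bound below are illustrative assumptions, not measurements from the paper:

```python
def packet_error_rate(lost: int, total: int) -> float:
    """Empirical packet error rate from a field measurement campaign."""
    return lost / total

def residual_error_prob(per: float, p_crc_miss: float = 2.0 ** -32) -> float:
    """Upper bound on the undetected-error probability per packet:
    a packet must be corrupted AND slip past the CRC-32 check."""
    return per * p_crc_miss

per = packet_error_rate(37, 100_000)   # hypothetical field data: 37 of 100k lost
print(per)                             # → 0.00037
print(residual_error_prob(per))        # orders of magnitude below the PER
```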
The automatic classification of the modulation format of a detected signal is the intermediate step between signal detection and demodulation. If neither the transmitted data nor other signal parameters such as the frequency offset, phase offset and timing information are known, then automatic modulation classification (AMC) is a challenging task in radio monitoring systems. Clustering algorithms are a new trend in AMC for digital modulations. A novel algorithm called `highest constellation pattern matching' is introduced to identify quadrature amplitude modulation and phase shift keying signals. The obtained simulation and measurement results outperform the existing algorithms for AMC based on clustering. Finally, it is shown that the proposed algorithm works in a real monitoring environment.
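The paper's `highest constellation pattern matching' algorithm is not reproduced here, but the general clustering idea behind AMC can be sketched: fit k-means for several candidate constellation sizes and pick the order that best explains the received IQ samples. The toy k-means and the ad-hoc size penalty are assumptions of this sketch, not the authors' method:

```python
import numpy as np

def estimate_modulation_order(iq, candidate_orders=(2, 4, 8, 16)):
    """Guess the constellation size: fit k-means for each candidate order
    and pick the one with the lowest distortion plus a size penalty."""
    best_order, best_score = None, np.inf
    for k in candidate_orders:
        # farthest-point initialization keeps the toy k-means deterministic
        centers = [iq[0]]
        for _ in range(k - 1):
            dist = np.min(np.abs(iq[:, None] - np.array(centers)[None, :]), axis=1)
            centers.append(iq[np.argmax(dist)])
        centers = np.array(centers)
        for _ in range(20):                       # Lloyd iterations
            labels = np.abs(iq[:, None] - centers[None, :]).argmin(axis=1)
            centers = np.array([iq[labels == j].mean() if np.any(labels == j)
                                else centers[j] for j in range(k)])
        labels = np.abs(iq[:, None] - centers[None, :]).argmin(axis=1)
        distortion = np.mean(np.abs(iq - centers[labels]) ** 2)
        score = distortion + 0.01 * k             # penalty weight chosen ad hoc
        if score < best_score:
            best_order, best_score = k, score
    return best_order

# synthetic noisy QPSK burst: four points on the unit circle plus noise
rng = np.random.default_rng(1)
symbols = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, 2000)))
iq = symbols + 0.05 * (rng.normal(size=2000) + 1j * rng.normal(size=2000))
print(estimate_modulation_order(iq))
```

For this synthetic QPSK burst the estimator should settle on an order of 4.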
Security in IT systems, particularly in embedded devices like Cyber Physical Systems (CPSs), has become an important matter of concern, as it is the prerequisite for ensuring privacy and safety. Among a multitude of existing security measures, the Transport Layer Security (TLS) protocol family offers mature and standardized means for establishing secure communication channels over insecure transport media. In the context of classical IT infrastructure, its security with regard to protocol and implementation attacks has been subject to extensive research. As TLS protocols find their way into embedded environments, we consider the security and robustness of implementations of these protocols specifically in the light of the peculiarities of embedded systems. We present an approach for systematically checking the security and robustness of such implementations using fuzzing techniques and differential testing. In spite of its origin in testing TLS implementations, we expect our approach to be likewise applicable to implementations of other cryptographic protocols with moderate effort.
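A minimal sketch of the differential-testing idea, using two hypothetical record parsers rather than real TLS stacks: mutated inputs are fed to both implementations, and any input on which their accept/reject behavior diverges is flagged for inspection. The record layout and the deliberate bug in the second parser are invented for illustration:

```python
import random

def parser_a(record: bytes):
    """Toy TLS-record-like parser: type(1) | length(2, big-endian) | payload."""
    if len(record) < 3:
        raise ValueError("truncated header")
    length = int.from_bytes(record[1:3], "big")
    if len(record) - 3 != length:
        raise ValueError("length mismatch")
    return record[0], record[3:]

def parser_b(record: bytes):
    """Second implementation with a deliberate bug: it forgets to
    validate the declared length (stands in for the 'other' stack)."""
    if len(record) < 3:
        raise ValueError("truncated header")
    return record[0], record[3:]

def differential_fuzz(n_cases: int = 1000, seed: int = 7):
    """Mutate a valid record and collect inputs the two parsers disagree on."""
    rng = random.Random(seed)
    base = bytes([22, 0, 4]) + b"\x01\x02\x03\x04"
    disagreements = []
    for _ in range(n_cases):
        data = bytearray(base)
        data[rng.randrange(len(data))] = rng.randrange(256)  # one-byte mutation
        outcomes = []
        for parser in (parser_a, parser_b):
            try:
                outcomes.append("ok") if parser(bytes(data)) else None
            except ValueError:
                outcomes.append("err")
        if outcomes[0] != outcomes[1]:
            disagreements.append(bytes(data))
    return disagreements

diffs = differential_fuzz()
print(len(diffs), "disagreeing inputs found")  # mutations of the length field
```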
Combined heat and power production (CHP) based on solid oxide fuel cells (SOFC) is a very promising technology to achieve high electrical efficiency to cover power demand by decentralized production. This paper presents a dynamic quasi-2D model of an SOFC system which consists of the stack and the balance of plant, and includes thermal coupling between the single components. The model is implemented in Modelica® and validated with experimental data for the stack UI-characteristic and the thermal behavior. The good agreement between experimental and simulation results demonstrates the validity of the model. Different operating conditions and system configurations are tested, increasing the net electrical efficiency to 57% by implementing an anode offgas recycle rate of 65%. A sensitivity analysis of characteristic values of the system, like fuel utilization, oxygen-to-carbon ratio and electrical efficiency, for different natural gas compositions is carried out. The result shows that a control strategy adapted to variable natural gas composition and its energy content should be developed in order to optimize the operation of the system.
Concussions in sports and during recreational activities are a major source of traumatic brain injury in our society. This is mainly relevant in adolescence and young adulthood, where the annual rate of diagnosed concussions is increasing from year to year. Contact sports (e.g., ice hockey, American football, or boxing) are especially exposed to repeated concussions. While most of the athletes recover fully from the trauma, some experience a variety of symptoms including headache, fatigue, dizziness, anxiety, abnormal balance and postural instability, impaired memory, or other cognitive deficits. Moreover, there is growing evidence regarding clinical and neuropathological consequences of repetitive concussions, which are also linked to an increased risk for depression and Alzheimer’s disease or the development of chronic traumatic encephalopathy. With little contribution of conventional structural imaging (computed tomography (CT) or magnetic resonance imaging (MRI)) to the evaluation of concussion, nuclear imaging techniques (i.e., positron emission tomography (PET) and single-photon emission computed tomography (SPECT)) are in a favorable position to provide reliable tools for a better understanding of the pathophysiology and the clinical evaluation of athletes suffering a concussion.
Monitors are at the center of media productions and hold an important function as the main visual interface. Tablets and smartphones are becoming more and more important work tools in the media industry. As an extension to our lecture contents, an intensive discussion of different display technologies and their applications is now taking place. The established LCD (Liquid Crystal Display) technology and the promising OLED (Organic Light Emitting Diode) technology are in the focus.
The classic LCD is currently the most important display technology. The paper will present how the students can develop a sense for display technologies beyond the theoretical scientific basics. The workshop focuses increasingly on the technical aspects of display technology and has the goal of deepening the students' understanding of its functionality by having them build simple liquid crystal displays themselves.
The authors will present their experience in the field of display technologies. A mixture of theoretical and practical lectures aims at a deeper understanding in the field of digital color representation and display technologies. The design and development of a suitable learning environment with the required infrastructure is crucial. The main focus of this paper is on the hands-on optics workshop "Liquid Crystal Display in the do-it-yourself".
Cardiac resynchronization therapy (CRT) is an established therapy for heart failure patients and improves quality of life in patients with sinus rhythm, reduced left ventricular ejection fraction (LVEF), left bundle branch block and wide QRS duration. Since approximately sixty percent of heart failure patients have a normal QRS duration, they do not benefit from or respond to CRT. Cardiac contractility modulation (CCM) releases nonexcitatory impulses during the absolute refractory period in order to enhance the strength of the left ventricular contraction. The aim of the investigation was to evaluate differences in cardiac index between optimized and non-optimized CRT and CCM devices versus standard values. Impedance cardiography, a noninvasive method, was used to measure the cardiac index (CI), a useful parameter which describes the blood volume the heart pumps per minute related to the body surface area. CRT patients show an increase of 39.74 percent and CCM patients an improvement of 21.89 percent in cardiac index with an optimized device.
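The cardiac index mentioned above is cardiac output normalized to body surface area. A minimal sketch using the Du Bois BSA formula; the patient values are made up for illustration and are not from the study:

```python
def body_surface_area(height_cm: float, weight_kg: float) -> float:
    """Du Bois formula for body surface area in m^2."""
    return 0.007184 * height_cm ** 0.725 * weight_kg ** 0.425

def cardiac_index(cardiac_output_l_min: float, bsa_m2: float) -> float:
    """Cardiac index in L/min/m^2: cardiac output per body surface area."""
    return cardiac_output_l_min / bsa_m2

bsa = body_surface_area(175, 75)       # hypothetical 175 cm / 75 kg patient
print(round(bsa, 2))                   # → 1.9
print(cardiac_index(5.0, bsa))         # ≈ 2.6 L/min/m^2 for 5 L/min output
```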
Bluetooth Low Energy extends the Bluetooth standard in version 4.0 for ultra-low-energy applications through the extensive usage of low-power sleeping periods, which is inherently difficult in frequency hopping technologies. This paper gives an introduction into the specifics of the Bluetooth Low Energy protocol, shows a sample implementation where an embedded device is controlled by an Android smartphone, and presents the results of timing and current consumption measurements.
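The current-consumption benefit of long sleeping periods can be illustrated with a simple two-state duty-cycle model; the figures below are illustrative assumptions, not the paper's measurements:

```python
def average_current_ua(conn_interval_ms: float, active_ms: float,
                       i_active_ua: float, i_sleep_ua: float) -> float:
    """Average current over one connection interval, assuming the radio
    is either fully active or fully asleep (two-state model)."""
    sleep_ms = conn_interval_ms - active_ms
    return (active_ms * i_active_ua + sleep_ms * i_sleep_ua) / conn_interval_ms

# e.g. 1 s connection interval, a 2 ms radio event at 8 mA, 1 µA sleep current
print(average_current_ua(1000.0, 2.0, 8000.0, 1.0))   # → 16.998 (µA average)
```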
Transcatheter aortic valve implantation is a therapy for patients with reduced left ventricular ejection fraction and symptomatic aortic stenosis. The aim of the study was to compare the pre- and post-implantation procedures to determine the QRS and QT ventricular conduction times as a potential predictor of permanent pacemaker therapy requirement after transcatheter aortic valve implantation. QRS and QT ventricular conduction times were prolonged after the intervention in heart failure patients who received permanent dual-chamber pacemaker therapy afterwards. QRS and QT ventricular conduction times may be useful parameters to evaluate the risk of post-procedural ventricular conduction block and permanent pacemaker therapy in transcatheter aortic valve implantation.
In online analytical processing (OLAP), filtering elements of a given dimensional attribute according to the value of a measure attribute is an essential operation, for example in top-k evaluation. Such filters can involve extremely large amounts of data to be processed, in particular when the filter condition includes “quantification” such as ANY or ALL, where large slices of an OLAP cube have to be computed and inspected. Due to the sparsity of OLAP cubes, the slices serving as input to the filter are usually sparse as well, presenting a challenge for GPU approaches which need to work with a limited amount of memory for holding intermediate results. Our CUDA solution involves a hashing scheme specifically designed for frequent and parallel updates, including several optimizations exploiting architectural features of Nvidia’s Fermi and Kepler GPUs.
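The quantified-filter semantics can be sketched on a toy cube in plain Python, with a hash map standing in for the GPU hashing scheme; the dimension names and measure values are invented for illustration:

```python
from collections import defaultdict

# toy cube: (product, region) -> revenue; we filter products whose revenue
# measure satisfies a quantified condition over the region dimension
cube = {
    ("p1", "EU"): 120, ("p1", "US"): 95,
    ("p2", "EU"): 40,  ("p2", "US"): 310,
    ("p3", "EU"): 75,  ("p3", "US"): 80,
}

def quantified_filter(cube, predicate, quantifier="ANY"):
    """Keep dimension members whose measure satisfies the predicate
    for ANY (some) or ALL of the other dimension's values."""
    slices = defaultdict(list)           # hash-based grouping of the slice
    for (product, _region), revenue in cube.items():
        slices[product].append(revenue)
    test = any if quantifier == "ANY" else all
    return sorted(p for p, vals in slices.items()
                  if test(predicate(v) for v in vals))

print(quantified_filter(cube, lambda v: v > 100, "ANY"))   # → ['p1', 'p2']
print(quantified_filter(cube, lambda v: v > 70, "ALL"))    # → ['p1', 'p3']
```

A top-k evaluation would then sort the surviving members by an aggregate of their slice before truncating.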
In this work we describe the implementation details of a protocol suite for secure and reliable over-the-air reprogramming of wireless restricted devices. Although forward error correction codes aiming at a robust transmission over a noisy wireless medium have recently been discussed and evaluated extensively, we believe that the clear value of the contribution at hand is to share our experience when it comes to a meaningful combination and implementation of various multihop (broadcast) transmission protocols and custom-fit security building blocks: for a robust and reliable data transmission we make use of fountain codes, a.k.a. rateless erasure codes, and show how to combine such schemes with an underlying medium access control protocol, namely a distributed low duty cycle medium access control (DLDC-MAC). To handle the well-known packet pollution problem of forward-error-correction approaches, where an attacker bogusly modifies or infiltrates some minor number of encoded packets and thus pollutes the whole data stream at the receiver side, we apply homomorphic message authentication codes (HomMAC). We discuss implementation details and the pros and cons of the two currently available HomMAC candidates for our setting. Both require a symmetric block cipher as the core cryptographic primitive, for which, as we will argue later, we have opted for the (exchangeable) PRESENT, PRIDE and PRINCE ciphers in our implementation.
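A much-simplified sketch of the rateless-erasure idea (without the DLDC-MAC layer or HomMAC protection): each encoded packet carries the XOR of a random subset of source blocks, and a peeling decoder recovers the firmware image from any sufficiently large packet set. The degree distribution and packet count are ad-hoc choices for this illustration:

```python
import random

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode(blocks, n_packets, seed=42):
    """LT-style rateless encoding: each packet is the XOR of a random
    subset of equally sized source blocks (a real system would transmit
    only a PRNG seed instead of the subset itself)."""
    rng = random.Random(seed)
    packets = []
    for _ in range(n_packets):
        degree = rng.choice([1, 1, 2, 2, 2, 3, 4])   # ad-hoc degree distribution
        subset = tuple(sorted(rng.sample(range(len(blocks)), degree)))
        payload = blocks[subset[0]]
        for i in subset[1:]:
            payload = xor(payload, blocks[i])
        packets.append((subset, payload))
    return packets

def decode(packets, n_blocks):
    """Peeling decoder: repeatedly resolve packets with exactly one
    still-unknown block until no further progress is possible."""
    known = {}
    work = [(set(s), p) for s, p in packets]
    progress = True
    while progress and len(known) < n_blocks:
        progress = False
        for subset, payload in work:
            unknown = subset - known.keys()
            if len(unknown) == 1:
                for j in subset & known.keys():
                    payload = xor(payload, known[j])   # peel off known blocks
                known[unknown.pop()] = payload
                progress = True
    return known

blocks = [b"firmware", b"update__", b"chunk_00", b"chunk_01"]
recovered = decode(encode(blocks, n_packets=40), len(blocks))
print(sorted(recovered))   # indices of the recovered source blocks
```

A pollution attack would flip bytes in a few payloads; this sketch has no integrity check, which is exactly the gap the HomMAC fills.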
An Extraction Method for 17α-Ethinylestradiol from Water using a new kind of monolithic Stir-bar
(2015)
A 2D separation of 16 polyaromatic hydrocarbons (PAHs) according to the Environmental Protection Agency (EPA) standard was introduced. Separation took place on a TLC RP-18 plate (Merck, 1.05559). In the first direction, the plate was developed twice using n-pentane at −20°C as the mobile phase. The mixture acetonitrile–methanol–acetone–water (12:8:3:3, v/v) was used for developing the plate in the second direction. Both developments were carried out over a distance of 43 mm. Furthermore, a specific and very sensitive indication method for benzo[a]pyrene and perylene was presented. The method can detect these hazardous compounds even in complicated PAH mixtures. These compounds can be quantified by a simple chemiluminescent reaction with a limit of detection (LOD) of 48 pg per band for perylene and 95 pg per band for benzo[a]pyrene. Although these compounds were separated from all other PAHs in the standard, a separation of the two from one another was not possible. The method is suitable for tracing benzo[a]pyrene and/or perylene. The proposed chemiluminescence screening test for PAHs is extremely sensitive but may indicate a false positive result for benzo[a]pyrene.
The main focus of this chapter is the theoretical and instrumental processes that underpin densitometric methods widely used in thin-layer chromatography (TLC). Densitometric methods include UV–vis, luminescence and fluorescence optical measurements as well as infrared and Raman spectroscopic measurements. The chapter is divided into two general parts: a theoretical part and a practical part. The systems for direct radioactivity measurements and the combination of TLC with mass spectrometry are also discussed. All these systems allow measuring an intensity distribution directly on a TLC plate. We call this "in situ detection" because no analyte is removed from the plate.
The increasing number of transistors clocked at high frequencies in modern microprocessors leads to an increasing power consumption, which calls for active dynamic thermal management. In a research project a system environment has been developed which includes thermal modeling of the microprocessor in the board system, a software environment to control the characteristics of the system's timing behavior, and a modified Linux scheduler enhanced with a prediction controller. Measurement results are shown for this development on a Freescale i.MX6Q quad-core microprocessor.
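The kind of thermal behavior such a prediction controller works against can be sketched with a first-order RC model of the die; the parameters below are illustrative, not measured i.MX6Q values:

```python
def simulate_die_temp(power_w, t_amb_c=25.0, r_th_k_per_w=1.5,
                      c_th_j_per_k=20.0, dt_s=0.1):
    """First-order RC thermal model of a processor die: each time step the
    die heats with the dissipated power and cools toward ambient through
    a thermal resistance (explicit Euler integration)."""
    temps, t = [], t_amb_c
    for p in power_w:
        t += dt_s / c_th_j_per_k * (p - (t - t_amb_c) / r_th_k_per_w)
        temps.append(t)
    return temps

# a constant 10 W load approaches the steady state T_amb + P * R_th = 40 °C
trace = simulate_die_temp([10.0] * 6000)    # 600 s simulated
print(round(trace[-1], 1))                  # → 40.0
```

A predictive scheduler would run such a model forward to decide whether a task placement stays under the thermal limit before committing to it.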
The application of leaky feeder (radiating) cables is a common solution for the implementation of reliable radio communication in large industrial buildings, tunnels and mining environments. This paper explores the possibilities of leaky feeders for 1D and 2D localization in wireless systems based on time-of-flight chirp spread spectrum technologies. The main focus of this paper is to present and analyse the results of time-of-flight and received-signal-strength measurements with leaky feeders in indoor and outdoor conditions. The authors carried out experiments to compare ranging accuracy and radio coverage area for a point-like monopole antenna and for a leaky feeder acting as a distributed antenna. In all experiments RealTrac equipment based on the nanoLOC radio standard was used. The estimation of the most probable path of a chirp signal going through a leaky feeder was calculated using a ray tracing approach. Typical non-line-of-sight error profiles are presented. The results show the possibility of using radiating cables in real-time location technologies based on the time-of-flight method.
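The underlying two-way-ranging arithmetic can be sketched as follows; the timing figures and the cable velocity factor are invented for illustration, and the point is that a leaky feeder's slower-than-air propagation biases range estimates if it is ignored:

```python
C = 299_792_458.0          # speed of light in vacuum, m/s

def tof_distance_m(round_trip_s: float, reply_delay_s: float,
                   velocity_factor: float = 1.0) -> float:
    """Two-way-ranging distance: half the corrected round-trip time times
    the propagation speed. In a cable the velocity factor is < 1."""
    one_way_s = (round_trip_s - reply_delay_s) / 2
    return one_way_s * C * velocity_factor

# free-space example: 200 ns round trip, 100 ns transponder reply delay
print(tof_distance_m(200e-9, 100e-9))          # ≈ 14.99 m
# the same timing through a cable with velocity factor 0.88
print(tof_distance_m(200e-9, 100e-9, 0.88))    # shorter true distance
```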
Video game developers continuously increase the degree of detail and realism in games to create more human-like characters. But increasing the human-likeness becomes a problem with regard to the Uncanny Valley phenomenon, which predicts negative feelings of people towards artificial entities. We developed an avatar creation system to examine preferences towards parametrized faces and to explore, with regard to the Uncanny Valley phenomenon, how people design faces that they like or reject. Based on the 3D model of the Caucasian average face, 420 participants generated 1341 faces of positively and negatively associated concepts of both genders. The results show that some characteristics associated with the Uncanny Valley are used to create villains or repulsive faces. Heroic faces get attractive features but are rarely and only slightly stylized. A voluntarily designed face is very similar to the heroine. This indicates that there is a tendency of users to design feminine and attractive but still credible faces.
Demand Side Management for Thermally Activated Building Systems based on Multiple Linear Regression
(2015)
Photovoltaics Energy Prediction Under Complex Conditions for a Predictive Energy Management System
(2015)
The following contribution deals with the experimental investigation and theoretical evaluation of fatigue crack growth under isothermal and non-isothermal conditions in the nickel alloy 617. The microstructure and mechanical properties of alloy 617 are influenced significantly by the thermal heat treatment and the subsequent thermal exposure in service. Hence, a solution-annealed and a long-time service-exposed material condition are studied. The crack growth measurement is carried out using an alternating current potential drop system, which is integrated into a thermomechanical fatigue (TMF) test facility. The measured fatigue crack growth rates are a function of material condition, temperature and load waveform. Furthermore, the results of the non-isothermal tests depend on the phase between thermal and mechanical load (in-phase, out-of-phase). A fracture-mechanics-based, time-dependent model is upgraded with an approach to consider environmental effects, where almost all model parameters represent directly measurable values. A consistent description of all results and a good correlation with the experimental data can be achieved.
Autonomous humanoid robots require lightweight, high-torque and high-speed actuators to be able to walk and run. For conventional gears with a fixed gear ratio, the product of torque and velocity is constant. On the other hand, desired motions require maximum torque and speed. In this paper it is shown that with a variable gear ratio it is possible to vary the relation between torque and velocity. This is achieved by introducing systems of rods and levers to move the joints of our humanoid robot "Sweaty II". With a variable gear ratio, low speed and high torque can be achieved for those joint angles which require this motion mode, whereas high speed and low torque can be realized for those joint angles where it is favorable for the desired motion.
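The torque-speed trade-off can be sketched with a toy crank-and-lever geometry (not Sweaty II's actual linkage): the instantaneous transmission ratio varies with joint angle, while the product of output torque and speed stays equal to the motor's:

```python
import math

def lever_ratio(theta_rad, crank_m=0.03, lever_m=0.10):
    """Instantaneous transmission ratio of a toy crank-and-lever linkage;
    the crank's effective moment arm shrinks with sin(theta), so the
    ratio grows near the ends of the travel. Geometry is illustrative."""
    return (lever_m / crank_m) / max(math.sin(theta_rad), 1e-6)

def joint_torque_speed(motor_torque_nm, motor_speed_rad_s, theta_rad):
    """Output torque rises and output speed falls by the same ratio."""
    r = lever_ratio(theta_rad)
    return motor_torque_nm * r, motor_speed_rad_s / r

for deg in (10, 45, 90):
    tau, omega = joint_torque_speed(0.5, 50.0, math.radians(deg))
    print(f"{deg:3d} deg: {tau:6.2f} N.m at {omega:6.2f} rad/s")
```

At every angle the output power equals the input power (0.5 N·m × 50 rad/s = 25 W here); only the torque/speed split changes, which is the point of the variable ratio.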
6LoWPAN (IPv6 over Low Power Wireless Personal Area Networks) is gaining more and more traction for the seamless connectivity of embedded devices in the Internet of Things. It can be observed that most of the available solutions follow an open source approach, which significantly accelerates the development of technologies and of markets. Although the currently available implementations are in pretty good shape, all of them come with some significant drawbacks. It was therefore decided to start the development of an own implementation, which takes the advantages of the existing solutions but tries to avoid the drawbacks. This paper discusses the reasoning behind this decision and describes the implementation and its characteristics, as well as the testing results. The given implementation is available as an open-source project under [15].
The transformation of the building energy sector to a highly efficient, clean, decentralised and intelligent system requires innovative technologies like microscale trigeneration and thermally activated building structures (TABS) to pave the way ahead. The combination of such technologies, however, presents a scientific and engineering challenge: a scientific challenge in terms of developing optimal thermo-electric load management strategies based on overall energy system analysis, and an engineering challenge in terms of implementing these strategies through process planning and control. Initial literature research has pointed out the need for a multiperspective analysis in a real-life laboratory environment. To this effect an investigation is proposed wherein an analytical model of a microscale trigeneration system integrated with TABS will be developed and compared with a real-life test rig corresponding to building management systems. Data from the experimental analysis will be used to develop control algorithms using model predictive control for achieving the thermal comfort of occupants in the most energy-efficient and grid-reactive manner. The scope of this work encompasses adsorption-cooling-based microscale trigeneration systems and their deployment in residential and light commercial buildings.
The energy system of the future will transform from the current centralised, fossil-based system to a decentralised, clean, highly efficient, and intelligent network. This transformation will require innovative technologies and ideas like trigeneration and the crowd energy concept to pave the way ahead. Even though trigeneration systems are extremely energy efficient and can play a vital role in the energy system, their deployment is hindered by various barriers. These barriers are theoretically analysed in a multiperspective approach, and the role decentralised trigeneration systems can play in the crowd energy concept is highlighted. It is derived from an initial literature research that a multiperspective (technological, energy-economic, and user) analysis is necessary for realising the potential of trigeneration systems in a decentralised grid. To experimentally quantify these issues we are setting up a microscale trigeneration lab at our institute, and the motivation for this lab is also briefly introduced.
Distributed Flow Control and Intelligent Data Transfer in High Performance Computing Networks
(2015)
This document contains my master's thesis report, including the problem definition, requirements, problem analysis, a review of the current state of the art, the proposed solution, the designed prototype, discussions and the conclusion.
In this work we propose a collaborative solution to run different types of operations in a broker-less network without relying on a central orchestrator.
Based on our requirements, we define and analyze a number of scenarios. Then we design a solution to address those scenarios using a distributed workflow management approach. We explain how we break a complicated operation into simpler parts and how we manage it in a non-blocking and distributed way. Then we show how we asynchronously launch them on the network and how we collect and aggregate results. Later on we introduce our prototype which demonstrates the proposed design.
Since 2003, most European countries have established heat health warning systems to alert the population to heat load. Heat health warning systems are based on predicted meteorological conditions outdoors. But the majority of the European population spends a substantial amount of time indoors, and indoor thermal conditions can differ substantially from outdoor conditions. The German Meteorological Service (Deutscher Wetterdienst, DWD) extended the existing heat health warning system (HHWS) with a thermal building simulation model to consider heat load indoors. In this study, the thermal building simulation model is used to simulate a standardized building representing a modern nursing home, because elderly and sick people are most sensitive to heat stress. Different types of natural ventilation were simulated. Based on current and future test reference years, changes in the future heat load indoors were analyzed. Results show differences between the various ventilation options and the possibility to minimize thermal heat stress during summer by using an appropriate ventilation method. Nighttime ventilation is most important for indoor thermal comfort. A fully opened window at nighttime and two hours of ventilation in the morning and evening are more effective at avoiding heat stress than a tilted window at nighttime and one hour of ventilation in the morning and evening. Especially the ventilation in the morning seems to be effective in keeping the heat load indoors low. Comparing the results for the current and the future test reference years, an increase of heat stress can be recognized for all ventilation types.
Chronic insomnia is defined by difficulties in falling asleep, maintaining sleep, and early morning awakening, and is coupled with daytime consequences such as fatigue, attention deficits, and mood instability. These symptoms persist over a period of at least 3 months (Diagnostic and Statistical Manual 5 criteria). Chronic insomnia can be a symptom of many medical, neurological, and mental disorders. As a disorder, it incurs substantial health-care and occupational costs, and poses substantial risks for the development of cardiovascular and mental disorders, including cognitive deficits. Family and twin studies confirm that chronic insomnia can have a genetic component (heritability coefficients between 42% and 57%), whereas the investigation of autonomous and central nervous system parameters has identified hyperarousal as a final common pathway of the pathophysiology, implicating an imbalance of sleep–wake regulation consisting of either overactivity of the arousal systems, hypoactivity of the sleep-inducing systems, or both. Insomnia treatments include benzodiazepines, benzodiazepine-receptor agonists, and cognitive behavioural therapy. Treatments currently under investigation include transcranial magnetic or electrical brain stimulation, and novel methods to deliver psychological interventions.
Adsorption of N2 and CO2 on Activated Carbon, AlO(OH) Nanoparticles, and AlO(OH) Hollow Spheres
(2015)
The adsorption behaviors of nitrogen and CO2 on Norit R1 Extra activated carbon and on AlO(OH) nanoparticles and hollow spheres were measured under different temperature and pressure conditions using a magnetic suspension balance. Independent of the substrate investigated, all isotherms increase at lower pressure, reach a maximum, and then decrease with increasing pressure. In addition, selected experimental data were correlated with different model approaches and compared with reliable literature data. In the case of CO2 on AlO(OH), capillary condensation was observed at two defined temperatures. The results suggest that the conversion of the liquid into a supercritical adsorbate phase does not take place suddenly.
DEM–FEA estimation of pores arrangement effect on the compressive Young’s modulus for Mg foams
(2015)
This work reports a study of the effect of pore arrangement on the compressive behavior of Mg foams with regular pore size and porosities ranging from 25% to 45%. Pore arrangements were modeled using Finite Element Analysis (FEA), with random and ordered models, and compared to the estimations obtained in a previous work. The coordinates of the random pore arrangements were first generated using the Discrete Element Method (DEM) and then used in a second stage for modeling the pores by FEA. Estimations were also compared to experimental results for Mg foams produced by means of powder metallurgy. Results show important drops in the Young's moduli as the porosity increases, for both the experimental results and the FEA estimations. Estimations obtained using ordered pore arrangements presented significant differences when compared to the estimations acquired from models with random arrangements. The randomly arranged models represent the real topologies of the experimental metallic foams more accurately. The Young's moduli estimated using these models were in excellent agreement with the experiments, whilst the estimations obtained using ordered models presented significantly higher relative errors. This proves the importance of using more realistic FEA models to improve the predictive ability of this method for the study of the mechanical properties of metallic foams.
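The trend the study reports, a sharp stiffness drop with porosity, can be cross-checked against a standard analytic scaling relation. The Gibson-Ashby power law below is not the paper's DEM-FEA model; it is a textbook estimate, shown here with an assumed solid-Mg modulus and default coefficients:

```python
# Analytic cross-check of the porosity-stiffness trend: Gibson-Ashby
# scaling E/E_s = C * (relative density)^n for closed-cell foams.
# Coefficients C and n are generic defaults, not fitted values.

def gibson_ashby_modulus(e_solid_gpa, porosity, c=1.0, n=2.0):
    relative_density = 1.0 - porosity
    return e_solid_gpa * c * relative_density ** n

E_MG = 45.0   # Young's modulus of solid magnesium, GPa (assumed)
moduli = [gibson_ashby_modulus(E_MG, p) for p in (0.25, 0.35, 0.45)]
for p, e in zip((0.25, 0.35, 0.45), moduli):
    print(f"porosity {p:.0%}: E ~ {e:.1f} GPa")
```

The quadratic dependence on relative density already reproduces the "important drops" in modulus over the 25-45% porosity range studied.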
We report the use of the Raman spectral information of the chemical compound toluene (C7H8) as a reference in the analysis of laboratory-prepared and commercially acquired gasoline-ethanol blends. The ratio behavior of the characteristic Raman lines of toluene and gasoline has enabled the approximate quantification of this additive in commercial gasoline-ethanol mixtures. This ratio behavior has been obtained from the Raman spectra of gasoline-ethanol blends with different proportions of toluene.
All these Raman spectra have been collected using a self-designed, frequency-precise, and low-cost Fourier-transform Raman spectrometer (FT-Raman spectrometer) prototype. This FT-Raman prototype has helped to accurately confirm the frequency positions of the main characteristic Raman lines of toluene present in the different gasoline-ethanol samples analyzed, at smaller proportions than those commonly found in commercial gasoline-ethanol blends. The frequency-accuracy validation has been performed by analyzing the same set of toluene samples with two additional state-of-the-art commercial FT-Raman devices. Additionally, the spectral information has been compared with the standard Raman spectrum of toluene, resulting in high correlation coefficients.
The Raman spectra of the chemical compounds toluene and cyclohexane obtained using a Fourier-transform (FT) Raman spectrometer prototype have been compared with the Raman spectra of the same materials collected with two different commercial FT-Raman devices. The FT-Raman spectrometer consists of a Michelson interferometer, a self-designed photon counter, and a reference photo-detector. Contrary to the commercial devices, which commonly use the zero-crossing method, the evaluation of the spectral information is carried out by re-sampling the Raman scattering and by accurately extracting the optical path information of the Michelson interferometer. The FT-Raman arrangement has been built using conventional parts without disregarding the spectral frequency precision that such FT-Raman instruments usually deliver. No additional complex hardware components or costly software modules have been included in this FT-Raman device. The main Raman lines from the spectra obtained with the three FT-Raman devices have been compared with the Raman lines from the standard Raman spectra of these two materials. The values obtained using the FT-Raman spectrometer prototype have shown a frequency accuracy comparable to that of the commercial devices without the need for a large investment. Although the proposed FT-Raman prototype cannot be directly compared to the latest generation of commercial FT-Raman spectrometers, such a device could give an opportunity to users who require high frequency precision in their spectral analysis but have rather scarce resources.
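The evaluation idea described above, resampling the interferogram onto a uniform optical-path-difference grid and Fourier-transforming it to locate a line's wavenumber, can be sketched with a single synthetic Raman line. All parameters below are illustrative, not the prototype's actual specifications:

```python
import math, cmath

# Sketch: locate a Raman line's wavenumber from an interferogram sampled
# on a uniform optical-path-difference (OPD) grid, via a discrete Fourier
# transform, instead of relying on zero crossings. Synthetic single line.

true_wavenumber = 1003.0      # cm^-1, near toluene's ring-breathing line
opd_max_cm = 0.25             # maximum optical path difference (assumed)
n_samples = 1024              # points on the uniform OPD grid

opd = [opd_max_cm * k / n_samples for k in range(n_samples)]
interferogram = [math.cos(2 * math.pi * true_wavenumber * x) for x in opd]

def spectrum_magnitude(nu):
    """|DFT| of the interferogram at trial wavenumber nu (cm^-1)."""
    return abs(sum(s * cmath.exp(-2j * math.pi * nu * x)
                   for s, x in zip(interferogram, opd)))

candidates = range(900, 1101)             # 1 cm^-1 search grid
recovered = max(candidates, key=spectrum_magnitude)
print("recovered Raman line:", recovered, "cm^-1")
```

The spectral resolution is set by the maximum OPD (here 1/0.25 cm = 4 cm^-1), which is why frequency accuracy hinges on knowing the optical path precisely, the point the prototype addresses.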
Digital networked communications are the key to all Internet-of-Things applications, especially to smart metering systems and the smart grid. In order to ensure a safe operation of systems and the privacy of users, the transport layer security (TLS) protocol, a mature and well standardized solution for secure communications, may be used. We implemented the TLS protocol in its latest version in a way suitable for embedded and resource-constrained systems. This paper outlines the challenges and opportunities of deploying TLS in smart metering and smart grid applications and presents performance results of our TLS implementation. Our analysis shows that given an appropriate implementation and configuration, deploying TLS in constrained smart metering systems is possible with acceptable overhead.
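The "appropriate implementation and configuration" mentioned above typically means pinning the protocol version and restricting the cipher suites to one modern AEAD suite, which shrinks code size and handshake cost on a constrained device. A minimal sketch using Python's stdlib `ssl` module purely for illustration; an embedded TLS stack would expose analogous configuration knobs:

```python
import ssl

# Sketch: a TLS client context restricted to TLS >= 1.2 and a single
# ECDHE AEAD cipher suite, the kind of narrow configuration that keeps
# the footprint acceptable on constrained smart metering hardware.

context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
context.minimum_version = ssl.TLSVersion.TLSv1_2
# One forward-secret AEAD suite (OpenSSL cipher name):
context.set_ciphers("ECDHE-RSA-AES128-GCM-SHA256")

names = [c["name"] for c in context.get_ciphers()]
print(names)
```

Note that `get_ciphers()` may additionally list TLS 1.3 suites, which are not affected by `set_ciphers()`; the restriction applies to the TLS 1.2 handshake.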
We propose secure multi-party computation techniques for the distributed computation of the average using a privacy-preserving extension of gossip algorithms. While there has recently been substantial research on gossip algorithms (GA) for data aggregation itself, to the best of our knowledge this line of research does not take into consideration the privacy of the entities involved. More concretely, our objective is not to reveal a node's private input value to any other node in the network, while still computing the average in a fully decentralized fashion. Not revealing, in our setting, means that an attacker gains only a minor advantage when guessing a node's private input value. We precisely quantify an attacker's advantage when guessing, as a measure of the level of data-privacy leakage of a node's contribution. Our results show that by perturbing the input values of each participating node with pseudo-random noise with appropriate statistical properties, (i) only a minor and configurable leakage of private information is revealed, while at the same time (ii) a good average approximation is provided at each node. Our approach can be applied to a decentralized prosumer market, in which participants act as energy consumers or producers or both, referred to as prosumers.
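The core mechanism can be sketched in a few lines: each node perturbs its private input with zero-mean noise before gossiping, so neighbors only ever see noisy values, while the decentralized average stays close to the true one because the noise averages out. This is an illustrative simulation, not the paper's exact scheme or noise distribution:

```python
import random

# Sketch: privacy-preserving gossip averaging. Nodes perturb inputs with
# zero-mean Gaussian noise (privacy), then run randomized pairwise
# gossip, which preserves the sum and converges to the common average.

random.seed(42)
n_nodes = 50
private = [random.uniform(0, 100) for _ in range(n_nodes)]
true_avg = sum(private) / n_nodes

# (i) perturb inputs: neighbors never see the raw private values
sigma = 5.0
state = [v + random.gauss(0, sigma) for v in private]

# (ii) randomized pairwise gossip: both nodes move to their mutual mean
for _ in range(5000):
    i, j = random.sample(range(n_nodes), 2)
    m = (state[i] + state[j]) / 2
    state[i] = state[j] = m

print(f"true avg {true_avg:.2f}, gossip avg {state[0]:.2f}")
```

The residual error is the mean of the injected noise, which shrinks as 1/sqrt(n); the noise variance sigma is the configurable privacy/accuracy trade-off.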
Environmental monitoring is an attractive application field for Wireless Sensor Networks (WSN). Water-level monitoring helps to increase the efficiency of water distribution and management. In Pakistan, the world's largest irrigation system covers 90,000 km of channels, which need to be monitored and managed on different levels. Especially the sensor systems for the small distribution channels need to be low-energy and low-cost. This contribution presents a technical solution for a communication system developed in a research project co-funded by the German Academic Exchange Service (DAAD). The communication module is based on IEEE 802.15.4 transceivers which are enhanced with Wake-On-Radio (WOR) to combine low-energy and real-time behavior. On higher layers, IPv6 (6LoWPAN) and corresponding routing protocols like the Routing Protocol for Low power and Lossy Networks (RPL) can extend the range of the network. The data are stored in a database and can be viewed online via a web interface; automatic data analysis can also be performed.
Wireless sensor networks have recently found their way into a wide range of applications, among which environmental monitoring has attracted increasing interest from researchers. Such monitoring applications, in general, do not impose strict latency requirements, which allows latency to be traded for energy efficiency. A further challenge of this application is the network topology, as the system should be deployable at very large scale. Nevertheless, low power consumption of the devices making up the network must be in focus in order to maximize the lifetime of the whole system. These devices are usually battery-powered and spend most of their energy budget on the radio transceiver module. So-called Wake-On-Radio (WoR) technology can be used to achieve a reasonable balance among power consumption, range, complexity, and response time. In this paper, designs for the integration of WoR into IEEE 802.15.4 are discussed, providing an overview of the trade-offs in energy consumption when deploying WoR schemes in a monitoring system.
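The energy argument behind Wake-On-Radio in both abstracts above reduces to a duty-cycle calculation: the receiver only samples the channel briefly each interval instead of listening continuously. A back-of-the-envelope sketch with typical (assumed) IEEE 802.15.4 transceiver currents:

```python
# Sketch: why Wake-On-Radio (WOR) extends battery life. The average
# current is the duty-cycle-weighted mix of receive and sleep currents.
# All current and battery figures are typical values, assumed here.

def avg_current_ma(i_sleep_ma, i_rx_ma, listen_ms, period_ms):
    duty = listen_ms / period_ms
    return i_rx_ma * duty + i_sleep_ma * (1 - duty)

always_on = avg_current_ma(0.002, 20.0, listen_ms=1000, period_ms=1000)
wor = avg_current_ma(0.002, 20.0, listen_ms=5, period_ms=1000)

battery_mah = 2400  # two AA cells, assumed
print(f"always-on: {battery_mah / always_on / 24:.0f} days")
print(f"WOR:       {battery_mah / wor / 24:.0f} days")
```

The trade-off the paper discusses is visible in the `period_ms` parameter: a longer wake-up interval lowers the average current but increases the worst-case response time.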
Instabilities of the interface between two thin liquid films under DC electroosmotic flow are investigated using linear stability analysis followed by an asymptotic analysis in the long-wave limit. The two-liquid system is bounded by two rigid plates which act as substrates. The Boltzmann charge distribution is considered for the two electrolyte solutions and gives rise to a potential distribution in these liquids. The effect of van der Waals interactions in these thin films is incorporated in the momentum equations through the disjoining pressure. Marginal stability and growth rate curves are plotted in order to identify the thresholds of the control parameters at which instabilities set in. If the upper liquid is a dielectric, the applied electric field can have stabilizing or destabilizing effects depending on the viscosity ratio, due to the competition between viscous and electric forces. For a viscosity ratio equal to unity, the stability of the system becomes independent of electric parameters such as the interface zeta potential and the electric double-layer thickness. As expected, the disjoining pressure has a destabilizing effect, and capillary forces have a stabilizing effect. The overall stability trend depends on the complex interplay of all the above-mentioned parameters. The present study can be used to tune these parameters according to the stability requirements.
Rubber materials are characterized by a variety of inelasticities such as softening behavior, hysteresis loops, and permanent set. In order to calculate the inelastic material behavior, constitutive models that describe rubber as a homogeneous continuum have to make use of damping or friction elements.
On the nanoscale, there is no need to adopt such rheological models. Inelastic material behavior can be explained and simulated by a continuous rearrangement of bonds, in particular, the van der Waals interactions, and by the polymer chains transitioning between cis and trans equilibrium torsion angles. The discrete molecular dynamics simulations presented in this paper are performed in an explicit FEM environment using nonlinear but elastic force field potentials. From a structural mechanics point of view, topological changes of the polymer network can be interpreted as a sequence of local material instability problems due to negative tangential bond stiffnesses.
In order to obtain representative results within reasonable computational time, the model is optimized with respect to the number of atoms and the loading velocity. It is shown that by increasing the model size, the stress–strain curves become independent of both the atoms' initial state and the strain amplitudes.
We provide a privacy-friendly cloud-based smart metering storage architecture which provides few-instance storage of encrypted measurements while at the same time allowing SQL queries on them. Our approach is flexible with respect to two axes: on the one hand, it allows applying filtering rules to encrypted data for various upcoming business cases; on the other hand, it provides means for a storage-efficient handling of encrypted measurements by applying server-side deduplication techniques over all incoming smart meter measurements. Although the work at hand is purely dedicated to a smart metering architecture, we believe our approach has value for a broader class of IoT cloud storage solutions. Moreover, it is an example of Privacy-by-Design supporting the positive-sum paradigm.
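Server-side deduplication over encrypted data is usually achieved with convergent encryption: the key is derived from the plaintext itself, so identical measurements yield identical ciphertexts that the server can deduplicate without decrypting. The abstract does not spell out the paper's concrete scheme; the sketch below is the standard textbook construction, with an illustrative HMAC-based keystream:

```python
import hashlib, hmac

# Sketch: convergent encryption for deduplicable encrypted storage.
# Key = hash(plaintext), so equal plaintexts produce equal ciphertexts.
# The keystream construction here is illustrative, not a vetted cipher.

def convergent_encrypt(measurement: bytes) -> bytes:
    key = hashlib.sha256(measurement).digest()       # content-derived key
    stream = b""
    counter = 0
    while len(stream) < len(measurement):            # expand keystream
        stream += hmac.new(key, counter.to_bytes(8, "big"),
                           hashlib.sha256).digest()
        counter += 1
    return bytes(m ^ s for m, s in zip(measurement, stream))

c1 = convergent_encrypt(b"meter=42;kWh=3.14")
c2 = convergent_encrypt(b"meter=42;kWh=3.14")   # duplicate reading
c3 = convergent_encrypt(b"meter=42;kWh=3.15")
store = {c1, c2, c3}                             # server-side dedup set
print("unique ciphertexts stored:", len(store))
```

The privacy caveat, which motivates more elaborate schemes, is that deterministic encryption leaks equality of measurements to the storage provider.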
In this paper, the correlation of the cyclic J-integral, ΔJ, and the cyclic crack-tip opening displacement, ΔCTOD, is studied in the presence of crack closure to assess the question whether ΔJ describes the crack-tip opening displacement in this case. To this end, a method is developed to evaluate ΔJ numerically within finite-element calculations. The method is validated for an elastic–plastic material that exhibits Masing behavior. Different strain ranges and strain ratios are considered under fully plastic cyclic conditions including crack closure. It is shown that the cyclic J-integral is the parameter to determine the cyclic crack-tip opening displacement even in cases where crack closure is present.
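A common form of the cyclic J-integral evaluated in finite-element post-processing is the contour integral below (a standard textbook formulation; the paper's particular numerical evaluation scheme is not reproduced here):

```latex
\Delta J = \oint_{\Gamma} \left( \Delta W \,\mathrm{d}y
  \;-\; \Delta t_i \,\frac{\partial \Delta u_i}{\partial x}\,\mathrm{d}s \right),
\qquad
\Delta W = \int_{0}^{\Delta\varepsilon_{kl}} \Delta\sigma_{ij}\,
  \mathrm{d}(\Delta\varepsilon_{ij})
```

Here Γ is a contour around the crack tip, Δt_i and Δu_i are the ranges of traction and displacement over a load cycle, and ΔW is the cyclic strain-energy density; the ranges are taken relative to the state at load reversal.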
In this paper, the initial multiaxial yield behavior of three different gray cast iron materials with lamellar graphite inclusions is numerically investigated by means of the finite-element method. To this end, volume elements including the real microstructure of the materials are loaded bi- and triaxially beyond macroscopic yield. The shapes of the obtained yield surfaces are compared to the surfaces of four continuum models which, amongst others, are proposed in the literature to describe the inelastic behavior of gray cast iron with lamellar graphite inclusions. It is found that the presented continuum models and the macroscopic yield surfaces obtained with microstructure-based finite-element models deviate. Furthermore, the initial inelastic flow direction is computed at the onset of macroscopic yielding. The analyses show that the inelastic flow is normal to the yield surface.
This paper focuses on the microstructure-dependent inelastic behavior of lamellar gray cast iron. It comprises the reconstruction of three-dimensional volume elements by use of the serial sectioning method for the materials GJL-150, GJL-250, and GJL-350. The obtained volume elements are prepared for the numerical analyses by means of the finite-element method. In the finite-element analysis, the metallic matrix is modeled with an elastic–plastic deformation law. The graphite inclusions are modeled as nonlinear elastic with a decreasing Young's modulus for increasing tensile loading. Thus, the typical tension–compression asymmetry of this material class can be described. The stress–strain curves obtained with the microstructure-based finite-element models agree well with experimental curves of tension and compression tests. Besides the analysis of the whole volume element, the scatter of the stress–strain response in smaller statistical volume elements is investigated. Furthermore, numerical studies are performed to reduce computational costs.
The durability of polymer electrolyte membrane fuel cells (PEMFC) is governed by a nonlinear coupling between system demand, component behavior, and physicochemical degradation mechanisms, occurring on timescales from the sub-second to the thousand-hour. We present a simulation methodology for assessing performance and durability of a PEMFC under automotive driving cycles. The simulation framework consists of (a) a fuel cell car model converting velocity to cell power demand, (b) a 2D multiphysics cell model, (c) a flexible degradation library template that can accommodate physically-based component-wise degradation mechanisms, and (d) a time-upscaling methodology for extrapolating degradation during a representative load cycle to multiple cycles. The computational framework describes three different time scales, (1) sub-second timescale of electrochemistry, (2) minute-timescale of driving cycles, and (3) thousand-hour-timescale of cell ageing. We demonstrate an exemplary PEMFC durability analysis due to membrane degradation under a highly transient loading of the New European Driving Cycle (NEDC).
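The time-upscaling step (d) is the key to bridging the three time scales: degradation computed over one representative driving cycle is extrapolated over many cycles instead of simulating every second of the full lifetime. A deliberately simple sketch of this idea for membrane thinning, with all rates and thicknesses invented for illustration:

```python
# Sketch: time-upscaling of degradation. The per-cycle membrane loss is
# taken from one resolved representative cycle, then extrapolated
# linearly to the end-of-life criterion. All numbers are invented.

thinning_per_cycle_um = 0.002      # membrane loss per NEDC cycle (assumed)
initial_thickness_um = 25.0        # fresh membrane (assumed)
failure_thickness_um = 15.0        # end-of-life criterion (assumed)

# Linear extrapolation: lifetime = allowable loss / loss per cycle
cycles_to_failure = round(
    (initial_thickness_um - failure_thickness_um) / thinning_per_cycle_um)

nedc_duration_h = 20 / 60          # one NEDC lasts roughly 20 minutes
print(f"projected lifetime: {cycles_to_failure} cycles "
      f"(~{cycles_to_failure * nedc_duration_h:.0f} h of driving)")
```

In the full framework the per-cycle rate is periodically re-evaluated as the cell state changes, so the extrapolation is piecewise rather than the single linear step shown here.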
The Metering Bus, also known as M-Bus, is a European standard (EN 13757-3) for reading out metering devices, like electricity, water, gas, or heat meters. Although real-life M-Bus networks can reach a significant size and complexity, only very simple protocol analyzers are available to observe and maintain such networks. In order to provide developers and installers with the ability to analyze the real bus signals easily, a web-based monitoring tool for the M-Bus has been designed and implemented. Combined with a physical bus interface, it allows for measuring and recording the bus signals. To this end, a circuit has first been developed which transforms the voltage- and current-modulated M-Bus signals into a voltage signal that can be read by a standard ADC and processed by an MCU. The bus signals and packets are displayed using a web server, which analyzes and classifies the frame fragments. As an additional feature, an oscilloscope functionality is included in order to visualize the physical signal on the bus. This paper describes the development of the read-out circuit for the Wired M-Bus and the data recovery.
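On the wired M-Bus physical layer (EN 13757-2), the master signals bits by switching the bus voltage between a nominal mark level (around 36 V, logical "1") and a level reduced by about 12 V (logical "0"); the read-out circuit scales this into ADC range and the MCU recovers bits by thresholding. A minimal sketch of that data-recovery step, with synthetic samples (the actual circuit and firmware are not reproduced here):

```python
# Sketch: recovering master-to-slave M-Bus bits from sampled bus voltage
# by averaging each bit window and comparing against a mid-level
# threshold. Voltage levels follow EN 13757-2; samples are synthetic.

MARK_V, SPACE_V = 36.0, 24.0             # nominal "1" and "0" levels
THRESHOLD_V = (MARK_V + SPACE_V) / 2     # 30 V decision level

def decode_bits(samples_v, samples_per_bit=4):
    bits = []
    for i in range(0, len(samples_v), samples_per_bit):
        window = samples_v[i:i + samples_per_bit]
        mean = sum(window) / len(window)  # average out noise in the window
        bits.append(1 if mean > THRESHOLD_V else 0)
    return bits

# Synthetic bus trace for the bit pattern 1,0,1,1 with slight noise
trace = [35.8]*4 + [24.3]*4 + [36.1]*4 + [35.9]*4
print(decode_bits(trace))
```

Slave-to-master replies are current-modulated rather than voltage-modulated, which is why the paper's circuit converts both signal types into one ADC-readable voltage first.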
6LoWPAN (IPv6 over Low Power Wireless Personal Area Networks) is gaining more and more attention for the seamless connectivity of embedded devices in the Internet of Things (IoT). Whereas the lower layers (IEEE 802.15.4 and 6LoWPAN) are already well defined and consolidated with regard to frame formats, header compression, routing protocols, and commissioning procedures, there is still an abundant choice of possibilities on the application layer. Currently, various groups are working towards standardization of the application layer, e.g. the ETSI Technical Committee on M2M, the IP for Smart Objects (IPSO) Alliance, the Lightweight M2M (LWM2M) protocol of the Open Mobile Alliance (OMA), and OneM2M. This multitude of approaches leaves the system developer with the agony of choice. This paper selects, presents, and explains one of the promising solutions, discusses its strengths and weaknesses, and demonstrates its implementation.
This paper presents a practice- and science-oriented education approach for freshman students of interdisciplinary bachelor engineering degree programs. This approach is meant to enhance the motivation and success of freshman students throughout their studies. The approach, called Fit4PracSis (Fit for Practice and Sciences), was started in order to develop, set up, and establish an education concept that builds a relationship to the students' future profession and to scientific working during the introductory study phase. The freshman students are trained early in important skills that are necessary for successfully achieving the final degree and for handling future business and research activities.
Systemic Constellations are a phenomenological approach to resolving personal, professional, and organizational issues. They offer a way of mapping a present reality, working at the source of the hidden dynamics, and moving to a resolution. This systemic approach often delivers surprising and unexpected insights while also offering the possibility to analyze and solve organizational problems. Rational analysis provides the whole picture of the problem, which often turns out to be too complex for decision making. Systemic constellations can help to simplify and clarify the situation and inform what has to happen next [8], [17]. The outcomes of systemic constellations as an additional resource for solving comprehensive technical problems have not yet been sufficiently investigated. In structural constellation work dealing with technical problems, the individuals who are involved in the problem situation are used to represent different system components, substances, or fields. A moderator voices the feedback from the representatives concerning their feelings or intuitive movements, and points to possible solutions. For example, a moderator places the representatives somewhere in the room, develops a three-dimensional picture of the constellation of the analyzed situation, and tries to expose the factors empowering or blocking the way towards constructive solutions [13]. This paper explores the theoretical background and practical outcomes of the systemic constellation method for technical problem solving. It presents some case study work which has been conducted in recent years, and then discusses its findings and implications. The research outlined in this paper demonstrates that the noteworthy contribution of structural constellation work to problem solving is typically the result of a combination of functional analysis and the feeling-as-information principle.
The constellation work helps, at first, to reveal the subjective experiences, such as feelings, moods, emotions, and bodily sensations, and then to accept them as a source of objective information relevant to the decision making process. In accordance with the latest research [19], the use of feelings as a source of information follows the same principles as the use of any other information. This paper provides the structures of some standard templates and types of constellation work for technical problems, and discusses the preconditions for their application.
A wet-chemical treatment system for electrochemically coating flat substrates with coating material, having a basin for receiving an electrolyte, a transporting means, by means of which the flat substrates can be transported through the electrolyte horizontally, and at least one contact element which comprises a shaft having an axis of rotation and a cylindrical circumferential surface suitable for rolling on the substrate, wherein the circumferential surface comprises at least one electrically insulated segment and at least one electrically conductive segment which can be connected to a current source in such a way that the polarity can be reversed, wherein the axis of rotation of the contact element is positioned above the surface of the electrolyte, and wherein the contact element is designed as a consumable electrode.
In this work we provide an overview of gamification, i.e. the application of methods from game design to enrich non-gaming processes. The contribution is divided into six subsections: an introduction focusing on the progression of gamification through the hype cycle in recent years (1), a brief introduction to gamification mechanics (2), an overview of the state of the art in established areas (3), a discussion of more recent attempts at gamification in service and production (4), which forms the focus, the ethical implications (5), and the future perspectives (6) of gamified business processes. Gamification has been successfully applied in the domains of education (serious games) and health (exergames) and is spreading to other areas. In recent years there have been various attempts to “gamify” business processes. While the first efforts date back as far as the collection of miles in frequent-flyer programs, we portray some of the more recent and comprehensive software-based approaches in the service industry, e.g. the gamification of processes in sales and marketing. We discuss their accomplishments as well as their social and ethical implications. Finally, a very recent approach is presented: the application of gamification in the domain of industrial production. We discuss the special requirements in this domain and the effects on the business level and on the users. We conclude with a prognosis on the future development of gamification.
With major intellectual properties there is a long tradition of cross-media value chains -- usually starting with books and comics, then moving on to film and TV, and finally reaching interactive media like video games. In recent years the situation has changed: (1) smaller productions start to establish cross-media value chains; (2) there is a trend from sequential towards parallel content production. In this work we describe how the production of a historic documentary takes a cross-media approach right from the start. We analyze how this impacts the content creation pipelines with respect to story, audience, and realization. The focus of the case study is the impact on the production of a documentary game. In a second step we reflect on the experiences gained so far and derive recommendations for future small-scale cross-media productions.
Towards a gamification of industrial production: a comparative study in sheltered work environments
(2015)
Using video game elements to improve user experience and user engagement in non-game applications is called "gamification". This method of enriching human-computer interaction has been applied successfully in education, health and general business processes. However, it has not been established in industrial production so far.
After discussing the requirements specific for the production domain we present two workplaces augmented with gamification. Both implementations are based on a common framework for context-aware assistive systems but exemplify different approaches: the visualization of work performance is complex in System 1 and simple in System 2.
Based on two studies in sheltered work environments with impaired workers, we analyze and compare the systems' effects on work and on workers. We show that gamification leads to a speed-accuracy trade-off if no quality-related feedback is provided. Another finding is that there is a highly significant rise in acceptance if a straightforward visualization approach for gamification is used.
Design approaches for the gamification of production environments: a study focusing on acceptance
(2015)
Gamification is an ever more popular method to increase motivation and user experience in real-world settings. It is widely used in the areas of marketing, health and education. However, in production environments, it is a new concept. To be accepted in the industrial domain, it has to be seamlessly integrated in the regular work processes.
In this work we make the following contributions to the field of gamification in production: (1) we analyze the state of the art and introduce domain-specific requirements; (2) we present two implementations gamifying production based on alternative design approaches; (3) these are evaluated in a sheltered work organization. The comparative study focuses on acceptance, motivation, and perceived happiness.
The results reveal that a pyramid design showing each work process as a step on the way towards a cup at the top is strongly preferred to a more abstract approach where the processes are represented by a single circle and two bars.
Recent advances in motion recognition allow the development of Context-Aware Assistive Systems (CAAS) for industrial workplaces that go far beyond the state of the art: they can capture a user's movement in real-time and provide adequate feedback. Thus, CAAS can address important questions, like Which part is assembled next? Where do I fasten it? Did an error occur? Did I process the part in time? These new CAAS can also make use of projectors to display the feedback within the corresponding area on the workspace (in-situ). Furthermore, the real-time analysis of work processes allows the implementation of motivating elements (gamification) into the repetitive work routines that are common in manual production. In this chapter, the authors first describe the relevant backgrounds from industry, computer science, and psychology. They then briefly introduce a precedent implementation of CAAS and its inherent problems. The authors then provide a generic model of CAAS and finally present a revised and improved implementation.
The Effect of Gamification on Emotions - The Potential of Facial Recognition in Work Environments
(2015)
Gamification means using video game elements to improve user experience and user engagement in non-game services and applications. This article describes the effects when gamification is used in work contexts. Here we focus on industrial production. We describe how facial recognition can be employed to measure and quantify the effect of gamification on the users’ emotions.
The quantitative results show that gamification significantly reduces both task completion time and error rate. However, the results concerning the effect on emotions are surprising. Without gamification there are not only more unhappy expressions (as expected) but, surprisingly, also more happy expressions. Both findings are statistically highly significant.
We think that repetitive production work generally involves more (negative) emotions. When there is no gamification, happy and unhappy expressions balance each other. In contrast, gamification seems to shift the spectrum of moods towards “relaxed”. Especially for work environments, such a calm attitude is a desirable effect on the users. Thus our findings support the use of gamification.
Creating growth through trade is an important part of the policy approach of many economies. For decades, many member countries of the Organisation for Economic Co-operation and Development (OECD) have cooperated in a fair competition for the benefit of their national exporters. The countries’ official export credit agencies (ECAs) have established and jointly improved rules and regulations for export credit and political risk insurance. However, new players such as China, Russia or other fast developing countries have now joined the list of top exporting nations. As these countries have established their own ECAs, there is a need to introduce rules and regulations on global standards for financial terms as well as truly international norms ensuring ‘ethical’ trading behaviour.
But what will government support for foreign trade look like in the future? Will global standards for export credit and political risk insurance become reality by 2020? And how will strict rules and regulations for officially supported export credits and FDI regarding ethics, human rights, and the environment impact growth through trade in general, or exporters in particular? These are the questions addressed by the thirty-eight contributions to Global Policy's third eBook, entitled ‘The Future of Foreign Trade Support – Setting Global Standards for Export Credit and Political Risk Insurance’, guest edited by Andreas Klasen and Fiona Bannert.
Additive Manufacturing and Reverse Engineering have increasingly been gaining in importance over the past years. This paper investigates the current status of the implementation of these new technologies in design education and identifies current shortcomings. It then develops two new approaches for teaching the expertise necessary for the design of 3D-printed components and illustrates these with case studies. First, a workshop is presented in which students gain a broad understanding of the functionalities of additive manufacturing and of the creative possibilities and limits of this process through the assembly and installation of a 3D-printer. A second new approach is the combination of reverse engineering and 3D-printing, whereby students learn how to deal with this complex process chain. The results of these new approaches can be seen, for example, in the design guidelines for Additive Manufacturing which were developed by the students themselves. At the same time, the students are able to assess the opportunities and limits of both technologies. Finally, the success of the new course contents and format is reviewed through an evaluation by the students.
This paper presents a new approach for teaching competence in additive manufacturing to engineering students in product development. Particularly new in this approach is the combination of the students' autonomous assembly and commissioning of a 3D-printer with the independent development of design guidelines for this new technology. In this way, the students gain first practical experience with data preparation, the additive manufacturing process itself, and the required post-treatment of the 3D-printed parts. To allow the students a significantly deeper insight into the functioning of 3D-printing, a new approach was developed for the Rapid Prototyping workshop, in the course of which the students first assemble a construction kit for a 3D-printer themselves and then commission the printer. This enables the students to gain a better understanding of the functionality and configuration of additive manufacturing. In a next step, the students use the 3D-printers they constructed themselves to produce components taken from a database. Finally, the experiences of the students in the course of the workshop are evaluated to review the effectiveness of the new approach.
In addition to traditional methods in product development, the increasing availability of two 3D digital technologies, namely digital manufacturing (3D-printing) and the digitizing of surfaces (3D-scanning), offers new opportunities in product development processes today. With regard to the systematic integration of these technologies into the education of students in the field of product development, however, only a small number of approaches exist so far. This paper explores several ways in which 3D digital technologies can be used productively in design education. The innovative aspects here include that the students assemble and install the 3D-printers themselves, and that they are introduced to an approach that combines 3D-scanning with subsequent 3D-printing.
Application of Polymer Plaster Composites in Additive Manufacturing of High-Strength Components
(2015)
Today, 3D-printing with polymer plaster composites is a common method in Additive Manufacturing. This technique has proven especially suitable for the production of presentation models, due to the low cost of materials and the possibility of producing color models. However, it currently requires refinishing through the manual application of a layer of resin, and the strength of the printed components remains very limited, as the applied resin penetrates only a thin edge layer at the surface. This paper develops a new infiltration technique that allows for a significant increase in the strength of the 3D-printed component. In this process, the components are first dehydrated in a controlled two-stage procedure before they are penetrated with high-strength resin. The infiltrate used differs significantly from materials traditionally used for infiltration. The result is an almost complete penetration of the components with high-strength infiltrate. As the whole process is computer-integrated, the results are also easier to reproduce than with manual infiltration. On the basis of extensive material testing with different test specimens and testing methods, it is demonstrated that a significant increase in strength and hardness can be achieved. Finally, this paper also considers the cost and energy consumption of the new infiltration method. As a result of this new technology, the scope of applicability of 3D-printing can be extended to cases that require significantly more strength, such as the production of tools for the forming of metals or the molding of plastics. Furthermore, both the process itself and the parameters used are monitored and can be optimized to individual requirements and different fields of application.
Distribution of esophageal interventricular conduction delays in CRT patients and healthy subjects
(2015)
Quartz crystal microbalances (QCM) allow the adsorption of mass from a liquid onto their surface to be monitored. The adsorbed mass can be analysed for its protein content using mass spectrometry. To ensure reliable protein identification, the results of several measurements can be combined. A high-content QCM-D array was developed to allow up to ten measurements in parallel. Samples can be routed inside the array, distributing one sample to several chips. The fluidic parts were prototyped using 3D printing. The assembled array was leak-tight and the sample routing function could be demonstrated. A temperature controller was developed and implemented. The parameters for the PID controller were determined, and the controller was shown to keep the temperature constant over long periods with high accuracy.
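The temperature regulation described above follows the standard PID scheme: the heater output is a weighted sum of the current error, its integral, and its derivative. The following is a minimal sketch, not the implementation used in the work; the gains and the setpoint are illustrative values, and any sensor/heater interface would have to be supplied by the actual hardware.

```python
class PID:
    """Minimal discrete PID controller (illustrative sketch, not the original design)."""

    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint          # target temperature, e.g. in deg C
        self.integral = 0.0               # accumulated error
        self.prev_error = None            # error of the previous step

    def update(self, measurement, dt):
        """Return the controller output for one sample of duration dt (seconds)."""
        error = self.setpoint - measurement
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

In a control loop, `update` would be called once per sampling interval with the measured chip temperature, and its output mapped to the heater power.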
Android is the most popular mobile operating system. Its omnipresence also makes it the most popular target amongst malware developers and other computer criminals. Hence, this thesis presents the security-relevant structures of Android's system and application architecture. Furthermore, it provides laboratory exercises on various security-related issues, so that students not only understand them theoretically but also deal with them in a practical way. In order to provide infrastructure-independent education, the exercises are based on Android Virtual Devices (AVDs).
Cardiac resynchronization therapy (CRT) is an established biventricular pacing therapy in heart failure patients with left bundle branch block and reduced left ventricular ejection fraction, but not all patients improve clinically as CRT responders. The purpose of the study was to evaluate electrical left atrial conduction delay (LACD) with focused transesophageal electrocardiography in CRT responders and CRT non-responders.
Methods: Twenty heart failure patients (age 66.6±8.2 years; 2 females, 18 males) with New York Heart Association functional class 3.0±0.3 and 174.2±40.2ms QRS duration were analysed using posterior left atrial transesophageal electrocardiography with hemispherical electrodes. Electrical LACD was measured between onset and offset of transesophageal left atrial signal before implantation of CRT devices.
Results: Electrical LACD could be evaluated by bipolar transesophageal left atrial electrocardiography using the TO Osypka electrode in all heart failure patients, with a negative correlation between 54.7±18.1ms LACD and 24.9±6.4% left ventricular ejection fraction (r=-0.65, P=0.002). There were 16 CRT responders with a reduction of New York Heart Association functional class from 3.0±0.29 to 2.1±0.2 (r=0.522, P=0.038) during 9.41±10.96 months of biventricular pacing and a negative correlation between 49.6±14.2ms LACD and 26.0±6.2% left ventricular ejection fraction (r=-0.533, P=0.034). There were 4 CRT non-responders with no reduction of New York Heart Association functional class from 3.0±0.4 to 2.8±0.5 (r=0.816, P=0.184) during 13.88±16.39 months of biventricular pacing and no correlation between 75.25±19.17ms LACD and 20.75±6.4% left ventricular ejection fraction (r=-0.831, P=0.169).
Conclusions: Focused transesophageal left atrial electrocardiography can be utilized to analyse electrical LACD in heart failure patients. LACD correlated negatively with left ventricular ejection fraction in CRT responders. LACD may be a useful parameter to evaluate electrical left atrial desynchronization in heart failure patients.
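The r values reported above are Pearson correlation coefficients relating LACD to left ventricular ejection fraction. As a reminder of how such a coefficient is computed (the patient data themselves are not reproduced here, and the numbers below are purely illustrative), a minimal sketch:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))   # covariance sum
    sxx = sum((a - mx) ** 2 for a in x)                    # variance sums
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)
```

A perfectly inverse relationship, as approximated by the negative LACD-to-ejection-fraction correlations above, yields r = -1.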
Cardiac resynchronization therapy (CRT) is an established class I level A biventricular pacing therapy in chronic heart failure patients with left bundle branch block and reduced left ventricular ejection fraction, but not all patients improve clinically. The purpose of the study was to evaluate the ratio of electrical interatrial conduction delay (IACD) to interventricular conduction delay (IVCD) with focused transesophageal left atrial and left ventricular electrocardiography.
Methods: Thirty-eight chronic heart failure patients (age 63.4±10.2 years; 3 females, 35 males) with New York Heart Association (NYHA) functional class 3.0±0.2 and 171.71±36.17ms QRS duration were analysed using posterior left atrial and left ventricular transesophageal electrocardiography with hemispherical electrodes before CRT. Electrical IACD was measured between the onset of the P-wave in the surface ECG and the onset of the left atrial signal. Electrical IVCD was measured between the onset of the QRS complex in the surface ECG and the onset of the left ventricular signal.
Results: Electrical IACD and IVCD could be evaluated by transesophageal left atrial and left ventricular electrocardiography in all heart failure patients, with correlation to the 1.18±0.92 IACD-IVCD ratio (r=-0.57, P<0.001; r=0.66, P<0.001). There were 32 CRT responders with a reduction of NYHA class from 3.0±0.22 to 1.97±0.31 (P<0.001) during 16.5±18.9 months of CRT, with 75.19±33.49ms IACD, 78.91±24.73ms IVCD, a 1.04±0.66 IACD-IVCD ratio and a correlation between IACD and IACD-IVCD ratio (r=0.84, P<0.001). There were 6 CRT non-responders with no reduction of NYHA class from 3.0±0.3 to 2.9±0.5 during 14.3±13.7 months of biventricular pacing, 50.0±28.26ms IVCD (P=0.014), a 1.92±1.65 IACD-IVCD ratio (P=0.029) and a correlation between 67.0±24.9ms IACD and the IACD-IVCD ratio (r=0.85, P=0.031).
Conclusions: Focused transesophageal left atrial and left ventricular electrocardiography can be utilized to analyse electrical IACD and IVCD in heart failure patients. The IACD-IVCD ratio may be a useful parameter to evaluate electrical left cardiac desynchronization in heart failure patients.
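The delay measurements described in the Methods reduce to differences between signal-onset times. A minimal sketch of that arithmetic, with hypothetical onset times in milliseconds (the function name and the example values are illustrative, not taken from the study):

```python
def conduction_delays(p_onset_ms, la_onset_ms, qrs_onset_ms, lv_onset_ms):
    """Compute IACD, IVCD and their ratio from signal-onset times (ms).

    IACD: onset of P-wave in the surface ECG to onset of the left atrial signal.
    IVCD: onset of the QRS complex to onset of the left ventricular signal.
    """
    iacd = la_onset_ms - p_onset_ms
    ivcd = lv_onset_ms - qrs_onset_ms
    return iacd, ivcd, iacd / ivcd
```

With the responder mean delays quoted above (roughly 75 ms IACD and 79 ms IVCD), this yields a ratio close to the reported 1.04.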
Phosphate-based inorganic–organic hybrid nanoparticles (IOH-NPs) with the general composition [M]2+[Rfunction(O)PO3]2– (M = ZrO, Mg2O; R = functional organic group) show multipurpose and multifunctional properties. If [Rfunction(O)PO3]2– is a fluorescent dye anion ([RdyeOPO3]2–), the IOH-NPs show blue, green, red, and near-infrared fluorescence. This is shown for [ZrO]2+[PUP]2–, [ZrO]2+[MFP]2–, [ZrO]2+[RRP]2–, and [ZrO]2+[DUT]2– (PUP = phenylumbelliferon phosphate, MFP = methylfluorescein phosphate, RRP = resorufin phosphate, DUT = Dyomics-647 uridine triphosphate). With pharmaceutical agents as functional anions ([RdrugOPO3]2–), drug transport and release of anti-inflammatory ([ZrO]2+[BMP]2–) and antitumor agents ([ZrO]2+[FdUMP]2–) with an up to 80% load of active drug is possible (BMP = betamethason phosphate, FdUMP = 5′-fluoro-2′-deoxyuridine 5′-monophosphate). A combination of fluorescent dye and drug anions is possible as well and shown for [ZrO]2+[BMP]2–0.996[DUT]2–0.004. Merging of functional anions, in general, results in [ZrO]2+([RdrugOPO3]1–x[RdyeOPO3]x)2– nanoparticles and is highly relevant for theranostics. Amine-based functional anions in [MgO]2+[RaminePO3]2– IOH-NPs, finally, show CO2 sorption (up to 180 mg g–1) and can be used for CO2/N2 separation (selectivity up to α = 23). This includes aminomethyl phosphonate [AMP]2–, 1-aminoethyl phosphonate [1AEP]2–, 2-aminoethyl phosphonate [2AEP]2–, aminopropyl phosphonate [APP]2–, and aminobutyl phosphonate [ABP]2–. All [M]2+[Rfunction(O)PO3]2– IOH-NPs are prepared via noncomplex synthesis in water, which facilitates practical handling and which is optimal for biomedical application. In sum, all IOH-NPs have very similar chemical compositions but can address a variety of different functions, including fluorescence, drug delivery, and CO2 sorption.
Cardiac resynchronization therapy with atrioventricular and interventricular delay-optimized biventricular pacing is an established therapy for symptomatic heart failure patients with prolonged QRS duration, left bundle branch block and reduced left ventricular ejection fraction. The aim of the investigation was to evaluate right atrial, right ventricular and left ventricular electrical signals of implantable electronic cardiac devices with and without signal averaging, using novel LabVIEW software. Electrical interatrial conduction delay and interventricular conduction delay may be useful parameters to evaluate electrical atrial and ventricular desynchronization in heart failure patients.
Seven cell design concepts for aqueous (alkaline) lithium–oxygen batteries are investigated using a multi-physics continuum model for predicting cell behavior and performance in terms of the specific energy and specific power. Two different silver-based cathode designs (a gas diffusion electrode and a flooded cathode) and three different separator designs (a porous separator, a stirred separator chamber, and a redox-flow separator) are compared. Cathode and separator thicknesses are varied over a wide range (50 μm–20 mm) in order to identify optimum configurations. All designs show a considerable capacity-rate effect due to spatiotemporally inhomogeneous precipitation of solid discharge product LiOH·H2O. In addition, a cell design with flooded cathode and redox-flow separator including oxygen uptake within the external tank is suggested. For this design, the model predicts specific power up to 33 W/kg and specific energy up to 570 Wh/kg (gravimetric values of discharged cell including all cell components and catholyte except housing and piping).
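The gravimetric figures quoted above follow the usual definitions: specific energy is discharged energy per unit cell mass (Wh/kg) and specific power is discharge power per unit cell mass (W/kg). A minimal sketch of these conversions, using illustrative values rather than output of the paper's continuum model:

```python
def specific_energy_Wh_per_kg(capacity_Ah, mean_voltage_V, mass_kg):
    """Discharged energy per unit cell mass in Wh/kg (energy = capacity * voltage)."""
    return capacity_Ah * mean_voltage_V / mass_kg

def specific_power_W_per_kg(current_A, mean_voltage_V, mass_kg):
    """Discharge power per unit cell mass in W/kg (power = current * voltage)."""
    return current_A * mean_voltage_V / mass_kg
```

For a cell-level comparison such as the one above, `mass_kg` would include all cell components and the catholyte, excluding housing and piping, as stated for the reported 570 Wh/kg and 33 W/kg values.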