Refine
Year of publication
Document Type
- Conference Proceeding (926)
- Article (reviewed) (553)
- Article (unreviewed) (124)
- Part of a Book (65)
- Contribution to a Periodical (58)
- Book (29)
- Patent (29)
- Letter to Editor (28)
- Doctoral Thesis (19)
- Working Paper (19)
Conference Type
- Conference Paper (730)
- Conference Abstract (134)
- Other (34)
- Conference Poster (22)
- Conference Proceedings (8)
Language
- English (1857)
Is part of the Bibliography
- yes (1857)
Keywords
- RoboCup (32)
- Thin-Layer Chromatography (26)
- Gamification (17)
- Machine Learning (17)
- Export (16)
- Communication (15)
- TRIZ (13)
- Plasticity (12)
- 3D printing (11)
- Deep Learning (11)
Institute
- Fakultät Maschinenbau und Verfahrenstechnik (M+V) (562)
- Fakultät Elektrotechnik und Informationstechnik (E+I) (bis 03/2019) (486)
- Fakultät Elektrotechnik, Medizintechnik und Informatik (EMI) (ab 04/2019) (357)
- Fakultät Wirtschaft (W) (257)
- INES - Institut für nachhaltige Energiesysteme (165)
- Fakultät Medien und Informationswesen (M+I) (bis 21.04.2021) (146)
- ivESK - Institut für verlässliche Embedded Systems und Kommunikationselektronik (142)
- IMLA - Institute for Machine Learning and Analytics (72)
- ACI - Affective and Cognitive Institute (58)
- Fakultät Medien (M) (ab 22.04.2021) (51)
Open Access
- Open Access (801)
- Closed Access (625)
- Closed (241)
- Bronze (137)
- Gold (74)
- Diamond (62)
- Hybrid (45)
- Green (12)
The free convection in a vertical gap is generalized to obtain new analytical solutions of the Boussinesq equations. The steady and time-dependent solutions for the temperature and velocity distributions are discussed in detail as a function of the mass flux in the vertical direction. The range of existence of flows with and without backflow is determined. The transient behaviour of the solutions during their time-dependent development displays interesting physical features.
The use of a TLC scanner can be regarded as a key step in high-performance thin-layer chromatography (HPTLC). Densitometric measurements transform the substance distribution on a TLC plate into digital data. Systems that allow quantitative measurements of either fluorescence or ultraviolet absorption have been available for many years; lately, the reflection mode has become the most common mode for both. New scanning approaches are designed to aid analysts with common TLC-densitometry demands that do not require special data such as scanned images. Two examples developed recently in the authors' laboratories are described in this paper. These approaches address the current needs of analysts who employ TLC as a tool in research as well as in routine analysis. One approach aims to support analysts in economically disadvantaged areas, where cost-intensive apparatus is unsuitable but trace analysis by simple means is required. The other system allows the spectral determination of chromatographic spots on TLC plates over the ultraviolet and visible range, thus revealing highly desired information for the analyst.
HPTLC (high-performance thin-layer chromatography) is a well-known and versatile separation method that offers many advantages and options compared with other separation techniques. The method is fast and inexpensive and does not need time-consuming pretreatments. Using fibre-optic elements for controlled light guiding, the TLC method was significantly improved: the new HPTLC system is able to measure simultaneously at different wavelengths without destroying the plate surface or the analytes on it. For registration of the sample distribution on an HPTLC plate we developed a new and sturdy diode-array HPTLC scanner, which allows registration of spectra on the TLC plate in the range of 198 nm to 610 nm with a spectral resolution better than 1.2 nm. The spatial resolution on the plate is better than 160 µm. In the spectral mode, the new HPTLC scanner delivers much more information than commonly used TLC scanners. The measurement of 450 spectra of one separation track takes no more than three minutes. In the fixed-wavelength mode, however, the contour plot can be measured within 15 seconds; in this case, the signal is summed and averaged over a spectral range with an FWHM of 10 nm to 25 nm, depending on the substance under test. The new diode-array HPTLC scanner makes various chemometric applications possible. The new method can easily be used in clinical diagnostic systems, e.g. for blood and urine investigations. In addition, new applications are possible: for example, the PAHs, with their richly structured spectra, were studied. Although the separation is incomplete, the 16 compounds can be quantified using suitable wavelengths.
The aim of this study was to develop a biomechanically validated finite element model to predict the biomechanical behaviour of the human lumbar spine in compression.
For validation of the finite element model, an in vitro study was performed: Twelve human lumbar cadaveric spinal segments (six segments L2/3 and six segments L4/5) were loaded in axial compression using 600 N in the intact state and following surgical treatment using two different internal stabilisation devices. Range of motion was measured and used to calculate stiffness.
A finite element model of a human spinal segment L3/4 was loaded with the same force in intact and surgically altered state, corresponding to the situation of biomechanical in vitro study.
The results of the cadaver biomechanical tests and the finite element analysis were compared. As they agreed closely, the finite element model was used to predict: (1) load-sharing within the human lumbar spine in compression, (2) load-sharing within the osteoporotic human lumbar spine in compression and (3) the stabilising potential of the different spinal implants with respect to bone mineral density.
A finite element model as described here may be used to predict the biomechanical behaviour of the spine. Moreover, the influence of different spinal stabilisation systems may be predicted.
A systematic toxicological analysis procedure using high-performance thin-layer chromatography in combination with fibre-optical scanning densitometry for the identification of drugs in biological samples is presented. Two examples illustrate the practicability of the technique: first, the identification of a multiple intake of analgesics (codeine, propyphenazone, tramadol, flupirtine and lidocaine), and second, the detection of the sedative diphenhydramine. In both cases, authentic urine specimens were used. The identifications were carried out by automatic measurement and computer-based comparison of in situ UV spectra with a compiled library of reference spectra using the cross-correlation function. The technique allowed the parallel recording of chromatograms and in situ UV spectra in the range of 197–612 nm. Unlike in conventional densitometry, no dependence of the UV spectra on substance concentration was observed in the range of 250–1000 ng/spot.
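The library comparison step described above can be sketched as a zero-lag normalized cross-correlation (i.e., a Pearson correlation) between the measured spectrum and each reference spectrum. The function names and the synthetic Gaussian spectra below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def spectral_match(sample, library):
    """Rank reference spectra by zero-lag normalized cross-correlation.

    sample: 1-D array of absorbances on a common wavelength grid.
    library: dict mapping substance name -> reference spectrum (same grid).
    Returns a list of (name, score) sorted by descending correlation.
    """
    s = (sample - sample.mean()) / sample.std()
    scores = {}
    for name, ref in library.items():
        r = (ref - ref.mean()) / ref.std()
        scores[name] = float(np.dot(s, r) / len(s))  # Pearson r, in [-1, 1]
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Illustrative synthetic spectra: Gaussian bands on a 197-612 nm grid
wl = np.linspace(197, 612, 416)
band = lambda center, width: np.exp(-((wl - center) / width) ** 2)
library = {"codeine": band(285, 12), "tramadol": band(272, 10),
           "lidocaine": band(262, 9)}
# A noisy measurement of the "codeine" band
measured = band(285, 12) + 0.02 * np.random.default_rng(0).normal(size=wl.size)
best, score = spectral_match(measured, library)[0]
print(best, round(score, 3))
```

In practice the reference library would hold measured in situ UV spectra rather than synthetic bands; the ranking logic stays the same.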
The importance of obtaining simultaneous particle size and concentration values has grown with the continuing discussion of the health effects of internal-combustion-engine particulate emissions, and in particular of Diesel soot emissions. In the present work an aerosol measurement system is described that delivers information about particle size and concentration directly from the undiluted exhaust gas.
Using three laser diodes of different wavelengths, which form one parallel light beam, each spectral attenuation is analysed by a single detector, and the particle diameter and concentration are evaluated by means of Mie theory and displayed online at a frequency of 1 Hz. The system includes an optical long-path cell (White principle) with an adjustable path length from 2.5 to 15 m, which allows analysis within a broad concentration range.
On-line measurements of the particulate emissions in the hot, undiluted exhaust of Diesel engines are presented under stationary and transient engine load conditions. Mean particle diameters well below 100 nm are detected for modern Diesel engines. The measured particle concentration agrees excellently with traditional gravimetric measurements of the diluted exhaust. Additionally, measurements of particle emissions (mostly condensed hydrocarbons) from a two-stroke engine are presented and discussed.
In-situ densitometry for qualitative or quantitative purposes is a key step in thin-layer chromatography (TLC). It is a simple means of quantification by measurement of the optical density of the separated spots directly on the plate. A new scanner has been developed which is capable of measuring TLC or HPTLC (high-performance thin-layer chromatography) plates simultaneously at different wavelengths without damaging the plate surface. Fibre optics and special fibre interfaces are used in combination with a diode-array detector. With this new scanner, sophisticated plate evaluation is now possible, which enables the use of chemometric methods in HPTLC. Different regression models have been introduced which enable appropriate evaluation of all analytical questions. Fluorescence measurements are possible without filters or special lamps, and signal-to-noise ratios can be improved by wavelength bundling. Because of the richly structured spectra obtained from PAHs, diode-array HPTLC enables quantification of all 16 EPA PAHs on one track. Although the separation is incomplete, all 16 compounds can be quantified by use of suitable wavelengths. Together, these features enable a substantial improvement of in-situ quantitative densitometric analysis.
HPTLC (high-performance thin-layer chromatography) is a well-known and versatile separation method with many advantages compared to other separation techniques. The method is fast and inexpensive and does not need time-consuming pretreatments. For visualisation of the sample distribution on an HPTLC plate we developed a new and sturdy HPTLC scanner. The scanner allows the simultaneous registration of spectra in a range from 198 nm to 612 nm with a spectral resolution better than 0.8 nm. The on-plate spatial resolution is better than 160 μm. The measurement of 450 spectra in one separation track takes no more than two minutes. The new diode-array scanner offers a fast survey of a TLC separation and makes various chemometric applications possible. For compound identification, a cross-correlation function is described to compare UV sample spectra with appropriate library data. The cross-correlation function described herein can also be used for purity testing. Unresolved peaks can be virtually separated by use of a least-squares fit algorithm. In summary, the diode-array system delivers much more information than commonly used TLC scanners.
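The "virtual separation" of unresolved peaks by a least-squares fit can be illustrated as follows. The Gaussian reference spectra are synthetic stand-ins, and the routine is only a sketch of the general idea, not the algorithm in the scanner software:

```python
import numpy as np

def virtual_separation(mixture, refs):
    """Model the measured spectrum of an unresolved spot as a linear
    combination of reference spectra and return the fitted coefficients
    (relative contributions) from an ordinary least-squares fit."""
    A = np.column_stack(refs)                 # one column per reference
    coeffs, *_ = np.linalg.lstsq(A, mixture, rcond=None)
    return coeffs

# Illustrative: two synthetic Gaussian reference spectra on a 198-612 nm grid
wl = np.linspace(198, 612, 415)
gauss = lambda center, width: np.exp(-((wl - center) / width) ** 2)
refs = [gauss(260, 10), gauss(300, 15)]
mixture = 0.7 * refs[0] + 0.3 * refs[1]       # unresolved 70:30 overlap
print(np.round(virtual_separation(mixture, refs), 3))  # → [0.7 0.3]
```

With noisy real spectra the fit returns least-squares estimates rather than exact fractions, but the principle is identical.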
A prototype multiwavelength sensor able to characterise soot emissions in Diesel exhaust in terms of size and concentration has been tested against other methods for Diesel particle measurement, namely electrical mobility sizing (SMPS) and raw exhaust gravimetric sampling (RES). Measurements carried out with the prototype sensor were correlated with the SMPS by assuming spherical and/or fractal aggregate morphology of the particles. Correlating the RES gravimetric data with the sensor and the SMPS yielded a solid density of the soot particles of 2.3 g/cm³.
The flow field-flow fractionation (FlFFF) technique is a promising method for separating and analysing particles and large macromolecules from a few nanometres to approximately 50 μm. A new fractionation channel is described featuring well-defined flow conditions even at low channel heights, together with convenient assembly and operation. The applicability of the new flow field-flow fractionation channel is demonstrated by the analysis of pigments and other small particles of technical interest in the submicrometre range. The experimental results, including multimodal size distributions, are presented and discussed.
Rotating flow systems are often used to study stability phenomena and structure development. The closed spherical-gap problem is generalized into an open flow system by superimposing a mass flux in the meridional direction. The basic solutions at low Reynolds numbers are described by analytical methods. The nonlinear supercritical solutions are simulated numerically and realized in experiments. Novel steady and time-dependent modes of flow are obtained. The extensive results concern the stability behaviour, the non-uniqueness of supercritical solutions, symmetry behaviour and transitions between steady and time-dependent solutions. The experimental investigations concern the visualization of the various instabilities and the quantitative description of the flow structures, including the laminar-turbulent transition. A comparison between theoretical and experimental results shows good agreement within the limits of the rotationally symmetric solutions of the theory.
We generalize the fluid-flow problem of an oscillating flat plate (Stokes' second problem) in two directions. We first discuss the oscillating porous flat plate with superimposed blowing or suction. The second generalization concerns an increasing or decreasing velocity amplitude of the oscillating flat plate. Finally, we show that a combination of both effects is possible as well.
An algorithm is presented that has been used successfully in practice for several years. It improves data analysis in chromatography. The program runs extremely reliably and evaluates chromatographic raw data with acceptable error. The algorithm requires minimal preprocessing and integrates even unsmoothed, noisy data correctly.
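As an illustration of the kind of task such an integration algorithm performs (not the published algorithm itself), the sketch below integrates a noisy peak by the trapezoidal rule after subtracting a straight baseline between the peak bounds:

```python
import numpy as np

def peak_area(t, y, left, right):
    """Integrate a chromatographic peak between two bound indices:
    subtract a straight baseline drawn between the bounds, then apply
    the trapezoidal rule to the (possibly unsmoothed) signal."""
    tt, yy = t[left:right + 1], y[left:right + 1]
    baseline = np.interp(tt, [tt[0], tt[-1]], [yy[0], yy[-1]])
    net = yy - baseline
    return float(np.sum((net[:-1] + net[1:]) / 2 * np.diff(tt)))

# Illustrative data: Gaussian peak (true area 2.0) on a sloping, noisy baseline
rng = np.random.default_rng(1)
t = np.linspace(0, 10, 2001)
y = (2.0 / (0.3 * np.sqrt(2 * np.pi)) * np.exp(-(t - 5) ** 2 / (2 * 0.3 ** 2))
     + 0.1 * t + rng.normal(0, 0.01, t.size))
area = peak_area(t, y, np.searchsorted(t, 3.5), np.searchsorted(t, 6.5))
print(round(area, 2))  # close to the true area of 2.0
```

A production algorithm would also detect the peak bounds automatically and handle overlapping peaks; this sketch shows only the integration step.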
Formal verification (FV) is considered by many to be complicated and to require considerable mathematical knowledge for successful application. We have developed a methodology in which we have added formal verification to the verification process without requiring any knowledge of formal verification languages. We use only finite-state machine notation, which is familiar and intuitive to designers. Another problem associated with formal verification is state-space explosion. If that occurs, no result is returned; our method switches to random simulation after one hour without results, and no effort is lost. We have compared FV against random simulation with respect to development time, and our results indicate that FV is at least as fast as random simulation. FV is superior in terms of verification quality, however, because it is exhaustive.
This paper treats the interaction between acoustic modes and light (Brillouin scattering) in a single-mode optical fibre. Different observed spectra of the Brillouin backscattering in several fibres have already been reported. In order to gain a clear understanding of the process, we developed a simulation to 'draw' the theoretical Brillouin spectrum of an optical fibre and to identify the origin of the observed backscattered lines.
First, the model and the computation method used in our simulation are described. Second, the experimentally observed spectra of two real fibres are compared with their computed spectra. Real spectra and simulated spectra are in good agreement.
Our work provides an interesting tool to investigate the changes in the Brillouin spectrum when the input parameters (characteristics of an optical fibre) vary. This should give useful indications to people working on systems which use Brillouin backscattering.
In this paper a high-performance thin-layer chromatography (HPTLC) scanner is presented in which a special fibre arrangement is used as the HPTLC plate scanning interface. Measurements are taken with a set of 50 fibres at a distance of 400 to 500 μm above the HPTLC plate. Spatial resolutions on the HPTLC plate of better than 160 μm are possible, and it takes less than 2 min to scan 450 spectra simultaneously in a range of 198 to 610 nm. The key improvements are the use of highly transparent glass fibres, which provide excellent transmission at 200 nm, and a special fibre arrangement for plate illumination and detection.
Shapes and structures of vortex-breakdown phenomena in rotating fluids are visualized. We investigate the flow in a cylindrical container and in a cone between two spherical surfaces. The primary swirling flow is induced by the rotating upper disk in the cylindrical case and by the lower boundary in the spherical case. The upper surface can either be fixed with a no-slip condition or be a stress-free surface. Depending on these boundary conditions and on the Reynolds number, novel structures of recirculation zones are realized. Experiments are performed to visualize the topological structure of the flow and to determine its range of existence as a function of the geometry and rotation rate. A comparison between the experimental and theoretical approaches shows good agreement with respect to the topological structures of the flows.
The Baroque composer Johann Sebastian Bach (1685–1750) has left us with many puzzles. The well-known oil painting by Elias Gottlob Haußmann is the only painting for which Bach actually posed in person. According to this portrait, Bach must have been quite obese. The cheeks and nose are flushed – possibly signs of hypertension – and the eyelids are narrow – a sign of myopia. Furthermore, there is a thinning of the lateral third of the right eyebrow, known as Hertoghe's sign, and indications of periorbital edema. Both signs are compatible with hypothyroidism. Bach might have been suffering from type-2 diabetes as the origin of his final illness; the obituary reports two cataract surgeries by the oculist John Taylor in March/April 1750 and, four months later, "apoplexy" followed by a high fever, of which Bach died. It may be speculated, however, that Bach's entire illness was the result of his presumed obesity, possibly in combination with hypothyroidism.
In this paper, pathophysiologically interrelated deactivation/activation phenomena are illustrated using the example of whiplash injury. These phenomena may have been underestimated in previous positron emission tomography studies, as their focus was on hypoperfusion rather than hyperperfusion. In addition, statistical parametric mapping analysis of cerebral studies is normally tuned to obvious clusters of difference rather than to specific areas of interest.
Virtual reality in the hotel industry: assessing the acceptance of immersive hotel presentation
(2019)
In the hotel industry, it is crucial to reduce the inherent information asymmetry with regard to the goods offered. This asymmetry can be minimised through the use of smartphone-based virtual reality applications (SBVRs), which allow virtual simulation of real experiences and thus enable more efficient information retrieval. The aim of the study is to determine for the first time the user acceptance of these immersive hotel presentations for assessing the performance of a travel accommodation. For this purpose, the Technology Acceptance Model (TAM) was used to explain the acceptance behaviour for this new technology. A virtual reality application was specially developed, in which the participants could explore a hotel virtually. A total of 569 participants took part in the study. The structural equation model and the hypotheses were tested using a Partial Least Squares (PLS) analysis. The results illustrate that the immersive product experience leads to more efficient information gathering. The perceived usefulness significantly affects the attitude towards using the technology as well as the intention to use it. In contrast to the traditional TAM, the perceived ease of use of SBVRs has no effect on the perceived usefulness or attitude towards using the technology.
This book, now in its second, completely revised and updated edition, offers a critical approach to the challenging interpretation of the latest research data obtained using functional neuroimaging in whiplash injury. Such a comprehensive guide to recent and current international research in the field is more necessary than ever, given that the confusion regarding the condition and the medicolegal discussions surrounding it have increased further despite the publication of much literature on the subject. In recent decades especially the functional imaging methods – such as single-photon emission tomography, positron emission tomography, functional MRI, and hybrid techniques – have demonstrated a variety of significant brain alterations. Functional Neuroimaging in Whiplash Injury - New Approaches covers all aspects, including the imaging tools themselves, the various methods of image analysis, different atlas systems, and diagnostic and clinical aspects. The book will help physicians, patients and their relatives and friends, and others to understand this condition as a disease.
A simple measuring method for acquiring the radiation pattern of an ultra-wideband Vivaldi antenna is presented. The measurement is performed by combining two identical Vivaldi antennas with some of the intrinsic properties of a stepped-frequency continuous-wave radar (SFCW radar) in the range from 1.0 GHz to 6.0 GHz. A stepper motor provided the azimuthal rotation of one of the antennas from 0° to 360°. The tests were performed in a conventional environment (laboratory/office) without an anechoic chamber or absorbing materials, and no special measuring devices were used. The method has been tested with different pairs of Vivaldi antennas and can also be used for other antennas (with little or no change to the system), as long as their operational bandwidth lies within the frequency range of the SFCW radar.
Keywords — SFCW Radar, Antenna Gain Characterization, Azimuthal Radiation Pattern
Oesophageal Electrode Probe and Device for Cardiological Treatment and/or Diagnosis (EP3706626A1)
(2020)
The invention relates to an oesophageal electrode probe (10) for bioimpedance measurement and/or for neurostimulation; a device (100) for transoesophageal cardiological treatment and/or cardiological diagnosis; and a method for the open-loop or closed-loop control of a cardiac catheter ablation device and/or a cardiac, circulatory and/or respiratory support device. The oesophageal electrode probe comprises a bioimpedance measuring device for measuring the bioimpedance of at least one part of the tissue surrounding the oesophageal electrode probe. The bioimpedance device comprises at least one first and one second electrode, wherein the at least one first electrode (12A) is arranged on a side (14) of the oesophageal electrode probe facing towards the heart and the at least one second electrode (12B) is arranged on a side (16) of the oesophageal electrode probe facing away from the heart. The device (100) comprises the oesophageal electrode probe (10) and a control and/or evaluation device (30), which is configured for receiving a first bioimpedance measurement signal from the at least one first electrode (12A) and a second bioimpedance measurement signal from the at least one second electrode (12B), and comparing same, and generating a control signal on the basis of the comparison. The control signal can be a signal for the open-loop or closed-loop control of a cardiac catheter ablation device and/or a cardiac, circulatory and/or respiratory support device.
Existing ultrasonic stress-evaluation methods utilize the acoustoelastic effect for bulk waves propagating in the volume, which is unsuitable for surface-treated materials possessing a significant variation of material properties with depth. With knowledge of the nonlinear elastic parameters – the third-order elastic constants (TOEC) – close to the surface of the sample, the acoustoelastic effect can be used with surface acoustic waves. This work focuses on the development of an independent method of TOEC measurement using the effect of nonlinear scattering of surface acoustic waves – i.e., the interaction of elastic waves in a nonlinear medium.
In this paper, the possible three-wave interactions of surface guided waves and bulk waves are described, and formulae for the efficiency of harmonic generation and mode mixing are derived. A comparison of the efficiency of surface-wave scattering in an isotropic medium for different interaction types is carried out with the help of nonlinear perturbation theory. First results for surface- and bulk-wave mixing with known second- and third-order elastic constants are shown.
In this paper, we establish a simple model for the exchange of messages in a vehicular network and consider fundamental limits on the achievable data rate. For a vehicular network, the exchange of data with other nearby vehicles is particularly important for traffic safety, e.g. for collision avoidance, but also for cooperative applications like platooning. These use cases are currently addressed by standards building on IEEE 802.11p, namely ITS-G5 and DSRC (dedicated short-range communication), which encounter saturation problems at high vehicle densities. For this reason, we take a step back and ask for the fundamental limits on the common data rate in a vehicular network. After defining a simple single-lane model and the corresponding capacity limits for some basic multiple-access schemes, we present results for a more realistic setting. For both scenarios, non-orthogonal multiple access (NOMA) yields the best results.
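To illustrate the kind of limit comparison the paper refers to — using a toy two-user Gaussian multiple-access channel, not the paper's single-lane vehicular model — the symmetric per-user rate under NOMA with successive interference cancellation can be compared against equal time-sharing:

```python
import math

def orthogonal_rate(snr):
    """Per-user rate (bit/s/Hz) for two users under equal time-sharing
    (TDMA), each transmitting at the given SNR during its own slot
    (no power boosting during the slot is assumed)."""
    return 0.5 * math.log2(1 + snr)

def noma_rate(snr):
    """Symmetric per-user rate for a two-user Gaussian multiple-access
    channel with superposition coding and successive interference
    cancellation: half the sum capacity log2(1 + 2*snr)."""
    return 0.5 * math.log2(1 + 2 * snr)

# Toy comparison over a few SNR operating points
for snr_db in (0, 10, 20):
    snr = 10 ** (snr_db / 10)
    print(f"{snr_db:2d} dB: TDMA {orthogonal_rate(snr):.3f}, "
          f"NOMA {noma_rate(snr):.3f} bit/s/Hz per user")
```

Under these assumptions the non-orthogonal scheme dominates at every SNR, which is the qualitative effect the abstract reports for the vehicular setting.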
This paper evaluates the implementation of Medium Access Control (MAC) protocols suitable for massive-access connectivity in 5G multi-service networks. The access protocol extends multi-packet detection receivers based on Physical Layer Network Coding (PLNC) decoding and Coded Random Access protocols, considering practical aspects of implementing one-stage MAC protocols for short-packet communications in mMTC services. Extensions to enhance the data-delivery phase in two-stage protocols are also proposed. The assessment of the access protocols is extended with system-level simulations in which a suitable link-to-system interface characterization has been taken into account.
Micro-cracks give rise to non-analytic behavior of the stress-strain relation. For the case of a homogeneous spatial distribution of aligned flat micro-cracks, the influence of this property of the stress-strain relation on harmonic generation is analyzed for Rayleigh waves and for acoustic wedge waves with the help of a simple micromechanical model adopted from the literature. For the efficiencies of harmonic generation of these guided waves, explicit expressions are derived in terms of the corresponding linear wave fields. The initial growth rate of the second harmonic, i.e., the acoustic nonlinearity parameter, has been evaluated numerically for steel as the matrix material. The growth rate of the second harmonic of Rayleigh waves has also been determined for micro-crack distributions with random orientation, using a model expression for the strain energy in terms of strain invariants known in a geophysical context.
Nonlinearity can give rise to intermodulation distortions in surface acoustic wave (SAW) devices operating at high input power levels. To understand such undesired effects, a finite element method (FEM) simulation model in combination with a perturbation theory is applied to find out the role of different materials and higher-order nonlinear tensor data for the nonlinearities in such acoustic devices. At high power, SAW devices containing metal, piezoelectric substrate, and temperature-compensating (TC) layers are subject to complicated geometrical, material, and other nonlinearities. In this paper, third-order nonlinearities in TC-SAW devices are investigated. The materials used are LiNbO3-rot128YX as the substrate and copper electrodes covered with a SiO2 film as the TC layer. An effective nonlinearity constant for a given system is determined by comparison of nonlinear P-matrix simulations to third-order intermodulation measurements of test filters in a first step. By employing these constants from different systems, i.e., different metallization ratios, in nonlinear periodic P-matrix simulations, a direct comparison to nonlinear periodic FEM simulations yields scaling factors for the materials used. Thus, the contribution of the different materials to the nonlinear behavior of TC-SAW devices is obtained, and the roles of metal electrodes, substrate, and TC film are discussed in detail.
For an elastic medium containing a homogeneous distribution of micro-cracks, an effective one-dimensional stress-strain relation has been determined with finite element simulations. In addition to flat micro-cracks, voids were considered that contain a Hertzian contact, which represents an example for micro-cracks with internal structure. The orientation of both types of micro-cracks was fully aligned or, for flat micro-cracks, totally random. For micro-cracks with Hertzian contacts, the case of random orientation was treated in an approximate way. The two types of defects were found to give rise to different degrees of non-analytic behavior of the effective stress-strain relation, which governs the nonlinear propagation of symmetric (S0) Lamb waves in the long-wavelength limit. The presence of flat micro-cracks causes even harmonics to grow linearly with propagation distance with amplitudes proportional to the amplitude of the fundamental wave, and gives rise to a static strain. The presence of the second type of defects leads to a linear growth of all harmonics with amplitudes proportional to the power 3/2 of the fundamental amplitude, and to a strain-dependent velocity shift. Simple expressions are given for the growth rates of higher harmonics of S0 Lamb waves in terms of the parameters occurring in the effective stress-strain relation. They have partly been determined quantitatively with the help of the FEM results for different micro-crack concentrations.
Among the various types of guided acoustic waves, acoustic wedge waves are non-diffractive and non-dispersive. Both properties make them susceptible to nonlinear effects. Investigations have recently been focused on effects of second-order nonlinearity in connection with anisotropy. The current status of these investigations is reviewed in the context of earlier work on nonlinear properties of two-dimensional guided acoustic waves, in particular surface waves. The role of weak dispersion, leading to solitary waves, is also discussed. For anti-symmetric flexural wedge waves propagating in isotropic media or in anisotropic media with reflection symmetry with respect to the wedge’s mid-plane, an evolution equation is derived that accounts for an effective third-order nonlinearity of acoustic wedge waves. For the kernel functions occurring in the nonlinear terms of this equation, expressions in terms of overlap integrals with Laguerre functions are provided, which allow for their quantitative numerical evaluation. First numerical results for the efficiency of third-harmonic generation of flexural wedge waves are presented.
This paper discusses the development of a wireless indoor Smart Gardening System with a focus on energy-autonomous operation. The Smart Gardening System presented in this paper consists of a network of energy-autonomous wireless sensor nodes which are used to monitor important plant parameters like air temperature, soil moisture, pressure or humidity, and in future to control an actuator for plant irrigation and to measure further parameters such as light and fertilizer level. Solar energy harvesting is used to power the wireless nodes without a battery, whereas comparable Smart Gardening Systems are usually battery-powered. Furthermore, the overall Smart Gardening System includes a battery-powered gateway based on a Raspberry Pi 3, which controls the wireless nodes and collects their sensor data. The gateway is able to send the information to an internet server application and, via Wi-Fi, to mobile devices. The architecture of the energy-autonomous wireless nodes is considered in particular, because fully energy-autonomous wireless networks cannot be implemented without special concepts for the energy supply and architecture of the wireless nodes.
The economic dispatch (ED) problem is a large-scale optimization problem in electric power grids. Its goal is to find a power output combination of all generator nodes that meets the customers' demand at minimum operating cost. In recent years, distributed protocols have been proposed to replace the traditional centralized ED calculation in modern smart grid infrastructures, the most realistic being the one proposed by Binetti et al. (2014). However, we show that this protocol leaks private information of the generator nodes. We then propose a privacy-preserving distributed protocol that solves the ED problem. We analyze the security of our protocol and give experimental results from a prototype implementation to show the feasibility of the solution.
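As background for the optimization problem described in this abstract, the classic centralized ED solution for quadratic generator costs can be sketched with lambda iteration; the cost coefficients and demand below are illustrative values, not data from the paper, and the privacy-preserving distributed protocol itself is not reproduced here.

```python
import numpy as np

# Illustrative quadratic costs C_i(p) = a_i*p^2 + b_i*p (made-up coefficients)
a = np.array([0.010, 0.015, 0.020])
b = np.array([2.0, 1.5, 1.8])
demand = 300.0  # total load, MW

# Lambda iteration: at the optimum all units operate at the same
# incremental cost lambda, so p_i = (lambda - b_i) / (2 * a_i).
# Bisect on lambda until the outputs sum to the demand.
lo, hi = 0.0, 50.0
for _ in range(100):
    lam = (lo + hi) / 2
    p = (lam - b) / (2 * a)
    if p.sum() > demand:
        hi = lam
    else:
        lo = lam

print(np.round(p, 2), round(p.sum(), 2))
```

In a distributed protocol each generator would reveal information related to its own `a_i`, `b_i`, and `p_i` during such an iteration, which is exactly the kind of leakage the paper addresses.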
Nowadays, robotic systems are an integral part of many orthopedic interventions. Stationary robots improve accuracy but also require adapted surgical workflows. Handheld robotic devices (HHRDs), however, are easily integrated into existing workflows and represent a more economical solution; their limited range of motion is compensated by the dexterity of the surgeon. This work presents control algorithms for HHRDs with multiple degrees of freedom (DOF). These algorithms protect pre- or intraoperatively defined regions from being penetrated by the end effector (e.g., a burr) by controlling the joints as well as the device’s power. Accuracy tests on a stationary prototype with three DOF show that the presented control algorithms produce results similar to those of stationary robots and much better results than conventional techniques. The presented algorithms are novel, work robustly and accurately, and open up new opportunities for orthopedic interventions.
The CAN bus is still an important fieldbus in various domains, e.g. for in-car communication or automation applications. To counter security threats and concerns in such scenarios, we design, implement, and evaluate an end-to-end security concept based on the Transport Layer Security (TLS) protocol. It is used to establish authenticated, integrity-checked, and confidential communication channels between field devices connected via CAN. Our performance measurements show that it is possible to use TLS at least for non-time-critical applications, as well as for generic embedded networks.
The increase in households with grid-connected photovoltaic (PV) battery systems poses a challenge for the grid due to high PV feed-in resulting from the mismatch between energy production and load demand. The purpose of this paper is to show how a Model Predictive Control (MPC) strategy can be applied to an existing grid-connected household with a PV battery system such that the use of the battery is maximized and, at the same time, peaks in PV feed-in and load demand are reduced. The benefit of this strategy is to increase the PV hosting capacity and load hosting capacity of the grid without the need for external signals from the grid operator. The paper includes the optimal control problem formulation for achieving the peak shaving goals, along with the experimental setup and preliminary experimental results. The goals of the experiment were to verify the hardware and software interface used to implement the MPC, as well as the ability of the MPC to deal with deviations in the weather forecast. A prediction correction over a short time horizon of one hour has also been introduced within this MPC strategy to estimate the PV output power behavior.
In rural low-voltage grid networks, the use of a battery in households with a grid-connected photovoltaic (PV) system is a popular solution to shave the peak PV feed-in to the grid. For a single electricity price scenario, existing forecast-based control approaches, together with a decision-based control layer, use weather and load forecast data for the on-off schedule of the battery operation. These approaches do bring a cost benefit from the battery usage. In this paper, the focus is to develop a Model Predictive Control (MPC) that maximizes the use of the battery and shaves the peaks in PV feed-in and load demand. The solution of the MPC keeps the PV feed-in and the grid consumption profile as low and as smooth as possible. The paper presents the mathematical formulation of the optimal control problem along with a cost-benefit analysis. The MPC implementation scheme in the laboratory and experimental results are also presented. The results show that the MPC is able to track deviations in the weather forecast and to operate the battery by solving the optimal control problem to handle these deviations.
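The peak-shaving objective of this kind of MPC can be illustrated with a small finite-horizon optimization. This is a minimal sketch with made-up forecasts and battery parameters, not the authors' formulation; losses and battery efficiency are ignored.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative hourly forecasts (kW) -- made-up numbers, not from the paper
pv   = np.array([0, 1, 3, 5, 6, 5, 3, 1, 0, 0], dtype=float)
load = np.array([2, 2, 1, 1, 1, 1, 2, 3, 4, 3], dtype=float)

cap, p_max, soc0, dt = 8.0, 3.0, 2.0, 1.0  # kWh, kW, initial kWh, h

def objective(c):
    grid = load - pv + c           # >0: grid import, <0: PV feed-in
    return np.sum(grid ** 2)       # penalizes peaks in both directions

def soc(c):
    # Simple lossless state of charge; c > 0 means charging
    return soc0 + dt * np.cumsum(c)

cons = [{"type": "ineq", "fun": lambda c: soc(c)},        # SoC >= 0
        {"type": "ineq", "fun": lambda c: cap - soc(c)}]  # SoC <= cap
res = minimize(objective, np.zeros(len(pv)), method="SLSQP",
               bounds=[(-p_max, p_max)] * len(pv), constraints=cons)

peak_before = np.abs(load - pv).max()
peak_after  = np.abs(load - pv + res.x).max()
print(round(peak_before, 2), round(peak_after, 2))
```

In a receding-horizon setting, only the first element of `res.x` would be applied before the problem is re-solved with updated forecasts, which is how forecast deviations are tracked.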
In recent times, the energy consumed by building facilities has become considerable. Efficient local energy management is vital to deal with building power demand penalties. This operation becomes complex when a hybrid energy system is included in the power system. This study proposes a new energy management scheme between the photovoltaic (PV) system, the Battery Energy Storage System (BESS), and the power network in a building by controlling the PV/BESS inverter. The strategy is based on explicit model predictive control (MPC) to find an optimal power flow in the building for one day ahead. The control algorithm is based on a simple power flow equation and the weather forecast. A cost function is then formulated and optimised using a genetic-algorithm-based solver. The objective is to reduce the energy imported from the grid while preventing saturation and depletion of the BESS. By including further targets in the control policy, such as energy price dynamics and BESS degradation, MPC can dramatically improve the efficiency of the overall building power system. The strategy is implemented and tested successfully using MATLAB/SimPowerSystems software; compared to classical hysteresis management, MPC yielded a 10% saving in energy cost and a 25% improvement in BESS lifetime.
Uncontrollable manufacturing variations in electrical hardware circuits can be exploited as Physical Unclonable Functions (PUFs). Herein, we present a Printed Electronics (PE)-based PUF system architecture. Our proposed Differential Circuit PUF (DiffC-PUF) is a hybrid system combining silicon-based and PE-based electronic circuits. The novel aspect of the DiffC-PUF architecture is a specially designed real hardware system architecture that enables the automatic readout of interchangeable printed DiffC-PUF core circuits. The silicon-based addressing and evaluation circuit supplies and controls the printed PUF core and ensures seamless integration into silicon-based smart systems. The main target of our work is interconnected applications for the Internet of Things (IoT).
Vehicle-to-Everything (V2X) communication promises improvements in road safety and efficiency by enabling low-latency and reliable communication services for vehicles. Besides using Mobile Broadband (MBB), there is a need to develop Ultra Reliable Low Latency Communications (URLLC) applications with cellular networks, especially where safety-related driving applications are concerned. Future cellular networks are expected to support novel latency-sensitive use cases. Many applications of V2X communication, like collaborative autonomous driving, require very low latency and high reliability in order to support real-time communication between vehicles and other network elements. In this paper, we classify V2X use cases and their requirements in order to identify cellular network technologies able to support them. The bottleneck of medium access in 4G Long Term Evolution (LTE) networks is the random access procedure. It is evaluated through simulations to further detail the future limitations and requirements. Limitations and improvement possibilities for the next generation of cellular networks are then detailed. Moreover, the results presented in this paper provide the limits of different parameter sets with regard to the requirements of V2X-based applications, giving a starting point for migrating to Narrowband IoT (NB-IoT) or 5G solutions.
The next generation of cellular networks is expected to improve reliability, energy efficiency, data rate, capacity, and latency. Originally, Machine Type Communication (MTC) was designed for low-bandwidth, high-latency applications such as environmental sensing or smart dustbins, but there is additional demand from applications with low latency requirements, like industrial automation, driverless cars, and so on. Improvements to 4G Long Term Evolution (LTE) networks are required on the way towards next-generation cellular networks providing very low latency and high reliability. To this end, we present an in-depth analysis of the parameters that contribute to latency in 4G networks, along with a description of latency reduction techniques. We implement and validate these latency reduction techniques in the open-source network simulator NS3 for the narrowband user equipment category Cat-M1 (LTE-M) to analyze the improvements. The results presented are a step towards enabling narrowband Ultra Reliable Low Latency Communication (URLLC) networks.
The excessive control signaling required for dynamic scheduling in Long Term Evolution networks impedes the deployment of ultra-reliable low latency applications. Semi-persistent scheduling was originally designed for constant bit-rate voice applications; however, its very low control overhead makes it a potential latency reduction technique in Long Term Evolution. In this paper, we investigate resource scheduling in narrowband fourth-generation Long Term Evolution networks through Network Simulator (NS3) simulations. The current release of NS3 does not include a semi-persistent scheduler for the Long Term Evolution module. Therefore, we developed the semi-persistent scheduling feature in NS3 to evaluate and compare performance in terms of uplink latency. We evaluate dynamic scheduling and semi-persistent scheduling in order to analyze the impact of the resource scheduling method on uplink latency.
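The latency gap between the two scheduling modes can be illustrated with a back-of-the-envelope budget. All component values below are illustrative examples, not results from the NS3 simulations described in the abstract.

```python
# Rough uplink latency budgets (ms) for the two LTE scheduling modes.
# Under dynamic scheduling the UE must first request resources.
dynamic = {
    "sr_wait":     5.0,  # average wait for a scheduling-request opportunity
    "grant_cycle": 8.0,  # SR decoding plus uplink-grant round trip
    "tx_tti":      1.0,  # one transmission time interval
    "enb_rx":      1.5,  # eNB decoding of the uplink data
}
# Semi-persistent scheduling pre-allocates resources, so the scheduling
# request and grant exchange disappear; the UE only waits for the next
# configured occasion (on average half the SPS period).
sps = {
    "occasion_wait": 5.0,
    "tx_tti":        1.0,
    "enb_rx":        1.5,
}

print(sum(dynamic.values()), sum(sps.values()))  # 15.5 7.5
```

Even with these rough numbers, removing the request/grant exchange roughly halves the uplink latency, which is why SPS is attractive for URLLC-style traffic.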
The Datagram Transport Layer Security (DTLS) protocol has been designed to provide end-to-end security over unreliable communication links. Where its connection establishment is concerned, DTLS copes with potential loss of protocol messages by implementing its own loss detection and retransmission scheme. However, the default scheme turns out to be suboptimal for links with high transmission error rates and low data rates, such as wireless links in electromagnetically harsh industrial environments. Therefore, in this paper, as a first step we provide an analysis of the standard DTLS handshake's performance under such adverse transmission conditions. Our studies are based on simulations that model message loss as the result of bit transmission errors. We consider several handshake variants, including endpoint authentication via pre-shared keys or certificates. As a second step, we propose and evaluate modifications to the way message loss is dealt with during the handshake, making DTLS deployable in situations which are prohibitive for default DTLS.
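The retransmission behaviour analyzed in this abstract follows the standard DTLS timer (RFC 6347: initial timeout of 1 s, doubled after each retransmission). Its effect on handshake duration can be sketched with a simple expected-delay model; this sketch treats loss of a whole flight as i.i.d. with a fixed probability and ignores propagation delay, both simplifying assumptions rather than the paper's bit-error model.

```python
def expected_flight_time(p_loss, timeout=1.0, max_retries=6):
    """Expected delivery delay (s) of one handshake flight, conditioned
    on success within max_retries retransmissions, with the RFC 6347
    doubling timer (initial value `timeout` seconds)."""
    t_elapsed, expected, p_reach = 0.0, 0.0, 1.0
    for _ in range(max_retries + 1):
        # Attempt happens at t_elapsed; it succeeds with prob (1 - p_loss)
        expected += p_reach * (1 - p_loss) * t_elapsed
        p_reach *= p_loss          # all attempts so far were lost
        t_elapsed += timeout       # next attempt after the current timeout
        timeout *= 2               # exponential backoff
    return expected / (1 - p_loss ** (max_retries + 1))

print(round(expected_flight_time(0.0), 2))  # 0.0 -- no retransmission needed
print(round(expected_flight_time(0.3), 2))
```

The model makes the paper's point visible: because the timer doubles, even moderate loss rates inflate the expected handshake time sharply, which motivates modifying the loss-handling scheme for lossy, low-rate links.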
The Transport Layer Security (TLS) protocol is a cornerstone of secure network communication, not only for online banking, e-commerce, and social media, but also for industrial communication and cyber-physical systems. Unfortunately, implementing TLS correctly is very challenging, as becomes evident from the high frequency of bugfixes filed for many TLS implementations. Given the high significance of TLS, advancing the quality of implementations is a sustained pursuit. We strive to support these efforts by presenting a novel, response-distribution guided fuzzing algorithm for differential testing of black-box TLS implementations. Our algorithm generates highly diverse and mostly valid TLS stimulation messages, which evoke more behavioral discrepancies in TLS server implementations than other algorithms. We evaluate our algorithm using 37 different TLS implementations and discuss, by means of a case study, how the resulting data allows one not only to assess and improve implementations of TLS but also to identify underspecified corner cases. We introduce suspiciousness as a per-implementation metric of anomalous implementation behavior and find that more recent or bug-fixed implementations tend to have a lower suspiciousness score. Our contribution is complementary to existing tools and approaches in the area and can help reveal implementation flaws and avoid regression. While presented for TLS, we expect our algorithm's guidance scheme to be applicable and useful in other contexts as well. Source code and data are made available to fellow researchers in order to stimulate discussions and invite others to benefit from and advance our work.
Cell lifetime diagnostics and system behavior of stationary LFP/graphite lithium-ion batteries
(2018)
The paper describes the methodology and experimental results for revealing similarities in the thermal dependencies of accelerometer and gyroscope biases from 250 inertial MEMS chips (MPU-9250). Temperature profiles were measured on an experimental setup with a Peltier element for temperature control. Classification of the temperature curves was carried out with a machine learning approach.
A perfect sensor should exhibit no thermal dependency at all. Thus, only sensors inside the clusters with smaller dependency (smaller total temperature slopes) might be pre-selected for the production of high-accuracy inertial navigation modules. It was found that no unified thermal profile (“family” curve) exists for all sensors in a production batch. However, sensors can be grouped according to their parameters, and the temperature compensation profiles can then be regressed for each group. Twelve slope coefficients on 5-degree temperature intervals from 0 °C to +60 °C were used as the features for the k-means++ clustering algorithm.
The minimum number of clusters for which all sensors in our case are well separated from each other by their bias thermal profiles is six; it was found by applying the elbow method. For each cluster, a regression curve can be obtained.
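The clustering step described above (k-means++ on twelve 5-degree slope features) can be sketched on synthetic data. The cluster structure below is simulated, since the actual 250-sensor dataset is not available here, and the tiny hand-rolled k-means stands in for a library implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the paper's data: per-sensor bias-vs-temperature
# slopes on twelve 5-degree intervals covering 0..60 C (units arbitrary).
n_feats, k = 12, 6
centers_true = rng.normal(0, 1, (k, n_feats))
X = np.vstack([c + 0.1 * rng.normal(size=(40, n_feats)) for c in centers_true[:5]]
              + [centers_true[5] + 0.1 * rng.normal(size=(50, n_feats))])

def kmeans_pp(X, k, n_iter=50):
    # k-means++ seeding: each new center is drawn with probability
    # proportional to its squared distance from the nearest chosen center.
    centers = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        d2 = np.min([np.sum((X - c) ** 2, axis=1) for c in centers], axis=0)
        centers.append(X[rng.choice(len(X), p=d2 / d2.sum())])
    centers = np.array(centers)
    for _ in range(n_iter):  # Lloyd iterations
        labels = np.argmin([np.sum((X - c) ** 2, axis=1) for c in centers], axis=0)
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels, centers

labels, centers = kmeans_pp(X, k)
print(np.bincount(labels, minlength=k))
```

Each row of `centers` then plays the role of a per-cluster temperature compensation profile from which a regression curve could be fitted.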
Recently, the demand for scalable, efficient, and accurate Indoor Positioning Systems (IPS) has been rising due to their utility in providing Location-Based Services (LBS). Visible Light Communication (VLC) based IPS designs (VLC-IPS) leverage Light Emitting Diodes (LEDs) in indoor environments for localization. Among VLC-based designs, Time Difference of Arrival (TDOA) based techniques are shown to provide very low errors in the relative position of receivers. Our considered system consists of five LEDs that act as transmitters and a single receiver (a photodiode or the image sensor in a smartphone) whose position coordinates in an indoor environment are to be determined. As a performance criterion, the Cramer-Rao Lower Bound (CRLB) is derived for range estimation, and the impact of various factors, such as LED transmission frequency, position of the reference LED, and the number of LEDs, on localization accuracy is studied. Simulation results show that, for suitable values of these factors, a location estimation accuracy on the order of a few centimeters can realistically be achieved.
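To make the TDOA geometry concrete, a noiseless toy example with five LEDs can be solved by nonlinear least squares. Positions and room dimensions below are illustrative, and the paper's CRLB derivation is not reproduced here.

```python
import numpy as np
from scipy.optimize import least_squares

c = 3e8  # speed of light, m/s

# Illustrative geometry: five ceiling LEDs (m) and the true receiver position
leds = np.array([[0, 0, 3], [4, 0, 3], [0, 4, 3], [4, 4, 3], [2, 2, 3]], float)
rx_true = np.array([1.2, 2.5, 0.8])

# TDOAs relative to the reference LED (index 0), here noiseless
d = np.linalg.norm(leds - rx_true, axis=1)
tdoa = (d[1:] - d[0]) / c

def residuals(p):
    # Range-difference residuals (in meters) against the measured TDOAs
    r = np.linalg.norm(leds - p, axis=1)
    return (r[1:] - r[0]) - c * tdoa

sol = least_squares(residuals, x0=np.array([2.0, 2.0, 1.0]))
print(np.round(sol.x, 3))
```

With all LEDs in one ceiling plane there is a mirror ambiguity in height, which the initial guess below the ceiling resolves; in practice, measurement noise on `tdoa` is what the CRLB in the paper bounds.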
Modelling detailed chemistry in lithium-ion batteries: Insight into performance, ageing and safety
(2018)
Real-Time Ethernet has become the major communication technology for modern automation and industrial control systems. On the one hand, this trend increases the need for an automation-friendly security solution, as such networks can no longer be considered sufficiently isolated. On the other hand, it shows that, despite diverging requirements, the domain of Operational Technology (OT) can derive advantage from high-volume technology of the Information Technology (IT) domain. Based on these two sides of the same coin, we study the challenges and prospects of approaches to communication security in Real-Time Ethernet automation systems. In order to capitalize on the expertise aggregated in decades of research and development, we put a special focus on the reuse of well-established security technology from the IT domain. We argue that enhancing such technology to become automation-friendly is likely to result in more robust and secure designs than greenfield designs. Because of its widespread deployment and the (to this date) nonexistence of a consistent security architecture, we use PROFINET as a showcase for our considerations. Security requirements for this technology are defined, and different well-known solutions are examined according to their suitability for PROFINET. Based on these findings, we elaborate the necessary adaptations for deployment on PROFINET.
Colored glass products manufactured with various printing technologies are becoming more important in industry. The aim is to achieve individual solutions within very short delivery times. Conventional thermal treatment, in which glasses are burned in an oven for tempered color printing, suffers from high time consumption, energy consumption, and manufacturing cost, which calls for the development of an alternative process.
This paper proposes a laser process to overcome the issues of the conventional treatment and presents the latest results on tempering colored glass. Samples have been analyzed with a scanning electron microscope (SEM). Two different laser systems have been applied, and the glass has been printed with black paste.
In recent years, the application of the TRIZ methodology in process engineering has been found promising for developing comprehensive inventive solution concepts for process intensification (PI). However, the effectiveness of TRIZ for PI has not yet been measured or estimated. The paper describes an approach to evaluate the efficiency of TRIZ application in process intensification by comparing six case studies in the chemical, pharmaceutical, ceramic, and mineral industries. In each case study, TRIZ workshops with teams of researchers and engineers were performed to analyze the initial complex problem situation, identify problems, generate new ideas, and create solution concepts. The analysis of the workshop outcomes estimates the fulfilment of the PI goals, the impact of secondary problems, and the variety and efficiency of ideas and solution concepts. In addition to the observed positive effect of TRIZ application, the most effective inventive principles for process engineering have been identified.
Identification of Secondary Problems of New Technologies in Process Engineering by Patent Analysis
(2018)
The implementation of new technologies in production plants often causes negative side effects and drawbacks. In this context, the prediction of the secondary problems and risks can be used advantageously for selecting best solutions for intensification of the processes. The proposed method puts primary emphasis on systematic and fast anticipation of secondary problems using patent documents, and on extraction and prediction of possible engineering contradictions within novel technical systems. The approach comprises three ways to find secondary problems: (a) direct knowledge-based identification of secondary problems in new technologies or equipment; (b) identification of secondary problems of prototypes mentioned in patent citation trees; and (c) prediction of negative side effects using the correlation matrix for invention goals and secondary problems in a specific engineering domain.
The research work analyses the relationship of 155 Process Intensification (PI) technologies to the components of the Theory of Inventive Problem Solving (TRIZ). It outlines TRIZ inventive principles frequently used in PI, and identifies opportunities for enhancing systematic innovation in process engineering by applying complementary TRIZ and PI. The study also proposes 70 additional inventive TRIZ sub-principles for the problems frequently encountered in process engineering, resulting in the advanced set of 160 inventive operators, assigned to the 40 TRIZ inventive principles. Finally, we analyse and discuss inventive principles used in 150 patent documents published in the last decade in the field of solid handling in the ceramic and pharmaceutical industries.
Process engineering (PE) focuses on the design, operation, control, and optimization of chemical, physical, and biological processes and has applications in many industries. Process intensification (PI) is the key development approach in modern process engineering. The theory of inventive problem solving (TRIZ) is today considered the most comprehensive and systematically organized invention knowledge and creative thinking methodology. This paper analyses the opportunities for TRIZ application in PE, especially in combination with PI. In this context, the paper outlines the major challenges for TRIZ application in PE, conceptualizes a possible TRIZ-based approach for process intensification and problem solving in PE, and defines the corresponding research agenda. It also presents the results of original empirical innovation research in the field of solids handling in the ceramic industry, demonstrates a method for the identification and prediction of contradictions, and introduces the concept of the probability of contradiction occurrence. Additionally, it describes a technique of process mapping based on function and multi-screen analysis of the processes. This technique is illustrated by a case study dealing with a granulation process. The research work presented in this paper is part of the European project “Intensified by Design® platform for the intensification of processes involving solids handling”.
The modern TRIZ is today considered the most organized and comprehensive methodology for knowledge-driven invention and innovation. When applying TRIZ for inventive problem solving, the quality of the obtained solutions strongly depends on the completeness of the problem analysis and the ability of designers to identify the main technical and physical contradictions in the inventive situation. These tasks are more complex, and hence more time-consuming, in the case of interdisciplinary systems. Considering a mechatronic product as a system resulting from the integration of different technologies, the problem definition reveals two kinds of contradictions: 1) mono-disciplinary contradictions within a homogeneous sub-system, e.g., only mechanical or only electrical; 2) interdisciplinary contradictions resulting from the interaction of the mechatronic sub-systems (mechanics, electrics, control, and software). This paper presents a TRIZ-based approach for fast and systematic problem definition and contradiction identification, which can be useful both for engineers and for students facing mechatronic problems. It also proposes some useful problem formulation techniques such as the System Circle Diagram, the enhancement of the System Operator with the Evolution Patterns, the extension of the MATChEM-IB operator with the Information field and Human Interactions, as well as the Cause-Effect Matrix.
Economic growth and ecological problems motivate industries to apply eco-friendly technologies and equipment. However, environmental impact, followed by energy and material consumption, still remains among the main negative implications of technological progress in process engineering. Based on extensive patent analysis, this paper assigns more than 250 identified eco-innovation problems and requirements to 14 general eco-categories, with energy consumption and losses, air pollution, and acidification as the top issues. It defines primary eco-engineering contradictions, in case eco-problems appear as negative side effects of new technologies, and secondary eco-engineering contradictions, in case eco-friendly solutions have new environmental drawbacks. The study conceptualizes a correlation matrix between the eco-requirements for the prediction of typical eco-contradictions, using the example of processes involving solids handling. Finally, it summarizes major eco-innovation approaches, including Process Intensification in process engineering, and chronologically reviews 66 papers on eco-innovation adapting the TRIZ methodology. Based on the analysis of 100 eco-patents, 58 process intensification technologies, and the literature, the study identifies 20 universal TRIZ inventive principles and sub-principles that have a higher value for environmental innovation.
Economic growth and ecological problems have pushed industries to switch to eco-friendly technologies. However, environmental impact is still often neglected since production efficiency remains the main concern. Patent analysis in the field of process engineering shows that, on the one hand, some eco-issues appear as secondary problems of the new technologies, and on the other hand, eco-friendly solutions often show lower efficiency or performance capability. The study categorizes typical environmental problems and eco-contradictions in the field of process engineering involving solids handling and identifies underlying inventive principles that have a higher value for environmental innovation. Finally, 42 eco-innovation methods adapting TRIZ are chronologically presented and discussed.
Accelerated transformation of society and industry through digitalization, artificial intelligence, and other emerging technologies has intensified the need for university graduates who are capable of rapidly finding breakthrough solutions to complex problems and can successfully implement innovation concepts. However, only a few universities make significant efforts to comprehensively incorporate the creative and systematic tools of TRIZ (theory of inventive problem solving) and KBI (knowledge-based innovation) into their degree structure. Engineering curricula offer little room for enhancing creativity and inventiveness by means of discipline-specific subjects. Moreover, many educators mistakenly believe that students are either inherently creative or will inevitably obtain adequate problem-solving skills as a result of their university study. This paper discusses the challenges of intelligently integrating TRIZ and KBI into university curricula. It advocates the need to develop standard guidelines and best-practice recommendations in order to facilitate the sustainable education of ambitious, talented, and inventive specialists. Reflections of educators who teach TRIZ and KBI to students of mechanical, electrical, and process engineering as well as business administration are presented.
The comprehensive assessment method includes 80 innovation performance parameters and 10 key indicators of innovation capability, such as innovation process performance, innovating system performance, market and customer orientation, technology orientation, creativity, leadership, communication and knowledge management, risk and cost management, innovative climate, and innovation competences. The cross-industry study identifies parameters critical for innovation success and reveals different innovation performance patterns in companies.
CONTEXT
The paper addresses the needs of medium and small businesses regarding qualification of R&D specialists in the interdisciplinary cross-industry innovation, which promises a considerable reduction of investments and R&D expenditures. The cross-industry innovation is commonly understood as identification of analogies and transfer of technologies, processes, technical solutions, working principles or business models between industrial sectors. However, engineering graduates and specialists frequently lack the advanced skills and knowledge required to run interdisciplinary innovation across the industry boundaries.
PURPOSE
The study compares the efficiency of the cross-industry innovation methods in one semester project-oriented course. It identifies the individual challenges and preferred working techniques of the students with different prior knowledge, sets of experiences, and cultural contexts, which require attention by engineering educators.
APPROACH
Two parallel one-semester courses were offered to the mechanical and process engineering students enrolled in bachelor's and master's degree programs at the faculty of mechanical and process engineering. The students, from different years of study, worked in 12 teams of 3…6 persons each on different innovation projects, spending two hours a week in the classroom and, on average, an additional two hours weekly on their project research. Students' feedback and self-assessments concerning gained skills, efficiency of the learned tools, and intermediate findings were documented, analysed, and discussed regularly throughout the course.
RESULTS
Analysis of the numerous student projects allows us to compare and select the tools most appropriate for finding cross-industry solutions, such as thinking in analogies, web monitoring, function-oriented search, databases of technological effects and processes, special creativity techniques, and others. The utilization of the learned skills in practical innovation work strengthens the motivation of the students and enhances their entrepreneurial competences. The suggested learning course and the given recommendations help facilitate the sustainable education of ambitious specialists.
CONCLUSIONS
Structured cross-industry innovation can be successfully run as a systematic process and learned in a one-semester course. The choice of preferred working techniques made by the students is affected by their prior knowledge in science, practical experience, and cultural context. Major outcomes of the students' innovation projects, such as feasibility, novelty, and customer value of the concepts, are primarily influenced by the students' engineering design skills, prior knowledge of the technologies, and industrial or business experience.
The production of potable water in dry areas is nowadays mainly done by the desalination of seawater. State-of-the-art desalination plants are usually built with high production capacities and consume a lot of electrical energy or energy from primary resources such as oil. This causes difficulties in rural areas, where no infrastructure is available, neither for the plants' energy supply nor for the distribution of the produced potable water. To address this need, small, self-sustaining, and locally operated desalination plants have come into the focus of research. In this work, a novel flash evaporator design is proposed which can be driven either by solar power or by low-temperature waste heat. It offers low operating costs as well as easy maintenance. The results of an experimental setup operated with water at a feed flow rate of up to 1,600 l/h are presented. The proof of concept regarding efficient evaporation as well as efficient gas-liquid separation is provided successfully. The experimental evaporation yield amounts to 98% of the vapor content expected from the vapor pressure curve of water. Neither measurements of the electrical conductivity of the obtained condensate nor optical analysis of the vapor flow show significant droplet entrainment, so there are no concerns regarding the purity of the produced condensate for use as drinking water.
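As a rough orientation for the flash evaporation principle above, the equilibrium flash fraction follows from a simple energy balance: the sensible heat released when the feed cools to the flash temperature supplies the latent heat of the vapor. The property values are textbook round numbers for water, not data from the experiment, and temperatures are illustrative.

```python
cp   = 4186.0   # J/(kg K), specific heat of liquid water
h_fg = 2.26e6   # J/kg, latent heat of vaporization (round number)

def flash_fraction(t_feed_c, t_flash_c):
    # Mass fraction of the feed that evaporates when it is flashed
    # from t_feed_c down to the saturation temperature t_flash_c.
    return cp * (t_feed_c - t_flash_c) / h_fg

print(round(flash_fraction(80.0, 45.0), 4))  # 0.0648 -> about 6.5% flashes
```

This illustrates why flash evaporators process large feed flows for a comparatively small condensate yield, and why low-temperature heat sources such as solar or waste heat suffice to drive the process.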
Our media-artistic performances and installations, INTERCORPOREAL SPLITS (2010–2013), BUZZ (2014–2015), WASTELAND (2015–2016), as well as our new collaboration with Bruno Latour, DE\GLOBALIZE (2018–2020), are not just about polyphony. Here, however, we rediscover them under this heading, thus giving them a new twist, while mapping out issues, mechanisms and functional modes of the polyphonic.
A method for determining properties of a pipeline includes feeding a sound wave signal at a predetermined feed point into the pipeline so that the sound wave signal propagates in an axial direction of the pipeline. The frequency spectrum of the transmitted sound wave signal has a frequency component or a spectral range with a maximum frequency that is smaller than the lower limit frequency for the first upper mode. Reflected portions of the transmitted sound wave signal are detected as received sound wave signal and are evaluated with regard to the transmitted sound wave signal to determine at least the distance of each reflection site from the feed point.
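The distance evaluation in the final step reduces to the basic pulse-echo relation: the signal travels to the reflection site and back, so the one-way distance is half the wave speed times the echo delay. The wave speed below is an assumed example value for a guided mode in a steel pipe, since none is fixed here.

```python
def reflection_distance(echo_delay_s, wave_speed_m_s=5000.0):
    # Round-trip travel: distance = v * t / 2
    return wave_speed_m_s * echo_delay_s / 2.0

print(reflection_distance(0.004))  # 4 ms round trip -> 10.0 m
```

Keeping the excitation below the cut-off frequency of the first higher-order mode, as the abstract specifies, ensures a single propagating mode with one well-defined speed, which is what makes this simple conversion valid.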
Online comment on: "Printing ferromagnetic domains for untethered fast-transforming soft materials"
(2018)
Numerous 2,5-dimethoxy-N-benzylphenethylamines (NBOMes), carrying a variety of lipophilic substituents at the 4-position, are potent agonists at 5-hydroxytryptamine (5-HT2A) receptors and show hallucinogenic effects. The present study investigated the metabolism of 25D-NBOMe, 25E-NBOMe, and 25N-NBOMe using the microsomal model of pooled human liver microsomes (pHLM) and the microbial model of the fungus Cunninghamella elegans (C. elegans). Identification of metabolites was performed using liquid chromatography-high-resolution tandem mass spectrometry (LC-HR-MS/MS) with a quadrupole time-of-flight (QqToF) instrument. In total, 36 25D-NBOMe phase I metabolites, 26 25E-NBOMe phase I metabolites and 24 25N-NBOMe phase I metabolites were detected and identified in pHLM. Furthermore, 14 metabolites of 25D-NBOMe, 11 25E-NBOMe metabolites, and nine 25N-NBOMe metabolites could be found in C. elegans. The main biotransformation steps observed were oxidative deamination, oxidative N-dealkylation also in combination with hydroxylation, oxidative O-demethylation possibly combined with hydroxylation, oxidation of secondary alcohols, mono- and dihydroxylation, oxidation of primary alcohols, and carboxylation of primary alcohols. Additionally, oxidative di-O-demethylation for 25E-NBOMe and reduction of the aromatic nitro group and N-acetylation of the primary aromatic amine for 25N-NBOMe took place. The resulting 25N-NBOMe metabolites were unique among NBOMe compounds. For all NBOMes investigated, the corresponding 2,5-dimethoxyphenethylamine (2C-X) metabolite was detected. This study reports for the first time 25X-NBOMe N-oxide metabolites (identified for 25D-NBOMe and 25N-NBOMe) and hydroxylamine metabolites (identified for all three investigated NBOMes). C. elegans was capable of generating all main biotransformation steps observed in pHLM and might therefore be an interesting model for further studies of the metabolism of new psychoactive substances (NPS).
Lithium-ion pouch cells with a lithium titanate (Li4Ti5O12, LTO) anode and a lithium nickel cobalt aluminum oxide (LiNi0.8Co0.15Al0.05O2, NCA) cathode were investigated experimentally with respect to their electrical (0.1C…4C), thermal (5 °C…50 °C) and long-term cycling behavior. The 16 Ah cell exhibits an asymmetric charge/discharge behavior which leads to a strong capacity-rate effect, as well as a significantly temperature-dependent capacity (0.37 Ah ∙ K−1) which manifests as an additional high-temperature feature in the differential voltage plot. The cell was cycled for 10,000 cycles between the nominal voltage limits (1.7–2.7 V) with a symmetric 4C constant-current charge/discharge protocol, corresponding to approx. 3400 equivalent full cycles. A small (0.192 mΩ/1000 cycles) but continuous increase of internal resistance was observed. Using electrochemical impedance spectroscopy (EIS), this could be attributed to the NCA cathode, while the LTO anode showed only minor changes during cycling. The temperature-corrected capacity during 4C cycling decreased by 1.28%/1000 cycles. The 1C discharge capacity faded by only 4.0% for CC discharge and 2.3% for CCCV discharge after 10,000 cycles. The cell thus exhibits very good internal-resistance stability and excellent capacity retention even under harsh (4C continuous) cycling, demonstrating the excellent stability of LTO as an anode material.
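The degradation rates reported above are small and nearly constant, so a first-order projection over cycle number is straightforward. A minimal sketch using the quoted rates (the initial resistance is illustrative, and extrapolating the linear trends beyond the measured 10,000 cycles is an assumption):

```python
def project_aging(n_cycles,
                  r0_mohm=1.0,             # illustrative initial resistance
                  dr_mohm_per_1k=0.192,    # reported resistance growth rate
                  fade_pct_per_1k=1.28):   # reported 4C capacity fade rate
    """Linear extrapolation of internal resistance (mOhm) and relative
    remaining capacity over cycle count, per the rates reported for the cell."""
    k = n_cycles / 1000.0
    resistance_mohm = r0_mohm + dr_mohm_per_1k * k
    capacity_frac = 1.0 - fade_pct_per_1k / 100.0 * k
    return resistance_mohm, capacity_frac

# After the full 10,000-cycle test: +1.92 mOhm, ~87% remaining capacity
r_10k, c_10k = project_aging(10_000)
```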
One of the bottlenecks hindering the use of polymer electrolyte membrane fuel cell technology in automotive applications is the highly load-sensitive degradation of the cell components. The cell failure cases reported in the literature show localized cell component degradation, mainly caused by flow-field-dependent non-uniform distribution of reactants. The existing methodologies for the diagnosis of localized cell failure are either invasive or require sophisticated and expensive apparatus. In this study, with the help of a multiscale simulation framework, a single polymer electrolyte membrane fuel cell (PEMFC) model is exposed to a standardized drive cycle provided by a system model of a fuel cell car. A 2D multiphysics model of the PEMFC is used to investigate catalyst degradation due to spatio-temporal variations in the fuel cell state variables under the highly transient load cycles. A three-step (extraction, oxidation, and dissolution) model of platinum loss in the cathode catalyst layer is used to investigate the cell performance degradation due to the consequent reduction in the electrochemically active surface area (ECSA). Using a time-upscaling methodology, we present a comparative prediction of cell end of life (EOL) under the different driving behaviors of the New European Driving Cycle (NEDC) and the Worldwide Harmonized Light Vehicles Test Cycle (WLTC).
On the Fundamental and Practical Aspects of Modeling Complex Electrochemical Kinetics and Transport
(2018)
Numerous technologies, such as batteries and fuel cells, depend on electrochemical kinetics. In some cases, the responsible electrochemistry and charged-species transport are complex. However, to date, there are essentially no general-purpose modeling capabilities that facilitate the incorporation of thermodynamic, kinetic, and transport complexities into the simulation of electrochemical processes. The vast majority of the modeling literature uses only a few (often only one) global charge-transfer reactions, with the rates expressed using Butler–Volmer approximations. The objective of the present paper is to identify common aspects of electrochemistry, seeking a foundational basis for designing and implementing software with general applicability across a wide range of materials sets and applications. The development of new technologies should be accelerated and improved by enabling the incorporation of electrochemical complexity (e.g., multi-step, elementary charge-transfer reactions as well as supporting ionic and electronic transport) into the analysis and interpretation of scientific results. The spirit of the approach is analogous to the role that Chemkin has played in homogeneous chemistry modeling, especially combustion. The Cantera software, which already has some electrochemistry capabilities, forms the foundation for future capabilities expansion.
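The Butler–Volmer approximation mentioned above expresses the net current density of a single global charge-transfer reaction as the difference of anodic and cathodic exponentials in the overpotential. A minimal sketch of that rate law (the exchange current density and transfer coefficients are illustrative values, not from the paper):

```python
import math

F = 96485.33212    # Faraday constant, C/mol
R = 8.314462618    # universal gas constant, J/(mol*K)

def butler_volmer(eta, i0=1e-3, alpha_a=0.5, alpha_c=0.5, T=298.15):
    """Butler-Volmer net current density (A/cm^2) for one global
    charge-transfer reaction at overpotential eta (V): the anodic and
    cathodic branches are exponentials that cancel at equilibrium."""
    f = F / (R * T)
    return i0 * (math.exp(alpha_a * f * eta)
                 - math.exp(-alpha_c * f * eta))

# Anodic branch dominates for positive overpotential
i_anodic = butler_volmer(0.1)   # A/cm^2 at +100 mV overpotential
```

With symmetric transfer coefficients (alpha_a = alpha_c) the curve is antisymmetric about the equilibrium potential, which is why a single-equation fit is so common in the literature the paper surveys.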
We present an electrochemical model of a lithium iron phosphate/graphite (LFP/C6) cell that includes combined aging mechanisms: (i) Electrochemical formation of the solid electrolyte interphase (SEI) at the anode, leading to loss of lithium inventory, (ii) breaking of the SEI due to volume changes of the graphite particles, causing accelerated SEI growth, and (iii) loss of active material due to loss of percolation of the liquid electrolyte resulting from electrode dry-out. The latter requires the introduction of an activity-saturation relationship. A time-upscaling methodology is developed that makes it possible to simulate large time spans (thousands of operating hours). The combined modeling and simulation framework is able to predict calendaric and cyclic aging up to the end of life of the battery cells. The aging parameters are adjusted to match literature calendaric and cyclic aging experiments, resulting in quantitative agreement of simulated nonlinear capacity loss with experimental data. The model predicts and provides an interpretation for the dependence of capacity loss on temperature, cycling depth, and average SOC. The introduction of a percolation threshold in the activity-saturation relationship makes it possible to capture the strong nonlinearity of aging toward end of life (“sudden death”).
With the need for automatic-control-based supervisory controllers for complex energy systems comes the need for reduced-order system models representing not only the non-linear behaviour of the components but also certain unknown process dynamics such as their internal control logic. At the Institute of Energy Systems Technology in Offenburg, we have built a real-life microscale trigeneration plant and present in this paper a rational modelling procedure that satisfies the characteristics necessary for models to be applied in model predictive control for grid-reactive optimal scheduling of this complex energy system. These models are validated against experimental data, and the efficacy of the methodology is discussed. Their future application to the optimal scheduling problem is also briefly motivated.
Solar irradiance prediction is vital for power management and cost reduction when integrating solar energy. This study works towards ground-image-based solar irradiance prediction, which depends strongly on cloud coverage. The sky images are collected using a ground-based sky imager (fisheye lens). In this work, different algorithms for cloud detection, a preparatory step for cloud segmentation, are compared.
The fisheye camera has been widely studied in the fields of ground-based sky imagery and robot vision, since it can capture a wide view of the scene at one time. However, serious image distortion is a major drawback hindering its wider use. To remedy this, this paper proposes a lens calibration and distortion correction method for detecting clouds and forecasting solar radiation. The radial distortion of the fisheye image can then be corrected by incorporating the estimated calibration parameters. Experimental results validate the effectiveness of the proposed method.
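One common way to carry out such a radial distortion correction, once the calibration parameters are known, assumes the equidistant fisheye projection model (image radius r = f·θ) and remaps each pixel onto the radius f·tan(θ) an ideal pinhole camera would produce. This is a generic sketch under that assumption, not the paper's specific method, and the focal length and image center below are illustrative:

```python
import math

def fisheye_to_perspective_radius(r_fisheye, f):
    """Map a radial distance from the image center under the equidistant
    fisheye model (r = f * theta) to the radius the same ray would have
    in an ideal pinhole image (f * tan(theta))."""
    theta = r_fisheye / f                  # incidence angle of the ray
    if not 0.0 <= theta < math.pi / 2:
        raise ValueError("ray outside the correctable field of view")
    return f * math.tan(theta)

def undistort_point(x, y, cx, cy, f):
    """Undistort a single pixel (x, y) about the optical center (cx, cy)
    by rescaling its radial distance; the direction is preserved."""
    dx, dy = x - cx, y - cy
    r = math.hypot(dx, dy)
    if r == 0.0:
        return x, y                        # center pixel is unchanged
    s = fisheye_to_perspective_radius(r, f) / r
    return cx + s * dx, cy + s * dy

# Illustrative calibration: center (320, 240), focal length 500 px
x_u, y_u = undistort_point(420.0, 240.0, 320.0, 240.0, 500.0)
```

Points far from the optical center are pushed outward, which is exactly the barrel-distortion compression the correction undoes.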
This paper deals with the detection and segmentation of clouds in high-dynamic-range (HDR) images of the sky as well as the calculation of the position of the sun at any time of the year. In order to predict the movement of clouds and the radiation of the sun for a short period of time, the thickness and position of the clouds have to be known as precisely as possible. Consequently, the segmentation algorithm has to provide satisfactory results regardless of different weather, illumination and climatic conditions. The principle of the segmentation is based on the classification of each pixel as cloud or as sky. This classification is usually based on threshold methods, since these are relatively fast to implement and impose a low computational burden. In order to predict if and when the sun will be covered by clouds, the position of the sun in the images has to be determined. For this purpose, the zenith and azimuth angles of the sun are determined and converted into XY coordinates.
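A widely used instance of such a per-pixel threshold classifier is the red-blue ratio (RBR): cloud pixels scatter red and blue light about equally, while clear sky is strongly blue. This sketch illustrates the general idea only; the threshold value is illustrative and not taken from the paper:

```python
def classify_pixel(r, g, b, rbr_threshold=0.6):
    """Label one RGB pixel as 'cloud' or 'sky' via the red-blue ratio:
    clear sky is strongly blue, so its RBR is low."""
    rbr = r / b if b > 0 else float("inf")
    return "cloud" if rbr >= rbr_threshold else "sky"

def segment(image, rbr_threshold=0.6):
    """Binary cloud mask (True = cloud) for an image given as
    rows of (r, g, b) tuples."""
    return [[classify_pixel(*px, rbr_threshold) == "cloud" for px in row]
            for row in image]

# grey-ish pixel -> cloud; saturated blue pixel -> sky
mask = segment([[(180, 185, 190), (60, 120, 230)]])
```

Fixed thresholds of this kind are exactly what struggles under changing illumination, which motivates the paper's requirement that segmentation remain robust across weather and climatic conditions.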
Design of a Programmable Passive SoC for Biomedical Applications Using RFID ISO 15693/NFC5 Interface
(2018)
Low-power, low-cost inductively powered passive biotelemetry systems involving fully customized RFID/NFC-interface-based SoCs have gained popularity in recent decades. However, most of the SoCs developed are application-specific and lack either on-chip computational or sensor readout capability. In this paper, we present the design details of a programmable passive SoC in compliance with the ISO 15693/NFC5 standard for biomedical applications. The integrated system consists of a 32-bit microcontroller, a sensor readout circuit, a 12-bit SAR-type ADC, 16 kB RAM, 16 kB ROM and other digital peripherals. The design is implemented in a 0.18 µm CMOS technology and occupies a die area of 1.52 mm × 3.24 mm. The simulated maximum power consumption of the analog block is 592 µW. The external components required by the SoC are limited to an external memory device, sensors, an antenna and some passive components. The external memory device contains the application-specific firmware, which can be modified according to the application. The SoC design is suitable for medical implants measuring physiological parameters such as temperature, pressure or ECG. As an application example, the authors propose a bioimplant to measure arterial blood pressure in patients suffering from peripheral artery disease (PAD).
Printed electronics technology has the advantage of additive and extremely low-cost fabrication compared with conventional silicon technology. Specifically, printed electrolyte-gated field-effect transistors (EGFETs) are attractive for low-cost applications in the Internet-of-Things domain as they can operate at low supply voltages. In this paper, we propose an empirical DC model for EGFETs which describes their behavior smoothly and accurately over all regimes. The proposed model, built by extending the Enz-Krummenacher-Vittoz (EKV) model, can also be used to model process variations, which was not possible previously due to fixed parameters for the near-threshold regime. It offers a single model for all operating regions of the transistors, with only one equation for the drain current. Additionally, it models the transistors with fewer parameters but higher accuracy compared with existing techniques. Measurement results from several fabricated EGFETs confirm that the proposed model predicts the I-V characteristics more accurately than state-of-the-art models in all operating regions. Additionally, the measured frequency of a fabricated ring oscillator differs by only 4.7% from the simulation results based on the proposed model, using values for the switching capacitances extracted from measurement data, which represents a more than 2× improvement over the state-of-the-art model.
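The continuity across all operating regions that the EKV approach provides comes from a single smooth interpolation function for the drain current. A minimal sketch of that core EKV expression, for a generic n-type transistor; the parameter values are illustrative defaults, not fitted EGFET values from the paper:

```python
import math

def ekv_drain_current(vgs, vds, i_spec=1e-6, vth=0.3, n=1.3, ut=0.025851):
    """EKV-style drain current: one smooth equation covering weak,
    moderate and strong inversion via the log(1 + exp(...)) interpolation.
    The current is the difference of a forward (source-side) and a
    reverse (drain-side) component."""
    def forward(v):
        # normalized channel current; -> exp(v/ut) in weak inversion,
        # -> (v / (2*ut))**2 in strong inversion
        return math.log1p(math.exp(v / (2.0 * ut))) ** 2
    vp = (vgs - vth) / n          # pinch-off voltage approximation
    i_f = forward(vp)             # forward component
    i_r = forward(vp - vds)       # reverse component
    return i_spec * (i_f - i_r)

# Saturation: the reverse component vanishes and current follows vgs
i_on = ekv_drain_current(0.6, 0.5)
```

Because both components come from the same smooth function, the model and its derivatives are continuous everywhere, which is what makes a single-equation fit across regimes possible.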
Oxide semiconductors are highly promising candidates for the much-awaited next-generation electronics, namely printed electronics. As a fabrication route for solution-processed/printed oxide semiconductors, photonic curing is becoming increasingly popular compared with the conventional thermal curing method; the former offers numerous advantages over the latter, such as low process temperatures and short exposure times and, thereby, high-throughput compatibility. Here, using dissimilar photonic curing concepts (UV–visible light and UV laser), we demonstrate the facile fabrication of high-performance In2O3 field-effect transistors (FETs). Besides the processing-related issues (temperature, time, etc.), the other known limitation of oxide electronics is the lack of high-performance p-type semiconductors, which can be bypassed using unipolar logic built from high-mobility n-type semiconductors alone. Interestingly, we have found that the chosen distinct photonic curing methods can offer a large variation in threshold voltage when the devices are fabricated from the same precursor ink. Consequently, both depletion- and enhancement-mode devices have been achieved, which can be used as the pull-up and pull-down transistors in unipolar inverters. The presented device fabrication recipe demonstrates fast processing of low-operation-voltage, high-performance FETs with large threshold voltage tunability.
An Ultra-Low-Power RFID/NFC Frontend IC Using 0.18 μm CMOS Technology for Passive Tag Applications
(2018)
Battery-less passive sensor tags based on RFID or NFC technology have achieved much popularity in recent times. Passive tags are widely used for various applications such as inventory control or biotelemetry. In this paper, we present a new RFID/NFC frontend IC (integrated circuit) for 13.56 MHz passive tag applications. The design of the frontend IC is compatible with the ISO 15693/NFC 5 standard. The paper discusses the analog design part in detail, with a brief overview of the digital interface and some of the critical measured parameters. A novel approach is adopted for the demodulator design to demodulate the 10% ASK (amplitude shift keying) signal. The demodulator circuit consists of a comparator designed with a preset offset voltage; the comparator circuit design is discussed in detail. The power consumption of the bandgap reference circuit is used as the load for the envelope detection of the ASK-modulated signal. Sub-threshold operation and a low supply voltage are used extensively in the analog design to keep the power consumption low. The IC was fabricated using 0.18 μm CMOS technology in a die area of 1.5 mm × 1.5 mm with an effective area of 0.7 mm2. The minimum required supply voltage is 1.2 V, for which the total power consumption is 107 μW. The analog part of the design consumes only 36 μW, which is low in comparison to other contemporary passive tag ICs. Finally, a passive tag is developed using the frontend IC, a microcontroller, a temperature sensor and a pressure sensor. A smart NFC device is used to read out the sensor data from the tag employing Android-based application software. The measurement results demonstrate full passive operational capability. The IC is suitable for low-power and low-cost industrial or biomedical battery-less sensor applications. A figure of merit (FOM) is proposed in this paper and taken as a reference for comparison with other related state-of-the-art research.
Various methods of Digital Manufacturing (DM) have been available for the production of physical architectural models for several years. This paper highlights the advantages of 3D printing for the digital manufacturing of detailed architectural models. In particular, the representation of architectural details and textures is treated. Furthermore, two new methods are developed in order to improve the conditions for the application of digital manufacturing to architectural models.
Besides conventional CAD systems, new cloud-based CAD systems have also been available for some years. These CAD systems, designed according to the software-as-a-service (SaaS) principle, differ in some important features from conventional CAD systems: they are operated via a browser, and it is not necessary to install the software on a computer. The CAD data is stored in the cloud rather than on a local computer or central server. This new approach should also facilitate the sharing and management of data. Finally, many of these new CAD systems are available as freeware for educational purposes, so universities can save license costs. The chances and risks of cloud-based systems are first analyzed in this paper. Then two leading cloud-based CAD systems are examined. In the process, the technical performance range these new systems offer for product development is checked and reviewed. For this purpose, various criteria are worked out and the CAD software is evaluated against these criteria. In addition, the criteria are weighted by their importance for design education. This allows one to conclude which capabilities the different CAD systems offer for use in education.
Printed Electronics (PE) is a promising technology that provides mechanical flexibility and low-cost fabrication. These features make PE a key enabler for emerging applications such as smart sensors, wearables, and the Internet of Things (IoT). Since these applications need secure communication and/or authentication, it is vital to utilize security primitives for cryptographic keys and identification. Physical Unclonable Functions (PUFs) have been widely adopted to provide such secure keys. In this work, we present a weak PUF based on electrolyte-gated FETs using inorganic inkjet-printed electronics. A comprehensive analysis framework, including Monte Carlo simulations based on real device measurements, is developed to evaluate the proposed PE-PUF. Moreover, a multi-bit PE-PUF design is proposed to optimize area usage. The analysis results show that the PE-PUF has ideal uniqueness and good reliability, and can operate at low voltage, which is critical for low-power PE applications. In addition, the proposed multi-bit PE-PUF reduces the area usage by around 30%.
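Uniqueness and reliability of a weak PUF are conventionally evaluated from Hamming distances over the response bit strings: a mean inter-chip distance near 50% indicates ideal uniqueness, and a mean intra-chip distance near 0% indicates ideal reliability. A minimal sketch of those two standard metrics (the toy responses are illustrative, not measured data):

```python
from itertools import combinations

def hamming_frac(a, b):
    """Fractional Hamming distance between two equal-length bit strings."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b)) / len(a)

def uniqueness(responses):
    """Mean pairwise inter-chip Hamming distance; the ideal value is 0.5."""
    pairs = list(combinations(responses, 2))
    return sum(hamming_frac(a, b) for a, b in pairs) / len(pairs)

def reliability(reference, reevaluations):
    """1 minus the mean intra-chip Hamming distance against a reference
    response; the ideal value is 1.0 (every re-read reproduces the key)."""
    mean_hd = sum(hamming_frac(reference, r)
                  for r in reevaluations) / len(reevaluations)
    return 1.0 - mean_hd

# Three toy chips whose 4-bit responses are pairwise 50% apart
u = uniqueness(["0011", "0101", "0110"])   # ideal uniqueness
```

Monte Carlo evaluation, as used in the paper, amounts to generating many simulated response sets from the measured device variation and computing exactly these statistics over them.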
Printed electronics offers certain technological advantages over its silicon-based counterparts, such as mechanical flexibility, low process temperatures, and a maskless, additive manufacturing process, leading to extremely low-cost manufacturing. However, to be exploited in applications such as smart sensors, the Internet of Things and wearables, it is essential that the printed devices operate at low supply voltages. Electrolyte-gated field-effect transistors (EGFETs) using solution-processed inorganic materials, fully printed with inkjet printers at low temperatures, are very promising candidates for such solutions. In this paper, we discuss the technology, process, modeling, fabrication, and design aspects of circuits based on EGFETs. We show how measurements performed in the lab can be accurately modeled in order to be integrated into the design automation tool flow in the form of a Process Design Kit (PDK). We also review some of the remaining challenges in this technology and discuss our future directions to address them.