BACKGROUND:
While hearing aids for a contralateral routing of signals (CROS-HA) and bone conduction devices have been the traditional treatment for single-sided deafness (SSD) and asymmetric hearing loss (AHL), in recent years, cochlear implants (CIs) have increasingly become a viable treatment choice, particularly in countries where regulatory approval and reimbursement schemes are in place. Part of the reason for this shift is that the CI is the only device capable of restoring bilateral input to the auditory system and hence of possibly reinstating binaural hearing. Although several studies have independently shown that the CI is a safe and effective treatment for SSD and AHL, clinical outcome measures in those studies and across CI centers vary greatly. Only with a consistent use of defined and agreed-upon outcome measures across centers can high-level evidence be generated to assess the safety and efficacy of CIs and alternative treatments in recipients with SSD and AHL.
METHODS:
This paper presents a comparative study design and minimum outcome measures for the assessment of current treatment options in patients with SSD/AHL. The protocol was developed, discussed, and eventually agreed upon by expert panels that convened at the 2015 APSCI conference in Beijing, China, and at the CI 2016 conference in Toronto, Canada.
RESULTS:
A longitudinal study design comparing CROS-HA, bone conduction device (BCD), and CI treatments is proposed. The recommended outcome measures include (1) speech in noise testing, using the same set of 3 spatial configurations to compare binaural benefits such as summation, squelch, and head shadow across devices; (2) localization testing, using stimuli that rove in both level and spectral content; (3) questionnaires to collect quality of life measures and the frequency of device use; and (4) questionnaires for assessing the impact of tinnitus before and after treatment, if applicable.
CONCLUSION:
A protocol for the assessment of treatment options and outcomes in recipients with SSD and AHL is presented. The proposed set of minimum outcome measures aims at harmonizing assessment methods across centers and thus at generating a growing body of high-level evidence for those treatment options.
The effect of fluctuating maskers on speech understanding of high-performing cochlear implant users
(2016)
Objective: The present study evaluated whether the poorer baseline performance of cochlear implant (CI) users or the technical and/or physiological properties of CI stimulation are responsible for the absence of masking release. Design: This study measured speech reception thresholds (SRTs) in continuous and modulated noise as a function of signal to noise ratio (SNR). Study sample: A total of 24 subjects participated: 12 normal-hearing (NH) listeners and 12 subjects provided with recent MED-EL CI systems. Results: The mean SRT of CI users in continuous noise was −3.0 ± 1.5 dB SNR (mean ± SEM), while the normal-hearing group reached −5.9 ± 0.8 dB SNR. In modulated noise, the difference across groups increased considerably. For CI users, the mean SRT worsened to −1.4 ± 2.3 dB SNR, while it improved for normal-hearing listeners to −18.9 ± 3.8 dB SNR. Conclusions: The detrimental effect of fluctuating maskers on SRTs in CI users shown by prior studies was confirmed by the current study. The absence of masking release is thus caused mainly by the technical and/or physiological properties of CI stimulation, not just the poorer baseline performance of many CI users compared to normal-hearing subjects. Speech understanding in modulated noise was more robust in CI users who had a relatively large electrical dynamic range.
The ability to detect a target signal masked by noise is improved in normal-hearing listeners when interaural phase differences (IPDs) between the ear signals exist either in the masker or in the signal. To improve binaural hearing in bilaterally implanted cochlear implant (BiCI) users, a coding strategy providing the best possible access to IPD is highly desirable. In this study, we compared two coding strategies in BiCI users provided with CI systems from MED-EL (Innsbruck, Austria). The CI systems were bilaterally programmed either with the fine structure processing strategy FS4 or with the constant rate strategy high definition continuous interleaved sampling (HDCIS). Familiarization periods between 6 and 12 weeks were considered. The effect of IPD was measured in two types of experiments: (a) IPD detection thresholds with tonal signals addressing mainly one apical interaural electrode pair and (b) with speech in noise in terms of binaural speech intelligibility level differences (BILD) addressing multiple electrodes bilaterally. The results in (a) showed improved IPD detection thresholds with FS4 compared with HDCIS in four out of the seven BiCI users. In contrast, 12 BiCI users in (b) showed similar BILD with FS4 (0.6 ± 1.9 dB) and HDCIS (0.5 ± 2.0 dB). However, no correlation between results in (a) and (b) both obtained with FS4 was found. In conclusion, the degree of IPD sensitivity determined on an apical interaural electrode pair was not an indicator for BILD based on bilateral multielectrode stimulation.
In users of a cochlear implant (CI) together with a contralateral hearing aid (HA), so-called bimodal listeners, differences in processing latency between the digital HA and the CI of up to 9 ms are constantly superimposed on interaural time differences. In the present study, the effect of this device delay mismatch on sound localization accuracy was investigated. For this purpose, localization accuracy in the frontal horizontal plane was measured with the original and with a minimized device delay mismatch. The reduction was achieved by delaying the CI stimulation according to the delay of the individually worn HA. For this, a portable, programmable, battery-powered delay line based on a ring buffer running on a microcontroller was designed and assembled. After an acclimatization period of 1 hr to the delayed CI stimulation, the nine bimodal study participants showed a highly significant improvement in localization accuracy of 11.6% compared with the everyday situation without the delay line (p < .01). Delaying CI stimulation to minimize the device delay mismatch thus seems to be a promising method for increasing sound localization accuracy in bimodal listeners.
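The delay line described in this abstract lends itself to a compact software sketch. The following ring-buffer delay is purely illustrative (the class name, sample rate, and buffer length are our assumptions, not details from the study):

```python
class RingBufferDelay:
    """Delay a sample stream by a fixed number of samples using a ring buffer."""

    def __init__(self, delay_samples):
        self.buf = [0.0] * delay_samples  # pre-filled with silence
        self.pos = 0

    def process(self, sample):
        # Read the oldest sample, overwrite it with the newest, advance the index.
        delayed = self.buf[self.pos]
        self.buf[self.pos] = sample
        self.pos = (self.pos + 1) % len(self.buf)
        return delayed

# At a 16 kHz sample rate, the 9 ms worst-case mismatch quoted above
# corresponds to 9e-3 * 16000 = 144 samples of buffering.
delay = RingBufferDelay(144)
out = [delay.process(x) for x in range(300)]
```

The first 144 output samples are the silence the buffer was initialized with; every later output is the input from 144 samples earlier.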
Objectives: Speech recognition on the telephone poses a challenge for patients with cochlear implants (CIs) due to a reduced bandwidth of transmission. This trial evaluates a home-based auditory training with telephone-specific filtered speech material to improve sentence recognition. Design: Randomised controlled parallel double-blind. Setting: One tertiary referral centre. Participants: A total of 20 postlingually deafened patients with CIs. Main outcome measures: Primary outcome measure was sentence recognition assessed by a modified version of the Oldenburg Sentence Test filtered to the telephone bandwidth of 0.3-3.4 kHz. Additionally, pure tone thresholds, recognition of monosyllables and subjective hearing benefit were acquired at two separate visits before and after a home-based training period of 10-14 weeks. For training, patients received a CD with speech material, either unmodified for the unfiltered training group or filtered to the telephone bandwidth in the filtered group. Results: Patients in the unfiltered training group achieved an average sentence recognition score of 70.0%±13.6% (mean±SD) before and 73.6%±16.5% after training. Patients in the filtered training group achieved 70.7%±13.8% and 78.9%±7.0%, a statistically significant difference (P=.034, t10 =2.292; two-way RM ANOVA/Bonferroni). An increase in the recognition of monosyllabic words was noted in both groups. The subjective benefit was positive for filtered and negative for unfiltered training. Conclusions: Auditory training with specifically filtered speech material provided an improvement in sentence recognition on the telephone compared to training with unfiltered material.
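The 0.3–3.4 kHz telephone-band filtering described above can be approximated with a standard windowed-sinc FIR band-pass. A minimal, dependency-free sketch (the sample rate, tap count, and test tones are our own assumptions, not the study's processing chain):

```python
import math

def bandpass_fir(low_hz, high_hz, fs, num_taps=101):
    """Windowed-sinc band-pass FIR coefficients (Hamming window)."""
    m = num_taps - 1
    taps = []
    for n in range(num_taps):
        k = n - m / 2  # centre the sinc; m is even, so k takes integer values
        if k == 0:
            h = 2.0 * (high_hz - low_hz) / fs
        else:
            h = (math.sin(2 * math.pi * high_hz * k / fs)
                 - math.sin(2 * math.pi * low_hz * k / fs)) / (math.pi * k)
        h *= 0.54 - 0.46 * math.cos(2 * math.pi * n / m)  # Hamming window
        taps.append(h)
    return taps

def apply_fir(signal, taps):
    """Direct-form convolution (slow but dependency-free)."""
    return [sum(t * signal[i - j] for j, t in enumerate(taps) if i - j >= 0)
            for i in range(len(signal))]

fs = 16000
taps = bandpass_fir(300, 3400, fs)

def tone(freq, n=2000):
    return [math.sin(2 * math.pi * freq * i / fs) for i in range(n)]

def rms(x):
    x = x[200:]  # skip the filter's start-up transient
    return math.sqrt(sum(v * v for v in x) / len(x))

in_band = rms(apply_fir(tone(1000), taps))   # inside 0.3-3.4 kHz: passes
out_band = rms(apply_fir(tone(6000), taps))  # outside: strongly attenuated
```

With these parameters a 1 kHz tone passes nearly unattenuated, while a 6 kHz tone falls deep into the stopband.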
This paper presents a streaming-based E-Learning environment where closer integration between learning and work is achieved by integrating multimedia services into manufacturing processes. It contains a comprehensive and detailed explanation of the proposed E-Learning streaming framework, especially the adaptation of streaming services to mobile environments. We first analyze several scenarios where E-Learning streaming services can be integrated into manufacturing processes. To allow systematic and tailor-made integration, we develop a model and a specification language for E-Learning streaming services and apply the model using practical scenarios from real manufacturing processes. Adaptation of multimedia streaming services to mobile devices is discussed based on the Synchronized Multimedia Integration Language (SMIL). Last, we comment on the benefits of using E-Learning streaming services as part of manufacturing processes and analyze the acceptance of the developed system. The key components of our E-Learning environment are 1) an XML-based streaming-service specification language, 2) adaptation of multimedia E-Learning services to mobile environments, and 3) Web Services for searching, registration, and creation of E-Learning streaming services.
A physical unclonable function (PUF) is a hardware circuit that produces a random sequence based on its manufacturing-induced intrinsic characteristics. In the past decade, silicon-based PUFs have been extensively studied as a security primitive for identification and authentication. The emerging field of printed electronics (PE) enables novel application fields in the scope of the Internet of Things (IoT) and smart sensors. In this paper, we design and evaluate a printed differential circuit PUF (DiffC-PUF). The simulation data are verified by Monte Carlo analysis. Our design is highly scalable while consisting of a low number of printed transistors. Furthermore, we investigate the best operating point by varying the PUF challenge configuration and analyzing the PUF security metrics in order to achieve high robustness. At the best operating point, the results show a reliability of 98.37% and a uniqueness of 50.02%. This analysis also provides useful and comprehensive insights into the design of hybrid or fully printed PUF circuits. In addition, the proposed printed DiffC-PUF core has been fabricated with electrolyte-gated field-effect transistor technology to verify our design in hardware.
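The security metrics quoted above (uniqueness near the ideal 50%, high reliability) are conventionally computed from Hamming distances between PUF responses: uniqueness from inter-device distances, reliability from repeated readouts of the same device. A minimal sketch with toy data (the bit strings and helper names are illustrative, not from the paper):

```python
def hamming_frac(a, b):
    """Fractional Hamming distance between two equal-length bit strings."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

def uniqueness(responses):
    """Mean pairwise inter-device Hamming distance in percent (ideal: 50%)."""
    pairs = [(i, j) for i in range(len(responses))
             for j in range(i + 1, len(responses))]
    return 100 * sum(hamming_frac(responses[i], responses[j])
                     for i, j in pairs) / len(pairs)

def reliability(reference, reevaluations):
    """100% minus the mean intra-device Hamming distance over repeated readouts."""
    avg = sum(hamming_frac(reference, r) for r in reevaluations) / len(reevaluations)
    return 100 * (1 - avg)

# Toy data: three devices with 8-bit responses; device 0 re-read twice,
# once perfectly and once with a single flipped bit.
chips = ["10110010", "01101100", "11010001"]
u = uniqueness(chips)
rel = reliability("10110010", ["10110010", "10110011"])
```

For the toy data this yields a reliability of 93.75% and a uniqueness of about 66.7%; real PUF evaluations run the same arithmetic over thousands of challenge-response pairs.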
Despite increasing budgets for social media activities and a wide variety of performance measurement possibilities, many companies do not measure the performance of their social media activities. Research shows that those companies that measure the performance of social media activities use incorrect, too few or inappropriate metrics. A central problem is that there is often an inadequate performance measurement process. This article presents a process that focuses on the objectives of social media activities. In phase one of this process, suitable metrics are selected and target values are defined based on these objectives. In phase two, data are collected and analysed. Finally, actions are defined. The developed process helps companies to measure the performance of their social media activities.
The flow field-flow fractionation (FIFFF) technique is a promising method for separating and analysing particles and large size macromolecules from a few nanometers to approximately 50 μm. A new fractionation channel is described featuring well defined flow conditions even for low channel heights with convenient assembling and operations features. The application of the new flow field-flow fractionation channel is proved by the analysis of pigments and other small particles of technical interest in the submicrometer range. The experimental results including multimodal size distributions are presented and discussed.
Modeling and Simulation the Influence of Solid Carbon Formation on SOFC Performance and Degradation
(2013)
Impedance of the Surface Double Layer of LSCF/CGO Composite Cathodes: An Elementary Kinetic Model
(2014)
A wide range catalyst screening with noble metal and oxide catalysts for a metal–air battery with an aqueous alkaline electrolyte was carried out. Suitable catalysts reduce overpotentials during the charge and discharge process, and therefore improve the round-trip efficiency of the battery. In this case, the electrodes will be used as optimized cathodes for a future lithium–air battery with an aqueous alkaline electrolyte. Oxide catalysts were synthesized via atmospheric plasma spraying. The screening showed that IrO2, RuO2, La0.6Ca0.4CoO3, Mn3O4, and Co3O4 are promising bi-functional catalysts. Considering the high price of the noble metal catalysts, further investigations of the oxide catalysts were carried out to analyze their electrochemical behavior at varied temperatures, molarities, and, in the case of La1−xCaxCoO3, a varying calcium content. Additionally, all catalysts were tested in a long-term test to prove cyclability at varied molarities. Further investigations showed that Co3O4 seems to be the most promising bi-functional catalyst of the tested oxide catalysts. Furthermore, it was shown that a calcium content of x = 0.4 in LCCO has the best performance.
We present a two-dimensional (2D) planar chromatographic separation of estrogenic active compounds on RP-18 W (Merck, 1.14296) phase. A mixture of 8 substances was separated using a solvent mix consisting of hexane, ethyl acetate, acetone (55:15:10, v/v) in the first direction and of acetone and water (15:10, v/v) in the second direction. Separation was performed on an RP-18 W plate over a distance of 70 mm. This 2D-separation method can be used to quantify 17α-ethinylestradiol (EE2) in an effect-directed analysis, using the yeast strain Saccharomyces cerevisiae BJ3505. The test strain (according to McDonnell) contains the estrogen receptor. Its activation by estrogen active compounds is measured by inducing the reporter gene lacZ which encodes the enzyme β-galactosidase. This enzyme activity is determined on plate by using the fluorescent substrate MUG (4-methylumbelliferyl-β-d-galactopyranoside).
Footwear plays a critical role in our daily lives, affecting our performance, health and overall well-being. Well-designed footwear can provide protection, comfort and improved foot functionality, while poorly designed footwear can lead to mobility problems and declines in physical activity. The overall goal of footwear research is to provide a scientific basis for professionals in the field to provide an optimal footwear solution for a given person, for a given task, in a given environment, while using sustainable manufacturing processes. This article suggests potential directions for future research with a focus on athletic footwear biomechanics. Directions include the evidence-based individualisation of footwear, the interaction between design and prolonged use, and improving the sustainability of footwear. The authors also provide a speculative outlook on methodological developments that may provide greater insight into these areas. These developments may include: (1) the use of larger scale, real-world and representative data, (2) the use of 3D printing to create experimental footwear, (3) the advancement of in silico research methods, and (4) furthering multidisciplinary collaboration. If successfully applied in the future, footwear research will contribute to active and healthy lifestyles across the lifespan.
The authors claim that location information of stationary ICT components can never be unclassified. They describe how swarm-mapping crowd sourcing is used by Apple and Google to harvest geo-location information on wireless access points and mobile telecommunication systems' base stations worldwide, building up gigantic databases with very exclusive access rights. After having highlighted the known technical facts, in the speculative part of this article the authors argue how this may impact cyber deterrence strategies of states and alliances that understand cyberspace as another domain of geostrategic relevance. Given the potential existence of such databases, the spectrum of activities open to states and alliances may range from geopolitical negotiations by institutions that regard international affairs as their core business, through mitigation approaches at a technical level, to means of cyber deterrence-by-retaliation.
The interaural time difference (ITD) is an important cue for the localization of sounds. ITD changes as small as 10 μs can be detected by the human auditory system. When one ear is provided with a cochlear implant (CI), ITDs are altered due to the partial replacement of the peripheral auditory system. A hearing aid (HA), in contrast, does not replace but adds a processing delay component to the peripheral auditory system, extending ITDs. The aim of the present study was to quantify interaural stimulation timing between these different modalities to estimate the need for central auditory temporal compensation in single-sided deaf CI users or bimodal CI/HA users. For this purpose, wave V latencies of auditory brainstem responses evoked either acoustically (ABR) or electrically via the CI (EABR) were measured. The sum of delays, consisting of the CI signal processing delay measured in the MED-EL OPUS2 audio processor and the EABR wave V latencies evoked at different intracochlear sites, allowed an estimation of the entire channel-specific delay for MED-EL MAESTRO CI systems. We compared these values with ABR wave V latencies measured in the contralateral normal hearing or HA-provided ear in different frequency bands. The results showed that EABR wave V latencies were consistently shorter than those evoked acoustically in the unaided normal hearing ear. Thus, artificial delays can be implemented within the audio processor to adjust interaural stimulation timing. The group delays currently implemented in the MED-EL CI system turned out to be reasonably similar to those of the unaided ear. For the adjustment of a CI to a contralateral HA, in contrast, an adjustable additional across-frequency delay in the range of 1–11 ms implemented in the CI would be required. Especially for bimodal CI/HA users, adjusting interaural stimulation timing may induce improved binaural hearing, reduce the need for central auditory temporal compensation, and increase acceptance of the CI/HA provision.
The compliant nature of distal limb muscle-tendon units is traditionally considered suboptimal in explosive movements when positive joint work is required. However, during accelerative running, ankle joint net mechanical work is positive. Therefore, this study aims to investigate how plantar flexor muscle-tendon behavior is modulated during fast accelerations. Eleven female sprinters performed maximum sprint accelerations from starting blocks, while gastrocnemius muscle fascicle lengths were estimated using ultrasonography. We combined motion analysis and ground reaction force measurements to assess lower limb joint kinematics and kinetics, and to estimate gastrocnemius muscle-tendon unit length during the first two acceleration steps. Outcome variables were resampled to the stance phase and averaged across three to five trials. Relevant scalars were extracted and analyzed using one-sample and two-sample t-tests, and vector trajectories were compared using statistical parametric mapping. We found that an uncoupling of muscle fascicle behavior from muscle-tendon unit behavior is effectively used to produce net positive mechanical work at the joint during maximum sprint acceleration. Muscle fascicles shortened throughout the first and second steps, while shortening occurred earlier during the first step, where negative joint work was lower compared with the second step. Elastic strain energy may be stored during dorsiflexion after touchdown since fascicles did not lengthen at the same time to dissipate energy. Thus, net positive work generation is accommodated by the reuse of elastic strain energy along with positive gastrocnemius fascicle work. Our results show a mechanism of how muscles with high in-series compliance can contribute to net positive joint work.
Time-of-Flight Cameras Enabling Collaborative Robots for Improved Safety in Medical Applications
(2017)
Human-robot collaboration is being used more and more in industry applications and is finding its way into medical applications. Industrial robots that are used for human-robot collaboration cannot detect obstacles from a distance. This paper introduces the idea of using wireless technology to connect a Time-of-Flight camera to off-the-shelf industrial robots. This way, the robot can detect obstacles up to a distance of five meters. Connecting Time-of-Flight cameras to robots increases safety in human-robot collaboration by detecting obstacles before a collision. After reviewing the state of the art, the authors elaborate the different requirements for such a system. The Time-of-Flight camera from Heptagon is able to work in a range of up to five meters and can connect to the control unit of the robot via a wireless connection.
In many application domains, in particular automotive, guaranteeing a very low failure rate is crucial to meet functional and safety standards. In particular, reliable operation of memory components such as SRAM cells is of essential importance. Due to aggressive technology downscaling, process and runtime variations significantly impact manufacturing yield as well as functionality. For this reason, a thorough memory failure rate assessment is imperative for correct circuit operation and yield improvement. In this regard, Monte Carlo simulations have been used as the conventional method to estimate the variability-induced failure rate of memory components. However, Monte Carlo methods become infeasible when estimating rare events such as high-sigma failure rates. To this end, Importance Sampling methods have been proposed which reduce the number of required simulations substantially. However, existing methods still suffer from inaccuracies and high computational effort, in particular for high-sigma problems. In this paper, we fill this gap by presenting an efficient mixture Importance Sampling approach based on Bayesian optimization, which deploys a surface model of the objective function to find the most probable failure points. Its advantages include constant complexity independent of the dimensions of the design space, the potential to find the global extrema, and higher trustworthiness of the estimated failure rate by accurately exploring the design space. The approach is evaluated on a 6T-SRAM cell as well as a master-slave latch based on a 28nm FDSOI process. The results show an improvement in accuracy, resulting in up to 63× better accuracy in estimating failure rates compared to the best state-of-the-art solutions on a 28nm technology node.
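The core Importance Sampling idea discussed above — making rare failures common under a shifted proposal distribution and re-weighting by the likelihood ratio — can be shown in a few lines. This is a generic mean-shift sketch on a one-dimensional Gaussian, not the paper's Bayesian-optimization-guided mixture method; the function name and parameters are ours:

```python
import math
import random

def failure_prob_is(threshold, n=200_000, seed=1):
    """Estimate P(X > threshold) for X ~ N(0, 1) with mean-shifted Importance Sampling.

    Samples come from the proposal N(threshold, 1), where 'failures' are common,
    and each failing sample is re-weighted by the likelihood ratio
    phi(x) / phi(x - threshold) of target to proposal density.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(threshold, 1.0)
        if x > threshold:
            total += math.exp(-0.5 * x * x + 0.5 * (x - threshold) ** 2)
    return total / n

est = failure_prob_is(4.0)                     # rare "4-sigma" event
exact = 0.5 * math.erfc(4.0 / math.sqrt(2.0))  # closed form, about 3.2e-5
```

Plain Monte Carlo would need on the order of a hundred million samples to see a handful of 4-sigma failures; the shifted proposal reaches percent-level accuracy with 200,000.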
Battery degradation is a complex physicochemical process that strongly depends on operating conditions. We present a model-based analysis of lithium-ion battery degradation in a stationary photovoltaic battery system. We use a multi-scale multi-physics model of a graphite/lithium iron phosphate (LiFePO4, LFP) cell including solid electrolyte interphase (SEI) formation. The cell-level model is dynamically coupled to a system-level model consisting of photovoltaics (PV), inverter, load, grid interaction, and energy management system, fed with historic weather data. Simulations are carried out for two load scenarios, a single-family house and an office tract, over annual operation cycles with one-minute time resolution. As key result, we show that the charging process causes a peak in degradation rate due to electrochemical charge overpotentials. The main drivers for cell ageing are therefore not only a high state of charge (SOC), but the charging process leading towards high SOC. We also show that the load situation not only influences system parameters like self-sufficiency and self-consumption, but also has a significant impact on battery ageing. We assess reduced charge cut-off voltage as ageing mitigation strategy.
The efficiency of a chromatographic analytical method is determined by the selectivity of the chromatographic separation and the specificity of the detection method. In high-performance thin-layer chromatography (HPTLC) the separated components can be detected and quantified directly on the plate by physical and chemical methods. By coupling high-performance thin-layer chromatography with biological or biochemical inhibition tests it is possible to detect toxic substances in situ.
Heat generation that is coupled with electricity usage, like combined heat and power generators or heat pumps, can provide operational flexibility to the electricity sector. In order to make use of this in an optimized way, the flexibility that can be provided by such plants needs to be properly quantified. This paper proposes a method for quantifying the flexibility provided through a cluster of such heat generators. It takes into account minimum operational time and minimum down-time of heat generating units. Flexibility is defined here as the time period over which plant operation can be either delayed or forced into operation, thus providing upward or downward regulation to the power system on demand. Results for one case study show that a cluster of several smaller heat generation units does not provide much more delayed operation flexibility than one large unit with the same power, while it more than doubles the forced operation flexibility. Considering minimum operational time and minimum down-time of the units considerably limits the available forced and delayed operation flexibility, especially in the case of one large unit.
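The delayed/forced flexibility notion defined above can be illustrated with a deliberately simplified energy-balance calculation. This sketch ignores the minimum up- and down-times that the paper explicitly accounts for, and all names and numbers are illustrative assumptions:

```python
def flexibility_hours(storage_kwh, capacity_kwh, heat_demand_kw, gen_power_kw):
    """Delayed operation: hours the unit can stay off while storage covers demand.
    Forced operation: hours the unit can run at full power before storage is full.
    """
    delayed = storage_kwh / heat_demand_kw
    forced = (capacity_kwh - storage_kwh) / (gen_power_kw - heat_demand_kw)
    return delayed, forced

# Toy example: half-full 20 kWh store, 4 kW heat demand, 10 kW heat generator.
delayed_h, forced_h = flexibility_hours(10.0, 20.0, 4.0, 10.0)
```

Here the unit could be delayed for 2.5 h (the store covers demand) or forced for about 1.7 h (until the store is full) — the two durations offered to the power system as downward and upward regulation.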
The visualization of heart rhythm disturbance and atrial fibrillation therapy allows the optimization of new cardiac catheter ablations. With the simulation software CST (Computer Simulation Technology, Darmstadt) electromagnetic and thermal simulations can be carried out to analyze and optimize different heart rhythm disturbance and cardiac catheters for pulmonary vein isolation. Another form of visualization is provided by haptic, three-dimensional print models. These models can be produced using an additive manufacturing method, such as a 3d printer. The aim of the study was to produce a 3d print of the Offenburg heart rhythm model with a representation of an atrial fibrillation ablation procedure to improve the visualization of simulation of cardiac catheter ablation. The basis of 3d printing was the Offenburg heart rhythm model and the associated simulation of cryoablation of the pulmonary vein. The thermal simulation shows the pulmonary vein isolation of the left inferior pulmonary vein with the cryoballoon catheter Arctic Front Advance™ from Medtronic. After running through the simulation, the thermal propagation during the procedure was shown in the form of different colors. The three-dimensional print models were constructed on the base of the described simulation in a CAD program. Four different 3d printers are available for this purpose in a rapid prototyping laboratory at the University of Applied Science Offenburg. Two different printing processes were used and a final print model with additional representation of the esophagus and internal esophagus catheter was also prepared for printing. With the help of the thermal simulation results and the subsequent evaluation, it was possible to draw a conclusion about the propagation of the cold emanating from the catheter in the myocardium and the surrounding tissue. It was measured that just 3 mm from the balloon surface into the myocardium the temperature dropped to 25 °C. 
The simulation model was printed using two 3d printing methods. Both methods, as well as the different printing materials offer different advantages and disadvantages. All relevant parts, especially the balloon catheter and the conduction, are realistically represented. Only the thermal propagation in the form of different colors is not shown on this model. Three-dimensional heart rhythm models as well as virtual simulations allow very clear visualization of complex cardiac rhythm therapy and atrial fibrillation treatment methods. The printed models can be used for optimization and demonstration of cryoballoon catheter ablation in patients with atrial fibrillation.
The automatic classification of the modulation format of a detected signal is the intermediate step between signal detection and demodulation. If neither the transmitted data nor other signal parameters such as the frequency offset, phase offset, and timing information are known, automatic modulation classification (AMC) is a challenging task in radio monitoring systems. Clustering-based approaches are a new trend in AMC for digital modulations. A novel algorithm called 'highest constellation pattern matching' is introduced to identify quadrature amplitude modulation and phase shift keying signals. The obtained simulation and measurement results outperform existing clustering-based AMC algorithms. Finally, it is shown that the proposed algorithm works in a real monitoring environment.
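The abstract does not detail the 'highest constellation pattern matching' algorithm itself, but the general idea of matching received symbols against reference constellations can be sketched generically. The following nearest-constellation matcher is our own illustration, not the authors' method:

```python
import cmath
import math
import random

# Reference constellations, all normalised to unit average power.
CONSTELLATIONS = {
    "BPSK": [1 + 0j, -1 + 0j],
    "QPSK": [cmath.exp(1j * math.pi * (0.25 + 0.5 * k)) for k in range(4)],
    "16QAM": [complex(a, b) / math.sqrt(10)
              for a in (-3, -1, 1, 3) for b in (-3, -1, 1, 3)],
}

def classify(symbols):
    """Return the reference constellation with the smallest mean distance
    from each received symbol to its nearest constellation point."""
    def score(points):
        return sum(min(abs(s - p) for p in points) for s in symbols) / len(symbols)
    return min(CONSTELLATIONS, key=lambda name: score(CONSTELLATIONS[name]))

# Toy check: noisy QPSK symbols should match the QPSK template.
rng = random.Random(0)
tx = [rng.choice(CONSTELLATIONS["QPSK"]) for _ in range(400)]
rx = [s + complex(rng.gauss(0, 0.08), rng.gauss(0, 0.08)) for s in tx]
result = classify(rx)
```

A practical classifier additionally has to estimate and remove the frequency offset, phase offset, and timing mentioned above before the symbols form a clean constellation at all, which is what makes blind AMC hard.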
Morphological transition of a rod-shaped phase into a string of spherical particles is commonly observed in the microstructures of alloys during solidification (Ratke and Mueller, 2006). This transition phenomenon can be explained by the classic Plateau-Rayleigh theory which was derived for fluid jets based on the surface area minimization principle. The quintessential work of Plateau-Rayleigh considers tiny perturbations (amplitude much less than the radius) to the continuous phase and for large amplitude perturbations, the breakup condition for the rod-shaped phase is still a knotty issue. Here, we present a concise thermodynamic model based on the surface area minimization principle as well as a non-linear stability analysis to generalize Plateau-Rayleigh’s criterion for finite amplitude perturbations. Our results demonstrate a breakup transition from a continuous phase via dispersed particles towards a uniform-radius cylinder, which has not been found previously, but is observed in our phase-field simulations. This new observation is attributed to a geometric constraint, which was overlooked in former studies. We anticipate that our results can provide further insights on microstructures with spherical particles and cylinder-shaped phases.
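For context, the infinitesimal-perturbation result being generalized is the textbook Plateau–Rayleigh criterion: a small axisymmetric perturbation of a cylinder of radius $R_0$,

```latex
r(z) = R_0 + \varepsilon \cos\!\left(\frac{2\pi z}{\lambda}\right), \qquad \varepsilon \ll R_0,
```

lowers the surface area at fixed volume, and hence grows, only when the wavelength exceeds the circumference, $\lambda > 2\pi R_0$. The paper's contribution is a breakup condition for finite $\varepsilon$, where this linear criterion no longer applies.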
Exploiting Dissent: Towards Fuzzing-based Differential Black Box Testing of TLS Implementations
(2017)
The Transport Layer Security (TLS) protocol is one of the most widely used security protocols on the internet. Yet implementations of TLS keep suffering from bugs and security vulnerabilities. This is in large part due to the protocol's complexity, which makes implementing and testing TLS notoriously difficult. In this paper, we present our work on using differential testing as an effective means to detect issues in black-box implementations of the TLS handshake protocol. We introduce a novel fuzzing algorithm for generating large and diverse corpora of mostly-valid TLS handshake messages. Stimulating TLS servers when they expect a ClientHello message, we find messages generated with our algorithm to induce more response discrepancies and to achieve a higher code coverage than those generated with American Fuzzy Lop, TLS-Attacker, or NEZHA. In particular, we apply our approach to OpenSSL, BoringSSL, WolfSSL, mbedTLS, and MatrixSSL, and find several real implementation bugs; among them a serious vulnerability in MatrixSSL 3.8.4. Our findings also point to imprecision in the TLS specification. We see the approach presented in this paper as a first step towards fully interactive differential testing of black-box TLS protocol implementations. Our software tools are publicly available as open source projects.
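The differential-testing loop described above — send the same mutated handshake message to several implementations and flag any response discrepancy — can be sketched compactly. The two 'servers' below are toy stand-ins, not real TLS stacks, and the byte-flipping mutator is far cruder than the paper's grammar-aware fuzzer:

```python
import random

def fuzz_messages(seed_msg, n=100, seed=7):
    """Derive mostly-valid variants of a seed message by flipping a few random bytes."""
    rng = random.Random(seed)
    variants = []
    for _ in range(n):
        msg = bytearray(seed_msg)
        for _ in range(rng.randrange(1, 4)):  # mutate 1-3 byte positions
            msg[rng.randrange(len(msg))] = rng.randrange(256)
        variants.append(bytes(msg))
    return variants

def differential_test(implementations, messages):
    """Collect every message on which the implementations' responses disagree."""
    findings = []
    for msg in messages:
        responses = {name: impl(msg) for name, impl in implementations.items()}
        if len(set(responses.values())) > 1:  # any discrepancy is worth inspecting
            findings.append((msg, responses))
    return findings

# Toy stand-ins for two TLS servers: they agree on records starting with the
# handshake content type 0x16 and disagree on everything else.
impl_a = lambda m: "server_hello" if m[0] == 0x16 else "alert"
impl_b = lambda m: "server_hello" if m[0] == 0x16 else "close_notify"
seed_msg = bytes([0x16, 0x03, 0x03, 0x00, 0x10]) + bytes(16)
findings = differential_test({"A": impl_a, "B": impl_b}, fuzz_messages(seed_msg))
```

Each finding is a concrete input on which two implementations diverge; in the real setting such divergences point either at a bug in one stack or at an underspecified corner of the TLS standard.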
There is increasing evidence of central hyperexcitability in chronic whiplash-associated disorders (cWAD). However, little is known about how an apparently simple cervical spine injury can induce changes in cerebral processes. The present study was designed (1) to validate previous results showing alterations of regional cerebral blood flow (rCBF) in cWAD, (2) to test whether central hyperexcitability is reflected in changes in rCBF upon non-painful stimulation of the neck, and (3) to verify our hypothesis that the missing link in understanding the underlying pathophysiology could be the close interaction between the neck and midbrain structures. For this purpose, alterations of rCBF were explored in a case-control study using H₂¹⁵O positron emission tomography, where each group was exposed to four different conditions, including rest and different levels of non-painful electrical stimulation of the neck. rCBF was found to be elevated in patients with cWAD in the posterior cingulate and precuneus, and decreased in the superior temporal, parahippocampal, and inferior frontal gyri, the thalamus and the insular cortex when compared with rCBF in healthy controls. No differences in rCBF were observed between different levels of electrical stimulation. The alterations in regions directly involved with pain perception and interoceptive processing indicate that cWAD symptoms might be the consequence of a mismatch during the integration of information in brain regions involved in pain processing.
Optimization of energetic refurbishment roadmaps for multi-family buildings utilizing heat pumps
(2023)
A novel methodology for calculating optimized refurbishment roadmaps is developed in this paper. The aim of the roadmaps is to determine when, how, and which component of the building envelope and heat generation system should be refurbished to achieve the lowest net present value. The integrated optimization approach couples a particle swarm optimization algorithm with a dynamic building simulation of the building envelope and the heat supply system. Because implementation times and refurbishment depths can be selected freely, the optimization method achieves the lowest net present value together with a high CO2 reduction and is therefore an important contribution to achieving climate neutrality in the building stock.
The method is applied, as an example, to a multi-family house built in 1970. Compared with a standard refurbishment roadmap, cost savings of 6–16 % and CO2 savings of 6–59 % are possible. The sensitivity of the refurbishment roadmap measures is analyzed by means of a parametric analysis. Robust optimization results are obtained with a mean refurbishment level of approx. 50 kWh/m²/a for the building envelope. The preferred heat generation system is a bivalent brine heat pump system in which 70 % of the heat load is covered by the electric heat pump.
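The coupling described above can be sketched as follows: a particle swarm searches the refurbishment parameters, and each candidate is scored by a cost function standing in for the dynamic building simulation. All coefficients, bounds, and parameter names here are illustrative assumptions, not the paper's model.

```python
import random

def npv(x):
    """Toy stand-in for the coupled building simulation (illustrative
    coefficients only): x = (target heat demand in kWh/m2/a,
    heat-pump share of the heat load)."""
    q, s = x
    capex = 0.06 * (150.0 - q) ** 2 + 150.0 * s   # envelope + heat pump invest
    opex = 20.0 * q * (0.10 - 0.06 * s)           # 20 years of energy cost
    return capex + opex

def pso(f, bounds, n=30, iters=150, w=0.7, c1=1.5, c2=1.5):
    """Plain global-best particle swarm optimization over box bounds."""
    dim = len(bounds)
    pos = [[random.uniform(*b) for b in bounds] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pval = [f(p) for p in pos]
    gbest, gval = min(zip(pbest, pval), key=lambda t: t[1])
    gbest = gbest[:]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            v = f(pos[i])
            if v < pval[i]:
                pbest[i], pval[i] = pos[i][:], v
                if v < gval:
                    gbest, gval = pos[i][:], v
    return gbest, gval
```

In the paper, `npv` would be replaced by a full dynamic building simulation, and the decision vector would also encode the implementation year of each measure.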
There is an increasing demand by an ever-growing number of mobile customers for transfer of rich media content. This requires very high bandwidth which either cannot be provided by the current cellular systems or puts pressure on the wireless networks, affecting customer service quality. This study introduces COARSE – a novel cluster-based quality-oriented adaptive radio resource allocation scheme, which dynamically and adaptively manages the radio resources in a cluster-based two-hop multi-cellular network, having a frequency reuse of one. COARSE is a cross-layer approach across physical layer, link layer and the application layer. COARSE gathers data delivery-related information from both physical and link layers and uses it to adjust bandwidth resources among the video streaming end-users. Extensive analysis and simulations show that COARSE enables a controlled trade-off between the physical layer data rate per user and the number of users communicating using a given resource. Significantly, COARSE provides 25–75% improvement in the computed user-perceived video quality compared with that obtained from an equivalent single-hop network.
Amorphous In-Ga-Zn-O (IGZO) is a high-mobility semiconductor employed in modern thin-film transistors for displays, and it is considered a promising material for Schottky diode-based rectifiers. The properties of electronic components based on IGZO strongly depend on manufacturing parameters such as the oxygen partial pressure during IGZO sputtering and post-deposition thermal annealing. In this study, we investigate the combined effect of the sputtering conditions of amorphous IGZO (In:Ga:Zn = 1:1:1) and post-deposition thermal annealing on the properties of vertical thin-film Pt-IGZO-Cu Schottky diodes, and evaluate the applicability of the fabricated Schottky diodes for low-frequency half-wave rectifier circuits. Changing the oxygen content in the gas mixture from 1.64% to 6.25% together with post-deposition annealing is shown to increase the current rectification ratio from 10⁵ to 10⁷ at ±1 V, the Schottky barrier height from 0.64 eV to 0.75 eV, and the ideality factor from 1.11 to 1.39. Half-wave rectifier circuits based on the fabricated Schottky diodes were simulated using parameters extracted from measured current-voltage and capacitance-voltage characteristics. The half-wave rectifier circuits were realized at 100 kHz and 300 kHz on as-fabricated Schottky diodes with an active area of 200 μm × 200 μm, which is relevant for near-field communication (125 kHz - 134 kHz), and provided an output voltage amplitude of 0.87 V for a 2 V supply voltage. The simulation results matched the measurement data, verifying the model's accuracy for circuit-level simulation.
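The circuit-level behaviour can be sketched with the Shockley diode equation; the saturation current, ideality factor, and load resistance below are illustrative values, not the parameters extracted from the Pt-IGZO-Cu devices.

```python
import math

def diode_current(v, i_s=1e-12, n=1.2, vt=0.0259):
    """Shockley equation (illustrative saturation current i_s and
    ideality factor n; vt is the thermal voltage at room temperature)."""
    return i_s * (math.exp(v / (n * vt)) - 1.0)

def halfwave(vin, r_load=10e3):
    """Series diode feeding a resistive load: bisect for the diode
    voltage vd at which diode and resistor currents balance, then
    return the load voltage vin - vd."""
    lo, hi = min(vin, 0.0) - 1.0, max(vin, 0.0) + 1.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if diode_current(mid) - (vin - mid) / r_load > 0.0:
            hi = mid
        else:
            lo = mid
    return vin - 0.5 * (lo + hi)

# one period of a 2 V sine input: positive half passes (minus the diode
# drop), negative half is blocked
wave = [2.0 * math.sin(2.0 * math.pi * k / 100.0) for k in range(100)]
out = [halfwave(v) for v in wave]
```

With these toy parameters the output peak sits roughly one diode drop below the 2 V input peak; a smoothing capacitor, as in the paper's rectifier circuits, would then hold the output near that peak.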
We generalize the fluid flow problem of an oscillating flat plate (Stokes' second problem) in two directions. First, we discuss the oscillating porous flat plate with superimposed blowing or suction. The second generalization concerns an increasing or decreasing velocity amplitude of the oscillating flat plate. Finally, we show that a combination of both effects is possible as well.
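For reference, the classical solution that both generalizations start from: a plate oscillating with velocity U·cos(ωt) beneath a fluid of kinematic viscosity ν drives the velocity field

```latex
u(y,t) = U\, e^{-\kappa y} \cos\!\left(\omega t - \kappa y\right),
\qquad \kappa = \sqrt{\frac{\omega}{2\nu}},
```

a shear wave that decays away from the plate over the penetration depth 1/κ. Blowing or suction and a time-dependent amplitude modify this basic structure.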
Cardiac resynchronization therapy (CRT) is an established therapy for heart failure patients and improves quality of life in patients with sinus rhythm, reduced left ventricular ejection fraction (LVEF), left bundle branch block, and wide QRS duration. Since approximately sixty percent of heart failure patients have a normal QRS duration, they do not benefit from or respond to CRT. Cardiac contractility modulation (CCM) releases nonexcitatory impulses during the absolute refractory period in order to enhance the strength of the left ventricular contraction. The aim of the investigation was to evaluate differences in cardiac index between optimized and nonoptimized CRT and CCM devices versus standard values. Impedance cardiography, a noninvasive method, was used to measure the cardiac index (CI), a useful parameter that relates the blood volume the heart pumps per minute to the body surface area. CRT patients showed an increase in cardiac index of 39.74 percent and CCM patients an improvement of 21.89 percent with an optimized device.
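For clarity, the cardiac index relates cardiac output to body surface area; a minimal sketch using the standard Du Bois formula (the study's impedance-cardiography signal processing is not reproduced here):

```python
def bsa_dubois(height_cm, weight_kg):
    """Du Bois body-surface-area formula, result in m^2."""
    return 0.007184 * height_cm ** 0.725 * weight_kg ** 0.425

def cardiac_index(cardiac_output_l_min, height_cm, weight_kg):
    """Cardiac index CI = cardiac output / body surface area [l/min/m^2]."""
    return cardiac_output_l_min / bsa_dubois(height_cm, weight_kg)

# e.g. a cardiac output of 5 l/min for a 175 cm, 70 kg patient
ci = cardiac_index(5.0, 175.0, 70.0)
```

Resting values around 2.5-4 l/min/m² are commonly cited as the normal range, which is the kind of standard value the device-optimized measurements are compared against.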
In this article, we present a taxonomy of Robot-Assisted Training, a growing body of research in Human–Robot Interaction which focuses on how robotic agents and devices can be used to enhance users' performance during a cognitive or physical training task. Robot-Assisted Training systems have been successfully deployed to enhance the effects of a training session in various contexts, e.g., rehabilitation systems, educational environments, and vocational settings. The proposed taxonomy suggests a set of categories and parameters that can be used to characterize such systems, considering the current research trends and needs for the design, development and evaluation of Robot-Assisted Training systems. To this end, we review recent works and applications in Robot-Assisted Training systems, as well as related taxonomies in Human–Robot Interaction. The goal is to identify and discuss open challenges, highlighting the different aspects of a Robot-Assisted Training system and considering both robot perception and behavior control.
Crystal structures of two metal–organic frameworks (MFU‐1 and MFU‐2) are presented, both of which contain redox‐active CoII centres coordinated by linear 1,4‐bis[(3,5‐dimethyl)pyrazol‐4‐yl] ligands. In contrast to many MOFs reported previously, these compounds show excellent stability against hydrolytic decomposition. Catalytic turnover is achieved in oxidation reactions by employing tert‐butyl hydroperoxide, and the solid catalysts are easily recovered from the reaction mixture. Whereas heterogeneous catalysis is unambiguously demonstrated for MFU‐1, MFU‐2 shows catalytic activity due to slow metal leaching, emphasising the need for a deeper understanding of structure–reactivity relationships in the future design of redox‐active metal–organic frameworks. Mechanistic details of oxidation reactions employing tert‐butyl hydroperoxide are studied by UV/Vis and IR spectroscopy and XRPD measurements. The changes of redox states and the structural changes accompanying the catalytic process were investigated by means of cobalt K‐edge X‐ray absorption spectroscopy. To probe the putative binding modes of molecular oxygen, the isosteric heats of adsorption of O2 were determined and compared with models from DFT calculations. The stabilities of the frameworks in an oxygen atmosphere as a reactive gas were examined by temperature‐programmed oxidation (TPO). Solution impregnation of MFU‐1 with a co‐catalyst (N‐hydroxyphthalimide) led to NHPI@MFU‐1, which oxidised a range of organic substrates under ambient conditions by employing molecular oxygen from air. The catalytic reaction involved a biomimetic reaction cascade based on free radicals. The concept of an entatic state of the cobalt centres is proposed, and its relevance for sustained catalytic activity is briefly discussed.
Purpose: Participation and accessibility issues faced by gamers with multi-sensory disabilities are themes yet to be fully understood by accessible technology researchers. In this work, we examine the personal experiences and perceptions of individuals with deafblindness who play games despite their disability, as well as the reasons that lead some of them to stop playing games.
Materials and methods: We conducted 60 semi-structured interviews with individuals living with deafblindness in five European countries: United Kingdom, Germany, Netherlands, Greece and Sweden.
Results: Participants stated that they play games because games are a fun and entertaining hobby, for socialization and meeting others, or for occupying the mind. Reasons for stopping to play games essentially included accessibility issues, followed by high cognitive demand, changes in the gaming experience due to their disability, financial reasons, or because the accessible version of a specific game was not considered as fun as the original one.
Conclusions: We identified that a considerable number of individuals with deafblindness enjoy playing casual mobile games such as Wordfeud and Sudoku as a pastime activity. Despite challenging accessibility issues, games provide meaningful social interactions to players with deafblindness. Finally, we introduce a set of user-driven recommendations for making digital games more accessible to players with a diverse combination of sensory abilities.
IMPLICATIONS FOR REHABILITATION
- Digital games were considered a fun and entertaining hobby by participants with deafblindness. Furthermore, participants play games for socialization and meeting others, or for occupying the mind.
- Digital games provide meaningful social interactions and a pastime to persons with deafblindness.
- On top of accessibility implications, our findings draw attention to the importance of the social element of gaming for persons with deafblindness.
- Based on interviews, we introduce a set of user-driven recommendations for making digital games more accessible to players with a diverse combination of sensory abilities.
Transcatheter aortic valve implantation is a therapy for patients with reduced left ventricular ejection fraction and symptomatic aortic stenosis. The aim of the study was to compare the pre- and post-implantation QRS and QT ventricular conduction times as a potential predictor of the need for permanent pacemaker therapy after transcatheter aortic valve implantation. QRS and QT ventricular conduction times were prolonged after the procedure in heart failure patients who received permanent dual-chamber pacemaker therapy after transcatheter aortic valve implantation. QRS and QT ventricular conduction times may therefore be useful parameters to evaluate the risk of post-procedural ventricular conduction block and permanent pacemaker therapy in transcatheter aortic valve implantation.
Silicon (Si) has turned out to be a promising active material for next‐generation lithium‐ion battery anodes. Nevertheless, the issues known from Si as an electrode material (pulverization effects, volume change, etc.) are impeding the development of Si anodes to market maturity. In this study, we investigate a possible application of Si anodes in low‐power printed electronic applications. Tailored Si inks are produced, and the impact of carbon coating on their printability and on the electrochemical behavior of the printed Si anodes is investigated. The printed Si anodes contain active material loadings that are practical for powering printed electronic devices, like electrolyte-gated transistors, and show high capacity retention. A capacity of 1754 mAh/gSi is achieved for a printed Si anode after 100 cycles. Additionally, the direct applicability of the printed Si anodes is demonstrated by successfully powering an ink‐jet printed transistor.
The Future of FDI: Achieving the Sustainable Development Goals 2030 through Impact Investment
(2019)
In 2015, the United Nations General Assembly passed a resolution on the Sustainable Development Goals 2030 (SDGs), publicized as a global call for action. Before the SDGs were issued in 2015, the United Nations Conference on Trade and Development (UNCTAD) had already identified in 2014, as part of its World Investment Report, that developing countries in particular face an estimated USD 2.5 trillion annual funding gap in the efforts to achieve the SDGs. Yet the investment opportunities and challenges for investors in contributing to the closure of this funding gap while benefiting from its economic potential have not been widely discussed. Although Foreign Direct Investment (FDI) is a key driver of sustainable economic growth and prosperity of a nation, policies and a holistic framework linking the 2030 Agenda to actionable investment opportunities for private investors are missing. Furthermore, a global platform capturing, channeling and promoting investment projects aiming to achieve the SDGs through impact investment has not been established. Utilizing global financial resources more effectively while developing new approaches and tools to promote impact investments, which demonstrate the benefits for investors of tapping into the funding gap of the 2030 Agenda, has the potential to significantly shape and influence the future of FDI.
The use of a TLC scanner can be regarded as a key step in high-performance thin-layer chromatography (HPTLC). Densitometric measurements transform the substance distribution on a TLC plate into digital data. Systems that allow quantitative measurements of either fluorescence or ultraviolet absorption have been available for many years; lately, the reflection mode has become the most common application for both types of measurement. New scanning approaches are designed to aid analysts who have common demands on TLC densitometry without using special data, such as scanned images. Two examples developed lately in the laboratories of the authors are described in this paper. These approaches were developed on the basis of the current needs of analysts who employ TLC as a tool in research as well as in routine analysis. One approach aims to support analysts in economically disadvantaged areas, where cost-intensive apparatus is unsuitable but trace analysis by simple means is required. The other system allows the spectral determination of chromatographic spots on TLC plates covering the ultraviolet and visible range, thus revealing highly desired information for the analyst.
The production of potable water in dry areas is nowadays mainly done by the desalination of seawater. State-of-the-art desalination plants are usually built with high production capacities and consume a lot of electrical energy or energy from primary resources such as oil. This causes difficulties in rural areas, where no infrastructure is available for either the plants' energy supply or the distribution of the produced potable water. To address this need, small, self-sustaining, and locally operated desalination plants have come into the focus of research. In this work, a novel flash evaporator design is proposed that can be driven either by solar power or by low-temperature waste heat. It offers low operating costs as well as easy maintenance. The results of an experimental setup operated with water at a feed flow rate of up to 1,600 l/h are presented. The proof of concept regarding efficient evaporation as well as efficient gas-liquid separation is provided successfully. The experimental evaporation yield accounts for 98 % of the vapor content expected from the vapor pressure curve of water. Neither measurements of the electrical conductivity of the obtained condensate nor the analysis of the vapor flow by optical methods show significant droplet entrainment, so there are no concerns regarding the purity of the produced condensate for use as drinking water.
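The comparison of the measured yield with the vapor pressure curve can be sketched as follows. The Antoine constants for water are standard textbook values (valid roughly 1-100 °C), and the flash energy balance is a generic simplification, not the authors' evaporator model.

```python
def p_sat_water(t_celsius):
    """Antoine equation for water (standard constants, ~1-100 degC):
    saturation pressure in mbar."""
    p_mmhg = 10.0 ** (8.07131 - 1730.63 / (233.426 + t_celsius))
    return p_mmhg * 1.333224   # mmHg -> mbar

def flash_yield(t_feed, t_flash, h_vap=2300.0, cp=4.19):
    """Equilibrium vapour mass fraction when hot feed flashes from
    t_feed down to t_flash: sensible heat released per kg of feed
    divided by the heat of vaporization (kJ/kg, generic values)."""
    return cp * (t_feed - t_flash) / h_vap
```

An experimental yield of 98 % of such an equilibrium expectation indicates nearly complete flashing in the evaporator.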
A new formula is presented for transforming fluorescence measurements in accordance with the Kubelka-Munk theory. The fluorescence signals, the absorption signals, and data from a selected reference are combined in one expression. Only diode-array techniques can measure all the required data simultaneously to linearize fluorescence data correctly. To prove the new theory, HPTLC quantification of the analgesic flupirtine was performed over the mass range 300 to 5000 ng per spot. The fluorescence calibration curve was linear over the whole range. The transformation of fluorescence measurements into linear mass-dependent data extends the technique of in-situ fluorescence analysis to the high concentration range. It also extends the Kubelka-Munk theory from absorption to fluorescence analysis. The results presented also emphasize the importance of the Kubelka-Munk theory for in-situ measurements in scattering media, especially in planar chromatography.
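For context, the classical Kubelka-Munk remission function that the new formula extends can be written as a one-liner; the paper's combined fluorescence-absorption expression itself is not reproduced here.

```python
def kubelka_munk(reflectance):
    """Classical Kubelka-Munk remission function F(R) = (1-R)^2 / (2R),
    which linearizes absorbance data measured on scattering layers
    such as TLC plates (R is the relative reflectance, 0 < R <= 1)."""
    if not 0.0 < reflectance <= 1.0:
        raise ValueError("reflectance must be in (0, 1]")
    return (1.0 - reflectance) ** 2 / (2.0 * reflectance)
```

F(R) is proportional to the ratio of the absorption and scattering coefficients of the layer, which is why it maps reflectance readings onto a mass-proportional scale.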
A Simple and Reliable HPTLC Method for the Quantification of the Intense Sweetener Sucralose®
(2003)
This paper describes a simple and fast thin layer chromatography (TLC) method for the monitoring of the relatively new intense sweetener Sucralose® in various food matrices. The method requires little or no sample preparation to isolate or concentrate the analyte. The Sucralose® extract is separated on amino‐TLC‐plates, and the analyte is derivatized “reagent‐free” by heating the developed plate for 20 min at 190°C. Spots can be measured either in the absorption or fluorescence mode. The method allows the determination of Sucralose® at the levels of interest regarding foreseen European legislation (>50 mg/kg) with excellent repeatability (RSD = 3.4%) and recovery data (95%).
High-performance thin-layer chromatography (HPTLC), as the modern form of TLC (thin-layer chromatography), is suitable for detecting pharmaceutically active compounds over a wide polarity range using the gradient multiple development (GMD) technique. Diode-array detection (DAD) in conjunction with HPTLC can simultaneously acquire ultraviolet‒visible (UV‒VIS) and fluorescence spectra directly from the plate. Visualization as a contour plot helps to identify separated zones. An orange peel extract is used as an example to show how GMD‒DAD‒HPTLC in seven different developments with seven different solvents can provide an overview of the entire sample. More than 50 compounds in the extract can be separated on a 6-cm HPTLC plate. Such separations take place in the biologically inert stationary phase of HPTLC, making it a suitable method for effect-directed analysis (EDA). HPTLC‒EDA can even be performed with living organisms, as confirmed by the use of Aliivibrio fischeri bacteria to detect bioluminescence as a measure of toxicity. The combination of gradient multiple development planar chromatography with diode-array detection and effect-directed analysis (GMD‒DAD‒HPTLC‒EDA), in conjunction with specific staining methods and time-of-flight mass spectrometry (TOF‒MS), will be the method of choice for finding new chemical structures in plant extracts that can serve as basic structures for new pharmaceutically active compounds.
High performance thin layer chromatography (HPTLC) is a frequently used separation technique which works well for quantification of caffeine and quinine in beverages. Competing separation techniques, e.g. high-performance liquid chromatography (HPLC) or gas chromatography (GC), are not suitable for sugar-containing samples, because these methods need special pretreatment by the analyst. In HPTLC, however, it is possible to separate ‘dirty’ samples without time-consuming pretreatment, because disposable HPTLC plates are used. A convenient method for quantification of caffeine and quinine in beverages, without sample pretreatment, is presented below. The basic theory of in-situ quantification in HPTLC by use of remitted light is introduced and discussed. Several linearization models are discussed.
A home-made diode-array scanner has been used for quantification; this, for the first time, enables simultaneous measurements at different wavelengths. The new scanner also enables fluorescence evaluation without further equipment. Simultaneous recording at different wavelengths improves the accuracy and reliability of HPTLC analysis. These aspects result in substantial improvement of in-situ quantitative densitometric analysis and enable quantification of compounds in beverages.
Fluorescence Enhancement of Pyrene Measured by Thin-Layer Chromatography with Diode-Array Detection
(2003)
In-situ densitometry for qualitative or quantitative purposes is a key step in thin-layer chromatography. It offers a simple way of quantifying by measuring the optical density of the separated spots directly on the plate. A new TLC scanner has been developed which is able to measure TLC plates or HPTLC plates, at different wavelengths simultaneously, without destroying the plate surface. The system enables absorbance and fluorescence measurements in one run. Fluorescence measurements are possible without filters or other adjustments.
The measurement of fluorescence from a TLC plate is a versatile means of making TLC analysis more sensitive. Fluorescence measurements with the new scanner are possible without filters or special lamps. Improvement of the signal-to-noise ratio is achieved by wavelength bundling. During plate scanning the scattered light and the fluorescence are both emitted from the surface of the TLC plate and this emitted light provides the desired spectral information from substances on the TLC plate. The measurement of fluorescence spectra and absorbance spectra directly from a TLC plate is based on differential measurement of light emerging from sample-free and sample-containing zones.
The literature recommends dipping TLC plates in viscous liquids to enhance fluorescence. Measurement of the fluorescence and absorbance spectra of pyrene spots reveals the mechanism of this enhancement: blocked contact of the fluorescent molecules with the stationary phase or with other sample molecules is responsible for the enhanced fluorescence at lower concentrations.
In conclusion, dipping in TLC analysis is no miracle; it is based on mechanisms similar to those observable in liquids. The measured TLC spectra are also very similar to liquid spectra, which makes TLC spectroscopy an important tool in separation analysis.
A new diode-array scanner in combination with a computer-controlled application system meets all the demands of modern HPTLC measurement. Automatic application, simultaneous measurements at different wavelengths, and different linearization models enable appropriate evaluation of all analytical questions. The theory of error propagation recommends quantification at reflectance values smaller than 0.8; this can be verified only by use of diode-array scanning. The same theory also recommends quantification by use of peak height data, because the theory predicts best precision only for peak height evaluation. Diode-array scanning with reflectance monitoring enables appropriate validation in TLC and HPTLC analysis. All these aspects result in substantial improvement of in-situ quantitative densitometric analysis, and simultaneous recording at different wavelengths opens the way for chemometric evaluation, e.g. peak purity monitoring, which improves the accuracy and reliability of HPTLC analysis.
In-situ densitometry for qualitative or quantitative purposes is a key step in thin-layer chromatography (TLC). It is a simple means of quantification by measurement of the optical density of the separated spots directly on the plate. A new scanner has been developed which is capable of measuring TLC or HPTLC (high-performance thin-layer chromatography) plates simultaneously at different wavelengths without damaging the plate surface. Fiber optics and special fiber interfaces are used in combination with a diode-array detector. With this new scanner, sophisticated plate evaluation is now possible, which enables the use of chemometric methods in HPTLC. Different regression models have been introduced which enable appropriate evaluation of all analytical questions. Fluorescence measurements are possible without filters or special lamps, and signal-to-noise ratios can be improved by wavelength bundling. Because of the richly structured spectra obtained from PAHs, diode-array HPTLC enables quantification of all 16 EPA PAHs on one track. Although the separation is incomplete, all 16 compounds can be quantified by use of suitable wavelengths. All these aspects enable substantial improvement of in-situ quantitative densitometric analysis.
In this paper a high-performance thin-layer chromatography (HPTLC) scanner is presented in which a special fibre arrangement is used as the HPTLC plate scanning interface. Measurements are taken with a set of 50 fibres at a distance of 400 to 500 μm above the HPTLC plate. Spatial resolutions on the HPTLC plate of better than 160 μm are possible. It takes less than 2 min to scan 450 spectra simultaneously in a range of 198 to 610 nm. The key improvement of the system is the use of highly transparent glass fibres, which provide excellent transmission at 200 nm, and of a special fibre arrangement for plate illumination and detection.
We present a video-densitometric high-performance thin-layer chromatography (HPTLC) quantification method for patulin in apple juice, developed in a vertical chamber from the starting point to a distance of 50 mm, using MTBE‒n-pentane (9 + 5, v/v) as the mobile phase. After separation, the plate is sprayed with methyl-benzothiazolinone hydrazone hydrochloride monohydrate (MBTH) solution (40 mg in 20 mL methanol) and heated at 105 °C for 15 min. Patulin zones are transformed into yellow spots. The quantification is based on direct measurements using an inexpensive 48-bit flatbed scanner for color measurements (in red, green, and blue). Evaluation of the blue channel makes the measurements very specific. Quantification in fluorescence was also done by use of a 16-bit CCD camera and UV-366 nm illumination, as well as using an HPTLC DAD scanner. For linearization, the extended Kubelka–Munk expression for data transformation was used. The range of linearity covers more than two orders of magnitude and lies between 5 and 800 ng patulin. The extraction of 20 g apple juice and the application of up to 50 µL of extract on the plate allow a statistically verified limit of detection (LOD) of 50 ng patulin per track, which is equivalent to 50 µg patulin per kg apple juice.
An Extraction Method for 17α-Ethinylestradiol from Water using a new kind of monolithic Stir-bar
(2015)
A 2D separation of the 16 polyaromatic hydrocarbons (PAHs) of the Environmental Protection Agency (EPA) standard was introduced. Separation took place on a TLC RP-18 plate (Merck, 1.05559). In the first direction, the plate was developed twice using n-pentane at −20°C as the mobile phase. The mixture acetonitrile-methanol-acetone-water (12:8:3:3, v/v) was used for developing the plate in the second direction. Both developments were carried out over a distance of 43 mm. Furthermore, a specific and very sensitive indication method for benzo[a]pyrene and perylene is presented, which can detect these hazardous compounds even in complicated PAH mixtures. These compounds can be quantified by a simple chemiluminescent reaction with a limit of detection (LOD) of 48 pg per band for perylene and 95 pg per band for benzo[a]pyrene. Although both compounds were separated from all other PAHs in the standard, they could not be separated from one another. The method is therefore suitable for tracing benzo[a]pyrene and/or perylene. The proposed chemiluminescence screening test for PAHs is extremely sensitive but may yield a false positive result for benzo[a]pyrene.
Two solvent mixtures for the high-performance thin-layer chromatographic (HPTLC) separation of some compounds showing estrogenic activity in the yeast estrogen screen (YES) assay are presented. The new method, the planar yeast estrogen screen (pYES), combines chromatographic separation on silica gel HPTLC plates with the performance of the YES assay. For separation, the analytes were applied bandwise to HPTLC plates (10 × 20 cm) with fluorescent dye (Merck, Germany). The plates were developed in a vertical developing chamber after 30 min of chamber saturation over a separation distance of 70 mm, using cyclohexane‒methyl ethyl ketone (2:1, V/V) or cyclohexane‒CPME (3:2, V/V) as solvents. Both solvents allow the separation of estriol, daidzein, genistein, 17β-estradiol, 17α-ethinyl estradiol, estrone, 4-nonylphenol and bis(2-ethylhexyl) phthalate.
An algorithm is presented that has successfully been utilized in practice for several years. It improves data analysis in chromatography. The program runs in an extremely reliable way and evaluates chromatographic raw data with an acceptable error. The algorithm requires a minimum of preliminaries and integrates even unsmoothed noisy data correctly.
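The published abstract does not disclose the algorithm's internals; as a hedged illustration of the task it solves, a common noise-tolerant convention integrates a peak trapezoidally above a straight baseline drawn through the peak limits:

```python
def peak_area(signal, start, end):
    """Trapezoidal area of a chromatographic peak between indices
    `start` and `end` (inclusive), measured above a straight baseline
    through the two boundary points.  This is a generic convention,
    not the paper's proprietary algorithm."""
    area = 0.0
    for i in range(start, end):
        area += 0.5 * (signal[i] + signal[i + 1])   # raw trapezoid strips
    # subtract the linear baseline spanned between the peak limits
    area -= 0.5 * (signal[start] + signal[end]) * (end - start)
    return area
```

Because the baseline is anchored at the peak limits, a constant (or slowly drifting) background cancels out, which is one reason trapezoidal integration tolerates unsmoothed noisy data reasonably well.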
An improved separation of the highly toxic contact herbicides paraquat (1,1′-dimethyl-4-4′-bipyridinium), diquat (6,7-dihydrodipyridol[1,2-a:2′,1′-c]pyrazine-5,8-di-ium), difenzoquat (1,2-dimethyl-3,5-diphenyl-1H-pyrazolium-methyl sulfate), mepiquat (1,1-dimethyl-piperidinium), and chlormequat (2-chloroethyltrimethylammonium) by high-performance thin-layer chromatography (HPTLC) is presented. The quantification is based on a derivatization reaction using sodium tetraphenylborate. Measurements were made in the wavelength range from 500 to 535 nm, using a light-emitting diode (LED) that emits very dense light at 365 nm for excitation. For the calculations, a new theory of the standard addition method was used, which leads to a minimal error if exactly the same amount as the sample content is added as a standard. The method provides a fast and inexpensive approach to the quantification of the five most important quats used for plant protection purposes. The method works reliably because it takes losses during the pre-treatment procedure into account. The method meets the limits of European legislation for paraquat and diquat in drinking water according to United States Environmental Protection Agency (US EPA) method 549.2, which are 680 ng L−1 for paraquat and 720 ng L−1 for diquat. The method of standard addition in planar chromatography can be used beneficially to reduce systematic errors: although recovery rates of 33.7% to 65.2% are observed, the contents calculated according to the method of standard addition lie between 69% and 127% of the theoretical amounts.
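The standard-addition calculation referred to above can be sketched in its simplest single-point form (a generic formulation, not the paper's new theory): proportional losses during pre-treatment cancel because sample and spike are affected alike.

```python
def standard_addition(signal_sample, signal_spiked, amount_added):
    """Single-point standard addition: the analyte amount in the sample
    follows from the signal increase caused by a known spike,
    amount = amount_added * S0 / (S_spiked - S0).  Proportional
    (recovery-type) losses cancel because they scale both signals."""
    gain = signal_spiked - signal_sample
    if gain <= 0:
        raise ValueError("spiked signal must exceed the sample signal")
    return amount_added * signal_sample / gain

# e.g. with only 50 % recovery: a true 10 ng gives signal 5.0, and a
# 10 ng spike raises it to 10.0 -- the estimate is still 10 ng
print(standard_addition(5.0, 10.0, 10.0))
```

The error is smallest when the spike roughly equals the sample content, which matches the recommendation quoted in the abstract.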
Fully Printed Inverters using Metal‐Oxide Semiconductor and Graphene Passives on Flexible Substrates
(2020)
Printed and flexible metal-oxide transistor technology has recently demonstrated great promise due to its high performance and robust mechanical stability. Herein, fully printed inverter structures using electrolyte-gated oxide transistors on a flexible polyimide (PI) substrate are discussed in detail. Conductive graphene ink is printed for the passive structures and interconnects. The additively printed transistors on PI substrates show an on/off ratio of 10⁶ and mobilities similar to state-of-the-art printed transistors on rigid substrates. Printed meander structures of graphene are used as pull-up resistances in a transistor–resistor logic to create fully printed inverters. The printed and flexible inverters show a signal gain of 3.5 and a propagation delay of 30 ms. These printed inverters are able to withstand a tensile strain of 1.5% over more than 200 cycles of mechanical bending. The stability of the electrical direct current (DC) properties has been observed over a period of 5 weeks. These oxide-transistor-based fully printed inverters are relevant for digital printing methods which could be implemented in roll-to-roll processes.
Development of Fully Printed Oxide Field-Effect Transistors using Graphene Passive Structures
(2019)
Over the past decade, printed electronics has gained a lot of attention for its potential use in a number of practical applications, including biosensors, photovoltaic devices, RFIDs, flexible displays, large-area circuits, and more. To fully realize printed electronic components and devices, effective techniques for printing passive structures, together with electrically and chemically compatible materials for the printed devices, need to be developed first. The opportunity of using electrically conducting graphene inks will enable the integration of passive structures into active devices, such as printed electrolyte-gated transistors (EGTs). Accordingly, in this study, we present the parametric results obtained on fully printed electrolyte-gated transistors with graphene as the passive electrodes, an inorganic oxide semiconductor as the active channel, and a composite solid polymer electrolyte (CSPE) as the gate insulating material. This configuration offers high chemical and electrical stability while allowing EGT operation at low potentials, implying the distinct advantage of operation at low input voltages. The printed in-plane EGTs we developed exhibit excellent performance with a device mobility of up to 16 cm² V⁻¹ s⁻¹, an I_ON/I_OFF ratio of 10⁵, and a subthreshold slope of 120 mV dec⁻¹.
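The two transistor figures of merit reported above can be illustrated on a synthetic transfer curve; the curve and all its parameters are illustrative assumptions, not measured device data.

```python
import numpy as np

# synthetic transfer curve: exponential subthreshold region with 120 mV/dec,
# saturating at an on-current of 1e-5 A (all values illustrative)
vg = np.linspace(0.0, 1.0, 101)                       # gate voltage, V
i_d = np.minimum(1e-10 * 10 ** (vg / 0.120), 1e-5)    # drain current, A

on_off = i_d.max() / i_d.min()                        # on/off current ratio

# subthreshold slope SS = dVG / d(log10 ID), fitted in the exponential region
sub = i_d < 1e-6
ss_v_per_dec = np.polyfit(np.log10(i_d[sub]), vg[sub], 1)[0]
print(f"on/off ~ {on_off:.0e}, SS ~ {ss_v_per_dec * 1e3:.0f} mV/dec")
```

Fitting the gate voltage against the logarithm of the drain current over the subthreshold region is the standard way to extract the slope in mV per decade of current.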
Selective separation of CO2-CH4 mixed gases via magnesium aminoethylphosphonate nanoparticles
(2016)
The CO2 uptake on nanoscale AlO(OH) hollow spheres (260 mg g⁻¹) as a new material is comparable to that on many metal–organic frameworks, although their specific surface area is much lower (530 m² g⁻¹ versus 1500–6000 m² g⁻¹). Suited temperature–pressure cycles allow for the reversible storage and separation of CO2, while the CO2 uptake is 4.3 times higher than that of N2.
The increasing number of transistors clocked at high frequencies in modern microprocessors leads to increasing power consumption, which calls for active dynamic thermal management. In a research project, a system environment has been developed that includes thermal modeling of the microprocessor in the board system, a software environment to control the characteristics of the system's timing behavior, and a modified Linux scheduler enhanced with a prediction controller. Measurement results are shown for a Freescale i.MX6Q quad-core microprocessor.
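As an illustration of the prediction-controller idea, a first-order RC thermal model can forecast the die temperature over a short horizon and decide whether throttling is needed. All parameters below are illustrative, not values from the i.MX6Q study.

```python
# First-order RC thermal model as a sketch of temperature prediction for a
# scheduler's prediction controller; R, C, power, and limits are illustrative,
# not values from the i.MX6Q system described above.

def predict_temperature(t_now, power_w, r_th=1.5, c_th=10.0, t_amb=25.0,
                        dt=0.1, horizon_s=5.0):
    """Forward-integrate dT/dt = (P - (T - T_amb)/R_th) / C_th."""
    t = t_now
    for _ in range(int(horizon_s / dt)):
        t += dt * (power_w - (t - t_amb) / r_th) / c_th
    return t

t_pred = predict_temperature(t_now=70.0, power_w=40.0)
throttle = t_pred > 85.0          # reduce frequency only if the limit would be hit
print(round(t_pred, 1), throttle)
```

Predicting ahead instead of reacting to the current sensor reading lets the scheduler throttle before the thermal limit is actually reached, which is the point of augmenting the scheduler with a prediction controller.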
Many SMEs still face the problem that their corporate structures and processes are not designed for efficient development and market positioning, and that appropriate methods and tools are lacking. SMEs often address internal and external demands for services inefficiently. The following key questions are answered in this article: 1) Which studies are available on strategic planning in young SMEs? 2) Which aspects should be considered in the implementation and control of these instruments?
High-precision signal processing algorithm to evaluate SAW properties as a function of temperature
(2013)
This paper presents a signal processing algorithm which accurately evaluates the SAW properties of a substrate as functions of temperature. The investigated acoustic properties are group velocity, phase velocity, propagation loss, and coupling coefficient. With several measurements carried out at different temperatures, we obtain the temperature dependency of the SAW properties. The analysis algorithm starts by reading the transfer functions of short and long delay lines; it then determines the center frequency of the delay lines and obtains the delay time difference between the short and long delay lines. The extracted parameters are then used to calculate the acoustic properties of the SAW material. To validate the algorithm, its accuracy is studied by determining the error in the calculated delay time difference, center frequency, and group velocity.
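The core arithmetic of the described analysis — extracting the delay-time difference from the phase of the delay-line transfer functions and converting it into velocities — can be sketched as follows for an ideal, dispersion-free case. All numbers are illustrative, not values from the paper.

```python
import numpy as np

# Minimal sketch: the phase difference between a long and a short delay line
# yields the delay-time difference, from which group and phase velocity
# follow. Values are illustrative and the medium is assumed dispersion-free.

L_diff = 4.0e-3                 # path-length difference of the delay lines, m
v_true = 3900.0                 # assumed SAW velocity, m/s
f = np.linspace(95e6, 105e6, 201)               # frequency band around f0
phase_diff = -2 * np.pi * f * L_diff / v_true   # ideal unwrapped phase

# delay-time difference from the phase slope: dt = -dphi/df / (2*pi)
slope = np.polyfit(f, phase_diff, 1)[0]
dt = -slope / (2 * np.pi)

v_group = L_diff / dt                           # group velocity, m/s
f0 = 100e6                                      # centre frequency, Hz
v_phase = -2 * np.pi * f0 * L_diff / np.interp(f0, f, phase_diff)
print(v_group, v_phase)  # both recover 3900 m/s in this ideal case
```

In a dispersive medium the phase slope (group velocity) and the absolute phase at the centre frequency (phase velocity) would differ, which is why the algorithm extracts both.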
Mass transfer phenomena in membrane fuel cells are complex and diversified because of the presence of complex transport pathways, including porous media of very different pore sizes, and the possible formation of liquid water. Electrochemical impedance spectroscopy, although providing valuable information on ohmic phenomena, charge transfer, and mass transfer, may nevertheless prove insufficient below 1 Hz. The use of another variable, back pressure, as the excitation variable for electrochemical pressure impedance spectroscopy is shown here to be a promising tool for the investigation and diagnosis of fuel cells.
We present a video-densitometric quantification method for the pain killers diclofenac and ibuprofen. These non-steroidal anti-inflammatory drugs were separated on cyanopropyl-bonded plates using CH2Cl2‒methanol‒cyclohexane (95 + 5 + 40, v/v) as the mobile phase. The quantification is based on a bio-effective-linked analysis using Vibrio fischeri bacteria. Within 10 min, a CCD camera registered the white light of the light-emitting bacteria. Diclofenac and ibuprofen effectively suppress the bacterial light emission, which can be used for quantification within a linear range of 10 to 2000 ng. The detection limit for ibuprofen is 20 ng and the limit of quantification 26 ng per zone. Measurements were carried out using a 16-bit ST-1603ME CCD camera with 1.56 megapixels (Santa Barbara Instrument Group, Inc., Santa Barbara, USA). The range of linearity covers more than two orders of magnitude because the extended Kubelka-Munk expression is used for data transformation. The separation method is inexpensive, fast, and reliable.
We present a video-densitometric quantification method in combination with diode-array quantification for methyl-, ethyl-, propyl-, and butylparaben in cosmetics. These parabens were separated on cyanopropyl-bonded plates using water‒acetonitrile‒dioxane‒ethanol‒NH3 (25%) (8:2:1:1:0.05, v/v) as the mobile phase. The quantification is based on UV measurements at 255 nm and a bio-effective-linked analysis using Vibrio fischeri bacteria. Within 5 min, a Tidas S 700 diode-array scanner (J&M, Aalen, Germany) scans 8 tracks and thus measures in total 5600 spectra in the wavelength range from 190 to 1000 nm. The quantification range for all these parabens is from 20 to 400 ng per band, measured at 255 nm. In the V. fischeri assay, a CCD camera registers the white light of the light-emitting bacteria within 10 min. All parabens effectively suppress the bacterial light emission, which can be used for quantification within a linear range from 100 to 400 ng. Measurements were carried out using a 16-bit MicroChemi chemiluminescence system (biostep GmbH, Jahnsdorf, Germany) with a CCD camera of 4.19 megapixels. This range of linearity is achieved because the extended Kubelka-Munk expression is used for data transformation. The separation method is inexpensive, fast, and reliable.
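Both video-densitometric methods above transform the measured data with an extended Kubelka-Munk expression. As orientation, the classical form of that transform is sketched below; the extended expression used in the papers differs and is not reproduced here.

```python
# Classical Kubelka-Munk transform as an orientation sketch; the papers use
# an *extended* expression whose exact form is not given in the abstracts.

def kubelka_munk(reflectance):
    """F(R) = (1 - R)^2 / (2 R) for remission 0 < R <= 1.

    The transform maps measured remission to a quantity roughly
    proportional to the absorber amount, which linearizes calibration
    curves in reflectance densitometry.
    """
    return (1.0 - reflectance) ** 2 / (2.0 * reflectance)

print(kubelka_munk(0.5))  # 0.25
```

Linearizing the remission data in this way is what extends the usable calibration range beyond the narrow region where raw intensities respond linearly.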
Cast iron materials are used for cylinder heads of heavy-duty internal combustion engines. These components must withstand severe cyclic mechanical and thermal loads throughout their service life. While high-cycle fatigue (HCF) dominates in the water jacket region, the combination of thermal transients with mechanical load cycles results in thermomechanical fatigue (TMF) of the material in the fire deck region, including superimposed TMF and HCF loads. Increasing the efficiency of the engines directly increases combustion pressure and temperature and thus lowers the safety margins of the currently used cast iron materials, or alternatively creates the need for superior cast iron materials. In this paper (Part I), the TMF properties of the lamellar graphite cast iron GJL250 and the vermicular graphite cast iron GJV450 are characterized in uniaxial tests, and a mechanism-based model for TMF life prediction is developed for both materials. The model can be used to estimate the fatigue life of components by means of finite-element calculations (Part II of the paper) and supports engineers in finding the appropriate material and design. Furthermore, the effect of the elastic, plastic, and creep properties of the materials on the fatigue life can be evaluated with the model. However, for material selection, the thermophysical properties, which largely control the thermal stresses in the component, must also be considered. Hence, the need for integral concepts for material characterization and selection from a multitude of existing and soon-to-be-developed cast iron materials is discussed.
Cast aluminum alloys are frequently used for cylinder heads in internal combustion gasoline engines. These components must withstand severe cyclic mechanical and thermal loads throughout their lifetime. Reliable computational methods allow accurate estimation of stresses, strains, and temperature fields and lead to more realistic thermomechanical fatigue (TMF) lifetime predictions. With accurate numerical methods, the components can be optimized via computer simulations and the number of required bench tests can be reduced significantly. These alloys are normally optimized for peak hardness from a quenched state, which maximizes the strength of the material. However, due to high-temperature exposure in service or under test conditions, the material experiences an over-ageing effect that leads to a significant reduction in strength. To account for ageing effects numerically, the Shercliff and Ashby ageing model is combined with a Chaboche-type viscoplasticity model available in the finite-element program ABAQUS by defining field variables. The constitutive model with ageing effects is correlated with uniaxial cyclic isothermal tests in the T6 state and the over-aged state, as well as with thermomechanical tests. In addition, the mechanism-based TMF damage model (DTMF) is calibrated for both the T6 and the over-aged state. Both the constitutive and the damage model are applied to a cylinder head component, simulating several cycles of an engine dynamometer test. The effects of including ageing in both models are shown.
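The temperature-compensated-time idea underlying Shercliff-Ashby-type ageing models can be sketched as follows; the activation energy and thermal history used here are illustrative assumptions, not calibrated values from the study.

```python
import math

# Temperature-compensated equivalent ageing time, the kinetic idea behind
# Shercliff-Ashby-type ageing models; the activation energy and thermal
# history below are illustrative assumptions, not calibrated values.

def equivalent_time(segments, t_ref_k, q_j_per_mol=130e3, r_gas=8.314):
    """Convert (duration_s, temperature_K) segments into the equivalent time
    at the reference temperature, assuming Arrhenius kinetics:
    t_eq = sum dt * exp(-(Q/R) * (1/T - 1/T_ref))."""
    return sum(dt * math.exp(-q_j_per_mol / r_gas * (1.0 / t_k - 1.0 / t_ref_k))
               for dt, t_k in segments)

# 100 h at 250 degC, expressed as equivalent hours at 200 degC
t_eq = equivalent_time([(100 * 3600.0, 523.15)], t_ref_k=473.15)
print(round(t_eq / 3600.0))  # far more than 100 h: ageing accelerates with T
```

Tracking such an equivalent time as a field variable is one way a viscoplasticity model can look up ageing-dependent strength during a thermomechanical simulation.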
In the last decade, deep learning models for condition monitoring of mechanical systems have increasingly gained importance. Most previous works use data from the same domain (e.g., bearing type) or a large amount of (labeled) samples. This approach is not valid for many real-world scenarios in industrial use cases where only a small amount of data, often unlabeled, is available.
In this paper, we propose, evaluate, and compare a novel technique based on an intermediate domain, which creates a new representation of the features in the data and abstracts the defects of rotating elements such as bearings. The results based on an intermediate domain related to characteristic frequencies show an improved accuracy of up to 32 % on small labeled datasets compared to the current state-of-the-art in the time-frequency domain.
Furthermore, a Convolutional Neural Network (CNN) architecture is proposed for transfer learning. We also propose and evaluate a new approach for transfer learning, which we call Layered Maximum Mean Discrepancy (LMMD). This approach is based on the Maximum Mean Discrepancy (MMD) but extends it by considering the special characteristics of the proposed intermediate domain. The presented approach outperforms the traditional combination of Hilbert–Huang Transform (HHT) and S-Transform with MMD on all datasets for unsupervised as well as for semi-supervised learning. In most of our test cases, it also outperforms other state-of-the-art techniques.
This approach is capable of using different types of bearings in the source and target domain under a wide variation of the rotation speed.
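The Maximum Mean Discrepancy that LMMD builds on can be sketched with a Gaussian kernel; the layered extension itself is not reproduced here, and the sample data are synthetic.

```python
import numpy as np

def mmd_rbf(x, y, sigma=1.0):
    """Biased estimate of the squared Maximum Mean Discrepancy between
    samples x and y with a Gaussian (RBF) kernel. This is the standard
    statistic that the proposed LMMD extends; the layered extension
    itself is not reproduced here."""
    def kernel(a, b):
        d2 = (np.sum(a ** 2, 1)[:, None] + np.sum(b ** 2, 1)[None, :]
              - 2.0 * a @ b.T)
        return np.exp(-d2 / (2.0 * sigma ** 2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2.0 * kernel(x, y).mean()

rng = np.random.default_rng(0)
same = mmd_rbf(rng.normal(0, 1, (200, 5)), rng.normal(0, 1, (200, 5)))
shifted = mmd_rbf(rng.normal(0, 1, (200, 5)), rng.normal(2, 1, (200, 5)))
print(same < shifted)  # True: the statistic grows when the domains differ
```

In domain-adaptive training, a term of this form is minimized so that source-domain and target-domain feature distributions become indistinguishable to the kernel.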
It is important to minimize the unscheduled downtime of machines caused by outages of machine components in highly automated production lines. In machine tools such as grinding machines, the bearings inside the spindles are among the most critical components. In the last decade, research has increasingly focused on the fault detection of bearings, and the rise of machine learning concepts has further intensified interest in this area. However, to date there is no single one-fits-all solution for the predictive maintenance of bearings. Most research so far has only looked at individual bearing types at a time.
This paper gives an overview of the most important approaches for bearing-fault analysis in grinding machines. The analysis presented here has two main parts. The first part covers the classification of bearing faults, which includes the detection of unhealthy conditions, the position of the fault (e.g., at the inner or the outer ring of the bearing), and the severity, i.e., the size of the fault. The second part covers the prediction of the remaining useful life, which is important for estimating the productive use of a component before a potential failure, optimizing replacement costs, and minimizing downtime.
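Locating a fault at the inner or outer ring typically relies on the classical bearing defect frequencies, which follow from the bearing geometry. The textbook formulas can be sketched as follows, with illustrative bearing dimensions.

```python
import math

# Classical bearing defect frequencies from geometry (textbook formulas);
# the bearing dimensions below are illustrative, not from a specific spindle.

def bearing_frequencies(shaft_hz, n_balls, d_ball_mm, d_pitch_mm, contact_deg=0.0):
    """Return the characteristic fault frequencies in Hz."""
    r = d_ball_mm / d_pitch_mm * math.cos(math.radians(contact_deg))
    return {
        "BPFO": n_balls / 2.0 * shaft_hz * (1.0 - r),              # outer race
        "BPFI": n_balls / 2.0 * shaft_hz * (1.0 + r),              # inner race
        "FTF": shaft_hz / 2.0 * (1.0 - r),                         # cage
        "BSF": d_pitch_mm / (2.0 * d_ball_mm) * shaft_hz * (1.0 - r * r),  # ball spin
    }

freqs = bearing_frequencies(shaft_hz=30.0, n_balls=9, d_ball_mm=7.94, d_pitch_mm=38.5)
print({k: round(v, 1) for k, v in freqs.items()})
```

A peak in the vibration spectrum at one of these frequencies (or its harmonics) points to the corresponding fault location, which is the basis of the frequency-domain classification approaches surveyed above.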
Membrane distillation (MD) is a thermal separation process which uses a hydrophobic, microporous membrane as vapor space. A high-potential application for MD is the concentration of hypersaline brines, such as reverse osmosis retentate or other saline effluents, to a near-saturation level within a Zero Liquid Discharge process chain. In order to further commercialize MD for these target applications, adapted MD module designs are required along with strategies for the mitigation of membrane wetting phenomena. This work presents the experimental results of pilot operation with an adapted Air Gap Membrane Distillation (AGMD) module for hypersaline brine concentration within a range of 0–240 g NaCl/kg solution. Key performance indicators such as flux, GOR, and thermal efficiency are analyzed. A new strategy for wetting mitigation by actively draining the air gap channel with low-pressure air blowing is tested and analyzed. Only small reductions in flux and GOR, of 1.2% and 4.1% respectively, are caused by air sparging into the air gap channel. Wetting phenomena are significantly reduced by avoiding stagnant distillate in the air gap, making the air blower a seemingly worthwhile additional system component.
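The GOR key performance indicator mentioned above can be computed as the distillate latent-heat yield per unit of supplied heat; the operating numbers below are illustrative, not pilot data.

```python
# GOR (gained output ratio) as mentioned among the key performance
# indicators: distillate latent-heat yield per unit of supplied heating
# power. The operating point below is illustrative, not pilot data.

H_FG = 2.326e6   # latent heat of vaporization, J/kg (approximate)

def gained_output_ratio(m_distillate_kg_per_h, q_heating_kw):
    """GOR = m_dot * h_fg / Q_in (dimensionless)."""
    return (m_distillate_kg_per_h / 3600.0) * H_FG / (q_heating_kw * 1e3)

gor = gained_output_ratio(m_distillate_kg_per_h=10.0, q_heating_kw=2.0)
print(round(gor, 2))  # about 3.2: heat is internally recovered in the module
```

A GOR above 1 indicates internal heat recovery within the module; the reported 4.1% GOR penalty from air sparging would scale this value down only slightly.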
Purpose
This study aims to investigate a systematic approach to the production and use of additively manufactured injection mould inserts in product development (PD) processes. For this purpose, an evaluation of the additive tooling design method (ATDM) is performed.
Design/methodology/approach
The evaluation of the ATDM is conducted within student workshops, where students develop products and validate them using AT-prototypes. The evaluation process includes the analysis of work results as well as the use of questionnaires and participant observation.
Findings
This study shows that the ATDM can be successfully used to assist in producing and using AT mould inserts to produce valid AT prototypes. As a reference for the implementation of AT in industrial PD, extracts from the work of the student project groups and suitable process parameters for prototype production are presented.
Originality/value
This paper presents the application and evaluation of a method to support AT in PD that has not yet been scientifically evaluated.
A Hybrid Optoelectronic Sensor Platform with an Integrated Solution‐Processed Organic Photodiode
(2021)
Hybrid systems, unifying printed electronics with silicon-based technology, can be seen as a driving force for future sensor development. Especially interesting are sensing elements based on printed devices in combination with silicon-based high-performance electronics for data acquisition and communication. In this work, a hybrid system integrating a solution-processed organic photodiode in a silicon-based system environment, which enables flexible device measurement and application-driven development, is presented. For performance evaluation of the integrated organic photodiode, the measurements are compared to a silicon-based counterpart, and the steady-state response of the hybrid system is presented. Promising application scenarios are described in which a solution-processed organic photodiode is fully integrated in a silicon system.
Demand Side Management for Thermally Activated Building Systems based on Multiple Linear Regression
(2015)
There is a growing trend towards thermo-active building systems (TABS) for the heating and cooling of buildings, because these systems are known to be very economical and efficient. However, their control is complicated by the large thermal inertia, and their parameterization is time-consuming. With conventional TABS control strategies, the required thermal comfort in buildings often cannot be maintained, particularly if the internal heat sources change suddenly. This paper shows measurement results and evaluations of the operation of a novel adaptive and predictive calculation method based on multiple linear regression (AMLR) for the control of TABS. The measurement results are compared with the standard TABS strategy. The results show that the electrical pump energy could be reduced by more than 86%. Including the weather adjustment, thermal energy savings of over 41% could be demonstrated. In addition, thermal comfort could be improved due to the possibility of specifying mean room set-point temperatures. With the AMLR, comfort category I of the comfort standards ISO 7730 and DIN EN 15251 is met on about 95% of occasions; with the standard TABS strategy, only about 24% of occasions fall within category I.
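The multiple-linear-regression core of an AMLR-style controller can be sketched as follows: fit a linear model to historical operating data, then predict from forecast inputs. The feature set and the data-generating model below are synthetic assumptions for illustration only.

```python
import numpy as np

# Synthetic sketch of the AMLR core: fit heating power to operating data by
# multiple linear regression, then predict for a forecast day. The feature
# set and the data-generating model are assumptions for illustration only.

rng = np.random.default_rng(1)
n = 200
t_out = rng.uniform(-5.0, 15.0, n)      # outdoor temperature, degC
gains = rng.uniform(0.0, 5.0, n)        # internal heat gains, kW
t_set = rng.uniform(20.0, 23.0, n)      # room set-point temperature, degC
power = 2.0 * (t_set - 0.8 * t_out) - gains + rng.normal(0.0, 0.5, n)  # kW

X = np.column_stack([np.ones(n), t_out, gains, t_set])
coef, *_ = np.linalg.lstsq(X, power, rcond=None)      # least-squares fit

x_new = np.array([1.0, 5.0, 2.0, 21.0])               # tomorrow's forecast
pred = float(x_new @ coef)
print(round(pred, 1))  # close to the true value of 32 kW for this model
```

Refitting the regression on a rolling window of recent operation is what makes such a controller adaptive: parameterization happens from data rather than by manual tuning.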
Photovoltaics Energy Prediction Under Complex Conditions for a Predictive Energy Management System
(2015)
The building sector is one of the main consumers of energy. Therefore, heating and cooling concepts based on renewable energy sources become increasingly important. For this purpose, low-temperature systems such as thermo-active building systems (TABS) are particularly suitable. This paper presents results of the use of a novel adaptive and predictive computation method based on multiple linear regression (AMLR) for the control of TABS in a passive seminar building. Detailed comparisons between the standard TABS and AMLR strategies are shown over a period of nine months each. In addition to the reduction of thermal energy use by approx. 26% and a significant reduction of the TABS pump operation time, this paper focuses on investment savings in a passive seminar building through the use of the AMLR strategy. This includes the reduction of the peak power of the chilled beams (auxiliary system), a simplification of the TABS hydronic circuit, and the saving of an external temperature sensor. The AMLR proves its practicality by learning from historical building operation, by dealing with forecasting errors, and by being easy to integrate into a building automation system.
Formal verification (FV) is considered by many to be complicated and to require considerable mathematical knowledge for successful application. We have developed a methodology in which we have added formal verification to the verification process without requiring any knowledge of formal verification languages. We use only finite-state machine notation, which is familiar and intuitive to designers. Another problem associated with formal verification is state-space explosion. If that occurs, no result is returned; our method switches to random simulation after one hour without results, and no effort is lost. We have compared FV against random simulation with respect to development time, and our results indicate that FV is at least as fast as random simulation. FV is superior in terms of verification quality, however, because it is exhaustive.
The following contribution deals with the growth of cracks in low-cycle fatigue (LCF) and thermomechanical fatigue (TMF) tested specimens of Inconel 718, measured using the replica method. The specimens are loaded with different strain rates. The material shows a significantly higher crack growth rate if the strain rate is decreased. Electron backscatter diffraction (EBSD) is used to identify the failure mechanism and the misorientation relationship of failed grain boundaries in secondary cracks. The analyzed cracks propagated mainly transgranularly, but intergranular failure can also be observed in some areas. It is found that grain boundaries with a coincidence site lattice (CSL) boundary structure are generally less susceptible to intergranular failure than grain boundaries with random misorientation. For modeling the experimentally identified crack behavior, an existing model for fatigue crack growth based on the mechanism of time-dependent elastic–plastic crack tip blunting is enhanced to describe environmental effects based on the mechanism of oxygen diffusion at the crack tip. For the diffusion process, the temperature-dependent parabolic diffusion law is assumed. As a result, the time-dependent cyclic crack tip opening displacement (ΔCTOD) is used as a representative value to describe both mechanisms. Thus, most of the included model parameters characterize the deformation behavior of the material and can be determined by independent material tests. With the determined material properties, the proposed model describes the experimentally measured crack growth curves very well. The model is validated based on predictions of the number of cycles to failure of LCF as well as in-phase and out-of-phase TMF tests in the temperature range between room temperature and 650 °C.
The following contribution deals with the experimental investigation and theoretical evaluation of fatigue crack growth under isothermal and non-isothermal conditions in the nickel alloy 617. The microstructure and mechanical properties of alloy 617 are significantly influenced by the thermal heat treatment and the subsequent thermal exposure in service. Hence, a solution-annealed and a long-time service-exposed material condition are studied. The crack growth measurement is carried out using an alternating current potential drop system integrated into a thermomechanical fatigue (TMF) test facility. The measured fatigue crack growth rates are presented as a function of material condition, temperature, and load waveform. Furthermore, the results of the non-isothermal tests depend on the phase between thermal and mechanical load (in-phase, out-of-phase). A fracture-mechanics-based, time-dependent model is extended by an approach to consider environmental effects, where almost all model parameters represent directly measurable values. A consistent description of all results and a good correlation with the experimental data are achieved.
Spinal cord stimulation (SCS) is the most commonly used technique of neurostimulation. It involves the stimulation of the spinal cord and is used to treat chronic pain. Existing esophageal catheters are used for temperature monitoring during electrophysiology studies with ablation and for transesophageal echocardiography. The aim of the study was to model the spine and new esophageal electrodes for transesophageal electrical pacing of the spinal cord, and to integrate them into the Offenburg heart rhythm model for the static and dynamic simulation of transesophageal neurostimulation. The modeling and simulation were both performed with the electromagnetic and thermal simulation software CST (Computer Simulation Technology, Darmstadt). Two new esophageal catheters were modeled, as well as a thoracic spine based on the dimensions of a human skeleton. The simulation of directed transesophageal neurostimulation is performed using the esophageal balloon catheter with an electric pacing potential of 5 V and a trapezoidal signal. A potential of 4.33 V can be measured directly at the electrode, 3.71 V in the myocardium at a depth of 2 mm, 2.68 V in the thoracic vertebra at a depth of 10 mm, 2.1 V in the thoracic vertebra at a depth of 50 mm, and 2.09 V in the spinal cord at a depth of 70 mm. The relation between the voltage delivered to the electrodes and the voltage applied to the spinal cord is linear. Virtual heart rhythm and catheter models, as well as the simulation of electrical pacing and sensing fields, allow the static and dynamic simulation of directed transesophageal electrical pacing of the spinal cord. The 3D simulation of the electrical sensing and pacing fields may be used to optimize transesophageal neurostimulation.
Material flow simulation is a core technology of Industry 4.0. It can analyze and improve large-scale production systems through experimentation with digital simulation models. However, modeling in discrete event simulation is considered an effortful and time-consuming activity and challenges small and medium-sized enterprises in particular. Systematic experiments and what-if analyses require a large number of models. Modeling and simulation become repetitive activities, and the ability to model and simulate instantly becomes crucial for Industry 4.0. However, model generation typically uses specific methods to build models with individual properties for specific physical systems, so a general literature review cannot sufficiently describe the current state of model generation. This study provides an analysis of model generation based on the modeling strategy, modeling view, and production system type, as well as model properties and limitations.
In this paper, the Bauschinger effect and latent hardening of single crystals are assessed in finite element calculations using a single crystal plasticity model with kinematic hardening. To this end, results of cyclic micro-bending experiments on single crystal Alloy 718 in different crystal orientations (single slip and multi slip) with respect to the loading direction are used to determine the slip-system-related material properties of the single crystal plasticity model. Two kinematic hardening laws are considered: a kinematic hardening law describing latent hardening and a kinematic hardening law without latent hardening. For the determination of material properties for both hardening laws, a gradient-based optimization method is used. The results show that the different strength levels observed in micro-bending tests on different crystal orientations can only be described well with latent kinematic hardening, whereas the pronounced Bauschinger effect is described well by both kinematic hardening laws. It is concluded that cyclic micro-bending experiments on single crystals using different crystal orientations give an appropriate database for the determination of the slip-system-related material properties of the single crystal plasticity model with latent kinematic hardening.
Cost effectiveness of preventive screening programmes for type 2 diabetes mellitus in Germany
(2010)
As in several other industrialized countries, Germany’s statutory health insurance (SHI) is facing rising healthcare costs as well as the challenges caused by a double-aging society. The early detection and prevention of chronic diseases is considered a possible way to reduce the impact of these developments. However, controversy surrounds the costs and effects in terms of medical and financial outcomes of such programmes.
The energy system of the future will transform from the current centralised, fossil-based network to a decentralised, clean, highly efficient, and intelligent one. This transformation will require innovative technologies and ideas like trigeneration and the crowd energy concept to pave the way ahead. Even though trigeneration systems are extremely energy efficient and can play a vital role in the energy system, their deployment is hindered by various barriers. These barriers are theoretically analysed in a multiperspective approach, and the role decentralised trigeneration systems can play in the crowd energy concept is highlighted. An initial literature review shows that a multiperspective (technological, energy-economic, and user) analysis is necessary for realising the potential of trigeneration systems in a decentralised grid. To quantify these issues experimentally, we are setting up a microscale trigeneration lab at our institute; the motivation for this lab is also briefly introduced.
Cooling towers and recoolers are among the major consumers of electricity in an HVAC plant. The implementation and analysis of advanced control methods in a practical application, and their comparison with conventional controllers, is necessary to establish a framework for their feasibility, especially in the field of decentralised energy systems. A standard industrial controller, a PID controller, and a model-based controller were developed and tested in an experimental set-up using market-ready components. The characteristics of these controllers, such as settling time, control difference, and frequency of control actions, are compared based on the monitoring data. The modern controllers demonstrated clear advantages in terms of energy savings and higher accuracy, and the model-based controller was easier to set up than the PID.
Drawing on the technical flexibility of building polygeneration systems to support a rapidly expanding renewable electricity grid requires the application of advanced controllers like model predictive control (MPC) that can handle multiple inputs and outputs, uncertainties in forecast data, and plant constraints amongst other features. In this original work, an economic-MPC-based optimal scheduling of a real-world building energy system is demonstrated and its performance is evaluated against a conventional controller. The demonstration includes the steps to integrate an optimisation-based supervisory controller into a standard building automation and control system with off-the-shelf HVAC components and usage of state-of-the-art algorithms for solving complex nonlinear mixed integer optimal control problems. With the MPC, quantitative benefits in terms of 6–12% demand-cost savings and qualitative benefits in terms of better controller adaptability and hardware-friendly operation are identified. Further research potential for improving the MPC framework in terms of field-level stability, minimising constraint violations, and inter-system communication for its deployment in a prosumer-network is also identified.
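The receding-horizon principle of MPC can be illustrated with a toy battery-dispatch problem solved by exhaustive search; the actual formulation in the work above is a far richer nonlinear mixed-integer program, and all numbers here are illustrative.

```python
from itertools import product

# Toy receding-horizon dispatch of a small battery against a known price
# forecast, as a sketch of the MPC principle; all numbers are illustrative.

PRICES = [0.10, 0.30, 0.30, 0.10, 0.30, 0.30]   # electricity price, EUR/kWh
LOAD = 1.0                                      # constant demand, kW
CAP = 2.0                                       # battery capacity, kWh

def plan(soc, prices, horizon=3):
    """Return the first action of the cheapest feasible action sequence.
    Actions are battery powers in kW (+ charge, - discharge) over 1 h steps."""
    best, best_cost = 0.0, float("inf")
    for seq in product([-1.0, 0.0, 1.0], repeat=min(horizon, len(prices))):
        s, cost, feasible = soc, 0.0, True
        for a, p in zip(seq, prices):
            s += a
            if not 0.0 <= s <= CAP:
                feasible = False
                break
            cost += p * max(LOAD + a, 0.0)      # energy bought from the grid
        if feasible and cost < best_cost:
            best, best_cost = seq[0], cost
    return best

soc, total_cost = 0.0, 0.0
for k in range(len(PRICES)):
    a = plan(soc, PRICES[k:])                   # re-plan at every step
    soc += a
    total_cost += PRICES[k] * max(LOAD + a, 0.0)

baseline = sum(p * LOAD for p in PRICES)        # no storage: buy as needed
print(total_cost < baseline)  # True: charging in cheap hours lowers the cost
```

Only the first action of each optimized sequence is applied before re-planning with updated state and forecasts, which is exactly the receding-horizon mechanism that lets MPC absorb forecast errors.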
Optimisation-based economic dispatch of real-world complex energy systems demands reduced-order, continuously differentiable component models that can represent part-load behaviour and dynamic responses. A literature study of existing modelling methods, and of the characteristics such models must meet for successful application in model predictive control of a polygeneration system, is presented. Building on this, a rational modelling procedure using engineering principles and assumptions is applied to develop simplified component models. The models are evaluated quantitatively and qualitatively against experimental data, and their efficacy for application in a building automation and control architecture is established.
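A reduced-order, continuously differentiable part-load model of the kind described can be sketched as a smooth polynomial fit to measured performance points. The chiller COP data below are hypothetical, not the paper's experimental data.

```python
# Sketch: fitting a smooth quadratic part-load curve to hypothetical
# chiller COP measurements -- a reduced-order component model whose
# derivative is available in closed form for gradient-based optimisers.
import numpy as np

# Hypothetical part-load ratios and measured COPs (illustrative only).
plr = np.array([0.2, 0.4, 0.6, 0.8, 1.0])
cop = np.array([2.1, 3.4, 4.2, 4.4, 4.0])

coeffs = np.polyfit(plr, cop, deg=2)   # least-squares fit a*x^2 + b*x + c
model = np.poly1d(coeffs)

# Continuously differentiable: the exact derivative is itself a polynomial,
# which is what makes such models usable inside an MPC optimiser.
dmodel = model.deriv()
```

The fit reproduces the typical part-load shape (efficiency peaking below full load) while remaining cheap to evaluate and differentiate, which is the core requirement the abstract names.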
Research is often conducted to investigate footwear mechanical properties and their effects on running biomechanics, but little is known about their influence on runner satisfaction, or how well a shoe is perceived. A tool to predict runner satisfaction in a shoe from its mechanical properties would be advantageous for footwear companies. Data in this study were from a database (n = 615 subject-shoe pairings) of satisfaction ratings (gathered after participants ran on a treadmill) and mechanical testing data for 87 unique subjects across 61 unique shoes. Random forest and elastic net logistic regression models were built to test whether footwear mechanical properties and subject characteristics could predict runner satisfaction in three ways: degree-of-satisfaction on a 7-point Likert scale, overall satisfaction on a 3-point Likert scale, and willingness-to-purchase the shoe (yes/no response). Data were divided into training and validation sets, using an 80–20 split, to build the models and test their accuracy, respectively. Model accuracies were compared against the no-information rate (i.e. the proportion of data belonging to the largest class). The models were not able to predict degree-of-satisfaction or overall satisfaction from footwear mechanical properties but could predict a runner's willingness to purchase with 68–75% accuracy. Midsole Gmax at the heel and forefoot appeared in the top five of variable importance rankings across both willingness-to-purchase models, suggesting its role as a major factor in purchase decisions. The negative regression coefficient for both heel and forefoot Gmax indicated that softer midsoles increase the likelihood of a shoe purchase. Future models to predict satisfaction may improve accuracy with the addition of more subject-specific parameters, such as running goals or foot proportions.
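The evaluation logic above (80–20 split, accuracy compared against the no-information rate) can be shown with a minimal sketch on synthetic labels. The labels and class balance here are invented for illustration; the study's data are not reproduced.

```python
# Sketch of the evaluation protocol: an 80-20 split and the
# no-information rate as the baseline a classifier must beat.
# Labels are synthetic (hypothetical yes/no purchase responses).
import random
from collections import Counter

random.seed(42)

# Synthetic willingness-to-purchase labels, deliberately imbalanced.
labels = ["yes"] * 70 + ["no"] * 30
random.shuffle(labels)

split = int(0.8 * len(labels))         # 80-20 train/validation split
train, valid = labels[:split], labels[split:]

# No-information rate: share of the largest class in the validation set.
majority_class, majority_count = Counter(valid).most_common(1)[0]
no_info_rate = majority_count / len(valid)

# A trivial "model" that always predicts the training majority class;
# a real model is only useful if its accuracy exceeds no_info_rate.
train_majority = Counter(train).most_common(1)[0][0]
baseline_acc = sum(1 for y in valid if y == train_majority) / len(valid)
```

This is why the study reports accuracies relative to the no-information rate: with imbalanced yes/no responses, a majority-class guesser already scores well, so only accuracy above that baseline indicates real predictive value.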
Energy consumption for cooling is growing dramatically. In recent years, peak electricity consumption has grown significantly, shifting from winter to summer in many EU countries and endangering the stability of electricity grids. This article presents a comprehensive analysis of an office building's performance in terms of energy consumption and thermal comfort (in accordance with static (ISO 7730:2005) and adaptive (EN 15251:2007) thermal comfort criteria) for different cooling concepts in six European climate zones. The work is based on a series of dynamic simulations carried out in the Trnsys 17 environment for a typical office building. The simulation study covered five cooling technologies: natural ventilation (NV), mechanical night ventilation (MV), fan-coils (FC), suspended ceiling panels (SCP), and concrete core conditioning (CCC), applied in Stockholm, Hamburg, Stuttgart, Milan, Rome, and Palermo. On this basis, the authors propose a methodology for evaluating the cooling concepts that takes both thermal comfort and energy consumption into account.
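The adaptive comfort criterion of EN 15251 cited above can be sketched briefly: the comfort temperature is a linear function of the exponentially weighted running mean outdoor temperature, with category-dependent tolerance bands. The formula below follows the standard as commonly cited; the running-mean computation and the study's specific category choices are not reproduced here.

```python
# Minimal sketch of the EN 15251 adaptive comfort band.
# Category half-widths: I = 2 K, II = 3 K, III = 4 K.

def comfort_temperature(t_rm):
    """Adaptive comfort temperature (degC) for running mean outdoor
    temperature t_rm, per the EN 15251 relation 0.33*t_rm + 18.8
    (valid roughly for 10 <= t_rm <= 30 degC)."""
    return 0.33 * t_rm + 18.8

def comfort_band(t_rm, category="II"):
    """Lower and upper operative-temperature limits for a category."""
    half_width = {"I": 2.0, "II": 3.0, "III": 4.0}[category]
    t_c = comfort_temperature(t_rm)
    return t_c - half_width, t_c + half_width

# Example: a warm week with a 20 degC running mean outdoor temperature.
lo, hi = comfort_band(20.0, "II")
```

Hours in which the simulated operative temperature falls outside this band are the adaptive-comfort violations counted when comparing the cooling concepts across climates.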
Inadequate mechanical compliance of orthopedic implants can result in excessive strain at the bone interface and, ultimately, aseptic loosening. It is hypothesized that a fiber-based biometal with adjustable anisotropic mechanical properties can reduce interface strain, facilitate continuous remodeling, and improve implant survival under complex loads. The biometal is based on strategically layered sintered titanium fibers. Six different topologies are manufactured. Specimens are tested to failure under compression along three orthogonal axes, as well as under 3-point bending and torsion. Biocompatibility testing involves murine osteoblasts. Osseointegration is investigated by micro-computed tomography and histomorphometry after implantation in a metaphyseal trepanation model in sheep. The material demonstrates compressive yield strengths of up to 50 MPa and anisotropy correlating closely with fiber layout. Samples with 75% porosity are both stronger and stiffer than those with 85% porosity. The highest bending modulus is found in samples with parallel fiber orientation, while the highest shear modulus is found in cross-ply layouts. Cell metabolism and morphology indicate uncompromised biocompatibility. Implants demonstrate robust circumferential osseointegration in vivo after 8 weeks. The biometal introduced in this study demonstrates anisotropic mechanical properties similar to bone, excellent osteoconductivity, and feasibility as an orthopedic implant material.