Thermally driven (adsorption) chillers can provide cooling with a comparatively low electrical energy input, i.e. with a high electrical coefficient of performance. If the heat required to drive them is supplied from industrial waste heat, this form of cooling is energetically more efficient than cooling via a compression chiller. If, however, the heat is supplied by combined heat and power generation, the primary-energy assessment depends both on several partial efficiencies and on the primary energy factors of the fuel used and of the electrical energy generated or purchased. An extensive measurement campaign in the summer of 2018 provides detailed energy performance figures under realistic boundary conditions in a laboratory environment for a typical daily cooling-demand profile. This makes it possible to derive partial energy performance indicators for planning practice and to compare the overall system energetically with a conventional compression chiller.
The aim of this study was to develop a biomechanically validated finite element model to predict the biomechanical behaviour of the human lumbar spine in compression.
For validation of the finite element model, an in vitro study was performed: Twelve human lumbar cadaveric spinal segments (six segments L2/3 and six segments L4/5) were loaded in axial compression using 600 N in the intact state and following surgical treatment using two different internal stabilisation devices. Range of motion was measured and used to calculate stiffness.
A finite element model of a human spinal segment L3/4 was loaded with the same force in intact and surgically altered state, corresponding to the situation of biomechanical in vitro study.
The results of the cadaver biomechanical and finite element analyses were compared. As they were in close agreement, the finite element model was used to predict: (1) load-sharing within the human lumbar spine in compression, (2) load-sharing within the osteoporotic human lumbar spine in compression, and (3) the stabilising potential of the different spinal implants with respect to bone mineral density.
A finite element model as described here may be used to predict the biomechanical behaviour of the spine. Moreover, the influence of different spinal stabilisation systems may be predicted.
DEM–FEA estimation of pores arrangement effect on the compressive Young’s modulus for Mg foams
(2015)
This work reports a study of the effect of pore arrangement on the compressive behavior of Mg foams with regular pore size and porosities ranging from 25% to 45%. Pore arrangements were modeled using Finite Element Analysis (FEA) with random and ordered models, and compared to the estimations obtained in a previous work. The coordinates of the random pore arrangements were first generated using the Discrete Element Method (DEM) and then used in a second stage for modeling the pores by FEA. Estimations were also compared to experimental results for Mg foams produced by powder metallurgy. Results show considerable drops in the Young's moduli as the porosity increases, for both the experimental results and the FEA estimations. Estimations obtained using ordered pore arrangements differed significantly from those obtained with random arrangements. The randomly arranged models represent the real topologies of the experimental metallic foams more accurately: the Young's moduli estimated using these models were in excellent agreement with the experiments, whilst the estimations obtained using ordered models showed significantly higher relative errors. This demonstrates the importance of using more realistic FEA models to improve the predictive ability of this method for the study of the mechanical properties of metallic foams.
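The random pore coordinates above are generated with DEM before the FEA stage. As a much-reduced sketch of the underlying idea (non-overlapping spherical pores placed at random until a target porosity is reached), simple rejection sampling can stand in for an actual DEM simulation; all dimensions below are invented:

```python
import math
import random

def random_pore_centers(box, pore_radius, target_porosity,
                        max_tries=100_000, seed=0):
    """Place equal-sized, non-overlapping spherical pores in a cubic box
    until the requested porosity (pore volume fraction) is reached."""
    rng = random.Random(seed)
    pore_volume = 4.0 / 3.0 * math.pi * pore_radius ** 3
    target_n = int(target_porosity * box ** 3 / pore_volume)
    centers = []
    tries = 0
    while len(centers) < target_n and tries < max_tries:
        tries += 1
        # keep pores fully inside the box
        c = tuple(rng.uniform(pore_radius, box - pore_radius) for _ in range(3))
        # reject any candidate that overlaps an already placed pore
        if all(math.dist(c, p) >= 2.0 * pore_radius for p in centers):
            centers.append(c)
    return centers

# Invented dimensions: 15% porosity, unit pore radius, 10 x 10 x 10 box
centers = random_pore_centers(box=10.0, pore_radius=1.0, target_porosity=0.15)
```

The resulting coordinate list would then be meshed as spherical voids in the FEA model; a real DEM run would additionally resolve particle dynamics, which this sketch omits.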
Lithium-ion batteries exhibit a well-known trade-off between energy and power, which is problematic for electric vehicles, which require both high energy during discharge (high driving range) and high power during charge (fast-charge capability). We use two commercial lithium-ion cells (high-energy [HE] and high-power) to parameterize and validate physicochemical pseudo-two-dimensional models. In a systematic virtual design study, we vary electrode thicknesses, cell temperature, and the type of charging protocol. We show that low anode potentials during charge, which induce lithium plating and cell aging, can be effectively avoided either by using high temperatures or by using a constant-current/constant-potential/constant-voltage charge protocol which includes a constant anode potential phase. We introduce and quantify a specific charging power as the ratio of discharged energy (at slow discharge) to required charging time (at fast charge). This value is shown to exhibit a distinct optimum with respect to electrode thickness. At 35°C, the optimum was achieved using an HE electrode design, yielding 23.8 Wh/(min L) volumetric charging power at 15.2 min charging time (10% to 80% state of charge) and 517 Wh/L discharge energy density. By analyzing the various overpotential contributions, we show that electrolyte transport losses are dominantly responsible for the insufficient charge and discharge performance of cells with very thick electrodes.
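The specific charging power defined above is simply the ratio of discharged energy density to charging time. A back-of-the-envelope check with the reported figures reproduces the stated 23.8 Wh/(min L), assuming the recharged energy scales linearly over the 10-80% state-of-charge window (that scaling is an assumption, not stated in the abstract):

```python
def specific_charging_power(discharge_energy_density_wh_per_l, charging_time_min,
                            soc_window=1.0):
    """Specific charging power in Wh/(min L): discharged energy density over
    charging time. soc_window scales the full discharge energy density to the
    state-of-charge range actually recharged (assumed linear in SOC)."""
    return discharge_energy_density_wh_per_l * soc_window / charging_time_min

# Figures reported for the high-energy design at 35 degC (10-80% SOC window)
p = specific_charging_power(517.0, 15.2, soc_window=0.7)
print(round(p, 1))  # -> 23.8
```

That the reported numbers (517 Wh/L, 15.2 min, 70% SOC window) reproduce 23.8 Wh/(min L) supports this reading of the metric.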
In this paper, the influence of the material hardening behavior on plasticity-induced fatigue crack closure is investigated for strain-controlled loading and fully plastic, large-scale yielding conditions by means of the finite element method. The strain amplitude and the strain ratio are varied for given Ramberg–Osgood material properties representing materials with different hardening behavior. The results show a pronounced influence of the hardening behavior on crack closure, while no significant effect is found from the considered strain amplitude and strain ratio. The effect of the hardening behavior on the crack opening stress cannot be described by existing crack opening stress equations.
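The Ramberg-Osgood description referenced above represents the cyclic stress-strain behavior as an elastic part plus a power-law plastic part, eps_a = sigma_a/E + (sigma_a/K')**(1/n'), where K' and n' set the hardening behavior that the study varies. A small numerical sketch (the constants below are illustrative, not the study's parameter sets):

```python
def ramberg_osgood_strain(stress_mpa, youngs_modulus_mpa, k_mpa, n):
    """Total strain amplitude for the cyclic Ramberg-Osgood relation:
    elastic part stress/E plus power-law plastic part (stress/K')**(1/n')."""
    elastic = stress_mpa / youngs_modulus_mpa
    plastic = (stress_mpa / k_mpa) ** (1.0 / n)
    return elastic + plastic

# Illustrative steel-like constants (assumed): E = 200 GPa, K' = 1200 MPa, n' = 0.1
for s in (100.0, 300.0, 500.0):
    eps = ramberg_osgood_strain(s, 200_000.0, 1200.0, 0.1)
    print(f"{s:5.0f} MPa -> strain amplitude {eps:.6f}")
```

Varying K' and n' in such a relation is how materials with different hardening behavior are represented in the finite element study.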
Adsorption of N2 and CO2 on Activated Carbon, AlO(OH) Nanoparticles, and AlO(OH) Hollow Spheres
(2015)
The adsorption behaviors of nitrogen and CO2 on Norit R1 Extra activated carbon and on AlO(OH) nanoparticles and hollow spheres were measured under different temperature and pressure conditions using a magnetic suspension balance. Independently of the substrate investigated, all isotherms increase at lower pressure, reach a maximum, and then decrease with increasing pressure. In addition, selected experimental data were correlated with different model approaches and compared with reliable literature data. In the case of CO2 on AlO(OH), capillary condensation was observed at two defined temperatures. The results suggest that the conversion of the liquid into a supercritical adsorbate phase does not take place suddenly.
Turbines from a carrier bag
(2014)
At the Karlsruhe Institute of Technology, a prototype of a small wind turbine for self-sufficient power supply was developed. The system, called "Energypack", consists of a 1.10 m long PVC housing with a triangular cross-section, a generator, ropes, and a radio unit, and costs from 50 euros. The Energypack can drive a 2.5 W dynamo. At the Duale Hochschule BW Heidenheim, the "Anemotec" system was designed, which uses a screw-like thread instead of wind blades. The rotor and the wind guide are made of glass-fiber-reinforced materials. The generated power is 365 W, and the levelized cost of electricity is 23 cents per kWh. The "Windzip" wind turbine developed at Hochschule Offenburg works with an H-rotor and consists of nine slightly curved metal sheets that circle a vertical rod like planets at different distances and heights. At a wind speed of 10 meters per second, it generated a power of 40 W.
Background
Internal tibial loading is influenced by modifiable factors with implications for the risk of stress injury. Runners encounter varied surface steepness (gradients) when running outdoors and may adapt their speed according to the gradient. This study aimed to quantify tibial bending moments and stress at the anterior and posterior peripheries when running at different speeds on surfaces of different gradients.
Methods
Twenty recreational runners ran on a treadmill at 3 different speeds (2.5 m/s, 3.0 m/s, and 3.5 m/s) and gradients (level: 0%; uphill: +5%, +10%, and +15%; downhill: –5%, –10%, and –15%). Force and marker data were collected synchronously throughout. Bending moments were estimated at the distal third centroid of the tibia about the medial–lateral axis by ensuring static equilibrium at each 1% of stance. Stress was derived from bending moments at the anterior and posterior peripheries by modeling the tibia as a hollow ellipse. Two-way repeated-measures analyses of variance were conducted using both functional and discrete statistical analyses.
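The hollow-ellipse tibia model in the Methods yields the peripheral bending stress via sigma = M*c/I, with the second moment of area of a hollow elliptical section about the medial-lateral axis I = (pi/4)*(a*b**3 - a_i*b_i**3), where a, b are the outer and a_i, b_i the inner semi-axes and c = b is the outer-fibre distance in the anterior-posterior direction. A sketch with invented, roughly tibia-sized dimensions (not the study's subject-specific geometry):

```python
import math

def hollow_ellipse_bending_stress(moment_nm, a_out, b_out, a_in, b_in):
    """Peak bending stress (Pa) at the anterior/posterior periphery of a
    hollow elliptical cross-section, bending about the medial-lateral axis.

    a_*: medial-lateral semi-axes (m); b_*: anterior-posterior semi-axes (m).
    """
    # second moment of area of the hollow ellipse about the medial-lateral axis
    i_ml = math.pi / 4.0 * (a_out * b_out ** 3 - a_in * b_in ** 3)
    # outer-fibre distance along the anterior-posterior direction is b_out
    return moment_nm * b_out / i_ml

# Hypothetical distal-third tibia geometry (m) under a 100 Nm bending moment
stress = hollow_ellipse_bending_stress(100.0, 0.011, 0.013, 0.007, 0.009)
print(f"{stress / 1e6:.1f} MPa")
```

With these invented dimensions the peak stress comes out in the tens of MPa, i.e. the right order of magnitude for cortical bone under gait loads.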
Results
There were significant main effects for running speed and gradient on peak bending moments and peak anterior and posterior stress. Higher running speeds resulted in greater tibial loading. Running uphill at +10% and +15% resulted in greater tibial loading than level running. Running downhill at –10% and –15% resulted in reduced tibial loading compared to level running. There was no difference between +5% or –5% and level running.
Conclusion
Running at faster speeds and uphill on gradients ≥+10% increased internal tibial loading, whereas slower running and downhill running on gradients ≥–10% reduced internal loading. Adapting running speed according to the gradient could be a protective mechanism, providing runners with a strategy to minimize the risk of tibial stress injuries.
Retail sales are essential for retaining and developing an energy supplier's customer base. But to deploy scarce sales resources as effectively as possible, knowledge is needed about how the average achievable electricity prices and the expected customer retention period differ between sales channels. If the value of a customer per sales channel is derived from this information, decisions on marketing budgets can be made more accurately.
Since 2003, most European countries have established heat health warning systems (HHWS) to alert the population to heat load. These systems are based on predicted meteorological conditions outdoors, but the majority of the European population spends a substantial amount of time indoors, where thermal conditions can differ substantially from those outdoors. The German Meteorological Service (Deutscher Wetterdienst, DWD) extended its existing HHWS with a thermal building simulation model to account for indoor heat load. In this study, the thermal building simulation model is used to simulate a standardized building representing a modern nursing home, because elderly and sick people are most sensitive to heat stress. Different types of natural ventilation were simulated, and changes in future indoor heat load were analyzed on the basis of current and future test reference years. The results show differences between the various ventilation options and the possibility of minimizing thermal heat stress during summer with an appropriate ventilation method. Nighttime ventilation is the most important factor for indoor thermal comfort: a fully opened window at night combined with 2 h of ventilation in the morning and evening avoids heat stress more effectively than a tilted window at night with 1 h of ventilation in the morning and evening. Ventilation in the morning in particular appears effective at keeping the indoor heat load low. Comparing the results for the current and future test reference years, an increase in heat stress can be recognized for all ventilation types.
With the growing share of renewable energies in the electricity supply, transmission and distribution grids have to be adapted. A profound understanding of the structural characteristics of distribution grids is essential to define suitable strategies for grid expansion. Many countries have a large number of distribution system operators (DSOs) whose standards vary widely, which contributes to coordination problems during peak load hours. This study contributes to targeted distribution grid development by classifying DSOs according to their remuneration requirement. To examine the amendment potential, structural and grid development data from 109 distribution grids in South-Western Germany are collected from publications of the respective DSOs. The resulting database is assessed statistically to identify clusters of DSOs according to the fit between demographic requirements and grid-construction status, and thus to identify development needs that would enable a broader use of regenerative energy resources. Three alternative algorithms are explored for this task. The study finds the novel Gauss-Newton algorithm optimal for analysing the fit of grid conditions to regional requirements; it successfully identifies grids with remuneration needs and is superior to the previously used K-Means algorithm. The method developed here is transferable to other areas for grid analysis and targeted, cost-efficient development.
Research is often conducted to investigate footwear mechanical properties and their effects on running biomechanics, but little is known about their influence on runner satisfaction, or how well the shoe is perceived. A tool to predict runner satisfaction in a shoe from its mechanical properties would be advantageous for footwear companies. Data in this study were from a database (n = 615 subject-shoe pairings) of satisfaction ratings (gathered after participants ran on a treadmill), and mechanical testing data for 87 unique subjects across 61 unique shoes. Random forest and elastic net logistic regression models were built to test if footwear mechanical properties and subject characteristics could predict runner satisfaction in 3 ways: degree-of-satisfaction on a 7-point Likert scale, overall satisfaction on a 3-point Likert scale, and willingness-to-purchase the shoe (yes/no response). Data were divided into training and validation sets, using an 80–20 split, to build the models and test their accuracy, respectively. Model accuracies were compared against the no-information rate (i.e. proportion of data belonging to the largest class). The models were not able to predict degree-of-satisfaction or overall satisfaction from footwear mechanical properties but could predict runner’s willingness to purchase with 68–75% accuracy. Midsole Gmax at the heel and forefoot appeared in the top five of variable importance rankings across both willingness-to-purchase models, suggesting its role as a major factor in purchase decisions. The negative regression coefficient for both heel and forefoot Gmax indicated that softer midsoles increase the likelihood of a shoe purchase. Future models to predict satisfaction may improve accuracy with the addition of more subject-specific parameters, such as running goals or foot proportions.
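The study's models (random forest, elastic net logistic regression) and its database are not reproducible from the abstract. Purely as an illustration of the prediction setup, the sketch below fits a plain logistic regression on synthetic data built to mirror the reported finding that lower (softer) midsole Gmax raises purchase likelihood, and compares in-sample accuracy against the no-information rate; all features, coefficients, and data are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 600
# Synthetic stand-ins for the two mechanical properties highlighted in the
# abstract: heel and forefoot midsole Gmax (peak deceleration; lower = softer).
heel_gmax = rng.uniform(8.0, 16.0, n)
forefoot_gmax = rng.uniform(9.0, 18.0, n)
# Assumed ground truth mirroring the reported finding: softer midsoles
# (lower Gmax) raise the odds of a "yes" willingness-to-purchase response.
logits = 7.0 - 0.3 * heel_gmax - 0.25 * forefoot_gmax
buy = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(float)

# Standardize features and fit a logistic regression by gradient descent
X = np.column_stack([heel_gmax, forefoot_gmax])
X = (X - X.mean(axis=0)) / X.std(axis=0)
X = np.column_stack([np.ones(n), X])  # prepend an intercept column
w = np.zeros(3)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - buy) / n  # gradient of the mean log-loss

# Compare in-sample accuracy against the no-information rate
pred = (1.0 / (1.0 + np.exp(-X @ w)) > 0.5).astype(float)
accuracy = (pred == buy).mean()
nir = max(buy.mean(), 1.0 - buy.mean())
print(f"accuracy={accuracy:.2f}, no-information rate={nir:.2f}")
```

The negative fitted coefficients on both Gmax features correspond to the abstract's interpretation that softer midsoles increase purchase likelihood; the real study additionally used a held-out 20% validation split rather than in-sample accuracy.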
Optimisation-based economic despatch of real-world complex energy systems demands reduced-order, continuously differentiable component models that can represent their part-load behaviour and dynamic responses. A literature study of existing modelling methods and the necessary characteristics the models should meet for successful application in model predictive control of a polygeneration system is presented. Deriving from that, a rational modelling procedure using engineering principles and assumptions to develop simplified component models is applied. The models are quantitatively and qualitatively evaluated against experimental data, and their efficacy for application in a building automation and control architecture is established.
Drawing on the technical flexibility of building polygeneration systems to support a rapidly expanding renewable electricity grid requires the application of advanced controllers like model predictive control (MPC) that can handle multiple inputs and outputs, uncertainties in forecast data, and plant constraints, amongst other features. In this original work, an economic-MPC-based optimal scheduling of a real-world building energy system is demonstrated and its performance is evaluated against a conventional controller. The demonstration includes the steps to integrate an optimisation-based supervisory controller into a standard building automation and control system with off-the-shelf HVAC components, and the usage of state-of-the-art algorithms for solving complex nonlinear mixed-integer optimal control problems. With the MPC, quantitative benefits in terms of 6–12% demand-cost savings and qualitative benefits in terms of better controller adaptability and hardware-friendly operation are identified. Further research potential for improving the MPC framework in terms of field-level stability, minimising constraint violations, and inter-system communication for its deployment in a prosumer network is also identified.
Cooling towers or recoolers are among the major consumers of electricity in an HVAC plant. The implementation and analysis of advanced control methods in a practical application, and their comparison with conventional controllers, is necessary to establish a framework for their feasibility, especially in the field of decentralised energy systems. A standard industrial controller, a PID controller, and a model-based controller were developed and tested in an experimental set-up using market-ready components. The characteristics of these controllers, such as settling time, control difference, and frequency of control actions, are compared based on the monitoring data. The modern controllers demonstrated clear advantages in terms of energy savings and higher accuracy, and the model-based controller was easier to set up than the PID.
The energy system of the future will transform from the current centralised, fossil-based network to a decentralised, clean, highly efficient, and intelligent one. This transformation will require innovative technologies and ideas like trigeneration and the crowd energy concept to pave the way ahead. Even though trigeneration systems are extremely energy efficient and can play a vital role in the energy system, their deployment is hindered by various barriers. These barriers are theoretically analysed in a multiperspective approach, and the role decentralised trigeneration systems can play in the crowd energy concept is highlighted. An initial literature survey shows that a multiperspective (technological, energy-economic, and user) analysis is necessary for realising the potential of trigeneration systems in a decentralised grid. To quantify these issues experimentally, we are setting up a microscale trigeneration lab at our institute; the motivation for this lab is also briefly introduced.
On 1 July 2022, around 60 participants from research, teaching, and industry met for an international conference at Hochschule Offenburg as part of the final colloquium of the ACA-Modes project. The project results on the successful implementation of model-predictive control strategies were presented, current questions were discussed, and development paths towards grid-supportive operation of interconnected energy systems were outlined.
Cost effectiveness of preventive screening programmes for type 2 diabetes mellitus in Germany
(2010)
As in several other industrialized countries, Germany’s statutory health insurance (SHI) is facing rising healthcare costs as well as the challenges caused by a double-aging society. The early detection and prevention of chronic diseases is considered a possible way to reduce the impact of these developments. However, controversy surrounds the costs and effects in terms of medical and financial outcomes of such programmes.
In this paper, the Bauschinger effect and latent hardening of single crystals are assessed in finite element calculations using a single crystal plasticity model with kinematic hardening. To this end, results of cyclic micro-bending experiments on single crystal Alloy 718 in different crystal orientations (single slip and multi slip) with respect to the loading direction are used to determine the slip-system-related material properties of the single crystal plasticity model. Two kinematic hardening laws are considered: one describing latent hardening and one without latent hardening. For the determination of material properties for both hardening laws, a gradient-based optimization method is used. The results show that the different strength levels observed in micro-bending tests on different crystal orientations can only be described well with latent kinematic hardening, whereas the pronounced Bauschinger effect is described well by both kinematic hardening laws. It is concluded that cyclic micro-bending experiments on single crystals using different crystal orientations provide an appropriate database for the determination of the slip-system-related material properties of the single crystal plasticity model with latent kinematic hardening.
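The gradient-based optimization above identifies slip-system parameters of a full crystal plasticity model, which cannot be reproduced from an abstract. As a toy stand-in for the general idea of identifying hardening parameters from mechanical test data by least squares, the sketch below fits a simple power-law hardening curve sigma = sigma0 + K * eps**n (all numbers invented); with sigma0 known, the fit becomes linear after taking logs:

```python
import math

def fit_power_law_hardening(strains, stresses, sigma0):
    """Identify K and n in sigma = sigma0 + K * eps**n from a flow curve.

    With sigma0 known, the model is linear in log space:
    log(sigma - sigma0) = log(K) + n * log(eps), so ordinary least squares
    on the logs recovers both parameters in closed form.
    """
    xs = [math.log(e) for e in strains]
    ys = [math.log(s - sigma0) for s in stresses]
    m = len(xs)
    x_mean = sum(xs) / m
    y_mean = sum(ys) / m
    n = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys))
         / sum((x - x_mean) ** 2 for x in xs))
    k = math.exp(y_mean - n * x_mean)
    return k, n

# Synthetic noise-free "flow curve" generated from known parameters
eps = [0.002 * i for i in range(1, 21)]
sig = [200.0 + 900.0 * e ** 0.2 for e in eps]
k_fit, n_fit = fit_power_law_hardening(eps, sig, sigma0=200.0)
print(round(k_fit, 1), round(n_fit, 3))  # -> 900.0 0.2
```

The paper's problem is far less convenient: the crystal plasticity response is nonlinear in all parameters, which is why an iterative gradient-based optimizer is needed there instead of a closed-form fit.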
The following contribution deals with the experimental investigation and theoretical evaluation of fatigue crack growth under isothermal and non-isothermal conditions on the nickel alloy 617. The microstructure and mechanical properties of alloy 617 are influenced significantly by the thermal heat treatment and the subsequent thermal exposure in service. Hence, a solution-annealed and a long-time service-exposed material condition are studied. The crack growth measurement is carried out using an alternating current potential drop system, which is integrated into a thermomechanical fatigue (TMF) test facility. The measured fatigue crack growth rates are presented as a function of material condition, temperature, and load waveform. Furthermore, the results of the non-isothermal tests depend on the phase between thermal and mechanical load (in-phase, out-of-phase). A fracture-mechanics-based, time-dependent model is extended by an approach to consider environmental effects, in which almost all model parameters represent directly measurable values. A consistent description of all results and a good correlation with the experimental data are achieved.
The following contribution deals with the growth of cracks in low-cycle fatigue (LCF) and thermomechanical fatigue (TMF) tested specimens of Inconel 718, measured using the replica method. The specimens are loaded with different strain rates; the material shows a significantly higher crack growth rate when the strain rate is decreased. Electron backscatter diffraction (EBSD) is adopted to identify the failure mechanism and the misorientation relationship of failed grain boundaries in secondary cracks. The analyzed cracks propagated mainly transgranularly, but intergranular failure can also be observed in some areas. It is found that grain boundaries with a coincidence site lattice (CSL) boundary structure are generally less susceptible to intergranular failure than grain boundaries with random misorientation. For modeling the experimentally identified crack behavior, an existing model for fatigue crack growth based on the mechanism of time-dependent elastic–plastic crack tip blunting is enhanced to describe environmental effects based on the mechanism of oxygen diffusion at the crack tip. For the diffusion process, the temperature-dependent parabolic diffusion law is assumed. As a result, the time-dependent cyclic crack tip opening displacement (DCTOD) is used as a representative value to describe both mechanisms. Thus, most of the included model parameters characterize the deformation behavior of the material and can be determined by independent material tests. With the determined material properties, the proposed model describes the experimentally measured crack growth curves very well. The model is validated based on predictions of the number of cycles to failure of LCF as well as in-phase and out-of-phase TMF tests in the temperature range between room temperature and 650 °C.
Photovoltaics Energy Prediction Under Complex Conditions for a Predictive Energy Management System
(2015)
There is a growing trend towards the use of thermo-active building systems (TABS) for the heating and cooling of buildings, because these systems are known to be very economical and efficient. However, their control is complicated by the large thermal inertia, and their parameterization is time-consuming. With conventional TABS control strategies, the required thermal comfort in buildings often cannot be maintained, particularly if the internal heat sources change suddenly. This paper shows measurement results and evaluations of the operation of a novel adaptive and predictive calculation method based on a multiple linear regression (AMLR) for the control of TABS. The measurement results are compared with the standard TABS strategy. The results show that the electrical pump energy could be reduced by more than 86%. Including the weather adjustment, it could be demonstrated that thermal energy savings of over 41% could be reached. In addition, thermal comfort could be improved owing to the possibility of specifying mean room set-point temperatures. With the AMLR, comfort category I of the comfort standards ISO 7730 and DIN EN 15251 is met in about 95% of cases; with the standard TABS strategy, only about 24% of cases fall within category I.
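The core of the AMLR method is a multiple linear regression that is re-fitted as monitoring data accumulate and then used predictively. A minimal sketch of that regression step (the predictors, coefficients, and noise level below are assumptions for illustration, not the implemented controller):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
# Illustrative predictors a TABS controller might regress on (assumed, not
# the variables of the implemented AMLR): outdoor temperature, solar gains,
# and an occupancy flag acting as an internal heat source.
t_out = rng.uniform(-5.0, 30.0, n)
solar = rng.uniform(0.0, 500.0, n)
occupied = rng.integers(0, 2, n).astype(float)
# Assumed "monitored" mean room temperature including sensor noise
t_room = (21.0 + 0.08 * t_out + 0.004 * solar + 0.6 * occupied
          + rng.normal(0.0, 0.2, n))

# Multiple linear regression via least squares; re-fitting this on a rolling
# window of monitoring data is what makes such a method adaptive
X = np.column_stack([np.ones(n), t_out, solar, occupied])
coef, *_ = np.linalg.lstsq(X, t_room, rcond=None)
t_pred = X @ coef  # predicted mean room temperature for given conditions
```

The controller would compare such predictions against the mean room set-point temperature to decide when the TABS circulation pumps actually need to run, which is where the reported pump-energy savings come from.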
Demand Side Management for Thermally Activated Building Systems based on Multiple Linear Regression
(2015)
Membrane distillation (MD) is a thermal separation process which uses a hydrophobic, microporous membrane as vapor space. A high-potential application for MD is the concentration of hypersaline brines, such as reverse osmosis retentate or other saline effluents, to be concentrated to a near-saturation level within a Zero Liquid Discharge process chain. In order to further commercialize MD for these target applications, adapted MD module designs are required along with strategies for the mitigation of membrane wetting phenomena. This work presents the experimental results of pilot operation with an adapted Air Gap Membrane Distillation (AGMD) module for hypersaline brine concentration within a range of 0–240 g NaCl/kg solution. Key performance indicators such as flux, GOR, and thermal efficiency are analyzed. A new strategy for wetting mitigation by active draining of the air gap channel through low-pressure air blowing is tested and analyzed. Air sparging into the air gap channel causes only small reductions in flux and GOR of 1.2% and 4.1%, respectively. Wetting phenomena are significantly reduced by avoiding stagnant distillate in the air gap, making the air blower a seemingly worthwhile additional system component.
Cast aluminum alloys are frequently used as materials for cylinder head applications in internal combustion gasoline engines. These components must withstand severe cyclic mechanical and thermal loads throughout their lifetime. Reliable computational methods allow for accurate estimation of stresses, strains, and temperature fields and lead to more realistic thermomechanical fatigue (TMF) lifetime predictions. With accurate numerical methods, the components can be optimized via computer simulations and the number of required bench tests can be reduced significantly. These types of alloys are normally optimized for peak hardness from a quenched state, which maximizes the strength of the material. However, due to high-temperature exposure in service or under test conditions, the material experiences an over-ageing effect that leads to a significant reduction in strength. To numerically account for ageing effects, the Shercliff & Ashby ageing model is combined with a Chaboche-type viscoplasticity model available in the finite-element program ABAQUS by defining field variables. The constitutive model with ageing effects is correlated with uniaxial cyclic isothermal tests in the T6 state and the over-aged state, as well as with thermomechanical tests. In addition, the mechanism-based TMF damage model (DTMF) is calibrated for both the T6 and the over-aged state. Both the constitutive and the damage model are applied to a cylinder head component, simulating several cycles of an engine dynamometer test. The effects of including ageing in both models are shown.
Cast iron materials are used as materials for cylinder heads for heavy duty internal combustion engines. These components must withstand severe cyclic mechanical and thermal loads throughout their service life. While high-cycle fatigue (HCF) is dominant for the material in the water jacket region, the combination of thermal transients with mechanical load cycles results in thermomechanical fatigue (TMF) of the material in the fire deck region, even including superimposed TMF and HCF loads. Increasing the efficiency of the engines directly leads to increasing combustion pressure and temperature and, thus, lower safety margins for the currently used cast iron materials or alternatively the need for superior cast iron materials. In this paper (Part I), the TMF properties of the lamellar graphite cast iron GJL250 and the vermicular graphite cast iron GJV450 are characterized in uniaxial tests and a mechanism-based model for TMF life prediction is developed for both materials. The model can be used to estimate the fatigue life of components by means of finite-element calculations (Part II of the paper) and supports engineers in finding the appropriate material and design. Furthermore, the effect of the elastic, plastic and creep properties of the materials on the fatigue life can be evaluated with the model. However, for a material selection also the thermophysical properties, controlling to a high level the thermal stresses in the component, must be considered. Hence, the need for integral concepts for material characterization and selection from a multitude of existing and soon-to-be developed cast iron materials is discussed.
We present a video-densitometric quantification method, in combination with diode-array quantification, for methyl-, ethyl-, propyl-, and butylparaben in cosmetics. These parabens were separated on cyanopropyl-bonded plates using water-acetonitrile-dioxane-ethanol-NH3 (25%) (8:2:1:1:0.05, v/v) as mobile phase. The quantification is based on UV measurements at 255 nm and a bioeffect-linked analysis using Vibrio fischeri bacteria. Within 5 min, a Tidas S 700 diode-array scanner (J&M, Aalen, Germany) scans 8 tracks and thus measures in total 5600 spectra in the wavelength range from 190 to 1000 nm. The quantification range for all these parabens is from 20 to 400 ng per band, measured at 255 nm. In the V. fischeri assay, a CCD camera registers the white light of the light-emitting bacteria within 10 min. All parabens effectively suppress the bacterial light emission, which can be used for quantification within a linear range from 100 to 400 ng. Measurements were carried out using a 16-bit MicroChemi chemiluminescence system (biostep GmbH, Jahnsdorf, Germany) with a CCD camera with 4.19 megapixels. The range of linearity is achieved because the extended Kubelka-Munk expression was used for data transformation. The separation method is inexpensive, fast, and reliable.
We present a video-densitometric quantification method for the painkillers diclofenac and ibuprofen. These non-steroidal anti-inflammatory drugs were separated on cyanopropyl-bonded plates using CH2Cl2-methanol-cyclohexane (95 + 5 + 40, v/v) as mobile phase. The quantification is based on a bioeffect-linked analysis using Vibrio fischeri bacteria. Within 10 min, a CCD camera registered the white light of the light-emitting bacteria. Diclofenac and ibuprofen effectively suppressed the bacterial light emission, which can be used for quantification within a linear range of 10 to 2000 ng. The detection limit for ibuprofen is 20 ng and the limit of quantification 26 ng per zone. Measurements were carried out using a 16-bit ST-1603ME CCD camera with 1.56 megapixels (Santa Barbara Instrument Group, Inc., Santa Barbara, USA). The range of linearity covers more than two orders of magnitude because the extended Kubelka-Munk expression is used for data transformation. The separation method is inexpensive, fast, and reliable.
Mass transfer phenomena in membrane fuel cells are complex and diverse because the transport pathways include porous media with very different pore sizes and liquid water may form. Electrochemical impedance spectroscopy, although it provides valuable information on ohmic phenomena, charge transfer and mass transfer, may nevertheless prove insufficient below 1 Hz. The use of another variable, namely back pressure, as the excitation variable for electrochemical pressure impedance spectroscopy is shown here to be a promising tool for the investigation and diagnosis of fuel cells.
The CO2 uptake of nanoscale AlO(OH) hollow spheres (260 mg g−1), a new material, is comparable to that of many metal–organic frameworks, although their specific surface area is much lower (530 m2 g−1 versus 1500–6000 m2 g−1). Suitable temperature–pressure cycles allow for reversible storage and separation of CO2, while the CO2 uptake is 4.3 times as high as that of N2.
Selective separation of CO2-CH4 mixed gases via magnesium aminoethylphosphonate nanoparticles
(2016)
A 2D separation of 16 polyaromatic hydrocarbons (PAHs) according to the Environmental Protection Agency (EPA) standard was introduced. Separation took place on a TLC RP-18 plate (Merck, 1.05559). In the first direction, the plate was developed twice using n-pentane at −20°C as the mobile phase. The mixture acetonitrile-methanol-acetone-water (12:8:3:3, v/v) was used for developing the plate in the second direction. Both developments were carried out over a distance of 43 mm. In addition, a specific and very sensitive detection method for benzo[a]pyrene and perylene was presented. The method can detect these hazardous compounds even in complicated PAH mixtures. They can be quantified by a simple chemiluminescent reaction with a limit of detection (LOD) of 48 pg per band for perylene and 95 pg per band for benzo[a]pyrene. Although these two compounds were separated from all other PAHs in the standard, they could not be separated from one another. The method is therefore suitable for tracing benzo[a]pyrene and/or perylene. The proposed chemiluminescence screening test for PAHs is extremely sensitive but may give a false-positive result for benzo[a]pyrene.
Two solvent mixtures for the high-performance thin-layer chromatographic (HPTLC) separation of compounds showing estrogenic activity in the yeast estrogen screen (YES) assay are presented. The new method, the planar yeast estrogen screen (pYES), combines chromatographic separation on silica gel HPTLC plates with the performance of the YES assay. For separation, the analytes were applied bandwise to HPTLC plates (10 × 20 cm) with fluorescent dye (Merck, Germany). The plates were developed in a vertical developing chamber after 30 min of chamber saturation over a separation distance of 70 mm, using cyclohexane‒methyl ethyl ketone (2:1, v/v) or cyclohexane‒CPME (3:2, v/v) as solvents. Both solvents allow separation of estriol, daidzein, genistein, 17β-estradiol, 17α-ethinyl estradiol, estrone, 4-nonylphenol and bis(2-ethylhexyl) phthalate.
An algorithm is presented that has been used successfully in practice for several years. It improves data analysis in chromatography. The program runs extremely reliably and evaluates chromatographic raw data with an acceptable error. The algorithm requires minimal preliminary processing and integrates even unsmoothed, noisy data correctly.
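The abstract does not reproduce the algorithm itself. As a minimal illustrative sketch only (not the published program; the function name and baseline handling are assumptions), integration of a chromatographic peak from raw, unsmoothed data can be done with the trapezoidal rule plus a linear baseline drawn between the peak's start and end points:

```python
# Minimal sketch of baseline-corrected peak integration for raw
# chromatographic data. Illustrative only -- not the algorithm from
# the paper; names and the linear-baseline choice are assumptions.
def integrate_peak(t, y, i_start, i_end):
    """Trapezoidal area of signal y(t) between indices i_start and
    i_end, minus the trapezoid under a straight baseline connecting
    the two peak-boundary points."""
    area = 0.0
    for i in range(i_start, i_end):
        dt = t[i + 1] - t[i]
        area += 0.5 * (y[i] + y[i + 1]) * dt
    # Area under the linear baseline between the boundary points.
    baseline = 0.5 * (y[i_start] + y[i_end]) * (t[i_end] - t[i_start])
    return area - baseline
```

Because the trapezoidal rule sums all samples, random noise largely cancels over the peak, which is one reason such integration tolerates unsmoothed data.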
An improved separation of the highly toxic contact herbicides paraquat (1,1′-dimethyl-4,4′-bipyridinium), diquat (6,7-dihydrodipyrido[1,2-a:2′,1′-c]pyrazine-5,8-diium), difenzoquat (1,2-dimethyl-3,5-diphenyl-1H-pyrazolium methyl sulfate), mepiquat (1,1-dimethylpiperidinium), and chlormequat (2-chloroethyltrimethylammonium) by high-performance thin-layer chromatography (HPTLC) is presented. The quantification is based on a derivatization reaction using sodium tetraphenylborate. Measurements were made in the wavelength range from 500 to 535 nm, using a light-emitting diode (LED) that emits very intense light at 365 nm for excitation. For the calculations, a new theory of the standard addition method was used, which leads to a minimal error if exactly the same amount as the sample content is added as a standard. The method provides a fast and inexpensive approach to the quantification of the five most important quats used for plant protection. The method works reliably because it takes losses during the pre-treatment procedure into account. The method meets the European legislative limits for paraquat and diquat in drinking water, determined according to United States Environmental Protection Agency (US EPA) method 549.2, which are 680 ng L−1 for paraquat and 720 ng L−1 for diquat. The method of standard addition in planar chromatography can be used beneficially to reduce systematic errors: although recovery rates of 33.7% to 65.2% are observed, the contents calculated according to the method of standard addition lie between 69% and 127% of the theoretical amounts.
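For orientation, the single-point standard addition calculation mentioned above can be sketched in its generic textbook form (the paper's refined theory is not reproduced here; the function name is an assumption). Assuming a linear response through the origin, the analyte amount follows from the signal increase caused by a known spike:

```python
# Generic single-point standard addition -- textbook form, not the
# paper's specific derivation. Assumes a linear signal-vs-amount
# response passing through the origin.
def standard_addition(signal_sample, signal_spiked, amount_added):
    """Estimate the analyte amount in the sample.

    signal_sample : signal of the unspiked sample
    signal_spiked : signal after adding `amount_added` of standard
    """
    return amount_added * signal_sample / (signal_spiked - signal_sample)
```

When the spike equals the sample content, the spiked signal is about twice the sample signal, which is the condition under which the abstract's error analysis predicts a minimal error.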
An Extraction Method for 17α-Ethinylestradiol from Water using a new kind of monolithic Stir-bar
(2015)
We present a video-densitometric high-performance thin-layer chromatography (HPTLC) quantification method for patulin in apple juice. The plate is developed in a vertical chamber from the starting point to a distance of 50 mm, using MTBE-n-pentane (9 + 5, v/v) as mobile phase. After separation, the plate is sprayed with methylbenzothiazolinone hydrazone hydrochloride monohydrate (MBTH) solution (40 mg in 20 mL methanol) and heated at 105 °C for 15 min. Patulin zones are transformed into yellow spots. The quantification is based on direct measurements using an inexpensive 48-bit flatbed scanner for color measurements (in red, green, and blue). Evaluating the blue channel makes the measurements very specific. Quantification in fluorescence was also performed using a 16-bit CCD camera with UV-366 nm illumination as well as an HPTLC DAD scanner. For linearization, the extended Kubelka–Munk expression was used for data transformation. The range of linearity covers more than two orders of magnitude and lies between 5 and 800 ng patulin. The extraction of 20 g apple juice and an extract application of up to 50 µL on the plate allow a statistically defined check of the limit of detection (LOD) of 50 ng patulin per track, which is equivalent to 50 µg patulin per kg apple juice.
In-situ densitometry for qualitative or quantitative purposes is a key step in thin-layer chromatography (TLC). It is a simple means of quantification by measurement of the optical density of the separated spots directly on the plate. A new scanner has been developed which is capable of measuring TLC or HPTLC (high-performance thin-layer chromatography) plates simultaneously at different wavelengths without damaging the plate surface. Fiber optics and special fiber interfaces are used in combination with a diode-array detector. With this new scanner, sophisticated plate evaluation is now possible, which enables the use of chemometric methods in HPTLC. Different regression models have been introduced which enable appropriate evaluation of all analytical questions. Fluorescence measurements are possible without filters or special lamps, and signal-to-noise ratios can be improved by wavelength bundling. Because of the richly structured spectra obtained from PAHs, diode-array HPTLC enables quantification of all 16 EPA PAHs on one track. Although the separation is incomplete, all 16 compounds can be quantified by use of suitable wavelengths. All these aspects enable substantial improvement of in-situ quantitative densitometric analysis.
In this paper, a high-performance thin-layer chromatography (HPTLC) scanner is presented in which a special fibre arrangement is used as the HPTLC plate scanning interface. Measurements are taken with a set of 50 fibres at a distance of 400 to 500 μm above the HPTLC plate. Spatial resolutions on the HPTLC plate of better than 160 μm are possible. It takes less than 2 min to scan 450 spectra simultaneously in the range of 198 to 610 nm. The key improvement of the instrument is the use of highly transparent glass fibres, which provide excellent transmission at 200 nm, and of a special fibre arrangement for plate illumination and detection.
A new diode-array scanner in combination with a computer-controlled application system meets all the demands of modern HPTLC measurement. Automatic application, simultaneous measurements at different wavelengths, and different linearization models enable appropriate evaluation of all analytical questions. The theory of error propagation recommends quantification at reflectance values smaller than 0.8; this can be verified only by use of diode-array scanning. The same theory also recommends quantification by use of peak height data, because the theory predicts best precision only for peak height evaluation. Diode-array scanning with reflectance monitoring enables appropriate validation in TLC and HPTLC analysis. All these aspects result in substantial improvement of in-situ quantitative densitometric analysis, and simultaneous recording at different wavelengths opens the way for chemometric evaluation, e.g. peak purity monitoring, which improves the accuracy and reliability of HPTLC analysis.
Fluorescence Enhancement of Pyrene Measured by Thin-Layer Chromatography with Diode-Array Detection
(2003)
In-situ densitometry for qualitative or quantitative purposes is a key step in thin-layer chromatography. It offers a simple way of quantifying by measuring the optical density of the separated spots directly on the plate. A new TLC scanner has been developed which is able to measure TLC plates or HPTLC plates, at different wavelengths simultaneously, without destroying the plate surface. The system enables absorbance and fluorescence measurements in one run. Fluorescence measurements are possible without filters or other adjustments.
The measurement of fluorescence from a TLC plate is a versatile means of making TLC analysis more sensitive. Fluorescence measurements with the new scanner are possible without filters or special lamps. Improvement of the signal-to-noise ratio is achieved by wavelength bundling. During plate scanning the scattered light and the fluorescence are both emitted from the surface of the TLC plate and this emitted light provides the desired spectral information from substances on the TLC plate. The measurement of fluorescence spectra and absorbance spectra directly from a TLC plate is based on differential measurement of light emerging from sample-free and sample-containing zones.
The literature recommends dipping TLC plates in viscous liquids to enhance fluorescence. Measurement of the fluorescence and absorbance spectra of pyrene spots reveals the mechanism of enhancement by dipping plates in viscous liquids: blocked contact of the fluorescent molecules with the stationary phase or other sample molecules is responsible for the enhanced fluorescence at lower concentrations.
In conclusion, dipping in TLC analysis is no miracle. It is based on mechanisms similar to those observable in liquids. The measured TLC spectra are also very similar to liquid spectra, and this makes TLC spectroscopy an important tool in separation analysis.
High performance thin layer chromatography (HPTLC) is a frequently used separation technique which works well for quantification of caffeine and quinine in beverages. Competing separation techniques, e.g. high-performance liquid chromatography (HPLC) or gas chromatography (GC), are not suitable for sugar-containing samples, because these methods need special pretreatment by the analyst. In HPTLC, however, it is possible to separate ‘dirty’ samples without time-consuming pretreatment, because disposable HPTLC plates are used. A convenient method for quantification of caffeine and quinine in beverages, without sample pretreatment, is presented below. The basic theory of in-situ quantification in HPTLC by use of remitted light is introduced and discussed. Several linearization models are discussed.
A home-made diode-array scanner has been used for quantification; this, for the first time, enables simultaneous measurements at different wavelengths. The new scanner also enables fluorescence evaluation without further equipment. Simultaneous recording at different wavelengths improves the accuracy and reliability of HPTLC analysis. These aspects result in substantial improvement of in-situ quantitative densitometric analysis and enable quantification of compounds in beverages.
High-performance thin-layer chromatography (HPTLC), as the modern form of TLC (thin-layer chromatography), is suitable for detecting pharmaceutically active compounds over a wide polarity range using the gradient multiple development (GMD) technique. Diode-array detection (DAD) in conjunction with HPTLC can simultaneously acquire ultraviolet‒visible (UV‒VIS) and fluorescence spectra directly from the plate. Visualization as a contour plot helps to identify separated zones. An orange peel extract is used as an example to show how GMD‒DAD‒HPTLC with seven different developments in seven different solvents can provide an overview of the entire sample. More than 50 compounds in the extract can be separated on a 6-cm HPTLC plate. Such separations take place in the biologically inert stationary phase of HPTLC, making it a suitable method for effect-directed analysis (EDA). HPTLC‒EDA can even be performed with living organisms, as confirmed by the use of Aliivibrio fischeri bacteria to detect bioluminescence as a measure of toxicity. Combining gradient multiple development planar chromatography with diode-array detection and effect-directed analysis (GMD‒DAD‒HPTLC‒EDA), in conjunction with specific staining methods and time-of-flight mass spectrometry (TOF‒MS), will be the method of choice for finding new chemical structures from plant extracts that can serve as basic structures for new pharmaceutically active compounds.
A Simple and Reliable HPTLC Method for the Quantification of the Intense Sweetener Sucralose®
(2003)
This paper describes a simple and fast thin-layer chromatography (TLC) method for monitoring the relatively new intense sweetener Sucralose® in various food matrices. The method requires little or no sample preparation to isolate or concentrate the analyte. The Sucralose® extract is separated on amino-TLC plates, and the analyte is derivatized "reagent-free" by heating the developed plate for 20 min at 190°C. Spots can be measured either in the absorption or the fluorescence mode. The method allows the determination of Sucralose® at the levels of interest with regard to foreseen European legislation (>50 mg/kg), with excellent repeatability (RSD = 3.4%) and recovery (95%).
A new formula is presented for transforming fluorescence measurements in accordance with Kubelka-Munk theory. The fluorescence signals, the absorption signals, and data from a selected reference are combined in one expression. Only diode-array techniques can measure all the required data simultaneously to linearize fluorescence data correctly. To prove the new theory, HPTLC quantification of the analgesic flupirtine was performed over the mass range 300 to 5000 ng per spot. The fluorescence calibration curve was linear over the whole range. The transformation of fluorescence measurements into linear mass-dependent data extends the technique of in-situ fluorescence analysis to the high concentration range. It also extends Kubelka-Munk theory from absorption to fluorescence analysis. The results presented also emphasize the importance of Kubelka-Munk theory for in-situ measurements in scattering media, especially in planar chromatography.
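For orientation, the classic Kubelka-Munk transformation for reflectance is sketched below. The *extended* expression described in this and the preceding abstracts additionally combines fluorescence and reference data and is not reproduced here; only the well-known base function is shown, and the function name is an assumption:

```python
# Classic Kubelka-Munk function for a reflectance value R (0 < R <= 1).
# This is the standard form from diffuse-reflectance theory, not the
# extended expression developed in the paper.
def kubelka_munk(R):
    """Transform reflectance R into a quantity that, under Kubelka-Munk
    theory, is proportional to absorber concentration in a scattering
    layer such as a TLC plate."""
    return (1.0 - R) ** 2 / (2.0 * R)
```

The transformation stretches the compressed high-concentration end of the reflectance scale, which is why such expressions linearize calibration curves over a wider mass range than raw reflectance.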
Potable water production in arid regions today relies mainly on the desalination of seawater. State-of-the-art desalination plants are usually built with high production capacities and consume a great deal of electrical energy or energy from primary resources such as oil. This causes difficulties in rural areas, where infrastructure is available neither for the plant's energy supply nor for the distribution of the produced potable water. To address this need, small, self-sustaining, locally operated desalination plants have come into the focus of research. In this work, a novel flash evaporator design is proposed which can be driven either by solar power or by low-temperature waste heat. It offers low operating costs as well as easy maintenance. The results of an experimental setup operated with water at a feed flow rate of up to 1,600 l/h are presented. The proof of concept for efficient evaporation as well as efficient gas-liquid separation is provided successfully. The experimental evaporation yield amounts to 98% of the vapor content expected from the vapor pressure curve of water. Neither measurements of the electrical conductivity of the gained condensate nor the analysis of the vapor flow by optical methods show significant droplet entrainment, so there are no concerns regarding the purity of the produced condensate for use as drinking water.
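As background for the reported yield, the theoretically expected vapor fraction of a flash evaporator can be estimated from a simple energy balance (a standard textbook relation, not the paper's model; the property values used below are rough assumptions for water):

```python
# Textbook energy balance for flash evaporation: feed water entering
# above the flash-chamber saturation temperature releases sensible
# heat, which evaporates a fraction of the mass flow. Illustrative
# only -- not the evaluation procedure from the paper.
def flash_vapor_fraction(t_feed_c, t_flash_c, cp=4.19, h_fg=2300.0):
    """Vapor mass fraction produced by flashing.

    t_feed_c  : feed temperature in degrees C
    t_flash_c : saturation temperature in the flash chamber, degrees C
    cp        : specific heat of water, kJ/(kg K)  (assumed value)
    h_fg      : latent heat of vaporization, kJ/kg (assumed value)
    """
    return cp * (t_feed_c - t_flash_c) / h_fg
```

For example, flashing 80 °C feed water down to a 60 °C chamber yields a vapor fraction of only a few percent of the feed mass flow, which is why the paper compares the measured yield against this vapor-pressure-based expectation rather than against the total feed.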