The significant market growth of stationary electrical energy storage systems both for private and commercial applications has raised the question of battery lifetime under practical operation conditions. Here, we present a study of two 8 kWh lithium-ion battery (LIB) systems, each equipped with 14 lithium iron phosphate/graphite (LFP) single cells in different cell configurations. One system was based on a standard configuration with cells connected in series, including a cell-balancing system and a 48 V inverter. The other system featured a novel configuration of two stacks with a parallel connection of seven cells each, no cell-balancing system, and a 4 V inverter. The two systems were operated as part of a microgrid both in continuous cycling mode between 30% and 100% state of charge, and in solar-storage mode with day–night cycling. The aging characteristics in terms of capacity loss and internal resistance change in the cells were determined by disassembling the systems for regular checkups and characterizing the individual cells under well-defined laboratory conditions. As a main result, the two systems showed cell-averaged capacity losses of 18.6% and 21.4% for the serial and parallel configurations, respectively, after 2.5 years of operation with 810 (serial operation) and 881 (parallel operation) cumulated equivalent full cycles. This is significantly higher than the aging of a reference single cell cycled under laboratory conditions at 20 °C, which showed a capacity loss of only 10% after 1000 continuous full cycles.
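The headline numbers above can be reduced to a fade-per-cycle comparison. A minimal sketch using only the figures reported in the study (the helper name and the per-100-EFC normalization are our own choices, not from the paper):

```python
def fade_per_100_efc(capacity_loss_pct, efc):
    """Capacity loss normalized to 100 equivalent full cycles (EFC)."""
    return 100.0 * capacity_loss_pct / efc

# Values from the study:
serial   = fade_per_100_efc(18.6, 810)   # field system, cells in series
parallel = fade_per_100_efc(21.4, 881)   # field system, cells in parallel
lab_ref  = fade_per_100_efc(10.0, 1000)  # reference single cell, 20 degC lab cycling

# Both field systems fade more than twice as fast per cycle
# as the laboratory reference cell.
```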
This article presents a comparative experimental study of the electrical, structural and chemical properties of large-format, 180 Ah prismatic lithium iron phosphate (LFP)/graphite lithium-ion battery cells from two different manufacturers. These cells are used particularly in the field of stationary energy storage such as home-storage systems. The investigations include (1) cell-to-cell performance assessment, for which a total of 28 cells was tested from each manufacturer, (2) electrical charge/discharge characteristics at different currents and ambient temperatures, (3) internal cell geometries, components, and weight analysis after cell opening, (4) microstructural analysis of the electrodes via light microscopy and scanning electron microscopy, (5) chemical analysis of the electrode materials using energy-dispersive X-ray spectroscopy, and (6) mathematical analysis of the electrode balances. The combined results give a detailed and comparative insight into the cell characteristics, providing essential information needed for system integration. The study also provides complete and self-consistent parameter sets for use in cell models needed for performance prediction or state diagnosis.
We present a two-dimensional (2D) planar chromatographic separation of estrogen-active compounds on an RP-18 W (Merck, 1.14296) phase. A mixture of 8 substances was separated using a solvent mixture of hexane, ethyl acetate, and acetone (55:15:10, v/v) in the first direction and of acetone and water (15:10, v/v) in the second direction. Separation was performed on an RP-18 W plate over a distance of 70 mm. This 2D separation method can be used to quantify 17α-ethinylestradiol (EE2) in an effect-directed analysis using the yeast strain Saccharomyces cerevisiae BJ3505. The test strain (according to McDonnell) contains the estrogen receptor. Its activation by estrogen-active compounds is measured via induction of the reporter gene lacZ, which encodes the enzyme β-galactosidase. This enzyme activity is determined on the plate using the fluorogenic substrate MUG (4-methylumbelliferyl-β-d-galactopyranoside).
Footwear plays a critical role in our daily lives, affecting our performance, health and overall well-being. Well-designed footwear can provide protection, comfort and improved foot functionality, while poorly designed footwear can lead to mobility problems and declines in physical activity. The overall goal of footwear research is to provide a scientific basis for professionals in the field to provide an optimal footwear solution for a given person, for a given task, in a given environment, while using sustainable manufacturing processes. This article suggests potential directions for future research with a focus on athletic footwear biomechanics. Directions include the evidence-based individualisation of footwear, the interaction between design and prolonged use, and improving the sustainability of footwear. The authors also provide a speculative outlook on methodological developments that may provide greater insight into these areas. These developments may include: (1) the use of larger scale, real-world and representative data, (2) the use of 3D printing to create experimental footwear, (3) the advancement of in silico research methods, and (4) furthering multidisciplinary collaboration. If successfully applied in the future, footwear research will contribute to active and healthy lifestyles across the lifespan.
Motion analysis systems in research and for orthopedists in private practice
(2023)
Background
Complex biomechanical motion analyses can provide important information for a wide range of orthopedic questions. When procuring motion analysis systems, spatial and temporal constraints as well as requirements for the qualification of the measurement personnel must be considered in addition to the classical measurement quality criteria (validity, reliability, objectivity).
Application
In complex motion analysis, systems for determining kinematics, kinetics, and muscle activity (electromyography) are used. This article provides an overview of methods of complex biomechanical motion analysis for use in orthopedic research or in individual patient care. In addition to pure motion analysis, the use of motion analysis methods in biofeedback training is also discussed.
Procurement
For the actual procurement of motion analysis systems, it is advisable to contact professional societies (e.g., the Deutsche Gesellschaft für Biomechanik), universities with existing motion analysis facilities, or distributors in the field of biomechanics.
Treadmills are essential to the study of human and animal locomotion as well as for applied diagnostics in both sports and medicine. The quantification of relevant biomechanical and physiological variables requires a precise regulation of treadmill belt velocity (TBV). Here, we present a novel method for time-efficient tracking of TBV using standard 3D motion capture technology. Further, we analyzed TBV fluctuations of four different treadmills as seven participants walked and ran at target speeds ranging from 1.0 to 4.5 m/s. Using the novel method, we show that TBV regulation differs between treadmill types, and that certain features of TBV regulation are affected by the subjects’ body mass and their locomotion speed. With higher body mass, the TBV reductions in the braking phase of stance became higher, even though this relationship differed between locomotion speeds and treadmill type (significant body mass × speed × treadmill type interaction). Average belt speeds varied between about 98 and 103% of the target speed. For three of the four treadmills, TBV reduction during the stance phase of running was more intense (> 5% target speed) and occurred earlier (before 50% of stance phase) unlike the typical overground center of mass velocity patterns reported in the literature. Overall, the results of this study emphasize the importance of monitoring TBV during locomotor research and applied diagnostics. We provide a novel method that is freely accessible on Matlab’s file exchange server (“getBeltVelocity.m”) allowing TBV tracking to become standard practice in locomotion research.
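The core idea of the tracking method — differentiating the position of a marker fixed to the belt — can be sketched in a few lines. This is a hypothetical simplification; the published `getBeltVelocity.m` routine additionally handles practical issues such as markers leaving and re-entering the capture volume:

```python
import numpy as np

def belt_velocity(x, fs):
    """Instantaneous belt speed (m/s) from the anterior-posterior
    position trace x (m) of a marker fixed to the treadmill belt,
    sampled at fs Hz, via central finite differences."""
    return np.abs(np.gradient(x, 1.0 / fs))

# A belt moving at a constant 2.5 m/s, captured at 100 Hz:
t = np.arange(0.0, 1.0, 0.01)
v = belt_velocity(2.5 * t, fs=100)
```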
Background: Running overuse injuries (ROIs) occur within a complex, partly injury-specific interplay between training loads and extrinsic and intrinsic risk factors. Biomechanical risk factors (BRFs) are related to the individual running style. While BRFs have been reviewed regarding general ROI risk, no systematic review has addressed BRFs for specific ROIs using a standardized methodology.
Objective: To identify and evaluate the evidence for the most relevant BRFs for ROIs determined during running and to suggest future research directions.
Design: Systematic review considering prospective and retrospective studies (PROSPERO ID: 236832).
Data Sources: PubMed and Connected Papers. The search was performed in February 2021.
Eligibility Criteria: English language. Studies on participants whose primary sport is running addressing the risk for the seven most common ROIs and at least one kinematic, kinetic (including pressure measurements), or electromyographic BRF. A BRF needed to be identified in at least one prospective or two independent retrospective studies. BRFs needed to be determined during running.
Results: Sixty-six articles fulfilled our eligibility criteria. Levels of evidence for specific ROIs ranged from conflicting to moderate evidence. Running populations and methods applied varied considerably between studies. While some BRFs appeared for several ROIs, most BRFs were specific for a particular ROI. Most BRFs derived from lower-extremity joint kinematics and kinetics were located in the frontal and transverse planes of motion. Further, plantar pressure, vertical ground reaction force loading rate and free moment-related parameters were identified as kinetic BRFs.
Conclusion: This study offers a comprehensive overview of BRFs for the most common ROIs, which might serve as a starting point to develop ROI-specific risk profiles of individual runners. We identified limited evidence for most ROI-specific risk factors, highlighting the need for performing further high-quality studies in the future. However, consensus on data collection standards (including the quantification of workload and stress tolerance variables and the reporting of injuries) is warranted.
Background: Many countries have restricted public life in order to contain the spread of the novel coronavirus (SARS-CoV2). As a side effect of related measures, physical activity (PA) levels may have decreased.
Objective: We aimed (1) to quantify changes in PA and (2) to identify variables potentially predicting PA reductions.
Methods: A systematic review with random-effects multilevel meta-analysis was performed, pooling the standardized mean differences in PA measures before and during public life restrictions.
Results: A total of 173 trials with moderate methodological quality (modified Downs and Black checklist) were identified. Compared to pre-pandemic, total PA (SMD − 0.65, 95% CI − 1.10 to − 0.21) and walking (SMD − 0.52, 95% CI − 0.76 to − 0.29) decreased, while sedentary behavior increased (SMD 0.91, 95% CI 0.17 to 1.65). Reductions in PA affected all intensities to a similar degree (light: SMD − 0.35, 95% CI − 0.61 to − 0.09, p = .013; moderate: SMD − 0.33, 95% CI − 0.60 to − 0.02; vigorous: SMD − 0.33, 95% CI − 0.58 to − 0.08). Moderator analyses revealed no influence of variables such as sex, age, body mass index, or health status. However, the only continent without a PA reduction was Australia, and cross-sectional trials yielded higher effect sizes (p < .05).
Conclusion: Public life restrictions associated with the COVID-19 pandemic resulted in moderate reductions in PA levels and large increases in sedentary behavior. Health professionals and policy makers should therefore join forces to develop strategies counteracting the adverse effects of inactivity.
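The pooling step of such a meta-analysis can be illustrated with the classical DerSimonian-Laird random-effects estimator. This is a standard textbook choice sketched for illustration only; the review itself used a multilevel random-effects model, so this is not the authors' exact method:

```python
import numpy as np

def dersimonian_laird(smd, se):
    """Pool standardized mean differences (SMDs) with standard errors
    se using the DerSimonian-Laird random-effects estimator.
    Returns the pooled SMD and its standard error."""
    smd = np.asarray(smd, dtype=float)
    var = np.asarray(se, dtype=float) ** 2
    w = 1.0 / var                                  # inverse-variance weights
    fixed = np.sum(w * smd) / np.sum(w)            # fixed-effect mean
    q = np.sum(w * (smd - fixed) ** 2)             # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(smd) - 1)) / c)      # between-study variance
    w_star = 1.0 / (var + tau2)                    # random-effects weights
    pooled = np.sum(w_star * smd) / np.sum(w_star)
    return pooled, np.sqrt(1.0 / np.sum(w_star))
```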
Governments have restricted public life during the COVID-19 pandemic, inter alia closing sports facilities and gyms. As regular exercise is essential for health, this study examined the effect of pandemic-related confinements on physical activity (PA) levels. A multinational survey was performed in 14 countries. Times spent in moderate-to-vigorous physical activity (MVPA) as well as in vigorous physical activity only (VPA) were assessed using the Nordic Physical Activity Questionnaire (short form). Data were obtained for leisure and occupational PA pre- and during restrictions. Compliance with PA guidelines was calculated based on the recommendations of the World Health Organization (WHO). In total, n = 13,503 respondents (39 ± 15 years, 59% females) were surveyed. Compared to pre-restrictions, overall self-reported PA declined by 41% (MVPA) and 42.2% (VPA). Reductions were higher for occupational vs. leisure time, young and old vs. middle-aged persons, previously more active vs. less active individuals, but similar between men and women. Compared to pre-pandemic, compliance with WHO guidelines decreased from 80.9% (95% CI: 80.3–81.7) to 62.5% (95% CI: 61.6–63.3). Results suggest PA levels have substantially decreased globally during the COVID-19 pandemic. Key stakeholders should consider strategies to mitigate loss in PA in order to preserve health during the pandemic.
The compliant nature of distal limb muscle-tendon units is traditionally considered suboptimal in explosive movements when positive joint work is required. However, during accelerative running, ankle joint net mechanical work is positive. Therefore, this study aims to investigate how plantar flexor muscle-tendon behavior is modulated during fast accelerations. Eleven female sprinters performed maximum sprint accelerations from starting blocks, while gastrocnemius muscle fascicle lengths were estimated using ultrasonography. We combined motion analysis and ground reaction force measurements to assess lower limb joint kinematics and kinetics, and to estimate gastrocnemius muscle-tendon unit length during the first two acceleration steps. Outcome variables were resampled to the stance phase and averaged across three to five trials. Relevant scalars were extracted and analyzed using one-sample and two-sample t-tests, and vector trajectories were compared using statistical parametric mapping. We found that an uncoupling of muscle fascicle behavior from muscle-tendon unit behavior is effectively used to produce net positive mechanical work at the joint during maximum sprint acceleration. Muscle fascicles shortened throughout the first and second steps, while shortening occurred earlier during the first step, where negative joint work was lower compared with the second step. Elastic strain energy may be stored during dorsiflexion after touchdown since fascicles did not lengthen at the same time to dissipate energy. Thus, net positive work generation is accommodated by the reuse of elastic strain energy along with positive gastrocnemius fascicle work. Our results show a mechanism of how muscles with high in-series compliance can contribute to net positive joint work.
Immunosorbent turnip vein clearing virus (TVCV) particles displaying the IgG-binding domains D and E of Staphylococcus aureus protein A (PA) on every coat protein (CP) subunit (TVCVPA) were purified from plants via optimized and new protocols. The latter used polyethylene glycol (PEG) raw precipitates, from which virions were selectively re-solubilized in reverse PEG concentration gradients. This procedure improved the integrity of both TVCVPA and the wild-type subgroup 3 tobamovirus. TVCVPA could be loaded with more than 500 IgGs per virion, which mediated the immunocapture of fluorescent dyes, GFP, and active enzymes. Bi-enzyme ensembles of cooperating glucose oxidase and horseradish peroxidase were tethered together on the TVCVPA carriers via a single antibody type, with one enzyme conjugated chemically to its Fc region, and the other one bound as a target, yielding synthetic multi-enzyme complexes. In microtiter plates, the TVCVPA-displayed sugar-sensing system possessed a considerably increased reusability upon repeated testing, compared to the IgG-bound enzyme pair in the absence of the virus. A high coverage of the viral adapters was also achieved on Ta2O5 sensor chip surfaces coated with a polyelectrolyte interlayer, as a prerequisite for durable TVCVPA-assisted electrochemical biosensing via modularly IgG-assembled sensor enzymes.
Battery degradation is a complex physicochemical process that strongly depends on operating conditions. We present a model-based analysis of lithium-ion battery degradation in a stationary photovoltaic battery system. We use a multi-scale multi-physics model of a graphite/lithium iron phosphate (LiFePO4, LFP) cell including solid electrolyte interphase (SEI) formation. The cell-level model is dynamically coupled to a system-level model consisting of photovoltaics (PV), inverter, load, grid interaction, and energy management system, fed with historic weather data. Simulations are carried out for two load scenarios, a single-family house and an office tract, over annual operation cycles with one-minute time resolution. As key result, we show that the charging process causes a peak in degradation rate due to electrochemical charge overpotentials. The main drivers for cell ageing are therefore not only a high state of charge (SOC), but the charging process leading towards high SOC. We also show that the load situation not only influences system parameters like self-sufficiency and self-consumption, but also has a significant impact on battery ageing. We assess reduced charge cut-off voltage as ageing mitigation strategy.
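The central mechanism — the degradation rate peaking during charging because of charge overpotentials — follows from the exponential overpotential dependence of the SEI side reaction. A reduced-order sketch with hypothetical placeholder parameters (i0, alpha are not taken from the cited multi-scale model):

```python
import math

def sei_rate(eta_charge, i0=1e-7, alpha=0.5, T=293.15):
    """Tafel-type sketch: the SEI side-reaction rate (arbitrary units)
    grows exponentially with the magnitude of the charging
    overpotential eta_charge (V). Parameter values are hypothetical
    placeholders, not parameters of the cited model."""
    F, R = 96485.0, 8.314  # Faraday and gas constants
    return i0 * math.exp(alpha * F * abs(eta_charge) / (R * T))
```

This captures why a high state of charge alone is less damaging than the charging process that leads there: while current flows, the overpotential drives the side reaction far faster than at rest.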
The efficiency of a chromatographic analytical method is determined by the selectivity of the chromatographic separation and the specificity of the detection method. In high-performance thin-layer chromatography (HPTLC) the separated components can be detected and quantified directly on the plate by physical and chemical methods. By coupling high-performance thin-layer chromatography with biological or biochemical inhibition tests it is possible to detect toxic substances in situ.
Introduction: The use of scaffolds in tissue engineering is becoming increasingly important as solutions need to be found to preserve human tissues such as bone or cartilage. Various factors, including cells, biomaterials, and cell and tissue culture conditions, play a crucial role in tissue engineering. The in vivo environment exerts complex stimuli on the cells, thereby directly influencing cell behavior, including proliferation and differentiation. To create suitable replacement or regeneration procedures for human tissues, the conditions of the cells' natural environment should therefore be mimicked as closely as possible. Current research thus aims to develop three-dimensional scaffolds that can elicit appropriate cellular responses and so help the body regenerate or replace tissues. In this work, scaffolds were printed from the biomaterial polycaprolactone (PCL) on a 3D bioplotter. Biocompatibility testing was used to determine whether the printed scaffolds were suitable for use in tissue engineering.
Material and Methods: An Envisiontec 3D bioplotter was used to fabricate the scaffolds. For better cell-scaffold interaction, the printed polycaprolactone scaffolds were coated with type-I collagen. Three different cell types were then cultured on the scaffolds and various tests were used to investigate the biocompatibility of the scaffolds.
Results: Reproducible scaffolds could be printed from polycaprolactone. In addition, a coating process with collagen was developed, which significantly improved the cell-scaffold interaction. Biocompatibility tests showed that the PCL-collagen scaffolds are suitable for use with cells. The cells adhered to the surface of the scaffolds, and as a result extensive cell growth was observed on the scaffolds. The inner part of the scaffolds, however, remained largely uninhabited. In the cytotoxicity studies, toxicity below 20% was found in some experimental runs. Determination of the compressive strength of the scaffolds using a ZWICK Z005 universal testing machine according to DIN EN ISO 604 yielded a value of 68.49 ± 0.47 MPa.
The installation of smart meters and their intelligent networking toward a smart grid will change electricity consumption patterns all the way into the household. The technically dominated discussion about the required components must therefore never crowd out the involvement of society in the upcoming transition. Transparency about costs, the fostering of trust, particularly in data protection standards, and comprehensible public information are key to the necessary dialogue between energy suppliers, policymakers, and citizens.
In Eurocode 3, unlike DIN 18800, the design of connections is not regulated in the basic standard DIN EN 1993-1-1 but in other parts of the standard. This contribution deals with the design of welded connections according to DIN EN 1993-1-8, which also covers hollow sections but includes neither thin-walled components nor steels of higher strength than S460. It compares this design with that according to DIN 18800-1, illustrates it with examples, and highlights the essential changes. Since these changes also concern the much stronger linkage, compared with DIN 18800, between the resistances that may be assumed in structural design and the effort required for inspection and quality surveillance during fabrication, important provisions of DIN EN 1090-2 on the execution and inspection of welds, which the structural designer must also know, are described in conclusion.
Optimization of energetic refurbishment roadmaps for multi-family buildings utilizing heat pumps
(2023)
A novel methodology for calculating optimized refurbishment roadmaps is developed in this paper. The aim of the roadmaps is to determine when, how, and to what depth each component of the building envelope and heat generation system should be refurbished to achieve the lowest net present value. The integrated optimization approach couples a particle swarm optimization algorithm with a dynamic simulation of the building envelope and the heat supply system. Owing to the free selection of implementation times and refurbishment depths, the optimization method achieves the lowest net present value together with high CO2 reductions and is therefore an important contribution toward climate neutrality in the building stock.
The method is exemplarily applied to a multi-family house built in 1970. In comparison to a standard refurbishment roadmap, cost savings of 6–16 % and CO2 savings of 6–59 % are possible. The sensitivity of the refurbishment roadmap measures is analyzed on the basis of a parametric analysis. Robust optimization results are obtained with a mean refurbishment level of approx. 50 kWh/m2/a of the building envelope. The preferred heat generation system is a bivalent brine-heat pump system with a share of 70 % of the heat load being covered by the electric heat pump.
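The objective the roadmap optimization minimizes — the net present value of all refurbishment and energy cashflows — can be sketched as follows. This is a minimal illustration; in the paper the annual cashflows are evaluated via the coupled dynamic building simulation:

```python
def net_present_value(cashflows, rate):
    """Net present value of a series of annual cashflows
    (year 0 first) discounted at the given rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))
```

A particle swarm optimizer then searches over implementation times and refurbishment depths for the roadmap whose simulated cashflow series has the lowest NPV.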
Predictive control has great potential in the home energy management domain. However, such controls need reliable predictions of the system dynamics as well as energy consumption and generation, and the actual implementation in the real system is associated with many challenges. This paper presents the implementation of predictive controls for a heat pump with thermal storage in a real single-family house with a photovoltaic rooftop system. The predictive controls make use of a novel cloud camera-based short-term solar energy prediction and an intraday prediction system that includes additional data sources. In addition, machine learning methods were used to model the dynamics of the heating system and predict loads using extensive measured data. The results of the real and simulated operation will be presented.
During the coronavirus crisis, labs in mechanical engineering had to be offered in digital form at short notice. For this purpose, digital twins of more complex test benches in the field of fluid energy machines were used in the mechanical engineering course, with which the students were able to interact remotely to obtain measurement data. The concept of each lab was revised with regard to its implementation as a remote laboratory. Real-world labs could thereby be fully replaced by remote labs, and student perceptions of the remote labs were mostly positive. This paper explains the concept and design of the digital twins and the labs as well as the layout, procedure, and results of the accompanying evaluation. However, the implementation of the digital twins to date does not yet include features addressing the tactile experience of working in real-world labs.
Crystal structures of two metal–organic frameworks (MFU‐1 and MFU‐2) are presented, both of which contain redox‐active CoII centres coordinated by linear 1,4‐bis[(3,5‐dimethyl)pyrazol‐4‐yl] ligands. In contrast to many MOFs reported previously, these compounds show excellent stability against hydrolytic decomposition. Catalytic turnover is achieved in oxidation reactions by employing tert‐butyl hydroperoxide and the solid catalysts are easily recovered from the reaction mixture. Whereas heterogeneous catalysis is unambiguously demonstrated for MFU‐1, MFU‐2 shows catalytic activity due to slow metal leaching, emphasising the need for a deeper understanding of structure–reactivity relationships in the future design of redox‐active metal–organic frameworks. Mechanistic details for oxidation reactions employing tert‐butyl hydroperoxide are studied by UV/Vis and IR spectroscopy and XRPD measurements. The catalytic process accompanying changes of redox states and structural changes were investigated by means of cobalt K‐edge X‐ray absorption spectroscopy. To probe the putative binding modes of molecular oxygen, the isosteric heats of adsorption of O2 were determined and compared with models from DFT calculations. The stabilities of the frameworks in an oxygen atmosphere as a reactive gas were examined by temperature‐programmed oxidation (TPO). Solution impregnation of MFU‐1 with a co‐catalyst (N‐hydroxyphthalimide) led to NHPI@MFU‐1, which oxidised a range of organic substrates under ambient conditions by employing molecular oxygen from air. The catalytic reaction involved a biomimetic reaction cascade based on free radicals. The concept of an entatic state of the cobalt centres is proposed and its relevance for sustained catalytic activity is briefly discussed.
Background
This article provides an overview and comparison of the most commonly used cemented hip stems, grouped into the various stem types and cement mantle thicknesses, in order to determine which combination performs well.
Methods
From the German Arthroplasty Registry (EPRD), the revision rates of cemented stem types were categorized, and the 3- and 5-year revision rates were recorded and analyzed. The research focused on the Exeter, C-Stem, MS-30, Excia, Bicontact, Charnley, Müller straight stem, Twinsys, Corail, Avenir, Quadra, and Lubinus SP II stems. One important aspect was which stem is preferentially implanted and which cementing technique is used with regard to the planned cement mantle thickness. To identify a trend in cemented hip arthroplasty, the data of the Danish, Swedish, Norwegian, Swiss, New Zealand, English, and Australian arthroplasty registries were additionally compared.
Results and conclusion
Most countries use cemented prostheses based on the force-closed principle (Exeter, MS-30, C-Stem, etc.) or the shape-closed principle (Charnley, Excia, Bicontact), implanted with a cement mantle thickness of 2–4 mm. In Germany and Switzerland, however, a trend toward the line-to-line technique with a planned cement mantle thickness of 1 mm (Twinsys, Corail, Avenir, Quadra) has emerged, following the principle of the Müller straight-stem and Kerboul-Charnley prostheses, even though these are themselves postulated as the "French paradox". In the 5-year EPRD results, the newer line-to-line prostheses appear to perform somewhat worse. The best results are achieved by the MS-30 in Germany and the Exeter in England. These are polished straight stems with a centralizer and subsidence space at the tip, implanted with a 2–4 mm cement mantle in good cementing technique.
Silicon (Si) has turned out to be a promising active material for next-generation lithium-ion battery anodes. Nevertheless, the issues known for Si as an electrode material (pulverization effects, volume change, etc.) are impeding Si anodes from reaching market maturity. In this study, we investigate a possible application of Si anodes in low-power printed electronics. Tailored Si inks are produced, and the impact of carbon coating on their printability and on the electrochemical behavior of the printed Si anodes is investigated. The printed Si anodes contain active material loadings that are practical for powering printed electronic devices, such as electrolyte-gated transistors, and show high capacity retention. A capacity of 1754 mAh/gSi is achieved for a printed Si anode after 100 cycles. Additionally, the direct applicability of the printed Si anodes is demonstrated by successfully powering an ink-jet printed transistor.
The use of a TLC scanner can be regarded as a key step in high-performance thin-layer chromatography (HPTLC). Densitometric measurements transform the substance distribution on a TLC plate into digital data. Systems that allow quantitative measurements of either fluorescence or ultraviolet absorption have been available for many years; lately, the reflection analysis mode has become the most common application for both. New scanning approaches are designed to aid analysts with common demands in TLC densitometry that do not require special data such as scanned images. Two such approaches, recently developed in the authors' laboratories, are described in this paper. They were developed on the basis of the current needs of analysts who employ TLC as a tool in research as well as in routine analysis. One approach is aimed at supporting analysts in economically disadvantaged areas, where cost-intensive apparatus is unaffordable but trace analysis by simple means is required. The other system allows the spectral determination of chromatographic spots on TLC plates over the ultraviolet and visible range, thus revealing highly desired information for the analyst.
Potable water in dry areas is nowadays mainly produced by the desalination of seawater. State-of-the-art desalination plants are usually built with high production capacities and consume a lot of electrical energy or energy from primary resources such as oil. This causes difficulties in rural areas, where no infrastructure is available either for the plants' energy supply or for the distribution of the produced potable water. To address this need, small, self-sustaining, locally operated desalination plants have come into the focus of research. In this work, a novel flash evaporator design is proposed which can be driven either by solar power or by low-temperature waste heat. It offers low operating costs as well as easy maintenance. The results of an experimental setup operated with water at a feed flow rate of up to 1,600 l/h are presented. The proof of concept for efficient evaporation as well as efficient gas-liquid separation is provided successfully: the experimental evaporation yield accounts for 98 % of the vapor content expected from the vapor pressure curve of water. Neither measurements of the electrical conductivity of the obtained condensate nor the analysis of the vapor flow by optical methods show significant droplet entrainment, so there are no concerns regarding the purity of the produced condensate for use as drinking water.
A new formula is presented for transforming fluorescence measurements in accordance with Kubelka-Munk theory. The fluorescence signals, the absorption signals, and data from a selected reference are combined in one expression. Only diode-array techniques can measure all the required data simultaneously to linearize fluorescence data correctly. To prove the new theory, HPTLC quantification of the analgesic flupirtine was performed over the mass range 300 to 5000 ng per spot. The fluorescence calibration curve was linear over the whole range. The transformation of fluorescence measurements into linear mass-dependent data extends the technique of in-situ fluorescence analysis to the high-concentration range. It also extends Kubelka-Munk theory from absorption to fluorescence analysis. The results emphasize the importance of Kubelka-Munk theory for in-situ measurements in scattering media, especially in planar chromatography.
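For orientation, the classical Kubelka-Munk relation that such transformations build on links the measured reflectance of an opaque scattering layer (such as a TLC plate) to the ratio of absorption and scattering coefficients; the paper's extended fluorescence expression modifies this classical form:

```latex
\mathrm{KM}(R) \;=\; \frac{(1-R)^{2}}{2R} \;=\; \frac{K}{S}
```

Here $R$ is the relative reflectance, $K$ the absorption coefficient, and $S$ the scattering coefficient; $\mathrm{KM}(R)$ is approximately proportional to the absorbing mass per unit area, which is what makes the transform useful for linearization.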
A Simple and Reliable HPTLC Method for the Quantification of the Intense Sweetener Sucralose®
(2003)
This paper describes a simple and fast thin layer chromatography (TLC) method for the monitoring of the relatively new intense sweetener Sucralose® in various food matrices. The method requires little or no sample preparation to isolate or concentrate the analyte. The Sucralose® extract is separated on amino‐TLC‐plates, and the analyte is derivatized “reagent‐free” by heating the developed plate for 20 min at 190°C. Spots can be measured either in the absorption or fluorescence mode. The method allows the determination of Sucralose® at the levels of interest regarding foreseen European legislation (>50 mg/kg) with excellent repeatability (RSD = 3.4%) and recovery data (95%).
High-performance thin-layer chromatography (HPTLC), as the modern form of TLC (thin-layer chromatography), is suitable for detecting pharmaceutically active compounds over a wide polarity range using the gradient multiple development (GMD) technique. Diode-array detection (DAD) in conjunction with HPTLC can simultaneously acquire ultraviolet‒visible (UV‒VIS) and fluorescence spectra directly from the plate. Visualization as a contour plot helps to identify separated zones. An orange peel extract is used as an example to show how GMD‒DAD‒HPTLC in seven different developments with seven different solvents can provide an overview of the entire sample. More than 50 compounds in the extract can be separated on a 6-cm HPTLC plate. Such separations take place in the biologically inert stationary phase of HPTLC, making it a suitable method for effect-directed analysis (EDA). HPTLC‒EDA can even be performed with living organisms, as confirmed by the use of Aliivibrio fischeri bacteria to detect bioluminescence as a measure of toxicity. Combining gradient multiple development planar chromatography with diode-array detection and effect-directed analysis (GMD‒DAD‒HPTLC‒EDA), in conjunction with specific staining methods and time-of-flight mass spectrometry (TOF‒MS), will be the method of choice for finding new chemical structures from plant extracts that can serve as basic structures for new pharmaceutically active compounds.
High performance thin layer chromatography (HPTLC) is a frequently used separation technique which works well for the quantification of caffeine and quinine in beverages. Competing separation techniques, e.g. high-performance liquid chromatography (HPLC) or gas chromatography (GC), are not suitable for sugar-containing samples, because these methods require special pretreatment by the analyst. In HPTLC, however, it is possible to separate ‘dirty’ samples without time-consuming pretreatment, because disposable HPTLC plates are used. A convenient method for the quantification of caffeine and quinine in beverages, without sample pretreatment, is presented here. The basic theory of in-situ quantification in HPTLC by use of remitted light is introduced, and several linearization models are discussed.
A home-made diode-array scanner has been used for quantification; this, for the first time, enables simultaneous measurements at different wavelengths. The new scanner also enables fluorescence evaluation without further equipment. Simultaneous recording at different wavelengths improves the accuracy and reliability of HPTLC analysis. These aspects result in substantial improvement of in-situ quantitative densitometric analysis and enable quantification of compounds in beverages.
Fluorescence Enhancement of Pyrene Measured by Thin-Layer Chromatography with Diode-Array Detection
(2003)
In-situ densitometry for qualitative or quantitative purposes is a key step in thin-layer chromatography. It offers a simple way of quantifying by measuring the optical density of the separated spots directly on the plate. A new TLC scanner has been developed which is able to measure TLC or HPTLC plates at different wavelengths simultaneously without destroying the plate surface. The system enables absorbance and fluorescence measurements in one run. Fluorescence measurements are possible without filters or other adjustments.
The measurement of fluorescence from a TLC plate is a versatile means of making TLC analysis more sensitive. Fluorescence measurements with the new scanner are possible without filters or special lamps. Improvement of the signal-to-noise ratio is achieved by wavelength bundling. During plate scanning the scattered light and the fluorescence are both emitted from the surface of the TLC plate and this emitted light provides the desired spectral information from substances on the TLC plate. The measurement of fluorescence spectra and absorbance spectra directly from a TLC plate is based on differential measurement of light emerging from sample-free and sample-containing zones.
The literature recommends dipping TLC plates in viscous liquids to enhance fluorescence. Measurement of the fluorescence and absorbance spectra of pyrene spots reveals the mechanism of this enhancement: blocked contact of the fluorescent molecules with the stationary phase or other sample molecules is responsible for the enhanced fluorescence at lower concentrations.
In conclusion, dipping in TLC analysis is no miracle. It is based on mechanisms similar to those observable in liquids. The measured TLC spectra are also very similar to liquid spectra, and this makes TLC spectroscopy an important tool in separation analysis.
A new diode-array scanner in combination with a computer-controlled application system meets all the demands of modern HPTLC measurement. Automatic application, simultaneous measurements at different wavelengths, and different linearization models enable appropriate evaluation of all analytical questions. The theory of error propagation recommends quantification at reflectance values smaller than 0.8; this can be verified only by use of diode-array scanning. The same theory also recommends quantification by use of peak height data, because the theory predicts best precision only for peak height evaluation. Diode-array scanning with reflectance monitoring enables appropriate validation in TLC and HPTLC analysis. All these aspects result in substantial improvement of in-situ quantitative densitometric analysis, and simultaneous recording at different wavelengths opens the way for chemometric evaluation, e.g. peak purity monitoring, which improves the accuracy and reliability of HPTLC analysis.
In-situ densitometry for qualitative or quantitative purposes is a key step in thin-layer chromatography (TLC). It is a simple means of quantification by measurement of the optical density of the separated spots directly on the plate. A new scanner has been developed which is capable of measuring TLC or HPTLC (high-performance thin-layer chromatography) plates simultaneously at different wavelengths without damaging the plate surface. Fiber optics and special fiber interfaces are used in combination with a diode-array detector. With this new scanner, sophisticated plate evaluation is now possible, which enables the use of chemometric methods in HPTLC. Different regression models have been introduced which enable appropriate evaluation of all analytical questions. Fluorescence measurements are possible without filters or special lamps, and signal-to-noise ratios can be improved by wavelength bundling. Because of the richly structured spectra obtained from PAHs, diode-array HPTLC enables quantification of all 16 EPA PAHs on one track. Although the separation is incomplete, all 16 compounds can be quantified by use of suitable wavelengths. All these aspects enable substantial improvement of in-situ quantitative densitometric analysis.
In this paper, a high-performance thin-layer chromatography (HPTLC) scanner is presented in which a special fibre arrangement is used as the HPTLC plate scanning interface. Measurements are taken with a set of 50 fibres at a distance of 400 to 500 μm above the HPTLC plate. Spatial resolutions on the HPTLC plate of better than 160 μm are possible. It takes less than 2 min to scan 450 spectra simultaneously in the range of 198 to 610 nm. The key improvement of the system is the use of highly transparent glass fibres, which provide excellent transmission at 200 nm, and of a special fibre arrangement for plate illumination and detection.
We present a video-densitometric high-performance thin-layer chromatography (HPTLC) quantification method for patulin in apple juice, developed in a vertical chamber from the starting point to a distance of 50 mm, using MTBE‒n-pentane (9 + 5, v/v) as mobile phase. After separation, the plate is sprayed with 3-methyl-2-benzothiazolinone hydrazone hydrochloride monohydrate (MBTH) solution (40 mg in 20 mL methanol) and heated at 105 °C for 15 min, which transforms the patulin zones into yellow spots. Quantification is based on direct measurements using an inexpensive 48-bit flatbed scanner for color measurements (in red, green, and blue). Evaluation of the blue channel makes the measurements very specific. Quantification in fluorescence was also carried out with a 16-bit CCD camera under UV-366 nm illumination as well as with an HPTLC DAD scanner. For linearization, the extended Kubelka-Munk expression was used for data transformation. The range of linearity covers more than two orders of magnitude, from 5 to 800 ng patulin. The extraction of 20 g of apple juice and the application of up to 50 µL of extract on the plate allow a statistically verified limit of detection (LOD) of 50 ng patulin per track, which is equivalent to 50 µg patulin per kg apple juice.
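As a minimal sketch only (not the authors' implementation, and using the classical rather than the extended Kubelka-Munk expression), the transformation of relative reflectance values from one scanner color channel into approximately mass-linear data could look as follows; the numeric values are hypothetical:

```python
import numpy as np

def kubelka_munk(reflectance):
    """Classical Kubelka-Munk transform KM(R) = (1 - R)^2 / (2R).

    KM(R) is approximately proportional to the absorbing mass per
    unit area of an opaque scattering layer such as a TLC plate,
    which is what linearizes reflectance data for calibration.
    """
    r = np.asarray(reflectance, dtype=float)
    return (1.0 - r) ** 2 / (2.0 * r)

# Hypothetical relative reflectances of the blue channel in three
# patulin zones, referenced against a sample-free part of the track.
r_rel = np.array([0.90, 0.70, 0.40])
km_values = kubelka_munk(r_rel)  # increases monotonically with mass
```

A calibration curve is then fitted to the transformed values rather than to the raw reflectances.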
An Extraction Method for 17α-Ethinylestradiol from Water Using a New Kind of Monolithic Stir Bar
(2015)
We present a densitometric quantification method for triclosan in toothpaste, separated by high-performance thin-layer chromatography (HPTLC) and using a 48-bit flatbed scanner as the detection system. The sample was applied bandwise to HPTLC plates (10 × 20 cm) with fluorescent indicator (Merck 1.05554, Germany). The plates were developed in a vertical developing chamber with 20 min of chamber saturation over 70 mm, using n-heptane–methyl tert-butyl ether–acetic acid (92:8:0.1, V/V) as solvent. Triclosan migrates to hRF = 22.4, and quantification is based on direct measurements using an inexpensive 48-bit flatbed scanner for color measurements (in red, green, and blue) after plate staining with 2,6-dichloroquinone-4-chloroimide (Gibbs' reagent). Evaluation of the red channel makes the measurement of triclosan very specific. For linearization, an extended Kubelka–Munk expression was used for data transformation. The range of linearity lies between 91 and 1000 ng. The separation method is inexpensive, fast and reliable.
A 2D separation of the 16 polyaromatic hydrocarbons (PAHs) of the Environmental Protection Agency (EPA) standard is introduced. Separation took place on a TLC RP-18 plate (Merck, 1.05559). In the first direction, the plate was developed twice using n-pentane at −20°C as the mobile phase. The mixture acetonitrile-methanol-acetone-water (12:8:3:3, v/v) was used for developing the plate in the second direction. Both developments were carried out over a distance of 43 mm. Furthermore, a specific and very sensitive indication method for benzo[a]pyrene and perylene is presented, which can detect these hazardous compounds even in complicated PAH mixtures. These compounds can be quantified by a simple chemiluminescent reaction with a limit of detection (LOD) of 48 pg per band for perylene and 95 pg per band for benzo[a]pyrene. Although these compounds were separated from all other PAHs in the standard, the two could not be separated from one another. The method is therefore suitable for tracing benzo[a]pyrene and/or perylene. The proposed chemiluminescence screening test for PAHs is extremely sensitive but may give a false positive result for benzo[a]pyrene.
Two solvent mixtures for the high-performance thin-layer chromatographic (HPTLC) separation of compounds showing estrogenic activity in the yeast estrogen screen (YES) assay are presented. The new method, the planar yeast estrogen screen (pYES), combines chromatographic separation on silica gel HPTLC plates with the performance of the YES assay. For separation, the analytes were applied bandwise to HPTLC plates (10 × 20 cm) with fluorescent dye (Merck, Germany). The plates were developed in a vertical developing chamber after 30 min of chamber saturation over a separation distance of 70 mm, using cyclohexane‒methyl ethyl ketone (2:1, V/V) or cyclohexane‒CPME (3:2, V/V) as solvents. Both solvents allow the separation of estriol, daidzein, genistein, 17β-estradiol, 17α-ethinyl estradiol, estrone, 4-nonylphenol and bis(2-ethylhexyl) phthalate.
An algorithm is presented that has been successfully used in practice for several years. It improves data evaluation in chromatography. The program runs extremely reliably and evaluates chromatographic raw data with acceptable error. The algorithm requires a minimum of preliminaries and integrates even unsmoothed, noisy data correctly.
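The abstract does not disclose the algorithm itself; purely as an illustration, a minimal baseline-corrected trapezoidal integration of one peak in noisy raw data might look like this (the function name, the linear-baseline assumption, and the synthetic data are ours):

```python
import numpy as np

def peak_area(t, y, start, end):
    """Area of a chromatographic peak between t = start and t = end.

    A straight baseline is drawn between the two integration limits
    and subtracted before trapezoidal integration, so slowly drifting,
    unsmoothed signals can be handled without prior filtering.
    """
    mask = (t >= start) & (t <= end)
    ts, ys = t[mask], y[mask]
    baseline = np.interp(ts, [ts[0], ts[-1]], [ys[0], ys[-1]])
    corrected = ys - baseline
    dt = np.diff(ts)
    # Trapezoidal rule, written out to stay NumPy-version independent.
    return float(np.sum(dt * (corrected[:-1] + corrected[1:]) / 2.0))

# Synthetic Gaussian peak (sigma = 0.5) sampled on 1001 points.
t = np.linspace(0.0, 10.0, 1001)
y = np.exp(-((t - 5.0) ** 2) / (2 * 0.5 ** 2))
area = peak_area(t, y, 2.0, 8.0)  # close to sqrt(2*pi)*0.5 ~ 1.253
```

Real integrators additionally detect the peak limits automatically and handle overlapping peaks, which is where the robustness claimed in the abstract lies.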
An improved separation of the highly toxic contact herbicides paraquat (1,1′-dimethyl-4,4′-bipyridinium), diquat (6,7-dihydrodipyrido[1,2-a:2′,1′-c]pyrazine-5,8-di-ium), difenzoquat (1,2-dimethyl-3,5-diphenyl-1H-pyrazolium methyl sulfate), mepiquat (1,1-dimethylpiperidinium), and chlormequat (2-chloroethyltrimethylammonium) by high-performance thin-layer chromatography (HPTLC) is presented. Quantification is based on a derivatization reaction using sodium tetraphenylborate. Measurements were made in the wavelength range from 500 to 535 nm, using a light-emitting diode (LED) emitting intense light at 365 nm for excitation. For the calculations, a new theory of the standard addition method was used, which leads to a minimal error if exactly the same amount as the sample content is added as standard. The method provides a fast and inexpensive approach to the quantification of the five most important quats used for plant protection. It works reliably because it takes losses during the pre-treatment procedure into account. The method meets the European legislation limits for paraquat and diquat in drinking water, determined according to United States Environmental Protection Agency (US EPA) method 549.2, which are 680 ng L−1 for paraquat and 720 ng L−1 for diquat. The method of standard addition in planar chromatography can be used beneficially to reduce systematic errors: although recovery rates of 33.7% to 65.2% are observed, the contents calculated according to the method of standard addition lie between 69% and 127% of the theoretical amounts.
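As a generic illustration of the single-point standard addition principle that the method builds on (not the paper's refined theory), the sample content follows from the signals before and after spiking; all names and numbers here are ours:

```python
def standard_addition(signal_sample, signal_spiked, amount_added):
    """Single-point standard addition with an assumed linear,
    zero-intercept response: the analyte amount in the sample is
        x = amount_added * S0 / (S1 - S0),
    where S0 is the signal of the sample alone and S1 the signal
    after spiking the sample with amount_added of standard.
    """
    return amount_added * signal_sample / (signal_spiked - signal_sample)

# If spiking with exactly the amount already present doubles the
# signal, the estimate reproduces that amount:
estimate = standard_addition(100.0, 200.0, 50.0)  # -> 50.0
```

Because sample and spiked sample pass through the same pre-treatment, proportional losses cancel, which is why the abstract's low recovery rates still yield usable contents.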
Selective separation of CO2-CH4 mixed gases via magnesium aminoethylphosphonate nanoparticles
(2016)
The CO2 uptake of nanoscale AlO(OH) hollow spheres (260 mg g−1) as a new material is comparable to that of many metal–organic frameworks, although their specific surface area is much lower (530 m2 g−1 versus 1500–6000 m2 g−1). Suited temperature–pressure cycles allow for reversible storage and separation of CO2, while the CO2 uptake is 4.3 times higher than that of N2.
Mass transfer phenomena in membrane fuel cells are complex and diversified because of the presence of intricate transport pathways, including porous media of very different pore sizes, and the possible formation of liquid water. Electrochemical impedance spectroscopy, although providing valuable information on ohmic phenomena, charge transfer and mass transfer, may nevertheless prove insufficient below 1 Hz. The use of another variable, namely back pressure, as the excitation variable for electrochemical pressure impedance spectroscopy is shown here to be a promising tool for the investigation and diagnosis of fuel cells.
Pure orbital blowout fractures occur within the confines of the internal orbital wall. Restoration of orbital form and volume is paramount to prevent functional and esthetic impairment. The anatomical peculiarity of the orbit has encouraged surgeons to develop implants with customized features to restore its architecture. This has resulted in worldwide clinical demand for patient-specific implants (PSIs) designed to fit precisely into the patient's unique anatomy. Material extrusion, or fused filament fabrication (FFF), three-dimensional (3D) printing technology has enabled the fabrication of implant-grade polymers such as polyetheretherketone (PEEK), paving the way for a more sophisticated generation of biomaterials. This study evaluates FFF 3D printed PEEK orbital mesh customized implants with a metric considering the relevant design, biomechanical, and morphological parameters. The performance of the implants is studied as a function of varying thicknesses and porous design constructs through a finite element (FE) based computational model and a decision-matrix-based statistical approach. The maximum stress values achieved in our results predict high durability of the implants, and the maximum deformation values were below one-tenth of a millimeter in all implant profile configurations. The circular patterned implant (0.9 mm) had the best performance score. The study demonstrates that combining multi-design computational analysis with 3D printing can be beneficial for the optimal restoration of the orbital floor.
We present a video-densitometric quantification method for the painkillers diclofenac and ibuprofen. These non-steroidal anti-inflammatory drugs were separated on cyanopropyl-bonded plates using CH2Cl2–methanol–cyclohexane (95 + 5 + 40, v/v) as mobile phase. The quantification is based on a bio-effective-linked analysis using Vibrio fischeri bacteria. Within 10 min, a CCD camera registered the white light of the light-emitting bacteria. Diclofenac and ibuprofen effectively suppress the bacterial light emission, which can be used for quantification within a linear range of 10 to 2000 ng. The detection limit for ibuprofen is 20 ng and the limit of quantification 26 ng per zone. Measurements were carried out using a 16-bit ST-1603ME CCD camera with 1.56 megapixels (Santa Barbara Instrument Group, Inc., Santa Barbara, USA). The range of linearity covers more than two orders of magnitude because the extended Kubelka-Munk expression is used for data transformation. The separation method is inexpensive, fast, and reliable.
We present a video-densitometric quantification method in combination with diode-array quantification for methyl-, ethyl-, propyl-, and butylparaben in cosmetics. These parabens were separated on cyanopropyl-bonded plates using water-acetonitrile-dioxane-ethanol-NH3 (25%) (8:2:1:1:0.05, v/v) as mobile phase. The quantification is based on UV measurements at 255 nm and a bioeffectively-linked analysis using Vibrio fischeri bacteria. Within 5 min, a Tidas S 700 diode-array scanner (J&M, Aalen, Germany) scans 8 tracks and thus measures in total 5600 spectra in the wavelength range from 190 to 1000 nm. The quantification range for all these parabens is from 20 to 400 ng per band, measured at 255 nm. In the V. fischeri assay, a CCD camera registers the white light of the light-emitting bacteria within 10 min. All parabens effectively suppress the bacterial light emission, which can be used for quantification within a linear range from 100 to 400 ng. Measurements were carried out using a 16-bit MicroChemi chemiluminescence system (biostep GmbH, Jahnsdorf, Germany) with a CCD camera with 4.19 megapixels. This range of linearity is achieved because the extended Kubelka-Munk expression was used for data transformation. The separation method is inexpensive, fast, and reliable.
Cast iron materials are used for cylinder heads of heavy-duty internal combustion engines. These components must withstand severe cyclic mechanical and thermal loads throughout their service life. While high-cycle fatigue (HCF) is dominant for the material in the water jacket region, the combination of thermal transients with mechanical load cycles results in thermomechanical fatigue (TMF) of the material in the fire deck region, even including superimposed TMF and HCF loads. Increasing the efficiency of the engines directly leads to increased combustion pressure and temperature and, thus, lower safety margins for the currently used cast iron materials, or alternatively the need for superior cast iron materials. In this paper (Part I), the TMF properties of the lamellar graphite cast iron GJL250 and the vermicular graphite cast iron GJV450 are characterized in uniaxial tests, and a mechanism-based model for TMF life prediction is developed for both materials. The model can be used to estimate the fatigue life of components by means of finite-element calculations (Part II of the paper) and supports engineers in finding the appropriate material and design. Furthermore, the effect of the elastic, plastic and creep properties of the materials on the fatigue life can be evaluated with the model. However, for material selection, the thermophysical properties, which largely control the thermal stresses in the component, must also be considered. Hence, the need for integral concepts for material characterization and selection from a multitude of existing and soon-to-be-developed cast iron materials is discussed.
Cast aluminum alloys are frequently used as materials for cylinder heads in internal combustion gasoline engines. These components must withstand severe cyclic mechanical and thermal loads throughout their lifetime. Reliable computational methods allow for accurate estimation of stress, strain, and temperature fields and lead to more realistic thermomechanical fatigue (TMF) lifetime predictions. With accurate numerical methods, the components can be optimized via computer simulations and the number of required bench tests can be reduced significantly. These alloys are normally optimized for peak hardness from a quenched state, which maximizes the strength of the material. However, due to high-temperature exposure in service or under test conditions, the material experiences an over-ageing effect that leads to a significant reduction in strength. To numerically account for ageing effects, the Shercliff and Ashby ageing model is combined with a Chaboche-type viscoplasticity model available in the finite-element program ABAQUS by defining field variables. The constitutive model with ageing effects is correlated with uniaxial cyclic isothermal tests in the T6 and over-aged states as well as with thermomechanical tests. In addition, the mechanism-based TMF damage model (DTMF) is calibrated for both the T6 and the over-aged state. Both the constitutive and the damage model are applied to a cylinder head component, simulating several cycles of an engine dynamometer test. The effects of including ageing in both models are shown.
This work aimed to determine the influence of two hydrogels (alginate and alginate-dialdehyde (ADA)/gelatin) on the mechanical strength of microporous ceramics loaded with these hydrogels. For this purpose, the compressive strength was determined using a Zwick Z005 universal testing machine. In addition, the degradation behavior according to ISO EN 10993-14 in TRIS buffer at pH 5.0 and pH 7.4 over 60 days was determined, and its effects on the compressive strength were investigated. The loading was carried out by means of a flow chamber. The weight of the samples (manufacturers: Robert Mathys Foundation (RMS) and Curasan) in TRIS solutions at pH 5 and pH 7 increased within 4 h (by a mean of 48 ± 32 mg) and then remained constant over the experimental period of 60 days. The determination of surface roughness showed a decrease for the ceramics incubated in TRIS compared to the untreated ceramics. In addition, an increase in protein concentration in solution was determined for ADA-gelatin-loaded ceramics. The macroporous Curasan ceramic exhibited a maximum failure load of 29 ± 9.0 N, whereas the value for the microporous RMS ceramic was 931 ± 223 N. Filling the RMS ceramic with ADA gelatin increased the maximum failure load to 1114 ± 300 N. The Curasan ceramics were too fragile for loading. The maximum failure load of the RMS ceramics decreased to 686.55 ± 170 N after incubation in TRIS at pH 7.4 and to 651 ± 287 N at pH 5.0.
Purpose
To summarize the mechanical loading of the spine in different activities of daily living and sports.
Methods
Since direct measurement is not feasible in sports activities, a mathematical model was applied to quantify the spinal loading of more than 600 physical tasks in more than 200 athletes from several sports disciplines. The outcomes are compression and torque (normalized to body weight/mass) at L4/L5.
Results
The data demonstrate high compressive forces on the lumbar spine in sport-related activities, which are much higher than forces reported in normal daily activities and work tasks. Especially ballistic jumping and landing skills yield high estimated compression at L4/L5 of more than ten times body weight. Jumping, landing, heavy lifting and weight training in sports demonstrate compression forces significantly higher than guideline recommendations for working tasks.
Conclusion
These results may help to identify acute and long-term risks of low back pain and, thus, may guide the development of preventive interventions for low back pain or injury in athletes.
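To put the reported magnitudes in perspective, a back-of-the-envelope conversion of "ten times body weight" into an absolute force, for a hypothetical 70 kg athlete (our assumption, not a value from the study), reads:

```python
body_mass = 70.0   # kg, assumed athlete mass (illustrative only)
g = 9.81           # m/s^2, gravitational acceleration
multiple = 10.0    # "more than ten times body weight" at L4/L5

compression_N = multiple * body_mass * g
print(round(compression_N))  # -> 6867, i.e. roughly 6.9 kN
```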
Techno-economic comparison of membrane distillation and MVC in a zero liquid discharge application
(2018)
Membrane distillation (MD) is a thermally driven membrane process for the separation of vapour from a liquid stream through a hydrophobic, microporous membrane. However, a commercial breakthrough on a large scale has not been achieved so far. Specific developments of MD technology are required to adapt it for applications in which its properties can potentially outshine state-of-the-art technologies such as standard evaporation. In order to drive these developments in a focused manner, it must first be shown that MD can be economically attractive in comparison to state-of-the-art systems. Thus, this work presents a technological design and economic analysis of AGMD and v-AGMD for application in a zero liquid discharge (ZLD) process chain and compares the costs to those of mechanical vapour compression (MVC) for the same application. The results show that MD can potentially be ~40% more cost-effective than MVC for a system capacity of 100 m3/day feed water, and up to ~75% more cost-effective if the MD is driven with free waste heat.
Membrane distillation (MD) is a thermal separation process which uses a hydrophobic, microporous membrane as vapor space. A high-potential application for MD is the concentration of hypersaline brines, such as reverse osmosis retentate or other saline effluents, to a near-saturation level within a zero liquid discharge process chain. In order to further commercialize MD for these target applications, adapted MD module designs are required along with strategies for the mitigation of membrane wetting phenomena. This work presents the experimental results of pilot operation with an adapted Air Gap Membrane Distillation (AGMD) module for hypersaline brine concentration within a range of 0–240 g NaCl/kg solution. Key performance indicators such as flux, GOR and thermal efficiency are analyzed. A new strategy for wetting mitigation by active draining of the air gap channel through low-pressure air blowing is tested and analyzed. Only small reductions in flux and GOR of 1.2% and 4.1%, respectively, are caused by air sparging into the air gap channel. Wetting phenomena are significantly reduced by avoiding stagnant distillate in the air gap, making the air blower a seemingly worthwhile additional system component.
Over the last few decades, several grid coupling techniques for hierarchically refined Cartesian grids have been developed to provide the possibility of varying mesh resolution in lattice Boltzmann methods. The proposed schemes can be roughly categorized based on the grid transition interface layout they are adapted to, namely cell-vertex or cell-centered approaches, as well as a combination of both. It stands to reason that the specific properties of each of these grid-coupling algorithms influence the stability and accuracy of the numerical scheme; the question is to what extent. The present study compares three established grid-coupling techniques with regard to their stability ranges by conducting a series of numerical experiments for a square duct flow, including various collision models. Furthermore, the hybrid-recursive regularized collision model, originally introduced for cell-vertex algorithms with co-located coarse and fine grid nodes, has been adapted to cell-centered and combined methods.
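One ingredient common to all such coupling schemes, given here as a generic textbook relation rather than anything specific to this study, is the rescaling of the BGK relaxation time between grid levels so that both levels represent the same physical viscosity:

```python
def fine_relaxation_time(tau_coarse, refinement=2):
    """Rescale the BGK relaxation time for a nested grid refined by an
    integer factor under acoustic scaling (dx and dt both divided by
    the refinement factor), so that the physical viscosity
    nu = cs^2 * (tau - 1/2) * dx^2 / dt stays constant across levels:
        tau_fine - 1/2 = refinement * (tau_coarse - 1/2)
    """
    return 0.5 + refinement * (tau_coarse - 0.5)

tau_fine = fine_relaxation_time(1.0)  # -> 1.5 for a factor-2 refinement
```

On top of this relaxation-time rescaling, the individual schemes differ in how distribution functions are interpolated and rescaled at the cell-vertex or cell-centered transition interface, which is where their stability properties diverge.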
Anterior cruciate ligament (ACL) ruptures are frequent in the age group of 15–19 years, particularly for female athletes. Although injury-prevention programs effectively reduce severe knee injuries, little is known about the underlying mechanisms and changes in biomechanical risk factors. Thus, this study analyzes the effects of a neuromuscular injury-prevention program on biomechanical parameters associated with ACL injuries in elite youth female handball players. In a nonrandomized, controlled intervention study, 19 players allocated to a control (n = 12) and an intervention (n = 7) group were investigated for single- and double-leg landings as well as unanticipated side-cutting maneuvers before and after a 12-week study period. The lower-extremity motion of the athletes was captured using a three-dimensional motion capture system consisting of 12 infrared cameras. A lower-body marker set of 40 markers together with a rigid body model, including forefoot, rearfoot, shank, thigh, and pelvis segments, in combination with two force plates was used to determine knee joint angles, resultant external joint moments, and vertical ground reaction forces. The two groups did not differ significantly during pretesting. Only the intervention group showed significant improvements in the initial knee abduction angle during single-leg landing (p = 0.038; d = 0.518), the knee flexion moment during double-leg landings (p = 0.011; d = −1.086), the knee abduction moment during single-leg (p = 0.036; d = 0.585) and double-leg landing (p = 0.006; d = 0.944) and side-cutting (p = 0.015; d = 0.561), as well as the vertical ground reaction force during double-leg landing (p = 0.004; d = 1.482). The control group demonstrated no significant changes in kinematics and kinetics. However, at postintervention, the two groups did not differ significantly in any of the biomechanical outcomes except the normalized knee flexion moment of the dominant leg during single-leg landing.
This study provides first indications that the implementation of a training intervention with specific neuromuscular exercises has positive impacts on biomechanical risk factors associated with ACL injury risk and, therefore, may help prevent severe knee injuries in elite youth female handball players.
Lithium-ion battery cells exhibit a complex and nonlinear coupling of thermal, electrochemical, and mechanical behavior. In order to increase insight into these processes, we report the development of a pseudo-three-dimensional (P3D) thermo-electro-mechanical model of a commercial lithium-ion pouch cell with a graphite negative electrode and a lithium nickel cobalt aluminum oxide/lithium cobalt oxide blend positive electrode. Nonlinear molar volumes of the active materials as a function of lithium stoichiometry are taken from the literature and implemented into the open-source software Cantera for convenient coupling to battery simulation codes. The model is parameterized and validated using electrical, thermal, and thickness measurements over a wide range of C-rates from 0.05 C to 10 C. The combined experimental and simulated analyses show that the thickness change during cycling is dominated by intercalation-induced swelling of graphite, while the swelling of the two blend components partially cancels out. At C-rates above 2 C, the electrochemistry-induced temperature increase contributes significantly to cell swelling due to thermal expansion. The thickness changes are nonlinearly distributed over the thickness of the electrode pair due to gradients in the local lithiation, which may accelerate local degradation. Remaining discrepancies between simulation and experiment at high C-rates might be attributed to lithium plating, which is not considered in the model at present.
Demand Side Management for Thermally Activated Building Systems based on Multiple Linear Regression
(2015)
There is a growing trend toward the use of thermo-active building systems (TABS) for the heating and cooling of buildings, because these systems are known to be very economical and efficient. However, their control is complicated by their large thermal inertia, and their parameterization is time-consuming. With conventional TABS control strategies, the required thermal comfort in buildings often cannot be maintained, particularly if the internal heat sources change suddenly. This paper presents measurement results and evaluations of the operation of a novel adaptive and predictive calculation method based on multiple linear regression (AMLR) for the control of TABS. The measurement results are compared with the standard TABS strategy. The results show that the electrical pump energy could be reduced by more than 86%. Including the weather adjustment, thermal energy savings of over 41% could be demonstrated. In addition, thermal comfort could be improved owing to the possibility of specifying mean room set-point temperatures. With the AMLR, comfort category I of the comfort norms ISO 7730 and DIN EN 15251 is met on about 95% of occasions; with the standard TABS strategy, only about 24% fall within category I.
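A minimal sketch of the regression step that an AMLR-style controller rests on is ordinary least squares over weather and load regressors. The feature set (outdoor temperature, solar gain, internal gain) and all numbers below are illustrative assumptions, not the paper's actual parameterization.

```python
import numpy as np

def fit_mlr(X, y):
    """Ordinary least-squares fit with an intercept column."""
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def predict(coef, X):
    A = np.column_stack([np.ones(len(X)), X])
    return A @ coef

# Hypothetical regressors: [mean outdoor temp (degC), solar gain, internal gain]
X = np.array([[ 2.0, 1.5, 0.8],
              [ 5.0, 2.0, 0.9],
              [-1.0, 0.5, 1.1],
              [ 8.0, 3.0, 0.7],
              [ 0.0, 1.0, 1.0]])
y = np.array([26.0, 20.0, 32.0, 14.0, 30.0])  # required heating energy, kWh (invented)

coef = fit_mlr(X, y)
y_hat = predict(coef, X)  # in operation: predict tomorrow's demand, then schedule TABS
```

In the adaptive variant, the fit would be repeated daily on a sliding window of monitoring data, which is what removes the time-consuming manual parameterization.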
Photovoltaics Energy Prediction Under Complex Conditions for a Predictive Energy Management System
(2015)
The following contribution deals with the growth of cracks in low-cycle fatigue (LCF) and thermomechanical fatigue (TMF) tested specimens of Inconel 718, measured using the replica method. The specimens are loaded with different strain rates. The material shows a significantly higher crack growth rate if the strain rate is decreased. Electron backscatter diffraction (EBSD) is adopted to identify the failure mechanism and the misorientation relationship of failed grain boundaries in secondary cracks. The analyzed cracks propagated mainly transgranularly, but intergranular failure can also be observed in some areas. It is found that grain boundaries with a coincidence site lattice (CSL) boundary structure are generally less susceptible to intergranular failure than grain boundaries with random misorientation. For modeling the experimentally identified crack behavior, an existing model for fatigue crack growth based on the mechanism of time-dependent elastic–plastic crack tip blunting is enhanced to describe environmental effects based on the mechanism of oxygen diffusion at the crack tip. For the diffusion process, the temperature-dependent parabolic diffusion law is assumed. As a result, the time-dependent cyclic crack tip opening displacement (ΔCTOD) is used as a representative value to describe both mechanisms. Thus, most of the included model parameters characterize the deformation behavior of the material and can be determined by independent material tests. With the determined material properties, the proposed model describes the experimentally measured crack growth curves very well. The model is validated based on predictions of the number of cycles to failure of LCF as well as in-phase and out-of-phase TMF tests in the temperature range between room temperature and 650 °C.
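The temperature-dependent parabolic diffusion law invoked for crack-tip oxidation can be sketched as follows; the Arrhenius constants k0 and Q are placeholder values, not the fitted parameters of the paper.

```python
import math

R = 8.314  # universal gas constant, J/(mol K)

def oxygen_affected_depth(t, T, k0=1.0e-10, Q=150.0e3):
    """Parabolic growth of the oxygen-affected zone at the crack tip:
    x(t) = sqrt(k(T) * t), with an Arrhenius rate constant
    k(T) = k0 * exp(-Q / (R * T)).
    k0 [m^2/s] and Q [J/mol] are illustrative placeholders."""
    k = k0 * math.exp(-Q / (R * T))
    return math.sqrt(k * t)

# Slower straining means longer exposure per cycle, hence a deeper affected zone:
d_1h = oxygen_affected_depth(3600.0, 650.0 + 273.15)
d_4h = oxygen_affected_depth(4 * 3600.0, 650.0 + 273.15)  # exactly twice d_1h
```

The square-root time dependence is what couples the environmental damage to strain rate: quadrupling the exposure time per cycle only doubles the affected depth, but at low strain rates the exposure time grows strongly.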
The following contribution deals with the experimental investigation and theoretical evaluation of fatigue crack growth under isothermal and non-isothermal conditions in the nickel alloy 617. The microstructure and mechanical properties of alloy 617 are influenced significantly by the thermal heat treatment and the subsequent thermal exposure in service. Hence, a solution-annealed and a long-time service-exposed material condition are studied. The crack growth measurement is carried out using an alternating current potential drop system, which is integrated into a thermomechanical fatigue (TMF) test facility. The measured fatigue crack growth rates are presented as a function of material condition, temperature, and load waveform. Furthermore, the results of the non-isothermal tests depend on the phase between thermal and mechanical load (in-phase, out-of-phase). A fracture-mechanics-based, time-dependent model is extended by an approach to consider environmental effects, in which almost all model parameters represent directly measurable values. A consistent description of all results and a good correlation with the experimental data are achieved.
In this paper, a temperature-dependent viscoplasticity model is presented that describes thermal and cyclic softening of the hot work steel X38CrMoV5-3 under thermomechanical fatigue loading. The model describes the softening state of the material by evolution equations, whose material properties can be determined on the basis of a defined experimental program. A kinetic model is employed to capture the effect of coarsening carbides, and a new isotropic cyclic softening model is developed that takes history effects during thermomechanical loadings into account. The temperature-dependent material properties of the viscoplasticity model are determined on the basis of experimental data measured in isothermal and thermomechanical fatigue tests for the material X38CrMoV5-3 in the temperature range between 20 and 650 °C. The comparison with an existing model for isotropic softening shows an improved description of the softening behavior under thermomechanical fatigue loading. A good overall description of the experimental data is possible with the presented viscoplasticity model, so that it is suited for the assessment of operating loads of hot forging tools.
In this paper, the Bauschinger effect and latent hardening of single crystals are assessed in finite element calculations using a single crystal plasticity model with kinematic hardening. To this end, results of cyclic micro-bending experiments on single crystal Alloy 718 in different crystal orientations (single slip and multi slip) with respect to the loading direction are used to determine the slip-system-related material properties of the single crystal plasticity model. Two kinematic hardening laws are considered: one describing latent hardening and one without latent hardening. The material properties of both hardening laws are determined using a gradient-based optimization method. The results show that the different strength levels observed in micro-bending tests on different crystal orientations can only be described well with latent kinematic hardening, whereas the pronounced Bauschinger effect is described well by both kinematic hardening laws. It is concluded that cyclic micro-bending experiments on single crystals using different crystal orientations provide an appropriate database for the determination of the slip-system-related material properties of the single crystal plasticity model with latent kinematic hardening.
Electrochemical pressure impedance spectroscopy (EPIS) has recently been developed as a potential diagnosis tool for polymer electrolyte membrane fuel cells (PEMFC). It is based on analyzing the frequency response of the cell voltage with respect to an excitation of the gas-phase pressure. We present here a combined modeling and experimental study of EPIS. A pseudo-two-dimensional PEMFC model was parameterized to a 100 cm² laboratory cell installed in its test bench and used to reproduce steady-state cell polarization and electrochemical impedance spectra (EIS). Pressure impedance spectra were obtained both in experiment and simulation by applying a harmonic pressure excitation at the cathode outlet. The model shows good agreement with experimental data for current densities ≤ 0.4 A cm⁻². This allows a further simulative analysis of the observed EPIS features, including the magnitude and shape of the spectra. Key findings include a strong influence of the humidifier gas volume on EPIS and a substantial increase in oxygen partial pressure oscillations towards the channel outlet at the resonance frequency. At current densities ≥ 0.8 A cm⁻², the experimental EIS and EPIS data cannot be fully reproduced. This deviation might be associated with the formation and transport of liquid water, which is not included in the model.
Electrochemical pressure impedance spectroscopy (EPIS) is an emerging tool for the diagnosis of polymer electrolyte membrane fuel cells (PEMFC). It is based on analyzing the frequency response of the cell voltage with respect to an excitation of the gas-phase pressure. Several experimental studies in the past decade have shown the complexity of EPIS signals, and so far there is no agreement on the interpretation of EPIS features. The present study helps to shed light on the physicochemical origin of EPIS features by using a combination of pseudo-two-dimensional modeling and analytical interpretation. Using static simulations, the contributions of the cathode equilibrium potential, cathode overpotential, and membrane resistance to the quasi-static EPIS response are quantified. Using model reduction, the EPIS responses of individual dynamic processes are predicted and compared to the response of the full model. We show that the EPIS signal of the PEMFC studied here is dominated by the humidifier. The signal is further analyzed using transfer functions between various internal cell states and the outlet pressure excitation. We show that the EPIS response of the humidifier is caused by an oscillating oxygen molar fraction due to an oscillating mass flow rate.
Cost effectiveness of preventive screening programmes for type 2 diabetes mellitus in Germany
(2010)
As in several other industrialized countries, Germany’s statutory health insurance (SHI) is facing rising healthcare costs as well as the challenges caused by a double-aging society. The early detection and prevention of chronic diseases is considered a possible way to reduce the impact of these developments. However, controversy surrounds the costs and effects in terms of medical and financial outcomes of such programmes.
It is considered necessary to implement advanced controllers such as model predictive control (MPC) to utilize the technical flexibility of a building polygeneration system in support of the rapidly expanding renewable electricity grid. Such controllers can handle multiple inputs and outputs, uncertainties in forecast data, and plant constraints, amongst other features. One of the main issues identified in the literature regarding the deployment of these controllers is the lack of experimental demonstrations using standard components and communication protocols. In this original work, the economic-MPC-based optimal scheduling of a real-world heat pump-based building energy plant is demonstrated, and its performance is evaluated against two conventional controllers. The demonstration includes the steps to integrate an optimization-based supervisory controller into a typical building automation and control system with off-the-shelf HVAC components and the use of state-of-the-art algorithms to solve a mixed-integer quadratic problem. Technological benefits in terms of fewer constraint violations and a hardware-friendly operation with MPC were identified. Additionally, a strong dependency of the economic benefits on the type of load profile, system design, and controller parameters was identified. Future work on the quantification of these benefits, the application of machine learning algorithms, and the study of forecast deviations is also proposed.
On 1 July 2022, around 60 participants from research, teaching, and industry met at Offenburg University of Applied Sciences for an international conference held as the closing colloquium of the ACA-Modes project. The project results on the successful implementation of model predictive control strategies were presented, current research questions were discussed, and development paths toward grid-supportive operation of interconnected energy systems were outlined.
The energy system of the future will transform from the current centralised, fossil-based system into a decentralised, clean, highly efficient, and intelligent network. This transformation will require innovative technologies and ideas like trigeneration and the crowd energy concept to pave the way ahead. Even though trigeneration systems are extremely energy efficient and can play a vital role in the energy system, their deployment is hindered by various barriers. These barriers are theoretically analysed in a multiperspective approach, and the role decentralised trigeneration systems can play in the crowd energy concept is highlighted. An initial literature review shows that a multiperspective (technological, energy-economic, and user) analysis is necessary for realising the potential of trigeneration systems in a decentralised grid. To experimentally quantify these issues, we are setting up a microscale trigeneration lab at our institute; the motivation for this lab is also briefly introduced.
Cooling towers or recoolers are among the major consumers of electricity in an HVAC plant. The implementation and analysis of advanced control methods in a practical application, and their comparison with conventional controllers, is necessary to establish a framework for their feasibility, especially in the field of decentralised energy systems. A standard industrial controller, a PID controller, and a model-based controller were developed and tested in an experimental set-up using market-ready components. The characteristics of these controllers, such as settling time, control difference, and frequency of control actions, are compared based on the monitoring data. The modern controllers demonstrated clear advantages in terms of energy savings and higher accuracy, and the model-based controller was easier to set up than the PID controller.
Drawing on the technical flexibility of building polygeneration systems to support a rapidly expanding renewable electricity grid requires the application of advanced controllers like model predictive control (MPC) that can handle multiple inputs and outputs, uncertainties in forecast data, and plant constraints, amongst other features. In this original work, the economic-MPC-based optimal scheduling of a real-world building energy system is demonstrated and its performance is evaluated against a conventional controller. The demonstration includes the steps to integrate an optimisation-based supervisory controller into a standard building automation and control system with off-the-shelf HVAC components and the use of state-of-the-art algorithms for solving complex nonlinear mixed-integer optimal control problems. With the MPC, quantitative benefits in terms of 6–12% demand-cost savings and qualitative benefits in terms of better controller adaptability and hardware-friendly operation are identified. Further research potential for improving the MPC framework in terms of field-level stability, minimising constraint violations, and inter-system communication for its deployment in a prosumer network is also identified.
Optimisation-based economic dispatch of real-world complex energy systems demands reduced-order, continuously differentiable component models that can represent part-load behaviour and dynamic responses. A literature study of existing modelling methods and of the characteristics the models must meet for successful application in model predictive control of a polygeneration system is presented. Building on this, a rational modelling procedure using engineering principles and assumptions is applied to develop simplified component models. The models are quantitatively and qualitatively evaluated against experimental data, and their efficacy for application in a building automation and control architecture is established.
To achieve its climate goals, German industry has to undergo a transformation toward renewable energies. To analyze this transformation in energy system models, the industry's electricity demands have to be provided at a high temporal and sectoral resolution, which, to date, is not the case due to a lack of open-source data. In this paper, a methodology for the generation of synthetic electricity load profiles is described; it was applied to 11 industry types. The modeling is based on normalized daily load profiles for eight electrical end-use applications. The profiles are then further refined using the mechanical processes of the different branches. Finally, a stochastic fluctuation is applied to the profiles. A quantitative RMSE comparison between real and synthetic load profiles showed that the developed method is especially accurate for representing loads from three-shift industrial plants. A procedure for applying the synthetic load profiles to a regional distribution of the industry sector completes the methodology.
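The construction described above can be sketched as a weighted sum of normalized end-use profiles plus a stochastic fluctuation, validated by RMSE. The two end-use shapes, weights, and fluctuation amplitude below are invented for illustration, not taken from the study.

```python
import math
import random

def rmse(a, b):
    """Root-mean-square error between two equally long profiles."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def synthetic_profile(end_use_profiles, weights, fluctuation=0.05, seed=42):
    """Weighted sum of normalized 24-h end-use profiles with multiplicative noise."""
    rng = random.Random(seed)
    base = [sum(w * p[h] for w, p in zip(weights, end_use_profiles))
            for h in range(24)]
    return [v * (1 + rng.uniform(-fluctuation, fluctuation)) for v in base]

# Hypothetical normalized end-use shapes: flat three-shift load vs. daytime peak
shift_load = [1.0] * 24
day_load = [0.2] * 6 + [1.0] * 12 + [0.2] * 6
profile = synthetic_profile([shift_load, day_load], weights=[0.7, 0.3])
```

The flat three-shift component dominating the sum is consistent with the finding that the method is most accurate for three-shift plants: the stochastic term is then the only source of deviation.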
Research is often conducted to investigate footwear mechanical properties and their effects on running biomechanics, but little is known about their influence on runner satisfaction, i.e., how well the shoe is perceived. A tool to predict runner satisfaction with a shoe from its mechanical properties would be advantageous for footwear companies. Data in this study were from a database (n = 615 subject-shoe pairings) of satisfaction ratings (gathered after participants ran on a treadmill) and mechanical testing data for 87 unique subjects across 61 unique shoes. Random forest and elastic net logistic regression models were built to test whether footwear mechanical properties and subject characteristics could predict runner satisfaction in three ways: degree of satisfaction on a 7-point Likert scale, overall satisfaction on a 3-point Likert scale, and willingness to purchase the shoe (yes/no response). Data were divided into training and validation sets, using an 80–20 split, to build the models and test their accuracy, respectively. Model accuracies were compared against the no-information rate (i.e., the proportion of data belonging to the largest class). The models were not able to predict degree of satisfaction or overall satisfaction from footwear mechanical properties but could predict runners' willingness to purchase with 68–75% accuracy. Midsole Gmax at the heel and forefoot appeared in the top five of variable importance rankings across both willingness-to-purchase models, suggesting its role as a major factor in purchase decisions. The negative regression coefficient for both heel and forefoot Gmax indicated that softer midsoles increase the likelihood of a shoe purchase. Future models to predict satisfaction may improve accuracy with the addition of more subject-specific parameters, such as running goals or foot proportions.
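The no-information-rate baseline used to judge the models is simple to compute: it is the accuracy of always predicting the largest class. The label counts and predictions below are fabricated for illustration; they do not reproduce the study's split.

```python
from collections import Counter

def no_information_rate(labels):
    """Accuracy of always predicting the largest class."""
    counts = Counter(labels)
    return max(counts.values()) / len(labels)

def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Hypothetical validation labels: 60% of runners willing to purchase
y_true = ["yes"] * 60 + ["no"] * 40
nir = no_information_rate(y_true)  # 0.60 here

# A classifier is only informative if it beats this baseline
# (the study's willingness-to-purchase models reached 68-75%):
y_pred = ["yes"] * 55 + ["no"] * 5 + ["no"] * 32 + ["yes"] * 8
acc = accuracy(y_true, y_pred)     # 0.87 here
```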
Ecological concerns about the climatic effects of emissions from electricity production require electricity grids to accept growing amounts of intermittent regenerative electricity feed-in from wind and solar power. Germany's ambitious political target of doubling regenerative electricity production by 2030 puts pressure on grid operators to adapt and restructure their transmission and distribution grids. The ability of local distribution grids to operate autonomously of transmission grid supply is essential to stabilize electricity supply at the level of the German federal states. Although congestion management and collaboration at the distribution system operator (DSO) level are promising approaches, relatively few studies address this issue. This study presents a methodology to assess the electric energy balance of the low-voltage grids in the German federal state of Baden-Württemberg, assuming typical load curves, and to determine the interchange potential among local distribution grids by means of linear programming of the supply function for typical seasonal electricity demands. The model can make a statement about the performance and development requirements of the grid architecture for scenarios in 2035 and 2050, when regenerative energies will, according to present legislation, account for more than half of Germany's electricity supply. The study details the amendments to Baden-Württemberg's electricity grid required to fit the system to the requirements of regenerative electricity production. The suggested model for grid analysis can be used in other German regions and internationally to systematically remunerate electricity grids for the acceptance of larger amounts of regenerative electricity inflows. This empirical study closes the research gap of assessing the interchange potential among DSOs, considering typical power loads and, simultaneously, typical electricity inflows.
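In its simplest relaxation, ignoring network constraints and losses, the linear program for the interchange potential reduces to a net balance across grids. The sketch below shows only that relaxation, with invented numbers; the study's actual LP additionally respects the supply function and seasonal demands.

```python
def unmet_demand(balances):
    """Residual demand (MW) after pooling all grids' surpluses, ignoring
    network constraints and losses: the simplest relaxation of a linear
    interchange program. Positive entries are feed-in surpluses,
    negative entries are deficits."""
    surplus = sum(b for b in balances if b > 0)
    deficit = sum(-b for b in balances if b < 0)
    return max(0.0, deficit - surplus)

# Five hypothetical low-voltage grids in one hour: 20 MW surplus vs. 28 MW deficit
residual = unmet_demand([12.0, -5.0, -20.0, 8.0, -3.0])  # 8 MW from the transmission grid
```

A full LP would add line-capacity constraints between grid pairs, so the true residual can only be larger than this lower bound.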
The German government is aiming to increase the share of renewable energies in the electricity supply to 80% by 2050. To date, however, neither the technical nor the market requirements to implement this aim are in place. As an important incentive mechanism, the German government has used and continues to use support measures, such as guaranteed feed-in tariffs, and continuously adapts these to market developments and the requirements of the European Union. The purpose of the study is to outline a concept for the implementation of regional flexibility markets in Europe based on a thorough review of technical solutions. A comprehensive review of research on regional flexibility markets for electricity, distribution, and pricing is applied to summarize and discuss the opportunities, risks, and future potential of grid distribution technology. Based on these insights, a new market-based supply and distribution scheme for electricity is presented, aimed at enabling a fully regenerative, decentralized, and fairly priced electricity market at the European level. The study suggests a blockchain-based pricing mechanism that allows equal market access for consumers, providers, and grid operators and rewards regenerative production and short-distance transmission.
With the growing share of renewable energies in the electricity supply, transmission and distribution grids have to be adapted. A profound understanding of the structural characteristics of distribution grids is essential to define suitable strategies for grid expansion. Many countries have a large number of distribution system operators (DSOs) whose standards vary widely, which contributes to coordination problems during peak load hours. This study contributes to targeted distribution grid development by classifying DSOs according to their remuneration requirement. To examine the amendment potential, structural and grid development data from 109 distribution grids in South-Western Germany are collected, referring to publications of the respective DSOs. The resulting database is assessed statistically to identify clusters of DSOs according to the fit of demographic requirements and grid-construction status, and thus to identify development needs that would enable a broader use of regenerative energy resources. Three alternative algorithms are explored for this task. The study finds the novel Gauss-Newton algorithm optimal for analysing the fit of grid conditions to regional requirements; it successfully identifies grids with remuneration needs and is superior to the previously used K-Means algorithm. The method developed here is transferable to other areas for grid analysis and targeted, cost-efficient development.
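As a point of reference for the clustering step, the K-Means baseline on a single feature can be sketched in a few lines; the Gauss-Newton-based variant the study favors is not reproduced here, and the DSO scores below are invented.

```python
def kmeans_1d(values, k=2, iters=20):
    """Plain k-means on one feature (e.g. a DSO's remuneration-requirement
    score). Initialization picks k spread-out values; this is the baseline
    the study compares its Gauss-Newton approach against."""
    centers = sorted(values)[:: max(1, len(values) // k)][:k]
    for _ in range(iters):
        # assign each value to its nearest center
        clusters = [[] for _ in centers]
        for v in values:
            i = min(range(len(centers)), key=lambda j: abs(v - centers[j]))
            clusters[i].append(v)
        # move each center to its cluster mean (keep old center if empty)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

# Hypothetical remuneration-requirement scores for six DSOs
centers, clusters = kmeans_1d([0.9, 1.0, 1.2, 9.8, 10.0, 10.5])
```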
In pandemic times, the possibilities for conventional sports activities are severely limited; many sports facilities are closed or can only be used with restrictions. To counteract this lack of health activities and social exchange, people are increasingly adopting new digital sports solutions, a behavior change that had already started with the trend towards fitness apps and activity trackers. Existing research suggests that digital solutions increase the motivation to move and stay active. This work further investigates the potential of digital sports, incorporating the dimensions of gender and preference for team sports versus individual sports. The study focuses on potential users, who were mostly younger professionals and academics. The results show that the SARS-CoV-2 pandemic had a significant negative impact on sports activity, particularly for persons preferring team sports. To compensate, most participants use more digital sports than before, and there is a positive correlation between the time spent physically active during the pandemic and the increase in motivation through digital sports. Nevertheless, there is still considerable skepticism regarding the potential of digital sports solutions to increase the motivation to do sports, increase performance, or raise a sense of team spirit when done in groups.
Since 2003, most European countries have established heat health warning systems (HHWS) to alert the population to heat load. These systems are based on predicted meteorological conditions outdoors, but the majority of the European population spends a substantial amount of time indoors, where thermal conditions can differ substantially from those outdoors. The German Meteorological Service (Deutscher Wetterdienst, DWD) extended its existing heat health warning system with a thermal building simulation model to account for heat load indoors. In this study, the thermal building simulation model is used to simulate a standardized building representing a modern nursing home, because elderly and sick people are most sensitive to heat stress. Different types of natural ventilation were simulated. Based on current and future test reference years, changes in future heat load indoors were analyzed. The results show differences between the various ventilation options and the possibility of minimizing thermal heat stress during summer by using an appropriate ventilation method. Nighttime ventilation is most important for indoor thermal comfort. A fully opened window at nighttime combined with 2-h ventilation in the morning and evening avoids heat stress more effectively than a tilted window at nighttime with 1-h ventilation in the morning and evening. Ventilation in the morning, in particular, appears effective at keeping the indoor heat load low. Comparing the results for the current and future test reference years, an increase in heat stress can be recognized for all ventilation types.
Retail sales are essential for maintaining and developing an energy supplier's customer base. To deploy scarce sales resources as effectively as possible, however, knowledge is needed of how the average achievable electricity prices and the expected customer retention period differ between sales channels. If the value of a customer per sales channel is derived from this information, decisions on marketing budgets can be made with greater precision.
Background
Internal tibial loading is influenced by modifiable factors with implications for the risk of stress injury. Runners encounter varied surface steepness (gradients) when running outdoors and may adapt their speed according to the gradient. This study aimed to quantify tibial bending moments and stress at the anterior and posterior peripheries when running at different speeds on surfaces of different gradients.
Methods
Twenty recreational runners ran on a treadmill at 3 different speeds (2.5 m/s, 3.0 m/s, and 3.5 m/s) and gradients (level: 0%; uphill: +5%, +10%, and +15%; downhill: –5%, –10%, and –15%). Force and marker data were collected synchronously throughout. Bending moments were estimated at the distal third centroid of the tibia about the medial–lateral axis by ensuring static equilibrium at each 1% of stance. Stress was derived from bending moments at the anterior and posterior peripheries by modeling the tibia as a hollow ellipse. Two-way repeated-measures analyses of variance were conducted using both functional and discrete statistical analyses.
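The hollow-ellipse step can be written out explicitly: the peripheral bending stress is sigma = M * c / I, with the second moment of area of a hollow ellipse. The cross-sectional dimensions and the moment below are illustrative placeholders, not participant data.

```python
import math

def hollow_ellipse_stress(M, a_p, b_p, a_e, b_e):
    """Bending stress at the periphery of a hollow elliptical cross-section.
    M        : bending moment about the medial-lateral axis [N m]
    a_p, b_p : outer (periosteal) semi-axes [m]; b is in the bending direction
    a_e, b_e : inner (endosteal) semi-axes [m]
    sigma = M * b_p / I, with I = pi/4 * (a_p * b_p**3 - a_e * b_e**3)."""
    I = math.pi / 4.0 * (a_p * b_p ** 3 - a_e * b_e ** 3)
    return M * b_p / I

# Illustrative distal-third tibial cross-section (12 x 10 mm outer semi-axes)
sigma = hollow_ellipse_stress(M=80.0, a_p=0.012, b_p=0.010,
                              a_e=0.008, b_e=0.006)  # Pa
```

Because stress scales linearly with the bending moment, any speed- or gradient-driven change in the moment maps directly onto the anterior and posterior peripheral stresses.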
Results
There were significant main effects for running speed and gradient on peak bending moments and peak anterior and posterior stress. Higher running speeds resulted in greater tibial loading. Running uphill at +10% and +15% resulted in greater tibial loading than level running. Running downhill at –10% and –15% resulted in reduced tibial loading compared to level running. There was no difference between +5% or –5% and level running.
Conclusion
Running at faster speeds and uphill on gradients ≥+10% increased internal tibial loading, whereas slower running and downhill running on gradients ≥–10% reduced internal loading. Adapting running speed according to the gradient could be a protective mechanism, providing runners with a strategy to minimize the risk of tibial stress injuries.
Adsorption of N2 and CO2 on Activated Carbon, AlO(OH) Nanoparticles, and AlO(OH) Hollow Spheres
(2015)
Adsorption behaviors of nitrogen and CO2 on Norit R1 Extra activated carbon and on AlO(OH) nanoparticles and hollow spheres were measured under different temperature and pressure conditions using a magnetic suspension balance. Independent of the substrate investigated, all isotherms increase at lower pressure, reach a maximum, and then decrease with increasing pressure. In addition, selected experimental data were correlated with different model approaches and compared with reliable literature data. In the case of CO2 on AlO(OH), capillary condensation was observed at two defined temperatures. The results suggest that the conversion of the liquid into a supercritical adsorbate phase does not take place suddenly.
In this paper, the influence of the material hardening behavior on plasticity-induced fatigue crack closure is investigated for strain-controlled loading and fully plastic, large-scale yielding conditions by means of the finite element method. The strain amplitude and the strain ratio are varied for given Ramberg–Osgood material properties representing materials with different hardening behavior. The results show a pronounced influence of the hardening behavior on crack closure, while no significant effect is found from the considered strain amplitude and strain ratio. The effect of the hardening behavior on the crack opening stress cannot be described by existing crack opening stress equations.
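The Ramberg–Osgood relation underlying the parameter variation can be sketched as follows; E, K, and n are illustrative values, not the material properties used in the study.

```python
def ramberg_osgood_strain(sigma, E=200.0e3, K=1200.0, n=0.1):
    """Total strain from the Ramberg-Osgood relation
        eps = sigma / E + (sigma / K) ** (1 / n)
    with stress in MPa. E is Young's modulus, K the strength coefficient,
    and n the hardening exponent; all three are illustrative placeholders."""
    return sigma / E + (sigma / K) ** (1.0 / n)

# At low stress the elastic term dominates; plastic strain grows rapidly near K:
eps_low = ramberg_osgood_strain(100.0)   # essentially 100/200000 = 5.0e-4
eps_high = ramberg_osgood_strain(900.0)
```

Varying K and n within such a law changes how much plastic strain accumulates in the crack-tip region per cycle, which is the mechanism by which the hardening behavior feeds into plasticity-induced crack closure.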
Lithium‐ion battery cells are multiscale and multiphysics systems. Design and material parameters influence the macroscopically observable cell performance in a complex and nonlinear way. Herein, the development and application of three methodologies for model‐based interpretation and visualization of these influences are presented: 1) deconvolution of overpotential contributions, including ohmic, concentration, and activation overpotentials of the various cell components; 2) partial electrochemical impedance spectroscopy, allowing a direct visualization of the origin of different impedance features; and 3) sensitivity analyses, allowing a systematic assessment of the influence of cell parameters on capacity, internal resistance, and impedance. The methods are applied to a previously developed and validated pseudo‐3D model of a high‐power lithium‐ion pouch cell. The cell features a blend cathode. The two blend components show strong coupling, which can be observed and interpreted using the results of overpotential deconvolution, partial impedance spectroscopy, and sensitivity analysis. The presented methods are useful tools for model‐supported lithium‐ion cell research and development.
Lithium-ion batteries exhibit a well-known trade-off between energy and power, which is problematic for electric vehicles, which require both high energy during discharge (long driving range) and high power during charge (fast-charge capability). We use two commercial lithium-ion cells (high-energy [HE] and high-power) to parameterize and validate physicochemical pseudo-two-dimensional models. In a systematic virtual design study, we vary electrode thicknesses, cell temperature, and the type of charging protocol. We show that low anode potentials during charge, which induce lithium plating and cell aging, can be effectively avoided either by using high temperatures or by using a constant-current/constant-potential/constant-voltage charging protocol that includes a constant anode potential phase. We introduce and quantify a specific charging power as the ratio of discharged energy (at slow discharge) to required charging time (at fast charge). This value is shown to exhibit a distinct optimum with respect to electrode thickness. At 35 °C, the optimum was achieved with an HE electrode design, yielding 23.8 Wh/(min L) volumetric charging power at a 15.2 min charging time (10% to 80% state of charge) and 517 Wh/L discharge energy density. By analyzing the various overpotential contributions, we show that electrolyte transport losses are predominantly responsible for the insufficient charge and discharge performance of cells with very thick electrodes.
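The reported optimum is internally consistent with the definition of specific charging power. Assuming the discharged energy is scaled by the 70% SOC window covered by the fast charge (an interpretation on our part, since the abstract does not spell this step out), the figures reproduce as:

```python
# Recomputing the specific charging power reported for the high-energy
# design at 35 degC from the abstract's own numbers.
discharge_energy_density = 517.0   # Wh/L, slow full discharge
soc_window = 0.80 - 0.10           # fast charge covers 10% -> 80% SOC
charging_time = 15.2               # min

# Specific charging power = discharged energy / charging time
specific_charging_power = discharge_energy_density * soc_window / charging_time
# -> approximately 23.8 Wh/(min*L), matching the reported optimum
```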
DEM–FEA estimation of pores arrangement effect on the compressive Young’s modulus for Mg foams
(2015)
This work studies the effect of pore arrangement on the compressive behavior of Mg foams with regular pore size and porosities ranging from 25% to 45%. Pore arrangements were modeled using Finite Element Analysis (FEA) with both random and ordered models and compared with the estimates obtained in a previous work. The coordinates of the random pore arrangements were first generated using the Discrete Element Method (DEM) and then used in a second stage to model the pores by FEA. The estimates were also compared with experimental results for Mg foams produced by powder metallurgy. The results show a marked drop in Young's modulus with increasing porosity, both in the experiments and in the FEA estimates. Estimates obtained using ordered pore arrangements differed significantly from those obtained with random arrangements. The randomly arranged models represent the real topologies of the experimental metallic foams more accurately; the Young's moduli estimated with these models were in excellent agreement with the experiments, whereas the ordered models showed significantly higher relative errors. This demonstrates the importance of using more realistic FEA models to improve the predictive ability of this method for studying the mechanical properties of metallic foams.
The aim of this study was to develop a biomechanically validated finite element model to predict the biomechanical behaviour of the human lumbar spine in compression.
For validation of the finite element model, an in vitro study was performed: Twelve human lumbar cadaveric spinal segments (six segments L2/3 and six segments L4/5) were loaded in axial compression using 600 N in the intact state and following surgical treatment using two different internal stabilisation devices. Range of motion was measured and used to calculate stiffness.
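The stiffness calculation described above reduces to applied load over measured displacement. A minimal sketch, in which the 600 N load is taken from the study but the displacement value is an illustrative assumption:

```python
def axial_stiffness(load_N, displacement_mm):
    """Axial stiffness as applied compressive load divided by the
    measured axial displacement (range of motion), in N/mm."""
    return load_N / displacement_mm

# Hypothetical example: 600 N compression producing 1.2 mm displacement
k_intact = axial_stiffness(600.0, 1.2)  # ~500 N/mm (illustrative)
```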
A finite element model of a human spinal segment L3/4 was loaded with the same force in intact and surgically altered state, corresponding to the situation of biomechanical in vitro study.
The results of the cadaveric biomechanical tests and the finite element analysis were compared. As they agreed closely, the finite element model was used to predict: (1) load-sharing within the human lumbar spine in compression, (2) load-sharing within the osteoporotic human lumbar spine in compression, and (3) the stabilising potential of the different spinal implants with respect to bone mineral density.
A finite element model as described here may be used to predict the biomechanical behaviour of the spine. Moreover, the influence of different spinal stabilisation systems may be predicted.
There is a strong interaction between the urban atmospheric canopy layer and the building energy balance. The urban atmospheric conditions affect the heat transfer through exterior walls, the long-wave heat transfer between building surfaces and their surroundings, the short-wave solar heat gains, and the heat transport by ventilation. Considering also the internal heat gains and the heat capacity of the building structure, the energy demand for heating and cooling and the indoor thermal environment can be calculated from the urban microclimatic conditions. Depending on the building energy concept, the energy demand results in (anthropogenic) waste heat, which is directly transferred to the urban environment. Furthermore, the indoor temperature is re-coupled to the urban environment via the building envelope and indirectly affects the urban microclimate with a temporally lagged and damped temperature fluctuation. We developed a holistic building model for the combined calculation of indoor climate and energy demand, based on an analytic solution of Fourier's equation, and implemented it in the PALM model.
A strong heat load in buildings and cities during summer is not a new phenomenon. However, prolonged heat waves and increasing urbanization are intensifying the heat island effect in our cities and, hence, the heat exposure in residential buildings. The thermophysiological load in indoor and outdoor environments can be reduced in the medium and long term through urban planning and building physics measures. In the short term, an increasingly vulnerable population must be effectively informed of an impending heat wave. Building simulation models can be used to advantage to evaluate indoor heat stress. This study presents a generic simulation model developed from monitoring data in urban multi-unit residential buildings during a summer period using statistical methods. The model determines both the average room temperature and its deviations and thus consists of three sub-models: cool, average, and warm building types. All three are based on the same mathematical algorithm, while each building type is described by a specific data set for its building physics parameters and user behavior. The generic building model may be used in urban climate analyses with many individual buildings distributed across a city, or in heat–health warning systems with different building and user types distributed across a region. An urban climate analysis (with weather data from a database) can evaluate local differences in urban and indoor climate, whereas heat–health warning systems (driven by a weather forecast) gain additional information on indoor heat stress and its expected deviations.
Thermally driven (adsorption) chillers can provide cooling with a comparatively low electrical energy input, i.e., with a high electrical coefficient of performance. If the heat required to drive them is supplied from industrial waste heat, this cooling supply is more energy-efficient than cooling via a compression chiller. If, however, the heat is supplied by combined heat and power, the primary-energy assessment depends on several partial efficiencies as well as on the primary energy factors for the fuel used and for the electrical energy generated or drawn. An extensive measurement campaign in summer 2018 provided detailed energy performance indicators for a typical daily cooling-demand profile under realistic boundary conditions in a laboratory environment. This makes it possible to derive partial energy performance indicators for planning practice and to compare the overall system energetically with a conventional compression chiller.
Under the European programme Intelligent Energy for Europe (IEE), eight European partners joined forces in the ThermCo project to evaluate ventilation and cooling concepts for low-energy non-residential buildings with respect to energy efficiency and indoor thermal comfort (see Part 1 of this publication in Bauphysik 34 (2012), issue 6). Using a simulation study of a typical office building, the potential of different ventilation and cooling strategies is assessed for various European climate zones, taking energy efficiency and indoor comfort into account. The results demonstrate the high effectiveness of night-ventilation concepts in the northern European summer climate with its comparatively low outdoor temperatures. In the central European summer climate, the ground provides a sufficiently low temperature level for the efficient use of water-based radiant surface conditioning systems. In the southern European summer climate, active air-based cooling can efficiently remove the high and rapidly fluctuating cooling loads.
By measuring heating and cooling consumption in the laboratory, both thermally inert and fast-responding radiant surface conditioning systems can be evaluated metrologically under realistic, dynamic conditions. If the useful heating and cooling demands are calculated and related to the measured consumption, the heat-emission expenditure figures e_ce can be determined for various radiant surface systems, in combination with other emission systems, under different usage conditions and for different operating strategies. This provides expenditure figures based on calorimetric measurements which, depending on the task, can be used in a product- or project-related manner in the planning of complex energy concepts, and which describe the actual expenditure figures e_h,ce for heating and e_c,ce for cooling more accurately than literature values.
In planning and operating practice, there is still considerable uncertainty regarding the operation of thermally activated building systems, in particular thermally inert concrete core activation. Despite the wide use of these systems in new non-residential buildings, no uniform operating strategy has become established to date. Instead, building owners and users regularly complain about room temperatures that are too high or too low during transitional seasons and weather changes, as well as generally poor controllability. In contrast, monitoring projects repeatedly demonstrate high thermal comfort in these buildings. Evidently, subjectively perceived and objectively measured comfort differ here. At the same time, heating and cooling concepts with radiant surface conditioning are particularly energy-efficient when the control concept is adapted to their thermal inertia. A good control strategy thus ensures high thermal comfort while keeping energy use as low as possible. The calculation method using system expenditure figures (based on DIN V 18599) offers a good way to evaluate system concepts including their operating strategies. This makes it possible to find an operating strategy for concrete core activation that is adapted to the building and to evaluate it in a uniform manner.
Raman spectra of three binary gasoline-ethanol blends (with ratios 95:5, 90:10, and 85:15) were obtained using a low-cost, frequency-precise Fourier-transform Raman (FT-Raman) spectrometer prototype. The spectral information is presented in the range of 0 to 3500 cm⁻¹ with a resolution of 1.66 cm⁻¹, which exceeds the requirements of most liquid and solid chemical samples. The set-up delivers spectral information about the sample with a reduced spectral deviation from theoretical values (less than 0.4 cm⁻¹ without compensation for instrumental response). The robust and highly flexible FT-Raman prototype, consisting mainly of a Michelson interferometer and a self-designed photon counter, delivers high-resolution, frequency-precise Raman spectra of the gasoline-ethanol blends comparable to those obtained with commercial devices. The set-up does not require additional complex hardware or software control and relies on re-sampling and interpolation algorithms. The qualitative spectral information obtained was used to calculate the proportions of gasoline and ethanol present in the chemical samples without extra calibration methods or chemical markers.
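As a minimal illustration of a calibration-free proportion estimate (the prototype's actual algorithm is not detailed in the abstract, and the band choice and linear mixing assumption here are ours), one could ratio the integrated intensities of a characteristic ethanol band against a characteristic gasoline band:

```python
def ethanol_fraction(I_ethanol_band, I_gasoline_band):
    """Simple intensity-ratio estimate of the ethanol volume fraction,
    assuming equal effective scattering cross-sections for the two
    marker bands (an illustrative simplification)."""
    return I_ethanol_band / (I_ethanol_band + I_gasoline_band)

# e.g. integrated intensities of 15 and 85 (arbitrary units)
# would correspond to an ethanol fraction of 0.15, i.e. an 85:15 blend
fraction = ethanol_fraction(15.0, 85.0)
```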