Socially assistive robots (SARs) are becoming more prevalent in everyday life, emphasizing the need to make them socially acceptable and aligned with users' expectations. Robots' appearance impacts users' behaviors and attitudes towards them. Therefore, product designers choose visual qualities to give the robot a character and to imply its functionality and personality. In this work, we sought to investigate the effect of cultural differences on Israeli and German designers' perceptions of SARs' roles and appearance in four different contexts: a service robot for an assisted living/retirement residence facility, a medical assistant robot for a hospital environment, a COVID-19 officer robot, and a personal assistant robot for domestic use. The key insight is that although Israeli and German designers share similar perceptions of visual qualities for most of the robotics roles, we found differences in the perception of the COVID-19 officer robot's role and, consequently, in its most suitable visual design. This work indicates that context and culture play a role in users' perceptions and expectations; therefore, they should be taken into account when designing new SARs for diverse contexts.
This report examines exporters’ challenges and possible solutions for public intervention to promote foreign trade. Based on fieldwork conducted in Georgia, we explore which policy approaches can help to stimulate Georgian exports further. Our outcomes show that exporters face substantial barriers such as navigating complex trade regulations, lack of knowledge about target markets, trade finance gaps, as well as new export promotion programs (EPPs) in competitor countries. Other upper-middle-income countries can learn from our results that exporters can significantly benefit from a comprehensive export promotion strategy combined with an ecosystem-based “team” approach. EPPs related to awareness and capacity building in Georgia should be part of this strategy, focusing on challenges such as a lack of knowledge about trade practices and international business skills. Other EPPs must help to mitigate related market failures, as information gathering is costly, and firms have no incentive to share this information with competitors. Furthermore, targeted marketing support and customer matchmaking can answer Georgian exporters’ challenges, such as lack of market access and low sector visibility. Our results also show that public intervention through financial support and risk mitigation is essential for firms with an international orientation. The high-quality, rich outcomes provide significant value for other upper-middle-income countries by exploring the example of Georgia’s contemporary circumstances in an in-depth manner based on extensive interviews and document analysis. Limitations include that our work primarily relies on qualitative data and further research could involve a quantitative study with a diverse range of sectors.
This paper presents a streaming-based E-Learning environment in which closer integration between learning and work is achieved by integrating multimedia services into manufacturing processes. It contains a comprehensive and detailed explanation of the proposed E-Learning streaming framework, especially the adaptation of streaming services to mobile environments. We first analyze several scenarios where E-Learning streaming services can be integrated into manufacturing processes. To allow systematic and tailor-made integration, we develop a model and a specification language for E-Learning streaming services and apply the model using practical scenarios from real manufacturing processes. The adaptation of multimedia streaming services to mobile devices is discussed based on the Synchronized Multimedia Integration Language (SMIL). Last, we comment on the benefits of using E-Learning streaming services as part of manufacturing processes and analyze the acceptance of the developed system. The key components of our E-Learning environment are 1) an XML-based streaming service specification language, 2) adaptation of multimedia E-Learning services to mobile environments, and 3) Web Services for searching, registration, and creation of E-Learning streaming services.
There is an ongoing debate about the use and scope of Clayton M. Christensen's idea of disruptive innovation, including the question of whether it is a management buzz phrase or a valuable theory. This discussion considers the general question of how innovation in the field of management theories and concepts finds its way to the different target groups. This conceptual paper combines the different concepts of the creation and dissemination of management trends in a basic framework based on a short review of models for the dissemination of management ideas. This framework allows an analysis of the character of new management ideas like disruptive innovation. By measuring the impact of the theory on the academic sphere using bibliometric statistics of the number of academic publications on Google Scholar and Scopus and a meta-analysis of research papers, we show the significant influence of disruptive innovation beyond pure management fads.
We revisit the quantitative analysis of the ultrafast magnetoacoustic experiment in a freestanding nickel thin film by Kim and Bigot [J.-W. Kim and J.-Y. Bigot, Phys. Rev. B 95, 144422 (2017)] by applying our recently proposed approach of magnetic and acoustic eigenmode decomposition. We show that the application of our modeling to the analysis of time-resolved reflectivity measurements allows for the determination of amplitudes and lifetimes of standing perpendicular acoustic phonon resonances with unprecedented accuracy. The acoustic damping is found to scale as ∝ω² for frequencies up to 80 GHz, and the peak amplitudes reach 10⁻³. The experimentally measured magnetization dynamics for different orientations of an external magnetic field agrees well with numerical solutions of magnetoelastically driven magnon harmonic oscillators. Symmetry-based selection rules for magnon-phonon interactions predicted by our modeling approach allow for the unambiguous discrimination between spatially uniform and nonuniform modes, as confirmed by comparing the resonantly enhanced magnetoelastic dynamics simultaneously measured on opposite sides of the film. Moreover, the separation of timescales for (early) rising and (late) decreasing precession amplitudes provides access to magnetic (Gilbert) and acoustic damping parameters in a single measurement.
The utilisation of artificial intelligence (AI) is progressively emerging as a significant mechanism for innovation in human resource management (HRM), with the capacity to facilitate the transformation of employee performance across numerous responsibilities. Despite rapid AI development, there remains a dearth of comprehensive exploration into the potential opportunities it presents for enhancing workplace performance among employees. To bridge this gap in knowledge, the present work carried out a survey with 300 participants and utilised a fuzzy set-theoretic method that is grounded in the conceptualisation of AI, knowledge sharing (KS), and HRM. The findings of our study indicate that the exclusive adoption of AI technologies does not adequately enhance HRM engagements. In contrast, the integration of AI and KS offers a more viable HRM approach for achieving optimal performance in a dynamic digital society. This approach has the potential to enhance employees' proficiency in executing their responsibilities and to cultivate a culture of creativity inside the firm.
Heat pumps play a central role in decarbonizing the heat supply of buildings. However, implementing heat pumps in existing buildings still presents a significant challenge due to high temperature requirements. In this article, a systematic analysis of the effects of heat source temperatures, maximum heat pump condenser temperatures, and system temperatures on the seasonal performance of heat pump (HP) systems is presented. The quantitative performance analysis encompasses over 50 heat pumps installed in residential buildings, revealing correlations between the building characteristics, observed temperatures, and heat pump type. The performance of an HP system retrofitted to a 30-dwelling multifamily building is presented in more detail. The bivalent HP system combines air and ground as heat sources and achieves a seasonal performance factor of 3.25, with the gas boiler covering a 27% share, in its first year of operation. These findings demonstrate the technical feasibility of retrofitting heat pumps in existing buildings and provide insights into overcoming the challenges associated with high temperature requirements.
In automotive manufacturing, multi-material design offers significant potential for weight reduction. At the same time, this design approach requires a large number of joining processes to connect the different materials and material classes, and a multitude of design- and material-related requirements must be taken into account. To systematically integrate the lightweight aspect of the joining process itself into this selection process, a methodology was developed that evaluates joining processes with regard to their respective lightweight potential.
Neural networks tend to overfit the training distribution and perform poorly on out-of-distribution data. A conceptually simple solution lies in adversarial training, which introduces worst-case perturbations into the training data and thus improves model generalization to some extent. However, it is only one ingredient towards generally more robust models and requires knowledge about the potential attacks or inference time data corruptions during model training. This paper focuses on the native robustness of models that can learn robust behavior directly from conventional training data without out-of-distribution examples. To this end, we study the frequencies in learned convolution filters. Clean-trained models often prioritize high-frequency information, whereas adversarial training enforces models to shift the focus to low-frequency details during training. By mimicking this behavior through frequency regularization in learned convolution weights, we achieve improved native robustness to adversarial attacks, common corruptions, and other out-of-distribution tests. Additionally, this method leads to more favorable shifts in decision-making towards low-frequency information, such as shapes, which inherently aligns more closely with human vision.
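As a rough illustration of the underlying idea, the fraction of a convolution kernel's spectral energy that lies in high frequencies can be measured with a 2D FFT. The snippet below is a minimal sketch; the paper's actual frequency regularizer may be defined differently, and the kernels here are generic examples:

```python
import numpy as np

def high_freq_energy_ratio(kernel):
    """Fraction of a square 2D conv kernel's spectral energy outside a
    central low-frequency window (a simple proxy measure)."""
    spec = np.fft.fftshift(np.abs(np.fft.fft2(kernel)) ** 2)
    k = kernel.shape[0]
    c, r = k // 2, max(k // 4, 1)
    low = spec[c - r:c + r + 1, c - r:c + r + 1].sum()
    return 1.0 - low / spec.sum()

# A smooth (low-pass-like) box filter vs. an edge-detecting Laplacian
box = np.ones((5, 5)) / 25.0
laplace = np.array([[0,  0,  0,  0, 0],
                    [0,  0, -1,  0, 0],
                    [0, -1,  4, -1, 0],
                    [0,  0, -1,  0, 0],
                    [0,  0,  0,  0, 0]], dtype=float)

print(high_freq_energy_ratio(box) < high_freq_energy_ratio(laplace))  # True
```

A regularization term built from such a ratio would penalize filters that concentrate their energy in high frequencies, nudging training toward the low-frequency focus that adversarial training induces.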
Background
Internal tibial loading is influenced by modifiable factors with implications for the risk of stress injury. Runners encounter varied surface steepness (gradients) when running outdoors and may adapt their speed according to the gradient. This study aimed to quantify tibial bending moments and stress at the anterior and posterior peripheries when running at different speeds on surfaces of different gradients.
Methods
Twenty recreational runners ran on a treadmill at 3 different speeds (2.5 m/s, 3.0 m/s, and 3.5 m/s) and gradients (level: 0%; uphill: +5%, +10%, and +15%; downhill: –5%, –10%, and –15%). Force and marker data were collected synchronously throughout. Bending moments were estimated at the distal third centroid of the tibia about the medial–lateral axis by ensuring static equilibrium at each 1% of stance. Stress was derived from bending moments at the anterior and posterior peripheries by modeling the tibia as a hollow ellipse. Two-way repeated-measures analyses of variance were conducted using both functional and discrete statistical analyses.
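The hollow-ellipse stress step described above can be sketched as follows: the second moment of area of a hollow ellipse about the medial–lateral axis is I = π/4 (a_out·b_out³ − a_in·b_in³), and the bending stress at the periphery is σ = M·b_out/I. All dimensions and the bending moment below are hypothetical, for illustration only:

```python
import math

def hollow_ellipse_stress(M, a_out, b_out, a_in, b_in):
    """Bending stress at the outer periphery (distance b_out from the
    neutral axis) of a hollow-ellipse cross-section under a bending
    moment M about the medial-lateral axis."""
    I = math.pi / 4.0 * (a_out * b_out**3 - a_in * b_in**3)
    return M * b_out / I

# Illustrative (hypothetical) tibial dimensions in metres, moment in N*m
sigma = hollow_ellipse_stress(M=180.0, a_out=0.011, b_out=0.013,
                              a_in=0.006, b_in=0.008)
print(f"peak bending stress = {sigma / 1e6:.1f} MPa")
```

A hollow section has a smaller second moment of area than a solid one of the same outer dimensions, so the estimated peripheral stress is correspondingly higher.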
Results
There were significant main effects for running speed and gradient on peak bending moments and peak anterior and posterior stress. Higher running speeds resulted in greater tibial loading. Running uphill at +10% and +15% resulted in greater tibial loading than level running. Running downhill at –10% and –15% resulted in reduced tibial loading compared to level running. There was no difference between +5% or –5% and level running.
Conclusion
Running at faster speeds and uphill on gradients ≥+10% increased internal tibial loading, whereas slower running and downhill running on gradients ≥–10% reduced internal loading. Adapting running speed according to the gradient could be a protective mechanism, providing runners with a strategy to minimize the risk of tibial stress injuries.
Injury prevention is essential in running due to the risk of overuse injury development. Tailoring running shoes to individual needs may be a promising strategy to reduce this risk. Novel manufacturing processes allow the production of individualised running shoes that incorporate features that meet individual biomechanical and experiential needs. However, specific ways to individualise footwear to reduce injury risk are poorly understood. Therefore, this scoping review provides an overview of (1) footwear design features that have the potential for individualisation; and (2) the literature on the differential responses to footwear design features between selected groups of individuals. These purposes focus exclusively on reducing the risk of overuse injuries. We included studies in the English language on adults that analysed: (1) potential interaction effects between footwear design features and subgroups of runners or covariates (e.g., age, sex) for running-related biomechanical risk factors or injury incidences; (2) footwear comfort perception for a systematically modified footwear design feature. Most of the included articles (n = 107) analysed male runners. Female runners may be more susceptible to footwear-induced changes and overuse injury development; future research should target more heterogeneous sampling. Several footwear design features (e.g., midsole characteristics, upper, outsole profile) show potential for individualisation. However, the literature addressing individualised footwear solutions and the potential to reduce biomechanical risk factors is limited. Future studies should leverage more extensive data collections considering relevant covariates and subgroups while systematically modifying isolated footwear design features to inform footwear individualisation.
Artificial intelligence (AI) is permeating our lives ever more deeply. Students are increasingly confronted with AI applications both in everyday life and at universities. Offenburg University is therefore anchoring AI-related courses in its curricula to support students in acquiring AI competence.
This contribution presents a concept for developing courses based on the idea of pedagogical making to foster AI competence in higher education. The concept is illustrated by a module on chatbots, whose teaching content is developed interdisciplinarily from various perspectives.
Public export credits and trade insurance require a global framework of institutions, rules and regulations to avoid subsidies and a race to the bottom. The extensive modernisation of the Arrangement on Officially Supported Export Credits (Arrangement) of the Organisation for Economic Co-operation and Development intends to re-level the playing field. This Practitioner Commentary describes the demand for adequate government interventions, considers the need for the reform and discusses key aspects of the new Arrangement. We argue that there is a breakthrough in several important areas such as tenors, repayment terms and green finance. However, we also find that the modernisation falls short in areas such as the interplay between different rulebooks, pre-shipment instruments' regulations and climate action.
Gamification is used in many areas, including the education sector, to increase motivation and performance. This contribution describes the design, implementation, and evaluation of a gamification concept for the lecture "Software Engineering" at Offenburg University. According to the lecturers' intention, gamification is meant to encourage continuous and deeper engagement with the topics of the lecture and to have a positive influence on students' motivation in order to support the learning process. Central to the gamification design are voluntary participation, the perceived relevance of the learning content, and a goal-oriented use of gamification elements. The developed concept was implemented in the learning platform Moodle, deployed over three semesters, and evaluated in parallel. The results of these evaluations show that students used the gamified course intensively, often throughout the entire semester, and completed a large number of exercises of their own accord.
Since the first projects of the 1990s, universities have been working to establish suitable service structures for e-learning that provide the necessary technical, didactic, and organizational support university-wide. While the initial concern was to secure such services permanently at all, today the question of "how" is in the foreground. Here, the field of e-learning reveals a much more general problem: the hitherto predominant organization of universities along functional units is reaching its limits. We propose a more process-oriented perspective, analogous to developments in the organization of companies.
The Humboldt Portal has been designed and implemented as part of an ongoing research project to develop an information system on the Internet to share the documents and rare books of Alexander von Humboldt, a 19th century German scientist and explorer, who viewed the natural world holistically and described the harmony of nature among the diversity of the physical world. Even after more than two centuries he is admired for his ability to see the natural world and human nature in the context of a complex network of relationships. The design and implementation of the Humboldt Portal are also oriented to support further research on Humboldt’s intellectual perspective.
Although all of Humboldt's works can be found on the internet as digitized documents, the complexity and internal inter-connectivity of his vision of nature cannot be adequately represented only by digitized papers or scanned documents in digital libraries.
As a consequence, a specific portal for Humboldt's documents was developed, which extends the standards of digital libraries and offers a technical approach for the adequate presentation of highly interconnected data.
Due to continuous scientific and literary research, new insights and requirements for the digital presentation of Humboldt documents are constantly emerging, so this article provides only a summary of the concepts realized so far. Consequently, the design and implementation of the Humboldt Portal are both a consequence of a continuing research project and oriented to support further research on Humboldt's intellectual holistic perspective, which anticipated the systems approach of the last century.
Automatic Identification of Travel Locations in Rare Books - Object Oriented Information Management
(2017)
The digital content of the Internet is growing exponentially and mass digitization of printed media opens access to literature, in particular the genre of travel literature from the 18th and 19th century, which consists of diaries or travel books describing routes, observations or inspirations. The identification of described locations in the digital text is a long-standing challenge which requires information technology to supply dynamic links to sources by new forms of interaction and synthesis between humanistic texts and scientific observations.
Using object-oriented information technology, a prototype of a software tool is developed which makes it possible to automatically identify geographic locations and travel routes mentioned in rare books. The information objects contain properties such as names and classification codes for populated places, streams, mountains and regions. Together with the latitudes and longitudes of every single location, it is possible to geo-reference this information so that all processed and filtered datasets can be displayed by a map application. This method has already been used in the Humboldt Digital Library to present Alexander von Humboldt's maps and was tested in a case study to prove the correctness and reliability of the automatic identification of locations based on the work of Alexander von Humboldt and Johann Wolfgang von Goethe.
The results reveal numerous errors due to misspellings, changes of location names, and common terms identical to location names. On the other hand, it becomes very clear that the results of automatic object detection and recognition can be improved by error-free and comprehensive sources. As a result, an increase in quality and usability of the service can be expected, accompanied by more options to detect unknown locations in the descriptions of rare books.
In the 19th century, Alexander von Humboldt explored nature and conceived a new vision of it that still influences the way we understand the world. Humboldt believed in the importance of accurate measurements and precise description of observations. His vision of nature included not only facts but also emotions.
Nowadays, smart solutions are being developed using computer technology, which will influence our relationship to nature, our handling of the complexity and diversity of nature itself, and the technological influences on society. Can we avoid a new form of "colonialism" when a network of supercomputers creates a smarter world?
High-performance thin-layer chromatography (HPTLC), as the modern form of TLC (thin-layer chromatography), is suitable for detecting pharmaceutically active compounds over a wide polarity range using the gradient multiple development (GMD) technique. Diode-array detection (DAD) in conjunction with HPTLC can simultaneously acquire ultraviolet‒visible (UV‒VIS) and fluorescence spectra directly from the plate. Visualization as a contour plot helps to identify separated zones. An orange peel extract is used as an example to show how GMD‒DAD‒HPTLC in seven different developments with seven different solvents can provide an overview of the entire sample. More than 50 compounds in the extract can be separated on a 6-cm HPTLC plate. Such separations take place in the biologically inert stationary phase of HPTLC, making it a suitable method for effect-directed analysis (EDA). HPTLC‒EDA can even be performed with living organisms, as confirmed by the use of Aliivibrio fischeri bacteria to detect bioluminescence as a measure of toxicity. Combining gradient multiple development planar chromatography with diode-array detection and effect-directed analysis (GMD‒DAD‒HPTLC‒EDA), in conjunction with specific staining methods and time-of-flight mass spectrometry (TOF‒MS), will be the method of choice to find new chemical structures from plant extracts that can serve as the basic structure for new pharmaceutically active compounds.
Two solvent mixtures for the high-performance thin-layer chromatographic (HPTLC) separation of some compounds showing estrogenic activity in the yeast estrogen screen (YES) assay are presented. The new method, planar yeast estrogen screen (pYES), combines a chromatographic separation on silica gel HPTLC plates with the performance of the YES assay. For separation, the analytes were applied bandwise to HPTLC plates (10 × 20 cm) with fluorescent dye (Merck, Germany). The plates were developed in a vertical developing chamber after 30 min of chamber saturation over a separation distance of 70 mm, using cyclohexane‒methyl-ethyl ketone (2:1, V/V) or cyclohexane‒CPME (3:2, V/V) as solvents. Both solvents allow separation of estriol, daidzein, genistein, 17β-estradiol, 17α-ethinyl estradiol, estrone, 4-nonylphenol and bis(2-ethylhexyl) phthalate.
Aerosol particles play an important role in the climate system by absorbing and scattering radiation and influencing cloud properties. They are also one of the biggest sources of uncertainty for climate modeling. Many climate models do not include aerosols in sufficient detail due to computational constraints. To represent key processes, aerosol microphysical properties and processes have to be accounted for. This is done in the ECHAM-HAM (European Center for Medium-Range Weather Forecast-Hamburg-Hamburg) global climate aerosol model using the M7 microphysics, but high computational costs make it very expensive to run with finer resolution or for a longer time. We aim to use machine learning to emulate the microphysics model at sufficient accuracy and reduce the computational cost by being fast at inference time. The original M7 model is used to generate data of input–output pairs to train a neural network (NN) on it. We are able to learn the variables' tendencies achieving an average R² score of 77.1%. We further explore methods to inform and constrain the NN with physical knowledge to reduce mass violation and enforce mass positivity. On a graphics processing unit (GPU), we achieve a speed-up of over 64 times compared to the original model.
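The R² (coefficient of determination) score used to evaluate the emulated tendencies can be computed as below; the data here are synthetic, purely for illustration:

```python
import numpy as np

def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Toy "tendencies": a perfect emulator scores 1, a noisy one scores less
rng = np.random.default_rng(0)
y = rng.normal(size=1000)
print(r2_score(y, y))                                       # 1.0
print(r2_score(y, y + 0.1 * rng.normal(size=1000)) > 0.9)   # True
```

In the paper's setting, one such score is computed per output variable and then averaged, which is how a summary figure like 77.1% arises.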
Many commonly well-performing convolutional neural network models have shown to be susceptible to input data perturbations, indicating a low model robustness. To reveal model weaknesses, adversarial attacks are specifically optimized to generate small, barely perceivable image perturbations that flip the model prediction. Robustness against attacks can be gained by using adversarial examples during training, which in most cases reduces the measurable model attackability. Unfortunately, this technique can lead to robust overfitting, which results in non-robust models. In this paper, we analyze adversarially trained, robust models in the context of a specific network operation, the downsampling layer, and provide evidence that robust models have learned to downsample more accurately and suffer significantly less from downsampling artifacts, known as aliasing, than baseline models. In the case of robust overfitting, we observe a strong increase in aliasing and propose a novel early stopping approach based on the measurement of aliasing.
Virtual reality (VR) offers the opportunity to create virtual worlds that could replace real experiences. This research investigates the influence of user motivation, temporal distance and experience type on the satisfaction with the VR experience, and the degree of acceptance of a VR experience as a substitute for a real experience. The results suggest that the degree of acceptance of a VR experience as a substitute for a real experience is higher for passive VR experiences compared to active VR experiences. Furthermore, the results support the assumption that users are more satisfied with passive VR experiences.
The Corona semesters required transferring the mathematics bridge courses into a digital teaching format. Personal support and a sense of social belonging play a particularly important role for students, especially at the start of their studies. The particular challenge in the transfer to a digital format therefore lay in compensating for the loss of the usual opportunities for getting to know each other and communicating that arise in face-to-face formats, for example during breaks or in conversations with seat neighbors. This contribution presents the extent to which the transfer to a digital format was successful. The digital bridge-course concept was transferred into a didactic design pattern in order to facilitate transfer and comparability of the results through a structured and comprehensible presentation.
The accurate diagnosis of state of charge (SOC) and state of health (SOH) is of utmost importance for battery users and for battery manufacturers. State diagnosis is commonly based on measuring battery current and using it in Coulomb counters or as input for a current-controlled model. Here we introduce a new algorithm based on measuring battery voltage and using it as input for a voltage-controlled model. We demonstrate the algorithm using fresh and pre-aged lithium-ion battery single cells operated under well-defined laboratory conditions on full cycles, shallow cycles, and a dynamic battery electric vehicle load profile. We show that both SOC and SOH are accurately estimated using a simple equivalent circuit model. The new algorithm is self-calibrating, is robust with respect to cell aging, allows SOH to be estimated from arbitrary load profiles, and is numerically simpler than state-of-the-art model-based methods.
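A minimal sketch of the voltage-controlled idea, with purely hypothetical cell parameters and a crude linear open-circuit-voltage (OCV) curve: given the measured terminal voltage, a simple equivalent circuit yields the implied current, which is then integrated to track SOC. This is not the paper's actual model or parameterization, only an illustration of the control direction:

```python
# Hypothetical cell parameters (assumed for illustration)
Q_nom = 3600.0 * 2.5              # nominal capacity in As (2.5 Ah)
R0 = 0.05                         # ohmic resistance in ohms
ocv = lambda soc: 3.0 + 1.2 * soc # crude linear OCV curve

def step_soc(soc, v_meas, dt):
    """One voltage-controlled update: infer the discharge current from
    the measured terminal voltage, then integrate it into the SOC."""
    i = (ocv(soc) - v_meas) / R0
    return soc - i * dt / Q_nom

# Holding the terminal voltage slightly below OCV discharges the cell
soc = 0.8
for _ in range(600):              # 600 one-second steps
    soc = step_soc(soc, v_meas=ocv(soc) - 0.1, dt=1.0)
print(round(soc, 3))              # 0.667
```

Because the input is voltage rather than current, drift in a current sensor never enters the integration, which is one intuition behind the self-calibrating property claimed above.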
Lithium-ion batteries exhibit a well-known trade-off between energy and power, which is problematic for electric vehicles which require both high energy during discharge (high driving range) and high power during charge (fast-charge capability). We use two commercial lithium-ion cells (high-energy [HE] and high-power) to parameterize and validate physicochemical pseudo-two-dimensional models. In a systematic virtual design study, we vary electrode thicknesses, cell temperature, and the type of charging protocol. We are able to show that low anode potentials during charge, inducing lithium plating and cell aging, can be effectively avoided either by using high temperatures or by using a constant-current/constant-potential/constant-voltage charge protocol which includes a constant anode potential phase. We introduce and quantify a specific charging power as the ratio of discharged energy (at slow discharge) and required charging time (at a fast charge). This value is shown to exhibit a distinct optimum with respect to electrode thickness. At 35°C, the optimum was achieved using an HE electrode design, yielding 23.8 Wh/(min L) volumetric charging power at 15.2 min charging time (10% to 80% state of charge) and 517 Wh/L discharge energy density. By analyzing the various overpotential contributions, we were able to show that electrolyte transport losses are dominantly responsible for the insufficient charge and discharge performance of cells with very thick electrodes.
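The reported figures can be cross-checked, assuming the volumetric charging power relates the energy of the recharged 10%-80% SOC window to the charging time:

```python
# Consistency check of the reported optimum (assumption: the charging
# power refers to the energy in the 10%-80% SOC window)
discharge_energy = 517.0      # Wh/L, at slow discharge
charge_window = 0.80 - 0.10   # fraction of SOC recharged
t_charge = 15.2               # min

specific_charging_power = discharge_energy * charge_window / t_charge
print(round(specific_charging_power, 1))  # 23.8 Wh/(min L)
```

Under this reading, the three reported numbers (517 Wh/L, 15.2 min, 23.8 Wh/(min L)) are mutually consistent.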
Jürgen Zierep passed away on July 29, 2021, at the age of 92. To him, science and education was not only a profession, but an affair of the heart. His impressive contributions in fluid mechanics comprise about 200 scientific publications in the fields of gas dynamics, similarity laws, flow instabilities, flows with energy transfer, and non-Newtonian fluids. In addition, he wrote eleven textbooks with great dedication. Those books by the “scientist who loves to teach” are nowadays available in different languages and regularly appear in new editions.
Significant progress in the development and commercialization of electrically conductive adhesives has been made. This makes shingling a very attractive approach for solar cell interconnection. In this study, we investigate the shading tolerance of two types of solar modules based on shingle interconnection: first, the already commercialized string approach, and second, the matrix technology where solar cells are intrinsically interconnected in parallel and in series. An experimentally validated LTspice model predicts major advantages for the power output of the matrix layout under partial shading. Diagonal as well as random shading of a 1.6-m2 solar module is examined. Power gains of up to 73.8 % for diagonal shading and up to 96.5 % for random shading are found for the matrix technology compared to the standard string approach. The key factor is an increased current extraction due to lateral current flows. Especially under minor shading, the matrix technology benefits from an increased fill factor as well. Under diagonal shading, we find the probability of parts of the matrix module being bypassed to be reduced by 40 % in comparison to the string module. In consequence, the overall risk of hotspot occurrence in matrix modules is decreased significantly.
A versatile liquid metal (LM) printing process is presented that enables the fabrication of various fully printed devices (intra- and interconnect wires, resistors, diodes, transistors, and basic circuit elements such as inverters) and is process-compatible with other digital printing and thin-film structuring methods for integration. To this end, a glass-capillary-based direct-write method for printing LMs such as eutectic gallium alloys is demonstrated, exploring the potential of fully printed LM-enabled devices. Examples of successful device fabrication include resistors, p–n diodes, and field-effect transistors. The device functionality and the ease of one integrated fabrication flow show that the potential of LM printing far exceeds the mere interconnection of conventional electronic devices in printed electronics.
Objective: To quantify the effect of inhaled 5% carbon dioxide/95% oxygen on EEG recordings from patients in non-convulsive status epilepticus (NCSE).
Methods: Five children of mixed aetiology in NCSE were given a high flow of inhaled carbogen (5% carbon dioxide/95% oxygen) through a face mask for a maximum of 120 s. EEG was recorded concurrently in all patients. The effects of inhaled carbogen on the EEG recordings were investigated using band-power, functional connectivity, and graph-theory measures. The carbogen effect was quantified by measuring the effect size (Cohen's d) between the "before", "during" and "after" carbogen delivery states.
Results: The apparent effect of carbogen on EEG band-power and network metrics for the "before-during" and "before-after" inhalation comparisons was inconsistent across the five patients.
Conclusion: The changes in different measures suggest a potentially non-homogeneous effect of carbogen on the patients' EEG. Different aetiology and duration of the inhalation may underlie these non-homogeneous effects. Tuning the carbogen parameters (such as ratio between CO2 and O2, duration of inhalation) on a personalised basis may improve seizure suppression in future.
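The effect-size measure used above, Cohen's d with a pooled standard deviation, can be sketched as follows (the sample values in the usage line are illustrative, not patient data):

```python
import numpy as np

def cohens_d(x, y):
    """Cohen's d: standardized mean difference using the pooled standard deviation."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    nx, ny = len(x), len(y)
    pooled_sd = np.sqrt(((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1))
                        / (nx + ny - 2))
    return float((np.mean(x) - np.mean(y)) / pooled_sd)

# e.g. band-power samples "before" vs "during" inhalation (illustrative numbers)
d = cohens_d([1.0, 2.0, 3.0, 4.0], [3.0, 4.0, 5.0, 6.0])
```

A |d| around 0.8 or larger is conventionally read as a large effect, which is how per-state EEG comparisons of this kind are typically interpreted.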
The compliant nature of distal limb muscle-tendon units is traditionally considered suboptimal in explosive movements when positive joint work is required. However, during accelerative running, ankle joint net mechanical work is positive. Therefore, this study aims to investigate how plantar flexor muscle-tendon behavior is modulated during fast accelerations. Eleven female sprinters performed maximum sprint accelerations from starting blocks, while gastrocnemius muscle fascicle lengths were estimated using ultrasonography. We combined motion analysis and ground reaction force measurements to assess lower limb joint kinematics and kinetics, and to estimate gastrocnemius muscle-tendon unit length during the first two acceleration steps. Outcome variables were resampled to the stance phase and averaged across three to five trials. Relevant scalars were extracted and analyzed using one-sample and two-sample t-tests, and vector trajectories were compared using statistical parametric mapping. We found that an uncoupling of muscle fascicle behavior from muscle-tendon unit behavior is effectively used to produce net positive mechanical work at the joint during maximum sprint acceleration. Muscle fascicles shortened throughout the first and second steps; shortening occurred earlier during the first step, in which negative joint work was lower than in the second step. Elastic strain energy may be stored during dorsiflexion after touchdown, since the fascicles did not lengthen at the same time to dissipate energy. Thus, net positive work generation is accommodated by the reuse of elastic strain energy along with positive gastrocnemius fascicle work. Our results show a mechanism by which muscles with high in-series compliance can contribute to net positive joint work.
Despite increasing budgets for social media activities and a wide variety of performance measurement possibilities, many companies do not measure the performance of their social media activities. Research shows that those companies that do measure it often use incorrect, too few, or inappropriate metrics. A central problem is that an adequate performance measurement process is often lacking. This article presents a process that focuses on the objectives of social media activities. In phase one of this process, suitable metrics are selected and target values are defined based on these objectives. In phase two, data are collected and analysed. Finally, actions are defined. The developed process helps companies to measure the performance of their social media activities.
Digitale Lernszenarien in der Hochschullehre. Bedeutung und Funktion aus Sicht von Studierenden
(2021)
Prompted by the coronavirus pandemic, a learning setting integrating several digital learning scenarios (online sessions, learning videos, wikis, quizzes, forums, and the in-house learning platform MILearning) was developed for the computer science courses Software Engineering and Computer Networks at Offenburg University. In the winter semester 2020/2021, an evaluation was conducted to assess the use of the different digital learning scenarios in the current situation and to decide which of them would be worthwhile to retain after the pandemic. From the perspective of didactic design, the suitability of the scenarios for knowledge transfer, the activation of students, and the support provided for questions and problems play an important role. The results show that students use the learning setting intensively and combine the offered digital learning scenarios in ways conducive to learning.
Modern Franciscan Leadership
(2020)
This article combines two important areas of practical theology: monastic rules and leadership in a cloistral organisation, using the Rule of Saint Francis as a prominent example. The aim of this research is to examine how living Christian tradition in a monastic order affects leadership today, discovering how the Rule and Franciscan spirituality impact the management of a convent. The research question is answered through this inductive study, applying the methodology of the ‘theology in four voices.’ Based on the results, it is possible to build a coherent leadership system grounded in Biblical and Franciscan sources.
Experimental Investigation of the Air Exchange Effectiveness of Push-Pull Ventilation Devices
(2020)
The increasing installation numbers of ventilation units in residential buildings are driven by legal objectives to improve the buildings' energy efficiency. The dimensioning of a ventilation system for nearly zero-energy buildings is usually based on the air flow rate desired by the clients or requested by technical regulations. However, this does not necessarily lead to a system actually able to renew the air volume of the living space effectively. In recent years, decentralised systems with an alternating operation mode and fairly good energy efficiency have entered the market, raising the following question: “Does this operation mode allow an efficient air renewal?” This question can be answered experimentally by performing a tracer gas analysis. In the presented study, a total of 15 preliminary tests are carried out in a climatic chamber representing a single room equipped with two push-pull devices. The tests include summer, winter, and isothermal supply air conditions, since this parameter variation has so far been missing for push-pull devices. Further investigations are dedicated to the effect of thermal convection due to human heat dissipation on the room air flow. Depending on these boundary conditions, the determined air exchange efficiency varies, lagging behind the expected range 0.5 < εa < 1 in almost all cases and indicating insufficient air exchange, including short-circuiting. Local air exchange values suggest inhomogeneous air renewal depending on the distance to the indoor apertures as well as on the temperature gradients between in- and outdoors. The tested measurement set-up is applicable to field measurements.
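The air exchange efficiency εa referenced above is conventionally derived from tracer-gas decay curves as the nominal time constant divided by twice the room-mean age of air. A sketch under standard tracer-gas definitions (the study's exact evaluation procedure is not given in the abstract):

```python
import numpy as np

def _trapz(y, x):
    # trapezoidal integration (avoids NumPy version differences around np.trapz)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def air_exchange_efficiency(t, c_exhaust, c_room):
    """Air exchange efficiency eps_a from tracer-gas step-down (decay) curves.

    t         : sample times after tracer dosing stops
    c_exhaust : tracer concentration at the exhaust opening
    c_room    : room-averaged tracer concentration

    tau_n    = integral(c_exhaust) / c_exhaust(0)   (nominal time constant)
    mean_age = first moment of c_room / area of c_room (room-mean age of air)
    eps_a    = tau_n / (2 * mean_age)
    """
    t = np.asarray(t, float)
    c_e = np.asarray(c_exhaust, float)
    c_r = np.asarray(c_room, float)
    tau_n = _trapz(c_e, t) / c_e[0]
    mean_age = _trapz(t * c_r, t) / _trapz(c_r, t)
    return tau_n / (2.0 * mean_age)

# Perfect mixing: both curves decay exponentially with the same time constant
t = np.linspace(0.0, 60.0, 6001)
c = np.exp(-t / 5.0)
eps_a = air_exchange_efficiency(t, c, c)
```

For perfect mixing εa = 0.5, piston-like displacement flow approaches 1.0, and short-circuiting pushes εa below 0.5, which is the regime the study reports for most test cases.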
In this paper, we describe the PALM model system 6.0. PALM (formerly an abbreviation for Parallelized Large-eddy Simulation Model and now an independent name) is a Fortran-based code and has been applied for studying a variety of atmospheric and oceanic boundary layers for about 20 years. The model is optimized for use on massively parallel computer architectures. This is a follow-up paper to the PALM 4.0 model description in Maronga et al. (2015). In recent years, PALM has been significantly improved and now offers a variety of new components. In particular, much effort was made to enhance the model with components needed for applications in urban environments, such as fully interactive land surface and radiation schemes, chemistry, and an indoor model. This paper provides an overview of the PALM 6.0 model system, and we describe its current model core. The individual components for urban applications, case studies, validation runs, and issues with suitable input data are presented and discussed in a series of companion papers in this special issue.
Diffracted waves carry high-resolution information that can help interpret fine structural details at a scale smaller than the seismic wavelength. However, the diffraction energy tends to be weak compared to the reflected energy and is also sensitive to inaccuracies in the migration velocity, making the identification of its signal challenging. In this work, we present an innovative workflow to automatically detect scattering points in the migration dip-angle domain using deep learning. By taking advantage of the different kinematic properties of reflected and diffracted waves, we separate the two types of signals by migrating the seismic amplitudes to dip-angle gathers using prestack depth imaging in the local angle domain. Convolutional neural networks are a class of deep learning algorithms able to learn to extract spatial information from the data in order to identify its characteristics; they have become the method of choice for supervised pattern recognition problems. In this work, we use wave-equation modelling to create a large and diversified dataset of synthetic examples to train a network to identify the probable position of scattering objects in the subsurface. After giving an intuitive introduction to diffraction imaging and deep learning and discussing some pitfalls of the methods, we evaluate the trained network on field data and demonstrate the validity and good generalization performance of our algorithm. We successfully identify diffraction points with high accuracy and high resolution, including those with a low signal-to-noise-and-reflection ratio. We also show how our method allows us to quickly scan through high-dimensional data consisting of several versions of a dataset migrated with a range of velocities, to overcome the strong effect of incorrect migration velocity on the diffraction signal.
Extracting horizon surfaces from key reflections in a seismic image is an important step of the interpretation process. Interpreting a reflection surface in a geologically complex area is a difficult and time-consuming task, and it requires an understanding of the 3D subsurface geometry. Common methods to help automate the process are based on tracking waveforms in a local window around manual picks. Those approaches often fail when the wavelet character lacks lateral continuity or when reflections are truncated by faults. We formulate horizon picking as a multiclass segmentation problem and solve it by supervised training of a 3D convolutional neural network. We design an efficient architecture to analyze the data over multiple scales while keeping memory and computational needs at a practical level. To allow for uncertainties in the exact location of the reflections, we use a probabilistic formulation to express the horizons' positions. By using a masked loss function, we give interpreters flexibility when picking the training data. Our method allows experts to interactively improve the picking results by fine-training the network in the more complex areas. We also show how our algorithm can be used to extend horizons to the prestack domain by following reflections across offset planes, even in the presence of residual moveout. We validate our approach on two field datasets and show that it yields accurate results on nontrivial reflectivity while being trained on a workable amount of manually picked data. Initial training of the network takes approximately 1 h; fine-training and prediction on a large seismic volume take a minute at most.
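The masked loss mentioned above can be illustrated with a small sketch: pixels without an interpreter pick are excluded from the loss, so sparse manual picks suffice as training data. This is a generic masked cross-entropy, not the authors' exact implementation:

```python
import numpy as np

def masked_cross_entropy(probs, labels, mask, eps=1e-9):
    """Mean cross-entropy over labelled pixels only.

    probs  : (N, C) predicted class probabilities (horizon classes + background)
    labels : (N,)   integer class index per pixel
    mask   : (N,)   1 where the interpreter provided a pick, 0 elsewhere
    """
    picked = np.asarray(mask).astype(bool)
    labels = np.asarray(labels, int)
    # probability assigned to the true class, at picked pixels only
    p_true = np.asarray(probs, float)[picked, labels[picked]]
    return float(-np.mean(np.log(p_true + eps)))

# Three pixels, two classes; the third pixel is unlabelled and contributes nothing
probs = np.array([[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]])
loss = masked_cross_entropy(probs, labels=[0, 1, 0], mask=[1, 1, 0])
```

Gradients of such a loss are zero at unmasked pixels, which is what lets the network train on partially picked sections.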
A Hybrid Optoelectronic Sensor Platform with an Integrated Solution‐Processed Organic Photodiode
(2021)
Hybrid systems, unifying printed electronics with silicon-based technology, can be seen as a driving force for future sensor development. Especially interesting are sensing elements based on printed devices in combination with silicon-based high-performance electronics for data acquisition and communication. In this work, a hybrid system is presented that integrates a solution-processed organic photodiode in a silicon-based system environment, enabling flexible device measurement and application-driven development. For performance evaluation, the measurements of the integrated organic photodiode are compared to those of a silicon-based counterpart; to this end, the steady-state response of the hybrid system is presented. Promising application scenarios are described in which a solution-processed organic photodiode is fully integrated in a silicon system.
Fully Printed Inverters using Metal‐Oxide Semiconductor and Graphene Passives on Flexible Substrates
(2020)
Printed and flexible metal-oxide transistor technology has recently demonstrated great promise due to its high performance and robust mechanical stability. Herein, fully printed inverter structures using electrolyte-gated oxide transistors on a flexible polyimide (PI) substrate are discussed in detail. Conductive graphene ink is printed to form the passive structures and interconnects. The additively printed transistors on PI substrates show an on/off ratio of 10⁶ and mobilities similar to those of state-of-the-art printed transistors on rigid substrates. Printed meander structures of graphene are used as pull-up resistors in a transistor-resistor logic to create fully printed inverters. The printed and flexible inverters show a signal gain of 3.5 and a propagation delay of 30 ms. These printed inverters are able to withstand a tensile strain of 1.5% over more than 200 cycles of mechanical bending. The stability of the electrical direct current (DC) properties has been observed over a period of 5 weeks. These oxide transistor-based fully printed inverters are relevant for digital printing methods that could be implemented in roll-to-roll processes.
In this study, a facile method to fabricate a cohesive ion-gel based gate insulator for electrolyte-gated transistors is introduced. The adhesive and flexible ion-gel can easily be laminated by hand onto the semiconducting channel and electrode. The ion-gel is synthesized by a straightforward technique without complex procedures and shows a remarkable ionic conductivity of 4.8 mS cm⁻¹ at room temperature. When used as a gate insulator in electrolyte-gated transistors (EGTs), an on/off current ratio of 2.24×10⁴ and a subthreshold swing of 117 mV dec⁻¹ can be achieved. This performance is roughly equivalent to that of ink drop-casted ion-gels in electrolyte-gated transistors, indicating that the film-attachment method might represent a valuable alternative to ink drop-casting for the fabrication of gate insulators.
A disturbed synchronization of the ventricular contraction can cause advanced systolic heart failure with a reduced left ventricular ejection fraction, which can often be attributed to a left bundle branch block (LBBB). If patients do not respond to medication, they are treated with a cardiac resynchronization therapy (CRT) system. The aim of this study was to integrate His-bundle pacing into the Offenburg heart rhythm model in order to visualize the electrical pacing field it generates. Modelling and electrical field simulation were performed with the software CST (Computer Simulation Technology) from Dassault Systèmes. CRT with biventricular pacing is achieved by an apical right ventricular electrode and an additional left ventricular electrode floated into the coronary sinus. About one third of CRT patients are non-responders. His-bundle pacing represents a physiological alternative to conventional cardiac pacing and cardiac resynchronization. An electrode implanted in the His bundle emits a stronger electrical pacing field than that of conventional cardiac pacemakers. His-bundle pacing was performed with the Medtronic SelectSecure 3830 electrode at pacing voltage amplitudes of 3 V, 2 V, and 1.5 V in combination with a pacing pulse duration of 1 ms. Compared to conventional pacemaker pacing, His-bundle pacing is capable of bridging LBBB conduction disorders in the left ventricle. The His-bundle pacing field is able to spread via the physiological pathway into the right and left ventricles, enabling CRT with a narrow QRS complex in the surface ECG.
In bimodal cochlear-implant/hearing-aid fittings, the different signal processing on each side can lead to temporally offset stimulation of the two modalities. Recent studies have shown that temporally aligning the modalities can improve sound localization in bimodal users. To perform such an alignment, the throughput delay of hearing aids must be measured. Commercially available hearing-aid test boxes can often provide these values, but the signal processing they use is frequently not fully disclosed. In this work, an alternative and transparent approach to the design of a simple measurement setup based on an Arduino DUE microcontroller board is presented. For this purpose, a 3D-printed measurement table was fabricated, on which hearing aids can be connected to a measurement microphone via a 2-cm³ coupler. The throughput delay of a hearing aid is determined by comparing its latency against the simultaneously recorded signal of a reference microphone. Frequency-specific throughput delays are computed via cross-correlation between the target and reference signals. Recording, playback, and storage of the signals are handled by an ATMEL SAM3X8E microcontroller mounted on the Arduino DUE board. Custom-designed electronic circuits drive the microphones and the loudspeaker used. After a measurement is completed (duration approx. 5 s), the data are transferred serially to a PC, where they are evaluated in MATLAB. Initial validations showed high stability of the measurement results, with very small standard deviations in the range of a few microseconds for levels between 50 and 75 dB(A). The measurement setup is being used in ongoing studies to quantify the throughput delay of hearing aids.
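The cross-correlation step described above can be sketched as follows: the lag at the correlation peak between the coupler (target) and reference microphone signals gives the hearing-aid throughput delay (function and parameter names are illustrative):

```python
import numpy as np

def delay_via_xcorr(reference, target, fs):
    """Estimate the delay (in seconds) of `target` relative to `reference`
    from the peak of their full cross-correlation."""
    corr = np.correlate(target, reference, mode="full")
    # lag 0 sits at index len(reference) - 1 of the full correlation
    lag = int(np.argmax(corr)) - (len(reference) - 1)
    return lag / fs

# Synthetic check: a 96-sample (2 ms at 48 kHz) delayed copy of a noise signal
fs = 48000
rng = np.random.default_rng(0)
ref = rng.standard_normal(4800)
tgt = np.concatenate([np.zeros(96), ref])
delay = delay_via_xcorr(ref, tgt, fs)
```

Frequency-specific delays, as computed in the study, would be obtained by band-pass filtering both signals before the correlation.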
Cooling towers and recoolers are among the major consumers of electricity in an HVAC plant. Implementing and analysing advanced control methods in a practical application, and comparing them with conventional controllers, is necessary to establish a framework for their feasibility, especially in the field of decentralised energy systems. A standard industrial controller, a PID controller, and a model-based controller were developed and tested in an experimental set-up using market-ready components. The characteristics of these controllers, such as settling time, control difference, and frequency of control actions, are compared based on the monitoring data. The modern controllers demonstrated clear advantages in terms of energy savings and higher accuracy, and the model-based controller was easier to set up than the PID.
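The PID controller compared in the study is the standard textbook loop; a minimal discrete sketch with an illustrative first-order plant (gains and plant model are assumptions for demonstration, not the study's recooler model):

```python
def pid_step(error, state, kp, ki, kd, dt):
    """One update of a discrete PID controller.
    `state` holds (integral, last_error) between calls."""
    integral, last_error = state
    integral += error * dt
    derivative = (error - last_error) / dt
    u = kp * error + ki * integral + kd * derivative
    return u, (integral, error)

# Illustrative closed loop: first-order plant y' = (u - y)/tau tracking setpoint 1.0
y, tau, dt = 0.0, 1.0, 0.05
state = (0.0, 0.0)
for _ in range(800):
    u, state = pid_step(1.0 - y, state, kp=2.0, ki=0.5, kd=0.0, dt=dt)
    y += dt * (u - y) / tau   # explicit Euler step of the plant
```

The integral term drives the steady-state control difference to zero, one of the characteristics (settling time, control difference) the study compares across controllers.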
Passive hybridization refers to a parallel connection of photovoltaic and battery cells on the direct current level without any active controllers or inverters. We present the first study of a lithium-ion battery cell connected in parallel to a string of four or five serially-connected photovoltaic cells. Experimental investigations were performed using a modified commercial photovoltaic module and a lithium titanate battery pouch cell, representing an overall 41.7 W-peak (photovoltaic)/36.8 W-hour (battery) passive hybrid system. Systematic and detailed monitoring of this system over periods of several days with different load scenarios was carried out. A scaled dynamic synthetic load representing a typical profile of a single-family house was successfully supplied with 100 % self-sufficiency over a period of two days. The system shows dynamic, fully passive self-regulation without maximum power point tracking and without battery management system. The feasibility of a photovoltaic/lithium-ion battery passive hybrid system could therefore be demonstrated.
Silicon (Si) has turned out to be a promising active material for next-generation lithium-ion battery anodes. Nevertheless, the issues known from Si as an electrode material (pulverization effects, volume change, etc.) are impeding the development of Si anodes toward market maturity. In this study, we investigate a possible application of Si anodes in low-power printed electronics. Tailored Si inks are produced, and the impact of carbon coating on their printability and on the electrochemical behavior of the printed Si anodes is investigated. The printed Si anodes contain active-material loadings that are practical for powering printed electronic devices, such as electrolyte-gated transistors, and show high capacity retention. A capacity of 1754 mAh/g(Si) is achieved for a printed Si anode after 100 cycles. Additionally, the direct applicability of the printed Si anodes is demonstrated by successfully powering an ink-jet-printed transistor.
The Future of FDI: Achieving the Sustainable Development Goals 2030 through Impact Investment
(2019)
In 2015, the United Nations General Assembly passed a resolution on the Sustainable Development Goals 2030 (SDGs), publicized as a global call for action. Even before issuing the SDGs, the United Nations Conference on Trade and Development (UNCTAD) had already identified in 2014, as part of its World Investment Report, that developing countries in particular face an estimated USD 2.5 trillion annual funding gap in their efforts to achieve the SDGs. Yet the investment opportunities and challenges for investors in contributing to closing this funding gap, while benefiting from its economic potential, have not been widely discussed. Although foreign direct investment (FDI) is a key driver of sustainable economic growth and national prosperity, policies and a holistic framework linking the 2030 Agenda to actionable investment opportunities for private investors are missing. Furthermore, no global platform has been established for capturing, channeling, and promoting investment projects that aim to achieve the SDGs through impact investment. Utilizing global financial resources more effectively, while developing new approaches and tools to promote impact investments that demonstrate the benefits for investors of tapping into the 2030 Agenda's funding gap, has the potential to significantly shape and influence the future of FDI.
A new yield function for lamellar gray cast iron materials is proposed. The new model is able to describe the results of recently performed microstructure-based finite-element computations that resolve the three-dimensional yield surfaces of three different gray cast irons. The yield function requires only the yield stresses in tension and compression of the respective material as model parameters. Furthermore, the algorithmic formulation of the new model is assessed for numerical robustness and efficiency.
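The abstract does not state the new yield function itself. As a hedged illustration of a criterion that, like the proposed model, is calibrated from only the tensile and compressive yield stresses, here is a Drucker-Prager-type surface fitted to both uniaxial states (a stand-in, not the yield function of the paper):

```python
import numpy as np

def dp_params(sig_t, sig_c):
    """Drucker-Prager parameters alpha, k fitted to uniaxial tension/compression.
    Yield criterion: f(sigma) = alpha*I1 + sqrt(J2) - k = 0."""
    alpha = (sig_c - sig_t) / (np.sqrt(3.0) * (sig_c + sig_t))
    k = alpha * sig_t + sig_t / np.sqrt(3.0)
    return alpha, k

def dp_yield(sigma, alpha, k):
    """Evaluate f for a 3x3 stress tensor (negative: elastic, zero: yielding)."""
    i1 = np.trace(sigma)
    s = sigma - i1 / 3.0 * np.eye(3)       # deviatoric stress
    j2 = 0.5 * np.sum(s * s)               # second deviatoric invariant
    return alpha * i1 + np.sqrt(j2) - k

# Gray-iron-like yield stresses (illustrative values, in MPa)
alpha, k = dp_params(150.0, 600.0)
f_tension = dp_yield(np.diag([150.0, 0.0, 0.0]), alpha, k)
f_compression = dp_yield(np.diag([-600.0, 0.0, 0.0]), alpha, k)
```

By construction f vanishes for uniaxial tension at σt and uniaxial compression at σc; the paper's actual model additionally matches the computed three-dimensional yield surfaces.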
Finding clusters in high-dimensional data is a challenging research problem. Subspace clustering algorithms aim to find clusters in all possible subspaces of the dataset, where a subspace is a subset of the data's dimensions. However, the exponential increase in the number of subspaces with the dimensionality of the data renders most algorithms inefficient as well as ineffective. Moreover, these algorithms have ingrained data dependencies in their clustering process, which makes parallelization difficult and inefficient. SUBSCALE is a recent subspace clustering algorithm that scales with the number of dimensions and contains independent processing steps that can be exploited through parallelism. In this paper, we leverage the computational power of widely available multi-core processors to improve the runtime performance of the SUBSCALE algorithm. The experimental evaluation shows linear speedup. Moreover, we develop an approach using graphics processing units (GPUs) for fine-grained data parallelism to accelerate the computation further. First tests of the GPU implementation show very promising results.
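The independence that SUBSCALE exploits comes from processing each dimension separately before combining the resulting 1-D dense units. A schematic sketch (the density criterion is simplified, names are illustrative, and threads stand in for the multi-core and GPU backends used in the paper):

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def dense_units_1d(values, eps, min_pts):
    """Points belonging to 1-D dense regions along one dimension:
    indices with at least min_pts points (self included) within eps."""
    v = np.asarray(values, float)
    counts = np.sum(np.abs(v[:, None] - v[None, :]) <= eps, axis=1)
    return set(int(i) for i in np.nonzero(counts >= min_pts)[0])

def dense_units_all_dims(data, eps, min_pts, workers=4):
    """Each dimension is processed independently, so the loop parallelizes
    trivially; no synchronization is needed until the combination step."""
    data = np.asarray(data, float)
    with ThreadPoolExecutor(max_workers=workers) as ex:
        return list(ex.map(lambda d: dense_units_1d(data[:, d], eps, min_pts),
                           range(data.shape[1])))

# Four points in two dimensions: a dense group in each dimension separately
data = [[0.0, 10.0], [0.1, 20.0], [0.2, 30.0], [5.0, 30.1]]
units = dense_units_all_dims(data, eps=0.5, min_pts=2, workers=2)
```

Because each per-dimension task touches only its own column, the same structure maps directly onto one GPU thread block per dimension for the fine-grained variant.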