Drawing on the technical flexibility of building polygeneration systems to support a rapidly expanding renewable electricity grid requires the application of advanced controllers like model predictive control (MPC) that can handle multiple inputs and outputs, uncertainties in forecast data, and plant constraints, amongst other features. In this original work, economic-MPC-based optimal scheduling of a real-world building energy system is demonstrated and its performance is evaluated against a conventional controller. The demonstration includes the steps to integrate an optimisation-based supervisory controller into a standard building automation and control system with off-the-shelf HVAC components, and the use of state-of-the-art algorithms for solving complex nonlinear mixed-integer optimal control problems. With the MPC, quantitative benefits of 6–12% demand-cost savings and qualitative benefits in terms of better controller adaptability and hardware-friendly operation are identified. Further research potential for improving the MPC framework in terms of field-level stability, minimising constraint violations, and inter-system communication for deployment in a prosumer network is also identified.
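The receding-horizon logic behind such an economic MPC can be illustrated with a deliberately tiny dispatch example. Everything below (prices, load, battery limits, the brute-force search over a short horizon) is an illustrative assumption, not the controller or plant from the paper:

```python
# Minimal receding-horizon (MPC-style) dispatch sketch -- illustrative only.
# A battery (capacity and power limit) helps meet a known load against a price
# forecast; at each step the cheapest feasible plan over a short lookahead
# horizon is found by exhaustive search, and only its first action is applied.
from itertools import product

PRICES = [0.30, 0.28, 0.12, 0.10, 0.25, 0.35]   # EUR/kWh forecast (assumed)
LOAD   = [2.0, 2.0, 1.5, 1.5, 3.0, 3.0]         # kWh per step (assumed)
CAP, P_MAX = 5.0, 2.0                            # battery limits (assumed)
ACTIONS = [-2.0, 0.0, 2.0]                       # discharge / idle / charge (kWh)

def plan_step(soc, t, horizon=3):
    """Return the first action of the cheapest feasible plan over the horizon."""
    best_cost, best_first = float("inf"), 0.0
    for plan in product(ACTIONS, repeat=min(horizon, len(PRICES) - t)):
        s, cost, feasible = soc, 0.0, True
        for k, a in enumerate(plan):
            s += a
            if not (0.0 <= s <= CAP) or abs(a) > P_MAX:
                feasible = False
                break
            grid = LOAD[t + k] + a        # grid import covers load plus charging
            if grid < 0.0:                # no grid export in this toy model
                feasible = False
                break
            cost += grid * PRICES[t + k]
        if feasible and cost < best_cost:
            best_cost, best_first = cost, plan[0]
    return best_first

soc, total = 2.0, 0.0
for t in range(len(PRICES)):
    a = plan_step(soc, t)
    soc += a
    total += (LOAD[t] + a) * PRICES[t]
```

A real economic MPC would replace the exhaustive search with a mixed-integer or nonlinear optimisation solver; the receding-horizon structure is the same.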
In this paper, the Bauschinger effect and latent hardening of single crystals are assessed in finite element calculations using a single crystal plasticity model with kinematic hardening. To this end, results of cyclic micro-bending experiments on single-crystal Alloy 718 in different crystal orientations (single slip and multi slip) with respect to the loading direction are used to determine the slip-system-related material properties of the single crystal plasticity model. Two kinematic hardening laws are considered: one describing latent hardening and one without latent hardening. The material properties for both hardening laws are determined with a gradient-based optimization method. The results show that the different strength levels observed in micro-bending tests on different crystal orientations can only be described well with latent kinematic hardening, whereas the pronounced Bauschinger effect is described well by both kinematic hardening laws. It is concluded that cyclic micro-bending experiments on single crystals in different crystal orientations provide an appropriate database for determining the slip-system-related material properties of the single crystal plasticity model with latent kinematic hardening.
As PV enters the terawatt era, reliability, sustainability and low carbon footprint of solar modules are key requirements. The N.I.C.E.TM technology from Apollon Solar is a good candidate for significant improvements in these areas. As the second-generation pilot line is now functional with IEC certification underway, we present a holistic assessment of N.I.C.E.TM technology compared with conventional module technology with encapsulant. This includes electrical performance and cost/consumables, reliability, and degradation mechanisms as well as sustainability aspects. In addition, the new generation of N.I.C.E.-wire modules are presented that use thin round Cu wires instead of flat ribbons for interconnection. This candidate technology for an alternative to the Smart Wire Connection Technology (SWCT) is investigated experimentally as well as via numerical simulations.
The isolation measures adopted during the COVID-19 pandemic brought to light discussions about the importance of meaningful social relationships as a basic need for human well-being. But even before the pandemic outbreak in 2020 and 2021, organizations and scholars were already drawing attention to the growing number of lonely people in the world (World Economic Forum, 2019). Loneliness is an emotional distress caused by the lack of meaningful social connections, which affects people worldwide across all age groups, particularly young adults (Rook, 1984). The use of digital technologies has gained prominence as a means of alleviating this distress. For example, studies have shown the benefits of using digital games both to stimulate social interactions (Steinfield, Ellison & Lampe, 2008) and to enhance the effects of digital interventions for mental-health treatments through gamification (Fleming et al., 2017). It is with these aspects in mind that the gamified app Noneliness was designed, with the intention of reducing loneliness among young students at a German university. In addition to sharing the related work that supported the application's development, this chapter also presents the considerations behind the resource's design, its main functionalities, and preliminary results on the reduction of loneliness in the target audience.
We aim to debate, and eventually be able to carefully judge, how realistic the following statement of a young computer scientist is: "I would like to become an ethically correctly acting offensive cybersecurity expert". The objective of this article is neither to judge what is good and what is wrong behavior nor to present an overall solution to ethical dilemmas. Instead, the goal is to become aware of the various personal moral dilemmas a security expert may face during their working life. To this end, a total of 14 cybersecurity students from HS Offenburg were asked to evaluate several case studies according to different ethical frameworks. The results and particularities are discussed in light of these frameworks. We emphasize that different ethical frameworks can lead to different preferred actions and that the moral understanding of the frameworks may differ even from student to student.
Sweaty has already participated several times in RoboCup soccer competitions (Adult Size). Now the work is focused on stabilizing the gait. Moreover, we would like to overcome the constraints of a ZMP algorithm that requires a horizontal footplate as a precondition for simplifying the equations. In addition, we would like to switch between impedance and position control with a fuzzy-like algorithm that might help to minimize jerks when Sweaty's feet touch the ground.
Editorial
(2022)
Publisher und Start-ups
(2022)
With the 2021 Climate Protection Act (Klimaschutzgesetz), the German federal government tightened its climate targets and enshrined greenhouse-gas neutrality by 2045 as a goal. Reaching this ambitious goal requires a far-reaching shift in mobility from combustion engines running on fossil fuels to electric mobility powered by renewably generated electricity. Rapidly providing sufficient charging infrastructure for electric vehicles is a major challenge here. Beyond installing a sufficiently large number of charging points, the challenge is to integrate them into the existing distribution grid, or to expand the grid such that secure network operation remains guaranteed. Solutions are needed in particular that minimize the expansion of charging infrastructure and grid assets through intelligent charge management, using existing or newly installed hardware as efficiently as possible.
This is where the project "Intelligente Ladeinfrastruktur für Elektrofahrzeuge auf dem Parkplatz der Hochschule Offenburg (INTLOG)" (project duration 15 November 2020 to 30 September 2022) came in. Its task was to connect a charging park with 20 charging points of 11 kW each, i.e. a total charging power of 220 kW, on the Offenburg University parking lot to an existing local distribution transformer rated at 200 kW that was already serving other loads. The overarching goal was thus to integrate a charging infrastructure of substantial scale into the existing grid infrastructure without additional expansion.
In the process, forward-looking technologies were used and further developed, and demonstrated partly in field operation, in the laboratory, and in computer simulation.
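The intelligent charge management at the heart of the project can be sketched with a minimal allocation rule. The 20 × 11 kW charge points and the 200 kW transformer come from the project description; the base load, the equal-sharing policy, and the function names are assumptions for illustration:

```python
# Sketch of a simple charge-management rule for the INTLOG setting: 20 charge
# points of 11 kW behind a 200 kW transformer that also serves a base load.
# The transformer headroom is shared equally among active vehicles, capped at
# each point's 11 kW rating. The sharing policy itself is an assumption.
P_TRAFO = 200.0   # kW transformer rating
P_POINT = 11.0    # kW per charge point
N_POINTS = 20

def allocate(base_load_kw, n_active):
    """Evenly share the transformer headroom among n_active charging vehicles."""
    assert 0 <= n_active <= N_POINTS
    if n_active == 0:
        return 0.0
    headroom = max(0.0, P_TRAFO - base_load_kw)
    return min(P_POINT, headroom / n_active)

# With 120 kW base load and all 20 points active, each car gets 4 kW instead
# of the full 11 kW, so the total stays within the 200 kW transformer rating.
per_car = allocate(120.0, 20)
```

Here `per_car` comes out as 4.0 kW; without management, 20 cars at 11 kW plus the base load would demand 340 kW from a 200 kW transformer.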
Generative machine learning models for creative purposes play an increasingly prominent role in the field of dance and technology. A particularly popular approach is the use of such models for generating synthetic motions. Such motions can either serve as a source of ideation for choreographers or control an artificial dancer that acts as an improvisation partner for human dancers. Several examples employ autoencoder-based deep-learning architectures trained on motion-capture recordings of human dancers. Synthetic motions are then generated by navigating the autoencoder's latent space. This paper proposes an alternative approach to using an autoencoder for creating synthetic motions, which controls the generation of synthetic motions on the level of the motion itself rather than its encoding. Two different methods are presented that follow this principle. Both are based on the interactive control of a single joint of an artificial dancer while the other joints remain under the control of the autoencoder. The first method combines control of the orientation of a joint with iterative autoencoding. The second method combines control of the target position of a joint with forward kinematics and the application of latent difference vectors. As an illustrative example of an artistic application, the latter method is used for an artificial dancer that plays a digital instrument. The paper presents the implementation of these two methods and provides some preliminary results.
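The first method (clamping one joint while iteratively autoencoding) can be sketched as follows. The tiny linear encoder/decoder below is a stand-in for the trained deep autoencoder of the paper; joint count, latent size, and weights are illustrative assumptions:

```python
# Sketch of iterative autoencoding with one user-controlled joint. A toy
# linear "autoencoder" with random weights stands in for the trained deep
# model; the controlled joint is re-imposed after every decode step so the
# remaining joints settle into a pose consistent with the constraint.
import random

N_JOINTS, LATENT = 8, 3
random.seed(0)
W_ENC = [[random.uniform(-0.3, 0.3) for _ in range(N_JOINTS)] for _ in range(LATENT)]
W_DEC = [[random.uniform(-0.3, 0.3) for _ in range(LATENT)] for _ in range(N_JOINTS)]

def matvec(m, v):
    return [sum(a * b for a, b in zip(row, v)) for row in m]

def autoencode(pose):
    return matvec(W_DEC, matvec(W_ENC, pose))

def constrained_pose(pose, joint_idx, joint_val, n_iter=20):
    """Iteratively autoencode while clamping one joint to a user-given value."""
    p = list(pose)
    for _ in range(n_iter):
        p[joint_idx] = joint_val     # user control overrides this joint
        p = autoencode(p)            # autoencoder adjusts the other joints
    p[joint_idx] = joint_val
    return p

pose = [0.1] * N_JOINTS
out = constrained_pose(pose, joint_idx=2, joint_val=0.8)
```

With a real trained autoencoder, repeated encode/decode pulls the free joints toward the learned motion manifold while the clamped joint follows the dancer's input.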
Subspace clustering aims to find all clusters in all subspaces of a high-dimensional data space. We present a massively data-parallel approach that can be run on graphics processing units. It extends a previous density-based method that scales well with the number of dimensions. Its main computational bottleneck consists of (sequentially) generating a large number of minimal cluster candidates in each dimension and using hash collisions in order to find matches of such candidates across multiple dimensions. Our approach parallelizes this process by removing previous interdependencies between consecutive steps in the sequential generation process and by applying a very efficient parallel hashing scheme optimized for GPUs. This massive parallelization gives up to 70x speedup for the bottleneck computation when it is replaced by our approach and run on current GPU hardware. We note that depending on data size and choice of parameters, the parallelized part of the algorithm can take different percentages of the overall runtime of the clustering process, and thus, the overall clustering speedup may vary significantly between different cases. However, even in our "worst-case" test, a small dataset where the computation makes up only a small fraction of the overall clustering time, our parallel approach still yields a speedup of more than 3x for the complete run of the clustering process. Our method could also be combined with parallelization of other parts of the clustering algorithm, with an even higher potential gain in processing speed.
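The hash-collision matching that forms the bottleneck can be sketched sequentially; the GPU version parallelizes exactly this step. The candidate sets and the `min_dims` threshold below are toy assumptions:

```python
# Sketch of the hash-collision idea: minimal cluster candidates found per
# dimension are hashed by their member set; candidates whose member sets
# collide in several dimensions indicate a subspace cluster. The data here
# is a toy assumption; Python's dict hashing plays the role of the GPU hash
# table from the paper.
from collections import defaultdict

# per dimension: frozensets of point ids forming dense 1D candidate regions
candidates = {
    0: [frozenset({1, 2, 3}), frozenset({7, 8})],
    1: [frozenset({1, 2, 3}), frozenset({4, 5})],
    2: [frozenset({1, 2, 3}), frozenset({7, 8})],
}

def match_across_dimensions(cands, min_dims=2):
    """Group identical candidate sets via hashing; keep those found in >= min_dims dimensions."""
    table = defaultdict(set)
    for dim, sets in cands.items():
        for s in sets:
            table[s].add(dim)
    return {s: dims for s, dims in table.items() if len(dims) >= min_dims}

matches = match_across_dimensions(candidates)
```

In this toy input, the set {1, 2, 3} collides in all three dimensions and {7, 8} in two, so both survive as subspace-cluster candidates, while {4, 5} is discarded.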
Forschung im Fokus 2022
(2022)
Machine learning (ML), currently probably the most discussed subfield of artificial intelligence (AI), promises and already delivers useful support in areas such as autonomous driving, predictive maintenance, crime prevention, matching supply and demand through recommendation lists, and customer service with softbots. What all of these applications ultimately have in common is that decisions have to be made, as rationally as possible, against the background of subjective preference systems and the set of alternatives under consideration. The decisions at stake concern questions such as when a vehicle should brake slightly, in which neighborhoods police presence should be concentrated, or when a machine should be serviced.
Lithium-ion batteries exhibit a dynamic voltage behaviour that depends nonlinearly on current and state of charge. Modelling lithium-ion batteries is therefore complicated, and model parametrisation is often time-consuming. Grey-box models combine physical and data-driven modelling to benefit from their respective advantages. Neural ordinary differential equations (NODEs) offer new possibilities for grey-box modelling: differential equations given by physical laws and NODEs can be combined in a single modelling framework. Here we demonstrate the use of NODEs for grey-box modelling of lithium-ion batteries. A simple equivalent circuit model serves as a basis and represents the physical part of the model. The voltage drop over the resistor–capacitor circuit, including its dependency on current and state of charge, is implemented as a NODE. After training, the grey-box model shows good agreement with experimental full-cycle data and pulse tests on a lithium iron phosphate cell. We test the model against two dynamic load profiles: one consisting of half cycles and one representing a home-storage system. The dynamic response of the battery is well captured by the model.
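The grey-box structure (physical equivalent circuit plus a learned term for the RC dynamics) might be sketched as below. The trained NODE is replaced here by a hand-written RC right-hand side, and all parameter values are assumptions, not the fitted values from the paper:

```python
# Grey-box sketch: an equivalent-circuit cell model (OCV + series resistance
# + one RC pair) integrated with explicit Euler. In the paper, the RC
# right-hand side is a trained neural ODE depending on current and SOC; a
# plain linear RC law serves as stand-in here. All parameters are assumed.
import math

R0, R1, C1 = 0.010, 0.015, 2000.0     # ohm, ohm, farad (assumed)
Q_AH = 2.5                            # cell capacity in Ah (assumed)

def ocv(soc):
    """Toy open-circuit-voltage curve for an LFP-like cell (assumption)."""
    return 3.2 + 0.2 * soc + 0.05 * math.tanh(10 * (soc - 0.5))

def rc_rhs(v1, current, soc):
    # Stand-in for the NODE: dv1/dt of a single linear RC pair.
    return current / C1 - v1 / (R1 * C1)

def simulate(current, t_end, dt=1.0, soc0=0.5):
    soc, v1, trace = soc0, 0.0, []
    for _ in range(int(t_end / dt)):
        soc += current * dt / (Q_AH * 3600.0)       # coulomb counting
        v1 += dt * rc_rhs(v1, current, soc)          # Euler step on RC state
        trace.append(ocv(soc) + current * R0 + v1)   # terminal voltage
    return soc, trace

soc, v = simulate(current=1.25, t_end=600.0)   # 0.5 C charge for 10 minutes
```

In the grey-box setting of the paper, `rc_rhs` would be the trained NODE, while `ocv`, `R0`, and the coulomb-counting step remain the physical part.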
Introduction: The use of scaffolds in tissue engineering is becoming increasingly important as solutions need to be found for the problem of preserving human tissue, such as bone or cartilage. In this work, scaffolds were printed from the biomaterial polycaprolactone (PCL) on a 3D Bioplotter. Both the external and internal geometry were varied to investigate their influence on mechanical stability and biocompatibility. Materials and Methods: An Envisiontec 3D Bioplotter was used to fabricate the scaffolds. First, square scaffolds were printed with variations in strand width and strand spacing. Then, the filling structure was varied: lines, waves, or honeycombs were used. This was followed by variation of the outer shape, produced as either a square, hexagon, octagon, or circle. Finally, the internal and external geometry were varied together. To improve interaction with the cells, the printed PCL scaffolds were coated with type-I collagen. MG-63 cells were then cultured on the scaffolds and various tests were performed to investigate the biocompatibility of the scaffolds. Results: With increasing strand thickness and strand spacing, the compressive strength decreased from 86.18 ± 2.34 MPa (200 µm) to 46.38 ± 0.52 MPa (600 µm). The circle was the outer shape with the highest compressive strength of 76.07 ± 1.49 MPa, compared to the octagon, which had the lowest value of 52.96 ± 0.98 MPa. Varying the external shape (toward roundness) as well as the filling configuration resulted in the highest compressive strength for the round specimens with honeycomb filling, at 91.4 ± 1.4 MPa. In the biocompatibility tests, the round specimens with honeycomb filling also showed the highest cell count per mm², with 1591 ± 239 live cells/mm² after 10 days, and the highest cell proliferation, with only minimal cytotoxic effects (9.19 ± 2.47% after 3 days).
The use of biochar is an important tool to improve soil fertility, reduce the negative environmental impacts of agriculture, and build up terrestrial carbon sinks. However, crop-yield increases from biochar amendment have not been shown consistently for fertile soils under temperate climate. Recent studies show that biochar is more likely to increase crop yields when applied in combination with nutrients as biochar-based fertilizers. Here, we focused on root-zone amendment of biochar combined with mineral fertilizers in a greenhouse trial with white cabbage (Brassica oleracea convar. Capitata var. Alba) cultivated in a nutrient-rich silt loam soil from the temperate climate zone (Bavaria, Germany). Biochar was applied at a low dosage (1.3 t ha−1), either placed as a concentrated hotspot below the seedling or mixed into the root-zone soil, representing a mixture of biochar and soil in the planting basin. The nitrogen fertilizer (ammonium nitrate or urea) was either applied on the soil surface or loaded onto the biochar, representing a nitrogen-enhanced biochar. On average, a 12% yield increase in dry cabbage heads was achieved with biochar plus fertilizer compared to the fertilized control without biochar. The most consistent positive yield responses were observed with hotspot root-zone application of nitrogen-enhanced biochar, showing a maximum 21% increase in dry cabbage-head yield. Belowground biomass and root architecture suggested a decrease in fine-root content in these treatments compared to treatments without biochar and with soil-mixed biochar. We conclude that hotspot amendment of a nitrogen-enhanced biochar in the root zone can optimize the growth of white cabbage by providing a nutrient depot in close proximity to the plant, enabling efficient nutrient supply. The amendment of low doses in the root zone of annual crops could become an economically interesting application option for biochar in the temperate climate zone.
This thesis investigates the fatigue and damage behavior of the cast aluminum alloys AlSi7Cu0,5Mg-T7 and AlSi12Cu3Ni2Mg-T7 used in combustion engines. Compared with low-cycle and thermomechanical fatigue loading, the additional superposition of high-cycle loads leads to a significant reduction in lifetime, which can be explained by the accelerated short-crack growth observed with the replica technique. Fractographic and metallographic investigations show that crack initiation and lifetime behavior are governed by casting defects as well as by load- and temperature-dependent damage mechanisms. Lifetimes are predicted with a mechanism-based crack-growth model. For this purpose, the damage parameter DTMF,brittle is developed, which accounts for the characteristic damage mechanisms. Finally, the alloy AlSi12Cu3Ni2Mg-T7 is investigated with the finite element method and microstructure-based cell models. The simulation results provide sound support for the experimentally observed damage mechanisms.
The fluctuating availability of renewable energy sources poses a challenge for the planning and design of renewable building energy systems. The storage capacities required in a system depend both on the control strategy employed and on the temperature-dependent efficiencies of the system components. Dynamic simulations, which enable an analysis of system temperatures and partial energy performance indicators, can provide deeper insight into the operating behavior of the overall system.
The lifetime of a battery is affected by various aging processes happening at the electrode scale and causing capacity and power fade over time. Two of the most critical mechanisms are the deposition of metallic lithium (plating) and the loss of lithium inventory to the solid electrolyte interphase (SEI). These side reactions compete with reversible lithium intercalation at the graphite anode. Here we present a comprehensive physicochemical pseudo-3D aging model for a lithium-ion battery cell, which includes electrochemical reactions for SEI formation on the graphite anode, lithium plating, and SEI formation on plated lithium. The thermodynamics of the aging reactions are modeled depending on temperature and ion concentration, and the reaction kinetics are described with an Arrhenius-type rate law. The model also includes the positive feedback of plating on SEI growth, with the presence of plated lithium leading to a higher SEI formation rate than in its absence at the same operating conditions. The model is thus able to describe cell aging over a wide range of temperatures and C-rates. In particular, it allows capacity loss due to cycling (here in % per year) to be quantified as a function of operating conditions. This enables the visualization of aging colormaps as a function of both temperature and C-rate and the identification of critical operating conditions, a fundamental step toward a comprehensive understanding of battery performance and behavior. For example, the model predicts that at the harshest conditions (< –5 °C, > 3 C), aging is reduced compared to the most critical conditions (around 0–5 °C) because the cell cannot be fully charged.
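The Arrhenius-type rate law mentioned above can be sketched as a toy aging map over temperature and C-rate. This sketch captures only the thermally activated part; the low-temperature plating branch and the plating–SEI feedback of the full model are omitted, and all constants are illustrative assumptions, not fitted values:

```python
# Toy Arrhenius-type aging map: k = A * exp(-Ea / (R * T)), amplified
# linearly with C-rate. Prefactor, activation energy, and the C-rate
# coupling are assumptions chosen only to illustrate the structure.
import math

R_GAS = 8.314          # gas constant, J/(mol K)
EA = 50e3              # activation energy, J/mol (assumed)
A = 1e6                # prefactor (assumed)

def aging_rate(temp_c, c_rate):
    """Toy aging rate: Arrhenius in temperature, linear amplification with C-rate."""
    k = A * math.exp(-EA / (R_GAS * (temp_c + 273.15)))
    return k * (1.0 + 0.5 * c_rate)

# Evaluate over a temperature / C-rate grid, as done for the colormaps.
grid = {(t, c): aging_rate(t, c) for t in (-5, 5, 25, 45) for c in (0.5, 1, 3)}
```

In the full model, the low-temperature plating reaction makes the map non-monotonic in temperature; this purely Arrhenius sketch only increases with T.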
To achieve its climate goals, German industry has to undergo a transformation toward renewable energies. To analyze this transformation in energy system models, the industry's electricity demand has to be provided in high temporal and sectoral resolution, which, to date, is not the case due to a lack of open-source data. In this paper, a methodology for the generation of synthetic electricity load profiles is described; it was applied to 11 industry types. The modeling was based on normalized daily load profiles for eight electrical end-use applications. The profiles were then further refined using the mechanical processes of different branches. Finally, a stochastic fluctuation was applied to the profiles. A quantitative RMSE comparison between real and synthetic load profiles showed that the developed method is especially accurate for representing loads from three-shift industrial plants. A procedure for applying the synthetic load profiles to a regional distribution of the industry sector completes the methodology.
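The construction of a synthetic profile (normalized daily profile, shift pattern, stochastic fluctuation) might look like this minimal sketch; the base profile, shift factors, and noise level are illustrative assumptions, not the calibrated values of the paper:

```python
# Sketch of synthetic load-profile generation: a normalized hourly end-use
# profile is shaped by a shift pattern, scaled to a target daily energy, and
# roughened with Gaussian fluctuation. All numeric values are assumptions.
import random

random.seed(42)
BASE = [0.6] * 6 + [1.0] * 12 + [0.6] * 6          # normalized hourly profile
THREE_SHIFT = [1.0] * 24                            # three-shift plant: flat
TWO_SHIFT = [0.2] * 6 + [1.0] * 16 + [0.2] * 2      # two-shift plant (assumed)

def synthetic_profile(annual_mwh, shift, noise=0.05):
    """Hourly day profile scaled so the day's energy matches annual/365 MWh."""
    shaped = [b * s for b, s in zip(BASE, shift)]
    scale = (annual_mwh / 365.0) / sum(shaped)
    return [v * scale * (1.0 + random.gauss(0.0, noise)) for v in shaped]

profile = synthetic_profile(8760.0, THREE_SHIFT)    # ~1 MW average plant
```

The flat `THREE_SHIFT` pattern is why three-shift plants are easiest to represent: the stochastic term is the only deviation from the base shape.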
The significant market growth of stationary electrical energy storage systems for both private and commercial applications has raised the question of battery lifetime under practical operating conditions. Here, we present a study of two 8 kWh lithium-ion battery (LIB) systems, each equipped with 14 lithium iron phosphate/graphite (LFP) single cells in different cell configurations. One system was based on a standard configuration with cells connected in series, including a cell-balancing system and a 48 V inverter. The other system featured a novel configuration of two stacks with a parallel connection of seven cells each, no cell-balancing system, and a 4 V inverter. The two systems were operated as part of a microgrid both in continuous cycling mode between 30% and 100% state of charge, and in solar-storage mode with day–night cycling. The aging characteristics in terms of capacity loss and internal resistance change in the cells were determined by disassembling the systems for regular checkups and characterizing the individual cells under well-defined laboratory conditions. As a main result, the two systems showed cell-averaged capacity losses of 18.6% and 21.4% for the serial and parallel configurations, respectively, after 2.5 years of operation with 810 (serial operation) and 881 (parallel operation) cumulated equivalent full cycles. This is significantly higher than the aging of a reference single cell cycled under laboratory conditions at 20 °C, which showed a capacity loss of only 10% after 1000 continuous full cycles.
The Covid-19 pandemic has changed the world. All economic sectors, such as retail, were confronted with an altered reality from one day to the next. On the one hand, this development has massively increased the pressure to digitalize, which was noticeable even before the pandemic, especially on brick-and-mortar retail. On the other hand, it has moved customer data into the center of attention, since personal contact with customers is missing in the digital world. This contribution highlights the importance of customer data, data quality, and data management as key success factors for retail in this challenging situation. It shows how data stewards in retail can use identity resolution to develop clear customer profiles from their knowledge of the data and roll these out on a platform basis. To this end, the new concept of the Customer Digital Twin is introduced. The concluding recommendations offer a 'five-step work instruction' for an up-to-date, complete, and reliable data basis as the foundation for data-oriented brick-and-mortar and online retail.
Intelligentes Data Governance und Data Management – Neue Chancen für die Kundendatenbewirtschaftung
(2022)
Digitalization has opened up numerous new channels, advertising formats, and target groups for marketing and sales. Experts estimate that roughly 4,000 to 10,000 advertising and brand messages rain down on each of us every day. Even if these figures are disputed and a number between 300 and 500 messages per day is more realistic, this is still more than can reasonably be processed and perceived. In response to this mass of advertising and sensory overload, consumers have in part developed a kind of "advertising blindness".
It is therefore becoming increasingly challenging for companies to actively reach their target groups. What can companies do to nevertheless gain lasting attention, and how can customer data management supported by artificial intelligence help?
This book provides a well-founded overview of crisis and transformation scenarios in business, culture, and education during the corona pandemic. The focus is on concrete accounts of successful measures and solutions in the crisis. Numerous experts from academia and practice explain how institutions and companies dealt with the pandemic, how viable future perspectives were developed, and how they were put into everyday practice.
The corona pandemic triggered a crisis in many areas and, through the compelling necessity to act, had a catalytic effect. Developments were reinforced or accelerated, and transformation processes, for example the digitalization of many industries and areas of life, were influenced in direction and dynamics.
Based on good practices, the authors offer proposals for how the crisis can be turned into an opportunity: from new omnichannel solutions in sales and innovative new-work approaches to examples from the particularly hard-hit aviation and tourism industries. Further core areas are the culture and education sectors, with examples of concrete survival strategies of artists as well as innovative digital concepts in teaching and research. The book is aimed equally at readers with an interest in business and society, students, and practitioners.
Jürgen Zierep passed away on July 29, 2021, at the age of 92. To him, science and education were not only a profession but an affair of the heart. His impressive contributions to fluid mechanics comprise about 200 scientific publications in the fields of gas dynamics, similarity laws, flow instabilities, flows with energy transfer, and non-Newtonian fluids. In addition, he wrote eleven textbooks with great dedication. These books by the "scientist who loves to teach" are available today in several languages and regularly appear in new editions.
This mature textbook presents the fundamentals of fluid mechanics in a concise and mathematically accessible form. In the current edition, a section on dissipation and viscous potential flows has been added. Exercises with solutions help readers apply the material correctly and promote understanding.
Lithium-ion batteries exhibit a well-known trade-off between energy and power, which is problematic for electric vehicles, which require both high energy during discharge (long driving range) and high power during charge (fast-charge capability). We use two commercial lithium-ion cells (high-energy [HE] and high-power) to parameterize and validate physicochemical pseudo-two-dimensional models. In a systematic virtual design study, we vary electrode thicknesses, cell temperature, and the type of charging protocol. We show that low anode potentials during charge, which induce lithium plating and cell aging, can be effectively avoided either by using high temperatures or by using a constant-current/constant-potential/constant-voltage charging protocol that includes a constant anode-potential phase. We introduce and quantify a specific charging power as the ratio of discharged energy (at slow discharge) and required charging time (at fast charge). This value is shown to exhibit a distinct optimum with respect to electrode thickness. At 35°C, the optimum was achieved with an HE electrode design, yielding 23.8 Wh/(min L) volumetric charging power at 15.2 min charging time (10% to 80% state of charge) and 517 Wh/L discharge energy density. By analyzing the various overpotential contributions, we show that electrolyte transport losses are dominantly responsible for the insufficient charge and discharge performance of cells with very thick electrodes.
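The specific charging power can be reproduced from the reported numbers if the charged energy is read as the 10–80% state-of-charge window of the 517 Wh/L energy density; that reading is our assumption:

```python
# Specific charging power figure of merit: charged energy per charging time,
# per cell volume. Interpreting the energy as the 10-80% SOC window of the
# 517 Wh/L energy density reproduces the reported optimum of 23.8 Wh/(min L);
# this interpretation is an assumption, not stated explicitly in the abstract.
def specific_charging_power(energy_wh_per_l, soc_window, charge_time_min):
    """Volumetric charging power in Wh/(min L)."""
    return energy_wh_per_l * soc_window / charge_time_min

p_opt = specific_charging_power(517.0, 0.80 - 0.10, 15.2)
print(round(p_opt, 1))  # → 23.8
```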
A novel method for quasi-continuous tar monitoring in hot syngas from biomass gasification is reported. A very small syngas stream is extracted from the gasifier output, and the oxygen demand for tar combustion is determined by a well-defined dosage of synthetic air. Assuming total oxidation of all combustible components at the Pt electrode of a lambda probe, the difference between the residual oxygen concentrations of successive runs with and without tar condensation represents the oxygen demand. From laboratory experiments with H2/N2/naphthalene model syngas, linear sensitivity and a lower detection limit of about 70 ± 5 mg/m³ were estimated, and very good long-term stability can be expected. This extremely sensitive and robust monitoring concept was evaluated further by extracting a small, constant flow of hot syngas as a sample (9 L/h) using a Laval nozzle combined with a metallic filter (a sintered metal plate with 10 µm pore diameter) and a gas pump (in the cold zone). The first laboratory tests of this setup, which is suitable for field applications, confirmed the excellent analysis results. However, field tests monitoring tar in syngas from a woodchip-fueled gasifier demonstrated that determining the oxygen demand by successive estimation of the oxygen concentration with and without tar trapping is not accurate enough, due to continuous variation of the syngas composition. A method to overcome this constraint is proposed.
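The differential measurement principle reduces to a simple subtraction; the residual-oxygen readings below are assumed values for illustration:

```python
# Sketch of the measurement principle: with tar condensed out, the tar does
# not consume oxygen at the lambda probe, so the residual O2 is higher than
# in the run where tar passes through to combustion. The difference is the
# tar-related oxygen demand. Both readings here are assumed values.
def tar_oxygen_demand(residual_o2_with_trap, residual_o2_without_trap):
    """Oxygen demand (vol-%) attributable to tar combustion."""
    return residual_o2_with_trap - residual_o2_without_trap

demand = tar_oxygen_demand(4.2, 3.5)   # assumed vol-% readings
```

The field-test problem reported in the abstract is visible in this structure: the two readings come from successive runs, so any drift in syngas composition between them corrupts the difference.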
Simulation-based studies for operational energy system analysis play a significant role in the evaluation of various new technologies and concepts in the energy grid. Several modelling approaches already exist, and in this original paper, four models representing these approaches are compared in two real-world hybrid energy system scenarios. The models, namely TransiEnt, µGRiDS, and OpSim (including pandaprosumer and mosaic), are classified into component-oriented or system-oriented approaches, as deduced from the literature research. The methodology section describes their differences under standard conditions and the parameterization necessary to create a framework facilitating the closest possible comparison. A novel methodology for scenario generation is also explained. The results help to quantify the primary differences between these approaches, which are also identified in the literature, and to qualify the influence of model accuracy on application in a system-wide analysis. It is shown that a simplified model may be sufficient for the system-oriented approach, especially when the objective is optimization-based control or planning. However, from a field-level operational point of view, the differences in the time series underline the importance of the component-oriented approaches.
In asymmetric treatment of hearing loss, the processing latencies of the two modalities typically differ. This often alters the reference interaural time difference (ITD), i.e., the ITD at 0° azimuth, by several milliseconds. Such changes in reference ITD have been shown to influence sound source localization in bimodal listeners provided with a hearing aid (HA) in one ear and a cochlear implant (CI) in the contralateral ear. In this study, the effect of changes in reference ITD on speech understanding, especially spatial release from masking (SRM), was explored in normal-hearing subjects. Speech reception thresholds (SRT) were measured in ten normal-hearing subjects for reference ITDs of 0, 1.75, 3.5, 5.25 and 7 ms with spatially collocated (S0N0) and spatially separated (S0N90) sound sources. Further, the cues for separation of target and masker were manipulated to measure the effect of a reference ITD on unmasking by A) ITDs and interaural level differences (ILDs), B) ITDs only and C) ILDs only. A blind equalization-cancellation (EC) model was applied to simulate all measured conditions. SRM decreased significantly in conditions A) and B) when the reference ITD was increased: in condition A) from 8.8 dB SNR on average at 0 ms reference ITD to 4.6 dB at 7 ms, in condition B) from 5.5 dB to 1.1 dB. In condition C) no significant effect was found. These results were accurately predicted by the applied EC model. The outcomes show that interaural processing latency differences should be considered in the asymmetric treatment of hearing loss.
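SRM as used here is simply the threshold difference between the collocated and separated configurations. A hedged Python sketch with hypothetical SRT values chosen only so that their difference reproduces the reported 8.8 dB group mean for condition A at 0 ms:

```python
# Sketch only: spatial release from masking (SRM) as the SRT difference
# between collocated (S0N0) and spatially separated (S0N90) conditions.

def spatial_release_from_masking(srt_s0n0_db: float, srt_s0n90_db: float) -> float:
    """SRM in dB: positive values mean spatial separation helps understanding."""
    return srt_s0n0_db - srt_s0n90_db

# Hypothetical SRTs (dB SNR); only their difference matches the reported mean:
srm = spatial_release_from_masking(srt_s0n0_db=-2.0, srt_s0n90_db=-10.8)
print(round(srm, 1))  # 8.8
```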
After a long run-up, the Bundestag and Bundesrat passed a legal entitlement to all-day education in primary schools at the end of the 2021 legislative period. To turn this formal entitlement into good offerings in legal practice, a targeted dialogue with the relevant stakeholder and interest groups is needed in addition to political and financial framework conditions. Stakeholder management, above all involving the actors at school authorities and schools, is therefore of particular importance.
Journalist, Teil A
(2022)
Brand-related user-generated content allows companies to achieve several important objectives, such as increasing sales and creating higher user engagement. In this paper, a research framework is developed that provides an overview of the processes necessary to use brand-related user-generated content successfully. The framework also helps managers understand users' main motives when posting brand-related user-generated content. Expert interviews were carried out to validate the research framework, and the results support the proposed framework. Brand-related user-generated content can increase purchase intention and community engagement. From a user's perspective, the opportunity to interact with a brand and to be featured on official brand channels can be seen as the main motivation for creating brand-related user-generated content.
The present essay discusses several channels through which social policy affects climate mitigation and uses the Universal Basic Income (UBI) scheme as an example of an endowment-increasing and inclusive social policy instrument. UBI comprises the payment of a fixed amount of money to every member of a society from birth to death and is not bound to any precondition. It is expected to increase the resilience of individuals, rather than of particular life trajectories, against disruptive and unexpected processes such as climate change, digitization, an aging population, and the changing world of work. UBI is found to be a social policy instrument whose effects can contribute to climate mitigation. This essay is far from conclusive and rather aims to raise questions that require further analysis.
The growth of the Internet of Things (IoT) calls for secure solutions for industrial applications. The security of IoT can potentially be improved by blockchain. However, blockchain technology suffers from scalability issues, which hinder its integration with IoT. Solutions to blockchain's scalability issues, such as minimizing the computational complexity of consensus algorithms or reducing blockchain storage requirements, have received attention. However, to realize the full potential of blockchain in IoT, the inefficiencies of its inter-peer communication must also be addressed. For example, blockchain uses a flooding technique to share blocks, resulting in duplicates and inefficient bandwidth usage. Moreover, blockchain peers use a random neighbor selection (RNS) technique to decide with which other peers to exchange blockchain data. As a result, the peer-to-peer (P2P) topology formation limits the effectively achievable throughput. This paper provides a survey of the state-of-the-art network structures and communication mechanisms used in blockchain and establishes the need for network-based optimization. Additionally, it discusses the blockchain architecture and its layers, categorizes the existing literature into these layers, and surveys state-of-the-art optimization frameworks, analyzing their effectiveness and ability to scale. Finally, this paper presents recommendations for future work.
Electrochemical pressure impedance spectroscopy (EPIS) has recently been developed as a potential diagnosis tool for polymer electrolyte membrane fuel cells (PEMFC). It is based on analyzing the frequency response of the cell voltage with respect to an excitation of the gas-phase pressure. We present here a combined modeling and experimental study of EPIS. A pseudo-two-dimensional PEMFC model was parameterized to a 100 cm2 laboratory cell installed in its test bench and used to reproduce steady-state cell polarization and electrochemical impedance spectra (EIS). Pressure impedance spectra were obtained both in experiment and simulation by applying a harmonic pressure excitation at the cathode outlet. The model shows good agreement with experimental data for current densities ⩽ 0.4 A cm−2; in this range, it allows a further simulation-based analysis of the observed EPIS features, including the magnitude and shape of the spectra. Key findings include a strong influence of the humidifier gas volume on EPIS and a substantial increase in oxygen partial pressure oscillations towards the channel outlet at the resonance frequency. At current densities ⩾ 0.8 A cm−2, the experimental EIS and EPIS data cannot be fully reproduced. This deviation might be associated with the formation and transport of liquid water, which is not included in the model.
This reference book provides a professionally grounded and practice-oriented overview of the possibilities for using social media and messenger services in the municipal sector. Social media are particularly well suited as an instrument for dialogue with citizens, which requires a targeted analysis of communication occasions and dialogue needs in the municipal sector.
To exploit the full potential of social media and to communicate promptly and in a decentralized manner, the resulting demands on organizational culture and leadership structure must be taken into account. The handbook provides the theoretical background and the corresponding practical implementation needed to open up the necessary fields of action of municipal communication with the help of social media and messenger services.
This book explains the general framework conditions of Industrie 4.0 and offers valuable impulses for innovative SMEs in the industrial sector.
Industrie 4.0, the centerpiece of the digital transformation of many manufacturing companies, presents a very heterogeneous picture with regard to its implementation. The differences in organizational conditions lie in the degree of innovation of the respective organization and can be traced to the sustainability of investments and the associated medium- to long-term market success.
The book presents concrete implementation examples and describes, among other things, application scenarios from experienced and innovative engineers in mechanical engineering and medical technology. It also provides a general overview of digital strategies and fields of application of Industrie 4.0.
It is aimed equally at interested readers, students, and practitioners in the fields of technology, engineering, and management.
This chapter addresses the question of what the buzzword Industrie 4.0 does and does not mean. Industrie 4.0 can be interpreted in different ways from societal, competence-oriented, production-oriented, or behavior-oriented perspectives. Its implementation in Germany has evolved from a political construct into a technical and economic development within the digital transformation.
New conceptual developments such as Industrie 4.0 are often regarded as the domain of young, small companies that focus on innovation in the spirit of a start-up culture, or as a topic for large corporations that, with research and development departments, their own engineering approaches, and large budgets, are able to test larger projects and develop their own concepts. Since the German economy and society depend heavily on the successful development of the Mittelstand, which holds a key position for a diversified economy and the robust development of prosperity, the question addressed here, namely how medium-sized companies cope with the digital transformation and the challenge of Industrie 4.0, plays a central role.
The preceding subchapters have already made clear that the digital transformation is relevant to all areas of society and the economy, and that it poses a particular challenge for the Mittelstand and numerous hidden champions. This subchapter deals with the concrete implementation of the digital transformation in these industry and company segments, where changes are required all the way from business models and the associated customer segments to corporate and leadership culture. For a successful transformation process, these fields of transformation, in the sense of development and change, must be identified and coordinated with one another.
This book provides a well-founded overview of change and corporate venture capital strategies in the media sector. Many media companies face the challenge of further developing their (core) business and pursuing a clearer orientation toward customer value while still maintaining their own creative and journalistic mission. In addition, new business fields must be established, and the acquisition of companies or investments in start-ups must be pursued. The balancing act required between strategic and financial goals, together with operational implementation, represents a considerable challenge.
In their contributions, the industry experts analyze the diverse potentials as well as concrete measures and best cases. Both media insiders and media-independent experts have their say, examining change strategies, measures, and case studies, and pointing out new design possibilities.
This introductory chapter addresses the question of which transformation strategies have been seen in the media industry over the past twenty years and which risks and opportunities their application entails for the individual companies and actors. It quickly becomes clear that "the media industry" as such does not and cannot exist; rather, it is a very complex and at times contradictory industry. We will show in this chapter that it is precisely the adjacent fields from which most innovative approaches come, approaches that have a disruptive effect on the core industry with its traditional core business. This chapter therefore strives for a structured analytical perspective, yet it covers a field in which the dynamics of actors and markets ensure that the industry, its segments, companies, products, and markets are constantly in motion and remain so.
As the fundamental considerations on the transformation strategy of media companies in Section 1.1 have already made clear, it is not easy for media companies and their leadership to avoid stifling new impulses under the oversized rulebook of a standard organization trimmed for efficiency, while at the same time developing the ability not merely to preserve new impulses within separate organizational units, but to allow them to serve as a nucleus for change and for the further development of the main company. This general problem is very evident in the media industry's handling of transformation strategies, for it ultimately requires constant balancing and readjustment, since the simple, ideal path does not exist and probably cannot exist.
Significant improvements in module performance are possible via the implementation of multi-wire electrodes. This is economically sound as long as the mechanical yield of the production is maintained. While flat ribbons have a relatively large contact area over which to exert forces on the solar cell, wires with a round cross section reduce this contact area considerably – in theory to an infinitely thin line. Therefore, the local stresses induced by the electrodes might increase to the point that mechanical production yields suffer unacceptably.
In this paper, we assess this issue with an analytical mechanical model as well as experiments on an encapsulant-free N.I.C.E. test setup. From these, we derive estimates of the relationship between lay-up accuracy and expected breakage losses. This paves the way for cost-optimized choices of handling equipment in industrial N.I.C.E.-wire production lines.
Micronization of biochar (BC) may ease its application in agriculture. For example, fine biochar powders can be applied as suspensions via drip-irrigation systems or can be used to produce granulated fertilizers. However, micronization may affect important physical biochar properties such as the water holding capacity (WHC) or the porosity.
The impact of the circular economy on sustainable development: A European panel data approach
(2022)
The circular economy (CE) has attracted considerable attention because of its potential to help achieve sustainable development (SD). This paper presents a comprehensive analysis of the effect of the CE on the three dimensions of SD at the country level. We analysed the impact of each CE source of value (renewable energy, reuse, repair, recycling) and the influence of an overall factor-analysis-derived measure of the CE on the economic, environmental and social dimensions of SD. The aim was to compare the individual impacts and outcomes of the CE and its sources of value in a single study. Panel data analysis was performed using a sample of 25 European countries for the period 2010 to 2019. The findings show a major impact of the CE on achieving SD, with positive effects on the economy, environment and society. However, the results show that the impact of each CE value source on the three SD dimensions varies. While renewable energies and reuse reduce the impact on the environment, recycling has no effect, and repair increases GHG emissions. However, repair is the only CE source with a positive economic impact at the country level. Finally, renewable energy, repair and recycling reduce unemployment. Decision makers should conduct impact analyses to design suitable, efficient and targeted measures depending on each country's specific objectives.
Kein Mensch lernt digital
(2022)
In this book, Ralf Lankau exposes the economic interests of the IT industry and its lobbyists. He addresses both the scientific foundations (cybernetics, behaviorism) and the technical framework of networks and cloud computing before sketching concrete proposals for a reflective, responsible use of digital technology in the classroom. His thesis: we must return to our pedagogical task and make (digital) media once again what they are in face-to-face teaching: didactic aids.
The second edition takes up in particular the experiences with digitalization during the coronavirus pandemic. Social learning and the shaping of pedagogical relationships proved to be important parameters for learning success. Pure distance learning, by contrast, showed that those who already struggled with learning fell even further behind during the pandemic.
The aim of this study is to identify country-level indicators that could prove useful in improving the effectiveness of fraud detection in European Structural and Investment Funds. The chapter analyses EU funds from the 2014–2020 period. The study suggests the convenience of tracking funds, especially in countries with higher GDP and higher transparency levels, and the lesser relevance of the number of irregularities for countries with higher GDP and those receiving larger funds. Fraud and fraud detection rates in individual funds vary significantly across states. Federal states, such as the Federal Republic of Germany, are comparatively successful in detecting fraud in EU funds.
During the coronavirus crisis, labs in mechanical engineering had to be offered in digital form at short notice. For this purpose, digital twins of more complex test benches in the field of fluid energy machines were used in the mechanical engineering course, with which the students were able to interact remotely to obtain measurement data. The concept of each lab was revised with regard to its implementation as a remote laboratory. Fortunately, real-world labs could be fully replaced by remote labs, and student perceptions of the remote labs were mostly positive. This paper explains the concept and design of the digital twins and the lab, as well as the layout, procedure, and results of the accompanying evaluation. However, the implementation of the digital twins to date does not yet include features that address the tactile experience of working in real-world labs.
Energietechnik
(2022)
This textbook conveys a fundamental yet concise understanding of the interrelationships of energy conversion processes and covers the entire breadth of energy technology. Its focal points range from a complete description of conventional and, above all, sustainable renewable energy technologies, through gas and steam turbine power plants and combined heat, power, and cooling plants, to energy storage, energy distribution, and finally an outline of global warming and the associated climate policy.
In the current edition, several chapters were revised by new authors and in part fundamentally rewritten. Recent political changes were carefully incorporated into, among others, the chapters on renewable technologies, energy distribution, market liberalization, and the energy transition.
Fossil fuels are energy- and carbon-containing substances that formed from biomass over many millions of years under favorable conditions (exclusion of oxygen). The geological age of hard coal exceeds 250 million years; that of lignite is around 50 million years. At present, considerably more fossil fuels are consumed than are formed. While the decommissioning of hard-coal-fired steam power plants has begun in Germany, global consumption of these fuels continues to rise. World hard coal production reached 7.3 billion tonnes in 2019, an increase of almost 3% over 2018 according to [1]. This increase occurred mainly in China and Indonesia; in Germany, coal consumption declined over the same period.
This paper has the objective of creating a framework for a different cultural dimension of corporate entrepreneurship, leading to corporate entrepreneurial culture (CEC). The analysis of CEC is based on a review of existing concepts of organisational culture and entrepreneurship, which are combined to create a framework of CEC including macro- and micro-levels and examples of subcultures. Core ideas of the framework are validated by qualitative interviews with ten experts. The organisational category of the CEC framework is defined by the levels of micro-cultures or subcultures and extends to the upper levels of the hierarchy, up to the industry level. Geographic categories such as regional or national culture are also part of the system. The individual category of the CEC framework is characterised by competencies (including aspects such as motivation, creativity, mobilising others, coping with uncertainty, teamwork and social competencies) and entrepreneurial personalities. The results of the interviews show the importance of these individual competencies for a lively CEC. The different levels, such as national and professional cultures, as a dimension of the organisational category of the framework are also confirmed by the interviews. The findings indicate that the individual category of CEC could be used for job satisfaction or engagement, and that the degree of CEC of an organisation could be defined and developed via the organisational category. The identified framework contributes to an understanding of this complex topic and supports companies in the implementation of entrepreneurial ideas in different organisational contexts.
Currently, many theoretical as well as practically relevant questions regarding the transferability and robustness of Convolutional Neural Networks (CNNs) remain unsolved. While ongoing research efforts engage these problems from various angles, in most computer-vision-related cases these approaches can be generalized to investigations of the effects of distribution shifts in image data. In this context, we propose to study the shifts in the learned weights of trained CNN models. Here we focus on the properties of the distributions of the dominantly used 3×3 convolution filter kernels. We collected and publicly provide a dataset with over 1.4 billion filters from hundreds of trained CNNs, using a wide range of datasets, architectures, and vision tasks. In a first use case of the proposed dataset, we can show highly relevant properties of many publicly available pre-trained models for practical applications: I) We analyze distribution shifts (or the lack thereof) between trained filters along different axes of meta-parameters, such as the visual category of the dataset, the task, the architecture, or the layer depth. Based on these results, we conclude that model pre-training can succeed on arbitrary datasets if they meet size and variance conditions. II) We show that many pre-trained models contain degenerated filters which make them less robust and less suitable for fine-tuning on target applications. Data & Project website: https://github.com/paulgavrikov/cnn-filter-db.
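The kind of filter-distribution comparison described above can be illustrated on synthetic data. The following numpy sketch uses a two-sample Kolmogorov-Smirnov statistic as one possible shift measure between two sets of 3×3 kernels; random kernels stand in for real trained weights, and the study's actual metrics may differ:

```python
import numpy as np

def filter_shift(filters_a: np.ndarray, filters_b: np.ndarray) -> float:
    """Two-sample Kolmogorov-Smirnov statistic over flattened filter weights."""
    a = np.sort(filters_a.ravel())
    b = np.sort(filters_b.ravel())
    grid = np.concatenate([a, b])  # evaluate the empirical CDFs at all samples
    cdf_a = np.searchsorted(a, grid, side="right") / a.size
    cdf_b = np.searchsorted(b, grid, side="right") / b.size
    return float(np.max(np.abs(cdf_a - cdf_b)))

rng = np.random.default_rng(0)
# Random stand-ins for 64 trained 3x3 kernels per "model":
same = filter_shift(rng.normal(0.0, 0.1, (64, 3, 3)),
                    rng.normal(0.0, 0.1, (64, 3, 3)))
shifted = filter_shift(rng.normal(0.0, 0.1, (64, 3, 3)),
                       rng.normal(0.05, 0.2, (64, 3, 3)))
print(same < shifted)  # a genuine distribution shift yields a larger statistic
```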
Over the last years, Convolutional Neural Networks (CNNs) have been the dominating neural architecture in a wide range of computer vision tasks. From an image and signal processing point of view, this success might be a bit surprising, as the inherent spatial pyramid design of most CNNs apparently violates basic signal processing laws, i.e., the sampling theorem, in their down-sampling operations. However, since poor sampling appeared not to affect model accuracy, this issue was broadly neglected until model robustness started to receive more attention. Recent work in the context of adversarial attacks and distribution shifts showed that there is a strong correlation between the vulnerability of CNNs and aliasing artifacts induced by poor down-sampling operations. This paper builds on these findings and introduces an aliasing-free down-sampling operation which can easily be plugged into any CNN architecture: FrequencyLowCut pooling. Our experiments show that, in combination with simple Fast Gradient Sign Method (FGSM) adversarial training, our hyper-parameter-free operator substantially improves model robustness and avoids catastrophic overfitting. Our code is available at https://github.com/GeJulia/flc_pooling
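The core idea, removing all frequency content above the new Nyquist limit before subsampling so that the stride cannot alias, can be shown in one dimension. This is only a hedged numpy sketch of the principle; the actual FrequencyLowCut pooling operates on 2-D feature maps inside a CNN (see the linked repository for the real implementation):

```python
import numpy as np

def flc_pool_1d(x: np.ndarray) -> np.ndarray:
    """Low-pass to the new Nyquist band, then subsample by 2 (no aliasing)."""
    n = x.size
    spec = np.fft.fft(x)
    keep = n // 4  # highest frequency representable after stride-2 subsampling
    spec[keep:n - keep + 1] = 0.0  # zero everything above the new Nyquist
    return np.real(np.fft.ifft(spec))[::2]

t = np.arange(64)
hi = np.cos(2 * np.pi * 28 / 64 * t)  # frequency 28 > new Nyquist of 16
naive = hi[::2]                       # naive striding aliases it to a low frequency
print(np.abs(naive).max() > 0.9)             # the aliased component is still large
print(np.abs(flc_pool_1d(hi)).max() < 1e-6)  # the low-cut removes it almost entirely
```

Under naive striding the unrepresentable high frequency folds into a spurious low frequency, while the low-cut version suppresses it to numerical noise, which mirrors the robustness argument of the paper.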
Deep learning models are intrinsically sensitive to distribution shifts in the input data. In particular, small, barely perceivable perturbations to the input data can force models to make wrong predictions with high confidence. A common defense mechanism is regularization through adversarial training, which injects worst-case perturbations back into training to strengthen the decision boundaries and to reduce overfitting. In this context, we investigate the 3×3 convolution filters that form in adversarially trained models. Filters are extracted from 71 public models of the ℓ∞-RobustBench CIFAR-10/100 and ImageNet1k leaderboards and compared to filters extracted from models built on the same architectures but trained without robust regularization. We observe that adversarially robust models appear to form more diverse, less sparse, and more orthogonal convolution filters than their normal counterparts. The largest differences between robust and normal models are found in the deepest layers and in the very first convolution layer, which consistently and predominantly forms filters that can partially eliminate perturbations, irrespective of the architecture.
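Two of the filter statistics mentioned, sparsity and orthogonality, are easy to make concrete. A hedged numpy sketch with illustrative threshold and scoring choices (the paper's exact definitions may differ):

```python
import numpy as np

def filter_sparsity(filters: np.ndarray, eps: float = 1e-3) -> float:
    """Fraction of weights whose magnitude falls below eps (a chosen threshold)."""
    return float(np.mean(np.abs(filters) < eps))

def mean_abs_cosine(filters: np.ndarray) -> float:
    """Mean absolute cosine similarity between distinct flattened kernels;
    lower values mean the filter set is closer to orthogonal."""
    f = filters.reshape(filters.shape[0], -1)
    f = f / np.linalg.norm(f, axis=1, keepdims=True)
    gram = f @ f.T
    off_diagonal = gram[~np.eye(len(f), dtype=bool)]
    return float(np.mean(np.abs(off_diagonal)))

rng = np.random.default_rng(1)
k = rng.normal(size=(32, 3, 3))  # random stand-in for 32 trained 3x3 kernels
print(filter_sparsity(k) <= 1.0)
print(mean_abs_cosine(np.eye(9).reshape(9, 3, 3)))  # perfectly orthogonal set: 0.0
```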
Despite the success of convolutional neural networks (CNNs) on many academic benchmarks for computer vision tasks, their application in the real world still faces fundamental challenges. One of these open problems is the inherent lack of robustness, unveiled by the striking effectiveness of adversarial attacks. Current attack methods are able to manipulate the network's prediction by adding specific but small amounts of noise to the input. In turn, adversarial training (AT) aims to achieve robustness against such attacks, and ideally a better model generalization ability, by including adversarial samples in the training set. However, an in-depth analysis of the resulting robust models beyond adversarial robustness is still pending. In this paper, we empirically analyze a variety of adversarially trained models that achieve high robust accuracies when facing state-of-the-art attacks, and we show that AT has an interesting side effect: it leads to models that are significantly less overconfident in their decisions than non-robust models, even on clean data. Furthermore, our analysis of robust models shows that not only AT but also the models' building blocks (like activation functions and pooling) have a strong influence on the models' prediction confidences. Data & Project website: https://github.com/GeJulia/robustness_confidences_evaluation
Many commonly well-performing convolutional neural network models have been shown to be susceptible to input data perturbations, indicating low model robustness. To reveal model weaknesses, adversarial attacks are specifically optimized to generate small, barely perceivable image perturbations that flip the model prediction. Robustness against such attacks can be gained by using adversarial examples during training, which in most cases reduces the measurable model attackability. Unfortunately, this technique can lead to robust overfitting, which results in non-robust models. In this paper, we analyze adversarially trained, robust models in the context of a specific network operation, the downsampling layer, and provide evidence that robust models have learned to downsample more accurately and suffer significantly less from downsampling artifacts, i.e., aliasing, than baseline models. In the case of robust overfitting, we observe a strong increase in aliasing and propose a novel early-stopping approach based on the measurement of aliasing.
Aerosol particles play an important role in the climate system by absorbing and scattering radiation and by influencing cloud properties. They are also one of the biggest sources of uncertainty in climate modeling. Many climate models do not include aerosols in sufficient detail due to computational constraints. To represent key processes, aerosol microphysical properties and processes have to be accounted for. This is done in the ECHAM-HAM (European Center for Medium-Range Weather Forecast-Hamburg-Hamburg) global climate aerosol model using the M7 microphysics, but high computational costs make it very expensive to run at finer resolution or for longer times. We aim to use machine learning to emulate the microphysics model at sufficient accuracy while reducing the computational cost through fast inference. The original M7 model is used to generate input-output pairs on which a neural network (NN) is trained. We are able to learn the variables' tendencies, achieving an average R² score of 77.1%. We further explore methods to inform and constrain the NN with physical knowledge to reduce mass violation and enforce mass positivity. On a graphics processing unit (GPU), we achieve a speed-up of more than 64× compared to the original model.
Estimating the Robustness of Classification Models by the Structure of the Learned Feature-Space
(2022)
Over the last decade, the development of deep image classification networks has mostly been driven by the search for the best performance in terms of classification accuracy on standardized benchmarks like ImageNet. More recently, this focus has been expanded by the notion of model robustness, i.e., the generalization ability of models towards previously unseen changes in the data distribution. While new benchmarks, like ImageNet-C, have been introduced to measure robustness properties, we argue that fixed test sets are only able to capture a small portion of possible data variations and are thus limited and prone to generate new overfitted solutions. To overcome these drawbacks, we suggest estimating the robustness of a model directly from the structure of its learned feature space. We introduce robustness indicators, which are obtained via unsupervised clustering of latent representations from a trained classifier, and show that they correlate very highly with model performance on corrupted test data.
Many commonly well-performing convolutional neural network models have been shown to be susceptible to input data perturbations, indicating low model robustness. Adversarial attacks are specifically optimized to reveal model weaknesses by generating small, barely perceivable image perturbations that flip the model prediction. Robustness against attacks can be gained, for example, by using adversarial examples during training, which effectively reduces the measurable model attackability. In contrast, research analyzing the source of a model's vulnerability is scarce. In this paper, we analyze adversarially trained, robust models in the context of a specifically suspicious network operation, the downsampling layer, and provide evidence that robust models have learned to downsample more accurately and suffer significantly less from aliasing than baseline models.
It is currently fashionable to advertise one's own company as particularly environmentally friendly. The keyword is "climate neutrality". Many companies want to advertise with terms such as "CO2-neutral" or similar because it sounds good. Naturally, this label does not follow from the production and transport of their services and products causing no emissions at all: production, shipping, general energy consumption and business travel most certainly do emit. Rather, companies seek to earn the label by funding environmental projects (often on other continents) or by purchasing "offset certificates" through which third parties are supposed to guarantee, for example, the permanent sequestration of CO2 (e.g. in biochar sinks). This article provides a first overview of the current legal situation and the planned amendment of the UGP-Richtlinie (Unfair Commercial Practices Directive), in particular with regard to §5a UWG.
This paper aims to draw attention to an urgent need for reform of the regulatory framework of the broader export credit system to ensure a new and comprehensive "safe haven" for officially supported export credits. The purpose is to analyse the complex debate on disciplines of the World Trade Organization (WTO) and the Organisation for Economic Co-operation and Development (OECD), creating a point of reference for future analysis of and debates around the "carve-out clause" of the Agreement on Subsidies and Countervailing Measures (ASCM) and a "safe haven" in a broader sense.
In this paper, the authors focus on the description of polarization with the help of the Jones calculus and on the application of polarization in photography. Furthermore, the effect of a circular polarization filter is described using the Jones calculus. In addition, an enhancement of the artistic and creative possibilities in photography through quantization or parametrization of the Jones matrices is presented.
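The effect of a photographic circular polarization filter can be reproduced with standard textbook Jones matrices (up to overall phase factors, and convention-dependent); this sketch is an illustration of the calculus, not taken from the paper:

```python
import numpy as np

def rot(theta):
    """2x2 rotation matrix for rotating an optical element by theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def rotated(M, theta):
    """Jones matrix of element M with its axis rotated by theta."""
    return rot(theta) @ M @ rot(-theta)

# Common-convention Jones matrices (overall phase factors dropped):
LIN_POL_H = np.array([[1, 0], [0, 0]], dtype=complex)  # linear polarizer, horizontal
QWP = np.array([[1, 0], [0, 1j]], dtype=complex)       # quarter-wave plate, fast axis horizontal

# Photographic circular polarizer: linear polarizer followed by a
# quarter-wave plate oriented at 45 degrees.
circ_pol = rotated(QWP, np.pi / 4) @ LIN_POL_H

E_in = np.array([1, 0], dtype=complex)   # horizontally polarized input light
E_out = circ_pol @ E_in
E_out /= np.linalg.norm(E_out)
# E_out has equal amplitudes and a 90-degree phase shift between
# components, i.e. circularly polarized light.
```

Chaining further elements is just matrix multiplication, which is what makes the Jones calculus convenient for parametrizing such filter effects.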
Teaching and learning concepts that are adapted to the constantly evolving requirements due to rapid technological progress are essential for teaching in media photonics technology. After the development of a concept for research-oriented education in optics and photonics, the next step will be a conceptual restructuring and redesign of the entire curriculum for education in media photonics technology. By including typical research activities as essential components of the learning process, a broad platform for practical projects and applied research can be created, offering a variety of new development opportunities.
Voice user interfaces (VUIs) offer an intuitive, fast and convenient way for humans to interact with machines and computers. Yet, whether they will be truly successful and find widespread uptake in the near future depends on the user experience (UX) they offer. With this survey-based study (n = 108), we aim to identify the major annoyances German voice assistant users face in voice-driven human-computer interactions. The results of our questionnaire show that irritations fall into six categories: privacy issues, unwanted activation, comprehensibility, response quality, conversational design and voice characteristics. Our findings can help identify key areas of work for optimizing voice user experience in order to achieve greater adoption of the technology. In addition, they can provide valuable information for the further development and standardization of voice user experience (VUX) research.
Digital customer interactions have gained enormously in importance in recent years: in both B2B and B2C, large parts of the interaction are designed to be digital or hybrid. Whether it is a cashless payment at the checkout, an order form for an online purchase or buying a train ticket at a machine, customers have already experienced digital interactions in many variants and cope with them more or less well.
The conversion of space heating for private households to climate-neutral energy sources is an essential component of the energy transition, as this sector was responsible for 9.4 % of Germany's carbon dioxide emissions as of 2018. In addition to reducing demand through better insulation, the use of heat pumps fed with electricity from renewable energy sources, such as on-site photovoltaic (PV) systems, is an important solution approach.
Advanced energy management and control can help to make optimal use of such heating systems. "Optimal" here can refer, for example, to maximizing self-consumption of self-generated PV power, extending component lifetime, or grid-friendly behavior that avoids load peaks. A powerful method for this is model predictive control (MPC), which calculates optimal schedules for the controllable influence variables based on models of the system dynamics, current measurements of system states, and predictions of future external influence parameters.
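The scheduling idea behind MPC can be illustrated with a deliberately tiny toy problem: choose binary heat-pump on/off decisions over a short horizon so that electricity cost is minimized while a thermal storage stays within bounds. This brute-force sketch is not the economic MPC of the paper; all parameter names and values (COP, power, storage bounds, prices) are made-up assumptions:

```python
import itertools

import numpy as np

def mpc_schedule(price, demand, cop=3.0, p_hp=2.0,
                 soc0=2.0, soc_min=1.0, soc_max=6.0):
    """Toy MPC step: exhaustively search binary heat-pump schedules over
    the horizon, minimizing electricity cost while keeping the thermal
    storage state of charge (kWh) within bounds.

    price:  electricity price per step (EUR/kWh)
    demand: heat demand per step (kWh)
    """
    horizon = len(price)
    best_cost, best_u = None, None
    for u in itertools.product([0, 1], repeat=horizon):
        soc, feasible, cost = soc0, True, 0.0
        for t in range(horizon):
            # Heat delivered by the pump minus heat drawn by the building.
            soc += u[t] * p_hp * cop - demand[t]
            if not (soc_min <= soc <= soc_max):
                feasible = False
                break
            cost += u[t] * p_hp * price[t]
        if feasible and (best_cost is None or cost < best_cost):
            best_cost, best_u = cost, u
    return best_u, best_cost

price = np.array([0.40, 0.10, 0.40, 0.10])   # cheap PV surplus in steps 2 and 4
demand = np.array([1.0, 1.0, 2.0, 1.0])
u, cost = mpc_schedule(price, demand)
# The optimizer shifts the single required heating pulse into a cheap step.
```

A real MPC replaces the exhaustive search with a mixed-integer or nonlinear solver, re-optimizes at every step with fresh measurements and forecasts, and applies only the first control move of each schedule.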
In this paper, we discuss three different use cases that show how artificial intelligence can contribute to the realization of such an MPC-based energy management and control system. This is done using the example of a real, inhabited single-family home that has provided the necessary data and where the methods are implemented and tested. The heating system consists of an air-to-water heat pump with direct condensation, a thermal stratified storage tank, a pellet burner and a heating rod, and provides both space heating and hot water. The house generates a significant portion of its electricity needs through a rooftop PV system.