Refine
Year of publication
Document Type
- Conference Proceeding (1184)
- Article (reviewed) (678)
- Article (unreviewed) (566)
- Part of a Book (460)
- Contribution to a Periodical (287)
- Book (227)
- Other (139)
- Working Paper (105)
- Patent (98)
- Report (76)
- Periodical Part (54)
- Doctoral Thesis (31)
- Letter to Editor (30)
- Study Thesis (2)
- Image (1)
- Moving Images (1)
Conference Type
- Conference article (945)
- Conference abstract (156)
- Other (42)
- Conference poster (32)
- Conference proceedings volume (13)
Language
- German (2071)
- English (1856)
- Other language (5)
- Russian (3)
- Multiple languages (2)
- French (1)
- Spanish (1)
Is part of the Bibliography
- yes (3939)
Keywords
- Digitalisation (41)
- RoboCup (32)
- Thin-layer chromatography (28)
- Social Media (24)
- COVID-19 (23)
- Communication (23)
- Employment reference (22)
- Energy supply (22)
- E-Learning (21)
- Export (21)
Institute
- Fakultät Maschinenbau und Verfahrenstechnik (M+V) (945)
- Fakultät Medien und Informationswesen (M+I) (bis 21.04.2021) (808)
- Fakultät Elektrotechnik und Informationstechnik (E+I) (bis 03/2019) (779)
- Fakultät Wirtschaft (W) (616)
- Fakultät Elektrotechnik, Medizintechnik und Informatik (EMI) (ab 04/2019) (464)
- INES - Institut für nachhaltige Energiesysteme (239)
- Fakultät Medien (M) (ab 22.04.2021) (219)
- ivESK - Institut für verlässliche Embedded Systems und Kommunikationselektronik (155)
- Zentrale Einrichtungen (81)
- IMLA - Institute for Machine Learning and Analytics (78)
Open Access
- Open Access (1463)
- Closed Access (1245)
- Closed (528)
- Bronze (285)
- Diamond (76)
- Gold (76)
- Hybrid (49)
- Green (16)
Anyone who, as an educator and researcher, engages with the topic of "digitalisation and schools" finds that only a few people realise the scope of the intended transformation of educational institutions into automated, algorithmically controlled learning factories. What is overlooked is that theories and empirical models such as "data-driven school development" and "learning analytics" entail fundamental paradigm shifts that shake both the humanist and the Christian conception of the human being. With cybernetics and behaviourism on the one side and so-called "artificial intelligence" (AI) and the data-economy business models built on it on the other, these schooling models undermine human autonomy, the right to self-determination and freedom of action. Representatives of these disciplines claim that individuals and social communities alike can be programmed and controlled like machines. They ignore the fact that maturity and personal responsibility are the goal of school and teaching, not machine-computed behavioural control and manipulation. These undesirable developments are not due to the technology as such, which could be used differently, but to the business models of the IT providers.
Smart technologies enable close-meshed monitoring and control of pupils. The decisive question about IT in schools is therefore: do we follow the logic of the technical systems, or do we return to the pedagogical mandate of educating for maturity and personal responsibility?
Kein Mensch lernt digital
(2022)
In this book, Ralf Lankau exposes the economic interests of the IT industry and its lobbyists. He covers both the scientific foundations (cybernetics, behaviourism) and the technical framework of networks and cloud computing before sketching concrete proposals for a reflective, responsible use of digital technology in the classroom. His thesis: we must return to our pedagogical task and make (digital) media once again what they are in classroom teaching: didactic aids.
The second edition addresses in particular the experiences with digitalisation during the COVID-19 pandemic. Social learning and the pedagogical relationship proved to be important parameters for learning success. Pure distance learning, by contrast, showed that pupils who had already struggled with learning fell even further behind during the pandemic.
Der Kaiser ist ja nackt
(2016)
In discussing and voting on the three motions mentioned, the Lower Saxony state parliament is deciding on more than just the distribution of the investment funds from the "DigitalPakt Schule". Fundamental questions are at stake: who determines the teaching content at state schools and the (media) technology used? Does the state's education policy remain committed to pupils' entitlement and right to individual education and personal development, as laid down in the state constitution (§1(4)) and in the Lower Saxony School Act (§2 Bildungsauftrag, NSchG)? Do public schools continue to provide a sound general education as the basis for social participation in democratic communities? Or will business associations and IT lobbyists prevail, who advocate more and ever earlier use of digital devices in educational institutions, who demand "programming as early as kindergarten" and want to blanket schools with "high-performance WLAN" (CDU/SPD motion) without even considering radiation exposure? Will schools, by resolution of the state parliament, be turned into vocational training and job preparation facilities (Münch, 2018, 177), or not?
Yet it is scientifically well established that the quality of schools and teaching is precisely not tied to media technology. What matters is always qualified teachers, well-structured, age-appropriate lessons and social interaction (studies by Hattie, Telekom, the OECD and others). Teaching and learning are individual and social processes, not technically controllable procedures. The motions disregard both the historical evidence of the failure of media technology in the classroom (Pias) and the countervailing developments already under way in the USA: children at (expensive) private schools are once again taught by real teachers and enjoy the "luxury of human interaction"; screens have been banished from those schools, while children at public schools have to learn on tablets without teachers (Bowles, 2018).
With these motions, the Lower Saxony state parliament is therefore deciding whether IT concepts that have already failed in the USA will be repeated, or whether a discussion will be opened about sensible, pedagogically sound media concepts for schools, a discussion that must not be reduced to digital technology. So who decides on teaching content and media technology at schools? The IT industry and representatives of the data economy, who want to digitalise and privatise educational offerings? Or elected representatives, guided by pedagogical expertise and committed to the pupils?
Scheuklappen statt Weitblick
(2019)
Der bildungsferne Campus
(2019)
During the day-to-day operation of localization systems in mines, the technical staff tends to rearrange radio equipment incorrectly: the positions of devices may not be accurately marked on a map, or the marked positions may not correspond to reality. This can lead to positioning inaccuracies and errors in the operation of the localization system. This paper presents two Bayesian algorithms for the automatic correction of equipment positions on the map using trajectories reconstructed by inertial measurement units mounted on mobile objects such as pedestrians and vehicles. A predefined map of the mine, represented as an undirected weighted graph, is used as input. The algorithms were implemented following the Simultaneous Localization and Mapping (SLAM) approach. The results show that both methods are capable of detecting misplaced access points and providing corresponding corrections. The discrete Bayesian filter outperforms the unscented Kalman filter, which, however, requires more computational power.
This paper presents an extended version of a previously published Bayesian algorithm for the automatic correction of equipment positions on the map with simultaneous localization of the mobile object trajectory (SLAM) in an underground mine environment represented by an undirected graph. The proposed extended SLAM algorithm requires much less preliminary data on possible equipment positions and uses an additional resample-move algorithm to significantly improve the overall performance.
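The two abstracts above describe discrete Bayesian filtering over a graph-based mine map. As a rough illustration of that idea only (not the authors' implementation; the graph, the range likelihood and all names are assumptions), a histogram filter over candidate nodes could look like this:

```python
import math

# Undirected mine graph (assumed): node -> list of (neighbour, edge length in metres)
GRAPH = {
    "A": [("B", 30.0)],
    "B": [("A", 30.0), ("C", 25.0)],
    "C": [("B", 25.0), ("D", 40.0)],
    "D": [("C", 40.0)],
}

def predict(belief, stay_prob=0.8):
    """Diffuse a little probability mass to neighbouring nodes (uncertainty model)."""
    new = {n: 0.0 for n in belief}
    for node, p in belief.items():
        new[node] += stay_prob * p
        neighbours = GRAPH[node]
        for nbr, _ in neighbours:
            new[nbr] += (1.0 - stay_prob) * p / len(neighbours)
    return new

def update(belief, measured_dist, dist_to_node, sigma=5.0):
    """Weight each candidate node by a Gaussian range likelihood and renormalise."""
    new = {}
    for node, p in belief.items():
        err = measured_dist - dist_to_node[node]
        new[node] = p * math.exp(-0.5 * (err / sigma) ** 2)
    total = sum(new.values()) or 1.0
    return {n: v / total for n, v in new.items()}

# Uniform prior over candidate access-point positions, then one predict/update cycle
# with a made-up range observation taken along the restored IMU trajectory.
belief = {n: 1.0 / len(GRAPH) for n in GRAPH}
belief = predict(belief)
belief = update(belief, measured_dist=28.0,
                dist_to_node={"A": 60.0, "B": 30.0, "C": 5.0, "D": 45.0})
print(max(belief, key=belief.get))  # most plausible node for the misplaced access point
```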
Finding clusters in high dimensional data is a challenging research problem. Subspace clustering algorithms aim to find clusters in all possible subspaces of the dataset, where a subspace is a subset of the dimensions of the data. But the exponential increase in the number of subspaces with the dimensionality of the data renders most of the algorithms inefficient as well as ineffective. Moreover, these algorithms have ingrained data dependency in the clustering process, so parallelization becomes difficult and inefficient. SUBSCALE is a recent subspace clustering algorithm which is scalable with the dimensions and contains independent processing steps which can be exploited through parallelism. In this paper, we aim to leverage, firstly, the computational power of widely available multi-core processors to improve the runtime performance of the SUBSCALE algorithm. The experimental evaluation has shown linear speedup. Secondly, we are developing an approach using graphics processing units (GPUs) for fine-grained data parallelism to accelerate the computation further. First tests of the GPU implementation show very promising results.
Finding clusters in high dimensional data is a challenging research problem. Subspace clustering algorithms aim to find clusters in all possible subspaces of the dataset, where a subspace is a subset of dimensions of the data. But the exponential increase in the number of subspaces with the dimensionality of data renders most of the algorithms inefficient as well as ineffective. Moreover, these algorithms have ingrained data dependency in the clustering process, which means that parallelization becomes difficult and inefficient. SUBSCALE is a recent subspace clustering algorithm which is scalable with the dimensions and contains independent processing steps which can be exploited through parallelism. In this paper, we aim to leverage the computational power of widely available multi-core processors to improve the runtime performance of the SUBSCALE algorithm. The experimental evaluation shows linear speedup. Moreover, we develop an approach using graphics processing units (GPUs) for fine-grained data parallelism to accelerate the computation further. First tests of the GPU implementation show very promising results.
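Since the abstracts above stress that SUBSCALE's per-dimension steps are independent, a minimal sketch of farming such steps out to CPU cores might look as follows; `find_dense_units_in_dimension` is a placeholder, not the published algorithm:

```python
from multiprocessing import Pool

def find_dense_units_in_dimension(args):
    """Placeholder for SUBSCALE's 1-D work: here it simply sorts the column values."""
    dim, column = args
    return dim, sorted(column)

def parallel_per_dimension_step(data_columns, workers=4):
    """data_columns: dict mapping dimension index -> list of values in that dimension."""
    with Pool(processes=workers) as pool:
        results = pool.map(find_dense_units_in_dimension, data_columns.items())
    return dict(results)

if __name__ == "__main__":
    columns = {0: [0.3, 0.1, 0.2], 1: [5.0, 4.0, 6.0]}
    print(parallel_per_dimension_step(columns, workers=2))
```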
Subspace clustering aims to find all clusters in all subspaces of a high-dimensional data space. We present a massively data-parallel approach that can be run on graphics processing units. It extends a previous density-based method that scales well with the number of dimensions. Its main computational bottleneck consists of (sequentially) generating a large number of minimal cluster candidates in each dimension and using hash collisions in order to find matches of such candidates across multiple dimensions. Our approach parallelizes this process by removing previous interdependencies between consecutive steps in the sequential generation process and by applying a very efficient parallel hashing scheme optimized for GPUs. This massive parallelization gives up to 70x speedup for the bottleneck computation when it is replaced by our approach and run on current GPU hardware. We note that depending on data size and choice of parameters, the parallelized part of the algorithm can take different percentages of the overall runtime of the clustering process, and thus the overall clustering speedup may vary significantly between different cases. However, even in our "worst-case" test, a small dataset where the computation makes up only a small fraction of the overall clustering time, our parallel approach still yields a speedup of more than 3x for the complete run of the clustering process. Our method could also be combined with parallelization of other parts of the clustering algorithm, with an even higher potential gain in processing speed.
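One way to picture the hash-collision matching described above: give every point a large random key, let a 1-D candidate's signature be the sum of its members' keys, and collect signatures that occur in several dimensions. The sketch below is illustrative only and is not the GPU implementation from the paper:

```python
import random
from collections import defaultdict

def match_candidates_across_dimensions(candidates_per_dim, n_points, seed=0):
    """candidates_per_dim: dict dim -> list of candidates, each a tuple of point indices."""
    rng = random.Random(seed)
    point_key = [rng.getrandbits(62) for _ in range(n_points)]  # large random key per point
    table = defaultdict(list)                                   # signature -> [(dim, candidate)]
    for dim, candidates in candidates_per_dim.items():
        for cand in candidates:
            signature = sum(point_key[p] for p in cand)         # same point set => same signature
            table[signature].append((dim, cand))
    # Keep only signatures that collide across more than one dimension.
    return {sig: hits for sig, hits in table.items()
            if len({dim for dim, _ in hits}) > 1}

matches = match_candidates_across_dimensions(
    {0: [(1, 2, 3), (4, 5, 6)], 1: [(1, 2, 3)], 2: [(4, 5, 6)]}, n_points=8)
print(matches)  # both point sets are found in two dimensions each
```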
This paper examines and evaluates the challenges and opportunities of export credit agencies (ECAs) in the Middle East and North Africa (MENA) region. Political risks, unrest and instability have made exporting in the MENA region arduous. Further challenges are demonetization, the lack of reliable information and the acquisition of skilled employees. Access to financial resources can be quite challenging, and several ECAs in the MENA region suffer from a lack of economies of scale. The global trends of globalisation and digitalisation have proved to be both a challenge and an opportunity. Nevertheless, ECAs are becoming progressively more important and needed in the MENA region. ECAs can benefit from this by working closely with financial institutions, banks and stakeholders. Other opportunities are infrastructure, renewable energies, international events and the diversification of the product portfolio. Through research on the ECAs EGE, ECI, Credit Oman and ICIEC, differences between multilateral and national export credit agencies have also been analysed.
Analysis of Miniaturized Printed Flexible RFID/NFC Antennas Using Different Carrier Substrates
(2020)
Antennas for Radio Frequency Identification (RFID) are beneficial for high-frequency (HF) applications, wireless data transmission via Near Field Communication (NFC) and many other use cases. Various requirements for the design of the reader and transmitter antennas must be met in order to achieve suitable transmission quality. In this work, a miniaturized, cost-effective RFID/NFC antenna for a microelectronic measurement system is designed and printed on different flexible carrier substrates using a new and low-cost Direct Ink Writing (DIW) technology. Practical aspects such as reflection and impedance magnitude as well as the behavior of the printed RFID/NFC antennas are analyzed and compared to an identical copper-based antenna of the same size, and the results are presented in this paper. Furthermore, the problems encountered during the printing process on the different substrates are evaluated. The effect of kink-free bending tests on the antenna characteristics is examined, and long-term measurements are subsequently carried out.
Radio frequency identification (RFID) antennas are popular for high frequency (HF) RFID, energy transfer and near field communication (NFC) applications. Particularly for wireless measurement systems, RFID/NFC technology is a good option for implementing a wireless communication interface. In this context, the design of the corresponding reader and transmitter antennas plays a major role in achieving suitable transmission quality. This work proves the feasibility of the rapid prototyping of an RFID/NFC antenna used for wireless communication and energy harvesting at the required frequency of 13.56 MHz. A novel and low-cost direct ink writing (DIW) technology utilizing a highly viscous silver nanoparticle ink is used for this process. This paper describes the development and analysis of low-cost printed flexible RFID/NFC antennas on cost-effective substrates for a microelectronic vital parameter measurement system. Furthermore, we compare the measured technical parameters with those of existing copper-based counterparts on an FR4 substrate.
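For orientation, the resonance condition behind tuning such an antenna to the 13.56 MHz carrier follows the standard relation f = 1/(2π√(LC)); the loop inductance below is an assumed value, not a measured one from the paper:

```python
import math

f = 13.56e6   # NFC/HF-RFID carrier frequency in Hz
L = 1.5e-6    # assumed loop inductance in henry (1.5 uH); a real printed loop must be measured
C = 1.0 / ((2 * math.pi * f) ** 2 * L)   # from f = 1 / (2*pi*sqrt(L*C))
print(f"required tuning capacitance: {C * 1e12:.0f} pF")   # roughly 92 pF for this L
```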
Analysing and predicting the advance rate of a tunnel boring machine (TBM) in hard rock is integral to tunnelling project planning and execution and has been applied in the industry for several decades with varying success. Most prediction models are based on or designed for large-diameter TBMs, and much research has been conducted on related tunnelling projects. However, only a few models incorporate information from projects with an outer diameter smaller than 5 m, and no penetration prediction model for pipe jacking machines exists to date. In contrast to large TBMs, small-diameter TBMs and their projects have received little attention in research. In general, they are characterised by distinctive features, including insufficient geotechnical information, sometimes rather short drive lengths, special machine designs and partially competing lining methods such as pipe jacking and segment lining. A database which covers most of the parameters mentioned above has been compiled to investigate the performance of small-diameter TBMs in hard rock. In order to provide sufficient geological and technical variance, this database contains 37 projects with 70 geotechnically homogeneous areas. Besides the technical parameters, important geotechnical data such as lithological information, unconfined compressive strength, tensile strength and point load index are included and evaluated. The analysis shows that segment lining TBMs achieve considerably higher penetration rates in similar geological and technical settings, mostly due to their design parameters. Different methodologies for predicting TBM penetration, including state-of-the-art models from the literature as well as newly derived regression and machine learning models, are discussed and deployed for backward modelling of the projects contained in the database. New ranges of application for small-diameter tunnelling in several industry-standard penetration models are presented, and new approaches for the penetration prediction of pipe jacking machines in hard rock are proposed.
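As a purely illustrative baseline (invented data, not the paper's database or models), a penetration-rate regression on the geotechnical features named above could be set up like this:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Columns: UCS [MPa], tensile strength [MPa], point load index [MPa], thrust per cutter [kN]
X = np.array([[120, 10, 5, 200], [80, 7, 3, 180], [200, 15, 8, 250],
              [60, 5, 2, 150], [150, 12, 6, 220], [100, 9, 4, 190]])
y = np.array([2.1, 3.0, 1.2, 3.8, 1.8, 2.5])   # penetration rate [mm/rev], invented values

model = RandomForestRegressor(n_estimators=200, random_state=0)
print("CV MAE:", -cross_val_score(model, X, y, cv=3,
                                  scoring="neg_mean_absolute_error").mean())
model.fit(X, y)
print("predicted penetration for a new face:", model.predict([[110, 9, 4, 210]])[0])
```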
Significant improvements in module performance are possible via the implementation of multi-wire electrodes. This is economically sound as long as the mechanical yield of the production is maintained. While flat ribbons have a relatively large contact area over which to exert forces onto the solar cell, wires with a round cross-section reduce this contact area considerably, in theory to an infinitely thin line. Therefore, the local stresses induced by the electrodes might increase to a point where mechanical production yields suffer unacceptably.
In this paper, we assess this issue by an analytical mechanical model as well as experiments with an encapsulant-free N.I.C.E. test setup. From these, we can derive estimations for the relationship between lay-up accuracy and expected breakage losses. This paves the way for cost-optimized choices of handling equipment in industrial N.I.C.E.-wire production lines.
Many sectors, such as finance, medicine, manufacturing, and education, use blockchain applications to profit from the unique bundle of characteristics of this technology. Blockchain technology (BT) promises benefits in trustability, collaboration, organization, identification, credibility, and transparency. In this paper, we conduct an analysis in which we show how open science can benefit from this technology and its properties. For this, we determined the requirements of an open science ecosystem and compared them with the characteristics of BT to show that the technology is suitable as an infrastructure. We also review the literature and promising blockchain-based projects for open science to describe the current research situation. To this end, we examine the projects in particular for their relevance and contribution to open science and categorize them according to their primary purpose. Several of them already provide functionalities that can have a positive impact on current research workflows. So BT offers promising possibilities for its use in science, but why is it not yet used on a large scale in that area? To answer this question, we point out various shortcomings, challenges, unanswered questions, and research potentials that we found in the literature and identified during our analysis. These topics shall serve as starting points for future research to foster BT for open science and beyond, especially in the long term.
Socially assistive robots (SARs) are becoming more prevalent in everyday life, emphasizing the need to make them socially acceptable and aligned with users' expectations. Robots' appearance impacts users' behaviors and attitudes towards them. Therefore, product designers choose visual qualities to give the robot a character and to imply its functionality and personality. In this work, we sought to investigate the effect of cultural differences on Israeli and German designers' perceptions of SARs' roles and appearance in four different contexts: a service robot for an assisted living/retirement residence facility, a medical assistant robot for a hospital environment, a COVID-19 officer robot, and a personal assistant robot for domestic use. The key insight is that although Israeli and German designers share similar perceptions of visual qualities for most of the robotics roles, we found differences in the perception of the COVID-19 officer robot's role and, by that, its most suitable visual design. This work indicates that context and culture play a role in users' perceptions and expectations; therefore, they should be taken into account when designing new SARs for diverse contexts.
Socially assistive robots (SARs) are becoming more prevalent in everyday life, emphasizing the need to make them socially acceptable and aligned with users' expectations. Robots' appearance impacts users' behaviors and attitudes towards them. Therefore, product designers choose visual qualities to give the robot a character and to imply its functionality and personality. In this work, we sought to investigate the effect of cultural differences on Israeli and German designers' perceptions and preferences regarding the suitable visual qualities of SARs in four different contexts: a service robot for an assisted living/retirement residence facility, a medical assistant robot for a hospital environment, a COVID-19 officer robot, and a personal assistant robot for domestic use. Our results indicate that Israeli and German designers share similar perceptions of visual qualities and most of the robotics roles. However, we found differences in the perception of the COVID-19 officer robot's role and, by that, its most suitable visual design. This work indicates that context and culture play a role in users' perceptions and expectations; therefore, they should be taken into account when designing new SARs for diverse contexts.
The bandwidth demand of Internet applications has grown so strongly in recent years that copper lines can no longer meet these requirements. From the experts' point of view, the solution to this problem is an optical network of optical fibres that reaches all the way into the customer's home: the so-called "Fiber to the Home" (FttH) concept.
For links of a few hundred metres, multimode fibres (MM fibres) are ideal thanks to their robustness and easy handling. In addition, the large core diameter of 62.5 µm allows a secure, stable and relatively low-loss connection. Alongside these advantages, however, drawbacks have become apparent over the last decade as bit rates have increased. The LEDs used for low transmission rates could still be employed for full excitation of the propagation modes. For higher transmission rates this is no longer possible, because LEDs are optically too slow and can no longer follow the fast modulation. Faster sources, such as laser diodes (LDs), must be used instead. Owing to the specific emission characteristics of LDs, however, the entire multimode fibre core can no longer be excited. This leads to different mode propagation delays in the multimode fibre, which in turn can adversely affect the transmission rate: the bandwidth drops rapidly.
In short-reach connections, large-diameter multimode fibres allow for robust and easy connections. Unfortunately, their propagation properties depend on the excitation conditions. We propose a launching technique using a fibre stub that can tolerate fabrication tolerances in terms of tilts and offsets to a large extent. A study of the influence of displaced connectors along the transmission link shows that the power distributions approach a steady-state power distribution very similar to the initial distribution established by the proposed launching scheme.
A polarization mode dispersion measurement set-up based on a Mach-Zehnder interferometer was realized. Measurements were carried out on short highly birefringent fibers and on long standard telecommunication single-mode fibers. In order to ensure highly accurate results, special emphasis was placed on the evaluation of the interference pattern. The procedure will be described in detail and practical measurement results will be presented.
The bandwidth behavior of graded-index multimode fibers (GI-MMFs) for different launching conditions is investigated to understand and characterize the effect of differential mode delay. In order to reduce the launch-power distribution the near field of a single-mode fiber is used to produce a controlled restricted launch. The baseband response is measured by observing the broadening of a narrow input pulse (time-domain measurement). The paper verifies the degradation in bandwidth due to profile distortion by scanning the spot of the single-mode fiber with a transversal offset from the center of the test sample. In addition, the impact of the launch-power distribution tuned by different spot-size diameters is demonstrated. Measurements were taken on ‘older’ 50-μm and 62.5-μm GI-MMFs as well as on laser-performance-optimized fibers more recently developed.
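The generic post-processing behind such a time-domain bandwidth measurement can be sketched as follows; the pulses here are synthetic stand-ins for measured traces, and the procedure is a common textbook approach rather than the paper's exact method:

```python
import numpy as np

dt = 1e-12                                       # 1 ps sampling interval
t = np.arange(-2e-9, 2e-9, dt)
pulse_in = np.exp(-0.5 * (t / 20e-12) ** 2)      # narrow launch pulse (synthetic)
pulse_out = np.exp(-0.5 * (t / 120e-12) ** 2)    # broadened received pulse (synthetic)

spec_in = np.fft.rfft(pulse_in)
spec_out = np.fft.rfft(pulse_out)
valid = np.abs(spec_in) > 1e-6 * np.abs(spec_in[0])   # avoid dividing by numerical noise
H = spec_out[valid] / spec_in[valid]                  # baseband response H(f)
f = np.fft.rfftfreq(len(t), dt)[valid]

mag_db = 20 * np.log10(np.abs(H) / np.abs(H[0]))
bw = f[np.argmax(mag_db < -3.0)]                      # first frequency below -3 dB
print(f"-3 dB baseband bandwidth: about {bw / 1e9:.2f} GHz")
```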
An isomorphous series of 10 microporous copper-based metal–organic frameworks (MOFs) with the general formulas ∞3[{Cu3(μ3-OH)(X)}4{Cu2(H2O)2}3(H-R-trz-ia)12] (R = H, CH3, Ph; X2– = SO42–, SeO42–, 2 NO32– (1–8)) and ∞3[{Cu3(μ3-OH)(X)}8{Cu2(H2O)2}6(H-3py-trz-ia)24Cu6]X3 (R = 3py; X2– = SO42–, SeO42– (9, 10)) is presented together with the closely related compounds ∞3[Cu6(μ4-O)(μ3-OH)2(H-Metrz-ia)4][Cu(H2O)6](NO3)2·10H2O (11) and ∞3[Cu2(H-3py-trz-ia)2(H2O)3] (12Cu), which are obtained under similar reaction conditions. The porosity of the series of cubic MOFs with twf-d topology reaches up to 66%. While the diameters of the spherical pores remain unaffected, adsorption measurements show that the pore volume can be fine-tuned by the substituents of the triazolyl isophthalate ligand and choice of the respective copper salt, that is, copper sulfate, selenate, or nitrate.
The synthesis and crystal structure of a novel copper-based MOF material are presented. The tetragonal crystal structure of ∞3[Cu4(μ4-O)(μ2-OH)2(Me2trz-pba)4] possesses a calculated solvent-accessible pore volume of 57%. Besides the preparation of single crystals, synthesis routes to microcrystalline materials are reported. While PXRD measurements ensure the phase purity of the as-synthesized material, TD-PXRD measurements and coupled DTA-TG-MS analysis confirm the stability of the network up to 230 °C. The pore volume of the microcrystalline material determined by nitrogen adsorption at 77 K depends on the synthetic conditions applied. After synthesis in DMF/H2O/MeOH the pores are blocked for nitrogen, whereas they are accessible for nitrogen after synthesis in H2O/EtOH and subsequent MeOH Soxhlet extraction. The corresponding experimental pore volume was determined by nitrogen adsorption to be V_pore = 0.58 cm³ g⁻¹. In order to characterize the new material and to show its adsorption potential, comprehensive adsorption studies with different adsorptives such as nitrogen, argon, carbon dioxide, methanol and methane were carried out at different temperatures. Unusual adsorption-desorption isotherms with one or two hysteresis loops are found, a remarkable feature of the new flexible MOF material.
The newly synthesized Zn4O-based MOF ∞3[Zn4(μ4-O){(Metrz-pba)2mPh}3]·8 DMF (1·8 DMF) of rare tungsten carbide (acs) topology exhibits a porosity of 43% and remarkably high thermal stability up to 430 °C. Single-crystal X-ray structure analyses could be performed using as-synthesized as well as desolvated crystals. Besides the solvothermal synthesis of single crystals, a scalable synthesis of microcrystalline material of the MOF is reported. Combined TG-MS and solid-state NMR measurements reveal the presence of mobile DMF molecules in the pore system of the framework. Adsorption measurements confirm that the pore structure is fully accessible for nitrogen molecules at 77 K. The adsorptive pore volume of 0.41 cm³ g⁻¹ correlates well with the pore volume of 0.43 cm³ g⁻¹ estimated from the single-crystal structure.
Interaction with and capturing information from the surroundings are dominated by vision and hearing. Haptics, on the other hand, widens the bandwidth and can also replace senses (sense switching) for the impaired. Haptic technologies are often limited to point-wise actuation. Here, we show that actuation in two-dimensional matrices instead creates a richer input. We describe the construction of a full-body garment for haptic communication with a distributed actuating network. The garment is divided into attachable-detachable panels, or add-ons, each of which can carry a two-dimensional matrix of actuating haptic elements. Each panel adds to the enhanced sensory capability of the human-garment system, so that together a 720° system is formed. The spatial separation of the panels on different body locations supports the semantic and theme-wise separation of conversations conveyed by haptics. It also achieves directional faithfulness, that is, maintaining any directional information about a distal stimulus in the haptic input.
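A minimal sketch of the addressing scheme such a panel-based garment implies (an illustrative data model, not the authors' firmware): each detachable panel carries a 2-D actuator matrix, and a stimulus direction is mapped to a (panel, row, column) cell so that directional information is preserved.

```python
# Illustrative panel layout; sizes and names are assumptions.
PANELS = {
    "chest":      {"rows": 4, "cols": 4},
    "upper_back": {"rows": 4, "cols": 4},
    "left_arm":   {"rows": 2, "cols": 6},
    "right_arm":  {"rows": 2, "cols": 6},
}

def address_for_direction(azimuth_deg):
    """Map a horizontal stimulus direction (0 deg = straight ahead) to an actuator cell."""
    a = azimuth_deg % 360
    panel = "chest" if a <= 90 or a >= 270 else "upper_back"
    cols = PANELS[panel]["cols"]
    # Relative angle within the panel's 180-degree field, from -90 to +90 degrees.
    rel = ((a + 90) % 180) - 90 if panel == "chest" else a - 180
    col = min(cols - 1, int((rel + 90) / 180 * cols))
    return panel, PANELS[panel]["rows"] // 2, col

print(address_for_direction(30))   # e.g. ('chest', 2, 2): slightly to the right of centre
```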
Disturbances of the cardiac conduction system causing reentry mechanisms above the atrioventricular (AV) node are induced by at least one accessory pathway with different conducting properties and refractory periods. This work aims to further develop the already existing and continuously expanding Offenburg heart rhythm model to visualise the most common supraventricular reentry tachycardias, in order to provide a better understanding of the cause of the respective reentry mechanism.
In the modern knowledge-based and digital economy, the value of knowledge is growing relative to other assets, and new intellectual property is being created at an ever-increasing rate. Therefore, the ability to find non-trivial solutions, systematically generate new concepts, and rapidly create intellectual property becomes crucial to achieving competitive advantage and leveraging the intellectual potential of organizations.
Enhancing engineering creativity with automated formulation of elementary solution principles
(2023)
The paper describes a method for the automated formulation of elementary creative stimuli for product or process design at different levels of abstraction and in different engineering domains. The experimental study evaluates the impact of structured automated idea generation on inventive thinking in engineering design and compares it with previous experimental studies in educational and industrial settings. The outlook highlights the benefits of using automated ideation in the context of AI-assisted invention and innovation.
Cross-industry innovation is commonly understood as the identification of analogies and the interdisciplinary transfer or copying of technologies, processes, technical solutions, working principles or models between industrial sectors. In general, creative thinking in analogies belongs to the efficient ideation techniques. However, engineering graduates and specialists frequently lack the skills to think across industry boundaries systematically. To overcome this drawback, an easy-to-use method based on five analogies has been evaluated through its application by students and engineers in numerous experiments and industrial case studies. The proposed analogies help to identify and resolve engineering contradictions and to apply approaches of the Theory of Inventive Problem Solving (TRIZ) and biomimetics. The paper analyses the outcomes of the systematized analogies-based ideation and shows that its performance grows continuously with engineering experience. It defines metrics for ideation efficiency and an ideation performance function.
The paper addresses the needs of universities regarding the qualification of students as future R&D specialists in efficient techniques for successfully running the innovation process. It briefly describes the programme of a novel one-semester course of 150 hours in new product development and inventive problem solving with the TRIZ methodology, offered to master students at the Beuth University of Applied Sciences in Berlin. The paper outlines a multi-source educational approach, which includes a new product development project (about 50% of the complete course), theory, practical work, and self-learning with software tools for computer-aided innovation, and demonstrates examples of the students' work. The research part analyses the learning experience, identifies the factors that impact the innovation and problem-solving performance of the students, and underlines the main difficulties faced by the students in the course. It describes a method for measuring educational efficiency and compares the results with educational experience in industry. The presented results can help universities to establish education in new product development or to improve its performance.
Using patent information for identification of new product features with high market potential
(2014)
CONTEXT
The paper addresses the needs of medium-sized and small businesses regarding the qualification of R&D specialists in interdisciplinary cross-industry innovation, which promises a considerable reduction of investments and R&D expenditures. Cross-industry innovation is commonly understood as the identification of analogies and the transfer of technologies, processes, technical solutions, working principles or business models between industrial sectors. However, engineering graduates and specialists frequently lack the advanced skills and knowledge required to run interdisciplinary innovation across industry boundaries.
PURPOSE
The study compares the efficiency of cross-industry innovation methods in a one-semester project-oriented course. It identifies the individual challenges and preferred working techniques of students with different prior knowledge, sets of experiences, and cultural contexts, which require attention from engineering educators.
APPROACH
Two parallel one-semester courses were offered to mechanical and process engineering students enrolled in bachelor's and master's degree programmes at the faculty of mechanical and process engineering. Students from different years of study worked in 12 teams of three to six persons each on different innovation projects, spending two hours a week in the classroom and, on average, an additional two hours weekly on their project research. Students' feedback and self-assessments concerning gained skills, the efficiency of the learned tools and intermediate findings were documented, analysed, and discussed regularly throughout the course.
RESULTS
The analysis of numerous student projects makes it possible to compare and select the tools most appropriate for finding cross-industry solutions, such as thinking in analogies, web monitoring, function-oriented search, databases of technological effects and processes, special creativity techniques and others. The utilization of the learned skills in practical innovation work strengthens the motivation of students and enhances their entrepreneurial competences. The suggested learning course and the given recommendations help facilitate sustainable education of ambitious specialists.
CONCLUSIONS
Structured cross-industry innovation can be successfully run as a systematic process and learned in a one-semester course. The choice of preferred working techniques made by the students is affected by their prior knowledge in science, practical experience, and cultural context. Major outcomes of the students' innovation projects, such as feasibility, novelty and customer value of the concepts, are primarily influenced by the students' engineering design skills, prior knowledge of the technologies, and industrial or business experience.
The comprehensive assessment method includes 80 innovation performance parameters and 10 key indicators of innovation capability, such as innovation process performance, innovating system performance, market and customer orientation, technology orientation, creativity, leadership, communication and knowledge management, risk and cost management, innovative climate, and innovation competences. The cross-industry study identifies parameters critical for innovation success and reveals different innovation performance patterns in companies.
The paper addresses the needs of universities regarding the qualification of students as future R&D specialists in efficient techniques for successfully running the innovation process. In comparison with engineers, students often demonstrate lower motivation in learning systematic inventive techniques, such as the TRIZ methodology, and prefer random brainstorming for idea generation. The quality of the obtained solutions also depends on the level of completeness of the problem analysis, which is more complex and time-consuming in the case of interdisciplinary systems. The paper briefly describes a one-semester course of 60 hours in new product development with the Advanced Innovation Design Approach and the TRIZ methodology, in which a typical industrial innovation process for one selected interdisciplinary mechatronic product is modelled.
The paper conceptualizes a systemic approach for enhancing the innovative and competitive capacity of industrial companies (the Advanced Innovation Design Approach, AIDA), including analysis, optimization and further development of the innovation process and promotion of the innovation climate in industrial companies. The innovation process is understood as a holistic stage-gate system comprising the following typical phases with feedback loops and simultaneous auxiliary or follow-up processes: uncovering solution-neutral customer needs, technology and market trends; identifying needs and problems with high market potential and formulating the innovation tasks and strategy; idea generation and problem solving; evaluation and enhancement of solution ideas; creation of innovation concepts based on solution ideas; evaluation of the innovation concepts; and implementation, validation and market launch of the chosen innovation concepts. The article presents the current state of innovation research and discusses the actual status of the innovation process in the industrial environment. It defines future research tasks for augmenting the innovation process with self-configuration, self-optimization, self-diagnostics, and intelligent information processing and communication.
Internal crowdsourcing-based ideation within a company can be defined as the involvement of its staff, specialists, managers, and other employees in proposing solution ideas for a pre-defined problem. This paper addresses the question of how many participants of the company-internal ideation process are required to nearly reach the ideation limit for problems with a finite number of workable solutions. To answer this research question, the author proposes a set of metrics and a non-linear ideation performance function with a positive decreasing slope and an ideation limit for closed-ended problems. Three series of experiments helped to explore the relationships between the metric attributes and resulted in a mathematical model which allows companies to predict the productivity metrics of their crowdsourcing ideation activities, such as the number of distinct ideas and the ideation limit, as a function of the number of contributors, their average personal creativity, and the ideation efficiency of the contributor group.
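For intuition only: a saturating form such as the one below has the stated properties (positive, decreasing slope and an ideation limit); it is an illustrative stand-in, not the model derived in the paper, with L possible workable solutions and p an assumed per-contributor hit probability.

```latex
% Illustrative saturating form only, not the model from the paper.
N(n) = L\left(1 - (1 - p)^{n}\right), \qquad
\frac{dN}{dn} = -L\,(1-p)^{n}\ln(1-p) > 0 \ \text{and decreasing in } n, \qquad
\lim_{n \to \infty} N(n) = L .
```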
Executing innovation projects effectively requires repeated estimation of the market success of new product features in the early stages of the customer-centred innovation process, such as strategy formulation and the evaluation of ideas and concepts, and also at a stage close to market launch. Attempts to involve customers in estimating market success often result in time-consuming customer interviews or lengthy field research. For this reason, industrial companies usually try to skip customer surveys even if they risk that their innovations will fail to bring the anticipated economic outcomes. In many practical cases, customer surveys are simply not feasible or too expensive. As a result, internal assessments within companies are frequently the only resource available in the innovation process in the industrial environment. The paper discusses the possibilities of fast identification of promising innovation opportunities and new product features based on the internal competences of companies. It compares the results of customer surveys with the estimates of internal company experts and analyses the accuracy and validity of the expert assessments. The presented case studies demonstrate an accuracy rate between 43% and 77% for the prediction of new product features with high market potential by company-internal experts. The paper proposes evaluation methods to increase the accuracy rate and points out that one of the essential requirements for reliable forecasting by the experts is their profound understanding of the customer working process, the ability to estimate the importance of customer needs, and the ability to assess the level of customer satisfaction with current products on the market.
The Advanced Innovation Design Approach (AIDA) is a holistic methodology for enhancing the innovative and competitive capability of industrial companies. AIDA can be considered an open mindset and an individually adaptable range of the strongest innovation techniques, such as a comprehensive front-end innovation process, advanced innovation methods, the best tools and methods of the TRIZ methodology, organizational measures for accelerating innovation, IT solutions for computer-aided innovation, and other innovation methods elaborated over the recent decade in industry and academia.
The European TRIZ Association (ETRIA) acts as a connecting link between scientific institutions, universities and other educational organizations, industrial companies, and individuals concerned with conceptual and practical questions relating to the organization of the innovation process, invention methods, and innovation knowledge. In the meantime, more than 1000 TRIZ Future Conference (TFC) papers and presentations by scientists, educators, and practitioners from all over the world are available on the official ETRIA website. Numerous research projects were supported or funded by the European Commission.
The proposed method includes the identification and documentation of elementary inventive principles from the TRIZ body of knowledge, their extension and enhancement through the analysis of patents and technologies, and the elimination of overlapping and redundant principles. The principles are classified and adapted to at least the following categories: working medium, target object, useful action, harmful effect, environment, information, field, substance, time, and space. The elementary inventive principles are assigned to at least the following underlying engineering domains: universal, design, mechanical, acoustic, thermal, chemical, electromagnetic, intermolecular, biological, and data processing. The method further includes classification of the abstraction level of the elementary principles, definition of statistical rankings of the principles for different problem types and for specific engineering or non-technical domains, definition of strategies for selecting principle sets with high solution potential for predefined problems, automated semantic transformation of the elementary inventive principles into solution ideas, evaluation of the automatically generated ideas, and transformation of the ideas into innovation or inventive concepts.
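To make the classification facets concrete, one possible record structure for a single elementary inventive operator might look like this; all field names and values are illustrative assumptions, not data from the method itself:

```python
# Illustrative record structure only; field names and values are assumptions.
principle_record = {
    "id": "IP01.3",
    "parent_principle": "Segmentation",
    "statement": "Divide the working medium into independently movable parts.",
    "category": "working medium",            # e.g. working medium, target object, useful action, ...
    "domain": "mechanical",                  # e.g. universal, design, mechanical, thermal, ...
    "abstraction_level": 2,                  # e.g. 1 = concrete ... 3 = abstract
    "ranking_by_problem_type": {"dust emission": 0.71, "mixing quality": 0.35},
}

def select_principles(records, problem_type, top_k=5):
    """Return the operators ranked highest for a given problem type."""
    scored = [(r["ranking_by_problem_type"].get(problem_type, 0.0), r) for r in records]
    return [r for score, r in sorted(scored, key=lambda x: -x[0])[:top_k]]

print(select_principles([principle_record], "dust emission", top_k=1)[0]["id"])
```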
TRIZ Inventive Principles
(2022)
The analysis of several thousand patents led to the conclusion that inventive engineering problems and technical contradictions in all kinds of industrial sectors can be solved by a limited number of basic Inventive Principles (Altshuller, 1984). The modern Theory of Inventive Problem Solving TRIZ (VDI 4521) contains 40 basic Inventive Principles (IP). These principles are simple to use or modify and can easily be integrated into brainstorming or an engineer's daily work. One established part of industrial practice is the composition of specific groups of principles for solving different kinds of problems (Livotov, Petrov, 2011). Based on the interdisciplinary experience of TRIZ application in industrial companies over the last 25 years, a general order for applying the 40 Inventive Principles can be recommended for idea generation and problem solving (Livotov, Chandra, Mas'udah et al., 2019). This brochure presents an update of the 40 Inventive Principles, extending the original version (Altshuller, 1984) with 70 additional sub-principles and resulting in an advanced set of 160 sub-principles regarded as elementary inventive operators. This extended version of the inventive principles is used in the AIDA Automatic IDEA & IP Generator, https://www.tris-europe.com/eng/software/innovationssoftware.htm
The modern TRIZ is today considered the most organized and comprehensive methodology for knowledge-driven invention and innovation. When applying TRIZ for inventive problem solving, the quality of the obtained solutions strongly depends on the level of completeness of the problem analysis and on the designers' ability to identify the main technical and physical contradictions in the inventive situation. These tasks are more complex and hence more time-consuming in the case of interdisciplinary systems. Considering a mechatronic product as a system resulting from the integration of different technologies, the problem definition reveals two kinds of contradictions: 1) mono-disciplinary contradictions within a homogeneous sub-system, e.g. only mechanical or only electrical; 2) interdisciplinary contradictions resulting from the interaction of the mechatronic sub-systems (mechanics, electrics, control and software). This paper presents a TRIZ-based approach for fast and systematic problem definition and contradiction identification, which can be useful both for engineers and for students facing mechatronic problems. It also proposes some useful problem formulation techniques such as the System Circle Diagram, the enhancement of the System Operator with the Evolution Patterns, the extension of the MATChEM-IB operator with Information field and Human Interactions, as well as the Cause-Effect Matrix.
The research work analyses the relationship of 155 Process Intensification (PI) technologies to the components of the Theory of Inventive Problem Solving (TRIZ). It outlines TRIZ inventive principles frequently used in PI, and identifies opportunities for enhancing systematic innovation in process engineering by applying complementary TRIZ and PI. The study also proposes 70 additional inventive TRIZ sub-principles for the problems frequently encountered in process engineering, resulting in the advanced set of 160 inventive operators, assigned to the 40 TRIZ inventive principles. Finally, we analyse and discuss inventive principles used in 150 patent documents published in the last decade in the field of solid handling in the ceramic and pharmaceutical industries.
As engineering graduates and specialists frequently lack the advanced skills and knowledge required to run eco-innovation systematically, the paper proposes a new teaching method and appropriate learning materials in the field of eco-innovation and evaluates the learning experience and outcomes. The programme is aimed at strengthening students' skills and motivation to identify and creatively overcome secondary eco-contradictions in cases where additional environmental problems appear as negative side effects of eco-friendly solutions.
Based on a literature analysis and their own investigations, the authors propose to introduce a manageable number of eco-innovation tools into a standard one-semester design course in process engineering, with a particular focus on the identification of eco-problems in existing technologies, the selection of appropriate new process intensification technologies (knowledge-based engineering), and systematic ideation and problem solving (knowledge-based innovation and invention).
The proposed educational approach equips students with advanced knowledge, skills and competences in the field of eco-innovation. Analysis of the students' work allows us to recommend simple-to-use tools for fast application in process engineering, such as process mapping, a database of eco-friendly process intensification technologies, and up to 20 of the strongest inventive operators for solving environmental problems. For the majority of students in the survey, even this small workload strengthened their self-confidence and skills in eco-innovation.
Economic growth and ecological problems motivate industries to apply eco-friendly technologies and equipment. However, environmental impact, together with energy and material consumption, still remains the main negative implication of technological progress in process engineering. Based on extensive patent analysis, this paper assigns more than 250 identified eco-innovation problems and requirements to 14 general eco-categories, with energy consumption and losses, air pollution, and acidification as the top issues. It defines primary eco-engineering contradictions, in which eco-problems appear as negative side effects of new technologies, and secondary eco-engineering contradictions, in which eco-friendly solutions have new environmental drawbacks. The study conceptualizes a correlation matrix between the eco-requirements for the prediction of typical eco-contradictions, using the example of processes involving solids handling. Finally, it summarizes major eco-innovation approaches, including Process Intensification in process engineering, and chronologically reviews 66 papers on eco-innovation adapting the TRIZ methodology. Based on the analysis of 100 eco-patents, 58 process intensification technologies, and the literature, the study identifies 20 universal TRIZ inventive principles and sub-principles that have a higher value for environmental innovation.
Economic growth and ecological problems have pushed industries to switch to eco-friendly technologies. However, environmental impact is still often neglected since production efficiency remains the main concern. Patent analysis in the field of process engineering shows that, on the one hand, some eco-issues appear as secondary problems of the new technologies, and on the other hand, eco-friendly solutions often show lower efficiency or performance capability. The study categorizes typical environmental problems and eco-contradictions in the field of process engineering involving solids handling and identifies underlying inventive principles that have a higher value for environmental innovation. Finally, 42 eco-innovation methods adapting TRIZ are chronologically presented and discussed.
Environmentally-friendly implementation of new technologies and eco-innovative solutions often faces additional secondary ecological problems. On the other hand, existing biological systems show a lesser environmental impact as compared to the human-made products or technologies. The paper defines a research agenda for identification of underlying eco-inventive principles used in the natural systems created through evolution. Finally, the paper proposes a comprehensive method for capturing eco-innovation principles in biological systems in addition and complementary to the existing biomimetic methods and TRIZ methodology and illustrates it with an example.
Sustainable design of equipment for process intensification requires a comprehensive and correct identification of relevant stakeholder requirements, design problems and tasks crucial for innovation success. Combining the principles of the Quality Function Deployment with the Importance-Satisfaction Analysis and Contradiction Analysis of requirements gives an opportunity to define a proper process innovation strategy more reliably and to develop an optimal process intensification technology with less secondary engineering and ecological problems.
In recent years, the application of the TRIZ methodology in process engineering has been found promising for developing comprehensive inventive solution concepts for process intensification (PI). However, the effectiveness of TRIZ for PI has not yet been measured or estimated. The paper describes an approach to evaluate the efficiency of TRIZ application in process intensification by comparing six case studies in the chemical, pharmaceutical, ceramic, and mineral industries. In each case study, TRIZ workshops with teams of researchers and engineers were performed to analyze the initial complex problem situation, to identify problems, to generate new ideas, and to create solution concepts. The analysis of the workshop outcomes estimates the fulfilment of the PI goals, the impact of secondary problems, and the variety and efficiency of ideas and solution concepts. In addition to the observed positive effect of TRIZ application, the most effective inventive principles for process engineering have been identified.
The 40 Altshuller Inventive Principles with their numerous sub-principles have remained for decades the most frequently applied tool of the Theory of Inventive Problem Solving (TRIZ) for systematic idea generation. However, their application often requires a concentrated, creative and abstract way of thinking that can be fairly challenging for newcomers to TRIZ. This paper describes an approach to reduce the abstraction level of the inventive sub-principles and presents the results of an idea generation experiment conducted with three groups of undergraduate and graduate students from different years of study in mechanical and process engineering. The students were asked to generate and record their individual ideas for three design problems using a pre-defined set of classical and modified sub-principles within 10 minutes. The overall outcomes of the experiment support the assumption that the less abstract wording of the modified sub-principles leads to a higher number of ideas. The distribution of ideas among the MATCHEM-IBD fields (Mechanical, Acoustic, Thermal, Chemical, Electrical, Magnetic, Intermolecular, Biological and Data processing) differs significantly between the groups using modified and abstract sub-principles.
Classification of TRIZ Inventive Principles and Sub-Principles for Process Engineering Problems
(2019)
The paper proposes a classification approach of 40 Inventive Principles with an extended set of 160 sub-principles for process engineering, based on a thorough analysis of 155 process intensification technologies, 200 patent documents, 6 industrial case studies applying TRIZ, and other sources. The authors define problem-specific sub-principles groups as a more precise and productive ideation technique, adaptable for a large diversity of problem situations, and finally, examine the anticipated variety of ideation using 160 sub-principles with the help of MATCEM-IBD fields.
Growing demands for cleaner production and higher eco-efficiency in process engineering require a comprehensive analysis of the technical and environmental requirements of customers and society. Moreover, unexpected additional technical or ecological drawbacks may appear as negative side effects of new environmentally friendly technologies. The paper conceptualizes a comprehensive approach for the analysis and ranking of engineering and ecological requirements in process engineering in order to anticipate secondary problems in eco-design and to avoid compromising the environmental or technological goals. For this purpose, the paper presents a method based on the integration of the Quality Function Deployment approach with the Importance-Satisfaction Analysis for requirements ranking. The proposed method comprehensively identifies and classifies the potential engineering and eco-engineering contradictions through the analysis of correlations within requirement groups such as stakeholder requirements (SRs) and technical requirements (TRs), and additionally through the cross-relationships between SRs and TRs.
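A minimal sketch of the Importance-Satisfaction ranking step (the 1-10 scores and the gap-based priority rule are illustrative, not the exact procedure of the paper):

```python
# Sketch only: Importance-Satisfaction ranking of stakeholder requirements.
requirements = {
    "low specific energy consumption": {"importance": 9, "satisfaction": 4},
    "low dust emission":               {"importance": 8, "satisfaction": 6},
    "short cleaning time":             {"importance": 6, "satisfaction": 7},
    "low investment cost":             {"importance": 7, "satisfaction": 5},
}

def rank_by_gap(reqs):
    """Priority grows with importance and with the importance-satisfaction gap."""
    def priority(scores):
        return scores["importance"] * max(0, scores["importance"] - scores["satisfaction"])
    return sorted(reqs.items(), key=lambda kv: -priority(kv[1]))

for name, scores in rank_by_gap(requirements):
    print(name, scores)
```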
As engineering graduates and specialists frequently lack the advanced skills and knowledge required to run eco-innovation systematically, the paper proposes new learning materials and educational tools in the field of eco-innovation and evaluates the learning experience and outcomes. The programme is aimed at strengthening students' skills and motivation to identify and creatively overcome secondary eco-contradictions in cases where additional environmental problems appear as negative side effects of eco-friendly solutions. The paper evaluates the efficiency of the proposed interdisciplinary tool for systematic eco-innovation, including creative semi-automatic knowledge-based idea generation and concept development. It analyses the learning experience and identifies the factors that impact the eco-innovation performance of the students.
Process engineering industries are now facing growing economic pressure and societal demands to improve their production technologies and equipment, making them more efficient and environmentally friendly. However, unexpected additional technical and ecological drawbacks may appear as negative side effects of the new environmentally friendly technologies. Thus, in their efforts to intensify upstream and downstream processes, industrial companies require systematic aid to avoid compromising the ecological impact. The paper conceptualises a comprehensive approach for eco-innovation and eco-design in process engineering. The approach combines the advantages of Process Intensification as Knowledge-Based Engineering (KBE), inventive tools of Knowledge-Based Innovation (KBI), and the main principles and best practices of Eco-Design and Sustainable Manufacturing. It includes a correlation matrix for the identification of eco-engineering contradictions and a process mapping technique for problem definition, a database of Process Intensification methods and equipment, as well as a set of the strongest inventive operators for eco-ideation.
The increasing diffusion of rapidly developing AI technologies led to the idea of an experiment combining TRIZ-based automated idea generation with the natural language processing tool ChatGPT, using the chatbot to interpret the automatically generated elementary solution principles. The article explores the opportunities and benefits of a novel AI-enhanced approach to teaching systematic innovation, analyses the learning experience, identifies the factors that affect students' innovation and problem-solving performance, and highlights the main difficulties students face, especially in interdisciplinary problems.
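As a rough illustration of such a coupling, the sketch below sends an automatically selected inventive principle together with a problem statement to a chat model and asks for a domain-specific interpretation. The prompt wording, the model name and the example problem are assumptions made for illustration; this is not the setup used in the article, only a minimal sketch using the openai Python package (version 1.x interface).

```python
from openai import OpenAI  # assumes the openai package (>= 1.0) is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def interpret_principle(principle: str, problem: str, model: str = "gpt-4o-mini") -> str:
    """Ask the chat model to translate an abstract TRIZ principle into
    concrete solution ideas for the given engineering problem."""
    prompt = (
        f"TRIZ inventive principle: {principle}\n"
        f"Engineering problem: {problem}\n"
        "Suggest two concrete, physically plausible solution ideas that apply "
        "this principle to the problem. Answer in short bullet points."
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Hypothetical example problem
print(interpret_principle(
    principle="Segmentation: divide an object into independent parts",
    problem="A heat exchanger fouls quickly and is difficult to clean",
))
```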
Identification of Secondary Problems of New Technologies in Process Engineering by Patent Analysis
(2018)
The implementation of new technologies in production plants often causes negative side effects and drawbacks. In this context, the prediction of the secondary problems and risks can be used advantageously for selecting best solutions for intensification of the processes. The proposed method puts primary emphasis on systematic and fast anticipation of secondary problems using patent documents, and on extraction and prediction of possible engineering contradictions within novel technical systems. The approach comprises three ways to find secondary problems: (a) direct knowledge-based identification of secondary problems in new technologies or equipment; (b) identification of secondary problems of prototypes mentioned in patent citation trees; and (c) prediction of negative side effects using the correlation matrix for invention goals and secondary problems in a specific engineering domain.
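Way (c) can be pictured with a small hypothetical sketch: given counts of how often a secondary problem co-occurs with an invention goal in analysed patents of a domain, the most probable side effects for a chosen goal are ranked. All labels and counts below are invented for illustration and do not come from the paper.

```python
import numpy as np

# Hypothetical invention goals (rows) and secondary problems (columns)
goals    = ["increase throughput", "reduce energy demand", "miniaturize equipment"]
problems = ["fouling", "higher pressure drop", "vibration", "corrosion"]

# Illustrative co-occurrence counts extracted from a patent corpus
counts = np.array([
    [12,  7, 3, 1],   # increase throughput
    [ 2,  9, 1, 4],   # reduce energy demand
    [ 1,  5, 8, 2],   # miniaturize equipment
])

def predict_side_effects(goal: str, top_k: int = 2):
    """Rank the most frequently co-occurring secondary problems for a goal."""
    row = counts[goals.index(goal)]
    probs = row / row.sum()                      # empirical co-occurrence frequencies
    order = np.argsort(probs)[::-1][:top_k]
    return [(problems[j], round(float(probs[j]), 2)) for j in order]

print(predict_side_effects("miniaturize equipment"))
# e.g. [('vibration', 0.5), ('higher pressure drop', 0.31)]
```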
TRIZ Innovationstechnologie
(2023)
Systemic Constellations are a phenomenological approach to resolving personal, professional and organizational issues. They offer a way of mapping a present reality, working at the source of the hidden dynamics and moving towards a resolution. This systemic approach often delivers surprising and unexpected insights while also offering the possibility to analyze and solve organizational problems. Rational analysis provides the whole picture of the problem, which often turns out to be too complex for decision making. Systemic constellations can help to simplify and clarify the situation and inform what has to happen next [8], [17]. The outcomes of systemic constellations as an additional resource for solving comprehensive technical problems have not yet been sufficiently investigated. In structural constellation work dealing with technical problems, the individuals who are involved in the problem situation are used to represent different system components, substances or fields. A moderator voices the feedback from the representatives concerning their feelings or intuitive movements, and points to possible solutions. For example, a moderator places the representatives somewhere in the room, develops a three-dimensional picture of the constellation of the analyzed situation and tries to expose the factors empowering or blocking the way towards constructive solutions [13]. This paper explores the theoretical background and practical outcomes of the systemic constellation method for technical problem solving. It presents case study work conducted in recent years and then discusses its findings and implications. The research outlined in this paper demonstrates that the noteworthy contribution of structural constellation work to problem solving is typically the result of a combination of functional analysis and the feeling-as-information principle. The constellation work helps, at first, to reveal subjective experiences, such as feelings, moods, emotions, and bodily sensations, and then to accept them as a source of objective information relevant to the decision-making process. In accordance with the latest research [19], the use of feelings as a source of information follows the same principles as the use of any other information. This paper provides the structures of some standard templates and types of constellation work for technical problems, and discusses the preconditions for their application.
The paper recommends an approach to effectively estimate the probability of buffer overflow in high-speed communication networks that carry diverse traffic, including self-similar teletraffic, and support diverse levels of quality of service. Simulations with stochastic, long-range dependent self-similar traffic source models are conducted. A new efficient algorithm, based on a variant of the RESTART/LRE method, is developed and applied to accelerate the buffer overflow simulation in a finite-buffer single-server model under long-range dependent self-similar traffic load with different buffer sizes. Numerical examples and simulation results are shown.
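For orientation, the quantity being estimated can be illustrated with a plain Monte Carlo sketch of a slotted finite-buffer single-server queue. The sketch deliberately uses a simple Poisson batch-arrival source rather than a long-range dependent self-similar model, and it does not implement the RESTART/LRE acceleration from the paper; it only shows the naive estimator that such splitting techniques speed up when overflows become very rare.

```python
import math
import random

def overflow_probability(load: float = 0.8, buffer_size: int = 10,
                         slots: int = 1_000_000, seed: int = 1) -> float:
    """Naive Monte Carlo estimate of the per-slot buffer overflow probability
    in a slotted finite-buffer single-server queue: Poisson(load) arrivals per
    slot, one departure per slot, excess arrivals are dropped."""
    rng = random.Random(seed)
    queue, overflow_slots = 0, 0
    for _ in range(slots):
        # sample Poisson(load) arrivals by inversion (adequate for small means)
        u, arrivals = rng.random(), 0
        p = cum = math.exp(-load)
        while u > cum:
            arrivals += 1
            p *= load / arrivals
            cum += p
        queue += arrivals
        if queue > buffer_size:
            overflow_slots += 1
            queue = buffer_size              # excess traffic is lost
        queue = max(queue - 1, 0)            # serve one packet per slot
    return overflow_slots / slots

print(overflow_probability())
# For large buffers the overflow becomes so rare that this naive estimator
# needs prohibitively many slots - exactly the regime that RESTART/LRE targets.
```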
Silicon edges as one-dimensional waveguides for dispersion-free and supersonic leaky wedge waves
(2012)
Acoustic waves guided by the cleaved edge of a Si(111) crystal were studied using a laser-based angle-tunable transducer for selectively launching isolated wedge or surface modes. A supersonic leaky wedge wave and the fundamental wedge wave were observed experimentally and confirmed theoretically. Coupling of the supersonic wave to shear waves is discussed, and its leakage into the surface acoustic wave was observed directly. The velocity and penetration depth of the wedge waves were determined by contact-free optical probing. Thus, a detailed experimental and theoretical study of linear one-dimensional guided modes in silicon is presented.
In anisotropic media, the existence of leaky surface acoustic waves is a well-known phenomenon. Very recently, their analogs at the apex of an elastic silicon wedge have been found in experiments using laser-ultrasonics. In addition to a wedge-wave (WW) pulse with low speed, a pseudo-wedge wave (p-WW) pulse was found with a velocity higher than the velocity of shear bulk waves, propagating in the same direction. With a probe-beam-deflection technique, the propagation of the WW pulses was monitored on one of the faces of the wedge at variable distance from the apex. In this way, their depth structure and the leakage of the p-WW could be visualized directly. Calculations were carried out using a method based on a representation of the displacement field in Laguerre functions. This method has been validated by calculating the surface density of states in anisotropic media and comparing the results with those obtained from the surface Green's tensor. The approach has then been extended to the continuum of acoustic modes in infinite wedges with fixed wave-vector along the apex. These calculations confirmed the measured speeds of the WW and p-WW pulses.
In recent years, large battery storage systems have increasingly been installed at the medium- and high-voltage levels in Germany. In addition to their use for local applications such as self-consumption maximization or peak shaving, since 2016 around 250 MW of battery storage capacity has been prequalified for participation in the market for primary control reserve (Primärregelleistung, PRL). This already covers 40 % of the current demand of the German transmission system operators (Übertragungsnetzbetreiber, ÜNB). Reliable operation of battery storage systems requires intelligent operating strategies, which are presented in this analysis.
The importance of international trade agreements continues to grow, underscoring the significance and urgency of international cooperation, in particular of international economic relations between individual nations. The term "international economic relations" refers to the entirety of cross-border economic activities of economic actors as well as governmental and supranational measures and relations. The World Trade Organization (WTO), for example, lists 301 regional trade agreements. The diagram referred to illustrates the accelerating annual increase in trade agreements that have entered into force.
Cardiac resynchronization therapy with biventricular pacing is an established therapy for heart failure patients with electrical left ventricular desynchronization. The aim of this study was to evaluate left atrial conduction delay, intra left atrial conduction delay, left ventricular conduction delay and intra left ventricular conduction delay in heart failure patients using novel signal averaging transesophageal left heart ECG software.
Methods: 8 heart failure patients with dilated cardiomyopathy (DCM), age 68 ± 9 years, New York Heart Association (NYHA) class 2.9 ± 0.2, 24.8 ± 6.7 % left ventricular ejection fraction, 188.8 ± 15.5 ms QRS duration and 8 heart failure patients with ischaemic cardiomyopathy (ICM), age 67 ± 8 years, NYHA class 2.9 ± 0.3, 32.5 ± 7.4 % left ventricular ejection fraction and 167.6 ± 19.4 ms QRS duration were analysed with transesophageal and transthoracic ECG by the Bard LabDuo EP system and novel National Instruments LabVIEW signal averaging ECG software.
Results: The electrical left atrial conduction delay was 71.3 ± 17.6 ms in ICM versus 72.3 ± 12.4 ms in DCM, intra left atrial conduction delay 66.8 ± 8.6 ms in ICM versus 63.4 ± 10.9 ms in DCM and left cardiac AV delay 180.5 ± 32.6 ms in ICM versus 152.4 ± 30.4 ms in DCM. The electrical left ventricular conduction delay was 40.9 ± 7.5 ms in ICM versus 42.6 ± 17 ms in DCM and intra left ventricular conduction delay 105.6 ± 19.3 ms in ICM versus 128.3 ± 24.1 ms in DCM.
Conclusions: Left heart signal averaging ECG can be utilized to analyse left atrial conduction delay, intra left atrial conduction delay, left ventricular conduction delay and intra left ventricular conduction delay to improve patient selection for cardiac resynchronization therapy.
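Signal averaging itself is a standard technique: beats are aligned on a trigger (for example a fiducial point in the QRS complex) and averaged sample-by-sample, so that uncorrelated noise is reduced by roughly the square root of the number of beats. The sketch below shows only this generic step; it is not the LabVIEW software used in the study, and the sampling rate and window lengths are assumptions.

```python
import numpy as np

def signal_average(ecg: np.ndarray, triggers: np.ndarray,
                   fs: int = 1000, pre_ms: int = 200, post_ms: int = 400) -> np.ndarray:
    """Average fixed windows around trigger samples of a single-channel ECG.
    ecg: 1-D signal; triggers: sample indices of one fiducial point per beat."""
    pre, post = int(pre_ms * fs / 1000), int(post_ms * fs / 1000)
    windows = [ecg[t - pre:t + post] for t in triggers
               if t - pre >= 0 and t + post <= len(ecg)]
    return np.mean(windows, axis=0)   # noise drops ~ 1/sqrt(number of beats)

# Hypothetical usage: 10 s of noisy ECG at 1 kHz with one beat per second
fs = 1000
t = np.arange(10 * fs) / fs
beats = (np.abs((t % 1.0) - 0.5) < 0.02).astype(float)   # crude "QRS" spikes
ecg = beats + 0.3 * np.random.randn(t.size)
triggers = np.arange(500, 10 * fs, fs)                    # one trigger per beat
avg_beat = signal_average(ecg, triggers, fs)
print(avg_beat.shape)   # (600,) samples: 200 ms before + 400 ms after the trigger
```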
Detecting Images Generated by Deep Diffusion Models using their Local Intrinsic Dimensionality
(2023)
Diffusion models have recently been applied successfully to the visual synthesis of strikingly realistic-looking images. This raises strong concerns about their potential for malicious purposes. In this paper, we propose using the lightweight multi Local Intrinsic Dimensionality (multiLID) measure, which was originally developed in the context of detecting adversarial examples, for the automatic detection of synthetic images and the identification of the corresponding generator networks. In contrast to many existing detection approaches, which often only work for GAN-generated images, the proposed method provides close to perfect detection results in many realistic use cases. Extensive experiments on known and newly created datasets demonstrate that the proposed multiLID approach exhibits superiority in diffusion detection and model identification. Since the empirical evaluations of recent publications on the detection of generated images are often mainly focused on the "LSUN-Bedroom" dataset, we further establish a comprehensive benchmark for the detection of diffusion-generated images, including samples from several diffusion models with different image sizes. The code for our experiments is provided at https://github.com/deepfake-study/deepfake-multiLID.
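The local intrinsic dimensionality underlying multiLID is commonly estimated with the maximum-likelihood estimator of Levina and Bickel, which uses the distances to the k nearest neighbours of a query point. The sketch below shows only this generic estimator on raw feature vectors; the multiLID method of the paper aggregates such estimates across several network layers and feeds them into a classifier, which is not reproduced here.

```python
import numpy as np

def lid_mle(query: np.ndarray, reference: np.ndarray, k: int = 20) -> float:
    """Levina-Bickel maximum-likelihood estimate of the local intrinsic
    dimensionality of `query` w.r.t. a set of reference points:
    LID = -1 / mean(log(r_i / r_k)), with r_1..r_k the k nearest distances."""
    dists = np.sort(np.linalg.norm(reference - query, axis=1))
    dists = dists[dists > 0][:k]          # drop the query itself if present
    r_k = dists[-1]
    return -1.0 / np.mean(np.log(dists / r_k))

# Hypothetical check: points on a 2-D plane embedded in 10-D space
rng = np.random.default_rng(0)
plane = rng.normal(size=(5000, 2)) @ rng.normal(size=(2, 10))
print(round(lid_mle(plane[0], plane[1:], k=50), 1))   # typically close to 2
```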
Recently, adversarial attacks on image classification networks by the AutoAttack (Croce and Hein, 2020b) framework have drawn a lot of attention. While AutoAttack has shown a very high attack success rate, most defense approaches focus on network hardening and robustness enhancements, like adversarial training. This way, the currently best-reported method can withstand about 66% of adversarial examples on CIFAR10. In this paper, we investigate the spatial and frequency domain properties of AutoAttack and propose an alternative defense. Instead of hardening a network, we detect adversarial attacks during inference, rejecting manipulated inputs. Based on a rather simple and fast analysis in the frequency domain, we introduce two different detection algorithms. First, a black-box detector that only operates on the input images and achieves a detection accuracy of 100% on the AutoAttack CIFAR10 benchmark and 99.3% on ImageNet, for epsilon = 8/255 in both cases. Second, a white-box detector using an analysis of CNN feature maps, leading to detection rates of 100% and 98.7% on the same benchmarks.
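The black-box idea can be sketched generically: compute the 2-D Fourier magnitude spectrum of the input image and feed simple band-energy statistics derived from it to a small classifier that separates clean from attacked images. The radial band features and the logistic-regression classifier below are illustrative assumptions, not the exact pipeline of the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def spectral_features(img: np.ndarray, n_bands: int = 8) -> np.ndarray:
    """Radially pooled log-magnitude spectrum of a grayscale image,
    summarised into n_bands frequency bands (low to high)."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    h, w = img.shape
    yy, xx = np.mgrid[:h, :w]
    r = np.hypot(yy - h / 2, xx - w / 2)
    r_max = r.max()
    feats = []
    for b in range(n_bands):
        mask = (r >= b / n_bands * r_max) & (r < (b + 1) / n_bands * r_max)
        feats.append(np.log1p(spec[mask]).mean())
    return np.array(feats)

def train_detector(clean: np.ndarray, attacked: np.ndarray) -> LogisticRegression:
    """Fit a binary detector on spectral features of clean vs. attacked images.
    `clean` and `attacked` are hypothetical arrays of shape (n, H, W)."""
    X = np.array([spectral_features(im) for im in np.concatenate([clean, attacked])])
    y = np.array([0] * len(clean) + [1] * len(attacked))
    return LogisticRegression(max_iter=1000).fit(X, y)
```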
Convolutional neural networks (CNN) define the state-of-the-art solution on many perceptual tasks. However, current CNN approaches largely remain vulnerable against adversarial perturbations of the input that have been crafted specifically to fool the system while being quasi-imperceptible to the human eye. In recent years, various approaches have been proposed to defend CNNs against such attacks, for example by model hardening or by adding explicit defence mechanisms. Thereby, a small “detector” is included in the network and trained on the binary classification task of distinguishing genuine data from data containing adversarial perturbations. In this work, we propose a simple and light-weight detector, which leverages recent findings on the relation between networks’ local intrinsic dimensionality (LID) and adversarial attacks. Based on a re-interpretation of the LID measure and several simple adaptations, we surpass the state-of-the-art on adversarial detection by a significant margin and reach almost perfect results in terms of F1-score for several networks and datasets. Sources available at: https://github.com/adverML/multiLID
Recently, RobustBench (Croce et al. 2020) has become a widely recognized benchmark for the adversarial robustness of image classification networks. In its most commonly reported sub-task, RobustBench evaluates and ranks the adversarial robustness of trained neural networks on CIFAR10 under AutoAttack (Croce and Hein 2020b) with l∞ perturbations limited to ϵ = 8/255. With leading scores of the currently best performing models of around 60% of the baseline, it is fair to characterize this benchmark as quite challenging. Despite its general acceptance in recent literature, we aim to foster discussion about the suitability of RobustBench as a key indicator for robustness that could be generalized to practical applications. Our line of argumentation against this is two-fold and supported by extensive experiments presented in this paper: We argue that I) the alteration of data by AutoAttack with l∞, ϵ = 8/255 is unrealistically strong, resulting in close to perfect detection rates of adversarial samples even by simple detection algorithms and human observers; we also show that other attack methods are much harder to detect while achieving similar success rates. II) Results on low-resolution datasets like CIFAR10 do not generalize well to higher-resolution images, as gradient-based attacks appear to become even more detectable with increasing resolution.
In this paper, the effect of the polycrystalline microstructure on crack-tip opening displacement and crack closure is investigated for microstructurally short plane strain fatigue cracks using the finite-element method. To this end, cracks are introduced in synthetically generated microstructures and the grain properties are described using a single crystal plasticity model with kinematic hardening. Additionally, finite-element calculations without resolved microstructure and with von Mises plasticity with kinematic hardening are performed. Fully reversed strain-controlled cyclic loadings are considered under large-scale yielding conditions, as is typical for low-cycle fatigue problems. The crack opening stress and the cyclic crack-tip opening displacement are significantly influenced by the local grain structure. While the stabilized crack opening stresses obtained with the microstructure-based finite-element model are in good accordance with the von Mises plasticity results, the differences in the cyclic crack opening displacement are attributed to the asymmetric plastic strain fields in the plastic wake behind the crack tip of the microstructure-based model. The asymmetric plastic strain fields result in discontinuous and premature contact of the crack flanks.
In this work, a mathematical model for describing the performance of lithium-ion battery electrodes consisting of porous active material particles is presented. The model represents an extension of the Newman-type model, accounting for the agglomerate structure of the active material particles, here Li(Ni1/3Co1/3Mn1/3)O2 (NCM) and Li(Ni1/3Co1/3Al1/3)O2 (NCA). To this goal, an additional pore space is introduced on the active material level. The space is filled with electrolyte and a charge-transfer reaction takes place at the liquid-solid interface within the porous active material particles. Volume-averaging techniques are used to derive the model equations. A local Thiele modulus is defined and provides an insight into the potentially limiting factors on the active material level. The introduction of a liquid-phase ion transport within the active material reduces the overall transport losses, while the additional active surface area within the agglomerate lowers the charge-transfer resistance. As a consequence, calculated discharge capacities are higher for particles modeled as agglomerates. This finding is more pronounced in the case of high C-rates.
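For orientation, the classical Thiele modulus for a first-order reaction in a spherical porous particle, which the local modulus in such agglomerate models generalizes, compares the reaction rate to diffusive transport inside the particle; the exact definition used in the paper may differ in its choice of characteristic length and rate expression.

```latex
\phi = R\,\sqrt{\frac{k}{D_{\mathrm{eff}}}},
\qquad
\eta = \frac{3}{\phi}\left(\coth\phi - \frac{1}{\phi}\right)
```

Here R is the particle radius, k the volumetric first-order rate constant, D_eff the effective diffusivity, and η the effectiveness factor: for φ ≪ 1 the whole particle is utilized, while for φ ≫ 1 transport inside the agglomerate limits the rate.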
Neural networks tend to overfit the training distribution and perform poorly on out-of-distribution data. A conceptually simple solution lies in adversarial training, which introduces worst-case perturbations into the training data and thus improves model generalization to some extent. However, it is only one ingredient towards generally more robust models and requires knowledge about the potential attacks or inference-time data corruptions during model training. This paper focuses on the native robustness of models that can learn robust behavior directly from conventional training data without out-of-distribution examples. To this end, we study the frequencies in learned convolution filters. Clean-trained models often prioritize high-frequency information, whereas adversarial training enforces models to shift the focus to low-frequency details during training. By mimicking this behavior through frequency regularization of learned convolution weights, we achieve improved native robustness to adversarial attacks, common corruptions, and other out-of-distribution tests. Additionally, this method leads to more favorable shifts in decision-making towards low-frequency information, such as shapes, which inherently aligns more closely with human vision.
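A minimal sketch of this kind of frequency regularization (the mask construction and the loss weighting are illustrative assumptions, not the exact regularizer of the paper): the 2-D spectrum of each learned convolution kernel is computed, and spectral energy outside a low-frequency region is penalized and added to the training loss.

```python
import torch
import torch.nn as nn

def high_frequency_penalty(model: nn.Module, keep_radius: float = 1.0) -> torch.Tensor:
    """Spectral energy of all conv kernels outside a low-frequency disc of
    radius `keep_radius` (in FFT bins around the centered DC component)."""
    penalty = None
    for m in model.modules():
        if isinstance(m, nn.Conv2d):
            w = m.weight                                          # (out, in, kh, kw)
            spec = torch.fft.fftshift(torch.fft.fft2(w), dim=(-2, -1))
            kh, kw = w.shape[-2:]
            yy, xx = torch.meshgrid(torch.arange(kh), torch.arange(kw), indexing="ij")
            r = torch.sqrt((yy - (kh - 1) / 2) ** 2 + (xx - (kw - 1) / 2) ** 2)
            mask = (r > keep_radius).to(spec.real.dtype).to(w.device)
            term = (spec.abs() ** 2 * mask).mean()                # high-frequency energy
            penalty = term if penalty is None else penalty + term
    return penalty if penalty is not None else torch.zeros(())

# Hypothetical use inside a training step:
#   loss = criterion(model(x), y) + 0.01 * high_frequency_penalty(model)
```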
KINLI
(2023)
Consumers have ever higher expectations regarding food safety, quality and sustainability. For meat, many people also expect animals to be raised, kept and slaughtered in a species-appropriate and ethically acceptable way. The KINLI project aims to develop a data platform and services based on artificial intelligence to predict potential problems. Companies in the supply chain can then proactively adjust their processes before problems actually occur.
Heat pumps play a central role in decarbonizing the heat supply of buildings. However, implementing heat pumps in existing buildings still presents a significant challenge due to high temperature requirements. This article presents a systematic analysis of the effects of heat source temperatures, maximum heat pump condenser temperatures, and system temperatures on the seasonal performance of heat pump (HP) systems. The quantitative performance analysis encompasses over 50 heat pumps installed in residential buildings, revealing correlations between building characteristics, observed temperatures, and heat pump type. The performance of an HP system retrofitted to a 30-dwelling multifamily building is presented in more detail. The bivalent HP system combines air and ground as heat sources and achieved a seasonal performance factor of 3.25, with the gas boiler covering a share of 27%, in its first year of operation. These findings demonstrate the technical feasibility of retrofitting heat pumps in existing buildings and provide insights into overcoming the challenges associated with high temperature requirements.
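For context, the seasonal performance factor quoted above is simply the ratio of heat delivered by the heat pump to the electricity it consumed over the year; the annual heat demand used below is a hypothetical figure chosen only to illustrate the arithmetic, not a measurement from the article.

```latex
\mathrm{SPF} = \frac{Q_{\mathrm{HP}}}{W_{\mathrm{el}}}
```

If, for example, the building's annual heat demand were 200 MWh and the gas boiler covered 27 % of it, the heat pump would deliver Q_HP = 0.73 × 200 MWh = 146 MWh and, at SPF = 3.25, consume W_el = 146 / 3.25 ≈ 44.9 MWh of electricity.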
Uptakes of 9.2 mmol g−1 (40.5 wt %) for CO2 at 273 K/0.1 MPa and 15.23 mmol g−1 (3.07 wt %) for H2 at 77 K/0.1 MPa are among the highest reported for metal–organic frameworks (MOFs) and are found for a novel, highly microporous copper‐based MOF (see picture; Cu turquoise, O red, N blue). Thermal analyses show a stability of the flexible framework up to 250 °C.
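The molar and mass-based uptakes quoted above are mutually consistent under the common convention of referring the adsorbed mass to the mass of the dry adsorbent (an assumption about the convention used, made here only to show the conversion):

```latex
15.23~\mathrm{mmol\,g^{-1}} \times 2.016~\mathrm{g\,mol^{-1}} \approx 30.7~\mathrm{mg\,g^{-1}} \approx 3.07~\mathrm{wt\,\%}\;(\mathrm{H_2}),
\qquad
9.2~\mathrm{mmol\,g^{-1}} \times 44.01~\mathrm{g\,mol^{-1}} \approx 405~\mathrm{mg\,g^{-1}} \approx 40.5~\mathrm{wt\,\%}\;(\mathrm{CO_2}).
```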
Metal–organic frameworks (MOFs) as highly porous materials have gained increasing interest because of their distinct adsorption properties [1–3]. They exhibit a high potential for applications in gas separation and storage [4], as sensors [5], as well as in heterogeneous catalysis [6]. In the last few years, the H2 storage capacity of MOFs has been considerably increased. Mesoporous MOFs show high adsorption capacities for CH4, CO2, and H2 at high pressures [2, 3, 7–10]. To increase the uptake of H2 and CO2 by physisorption at ambient pressure, adsorbents with small micropores as well as high specific surface areas and micropore volumes are required [11, 12]. Such microporous materials seem to be more appropriate for gas-mixture separation by physisorption than mesoporous materials. For gas separation in MOFs, the interactions between the fluid adsorptive and "open metal sites" (coordinatively unsaturated binding sites) or the ligands are regarded as important [13]. Industrial processes, such as natural-gas purification or biogas upgrading, can be improved with those materials during a vapor-pressure swing adsorption cycle (VPSA cycle) or a temperature swing adsorption cycle (TSA cycle) [14]. The microporous MOF series CPO-27-M (M = Mg, Co, Ni, Zn), for example, shows very high CO2 uptakes at low pressures (<0.1 MPa) [15, 16]. Concerning H2 adsorption, the microporous MOF PCN-12 offers, with 3.05 wt %, the highest uptake at ambient pressure and 77 K reported to date [17].
Herein, we present a novel microporous copper-based MOF [Cu(Me-4py-trz-ia)] (1; Me-4py-trz-ia2− = 5-(3-methyl-5-(pyridin-4-yl)-4H-1,2,4-triazol-4-yl)isophthalate) with extraordinarily high CO2 and H2 uptakes at ambient pressure, the H2 uptake being similar to that in PCN-12. The ligand Me-4py-trz-ia2−, which can be obtained from cheap starting materials by a three-step synthesis in good yield, combines carboxylate, triazole, and pyridine functions and is adopted from a recently presented series of linkers [18], for which up to now only a few coordination polymers are known.