The public debate about the use of digital media in schools and classrooms misjudges the underlying interests. For more than 30 years, every new generation of digital technology has been pushed into schools: personal computers (PCs) in 1984, laptops in the 1990s, and currently Wi-Fi, tablets, and smartphones. The arguments are always the same: the devices supposedly make teaching more modern and innovative, raise pupils' motivation, and improve learning outcomes. Scientifically valid studies demonstrate the opposite. The pedagogical benefit was, and remains to this day, negative. PISA coordinator Andreas Schleicher: "We have to face the reality that technology is doing more harm than good in our schools." (Schleicher, 2016) In a study for the Vereinigung der Bayerischen Wirtschaft (vbw), the Aktionsrat Bildung confirms "statistically significantly lower competencies in the domains of mathematics and natural sciences" when primary school pupils use computers in class at least once a week, compared with pupils who use computers in class less than once a week – and nevertheless demands that schools be digitalized faster.
Something else is evidently at stake. It is the economic interests of the IT industry and the Global Education Industries (GEI), which want to privatize and commercialize education markets along Anglo-Saxon lines. It is, at the same time, the business models of the data economy, which seek to datafy all areas of life and to steer people via algorithms and cybernetic models – as in the 1950s (behaviorism, programmed learning). Digitalization is "only" the technical infrastructure for data collection; empirical educational research is the instrument for quantifying even the social sphere (Mau, 2018). After the labor market and communication, education and health are currently on the agenda of the digitalists. The problem: when social systems are rebuilt according to the binary logic of IT, they lose everything social. The most urgent task of pedagogy is therefore to expose the currently dominant thought structures of business administration and IT – empiricism, fixation on metrics, and behaviorist learning theories – as a dysfunctional and asocial wrong turn, and instead to think school and teaching once again from the perspective of the human being and his or her learning processes.
The digitalization of all areas of life is not a change of technology but a change of system. Everything we do online is datafied – ideally from before birth until after death. This data pool is analyzed with ever more sophisticated big data mining algorithms and evaluated with methods of empiricism, statistics, and pattern recognition. The human being becomes a data record. The earlier people can be measured psychometrically, the more exact the resulting personality, learning, and performance profiles – and the easier it becomes to influence them. That is the reason behind the demand for digital technology in daycare centers and primary schools. People are being trained to do what machines tell them. This is counter-enlightenment from Silicon Valley, delivered via app and web. This contribution shows what alternatives can look like.
Controlling is a term from business administration and refers not to control but to process steering. Defined goals are logged and continuously optimized through fine-grained measurements and permanent monitoring of all work steps and actions of the persons involved. In "educational controlling", this concept of planning, coordination, and control tasks is transferred to schools and universities. The goal, in line with Gary Becker's human capital theory, is the production of human capital with validated competencies. There are two problems with this: learning, and above all understanding, can be neither automated nor tested automatically. And: social systems placed under the regime of the metrics of quality management (QM) or total quality management (TQM) lose their character as social systems.
Anyone who, as an educator and scholar, engages with the topic of "digitalization and school" finds that only a few people realize the scope of the intended transformation of educational institutions into automated, algorithmically controlled learning factories. It is overlooked that theories and empirical models such as "data-driven school development" and "learning analytics" entail fundamental paradigm shifts that shake both the humanist and the Christian conception of the human being. With cybernetics and behaviorism on one side, and so-called "artificial intelligence" (AI) and the data economy business models built on it on the other, these schooling models undermine human autonomy, the right to self-determination, and freedom of action. Representatives of these disciplines claim that individual human beings and social communities alike can be programmed and steered like machines. They ignore the fact that maturity and personal responsibility are the goals of school and teaching, not machine-computed behavior control and manipulation. These aberrations are not the fault of the technology itself, which could be used differently, but of the business models of the IT providers.
Smart technologies enable close-meshed monitoring and steering of pupils. The decisive question about IT in schools is therefore: do we follow the logic of the technical systems, or do we return to the pedagogical mission of educating for maturity and personal responsibility?
Kein Mensch lernt digital
(2022)
In this book, Ralf Lankau exposes the economic interests of the IT industry and its lobbyists. He discusses both the scientific foundations (cybernetics, behaviorism) and the technical framework of networks and cloud computing before sketching concrete proposals for a reflective, responsible use of digital technology in the classroom. His thesis: we must return to our pedagogical task and make (digital) media once again what they are in face-to-face teaching: didactic aids.
The second edition takes up, in particular, the experiences with digitalization during the COVID-19 pandemic. Social learning and the shaping of pedagogical relationships proved to be important parameters for learning success. Pure distance learning, by contrast, showed that pupils who already struggled with learning fell even further behind during the pandemic.
Der Kaiser ist ja nackt
(2016)
In discussing and voting on the three motions mentioned, the Lower Saxony state parliament is deciding on more than just the distribution of the investment funds from the "Digitalpakt Schule". Fundamental questions are at stake: who decides on the teaching content at state schools and on the (media) technology used? Will the state's education policy remain committed to pupils' entitlement and right to individual education and personal development, as laid down in the state constitution (§1(4)) and in the Lower Saxony School Act (§2 Bildungsauftrag, NSchG)? Will public schools continue to provide a sound general education as the basis for social participation in democratic communities? Or will business associations and IT lobbyists prevail, advocating more and ever earlier use of digital devices in educational institutions – those who demand "programming as early as daycare" and want to blanket schools with "high-performance Wi-Fi" (CDU/SPD motion), without even considering radiation? Will schools be turned by parliamentary resolution into facilities for vocational training and job preparation (Münch, 2018, 177) – or not?
Yet it is scientifically established that the quality of schools and teaching is precisely not tied to media technology. What matters are always qualified teaching personalities, well-structured, age-appropriate lessons, and social interaction with one another (studies by Hattie, Telekom, OECD, and others). Teaching and learning are individual and social processes, not technically controllable procedures. The motions disregard both the historical evidence of the failure of media technology (Pias) and the countervailing developments already under way in the USA. Children in (expensive) private schools are once again being taught by real teachers and enjoy the "luxury of human interaction". Screens have been banned from those schools, while children at public schools have to learn on tablets without teachers (Bowles, 2018).
With these motions, the Lower Saxony state parliament is thus deciding whether IT concepts that have already failed in the USA will be repeated, or whether a discussion will be opened about sensible, pedagogically sound media concepts for schools – concepts that must not be reduced to digital technology. So who decides on teaching content and media technology at schools? The IT industry and representatives of the data economy, who want to digitalize and privatize educational offerings? Or elected representatives, guided by pedagogical expertise, who are committed to the pupils?
Scheuklappen statt Weitblick
(2019)
Der bildungsferne Campus
(2019)
During the day-to-day operation of localization systems in mines, the technical staff tends to rearrange radio equipment incorrectly: positions of devices may not be accurately marked on a map, or the marked positions may not correspond to reality. This situation may lead to positioning inaccuracies and errors in the operation of the localization system. This paper presents two Bayesian algorithms for the automatic correction of the positions of the equipment on the map, using trajectories restored by inertial measurement units mounted on mobile objects such as pedestrians and vehicles. As a basis, a predefined map of the mine, represented as an undirected weighted graph, was used as input. The algorithms were implemented using the Simultaneous Localization and Mapping (SLAM) approach. The results prove that both methods are capable of detecting misplaced access points and of providing corresponding corrections. The discrete Bayesian filter outperforms the unscented Kalman filter, which, however, requires more computational power.
This paper presents an extended version of a previously published Bayesian algorithm for the automatic correction of the positions of equipment on a map with simultaneous localization of mobile object trajectories (SLAM) in an underground mine environment represented by an undirected graph. The proposed extended SLAM algorithm requires much less preliminary data on possible equipment positions and uses an additional resample-move algorithm to significantly improve overall performance.
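The two abstracts above rest on discrete Bayesian filtering over a graph-structured mine map. The following is a minimal illustrative sketch of that general idea only; the graph, the motion model, and the Gaussian range likelihood are invented for the example and are not the authors' algorithm or data.

```python
# Sketch of a discrete Bayesian (histogram) filter over a graph of
# mine-tunnel nodes. All structures here are illustrative assumptions.

import math

# Undirected weighted graph: node -> {neighbor: edge length in meters}
GRAPH = {
    "A": {"B": 10.0},
    "B": {"A": 10.0, "C": 5.0},
    "C": {"B": 5.0},
}

def predict(belief, graph):
    """Motion model: spread probability mass to neighboring nodes."""
    new_belief = {n: 0.0 for n in graph}
    for node, p in belief.items():
        targets = list(graph[node]) + [node]   # object may also stay put
        for t in targets:
            new_belief[t] += p / len(targets)
    return new_belief

def shortest_dist(graph, a, b):
    """Shortest path length by simple relaxation (small graphs only)."""
    frontier, seen = [(a, 0.0)], {a: 0.0}
    while frontier:
        node, d = frontier.pop(0)
        for nb, w in graph[node].items():
            if nb not in seen or d + w < seen[nb]:
                seen[nb] = d + w
                frontier.append((nb, d + w))
    return seen.get(b, float("inf"))

def update(belief, graph, measured_dist, anchor, sigma=3.0):
    """Measurement model: Gaussian likelihood of the measured distance
    to a known anchor node (e.g. a fixed access point)."""
    weighted = {}
    for node, p in belief.items():
        d = shortest_dist(graph, node, anchor)
        weighted[node] = p * math.exp(-0.5 * ((d - measured_dist) / sigma) ** 2)
    z = sum(weighted.values()) or 1.0
    return {n: w / z for n, w in weighted.items()}

belief = {"A": 1 / 3, "B": 1 / 3, "C": 1 / 3}   # uniform prior
belief = predict(belief, GRAPH)
belief = update(belief, GRAPH, measured_dist=5.0, anchor="C")
best = max(belief, key=belief.get)               # most probable node
```

A full filter would alternate these two steps along the inertial trajectory; the unscented Kalman filter variant mentioned in the abstract would replace the discrete belief with a Gaussian state estimate.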
Finding clusters in high-dimensional data is a challenging research problem. Subspace clustering algorithms aim to find clusters in all possible subspaces of the dataset, where a subspace is a subset of the dimensions of the data. But the exponential increase in the number of subspaces with the dimensionality of the data renders most of the algorithms inefficient as well as ineffective. Moreover, these algorithms have ingrained data dependency in the clustering process; thus, parallelization becomes difficult and inefficient. SUBSCALE is a recent subspace clustering algorithm which is scalable with the dimensions and contains independent processing steps which can be exploited through parallelism. In this paper, we aim to leverage, firstly, the computational power of widely available multi-core processors to improve the runtime performance of the SUBSCALE algorithm. The experimental evaluation has shown linear speedup. Secondly, we are developing an approach using graphics processing units (GPUs) for fine-grained data parallelism to accelerate the computation further. First tests of the GPU implementation show very promising results.
Finding clusters in high dimensional data is a challenging research problem. Subspace clustering algorithms aim to find clusters in all possible subspaces of the dataset, where a subspace is a subset of dimensions of the data. But the exponential increase in the number of subspaces with the dimensionality of data renders most of the algorithms inefficient as well as ineffective. Moreover, these algorithms have ingrained data dependency in the clustering process, which means that parallelization becomes difficult and inefficient. SUBSCALE is a recent subspace clustering algorithm which is scalable with the dimensions and contains independent processing steps which can be exploited through parallelism. In this paper, we aim to leverage the computational power of widely available multi-core processors to improve the runtime performance of the SUBSCALE algorithm. The experimental evaluation shows linear speedup. Moreover, we develop an approach using graphics processing units (GPUs) for fine-grained data parallelism to accelerate the computation further. First tests of the GPU implementation show very promising results.
Subspace clustering aims to find all clusters in all subspaces of a high-dimensional data space. We present a massively data-parallel approach that can be run on graphics processing units. It extends a previous density-based method that scales well with the number of dimensions. Its main computational bottleneck consists of (sequentially) generating a large number of minimal cluster candidates in each dimension and using hash collisions in order to find matches of such candidates across multiple dimensions. Our approach parallelizes this process by removing previous interdependencies between consecutive steps in the sequential generation process and by applying a very efficient parallel hashing scheme optimized for GPUs. This massive parallelization gives up to 70x speedup for the bottleneck computation when it is replaced by our approach and run on current GPU hardware. We note that, depending on data size and choice of parameters, the parallelized part of the algorithm can take different percentages of the overall runtime of the clustering process, and thus the overall clustering speedup may vary significantly between different cases. However, even in our "worst-case" test, a small dataset where the computation makes up only a small fraction of the overall clustering time, our parallel approach still yields a speedup of more than 3x for the complete run of the clustering process. Our method could also be combined with parallelization of other parts of the clustering algorithm, with an even higher potential gain in processing speed.
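The gap between the 70x bottleneck speedup and the roughly 3x end-to-end speedup reported above is exactly what Amdahl's law predicts. A quick sanity check (the runtime fractions below are illustrative assumptions, not figures from the paper):

```python
# Amdahl's law: overall speedup when only a fraction p of the total
# runtime is accelerated by factor s. Fractions are illustrative.

def overall_speedup(p, s):
    """p: accelerated fraction of total runtime, s: speedup of that part."""
    return 1.0 / ((1.0 - p) + p / s)

# If the hashing bottleneck dominates (say 95% of runtime), a 70x kernel
# speedup translates into a large end-to-end gain ...
large = overall_speedup(0.95, 70.0)   # roughly 15.7x

# ... but if the bottleneck is only ~70% of runtime (small dataset), the
# same 70x kernel speedup yields just over 3x overall.
small = overall_speedup(0.70, 70.0)   # roughly 3.2x
```

This is why the paper notes that the overall clustering speedup varies with data size and parameters even though the kernel speedup itself is fixed.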
Analysis of Miniaturized Printed Flexible RFID/NFC Antennas Using Different Carrier Substrates
(2020)
Antennas for Radio Frequency Identification (RFID) provide benefits for high-frequency (HF) wireless data transmission via Near Field Communication (NFC) and many other applications. Various requirements for the design of the reader and transmitter antennas must be met in order to achieve suitable transmission quality. In this work, a miniaturized, cost-effective RFID/NFC antenna for a microelectronic measurement system is designed and printed on different flexible carrier substrates using a new, low-cost Direct Ink Writing (DIW) technology. Practical aspects such as reflection and impedance magnitude, as well as the behavior of the printed RFID/NFC antennas, are analyzed and compared to an identical copper-based antenna of the same size; the results are presented in this paper. Furthermore, the problems arising during the printing process itself on the different substrates are evaluated, the effects on the antenna characteristics under kink-free bending tests are examined, and subsequently long-term measurements are carried out.
Radio frequency identification (RFID) antennas are popular for high-frequency (HF) RFID, energy transfer, and near field communication (NFC) applications. Particularly for wireless measurement systems, RFID/NFC technology is a good option for implementing a wireless communication interface. In this context, the design of the corresponding reader and transmitter antennas plays a major role in achieving suitable transmission quality. This work proves the feasibility of rapid prototyping of an RFID/NFC antenna used for wireless communication and energy harvesting at the required frequency of 13.56 MHz. A novel, low-cost direct ink writing (DIW) technology utilizing highly viscous silver nanoparticle ink is used for this process. This paper describes the development and analysis of low-cost printed flexible RFID/NFC antennas on cost-effective substrates for a microelectronic vital-parameter measurement system. Furthermore, we compare the measured technical parameters with those of existing copper-based counterparts on an FR4 substrate.
Analysing and predicting the advance rate of a tunnel boring machine (TBM) in hard rock is integral to tunnelling project planning and execution and has been applied in industry for several decades with varying success. Most prediction models are based on or designed for large-diameter TBMs, and much research has been conducted on related tunnelling projects. However, only a few models incorporate information from projects with an outer diameter smaller than 5 m, and no penetration prediction model for pipe jacking machines exists to date. In contrast to large TBMs, small-diameter TBMs and their projects have received little attention in research. In general, they are characterised by distinctive features, including insufficient geotechnical information, sometimes rather short drive lengths, special machine designs, and partially concurring lining methods such as pipe jacking and segment lining. A database covering most of the parameters mentioned above has been compiled to investigate the performance of small-diameter TBMs in hard rock. In order to provide sufficient geological and technical variance, this database contains 37 projects with 70 geotechnically homogeneous areas. Besides the technical parameters, important geotechnical data such as lithological information, unconfined compressive strength, tensile strength, and point load index are included and evaluated. The analysis shows that segment lining TBMs achieve considerably higher penetration rates in similar geological and technical settings, mostly due to their design parameters. Different methodologies for predicting TBM penetration, including state-of-the-art models from the literature as well as newly derived regression and machine learning models, are discussed and deployed for backward modelling of the projects contained in the database. New ranges of application for small-diameter tunnelling in several industry-standard penetration models are presented, and new approaches for the penetration prediction of pipe jacking machines in hard rock are proposed.
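Regression-based penetration models of the kind mentioned above typically relate a geotechnical parameter such as unconfined compressive strength (UCS) to the penetration rate. The following one-feature sketch illustrates only the general form of such a model; the feature choice and all numbers are hypothetical placeholders, not data from the study's database.

```python
# Illustrative ordinary-least-squares fit of penetration rate vs. UCS.
# Synthetic data; not values from the paper.

def fit_linear(xs, ys):
    """Ordinary least squares for y = b0 + b1 * x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b1 = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          / sum((x - mx) ** 2 for x in xs))
    return my - b1 * mx, b1

# Unconfined compressive strength (MPa) vs. penetration rate (mm/rev)
ucs = [80.0, 120.0, 160.0, 200.0]
pen = [9.0, 7.5, 6.2, 5.1]

b0, b1 = fit_linear(ucs, pen)

def predict_penetration(strength_mpa):
    return b0 + b1 * strength_mpa
```

In such a model, stronger rock yields a lower predicted penetration rate (a negative slope b1); the machine learning models in the paper generalize this idea to many geotechnical and machine parameters at once.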
Significant improvements in module performance are possible via implementation of multi-wire electrodes. This is economically sound as long as the mechanical yield of the production is maintained. While flat ribbons have a relatively large contact area to exert forces onto the solar cell, wires with a round cross section reduce this contact area considerably – in theory to an infinitesimally thin line. Therefore, the local stresses induced by the electrodes might increase to a point where mechanical production yields suffer unacceptably.
In this paper, we assess this issue by an analytical mechanical model as well as experiments with an encapsulant-free N.I.C.E. test setup. From these, we can derive estimations for the relationship between lay-up accuracy and expected breakage losses. This paves the way for cost-optimized choices of handling equipment in industrial N.I.C.E.-wire production lines.
Many sectors, such as finance, medicine, manufacturing, and education, use blockchain applications to profit from the unique bundle of characteristics of this technology. Blockchain technology (BT) promises benefits in trustability, collaboration, organization, identification, credibility, and transparency. In this paper, we analyze how open science can benefit from this technology and its properties. To this end, we determined the requirements of an open science ecosystem and compared them with the characteristics of BT to show that the technology is suitable as an infrastructure. We also review the literature and promising blockchain-based projects for open science to describe the current research situation, examining the projects in particular for their relevance and contribution to open science and categorizing them according to their primary purpose. Several of them already provide functionalities that can have a positive impact on current research workflows. BT thus offers promising possibilities for use in science – so why is it not yet used on a large scale in that area? To answer this question, we point out various shortcomings, challenges, unanswered questions, and research potentials that we found in the literature and identified during our analysis. These topics shall serve as starting points for future research to foster BT for open science and beyond, especially in the long term.
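The trustability property that makes BT attractive as an open-science infrastructure comes from hash-linking records so that any later edit is detectable. The toy sketch below shows only that tamper-evidence principle; it is not a real blockchain and not a system from the paper, and the record contents are dummies.

```python
# Toy hash-linked log illustrating blockchain-style tamper evidence.
# Illustrative only: no consensus, no network, no real blockchain.

import hashlib
import json

def add_record(chain, payload):
    """Append a record whose hash covers both the payload and the
    previous record's hash, chaining the log together."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"payload": payload, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})
    return chain

def verify(chain):
    """Recompute every hash; any edited record breaks the chain."""
    prev = "0" * 64
    for block in chain:
        body = {"payload": block["payload"], "prev": block["prev"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["prev"] != prev or digest != block["hash"]:
            return False
        prev = block["hash"]
    return True

chain = []
add_record(chain, {"artifact": "dataset", "step": "registered"})
add_record(chain, {"artifact": "analysis", "step": "submitted"})
ok_before = verify(chain)                      # intact chain verifies
chain[0]["payload"]["step"] = "forged"
ok_after = verify(chain)                       # any edit is detected
```

In an open-science setting, such records could timestamp datasets, reviews, or preregistrations; the projects surveyed in the paper build far richer functionality on this same primitive.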
Socially assistive robots (SARs) are becoming more prevalent in everyday life, emphasizing the need to make them socially acceptable and aligned with users' expectations. Robots' appearance impacts users' behaviors and attitudes towards them. Therefore, product designers choose visual qualities to give the robot a character and to imply its functionality and personality. In this work, we sought to investigate the effect of cultural differences on Israeli and German designers' perceptions of SARs' roles and appearance in four different contexts: a service robot for an assisted living/retirement residence facility, a medical assistant robot for a hospital environment, a COVID-19 officer robot, and a personal assistant robot for domestic use. The key insight is that although Israeli and German designers share similar perceptions of visual qualities for most of the robotics roles, we found differences in the perception of the COVID-19 officer robot's role and, by that, its most suitable visual design. This work indicates that context and culture play a role in users' perceptions and expectations; therefore, they should be taken into account when designing new SARs for diverse contexts.
Socially assistive robots (SARs) are becoming more prevalent in everyday life, emphasizing the need to make them socially acceptable and aligned with users' expectations. Robots' appearance impacts users' behaviors and attitudes towards them. Therefore, product designers choose visual qualities to give the robot a character and to imply its functionality and personality. In this work, we sought to investigate the effect of cultural differences on Israeli and German designers' perceptions and preferences regarding the suitable visual qualities of SARs in four different contexts: a service robot for an assisted living/retirement residence facility, a medical assistant robot for a hospital environment, a COVID-19 officer robot, and a personal assistant robot for domestic use. Our results indicate that Israeli and German designers share similar perceptions of visual qualities and most of the robotics roles. However, we found differences in the perception of the COVID-19 officer robot's role and, by that, its most suitable visual design. This work indicates that context and culture play a role in users' perceptions and expectations; therefore, they should be taken into account when designing new SARs for diverse contexts.
A polarization mode dispersion measurement set-up based on a Mach-Zehnder interferometer was realized. Measurements were carried out on short highly birefringent fibers and on long standard telecommunication single-mode fibers. In order to ensure highly accurate results, special emphasis was placed on the evaluation of the interference pattern. The procedure is described in detail, and practical measurement results are presented.
The bandwidth behavior of graded-index multimode fibers (GI-MMFs) for different launching conditions is investigated to understand and characterize the effect of differential mode delay. In order to reduce the launch-power distribution the near field of a single-mode fiber is used to produce a controlled restricted launch. The baseband response is measured by observing the broadening of a narrow input pulse (time-domain measurement). The paper verifies the degradation in bandwidth due to profile distortion by scanning the spot of the single-mode fiber with a transversal offset from the center of the test sample. In addition, the impact of the launch-power distribution tuned by different spot-size diameters is demonstrated. Measurements were taken on ‘older’ 50-μm and 62.5-μm GI-MMFs as well as on laser-performance-optimized fibers more recently developed.
An isomorphous series of 10 microporous copper-based metal–organic frameworks (MOFs) with the general formulas ∞3[{Cu3(μ3-OH)(X)}4{Cu2(H2O)2}3(H-R-trz-ia)12] (R = H, CH3, Ph; X2– = SO42–, SeO42–, 2 NO32– (1–8)) and ∞3[{Cu3(μ3-OH)(X)}8{Cu2(H2O)2}6(H-3py-trz-ia)24Cu6]X3 (R = 3py; X2– = SO42–, SeO42– (9, 10)) is presented together with the closely related compounds ∞3[Cu6(μ4-O)(μ3-OH)2(H-Metrz-ia)4][Cu(H2O)6](NO3)2·10H2O (11) and ∞3[Cu2(H-3py-trz-ia)2(H2O)3] (12Cu), which are obtained under similar reaction conditions. The porosity of the series of cubic MOFs with twf-d topology reaches up to 66%. While the diameters of the spherical pores remain unaffected, adsorption measurements show that the pore volume can be fine-tuned by the substituents of the triazolyl isophthalate ligand and choice of the respective copper salt, that is, copper sulfate, selenate, or nitrate.
Synthesis and crystal structure of a novel copper-based MOF material are presented. The tetragonal crystal structure of ∞3[Cu4(μ4-O)(μ2-OH)2(Me2trz-pba)4] possesses a calculated solvent-accessible pore volume of 57%. Besides the preparation of single crystals, synthesis routes to microcrystalline materials are reported. While PXRD measurements ensure the phase purity of the as-synthesized material, TD-PXRD measurements and coupled DTA–TG–MS analysis confirm the stability of the network up to 230 °C. The pore volume of the microcrystalline material determined by nitrogen adsorption at 77 K depends on the synthetic conditions applied. After synthesis in DMF/H2O/MeOH the pores are blocked for nitrogen, whereas they are accessible for nitrogen after synthesis in H2O/EtOH and subsequent MeOH Soxhlet extraction. The corresponding experimental pore volume was determined by nitrogen adsorption to be VPore = 0.58 cm3 g−1. In order to characterize the new material and to show its adsorption potential, comprehensive adsorption studies with different adsorptives such as nitrogen, argon, carbon dioxide, methanol and methane at different temperatures were carried out. Unusual adsorption–desorption isotherms with one or two hysteresis loops are found – a remarkable feature of the new flexible MOF material.
The newly synthesized Zn4O-based MOF 3∞[Zn4(μ4-O){(Metrz-pba)2mPh}3]·8 DMF (1·8 DMF) of rare tungsten carbide (acs) topology exhibits a porosity of 43% and remarkably high thermal stability up to 430 °C. Single crystal X-ray structure analyses could be performed using as-synthesized as well as desolvated crystals. Besides the solvothermal synthesis of single crystals a scalable synthesis of microcrystalline material of the MOF is reported. Combined TG-MS and solid state NMR measurements reveal the presence of mobile DMF molecules in the pore system of the framework. Adsorption measurements confirm that the pore structure is fully accessible for nitrogen molecules at 77 K. The adsorptive pore volume of 0.41 cm3 g−1 correlates well with the pore volume of 0.43 cm3 g−1 estimated from the single crystal structure.
Interaction with, and capturing information from, the surroundings is dominated by vision and hearing. Haptics, on the other hand, widens the bandwidth and could also replace senses (sense switching) for the impaired. Haptic technologies are often limited to point-wise actuation. Here, we show that actuation in two-dimensional matrices instead creates a richer input. We describe the construction of a full-body garment for haptic communication with a distributed actuating network. The garment is divided into attachable-detachable panels, or add-ons, each of which can carry a two-dimensional matrix of actuating haptic elements. Each panel adds to the enhanced sensoric capability of the human–garment system, so that together a 720° system is formed. The spatial separation of the panels across different body locations supports semantic and theme-wise separation of conversations conveyed by haptics. It also achieves directional faithfulness, i.e., maintaining any directional information about a distal stimulus in the haptic input.
In the modern knowledge-based and digital economy, the value of knowledge is growing relative to other assets, and new intellectual property is being created at an ever-increasing rate. Therefore, the ability to find non-trivial solutions, systematically generate new concepts, and create intellectual property rapidly becomes crucial to achieving competitive advantage and leveraging the intellectual potential of organizations.
Cross-industry innovation is commonly understood as the identification of analogies and the interdisciplinary transfer or copying of technologies, processes, technical solutions, working principles, or models between industrial sectors. In general, creative thinking in analogies is among the most efficient ideation techniques. However, engineering graduates and specialists frequently lack the skills to think systematically across industry boundaries. To overcome this drawback, an easy-to-use method based on five analogies has been evaluated through its application by students and engineers in numerous experiments and industrial case studies. The proposed analogies help to identify and resolve engineering contradictions and to apply approaches of the Theory of Inventive Problem Solving (TRIZ) and biomimetics. The paper analyses the outcomes of the systematized analogies-based ideation and shows that its performance grows continuously with engineering experience. It defines metrics for ideation efficiency and an ideation performance function.
The paper addresses the needs of universities regarding the qualification of students, as future R&D specialists, in efficient techniques for successfully running the innovation process. It briefly describes the program of a novel one-semester course of 150 hours in new product development and inventive problem solving with the TRIZ methodology, offered to master's students at the Beuth University of Applied Sciences in Berlin. The paper outlines a multi-source educational approach, which includes a new product development project (about 50% of the complete course), theory, practical work, and self-learning with software tools for computer-aided innovation, and demonstrates examples of the students' work. The research part analyses the learning experience, identifies the factors that affect the innovation and problem-solving performance of the students, and underlines the main difficulties the students face in the course. It describes a method for measuring educational efficiency and compares the results with educational experience in industry. The presented results can help universities establish education in new product development or improve its performance.
Using patent information for identification of new product features with high market potential
(2014)
CONTEXT
The paper addresses the needs of small and medium-sized businesses regarding the qualification of R&D specialists in interdisciplinary cross-industry innovation, which promises a considerable reduction of investments and R&D expenditures. Cross-industry innovation is commonly understood as the identification of analogies and the transfer of technologies, processes, technical solutions, working principles, or business models between industrial sectors. However, engineering graduates and specialists frequently lack the advanced skills and knowledge required to run interdisciplinary innovation across industry boundaries.
PURPOSE
The study compares the efficiency of cross-industry innovation methods in a one-semester project-oriented course. It identifies the individual challenges and preferred working techniques of students with different prior knowledge, experience, and cultural contexts, which require attention from engineering educators.
APPROACH
Two parallel one-semester courses were offered to mechanical and process engineering students enrolled in bachelor's and master's degree programs at the Faculty of Mechanical and Process Engineering. The students, drawn from different years of study, worked in 12 teams of 3-6 persons each on different innovation projects, spending two hours a week in the classroom and, on average, an additional two hours weekly on their project research. Students' feedback and self-assessments concerning gained skills, the efficiency of the learned tools, and intermediate findings were documented, analysed, and discussed regularly throughout the course.
RESULTS
The analysis of numerous student projects makes it possible to compare and select the tools most appropriate for finding cross-industry solutions, such as thinking in analogies, web monitoring, function-oriented search, databases of technological effects and processes, and special creativity techniques. Applying the learned skills in practical innovation work strengthens the motivation of students and enhances their entrepreneurial competences. The suggested course and the recommendations given help facilitate the sustainable education of ambitious specialists.
CONCLUSIONS
Structured cross-industry innovation can be run successfully as a systematic process and learned in a one-semester course. The students' choice of preferred working techniques is affected by their prior knowledge in science, practical experience, and cultural context. Major outcomes of the students' innovation projects, such as the feasibility, novelty, and customer value of the concepts, are primarily influenced by the students' engineering design skills, prior knowledge of the technologies, and industrial or business experience.
The comprehensive assessment method includes 80 innovation performance parameters and 10 key indicators of innovation capability: innovation process performance, innovation system performance, market and customer orientation, technology orientation, creativity, leadership, communication and knowledge management, risk and cost management, innovative climate, and innovation competences. The cross-industry study identifies the parameters critical for innovation success and reveals different innovation performance patterns across companies.
The paper conceptualizes a systemic approach for enhancing the innovative and competitive capacity of industrial companies, named the Advanced Innovation Design Approach (AIDA), which includes the analysis, optimization, and further development of the innovation process and the promotion of an innovation climate in industrial companies. The innovation process is understood as a holistic stage-gate system comprising the following typical phases, with feedback loops and simultaneous auxiliary or follow-up processes: uncovering solution-neutral customer needs and technology and market trends; identifying needs and problems with high market potential and formulating the innovation tasks and strategy; idea generation and problem solving; evaluation and enhancement of solution ideas; creation of innovation concepts based on solution ideas; evaluation of the innovation concepts; and implementation, validation, and market launch of the chosen innovation concepts. The article presents the current state of innovation research and discusses the actual status of the innovation process in the industrial environment. It defines future research tasks for augmenting the innovation process with self-configuration, self-optimization, self-diagnostics, and intelligent information processing and communication.
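The stage-gate structure described above can be sketched as a simple pipeline in which a project advances phase by phase until a gate fails; the gate criteria and phase labels below paraphrase the abstract and are otherwise hypothetical placeholders.

```python
# Minimal sketch of a stage-gate innovation pipeline following the
# phases named in the abstract. The gate function is a hypothetical
# placeholder for real go/no-go evaluation criteria.

PHASES = [
    "uncover customer needs and trends",
    "formulate innovation tasks and strategy",
    "generate ideas and solve problems",
    "evaluate and enhance solution ideas",
    "create innovation concepts",
    "evaluate innovation concepts",
    "implement, validate and launch",
]

def run_stage_gate(project, gate):
    """Advance a project through PHASES; stop at the first failed gate.

    Returns (passed_phases, failed_phase); failed_phase is None on success.
    A failed gate is where the feedback loops of the abstract would
    route the project back to an earlier phase.
    """
    passed = []
    for phase in PHASES:
        if not gate(project, phase):
            return passed, phase
        passed.append(phase)
    return passed, None
```

Self-diagnostics or self-optimization, as envisioned in the paper's future research tasks, would correspond to the `gate` function adapting its criteria from past project outcomes.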