Ultra-low-power passive telemetry systems for industrial and biomedical applications have gained considerable popularity in recent years. Reducing the power consumption and size of the circuits poses critical challenges in ultra-low-power circuit design. Biotelemetry applications such as leakage detection in silicone breast implants require small, low-power electronics. In this doctoral thesis, the design, simulation, and measurement of a programmable mixed-signal System-on-Chip (SoC) called General Application Passive Sensor Integrated Circuit (GAPSIC) are presented. Owing to its low power consumption, GAPSIC is capable of completely passive operation. Such a batteryless passive system has lower maintenance complexity and is also free from battery-related health hazards. With a die area of 4.92 mm² and a maximum analog power consumption of 592 µW, GAPSIC has one of the best figures of merit compared to similar state-of-the-art SoCs. Regarding possible applications, GAPSIC can read out and digitally transmit the signals of resistive sensors for pressure or temperature measurements. Additionally, GAPSIC can measure electrocardiogram (ECG) signals and conductivity.
The design of GAPSIC complies with the International Organization for Standardization (ISO) 15693 standard for radio frequency identification (RFID) and the compatible NFC (near-field communication) Type 5 tag specification, corresponding to the carrier frequency of 13.56 MHz. A passive transponder developed with GAPSIC comprises external memory and very few other external components, such as an antenna and sensors. The passive tag antenna and reader antenna use inductive coupling for communication and energy transfer, which enables passive operation. A passive tag developed with GAPSIC can communicate with an NFC-compatible smart device or an ISO 15693 RFID reader. The external memory contains the programmable application-specific firmware.
As a mixed-signal SoC, GAPSIC includes both analog and digital circuitry. The analog block of GAPSIC includes a power management unit, an RFID/NFC communication unit, and a sensor readout unit. The digital block includes an integrated 32-bit microcontroller, developed by the Hochschule Offenburg ASIC design center, and digital peripherals. A 16-kilobyte random-access memory and a 16-kilobyte read-only memory constitute the internal memory of GAPSIC. GAPSIC is fabricated in a one-poly, six-metal 0.18 µm CMOS process.
The design of GAPSIC comprises two stages. In the first stage, a standalone RFID/NFC frontend chip with a power management unit, an RFID/NFC communication unit, a clock regenerator unit, and a field detector unit was designed. In the second stage, the remaining functional blocks were integrated with the blocks of the RFID/NFC frontend chip to complete GAPSIC. To reduce the power consumption, conventional low-power design techniques, such as multiple power supplies and the operation of complementary metal-oxide-semiconductor (CMOS) transistors in the sub-threshold region, were applied extensively, complemented by further innovative circuit designs.
An overvoltage protection circuit, a power rectifier, a bandgap reference circuit, and two low-dropout (LDO) voltage regulators constitute the power management unit of GAPSIC. The overvoltage protection circuit uses a novel method where three stacked transistor pairs shunt the extra voltage. In the power rectifier, four rectifier units are arranged in parallel, which is a unique approach. The four parallel rectifier units provide the optimal choice in terms of voltage drop and the area required.
The communication unit is responsible for RFID/NFC communication and incorporates demodulation and load-modulation circuitry. The demodulator circuit comprises an envelope detector, a high-pass filter, and a comparator. Following a new approach, the bandgap reference circuit itself acts as the load for the envelope detector circuit, which minimizes circuit complexity and area. For the communication between the reader and the RFID/NFC tag, amplitude-shift keying (ASK) is used to modulate signals, where the modulation index can be as low as 10%. A novel technique involving a comparator with a preset offset voltage effectively demodulates the ASK signal. With an effective die area of 0.7 mm² and a power consumption of 107 µW, the standalone RFID/NFC frontend chip has the best figure of merit among the state-of-the-art frontend chips reported in the relevant literature. A passive RFID/NFC tag developed with the standalone frontend chip together with temperature and pressure sensors demonstrates the fully passive operational capability of the frontend chip. An NFC reader device running custom-built Android-based application software reads out the sensor data from the passive tag.
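The demodulation chain described above (envelope detector followed by a comparator) can be illustrated with a short behavioral model. The Python sketch below is a generic illustration under assumed sample rates, decay constant, and reference level; it does not reproduce the actual GAPSIC circuit or its preset-offset comparator design:

```python
import math

def envelope(samples, alpha=0.02):
    """Peak-follower envelope detector: tracks the rectified input
    instantly on the way up and decays slowly (RC-like) in between."""
    env, out = 0.0, []
    for s in samples:
        env = max(abs(s), env * (1 - alpha))
        out.append(env)
    return out

def ask_demod(samples, vref=0.7):
    """Comparator stage: outputs 1 while the envelope sits above an
    assumed reference level vref (a stand-in for the preset offset)."""
    return [1 if e > vref else 0 for e in envelope(samples)]
```

Feeding the model an ASK-modulated carrier (full-amplitude cycles for a 1, reduced amplitude for a 0) recovers the bit pattern once the envelope settles.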
The sensor readout circuit consists of a channel selector with two differential and four single-ended inputs, followed by a programmable-gain instrumentation amplifier. The entire sensor readout part remains deactivated when not in use. The internal memory stores the measured offset voltage of the instrumentation amplifier, and the firmware removes this offset from the measured sensor signal. A 12-bit successive approximation register (SAR) analog-to-digital converter (ADC) based on a charge-redistribution architecture converts the measured sensor data to a digital value. The digital peripherals include a serial peripheral interface, four timers, RFID/NFC interfaces, sensor readout unit interfaces, and the 12-bit SAR logic.
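The successive-approximation search performed by a SAR ADC can be sketched behaviorally. The following model is generic: the 12-bit resolution matches the text, but the 1 V reference and the ideal comparator are simplifying assumptions, not GAPSIC's specifications:

```python
def sar_adc(vin, vref=1.0, bits=12):
    """SAR conversion as a binary search: test each bit from MSB to
    LSB and keep it if the DAC level does not exceed the input."""
    code = 0
    for i in range(bits - 1, -1, -1):
        trial = code | (1 << i)
        # ideal comparator: DAC output is trial / 2^bits * vref
        if trial / (1 << bits) * vref <= vin:
            code = trial
    return code
```

For example, sar_adc(0.5) returns 2048, i.e. mid-scale of the 12-bit range; in the charge-redistribution implementation the trial DAC levels are produced by switching a binary-weighted capacitor array.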
Two sets of studies with custom-made NFC tag antennas for biomedical applications were conducted to ascertain their compatibility with GAPSIC. The first study involved link-efficiency measurements of NFC tag antennas and an NFC reader antenna with porcine tissue. In a separate experiment, the effect of a ferrite core compared to an air core on the antenna coupling factor was investigated. With the ferrite core, the coupling factor increased fourfold.
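The coupling factor measured above relates the mutual inductance M between tag and reader coils to their self-inductances, k = M/√(L1·L2); a ferrite core raises M and hence k. The numbers in the sketch below are illustrative, not values from the thesis:

```python
import math

def coupling_factor(mutual_h, l_tag_h, l_reader_h):
    """k = M / sqrt(L1 * L2) for two inductively coupled coils
    (all inductances in henries)."""
    return mutual_h / math.sqrt(l_tag_h * l_reader_h)
```

With illustrative values, coupling_factor(2e-6, 1e-6, 16e-6) gives k = 0.5; quadrupling M, as the ferrite core roughly did in the experiment, quadruples k for the same coils.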
Among the state-of-the-art SoCs published in recent scientific articles, GAPSIC is the only passive programmable SoC with a power management unit, an RFID/NFC communication interface, a sensor readout circuit, a 12-bit SAR ADC, and an integrated 32-bit microcontroller. This doctoral research includes a preliminary study of three passive RFID tags designed with discrete components for biomedical and industrial applications such as measurements of temperature, pH, conductivity, and oxygen concentration, along with leakage detection in silicone breast implants. In addition to its small size and low power consumption, the integrated high-performance microcontroller, the robust programmable instrumentation amplifier, and the 12-bit analog-to-digital converter make GAPSIC suitable for each of the biomedical and industrial applications mentioned above. Furthermore, the simulation and measurement data show that GAPSIC is well suited for the design of a passive tag to monitor arterial blood pressure in patients with Peripheral Artery Disease (PAD), which is proposed in this doctoral thesis as an exemplary application of the developed system.
Digitalization is ubiquitous today. While new digital tools, apps, and functions are readily adopted in private life, companies often struggle to implement digitalization projects. This contribution examines the motives behind digitalization initiatives, their hurdles, and their effect on employees' workload, and, by linking these with the basic principles of the continuous improvement process, attempts to derive recommendations for their successful implementation.
Project management, and with it PM processes, methods, and tools, evolves continuously, in small, barely noticeable steps or in large, unmistakable changes. In recent years, the debate over the pros and cons of agile approaches was so omnipresent that other aspects did not always receive the attention they deserved. Recognized needs in the development of project management have not yet been turned into tangible progress. The influences of globalization and IT, as well as the changes in project work resulting from the growing demand for sustainability, are therefore examined more closely. Once project staff have been sensitized to the relevant trends, an updated competence profile and an extended canon of methods come within reach.
The age of digitalization is characterized by intensified competition. One opportunity to succeed amid growing competition therefore lies in the end-to-end digitalization of production companies. This contribution presents a three-stage generic enterprise model platform for Industrie 4.0 that focuses on the continuity of processes from customer to supplier across all levels of the enterprise. The steps for assessing and shaping progress on the way to a digitalized production company are outlined.
The advancing digitalization of schools makes it possible to store pupils' learning data in a central cloud. Proponents expect better individual support and call for a nationwide solution in order to analyze as much data as possible. Opponents fear an automated steering of learning.
Social Media Content - Effects on Fear of Missing Out and the Self-Esteem of Young Users
(2023)
Social media marketing is an important building block of a successful content strategy. Younger target groups in particular can be found on social media, often for many hours each day. Alongside the benefits social media offers its users, there are also downsides. Two negative aspects, the so-called fear of missing out and reduced self-esteem, were examined in spring 2022 in an empirical survey of 1338 people between 14 and 30 years of age. Data on general social media usage behavior were also collected. The central findings derived from the study are presented in this chapter and assessed with regard to their relevance for content marketing.
The use of brand-related user-generated content on a company's own social media channels is a highly promising approach in content marketing. Authentic content provided by users can serve numerous communication goals, such as strengthening user engagement or promoting sales. At the same time, risks such as legal aspects must be considered. To help companies exploit the potential of brand-related user-generated content, the following contribution presents a structuring framework that summarizes the essential aspects of this rather complex topic. The framework developed here was validated through expert interviews.
This contribution examines the psychological background and mechanisms of content marketing. After a brief introduction to the topic, the psychological basics required for further understanding are presented. Building on this, the general mechanism of content marketing is examined. The perspective is then reversed for the final two chapters, and the psychological factors described are used to support practitioners in choosing content marketing material and, finally, in its concrete design.
Most of the effects produced by content marketing work in the B2C sector by addressing the needs, interests, and emotions of the recipients and by relying on their largely free decision-making. In the B2B sector, people with needs, interests, and emotions are likewise addressed, but these are primarily of a professional nature, so minor differences must be made in the design.
Writing Good Texts
(2023)
Anyone who writes texts for their website wants them to be read. But readers are impatient, especially on screen. If they are not captivated within the first few seconds, they leave. Learn here which stylistic rules journalists use to win the attention of their readers or listeners and to polish texts with little effort. A few special rules also apply to structure. The chapter focuses on the teaser, the first lines meant to draw readers into the text, as well as the headline. Often, however, it is not the text that captures users' attention but a photo, ideally with an informative caption. Numerous examples from everyday journalism illustrate the points made. As a bonus, the author explains the importance of practical value and attractive occasions for publication.
Against the background of consumers' growing information and stimulus overload, target-group-appropriate content is becoming ever more important from a company's perspective, especially for achieving communication objectives. Ensuring such content requires sensible planning, production, and distribution. This contribution provides an overview of such a process and illustrates the steps necessary for successful content marketing.
Content-Marketing
(2023)
Content marketing, i.e. the planning, production, and distribution of target-group-appropriate content, has gained further importance, particularly through social media. Given the enormous amount of content constantly competing for users' attention, it is increasingly difficult for companies to be noticed. Only content that matches users' wishes and offers them some form of added value has a chance of contributing to a company's communication goals. Providing such content requires a sensible (planning) process. This book offers practitioners and students an overview of the various areas of content marketing.
CNN-based deep learning models for disease detection have become popular recently. We compared the binary classification performance of eight prominent deep learning models - DenseNet121, DenseNet169, DenseNet201, EfficientNet-b0, EfficientNet-lite4, GoogLeNet, MobileNet, and ResNet18 - on a combined pulmonary chest X-ray dataset. Despite their widespread application to medical images in different fields, there remains a knowledge gap in determining their relative performance when applied to the same dataset, a gap this study aimed to address. The dataset combined the Shenzhen, China (CH) and Montgomery, USA (MC) collections. We trained each model for binary classification, calculated the different performance parameters, and compared them. All models were trained with the same training parameters to maintain a controlled comparison environment. At the end of the study, we found distinct performance differences among the models on the pulmonary chest X-ray dataset, with DenseNet169 reaching 89.38 percent and MobileNet 92.2 percent precision.
The COVID-19 pandemic, a unique and devastating respiratory disease outbreak, has affected global populations as the disease spreads rapidly. Recent deep learning breakthroughs may improve COVID-19 prediction and forecasting as a tool for precise and fast detection; however, current methods are still being examined to achieve higher accuracy and precision. This study analyzed a collection of 8055 CT image samples, 5427 of which were COVID cases and 2628 non-COVID. The 9544 X-ray samples included 4044 COVID patients and 5500 non-COVID cases. The most accurate models are MobileNetV3 (97.872 percent), DenseNet201 (97.567 percent), and GoogLeNet Inception V1 (97.643 percent). High accuracy indicates that these models make many correct predictions; the other metrics are likewise high for MobileNetV3 and DenseNet201. An extensive evaluation using accuracy, precision, and recall allows a comprehensive comparison; in this study, the predictive models are improved by combining loss optimization with scalable batch normalization. Our analysis shows that these tactics improve model performance and resilience for advancing COVID-19 prediction and detection, and demonstrates how deep learning can improve disease handling. The methods we suggest would help healthcare systems, policymakers, and researchers make educated decisions to reduce COVID-19 and other contagious diseases.
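Both studies above rank models by accuracy, precision, and recall. As a minimal reference for how these metrics are computed for a binary classifier (standard definitions, not code from either study):

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, precision, and recall from binary labels (1 = positive)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0   # how trustworthy a positive call is
    recall = tp / (tp + fn) if tp + fn else 0.0      # how many positives are found
    return accuracy, precision, recall
```

Reporting all three together matters for class-imbalanced medical datasets like the ones above, where accuracy alone can mask many missed positive cases.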
Virtual-Reality
(2023)
Virtual reality (VR) technology enables companies to present products in ways that go far beyond traditional methods of presentation. Although integrating VR technology opens up many opportunities for companies, its use also involves risks. In particular, the lack of empirically validated findings on customer acceptance, on the effects of its use, and on cannibalization effects is a key reason why the adoption of VR in customer communication is still held back. The book addresses these research gaps and, using a user-centered, quantitative research design, identifies concrete opportunities and risks associated with the use of VR product presentations.
This paper presents the new Deep Reinforcement Learning (DRL) library RL-X and its application to the RoboCup Soccer Simulation 3D League and classic DRL benchmarks. RL-X provides a flexible and easy-to-extend codebase with self-contained single directory algorithms. Through the fast JAX-based implementations, RL-X can reach up to 4.5x speedups compared to well-known frameworks like Stable-Baselines3.
The use of artificial intelligence continues to impact a broad variety of domains, application areas, and people. However, interpretability, understandability, responsibility, accountability, and fairness of the algorithms' results - all crucial for increasing humans' trust in the systems - are still largely missing. The purpose of this seminar is to understand how these components factor into the holistic view of trust. Further, this seminar seeks to identify design guidelines and best practices for building interactive visualization systems to calibrate trust.
With the rising necessity of explainable artificial intelligence (XAI), we see an increase in task-dependent XAI methods on varying abstraction levels. XAI techniques on a global level explain model behavior and on a local level explain sample predictions. We propose a visual analytics workflow to support seamless transitions between global and local explanations, focusing on attributions and counterfactuals for time series classification. In particular, we adapt local XAI techniques (attributions) that were developed for traditional data types (images, text) to analyze time series classification, a data type that is typically less intelligible to humans. To generate a global overview, we apply local attribution methods to the data, creating explanations for the whole dataset. These explanations are projected onto two dimensions, depicting model behavior trends, strategies, and decision boundaries. To further inspect the model decision-making as well as potential data errors, a what-if analysis facilitates hypothesis generation and verification on both the global and local levels. We continuously collected and incorporated expert user feedback, as well as insights based on their domain knowledge, resulting in a tailored analysis workflow and system that tightly integrates time series transformations into explanations. Lastly, we present three use cases, verifying that our technique enables users to (1) explore data transformations and feature relevance, (2) identify model behavior and decision boundaries, and (3) understand the reasons for misclassifications.
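As a minimal illustration of the local attribution idea the workflow builds on, the sketch below computes occlusion-based attributions for a time series against a stand-in model; both the mean-based "classifier" and the window size are hypothetical stand-ins for illustration, not the models or attribution methods used in the paper:

```python
def toy_model_score(series):
    """Stand-in class score: the mean of the series (an assumption
    for illustration, not a trained classifier)."""
    return sum(series) / len(series)

def occlusion_attributions(series, window=4):
    """Occlusion attribution: zero out each window in turn and record
    how much the score drops -- large drops mark relevant segments."""
    base = toy_model_score(series)
    attributions = []
    for start in range(0, len(series), window):
        occluded = list(series)
        for i in range(start, min(start + window, len(series))):
            occluded[i] = 0.0
        attributions.append(base - toy_model_score(occluded))
    return attributions
```

Attribution vectors like these, computed for every series in a dataset, are the kind of local explanation that gets projected onto two dimensions for the global overview described above.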
There is an ongoing debate about the use and scope of Clayton M. Christensen's idea of disruptive innovation, including the question of whether it is a management buzz phrase or a valuable theory. This discussion raises the general question of how innovation in the field of management theories and concepts finds its way to the different target groups. This conceptual paper combines the different concepts of the creation and dissemination of management trends in a basic framework, based on a short review of models for the dissemination of management ideas. This framework allows an analysis of the character of new management ideas like disruptive innovation. By measuring the impact of the theory on the academic sphere using bibliometric statistics of the number of academic publications on Google Scholar and Scopus and a meta-analysis of research papers, we show the significant influence of disruptive innovation beyond a pure management fad.
Modern CNNs learn the weights of vast numbers of convolutional operators. In this paper, we raise the fundamental question of whether this is actually necessary. We show that even in the extreme case of only randomly initializing and never updating spatial filters, certain CNN architectures can be trained to surpass the accuracy of standard training. By reinterpreting the notion of pointwise (1×1) convolutions as an operator to learn linear combinations (LC) of frozen (random) spatial filters, we are able to analyze these effects and propose a generic LC convolution block that allows tuning of the linear combination rate. Empirically, we show that this approach not only allows us to reach high test accuracies on CIFAR and ImageNet but also has favorable properties regarding model robustness, generalization, sparsity, and the total number of necessary weights. Additionally, we propose a novel weight sharing mechanism, which allows sharing a single weight tensor between all spatial convolution layers to massively reduce the number of weights.
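The reinterpretation at the heart of the abstract - a pointwise (1×1) convolution computes linear combinations of the frozen spatial filters' outputs - rests on the linearity of convolution: combining the feature maps with weights w_k equals convolving once with the combined kernel. The plain-Python sketch below checks this identity numerically; it illustrates the principle only and is not the paper's LC block implementation:

```python
def conv2d(img, kernel):
    """Valid (no padding) 2D cross-correlation in plain Python."""
    kh, kw = len(kernel), len(kernel[0])
    return [[sum(kernel[a][b] * img[i + a][j + b]
                 for a in range(kh) for b in range(kw))
             for j in range(len(img[0]) - kw + 1)]
            for i in range(len(img) - kh + 1)]

def lc_conv(img, frozen_filters, weights):
    """LC idea: learnable pointwise weights combine the feature maps
    produced by frozen (random, never-updated) spatial filters."""
    maps = [conv2d(img, f) for f in frozen_filters]
    return [[sum(w * m[i][j] for w, m in zip(weights, maps))
             for j in range(len(maps[0][0]))]
            for i in range(len(maps[0]))]
```

Because the two computations agree, training only the 1×1 weights can still realize a rich family of effective spatial filters from a fixed random bank.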
Learning programming fundamentals is considered one of the most challenging and complex learning activities. Some authors have proposed visual programming language (VPL) approaches to address part of the inherent complexity [1]. A visual programming language lets users develop programs by combining program elements, such as loops, graphically rather than by specifying them textually. Visual expressions - spatial arrangements of text and graphic symbols - are used either as syntax elements or as secondary notation. VPLs are typically used for educational multimedia, video games, system development, and data warehousing/business analytics purposes. For example, Scratch, a platform of the Massachusetts Institute of Technology, is designed for kids and after-school programs.
The design of mobile software applications is considered one of the most challenging application domains due to the sensors built into mobile devices, such as GPS, camera, or Near Field Communication (NFC). Sensors enable the creation of context-aware mobile applications that can discover and take advantage of contextual information, such as the user's location, nearby people and objects, and the current user activity. As a consequence, context-aware mobile applications can sense clues about the situational environment, making mobile devices more intelligent, adaptive, and personalized. Such context-aware mobile applications seem to be motivating and attractive case studies, especially for programming beginners ("my own first app").
In this work, we introduce a use-case-centered approach with a clear separation of user interface design and sensor-based program development. We provide an in-depth discussion of a new VPL-based teaching method, a step-by-step development process that enables programming beginners to create context-aware mobile applications. Finally, we argue that addressing the challenges faced by programming beginners with our teaching approach could make programming teaching more motivating, with an additional impact on the final software quality and scalability.
The key contributions of our study are the following:
- An overview of existing attempts to use VPL approaches for mobile applications
- A use case centered teaching approach based on a clear separation of user interface design and sensor-based program development
- A teaching case study enabling beginners to create context-aware mobile applications step by step, based on the MIT App Inventor (a platform of the Massachusetts Institute of Technology)
- Open research challenges and perspectives for further development of our teaching approach
References:
[1] Idrees, M., & Aslam, F. (2022). A Comprehensive Survey and Analysis of Diverse Visual Programming Languages. VFAST Transactions on Software Engineering, 10(2), 47-60.
During pyrolysis, biomass is carbonised in the absence of oxygen to produce biochar, with heat and/or electricity as co-products, making pyrolysis one of the promising negative emission technologies for reaching climate goals worldwide. This paper presents a simplified representation of pyrolysis and analyses the impact of this technology on the energy system. Results show that the use of pyrolysis can achieve zero emissions at lower cost by changing the unit commitment of the power plants; e.g., conventional power plants are used differently, as their emissions are compensated by biochar. Additionally, the process of pyrolysis can enhance the flexibility of energy systems: there is a correlation between the electricity generated by pyrolysis and the installed hydrogen capacity, with hydrogen being used less when pyrolysis is deployed. The results indicate that pyrolysis, which is already available on the market, integrates well into the energy system with a promising potential to sequester carbon.
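The compensation mechanism described above - biochar sequestration offsetting residual power-plant emissions - amounts to a simple carbon balance. The sketch below uses purely illustrative coefficients, not values from the paper:

```python
def net_emissions_tco2(fossil_mwh, tco2_per_mwh, biochar_t, seq_tco2_per_t):
    """Net system emissions: fossil generation emissions minus the CO2
    sequestered in biochar (all coefficients illustrative)."""
    return fossil_mwh * tco2_per_mwh - biochar_t * seq_tco2_per_t
```

With 100 MWh of fossil generation at an assumed 0.5 tCO2/MWh, producing 20 t of biochar that sequesters an assumed 2.5 tCO2/t drives the balance to zero, illustrating how conventional plants can stay in the unit commitment while the system still reaches net-zero.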
TRIZ Innovationstechnologie
(2023)
3D Bin Picking with an innovative powder filled gripper and a torque controlled collaborative robot
(2023)
A new and innovative powder-filled gripper concept is introduced for picking parts out of a box without a camera system guiding the robot to the part. The gripper combines an inflatable skin with a powder inside. In the unjammed state, the powder is soft and can conform to the geometry of the part to be handled. When a vacuum is applied to the inflatable skin, the powder jams and solidifies into the shape the gripper was brought into before the vacuum was applied. This physical principle is used to pick parts. The flexible skin of the gripper adapts to all kinds of shapes and can therefore be used to realize 3D bin picking. With the help of a force-controlled robot, the gripper can be pressed with a constant force at varying positions depending on the filling level of the box. A KUKA LBR iiwa with joint torque sensors in all of its seven axes was used to achieve a constant contact pressure. This is the basic criterion for a robust picking process.
Socially assistive robots (SARs) are becoming more prevalent in everyday life, emphasizing the need to make them socially acceptable and aligned with users' expectations. Robots' appearance impacts users' behaviors and attitudes towards them. Therefore, product designers choose visual qualities to give the robot a character and to imply its functionality and personality. In this work, we sought to investigate the effect of cultural differences on Israeli and German designers' perceptions and preferences regarding the suitable visual qualities of SARs in four different contexts: a service robot for an assisted living/retirement residence facility, a medical assistant robot for a hospital environment, a COVID-19 officer robot, and a personal assistant robot for domestic use. Our results indicate that Israeli and German designers share similar perceptions of visual qualities and most of the robotics roles. However, we found differences in the perception of the COVID-19 officer robot's role and, by that, its most suitable visual design. This work indicates that context and culture play a role in users' perceptions and expectations; therefore, they should be taken into account when designing new SARs for diverse contexts.
Recent advances in spiked shoe design, characterized by increased longitudinal stiffness, thicker midsole foams, and reconfigured geometry, are considered to improve sprint performance. However, so far no empirical data on the effects of advanced spike technology on maximal sprinting speed (MSS) have been published. Consequently, we assessed MSS via 'flying 30 m' sprints of 44 trained male (PR: 10.32 s - 12.08 s) and female (PR: 11.56 s - 14.18 s) athletes wearing both traditional and advanced spikes in a randomized, repeated-measures design. The results revealed a statistically significant increase in MSS of 1.21% on average when using advanced spike technology. Notably, 87% of participants showed improved MSS with the advanced spikes. A cluster analysis revealed that athletes with higher MSS may benefit to a greater extent. However, individual responses varied widely, suggesting the influence of multiple factors that need detailed exploration. Coaches and athletes are therefore advised to interpret the promising performance enhancements cautiously and to critically evaluate the appropriateness of the advanced spike technology for their athletes.
High-tech running shoes and spikes ("super footwear") are currently being debated in sports. There is direct evidence that super distance running shoes improve running economy; however, it is not well established to what extent world-class performances are affected over the range of track and road running events.
This study examined publicly available performance datasets of annual best track and road performances for evidence of potential systematic performance effects following the introduction of super footwear. The analysis was based on the 100 best performances per year for men and women in outdoor events from 2010 to 2022, provided by the world governing body of athletics (World Athletics).
We found evidence of progressive improvements in track and road running performances after the introduction of super distance running shoes in 2016 and super spike technology in 2019. This evidence is more pronounced for distances longer than 1500 m in women and longer than 5000 m in men. Women seem to benefit more from super footwear in distance running events than men.
While the observational study design limits causal inference, this study provides a database on potential systematic performance effects following the introduction of super shoes/spikes in track and road running events in world-class athletes. Further research is needed to examine the underlying mechanisms and, in particular, potential sex differences in the performance effects of super footwear.
We revisit the quantitative analysis of the ultrafast magnetoacoustic experiment in a freestanding nickel thin film by Kim and Bigot [J.-W. Kim and J.-Y. Bigot, Phys. Rev. B 95, 144422 (2017)] by applying our recently proposed approach of magnetic and acoustic eigenmode decomposition. We show that the application of our modeling to the analysis of time-resolved reflectivity measurements allows for the determination of amplitudes and lifetimes of standing perpendicular acoustic phonon resonances with unprecedented accuracy. The acoustic damping is found to scale as ∝ω² for frequencies up to 80 GHz, and the peak amplitudes reach 10⁻³. The experimentally measured magnetization dynamics for different orientations of an external magnetic field agrees well with numerical solutions of magnetoelastically driven magnon harmonic oscillators. Symmetry-based selection rules for magnon-phonon interactions predicted by our modeling approach allow for the unambiguous discrimination between spatially uniform and nonuniform modes, as confirmed by comparing the resonantly enhanced magnetoelastic dynamics simultaneously measured on opposite sides of the film. Moreover, the separation of timescales for (early) rising and (late) decreasing precession amplitudes provides access to magnetic (Gilbert) and acoustic damping parameters in a single measurement.
While most ultrafast time-resolved optical pump-probe experiments in magnetic materials reveal the spatially homogeneous magnetization dynamics of ferromagnetic resonance (FMR), here we explore the magneto-elastic generation of GHz-to-THz frequency spin waves (exchange magnons). Using analytical magnon oscillator equations, we apply time-domain and frequency-domain approaches to quantify the results of ultrafast time-resolved optical pump-probe experiments in free-standing ferromagnetic thin films. Simulations show excellent agreement with the experiment, provide acoustic and magnetic (Gilbert) damping constants and highlight the role of symmetry-based selection rules in phonon-magnon interactions. The analysis is extended to hybrid multilayer structures to explore the limits of resonant phonon-magnon interactions up to THz frequencies.
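The magnon oscillator picture used in the two abstracts above can be sketched schematically as a damped harmonic oscillator for each magnon amplitude, magnetoelastically forced by the strain of the matching phonon mode (symbols and prefactors here are illustrative, not taken from the papers):

```latex
% magnon amplitude m_k with lifetime \tau_k (set by Gilbert damping) and
% frequency \omega_k, driven by the phonon strain \varepsilon_k(t) through a
% magnetoelastic coupling b; \sigma_k is a symmetry-based selection factor
% (zero for symmetry-forbidden phonon-magnon pairs)
\ddot{m}_k + \frac{2}{\tau_k}\,\dot{m}_k + \omega_k^2\, m_k
  = \sigma_k\, b\, \varepsilon_k(t)
```

Resonant driving occurs when the phonon frequency matches ω_k, and σ_k encodes the symmetry-based selection rules mentioned in the abstracts.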
The technique of laser ultrasonics perfectly meets the need for noncontact, noninvasive, nondestructive mechanical probing of nanometer- to millimeter-size samples. However, this technique is limited to the excitation of low-amplitude strains, below the threshold for optical damage of the sample. In the context of strain engineering of materials, alternative optical techniques enabling the excitation of high-amplitude strains in a nondestructive optical regime are needed. We introduce here a nondestructive method for laser-shock wave generation based on additive superposition of multiple laser-excited strain waves. This technique enables strain generation up to mechanical failure of a sample at pump laser fluences below optical ablation or melting thresholds. We demonstrate the ability to generate nonlinear surface acoustic waves (SAWs) in Nb-SrTiO3 substrates, with associated strains in the percent range and pressures up to 3 GPa at 1 kHz repetition rate and close to 10 GPa for several hundred shocks. This study paves the way for the investigation of a host of high-strain SAW-induced phenomena, including phase transitions in conventional and quantum materials, plasticity and a myriad of material failure modes, chemistry and other effects in bulk samples, thin layers, and two-dimensional materials.
The utilisation of artificial intelligence (AI) is progressively emerging as a significant mechanism for innovation in human resource management (HRM), owing to its capacity to transform employee performance across numerous responsibilities. Despite this rapid AI development, there remains a dearth of comprehensive exploration into the potential opportunities it presents for enhancing workplace performance among employees. To bridge this gap in knowledge, the present work carried out a survey with 300 participants and utilises a fuzzy set-theoretic method that is grounded in the conceptualisation of AI, knowledge sharing (KS), and HRM. The findings of our study indicate that the exclusive adoption of AI technologies does not adequately enhance HRM engagements. In contrast, the integration of AI and KS offers a more viable HRM approach for achieving optimal performance in a dynamic digital society. This approach has the potential to enhance employees’ proficiency in executing their responsibilities and to cultivate a culture of creativity within the firm.
Purpose
Although start-ups have gained increasing scholarly attention, we lack sufficient understanding of their entrepreneurial strategic posture (ESP) in emerging economies. The purpose of this study is to examine the processes of ESP of new technology venture start-ups (NTVs) in an emerging market context.
Design/methodology/approach
In line with grounded theory guidelines and the inductive research traditions, the authors adopted a qualitative approach involving 42 in-depth semi-structured interviews with Ghanaian NTV entrepreneurs to gain a comprehensive analysis at the micro-level on the entrepreneurs' strategic posturing. A systematic procedure for data analysis was adopted.
Findings
From the authors' analysis of Ghanaian NTVs, the authors derived a three-stage model to elucidate the nature and process of ESP: Phase I, spotting and exploiting market opportunities; Phase II, identifying initial advantages; and Phase III, ascertaining and responding to change.
Originality/value
The study contributes to advancing research on ESP by explicating the process through which informal ties and networks are utilised by NTVs and NTVs' founders to overcome extreme resource constraints and information vacuums in contexts of institutional voids. The authors depart from past studies in demonstrating how such ties can be harnessed in spotting and exploiting market opportunities by NTVs. On this basis, the paper makes original contributions to ESP theory and practice.
Purpose
Although recent literature has examined diverse measures adopted by SMEs to navigate the COVID-19 turbulence, there is a shortage of evidence on how crisis-time strategy creation behaviour and digitalization activities increase (1) sales and (2) cash flow. Thus, predicated on a novel strategy creation perspective, this inquiry aims to investigate the crisis behaviour, sales and cash flow performance of 528 SMEs in Morocco.
Design/methodology/approach
Novel links between (1) aggregate wage cuts, (2) variable operating hours, (3) deferred payment to suppliers, (4) deferred payment to tax authorities and (5) sales performance are developed and tested. A further link between sales performance and cash flow is also examined and the analysis is conducted using a non-linear structural equation modelling technique.
Findings
While there is a significant association between strategy creation behaviours and sales performance, only variable operating hours have a positive effect. Also, sales performance increases cash flow and this relationship is substantially strengthened by e-commerce digitalization and innovation.
Originality/value
Theoretically, to the best of the authors’ knowledge, this is one of the first inquiries to espouse the strategy creation view to explain SMEs' crisis-time behaviour and digitalization. For practical purposes, to supplement Moroccan SMEs' propensity to seek tax deferrals, it is argued that debt and equity support measures are also needed to boost sales performance and cash flow.
An international study summarizes the threat situation in the OT environment under the heading "Growing security threats" [1]. According to this study, attacks on automation systems are likely to increase in the future. Accordingly, an automation system must be able to protect the integrity of the transmitted information in the future. This requirement is motivated, among other things, by the fact that the network-side isolation of industrial communication systems is no longer considered sufficient as the sole protective measure. This paper uses the example of PROFINET to show how the future requirements for a real-time communication protocol can be met and how they can be derived from the IEC 62443 standard.
Polyarticulated active prostheses constitute a promising solution for upper limb amputees. The bottleneck for their adoption though, is the lack of intuitive control. In this context, machine learning algorithms based on pattern recognition from electromyographic (EMG) signals represent a great opportunity for naturally operating prosthetic devices, but their performance is strongly affected by the selection of input features. In this study, we investigated different combinations of 13 EMG-derived features obtained from EMG signals of healthy individuals performing upper limb movements and tested their performance for movement classification using an Artificial Neural Network. We found that input data (i.e., the set of input features) can be reduced by more than 50% without any loss in accuracy, while diminishing the computing time required to train the classifier. Our results indicate that input features must be properly selected in order to optimize prosthetic control.
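As an illustration of the kind of EMG-derived input features discussed above, the following sketch computes four classic time-domain features from one signal window. The specific 13 features used in the study are not listed in the abstract, so these four (MAV, RMS, waveform length, zero crossings) are common examples, not the study's exact feature set:

```python
import math

def emg_features(window):
    """Compute four classic time-domain EMG features from one signal window.
    Illustrative examples only; the study evaluated 13 features."""
    n = len(window)
    mav = sum(abs(x) for x in window) / n            # mean absolute value
    rms = math.sqrt(sum(x * x for x in window) / n)  # root mean square
    # waveform length: cumulative absolute sample-to-sample change
    wl = sum(abs(window[i + 1] - window[i]) for i in range(n - 1))
    # zero crossings: sign changes between consecutive samples
    zc = sum(1 for i in range(n - 1) if window[i] * window[i + 1] < 0)
    return {"MAV": mav, "RMS": rms, "WL": wl, "ZC": zc}
```

A feature-selection study such as the one described would compute a vector like this per channel and window, then compare classifier accuracy across feature subsets.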
The main focus of this chapter is the theoretical and instrumental processes that underpin densitometric methods widely used in thin-layer chromatography (TLC). Densitometric methods include UV–vis, luminescence, and fluorescence optical measurements as well as infrared and Raman spectroscopic measurements. The chapter is divided into two general parts: a theoretical part and a practical part. The systems for direct radioactivity measurements and the combination of TLC with mass spectrometry are also discussed. All these systems allow measuring an intensity distribution directly on a TLC plate. We call this “in situ detection” because no analyte is removed from the plate.
Following the traditional paradigm of convolutional neural networks (CNNs), modern CNNs manage to keep pace with more recent, for example transformer-based, models by not only increasing model depth and width but also the kernel size. This results in large amounts of learnable model parameters that need to be handled during training. While following the convolutional paradigm with the according spatial inductive bias, we question the significance of "learned" convolution filters. In fact, our findings demonstrate that many contemporary CNN architectures can achieve high test accuracies without ever updating randomly initialized (spatial) convolution filters. Instead, simple linear combinations (implemented through efficient 1×1 convolutions) suffice to effectively recombine even random filters into expressive network operators. Furthermore, these combinations of random filters can implicitly regularize the resulting operations, mitigating overfitting and enhancing overall performance and robustness. Conversely, retaining the ability to learn filter updates can impair network performance. Lastly, although we only observe relatively small gains from learning 3×3 convolutions, the learning gains increase proportionally with kernel size, owing to the non-idealities of the independent and identically distributed (i.i.d.) nature of default initialization techniques.
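The core idea above, frozen random spatial filters recombined by a learnable 1×1 convolution, can be sketched with plain NumPy. The filter bank size and shapes are illustrative, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_single(img, kernel):
    """'Valid' 2D correlation of one 2D image with one 2D kernel."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# Frozen, randomly initialized 3x3 spatial filters (never updated in training).
random_filters = rng.standard_normal((4, 3, 3))

def random_filter_layer(img, mix):
    """Apply the frozen random filters, then recombine the resulting channels
    with learnable per-channel weights 'mix' -- this per-pixel linear
    combination is exactly what a 1x1 convolution computes."""
    responses = np.stack([conv2d_single(img, f) for f in random_filters])
    return np.tensordot(mix, responses, axes=1)
```

During training, only `mix` (and analogous 1×1 weights in deeper layers) would receive gradient updates, while `random_filters` stays fixed.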
We have developed a methodology for the systematic generation of a large image dataset of macerated wood references, which we used to generate image data for nine hardwood genera. This is the basis for a substantial approach to automate, for the first time, the identification of hardwood species in microscopic images of fibrous materials by deep learning. Our methodology includes a flexible pipeline for easy annotation of vessel elements. We compare the performance of different neural network architectures and hyperparameters. Our proposed method performs similarly well to human experts. In the future, this will improve controls on global wood fiber product flows to protect forests.
State-of-the-art models for pixel-wise prediction tasks such as image restoration, image segmentation, or disparity estimation, involve several stages of data resampling, in which the resolution of feature maps is first reduced to aggregate information and then sequentially increased to generate a high-resolution output. Several previous works have investigated the effect of artifacts that are introduced during downsampling, and diverse cures have been proposed that help to improve prediction stability and even robustness for image classification. However, equally relevant artifacts that arise during upsampling have been discussed less. This is particularly relevant as upsampling and downsampling approaches face fundamentally different challenges. While during downsampling, aliases and artifacts can be reduced by blurring feature maps, the emergence of fine details is crucial during upsampling. Blurring is therefore not an option and dedicated operations need to be considered. In this work, we are the first to explore the relevance of context during upsampling by employing convolutional upsampling operations with increasing kernel size while keeping the encoder unchanged. We find that increased kernel sizes can in general improve the prediction stability in tasks such as image restoration or image segmentation, while a block that allows for a combination of small-size kernels for fine details and large-size kernels for artifact removal and increased context yields the best results.
Fix your downsampling ASAP! Be natively more robust via Aliasing and Spectral Artifact free Pooling
(2023)
Convolutional neural networks encode images through a sequence of convolutions, normalizations and non-linearities as well as downsampling operations into potentially strong semantic embeddings. Yet, previous work showed that even slight mistakes during sampling, leading to aliasing, can be directly attributed to the networks' lack in robustness. To address such issues and facilitate simpler and faster adversarial training, [12] recently proposed FLC pooling, a method for provably alias-free downsampling - in theory. In this work, we conduct a further analysis through the lens of signal processing and find that such current pooling methods, which address aliasing in the frequency domain, are still prone to spectral leakage artifacts. Hence, we propose aliasing and spectral artifact-free pooling, short ASAP. While only introducing a few modifications to FLC pooling, networks using ASAP as downsampling method exhibit higher native robustness against common corruptions, a property that FLC pooling was missing. ASAP also increases native robustness against adversarial attacks on high and low resolution data while maintaining similar clean accuracy or even outperforming the baseline.
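The frequency-domain downsampling that FLC pooling performs, and that ASAP refines, can be sketched as cutting out the low-frequency part of a feature map's Fourier spectrum. This is a minimal FLC-style sketch of factor-2 downsampling; ASAP's additional modifications against spectral leakage are not reproduced here:

```python
import numpy as np

def frequency_lowcut_pool(x):
    """Downsample a 2D feature map by a factor of 2 by keeping only the
    lowest frequencies of its Fourier transform (FLC-pooling-style sketch).
    Discarding all high frequencies removes the aliases that naive striding
    would fold into the output."""
    h, w = x.shape
    spectrum = np.fft.fftshift(np.fft.fft2(x))  # DC component in the center
    ch, cw = h // 2, w // 2
    # keep the central (low-frequency) quarter of the spectrum
    low = spectrum[ch - h // 4: ch + h // 4, cw - w // 4: cw + w // 4]
    # /4 compensates the FFT normalization so amplitudes are preserved
    return np.real(np.fft.ifft2(np.fft.ifftshift(low))) / 4
```

A constant feature map, for example, survives this pooling unchanged, since all of its energy sits in the DC bin that the crop retains.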
Motivated by the recent trend towards the usage of larger receptive fields for more context-aware neural networks in vision applications, we aim to investigate how large these receptive fields really need to be. To facilitate such study, several challenges need to be addressed, most importantly: (i) We need to provide an effective way for models to learn large filters (potentially as large as the input data) without increasing their memory consumption during training or inference, (ii) the study of filter sizes has to be decoupled from other effects such as the network width or number of learnable parameters, and (iii) the employed convolution operation should be a plug-and-play module that can replace any conventional convolution in a Convolutional Neural Network (CNN) and allow for an efficient implementation in current frameworks. To facilitate such models, we propose to learn not spatial but frequency representations of filter weights as neural implicit functions, such that even infinitely large filters can be parameterized by only a few learnable weights. The resulting neural implicit frequency CNNs are the first models to achieve results on par with the state-of-the-art on large image classification benchmarks while executing convolutions solely in the frequency domain and can be employed within any CNN architecture. They allow us to provide an extensive analysis of the learned receptive fields. Interestingly, our analysis shows that, although the proposed networks could learn very large convolution kernels, the learned filters practically translate into well-localized and relatively small convolution kernels in the spatial domain.
Assessing the robustness of deep neural networks against out-of-distribution inputs is crucial, especially in safety-critical domains like autonomous driving, but also in safety systems where malicious actors can digitally alter inputs to circumvent safety guards. However, designing effective out-of-distribution tests that encompass all possible scenarios while preserving accurate label information is a challenging task. Existing methodologies often compromise on either the variety or the constraint level of attacks, and sometimes on both. In a first step towards a more holistic robustness evaluation of image classification models, we introduce an attack method based on image solarization that is conceptually straightforward yet avoids jeopardizing the global structure of natural images independent of the intensity. Through comprehensive evaluations of multiple ImageNet models, we demonstrate the attack's capacity to degrade accuracy significantly, provided it is not integrated into the training augmentations. Interestingly, even then, no full immunity to accuracy deterioration is achieved. In other settings, the attack can often be simplified into a black-box attack with model-independent parameters. Defenses against other corruptions do not consistently extend to be effective against our specific attack.
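The image operation underlying the attack is the classic solarization transform: pixel values above a threshold are inverted while the rest stay untouched, which preserves the global image structure. A minimal sketch for 8-bit pixel values follows; the paper's attack additionally searches over the threshold/intensity, which is not shown here:

```python
def solarize(pixels, threshold=128):
    """Invert every 8-bit pixel value at or above the threshold.
    Pixels below the threshold are left unchanged, so edges and the
    global layout of the image are preserved."""
    return [255 - p if p >= threshold else p for p in pixels]
```

Applied per channel to a full image, sweeping the threshold yields a family of label-preserving corruptions that can be used as a black-box robustness test.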
Project website: https://github.com/paulgavrikov/adversarial_solarization
Entity Matching (EM) defines the task of learning to group objects by transferring semantic concepts from example groups (=entities) to unseen data. Despite the general availability of image data in the context of many EM-problems, most currently available EM-algorithms solely rely on (textual) meta data. In this paper, we introduce the first publicly available large-scale dataset for "visual entity matching", based on a production level use case in the retail domain. Using scanned advertisement leaflets, collected over several years from different European retailers, we provide a total of ~786k manually annotated, high resolution product images containing ~18k different individual retail products which are grouped into ~3k entities. The annotation of these product entities is based on a price comparison task, where each entity forms an equivalence class of comparable products. In a first baseline evaluation, we show that the proposed "visual entity matching" constitutes a novel learning problem which cannot sufficiently be solved using standard image based classification and retrieval algorithms. Instead, novel approaches which allow to transfer example based visual equivalence classes to new data are needed to address the proposed problem. The aim of this paper is to provide a benchmark for such algorithms.
Information about the dataset, evaluation code and download instructions are provided under https://www.retail-786k.org/.
Differentiation between human and non-human objects can increase the efficiency of human-robot collaborative applications. This paper proposes to use convolutional neural networks for classifying objects in robotic applications. The body temperature of human beings is used to classify humans and to estimate the distance to the sensor. Using image classification with convolutional neural networks it is possible to detect humans in the surroundings of a robot up to five meters distance with low-cost and low-weight thermal cameras. Using the transfer learning technique, we trained GoogLeNet and MobileNetV2. The results show accuracies of 99.48 % and 99.06 %, respectively.
Detecting Images Generated by Deep Diffusion Models using their Local Intrinsic Dimensionality
(2023)
Diffusion models recently have been successfully applied for the visual synthesis of strikingly realistic appearing images. This raises strong concerns about their potential for malicious purposes. In this paper, we propose using the lightweight multi Local Intrinsic Dimensionality (multiLID), which was originally developed in the context of the detection of adversarial examples, for the automatic detection of synthetic images and the identification of the according generator networks. In contrast to many existing detection approaches, which often only work for GAN-generated images, the proposed method provides close to perfect detection results in many realistic use cases. Extensive experiments on known and newly created datasets demonstrate that the proposed multiLID approach exhibits superiority in diffusion detection and model identification. Since the empirical evaluations of recent publications on the detection of generated images are often mainly focused on the "LSUN-Bedroom" dataset, we further establish a comprehensive benchmark for the detection of diffusion-generated images, including samples from several diffusion models with different image sizes. The code for our experiments is provided at https://github.com/deepfake-study/deepfake-multiLID.
Following their success in visual recognition tasks, Vision Transformers (ViTs) are being increasingly employed for image restoration. As a few recent works claim that ViTs for image classification also have better robustness properties, we investigate whether the improved adversarial robustness of ViTs extends to image restoration. We consider the recently proposed Restormer model, as well as NAFNet and the "Baseline network" which are both simplified versions of a Restormer. We use Projected Gradient Descent (PGD) and CosPGD for our robustness evaluation. Our experiments are performed on real-world images from the GoPro dataset for image deblurring. Our analysis indicates that, contrary to what is advocated in ViT image classification works, these models are highly susceptible to adversarial attacks. We attempt to find an easy fix and improve their robustness through adversarial training. While this yields a significant increase in robustness for Restormer, results on other networks are less promising. Interestingly, we find that the design choices in NAFNet and Baselines, which were based on i.i.d. performance and not on robust generalization, seem to be at odds with model robustness.
The identification of vulnerabilities is an important element in the software development life cycle to ensure the security of software. While vulnerability identification based on the source code is a well studied field, the identification of vulnerabilities on the basis of a binary executable without the corresponding source code is more challenging. Recent research [1] has shown how such detection can generally be enabled by deep learning methods, but appears to be very limited regarding the overall amount of detected vulnerabilities. We analyse to what extent we could cover the identification of a larger variety of vulnerabilities. Therefore, a supervised deep learning approach using recurrent neural networks for the application of vulnerability detection based on binary executables is used. The underlying basis is a dataset with 50,651 samples of vulnerable code in the form of a standardised LLVM Intermediate Representation. The vectorised features of a Word2Vec model are used to train different variations of three basic architectures of recurrent neural networks (GRU, LSTM, SRNN). A binary classification was established for detecting the presence of an arbitrary vulnerability, and a multi-class model was trained for the identification of the exact vulnerability, which achieved an out-of-sample accuracy of 88% and 77%, respectively. Differences in the detection of different vulnerabilities were also observed, with non-vulnerable samples being detected with a particularly high precision of over 98%. Thus, our proposed technical approach and methodology enables an accurate detection of 23 (compared to 4 [1]) vulnerabilities.
The importance of machine learning (ML) has been increasing dramatically for years. From assistance systems to production optimisation to healthcare support, almost every area of daily life and industry is coming into contact with machine learning. Besides all the benefits ML brings, the lack of transparency and difficulty in creating traceability pose major risks. While solutions exist to make the training of machine learning models more transparent, traceability is still a major challenge. Ensuring the identity of a model is another challenge, as unnoticed modification of a model is also a danger when using ML. This paper proposes to create an ML Birth Certificate and ML Family Tree secured by blockchain technology. Important information about training and changes to the model through retraining can be stored in a blockchain and accessed by any user to create more security and traceability about an ML model.
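The tamper-evidence idea behind the proposed ML Birth Certificate can be illustrated with a simple hash chain, the basic mechanism a blockchain builds on: each record of a training or retraining event includes the hash of its predecessor, so any later modification breaks verification. This is a conceptual sketch; the record fields are illustrative and not taken from the paper:

```python
import hashlib
import json

def record_event(chain, event):
    """Append a training/retraining event to a simple hash chain.
    'event' is any JSON-serializable record (fields are illustrative)."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"event": event, "prev_hash": prev_hash}
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    chain.append(block)
    return chain

def verify(chain):
    """Recompute every hash; any unnoticed modification of an event or of
    the chain order makes verification fail."""
    prev = "0" * 64
    for block in chain:
        payload = json.dumps(
            {"event": block["event"], "prev_hash": block["prev_hash"]},
            sort_keys=True).encode()
        if block["prev_hash"] != prev:
            return False
        if block["hash"] != hashlib.sha256(payload).hexdigest():
            return False
        prev = block["hash"]
    return True
```

A real blockchain additionally distributes and consensus-validates this chain, which is what makes the certificate trustworthy across parties rather than only tamper-evident locally.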
Grundzüge der Strömungslehre
(2023)
This well-established textbook presents the fundamentals of fluid mechanics in a concise and mathematically accessible form. Exercises with solutions help readers apply the material correctly and deepen their understanding. The book is suitable for accompanying and consolidating lectures on fluid mechanics as well as for self-study. The present edition addresses the ever-growing role of the energy balance and thus reflects current developments. Up-to-date exercises in fluid mechanics have been added, and numerous examples illustrate the energy equation.
Automation devices or automation stations (AS) take on the task of controlling, regulating, monitoring and, if necessary, optimising building systems and their system components (e.g. pumps, compressors, fans) based on recorded process variables. For this purpose, a wide range of control and regulation methods are used, starting with simple on/off controllers, through classic PID controllers, to higher-order controllers such as adaptive, model-predictive, or knowledge-based controllers.
Starting with a brief introduction to automation technology (Sect. 7.1), the chapter goes into the structure and functionality of the usual compact controllers using the application examples of solar thermal systems and heat pump systems (Sect. 7.2). Finally, the integration of system automation into a higher-level building automation system and into the building management system is described using specific application examples (Sect. 7.3).
This central book chapter details the implementation of automation of solar domestic hot water systems, solar assisted building heating, rooms, solar cooling systems, heat pump heating systems, geothermal systems and thermally activated building component systems. Hydraulic and automation diagrams are used to explain how the automation of these systems works. A detailed insight into the engineering and technical interrelationships involved in the use of these systems, as well as the use of simulation tools, enables effective control and regulation. System characteristic curves and systematic procedures support automation engineers in their tasks.
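The classic PID controller mentioned above as the workhorse of building automation can be sketched as a discrete control loop. Gains and time step here are illustrative values, not from the book:

```python
def make_pid(kp, ki, kd, dt):
    """Return a discrete PID control step, as used in classic building
    automation loops (e.g. heating circuit supply temperature).
    kp/ki/kd are the proportional/integral/derivative gains, dt the
    sampling interval in seconds; all values are illustrative."""
    state = {"integral": 0.0, "prev_error": None}

    def step(setpoint, measurement):
        error = setpoint - measurement
        state["integral"] += error * dt  # accumulate error over time
        if state["prev_error"] is None:
            derivative = 0.0             # no slope on the first sample
        else:
            derivative = (error - state["prev_error"]) / dt
        state["prev_error"] = error
        # controller output, e.g. a valve position or heater power command
        return kp * error + ki * state["integral"] + kd * derivative

    return step
```

In an AS, such a step function would be called once per sampling interval with the setpoint and the recorded process variable; higher-order controllers replace or augment this loop with models of the plant.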
Renewable energy sources such as solar radiation, geothermal heat and ambient heat are available for energy conversion. With the help of special converters, these resources can be put to use. These include solar collectors, geothermal probes and chillers. They collect the energy and convert it to a temperature level high enough to be suitable for heat purposes. In the case of refrigeration machines, a distinction is made between electrically and thermally driven machines.
The use of renewable energy sources for heating and cooling in buildings today offers the best opportunities to avoid the use of fossil fuels and the associated climate-damaging emissions. However, unlike fossil fuels, renewable energy sources such as solar radiation are not available at the push of a button, but occur uncontrollably depending on weather conditions, the location of the building and the time of year. Their use is free of charge. However, complex converters and systems usually have to be installed in order to use them. These must be carefully planned and operated in order to avoid unnecessary costs and to generate the maximum possible yield. The regenerative energy systems are usually integrated into existing conventional systems. When designing the control and regulation equipment, it is crucial to design the automation of the systems in such a way that primarily renewable energy sources are used and the share of fossil energy sources is minimized.
This textbook helps use regenerative systems for heating and cooling effectively. Integration and automation schemes provide a quick overview. Practical examples clearly show standard solutions for the integration of regenerative energy sources. For the 2nd edition, improvements have been made to the text and illustrations, and references to standards have been updated. Control questions at the end of the main chapters serve to consolidate the understanding of the content.
Public educational institutions are increasingly confronted with a decline in the number of applicants, which is why competition between colleges and universities is also intensifying. For this reason, it is important to position oneself as an institution in order to be perceived by the various target groups and to differentiate oneself from the competition. In this context, the brand and thus its perception and impact play a decisive role, especially in view of the desired communication of the institution's own values and its self-image, the brand identity. To this end, emotions serve as an approach to creating positive stimulation and brand loyalty.
In this study, circular economy (CE) relevance in Germany will be discussed based on LinkedIn readily available data. LinkedIn company profiles located in Germany with ‘circular economy’ in their description or any other field were selected and used as a data source to analyze their CE relation. Overall, 514 German companies were analyzed in reference to the 15 German regions to which they belong. Most companies are located in the federal state of Berlin (126), followed by North Rhine-Westphalia (96) and Bavaria (77). In terms of the industry sector, they are self-classified as environmental services (64), management consulting (50), renewables & environment (33), research (31), computer software (18), etc. Regarding their employees with LinkedIn profiles, 22,621 people are affiliated with these companies, ranging from one to 7,877. All examined companies have a total of 819,632 followers on LinkedIn, ranging from none to 88,167. An increase in CE-related companies was recorded in 13 of the 16 federal states of Germany over a one-year period. This work provides essential insights into the increasing relevance and trends of the circular economy in German enterprises and will help conduct further national studies with readily available data from LinkedIn.
Human interaction frequently includes decision-making processes during which interactants call on verbal and non-verbal resources to manage the flow of interaction. In 2017, Stevanovic et al. carried out pioneering work, analyzing the unfolding of moment-by-moment dynamics by investigating the behavioral matching during search and decision-making phases. By studying the similarities in the participant's body sway during a conversation task in Finnish, the authors showed higher behavioral matching during decision phases than during search phases. The purpose of this research was to investigate the whole-body sway and its coordination during joint search and decision-making phases as a replication of the study by Stevanovic et al. (2017) but based on a German population. Overall, 12 dyads participated in this study and were asked to decide on 8 adjectives, starting with a pre-defined letter, to describe a fictional character. During this joint-decision task (duration: 206.46 ± 116.08 s), body sway of both interactants was measured using a 3D motion capture system and center of mass (COM) accelerations were computed. Matching of body sway was calculated using a windowed cross correlation (WCC) of the COM accelerations. A total of 101 search and 101 decision phases were identified for the 12 dyads. Significantly higher COM accelerations (5.4 × 10⁻³ vs. 3.7 × 10⁻³ mm/s², p < 0.001) and WCC coefficients (0.47 vs. 0.45, p = 0.043) were found during decision-making phases than during search phases. The results suggest that body sway is one of the resources humans use to communicate the arrival at a joint decision. These findings contribute to a better understanding of interpersonal coordination from a human movement science perspective.
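The windowed cross correlation (WCC) measure described above can be sketched as the mean of per-window Pearson correlations between the two interactants' acceleration series. Window and step sizes here are illustrative, not the study's parameters:

```python
def windowed_cross_correlation(a, b, win, step):
    """Mean of per-window Pearson correlations between two equally long
    COM-acceleration series -- a sketch of a WCC matching measure.
    'win' and 'step' are the window length and hop size in samples."""
    def pearson(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
        sx = sum((xi - mx) ** 2 for xi in x) ** 0.5
        sy = sum((yi - my) ** 2 for yi in y) ** 0.5
        return cov / (sx * sy) if sx and sy else 0.0

    coeffs = [pearson(a[i:i + win], b[i:i + win])
              for i in range(0, len(a) - win + 1, step)]
    return sum(coeffs) / len(coeffs)
```

Full WCC implementations additionally slide the windows against each other over a range of time lags and keep the peak correlation per window; that lag search is omitted here for brevity.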
Femtosecond (fs) time-resolved magneto-optics is applied to investigate laser-excited ultrafast dynamics of one-dimensional nickel gratings on fused silica and silicon substrates for a wide range of periodicities Λ = 400–1500 nm. Multiple surface acoustic modes with frequencies up to a few tens of GHz are generated. Nanoscale acoustic wavelengths Λ/n have been identified as nth spatial harmonics of the Rayleigh surface acoustic wave (SAW) and the surface skimming longitudinal wave (SSLW), with acoustic frequencies and lifetimes in agreement with theoretical calculations. Resonant magnetoelastic excitation of the ferromagnetic resonance (FMR) by the SAW's third spatial harmonic and, most interestingly, fingerprints of a parametric resonance at half the SAW frequency have been observed. Numerical solutions of the Landau–Lifshitz–Gilbert (LLG) equation, magnetoelastically driven by complex polychromatic acoustic fields, quantitatively reproduce all resonances at once. Thus, our results provide a solid experimental and theoretical base for a quantitative understanding of ultrafast fs-laser-driven magnetoacoustics and for tailoring magnetic-grating-based metasurfaces at the nanoscale.
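For reference, the LLG equation mentioned above is commonly written in its standard Gilbert form (the notation here follows the usual convention and is not taken from the paper itself):

```latex
\frac{d\mathbf{m}}{dt}
  = -\gamma\, \mathbf{m} \times \mu_0 \mathbf{H}_{\mathrm{eff}}(t)
  + \alpha\, \mathbf{m} \times \frac{d\mathbf{m}}{dt}
```

where \(\mathbf{m}\) is the unit magnetization vector, \(\gamma\) the gyromagnetic ratio, \(\alpha\) the Gilbert damping constant, and \(\mathbf{H}_{\mathrm{eff}}(t)\) the effective field, which in the magnetoelastic case acquires a time-dependent contribution from the strain of the polychromatic acoustic field.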
In a first aspect, the invention relates to a device for the transcutaneous application of an electrical stimulation stimulus to an ear. The device comprises a circuit carrier, at least two electrodes, and a control unit, the control unit being configured to generate an electrical stimulation signal at the electrodes on the basis of stimulation parameters. The device, in particular a surface of its circuit carrier, is adapted to the anatomical shape of an ear, so that electrodes applied to the surface of the circuit carrier contact selected regions of the ear. The device is characterized in that it further comprises a sensor for detecting at least one physiological parameter, and the control unit is configured to adapt the stimulation parameters for the stimulation stimulus on the basis of this at least one physiological parameter. In a further aspect, the invention relates to a method for manufacturing the device according to the invention.
The invention relates to a method for maximizing the entropy derived from an analog entropy source, the method comprising the following steps: providing input data for the analog entropy source (2); generating return values by the analog entropy source based on the input data (3); and grouping the return values, wherein grouping the return values comprises applying offsets to return values (4).
Presentations are used in many training courses, lessons, and continuing-education programs to convey training-relevant content. Often, however, they are not designed in an engaging and goal-oriented way, which shows, for example, in an excess of text. As an alternative, the authors present a visualized preparation of content. The goal is to condense complex subject matter into simple images and sketches. With the help of the presented methods, exercises can be prepared more efficiently, operations can be captured at a glance, and everyday situations can be communicated more simply.
Sustainable Production
(2023)
Visual programming languages (VPL) let users develop software programs by combining visual program elements, such as lists of objects, loops, or conditional statements, rather than by specifying them textually.
Programming humanoid robots is a very attractive and motivating application domain for students, especially programming beginners. Humanoid robots are constructed to mimic the human body, using actuators that perform like muscles. Typically, a humanoid robot replicates the human body shape with a torso, a head, two arms, and two legs, though some humanoid robots model only part of the body, for example from the waist up. In some cases, humanoid robots are equipped with heads designed to replicate additional human facial features such as eyes. A robot also needs sensors to gather information about the conditions of its environment, allowing it to make decisions about its position or about actions that the situation requires, e.g. an arm movement or an open/close hand action. Other examples of sensors are reflective infrared sensors used to detect objects in proximity.
In this work, we introduce a use-case-centered approach based on the sensors and actuators of a robot and a workflow model to visually describe sequences of actions, including conditional and concurrent actions. We provide an in-depth discussion of a new VPL-based teaching method for programming humanoid robots. Open research challenges, limits, and perspectives for the further development of our teaching approach are discussed as well.
Sensors and actuators enable the creation of context-aware applications that can discover and take advantage of contextual information, such as user location and nearby people and objects. In this work, we use a general context definition which can be applied to various devices, e.g., robots and mobile devices. Developing context-based software applications is considered one of the most challenging application domains because of the sensors and actuators that are part of a device. We introduce a new development approach for context-based applications using use-case descriptions and Visual Programming Languages (VPL). The introduction of web-based VPLs, such as Scratch and Snap, has reinvigorated the usefulness of VPLs. We provide an in-depth discussion of our new VPL-based method, a step-by-step development process that enables the development of context-based applications. Two case studies illustrate how to apply our approach to different problem domains: context-based mobile apps and context-based humanoid robot applications.
The main advantage of mobile context-aware applications is that they provide effective and tailored services by considering the environmental context, such as location, time, nearby objects, and other data, and by adapting their functionality to changing situations in the context information without explicit user interaction. The idea behind Location-Based Services (LBS) and Object-Based Services (OBS) is to offer fully customizable services for user needs according to the location or the objects in a mobile user's vicinity. However, developing mobile context-aware software applications is considered one of the most challenging application domains because of the built-in sensors that are part of a mobile device. Visual Programming Languages (VPL) and hybrid visual programming languages are considered innovative approaches to address the inherent complexity of developing such programs. The key contribution of our new development approach for location- and object-based mobile applications is a use-case-driven development approach based on use case templates and visual code templates that enables even programming beginners to create context-aware mobile applications. An example of the use of the development approach is presented, and open research challenges and perspectives for the further development of our approach are formulated.
Due to globalization and the resulting increase in competition, products must be produced ever more cheaply, especially in series production, because buyers expect new variants or even completely new products in ever shorter cycles. Injection molding is the most important production process for manufacturing plastic components in large quantities. However, the conventional production of a mold is extremely time-consuming and costly, which contradicts the fast pace of the market. Additive tooling is an application area of additive manufacturing that, in the field of injection molding, is preferably used for the prototype production of mold inserts. This allows injection molding tools to be produced faster and more cheaply than through the subtractive manufacturing of metal tools. Material jetting processes using polymers (MJT-UV/P), also called PolyJet Modeling (PJM), have great potential for use in additive tooling. Due to the poorer mechanical and thermal properties compared to conventional mold insert materials, e.g. steel or aluminum, the previously used design principles cannot be applied. Accordingly, new design guidelines are necessary, which are developed in this paper. The necessary information is obtained with the help of a systematic literature review. The design guidelines are compiled in a uniform design guide, which is structured according to the design process of injection molds. The guidelines refer not only to the constructive design of the injection mold or the polymer mold insert but to the entire design process, and they describe the four phases of planning, conception, development, and realization. Particular attention is paid to the special geometric designs of a polymer mold insert and the thermomechanical properties of the mold insert materials. As a result, design guidelines are available that are adapted to the special requirements of additive tooling of mold inserts made of plastics for injection molding.
Times of economic crisis often imply liquidity bottlenecks and, in the case of complete inability to pay, insolvencies. Working capital management helps to release tied-up capital more quickly. If data-driven management is applied using business analytics techniques, together with the necessary technical and organizational infrastructure, new opportunities arise for insights into the process landscape and for optimizing throughput times. The goal is to build a working capital analytics approach.
Günter Knieps has decisively shaped the research field of network economics in Germany. A recurring theme in his research is the question of the right balance between competition and regulation in network sectors. Among the many scientific contributions Günter Knieps has presented to date, one enjoys special status: his article "Phasing out Sector-Specific Regulation in Competitive Telecommunications", published in the journal Kyklos in August 1997. The 25th anniversary of this article's publication prompted the editors of the present volume to honor the scientific work of Günter Knieps as a researcher and university teacher with a Festschrift. With contributions by (in chapter order): Johannes M. Bauer, Falk von Bornstaedt, Manfred J. Holler & Florian Rupp, Hans-Ulrich Küpper, Kay Mitusch, Friedrich Schneider, Viktor J. Vanberg, Achim Wambach, Bernhard Wieland, and Patrick Zenhäusern, and a foreword by Carl Christian von Weizsäcker.
One of the most important questions about smart metering systems for end users concerns their data privacy and security. Smart metering systems provide many advantages for distribution system operators (DSO), but the functionalities offered to users of existing smart meters are still limited, and society is becoming increasingly critical. Smart metering systems are accused of interfering with personal rights and privacy and of providing unclear tariff regulations that do not sufficiently encourage households to manage their electricity consumption in advance. In the specific field of smart grids, data security appears to be a necessary condition for consumer confidence, without which consumers will not give their consent to the collection and use of personal data concerning them.
Precisely synchronized communication is a major precondition for many industrial applications. At the same time, hardware cost and power consumption need to be kept as low as possible in the Internet of Things (IoT) paradigm. While many wired solutions on the market achieve these requirements, wireless alternatives are an interesting field for research and development. This article presents a novel IEEE802.11n/ac wireless solution, exhibiting several advantages over state-of-the-art competitors. It is based on a market-available wireless System on a Chip with modified low-level communication firmware combined with a low-cost field-programmable gate array. By achieving submicrosecond synchronization accuracy, our solution outperforms the precision of low-cost products by almost four orders of magnitude. Based on inexpensive hardware, the presented wireless module is up to 20 times cheaper than software-defined-radio solutions with comparable timing accuracy. Moreover, it consumes three to five times less power. To back up our claims, we report data that we collected with a high sampling rate (2000 samples per second) during an extended measurement campaign of more than 120 h, which makes our experimental results far more representative than others reported in the literature. Additional support is provided by the size of the testbed we used during the experiments, composed of a hybrid network with nine nodes divided into two independent wireless segments connected by a wired backbone. In conclusion, we believe that our novel Industrial IoT module architecture will have a significant impact on the future technological development of high-precision time-synchronized communication for the cost-sensitive industrial IoT market.
Artificial Intelligence (AI) can potentially transform many aspects of modern society in various ways, including automation of tasks, personalization of products and services, diagnosis of diseases and their treatment, transportation, safety, and security in public spaces, etc. Recently, AI technology has been transforming the financial industry, offering new ways to analyse data and automate processes, reduce costs, increase efficiency, and provide more personalized services to customers. However, it also raised important ethical and regulatory questions that need to be addressed by the industry and society as a whole. The aim of the Erasmus+ project Transversal Skills in Applied Artificial Intelligence - TSAAI (KA220-HED - Cooperation Partnerships in higher education) has been to establish a training platform that will incorporate teaching guidelines based on a curriculum covering different areas of application of AI technology. In this work, we will focus on applying AI models in the financial and insurance sectors.
Printed electronics can add value to existing products by providing new smart functionalities, such as sensing elements over large-areas on flexible or non-conformal surfaces. Here we present a hardware concept and prototype for a thinned ASIC integrated with an inkjet-printed temperature sensor alongside in-built additional security and unique identification features. The hybrid system exploits the advantages of inkjet-printable platinum-based sensors, physically unclonable function circuits and a fluorescent particle-based coating as a tamper protection layer.
PROFINET Security: A Look on Selected Concepts for Secure Communication in the Automation Domain
(2023)
We provide a brief overview of the cryptographic security extensions for PROFINET, as defined and specified by PROFIBUS & PROFINET International (PI). These come in three hierarchically defined Security Classes, called Security Classes 1, 2, and 3. Security Class 1 provides basic security improvements with moderate implementation impact on PROFINET components. Security Classes 2 and 3, in contrast, introduce integrated cryptographic protection of PROFINET communication. We first highlight and discuss the security features that the PROFINET specification offers for future PROFINET products. Then, as our main focus, we take a closer look at some of the technical challenges that were faced during the conceptualization and design of the Security Class 2 and 3 features. In particular, we elaborate on how secure application relations between PROFINET components are established and how disruption-free availability of a secure communication channel is guaranteed despite the need to refresh cryptographic keys regularly. The authors are members of the PI Working Group CB/PG10 Security.
Wireless communication networks are crucial for enabling megatrends like the Internet of Things (IoT) and Industry 4.0. Testing these networks can be challenging, however, due to complex network topologies and RF characteristics, which require a multitude of scenarios to be tested. To address this challenge, the authors developed and extended an automated testbed called the Automated Physical TestBed (APTB). This testbed provides the means to conduct controlled tests, analyze coexistence, emulate multiple propagation paths, and model dependable channel conditions; it also supports test automation to facilitate efficient and systematic experimentation. This paper describes the extended architecture, implementation, and performance evaluation of the APTB, demonstrating that it is a reliable and efficient solution for testing wireless communication networks under various scenarios and a useful tool for researchers and industry practitioners.
TSN, or Time Sensitive Networking, is becoming an essential technology for integrated networks, enabling deterministic and best-effort traffic to coexist on the same infrastructure. In order to properly configure, run, and secure such TSN networks, monitoring functionality is a must. The TSN standards already include some provisions for such functionality, and there are different methods to choose from. We implemented different methods for measuring the time synchronization accuracy between devices as a C library and compared the measurement results. Furthermore, the library has been integrated into the ControlTSN engineering framework.
As industrial networks continue to expand and connect more devices and users, they face growing security challenges such as unauthorized access and data breaches. This paper delves into the crucial role of security and trust in industrial networks and how trust management systems (TMS) can mitigate malicious access to these networks. The TMS presented in this paper leverages distributed ledger technology (blockchain) to evaluate the trustworthiness of blockchain nodes, including devices and users, and to make access decisions accordingly. While this approach is applied to blockchain here, it can also be extended to other areas. It can help prevent malicious actors from penetrating industrial networks and causing harm. The paper also presents the results of a simulation that demonstrates the behavior of the TMS and provides insights into its effectiveness.
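The core idea of trust-based access control can be illustrated with a toy model. This is a hypothetical sketch, not the paper's actual scoring algorithm: each node carries a trust score in [0, 1] that grows with successful interactions, shrinks faster with reported misbehavior, and gates access via a threshold:

```python
# Hypothetical trust-score model: rewards are small, penalties larger,
# so trust is slow to earn and quick to lose.
def update_trust(score, outcome_good, reward=0.05, penalty=0.2):
    score = score + reward if outcome_good else score - penalty
    return min(1.0, max(0.0, score))  # clamp to [0, 1]

def access_granted(score, threshold=0.5):
    return score >= threshold

# A node starts moderately trusted, then misbehaves twice.
score = 0.6
for outcome in [True, True, False, False]:
    score = update_trust(score, outcome)
# After two failures the node falls below the access threshold.
```

The asymmetry between reward and penalty is a common design choice in trust management, since a single detected attack should outweigh many routine interactions.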
Fused Filament Fabrication (FFF) is a widespread additive manufacturing technology, mostly in the field of printable polymers. The use of filaments filled with metal particles for the manufacture of metallic parts by FFF presents specific challenges regarding debinding and sintering. For aluminium and its alloys, the sintering temperature range overlaps with the temperature range of thermal decomposition of many commonly used “backbone” polymers, which provide stability to the green parts. Moreover, the high oxygen affinity of aluminium necessitates the use of special sintering regimes and alloying strategies. Therefore, it is challenging to achieve both low porosity and low levels of oxygen and carbon impurities at the same time. Feedstocks compatible with the special requirements of aluminium alloys were developed. We present results on the investigation of debinding/sintering regimes by Fourier Transform Infrared spectroscopy (FTIR) based In-Situ Process Gas Analysis and discuss optimized thermal treatment strategies for Al-based FFF.
A smart energy concept was designed and implemented for a cluster of 5 existing multi-family houses, which combines heat pumps, photovoltaic (PV) modules and combined heat and power units (CHP) to achieve energy- and cost-efficient operation. Measurement results of the first year of operation show that the local power generation by PV modules and CHP unit has a positive effect on the electrical self-sufficiency by reducing electricity import from the grid. In winter, when the CHP unit operates continuously for long periods, the entire electricity for the heat pump and 91 % of the total electricity demand of the neighborhood are supplied locally. In summer, only 53 % is generated within the neighborhood. The use of a specifically developed energy management system EMS is intended to further increase this share. CO2 emissions for heating and electricity of the neighborhood are evaluated and amount to 18.4 kg/(m2a). Compared to the previous energy system consisting of gas boilers (29.1 kg/(m2a)), savings of 37 % are achieved with electricity consumption from the grid being reduced by 65 %. In the second construction stage, an additional heat pump, CHP unit and PV modules will be added. The measurement results indicate that the final district energy system is likely to achieve the ambitious CO2 reduction goal of -50% and further increase the self-sufficiency of the district.
This book constitutes the proceedings of the 23rd International TRIZ Future Conference on Towards AI-Aided Invention and Innovation, TFC 2023, which was held in Offenburg, Germany, during September 12–14, 2023. The event was sponsored by IFIP WG 5.4.
The 43 full papers presented in this book were carefully reviewed and selected from 80 submissions. The papers are divided into the following topical sections: AI and TRIZ; sustainable development; general vision of TRIZ; TRIZ impact in society; and TRIZ case studies.
Eco-innovations in chemical processes should be designed to use raw materials, energy and water as efficiently and economically as possible to avoid the generation of hazardous waste and to conserve raw material reserves. Applying inventive principles identified in natural systems to chemical process design can help avoid secondary problems. However, the selection of nature-inspired principles to improve technological or environmental problems is very time-consuming. In addition, it is necessary to match the strongest principles with the problems to be solved. Therefore, the research paper proposes a classification and assignment of nature-inspired inventive principles to eco-parameters, eco-engineering contradictions and eco-innovation domains, taking into account environmental, technological and economic requirements. This classification will help to identify suitable principles quickly and also to realize rapid innovation. In addition, to validate the proposed classification approach, the study is illustrated with the application of nature-inspired invention principles for the development of a sustainable process design for the extraction of high-purity silicon dioxide from pyrophyllite ores. Finally, the paper defines a future research agenda in the field of nature-inspired eco-engineering in the context of AI-assisted invention and innovation.
This guide was produced within the scientific cross-sectional project »LowEx-Bestand Analyse« of the thematic project network »LowEx-Konzepte für die Wärmeversorgung von Mehrfamilien-Bestandsgebäuden (LowEx-Bestand)«. In this network, the three research institutes Fraunhofer ISE, KIT, and the University of Freiburg (INATECH) collaborated with manufacturers of heating and ventilation technology and with companies from the housing industry. Together, they developed, analyzed, and demonstrated solutions aimed at the efficient use of heat pumps, heat transfer systems, and ventilation systems in the energy retrofitting of multi-family buildings.
The study "Technisch-wissenschaftliche Analyse zur Energieeffizienz unterschiedlicher Trinkwasser-Erwärmungssysteme im Vergleich", commissioned by Viega GmbH & Co. KG, comparatively examines various domestic hot water systems with regard to their energy efficiency in heat pump systems. In addition to the setup and parameterization of a simulation model and the integration of standardized load profiles, the study includes a detailed representation of all systems examined. One focus is on assessing the energy-saving potential of a hot water temperature reduction with the Viega AVS drinking water management system. The variants investigated are: reference system 1: single-stage instantaneous drinking water heater (DTE) with return-flow stratification; system 2: Viega DTE (two-stage); system 3: Viega AVS drinking water management system with two-stage DTE and an ultrafiltration module in the circulation return (UFC); system 4: apartment substation, four-pipe system; system 5: apartment substation, two-pipe system; system 6: electric instantaneous water heater. The study found that, with a low-temperature heat pump with a maximum flow temperature of 58 °C, the Viega AVS system with DTE and UFC, decentralized electric instantaneous water heaters, and the four-pipe system at a drinking water temperature of 45 °C perform best in energy terms. With a heat pump with a higher maximum flow temperature of 64 °C, the four-pipe system can also be used sensibly at a drinking water temperature of 50 °C. The results also showed that the higher the temperature provided by the heat pump (maximum flow temperature), the better the other systems can be used, since this minimizes the use of the backup system. The Viega Aqua VIP system with temperature reduction performs very well in the comparison with regard to final energy use and the achievable seasonal performance factor.
Using this system in combination with a heat pump offers potential for the use of renewable energies.
LowEx-Konzepte für die Wärmeversorgung von Mehrfamilien-Bestandsgebäuden ("LowEx-Bestand Analyse")
(2023)
This final report summarizes the results of the scientific cross-sectional project »LowEx-Bestand Analyse« of the thematic project network »LowEx-Konzepte für die Wärmeversorgung von Mehrfamilien-Bestandsgebäuden (LowEx-Bestand)«. In this network, three research institutes collaborated with manufacturers of heating and ventilation technology and with companies from the housing industry. Together, they developed, analyzed, and demonstrated solutions aimed at the efficient use of heat pumps, heat transfer systems, and ventilation systems in the energy retrofitting of multi-family buildings. LowEx systems operate particularly efficiently thanks to small temperature differences between the heating medium and the useful heat. Heat pumps offer considerable potential for reducing the specific CO2 emissions of heat supply. For the energy retrofitting of multi-family buildings, the use of such systems involves particular challenges and requirements regarding space-heat delivery, domestic hot water preparation, and the use of ambient heat. These challenges are addressed in LowEx-Bestand.
In recent years, predictive maintenance tasks, especially for bearings, have become increasingly important. Solutions for these use cases concentrate on the classification of faults and the estimation of the Remaining Useful Life (RUL). As of today, these solutions suffer from a lack of training samples. In addition, these solutions often require high-frequency accelerometers, incurring significant costs. To overcome these challenges, this research proposes a combined classification and RUL estimation solution based on a Convolutional Neural Network (CNN) and a Long Short-Term Memory (LSTM) network. This solution relies on a hybrid feature extraction approach, making it especially appropriate for low-cost accelerometers with low sampling frequencies. In addition, it uses transfer learning to be suitable for applications with only a few training samples.
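The hybrid feature extraction for low-sampling-rate accelerometers can be illustrated with classical time-domain condition indicators. This is a hypothetical stand-in for the paper's actual extraction stage: the features below (RMS, peak, crest factor, kurtosis) are standard bearing-diagnosis indicators that remain informative even at low sampling frequencies, and would feed the CNN/LSTM as a compact input vector:

```python
import numpy as np

def time_domain_features(window):
    """Hand-crafted condition indicators for one accelerometer window."""
    rms = np.sqrt(np.mean(window**2))
    peak = np.max(np.abs(window))
    crest = peak / rms if rms > 0 else 0.0              # impulsiveness
    mu, sigma = np.mean(window), np.std(window)
    kurtosis = np.mean(((window - mu) / sigma)**4) if sigma > 0 else 0.0
    return np.array([rms, peak, crest, kurtosis])

# Synthetic illustration: a bearing defect adds periodic impacts.
rng = np.random.default_rng(0)
healthy = rng.normal(0.0, 1.0, 2048)   # broadband vibration, kurtosis ~3
faulty = healthy.copy()
faulty[::256] += 8.0                   # periodic impulses from a defect
f_h = time_domain_features(healthy)
f_f = time_domain_features(faulty)
```

The periodic impacts raise the crest factor and kurtosis well above their healthy baselines, which is exactly the kind of separation a downstream classifier or RUL estimator can exploit.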
Optimization of energetic refurbishment roadmaps for multi-family buildings utilizing heat pumps
(2023)
A novel methodology for calculating optimized refurbishment roadmaps is developed in this paper. The aim of the roadmaps is to determine when, how, and which component of the building envelope and heat generation system should be refurbished to achieve the lowest net present value. The integrated optimization approach couples a particle swarm optimization algorithm with a dynamic building simulation of the building envelope and the heat supply system. Thanks to the free selection of implementation times and refurbishment depths, the optimization method achieves the lowest net present value together with a high CO2 reduction and is therefore an important contribution to achieving climate neutrality in the building stock.
The method is exemplarily applied to a multi-family house built in 1970. In comparison to a standard refurbishment roadmap, cost savings of 6–16 % and CO2 savings of 6–59 % are possible. The sensitivity of the refurbishment roadmap measures is analyzed on the basis of a parametric analysis. Robust optimization results are obtained with a mean refurbishment level of approx. 50 kWh/m2/a of the building envelope. The preferred heat generation system is a bivalent brine-heat pump system with a share of 70 % of the heat load being covered by the electric heat pump.
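The particle swarm optimization at the core of the method can be sketched generically. This is a minimal textbook PSO on a toy quadratic cost standing in for the net-present-value objective; in the paper, each cost evaluation would instead run the coupled dynamic building simulation, and the decision variables would encode refurbishment times and depths:

```python
import random

def pso(cost, dim, n_particles=20, iters=100, bounds=(-5.0, 5.0), seed=1):
    """Minimal particle swarm optimization: each particle tracks its own
    best position; the swarm tracks the global best."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia, cognitive and social weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost

# Toy stand-in for the net-present-value objective: a quadratic bowl.
best, best_cost = pso(lambda x: sum(v * v for v in x), dim=3)
```

PSO is a natural fit here because the simulation-based cost function is non-differentiable and mixes discrete choices (which measure, which year) with continuous ones (insulation thickness), which gradient-based optimizers handle poorly.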
Electric heat pumps are a key technology for climate-friendly buildings. In multi-family houses, their use is still a challenge and correspondingly uncommon. Within the joint project "HEAVEN", researchers have now developed a multi-source heat pump system adapted to the requirements of larger residential buildings. It was tested in a building in Karlsruhe as part of the joint project "Smartes Quartier Durlach". Data on the first year of operation are now available.
On July 1, 2022, around 60 participants from research, teaching, and industry met at Hochschule Offenburg for an international conference held as the closing colloquium of the ACA-Modes project. The project results on the successful implementation of model-predictive control strategies were presented, current research questions were discussed, and development paths towards grid-supportive operation of interconnected energy systems were outlined.