The collaborative project Gendering MINT digital – Open Science aktiv gestalten made it possible to improve the still marginal inclusion of gender knowledge in STEM, a prerequisite for successful gender mainstreaming. The project also contributed to connecting gender research, teaching in gender studies, and equal-opportunity work, and it tested, evaluated, and adapted transfer knowledge for competence building in the STEM disciplines for sustainable use.
Artificial intelligence (AI), and in particular machine learning algorithms, are of increasing importance in many application areas, but interpretability and understandability, as well as responsibility, accountability, and fairness of the algorithms' results, all crucial for increasing human trust in such systems, are still largely missing. Big industrial players, including Google, Microsoft, and Apple, have become aware of this gap and have recently published their own guidelines for the use of AI in order to promote fairness, trust, interpretability, and other goals. Interactive visualization is one of the technologies that may help to increase trust in AI systems. During the seminar, we discussed the requirements for trustworthy AI systems as well as the technological possibilities provided by interactive visualizations to increase human trust in AI.
MINT-College TIEFE
(2021)
In its second funding period, the project MINT-College TIEFE further expanded and consolidated the measures of the preceding funding period. The project's offerings accompanied students across the entire student life cycle of the technical degree programs, beginning at school and ending with the transition into professional life. To improve the quality of teaching at Offenburg University, various digitally supported teaching formats were also further developed and expanded. Central offerings of the MINT-College, which became a central institution of Offenburg University in 2019, are the introductory days, the mentoring program, the bridge courses, and the learning center developed for the study-entry phase, as well as offerings for the transition into professional life, such as the start-up office. The media-didactic support services for teaching staff supported the change in learning culture at the university. Sustainable structures were systematically established so that innovations for teaching and learning can continue to be developed, tested, and established in the future.
Within the research project GeoSpeicher.bw, several demonstration sites in Baden-Württemberg were intensively investigated and supported by the project partners. The research results show that existing geothermal plants work well and that their operation also saves climate-damaging gas emissions. Unfortunately, no demonstration project for an aquifer storage system could be realized within the project at the municipal hospital in Karlsruhe or at the north campus of the Karlsruhe Institute of Technology (KIT), despite proof of effective cost savings and CO2 reductions.
If aquifer storage technology is to establish itself in Baden-Württemberg, a demonstration project for a shallow low-temperature aquifer store must be developed and funded. The basic conditions for such an aquifer store would be in place at the KIT north campus; this was clearly demonstrated by numerous investigations within GeoSpeicher.bw.
The increased use of heat pumps in realizing a climate-neutral heat supply leads to a significant increase in and change of the electrical loads in distribution grids. Heat pumps should therefore be controlled so that they put little strain on distribution grids, or even support them.
The project "PV²WP – PV forecasting for grid-supportive control of heat pumps" (project duration 1 July 2018 to 30 June 2021) demonstrated a new approach to controlling heating systems that are based on heat pumps and thermal storage and operated in combination with a photovoltaic system. The overarching goal was to improve the grid integration and smart-grid readiness of such heating systems through a low-cost technology while at the same time increasing their economic efficiency.
Three forward-looking technologies were used and demonstrated in combination: cloud-camera-based short-term forecasts, predictive open- and closed-loop control, and machine-learning-based system modeling as the basis for optimization. The Projekthaus Ulm, an actually inhabited single-family house, served as the demonstration environment.
Digital transformation strengthens the interconnection of companies in order to develop optimized and better customized cross-company business models. These models require secure, reliable, and traceable evidence and monitoring of contractually agreed information to build trust between stakeholders. Blockchain technology using smart contracts allows industry to establish trust and automate cross-company business processes without the risk of losing data control. A typical cross-company industry use case is equipment maintenance: machine manufacturers and service providers offer maintenance for their machines and tools in order to achieve high availability at low cost. The aim of this chapter is to demonstrate how maintenance use cases can be addressed by utilizing Hyperledger Fabric to build a chain of trust through hardened evidence logging of the maintenance process, thereby achieving legal certainty. Contracts are digitized into smart contracts that automate business processes, increasing security and mitigating their error-proneness.
To meet the Paris Climate Agreement's goal of limiting global warming to 1.5 degrees Celsius, the energy transition must be pushed far more strongly than before. The C/sells showcase in the largest of the SINTEG model regions took on this challenge. Over four years, 56 partners from the energy industry, science, and politics in Baden-Württemberg, Bavaria, and Hesse worked to establish a cellular energy system. They developed model solutions for a successful energy transition. In more than 30 demonstration cells and nine participation cells, the so-called C/sells Citys, it was demonstrated how an information system enables the intelligent organization of power supply grids and the regionalized trading of energy and flexibilities.
Final Report VanAssist
(2021)
The use of architectural models is a long-proven method for the visualization of designs. More recently, powerful 3D printers have enabled the rapid and cost-effective additive manufacturing (AM) of textured architectural models. This contribution examines the use of AM technology for the sampling of terraced houses in a specific use case (a sampling center with more than 1200 customers per year). The aim is to offer customers with limited spatial imagination assistance in the form of detailed architectural models of the whole house, divided into different modules. For this purpose, the structure of the terraced house is first analysed and examined for flexible design elements. The implementation of different variants of each floor should serve as a basis for the customer's decision on design and equipment. To this end, the architectural models are additively manufactured using PolyJet modeling. The necessary CAAD data and interfaces, the technical possibilities and limits of this approach, as well as the resulting costs are analyzed. The results of the AM process are evaluated to determine their applicability for the sampling of terraced houses. In addition, the evaluation shows that the additively manufactured architectural models allow a more precise visualization of the building and thus a faster understanding of the design choices.
The goal of the Enerlab 4.0 investment measure was to provide comprehensive in-operando and post-mortem diagnostics for decentralized energy generators and storage devices, e.g., battery cells and photovoltaic cells. These are important components for various areas of Industry 4.0, from autonomous sensors through energy-self-sufficient production to quality control. To this end, the laboratory equipment of Offenburg University was extended, both for in-operando diagnostics (electrical cyclers, impedance spectrometers, temperature test chambers) and for post-mortem diagnostics (glovebox, sample preparation for existing materials analytics and chemical analytics). Existing equipment from other ongoing or completed projects was integrated into the new infrastructure. The result is a modern and powerful battery and photovoltaics laboratory that is being used in numerous ongoing and new projects.
In 2010, Offenburg University introduced a medical engineering degree program with the specialization 'Cardiology, Electrophysiology, and Electronic Cardiac Implants', first as a bachelor's and later also as a master's program. The didactic teaching concept designed around this specialization aims to impart immediately applicable theoretical knowledge and practical skills that graduates will need in their future careers in industry, or as technical partners of treating physicians in highly specialized clinical facilities.
Owing to the lack of commercial offerings, implementing this teaching concept requires the in-house engineering of suitable teaching aids. This includes the hardware and software development of visual demonstration tools for pathological and implant-induced heart rhythms, the synthetic provision of true-to-original electrocardiographic lead signals from clinical routine, and the construction of in-vitro training systems for therapies with electronic cardiac implants and for radio-frequency catheter ablation.
In particular, the elective courses 'Pacemaker Programming' and 'Defibrillator Programming', which are intended to enable participants to enter the profession especially quickly, were designed didactically in close alignment with the four-component instructional design model.
The continuous use of formative evaluation instruments led to substantial improvements both in the overall concept of the courses and in the self-developed solutions of the special teaching and training equipment used there.
A summative evaluation of the teaching concept is difficult because of its uniqueness. For this reason, a quantitative examination of the influence of attending the practically oriented elective 'Pacemaker Programming' on the grade of the combined final examination in 'Electrocardiography' and 'Electrical Stimulation' appeared sensible. This evaluation included a cohort of 221 students (76 women, 145 men), of whom 93 did not attend the elective and 128 did.
Across seven pooled academic years, the practical training in the elective 'Pacemaker Programming' markedly influenced the performance level of medical engineering students in the combined final examination 'Electrocardiography and Electrical Stimulation'.
The teaching concept co-developed in this work, together with the teaching materials and environments realized, was used extensively in the practical courses, seminars, and lectures of the specialization 'Cardiology, Electrophysiology, and Electronic Cardiac Implants' in the bachelor's and master's programs in medical engineering at Offenburg University. They also enabled the design of interactive practical continuing-education events for physicians, mid-level medical staff, and medical technology companies active in these fields.
The transition from college to university can have a variety of psychological effects on students, who need to cope with daily obligations by themselves in a new setting, which can result in loneliness and social isolation. Mobile technology, specifically mental health apps (MHapps), has been seen as a promising means to assist university students facing these problems; however, there is little evidence around this topic. My research investigates how a mobile app can be designed to reduce social isolation and loneliness among university students. The Noneliness app is being developed to this end; it aims to create social opportunities through a quest-based gamified system in a secure and collaborative network of local users. Initial evaluations with the target audience provided evidence on how an app should be designed for this purpose. These results are presented, along with how they informed the planning of the further steps toward my research goals. The paper was presented at the MobileHCI 2020 Doctoral Consortium.
Significant progress in the development and commercialization of electrically conductive adhesives has been made. This makes shingling a very attractive approach for solar cell interconnection. In this study, we investigate the shading tolerance of two types of solar modules based on shingle interconnection: first, the already commercialized string approach, and second, the matrix technology where solar cells are intrinsically interconnected in parallel and in series. An experimentally validated LTspice model predicts major advantages for the power output of the matrix layout under partial shading. Diagonal as well as random shading of a 1.6-m2 solar module is examined. Power gains of up to 73.8 % for diagonal shading and up to 96.5 % for random shading are found for the matrix technology compared to the standard string approach. The key factor is an increased current extraction due to lateral current flows. Especially under minor shading, the matrix technology benefits from an increased fill factor as well. Under diagonal shading, we find the probability of parts of the matrix module being bypassed to be reduced by 40 % in comparison to the string module. In consequence, the overall risk of hotspot occurrence in matrix modules is decreased significantly.
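The qualitative mechanism behind these gains can be illustrated with a deliberately simplified current model. This is a sketch under my own assumptions, not the validated LTspice model from the study: it ignores bypass diodes, voltage mismatch, and fill-factor effects, and the cell currents are made up.

```python
def string_module_current(cells):
    """Series string: every cell carries the same current,
    so the most shaded cell limits the whole module."""
    return min(current for row in cells for current in row)

def matrix_module_current(cells):
    """Matrix layout: cells within a row are in parallel, so their
    currents add via lateral current paths; the rows are in series,
    so the weakest row limits the module."""
    return min(sum(row) for row in cells)

# Hypothetical 2x2 module with one cell shaded to 0.2 A instead of 1.0 A:
cells = [[1.0, 0.2],
         [1.0, 1.0]]
# string layout delivers 0.2 A; matrix layout delivers min(1.2, 2.0) = 1.2 A
```

In this toy model the matrix module keeps most of its current under single-cell shading, which is the effect the abstract describes as "increased current extraction due to lateral current flows".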
A versatile liquid metal (LM) printing process is presented that enables the fabrication of various fully printed devices such as intra- and interconnect wires, resistors, diodes, transistors, and basic circuit elements such as inverters, and that is process-compatible with other digital printing and thin-film structuring methods for integration. To this end, a glass-capillary-based direct-write method for printing LMs such as eutectic gallium alloys is demonstrated, exploring the potential for fully printed LM-enabled devices. Examples of successful device fabrication include resistors, p–n diodes, and field-effect transistors. The device functionality and the ease of a single integrated fabrication flow show that the potential of LM printing far exceeds the mere interconnection of conventional electronic devices in printed electronics.
Social Haptic Communication (SHC) is one of the many tactile modes of communication used by persons with deafblindness to access information about their surroundings. SHC usually involves an interpreter executing finger and hand signs on the back of a person with multi-sensory disabilities. Learning SHC, however, can become challenging and time-consuming, particularly to those who experience deafblindness later in life. In this work, we present PatRec: a mobile game for learning SHC concepts. PatRec is a multiple-choice quiz game connected to a chair interface that contains a 3x3 array of vibration motors emulating different SHC signs. Players collect scores and badges whenever they guess the right SHC vibration pattern, leading to continuous engagement and a better position on a leaderboard. The game is also meant for family members to learn SHC. We report the technical implementation of PatRec and the findings from a user evaluation.
Loneliness, an emotional distress caused by the lack of meaningful social connections, has been increasingly affecting university students who need to deal with everyday situations in a new setting, especially those who have come from abroad. Currently there is little work on digital solutions to reduce loneliness. Therefore, this work describes the general design considerations for mobile apps in this context and outlines a potential solution. The mobile app Noneliness is used to this end: it aims to reduce loneliness by creating social opportunities through a quest-based gamified system in a secure and collaborative network of local users. The results of initial evaluations with the target audience are described. The results informed a user interface redesign as well as a review of the features and the gamification principles adopted.
Objective: To identify and evaluate the evidence of the most relevant running-related risk factors (RRRFs) for running-related overuse injuries (ROIs) and to suggest future research directions.
Design: Systematic review considering prospective and retrospective studies. (PROSPERO_ID: 236832)
Data sources: Pubmed. Connected Papers. The search was performed in February 2021.
Eligibility criteria: English language. Studies on participants whose primary sport is running addressing the risk for the seven most common ROIs and at least one kinematic, kinetic (including pressure measurements), or electromyographic RRRF. An RRRF needed to be identified in at least one prospective or two retrospective studies.
Results: Sixty-two articles fulfilled our eligibility criteria. Levels of evidence for specific ROIs ranged from conflicting to moderate evidence. Running populations and methods applied varied considerably between studies. While some RRRFs appeared for several ROIs, most RRRFs were specific for a particular ROI. The biomechanical measurements performed in many studies would have allowed for consideration of many more RRRFs than have been reported, highlighting a potential for more effective data usage in the future.
Conclusion: This study offers a comprehensive overview of RRRFs for the most common ROIs, which might serve as a starting point to develop ROI-specific risk profiles of individual runners. Future work should use macroscopic (big data) approaches involving long-term data collections in the real world and microscopic approaches involving precise stress calculations using recent developments in biomechanical modelling. However, consensus on data collection standards (including the quantification of workload and stress tolerance variables and the reporting of injuries) is warranted.
The system presented here combines the new concept of peer-to-peer navigation with the use of augmented reality to support bedside placement of external ventricular drains. The very compact and accurate overall system comprises a patient tracker with an integrated camera, augmented-reality glasses with a camera, and a puncture needle or pointer with two trackers, which is used to record the patient's anatomy. The exact position and orientation of the puncture needle are calculated with the aid of the recorded landmarks and displayed on the patient, visible to the surgeon through the augmented-reality glasses. The methods for calibrating the static transformations between the patient tracker and its attached camera, and between the trackers of the puncture needle, are crucial for accuracy and are presented here. The overall system was successfully tested in vitro, confirming the usefulness of a peer-to-peer navigation system.
Objective: To quantify the effect of inhaled 5% carbon-dioxide/95% oxygen on EEG recordings from patients in non-convulsive status epilepticus (NCSE).
Methods: Five children of mixed aetiology in NCSE were given high flow of inhaled carbogen (5% carbon dioxide/95% oxygen) using a face mask for maximum 120s. EEG was recorded concurrently in all patients. The effects of inhaled carbogen on patient EEG recordings were investigated using band-power, functional connectivity and graph theory measures. Carbogen effect was quantified by measuring effect size (Cohen's d) between "before", "during" and "after" carbogen delivery states.
Results: The apparent effect of carbogen on EEG band-power and network metrics, in both the "before-during" and "before-after" inhalation comparisons, was inconsistent across the five patients.
Conclusion: The changes in different measures suggest a potentially non-homogeneous effect of carbogen on the patients' EEG. Different aetiology and duration of the inhalation may underlie these non-homogeneous effects. Tuning the carbogen parameters (such as ratio between CO2 and O2, duration of inhalation) on a personalised basis may improve seizure suppression in future.
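The effect-size measure named in the methods, Cohen's d between the "before", "during", and "after" states, can be sketched as follows. The function and the sample values are illustrative only, not patient data from the study.

```python
import math

def cohens_d(a, b):
    """Cohen's d: difference of the two sample means divided by
    the pooled standard deviation of the two samples."""
    na, nb = len(a), len(b)
    mean_a, mean_b = sum(a) / na, sum(b) / nb
    var_a = sum((x - mean_a) ** 2 for x in a) / (na - 1)
    var_b = sum((x - mean_b) ** 2 for x in b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * var_a + (nb - 1) * var_b)
                          / (na + nb - 2))
    return (mean_a - mean_b) / pooled_sd

# e.g. hypothetical band-power values "before" vs "during" inhalation:
d = cohens_d([2.0, 4.0, 6.0], [1.0, 2.0, 3.0])
```

Effect sizes computed this way are comparable across patients and measures, which is what makes the cross-patient inconsistency reported above quantifiable.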
The invention concerns a method for synchronizing a network device for wireless communication, in particular a network end device, in a wireless network, where the network device comprises an integrated circuit for wireless communication (IWC), a synchronization-event detector device (SED) for detecting synchronization events, a controllable clock generator (CCG) for generating a synchronized time signal T_CCG, and a synchronization control device (SCD) for controlling the device's synchronization process. During a synchronization phase, the network device performs the following steps: First, a synchronization frame is received and a synchronization timestamp T_AP is detected. Then a timestamp T_B, defining the reception time of the synchronization frame, is generated by an IWC-internal clock. In a further step, a potential change representing a synchronization event is generated at a port of the IWC. Furthermore, a timestamp T_SE, defining the time of the synchronization event, is generated by the IWC clock. The SED detects the synchronization event by evaluating the duration of the potential change at the IWC port and generates a timestamp T_S using the synchronized time signal T_CCG, where T_S defines the same point in time of the synchronization event as T_SE. The timestamps T_AP, T_B, T_SE, and T_S, determined by processing one or more synchronization-event frames according to steps (a) to (d), are then used to synchronize the time signal T_CCG generated by the CCG to the master time signal.
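The arithmetic that the four timestamps enable can be sketched as follows. The function name and the neglect of propagation delay and clock-rate (drift) correction are my simplifications for illustration, not part of the patent claims.

```python
def ccg_correction(t_ap, t_b, t_se, t_s):
    """Remaining offset of the CCG time signal T_CCG.

    t_ap: master timestamp T_AP carried in the synchronization frame
    t_b:  IWC-clock timestamp T_B at reception of that frame
    t_se: IWC-clock timestamp T_SE of the synchronization event
    t_s:  CCG timestamp T_S of the same synchronization event

    The event happened (t_se - t_b) IWC ticks after the frame, so in
    master time it occurred at t_ap + (t_se - t_b); the difference to
    the CCG's own timestamp t_s is the offset still to be corrected.
    """
    return t_ap + (t_se - t_b) - t_s

# Example: frame stamped 1000 by the master, received at local tick 500,
# event at local tick 520, CCG reads 1015 -> CCG is 5 ticks behind.
offset = ccg_correction(1000, 500, 520, 1015)
```

Averaging this offset over several synchronization-event frames, as the method's steps (a) to (d) allow, would reduce the influence of timestamping jitter.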
Synthesizing voice with the help of machine learning techniques has made rapid progress over the last years [1]. Given the current increase in using conferencing tools for online teaching, we question just how easy (i.e., in terms of required data, hardware, and skill set) it would be to create a convincing voice fake. We analyse how much training data a participant (e.g., a student) would actually need to fake another participant's voice (e.g., a professor's). We provide an analysis of the existing state of the art in creating voice deep fakes and apply the identified as well as our own optimization techniques in the context of two different voice data sets. A user study with more than 100 participants shows how difficult it is to distinguish real from fake voices (on average, only 37 percent can recognize a professor's fake voice). From a longer-term societal perspective, such voice deep fakes may lead to disbelief by default.
Physically Unclonable Functions (PUFs) are hardware-based security primitives that allow for inherent device fingerprinting: the intrinsic variation of imperfectly manufactured systems is exploited to generate device-specific, unique identifiers. With printed electronics (PE) joining the Internet of Things (IoT), hardware-based security for novel PE-based systems is of increasing importance. Furthermore, PE offers the possibility of split manufacturing, which mitigates the risk of PUF response readout by third parties before commissioning. In this paper, we investigate a printed PUF core as an intrinsic variation source for the generation of unique identifiers from a crossbar architecture. The printed crossbar PUF is verified by simulation of an 8×8-cell crossbar, which can be utilized to generate 32-bit identifiers. Further focus is on limiting factors of printed devices, such as increased parasitics due to novel materials, and on the required control logic specifications. The simulation results highlight that the printed crossbar PUF is capable of generating close-to-ideal unique identifiers at the investigated feature size. As a proof of concept, a 2×2-cell printed crossbar PUF core is fabricated and electrically characterized.
Printed electronics (PE) offers flexible, extremely low-cost, and on-demand hardware thanks to its additive manufacturing process, enabling emerging ultra-low-cost applications, including machine learning applications. However, the large feature sizes in PE limit the complexity of a machine learning classifier (e.g., a neural network (NN)) in PE. Stochastic computing neural networks (SC-NNs) can reduce area in silicon technologies, but they still require complex designs due to unique implementation tradeoffs in PE. In this paper, we propose a printed mixed-signal system that substitutes complex and power-hungry conventional stochastic computing (SC) components with printed analog designs. The printed mixed-signal SC consumes only 35% of the power and requires only 25% of the area of a conventional 4-bit NN implementation. We also show that the proposed mixed-signal SC-NN provides good accuracy for popular neural network classification problems. We consider this work an important step towards the realization of printed SC-NN hardware for near-sensor processing.
Data Science
(2021)
Know-how for data scientists
• A clear, application-oriented introduction
• Numerous use cases and practical examples from different industries
• Potentials, but also possible pitfalls, are pointed out
Like no other term, data science currently stands for the analysis of large volumes of data with analytical concepts from machine learning and artificial intelligence. Now that big data has entered general awareness, and in particular has been made available within companies, technologies and methods for its analysis are needed where classical business intelligence reaches its limits.
This book offers a comprehensive introduction to data science and its practical relevance for companies, including the integration of data science into an existing business intelligence ecosystem. Various contributions explain task areas and methods as well as role and organizational models that, in interplay with concepts and architectures, shape data science.
This second, revised edition has been extended with new topics such as feature selection and deep reinforcement learning, as well as a new case study.
Achieving Positive Hospitality Experiences through Technology: Findings from Singapore and Malaysia
(2021)
Customers’ experience is one of the most impactful factors in the tourism industry. Only by offering customers an excellent experience is it possible to build and ensure long-term customer loyalty. In today’s world, technology plays a key role in providing customers with an excellent customer experience. This study has the objective of analyzing how a positive customer experience can be achieved, and which technologies are necessary to ensure this. Results were collected through a literature review, and qualitative interviews with managers of selected hotels, as well as of attractions in Malaysia and Singapore. The analysis of these hotels and attractions is based on a set of criteria to determine the extent of the adoption of the new standards that contribute to positive online customer experiences. As a conclusion, different perspectives are compared, and positive and negative aspects of the use of modern technologies in the tourism industry are specified and discussed.
Strategic analysis techniques enable a structured, long-term assessment of internal company resources in alignment with the market. The basic techniques described here comprise the product life-cycle analysis model, various types of portfolio analysis, value-chain analysis, and SWOT analysis. These techniques help marketing controlling to prepare business-unit and market analyses for management and to derive strategic options for action.
In this introductory chapter, the authors give an overview of the emergence of marketing controlling, its tasks, its organizational integration within the company, and its strategic and operational forms. The individual contributions of this handbook are also presented in context.
Product Controlling
(2021)
A central building block of marketing is the multifaceted product policy. The following contribution first outlines how product policy fits into the catalog of marketing and corporate objectives. Product controlling is understood as the targeted support of management tasks in the context of product policy by means of suitable instruments, instruments that can be assigned to the product-creation phase as well as to the market-cycle phase. It becomes apparent that there is an extensive set of methods that support marketing management and ensure marketing effectiveness and efficiency. The complexity of product controlling also stems from the need to sufficiently include pricing, quality, and brand-policy information in target monitoring.
How Do Heroes Decide?
(2021)
As the world economy rapidly decarbonises to meet global climate goals, the export credit sector must keep pace. Countries representing over two-thirds of global GDP have now set net zero targets, as have hundreds of private financial institutions. Public and private initiatives are now working to develop new standards and methodologies for shifting investment portfolios to decarbonisation pathways based on science.
However, export credit agencies (ECAs) are only at the beginning stages of this seismic transformation. On the one hand, the net zero transition creates risks to existing business models and clients for many ECAs, while on the other, it creates a significant opportunity for ECAs to refocus their support to help countries and trade partners meet their climate targets. ECAs can best take advantage of this transition, and minimise its risks, by setting net zero targets and adopting credible plans to decarbonise their portfolios. Collaboration across the sector can be a powerful tool for advancing this goal.
The compliant nature of distal limb muscle-tendon units is traditionally considered suboptimal in explosive movements when positive joint work is required. However, during accelerative running, ankle joint net mechanical work is positive. Therefore, this study aims to investigate how plantar flexor muscle-tendon behavior is modulated during fast accelerations. Eleven female sprinters performed maximum sprint accelerations from starting blocks, while gastrocnemius muscle fascicle lengths were estimated using ultrasonography. We combined motion analysis and ground reaction force measurements to assess lower limb joint kinematics and kinetics, and to estimate gastrocnemius muscle-tendon unit length during the first two acceleration steps. Outcome variables were resampled to the stance phase and averaged across three to five trials. Relevant scalars were extracted and analyzed using one-sample and two-sample t-tests, and vector trajectories were compared using statistical parametric mapping. We found that an uncoupling of muscle fascicle behavior from muscle-tendon unit behavior is effectively used to produce net positive mechanical work at the joint during maximum sprint acceleration. Muscle fascicles shortened throughout the first and second steps, while shortening occurred earlier during the first step, where negative joint work was lower compared with the second step. Elastic strain energy may be stored during dorsiflexion after touchdown since fascicles did not lengthen at the same time to dissipate energy. Thus, net positive work generation is accommodated by the reuse of elastic strain energy along with positive gastrocnemius fascicle work. Our results show a mechanism of how muscles with high in-series compliance can contribute to net positive joint work.
Over two decades, a research group established itself at Offenburg University around Professor Elmar Bollin, bringing together the fields of building automation and sustainable energy technology. Initially, the aim was to exploit the potential of internet-based weather forecasting and model-based plant control to improve comfort and energy efficiency in buildings. In the course of research and development work using dynamic building simulations, an algorithm was finally found that made it possible to predict the energy demand of an office building for the following day on the basis of forecast outdoor temperature and solar irradiation. In combination with building automation, this gave rise to the adaptive and predictive TABS control AMLR.
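The abstract does not disclose the AMLR algorithm itself. As an illustration of the general idea of an adaptive, forecast-driven demand model, a minimal sketch could look like this (hypothetical linear model with an LMS-style daily update; all names and the model form are our assumptions, not the published method):

```python
def lms_update(w, x, y_true, mu=1e-4):
    """One adaptive (LMS) update of an illustrative linear demand model
    y = w[0] + w[1]*T_forecast + w[2]*G_forecast, where T is the forecast
    outdoor temperature and G the forecast solar irradiation for the next day.
    Returns the updated weights and the prediction made before the update."""
    y_hat = w[0] + w[1] * x[0] + w[2] * x[1]   # next-day demand prediction
    err = y_true - y_hat                        # correction once the day is over
    return [w[0] + mu * err,
            w[1] + mu * err * x[0],
            w[2] + mu * err * x[1]], y_hat
```

Run daily, such a model adapts its coefficients to the building without an explicit physical model; the actual AMLR controller additionally couples this prediction to the TABS actuation.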
This book constitutes the refereed proceedings of the 21st International TRIZ Future Conference on Automated Invention for Smart Industries, TFC 2021, held virtually in September 2021 and sponsored by IFIP WG 5.4.
The 28 full papers and 8 short papers presented were carefully reviewed and selected from 48 submissions. They are organized in the following thematic sections: inventiveness and TRIZ for sustainable development; TRIZ, intellectual property and smart technologies; TRIZ: expansion in breadth and depth; TRIZ, data processing and artificial intelligence; and TRIZ use and divulgation for engineering design and beyond.
Chapter “Domain Analysis with TRIZ to Define an Effective ‘Design for Excellence’” is available open access under a Creative Commons Attribution 4.0 International License via link.springer.com.
Sustainable chemical processes should be designed to combine technological advantages and progress with lower safety risks and minimized environmental impact, for example through reduced consumption of raw materials, energy, and water, and through the avoidance of hazardous waste and pollution with toxic chemical agents. A number of novel eco-friendly chemical technologies have been developed in recent decades with the help of eco-innovation approaches and methods such as Life Cycle Analysis, Green Process Engineering, Process Intensification, Process Design for Sustainability, and others. An emerging approach to sustainable process design in process engineering builds on innovative solutions inspired by nature. However, the implementation of eco-friendly technologies often faces secondary ecological problems. The study postulates that the eco-inventive principles identified in natural systems make it possible to avoid secondary eco-problems and proposes applying these principles for sustainable design in chemical process engineering. The research work critically examines how this approach differs from biomimetics as commonly used for copying natural systems. The application of nature-inspired eco-design principles is illustrated with the example of a sustainable technology for the extraction of nickel from pyrophyllite.
The proposed method includes: identification and documentation of the elementary TRIZ inventive principles from the TRIZ body of knowledge; extension and enhancement of the inventive principles through analysis of patents and technologies, avoiding overlapping and redundant principles; classification and adaptation of the principles to at least the following categories: working medium, target object, useful action, harmful effect, environment, information, field, substance, time, and space; and assignment of the elementary inventive principles to at least the following underlying engineering domains: universal, design, mechanical, acoustic, thermal, chemical, electromagnetic, intermolecular, biological, and data processing. The method further includes classification of the abstraction level of the elementary principles; definition of a statistical ranking of principles for different problem types and for specific engineering or non-technical domains; definition of strategies for selecting sets of principles with high solution potential for predefined problems; automated semantic transformation of the elementary inventive principles into solution ideas; and evaluation of automatically generated ideas and their transformation into innovation or inventive concepts.
Energiemanagement im Betrieb
(2021)
Over two decades, a research group established itself at Offenburg University that brought together the two fields of building automation and sustainable energy technology. Initially, the aim was to exploit the potential of internet-based weather forecasting and model-based plant control for improving comfort and energy efficiency in buildings. In the course of research and development work using dynamic building simulations, an algorithm was found that made it possible to predict the energy demand of an office building for the following day on the basis of forecast outdoor temperature and solar irradiation. In combination with building automation, this gave rise to the adaptive and predictive TABS control AMLR.
This specialist book provides in-depth insight into the dynamic behavior of thermally activated building systems (TABS). A newly developed, extensively field-tested, self-learning, and predictive TABS control is presented. The requirements of effective TABS control are discussed, and the fundamentals and operating principles of the newly developed AMLR control are explained. Several application examples illustrate its implementation in building practice, and extensive measurement results demonstrate the performance of the new AMLR control. Finally, recommendations are given for applying AMLR in TABS building practice with regard to system hydraulics and implementation in building automation.
The paper describes the implementation of practical laboratory settings in a virtual environment. With the entry of VR glasses into the mass market, there is a chance to establish educational and training applications for presenting teaching materials and practical work. Our project therefore focuses on the realization of virtual experiments and environments that give users a deep insight into selected subfields of Optics and Photonics. Our goal is not to substitute the hands-on experiments but to extend them. By means of VR glasses, the user is offered the possibility to view the experiment from several angles and to make changes through interactive control functions. During the VR application, additional context-related information is displayed. By using object recognition, the specific graphics and texts for the respective object are loaded and inserted at the appropriate place. Thus, complex facts are conveyed in an informative way. The prototype is developed using the Unity Engine and can thus be exported to different platforms and end devices. Another major advantage of virtual simulations over the real situation is the high degree of controllability as well as the easy repeatability. With slight modifications, entire experiments can be reused. Our research aims to acquire new knowledge in the field of e-learning in association with VR technology, and we try to answer a core question concerning the compatibility of the individual media components.
As engineering graduates and specialists frequently lack the advanced skills and knowledge required to run eco-innovation systematically, the paper proposes new learning materials and educational tools in the field of eco-innovation and evaluates the learning experience and outcomes. This programme is aimed at strengthening students' skills and motivation to identify and creatively overcome secondary eco-contradictions in cases where additional environmental problems appear as negative side effects of eco-friendly solutions. The paper evaluates the efficiency of the proposed interdisciplinary tool for systematic eco-innovation, including creative semi-automatic knowledge-based idea generation and concept development. It analyses the learning experience and identifies the factors that impact the eco-innovation performance of the students.
Increasing power density causes increased self-generation of harmonics and intermodulation. As this leads to violations of the strict linearity requirements, especially for carrier aggregation (CA), the nonlinearity must be considered in the design process of RF devices. This raises the demand for accurate simulation models. Linear and nonlinear P-Matrix/COM models are used during the design because of their fast simulation times and accurate results. However, the finite element method (FEM) is useful for gaining a deeper insight into the device's nonlinearities, as the total field distributions can be visualized. The FE method requires complete sets of material tensors, which are unknown for most relevant materials in nonlinear micro-acoustics. In this work, we perform nonlinear FEM simulations, which allow the calculation of nonlinear field distributions of a lithium tantalate based layered SAW system up to third order. We aim at achieving good correspondence to measured data and determine the contributions of each material layer to the nonlinear signals. To this end, we use approximations circumventing the issue of limited higher-order tensor data. Experimental data for the third-order nonlinearity is shown to validate the presented approach.
Surface acoustic waves are propagated toward the edge of an anisotropic elastic medium (a silicon crystal), which supports leaky waves with a high degree of localization at the tip of the edge. At an angle of incidence corresponding to phase matching with this leaky wedge wave, a sharp peak in the reflection coefficient of the surface wave was found. This anomalous reflection is associated with efficient excitation of the leaky wedge wave. In laser ultrasound experiments, surface acoustic wave pulses were excited and their reflection from the edge of the sample and their partial conversion into leaky wedge wave pulses was observed by optical probe-beam deflection. The reflection scenario and the pulse shapes of the surface and wedge-localized guided waves, including the evolution of the acoustic pulse traveling along the edge, have been confirmed in detail by numerical simulations.
Properties of higher-order surface acoustic wave modes in Al(1-x)Sc(x)N / sapphire structures
(2021)
In this work, surface acoustic wave (SAW) modes and their dependence on propagation directions in epitaxial Al0.68Sc0.32N(0001) films on Al2O3(0001) substrates were studied using numerical and experimental methods. In order to find optimal propagation directions for higher-order SAW modes, phase velocity dispersion branches of Al0.68Sc0.32N on Al2O3 with Pt mass loading were computed for the propagation directions <11-20> and <1-100> with respect to the substrate. Experimental investigations of phase velocities and electromechanical coupling were performed for comparison with the numerical results. Simulations carried out with the finite element method (FEM) and with a Green function approach allowed identification of each wave type, including Rayleigh, Sezawa and shear horizontal wave modes. For the propagation direction <1-100>, significantly increased wave guidance of the Sezawa mode compared to other directions was observed, resulting in enhanced electromechanical coupling (k2eff = 1.6 %) and phase velocity (vphase = 6 km/s). We demonstrated that selecting wave propagation in <1-100> with high-mass-density electrodes results in increased electromechanical coupling without significant reduction in phase velocities for the Sezawa wave mode. An improved combination of metallization, Sc concentration x, and SAW propagation direction is suggested which exhibits both high electromechanical coupling (k2eff > 6 %) and high velocity (vphase = 5.5 km/s) for the Sezawa mode.
The present invention is directed to a storage-stable formulation of long-chain RNA. In particular, the invention concerns a dry powder composition comprising a long-chain RNA molecule. The present invention is furthermore directed to methods for preparing a dry powder composition comprising a long-chain RNA molecule by spray-drying. The invention further concerns the use of such a dry powder composition comprising a long-chain RNA molecule in the preparation of pharmaceutical compositions and vaccines, to a method of treating or preventing a disorder or a disease, to first and second medical uses of such a dry powder composition comprising a long-chain RNA molecule and to kits, particularly to kits of parts, comprising such a dry powder composition comprising a long-chain RNA molecule.
The present invention is directed to a storage-stable formulation of long-chain RNA. In particular, the invention concerns a dry powder composition comprising a long-chain RNA molecule. The present invention is furthermore directed to methods for preparing a dry powder composition comprising a long-chain RNA molecule by spray-freeze drying. The invention further concerns the use of such a dry powder composition comprising a long- chain RNA molecule in the preparation of pharmaceutical compositions and vaccines, to a method of treating or preventing a disorder or a disease, to first and second medical uses of such a dry powder composition comprising a long-chain RNA molecule and to kits, particularly to kits of parts, comprising such a dry powder composition comprising a long-chain RNA molecule.
Cryptographic protection of messages requires frequent updates of the symmetric cipher key used for encryption and decryption. Protocols of legacy IT security, like TLS, SSH, or MACsec, implement rekeying under the assumption that, first, application data exchange is allowed to stall occasionally and, second, dedicated control messages to orchestrate the process can be exchanged. In real-time automation applications, the first is generally prohibitive, while the second may induce problematic traffic patterns on the network. We present a novel seamless rekeying approach, which can be embedded into cyclic application data exchanges. While the approach is agnostic to the underlying real-time communication system, we developed a demonstrator emulating the widespread industrial Ethernet system PROFINET IO and successfully used this rekeying mechanism.
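The paper's concrete mechanism is not reproduced here. As a minimal sketch of the general idea, a key epoch counter can travel inside each cyclic frame so that both ends derive and switch keys in lockstep, without any dedicated control message (illustrative HMAC-based key derivation and frame tagging, standard library only; all function names and the frame layout are our assumptions):

```python
import hmac
import hashlib

def derive_key(master: bytes, epoch: int) -> bytes:
    """Derive the symmetric key for a given key epoch from a master secret
    (HKDF-style HMAC derivation; a stand-in for the scheme in the paper)."""
    return hmac.new(master, b"rekey" + epoch.to_bytes(4, "big"),
                    hashlib.sha256).digest()

def protect(master: bytes, epoch: int, payload: bytes) -> bytes:
    """Authenticate a cyclic frame; the epoch travels in the frame header,
    so the receiver can switch keys seamlessly, embedded in the data cycle."""
    key = derive_key(master, epoch)
    tag = hmac.new(key, payload, hashlib.sha256).digest()[:8]
    return epoch.to_bytes(4, "big") + payload + tag

def verify(master: bytes, frame: bytes) -> bytes:
    """Re-derive the key from the epoch in the header and check the tag."""
    epoch = int.from_bytes(frame[:4], "big")
    payload, tag = frame[4:-8], frame[-8:]
    expected = hmac.new(derive_key(master, epoch), payload,
                        hashlib.sha256).digest()[:8]
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failed")
    return payload
```

Because the epoch is part of every frame, no application data cycle has to stall while the key changes; the receiver simply derives the next key when the counter advances.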
To demonstrate how deep learning can be applied to industrial applications with limited training data, deep learning methodologies are used in three different applications. In this paper, we perform unsupervised deep learning utilizing variational autoencoders and demonstrate that federated learning is a communication-efficient concept for machine learning that protects data privacy. As an example, variational autoencoders are utilized to cluster and visualize data from a microelectromechanical systems foundry. Federated learning is used in a predictive maintenance scenario using the C-MAPSS dataset.
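As a reference point for the federated learning setup, the standard federated averaging step can be sketched as follows: the server aggregates client models weighted by local dataset size, so only model parameters, never raw data, are communicated (a minimal sketch, not the paper's implementation):

```python
def local_update(weights, gradient, lr=0.1):
    """One step of local training on a client's private data; the gradient
    is computed on-device, so raw data never leaves the client."""
    return [w - lr * g for w, g in zip(weights, gradient)]

def federated_average(client_weights, client_sizes):
    """Server-side FedAvg: weighted mean of the client models, with each
    client weighted by the number of local training samples."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
            for i in range(dim)]
```

One communication round then consists of broadcasting the averaged model, running `local_update` on each client, and aggregating again with `federated_average`.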
It is important to minimize the unscheduled downtime of machines caused by outages of machine components in highly automated production lines. Considering machine tools such as grinding machines, the bearings inside the spindles are among the most critical components. In the last decade, research has increasingly focused on fault detection of bearings, and the rise of machine learning concepts has further intensified interest in this area. However, to date there is no single one-fits-all solution for predictive maintenance of bearings. Most research so far has only looked at individual bearing types at a time.
This paper gives an overview of the most important approaches for bearing-fault analysis in grinding machines. The analysis presented in this paper has two main parts. The first part presents the classification of bearing faults, which includes the detection of unhealthy conditions, the position of the fault (e.g., at the inner or at the outer ring of the bearing), and the severity, i.e., the size of the fault. The second part presents the prediction of the remaining useful life, which is important for estimating the productive use of a component before a potential failure, optimizing replacement costs, and minimizing downtime.
In the last decade, deep learning models for condition monitoring of mechanical systems increasingly gained importance. Most of the previous works use data of the same domain (e.g., bearing type) or of a large amount of (labeled) samples. This approach is not valid for many real-world scenarios from industrial use-cases where only a small amount of data, often unlabeled, is available.
In this paper, we propose, evaluate, and compare a novel technique based on an intermediate domain, which creates a new representation of the features in the data and abstracts the defects of rotating elements such as bearings. The results based on an intermediate domain related to characteristic frequencies show an improved accuracy of up to 32 % on small labeled datasets compared to the current state-of-the-art in the time-frequency domain.
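The abstract does not spell out which characteristic frequencies the intermediate domain uses; the classic kinematic bearing-fault frequencies (ball-pass frequencies of the outer and inner race) are a typical example of the kind of features such a domain can be built on:

```python
import math

def bearing_fault_frequencies(n_balls, f_rot, d_ball, d_pitch, contact_deg=0.0):
    """Classic characteristic bearing fault frequencies, commonly used for
    envelope-spectrum features. Shown only as an example of characteristic
    frequencies; the paper's intermediate domain is not reproduced here.
    f_rot is the shaft rotation frequency in Hz."""
    c = (d_ball / d_pitch) * math.cos(math.radians(contact_deg))
    bpfo = n_balls / 2 * f_rot * (1 - c)  # ball pass frequency, outer race
    bpfi = n_balls / 2 * f_rot * (1 + c)  # ball pass frequency, inner race
    return bpfo, bpfi
```

Because these frequencies depend only on bearing geometry and shaft speed, features anchored to them abstract away the concrete bearing type, which is exactly what an intermediate domain for transfer between bearings needs.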
Furthermore, a Convolutional Neural Network (CNN) architecture is proposed for transfer learning. We also propose and evaluate a new approach for transfer learning, which we call Layered Maximum Mean Discrepancy (LMMD). This approach is based on the Maximum Mean Discrepancy (MMD) but extends it by considering the special characteristics of the proposed intermediate domain. The presented approach outperforms the traditional combination of Hilbert–Huang Transform (HHT) and S-Transform with MMD on all datasets for unsupervised as well as for semi-supervised learning. In most of our test cases, it also outperforms other state-of-the-art techniques.
This approach is capable of using different types of bearings in the source and target domain under a wide variation of the rotation speed.
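LMMD itself and its layer-wise weighting are specific to the paper. As a reference point, the plain (biased) squared MMD estimator under an RBF kernel, which LMMD extends, can be sketched as:

```python
import math

def rbf(x, y, gamma=1.0):
    """RBF kernel between two feature vectors."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def mmd2(X, Y, gamma=1.0):
    """Biased estimator of the squared Maximum Mean Discrepancy between
    samples X (source domain) and Y (target domain) under an RBF kernel."""
    def mean_k(A, B):
        return sum(rbf(a, b, gamma) for a in A for b in B) / (len(A) * len(B))
    return mean_k(X, X) + mean_k(Y, Y) - 2 * mean_k(X, Y)
```

Minimizing this quantity between source- and target-domain feature distributions is the usual way MMD is used as a transfer-learning loss term.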
It seems to be a widespread impression that the use of strong cryptography inevitably imposes a prohibitive burden on industrial communication systems, at least inasmuch as real-time requirements in cyclic fieldbus communications are concerned. AES-GCM is a leading cryptographic algorithm for authenticated encryption, which protects data against disclosure and manipulations. We study the use of both hardware and software-based implementations of AES-GCM. By simulations as well as measurements on an FPGA-based prototype setup we gain and substantiate an important insight: for devices with a 100 Mbps full-duplex link, a single low-footprint AES-GCM hardware engine can deterministically cope with the worst-case computational load, i.e., even if the device maintains a maximum number of cyclic communication relations with individual cryptographic keys. Our results show that hardware support for AES-GCM in industrial fieldbus components may actually be very lightweight.
For the past few years, Low Power Wide Area Networks (LPWAN) have emerged as key technologies for the connectivity of many applications in the Internet of Things (IoT), combining low data rates with strict cost and energy restrictions. LoRa/LoRaWAN in particular enjoys high visibility on today's markets because of its good performance and its open community. Originally, LoRa was designed for operation within the Sub-GHz ISM bands for industrial, scientific, and medical applications. At the end of 2018, however, a LoRa-based solution in the 2.4 GHz ISM band was presented, promising higher bandwidths and higher data rates. Furthermore, it overcomes the limited duty cycle prescribed by the regulations in the ISM bands and therefore also opens doors to many novel application fields. Due to the higher bandwidths and shorter transmission times, the use of alternative MAC layer protocols also becomes very interesting, e.g., for TDMA-based approaches. Within this paper, we propose a system architecture with 2.4 GHz LoRa components combining two aspects. On the one hand, we present the design and implementation of a 2.4 GHz LoRaWAN solution that can be seamlessly integrated into existing LoRaWAN back-hauls. On the other hand, we describe a deterministic setup using a Time Slotted Channel Hopping (TSCH) approach as defined in the IEEE 802.15.4-2015 standard for industrial applications. Finally, measurements show the performance of the system.
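For the TSCH part, the frequency selection rule defined in IEEE 802.15.4-2015 maps the absolute slot number (ASN) and a link's channel offset onto a hopping sequence; a minimal sketch of that rule:

```python
def tsch_channel(asn, channel_offset, hopping_sequence):
    """TSCH frequency selection as defined in IEEE 802.15.4-2015:
    the ASN plus the link's channel offset indexes the shared hopping
    sequence, so every retransmission of a link uses a different channel."""
    return hopping_sequence[(asn + channel_offset) % len(hopping_sequence)]
```

Two links in the same timeslot simply use different channel offsets, which is what makes the schedule both collision-free and deterministic.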
The development of new districts as well as the further development of existing ones involves a wide range of challenges. Additional climate protection measures and a growing environmental awareness are raising the energy requirements for residential and commercial real estate. Demographic trends, which are particularly unfavorable for Germany, are driving further urbanization, caused by migration and by the influx of older people into cities, which will have to establish even more age-appropriate housing and care facilities in the future. Added to this are the increasing demands of the digital transformation and of an information society that must grapple with connectivity, fast-moving change, trends toward individualization, and changing consumption habits.
Learning objectives:
Readers
• know the components and tasks of production;
• know which approaches the vision of 100 % sustainable production encompasses with regard to the use of materials and energy;
• have an insight into the stages on the way to sustainable production;
• know, also on the basis of examples, which approaches exist for implementing sustainable production;
• have an overview of the approaches for measuring sustainability in production.
Autonomous driving is disrupting the automotive industry as we know it today. For this, fail-operational behavior is essential in the sense, plan, and act stages of the automation chain in order to handle safety-critical situations autonomously, which currently is not achieved with state-of-the-art approaches. The European ECSEL research project PRYSTINE realizes Fail-operational Urban Surround perceptION (FUSION) based on robust Radar and LiDAR sensor fusion and control functions in order to enable safe automated driving in urban and rural environments. This paper showcases some of the key exploitable results (e.g., novel Radar sensors, innovative embedded control and E/E architectures, pioneering sensor fusion approaches, AI-controlled vehicle demonstrators) achieved up to its final year 3.
A new approach for determining the distance between two or more smartphones is presented. The position of each smartphone, indoors or outdoors, is determined relative to a reference point (spatial anchor point). Via a central server, the smartphones exchange their positions relative to the reference point and can use them to calculate the distances between one another. If the distance between two smartphones falls below a threshold (< 2 m), a corresponding notification is signaled on the smartphones.
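Once all positions relative to the shared anchor point are known, the server-side check reduces to a pairwise Euclidean-distance test against the 2 m threshold (a minimal sketch; the function and variable names are ours):

```python
import math
from itertools import combinations

THRESHOLD_M = 2.0  # signal when two devices are closer than 2 m

def proximity_alerts(positions):
    """Given each smartphone's position relative to the shared spatial
    anchor point (as exchanged via the central server), return all device
    pairs whose Euclidean distance falls below the threshold."""
    alerts = []
    for (a, pa), (b, pb) in combinations(positions.items(), 2):
        dist = math.dist(pa, pb)
        if dist < THRESHOLD_M:
            alerts.append((a, b, round(dist, 2)))
    return alerts
```

Because only anchor-relative coordinates are exchanged, no device needs the absolute position of any other device to perform this check.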
For educational institutions, the year 2020 stands for school closures and forced digitalization caused by the coronavirus pandemic. Within a few weeks, schools had no choice but to switch to remote teaching, learning management systems (LMS), school clouds, and video conferencing. What should give pause is less the pandemic-driven shift from classroom to distance teaching than its intended perpetuation, together with calls for increasingly automated schooling systems. Will educational institutions become part of the data economy, or will pedagogical premises continue to apply?
We describe a prototype for power line communication (PLC) for grid monitoring. The PLC receiver is used to gain information about the PLC channel and the current state of the power grid. The PLC receiver uses the communication signal to obtain an accurate estimate of the current channel and provides information which can be used as a basis for further processing with the aim to detect partial discharges and other anomalies in the grid. This monitoring of the power grid takes advantage of existing PLC infrastructure and uses the data signals, which are transmitted anyway, to obtain a real-time measurement of the channel transfer function and the received noise signal. Since this signal is sampled at a high sampling rate compared to simpler measurement sensors, it contains valuable information about possible degradations in the grid which need to be addressed. While channel measurements are based on a received PLC signal, information about partial discharges or other sources of interference can be gathered by a PLC receiver in the absence of a transmit signal. A prototype based on Software Defined Radio has been developed, which implements simultaneous communication and sensing for a power grid.
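Details of the prototype's estimator are not given in the abstract. A common way to obtain a channel transfer function from a known transmitted block (e.g., a pilot symbol) is per-carrier division in the frequency domain, sketched here with a naive DFT (illustrative only; the actual receiver processing may differ):

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform (O(N^2)); fine for a sketch."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def estimate_channel(tx, rx, eps=1e-12):
    """Per-carrier channel estimate H[k] = Y[k] / X[k] from a known
    transmitted block tx and the corresponding received block rx."""
    X, Y = dft(tx), dft(rx)
    return [y / x if abs(x) > eps else 0j for x, y in zip(X, Y)]
```

Tracking H[k] over time from the regular data traffic is what lets such a receiver spot slow degradations of the grid without any dedicated measurement signal.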
The following describes a new method for estimating the parameters of an interior permanent magnet synchronous machine (IPMSM). For the estimation of the parameters, the current slopes caused by the switching of the inverter are used to determine the unknowns of the system equations of the electrical machine. The angle and current dependence of the machine parameters is linearized within a PWM cycle. By considering the different switching states of the inverter, several system equations can be derived and a solution can be found within one PWM cycle. The use of test signals and filter-based approaches is avoided. The derived algorithm is explained and validated with measurements on a test bench.
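The full IPMSM formulation with angle- and current-dependent parameters is beyond this abstract. A drastically simplified single-phase illustration of the slope-based idea, where two switching states yield two equations for two unknowns, is:

```python
def estimate_L_and_lumped_term(u1, s1, u2, s2):
    """Illustrative single-phase reduction of slope-based estimation:
    within one PWM cycle the current obeys di/dt = (u - c) / L, where
    c = R*i + e lumps the resistive drop and the back-EMF (both assumed
    constant over the cycle). Two inverter switching states with applied
    voltages u1, u2 and measured current slopes s1, s2 give two equations
    for the two unknowns L and c. This is our simplification, not the
    paper's full multi-phase algorithm."""
    L = (u1 - u2) / (s1 - s2)
    c = u1 - s1 * L
    return L, c
```

The actual method extends this principle to the full machine equations, so that all unknowns can be resolved from the slopes observed within a single PWM cycle.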
This paper describes a thorough analysis of using PPO to learn kick behaviors with simulated NAO robots in the SimSpark environment. The analysis includes an investigation of the influence of PPO hyperparameters, network size, training setups, and performance in real games. We believe this improves the state of the art mainly in four points: first, the kicks are learned with a toed version of the NAO robot; second, we improve the reliability with respect to the kickable area and the avoidance of falls; third, the kick can be parameterized with the desired distance and direction as input to the deep network; and fourth, the approach allows the learned behavior to be integrated seamlessly into soccer games. The result is a significant improvement of the general level of play.
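The specific reward shaping and network details are in the paper; the core PPO ingredient, the clipped surrogate objective, can be sketched per sample as follows (a maximization objective; this is the standard formulation, not the paper's exact loss):

```python
def ppo_clip_objective(ratio, advantage, eps=0.2):
    """Clipped surrogate objective of PPO for one sample:
    min(r * A, clip(r, 1 - eps, 1 + eps) * A), where r is the probability
    ratio between the new and old policy and A the advantage estimate."""
    clipped = max(1.0 - eps, min(1.0 + eps, ratio))
    return min(ratio * advantage, clipped * advantage)
```

The clipping keeps each policy update close to the data-collecting policy, which is what makes PPO stable enough for long kick-training runs with many hyperparameter variations.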
This study aims to investigate the individual response concerning BRFs for AT when the mid-sole hardness underneath the rearfoot was systematically altered. We first identified FGs based on the footwear condition that minimised the risk for AT across BRFs. We then tested the FGs for differences in anthropometrics, footwear comfort, and running characteristics.
Online shops in Germany give away a great deal of potential in the registration and ordering process. Yet with a few targeted improvements, the checkout can be made accessible and smart. This is the conclusion of a heuristic study of the top 100 online shops conducted by Uniserv together with Offenburg University. The entry and quality of address data play a special role here.
With increasing data availability, the use of machine learning to control and optimize supply chains is becoming more attractive, since the quality of data analysis can be increased while the effort is reduced at the same time. Using the SCOR model, exemplary approaches are classified as a guide, and suitable machine learning methods are presented for them.
IoT platforms are a central element for connecting physical objects and making their data available to digital twins. The market for such platforms has grown strongly in recent years. With now more than 600 providers, choosing the "right" platform for one's own company is no longer a trivial task. This contribution aims to support companies in the selection process by presenting common functions of IoT platforms and criteria for their selection.
Zeitliche Anpassung führt zu verbesserter Schalllokalisation bei bimodal versorgten CI-/HG-Trägern
(2021)
In bimodal cochlear implant (CI) / hearing aid (HA) users, the different signal processing of the devices gives rise to a constant interaural time delay on the order of several milliseconds. For MED-EL CI systems in combination with various HA types, we quantified the respective device delay mismatch. In the current study, we investigate the influence of the device delay mismatch on sound localization accuracy in simulated and actual bimodal listeners.
To reduce the device delay mismatch in bimodal patients, we delayed the CI stimulation by the measured HA processing delay and by two further values. After an acclimatization phase, the effective angular error with a delay equal to the HA processing delay was highly significantly reduced compared with the test condition without CI delay (mean improvement: 11 %; p < .01, Wilcoxon signed-rank test). Improvements were also achieved with the two other delay values. Based on the results, the optimal patient-specific delay value can be narrowed down further.
In bimodal cochlear implant (CI) / hearing aid (HA) users a constant interaural time delay in the order of several milliseconds occurs due to differences in signal processing of the devices. For MED-EL CI systems in combination with different HA types, we have quantified the respective device delay mismatch (Zirn et al. 2015). In the current study, we investigate the effect of the device delay mismatch in simulated and actual bimodal listeners on sound localization accuracy.
To deal with the device delay mismatch in actual bimodal listeners we delayed the CI stimulation according to the measured HA processing delay and two other values. With all delay values highly significant improvements of the rms error in the localization task were observed compared to the test without the delay. The results help to narrow down the optimal patient-specific delay value.
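The compensation principle described above can be sketched in a few lines: the CI stimulation path is delayed by the measured HA processing delay so that both ear signals line up in time. The sampling rate and delay value below are illustrative assumptions, not measured patient data.

```python
import numpy as np

def delay_samples(signal, delay_ms, fs):
    """Delay a sampled signal by delay_ms milliseconds (zero-padded at the front)."""
    n = int(round(delay_ms * fs / 1000.0))
    return np.concatenate([np.zeros(n), signal])[: len(signal)]

# Illustrative values: 16 kHz sampling rate, 6 ms HA processing delay
fs = 16000
ha_delay_ms = 6.0
ci_signal = np.random.default_rng(0).standard_normal(fs)  # 1 s of stimulation

# Delaying the CI path by the HA processing delay aligns both ear signals
ci_aligned = delay_samples(ci_signal, ha_delay_ms, fs)
```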
Facial image manipulation is a generation task where the output face is shifted towards an intended target direction in terms of facial attributes and styles. Recent works have achieved great success in various editing techniques such as style transfer and attribute translation. However, current approaches either focus on pure style transfer or on the translation of predefined sets of attributes with restricted interactivity. To address this issue, we propose FacialGAN, a novel framework enabling simultaneous rich style transfers and interactive facial attribute manipulation. While preserving the identity of a source image, we transfer the diverse styles of a target image to the source image. We then incorporate the geometry information of a segmentation mask to provide fine-grained manipulation of facial attributes. Finally, a multi-objective learning strategy is introduced to optimize the loss of each specific task. Experiments on the CelebA-HQ dataset, with CelebAMask-HQ as semantic mask labels, show our model's capacity to produce visually compelling results in style transfer, attribute manipulation, diversity, and face verification. For reproducibility, we provide an interactive open-source tool to perform facial manipulations, and the PyTorch implementation of the model.
Wood juice, a liquid produced during wood processing, is a harmful waste that requires utilization. To achieve a circular economy, biowastes should be recycled, reducing fossil carbon usage. Therefore, the objective of this work was to examine the potential of wood juice as a feedstock for bioplastic synthesis by Bacillus sp. G8_19. Polyhydroxyalkanoate (PHA) syntheses using wood juice from Douglas fir trees and that from a mixture of spruce/fir trees were compared. It was found that the PHA content was higher after using wood juice from spruce/fir trees than that from Douglas fir trees (18.0% vs 6.1% of cell dry mass). Gas chromatography analysis showed that, with both wood juices, Bacillus sp. G8_19 accumulated poly(3-hydroxybutyrate-co-3-hydroxyvalerate). The content of 3-hydroxyvalerate (3HV) monomers was higher when spruce/fir wood juice was used (10.7% vs 1.9%). The C/N ratio did not have a statistically significant effect on the copolymer content in biomass, but it did significantly influence the 3HV content. The proposed concept may serve as an approach to wood waste valorization via production of biodegradable materials.
Object Detection and Mapping with Unmanned Aerial Vehicles Using Convolutional Neural Networks
(2021)
Significant progress has been made in the field of deep learning through intensive research over the last decade. So-called convolutional neural networks are an essential component of this research. In this type of neural network, the mathematical convolution operator is used to extract characteristics or anomalies. The purpose of this work is to investigate the extent to which it is possible, in certain initial settings, to feed aerial recordings and flight data of Unmanned Aerial Vehicles (UAVs) into the architecture of a neural network and to detect and map an object. Using the calculated contours or dimensions of the so-called bounding boxes, the position of the objects can be determined relative to the current UAV location.
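The final step, turning a detected bounding box into a position relative to the UAV, can be illustrated with a simple pinhole-camera calculation. The nadir-looking camera, flat terrain, and all parameter values below are simplifying assumptions for illustration.

```python
import math

def pixel_to_ground_offset(cx_px, cy_px, img_w, img_h, hfov_deg, altitude_m):
    """Map a bounding-box center (pixels) to a metric ground offset for a
    nadir-looking camera over flat terrain (illustrative pinhole model)."""
    # focal length in pixels, derived from the horizontal field of view
    f_px = (img_w / 2) / math.tan(math.radians(hfov_deg) / 2)
    dx_px = cx_px - img_w / 2
    dy_px = cy_px - img_h / 2
    # similar triangles: ground offset = altitude * pixel offset / focal length
    return altitude_m * dx_px / f_px, altitude_m * dy_px / f_px
```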
The applicability of local magnetic field characteristics for more precise localization of people and/or objects in indoor environments, such as railway stations, airports, exhibition halls, showrooms, or shopping centers, is considered. An investigation has been carried out to find out whether and how low-cost magnetic field sensors and mobile robot platforms can be used to create maps that improve the accuracy and robustness of later navigation with smartphones or other devices.
The aim of this work is the application and evaluation of a method to visually detect markers at a distance of up to five meters and determine their real-world position. Combinations of cameras and lenses with different parameters were studied to determine the optimal configuration. Based on this configuration, camera images were taken after proper calibration. These images are then transformed into a bird's eye view using a homography matrix. The homography matrix is calculated with four point pairs as well as with coordinate transformations. The obtained images show the ground plane undistorted, making it possible to convert a pixel position into a real-world position with a conversion factor. The proposed approach helps to effectively create data sets for training neural networks for navigation purposes.
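The homography estimation from four point pairs mentioned above can be sketched with the standard direct linear transform (DLT); this is a generic textbook formulation, not the exact implementation used in the work.

```python
import numpy as np

def homography_from_points(src, dst):
    """Estimate the 3x3 homography mapping src to dst from four point pairs (DLT)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The homography is the null-space vector of A (smallest singular value)
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def pixel_to_world(H, px, py):
    """Apply the homography to a pixel position (homogeneous coordinates)."""
    p = H @ np.array([px, py, 1.0])
    return p[0] / p[2], p[1] / p[2]
```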
Systematic Identification of Influencing Factors for the Additive Tooling of Injection Molds
(2021)
Additive tooling is a quick and cost-effective way of producing injection molded products and high-fidelity prototypes using the injection molding process. As part of product development, additive tooling is integrated into a complex process. A lack of design and application knowledge represents a barrier to its use. The present work shows how a Design Structure Matrix (DSM) can be used to systematically record and analyze influencing factors and their interrelationships. A systematic literature search is carried out to identify the factors and relationships.
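A minimal illustration of the DSM idea: influencing factors become rows and columns of a binary matrix, and row/column sums (active and passive sums) indicate how strongly each factor drives, or is driven by, the others. The factors below are invented examples, not the ones identified in the literature search.

```python
import numpy as np

# Toy DSM: entry [i, j] = 1 if factor i influences factor j (invented example)
factors = ["mold material", "layer thickness", "cooling time", "part quality"]
dsm = np.array([
    [0, 0, 1, 1],  # mold material drives cooling time and part quality
    [0, 0, 1, 1],  # layer thickness drives cooling time and part quality
    [0, 0, 0, 1],  # cooling time drives part quality
    [0, 0, 0, 0],  # part quality drives nothing
])

active = dsm.sum(axis=1)   # active sum: how many factors each factor influences
passive = dsm.sum(axis=0)  # passive sum: how many factors influence each factor
```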
As a reaction to the increasing market dynamics and complex requirements, today's products need to be developed quickly and customized to the customer's individual needs. In the past, CAD systems were mainly used to visualize the model that the product designer creates. Generative Design shifts the task of the CAD program by actively participating in the shaping process. This results in more design options, and the complexity of the shapes and geometries increases significantly. This potential can be optimally exploited through the combination of Generative Design with Additive Manufacturing (AM). Artificial intelligence and the input of target parameters generate geometries, for example by creating material for stressed areas, which in turn develops biomorphic shapes and thus significantly reduces the consumption of resources. This contribution aims at the evaluation of existing applications in CAD systems for Generative Design. Special attention is paid to the requirements in design education and easy access for students. For this purpose, three representative CAD systems are selected and analyzed with the help of a comprehensive example of mass reduction. The aim is to perform an individual result analysis in order to assess the applications based on various criteria. By using different materials, the influence of the material on the generation is investigated by comparing the material distribution. By comparing the generated models, differences between the CAD systems can be identified and possible fields of application can be presented. By specifying the manufacturing parameters for the generation of the models, the feasibility of AM can be guaranteed without having to modify the results. The physical implementation of the example by means of Fused Deposition Modeling demonstrates this in an exemplary way and examines the interface between Generative Design and AM.
The results of this contribution will enable an evaluation of the different CAD systems for Generative Design according to technical, visual and economic aspects.
This article presents the latest developments in the research group of Prof. Dr. Wendt. The use of the new 3D printer from Neotech is described, as well as the latest developments in the lighthouse project Flitzmo. In addition, the project on the use of robotics in the field of assisted living could be started this year.
This paper presents the development of an energy harvesting solution for a driven tool holder. The tool holder environment was analysed, a test stand was built, and the designed electromagnetic rotation harvester was evaluated. The reported harvester is based on low-cost off-the-shelf components and 3D-printed parts. The utilisation of SMD coils allows easy adaptation to changing parameters of the integration area. Energy harvesting in tool holders enables predictive maintenance and condition monitoring in industrial production. Such capabilities are essential nowadays with regard to the IIoT. A reliable energy source is key for continuous monitoring; changing batteries becomes obsolete. The results provide useful insights for future harvesters.
Engineering, construction and operation of complex machines involve a wide range of complicated, simultaneous tasks, which could potentially be automated. In this work, we focus on perception tasks in such systems, investigating deep learning approaches for multi-task transfer learning with limited training data. We show an approach that takes advantage of a technical system's focus on selected objects and their properties. We create focused representations and simultaneously solve joint objectives in a system through multi-task learning with convolutional autoencoders. The focused representations are used as a starting point for the data-saving solution of the additional tasks. The efficiency of this approach is demonstrated using images and tasks of an autonomous circular crane with a grapple.
An Empirical Investigation of Model-to-Model Distribution Shifts in Trained Convolutional Filters
(2021)
We present first empirical results from our ongoing investigation of distribution shifts in image data used for various computer vision tasks. Instead of analyzing the original training and test data, we propose to study shifts in the learned weights of trained models. In this work, we focus on the properties of the distributions of the dominantly used 3x3 convolution filter kernels. We collected and publicly provide a data set with over half a billion filters from hundreds of trained CNNs, using a wide range of data sets, architectures, and vision tasks. Our analysis shows interesting distribution shifts (or the lack thereof) between trained filters along different axes of meta-parameters, like data type, task, architecture, or layer depth. We argue that the observed properties are a valuable source for further investigation into a better understanding of the impact of shifts in the input data on the generalization abilities of CNN models, and into novel methods for more robust transfer learning in this domain.
A fundamental and still largely unsolved question in the context of Generative Adversarial Networks is whether they are truly able to capture the real data distribution and, consequently, to sample from it. In particular, the multidimensional nature of image distributions leads to a complex evaluation of the diversity of GAN distributions. Existing approaches provide only a partial understanding of this issue, leaving the question unanswered. In this work, we introduce a loop-training scheme for the systematic investigation of observable shifts between the distributions of real training data and GAN generated data. Additionally, we introduce several bounded measures for distribution shifts, which are both easy to compute and to interpret. Overall, the combination of these methods allows an explorative investigation of innate limitations of current GAN algorithms. Our experiments on different datasets and multiple state-of-the-art GAN architectures reveal large shifts between input and output distributions, suggesting that existing theoretical guarantees for the convergence of output distributions do not appear to hold in practice.
Correlation Clustering, also called the minimum cost Multicut problem, is the process of grouping data by pairwise similarities. It has proven to be effective on clustering problems where the number of classes is unknown. However, not only is the Multicut problem NP-hard, but an undirected graph G with n vertices representing single images has up to n(n-1)/2 edges, making it challenging to implement correlation clustering for large datasets. In this work, we propose Multi-Stage Multicuts (MSM) as a scalable approach for image clustering. Specifically, we solve minimum cost Multicut problems across multiple distributed compute units. Our approach not only allows to solve problem instances which are too large to fit into the shared memory of a single compute node, but it also achieves significant speedups while preserving the clustering accuracy at the same time. We evaluate our proposed method on the CIFAR10 …
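The grouping-by-pairwise-similarity idea can be sketched with a simple greedy heuristic in the spirit of additive edge contraction: merge the most similar pairs first, and never merge across negative (dissimilar) edges. This is a toy approximation for illustration, not the distributed MSM solver described above.

```python
def greedy_correlation_clustering(n, edges):
    """Greedy sketch of correlation clustering via union-find.
    edges maps (i, j) to a similarity (positive = same cluster, negative = cut).
    A heuristic approximation, not an exact minimum cost Multicut solver."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    # Contract edges in order of decreasing similarity
    for (i, j), w in sorted(edges.items(), key=lambda e: -e[1]):
        if w <= 0:
            break  # merging dissimilar pairs would only increase the cut cost
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
    return [find(i) for i in range(n)]
```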
Aerosol particles play an important role in the climate system by absorbing and scattering radiation and influencing cloud properties. They are also one of the biggest sources of uncertainty for climate modeling. Many climate models do not include aerosols in sufficient detail. In order to achieve higher accuracy, aerosol microphysical properties and processes have to be accounted for. This is done in the ECHAM-HAM global climate aerosol model using the M7 microphysics model, but increased computational costs make it very expensive to run at higher resolutions or for a longer time. We aim to use machine learning to approximate the microphysics model at sufficient accuracy and reduce the computational cost by being fast at inference time. The original M7 model is used to generate data of input-output pairs to train a neural network on it. By using a special logarithmic transform we are able to learn the variables' tendencies, achieving an average score of …. On a GPU we achieve a speed-up of 120 compared to the original model.
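A common way to realize such a logarithmic transform for tendencies that span many orders of magnitude and both signs is a signed log1p mapping; the exact transform used in the work may differ, so treat this as an illustrative sketch.

```python
import numpy as np

def signed_log(x, eps=1e-8):
    """Compress values spanning many orders of magnitude and both signs."""
    return np.sign(x) * np.log1p(np.abs(x) / eps)

def signed_log_inverse(y, eps=1e-8):
    """Invert the signed-log transform (up to floating point round-off)."""
    return np.sign(y) * eps * np.expm1(np.abs(y))
```

The transform is invertible, so network outputs in log space can be mapped back to physical tendencies.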
Recently, adversarial attacks on image classification networks by the AutoAttack (Croce and Hein, 2020b) framework have drawn a lot of attention. While AutoAttack has shown a very high attack success rate, most defense approaches focus on network hardening and robustness enhancements, like adversarial training. This way, the currently best-reported method can withstand about 66% of adversarial examples on CIFAR10. In this paper, we investigate the spatial and frequency domain properties of AutoAttack and propose an alternative defense. Instead of hardening a network, we detect adversarial attacks during inference, rejecting manipulated inputs. Based on a rather simple and fast analysis in the frequency domain, we introduce two different detection algorithms. First, a black-box detector that only operates on the input images and achieves a detection accuracy of 100% on the AutoAttack CIFAR10 benchmark and 99.3% on ImageNet, for epsilon = 8/255 in both cases. Second, a white-box detector using an analysis of CNN feature maps, likewise achieving detection rates of 100% and 98.7% on the same benchmarks.
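The black-box idea, detecting manipulated inputs from their frequency content alone, can be sketched as follows: compute the share of spectral energy above a radial cutoff and flag images where it is unusually high. The cutoff and threshold values here are invented for illustration, not the calibrated values from the paper.

```python
import numpy as np

def high_freq_energy_ratio(img, cutoff=0.25):
    """Fraction of spectral energy above a radial frequency cutoff (black-box cue)."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # normalized radial frequency, 0 at the (shifted) DC component
    r = np.hypot(yy - h / 2, xx - w / 2) / (min(h, w) / 2)
    return spec[r > cutoff].sum() / spec.sum()

def looks_adversarial(img, threshold=0.1):
    # Threshold is an illustrative assumption, not a calibrated value
    return high_freq_energy_ratio(img) > threshold
```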
Transformer models have recently attracted much interest from computer vision researchers and have since been successfully employed for several problems traditionally addressed with convolutional neural networks. At the same time, image synthesis using generative adversarial networks (GANs) has drastically improved over the last few years. The recently proposed TransGAN is the first GAN using only transformer-based architectures and achieves competitive results when compared to convolutional GANs. However, since transformers are data-hungry architectures, TransGAN requires data augmentation, an auxiliary super-resolution task during training, and a masking prior to guide the self-attention mechanism. In this paper, we study the combination of a transformer-based generator and convolutional discriminator and successfully remove the need for the aforementioned design choices. We evaluate our approach by conducting a benchmark of well-known CNN discriminators, ablate the size of the transformer-based generator, and show that combining both architectural elements into a hybrid model leads to better results. Furthermore, we investigate the frequency spectrum properties of generated images and observe that our model retains the benefits of an attention-based generator.
Most eCommerce applications, such as web shops, have millions of products. In this context, the identification of similar products is a common sub-task, which can be utilized in the implementation of recommendation systems, product search engines, and internal supply logistics. By providing this data set, we aim to boost the evaluation of machine learning methods for predicting the category of retail products from tuples of images and descriptions.
Generative adversarial networks are the state-of-the-art approach towards learned synthetic image generation. Although early successes were mostly unsupervised, this trend has bit by bit been superseded by approaches based on labelled data. These supervised methods allow a much finer-grained control of the output image, offering more flexibility and stability. Nevertheless, the main drawback of such models is the necessity of annotated data. In this work, we introduce a novel framework that benefits from two popular learning techniques, adversarial training and representation learning, and takes a step towards unsupervised conditional GANs. In particular, our approach exploits the structure of a latent space (learned by the representation learning) and employs it to condition the generative model. In this way, we break the traditional dependency between condition and label, substituting the latter by unsupervised features coming from the latent space. Finally, we show that this new technique is able to produce samples on demand while keeping the quality of its supervised counterpart.
Generative adversarial networks (GANs) provide state-of-the-art results in image generation. However, despite being so powerful, they still remain very challenging to train. This is in particular caused by their highly non-convex optimization space leading to a number of instabilities. Among them, mode collapse stands out as one of the most daunting ones. This undesirable event occurs when the model can only fit a few modes of the data distribution, while ignoring the majority of them. In this work, we combat mode collapse using second-order gradient information. To do so, we analyse the loss surface through its Hessian eigenvalues, and show that mode collapse is related to the convergence towards sharp minima. In particular, we observe how the eigenvalues of the Hessian are directly correlated with the occurrence of mode collapse. Finally, motivated by these findings, we design a new optimization algorithm called nudged-Adam (NuGAN) that uses spectral information to overcome mode collapse, leading to empirically more stable convergence properties.
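Monitoring Hessian eigenvalues during training can be sketched without ever forming the Hessian explicitly, using power iteration on finite-difference Hessian-vector products; this generic recipe illustrates the analysis tool, not the NuGAN optimizer itself.

```python
import numpy as np

def top_hessian_eigenvalue(grad_fn, theta, iters=50, eps=1e-5, seed=0):
    """Estimate the largest-magnitude Hessian eigenvalue by power iteration,
    using finite-difference Hessian-vector products:
    H v ~ (g(theta + eps*v) - g(theta - eps*v)) / (2*eps)."""
    v = np.random.default_rng(seed).standard_normal(theta.shape)
    v /= np.linalg.norm(v)
    lam = 0.0
    for _ in range(iters):
        hv = (grad_fn(theta + eps * v) - grad_fn(theta - eps * v)) / (2 * eps)
        lam = float(v @ hv)  # Rayleigh quotient with the unit vector v
        norm = np.linalg.norm(hv)
        if norm == 0:
            break
        v = hv / norm
    return lam
```

For a quadratic loss 0.5 θᵀAθ the gradient is Aθ, and the estimate converges to the largest eigenvalue of A.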
In this preliminary report, we present a simple but very effective technique to stabilize the training of CNN-based GANs. Motivated by recently published methods using frequency decomposition of convolutions (e.g., Octave Convolutions), we propose a novel convolution scheme to stabilize the training and reduce the likelihood of a mode collapse. The basic idea of our approach is to split convolutional filters into additive high- and low-frequency parts, while shifting weight updates from low to high during the training. Intuitively, this method forces GANs to learn low-frequency coarse image structures before descending into fine (high-frequency) details. Our approach is orthogonal and complementary to existing stabilization methods and can simply be plugged into any CNN-based GAN architecture. First experiments on the CelebA dataset show the effectiveness of the proposed method.
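The additive low/high split can be illustrated with the simplest possible decomposition, taking the kernel mean as the low-frequency part and the residual as the high-frequency part, blended by a training-progress schedule; the actual frequency decomposition in the paper may be more elaborate.

```python
import numpy as np

def split_filter(kernel):
    """Split a conv kernel into a low-frequency part (its mean) and the
    high-frequency residual; the two parts sum back to the original kernel."""
    low = np.full_like(kernel, kernel.mean())
    return low, kernel - low

def scheduled_kernel(kernel, progress):
    """Blend: early in training (progress ~ 0) only coarse structure passes,
    later (progress ~ 1) the full kernel is used. Illustrative schedule."""
    low, high = split_filter(kernel)
    return low + progress * high
```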
Interpreting seismic data requires the characterization of a number of key elements such as the position of faults and main reflections, presence of structural bodies, and clustering of areas exhibiting a similar amplitude versus angle response. Manual interpretation of geophysical data is often a difficult and time-consuming task, complicated by lack of resolution and presence of noise. In recent years, approaches based on convolutional neural networks have shown remarkable results in automating certain interpretative tasks. However, these state-of-the-art systems usually need to be trained in a supervised manner, and they suffer from a generalization problem. Hence, it is highly challenging to train a model that can yield accurate results on new real data obtained with different acquisition, processing, and geology than the data used for training. In this work, we introduce a novel method that combines generative neural networks with a segmentation task in order to decrease the gap between annotated training data and uninterpreted target data. We validate our approach on two applications: the detection of diffraction events and the picking of faults. We show that when transitioning from synthetic training data to real validation data, our workflow yields superior results compared to its counterpart without the generative network.
We demonstrate how to exploit group sparsity in order to bridge the areas of network pruning and neural architecture search (NAS). This results in a new one-shot NAS optimizer that casts the problem as a single-level optimization problem and does not suffer any performance degradation from discretizing the architecture.
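The group-sparsity mechanism can be sketched as a group-lasso view of a convolution layer: each output filter is one group, and filters whose group norm shrinks toward zero are pruned away, which implicitly selects an architecture. The shapes and threshold below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def group_norms(weights):
    """Group-lasso view of a conv layer: one L2 norm per output filter.
    weights has shape (out_channels, in_channels, k, k)."""
    return np.sqrt((weights ** 2).sum(axis=(1, 2, 3)))

def prune_groups(weights, threshold):
    """Zero out whole filters whose group norm falls below the threshold;
    the surviving groups define the selected architecture."""
    keep = group_norms(weights) >= threshold
    return weights * keep[:, None, None, None], keep
```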