The need for the logistics sector to respond in a timely manner to the increasing requirements of a globalised and digitalised world relies greatly on the competences and skills of its labour force. It therefore becomes essential to reinforce the cooperation between universities and business partners in the logistics and supply chain management fields across the European region and to build a logistics knowledge cluster supported by a communication and collaboration platform that fosters continuous learning, skill acquisition, and experience sharing anytime, anywhere. In this paper we focus on designing the conceptual and technical framework for a communication and collaboration platform with the aim of establishing communication pipelines between the partner institutions, facilitating user interaction and exchange, and leading to the creation of new knowledge and innovation in the logistics field. This framework is based on the requirements of the three main stakeholders: students, lecturers, and companies, and consists of four functional areas defined according to the platform's operational requirements. A working prototype of the platform was developed using the Moodle learning management system and its core tools to determine its applicability and possible enhancement requirements. In the next stages of the project, additional tools such as a knowledge base and the integration of the partners' learning management systems to form the logistics knowledge cluster will be implemented.
With the increasing share of renewable energies and the nuclear phase-out, the energy transition is accelerating. From the perspective of building technology, there is great potential to support this transition, given buildings' large share in total energy consumption and the increasing number of flexible, controllable components and storage systems. However, a question often asked at the plant level is: "How do we use this flexibility to support the regional grid?" In this work, a grid-supportive controller for a real-world building energy plant was developed using mathematical optimisation methods, and its technical feasibility was demonstrated. The results convinced actors from the energy industry and academia of the practicality of these methods and offer tools for their implementation.
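The abstract does not detail the optimisation model, so as a rough, purely illustrative sketch of the grid-supportive idea: search for a heat-pump schedule that still delivers the required thermal energy while minimising the peak load the grid sees. All names, load values, and power levels below are hypothetical.

```python
from itertools import product

def grid_supportive_schedule(base_load, levels, required_energy):
    """Brute-force search for a heat-pump power schedule (one level per
    time step) that delivers at least `required_energy` in total while
    minimising the peak of base load plus heat-pump load."""
    best, best_peak = None, float("inf")
    for schedule in product(levels, repeat=len(base_load)):
        if sum(schedule) < required_energy:
            continue  # demand not met, infeasible schedule
        peak = max(b + s for b, s in zip(base_load, schedule))
        if peak < best_peak:
            best, best_peak = schedule, peak
    return best, best_peak

# Hypothetical example: shift consumption away from the base-load peak
# in the middle time step.
schedule, peak = grid_supportive_schedule([2.0, 5.0, 2.0], [0.0, 1.0, 2.0], 4.0)
```

A real controller would replace the brute-force search with a mathematical programming formulation, but the objective (flatten the grid profile subject to meeting demand) is the same.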
The Go programming language is an increasingly popular language but some of its features lack a formal investigation. This article explains Go's resolution mechanism for overloaded methods and its support for structural subtyping by means of translation from Featherweight Go to a simple target language. The translation employs a form of dictionary passing known from type classes in Haskell and preserves the dynamic behavior of Featherweight Go programs.
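Dictionary passing can be illustrated outside Go as well. The hypothetical Python sketch below (illustrative only, not the paper's target language or formalism) replaces dynamic method lookup with a call through an explicit dictionary of method implementations, which is the essence of the translation for a structural interface such as `{Area()}`.

```python
def area_dict_for(shape_type):
    """Build the 'dictionary' for a structural interface {Area()}:
    a plain mapping from method name to implementation."""
    impls = {
        "circle": {"Area": lambda s: 3.14159 * s["r"] ** 2},
        "square": {"Area": lambda s: s["side"] ** 2},
    }
    return impls[shape_type]

def total_area(shapes):
    """Generic code receives each value together with its dictionary
    and calls through the dictionary instead of dynamic dispatch."""
    return sum(d["Area"](s) for s, d in shapes)

# Each value travels with its own method dictionary.
shapes = [
    ({"r": 1.0}, area_dict_for("circle")),
    ({"side": 2.0}, area_dict_for("square")),
]
total = total_area(shapes)
```

The dynamic behavior is preserved because the dictionary chosen at construction time encodes exactly the method that dynamic dispatch would have selected.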
A Hybrid Optoelectronic Sensor Platform with an Integrated Solution‐Processed Organic Photodiode
(2021)
Hybrid systems, unifying printed electronics with silicon-based technology, can be seen as a driving force for future sensor development. Especially interesting are sensing elements based on printed devices in combination with silicon-based high-performance electronics for data acquisition and communication. In this work, a hybrid system integrating a solution-processed organic photodiode into a silicon-based system environment is presented, which enables flexible device measurement and application-driven development. For performance evaluation, the measurements of the integrated organic photodiode are compared to those of a silicon-based counterpart. To this end, the steady-state response of the hybrid system is presented. Promising application scenarios are described in which a solution-processed organic photodiode is fully integrated into a silicon system.
Cryptographic protection of messages requires frequent updates of the symmetric cipher key used for encryption and decryption. Legacy IT security protocols, like TLS, SSH, or MACsec, implement rekeying under the assumption that, first, application data exchange is allowed to stall occasionally and, second, dedicated control messages to orchestrate the process can be exchanged. In real-time automation applications, the first is generally prohibitive, while the second may induce problematic traffic patterns on the network. We present a novel seamless rekeying approach, which can be embedded into cyclic application data exchanges. Although the approach is agnostic to the underlying real-time communication system, we developed a demonstrator emulating the widespread industrial Ethernet system PROFINET IO and successfully used this rekeying mechanism there.
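As a toy illustration of how rekeying can ride along inside cyclic traffic (this is not the paper's actual mechanism, and the XOR "cipher" is a stand-in for a real one), each cyclic frame below carries a one-bit key epoch, so the receiver can follow a key switch without any dedicated control messages and without stalling the data exchange.

```python
def make_frame(epoch, payload, keys):
    """Sender: tag the payload with the current key epoch and 'encrypt'
    it by XOR with that epoch's key (stand-in for a real cipher)."""
    key = keys[epoch % 2]
    return (epoch % 2, bytes(b ^ key for b in payload))

def read_frame(frame, keys):
    """Receiver: pick the key indicated by the epoch bit in the frame,
    so a key switch needs no separate control message."""
    epoch_bit, ciphertext = frame
    key = keys[epoch_bit]
    return bytes(b ^ key for b in ciphertext)

keys = {0: 0x5A, 1: 0xC3}  # hypothetical one-byte keys for two epochs
# Cycle n uses epoch 0; cycle n+1 switches seamlessly to epoch 1.
f0 = make_frame(0, b"data", keys)
f1 = make_frame(1, b"data", keys)
```

The key observation is that the epoch indicator is embedded in every cyclic frame anyway, so the switch is seamless from the application's point of view.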
Quantifying the midsole material characteristics of athletic footwear is a standard task in footwear research and development. Current material testing protocols primarily focus on determining the cushioning properties of the heel region or quantifying the midsole properties as one assembly. However, midsoles possess different spatial material properties that have not been quantified by previous methodologies. Therefore, new material testing methods are required to quantify the local material response of athletic footwear. We developed a cyclical force-controlled material testing protocol for the determination of non-homogeneously distributed material stiffness with a high spatial resolution. In five prototype shoes varying in their stiffness distribution, we found that the material properties can be reliably measured across the midsole. Furthermore, we observed a characteristic non-linear material response regardless of the midsole location. We found that the material stiffness increased with an increase in the applied force and that this effect is further intensified by higher testing cycles. Additionally, the obtained midsole stiffness depends on the geometry of the midsole. We explored different approaches to reduce the measurement time of the testing protocol and found that the number of measurements can be reduced by 70% using 2D interpolation procedures. Determining the spatial material properties of midsoles needs to be considered to understand foot-shoe interactions. Furthermore, this measurement protocol can be used for quality control within the footwear industry and can be adapted to consider the effects of different running styles or speeds on ground force application characteristics.
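The 70% reduction rests on interpolating stiffness values between measured midsole locations. A minimal bilinear-interpolation sketch over a hypothetical stiffness grid (the grid values and unit spacing are illustrative, not the study's data):

```python
import math

def bilinear(grid, x, y):
    """Bilinearly interpolate a value at (x, y) from a regular grid of
    measurements, grid[row][col], with unit spacing between points."""
    x0, y0 = int(math.floor(x)), int(math.floor(y))
    dx, dy = x - x0, y - y0
    v00 = grid[y0][x0]          # the four surrounding measurements
    v10 = grid[y0][x0 + 1]
    v01 = grid[y0 + 1][x0]
    v11 = grid[y0 + 1][x0 + 1]
    return (v00 * (1 - dx) * (1 - dy) + v10 * dx * (1 - dy)
            + v01 * (1 - dx) * dy + v11 * dx * dy)

# Sparse stiffness measurements (N/mm, hypothetical values).
stiffness = [[100.0, 120.0],
             [140.0, 160.0]]
mid = bilinear(stiffness, 0.5, 0.5)
```

Measuring only the grid corners and interpolating interior points is what allows most physical measurements to be skipped.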
The transition from college to university can have a variety of psychological effects on students, who must cope with daily obligations by themselves in a new setting, which can result in loneliness and social isolation. Mobile technology, specifically mental health apps (MHapps), has been seen as a promising solution to assist university students facing these problems; however, there is little evidence around this topic. My research investigates how a mobile app can be designed to reduce social isolation and loneliness among university students. The Noneliness app is being developed to this end; it aims to create social opportunities through a quest-based gamified system in a secure and collaborative network of local users. Initial evaluations with the target audience provided evidence on how an app should be designed for this purpose. These results are presented, along with how they informed the further steps towards my research goals. The paper was presented at the MobileHCI 2020 Doctoral Consortium.
The following describes a new method for estimating the parameters of an interior permanent magnet synchronous machine (IPMSM). For the parameter estimation, the current slopes caused by the switching of the inverter are used to determine the unknowns of the system equations of the electrical machine. The angle and current dependence of the machine parameters is linearized within a PWM cycle. By considering the different switching states of the inverter, several system equations can be derived and a solution can be found within one PWM cycle. The use of test signals and filter-based approaches is avoided. The derived algorithm is explained and validated with measurements on a test bench.
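The core idea, solving the machine's voltage equation from current slopes measured under different switching states, can be sketched for a plain R-L branch: two slope measurements under two known voltages give two linear equations in R and L. This is a didactic reduction, not the full IPMSM model with angle-dependent inductances.

```python
def estimate_rl(p1, p2):
    """Estimate R and L of an R-L branch from two measurements
    (voltage v, current i, current slope di/dt):  v = R*i + L*di/dt.
    Solves the resulting 2x2 linear system by Cramer's rule."""
    v1, i1, s1 = p1
    v2, i2, s2 = p2
    det = i1 * s2 - i2 * s1
    if det == 0:
        raise ValueError("measurements are linearly dependent")
    r = (v1 * s2 - v2 * s1) / det
    l = (i1 * v2 - i2 * v1) / det
    return r, l

# Synthetic data generated with R = 0.5 ohm, L = 0.01 H (hypothetical).
R, L = 0.5, 0.01
m1 = (10.0, 4.0, (10.0 - R * 4.0) / L)
m2 = (20.0, 6.0, (20.0 - R * 6.0) / L)
r_est, l_est = estimate_rl(m1, m2)
```

In the paper's setting, the distinct inverter switching states within one PWM cycle play the role of the two independent measurements here.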
Time-Sensitive Networking (TSN) is the most promising time-deterministic wired communication approach for industrial applications. To extend TSN to IEEE 802.11 wireless networks, two challenging problems must be solved: synchronization and scheduling. This paper focuses on the first one. Even though a few solutions already meet the required synchronization accuracies, they are built on expensive hardware that is not suited for mass-market products. While the next Wi-Fi generation might support the required functionalities, this paper proposes a novel method that makes high-precision wireless synchronization possible using commercial low-cost components. With the proposed solution, a standard deviation of the synchronization error of less than 500 ns can be achieved for many use cases and system loads on both CPU and network. This performance is comparable to modern wired real-time fieldbuses, which makes the developed method a significant contribution to the extension of the TSN protocol to the wireless domain.
Most eCommerce applications, like web shops, have millions of products. In this context, the identification of similar products is a common sub-task, which can be utilized in the implementation of recommendation systems, product search engines, and internal supply logistics. By providing this data set, we aim to boost the evaluation of machine learning methods for predicting the category of retail products from tuples of images and descriptions.
It is important to minimize the unscheduled downtime of machines caused by outages of machine components in highly automated production lines. In machine tools such as grinding machines, the bearings inside the spindles are among the most critical components. In the last decade, research has increasingly focused on fault detection of bearings, and the rise of machine learning concepts has further intensified interest in this area. However, to date there is no one-size-fits-all solution for predictive maintenance of bearings. Most research so far has only looked at individual bearing types at a time.
This paper gives an overview of the most important approaches to bearing-fault analysis in grinding machines. The analysis presented in this paper has two main parts. The first part covers the classification of bearing faults, which includes the detection of unhealthy conditions, the position of the fault (e.g., at the inner or outer ring of the bearing), and the severity, i.e., the size of the fault. The second part covers the prediction of remaining useful life, which is important for estimating the productive use of a component before a potential failure, optimizing replacement costs, and minimizing downtime.
Multiple Object Tracking (MOT) is a long-standing task in computer vision. Current approaches based on the tracking-by-detection paradigm either require some sort of domain knowledge or supervision to associate data correctly into tracks. In this work, we present a self-supervised multiple object tracking approach based on visual features and minimum cost lifted multicuts. Our method is based on straightforward spatio-temporal cues that can be extracted from neighboring frames in an image sequence without supervision. Clustering based on these cues enables us to learn the required appearance invariances for the tracking task at hand and to train an AutoEncoder to generate suitable latent representations. The resulting latent representations can thus serve as robust appearance cues for tracking, even over large temporal distances where no reliable spatio-temporal features can be extracted. We show that, despite being trained without the provided annotations, our model provides competitive results on the challenging MOT Benchmark for pedestrian tracking.
Achieving Positive Hospitality Experiences through Technology: Findings from Singapore and Malaysia
(2021)
Customers’ experience is one of the most impactful factors in the tourism industry. Only by offering customers an excellent experience is it possible to build and ensure long-term customer loyalty. In today’s world, technology plays a key role in providing customers with an excellent experience. This study has the objective of analyzing how a positive customer experience can be achieved and which technologies are necessary to ensure this. Results were collected through a literature review and qualitative interviews with managers of selected hotels, as well as of attractions, in Malaysia and Singapore. The analysis of these hotels and attractions is based on a set of criteria to determine the extent of the adoption of the new standards that contribute to positive online customer experiences. In conclusion, different perspectives are compared, and positive and negative aspects of the use of modern technologies in the tourism industry are specified and discussed.
Additive manufacturing is a rapidly growing manufacturing process for which many new processes and materials are currently being developed. Its biggest advantage is that almost any shape can be produced, whereas conventional manufacturing methods reach their limits. Furthermore, a lot of material is saved because the part is created in layers and only as much material is used as necessary. In contrast, in the case of machining processes, it is not uncommon for more than half of the material to be removed and disposed of. Recently, new additive manufacturing processes have come on the market that enable the manufacturing of components using the FDM process with fiber reinforcement. This opens up new possibilities for optimizing components in terms of their strength while increasing sustainability by reducing material consumption and waste. Within the scope of this work, different types of test specimens are designed, manufactured, and examined. The test specimens are tensile specimens, which are used both for standardized tensile tests and for examining a practical component from automotive engineering used in a student project. This project is a vehicle designed to compete in the Shell Eco-marathon, one of the world’s largest energy efficiency competitions. The aim is to design a vehicle that covers a certain distance with as little fuel as possible. Accordingly, it is desirable to manufacture the components with the lowest possible weight while still ensuring the required rigidity. To achieve this, the use of fiber-reinforced 3D-printed parts is particularly suitable due to their high rigidity. In particular, the joining technology for connecting conventionally and additively manufactured components is developed. As a result, the economic efficiency was assessed, and guidelines for the design of components and joining elements were created.
In addition, it could be shown that the additive manufacturing of the component could be implemented faster and more sustainably than the previous conventional manufacturing.
In bimodal cochlear implant (CI) / hearing aid (HA) users, a constant interaural time delay on the order of several milliseconds occurs due to differences in the signal processing of the devices. For MED-EL CI systems in combination with different HA types, we have quantified the respective device delay mismatch (Zirn et al. 2015). In the current study, we investigate the effect of the device delay mismatch on sound localization accuracy in simulated and actual bimodal listeners.
To deal with the device delay mismatch in actual bimodal listeners, we delayed the CI stimulation according to the measured HA processing delay and two other values. With all delay values, highly significant improvements in the rms error of the localization task were observed compared to the test without a delay. The results help to narrow down the optimal patient-specific delay value.
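The localization benefit is reported as a reduction of the rms error between presented and perceived source azimuths; that metric is simply the root-mean-square of the angular deviations. The azimuth values below are hypothetical, chosen only to illustrate the computation.

```python
import math

def rms_error(presented, perceived):
    """Root-mean-square error between presented and perceived
    source azimuths (in degrees)."""
    n = len(presented)
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(presented, perceived)) / n)

# Hypothetical azimuth responses without and with delay compensation.
presented  = [-60, -30, 0, 30, 60]
no_delay   = [-20, -10, 5, 40, 90]
with_delay = [-55, -28, 2, 33, 62]
```

A smaller rms error with the compensating delay corresponds to the improvement reported in the study.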
Any educator or researcher who critically reflects on the topic of "digitalisation and teaching" will find that only a few realise the implications of the long-intended transformation of educational institutions into IT-conformant, algorithmically controlled learning factories. The Corona pandemic is merely the current occasion for implementing long-known digitalisation strategies more quickly. Central to this is the shift away from originally pedagogical premises as the basis of teaching and learning processes towards the paradigm of data-driven school development and empirical educational research. Data and statistics dominate both the individual and classroom activity. In practical terms, this means collecting and evaluating as much student data as possible and making it the basis of decisions about learning content and processes. Teaching and learning are once again claimed to be a controllable process, today a digitally controllable one, as with programmed learning in the 1950s. What are possible alternatives?
IoT networks are increasingly used as entry points for cyberattacks, as they often offer low security levels, may allow the control of physical systems, and potentially open access to other IT networks and infrastructures. Existing intrusion detection systems (IDS) and intrusion prevention systems (IPS) mostly concentrate on legacy IT networks. Nowadays, they come with a high degree of complexity and adaptivity, including the use of artificial intelligence. Only recently have these techniques also been applied to IoT networks. In this paper, we present a survey of machine learning (ML) and deep learning (DL) methods for intrusion detection, and we investigate how previous works have used federated learning (FL) for IoT cybersecurity. To this end, we present an overview of IoT protocols and potential security risks. We also report the techniques and datasets used in the studied works, discuss the challenges of using ML, DL, and FL for IoT cybersecurity, and provide future insights.
Social robots that communicate with us and imitate human behaviour patterns are an important topic for the future. While many works investigate their design and acceptance, there has so far been little research on their marketability. This work focuses on the use of social robots in health and care settings, where their future integration holds enormous potential. A study with 197 participants from Italy and Germany examines desired functionalities and purchase preferences, taking cultural differences into account. The importance of several dimensions of the ALMERE model (e.g., perceived enjoyment, usefulness, and trustworthiness) was confirmed. Acceptance correlates strongly with the willingness to invest. Many older people regard social robots as "assistive technical devices" and expect them to be subsidised by insurers and the public sector. To facilitate their future deployment, social robots should be included in the databases of medical aids.
An Empirical Investigation of Model-to-Model Distribution Shifts in Trained Convolutional Filters
(2021)
We present first empirical results from our ongoing investigation of distribution shifts in image data used for various computer vision tasks. Instead of analyzing the original training and test data, we propose to study shifts in the learned weights of trained models. In this work, we focus on the properties of the distributions of dominantly used 3x3 convolution filter kernels. We collected and publicly provide a data set with over half a billion filters from hundreds of trained CNNs, using a wide range of data sets, architectures, and vision tasks. Our analysis shows interesting distribution shifts (or the lack thereof) between trained filters along different axes of meta-parameters, like data type, task, architecture, or layer depth. We argue that the observed properties are a valuable source for further investigation into a better understanding of the impact of shifts in the input data on the generalization abilities of CNN models, and into novel methods for more robust transfer learning in this domain.
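A shift between two filter populations can be quantified with simple distribution statistics. The hypothetical sketch below flattens 3x3 kernels from two "models" and compares the element-wise mean and variance of the two weight populations; it is a crude stand-in for the distribution analyses in the paper, with toy kernel values.

```python
def flatten(filters):
    """Flatten a list of 3x3 kernels into one flat list of weights."""
    return [w for k in filters for row in k for w in row]

def mean_var(xs):
    m = sum(xs) / len(xs)
    v = sum((x - m) ** 2 for x in xs) / len(xs)
    return m, v

def shift(filters_a, filters_b):
    """Absolute difference in mean and variance between two filter sets."""
    ma, va = mean_var(flatten(filters_a))
    mb, vb = mean_var(flatten(filters_b))
    return abs(ma - mb), abs(vb - va)

# Two toy 'models': the second uses the same kernel scaled by two,
# so the means agree but the variances differ (a pure spread shift).
k = [[0.1, -0.2, 0.0], [0.3, 0.0, -0.1], [0.0, 0.2, -0.3]]
k2 = [[2 * w for w in row] for row in k]
dm, dv = shift([k], [k2])
```

Real analyses of half a billion filters would of course use richer statistics, but the per-axis comparison of weight distributions follows the same pattern.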
An Empirical Study of Explainable AI Techniques on Deep Learning Models For Time Series Tasks
(2021)
Decision explanations of machine learning black-box models are often generated by applying Explainable AI (XAI) techniques. However, many proposed XAI methods produce unverified outputs. Evaluation and verification are usually achieved with a visual interpretation by humans on individual images or text. In this preregistration, we propose an empirical study and benchmark framework to apply attribution methods for neural networks developed for images and text data on time series. We present a methodology to automatically evaluate and rank attribution techniques on time series using perturbation methods to identify reliable approaches.
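A perturbation check of this kind can be sketched as: zero out the time steps with the highest attribution, re-run the model, and measure how much the output drops; a faithful attribution should hurt the prediction more than a misleading one. The model and attributions below are toy stand-ins, not the proposed benchmark.

```python
def perturbation_score(model, series, attribution, k):
    """Zero the k time steps with the highest attribution and return the
    drop in the model output (larger drop = more faithful attribution)."""
    top = sorted(range(len(series)), key=lambda i: attribution[i], reverse=True)[:k]
    perturbed = [0.0 if i in top else x for i, x in enumerate(series)]
    return model(series) - model(perturbed)

# Toy 'model': a weighted sum, so the true attribution is weight * value.
weights = [0.0, 0.0, 1.0, 1.0, 0.0]
model = lambda s: sum(w * x for w, x in zip(weights, s))
series = [5.0, 5.0, 5.0, 5.0, 5.0]

good = [w * x for w, x in zip(weights, series)]   # faithful attribution
bad = [1.0, 1.0, 0.0, 0.0, 1.0]                   # misleading attribution
```

Ranking attribution methods by such perturbation scores is what allows the evaluation to run automatically, without visual inspection by humans.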
We present a video-densitometric high-performance thin-layer chromatography (HPTLC) quantification method for patulin in apple juice, developed in a vertical chamber from the starting point to a distance of 50 mm using MTBE–n-pentane (9 + 5, v/v) as mobile phase. After separation, the plate is sprayed with methyl-benzothiazolinone hydrazone hydrochloride monohydrate (MBTH) solution (40 mg in 20 mL methanol) and heated at 105 °C for 15 min, which transforms the patulin zones into yellow spots. Quantification is based on direct measurements using an inexpensive 48-bit flatbed scanner for colour measurements (in red, green, and blue). Evaluating the blue channel makes the measurements very specific. Quantification in fluorescence was also performed using a 16-bit CCD camera with UV-366 nm illumination, as well as an HPTLC DAD scanner. For linearization, the extended Kubelka–Munk expression was used for data transformation. The range of linearity covers more than two orders of magnitude and lies between 5 and 800 ng patulin. The extraction of 20 g apple juice and an extract application on the plate of up to 50 µL allows a statistically defined check of the limit of detection (LOD) of 50 ng patulin per track, which is equivalent to 50 µg patulin per kg apple juice.
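For orientation, the classical Kubelka–Munk relation maps measured reflectance to a quantity that is more nearly proportional to analyte amount; the paper uses an extended variant of this expression, which is not reproduced here. A minimal sketch of the classical function:

```python
def kubelka_munk(reflectance):
    """Classical Kubelka-Munk function K/S = (1 - R)^2 / (2R),
    where R is the relative reflectance, 0 < R <= 1.
    (The paper applies an extended variant for linearization.)"""
    if not 0 < reflectance <= 1:
        raise ValueError("reflectance must be in (0, 1]")
    return (1 - reflectance) ** 2 / (2 * reflectance)
```

Lower reflectance (a darker zone, i.e., more analyte) yields a larger transformed value, which is what linearizes the densitometric calibration.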
A deep understanding of the cyclic plasticity behaviour of metallic materials is highly relevant both for optimising material properties and for the industrial design and manufacture of components. Modern alloys such as duplex steels in particular exhibit a pronounced Bauschinger effect under load reversal, owing to their complex multi-phase microstructure and their tendency towards various precipitation reactions, which must be taken into account in technical forming processes. The Bauschinger effect is largely rooted in the formation of back stresses resulting from the different plasticity behaviour of the austenitic and ferritic phases. Instrumented micro-indentation tests in selected ferrite and austenite grains have shown that austenitic microstructure constituents are characterised by a considerably earlier onset of yielding and stronger reverse plastification during unloading. It was further demonstrated that precipitates formed during 475 °C embrittlement intensify this phase difference and thus result in a stronger Bauschinger effect.
Probably the most important element of the Business Judgment Rule is the existence of an adequate basis of information. It is considered fulfilled when a manager may reasonably assume that improving a given level of information quality would not justify the required expenditure of time or money. This implicitly presupposes that different degrees of time, money, and information quality can be distinguished. For the expenditure of time and money this holds, but how does one grade information quality? This contribution makes a corresponding proposal for forecast-related information.
Surface acoustic waves are propagated toward the edge of an anisotropic elastic medium (a silicon crystal), which supports leaky waves with a high degree of localization at the tip of the edge. At an angle of incidence corresponding to phase matching with this leaky wedge wave, a sharp peak in the reflection coefficient of the surface wave was found. This anomalous reflection is associated with efficient excitation of the leaky wedge wave. In laser ultrasound experiments, surface acoustic wave pulses were excited and their reflection from the edge of the sample and their partial conversion into leaky wedge wave pulses was observed by optical probe-beam deflection. The reflection scenario and the pulse shapes of the surface and wedge-localized guided waves, including the evolution of the acoustic pulse traveling along the edge, have been confirmed in detail by numerical simulations.
The applicability of local magnetic field characteristics for more precise localization of subjects and/or objects in indoor environments, such as railway stations, airports, exhibition halls, showrooms, or shopping centers, is considered. An investigation has been carried out to find out whether and how low-cost magnetic field sensors and mobile robot platforms can be used to create maps that improve the accuracy and robustness of later navigation with smartphones or other devices.
IoT platforms are a central element for networking physical objects and making their data available to digital twins. The market for such platforms has grown strongly in recent years. With now more than 600 providers, choosing the "right" platform for one's own company is no longer a trivial task. This contribution aims to support companies in the selection process by presenting common functions of IoT platforms and criteria for selecting them.
Digital and media technology are typical components of teaching today. To enable learning and understanding processes, however, what is needed above all is conversation and discourse. Learning is an individual and social process that cannot be compensated digitally if understanding, and not mere repetition, is the goal. Media and media technology can support learning processes, but we learn with one another.
This volume examines the intended digital transformation of schools and teaching from pedagogical and philosophical as well as educational-theoretical and cognitive-science perspectives. In a practice-oriented way, it makes the intended digital control and quantification of learning processes transparent and presents alternatives for a responsible and pedagogically meaningful use of media and digital technology in the classroom. The goal is emancipation and maturity through constructive and productive media work.
Cyclic micro-bending tests on fcc single crystal Ni-base Alloy 718 cantilevers with different crystal orientations were performed to analyze the influence of activated slip systems on dislocation plasticity, latent hardening and the Bauschinger effect. The investigations indicate that plasticity in single crystal micro-cantilevers is significantly influenced by two phenomena: dislocation interaction and dislocation pile-up at the neutral plane. Both phenomena occur at the same time. Their ratio seems to be determined by the activated slip systems. Slip trace analysis indicates that the activation of only one slip system leads to a strong localization of plasticity to a limited number of parallel slip bands. This results in low dislocation interaction and consequently pronounced pile-ups at the neutral plane. In multi-slip orientation, the second slip system leads to the activation of significantly more dislocation sources, causing a much earlier and more homogeneous elastic-plastic transition. In stress-strain hysteresis loops during bending, pronounced dislocation interaction in multi-slip orientation leads to more pronounced latent hardening. The results suggest that on a microstructural length scale, plasticity behavior is strongly affected by the activated slip systems, which determine local dislocation phenomena. Based on the results presented in this paper, a finite element analysis of latent hardening and the Bauschinger effect using a single crystal plasticity model with latent kinematic hardening is presented in Part II.
In the last decade, deep learning models for condition monitoring of mechanical systems have increasingly gained importance. Most previous works use data from the same domain (e.g., bearing type) or a large amount of (labeled) samples. This approach is not valid for many real-world scenarios in industrial use cases, where only a small amount of data, often unlabeled, is available.
In this paper, we propose, evaluate, and compare a novel technique based on an intermediate domain, which creates a new representation of the features in the data and abstracts the defects of rotating elements such as bearings. The results based on an intermediate domain related to characteristic frequencies show an improved accuracy of up to 32% on small labeled datasets compared to the current state of the art in the time-frequency domain.
Furthermore, a Convolutional Neural Network (CNN) architecture is proposed for transfer learning. We also propose and evaluate a new approach for transfer learning, which we call Layered Maximum Mean Discrepancy (LMMD). This approach is based on the Maximum Mean Discrepancy (MMD) but extends it by considering the special characteristics of the proposed intermediate domain. The presented approach outperforms the traditional combination of the Hilbert–Huang Transform (HHT) and the S-Transform with MMD on all datasets, for unsupervised as well as semi-supervised learning. In most of our test cases, it also outperforms other state-of-the-art techniques.
This approach is capable of using different bearing types in the source and target domains under a wide variation of the rotation speed.
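MMD itself compares the feature distributions of source and target domains. A minimal estimator of the squared MMD under an RBF kernel (the biased V-statistic form; the LMMD extension from the paper is not reproduced here, and the sample values are toy data):

```python
import math

def rbf(a, b, gamma=1.0):
    """Gaussian (RBF) kernel between two scalars."""
    return math.exp(-gamma * (a - b) ** 2)

def mmd2(xs, ys, gamma=1.0):
    """Biased estimate of the squared Maximum Mean Discrepancy between
    two 1-D samples: E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)]."""
    def mean_kernel(us, vs):
        return sum(rbf(u, v, gamma) for u in us for v in vs) / (len(us) * len(vs))
    return mean_kernel(xs, xs) + mean_kernel(ys, ys) - 2 * mean_kernel(xs, ys)

# Identical samples give zero discrepancy; shifted samples do not.
same = mmd2([0.0, 0.1, -0.1], [0.0, 0.1, -0.1])
shifted = mmd2([0.0, 0.1, -0.1], [3.0, 3.1, 2.9])
```

Minimizing such a discrepancy between source-domain and target-domain features during training is what aligns the two domains in MMD-based transfer learning.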
Data analysis has always existed, even if statistical methods, optimisation models, regression analyses, and time-series models are repackaged in all kinds of publications and sold under the title of business analytics. The decisive difference, however, is that today larger and, above all, different volumes of data as well as more powerful processing and storage capacities are available, which, together with established and newly developed methods, enable substantial insights. Promising approaches are those that guarantee a complete embedding of business analytics into the strategic management process and enable the optimal exploitation of operational excellence through alignment. The functional perspective here is marketing, with a concrete examination of customer loyalty and its profitability. This relationship is established through a Monte Carlo simulation model. This must not obscure the fact that the breadth and depth of the methods are considerable, increasingly lead to specialisation, and that other analysis options are available. Ultimately, it should become clear that business analytics requires technological, domain-specific, methodological, and methodical expertise that can rarely be provided by a single person.
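The link between customer loyalty and profitability via Monte Carlo simulation can be illustrated minimally: simulate many customers whose retention in each period is random, and estimate the expected customer lifetime profit. The retention rates, margin, and horizon below are hypothetical, not the chapter's model.

```python
import random

def simulate_clv(retention, margin, periods, runs, seed=42):
    """Monte Carlo estimate of expected customer lifetime profit:
    each period the customer stays with probability `retention` and
    contributes `margin`; churned customers contribute nothing more."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(runs):
        for _ in range(periods):
            if rng.random() >= retention:
                break  # customer churns in this period
            total += margin
    return total / runs

# Loyal vs. churn-prone customer segments (hypothetical parameters).
clv_loyal = simulate_clv(retention=0.9, margin=100.0, periods=10, runs=20000)
clv_churn = simulate_clv(retention=0.6, margin=100.0, periods=10, runs=20000)
```

Even this toy version shows the managerial point: modest differences in retention compound into large differences in lifetime profitability.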
To meet the limit on global warming of 1.5 degrees Celsius agreed in the Paris climate accord, the energy transition must be advanced far more vigorously than before. The C/sells showcase in the largest of the SINTEG model regions took up this challenge. Over four years, 56 partners from the energy industry, science, and politics in Baden-Württemberg, Bavaria, and Hesse worked on establishing a cellular energy system. They developed model solutions for a successful energy transition. In more than 30 demonstration cells as well as nine participation cells, the so-called C/sells cities, it was demonstrated how an information system enables the intelligent organization of power supply grids and the regionalized trading of energy and flexibilities.
Objective: To quantify the effect of inhaled 5% carbon-dioxide/95% oxygen on EEG recordings from patients in non-convulsive status epilepticus (NCSE).
Methods: Five children of mixed aetiology in NCSE were given a high flow of inhaled carbogen (5% carbon dioxide/95% oxygen) via a face mask for a maximum of 120 s. EEG was recorded concurrently in all patients. The effects of inhaled carbogen on the patients' EEG recordings were investigated using band-power, functional connectivity and graph theory measures. The carbogen effect was quantified by measuring the effect size (Cohen's d) between the "before", "during" and "after" carbogen delivery states.
Results: Carbogen's apparent effect on EEG band-power and network metrics for the "before-during" and "before-after" inhalation comparisons was inconsistent across the five patients.
Conclusion: The changes in different measures suggest a potentially non-homogeneous effect of carbogen on the patients' EEG. Different aetiology and duration of the inhalation may underlie these non-homogeneous effects. Tuning the carbogen parameters (such as ratio between CO2 and O2, duration of inhalation) on a personalised basis may improve seizure suppression in future.
The manufacturing of conventional electronics has become a highly complicated process, which requires intensive investment. In this context, printed electronics keeps attracting attention from both academia and industry. The primary reason is the simplification of the manufacturing process via additive printing technology such as ink-jet printing. Consequently, advantages are realized such as on-demand fabrication, minimal material waste and versatile choice of substrate materials. Central to the development of printed electronic circuits are printed transistors. Recently, metal oxide semiconductors such as indium oxide have become promising materials for the fabrication of printed transistors due to their high charge mobility. Furthermore, electrolyte-gating also provides benefits such as the low-voltage operation in sub-1 V regime due to the large gate capacitance provided by electrical double layers. This opens new possibilities to fabricate printed devices and circuits for niche applications.
To facilitate the design and fabrication of printed circuits, the development of compact models is necessary. However, most current works have focused on the static behavior of transistors, while an in-depth understanding of other characteristics such as the dynamic or noise behavior is missing. To this end, the purpose of this work is a comprehensive study of the capacitance and noise properties of inkjet-printed electrolyte-gated thin-film transistors (EGT) based on indium oxide semiconductors. Proper modeling approaches are also proposed to capture the electrical behaviour accurately, which can be further utilized to enable advanced analysis of digital, analog and mixed-signal circuits.
In this work, the capacitance of EGTs is characterized using voltage-dependent impedance spectroscopy. Intrinsic and extrinsic effects are carefully separated by using de-embedding test structures. Also, a dedicated equivalent circuit model is established to offer accurate simulations of the measured frequency response of the gate impedance. Based on that, it is revealed that top-gated EGTs have the potential to reach operation frequency in the kHz regime with proper optimizations of materials and printing process. Furthermore, a Meyer-like model is proposed to accurately capture the capacitance-voltage characteristics of the lumped terminal capacitance. Both parasitic and nonquasi-static effects are considered. This further enables the AC and transient analysis of complex circuits in circuit simulators.
Subsequently, the noise properties in the field of printed electronics are studied. Low-frequency noise of EGTs is characterized using a reliable experimental setup. By examining measured noise spectra of the drain current at various gate voltages, number fluctuation with correlated mobility fluctuation is identified as the primary noise mechanism. Based on this, the normalized flat-band voltage noise can be established as the key performance metric; at only 1.08 × 10⁻⁷ V² µm², it is significantly lower than in other thin-film technologies based on dielectric gating and semiconductors such as IZO and IGZO. A plausible reason is the large gate capacitance offered by the electrical double layers. This renders EGT technology useful for low-noise and sensitive applications such as sensor periphery circuits.
Last but not least, various circuit designs based on EGT technology are proposed, including basic digital circuits such as inverters and ring oscillators. Their performance metrics such as the propagation delay and power consumption are extensively characterized. Also, the first design of a printed full-wave rectifier is presented by using diode-connected EGTs, which features near-zero threshold voltage. As a consequence, the presented rectifier can effectively process input voltage with a small amplitude of 100 mV and a cut-off frequency of 300 Hz, which is particularly attractive for the application domain of energy harvesting. Additionally, the previously established capacitance models are verified on those circuits, which provide a satisfactory agreement between the simulation and measurement data.
Generative adversarial networks (GANs) provide state-of-the-art results in image generation. However, despite being so powerful, they still remain very challenging to train. This is in particular caused by their highly non-convex optimization space leading to a number of instabilities. Among them, mode collapse stands out as one of the most daunting ones. This undesirable event occurs when the model can only fit a few modes of the data distribution, while ignoring the majority of them. In this work, we combat mode collapse using second-order gradient information. To do so, we analyse the loss surface through its Hessian eigenvalues, and show that mode collapse is related to the convergence towards sharp minima. In particular, we observe how the eigenvalues of the Hessian are directly correlated with the occurrence of mode collapse. Finally, motivated by these findings, we design a new optimization algorithm called nudged-Adam (NuGAN) that uses spectral information to overcome mode collapse, leading to empirically more stable convergence properties.
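The link between mode collapse and sharp minima can be probed without ever forming the full Hessian: power iteration on Hessian-vector products recovers the dominant eigenvalue from gradient evaluations alone. The sketch below demonstrates this on a toy loss with known curvature; the toy loss (not a GAN) and all numbers are illustrative assumptions.

```python
import numpy as np

def loss_grad(w):
    """Gradient of a toy loss loss(w) = 10*w0^2 + 0.1*w1^2.

    Its Hessian is diag(20, 0.2): one sharp and one flat direction.
    """
    return np.array([20.0 * w[0], 0.2 * w[1]])

def hvp(w, v, eps=1e-5):
    """Finite-difference Hessian-vector product from two gradient calls."""
    return (loss_grad(w + eps * v) - loss_grad(w - eps * v)) / (2 * eps)

def top_eigenvalue(w, iters=50, seed=0):
    """Power iteration on the Hessian: converges to the largest eigenvalue."""
    v = np.random.default_rng(seed).normal(size=w.shape)
    for _ in range(iters):
        hv = hvp(w, v)
        v = hv / np.linalg.norm(hv)
    return v @ hvp(w, v)   # Rayleigh quotient with unit-norm v

lam = top_eigenvalue(np.array([1.0, 1.0]))
assert abs(lam - 20.0) < 1e-3   # sharp direction dominates
```

In a GAN setting the same machinery is applied to the generator/discriminator loss via automatic differentiation, and a growing top eigenvalue signals convergence toward a sharp minimum.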
Transformer models have recently attracted much interest from computer vision researchers and have since been successfully employed for several problems traditionally addressed with convolutional neural networks. At the same time, image synthesis using generative adversarial networks (GANs) has improved drastically over the last few years. The recently proposed TransGAN is the first GAN using only transformer-based architectures and achieves competitive results compared to convolutional GANs. However, since transformers are data-hungry architectures, TransGAN requires data augmentation, an auxiliary super-resolution task during training, and a masking prior to guide the self-attention mechanism. In this paper, we study the combination of a transformer-based generator and a convolutional discriminator and successfully remove the need for the aforementioned design choices. We evaluate our approach by conducting a benchmark of well-known CNN discriminators, ablate the size of the transformer-based generator, and show that combining both architectural elements into a hybrid model leads to better results. Furthermore, we investigate the frequency spectrum properties of generated images and observe that our model retains the benefits of an attention-based generator.
As a reaction to increasing market dynamics and complex requirements, today's products need to be developed quickly and customized to the customer's individual needs. In the past, CAD systems were mainly used to visualize the model created by the product designer. Generative Design shifts the task of the CAD program by actively participating in the shaping process. This results in more design options, and the complexity of the shapes and geometries increases significantly. This potential can be optimally exploited by combining Generative Design with Additive Manufacturing (AM). Artificial intelligence and the input of target parameters generate geometries, for example by creating material for stressed areas, which in turn develops biomorphic shapes and thus significantly reduces the consumption of resources. This contribution aims at evaluating existing applications for Generative Design in CAD systems. Special attention is paid to the requirements in design education and easy access for students. For this purpose, three representative CAD systems are selected and analyzed with the help of a comprehensive example of mass reduction. The aim is to perform an individual result analysis in order to assess each application based on various criteria. By using different materials, the influence of the material on the generation is investigated by comparing the material distribution. By comparing the generated models, differences between the CAD systems can be identified and possible fields of application can be presented. By specifying the manufacturing parameters for the generation of the models, the feasibility of AM can be guaranteed without having to modify the results. The physical implementation of the example by means of Fused Deposition Modeling demonstrates this in an exemplary way and examines the interface between Generative Design and AM.
The results of this contribution will enable an evaluation of the different CAD systems for Generative Design according to technical, visual and economic aspects.
Synthesizing voice with the help of machine learning techniques has made rapid progress over the last years [1]. Given the current increase in using conferencing tools for online teaching, we question just how easy (i.e. needed data, hardware, skill set) it would be to create a convincing voice fake. We analyse how much training data a participant (e.g. a student) would actually need to fake another participant's voice (e.g. a professor's). We provide an analysis of the existing state of the art in creating voice deep fakes and apply the identified as well as our own optimization techniques to two different voice data sets. A user study with more than 100 participants shows how difficult it is to distinguish real from fake voices (on average, only 37 percent can recognize a professor's fake voice). From a longer-term societal perspective, such voice deep fakes may lead to a disbelief by default.
This book constitutes the refereed proceedings of the 21st International TRIZ Future Conference on Automated Invention for Smart Industries, TFC 2021, held virtually in September 2021 and sponsored by IFIP WG 5.4.
The 28 full papers and 8 short papers presented were carefully reviewed and selected from 48 submissions. They are organized in the following thematic sections: inventiveness and TRIZ for sustainable development; TRIZ, intellectual property and smart technologies; TRIZ: expansion in breadth and depth; TRIZ, data processing and artificial intelligence; and TRIZ use and divulgation for engineering design and beyond.
Chapter "Domain Analysis with TRIZ to Define an Effective 'Design for Excellence'" is available open access under a Creative Commons Attribution 4.0 International License via link.springer.com.
It seems to be a widespread impression that the use of strong cryptography inevitably imposes a prohibitive burden on industrial communication systems, at least inasmuch as real-time requirements in cyclic fieldbus communications are concerned. AES-GCM is a leading cryptographic algorithm for authenticated encryption, which protects data against disclosure and manipulations. We study the use of both hardware and software-based implementations of AES-GCM. By simulations as well as measurements on an FPGA-based prototype setup we gain and substantiate an important insight: for devices with a 100 Mbps full-duplex link, a single low-footprint AES-GCM hardware engine can deterministically cope with the worst-case computational load, i.e., even if the device maintains a maximum number of cyclic communication relations with individual cryptographic keys. Our results show that hardware support for AES-GCM in industrial fieldbus components may actually be very lightweight.
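To make the feasibility argument concrete, a back-of-envelope version of the worst-case load calculation can be sketched as follows; the frame size, cycles per block, and clock frequency are illustrative assumptions, not the paper's measured figures.

```python
# Back-of-envelope worst-case AES-GCM load on a 100 Mbps full-duplex link.
# All concrete figures below are illustrative assumptions, not measured values.

LINK_BPS = 100_000_000      # 100 Mbps per direction
PAYLOAD_BYTES = 64          # assumed cyclic process-data payload
FRAME_BITS = PAYLOAD_BYTES * 8 + 160   # payload plus assumed framing overhead
AES_BLOCK_BYTES = 16
CYCLES_PER_BLOCK = 20       # assumed for a low-footprint hardware engine
ENGINE_HZ = 100_000_000     # assumed 100 MHz engine clock

frames_per_s = LINK_BPS / FRAME_BITS                      # upper bound, per direction
blocks_per_frame = -(-PAYLOAD_BYTES // AES_BLOCK_BYTES)   # ceiling division
blocks_per_s = 2 * frames_per_s * blocks_per_frame        # full duplex: both directions

# Fraction of the engine needed to keep up with the worst case:
utilization = blocks_per_s * CYCLES_PER_BLOCK / ENGINE_HZ
print(f"{blocks_per_s:,.0f} AES blocks/s -> {utilization:.0%} engine utilization")
```

Under these assumptions a single engine stays well below full utilization even at line rate in both directions, which is consistent with the conclusion that one low-footprint AES-GCM engine can deterministically cover the worst case.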
Emotions are part of every human being: they accompany consumers through all everyday situations, including and especially purchase decisions. Until now, however, it has only been possible to a limited extent to precisely capture and interpret these emotions in dialogue marketing. The innovative Customer Experience Tracking method of Offenburg University enables a bias-reduced measurement and evaluation of customer emotions occurring before, during, and after user interaction with dialogue marketing activities. From the results obtained in the laboratory or in the field, concrete recommendations for action can be derived in order to optimally align offline, online, or cross-media dialogue marketing offerings with the needs and expectations of customers.
As one result of the digital transformation in the automotive industry, new digital business models comprising software-based solutions are demanded by OEMs. To adequately meet these new requirements, automotive suppliers implement interdisciplinary roles, called Customer Solution Designers. However, due to its novelty, the Customer Solution Design research field is not yet well developed, neither in theory nor in practice. Besides giving an overview of the current state of the Customer Solution Design research field, the core of this paper is two-fold: based on 14 guided expert interviews with selected experts of a large German automotive supplier, we establish a uniform understanding of the Customer Solution Design role by using the Role Model Canvas (I). In addition, a case study strategy comprising two software-based projects executed by a large German automotive supplier is used to derive a common approach for Customer Solution Design in the context of an agile business framework (II).
The work focuses on the predictive capabilities of fundamental cyclic plasticity and fatigue life models, which can be calibrated using a limited number of experiments, as the specific tests needed for more advanced models are often unavailable. The analyses are conducted for the synthetic case of an exhaust manifold made from cast iron. The thermal boundary conditions from forced convection were obtained from computational fluid dynamics treated as a conjugate heat transfer problem. Two rate-independent, temperature-dependent material models were calibrated for structural analyses. Both were validated against isothermal and anisothermal experiments. Sequential thermal–mechanical finite element simulations were performed. Two fatigue life models were employed. The first was a temperature-dependent strain-based fatigue life criterion calibrated from uniaxial data. The second was a temperature-independent energy-based fatigue life criterion, which predicted a life half that of the strain-based criterion, while neither plasticity model made a significant difference to that prediction.
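For background, a widely used temperature-dependent strain-based criterion of the kind referred to above is the Coffin-Manson-Basquin relation, shown here as a general sketch; the paper's exact criterion and calibration may differ.

```latex
% Strain-life relation with temperature-dependent coefficients,
% calibrated from uniaxial fatigue data.
\frac{\Delta\varepsilon}{2}
  = \underbrace{\frac{\sigma_f'(T)}{E(T)}\,(2N_f)^{b}}_{\text{elastic (Basquin)}}
  + \underbrace{\varepsilon_f'(T)\,(2N_f)^{c}}_{\text{plastic (Coffin-Manson)}}
```

Here Δε/2 is the strain amplitude, 2N_f the number of load reversals to failure, σ_f' and ε_f' the fatigue strength and ductility coefficients, E the Young's modulus, and b, c the corresponding exponents.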
Over two decades, a research group has established itself at Offenburg University that brought together the fields of building automation and sustainable energy technology. Initially, the aim was to exploit the potential of internet-based weather forecasting and model-based plant control to improve comfort and energy efficiency in buildings. In the course of research and development work using dynamic building simulations, an algorithm was found that made it possible to predict the energy demand of an office building for the following day based on forecast outdoor temperature and solar irradiation. In combination with building automation, this gave rise to the adaptive and predictive TABS control AMLR.
Both the development of new districts and the further development of existing ones involve manifold challenges. Additional climate protection measures and growing environmental awareness are raising the energy requirements for residential and commercial properties. Germany's particularly unfavourable demographic development entails continuing urbanization, driven by migration and by older people moving into cities, which will have to establish even more age-appropriate housing and care facilities in the future. Added to this are the increasing demands of the digital transformation and of an information society that must grapple with connectivity, fast-paced change, individualization trends, and changing consumption habits.
Data Science
(2021)
Know-how für Data Scientists
• Clear, application-oriented introduction
• Numerous use cases and practical examples from different industries
• Potentials, but also possible pitfalls, are highlighted
Like no other term, Data Science currently stands for the analysis of large volumes of data with analytical concepts from machine learning and artificial intelligence. Following the conscious recognition of big data, and in particular its availability within companies, technologies and methods for analysis are in demand wherever classical business intelligence reaches its limits.
This book offers a comprehensive introduction to Data Science and its practical relevance for companies. The integration of Data Science into an existing business intelligence ecosystem is also addressed. Various contributions explain fields of activity and methods as well as role and organizational models, which act on Data Science in interplay with concepts and architectures.
This second, revised edition has been expanded to include new topics such as feature selection and deep reinforcement learning, as well as a new case study.
To demonstrate how deep learning can be applied in industrial settings with limited training data, deep learning methodologies are used in three different applications. In this paper, we perform unsupervised deep learning using variational autoencoders and demonstrate that federated learning is a communication-efficient concept for machine learning that protects data privacy. As examples, variational autoencoders are used to cluster and visualize data from a microelectromechanical systems foundry, and federated learning is applied in a predictive maintenance scenario using the C-MAPSS dataset.
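As an illustration of the federated learning concept mentioned above, the core server-side step (federated averaging of client parameters, weighted by local sample counts) can be sketched as follows; the toy parameter vectors and client sizes are illustrative, not the paper's models.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Federated averaging: weighted mean of client model parameters.

    client_weights: list of parameter vectors (one per client)
    client_sizes:   number of local training samples per client
    Only parameters travel to the server; raw data stays on the clients.
    """
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack(client_weights)
    return (stacked * (sizes / sizes.sum())[:, None]).sum(axis=0)

# Three clients that never share raw data, only model parameters.
w = fedavg([np.array([1.0, 0.0]),
            np.array([0.0, 1.0]),
            np.array([1.0, 1.0])],
           client_sizes=[100, 100, 200])
assert np.allclose(w, [0.75, 0.75])
```

Communication efficiency follows because each round exchanges only the model parameters, whose size is independent of the amount of local training data.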
This paper describes a thorough analysis of using PPO to learn kick behaviors with simulated NAO robots in the SimSpark environment. The analysis includes an investigation of the influence of PPO hyperparameters, network size, training setups, and performance in real games. We believe we improve the state of the art mainly in four points: first, the kicks are learned with a toed version of the NAO robot; second, we improve reliability with respect to the kickable area and the avoidance of falls; third, the kick can be parameterized with the desired distance and direction as input to the deep network; and fourth, the approach allows the learned behavior to be integrated seamlessly into soccer games. The result is a significant improvement of the general level of play.
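For context, the clipped surrogate objective at the heart of PPO can be sketched as follows; this is the generic textbook form, not the paper's exact training code.

```python
import numpy as np

def ppo_clip_objective(ratio, advantage, eps=0.2):
    """PPO clipped surrogate: mean of min(r*A, clip(r, 1-eps, 1+eps)*A).

    ratio:     pi_new(a|s) / pi_old(a|s) per sampled action
    advantage: advantage estimate per sampled action
    eps:       clip range (0.2 is a common default)
    """
    unclipped = ratio * advantage
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage
    return np.minimum(unclipped, clipped).mean()

# A probability ratio far above 1+eps gains nothing extra for positive advantage...
assert np.isclose(ppo_clip_objective(np.array([2.0]), np.array([1.0])), 1.2)
# ...while a ratio collapse is still fully penalised:
assert np.isclose(ppo_clip_objective(np.array([0.5]), np.array([1.0])), 0.5)
```

The clipping keeps each policy update close to the data-collecting policy, which is what makes PPO stable enough for long kick-learning runs with many hyperparameter variations.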
This paper presents a historical case report by the Italian criminal anthropologist Cesare Lombroso (1835–1909), who is known far beyond his country's borders to this day. In this case report, the notorious and psychologically conspicuous thief Pietro Bersone is convicted with the help of a so-called hydrosphygmograph, at the time a novel technical device that could record the pulse non-invasively. Lombroso was presumably one of the first, if not the first, to anticipate the idea of the "lie detector" through the use of such a device. The presented passage from Lombroso's book "Neue Fortschritte in den Verbrecherstudien" ("New Advances in Criminal Studies") is therefore a remarkable find for the history of polygraphy as well.
Human-Robot Collaboration (HRC) has developed rapidly in recent years with the help of collaborative lightweight robots. An important prerequisite for HRC is a safe gripper system. This opens up a new field of application in robotics, mainly in supporting activities in assembly and care. Currently, a variety of grippers exist that show recognizable weaknesses in terms of flexibility, weight, safety, and price.
By means of Additive Manufacturing (AM), gripper systems can be developed that are multifunctional, quickly manufactured, and customizable. In addition, the subsequent assembly effort can be reduced by integrating several components into one complex component. An important advantage of AM is the new freedom in designing products; thus, components using lightweight design can be produced. Another advantage is 3D multi-material printing, in which a component with different material properties and functions can be realized.
This contribution presents the possibilities of AM considering HRC requirements. First, the topic of human-robot interaction with regard to additive manufacturing is explained on the basis of a literature review. In addition, the development steps of the HRC gripper through to assembly are explained, with particular emphasis on the knowledge acquired regarding AM. Furthermore, an application example of the HRC gripper is considered in detail, and the gripper and its components are evaluated and optimized with respect to their function. Finally, a technical and economic evaluation is carried out. As a result, it is possible to additively manufacture a multifunctional and customized human-robot collaboration gripping system. Both the costs and the weight were significantly reduced. Due to its low weight, the gripping system utilizes only about 13% of the payload of the robot used.
Recently, adversarial attacks on image classification networks by the AutoAttack (Croce and Hein, 2020b) framework have drawn a lot of attention. While AutoAttack has shown a very high attack success rate, most defense approaches focus on network hardening and robustness enhancements, like adversarial training. This way, the currently best-reported method can withstand about 66% of adversarial examples on CIFAR10. In this paper, we investigate the spatial and frequency domain properties of AutoAttack and propose an alternative defense. Instead of hardening a network, we detect adversarial attacks during inference, rejecting manipulated inputs. Based on a rather simple and fast analysis in the frequency domain, we introduce two different detection algorithms. First, a black-box detector that operates only on the input images and achieves a detection accuracy of 100% on the AutoAttack CIFAR10 benchmark and 99.3% on ImageNet, for epsilon = 8/255 in both cases. Second, a white-box detector using an analysis of CNN feature maps, leading to detection rates of likewise 100% and 98.7% on the same benchmarks.
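A crude stand-in for the kind of frequency-domain analysis described above is to measure the share of spectral energy in high-frequency bands, which adversarial perturbations tend to inflate; the synthetic images and the score below are illustrative assumptions, not the paper's detector.

```python
import numpy as np

def highfreq_ratio(img, radius_frac=0.5):
    """Share of 2D spectral energy outside a centred low-frequency disc.

    A simple per-image score: higher values indicate more high-frequency
    energy, as typically added by pixel-level adversarial perturbations.
    """
    f = np.fft.fftshift(np.fft.fft2(img))
    power = np.abs(f) ** 2
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    r = np.hypot(yy - h / 2, xx - w / 2)          # distance from spectrum centre
    low = power[r <= radius_frac * min(h, w) / 2].sum()
    return 1.0 - low / power.sum()

rng = np.random.default_rng(0)
clean = np.outer(np.hanning(32), np.hanning(32))        # smooth toy "image"
attacked = clean + 0.1 * rng.standard_normal((32, 32))  # high-frequency noise

assert highfreq_ratio(attacked) > highfreq_ratio(clean)
```

A detector of this kind would threshold the score at inference time and reject inputs above the cut-off, without touching the classifier's weights.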
Congenital heart defects are the most common form of congenital organ defects worldwide. In various studies, the incidence is mostly reported as between four and eleven per 1,000 live births (1–5). The multicentre PAN study (PAN: prevalence of congenital heart defects in newborns), which investigated the frequency of congenital heart defects in newborns in Germany between July 2006 and June 2007, found an overall prevalence of 107.6 per 10,000 live births. The subject of this work is the investigation of implants for the treatment of atrial septal defects (ASD). At 17.0%, atrial septal defects are the second most common type of heart defect after ventricular septal defects (VSD) at 48.9% (6, 7). Atrial septal defects are openings in the septum between the atria of the heart. In the therapy of an ASD, minimally invasive closure by means of so-called occluders is today the treatment of choice. During a cardiac catheterization, these are advanced to the implantation site via femoral access under ultrasound guidance and fluoroscopy and placed there (8). The occluders usually consist of a nitinol wire mesh and have the typical shape of a so-called double umbrella. The occluders of the individual manufacturers often differ considerably in shape and texture. At present, there is no examination method that makes the occluders on the market comparable with regard to their mechanical properties. This work is intended to contribute fundamental parameters characterizing the occluder models in order to enable their comparability. To this end, in-vitro measurements are performed that are suitable for characterizing the behaviour of the investigated models under different conditions and with varying defect sizes.
The role of the supervisory board is increasingly characterized as a strategic one, without this being explained in more detail. The current discussion shows that this results in blurred boundaries with the role of the executive board. This contribution specifies the role of the supervisory board in the context of strategic decisions.
Digitale Lernszenarien in der Hochschullehre. Bedeutung und Funktion aus Sicht von Studierenden
(2021)
Prompted by the coronavirus pandemic, a learning setting was developed in the computer science courses Software Engineering and Computer Networks at Offenburg University that integrates several digital learning scenarios (online sessions, learning videos, wikis, quizzes, forums, and the self-developed learning platform MILearning). In the winter semester 2020/2021, an evaluation was conducted to assess the use of the different digital learning scenarios in the current situation and to decide which scenarios make sense for use after the pandemic. From the perspective of didactic design, the suitability of the scenarios for knowledge transfer, the activation of students, and support with questions and problems play an important role. The results show that students use the learning setting intensively and combine the digital learning scenarios offered in a way conducive to learning.
Anyone who engages with the topic of "digitalization and schools" finds that only a few people realize the scope of the intended transformation of educational institutions into largely automated learning factories by means of digital technology. Even terms such as "Learning 4.0" or "School 4.0" do not lead people to question the theories and automation techniques underlying the "Industry 4.0" projects. Otherwise it would be immediately clear that this numbering scheme, reminiscent of technology history and software updates, is as misguided for educational institutions as the fiction of knowledge production.
"Digitalization" appears to be the current doctrine of salvation, dazzling and blinding many people. The "advance of digital technology into all areas of life" is said to be without alternative and irreversible, as if establishing a technical infrastructure were a force of nature and as if very concrete economic interests did not stand behind these network services. What is ignored is that behind phrases such as "digitalization changes...", "digitalization ensures...", or "digitalization leads to..." stand very concrete and nameable actors, companies, and their business fields.
For educational institutions, the year 2020 stands for school closures and pandemic-enforced digitalization. Within a few weeks, teaching was switched out of necessity to distance learning and learning management systems (LMS), school cloud services, and video conferencing. What should give pause for thought is less the pandemic-driven switch from classroom to distance teaching than its intended perpetuation, together with calls for increasingly automated schooling systems. Will educational institutions become part of the data economy, or will pedagogical premises continue to apply?
Offenburg University of Applied Sciences offers pre-study extracurricular preparatory courses in mathematics and physics for future engineering students. Due to pandemic restrictions, the two-week preparatory physics course preceding the winter term 2020/21 was presented as an online-only course.
Students enrolled in the course attended eight online lectures of approximately 90 minutes duration, followed by a group assignment. Both lectures and tutoring for the group assignment used a videoconference system with group sizes of 120 (lecture) and 6 (peer instruction and group assignments). The eight lectures covered the high school physics curriculum of mechanics, electricity, thermodynamics, and optics. Each lecture included four "peer instruction" questions to improve student activation. Student responses were collected using an online audience response tool.
The "peer instruction" questions were discussed by the students in online groups of six. These groups also received written group assignments consisting of common textbook exercises and additional problems with incomplete information. To solve these problems, groups were encouraged to discuss possible solutions. Online course attendance was monitored and showed a characteristic exponential "decay" curve with a half-life of approximately 18 lectures, which is comparable to conventional courses: around 73% of the students enrolled in the preparatory course attended all eight lectures. In addition to attendance, the progress of the participants was monitored by two online tests: a pre-course online test on the first course day and a post-course online test on the last day.
The completion of both tests was highly recommended, but not a formal requirement for the students. The fraction of students completing the pre-course, but not the post-course test was used as an estimate for the drop-out rate of (34±3)%.
The present invention is directed to a storage-stable formulation of long-chain RNA. In particular, the invention concerns a dry powder composition comprising a long-chain RNA molecule. The present invention is furthermore directed to methods for preparing a dry powder composition comprising a long-chain RNA molecule by spray-drying. The invention further concerns the use of such a dry powder composition comprising a long-chain RNA molecule in the preparation of pharmaceutical compositions and vaccines, to a method of treating or preventing a disorder or a disease, to first and second medical uses of such a dry powder composition comprising a long-chain RNA molecule and to kits, particularly to kits of parts, comprising such a dry powder composition comprising a long-chain RNA molecule.
The present invention is directed to a storage-stable formulation of long-chain RNA. In particular, the invention concerns a dry powder composition comprising a long-chain RNA molecule. The present invention is furthermore directed to methods for preparing a dry powder composition comprising a long-chain RNA molecule by spray-freeze drying. The invention further concerns the use of such a dry powder composition comprising a long- chain RNA molecule in the preparation of pharmaceutical compositions and vaccines, to a method of treating or preventing a disorder or a disease, to first and second medical uses of such a dry powder composition comprising a long-chain RNA molecule and to kits, particularly to kits of parts, comprising such a dry powder composition comprising a long-chain RNA molecule.
Eine Frage der Qualität
(2021)
In this introductory chapter, the authors give an overview of the emergence of marketing controlling, its tasks, its organisational integration into the company, and its strategic and operational forms. In addition, the individual contributions of this handbook are presented in context.
eLetter: "The ancient Capua leg from 300 BC and the 1941 air raid on the Royal College of Surgeons"
(2021)
eLetter on the article "The College of Surgeons, London", published in Science, Vol. 93, Issue 2425, p. 587 (DOI: 10.1126/science.93.2425.587).
eLetter on the article "Plague Through History" by Nils Chr. Stenseth, published in Science, Vol. 321, Issue 5890, pp. 773-774 (DOI: 10.1126/science.1161496).
Due to the pandemic of 2020, many teaching and research institutions are confronted with extraordinary working conditions. In order to enable empirical data collection under these special circumstances, teachers and scientists need to respond flexibly and new concepts need to be developed. This paper deals with the challenges that arise in day-to-day teaching and provides different approaches to meet them. It covers quantitative surveys, remote UX-testing methods as an alternative to eye-tracking studies in the lab, as well as face-to-face user experience tests under strict hygiene measures.
Aerosol particles play an important role in the climate system by absorbing and scattering radiation and influencing cloud properties. They are also one of the biggest sources of uncertainty in climate modeling. Many climate models do not include aerosols in sufficient detail. In order to achieve higher accuracy, aerosol microphysical properties and processes have to be accounted for. This is done in the ECHAM-HAM global climate aerosol model using the M7 microphysics model, but increased computational costs make it very expensive to run at higher resolutions or for a longer time. We aim to use machine learning to approximate the microphysics model at sufficient accuracy and to reduce the computational cost by being fast at inference time. The original M7 model is used to generate input-output pairs on which a neural network is trained. By using a special logarithmic transform we are able to learn the variables' tendencies, achieving an average score of …. On a GPU we achieve a speed-up of 120 compared to the original model.
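The logarithmic transform mentioned above is a common trick for emulating microphysics variables that span many orders of magnitude: instead of the raw difference between consecutive states, the network learns the log-ratio, which is well scaled for positive, multiplicatively changing quantities. A minimal sketch with synthetic data (all names, shapes and value ranges are illustrative, not taken from M7):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for M7 input-output pairs: positive aerosol variables
# spanning many orders of magnitude (illustrative shapes and ranges).
x_t = rng.uniform(1e-12, 1e-6, size=(1000, 5))        # state at time t
x_t1 = x_t * rng.uniform(0.9, 1.1, size=x_t.shape)    # state at t+1

# Log-transformed tendency: well-scaled training target for a network,
# unlike the raw difference x_t1 - x_t whose magnitude varies wildly.
y = np.log(x_t1 / x_t)

# A network trained to predict y then reconstructs the next state via:
x_t1_hat = x_t * np.exp(y)
assert np.allclose(x_t1_hat, x_t1)
```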
Within the EU research project ACA-Modes (Advanced Control Algorithms for Management of Decentralised Energy Systems), real laboratories of the project partners are evaluated in terms of primary energy use, economics and emissions. Four project partners supply data sets from measurement series of typical supply scenarios. The various systems comprise, among others, a combined heat and power (CHP) plant with a natural gas combustion engine, a combined cooling, heat and power (CCHP) plant with an adsorption chiller, a photovoltaic system with battery storage and heat pump, and a solar thermal system with an adsorption chiller.
Energiemanagement im Betrieb
(2021)
The goal of the investment measure Enerlab 4.0 was to provide comprehensive in-operando and post-mortem diagnostics for decentralised energy generators and storage devices, e.g. battery cells and photovoltaic cells. These are important components for various areas of Industrie 4.0, ranging from autonomous sensors and energy-self-sufficient production to quality control. To this end, the equipment of Offenburg University was expanded, both for in-operando diagnostics (electrical cyclers, impedance spectrometers, temperature test chambers) and for post-mortem diagnostics (glovebox, sample preparation for existing materials analysis and chemical analysis). Equipment already available from other ongoing or completed projects was integrated into the new infrastructure. The result is a modern and powerful battery and photovoltaics laboratory that is being used in numerous ongoing and new projects.
When shopping online, it is usually not possible to view products in the same way as when shopping offline. With augmented reality (AR), it is possible not only to view the product in detail, but also to view it at home in the real environment. Such an AR application sets stimuli that can affect users, their purchase decision and their word-of-mouth intention. In this work, we assume that when viewing a product in AR, not only affective internal states but also cognitive perception processes have an impact on purchase decision and word-of-mouth intention. While positive affective reactions have already been studied in the context of AR, this paper also describes inner cognitive perception processes, using the construct of AR authenticity. To test these assumptions, a study was conducted with 155 participants. The results show that both the purchase intention and the word-of-mouth intention are influenced by the constructs of positive affective reactions and AR authenticity.
Online shops in Germany waste a great deal of potential in the registration and checkout process, even though a few targeted improvements suffice to make the checkout accessible and smart. This is the conclusion of a heuristic study of the top 100 online shops conducted by Uniserv together with Offenburg University. The entry and quality of address data play a particular role here.
Before asking how and in what form different social media channels are best used to communicate municipal topics, and can thus become an effective tool for the interaction of municipalities with their specific target groups, some general thought must be given to which stakeholders and target groups can be reached at all via this route, and for which topic categories. Current cases in particular, such as communication during the corona crisis, show how digital channels can be placed within the overall spectrum of communication channels as part of media planning.
Estimation of Scattering and Transfer Parameters in Stratified Dispersive Tissues of the Human Torso
(2021)
The aim of this study is to understand the effect of the various layers of biological tissue on electromagnetic radiation in a certain frequency range. Understanding these effects could prove crucial in the development of dynamic imaging systems under operating conditions during catheter ablation in the heart. As the catheter passes through arterial paths into the region of interest inside the heart via the aorta, a three-dimensional localization of the catheter is required. In this paper, a study is given on the detection of the catheter by means of electromagnetic waves. For this purpose, an appropriate model of the layers of the human torso is defined and simulated both without and with an inserted electrode.
Detailed material investigations of the fatigue behavior of two cast aluminium alloys used in combustion engines are presented. The network of intermetallic phases of both aluminium alloys is characterized by means of detailed energy dispersive X-ray spectroscopy. In order to investigate the temperature-dependent fatigue behavior of the materials, tensile, low cycle and thermomechanical fatigue tests are performed over a wide temperature and loading range. The influence of the temperature dependence on the experimental results is discussed.
In this work, the influence of superimposed high cycle fatigue on the LCF/HCF and TMF/HCF lifetime is investigated for two cast aluminium alloys of the types AlSi7 and AlSi12. The replica technique is used to examine the short crack growth behavior under pure LCF and LCF/HCF loading. The observed short crack growth evolution explains the observed lifetime reduction with increasing HCF amplitudes.
In the present paper, the influence of locally varying microstructures in case of an AlSi12 cast aluminium alloy is investigated by means of extracting the test pieces from different removal positions and low cycle fatigue tests. The temperature-dependent damage mechanisms, the material specific defect types, sizes and their influence on the fatigue properties of two AlSi7 and AlSi12 cast aluminium alloys are studied. An extreme value statistics methodology is applied to predict maximum defect sizes expected in a critical surface volume from two-dimensional metallographic micrographs. A damage map for the AlSi12 cast aluminium alloy is presented explaining the influence of the temperature- and load-dependent damage mechanisms on the observed isothermal and thermomechanical lifetime behavior.
The aim of this work is the application and evaluation of a method to visually detect markers at a distance of up to five meters and determine their real-world position. Combinations of cameras and lenses with different parameters were studied to determine the optimal configuration. Based on this configuration, camera images were taken after proper calibration. These images were then transformed into a bird's-eye view using a homography matrix, calculated both from four point pairs and via coordinate transformations. The resulting images show the ground plane undistorted, making it possible to convert a pixel position into a real-world position with a single conversion factor. The proposed approach helps to efficiently create data sets for training neural networks for navigation purposes.
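The four-point variant of the homography computation described above can be sketched with the classic direct linear transform; the pixel and ground coordinates below are made-up illustration values, not measurements from the paper:

```python
import numpy as np

def homography_from_4_points(src, dst):
    """Direct Linear Transform: solve for the 3x3 homography H
    (with h33 fixed to 1) from four point correspondences src -> dst."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.asarray(A, float), np.asarray(b, float))
    return np.append(h, 1.0).reshape(3, 3)

# Four image pixels and their known ground-plane positions
# (illustrative values, e.g. centimetres on a calibration target).
img_pts = [(420, 710), (860, 705), (980, 420), (310, 430)]
ground_pts = [(0, 0), (200, 0), (200, 300), (0, 300)]
H = homography_from_4_points(img_pts, ground_pts)

def to_ground(px, py):
    """Map a pixel into ground-plane coordinates (homogeneous divide)."""
    u, v, w = H @ np.array([px, py, 1.0])
    return u / w, v / w

# The calibration pixels map back onto their ground positions exactly.
print(to_ground(420, 710))  # -> approximately (0.0, 0.0)
```

With the ground plane rectified this way, a pixel-to-metric conversion reduces to the single scale factor mentioned in the abstract.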
The system presented here combines the new concept of peer-to-peer navigation with the use of augmented reality to support bedside placement of external ventricular drains. The very compact and accurate overall system comprises a patient tracker with an integrated camera, augmented reality glasses with a camera, and a puncture needle or pointer with two trackers, with which the patient's anatomy is registered. The exact position and orientation of the puncture needle are computed with the help of the recorded landmarks and displayed on the patient, visible to the surgeon through the augmented reality glasses. The methods for calibrating the static transformations between the patient tracker and its attached camera, and between the trackers of the puncture needle, are essential for accuracy and are presented here. The overall system was successfully tested in vitro, confirming the benefit of a peer-to-peer navigation system.
Facial image manipulation is a generation task in which the output face is shifted towards an intended target in terms of facial attributes and styles. Recent works have achieved great success in various editing techniques such as style transfer and attribute translation. However, current approaches either focus on pure style transfer or on the translation of predefined sets of attributes with restricted interactivity. To address this issue, we propose FacialGAN, a novel framework enabling simultaneous rich style transfer and interactive facial attribute manipulation. While preserving the identity of a source image, we transfer the diverse styles of a target image to the source image. We then incorporate the geometry information of a segmentation mask to provide fine-grained manipulation of facial attributes. Finally, a multi-objective learning strategy is introduced to optimize the loss of each specific task. Experiments on the CelebA-HQ dataset, with CelebAMask-HQ as semantic mask labels, show our model's capacity to produce visually compelling results in style transfer, attribute manipulation, diversity and face verification. For reproducibility, we provide an interactive open-source tool to perform facial manipulations, as well as the PyTorch implementation of the model.
In the field of network security, the detection of possible intrusions is an important task for preventing and analysing attacks. Machine learning has been adopted as a particular supporting technique over the last years. However, the majority of related published work uses post-mortem log files and fails to address the required real-time capabilities of network data feature extraction and machine learning based analysis [1-5]. We introduce the network feature extractor library FEX, which is designed to allow real-time feature extraction of network data. This library incorporates 83 statistical features based on reassembled data flows. The introduced Cython implementation allows processing individual packets within 4.58 microseconds. Based on the features extracted by FEX, existing intrusion detection machine learning models were examined with respect to their real-time capabilities. An identified Decision-Tree Classifier model was further optimised by transpiling it into C code. This reduced the prediction time of a single sample to 3.96 microseconds on average. Based on the feature extractor and the improved machine learning model, an IDS was implemented that supports a data throughput between 63.7 Mbit/s and 2.5 Gbit/s, making it a suitable candidate for a real-time, machine-learning based IDS.
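The transpilation step mentioned above works because a fitted decision tree is just a set of parallel arrays that a branch-only traversal walks; emitting that traversal as nested ifs in C removes interpreter and library overhead. A minimal illustration of the idea (the toy arrays below are made up; in scikit-learn the real ones live in `clf.tree_.feature`, `.threshold`, `.children_left`, `.children_right`):

```python
# Toy arrays encoding a 3-node tree over a 2-feature sample
# (hypothetical values, for illustration only).
feature = [0, -1, -1]        # split feature per node, -1 marks a leaf
threshold = [0.5, 0.0, 0.0]  # split threshold per node
left, right = [1, -1, -1], [2, -1, -1]
leaf_class = [None, 0, 1]    # predicted class at each leaf

def predict(x):
    """Branch-only tree traversal -- exactly what gets transpiled."""
    node = 0
    while feature[node] != -1:
        node = left[node] if x[feature[node]] <= threshold[node] else right[node]
    return leaf_class[node]

# The transpiler emits the same traversal as nested ifs, e.g. in C:
#   int predict(const double *x) {
#       if (x[0] <= 0.5) return 0;
#       else return 1;
#   }
print(predict([0.2, 0.0]), predict([0.9, 0.0]))  # -> 0 1
```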
Increasing power density causes increased self-generation of harmonics and intermodulation. As this leads to violations of the strict linearity requirements, especially for carrier aggregation (CA), the nonlinearity must be considered in the design process of RF devices. This raises the demand of accurate simulation models. Linear and nonlinear P-Matrix/COM models are used during the design due to their fast simulation times and accurate results. However, the finite element method (FEM) is useful to get a deeper insight in the device's nonlinearities, as the total field distributions can be visualized. The FE method requires complete sets of material tensors, which are unknown for most relevant materials in nonlinear micro-acoustics. In this work, we perform nonlinear FEM simulations, which allow the calculation of nonlinear field distributions of a lithium tantalate based layered SAW system up to third order. We aim at achieving good correspondence to measured data and determine the contributions of each material layer to the nonlinear signals. Therefore, we use approximations circumventing the issue of limited higher order tensor data. Experimental data for the third order nonlinearity is shown to validate the presented approach.
Taking responsibility in companies means accepting and dealing with uncertainty every day. For this reason, research and practice strive for theories, methodologies and methods to support this need. The now highly differentiated field of futures studies is shaped by epistemological assumptions that extend into its theoretical positions. Regarding these theoretical positions and the methodological frameworks, one must ask what they are aimed at: mere descriptions, or particular visions that are to be achieved? The choice of method, here specifically the scenario technique, is influenced by this as well. The frame is thus clear, but its concrete design has to be worked out. The aim of this contribution is to elaborate this differentiation and, finally, to illustrate it by means of two methods.
Interpreting seismic data requires the characterization of a number of key elements such as the position of faults and main reflections, presence of structural bodies, and clustering of areas exhibiting a similar amplitude versus angle response. Manual interpretation of geophysical data is often a difficult and time-consuming task, complicated by lack of resolution and presence of noise. In recent years, approaches based on convolutional neural networks have shown remarkable results in automating certain interpretative tasks. However, these state-of-the-art systems usually need to be trained in a supervised manner, and they suffer from a generalization problem. Hence, it is highly challenging to train a model that can yield accurate results on new real data obtained with different acquisition, processing, and geology than the data used for training. In this work, we introduce a novel method that combines generative neural networks with a segmentation task in order to decrease the gap between annotated training data and uninterpreted target data. We validate our approach on two applications: the detection of diffraction events and the picking of faults. We show that when transitioning from synthetic training data to real validation data, our workflow yields superior results compared to its counterpart without the generative network.
Within the research project GeoSpeicher.bw, several demonstration sites in Baden-Württemberg were intensively investigated and monitored by the project partners. The research results show that existing geothermal installations work well and that their operation also saves climate-damaging gas emissions. Unfortunately, despite proof of effective cost and CO2 savings, no demonstration project for an aquifer thermal energy store could be realised within the project, neither at the municipal hospital of Karlsruhe nor at Campus Nord of the Karlsruhe Institute of Technology (KIT).
Should aquifer storage technology become established in Baden-Württemberg, a demonstration project for a shallow low-temperature aquifer store would urgently have to be developed and funded. The basic conditions for such an aquifer store are in principle given at Campus Nord; this was clearly demonstrated by numerous investigations within GeoSpeicher.bw.
We demonstrate how to exploit group sparsity in order to bridge the areas of network pruning and neural architecture search (NAS). This results in a new one-shot NAS optimizer that casts the problem as a single-level optimization problem and does not suffer any performance degradation from discretizing the architecture.
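The group-sparsity mechanism behind such pruning/NAS bridges can be sketched as a group-lasso penalty with a proximal update: all weights of one candidate operation form a group, and once a group's norm is driven exactly to zero, that operation is pruned from the architecture. This is a generic illustration of the technique, not the paper's specific optimizer; all values are made up:

```python
import numpy as np

def group_lasso_penalty(groups, lam=1e-2):
    """Sum of L2 norms over weight groups; unlike plain L1, it drives
    whole groups (e.g. one candidate operation) to exactly zero."""
    return lam * sum(np.linalg.norm(g) for g in groups)

def prox_step(groups, lr, lam=1e-2):
    """Proximal operator of the group-lasso term: shrink each group's
    norm by lr * lam, zeroing the group once its norm falls below it."""
    out = []
    for g in groups:
        n = np.linalg.norm(g)
        scale = max(0.0, 1 - lr * lam / n) if n > 0 else 0.0
        out.append(scale * g)
    return out

# Two candidate operations: the one with tiny weights is pruned outright.
ops = [np.array([0.004, -0.003]), np.array([0.8, -0.5])]
ops = prox_step(ops, lr=1.0)
print([bool(np.allclose(g, 0)) for g in ops])  # -> [True, False]
```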
This handbook provides managers in leading positions in companies and organisations with a well-founded overview of the success-relevant aspects of strategic and operational marketing controlling. Researchers and students will likewise find valuable suggestions.
Renowned authors from academia and practice present the proven instruments of marketing controlling as well as the many new possibilities, for instance in online marketing and e-commerce. The contributions address current topics such as foresight management and customer experience, and new methods of data acquisition, analysis and preparation.
These developments, together with the established instruments, serve to analyse, evaluate and optimise the effectiveness and efficiency of an organisation's marketing activities.
The fifth edition has been completely revised and extended with new contributions on current topics.