Different stimulation timing in bimodal fitting with hearing aid and cochlear implant
(2023)
Bimodal fitting of patients with a hearing aid (HA) ipsilaterally and a cochlear implant (CI) contralaterally for asymmetric hearing loss is, owing to its many inherent variables, the most complicated form of provision in the context of CI treatment. This review article presents all systematic interaural differences between electrical and acoustic stimulation that can occur with this type of fitting. In addition, methods are presented for quantifying the interaural latency offset, i.e. the time difference between the acoustic and the electrical stimulation of the auditory nerve, by recording auditory evoked potentials elicited by acoustic or electrical stimulation and by measurements on the speech processors and hearing aids. The technical compensation of the interaural latency offset and its positive effect on the sound localization ability of patients fitted bimodally with CI and HA is also described. Finally, the latest findings are discussed, which explain why compensating the interaural latency offset does not improve speech understanding in noise for bimodally fitted CI/HA users.
Learning programming fundamentals is considered one of the most challenging and complex learning activities. Some authors have proposed visual programming language (VPL) approaches to address part of this inherent complexity [1]. A visual programming language lets users develop programs by combining program elements, such as loops, graphically rather than by specifying them textually. Visual expressions and spatial arrangements of text and graphic symbols are used either as syntax elements or as secondary notation. VPLs are typically used for educational multimedia, video games, system development, and data warehousing/business analytics. For example, Scratch, a platform developed at the Massachusetts Institute of Technology, is designed for kids and after-school programs.
Designing mobile software applications is considered one of the most challenging application domains due to the built-in sensors of a mobile device, such as GPS, camera, or Near Field Communication (NFC). Sensors enable the creation of context-aware mobile applications that can discover and take advantage of contextual information, such as the user's location, nearby people and objects, and the current user activity. As a consequence, context-aware mobile applications can sense clues about the situational environment, making mobile devices more intelligent, adaptive, and personalized. Such context-aware mobile applications seem to be motivating and attractive case studies, especially for programming beginners ("my own first app").
In this work, we introduce a use-case-centered approach together with a clear separation of user interface design and sensor-based program development. We provide an in-depth discussion of a new VPL-based teaching method, a step-by-step development process that enables programming beginners to create context-aware mobile applications. Finally, we argue that addressing the challenges faced by programming beginners with our teaching approach could make programming instruction more motivating, with an additional impact on the final software quality and scalability.
The key contributions of our study are the following:
- An overview of existing attempts to use VPL approaches for mobile applications
- A use-case-centered teaching approach based on a clear separation of user interface design and sensor-based program development
- A teaching case study enabling beginners to create context-aware mobile applications step by step, based on the MIT App Inventor (a platform of the Massachusetts Institute of Technology)
- Open research challenges and perspectives for further development of our teaching approach
References:
[1] Idrees, M., Aslam, F. (2022). A Comprehensive Survey and Analysis of Diverse Visual Programming Languages. VFAST Transactions on Software Engineering, 10(2), 47–60.
Visual programming languages (VPLs) let users develop software programs by combining visual program elements, such as lists of objects, loops, or conditional statements, rather than by specifying them textually.
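As an illustration of the VPL idea described above, the following sketch (not taken from any of the papers; the block names are invented) represents a block-based program as nested data, the way a visual editor might serialize stacked blocks, and then interprets it:

```python
# Illustrative sketch: a block-based "visual" program represented as nested
# data, as a VPL editor might serialize it. Block kinds ("set", "say",
# "repeat") are hypothetical, not from the papers above.

def run(block, env, output):
    """Interpret one program block against a variable environment."""
    kind = block["kind"]
    if kind == "set":                      # assign a variable
        env[block["var"]] = block["value"]
    elif kind == "say":                    # emit the value of a variable
        output.append(env[block["var"]])
    elif kind == "repeat":                 # loop block with nested children
        for _ in range(block["times"]):
            for child in block["body"]:
                run(child, env, output)
    return output

# A loop that "says" a greeting three times, assembled like stacked blocks:
program = {"kind": "repeat", "times": 3,
           "body": [{"kind": "say", "var": "msg"}]}
out = run(program, {"msg": "hello"}, [])
```

The nested-dictionary form mirrors how graphical blocks nest inside one another; textual syntax errors become impossible because only well-formed blocks can be assembled.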
Humanoid robot programming is a very attractive and motivating application domain for students, especially programming beginners. Humanoid robots are constructed to mimic the human body, using actuators that perform like muscles. Typically, a humanoid robot consists of sensors and actuators distributed over a torso, a head, two arms, and two legs, though some humanoid robots may replicate only part of the body, for example from the waist up. In some cases, humanoid robots are equipped with heads designed to replicate additional human facial features such as eyes. A robot also needs additional sensors to gather information about the conditions of its environment, allowing it to make the necessary decisions about its position or about actions that a situation requires, e.g. an arm movement or an open/close hand action. Other examples of sensors are reflective infrared sensors used to detect objects in proximity.
In this work, we introduce a use-case-centered approach based on the sensors and actuators of a robot and a workflow model to visually describe the sequence of actions, including conditional and concurrent actions. We provide an in-depth discussion of a new VPL-based teaching method for programming humanoid robots. Open research challenges, limits, and perspectives for the further development of our teaching approach are discussed as well.
The main advantage of mobile context-aware applications is that they provide effective, tailored services by considering the environmental context, such as location, time, nearby objects, and other data, and by adapting their functionality to changing situations without explicit user interaction. The idea behind Location-Based Services (LBS) and Object-Based Services (OBS) is to offer fully customizable services for user needs according to the location or the objects in a mobile user's vicinity. However, developing mobile context-aware software applications is considered one of the most challenging application domains due to the built-in sensors of a mobile device. Visual Programming Languages (VPLs) and hybrid visual programming languages are considered innovative approaches to address the inherent complexity of developing programs. The key contribution of our new development approach for location- and object-based mobile applications is a use-case-driven development approach based on use case templates and visual code templates that enables even programming beginners to create context-aware mobile applications. An example of the use of the development approach is presented, and open research challenges and perspectives for the further development of our approach are formulated.
Sensors and actuators enable the creation of context-aware applications that can discover and take advantage of contextual information, such as user location and nearby people and objects. In this work, we use a general context definition that can be applied to various devices, e.g., robots and mobile devices. Developing context-based software applications is considered one of the most challenging application domains due to the sensors and actuators that are part of a device. We introduce a new development approach for context-based applications using use-case descriptions and Visual Programming Languages (VPLs). The introduction of web-based VPLs, such as Scratch and Snap, has reinvigorated the usefulness of VPLs. We provide an in-depth discussion of our new VPL-based method, a step-by-step development process that enables the development of context-based applications. Two case studies illustrate how to apply our approach to different problem domains: context-based mobile apps and context-based humanoid robot applications.
Grundzüge der Strömungslehre
(2023)
This well-established textbook presents the fundamentals of fluid mechanics in a concise and mathematically accessible form. Exercises with solutions help readers apply the material correctly and deepen their understanding. The book is suitable for accompanying and deepening lectures on fluid mechanics as well as for self-study. The present edition addresses the ever-growing role of energy management and thus reflects current developments. Up-to-date fluid mechanics exercises have been added, and numerous examples illustrate the energy equation.
This article provides an overview of the legal framework for website marketing. The presentation of the numerous legal provisions, which are spread over several areas of law, is oriented towards business challenges and measures. After placing the website in the context of marketing, the article focuses on the legal framework relating to the establishment, design, and operation of a website. If, in addition to its communication function, a website also has a sales function, i.e. in e-commerce (online trade), additional specific legal conditions must be taken into account.
Against the background of consumers' increasing information and stimulus overload, content tailored to the target group is becoming ever more important from a company's perspective, particularly for achieving communication objectives. Ensuring such content requires sensible planning, production, and distribution. This article provides an overview of such a process and illustrates the steps necessary for successful content marketing.
Mathematics can be found in many objects, be it the linear slope of a handrail leading to a school building or the nearly cylindrical shape of an advertising column in the city center. The ambition to let students discover these connections is at the heart of the MathCityMap project (Ludwig et al., 2013). On so-called mathematical trails (math trails), an app guides students to mathematics problems posed at real objects or in real situations in their environment. To solve the problems, data must be collected, e.g. by measuring or counting. Crucially, the tasks are posed in such a way that the data collection step can only take place on site and is thus directly linked to the object or situation.
Footwear plays a critical role in our daily lives, affecting our performance, health and overall well-being. Well-designed footwear can provide protection, comfort and improved foot functionality, while poorly designed footwear can lead to mobility problems and declines in physical activity. The overall goal of footwear research is to provide a scientific basis for professionals in the field to provide an optimal footwear solution for a given person, for a given task, in a given environment, while using sustainable manufacturing processes. This article suggests potential directions for future research with a focus on athletic footwear biomechanics. Directions include the evidence-based individualisation of footwear, the interaction between design and prolonged use, and improving the sustainability of footwear. The authors also provide a speculative outlook on methodological developments that may provide greater insight into these areas. These developments may include: (1) the use of larger scale, real-world and representative data, (2) the use of 3D printing to create experimental footwear, (3) the advancement of in silico research methods, and (4) furthering multidisciplinary collaboration. If successfully applied in the future, footwear research will contribute to active and healthy lifestyles across the lifespan.
Motion analysis systems for research and for orthopedists in private practice
(2023)
Background
Complex biomechanical motion analyses can provide important information for a wide range of orthopedic questions. When procuring motion analysis systems, spatial and temporal constraints as well as requirements for the qualification of the measurement staff must be considered in addition to the classic measurement quality criteria (validity, reliability, objectivity).
Application
Complex motion analysis employs systems for determining kinematics, kinetics, and muscle activity (electromyography). This article provides an overview of methods of complex biomechanical motion analysis for use in orthopedic research or in individual patient care. Beyond pure motion analysis, the use of motion analysis methods for biofeedback training is also discussed.
Procurement
For the actual purchase of motion analysis systems, it is advisable to contact professional societies (e.g. the German Society of Biomechanics), universities with existing motion analysis facilities, or distributors in the field of biomechanics.
High-tech running shoes and spikes ("super footwear") are currently being debated in sports. There is direct evidence that super distance running shoes improve running economy; however, it is not well established to what extent world-class performances are affected across the range of track and road running events.
This study examined publicly available performance datasets of annual best track and road performances for evidence of potential systematic performance effects following the introduction of super footwear. The analysis was based on the 100 best performances per year for men and women in outdoor events from 2010 to 2022, provided by the world governing body of athletics (World Athletics).
We found evidence of progressing improvements in track and road running performances after the introduction of super distance running shoes in 2016 and super spike technology in 2019. This evidence is more pronounced for distances longer than 1500 m in women and longer than 5000 m in men. Women seem to benefit more from super footwear in distance running events than men.
While the observational study design limits causal inference, this study provides a database on potential systematic performance effects following the introduction of super shoes/spikes in track and road running events in world-class athletes. Further research is needed to examine the underlying mechanisms and, in particular, potential sex differences in the performance effects of super footwear.
Printed circuit boards (PCBs) are a foundation of electronic devices in modern society. The fabrication of these boards requires various processes and machines. Using a robot with multiple tools can shorten the process chain compared to screen printing. In this paper, a system is presented that utilises an industrial six-axis robot to manufacture PCBs. The process flow and the conversion of the Gerber format into robot-specific commands are presented. The advantages and challenges of applying a robot to print circuits are discussed.
Ensuring that software applications present their users with the most recent version of the data is not trivial. Self-adjusting computation is a technique for automatically and efficiently recomputing output data whenever some input changes.
This article describes the software architecture of a large, commercial software system built around a framework for coarse-grained self-adjusting computations in Haskell. It discusses advantages and disadvantages based on long-term experience. The article also presents a demo of the system and explains the API of the framework.
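The self-adjusting idea can be sketched in a few lines. This is an illustrative Python analogue, not the Haskell framework's API: derived values track their inputs and are recomputed only when a tracked input actually changes.

```python
# Minimal sketch of coarse-grained self-adjusting computation (illustrative,
# not the framework described in the article): derived cells cache their
# result and recompute lazily only after a tracked input changes.

class Input:
    def __init__(self, value):
        self.value = value
        self.dependents = []       # derived cells reading this input

    def set(self, value):
        if value != self.value:    # unchanged writes propagate nothing
            self.value = value
            for d in self.dependents:
                d.invalidate()

class Derived:
    def __init__(self, fn, *inputs):
        self.fn, self.inputs = fn, inputs
        self.dirty = True
        self.cached = None
        self.recomputes = 0
        for i in inputs:
            i.dependents.append(self)

    def invalidate(self):
        self.dirty = True

    def get(self):                 # recompute only when marked dirty
        if self.dirty:
            self.cached = self.fn(*(i.value for i in self.inputs))
            self.recomputes += 1
            self.dirty = False
        return self.cached

a, b = Input(2), Input(3)
total = Derived(lambda x, y: x + y, a, b)
first = total.get()                # computes once
again = total.get()                # served from the cache
a.set(10)                          # input change marks 'total' dirty
updated = total.get()              # recomputed exactly once more
```

The coarse granularity in the sketch is per derived cell; the article's framework applies the same change-propagation idea at the level of larger computation units.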
PROFINET Security: A Look on Selected Concepts for Secure Communication in the Automation Domain
(2023)
We provide a brief overview of the cryptographic security extensions for PROFINET, as defined and specified by PROFIBUS & PROFINET International (PI). These come in three hierarchically defined Security Classes, called Security Classes 1, 2, and 3. Security Class 1 provides basic security improvements with moderate implementation impact on PROFINET components. Security Classes 2 and 3, in contrast, introduce integrated cryptographic protection of PROFINET communication. We first highlight and discuss the security features that the PROFINET specification offers for future PROFINET products. Then, as our main focus, we take a closer look at some of the technical challenges that were faced during the conceptualization and design of the Security Class 2 and 3 features. In particular, we elaborate on how secure application relations between PROFINET components are established and how disruption-free availability of a secure communication channel is guaranteed despite the need to refresh cryptographic keys regularly. The authors are members of the PI Working Group CB/PG10 Security.
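The disruption-free key refresh mentioned above can be illustrated with a generic key-rollover pattern. This is a hedged sketch of the general idea, not the normative PROFINET procedure: a receiver accepts both the current and the pre-provisioned next key during a changeover window, so no frame is rejected mid-refresh.

```python
# Generic key-rollover sketch (illustrative, not the PROFINET specification):
# the next key is distributed in advance, both keys are accepted during the
# changeover window, and the old key is retired only after activation.

class KeyRing:
    def __init__(self, current):
        self.current = current
        self.next = None

    def provision(self, new_key):   # distribute the next key ahead of time
        self.next = new_key

    def activate(self):             # switch once all peers hold the new key
        self.current, self.next = self.next, None

    def accepts(self, key_id):      # frames under either key remain valid
        return key_id is not None and key_id in (self.current, self.next)

ring = KeyRing("key-1")
ring.provision("key-2")
during = (ring.accepts("key-1"), ring.accepts("key-2"))  # both accepted
ring.activate()
after = (ring.accepts("key-1"), ring.accepts("key-2"))   # old key retired
```

The overlap window is what makes the refresh disruption-free: there is no instant at which a correctly keyed frame from either side would be dropped.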
Complex tourism products with intangible service components are difficult to explain to potential customers. This research elaborates on the use of virtual reality (VR) in the field of shore excursions. A theoretical research model based on the technology acceptance model (TAM) was developed, and hypotheses were proposed. Cruise passengers were invited to test 360° excursion images on a landing page. Data were collected using an online questionnaire and analyzed using the PLS-SEM method. The results provide theoretical implications for TAM research in the field of cruise tourism. Furthermore, the results and implications indicate the potential of virtual 360° shore excursion presentations for the cruise industry.
Optimization of energetic refurbishment roadmaps for multi-family buildings utilizing heat pumps
(2023)
A novel methodology for calculating optimized refurbishment roadmaps is developed in this paper. The aim of the roadmaps is to determine when, how, and to what extent each component of the building envelope and heat generation system should be refurbished to achieve the lowest net present value. The integrated optimization approach couples a particle swarm optimization algorithm with a dynamic building simulation of the building envelope and the heat supply system. Due to the free selection of implementation times and refurbishment depth, the optimization method achieves the lowest net present value and a high CO2 reduction and is therefore an important contribution to achieving climate neutrality in the building stock.
The method is applied to an exemplary multi-family house built in 1970. In comparison to a standard refurbishment roadmap, cost savings of 6–16 % and CO2 savings of 6–59 % are possible. The sensitivity of the refurbishment roadmap measures is analyzed on the basis of a parametric analysis. Robust optimization results are obtained with a mean refurbishment level of approx. 50 kWh/m2/a of the building envelope. The preferred heat generation system is a bivalent brine heat pump system with 70 % of the heat load covered by the electric heat pump.
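The particle-swarm component of such an optimization can be sketched on a toy problem. The cost function below is invented for illustration (a quadratic with its minimum at a 50 kWh/m2/a refurbishment level) and is not the paper's net-present-value model or building simulation:

```python
import random

# Illustrative particle swarm optimization over a one-dimensional
# "refurbishment level" (kWh/m2/a). The cost function is a stand-in for the
# paper's coupled net-present-value / building-simulation objective.

def toy_cost(level):
    return (level - 50.0) ** 2 + 100.0      # hypothetical minimum at 50

def pso(cost, lo, hi, n=20, iters=60, seed=1):
    rng = random.Random(seed)
    pos = [rng.uniform(lo, hi) for _ in range(n)]
    vel = [0.0] * n
    best = list(pos)                         # per-particle best positions
    gbest = min(pos, key=cost)               # swarm-wide best position
    for _ in range(iters):
        for i in range(n):
            r1, r2 = rng.random(), rng.random()
            vel[i] = (0.7 * vel[i]
                      + 1.5 * r1 * (best[i] - pos[i])   # cognitive pull
                      + 1.5 * r2 * (gbest - pos[i]))    # social pull
            pos[i] = min(hi, max(lo, pos[i] + vel[i]))  # stay in bounds
            if cost(pos[i]) < cost(best[i]):
                best[i] = pos[i]
                if cost(pos[i]) < cost(gbest):
                    gbest = pos[i]
    return gbest

optimum = pso(toy_cost, 10.0, 120.0)
```

In the paper's setting, each cost evaluation would instead run the dynamic building simulation over the candidate roadmap, which is why an evaluation-efficient metaheuristic such as PSO is attractive.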
Redesigning a curriculum for teaching media technology is a major challenge. Up-to-date teaching and learning concepts are necessary that keep pace with constant technological progress and prepare students specifically for their professional lives. Teaching and studying should be characterized by a student-oriented teaching and learning culture. In order to achieve this goal, consistent evaluation is essential. The aim of the evaluation concept presented here is to generate structured information about the quality of content-related, didactic, and organizational aspects of teaching. The exchange of opinions between students and lecturers should be encouraged in order to continuously improve teaching and learning processes.
Skin cancer detection proves to be complicated and highly dependent on the examiner's skills. Millimeter-wave technologies seem to be a promising aid for the detection of skin cancer. The different water content of a skin area affected by cancer, compared to healthy skin, changes its reflective properties. Due to the limited available data on the dielectric properties of skin cancer, especially in comparison to the surrounding healthy skin, accurate simulations and evaluations are quite challenging, and comparing results across different approaches and starting points can be difficult. In this paper, the Effective Medium Theory is applied to model skin cancer, providing permittivity values dependent on the water content.
A method for evaluating skin cancer detection based on millimeter-wave technologies is presented. For this purpose, the relative permittivities of the benign and the cancerous lesion are calculated using the Effective Medium Theory, considering the change in water content between them. These calculated relative permittivities are then used for the simulation and evaluation of skin cancer detection using a substrate-integrated waveguide probe. In the best case, a difference of up to 13 dB in the simulated scattering parameter S11 between healthy and cancerous skin can be determined.
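The effective-medium step can be illustrated with the Maxwell Garnett mixing rule; the abstract does not name the exact rule used, and the permittivity values below are placeholders rather than measured skin data:

```python
# Illustrative effective-medium calculation using the Maxwell Garnett mixing
# rule (one common EMT formula; the paper's exact rule is not stated in the
# abstract). Permittivities and volume fractions are hypothetical.

def maxwell_garnett(eps_host, eps_incl, f):
    """Effective permittivity of inclusions (volume fraction f) in a host."""
    num = eps_incl + 2 * eps_host + 2 * f * (eps_incl - eps_host)
    den = eps_incl + 2 * eps_host - f * (eps_incl - eps_host)
    return eps_host * num / den

eps_dry = 3.0      # hypothetical low-water tissue matrix
eps_water = 15.0   # hypothetical water permittivity at mm-wave frequencies
healthy = maxwell_garnett(eps_dry, eps_water, 0.30)  # lower water content
lesion = maxwell_garnett(eps_dry, eps_water, 0.50)   # higher water content
```

The higher water fraction of the lesion raises its effective permittivity relative to healthy skin, which is the contrast the simulated S11 difference exploits.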
Most websites on the World Wide Web are built with web content management systems (WCMS). This article presents the goals, principles, functions, and architecture of these important infrastructure systems for our daily online life and work.
After defining the basic terms, the core principles are explained: the content life cycle and the separation of content, structure, and presentation. This is followed by a presentation of the important components of a WCMS and their functionality, such as asset management, user administration, and the workflow engine. Finally, an insight into current WCMS developments is given.
The importance of machine learning (ML) has been increasing dramatically for years. From assistance systems to production optimisation to healthcare support, almost every area of daily life and industry is coming into contact with machine learning. Besides all the benefits ML brings, the lack of transparency and the difficulty of establishing traceability pose major risks. While solutions exist to make the training of machine learning models more transparent, traceability remains a major challenge. Ensuring the identity of a model is another challenge, as unnoticed modification of a model is also a danger when using ML. This paper proposes creating an ML Birth Certificate and an ML Family Tree secured by blockchain technology. Important information about training and about changes to the model through retraining can be stored in a blockchain and accessed by any user, creating more security and traceability for an ML model.
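The birth-certificate idea can be sketched as a hash chain: each training or retraining event is linked to its predecessor by a hash, so later modification of any record invalidates the chain. The field names below are illustrative, not a specification from the paper:

```python
import hashlib
import json

# Hedged sketch of an "ML Birth Certificate / Family Tree" as a hash chain.
# In the paper this lineage would live on a blockchain; here a plain list
# stands in for the ledger. Record fields are invented for illustration.

def record_hash(payload):
    return hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()).hexdigest()

def append_event(chain, event):
    """Append a training/retraining event, chained to its predecessor."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    record = {"event": event, "prev": prev,
              "hash": record_hash({"event": event, "prev": prev})}
    chain.append(record)
    return chain

def verify(chain):
    """Recompute every hash; any tampered record breaks the chain."""
    prev = "0" * 64
    for rec in chain:
        if rec["prev"] != prev:
            return False
        if rec["hash"] != record_hash({"event": rec["event"],
                                       "prev": rec["prev"]}):
            return False
        prev = rec["hash"]
    return True

chain = []
append_event(chain, {"kind": "birth", "dataset": "v1", "acc": 0.91})
append_event(chain, {"kind": "retrain", "dataset": "v2", "acc": 0.93})
ok = verify(chain)
chain[0]["event"]["acc"] = 0.99      # tamper with the recorded accuracy
tampered_ok = verify(chain)
```

A blockchain additionally replicates and orders such records across nodes; the hash linkage shown here is the property that makes unnoticed modification detectable.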
As industrial networks continue to expand and connect more devices and users, they face growing security challenges such as unauthorized access and data breaches. This paper delves into the crucial role of security and trust in industrial networks and how trust management systems (TMS) can mitigate malicious access to these networks. The TMS presented in this paper leverages distributed ledger technology (blockchain) to evaluate the trustworthiness of blockchain nodes, including devices and users, and to make access decisions accordingly. While this approach is applied to blockchain here, it can also be extended to other areas. It can help prevent malicious actors from penetrating industrial networks and causing harm. The paper also presents the results of a simulation that demonstrates the behavior of the TMS and provides insights into its effectiveness.
The main focus of this chapter is the theoretical and instrumental processes that underpin densitometric methods widely used in thin-layer chromatography (TLC). Densitometric methods include UV–vis, luminescence, and fluorescence optical measurements as well as infrared and Raman spectroscopic measurements. The chapter is divided into two general parts: a theoretical part and a practical part. Systems for direct radioactivity measurements and the combination of TLC with mass spectrometry are also discussed. All these systems allow measuring an intensity distribution directly on a TLC plate. We call this "in situ detection" because no analyte is removed from the plate.
Hot forging dies are subjected to high cyclic thermo-mechanical loads. In critical areas, the occurring stresses can exceed the material's yield limit. Additionally, loading at high temperatures leads to thermal softening of the martensitic materials used. These effects can result in early crack initiation and unexpected failure of the dies, usually described as thermo-mechanical fatigue (TMF). In previous works, a temperature-dependent cyclic plasticity model for the martensitic hot forging tool steel 1.2367 (X38CrMoV5-3) was developed and implemented in the finite element (FE) software Abaqus. However, in the forging industry, application-specific software is usually used to ensure cost-efficient numerical process design. Therefore, a new implementation for the FE software Simufact Forming 16.0 is presented in this work. The results are compared and validated against the original implementation by means of a numerical compression test, and a cyclic simulation is calculated with Simufact Forming.
Wireless communication networks are crucial for enabling megatrends like the Internet of Things (IoT) and Industry 4.0. However, testing these networks can be challenging due to the complex network topology and RF characteristics, requiring a multitude of scenarios to be tested. To address this challenge, the authors developed and extended an automated testbed called Automated Physical TestBed (APTB). This testbed provides the means to conduct controlled tests, analyze coexistence, emulate multiple propagation paths, and model dependable channel conditions. Additionally, the platform supports test automation to facilitate efficient and systematic experimentation. This paper describes the extended architecture, implementation, and performance evaluation of the APTB testbed. The APTB testbed provides a reliable and efficient solution for testing wireless communication networks under various scenarios. The implementation and performance verification of the testbed demonstrate its effectiveness and usefulness for researchers and industry practitioners.
CNN-based deep learning models for disease detection have become popular recently. We compared the binary classification performance of eight prominent deep learning models: DenseNet121, DenseNet169, DenseNet201, EfficientNet-b0, EfficientNet-lite4, GoogLeNet, MobileNet, and ResNet18 on a combined pulmonary chest X-ray dataset. Despite their widespread application to medical images in different fields, a knowledge gap remains in determining their relative performance when applied to the same dataset, a gap this study aimed to address. The dataset combined data from Shenzhen, China (CH) and Montgomery, USA (MC). We trained each model for binary classification, calculated different metrics for the models mentioned, and compared them. All models were trained with the same training parameters to maintain a controlled comparison environment. At the end of the study, we found distinct performance differences among the models when applied to the pulmonary chest X-ray image dataset, with DenseNet169 achieving 89.38 percent and MobileNet 92.2 percent precision.
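The reported metric can be reproduced in miniature. This sketch computes binary-classification precision from predictions and ground-truth labels; the toy data are made up, not the study's results:

```python
# Binary-classification precision from predictions and labels (illustrative;
# the labels/predictions below are invented, not from the chest X-ray study).

def precision(y_true, y_pred, positive=1):
    """Precision = TP / (TP + FP) for the chosen positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred)
             if p == positive and t == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred)
             if p == positive and t != positive)
    return tp / (tp + fp) if (tp + fp) else 0.0

labels = [1, 1, 1, 0, 0, 0, 1, 0]   # 1 = diseased, 0 = healthy (toy data)
preds  = [1, 1, 0, 0, 1, 0, 1, 0]
p = precision(labels, preds)        # 3 true positives, 1 false positive
```

Computing the same metric for every model over an identical test split is what makes a comparison like the one above controlled.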
In recent times, 5G has found applications in several public as well as private networks. There is a growing need to make it compatible with diverse services without compromising security. The current security options for authenticating devices into a home network are 5G Authentication and Key Agreement (5G-AKA) and Extensible Authentication Protocol (EAP)-AKA'. However, for specific use cases such as private networks, more customizable and convenient authentication mechanisms are required. Current mobile networks authenticate based only on SIM cards, but as 5G is applied in fields like IIoT and automation, even in Non-Public Networks (NPNs), a simpler method of authentication is needed. Certificate-based authentication is one such mechanism: it is passwordless and works solely on the information present in the digital certificate that the user holds. This paper proposes an authentication mechanism that performs certificate-based mutual authentication between the UE and the home network. The proposed concept identifies both the user and the network with digital certificates and intends to carry out primary authentication with them. In this work, we study presently available authentication protocols for 5G networks, both theoretically and experimentally, in hardware as well as virtual environments. On the basis of this analysis, a series of proposed steps for certificate-based primary authentication is presented.
As cyber-attacks and functional safety requirements increase in Operational Technology (OT), implementing security measures becomes crucial. The IEC/IEEE 60802 draft standard addresses security convergence in Time-Sensitive Networking (TSN) for industrial automation. We present the standard's security architecture and its goals of establishing end-to-end security with resource access authorization in OT systems. We compare the standard to our abstract, technology-independent model for the management of cryptographic credentials during the lifecycles of OT systems. Additionally, we implemented the processes, mechanisms, and protocols needed for IEC/IEEE 60802 and extended the architecture with public key infrastructure (PKI) functionalities to support complete security management processes.
In recent years, predictive maintenance tasks, especially for bearings, have become increasingly important. Solutions for these use cases concentrate on the classification of faults and the estimation of the Remaining Useful Life (RUL). As of today, these solutions suffer from a lack of training samples. In addition, these solutions often require high-frequency accelerometers, incurring significant costs. To overcome these challenges, this research proposes a combined classification and RUL estimation solution based on a Convolutional Neural Network (CNN) and a Long Short-Term Memory (LSTM) network. This solution relies on a hybrid feature extraction approach, making it especially appropriate for low-cost accelerometers with low sampling frequencies. In addition, it uses transfer learning to be suitable for applications with only a few training samples.
Frequent short-term orders for manufactured products require high machine availability. This requirement increases the importance of predictive maintenance solutions for the bearings used in machines. Among these are hybrid solutions that rely on a physical model; using them requires knowledge of the different degradation stages of bearings. To provide this knowledge, this research analyzes the underlying failure mechanisms of these stages, both theoretically and in a practical example based on the well-known FEMTO dataset used for the IEEE PHM 2012 Data Challenge. It also shows for which use cases low-frequency accelerometers are sufficient. The analysis shows that degradation stages toward the end of bearing life can also be detected with low-frequency accelerometers, and it highlights the importance of high-frequency accelerometers for detecting bearing faults in early degradation stages. Industry and research have so far paid little attention to these aspects, despite their considerable cost-saving potential.
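The late-stage detection described above can be sketched as a windowed RMS comparison against a healthy baseline; the stage names and thresholds below are illustrative assumptions, not values from the FEMTO analysis:

```python
import math

def degradation_stages(signal, window, healthy_rms, warn=2.0, fail=5.0):
    """Label each non-overlapping window of a vibration signal by the
    ratio of its RMS to a healthy baseline. Thresholds are illustrative."""
    stages = []
    for i in range(0, len(signal) - window + 1, window):
        w = signal[i:i + window]
        rms = math.sqrt(sum(x * x for x in w) / window)
        ratio = rms / healthy_rms
        if ratio >= fail:
            stages.append("failure")
        elif ratio >= warn:
            stages.append("degrading")
        else:
            stages.append("healthy")
    return stages
```

Because broadband RMS rises sharply near end of life, this kind of trend check works even with low-frequency accelerometers, whereas early-stage impulses require higher sampling rates to resolve.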
Many training courses, classes, and continuing-education programs use presentations to convey training-relevant content. Often, however, these are not designed in an engaging and effective way, as evidenced, for example, by an excess of text. As an alternative, the authors present a visualized preparation of content. The goal is to condense complex subject matter into simple images and sketches. With the help of the methods presented, exercises can, for example, be prepared more efficiently, operations can be captured at a glance, and everyday situations can be communicated more simply.
Printed electronics can add value to existing products by providing new smart functionalities, such as sensing elements over large areas on flexible or non-conformal surfaces. Here we present a hardware concept and prototype for a thinned ASIC integrated with an inkjet-printed temperature sensor alongside built-in additional security and unique identification features. The hybrid system exploits the advantages of inkjet-printable platinum-based sensors, physically unclonable function circuits, and a fluorescent particle-based coating as a tamper protection layer.
Gamification is used in many areas, including the education sector, to increase motivation and performance. This paper describes the design, implementation, and evaluation of a gamification concept for the "Software Engineering" lecture at Offenburg University. According to the instructors' intention, gamification should encourage continuous and deeper engagement with the lecture topics and have a positive influence on student motivation in order to support the learning process. Central to the gamification design are voluntary participation, the perceived relevance of the learning content, and a goal-oriented use of gamification elements. The concept was implemented in the Moodle learning platform, deployed over three semesters, and evaluated in parallel. The results of these evaluations show that students used the gamified course intensively, often throughout the entire semester, and completed a large number of exercises on their own initiative.
This article examines the psychological background and mechanisms of content marketing. After a brief introduction to the topic, the psychological basics required for further understanding are presented. Building on this, the general mechanisms of content marketing are examined. For the last two chapters, the perspective is reversed: the psychological factors described are used to support practitioners in selecting content-marketing topics and, finally, in their concrete design.
Most of the effects produced by content marketing work, in both the B2C and B2B sectors, by addressing the needs, interests, and emotions of the audience and by exploiting their relatively free decision-making. In the B2B sector, the people addressed likewise have needs, interests, and emotions, but these are primarily of a professional nature, so minor differences in design are required.
Artificial intelligence (AI) is permeating our lives ever more deeply. Students are increasingly confronted with AI applications in everyday life and at universities. Offenburg University is therefore anchoring AI-related courses in its curricula to support students in acquiring AI competence.
This paper presents a concept for developing courses based on the idea of educational making to promote AI competence in higher education. The concept is made concrete with a module on chatbots, whose teaching content is developed interdisciplinarily from different perspectives.
With the rising necessity of explainable artificial intelligence (XAI), we see an increase in task-dependent XAI methods at varying abstraction levels. XAI techniques explain model behavior on a global level and sample predictions on a local level. We propose a visual analytics workflow that supports seamless transitions between global and local explanations, focusing on attributions and counterfactuals for time series classification. In particular, we adapt local XAI techniques (attributions) developed for traditional data types (images, text) to analyze time series classification, a data type that is typically less intelligible to humans. To generate a global overview, we apply local attribution methods to the data, creating explanations for the whole dataset. These explanations are projected onto two dimensions, depicting model behavior trends, strategies, and decision boundaries. To further inspect the model's decision-making as well as potential data errors, a what-if analysis facilitates hypothesis generation and verification on both the global and local levels. We continuously collected and incorporated expert user feedback, as well as insights based on their domain knowledge, resulting in a tailored analysis workflow and system that tightly integrates time series transformations into explanations. Lastly, we present three use cases verifying that our technique enables users to (1) explore data transformations and feature relevance, (2) identify model behavior and decision boundaries, and (3) identify the reasons for misclassifications.
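As a minimal stand-in for the local attribution step (the actual workflow adapts established attribution methods), an occlusion-style attribution for a generic time-series scoring function might look like this:

```python
def occlusion_attribution(series, score, baseline=0.0):
    """Attribute a model score to individual time steps by occlusion:
    replace each step with a baseline value and record the drop in the
    score. `score` is any callable mapping a series to a number; this is
    an illustrative stand-in, not the system's actual attribution method."""
    ref = score(series)
    attributions = []
    for i in range(len(series)):
        occluded = list(series)
        occluded[i] = baseline      # mask one time step
        attributions.append(ref - score(occluded))
    return attributions
```

Collecting such per-sample attribution vectors over a whole dataset yields exactly the kind of high-dimensional explanation data that the workflow then projects onto two dimensions for the global overview.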
Material flow simulation is a core technology of Industry 4.0. It can analyze and improve large-scale production systems through experimentation with digital simulation models. However, modeling in discrete event simulation is considered an effortful and time-consuming activity and is particularly challenging for small and medium-sized enterprises. Systematic experiments and what-if analyses require a large number of models; modeling and simulation thus become repetitive activities, and the ability to model and simulate instantly becomes crucial for Industry 4.0. However, model generation typically uses specific methods to build models with individual properties for specific physical systems, so a general literature review cannot sufficiently describe the current state of model generation. This study aims to provide an analysis of model generation based on the modeling strategy, modeling view, and production system type, as well as model properties and limitations.
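The core of a discrete event simulation can be sketched with an event queue; the single-machine FIFO model below is a toy illustration of the mechanism, not a generated production model:

```python
import heapq

def simulate(arrivals, proc_time):
    """Event-driven simulation of one machine with a FIFO queue.
    `arrivals` lists job arrival times; returns job -> completion time."""
    # Event tuples sort by time; "arrival" < "done" breaks ties at equal times.
    events = [(t, "arrival", j) for j, t in enumerate(arrivals)]
    heapq.heapify(events)
    queue, busy_until, done = [], 0, {}
    while events:
        t, kind, job = heapq.heappop(events)
        if kind == "arrival":
            queue.append(job)
        else:
            done[job] = t               # job finished at time t
        if queue and t >= busy_until:   # machine idle: start next queued job
            nxt = queue.pop(0)
            busy_until = t + proc_time
            heapq.heappush(events, (busy_until, "done", nxt))
    return done
```

Automated model generation, as surveyed in the study, essentially builds such event-driven models (with many machines, routings, and buffers) from data instead of by hand.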
This thesis deals with the redesign of manufacturing systems through simulation and optimization. Material flow simulation is a common tool for solving problems in system design; its limitations are the high demands on time and knowledge needed to execute simulation studies, evaluate results, and solve design problems. New opportunities arise with Industry 4.0 technologies and the digital shadow, which provide data for simulation. However, methods for using production data to redesign production systems are not yet available. The purpose of this work is to provide methods for automating simulation from the digital shadow and for using simulation to optimize and solve problems in system design. Two case studies support the action research approach of this work. The result is a framework for applying the digital shadow to optimization and problem-solving.
Erlang is a functional programming language with dynamic typing. The language offers great flexibility for destructuring values through pattern matching and dynamic type tests. Erlang also comes with a type language supporting parametric polymorphism, equi-recursive types, as well as union and a limited form of intersection types. However, type signatures only serve as documentation; there is no check that a function body conforms to its signature.
Set-theoretic types and semantic subtyping fit Erlang’s feature set very well. They allow expressing nearly all constructs of its type language and provide means for statically checking type signatures. This article brings set-theoretic types to Erlang and demonstrates how existing Erlang code can be statically type checked without or with only minor modifications to the code. Further, the article formalizes the main ingredients of the type system in a small core calculus, reports on an implementation of the system, and compares it with other static type checkers for Erlang.
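The set-theoretic view can be illustrated with a toy finite model in which a type denotes a set of base-type tags, union is set union, and subtyping is set inclusion (Erlang's actual semantic subtyping works over infinite value domains; this is only a sketch):

```python
# Toy model of set-theoretic types: a type denotes a finite set of
# base-type tags. Union is set union, intersection is set intersection,
# and S is a subtype of T iff S's denotation is a subset of T's.
INT = frozenset({"integer"})
FLOAT = frozenset({"float"})
ATOM = frozenset({"atom"})
NONE = frozenset()              # the empty (uninhabited) type

def union(*types):
    result = frozenset()
    for t in types:
        result |= t
    return result

def intersect(s, t):
    return s & t

def is_subtype(s, t):
    return s <= t               # subtyping as set inclusion
```

In this model, `union(INT, FLOAT)` plays the role of Erlang's `number()`, and an intersection that comes out empty corresponds to a type with no inhabitants, which is how semantic subtyping detects unreachable pattern-match clauses.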
The use of brand-related user-generated content on a company's own social media channels is a highly promising approach in content marketing. The authentic content provided by users can serve numerous communication goals, such as strengthening user engagement or promoting sales. At the same time, risks such as legal aspects must also be considered. To help companies exploit the potential of brand-related user-generated content, this article presents a structuring framework that summarizes the essential aspects of this rather complex topic. The framework developed here was validated through expert interviews.
The identification of vulnerabilities is an important element of the software development life cycle to ensure the security of software. While vulnerability identification based on source code is a well-studied field, identifying vulnerabilities from a binary executable without the corresponding source code is more challenging. Recent research [1] has shown how such detection can generally be enabled by deep learning methods, but it appears to be very limited regarding the overall number of detected vulnerabilities. We analyse to what extent the identification of a larger variety of vulnerabilities can be covered. To this end, a supervised deep learning approach using recurrent neural networks is applied to vulnerability detection based on binary executables. The underlying basis is a dataset with 50,651 samples of vulnerable code in the form of a standardised LLVM Intermediate Representation. The vectorised features of a Word2Vec model are used to train different variations of three basic recurrent-neural-network architectures (GRU, LSTM, SRNN). A binary classification was established to detect the presence of an arbitrary vulnerability, and a multi-class model was trained to identify the exact vulnerability; these achieved out-of-sample accuracies of 88% and 77%, respectively. Differences in the detection of different vulnerabilities were also observed, with non-vulnerable samples being detected with a particularly high precision of over 98%. Thus, our proposed technical approach and methodology enable an accurate detection of 23 vulnerabilities (compared to 4 in [1]).
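Before training, the LLVM IR tokens must be turned into fixed-length index sequences for the recurrent networks; the sketch below shows only this preprocessing step with a toy whitespace tokenizer (the actual pipeline vectorises tokens with Word2Vec embeddings, which is not reproduced here):

```python
def build_vocab(samples):
    """Map every token seen in the IR samples to an integer index.
    Indices 0 and 1 are reserved for padding and unknown tokens."""
    vocab = {"<pad>": 0, "<unk>": 1}
    for sample in samples:
        for token in sample.split():
            vocab.setdefault(token, len(vocab))
    return vocab

def encode(sample, vocab, max_len):
    """Turn one IR sample into a fixed-length sequence of token indices,
    truncating or padding to `max_len` as required by the RNN input."""
    ids = [vocab.get(t, vocab["<unk>"]) for t in sample.split()]
    ids = ids[:max_len]
    return ids + [vocab["<pad>"]] * (max_len - len(ids))
```

Fixed-length index sequences like these are what an embedding layer (here, pretrained Word2Vec vectors) consumes before the GRU/LSTM/SRNN layers.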
On 1 July 2022, around 60 participants from research, teaching, and industry met at Offenburg University for an international conference held as the closing colloquium of the ACA-Modes project. The project results on the successful implementation of model-predictive control strategies were presented, current research questions were discussed, and development paths toward grid-supportive operation of integrated energy systems were outlined.