Organized by the Fraunhofer Additive Manufacturing Alliance, the bi-annual Direct Digital Manufacturing Conference brings together researchers, educators and practitioners from around the world. The conference covers the entire range of topics in additive manufacturing, starting with methodologies, design and simulation, right up to more application-specific topics, e.g. from the realm of medical engineering and electronics.
In many application areas, Deep Reinforcement Learning (DRL) has led to breakthroughs. In Curriculum Learning, the machine learning algorithm is not presented with examples in random order, but in a meaningful order of increasing difficulty. This has been used in many application areas to further improve the results of learning systems or to reduce their learning time. Such approaches range from learning plans created manually by domain experts to those created automatically; the automated creation of learning plans is one of the biggest challenges. In this work, we investigate an approach in which a trainer learns in parallel and analogously to the student in order to automatically create a learning plan for the student, an approach we call Double Deep Reinforcement Learning (DDRL). Three reward functions based on the learner's reward, Friendly, Adversarial, and Dynamic, are compared. The evaluation domain is kicking with variable distance, direction and relative ball position in the SimSpark simulated soccer environment. As a result, Statistic Curriculum Learning (SCL) performs better than a random curriculum with respect to training time and result quality. DDRL reaches a quality comparable to the baseline and significantly outperforms it in shorter training runs in the distance-direction subdomain, reducing the number of required training cycles by almost 50%.
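The three trainer reward schemes named above can be sketched in a few lines. The mapping below is an illustrative interpretation, not the paper's exact formulation; in particular, the `target` parameter of the dynamic scheme is an assumption:

```python
def trainer_reward(scheme, learner_reward, target=0.5):
    """Map the learner's reward to the trainer's reward.

    'friendly'    - trainer gains when the learner succeeds
    'adversarial' - trainer gains when the learner struggles
    'dynamic'     - trainer gains when the learner's reward is near a
                    target level (tasks neither too easy nor too hard);
                    the target value is an assumption of this sketch
    """
    if scheme == "friendly":
        return learner_reward
    if scheme == "adversarial":
        return -learner_reward
    if scheme == "dynamic":
        return -abs(learner_reward - target)
    raise ValueError(f"unknown scheme: {scheme}")
```

A trainer maximising such a reward effectively steers which kick distances, directions and ball positions the student is presented with next.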
Printed circuit boards (PCBs) are a foundation of electronic devices in modern society. The fabrication of these boards requires various processes and machines. The utilisation of a robot with multiple tools can shorten the process chain compared to screen printing. In this paper, a system is presented which utilises an industrial six-axis robot to manufacture PCBs. The process flow and the conversion of the Gerber format into robot-specific commands are presented. The advantages and challenges of applying a robot to print circuits are discussed.
Ensuring that software applications present their users with the most recent version of data is not trivial. Self-adjusting computations are a technique for automatically and efficiently recomputing output data whenever some input changes.
This article describes the software architecture of a large, commercial software system built around a framework for coarse-grained self-adjusting computations in Haskell. It discusses advantages and disadvantages based on longtime experience. The article also presents a demo of the system and explains the API of the framework.
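The core mechanism, caching derived values and invalidating them when an input they read changes, can be sketched as follows. This is a minimal illustration of the idea in Python, not the API of the Haskell framework described in the article; all names are invented:

```python
class Cell:
    """A mutable input value that tracks which computations read it."""
    def __init__(self, value):
        self._value = value
        self._dependents = set()

    def get(self, reader=None):
        if reader is not None:
            self._dependents.add(reader)  # remember who depends on us
        return self._value

    def set(self, value):
        if value != self._value:
            self._value = value
            for dep in self._dependents:
                dep.invalidate()  # mark dependents for recomputation


class Computation:
    """A derived value; recomputed lazily only when marked dirty."""
    def __init__(self, fn):
        self._fn = fn          # fn(reader) reads cells via cell.get(reader)
        self._cached = None
        self._dirty = True

    def invalidate(self):
        self._dirty = True

    def get(self):
        if self._dirty:
            self._cached = self._fn(self)
            self._dirty = False
        return self._cached


a = Cell(1)
b = Cell(2)
total = Computation(lambda reader: a.get(reader) + b.get(reader))
```

After `a.set(10)`, only `total` is recomputed on the next read; unchanged inputs trigger no work at all, which is the efficiency argument made above.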
Eco-innovations in chemical processes should be designed to use raw materials, energy and water as efficiently and economically as possible to avoid the generation of hazardous waste and to conserve raw material reserves. Applying inventive principles identified in natural systems to chemical process design can help avoid secondary problems. However, the selection of nature-inspired principles to improve technological or environmental problems is very time-consuming. In addition, it is necessary to match the strongest principles with the problems to be solved. Therefore, the research paper proposes a classification and assignment of nature-inspired inventive principles to eco-parameters, eco-engineering contradictions and eco-innovation domains, taking into account environmental, technological and economic requirements. This classification will help to identify suitable principles quickly and also to realize rapid innovation. In addition, to validate the proposed classification approach, the study is illustrated with the application of nature-inspired invention principles for the development of a sustainable process design for the extraction of high-purity silicon dioxide from pyrophyllite ores. Finally, the paper defines a future research agenda in the field of nature-inspired eco-engineering in the context of AI-assisted invention and innovation.
The identification of vulnerabilities is an important element in the software development life cycle to ensure the security of software. While vulnerability identification based on source code is a well-studied field, identifying vulnerabilities on the basis of a binary executable without the corresponding source code is more challenging. Recent research [1] has shown how such detection can generally be enabled by deep learning methods, but appears to be very limited regarding the overall number of detected vulnerabilities. We analyse to what extent the identification of a larger variety of vulnerabilities can be covered. To this end, a supervised deep learning approach using recurrent neural networks for vulnerability detection based on binary executables is used. The underlying basis is a dataset with 50,651 samples of vulnerable code in the form of a standardised LLVM Intermediate Representation. The vectorised features of a Word2Vec model are used to train different variations of three basic recurrent neural network architectures (GRU, LSTM, SRNN). A binary classification model was established for detecting the presence of an arbitrary vulnerability, and a multi-class model was trained for identifying the exact vulnerability; they achieved out-of-sample accuracies of 88% and 77%, respectively. Differences in the detection of different vulnerabilities were also observed, with non-vulnerable samples being detected with a particularly high precision of over 98%. Thus, our proposed technical approach and methodology enable an accurate detection of 23 vulnerabilities (compared to 4 in [1]).
Detecting Images Generated by Deep Diffusion Models using their Local Intrinsic Dimensionality
(2023)
Diffusion models have recently been applied successfully to the visual synthesis of strikingly realistic-looking images. This raises strong concerns about their potential for malicious purposes. In this paper, we propose using the lightweight multi Local Intrinsic Dimensionality (multiLID), originally developed in the context of detecting adversarial examples, for the automatic detection of synthetic images and the identification of the corresponding generator networks. In contrast to many existing detection approaches, which often only work for GAN-generated images, the proposed method provides close-to-perfect detection results in many realistic use cases. Extensive experiments on known and newly created datasets demonstrate that the proposed multiLID approach exhibits superiority in diffusion detection and model identification. Since the empirical evaluations of recent publications on the detection of generated images often focus mainly on the "LSUN-Bedroom" dataset, we further establish a comprehensive benchmark for the detection of diffusion-generated images, including samples from several diffusion models with different image sizes. The code for our experiments is provided at https://github.com/deepfake-study/deepfake-multiLID.
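The building block behind LID-based detectors is a local intrinsic dimensionality estimate computed from nearest-neighbour distances; multiLID collects such estimates across network layers. The function below is a generic maximum-likelihood sketch of a single LID estimate, not the authors' exact code:

```python
import math

def lid_mle(distances):
    """Maximum-likelihood LID estimate from k-NN distances
    (Levina-Bickel style, as commonly used in LID-based detection).

    `distances` are the positive distances to the k nearest
    neighbours of a query point, sorted ascending.
    """
    k = len(distances)
    d_k = distances[-1]  # distance to the k-th (farthest) neighbour
    # sum of log-ratios d_i / d_k for the k-1 closer neighbours
    s = sum(math.log(d / d_k) for d in distances[:-1])
    return -(k - 1) / s
```

Intuitively, the faster neighbour distances grow with k, the lower the estimated local dimensionality; synthetic images tend to produce characteristic LID profiles in feature space.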
Erlang is a functional programming language with dynamic typing. The language offers great flexibility for destructing values through pattern matching and dynamic type tests. Erlang also comes with a type language supporting parametric polymorphism, equi-recursive types, as well as union and a limited form of intersection types. However, type signatures only serve as documentation; there is no check that a function body conforms to its signature.
Set-theoretic types and semantic subtyping fit Erlang’s feature set very well. They allow expressing nearly all constructs of its type language and provide means for statically checking type signatures. This article brings set-theoretic types to Erlang and demonstrates how existing Erlang code can be statically type checked without or with only minor modifications to the code. Further, the article formalizes the main ingredients of the type system in a small core calculus, reports on an implementation of the system, and compares it with other static type checkers for Erlang.
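The idea of semantic subtyping can be illustrated by interpreting types as sets of values, so that subtyping becomes set inclusion. The toy model below enumerates a finite universe of tagged values; real set-theoretic type systems, including the one for Erlang described here, decide inclusion symbolically rather than by enumeration:

```python
def denote(ty, universe):
    """Map a type expression to the subset of `universe` it denotes.

    Types are tuples: ("base", tag), ("union", t1, t2),
    ("intersection", t1, t2). Values are (tag, payload) pairs.
    """
    kind = ty[0]
    if kind == "base":
        return {v for v in universe if v[0] == ty[1]}
    if kind == "union":
        return denote(ty[1], universe) | denote(ty[2], universe)
    if kind == "intersection":
        return denote(ty[1], universe) & denote(ty[2], universe)
    raise ValueError(f"unknown type constructor: {kind}")

def is_subtype(t1, t2, universe):
    """Semantic subtyping: t1 <= t2 iff t1's set is contained in t2's."""
    return denote(t1, universe) <= denote(t2, universe)
```

Under this reading, `integer() | atom()` is a supertype of `integer()` simply because one set contains the other, which is why unions and intersections compose so naturally.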
In recent years, predictive maintenance tasks, especially for bearings, have become increasingly important. Solutions for these use cases concentrate on the classification of faults and the estimation of the Remaining Useful Life (RUL). As of today, these solutions suffer from a lack of training samples. In addition, these solutions often require high-frequency accelerometers, incurring significant costs. To overcome these challenges, this research proposes a combined classification and RUL estimation solution based on a Convolutional Neural Network (CNN) and a Long Short-Term Memory (LSTM) network. This solution relies on a hybrid feature extraction approach, making it especially appropriate for low-cost accelerometers with low sampling frequencies. In addition, it uses transfer learning to be suitable for applications with only a few training samples.
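Typical time-domain features that remain informative at low sampling frequencies can be computed without any signal-processing library. The selection below is illustrative only; the paper's exact hybrid feature set is not specified here:

```python
import math

def hybrid_features(window):
    """Compute simple time-domain features over one vibration window.

    Illustrative subset (RMS, standard deviation, kurtosis, crest
    factor) of the kind of features usable with low-cost, low-rate
    accelerometers.
    """
    n = len(window)
    mean = sum(window) / n
    rms = math.sqrt(sum(x * x for x in window) / n)
    var = sum((x - mean) ** 2 for x in window) / n
    std = math.sqrt(var)
    # kurtosis is sensitive to impulsive bearing faults
    kurtosis = (sum((x - mean) ** 4 for x in window) / n) / (var ** 2) if var else 0.0
    crest = max(abs(x) for x in window) / rms if rms else 0.0
    return {"rms": rms, "std": std, "kurtosis": kurtosis, "crest": crest}
```

Feature vectors like this would then feed the CNN/LSTM stages, keeping the raw-signal bandwidth requirements modest.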
TSN, or Time-Sensitive Networking, is becoming an essential technology for integrated networks, enabling deterministic and best-effort traffic to coexist on the same infrastructure. In order to properly configure, run and secure such TSN networks, monitoring functionality is a must. The TSN standards already include some provisions for such functionality, and there are different methods to choose from. We implemented different methods to measure the time synchronisation accuracy between devices as a C library and compared the measurement results. Furthermore, the library has been integrated into the ControlTSN engineering framework.
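One standard way to measure time synchronisation between two devices is the two-way timestamp exchange used by PTP-style protocols (IEEE 1588 / IEEE 802.1AS, on which TSN time synchronisation builds). The sketch below shows the offset and delay computation in Python rather than the C library from the paper:

```python
def ptp_offset_and_delay(t1, t2, t3, t4):
    """Classic two-way time-transfer estimate.

    t1 = Sync sent by master (master clock)
    t2 = Sync received by slave (slave clock)
    t3 = Delay_Req sent by slave (slave clock)
    t4 = Delay_Req received by master (master clock)

    Returns (slave clock offset relative to master, mean path delay),
    assuming a symmetric propagation path.
    """
    offset = ((t2 - t1) - (t4 - t3)) / 2
    delay = ((t2 - t1) + (t4 - t3)) / 2
    return offset, delay
```

Comparing the offset computed this way against a reference (e.g. a pulse-per-second signal) is one way to quantify synchronisation accuracy between devices.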
Turbocharger housings in internal combustion engines are subjected to severe mechanical and thermal cyclic loads throughout their lifetime or during engine testing. The combination of thermal transients and mechanical load cycling results in a complex evolution of damage, leading to thermo-mechanical fatigue (TMF) of the material. For the computational TMF life assessment of high-temperature components, the DTMF model can provide reliable TMF life predictions. The model is based on a short fatigue crack growth law and uses local finite-element (FE) results to predict the number of cycles to failure for a technical crack. In engine applications, it is nowadays often acceptable to have short cracks as long as they do not propagate and cause loss of function of the component. Thus, it is necessary not only to predict potential crack locations and the corresponding number of cycles for a technical crack, but also to determine subsequent crack growth or even a possible crack arrest. In this work, a method is proposed that allows the simulation of TMF crack growth in high-temperature components using FE simulations and non-linear fracture mechanics (NLFM).
A NLFM based crack growth simulation method is described. This method starts with the FE analysis of a component. In this paper, the method is demonstrated for an automotive turbocharger housing subjected to TMF loading. A transient elastic-viscoplastic FE analysis is used to simulate four heating and cooling cycles of an engine test. The stresses, inelastic strains, and temperature histories from the FEA are then used to perform TMF life predictions using the standard DTMF model. The crack position and the crack plane of critical hotspots are then identified. Simulated cracks are inserted at the hotspots. For the model demonstrated, cracks were inserted at two hotspot locations. The ΔJ integral is computed as a fracture mechanics parameter at each point along the crack-front, and the crack extension of each point is then evaluated, allowing the crack to grow iteratively. The paper concludes with a comparison of the crack growth curves for both hotspots with experimental results.
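The iterative crack-extension loop described above can be sketched with a Paris-type growth law driven by ΔJ. All constants and the ΔJ function below are placeholders; in the paper, ΔJ is computed from FE results at each point along the crack front rather than from a closed-form function:

```python
def simulate_crack_growth(a0, n_cycles, delta_j_of_a, c, m, a_max):
    """Iterative crack growth with a Paris-type law da/dN = C * (dJ)^m.

    a0           initial crack length
    delta_j_of_a placeholder mapping crack length -> dJ (from FEA in
                 the real workflow)
    c, m         illustrative material constants
    a_max        crack length treated as loss of function

    Returns the crack-length history; stops early on crack arrest
    (dJ <= 0) or when a_max is reached.
    """
    a = a0
    history = [a]
    for _ in range(n_cycles):
        dj = delta_j_of_a(a)
        if dj <= 0:            # crack arrest: no further growth
            break
        a += c * dj ** m       # crack extension for this cycle
        if a >= a_max:         # component loses its function
            history.append(a_max)
            break
        history.append(a)
    return history
```

The same loop structure supports the comparison made in the paper: growth curves per hotspot, with arrest emerging naturally whenever the driving force drops to zero.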
Learning programming fundamentals is considered one of the most challenging and complex learning activities. Some authors have proposed visual programming language (VPL) approaches to address part of this inherent complexity [1]. A visual programming language lets users develop programs by combining program elements, such as loops, graphically rather than by specifying them textually. Visual expressions and spatial arrangements of text and graphic symbols are used either as syntax elements or as secondary notation. VPLs are commonly used for educational multimedia, video games, system development, and data warehousing/business analytics purposes. For example, Scratch, a platform of the Massachusetts Institute of Technology, is designed for kids and after-school programs.
Design of mobile software applications is considered one of the most challenging application domains due to the built-in sensors of a mobile device, such as GPS, camera or Near Field Communication (NFC). Sensors enable the creation of context-aware mobile applications that can discover and take advantage of contextual information, such as user location, nearby people and objects, and the current user activity. As a consequence, context-aware mobile applications can sense clues about the situational environment, making mobile devices more intelligent, adaptive, and personalized. Such context-aware mobile applications seem to be motivating and attractive case studies, especially for programming beginners ("my own first app").
In this work, we introduce a use-case-centered approach as well as a clear separation of user interface design and sensor-based program development. We provide an in-depth discussion of a new VPL-based teaching method, a step-by-step development process that enables programming beginners to create context-aware mobile applications. Finally, we argue that addressing the challenges of programming beginners with our teaching approach could make programming teaching more motivating, with an additional impact on the final software quality and scalability.
The key contributions of our study are the following:
- An overview of existing attempts to use VPL approaches for mobile applications
- A use case centered teaching approach based on a clear separation of user interface design and sensor-based program development
- A teaching case study enabling beginners to create context-aware mobile applications step by step, based on the MIT App Inventor (a platform of the Massachusetts Institute of Technology)
- Open research challenges and perspectives for further development of our teaching approach
References:
[1] Idrees, M., & Aslam, F. (2022). A Comprehensive Survey and Analysis of Diverse Visual Programming Languages. VFAST Transactions on Software Engineering, 10(2), 47–60.
Seismic data processing relies on multiples attenuation to improve inversion and interpretation. Radon-based algorithms are often used for multiples and primaries discrimination. Deep learning, based on convolutional neural networks (CNNs), has shown encouraging applications for demultiple that could mitigate Radon-based challenges. In this work, we investigate new strategies to train a CNN for multiples removal based on different loss functions. We propose combined primaries and multiples labels in the loss for training a CNN to predict primaries, multiples, or both simultaneously. Moreover, we investigate two distinctive training methods for all the strategies: UNet based on minimum absolute error (L1) training, and adversarial training (GAN-UNet). We test the trained models with the different strategies and methods on 400 synthetic data. We found that training to predict multiples, including the primaries …
In 4D printing, an additively manufactured component is given the ability to change its shape or function under the influence of an external stimulus. To achieve this, special smart materials are used that are able to react to external stimuli in a specific way. So far, a number of different stimuli have already been investigated, and initial applications such as self-folding bodies and simple grippers have been impressively demonstrated. However, a methodical specification for the selection of the stimuli and their implementation has not yet been the focus of development.
The focus of this work is therefore to develop a methodical approach with which the technology of 4D printing (4DP) can be used in a solution- and application-oriented manner. The developed approach is based on the conventional design methodology for product development to solve given problems in a structured way. This method is extended by specific approaches that take 4D printing and smart materials into consideration.
To illustrate the developed method, it is applied in practice to a problem definition in the form of an application example. In this example, which represents the recovery of an object from a difficult-to-access environment, the individual functions of positioning, gripping and extraction are implemented using 4D printing. The material extrusion process is used for additive manufacturing of all components of the example. Finally, the functions are successfully tested. The developed approach offers an innovative and methodical way to systematically solve complex technical problems using 4DP and smart materials.
Voice User Experience
(2023)
Voice assistants such as Alexa, Google Assistant, Siri, Cortana, Magenta and Bixby are enjoying growing popularity thanks to their intuitive, fast and convenient forms of interaction, and therefore offer exciting opportunities for the further development of digital customer dialogue. However, whether the technology will find broad acceptance depends not only on its technical quality or usability. The user experience, which encompasses not only users' reactions during use but also their expectations and perceptions before and after use, also plays a central role. Measuring the quality of the voice user experience (voice UX) is therefore of great interest for the evaluation and optimization of voice applications. The question of how the voice UX of voice-controlled systems can be measured, however, remains open. Current methods often rely on UX research on graphical user interfaces, even though voice-based forms of interaction are generally neither visually nor haptically tangible. In our contribution, we examine the current status quo of the German voice user experience. The following questions are at the center: How can voice applications contribute to a successful customer dialogue? Which user irritations currently occur when using voice assistants? Which methods can be used to measure the voice user experience?
The present paper addresses the following research question: what recommendations for action and potential adjustments should an online magazine for beauty and fashion implement in order to make affiliate articles in these sections even more appealing to the target group and provide added value for them?
To answer this research question, three hypotheses were defined and tested using qualitative and quantitative research. The qualitative research consisted of user experience tests in which four affiliate articles in the fields of beauty and fashion were tested with 13 participants. The quantitative research involved collecting, analyzing and evaluating data from the four affiliate articles gathered with the company's real-life target group. Based on these results, recommendations for action were derived, which should not only improve the quality of the content in the future but also increase the efficiency of the implementation of those articles.
Kundendaten im E-Commerce – Optimierungspotenzial im Checkout-Prozess des deutschen Online-Handels
(2023)
The design of a user-friendly checkout process is of great importance for the success of e-commerce. Collecting customer data forms an important part of the customer journey. On the one hand, retail companies want to learn as much as possible about their customers in order to deliver precisely targeted offers and marketing measures and to create the perfect shopping experience. On the other hand, customers want to concentrate on the purchase when shopping online and expect a smooth process. The checkout process is a critical point in this context, which is also reflected in high shopping-cart abandonment rates. There is still much room for improvement when it comes to truly delighting online shoppers. With the aim of better understanding the status quo in German online retail and optimizing usability and user experience for a higher conversion rate, the research presented here examined the registration and checkout processes of the 100 highest-revenue online shops in Germany. The results of the study are presented, showing where optimization potential exists, for example in overly complicated forms, unnecessary data requests or forced registrations, and suggestions for online retail practice are discussed.
In this paper, we propose an approach for gait phase detection on flat and inclined surfaces that can be used for an ankle-foot orthosis and the humanoid robot Sweaty. To cover different use cases, we use a rule-based algorithm, which offers the required flexibility and real-time capability. The inputs of the algorithm are inertial measurement unit and ankle joint angle signals. We show that the gait phases with the orthosis worn by a human participant and with Sweaty are reliably recognized by the algorithm, provided the transition conditions are adapted. For example, the specificity for human gait on flat surfaces is 92 %. For the robot Sweaty, 95 % of gait cycles are fully recognized. Furthermore, the algorithm also allows the determination of the inclination angle of the ramp: the sensors of the orthosis yield 6.9° and those of the robot Sweaty 7.7° when walking onto the reference ramp with a slope angle of 7.9°.
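A rule-based gait phase detector of the kind described is essentially a small state machine whose transitions fire on sensor thresholds. The phases, signals and threshold values below are invented for illustration; the paper adapts its transition conditions per platform (orthosis vs. robot):

```python
# Hypothetical four-phase gait cycle; the real phase set and
# thresholds come from the paper's tuned transition conditions.
PHASES = ("heel_strike", "flat_foot", "heel_off", "swing")

def next_phase(phase, foot_accel_z, ankle_angle):
    """Advance the gait state machine by one sensor sample.

    foot_accel_z: vertical IMU acceleration (arbitrary units)
    ankle_angle:  ankle joint angle in degrees
    Returns the new phase, or the old one if no rule fires.
    """
    if phase == "swing" and foot_accel_z > 1.5:      # impact spike
        return "heel_strike"
    if phase == "heel_strike" and abs(ankle_angle) < 5.0:
        return "flat_foot"
    if phase == "flat_foot" and ankle_angle > 10.0:  # heel lifts
        return "heel_off"
    if phase == "heel_off" and foot_accel_z < 0.5:   # foot leaves ground
        return "swing"
    return phase  # no transition condition met
```

Because each rule is an explicit threshold comparison, the detector runs in constant time per sample, which is what gives such algorithms their real-time capability.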
Complex tourism products with intangible service components are difficult to explain to potential customers. This research elaborates the use of virtual reality (VR) in the field of shore excursions. A theoretical research model based on the technology acceptance model was developed, and hypotheses were proposed. Cruise passengers were invited to test 360° excursion images on a landing page. Data was collected using an online questionnaire. Finally, data was analyzed using the PLS-SEM method. The results provide theoretical implications on technology acceptance model (TAM) research in the field of cruise tourism. Furthermore, the results and implications indicate the potential of virtual 360° shore excursion presentations for the cruise industry.
Robotic Process Automation (RPA) is a technology for automating business processes and connecting systems by means of software robots in organizations; it is gaining traction and growing out of its infancy. Thus, it is no longer just a question of what is technologically feasible, but rather of how this technology can be used most profitably. However, business models for RPA remain underinvestigated in the literature. Existing work is highly heterogeneous, lacking structure and applicability in practice. To close this gap, we present an approach to sustainably establish RPA as a driver of digitization and automation within a company, based on an iterative, holistic view of business models with the Business Model Canvas as the analysis tool.
The paper compares different anti-windup strategies for the current control of inverter-fed permanent magnet synchronous machines (PMSM) controlled by pulse-width modulation. In this respect, the focus is on the drive behavior with a relatively large product of stator frequency and sampling time. A requirement for dynamically high-quality anti-windup measures is, among other things, a sufficiently accurate decoupling of the stator current direct axis and quadrature axis components even at high stator frequencies. Discrete-time models of the electrical subsystem of the PMSM are well suited for this purpose, of which the method found to be the most accurate in a preliminary investigation is used as the basis for all anti-windup methods examined. Simulation studies and measurement results document the performance of the compared methods.
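As one example of the class of strategies compared, a discrete-time PI controller with back-calculation anti-windup feeds the saturation excess back into the integrator so that it stops winding up while the actuator is limited. The gains, limits and back-calculation coefficient below are illustrative, not values from the paper:

```python
class PIAntiWindup:
    """Discrete PI controller with back-calculation anti-windup
    (one common anti-windup strategy; all parameters illustrative)."""

    def __init__(self, kp, ki, ts, u_min, u_max, kb):
        self.kp, self.ki, self.ts = kp, ki, ts
        self.u_min, self.u_max, self.kb = u_min, u_max, kb
        self.integral = 0.0

    def step(self, error):
        u = self.kp * error + self.integral
        u_sat = min(max(u, self.u_min), self.u_max)  # actuator limit
        # back-calculation: the saturation excess (u_sat - u) bleeds
        # the integrator down whenever the output is clipped
        self.integral += self.ts * (self.ki * error + self.kb * (u_sat - u))
        return u_sat
```

In the drive context discussed above, the same structure would sit in the decoupled direct- and quadrature-axis current loops, with the saturation given by the available inverter voltage.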
Cast aluminum cylinder blocks are frequently used in gasoline and diesel internal combustion engines because of their lightweight advantage. However, the disadvantage of aluminum alloys is their relatively low strength and fatigue resistance, which makes aluminum blocks prone to fatigue cracking. Engine blocks must withstand a combination of low-cycle fatigue (LCF) thermal loads and high-cycle fatigue (HCF) combustion and dynamic loads. Reliable computational methods are needed that allow for an accurate fatigue assessment of cylinder blocks under this combined loading. In several publications, the mechanism-based thermomechanical fatigue (TMF) damage model DTMF describing the growth of short fatigue cracks has been extended to include the effect of both LCF thermal loads and superimposed HCF loadings. This approach is applied to the finite-life fatigue assessment of an aluminum cylinder block. The required material properties related to LCF are determined from uniaxial LCF tests. The additional material properties required for the assessment of superimposed HCF are obtained from the literature for similar materials. The predictions of the model agree well with engine dyno test results. Finally, some improvements to the current process are discussed.
In order to attract new students, German universities must provide quick and easy access to relevant information. A chatbot can help increase the efficiency of academic advising for prospective students. In this study, we evaluate the acceptance and effects of chatbots in German student-university communication. We conducted a qualitative UX study with the chatbot prototype of Offenburg University of Applied Sciences (HSO) in order to determine which features are particularly relevant and which requirements are made by the users. The results show that acceptance increases if the chatbot offers quick and adequate assistance. Furthermore, our participants preferred an informal communication style and valued friendly and helpful personality traits in chatbots.
4D printing (4DP) is an evolutionary step beyond 3D printing that adds a fourth dimension, in this case time. At different time steps the printed structure shows different shapes, influenced by external stimuli such as light, temperature, pH value, or electric or magnetic fields. The advantage of 4DP is the solution of technical problems without the need for a complex internal energy supply via cables or pipes. Previous approaches to 4D printing with magnetoresponsive materials only use materials with limited usability (e.g. hydrogels) and complex programming during the manufacturing process (e.g. using magnets at the nozzle). 4D printing using unmagnetized particles with later magnetization allows the use of a standard 3D printer and has the advantage of being easily reproducible and relatively inexpensive for further applications. Therefore, a magnetoresponsive feedstock filament is produced which shows elastic and magnetic properties. In a first step, pellets are produced by compounding a polymer with magnetic particles. In a second step, these pellets are extruded into filament. This filament is printed using a conventional printing system for material extrusion (MEX-TRB/P). Various prototypes have been printed, deformed and magnetized, which is called programming. In comparison to shape memory polymers (SMP), the repeatability of the movement is better. The results show the possibilities of application and function of magnetoresponsive materials. In addition, an understanding of the behaviour of this novel material is achieved.
Current Harmonics Control Algorithm for inverter-fed Nonlinear Synchronous Electrical Machines
(2023)
Current harmonics are a well-known challenge of electrical machines. They can be undesirable, as they can cause instabilities in the control, generate additional losses and lead to torque ripples with noise. However, they can also be deliberately generated in new methods in order to improve the machine behavior. In this paper, an algorithm for controlling current harmonics is proposed. It can be described as a combination of different PI controllers for defined angles of the machine with repetitive control characteristics over whole revolutions. The controller design is explained, and important points where linearization is necessary are shown. Furthermore, the limits are analyzed and, for validation, measurement results with a permanently excited synchronous machine on a test bench are considered.
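The combination described, PI-like integration per machine angle with repetitive behaviour over whole revolutions, can be sketched as an angle-binned integrator: each discretised rotor angle accumulates its own correction, revisited once per revolution. The bin count and gain below are illustrative, and the linearization steps mentioned above are omitted:

```python
class AngleBinnedIntegrator:
    """Sketch of a repetitive harmonic controller: one integrating
    correction per discretised rotor angle (illustrative parameters)."""

    def __init__(self, n_bins, ki):
        self.n = n_bins
        self.ki = ki
        self.corr = [0.0] * n_bins  # learned correction per angle bin

    def update(self, angle, error):
        """angle in [0, 1) revolutions; accumulate the current-error
        at this angle's bin and return the updated correction."""
        idx = int(angle * self.n) % self.n
        self.corr[idx] += self.ki * error
        return self.corr[idx]
```

Because each bin only sees the error at its own angle, periodic disturbances (the harmonics) are integrated away revolution by revolution, while the per-bin PI action keeps the loop simple.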
The nonlinear behavior of inverters is largely impacted by the interlocking and switching times. A method for online identification of the switching times of semiconductors in inverters is presented in the following work. By being able to identify these times, it is possible to compensate for the nonlinear behavior, reduce the interlocking time, and use the information for diagnostic purposes. The method is first theoretically derived by examining different inverter switching cases and determining potential identification possibilities. It is then modified to consider the entire module for a more robust identification. The methodology, including limitations and boundary conditions, is investigated, and a comparison of two methods of measurement acquisition is provided. Subsequently, the developed hardware is described and the implementation in an FPGA is presented. Finally, the results are presented and discussed, and potential challenges are addressed.
The present work describes an extension of current slope estimation for parameter estimation of permanent magnet synchronous machines operated at inverters. The area of operation for current slope estimation in the individual switching states of the inverter is limited due to measurement noise, bandwidth limitation of the current sensors and the commutation processes of the inverter's switching operations. Therefore, a minimum duration of each switching state is necessary, limiting the final area of operation of a robust current slope estimation. This paper presents an extension of existing current slope estimation algorithms resulting in a greater area of operation and a more robust estimation result.
An important step in seismic data processing to improve inversion and interpretation is multiple attenuation. Radon-based algorithms are often used for discriminating between primaries and multiples. Recently, deep learning (DL) based on convolutional neural networks (CNNs) has shown promising results in demultiple that could mitigate the challenges of Radon-based methods. In this work, we investigate different strategies to train a CNN for multiple removal based on different loss functions. We propose combining primaries and multiples labels in the loss for training a CNN to predict primaries, multiples, or both simultaneously. We evaluate the performance of the CNNs trained with the different strategies on 400 clean and noisy synthetic examples, considering three metrics. We found that training a CNN to predict the multiples and then subtracting them from the input image is the most effective strategy for demultiple. Furthermore, including the primaries labels as a constraint during the training of multiples prediction improves the results. Finally, we test the strategies on a field dataset. The CNNs trained with the different strategies report competitive results on real data compared with Radon demultiple. As a result, effectively trained CNN models can potentially replace Radon-based demultiple in existing workflows.
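The idea of constraining a multiples-predicting network with the primaries labels can be sketched as a combined loss: one error term on the predicted multiples, plus a term on the primaries implied by subtracting the prediction from the input. This is a hypothetical toy on 1-D lists, not the paper's loss; the weight `w` is an arbitrary assumption.

```python
def combined_demultiple_loss(pred_multiples, data, multiples_label, primaries_label, w=0.5):
    """Toy combined loss: MSE on multiples plus MSE on implied primaries."""
    def mse(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)
    # Primaries implied by subtracting the predicted multiples from the input
    primaries_pred = [d - m for d, m in zip(data, pred_multiples)]
    return mse(pred_multiples, multiples_label) + w * mse(primaries_pred, primaries_label)
```

If the data decomposes exactly into primaries plus multiples and the prediction is perfect, both terms vanish.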
Seismic data processing involves techniques to deal with undesired effects that occur during acquisition and pre-processing. These effects mainly comprise coherent artefacts such as multiples, non-coherent signals such as electrical noise, and loss of signal information at the receivers that leads to incomplete traces. In this work, we employ a generative solution, since it can explicitly model complex data distributions and hence yield a better decision-making process. In particular, we introduce diffusion models for multiple removal. To that end, we run experiments on synthetic and on real data, and we compare the deep diffusion performance with standard algorithms. We believe that our pioneering study not only demonstrates the capability of diffusion models, but also opens the door to future research integrating generative models into seismic workflows.
Modern industrial production is heavily dependent on efficient workflow processes and automation. The steady flow of raw materials as well as the separation of vital parts and semi-finished products are at the core of these automated procedures. Commonly used systems for this work are bowl feeders, which separate the parts and material by a combination of mechanical vibration and friction. The production of these tools, especially the design of the ramping spiral, is delicate and time-consuming work, as the shape, slope, and material must be carefully adjusted for the corresponding parts. In this work, we propose an automated approach, making use of optimization procedures from artificial intelligence, to design the spiral ramps of bowl feeders. To this end, the whole system and the considered parts are physically simulated, and the optimized geometry is subsequently exported into a CAD system for actual construction or printing. The use of evolutionary optimization requires developing a mathematical model of the whole setup and finding an efficient representation of its integral features.
This study focuses on the autonomous navigation and mapping of indoor environments using a drone equipped only with a monocular camera and height measurement sensors. A visual SLAM algorithm was employed to generate a preliminary map of the environment and to determine the drone's position within the map. A deep neural network was utilized to generate a depth image from the monocular camera's input, which was subsequently transformed into a point cloud to be projected into the map. By aligning the depth point cloud with the map, 3D occupancy grid maps were constructed by using ray tracing techniques to get a precise depiction of obstacles and the surroundings. Due to the absence of IMU data from the low-cost drone for the SLAM algorithm, the created maps are inherently unscaled. However, preliminary tests with relative navigation in unscaled maps have revealed potential accuracy issues, which can only be overcome by incorporating additional information from the given sensors for scale estimation.
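Transforming a monocular depth image into a point cloud, as described above, amounts to back-projecting each pixel through a pinhole camera model. A minimal sketch in Python; the intrinsics (fx, fy, cx, cy) are placeholders, and the real pipeline would of course operate on camera-calibrated images.

```python
def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image into camera-frame 3-D points (pinhole model)."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z > 0:  # skip invalid (zero-depth) pixels
                # x = (u - cx) * z / fx,  y = (v - cy) * z / fy
                points.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return points
```

The resulting points would then be transformed into the map frame before ray tracing into the occupancy grid.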
PROFINET Security: A Look on Selected Concepts for Secure Communication in the Automation Domain
(2023)
We provide a brief overview of the cryptographic security extensions for PROFINET, as defined and specified by PROFIBUS & PROFINET International (PI). These come in three hierarchically defined Security Classes, called Security Classes 1, 2, and 3. Security Class 1 provides basic security improvements with moderate implementation impact on PROFINET components. Security Classes 2 and 3, in contrast, introduce an integrated cryptographic protection of PROFINET communication. We first highlight and discuss the security features that the PROFINET specification offers for future PROFINET products. Then, as our main focus, we take a closer look at some of the technical challenges that were faced during the conceptualization and design of Security Class 2 and 3 features. In particular, we elaborate on how secure application relations between PROFINET components are established and how disruption-free availability of a secure communication channel is guaranteed despite the need to refresh cryptographic keys regularly. The authors are members of the PI Working Group CB/PG10 Security.
Following their success in visual recognition tasks, Vision Transformers (ViTs) are being increasingly employed for image restoration. As a few recent works claim that ViTs for image classification also have better robustness properties, we investigate whether the improved adversarial robustness of ViTs extends to image restoration. We consider the recently proposed Restormer model, as well as NAFNet and the "Baseline network", which are both simplified versions of a Restormer. We use Projected Gradient Descent (PGD) and CosPGD for our robustness evaluation. Our experiments are performed on real-world images from the GoPro dataset for image deblurring. Contrary to what is advocated in work on ViTs for image classification, our analysis indicates that these models are highly susceptible to adversarial attacks. We attempt to find an easy fix and improve their robustness through adversarial training. While this yields a significant increase in robustness for Restormer, results on the other networks are less promising. Interestingly, we find that the design choices in NAFNet and the Baseline network, which were based on i.i.d. performance rather than robust generalization, seem to be at odds with model robustness.
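PGD, the attack used above, iterates signed gradient-ascent steps on the loss while projecting the perturbed input back into an epsilon-ball around the original. A toy sketch on a linear scorer (where the gradient is analytic), not the evaluation code of the paper; epsilon, step size, and step count are arbitrary.

```python
def pgd_linear(x, w, y, eps=0.1, alpha=0.05, steps=5):
    """PGD on a linear scorer f(x) = w.x with label y in {-1, +1} (toy sketch)."""
    x_adv = list(x)
    for _ in range(steps):
        # Gradient of the loss -y*f(x) w.r.t. x is -y*w; take a signed step.
        x_adv = [xi + alpha * (1 if -y * wi > 0 else -1 if -y * wi < 0 else 0)
                 for xi, wi in zip(x_adv, w)]
        # Project back into the L-infinity ball of radius eps around x.
        x_adv = [min(max(xa, xo - eps), xo + eps) for xa, xo in zip(x_adv, x)]
    return x_adv
```

For a restoration network the same loop would use autograd gradients of the restoration loss instead of the analytic linear gradient.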
Differentiation between human and non-human objects can increase the efficiency of human-robot collaborative applications. This paper proposes using convolutional neural networks for classifying objects in robotic applications. The body temperature of human beings is used to classify humans and to estimate their distance to the sensor. Using image classification with convolutional neural networks, it is possible to detect humans in the surroundings of a robot at up to five meters distance with low-cost and low-weight thermal cameras. Using a transfer learning technique, we trained GoogLeNet and MobileNetV2. Results show accuracies of 99.48 % and 99.06 %, respectively.
Skin cancer detection proves to be complicated and highly dependent on the examiner’s skills. Millimeter-wave technologies seem to be a promising aid for the detection of skin cancer. The different water content of a skin area affected by cancer compared to healthy skin changes its reflective properties. Due to the limited available data on the dielectric properties of skin cancer, especially in comparison to surrounding healthy skin, accurate simulations and evaluations are quite challenging. Therefore, comparing results from different approaches and starting points can be difficult. In this paper, the Effective Medium Theory is applied to model skin cancer, providing permittivity values dependent on the water content.
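A common effective-medium relation for such water-content-dependent permittivity models is the Maxwell Garnett mixing formula. A sketch in Python, assuming spherical inclusions of volume fraction f in a host matrix; whether the paper uses this particular EMT variant is not stated, so treat it as illustrative.

```python
def maxwell_garnett(eps_m, eps_i, f):
    """Maxwell Garnett effective permittivity of inclusions (eps_i, volume
    fraction f) embedded in a matrix (eps_m). Works for real or complex eps."""
    return eps_m * (eps_i + 2 * eps_m + 2 * f * (eps_i - eps_m)) \
                 / (eps_i + 2 * eps_m - f * (eps_i - eps_m))
```

At f = 0 the formula returns the matrix permittivity, at f = 1 the inclusion permittivity, and in between it interpolates nonlinearly; with complex permittivities it captures losses as well.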
A method for evaluating skin cancer detection based on millimeter-wave technologies is presented. For this purpose, the relative permittivities are calculated using the Effective Medium Theory for benign and cancerous lesions, considering the change in water content between them. These calculated relative permittivities are then used for the simulation and evaluation of skin cancer detection using a substrate-integrated waveguide probe. A difference in the simulated scattering parameter S11 of up to 13 dB between healthy and cancerous skin can be determined in the best case.
Investigation on Bowtie Antennas Operating at Very Low Frequencies for Ground Penetrating Radar
(2023)
The efficiency of Ground Penetrating Radar (GPR) systems significantly depends on the antenna performance as the signal has to propagate through lossy and inhomogeneous media. GPR antennas should have a low operating frequency for greater penetration depth, high gain and efficiency to increase the receiving power and should be compact and lightweight for ease of GPR surveying. In this paper, two different designs of Bowtie antennas operating at very low frequencies are proposed and analyzed.
The Transport Layer Security (TLS) protocol is a widespread cryptographic protocol designed to provide secure communication over insecure networks by providing authenticity, integrity, and confidentiality. As a first step, a common master secret is negotiated in the TLS Handshake Protocol. In many configurations, this step makes considerable use of asymmetric cryptographic algorithms. It seems to be a prevalent assumption that the use of such asymmetric cryptographic algorithms is unsuitable for resource-constrained devices. Therefore, the work at hand analyzes the runtime performance of TLS v1.2 session establishment on an embedded ARM Cortex-M4 platform. We measure the execution time to generate and parse session establishment messages for the client and server sides. In particular, we study the impact of different elliptic curves used for the ephemeral Diffie-Hellman key exchange and the impact of different lengths and subject public key algorithms of certification paths. Our analysis shows that the use of asymmetric cryptographic algorithms is entirely feasible on resource-constrained devices if the algorithms are carefully chosen and well implemented. This allows the well-proven TLS protocol to be used also for applications from the (Industrial) Internet of Things, including Fieldbus communication.
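Such execution-time measurements boil down to repeated wall-clock timing of a message-generation or parsing routine and reporting a robust statistic. A generic sketch of that harness in Python (the paper's measurements run on the Cortex-M4 itself, so this is only a stand-in for the measurement idea, with arbitrary repeat count):

```python
import time

def measure_ms(fn, repeats=50):
    """Median wall-clock time of fn() in milliseconds over several runs."""
    times = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn()
        times.append((time.perf_counter() - t0) * 1e3)
    times.sort()
    return times[len(times) // 2]  # median is robust against outlier runs
```

In the embedded setting the same pattern would wrap, e.g., the generation of a ClientKeyExchange message for each curve under test.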
This paper presents a system that uses a multi-stage AI analysis method for determining the condition of bicycle paths with machine learning. The approach for analyzing bicycle paths includes three stages of analysis: detection of the road surface, investigation of the condition of the bicycle paths, and identification of substrate characteristics. In this study, we focus on the first stage of the analysis. The approach employs a low-threshold data collection method using smartphone-generated video data for image recognition, in order to automatically capture and classify the surface condition.
For the analysis convolutional neural networks (CNN) are employed. CNNs have proven to be effective in image recognition tasks and are particularly well-suited for analyzing the surface condition of bicycle paths, as they can identify patterns and features in images. By training the CNN on a large dataset of images with known surface conditions, the network can learn to identify common features and patterns and reliably classify them.
The results of the analysis are then displayed on digital maps and can be utilized in areas such as bicycle logistics, route planning, and maintenance. This can improve safety and comfort for cyclists while promoting cycling as a mode of transportation. It can also assist authorities in maintaining and optimizing bicycle paths, leading to a more sustainable and efficient transportation system.
In recent times, 5G has found applications in several public as well as private networks. There is a growing need to make it compatible with diverse services without compromising security. The current security options for authenticating devices into a home network are 5G Authentication and Key Agreement (5G-AKA) and Extensible Authentication Protocol (EAP)-AKA'. However, for specific use cases such as private networks, more customizable and convenient authentication mechanisms are required. Current mobile networks use authentication based only on SIM cards, but as 5G is applied in fields like the IIoT and automation, even in Non-Public Networks (NPNs), there is a need for a simpler method of authentication. Certificate-based authentication is one such mechanism: it is passwordless and works solely on the information present in the digital certificate that the user holds. This paper suggests an authentication mechanism that performs certificate-based mutual authentication between the UE and the home network. The proposed concept identifies both the user and the network with digital certificates and intends to carry out primary authentication with their help. In this work, we conduct a study of presently available authentication protocols for 5G networks, both theoretically and experimentally, in hardware as well as virtual environments. On the basis of this analysis, a series of proposed steps for certificate-based primary authentication is presented.
Frequently occurring short-term orders of manufactured products require high machine availability. This requirement increases the importance of predictive maintenance solutions for bearings used in machines. Among these are hybrid solutions that rely on a physical model. For their use, knowing the different degradation stages of bearings is essential. This research analyzes the underlying failure mechanisms of these stages theoretically and in a practical example on the well-known FEMTO dataset used for the IEEE PHM 2012 Data Challenge. In addition, it shows for which use cases low-frequency accelerometers are sufficient. The analysis shows that the degradation stages toward the end of the bearing life can also be detected with low-frequency accelerometers. Furthermore, the importance of high-frequency accelerometers for detecting bearing faults in early degradation stages is pointed out. These aspects have so far received little attention from industry and research, despite their considerable cost-saving potential.
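Detecting the late degradation stages with a low-frequency accelerometer can be illustrated with a simple RMS-based threshold over the vibration signal. This is a hypothetical minimal sketch, not the paper's analysis; the window size and threshold factor are arbitrary.

```python
import math

def rms(window):
    """Root mean square of a vibration window."""
    return math.sqrt(sum(x * x for x in window) / len(window))

def degradation_alarm(signal, window=4, factor=2.0):
    """Return the index of the first window whose RMS exceeds `factor` times
    the baseline RMS of the first window, or None if none does (sketch of
    end-of-life detection with a low-frequency accelerometer)."""
    baseline = rms(signal[:window])
    for k in range(window, len(signal) - window + 1, window):
        if rms(signal[k:k + window]) > factor * baseline:
            return k
    return None
```

Early-stage faults produce bursts above the sensor bandwidth of low-frequency accelerometers, which is why such an RMS alarm only fires late in the bearing's life.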
The automatic processing of handwritten forms remains a challenging task, wherein detection and subsequent classification of handwritten characters are essential steps. We describe a novel approach in which both steps - detection and classification - are executed as one task by a deep neural network. To this end, training data is not annotated by hand but generated artificially from the underlying forms and existing datasets. We demonstrate that this single-task approach is superior to the state-of-the-art two-task approach. The current study focuses on handwritten Latin letters and employs the EMNIST dataset. However, limitations were identified with this dataset, necessitating further customization. Finally, an overall recognition rate of 88.28% was attained on real data obtained from a written exam.
As cyber-attacks and functional safety requirements increase in Operational Technology (OT), implementing security measures becomes crucial. The IEC/IEEE 60802 draft standard addresses security convergence in Time-Sensitive Networking (TSN) for industrial automation. We present the standard’s security architecture and its goals to establish end-to-end security with resource access authorization in OT systems. We compare the standard to our abstract, technology-independent model for the management of cryptographic credentials during the lifecycles of OT systems. Additionally, we implemented the processes, mechanisms, and protocols needed for IEC/IEEE 60802 and extended the architecture with public key infrastructure (PKI) functionalities to support complete security management processes.
In this paper we present the concept of the "KI-Labor Südbaden" to support regional companies in the use of AI technologies. The approach is based on the "Periodic Table of AI" and is extended with new dimensions for sustainability and for the impact of AI on the working environment. It is illustrated on the basis of three real-world use cases: 1. the detection of humans with low-resolution infrared (IR) images for collaborative robotics; 2. the use of machine data from specifically designed vehicles; 3. state-of-the-art Large Language Models (LLMs) applied to internal company documents. We explain the use cases, thereby demonstrating how to apply the Periodic Table of AI to structure AI applications.
Wireless communication networks are crucial for enabling megatrends like the Internet of Things (IoT) and Industry 4.0. However, testing these networks can be challenging due to the complex network topology and RF characteristics, requiring a multitude of scenarios to be tested. To address this challenge, the authors developed and extended an automated testbed called Automated Physical TestBed (APTB). This testbed provides the means to conduct controlled tests, analyze coexistence, emulate multiple propagation paths, and model dependable channel conditions. Additionally, the platform supports test automation to facilitate efficient and systematic experimentation. This paper describes the extended architecture, implementation, and performance evaluation of the APTB testbed. The APTB testbed provides a reliable and efficient solution for testing wireless communication networks under various scenarios. The implementation and performance verification of the testbed demonstrate its effectiveness and usefulness for researchers and industry practitioners.
Fused Filament Fabrication (FFF) is a widespread additive manufacturing technology, mostly in the field of printable polymers. The use of filaments filled with metal particles for the manufacture of metallic parts by FFF presents specific challenges regarding debinding and sintering. For aluminium and its alloys, the sintering temperature range overlaps with the temperature range of thermal decomposition of many commonly used “backbone” polymers, which provide stability to the green parts. Moreover, the high oxygen affinity of aluminium necessitates the use of special sintering regimes and alloying strategies. Therefore, it is challenging to achieve both low porosity and low levels of oxygen and carbon impurities at the same time. Feedstocks compatible with the special requirements of aluminium alloys were developed. We present results on the investigation of debinding/sintering regimes by Fourier Transform Infrared spectroscopy (FTIR) based In-Situ Process Gas Analysis and discuss optimized thermal treatment strategies for Al-based FFF.
This book constitutes the proceedings of the 23rd International TRIZ Future Conference on Towards AI-Aided Invention and Innovation, TFC 2023, which was held in Offenburg, Germany, during September 12–14, 2023. The event was sponsored by IFIP WG 5.4.
The 43 full papers presented in this book were carefully reviewed and selected from 80 submissions. The papers are divided into the following topical sections: AI and TRIZ; sustainable development; general vision of TRIZ; TRIZ impact in society; and TRIZ case studies.
Visual programming languages (VPL) let users develop software programs by combining visual program elements, like lists of objects, loops or conditional statements rather than by specifying them textually.
Humanoid robot programming is a very attractive and motivating application domain for students, especially for programming beginners. Humanoid robots are constructed in such a way that they mimic the human body by using actuators that perform like muscles. Typically, a humanoid robot is equipped with sensors and actuators and consists of a torso, a head, two arms, and two legs, though some humanoid robots may replicate only part of the body, for example from the waist up. In some cases, humanoid robots have heads designed to replicate additional human facial features such as eyes. A robot needs additional sensors to gather information about the conditions of its environment, allowing it to make the necessary decisions about its position or about actions that a situation requires, e.g. an arm movement or an open/close hand action. Other examples of sensors are reflective infrared sensors used to detect objects in proximity.
In this work, we introduce a use-case-centered approach based on the sensors and actuators of a robot and a workflow model to visually describe sequences of actions, including conditional and concurrent actions. We provide an in-depth discussion of a new VPL-based teaching method for programming humanoid robots. Open research challenges, limits, and perspectives for further development of our teaching approach are discussed as well.
Public educational institutions are increasingly confronted with a decline in the number of applicants, which is why competition between colleges and universities is also intensifying. For this reason, it is important to position oneself as an institution in order to be perceived by the various target groups and to differentiate oneself from the competition. In this context, the brand and thus its perception and impact play a decisive role, especially in view of the desired communication of the institution's own values and its self-image, the brand identity. To this end, emotions serve as an approach to creating positive stimulation and brand loyalty.
Polyarticulated active prostheses constitute a promising solution for upper limb amputees. The bottleneck for their adoption, though, is the lack of intuitive control. In this context, machine learning algorithms based on pattern recognition from electromyographic (EMG) signals represent a great opportunity for naturally operating prosthetic devices, but their performance is strongly affected by the selection of input features. In this study, we investigated different combinations of 13 EMG-derived features obtained from EMG signals of healthy individuals performing upper limb movements and tested their performance for movement classification using an artificial neural network. We found that the input data (i.e., the set of input features) can be reduced by more than 50% without any loss in accuracy, while diminishing the computing time required to train the classifier. Our results indicate that input features must be properly selected in order to optimize prosthetic control.
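Four of the most common time-domain EMG features can be computed directly from the raw signal window, as sketched below. These are standard features from the EMG literature; the study's exact set of 13 features is not reproduced here, so this is only an illustrative subset.

```python
import math

def emg_features(x):
    """Common time-domain EMG features over one signal window."""
    n = len(x)
    mav = sum(abs(v) for v in x) / n                           # mean absolute value
    rms = math.sqrt(sum(v * v for v in x) / n)                 # root mean square
    wl = sum(abs(x[i + 1] - x[i]) for i in range(n - 1))       # waveform length
    zc = sum(1 for i in range(n - 1) if x[i] * x[i + 1] < 0)   # zero crossings
    return {"MAV": mav, "RMS": rms, "WL": wl, "ZC": zc}
```

Feature selection then amounts to comparing classifier accuracy across subsets of such per-window feature vectors.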
An international study summarizes the threat situation in the OT environment under the heading "Growing security threats" [1]. According to this study, attacks on automation systems are likely to increase in the future. Accordingly, an automation system must be able to protect the integrity of the transmitted information in the future. This requirement is motivated, among other things, by the fact that the network-side isolation of industrial communication systems is no longer considered sufficient as the sole protective measure. This paper uses the example of PROFINET to show how the future requirements for a real-time communication protocol can be met and how they can be derived from the IEC 62443 standard.
To improve a building’s energy efficiency, many parameters should be assessed, considering the building envelope, energy loads, occupation, and HVAC systems. Fenestration is among the most important variables impacting residential building indoor temperatures. It is therefore crucial to use optimal, energy-efficient window glazing in buildings to reduce energy consumption while providing visual daylight comfort and thermal comfort. Many studies have focused on improving building energy efficiency via the building envelope or the heating, ventilation, and cooling systems, but only a few have studied the effect of glazing on building energy consumption. This paper therefore studies the influence of different glazing types on a building’s heating and cooling energy consumption. A real case study building located in a semi-arid climate was used. The building energy model was built using the OpenStudio simulation engine. The building indoor temperature was calibrated using ASHRAE’s statistical indices. Then a comparative analysis was conducted using seven different types of windows, including single, double, and triple glazing filled with air and argon. Triple-glazed and double-glazed windows with argon filling offer 37% and 32% annual energy savings, respectively. It should be stressed that the methodology developed in this paper could be useful for further studies improving building energy efficiency through optimal window glazing.
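The first-order effect of swapping glazing types can be sketched with a steady-state transmission-loss estimate. This is a back-of-the-envelope illustration, not the paper's OpenStudio model; U-values, areas, and degree-hours below are placeholder inputs.

```python
def annual_transmission_loss_kwh(u_value_w_per_m2k, area_m2, degree_hours_kkh):
    """Q = U * A * degree-hours. With U in W/(m^2 K), area in m^2 and
    degree-hours in kKh (kilo-Kelvin-hours), the result is in kWh."""
    return u_value_w_per_m2k * area_m2 * degree_hours_kkh

def glazing_savings_fraction(u_old, u_new):
    """Relative reduction in transmission loss when changing glazing type."""
    return 1.0 - u_new / u_old
```

A full simulation additionally captures solar gains and the g-value of the glazing, which is why whole-building results differ from this steady-state ratio.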
Soiling is an important issue in the renewable energy sector since it can result in significant yield losses, especially in regions with higher pollution or dust levels. To mitigate the impact of soiling on photovoltaic (PV) plants, it is essential to regularly monitor and clean the panels, as well as to develop accurate soiling predictions that can inform cleaning strategies and enhance the overall performance of PV power plants. This research focuses on the problem of soiling loss in photovoltaic power plants and the potential to improve the accuracy of soiling predictions. The study examines how soiling can affect the efficiency and productivity of the modules and how to measure and predict soiling using machine learning (ML) algorithms. The research includes analyzing real data from large-scale ground-mounted PV sites and comparing different soiling measurement methods. It was observed that there were some deviations between the real soiling loss values and the expected values for some projects in southern Spain. The main goal of this work is therefore to develop machine learning models that predict soiling more accurately. The developed models have a low mean square error (MSE), indicating their accuracy and suitability for predicting soiling rates. The study also investigates the impact of different cleaning strategies on the performance of PV power plants and provides a powerful application to predict both the soiling and the number of cleaning cycles.
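A baseline against which ML soiling models are judged can be sketched as a linear decay of the performance ratio with days since the last cleaning, scored by MSE. This is a hypothetical illustration, not a model from the study; the decay rate is an arbitrary placeholder.

```python
def soiling_ratio(days_since_cleaning, rate_per_day=0.0015):
    """Linear soiling model (assumption): the performance ratio decays
    linearly with the number of days since the last cleaning."""
    return max(0.0, 1.0 - rate_per_day * days_since_cleaning)

def mse(y_true, y_pred):
    """Mean square error, the metric used to evaluate the soiling models."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)
```

An ML model would replace `soiling_ratio` with a learned function of weather and dust features, and a cleaning strategy would pick cleaning days that keep the predicted ratio above a target.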
Encapsulant-free N.I.C.E. modules have strong ecological advantages compared to conventional laminated modules but generally suffer from lower electrical performance. Via long-term outdoor monitoring of full-size industrial modules of both types with identical solar cells, we investigated whether the performance difference remains constant over time and which parameters influence its value. After assessing about a full year’s data, two obvious levers for N.I.C.E. optimization are identified: the use of textured glass and of transparent adhesives on the module rear side. Also, the performance loss could be alleviated using tracking systems due to lower AOI values. Our measurements additionally show that N.I.C.E. module surfaces are on average about 2.5 °C cooler than laminated modules. With these findings, we lay out a roadmap to reduce today’s LIV gap of about 5%rel by different optimizations.
In this paper we report on the further success of our work to develop a multi-method energy optimization that works with a digital twin concept. The twin concept serves to replicate the production processes of different kinds of production companies, including complex energy systems, and to test market interactions, in order to then use them for model-predictive optimization. The presented work reports on the performed flexibility assessment, leading to a flexibility audit with a list of measures, and on the impact of the energy optimizations made in relation to interactions with the local power grid, i.e., the exchange node of the low-voltage distribution grid. The analysis and continuous exploration of flexibilities, as well as the exchange with energy markets, require a “guide” toward continuous optimization. A further tool, the Flexibility Survey and Control Panel, supports decision-making processes on the day-ahead horizon for real production plants, as well as investment planning to improve machinery, staff schedules, and production infrastructure.
With recent developments in the Ukrainian-Russian conflict, many are discussing Germany’s dependency on fossil fuel imports in its energy system and how the country can proceed with reducing that dependency. Among the wide-ranging consumption sectors, the electricity sector is the obvious place to start. Recent reports showed that the German federal government already intends to have fully renewable electricity by 2035 while exploiting all possible clean power options. This was published in the federal government’s climate emergency program (Easter Package) in early 2022. The aim of this package is to initiate a rapid transition and decarbonization of the electricity sector. The Easter Package expects an enormous growth of renewable energies to a completely new level, with at least 80% renewable gross energy consumption and an extensive, broad deployment of different generation technologies on various scales. This paper discusses this ambitious plan, outlines some insights into this large and rapidly advancing step, and shows how much Germany will need in order to achieve this milestone towards a fully green supply of the electricity sector. Different scenarios and shares of renewables are investigated in order to elaborate on the brought-forward climate-neutral goal of the electricity sector by 2035. The results point out some promising aspects of achieving 100% renewable power, with massive investments in both generation and storage technologies.
Hot forging dies are subjected to high cyclic thermo-mechanical loads. In critical areas, the occurring stresses can exceed the material’s yield limit. Additionally, loading at high temperatures leads to thermal softening of the used martensitic materials. These effects can result in an early crack initiation and unexpected failure of the dies, usually described as thermo-mechanical fatigue (TMF). In previous works, a temperature-dependent cyclic plasticity model for the martensitic hot forging tool steel 1.2367 (X38CrMoV5-3) was developed and implemented in the finite element (FE)-software Abaqus. However, in the forging industry, application-specific software is usually used to ensure cost-efficient numerical process design. Therefore, a new implementation for the FE-software Simufact Forming 16.0 is presented in this work. The results are compared and validated with the original implementation by means of a numerical compression test and a cyclic simulation is calculated with Simufact Forming.
The importance of machine learning (ML) has been increasing dramatically for years. From assistance systems to production optimisation to healthcare support, almost every area of daily life and industry is coming into contact with machine learning. Besides all the benefits ML brings, the lack of transparency and the difficulty of establishing traceability pose major risks. While solutions exist to make the training of machine learning models more transparent, traceability is still a major challenge. Ensuring the identity of a model is another challenge, as the unnoticed modification of a model is also a danger when using ML. This paper proposes creating an ML Birth Certificate and an ML Family Tree secured by blockchain technology. Important information about training and about changes to the model through retraining can be stored in a blockchain and accessed by any user, providing more security and traceability for an ML model.
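The core mechanism of such a blockchain-backed record is a hash chain: each training or retraining event commits to its predecessor via a cryptographic hash, so any later modification is detectable. A minimal stdlib sketch of that idea (record fields are illustrative, not the paper's schema):

```python
import hashlib
import json

def add_record(chain, payload):
    """Append a training/retraining event; each record commits to the
    previous one via its SHA-256 hash (blockchain-style linkage)."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"prev": prev_hash, "payload": payload}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({"prev": prev_hash, "payload": payload, "hash": digest})
    return chain

def verify(chain):
    """Recompute every hash and check the linkage; False if any record
    was tampered with after the fact."""
    for k, rec in enumerate(chain):
        body = {"prev": rec["prev"], "payload": rec["payload"]}
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != rec["hash"]:
            return False
        if k > 0 and rec["prev"] != chain[k - 1]["hash"]:
            return False
    return True
```

On an actual blockchain the linkage and consensus are provided by the ledger itself; this sketch only shows why retroactive edits to a model's history become visible.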
As industrial networks continue to expand and connect more devices and users, they face growing security challenges such as unauthorized access and data breaches. This paper delves into the crucial role of security and trust in industrial networks and how trust management systems (TMS) can mitigate malicious access to these networks. The TMS presented in this paper leverages distributed ledger technology (blockchain) to evaluate the trustworthiness of blockchain nodes, including devices and users, and make access decisions accordingly. While this approach is applicable to blockchain, it can also be extended to other areas. This approach can help prevent malicious actors from penetrating industrial networks and causing harm. The paper also presents the results of a simulation to demonstrate the behavior of the TMS and provide insights into its effectiveness.
Printed electronics can add value to existing products by providing new smart functionalities, such as sensing elements over large areas on flexible or non-conformal surfaces. Here we present a hardware concept and prototype for a thinned ASIC integrated with an inkjet-printed temperature sensor alongside built-in additional security and unique identification features. The hybrid system exploits the advantages of inkjet-printable platinum-based sensors, physically unclonable function circuits and a fluorescent particle-based coating as a tamper protection layer.
Brand-related user-generated content allows companies to achieve several important objectives, such as increasing sales and creating higher user engagement. In this paper, a research framework is developed that provides an overview of the processes necessary to successfully use brand-related user-generated content. The framework also helps managers to understand the main motives of users when posting brand-related user-generated content. Expert interviews were carried out to validate the research framework, and the results from the interviews support the proposed framework. Brand-related user-generated content can increase purchase intention and community engagement. From a user’s perspective, the opportunity to interact with a brand and be featured on official brand channels can be seen as the main motivation for creating brand-related user-generated content.
Micronization of biochar (BC) may ease its application in agriculture. For example, fine biochar powders can be applied as suspensions via drip-irrigation systems or can be used to produce granulated fertilizers. However, micronization may affect important physical biochar properties like the water holding capacity (WHC) or the porosity.
The aim of this study is to identify indicators at the country level that could prove useful in improving the effectiveness of fraud detection in European Structural and Investment Funds. The chapter analyses EU funds from the 2014–2020 period. The study suggests the convenience of tracking funds, especially in countries with higher GDP and higher transparency levels, and the lesser relevance of the number of irregularities for countries with higher GDP and those receiving larger funds. Fraud and fraud detection rates in individual funds vary significantly across states. Federal states, such as the Federal Republic of Germany, are comparatively successful in detecting fraud in EU funds.
Currently, many theoretical as well as practically relevant questions towards the transferability and robustness of Convolutional Neural Networks (CNNs) remain unsolved. While ongoing research efforts are engaging these problems from various angles, in most computer vision related cases these approaches can be generalized to investigations of the effects of distribution shifts in image data. In this context, we propose to study the shifts in the learned weights of trained CNN models. Here we focus on the properties of the distributions of dominantly used 3×3 convolution filter kernels. We collected and publicly provide a dataset with over 1.4 billion filters from hundreds of trained CNNs, using a wide range of datasets, architectures, and vision tasks. In a first use case of the proposed dataset, we can show highly relevant properties of many publicly available pre-trained models for practical applications: I) We analyze distribution shifts (or the lack thereof) between trained filters along different axes of meta-parameters, like visual category of the dataset, task, architecture, or layer depth. Based on these results, we conclude that model pre-training can succeed on arbitrary datasets if they meet size and variance conditions. II) We show that many pre-trained models contain degenerated filters which make them less robust and less suitable for fine-tuning on target applications. Data & Project website: https://github.com/paulgavrikov/cnn-filter-db.
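An analysis of the kind described above can be sketched on synthetic kernels; the statistics below (mean absolute weight, variance, fraction of near-zero kernels) are illustrative stand-ins for the paper's methodology, and the threshold for "degenerated" filters is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for filters extracted from trained CNNs: each row is one
# flattened 3x3 convolution kernel (the actual dataset holds >1.4B of them).
filters = rng.normal(0.0, 0.1, size=(1000, 9))

def filter_stats(f, zero_thresh=1e-2):
    """Summary statistics of a filter collection, of the kind used to
    compare distributions across datasets, architectures, or layer depths."""
    return {
        "mean_abs": float(np.abs(f).mean()),
        "variance": float(f.var()),
        # fraction of near-zero ("degenerated") kernels
        "sparse_frac": float((np.abs(f).max(axis=1) < zero_thresh).mean()),
    }

stats = filter_stats(filters)
assert 0.0 <= stats["sparse_frac"] <= 1.0
```

Comparing such statistics between two filter collections (e.g. grouped by task or layer depth) is one simple way to probe for the distribution shifts the paper studies.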
Deep learning models are intrinsically sensitive to distribution shifts in the input data. In particular, small, barely perceivable perturbations to the input data can force models to make wrong predictions with high confidence. A common defense mechanism is regularization through adversarial training, which injects worst-case perturbations back into training to strengthen the decision boundaries and to reduce overfitting. In this context, we perform an investigation of 3×3 convolution filters that form in adversarially-trained models. Filters are extracted from 71 public models of the ℓ∞-RobustBench CIFAR-10/100 and ImageNet1k leaderboard and compared to filters extracted from models built on the same architectures but trained without robust regularization. We observe that adversarially-robust models appear to form more diverse, less sparse, and more orthogonal convolution filters than their normal counterparts. The largest differences between robust and normal models are found in the deepest layers and in the very first convolution layer, which consistently and predominantly forms filters that can partially eliminate perturbations, irrespective of the architecture.
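The filter properties highlighted above, sparsity and pairwise orthogonality, can be quantified with simple metrics; this is a hedged sketch on random kernels, not the exact measures used in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def diversity_metrics(filters, eps=1e-3):
    """Sparsity and mean pairwise orthogonality of flattened 3x3 kernels."""
    f = filters.reshape(len(filters), -1)
    # fraction of weights that are (near) zero
    sparsity = float((np.abs(f) < eps).mean())
    # cosine similarity between all kernel pairs; orthogonal kernels give 0
    unit = f / np.clip(np.linalg.norm(f, axis=1, keepdims=True), 1e-12, None)
    cos = unit @ unit.T
    off_diag = cos[~np.eye(len(f), dtype=bool)]
    orthogonality = float(1.0 - np.abs(off_diag).mean())
    return sparsity, orthogonality

normal_like = rng.normal(size=(50, 3, 3))
s, o = diversity_metrics(normal_like)
assert 0.0 <= s <= 1.0 and 0.0 <= o <= 1.0
```

Under the paper's observations, filters from adversarially-trained models would score lower on sparsity and higher on orthogonality than those from normally trained counterparts.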
In this paper, the authors focus on the description of polarization with the help of the Jones calculus and on the application of polarization in photography. Furthermore, the effect of the circular polarization filter is described using the Jones calculus. In addition, an enhancement of the artistic and creative possibilities in photography through quantization or parametrization by the Jones matrices is presented.
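As a small worked example of the Jones calculus discussed above, the following sketch composes a circular polarization filter from a linear polarizer and a quarter-wave plate at 45° and verifies that the output is circularly polarized; standard Jones conventions are assumed, not necessarily those of the paper.

```python
import numpy as np

def rot(theta):
    """2x2 rotation matrix for rotating a Jones element by theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

# Jones matrices in common conventions
LP0 = np.array([[1, 0], [0, 0]], dtype=complex)   # linear polarizer, horizontal
QWP = np.array([[1, 0], [0, 1j]], dtype=complex)  # quarter-wave plate, fast axis horizontal

def rotated(M, theta):
    """Jones matrix of element M with its axis rotated by theta."""
    return rot(theta) @ M @ rot(-theta)

# Circular polarization filter: linear polarizer followed by a QWP at 45 degrees
CPL = rotated(QWP, np.pi / 4) @ LP0

E_in = np.array([1.0, 0.0], dtype=complex)  # horizontally polarized light
E_out = CPL @ E_in

# circularly polarized output: equal amplitudes, 90-degree phase difference
assert np.isclose(abs(E_out[0]), abs(E_out[1]))
assert np.isclose(E_out[1] / E_out[0], -1j)
```

The same `rotated` helper lets one parametrize the filter angle, which is the kind of parametrization by Jones matrices the abstract refers to.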
Voice user interfaces (VUIs) offer an intuitive, fast and convenient way for humans to interact with machines and computers. Yet, whether they’ll be truly successful and find widespread uptake in the near future depends on the user experience (UX) they offer. With this survey-based study (n = 108), we aim to identify the major annoyances German voice assistant users are facing in voice-driven human-computer interactions. The results of our questionnaire show that irritations appear in six categories: privacy issues, unwanted activation, comprehensibility, response quality, conversational design and voice characteristics. Our findings can help identify key areas of work to optimize voice user experience in order to achieve greater adaptation of the technology. In addition, they can provide valuable information for the further development and standardization of voice user experience (VUX) research.
The conversion of space heating for private households to climate-neutral energy sources is an essential component of the energy transition, as this sector was responsible for 9.4% of Germany’s carbon dioxide emissions as of 2018. In addition to reducing demand through better insulation, the use of heat pumps fed with electricity from renewable energy sources, such as on-site photovoltaic (PV) systems, is an important solution approach.
Advanced energy management and control can help to make optimal use of such heating systems. Optimal here can e.g. refer to maximizing self-consumption of self-generated PV power, extended component lifetime or a grid-friendly behavior that avoids load peaks. A powerful method for this is model predictive control (MPC), which calculates optimal schedules for the controllable influence variables based on models of the system dynamics, current measurements of system states and predictions of future external influence parameters.
In this paper, we will discuss three different use cases that show how artificial intelligence can contribute to the realization of such an MPC-based energy management and control system. This will be done using the example of a real inhabited single family home that has provided the necessary data for this purpose and where the methods are implemented and tested. The heating system consists of an air-water heat pump with direct condensation, a thermal stratified storage tank, a pellet burner and a heating rod and provides both heating and hot water. The house generates a significant portion of its electricity needs through a rooftop PV system.
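As a rough illustration of the MPC idea described above, the following toy sketch chooses a heat-pump on/off schedule over a short horizon by brute force; all parameters (prices, temperature dynamics, comfort band) are invented for illustration and are not taken from the real building.

```python
import itertools

# Toy MPC step: choose the heat-pump on/off schedule over a short horizon
# that minimizes electricity cost while keeping the storage temperature
# inside its comfort band. All numbers are illustrative assumptions.
PRICE = [0.30, 0.28, 0.10, 0.08, 0.25, 0.32]  # EUR/kWh forecast per hour
P_EL = 2.0          # kW electric power of the heat pump when on
HEAT_GAIN = 3.0     # K storage temperature rise per hour of operation
HEAT_LOSS = 1.0     # K temperature drop per hour from demand and losses
T_MIN, T_MAX = 40.0, 60.0

def best_schedule(t0):
    """Enumerate all on/off plans and return (cost, plan) of the cheapest
    feasible one, starting from storage temperature t0."""
    best = None
    for plan in itertools.product([0, 1], repeat=len(PRICE)):
        t, cost, feasible = t0, 0.0, True
        for u, price in zip(plan, PRICE):
            t += u * HEAT_GAIN - HEAT_LOSS
            cost += u * P_EL * price
            if not (T_MIN <= t <= T_MAX):
                feasible = False
                break
        if feasible and (best is None or cost < best[0]):
            best = (cost, plan)
    return best

cost, plan = best_schedule(t0=45.0)  # shifts heating into the cheapest hour
```

A real MPC would re-solve this problem at every control interval with updated measurements and forecasts (receding horizon) and would use a proper optimizer instead of enumeration.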
As a university, it is more and more difficult to reach all target groups equally. Common problems like information overload, numerous institutions with the same focus and multi-channel communication make it hard to gain the attention of the target group. This paper is four-fold: we present an overview of the state of the art and the importance of the study (I), based on which we highlight our approach to user experience analysis. First, we identified the irritations in the course of an expert evaluation (II) and verified them within a test including the target groups (III). Finally, based on the results, we were able to provide recommendations for action to improve the UX and to be used for the conception of an intranet (IV).
Seismic data often has missing traces due to technical acquisition or economic constraints. A complete dataset is crucial for several processing and inversion techniques. Deep learning algorithms based on convolutional neural networks (CNNs) have provided alternative solutions that overcome limitations of traditional interpolation methods, e.g. data regularity, linearity assumptions, etc. There are two different paradigms of CNN methods for seismic interpolation. The first one, the so-called deep prior interpolation (DPI), trains a CNN to map random noise to a complete seismic image using only the decimated image itself. The second one, referred to as the standard deep learning method, trains a CNN to map a decimated seismic image into a complete one using a dataset of complete and artificially decimated images. Within this research, we systematically compare the performance of both methods for different quantities of regular and irregular missing traces using 4 datasets. We evaluate the results of both methods using 5 well-known metrics. We found that the DPI method performs better than the standard method if the percentage of missing traces is low (10%), and the standard method performs better if the level of decimation is high (50%).
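The regular and irregular decimation used to create test data can be sketched as follows; the mask construction is a plausible reading of the setup, not the authors' exact code.

```python
import numpy as np

rng = np.random.default_rng(0)

def decimate(image, fraction, mode="regular"):
    """Zero out a fraction of traces (columns) of a seismic image.
    'regular' removes evenly spaced traces, 'irregular' removes random ones."""
    n_traces = image.shape[1]
    n_remove = int(round(fraction * n_traces))
    if mode == "regular":
        step = n_traces / n_remove
        idx = (np.arange(n_remove) * step).astype(int)
    else:
        idx = rng.choice(n_traces, size=n_remove, replace=False)
    out = image.copy()
    out[:, idx] = 0.0
    return out, idx

image = rng.normal(size=(64, 100))  # time samples x traces (synthetic stand-in)
decimated, idx = decimate(image, 0.30, mode="irregular")
assert len(set(idx)) == 30
assert np.allclose(decimated[:, idx], 0.0)
```

The (`decimated`, `image`) pair is exactly the input/target pair the standard method trains on, while DPI would see only `decimated`.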
In this work, we explore three deep learning algorithms applied to seismic interpolation: deep prior image (DPI), standard, and generative adversarial networks (GAN). The standard and GAN approaches rely on a dataset of complete and decimated seismic images for the training process, while the DPI method learns from the decimated image itself, without training images. We carry out two main experiments, considering 10%, 30%, and 50% of regular and irregular decimation. The first tests the optimal situation for the GAN and the standard approaches, where training and testing images are from the same dataset. The second tests the ability of the GAN and standard methods to learn simultaneously from three datasets and generalize to a fourth dataset not used during training. The standard method provides the best results in the first experiment, when the training distribution is similar to the testing one. In this situation, the DPI approach reports the second-best results. In the second experiment, the standard method shows the ability to learn three data distributions simultaneously and effectively for the regular case. In the irregular case, the DPI approach is more effective. The GAN approach is the least effective of the three deep learning methods in both experiments.
In this study, various imaging algorithms for the localization of objects have been investigated. To this end, an Ultra-Wideband (UWB) radar-based experimental setup with a circular antenna array was designed as part of this work. This concept could be particularly useful in microwave medical imaging applications. In order to validate its applicability in microwave imaging, different imaging algorithms have been evaluated and compared by means of our experimental setup. Accurate imaging results have been achieved with our system under multiple test scenarios.
In this study, an approach to a microwave-based radar system for the localization of objects has been proposed. This could be particularly useful in microwave imaging applications such as cardiac catheter detection. An experimental system is defined and realized with the selection of an appropriate antenna design. Hardware control functions and different imaging algorithms are implemented as well. The functionality of this measurement setup has been analyzed considering multiple test scenarios, and it proved capable of locating multiple objects as well as extended objects.
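One classic imaging algorithm suited to such a setup is delay-and-sum (backprojection). The sketch below localizes a point target from synthetic round-trip delays on a circular array; the geometry, the Gaussian pulse kernel and the free-space propagation speed are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

C = 3e8  # propagation speed (free-space assumption)

# Circular array of 8 antennas, radius 0.2 m (monostatic, illustrative geometry)
angles = np.linspace(0, 2 * np.pi, 8, endpoint=False)
antennas = 0.2 * np.column_stack([np.cos(angles), np.sin(angles)])

target = np.array([0.05, 0.03])

# Synthetic round-trip delays "measured" by each antenna
delays = 2 * np.linalg.norm(antennas - target, axis=1) / C

def delay_and_sum(point):
    """Coherence score: how well the expected delays at 'point' match the
    measured ones (a Gaussian kernel stands in for the pulse shape)."""
    expect = 2 * np.linalg.norm(antennas - point, axis=1) / C
    return float(np.exp(-(((expect - delays) * C) ** 2) / 1e-4).sum())

# Evaluate the imaging functional on a coarse grid and pick the maximum
xs = np.linspace(-0.1, 0.1, 21)
grid = [(x, y) for x in xs for y in xs]
best = max(grid, key=lambda p: delay_and_sum(np.array(p)))
assert abs(best[0] - 0.05) <= 0.01 and abs(best[1] - 0.03) <= 0.01
```

With real UWB data, the Gaussian kernel would be replaced by sampling the received waveforms at the expected delays before summing.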
3D printing offers customisation capabilities for the suspensions of oscillators in vibration energy harvesters. Adjusting printing parameters or geometry makes it possible to influence dynamic properties like the resonance frequency or bandwidth of the oscillator. This paper presents simulation results and measurements for a spiral-shaped suspension printed with polylactic acid (PLA) at different layer heights. Eigenfrequencies have been simulated and measured, and damping ratios have been experimentally determined.
This paper presents the development of a capacitive level sensor for robotics applications, designed to measure liquid levels during a pouring process. The proposed sensor design applies the advantages of guard electrodes in combination with passive shielding to increase resistance against external influences. This is important for reliable operation in rapidly changing measurement environments, as they occur in the field of robotics. The non-contact sensor type for liquid level measurement avoids contamination and suits food guidelines, so the designed sensor can be utilized in gastronomic applications. Two versions of the sensor were simulated, fabricated, and compared. The first version is based on copper electrodes, and the other type is fully 3D printed with electrodes made of conductive polylactic acid (PLA).
The development of a 3D printed force sensor for a gripper was studied, applying an embedded constantan wire as the sensing element. In the first section, the state of the art is explained. In the main section of the paper, the modeling, simulation and verification of a sensor element are described for a three-point bending test carried out in accordance with DIN EN ISO 178. The Fused Filament Fabrication (FFF) 3D printing process utilized for manufacturing the sensor samples in combination with an industrial robot is shown. A comparison between theory and practice is considered in detail. Finally, an outlook is given regarding the integration of the sensor element into gripper jaws.
Separation Estimation with Thermal Cameras for Separation Monitoring in Human-Robot Collaboration
(2022)
Human-robot collaborative applications have the drawback of being less efficient than their non-collaborative counterparts. One of the main reasons is that the robot has to slow down when a human being is within its operating space. There are different approaches to dynamic speed and separation monitoring in human-robot collaborative applications. One approach additionally differentiates between human and non-human objects to increase efficiency in speed and separation monitoring. This paper proposes to estimate the separation distance by measuring the temperature of the approaching object. Measurements show that the measured temperature of a human being decreases by about 1 °C per meter of distance from the sensor. This allows an estimation of the separation between a robotic system and a human being.
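Inverting the reported linear relation gives a simple distance estimator; the reference temperature `T_REF` and the clamping behaviour below are assumptions for illustration, not values from the paper.

```python
# Toy inversion of the reported linear relation: the measured temperature of
# a person drops by about 1 degC per meter of distance from the thermal sensor.
T_REF = 34.0   # degC, assumed reading at zero distance (calibration value)
SLOPE = 1.0    # degC per meter, the reported rate of decrease

def estimate_separation(measured_temp):
    """Estimate human-robot separation distance in meters from a thermal
    reading; readings above T_REF are clamped to zero distance."""
    return max(0.0, (T_REF - measured_temp) / SLOPE)

assert estimate_separation(34.0) == 0.0
assert estimate_separation(31.5) == 2.5
```

In a speed-and-separation-monitoring loop, this estimate would then be compared against the protective separation distance to scale the robot's velocity.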
A novel way of controlling assistive technology, such as smart wheelchairs and robotic arms, is the use of eye tracking devices [10, 4]. In this context, usage-supporting methods like artificial feedback are not well explored. Vibrotactile feedback has been shown to be helpful in decreasing the cognitive load on the visual and auditory channels and can provide a perception of touch [17]. People with severe limitations of motor function could benefit from eye tracking controls supported by vibrotactile feedback. In this study, fundamental results are presented on the design of an appropriate vibrotactile feedback system for eye tracking applications. We show that a perceivable vibrotactile stimulus has no significant effect on the accuracy and precision of a head-worn eye tracking device. It is anticipated that the results of this paper will lead to new insights into the design of vibrotactile feedback for eye tracking applications and eye tracking controls.
During the periods of social isolation to contain the advance of COVID-19 in 2020 and 2021, educational institutions faced the challenge of adopting technological strategies not only to ensure continuity of students’ classes, but also to support their mental health in a period of uncertainty and health risks. Loneliness is an emotional distress caused by the lack of meaningful social connections; it has increasingly affected young adults worldwide during the pandemic’s social isolation and still bears psychological effects in the current post-pandemic period. In the light of this challenge, the Nonenliness App was developed as a way to bring together university communities to address issues related to loneliness and mental health disorders through a gamified and social online environment. In this paper, we present the app and its main functionalities (Beta version) and discuss the preliminary results of a pilot clinical study conducted with university students in Germany (N = 12) to verify the app’s efficacy and usability, alongside the challenges faced and the next steps to be taken regarding the platform’s improvement.
This work documents the rising acceptance of social robots for healthcare as well as their growing economic potential from 2017 to 2021. The comparison is based on two studies in the active assisted living (AAL) community. We first provide a brief overview of social robotics and a discussion of the economic potential of social health robots. We found that, despite the huge potential for robotic support in healthcare and domestic routines, social robots still lack the functionality to access that potential. At the same time, the study exemplifies a rise in acceptance: all health-related activities are more accepted in 2021 than in 2017, most of them with high statistical significance. When investigating the economic perspective, we found that people are aware of the influence of cultural, spiritual, or religious beliefs. Most experts (57%), having a European background, expect the state or the government to be the key driver for establishing social robots in healthcare, and significantly prefer leasing or renting a social health robot to buying one. Nevertheless, we speculate that it might be a global financial elite which is first to adopt social robots.
We consider a local group of agents that exchange time-series data values and compute an approximation of the mean value of all agents. An agent, represented by a node, knows all local neighbor nodes in the same group and has the contact information of nodes in other groups. The nodes interact with each other in synchronous rounds to exchange updated time-series data values using the random call communication model. The amount of data exchanged between agent-based sensors in the local group network affects the accuracy of the aggregation function results. At each time step, an agent-based sensor can update its input data value and send the updated value to the group head node, which forwards it to all group members in the same group. Grouping nodes in peer-to-peer networks shows an improvement in Mean Squared Error (MSE).
Solar energy plays a central role in the energy transition. Clouds generate locally large fluctuations in the generation output of photovoltaic systems, which is a major problem for energy systems such as microgrids, among others. For an optimal design of a power system, this work analyzed the variability using a spatially distributed sensor network at Stuttgart Airport. It has been shown that the spatial distribution partially reduces the variability of solar radiation. A tool was also developed to estimate the output power of photovoltaic systems using irradiation time series and assumptions about the photovoltaic sites. For days with high fluctuations of the estimated photovoltaic power, different energy system scenarios were investigated. It was found that the approach can be used to obtain a more realistic representation of aggregated PV power by taking spatial smoothing into account, and that the resulting PV power generation profiles provide a good basis for energy system design considerations like battery sizing.
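The spatial-smoothing effect reported above can be reproduced qualitatively on synthetic data: averaging several sites with independent cloud-induced fluctuations reduces ramp variability compared to a single site. All numbers below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic irradiance series for 8 sensor sites: a common clear-sky
# component plus independent cloud-induced fluctuations per site.
n_sites, n_steps = 8, 3600
clear_sky = 800.0 + 100.0 * np.sin(np.linspace(0, np.pi, n_steps))
clouds = rng.normal(0.0, 150.0, size=(n_sites, n_steps))
sites = clear_sky + clouds

def ramp_variability(series):
    """Standard deviation of step-to-step changes (ramps) in W/m^2."""
    return float(np.diff(series).std())

single = ramp_variability(sites[0])
aggregated = ramp_variability(sites.mean(axis=0))
# Averaging across sites smooths out the uncorrelated cloud fluctuations
assert aggregated < single
```

For perfectly uncorrelated fluctuations the ramp variability of the average drops roughly with the square root of the number of sites; real neighbouring sites are partially correlated, so the reduction is smaller, which matches the "partially reduces" finding above.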
One of the major challenges impeding the energy transition is the intermittency of solar and wind electricity generation due to their dependency on weather changes. Demand-side energy flexibility contributes considerably to mitigating the energy supply/demand imbalances resulting from external influences such as the weather. As some of the largest electricity consumers, industrial enterprises present a high demand-side flexibility potential from their production processes and on-site energy assets. In this direction, methods are needed that focus on enabling energy flexibility and ensuring the active participation of such enterprises in the electricity markets, especially under variable electricity prices. This paper presents a generic model library for an industrial enterprise implemented with optimal control for energy flexibility purposes. The components in the model library represent the typical technical units of an industrial enterprise on the material, media, and energy flow levels with their operative constraints. A case study of a plastic manufacturing plant using the generic model library is also presented, in which the results of two simulations with different electricity prices are compared so that the behavior of the model can be assessed. The results show that the model provides an optimal scheduling of the manufacturing system according to the variations in electricity prices and ensures optimal control of the utilities and energy systems needed for production.
Featherweight Go (FG) is a minimal core calculus that includes essential Go features such as overloaded methods and interface types. The most straightforward semantic description of the dynamic behavior of FG programs is to resolve method calls based on run-time type information. A more efficient approach is to apply a type-directed translation scheme where interface-values are replaced by dictionaries that contain concrete method definitions. Thus, method calls can be resolved by a simple lookup of the method definition in the dictionary. Establishing that the target program obtained via the type-directed translation scheme preserves the semantics of the original FG program is an important task.
To establish this property, we employ logical relations that are indexed by types to relate source and target programs. We provide rigorous proofs and give a detailed discussion of the many subtle corners that we have encountered, including the need for a step index due to recursive interfaces and method definitions.
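The dictionary-passing idea behind the translation can be illustrated outside Go; the following Python sketch (with hypothetical Shape/Area names) shows how an interface value becomes a pair of a concrete value and a dictionary of method definitions, so that a method call reduces to a dictionary lookup instead of run-time type dispatch.

```python
# Concrete method definitions, one per implementing "type"
def square_area(s):
    return s["side"] ** 2

def circle_area(c):
    return 3.14159 * c["radius"] ** 2

# The "dictionaries" of the translation: concrete methods per type
SQUARE_DICT = {"Area": square_area}
CIRCLE_DICT = {"Area": circle_area}

def make_interface_value(value, dictionary):
    """An interface value is a pair of the underlying value and its
    method dictionary (the target representation of the translation)."""
    return (value, dictionary)

def call(iface, method):
    """Method call as a plain dictionary lookup, no type inspection."""
    value, dictionary = iface
    return dictionary[method](value)

shapes = [
    make_interface_value({"side": 2.0}, SQUARE_DICT),
    make_interface_value({"radius": 1.0}, CIRCLE_DICT),
]
areas = [call(s, "Area") for s in shapes]
assert areas[0] == 4.0
assert abs(areas[1] - 3.14159) < 1e-9
```

The semantic-preservation question the paper answers is whether this lookup-based target behaves exactly like the original dispatch-based program, for every well-typed source program.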
In the "BioMeth" project, two novel plant concepts, not previously described for biological methanation, were developed. The newly developed inverse membrane reactor (IMR) makes it possible to spatially separate the input of the required educt gases, hydrogen (H2) and carbon dioxide (CO2), via commercially available ultrafiltration membranes from the degassing zone for methane extraction, and additionally to use hydraulic pressure to increase the hydrogen input. One advantage of the process is that, in the future, both CO2 from conventional biogas and CO2 from industrial exhaust streams, e.g. from the cement industry, can be used as the carbon source.
Beyond biological methanation, in the authors' assessment the inverse membrane reactor is also generally suitable for the biotechnological production of non-volatile products from gaseous substrates. In the IMR, for example, one membrane module can be used for the input of the educt gases, while a further hollow-fiber membrane module can be used for the cyclic or continuous separation of the product-containing reaction solution while retaining the microbiology, in the sense of an in-situ product recovery (ISPR) concept.
An outstanding result of the investigation of the IMR was that, with the membrane gassing concept, CH4 concentrations of > 90 vol.% were achieved continuously over a one-year test series with flexible gas input. After start-up, apart from the addition of H2 and CO2 as energy and carbon sources, only two additions of supplements were required. The maximum membrane-area-specific methane formation rate achieved without gas circulation was 83 LN of methane per m2 of membrane area per day, at a product gas composition of 94 vol.% methane, 2 vol.% H2 and 4 vol.% CO2.
The second process, still in an early test phase, uses pressure differences in a 10 m tall packed counter-current bubble column combined with a likewise 10 m tall separate degassing reactor. This process concept is intended to achieve high hydrogen solubility due to the hydrostatic pressure at the foot of the column while at the same time minimizing the energy demand, reducing the investment costs and creating optimal temporal and spatial conditions for the microbiological conversion of H2 and CO2. Initial investigations of the mass transfer of air in the counter-current bubble column reactor confirmed good enrichment of the circulated liquid even at comparatively low superficial gas velocities. In the second column of the reactor setup, the liquid, which is supersaturated with gas relative to atmospheric pressure, is expected to degas at the top due to pressure relief. This degassing of the liquid was also confirmed using the example of air input.
We consider large-scale peer-to-peer sensor networks, which try to calculate and distribute the mean value of all sensor inputs. For this, we design, simulate and evaluate distributed approximation algorithms which reduce the number of messages. The main difference between these algorithms is the underlying communication protocol; all use the random call model, where, in a discrete round model, each node can call a random sensor node with uniform probability. The amount of data exchanged between sensor nodes and used in the calculation process affects the accuracy of the aggregation results, leading to a trade-off situation. The key idea of our algorithms is to limit the sample size using the Finite Population Correction (FPC) method and to collect the data via distributed aggregation using Push-Pull Sampling, Pull Sampling, and Push Sampling communication protocols. It turns out that all methods show an exponential improvement of the Mean Squared Error (MSE) with the number of messages and rounds.
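A minimal sketch of the push-pull variant described above, assuming the random call model with synchronous rounds; the pairwise averaging rule is a common gossip formulation and stands in for the paper's algorithms.

```python
import numpy as np

rng = np.random.default_rng(0)

def push_pull_mean(values, rounds):
    """Push-pull gossip: each round, every node calls one uniformly random
    node and both replace their estimates by the pairwise average."""
    x = np.array(values, dtype=float)
    n = len(x)
    for _ in range(rounds):
        for i in range(n):
            j = rng.integers(n)       # random call with uniform probability
            avg = (x[i] + x[j]) / 2.0
            x[i] = x[j] = avg
    return x

values = rng.normal(50.0, 10.0, size=64)
true_mean = values.mean()

mse_early = float(((push_pull_mean(values, 1) - true_mean) ** 2).mean())
mse_late = float(((push_pull_mean(values, 10) - true_mean) ** 2).mean())
assert mse_late < mse_early
```

Because pairwise averaging preserves the sum of the estimates, every node converges towards the true mean, and the MSE shrinks rapidly with the number of rounds, which is the trade-off between message count and accuracy the abstract refers to.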
Biodegradable metals have entered the implant market in recent years, but still do not show fully satisfactory degradation behaviour and mechanical properties. In contrast, it has been shown that pure molybdenum has an excellent combination of the required properties in this respect. We report on PM-based screen printing of thin-walled molybdenum tubes as a processing step for medical stent manufacture. We also present data on the in vivo degradation and biocompatibility of molybdenum. The degradation of molybdenum wires implanted in the aorta of rats was evaluated by SEM and EDX. Biocompatibility was assessed by histological investigation of organs and analysis of molybdenum levels in tissue extracts and body fluids. Degradation rates of up to 13.5 μm/y were observed after 12 months. No histological changes or elevated molybdenum levels in organ tissues were observed. In summary, the results further underline that molybdenum is a highly promising biodegradable metallic material.