Organized by the Fraunhofer Additive Manufacturing Alliance, the biennial Direct Digital Manufacturing Conference brings together researchers, educators and practitioners from around the world. The conference covers the entire range of topics in additive manufacturing, from methodologies, design and simulation to more application-specific topics, e.g. from the realm of medical engineering and electronics.
In many application areas, Deep Reinforcement Learning (DRL) has led to breakthroughs. In Curriculum Learning, the machine learning algorithm is not presented with examples in random order, but in a meaningful order of increasing difficulty. This has been used in many application areas to further improve the results of learning systems or to reduce their learning time. Such approaches range from learning plans created manually by domain experts to automatically generated ones; the automated creation of learning plans is one of the biggest challenges. In this work, we investigate Double Deep Reinforcement Learning (DDRL), an approach in which a trainer learns in parallel and analogously to the student in order to automatically create a learning plan for the student. Three reward functions for the trainer, Friendly, Adversarial, and Dynamic, each based on the learner's reward, are compared. The evaluation domain is kicking with variable distance, direction and relative ball position in the SimSpark simulated soccer environment. As a result, Statistic Curriculum Learning (SCL) performs better than a random curriculum with respect to training time and result quality. DDRL reaches a quality comparable to the baseline and significantly outperforms it in shorter training runs in the distance-direction subdomain, reducing the number of required training cycles by almost 50%.
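As a rough illustration of the three trainer reward schemes named above, the following sketch shows one plausible way they could be driven by the learner's reward; the function bodies, especially the blending rule in the dynamic variant, are assumptions and not the paper's definitions.

```python
# Illustrative sketch (not the authors' code): three trainer reward schemes
# driven by the learner's reward, as named in the abstract.

def friendly_reward(learner_reward: float) -> float:
    # Trainer is rewarded when the learner succeeds.
    return learner_reward

def adversarial_reward(learner_reward: float) -> float:
    # Trainer is rewarded when the learner struggles, pushing the
    # curriculum toward the edge of the learner's current ability.
    return -learner_reward

def dynamic_reward(learner_reward: float, progress: float) -> float:
    # Hypothetical blend: friendly early in training, adversarial later
    # (progress in [0, 1]); the exact rule here is purely an assumption.
    return (1.0 - 2.0 * progress) * learner_reward
```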
Printed circuit boards (PCBs) are a foundation of electronic devices in modern society. The fabrication of these boards requires various processes and machines. Using a robot with multiple tools can shorten the process chain compared to screen printing. In this paper, a system is presented that utilises an industrial six-axis robot to manufacture PCBs. The process flow and the conversion of the Gerber format into robot-specific commands are presented, and the advantages and challenges of applying a robot to print circuits are discussed.
Plastics are used today in many areas of the automotive, aerospace and mechanical engineering industries due to their lightweight potential and ease of processing. Additive manufacturing is applied more and more frequently, as it offers a high degree of design freedom and eliminates the need for complex tools. However, the application of additively manufactured components made of plastics has so far been limited due to their comparatively low strength. For this reason, processes have been developed that offer additional reinforcement of the plastic matrix using fibers made of high-strength materials. However, the resulting components are composites of different materials produced from fossil raw materials, which are difficult to recycle and generally not biodegradable.
Therefore, this paper explores the potential for new composite materials whose matrix consists of a bio-based plastic. In this investigation, it is assumed that the matrix is reinforced with a natural-fiber material to significantly increase the strength. This potential material should offer a lightweight yet strong structure and be biodegradable after use under controlled conditions. To this end, the state of the art in the use of bio-based materials in 3D printing is first presented. In order to determine the economic boundary conditions, the growth potentials for bio-based materials are analyzed. The recycling prospects for bio-based plastics are also highlighted, and the greenhouse gas emissions and land use to be expected when using bio-based materials are estimated. Finally, the degradability of the composites is discussed.
Team description papers of magmaOffenburg are incremental in the sense that each year we address a different aspect of our team and the tools around it. In this year's team description paper we focus on the architecture of the software. It is a main factor in keeping the code maintainable even after 15 years of development. We also describe how we make sure that the code follows this architecture.
Ensuring that software applications present their users with the most recent version of the data is not trivial. Self-adjusting computations are a technique for automatically and efficiently recomputing output data whenever some input changes.
This article describes the software architecture of a large, commercial software system built around a framework for coarse-grained self-adjusting computations in Haskell. It discusses advantages and disadvantages based on longtime experience. The article also presents a demo of the system and explains the API of the framework.
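The article's framework is in Haskell; purely to illustrate the underlying idea of coarse-grained self-adjusting computation, here is a minimal Python model in which outputs are recomputed automatically, but only when an input they depend on has changed. All names are invented for this sketch.

```python
# Minimal sketch of self-adjusting computation: inputs invalidate dependent
# computations on change; computations recompute lazily and cache otherwise.

class Input:
    def __init__(self, value):
        self.value, self.dependents = value, []

    def set(self, value):
        if value != self.value:
            self.value = value
            for computation in self.dependents:
                computation.dirty = True  # invalidate; recompute lazily

class Computation:
    def __init__(self, fn, *inputs):
        self.fn, self.inputs, self.dirty, self.cache = fn, inputs, True, None
        for i in inputs:
            i.dependents.append(self)

    def get(self):
        if self.dirty:  # recompute only if some input changed
            self.cache = self.fn(*(i.value for i in self.inputs))
            self.dirty = False
        return self.cache

price = Input(10.0)
tax = Input(0.19)
total = Computation(lambda p, t: p * (1 + t), price, tax)
print(total.get())   # computed once
price.set(12.0)      # marks `total` dirty
print(total.get())   # recomputed with the fresh input
```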
Eco-innovations in chemical processes should be designed to use raw materials, energy and water as efficiently and economically as possible to avoid the generation of hazardous waste and to conserve raw material reserves. Applying inventive principles identified in natural systems to chemical process design can help avoid secondary problems. However, the selection of nature-inspired principles to improve technological or environmental problems is very time-consuming. In addition, it is necessary to match the strongest principles with the problems to be solved. Therefore, the research paper proposes a classification and assignment of nature-inspired inventive principles to eco-parameters, eco-engineering contradictions and eco-innovation domains, taking into account environmental, technological and economic requirements. This classification will help to identify suitable principles quickly and also to realize rapid innovation. In addition, to validate the proposed classification approach, the study is illustrated with the application of nature-inspired invention principles for the development of a sustainable process design for the extraction of high-purity silicon dioxide from pyrophyllite ores. Finally, the paper defines a future research agenda in the field of nature-inspired eco-engineering in the context of AI-assisted invention and innovation.
The identification of vulnerabilities is an important element in the software development life cycle to ensure the security of software. While vulnerability identification based on source code is a well-studied field, identifying vulnerabilities on the basis of a binary executable without the corresponding source code is more challenging. Recent research [1] has shown how such detection can generally be enabled by deep learning methods, but appears to be very limited regarding the overall number of detected vulnerabilities. We analyse to what extent we can cover the identification of a larger variety of vulnerabilities. To this end, a supervised deep learning approach using recurrent neural networks for vulnerability detection based on binary executables is used. The underlying basis is a dataset with 50,651 samples of vulnerable code in the form of a standardised LLVM Intermediate Representation. The vectorised features of a Word2Vec model are used to train different variations of three basic architectures of recurrent neural networks (GRU, LSTM, SRNN). A binary classification model was established for detecting the presence of an arbitrary vulnerability, and a multi-class model was trained for the identification of the exact vulnerability; these achieved an out-of-sample accuracy of 88% and 77%, respectively. Differences in the detection of different vulnerabilities were also observed, with non-vulnerable samples being detected with a particularly high precision of over 98%. Thus, our proposed technical approach and methodology enable an accurate detection of 23 (compared to 4 [1]) vulnerabilities.
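A hedged sketch of the kind of recurrent classifier the abstract describes: a GRU over Word2Vec-embedded LLVM-IR token sequences with a linear classification head. All dimensions and hyperparameters below are illustrative assumptions, not the paper's values.

```python
import torch
import torch.nn as nn

class VulnDetector(nn.Module):
    def __init__(self, embed_dim=100, hidden=128, num_classes=2):
        super().__init__()
        self.rnn = nn.GRU(embed_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)  # 2 = binary; >2 = multi-class

    def forward(self, x):          # x: (batch, seq_len, embed_dim)
        _, h_n = self.rnn(x)       # final hidden state summarizes the sequence
        return self.head(h_n[-1])  # class logits

model = VulnDetector()
logits = model(torch.randn(8, 256, 100))  # 8 samples, 256 embedded tokens each
```

Swapping `nn.GRU` for `nn.LSTM` or `nn.RNN` yields the other two basic architectures the abstract compares.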
Detecting Images Generated by Deep Diffusion Models using their Local Intrinsic Dimensionality
(2023)
Diffusion models have recently been applied successfully to the visual synthesis of strikingly realistic images. This raises strong concerns about their potential for malicious purposes. In this paper, we propose using the lightweight multi Local Intrinsic Dimensionality (multiLID), which was originally developed in the context of detecting adversarial examples, for the automatic detection of synthetic images and the identification of the corresponding generator networks. In contrast to many existing detection approaches, which often only work for GAN-generated images, the proposed method provides close to perfect detection results in many realistic use cases. Extensive experiments on known and newly created datasets demonstrate that the proposed multiLID approach exhibits superior performance in diffusion detection and model identification. Since the empirical evaluations of recent publications on the detection of generated images are often mainly focused on the "LSUN-Bedroom" dataset, we further establish a comprehensive benchmark for the detection of diffusion-generated images, including samples from several diffusion models with different image sizes. The code for our experiments is provided at https://github.com/deepfake-study/deepfake-multiLID.
Erlang is a functional programming language with dynamic typing. The language offers great flexibility for destructuring values through pattern matching and dynamic type tests. Erlang also comes with a type language supporting parametric polymorphism, equi-recursive types, as well as union and a limited form of intersection types. However, type signatures only serve as documentation; there is no check that a function body conforms to its signature.
Set-theoretic types and semantic subtyping fit Erlang’s feature set very well. They allow expressing nearly all constructs of its type language and provide means for statically checking type signatures. This article brings set-theoretic types to Erlang and demonstrates how existing Erlang code can be statically type checked without or with only minor modifications to the code. Further, the article formalizes the main ingredients of the type system in a small core calculus, reports on an implementation of the system, and compares it with other static type checkers for Erlang.
In recent years, predictive maintenance tasks, especially for bearings, have become increasingly important. Solutions for these use cases concentrate on the classification of faults and the estimation of the Remaining Useful Life (RUL). As of today, these solutions suffer from a lack of training samples. In addition, these solutions often require high-frequency accelerometers, incurring significant costs. To overcome these challenges, this research proposes a combined classification and RUL estimation solution based on a Convolutional Neural Network (CNN) and a Long Short-Term Memory (LSTM) network. This solution relies on a hybrid feature extraction approach, making it especially appropriate for low-cost accelerometers with low sampling frequencies. In addition, it uses transfer learning to be suitable for applications with only a few training samples.
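An illustrative sketch of the combined architecture the abstract outlines: a 1-D CNN extracts local features from low-frequency vibration windows, an LSTM models their temporal evolution, and two heads emit a fault class and an RUL estimate. Shapes, layer sizes and the number of fault classes are assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

class BearingNet(nn.Module):
    def __init__(self, channels=3, hidden=64, num_faults=4):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(channels, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(16, hidden, batch_first=True)
        self.fault_head = nn.Linear(hidden, num_faults)  # fault classification
        self.rul_head = nn.Linear(hidden, 1)             # RUL regression

    def forward(self, x):                # x: (batch, channels, time)
        f = self.cnn(x).transpose(1, 2)  # -> (batch, time, features)
        _, (h, _) = self.lstm(f)
        return self.fault_head(h[-1]), self.rul_head(h[-1])

fault_logits, rul = BearingNet()(torch.randn(4, 3, 1024))
```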
Self-tests in learning management systems (LMS) allow students to assess their own learning progress. In contrast to the submission and correction of fully worked-out solutions, LMS predominantly use single-choice answer selection for entering solutions. Following the didactic approach "physics through computer science", the learners instead enter their solutions into the LMS in a programming language, which facilitates automated feedback and promotes reaching a higher competence level. Ten LMS self-tests were created in which the solutions to a textbook exercise were queried either through input in a programming language or, for a control group, through answer selection. Results from the first use of these self-tests in the physics course of the biotechnology degree program are presented.
TSN, or Time-Sensitive Networking, is becoming an essential technology for integrated networks, enabling deterministic and best-effort traffic to coexist on the same infrastructure. In order to properly configure, run and secure such TSN networks, monitoring functionality is a must. The TSN standards already contain some provisions for such functionality, and there are different methods to choose from. We implemented different methods to measure the time synchronisation accuracy between devices as a C library and compared the measurement results. Furthermore, the library has been integrated into the ControlTSN engineering framework.
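To illustrate what such a synchronisation-accuracy measurement boils down to, here is the generic two-way time-transfer arithmetic used by PTP/802.1AS-style protocols. This is a textbook formula shown as background only; it is an assumption that the C library above measures along these lines, and it does not reflect its API.

```python
def offset_and_delay(t1, t2, t3, t4):
    """t1: master send, t2: slave receive, t3: slave send, t4: master receive."""
    offset = ((t2 - t1) - (t4 - t3)) / 2.0  # slave clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2.0   # mean one-way path delay
    return offset, delay

print(offset_and_delay(100.0, 112.0, 150.0, 158.0))  # -> (2.0, 10.0)
```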
Mathematik lässt sich in vielen Objekten finden. Sei es die lineare Steigung eines Handlaufs zum Schulgebäude oder die nahezu zylindrische Form einer Litfaßsäule in der Innenstadt. Das Bestreben, Schüler*innen diese Zusammenhänge entdecken zu lassen, steht im Zentrum des MathCityMap Projekts (Ludwig et al., 2013). Auf sogenannten mathematischen Wanderpfaden (bzw. Mathtrails) werden Schüler*innen durch eine App zu Mathematikaufgaben an realen Objekten bzw. in realen Situationen ihrer Umwelt geleitet. Um die Aufgaben zu lösen, werden Daten erhoben, z. B. durch Messungen oder Zählen. Entscheidend ist, dass die Aufgaben so gestellt sind, dass der Schritt der Datenbeschaffung nur vor Ort stattfinden kann und somit direkt mit dem Objekt bzw. der Situation verknüpft wird.
Turbocharger housings in internal combustion engines are subjected to severe mechanical and thermal cyclic loads throughout their lifetime or during engine testing. The combination of thermal transients and mechanical load cycling results in a complex evolution of damage, leading to thermo-mechanical fatigue (TMF) of the material. For the computational TMF life assessment of high temperature components, the DTMF model can provide reliable TMF life predictions. The model is based on a short fatigue crack growth law and uses local finite-element (FE) results to predict the number of cycles to failure for a technical crack. In engine applications, it is nowadays often acceptable to have short cracks as long as they do not propagate and cause loss of function of the component. Thus, it is necessary to predict not only potential crack locations and the corresponding number of cycles for a technical crack, but also to determine subsequent crack growth or even a possible crack arrest. In this work, a method is proposed that allows the simulation of TMF crack growth in high temperature components using FE simulations and non-linear fracture mechanics (NLFM).
An NLFM-based crack growth simulation method is described. This method starts with the FE analysis of a component; in this paper, it is demonstrated for an automotive turbocharger housing subjected to TMF loading. A transient elastic-viscoplastic FE analysis is used to simulate four heating and cooling cycles of an engine test. The stresses, inelastic strains, and temperature histories from the FE analysis are then used to perform TMF life predictions using the standard DTMF model. The crack position and crack plane of critical hotspots are then identified, and simulated cracks are inserted at the hotspots; for the model demonstrated, cracks were inserted at two hotspot locations. The ΔJ integral is computed as a fracture mechanics parameter at each point along the crack front, and the crack extension of each point is then evaluated, allowing the crack to grow iteratively. The paper concludes with a comparison of the crack growth curves for both hotspots with experimental results.
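The iterative growth step can be pictured with the following numerical sketch. A generic Paris-type power law on ΔJ is assumed here purely for illustration; the actual DTMF-based growth law, the FE computation of ΔJ along the crack front, and all constants below are not taken from the paper.

```python
def grow_crack(a0, delta_j_of_a, C=1e-6, m=1.5, a_max=5.0, max_cycles=10**7):
    """Integrate da/dN = C * (dJ(a))**m in blocks of cycles (toy model)."""
    a, n, block = a0, 0, 1000          # advance in blocks of 1000 cycles
    while a < a_max and n < max_cycles:
        dj = delta_j_of_a(a)
        if dj <= 0:                    # driving force vanished: crack arrest
            return a, n, True
        a += C * dj**m * block         # crack extension over this block
        n += block
    return a, n, False

# Toy dJ curve that decays with crack length, so arrest becomes possible:
a, n, arrested = grow_crack(0.3, lambda a: max(0.0, 2.0 - a))
```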
Enhancing engineering creativity with automated formulation of elementary solution principles
(2023)
The paper describes a method for the automated formulation of elementary creative stimuli for product or process design at different levels of abstraction and in different engineering domains. The experimental study evaluates the impact of structured automated idea generation on inventive thinking in engineering design and compares it with previous experimental studies in educational and industrial settings. The outlook highlights the benefits of using automated ideation in the context of AI-assisted invention and innovation.
Learning programming fundamentals is considered one of the most challenging and complex learning activities. Some authors have proposed visual programming language (VPL) approaches to address part of this inherent complexity [1]. A visual programming language lets users develop programs by combining program elements, such as loops, graphically rather than by specifying them textually. Visual expressions and spatial arrangements of text and graphic symbols are used either as syntax elements or as secondary notation. VPLs are typically used for educational multimedia, video games, system development, and data warehousing/business analytics purposes. For example, Scratch, a platform of the Massachusetts Institute of Technology, is designed for kids and after-school programs.
Designing mobile software applications is considered one of the most challenging application domains due to the sensors built into a mobile device, such as GPS, camera or Near Field Communication (NFC). Sensors enable the creation of context-aware mobile applications that can discover and take advantage of contextual information, such as user location, nearby people and objects, and the current user activity. As a consequence, context-aware mobile applications can sense clues about the situational environment, making mobile devices more intelligent, adaptive, and personalized. Such context-aware mobile applications are motivating and attractive case studies, especially for programming beginners ("my own first app").
In this work, we introduce a use-case-centered approach as well as a clear separation of user interface design and sensor-based program development. We provide an in-depth discussion of a new VPL-based teaching method, a step-by-step development process that enables programming beginners to create context-aware mobile applications. Finally, we argue that addressing the challenges of programming beginners with our teaching approach could make programming teaching more motivating, with an additional impact on the final software quality and scalability.
The key contributions of our study are the following:
- An overview of existing attempts to use VPL approaches for mobile applications
- A use-case-centered teaching approach based on a clear separation of user interface design and sensor-based program development
- A teaching case study enabling beginners to create context-aware mobile applications step by step, based on the MIT App Inventor (a platform of the Massachusetts Institute of Technology)
- Open research challenges and perspectives for further development of our teaching approach
References:
[1] Idrees, M., Aslam, F. (2022). A Comprehensive Survey and Analysis of Diverse Visual Programming Languages. VFAST Transactions on Software Engineering, 10(2), 47-60.
Neural networks have a number of shortcomings. Among the most severe is their sensitivity to distribution shifts, which allows models to be easily fooled into wrong predictions by small perturbations of the input that are often imperceptible to humans and do not have to carry semantic meaning. Adversarial training poses a partial solution to this issue by training models on worst-case perturbations. Yet, recent work has also pointed out that the reasoning in neural networks differs from that of humans: humans identify objects by shape, while neural nets mainly employ texture cues. For example, a model trained on photographs will likely fail to generalize to datasets containing sketches. Interestingly, it has also been shown that adversarial training seems to favorably increase the shift toward shape bias. In this work, we revisit this observation and provide an extensive analysis of this effect on various architectures, the common L2- and L∞-training, and Transformer-based models. Further, we provide a possible explanation for this phenomenon from a frequency perspective.
Seismic data processing relies on multiples attenuation to improve inversion and interpretation. Radon-based algorithms are often used for discriminating multiples and primaries. Deep learning based on convolutional neural networks (CNNs) has shown encouraging applications for demultiple that could mitigate Radon-based challenges. In this work, we investigate new strategies to train a CNN for multiples removal based on different loss functions. We propose combining primaries and multiples labels in the loss for training a CNN to predict primaries, multiples, or both simultaneously. Moreover, we investigate two distinctive training methods for all the strategies: a UNet trained on the minimum absolute error (L1), and adversarial training (GAN-UNet). We test the models trained with the different strategies and methods on 400 synthetic data examples. We found that training to predict multiples, including the primaries …
In 4D printing, an additively manufactured component is given the ability to change its shape or function under the influence of an external stimulus. To achieve this, special smart materials are used that are able to react to external stimuli in a specific way. So far, a number of different stimuli have already been investigated and initial applications have been impressively demonstrated, such as self-folding bodies and simple grippers. However, a methodical specification for the selection of the stimuli and their implementation has not yet been the focus of development.
The focus of this work is therefore to develop a methodical approach with which 4DP technology can be used in a solution- and application-oriented manner. The developed approach is based on the conventional design methodology for product development, which solves given problems in a structured way. This method is extended by specific approaches that take 4D printing and smart materials into account.
To illustrate the developed method, it is applied in practice to a problem definition in the form of an application example. In this example, which represents the recovery of an object from a difficult-to-access environment, the individual functions of positioning, gripping and extraction are implemented using 4D printing. The material extrusion process is used for the additive manufacturing of all components of the example. Finally, the functions are successfully tested. The developed approach offers an innovative and methodical way to systematically solve complex technical problems using 4DP and smart materials.
Sweaty has already participated several times in RoboCup soccer competitions (Adult Size). Now the work focuses on coordinating the play of two robots. Moreover, we are working on stabilizing the gait by adding additional sensor information. Ongoing work is the optimization of the control strategy by balancing between impedance and position control. By minimizing jerk, gait and overall gameplay should improve significantly.
Voice User Experience
(2023)
Voice assistants such as Alexa, Google Assistant, Siri, Cortana, Magenta and Bixby are enjoying growing popularity thanks to their intuitive, fast and convenient interaction, and therefore offer exciting opportunities for advancing digital customer dialogue. Whether the technology will really find broad acceptance, however, depends not only on its technical quality or usability. The user experience, which comprises not only users' reactions during use but also their expectations and perceptions before and after use, plays a central role. Measuring the quality of the voice user experience (Voice UX) is therefore of great interest for evaluating and optimizing voice applications. The question of how the Voice UX of voice-controlled systems can be measured, however, is still open. Current methods frequently rely on UX research on graphical user interfaces, even though voice-based forms of interaction are generally neither visually nor haptically tangible. In our contribution, we examine the current status quo of the German voice user experience. The following questions are at the centre: How can voice applications contribute to a successful customer dialogue? Which user irritations currently occur when using voice assistants? Which methods can be used to measure the voice user experience?
The present paper addresses the research question: What recommendations for action and potential adjustments should an online magazine for beauty and fashion implement in order to make affiliate articles in these sections even more appealing to the target group and provide added value for them?
To answer this research question, three hypotheses were defined and tested using qualitative and quantitative research. The qualitative research consisted of user experience tests in which four affiliate articles in the fields of beauty and fashion were evaluated with 13 participants. The quantitative research involved collecting, analyzing and evaluating data from the four affiliate articles gathered with the company's real-life target group. Based on these results, recommendations for action were derived, which should not only improve the quality of the content in the future, but also increase the efficiency of the implementation of such articles.
Established robot manufacturers have developed methods to determine and optimize the accuracy of their robots. These methods vary from manufacturer to manufacturer, and due to the lack of published data, a comparison of robot performance is difficult. The aim of this article is to find methods to evaluate important characteristics of a robot with an accurate and cost-effective setup. A laser triangulation sensor and geometrically referenced spheres were used as a basis to compare robot performance.
Kundendaten im E-Commerce – Optimierungspotenzial im Checkout-Prozess des deutschen Online-Handels
(2023)
Designing a user-friendly checkout process is of great importance for the success of e-commerce. Collecting customer data forms an important part of the customer journey. On the one hand, retailers want to learn as much as possible about their customers in order to deliver precisely targeted offers and marketing measures and to generate the perfect shopping experience. On the other hand, customers want to concentrate on the purchase when shopping online and expect a smooth process. The checkout process is a critical point in this respect, which is also reflected in high cart abandonment rates. To truly delight online shoppers, there is still much room for improvement. With the aim of better understanding the status quo in German online retail and of optimizing usability and user experience for a higher conversion rate, the research presented here examined the login and checkout processes of the 100 highest-revenue online shops in Germany. The results of the study are presented, the areas with optimization potential are identified, for example overly complicated forms, unnecessary data queries or forced registrations, and suggestions for the practice of online retail are discussed.
Additive manufacturing enables the production of lightweight and resilient components with extensive design freedom. In the low-cost sector, material extrusion (e.g. Fused Deposition Modeling, FDM) has been the main method used to date, since robust 3D printers and inexpensive materials (polymer filaments) can be used. However, printing times for FDM are very long and the dimensional and surface quality is limited. Recently, new processes from the field of vat polymerization have entered the market. For example, masked stereolithography (mSLA) offers a significant improvement in component quality and build speed through the use of resins and large-area curing, at still reasonable cost. Currently, only limited knowledge is available on the optimal design of components for this young process. In this contribution, design guidelines are developed to determine the possibilities and limitations of mSLA from a design point of view. For this purpose, a number of test geometries are designed and investigated to obtain systematic insights into important design features, such as wall thickness, grooves and holes. In addition, typical problems in additive manufacturing, such as the design of overhangs and fits or the hollowing of components, are investigated. The evaluation of practical 3D printing tests thus provides important parameters that can be transferred into design guidelines for additively manufactured components using mSLA.
In this paper, we propose an approach for gait phase detection on flat and inclined surfaces that can be used for an ankle-foot orthosis and the humanoid robot Sweaty. To cover different use cases, we use a rule-based algorithm, which offers the required flexibility and real-time capability. The inputs of the algorithm are inertial measurement unit and ankle joint angle signals. We show that the gait phases with the orthosis worn by a human participant and with Sweaty are reliably recognized by the algorithm, provided the transition conditions are adapted. For example, the specificity for human gait on flat surfaces is 92 %; for the robot Sweaty, 95 % of gait cycles are fully recognized. Furthermore, the algorithm also allows the determination of the inclination angle of the ramp: the sensors of the orthosis yield 6.9° and those of the robot Sweaty 7.7° when walking onto the reference ramp with a slope angle of 7.9°.
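A simplified sketch of what such a rule-based gait phase detector can look like: a small state machine with threshold transition rules over the IMU and ankle-angle signals. The phase names, signals and thresholds below are assumptions for illustration; the paper adapts the transition conditions per wearer.

```python
PHASES = ("heel_strike", "stance", "heel_off", "swing")

def next_phase(phase, foot_accel_z, ankle_angle_deg):
    """One update step of a hypothetical rule-based gait phase machine."""
    if phase == "swing" and foot_accel_z > 1.5:          # impact spike
        return "heel_strike"
    if phase == "heel_strike" and abs(foot_accel_z) < 0.2:
        return "stance"                                  # foot flat, quiet IMU
    if phase == "stance" and ankle_angle_deg > 10.0:     # heel lifts
        return "heel_off"
    if phase == "heel_off" and ankle_angle_deg < 0.0:
        return "swing"
    return phase  # no rule fired: stay in the current phase
```

Such explicit rules are what give the approach its real-time capability: each update is a handful of comparisons, with no learned model in the loop.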
Complex tourism products with intangible service components are difficult to explain to potential customers. This research elaborates the use of virtual reality (VR) in the field of shore excursions. A theoretical research model based on the technology acceptance model was developed, and hypotheses were proposed. Cruise passengers were invited to test 360° excursion images on a landing page. Data was collected using an online questionnaire. Finally, data was analyzed using the PLS-SEM method. The results provide theoretical implications on technology acceptance model (TAM) research in the field of cruise tourism. Furthermore, the results and implications indicate the potential of virtual 360° shore excursion presentations for the cruise industry.
Robotic Process Automation (RPA) is a technology for automating business processes and connecting systems by means of software robots in organizations; it is gaining traction and growing out of its infancy. Thus, it is no longer just a question of what is technologically feasible, but rather how this technology can be used most profitably. However, business models for RPA remain underinvestigated in the literature. Existing work is highly heterogeneous, lacking structure and applicability in practice. To close this gap, we present an approach to sustainably establish RPA as a driver of digitization and automation within a company, based on an iterative, holistic view of business models with the Business Model Canvas as the analysis tool.
The paper compares different anti-windup strategies for the current control of inverter-fed permanent magnet synchronous machines (PMSM) controlled by pulse-width modulation. In this respect, the focus is on the drive behavior with a relatively large product of stator frequency and sampling time. A requirement for dynamically high-quality anti-windup measures is, among other things, a sufficiently accurate decoupling of the stator current direct axis and quadrature axis components even at high stator frequencies. Discrete-time models of the electrical subsystem of the PMSM are well suited for this purpose, of which the method found to be the most accurate in a preliminary investigation is used as the basis for all anti-windup methods examined. Simulation studies and measurement results document the performance of the compared methods.
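As background for readers unfamiliar with anti-windup, here is the classic back-calculation scheme for a discrete-time PI controller, shown as a generic baseline. The paper compares several such strategies combined with accurate discrete-time PMSM decoupling, which this sketch does not reproduce; all gains are placeholders.

```python
def pi_antiwindup_step(error, state, kp, ki, kb, ts, u_min, u_max):
    """One sampling step of a PI controller with back-calculation anti-windup.

    `state` is the integrator value; `kb` is the back-calculation gain.
    """
    u_unsat = kp * error + state
    u = min(max(u_unsat, u_min), u_max)          # voltage / actuator limit
    # Back-calculation: bleed the integrator by the saturation excess so it
    # does not wind up while the output is clipped.
    state += ts * (ki * error + kb * (u - u_unsat))
    return u, state
```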
Cast aluminum cylinder blocks are frequently used in gasoline and diesel internal combustion engines because of their light-weight advantage. However, the disadvantage of aluminum alloys is their relatively low strength and fatigue resistance which make aluminum blocks prone to fatigue cracking. Engine blocks must withstand a combination of low-cycle fatigue (LCF) thermal loads and high-cycle fatigue (HCF) combustion and dynamic loads. Reliable computational methods are needed that allow for accurate fatigue assessment of cylinder blocks under this combined loading. In several publications, the mechanism-based thermomechanical fatigue (TMF) damage model DTMF describing the growth of short fatigue cracks has been extended to include the effect of both LCF thermal loads and superimposed HCF loadings. This approach is applied to the finite life fatigue assessment of an aluminum cylinder block. The required material properties related to LCF are determined from uniaxial LCF tests. The additional material properties required for the assessment of superimposed HCF are obtained from the literature for similar materials. The predictions of the model agree well with engine dyno test results. Finally, some improvements to the current process are discussed.
In order to attract new students, German universities must provide quick and easy access to relevant information. A chatbot can help increase the efficiency of academic advising for prospective students. In this study we evaluate the acceptance and effects of chatbots in German student-university communication. We conducted a qualitative UX study with the chatbot prototype of Offenburg University of Applied Sciences (HSO) in order to determine which features are particularly relevant and which requirements users have. The results show that acceptance increases if the chatbot offers quick and adequate assistance; furthermore, our participants preferred an informal communication style and valued friendly and helpful personality traits in chatbots.
4D printing (4DP) is an evolutionary step beyond 3D printing that includes the fourth dimension, in this case time. In different time steps the printed structure shows different shapes, influenced by external stimuli like light, temperature, pH value, or electric or magnetic fields. The advantage of 4DP is the solution of technical problems without the need for complex internal energy supply via cables or pipes. Previous approaches to 4D printing with magnetoresponsive materials only use materials with limited usability (e.g. hydrogels) and complex programming during the manufacturing process (e.g. using magnets at the nozzle). 4D printing using unmagnetized particles with later magnetization allows the use of a standard 3D printer and has the advantage of being easily reproducible and relatively inexpensive for further applications. Therefore, a magnetoresponsive feedstock filament is produced that shows elastic and magnetic properties. In a first step, pellets are produced by compounding polymer with magnetic particles. In a second step, these pellets are extruded into filament. This filament is printed using a conventional printing system for material extrusion (MEX-TRB/P). Various prototypes have been printed, deformed and magnetized, which is called programming. In comparison to shape memory polymers (SMP), the repeatability of the movement is better. The results show the application possibilities and function of magnetoresponsive materials. In addition, an understanding of the behaviour of this novel material is achieved.
In 2015, Google engineer Alexander Mordvintsev presented DeepDream as a technique to visualise the feature analysis capabilities of deep neural networks trained on image classification tasks. For a brief moment, this technique enjoyed some popularity among scientists, artists, and the general public because of its capability to create seemingly hallucinatory synthetic images. But soon after, research moved on to generative models capable of producing more diverse and more realistic synthetic images. At the same time, the means of interaction with these models have shifted away from direct manipulation of algorithmic properties towards a predominance of high-level controls that obscure the model's internal working. In this paper, we present research that returns to DeepDream to assess its suitability as a method for sound synthesis. We consider this research necessary for two reasons: it tackles a perceived lack of research on musical applications of DeepDream, and it addresses DeepDream's potential to combine data-driven and algorithmic approaches. Our research includes a study of how the model architecture, choice of audio datasets, and method of audio processing influence the acoustic characteristics of the synthesised sounds. We also look into the potential application of DeepDream in a live-performance setting. For this reason, the study limits itself to models consisting of small neural networks that process time-domain representations of audio. These models are resource-friendly enough to operate in real time. We hope that the results obtained so far highlight the attractiveness of DeepDream for musical approaches that combine algorithmic investigation with curiosity-driven and open-ended exploration.
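A minimal sketch of DeepDream-style gradient ascent on a raw waveform, purely to illustrate the mechanism; the (untrained) toy network, layer sizes and step count below are assumptions and not the study's models.

```python
import torch
import torch.nn as nn

# Small time-domain conv net standing in for a trained audio model.
net = nn.Sequential(
    nn.Conv1d(1, 8, kernel_size=64, stride=4), nn.ReLU(),
    nn.Conv1d(8, 16, kernel_size=32, stride=4), nn.ReLU(),
)

audio = torch.randn(1, 1, 16000, requires_grad=True)  # 1 s at 16 kHz
opt = torch.optim.Adam([audio], lr=1e-2)
for _ in range(100):
    opt.zero_grad()
    activation = net(audio)
    loss = -activation.norm()   # gradient ascent: maximize layer response
    loss.backward()
    opt.step()
# `audio` now emphasizes whatever features excite the chosen layer, the
# acoustic analogue of DeepDream's hallucinated image textures.
```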
Current Harmonics Control Algorithm for inverter-fed Nonlinear Synchronous Electrical Machines
(2023)
Current harmonics are a well-known challenge of electrical machines. They can be undesirable, as they can cause instabilities in the control, generate additional losses and lead to torque ripples with noise. However, they can also be deliberately generated by new methods in order to improve machine behavior. In this paper, an algorithm for controlling current harmonics is proposed. It can be described as a combination of different PI controllers for defined angles of the machine with repetitive control characteristics over whole revolutions. The controller design is explained and important points where linearization is necessary are shown. Furthermore, the limits are analyzed and, for validation, measurement results with a permanent magnet synchronous machine on the test bench are considered.
The nonlinear behavior of inverters is largely impacted by the interlocking and switching times. A method for online identification of the switching times of semiconductors in inverters is presented in the following work. By identifying these times, it is possible to compensate for the nonlinear behavior, reduce the interlocking time, and use the information for diagnostic purposes. The method is first derived theoretically by examining different inverter switching cases and determining potential identification possibilities. It is then modified to consider the entire module for more robust identification. The methodology, including limitations and boundary conditions, is investigated, and a comparison of two methods of measurement acquisition is provided. Subsequently, the developed hardware is described and the implementation in an FPGA is carried out. Finally, the results are presented and discussed, and potential challenges are addressed.
The present work describes an extension of current slope estimation for parameter estimation of inverter-fed permanent magnet synchronous machines. The area of operation for current slope estimation in the individual switching states of the inverter is limited due to measurement noise, the bandwidth limitation of the current sensors and the commutation processes of the inverter's switching operations. Therefore, a minimum duration of each switching state is necessary, limiting the final area of operation of a robust current slope estimation. This paper presents an extension of existing current slope estimation algorithms resulting in a greater area of operation and a more robust estimation result.
Digital, virtual environments and the metaverse are rapidly taking shape and will generate disruptive changes in the areas of ethics, privacy, safety, and how the relationships between human beings will be developed. To uncover some of the implications that will impact those areas, this study investigates the perceptions of 101 younger people from the generations Y and Z. We present a first exploratory analysis of the findings, focusing on knowledge and self-perception. Results show that these young generations are seriously doubting their knowledge on the metaverse and virtual worlds, regarding both the definition and the usage. It is interesting to see only a medium confidence level, considering that the participants are young and from an academic environment, which should increase their interest in and affinity towards virtual worlds. Males from both generations perceive themselves as significantly more knowledgeable than females. Regarding a fitting definition, almost 40% agreed on the metaverse as a "universal and immersive virtual world that is made accessible using virtual reality and augmented reality technologies". Regarding the topic in general, several participants (almost 40%) considered themselves sceptics or "just" users (38%). Interestingly, generation Y participants were more likely than the younger generation Z participants to identify themselves as early adopters or innovators. As a result, the considerable amount of "mixed feelings" regarding digital, virtual environments and the metaverse shows that in-depth studies on the perception of the metaverse as well as its ethical and integrity implications are required to create more accessible, inclusive, and safe digital, virtual environments.
Convolutional neural networks (CNNs) define the state-of-the-art solution on many perceptual tasks. However, current CNN approaches largely remain vulnerable against adversarial perturbations of the input that have been crafted specifically to fool the system while being quasi-imperceptible to the human eye. In recent years, various approaches have been proposed to defend CNNs against such attacks, for example by model hardening or by adding explicit defence mechanisms. In the latter case, a small "detector" is included in the network and trained on the binary classification task of distinguishing genuine data from data containing adversarial perturbations. In this work, we propose a simple and lightweight detector, which leverages recent findings on the relation between networks' local intrinsic dimensionality (LID) and adversarial attacks. Based on a re-interpretation of the LID measure and several simple adaptations, we surpass the state-of-the-art on adversarial detection by a significant margin and reach almost perfect results in terms of F1-score for several networks and datasets. Sources available at: https://github.com/adverML/multiLID
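To make the LID idea concrete, here is the classical Levina-Bickel maximum-likelihood LID estimator computed from k-nearest-neighbour distances, the kind of per-layer feature that multiLID builds on; a simple classifier on these features then separates benign from adversarial inputs. This is a simplified textbook version, not the paper's exact pipeline.

```python
import numpy as np

def lid_mle(sample, reference, k=20):
    """LID of `sample` (d,) w.r.t. a batch of reference activations (n, d)."""
    dists = np.sort(np.linalg.norm(reference - sample, axis=1))
    dists = dists[dists > 0][:k]              # k nearest non-zero distances
    # Levina-Bickel MLE: -k / sum(log(r_i / r_k))
    return -k / np.sum(np.log(dists / dists[-1]))

rng = np.random.default_rng(0)
ref = rng.normal(size=(1000, 32))             # e.g. activations of one layer
print(lid_mle(rng.normal(size=32), ref))      # adversarial inputs tend to
                                              # show elevated LID values
```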
An important step in seismic data processing to improve inversion and interpretation is multiples attenuation. Radon-based algorithms are often used for discriminating primaries and multiples. Recently, deep learning (DL) based on convolutional neural networks (CNNs) has shown promising results in demultiple that could mitigate the challenges of Radon-based methods. In this work, we investigate different strategies to train a CNN for multiples removal based on different loss functions. We propose combining primaries and multiples labels in the loss for training a CNN to predict primaries, multiples, or both simultaneously. We evaluate the performance of the CNNs trained with the different strategies on 400 clean and noisy synthetic data examples, considering three metrics. We found that training a CNN to predict the multiples and then subtracting them from the input image is the most effective strategy for demultiple. Furthermore, including the primaries labels as a constraint during the training of multiples prediction improves the results. Finally, we test the strategies on a field dataset. The CNNs trained with the different strategies achieve competitive results on real data compared with Radon demultiple. As a result, effectively trained CNN models can potentially replace Radon-based demultiple in existing workflows.
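A hedged sketch of a combined-label loss of the kind described above: L1 terms on both the predicted primaries and the implied multiples, tied together by the physical decomposition input = primaries + multiples. The weighting and function names are assumptions, not the paper's formulation.

```python
import torch
import torch.nn.functional as F

def combined_demultiple_loss(pred_primaries, data, primaries_label,
                             multiples_label, alpha=0.5):
    """L1 loss over primaries and multiples jointly (illustrative)."""
    pred_multiples = data - pred_primaries    # consistency with the input
    loss_p = F.l1_loss(pred_primaries, primaries_label)
    loss_m = F.l1_loss(pred_multiples, multiples_label)
    return alpha * loss_p + (1.0 - alpha) * loss_m
```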
In this contribution, we present a novel 3D printed multi-material, electromagnetic vibration harvester. The harvester is based on a cantilever design and utilizes an embedded constantan wire within a matrix of polyethylene terephthalate glycol (PETG). A prototype has been manufactured with a combination of a fused filament fabrication (FFF) printer and a robot with a custom-made tool.
In this paper, we describe the first publicly available fine-grained product recognition dataset based on leaflet images. Using advertisement leaflets collected over several years from different European retailers, we provide a total of 41.6k manually annotated product images in 832 classes. Further, we investigate three different approaches for this fine-grained product classification task: Classification by Image, by Text, as well as by Image and Text. The "Classification by Text" approach uses the text extracted directly from the leaflet product images. We show that the combination of image and text as input improves the classification of visually difficult-to-distinguish products. The final model reaches an accuracy of 96.4% with a Top-3 score of 99.2%. We release our code at https://github.com/ladwigd/Leaflet-Product-Classification.
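An illustrative late-fusion classifier for the "Classification by Image and Text" variant: image and text embeddings are concatenated before a shared class head. Encoder choices, embedding sizes and the fusion scheme are assumptions; only the class count (832) comes from the abstract.

```python
import torch
import torch.nn as nn

class ProductClassifier(nn.Module):
    def __init__(self, img_dim=512, txt_dim=256, num_classes=832):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(img_dim + txt_dim, 512), nn.ReLU(),
            nn.Linear(512, num_classes),
        )

    def forward(self, img_emb, txt_emb):
        # Late fusion: concatenate the two modality embeddings.
        return self.head(torch.cat([img_emb, txt_emb], dim=-1))

model = ProductClassifier()
logits = model(torch.randn(4, 512), torch.randn(4, 256))  # 4 products
```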
Seismic data processing involves techniques to deal with undesired effects that occur during acquisition and pre-processing. These effects mainly comprise coherent artefacts such as multiples, non-coherent signals such as electrical noise, and loss of signal information at the receivers that leads to incomplete traces. In this work, we employ a generative solution, since it can explicitly model complex data distributions and hence yield a better decision-making process. In particular, we introduce diffusion models for multiple removal. To that end, we run experiments on synthetic and real data, and we compare the deep diffusion performance with standard algorithms. We believe that our pioneering study not only demonstrates the capability of diffusion models, but also opens the door for future research to integrate generative models into seismic workflows.
It is common practice to apply padding prior to convolution operations to preserve the resolution of feature maps in Convolutional Neural Networks (CNNs). While many alternatives exist, this is often achieved by adding a border of zeros around the inputs. In this work, we show that adversarial attacks often result in perturbation anomalies at the image boundaries, which are the areas where padding is used. Consequently, we provide an analysis of the interplay between padding and adversarial attacks and seek an answer to the question of how different padding modes (or their absence) affect adversarial robustness in various scenarios.
Modern industrial production is heavily dependent on efficient workflow processes and automation. The steady flow of raw materials as well as the separation of vital parts and semi-finished products are at the core of these automated procedures. Commonly used systems for this work are bowl feeders, which separate the parts and material by a combination of mechanical vibration and friction. The production of these tools, especially the design of the ramping spiral, is delicate and time-consuming work, as shape, slope, and material must be carefully adjusted to the corresponding parts. In this work, we propose an automated approach that uses optimization procedures from artificial intelligence to design the spiral ramps of bowl feeders. To this end, the whole system and the considered parts are physically simulated, and the optimized geometry is subsequently exported into a CAD system for manufacturing or printing. The use of evolutionary optimization requires developing a mathematical model of the whole setup and finding an efficient representation of its integral features.
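A bare-bones evolutionary loop over a parameter vector encoding the spiral ramp (e.g. slope, radius, wall angle) can look as follows. The fitness function is a placeholder standing in for the physics simulation mentioned above; all parameter names and constants are assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def fitness(params):
    # Placeholder objective: in the real system this score would come from
    # the physical simulation of parts travelling up the ramp.
    slope, radius, wall_angle = params
    return -(slope - 4.0) ** 2 - (radius - 120.0) ** 2 / 100 - abs(wall_angle)

# Initial population of 20 candidate ramp geometries.
pop = rng.normal([3.0, 100.0, 5.0], [1.0, 10.0, 2.0], size=(20, 3))
for generation in range(50):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[-5:]]                 # keep the best 5
    children = np.repeat(parents, 4, axis=0)               # 5 x 4 offspring
    children += rng.normal(0.0, [0.1, 1.0, 0.2], children.shape)  # mutate
    pop = children
best = pop[np.argmax([fitness(p) for p in pop])]           # -> export to CAD
```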
This study focuses on the autonomous navigation and mapping of indoor environments using a drone equipped only with a monocular camera and height measurement sensors. A visual SLAM algorithm was employed to generate a preliminary map of the environment and to determine the drone's position within the map. A deep neural network was utilized to generate a depth image from the monocular camera's input, which was subsequently transformed into a point cloud to be projected into the map. By aligning the depth point cloud with the map, 3D occupancy grid maps were constructed by using ray tracing techniques to get a precise depiction of obstacles and the surroundings. Due to the absence of IMU data from the low-cost drone for the SLAM algorithm, the created maps are inherently unscaled. However, preliminary tests with relative navigation in unscaled maps have revealed potential accuracy issues, which can only be overcome by incorporating additional information from the given sensors for scale estimation.
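The step between the monocular depth network and the occupancy grid is standard pinhole back-projection of a depth image into a point cloud, sketched below. The intrinsics are example values; with monocular depth the result is unscaled, which is exactly the limitation discussed above.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a (h, w) depth image into an (h*w, 3) point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Example intrinsics (focal lengths and principal point are assumptions):
cloud = depth_to_point_cloud(np.ones((480, 640)), 525.0, 525.0, 320.0, 240.0)
```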
Artificial Intelligence (AI) can potentially transform many aspects of modern society in various ways, including automation of tasks, personalization of products and services, diagnosis of diseases and their treatment, transportation, safety, and security in public spaces. Recently, AI technology has been transforming the financial industry, offering new ways to analyse data and automate processes, reduce costs, increase efficiency, and provide more personalized services to customers. However, it has also raised important ethical and regulatory questions that need to be addressed by the industry and society as a whole. The aim of the Erasmus+ project Transversal Skills in Applied Artificial Intelligence (TSAAI, KA220-HED Cooperation Partnerships in Higher Education) is to establish a training platform that incorporates teaching guidelines based on a curriculum covering different areas of application of AI technology. In this work, we focus on applying AI models in the financial and insurance sectors.
PROFINET Security: A Look on Selected Concepts for Secure Communication in the Automation Domain
(2023)
We provide a brief overview of the cryptographic security extensions for PROFINET, as defined and specified by PROFIBUS & PROFINET International (PI). These come in three hierarchically defined Security Classes, called Security Classes 1, 2 and 3. Security Class 1 provides basic security improvements with moderate implementation impact on PROFINET components. Security Classes 2 and 3, in contrast, introduce an integrated cryptographic protection of PROFINET communication. We first highlight and discuss the security features that the PROFINET specification offers for future PROFINET products. Then, as our main focus, we take a closer look at some of the technical challenges that were faced during the conceptualization and design of Security Class 2 and 3 features. In particular, we elaborate on how secure application relations between PROFINET components are established and how disruption-free availability of a secure communication channel is guaranteed despite the need to refresh cryptographic keys regularly. The authors are members of the PI Working Group CB/PG10 Security.
Following their success in visual recognition tasks, Vision Transformers (ViTs) are increasingly employed for image restoration. As some recent works claim that ViTs for image classification also have better robustness properties, we investigate whether this improved adversarial robustness extends to image restoration. We consider the recently proposed Restormer model as well as NAFNet and the "Baseline network", which are both simplified versions of a Restormer. We use Projected Gradient Descent (PGD) and CosPGD for our robustness evaluation. Our experiments are performed on real-world images from the GoPro dataset for image deblurring. Our analysis indicates that, contrary to what is advocated in image classification works on ViTs, these models are highly susceptible to adversarial attacks. We attempt to find an easy fix and improve their robustness through adversarial training. While this yields a significant increase in robustness for Restormer, the results for the other networks are less promising. Interestingly, we find that the design choices in NAFNet and the Baselines, which were based on i.i.d. performance rather than robust generalization, seem to be at odds with model robustness.
Differentiation between human and non-human objects can increase the efficiency of human-robot collaborative applications. This paper proposes using convolutional neural networks for classifying objects in robotic applications. The body temperature of human beings is used to classify humans and to estimate their distance to the sensor. Using image classification with convolutional neural networks, it is possible to detect humans in the surroundings of a robot at distances of up to five meters with low-cost and low-weight thermal cameras. Using a transfer learning technique, we trained GoogLeNet and MobileNetV2; the results show accuracies of 99.48 % and 99.06 %, respectively.
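A typical transfer-learning recipe of the kind the abstract describes, here with torchvision's MobileNetV2: freeze the pretrained backbone and retrain only a new two-class head (human / non-human). The exact training setup of the paper is not reproduced; this is an illustrative baseline.

```python
import torch.nn as nn
from torchvision import models

# Load ImageNet-pretrained weights and freeze the feature extractor.
model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
for param in model.features.parameters():
    param.requires_grad = False

# Replace the classifier head: 2 output classes (human / non-human).
model.classifier[1] = nn.Linear(model.last_channel, 2)
# Only the new head is then trained on labelled thermal images.
```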
While most ultrafast time-resolved optical pump-probe experiments in magnetic materials reveal the spatially homogeneous magnetization dynamics of ferromagnetic resonance (FMR), here we explore the magneto-elastic generation of GHz-to-THz frequency spin waves (exchange magnons). Using analytical magnon oscillator equations, we apply time-domain and frequency-domain approaches to quantify the results of ultrafast time-resolved optical pump-probe experiments in free-standing ferromagnetic thin films. Simulations show excellent agreement with the experiment, provide acoustic and magnetic (Gilbert) damping constants and highlight the role of symmetry-based selection rules in phonon-magnon interactions. The analysis is extended to hybrid multilayer structures to explore the limits of resonant phonon-magnon interactions up to THz frequencies.
Variable refrigerant flow (VRF) and variable air volume (VAV) systems are considered among the best heating, ventilation, and air conditioning (HVAC) systems thanks to their ability to provide cooling and heating in different thermal zones of the same building, as well as their ability to recover the heat rejected from spaces requiring cooling and reuse it to heat other spaces. At the same time, however, these systems are among the most energy-consuming systems in a building, so it is crucial to size the system correctly according to the building's cooling and heating needs and the indoor temperature fluctuations. This study compares these two energy systems by conducting an energy model simulation of a real building under a semi-arid climate for cooling and heating periods. The developed building energy model (BEM) was validated and calibrated using measured and simulated indoor air temperature and energy consumption data. The study evaluates the effect of these HVAC systems on the energy consumption and indoor thermal comfort of the building. The numerical model is based on the EnergyPlus simulation engine. The approach used in this paper achieves significant quantitative energy savings along with a high level of indoor thermal comfort when using the VRF system compared to the VAV system: the findings show that the VRF system provides 46.18% annual total heating energy savings and 6.14% annual cooling and ventilation energy savings compared to the VAV system.
Skin cancer detection proves to be complicated and highly dependent on the examiner's skills. Millimeter-wave technologies seem to be a promising aid for the detection of skin cancer: the water content of a skin area affected by cancer differs from that of healthy skin, which changes its reflective properties. Due to the limited data available on the dielectric properties of skin cancer, especially in comparison to the surrounding healthy skin, accurate simulations and evaluations are quite challenging, and comparing results obtained with different approaches and starting points can be difficult. In this paper, the Effective Medium Theory is applied to model skin cancer, providing permittivity values that depend on the water content.
A method for evaluating skin cancer detection based on millimeter-wave technologies is presented. For this purpose, the relative permittivities of benign and cancerous lesions are calculated using the effective medium theory, considering the difference in water content between them. These calculated relative permittivities are then used to simulate and evaluate skin cancer detection with a substrate-integrated waveguide probe. In the best case, a difference in the simulated scattering parameter S11 of up to 13 dB between healthy and cancerous skin can be determined.
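The two abstracts above do not state which effective-medium mixing rule is applied. As one plausible illustration, the classic Maxwell Garnett formula for spherical water inclusions in a tissue host maps water volume fraction to effective permittivity; the permittivity values below are rough placeholders, not measurements from the papers.

```python
def maxwell_garnett(eps_host, eps_incl, f):
    """Effective permittivity of spherical inclusions with volume fraction f
    embedded in a host medium, per the Maxwell Garnett mixing rule."""
    num = eps_incl + 2 * eps_host + 2 * f * (eps_incl - eps_host)
    den = eps_incl + 2 * eps_host - f * (eps_incl - eps_host)
    return eps_host * num / den

# Illustrative complex permittivities only (not from the papers),
# loosely in the range reported for tissue around millimeter waves.
eps_dry, eps_water = 2.5 - 0.3j, 12.0 - 15.0j
for f in (0.2, 0.4, 0.6):            # water volume fractions to compare
    print(f, maxwell_garnett(eps_dry, eps_water, f))
```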
Investigation on Bowtie Antennas Operating at Very Low Frequencies for Ground Penetrating Radar
(2023)
The efficiency of Ground Penetrating Radar (GPR) systems depends significantly on the antenna performance, as the signal has to propagate through lossy and inhomogeneous media. GPR antennas should have a low operating frequency for greater penetration depth, high gain and efficiency to increase the received power, and a compact, lightweight design for ease of GPR surveying. In this paper, two different designs of Bowtie antennas operating at very low frequencies are proposed and analyzed.
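The size pressure behind these designs follows directly from the wavelength. As a back-of-the-envelope sketch (the frequencies, the quarter-wave rule of thumb, and the soil permittivity below are assumptions, not the paper's figures), lowering the operating frequency stretches the required bowtie arm length, while a high-permittivity medium shrinks it:

```python
C0 = 3e8  # free-space speed of light, m/s

def bowtie_arm_length(f_hz, eps_r=1.0):
    """Rule-of-thumb arm length (~ quarter wavelength) for a bowtie
    antenna at frequency f_hz in a medium of relative permittivity eps_r."""
    wavelength = C0 / (f_hz * eps_r ** 0.5)
    return wavelength / 4

for f in (100e6, 300e6, 500e6):      # candidate GPR frequencies (assumed)
    print(f"{f/1e6:.0f} MHz: {bowtie_arm_length(f):.2f} m in air, "
          f"{bowtie_arm_length(f, eps_r=9):.2f} m in moist soil (eps_r = 9)")
```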
The Transport Layer Security (TLS) protocol is a widespread cryptographic protocol designed to provide secure communication over insecure networks by providing authenticity, integrity, and confidentiality. As a first step, a common master secret is negotiated in the TLS Handshake Protocol. In many configurations, this step makes considerable use of asymmetric cryptographic algorithms, and it seems to be a prevalent assumption that such algorithms are unsuitable for resource-constrained devices. Therefore, the work at hand analyzes the runtime performance of TLS v1.2 session establishment on an embedded ARM Cortex-M4 platform. We measure the execution time to generate and parse session establishment messages for the client and server sides. In particular, we study the impact of different elliptic curves used for the ephemeral Diffie-Hellman key exchange and the impact of different lengths and subject public key algorithms of certification paths. Our analysis shows that the use of asymmetric cryptographic algorithms is feasible on resource-constrained devices if they are carefully chosen and well implemented. This allows the well-proven TLS protocol to be used also for applications from the (Industrial) Internet of Things, including Fieldbus communication.
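The curve comparison at the heart of this measurement can be reproduced in miniature on a desktop with the Python cryptography package. Absolute timings on a PC say nothing about a Cortex-M4, but the relative cost of the curves is visible; the curve list and iteration count are arbitrary choices, not the paper's benchmark setup.

```python
import time
from cryptography.hazmat.primitives.asymmetric import ec

# Curves to compare (an assumed selection, not the paper's exact set).
CURVES = [ec.SECP256R1(), ec.SECP384R1(), ec.SECP521R1()]

for curve in CURVES:
    start = time.perf_counter()
    for _ in range(20):
        ours = ec.generate_private_key(curve)           # our ephemeral key pair
        theirs = ec.generate_private_key(curve)         # peer's ephemeral key pair
        ours.exchange(ec.ECDH(), theirs.public_key())   # derive the shared secret
    elapsed = (time.perf_counter() - start) / 20
    print(f"{curve.name}: {elapsed * 1e3:.1f} ms per ephemeral ECDH")
```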
This paper presents a system that uses a multi-stage AI analysis method based on machine learning to determine the condition and status of bicycle paths. The approach includes three stages of analysis: detection of the road surface, investigation of the condition of the bicycle paths, and identification of substrate characteristics. In this study, we focus on the first stage. The approach employs a low-threshold data collection method using smartphone-generated video data for image recognition in order to automatically capture and classify surface condition and status.
For the analysis, convolutional neural networks (CNNs) are employed. CNNs have proven effective in image recognition tasks and are particularly well suited to analyzing the surface condition of bicycle paths, as they can identify patterns and features in images. By training the CNN on a large dataset of images with known surface conditions, the network can learn to identify common features and patterns and classify them reliably.
The results of the analysis are then displayed on digital maps and can be utilized in areas such as bicycle logistics, route planning, and maintenance. This can improve safety and comfort for cyclists while promoting cycling as a mode of transportation. It can also assist authorities in maintaining and optimizing bicycle paths, leading to a more sustainable and efficient transportation system.
In recent times, 5G has found applications in several public as well as private networks, and there is a growing need to make it compatible with diverse services without compromising security. The current security options for authenticating devices into a home network are 5G Authentication and Key Agreement (5G-AKA) and Extensible Authentication Protocol (EAP)-AKA'. However, for specific use cases such as private networks, more customizable and convenient authentication mechanisms are required. Current mobile networks authenticate based only on SIM cards, but as 5G is applied in fields like the IIoT and automation, including in Non-Public Networks (NPNs), a simpler method of authentication is needed. Certificate-based authentication is one such mechanism: it is passwordless and relies solely on the information in the digital certificate that the user holds. This paper suggests an authentication mechanism that performs certificate-based mutual authentication between the UE and the home network; the proposed concept identifies both the user and the network by digital certificates and carries out primary authentication on this basis. We study presently available authentication protocols for 5G networks, both theoretically and experimentally, in hardware as well as virtual environments. Based on this analysis, a series of steps for certificate-based primary authentication is proposed.
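Certificate-based mutual authentication is easiest to picture via its TLS analogue. The sketch below configures mutual TLS with Python's standard ssl module, both ends presenting a certificate and verifying the peer's; this illustrates the trust relationships only, not the actual 5G primary-authentication message flow, and all file names are placeholders.

```python
import ssl

# "Network" side: require a client certificate (mutual authentication).
server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
server_ctx.load_cert_chain("network_cert.pem", "network_key.pem")
server_ctx.load_verify_locations("ue_ca.pem")   # CA that issued UE certificates
server_ctx.verify_mode = ssl.CERT_REQUIRED      # reject peers without a certificate

# "UE" side: present our certificate and verify the network's certificate.
client_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
client_ctx.load_cert_chain("ue_cert.pem", "ue_key.pem")
client_ctx.load_verify_locations("network_ca.pem")
```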
Frequent short-term orders for manufactured products require high machine availability. This requirement increases the importance of predictive maintenance solutions for the bearings used in machines. Among the available solutions are hybrid ones that rely on a physical model; for their use, knowing the different degradation stages of bearings is essential. To provide this knowledge, this research analyzes the underlying failure mechanisms of these stages theoretically and in a practical example on the well-known FEMTO dataset used for the IEEE PHM 2012 Data Challenge. In addition, it shows for which use cases low-frequency accelerometers are sufficient. The analysis shows that the degradation stages toward the end of the bearing life can also be detected with low-frequency accelerometers, and it points out the importance of high-frequency accelerometers for detecting bearing faults in early degradation stages. Industry and research have so far paid little attention to these aspects, despite their considerable cost-saving potential.
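Stage detection of this kind usually starts from simple time-domain condition indicators. As a sketch (this feature set is a common choice, not necessarily the one used in the paper): RMS tracks the overall vibration energy that grows toward end of life, while kurtosis and crest factor respond to the impulsive impacts of early local defects.

```python
import numpy as np
from scipy.stats import kurtosis

def degradation_features(signal):
    """Simple time-domain indicators often used to track bearing wear."""
    rms = np.sqrt(np.mean(signal ** 2))
    kurt = kurtosis(signal, fisher=False)     # equals 3.0 for a Gaussian signal
    crest = np.max(np.abs(signal)) / rms      # peakiness relative to energy
    return {"rms": rms, "kurtosis": kurt, "crest_factor": crest}

# Illustrative use on one FEMTO-style acceleration record (values invented):
snapshot = np.random.randn(2560) * 0.1        # stand-in for a measured snapshot
print(degradation_features(snapshot))
```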
Training deep neural networks using backpropagation is very memory- and compute-intensive, which makes it difficult to run on-device learning or to fine-tune neural networks on tiny embedded devices such as low-power microcontroller units (MCUs). Sparse backpropagation algorithms try to reduce the computational load of on-device learning by training only a subset of the weights and biases. Existing approaches train a static number of weights; a poor choice of this so-called backpropagation ratio either limits the computational gain or can lead to severe accuracy losses. In this paper we present TinyProp, the first sparse backpropagation method that dynamically adapts the backpropagation ratio during on-device training for each training step. TinyProp induces a small calculation overhead to sort the elements of the gradient, which does not significantly impact the computational gains. TinyProp works particularly well for fine-tuning trained networks on MCUs, a typical use case for embedded applications. On three typical datasets, MNIST, DCASE2020, and CIFAR10, we are 5 times faster than non-sparse training with an average accuracy loss of 1%. On average, TinyProp is 2.9 times faster than existing static sparse backpropagation algorithms, and the accuracy loss is reduced on average by 6% compared to a typical static setting of the backpropagation ratio.
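The core mechanism, updating only the largest-magnitude fraction of gradient entries, fits in a few lines of NumPy. The ratio-adaptation rule shown is only in the spirit of TinyProp (an assumption; the paper derives its own per-step rule):

```python
import numpy as np

def sparse_grad(grad, ratio):
    """Keep only the largest-magnitude fraction `ratio` of gradient entries,
    zeroing the rest -- the core of sparse backpropagation."""
    k = max(1, int(ratio * grad.size))
    flat = np.abs(grad).ravel()
    threshold = np.partition(flat, -k)[-k]    # k-th largest magnitude
    return grad * (np.abs(grad) >= threshold)

def dynamic_ratio(loss, loss_max, r_min=0.05, r_max=0.5):
    """Toy dynamic scheme (assumed, not the paper's rule): spend more
    backpropagation on high-loss steps, less once training converges."""
    return r_min + (r_max - r_min) * min(loss / loss_max, 1.0)
```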
The automatic processing of handwritten forms remains a challenging task, in which the detection and subsequent classification of handwritten characters are essential steps. We describe a novel approach in which both steps, detection and classification, are executed as one task by a deep neural network. The training data is therefore not annotated by hand but generated artificially from the underlying forms and existing datasets. We demonstrate that this single-task approach is superior to the state-of-the-art two-task approach. The current study focuses on handwritten Latin letters and employs the EMNIST dataset; however, limitations of this dataset were identified, necessitating further customization. Finally, an overall recognition rate of 88.28% was attained on real data obtained from a written exam.
As cyber-attacks and functional safety requirements increase in Operational Technology (OT), implementing security measures becomes crucial. The IEC/IEEE 60802 draft standard addresses the security convergence in Time-Sensitive Networks (TSN) for industrial automation. We present the standard's security architecture and its goals of establishing end-to-end security with resource access authorization in OT systems. We compare the standard to our abstract, technology-independent model for the management of cryptographic credentials during the lifecycles of OT systems. Additionally, we implemented the processes, mechanisms, and protocols needed for IEC/IEEE 60802 and extended the architecture with public key infrastructure (PKI) functionalities to support complete security management processes.
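One building block of such PKI functionality, issuing a device credential from an operator CA, can be sketched with the Python cryptography package. This shows only the issuance step of the credential lifecycle; names, key types, and validity period are assumptions, and the enrollment, renewal, and revocation protocols that a complete IEC/IEEE 60802 deployment needs are out of scope here.

```python
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Hypothetical operator CA and a fresh key pair for one OT device.
ca_key = ec.generate_private_key(ec.SECP256R1())
device_key = ec.generate_private_key(ec.SECP256R1())

subject = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "device-0001")])
issuer = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "plant-ca")])
now = datetime.datetime.now(datetime.timezone.utc)

cert = (
    x509.CertificateBuilder()
    .subject_name(subject)
    .issuer_name(issuer)
    .public_key(device_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=365))  # assumed lifetime
    .sign(ca_key, hashes.SHA256())                        # CA signs the credential
)
```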
In this paper we present the concept of the "KI-Labor Südbaden" to support regional companies in the use of AI technologies. The approach is based on the "Periodic Table of AI" and is extended with new dimensions for sustainability and for the impact of AI on the working environment. It is illustrated by three real-world use cases: 1. the detection of humans in low-resolution infrared (IR) images for collaborative robotics; 2. the use of machine data from specifically designed vehicles; 3. state-of-the-art Large Language Models (LLMs) applied to internal company documents. We explain the use cases, thereby demonstrating how the Periodic Table of AI can be applied to structure AI applications.
Currently, immersive technologies are enjoying great popularity. This trend is reflected in technological advances and the emergence of new products for the mass market, such as augmented reality glasses. The range of applications for immersive technologies is growing with more efficient and affordable technology and its adoption by students. In education especially, its use can improve existing learning methods. Immersive applications engage the user in a virtual environment through visual, auditory, and haptic channels; this impression is reinforced by realistic visualizations and opportunities for interaction. Augmented reality in particular is characterized by a high degree of integration between reality and the inserted virtual objects. An augmented interactive simulation for determining the specific charge of the electron serves as an example of how such immersion can be created for users. A virtual Helmholtz coil is used to measure and calculate the e/m constant. Both the cathode voltage for generating the electron beam and the voltage driving the homogeneous magnetic field that deflects the beam can be varied through haptic user input. Based on these voltages, an immersive virtual electron beam is calculated and visualized. In this paper, the authors present the conceptual steps of this immersive application and address the challenges associated with designing and developing an augmented and interactive simulation.
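The physics inside the simulation is compact enough to sketch. Assuming the standard fine-beam-tube relations (the paper does not spell out its formulas), the Helmholtz pair sets the flux density from coil geometry and current, and the specific charge follows as e/m = 2U/(Br)². The coil parameters and readings below are illustrative values chosen to land near the accepted 1.76 × 10^11 C/kg.

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, T*m/A

def helmholtz_field(n_turns, current_a, radius_m):
    """Flux density at the center of a Helmholtz coil pair."""
    return (4 / 5) ** 1.5 * MU0 * n_turns * current_a / radius_m

def specific_charge(u_volts, b_tesla, r_m):
    """e/m from accelerating voltage U, field B and beam radius r."""
    return 2 * u_volts / (b_tesla * r_m) ** 2

# Assumed coil geometry and instrument readings, not values from the paper:
B = helmholtz_field(n_turns=130, current_a=1.37, radius_m=0.15)
print(specific_charge(u_volts=250, b_tesla=B, r_m=0.05))  # ~1.76e11 C/kg
```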
Redesigning a curriculum for teaching media technology is a major challenge. Up-to-date teaching and learning concepts are necessary that keep pace with constant technological progress and prepare students specifically for their professional lives. Teaching and studying should be characterized by a student-oriented teaching and learning culture, and consistent evaluation is essential to achieving this goal. The aim of the evaluation concept presented here is to generate structured information on the quality of content-related, didactic, and organizational aspects of teaching. The exchange of opinions between students and lecturers should be encouraged in order to continuously improve teaching and learning processes.
The paper focuses on the activities of the International Year of Light and Optical Technologies 2015 (IYL), their impact on life, science, art, culture, education, and outreach, and their importance in promoting the objectives of sustainable development. It describes our activities carried out in the run-up to and during the IYL, and reports on the projects that led to the IYL's success, which is illustrated by examples and statistics. Building on the potential and success of the IYL, the genesis and impact of the International Day of Light (IDL) are presented, along with impressions from the opening ceremony of the IYL at UNESCO headquarters in Paris and the inaugural ceremony of the IDL. A second focus is placed on the interdisciplinary media projects realized by the students of our university and dedicated to these events. Finally, an analysis of the impact and legacy of the IYL and IDL is presented.
Wireless communication networks are crucial for enabling megatrends like the Internet of Things (IoT) and Industry 4.0. However, testing these networks can be challenging due to their complex network topology and RF characteristics, which require a multitude of scenarios to be tested. To address this challenge, the authors developed and extended an automated testbed called the Automated Physical TestBed (APTB). It provides the means to conduct controlled tests, analyze coexistence, emulate multiple propagation paths, and model dependable channel conditions, and it supports test automation to facilitate efficient and systematic experimentation. This paper describes the extended architecture, implementation, and performance evaluation of the APTB, which offers a reliable and efficient solution for testing wireless communication networks under various scenarios; the implementation and performance verification demonstrate its effectiveness and usefulness for researchers and industry practitioners.
Fused Filament Fabrication (FFF) is a widespread additive manufacturing technology, mostly in the field of printable polymers. The use of filaments filled with metal particles for manufacturing metallic parts by FFF presents specific challenges regarding debinding and sintering. For aluminium and its alloys, the sintering temperature range overlaps with the thermal decomposition range of many commonly used "backbone" polymers, which provide stability to the green parts. Moreover, the high oxygen affinity of aluminium necessitates special sintering regimes and alloying strategies, making it challenging to achieve both low porosity and low levels of oxygen and carbon impurities at the same time. Feedstocks compatible with the special requirements of aluminium alloys were developed. We present results from the investigation of debinding and sintering regimes by Fourier transform infrared (FTIR) spectroscopy-based in-situ process gas analysis and discuss optimized thermal treatment strategies for Al-based FFF.
This book constitutes the proceedings of the 23rd International TRIZ Future Conference on Towards AI-Aided Invention and Innovation, TFC 2023, which was held in Offenburg, Germany, during September 12–14, 2023. The event was sponsored by IFIP WG 5.4.
The 43 full papers presented in this book were carefully reviewed and selected from 80 submissions. The papers are divided into the following topical sections: AI and TRIZ; sustainable development; general vision of TRIZ; TRIZ impact in society; and TRIZ case studies.
3D Bin Picking with an innovative powder filled gripper and a torque controlled collaborative robot
(2023)
A new and innovative powder-filled gripper concept is introduced for picking parts out of a box without a camera system guiding the robot to the part. The gripper is a combination of an inflatable skin and a powder inside. In the unjammed condition, the powder is soft and can adjust to the geometry of the part to be handled. By applying a vacuum to the inflatable skin, the powder jams and solidifies in the shape into which the gripper was pressed before the vacuum was applied. This physical principle is used to pick parts: the flexible skin of the gripper adjusts to all kinds of shapes and can therefore be used to realize 3D bin picking. With the help of a force-controlled robot, the gripper can be pushed with a consistent force onto varying positions depending on the filling level of the box. A KUKA LBR iiwa with joint torque sensors in all of its seven axes was used to achieve a constant contact pressure, which is the basic criterion for a robust picking process.
The use of artificial intelligence continues to impact a broad variety of domains, application areas, and people. However, the interpretability, understandability, responsibility, accountability, and fairness of the algorithms' results, all crucial for increasing humans' trust in the systems, are still largely missing. The purpose of this seminar is to understand how these components factor into a holistic view of trust. Further, the seminar seeks to identify design guidelines and best practices for building interactive visualization systems that calibrate trust.
Landing heel first has been associated with elevated external knee abduction moments (KAM) and may thereby increase the risk of sustaining a non-contact ACL injury. Apart from the foot strike angle, the knee valgus angle (VAL) and the vertical center-of-mass velocity at initial ground contact (IC) have been associated with increased KAM in females across different sidestep cuts. While real-time biofeedback training has proven effective for gait retraining [4], the highly dynamic, non-cyclical nature of cutting maneuvers makes real-time feedback unsuitable and alternative approaches necessary. This study aimed to assess the efficacy of immediate software-aided feedback on cutting technique in reducing KAM during handball-specific cutting maneuvers.
Due to globalization and the resulting increase in competition on the market, products must be produced ever more cheaply, especially in series production, because buyers expect new variants or even completely new products in ever shorter cycles. Injection molding is the most important production process for manufacturing plastic components in large quantities. However, the conventional production of a mold is extremely time-consuming and costly, which contradicts the fast pace of the market. Additive tooling is an application area of additive manufacturing that, in the field of injection molding, is preferably used for the prototype production of mold inserts; it allows injection molding tools to be produced faster and more cheaply than the subtractive manufacturing of metal tools. Material jetting processes using polymers (MJT-UV/P), also called PolyJet Modeling (PJM), have great potential for use in additive tooling. Because of the poorer mechanical and thermal properties compared to conventional mold insert materials such as steel or aluminum, the established design principles cannot be applied, and new design guidelines are necessary; these are developed in this paper. The necessary information is obtained through a systematic literature review. The design guidelines are compiled in a uniform design guide structured according to the design process of injection molds. The guidelines refer not only to the constructive design of the injection mold or the polymer mold insert but to the entire design process, describing the four phases of planning, conception, development, and realization. Particular attention is paid to the special geometric designs of a polymer mold insert and the thermomechanical properties of the mold insert materials. As a result, design guidelines are available that are adapted to the special requirements of additive tooling of mold inserts made of plastics for injection molding.
Visual programming languages (VPL) let users develop software programs by combining visual program elements, such as lists of objects, loops, or conditional statements, rather than by specifying them textually.
Humanoid robot programming is a very attractive and motivating application domain for students, especially for programming beginners. Humanoid robots mimic the human body by using actuators that perform like muscles. Typically, a humanoid robot has a torso, a head, two arms, and two legs equipped with sensors and actuators, though some humanoid robots may replicate only part of the body, for example from the waist up. In some cases, humanoid robots have heads designed to replicate additional human facial features such as eyes. A robot needs additional sensors to gather information about the conditions of its environment so that it can make the decisions about its position or the actions a situation requires, e.g. an arm movement or opening and closing a hand. Other examples of sensors are reflective infrared sensors used to detect objects in proximity.
In this work, we introduce a use-case-centered approach based on the sensors and actuators of a robot, together with a workflow model to visually describe sequences of actions, including conditional and concurrent actions. We provide an in-depth discussion of a new VPL-based teaching method for programming humanoid robots. Open research challenges, limits, and perspectives for the further development of our teaching approach are discussed as well.
The main advantage of mobile context-aware applications is that they provide effective and tailored services by considering the environmental context, such as location, time, nearby objects, and other data, and by adapting their functionality to changing situations without explicit user interaction. The idea behind Location-Based Services (LBS) and Object-Based Services (OBS) is to offer fully customizable services for user needs according to the location or the objects in a mobile user's vicinity. However, developing mobile context-aware software is considered one of the most challenging application domains because of the many built-in sensors of mobile devices. Visual Programming Languages (VPL) and hybrid visual programming languages are innovative approaches to addressing this inherent complexity. The key contribution of our new development approach for location- and object-based mobile applications is a use-case-driven process built on use case templates and visual code templates, enabling even programming beginners to create context-aware mobile applications. An example of the use of the development approach is presented, and open research challenges and perspectives for the further development of our approach are formulated.
Sensors and actuators enable the creation of context-aware applications that can discover and take advantage of contextual information, such as user location and nearby people and objects. In this work, we use a general context definition that can be applied to various devices, e.g., robots and mobile devices. Developing context-based software is considered one of the most challenging application domains because of the sensors and actuators that are part of a device. We introduce a new development approach for context-based applications using use-case descriptions and Visual Programming Languages (VPL). The introduction of web-based VPLs, such as Scratch and Snap, has reinvigorated the usefulness of VPLs. We provide an in-depth discussion of our new VPL-based method, a step-by-step development process for context-based applications. Two case studies illustrate how to apply our approach to different problem domains: context-based mobile apps and context-based humanoid robot applications.
Public educational institutions are increasingly confronted with a decline in the number of applicants, which is why competition between colleges and universities is intensifying. It is therefore important for an institution to position itself so that it is perceived by its various target groups and differentiates itself from the competition. In this context, the brand, and thus its perception and impact, plays a decisive role, especially in view of the desired communication of the institution's own values and self-image, its brand identity. Emotions serve here as a means of creating positive stimulation and brand loyalty.
Polyarticulated active prostheses constitute a promising solution for upper limb amputees; the bottleneck for their adoption, though, is the lack of intuitive control. In this context, machine learning algorithms based on pattern recognition from electromyographic (EMG) signals represent a great opportunity for naturally operating prosthetic devices, but their performance is strongly affected by the selection of input features. In this study, we investigated different combinations of 13 EMG-derived features obtained from EMG signals of healthy individuals performing upper limb movements and tested their performance for movement classification using an Artificial Neural Network. We found that the input data (i.e., the set of input features) can be reduced by more than 50% without any loss in accuracy, while diminishing the computing time required to train the classifier. Our results indicate that input features must be properly selected in order to optimize prosthetic control.
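Feature extraction of this kind typically windows the EMG stream and computes time-domain descriptors per window. A NumPy sketch with four classic features follows; which of the paper's 13 features were actually retained is not stated here, so this particular set is an assumption.

```python
import numpy as np

def emg_features(window):
    """Four classic time-domain EMG features for one analysis window."""
    mav = np.mean(np.abs(window))                   # mean absolute value
    wl = np.sum(np.abs(np.diff(window)))            # waveform length
    zc = np.sum(np.diff(np.sign(window)) != 0)      # zero crossings
    rms = np.sqrt(np.mean(window ** 2))             # root mean square
    return np.array([mav, wl, zc, rms])

def feature_matrix(windows):
    """Stack per-window features into an ANN input matrix.
    windows: array of shape (n_windows, n_samples)."""
    return np.vstack([emg_features(w) for w in windows])
```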
An international study summarizes the threat situation in the OT environment under the heading "Growing security threats" [1]. According to this study, attacks on automation systems are likely to increase in the future. Accordingly, an automation system must be able to protect the integrity of the transmitted information in the future. This requirement is motivated, among other things, by the fact that the network-side isolation of industrial communication systems is no longer considered sufficient as the sole protective measure. This paper uses the example of PROFINET to show how the future requirements for a real-time communication protocol can be met and how they can be derived from the IEC 62443 standard.
The variable refrigerant flow (VRF) system is one of the best heating, ventilation, and air conditioning (HVAC) systems thanks to its ability to provide thermal comfort inside buildings. At the same time, however, these systems are among the most energy-consuming systems in the building sector, so it is crucial to size them correctly according to the building's cooling and heating needs and the indoor temperature fluctuations. Although many researchers have studied the optimization of building energy performance considering heating or cooling needs, using air handling units, radiant floor heating, and direct expansion valves, few studies have considered multi-objective optimization using only the thermostat setpoints of VRF systems for both cooling and heating. The main aim of this study is therefore to conduct a sensitivity analysis and a multi-objective optimization for a residential building containing a variable refrigerant flow system, to evaluate the effect of building performance on energy consumption and to improve the building's energy efficiency. The numerical model was based on the EnergyPlus, jEPlus, and jEPlus+EA simulation tools. The approach used in this paper achieves significant quantitative energy savings by varying the cooling and heating setpoints and scheduling scenarios. It should be stressed that this approach could be applied to several HVAC systems to reduce building energy consumption.
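The search structure behind such a study can be illustrated without any simulation engine. The sketch below replaces the EnergyPlus/jEPlus+EA evaluation with a toy cost model (entirely assumed) and filters candidate setpoint pairs down to the Pareto front over energy use and discomfort, the same pattern an evolutionary optimizer automates:

```python
import random

def simulate(cool_sp, heat_sp):
    """Stand-in for an EnergyPlus run: returns (energy_kWh, discomfort_h).
    The coefficients are invented and only mimic the real trade-off:
    relaxed setpoints save energy but cost comfort."""
    energy = 4000 - 120 * (cool_sp - 24) + 150 * (heat_sp - 20)
    discomfort = max(0.0, cool_sp - 26) * 90 + max(0.0, 21 - heat_sp) * 80
    return energy, discomfort

def dominates(a, b):
    """True if objective pair a is at least as good as b in both
    objectives and strictly better in one (minimization)."""
    return a[0] <= b[0] and a[1] <= b[1] and a != b

# Random candidate setpoints within plausible comfort bands (assumed).
candidates = [(random.uniform(24, 28), random.uniform(19, 22))
              for _ in range(200)]
results = [(sp, simulate(*sp)) for sp in candidates]
front = [(sp, obj) for sp, obj in results
         if not any(dominates(other, obj) for _, other in results)]
print(f"{len(front)} non-dominated setpoint pairs out of {len(results)}")
```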