Recently, the German federal government published new climate goals aiming at climate neutrality by 2045. This paper demonstrates a path to a cost-optimal energy supply system for the German power grid up to the year 2050. With special regard to regionality, the system is based on yearly myopic optimization of the required energy system transformation measures and the associated system costs. The results show that energy storage systems (ESS) are fundamental for integrating renewables and thus for a feasible energy transition. Moreover, investment in storage technologies increases the usage of solar and wind technologies: solar energy investments were largely accompanied by the installation of short-term battery storage, while longer-term storage technologies, such as hydrogen, were accompanied by large installations of wind technologies. The results further indicate that hydrogen investments are expected to outcompete short-term batteries if hydrogen costs continue to decrease sharply, and that with a strong presence of ESS in the energy system, biomass is expected to be ruled out of the energy mix entirely. With the current emission reduction strategy and without a strong presence of large-scale ESS in the system, it is unlikely that the Paris Agreement 2 °C target will be achieved by 2050, let alone the 1.5 °C target.
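The yearly myopic approach described above can be illustrated with a small sketch. All technologies, costs, and learning rates below are invented for illustration and are not taken from the paper; the point is only that per-year cost minimization without foresight can make the chosen mix switch from batteries to hydrogen as costs decline:

```python
def myopic_plan(years, costs, learning):
    """Yearly myopic choice: in each planning year, pick whichever technology
    is currently cheapest, without anticipating future cost developments."""
    plan = {}
    for i, year in enumerate(years):
        current = {t: c * (1 - learning[t]) ** i for t, c in costs.items()}
        plan[year] = min(current, key=current.get)
    return plan

years = list(range(2025, 2051, 5))                 # five-year planning steps
costs = {"battery": 100.0, "hydrogen": 180.0}      # illustrative cost per kWh
learning = {"battery": 0.03, "hydrogen": 0.15}     # assumed cost decline per step
plan = myopic_plan(years, costs, learning)
print(plan[2025], plan[2050])                      # batteries early, hydrogen later
```

With these assumed learning rates, hydrogen becomes the cheapest option only in the last planning year, mirroring the "hydrogen overtakes batteries if costs fall sharply" observation in the abstract.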
In railway technical centers, scheduling maintenance activities is a very complex task: all maintenance operations must be ordered in time across the workstations while respecting resource limits, precedence constraints, and workstation availabilities. Currently, this process is not fully automated. To improve this situation, this paper presents a mathematical model for scheduling maintenance activities in railway remanufacturing systems. The studied problem is modeled as a flexible job-shop in which a job may be executed several times on the same stage. A MILP formulation is implemented with the makespan, i.e. the time to remanufacture the train, as the objective. The aim is to create a generic model for optimizing the planning of maintenance activities and improving the performance of railway technical centers. Finally, numerical results are presented, discussing the impact of instance size on the computing time needed to solve the described problem.
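A minimal sketch of the problem class described above: a flexible job-shop in which a job may visit the same stage several times and the makespan is minimized. The toy instance, the brute-force search over machine assignments, and the simple list-scheduling rule are illustrative simplifications, not the paper's MILP formulation:

```python
import itertools

def makespan(jobs, assignment):
    """List-schedule operations in flattened job order; each machine
    processes one operation at a time, job precedence is respected."""
    machine_free = {}   # machine -> time it becomes free
    job_done = {}       # job index -> completion time of its last operation
    flat = [(j, o) for j, ops in enumerate(jobs) for o in range(len(ops))]
    for (j, o), (machine, dur) in zip(flat, assignment):
        start = max(job_done.get(j, 0), machine_free.get(machine, 0))
        machine_free[machine] = job_done[j] = start + dur
    return max(job_done.values())

def best_makespan(jobs):
    """Exhaustively try every machine choice for every operation (toy sizes only)."""
    all_ops = [ops for job in jobs for ops in job]  # per-operation (machine, dur) options
    return min(makespan(jobs, choice) for choice in itertools.product(*all_ops))

# Two jobs; an operation may be flexible (several machine options),
# and a job may use the same stage "A" more than once.
jobs = [
    [[("A", 3), ("B", 4)], [("A", 2)]],   # job 0: two operations
    [[("B", 2)], [("A", 3), ("B", 2)]],   # job 1: two operations
]
print(best_makespan(jobs))  # → 5
```

The exponential enumeration already illustrates why the abstract discusses the impact of instance size on computing time: exact approaches only scale to small instances.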
Lithium-ion batteries show strongly nonlinear behaviour with respect to battery current and state of charge; modelling them is therefore complex. Combining physical and data-driven models in a grey-box model can simplify the modelling. Our focus is on using neural networks, especially neural ordinary differential equations, for grey-box modelling of lithium-ion batteries. A simple equivalent circuit model serves as a basis for the grey-box model; unknown parameters and dependencies are then replaced by learnable parameters and neural networks. We use experimental full-cycle data and data from pulse tests of a lithium iron phosphate cell to train the model. Finally, we test the model against two dynamic load profiles: one consisting of half cycles and one representing a home-storage system. The dynamic response of the battery is well captured by the model.
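The grey-box idea can be sketched as a first-order RC equivalent circuit in which the open-circuit voltage OCV(SoC) is left as a learnable function. In the sketch below a fixed polynomial stands in for the trained neural network, and all parameter values are invented for illustration, not taken from the paper:

```python
def simulate_terminal_voltage(current, dt, capacity_ah, r0, r1, c1, ocv):
    """First-order RC equivalent circuit: V = OCV(SoC) - R0*I - V1,
    with dV1/dt = I/C1 - V1/(R1*C1). `ocv` is the learnable part."""
    soc, v1, out = 1.0, 0.0, []
    for i in current:
        soc -= i * dt / (capacity_ah * 3600.0)   # coulomb counting
        v1 += dt * (i / c1 - v1 / (r1 * c1))     # RC branch dynamics (Euler step)
        out.append(ocv(soc) - r0 * i - v1)
    return out

# A simple linear function stands in for the trained neural network OCV(SoC).
ocv = lambda soc: 3.0 + 0.7 * soc
v = simulate_terminal_voltage([2.0] * 600, dt=1.0, capacity_ah=2.5,
                              r0=0.02, r1=0.03, c1=2000.0, ocv=ocv)
print(round(v[-1], 3))
```

In the neural-ODE grey-box setting, the hand-written Euler update and the stand-in OCV function would be replaced by learned components trained on the cycle and pulse-test data.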
We consider a local group of agents that exchange time-series data values and compute an approximation of the mean value over all agents. An agent, represented by a node, knows all neighbour nodes in its own group and holds contact information for nodes in other groups. The nodes interact in synchronous rounds to exchange updated time-series values using the random-call communication model. The amount of data exchanged between agent-based sensors in the local group network affects the accuracy of the aggregation results. At each time step, an agent-based sensor can update its input value and send the updated value to the group head node, which forwards it to all members of the same group. Grouping nodes in peer-to-peer networks improves the mean squared error (MSE).
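A minimal simulation of random-call averaging (push-pull variant with invented values; a generic sketch, not the paper's exact protocol) shows how repeated synchronous rounds drive the MSE toward zero while preserving the true mean:

```python
import random

def gossip_round(values, contacts):
    """Random-call push-pull: each node calls one random contact;
    both replace their values with the pairwise average."""
    nodes = list(values)
    random.shuffle(nodes)
    for node in nodes:
        peer = random.choice(contacts[node])
        avg = (values[node] + values[peer]) / 2.0
        values[node] = values[peer] = avg

def mse(values, true_mean):
    return sum((v - true_mean) ** 2 for v in values.values()) / len(values)

random.seed(1)
values = {i: float(i) for i in range(16)}                       # initial readings
contacts = {i: [j for j in range(16) if j != i] for i in range(16)}
true_mean = sum(values.values()) / len(values)
before = mse(values, true_mean)
for _ in range(10):
    gossip_round(values, contacts)
after = mse(values, true_mean)
print(after < before)
```

Pairwise averaging conserves the sum of all values, so every node converges to the global mean without any central coordinator.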
Data is ever increasing in the computing world. With the advancement of cloud technology, the volume of data has grown within a short period of time and will keep increasing. Providing transparency, privacy, and security to cloud users is becoming more and more challenging along with the growing volume of data and use of cloud services. We propose a new approach to address this challenge by recording user events in the cloud ecosystem into log files and applying the MAR principle: 1) Monitoring, 2) Analyzing, and 3) Reporting.
To provide proper solutions to the problem of device-dependent content delivery, a fine categorization of the application's target devices is needed. Earlier attempts provided two different presentations, for desktop and mobile platforms, with the mobile presentation divided into three categories based on a general classification (PDA, smartphone, or mobile phone). To improve the presentation on mobile devices, a finer categorization is introduced. In this paper, our focus is to clarify the concept of this more flexible presentation module, in which the delivered content depends on the efficiency of the device with respect to a selected set of capabilities.
Improvements in the hardware and software of communication devices have made it possible to run Virtual Reality (VR) and Augmented Reality (AR) applications on them. Nowadays, it is possible to overlay synthetic information on real images, or even to play 3D online games on smartphones and other mobile devices; hence the use of 3D data for business and especially for education purposes is becoming ubiquitous. Because mobile phones are always at hand and always ready to use, they are considered the most promising communication devices. The number of mobile phone users is increasing all over the world every day, which makes mobile phones the most suitable devices for reaching a huge number of end clients for education or business purposes. There are different standards, protocols, and specifications for establishing communication among devices, but so far no initiative ensures that the data sent through this communication process can be understood and used by the destination device. Since not all devices can handle every kind of 3D data format, and it is not realistic to maintain a different version of the same data for each destination device, a general solution is necessary. The architecture proposed in this paper provides device- and purpose-independent visibility of 3D data, any time and anywhere, to the right person in a suitable format. No solution is without limitations; the architecture is implemented in a prototype for experimental validation, which also shows the difference between theory and practice.
This paper presents the results of the evaluation of two sets of mobile web design guidelines for mobile learning. The first set concerns the usage of text on mobile device screens; the second concerns the usage of images on mobile devices. The evaluation is performed by eye tracking (objective) as well as by questionnaires and interviews (subjective).
The idea of this game is to use a flashcard system to create a short story in a foreign language. The story is developed by a group of people by exchanging sentences via a flashcard system. This way, people can learn from each other without fear of making mistakes because the group members are anonymous.
Flashcards are a well-known and proven method for learning and memorising. This way of learning is perfectly suited for "learning on the way," but carrying all the flashcards around can be awkward. In this scenario, a mobile device (mobile phone) is an adequate solution. The new mobile operating system Android from Google allows for writing multimedia-enriched applications.
Electrode modelling and simulation of diagnostic and pulmonary vein isolation in atrial fibrillation
(2022)
Logging information is precious, as it records the execution of a system; it is produced by millions of events, from simple application logins to random system errors. Most security-related problems in the cloud ecosystem, such as intruder attacks, data loss, and denial of service, could be avoided if the Cloud Service Provider (CSP) or Cloud User (CU) analysed the logging information. In this paper we introduce a few challenges, namely the place of monitoring, the security, and the ownership of the logging information between CSP and CU. We also propose a logging architecture for analysing the behaviour of the cloud ecosystem, in order to avoid data breaches and other security-related issues in the CSP space. We believe our proposed architecture can provide maximum trust between CU and CSP.
The advantages of the coupling-of-modes (COM) formalism and the transmission-matrix approach are combined to create exact and computationally efficient analysis and synthesis CAD tools for the design of SAW-resonator filters. The models for the filter components, especially gratings, interdigital transducers (IDTs), and multistrip couplers (MSCs), are based on the COM approach, which delivers closed-form expressions. In order to determine the relevant COM parameters, the integrated COM differential equations are compared with analytically derived expressions from the transmission-matrix approach. The most important second-order effects, such as energy storage, propagation loss, and mechanical and electrical loading, are fully taken into account. As an example, the authors investigate a two-pole, acoustically coupled resonator filter at 914.5 MHz on AT quartz. Excellent agreement between theory and measurement is found.
Structures for interconnecting active microwave semiconductor devices, e.g. FETs and MICs, with their electrical surroundings or with each other have to be designed more and more carefully as the desired upper frequency limit increases. Therefore, several connecting structures for device embedding have been examined, mainly with regard to their applicability in the frequency range from 10 GHz to 100 GHz. Additionally, different equivalent circuits were developed to approximately describe their behaviour for CAD applications.
In short-reach connections, large-diameter multimode fibres allow for robust and easy connections. Unfortunately, their propagation properties depend on the excitation conditions. We propose a launching technique using a fibre stub that can tolerate fabrication tolerances in terms of tilts and offsets to a large extent. A study of the influence of displaced connectors along the transmission link shows that the power distributions approach a steady-state power distribution very similar to the initial distribution established by the proposed launching scheme.
In this work a set of nonlinear coupled COM equations at interacting frequencies is derived on the basis of nonlinear electro-elasticity. The formalism is presented with the aim of describing third-order intermodulation distortion (IMD3) and triple beat. The resulting COM equations are translated to the P-matrix formalism, where care is taken to obtain the correct frequency dependence. The scheme depends on two frequency-independent constants for an effective third-order nonlinearity. One of these two constants is negligibly small in the systems considered here. The P-matrix approach is applied to single filters and duplexers on LiTaO3 (YXl)/42° operating in different frequency ranges. Both IMD3 and triple beat show good agreement with measurement.
A Nonlinear FEM Model to Calculate Third-Order Harmonic and Intermodulation in TC-SAW Devices
(2018)
Nonlinearities in Temperature Compensated SAW (TC-SAW) devices in the 2 GHz range are investigated using a nonlinear finite element model by simultaneously considering both third-order intermodulation distortion (IMD3) and the third harmonic (H3). In the employed perturbation approach, different contributions to the total H3, the direct and indirect contribution, are discussed. H3 and IMD3 measurements were fitted simultaneously using scaling factors for the SiO2 film and Cu electrode nonlinear material tensors in TC-SAW devices. We employ a P-matrix simulation as an intermediate step: firstly, measurements and nonlinear P-matrix calculations for finite devices are compared and the coefficients of the P-matrix simulation are determined. The nonlinear tensor data of the different materials involved in periodic nonlinear finite element method (FEM) computations are then optimized to fit periodic P-matrix calculations by introducing scaling factors. Thus, the contribution of the different materials to the nonlinear behavior of TC-SAW devices is obtained and the role of the materials is discussed.
Today's network landscape consists of quite different network technologies, a wide range of end devices with greatly varying capabilities and power, and an immense quantity of information and data represented in different formats. Research on 3D imaging, virtual reality, and holographic techniques will result in new user interfaces (UIs) for mobile devices and will increase their diversity and variety. Many efforts are being made to establish open, scalable, and seamless integration of various technologies and of content presentation for different devices, including mobile ones, considering the individual situation of the end user. This is very difficult because the various kinds of devices, used by different users or at different times or in parallel by the same user, are not predictable and have to be recognized by the system in order to identify their capabilities. Not only the devices but also the content and user interfaces are major issues, because they may include many kinds of data formats such as text, image, audio, video, 3D virtual reality data, and other upcoming formats. A very suitable example of the use of such a system is mobile learning, because of the large number of varying devices with significantly different features and functionalities. This holds not only for supporting different learners, e.g. all learners within one learning community, but also for supporting the same learner using different equipment in parallel and/or at different times. Such applications may be significantly enhanced by including virtual reality content presentation. Whatever the purpose, it is impossible to develop and adapt content for every kind of device, including mobiles, individually, due to the differing capabilities of the devices, cost issues, and authors' requirements. A solution is needed that enables the automation of the content adaptation process.
The concept of m-learning, which differs from other forms of e-learning, covers a wide range of possibilities opened up by the convergence of new mobile technologies, wireless communication infrastructure, and developments in distance learning. This convergence has created new goals for supporting m-learning, where the heterogeneity of devices, their operating systems (Linux, Windows, Symbian, Android, etc.) and supported markup languages (WML, XHTML, etc.), adaptive content, and user preferences and characteristics have become some of the major problems to be solved. To facilitate the learning process even further and to establish literally anytime-anywhere learning, learning material should be available to users even when they are offline. Multiple devices used by the same user should also be synchronized with each other and with the server, to provide up-to-date learning content and to give users the freedom to choose whichever device is most convenient. In this paper a software architecture is proposed to solve these problems; it has been implemented in a multidimensional flashcard learning system that synchronizes all the devices used by a learner.
Today's network landscape contains many different network technologies, a wide range of end devices with greatly varying capabilities and power, and an immense quantity of information and data represented in different formats. Research on 3D imaging, virtual reality, and holographic techniques will result in new user interfaces (UIs) for mobile devices and will increase their diversity and variety. In this paper a software architecture is proposed to establish device- and content-format-independent communication, including 3D imaging and virtual reality data as content. As experimental validation, the concept is implemented in a collaborative Language Learning Game (LLG), a learning tool for language acquisition.
In the field of smart metering it can be observed that standardized protocols, like Wireless M-Bus or ZigBee, enjoy rapidly increasing popularity. For the protocol implementations, however, mostly legacy engineering processes and technologies have been used up to now, while modern approaches such as model-driven design processes or open software platforms are disregarded. Therefore, within the WiMBex project, it shall be demonstrated that it is possible to develop a commercial-class Wireless M-Bus implementation following a state-of-the-art design process and using TinyOS as an open-source platform. This contribution describes the overall approach of the project, as well as the state of and first experiences with the current work in progress.
During the last ten years the development of wireless sensing applications has become more and more attractive, largely because of the large number of available wireless technologies. The growing demand for wireless technologies is mainly driven by the industrial wireless sensor market. Requirements such as low energy consumption, a resource-saving simple protocol stack, and short timing delays between the different states of the wireless transceiver are especially important for wireless sensors. Bluetooth Low Energy (BLE) is a rather new wireless standard in addition to the traditional Bluetooth standard (basic rate and enhanced data rate, BR/EDR) [1], and it fulfills these fundamental requirements. The first BLE transceiver chips and modules are available and have been tested and implemented in products. In this paper the performance analysis results of a BLE sensor system based on the TI transceiver CC2540F [5] are presented. The results can be used for further important investigations such as lifetime calculations or BLE simulation models.
Today's network landscape consists of many different network technologies, a wide range of end devices with greatly varying capabilities and power, and an immense quantity of information and data represented in different formats. Research on 3D imaging, virtual reality, and holographic techniques will result in new user interfaces (UIs) for mobile devices and will increase their diversity and variety. In this paper a software architecture is proposed to establish device- and content-format-independent communication, implemented in a Language Learning Game (LLG).
Since cabling is very complex and often causes reliability problems in aircraft, new approaches based on wireless technologies are highly desired. In this paper an innovative communication system is proposed that uses essential elements of the airframe for data transfer. The communication is based on the wireless standard for Digital Video Broadcasting (DVB) and enables the high data rates required, for example, by the in-flight entertainment system.
This paper analyzes the applicability of existing communication technology to the Smart Grid. In particular, it evaluates how networks such as peer-to-peer (P2P) networks and decentralized Virtual Private Networks (VPNs) can help set up an agent-based system. It is expected that applications on Smart Grid devices will become more powerful and be able to operate without a central control instance. We analyze the requirements that agents and Smart Grid devices place on communication systems and validate promising approaches. The main focus is on creating a logical overlay network that provides direct communication between network nodes. We provide a comparison of different P2P network and mesh-VPN approaches. Finally, the advantages of mesh-VPNs for agent-based systems are worked out.
In large aircraft the cabling is very complex and often causes reliability problems. This is especially true for modern In-flight Entertainment (IFE) systems, where every passenger can select a preferred movie, play computer games, or communicate with other travellers. Due to EMC problems, wireless communication systems (WiFi etc.) have not succeeded in solving these problems. In this paper an innovative communication system is proposed which perfectly supplements an aircraft IFE system. The key innovation is to use structures that are essential parts of the airframe, such as seat rails, for data transfer. These rails have rectangular cross-sections and could easily be modified to serve as waveguides for microwaves. A waveguide integrated into the seat rail would provide enormous benefits: a large bandwidth and consequently high data rates, no EMC problems, unlimited flexibility of seat configuration, mechanical robustness with an associated increase in reliability, and further aircraft-related advantages such as reductions in weight and cost.
Mobile learning (m-learning) can be considered a new paradigm of e-learning. The developed solution enables the presentation of animations and 3D virtual reality (VR) on mobile devices and is well suited for mobile learning. Difficult relations in physics as well as intricate experiments in optics can be visualised on mobile devices without the need for a personal computer. By outsourcing the computation to a server, worldwide coverage is achieved.
This paper explores the potential of an m-learning environment by introducing the concept of mLab, a remote laboratory environment accessible through the use of handheld devices.
We aim to enhance the existing e-learning platform and internet-assisted laboratory settings, where students are offered in-depth tutoring, by providing compact tuition and tools for controlling simulations that are made available to learners via handheld devices. In this way, students are empowered by having access to their simulations from any place and at any time.
Brand-related user-generated content allows companies to achieve several important objectives, such as increasing sales and creating higher user engagement. In this paper a research framework is developed that provides an overview of the processes necessary to use brand-related user-generated content successfully. The framework also helps managers to understand the main motives of users posting such content. Expert interviews were carried out to validate the research framework, and their results support it. Brand-related user-generated content can increase purchase intention and community engagement. From a user's perspective, the opportunity to interact with a brand and to be featured on official brand channels appears to be the main motivation for creating brand-related user-generated content.
Transthoracic impedance cardiography (ICG) is a non-invasive method for the determination of hemodynamic parameters. The basic principle of transthoracic ICG is the measurement of the electrical conductivity of the thorax over time. The aim of the study was the analysis of hemodynamic parameters from healthy individuals and the evaluation of various hemodynamic monitoring devices. Fourteen men (mean age 25 ± 4.59 years) and twelve women (mean age 24 ± 3.5 years) were measured in the cardiovascular engineering laboratory at Offenburg University of Applied Sciences, Offenburg, Germany. The ICG recordings were acquired with the devices CardioScreen 1000, CardioScreen 2000, and TensoScreen with the corresponding software Cardiovascular Lab 2.5 (Medis Medizinische Messtechnik GmbH, Ilmenau, Germany). In order to create identical conditions, all measurements were recorded in the same position and for the same duration. Various positions were simulated, from a horizontal lying position to a vertical standing position. Altogether, more than 30 hemodynamic parameters were measured.
In contrast to conventional aortic valve replacement, Transcatheter Aortic Valve Implantation (TAVI) is a new, highly specialised alternative to surgical valve replacement for patients with symptomatic severe aortic stenosis and high operative risk. The procedure is performed in a minimally invasive way and was introduced at the University Heart Centre Freiburg – Bad Krozingen in 2008; its results have improved steadily over the years. The aim of the investigation is the analysis of electrocardiogram conduction times and of the electrocardiographic changes recorded hours and days after the procedure, depending on the artificial heart valve model, which may lead to pacemaker implantation, as well as an analysis of the effectiveness of the treatment.
In previous work, we [1] and other authors (e.g. [2]) have shown that agent-based systems are successful in optimizing the delivery plans of single logistics companies and are meanwhile successfully used in industry. In this paper we show that agent-based systems are also particularly useful for optimizing transport across logistics companies. In intercompany optimization, privacy between the otherwise competing companies is of major importance. Some data, such as the cost model or the constraint model, has to be treated as strictly private; other data, such as order information, has to be shared. However, the amount of order information released to other companies typically has to be limited as well. We show that our agent-based approach can easily be fine-tuned to trade off privacy against the benefit of cooperation.
The transition from college to university can have a variety of psychological effects on students, who need to cope with daily obligations by themselves in a new setting, which can result in loneliness and social isolation. Mobile technology, specifically mental health apps (MHapps), has been seen as a promising solution to assist university students facing these problems; however, there is little evidence on this topic. My research investigates how a mobile app can be designed to reduce social isolation and loneliness among university students. The Noneliness app is being developed to this end; it aims to create social opportunities through a quest-based gamified system in a secure and collaborative network of local users. Initial evaluations with the target audience provided evidence on how an app should be designed for this purpose. These results are presented, together with how they helped me plan the further steps towards my research goals. The paper was presented at the MobileHCI 2020 Doctoral Consortium.
Subspace clustering aims to find all clusters in all subspaces of a high-dimensional data space. We present a massively data-parallel approach that can be run on graphics processing units. It extends a previous density-based method that scales well with the number of dimensions. Its main computational bottleneck consists of (sequentially) generating a large number of minimal cluster candidates in each dimension and using hash collisions in order to find matches of such candidates across multiple dimensions. Our approach parallelizes this process by removing previous interdependencies between consecutive steps in the sequential generation process and by applying a very efficient parallel hashing scheme optimized for GPUs. This massive parallelization gives up to 70x speedup for the bottleneck computation when it is replaced by our approach and run on current GPU hardware. We note that depending on data size and choice of parameters, the parallelized part of the algorithm can take different percentages of the overall runtime of the clustering process, and thus the overall clustering speedup may vary significantly between different cases. However, even in our "worst-case" test, a small dataset where the computation makes up only a small fraction of the overall clustering time, our parallel approach still yields a speedup of more than 3x for the complete run of the clustering process. Our method could also be combined with parallelization of other parts of the clustering algorithm, with an even higher potential gain in processing speed.
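The hash-collision matching step described above can be sketched sequentially as follows (a toy illustration, not the GPU implementation): minimal 1-D cluster candidates are bucketed by the hash of their point set, and point sets hit from several dimensions become subspace-cluster seeds.

```python
from collections import defaultdict

def match_candidates(candidates_per_dim):
    """Group minimal 1-D cluster candidates (frozensets of point ids) that
    reappear in several dimensions, using their hash as the matching key.
    (Hash collisions between distinct sets are possible but very unlikely.)"""
    buckets = defaultdict(list)
    for dim, candidates in enumerate(candidates_per_dim):
        for cand in candidates:
            buckets[hash(cand)].append((dim, cand))
    # Keep point sets found in at least two dimensions.
    return [entries for entries in buckets.values() if len(entries) >= 2]

dims = [
    [frozenset({1, 2, 3}), frozenset({7, 8})],   # candidates in dimension 0
    [frozenset({1, 2, 3}), frozenset({4, 5})],   # dimension 1
    [frozenset({7, 8})],                          # dimension 2
]
matches = match_candidates(dims)
print(len(matches))  # two point sets recur across dimensions
```

The GPU version parallelizes both the candidate generation and this bucketing step, which is what yields the reported speedup on the bottleneck computation.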
Social Haptic Communication (SHC) is one of the many tactile modes of communication used by persons with deafblindness to access information about their surroundings. SHC usually involves an interpreter executing finger and hand signs on the back of a person with multi-sensory disabilities. Learning SHC, however, can become challenging and time-consuming, particularly to those who experience deafblindness later in life. In this work, we present PatRec: a mobile game for learning SHC concepts. PatRec is a multiple-choice quiz game connected to a chair interface that contains a 3x3 array of vibration motors emulating different SHC signs. Players collect scores and badges whenever they guess the right SHC vibration pattern, leading to continuous engagement and a better position on a leaderboard. The game is also meant for family members to learn SHC. We report the technical implementation of PatRec and the findings from a user evaluation.
Loneliness, an emotional distress caused by the lack of meaningful social connections, has been increasingly affecting university students who need to deal with everyday situations in a new setting, especially those who have come from abroad. Currently there is little work on digital solutions to reduce loneliness. Therefore, this work describes the general design considerations for mobile apps in this context and outlines a potential solution. The mobile app Noneliness is used to this end: it aims to reduce loneliness by creating social opportunities through a quest-based gamified system in a secure and collaborative network of local users. The results of initial evaluations with the target audience are described. The results informed a user interface redesign as well as a review of the features and the gamification principles adopted.
Wireless sensor networks have found their way into a wide range of applications, among which environmental monitoring systems have attracted increasing interest among researchers. The main challenges for these applications are the scalability of the network size and the energy efficiency of the spatially distributed nodes. Nodes are mostly battery-powered and spend most of their energy budget on the radio transceiver module; in normal operation modes, most energy is spent waiting for incoming frames. A so-called Wake-On-Radio (WOR) technology helps to optimize the trade-offs between energy consumption, communication range, implementation complexity, and response time. We previously proposed a protocol called SmartMAC that makes use of such WOR technology and additionally allows the energy consumption to be balanced between sender and receiver nodes depending on the use case. Based on several calculations and simulations, it was predicted that the SmartMAC protocol is significantly more efficient than other schemes proposed in recent publications, while preserving a certain backward compatibility with standard IEEE 802.15.4 transceivers. To verify this prediction, we implemented the SmartMAC protocol on a given hardware platform. This paper compares the real-time performance of the SmartMAC protocol against simulation results and shows that the measured values are very close to the estimated ones. We therefore believe that the proposed MAC algorithm outperforms all other wake-on-radio MACs.
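The energy argument behind wake-on-radio can be illustrated with a simple average-power model; all numbers below are invented for illustration and are not SmartMAC measurements:

```python
def avg_rx_power_mw(p_listen_mw, p_sleep_mw, t_wake_ms, period_ms):
    """Mean receiver power of a duty-cycled wake-on-radio scheme: the radio
    samples the channel for t_wake_ms every period_ms and sleeps otherwise."""
    duty = t_wake_ms / period_ms
    return duty * p_listen_mw + (1.0 - duty) * p_sleep_mw

# Illustrative figures: 60 mW active listening, 5 µW sleep.
always_on = avg_rx_power_mw(60.0, 0.005, t_wake_ms=1000.0, period_ms=1000.0)
wor = avg_rx_power_mw(60.0, 0.005, t_wake_ms=2.0, period_ms=1000.0)
print(round(always_on / wor))  # roughly two orders of magnitude saved
```

Such a first-order model is the starting point for the lifetime calculations mentioned in the abstract; a full analysis must also account for wake-up latency and the sender-side cost of longer preambles.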
Physically Unclonable Functions (PUFs) are hardware-based security primitives which allow for inherent device fingerprinting. To this end, the intrinsic variation of imperfectly manufactured systems is exploited to generate device-specific, unique identifiers. With printed electronics (PE) joining the Internet of Things (IoT), hardware-based security for novel PE-based systems is of increasing importance. Furthermore, PE offers the possibility of split manufacturing, which mitigates the risk of PUF response readout by third parties before commissioning. In this paper, we investigate a printed PUF core as an intrinsic variation source for the generation of unique identifiers from a crossbar architecture. The printed crossbar PUF is verified by simulation of an 8×8-cell crossbar, which can be utilized to generate 32-bit wide identifiers. Further focus is on limiting factors regarding printed devices, such as increased parasitics due to novel materials, and on the required control logic specifications. The simulation results highlight that the printed crossbar PUF is capable of generating close-to-ideal unique identifiers at the investigated feature size. As a proof of concept, a 2×2-cell printed crossbar PUF core is fabricated and electrically characterized.
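As an illustration of the readout principle described above, the following sketch models a crossbar PUF in simulation: cell resistances vary randomly around a nominal value, and comparing adjacent cells pairwise yields one identifier bit per pair, so an 8×8 array gives 32 bits. The Gaussian variation model, nominal resistance, and all names are illustrative assumptions, not the paper's actual device model.

```python
import random

def crossbar_puf_response(rows, cols, seed, sigma=0.05):
    """Simulate one printed crossbar device: each cell resistance deviates
    from its nominal value by random manufacturing variation (assumed model).
    The seed stands in for one device's fixed physical variation."""
    rng = random.Random(seed)
    nominal = 1000.0  # illustrative nominal cell resistance in ohms
    cells = [[nominal * (1.0 + rng.gauss(0.0, sigma)) for _ in range(cols)]
             for _ in range(rows)]
    # One response bit per pair of adjacent cells: 8x8 cells -> 32 bits
    bits = []
    for r in range(rows):
        for c in range(0, cols - 1, 2):
            bits.append(1 if cells[r][c] > cells[r][c + 1] else 0)
    return bits
```

The same device (same seed) always reproduces its identifier, while different devices yield different ones, which is the fingerprinting property the abstract refers to.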
Printed electronics (PE) offers flexible, extremely low-cost, and on-demand hardware due to its additive manufacturing process, enabling emerging ultra-low-cost applications, including machine learning applications. However, large feature sizes in PE limit the complexity of a machine learning classifier (e.g., a neural network (NN)) in PE. Stochastic computing neural networks (SC-NNs) can reduce area in silicon technologies, but still require complex designs due to unique implementation tradeoffs in PE. In this paper, we propose a printed mixed-signal system which substitutes complex and power-hungry conventional stochastic computing (SC) components with printed analog designs. The printed mixed-signal SC design consumes only 35% of the power and requires only 25% of the area of a conventional 4-bit NN implementation. We also show that the proposed mixed-signal SC-NN provides good accuracy for popular neural network classification problems. We consider this work an important step towards the realization of printed SC-NN hardware for near-sensor processing.
Activities for rehabilitation and prevention are often lengthy and associated with pain and frustration. Their playful enrichment (hereafter: gamification) can counteract this, resulting in so-called “exergames”. However, in contrast to games designed solely for entertainment, the increased motivation and immersion in gamified training can lead to a reduced perception of pain and thus to health deterioration. Therefore, it is necessary to monitor activities continuously. However, only an AI-based system able to generate autonomous interventions could free up the therapists’ costly time and allow better training at home. An automated adjustment of the movement training’s difficulty as well as individualized goal setting and control are essential to achieve such autonomy. This article’s contribution is two-fold: (1) We portray the potentials of gamification in the health area. (2) We present a framework for smart rehabilitation and prevention training allowing autonomous, dynamic, and gamified interactions.
The present work addresses the problem of bicycle road assessment, which is currently done using expensive special measuring vehicles. Our alternative approach to road condition assessment is to mount a sensor device on a bicycle which sends accelerometer and gyroscope data via WiFi to a classification server. There, a prediction model determines road type and condition based on the sensor data. For the classification task, we compare different machine learning methods with each other, whereby validation accuracies of 99% can be achieved with deep residual networks such as InceptionTime. The main contribution of this work compared to related work is that we achieve excellent accuracies on a realistic dataset, classifying road conditions into nine distinct classes that are highly relevant in practice.
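As a minimal, hypothetical baseline for the kind of pipeline described above (not the paper's InceptionTime model), one can extract simple statistics from accelerometer windows and assign each window to the nearest class centroid; all names and the feature choice are illustrative assumptions.

```python
import math
import random

def features(window):
    """Two simple statistical features from one accelerometer window:
    mean and standard deviation. Rough roads show higher variance."""
    n = len(window)
    mean = sum(window) / n
    var = sum((x - mean) ** 2 for x in window) / n
    return (mean, math.sqrt(var))

class NearestCentroid:
    """Tiny nearest-centroid classifier (Euclidean distance)."""
    def fit(self, X, y):
        sums, counts = {}, {}
        for f, label in zip(X, y):
            s = sums.setdefault(label, [0.0] * len(f))
            for i, v in enumerate(f):
                s[i] += v
            counts[label] = counts.get(label, 0) + 1
        self.centroids = {l: [v / counts[l] for v in s] for l, s in sums.items()}

    def predict(self, f):
        return min(self.centroids,
                   key=lambda l: sum((a - b) ** 2
                                     for a, b in zip(f, self.centroids[l])))
```

In practice a deep residual network operates on the raw time series, but this sketch shows the window-to-feature-to-label structure of the classification server.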
Sustainable chemical processes should be designed to combine technological advantages and progress with lower safety risks and minimal environmental impact, such as reduced consumption of raw materials, energy, and water, and the avoidance of hazardous waste and pollution with toxic chemical agents. A number of novel eco-friendly chemical technologies have been developed in recent decades with the help of eco-innovation approaches and methods such as Life Cycle Analysis, Green Process Engineering, Process Intensification, Process Design for Sustainability, and others. An emerging approach to sustainable process design in process engineering builds on innovative solutions inspired by nature. However, the implementation of eco-friendly technologies often faces secondary ecological problems. The study postulates that the eco-inventive principles identified in natural systems make it possible to avoid secondary eco-problems and proposes to apply these principles for sustainable design in chemical process engineering. The research work critically examines how this approach differs from biomimetics, which is commonly used to copy natural systems. The application of nature-inspired eco-design principles is illustrated with the example of a sustainable technology for the extraction of nickel from pyrophyllite.
The proposed method includes the identification and documentation of elementary TRIZ inventive principles from the TRIZ body of knowledge; the extension and enhancement of inventive principles through patent and technology analysis, avoiding overlapping and redundant principles; the classification and adaptation of principles to at least the following categories: working medium, target object, useful action, harmful effect, environment, information, field, substance, time, and space; and the assignment of the elementary inventive principles to at least the following underlying engineering domains: universal, design, mechanical, acoustic, thermal, chemical, electromagnetic, intermolecular, biological, and data processing. The method further includes the classification of the abstraction level of the elementary principles; the definition of a statistical ranking of principles for different problem types and for specific engineering or non-technical domains; the definition of strategies for selecting sets of principles with high solution potential for predefined problems; the automated semantic transformation of the elementary inventive principles into solution ideas; and the evaluation of automatically generated ideas and their transformation into innovation or inventive concepts.
The paper describes the implementation of practical laboratory settings in a virtual environment. With the entry of VR glasses into the mass market, there is a chance to establish educational and training applications for presenting teaching materials and practical work. Therefore, our project focuses on the realization of virtual experiments and environments, which give users a deep insight into selected subfields of Optics and Photonics. Our goal is not to substitute the hands-on experiments but rather to extend them. By means of VR glasses, the user is offered the possibility to view the experiment from several angles and to make changes through interactive control functions. During the VR application, additional context-related information is displayed. By using object recognition, the specific graphics and texts for the respective object are loaded and supplemented at the appropriate place. Thus, the understanding of complex facts is supported in an informative way. The prototype is developed using the Unity Engine and can thus be exported to different platforms and end devices. Another major advantage of virtual simulations over the real situation is the high degree of controllability as well as the easy repeatability. With slight modifications, entire experiments can be reused. Our research aims to acquire new knowledge in the field of e-learning in association with VR technology. Here we try to answer a core question regarding the compatibility of the individual media components.
As engineering graduates and specialists frequently lack the advanced skills and knowledge required to run eco-innovation systematically, the paper proposes new learning materials and educational tools in the field of eco-innovation and evaluates the learning experience and outcomes. This programme is aimed at strengthening students' skills and motivation to identify and creatively overcome secondary eco-contradictions in cases where additional environmental problems appear as negative side effects of eco-friendly solutions. The paper evaluates the efficiency of the proposed interdisciplinary tool for systematic eco-innovation, including creative semi-automatic knowledge-based idea generation and concept development. It analyses the learning experience and identifies the factors that impact the eco-innovation performance of the students.
Increasing power density causes increased self-generation of harmonics and intermodulation. As this leads to violations of the strict linearity requirements, especially for carrier aggregation (CA), the nonlinearity must be considered in the design process of RF devices. This raises the demand for accurate simulation models. Linear and nonlinear P-Matrix/COM models are used during the design due to their fast simulation times and accurate results. However, the finite element method (FEM) is useful for gaining a deeper insight into the device's nonlinearities, as the total field distributions can be visualized. The FE method requires complete sets of material tensors, which are unknown for most relevant materials in nonlinear micro-acoustics. In this work, we perform nonlinear FEM simulations which allow the calculation of nonlinear field distributions of a lithium-tantalate-based layered SAW system up to third order. We aim at achieving good correspondence to measured data and determine the contributions of each material layer to the nonlinear signals. To this end, we use approximations circumventing the issue of limited higher-order tensor data. Experimental data for the third-order nonlinearity is shown to validate the presented approach.
Due to its potential in improving the efficiency of energy supply, smart energy metering (SEM) has become an area of interest with the surge in the Internet of Things (IoT). SEM entails remote monitoring and control of the sensors and actuators associated with the energy supply system. This provides a flexible platform to conceive and implement new data-driven Demand Side Management (DSM) mechanisms. The IoT enablement allows the data to be gathered and analyzed at the requisite granularity. In addition to the efficient use of energy resources and the provisioning of power, developing countries face the additional challenge of a temporal mismatch between generation capacity and load factors. This leads to the widespread deployment of inefficient and expensive Uninterruptible Power Supply (UPS) solutions for limited power provisioning during the resulting blackouts. Our proposed “Soft-UPS” allows dynamic matching of load and generation through managed curtailment. This eliminates inefficiencies in the energy and power value chain and allows a data-driven approach to solving a widespread problem in developing countries, simultaneously reducing both the upfront and running costs of conventional UPS and storage. A scalable and modular platform is proposed and implemented in this paper. The architecture employs the “WiMODino” using LoRaWAN with a “Lite Gateway” and an SQLite repository for data storage. Role-based access to the system through an Android application has also been demonstrated for monitoring and control.
Cryptographic protection of messages requires frequent updates of the symmetric cipher key used for encryption and decryption. Legacy IT security protocols, like TLS, SSH, or MACsec, implement rekeying under the assumption that, first, application data exchange is allowed to stall occasionally and, second, dedicated control messages to orchestrate the process can be exchanged. In real-time automation applications, the first is generally prohibitive, while the second may induce problematic traffic patterns on the network. We present a novel seamless rekeying approach which can be embedded into cyclic application data exchanges. Although the approach is agnostic to the underlying real-time communication system, we developed a demonstrator emulating the widespread industrial Ethernet system PROFINET IO and successfully applied this rekeying mechanism there.
To demonstrate how deep learning can be applied to industrial applications with limited training data, deep learning methodologies are employed in three different applications. In this paper, we perform unsupervised deep learning utilizing variational autoencoders and demonstrate that federated learning is a communication-efficient concept for machine learning that protects data privacy. As an example, variational autoencoders are utilized to cluster and visualize data from a microelectromechanical systems foundry. Federated learning is used in a predictive maintenance scenario using the C-MAPSS dataset.
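The federated learning idea mentioned above can be sketched with the classic FedAvg scheme: clients train locally, and only model parameters, weighted by local dataset size, are averaged on a server, so raw data never leaves the clients. The scalar least-squares task and all names below are illustrative assumptions, not the paper's setup.

```python
def local_update(weight, data, lr=0.1, steps=10):
    """A few local gradient-descent steps on a scalar least-squares task:
    loss(w) = mean((w - x)^2) over the client's local data."""
    for _ in range(steps):
        grad = sum(2.0 * (weight - x) for x in data) / len(data)
        weight -= lr * grad
    return weight

def federated_average(client_weights, client_sizes):
    """FedAvg aggregation: average client models weighted by local data size.
    Only model parameters, never raw data, reach the server."""
    total = sum(client_sizes)
    return sum(w * n for w, n in zip(client_weights, client_sizes)) / total
```

Each communication round transmits one scalar per client instead of the clients' datasets, which is the communication-efficiency argument the abstract makes.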
This paper presents an extended version of a previously published Bayesian algorithm for the automatic correction of the positions of the equipment on the map with simultaneous localization of the mobile object's trajectory (SLAM) in an underground mine environment represented by an undirected graph. The proposed extended SLAM algorithm requires much less preliminary data on possible equipment positions and uses an additional resample-move algorithm to significantly improve the overall performance.
Towards a Formal Verification of Seamless Cryptographic Rekeying in Real-Time Communication Systems
(2022)
This paper makes two contributions to the verification of communication protocols by transition systems. Firstly, the paper presents a model of a cyclic communication protocol using a synchronized network of transition systems. This protocol enables seamless cryptographic rekeying embedded into cyclic messages. Secondly, we verify the protocol using the model checking technique.
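The second contribution can be illustrated with a generic explicit-state safety check: enumerate all reachable states of a transition system and verify that an invariant holds in each. This is a toy sketch of the model-checking idea, not the paper's synchronized network of transition systems; all names are illustrative.

```python
from collections import deque

def check_invariant(initial, transitions, invariant):
    """Explicit-state model checking of a safety property: breadth-first
    exploration of all reachable states, checking the invariant in each.
    Returns (True, None) if the property holds on every reachable state,
    or (False, counterexample_state) for the first violation found."""
    seen, queue = {initial}, deque([initial])
    while queue:
        state = queue.popleft()
        if not invariant(state):
            return False, state
        for nxt in transitions(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return True, None
```

Real model checkers add temporal-logic properties and state-space reduction, but the reachability core is the same.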
It seems to be a widespread impression that the use of strong cryptography inevitably imposes a prohibitive burden on industrial communication systems, at least inasmuch as real-time requirements in cyclic fieldbus communications are concerned. AES-GCM is a leading cryptographic algorithm for authenticated encryption, which protects data against disclosure and manipulations. We study the use of both hardware and software-based implementations of AES-GCM. By simulations as well as measurements on an FPGA-based prototype setup we gain and substantiate an important insight: for devices with a 100 Mbps full-duplex link, a single low-footprint AES-GCM hardware engine can deterministically cope with the worst-case computational load, i.e., even if the device maintains a maximum number of cyclic communication relations with individual cryptographic keys. Our results show that hardware support for AES-GCM in industrial fieldbus components may actually be very lightweight.
For the past few years, Low Power Wide Area Networks (LPWAN) have emerged as key technologies for the connectivity of many applications in the Internet of Things (IoT), combining low data rates with strict cost and energy restrictions. LoRa/LoRaWAN in particular enjoys high visibility on today's markets because of its good performance and its open community. Originally, LoRa was designed for operation within the Sub-GHz ISM bands for industrial, scientific, and medical applications. However, at the end of 2018, a LoRa-based solution in the 2.4 GHz ISM band was presented, promising higher bandwidths and higher data rates. Furthermore, it overcomes the limited duty cycle prescribed by the regulations in the ISM bands and therefore also opens doors to many novel application fields. Also, due to higher bandwidths and shorter transmission times, the use of alternative MAC layer protocols becomes very interesting, e.g. for TDMA-based approaches. Within this paper, we propose a system architecture with 2.4 GHz LoRa components combining two aspects. On the one hand, we present the design and implementation of a 2.4 GHz LoRaWAN solution that can be seamlessly integrated into existing LoRaWAN back-hauls. On the other hand, we describe a deterministic setup using the Time Slotted Channel Hopping (TSCH) approach as defined in the IEEE 802.15.4-2015 standard for industrial applications. Finally, measurements show the performance of the system.
Autonomous driving is disrupting the automotive industry as we know it today. For this, fail-operational behavior is essential in the sense, plan, and act stages of the automation chain in order to handle safety-critical situations autonomously, which is currently not achieved with state-of-the-art approaches. The European ECSEL research project PRYSTINE realizes Fail-operational Urban Surround perceptION (FUSION) based on robust Radar and LiDAR sensor fusion and control functions in order to enable safe automated driving in urban and rural environments. This paper showcases some of the key exploitable results (e.g., novel Radar sensors, innovative embedded control and E/E architectures, pioneering sensor fusion approaches, AI-controlled vehicle demonstrators) achieved up to its final year, year three.
We describe a prototype for power line communication (PLC) for grid monitoring. The PLC receiver is used to gain information about the PLC channel and the current state of the power grid. The PLC receiver uses the communication signal to obtain an accurate estimate of the current channel and provides information which can be used as a basis for further processing with the aim of detecting partial discharges and other anomalies in the grid. This monitoring of the power grid takes advantage of existing PLC infrastructure and uses the data signals, which are transmitted anyway, to obtain a real-time measurement of the channel transfer function and the received noise signal. Since this signal is sampled at a high sampling rate compared to simpler measurement sensors, it contains valuable information about possible degradations in the grid which need to be addressed. While channel measurements are based on a received PLC signal, information about partial discharges or other sources of interference can be gathered by a PLC receiver in the absence of a transmit signal. A prototype based on Software Defined Radio has been developed, which implements the simultaneous communication and sensing for a power grid.
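A drastically simplified sketch of the channel-estimation idea above: given a known cyclic transmit block, a zero-forcing estimate of the channel transfer function is the per-bin ratio of received to transmitted spectra. The hand-rolled DFT, the toy channel, and all names are illustrative assumptions, not the prototype's actual SDR processing.

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform (O(n^2), for illustration only)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def estimate_channel(tx, rx):
    """Zero-forcing channel estimate per subcarrier: H[k] = RX[k] / TX[k],
    using the known transmit signal as the reference."""
    TX, RX = dft(tx), dft(rx)
    return [r / t if abs(t) > 1e-12 else 0j for r, t in zip(RX, TX)]
```

Tracking H over time from the data signals that are transmitted anyway is what allows degradations of the grid to be spotted without dedicated measurement hardware.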
The following describes a new method for estimating the parameters of an interior permanent magnet synchronous machine (IPMSM). For the estimation of the parameters, the current slopes caused by the switching of the inverter are used to determine the unknowns of the system equations of the electrical machine. The angle and current dependence of the machine parameters is linearized within a PWM cycle. By considering the different switching states of the inverter, several system equations can be derived and a solution can be found within one PWM cycle. The use of test signals and filter-based approaches is avoided. The derived algorithm is explained and validated with measurements on a test bench.
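The slope-based estimation described above rests on the standard IPMSM voltage equations in the rotor-fixed dq frame (given here in textbook form; the paper's exact notation and linearization may differ):

```latex
v_d = R_s i_d + L_d \frac{\mathrm{d}i_d}{\mathrm{d}t} - \omega_e L_q i_q
\qquad
v_q = R_s i_q + L_q \frac{\mathrm{d}i_q}{\mathrm{d}t} + \omega_e \left( L_d i_d + \psi_{\mathrm{PM}} \right)
```

Each inverter switching state applies known voltages $v_d, v_q$, so measuring the current slopes $\mathrm{d}i_d/\mathrm{d}t$ and $\mathrm{d}i_q/\mathrm{d}t$ in several states within one PWM cycle yields a system of equations that can be solved for the unknowns $R_s$, $L_d$, $L_q$, and $\psi_{\mathrm{PM}}$.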
This paper describes a thorough analysis of using PPO to learn kick behaviors with simulated NAO robots in the SimSpark environment. The analysis includes an investigation of the influence of PPO hyperparameters, network size, training setups, and performance in real games. We believe we improve the state of the art mainly in four respects: first, the kicks are learned with a toed version of the NAO robot; second, we improve the reliability with respect to the kickable area and the avoidance of falls; third, the kick can be parameterized with the desired distance and direction as input to the deep network; and fourth, the approach allows the learned behavior to be integrated seamlessly into soccer games. The result is a significant improvement of the general level of play.
This study aims to investigate the individual response concerning BRFs for AT when the mid-sole hardness underneath the rearfoot was systematically altered. We first identified FGs based on the footwear condition that minimised the risk for AT across BRFs. We then tested the FGs for differences in anthropometrics, footwear comfort, and running characteristics.
This paper describes a taxonomy which makes it possible to assess and compare different implementations of master data objects. A systematic breakdown of core entities provides a framework to tell apart four subdividing categories of master data objects: independent and dependent objects, relational objects, and reference objects that serve to attribute information. This supports the preparation of data migrations from one system to another.
In bimodal cochlear implant (CI) / hearing aid (HA) users, a constant interaural time delay on the order of several milliseconds occurs due to differences in the signal processing of the devices. For MED-EL CI systems in combination with different HA types, we have quantified the respective device delay mismatch (Zirn et al. 2015). In the current study, we investigate the effect of the device delay mismatch on sound localization accuracy in simulated and actual bimodal listeners.
To deal with the device delay mismatch in actual bimodal listeners, we delayed the CI stimulation according to the measured HA processing delay and two other values. With all delay values, highly significant improvements in the RMS error in the localization task were observed compared to the test without the delay. The results help to narrow down the optimal patient-specific delay value.
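A deliberately simplified sketch of the compensation principle described above: the CI stimulation stream is delayed by a fixed device-delay mismatch, modeled here as zero-padding at a given sample rate. The function name and the zero-padding model are illustrative assumptions, not the actual stimulation processing.

```python
def compensate_device_delay(ci_signal, delay_ms, fs_hz):
    """Delay a CI stimulation stream by delay_ms milliseconds by prepending
    zeros; fs_hz is the sample rate of the stream. This aligns the CI path
    with the slower hearing-aid processing path (simplified model)."""
    n = round(delay_ms * fs_hz / 1000.0)
    return [0.0] * n + list(ci_signal)
```

Restoring the interaural alignment of the two paths is what improves the localization error reported above.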
Facial image manipulation is a generation task where the output face is shifted towards an intended target direction in terms of facial attributes and styles. Recent works have achieved great success in various editing techniques such as style transfer and attribute translation. However, current approaches either focus on pure style transfer or on the translation of predefined sets of attributes with restricted interactivity. To address this issue, we propose FacialGAN, a novel framework enabling simultaneous rich style transfers and interactive facial attribute manipulation. While preserving the identity of a source image, we transfer the diverse styles of a target image to the source image. We then incorporate the geometry information of a segmentation mask to provide fine-grained manipulation of facial attributes. Finally, a multi-objective learning strategy is introduced to optimize the loss of each specific task. Experiments on the CelebA-HQ dataset, with CelebAMask-HQ as semantic mask labels, show our model's capacity to produce visually compelling results in style transfer, attribute manipulation, diversity, and face verification. For reproducibility, we provide an interactive open-source tool to perform facial manipulations and the PyTorch implementation of the model.
Object Detection and Mapping with Unmanned Aerial Vehicles Using Convolutional Neural Networks
(2021)
Significant progress has been made in the field of deep learning through intensive research over the last decade. So-called convolutional neural networks are an essential component of this research. In this type of neural network, the mathematical convolution operator is used to extract characteristics or anomalies. The purpose of this work is to investigate the extent to which it is possible, in certain initial settings, to feed aerial recordings and flight data of Unmanned Aerial Vehicles (UAVs) into a neural network and to detect and map an object. Using the calculated contours or dimensions of the so-called bounding boxes, the position of the objects can be determined relative to the current UAV location.
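Under the simplifying assumption of a downward-facing pinhole camera, the last step above can be sketched directly: the relative ground position of a detected object follows from the bounding-box center, the UAV altitude, and the camera intrinsics. All parameter names below are illustrative assumptions, not the paper's calibration.

```python
def pixel_to_ground(u, v, altitude, f, cx, cy):
    """Project a bounding-box center pixel (u, v) onto the ground plane,
    returning metric offsets relative to the UAV's nadir point.
    Assumes a nadir-looking pinhole camera at the given altitude, with
    focal length f (in pixels) and principal point (cx, cy)."""
    x = (u - cx) * altitude / f
    y = (v - cy) * altitude / f
    return x, y
```

Combining these offsets with the UAV's GPS position and heading from the flight data then yields the object's map position.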
The applicability of characteristics of local magnetic fields for more precise determination of localization of subjects and/or objects in indoor environments, such as railway stations, airports, exhibition halls, showrooms, or shopping centers, is considered. An investigation has been carried out to find out whether and how low-cost magnetic field sensors and mobile robot platforms can be used to create maps that improve the accuracy and robustness of later navigation with smartphones or other devices.
The aim of this work is the application and evaluation of a method to visually detect markers at a distance of up to five meters and determine their real-world position. Combinations of cameras and lenses with different parameters were studied to determine the optimal configuration. Based on this configuration, camera images were taken after proper calibration. These images are then transformed into a bird's eye view using a homography matrix. The homography matrix is calculated with four point pairs as well as with coordinate transformations. The obtained images show the ground plane undistorted, making it possible to convert a pixel position into a real-world position with a conversion factor. The proposed approach helps to effectively create data sets for training neural networks for navigation purposes.
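The four-point homography computation mentioned above can be sketched as a direct linear solution: each point correspondence contributes two linear equations in the eight unknown entries of H (with the last entry fixed to 1). This is a generic textbook formulation with illustrative names, not the paper's implementation.

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a dense linear system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[p] = M[p], M[col]
        for r in range(col + 1, n):
            fac = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= fac * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def homography_from_points(src, dst):
    """Compute the 3x3 homography mapping 4 source points to 4 target points
    (last entry normalized to 1); two equations per correspondence."""
    A, b = [], []
    for (x, y), (X, Y) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -X * x, -X * y]); b.append(X)
        A.append([0, 0, 0, x, y, 1, -Y * x, -Y * y]); b.append(Y)
    h = solve(A, b)
    return [h[0:3], h[3:6], h[6:8] + [1.0]]

def apply_homography(H, x, y):
    """Map a pixel (x, y) through H with perspective division."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)
```

Applying the resulting H to every pixel produces the undistorted bird's-eye view, after which a single scale factor converts pixel positions to real-world positions.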
For some years now, additive manufacturing (AM) has offered an alternative to conventional manufacturing processes. The strengths of AM are primarily the rapid implementation of ideas into a usable product and the ability to produce geometrically complex shapes. It has also significantly advanced the lightweight design of products made of plastic. So far, however, the strength of printed components made of polymers has been very limited.
Recently, new AM processes have become available that allow the embedding of short and also long fibers in a polymer matrix. Thus, the manufacturing of components that provide a significant increase in strength becomes possible. In this way, both complex geometries and sophisticated applications can be implemented. This paper therefore investigates how this new technology can be implemented in product development, focusing on sports equipment. An extensive literature research shows that lightweight design plays a decisive role in sports equipment. In addition, the advantages of AM in terms of individualized products and low quantities can be fully exploited.
An example of this approach is the steering system for a seat sled used by paraplegic athletes in the Olympic discipline of Nordic paraskiing. A particular challenge here is the placement and alignment of the long carbon fibers within the polymer matrix and the verification of the strength by means of Finite-Element-Analysis (FEA). In addition, findings from bionics are used to optimize the lightweight design of the steering system. Using this example, it can be shown that the weight of the steering system can be drastically reduced compared to conventional manufacturing. At the same time, a number of parts can be saved through function integration and thus the manufacturing and assembly effort can be reduced significantly.
Today, Additive Manufacturing (AM) is an important part of teaching for the education of future engineers. Therefore, a variety of approaches have been developed in recent years on how to bring the design for additive manufacturing (DfAM) into university teaching. In a detailed literature review, the advantages and disadvantages of the previous approaches are considered and analysed. Based on this, an extended approach is presented in which students analyse and optimize a given product with respect to additive manufacturing. In doing so, the students have to solve challenging tasks in optimization in product development with the help of methodical approaches and practically implement their developed solutions with state-of-the-art additive processes. To work on this task, the students have two different 3D printers at their disposal, which work with different processes and materials. Thus, the students learn to adapt the design to different manufacturing processes and to consider the restrictions of different materials. The assessment of the results from this course is done through feedback and a written survey.
As a reaction to increasing market dynamics and complex requirements, today's products need to be developed quickly and customized to the customer's individual needs. In the past, CAD systems were mainly used to visualize the model that the product designer creates. Generative Design shifts the task of the CAD program by actively participating in the shaping process. This results in more design options, and the complexity of the shapes and geometries increases significantly. This potential can be optimally exploited through the combination of Generative Design with Additive Manufacturing (AM). Artificial intelligence and the input of target parameters generate geometries, for example by creating material for stressed areas, which in turn develops biomorphic shapes and thus significantly reduces the consumption of resources. This contribution aims at the evaluation of existing applications in CAD systems for Generative Design. Special attention is paid to the requirements in design education and easy access for students. For this purpose, three representative CAD systems are selected and analyzed with the help of a comprehensive example of mass reduction. The aim is to perform an individual result analysis in order to assess the applications based on various criteria. By using different materials, the influence of the material on the generation is investigated by comparing the material distribution. By comparing the generated models, differences between the CAD systems can be identified and possible fields of application can be presented. By specifying the manufacturing parameters for the generation of the models, the feasibility of AM can be guaranteed without having to modify the results. The physical implementation of the example by means of Fused Deposition Modeling demonstrates this in an exemplary way and examines the interface between Generative Design and AM.
The results of this contribution will enable an evaluation of the different CAD systems for Generative Design according to technical, visual and economic aspects.
Engineering, construction and operation of complex machines involves a wide range of complicated, simultaneous tasks, which potentially could be automated. In this work, we focus on perception tasks in such systems, investigating deep learning approaches for multi-task transfer learning with limited training data. We show an approach that takes advantage of a technical systems’ focus on selected objects and their properties. We create focused representations and simultaneously solve joint objectives in a system through multi-task learning with convolutional autoencoders. The focused representations are used as a starting point for the data-saving solution of the additional tasks. The efficiency of this approach is demonstrated using images and tasks of an autonomous circular crane with a grapple.
An Empirical Investigation of Model-to-Model Distribution Shifts in Trained Convolutional Filters
(2021)
We present first empirical results from our ongoing investigation of distribution shifts in image data used for various computer vision tasks. Instead of analyzing the original training and test data, we propose to study shifts in the learned weights of trained models. In this work, we focus on the properties of the distributions of the dominantly used 3x3 convolution filter kernels. We collected and publicly provide a dataset with over half a billion filters from hundreds of trained CNNs, using a wide range of datasets, architectures, and vision tasks. Our analysis shows interesting distribution shifts (or the lack thereof) between trained filters along different axes of meta-parameters, like data type, task, architecture, or layer depth. We argue that the observed properties are a valuable source for further investigation into a better understanding of the impact of shifts in the input data on the generalization abilities of CNN models, and for novel methods for more robust transfer learning in this domain.
A fundamental and still largely unsolved question in the context of Generative Adversarial Networks is whether they are truly able to capture the real data distribution and, consequently, to sample from it. In particular, the multidimensional nature of image distributions makes evaluating the diversity of GAN distributions complex. Existing approaches provide only a partial understanding of this issue, leaving the question unanswered. In this work, we introduce a loop-training scheme for the systematic investigation of observable shifts between the distributions of real training data and GAN generated data. Additionally, we introduce several bounded measures for distribution shifts, which are both easy to compute and to interpret. Overall, the combination of these methods allows an explorative investigation of innate limitations of current GAN algorithms. Our experiments on different datasets and multiple state-of-the-art GAN architectures show large shifts between input and output distributions, indicating that existing theoretical guarantees on the convergence of output distributions do not appear to hold in practice.
Correlation Clustering, also called the minimum cost Multicut problem, is the process of grouping data by pairwise similarities. It has proven to be effective on clustering problems where the number of classes is unknown. However, not only is the Multicut problem NP-hard, but an undirected graph G with n vertices representing single images also has up to n(n-1)/2 edges, making it challenging to implement correlation clustering for large datasets. In this work, we propose Multi-Stage Multicuts (MSM) as a scalable approach for image clustering. Specifically, we solve minimum cost Multicut problems across multiple distributed compute units. Our approach not only allows us to solve problem instances which are too large to fit into the shared memory of a single compute node, but also achieves significant speedups while preserving the clustering accuracy. We evaluate our proposed method on the CIFAR10 …
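The minimum cost Multicut objective can be illustrated in a few lines; this is a toy cost evaluation under our own naming, not the authors' distributed MSM implementation:

```python
def multicut_cost(edges, labels):
    """Cost of a partition: the sum of the costs of all edges whose
    endpoints land in different clusters (the cut edges)."""
    return sum(c for (u, v), c in edges.items() if labels[u] != labels[v])

# Toy graph: positive cost = similar pair (cutting is penalized),
# negative cost = dissimilar pair (cutting is rewarded).
edges = {(0, 1): 2.0, (1, 2): -1.5, (0, 2): -1.0}
labels = {0: "a", 1: "a", 2: "b"}  # put vertex 2 in its own cluster
print(multicut_cost(edges, labels))  # -2.5: edges (1,2) and (0,2) are cut
```

Minimizing this cost over all partitions is the NP-hard part; the up-to-quadratic number of edges is why distributing the problem across compute units matters.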
Aerosol particles play an important role in the climate system by absorbing and scattering radiation and influencing cloud properties. They are also one of the biggest sources of uncertainty for climate modeling. Many climate models do not include aerosols in sufficient detail. In order to achieve higher accuracy, aerosol microphysical properties and processes have to be accounted for. This is done in the ECHAM-HAM global climate aerosol model using the M7 microphysics model, but the increased computational costs make it very expensive to run at higher resolutions or for a longer time. We aim to use machine learning to approximate the microphysics model at sufficient accuracy and to reduce the computational cost by being fast at inference time. The original M7 model is used to generate data of input-output pairs on which a neural network is trained. By using a special logarithmic transform we are able to learn the variables' tendencies, achieving an average score of . On a GPU we achieve a speed-up of 120 compared to the original model.
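The abstract does not spell out the logarithmic transform; one plausible sketch (function names and the exact form are our assumption, not the paper's) is a signed log transform that compresses tendencies spanning many orders of magnitude while remaining invertible:

```python
import numpy as np

def log_transform(x, eps=1e-8):
    # Signed log transform: compresses the dynamic range of tendencies
    # that span many orders of magnitude, while preserving sign.
    return np.sign(x) * np.log1p(np.abs(x) / eps)

def inverse_log_transform(y, eps=1e-8):
    # Exact inverse, so predicted tendencies can be mapped back
    # to physical units before being applied in the climate model.
    return np.sign(y) * eps * np.expm1(np.abs(y))

tendency = np.array([1e-6, -3e-2, 0.0])
roundtrip = inverse_log_transform(log_transform(tendency))
print(np.allclose(roundtrip, tendency))  # True
```

A network would then be trained on the transformed targets, with the inverse applied at inference time.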
Recently, adversarial attacks on image classification networks by the AutoAttack (Croce and Hein, 2020b) framework have drawn a lot of attention. While AutoAttack has shown a very high attack success rate, most defense approaches focus on network hardening and robustness enhancements, like adversarial training. This way, the currently best-reported method can withstand about 66% of adversarial examples on CIFAR10. In this paper, we investigate the spatial and frequency domain properties of AutoAttack and propose an alternative defense: instead of hardening a network, we detect adversarial attacks during inference and reject manipulated inputs. Based on a rather simple and fast analysis in the frequency domain, we introduce two different detection algorithms. First, a black-box detector that only operates on the input images and achieves a detection accuracy of 100% on the AutoAttack CIFAR10 benchmark and 99.3% on ImageNet, for epsilon = 8/255 in both cases. Second, a white-box detector using an analysis of CNN feature maps, leading to detection rates of likewise 100% and 98.7% on the same benchmarks.
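The frequency-domain statistics such a detector can operate on are cheap to compute; the following sketch (our own function names, not the paper's released code) shows a log-magnitude spectrum and a scalar high-frequency energy ratio per image:

```python
import numpy as np

def magnitude_spectrum(img):
    """2D Fourier magnitude spectrum of a grayscale image,
    log-scaled and centered (low frequencies in the middle)."""
    f = np.fft.fftshift(np.fft.fft2(img))
    return np.log1p(np.abs(f))

def high_freq_energy(img, radius=8):
    """Fraction of spectral energy outside a low-frequency disk --
    a simple scalar statistic a detector could threshold on."""
    spec = magnitude_spectrum(img)
    h, w = spec.shape
    yy, xx = np.ogrid[:h, :w]
    low = (yy - h // 2) ** 2 + (xx - w // 2) ** 2 <= radius ** 2
    return spec[~low].sum() / spec.sum()

img = np.random.rand(32, 32)
score = high_freq_energy(img)
print(0.0 < score < 1.0)  # True
```

A black-box detector in this spirit would fit a classifier (or threshold) on such spectral features of benign versus attacked images.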
Transformer models have recently attracted much interest from computer vision researchers and have since been successfully employed for several problems traditionally addressed with convolutional neural networks. At the same time, image synthesis using generative adversarial networks (GANs) has drastically improved over the last few years. The recently proposed TransGAN is the first GAN using only transformer-based architectures and achieves competitive results when compared to convolutional GANs. However, since transformers are data-hungry architectures, TransGAN requires data augmentation, an auxiliary super-resolution task during training, and a masking prior to guide the self-attention mechanism. In this paper, we study the combination of a transformer-based generator and a convolutional discriminator and successfully remove the need for the aforementioned design choices. We evaluate our approach by conducting a benchmark of well-known CNN discriminators, ablate the size of the transformer-based generator, and show that combining both architectural elements into a hybrid model leads to better results. Furthermore, we investigate the frequency spectrum properties of generated images and observe that our model retains the benefits of an attention-based generator.
Generative adversarial networks are the state-of-the-art approach towards learned synthetic image generation. Although early successes were mostly unsupervised, bit by bit, this trend has been superseded by approaches based on labelled data. These supervised methods allow a much finer-grained control of the output image, offering more flexibility and stability. Nevertheless, the main drawback of such models is the necessity of annotated data. In this work, we introduce a novel framework that benefits from two popular learning techniques, adversarial training and representation learning, and takes a step towards unsupervised conditional GANs. In particular, our approach exploits the structure of a latent space (learned by the representation learning) and employs it to condition the generative model. In this way, we break the traditional dependency between condition and label, substituting the latter by unsupervised features coming from the latent space. Finally, we show that this new technique is able to produce samples on demand while keeping the quality of its supervised counterpart.
Generative adversarial networks (GANs) provide state-of-the-art results in image generation. However, despite being so powerful, they still remain very challenging to train. This is in particular caused by their highly non-convex optimization space leading to a number of instabilities. Among them, mode collapse stands out as one of the most daunting ones. This undesirable event occurs when the model can only fit a few modes of the data distribution, while ignoring the majority of them. In this work, we combat mode collapse using second-order gradient information. To do so, we analyse the loss surface through its Hessian eigenvalues, and show that mode collapse is related to the convergence towards sharp minima. In particular, we observe how the eigenvalues of the Hessian are directly correlated with the occurrence of mode collapse. Finally, motivated by these findings, we design a new optimization algorithm called nudged-Adam (NuGAN) that uses spectral information to overcome mode collapse, leading to empirically more stable convergence properties.
In this preliminary report, we present a simple but very effective technique to stabilize the training of CNN based GANs. Motivated by recently published methods using frequency decomposition of convolutions (e.g., Octave Convolutions), we propose a novel convolution scheme to stabilize the training and reduce the likelihood of a mode collapse. The basic idea of our approach is to split convolutional filters into additive high and low frequency parts, while shifting weight updates from low to high during the training. Intuitively, this method forces GANs to learn low frequency coarse image structures before descending into fine (high frequency) details. Our approach is orthogonal and complementary to existing stabilization methods and can simply be plugged into any CNN based GAN architecture. First experiments on the CelebA dataset show the effectiveness of the proposed method.
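The additive low/high frequency split of a filter can be sketched as follows; the mean-based split and the blending schedule are illustrative assumptions, not the paper's exact scheme:

```python
import numpy as np

def split_filter(w):
    """Split a conv filter into an additive low-frequency part
    (its mean, a DC/averaging component) and a high-frequency
    residual, so that low + high == w."""
    low = np.full_like(w, w.mean())
    return low, w - low

def blended_filter(w, alpha):
    """Blend from low- to high-frequency emphasis as alpha goes
    0 -> 1 over training: coarse structure first, details later."""
    low, high = split_filter(w)
    return low + alpha * high

w = np.array([[0., 1., 0.], [1., 4., 1.], [0., 1., 0.]])
print(np.allclose(blended_filter(w, 1.0), w))  # True: full filter recovered
```

At alpha = 0 only the averaging component acts; as training progresses, the high-frequency residual is phased in.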
We demonstrate how to exploit group sparsity in order to bridge the areas of network pruning and neural architecture search (NAS). This results in a new one-shot NAS optimizer that casts the problem as a single-level optimization problem and does not suffer any performance degradation from discretizing the architecture.
Despite the success of convolutional neural networks (CNNs) in many computer vision and image analysis tasks, they remain vulnerable against so-called adversarial attacks: Small, crafted perturbations in the input images can lead to false predictions. A possible defense is to detect adversarial examples. In this work, we show how analysis in the Fourier domain of input images and feature maps can be used to distinguish benign test samples from adversarial images. We propose two novel detection methods: Our first method employs the magnitude spectrum of the input images to detect an adversarial attack. This simple and robust classifier can successfully detect adversarial perturbations of three commonly used attack methods. The second method builds upon the first and additionally extracts the phase of Fourier coefficients of feature maps at different layers of the network. With this extension, we are able to improve adversarial detection rates compared to state-of-the-art detectors on five different attack methods. The code for the methods proposed in the paper is available at github.com/paulaharder/SpectralAdversarialDefense
In this work, we evaluate two different image clustering objectives, k-means clustering and correlation clustering, in the context of Triplet Loss induced feature space embeddings. Specifically, we train a convolutional neural network to learn discriminative features by optimizing two popular versions of the Triplet Loss in order to study their clustering properties under the assumption of noisy labels. Additionally, we propose a new, simple Triplet Loss formulation, which shows desirable properties with respect to formal clustering objectives and outperforms the existing methods. We evaluate all three Triplet Loss formulations for k-means and correlation clustering on the CIFAR-10 image classification dataset.
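The paper's new formulation is not given in the abstract; the standard margin-based Triplet Loss that the popular variants build on looks like this:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard margin-based triplet loss on embedding vectors:
    push the anchor-positive distance to be at least `margin`
    smaller than the anchor-negative distance."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])   # same class, close to the anchor
n = np.array([1.0, 0.0])   # different class, far from the anchor
print(triplet_loss(a, p, n))  # 0.0 -- the margin constraint is satisfied
```

Embeddings trained this way are then handed to k-means or correlation clustering, which is exactly the interaction the paper studies.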
The term “attribute transfer” refers to the task of altering images in such a way that the semantic interpretation of a given input image is shifted towards an intended direction, which is quantified by semantic attributes. Prominent example applications are photorealistic changes of facial features and expressions, like changing the hair color, adding a smile, enlarging the nose, or altering the entire context of a scene, like transforming a summer landscape into a winter panorama. Recent advances in attribute transfer are mostly based on generative deep neural networks, using various techniques to manipulate images in the latent space of the generator. In this paper, we present a novel method for the common sub-task of local attribute transfers, where only parts of a face have to be altered in order to achieve semantic changes (e.g. removing a mustache). In contrast to previous methods, where such local changes have been implemented by generating new (global) images, we propose to formulate local attribute transfers as an inpainting problem. Removing and regenerating only parts of images, our “Attribute Transfer Inpainting Generative Adversarial Network” (ATI-GAN) is able to utilize local context information to focus on the attributes while keeping the background unmodified, producing visually sound results.
The Go programming language is an increasingly popular language but some of its features lack a formal investigation. This article explains Go's resolution mechanism for overloaded methods and its support for structural subtyping by means of translation from Featherweight Go to a simple target language. The translation employs a form of dictionary passing known from type classes in Haskell and preserves the dynamic behavior of Featherweight Go programs.
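Dictionary passing can be sketched in a few lines: a structural method call is translated into a lookup in an explicitly passed method dictionary, just as type classes are compiled in Haskell. Python stands in here for the article's simple target language, and all names are our own illustration, not the paper's formalism:

```python
# One formatting function per concrete type.
def format_int(x):
    return f"int({x})"

def format_str(x):
    return f'str("{x}")'

# The translation builds one dictionary per (type, interface) pair,
# mapping method names to implementations.
FormatterInt = {"Format": format_int}
FormatterStr = {"Format": format_str}

def describe(dict_, value):
    # Translated form of a function generic over a structural
    # interface: the call value.Format() becomes an explicit
    # lookup in the passed dictionary.
    return dict_["Format"](value)

print(describe(FormatterInt, 42))    # int(42)
print(describe(FormatterStr, "go"))  # str("go")
```

The point of the translation is that the target language needs no overloading or structural subtyping of its own; the dictionaries carry all the resolution information.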
The internal crowdsourcing-based ideation within a company can be defined as the involvement of its staff, specialists, managers, and other employees, to propose solution ideas for a pre-defined problem. This paper addresses the question of how many participants of the company-internal ideation process are required to nearly reach the ideation limit for problems with a finite number of workable solutions. To answer the research question, the author proposes a set of metrics and a non-linear ideation performance function with a positive decreasing slope and an ideation limit for closed-ended problems. Three series of experiments helped to explore relationships between the metric attributes and resulted in a mathematical model which allows companies to predict the productivity metrics of their crowdsourcing ideation activities, such as the quantity of different ideas and the ideation limit, as a function of the number of contributors, their average personal creativity, and the ideation efficiency of a contributors' group.
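The abstract does not give the exact performance function; a common saturating form with a positive decreasing slope and a finite limit, matching the description, is Q(n) = L(1 - (1 - p)^n), where L is the ideation limit and p an average per-contributor hit rate (the symbols and this functional form are our assumption, not the author's model):

```python
def ideas(n, limit, p):
    """Expected number of distinct ideas after n contributors,
    each independently hitting any given idea with probability p.
    Increases with a decreasing slope and saturates at `limit`."""
    return limit * (1 - (1 - p) ** n)

# With 50 workable solutions and p = 0.05 per contributor:
print(round(ideas(10, 50, 0.05), 1))   # 20.1
print(round(ideas(100, 50, 0.05), 1))  # 49.7 -- near the ideation limit
```

Any model of this shape lets a company estimate how many contributors are needed to get close to the ideation limit of a closed-ended problem.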
A coordinated operation of decentralised micro-scale hybrid energy systems within a locally managed network such as a district or neighbourhood will play a significant role in the sector-coupled energy grid of the future. A quantitative analysis of the effects of the primary energy factors, energy conversion efficiencies, load profiles, and control strategies on their energy-economic balance can aid in identifying important trends concerning their deployment within such a network. In this contribution, operational data from five energy laboratories in the trinational Upper-Rhine region are analysed and a comparison to a conventional reference system is presented. Ten exemplary data sets representing typical operation conditions for the laboratories in different seasons and the latest information on their national energy strategies are used to evaluate the primary energy consumption, CO2 emissions, and demand-related costs. Various conclusions on the ecologic and economic feasibility of hybrid building energy systems are drawn to provide a foothold for the engineering community in its planning and development.
In the field of network security, the detection of possible intrusions is an important task to prevent and analyse attacks. Machine learning has been adopted as a supporting technique in recent years. However, the majority of related published work uses post-mortem log files and fails to address the required real-time capabilities of network data feature extraction and machine-learning-based analysis [1-5]. We introduce the network feature extractor library FEX, which is designed to allow real-time feature extraction from network data. This library incorporates 83 statistical features based on reassembled data flows. The introduced Cython implementation allows processing individual packets within 4.58 microseconds. Based on the features extracted by FEX, existing intrusion detection machine learning models were examined with respect to their real-time capabilities. An identified Decision-Tree Classifier model was further optimised by transpiling it into C code. This reduced the prediction time for a single sample to 3.96 microseconds on average. Based on the feature extractor and the improved machine learning model, an IDS was implemented which supports a data throughput between 63.7 Mbit/s and 2.5 Gbit/s, making it a suitable candidate for a real-time, machine-learning-based IDS.
The nonlinear behavior of inverters is mainly influenced by the interlocking and switching times of the semiconductors. In the following work, a method is presented that enables online identification of the switching times of the semiconductors. This information allows a compensation of the nonlinear behavior and a reduction of the interlocking time, and can be used for diagnostic purposes. First, the method is derived theoretically by considering the different switching cases of the inverter and deriving identification possibilities. The method is then extended so that the entire module is taken into account. Furthermore, a possible theoretical implementation is shown. After the methodology has been investigated with respect to possible limitations, boundary conditions and real hardware, an implementation in an FPGA is performed. Finally, the results are presented and discussed, and further improvements are outlined.
As one result of the digital transformation in the automotive industry, new digital business models comprising software-based solutions are demanded by OEMs. To adequately meet these new requirements, automotive suppliers implement interdisciplinary roles, called Customer Solution Designers. However, due to its novelty, the Customer Solution Design research field is not yet well developed, neither in theory nor in practice. Besides giving an overview of the current state of the Customer Solution Design research field, the core of this paper is two-fold: based on 14 guided expert interviews conducted with selected experts of a large German automotive supplier, we establish a uniform understanding of the Customer Solution Design role by using the Role Model Canvas (I). In addition, a case study strategy comprising two software-based projects executed by a large German automotive supplier is used to derive a common approach for Customer Solution Design in the context of an agile business framework (II).
Due to the pandemic of 2020, many teaching and research institutions are confronted with extraordinary working conditions. In order to enable empirical data collection under these special circumstances, teachers and scientists need to respond flexibly and new concepts need to be developed. This paper deals with the challenges that arise in day-to-day teaching and provides different approaches to meet these challenges. It covers quantitative surveys, remote UX testing methods as an alternative to eye tracking studies in the lab, as well as face-to-face user experience testing under strict hygiene measures.
In an experience economy, market competition in the software branch is becoming more and more intense. Technical innovations, global retail practices and the multidimensional conception of experiences provide both opportunities and challenges for companies worldwide. Retailers strive for an optimized conversion rate, but poor UX still abounds. Germany-based companies in particular are less evolved in an international comparison of industrialized economies. The value of integrating users in the development process is recognized, but methodologies must be carefully incorporated into existing agile workflows. The goal of this study is to bridge the gaps between internal agency, external client, and user interests. The contribution is four-fold: an overview of the current status of customer centricity in the E-Commerce branch of trade is provided (I). Based on this corpus, a methodical framework aiming to incorporate the experience logic in UX practices within an agile project team is presented (II). The framework is applied in a single case study: the shop relaunch of a motorbike accessory store (III). Finally, all interest groups (UX, development and project management) are incorporated in the qualitative content analysis (IV).
Offenburg University of Applied Sciences offers extracurricular pre-study preparatory courses in mathematics and physics for future engineering students. Due to pandemic restrictions, the two-week preparatory physics course preceding the winter term 2020/21 was presented as an online-only course.
Students enrolled in the course attended eight online lectures of approximately 90 minutes each, followed by a group assignment. Both the lectures and the tutoring for the group assignment used a videoconference system, with group sizes of 120 (lecture) and 6 (peer instruction and group assignments). The eight lectures focused on the high school physics curriculum of mechanics, electricity, thermodynamics and optics. Each lecture included four “peer instruction” questions to improve student activation. Student responses were collected using an online audience response tool.
The “peer instruction” questions were discussed by the students in online groups of six. These groups also received written group assignments consisting of common textbook exercises and additional problems with incomplete information. To solve these problems, the groups were encouraged to discuss possible solutions. The online course attendance was monitored and showed a characteristic exponential “decay” curve with a half-life of approximately 18 lectures, which is comparable to conventional courses: around 73% of the students enrolled in the preparatory course attended all eight lectures. In addition to the attendance, the progress of the participants was monitored by two online tests: a pre-course online test on the first course day and a post-course online test on the last day.
The completion of both tests was highly recommended, but not a formal requirement for the students. The fraction of students completing the pre-course, but not the post-course test was used as an estimate for the drop-out rate of (34±3)%.
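As a quick plausibility check, the quoted half-life of 18 lectures and the 73% full-attendance figure above are mutually consistent:

```python
# Exponential attendance decay: the fraction still attending after
# k lectures with half-life h is 0.5 ** (k / h).
h = 18  # half-life in lectures (from the course data)
k = 8   # number of lectures in the preparatory course
fraction = 0.5 ** (k / h)
print(round(100 * fraction))  # 73 (percent), matching the observed 73%
```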