Conference Proceedings, 2021 (85 items)
Most recently, the German federal government published new climate goals aiming to reach climate neutrality by 2045. This paper demonstrates a path to a cost-optimal energy supply system for the German power grid through the year 2050. With special regard to regionality, the system is based on yearly myopic optimization of the required energy system transformation measures and the associated system costs. The results indicate that energy storage systems (ESS) are fundamental for integrating renewables and thus for a feasible energy transition. Moreover, investment in storage technologies increased the usage of solar and wind technologies. Solar energy investments were strongly accompanied by the installation of short-term battery storage, while longer-term storage technologies, such as H2, were accompanied by high installations of wind technologies. The results further indicate that hydrogen investments are expected to overtake short-term batteries if their cost continues to decrease sharply. Moreover, with a strong presence of ESS in the energy system, biomass energy is expected to be completely ruled out of the energy mix. With the current emission reduction strategy and without a strong presence of large-scale ESS in the system, it is unlikely that the Paris Agreement's 2 °C target for 2050 will be achieved, let alone the 1.5 °C target.
Electronic door signs for displaying information have become widespread, particularly in public buildings. These electronic door signs range from tablet-based signs to PC-based signs with an external screen. Most of these systems are operated at 230 V. With a large number of door signs in public buildings, this can lead to significant energy consumption. This paper presents the development of an energy-self-sufficient door sign based on an e-paper display. The door sign can be configured via a smartphone app and an NFC interface. Particular attention is paid to the low-power hardware design of the electronics and to energy-related aspects.
This paper describes the authors' first experiments in creating an artificial dancer whose movements are generated through a combination of algorithmic and interactive techniques with machine learning. This approach is inspired by the time-honoured practice of puppeteering. In puppeteering, an articulated but inanimate object seemingly comes to life through the combined effects of a human controlling select limbs of a puppet while the rest of the puppet's body moves according to gravity and mechanics. In the approach described here, the puppet is a machine-learning-based artificial character that has been trained on motion capture recordings of a human dancer. A single limb of this character is controlled either manually or algorithmically while the machine-learning system takes over the role of physics in controlling the remainder of the character's body. But rather than imitating physics, the machine-learning system generates body movements that are reminiscent of the particular style and technique of the dancer who was originally recorded for acquiring training data. More specifically, the machine-learning system operates by searching for body movements that are not only similar to the training material but that it also considers compatible with the externally controlled limb. As a result, the character playing the role of a puppet is no longer passively responding to the puppeteer but makes movement decisions on its own. This form of puppeteering establishes a form of dialogue between puppeteer and puppet in which both improvise together, and in which the puppet exhibits some of the creative idiosyncrasies of the original human dancer.
Strings P
(2021)
Strings is an audiovisual performance for an acoustic violin and two generative instruments, one for creating synthetic sounds and one for creating synthetic imagery. The three instruments are related to each other conceptually, technically, and aesthetically by sharing the same physical principle, that of a vibrating string. This submission continues the work the authors previously published at xCoAx 2020. The current submission briefly summarizes the previous publication and then describes the changes that have been made to Strings. The P in the title emphasizes that most of these changes have been informed by experiences collected during rehearsals (German: Proben). These changes have helped Strings progress from a predominantly technical framework to a work that is ready for performance.
The transition from college to university can have a variety of psychological effects on students, who need to cope with daily obligations by themselves in a new setting, which can result in loneliness and social isolation. Mobile technology, specifically mental health apps (MHapps), has been seen as a promising solution to assist university students facing these problems; however, there is little evidence around this topic. My research investigates how a mobile app can be designed to reduce social isolation and loneliness among university students. The Noneliness app is being developed to this end; it aims to create social opportunities through a quest-based gamified system in a secure and collaborative network of local users. Initial evaluations with the target audience provided evidence on how an app should be designed for this purpose. These results are presented, along with how they informed the planning of further steps toward my research goals. The paper was presented at the MobileHCI 2020 Doctoral Consortium.
Loneliness, an emotional distress caused by the lack of meaningful social connections, has been increasingly affecting university students who need to deal with everyday situations in a new setting, especially those who have come from abroad. Currently there is little work on digital solutions to reduce loneliness. Therefore, this work describes the general design considerations for mobile apps in this context and outlines a potential solution. The mobile app Noneliness is used to this end: it aims to reduce loneliness by creating social opportunities through a quest-based gamified system in a secure and collaborative network of local users. The results of initial evaluations with the target audience are described. The results informed a user interface redesign as well as a review of the features and the gamification principles adopted.
Duplicate detection, search, and consolidation for customer and business partner data, so-called "identity resolution", is a prerequisite for successful customer relationship management and customer experience management, as well as for risk management to minimize fraud risks and ensure regulatory compliance, and for many other use cases. However, these systems are highly complex and must be individually adapted to customer-specific requirements. Learning-based methods offer great potential for automating this adaptation. In this contribution, we present learning-based methods, practical for an SME, for the automatic configuration of business rules in duplicate detection systems. For domain users, we developed example-driven means to adapt and configure the match system with individual business rules (e.g., relocation detection, blocklist matching). The developed methods were evaluated and integrated into a prototype solution. We were able to show that our machine learning method improved the business rules created by a domain expert for the duplicate detection system "identity", while also reducing the time required to do so.
Grey-box modelling combines physical and data-driven models to benefit from their respective advantages. Neural ordinary differential equations (NODEs) offer new possibilities for grey-box modelling, as differential equations given by physical laws and neural networks can be combined in a single modelling framework. This simplifies simulation and optimization and makes it possible to consider irregularly sampled data during training and evaluation of the model. We demonstrate this approach using two levels of model complexity: first, a simple parallel resistor-capacitor circuit; and second, an equivalent circuit model of a lithium-ion battery cell, where the change of the voltage drop over the resistor-capacitor circuit, including its dependence on current and state of charge, is implemented as a NODE. After training, both models show good agreement with analytical solutions and with experimental data, respectively.
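The grey-box idea for the first, simpler example can be sketched as follows: the known RC-circuit physics provides the main term of the ODE right-hand side, and a small neural network adds a learned correction. This is a minimal illustrative sketch, not the authors' NODE implementation; the parameter values and the (untrained, placeholder) network weights are assumptions, and a fixed-step Euler solver stands in for the adaptive solvers a NODE framework would use.

```python
import numpy as np

# Grey-box ODE sketch for a parallel resistor-capacitor (RC) circuit.
# Physics part: dv/dt = (i_in - v / R) / C
# Data-driven part: a tiny neural network adds a learned correction.
# Weights below are illustrative placeholders, not trained values.

R, C = 10.0, 0.1                                  # ohms, farads (assumed)
rng = np.random.default_rng(0)
W1, b1 = 0.01 * rng.standard_normal((4, 1)), np.zeros(4)
W2, b2 = 0.01 * rng.standard_normal((1, 4)), np.zeros(1)

def nn_correction(v):
    h = np.tanh(W1 @ np.atleast_1d(v) + b1)       # hidden layer
    return float(W2 @ h + b2)                     # scalar residual term

def dv_dt(v, i_in):
    physics = (i_in - v / R) / C                  # known physical law
    return physics + nn_correction(v)             # grey-box: physics + NN

def simulate(v0, i_in, t_end, dt=1e-3):
    # Explicit Euler; a NODE framework would use an adaptive ODE solver.
    v, t = v0, 0.0
    while t < t_end:
        v += dt * dv_dt(v, i_in)
        t += dt
    return v

# With zero input current the capacitor voltage decays with time constant RC.
v_final = simulate(v0=1.0, i_in=0.0, t_end=2.0)
```

In a real NODE setup the network weights would be trained by backpropagating through the solver; here they merely illustrate where the data-driven term enters the physics.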
Cryptographic protection of messages requires frequent updates of the symmetric cipher key used for encryption and decryption. Protocols of legacy IT security, like TLS, SSH, or MACsec, implement rekeying under the assumption that, first, application data exchange is allowed to stall occasionally and, second, dedicated control messages to orchestrate the process can be exchanged. In real-time automation applications, the first is generally prohibitive, while the second may induce problematic traffic patterns on the network. We present a novel seamless rekeying approach that can be embedded into cyclic application data exchanges. Although the approach is agnostic to the underlying real-time communication system, we developed a demonstrator emulating the widespread industrial Ethernet system PROFINET IO and successfully used this rekeying mechanism with it.
We demonstrate how to exploit group sparsity in order to bridge the areas of network pruning and neural architecture search (NAS). This results in a new one-shot NAS optimizer that casts the problem as a single-level optimization problem and does not suffer any performance degradation from discretizing the architecture.
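The bridge between pruning and NAS via group sparsity can be illustrated with the group-lasso proximal operator: each "group" is the weight vector of one candidate operation, and the prox step zeroes out entire groups, effectively removing operations from the architecture. This is a generic sketch of the group-sparsity mechanism, not the paper's specific optimizer; the threshold `lam` and the toy weights are assumptions.

```python
import numpy as np

# Group soft-thresholding (proximal operator of the group-lasso penalty).
# Groups whose L2 norm falls below lam are set to exactly zero, which
# corresponds to pruning that candidate operation out of the search space.

def group_soft_threshold(groups, lam):
    pruned = []
    for g in groups:
        norm = np.linalg.norm(g)
        scale = max(0.0, 1.0 - lam / norm) if norm > 0 else 0.0
        pruned.append(scale * g)          # whole group shrinks or vanishes
    return pruned

ops = [np.array([0.05, -0.02]),           # weak candidate op  -> removed
       np.array([1.0, 0.8])]              # strong candidate op -> kept
result = group_soft_threshold(ops, lam=0.1)
```

Because whole groups vanish at once, the sparsity pattern directly encodes a discrete architecture without a separate discretization step.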
The digital twin concept is increasingly used for optimization tasks in the context of Industry 4.0 and digitization. It can also help small and medium-sized enterprises (SME) to exploit their energy flexibility potential and to achieve added value through appropriate energy marketing. At the same time, this use of flexibility helps to realize a climate-neutral energy supply with high shares of renewable energies. The digital twin reflects real production, power flows, and market influences as a computer model, which makes it possible to simulate and optimize on-site interventions and interactions with the energy market without disturbing the real production processes. This paper describes the development of a generic model library that maps flexibility-relevant components and processes of SME, thus simplifying the creation of a digital twin. The paper also covers the development of an experimental twin consisting of SME hardware components and a PLC-based SCADA system. The experimental twin provides a laboratory environment in which the digital twin can be tested, further developed, and demonstrated on a laboratory scale. Concrete implementations of such a digital twin and experimental twin are described as examples.
This paper describes a new method for estimating the parameters of an interior permanent magnet synchronous machine (IPMSM). For the estimation, the current slopes caused by the switching of the inverter are used to determine the unknowns of the system equations of the electrical machine. The angle and current dependence of the machine parameters is linearized within a PWM cycle. By considering the different switching states of the inverter, several system equations can be derived and a solution can be found within one PWM cycle. The use of test signals and filter-based approaches is avoided. The derived algorithm is explained and validated with measurements on a test bench.
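The core idea of assembling system equations from measured current slopes can be sketched with a deliberately simplified machine model: u = L di/dt + R i (back-EMF and the IPMSM's angle/current dependence omitted for brevity, so this is not the paper's full method). Each inverter switching state contributes one equation, and the stacked system is solved by least squares. All numeric values below are assumed example data.

```python
import numpy as np

# Slope-based parameter estimation for a simplified RL machine model:
#   u = L * di/dt + R * i
# One linear equation per inverter switching state; solve for [L, R].

L_true, R_true = 2e-3, 0.5                        # assumed ground truth (H, ohm)
i = np.array([1.0, 2.0, 3.0, 4.0])                # measured currents (A)
didt = np.array([100.0, -50.0, 80.0, -20.0])      # measured current slopes (A/s)
u = L_true * didt + R_true * i                    # corresponding voltages (V)

A = np.column_stack([didt, i])                    # one row per switching state
(L_est, R_est), *_ = np.linalg.lstsq(A, u, rcond=None)
```

With noise-free data the least-squares solution recovers the parameters exactly; on a real machine, averaging over the switching states within one PWM cycle plays the role of noise suppression.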
Autonomous driving is disrupting the automotive industry as we know it today. For this, fail-operational behavior is essential in the sense, plan, and act stages of the automation chain so that safety-critical situations can be handled autonomously, which state-of-the-art approaches do not yet achieve. The European ECSEL research project PRYSTINE realizes Fail-operational Urban Surround perceptION (FUSION) based on robust Radar and LiDAR sensor fusion and control functions in order to enable safe automated driving in urban and rural environments. This paper showcases some of the key exploitable results (e.g., novel Radar sensors, innovative embedded control and E/E architectures, pioneering sensor fusion approaches, AI-controlled vehicle demonstrators) achieved up to its final year 3.
Generative adversarial networks (GANs) provide state-of-the-art results in image generation. However, despite being so powerful, they still remain very challenging to train. This is in particular caused by their highly non-convex optimization space, which leads to a number of instabilities. Among them, mode collapse stands out as one of the most daunting. This undesirable event occurs when the model can only fit a few modes of the data distribution, while ignoring the majority of them. In this work, we combat mode collapse using second-order gradient information. To do so, we analyse the loss surface through its Hessian eigenvalues, and show that mode collapse is related to the convergence towards sharp minima. In particular, we observe how the eigenvalues of the Hessian are directly correlated with the occurrence of mode collapse. Finally, motivated by these findings, we design a new optimization algorithm called nudged-Adam (NuGAN) that uses spectral information to overcome mode collapse, leading to empirically more stable convergence properties.
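The spectral information this line of work relies on, the largest Hessian eigenvalue of the loss, is typically estimated by power iteration on Hessian-vector products rather than by forming the Hessian explicitly. The sketch below demonstrates this on a toy quadratic loss whose Hessian is known; it is illustrative only and not the NuGAN algorithm.

```python
import numpy as np

# Estimate the largest Hessian eigenvalue (a sharpness measure) via power
# iteration on Hessian-vector products. For the toy loss
#   loss(w) = 0.5 * w^T H w,  the Hessian is the constant matrix H.

H = np.array([[3.0, 1.0],
              [1.0, 2.0]])

def hvp(v):
    # In practice this would be an autodiff Hessian-vector product;
    # for the quadratic toy loss it is just H @ v.
    return H @ v

def top_eigenvalue(dim, iters=100, seed=0):
    v = np.random.default_rng(seed).standard_normal(dim)
    for _ in range(iters):
        v = hvp(v)
        v /= np.linalg.norm(v)        # renormalize each step
    return float(v @ hvp(v))          # Rayleigh quotient at convergence

lam_max = top_eigenvalue(2)           # approx (5 + sqrt(5)) / 2
```

Large values of this eigenvalue indicate a sharp minimum, the regime the paper links to mode collapse.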
Transformer models have recently attracted much interest from computer vision researchers and have since been successfully employed for several problems traditionally addressed with convolutional neural networks. At the same time, image synthesis using generative adversarial networks (GANs) has drastically improved over the last few years. The recently proposed TransGAN is the first GAN using only transformer-based architectures and achieves competitive results when compared to convolutional GANs. However, since transformers are data-hungry architectures, TransGAN requires data augmentation, an auxiliary super-resolution task during training, and a masking prior to guide the self-attention mechanism. In this paper, we study the combination of a transformer-based generator and a convolutional discriminator and successfully remove the need for the aforementioned design choices. We evaluate our approach by conducting a benchmark of well-known CNN discriminators, ablate the size of the transformer-based generator, and show that combining both architectural elements into a hybrid model leads to better results. Furthermore, we investigate the frequency spectrum properties of generated images and observe that our model retains the benefits of an attention-based generator.
Generative adversarial networks are the state-of-the-art approach to learned synthetic image generation. Although early successes were mostly unsupervised, this trend has bit by bit been superseded by approaches based on labelled data. These supervised methods allow much finer-grained control of the output image, offering more flexibility and stability. Nevertheless, the main drawback of such models is the necessity of annotated data. In this work, we introduce a novel framework that benefits from two popular learning techniques, adversarial training and representation learning, and takes a step towards unsupervised conditional GANs. In particular, our approach exploits the structure of a latent space (learned by the representation learning) and employs it to condition the generative model. In this way, we break the traditional dependency between condition and label, substituting the latter with unsupervised features coming from the latent space. Finally, we show that this new technique is able to produce samples on demand while keeping the quality of its supervised counterpart.
Facial image manipulation is a generation task where the output face is shifted towards an intended target direction in terms of facial attributes and styles. Recent works have achieved great success in various editing techniques such as style transfer and attribute translation. However, current approaches either focus on pure style transfer or on the translation of predefined sets of attributes with restricted interactivity. To address this issue, we propose FacialGAN, a novel framework enabling simultaneous rich style transfers and interactive facial attribute manipulation. While preserving the identity of a source image, we transfer the diverse styles of a target image to the source image. We then incorporate the geometry information of a segmentation mask to provide fine-grained manipulation of facial attributes. Finally, a multi-objective learning strategy is introduced to optimize the loss of each specific task. Experiments on the CelebA-HQ dataset, with CelebAMask-HQ as semantic mask labels, show our model's capacity to produce visually compelling results in style transfer, attribute manipulation, diversity, and face verification. For reproducibility, we provide an interactive open-source tool to perform facial manipulations and the PyTorch implementation of the model.
A fundamental and still largely unsolved question in the context of Generative Adversarial Networks is whether they are truly able to capture the real data distribution and, consequently, to sample from it. In particular, the multidimensional nature of image distributions leads to a complex evaluation of the diversity of GAN distributions. Existing approaches provide only a partial understanding of this issue, leaving the question unanswered. In this work, we introduce a loop-training scheme for the systematic investigation of observable shifts between the distributions of real training data and GAN-generated data. Additionally, we introduce several bounded measures for distribution shifts, which are both easy to compute and to interpret. Overall, the combination of these methods allows an explorative investigation of innate limitations of current GAN algorithms. Our experiments on different datasets and multiple state-of-the-art GAN architectures reveal large shifts between input and output distributions, indicating that existing theoretical guarantees on the convergence of output distributions do not appear to hold in practice.
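One example of an easy-to-compute, bounded, and interpretable shift measure of the kind described above is the Jensen-Shannon divergence between histograms of real and generated data, which is bounded by log 2 (dividing by log 2 maps it to [0, 1]). The sketch below is a generic illustration; the specific measures used in the paper may differ, and the histograms are assumed example data.

```python
import numpy as np

# Jensen-Shannon divergence between two histograms: a symmetric, bounded
# measure of distribution shift (0 = identical, log 2 = fully disjoint).

def js_divergence(p, q, eps=1e-12):
    p = p / p.sum()                       # normalize counts to probabilities
    q = q / q.sum()
    m = 0.5 * (p + q)                     # mixture distribution
    def kl(a, b):
        return float(np.sum(a * np.log((a + eps) / (b + eps))))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

real = np.array([10.0, 20.0, 30.0, 40.0])     # histogram of real data
fake = np.array([40.0, 30.0, 20.0, 10.0])     # histogram of generated data
shift = js_divergence(real, fake) / np.log(2)  # normalized to [0, 1]
```

Boundedness is what makes such measures easy to interpret: a shift of 0.15 means the same thing regardless of dataset or architecture.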
The term “attribute transfer” refers to the task of altering images in such a way that the semantic interpretation of a given input image is shifted towards an intended direction, quantified by semantic attributes. Prominent example applications are photo-realistic changes of facial features and expressions, like changing the hair color, adding a smile, or enlarging the nose, or altering the entire context of a scene, like transforming a summer landscape into a winter panorama. Recent advances in attribute transfer are mostly based on generative deep neural networks, using various techniques to manipulate images in the latent space of the generator. In this paper, we present a novel method for the common sub-task of local attribute transfer, where only parts of a face have to be altered in order to achieve semantic changes (e.g. removing a mustache). In contrast to previous methods, where such local changes have been implemented by generating new (global) images, we propose to formulate local attribute transfer as an inpainting problem. By removing and regenerating only parts of images, our “Attribute Transfer Inpainting Generative Adversarial Network” (ATI-GAN) is able to utilize local context information to focus on the attributes while keeping the background unmodified, resulting in visually sound results.
In this preliminary report, we present a simple but very effective technique to stabilize the training of CNN-based GANs. Motivated by recently published methods using frequency decomposition of convolutions (e.g., Octave Convolutions), we propose a novel convolution scheme to stabilize the training and reduce the likelihood of mode collapse. The basic idea of our approach is to split convolutional filters into additive high- and low-frequency parts, while shifting weight updates from low to high during the training. Intuitively, this method forces GANs to learn low-frequency coarse image structures before descending into fine (high-frequency) details. Our approach is orthogonal and complementary to existing stabilization methods and can simply be plugged into any CNN-based GAN architecture. First experiments on the CelebA dataset show the effectiveness of the proposed method.
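The additive low/high-frequency split described above can be sketched as follows: the low-frequency part of a filter is a local average, the high-frequency part is the residual, and a weight alpha shifts emphasis from coarse to fine over training. This is an illustrative approximation; the exact decomposition and schedule in the paper may differ.

```python
import numpy as np

# Additive frequency split of a convolutional filter:
#   filter = low (coarse structure) + high (fine detail),
# with alpha ramping from 0 to 1 over training to phase in the high part.

def split_filter(f):
    low = np.full_like(f, f.mean())   # crude low-frequency component
    high = f - low                    # residual high-frequency component
    return low, high

def recombine(low, high, alpha):
    # alpha = 0: only coarse structure; alpha = 1: full filter restored
    return low + alpha * high

f = np.array([[0., 1., 0.],
              [1., 4., 1.],
              [0., 1., 0.]])          # example 3x3 filter
low, high = split_filter(f)
restored = recombine(low, high, alpha=1.0)
```

Because the split is additive, the full filter is recovered exactly once alpha reaches 1, so the scheme changes the training dynamics without restricting the final filter space.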
When shopping online, it is usually not possible to view products in the same way as when shopping offline. With augmented reality (AR), it is not only possible to view the product in detail, but also to view it at home in the real environment. Such an AR application sets stimuli that can affect users, their purchase decision, and their word-of-mouth intention. In this work, we assume that when viewing a product in AR, not only affective internal states but also cognitive perception processes have an impact on the purchase decision and word-of-mouth intention. While positive affective reactions have already been studied in the context of AR, this paper also describes inner cognitive perception processes, using the construct of AR authenticity. To test these assumptions, a study was conducted with 155 participants. The results show that both the purchase intention and the word-of-mouth intention are influenced by the constructs of positive affective reactions and AR authenticity.
Increasing power density causes increased self-generation of harmonics and intermodulation. As this leads to violations of the strict linearity requirements, especially for carrier aggregation (CA), the nonlinearity must be considered in the design process of RF devices. This raises the demand for accurate simulation models. Linear and nonlinear P-Matrix/COM models are used during the design due to their fast simulation times and accurate results. However, the finite element method (FEM) is useful for gaining deeper insight into a device's nonlinearities, as the total field distributions can be visualized. The FE method requires complete sets of material tensors, which are unknown for most relevant materials in nonlinear micro-acoustics. In this work, we perform nonlinear FEM simulations, which allow the calculation of nonlinear field distributions of a lithium tantalate based layered SAW system up to third order. We aim at achieving good correspondence to measured data and determine the contributions of each material layer to the nonlinear signals. To this end, we use approximations circumventing the issue of limited higher-order tensor data. Experimental data for the third-order nonlinearity is shown to validate the presented approach.
The paper describes the implementation of practical laboratory settings in a virtual environment. With the entry of VR glasses into the mass market, there is a chance to establish educational and training applications for presenting teaching materials and practical work. Our project therefore focuses on the realization of virtual experiments and environments, which give users a deep insight into selected subfields of optics and photonics. Our goal is not to substitute hands-on experiments but rather to extend them. By means of VR glasses, the user is offered the possibility to view the experiment from several angles and to make changes through interactive control functions. During the VR application, additional context-related information is displayed. By using object recognition, the specific graphics and texts for the respective object are loaded and supplemented at the appropriate place. Thus, complex facts are supported in an informative way. The prototype is developed using the Unity engine and can thus be exported to different platforms and end devices. Another major advantage of virtual simulations over the real situation is the high degree of controllability as well as the easy repeatability. With slight modifications, entire experiments can be reused. Our research aims to acquire new knowledge in the field of e-learning in association with VR technology. Here we try to answer a core question regarding the compatibility of the individual media components.
An Empirical Investigation of Model-to-Model Distribution Shifts in Trained Convolutional Filters
(2021)
We present first empirical results from our ongoing investigation of distribution shifts in image data used for various computer vision tasks. Instead of analyzing the original training and test data, we propose to study shifts in the learned weights of trained models. In this work, we focus on the properties of the distributions of the dominantly used 3x3 convolution filter kernels. We collected and publicly provide a data set with over half a billion filters from hundreds of trained CNNs, covering a wide range of data sets, architectures, and vision tasks. Our analysis shows interesting distribution shifts (or the lack thereof) between trained filters along different axes of meta-parameters, like data type, task, architecture, or layer depth. We argue that the observed properties are a valuable source for further investigation into a better understanding of the impact of shifts in the input data on the generalization abilities of CNN models, and for novel methods for more robust transfer learning in this domain.
Offenburg University of Applied Sciences offers extracurricular pre-study preparatory courses in mathematics and physics for future engineering students. Due to pandemic restrictions, the two-week preparatory physics course preceding winter term 2020/21 was presented as an online-only course.
Students enrolled in the course attended eight online lectures of approximately 90 minutes duration, followed by a group assignment. Both the lectures and the tutoring for the group assignment used a videoconference system, with group sizes of 120 (lecture) and 6 (peer instruction and group assignments). The eight lectures focused on the high school physics curriculum of mechanics, electricity, thermodynamics, and optics. Each lecture included four “peer instruction” questions to improve student activation. Student responses were collected using an online audience response tool.
The “peer instruction” questions were discussed by the students in online groups of six students. These groups also received written group assignments consisting of common textbook exercises and additional problems with incomplete information. To solve these problems, groups were encouraged to discuss possible solutions. The online course attendance was monitored and showed a characteristic exponential “decay” curve with a half-life of approximately 18 lectures, which is comparable to conventional courses: around 73% of the students enrolled in the preparatory course attended all eight lectures. In addition to attendance, the progress of the participants was monitored by two online tests: a pre-course online test on the first course day and a post-course online test on the last day.
The completion of both tests was highly recommended, but not a formal requirement for the students. The fraction of students completing the pre-course, but not the post-course test was used as an estimate for the drop-out rate of (34±3)%.
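The exponential “decay” attendance model above can be made concrete: with a half-life T measured in lectures, the fraction of students still attending after t lectures is 2^(-t/T). With the reported half-life of 18 lectures this reproduces the observed ~73% attendance after all eight lectures.

```python
# Exponential attendance decay: fraction still attending after t lectures,
# given a half-life T (in lectures). Matches the ~73% reported for t = 8,
# T = 18 in the text.

def attendance_fraction(t, half_life):
    return 2.0 ** (-t / half_life)

after_eight = attendance_fraction(8, 18)   # about 0.735
```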
The need for the logistics sector to respond in a timely manner to the increasing requirements of a globalised and digitalised world relies greatly on the competences and skills of its labour force. It is therefore essential to reinforce the cooperation between universities and business partners in the logistics and supply chain management fields across the European region and to build a logistics knowledge cluster supported by a communication and collaboration platform to foster continuous learning, skill acquisition, and experience sharing anytime, anywhere. In this paper we focus on designing the conceptual and technical framework for a communication and collaboration platform with the aim of establishing communication pipelines between the partner institutions, facilitating user interaction and exchange, and leading to the creation of new knowledge and innovation in the logistics field. This framework is based on the requirements of the three main stakeholders (students, lecturers, and companies) and consists of four functional areas defined according to the platform's operational requirements. A working prototype of the platform was developed using the Moodle learning management system and its core tools to determine its applicability and possible enhancement requirements. In the next stages of the project, additional tools such as a knowledge base and the integration of the partners' learning management systems to form the logistics knowledge cluster will be implemented.
Human-Robot Collaboration (HRC) has developed rapidly in recent years with the help of collaborative lightweight robots. An important prerequisite for HRC is a safe gripper system. This opens up a new field of application in robotics, mainly in supporting activities in assembly and in care. Currently, there is a variety of grippers that show recognizable weaknesses in terms of flexibility, weight, safety, and price.
By means of additive manufacturing (AM), gripper systems can be developed that are multifunctional, quickly manufactured, and customized. In addition, the subsequent assembly effort can be reduced by integrating several components into one complex component. An important advantage of AM is the new freedom in designing products; thus, components using lightweight design can be produced. Another advantage is the use of 3D multi-material printing, whereby a component with different material properties and functions can be realized.
This contribution presents the possibilities of AM considering HRC requirements. First, the topic of human-robot interaction with regard to additive manufacturing is explained on the basis of a literature review. In addition, the development steps of the HRC gripper through to assembly are explained, with particular emphasis on the knowledge acquired regarding AM. Furthermore, an application example of the HRC gripper is considered in detail, and the gripper and its components are evaluated and optimized with respect to their function. Finally, a technical and economic evaluation is carried out. As a result, it is possible to additively manufacture a multifunctional and customized human-robot collaboration gripping system. Both the costs and the weight were significantly reduced. Due to the low weight of the gripping system, only about 13% of the payload of the robot used is utilized.
Despite the success of convolutional neural networks (CNNs) in many computer vision and image analysis tasks, they remain vulnerable to so-called adversarial attacks: small, crafted perturbations in the input images can lead to false predictions. A possible defense is to detect adversarial examples. In this work, we show how analysis in the Fourier domain of input images and feature maps can be used to distinguish benign test samples from adversarial images. We propose two novel detection methods: our first method employs the magnitude spectrum of the input images to detect an adversarial attack. This simple and robust classifier can successfully detect adversarial perturbations of three commonly used attack methods. The second method builds upon the first and additionally extracts the phase of the Fourier coefficients of feature maps at different layers of the network. With this extension, we are able to improve adversarial detection rates compared to state-of-the-art detectors on five different attack methods. The code for the methods proposed in this paper is available at github.com/paulaharder/SpectralAdversarialDefense
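As a rough illustration of the first detector's feature extraction, the sketch below computes the log-magnitude Fourier spectrum of an image, which can then serve as the input to a simple classifier. The 32x32 toy images, the FGSM-like sign noise, and all function names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def magnitude_spectrum(image):
    """Return the centered log-magnitude Fourier spectrum of a 2D image."""
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    return np.log1p(np.abs(spectrum))

# Toy illustration: a small high-frequency perturbation changes the
# distribution of energy across the spectrum.
rng = np.random.default_rng(0)
benign = rng.random((32, 32))
perturbation = 0.05 * np.sign(rng.standard_normal((32, 32)))  # FGSM-like noise
adversarial = np.clip(benign + perturbation, 0.0, 1.0)

feat_benign = magnitude_spectrum(benign).ravel()
feat_adv = magnitude_spectrum(adversarial).ravel()
print(feat_benign.shape)  # (1024,) -- one spectral feature vector per image
```

In the paper's setting these feature vectors would be fed to a trained detector; here they merely show that benign and perturbed images yield measurably different spectra.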
Aerosol particles play an important role in the climate system by absorbing and scattering radiation and by influencing cloud properties. They are also one of the biggest sources of uncertainty for climate modeling. Many climate models do not include aerosols in sufficient detail. In order to achieve higher accuracy, aerosol microphysical properties and processes have to be accounted for. This is done in the ECHAM-HAM global climate aerosol model using the M7 microphysics model, but the increased computational cost makes it very expensive to run at higher resolutions or for longer times. We aim to use machine learning to approximate the microphysics model at sufficient accuracy while reducing the computational cost by being fast at inference time. The original M7 model is used to generate input-output pairs on which a neural network is trained. By using a special logarithmic transform, we are able to learn the variables' tendencies, achieving an average score of . On a GPU we achieve a speed-up of a factor of 120 compared to the original model.
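The abstract does not specify the logarithmic transform; one plausible signed-log variant, which compresses the huge dynamic range of tendencies while remaining invertible, could look like the sketch below. The scale constant EPS and the function names are assumptions for illustration only:

```python
import numpy as np

EPS = 1e-8  # assumed scale constant; the paper's actual transform may differ

def signed_log(delta, eps=EPS):
    """Map a tendency to a signed-log space, compressing its dynamic range."""
    return np.sign(delta) * np.log1p(np.abs(delta) / eps)

def signed_log_inv(y, eps=EPS):
    """Invert the transform to recover the physical tendency."""
    return np.sign(y) * eps * np.expm1(np.abs(y))

# Tendencies spanning several orders of magnitude, positive and negative:
tendencies = np.array([-1e-2, -1e-6, 0.0, 1e-6, 1e-2])
y = signed_log(tendencies)          # well-scaled targets for a neural network
recovered = signed_log_inv(y)       # mapped back to physical units
print(np.max(np.abs(recovered - tendencies)))  # round-trip error near zero
```

A network trained on such targets predicts values of comparable magnitude for tiny and large tendencies alike, which typically stabilizes training.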
Estimation of Scattering and Transfer Parameters in Stratified Dispersive Tissues of the Human Torso
(2021)
The aim of this study is to understand the effect of the various layers of biological tissue on electromagnetic radiation in a certain frequency range. Understanding these effects could prove crucial in the development of dynamic imaging systems under operating conditions during catheter ablation in the heart. As the catheter passes through arterial paths into the region of interest inside the heart via the aorta, a three-dimensional localization of the catheter is required. In this paper, we study the detection of the catheter using electromagnetic waves. Therefore, an appropriate model of the layers of the human torso is defined and simulated both without and with an inserted electrode.
The present work addresses the problem of bicycle road assessment, which is currently done using expensive special measuring vehicles. Our alternative approach to road condition assessment is to mount a sensor device on a bicycle which sends accelerometer and gyroscope data via WiFi to a classification server. There, a prediction model determines road type and condition based on the sensor data. For the classification task, we compare different machine learning methods with each other, whereby validation accuracies of 99% can be achieved with deep residual networks such as InceptionTime. The main contribution of this work with respect to comparable work is that we achieve excellent accuracies on a realistic dataset, classifying road conditions into nine distinct classes that are highly relevant for practice.
Object Detection and Mapping with Unmanned Aerial Vehicles Using Convolutional Neural Networks
(2021)
Significant progress has been made in the field of deep learning through intensive research over the last decade. So-called convolutional neural networks are an essential component of this research. In this type of neural network, the mathematical convolution operator is used to extract characteristics or anomalies. The purpose of this work is to investigate the extent to which it is possible, in certain initial settings, to feed aerial recordings and flight data of Unmanned Aerial Vehicles (UAVs) into the architecture of a neural network in order to detect and map an object. Using the calculated contours or dimensions of the so-called bounding boxes, the position of the objects can be determined relative to the current UAV location.
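A much-simplified version of the final mapping step can be sketched as follows, assuming a nadir-pointing (straight-down) camera and a simple field-of-view model; all names and parameter values are hypothetical and not taken from the paper:

```python
import numpy as np

def box_to_ground_offset(bbox_center, image_size, fov_deg, altitude_m):
    """Estimate the ground offset of a detected object relative to the UAV.

    bbox_center and image_size are in pixels, fov_deg is the horizontal
    field of view, altitude_m the flight altitude above flat ground.
    """
    cx, cy = bbox_center
    w, h = image_size
    # Ground footprint width for a downward-looking pinhole camera:
    ground_width = 2.0 * altitude_m * np.tan(np.radians(fov_deg) / 2.0)
    m_per_px = ground_width / w
    # Offset of the bounding-box center from the image center, in metres:
    return ((cx - w / 2.0) * m_per_px, (cy - h / 2.0) * m_per_px)

# Object detected right of the image center from 50 m altitude:
offset = box_to_ground_offset((800, 300), (1280, 720), 90.0, 50.0)
print(offset)  # roughly 12.5 m along x, -4.7 m along y
```

In practice, the UAV's GPS position and attitude from the flight data would be combined with such an offset to place the object on a map; lens distortion and camera tilt are ignored here.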
The applicability of local magnetic field characteristics to the more precise localization of subjects and/or objects in indoor environments, such as railway stations, airports, exhibition halls, showrooms, or shopping centers, is considered. An investigation has been carried out to find out whether and how low-cost magnetic field sensors and mobile robot platforms can be used to create maps that improve the accuracy and robustness of later navigation with smartphones or other devices.
The aim of this work is the application and evaluation of a method to visually detect markers at a distance of up to five meters and determine their real-world position. Combinations of cameras and lenses with different parameters were studied to determine the optimal configuration. Based on this configuration, camera images were taken after proper calibration. These images are then transformed into a bird's eye view using a homography matrix. The homography matrix is calculated with four point pairs as well as with coordinate transformations. The obtained images show the ground plane undistorted, making it possible to convert a pixel position into a real-world position with a conversion factor. The proposed approach helps to effectively create datasets for training neural networks for navigation purposes.
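The pixel-to-world conversion described above can be sketched as follows, assuming the homography matrix and the conversion factor have already been determined; the identity homography and the 5 mm/pixel factor in the example are purely illustrative:

```python
import numpy as np

def pixel_to_world(h_matrix, pixel, metres_per_pixel):
    """Map an image pixel to ground-plane coordinates via a homography.

    h_matrix: 3x3 homography from the camera image to the bird's-eye view;
    metres_per_pixel: conversion factor of the rectified view (assumed known).
    """
    p = np.array([pixel[0], pixel[1], 1.0])   # homogeneous pixel coordinates
    q = h_matrix @ p
    q /= q[2]                                  # perspective normalisation
    return q[:2] * metres_per_pixel

# Toy example: identity homography (image already rectified), 5 mm per pixel.
H = np.eye(3)
print(pixel_to_world(H, (200, 400), 0.005))  # [1. 2.]
```

With a real homography estimated from four point pairs (e.g. via OpenCV's `cv2.getPerspectiveTransform`), the same normalisation step applies.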
Correlation Clustering, also called the minimum cost Multicut problem, is the process of grouping data by pairwise similarities. It has proven to be effective on clustering problems where the number of classes is unknown. However, not only is the Multicut problem NP-hard, but an undirected graph G with n vertices representing single images also has up to n(n-1)/2 edges, making it challenging to apply correlation clustering to large datasets. In this work, we propose Multi-Stage Multicuts (MSM) as a scalable approach to image clustering. Specifically, we solve minimum cost Multicut problems across multiple distributed compute units. Our approach not only allows us to solve problem instances which are too large to fit into the shared memory of a single compute node, but also achieves significant speedups while preserving the clustering accuracy. We evaluate our proposed method on the CIFAR10 …
Multiple Object Tracking (MOT) is a long-standing task in computer vision. Current approaches based on the tracking-by-detection paradigm require some sort of domain knowledge or supervision to associate data correctly into tracks. In this work, we present a self-supervised multiple object tracking approach based on visual features and minimum cost lifted multicuts. Our method is based on straightforward spatio-temporal cues that can be extracted from neighboring frames in an image sequence without supervision. Clustering based on these cues enables us to learn the required appearance invariances for the tracking task at hand and to train an autoencoder to generate suitable latent representations. The resulting latent representations can thus serve as robust appearance cues for tracking, even over large temporal distances where no reliable spatio-temporal features can be extracted. We show that, despite being trained without the provided annotations, our model delivers competitive results on the challenging MOT Benchmark for pedestrian tracking.
In this work, we evaluate two different image clustering objectives, k-means clustering and correlation clustering, in the context of Triplet Loss induced feature space embeddings. Specifically, we train a convolutional neural network to learn discriminative features by optimizing two popular versions of the Triplet Loss in order to study their clustering properties under the assumption of noisy labels. Additionally, we propose a new, simple Triplet Loss formulation, which shows desirable properties with respect to formal clustering objectives and outperforms the existing methods. We evaluate all three Triplet Loss formulations for k-means and correlation clustering on the CIFAR-10 image classification dataset.
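As a reference point, the standard margin-based Triplet Loss — one of the popular formulations the abstract alludes to; the newly proposed formulation is not reproduced here — can be sketched as:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Margin-based Triplet Loss on batches of embedding vectors.

    Pulls the anchor towards the positive and pushes it away from the
    negative until the squared distances differ by at least the margin.
    """
    d_pos = np.sum((anchor - positive) ** 2, axis=-1)
    d_neg = np.sum((anchor - negative) ** 2, axis=-1)
    return np.maximum(d_pos - d_neg + margin, 0.0)

a = np.array([[0.0, 0.0]])
p = np.array([[0.1, 0.0]])   # same class, close to the anchor
n = np.array([[1.0, 0.0]])   # different class, far from the anchor
print(triplet_loss(a, p, n))  # [0.] -- this triplet already satisfies the margin
```

Minimizing such a loss shapes the embedding space so that same-class samples cluster together, which is what makes the subsequent k-means or correlation clustering step meaningful.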
Engineering, construction and operation of complex machines involves a wide range of complicated, simultaneous tasks, which potentially could be automated. In this work, we focus on perception tasks in such systems, investigating deep learning approaches for multi-task transfer learning with limited training data. We show an approach that takes advantage of a technical systems’ focus on selected objects and their properties. We create focused representations and simultaneously solve joint objectives in a system through multi-task learning with convolutional autoencoders. The focused representations are used as a starting point for the data-saving solution of the additional tasks. The efficiency of this approach is demonstrated using images and tasks of an autonomous circular crane with a grapple.
The nonlinear behavior of inverters is mainly influenced by the interlocking and switching times of the semiconductors. In the following work, a method is presented that enables online identification of the switching times of the semiconductors. This information allows a compensation of the nonlinear behavior and a reduction of the interlocking time, and can be used for diagnostic purposes. First, a theoretical derivation of the method is given by considering the different cases that occur when the inverter switches and deriving identification possibilities. The method is then extended so that the entire module is taken into account. Furthermore, a possible theoretical implementation is shown. After the methodology has been investigated with respect to possible limitations, boundary conditions and real hardware, an implementation in an FPGA is performed. Finally, the results are presented and discussed, and further improvements are presented in an outlook.
As a reaction to increasing market dynamics and complex requirements, today's products need to be developed quickly and customized to the customer's individual needs. In the past, CAD systems were mainly used to visualize the model that the product designer creates. Generative Design shifts the task of the CAD program towards actively participating in the shaping process. This results in more design options, and the complexity of the shapes and geometries increases significantly. This potential can be optimally exploited by combining Generative Design with Additive Manufacturing (AM). Artificial intelligence and the input of target parameters generate geometries, for example by creating material only in stressed areas, which in turn yields biomorphic shapes and thus significantly reduces the consumption of resources. This contribution aims at the evaluation of existing applications for Generative Design in CAD systems. Special attention is paid to the requirements of design education and easy access for students. For this purpose, three representative CAD systems are selected and analyzed with the help of a comprehensive example of mass reduction. The aim is to perform an individual result analysis in order to assess each application based on various criteria. By using different materials, the influence of the material on the generation is investigated by comparing the material distribution. By comparing the generated models, differences between the CAD systems can be identified and possible fields of application can be presented. By specifying the manufacturing parameters for the generation of the models, the feasibility of AM can be guaranteed without having to modify the results. The physical implementation of the example by means of Fused Deposition Modeling demonstrates this in an exemplary way and examines the interface between Generative Design and AM.
The results of this contribution will enable an evaluation of the different CAD systems for Generative Design according to technical, visual and economic aspects.
Additive manufacturing is a rapidly growing manufacturing process for which many new processes and materials are currently being developed. The biggest advantage is that almost any shape can be produced, whereas conventional manufacturing methods reach their limits. Furthermore, a lot of material is saved because the part is created in layers and only as much material is used as necessary. In contrast, in the case of machining processes, it is not uncommon for more than half of the material to be removed and disposed of. Recently, new additive manufacturing processes have come onto the market that enable the manufacturing of components using the FDM process with fiber reinforcement. This opens up new possibilities for optimizing components in terms of their strength while at the same time increasing sustainability by reducing material consumption and waste. Within the scope of this work, different types of test specimens are designed, manufactured and examined. The test specimens are tensile specimens, which are used both for standardized tensile tests and for examining a practical component from automotive engineering used in a student project. This project is a vehicle designed to compete in the Shell Eco-marathon, one of the world's largest energy efficiency competitions. The aim is to design a vehicle that covers a certain distance with as little fuel as possible. Accordingly, it is desirable to manufacture the components with the lowest possible weight while still ensuring the required rigidity. To achieve this, the use of fiber-reinforced 3D-printed parts is particularly suitable due to their high rigidity. In particular, the joining technology for connecting conventionally and additively manufactured components is developed. As a result, the economic efficiency was assessed, and guidelines for the design of components and joining elements were created.
In addition, it could be shown that the additive manufacturing of the component could be implemented faster and more sustainably than the previous conventional manufacturing.
The use of architectural models is a long-proven method for the visualization of designs. More recently, powerful 3D printers have enabled the rapid and cost-effective additive manufacturing (AM) of textured architectural models. Within this contribution, the use of AM technology for the sampling of terraced houses in a specific use case (a sampling center with more than 1200 customers per year) is examined. The aim is to offer customers with limited spatial imagination assistance in the form of detailed architectural models of the whole house, divided into different modules. For this purpose, the structure of the terraced house is first analyzed and examined for flexible design elements. The implementation of different variants of each floor should serve as a basis for the customer's decision on design and equipment. The architectural models are thus additively manufactured using PolyJet modeling. The necessary CAAD data and interfaces, the technical possibilities and limits of this approach, and the resulting costs are analyzed. The results of the AM process are evaluated to determine their applicability to the sampling of terraced houses. In addition, the evaluation shows that the additively manufactured architectural models allow a more precise visualization of the building and thus a faster understanding of the design choices.
Active participation of industrial enterprises in electricity markets - a generic modeling approach
(2021)
Industrial enterprises represent a significant portion of electricity consumers with the potential of providing demand-side energy flexibility from their production processes and on-site energy assets. Methods are needed for the active and profitable participation of such enterprises in the electricity markets especially with variable prices, where the energy flexibility available in their manufacturing, utility and energy systems can be assessed and quantified. This paper presents a generic model library equipped with optimal control for energy flexibility purposes. The components in the model library represent the different technical units of an industrial enterprise on material, media, and energy flow levels with their process constraints. The paper also presents a case study simulation of a steel-powder manufacturing plant using the model library. Its energy flexibility was assessed when the plant procured its electrical energy at fixed and variable electricity prices. In the simulated case study, flexibility use at dynamic prices resulted in a 6% cost reduction compared to a fixed-price scenario, with battery storage and the manufacturing system making the largest contributions to flexibility.
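As a toy illustration of flexibility use at variable prices — not the paper's model library or its optimal control, and without any process constraints — a greedy scheduler that shifts flexible demand into the cheapest hours already shows the cost effect; all numbers are made up:

```python
import numpy as np

def schedule_flexible_load(prices, energy_blocks):
    """Greedy sketch: place identical 1-hour energy blocks into the cheapest hours.

    prices: electricity price per hour; energy_blocks: number of flexible
    demand blocks. A real plant model adds material and process constraints.
    """
    cheapest = np.argsort(prices)[:energy_blocks]
    schedule = np.zeros_like(prices)
    schedule[cheapest] = 1.0
    return schedule

prices = np.array([30.0, 18.0, 25.0, 40.0, 15.0, 35.0])  # EUR/MWh, illustrative
plan = schedule_flexible_load(prices, 2)

flexible_cost = float(plan @ prices)     # buys in the 18 and 15 EUR/MWh hours
fixed_cost = 2 * float(np.mean(prices))  # same energy at the average price
print(flexible_cost, fixed_cost)         # the flexible schedule is cheaper
```

The gap between the two costs is the kind of saving the paper quantifies (6% in the simulated case study), except that there the schedule results from optimal control over coupled material, media and energy flows rather than from a greedy sort.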
This paper describes a taxonomy that allows one to assess and compare different implementations of master data objects. A systematic breakdown of core entities provides a framework to distinguish four categories of master data objects: independent objects, dependent objects, relational objects, and reference objects that serve to attribute information. This supports the preparation of data migrations from one system to another.
A deep understanding of the cyclic plasticity behavior of metallic materials is highly relevant both for the optimization of material properties and for the industrial design and manufacture of components. Modern alloys such as duplex steels in particular exhibit a pronounced Bauschinger effect under load reversal, owing to their complex multi-phase microstructure and their tendency towards various precipitation reactions, which must be taken into account in industrial forming processes. The Bauschinger effect is largely attributable to the development of back stresses, which result from the different plasticity behavior of the austenitic and ferritic phases. Instrumented micro-indentation tests in selected ferrite and austenite grains have shown that austenitic microstructure constituents are characterized by a considerably earlier onset of yielding and a stronger reverse plastification during unloading. Furthermore, it was demonstrated that precipitates formed during 475 °C embrittlement intensify this phase difference and thus result in a stronger Bauschinger effect.