Engineering, construction, and operation of complex machines involve a wide range of complicated, simultaneous tasks that could potentially be automated. In this work, we focus on perception tasks in such systems, investigating deep learning approaches for multi-task transfer learning with limited training data. We show an approach that takes advantage of a technical system's focus on selected objects and their properties. We create focused representations and simultaneously solve joint objectives in a system through multi-task learning with convolutional autoencoders. The focused representations are used as a starting point for the data-efficient solution of the additional tasks. The efficiency of this approach is demonstrated using images and tasks of an autonomous circular crane with a grapple.
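A minimal sketch of a multi-task convolutional autoencoder of the kind described, assuming a shared encoder whose focused representation feeds both a reconstruction decoder and one additional task head (all layer sizes and the task head are illustrative, not the authors' architecture):

```python
import torch
import torch.nn as nn

class MultiTaskAutoencoder(nn.Module):
    """Shared conv encoder, reconstruction decoder, and an extra task head."""
    def __init__(self, n_classes=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )
        # Additional task reusing the focused representation,
        # e.g. classifying a property of the grappled object (hypothetical).
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(32, n_classes))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), self.head(z)

model = MultiTaskAutoencoder()
x = torch.rand(8, 3, 64, 64)
recon, logits = model(x)
# Joint objective: weighted sum of reconstruction and task loss.
loss = nn.functional.mse_loss(recon, x) + 0.5 * nn.functional.cross_entropy(
    logits, torch.randint(0, 4, (8,)))
loss.backward()
```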
An Empirical Investigation of Model-to-Model Distribution Shifts in Trained Convolutional Filters
(2021)
We present first empirical results from our ongoing investigation of distribution shifts in image data used for various computer vision tasks. Instead of analyzing the original training and test data, we propose to study shifts in the learned weights of trained models. In this work, we focus on the properties of the distributions of the dominantly used 3x3 convolution filter kernels. We collected and publicly provide a data set with over half a billion filters from hundreds of trained CNNs, using a wide range of data sets, architectures, and vision tasks. Our analysis shows interesting distribution shifts (or the lack thereof) between trained filters along different axes of meta-parameters, like data type, task, architecture, or layer depth. We argue that the observed properties are a valuable source for further investigation into a better understanding of the impact of shifts in the input data on the generalization abilities of CNN models, and for novel methods for more robust transfer learning in this domain.
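How such a filter data set can be collected is sketched below for a single torchvision model (the model and weights here are placeholders; the paper aggregates hundreds of trained CNNs):

```python
import torch
import torchvision.models as models

def collect_3x3_filters(model):
    """Stack every 3x3 conv kernel of the model as rows of 9 coefficients."""
    kernels = [m.weight.detach().reshape(-1, 9)
               for m in model.modules()
               if isinstance(m, torch.nn.Conv2d) and m.kernel_size == (3, 3)]
    return torch.cat(kernels)

# weights=None keeps the example offline; a trained checkpoint would be used in practice.
filters = collect_3x3_filters(models.resnet18(weights=None))
print(filters.shape)                    # (num_filters, 9)
print(filters.mean(0), filters.std(0))  # per-coefficient statistics to compare across models
```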
A fundamental and still largely unsolved question in the context of Generative Adversarial Networks is whether they are truly able to capture the real data distribution and, consequently, to sample from it. In particular, the multidimensional nature of image distributions makes the evaluation of the diversity of GAN distributions complex. Existing approaches provide only a partial understanding of this issue, leaving the question unanswered. In this work, we introduce a loop-training scheme for the systematic investigation of observable shifts between the distributions of real training data and GAN-generated data. Additionally, we introduce several bounded measures for distribution shifts which are both easy to compute and to interpret. Overall, the combination of these methods allows an explorative investigation of innate limitations of current GAN algorithms. Our experiments on different datasets and multiple state-of-the-art GAN architectures show large shifts between input and output distributions, indicating that the existing theoretical guarantees on the convergence of output distributions do not appear to hold in practice.
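The abstract does not spell out the bounded measures; as one example of the kind described, a histogram-based total variation distance is bounded in [0, 1] and easy to compute and interpret:

```python
import numpy as np

def tv_shift(real_scores, fake_scores, bins=50):
    """Bounded shift measure in [0, 1]: total variation distance between two
    empirical distributions of a 1-D projection (e.g. classifier scores)."""
    lo = min(real_scores.min(), fake_scores.min())
    hi = max(real_scores.max(), fake_scores.max())
    p, _ = np.histogram(real_scores, bins=bins, range=(lo, hi))
    q, _ = np.histogram(fake_scores, bins=bins, range=(lo, hi))
    p = p / p.sum()
    q = q / q.sum()
    return 0.5 * np.abs(p - q).sum()  # 0 = identical, 1 = disjoint

rng = np.random.default_rng(0)
print(tv_shift(rng.normal(0, 1, 10_000), rng.normal(0.5, 1, 10_000)))
```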
Correlation Clustering, also called the minimum cost Multicut problem, is the process of grouping data by pairwise similarities. It has proven to be effective on clustering problems where the number of classes is unknown. However, not only is the Multicut problem NP-hard, but an undirected graph G with n vertices representing single images has at most n(n-1)/2 edges, making it challenging to apply correlation clustering to large datasets. In this work, we propose Multi-Stage Multicuts (MSM) as a scalable approach for image clustering. Specifically, we solve minimum cost Multicut problems across multiple distributed compute units. Our approach not only allows us to solve problem instances which are too large to fit into the shared memory of a single compute node, but it also achieves significant speedups while preserving the clustering accuracy. We evaluate our proposed method on the CIFAR10 …
Aerosol particles play an important role in the climate system by absorbing and scattering radiation and influencing cloud properties. They are also one of the biggest sources of uncertainty for climate modeling. Many climate models do not include aerosols in sufficient detail. In order to achieve higher accuracy, aerosol microphysical properties and processes have to be accounted for. This is done in the ECHAM-HAM global climate aerosol model using the M7 microphysics model, but increased computational costs make it very expensive to run at higher resolutions or for a longer time. We aim to use machine learning to approximate the microphysics model at sufficient accuracy and to reduce the computational cost by being fast at inference time. The original M7 model is used to generate input-output pairs on which a neural network is trained. By using a special logarithmic transform we are able to learn the variables' tendencies, achieving an average score of . On a GPU we achieve a speed-up factor of 120 compared to the original model.
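The exact logarithmic transform is not given in the abstract; a signed-log transform such as the hypothetical one below is one plausible way to learn tendencies that span many orders of magnitude and both signs:

```python
import numpy as np

def signed_log(x, eps=1e-12):
    """Symmetric log transform: compresses tendencies spanning many orders of
    magnitude while preserving sign (illustrative, not the authors' exact transform)."""
    return np.sign(x) * np.log10(1.0 + np.abs(x) / eps)

def signed_log_inv(y, eps=1e-12):
    """Inverse transform, mapping network outputs back to physical tendencies."""
    return np.sign(y) * eps * (np.power(10.0, np.abs(y)) - 1.0)

x = np.array([-3e-5, 0.0, 2e-9, 7e-2])
assert np.allclose(signed_log_inv(signed_log(x)), x)
```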
Recently, adversarial attacks on image classification networks by the AutoAttack (Croce and Hein, 2020b) framework have drawn a lot of attention. While AutoAttack has shown a very high attack success rate, most defense approaches focus on network hardening and robustness enhancements, like adversarial training. This way, the currently best-reported method can withstand about 66% of adversarial examples on CIFAR10. In this paper, we investigate the spatial and frequency domain properties of AutoAttack and propose an alternative defense. Instead of hardening a network, we detect adversarial attacks during inference, rejecting manipulated inputs. Based on a rather simple and fast analysis in the frequency domain, we introduce two different detection algorithms. First, a black-box detector that only operates on the input images and achieves a detection accuracy of 100% on the AutoAttack CIFAR10 benchmark and 99.3% on ImageNet, for epsilon = 8/255 in both cases. Second, a white-box detector using an analysis of CNN feature maps, leading to detection rates of 100% and 98.7% on the same benchmarks.
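A toy sketch of the black-box idea: the log-magnitude spectrum of each input image serves as the feature vector for a simple binary detector (the classifier choice and the synthetic data are assumptions for illustration, not the authors' exact pipeline):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def magnitude_spectrum(img):
    """Log-magnitude of the centered 2-D FFT of a grayscale image (H, W)."""
    f = np.fft.fftshift(np.fft.fft2(img))
    return np.log1p(np.abs(f)).ravel()

# Benign / manipulated images as arrays of shape (N, H, W); placeholders here.
rng = np.random.default_rng(0)
benign = rng.random((64, 32, 32))
adversarial = benign + 0.03 * rng.standard_normal((64, 32, 32))

X = np.stack([magnitude_spectrum(im) for im in np.concatenate([benign, adversarial])])
y = np.array([0] * 64 + [1] * 64)
detector = LogisticRegression(max_iter=1000).fit(X, y)
print(detector.score(X, y))  # training accuracy of the toy detector
```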
Transformer models have recently attracted much interest from computer vision researchers and have since been successfully employed for several problems traditionally addressed with convolutional neural networks. At the same time, image synthesis using generative adversarial networks (GANs) has drastically improved over the last few years. The recently proposed TransGAN is the first GAN using only transformer-based architectures and achieves competitive results when compared to convolutional GANs. However, since transformers are data-hungry architectures, TransGAN requires data augmentation, an auxiliary super-resolution task during training, and a masking prior to guide the self-attention mechanism. In this paper, we study the combination of a transformer-based generator and a convolutional discriminator and successfully remove the need for the aforementioned design choices. We evaluate our approach by conducting a benchmark of well-known CNN discriminators, ablate the size of the transformer-based generator, and show that combining both architectural elements into a hybrid model leads to better results. Furthermore, we investigate the frequency spectrum properties of generated images and observe that our model retains the benefits of an attention-based generator.
Generative adversarial networks are the state-of-the-art approach to learned synthetic image generation. Although early successes were mostly unsupervised, bit by bit this trend has been superseded by approaches based on labelled data. These supervised methods allow much finer-grained control of the output image, offering more flexibility and stability. Nevertheless, the main drawback of such models is the necessity of annotated data. In this work, we introduce a novel framework that benefits from two popular learning techniques, adversarial training and representation learning, and takes a step towards unsupervised conditional GANs. In particular, our approach exploits the structure of a latent space (learned by the representation learning) and employs it to condition the generative model. In this way, we break the traditional dependency between condition and label, substituting the latter by unsupervised features coming from the latent space. Finally, we show that this new technique is able to produce samples on demand while keeping the quality of its supervised counterpart.
Generative adversarial networks (GANs) provide state-of-the-art results in image generation. However, despite being so powerful, they still remain very challenging to train. This is in particular caused by their highly non-convex optimization space leading to a number of instabilities. Among them, mode collapse stands out as one of the most daunting ones. This undesirable event occurs when the model can only fit a few modes of the data distribution, while ignoring the majority of them. In this work, we combat mode collapse using second-order gradient information. To do so, we analyse the loss surface through its Hessian eigenvalues, and show that mode collapse is related to the convergence towards sharp minima. In particular, we observe how the eigenvalues of the Hessian are directly correlated with the occurrence of mode collapse. Finally, motivated by these findings, we design a new optimization algorithm called nudged-Adam (NuGAN) that uses spectral information to overcome mode collapse, leading to empirically more stable convergence properties.
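The NuGAN update rule itself is not given in the abstract, but the spectral quantity it builds on, the top Hessian eigenvalue of the loss, can be estimated without forming the Hessian, e.g. by power iteration on Hessian-vector products:

```python
import torch

def top_hessian_eigenvalue(loss, params, iters=20):
    """Estimate the largest Hessian eigenvalue of `loss` w.r.t. `params`
    via power iteration on Hessian-vector products (no explicit Hessian)."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    v = [torch.randn_like(p) for p in params]
    for _ in range(iters):
        gv = sum((g * vi).sum() for g, vi in zip(grads, v))  # g . v
        hv = torch.autograd.grad(gv, params, retain_graph=True)  # H v
        norm = torch.sqrt(sum((h ** 2).sum() for h in hv))
        v = [h / norm for h in hv]
    gv = sum((g * vi).sum() for g, vi in zip(grads, v))
    hv = torch.autograd.grad(gv, params, retain_graph=True)
    return sum((h * vi).sum() for h, vi in zip(hv, v)).item()  # Rayleigh quotient

w = torch.nn.Parameter(torch.randn(10))
loss = (w ** 4).sum()  # toy non-convex loss standing in for a GAN objective
print(top_hessian_eigenvalue(loss, [w]))
```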
In this preliminary report, we present a simple but very effective technique to stabilize the training of CNN-based GANs. Motivated by recently published methods using frequency decomposition of convolutions (e.g. Octave Convolutions), we propose a novel convolution scheme to stabilize the training and reduce the likelihood of mode collapse. The basic idea of our approach is to split convolutional filters into additive high- and low-frequency parts, while shifting weight updates from low to high during the training. Intuitively, this method forces GANs to learn low-frequency coarse image structures before descending into fine (high-frequency) details. Our approach is orthogonal and complementary to existing stabilization methods and can simply be plugged into any CNN-based GAN architecture. First experiments on the CelebA dataset show the effectiveness of the proposed method.
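One possible realization of the described additive low/high-frequency filter split, with a blend factor ramped from low-frequency-only towards the full filter during training (the smoothing kernel and schedule are assumptions, not the paper's exact scheme):

```python
import torch
import torch.nn.functional as F

def freq_split_weight(weight, alpha):
    """Blend low- and high-frequency parts of a conv kernel.
    alpha=1 keeps only the low-pass (smoothed) part, alpha=0 the full kernel."""
    out_c, in_c, k, _ = weight.shape
    # A 3x3 box blur applied to the kernel itself acts as a crude low-pass.
    blur = torch.full((1, 1, 3, 3), 1.0 / 9.0, device=weight.device)
    flat = weight.reshape(out_c * in_c, 1, k, k)
    low = F.conv2d(flat, blur, padding=1).reshape_as(weight)
    high = weight - low
    return low + (1.0 - alpha) * high  # ramp alpha from 1 to 0 during training

w = torch.randn(16, 8, 3, 3)
x = torch.randn(2, 8, 32, 32)
y_early = F.conv2d(x, freq_split_weight(w, alpha=1.0), padding=1)  # coarse phase
y_late = F.conv2d(x, freq_split_weight(w, alpha=0.0), padding=1)   # full filter
print(y_early.shape, y_late.shape)
```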
We demonstrate how to exploit group sparsity in order to bridge the areas of network pruning and neural architecture search (NAS). This results in a new one-shot NAS optimizer that casts the problem as a single-level optimization problem and does not suffer any performance degradation from discretizing the architecture.
Despite the success of convolutional neural networks (CNNs) in many computer vision and image analysis tasks, they remain vulnerable against so-called adversarial attacks: small, crafted perturbations in the input images can lead to false predictions. A possible defense is to detect adversarial examples. In this work, we show how analysis in the Fourier domain of input images and feature maps can be used to distinguish benign test samples from adversarial images. We propose two novel detection methods: our first method employs the magnitude spectrum of the input images to detect an adversarial attack. This simple and robust classifier can successfully detect adversarial perturbations of three commonly used attack methods. The second method builds upon the first and additionally extracts the phase of the Fourier coefficients of feature maps at different layers of the network. With this extension, we are able to improve adversarial detection rates compared to state-of-the-art detectors on five different attack methods. The code for the methods proposed in the paper is available at github.com/paulaharder/SpectralAdversarialDefense
In this work, we evaluate two different image clustering objectives, k-means clustering and correlation clustering, in the context of Triplet Loss induced feature space embeddings. Specifically, we train a convolutional neural network to learn discriminative features by optimizing two popular versions of the Triplet Loss in order to study their clustering properties under the assumption of noisy labels. Additionally, we propose a new, simple Triplet Loss formulation, which shows desirable properties with respect to formal clustering objectives and outperforms the existing methods. We evaluate all three Triplet Loss formulations for k-means and correlation clustering on the CIFAR-10 image classification dataset.
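For reference, the two popular Triplet Loss versions typically compared in this setting are the margin and the soft-margin formulation; a compact sketch (the paper's new third formulation is not reproduced here):

```python
import torch
import torch.nn.functional as F

def triplet_margin(anchor, pos, neg, margin=0.2):
    """Classic margin formulation: max(0, d(a,p) - d(a,n) + margin)."""
    d_ap = (anchor - pos).pow(2).sum(1)
    d_an = (anchor - neg).pow(2).sum(1)
    return F.relu(d_ap - d_an + margin).mean()

def triplet_softmargin(anchor, pos, neg):
    """Soft-margin formulation: log(1 + exp(d(a,p) - d(a,n)))."""
    d_ap = (anchor - pos).pow(2).sum(1)
    d_an = (anchor - neg).pow(2).sum(1)
    return F.softplus(d_ap - d_an).mean()

# Toy L2-normalized embeddings for a batch of 32 triplets.
a, p, n = (F.normalize(torch.randn(32, 128), dim=1) for _ in range(3))
print(triplet_margin(a, p, n).item(), triplet_softmargin(a, p, n).item())
```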
The term “attribute transfer” refers to the task of altering images in such a way that the semantic interpretation of a given input image is shifted towards an intended direction, which is quantified by semantic attributes. Prominent example applications are photorealistic changes of facial features and expressions, like changing the hair color, adding a smile, enlarging the nose, or altering the entire context of a scene, like transforming a summer landscape into a winter panorama. Recent advances in attribute transfer are mostly based on generative deep neural networks, using various techniques to manipulate images in the latent space of the generator. In this paper, we present a novel method for the common sub-task of local attribute transfer, where only parts of a face have to be altered in order to achieve semantic changes (e.g. removing a mustache). In contrast to previous methods, where such local changes have been implemented by generating new (global) images, we propose to formulate local attribute transfer as an inpainting problem. Removing and regenerating only parts of images, our “Attribute Transfer Inpainting Generative Adversarial Network” (ATI-GAN) is able to utilize local context information to focus on the attributes while keeping the background unmodified, resulting in visually convincing results.
The Go programming language is an increasingly popular language but some of its features lack a formal investigation. This article explains Go's resolution mechanism for overloaded methods and its support for structural subtyping by means of translation from Featherweight Go to a simple target language. The translation employs a form of dictionary passing known from type classes in Haskell and preserves the dynamic behavior of Featherweight Go programs.
The internal crowdsourcing-based ideation within a company can be defined as the involvement of its staff, specialists, managers, and other employees to propose solution ideas for a pre-defined problem. This paper addresses the question of how many participants of the company-internal ideation process are required to nearly reach the ideation limit for problems with a finite number of workable solutions. To answer the research question, the author proposes a set of metrics and a non-linear ideation performance function with a positive decreasing slope and an ideation limit for closed-ended problems. Three series of experiments helped to explore relationships between the metric attributes and resulted in a mathematical model which allows companies to predict the productivity metrics of their crowdsourcing ideation activities, such as the quantity of different ideas and the ideation limit, as a function of the number of contributors, their average personal creativity, and the ideation efficiency of a contributors' group.
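The closed form of the proposed function is not given in the abstract; a saturating curve with a positive, strictly decreasing slope and a finite limit, such as the hypothetical form below, illustrates the claimed behaviour:

```python
import numpy as np

def ideas(n, limit=120, c=0.85, q=0.04):
    """Hypothetical ideation performance function for a closed-ended problem:
    n contributors with average personal creativity c and per-idea discovery
    probability q approach the finite ideation `limit` with decreasing slope."""
    return limit * (1.0 - (1.0 - q) ** (c * n))

for n in (5, 20, 50, 200):
    print(n, round(ideas(n), 1))
# The slope is positive and strictly decreasing, and ideas(n) -> limit as n grows.
```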
A coordinated operation of decentralised micro-scale hybrid energy systems within a locally managed network such as a district or neighbourhood will play a significant role in the sector-coupled energy grid of the future. A quantitative analysis of the effects of the primary energy factors, energy conversion efficiencies, load profiles, and control strategies on their energy-economic balance can aid in identifying important trends concerning their deployment within such a network. In this contribution, the operational data from five energy laboratories in the trinational Upper Rhine region are analysed and a comparison to a conventional reference system is presented. Ten exemplary data sets representing typical operating conditions for the laboratories in different seasons, together with the latest information on the national energy strategies, are used to evaluate the primary energy consumption, CO2 emissions, and demand-related costs. Various conclusions on the ecological and economic feasibility of hybrid building energy systems are drawn to provide a starting point for the engineering community in their planning and development.
In the field of network security, the detection of possible intrusions is an important task for preventing and analysing attacks. Machine learning has been adopted as a particular supporting technique in recent years. However, the majority of related published work uses post-mortem log files and fails to address the required real-time capabilities of network data feature extraction and machine-learning-based analysis [1-5]. We introduce the network feature extractor library FEX, which is designed to allow real-time feature extraction from network data. This library incorporates 83 statistical features based on reassembled data flows. The introduced Cython implementation allows processing individual packets within 4.58 microseconds. Based on the features extracted by FEX, existing intrusion detection machine learning models were examined with respect to their real-time capabilities. An identified Decision-Tree Classifier model was further optimised by transpiling it into C code. This reduced the prediction time of a single sample to 3.96 microseconds on average. Based on the feature extractor and the improved machine learning model, an IDS was implemented which supports a data throughput between 63.7 Mbit/s and 2.5 Gbit/s, making it a suitable candidate for a real-time, machine-learning-based IDS.
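The transpilation step can be illustrated as follows: walking a fitted sklearn DecisionTreeClassifier and emitting a nested C if/else function (a simplified stand-in; the 83 flow features and the tuning of the actual model are omitted):

```python
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

def tree_to_c(tree, name="predict"):
    """Emit a C function with nested if/else mirroring the fitted tree."""
    t = tree.tree_
    lines = [f"int {name}(const double *f) {{"]

    def emit(node, indent):
        pad = "    " * indent
        if t.children_left[node] == -1:  # leaf node: return majority class
            lines.append(f"{pad}return {int(t.value[node].argmax())};")
        else:
            lines.append(f"{pad}if (f[{t.feature[node]}] <= {t.threshold[node]:.9g}) {{")
            emit(t.children_left[node], indent + 1)
            lines.append(f"{pad}}} else {{")
            emit(t.children_right[node], indent + 1)
            lines.append(f"{pad}}}")

    emit(0, 1)
    lines.append("}")
    return "\n".join(lines)

# Toy data standing in for the extracted flow features.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(tree_to_c(clf))
```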
The nonlinear behavior of inverters is mainly influenced by the interlocking and switching times of the semiconductors. In the following work, a method is presented that enables online identification of the switching times of the semiconductors. This information allows a compensation of the nonlinear behavior and a reduction of the interlocking time, and can be used for diagnostic purposes. First, a theoretical derivation of the method is given by considering the different switching cases of the inverter and deriving identification possibilities. The method is then extended so that the entire module is taken into account. Furthermore, a possible theoretical implementation is shown. After the methodology has been investigated with regard to possible limitations, boundary conditions, and real hardware, an implementation in the FPGA is performed. Finally, the results are presented and discussed, and further improvements are outlined in an outlook.
As one result of the digital transformation in the automotive industry, OEMs demand new digital business models comprising software-based solutions. To adequately meet these new requirements, automotive suppliers implement interdisciplinary roles called Customer Solution Designers. However, due to its novelty, the Customer Solution Design research field is not yet well developed, neither in theory nor in practice. Besides giving an overview of the current state of the Customer Solution Design research field, the core of this paper is two-fold: based on 14 guided expert interviews with selected experts of a large German automotive supplier, we establish a uniform understanding of the Customer Solution Design role by using the Role Model Canvas (I). In addition, a case study strategy comprising two software-based projects executed by a large German automotive supplier is used to derive a common approach for Customer Solution Design in the context of an agile business framework (II).
Due to the pandemic of 2020, many teaching and research institutions are confronted with extraordinary working conditions. In order to enable empirical data collection under these special circumstances, teachers and scientists need to respond flexibly and new concepts need to be developed. This paper deals with the challenges that arise in day-to-day teaching and provides different approaches to meet these challenges. It covers quantitative surveys, remote UX testing methods as an alternative to eye tracking studies in the lab, as well as face-to-face user experience testing under strict hygiene measures.
In an experience economy, market competition in the software sector is becoming more and more intense. Technical innovations, global retail practices, and the multidimensional conception of experiences provide both opportunities and challenges for companies worldwide. Retailers strive for an optimized conversion rate, but poor UX still abounds. Germany-based companies in particular are less evolved in an international comparison of industrialized economies. The value of integrating users in the development process is recognized, but methodologies must be carefully incorporated into existing agile workflows. The goal of this study is to bridge the gaps between internal agency, external client, and user interests. The contribution is four-fold: an overview of the current status of customer centricity in the e-commerce sector is provided (I). Based on this corpus, a methodical framework aiming to incorporate the experience logic in UX practices within an agile project team is presented (II). The framework is applied in a single case study, the shop relaunch of a motorbike accessory store (III). Finally, all interest groups (UX, development, and project management) are incorporated in the qualitative content analysis (IV).
Offenburg University of Applied Sciences offers extracurricular preparatory courses in mathematics and physics for future engineering students. Due to pandemic restrictions, the two-week preparatory physics course preceding the winter term 2020/21 was presented as an online-only course.
Students enrolled in the course attended eight online lectures of approximately 90 minutes duration, each followed by a group assignment. Both lectures and tutoring for the group assignments used a videoconference system with group sizes of 120 (lecture) and 6 (peer instruction and group assignments). The eight lectures focused on the high school physics curriculum of mechanics, electricity, thermodynamics, and optics. Each lecture included four “peer instruction” questions to improve student activation. Student responses were collected using an online audience response tool.
The “peer instruction” questions were discussed by the students in online groups of six students. These groups also received written group assignments consisting of common textbook exercises and additional problems with incomplete information. To solve these problems, groups were encouraged to discuss possible solutions. The online course attendance was monitored and showed a characteristic exponential “decay” curve with a half-life of approximately 18 lectures, which is comparable to conventional courses: around 73% of the students enrolled in the preparatory course attended all eight lectures. In addition to attendance, the progress of the participants was monitored by two online tests: a pre-course online test on the first course day and a post-course online test on the last day.
The completion of both tests was highly recommended, but not a formal requirement for the students. The fraction of students completing the pre-course, but not the post-course test was used as an estimate for the drop-out rate of (34±3)%.
The twin concept is increasingly used for optimization tasks in the context of Industry 4.0 and digitization. The twin concept can also help small and medium-sized enterprises (SME) to exploit their energy flexibility potential and to achieve added value by appropriate energy marketing. At the same time, this use of flexibility helps to realize a climate-neutral energy supply with high shares of renewable energies. The digital twin reflects real production, power flows and market influences as a computer model, which makes it possible to simulate and optimize on-site interventions and interactions with the energy market without disturbing the real production processes. This paper describes the development of a generic model library that maps flexibility-relevant components and processes of SME, thus simplifying the creation of a digital twin. The paper also includes the development of an experimental twin consisting of SME hardware components and a PLC-based SCADA system. The experimental twin provides a laboratory environment in which the digital twin can be tested, further developed and demonstrated on a laboratory scale. Concrete implementations of such a digital twin and experimental twin are described as examples.
IoT networks are increasingly used as entry points for cyberattacks, as they often offer low security levels, may allow the control of physical systems, and potentially also open access to other IT networks and infrastructures. Existing intrusion detection systems (IDS) and intrusion prevention systems (IPS) mostly concentrate on legacy IT networks. Nowadays, they come with a high degree of complexity and adaptivity, including the use of artificial intelligence. It is only recently that these techniques have also been applied to IoT networks. In this paper, we present a survey of machine learning and deep learning methods for intrusion detection, and we investigate how previous works have used federated learning for IoT cybersecurity. To this end, we present an overview of IoT protocols and potential security risks. We also report the techniques and datasets used in the studied works, discuss the challenges of using ML, DL, and FL for IoT cybersecurity, and provide future insights.
Modeling of Random Variations in a Switched Capacitor Circuit based Physically Unclonable Function
(2020)
The Internet of Things (IoT) is expanding to a wide range of fields such as home automation, agriculture, environmental monitoring, industrial applications, and many more. Securing tens of billions of interconnected devices in the near future will be one of the biggest challenges. IoT devices are often constrained in terms of computational performance, area, and power, which demand lightweight security solutions. In this context, hardware-intrinsic security, particularly physically unclonable functions (PUFs), can provide lightweight identification and authentication for such devices. In this paper, random capacitor variations in a switched capacitor PUF circuit are used as a source of entropy to generate unique security keys. Furthermore, a mathematical model based on the ordinary least square method is developed to describe the relationship between random variations in capacitors and the resulting output voltages. The model is used to filter out systematic variations in circuit components to improve the quality of the extracted secrets.
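A toy version of the ordinary-least-squares step, with synthetic data standing in for the measured values: systematic effects shared by all devices land in the regression part and are removed, leaving the entropy-bearing per-device residual (the dimensions and noise levels are invented):

```python
import numpy as np

rng = np.random.default_rng(1)
n_dev, n_caps = 200, 8

# Synthetic stand-ins for measured data: random capacitor deviations per device
# plus a deterministic (systematic) mismatch that is identical for all devices.
random_dC = rng.normal(0.0, 0.01, (n_dev, n_caps))
systematic_dC = 0.05 * np.arange(n_caps)
sensitivity = rng.uniform(0.5, 1.5, n_caps)  # unknown capacitance-to-voltage gains
V = (random_dC + systematic_dC) @ sensitivity + rng.normal(0, 1e-3, n_dev)

# OLS fit: the systematic contribution is common across devices, so it is
# captured by the fitted model and subtracted, isolating the random part.
X = np.column_stack([np.ones(n_dev)])            # intercept-only design matrix
beta, *_ = np.linalg.lstsq(X, V, rcond=None)
residual = V - X @ beta                          # per-device unique (entropy) part
print(residual.std(), V.std())
```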
Grey-box modelling combines physical and data-driven models to benefit from their respective advantages. Neural ordinary differential equations (NODEs) offer new possibilities for grey-box modelling, as differential equations given by physical laws and neural networks can be combined in a single modelling framework. This simplifies the simulation and optimization and allows irregularly-sampled data to be considered during training and evaluation of the model. We demonstrate this approach using two levels of model complexity: first, a simple parallel resistor-capacitor circuit, and second, an equivalent circuit model of a lithium-ion battery cell, where the change of the voltage drop over the resistor-capacitor circuit, including its dependence on current and State-of-Charge, is implemented as a NODE. After training, both models show good agreement with analytical solutions and with experimental data, respectively.
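A sketch of the simpler of the two models, the parallel resistor-capacitor circuit, as a grey-box NODE. It assumes the torchdiffeq package and a toy input current, so it shows the modelling pattern rather than the authors' trained model:

```python
import torch
import torch.nn as nn
from torchdiffeq import odeint  # assumed dependency: pip install torchdiffeq

class GreyBoxRC(nn.Module):
    """Parallel RC circuit as a grey-box NODE: physics gives dV/dt = (I - V/R)/C,
    and a small network adds a learned correction term."""
    def __init__(self, R=1.0, C=2.0):
        super().__init__()
        self.R, self.C = R, C
        self.net = nn.Sequential(nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 1))

    def forward(self, t, v):
        i_in = torch.sin(t).reshape(1)          # toy input current I(t)
        physics = (i_in - v / self.R) / self.C  # known physical dynamics
        return physics + self.net(v)            # grey-box: physics + learned part

model = GreyBoxRC()
t = torch.linspace(0.0, 10.0, 100)              # irregular grids work as well
v = odeint(model, torch.zeros(1), t)            # differentiable trajectory
print(v.shape)                                  # (100, 1); train via MSE vs. data
```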
Disturbances of the cardiac conduction system causing reentry mechanisms above the atrioventricular (AV) node are induced by at least one accessory pathway with different conducting properties and refractory periods. This work aims to further develop the existing and continuously expanding Offenburg heart rhythm model to visualise the most common supraventricular reentry tachycardias and thus provide a better understanding of the cause of the respective reentry mechanism.
Patients with focal ventricular tachycardia are at risk of hemodynamic failure, and if no treatment is provided, the mortality rate can exceed 30%. Therefore, medical professionals must be adequately trained in the management of these conditions. To achieve the best treatment, the origin of the abnormality should be known, as well as the course of the disease. This study provides an opportunity to visualize various focal ventricular tachycardias using the Offenburg cardiac rhythm model.
Active participation of industrial enterprises in electricity markets - a generic modeling approach
(2021)
Industrial enterprises represent a significant portion of electricity consumers, with the potential of providing demand-side energy flexibility from their production processes and on-site energy assets. Methods are needed for the active and profitable participation of such enterprises in the electricity markets, especially with variable prices, where the energy flexibility available in their manufacturing, utility, and energy systems can be assessed and quantified. This paper presents a generic model library equipped with optimal control for energy flexibility purposes. The components in the model library represent the different technical units of an industrial enterprise on material, media, and energy flow levels with their process constraints. The paper also presents a case study simulation of a steel-powder manufacturing plant using the model library. Its energy flexibility was assessed when the plant procured its electrical energy at fixed and variable electricity prices. In the simulated case study, flexibility use at dynamic prices resulted in a 6% cost reduction compared to a fixed-price scenario, with battery storage and the manufacturing system making the largest contributions to flexibility.
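A toy version of the underlying optimal-control idea: scheduling a shiftable load against a variable price signal as a linear program (the price profile and limits are invented; the real model library adds material, media, and energy flow constraints):

```python
import numpy as np
from scipy.optimize import linprog

T = 24
price = 0.20 + 0.10 * np.sin(np.linspace(0, 2 * np.pi, T))  # EUR/kWh, toy profile
demand_total = 48.0                                         # kWh to produce today
p_max = 6.0                                                 # kW process limit

# Decision variables: hourly power draw x_t. Minimize sum(price_t * x_t)
# subject to sum(x_t) = demand_total and 0 <= x_t <= p_max.
res = linprog(c=price,
              A_eq=np.ones((1, T)), b_eq=[demand_total],
              bounds=[(0, p_max)] * T, method="highs")
print(res.x.round(1))  # the load is shifted into the cheap hours
print("cost:", round(res.fun, 2),
      "vs flat operation:", round(price.mean() * demand_total, 2))
```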
When shopping online, it is usually not possible to view products in the same way as when shopping offline. With augmented reality (AR), it is not only possible to view the product in detail, but also to view it at home in the real environment. Such an AR application sets stimuli that can affect users, their purchase decision, and their word-of-mouth intention. In this work, we assume that when viewing a product in AR, not only affective internal states but also cognitive perception processes have an impact on the purchase decision and word-of-mouth intention. While positive affective reactions have already been studied in the context of AR, this paper also describes inner cognitive perception processes, using the construct of AR authenticity. To test these assumptions, a study was conducted with 155 participants. The results show that both the purchase intention and the word-of-mouth intention are influenced by the constructs of positive affective reactions and AR authenticity.
A deep understanding of the cyclic plasticity behaviour of metallic materials is highly relevant both for the optimization of material properties and for the industrial design and manufacturing of components. Modern alloys such as duplex steels in particular exhibit a pronounced Bauschinger effect under load reversal, owing to their complex multi-phase microstructure and their tendency towards various precipitation reactions, which must be taken into account in industrial forming processes. The Bauschinger effect is largely caused by the development of back stresses resulting from the different plasticity behaviour of the austenitic and ferritic phases. Instrumented micro-indentation tests in selected ferrite and austenite grains have shown that austenitic microstructure constituents are characterized by a considerably earlier onset of yielding and a stronger reverse plastification during unloading. It was further demonstrated that precipitates formed during 475 °C embrittlement amplify this phase difference and thus result in a larger Bauschinger effect.
Photonics meet digital art
(2014)
The paper focuses on the work of an interdisciplinary project between photonics and digital art. The result is a poster collection dedicated to the International Year of Light 2015. In addition, an internet platform was created that presents the project. It can be accessed at http://www.magic-of-light.org/iyl2015/index.htm. From the idea to the final realization, milestones with tasks and steps are presented in the paper. As an interdisciplinary project, students from technological degree programs were involved as well as art program students. The paper highlights the 2015 anniversaries, Alhazen (1015), De Caus (1615), Fresnel (1815), Maxwell (1865), Einstein (1905), Penzias & Wilson and Kao (1965), and their milestone contributions in optics and photonics.
The targeted climate protection goals require that renewable energies become the main source of the energy supply in the long term. To achieve this ambitious goal, it makes sense to intelligently link conventional and renewable energy or, better still, sustainable individual processes.
The EBIPREP project is carried out by an interdisciplinary research group consisting of chemists, process engineers and bioprocess engineers, as well as physicists specializing in sensors and process control. The goal is to develop new solutions for the utilization of wood chips and of the wood press juice obtained during mechanical drying. In addition to wood chip gasification and catalytic cleaning of the wood gas, the focus is on using the wood press juice in biogas plants and for the biotechnological production of valuable substances, e.g. in enzyme production.
What do we do?
The EBIPREP project is carried out by an interdisciplinary research group composed of chemists, process engineers, bioprocess engineers, and physicists. The aim is to develop new solutions for the use of wood chips and of the wood press juice obtained through an innovative mechanical drying process. In addition to wood gasification and catalytic cleaning of the wood gas, the wood press juice is to be used in biogas plants and in biotechnological production processes for valuable substances. The wood chips are gasified thermally. Online sensors are being developed to evaluate the relevant parameters of the stabilized and optimized individual processes. Linking thermal and biotechnological conversion processes could help to considerably reduce the size of biogas reactors, which would consequently lead to a noticeable cost reduction.
Goals of the EBIPREP project
• combining the advantages of thermal and biological biomass conversion;
• developing a process for reducing pollutant emissions, using innovative sensors and catalytic treatment of the synthesis gases;
• sustainable production of biotechnologically valuable products;
• economic and ecological analysis of the overall process in comparison with the individual processes;
• use of process wastewater to generate renewable energy or biotechnological valuable substances;
• acquisition of new knowledge in the field of residue recovery and energy generation;
• opening up new fields of application for innovative sensors and ceramic foams for catalysts;
• reducing the cost of biogas production.
The planned overview talk will present the networked structures of the EBIPREP project and their central results.
Investigation of the Angle Dependency of Self-Calibration in Multiple-Input-Multiple-Output Radars
(2021)
Multiple-Input-Multiple-Output (MIMO) is a key technology for improving the angular (spatial) resolution of radars. In MIMO radars, amplitude and phase errors in the antenna elements lead to an increased sidelobe level and a misalignment of the mainlobe, degrading the performance of the antenna channels. Firstly, this paper presents an analysis of the effect of amplitude and phase errors on the angular spectrum using Monte-Carlo simulations. Then, the results are compared with performed measurements. Finally, error correction with a self-calibration method is proposed and its angle dependency is evaluated. It is shown that the values of the errors change with the incident angle, which makes an angle-dependent calibration necessary.
Estimation of Scattering and Transfer Parameters in Stratified Dispersive Tissues of the Human Torso
(2021)
The aim of this study is to understand the effect of the various layers of biological tissue on electromagnetic radiation in a certain frequency range. Understanding these effects could prove crucial for the development of dynamic imaging systems under operating environments during catheter ablation in the heart. As the catheter passes through arterial paths into the region of interest inside the heart via the aorta, a three-dimensional localization of the catheter is required. In this paper, the detection of the catheter using electromagnetic waves is studied. To this end, an appropriate model of the layers of the human torso is defined and simulated both without and with an inserted electrode.
Duplicate detection, search, and consolidation for customer and business partner data, so-called “identity resolution”, is a prerequisite for successful customer relationship management and customer experience management, but also for risk management to minimize fraud risks and comply with regulatory requirements, and for many other use cases. However, these systems are highly complex and must be individually adapted to customer-specific requirements. The use of learning-based methods offers great potential for automated adaptation. In this contribution, we present learning-based methods, practicable for an SME, for the automatic configuration of business rules in duplicate detection systems. Capabilities were developed that allow domain users to adapt and configure the match system to individual business rules (e.g., relocation detection, blocklist matching) in an example-driven way. The developed methods were evaluated and integrated into a prototype solution. We were able to show that our machine learning method improved the business rules created by a domain expert for the duplicate detection system “identity”, while also reducing the time required to do so.
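A sketch of the example-driven idea: learning a duplicate-decision rule from a handful of labelled record pairs with a shallow decision tree over simple similarity features (the features, records, and rule format are illustrative; the actual “identity” rule format is not reproduced):

```python
from difflib import SequenceMatcher
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

def pair_features(a, b):
    """Simple similarity features for a record pair (name, city, birth year)."""
    return [SequenceMatcher(None, a["name"], b["name"]).ratio(),
            float(a["city"] == b["city"]),
            float(abs(a["year"] - b["year"]) <= 1)]

# Hand-labelled example pairs (match=1 / non-match=0), as a domain user would provide.
pairs = [({"name": "Anna Maier", "city": "Offenburg", "year": 1980},
          {"name": "Anna Meier", "city": "Offenburg", "year": 1980}, 1),
         ({"name": "Anna Maier", "city": "Offenburg", "year": 1980},
          {"name": "Jan Weber", "city": "Kehl", "year": 1975}, 0)] * 20

X = np.array([pair_features(a, b) for a, b, _ in pairs])
y = np.array([label for _, _, label in pairs])
rule = DecisionTreeClassifier(max_depth=2).fit(X, y)
# The fitted tree can be read back as an explicit, human-checkable match rule.
print(export_text(rule, feature_names=["name_sim", "same_city", "year_close"]))
```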
With major intellectual properties there is a long tradition of cross-media value chains, usually starting with books and comics, then moving to film and TV, and finally reaching interactive media like video games. In recent years the situation has changed: (1) smaller productions start to establish cross-media value chains; (2) there is a trend from sequential towards parallel content production. In this work we describe how the production of a historic documentary takes a cross-media approach right from the start. We analyze how this impacts the content creation pipelines with respect to story, audience, and realization. The focus of the case study is the impact on the production of a documentary game. In a second step we reflect on the experiences gained so far and derive recommendations for future small-scale cross-media productions.
Towards a gamification of industrial production: a comparative study in sheltered work environments
(2015)
Using video game elements to improve user experience and user engagement in non-game applications is called "gamification". This method of enriching human-computer interaction has been applied successfully in education, health and general business processes. However, it has not been established in industrial production so far.
After discussing the requirements specific for the production domain we present two workplaces augmented with gamification. Both implementations are based on a common framework for context-aware assistive systems but exemplify different approaches: the visualization of work performance is complex in System 1 and simple in System 2.
Based on two studies in sheltered work environments with impaired workers, we analyze and compare the systems' effects on work and on workers. We show that gamification leads to a speed-accuracy trade-off if no quality-related feedback is provided. Another finding is that there is a highly significant rise in acceptance if a straightforward visualization approach for gamification is used.
With projectors and depth cameras getting cheaper, assistive systems in industrial manufacturing are becoming increasingly ubiquitous. As these systems are able to continuously provide feedback using in-situ projection, they are perfectly suited for supporting impaired workers in assembling products. However, so far little research has been conducted to understand the effects of projected instructions on impaired workers. In this paper, we identify common visualizations used by assistive systems for impaired workers and introduce a simple contour visualization. Through a user study with 64 impaired participants we compare the different visualizations to a control group using no visual feedback in a real world assembly scenario, i.e. assembling a clamp. Furthermore, we introduce a simplified version of the NASA-TLX questionnaire designed for impaired participants. The results reveal that the contour visualization is significantly better in perceived mental load and perceived performance of the participants. Further, participants made fewer errors and were able to assemble the clamp faster using the contour visualization compared to a video visualization, a pictorial visualization and a control group using no visual feedback.
Design approaches for the gamification of production environments: a study focusing on acceptance
(2015)
Gamification is an ever more popular method to increase motivation and user experience in real-world settings. It is widely used in the areas of marketing, health, and education. However, in production environments, it is a new concept. To be accepted in the industrial domain, it has to be seamlessly integrated into the regular work processes.
In this work we make the following contributions to the field of gamification in production: (1) we analyze the state of the art and introduce domain-specific requirements; (2) we present two implementations gamifying production based on alternative design approaches; (3) these are evaluated in a sheltered work organization. The comparative study focuses on acceptance, motivation, and perceived happiness.
The results reveal that a pyramid design showing each work process as a step on the way towards a cup at the top is strongly preferred to a more abstract approach where the processes are represented by a single circle and two bars.
In this work we provide an overview of gamification, i.e. the application of methods from game design to enrich non-gaming processes. The contribution is divided into six subsections: an introduction focusing on the progression of gamification through the hype cycle in recent years (1), a brief introduction to gamification mechanics (2), and an overview of the state of the art in established areas (3). The focus is a discussion of more recent attempts at gamification in service and production (4). We also discuss the ethical implications (5) and the future perspectives (6) of gamified business processes. Gamification has been successfully applied in the domains of education (serious games) and health (exergames) and is spreading to other areas. In recent years there have been various attempts to “gamify” business processes. While the first efforts date back as far as the collection of miles in frequent flyer programs, we portray some of the more recent and comprehensive software-based approaches in the service industry, e.g. the gamification of processes in sales and marketing. We discuss their accomplishments as well as their social and ethical implications. Finally, a very recent approach is presented: the application of gamification in the domain of industrial production. We discuss the special requirements in this domain and the effects on the business level and on the users. We conclude with a prognosis on the future development of gamification.
Do you know that for each banana bunch the complete plant must be cut as well? In Brazil alone, 440 million trees are planted annually. With an average weight of 30 kg per banana plant, this amounts to about 13.5 million tons of banana residues per year. Although some projects exist that use these residues for the production of valuable products (e.g. fibers for textile and paper production), most of this organic waste material is unused and left to compost on the farmland.
The basic idea of this project is to evaluate this organic waste material for conversion into a renewable and CO2-neutral fuel. To this end, the different parts of the banana plant (heart, leaves, and pseudo stem) were analyzed regarding their biogas potential (specific biogas yield and biogas production kinetics). In further studies, the effect of mechanical and enzymatic pretreatment of the different parts of the plant was investigated. This examination could then be the basis for an energetic use of this organic residue.
The biogas batch experiments were performed according to the German guideline VDI 4630 in 2 L batch reactors at 37 °C. As biogas substrates, the heart, the leaves, and the pseudo stem of the banana plant residue, with and without enzymatic/mechanical pretreatment, were used.
The different parts of the banana plant yield a specific biogas production in the range of 260-470 norm liters per kg of organic dry mass.
To determine the influence of the mechanical pretreatment (particle size 1-15 mm) on the biogas production kinetics, the kinetic constants were defined and calculated. Reducing the particle size leads to improved biogas production kinetics. Further experiments will demonstrate whether the results from the batch experiments can be transferred to a continuously fed biogas reactor. The experiments on enzymatic pretreatment are still ongoing.
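The kinetic constants can be obtained by fitting a first-order model B(t) = B0 * (1 - exp(-k*t)) to the cumulative yield curves, a common assumption for such batch tests (the data below are invented for illustration):

```python
import numpy as np
from scipy.optimize import curve_fit

def first_order(t, B0, k):
    """Cumulative specific biogas yield under first-order kinetics."""
    return B0 * (1.0 - np.exp(-k * t))

# Illustrative cumulative yield data [NL per kg oDM] over incubation days.
t = np.array([1, 3, 5, 7, 10, 14, 21, 28], dtype=float)
y = np.array([60, 150, 220, 270, 330, 380, 420, 435], dtype=float)

(B0, k), _ = curve_fit(first_order, t, y, p0=(450.0, 0.1))
print(f"ultimate yield B0 = {B0:.0f} NL/kg oDM, rate constant k = {k:.3f} 1/d")
```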
In a semi-autonomic cloud auditing architecture, we wove in privacy-enhancing mechanisms [15] by applying the public-key version of the somewhat homomorphic encryption (SHE) scheme from [4]. It turns out that the performance of the SHE scheme can be significantly improved by carefully deriving the relevant crypto parameters from the concrete cloud auditing use cases for which the scheme serves as a privacy-enhancing approach. We provide a generic algorithm for finding good SHE parameters with respect to a given use case scenario by analyzing and taking into consideration the security, correctness, and performance of the scheme. Also, to show the relevance of our proposed algorithm, we apply it to two predominant cloud auditing use cases.
Covert and side channels, as well as techniques to establish them in cloud computing, have been a focus of research for quite some time. However, not many concrete mitigation methods have been developed, and even fewer have been adapted and concretely implemented by cloud providers. Thus, we recently conceptually proposed C3-Sched, a CPU-scheduling-based approach to mitigate L2 cache covert channels. Instead of flushing the cache on every context switch, we schedule trusted virtual machines to create noise which prevents potential covert channels. Additionally, our approach aims at preserving performance by utilizing existing instead of artificial workload, while reducing covert-channel-related cache flushes to cases where not enough noise has been achieved. In this work we evaluate the cache covert-channel mitigation and the performance impact of our integration of C3-Sched into the XEN credit scheduler. Moreover, we compare it to naive solutions and more competitive approaches.
Electronic door signs for displaying information have become widespread, particularly in public buildings. These electronic door signs range from tablet-based signs to PC-based signs with an external display. Most systems are operated at 230 V. With a large number of door signs in public buildings, this can lead to significant energy consumption. This paper presents the development of an energy-self-sufficient door sign based on an e-paper display. The door sign can be configured via a smartphone app and an NFC interface. Particular attention is paid to the low-power hardware design of the electronics and to energy aspects.
Environmentally friendly implementation of new technologies and eco-innovative solutions often faces additional secondary ecological problems. On the other hand, existing biological systems show a lower environmental impact than human-made products or technologies. The paper defines a research agenda for the identification of the underlying eco-inventive principles used in natural systems created through evolution. Finally, the paper proposes a comprehensive method for capturing eco-innovation principles in biological systems, in addition and complementary to the existing biomimetic methods and the TRIZ methodology, and illustrates it with an example.
Cross-industry innovation is commonly understood as the identification of analogies and the interdisciplinary transfer or copying of technologies, processes, technical solutions, working principles, or models between industrial sectors. In general, creative thinking in analogies belongs to the efficient ideation techniques. However, engineering graduates and specialists frequently lack the skills to think across industry boundaries systematically. To overcome this drawback, an easy-to-use method based on five analogies has been evaluated through its application by students and engineers in numerous experiments and industrial case studies. The proposed analogies help to identify and resolve engineering contradictions and to apply approaches of the Theory of Inventive Problem Solving (TRIZ) and biomimetics. The paper analyses the outcomes of the systematized analogies-based ideation and outlines that its performance grows continuously with engineering experience. It defines metrics for ideation efficiency and an ideation performance function.