Photonics meet digital art
(2014)
The paper focuses on the work of an interdisciplinary project between photonics and digital art. The result is a poster collection dedicated to the International Year of Light 2015. In addition, an internet platform was created that presents the project; it can be accessed at http://www.magic-of-light.org/iyl2015/index.htm. The paper presents the milestones, tasks and steps from the initial idea to the final realization. As an interdisciplinary project, it involved students from technological degree programs as well as art students. The 2015 anniversaries of Alhazen (1015), De Caus (1615), Fresnel (1815), Maxwell (1865), Einstein (1905), and Penzias, Wilson and Kao (1965) and their milestone contributions to optics and photonics are highlighted.
In this work, we evaluate two different image clustering objectives, k-means clustering and correlation clustering, in the context of feature space embeddings induced by the Triplet Loss. Specifically, we train a convolutional neural network to learn discriminative features by optimizing two popular versions of the Triplet Loss in order to study their clustering properties under the assumption of noisy labels. Additionally, we propose a new, simple Triplet Loss formulation, which shows desirable properties with respect to formal clustering objectives and outperforms the existing methods. We evaluate all three Triplet Loss formulations for k-means and correlation clustering on the CIFAR-10 image classification dataset.
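A minimal sketch of this general pipeline, not the paper's exact formulation: the standard margin-based Triplet Loss as shipped with PyTorch, followed by k-means clustering of the learned embeddings. The network size, margin, and random stand-in batches are illustrative assumptions.

```python
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

class EmbeddingNet(nn.Module):
    """Tiny CNN mapping 32x32 RGB images (e.g. CIFAR-10) to a 64-d embedding."""
    def __init__(self, dim=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, dim)

    def forward(self, x):
        z = self.head(self.features(x).flatten(1))
        return nn.functional.normalize(z, dim=1)  # unit-norm embeddings

net = EmbeddingNet()
criterion = nn.TripletMarginLoss(margin=0.2)
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)

# One illustrative training step on random anchor/positive/negative batches.
anchor, positive, negative = (torch.randn(16, 3, 32, 32) for _ in range(3))
loss = criterion(net(anchor), net(positive), net(negative))
optimizer.zero_grad(); loss.backward(); optimizer.step()

# Cluster the embedding space with k-means (k = 10 classes for CIFAR-10).
with torch.no_grad():
    embeddings = net(torch.randn(256, 3, 32, 32)).numpy()
labels = KMeans(n_clusters=10, n_init=10).fit_predict(embeddings)
```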
Investigation of the Angle Dependency of Self-Calibration in Multiple-Input-Multiple-Output Radars
(2021)
Multiple-Input-Multiple-Output (MIMO) is a key technology for improving the angular (spatial) resolution of radars. In MIMO radars, amplitude and phase errors in the antenna elements lead to an increased sidelobe level and a misalignment of the mainlobe, degrading the performance of the antenna channels. First, this paper presents an analysis of the effect of amplitude and phase errors on the angular spectrum using Monte-Carlo simulations. The results are then compared with measurements. Finally, an error correction based on a self-calibration method is proposed and its angle dependency is evaluated. It is shown that the error values change with the incident angle, so an angle-dependent calibration is required.
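A hedged numpy sketch of the kind of Monte-Carlo study described above: a uniform linear array whose per-channel gains and phases are randomly perturbed, with the resulting angular spectrum compared against the ideal one. Element count, error magnitudes and element spacing are illustrative assumptions, not the paper's setup.

```python
import numpy as np

N, d = 8, 0.5                      # 8 virtual channels, half-wavelength spacing
theta = np.radians(np.linspace(-90, 90, 721))
n = np.arange(N)

def array_factor(amp, phase):
    """Angular spectrum of the array for given per-channel amplitude/phase."""
    w = amp * np.exp(1j * phase)                      # channel weights
    steering = np.exp(2j * np.pi * d * np.outer(np.sin(theta), n))
    return 20 * np.log10(np.abs(steering @ w) / N + 1e-12)

ideal = array_factor(np.ones(N), np.zeros(N))

rng = np.random.default_rng(0)
trials = []
for _ in range(1000):                                  # Monte-Carlo runs
    amp = 1 + rng.normal(0, 0.1, N)                    # ~10 % amplitude error
    phase = rng.normal(0, np.radians(5), N)            # ~5 deg phase error
    trials.append(array_factor(amp, phase))

degradation = np.max(np.array(trials) - ideal, axis=1)
print("mean worst-case spectrum degradation: %.1f dB" % degradation.mean())
```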
Estimation of Scattering and Transfer Parameters in Stratified Dispersive Tissues of the Human Torso
(2021)
The aim of this study is to understand the effect of the various layers of biological tissue on electromagnetic radiation in a certain frequency range. Understanding these effects could prove crucial for the development of dynamic imaging systems operating during catheter ablation in the heart. As the catheter passes through arterial paths into the region of interest inside the heart via the aorta, a three-dimensional localization of the catheter is required. In this paper, the detection of the catheter by means of electromagnetic waves is studied. Therefore, an appropriate model of the layers of the human torso is defined and simulated both without and with an inserted electrode.
With major intellectual properties there is a long tradition of cross-media value chains, usually starting with books and comics, then moving on to film and TV, and finally reaching interactive media like video games. In recent years the situation has changed: (1) smaller productions have started to establish cross-media value chains; (2) there is a trend from sequential towards parallel content production. In this work we describe how the production of a historic documentary takes a cross-media approach right from the start. We analyze how this impacts the content creation pipelines with respect to story, audience and realization. The focus of the case study is the impact on the production of a documentary game. In a second step we reflect on the experiences gained so far and derive recommendations for future small-scale cross-media productions.
Towards a gamification of industrial production: a comparative study in sheltered work environments
(2015)
Using video game elements to improve user experience and user engagement in non-game applications is called "gamification". This method of enriching human-computer interaction has been applied successfully in education, health and general business processes. However, it has not been established in industrial production so far.
After discussing the requirements specific for the production domain we present two workplaces augmented with gamification. Both implementations are based on a common framework for context-aware assistive systems but exemplify different approaches: the visualization of work performance is complex in System 1 and simple in System 2.
Based on two studies in sheltered work environments with impaired workers, we analyze and compare the systems' effects on the work and the workers. We show that gamification leads to a speed-accuracy trade-off if no quality-related feedback is provided. Another finding is a highly significant rise in acceptance when a straightforward visualization approach for gamification is used.
With projectors and depth cameras getting cheaper, assistive systems in industrial manufacturing are becoming increasingly ubiquitous. As these systems are able to continuously provide feedback using in-situ projection, they are perfectly suited for supporting impaired workers in assembling products. However, so far little research has been conducted to understand the effects of projected instructions on impaired workers. In this paper, we identify common visualizations used by assistive systems for impaired workers and introduce a simple contour visualization. Through a user study with 64 impaired participants, we compare the different visualizations to a control group using no visual feedback in a real-world assembly scenario, i.e. assembling a clamp. Furthermore, we introduce a simplified version of the NASA-TLX questionnaire designed for impaired participants. The results reveal that the contour visualization is significantly better in terms of the participants' perceived mental load and perceived performance. Further, participants made fewer errors and were able to assemble the clamp faster using the contour visualization than with a video visualization, a pictorial visualization, or no visual feedback.
Design approaches for the gamification of production environments: a study focusing on acceptance
(2015)
Gamification is an ever more popular method to increase motivation and user experience in real-world settings. It is widely used in the areas of marketing, health and education. In production environments, however, it is a new concept. To be accepted in the industrial domain, it has to be seamlessly integrated into the regular work processes.
In this work we make the following contributions to the field of gamification in production: (1) we analyze the state of the art and introduce domain-specific requirements; (2) we present two implementations gamifying production based on alternative design approaches; (3) these are evaluated in a sheltered work organization. The comparative study focuses on acceptance, motivation and perceived happiness.
The results reveal that a pyramid design showing each work process as a step on the way towards a cup at the top is strongly preferred to a more abstract approach where the processes are represented by a single circle and two bars.
In this work we provide an overview of gamification, i.e. the application of methods from game design to enrich non-gaming processes. The contribution is divided into six subsections: an introduction focusing on the progression of gamification through the hype cycle in recent years (1), a brief introduction to gamification mechanics (2), an overview of the state of the art in established areas (3), a discussion of more recent attempts at gamification in service and production (4), the ethical implications (5), and the future perspectives (6) of gamified business processes. Gamification has been successfully applied in the domains of education (serious games) and health (exergames) and is spreading to other areas. In recent years there have been various attempts to "gamify" business processes. While the first efforts date back as far as the collection of miles in frequent-flyer programs, we portray some of the more recent and comprehensive software-based approaches in the service industry, e.g. the gamification of processes in sales and marketing. We discuss their accomplishments as well as their social and ethical implications. Finally, a very recent approach is presented: the application of gamification in the domain of industrial production. We discuss the special requirements in this domain and the effects on the business level and on the users. We conclude with a prognosis on the future development of gamification.
In a semi-autonomic cloud auditing architecture we weaved in privacy-enhancing mechanisms [15] by applying the public-key version of the somewhat homomorphic encryption (SHE) scheme from [4]. It turns out that the performance of the SHE scheme can be significantly improved by carefully deriving the relevant crypto parameters from the concrete cloud auditing use cases for which the scheme serves as a privacy-enhancing approach. We provide a generic algorithm for finding good SHE parameters with respect to a given use case scenario by analyzing and taking into consideration the security, correctness and performance of the scheme. To show the relevance of the proposed algorithm, we apply it to two predominant cloud auditing use cases.
Covert and side channels, as well as techniques to establish them in cloud computing, have been in the focus of research for quite some time. However, not many concrete mitigation methods have been developed and even fewer have been adapted and concretely implemented by cloud providers. We therefore recently proposed C3-Sched, a CPU-scheduling-based approach to mitigate L2 cache covert channels. Instead of flushing the cache on every context switch, we schedule trusted virtual machines to create noise which prevents potential covert channels. Additionally, our approach aims at preserving performance by utilizing existing instead of artificial workload, while reducing covert-channel-related cache flushes to cases where not enough noise has been achieved. In this work we evaluate the covert-channel mitigation and the performance impact of our integration of C3-Sched into the XEN credit scheduler. Moreover, we compare it to naive solutions and more competitive approaches.
Environmentally friendly implementation of new technologies and eco-innovative solutions often faces additional secondary ecological problems. On the other hand, existing biological systems show a lower environmental impact than human-made products or technologies. The paper defines a research agenda for identifying the underlying eco-inventive principles used in natural systems created through evolution. Finally, the paper proposes a comprehensive method for capturing eco-innovation principles in biological systems, complementary to the existing biomimetic methods and the TRIZ methodology, and illustrates it with an example.
Cross-industry innovation is commonly understood as the identification of analogies and the interdisciplinary transfer or copying of technologies, processes, technical solutions, working principles or models between industrial sectors. In general, creative thinking in analogies belongs to the efficient ideation techniques. However, engineering graduates and specialists frequently lack the skills to think across industry boundaries systematically. To overcome this drawback, an easy-to-use method based on five analogies has been evaluated through applications by students and engineers in numerous experiments and industrial case studies. The proposed analogies help to identify and resolve engineering contradictions and to apply approaches of the Theory of Inventive Problem Solving (TRIZ) and biomimetics. The paper analyses the outcomes of the systematized analogies-based ideation and shows that its performance grows continuously with engineering experience. It defines metrics for ideation efficiency and an ideation performance function.
This book constitutes the refereed proceedings of the 20th International TRIZ Future Conference, TFC 2020, held online at the University Cluj-Napoca, Romania, in October 2020 and sponsored by the International Federation for Information Processing.
The 34 chapters were carefully peer-reviewed and selected from 91 conference submissions. They are organized in the following thematic sections: computing TRIZ; education and pedagogy; sustainable development; tools and techniques of TRIZ for enhancing design; TRIZ and system engineering; TRIZ and complexity; and cross-fertilization of TRIZ for innovation management.
Sustainable design of equipment for process intensification requires a comprehensive and correct identification of the relevant stakeholder requirements, design problems and tasks crucial for innovation success. Combining the principles of Quality Function Deployment with Importance-Satisfaction Analysis and a Contradiction Analysis of requirements makes it possible to define a proper process innovation strategy more reliably and to develop an optimal process intensification technology with fewer secondary engineering and ecological problems.
The authentication method for electronic devices based on the individual forms of the correlograms of their internal electric noise is well known. Specific physical differences in the components (for example, caused by variations in production quality) cause specific electrical signals, i.e. electric noise, in the electronic device. This information can be obtained, and the specific differences of the individual devices identified, using an embedded analog-to-digital converter (ADC). These investigations confirm the possibility of identifying and authenticating electronic devices using bit templates calculated from the sequence of values of the normalized autocorrelation function of the noise. Experiments have been performed using personal computers. The probability of correct identification and authentication increases with the noise recording duration. In these experiments, an accuracy of 98.1% was achieved with a one-second-long noise recording for the set of investigated computers.
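A simplified numpy sketch of the described idea: derive a bit template from the normalized autocorrelation of a recorded noise sequence and compare two templates via their Hamming distance. The sign-based quantization, lag count, threshold-free comparison and the synthetic device "signatures" are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def bit_template(noise, lags=128):
    """Quantize the normalized autocorrelation at `lags` lags into bits."""
    x = noise - noise.mean()
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]
    acf = acf[1:lags + 1] / acf[0]          # normalized ACF, lag 0 excluded
    return (acf > 0).astype(np.uint8)       # sign-based bit template

def hamming(a, b):
    """Number of differing bits between two templates."""
    return int(np.count_nonzero(a != b))

def record(signature, rng, n=48000):
    """Simulate a 1 s noise recording: white noise shaped by a device filter."""
    return np.convolve(rng.normal(size=n), signature, mode="same")

rng = np.random.default_rng(1)
sig_a = rng.normal(size=32)     # stands in for device A's physical signature
sig_b = rng.normal(size=32)     # device B's signature

t_a1 = bit_template(record(sig_a, rng))
t_a2 = bit_template(record(sig_a, rng))
t_b = bit_template(record(sig_b, rng))
# Re-recordings of the same device should yield a smaller Hamming distance.
print(hamming(t_a1, t_a2), hamming(t_a1, t_b))
```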
The development of Internet of Things (IoT) embedded devices is proliferating, especially in smart home automation systems. However, these devices impose overhead on the IoT network. The Internet Engineering Task Force (IETF) has therefore introduced the IPv6 Low-Power Wireless Personal Area Network (6LoWPAN) to address this constraint. 6LoWPAN is an Internet Protocol (IP) based communication standard that allows each device to connect to the Internet directly; as a result, the power consumption is reduced. However, the limited data transmission frame size of the IPv6 Routing Protocol for Low-power and Lossy Networks (RPL) causes routing overhead and consequently degrades the performance of the network in terms of Quality of Service (QoS), especially in large networks. Therefore, HRPL was developed as an enhancement of RPL to minimize the redundant retransmissions that cause this routing overhead. We introduced the T-Cut Off Delay to set the limit of the delay, and the H field to respond to actions taken within the T-Cut Off Delay. This paper presents a comparative performance assessment of HRPL between simulation and a real-world scenario (the 6LoWPAN Smart Home System (6LoSH) testbed) to validate the HRPL functionalities. Our results show that HRPL successfully reduces the routing overhead when implemented in 6LoSH: the observed Control Traffic Overhead (CTO) packet difference between the experiments is 7.1%, and the convergence time difference is 9.3%. Further research is recommended for these metrics: latency, Packet Delivery Ratio (PDR), and throughput.
During the day-to-day operation of localization systems in mines, the technical staff tends to rearrange radio equipment incorrectly: positions of devices may not be accurately marked on a map, or the marked positions may not correspond to reality. This can lead to positioning inaccuracies and errors in the operation of the localization system. This paper presents two Bayesian algorithms for automatically correcting the positions of the equipment on the map using trajectories restored by inertial measurement units mounted on mobile objects such as pedestrians and vehicles. As a basis, a predefined map of the mine, represented as an undirected weighted graph, was used as input. The algorithms were implemented following the Simultaneous Localization and Mapping (SLAM) approach. The results prove that both methods are capable of detecting misplaced access points and providing the corresponding corrections. The discrete Bayesian filter outperforms the unscented Kalman filter, which, however, requires more computational power.
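A textbook-style sketch of the discrete Bayes filter idea underlying one of the two algorithms: a belief over candidate positions (graph nodes) is predicted with a transition model and updated with an observation likelihood. The transition matrix and likelihood values here are toy stand-ins, not the paper's mine model.

```python
import numpy as np

def bayes_filter_step(belief, transition, likelihood):
    """One predict/update cycle of a discrete Bayes filter."""
    predicted = transition.T @ belief          # motion update over graph nodes
    posterior = predicted * likelihood         # weight by observation model
    return posterior / posterior.sum()         # normalize to a distribution

n = 5                                          # candidate access-point positions
belief = np.full(n, 1.0 / n)                   # uniform prior
transition = np.full((n, n), 0.05) + 0.75 * np.eye(n)  # mostly stays in place
transition /= transition.sum(axis=1, keepdims=True)

# Repeated observations favoring node 2 concentrate the belief there.
for z in [np.array([0.1, 0.1, 0.6, 0.1, 0.1])] * 3:
    belief = bayes_filter_step(belief, transition, z)
print("most probable position:", int(belief.argmax()))
```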
RETIS – Real-Time Sensitive Wireless Communication Solution for Industrial Control Applications
(2020)
Ultra-Reliable Low-Latency Communication (URLLC) has always been a vital component of many industrial applications. The paper proposes a new wireless URLLC solution called RETIS, which is suitable for factory automation and fast process control applications, where low latency, low jitter, and high data exchange rates are mandatory. In the paper, we describe the communication protocol as well as the hardware structure of the network nodes implementing the required functionality. Several techniques enabling fast, reliable wireless transmissions are used: a short Transmission Time Interval (TTI), Time-Division Multiple Access (TDMA), MIMO, optional duplicated data transfer, Forward Error Correction (FEC), and an ACK mechanism. Preliminary tests show that a reliable end-to-end latency down to 350 μs and a packet exchange rate up to 4 kHz can be reached (using quadruple MIMO and a standard IEEE 802.15.4 PHY at 250 kbit/s).
The number of use cases for autonomous vehicles is increasing day by day, especially in commercial applications. One important application of autonomous vehicles is parcel delivery, where autonomous cars can massively reduce delivery effort and time by actively supporting the courier. One key component is, of course, the autonomous vehicle itself. Beyond that, a flexible and secure communication architecture is a crucial component impacting the overall performance of such a system, since it must allow continuous interaction between the vehicle and the other components of the system. The communication system must provide a reliable and secure architecture that is still flexible enough to remain practical and to address several use cases. In this paper, a robust communication architecture for such autonomous fleet-based systems is proposed. The architecture provides reliable communication between the different system entities while keeping this communication secure. It uses different technologies such as Bluetooth Low Energy (BLE), cellular networks and Low Power Wide Area Networks (LPWAN) to achieve its goals.
This paper presents a novel low-jitter interface between a low-cost integrated IEEE 802.11 chip and an FPGA. It is designed to be part of system hardware for ultra-precise synchronization between wireless stations. On the physical level, it uses the Wi-Fi chip's coexistence signal lines and UART frame encoding. On this basis, we propose an efficient communication protocol providing precise timestamping of incoming frames and internal diagnostic mechanisms for detecting communication faults. At the same time, it is simple enough to be implemented both in a low-cost FPGA and in commodity IEEE 802.11 chip firmware. The results of computer simulations show that the developed FPGA implementation of the proposed protocol can precisely timestamp incoming frames and detect most communication errors even under high interference. The probability of undetected errors was investigated. The results of this analysis are significant for the development of novel wireless synchronization hardware.
With the increasing degree of interconnectivity in industrial factories, security is becoming the most important stepping stone towards a wide adoption of the Industrial Internet of Things (IIoT). This paper summarizes the most important aspects of a keynote at the DESSERT 2020 conference. It highlights ongoing and open research activities on the different levels, from novel cryptographic algorithms over security protocol integration and testing to security architectures for the full lifetime of devices and systems, and includes an overview of the research activities at the authors' institute.
Analysis of Amplitude and Phase Errors in Digital-Beamforming Radars for Automotive Applications
(2020)
Fundamentally, automotive radar sensors with Digital-Beamforming (DBF) use several transmitter and receiver antennas to measure the direction of the target. However, hardware imperfections, tolerances in the feeding lines of the antennas, coupling effects as well as temperature changes and ageing will cause amplitude and phase errors. These errors can lead to misinterpretation of the data and result in hazardous actions of the autonomous system. First, the impact of amplitude and phase errors on angular estimation is discussed and analyzed by simulations. The results are compared with the measured errors of a real radar sensor. Further, a calibration method is implemented and evaluated by measurements.
The Metering Bus, also known as M-Bus, is a European standard (EN 13757-3) for reading out metering devices such as electricity, water, gas, or heat meters. Although real-life M-Bus networks can reach a significant size and complexity, only very simple protocol analyzers are available to observe and maintain such networks. In order to provide developers and installers with the ability to analyze the real bus signals easily, a web-based monitoring tool for the M-Bus has been designed and implemented. Combined with a physical bus interface, it allows measuring and recording the bus signals. To this end, a circuit was first developed that transforms the voltage- and current-modulated M-Bus signals into a voltage signal that can be read by a standard ADC and processed by an MCU. The bus signals and packets are displayed by a web server, which analyzes and classifies the frame fragments. As an additional feature, an oscilloscope functionality is included to visualize the physical signal on the bus. This paper describes the development of the read-out circuit for the wired M-Bus and the data recovery.
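A small sketch of the kind of frame classification such a monitoring server performs, here for the M-Bus long/control frame format of the underlying FT 1.2 framing (start bytes 0x68, duplicated length field, checksum over the C/A/CI fields and user data, stop byte 0x16). The example bytes are illustrative; real captured traffic needs more robust fragment handling.

```python
def check_long_frame(frame: bytes) -> bool:
    """Return True if `frame` is a structurally valid M-Bus long/control frame."""
    if len(frame) < 9 or frame[0] != 0x68 or frame[3] != 0x68:
        return False
    if frame[1] != frame[2]:                 # length field is sent twice
        return False
    length = frame[1]                        # counts C, A, CI + user data
    if len(frame) != length + 6:
        return False
    if frame[-1] != 0x16:                    # stop byte
        return False
    checksum = sum(frame[4:4 + length]) & 0xFF
    return checksum == frame[-2]

# Example: control frame with no user data (illustrative byte values).
frame = bytes([0x68, 0x03, 0x03, 0x68,
               0x53, 0x01, 0x51,
               (0x53 + 0x01 + 0x51) & 0xFF, 0x16])
print(check_long_frame(frame))  # True
```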
Partial substitution of Al atoms with Sc in wurtzite AlN crystals increases the piezoelectric constants. This leads to an increased electromechanical coupling, which is required for high bandwidths in piezo-acoustic filters. The crystal bonds in Al1-xScxN (AlScN) soften as a function of the Sc atomic percentage x, reducing the phase velocity in the film. Combining high-Sc-content AlScN films with high-velocity substrates favors higher-order guided surface acoustic wave (SAW) modes [1]. This study investigates higher-order SAW modes in epitaxial AlScN on sapphire (Al2O3). Their dispersion for Pt-metallized epitaxial AlScN films on Al2O3 was computed for two different propagation directions. The computed phase velocity dispersion branches were verified experimentally by characterizing fabricated SAW resonators. The results indicated four wave modes for the propagation direction (0°, 0°, 0°), featuring 3D-polarized displacement fields. The sensitivity of the wave modes to the elastic constants of AlScN was investigated. It was shown that, due to the 3D polarization of the waves, all elastic constants have an influence on the phase velocity and can be measured using suitable weighting functions in material constant extraction procedures.
Laser ultrasound was used to determine dispersion curves of surface acoustic waves on a Si (001) surface covered by AlScN films with a scandium content between 0 and 41%. By including off-symmetry directions for wavevectors, all five independent elastic constants of the film were extracted from the measurements. Results for their dependence on the Sc content are presented and compared to corresponding data in the literature, obtained by alternative experimental methods or by ab-initio calculations.
Due to the rapidly increasing storage consumption worldwide, as well as the expectation of continuous availability of information, the complexity of administration in today's data centers is constantly growing. Integrated techniques for monitoring hard disks can increase the reliability of storage systems. However, these techniques often lack intelligent data analysis to perform predictive maintenance. To solve this problem, machine learning algorithms can be used to detect potential failures in advance and prevent them. In this paper, an unsupervised model for predicting hard disk failures based on Isolation Forest is proposed. A method is presented that can deal with highly imbalanced datasets, as an experiment on the Backblaze benchmark dataset demonstrates.
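A minimal sklearn sketch of the unsupervised approach described above: an Isolation Forest trained on (mostly healthy) attribute vectors, flagging strongly deviating drives as potential failures. The synthetic features, contamination rate and dataset sizes are illustrative assumptions standing in for real SMART data.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
healthy = rng.normal(0.0, 1.0, size=(5000, 8))   # stands in for SMART features
failing = rng.normal(4.0, 1.5, size=(25, 8))     # rare, deviating drives
X = np.vstack([healthy, failing])

model = IsolationForest(
    n_estimators=100,
    contamination=0.005,   # expected failure fraction: the data is imbalanced
    random_state=0,
).fit(X)

scores = model.decision_function(X)   # the lower the score, the more anomalous
flags = model.predict(X)              # -1 = predicted anomaly (possible failure)
print("flagged drives:", int((flags == -1).sum()))
```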
The recent successes and widespread application of compute-intensive machine learning and data analytics methods have been boosting the usage of the Python programming language on HPC systems. While Python provides many advantages for the users, it has not been designed with a focus on multi-user environments or parallel programming, making it quite challenging to maintain stable and secure Python workflows on an HPC system. In this paper, we analyze the key problems induced by the usage of Python on HPC clusters and sketch appropriate workarounds for efficiently maintaining multi-user Python software environments, securing and restricting resources of Python jobs, and containing Python processes, with a focus on Deep Learning applications running on GPU clusters.
In this work, a method for estimating the current slopes induced by inverters operating interior permanent magnet synchronous machines is presented. After the derivation of the estimation algorithm, the requirements for a suitable sensor setup in terms of accuracy, dynamics and electromagnetic interference are discussed. The boundary conditions for the estimation algorithm are presented with respect to application in high-power traction systems. The estimation algorithm is implemented on a field-programmable gate array (FPGA). The moving least-squares algorithm offers the advantage that it does not depend on stored sample vectors, so not every measured value has to be kept: accumulating sums of the measured values leads to a significant reduction of the required storage units and thus decreases the hardware requirements. The algorithm is designed to be computed within the dead time of the inverter. Appropriate countermeasures for disturbances and hardware restrictions are implemented. The results are discussed afterwards.
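A hedged sketch of the storage-saving idea described above: the slope of a least-squares line fit can be maintained from a handful of running sums alone, so no sample vector has to be stored. A real FPGA design would use fixed-point accumulators (and a sliding window for a truly "moving" fit); this Python version, with assumed sampling period and current ramp, only illustrates the arithmetic.

```python
class RunningSlope:
    """Least-squares slope from accumulated sums, O(1) memory."""
    def __init__(self):
        self.n = self.st = self.sy = self.sty = self.stt = 0.0

    def add(self, t, y):
        self.n += 1
        self.st += t; self.sy += y
        self.sty += t * y; self.stt += t * t

    def slope(self):
        denom = self.n * self.stt - self.st ** 2
        return (self.n * self.sty - self.st * self.sy) / denom

est = RunningSlope()
for k in range(16):                    # 16 current samples inside one dead time
    t = k * 50e-9                      # 50 ns sampling period (assumed)
    i = 100.0 + 2.0e6 * t              # current ramp of 2 A/us (assumed)
    est.add(t, i)
print("estimated di/dt: %.2f A/us" % (est.slope() / 1e6))
```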
Automotive service suppliers are keen to invent products that help to reduce particulate matter pollution substantially, but governments worldwide are not yet ready to make the retrofitting of such devices statutory. The objective of our research is to develop a strategy for introducing these devices to the market based on user needs. The contribution of this paper is three-fold: we provide an overview of the current options for reducing particulate matter pollution (I). This corpus is used to arrive at a more precise description of the specific needs and wishes of the target groups (II). Finally, a representative empirical study via social media channels with German car owners helps to develop a strategy for introducing retrofit devices into the German market (III).
Reaching customers through dialog marketing campaigns is becoming more and more difficult. This is a common problem of companies and marketing agencies worldwide: information overload, multi-channel communication and a confusing variety of offers make it hard to gain the attention of the target group. The contribution of this paper is four-fold: we provide an overview of the current state of print dialog marketing activities and trends (I). Based on this corpus, we identify the main key performance indicators of dialog marketing customer interaction (II). A qualitative user experience study identifies customer wishes and needs, focusing on lottery offers for senior citizens (III). Finally, we evaluate the success of two different dialog marketing campaigns with 20,000 clients and compare the key performance indicators of the original hands-on, experience-based print mailings with user-experience-tested and optimized mailings (IV).
An Empirical Study of Explainable AI Techniques on Deep Learning Models For Time Series Tasks
(2020)
Decision explanations of machine learning black-box models are often generated by applying Explainable AI (XAI) techniques. However, many proposed XAI methods produce unverified outputs. Evaluation and verification are usually achieved through visual interpretation by humans of individual images or texts. In this preregistration, we propose an empirical study and benchmark framework for applying attribution methods developed for images and text to neural networks for time series. We present a methodology to automatically evaluate and rank attribution techniques on time series using perturbation methods in order to identify reliable approaches.
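A minimal sketch of one common perturbation check for time-series attributions: occlude the time steps an attribution method ranks highest and measure how much the model's output changes. The toy model, attribution vectors and baseline value are stand-ins; the preregistration's actual protocol may differ.

```python
import numpy as np

def perturbation_score(model, x, attribution, k=10, baseline=0.0):
    """Drop in model output after occluding the k highest-attributed steps."""
    top = np.argsort(np.abs(attribution))[-k:]   # most relevant time steps
    x_pert = x.copy()
    x_pert[top] = baseline                       # occlude them
    return model(x) - model(x_pert)              # large drop => faithful method

# Toy example: the "model" just sums the first 20 time steps of the series.
model = lambda x: float(x[:20].sum())
rng = np.random.default_rng(0)
x = rng.normal(size=100)

faithful = np.zeros(100); faithful[:20] = 1.0   # attribution agreeing with model
random_attr = rng.normal(size=100)              # uninformative attribution

print(perturbation_score(model, x, faithful))     # large output change
print(perturbation_score(model, x, random_attr))  # typically smaller change
```

Ranking several attribution methods by such scores is the kind of automated evaluation the abstract describes.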
This paper explains the realization of a concept for research-oriented photonics education. Using the example of the integration of an actual PhD project, it is shown how students are familiarized with the topic of research and scientific work in the first semesters. Typical research activities are included as essential parts of the learning process. Research should be made visible and tangible for the students. The authors will present all aspects of the learning environment, their impressions and experiences with the implemented scenario, as well as first evaluation results of the students.
Live streaming of events over an IP network as a catalyst in media technology education and training
(2020)
The paper describes how students are involved in applied research when setting up the technology for and running a live event. Real-time IP transmission in broadcast environments via fiber optics will become increasingly important in the future. Therefore, it is necessary to create a platform in this area where students can learn how to handle IP infrastructure and fiber optics. With this in mind, we have built a fully functional, completely IP-based TV control room. The authors present the steps in the development of the project and show the advantages of the proposed digital solutions. The IP network creates synergy between the teams involved: the participants of the robot competition and the members of the media team. These results are presented in the paper. Our activities aim to awaken enthusiasm for research and technology in young people, and broadcasts of live events are a good opportunity for "hands-on" activities.
Astronomical phenomena fascinate people from the very beginning of mankind up to today. In this paper the authors will present their experience with photography of astronomical events. The main focus will be on aurora borealis, comet Neowise, total lunar eclipses and how mobile devices open up new possibilities to observe the green flash. Our efforts were motivated by the great impact and high number of viewers of these events. Visitors from over a hundred countries watched our live broadcasts.
Furthermore, we report on our experiences with the photography of optical phenomena such as polar lights (Fig. 1), comet Neowise with a Delta Aquariids meteor (Fig. 11), and lunar eclipses (Fig. 12).
Human-Robot Collaboration (HRC) has developed rapidly in recent years with the help of collaborative lightweight robots. An important prerequisite for HRC is a safe gripper system. This opens up a new field of application in robotics, mainly in supporting activities in assembly and care. Currently, there is a variety of grippers that show recognizable weaknesses in terms of flexibility, weight, safety and price.
By means of additive manufacturing (AM), gripper systems can be developed that are multifunctional, quickly manufactured and customized. In addition, the subsequent assembly effort can be reduced by integrating several components into one complex component. An important advantage of AM is the new freedom in designing products; thus, components using lightweight design can be produced. Another advantage is the use of 3D multi-material printing, whereby a component with different material properties and functions can be realized.
This contribution presents the possibilities of AM considering HRC requirements. First, the topic of human-robot interaction with regard to additive manufacturing is covered in a literature review. Then the development steps of the HRC gripper through to assembly are explained, with emphasis on the knowledge acquired regarding AM. Furthermore, an application example of the HRC gripper is considered in detail, and the gripper and its components are evaluated and optimized with respect to their function. Finally, a technical and economic evaluation is carried out. As a result, it is possible to additively manufacture a multifunctional and customized human-robot collaboration gripping system. Both the costs and the weight were significantly reduced; due to its low weight, the gripping system only consumes about 13% of the payload of the robot used.
Additive manufacturing (AM) or 3D printing (3DP) has become a widespread new technology in recent years and is now used in many areas of industry. At the same time, there is an increasing need for training courses that impart the knowledge required for product development in 3D printing. In this article, a workshop on "Rapid Prototyping" is presented, which is intended to provide students with the technical and creative knowledge for product development in the field of AM. Today, additive manufacturing is an important part of teaching for the training of future engineers. In a detailed literature review, the advantages and disadvantages of previous approaches to training students are examined and analyzed. On this basis, a new approach is developed in which the students analyze and optimize a given product in terms of additive manufacturing. The students use two different 3D printers to complete this task. In this way, they acquire the skills to work independently with different processes and materials. With this new approach, the students learn to adapt the design to different manufacturing processes and to observe the restrictions of different materials. The results of these courses are evaluated through feedback on a presentation and a questionnaire.
Efficient collaborative robotic applications need a combination of speed and separation monitoring, and power and force limiting operations. While most collaborative robots have built-in sensors for power and force limiting operations, there are none with built-in sensor systems for speed and separation monitoring. This paper proposes a system for speed and separation monitoring directly from the gripper of the robot. It can monitor separation distances of up to three meters. We used single-pixel Time-of-Flight sensors to measure the separation distance between the gripper and the next obstacle perpendicular to it. This is the first system capable of measuring separation distances of up to three meters directly from the robot's gripper.
Generative convolutional deep neural networks, e.g. popular GAN architectures, rely on convolution-based up-sampling methods to produce non-scalar outputs like images or video sequences. In this paper, we show that common up-sampling methods, known as up-convolution or transposed convolution, cause the inability of such models to reproduce the spectral distributions of natural training data correctly. This effect is independent of the underlying architecture, and we show that it can be used to easily detect generated data like deepfakes with up to 100% accuracy on public benchmarks. To overcome this drawback of current generative models, we propose to add a novel spectral regularization term to the training optimization objective. We show that this approach not only allows training spectrally consistent GANs that avoid high-frequency errors, but that a correct approximation of the frequency spectrum also has positive effects on the training stability and output quality of generative networks.
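A hedged numpy sketch of the spectral analysis underlying such detectors: the azimuthally averaged power spectrum of an image, in which up-convolution artifacts of generated images typically show up as deviations at high frequencies. The bin count and the random toy input are illustrative assumptions.

```python
import numpy as np

def azimuthal_power_spectrum(img, bins=64):
    """1-D radial profile of the 2-D power spectrum of a grayscale image."""
    f = np.fft.fftshift(np.fft.fft2(img))
    power = np.abs(f) ** 2
    h, w = img.shape
    y, x = np.indices((h, w))
    r = np.hypot(x - w / 2, y - h / 2)           # radial frequency per pixel
    edges = np.linspace(0, r.max(), bins + 1)
    idx = np.digitize(r.ravel(), edges) - 1
    profile = np.bincount(idx, weights=power.ravel(), minlength=bins + 1)
    counts = np.bincount(idx, minlength=bins + 1)
    return profile[:bins] / np.maximum(counts[:bins], 1)

img = np.random.default_rng(0).normal(size=(128, 128))  # stands in for an image
spectrum = azimuthal_power_spectrum(img)
# A simple detector can threshold or classify these 64-d spectral features
# to separate natural images from generated ones.
```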
Multiple Object Tracking (MOT) is a long-standing task in computer vision. Current approaches based on the tracking-by-detection paradigm either require some sort of domain knowledge or supervision to associate data correctly into tracks. In this work, we present a self-supervised multiple object tracking approach based on visual features and minimum cost lifted multicuts. Our method relies on straightforward spatio-temporal cues that can be extracted from neighboring frames in an image sequence without supervision. Clustering based on these cues enables us to learn the required appearance invariances for the tracking task at hand and to train an AutoEncoder to generate suitable latent representations. The resulting latent representations can thus serve as robust appearance cues for tracking, even over large temporal distances where no reliable spatio-temporal features can be extracted. We show that, despite being trained without the provided annotations, our model provides competitive results on the challenging MOT Benchmark for pedestrian tracking.
Diffracted waves carry high-resolution information that can help interpret fine structural details at a scale smaller than the seismic wavelength. Because of the low signal-to-noise ratio of diffracted waves, it is challenging to preserve them during processing and to identify them in the final data. The traditional approach is therefore to pick the diffractions manually, but this task is tedious and often prohibitive; current attention is thus given to domain adaptation. These methods aim to transfer knowledge from a labeled domain in which the model is trained, and then infer on the real unlabeled data. In this regard, it is common practice to create a synthetic labeled training dataset, followed by testing on unlabeled real data. Unfortunately, this procedure may fail due to the gap between the synthetic and real distributions: quite often, synthetic data oversimplifies the problem, and consequently the transfer learning becomes a hard and non-trivial procedure. Furthermore, deep neural networks are characterized by their high sensitivity to cross-domain distribution shift. In this work, we present a deep learning model that builds a bridge between both distributions, creating a semi-synthetic dataset that fills the gap between the synthetic and real domains. More specifically, our proposal is a feed-forward, fully convolutional neural network for image-to-image translation that allows inserting synthetic diffractions while preserving the original reflection signal. A series of experiments validates that our approach produces convincing seismic data containing the desired synthetic diffractions.
This paper describes a comparative study of two tactile systems supporting navigation for persons with little or no visual and auditory perception. The efficacy of a tactile head-mounted device (HMD) was compared to that of a wearable device, a tactile belt. A study with twenty participants showed that the participants took significantly less time to complete a course when navigating with the HMD, as compared to the belt.
Machine learning (ML) has become highly relevant in applications across all industries, and specialists in the field are urgently sought. As it is a highly interdisciplinary field, requiring knowledge in computer science, statistics and the relevant application domain, experts are hard to find. Large corporations can sweep the job market by offering high salaries, which makes the situation for small and medium enterprises (SME) even worse, as they usually lack the capacity both for attracting specialists and for qualifying their own personnel. In order to meet the enormous demand for ML specialists, universities now teach ML in specifically designed degree programs as well as within established programs in science and engineering. While the teaching almost always uses practical examples, these are somewhat artificial or outdated, as real data from real companies is usually not available. The approach reported in this contribution aims to tackle the above challenges in an integrated course, combining three independent aspects: first, teaching key ML concepts to graduate students from a variety of existing degree programs; second, qualifying working professionals from SME for ML; and third, applying ML to real-world problems faced by those SME. The course was carried out in two trial periods within a government-funded project at a university of applied sciences in south-west Germany. The region is dominated by SME, many of which are world leaders in their industries. Participants were students from different graduate programs as well as working professionals from several SME based in the region. The first phase of the course (one semester) covers the fundamental concepts of ML, such as exploratory data analysis, regression, classification, clustering, and deep learning. In this phase, student participants and working professionals were taught in separate tracks. Students attended regular classes and lab sessions (but were also given access to e-learning materials), whereas the professionals learned exclusively in a flipped-classroom scenario: they were given access to e-learning units (video lectures and accompanying quizzes) for preparation, while face-to-face sessions were dominated by lab experiments applying the concepts. Prior to the start of the second phase, participating companies were invited to submit real-world problems that they wanted to solve with the help of ML. The second phase consisted of practical ML projects, each tackling one of the problems and worked on by a mixed team of both students and professionals for the period of one semester. The teams were self-organized in the ways they preferred to work (e.g. remote vs. face-to-face collaboration), but were also coached by one of the teaching staff. In several plenary meetings, the teams reported on their status as well as on challenges and solutions. In both periods, the course was monitored and extensive surveys were carried out. We report on the findings as well as the lessons learned. For instance, while the program was very well received, professional participants wished for more detailed coverage of theoretical concepts. A challenge faced by several teams during the second phase was the dropout of student members due to upcoming exams in other subjects.
Short-term load forecasting (STLF) has been playing a key role in the electricity sector for several decades, due to the need to align energy generation with demand and the financial risk connected with forecasting errors. In the top-down approach, forecasts are calculated for aggregated load profiles, i.e. the sum of the individual loads of the consumers belonging to a balancing group. Due to emerging flexible loads, STLF for individual factories is becoming increasingly relevant. Their load profiles are typically more stochastic than aggregated ones, which imposes new requirements on forecasting methods and tools following a bottom-up approach. The increasing digitalization in industry, with enhanced data availability as well as smart metering, is an enabler for improved load forecasts. There is a need for STLF tools processing live data with a high temporal resolution in the minute range. Furthermore, behind-the-meter (BTM) data from various sources such as submetering and production planning should be integrated into the models. STLF then becomes a big data problem that calls for machine learning (ML) methods. The research project "GaIN" investigates improving the STLF quality of an energy utility using BTM data and innovative ML models. As a review paper, this contribution describes the project scope, proposes a detailed benchmark definition and evaluates the readiness of existing STLF methods to fulfil the described requirements.
The review highlights that recent STLF investigations focus on ML methods, and hybrid models in particular are gaining importance. ML can outperform classical methods in terms of automation degree and forecasting accuracy. Nevertheless, the potential for improving forecasting accuracy with ML models depends on the underlying data and the types of input variables. The methods described in the analyzed publications only partially fulfil the tool requirements for STLF at company level. There is still a need to develop suitable ML methods that integrate the expanded data base in order to improve load forecasts at company level.
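A minimal illustration of a bottom-up ML load forecast of the kind discussed above: gradient boosting on lagged load values and calendar features. The 15-minute resolution, feature choice and synthetic load profile are assumptions for the sketch, not the "GaIN" project's actual models or data.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
steps = 96 * 60                                   # 60 days of 15-min intervals
t = np.arange(steps)
load = 50 + 20 * np.sin(2 * np.pi * t / 96) + rng.normal(0, 3, steps)

# Features: load one day and one week ago, plus time-of-day and weekday.
lag_day, lag_week = 96, 96 * 7
idx = np.arange(lag_week, steps)
X = np.column_stack([
    load[idx - lag_day],
    load[idx - lag_week],
    idx % 96,              # quarter-hour of the day
    (idx // 96) % 7,       # day of the week
])
y = load[idx]

split = len(y) - 96                                # hold out the last day
model = GradientBoostingRegressor().fit(X[:split], y[:split])
mae = np.abs(model.predict(X[split:]) - y[split:]).mean()
print("next-day MAE: %.2f kW" % mae)
```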
The interaction between agents in multiagent-based control systems requires peer-to-peer communication between agents, avoiding central control. The sensor nodes represent agents and produce measurement data at every time step. The nodes exchange time series data over the peer-to-peer network in order to cooperatively compute an aggregation function. We investigate the aggregation process of averaging time series data of nodes in a peer-to-peer network using the grouping algorithm of Cichon et al. (2018). Nodes communicate whether data is new and map data values according to their sizes into a histogram. This map message consists of the subintervals and of vectors for estimating the nodes joining and leaving a subinterval. At each time step, the nodes communicate with each other in synchronous rounds to exchange map messages until the network converges to a common map message. Each node then calculates the average of the time series data produced by all nodes in the network using the histogram algorithm. The relative error between the output of averaging the time series data and the ground-truth average in the network decreases as the size of the network increases. We perform simulations which show that the approximate histogram method provides a reasonable approximation of the time series data.
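A simplified gossip-averaging sketch, deliberately not the histogram-based algorithm of Cichon et al.: in synchronous rounds, randomly paired peers average their current estimates, and all nodes converge to the network-wide mean without central control. It illustrates why the relative error shrinks as the rounds proceed; node count, values and round count are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
values = rng.normal(100, 15, size=64)       # one sensor reading per node
truth = values.mean()
est = values.copy()

for _ in range(30):                         # synchronous communication rounds
    order = rng.permutation(len(est))
    for a, b in zip(order[::2], order[1::2]):   # random peer-to-peer pairs
        est[a] = est[b] = (est[a] + est[b]) / 2  # pairwise averaging step

rel_err = np.abs(est - truth).max() / abs(truth)
print("max relative error after 30 rounds: %.2e" % rel_err)
```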
In this work, we propose to solve privacy-preserving set relations performed by a third party in an outsourced configuration. We argue that solving the disjointness relation based on Bloom filters is a new contribution, in particular because it adds another layer of privacy on the sets' cardinality. We propose to compose the set relations in a slightly different way by applying a keyed hash function. Besides discussing the correctness of the set relations, we analyze how this impacts the privacy of the sets' content as well as providing privacy on the sets' cardinality. We are particularly interested in how overlapping bits in the Bloom filters impact the privacy level of our approach. Finally, we present our results with real-world parameters in two concrete scenarios.
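A hedged sketch of the general building blocks named above: a Bloom filter whose bit positions are derived from a keyed hash (here HMAC-SHA256), so only holders of the key can build comparable filters, and a disjointness estimate from the bitwise AND of two filters. Filter size, hash count and the AND-based test are illustrative simplifications; the paper's construction and privacy analysis go further.

```python
import hmac, hashlib

M, K = 1024, 4          # filter size in bits, number of hash functions

def positions(key, element):
    """K bit positions derived from the HMAC-SHA256 of the element."""
    digest = hmac.new(key, element.encode(), hashlib.sha256).digest()
    return [int.from_bytes(digest[4*i:4*i+4], "big") % M for i in range(K)]

def bloom(key, elements):
    """Build a keyed Bloom filter (as an integer bit set) over the elements."""
    bits = 0
    for e in elements:
        for p in positions(key, e):
            bits |= 1 << p
    return bits

key = b"shared-secret-key"                       # assumed pre-shared
f1 = bloom(key, {"alice", "bob", "carol"})
f2 = bloom(key, {"dave", "erin"})

# If the AND of two filters has (almost) no set bits, the sets are likely
# disjoint; bits overlapping by chance cause false positives, which is exactly
# the effect the privacy analysis has to take into account.
print("likely disjoint:", bin(f1 & f2).count("1") == 0)
```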