Generative convolutional deep neural networks, e.g. popular GAN architectures, rely on convolution-based up-sampling methods to produce non-scalar outputs like images or video sequences. In this paper, we show that common up-sampling methods, known as up-convolution or transposed convolution, cause such models to fail to reproduce the spectral distributions of natural training data correctly. This effect is independent of the underlying architecture, and we show that it can be used to easily detect generated data like deepfakes with up to 100% accuracy on public benchmarks. To overcome this drawback of current generative models, we propose to add a novel spectral regularization term to the training optimization objective. We show that this approach not only allows training spectrally consistent GANs that avoid high-frequency errors; a correct approximation of the frequency spectrum also has positive effects on the training stability and output quality of generative networks.
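The spectral fingerprint described above can be reproduced with a few lines of NumPy: the sketch below computes the azimuthally averaged magnitude spectrum of an image, the 1D profile on which real and up-convolution-generated images tend to differ at high frequencies. Function and variable names are illustrative, not taken from the paper's code.

```python
import numpy as np

def azimuthal_power_spectrum(img: np.ndarray, n_bins: int = 64) -> np.ndarray:
    """1D radial profile of the 2D Fourier magnitude spectrum of a grayscale image."""
    mag = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    h, w = mag.shape
    y, x = np.indices(mag.shape)
    r = np.sqrt((y - h // 2) ** 2 + (x - w // 2) ** 2)   # radius from the DC bin
    bins = np.linspace(0, r.max(), n_bins + 1)
    profile = np.empty(n_bins)
    for i in range(n_bins):
        mask = (r >= bins[i]) & (r < bins[i + 1])
        profile[i] = mag[mask].mean() if mask.any() else 0.0
    return profile / (profile[0] + 1e-8)   # normalize by the DC component

# Detecting generated images can then reduce to a simple binary classifier
# (e.g. logistic regression) trained on these 1D spectral profiles.
```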
With the implementation and a subsequent meaningful evaluation, the visual-inertial mapping and localization system maplab is analyzed. Mapping and localization are based on the detection of environmental features. In addition to creating individual maps, it is also possible to merge several maps and thus to map extensive areas and use them for further data analysis. The execution and evaluation of the results in different application scenarios show that maplab is particularly well suited for mapping rooms and small building complexes. Map merging further offers the option of increasing the information content of maps, which improves the effectiveness of a subsequent localization. With growing map size, however, geometric inconsistencies increase.
Positioning mobile systems with high accuracy is a prerequisite for intelligent autonomous behavior, both in field robotics and in industrial environments. This contribution describes the setup of a robot platform and its use for testing and evaluating Kalman filter configurations. The setup was realized with a Husky A200 mobile robot and a LiDAR (Light Detection and Ranging) sensor. Five different scenarios were developed to verify the proposed setup and to test the filters' performance with respect to positioning accuracy.
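For readers unfamiliar with the filters under test, here is a minimal, illustrative sketch of the predict/update cycle that any of the evaluated Kalman filter configurations builds on, reduced to a scalar state; all noise values are placeholders, not the configurations from the paper.

```python
def kalman_step(x, p, z, q=0.01, r=0.5):
    """One predict/update step for a scalar state: process noise q,
    measurement noise r, new measurement z (e.g. a LiDAR-derived position)."""
    # Predict: constant-state model, uncertainty grows by the process noise.
    p = p + q
    # Update: blend prediction and measurement by the Kalman gain.
    k = p / (p + r)
    x = x + k * (z - x)
    p = (1 - k) * p
    return x, p

x, p = 0.0, 1.0                      # initial estimate and variance
for z in [1.1, 0.9, 1.05, 0.98]:     # dummy measurements
    x, p = kalman_step(x, p, z)
```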
A method for evaluating skin cancer detection based on millimeter-wave technologies is presented. For this purpose, the relative permittivities of benign and cancerous lesions are calculated using effective medium theory, taking the difference in water content between them into account. These calculated relative permittivities are then used to simulate and evaluate skin cancer detection with a substrate-integrated waveguide probe. In the best case, a difference of up to 13 dB in the simulated scattering parameter S11 between healthy and cancerous skin can be determined.
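The abstract does not name the specific mixing rule used; the Maxwell Garnett formula below is one common effective-medium choice for estimating the permittivity of tissue at a given water volume fraction, shown here purely as a hedged illustration with made-up numbers.

```python
def maxwell_garnett(eps_host: complex, eps_incl: complex, f: float) -> complex:
    """Effective permittivity of inclusions (volume fraction f) in a host medium."""
    num = eps_incl + 2 * eps_host + 2 * f * (eps_incl - eps_host)
    den = eps_incl + 2 * eps_host - f * (eps_incl - eps_host)
    return eps_host * num / den

# A higher water fraction in a cancerous lesion raises the effective permittivity:
eps_water = 15 - 25j   # illustrative complex permittivity of water at mm-waves
eps_dry = 3 - 0.5j     # illustrative dry-tissue permittivity
print(maxwell_garnett(eps_dry, eps_water, 0.3))   # benign: lower water fraction
print(maxwell_garnett(eps_dry, eps_water, 0.5))   # cancerous: higher water fraction
```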
Convolutional neural networks (CNN) define the state-of-the-art solution on many perceptual tasks. However, current CNN approaches largely remain vulnerable to adversarial perturbations of the input that have been crafted specifically to fool the system while being quasi-imperceptible to the human eye. In recent years, various approaches have been proposed to defend CNNs against such attacks, for example by model hardening or by adding explicit defense mechanisms. In the latter case, a small "detector" is included in the network and trained on the binary classification task of distinguishing genuine data from data containing adversarial perturbations. In this work, we propose a simple and light-weight detector, which leverages recent findings on the relation between networks' local intrinsic dimensionality (LID) and adversarial attacks. Based on a re-interpretation of the LID measure and several simple adaptations, we surpass the state of the art on adversarial detection by a significant margin and reach almost perfect results in terms of F1-score for several networks and datasets. Sources available at: https://github.com/adverML/multiLID
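For reference, the widely used maximum-likelihood LID estimate from k-nearest-neighbor distances, the per-layer quantity that detectors of this kind feed into a binary classifier, can be sketched in a few lines; names are illustrative, and the authors' actual implementation lives in the linked repository.

```python
import numpy as np

def lid_mle(sample: np.ndarray, reference: np.ndarray, k: int = 20) -> float:
    """MLE of local intrinsic dimensionality for one sample against a
    reference batch of activations of the same layer."""
    dists = np.sort(np.linalg.norm(reference - sample, axis=1))
    dists = dists[dists > 0][:k]      # drop the zero self-distance, keep k NNs
    # LID_hat = -( (1/k) * sum_i log(r_i / r_k) )^(-1)
    return -1.0 / np.mean(np.log(dists / dists[-1]))
```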
In this study, an approach to a microwave-based radar system for the localization of objects is proposed. This could be particularly useful in microwave imaging applications such as cardiac catheter detection. An experimental system is defined and realized with the selection of an appropriate antenna design. Hardware control functions and different imaging algorithms are implemented as well. The functionality of this measurement setup has been analyzed in multiple test scenarios, and it proves capable of locating multiple objects as well as extended objects.
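The abstract does not say which imaging algorithms were implemented; delay-and-sum backprojection is one classic candidate, sketched below with illustrative antenna positions, sampling rate, and signals rather than the paper's actual setup.

```python
import numpy as np

def delay_and_sum(signals, antennas, grid, c=3e8, fs=1e10):
    """signals: (n_ant, n_samples) received echoes; antennas: (n_ant, 2)
    positions [m]; grid: (n_px, 2) pixel positions [m]. For each pixel, sum
    every echo at its round-trip delay; returns one intensity per pixel."""
    image = np.zeros(len(grid))
    for s, ant in zip(signals, antennas):
        d = np.linalg.norm(grid - ant, axis=1)       # antenna-to-pixel distance
        idx = np.round(2 * d / c * fs).astype(int)   # round-trip sample index
        valid = idx < s.shape[0]                     # ignore delays past the record
        image[valid] += s[idx[valid]]
    return np.abs(image)
```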
As industrial networks continue to expand and connect more devices and users, they face growing security challenges such as unauthorized access and data breaches. This paper delves into the crucial role of security and trust in industrial networks and how trust management systems (TMS) can mitigate malicious access to these networks. The TMS presented in this paper leverages distributed ledger technology (blockchain) to evaluate the trustworthiness of blockchain nodes, including devices and users, and make access decisions accordingly. While this approach is applicable to blockchain, it can also be extended to other areas. This approach can help prevent malicious actors from penetrating industrial networks and causing harm. The paper also presents the results of a simulation to demonstrate the behavior of the TMS and provide insights into its effectiveness.
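The abstract leaves the trust computation itself open; the sketch below shows one generic possibility only, namely an exponentially weighted trust score per node, updated after each observed interaction and thresholded for access decisions. Class and parameter names are hypothetical.

```python
class TrustManager:
    def __init__(self, alpha=0.2, threshold=0.6):
        self.alpha, self.threshold = alpha, threshold
        self.trust = {}                       # node id -> score in [0, 1]

    def report(self, node: str, positive: bool) -> None:
        """Blend a new interaction outcome into the node's running trust score."""
        old = self.trust.get(node, 0.5)       # unknown nodes start neutral
        self.trust[node] = (1 - self.alpha) * old + self.alpha * (1.0 if positive else 0.0)

    def allow_access(self, node: str) -> bool:
        return self.trust.get(node, 0.5) >= self.threshold

tms = TrustManager()
for outcome in [True, True, False, True]:     # observed behavior of one device
    tms.report("device-42", outcome)
print(tms.allow_access("device-42"))
```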
Towards a Formal Verification of Seamless Cryptographic Rekeying in Real-Time Communication Systems
(2022)
This paper makes two contributions to the verification of communication protocols by transition systems. First, the paper models a cyclic communication protocol as a synchronized network of transition systems. This protocol enables seamless cryptographic rekeying embedded into cyclic messages. Second, we verify the protocol using the model checking technique.
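As a toy illustration of the modeling idea only, not the paper's actual model: the sketch below encodes sender and receiver as a small synchronized transition system stepping through a rekeying handshake, and exhaustively checks the safety property that the two sides always share at least one key, i.e. that rekeying never interrupts the cyclic communication.

```python
# Each side is in phase 'old', 'both' (old + new key installed), or 'new'.
def can_communicate(s, r):
    """A cyclic message is decryptable iff the two sides share at least one key."""
    keys = {'old': {0}, 'both': {0, 1}, 'new': {1}}
    return bool(keys[s] & keys[r])

def successors(state):
    s, r = state
    nxt = []
    if s == 'old': nxt.append(('both', r))                   # sender installs new key
    if r == 'old': nxt.append((s, 'both'))                   # receiver installs new key
    if s == 'both' and r != 'old': nxt.append(('new', r))    # sender drops old key
    if r == 'both' and s != 'old': nxt.append((s, 'new'))    # receiver drops old key
    return nxt

# Exhaustive reachability check of the safety property from the initial state.
seen, frontier = set(), [('old', 'old')]
while frontier:
    st = frontier.pop()
    if st in seen:
        continue
    seen.add(st)
    assert can_communicate(*st), f"rekeying breaks communication in {st}"
    frontier.extend(successors(st))
print(f"verified {len(seen)} reachable states")
```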
Training deep neural networks using backpropagation is very memory- and compute-intensive. This makes it difficult to run on-device learning or to fine-tune neural networks on tiny embedded devices such as low-power microcontroller units (MCUs). Sparse backpropagation algorithms try to reduce the computational load of on-device learning by training only a subset of the weights and biases. Existing approaches train a static number of weights: a poor choice of this so-called backpropagation ratio either limits the computational gain or can lead to severe accuracy losses. In this paper we present TinyProp, the first sparse backpropagation method that dynamically adapts the backpropagation ratio during on-device training for each training step. TinyProp induces a small calculation overhead to sort the elements of the gradient, which does not significantly impact the computational gains. TinyProp works particularly well for fine-tuning trained networks on MCUs, a typical use case for embedded applications. On three typical datasets, MNIST, DCASE2020 and CIFAR10, TinyProp is 5 times faster than non-sparse training with an average accuracy loss of 1%. On average, TinyProp is 2.9 times faster than existing static sparse backpropagation algorithms, and the accuracy loss is reduced on average by 6% compared to a typical static setting of the backpropagation ratio.
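A minimal sketch of the dynamic sparse-backpropagation idea described above: per training step, only the top-k gradient entries by magnitude are updated, with k derived from the gradient itself (the sorting/selection is the overhead the abstract mentions). The adaptation rule below is a stand-in; TinyProp's exact rule may differ.

```python
import numpy as np

def sparse_grad_update(w, grad, lr=0.01, min_ratio=0.05, max_ratio=0.5):
    """Apply an SGD step to only the largest-magnitude gradient entries."""
    flat = np.abs(grad).ravel()
    # Adapt the backpropagation ratio per step, here from relative gradient mass.
    ratio = np.clip(flat.mean() / (flat.max() + 1e-12), min_ratio, max_ratio)
    k = max(1, int(ratio * flat.size))
    # Partial sort: threshold at the k-th largest magnitude, mask the rest out.
    thresh = np.partition(flat, -k)[-k]
    mask = np.abs(grad) >= thresh
    return w - lr * grad * mask
```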
One of the main requirements of spatially distributed Internet of Things (IoT) solutions is networks with wide coverage that connect many low-power devices. Low-Power Wide-Area Networks (LPWAN) and Cellular IoT (cIoT) networks are promising candidates in this space. LPWAN approaches such as LoRaWAN, SigFox, and MIOTY are based on enhanced physical layer (PHY) implementations to achieve long range. Narrowband versions of cellular networks, such as Narrowband IoT (NB-IoT) and Long-Term Evolution for Machines (LTE-M), offer reduced bandwidth and simplified node and network management mechanisms. Since the underlying use cases come with various requirements, it is essential to perform a comparative analysis of competing technologies. This article provides a systematic performance measurement and comparison of LPWAN and NB-IoT technologies in a unified testbed, and also discusses the necessity of future fifth-generation (5G) LPWAN solutions.
Machine learning (ML) has become highly relevant in applications across all industries, and specialists in the field are sought urgently. As it is a highly interdisciplinary field, requiring knowledge in computer science, statistics, and the relevant application domain, experts are hard to find. Large corporations can sweep the job market by offering high salaries, which makes the situation for small and medium enterprises (SMEs) even worse, as they usually lack the capacity both to attract specialists and to qualify their own personnel. To meet the enormous demand for ML specialists, universities now teach ML in specifically designed degree programs as well as within established programs in science and engineering. While the teaching almost always uses practical examples, these are often somewhat artificial or outdated, as real data from real companies is usually not available. The approach reported in this contribution aims to tackle the above challenges in an integrated course, combining three independent aspects: first, teaching key ML concepts to graduate students from a variety of existing degree programs; second, qualifying working professionals from SMEs for ML; and third, applying ML to real-world problems faced by those SMEs. The course was carried out in two trial periods within a government-funded project at a university of applied sciences in south-west Germany. The region is dominated by SMEs, many of which are world leaders in their industries. Participants were students from different graduate programs as well as working professionals from several SMEs based in the region. The first phase of the course (one semester) covered the fundamental concepts of ML, such as exploratory data analysis, regression, classification, clustering, and deep learning. In this phase, student participants and working professionals were taught in separate tracks. Students attended regular classes and lab sessions (but were also given access to e-learning materials), whereas the professionals learned exclusively in a flipped-classroom scenario: they were given access to e-learning units (video lectures and accompanying quizzes) for preparation, while face-to-face sessions were dominated by lab experiments applying the concepts. Prior to the start of the second phase, participating companies were invited to submit real-world problems that they wanted to solve with the help of ML. The second phase consisted of practical ML projects, each tackling one of these problems and worked on by a mixed team of both students and professionals for the period of one semester. The teams were self-organized in the ways they preferred to work (e.g. remote vs. face-to-face collaboration), but were also coached by one of the teaching staff. In several plenary meetings, the teams reported on their status as well as challenges and solutions. In both periods, the course was monitored and extensive surveys were carried out. We report on the findings as well as the lessons learned. For instance, while the program was very well received, professional participants wished for more detailed coverage of theoretical concepts. A challenge faced by several teams during the second phase was the dropout of student members due to upcoming exams in other subjects.
Harnessing the overall benefits of the latest advancements in artificial intelligence (AI) requires the extensive collaboration of academia and industry. These collaborations promote innovation and growth while enforcing the practical usefulness of newer technologies in real life. The purpose of this article is to outline the challenges faced during cross-collaboration between academia and industry. These challenges are also inspected with the help of an ongoing project titled "Quality Assurance of Machine Learning Applications" (Q-AMeLiA), in which three universities cooperate with five industry partners to make the product risk of AI-based products visible. Further, we discuss the hurdles and the key challenges in machine learning (ML) technology transfer from academia to industry with respect to robustness, simplicity, and safety. These challenges result from the lack of common standards and metrics and from missing regulatory considerations when state-of-the-art (SOTA) technology is developed in academia. The use of biased datasets raises ethical concerns that might lead to unfair outcomes when the ML model is deployed in production. The advancement of AI in small and medium-sized enterprises (SMEs) requires common standardization of concepts more than algorithmic breakthroughs. In this paper, in addition to the general challenges, we also discuss domain-specific barriers in five different domains, namely object detection, hardware benchmarking, continual learning, action recognition, and industrial process automation, and highlight the steps necessary for successfully managing cross-sectoral collaborations between academia and industry.
Spatially Distributed Wireless Networks (SDWN) are one of the basic technologies for Internet of Things (IoT) and Industrial Internet of Things (IIoT) applications. For many of these applications, SDWNs have strict requirements such as low cost, simple installation and operation, and high flexibility and mobility. Among the different Narrowband Wireless Wide Area Networking (NBWWAN) technologies introduced to address these categories of wireless networking requirements, Narrowband Internet of Things (NB-IoT) is gaining traction due to attractive system parameters, an energy-saving mode of operation with low data rates and bandwidth, and its applicability in 5G use cases. Since several technologies are available and the underlying use cases come with various requirements, it is essential to perform a systematic comparative analysis of competing technologies to choose the right one. It is also important to perform testing during different phases of the system development life cycle. This paper describes a systematic test environment for automated testing of radio communication and systematic measurement of NB-IoT performance.
A novel approach for the synchronization and calibration of a camera and an inertial measurement unit (IMU) in the research-oriented visual-inertial mapping and localization framework maplab is presented. Mapping and localization are based on detecting different features in the environment. In addition to the possibility of creating individual maps, the included algorithms allow merging maps to increase mapping accuracy and obtain large-scale maps. Furthermore, the algorithms can be used to optimize the collected data. The preliminary results show that, after appropriate calibration and synchronization, maplab can be used efficiently for mapping, especially in rooms and small building environments.
Diffracted waves carry high-resolution information that can help interpret fine structural details at a scale smaller than the seismic wavelength. Because of the low signal-to-noise ratio of diffracted waves, it is challenging to preserve them during processing and to identify them in the final data. The traditional approach is therefore to pick diffractions manually. However, this task is tedious and often prohibitive, so current attention is given to domain adaptation. These methods aim to transfer knowledge from a labeled domain to train the model and then infer on the real, unlabeled data. In this regard, it is common practice to create a synthetic labeled training dataset, followed by testing on unlabeled real data. Unfortunately, such a procedure may fail due to the gap between the synthetic and the real distribution: synthetic data quite often oversimplifies the problem, and consequently transfer learning becomes a hard and non-trivial procedure. Furthermore, deep neural networks are characterized by their high sensitivity to cross-domain distribution shift. In this work, we present a deep learning model that builds a bridge between both distributions by creating a semi-synthetic dataset that fills the gap between the synthetic and real domains. More specifically, our proposal is a feed-forward, fully convolutional neural network for image-to-image translation that allows inserting synthetic diffractions while preserving the original reflection signal. A series of experiments validates that our approach produces convincing seismic data containing the desired synthetic diffractions.
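A hedged sketch of the kind of network the abstract describes: a feed-forward, fully convolutional image-to-image model that predicts a diffraction component and adds it to the input section, so the original reflection signal is preserved by construction. Layer count and channel sizes are illustrative, not the authors' architecture.

```python
import torch
import torch.nn as nn

class DiffractionInserter(nn.Module):
    def __init__(self, channels=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, x):
        # Residual formulation: output = input reflections + synthetic diffractions.
        return x + self.body(x)

net = DiffractionInserter()
section = torch.randn(1, 1, 128, 128)   # dummy single-channel seismic section
out = net(section)
```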
In this preliminary report, we present a simple but very effective technique to stabilize the training of CNN-based GANs. Motivated by recently published methods using frequency decomposition of convolutions (e.g., Octave Convolutions), we propose a novel convolution scheme that stabilizes the training and reduces the likelihood of a mode collapse. The basic idea of our approach is to split convolutional filters into additive high- and low-frequency parts, while shifting weight updates from low to high frequencies during training. Intuitively, this method forces GANs to learn low-frequency, coarse image structures before descending into fine (high-frequency) details. Our approach is orthogonal and complementary to existing stabilization methods and can simply be plugged into any CNN-based GAN architecture. First experiments on the CelebA dataset show the effectiveness of the proposed method.
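One simple way to realize such a frequency split, shown below as an assumption-laden sketch rather than the paper's scheme: decompose each kernel into its per-kernel mean (a pure low-pass component) plus a high-frequency residual, and blend them with a schedule alpha that shifts emphasis from low to high frequencies over training.

```python
import torch
import torch.nn.functional as F

def freq_split_conv2d(x, weight, alpha):
    """alpha in [0, 1]: 0 = low-frequency part only, 1 = full kernel (low + high)."""
    low = weight.mean(dim=(2, 3), keepdim=True).expand_as(weight)  # DC component
    high = weight - low                                            # residual detail
    return F.conv2d(x, low + alpha * high, padding=weight.shape[-1] // 2)

x = torch.randn(1, 3, 32, 32)
w = torch.randn(8, 3, 3, 3)
y = freq_split_conv2d(x, w, alpha=0.2)   # early training: mostly coarse structure
```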
Despite the success of convolutional neural networks (CNNs) in many computer vision and image analysis tasks, they remain vulnerable to so-called adversarial attacks: small, crafted perturbations of the input images can lead to false predictions. A possible defense is to detect adversarial examples. In this work, we show how analysis in the Fourier domain of input images and feature maps can be used to distinguish benign test samples from adversarial images. We propose two novel detection methods: our first method employs the magnitude spectrum of the input images to detect an adversarial attack. This simple and robust classifier can successfully detect adversarial perturbations of three commonly used attack methods. The second method builds upon the first and additionally extracts the phase of the Fourier coefficients of feature maps at different layers of the network. With this extension, we are able to improve adversarial detection rates compared to state-of-the-art detectors on five different attack methods. The code for the methods proposed in this paper is available at github.com/paulaharder/SpectralAdversarialDefense
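A minimal sketch of the feature extraction the abstract describes: the magnitude spectrum of the input image (first detector) concatenated with the phase of feature-map Fourier coefficients from selected layers (second detector); the resulting vector would feed a simple benign-vs-adversarial classifier. Names are illustrative; the linked repository has the authors' code.

```python
import numpy as np

def fourier_features(img: np.ndarray, feature_maps: list[np.ndarray]) -> np.ndarray:
    """img: 2D grayscale input; feature_maps: 2D activation maps per layer."""
    feats = [np.abs(np.fft.fft2(img)).ravel()]           # input magnitude spectrum
    for fm in feature_maps:
        feats.append(np.angle(np.fft.fft2(fm)).ravel())  # phase of coefficients
    return np.concatenate(feats)
```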
Due to the rapidly increasing storage consumption worldwide, as well as the expectation of continuous availability of information, the complexity of administration in today's data centers is constantly growing. Integrated techniques for monitoring hard disks can increase the reliability of storage systems. However, these techniques often lack intelligent data analysis to enable predictive maintenance. To solve this problem, machine learning algorithms can be used to detect potential failures in advance and prevent them. In this paper, an unsupervised model for predicting hard disk failures based on Isolation Forest is proposed. A method is presented that can deal with highly imbalanced datasets, as the experiment on the Backblaze benchmark dataset demonstrates.
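A minimal sketch of the unsupervised setup described above: an Isolation Forest is fit on (mostly healthy) drive attribute vectors, and drives scored as anomalous are flagged as failure candidates, which sidesteps the label imbalance. The synthetic features and parameters are illustrative, not those of the paper's Backblaze experiment.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
healthy = rng.normal(0, 1, size=(1000, 5))    # stand-in for per-drive SMART features
failing = rng.normal(4, 1, size=(10, 5))      # rare, anomalous drives

clf = IsolationForest(contamination=0.01, random_state=0).fit(healthy)
pred = clf.predict(np.vstack([healthy[:5], failing]))   # -1 = predicted anomaly
print(pred)
```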