The EREMI project is a two-year project funded under the ERASMUS+ framework programme. Its team has developed, and will validate, an advanced higher-education programme, including life-long learning, on the interdisciplinary topic of resource efficiency in manufacturing industries and the overall system optimization of physical infrastructure with little or no digitization. These goals will be achieved by applying IoT technologies towards efficient industrial systems and by building up highly educated human capital on these economically, politically, and technically crucial topics for the rapidly developing industries and economies of Bulgaria, North Macedonia, and Romania, three countries undergoing intensive economic and industrial transformation. Efficiency will be attained by drawing on the experience and expertise of the involved German partner organisation.
The importance of machine learning has been increasing dramatically for years. From assistance systems and production optimisation to applications in the health sector, almost every area of daily life and industry comes into contact with machine learning. Besides all the benefits that ML brings, its lack of transparency and the difficulty of establishing traceability pose major risks. While there are solutions that make the training of machine learning models more transparent, traceability is still a major challenge, as is ensuring the identity of a model. Unnoticed modification of a model is a further danger when using ML. One solution is to create an ML birth certificate and an ML family tree secured by blockchain technology. Important information about the training and about changes to the model through retraining can be stored in a blockchain and accessed by any user, creating more security and traceability for an ML model.
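As an illustration of the idea, the following minimal Python sketch chains fingerprints of training and retraining events so that later tampering becomes detectable. The record fields, event names, and serialized model placeholders are assumptions for demonstration; a real system would anchor these hashes in an actual blockchain rather than a local list.

```python
import hashlib
import json
import time

def record_hash(record: dict) -> str:
    """Deterministic SHA-256 over a provenance record."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def append_event(chain: list, event: str, model_bytes: bytes) -> None:
    """Append a birth-certificate/retraining entry, linked to its predecessor."""
    entry = {
        "prev": chain[-1]["hash"] if chain else None,
        "event": event,  # e.g. "training" or "retraining"
        "model_fingerprint": hashlib.sha256(model_bytes).hexdigest(),
        "timestamp": time.time(),
    }
    entry["hash"] = record_hash(entry)
    chain.append(entry)

chain: list = []
append_event(chain, "training", b"<serialized model v1>")
append_event(chain, "retraining", b"<serialized model v2>")
# Tampering with an earlier entry breaks the hash links:
print(all(b["prev"] == a["hash"] for a, b in zip(chain, chain[1:])))
```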
Despite the success of convolutional neural networks (CNNs) in many academic benchmarks for computer vision tasks, their application in the real world is still facing fundamental challenges. One of these open problems is the inherent lack of robustness, unveiled by the striking effectiveness of adversarial attacks. Current attack methods are able to manipulate the network's prediction by adding specific but small amounts of noise to the input. In turn, adversarial training (AT) aims to achieve robustness against such attacks and ideally a better model generalization ability by including adversarial samples in the training set. However, an in-depth analysis of the resulting robust models beyond adversarial robustness is still pending. In this paper, we empirically analyze a variety of adversarially trained models that achieve high robust accuracies when facing state-of-the-art attacks, and we show that AT has an interesting side-effect: it leads to models that are significantly less overconfident in their decisions than non-robust models, even on clean data. Further, our analysis of robust models shows that not only AT but also the model's building blocks (like activation functions and pooling) have a strong influence on the models' prediction confidences. Data & project website: https://github.com/GeJulia/robustness_confidences_evaluation
Despite the success of convolutional neural networks (CNNs) in many academic benchmarks for computer vision tasks, their application in the real world is still facing fundamental challenges. One of these open problems is the inherent lack of robustness, unveiled by the striking effectiveness of adversarial attacks. Adversarial training (AT) is often considered a remedy to train more robust networks. In this paper, we empirically analyze a variety of adversarially trained models that achieve high robust accuracies when facing state-of-the-art attacks, and we show that AT has an interesting side-effect: it leads to models that are significantly less overconfident in their decisions than non-robust models, even on clean data. Further, our analysis of robust models shows that not only AT but also the model's building blocks (like activation functions and pooling) have a strong influence on the models' prediction confidences.
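For context, the single-step Fast Gradient Sign Method (FGSM) is one of the standard ways to generate the adversarial samples used during AT. A minimal PyTorch sketch follows; the tiny model, batch shapes, and eps value are illustrative assumptions, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def fgsm_examples(model, x, y, eps=8 / 255):
    """Craft one-step FGSM adversarial examples for a batch (x, y)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Shift every pixel by eps in the direction that increases the loss.
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

# Toy setup just to make the sketch executable.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
x, y = torch.rand(4, 3, 32, 32), torch.randint(0, 10, (4,))
x_adv = fgsm_examples(model, x, y)
# An AT step would now minimize F.cross_entropy(model(x_adv), y).
```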
Over the last years, Convolutional Neural Networks (CNNs) have been the dominating neural architecture in a wide range of computer vision tasks. From an image and signal processing point of view, this success might be a bit surprising, as the inherent spatial pyramid design of most CNNs apparently violates basic signal processing laws, i.e., the sampling theorem, in their down-sampling operations. However, since poor sampling appeared not to affect model accuracy, this issue was broadly neglected until model robustness started to receive more attention. Recent work in the context of adversarial attacks and distribution shifts showed, after all, that there is a strong correlation between the vulnerability of CNNs and aliasing artifacts induced by poor down-sampling operations. This paper builds on these findings and introduces an aliasing-free down-sampling operation which can easily be plugged into any CNN architecture: FrequencyLowCut pooling. Our experiments show that, in combination with simple Fast Gradient Sign Method (FGSM) adversarial training, our hyper-parameter-free operator substantially improves model robustness and avoids catastrophic overfitting. Our code is available at https://github.com/GeJulia/flc_pooling
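The core of FLC pooling can be sketched in a few lines: downsampling is performed by discarding all frequencies above the new Nyquist limit before returning to the spatial domain, so no aliasing can occur. This is a simplified sketch of the idea, assuming input sizes divisible by four; the authors' reference implementation is at https://github.com/GeJulia/flc_pooling.

```python
import torch

def flc_pool(x: torch.Tensor) -> torch.Tensor:
    """Downsample a (B, C, H, W) tensor by 2 via frequency-domain low-pass."""
    _, _, h, w = x.shape
    freq = torch.fft.fftshift(torch.fft.fft2(x), dim=(-2, -1))
    # Keep only the centered low-frequency quadrant: everything above the
    # new Nyquist limit is cut away, so the 2x downsampling cannot alias.
    low = freq[..., h // 4 : h // 4 + h // 2, w // 4 : w // 4 + w // 2]
    out = torch.fft.ifft2(torch.fft.ifftshift(low, dim=(-2, -1)))
    return out.real / 4  # compensate the amplitude change from cropping

print(flc_pool(torch.randn(1, 3, 32, 32)).shape)  # torch.Size([1, 3, 16, 16])
```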
Fix your downsampling ASAP! Be natively more robust via Aliasing and Spectral Artifact free Pooling
(2023)
Convolutional neural networks encode images through a sequence of convolutions, normalizations and non-linearities as well as downsampling operations into potentially strong semantic embeddings. Yet, previous work showed that even slight mistakes during sampling, leading to aliasing, can be directly attributed to the networks' lack of robustness. To address such issues and facilitate simpler and faster adversarial training, [12] recently proposed FLC pooling, a method for provably alias-free downsampling - in theory. In this work, we conduct a further analysis through the lens of signal processing and find that such current pooling methods, which address aliasing in the frequency domain, are still prone to spectral leakage artifacts. Hence, we propose aliasing and spectral artifact-free pooling, ASAP for short. While introducing only a few modifications to FLC pooling, networks using ASAP as their downsampling method exhibit higher native robustness against common corruptions, a property that FLC pooling was missing. ASAP also increases native robustness against adversarial attacks on high- and low-resolution data while maintaining similar clean accuracy or even outperforming the baseline.
Motivated by the recent trend towards the usage of larger receptive fields for more context-aware neural networks in vision applications, we aim to investigate how large these receptive fields really need to be. To facilitate such a study, several challenges need to be addressed, most importantly: (i) we need to provide an effective way for models to learn large filters (potentially as large as the input data) without increasing their memory consumption during training or inference, (ii) the study of filter sizes has to be decoupled from other effects such as the network width or number of learnable parameters, and (iii) the employed convolution operation should be a plug-and-play module that can replace any conventional convolution in a Convolutional Neural Network (CNN) and allow for an efficient implementation in current frameworks. To realize such models, we propose to learn not spatial but frequency representations of filter weights as neural implicit functions, such that even infinitely large filters can be parameterized by only a few learnable weights. The resulting neural implicit frequency CNNs are the first models to achieve results on par with the state-of-the-art on large image classification benchmarks while executing convolutions solely in the frequency domain, and they can be employed within any CNN architecture. They allow us to provide an extensive analysis of the learned receptive fields. Interestingly, our analysis shows that, although the proposed networks could learn very large convolution kernels, the learned filters practically translate into well-localized and relatively small convolution kernels in the spatial domain.
Many commonly well-performing convolutional neural network models have been shown to be susceptible to input data perturbations, indicating a low model robustness. To reveal model weaknesses, adversarial attacks are specifically optimized to generate small, barely perceivable image perturbations that flip the model prediction. Robustness against attacks can be gained by using adversarial examples during training, which in most cases reduces the measurable model attackability. Unfortunately, this technique can lead to robust overfitting, which results in non-robust models. In this paper, we analyze adversarially trained, robust models in the context of a specific network operation, the downsampling layer, and provide evidence that robust models have learned to downsample more accurately and suffer significantly less from downsampling artifacts, known as aliasing, than baseline models. In the case of robust overfitting, we observe a strong increase in aliasing and propose a novel early stopping approach based on the measurement of aliasing.
Many commonly well-performing convolutional neural network models have been shown to be susceptible to input data perturbations, indicating a low model robustness. Adversarial attacks are thereby specifically optimized to reveal model weaknesses by generating small, barely perceivable image perturbations that flip the model prediction. Robustness against attacks can be gained, for example, by using adversarial examples during training, which effectively reduces the measurable model attackability. In contrast, research on analyzing the source of a model's vulnerability is scarce. In this paper, we analyze adversarially trained, robust models in the context of a particularly suspicious network operation, the downsampling layer, and provide evidence that robust models have learned to downsample more accurately and suffer significantly less from aliasing than baseline models.
The automatic processing of handwritten forms remains a challenging task, wherein detection and subsequent classification of handwritten characters are essential steps. We describe a novel approach in which both steps - detection and classification - are executed in one task by a deep neural network. To this end, training data is not annotated by hand but generated artificially from the underlying forms and existing datasets. We demonstrate that this single-task approach is superior to the state-of-the-art two-task approach. The current study focuses on handwritten Latin letters and employs the EMNIST dataset. However, limitations were identified with this dataset, necessitating further customization. Finally, an overall recognition rate of 88.28% was attained on real data obtained from a written exam.
Background: Transesophageal left atrial (LA) pacing and transesophageal LA ECG recording are semi-invasive techniques for the diagnosis and therapy of supraventricular rhythm disturbances. Cardiac resynchronization therapy (CRT) with right atrial (RA) sensed biventricular pacing is an established therapy for heart failure patients with reduced left ventricular (LV) ejection fraction, sinus rhythm and interventricular electrical desynchronization.
Purpose: The aim of the study was to evaluate electromagnetic and voltage pacing fields of the combination of RA pacing, LA pacing and biventricular pacing in patients with long interatrial and interventricular electrical desynchronization.
Methods: The modelling and electromagnetic simulations of transesophageal LA pacing in combination with RA pacing and biventricular pacing were set up and analyzed with the CST (Computer Simulation Technology) software. Different electrodes were modelled in order to simulate different types of bipolar pacing in the 3D-CAD Offenburg heart rhythm model: the bipolar Solid S (Biotronik) electrode was modelled for RA pacing and right ventricular (RV) pacing, the Attain 4194 (Medtronic) for LV pacing, and the TO8 (Osypka) multipolar esophageal electrode with hemispheric electrodes for LA pacing.
Results: The electromagnetic pacing simulations were performed with pacemaker amplitudes of 3 V for RA pacing, 1.5 V for RV pacing, 50 V for LA pacing and 3 V for LV pacing, with a pacing impulse duration of 0.5 ms for RA, RV and LV pacing and 10 ms for LA pacing. The atrioventricular pacing delay after RA pacing was 140 ms. The different pacing modes AAI, VVI, DDD, DDD0V and DDD0D were evaluated for the analysis of the electric pacing field propagation of pacemaker, CRT and LA pacing. The pacing results were compared at minimum (LOW) and maximum (HIGH) parameter settings. While the LOW setting produced fewer tetrahedra and less accurate results, the HIGH setting produced many tetrahedra and therefore more accurate results.
Conclusions: The simulation of the combination of transesophageal LA pacing with RA sensed biventricular pacing is possible with the Offenburg heart rhythm model. The new temporary 4-chamber pacing method may be an additional useful method for CRT non-responders with a long interatrial electrical delay.
The Transport Layer Security (TLS) protocol is a widespread cryptographic protocol designed to provide secure communication over insecure networks by providing authenticity, integrity, and confidentiality. As a first step, a common master secret is negotiated in the TLS Handshake Protocol. In many configurations, this step makes considerable use of asymmetric cryptographic algorithms. It seems to be a prevalent assumption that the use of such asymmetric cryptographic algorithms is unsuitable for resource-constrained devices. Therefore, the work at hand analyzes the runtime performance of TLS v1.2 session establishment on an embedded ARM Cortex-M4 platform. We measure the execution time to generate and parse session establishment messages for the client and server sides. In particular, we study the impact of different elliptic curves used for the ephemeral Diffie-Hellman key exchange and the impact of different lengths and subject public key algorithms of certification paths. Our analysis shows that the use of asymmetric cryptographic algorithms is entirely feasible on resource-constrained devices if they are carefully chosen and well implemented. This allows the use of the well-proven TLS protocol also for applications from the (Industrial) Internet of Things, including fieldbus communication.
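The paper's measurements target an embedded Cortex-M4; purely as an illustration of what is being timed, the following Python sketch measures one complete TLS session establishment from a desktop client. The host name and timeout are arbitrary choices, and this is not the paper's measurement setup.

```python
import socket
import ssl
import time

def handshake_time(host: str, port: int = 443) -> float:
    """Time one TLS session establishment (TCP connect excluded)."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as raw:
        t0 = time.perf_counter()
        with ctx.wrap_socket(raw, server_hostname=host) as tls:
            t1 = time.perf_counter()
            print(tls.version(), tls.cipher()[0])
    return t1 - t0

print(f"handshake took {handshake_time('example.com') * 1e3:.1f} ms")
```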
We introduce an open-source Python framework named PHS (Parallel Hyperparameter Search) to enable hyperparameter optimization of any arbitrary Python function on numerous compute instances. This is achieved with minimal modifications inside the target function. Possible applications are expensive-to-evaluate numerical computations which strongly depend on hyperparameters, such as machine learning. Bayesian optimization is chosen as a sample-efficient method to propose the next query set of parameters.
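The following self-contained sketch illustrates the pattern PHS supports: distributing many evaluations of a hyperparameter-dependent target function over parallel workers. For brevity it proposes candidates at random instead of via Bayesian optimization, and the function and parameter names are illustrative, not the PHS API.

```python
import random
from concurrent.futures import ProcessPoolExecutor

def target(params: dict) -> float:
    """Stand-in for an expensive computation depending on hyperparameters."""
    x, y = params["x"], params["y"]
    return (x - 0.3) ** 2 + (y + 0.1) ** 2

def propose(n: int) -> list:
    """Random proposals; PHS would use Bayesian optimization here instead."""
    return [{"x": random.uniform(-1, 1), "y": random.uniform(-1, 1)}
            for _ in range(n)]

if __name__ == "__main__":
    candidates = propose(32)
    with ProcessPoolExecutor() as pool:   # the "numerous compute instances"
        losses = list(pool.map(target, candidates))
    best_loss, best_params = min(zip(losses, candidates), key=lambda t: t[0])
    print(best_loss, best_params)
```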
Machine learning (ML) has become highly relevant in applications across all industries, and specialists in the field are sought urgently. As it is a highly interdisciplinary field, requiring knowledge in computer science, statistics and the relevant application domain, experts are hard to find. Large corporations can sweep the job market by offering high salaries, which makes the situation for small and medium enterprises (SME) even worse, as they usually lack the capacities both for attracting specialists and for qualifying their own personnel. In order to meet the enormous demand for ML specialists, universities now teach ML in specifically designed degree programs as well as within established programs in science and engineering. While the teaching almost always uses practical examples, these are somewhat artificial or outdated, as real data from real companies is usually not available.

The approach reported in this contribution aims to tackle the above challenges in an integrated course, combining three independent aspects: first, teaching key ML concepts to graduate students from a variety of existing degree programs; second, qualifying working professionals from SME for ML; and third, applying ML to real-world problems faced by those SME. The course was carried out in two trial periods within a government-funded project at a university of applied sciences in south-west Germany. The region is dominated by SME, many of which are world leaders in their industries. Participants were students from different graduate programs as well as working professionals from several SME based in the region.

The first phase of the course (one semester) consists of the fundamental concepts of ML, such as exploratory data analysis, regression, classification, clustering, and deep learning. In this phase, student participants and working professionals were taught in separate tracks. Students attended regular classes and lab sessions (but were also given access to e-learning materials), whereas the professionals learned exclusively in a flipped classroom scenario: they were given access to e-learning units (video lectures and accompanying quizzes) for preparation, while face-to-face sessions were dominated by lab experiments applying the concepts.

Prior to the start of the second phase, participating companies were invited to submit real-world problems that they wanted to solve with the help of ML. The second phase consisted of practical ML projects, each tackling one of the problems and worked on by a mixed team of both students and professionals for the period of one semester. The teams were self-organized in the ways they preferred to work (e.g. remote vs. face-to-face collaboration), but also coached by one of the teaching staff. In several plenary meetings, the teams reported on their status as well as challenges and solutions.

In both periods, the course was monitored and extensive surveys were carried out. We report on the findings as well as the lessons learned. For instance, while the program was very well received, professional participants wished for more detailed coverage of theoretical concepts. A challenge faced by several teams during the second phase was a dropout of student members due to upcoming exams in other subjects.
Despite the success of convolutional neural networks (CNNs) in many computer vision and image analysis tasks, they remain vulnerable against so-called adversarial attacks: Small, crafted perturbations in the input images can lead to false predictions. A possible defense is to detect adversarial examples. In this work, we show how analysis in the Fourier domain of input images and feature maps can be used to distinguish benign test samples from adversarial images. We propose two novel detection methods: Our first method employs the magnitude spectrum of the input images to detect an adversarial attack. This simple and robust classifier can successfully detect adversarial perturbations of three commonly used attack methods. The second method builds upon the first and additionally extracts the phase of Fourier coefficients of feature-maps at different layers of the network. With this extension, we are able to improve adversarial detection rates compared to state-of-the-art detectors on five different attack methods. The code for the methods proposed in the paper is available at github.com/paulaharder/SpectralAdversarialDefense
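The first detection method can be approximated in a few lines: represent each image by the log-magnitude of its 2D Fourier spectrum and train a simple classifier on top. The sketch below uses random stand-in data and a plain logistic regression as the detector; the actual attacks, data, and classifiers of the paper differ.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def magnitude_spectrum(img: np.ndarray) -> np.ndarray:
    """Log-magnitude of the centered 2D Fourier spectrum as a feature vector."""
    return np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(img)))).ravel()

rng = np.random.default_rng(0)
benign = rng.normal(size=(200, 32, 32))                      # stand-in images
adversarial = benign + 0.1 * rng.normal(size=(200, 32, 32))  # stand-in attack

X = np.stack([magnitude_spectrum(i) for i in np.concatenate([benign, adversarial])])
y = np.array([0] * 200 + [1] * 200)
detector = LogisticRegression(max_iter=2000).fit(X, y)
print("training accuracy:", detector.score(X, y))
```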
Aerosol particles play an important role in the climate system by absorbing and scattering radiation and influencing cloud properties. They are also one of the biggest sources of uncertainty for climate modeling. Many climate models do not include aerosols in sufficient detail due to computational constraints. To represent key processes, aerosol microphysical properties and processes have to be accounted for. This is done in the ECHAM-HAM (European Center for Medium-Range Weather Forecast-Hamburg-Hamburg) global climate aerosol model using the M7 microphysics, but high computational costs make it very expensive to run with finer resolution or for a longer time. We aim to use machine learning to emulate the microphysics model at sufficient accuracy and reduce the computational cost by being fast at inference time. The original M7 model is used to generate data of input-output pairs on which a neural network (NN) is trained. We are able to learn the variables' tendencies, achieving an average R² score of 77.1%. We further explore methods to inform and constrain the NN with physical knowledge to reduce mass violation and enforce mass positivity. On a graphics processing unit (GPU), we achieve a speed-up of up to 64 times compared to the original model.
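A minimal sketch of the emulation setup, with random stand-in data in place of the ECHAM-HAM/M7 input-output pairs and an arbitrary network size:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

# Random stand-ins for the M7 input-output pairs; real data would come from
# ECHAM-HAM runs (inputs: tracer masses/numbers, T, RH, ...; outputs: tendencies).
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 24))
Y = X @ rng.normal(size=(24, 12)) + 0.05 * rng.normal(size=(5000, 12))

emulator = MLPRegressor(hidden_layer_sizes=(256, 256), max_iter=200).fit(X[:4000], Y[:4000])
print("held-out R^2:", r2_score(Y[4000:], emulator.predict(X[4000:])))
```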
Aerosol particles play an important role in the climate system by absorbing and scattering radiation and influencing cloud properties. They are also one of the biggest sources of uncertainty for climate modeling. Many climate models do not include aerosols in sufficient detail. In order to achieve higher accuracy, aerosol microphysical properties and processes have to be accounted for. This is done in the ECHAM-HAM global climate aerosol model using the M7 microphysics model, but increased computational costs make it very expensive to run at higher resolutions or for a longer time. We aim to use machine learning to approximate the microphysics model at sufficient accuracy and reduce the computational cost by being fast at inference time. The original M7 model is used to generate data of input-output pairs on which a neural network is trained. By using a special logarithmic transform we are able to learn the variables' tendencies, achieving an average score of . On a GPU we achieve a speed-up of 120 compared to the original model.
Estimation of Scattering and Transfer Parameters in Stratified Dispersive Tissues of the Human Torso
(2021)
The aim of this study is to understand the effect of the various layers of biological tissue on electromagnetic radiation in a certain frequency range. Understanding these effects could prove crucial in the development of dynamic imaging systems under operating environments during catheter ablation in the heart. As the catheter passes through arterial paths in the region of interest inside the heart through the aorta, a three-dimensional localization of the catheter is required. In this paper, a study of catheter detection using electromagnetic waves is presented. To this end, an appropriate model for the layers of the human torso is defined and simulated both without and with an inserted electrode.
Restoring hand motion to people experiencing amputation, paralysis, and stroke is a critical area of research and development. While electrode-based systems that use input from the brain or muscle have proven successful, these systems tend to be expensive and difficult to learn. One group of researchers is exploring the use of augmented reality (AR) as a new way of controlling hand prostheses. A camera mounted on eyeglasses tracks LEDs on a prosthetic to execute opening and closing commands using one of two different AR systems. One system uses a rectangular command window to control motion: crossing horizontally signals “open” along one direction and “close” in the opposite direction. The second system uses a circular command window: once control is enabled, gripping strength can be controlled by the direction of head motion. While the visual system remains to be tested with patients, its low cost, ease of use, and lack of electrodes make the device a promising solution for restoring hand motion.
Neuroprosthetics 2.0
(2019)
The present invention relates to open-loop and closed-loop control units for extracorporeal circulatory support, to systems comprising such an open-loop and closed-loop control unit, and to corresponding methods. An open-loop and closed-loop control unit (10) for extracorporeal circulatory support is proposed, which is configured to receive a measurement of an ECG signal (12) of a supported patient over a predefined period of time, wherein the ECG signal (12) comprises multiple data points for each time point within a heart cycle. The open-loop and closed-loop control unit (10) comprises an evaluation unit (100) which is configured to evaluate the data points for at least one time point in a spatial and/or temporal manner and to determine at least one amplitude change (14) within the heart cycle based on the evaluated data points. The open-loop and closed-loop control unit (10) is further configured to output an open-loop and/or closed-loop signal (16) for extracorporeal circulatory support at a predefined point in time after the at least one amplitude change (14).
Device and method for monitoring and optimising a temporal trigger stability (WO2023094554A1)
(2023)
The present invention relates to devices for monitoring and optimising a temporal trigger stability of an extracorporeal circulatory support means, and to open-loop and closed-loop control units for the extracorporeal circulatory support means comprising such a device, and to corresponding methods. A device (10) for monitoring a temporal trigger stability of an extracorporeal circulatory support means is accordingly proposed, which device is designed to receive a first dataset (14) of a measurement of an ECG signal of a supported patient over a predefined period of time. The device (10) comprises an evaluation unit (16), which is designed to determine or identify a plurality of R triggers (26) from the first dataset (14), wherein the evaluation unit (16) is also designed to receive or provide a second dataset (20) having evaluated ECG signals and a plurality of R triggers (28) and to selectively map the second dataset (20) on the first dataset (14). The device is also designed to emit a signal (22) that characterises a temporal gap between successive R triggers (26) from the first dataset (14) and successive R triggers (28) from the second dataset (20) which are mapped on the first dataset.
Oesophageal Electrode Probe and Device for Cardiological Treatment and/or Diagnosis (US20200261024)
(2020)
An oesophageal electrode probe for bioimpedance measurement and/or for neurostimulation is provided; a device for transoesophageal cardiological treatment and/or cardiological diagnosis is also provided; a method for the open-loop or closed-loop control of a cardiological catheter ablation device and/or a cardiological, circulatory and/or respiratory support device is also provided. The oesophageal electrode probe comprises a bioimpedance measuring device for measuring the bioimpedance of at least one part of tissue surrounding the oesophageal electrode probe. The bioimpedance device comprises at least one first and one second electrode. The at least one first electrode is arranged on a side of the oesophageal electrode probe facing towards the heart. The at least one second electrode is arranged on a side of the oesophageal electrode probe facing away from the heart. The device comprises the oesophageal electrode probe and a control and/or evaluation device.
This study focuses on the autonomous navigation and mapping of indoor environments using a drone equipped only with a monocular camera and height measurement sensors. A visual SLAM algorithm was employed to generate a preliminary map of the environment and to determine the drone's position within the map. A deep neural network was utilized to generate a depth image from the monocular camera's input, which was subsequently transformed into a point cloud to be projected into the map. By aligning the depth point cloud with the map, 3D occupancy grid maps were constructed by using ray tracing techniques to get a precise depiction of obstacles and the surroundings. Due to the absence of IMU data from the low-cost drone for the SLAM algorithm, the created maps are inherently unscaled. However, preliminary tests with relative navigation in unscaled maps have revealed potential accuracy issues, which can only be overcome by incorporating additional information from the given sensors for scale estimation.
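The projection of a predicted depth image into a point cloud follows the standard pinhole camera model. A small sketch (the intrinsics are placeholder values; with monocular depth, the result is only defined up to the unknown scale discussed above):

```python
import numpy as np

def depth_to_pointcloud(depth: np.ndarray, fx: float, fy: float,
                        cx: float, cy: float) -> np.ndarray:
    """Back-project a dense depth image (H, W) into an (H*W, 3) point cloud.

    Standard pinhole model; (fx, fy, cx, cy) are the camera intrinsics.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

cloud = depth_to_pointcloud(np.ones((240, 320)), fx=250.0, fy=250.0,
                            cx=160.0, cy=120.0)
print(cloud.shape)  # (76800, 3)
```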
Modern industrial production is heavily dependent on efficient workflow processes and automation. The steady flow of raw materials as well as the separation of vital parts and semi-finished products are at the core of these automated procedures. Commonly used systems for this work are bowl feeders, which separate the parts and material by a combination of mechanical vibration and friction. The production of these tools, especially the design of the ramping spiral, is delicate and time-consuming work, as the shape, slope, and material must be carefully adjusted for the corresponding parts. In this work, we propose an automated approach, making use of optimization procedures from artificial intelligence, to design the spiral ramps of bowl feeders. To this end, the whole system and the considered parts are physically simulated, and the optimized geometry is subsequently exported into a CAD system for construction or 3D printing. The use of evolutionary optimization necessitates developing a mathematical model of the whole setup and finding an efficient representation of its integral features.
The precise positioning of mobile systems is a prerequisite for any autonomous behavior, in an industrial environment as well as in field robotics. The paper describes the setup of an experimental platform and its use for the evaluation of simultaneous localization and mapping (SLAM) algorithms. Two approaches are compared. First, a local method based on point cloud matching and the integration of inertial measurement units is evaluated. Subsequent matching makes it possible to create a three-dimensional point cloud that can be used as a map in subsequent runs. The second approach is a full SLAM algorithm based on graph relaxation models, incorporating the full sensor suite of odometry, inertial sensors, and 3D laser scan data.
A novel approach for the synchronization and calibration of a camera and an inertial measurement unit (IMU) in the research-oriented visual-inertial mapping and localization framework maplab is presented. Mapping and localization are based on detecting different features in the environment. In addition to the possibility of creating single-session maps, the included algorithms allow merging maps to increase mapping accuracy and obtain large-scale maps. Furthermore, the algorithms can be used to optimize the collected data. The preliminary results show that, after appropriate calibration and synchronization, maplab can be used efficiently for mapping, especially in rooms and small building environments.
Object Detection and Mapping with Unmanned Aerial Vehicles Using Convolutional Neural Networks
(2021)
Significant progress has been made in the field of deep learning through intensive research over the last decade. So-called convolutional neural networks are an essential component of this research. In this type of neural network, the mathematical convolution operator is used to extract characteristics or anomalies. The purpose of this work is to investigate the extent to which it is possible, in certain initial settings, to feed aerial recordings and flight data of Unmanned Aerial Vehicles (UAVs) into a neural network in order to detect and map an object. Using the calculated contours or dimensions of the so-called bounding boxes, the position of the objects can be determined relative to the current UAV location.
The paper describes a systematic approach for precise short-time cloud coverage prediction based on an optical system. We present a distinct pre-processing stage that uses a model-based clear-sky simulation to enhance the cloud segmentation in the images. The images are acquired by a sky imager system with a fish-eye lens to cover a maximum area. After a calibration step, the image is rectified to enable linear prediction of cloud movement. In a subsequent step, the clear-sky model is estimated on actual high dynamic range images and combined with a threshold-based approach to segment clouds from sky. In the final stage, a multi-hypothesis linear tracking framework estimates cloud movement, velocity and possible coverage of a given photovoltaic power station. We employ a Kalman filter framework that operates efficiently on the rectified images. The evaluation on real-world data suggests high coverage prediction accuracy above 75%.
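A constant-velocity Kalman filter over the rectified image plane, as used in the tracking stage, can be sketched as follows; the noise covariances and centroid measurements are illustrative assumptions.

```python
import numpy as np

# Constant-velocity Kalman filter for one cloud centroid in the rectified
# image. State: [x, y, vx, vy]; measurement: the segmented centroid [x, y].
dt = 1.0
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)
Q = 0.01 * np.eye(4)  # process noise (assumed)
R = 4.0 * np.eye(2)   # measurement noise in pixels (assumed)

x, P = np.zeros(4), 100.0 * np.eye(4)
for z in [(10.0, 12.0), (14.0, 13.0), (18.0, 15.0)]:  # centroids per frame
    x, P = F @ x, F @ P @ F.T + Q                     # predict
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)      # Kalman gain
    x = x + K @ (np.asarray(z) - H @ x)               # update
    P = (np.eye(4) - K @ H) @ P
print("estimated velocity [px/frame]:", x[2:])
```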
The applicability of local magnetic field characteristics for the more precise localization of subjects and/or objects in indoor environments, such as railway stations, airports, exhibition halls, showrooms, or shopping centers, is considered. An investigation has been carried out to find out whether and how low-cost magnetic field sensors and mobile robot platforms can be used to create maps that improve the accuracy and robustness of later navigation with smartphones or other devices.
The aim of this work is the application and evaluation of a method to visually detect markers at a distance of up to five meters and determine their real-world position. Combinations of cameras and lenses with different parameters were studied to determine the optimal configuration. Based on this configuration, camera images were taken after proper calibration. These images are then transformed into a bird's-eye view using a homography matrix. The homography matrix is calculated from four point pairs as well as from coordinate transformations. The obtained images show the ground plane undistorted, making it possible to convert a pixel position into a real-world position with a conversion factor. The proposed approach helps to effectively create datasets for training neural networks for navigation purposes.
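A minimal OpenCV sketch of the four-point-pair variant: estimate the homography, warp to a bird's-eye view, and map a detected marker pixel to a ground position. All pixel coordinates and the 1 cm-per-pixel scale are hypothetical.

```python
import cv2
import numpy as np

# Four image points of a known ground rectangle (hypothetical pixel values)
src = np.float32([[420, 710], [860, 705], [955, 980], [330, 990]])
# Their metric positions, scaled to pixels (assumed scale: 1 cm per pixel)
dst = np.float32([[0, 0], [200, 0], [200, 300], [0, 300]])

H = cv2.getPerspectiveTransform(src, dst)      # homography from 4 point pairs
frame = np.zeros((1080, 1920, 3), np.uint8)    # stand-in for a camera image
birds_eye = cv2.warpPerspective(frame, H, (200, 300))

# A marker detected at pixel p maps to a ground position via H:
p = np.array([500.0, 800.0, 1.0])              # homogeneous pixel coordinates
q = H @ p
print("ground position [cm]:", q[:2] / q[2])
```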
This paper presents an approach for implementing an automated hit detection and score calculation system for a steel dartboard using a standard webcam. First, the rectilinear field separations of the dartboard are described mathematically by means of line slopes and are then stored. These slopes serve as a basis for later score calculation. In addition, thrown darts have to be detected, and the pixel at which the dart hits the dartboard has to be determined. When this information is known, a comparison is made using the line slopes, allowing the field number of the hit to be determined. The decision between single, double or triple hits is made by evaluating the defined colors on the dartboard. All these functions are then packaged in a MATLAB GUI.
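The mapping from a hit pixel to a field number reduces to comparing the hit's angle around the board centre with the stored sector boundaries. A Python sketch of this lookup (independent of the paper's MATLAB implementation; the centre and hit coordinates are made up):

```python
import math

# Standard dartboard sector numbers, clockwise starting at the top (20).
SECTORS = [20, 1, 18, 4, 13, 6, 10, 15, 2, 17, 3, 19, 7, 16, 8, 11, 14, 9, 12, 5]

def field_number(cx: float, cy: float, hx: float, hy: float) -> int:
    """Map the dart's impact pixel (hx, hy) to a field number.

    (cx, cy) is the board centre in the image. Each field spans 18 degrees,
    with sector boundaries offset by 9 degrees around vertical; image
    y grows downwards, hence the cy - hy term.
    """
    angle = math.degrees(math.atan2(hx - cx, cy - hy)) % 360  # 0 deg = up
    return SECTORS[int(((angle + 9) % 360) // 18)]

print(field_number(500, 500, 500, 300))  # straight above the centre -> 20
```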
Sweaty has already participated several times in RoboCup soccer competitions (Adult Size). The current work is focused on coordinating the play of two robots. Moreover, we are working on stabilizing the gait by adding additional sensor information. Ongoing work includes the optimization of the control strategy by balancing between impedance and position control. By minimizing jerk, gait and overall gameplay should improve significantly.
Correlation Clustering, also called the minimum cost Multicut problem, is the process of grouping data by pairwise similarities. It has proven to be effective on clustering problems where the number of classes is unknown. However, not only is the Multicut problem NP-hard; an undirected graph G with n vertices representing single images also has up to n(n-1)/2 edges, making it challenging to implement correlation clustering for large datasets. In this work, we propose Multi-Stage Multicuts (MSM) as a scalable approach for image clustering. Specifically, we solve minimum cost Multicut problems across multiple distributed compute units. Our approach not only allows us to solve problem instances which are too large to fit into the shared memory of a single compute node, but it also achieves significant speedups while preserving the clustering accuracy at the same time. We evaluate our proposed method on the CIFAR10 …
Multiple Object Tracking (MOT) is a long-standing task in computer vision. Current approaches based on the tracking-by-detection paradigm either require some sort of domain knowledge or supervision to associate data correctly into tracks. In this work, we present a self-supervised multiple object tracking approach based on visual features and minimum cost lifted multicuts. Our method is based on straightforward spatio-temporal cues that can be extracted from neighboring frames in an image sequence without supervision. Clustering based on these cues enables us to learn the required appearance invariances for the tracking task at hand and to train an AutoEncoder to generate suitable latent representations. Thus, the resulting latent representations can serve as robust appearance cues for tracking even over large temporal distances where no reliable spatio-temporal features can be extracted. We show that, despite being trained without using the provided annotations, our model provides competitive results on the challenging MOT Benchmark for pedestrian tracking.
In this work, we evaluate two different image clustering objectives, k-means clustering and correlation clustering, in the context of Triplet Loss induced feature space embeddings. Specifically, we train a convolutional neural network to learn discriminative features by optimizing two popular versions of the Triplet Loss in order to study their clustering properties under the assumption of noisy labels. Additionally, we propose a new, simple Triplet Loss formulation, which shows desirable properties with respect to formal clustering objectives and outperforms the existing methods. We evaluate all three Triplet loss formulations for K-means and correlation clustering on the CIFAR-10 image classification dataset.
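For reference, the standard anchor-positive-negative Triplet Loss that the evaluated versions build on has the following form; the paper's new formulation is a variant and is not reproduced here.

```latex
% Standard (anchor a, positive p, negative n) Triplet Loss with margin alpha;
% f is the embedding network.
\[
  \mathcal{L}(a, p, n) \;=\; \max\!\Bigl( 0,\;
    \lVert f(a) - f(p) \rVert_2^2 \;-\; \lVert f(a) - f(n) \rVert_2^2 + \alpha
  \Bigr)
\]
```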
Estimating the Robustness of Classification Models by the Structure of the Learned Feature-Space
(2022)
Over the last decade, the development of deep image classification networks has mostly been driven by the search for the best performance in terms of classification accuracy on standardized benchmarks like ImageNet. More recently, this focus has been expanded by the notion of model robustness, i.e., the generalization abilities of models towards previously unseen changes in the data distribution. While new benchmarks, like ImageNet-C, have been introduced to measure robustness properties, we argue that fixed test sets are only able to capture a small portion of possible data variations and are thus limited and prone to generate new overfitted solutions. To overcome these drawbacks, we suggest estimating the robustness of a model directly from the structure of its learned feature space. We introduce robustness indicators, which are obtained via unsupervised clustering of latent representations from a trained classifier, and show very high correlations to the model performance on corrupted test data.
Engineering, construction and operation of complex machines involve a wide range of complicated, simultaneous tasks, which could potentially be automated. In this work, we focus on perception tasks in such systems, investigating deep learning approaches for multi-task transfer learning with limited training data. We show an approach that takes advantage of a technical system's focus on selected objects and their properties. We create focused representations and simultaneously solve joint objectives in a system through multi-task learning with convolutional autoencoders. The focused representations are used as a starting point for the data-efficient solution of the additional tasks. The efficiency of this approach is demonstrated using images and tasks of an autonomous circular crane with a grapple.
Method for controlling a device, in particular, a prosthetic hand or a robotic arm (US20200327705A1)
(2020)
A method for controlling a device, in particular a prosthetic hand or a robotic arm, includes using an operator-mounted camera to detect at least one marker positioned on or in relation to the device. Starting from the detection of the at least one marker, a predefined movement of the operator together with the camera is detected and is used to trigger a corresponding action of the device. The predefined movement of the operator is detected in the form of a line of sight by means of camera tracking. A system for controlling a device, in particular a prosthetic hand or a robotic arm, includes a pair of AR glasses adapted to detect the at least one marker and to detect the predefined movement of the operator.
A versatile liquid metal (LM) printing process is presented, enabling the fabrication of various fully printed devices such as intra- and interconnect wires, resistors, diodes, transistors, and basic circuit elements such as inverters, which are process-compatible with other digital printing and thin-film structuring methods for integration. To this end, a glass capillary-based direct-write method for printing LMs such as eutectic gallium alloys is demonstrated, exploring the potential for fully printed LM-enabled devices. Examples of successful device fabrication include resistors, p-n diodes, and field effect transistors. The device functionality and the ease of one integrated fabrication flow show that the potential of LM printing far exceeds the mere interconnection of conventional electronic devices in printed electronics.
The nonlinear behavior of inverters is largely impacted by the interlocking and switching times. A method for the online identification of the switching times of semiconductors in inverters is presented in the following work. Being able to identify these times makes it possible to compensate for the nonlinear behavior, reduce the interlocking time, and use the information for diagnostic purposes. The method is first derived theoretically by examining different inverter switching cases and determining potential identification possibilities. It is then modified to consider the entire module for more robust identification. The methodology, including limitations and boundary conditions, is investigated, and a comparison of two methods of measurement acquisition is provided. Subsequently, the developed hardware is described and the implementation in an FPGA is carried out. Finally, the results are presented and discussed, and potential challenges are addressed.
Current Harmonics Control Algorithm for inverter-fed Nonlinear Synchronous Electrical Machines
(2023)
Current harmonics are a well-known challenge of electrical machines. They can be undesirable, as they can cause instabilities in the control, generate additional losses and lead to torque ripple with noise. However, they can also be deliberately generated in new methods in order to improve machine behavior. In this paper, an algorithm for controlling current harmonics is proposed. It can be described as a combination of different PI controllers for defined angles of the machine with repetitive control characteristics over whole revolutions. The controller design is explained, and important points where linearization is necessary are shown. Furthermore, the limits are analyzed and, for validation, measurement results with a permanent magnet synchronous machine on the test bench are considered.
The nonlinear behavior of inverters is mainly influenced by the interlocking and switching times of the semiconductors. In the following work, a method is presented that enables online identification of the switching times of the semiconductors. This information allows compensation of the nonlinear behavior and a reduction of the interlocking time, and it can be used for diagnostic purposes. First, a theoretical derivation of the method is given by considering different switching cases of the inverter and deriving identification possibilities. The method is then extended so that the entire module is taken into account. Furthermore, a possible theoretical implementation is shown. After the methodology has been investigated with regard to possible limitations, boundary conditions and real hardware, an implementation in an FPGA is performed. Finally, the results are presented and discussed, and further improvements are presented in an outlook.
This paper describes the concept and some results of the project "Menschen Lernen Maschinelles Lernen" (Humans Learn Machine Learning, ML2) of the University of Applied Sciences Offenburg. It brings together students of different courses of study and practitioners from companies on the subject of Machine Learning. A mixture of blended learning and practical projects ensures a tight coupling of machine learning theory and application. The paper details the phases of ML2 and mentions two successful example projects.
In this study, a facile method to fabricate a cohesive ion-gel based gate insulator for electrolyte-gated transistors is introduced. The adhesive and flexible ion-gel can be laminated easily on the semiconducting channel and electrode manually by hand. The ion-gel is synthesized by a straightforward technique without complex procedures and shows a remarkable ionic conductivity of 4.8 mS cm⁻¹ at room temperature. When used as a gate insulator in electrolyte-gated transistors (EGTs), an on/off current ratio of 2.24 × 10⁴ and a subthreshold swing of 117 mV dec⁻¹ can be achieved. This performance is roughly equivalent to that of ink drop-casted ion-gels in electrolyte-gated transistors, indicating that the film-attachment method might represent a valuable alternative to ink drop-casting for the fabrication of gate insulators.
The fast and cost-effective manufacturing of tools for thermoforming is an essential requirement to shorten product development times. Thus, additive processes are increasingly used in tooling for the thermoforming of plastic sheets. However, a disadvantage of many additive methods is that they are highly cost-intensive, since complex systems based on laser technology and expensive metal powders are needed. Therefore, this paper examines how comparatively inexpensive additive methods, e.g. Binder Jetting, can be used to manufacture tools which provide sufficient strength for thermoforming. The use of comparatively low-priced inkjet technology for the layer construction and a polymer plaster as material can be expected to result in significant cost reductions. Based on a case study using a cowling (engine bonnet) for an Unmanned Aerial Vehicle (UAV), the development of a complex tool for thermoforming is demonstrated. The objective of this study is to produce a tool for a complex-shaped component in small numbers and high quality, in a short time and at reasonable cost. Within the tooling process, integrated vacuum channels are implemented in additive tooling without the need for additional post-processing (for example, drilling). In addition, special technical challenges, such as the demolding of undercuts or the parting of the tool, are explained. All process steps from tool design to the use of the additively manufactured tool are analyzed. Based on the manufacturing of a small series of cowlings for a UAV made of plastic sheets (ABS), it is shown that Binder Jetting offers sufficient mechanical and thermal strength for additive tooling. In addition, an economic evaluation of the tool manufacturing and a detailed consideration of the required manufacturing times for the different process steps are carried out. Finally, a comparison is made with conventional and alternative additive tooling methods.
The monitoring of industrial environments ensures that highly automated processes run without interruption. However, even if the industrial machines themselves are monitored, the communication lines are currently not continuously monitored in today's installations. They are usually checked only during maintenance intervals or in case of error. In addition, the cables or connected machines usually have to be removed from the system for the duration of the test. To overcome these drawbacks, we have developed and implemented cost-efficient, continuous signal monitoring for Ethernet-based industrial bus systems. Several methods have been developed to assess the quality of the cable. These methods can be classified as either passive or active. Active methods are not suitable if interruption of the communication is undesired. Passive methods, on the other hand, require oversampling, which calls for expensive hardware. In this paper, a novel passive method combined with undersampling, targeting cost-efficient hardware, is proposed.
The mathematical representation of data in the Spherical Harmonic (SH) domain has recently regained increasing interest in the machine learning community. This technical report gives an in-depth introduction to the theoretical foundation and practical implementation of SH representations, summarizing works on rotation-invariant and equivariant features, as well as convolutions and exact correlations of signals on spheres. In extension, these methods are then generalized from scalar SH representations to Vectorial Harmonics (VH), providing the same capabilities for 3D vector fields on spheres.
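The scalar SH representation referred to here expands a signal on the sphere into the orthonormal basis Y_{lm}, from which rotation-invariant features such as the power spectrum follow directly:

```latex
% SH expansion of a spherical signal f and the rotation-invariant
% power-spectrum features p_l derived from its coefficients a_lm.
\[
  f(\theta, \varphi) \;=\; \sum_{l=0}^{\infty} \sum_{m=-l}^{l}
    a_{lm}\, Y_{lm}(\theta, \varphi),
  \qquad
  p_l \;=\; \sum_{m=-l}^{l} \lvert a_{lm} \rvert^2 .
\]
```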
Multiple Object Tracking (MOT) is a long-standing task in computer vision. Current approaches based on the tracking-by-detection paradigm either require some sort of domain knowledge or supervision to associate data correctly into tracks. In this work, we present an unsupervised multiple object tracking approach based on visual features and minimum cost lifted multicuts. Our method is based on straightforward spatio-temporal cues that can be extracted from neighboring frames in an image sequence without supervision. Clustering based on these cues enables us to learn the required appearance invariances for the tracking task at hand and to train an autoencoder to generate suitable latent representations. Thus, the resulting latent representations can serve as robust appearance cues for tracking even over large temporal distances where no reliable spatio-temporal features could be extracted. We show that, despite being trained without using the provided annotations, our model provides competitive results on the challenging MOT Benchmark for pedestrian tracking.
Due to the rapidly increasing storage consumption worldwide, as well as the expectation of continuous availability of information, the complexity of administration in today's data centers is growing continuously. Integrated techniques for monitoring hard disks can increase the reliability of storage systems. However, these techniques often lack intelligent data analysis to perform predictive maintenance. To solve this problem, machine learning algorithms can be used to detect potential failures in advance and prevent them. In this paper, an unsupervised model for predicting hard disk failures based on Isolation Forest is proposed. Accordingly, a method is presented that can deal with highly imbalanced datasets, as the experiment on the Backblaze benchmark dataset demonstrates.
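A minimal sketch of the approach with scikit-learn's IsolationForest, using random stand-in features instead of real SMART attributes; the contamination value is an illustrative assumption.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
healthy = rng.normal(0.0, 1.0, size=(10000, 8))  # SMART-style features (stand-in)
failing = rng.normal(3.0, 1.0, size=(30, 8))     # rare pre-failure signatures

model = IsolationForest(contamination=0.003, random_state=0).fit(healthy)
pred = model.predict(np.vstack([healthy[:100], failing]))  # +1 normal, -1 anomaly
print(int((pred == -1).sum()), "of", len(pred), "samples flagged as anomalous")
```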
The recent successes and widespread application of compute-intensive machine learning and data analytics methods have been boosting the usage of the Python programming language on HPC systems. While Python provides many advantages for its users, it has not been designed with a focus on multi-user environments or parallel programming, making it quite challenging to maintain stable and secure Python workflows on an HPC system. In this paper, we analyze the key problems induced by the usage of Python on HPC clusters and sketch appropriate workarounds for efficiently maintaining multi-user Python software environments, securing and restricting resources of Python jobs, and containing Python processes, while focusing on deep learning applications running on GPU clusters.
Diffracted waves carry high-resolution information that can help interpret fine structural details at a scale smaller than the seismic wavelength. Because of the low signal-to-noise ratio of diffracted waves, it is challenging to preserve them during processing and to identify them in the final data. A traditional approach is, therefore, to pick the diffractions manually. However, such a task is tedious and often prohibitive; thus, current attention is given to domain adaptation. These methods aim to transfer knowledge from a labeled domain to train the model and then infer on the real, unlabeled data. In this regard, it is common practice to create a synthetic labeled training dataset, followed by testing on unlabeled real data. Unfortunately, such a procedure may fail due to the existing gap between the synthetic and the real distribution, since quite often synthetic data oversimplifies the problem, and consequently the transfer learning becomes a hard and non-trivial procedure. Furthermore, deep neural networks are characterized by their high sensitivity towards cross-domain distribution shift. In this work, we present a deep learning model that builds a bridge between both distributions, creating a semi-synthetic dataset that fills in the gap between the synthetic and real domains. More specifically, our proposal is a feed-forward, fully convolutional neural network for image-to-image translation that allows us to insert synthetic diffractions while preserving the original reflection signal. A series of experiments validate that our approach produces convincing seismic data containing the desired synthetic diffractions.
Diffracted waves carry high-resolution information that can help interpret fine structural details at a scale smaller than the seismic wavelength. However, the diffraction energy tends to be weak compared to the reflected energy and is also sensitive to inaccuracies in the migration velocity, making the identification of its signal challenging. In this work, we present an innovative workflow to automatically detect scattering points in the migration dip angle domain using deep learning. By taking advantage of the different kinematic properties of reflected and diffracted waves, we separate the two types of signals by migrating the seismic amplitudes to dip angle gathers using prestack depth imaging in the local angle domain. Convolutional neural networks are a class of deep learning algorithms able to learn to extract spatial information from the data in order to identify its characteristics; they have now become the method of choice for solving supervised pattern recognition problems. In this work, we use wave equation modelling to create a large and diversified dataset of synthetic examples to train a network to identify the probable positions of scattering objects in the subsurface. After giving an intuitive introduction to diffraction imaging and deep learning and discussing some of the pitfalls of the methods, we evaluate the trained network on field data and demonstrate the validity and good generalization performance of our algorithm. We successfully identify diffraction points with high accuracy and high resolution, including those with a low ratio of signal to noise and reflections. We also show how our method allows us to quickly scan through high-dimensional data consisting of several versions of a dataset migrated with a range of velocities, to overcome the strong effect of incorrect migration velocity on the diffraction signal.
Extracting horizon surfaces from key reflections in a seismic image is an important step of the interpretation process. Interpreting a reflection surface in a geologically complex area is a difficult and time-consuming task, and it requires an understanding of the 3D subsurface geometry. Common methods to help automate the process are based on tracking waveforms in a local window around manual picks. Those approaches often fail when the wavelet character lacks lateral continuity or when reflections are truncated by faults. We have formulated horizon picking as a multiclass segmentation problem and solved it by supervised training of a 3D convolutional neural network. We design an efficient architecture to analyze the data over multiple scales while keeping memory and computational needs at a practical level. To allow for uncertainties in the exact location of the reflections, we use a probabilistic formulation to express the horizon positions. By using a masked loss function, we give interpreters flexibility when picking the training data. Our method allows experts to interactively improve the picking results by fine-tuning the network in the more complex areas. We also determine how our algorithm can be used to extend horizons to the prestack domain by following reflections across offset planes, even in the presence of residual moveout. We validate our approach on two field datasets and show that it yields accurate results on nontrivial reflectivity while being trained from a workable amount of manually picked data. Initial training of the network takes approximately 1 h, and fine-tuning and prediction on a large seismic volume take a minute at most.
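A masked loss of the kind mentioned above could look as follows; this is a hedged sketch in PyTorch, with shapes and names invented for illustration:

```python
import torch
import torch.nn.functional as F

# Illustrative sketch (names and shapes are assumptions): a masked loss for
# horizon picking cast as multiclass segmentation. Voxels without manual picks
# receive zero weight, giving interpreters flexibility in labeling.

def masked_horizon_loss(logits, labels, mask):
    """logits: (B, C, D, H, W); labels: (B, D, H, W) long;
    mask: (B, D, H, W) float in {0, 1}."""
    loss = F.cross_entropy(logits, labels, reduction="none")  # per-voxel loss
    return (loss * mask).sum() / mask.sum().clamp(min=1)      # mean over picked voxels
```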
In many application areas, Deep Reinforcement Learning (DRL) has led to breakthroughs. In Curriculum Learning, the machine learning algorithm is not presented with examples at random, but in a meaningful order of increasing difficulty. This has been used in many application areas to further improve the results of learning systems or to reduce their learning time. Such approaches range from learning plans created manually by domain experts to those created automatically; the automated creation of learning plans is one of the biggest challenges. In this work, we investigate an approach in which a trainer learns in parallel and analogously to the student in order to automatically create a learning plan for the student; we refer to this setup as Double Deep Reinforcement Learning (DDRL). Three reward functions based on the learner's reward, Friendly, Adversarial, and Dynamic, are compared. The domain for evaluation is kicking with variable distance, direction, and relative ball position in the SimSpark simulated soccer environment. As a result, Statistic Curriculum Learning (SCL) performs better than a random curriculum with respect to training time and result quality. DDRL reaches a quality comparable to the baseline and outperforms it significantly in shorter training runs in the distance-direction subdomain, reducing the number of required training cycles by almost 50%.
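The three trainer reward schemes could, for instance, be realized as follows; the exact formulas are assumptions derived from the scheme names, not taken from the paper:

```python
# Hypothetical sketch of the three trainer reward schemes; the formulas are
# assumptions based on the abstract, not the paper's actual definitions.

def trainer_reward(scheme, learner_reward, prev_learner_reward):
    if scheme == "friendly":      # trainer profits when the learner succeeds
        return learner_reward
    if scheme == "adversarial":   # trainer profits when the learner struggles
        return -learner_reward
    if scheme == "dynamic":       # trainer rewards learner improvement over time
        return learner_reward - prev_learner_reward
    raise ValueError(f"unknown scheme: {scheme}")
```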
Significant progress in the development and commercialization of electrically conductive adhesives has been made. This makes shingling a very attractive approach for solar cell interconnection. In this study, we investigate the shading tolerance of two types of solar modules based on shingle interconnection: first, the already commercialized string approach, and second, the matrix technology where solar cells are intrinsically interconnected in parallel and in series. An experimentally validated LTspice model predicts major advantages for the power output of the matrix layout under partial shading. Diagonal as well as random shading of a 1.6 m² solar module is examined. Power gains of up to 73.8 % for diagonal shading and up to 96.5 % for random shading are found for the matrix technology compared to the standard string approach. The key factor is an increased current extraction due to lateral current flows. Especially under minor shading, the matrix technology benefits from an increased fill factor as well. Under diagonal shading, we find the probability of parts of the matrix module being bypassed to be reduced by 40 % in comparison to the string module. In consequence, the overall risk of hotspot occurrence in matrix modules is decreased significantly.
Analysis of Amplitude and Phase Errors in Digital-Beamforming Radars for Automotive Applications
(2020)
Fundamentally, automotive radar sensors with Digital-Beamforming (DBF) use several transmitter and receiver antennas to measure the direction of the target. However, hardware imperfections, tolerances in the feeding lines of the antennas, coupling effects as well as temperature changes and ageing will cause amplitude and phase errors. These errors can lead to misinterpretation of the data and result in hazardous actions of the autonomous system. First, the impact of amplitude and phase errors on angular estimation is discussed and analyzed by simulations. The results are compared with the measured errors of a real radar sensor. Further, a calibration method is implemented and evaluated by measurements.
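The following NumPy sketch illustrates, under assumed array and error parameters, how per-channel amplitude and phase errors distort the angular spectrum of a digital-beamforming array:

```python
import numpy as np

# Illustrative Monte-Carlo-style sketch (array size and error magnitudes are
# assumptions): effect of random amplitude and phase errors on the angular
# spectrum of a uniform linear array with half-wavelength spacing.

n_ant, target_deg = 8, 10.0
angles = np.deg2rad(np.linspace(-90, 90, 721))
k = np.arange(n_ant)

# Ideal steering vector of the target, distorted by per-channel errors.
amp_err = 1 + 0.1 * np.random.randn(n_ant)           # 10 % amplitude ripple
phase_err = np.deg2rad(5) * np.random.randn(n_ant)   # 5 deg phase ripple
signal = amp_err * np.exp(1j * (np.pi * k * np.sin(np.deg2rad(target_deg)) + phase_err))

# Conventional beamforming spectrum: scan over all hypothesis angles.
steering = np.exp(1j * np.pi * np.outer(np.sin(angles), k))
spectrum = 20 * np.log10(np.abs(steering.conj() @ signal) / n_ant)
print("estimated angle [deg]:", np.rad2deg(angles[np.argmax(spectrum)]))
```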
In this paper, we describe the first publicly available fine-grained product recognition dataset based on leaflet images. Using advertisement leaflets, collected over several years from different European retailers, we provide a total of 41.6k manually annotated product images in 832 classes. Further, we investigate three different approaches for this fine-grained product classification task: Classification by Image, by Text, as well as by Image and Text. The approach "Classification by Text" uses the text extracted directly from the leaflet product images. We show that the combination of image and text as input improves the classification of visually difficult-to-distinguish products. The final model reaches an accuracy of 96.4% with a Top-3 score of 99.2%. We release our code at https://github.com/ladwigd/Leaflet-Product-Classification.
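A simple way to combine both modalities, sketched here under assumed embedding dimensions (the paper's actual models are not specified in the abstract), is late fusion of image and text features:

```python
import torch
import torch.nn as nn

# Sketch under assumptions: late fusion of an image embedding and a text
# embedding for fine-grained product classification into 832 classes.

class ImageTextClassifier(nn.Module):
    def __init__(self, img_dim=512, txt_dim=256, n_classes=832):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(img_dim + txt_dim, 512), nn.ReLU(),
            nn.Linear(512, n_classes),
        )

    def forward(self, img_feat, txt_feat):
        # Concatenate both modalities before the classification head.
        return self.head(torch.cat([img_feat, txt_feat], dim=-1))

logits = ImageTextClassifier()(torch.randn(4, 512), torch.randn(4, 256))
```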
Entity Matching (EM) defines the task of learning to group objects by transferring semantic concepts from example groups (=entities) to unseen data. Despite the general availability of image data in the context of many EM problems, most currently available EM algorithms solely rely on (textual) metadata. In this paper, we introduce the first publicly available large-scale dataset for "visual entity matching", based on a production-level use case in the retail domain. Using scanned advertisement leaflets, collected over several years from different European retailers, we provide a total of ~786k manually annotated, high-resolution product images containing ~18k different individual retail products which are grouped into ~3k entities. The annotation of these product entities is based on a price comparison task, where each entity forms an equivalence class of comparable products. Following a first baseline evaluation, we show that the proposed "visual entity matching" constitutes a novel learning problem which cannot sufficiently be solved using standard image-based classification and retrieval algorithms. Instead, novel approaches are needed which allow example-based visual equivalence classes to be transferred to new data. The aim of this paper is to provide a benchmark for such algorithms.
Information about the dataset, evaluation code and download instructions are provided under https://www.retail-786k.org/.
During the day-to-day exploitation of localization systems in mines, the technical staff tends to incorrectly rearrange radio equipment: positions of devices may not be accurately marked on a map, or the marked positions may not correspond to reality. This situation may lead to positioning inaccuracies and errors in the operation of the localization system. This paper presents two Bayesian algorithms for the automatic correction of the positions of the equipment on the map using trajectories restored by inertial measurement units mounted on mobile objects, such as pedestrians and vehicles. As a basis, a predefined map of the mine represented as an undirected weighted graph was used as input. The algorithms were implemented using the Simultaneous Localization and Mapping (SLAM) approach. The results prove that both methods are capable of detecting the misplacement of access points and of providing corresponding corrections. The discrete Bayesian filter outperforms the unscented Kalman filter, which, however, requires more computational power.
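For intuition, a single predict-update cycle of a discrete Bayesian filter over graph nodes might look as follows; the transition and measurement models here are toy assumptions, far simpler than those needed for a real mine map:

```python
import numpy as np

# Minimal sketch of a discrete Bayesian filter over the nodes of a graph.
# The models below are toy assumptions for illustration only.

def bayes_step(belief, transition, likelihood):
    """belief: (N,) over graph nodes; transition: (N, N) row-stochastic;
    likelihood: (N,) measurement probabilities per node."""
    predicted = transition.T @ belief          # motion / prediction step
    posterior = predicted * likelihood         # measurement update
    return posterior / posterior.sum()         # renormalize

belief = np.full(4, 0.25)                      # uniform prior over 4 nodes
transition = np.eye(4) * 0.8 + 0.05            # mostly stay, small move prob
posterior = bayes_step(belief, transition, np.array([0.1, 0.7, 0.1, 0.1]))
print(posterior)
```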
This paper presents an extended version of a previously published Bayesian algorithm for the automatic correction of the positions of the equipment on the map with simultaneous localization of mobile object trajectories (SLAM) in an underground mine environment represented by an undirected graph. The proposed extended SLAM algorithm requires much less preliminary data on possible equipment positions and uses an additional resample-move algorithm to significantly improve the overall performance.
Finding clusters in high dimensional data is a challenging research problem. Subspace clustering algorithms aim to find clusters in all possible subspaces of the dataset, where a subspace is a subset of dimensions of the data. But the exponential increase in the number of subspaces with the dimensionality of data renders most of the algorithms inefficient as well as ineffective. Moreover, these algorithms have ingrained data dependency in the clustering process, which means that parallelization becomes difficult and inefficient. SUBSCALE is a recent subspace clustering algorithm which is scalable with the dimensions and contains independent processing steps which can be exploited through parallelism. In this paper, we aim to leverage the computational power of widely available multi-core processors to improve the runtime performance of the SUBSCALE algorithm. The experimental evaluation shows linear speedup. Moreover, we develop an approach using graphics processing units (GPUs) for fine-grained data parallelism to accelerate the computation further. First tests of the GPU implementation show very promising results.
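Since the per-dimension steps are independent, they map naturally onto a process pool; the following sketch uses placeholder functions (not the SUBSCALE API) to illustrate the idea:

```python
from multiprocessing import Pool

# Hedged illustration (function names are placeholders, not the SUBSCALE API):
# the per-dimension candidate-generation steps are independent, so they can be
# distributed across the cores of a multi-core processor.

def find_dense_units(dimension_values):
    # Placeholder for SUBSCALE's per-dimension dense-unit computation.
    return sorted(dimension_values)[:10]

def parallel_subscale(columns, workers=4):
    # One task per dimension; results are collected in input order.
    with Pool(workers) as pool:
        return pool.map(find_dense_units, columns)

if __name__ == "__main__":
    data = [[5.0, 3.2, 9.1, 1.4], [2.2, 8.7, 4.0, 7.5]]  # two toy dimensions
    print(parallel_subscale(data, workers=2))
```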
Subspace clustering aims to find all clusters in all subspaces of a high-dimensional data space. We present a massively data-parallel approach that can be run on graphics processing units. It extends a previous density-based method that scales well with the number of dimensions. Its main computational bottleneck consists of (sequentially) generating a large number of minimal cluster candidates in each dimension and using hash collisions in order to find matches of such candidates across multiple dimensions. Our approach parallelizes this process by removing previous interdependencies between consecutive steps in the sequential generation process and by applying a very efficient parallel hashing scheme optimized for GPUs. This massive parallelization gives up to 70x speedup for the bottleneck computation when it is replaced by our approach and run on current GPU hardware. We note that depending on data size and choice of parameters, the parallelized part of the algorithm can take different percentages of the overall runtime of the clustering process, and thus, the overall clustering speedup may vary significantly between different cases. However, even in our "worst-case" test, a small dataset where the computation makes up only a small fraction of the overall clustering time, our parallel approach still yields a speedup of more than 3x for the complete run of the clustering process. Our method could also be combined with parallelization of other parts of the clustering algorithm, with an even higher potential gain in processing speed.
Analysis of Miniaturized Printed Flexible RFID/NFC Antennas Using Different Carrier Substrates
(2020)
Antennas for Radio Frequency Identification (RFID) provide benefits for high-frequency (HF) wireless data transmission via Near Field Communication (NFC) and many other applications. Various requirements for the design of the reader and transmitter antennas must be met in order to achieve suitable transmission quality. In this work, a miniaturized, cost-effective RFID/NFC antenna for a microelectronic measurement system is designed and printed on different flexible carrier substrates using a new and low-cost Direct Ink Writing (DIW) technology. Practical aspects such as reflection and impedance magnitude as well as the behavior of the printed RFID/NFC antennas are analyzed and compared to an identical copper-based antenna of the same size. Furthermore, the problems arising during the printing process on the different substrates are evaluated. The effects of kink-free bending tests on the antenna characteristics are examined, and subsequently long-term measurements are carried out.
Radio frequency identification (RFID) antennas are popular for high frequency (HF) RFID, energy transfer and near field communication (NFC) applications. Particularly for wireless measurement systems the RFID/NFC technology is a good option to implement a wireless communication interface. In this context, the design of corresponding reader and transmitter antennas plays a major role for achieving suitable transmission quality. This work proves the feasibility of the rapid prototyping of a RFID/NFC antenna, which is used for the wireless communication and energy harvesting at the required frequency of 13.56 MHz. A novel and low-cost direct ink writing (DIW) technology utilizing highly viscous silver nanoparticle ink is used for this process. This paper describes the development and analysis of low-cost printed flexible RFID/NFC antennas on cost-effective substrates for a microelectronic vital parameter measurement system. Furthermore, we compare the measured technical parameters with existing copper-based counterparts on a FR4 substrate.
Analysing and predicting the advance rate of a tunnel boring machine (TBM) in hard rock is integral to tunnelling project planning and execution. It has been applied in the industry for several decades with varying success. Most prediction models are based on or designed for large-diameter TBMs, and much research has been conducted on related tunnelling projects. However, only a few models incorporate information from projects with an outer diameter smaller than 5 m, and no penetration prediction model for pipe jacking machines exists to date. In contrast to large TBMs, small-diameter TBMs and their projects have received little attention in research. In general, they are characterised by distinctive features, including insufficient geotechnical information, sometimes rather short drive lengths, special machine designs, and partially concurring lining methods like pipe jacking and segment lining. A database which covers most of the parameters mentioned above has been compiled to investigate the performance of small-diameter TBMs in hard rock. In order to provide sufficient geological and technical variance, this database contains 37 projects with 70 geotechnically homogeneous areas. Besides the technical parameters, important geotechnical data like lithological information, unconfined compressive strength, tensile strength, and point load index are included and evaluated. The analysis shows that segment lining TBMs have considerably higher penetration rates in similar geological and technical settings, mostly due to their design parameters. Different methodologies for predicting TBM penetration, including state-of-the-art models from the literature as well as newly derived regression and machine learning models, are discussed and deployed for backward modelling of the projects contained in the database. New ranges of application for small-diameter tunnelling in several industry-standard penetration models are presented, and new approaches for the penetration prediction of pipe jacking machines in hard rock are proposed.
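As a hedged illustration of such machine learning models (the actual feature set and data of the paper's database are not reproduced here), a regression model could be trained on geotechnical parameters like those in the database:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical sketch (features and data are invented for illustration, not
# taken from the paper's database): a machine learning regression model that
# predicts TBM penetration rate from geotechnical parameters.

rng = np.random.default_rng(0)
# Columns: UCS [MPa], tensile strength [MPa], point load index [MPa]
X = rng.uniform([20, 1, 1], [250, 25, 15], size=(200, 3))
y = 50.0 / X[:, 0] + 0.05 * rng.standard_normal(200)  # toy penetration [mm/rev]

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
print(model.predict([[120.0, 10.0, 5.0]]))  # prediction for one rock mass
```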
The proposed method includes the identification and documentation of the elementary TRIZ inventive principles from the TRIZ body of knowledge, the extension and enhancement of inventive principles through patent and technology analysis, and the avoidance of overlapping and redundant principles. The principles are classified and adapted to at least the following categories: working medium, target object, useful action, harmful effect, environment, information, field, substance, time, and space. The elementary inventive principles are assigned to at least the following underlying engineering domains: universal, design, mechanical, acoustic, thermal, chemical, electromagnetic, intermolecular, biological, and data processing. The method further includes the classification of the abstraction level of the elementary principles, the definition of a statistical ranking of principles for different problem types and for specific engineering or non-technical domains, the definition of strategies for selecting principle sets with high solution potential for predefined problems, the automated semantic transformation of the elementary inventive principles into solution ideas, and the evaluation of automatically generated ideas and their transformation into innovation or inventive concepts.
Detecting Images Generated by Deep Diffusion Models using their Local Intrinsic Dimensionality
(2023)
Diffusion models have recently been successfully applied for the visual synthesis of strikingly realistic images. This raises strong concerns about their potential for malicious purposes. In this paper, we propose using the lightweight multi Local Intrinsic Dimensionality (multiLID), originally developed in the context of detecting adversarial examples, for the automatic detection of synthetic images and the identification of the corresponding generator networks. In contrast to many existing detection approaches, which often only work for GAN-generated images, the proposed method provides close to perfect detection results in many realistic use cases. Extensive experiments on known and newly created datasets demonstrate that the proposed multiLID approach exhibits superiority in diffusion detection and model identification. Since the empirical evaluations of recent publications on the detection of generated images are often mainly focused on the "LSUN-Bedroom" dataset, we further establish a comprehensive benchmark for the detection of diffusion-generated images, including samples from several diffusion models with different image sizes. The code for our experiments is provided at https://github.com/deepfake-study/deepfake-multiLID.
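The core of multiLID-style detection is the maximum-likelihood estimate of local intrinsic dimensionality; a simplified sketch, based on the commonly used Levina-Bickel estimator and omitting the feature extraction from network layers, could look like this:

```python
import numpy as np

# Simplified sketch of the Levina-Bickel maximum-likelihood LID estimator that
# underlies multiLID-type detectors; the extraction of features from network
# layers and the multi-layer aggregation are omitted here.

def lid_mle(sample, neighbors, k=20):
    """Estimate the local intrinsic dimensionality of `sample`."""
    dists = np.sort(np.linalg.norm(neighbors - sample, axis=1))[:k]
    # -k / sum(log(r_i / r_k)); the i = k term contributes log(1) = 0.
    return -k / np.sum(np.log(dists / dists[-1]))

x = np.random.randn(64)            # e.g., a flattened feature-map vector
ref = np.random.randn(1000, 64)    # reference batch drawn from clean data
print("LID estimate:", lid_mle(x, ref))
```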
Recently, adversarial attacks on image classification networks by the AutoAttack (Croce and Hein, 2020b) framework have drawn a lot of attention. While AutoAttack has shown a very high attack success rate, most defense approaches focus on network hardening and robustness enhancements, like adversarial training. This way, the currently best-reported method can withstand about 66% of adversarial examples on CIFAR10. In this paper, we investigate the spatial and frequency domain properties of AutoAttack and propose an alternative defense. Instead of hardening a network, we detect adversarial attacks during inference, rejecting manipulated inputs. Based on a rather simple and fast analysis in the frequency domain, we introduce two different detection algorithms. The first is a black-box detector that only operates on the input images and achieves a detection accuracy of 100% on the AutoAttack CIFAR10 benchmark and 99.3% on ImageNet, for epsilon = 8/255 in both cases. The second is a white-box detector using an analysis of CNN feature maps, leading to detection rates of 100% and 98.7% on the same benchmarks.
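A black-box detector of the described kind could be sketched as follows; the spectral statistic and threshold are assumptions for illustration, not the paper's exact algorithm:

```python
import numpy as np

# Illustrative black-box detector sketch (statistic and threshold are
# assumptions): adversarial perturbations often leave traces in the high
# frequencies of the image spectrum, which a simple statistic can flag.

def high_freq_energy(image, cutoff=8):
    """Fraction of spectral energy outside a central low-frequency square."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    h, w = spec.shape
    low = spec[h // 2 - cutoff:h // 2 + cutoff, w // 2 - cutoff:w // 2 + cutoff]
    return 1.0 - low.sum() / spec.sum()

def is_adversarial(image, threshold=0.5):
    # The threshold would be tuned on a validation set of clean images.
    return high_freq_energy(image) > threshold

print(is_adversarial(np.random.rand(32, 32)))
```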
Convolutional neural networks (CNN) define the state-of-the-art solution on many perceptual tasks. However, current CNN approaches largely remain vulnerable against adversarial perturbations of the input that have been crafted specifically to fool the system while being quasi-imperceptible to the human eye. In recent years, various approaches have been proposed to defend CNNs against such attacks, for example by model hardening or by adding explicit defence mechanisms. Thereby, a small “detector” is included in the network and trained on the binary classification task of distinguishing genuine data from data containing adversarial perturbations. In this work, we propose a simple and light-weight detector, which leverages recent findings on the relation between networks’ local intrinsic dimensionality (LID) and adversarial attacks. Based on a re-interpretation of the LID measure and several simple adaptations, we surpass the state-of-the-art on adversarial detection by a significant margin and reach almost perfect results in terms of F1-score for several networks and datasets. Sources available at: https://github.com/adverML/multiLID
Recently, RobustBench (Croce et al. 2020) has become a widely recognized benchmark for the adversarial robustness of image classification networks. In its most commonly reported sub-task, RobustBench evaluates and ranks the adversarial robustness of trained neural networks on CIFAR10 under AutoAttack (Croce and Hein 2020b) with l∞ perturbations limited to ϵ = 8/255. With leading scores of the currently best performing models at around 60% of the baseline, it is fair to characterize this benchmark as quite challenging. Despite its general acceptance in recent literature, we aim to foster discussion about the suitability of RobustBench as a key indicator for robustness that could be generalized to practical applications. Our line of argumentation against this is two-fold and supported by extensive experiments presented in this paper: We argue that I) the alteration of the data by AutoAttack with l∞, ϵ = 8/255 is unrealistically strong, resulting in close to perfect detection rates of adversarial samples even by simple detection algorithms and human observers; we also show that other attack methods are much harder to detect while achieving similar success rates. II) Results on low-resolution datasets like CIFAR10 do not generalize well to higher-resolution images, as gradient-based attacks appear to become even more detectable with increasing resolution.
Neural networks tend to overfit the training distribution and perform poorly on out-of-distribution data. A conceptually simple solution lies in adversarial training, which introduces worst-case perturbations into the training data and thus improves model generalization to some extent. However, it is only one ingredient towards generally more robust models and requires knowledge about the potential attacks or inference time data corruptions during model training. This paper focuses on the native robustness of models that can learn robust behavior directly from conventional training data without out-of-distribution examples. To this end, we study the frequencies in learned convolution filters. Clean-trained models often prioritize high-frequency information, whereas adversarial training enforces models to shift the focus to low-frequency details during training. By mimicking this behavior through frequency regularization in learned convolution weights, we achieve improved native robustness to adversarial attacks, common corruptions, and other out-of-distribution tests. Additionally, this method leads to more favorable shifts in decision-making towards low-frequency information, such as shapes, which inherently aligns more closely with human vision.
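One plausible way to mimic this behavior, sketched here as an assumption based on the abstract rather than the paper's exact regularizer, is to penalize high-frequency energy in the convolution kernels:

```python
import torch

# Hedged sketch (the exact regularizer is an assumption): penalize
# high-frequency content in learned convolution kernels so the model shifts
# its focus toward low-frequency information such as shapes.

def freq_regularizer(conv_weight, cutoff=1):
    """conv_weight: (out_ch, in_ch, k, k). Returns the high-frequency energy."""
    spec = torch.fft.fftshift(torch.fft.fft2(conv_weight), dim=(-2, -1))
    k = conv_weight.shape[-1]
    c = k // 2
    mask = torch.ones(k, k, device=conv_weight.device)
    mask[c - cutoff:c + cutoff + 1, c - cutoff:c + cutoff + 1] = 0  # keep low freqs
    return (spec.abs() ** 2 * mask).sum()

# Added to the task loss during training, e.g.:
# loss = task_loss + 1e-4 * freq_regularizer(model.conv1.weight)
```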
Smart Home and Smart Building applications are a growing market. An increasing challenge is to design energy-efficient Smart Home applications to achieve sustainable and green homes. Using the example of the development of an Indoor Smart Gardening system with wireless monitoring and automated watering, this paper discusses in particular the design of energy-autonomous sensors and actuators for home automation. The most important part of the presented Smart Gardening system is a 3D printed smart flower pot for single plants. The smart flower pot integrates a water reservoir for automated plant irrigation and electronics for monitoring important plant parameters and the water level of the reservoir. Energy harvesting with solar cells enables energy-autonomous operation of the flower pot. A low-power wireless interface, also integrated in the flower pot, and an external gateway based on a Raspberry Pi 3 enable wireless networking of multiple of those flower pots. The gateway is used for evaluating the plant parameters and as a user interface. Particularly the architecture of the energy-autonomous wireless flower pot is considered, because fully energy-autonomous sensors and actuators for home automation could not be implemented without special concepts for the energy supply and the overall electronics.
High-performance Ag–Se-based n-type printed thermoelectric (TE) materials suitable for room-temperature applications have been developed through a new and facile synthesis approach. A high magnitude of the Seebeck coefficient up to 220 μV K⁻¹ and a TE power factor larger than 500 μW m⁻¹ K⁻² for an n-type printed film are achieved. A high figure-of-merit ZT of ∼0.6 for a printed material has been found in the film with a low in-plane thermal conductivity κF of ∼0.30 W m⁻¹ K⁻¹. Using this material for n-type legs, a flexible folded TE generator (flexTEG) of 13 thermocouples has been fabricated. The open-circuit voltage of the flexTEG for temperature differences of ΔT = 30 and 110 K is found to be 71.1 and 181.4 mV, respectively. Consequently, very high maximum output power densities pmax of 6.6 and 321 μW cm⁻² are estimated for temperature differences of ΔT = 30 K and ΔT = 110 K, respectively. The flexTEG has been demonstrated by wearing it on the lower wrist, which resulted in an output voltage of ∼72.2 mV for ΔT ≈ 30 K. Our results pave the way for widespread use in wearable devices.
Among the major hazards for the health of people in large urban agglomerations is the increase in particulate matter concentration. Traditional systems for particulate matter (PM) monitoring have a great number of drawbacks, but the main issues are economic: installation costs and never-ending periodical maintenance expenses. Installations of such systems exist, but their number is limited; given the growth of population, cities, and industrial areas, there is an even bigger need for information on air quality, because PM concentrations change non-linearly, cover a wide range, and stem from different sources. In this paper, we propose an approach, based on low-cost sensor nodes, for real-time measurement and acquisition of information about the PM concentration. The adoption of this approach allows for a detailed study of the intensities of pollution and its sources. The system is powered by a PV module. The power supply unit is designed using model-based design, a new approach to prototyping power-operated electronic devices with guaranteed performance.
Sustainable chemical processes should be designed to combine technological advantages and progress with lower safety risks and the minimization of environmental impact, for example through the reduction of raw material, energy, and water consumption, and the avoidance of hazardous waste and pollution with toxic chemical agents. A number of novel eco-friendly chemical technologies have been developed in recent decades with the help of eco-innovation approaches and methods such as Life Cycle Analysis, Green Process Engineering, Process Intensification, Process Design for Sustainability, and others. An emerging approach to sustainable process design in process engineering builds on innovative solutions inspired by nature. However, the implementation of eco-friendly technologies often faces secondary ecological problems. The study postulates that the eco-inventive principles identified in natural systems make it possible to avoid secondary eco-problems and proposes to apply these principles for sustainable design in chemical process engineering. The research work critically examines how this approach differs from biomimetics as it is commonly used for copying natural systems. The application of nature-inspired eco-design principles is illustrated with the example of a sustainable technology for the extraction of nickel from pyrophyllite.
Harnessing the overall benefits of the latest advancements in artificial intelligence (AI) requires the extensive collaboration of academia and industry. These collaborations promote innovation and growth while enforcing the practical usefulness of newer technologies in real life. The purpose of this article is to outline the challenges faced during cross-collaboration between academia and industry. These challenges are also inspected with the help of an ongoing project titled "Quality Assurance of Machine Learning Applications" (Q-AMeLiA), in which three universities cooperate with five industry partners to make the product risk of AI-based products visible. Further, we discuss the hurdles and the key challenges in machine learning (ML) technology transfer from academia to industry with respect to robustness, simplicity, and safety. These challenges are an outcome of the lack of common standards and metrics, and of missing regulatory considerations, when state-of-the-art (SOTA) technology is developed in academia. The use of biased datasets involves ethical concerns that might lead to unfair outcomes when the ML model is deployed in production. The advancement of AI in small and medium-sized enterprises (SMEs) requires common standardization of concepts rather than algorithmic breakthroughs. In this paper, in addition to the general challenges, we also discuss domain-specific barriers for five different domains, i.e., object detection, hardware benchmarking, continual learning, action recognition, and industrial process automation, and highlight the steps necessary for successfully managing cross-sectoral collaborations between academia and industry.
IoT networks are increasingly used as entry points for cyberattacks, as they often offer low security levels, may allow the control of physical systems, and potentially also open access to other IT networks and infrastructures. Existing intrusion detection systems (IDS) and intrusion prevention systems (IPS) mostly concentrate on legacy IT networks. Nowadays, they come with a high degree of complexity and adaptivity, including the use of artificial intelligence. It is only recently that these techniques have also been applied to IoT networks. In this paper, we present a survey of machine learning and deep learning methods for intrusion detection, and we investigate how previous works have used federated learning for IoT cybersecurity. For this, we present an overview of IoT protocols and potential security risks. We also report the techniques and the datasets used in the studied works, discuss the challenges of using ML, DL and FL for IoT cybersecurity, and provide future insights.
Investigation of the Angle Dependency of Self-Calibration in Multiple-Input-Multiple-Output Radars
(2021)
Multiple-Input-Multiple-Output (MIMO) is a key technology for improving the angular (spatial) resolution of radars. In MIMO radars, amplitude and phase errors in the antenna elements lead to an increase in the sidelobe level and a misalignment of the mainlobe. As a result, the performance of the antenna channels is affected. Firstly, this paper presents an analysis of the effect of amplitude and phase errors on the angular spectrum using Monte Carlo simulations. Then, the results are compared with measurements. Finally, an error correction with a self-calibration method is proposed and its angle dependency is evaluated. It is shown that the error values change with the incident angle, which makes an angle-dependent calibration necessary.
In automotive parking scenarios, where the curb must be detected and classified as traversable or not, radars play an important role. Different approaches to estimating the target height have already been proposed in other works. This paper assesses and compares two methods. The first is based on Angle of Arrival (AoA) estimation of the input signals of multiple antennas using the Multiple-Input-Multiple-Output (MIMO) principle. The second method uses the geometry of the multipath propagation of the radar echo signal for a single antenna input. In this work, a modified calculation of the curb height based on the second method is proposed. The theoretical approach is mathematically proven, and its effectiveness is demonstrated by the evaluation of measurements with a 77 GHz Frequency Modulated Continuous Wave (FMCW) radar. To evaluate the performance of the introduced method, the mean square error (MSE) is used in the proposed scenario. The method, using only one antenna input, produced up to 3.4 times better results for curb height detection in comparison with former methods.
Apache Hadoop is a well-known open-source framework for storing and processing huge amounts of data. This paper shows the usage of the framework within a project of the university in cooperation with a semiconductor company. The goal of this project was to supplement the existing data landscape with facilities for storing and analyzing the data on a new Apache Hadoop based platform.
Background: Pulmonary vein isolation (PVI) using cryoballoon catheters is a recognized method for the treatment of atrial fibrillation (AF). This method offers a shorter treatment duration compared to the classical therapy with high-frequency (HF) ablation.
Purpose: The aim of this study was to integrate different cryoballoon catheters and an HF catheter into a heart rhythm model and to compare them by means of static and dynamic electromagnetic and thermal simulations of their use in AF treatment.
Methods: The cryoballoon catheters from Medtronic and the HF ablation catheter from Osypka were modelled virtually with the aid of manufacturer specifications and the CST (Computer Simulation Technology, Darmstadt) simulation program. The cryoballoon catheter was located in the lower left pulmonary vein of the virtual heart rhythm model for the realization of pulmonary vein isolation (PVI) by cryoenergy. The temperature at the balloon surface was set to -50°C during the simulation.
Results: During a simulated 20-second application of a cryoballoon catheter at -50°C, a temperature of -24°C was measured at a depth of 0.5 mm in the myocardium. At a depth of 1 mm the temperature was -3°C, at 2 mm depth 18°C, and at 3 mm depth 29°C. Under the 15-second application of an RF catheter with an 8 mm electrode and a power of 5 W at 420 kHz, the temperature at the tip of the electrode was 110°C. At a depth of 0.5 mm in the myocardium, the temperature was 75°C, at a depth of 1 mm 58°C, at 2 mm depth 45°C, and at 3 mm depth 38°C.
Conclusions: The simulation of temperature profiles during the virtual application of several catheter models in the heart rhythm model allows the static and dynamic simulation of PVI by cryoballoon ablation and RF ablation. The three-dimensional simulation can be used to improve ablation applications by creating a model in personalized cardiac rhythm therapy from MRI or CT data of a heart and finding a favourable position for ablation of AF.
TSN, or Time Sensitive Networking, is becoming an essential technology for integrated networks, enabling deterministic and best-effort traffic to coexist on the same infrastructure. In order to properly configure, run, and secure such TSN networks, monitoring functionality is a must. The TSN standards already include provisions for such functionality, and there are different methods to choose from. We implemented different methods for measuring the time synchronisation accuracy between devices as a C library and compared the measurement results. Furthermore, the library has been integrated into the ControlTSN engineering framework.
In this report, we have studied field-effect transistors (FETs) using low-density alumina for electrolytic gating. Device layers have been prepared starting from structured ITO glasses by printing the In2O3 channels, low-temperature atomic layer deposition (ALD) of alumina (Al2O3), and printing graphene top gates. The transistor performance could be deliberately changed by altering the ambient humidity; furthermore, ID,ON/ID,OFF ratios of up to seven orders of magnitude and threshold voltages between 0.66 and 0.43 V, decreasing with an increasing relative humidity between 40% and 90%, could be achieved. In contrast to the common usage of Al2O3 as the dielectric in FETs, our devices show electrolyte-type gating behavior. This results from the formation of protons on the Al2O3 surfaces at higher humidities. Due to the very high local capacitances of the Helmholtz double layers at the channel surfaces, the operation voltage can be as low as 1 V. At low humidities (≤30%), the solid electrolyte dries out and the performance breaks down; however, it can be fully reversibly regained upon a humidity increase. Using ALD-derived alumina as a solid electrolyte gating material thus allows low-voltage operation and provides a chemically stable gating material while maintaining low process temperatures. However, its performance has proven to be highly humidity-dependent.
Printed systems spark immense interest in industry, and for several parts such as solar cells or radio frequency identification antennas, printed products are already available on the market. This has led to intense research; however, printed field-effect transistors (FETs) and logics derived thereof have still not been sufficiently developed to be adopted by industry. Among others, one of the reasons for this is the lack of control of the threshold voltage during production. In this work, we show an approach to adjust the threshold voltage (Vth) in printed electrolyte-gated FETs (EGFETs) with high accuracy by doping indium-oxide semiconducting channels with chromium. Despite high doping concentrations achieved by a wet chemical process during precursor ink preparation, good on/off ratios of more than five orders of magnitude could be demonstrated. The synthesis process is simple, inexpensive, and easily scalable; it leads to depletion-mode EGFETs that are fully functional at operation potentials below 2 V and allows Vth to be increased by approximately 0.5 V.
We have developed a methodology for the systematic generation of a large image dataset of macerated wood references, which we used to generate image data for nine hardwood genera. This is the basis for a substantial approach to automate, for the first time, the identification of hardwood species in microscopic images of fibrous materials by deep learning. Our methodology includes a flexible pipeline for easy annotation of vessel elements. We compare the performance of different neural network architectures and hyperparameters. Our proposed method performs similarly well to human experts. In the future, this will improve controls on global wood fiber product flows to protect forests.
Time Sensitive Networking (TSN) provides mechanisms to enable deterministic and real-time networking in industrial networks. Configuration of these mechanisms is key to fully deploying and integrating TSN in the networks. The IEEE 802.1Qcc standard has proposed different configuration models to implement a TSN configuration. Up until now, TSN and its configuration have been explored mostly for Ethernet-based industrial networks. However, they are still considered "work-in-progress" for wireless networks. This work focuses on the fully centralized model and describes a generic concept to enable the configuration of TSN mechanisms in wireless industrial networks. To this end, a configuration entity is implemented to configure the wireless end stations to satisfy their requirements. The proposed solution is then validated with the Digital Enhanced Cordless Telecommunication ultra-low energy (DECT ULE) wireless communication protocol.
Low latency communication is essential to enable mission-critical machine-type communication (mMTC) use cases in cellular networks. Factory and process automation are major areas that require such low latency communication. In this paper, we investigate the potential of adopting the semi-persistent scheduling (SPS) latency reduction technique in narrowband LTE (NB-LTE) networks and provide a comprehensive performance evaluation. First, we investigate and implement SPS in an open-source network simulator (NS3). We perform simulations with a focus on LTE-M and Narrowband IoT (NB-IoT) systems and evaluate the impact of the SPS technique on the uplink latency of these narrowband systems in real industrial automation scenarios. The performance gain of adopting SPS is analyzed, and the results are compared with legacy dynamic scheduling. Our results show that SPS has the potential to reduce the latency of cellular Internet of Things (cIoT) networks. We believe that SPS can be integrated into LTE-M and NB-IoT systems to support low-latency industrial applications.
Enabling ultra-low latency is one of the major drivers for the development of future cellular networks to support delay sensitive applications including factory automation, autonomous vehicles and tactile internet. Narrowband Internet of Things (NB-IoT) is a 3rd Generation Partnership Project (3GPP) Release 13 standardized cellular network currently optimized for massive Machine Type Communication (mMTC). To reduce the latency in cellular networks, 3GPP has proposed some latency reduction techniques that include Semi Persistent Scheduling (SPS) and short Transmission Time Interval (sTTI). In this paper, we investigate the potential of adopting both techniques in NB-IoT networks and provide a comprehensive performance evaluation. We firstly analyze these techniques and then implement them in an open-source network simulator (NS3). Simulations are performed with a focus on Cat-NB1 User Equipment (UE) category to evaluate the uplink user-plane latency. Our results show that SPS and sTTI have the potential to greatly reduce the latency in NB-IoT systems. We believe that both techniques can be integrated into NB-IoT systems to position NB-IoT as a preferred technology for low data rate Ultra-Reliable Low-Latency Communication (URLLC) applications before 5G has been fully rolled out.
We consider a local group of agents that exchange time-series data values and compute an approximation of the mean value over all agents. An agent, represented by a node, knows all local neighbor nodes in the same group and has the contact information of nodes in other groups. The nodes interact with each other in synchronous rounds to exchange updated time-series data values using the random call communication model. The amount of data exchanged between agent-based sensors in the local group network affects the accuracy of the aggregation function results. At each time step, an agent-based sensor can update its input data value and send the updated value to the group head node, which then sends it to all group members in the same group. Grouping nodes in peer-to-peer networks shows an improvement in Mean Squared Error (MSE).
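A toy simulation of the random call model (an assumption-level sketch, not the paper's exact protocol) shows how repeated pairwise averaging drives every node's value toward the global mean:

```python
import random

# Toy simulation of the random call communication model: in each synchronous
# round every node calls one random peer and both adopt the average of their
# values, so all estimates converge toward the global mean while the total
# sum stays invariant.

def gossip_round(values):
    for i in range(len(values)):
        j = random.randrange(len(values))      # randomly called partner
        values[i] = values[j] = (values[i] + values[j]) / 2

values = [10.0, 2.0, 7.0, 1.0]                 # local sensor readings
for _ in range(20):
    gossip_round(values)
print(values, "-> true mean: 5.0")
```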
Artificial intelligence (AI), and in particular machine learning algorithms, are of increasing importance in many application areas, but interpretability and understandability as well as responsibility, accountability, and fairness of the algorithms' results, all crucial for increasing humans' trust in the systems, are still largely missing. Big industrial players, including Google, Microsoft, and Apple, have become aware of this gap and have recently published their own guidelines for the use of AI in order to promote fairness, trust, interpretability, and other goals. Interactive visualization is one of the technologies that may help to increase trust in AI systems. During the seminar, we discussed the requirements for trustworthy AI systems as well as the technological possibilities provided by interactive visualizations to increase human trust in AI.
eLetter on the article "Plague Through History" by Nils Chr. Stenseth, published in Science, Vol. 321, Issue 5890, pages 773-774 (doi.org/10.1126/science.1161496)