Solar energy plays a central role in the energy transition. Clouds generate large local fluctuations in the generation output of photovoltaic systems, which is a major problem for energy systems such as microgrids. For an optimal design of a power system, this work analyzed this variability using a spatially distributed sensor network at Stuttgart Airport. It was shown that the spatial distribution partially reduces the variability of solar radiation. A tool was also developed to estimate the output power of photovoltaic systems using irradiation time series and assumptions about the photovoltaic sites. For days with high fluctuations of the estimated photovoltaic power, different energy system scenarios were investigated. It was found that the approach can be used to obtain a more realistic representation of aggregated PV power by taking spatial smoothing into account, and that the resulting PV power generation profiles provide a good basis for energy system design considerations such as battery sizing.
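The spatial-smoothing effect described above can be illustrated with a small sketch on synthetic irradiance data (illustrative only, not the Stuttgart Airport measurements):

```python
import numpy as np

rng = np.random.default_rng(0)
n_sites, n_steps = 10, 1000

# synthetic clear-sky profile shared by all sites over one day
clear_sky = np.sin(np.linspace(0.0, np.pi, n_steps))
# each site sees independent cloud-induced fluctuations on top of it
irradiance = np.clip(clear_sky + 0.2 * rng.standard_normal((n_sites, n_steps)), 0.0, None)

# ramp variability: standard deviation of step-to-step changes
single_site_var = np.diff(irradiance, axis=1).std()
aggregated = irradiance.mean(axis=0)       # spatially aggregated irradiance
aggregated_var = np.diff(aggregated).std()

print(aggregated_var < single_site_var)  # True: aggregation smooths the ramps
```

Averaging over spatially uncorrelated sites suppresses the fluctuation component while leaving the common clear-sky profile intact, which is exactly why aggregated PV profiles are smoother than single-site ones.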
The desire to connect more and more devices and to make them more intelligent and more reliable is driving the need for the Internet of Things more than ever. Such IoT edge systems require sound security measures against cyber-attacks, since they are interconnected, spatially distributed, and operational for an extended period of time. One of the most important requirements for security in many industrial IoT applications is the authentication of the devices. In this paper, we present a mutual authentication protocol based on Physical Unclonable Functions (PUFs), where challenge-response pairs are used for both device and server authentication. Moreover, a session key can be derived by the protocol in order to secure the communication channel. We show that our protocol is secure against machine learning, replay, man-in-the-middle, cloning, and physical attacks. Moreover, it is shown that the protocol incurs smaller computational, communication, storage, and hardware overhead compared to similar works.
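A minimal sketch of the challenge-response idea behind such PUF-based mutual authentication, with the PUF modeled as a keyed hash (the class names and this simplified round are illustrative assumptions, not the paper's protocol):

```python
import hashlib
import hmac
import os

# Illustrative model only: the device's PUF is simulated as an HMAC keyed with
# a device-unique secret, standing in for the unclonable physical responses.
class SimulatedPUF:
    def __init__(self, device_secret: bytes):
        self._secret = device_secret

    def response(self, challenge: bytes) -> bytes:
        return hmac.new(self._secret, challenge, hashlib.sha256).digest()

device = SimulatedPUF(os.urandom(32))

# Enrollment: the server stores challenge-response pairs (CRPs) while it has
# trusted access to the device.
crp_db = {os.urandom(16): None for _ in range(4)}
for c in crp_db:
    crp_db[c] = device.response(c)

# One (simplified) mutual authentication round:
challenge, expected = next(iter(crp_db.items()))
resp = device.response(challenge)                      # device proves PUF possession
device_ok = hmac.compare_digest(resp, expected)
server_proof = hashlib.sha256(expected).digest()       # server proves CRP knowledge
server_ok = hmac.compare_digest(server_proof, hashlib.sha256(resp).digest())
session_key = hashlib.sha256(b"session" + resp).digest()  # shared session key
print(device_ok and server_ok)  # True
```

Because only the physical device can reproduce the response and only the legitimate server knows the enrolled CRP, a successful round authenticates both sides and leaves them with a shared session key.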
In recent years, Physical Unclonable Functions (PUFs) have gained significant traction in the Internet of Things (IoT) for security applications such as cryptographic key generation and entity authentication. PUFs extract the uncontrollable production characteristics of physical devices to generate unique fingerprints for security applications. One common approach for designing PUFs is exploiting the intrinsic features of sensors and actuators such as MEMS elements, which typically exist in IoT devices. This work presents the Cantilever-PUF, a PUF based on a specific MEMS device – an Aluminum Nitride (AlN) piezoelectric cantilever. We show the variations of electrical parameters of AlN cantilevers, such as resonance frequency, electrical conductivity, and quality factor, that result from uncontrollable manufacturing process variations. These variations, along with high thermal and chemical stability and compatibility with silicon technology, make the AlN cantilever a promising candidate for PUF design. We present a cantilever design which magnifies the effect of manufacturing process variations on the electrical parameters. To verify our findings, simulation results of a Monte Carlo analysis are provided. The results confirm the suitability of the AlN cantilever as a basic PUF device for security applications. We present an architecture in which the designed Cantilever-PUF is used as a security anchor for PUF-enabled device authentication as well as communication encryption.
To deal with frequent power outages in developing countries, people turn to solutions like the uninterruptible power supply (UPS), which stores electric energy during normal operating hours and uses it to meet energy needs during rolling-blackout intervals. Locally produced UPSs of poor power quality are widely available in the marketplace, and they have a negative impact on power quality. The charging and discharging of the batteries in these UPSs generate a significant amount of power losses in weak grid environments. The Smart-UPS is our proposed smart energy metering (SEM) solution for low-voltage consumers that is provided by the distribution company. It does not require batteries; therefore, there is no power loss or harmonic distortion due to the corresponding charging and discharging. Through load flow and harmonic analysis of both traditional UPS and Smart-UPS systems in ETAP, this paper examines their impact on the harmonics and stability of the distribution grid. The simulation results demonstrate that the Smart-UPS can help fix power quality issues in a developing country like Pakistan by providing cleaner energy than battery-operated traditional UPSs.
Due to its potential for improving the efficiency of energy supply, smart energy metering (SEM) has become an area of interest with the surge in the Internet of Things (IoT). SEM entails remote monitoring and control of the sensors and actuators associated with the energy supply system. This provides a flexible platform to conceive and implement new data-driven Demand Side Management (DSM) mechanisms. IoT enablement allows the data to be gathered and analyzed at the requisite granularity. In addition to efficient use of energy resources and provisioning of power, developing countries face the additional challenge of a temporal mismatch between generation capacity and load factors. This leads to widespread deployment of inefficient and expensive Uninterruptible Power Supply (UPS) solutions for limited power provisioning during the resulting blackouts. Our proposed “Soft-UPS” allows dynamic matching of load and generation through managed curtailment. This eliminates inefficiencies in the energy and power value chain and allows a data-driven approach to solving a widespread problem in developing countries, simultaneously reducing both the upfront and running costs of conventional UPS and storage. A scalable and modular platform is proposed and implemented in this paper. The architecture employs the “WiMODino” using LoRaWAN with a “Lite Gateway” and an SQLite repository for data storage. Role-based access to the system through an Android application has also been demonstrated for monitoring and control.
A circuit arrangement of a motor vehicle includes a high-voltage battery for storing electrical energy, an electric machine for driving the motor vehicle, a converter via which high-voltage direct current voltage provided by the high-voltage battery is convertible into high-voltage alternating current voltage for operating the electric machine, and a charging connection for providing electrical energy for charging the high-voltage battery. The converter is a three-stage converter having a first switch unit which is assigned to a first phase of the electric machine. The first switch unit has two switch groups connected in series which each have two insulated-gate bipolar transistors (IGBTs) connected in series, where a connection is disposed between the IGBTs of one of the two switch groups, which connection is electrically connected directly to a line of the charging connection.
Towards a Formal Verification of Seamless Cryptographic Rekeying in Real-Time Communication Systems
(2022)
This paper makes two contributions to the verification of communication protocols by transition systems. Firstly, the paper presents a model of a cyclic communication protocol as a synchronized network of transition systems. This protocol enables seamless cryptographic rekeying embedded into cyclic messages. Secondly, we verify the protocol using the model-checking technique.
In this paper, we propose a unified approach for network pruning and one-shot neural architecture search (NAS) via group sparsity. We first show that group sparsity via the recent Proximal Stochastic Gradient Descent (ProxSGD) algorithm achieves new state-of-the-art results for filter pruning. Then, we extend this approach to operation pruning, directly yielding a gradient-based NAS method based on group sparsity. Compared to existing gradient-based algorithms such as DARTS, the advantages of this new group sparsity approach are threefold. Firstly, instead of a costly bilevel optimization problem, we formulate the NAS problem as a single-level optimization problem, which can be optimally and efficiently solved using ProxSGD with convergence guarantees. Secondly, due to the operation-level sparsity, discretizing the network architecture by pruning less important operations can be safely done without any performance degradation. Thirdly, the proposed approach finds architectures that are both stable and well-performing on a variety of search spaces and datasets.
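The core mechanism behind both filter and operation pruning here is the proximal operator of the group ℓ2 penalty, which ProxSGD applies after each stochastic gradient step. A minimal illustration (not the paper's implementation):

```python
import numpy as np

def prox_group_l2(w: np.ndarray, lam: float) -> np.ndarray:
    """Proximal operator of lam * ||w||_2 (group soft-thresholding). Applied
    per filter/operation group after each stochastic gradient step, it sets
    weak groups exactly to zero -- the mechanism that enables pruning via
    group sparsity."""
    norm = np.linalg.norm(w)
    if norm <= lam:
        return np.zeros_like(w)
    return (1.0 - lam / norm) * w

# a weak group is pruned entirely; a strong group is only shrunk slightly
weak = prox_group_l2(np.array([0.01, -0.02, 0.01]), lam=0.1)
strong = prox_group_l2(np.array([1.0, -2.0, 2.0]), lam=0.1)
print(np.all(weak == 0), np.linalg.norm(strong))
```

Because whole groups land exactly at zero rather than merely close to it, the corresponding filters or candidate operations can be removed from the network without any discretization step.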
In this study, various imaging algorithms for the localization of objects have been investigated. To this end, an Ultra-Wideband (UWB) radar-based experimental setup with a circular antenna array was designed as part of this work. This concept could be particularly useful in microwave medical imaging applications. In order to validate its applicability to microwave imaging, different imaging algorithms have been evaluated and compared by means of our experimental setup. Accurate imaging results have been achieved with our system under multiple test scenarios.
In this study, an approach to a microwave-based radar system for the localization of objects has been proposed. This could be particularly useful in microwave imaging applications such as cardiac catheter detection. An experimental system is defined and realized with the selection of an appropriate antenna design. Hardware control functions and different imaging algorithms are implemented as well. The functionality of this measurement setup has been analyzed considering multiple test scenarios, and it has proved capable of locating multiple objects as well as extended objects.
Due to the Covid-19 pandemic, the RoboCup WorldCup 2021 was held completely remotely. For this competition, the Webots simulator (https://cyberbotics.com/) was used, so all teams needed to transfer their robots to the simulation. This paper describes our experiences during this process, as well as a genetic learning approach to improve our walk engine, allowing more stable and faster movement in the simulation. To scale the learning easily, we used a Docker setup. The resulting movement was one of the outstanding features that finally led to the championship title.
Sweaty has already participated several times in RoboCup soccer competitions (Adult Size). Now the work is focused on stabilizing the gait. Moreover, we would like to overcome the constraints of a ZMP algorithm that requires a horizontal footplate as a precondition for the simplification of the equations. In addition, we would like to switch between impedance and position control with a fuzzy-like algorithm that might help to minimize jerks when Sweaty's feet touch the ground.
Spatially Distributed Wireless Networks (SDWN) are one of the basic technologies for Internet of Things (IoT) and (Industrial) Internet of Things (IIoT) applications. For many of these applications, SDWNs have strict requirements such as low cost, simple installation and operation, and high flexibility and mobility. Among the different Narrowband Wireless Wide Area Networking (NBWWAN) technologies introduced to address these categories of wireless networking requirements, Narrowband Internet of Things (NB-IoT) is gaining the most traction due to attractive system parameters, an energy-saving mode of operation with low data rates and bandwidth, and its applicability in 5G use cases. Since several technologies are available and the underlying use cases come with various requirements, it is essential to perform a systematic comparative analysis of competing technologies in order to choose the right one. It is also important to perform testing during the different phases of the system development life cycle. This paper describes a systematic test environment for automated testing of radio communication and systematic measurement of NB-IoT performance.
Electrode modelling and simulation of diagnostic and pulmonary vein isolation in atrial fibrillation
(2022)
Seismic data often has missing traces due to technical acquisition or economic constraints. A complete dataset is crucial for several processing and inversion techniques. Deep learning algorithms based on convolutional neural networks (CNNs) have provided alternative solutions that overcome limitations of traditional interpolation methods, e.g., data regularity and linearity assumptions. There are two different paradigms of CNN methods for seismic interpolation. The first one, so-called deep prior interpolation (DPI), trains a CNN to map random noise to a complete seismic image using only the decimated image itself. The second one, referred to as the standard deep learning method, trains a CNN to map a decimated seismic image into a complete one using a dataset of complete and artificially decimated images. Within this research, we systematically compare the performance of both methods for different quantities of regular and irregular missing traces using four datasets. We evaluate the results of both methods using five well-known metrics. We found that the DPI method performs better than the standard method when the percentage of missing traces is low (10%), whereas the standard method performs better when the level of decimation is high (50%).
In this work, we explore three deep learning algorithms applied to seismic interpolation: deep prior image (DPI), the standard approach, and generative adversarial networks (GAN). The standard and GAN approaches rely on a dataset of complete and decimated seismic images for the training process, while the DPI method learns from the decimated image itself, without training images. We carry out two main experiments, considering 10%, 30%, and 50% regular and irregular decimation. The first tests the optimal situation for the GAN and standard approaches, where training and testing images come from the same dataset. The second tests the ability of the GAN and standard methods to learn simultaneously from three datasets and generalize to a fourth dataset not used during training. The standard method provides the best results in the first experiment, when the training distribution is similar to the testing one. In this situation, the DPI approach reports the second-best results. In the second experiment, the standard method shows the ability to learn three data distributions simultaneously and effectively in the regular case. In the irregular case, the DPI approach is more effective. The GAN approach is the least effective of the three deep learning methods in both experiments.
Currently, many theoretical as well as practically relevant questions regarding the transferability and robustness of Convolutional Neural Networks (CNNs) remain unsolved. While ongoing research efforts are engaging these problems from various angles, in most computer-vision-related cases these approaches can be generalized to investigations of the effects of distribution shifts in image data. In this context, we propose to study the shifts in the learned weights of trained CNN models. Here we focus on the properties of the distributions of the predominantly used 3×3 convolution filter kernels. We collected and publicly provide a dataset with over 1.4 billion filters from hundreds of trained CNNs, using a wide range of datasets, architectures, and vision tasks. In a first use case of the proposed dataset, we show highly relevant properties of many publicly available pre-trained models for practical applications: I) We analyze distribution shifts (or the lack thereof) between trained filters along different axes of meta-parameters, such as visual category of the dataset, task, architecture, or layer depth. Based on these results, we conclude that model pre-training can succeed on arbitrary datasets if they meet size and variance conditions. II) We show that many pre-trained models contain degenerated filters which make them less robust and less suitable for fine-tuning on target applications. Data & project website: https://github.com/paulgavrikov/cnn-filter-db.
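As a toy illustration of one analysis mentioned above — flagging degenerated (near-zero) filters in a layer — consider this sketch on synthetic filter data (the threshold and data are illustrative assumptions, not the paper's actual criterion):

```python
import numpy as np

# Hypothetical stand-in data: 1000 flattened 3x3 filters, 200 of which are
# degenerated (near-zero), as can occur in public pre-trained models.
rng = np.random.default_rng(1)
filters = rng.standard_normal((1000, 9))
filters[:200] *= 1e-6

# a simple degeneration criterion: filter norm far below the layer median
norms = np.linalg.norm(filters, axis=1)
degenerated = norms < 0.01 * np.median(norms)
print(int(degenerated.sum()))  # 200
```

Filters with vanishing norm contribute almost nothing to the convolution output, so a high fraction of them indicates wasted capacity and poorer suitability for fine-tuning.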
Deep learning models are intrinsically sensitive to distribution shifts in the input data. In particular, small, barely perceivable perturbations of the input data can force models to make wrong predictions with high confidence. A common defense mechanism is regularization through adversarial training, which injects worst-case perturbations back into training to strengthen the decision boundaries and reduce overfitting. In this context, we investigate the 3×3 convolution filters that form in adversarially-trained models. Filters are extracted from 71 public models of the ℓ∞-RobustBench CIFAR-10/100 and ImageNet1k leaderboards and compared to filters extracted from models built on the same architectures but trained without robust regularization. We observe that adversarially-robust models appear to form more diverse, less sparse, and more orthogonal convolution filters than their normal counterparts. The largest differences between robust and normal models are found in the deepest layers and in the very first convolution layer, which consistently and predominantly forms filters that can partially eliminate perturbations, irrespective of the architecture.
Recent work has investigated the distributions of learned convolution filters through a large-scale study containing hundreds of heterogeneous image models. Surprisingly, on average, the distributions only show minor drifts in comparisons along various studied dimensions, including the learned task, image domain, or dataset. However, among the studied image domains, medical imaging models appeared to show significant outliers in the form of "spikey" distributions and, therefore, to learn clusters of highly specific filters different from other domains. Following this observation, we study the collected medical imaging models in more detail. We show that instead of fundamental differences, the outliers are due to specific processing in some architectures. On the contrary, for standardized architectures we find that models trained on medical data do not significantly differ in their filter distributions from similar architectures trained on data from other domains. Our conclusions reinforce previous hypotheses stating that pre-training of imaging models can be done with any kind of diverse image data.
The EREMI project is a two-year project funded under the ERASMUS+ framework programme. Its team has developed, and will validate, an advanced higher-education programme, including life-long learning, on the interdisciplinary topic of resource efficiency in manufacturing industries and the overall system optimization of physical infrastructure with little or no digitization. All of this will be achieved by applying IoT technologies towards efficient industrial systems and by building highly educated human capital on these economically, politically, and technically crucial topics, which are highly relevant to the rapidly developing industries and economies of countries undergoing intensive economic and industrial transformation – Bulgaria, North Macedonia, and Romania. Efficiency will be attained by utilizing the experience and expertise of the involved German partner organisation.
The importance of machine learning (ML) has been increasing dramatically for years. From assistance systems to production optimisation to support for the health sector, almost every area of daily life and industry comes into contact with machine learning. Besides all the benefits that ML brings, the lack of transparency and the difficulty of establishing traceability pose major risks. While there are solutions that make the training of machine learning models more transparent, traceability is still a major challenge. Ensuring the identity of a model is another challenge, and unnoticed modification of a model is a further danger when using ML. One solution is to create an ML birth certificate and an ML family tree secured by blockchain technology. Important information about training and about changes to the model through retraining can be stored in a blockchain and accessed by any user, creating more security and traceability for an ML model.
Despite the success of convolutional neural networks (CNNs) in many academic benchmarks for computer vision tasks, their application in the real world still faces fundamental challenges. One of these open problems is the inherent lack of robustness, unveiled by the striking effectiveness of adversarial attacks. Current attack methods are able to manipulate the network's prediction by adding specific but small amounts of noise to the input. In turn, adversarial training (AT) aims to achieve robustness against such attacks and, ideally, a better model generalization ability by including adversarial samples in the training set. However, an in-depth analysis of the resulting robust models beyond adversarial robustness is still pending. In this paper, we empirically analyze a variety of adversarially trained models that achieve high robust accuracies when facing state-of-the-art attacks, and we show that AT has an interesting side-effect: it leads to models that are significantly less overconfident in their decisions, even on clean data, than non-robust models. Further, our analysis of robust models shows that not only AT but also the model's building blocks (like activation functions and pooling) have a strong influence on the models' prediction confidences. Data & project website: https://github.com/GeJulia/robustness_confidences_evaluation
Despite the success of convolutional neural networks (CNNs) in many academic benchmarks for computer vision tasks, their application in the real world still faces fundamental challenges. One of these open problems is the inherent lack of robustness, unveiled by the striking effectiveness of adversarial attacks. Adversarial training (AT) is often considered a remedy to train more robust networks. In this paper, we empirically analyze a variety of adversarially trained models that achieve high robust accuracies when facing state-of-the-art attacks, and we show that AT has an interesting side-effect: it leads to models that are significantly less overconfident in their decisions, even on clean data, than non-robust models. Further, our analysis of robust models shows that not only AT but also the model's building blocks (like activation functions and pooling) have a strong influence on the models' prediction confidences.
Over the last years, Convolutional Neural Networks (CNNs) have been the dominating neural architecture in a wide range of computer vision tasks. From an image and signal processing point of view, this success might be somewhat surprising, as the inherent spatial pyramid design of most CNNs apparently violates basic signal processing laws, i.e., the sampling theorem, in their down-sampling operations. However, since poor sampling appeared not to affect model accuracy, this issue was broadly neglected until model robustness started to receive more attention. Recent work in the context of adversarial attacks and distribution shifts showed, after all, that there is a strong correlation between the vulnerability of CNNs and aliasing artifacts induced by poor down-sampling operations. This paper builds on these findings and introduces an aliasing-free down-sampling operation which can easily be plugged into any CNN architecture: FrequencyLowCut pooling. Our experiments show that, in combination with simple Fast Gradient Sign Method (FGSM) adversarial training, our hyper-parameter-free operator substantially improves model robustness and avoids catastrophic overfitting. Our code is available at https://github.com/GeJulia/flc_pooling
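The idea behind such aliasing-free pooling can be sketched in a few lines of NumPy: low-pass filter in the frequency domain before reducing the resolution, so that no frequency above the new Nyquist limit survives (an illustrative re-implementation, not the authors' code, which is linked above):

```python
import numpy as np

def flc_pool(x: np.ndarray) -> np.ndarray:
    """Sketch of FrequencyLowCut-style pooling for a 2D map with even sides:
    discard every frequency above the Nyquist limit of the target (half)
    resolution, then reconstruct at that resolution, so the down-sampling
    cannot introduce aliasing."""
    h, w = x.shape
    X = np.fft.fftshift(np.fft.fft2(x))
    # keep only the central (low-frequency) quarter of the spectrum
    low = X[h // 4: h // 4 + h // 2, w // 4: w // 4 + w // 2]
    # the factor 4 compensates for halving the resolution in both dimensions
    return np.fft.ifft2(np.fft.ifftshift(low)).real / 4.0

y = flc_pool(np.ones((8, 8)))
print(y.shape)  # (4, 4); the constant (DC) content survives unchanged
```

A signal at the original Nyquist frequency, which naive strided subsampling would fold back into a low-frequency artifact, is simply removed by the spectral crop.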
Many commonly well-performing convolutional neural network models have been shown to be susceptible to input data perturbations, indicating low model robustness. To reveal model weaknesses, adversarial attacks are specifically optimized to generate small, barely perceivable image perturbations that flip the model prediction. Robustness against attacks can be gained by using adversarial examples during training, which in most cases reduces the measurable model attackability. Unfortunately, this technique can lead to robust overfitting, which results in non-robust models. In this paper, we analyze adversarially trained, robust models in the context of a specific network operation, the downsampling layer, and provide evidence that robust models have learned to downsample more accurately and suffer significantly less from downsampling artifacts, i.e., aliasing, than baseline models. In the case of robust overfitting, we observe a strong increase in aliasing and propose a novel early-stopping approach based on the measurement of aliasing.
Many commonly well-performing convolutional neural network models have been shown to be susceptible to input data perturbations, indicating low model robustness. Adversarial attacks are thereby specifically optimized to reveal model weaknesses by generating small, barely perceivable image perturbations that flip the model prediction. Robustness against attacks can be gained, for example, by using adversarial examples during training, which effectively reduces the measurable model attackability. In contrast, research analyzing the source of a model's vulnerability is scarce. In this paper, we analyze adversarially trained, robust models in the context of a specifically suspicious network operation, the downsampling layer, and provide evidence that robust models have learned to downsample more accurately and suffer significantly less from aliasing than baseline models.
Aerosol particles play an important role in the climate system by absorbing and scattering radiation and influencing cloud properties. They are also one of the biggest sources of uncertainty for climate modeling. Many climate models do not include aerosols in sufficient detail due to computational constraints. To represent key processes, aerosol microphysical properties and processes have to be accounted for. This is done in the ECHAM-HAM (European Center for Medium-Range Weather Forecast-Hamburg-Hamburg) global climate aerosol model using the M7 microphysics, but high computational costs make it very expensive to run at finer resolution or for a longer time. We aim to use machine learning to emulate the microphysics model at sufficient accuracy and to reduce the computational cost by being fast at inference time. The original M7 model is used to generate input-output pairs on which a neural network (NN) is trained. We are able to learn the variables' tendencies, achieving an average R² score of 77.1%. We further explore methods to inform and constrain the NN with physical knowledge to reduce mass violation and enforce mass positivity. On a graphics processing unit (GPU), we achieve a speed-up of more than 64× compared to the original model.
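One simple way to impose the mass constraints mentioned above is to post-process the predicted tendencies. The following sketch (an illustrative scheme under simplifying assumptions, not the correction used in the paper) clips for positivity and rescales for conservation:

```python
import numpy as np

def constrain_tendencies(masses: np.ndarray, tendencies: np.ndarray) -> np.ndarray:
    """Illustrative post-processing of NN-predicted mass tendencies: clip so
    that no species mass becomes negative, then rescale so total mass is
    conserved."""
    new = np.maximum(masses + tendencies, 0.0)   # enforce mass positivity
    total = masses.sum()
    if new.sum() > 0:
        new *= total / new.sum()                 # restore total mass
    return new - masses

masses = np.array([1.0, 2.0, 3.0])
raw = np.array([-2.0, 1.0, 1.0])                 # would drive species 0 negative
t = constrain_tendencies(masses, raw)
print(np.all(masses + t >= 0), np.isclose((masses + t).sum(), masses.sum()))
```

Hard corrections like this guarantee physical consistency at inference time regardless of the emulator's raw output; softer alternatives build the constraints into the loss function during training.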
The present invention relates to open-loop and closed-loop control units for extracorporeal circulatory support, to systems comprising such an open-loop and closed-loop control unit, and to corresponding methods. An open-loop and closed-loop control unit (10) for extracorporeal circulatory support is proposed, which is configured to receive a measurement of an ECG signal (12) of a supported patient over a predefined period of time, wherein the ECG signal (12) comprises multiple data points for each time point within a heart cycle. The open-loop and closed-loop control unit (10) comprises an evaluation unit (100) which is configured to evaluate the data points for at least one time point in a spatial and/or temporal manner and to determine at least one amplitude change (14) within the heart cycle based on the evaluated data points. The open-loop and closed-loop control unit (10) is further configured to output an open-loop and/or closed-loop signal (16) for extracorporeal circulatory support at a predefined point in time after the at least one amplitude change (14).
Estimating the Robustness of Classification Models by the Structure of the Learned Feature-Space
(2022)
Over the last decade, the development of deep image classification networks has mostly been driven by the search for the best performance in terms of classification accuracy on standardized benchmarks like ImageNet. More recently, this focus has been expanded by the notion of model robustness, i.e., the generalization abilities of models towards previously unseen changes in the data distribution. While new benchmarks, like ImageNet-C, have been introduced to measure robustness properties, we argue that fixed test sets are only able to capture a small portion of possible data variations and are thus limited and prone to generate new overfitted solutions. To overcome these drawbacks, we suggest estimating the robustness of a model directly from the structure of its learned feature space. We introduce robustness indicators, which are obtained via unsupervised clustering of latent representations from a trained classifier, and show very high correlations to the model performance on corrupted test data.
This paper presents an extended version of a previously published Bayesian algorithm for the automatic correction of the positions of equipment on a map with simultaneous localization of mobile object trajectories (SLAM) in an underground mine environment represented by an undirected graph. The proposed extended SLAM algorithm requires much less preliminary data on possible equipment positions and uses an additional resample-move algorithm to significantly improve overall performance.
Subspace clustering aims to find all clusters in all subspaces of a high-dimensional data space. We present a massively data-parallel approach that can be run on graphics processing units. It extends a previous density-based method that scales well with the number of dimensions. Its main computational bottleneck consists of (sequentially) generating a large number of minimal cluster candidates in each dimension and using hash collisions to find matches of such candidates across multiple dimensions. Our approach parallelizes this process by removing the interdependencies between consecutive steps in the sequential generation process and by applying a highly efficient parallel hashing scheme optimized for GPUs. This massive parallelization yields up to 70x speedup for the bottleneck computation when it is replaced by our approach and run on current GPU hardware. We note that, depending on data size and choice of parameters, the parallelized part of the algorithm can take different percentages of the overall runtime of the clustering process, so the overall clustering speedup may vary significantly between cases. However, even in our "worst-case" test, a small dataset where this computation makes up only a small fraction of the overall clustering time, our parallel approach still yields a speedup of more than 3x for the complete run of the clustering process. Our method could also be combined with parallelization of other parts of the clustering algorithm, with an even higher potential gain in processing speed.
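The hash-based matching at the heart of this bottleneck can be illustrated with a small sketch (Python instead of CUDA; the tuple signatures and the single-threaded dictionary are illustrative assumptions, not the paper's GPU hashing scheme):

```python
from collections import defaultdict

def match_candidates(candidates_per_dim):
    """Hash-based matching of minimal cluster candidates across dimensions.

    candidates_per_dim: dict mapping dimension -> list of candidate signatures,
    where a signature is any hashable key (here, a tuple of member point ids).
    Returns a dict mapping each signature to the set of dimensions it occurs in.
    """
    table = defaultdict(set)
    for dim, candidates in candidates_per_dim.items():
        for sig in candidates:
            table[sig].add(dim)  # equal signatures collide in the same bucket
    # keep only candidates supported by more than one dimension
    return {sig: dims for sig, dims in table.items() if len(dims) > 1}

# toy example: candidate (1, 2, 3) appears in dimensions 0 and 2
cands = {0: [(1, 2, 3), (4, 5)], 1: [(7, 8)], 2: [(1, 2, 3)]}
print(match_candidates(cands))  # {(1, 2, 3): {0, 2}}
```

The GPU variant described in the abstract parallelizes exactly this insert-and-collide step across candidates.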
Convolutional neural networks (CNNs) define the state-of-the-art solution on many perceptual tasks. However, current CNN approaches remain largely vulnerable to adversarial perturbations of the input that have been crafted specifically to fool the system while being quasi-imperceptible to the human eye. In recent years, various approaches have been proposed to defend CNNs against such attacks, for example by model hardening or by adding explicit defence mechanisms. In the latter case, a small “detector” is included in the network and trained on the binary classification task of distinguishing genuine data from data containing adversarial perturbations. In this work, we propose a simple and light-weight detector that leverages recent findings on the relation between a network's local intrinsic dimensionality (LID) and adversarial attacks. Based on a re-interpretation of the LID measure and several simple adaptations, we surpass the state of the art in adversarial detection by a significant margin and reach almost perfect results in terms of F1-score for several networks and datasets. Sources are available at: https://github.com/adverML/multiLID
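The LID measure such detectors build on is commonly estimated from nearest-neighbour distances; below is a minimal sketch of the standard maximum-likelihood estimator (an assumption for illustration — the multiLID-specific re-interpretation and adaptations are in the linked repository):

```python
import math

def lid_mle(dists, k):
    """Maximum-likelihood estimate of local intrinsic dimensionality (LID)
    from a point's distances to its k nearest neighbours:
        LID ≈ -k / sum_i log(r_i / r_k)
    dists: neighbour distances (unsorted); k: number of neighbours used."""
    r = sorted(dists)[:k]
    r_k = r[-1]
    s = sum(math.log(ri / r_k) for ri in r if ri > 0)
    return -k / s

# neighbour distances growing linearly, as for points along a 1-D structure
print(round(lid_mle([1.0, 2.0, 3.0, 4.0, 5.0], 5), 2))
```

Adversarial samples tend to show higher LID estimates than clean ones, which is the signal a detector can learn from.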
Recently, RobustBench (Croce et al. 2020) has become a widely recognized benchmark for the adversarial robustness of image classification networks. In its most commonly reported sub-task, RobustBench evaluates and ranks the adversarial robustness of trained neural networks on CIFAR10 under AutoAttack (Croce and Hein 2020b) with l∞ perturbations limited to ϵ = 8/255. With leading scores of the currently best-performing models at around 60% of the baseline, it is fair to characterize this benchmark as quite challenging. Despite its general acceptance in recent literature, we aim to foster discussion about the suitability of RobustBench as a key indicator for robustness that could be generalized to practical applications. Our line of argumentation is two-fold and supported by extensive experiments presented in this paper: We argue that I) the alteration of data by AutoAttack with l∞, ϵ = 8/255 is unrealistically strong, resulting in close-to-perfect detection rates of adversarial samples even by simple detection algorithms and human observers. We also show that other attack methods are much harder to detect while achieving similar success rates. II) Results on low-resolution datasets like CIFAR10 do not generalize well to higher-resolution images, as gradient-based attacks appear to become even more detectable with increasing resolution.
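AutoAttack itself is an ensemble of parameter-free attacks; as a minimal illustration of what an l∞ budget of ϵ = 8/255 means per pixel, here is a single FGSM-style step (a simplified stand-in, not AutoAttack):

```python
def linf_attack_step(x, grad_sign, eps=8 / 255):
    """One FGSM-style l∞ step: shift each pixel by eps in the direction of
    the loss-gradient sign, then clip to the valid image range [0, 1].
    x, grad_sign: flat lists of pixel values and gradient signs (+1 / -1)."""
    return [min(1.0, max(0.0, xi + eps * gi)) for xi, gi in zip(x, grad_sign)]

x = [0.0, 0.5, 1.0]
x_adv = linf_attack_step(x, [+1, -1, +1])
# every pixel moves by at most 8/255 ≈ 0.031, yet such perturbations
# are already detectable by simple statistics at this budget
assert all(abs(a - b) <= 8 / 255 + 1e-9 for a, b in zip(x_adv, x))
```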
Harnessing the overall benefits of the latest advancements in artificial intelligence (AI) requires extensive collaboration between academia and industry. These collaborations promote innovation and growth while demonstrating the practical usefulness of newer technologies in real life. The purpose of this article is to outline the challenges faced during cross-collaboration between academia and industry. These challenges are also inspected with the help of an ongoing project titled “Quality Assurance of Machine Learning Applications” (Q-AMeLiA), in which three universities cooperate with five industry partners to make the product risk of AI-based products visible. Further, we discuss the hurdles and key challenges in machine learning (ML) technology transfer from academia to industry with respect to robustness, simplicity, and safety. These challenges result from the lack of common standards and metrics and from missing regulatory considerations when state-of-the-art (SOTA) technology is developed in academia. The use of biased datasets raises ethical concerns that might lead to unfair outcomes when the ML model is deployed in production. The advancement of AI in small and medium-sized enterprises (SMEs) requires common standardization of concepts more than algorithmic breakthroughs. In addition to these general challenges, we also discuss domain-specific barriers in five different domains, i.e., object detection, hardware benchmarking, continual learning, action recognition, and industrial process automation, and highlight the steps necessary for successfully managing cross-sectoral collaborations between academia and industry.
In automotive parking scenarios, where a curb must be detected and classified as traversable or not, radars play an important role. Several approaches to estimating the target height have been proposed in previous work. This paper assesses and compares two methods. The first is based on Angle-of-Arrival (AoA) estimation of the input signals of multiple antennas using the Multiple-Input Multiple-Output (MIMO) principle. The second method uses the geometry of the multipath propagation of the radar echo signal for a single antenna input. In this work, a modified calculation of the curb height based on the second method is proposed. The underlying theory is proven mathematically, and the effectiveness is demonstrated by evaluating measurements with a 77 GHz Frequency-Modulated Continuous-Wave (FMCW) radar. The performance of the introduced method is evaluated using the mean square error (MSE) in the proposed scenario. Using only one antenna input, the method produced up to 3.4 times better results for curb height detection than former methods.
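As background for the multipath approach, the textbook mirror-geometry relation (an illustrative assumption here, not the paper's modified calculation) recovers the target height from the direct and ground-reflected path lengths:

```python
import math

def target_height(d_direct, d_multipath, h_radar):
    """Estimate target height from direct and ground-reflected path lengths.
    Mirror geometry over a flat ground plane gives
        d_m^2 - d^2 = 4 * h_radar * h_target."""
    return (d_multipath**2 - d_direct**2) / (4.0 * h_radar)

# forward check: radar 0.5 m high, curb 0.1 m high, 5 m horizontal distance
h_r, h_t, x = 0.5, 0.1, 5.0
d = math.hypot(x, h_r - h_t)        # direct path
d_m = math.hypot(x, h_r + h_t)      # path via the ground-plane mirror image
print(round(target_height(d, d_m, h_r), 6))  # recovers 0.1
```

The small path-length difference at long ranges is what makes single-antenna height estimation sensitive to measurement error in practice.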
We consider a local group of agents that exchange time-series data values and compute an approximation of the mean value over all agents. An agent, represented by a node, knows all local neighbor nodes in the same group and holds the contact information of nodes in other groups. The nodes interact in synchronous rounds to exchange updated time-series data values using the random-call communication model. The amount of data exchanged between agent-based sensors in the local group network affects the accuracy of the aggregation results. At each time step, an agent-based sensor can update its input data value and send the updated value to the group head node, which then distributes it to all members of the same group. Grouping nodes in peer-to-peer networks shows an improvement in Mean Squared Error (MSE).
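A minimal sketch of the group-head aggregation step (the random-call exchange between groups is omitted; node counts and values are illustrative):

```python
import random

def group_mean_round(groups, values):
    """One synchronous round: each group's head collects its members' current
    values and broadcasts the local mean back to the group.
    groups: list of lists of node ids; values: dict node id -> value."""
    for members in groups:
        local_mean = sum(values[n] for n in members) / len(members)
        for n in members:
            values[n] = local_mean
    return values

def mse(values, true_mean):
    """Mean squared error of the nodes' values w.r.t. the global mean."""
    return sum((v - true_mean) ** 2 for v in values.values()) / len(values)

random.seed(0)
vals = {i: random.random() for i in range(12)}
true_mean = sum(vals.values()) / len(vals)
groups = [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11]]
before = mse(vals, true_mean)
after = mse(group_mean_round(groups, vals), true_mean)
assert after <= before  # within-group averaging never increases the MSE
```

Averaging within a group zeroes the within-group variance while leaving the between-group term unchanged, which is why the MSE can only improve.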
This book, now in its third, completely revised and updated edition, offers a critical approach to the challenging interpretation of the latest research data obtained using functional neuroimaging in whiplash injury. Such a comprehensive guide to recent and current international research in the field is more necessary than ever, given that the confusion regarding the condition and the medicolegal discussions surrounding it have increased further despite the publication of much literature on the subject. In recent decades especially the functional imaging methods – such as single-photon emission tomography, positron emission tomography, functional MRI, and hybrid techniques – have demonstrated a variety of significant brain alterations. Functional Neuroimaging in Whiplash Injury - New Approaches covers all aspects, including the imaging tools themselves and the various methods of image analysis. Details on biomechanics, including the finite element method and facts on historical whiplash experiments and crash tests have now been added to this new edition. The book will continue to help physicians, patients and their relatives and friends, and others to understand this condition as a disease.
Biodegradable metals have entered the implant market in recent years but still do not show fully satisfactory degradation behaviour and mechanical properties. In contrast, pure molybdenum has been shown to offer an excellent combination of the required properties. We report on powder metallurgy (PM) based screen printing of thin-walled molybdenum tubes as a processing step for medical stent manufacture. We also present data on the in vivo degradation and biocompatibility of molybdenum. The degradation of molybdenum wires implanted in the aorta of rats was evaluated by SEM and EDX. Biocompatibility was assessed by histological investigation of organs and by analysis of molybdenum levels in tissue extracts and body fluids. Degradation rates of up to 13.5 μm/y were observed after 12 months. No histological changes or elevated molybdenum levels in organ tissues were observed. In summary, the results further underline that molybdenum is a highly promising biodegradable metallic material.
Featherweight Go (FG) is a minimal core calculus that includes essential Go features such as overloaded methods and interface types. The most straightforward semantic description of the dynamic behavior of FG programs is to resolve method calls based on run-time type information. A more efficient approach is to apply a type-directed translation scheme where interface-values are replaced by dictionaries that contain concrete method definitions. Thus, method calls can be resolved by a simple lookup of the method definition in the dictionary. Establishing that the target program obtained via the type-directed translation scheme preserves the semantics of the original FG program is an important task.
To establish this property, we employ logical relations that are indexed by types to relate source and target programs. We provide rigorous proofs and give a detailed discussion of the many subtle corners we encountered, including the need for a step index due to recursive interfaces and method definitions.
Featherweight Generic Go (FGG) is a minimal core calculus modeling the essential features of the programming language Go. It includes support for overloaded methods, interface types, structural subtyping and generics. The most straightforward semantic description of the dynamic behavior of FGG programs is to resolve method calls based on runtime type information of the receiver.
This article shows a different approach by defining a type-directed translation from FGG to an untyped lambda-calculus. The translation of an FGG program provides evidence for the availability of methods as additional dictionary parameters, similar to the dictionary-passing approach known from Haskell type classes. Then, method calls can be resolved by a simple lookup of the method definition in the dictionary.
Every program in the image of the translation has the same dynamic semantics as its source FGG program. The proof of this result is based on a syntactic, step-indexed logical relation. The step-index ensures a well-founded definition of the relation in the presence of recursive interface types and recursive methods.
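The dictionary-passing idea can be sketched in a few lines (Python in place of the untyped lambda calculus; the Stringer-like interface and helper names are illustrative assumptions):

```python
# An "interface value" is a pair (value, dict): the dictionary carries the
# concrete method implementations, so a method call is just a dict lookup
# instead of a run-time type dispatch.

def call(iface_val, method, *args):
    value, methods = iface_val
    return methods[method](value, *args)

# two concrete "struct" types implementing a Stringer-like interface
def int_string(v):
    return str(v)

def pair_string(v):
    return "(" + str(v[0]) + ", " + str(v[1]) + ")"

# the "translation" wraps a concrete value with its method dictionary
as_stringer_int = lambda n: (n, {"String": int_string})
as_stringer_pair = lambda p: (p, {"String": pair_string})

print(call(as_stringer_int(42), "String"))       # 42
print(call(as_stringer_pair((1, 2)), "String"))  # (1, 2)
```

The correctness result of the paper says that this kind of lookup-based resolution always agrees with the original run-time-type-based semantics.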
The conversion of space heating for private households to climate-neutral energy sources is an essential component of the energy transition, as this sector was responsible for 9.4 % of Germany’s carbon dioxide emissions as of 2018. In addition to reducing demand through better insulation, the use of heat pumps fed with electricity from renewable energy sources, such as on-site photovoltaic (PV) systems, is an important approach.
Advanced energy management and control can help to make optimal use of such heating systems. Optimal may refer, for example, to maximizing self-consumption of self-generated PV power, to extended component lifetime, or to grid-friendly behavior that avoids load peaks. A powerful method for this is model predictive control (MPC), which calculates optimal schedules for the controllable influence variables based on models of the system dynamics, current measurements of system states, and predictions of future external influence parameters.
In this paper, we will discuss three different use cases that show how artificial intelligence can contribute to the realization of such an MPC-based energy management and control system. This will be done using the example of a real inhabited single family home that has provided the necessary data for this purpose and where the methods are implemented and tested. The heating system consists of an air-water heat pump with direct condensation, a thermal stratified storage tank, a pellet burner and a heating rod and provides both heating and hot water. The house generates a significant portion of its electricity needs through a rooftop PV system.
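A toy illustration of the MPC idea (brute force over on/off schedules; the 2 kW heat-pump power, the forecasts, and the run-time constraint are invented for illustration, not data from the house):

```python
from itertools import product

def mpc_schedule(pv_forecast, demand, horizon=4):
    """Pick the on/off heat-pump schedule over a short horizon that minimizes
    grid import (i.e. maximizes PV self-consumption). Brute force over the
    2^horizon schedules; a real MPC would use a proper optimizer and a
    thermal model of the building.
    pv_forecast, demand: kW per time step; the heat pump draws 2 kW when on."""
    hp_power = 2.0

    def grid_import(schedule):
        return sum(max(0.0, demand[t] + hp_power * schedule[t] - pv_forecast[t])
                   for t in range(horizon))

    # constraint: the pump must run at least twice to cover the heat demand
    feasible = (s for s in product((0, 1), repeat=horizon) if sum(s) >= 2)
    return min(feasible, key=grid_import)

pv = [0.5, 3.0, 3.5, 0.5]    # midday PV peak
load = [0.5, 0.5, 0.5, 0.5]  # baseline electric load
print(mpc_schedule(pv, load))  # runs the pump in the sunny hours: (0, 1, 1, 0)
```

In a receding-horizon setup, only the first step of the chosen schedule is applied before the optimization is repeated with fresh measurements and forecasts.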
Narrowband Internet-of-Things (NB-IoT) is a 3rd Generation Partnership Project (3GPP) standardized cellular technology, adopted for 5G and optimized for massive Machine-Type Communication (mMTC). Anticipated applications include infrastructure monitoring, asset management, smart city, and smart energy. In this paper, we evaluate the suitability of NB-IoT for private (campus) networks in industrial environments, including complex cloud-based applications around process automation. An end-to-end system has been developed, comprising a sensor unit connected to an NB-IoT modem, a base station (gNodeB) equipped with a beamforming array, and a local (private) network architecture with a sensor management system in the edge cloud. The experimental study includes field tests in realistic industrial environments with latency, reliability, and coverage measurements. The results show that NB-IoT is well suited for process automation with high scalability, low power requirements, and moderate latency requirements.