Harnessing the overall benefits of the latest advancements in artificial intelligence (AI) requires extensive collaboration between academia and industry. These collaborations promote innovation and growth while ensuring the practical usefulness of newer technologies in real life. The purpose of this article is to outline the challenges faced during cross-collaboration between academia and industry. These challenges are also examined with the help of an ongoing project titled “Quality Assurance of Machine Learning Applications” (Q-AMeLiA), in which three universities cooperate with five industry partners to make the product risk of AI-based products visible. Further, we discuss the hurdles and the key challenges in machine learning (ML) technology transfer from academia to industry with respect to robustness, simplicity, and safety. These challenges result from the lack of common standards and metrics and from missing regulatory considerations when state-of-the-art (SOTA) technology is developed in academia. The use of biased datasets raises ethical concerns that might lead to unfair outcomes when the ML model is deployed in production. The advancement of AI in small and medium-sized enterprises (SMEs) depends more on the common standardization of concepts than on algorithmic breakthroughs. In this paper, in addition to the general challenges, we also discuss domain-specific barriers for five different domains, i.e., object detection, hardware benchmarking, continual learning, action recognition, and industrial process automation, and highlight the steps necessary for successfully managing cross-sectoral collaborations between academia and industry.
Despite the success of convolutional neural networks (CNNs) in many academic benchmarks for computer vision tasks, their real-world application still faces fundamental challenges. One of these open problems is the inherent lack of robustness, unveiled by the striking effectiveness of adversarial attacks. Current attack methods are able to manipulate the network's prediction by adding specific but small amounts of noise to the input. In turn, adversarial training (AT) aims to achieve robustness against such attacks, and ideally a better model generalization ability, by including adversarial samples in the training set. However, an in-depth analysis of the resulting robust models beyond adversarial robustness is still pending. In this paper, we empirically analyze a variety of adversarially trained models that achieve high robust accuracies when facing state-of-the-art attacks, and we show that AT has an interesting side effect: it leads to models that are significantly less overconfident in their decisions than non-robust models, even on clean data. Further, our analysis of robust models shows that not only AT but also the model's building blocks (like activation functions and pooling) have a strong influence on the models' prediction confidences. Data & Project website: https://github.com/GeJulia/robustness_confidences_evaluation
An Empirical Investigation of Model-to-Model Distribution Shifts in Trained Convolutional Filters
(2021)
We present first empirical results from our ongoing investigation of distribution shifts in image data used for various computer vision tasks. Instead of analyzing the original training and test data, we propose to study shifts in the learned weights of trained models. In this work, we focus on the properties of the distributions of the dominantly used 3x3 convolution filter kernels. We collected and publicly provide a data set with over half a billion filters from hundreds of trained CNNs, using a wide range of data sets, architectures, and vision tasks. Our analysis shows interesting distribution shifts (or the lack thereof) between trained filters along different axes of meta-parameters, like data type, task, architecture, or layer depth. We argue that the observed properties are a valuable source for further investigation into a better understanding of the impact of shifts in the input data on the generalization abilities of CNN models and into novel methods for more robust transfer learning in this domain.
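The basic pipeline behind such an analysis — collecting all 3x3 kernels from model weights and quantifying the distance between two models' filter-coefficient distributions — can be sketched as follows. This is a minimal illustration, not the paper's actual methodology: the weight tensors are random stand-ins for real trained models, and the Jensen-Shannon divergence over coefficient histograms is one plausible shift measure among many.

```python
import numpy as np

rng = np.random.default_rng(1)

def extract_3x3_filters(weight_tensors):
    """Flatten all 3x3 kernels from conv weights of shape (out, in, 3, 3) into rows of 9."""
    kernels = []
    for w in weight_tensors:
        if w.ndim == 4 and w.shape[-2:] == (3, 3):
            kernels.append(w.reshape(-1, 9))
    return np.vstack(kernels)

def coefficient_histogram(filters, bins=50, value_range=(-1.0, 1.0)):
    # Normalized histogram of all filter coefficients.
    hist, _ = np.histogram(filters.ravel(), bins=bins, range=value_range)
    return hist / hist.sum()

def jensen_shannon(p, q, eps=1e-12):
    # Symmetric, bounded divergence between two coefficient distributions.
    p, q = p + eps, q + eps
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: float(np.sum(a * np.log(a / b)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Stand-in weights for two hypothetical models with different coefficient spreads.
model_a = [rng.normal(0.0, 0.1, (64, 32, 3, 3))]
model_b = [rng.normal(0.0, 0.3, (64, 32, 3, 3))]

fa = extract_3x3_filters(model_a)
fb = extract_3x3_filters(model_b)
shift = jensen_shannon(coefficient_histogram(fa), coefficient_histogram(fb))
```

A model compared against itself yields a divergence of (numerically) zero, while the two stand-in models with different coefficient spreads show a clear shift — the same kind of signal the paper examines along axes such as task, architecture, and layer depth.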
The research group led by Prof. Dr. Thomas Wendt works on topics across a wide range of areas, from automation technology and functional safety to 3D-printed electronics and sensors. In total, four doctoral candidates and four staff members are working on the further development of the various technologies summarized in this article.
This work compares the performance of Bluetooth Mesh implementations on real chipsets against an ideal implementation of the specification. Measurements taken in experimental settings reveal non-idealities both in real chipsets' handling of the underlying Bluetooth Low Energy specification and in the implementation of Mesh, which introduce irregular transmission and reception behavior. These effects impact the transmission rate, reception rate, and latency, and have an even more significant impact on the average power consumption.
A novel Bluetooth Low Energy advertising scan algorithm is presented for hybrid radios that are additionally capable of measuring energy on Bluetooth channels, e.g., as required for compliance with IEEE 802.15.4. Scanners applying this algorithm can achieve low latency while consuming only a fraction of the power that existing mechanisms require at a similar latency. Furthermore, the power consumption scales with the incoming network traffic, and in contrast to existing mechanisms, scanners can operate without any frame loss under ideal network conditions. The algorithm does not require any changes to advertisers and hence stays compatible with existing devices. Performance evaluated via simulation and experiments on real hardware shows a 37 percent lower power consumption compared to the best existing scan setting, while even achieving a slightly lower latency. This demonstrates that the algorithm can be used to improve the quality of service of connection-less Bluetooth communication or to reduce the connection establishment time of connection-oriented communication.
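The general idea of gating an expensive receive window on a cheap energy measurement can be modeled with a simple duty-cycle calculation. The abstract does not give the algorithm's internals, so the following is a heavily simplified, illustrative model: all power and timing figures are assumed, the scheduling is idealized, and the model ignores alignment between energy-detect slots and short advertising packets.

```python
# Illustrative power/timing figures (assumed, NOT taken from the paper), in mW / ms.
P_RX = 30.0        # radio in full receive mode (legacy always-on scanning)
P_ED = 5.0         # brief IEEE 802.15.4-style energy measurement
ED_PERIOD = 10.0   # one energy-detect slot every 10 ms
ED_SLOT = 0.13     # duration of a single energy measurement
RX_WINDOW = 2.0    # full scan window opened only after a positive detection

def avg_power_mw(adverts_per_second, energy_gated=True):
    """Average scanner power under a simple duty-cycle model."""
    if not energy_gated:
        return P_RX                                    # always-on legacy scanner
    ed_duty = ED_SLOT / ED_PERIOD                      # baseline measurement duty cycle
    rx_duty = adverts_per_second * RX_WINDOW / 1000.0  # RX time only when traffic arrives
    return P_ED * ed_duty + P_RX * rx_duty

def worst_case_latency_ms(energy_gated=True):
    # In this idealized model, a transmission is noticed no later than the
    # next energy-detect slot; an always-on scanner receives immediately.
    return ED_PERIOD if energy_gated else 0.0
```

Even this crude model reproduces the two qualitative properties highlighted in the abstract: the energy-gated scanner's average power is a small fraction of the always-on scanner's, and it scales with the incoming advertising traffic rather than being constant.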
We present a novel approach that utilizes BLE packets sent from generic BLE-capable radios to synthesize an FSK-like, addressable wake-up packet. A wake-up receiver system was developed from off-the-shelf components to detect these packets. It makes use of two differential signal paths separated by passive band-pass filters. After the rectification of each channel, a differential amplifier compares the signals, and the resulting wake-up signal is evaluated by an AS3933 wake-up receiver IC. Overall, the combination of these techniques yields a BLE-compatible wake-up system that is more robust than traditional OOK wake-up systems, increasing the wake-up range while still maintaining a low energy budget. The proof-of-concept setup achieved a sensitivity of -47.8 dBm at a power consumption of 18.5 µW during passive listening. The system has a latency of 31.8 ms at a symbol rate of 1437 Baud.