One of the major challenges impeding the energy transition is the intermittency of solar and wind electricity generation due to their dependency on changing weather. Demand-side energy flexibility contributes considerably to mitigating the supply/demand imbalances that result from external influences such as the weather. As some of the largest electricity consumers, industrial enterprises offer a high demand-side flexibility potential through their production processes and on-site energy assets. Methods are therefore needed that enable energy flexibility and ensure the active participation of such enterprises in the electricity markets, especially under variable electricity prices. This paper presents a generic model library for an industrial enterprise implemented with optimal control for energy flexibility purposes. The components in the model library represent the typical technical units of an industrial enterprise on the material, media, and energy flow levels, together with their operative constraints. A case study of a plastics manufacturing plant using the generic model library is also presented, in which the results of two simulations with different electricity prices are compared and the behavior of the model is assessed. The results show that the model provides an optimal scheduling of the manufacturing system according to the variations in electricity prices and ensures optimal control of the utilities and energy systems needed for production.
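The core idea of price-driven scheduling can be illustrated with a minimal sketch: allocate a fixed production energy demand to the cheapest time slots first, subject to a per-slot capacity limit. All numbers below are assumed for illustration; the paper's actual model uses optimal control over material, media, and energy flows with operative constraints.

```python
# Toy price-driven load scheduling (greedy: cheapest slots first).
# All values are assumed for illustration.
prices = [0.30, 0.12, 0.25, 0.10]   # EUR/kWh per time slot
total_energy = 8.0                  # kWh the process must consume in total
max_per_slot = 4.0                  # kWh capacity limit per slot

schedule = [0.0] * len(prices)
remaining = total_energy
for slot in sorted(range(len(prices)), key=lambda i: prices[i]):
    take = min(max_per_slot, remaining)
    schedule[slot] = take
    remaining -= take

cost = sum(e * p for e, p in zip(schedule, prices))
print(schedule, round(cost, 2))  # production shifts into the cheap slots
```

With these toy prices, all production energy lands in the two cheapest slots; a real model would additionally respect process coupling and storage dynamics.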
Solar energy plays a central role in the energy transition. Clouds locally generate large fluctuations in the output of photovoltaic systems, which is a major problem for energy systems such as microgrids. For an optimal design of a power system, this work analyzed this variability using a spatially distributed sensor network at Stuttgart Airport. It was shown that the spatial distribution partially reduces the variability of solar radiation. A tool was also developed to estimate the output power of photovoltaic systems using irradiation time series and assumptions about the photovoltaic sites. For days with high fluctuations of the estimated photovoltaic power, different energy system scenarios were investigated. It was found that the approach can be used to obtain a more realistic representation of aggregated PV power that takes spatial smoothing into account, and that the resulting PV power generation profiles provide a good basis for energy system design considerations such as battery sizing.
The desire to connect more and more devices and to make them more intelligent and more reliable is driving the need for the Internet of Things more than ever. Such IoT edge systems require sound security measures against cyber-attacks, since they are interconnected, spatially distributed, and operational for extended periods of time. One of the most important security requirements in many industrial IoT applications is the authentication of devices. In this paper, we present a mutual authentication protocol based on Physical Unclonable Functions, in which challenge-response pairs are used for both device and server authentication. Moreover, a session key can be derived by the protocol in order to secure the communication channel. We show that our protocol is secure against machine learning, replay, man-in-the-middle, cloning, and physical attacks. Moreover, it is shown that the protocol incurs a smaller computational, communication, storage, and hardware overhead compared to similar works.
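A minimal sketch of such a challenge-response flow is given below. The HMAC stands in for the physical PUF, and the key-derivation step is an assumption for illustration; the paper's actual protocol differs in its details.

```python
import hashlib
import hmac
import secrets

def puf_response(device_secret: bytes, challenge: bytes) -> bytes:
    # Stand-in for the physical PUF: here an HMAC over a per-device secret.
    # In real hardware the response stems from uncontrollable manufacturing
    # variations, not from a stored key.
    return hmac.new(device_secret, challenge, hashlib.sha256).digest()

# Enrollment phase: the server stores challenge-response pairs (CRPs).
device_secret = secrets.token_bytes(32)
challenge = secrets.token_bytes(16)
server_crp = (challenge, puf_response(device_secret, challenge))

# Authentication phase: the server issues the stored challenge, the device
# answers with its PUF response, and both sides derive a session key.
c, expected = server_crp
response = puf_response(device_secret, c)
assert hmac.compare_digest(response, expected)  # device authenticated
nonce = secrets.token_bytes(16)                 # freshness against replay
session_key = hashlib.sha256(response + nonce).digest()
```

Because each CRP is used once and the nonce is fresh per session, a recorded exchange cannot simply be replayed; the mutual direction (server proving knowledge of the CRP database) follows the same pattern.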
In recent years, Physical Unclonable Functions (PUFs) have attracted significant attention in the Internet of Things (IoT) for security applications such as cryptographic key generation and entity authentication. PUFs extract the uncontrollable production characteristics of physical devices to generate unique fingerprints for security applications. One common approach to designing PUFs is exploiting the intrinsic features of sensors and actuators, such as MEMS elements, which typically exist in IoT devices. This work presents the Cantilever-PUF, a PUF based on a specific MEMS device: an Aluminum Nitride (AlN) piezoelectric cantilever. We show that electrical parameters of AlN cantilevers, such as the resonance frequency, electrical conductivity, and quality factor, vary as a result of uncontrollable manufacturing process variations. These variations, along with high thermal and chemical stability and compatibility with silicon technology, make the AlN cantilever a strong candidate for PUF design. We present a cantilever design which magnifies the effect of manufacturing process variations on the electrical parameters. To verify our findings, simulation results from the Monte Carlo method are provided. The results confirm the suitability of the AlN cantilever as a basic PUF device for security applications. We present an architecture in which the designed Cantilever-PUF is used as a security anchor for PUF-enabled device authentication as well as communication encryption.
To deal with frequent power outages in developing countries, people turn to solutions like the uninterruptible power supply (UPS), which stores electric energy during normal operating hours and uses it to meet energy needs during rolling-blackout intervals. Locally produced UPSs of poor power quality are widely available in the marketplace, and they have a negative impact on power quality. The charging and discharging of the batteries in these UPSs generate a significant amount of power losses in weak grid environments. The Smart-UPS is our proposed smart energy metering (SEM) solution for low-voltage consumers that is provided by the distribution company. It does not require batteries, so there is no power loss or harmonic distortion due to charging and discharging. Through load flow and harmonic analysis of both traditional UPS and Smart-UPS systems in ETAP, this paper examines their impact on the harmonics and stability of the distribution grid. The simulation results demonstrate that the Smart-UPS can assist in fixing power quality issues in a developing country like Pakistan by providing cleaner energy than battery-operated traditional UPSs.
Due to its potential for improving the efficiency of energy supply, smart energy metering (SEM) has become an area of interest with the surge of the Internet of Things (IoT). SEM entails remote monitoring and control of the sensors and actuators associated with the energy supply system. This provides a flexible platform to conceive and implement new data-driven Demand Side Management (DSM) mechanisms. IoT enablement allows the data to be gathered and analyzed at the requisite granularity. In addition to the efficient use of energy resources and provisioning of power, developing countries face the additional challenge of a temporal mismatch between generation capacity and load factors. This leads to the widespread deployment of inefficient and expensive Uninterruptible Power Supply (UPS) solutions for limited power provisioning during the resulting blackouts. Our proposed "Soft-UPS" allows dynamic matching of load and generation through managed curtailment. This eliminates inefficiencies in the energy and power value chain and allows a data-driven approach to solving a widespread problem in developing countries, simultaneously reducing both the upfront and running costs of conventional UPS and storage. A scalable and modular platform is proposed and implemented in this paper. The architecture employs the "WiMODino" using LoRaWAN with a "Lite Gateway" and an SQLite repository for data storage. Role-based access to the system through an Android application has also been demonstrated for monitoring and control.
Synthesizing voices with the help of machine learning techniques has made rapid progress over the last years. Given the current increase in the use of conferencing tools for online teaching, we question just how easy (i.e. in terms of needed data, hardware, and skill set) it would be to create a convincing voice fake. We analyse how much training data a participant (e.g. a student) would actually need to fake another participant's voice (e.g. a professor's). We provide an analysis of the existing state of the art in creating voice deep fakes and align the identified as well as our own optimization techniques in the context of two different voice datasets. A user study with more than 100 participants shows how difficult it is to distinguish real from fake voices (on average, only 37% could recognize a professor's fake voice). From a longer-term societal perspective, such voice deep fakes may lead to disbelief by default.
The majority of anterior cruciate ligament (ACL) injuries in team sports are non-contact injuries, with cutting maneuvers identified as high-risk tasks. Young female handball players have been shown to be at greater risk for ACL injuries than males. One risk factor for ACL injuries is the magnitude of the knee abduction moment (KAM). Cutting technique variables concerning foot placement, the overall approach, and knee kinematics have been shown to influence the KAM. Since injury risk is believed to increase with increasing task complexity, the purpose of this study was to test the effect of task complexity on technique variables that influence the KAM in female handball players during fake-and-cut tasks.
The purpose of this study was to 1) compare knee joint kinematics and kinetics of fake-and-cut tasks of varying complexity in 51 female handball players and 2) present a case study of one athlete who ruptured her ACL three weeks after data collection. External knee joint moments and knee joint angles in all planes at the instant of the peak external knee abduction moment (KAM), as well as moment and angle time curves, were analyzed. Peak KAMs and knee internal rotation moments were substantially higher than published values obtained during simple change-of-direction tasks and, along with flexion angles, differed significantly between the tasks. Introducing a ball reception and a static defender increased joint loads, while these partially decreased again when anticipation was lacking. Our results suggest using game-specific assessments of injury risk, although complexity levels do not directly increase knee loading. The extreme values of several risk factors for the athlete injured after testing highlight the need for, and usefulness of, appropriate screenings.
The isolation measures adopted during the COVID-19 pandemic brought light to discussions about the importance of meaningful social relationships as a basic need for human well-being. But even before the pandemic outbreak in 2020 and 2021, organizations and scholars were already drawing attention to the growing number of lonely people in the world (World Economic Forum, 2019). Loneliness is an emotional distress caused by the lack of meaningful social connections, which affects people worldwide across all age groups, mainly young adults (Rook, 1984). The use of digital technologies has gained prominence as a means of alleviating this distress. For example, studies have shown the benefits of using digital games both to stimulate social interactions (Steinfield, Ellison & Lampe, 2008) and to enhance the effects of digital interventions for mental health treatments through gamification (Fleming et al., 2017). With these aspects in mind, the gamified app Noneliness was designed with the intention of reducing loneliness rates among young students at a German university. In addition to sharing the related works that supported the application development, this chapter also presents the aspects considered for the resource's design, its main functionalities, and the preliminary results related to the reduction of loneliness in the target audience.
During the periods of social isolation to contain the advance of COVID-19 in 2020 and 2021, educational institutions faced the challenge of adopting technological strategies not only to ensure continuity of students' classes, but also to support their mental health in a period of uncertainty and health risks. Loneliness is an emotional distress caused by the lack of meaningful social connections; it has increasingly affected young adults worldwide during the pandemic's social isolation and still bears psychological effects in the current post-pandemic period. In light of this challenge, the Noneliness app was developed as a way to bring together university communities to address issues related to loneliness and mental health disorders through a gamified and social online environment. In this paper, we present the app and its main functionalities (beta version) and discuss the preliminary results of a pilot clinical study conducted with university students in Germany (N = 12) to verify the app's efficacy and usability, alongside the challenges faced and the next steps to be taken regarding the platform's improvement.
We aim to debate, and eventually be able to carefully judge, how realistic the following statement of a young computer scientist is: "I would like to become an ethically correctly acting offensive cybersecurity expert". The objective of this article is neither to judge what is good or wrong behavior nor to present an overall solution to ethical dilemmas. Instead, the goal is to become aware of the various personal moral dilemmas a security expert may face during their working life. To this end, a total of 14 cybersecurity students from HS Offenburg were asked to evaluate several case studies according to different ethical frameworks. The results and particularities are discussed in light of these frameworks. We emphasize that different ethical frameworks can lead to different preferred actions and that the moral understanding of the frameworks may differ even from student to student.
Towards a Formal Verification of Seamless Cryptographic Rekeying in Real-Time Communication Systems
(2022)
This paper makes two contributions to the verification of communication protocols by transition systems. Firstly, the paper presents a modeling of a cyclic communication protocol using a synchronized network of transition systems. This protocol enables seamless cryptographic rekeying embedded into cyclic messages. Secondly, we test the protocol using the model checking verification technique.
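As a toy illustration of the verification technique, a safety property of such a rekeying protocol can be checked by exhaustive state exploration. The two-counter model below (sender and receiver key indices that may diverge by at most one during rekeying) is an assumption for illustration, not the paper's actual network of transition systems.

```python
from collections import deque

# States: (sender_key, receiver_key). Rekeying material travels inside the
# cyclic messages, so the receiver may lag at most one key behind the sender.
def successors(state):
    s, r = state
    nxt = []
    if s == r:
        nxt.append((s + 1, r))  # sender starts using the next key
    if s == r + 1:
        nxt.append((s, r + 1))  # receiver completes the rekeying
    return nxt

def check_safety(init, prop, bound=10):
    """Breadth-first reachability: explore all states up to a key-index
    bound and report whether the safety property holds everywhere."""
    seen, queue = {init}, deque([init])
    while queue:
        st = queue.popleft()
        if not prop(st):
            return False
        for nx in successors(st):
            if nx not in seen and max(nx) <= bound:
                seen.add(nx)
                queue.append(nx)
    return True

# Safety property: the key indices never drift apart by more than one,
# i.e. rekeying is seamless and communication is never interrupted.
ok = check_safety((0, 0), lambda st: 0 <= st[0] - st[1] <= 1)
print(ok)  # True
```

Dedicated model checkers perform the same exploration symbolically and over synchronized networks of such systems, but the reachability principle is the one shown here.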
In this paper, we propose a unified approach for network pruning and one-shot neural architecture search (NAS) via group sparsity. We first show that group sparsity via the recent Proximal Stochastic Gradient Descent (ProxSGD) algorithm achieves new state-of-the-art results for filter pruning. Then, we extend this approach to operation pruning, directly yielding a gradient-based NAS method based on group sparsity. Compared to existing gradient-based algorithms such as DARTS, the advantages of this new group sparsity approach are threefold. Firstly, instead of a costly bilevel optimization problem, we formulate the NAS problem as a single-level optimization problem, which can be optimally and efficiently solved using ProxSGD with convergence guarantees. Secondly, due to the operation-level sparsity, discretizing the network architecture by pruning less important operations can be safely done without any performance degradation. Thirdly, the proposed approach finds architectures that are both stable and well-performing on a variety of search spaces and datasets.
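The heart of the group-sparsity approach is the proximal operator of the group-lasso regularizer, which shrinks whole filter (or operation) groups and sets unimportant ones exactly to zero, making later discretization safe. Below is a minimal sketch with toy values, not the paper's training setup.

```python
import numpy as np

def group_prox(groups, lam, lr):
    """Proximal step for group-lasso regularization: each group is scaled by
    max(0, 1 - lam*lr/||w||), so groups whose norm falls below lam*lr are
    pruned exactly to zero."""
    out = []
    for w in groups:  # one array per filter / operation group
        norm = np.linalg.norm(w)
        scale = max(0.0, 1.0 - lam * lr / norm) if norm > 0 else 0.0
        out.append(scale * w)
    return out

# Toy example: one strong and one weak 3x3 filter.
filters = [np.ones((3, 3)), 0.01 * np.ones((3, 3))]
pruned = group_prox(filters, lam=1.0, lr=0.1)
# The weak filter collapses to exactly zero; the strong one shrinks slightly.
```

In ProxSGD this step is applied after each stochastic gradient update, which is what yields exact operation-level zeros instead of the small-but-nonzero weights typical of plain weight decay.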
In this paper, the authors focus on the description of polarization with the help of the Jones calculus and on the application of polarization in photography. Furthermore, the effect of a circular polarization filter is described using the Jones calculus. Finally, an enhancement of the artistic and creative possibilities in photography through quantization or parametrization of the Jones matrices is presented.
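As a brief illustration of the formalism, a circular polarization filter can be modeled in the Jones calculus as a linear polarizer followed by a quarter-wave plate with its fast axis at 45°. The sketch below uses one common sign convention and drops global phase factors.

```python
import numpy as np

# Jones matrices (one common convention; global phases dropped).
LP_h = np.array([[1, 0], [0, 0]], dtype=complex)            # horizontal linear polarizer
QWP_45 = (1 / np.sqrt(2)) * np.array([[1, -1j], [-1j, 1]])  # quarter-wave plate, fast axis at 45 deg

# Light traverses the polarizer first, then the wave plate,
# so the combined operator is the product in reverse order.
circular_polarizer = QWP_45 @ LP_h

jones_in = np.array([1, 0], dtype=complex)  # horizontally polarized input
out = circular_polarizer @ jones_in
# out has equal amplitudes and a 90-degree phase shift between its
# components: circularly polarized light.
```

Composing elements by matrix multiplication is exactly what makes the calculus convenient for analyzing stacked photographic filters.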
Due to the increasing aging of the population, the number of elderly people requiring care is growing in most European countries. However, the number of caregivers working in nursing homes and in daily care services is declining in countries like Germany and Italy. This limits the time for interpersonal communication. Furthermore, as a result of the Covid-19 pandemic, social distancing during contact restrictions became more important, causing an additional reduction of personal interaction. This social isolation can strongly increase emotional stress. Robotic assistance could contribute to addressing this challenge on three levels: (1) supporting caregivers in responding individually to the needs of patients and residents in nursing homes; (2) observing patients' health and emotional state; (3) complying with high hygiene standards and minimizing human contact if required. To further the research on emotional aspects and the acceptance of robotic assistance in care, we conducted two studies in which elderly participants interacted with the social robot Misa. Facial expression and voice analysis were used to identify and measure the emotional state of the participants during the interaction. While interpersonal contact plays a major role in elderly care, the findings reveal that robotic assistance generates added value for both caregivers and patients, and that patients show emotions while interacting with the robot.
Voice user interfaces (VUIs) offer an intuitive, fast and convenient way for humans to interact with machines and computers. Yet whether they will be truly successful and find widespread uptake in the near future depends on the user experience (UX) they offer. With this survey-based study (n = 108), we aim to identify the major annoyances German voice assistant users are facing in voice-driven human-computer interactions. The results of our questionnaire show that irritations appear in six categories: privacy issues, unwanted activation, comprehensibility, response quality, conversational design, and voice characteristics. Our findings can help identify key areas of work to optimize the voice user experience in order to achieve greater adoption of the technology. In addition, they can provide valuable information for the further development and standardization of voice user experience (VUX) research.
In this study, various imaging algorithms for the localization of objects have been investigated. To this end, an Ultra-Wideband (UWB) radar-based experimental setup with a circular antenna array was designed as part of this work. This concept could be particularly useful in microwave medical imaging applications. In order to validate its applicability in microwave imaging, different imaging algorithms have been evaluated and compared by means of our experimental setup. Accurate imaging results have been achieved with our system under multiple test scenarios.
In this study, an approach to a microwave-based radar system for the localization of objects has been proposed. This could be particularly useful in microwave imaging applications such as cardiac catheter detection. An experimental system is defined and realized with the selection of an appropriate antenna design. Hardware control functions and different imaging algorithms are implemented as well. The functionality of this measurement setup has been analyzed in multiple test scenarios, and it proved capable of locating multiple objects as well as extended objects.
Due to the Covid-19 pandemic, the RoboCup WorldCup 2021 was held completely remotely. For this competition, the Webots simulator (https://cyberbotics.com/) was used, so all teams needed to transfer their robots to the simulation. This paper describes our experiences during this process, as well as a genetic learning approach to improving our walk engine to allow more stable and faster movement in the simulation. To scale easily, we used a Docker setup. The resulting movement was one of the outstanding features that finally led to the championship title.
Sweaty has already participated several times in RoboCup soccer competitions (Adult Size). Now the work is focused on stabilizing the gait. Moreover, we would like to overcome the constraints of a ZMP algorithm that requires a horizontal footplate as a precondition for the simplification of the equations. In addition, we would like to switch between impedance and position control with a fuzzy-like algorithm that might help to minimize jerks when Sweaty's feet touch the ground.
Spatially Distributed Wireless Networks (SDWN) are one of the basic technologies for Internet of Things (IoT) and Industrial Internet of Things (IIoT) applications. For many of these applications, SDWNs have strict requirements such as low cost, simple installation and operation, and high flexibility and mobility. Among the different Narrowband Wireless Wide Area Networking (NBWWAN) technologies introduced to address these categories of wireless networking requirements, Narrowband Internet of Things (NB-IoT) is gaining traction due to its attractive system parameters, energy-saving mode of operation with low data rates and bandwidth, and its applicability in 5G use cases. Since several technologies are available and the underlying use cases come with various requirements, it is essential to perform a systematic comparative analysis of competing technologies in order to choose the right one. It is also important to perform testing during the different phases of the system development life cycle. This paper describes a systematic test environment for the automated testing of radio communication and systematic measurements of the performance of NB-IoT.
Electrode modelling and simulation of diagnostic and pulmonary vein isolation in atrial fibrillation
(2022)
Seismic data often has missing traces due to technical acquisition or economic constraints. A complete dataset is crucial for several processing and inversion techniques. Deep learning algorithms based on convolutional neural networks (CNNs) have provided alternative solutions that overcome the limitations of traditional interpolation methods, e.g. assumptions of data regularity and linearity. There are two different paradigms of CNN methods for seismic interpolation. The first one, so-called deep prior interpolation (DPI), trains a CNN to map random noise to a complete seismic image using only the decimated image itself. The second one, referred to as the standard deep learning method, trains a CNN to map a decimated seismic image to a complete one using a dataset of complete and artificially decimated images. In this research, we systematically compare the performance of both methods for different quantities of regularly and irregularly missing traces using four datasets. We evaluate the results of both methods using five well-known metrics. We found that the DPI method performs better than the standard method if the percentage of missing traces is low (10%), while the standard method performs better if the level of decimation is high (50%).
In this work, we explore three deep learning algorithms applied to seismic interpolation: deep prior interpolation (DPI), the standard approach, and generative adversarial networks (GANs). The standard and GAN approaches rely on a dataset of complete and decimated seismic images for the training process, while the DPI method learns from the decimated image itself, without training images. We carry out two main experiments, considering 10%, 30%, and 50% regular and irregular decimation. The first tests the optimal situation for the GAN and standard approaches, where training and testing images are from the same dataset. The second tests the ability of the GAN and standard methods to learn simultaneously from three datasets and generalize to a fourth dataset not used during training. The standard method provides the best results in the first experiment, when the training distribution is similar to the testing one. In this situation, the DPI approach reports the second-best results. In the second experiment, the standard method shows the ability to learn three data distributions simultaneously and effectively for the regular case. In the irregular case, the DPI approach is more effective. The GAN approach is the least effective of the three deep learning methods in both experiments.
DE\GLOBALIZE
(2022)
The artistic research cycle DE\GLOBALIZE is a media ecological search movement for the terrestrial. After examining matters of fact in India (2014-18), matters of concern in Egypt (2016-2019) and matters of care in the Upper Rhine (2018-22), the focus turns toward matters of violence in the Congo (2022). From matter to mater, mother-earth, the garden to exploitation. From science, water and climate to migration, oppression and extermination.
The long-term research is accessible through interactive web documentation. The platform serves as a continuous media-archaeological archive for a speculative ethnography. The relational structure of the videographic essay is enabling the forensic processing of single documents in the sense of the actor-network theory.
The subject of the presentation at IFM is a field trip to the Congo planned for March 2022, which will focus on the ambivalence of violence and care in collaboration with local artists. The field trip is based on the postcolonial reflection luderitzcargo by the author from 1996, in which a freight container was transformed into a translocal cinema in Namibia.
Through the journey to Congo, a group of media artists, a psychotherapist, a theater dramaturg, a filmmaker and a philosopher intend to explore the political, technological and psycho-geographic borders. By artistic interventions with locals, we want to interfere with relational string figures as part of the new Earth Politics. They are focusing on the displaced consumption of resources which are hard-fought and guarantee prosperity in the global north. The so-called ghost acreages are repressed and justified as part of a civilizational mission. With this trip, we want to confront our self-lies with the ones of our hosts. We want to confront ourselves with the foreign, the dark and the displaced ghosts within ourselves. In the presentation at the #IFM2022 Conference, the platform DE\GLOBALIZE will be problematized itself as an example of epistemic violence for the ethnographic memory of (Western) knowledge.
We are not the missionaries but the perplexed travellers. In our search movement, we are dealing with psychoanalysis, video, performance and trance. As disoriented white men, we attempt the reversal of Black Skin, White Masks by Frantz Fanon without blackfacing. We will not only care about the sensitivity of our skin but also about that of our g/hosts and that of mother earth.
A novel solution for the control of assistive technologies such as smart wheelchairs and robotic arms is the use of eye tracking devices [10, 4]. In this context, usage-supporting methods like artificial feedback are not well explored. Vibrotactile feedback has been shown to be helpful in decreasing the cognitive load on the visual and auditory channels and can provide a perception of touch [17]. People with severe limitations of motor functions could benefit from eye tracking controls supported by vibrotactile feedback. In this study, fundamental results on the design of an appropriate vibrotactile feedback system for eye tracking applications are presented. We show that a perceivable vibrotactile stimulus has no significant effect on the accuracy and precision of a head-worn eye tracking device. It is anticipated that the results of this paper will lead to new insights into the design of vibrotactile feedback for eye tracking applications and eye tracking controls.
This work focuses on the dependencies between typical design parameters of surface acoustic wave (SAW) resonators and the nonlinear emitted signals of second and third order. The parameters metallization ratio and pitch are used as examples, but the approach can be extended to other design parameters as well. It is shown that the interaction between the nonlinear current generation and the linear admittance defines the measured nonlinear power signals. It is also discussed that changes in linear properties become more pronounced in the nonlinear responses. Therefore, slight effects on linear parameters will have a significant influence on the observed nonlinearity.
The aim of this study is to identify indicators at the country level that could prove useful in improving the effectiveness of fraud detection in European Structural and Investment Funds. The chapter analyses EU funds from the period 2014–2020. The study suggests the convenience of tracking funds, especially in countries with higher GDP and higher transparency levels, and the lesser relevance of the number of irregularities for countries with higher GDP and those receiving larger funds. Fraud and fraud detection rates in individual funds vary significantly across states. Federal states, such as the Federal Republic of Germany, are comparatively successful in detecting fraud in EU funds.
Currently, many theoretical as well as practically relevant questions regarding the transferability and robustness of Convolutional Neural Networks (CNNs) remain unsolved. While ongoing research efforts are engaging these problems from various angles, in most computer-vision-related cases these approaches can be generalized to investigations of the effects of distribution shifts in image data. In this context, we propose to study the shifts in the learned weights of trained CNN models. Here we focus on the properties of the distributions of the dominantly used 3×3 convolution filter kernels. We collected and publicly provide a dataset with over 1.4 billion filters from hundreds of trained CNNs, using a wide range of datasets, architectures, and vision tasks. In a first use case of the proposed dataset, we show highly relevant properties of many publicly available pre-trained models for practical applications: I) We analyze distribution shifts (or the lack thereof) between trained filters along different axes of meta-parameters, such as the visual category of the dataset, task, architecture, or layer depth. Based on these results, we conclude that model pre-training can succeed on arbitrary datasets if they meet size and variance conditions. II) We show that many pre-trained models contain degenerated filters which make them less robust and less suitable for fine-tuning on target applications. Data & project website: https://github.com/paulgavrikov/cnn-filter-db.
Deep learning models are intrinsically sensitive to distribution shifts in the input data. In particular, small, barely perceivable perturbations to the input data can force models to make wrong predictions with high confidence. A common defense mechanism is regularization through adversarial training, which injects worst-case perturbations back into training to strengthen the decision boundaries and reduce overfitting. In this context, we investigate the 3×3 convolution filters that form in adversarially-trained models. Filters are extracted from 71 public models of the ℓ∞-RobustBench CIFAR-10/100 and ImageNet1k leaderboards and compared to filters extracted from models built on the same architectures but trained without robust regularization. We observe that adversarially-robust models appear to form more diverse, less sparse, and more orthogonal convolution filters than their normal counterparts. The largest differences between robust and normal models are found in the deepest layers and the very first convolution layer, which consistently and predominantly forms filters that can partially eliminate perturbations, irrespective of the architecture.
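Diversity, sparsity, and orthogonality of a filter bank can be quantified with simple diagnostics. The sketch below uses assumed metric definitions for illustration (fraction of near-zero filters for sparsity, mean absolute pairwise cosine for orthogonality), not the exact measures of the studies above.

```python
import numpy as np

def filter_stats(filters, eps=1e-3):
    """Diagnostics for a bank of 3x3 filters (assumed definitions):
    sparsity = fraction of filters with near-zero norm ("degenerated"),
    overlap  = mean absolute cosine between distinct normalized filters
               (0 means a perfectly orthogonal bank)."""
    flat = filters.reshape(len(filters), -1)
    norms = np.linalg.norm(flat, axis=1)
    sparsity = float(np.mean(norms < eps))
    unit = flat[norms >= eps] / norms[norms >= eps, None]
    gram = np.abs(unit @ unit.T)
    n = len(unit)
    overlap = float((gram.sum() - n) / (n * (n - 1)))
    return sparsity, overlap

# Toy bank: two orthogonal edge detectors and one dead (all-zero) filter.
bank = np.stack([
    np.array([[1, 0, -1]] * 3, float),                       # vertical edge
    np.array([[1, 1, 1], [0, 0, 0], [-1, -1, -1]], float),   # horizontal edge
    np.zeros((3, 3)),                                        # degenerated filter
])
sparsity, overlap = filter_stats(bank)
# sparsity = 1/3; the two edge filters are orthogonal, so overlap = 0
```

Applied to real filter banks, lower sparsity and lower overlap would correspond to the "less sparse, more orthogonal" behavior reported for adversarially-robust models.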
Recent work has investigated the distributions of learned convolution filters through a large-scale study containing hundreds of heterogeneous image models. Surprisingly, on average, the distributions only show minor drifts in comparisons across the various studied dimensions, including the learned task, image domain, or dataset. However, among the studied image domains, medical imaging models appeared to show significant outliers through "spiky" distributions and, therefore, to learn clusters of highly specific filters different from other domains. Following this observation, we study the collected medical imaging models in more detail. We show that instead of fundamental differences, the outliers are due to specific processing in some architectures. On the contrary, for standardized architectures we find that models trained on medical data do not significantly differ in their filter distributions from similar architectures trained on data from other domains. Our conclusions reinforce previous hypotheses stating that pre-training of imaging models can be done with any kind of diverse image data.
3D printing offers customisation capabilities regarding suspensions for oscillators of vibration energy harvesters. Adjusting printing parameters or the geometry makes it possible to influence dynamic properties such as the resonance frequency or the bandwidth of the oscillator. This paper presents simulation results and measurements for a spiral-shaped suspension printed with polylactic acid (PLA) at different layer heights. Eigenfrequencies were simulated and measured, and damping ratios were determined experimentally.
The EREMI project is a 2-year project funded under the ERASMUS+ framework programme. Its team has developed, and will validate, an advanced higher-education programme, including life-long learning, on the interdisciplinary topic of resource efficiency in manufacturing industries and the overall system optimization of low- or non-digitized physical infrastructure. These goals will be achieved by applying IoT technologies towards efficient industrial systems and by building highly educated human capital on these economically, politically, and technically crucial topics, which are highly relevant for the rapidly developing industries and economies of Bulgaria, North Macedonia, and Romania, countries undergoing intensive economic and industrial transformation. Efficiency will be attained by drawing on the experience and expertise of the involved German partner organisation.
The importance of machine learning (ML) has been increasing dramatically for years. From assistance systems to production optimisation to support in the health sector, almost every area of daily life and industry comes into contact with machine learning. Besides all the benefits that ML brings, its lack of transparency and the difficulty of establishing traceability pose major risks. While there are solutions that make the training of machine learning models more transparent, traceability is still a major challenge, as is ensuring the identity of a model. Unnoticed modification of a model is a further danger when using ML. One solution is to create an ML birth certificate and an ML family tree secured by blockchain technology. Important information about training, and about changes to the model through retraining, can be stored in a blockchain and accessed by any user, creating more security and traceability for an ML model.
Despite the success of convolutional neural networks (CNNs) in many academic benchmarks for computer vision tasks, their application in the real world still faces fundamental challenges. One of these open problems is the inherent lack of robustness, unveiled by the striking effectiveness of adversarial attacks. Current attack methods are able to manipulate the network's prediction by adding specific but small amounts of noise to the input. In turn, adversarial training (AT) aims to achieve robustness against such attacks, and ideally a better model generalization ability, by including adversarial samples in the training set. However, an in-depth analysis of the resulting robust models beyond adversarial robustness is still pending. In this paper, we empirically analyze a variety of adversarially trained models that achieve high robust accuracies when facing state-of-the-art attacks, and we show that AT has an interesting side-effect: it leads to models that are significantly less overconfident in their decisions than non-robust models, even on clean data. Further, our analysis of robust models shows that not only AT but also the model's building blocks (like activation functions and pooling) have a strong influence on the models' prediction confidences. Data & Project website: https://github.com/GeJulia/robustness_confidences_evaluation
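A minimal sketch of the kind of confidence statistic such an analysis relies on: the mean winning-class softmax probability over a batch of (clean) inputs. The function names are illustrative, not taken from the paper's code.

```python
import numpy as np

def softmax(logits):
    """Row-wise softmax with the usual max-subtraction for numerical stability."""
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def mean_confidence(logits):
    """Average probability assigned to the predicted (argmax) class."""
    return float(softmax(logits).max(axis=1).mean())
```

An overconfident model yields values near 1.0 even on inputs it misclassifies; comparing this statistic between robust and non-robust models on identical clean batches makes the reported side-effect measurable.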
Despite the success of convolutional neural networks (CNNs) in many academic benchmarks for computer vision tasks, their application in the real world still faces fundamental challenges. One of these open problems is the inherent lack of robustness, unveiled by the striking effectiveness of adversarial attacks. Adversarial training (AT) is often considered a remedy to train more robust networks. In this paper, we empirically analyze a variety of adversarially trained models that achieve high robust accuracies when facing state-of-the-art attacks, and we show that AT has an interesting side-effect: it leads to models that are significantly less overconfident in their decisions than non-robust models, even on clean data. Further, our analysis of robust models shows that not only AT but also the model's building blocks (like activation functions and pooling) have a strong influence on the models' prediction confidences.
Over the last years, Convolutional Neural Networks (CNNs) have been the dominating neural architecture in a wide range of computer vision tasks. From an image- and signal-processing point of view, this success might be somewhat surprising, as the inherent spatial pyramid design of most CNNs apparently violates basic signal-processing laws, i.e., the sampling theorem, in their down-sampling operations. However, since poor sampling appeared not to affect model accuracy, this issue was broadly neglected until model robustness started to receive more attention. Recent work in the context of adversarial attacks and distribution shifts showed that there is a strong correlation between the vulnerability of CNNs and aliasing artifacts induced by poor down-sampling operations. This paper builds on these findings and introduces an aliasing-free down-sampling operation that can easily be plugged into any CNN architecture: FrequencyLowCut pooling. Our experiments show that, in combination with simple and fast adversarial training using the Fast Gradient Sign Method (FGSM), our hyper-parameter-free operator substantially improves model robustness and avoids catastrophic overfitting. Our code is available at https://github.com/GeJulia/flc_pooling
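The idea behind an aliasing-free down-sampling step can be sketched for a single 2D channel (an assumed simplification, not the paper's exact operator): remove all frequency components above the Nyquist limit of the target resolution before 2× subsampling, so aliasing cannot occur by construction.

```python
import numpy as np

def flc_pool(x):
    """Alias-free 2x down-sampling of a 2D array via a hard frequency cut.

    Assumes height and width are divisible by 4 so the cropped spectrum
    keeps the DC component centered.
    """
    h, w = x.shape
    X = np.fft.fftshift(np.fft.fft2(x))          # DC moved to the center
    ch, cw = h // 2, w // 2
    # Keep only the central (low-frequency) quarter of the spectrum.
    X_low = X[ch - ch // 2: ch + (ch + 1) // 2,
              cw - cw // 2: cw + (cw + 1) // 2]
    # Inverse transform at the reduced resolution; divide by 4 to
    # compensate for the 4x smaller transform length (2x per axis).
    return np.real(np.fft.ifft2(np.fft.ifftshift(X_low))) / 4.0
```

In a CNN, such an operation would replace strided convolutions or standard pooling per channel; since it has no learnable parameters, it is hyper-parameter-free in the sense described above.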
Many commonly well-performing convolutional neural network models have been shown to be susceptible to input-data perturbations, indicating low model robustness. Adversarial attacks are specifically optimized to reveal model weaknesses by generating small, barely perceptible image perturbations that flip the model's prediction. Robustness against attacks can be gained, for example, by using adversarial examples during training, which effectively reduces the measurable model attackability. In contrast, research analyzing the source of a model's vulnerability is scarce. In this paper, we analyze adversarially trained, robust models in the context of a specifically suspicious network operation, the downsampling layer, and provide evidence that robust models have learned to downsample more accurately and suffer significantly less from aliasing than baseline models.
Micronization of biochar (BC) may ease its application in agriculture. For example, fine biochar powders can be applied as suspensions via drip-irrigation systems or can be used to produce granulated fertilizers. However, micronization may affect important physical biochar properties such as the water-holding capacity (WHC) or the porosity.
The development of a 3D printed force sensor for a gripper was studied, applying an embedded constantan wire as the sensing element. In the first section, the state of the art is explained. In the main section of the paper, the modeling, simulation, and verification of a sensor element are described for a three-point bending test conducted in accordance with DIN EN ISO 178. The Fused Filament Fabrication (FFF) 3D printing process used for manufacturing the sensor samples in combination with an industrial robot is presented. A comparison between theory and practice is considered in detail. Finally, an outlook is given regarding the integration of the sensor element into gripper jaws.
This paper presents the development of a capacitive level sensor for robotics applications, designed for measuring liquid levels during a pouring process. The proposed sensor design exploits the advantages of guard electrodes in combination with passive shielding to increase resistance against external influences. This is important for reliable operation in rapidly changing measurement environments, as they occur in the field of robotics. The non-contact liquid-level sensor avoids contamination and complies with food guidelines, so the designed sensor can be utilized in gastronomic applications. Two versions of the sensor were simulated, fabricated, and compared: the first version is based on copper electrodes, while the other is fully 3D printed with electrodes made of conductive polylactic acid (PLA).
Rising societal demands require more sustainable products and technologies. Although numerous methods and tools have been developed over the last decades to support environmentally friendly product and process development, an interdisciplinary knowledge base of eco-innovative examples linked to eco-innovation problems and solution principles is lacking. The paper proposes an ontology of examples of eco-friendly products and technologies assigned to the Inventive Principles (IPs) of the TRIZ methodology in accordance with the German TRIZ standard VDI 4521. The examples of sustainable technologies and products build a database for sharing and reusing eco-innovation knowledge. The ontology acts as a tool for the systematic solving of specific environmental problems in typical life-cycle phases, for different environmental impact categories and engineering domains. Finally, the paper defines a future research agenda in the field of TRIZ-based systematic eco-innovation.
Separation Estimation with Thermal Cameras for Separation Monitoring in Human-Robot Collaboration
(2022)
Human-robot collaborative applications have the drawback of being less efficient than their non-collaborative counterparts. One of the main reasons is that the robot has to slow down when a human being is within its operating space. There are different approaches to dynamic speed and separation monitoring in human-robot collaborative applications; one approach additionally differentiates between human and non-human objects to increase efficiency in speed and separation monitoring. This paper proposes to estimate the separation distance by measuring the temperature of the approaching object. Measurements show that the measured temperature of a human being decreases by 1 °C per meter of distance from the sensor. This allows an estimation of the separation between a robotic system and a human being.
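The reported relationship lends itself to a simple back-of-envelope estimator. The sketch below assumes the 1 °C-per-meter slope from the measurements above and a hypothetical close-range reference reading; both constants and names are illustrative, not taken from the paper.

```python
# Assumed slope of apparent-temperature decrease with distance (degC per meter),
# taken from the reported measurement result.
DEG_C_PER_METER = 1.0

def estimate_separation(t_reference_c, t_measured_c):
    """Estimate separation distance (m) from a thermal-camera reading.

    t_reference_c: apparent temperature at (near-)zero distance,
    t_measured_c:  current apparent temperature of the object.
    Readings above the reference are clamped to zero distance.
    """
    return max(0.0, (t_reference_c - t_measured_c) / DEG_C_PER_METER)
```

Such an estimate could feed a speed-and-separation-monitoring loop, with the robot's speed limit derived from the estimated distance.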
Estimating the Robustness of Classification Models by the Structure of the Learned Feature-Space
(2022)
Over the last decade, the development of deep image classification networks has mostly been driven by the search for the best performance in terms of classification accuracy on standardized benchmarks like ImageNet. More recently, this focus has been expanded by the notion of model robustness, i.e., the generalization abilities of models towards previously unseen changes in the data distribution. While new benchmarks, like ImageNet-C, have been introduced to measure robustness properties, we argue that fixed test sets are only able to capture a small portion of possible data variations and are thus limited and prone to generating new overfitted solutions. To overcome these drawbacks, we suggest estimating the robustness of a model directly from the structure of its learned feature space. We introduce robustness indicators that are obtained via unsupervised clustering of latent representations from a trained classifier and show very high correlations to model performance on corrupted test data.
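To make the notion of feature-space structure concrete, here is a hedged sketch of a simple separability score (an illustrative indicator, not the paper's exact metric): the ratio of the mean between-class centroid distance to the mean within-class spread of latent representations. Higher values suggest a cleaner cluster structure.

```python
import numpy as np

def separability(features, labels):
    """Ratio of between-class centroid distance to within-class spread.

    features: (n_samples, n_dims) latent representations,
    labels:   (n_samples,) class assignments (here: known labels stand in
              for the cluster assignments an unsupervised method would give).
    """
    classes = np.unique(labels)
    centroids = np.stack([features[labels == c].mean(axis=0) for c in classes])
    # Mean distance of samples to their own class centroid.
    within = np.mean([
        np.linalg.norm(features[labels == c] - centroids[i], axis=1).mean()
        for i, c in enumerate(classes)
    ])
    # Mean pairwise distance between class centroids.
    diffs = centroids[:, None, :] - centroids[None, :, :]
    pair = np.linalg.norm(diffs, axis=-1)[np.triu_indices(len(classes), 1)]
    return float(pair.mean() / (within + 1e-12))
```

A model whose latent space keeps classes compact and far apart scores high; the hypothesis above is that such structure correlates with robustness to corrupted inputs.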