In many application areas, Deep Reinforcement Learning (DRL) has led to breakthroughs. In Curriculum Learning, examples are not presented to the machine learning algorithm in random order but in a meaningful order of increasing difficulty. This has been used in many application areas to further improve the results of learning systems or to reduce their learning time. Such approaches range from learning plans created manually by domain experts to those created automatically. The automated creation of learning plans is one of the biggest challenges. In this work, we investigate an approach in which a trainer learns in parallel and analogously to the student in order to automatically create a learning plan for the student; we call this Double Deep Reinforcement Learning (DDRL). Three reward functions based on the learner's reward, Friendly, Adversarial, and Dynamic, are compared. The domain for evaluation is kicking with variable distance, direction, and relative ball position in the SimSpark simulated soccer environment. As a result, Statistic Curriculum Learning (SCL) performs better than a random curriculum with respect to training time and result quality. DDRL reaches a quality comparable to the baseline and outperforms it significantly in shorter training runs in the distance-direction subdomain, reducing the number of required training cycles by almost 50%.
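The abstract does not spell out the three trainer reward functions; the following Python sketch, with entirely illustrative formulas, shows one way a Friendly, an Adversarial, and a Dynamic trainer reward could be derived from the learner's reward in such a DDRL setup.

```python
# Illustrative sketch only - the paper's exact reward formulas are not given
# in the abstract. Each trainer reward is derived from the learner's reward
# on the tasks the trainer proposed.

def friendly_reward(learner_reward: float) -> float:
    """Trainer is rewarded when the learner succeeds."""
    return learner_reward

def adversarial_reward(learner_reward: float) -> float:
    """Trainer is rewarded when the learner struggles."""
    return -learner_reward

def dynamic_reward(learner_reward: float, progress: float) -> float:
    """Trainer reward shifts from adversarial to friendly as training
    progresses (progress in [0, 1])."""
    return (2.0 * progress - 1.0) * learner_reward
```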
Ensuring that software applications present their users with the most recent version of data is not trivial. Self-adjusting computations are a technique for automatically and efficiently recomputing output data whenever some input changes.
This article describes the software architecture of a large, commercial software system built around a framework for coarse-grained self-adjusting computations in Haskell. It discusses advantages and disadvantages based on long-term experience. The article also presents a demo of the system and explains the API of the framework.
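The framework itself is written in Haskell and its API is not reproduced in the abstract; as a language-neutral illustration of the core idea (cache results, recompute only when an input actually changed), here is a minimal Python sketch with hypothetical names.

```python
# Minimal sketch of the core idea behind coarse-grained self-adjusting
# computation: each node caches its result and recomputes only when one of
# its inputs has changed. Names are illustrative, not the framework's API.

class Input:
    def __init__(self, value):
        self.value, self.version = value, 0

    def set(self, value):
        if value != self.value:
            self.value, self.version = value, self.version + 1

class Computation:
    def __init__(self, fn, *inputs):
        self.fn, self.inputs = fn, inputs
        self._seen, self._cached = None, None

    def get(self):
        versions = tuple(i.version for i in self.inputs)
        if versions != self._seen:           # some input changed
            self._cached = self.fn(*(i.value for i in self.inputs))
            self._seen = versions
        return self._cached                  # otherwise reuse cached result

a, b = Input(1), Input(2)
total = Computation(lambda x, y: x + y, a, b)
print(total.get())   # computes: 3
a.set(10)
print(total.get())   # recomputes only because 'a' changed: 12
```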
Detecting Images Generated by Deep Diffusion Models using their Local Intrinsic Dimensionality
(2023)
Diffusion models have recently been successfully applied to the visual synthesis of strikingly realistic images. This raises strong concerns about their potential for malicious purposes. In this paper, we propose using the lightweight multi Local Intrinsic Dimensionality (multiLID), originally developed in the context of detecting adversarial examples, for the automatic detection of synthetic images and the identification of the corresponding generator networks. In contrast to many existing detection approaches, which often only work for GAN-generated images, the proposed method provides close to perfect detection results in many realistic use cases. Extensive experiments on known and newly created datasets demonstrate that the proposed multiLID approach is superior in diffusion detection and model identification. Since the empirical evaluations of recent publications on the detection of generated images are often mainly focused on the "LSUN-Bedroom" dataset, we further establish a comprehensive benchmark for the detection of diffusion-generated images, including samples from several diffusion models with different image sizes. The code for our experiments is provided at https://github.com/deepfake-study/deepfake-multiLID.
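The multiLID features are built on the classic maximum-likelihood LID estimator over k-nearest-neighbor distances (Amsaleg et al.); the sketch below shows that base estimator in Python. How the paper aggregates such estimates over network layers and feature maps is not detailed in the abstract.

```python
import numpy as np

def lid_mle(query: np.ndarray, reference: np.ndarray, k: int = 20) -> float:
    """Maximum-likelihood LID estimate of 'query' w.r.t. a reference batch,
    following the common k-NN estimator: the negative inverse mean of the
    log distance ratios to the k-th neighbor."""
    d = np.linalg.norm(reference - query, axis=1)
    d = np.sort(d)[:k]          # k smallest distances
    d = d[d > 0]                # guard against a zero self-distance
    return -1.0 / np.mean(np.log(d / d[-1]))
```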
Erlang is a functional programming language with dynamic typing. The language offers great flexibility for destructuring values through pattern matching and dynamic type tests. Erlang also comes with a type language supporting parametric polymorphism, equi-recursive types, as well as union and a limited form of intersection types. However, type signatures only serve as documentation; there is no check that a function body conforms to its signature.
Set-theoretic types and semantic subtyping fit Erlang's feature set very well. They allow expressing nearly all constructs of its type language and provide means for statically checking type signatures. This article brings set-theoretic types to Erlang and demonstrates how existing Erlang code can be statically type checked with no or only minor modifications to the code. Further, the article formalizes the main ingredients of the type system in a small core calculus, reports on an implementation of the system, and compares it with other static type checkers for Erlang.
In recent years, predictive maintenance tasks, especially for bearings, have become increasingly important. Solutions for these use cases concentrate on the classification of faults and the estimation of the Remaining Useful Life (RUL). As of today, these solutions suffer from a lack of training samples. In addition, these solutions often require high-frequency accelerometers, incurring significant costs. To overcome these challenges, this research proposes a combined classification and RUL estimation solution based on a Convolutional Neural Network (CNN) and a Long Short-Term Memory (LSTM) network. This solution relies on a hybrid feature extraction approach, making it especially appropriate for low-cost accelerometers with low sampling frequencies. In addition, it uses transfer learning to be suitable for applications with only a few training samples.
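The abstract does not give the network details; the following PyTorch sketch shows a plausible hybrid of the kind described, with hypothetical layer sizes: a 1-D CNN for local feature extraction, an LSTM for temporal modeling, and separate heads for fault classification and RUL regression.

```python
import torch
import torch.nn as nn

class CnnLstmRul(nn.Module):
    """Hypothetical CNN + LSTM network of the kind described: a 1-D CNN
    extracts local features from vibration windows, an LSTM models their
    temporal evolution, and two heads output fault class and RUL."""
    def __init__(self, in_channels=1, n_classes=4):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(in_channels, 16, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(4),
        )
        self.lstm = nn.LSTM(input_size=16, hidden_size=32, batch_first=True)
        self.cls_head = nn.Linear(32, n_classes)   # fault classification
        self.rul_head = nn.Linear(32, 1)           # RUL regression

    def forward(self, x):              # x: (batch, channels, samples)
        f = self.cnn(x)                # (batch, 16, samples/4)
        f = f.transpose(1, 2)          # (batch, time, features) for the LSTM
        _, (h, _) = self.lstm(f)
        h = h[-1]                      # last hidden state
        return self.cls_head(h), self.rul_head(h)
```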
TSN, or Time-Sensitive Networking, is becoming an essential technology for integrated networks, enabling deterministic and best-effort traffic to coexist on the same infrastructure. In order to properly configure, run, and secure such a TSN, monitoring functionality is a must. The TSN standard already includes provisions for such functionality, and there are different methods to choose from. We implemented different methods to measure the time synchronisation accuracy between devices as a C library and compared the measurement results. Furthermore, the library has been integrated into the ControlTSN engineering framework.
Seismic data processing relies on multiples attenuation to improve inversion and interpretation. Radon-based algorithms are often used to discriminate between multiples and primaries. Deep learning, based on convolutional neural networks (CNNs), has shown encouraging applications for demultiple that could mitigate the challenges of Radon-based methods. In this work, we investigate new strategies to train a CNN for multiples removal based on different loss functions. We propose combining primaries and multiples labels in the loss for training a CNN to predict primaries, multiples, or both simultaneously. Moreover, we investigate two distinctive training methods for all the strategies: a UNet trained on the minimum absolute error (L1), and adversarial training (GAN-UNet). We test the models trained with the different strategies and methods on 400 synthetic data examples. We found that training to predict multiples, including the primaries …
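One plausible reading of the combined loss, sketched in PyTorch; the weighting and exact terms are assumptions, since the abstract does not define them:

```python
import torch
import torch.nn.functional as F

def combined_l1_loss(pred_primaries, pred_multiples,
                     primaries, multiples, alpha=0.5):
    """Weighted sum of L1 terms over primaries and multiples labels -
    an illustrative stand-in for the combined loss described."""
    return (alpha * F.l1_loss(pred_primaries, primaries)
            + (1.0 - alpha) * F.l1_loss(pred_multiples, multiples))
```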
Complex tourism products with intangible service components are difficult to explain to potential customers. This research elaborates on the use of virtual reality (VR) in the field of shore excursions. A theoretical research model based on the technology acceptance model (TAM) was developed, and hypotheses were proposed. Cruise passengers were invited to test 360° excursion images on a landing page. Data was collected using an online questionnaire and analyzed using the PLS-SEM method. The results provide theoretical implications for TAM research in the field of cruise tourism. Furthermore, the results and implications indicate the potential of virtual 360° shore excursion presentations for the cruise industry.
The paper compares different anti-windup strategies for the current control of inverter-fed permanent magnet synchronous machines (PMSM) controlled by pulse-width modulation. The focus is on the drive behavior with a relatively large product of stator frequency and sampling time. A requirement for dynamically high-quality anti-windup measures is, among other things, a sufficiently accurate decoupling of the direct-axis and quadrature-axis components of the stator current, even at high stator frequencies. Discrete-time models of the electrical subsystem of the PMSM are well suited for this purpose; the model found to be the most accurate in a preliminary investigation is used as the basis for all anti-windup methods examined. Simulation studies and measurement results document the performance of the compared methods.
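As a generic illustration of the mechanism being compared (not any specific strategy from the paper), a discrete PI step with back-calculation anti-windup might look like this:

```python
def pi_antiwindup_step(error, state, kp, ki, kb, ts, u_min, u_max):
    """One sampling step of a discrete PI controller with back-calculation
    anti-windup: the difference between the saturated and the unsaturated
    output is fed back to the integrator with gain kb. This shows the
    generic mechanism only; the compared strategies differ in detail."""
    u_unsat = kp * error + state
    u = min(max(u_unsat, u_min), u_max)               # actuator saturation
    state += ts * (ki * error + kb * (u - u_unsat))   # integrator update
    return u, state
```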
Current Harmonics Control Algorithm for inverter-fed Nonlinear Synchronous Electrical Machines
(2023)
Current harmonics are a well known challenge of electrical machines. They can be undesirable as they can cause instabilities in the control, generate additional losses and lead to torque ripples with noise. However, they can also be specifically generated in new methods in order to improve the machine behavior. In this paper, an algorithm for controlling current harmonics is proposed. It can be described as a combination of different PI controllers for defined angles of the machine with repetitive control characteristics for whole revolutions. The controller design is explained and important points where linearization is necessary are shown. Furthermore, the limits are analyzed and, for validation, measurement results with a permanently excited synchronous machine on the test bench are considered.
The nonlinear behavior of inverters is largely impacted by the interlocking and switching times. A method for the online identification of the switching times of semiconductors in inverters is presented in the following work. By identifying these times, it is possible to compensate for the nonlinear behavior, reduce the interlocking time, and use the information for diagnostic purposes. The method is first derived theoretically by examining different inverter switching cases and determining potential identification possibilities. It is then modified to consider the entire module for more robust identification. The methodology, including limitations and boundary conditions, is investigated, and a comparison of two methods of measurement acquisition is provided. Subsequently, the developed hardware is described and the implementation in an FPGA is carried out. Finally, the results are presented and discussed, and potential challenges are addressed.
The present work describes an extension of current slope estimation for parameter estimation of permanent magnet synchronous machines operated at inverters. The area of operation for current slope estimation in the individual switching states of the inverter is limited due to measurement noise, bandwidth limitation of the current sensors and the commutation processes of the inverter's switching operations. Therefore, a minimum duration of each switching state is necessary, limiting the final area of operation of a robust current slope estimation. This paper presents an extension of existing current slope estimation algorithms resulting in a greater area of operation and a more robust estimation result.
An important step in seismic data processing to improve inversion and interpretation is multiples attenuation. Radon-based algorithms are often used for discriminating primaries and multiples. Recently, deep learning (DL), based on convolutional neural networks (CNNs), has shown promising results in demultiple that could mitigate the challenges of Radon-based methods. In this work, we investigate different new strategies to train a CNN for multiples removal based on different loss functions. We propose combining primaries and multiples labels in the loss for training a CNN to predict primaries, multiples, or both simultaneously. We evaluate the performance of the CNNs trained with the different strategies on 400 clean and noisy synthetic data examples, considering three metrics. We found that training a CNN to predict the multiples and then subtracting them from the input image is the most effective strategy for demultiple. Furthermore, including the primaries labels as a constraint during the training of multiples prediction improves the results. Finally, we test the strategies on a field dataset. The CNNs trained with the different strategies report competitive results on real data compared with Radon demultiple. As a result, effectively trained CNN models can potentially replace Radon-based demultiple in existing workflows.
Seismic data processing involves techniques to deal with undesired effects that occur during acquisition and pre-processing. These effects mainly comprise coherent artefacts such as multiples, non-coherent signals such as electrical noise, and loss of signal information at the receivers that leads to incomplete traces. In this work, we employ a generative solution, since it can explicitly model complex data distributions and hence yield a better decision-making process. In particular, we introduce diffusion models for multiple removal. To that end, we run experiments on synthetic and real data, and we compare the deep diffusion performance with standard algorithms. We believe that our pioneering study not only demonstrates the capability of diffusion models, but also opens the door to future research integrating generative models into seismic workflows.
Modern industrial production is heavily dependent on efficient workflow processes and automation. The steady flow of raw materials as well as the separation of vital parts and semi-finished products are at the core of these automated procedures. Commonly used systems for this work are bowl feeders, which separate the parts and material by a combination of mechanical vibration and friction. The production of these tools, especially the design of the ramping spiral, is delicate and time-consuming work, as the shape, slope, and material must be carefully adjusted to the corresponding parts. In this work, we propose an automated approach that uses optimization procedures from artificial intelligence to design the spiral ramps of bowl feeders. To this end, the whole system and the considered parts are physically simulated, and the optimized geometry is subsequently exported into a CAD system for actual construction or printing. The use of evolutionary optimization requires a mathematical model of the whole setup and an efficient representation of its integral features.
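A minimal sketch of the kind of evolutionary loop described, with hypothetical parameters; in the paper, the fitness function would be evaluated by the physics simulation of the feeder and the genome would encode the ramp geometry:

```python
import random

def evolve(fitness, init_genome, pop_size=30, generations=50, sigma=0.1):
    """Generic evolutionary loop: keep the best half of the population as
    parents and refill with Gaussian-mutated copies of random parents."""
    pop = [[g + random.gauss(0, sigma) for g in init_genome]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)        # best first
        parents = pop[: pop_size // 2]
        children = [[g + random.gauss(0, sigma)
                     for g in random.choice(parents)]
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

# usage with a dummy fitness (maximize closeness to the origin):
best = evolve(lambda g: -sum(x * x for x in g), init_genome=[1.0, -2.0])
print(best)
```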
This study focuses on the autonomous navigation and mapping of indoor environments using a drone equipped only with a monocular camera and height measurement sensors. A visual SLAM algorithm was employed to generate a preliminary map of the environment and to determine the drone's position within the map. A deep neural network was utilized to generate a depth image from the monocular camera's input, which was subsequently transformed into a point cloud to be projected into the map. By aligning the depth point cloud with the map, 3D occupancy grid maps were constructed by using ray tracing techniques to get a precise depiction of obstacles and the surroundings. Due to the absence of IMU data from the low-cost drone for the SLAM algorithm, the created maps are inherently unscaled. However, preliminary tests with relative navigation in unscaled maps have revealed potential accuracy issues, which can only be overcome by incorporating additional information from the given sensors for scale estimation.
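The back-projection of the depth image into a point cloud is the standard pinhole-model step; a minimal sketch, assuming known (here hypothetical) camera intrinsics fx, fy, cx, cy:

```python
import numpy as np

def depth_to_pointcloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (H x W, metres) into camera-frame 3-D
    points using the pinhole camera model."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)
```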
PROFINET Security: A Look on Selected Concepts for Secure Communication in the Automation Domain
(2023)
We provide a brief overview of the cryptographic security extensions for PROFINET, as defined and specified by PROFIBUS & PROFINET International (PI). These come in three hierarchically defined Security Classes, called Security Classes 1, 2, and 3. Security Class 1 provides basic security improvements with moderate implementation impact on PROFINET components. Security Classes 2 and 3, in contrast, introduce integrated cryptographic protection of PROFINET communication. We first highlight and discuss the security features that the PROFINET specification offers for future PROFINET products. Then, as our main focus, we take a closer look at some of the technical challenges that were faced during the conceptualization and design of Security Class 2 and 3 features. In particular, we elaborate on how secure application relations between PROFINET components are established and how disruption-free availability of a secure communication channel is guaranteed despite the need to refresh cryptographic keys regularly. The authors are members of the PI Working Group CB/PG10 Security.
Skin cancer detection proves to be complicated and highly dependent on the examiner's skills. Millimeter-wave technologies seem to be a promising aid for the detection of skin cancer. The different water content of a skin area affected by cancer compared to healthy skin changes its reflective properties. Due to the limited available data on the dielectric properties of skin cancer, especially in comparison to surrounding healthy skin, accurate simulations and evaluations are quite challenging. Therefore, comparing results across different approaches and starting points can be difficult. In this paper, the Effective Medium Theory is applied to model skin cancer, providing permittivity values dependent on the water content.
A method for evaluating skin cancer detection based on millimeter-wave technologies is presented. For this purpose, the relative permittivities of benign and cancerous lesions are calculated using the effective medium theory, considering the change in water content between them. These calculated relative permittivities are further used for the simulation and evaluation of skin cancer detection using a substrate-integrated waveguide probe. In the best case, a difference of up to 13 dB in the simulated scattering parameter S11 between healthy and cancerous skin can be determined.
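The abstracts do not name the specific effective-medium relation used; the Maxwell Garnett mixing rule is one common choice and illustrates how a water volume fraction maps to an effective permittivity:

```python
def maxwell_garnett(eps_host, eps_incl, f):
    """Maxwell Garnett mixing rule (one common effective-medium relation):
    effective permittivity of spherical inclusions (e.g., water, volume
    fraction f) embedded in a host medium (e.g., dry tissue)."""
    num = eps_incl + 2 * eps_host + 2 * f * (eps_incl - eps_host)
    den = eps_incl + 2 * eps_host - f * (eps_incl - eps_host)
    return eps_host * num / den
```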
Investigation on Bowtie Antennas Operating at Very Low Frequencies for Ground Penetrating Radar
(2023)
The efficiency of Ground Penetrating Radar (GPR) systems significantly depends on the antenna performance, as the signal has to propagate through lossy and inhomogeneous media. GPR antennas should have a low operating frequency for greater penetration depth, high gain and efficiency to increase the received power, and a compact, lightweight design for ease of GPR surveying. In this paper, two different designs of bowtie antennas operating at very low frequencies are proposed and analyzed.
The Transport Layer Security (TLS) protocol is a widespread cryptographic protocol designed to provide secure communication over insecure networks by providing authenticity, integrity, and confidentiality. As a first step, a common master secret is negotiated in the TLS Handshake Protocol. In many configurations, this step makes considerable use of asymmetric cryptographic algorithms. It seems to be a prevalent assumption that the use of such asymmetric cryptographic algorithms is unsuitable for resource-constrained devices. Therefore, the work at hand analyzes the runtime performance of TLS v1.2 session establishment on an embedded ARM Cortex-M4 platform. We measure the execution time to generate and parse session establishment messages for the client and server sides. In particular, we study the impact of different elliptic curves used for the ephemeral Diffie-Hellman key exchange and the impact of different lengths and subject public key algorithms of certification paths. Our analysis shows that the use of asymmetric cryptographic algorithms is entirely feasible on resource-constrained devices if carefully chosen and well implemented. This allows the use of the well-proven TLS protocol also for applications from the (Industrial) Internet of Things, including fieldbus communication.
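The paper measures on an embedded ARM Cortex-M4; as a rough desktop analogue of one of the measured steps, the Python `cryptography` package can be used to time ephemeral ECDH for different curves:

```python
import time
from cryptography.hazmat.primitives.asymmetric import ec

def time_ecdhe(curve, rounds=100):
    """Average time for one ephemeral ECDH: generate both ephemeral key
    pairs and compute the shared secret (a desktop-side illustration only,
    not the embedded measurement setup of the paper)."""
    t0 = time.perf_counter()
    for _ in range(rounds):
        priv = ec.generate_private_key(curve)
        peer = ec.generate_private_key(curve)
        priv.exchange(ec.ECDH(), peer.public_key())
    return (time.perf_counter() - t0) / rounds

print("P-256 ECDHE:", time_ecdhe(ec.SECP256R1()), "s per exchange")
print("P-384 ECDHE:", time_ecdhe(ec.SECP384R1()), "s per exchange")
```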
In recent times, 5G has found applications in several public as well as private networks. There is a growing need to make it compatible with diverse services without compromising security. The current security options for authenticating devices into a home network are 5G Authentication and Key Agreement (5G-AKA) and Extensible Authentication Protocol (EAP)-AKA'. However, for specific use cases such as private networks, more customizable and convenient authentication mechanisms are required. Current mobile networks use authentication based only on SIM cards, but as 5G is being applied in fields like IIoT and automation, even in Non-Public Networks (NPNs), there is a need for a simpler method of authentication. Certificate-based authentication is one such mechanism: it is passwordless and works solely on the information present in the digital certificate that the user holds. This paper suggests an authentication mechanism that performs certificate-based mutual authentication between the UE and the home network. The proposed concept identifies both the user and the network with digital certificates and intends to carry out primary authentication with their help. In this work, we conduct a study of presently available authentication protocols for 5G networks, both theoretically and experimentally, in hardware as well as virtual environments. Based on this analysis, a series of proposed steps for certificate-based primary authentication is presented.
Frequently occurring short-term orders for manufactured products require high machine availability. This requirement increases the importance of predictive maintenance solutions for bearings used in machines. Among others, there are hybrid solutions that rely on a physical model. For their usage, knowing the different degradation stages of bearings is essential. This research analyzes the underlying failure mechanisms of these stages theoretically and in a practical example based on the well-known FEMTO dataset used for the IEEE PHM 2012 Data Challenge. In addition, it shows for which use cases low-frequency accelerometers are sufficient. The analysis shows that the degradation stages toward the end of the bearing life can also be detected with low-frequency accelerometers. Further, the importance of high-frequency accelerometers for detecting bearing faults in early degradation stages is pointed out. Industry and research have so far paid little attention to these aspects, despite their considerable cost-saving potential.
The automatic processing of handwritten forms remains a challenging task, wherein the detection and subsequent classification of handwritten characters are essential steps. We describe a novel approach in which both steps - detection and classification - are executed as one task by a deep neural network. To this end, training data is not annotated by hand but generated artificially from the underlying forms and already existing datasets. It can be demonstrated that this single-task approach is superior to the state-of-the-art two-task approach. The current study focuses on handwritten Latin letters and employs the EMNIST dataset. However, limitations were identified with this dataset, necessitating further customization. Finally, an overall recognition rate of 88.28% was attained on real data obtained from a written exam.
As cyber-attacks and functional safety requirements increase in Operational Technology (OT), implementing security measures becomes crucial. The IEC/IEEE 60802 draft standard addresses security convergence in Time-Sensitive Networks (TSN) for industrial automation. We present the standard's security architecture and its goals to establish end-to-end security with resource access authorization in OT systems. We compare the standard to our abstract, technology-independent model for the management of cryptographic credentials during the lifecycles of OT systems. Additionally, we implemented the processes, mechanisms, and protocols needed for IEC/IEEE 60802 and extended the architecture with public key infrastructure (PKI) functionalities to support complete security management processes.
Wireless communication networks are crucial for enabling megatrends like the Internet of Things (IoT) and Industry 4.0. However, testing these networks can be challenging due to complex network topologies and RF characteristics, requiring a multitude of scenarios to be tested. To address this challenge, the authors developed and extended an automated testbed called the Automated Physical TestBed (APTB). This testbed provides the means to conduct controlled tests, analyze coexistence, emulate multiple propagation paths, and model dependable channel conditions. Additionally, the platform supports test automation to facilitate efficient and systematic experimentation. This paper describes the extended architecture, implementation, and performance evaluation of the APTB. The implementation and performance verification demonstrate that the APTB is a reliable and efficient solution for testing wireless communication networks under various scenarios, and useful for researchers and industry practitioners alike.
Fused Filament Fabrication (FFF) is a widespread additive manufacturing technology, mostly in the field of printable polymers. The use of filaments filled with metal particles for the manufacture of metallic parts by FFF presents specific challenges regarding debinding and sintering. For aluminium and its alloys, the sintering temperature range overlaps with the temperature range of thermal decomposition of many commonly used “backbone” polymers, which provide stability to the green parts. Moreover, the high oxygen affinity of aluminium necessitates the use of special sintering regimes and alloying strategies. Therefore, it is challenging to achieve both low porosity and low levels of oxygen and carbon impurities at the same time. Feedstocks compatible with the special requirements of aluminium alloys were developed. We present results on the investigation of debinding/sintering regimes by Fourier Transform Infrared spectroscopy (FTIR) based In-Situ Process Gas Analysis and discuss optimized thermal treatment strategies for Al-based FFF.
An international study summarizes the threat situation in the OT environment under the heading "Growing security threats" [1]. According to this study, attacks on automation systems are likely to increase in the future. Accordingly, an automation system must be able to protect the integrity of the transmitted information in the future. This requirement is motivated, among other things, by the fact that the network-side isolation of industrial communication systems is no longer considered sufficient as the sole protective measure. This paper uses the example of PROFINET to show how the future requirements for a real-time communication protocol can be met and how they can be derived from the IEC 62443 standard.
Soiling is an important issue in the renewable energy sector, since it can result in significant yield losses, especially in regions with higher pollution or dust levels. To mitigate the impact of soiling on photovoltaic (PV) plants, it is essential to regularly monitor and clean the panels, as well as to develop accurate soiling predictions that can inform cleaning strategies and enhance the overall performance of PV power plants. This research focuses on the problem of soiling loss in photovoltaic power plants and the potential to improve the accuracy of soiling predictions. The study examines how soiling can affect the efficiency and productivity of the modules and how to measure and predict soiling using machine learning (ML) algorithms. The research includes analyzing real data from large-scale ground-mounted PV sites and comparing different soiling measurement methods. Deviations between the real soiling loss values and the expected values were observed for some projects in southern Spain; thus, the main goal of this work is to develop machine learning models that can predict soiling more accurately. The developed models have a low mean squared error (MSE), indicating their accuracy and suitability for predicting soiling rates. The study also investigates the impact of different cleaning strategies on the performance of PV power plants and provides a powerful application to predict both the soiling and the number of cleaning cycles.
Encapsulant-free N.I.C.E. modules have strong ecological advantages compared to conventional laminated modules but generally suffer from lower electrical performance. Via long-term outdoor monitoring of full-size industrial modules of both types with identical solar cells, we investigated whether the performance difference remains constant over time and which parameters influence its value. After assessing about a full year's data, two obvious levers for N.I.C.E. optimization are identified: the usage of textured glass and transparent adhesives on the module rear side. The performance loss could also be alleviated using tracking systems due to lower AOI values. Our measurements additionally show that N.I.C.E. module surfaces are on average about 2.5 °C cooler than laminated modules. With these findings, we lay out a roadmap to reduce today's LIV gap of about 5%rel through different optimizations.
In this paper we report on the further success of our work to develop a multi-method energy optimization based on a digital twin concept. The twin concept serves to replicate the production processes of different kinds of production companies, including complex energy systems, and to test market interactions, in order to then use them for model-predictive optimization. The presented work reports on the performed flexibility assessment, leading to a flexibility audit with a list of measures, and on the impact of the energy optimizations made with respect to interactions with the local power grid, i.e., the exchange node of the low-voltage distribution grid. The analysis and continuous exploration of flexibilities, as well as the exchange with energy markets, require a "guide" towards continuous optimization; a further tool, the Flexibility Survey and Control Panel, supports decision-making processes on the day-ahead horizon for real production plants as well as investment planning to improve machinery, staff schedules, and production infrastructure.
As industrial networks continue to expand and connect more devices and users, they face growing security challenges such as unauthorized access and data breaches. This paper delves into the crucial role of security and trust in industrial networks and how trust management systems (TMS) can mitigate malicious access to these networks. The TMS presented in this paper leverages distributed ledger technology (blockchain) to evaluate the trustworthiness of blockchain nodes, including devices and users, and to make access decisions accordingly. While this approach is applied to blockchain here, it can also be extended to other areas. This approach can help prevent malicious actors from penetrating industrial networks and causing harm. The paper also presents the results of a simulation to demonstrate the behavior of the TMS and provide insights into its effectiveness.
Printed electronics can add value to existing products by providing new smart functionalities, such as sensing elements over large areas on flexible or non-conformal surfaces. Here we present a hardware concept and prototype for a thinned ASIC integrated with an inkjet-printed temperature sensor alongside built-in additional security and unique identification features. The hybrid system exploits the advantages of inkjet-printable platinum-based sensors, physically unclonable function circuits, and a fluorescent particle-based coating as a tamper protection layer.
Currently, many theoretical as well as practically relevant questions towards the transferability and robustness of Convolutional Neural Networks (CNNs) remain unsolved. While ongoing research efforts are engaging these problems from various angles, in most computer vision related cases these approaches can be generalized to investigations of the effects of distribution shifts in image data. In this context, we propose to study the shifts in the learned weights of trained CNN models. Here we focus on the properties of the distributions of dominantly used 3×3 convolution filter kernels. We collected and publicly provide a dataset with over 1.4 billion filters from hundreds of trained CNNs, using a wide range of datasets, architectures, and vision tasks. In a first use case of the proposed dataset, we can show highly relevant properties of many publicly available pre-trained models for practical applications: I) We analyze distribution shifts (or the lack thereof) between trained filters along different axes of meta-parameters, like visual category of the dataset, task, architecture, or layer depth. Based on these results, we conclude that model pre-training can succeed on arbitrary datasets if they meet size and variance conditions. II) We show that many pre-trained models contain degenerated filters which make them less robust and less suitable for fine-tuning on target applications. Data & Project website: https://github.com/paulgavrikov/cnn-filter-db.
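The filter-extraction step behind such a collection can be reproduced in a few lines of PyTorch; this sketch pulls all 3x3 convolution kernels from one pre-trained torchvision model and flattens them for distribution analysis (the dataset itself covers hundreds of models):

```python
import torch
import torch.nn as nn
import torchvision.models as models

# Load one pre-trained model and collect every 3x3 convolution kernel,
# flattened to a 9-dimensional vector per kernel.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

filters = [
    m.weight.detach().reshape(-1, 9)   # (out_ch * in_ch, 9) per layer
    for m in model.modules()
    if isinstance(m, nn.Conv2d) and m.kernel_size == (3, 3)
]
filters = torch.cat(filters)           # all 3x3 kernels in the model
print(filters.shape)                   # (total number of 3x3 kernels, 9)
```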
Deep learning models are intrinsically sensitive to distribution shifts in the input data. In particular, small, barely perceivable perturbations of the input data can force models to make wrong predictions with high confidence. A common defense mechanism is regularization through adversarial training, which injects worst-case perturbations back into training to strengthen the decision boundaries and to reduce overfitting. In this context, we investigate the 3 × 3 convolution filters that form in adversarially-trained models. Filters are extracted from 71 public models of the ℓ∞-RobustBench CIFAR-10/100 and ImageNet1k leaderboard and compared to filters extracted from models built on the same architectures but trained without robust regularization. We observe that adversarially-robust models appear to form more diverse, less sparse, and more orthogonal convolution filters than their normal counterparts. The largest differences between robust and normal models are found in the deepest layers and the very first convolution layer, which consistently and predominantly forms filters that can partially eliminate perturbations, irrespective of the architecture.
The conversion of space heating for private households to climate-neutral energy sources is an essential component of the energy transition, as this sector was responsible for 9.4% of Germany's carbon dioxide emissions as of 2018. In addition to reducing demand through better insulation, the use of heat pumps fed with electricity from renewable energy sources, such as on-site photovoltaic (PV) systems, is an important solution approach.
Advanced energy management and control can help to make optimal use of such heating systems. Optimal can here refer, for example, to maximizing self-consumption of self-generated PV power, extending component lifetime, or grid-friendly behavior that avoids load peaks. A powerful method for this is model predictive control (MPC), which calculates optimal schedules for the controllable influence variables based on models of the system dynamics, current measurements of system states, and predictions of future external influence parameters.
In this paper, we will discuss three different use cases that show how artificial intelligence can contribute to the realization of such an MPC-based energy management and control system. This will be done using the example of a real inhabited single family home that has provided the necessary data for this purpose and where the methods are implemented and tested. The heating system consists of an air-water heat pump with direct condensation, a thermal stratified storage tank, a pellet burner and a heating rod and provides both heating and hot water. The house generates a significant portion of its electricity needs through a rooftop PV system.
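A toy MPC formulation under strongly simplifying assumptions (linear tank model, perfect PV and demand forecasts, hypothetical parameter values), sketched with cvxpy; the real system described here is considerably more detailed:

```python
import cvxpy as cp
import numpy as np

T = 24                                   # horizon: 24 hourly steps
pv = np.clip(3 * np.sin(np.linspace(0, np.pi, T)), 0, None)  # PV forecast, kW
demand = np.full(T, 1.5)                 # thermal demand forecast, kW
cop, c_tank = 3.0, 10.0                  # heat-pump COP, tank capacity in kWh

p_hp = cp.Variable(T, nonneg=True)       # heat-pump electrical power, kW
soc = cp.Variable(T + 1)                 # tank state of charge, kWh
grid = cp.pos(p_hp - pv)                 # electricity drawn from the grid

constraints = [soc[0] == 5.0, soc <= c_tank, soc >= 0, p_hp <= 3.0]
for t in range(T):
    # simple energy balance: heat in (COP * electrical power) minus demand
    constraints += [soc[t + 1] == soc[t] + cop * p_hp[t] - demand[t]]

cp.Problem(cp.Minimize(cp.sum(grid)), constraints).solve()
print("optimal heat-pump schedule [kW]:", np.round(p_hp.value, 2))
```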
Seismic data often has missing traces due to technical acquisition or economic constraints. A complete dataset is crucial for several processing and inversion techniques. Deep learning algorithms based on convolutional neural networks (CNNs) have provided alternative solutions that overcome limitations of traditional interpolation methods, e.g., data regularity and linearity assumptions. There are two different paradigms of CNN methods for seismic interpolation. The first one, the so-called deep prior interpolation (DPI), trains a CNN to map random noise to a complete seismic image using only the decimated image itself. The second one, referred to as the standard deep learning method, trains a CNN to map a decimated seismic image to a complete one using a dataset of complete and artificially decimated images. Within this research, we systematically compare the performance of both methods for different quantities of regular and irregular missing traces using four datasets. We evaluate the results of both methods using five well-known metrics. We found that the DPI method performs better than the standard method if the percentage of missing traces is low (10%), and vice versa if the level of decimation is high (50%).
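The essential difference of the DPI paradigm is that the reconstruction loss is masked to the observed traces; a minimal sketch of that objective, with illustrative names:

```python
import torch

def dpi_loss(net, noise, decimated, mask):
    """Deep-prior-style objective: the network maps a fixed noise tensor to
    a full image, but the loss is evaluated only on the observed traces
    (mask == 1), so missing traces are filled in by the network's
    structural prior alone."""
    pred = net(noise)
    return torch.mean(mask * (pred - decimated) ** 2)
```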
In this work, we explore three deep learning algorithms applied to seismic interpolation: deep prior image (DPI), standard, and generative adversarial networks (GAN). The standard and GAN approaches rely on a dataset of complete and decimated seismic images for the training process, while the DPI method learns from the decimated image itself, without training images. We carry out two main experiments, considering 10%, 30%, and 50% regular and irregular decimation. The first tests the optimal situation for the GAN and standard approaches, where training and testing images are from the same dataset. The second tests the ability of the GAN and standard methods to learn simultaneously from three datasets and generalize to a fourth dataset not used during training. The standard method provides the best results in the first experiment, when the training distribution is similar to the testing one. In this situation, the DPI approach reports the second-best results. In the second experiment, the standard method shows the ability to learn three data distributions simultaneously and effectively for the regular case. In the irregular case, the DPI approach is more effective. The GAN approach is the least effective of the three deep learning methods in both experiments.
In this study, various imaging algorithms for the localization of objects have been investigated. To this end, an Ultra-Wideband (UWB) radar-based experimental setup with a circular antenna array is designed as part of this work. This concept could be particularly useful in microwave medical imaging applications. In order to validate its applicability in microwave imaging, different imaging algorithms have been evaluated and compared by means of our experimental setup. Accurate imaging results have been achieved with our system under multiple test scenarios.
In this study, an approach to a microwave-based radar system for the localization of objects has been proposed. This could be particularly useful in microwave imaging applications such as cardiac catheter detection. An experimental system is defined and realized with the selection of an appropriate antenna design. Hardware control functions and different imaging algorithms are implemented as well. The functionality of this measurement setup has been analyzed considering multiple test scenarios, and it proved capable of locating multiple objects as well as extended objects.
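The abstracts do not name the compared imaging algorithms; delay-and-sum back-projection is a classic choice for such circular-array UWB setups and is sketched below, assuming monostatic round-trip delays for simplicity:

```python
import numpy as np

def delay_and_sum(signals, antennas, grid, fs, c=3e8):
    """Delay-and-sum (back-projection) imaging: for each antenna, the
    time-domain response is sampled at the round-trip delay of every image
    pixel and accumulated. signals: (n_antennas, n_samples); antennas and
    grid: (n, 2) positions in metres; fs: sampling rate in Hz."""
    image = np.zeros(len(grid))
    for sig, ant in zip(signals, antennas):
        dist = np.linalg.norm(grid - ant, axis=1)
        idx = np.round(2 * dist / c * fs).astype(int)   # round-trip delay
        valid = idx < sig.shape[0]
        image[valid] += sig[idx[valid]]
    return np.abs(image)
```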
We consider local groups of agents that exchange time-series data values and compute an approximation of the mean value over all agents. An agent, represented by a node, knows all local neighbor nodes in the same group and has the contact information of nodes in other groups. The nodes interact with each other in synchronous rounds to exchange updated time-series data values using the random-call communication model. The amount of data exchanged between agent-based sensors in the local group network affects the accuracy of the aggregation results. At each time step, an agent-based sensor can update its input data value and send the updated value to the group head node. The group head node sends the updated data value to all group members in the same group. Grouping nodes in peer-to-peer networks shows an improvement in Mean Squared Error (MSE).
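A minimal sketch of one synchronous round, loosely following this description (equal group sizes are assumed here so that the global mean is preserved; all details are illustrative):

```python
import random

def gossip_round(groups):
    """One synchronous round: each head aggregates its group, one head makes
    a random call to a head of another group and both average, then heads
    push the result back to their members."""
    heads = [sum(g) / len(g) for g in groups]           # intra-group mean
    i = random.randrange(len(heads))
    j = random.choice([k for k in range(len(heads)) if k != i])
    heads[i] = heads[j] = (heads[i] + heads[j]) / 2.0   # random-call exchange
    return [[heads[k]] * len(g) for k, g in enumerate(groups)]

groups = [[1.0, 2.0], [10.0, 20.0], [5.0, 8.0]]
for _ in range(10):
    groups = gossip_round(groups)
print(groups)   # estimates converge toward the global mean (about 7.67 here)
```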
Solar energy plays a central role in the energy transition. Clouds generate locally large fluctuations in the generation output of photovoltaic systems, which is a major problem for energy systems such as microgrids, among others. For an optimal design of a power system, this work analyzed this variability using a spatially distributed sensor network at Stuttgart Airport. It was shown that spatial distribution partially reduces the variability of solar radiation. A tool was also developed to estimate the output power of photovoltaic systems using irradiation time series and assumptions about the photovoltaic sites. For days with high fluctuations of the estimated photovoltaic power, different energy system scenarios were investigated. It was found that the approach can be used to obtain a more realistic representation of aggregated PV power by taking spatial smoothing into account, and that the resulting PV power generation profiles provide a good basis for energy system design considerations such as battery sizing.
Featherweight Go (FG) is a minimal core calculus that includes essential Go features such as overloaded methods and interface types. The most straightforward semantic description of the dynamic behavior of FG programs is to resolve method calls based on run-time type information. A more efficient approach is to apply a type-directed translation scheme where interface-values are replaced by dictionaries that contain concrete method definitions. Thus, method calls can be resolved by a simple lookup of the method definition in the dictionary. Establishing that the target program obtained via the type-directed translation scheme preserves the semantics of the original FG program is an important task.
To establish this property, we employ logical relations that are indexed by types to relate source and target programs. We provide rigorous proofs and give a detailed discussion of the many subtle corners that we have encountered, including the need for a step index due to recursive interfaces and method definitions.
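The dictionary-passing idea itself is easy to illustrate outside of Go (FG is a Go calculus, so this Python sketch is only an analogy): an interface value becomes a pair of the underlying value and a dictionary of concrete methods, and a method call becomes a dictionary lookup.

```python
# Illustrative analogy of the type-directed translation: interface values
# are replaced by (value, dictionary) pairs, so method dispatch is a plain
# dictionary lookup instead of a run-time type inspection.

def area_rect(r):                      # concrete method for rectangles
    return r["w"] * r["h"]

rect_dict = {"Area": area_rect}        # dictionary for a "Shape" interface

def total_area(shapes):
    # shapes: list of (value, dictionary) pairs standing in for
    # interface values after the translation
    return sum(d["Area"](v) for v, d in shapes)

print(total_area([({"w": 2, "h": 3}, rect_dict)]))   # 6
```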
Biodegradable metals have entered the implant market in recent years but still do not show fully satisfactory degradation behaviour and mechanical properties. In contrast, it has been shown that pure molybdenum has an excellent combination of the required properties in this respect. We report on PM-based screen printing of thin-walled molybdenum tubes as a processing step for medical stent manufacture. We also present data on the in vivo degradation and biocompatibility of molybdenum. The degradation of molybdenum wires implanted in the aorta of rats was evaluated by SEM and EDX. Biocompatibility was assessed by histological investigation of organs and analysis of molybdenum levels in tissue extracts and body fluids. Degradation rates of up to 13.5 μm/y were observed after 12 months. No histological changes or elevated molybdenum levels in organ tissues were observed. In summary, the results further underline that molybdenum is a highly promising biodegradable metallic material.
The EREMI project is a 2-year project funded under the ERASMUS+ framework programme. Its team has developed and will validate an advanced higher education programme, including life-long learning, on the interdisciplinary topic of resource efficiency in manufacturing industries and the overall system optimization of physical infrastructure with little or no digitization. This will be achieved by applying IoT technologies towards efficient industrial systems and by building highly educated human capital on these economically, politically, and technically crucial topics, which are highly relevant for the rapidly developing industries and economies of countries undergoing intensive economic and industrial transformation - Bulgaria, North Macedonia, and Romania. Efficiency will be attained by utilizing the experience and expertise of the involved German partner organisation.
The visual-inertial mapping and localization system maplab is analyzed through its implementation and subsequent evaluation. Mapping and localization are based on environmental feature detection. In addition to creating maps, there is also the option of fusing several maps, thus mapping extensive areas and using them for further data analysis. In this way, various software tools can be used to optimize the existing datasets.
Two sensor components are needed: an inertial measurement unit (IMU) and a monochrome camera, which are combined in a hardware rig and put into operation for the analysis of the visual-inertial system. System calibration is crucial for precision and system functioning and is based on nonlinear dynamic state estimation. This ensures the best possible estimate of the positions of the environmental features and the map. Maplab is particularly suitable for mapping rooms or small building complexes, as the implementation and evaluation of the results in different application scenarios show. Special emphasis is laid on the evaluation of larger scenarios, where it is shown that the system struggles to maintain geometric consistency and thus to provide an accurate map.
To deal with frequent power outages in developing countries, people turn to solutions like uninterruptible power supplies (UPS), which store electric energy during normal operating hours and use it to meet energy needs during rolling blackout intervals. Locally produced UPSs of poorer power quality are widely available in the marketplaces, and they have a negative impact on power quality. The charging and discharging of the batteries in these UPSs generate a significant amount of power loss in weak grid environments. The Smart-UPS is our proposed smart energy metering (SEM) solution for low-voltage consumers that is provided by the distribution company. It does not require batteries; therefore, there is no power loss or harmonic distortion due to the corresponding charging and discharging. Through load flow and harmonic analysis of both traditional UPS and Smart-UPS systems in ETAP, this paper examines their impact on the harmonics and stability of the distribution grid. The simulation results demonstrate that the Smart-UPS can assist in fixing power quality issues in a developing country like Pakistan by providing cleaner energy than battery-operated traditional UPSs.
Due to the Covid-19 pandemic, the RoboCup WorldCup 2021 was held completely remotely. For this competition, the Webots simulator (https://cyberbotics.com/) was used, so all teams needed to transfer their robots to the simulation. This paper describes our experiences during this process as well as a genetic learning approach to improve our walk engine, allowing more stable and faster movement in the simulation. To scale easily, we used a Docker setup. The resulting movement was one of the outstanding features that finally led to the championship title.
Narrowband Internet-of-Things (NB-IoT) is a 3rd Generation Partnership Project (3GPP) standardized cellular technology, adopted for 5G and optimized for massive Machine Type Communication (mMTC). Applications are anticipated around infrastructure monitoring, asset management, smart city, and smart energy. In this paper, we evaluate the suitability of NB-IoT for private (campus) networks in industrial environments, including complex cloud-based applications around process automation. An end-to-end system has been developed, comprising a sensor unit connected to an NB-IoT modem, a base station (gNodeB) equipped with a beamforming array, and a local (private) network architecture with a sensor management system in the edge cloud. The experimental study includes field tests in realistic industrial environments with latency, reliability, and coverage measurements. The results show that NB-IoT is well suited for process automation applications with high scalability, low power requirements, and moderate latency requirements.
In recent years, Physical Unclonable Functions (PUFs) have gained significant attention in the Internet of Things (IoT) for security applications such as cryptographic key generation and entity authentication. PUFs extract the uncontrollable production characteristics of physical devices to generate unique fingerprints for security applications. One common approach to designing PUFs is exploiting the intrinsic features of sensors and actuators such as MEMS elements, which typically exist in IoT devices. This work presents the Cantilever-PUF, a PUF based on a specific MEMS device: an Aluminum Nitride (AlN) piezoelectric cantilever. We show the variations of electrical parameters of AlN cantilevers, such as resonance frequency, electrical conductivity, and quality factor, resulting from uncontrollable manufacturing process variations. These variations, along with high thermal and chemical stability and compatibility with silicon technology, make the AlN cantilever a strong candidate for PUF design. We present a cantilever design that magnifies the effect of manufacturing process variations on the electrical parameters. To verify our findings, simulation results obtained with the Monte Carlo method are provided. The results verify the eligibility of the AlN cantilever as a basic PUF device for security applications. We present an architecture in which the designed Cantilever-PUF is used as a security anchor for PUF-enabled device authentication as well as communication encryption.
The desire to connect more and more devices and to make them more intelligent and more reliable is driving the need for the Internet of Things more than ever. Such IoT edge systems require sound security measures against cyber-attacks, since they are interconnected, spatially distributed, and operational for an extended period of time. One of the most important requirements for security in many industrial IoT applications is the authentication of devices. In this paper, we present a mutual authentication protocol based on Physical Unclonable Functions, where challenge-response pairs are used for both device and server authentication. Moreover, a session key can be derived by the protocol in order to secure the communication channel. We show that our protocol is secure against machine learning, replay, man-in-the-middle, cloning, and physical attacks. Moreover, it is shown that the protocol incurs smaller computational, communication, storage, and hardware overhead compared to similar works.
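An abstract sketch of CRP-based mutual authentication with session-key derivation; the message formats and primitives below are illustrative stand-ins, not the paper's protocol:

```python
import hashlib, hmac, os

def puf(challenge: bytes) -> bytes:
    """Stand-in for the physical PUF (modeled here as a keyed hash)."""
    return hashlib.sha256(b"device-secret" + challenge).digest()

# Server holds pre-enrolled challenge-response pairs (CRPs).
c1 = os.urandom(16)
crp_db = {c1: puf(c1)}

# 1) Server -> device: challenge c1; device proves itself with response r1.
r1 = puf(c1)
assert hmac.compare_digest(r1, crp_db[c1])         # device authenticated

# 2) Device -> server: nonce; server proves knowledge of the stored
#    response by returning a MAC over the nonce keyed with it.
nonce = os.urandom(16)
server_proof = hmac.new(crp_db[c1], nonce, hashlib.sha256).digest()
assert hmac.compare_digest(
    server_proof, hmac.new(r1, nonce, hashlib.sha256).digest()
)                                                   # server authenticated

# 3) Both sides derive a session key from the shared response and nonce.
session_key = hashlib.sha256(r1 + nonce).digest()
```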
Many different methods, such as screen printing, gravure, flexography, inkjet etc., have been employed to print electronic devices. Depending on the type and performance of the devices, processing is done at low or high temperature using precursor- or particle-based inks. As a result of the processing details, devices can be fabricated on flexible or non-flexible substrates, depending on their temperature stability. Furthermore, in order to reduce the operating voltage, printed devices rely on high-capacitance electrolytes rather than on dielectrics. The printing resolution and speed are two of the major challenging parameters for printed electronics. High-resolution printing produces small-size printed devices and high-integration densities with minimum materials consumption. However, most printing methods have resolutions between 20 and 50 μm. Printing resolutions close to 1 μm have also been achieved with optimized process conditions and better printing technology.
The final physical dimensions of the devices pose severe limitations on their performance. For example, the channel lengths being of this dimension affect the operating frequency of the thin-film transistors (TFTs), which is inversely proportional to the square of channel length. Consequently, short channels are favorable not only for high-frequency applications but also for high-density integration. The need to reduce this dimension to substantially smaller sizes than those possible with today's printers can be fulfilled either by developing alternative printing or stamping techniques, or alternative transistor geometries. The development of a polymer pen lithography technique allows scaling up parallel printing of a large number of devices in one step, including the successive printing of different materials. The introduction of an alternative transistor geometry, namely the vertical Field Effect Transistor (vFET), is based on the idea to use the film thickness as the channel length, instead of the lateral dimensions of the printed structure, thus reducing the channel length by orders of magnitude.

The improvements in printing technologies and the possibilities offered by nanotechnological approaches can result in unprecedented opportunities for the Internet of Things (IoT) and many other applications. The vision of printing functional materials, and not only colors as in conventional paper printing, is attractive to many researchers and industries because of the added opportunities when using flexible substrates such as polymers and textiles. Additionally, the reduction of costs opens new markets. The range of processing techniques covers laterally-structured and large-area printing technologies, thermal, laser and UV-annealing, as well as bonding techniques, etc. Materials, such as conducting, semiconducting, dielectric and sensing materials, rigid and flexible substrates, protective coating, organic, inorganic and polymeric substances, energy conversion and energy storage materials constitute an enormous challenge in their integration into complex devices.