MINT-College TIEFE
(2021)
In its second funding period, the MINT-College TIEFE project was able to further expand and consolidate the measures of the preceding funding period. The offerings within the MINT-College TIEFE project accompanied students across the entire student life cycle of the technical degree programs, beginning at school and ending with the transition into professional life. In addition, various digitally supported teaching formats were further developed and expanded in order to improve the quality of teaching at Offenburg University. Central offerings of the MINT-College, which became a central institution of Offenburg University in 2019, are the programs developed for the introductory study phase (introduction days, the mentoring program, bridge courses, and the learning center) as well as offerings for the transition into professional life, such as the start-up office (Gründerbüro). The media-didactic support services for teachers supported the change in learning culture at the university. Sustainable structures were systematically established so that innovations in teaching and learning can continue to be developed, tested, and established in the future.
The increasing use of artificial intelligence (AI) technologies across application domains has prompted our society to pay closer attention to AI’s trustworthiness, fairness, interpretability, and accountability. In order to foster trust in AI, it is important to consider the potential of interactive visualization, and how such visualizations help build trust in AI systems. This manifesto discusses the relevance of interactive visualizations and makes the following four claims: i) trust is not a technical problem, ii) trust is dynamic, iii) visualization cannot address all aspects of trust, and iv) visualization is crucial for human agency in AI.
Congenital heart defects are the most common form of congenital organ defects worldwide. Different studies mostly report an incidence between four and eleven per 1,000 live births (1–5). The multicenter PAN study (PAN: prevalence of congenital heart defects in newborns), which examined the frequency of congenital heart defects in newborns in Germany between July 2006 and June 2007, found an overall prevalence of 107.6 per 10,000 live births. This thesis investigates implants for the treatment of atrial septal defects (ASD). At 17.0%, atrial septal defects are the second most common type of heart defect, after ventricular septal defects (VSD) at 48.9% (6, 7). Atrial septal defects are openings in the septum between the atria of the heart. In the therapy of an ASD, minimally invasive closure using so-called occluders is the treatment of choice today. During a cardiac catheterization, these are advanced to the implantation site via femoral access under ultrasound and fluoroscopic guidance and placed there (8). The occluders usually consist of a nitinol wire mesh and have the typical shape of a so-called double umbrella. The occluders of the individual manufacturers often differ considerably in shape and construction. At present, there is no examination method that makes the occluders on the market comparable with respect to their mechanical properties. This thesis aims to contribute fundamental parameters characterizing the occluder models in order to enable their comparability. To this end, in-vitro measurements are performed that are suitable for characterizing the behavior of the investigated models under different conditions and with varying defect sizes.
Cryptographic protection of messages requires frequent updates of the symmetric cipher key used for encryption and decryption. Legacy IT security protocols, like TLS, SSH, or MACsec, implement rekeying under the assumption that, first, application data exchange is allowed to stall occasionally and, second, dedicated control messages to orchestrate the process can be exchanged. In real-time automation applications, the first is generally prohibitive, while the second may induce problematic traffic patterns on the network. We present a novel seamless rekeying approach, which can be embedded into cyclic application data exchanges. Although the approach is agnostic to the underlying real-time communication system, we developed a demonstrator emulating the widespread industrial Ethernet system PROFINET IO and successfully applied this rekeying mechanism there.
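The core idea of control-message-free rekeying can be sketched as follows: if both endpoints derive each epoch key deterministically from a shared master key and a cycle counter that the cyclic data exchange already carries implicitly, no dedicated rekeying frames are needed. This is a minimal illustration under that assumption, not the protocol from the paper; the derivation function and label are hypothetical.

```python
import hmac
import hashlib

def cycle_key(master_key: bytes, epoch: int) -> bytes:
    # Both endpoints derive the same epoch key from the shared master
    # key and the epoch counter implied by the cyclic data exchange,
    # so no dedicated control frames are required for rekeying.
    info = b"rekey-epoch-" + epoch.to_bytes(8, "big")
    return hmac.new(master_key, info, hashlib.sha256).digest()

master = b"\x01" * 32
# Sender and receiver independently compute the key for epoch 41:
k_tx = cycle_key(master, 41)
k_rx = cycle_key(master, 41)
```

Because the derivation is a pure function of (master key, epoch), both sides switch keys at the same cycle boundary without stalling the application data stream.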
We demonstrate how to exploit group sparsity in order to bridge the areas of network pruning and neural architecture search (NAS). This results in a new one-shot NAS optimizer that casts the problem as a single-level optimization problem and does not suffer any performance degradation from discretizing the architecture.
The digital twin concept is increasingly used for optimization tasks in the context of Industry 4.0 and digitization. It can also help small and medium-sized enterprises (SMEs) to exploit their energy flexibility potential and to achieve added value through appropriate energy marketing. At the same time, this use of flexibility helps to realize a climate-neutral energy supply with high shares of renewable energies. The digital twin reflects real production, power flows, and market influences as a computer model, which makes it possible to simulate and optimize on-site interventions and interactions with the energy market without disturbing the real production processes. This paper describes the development of a generic model library that maps flexibility-relevant components and processes of SMEs, thus simplifying the creation of a digital twin. The paper also includes the development of an experimental twin consisting of SME hardware components and a PLC-based SCADA system. The experimental twin provides a laboratory environment in which the digital twin can be tested, further developed, and demonstrated on a laboratory scale. Concrete implementations of such a digital twin and experimental twin are described as examples.
The following describes a new method for estimating the parameters of an interior permanent magnet synchronous machine (IPMSM). For the estimation of the parameters the current slopes caused by the switching of the inverter are used to determine the unknowns of the system equations of the electrical machine. The angle and current dependence of the machine parameters are linearized within a PWM cycle. By considering the different switching states of the inverter, several system equations can be derived and a solution can be found within one PWM cycle. The use of test signals and filter-based approaches is avoided. The derived algorithm is explained and validated with measurements on a test bench.
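The idea of extracting parameters from current slopes can be illustrated with a deliberately simplified model (not the paper's full IPMSM equations): within one PWM cycle, two switching states k = 1, 2 give two voltage equations u_k = L*s_k + e, where s_k is the measured current slope and e lumps back-EMF and resistive drop, assumed constant over the cycle. Two equations in two unknowns can then be solved directly, with no test signals or filters.

```python
def estimate_L_and_emf(u1, s1, u2, s2):
    # Simplified single-axis sketch: u_k = L * s_k + e for two inverter
    # switching states within one PWM cycle.  Subtracting the equations
    # eliminates e and yields the inductance L; back-substitution gives e.
    L = (u1 - u2) / (s1 - s2)
    e = u1 - L * s1
    return L, e

# Synthetic check with known ground truth: L = 2 mH, e = 10 V.
L_true, e_true = 2e-3, 10.0
u1, u2 = 200.0, -200.0
s1 = (u1 - e_true) / L_true   # slope under the first switching state
s2 = (u2 - e_true) / L_true   # slope under the second switching state
L_est, e_est = estimate_L_and_emf(u1, s1, u2, s2)
```

The actual method derives a larger system covering the angle- and current-dependent parameters, but the elimination step is the same in spirit.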
The increased use of heat pumps in realizing a climate-neutral heat supply leads to a significant increase in, and change of, the electrical loads in distribution grids. Heat pumps should therefore be controlled in a way that puts little strain on distribution grids, or even supports them.
The project "PV²WP - PV forecasting for grid-supportive control of heat pumps" (project duration July 1, 2018 to June 30, 2021) demonstrated a new approach to controlling heating systems that are based on heat pumps and thermal storage and operated in combination with a photovoltaic system. The overarching goal was to improve the grid integration and smart-grid capability of such heating systems by means of a cost-effective technology while at the same time increasing their economic efficiency.
Three forward-looking technologies were combined and demonstrated: cloud-camera-based short-term forecasts, predictive open- and closed-loop control, and machine-learning-based system modeling as the basis for optimization. The Projekthaus Ulm, an actually inhabited single-family house, served as the demonstration environment.
Autonomous driving is disrupting the automotive industry as we know it today. For this, fail-operational behavior is essential in the sense, plan, and act stages of the automation chain in order to handle safety-critical situations autonomously, which current state-of-the-art approaches do not achieve. The European ECSEL research project PRYSTINE realizes Fail-operational Urban Surround perceptION (FUSION) based on robust Radar and LiDAR sensor fusion and control functions in order to enable safe automated driving in urban and rural environments. This paper showcases some of the key exploitable results (e.g., novel Radar sensors, innovative embedded control and E/E architectures, pioneering sensor fusion approaches, AI-controlled vehicle demonstrators) achieved by its final year, year 3.
Generative adversarial networks (GANs) provide state-of-the-art results in image generation. However, despite being so powerful, they still remain very challenging to train. This is in particular caused by their highly non-convex optimization space, which leads to a number of instabilities. Among them, mode collapse stands out as one of the most daunting ones. This undesirable event occurs when the model can only fit a few modes of the data distribution, while ignoring the majority of them. In this work, we combat mode collapse using second-order gradient information. To do so, we analyse the loss surface through its Hessian eigenvalues, and show that mode collapse is related to the convergence towards sharp minima. In particular, we observe how the eigenvalues of the Hessian are directly correlated with the occurrence of mode collapse. Finally, motivated by these findings, we design a new optimization algorithm called nudged-Adam (NuGAN) that uses spectral information to overcome mode collapse, leading to empirically more stable convergence properties.
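The link between sharp minima and large Hessian eigenvalues can be made concrete with power iteration, the standard way to estimate the top eigenvalue without materializing the full Hessian. The sketch below uses a tiny explicit matrix for clarity; in practice the matrix-vector product would come from a Hessian-vector product via automatic differentiation. This is an illustration of the general technique, not the paper's implementation.

```python
def power_iteration(H, steps=200):
    # Estimate the largest-magnitude eigenvalue of a small explicit
    # symmetric matrix by repeated matrix-vector products, returning
    # the Rayleigh quotient of the converged direction.
    n = len(H)
    v = [1.0] * n
    lam = 0.0
    for _ in range(steps):
        w = [sum(H[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
        lam = sum(v[i] * sum(H[i][j] * v[j] for j in range(n))
                  for i in range(n))
    return lam

# A "sharp" minimum: one curvature direction dominates the others.
H_sharp = [[50.0, 0.0], [0.0, 1.0]]
top_eig = power_iteration(H_sharp)
```

A large top eigenvalue relative to the rest of the spectrum is exactly the sharpness signal that the paper correlates with the onset of mode collapse.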
Transformer models have recently attracted much interest from computer vision researchers and have since been successfully employed for several problems traditionally addressed with convolutional neural networks. At the same time, image synthesis using generative adversarial networks (GANs) has drastically improved over the last few years. The recently proposed TransGAN is the first GAN using only transformer-based architectures and achieves competitive results when compared to convolutional GANs. However, since transformers are data-hungry architectures, TransGAN requires data augmentation, an auxiliary super-resolution task during training, and a masking prior to guide the self-attention mechanism. In this paper, we study the combination of a transformer-based generator and convolutional discriminator and successfully remove the need of the aforementioned required design choices. We evaluate our approach by conducting a benchmark of well-known CNN discriminators, ablate the size of the transformer-based generator, and show that combining both architectural elements into a hybrid model leads to better results. Furthermore, we investigate the frequency spectrum properties of generated images and observe that our model retains the benefits of an attention based generator.
Generative adversarial networks are the state-of-the-art approach towards learned synthetic image generation. Although early successes were mostly unsupervised, bit by bit, this trend has been superseded by approaches based on labelled data. These supervised methods allow a much finer-grained control of the output image, offering more flexibility and stability. Nevertheless, the main drawback of such models is the necessity of annotated data. In this work, we introduce a novel framework that benefits from two popular learning techniques, adversarial training and representation learning, and takes a step towards unsupervised conditional GANs. In particular, our approach exploits the structure of a latent space (learned by the representation learning) and employs it to condition the generative model. In this way, we break the traditional dependency between condition and label, substituting the latter by unsupervised features coming from the latent space. Finally, we show that this new technique is able to produce samples on demand while keeping the quality of its supervised counterpart.
Facial image manipulation is a generation task where the output face is shifted towards an intended target direction in terms of facial attributes and styles. Recent works have achieved great success in various editing techniques such as style transfer and attribute translation. However, current approaches either focus on pure style transfer, or on the translation of predefined sets of attributes with restricted interactivity. To address this issue, we propose FacialGAN, a novel framework enabling simultaneous rich style transfers and interactive facial attribute manipulation. While preserving the identity of a source image, we transfer the diverse styles of a target image to the source image. We then incorporate the geometry information of a segmentation mask to provide a fine-grained manipulation of facial attributes. Finally, a multi-objective learning strategy is introduced to optimize the loss of each specific task. Experiments on the CelebA-HQ dataset, with CelebAMask-HQ as semantic mask labels, show our model's capacity to produce visually compelling results in style transfer, attribute manipulation, diversity, and face verification. For reproducibility, we provide an interactive open-source tool to perform facial manipulations, and the PyTorch implementation of the model.
A fundamental and still largely unsolved question in the context of Generative Adversarial Networks is whether they are truly able to capture the real data distribution and, consequently, to sample from it. In particular, the multidimensional nature of image distributions leads to a complex evaluation of the diversity of GAN distributions. Existing approaches provide only a partial understanding of this issue, leaving the question unanswered. In this work, we introduce a loop-training scheme for the systematic investigation of observable shifts between the distributions of real training data and GAN generated data. Additionally, we introduce several bounded measures for distribution shifts, which are both easy to compute and to interpret. Overall, the combination of these methods allows an explorative investigation of innate limitations of current GAN algorithms. Our experiments on different data-sets and multiple state-of-the-art GAN architectures reveal large shifts between input and output distributions, showing that existing theoretical guarantees towards the convergence of output distributions do not appear to hold in practice.
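One classic example of a bounded, easy-to-interpret shift measure is the total variation distance between two histograms, which lies in [0, 1] with 0 meaning identical distributions. The sketch below is a generic illustration of such a measure, not necessarily one of the specific measures proposed in the paper.

```python
def total_variation(p_counts, q_counts):
    # Total variation distance between two histograms over the same
    # bins: half the L1 distance of the normalized frequencies.
    # Bounded in [0, 1]: 0 = identical, 1 = disjoint support.
    sp, sq = sum(p_counts), sum(q_counts)
    return 0.5 * sum(abs(p / sp - q / sq)
                     for p, q in zip(p_counts, q_counts))

real = [40, 30, 20, 10]   # e.g. real images binned by cluster assignment
fake = [10, 20, 30, 40]   # generator output binned the same way
shift = total_variation(real, fake)
```

Applied to binned statistics of real versus generated samples, such a measure makes the size of a distribution shift directly comparable across data sets and architectures.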
The term “attribute transfer” refers to the task of altering images in such a way that the semantic interpretation of a given input image is shifted towards an intended direction, which is quantified by semantic attributes. Prominent example applications are photorealistic changes of facial features and expressions, like changing the hair color, adding a smile, enlarging the nose, or altering the entire context of a scene, like transforming a summer landscape into a winter panorama. Recent advances in attribute transfer are mostly based on generative deep neural networks, using various techniques to manipulate images in the latent space of the generator. In this paper, we present a novel method for the common sub-task of local attribute transfer, where only parts of a face have to be altered in order to achieve semantic changes (e.g. removing a mustache). In contrast to previous methods, where such local changes have been implemented by generating new (global) images, we propose to formulate local attribute transfer as an inpainting problem. Removing and regenerating only parts of images, our “Attribute Transfer Inpainting Generative Adversarial Network” (ATI-GAN) is able to utilize local context information to focus on the attributes while keeping the background unmodified, resulting in visually sound results.
In this preliminary report, we present a simple but very effective technique to stabilize the training of CNN-based GANs. Motivated by recently published methods using frequency decomposition of convolutions (e.g., Octave Convolutions), we propose a novel convolution scheme to stabilize the training and reduce the likelihood of a mode collapse. The basic idea of our approach is to split convolutional filters into additive high and low frequency parts, while shifting weight updates from low to high during the training. Intuitively, this method forces GANs to learn low frequency coarse image structures before descending into fine (high frequency) details. Our approach is orthogonal and complementary to existing stabilization methods and can simply be plugged into any CNN-based GAN architecture. First experiments on the CelebA dataset show the effectiveness of the proposed method.
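An additive low/high frequency split of a filter can be illustrated in its simplest form: take the filter's mean as the low-frequency (DC) component and the residual as the high-frequency part, so the two parts sum back exactly to the original filter. This is a minimal sketch of the splitting idea; the actual decomposition used in the paper may differ.

```python
def split_filter(k):
    # Split a 3x3 filter additively into a low-frequency part (its
    # mean, a DC component) and a high-frequency residual; by
    # construction low + high reproduces the original filter.
    mean = sum(sum(row) for row in k) / 9.0
    low = [[mean] * 3 for _ in range(3)]
    high = [[k[i][j] - mean for j in range(3)] for i in range(3)]
    return low, high

# A Sobel edge filter is purely high-frequency: its mean is zero.
sobel = [[-1.0, 0.0, 1.0], [-2.0, 0.0, 2.0], [-1.0, 0.0, 1.0]]
low, high = split_filter(sobel)
recon = [[low[i][j] + high[i][j] for j in range(3)] for i in range(3)]
```

With such a split, the training schedule can weight updates to the two parts differently, moving emphasis from the low- to the high-frequency part over time.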
Interpreting seismic data requires the characterization of a number of key elements such as the position of faults and main reflections, presence of structural bodies, and clustering of areas exhibiting a similar amplitude versus angle response. Manual interpretation of geophysical data is often a difficult and time-consuming task, complicated by lack of resolution and presence of noise. In recent years, approaches based on convolutional neural networks have shown remarkable results in automating certain interpretative tasks. However, these state-of-the-art systems usually need to be trained in a supervised manner, and they suffer from a generalization problem. Hence, it is highly challenging to train a model that can yield accurate results on new real data obtained with different acquisition, processing, and geology than the data used for training. In this work, we introduce a novel method that combines generative neural networks with a segmentation task in order to decrease the gap between annotated training data and uninterpreted target data. We validate our approach on two applications: the detection of diffraction events and the picking of faults. We show that when transitioning from synthetic training data to real validation data, our workflow yields superior results compared to its counterpart without the generative network.
Most eCommerce applications, like web-shops, have millions of products. In this context, the identification of similar products is a common sub-task, which can be utilized in recommendation systems, product search engines, and internal supply logistics. By providing this data set, our goal is to boost the evaluation of machine learning methods for predicting the category of retail products from tuples of images and descriptions.
The manufacturing of conventional electronics has become a highly complicated process, which requires intensive investment. In this context, printed electronics keeps attracting attention from both academia and industry. The primary reason is the simplification of the manufacturing process via additive printing technology such as ink-jet printing. Consequently, advantages are realized such as on-demand fabrication, minimal material waste and versatile choice of substrate materials. Central to the development of printed electronic circuits are printed transistors. Recently, metal oxide semiconductors such as indium oxide have become promising materials for the fabrication of printed transistors due to their high charge mobility. Furthermore, electrolyte-gating also provides benefits such as the low-voltage operation in sub-1 V regime due to the large gate capacitance provided by electrical double layers. This opens new possibilities to fabricate printed devices and circuits for niche applications.
To facilitate the design and fabrication of printed circuits, the development of compact models is necessary. However, most of the current works have focused on the study of the static behavior of transistors, while the in-depth understanding of other characteristics such as the dynamic or noise behavior is missing. To this end, the purpose of this work is the comprehensive study on capacitance and noise properties of inkjet-printed electrolyte-gated thin-film transistors (EGT) based on indium oxide semiconductors. Proper modeling approaches are also proposed to capture accurately the electrical behaviour, which can be further utilized to enable advanced analysis of digital, analog and mixed-signal circuits.
In this work, the capacitance of EGTs is characterized using voltage-dependent impedance spectroscopy. Intrinsic and extrinsic effects are carefully separated by using de-embedding test structures. Also, a dedicated equivalent circuit model is established to offer accurate simulations of the measured frequency response of the gate impedance. Based on that, it is revealed that top-gated EGTs have the potential to reach operation frequency in the kHz regime with proper optimizations of materials and printing process. Furthermore, a Meyer-like model is proposed to accurately capture the capacitance-voltage characteristics of the lumped terminal capacitance. Both parasitic and nonquasi-static effects are considered. This further enables the AC and transient analysis of complex circuits in circuit simulators.
Subsequently, the noise properties are studied in the field of printed electronics. Low-frequency noise of EGTs is characterized using a reliable experimental setup. By examining measured noise spectra of the drain current at various gate voltages, number fluctuation with correlated mobility fluctuation has been determined as the primary noise mechanism. Based on that, the normalized flat-band voltage noise can be determined as the key performance metric, which is only 1.08 × 10^-7 V² µm², significantly lower in comparison with other thin-film technologies that are based on dielectric gating and semiconductors such as IZO and IGZO. A plausible reason could be the large gate capacitance offered by the electrical double layers. This renders EGT technology useful for low-noise and sensitive applications such as sensor periphery circuits.
Last but not least, various circuit designs based on EGT technology are proposed, including basic digital circuits such as inverters and ring oscillators. Their performance metrics such as the propagation delay and power consumption are extensively characterized. Also, the first design of a printed full-wave rectifier is presented by using diode-connected EGTs, which features near-zero threshold voltage. As a consequence, the presented rectifier can effectively process input voltage with a small amplitude of 100 mV and a cut-off frequency of 300 Hz, which is particularly attractive for the application domain of energy harvesting. Additionally, the previously established capacitance models are verified on those circuits, which provide a satisfactory agreement between the simulation and measurement data.
An Empirical Investigation of Model-to-Model Distribution Shifts in Trained Convolutional Filters
(2021)
We present first empirical results from our ongoing investigation of distribution shifts in image data used for various computer vision tasks. Instead of analyzing the original training and test data, we propose to study shifts in the learned weights of trained models. In this work, we focus on the properties of the distributions of the dominantly used 3x3 convolution filter kernels. We collected and publicly provide a data set with over half a billion filters from hundreds of trained CNNs, using a wide range of data sets, architectures, and vision tasks. Our analysis shows interesting distribution shifts (or the lack thereof) between trained filters along different axes of meta-parameters, like data type, task, architecture, or layer depth. We argue that the observed properties are a valuable source for further investigation into a better understanding of the impact of shifts in the input data on the generalization abilities of CNN models, and for novel methods for more robust transfer learning in this domain.
A new approach for determining the distance between two or more smartphones is presented. The position of each smartphone, indoors or in the field, is determined relative to a reference point (spatial anchor point). Via a central server, the smartphones exchange their positions relative to the reference point and can compute their mutual distances from them. If the distance between two smartphones falls below a threshold (< 2 m), a corresponding signal is issued on the smartphones.
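The server-side distance check described above reduces to elementary geometry once every phone reports its anchor-relative coordinates. The following sketch (hypothetical data structures; the actual system's interfaces are not described in the abstract) computes all pairwise distances and flags pairs below the 2 m threshold.

```python
import math

def pairwise_alert(positions, threshold=2.0):
    # Each phone reports its (x, y, z) position in meters relative to
    # the shared spatial anchor point; the server computes pairwise
    # Euclidean distances and flags pairs closer than the threshold.
    alerts = []
    ids = sorted(positions)
    for a in range(len(ids)):
        for b in range(a + 1, len(ids)):
            d = math.dist(positions[ids[a]], positions[ids[b]])
            if d < threshold:
                alerts.append((ids[a], ids[b], d))
    return alerts

phones = {"A": (0.0, 0.0, 0.0), "B": (1.0, 1.0, 0.0), "C": (5.0, 0.0, 0.0)}
close_pairs = pairwise_alert(phones)   # only A and B are within 2 m
```

Because all positions share one reference frame, no phone ever needs another phone's absolute location, only the anchor-relative offsets exchanged via the server.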
The invention relates to a method for synchronizing a network device for wireless communication, in particular a network terminal, in a wireless network, wherein the network device comprises an integrated circuit for wireless communication (IWC), a synchronization-event detector (SED) for detecting synchronization events, a controllable clock generator (CCG) for generating a synchronized time signal T_CCG, and a synchronization control device (SCD) for controlling the synchronization process of the network device. During a synchronization phase, the network device performs the following steps: First, a synchronization frame is received and a synchronization timestamp T_AP is detected. Then a timestamp T_B, which defines the reception time of the synchronization frame, is generated by means of an IWC clock contained in the IWC. In a further step, a potential change representing a synchronization event is generated at a port of the IWC. Furthermore, a timestamp T_SE, which defines the time of the synchronization event, is generated by means of the IWC clock. The SED detects the synchronization event by evaluating the duration of the potential change at the IWC port and generates a timestamp T_S using the synchronized time signal T_CCG, wherein the timestamp T_S defines the same point in time of the synchronization event as the timestamp T_SE. The timestamps T_AP, T_B, T_SE, and T_S, determined by processing one or more synchronization-event frames according to steps (a) to (d), are then used to synchronize the synchronized time signal T_CCG generated by the CCG to the master time signal.
In 2010, Offenburg University introduced a medical engineering program with the focus 'cardiology, electrophysiology, and electronic cardiac implants', first as a bachelor's and later also as a master's degree program. The didactic teaching concept designed for this focus aims to impart immediately applicable theoretical knowledge and practical skills that graduates will need in their future careers in industry, or as technical partners of treating physicians in highly specialized clinical facilities.
Due to the lack of commercial offerings, implementing this teaching concept requires the in-house engineering of suitable teaching aids. This concerns the hardware and software development of visual demonstrations of pathological and implant-induced heart rhythms, the synthetic provision of true-to-original electrocardiographic lead signals from clinical routine, and furthermore the construction of in-vitro training systems for therapies with electronic cardiac implants and for high-frequency catheter ablation.
In particular, the elective courses 'programming of cardiac pacemakers' and 'programming of defibrillators', whose attendance was intended to enable participants a particularly rapid entry into the profession, were designed didactically in close alignment with the four-component instructional design model of teaching.
The continuous use of formative evaluation instruments led to significant improvements both in the overall concept of the courses and in the self-developed solutions of the aforementioned special teaching and training equipment used there.
A summative evaluation of the teaching concept is difficult due to its uniqueness. For this reason, a quantitative examination of the influence of attending the practically oriented elective 'programming of cardiac pacemakers' on the grade of the combined final examination in the subjects 'electrocardiography' and 'electrostimulation' appeared reasonable. This evaluation included a cohort of 221 students, 76 women and 145 men, of whom 93 did not take the elective and 128 had attended it.
Over seven pooled academic years, the practical training in the elective 'programming of cardiac pacemakers' was shown to clearly influence the performance level of medical engineering students in the combined final examination 'electrocardiography and electrostimulation'.
The teaching concept co-designed in this work, along with the teaching materials and environments created, were used extensively in the practicals, seminars, and lectures of the focus 'cardiology, electrophysiology, and electronic cardiac implants' in the bachelor's and master's programs in medical engineering at Offenburg University. They also enabled interactive practical continuing-education events for physicians and mid-level medical staff, and for medical technology companies active in these fields.
Despite the success of convolutional neural networks (CNNs) in many computer vision and image analysis tasks, they remain vulnerable against so-called adversarial attacks: small, crafted perturbations in the input images can lead to false predictions. A possible defense is to detect adversarial examples. In this work, we show how analysis in the Fourier domain of input images and feature maps can be used to distinguish benign test samples from adversarial images. We propose two novel detection methods: our first method employs the magnitude spectrum of the input images to detect an adversarial attack. This simple and robust classifier can successfully detect adversarial perturbations of three commonly used attack methods. The second method builds upon the first and additionally extracts the phase of Fourier coefficients of feature maps at different layers of the network. With this extension, we are able to improve adversarial detection rates compared to state-of-the-art detectors on five different attack methods. The code for the methods proposed in the paper is available at github.com/paulaharder/SpectralAdversarialDefense
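The magnitude spectrum used by the first detector is simply the absolute value of the 2D discrete Fourier transform of the input. The naive DFT below is meant only to make that feature transparent on a tiny patch; a real detector would use an FFT over full images and feed the magnitudes to a classifier.

```python
import cmath

def magnitude_spectrum(img):
    # Naive 2D DFT magnitude spectrum of a small square grayscale
    # patch.  O(n^4), for illustration only; production code would
    # use an FFT.
    n = len(img)
    mag = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0j
            for x in range(n):
                for y in range(n):
                    s += img[x][y] * cmath.exp(
                        -2j * cmath.pi * (u * x + v * y) / n)
            mag[u][v] = abs(s)
    return mag

# A constant 4x4 image concentrates all energy in the DC coefficient.
flat = [[1.0] * 4 for _ in range(4)]
mag = magnitude_spectrum(flat)
```

Adversarial perturbations tend to leave characteristic traces in these magnitudes, which is what makes such a simple spectral feature usable for detection.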
Aerosol particles play an important role in the climate system by absorbing and scattering radiation and influencing cloud properties. They are also one of the biggest sources of uncertainty for climate modeling. Many climate models do not include aerosols in sufficient detail. In order to achieve higher accuracy, aerosol microphysical properties and processes have to be accounted for. This is done in the ECHAM-HAM global climate aerosol model using the M7 microphysics model, but increased computational costs make it very expensive to run at higher resolutions or for a longer time. We aim to use machine learning to approximate the microphysics model at sufficient accuracy and reduce the computational cost by being fast at inference time. The original M7 model is used to generate data of input-output pairs on which a neural network is trained. By using a special logarithmic transform we are able to learn the variables' tendencies, achieving an average score of . On a GPU we achieve a speed-up of 120 compared to the original model.
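Tendencies in microphysics schemes span many orders of magnitude and both signs, which motivates a logarithmic transform before training. One common form of such a transform is the signed log below; the exact transform used in the paper is not specified in the abstract, so this is an assumed illustration of the general idea.

```python
import math

def signed_log(x, eps=1e-8):
    # Signed logarithmic compression: preserves the sign of x,
    # compresses its magnitude, and is exactly invertible.  The
    # scale eps is a hypothetical choice, not from the paper.
    return math.copysign(math.log1p(abs(x) / eps), x)

def signed_exp(y, eps=1e-8):
    # Inverse of signed_log, used to map network outputs back to
    # physical tendency units.
    return math.copysign(eps * math.expm1(abs(y)), y)

tendency = -3.2e-5                      # a tiny negative tendency
roundtrip = signed_exp(signed_log(tendency))
```

Training the network on the transformed values keeps small and large tendencies on comparable numeric footing, while the inverse transform recovers physical units at inference time.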
Estimation of Scattering and Transfer Parameters in Stratified Dispersive Tissues of the Human Torso
(2021)
The aim of this study is to understand the effect of the various layers of biological tissue on electromagnetic radiation in a certain frequency range. Understanding these effects could prove crucial for the development of dynamic imaging systems under operating conditions during catheter ablation in the heart. As the catheter passes through the aorta along arterial paths into the region of interest inside the heart, a three-dimensional localization of the catheter is required. In this paper, the detection of the catheter by means of electromagnetic waves is studied. To this end, an appropriate model of the layers of the human torso is defined and simulated both without and with an inserted electrode.
Object Detection and Mapping with Unmanned Aerial Vehicles Using Convolutional Neural Networks
(2021)
Significant progress has been made in the field of deep learning through intensive research over the last decade. So-called convolutional neural networks are an essential component of this research. In this type of neural network, the mathematical convolution operator is used to extract characteristics or anomalies. The purpose of this work is to investigate to what extent it is possible, under certain initial settings, to feed aerial recordings and flight data of Unmanned Aerial Vehicles (UAVs) into the architecture of a neural network in order to detect and map an object. Using the calculated contours or dimensions of the resulting bounding boxes, the position of the objects can be determined relative to the current UAV location.
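The step from a bounding-box center to a position relative to the UAV can be sketched with a simple pinhole model, assuming a nadir-pointing camera over flat ground. All names and parameter values below are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np

def pixel_to_ground_offset(u, v, altitude_m, fx, fy, cx, cy):
    """Map a bounding-box center (u, v) in pixels to a metric offset on
    flat ground, assuming a downward-facing pinhole camera with focal
    lengths (fx, fy) in pixels and principal point (cx, cy).
    Illustrative sketch; the paper's mapping may differ.
    """
    x = (u - cx) * altitude_m / fx
    y = (v - cy) * altitude_m / fy
    return x, y

# A detection 100 px right of the principal point, seen from 50 m altitude
# with f = 1000 px, lies 5 m to the side of the UAV's ground point.
x, y = pixel_to_ground_offset(740, 360, 50.0, 1000.0, 1000.0, 640.0, 360.0)
```

Combining this offset with the UAV's logged GPS position and heading would then place the detected object on a map.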
The applicability of local magnetic field characteristics for more precise localization of subjects and/or objects in indoor environments, such as railway stations, airports, exhibition halls, showrooms, or shopping centers, is considered. An investigation has been carried out to find out whether and how low-cost magnetic field sensors and mobile robot platforms can be used to create maps that improve the accuracy and robustness of later navigation with smartphones or other devices.
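The map-then-match idea can be sketched as simple magnetic fingerprinting: a robot records field magnitudes at known grid positions, and a device later matches its own reading against the stored fingerprints. This nearest-fingerprint matcher and all values are illustrative assumptions, not the investigated system.

```python
import numpy as np

# Hypothetical fingerprint map recorded by a robot platform:
# field magnitude in microtesla at known indoor grid positions (meters).
grid_positions = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
fingerprints = np.array([48.1, 52.4, 45.0, 50.2])

def localize(measured_uT):
    """Return the grid position whose stored fingerprint is closest
    to the measured field magnitude (minimal 1-NN sketch)."""
    idx = np.argmin(np.abs(fingerprints - measured_uT))
    return grid_positions[idx]

pos = localize(51.9)  # closest stored fingerprint is 52.4 µT
```

A practical system would match full 3-axis field sequences along a trajectory rather than single scalar readings, but the matching principle is the same.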
The aim of this work is the application and evaluation of a method to visually detect markers at a distance of up to five meters and to determine their real-world position. Combinations of cameras and lenses with different parameters were studied to determine the optimal configuration. Based on this configuration, camera images were taken after proper calibration. These images were then transformed into a bird's-eye view using a homography matrix, which was calculated both from four point pairs and via coordinate transformations. The obtained images show the ground plane undistorted, making it possible to convert a pixel position into a real-world position with a conversion factor. The proposed approach helps to effectively create datasets for training neural networks for navigation purposes.
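Estimating a homography from four point pairs can be sketched with the direct linear transform (DLT). The point coordinates below are made up for illustration; the paper's calibration data and computation path may differ.

```python
import numpy as np

def homography_from_points(src, dst):
    """Estimate the 3x3 homography mapping src -> dst from four point
    pairs via the direct linear transform (illustrative sketch)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of A (last right-singular vector).
    _, _, Vt = np.linalg.svd(np.array(A))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def warp_point(H, x, y):
    """Apply H to a pixel position and dehomogenize."""
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]

# Hypothetical image corners of a ground marker mapped to metric coordinates.
src = [(100, 400), (540, 400), (600, 80), (40, 80)]
dst = [(0, 0), (1, 0), (1, 1), (0, 1)]
H = homography_from_points(src, dst)
```

After the warp, the ground plane appears undistorted, so a pixel offset corresponds to a metric offset up to a single conversion factor, as described above.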
Correlation Clustering, also called the minimum cost Multicut problem, is the process of grouping data by pairwise similarities. It has proven to be effective on clustering problems where the number of classes is unknown. However, not only is the Multicut problem NP-hard, but an undirected graph G with n vertices representing single images also has up to n(n-1)/2 edges, making it challenging to apply correlation clustering to large datasets. In this work, we propose Multi-Stage Multicuts (MSM) as a scalable approach for image clustering. Specifically, we solve minimum cost Multicut problems across multiple distributed compute units. Our approach not only allows us to solve problem instances which are too large to fit into the shared memory of a single compute node, but also achieves significant speedups while preserving the clustering accuracy. We evaluate our proposed method on the CIFAR10 …
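The quadratic edge count is what makes full Multicut instances on image datasets so large, which the following tiny computation makes concrete.

```python
# Number of edges in a complete undirected graph on n vertices: n(n-1)/2.
# This quadratic growth is why Multicut instances on large image sets
# quickly outgrow the shared memory of a single compute node.
def num_edges(n):
    return n * (n - 1) // 2

small = num_edges(4)        # 6 edges for 4 images
cifar = num_edges(60000)    # all 60,000 CIFAR10 images: ~1.8 billion edges
```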
Multiple Object Tracking (MOT) is a long-standing task in computer vision. Current approaches based on the tracking-by-detection paradigm either require some sort of domain knowledge or supervision to associate data correctly into tracks. In this work, we present a self-supervised multiple object tracking approach based on visual features and minimum cost lifted multicuts. Our method relies on straightforward spatio-temporal cues that can be extracted from neighboring frames in an image sequence without supervision. Clustering based on these cues enables us to learn the appearance invariances required for the tracking task at hand and to train an autoencoder to generate suitable latent representations. The resulting latent representations can thus serve as robust appearance cues for tracking, even over large temporal distances where no reliable spatio-temporal features can be extracted. We show that, despite being trained without the provided annotations, our model delivers competitive results on the challenging MOT Benchmark for pedestrian tracking.
In this work, we evaluate two different image clustering objectives, k-means clustering and correlation clustering, in the context of feature space embeddings induced by the Triplet Loss. Specifically, we train a convolutional neural network to learn discriminative features by optimizing two popular versions of the Triplet Loss in order to study their clustering properties under the assumption of noisy labels. Additionally, we propose a new, simple Triplet Loss formulation, which shows desirable properties with respect to formal clustering objectives and outperforms the existing methods. We evaluate all three Triplet Loss formulations for k-means and correlation clustering on the CIFAR-10 image classification dataset.
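For reference, the standard margin-based Triplet Loss, one of the common formulations compared in such studies, can be written in a few lines. The paper's new formulation is not shown here.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard margin-based Triplet Loss on embedding vectors:
    pushes the anchor-positive distance below the anchor-negative
    distance by at least `margin` (one common formulation)."""
    d_ap = np.sum((anchor - positive) ** 2, axis=-1)
    d_an = np.sum((anchor - negative) ** 2, axis=-1)
    return np.maximum(0.0, d_ap - d_an + margin)

a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])   # same class, close to the anchor
n = np.array([1.0, 0.0])   # different class, far from the anchor
loss = triplet_loss(a, p, n)  # triplet already satisfied: zero loss
```

A loss of zero means the embedding already separates the classes by more than the margin, which is exactly the geometric property that benefits k-means and correlation clustering downstream.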
Engineering, construction, and operation of complex machines involve a wide range of complicated, simultaneous tasks that could potentially be automated. In this work, we focus on perception tasks in such systems, investigating deep learning approaches for multi-task transfer learning with limited training data. We show an approach that takes advantage of a technical system's focus on selected objects and their properties. We create focused representations and simultaneously solve joint objectives in a system through multi-task learning with convolutional autoencoders. The focused representations are used as a starting point for the data-saving solution of the additional tasks. The efficiency of this approach is demonstrated using images and tasks of an autonomous circular crane with a grapple.
A versatile liquid metal (LM) printing process is presented that enables the fabrication of various fully printed devices, such as intra- and interconnect wires, resistors, diodes, transistors, and basic circuit elements such as inverters, and that is process compatible with other digital printing and thin-film structuring methods for integration. For this, a glass capillary-based direct-write method for printing LMs such as eutectic gallium alloys is demonstrated, exploring the potential for fully printed LM-enabled devices. Examples of successful device fabrication include resistors, p–n diodes, and field effect transistors. The device functionality and the ease of an integrated fabrication flow show that the potential of LM printing far exceeds interconnecting conventional electronic devices in printed electronics.
The nonlinear behavior of inverters is mainly influenced by the interlock and switching times of the semiconductors. In the following work, a method is presented that enables online identification of the switching times of the semiconductors. This information allows a compensation of the nonlinear behavior and a reduction of the interlock time, and it can be used for diagnostic purposes. First, a theoretical derivation of the method is given by considering different switching cases of the inverter and deriving identification possibilities. The method is then extended so that the entire module is taken into account. Furthermore, a possible theoretical implementation is shown. After the methodology has been investigated with respect to possible limitations, boundary conditions, and real hardware, an implementation in an FPGA is performed. Finally, the results are presented and discussed, and further improvements are outlined.
Significant progress has been made in the development and commercialization of electrically conductive adhesives, which makes shingling a very attractive approach for solar cell interconnection. In this study, we investigate the shading tolerance of two types of solar modules based on shingle interconnection: first, the already commercialized string approach, and second, the matrix technology, in which solar cells are intrinsically interconnected in parallel and in series. An experimentally validated LTspice model predicts major advantages for the power output of the matrix layout under partial shading. Diagonal as well as random shading of a 1.6 m² solar module is examined. Power gains of up to 73.8 % for diagonal shading and up to 96.5 % for random shading are found for the matrix technology compared to the standard string approach. The key factor is an increased current extraction due to lateral current flows. Especially under minor shading, the matrix technology benefits from an increased fill factor as well. Under diagonal shading, we find the probability of parts of the matrix module being bypassed to be reduced by 40 % in comparison to the string module. In consequence, the overall risk of hotspot occurrence in matrix modules is decreased significantly.
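The lateral-current advantage can be caricatured with a deliberately crude current-limit model (not the paper's LTspice model): in a pure series string the most shaded cell limits the whole string current, while parallel interconnection lets unshaded paths keep contributing.

```python
import numpy as np

# Toy illustration with made-up numbers: four shingle cells, one shaded.
# In a series string, the string current is limited by the weakest cell;
# with parallel paths (matrix-like), each path contributes its own current.
cell_currents = np.array([9.0, 9.0, 2.0, 9.0])  # available current in A

series_current = cell_currents.min()    # series string: limited to 2 A
parallel_current = cell_currents.sum()  # parallel paths: 29 A in total
```

Real modules combine series and parallel interconnection and include bypass diodes, so this two-line model only conveys the direction of the effect, not its magnitude.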
The proposed method includes: identification and documentation of the elementary TRIZ inventive principles from the TRIZ body of knowledge; extension and enhancement of the inventive principles through patent and technology analysis, avoiding overlapping and redundant principles; classification and adaptation of the principles into at least the following categories: working medium, target object, useful action, harmful effect, environment, information, field, substance, time, and space; and assignment of the elementary inventive principles to at least the following underlying engineering domains: universal, design, mechanical, acoustic, thermal, chemical, electromagnetic, intermolecular, biological, and data processing. The method further includes classification of the abstraction level of the elementary principles; definition of a statistical ranking of principles for different problem types and for specific engineering or non-technical domains; definition of strategies for selecting principle sets with high solution potential for predefined problems; automated semantic transformation of the elementary inventive principles into solution ideas; and evaluation of automatically generated ideas and their transformation into innovation or inventive concepts.
Recently, adversarial attacks on image classification networks by the AutoAttack (Croce and Hein, 2020b) framework have drawn a lot of attention. While AutoAttack has shown a very high attack success rate, most defense approaches focus on network hardening and robustness enhancements, such as adversarial training. This way, the currently best-reported method can withstand about 66% of adversarial examples on CIFAR10. In this paper, we investigate the spatial and frequency domain properties of AutoAttack and propose an alternative defense. Instead of hardening a network, we detect adversarial attacks during inference and reject manipulated inputs. Based on a rather simple and fast analysis in the frequency domain, we introduce two different detection algorithms: first, a black-box detector that operates only on the input images and achieves a detection accuracy of 100% on the AutoAttack CIFAR10 benchmark and 99.3% on ImageNet, for epsilon = 8/255 in both cases; second, a white-box detector using an analysis of CNN feature maps, leading to detection rates of 100% and 98.7% on the same benchmarks.
Sustainable chemical processes should be designed to combine technological advantages and progress with lower safety risks and minimal environmental impact, for example by reducing raw material, energy, and water consumption and by avoiding hazardous waste and pollution with toxic chemical agents. A number of novel eco-friendly chemical technologies have been developed in recent decades with the help of eco-innovation approaches and methods such as Life Cycle Analysis, Green Process Engineering, Process Intensification, Process Design for Sustainability, and others. An emerging approach to sustainable process design in process engineering builds on innovative solutions inspired by nature. However, the implementation of eco-friendly technologies often faces secondary ecological problems. This study postulates that the eco-inventive principles identified in natural systems make it possible to avoid secondary eco-problems and proposes to apply these principles to sustainable design in chemical process engineering. The research work critically examines how this approach differs from biomimetics as it is commonly used for copying natural systems. The application of nature-inspired eco-design principles is illustrated with an example of a sustainable technology for the extraction of nickel from pyrophyllite.
This work gives an overview of the trade-off between the benefits and the limitations that an early modern fluted armour (Riefelharnisch) imposes on human biomechanics. A central finding is that the armour does restrict mobility to a certain degree, but that various mechanical concepts were employed to minimize this restriction as far as possible. The so-called Geschübe (articulated lames) in particular represents a compromise between mobility and protection and is used mainly in the region of the joints. Stiff structures are employed where little freedom of movement is required, for example at the rib cage or the upper parts of the back. The advantage of the stiffer parts of the armour is their increased protective function, which entails a lower risk of injury.
IoT networks are increasingly used as entry points for cyberattacks, as they often offer low security levels, may allow the control of physical systems, and potentially open access to other IT networks and infrastructures. Existing intrusion detection systems (IDS) and intrusion prevention systems (IPS) mostly concentrate on legacy IT networks. Nowadays, they come with a high degree of complexity and adaptivity, including the use of artificial intelligence, but it is only recently that these techniques have also been applied to IoT networks. In this paper, we present a survey of machine learning (ML) and deep learning (DL) methods for intrusion detection and investigate how previous works have used federated learning (FL) for IoT cybersecurity. To this end, we give an overview of IoT protocols and potential security risks, report the techniques and datasets used in the studied works, discuss the challenges of using ML, DL, and FL for IoT cybersecurity, and provide future insights.
Investigation of the Angle Dependency of Self-Calibration in Multiple-Input-Multiple-Output Radars
(2021)
Multiple-Input-Multiple-Output (MIMO) is a key technology for improving the angular (spatial) resolution of radars. In MIMO radars, amplitude and phase errors in the antenna elements lead to an increased sidelobe level and a misalignment of the mainlobe, degrading the performance of the antenna channels. First, this paper analyzes the effect of amplitude and phase errors on the angular spectrum using Monte-Carlo simulations. The results are then compared with measurements. Finally, an error correction based on a self-calibration method is proposed and its angle dependency is evaluated. It is shown that the error values change with the incident angle, so that an angle-dependent calibration is required.
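A single Monte-Carlo draw of this kind of study can be sketched for a uniform linear array: compute the array factor once with ideal element weights and once with random amplitude and phase errors applied. The array size, error magnitudes, and error model below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def array_factor_db(weights, theta, d_over_lambda=0.5):
    """Normalized array factor (in dB) of a uniform linear array with
    half-wavelength spacing, evaluated at angles theta (radians)."""
    n = np.arange(len(weights))
    steering = np.exp(2j * np.pi * d_over_lambda * np.outer(np.sin(theta), n))
    af = np.maximum(np.abs(steering @ weights), 1e-12)  # floor avoids log(0)
    return 20 * np.log10(af / af.max())

theta = np.radians(np.linspace(-90, 90, 721))
ideal = np.ones(8, dtype=complex)

# One Monte-Carlo draw of an assumed error model: small random amplitude
# (sigma 10 %) and phase (sigma 10 degrees) errors per antenna element.
amp = 1 + 0.1 * rng.standard_normal(8)
phase = np.radians(10) * rng.standard_normal(8)
perturbed = amp * np.exp(1j * phase)

af_ideal = array_factor_db(ideal, theta)
af_err = array_factor_db(perturbed, theta)
```

Repeating the draw many times and collecting sidelobe levels and mainlobe positions yields the Monte-Carlo statistics discussed above; comparing the patterns as a function of the steering angle illustrates why the calibration must be angle dependent.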
Artificial intelligence (AI), and in particular machine learning algorithms, are of increasing importance in many application areas, but interpretability and understandability, as well as responsibility, accountability, and fairness of the algorithms' results, all of which are crucial for increasing human trust in these systems, are still largely missing. Big industrial players, including Google, Microsoft, and Apple, have become aware of this gap and have recently published their own guidelines for the use of AI in order to promote fairness, trust, interpretability, and other goals. Interactive visualization is one of the technologies that may help to increase trust in AI systems. During the seminar, we discussed the requirements for trustworthy AI systems as well as the technological possibilities provided by interactive visualizations to increase human trust in AI.
eLetter on the article "Plague Through History" by Nils Chr. Stenseth, published in Science, Vol. 321, Issue 5890, pp. 773-774 (doi.org/10.1126/science.1161496)