The measurement of the active material volume fraction in composite electrodes of lithium-ion battery cells is difficult due to the small (sub-micrometer) and irregular structure and multi-component composition of the electrodes, particularly in the case of blend electrodes. State-of-the-art experimental methods such as focused ion beam/scanning electron microscopy (FIB/SEM) and subsequent image analysis require expensive equipment and significant expertise. We present here a simple method for identifying active material volume fractions in single-material and blend electrodes, based on comparing the experimental equilibrium cell voltage curve (open-circuit voltage as a function of charge throughput) with active material half-cell potential curves (half-cell potential as a function of lithium stoichiometry). The method requires only (i) low-current cycling data of full cells, (ii) cell opening for measurement of electrode thickness and active electrode area, and (iii) literature half-cell potentials of the active materials. Mathematical optimization is used to identify the volume fractions and the lithium stoichiometry ranges in which the active materials are cycled. The method is particularly useful for model parameterization of either physicochemical (e.g., pseudo-two-dimensional) models or equivalent circuit models, as it yields a self-consistent set of stoichiometric and structural parameters. The method is demonstrated using a commercial LCO–NCA/graphite pouch cell with blend cathode, but can also be applied to other blends (e.g., graphite–silicon anode).
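As an illustrative sketch of the identification principle, the stoichiometry window of one electrode can be recovered by fitting a simulated equilibrium voltage curve to the "measured" one. The linear half-cell curves, capacity, and the coarse grid search (standing in for a proper optimizer) below are made-up illustrations, not the paper's actual data or method:

```python
import numpy as np

# Hypothetical smooth half-cell potential curves (V vs. Li/Li+); these are
# illustrative placeholders, not literature data for real materials.
def U_pos(y):                  # cathode half-cell potential vs. stoichiometry y
    return 4.25 - 0.9 * y

def U_neg(x):                  # anode half-cell potential vs. stoichiometry x
    return 0.40 - 0.25 * x

Q = 2.0                        # assumed cell capacity / Ah
q = np.linspace(0.0, Q, 100)   # charge throughput axis
y0, y1 = 0.90, 0.30            # cathode stoichiometry window (assumed known)

def cell_ocv(x0, x1):
    """Full-cell OCV for a given anode stoichiometry window (x0, x1)."""
    x = x0 + (x1 - x0) * q / Q
    y = y0 + (y1 - y0) * q / Q
    return U_pos(y) - U_neg(x)

# Synthetic "experimental" curve generated with a known anode window.
measured = cell_ocv(0.05, 0.85)

# Coarse grid search minimizing the squared voltage residual, in place of
# the mathematical optimization used in the paper.
grid = np.linspace(0.0, 1.0, 21)
best = min((np.sum((cell_ocv(a, b) - measured) ** 2), a, b)
           for a in grid for b in grid if a < b)
x0_fit, x1_fit = best[1], best[2]
```

With noiseless synthetic data the known window is recovered exactly; with real low-current cycling data, a least-squares optimizer over all electrode windows and volume fractions would take the place of the grid search.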
In this preliminary report, we present a simple but very effective technique to stabilize the training of CNN-based GANs. Motivated by recently published methods using frequency decomposition of convolutions (e.g. Octave Convolutions), we propose a novel convolution scheme to stabilize the training and reduce the likelihood of a mode collapse. The basic idea of our approach is to split convolutional filters into additive high and low frequency parts, while shifting weight updates from low to high during the training. Intuitively, this method forces GANs to learn low frequency coarse image structures before descending into fine (high frequency) details. Our approach is orthogonal and complementary to existing stabilization methods and can simply be plugged into any CNN-based GAN architecture. First experiments on the CelebA dataset show the effectiveness of the proposed method.
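One possible reading of the additive filter split can be sketched in a few lines: take the kernel's DC component as the low-frequency part, the residual as the high-frequency part, and fade the residual in as training progresses. This is an illustrative interpretation, not the paper's exact scheme:

```python
import numpy as np

def split_kernel(k):
    """Additive low/high frequency split of a 2D conv kernel.
    Low part = the kernel's mean (DC component); high part = residual."""
    k_low = np.full_like(k, k.mean())
    k_high = k - k_low
    return k_low, k_high

def effective_kernel(k, progress):
    """Blend the two parts: early in training (progress=0) only the
    low-frequency part is active; the high-frequency residual is faded
    in as progress goes from 0 to 1."""
    k_low, k_high = split_kernel(k)
    return k_low + progress * k_high

k = np.arange(9.0).reshape(3, 3)       # toy 3x3 convolution kernel
k_early = effective_kernel(k, 0.0)     # training start: coarse structures only
k_late = effective_kernel(k, 1.0)      # training end: full kernel
```

Because the high-frequency residual sums to zero, the blend preserves the kernel's DC gain at every stage of training, which is what lets the network learn coarse structure first without changing the overall response level.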
After a brief introduction to high-strength steels and the welding of high-strength steels, this contribution discusses the current and future design rules for welded joints in high-strength steels and illustrates them with examples. The material and microstructural properties relevant to the welding process and the manufacturing methods of higher-strength steels are described, together with the resulting requirements during the welding operation. Using examples, the currently valid design rules for welded joints in higher-strength steels are explained, and a new, experimentally validated design model for fillet welds is presented that makes it possible to capture the important influence of the welding filler metal in a targeted way. Finally, the insights gained from numerical welding simulations regarding temperature, microstructure, residual stress state, and the load-bearing behavior of a weld are described and illustrated with examples.
Despite its considerable size, the first edition of "Ausführung von Stahlbauten" became a very popular reference work and an indispensable practical aid in many areas of steel construction. After the standards on which the first edition was based were fundamentally revised, the present book was created as an updated successor edition of the commentaries. The work benefits from having been prepared by the leading specialists of the standardization committee: the authors thus reliably deliver the correct interpretations of the standard texts. This second edition of "Ausführung von Stahlbauten" is therefore once again a good choice for working safely and professionally with the standards DIN EN 1090-2 and DIN EN 1090-4. Since DIN EN 1090-1 is currently under revision, it is not part of the second edition of the commentaries.
Introducing project management standards demonstrably costs time and money, brings temporary unrest into the organization, and is often initiated by a bothersome customer requirement. When the view is restricted to these aspects, the topic is frequently perceived as unpleasant. We, however, want to present the implementation of PM standards as a worthwhile investment, highlight potentials, opportunities, and synergies, and create a solid basis for numerous organizational and improvement projects for introducing PM standards.
Internationale Projektarbeit
(2019)
In many application domains, in particular automotive, guaranteeing a very low failure rate is crucial to meet functional and safety standards. Above all, reliable operation of memory components such as SRAM cells is of essential importance. Due to aggressive technology downscaling, process and runtime variations significantly impact manufacturing yield as well as functionality. For this reason, a thorough memory failure rate assessment is imperative for correct circuit operation and yield improvement. In this regard, Monte Carlo simulations have been the conventional method to estimate the variability-induced failure rate of memory components. However, Monte Carlo methods become infeasible when estimating rare events such as high-sigma failure rates. To this end, Importance Sampling methods have been proposed, which reduce the number of required simulations substantially. However, existing methods still suffer from inaccuracies and high computational effort, in particular for high-sigma problems. In this paper, we fill this gap by presenting an efficient mixture Importance Sampling approach based on Bayesian optimization, which deploys a surface model of the objective function to find the most probable failure points. Its advantages include constant complexity independent of the dimensions of the design space, the potential to find the global extrema, and higher trustworthiness of the estimated failure rate through accurate exploration of the design space. The approach is evaluated on a 6T-SRAM cell as well as a master-slave latch based on a 28 nm FDSOI process. The results show up to 63× better accuracy in estimating failure rates compared to the best state-of-the-art solutions on a 28 nm technology node.
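The core idea of mixture Importance Sampling for rare-event estimation can be illustrated on a one-dimensional toy problem (this is a generic sketch, not the paper's Bayesian-optimization approach): estimate a 4-sigma tail probability of a standard Gaussian with a defensive mixture proposal, half nominal and half centered on the assumed most probable failure point:

```python
import numpy as np

rng = np.random.default_rng(0)

def norm_pdf(x, mu=0.0, sigma=1.0):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def failure(x):
    """Toy rare 'failure' event: a 4-sigma tail of the nominal distribution."""
    return x > 4.0

N = 200_000
# Defensive mixture proposal: 50% nominal N(0,1), 50% shifted N(4,1)
# centered on the (here assumed known) most probable failure point.
comp = rng.random(N) < 0.5
x = np.where(comp, rng.normal(0.0, 1.0, N), rng.normal(4.0, 1.0, N))
q = 0.5 * norm_pdf(x) + 0.5 * norm_pdf(x, mu=4.0)   # mixture proposal density
w = norm_pdf(x) / q                                  # likelihood ratio weights
p_fail = np.mean(failure(x) * w)                     # unbiased IS estimate
```

A plain Monte Carlo run of the same size would see only a handful of failures (the true probability is about 3.2e-5), whereas the shifted mixture component places half of the samples in the failure region and keeps the estimator variance low; the methods in the paper automate finding such failure points in high-dimensional design spaces.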
At the center of the overall project was the user-centered development of a practice-oriented learning and instruction environment in which context-related information is projected directly into the work area, so that learning can take place both at the workplace and in a situated manner. Through projection combined with interactivity, learning content becomes graspable in the truest sense of the word. The result is a context-aware system that accompanies and motivates learners interactively, like a coach.
The goal of the STABIL project was to predict the aging and improve the service life of mobile and stationary lithium-ion batteries. Batteries are central components of electromobility and of the stationary storage of renewable electricity. The insufficient battery lifetime in the current state of the art is a major cost driver today. In the project, the behavior of individual battery cells and complete battery systems was therefore investigated under two different systemic boundary conditions, using a cross-scale and interdisciplinary approach.
The goal of the LiBaLu subproject on modeling and simulation was to support electrode and cell development with extensive computer simulations in the sense of computer-aided engineering (CAE). The investigations focused on two areas. First, the mechanistic understanding of the complex electrochemistry in lithium-air batteries was elucidated through microkinetic models. Based on postulated multi-step mechanisms, macroscopic properties (discharge/charge characteristics, cyclic voltammograms) were predicted and compared with experimental data from the project partners. Second, the design of the prototype cell was investigated and optimized with the help of numerical simulations. This made it possible, for example, to identify optimal layer thicknesses or the role of gas transport limitations.
More than 200 years ago, the scientist Alexander von Humboldt, fascinated by nature and the phenomena he observed, noted in his travel diaries that "everything is interconnectedness". The view of nature has since become much more detailed through knowledge of phenomena and natural processes, leading to the more precise view of nature shaped by Humboldt. Technological progress and the artificial intelligence of highly developed computer systems are upsetting this view and changing the established world view through a new, unprecedented interaction between humans and machines. We therefore need digital axioms and comprehensive rules and laws for such autonomously acting systems, governing the interaction between cybernetic systems and biological individuals. This digital humanism should encompass our relationship to nature, our handling of the complexity and diversity of nature, and the technological influences on society, in order to avoid technical colonialism through supercomputers.
This paper describes the concept and some results of the project "Menschen Lernen Maschinelles Lernen" (Humans Learn Machine Learning, ML2) of the University of Applied Sciences Offenburg. It brings together students of different courses of study and practitioners from companies on the subject of Machine Learning. A mixture of blended learning and practical projects ensures a tight coupling of machine learning theory and application. The paper details the phases of ML2 and mentions two successful example projects.
Laser ultrasound was used to determine dispersion curves of surface acoustic waves on a Si (001) surface covered by AlScN films with a scandium content between 0 and 41%. By including off-symmetry directions for wavevectors, all five independent elastic constants of the film were extracted from the measurements. Results for their dependence on the Sc content are presented and compared to corresponding data in the literature, obtained by alternative experimental methods or by ab-initio calculations.
Time-Sensitive Networking (TSN) is the most promising time-deterministic wired communication approach for industrial applications. To extend TSN to IEEE 802.11 wireless networks, two challenging problems must be solved: synchronization and scheduling. This paper focuses on the first one. Even though a few solutions already meet the required synchronization accuracies, they are built on expensive hardware that is not suited for mass-market products. While the next Wi-Fi generation might support the required functionalities, this paper proposes a novel method that enables high-precision wireless synchronization using commercial low-cost components. With the proposed solution, a standard deviation of the synchronization error of less than 500 ns can be achieved for many use cases and system loads on both the CPU and the network. This performance is comparable to modern wired real-time field buses, which makes the developed method a significant contribution to the extension of the TSN protocol to the wireless domain.
Amorphous In-Ga-Zn-O (IGZO) is a high-mobility semiconductor employed in modern thin-film transistors for displays, and it is considered a promising material for Schottky diode-based rectifiers. The properties of electronic components based on IGZO strongly depend on manufacturing parameters such as the oxygen partial pressure during IGZO sputtering and post-deposition thermal annealing. In this study, we investigate the combined effect of the sputtering conditions of amorphous IGZO (In:Ga:Zn = 1:1:1) and post-deposition thermal annealing on the properties of vertical thin-film Pt-IGZO-Cu Schottky diodes, and evaluate the applicability of the fabricated Schottky diodes for low-frequency half-wave rectifier circuits. Changing the oxygen content in the gas mixture from 1.64% to 6.25%, combined with post-deposition annealing, is shown to increase the current rectification ratio from 10⁵ to 10⁷ at ±1 V, the Schottky barrier height from 0.64 eV to 0.75 eV, and the ideality factor from 1.11 to 1.39. Half-wave rectifier circuits based on the fabricated Schottky diodes were simulated using parameters extracted from measured current-voltage and capacitance-voltage characteristics. The half-wave rectifier circuits were realized at 100 kHz and 300 kHz on as-fabricated Schottky diodes with an active area of 200 μm × 200 μm, which is relevant for near-field communication (125 kHz - 134 kHz), and provided an output voltage amplitude of 0.87 V for a 2 V supply voltage. The simulation results matched the measurement data, verifying the model's accuracy for circuit-level simulation.
Modern society is striving more than ever for digital connectivity, everywhere and at any time, giving rise to megatrends such as the Internet of Things (IoT). Already today, 'things' communicate and interact autonomously with each other and are managed in networks. In the future, people, data, and things will be interlinked, which is also referred to as the Internet of Everything (IoE). Billions of devices will be ubiquitously present in our everyday environment and connected over the Internet.
As an emerging technology, printed electronics (PE) is a key enabler for the IoE offering novel device types with free form factors, new materials, and a wide range of substrates that can be flexible, transparent, as well as biodegradable. Furthermore, PE enables new degrees of freedom in circuit customizability, cost-efficiency as well as large-area fabrication at the point of use.
These unique features of PE complement conventional silicon-based technologies. Additive manufacturing processes enable the realization of many envisioned applications, such as smart objects, flexible displays, wearables in health care, and green electronics, to name but a few.
From the perspective of the IoE, interconnecting billions of heterogeneous devices and systems is one of the major challenges to be solved. Complex high-performance devices interact with highly specialized lightweight electronic devices, such as smartphones and smart sensors. Data is often measured, stored, and shared continuously with neighboring devices or in the cloud. At the same time, the abundance of data being collected and processed raises privacy and security concerns.
Conventional cryptographic operations are typically based on deterministic algorithms requiring high circuit and system complexity, which makes them unsuitable for lightweight devices.
Many applications exist where strong cryptographic operations are not required, such as device identification and authentication. Here, the security level mainly depends on the quality of the entropy source and the trustworthiness of the derived keys. Statistical properties such as the uniqueness of the keys are of great importance to precisely distinguish between single entities.
In the past decades, hardware-intrinsic security, particularly physically unclonable functions (PUFs), has gained a lot of attention as a means of providing security features for IoT devices. PUFs use their inherent variations to derive device-specific unique identifiers, comparable to fingerprints in biometrics.
The potentials of this technology include the use of a true source of randomness, on demand key derivation, as well as inherent key storage.
Combining these potentials with the unique features of PE technology opens up new opportunities to bring security to lightweight electronic devices and systems. Although PE is still far from mature and from being as reliable as silicon technology, in this thesis we show that PE-based PUFs are promising candidates to provide key derivation suitable for device identification in the IoE.
This thesis is therefore primarily concerned with the development, investigation, and assessment of PE-based PUFs that provide security functionalities to resource-constrained printed devices and systems.
As the first contribution of this thesis, we introduce the scalable PE-based Differential Circuit PUF (DiffC-PUF) design to provide secure keys for security applications on resource-constrained printed devices. The DiffC-PUF is designed as a hybrid system architecture incorporating silicon-based and inkjet-printed components. We develop an embedded PUF platform to enable large-scale characterization of silicon and printed PUF cores.
In the second contribution of this thesis, we fabricate silicon PUF cores based on discrete components and perform statistical tests under realistic operating conditions. A comprehensive experimental analysis on the PUF security metrics is carried out. The results show that the silicon-based DiffC-PUF exhibits nearly ideal values for the uniqueness and reliability metrics. Furthermore, the identification capabilities of the DiffC-PUF are investigated and it is shown that additional post-processing can further improve the quality of the identification system.
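The uniqueness and reliability metrics referred to here are commonly computed from inter-device and intra-device Hamming distances over binary PUF responses. A minimal sketch on toy response data (illustrative numbers, not the thesis's measurements):

```python
import numpy as np

rng = np.random.default_rng(1)

def uniqueness(responses):
    """Mean pairwise inter-device Hamming distance (ideal value: 0.5)."""
    k = responses.shape[0]
    dists = [np.mean(responses[i] != responses[j])
             for i in range(k) for j in range(i + 1, k)]
    return float(np.mean(dists))

def reliability(reference, reeval):
    """1 minus the mean intra-device Hamming distance between a reference
    readout and a re-evaluation (ideal value: 1.0)."""
    return 1.0 - float(np.mean(reference != reeval))

# Toy data: 10 devices, 128-bit responses, re-evaluated with ~2% bit flips.
ref = rng.integers(0, 2, (10, 128))
noisy = ref ^ (rng.random((10, 128)) < 0.02)
```

For the toy data, `uniqueness(ref)` lands near the ideal 0.5 because the responses are independent, and `reliability(ref, noisy)` stays close to 1.0 because only a few bits flip between readouts; real PUF cores are evaluated the same way across devices and repeated measurements.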
In the third contribution of this thesis, we first introduce an evaluation workflow to simulate PE-based DiffC-PUFs, also called hybrid PUFs. For this purpose, we introduce a Python-based simulation environment to investigate the characteristics and variations of printed PUF cores based on Monte Carlo (MC) simulations. The simulation results show that the security metrics to be expected from the fabricated devices are close to ideal at the best operating point.
Second, we employ fabricated printed PUF cores for statistical tests under varying operating conditions, including variations in ambient temperature, relative humidity, and supply voltage. The evaluations of the uniqueness, bit aliasing, and uniformity metrics are in good agreement with the simulation results. The experimentally determined mean reliability value is relatively low, which can be explained by the missing passivation and encapsulation of the printed transistors. The investigation of the identification capabilities based on the raw PUF responses shows that the pure hybrid PUF is not suitable for cryptographic applications, but qualifies for device identification tasks.
The final contribution switches to the perspective of an attacker. To judge the security capabilities of the hybrid PUF, a comprehensive security analysis in the manner of a cryptanalysis is performed. The analysis of the entropy of the hybrid PUF shows that its vulnerability to model-based attacks mainly depends on the selected challenge building method. Furthermore, an attack methodology is introduced to assess the performance of different mathematical cloning attacks on the basis of eavesdropped challenge-response pairs (CRPs). To clone the hybrid PUF, a sorting algorithm is introduced and compared with commonly used supervised machine learning (ML) classifiers, including logistic regression (LR), random forest (RF), and multi-layer perceptron (MLP).
The results show that the hybrid PUF is vulnerable to model-based attacks. The sorting algorithm benefits from shorter training times compared to the ML algorithms. If the eavesdropped CRPs are erroneous, the ML algorithms outperform the sorting algorithm.
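The model-based attack setting can be illustrated on a toy linear-threshold PUF: an attacker who eavesdrops CRPs fits a model that predicts responses to unseen challenges. The from-scratch logistic-regression attacker and all parameters below are illustrative stand-ins, not the thesis's DiffC-PUF or its attack implementation:

```python
import numpy as np

rng = np.random.default_rng(2)
n_stages, n_crps = 16, 2000

# Toy linear-threshold PUF model (stand-in for a real delay-based PUF):
# the response is the sign of a weighted sum of the challenge bits.
w_true = rng.normal(size=n_stages)
C = rng.choice([-1.0, 1.0], size=(n_crps, n_stages))   # challenges
r = (C @ w_true > 0).astype(float)                      # responses

# Attacker: logistic regression trained by gradient descent on the
# first 1500 eavesdropped CRPs.
w = np.zeros(n_stages)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(C[:1500] @ w)))           # predicted probability
    w -= 0.1 * C[:1500].T @ (p - r[:1500]) / 1500       # NLL gradient step

# Cloning accuracy on the 500 held-out CRPs.
acc = np.mean(((C[1500:] @ w) > 0) == (r[1500:] > 0.5))
```

Because the toy PUF is a linear-threshold function, the learned model predicts held-out responses with high accuracy from a modest number of CRPs, which is exactly the vulnerability the entropy analysis above is probing.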
Diffracted waves carry high-resolution information that can help interpret fine structural details at a scale smaller than the seismic wavelength. Because of the low signal-to-noise ratio of diffracted waves, it is challenging to preserve them during processing and to identify them in the final data. The traditional approach is therefore to pick the diffractions manually. However, such a task is tedious and often prohibitive, so current attention is given to domain adaptation. These methods aim to transfer knowledge from a labeled domain to train the model and then infer on the real, unlabeled data. In this regard, it is common practice to create a synthetic labeled training dataset, followed by testing on unlabeled real data. Unfortunately, this procedure may fail due to the gap between the synthetic and the real distribution: synthetic data quite often oversimplifies the problem, and consequently the transfer learning becomes a hard and non-trivial procedure. Furthermore, deep neural networks are characterized by their high sensitivity to cross-domain distribution shift. In this work, we present a deep learning model that builds a bridge between both distributions, creating a semi-synthetic dataset that fills in the gap between the synthetic and real domains. More specifically, our proposal is a feed-forward, fully convolutional neural network for image-to-image translation that allows synthetic diffractions to be inserted while preserving the original reflection signal. A series of experiments validates that our approach produces convincing seismic data containing the desired synthetic diffractions.
In this work, we propose to solve privacy-preserving set relations evaluated by a third party in an outsourced configuration. We argue that solving the disjointness relation based on Bloom filters is a new contribution, in particular because it adds another layer of privacy on the sets' cardinality. We propose to compose the set relations in a slightly different way by applying a keyed hash function. Besides discussing the correctness of the set relations, we analyze how this impacts the privacy of the sets' content and how it provides privacy on the sets' cardinality. We are particularly interested in how overlapping bits in the Bloom filters impact the privacy level of our approach. Finally, we present our results with real-world parameters in two concrete scenarios.
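The building blocks (keyed-hash Bloom filters and a disjointness check on them) can be sketched in a few lines of standard-library Python. The filter size, hash count, and shared key below are illustrative choices, not the paper's parameters or its exact protocol:

```python
import hmac
import hashlib

M, K = 256, 4              # filter size in bits and number of hash functions
SECRET = b"shared-key"     # keyed hashing hides the raw element hashes

def positions(item, key=SECRET):
    """K bit positions for an item, derived via HMAC-SHA256 so that only
    holders of the key can recompute where an element maps."""
    return [int.from_bytes(
                hmac.new(key, f"{i}:{item}".encode(), hashlib.sha256).digest()[:4],
                "big") % M
            for i in range(K)]

def bloom(items):
    """Bloom filter of a set, represented as an M-bit integer."""
    bits = 0
    for it in items:
        for p in positions(it):
            bits |= 1 << p
    return bits

# Disjointness on the filters: a zero bitwise AND proves no common element;
# a non-zero AND may still be a false positive caused by overlapping bits.
a = bloom({"alice", "bob"})
b = bloom({"carol", "dave"})
c = bloom({"bob", "erin"})
```

Here `a & c` is guaranteed non-zero because both filters set all of "bob"'s positions, while `a & b` is zero unless unrelated elements happen to collide; those chance overlaps are exactly the false-positive behavior, and its effect on the privacy level, that the paper analyzes.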
This paper describes a comparative study of two tactile systems supporting navigation for persons with little or no visual and auditory perception. The efficacy of a tactile head-mounted device (HMD) was compared to that of a wearable device, a tactile belt. A study with twenty participants showed that the participants took significantly less time to complete a course when navigating with the HMD, as compared to the belt.
Additive manufacturing is a rapidly growing manufacturing process for which many new processes and materials are currently being developed. Its biggest advantage is that almost any shape can be produced, whereas conventional manufacturing methods reach their limits. Furthermore, a lot of material is saved, because the part is created in layers and only as much material is used as necessary. By contrast, in machining processes it is not uncommon for more than half of the material to be removed and disposed of. Recently, new additive manufacturing processes have come onto the market that enable the manufacturing of components using the FDM process with fiber reinforcement. This opens up new possibilities for optimizing components in terms of their strength while at the same time increasing sustainability by reducing material consumption and waste. Within the scope of this work, different types of test specimens are designed, manufactured, and examined. The test specimens are tensile specimens, which are used both for standardized tensile tests and for examining a practical component from automotive engineering used in a student project. This project is a vehicle designed to compete in the Shell Eco-marathon, one of the world's largest energy efficiency competitions. The aim is to design a vehicle that covers a certain distance with as little fuel as possible. Accordingly, it is desirable to manufacture the components with the lowest possible weight while still ensuring the required rigidity. To achieve this, the use of fiber-reinforced 3D-printed parts is particularly suitable due to their high rigidity. In particular, the joining technology for connecting conventionally and additively manufactured components is developed. Finally, the economic efficiency was assessed, and guidelines for the design of components and joining elements were created.
In addition, it could be shown that the additive manufacturing of the component could be implemented faster and more sustainably than the previous conventional manufacturing.
Propagation of acoustic waves is considered in a system consisting of two stiff quarter-spaces connected by a planar soft layer. The two quarter-spaces and the layer form a half-space with a planar surface. In a numerical study, surface waves have been found and analyzed in this system with displacements that are localized not only at the surface, but also in the soft layer. In addition to the semi-analytical finite element method, an alternative approach based on an expansion of the displacement field in a double series of Laguerre functions and Legendre polynomials has been applied.
It is shown that a number of branches of the mode spectrum can be interpreted and remarkably well described by perturbation theory, where the zero-order modes are the wedge waves guided at a rectangular edge of the stiff quarter-spaces or waves guided at the edge of a soft plate with rigid surfaces.
For elastic moduli and densities corresponding to the material combination PMMA–silicone–PMMA, at least one of the branches in the dispersion relation of surface waves trapped in the soft layer exhibits a zero-group velocity point.
Potential applications of these 1D guided surface waves in non-destructive evaluation are discussed.
Modern Franciscan Leadership
(2020)
This article combines two important areas of practical theology: monastic rules and leadership in a cloistral organisation, using the Rule of Saint Francis as a prominent example. The aim of this research is to examine how living Christian tradition in a monastic order affects leadership today, discovering how the Rule and Franciscan spirituality impact managing a convent. The research question is answered through inductive research applying the methodology of the 'theology in four voices.' Based on the results, it is possible to build a coherent leadership system based on Biblical and Franciscan sources.
Reaching customers through dialog marketing campaigns is becoming more and more difficult. This is a common problem of companies and marketing agencies worldwide: information overload, multi-channel communication, and a confusing variety of offers make it hard to gain the attention of the target group. The contribution of this paper is four-fold: we provide an overview of the current state of print dialog marketing activities and trends (I). Based on this corpus, we identify the main key performance indicators of dialog marketing customer interaction (II). A qualitative user experience study identifies the customers' wishes and needs, focusing on lottery offers for senior citizens (III). Finally, we evaluate the success of two different dialog marketing campaigns with 20,000 clients and compare the key performance indicators of the original hands-on, experience-based print mailings with user-experience-tested and optimized mailings (IV).
Within the Professional UX research project, Offenburg University, together with the software company Dr. Hornecker in Freiburg, is developing an innovative system solution that makes it possible to capture and interpret the emotions users experience while using interactive applications, based on facial expressions, voice, and gaze. The goal of the investigation is to identify indicators that allow perceived stimuli to be precisely matched to the emotions they trigger. As soon as negative emotions such as anger or uncertainty occur, they can be recorded, and the irritating stimulus can subsequently be eliminated. The project team has developed a first prototype of the Professional UX system solution in the form of hardware and software, which makes it possible to perform UX measurements during user interaction and to evaluate them automatically with the help of AI.
Analysis of Miniaturized Printed Flexible RFID/NFC Antennas Using Different Carrier Substrates
(2020)
Antennas for Radio Frequency Identification (RFID) provide benefits for high-frequency (HF) applications and wireless data transmission via Near Field Communication (NFC), among many other applications. Various requirements for the design of the reader and transmitter antennas must be met in order to achieve a suitable transmission quality. In this work, a miniaturized, cost-effective RFID/NFC antenna for a microelectronic measurement system is designed and printed on different flexible carrier substrates using a new and low-cost Direct Ink Writing (DIW) technology. Practical aspects such as reflection and impedance magnitude as well as the behavior of the printed RFID/NFC antennas are analyzed and compared to an identical copper-based antenna of the same size, and the results are presented in this paper. Furthermore, the problems during the printing process itself on the different substrates are evaluated. The effects of kink-free bending tests on the antenna characteristics are examined, and subsequently long-term measurements are carried out.
In the modern knowledge-based and digital economy, the value of knowledge is growing relative to other assets, and new intellectual property is being created at an ever-increasing rate. Therefore, the ability to find non-trivial solutions, systematically generate new concepts, and create intellectual property is rapidly becoming crucial to achieving competitive advantage and leveraging the intellectual potential of organizations.
"Nothing happens without risk, but without risk nothing happens," said former German Federal President Walter Scheel. The saying raises awareness that risks are inherent in almost all topics and processes, and that the actors should take a calculable risk in order to achieve significant progress even on complex topics. In our case, the actors are project managers and project team members who rarely take their own personal risks but must manage project risks professionally. The team members are seldom personally threatened by the projects; rather, the project or the company is, and accordingly the risks are often far greater than a single person can imagine or personally answer for.
Three papers on the 3-D CAD reconstruction of the first "iron hand" of the famous imperial knight Gottfried ("Götz") von Berlichingen (1480–1562) have so far been presented in the Archiv für Kriminologie. In the meantime, several new aspects have been worked out, which are briefly reported here as a supplement.
With economic weight shifting toward net zero, now is the time for ECAs, Exim-Banks, and PRIs to lead. Despite previous success, aligning global economic governance with climate goals requires additional activities across export finance and investment insurance institutions. The new research project initiated by Oxford University, ClimateWorks Foundation, and Mission 2020, together with other practitioners and academics from institutions such as Atradius DSB, Columbia University, EDC, FMO, and Offenburg University, focuses on reshaping future trade and investment governance in light of climate action. The idea of a ‘Berne Union Net Zero Club’ is an important item in a potential package of reforms. This can include realigning mandates and corporate strategies, principles of intervention, as well as ECA, Exim-Bank, and PRI operating models in order to accelerate the net zero transformation. Full transparency regarding Berne Union members’ activities would be an excellent starting point. We invite all interested parties in the sector to come together to chart our own path to net zero.
Generative convolutional deep neural networks, e.g. popular GAN architectures, rely on convolution-based up-sampling methods to produce non-scalar outputs like images or video sequences. In this paper, we show that common up-sampling methods, known as up-convolution or transposed convolution, cause an inability of such models to reproduce the spectral distributions of natural training data correctly. This effect is independent of the underlying architecture, and we show that it can be used to easily detect generated data like deepfakes with up to 100% accuracy on public benchmarks. To overcome this drawback of current generative models, we propose adding a novel spectral regularization term to the training optimization objective. We show that this approach not only allows training spectrally consistent GANs that avoid high-frequency errors, but also that a correct approximation of the frequency spectrum has positive effects on the training stability and output quality of generative networks.
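The spectral mismatch described above is typically made visible by azimuthally averaging the 2-D Fourier magnitude spectrum of an image into a 1-D radial profile, in which up-sampling artifacts appear as excess high-frequency energy. A minimal sketch of such a reduction (the function name and details are illustrative, not the paper's implementation):

```python
import numpy as np

def azimuthal_spectrum(img):
    """1-D power profile obtained by azimuthally averaging the
    2-D FFT magnitude of a grayscale image."""
    mag = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    h, w = img.shape
    y, x = np.indices((h, w))
    # integer radius of every pixel from the spectrum center
    r = np.hypot(y - h // 2, x - w // 2).astype(int)
    sums = np.bincount(r.ravel(), weights=mag.ravel())
    counts = np.maximum(np.bincount(r.ravel()), 1)  # guard empty bins
    return sums / counts
```

Comparing such profiles between real and generated images exposes the high-frequency deviations the paper targets.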
This paper presents a novel low-jitter interface between a low-cost integrated IEEE 802.11 chip and an FPGA. It is designed to be part of system hardware for ultra-precise synchronization between wireless stations. At the physical level, it uses the Wi-Fi chip's coexistence signal lines and UART frame encoding. On this basis, we propose an efficient communication protocol that provides precise timestamping of incoming frames and internal diagnostic mechanisms for detecting communication faults, while remaining simple enough to be implemented both in a low-cost FPGA and in commodity IEEE 802.11 chip firmware. The results of computer simulation show that the developed FPGA implementation of the proposed protocol can precisely timestamp incoming frames as well as detect most communication errors, even under high interference. The probability of undetected errors was investigated. The results of this analysis are significant for the development of novel wireless synchronization hardware.
Method for controlling a device, in particular, a prosthetic hand or a robotic arm (US20200327705A1)
(2020)
A method for controlling a device, in particular a prosthetic hand or a robotic arm, includes using an operator-mounted camera to detect at least one marker positioned on or in relation to the device. Starting from the detection of the at least one marker, a predefined movement of the operator together with the camera is detected and is used to trigger a corresponding action of the device. The predefined movement of the operator is detected in the form of a line of sight by means of camera tracking. A system for controlling a device, in particular a prosthetic hand or a robotic arm, includes a pair of AR glasses adapted to detect the at least one marker and to detect the predefined movement of the operator.
The recent successes and widespread application of compute-intensive machine learning and data analytics methods have been boosting the usage of the Python programming language on HPC systems. While Python provides many advantages for users, it was not designed with a focus on multi-user environments or parallel programming, making it quite challenging to maintain stable and secure Python workflows on an HPC system. In this paper, we analyze the key problems induced by the usage of Python on HPC clusters and sketch appropriate workarounds for efficiently maintaining multi-user Python software environments, securing and restricting the resources of Python jobs, and containing Python processes, focusing on Deep Learning applications running on GPU clusters.
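One of the sketched workarounds, restricting the resources of a Python job, can be illustrated with POSIX resource limits applied to a child process. The concrete limits and the helper name here are illustrative assumptions, not the paper's tooling:

```python
import resource
import subprocess
import sys

def run_limited(cmd, cpu_seconds=60, mem_bytes=2 * 1024**3):
    """Run a command with hard CPU-time and address-space limits
    (POSIX only), so a runaway Python job cannot monopolize a
    shared HPC node."""
    def set_limits():
        # applied in the child between fork() and exec()
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
        resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))
    return subprocess.run(cmd, preexec_fn=set_limits,
                          capture_output=True, text=True)

result = run_limited([sys.executable, "-c", "print('ok')"])
```

In practice, batch schedulers such as Slurm enforce similar limits via cgroups; the sketch only shows the underlying mechanism.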
Oesophageal Electrode Probe and Device for Cardiological Treatment and/or Diagnosis (US20200261024)
(2020)
An oesophageal electrode probe for bioimpedance measurement and/or for neurostimulation is provided; a device for transoesophageal cardiological treatment and/or cardiological diagnosis is also provided; a method for the open-loop or closed-loop control of a cardiological catheter ablation device and/or a cardiological, circulatory and/or respiratory support device is also provided. The oesophageal electrode probe comprises a bioimpedance measuring device for measuring the bioimpedance of at least one part of tissue surrounding the oesophageal electrode probe. The bioimpedance device comprises at least one first and one second electrode. The at least one first electrode is arranged on a side of the oesophageal electrode probe facing towards the heart. The at least one second electrode is arranged on a side of the oesophageal electrode probe facing away from the heart. The device comprises the oesophageal electrode probe and a control and/or evaluation device.
A disturbed synchronization of the ventricular contraction can cause advanced systolic heart failure with reduction of the left ventricular ejection fraction in affected patients, which can often be attributed to left bundle branch block (LBBB). If the condition does not respond to medication, the affected patients are treated with a cardiac resynchronization therapy (CRT) system. The aim of this study was to integrate His-bundle pacing into the Offenburg heart rhythm model in order to visualize the electrical pacing field generated by His-bundle pacing. Modelling and electrical field simulation were performed with the software CST (Computer Simulation Technology) from Dassault Systèmes. CRT with biventricular pacing is achieved by an apical right ventricular electrode and an additional left ventricular electrode, which is floated into the coronary sinus. The non-responder rate of CRT is about one third of CRT patients. His-bundle pacing represents a physiological alternative to conventional cardiac pacing and cardiac resynchronization. An electrode implanted in the His bundle emits a stronger electrical pacing field than that of conventional cardiac pacemakers. His-bundle pacing was performed with the Medtronic SelectSecure 3830 electrode at pacing voltage amplitudes of 3 V, 2 V, and 1.5 V in combination with a pacing pulse duration of 1 ms. Compared to conventional pacemaker pacing, His-bundle pacing is capable of bridging LBBB conduction disorders in the left ventricle. The His-bundle pacing electrical field can spread via the physiological pathway in the right and left ventricles, enabling CRT with a narrow QRS complex in the surface ECG.
OVVL (the Open Weakness and Vulnerability Modeller) is a tool and methodology supporting threat modeling in the early stages of the secure software development lifecycle. We provide an overview of OVVL (https://ovvl.org), its data model, and its browser-based UI. We also discuss initial experiments on how threats identified in the design phase can be aligned with later activities in the software lifecycle (issue management and security testing).
The development of Internet of Things (IoT) embedded devices is proliferating, especially in smart home automation systems. However, these devices impose overhead on the IoT network. The Internet Engineering Task Force (IETF) has therefore introduced IPv6 over Low-Power Wireless Personal Area Networks (6LoWPAN) to address this constraint. 6LoWPAN is an Internet Protocol (IP) based communication standard that allows each device to connect to the Internet directly; as a result, power consumption is reduced. However, the limited data transmission frame size of the IPv6 Routing Protocol for Low-power and Lossy Networks (RPL) causes routing overhead and consequently degrades network performance in terms of Quality of Service (QoS), especially in large networks. Therefore, HRPL was developed as an enhancement of RPL to minimize the redundant retransmissions that cause routing overhead. We introduced the T-Cut Off Delay to set the limit of the delay, and the H field to respond to actions taken within the T-Cut Off Delay. This paper presents a comparative performance assessment of HRPL between simulation and a real-world scenario (the 6LoWPAN Smart Home System (6LoSH) testbed) to validate the HRPL functionalities. Our results show that HRPL successfully reduces the routing overhead when implemented in 6LoSH: the observed Control Traffic Overhead (CTO) packet difference between the experiments is 7.1%, and the convergence time difference is 9.3%. Further research is recommended for these metrics: latency, Packet Delivery Ratio (PDR), and throughput.
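The interplay of the T-Cut Off Delay and the H field can be pictured as a simple retransmission guard. This is a hypothetical reading of the mechanism with an illustrative cut-off value, not the authors' implementation:

```python
T_CUT_OFF = 0.5  # cut-off delay in seconds; illustrative value only

def should_retransmit(sent_at, acked, h_flag, now):
    """Suppress redundant retransmissions: retransmit only while the
    packet is unacknowledged, the H field requests action, and the
    cut-off delay since transmission has not yet expired."""
    if acked or not h_flag:
        return False
    return (now - sent_at) < T_CUT_OFF
```

Dropping retransmissions once the cut-off expires is what bounds the control traffic overhead in the sketch above.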
The authentication of electronic devices based on the individual shapes of the correlograms of their internal electric noise is a well-known method. Specific physical differences in the components – caused, for example, by variations in production quality – produce specific electrical signals, i.e. electric noise, in the electronic device. This information can be obtained, and the specific differences of individual devices identified, using an embedded analog-to-digital converter (ADC). These investigations confirm the possibility of identifying and authenticating electronic devices using bit templates calculated from the sequence of values of the normalized autocorrelation function of the noise. Experiments were performed using personal computers. The probability of correct identification and authentication increases with increasing noise recording duration. As a result of these experiments, an accuracy of 98.1% was achieved for a 1-second-long registration of EM for the set of investigated computers.
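The bit-template idea can be sketched as follows: compute the normalized autocorrelation of a noise recording, threshold it into a bit string, and compare templates by Hamming distance. The lag count and thresholding rule here are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def noise_template(x, n_lags=64):
    """Bit template derived from the normalized autocorrelation
    of a noise recording x."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    r = np.array([np.dot(x[:len(x) - k], x[k:]) for k in range(1, n_lags + 1)])
    r /= np.dot(x, x)                      # normalize by zero-lag energy
    return (r > np.median(r)).astype(np.uint8)

def hamming_distance(a, b):
    """Number of differing bits between two templates."""
    return int(np.sum(a != b))
```

A device is authenticated when the Hamming distance between a fresh template and the enrolled one stays below a decision threshold.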
Time-Sensitive Networking (TSN) provides mechanisms to enable deterministic, real-time networking in industrial networks. The configuration of these mechanisms is key to fully deploying and integrating TSN into such networks. The IEEE 802.1Qcc standard proposes different configuration models for implementing a TSN configuration. Until now, TSN and its configuration have been explored mostly for Ethernet-based industrial networks; for wireless networks, they are still considered work in progress. This work focuses on the fully centralized model and describes a generic concept to enable the configuration of TSN mechanisms in wireless industrial networks. To this end, a configuration entity is implemented to configure the wireless end stations so as to satisfy their requirements. The proposed solution is then validated with the Digital Enhanced Cordless Telecommunications ultra-low energy (DECT ULE) wireless communication protocol.
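The fully centralized model can be pictured as a configuration entity that collects per-stream requirements and computes per-station settings. The following toy scheduler is purely illustrative; the field names, link rate, and slot policy are assumptions, not IEEE 802.1Qcc specifics:

```python
from dataclasses import dataclass

@dataclass
class StreamRequirement:
    stream_id: str
    period_us: int        # traffic period in microseconds
    max_latency_us: int   # latency bound requested by the station
    frame_bytes: int      # frame size on the wire

def build_schedule(reqs):
    """Toy centralized configuration entity: assign each stream a
    non-overlapping transmission offset inside a common cycle,
    tightest latency bound first (assumes ~100 Mbit/s wire rate)."""
    cycle_us = max(r.period_us for r in reqs)
    offset, config = 0, {}
    for r in sorted(reqs, key=lambda r: r.max_latency_us):
        slot_us = max(1, r.frame_bytes * 8 // 100)  # wire time at 100 Mbit/s
        config[r.stream_id] = {"offset_us": offset, "slot_us": slot_us}
        offset += slot_us
    assert offset <= cycle_us, "cycle overloaded"
    return config
```

A real configuration entity would additionally distribute these settings to the end stations over a management protocol.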
Analysis of Amplitude and Phase Errors in Digital-Beamforming Radars for Automotive Applications
(2020)
Fundamentally, automotive radar sensors with Digital Beamforming (DBF) use several transmitter and receiver antennas to measure the direction of a target. However, hardware imperfections, tolerances in the antenna feeding lines, coupling effects, as well as temperature changes and ageing, cause amplitude and phase errors. These errors can lead to misinterpretation of the data and result in hazardous actions by the autonomous system. First, the impact of amplitude and phase errors on angular estimation is discussed and analyzed by simulations. The results are compared with the measured errors of a real radar sensor. Furthermore, a calibration method is implemented and evaluated by measurements.
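The effect of phase errors on angular estimation can be reproduced with a small simulation: a uniform linear array steering model, a digital-beamforming spectrum, and an argmax angle estimate. Array size, spacing, error magnitude, and angles are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def steering(angles_deg, n_elems):
    """Steering vectors of a half-wavelength-spaced uniform linear array."""
    ang = np.deg2rad(np.asarray(angles_deg, dtype=float))
    return np.exp(1j * np.pi * np.outer(np.sin(ang), np.arange(n_elems)))

def estimate_angle(snapshot, grid_deg, n_elems):
    """Digital-beamforming estimate: scan the grid, pick the strongest angle."""
    spectrum = np.abs(steering(grid_deg, n_elems).conj() @ snapshot)
    return grid_deg[int(np.argmax(spectrum))]

grid = np.arange(-90, 91)            # 1-degree scan grid
clean = steering([20.0], 8)[0]       # target at +20 deg, 8 antennas
rng = np.random.default_rng(3)
distorted = clean * np.exp(1j * rng.normal(0.0, 0.3, size=8))  # phase errors
```

Without errors the estimate is exact on the grid; per-channel phase errors broaden the beam and can bias the estimated angle, which is what calibration corrects.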
A Gamified and Adaptive Learning System for Neurodivergent Workers in Electronic Assembling Tasks
(2020)
Learning and work-oriented assistive systems are often designed to fit the workflow of neurotypical workers. Neurodivergent workers and individuals with learning disabilities often present cognitive and sensorimotor characteristics that are better accommodated with personalized learning and working processes. Therefore, we designed an adaptive learning system that combines an augmented interaction space with user-sensitive virtual assistance to support step-by-step guidance for neurodivergent workers in electronic assembling tasks. Gamified learning elements were also included in the interface to provide self-motivation and praise whenever users progress in their learning and work achievements.
The interaction between agents in multiagent-based control systems requires peer-to-peer communication between agents, avoiding central control. The sensor nodes represent agents and produce measurement data at every time step. The nodes exchange time series data over the peer-to-peer network in order to compute an aggregation function and thereby solve a problem cooperatively. We investigate the aggregation process of averaging time series data of nodes in a peer-to-peer network using the grouping algorithm of Cichon et al. (2018). Nodes communicate whether their data is new and map data values according to their magnitude into a histogram. This map message consists of the subintervals and of vectors for estimating nodes joining and leaving a subinterval. At each time step, the nodes communicate with each other in synchronous rounds, exchanging map messages until the network converges to a common map message. Each node then calculates the average value of the time series data produced by all nodes in the network using the histogram algorithm. The relative error between the output of averaging the time series data and the ground-truth average value in the network decreases as the size of the network increases. We perform simulations which show that the approximate histogram method provides a reasonable approximation of the time series data.
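The flavor of such decentralized averaging can be illustrated with the classic pairwise gossip scheme, in which two random peers repeatedly replace their values with their mutual average. This is a generic sketch of gossip-based aggregation, not the histogram-based algorithm of Cichon et al.:

```python
import random

def gossip_average(values, rounds=500, seed=42):
    """Pairwise gossip averaging: each round, two random peers adopt
    the mean of their two values. The global mean is preserved, and
    all values converge toward it."""
    vals = list(values)
    rng = random.Random(seed)
    n = len(vals)
    for _ in range(rounds):
        i, j = rng.sample(range(n), 2)
        m = (vals[i] + vals[j]) / 2.0
        vals[i] = vals[j] = m
    return vals
```

After enough rounds, every node holds an approximation of the network-wide average without any central coordinator.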
With the increasing degree of interconnectivity in industrial factories, security is increasingly becoming the most important stepping stone towards wide adoption of the Industrial Internet of Things (IIoT). This paper summarizes the most important aspects of a keynote given at the DESSERT 2020 conference. It highlights ongoing and open research activities at different levels, from novel cryptographic algorithms over security protocol integration and testing to security architectures for the full lifetime of devices and systems, and includes an overview of the research activities at the authors' institute.