Refine
Year of publication
Document Type
- Conference Proceeding (1184)
- Article (reviewed) (679)
- Article (unreviewed) (566)
- Part of a Book (460)
- Contribution to a Periodical (287)
- Book (227)
- Other (139)
- Working Paper (105)
- Patent (98)
- Report (76)
Conference Type
- Conference article (945)
- Conference abstract (156)
- Other (42)
- Conference poster (32)
- Conference proceedings (13)
Language
- German (2071)
- English (1857)
- Other language (5)
- Russian (3)
- Multiple languages (2)
- French (1)
- Spanish (1)
Is part of the Bibliography
- yes (3940)
Keywords
- Digitalization (41)
- RoboCup (32)
- Thin-layer chromatography (28)
- Social Media (24)
- COVID-19 (23)
- Communication (23)
- Employment reference (22)
- Energy supply (22)
- E-Learning (21)
- Export (21)
Institute
- Fakultät Maschinenbau und Verfahrenstechnik (M+V) (945)
- Fakultät Medien und Informationswesen (M+I) (bis 21.04.2021) (808)
- Fakultät Elektrotechnik und Informationstechnik (E+I) (bis 03/2019) (779)
- Fakultät Wirtschaft (W) (617)
- Fakultät Elektrotechnik, Medizintechnik und Informatik (EMI) (ab 04/2019) (464)
- INES - Institut für nachhaltige Energiesysteme (239)
- Fakultät Medien (M) (ab 22.04.2021) (219)
- ivESK - Institut für verlässliche Embedded Systems und Kommunikationselektronik (155)
- Zentrale Einrichtungen (81)
- IMLA - Institute for Machine Learning and Analytics (79)
Open Access
- Open Access (1464)
- Closed Access (1245)
- Closed (528)
- Bronze (285)
- Diamond (76)
- Gold (76)
- Hybrid (50)
- Green (16)
Electric heat pumps are a key technology for climate-friendly buildings. In multi-family buildings, their use is still a challenge and correspondingly uncommon. Within the joint project "HEAVEN", researchers have now developed a multi-source heat pump system adapted to the requirements of larger residential buildings. It was tested in a building in Karlsruhe as part of the joint project "Smartes Quartier Durlach". Data on the first year of operation are now available.
These metadata were provided by the literature database RSWB®plus
On 1 July 2022, around 60 participants from research, teaching, and industry met for an international conference at Hochschule Offenburg as part of the closing colloquium of the ACA-Modes project. The project results on the successful implementation of model-predictive control strategies were presented, current research questions were discussed, and development paths toward grid-supportive operation of combined energy systems were outlined.
Heat pumps play a central role in decarbonizing the heat supply of buildings. However, implementing heat pumps in existing buildings remains a significant challenge due to high temperature requirements. In this article, a systematic analysis of the effects of heat source temperatures, maximum heat pump condenser temperatures, and system temperatures on the seasonal performance of heat pump (HP) systems is presented. The quantitative performance analysis encompasses over 50 heat pumps installed in residential buildings, revealing correlations between the building characteristics, observed temperatures, and heat pump type. The performance of an HP system retrofitted to a 30-dwelling multifamily building is presented in more detail. The bivalent HP system combines air and ground as heat sources and achieves a seasonal performance factor of 3.25 with a gas boiler share of 27% in its first year of operation. These findings demonstrate the technical feasibility of retrofitting heat pumps in existing buildings and provide insights into overcoming the challenges associated with high temperature requirements.
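The reported seasonal performance factor (SPF) is the ratio of heat delivered to electricity consumed over a year; a minimal sketch of the arithmetic, using the reported SPF of 3.25 and the 27% gas boiler share (the total heat demand figure is hypothetical):

```python
def seasonal_performance_factor(heat_delivered_kwh, electricity_consumed_kwh):
    """SPF = useful heat delivered divided by electrical energy consumed."""
    return heat_delivered_kwh / electricity_consumed_kwh

# Hypothetical annual figures for a bivalent system like the one described:
# the gas boiler covers 27% of the heat demand, the heat pump the rest.
total_heat_kwh = 100_000
hp_heat_kwh = total_heat_kwh * (1 - 0.27)
hp_electricity_kwh = hp_heat_kwh / 3.25   # implied by the reported SPF

spf = seasonal_performance_factor(hp_heat_kwh, hp_electricity_kwh)
```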
Heat pumps are a key technology of the heat transition. By harnessing ambient heat and running on electricity that is increasingly generated from renewable sources, the CO2 intensity of heat supply can be reduced. One challenge lies in their application in larger existing multi-family buildings. Solution approaches and exemplary implementations are presented.
Lithium-ion batteries exhibit slow voltage dynamics on the minute time scale that are usually associated with transport processes. We present a novel modelling approach to these dynamics by combining physical and data-driven models into a grey-box model. We use neural networks, in particular neural ordinary differential equations. The physical structure of the grey-box model is borrowed from the Fickian diffusion law, where the transport domain is discretized using finite volumes. Within this physical structure, unknown parameters (diffusion coefficient, diffusion length, discretization) and dependencies (state of charge, lithium concentration) are replaced by neural networks and learnable parameters. We perform model-to-model comparisons, using as training data (a) a Fickian diffusion process, (b) a Warburg element, and (c) a resistor-capacitor circuit. Voltage dynamics during constant-current operation and pulse tests as well as electrochemical impedance spectra are simulated. The slow dynamics of all three physical models, on the order of 10 to 30 min, are well captured by the grey-box model, demonstrating the flexibility of the present approach.
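The physical backbone described above, Fick's diffusion law discretized with finite volumes, can be sketched as a plain explicit time-stepper; in the grey-box model the diffusion coefficient and its dependencies would be replaced by neural networks, and all parameter values below are purely illustrative:

```python
import numpy as np

def diffusion_step(c, D, dx, dt):
    """One explicit finite-volume update of Fick's second law,
    dc/dt = D * d2c/dx2, with zero-flux boundary conditions."""
    flux = -D * np.diff(c) / dx      # flux between neighboring volumes
    dcdt = np.zeros_like(c)
    dcdt[:-1] -= flux / dx           # flux leaving each cell to the right
    dcdt[1:] += flux / dx            # flux entering each cell from the left
    return c + dt * dcdt

# Illustrative setup: all lithium initially in the first finite volume.
c = np.zeros(10)
c[0] = 1.0
for _ in range(200):
    c = diffusion_step(c, D=1e-2, dx=0.1, dt=0.1)
```

Note that the explicit scheme is only stable for `dt * D / dx**2 <= 0.5`; the values above satisfy this, and total concentration is conserved by construction.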
A novel peptidyl-Lys metalloendopeptidase (Tc-LysN) from Trametes coccinea was recombinantly expressed in Komagataella phaffii using the native pro-protein sequence. The peptidase was secreted into the culture broth as zymogen (~38 kDa) and mature enzyme (~19.8 kDa) simultaneously. The mature Tc-LysN was purified to homogeneity with single-step anion-exchange chromatography at pH 7.2. N-terminal sequencing using TMTpro Zero and mass spectrometry of the mature Tc-LysN indicated that the pro-peptide was cleaved between amino acid positions 184 and 185 at the Kex2 cleavage site present in the native pro-protein sequence. The pH optimum of Tc-LysN was determined to be 5.0, while it maintained ≥60% activity between pH 4.5 and 7.5 and ≥30% activity between pH 8.5 and 10.0, indicating its broad applicability. The temperature maximum of Tc-LysN was determined to be 60 °C. After 18 h of incubation at 80 °C, Tc-LysN still retained ~20% activity. Organic solvents such as methanol and acetonitrile, at concentrations as high as 40% (v/v), were found to enhance Tc-LysN's activity by up to ~100% and ~50%, respectively. Tc-LysN's thermostability, its ability to withstand up to 8 M urea, its tolerance to high concentrations of organic solvents, and its acidic pH optimum make it a viable candidate for proteomics workflows in which alkaline conditions might pose a challenge. Nano-LC-MS/MS analysis revealed a bovine serum albumin (BSA) sequence coverage of 84% using Tc-LysN, comparable to the 90% sequence coverage achieved with trypsin peptides.
Currently, immersive technologies are enjoying great popularity. This trend is reflected in technological advances and the emergence of new products for the mass market, such as augmented reality glasses. The range of applications for immersive technologies is growing with more efficient and affordable technologies and student adoption. Especially in education, their use can improve existing learning methods. Immersive applications use visual, audio, and haptic channels to fully engage the user in a virtual environment. This impression is reinforced by realistic visualizations and the opportunity for interaction. Augmented reality in particular is characterized by a high degree of integration between reality and the inserted virtual objects. An augmented interactive simulation for determining the specific charge of the electron is used as an example to demonstrate how such immersion can be created for users. A virtual Helmholtz coil is used to measure and calculate the e/m constant. The voltage at the cathode for generating the electron beam, as well as the voltage for the homogeneous magnetic field deflecting the electron beam, can be variably controlled by haptic user input. Based on these voltages, an immersive virtual electron beam is calculated and visualized. In this paper, the authors present the conceptual steps of this immersive application and address the challenges associated with designing and developing an augmented and interactive simulation.
Redesigning a curriculum for teaching media technology is a major challenge. Up-to-date teaching and learning concepts are necessary that keep pace with constant technological progress and prepare students specifically for their professional life. Teaching and studying should be characterized by a student-oriented teaching and learning culture. In order to achieve this goal, consistent evaluation is essential. The aim of the evaluation concept presented here is to generate structured information regarding the quality of content-related, didactic, and organizational aspects of teaching. The exchange of opinions between students and lecturers should be encouraged in order to continuously improve the teaching and learning processes.
The paper focuses on the activities of the International Year of Light and Optical Technologies 2015 (IYL) and their impact on life, science, art, culture, education, and outreach, as well as their importance in promoting the objectives of sustainable development. It describes our activities carried out in the run-up to and during the IYL, and reports on the generic projects that led to the success of the IYL. This success is illustrated by examples and statistics. Building on the potential and success of the IYL, the genesis and impact of the International Day of Light (IDL) are presented. Impressions from the opening ceremony of the IYL at UNESCO headquarters in Paris and the inaugural ceremony of the IDL are then covered. A second focus is placed on the interdisciplinary media projects dedicated to these events and realized by the students of our university. Finally, an analysis of the impact and legacy of the IYL and IDL is presented.
In recent times, 5G has found applications in several public as well as private networks. There is a growing need to make it compatible with diverse services without compromising security. The current options for authenticating devices into a home network are 5G Authentication and Key Agreement (5G-AKA) and Extensible Authentication Protocol (EAP)-AKA'. However, for specific use cases such as private networks, more customizable and convenient authentication mechanisms are required. Current mobile networks authenticate based only on SIM cards, but as 5G is applied in fields like IIoT and automation, even in Non-Public Networks (NPNs), a simpler method of authentication is needed. Certificate-based authentication is one such mechanism: it is passwordless and works solely on the information present in the digital certificate that the user holds. The paper suggests an authentication mechanism that performs certificate-based mutual authentication between the UE and the home network. The proposed concept identifies both the user and the network with digital certificates and intends to carry out primary authentication with their help. In this work, we study presently available authentication protocols for 5G networks, both theoretically and experimentally, in hardware as well as virtual environments. On the basis of this analysis, a series of proposed steps for certificate-based primary authentication is presented.
The Transport Layer Security (TLS) protocol is a widespread cryptographic protocol designed to provide secure communication over insecure networks by providing authenticity, integrity, and confidentiality. As a first step, a common master secret is negotiated in the TLS Handshake Protocol. In many configurations, this step makes considerable use of asymmetric cryptographic algorithms. It seems to be a prevalent assumption that the use of such asymmetric cryptographic algorithms is unsuitable for resource-constrained devices. Therefore, the work at hand analyzes the runtime performance of TLS v1.2 session establishment on an embedded ARM Cortex-M4 platform. We measure the execution time to generate and parse session establishment messages for the client and server sides. In particular, we study the impact of different elliptic curves used for the ephemeral Diffie-Hellman key exchange and the impact of different lengths and subject public key algorithms of certification paths. Our analysis shows that the use of asymmetric cryptographic algorithms is entirely feasible on resource-constrained devices if they are carefully chosen and well implemented. This allows the well-proven TLS protocol to be used for applications from the (Industrial) Internet of Things, including fieldbus communication.
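The measurement methodology, timing the individual key-exchange operations on each side, can be illustrated with a toy finite-field Diffie-Hellman exchange in pure Python; the paper measures elliptic-curve variants on a Cortex-M4, so everything below (prime, generator, timing harness) is an illustrative stand-in:

```python
import secrets
import time

# Toy parameters: a Mersenne prime keeps the sketch fast. Real TLS uses
# standardized elliptic curves or large MODP groups, not these values.
P = (1 << 127) - 1
G = 3

def timed(fn):
    """Return (result, elapsed seconds) for a single operation."""
    t0 = time.perf_counter()
    result = fn()
    return result, time.perf_counter() - t0

a = secrets.randbelow(P - 2) + 1            # client ephemeral secret
b = secrets.randbelow(P - 2) + 1            # server ephemeral secret
A, t_client = timed(lambda: pow(G, a, P))   # client key-share generation
B, t_server = timed(lambda: pow(G, b, P))   # server key-share generation
s_client, _ = timed(lambda: pow(B, a, P))   # client computes shared secret
s_server, _ = timed(lambda: pow(A, b, P))   # server computes shared secret
```

Both sides arrive at the same shared secret, and the per-operation timings are exactly the kind of quantity the paper records per curve and certificate configuration.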
Physical unclonable functions (PUFs) are increasingly generating attention in the field of hardware-based security for the Internet of Things (IoT). A PUF, as its name implies, is a physical element with a special and unique inherent characteristic and can act as the security anchor for authentication and cryptographic applications. Keeping in mind that PUF outputs are prone to change in the presence of noise and environmental variations, it is critical to derive reliable keys from the PUF while using the maximum entropy at the same time. In this work, the PUF output positioning (POP) method is proposed, a novel method for grouping PUF outputs in order to maximize the extracted entropy. To achieve this, offset data are introduced as helper data, which relax the constraints on the grouping of PUF outputs, allowing more entropy to be derived while reducing the number of secret key error bits. To implement the method, the key enrollment and key generation algorithms are presented. Based on a theoretical analysis of the achieved entropy, it is proven that POP can maximize the achieved entropy while respecting the constraints induced to guarantee the reliability of the secret key. Moreover, a detailed security analysis is presented, which shows the resilience of the method against cyber-security attacks. The findings of this work are evaluated by applying the method to a hybrid printed PUF, where it is practically shown that the proposed method outperforms other existing group-based PUF key generation methods.
Frequently occurring short-term orders for manufactured products require high machine availability. This requirement increases the importance of predictive maintenance solutions for bearings used in machines. Among these are hybrid solutions that rely on a physical model. For their usage, knowing the different degradation stages of bearings is essential. This research analyzes the underlying failure mechanisms of these stages, both theoretically and in a practical example based on the well-known FEMTO dataset used for the IEEE PHM 2012 Data Challenge, to provide this knowledge. In addition, it shows for which use cases low-frequency accelerometers are sufficient. The analysis shows that the degradation stages toward the end of the bearing life can also be detected with low-frequency accelerometers. Further, the importance of high-frequency accelerometers for detecting bearing faults in early degradation stages is pointed out. Industry and research have paid little attention to these aspects until now, despite their considerable cost-saving potential.
As cyber-attacks and functional safety requirements increase in Operational Technology (OT), implementing security measures becomes crucial. The IEC/IEEE 60802 draft standard addresses security convergence in Time-Sensitive Networks (TSN) for industrial automation. We present the standard's security architecture and its goals to establish end-to-end security with resource access authorization in OT systems. We compare the standard to our abstract, technology-independent model for the management of cryptographic credentials during the lifecycles of OT systems. Additionally, we implemented the processes, mechanisms, and protocols needed for IEC/IEEE 60802 and extended the architecture with public key infrastructure (PKI) functionalities to support complete security management processes.
The automatic processing of handwritten forms remains a challenging task, wherein detection and subsequent classification of handwritten characters are essential steps. We describe a novel approach in which both steps - detection and classification - are executed as one task by a deep neural network. For this, training data is not annotated by hand but generated artificially from the underlying forms and existing datasets. We demonstrate that this single-task approach is superior to the state-of-the-art two-task approach. The current study focuses on handwritten Latin letters and employs the EMNIST dataset. However, limitations were identified with this dataset, necessitating further customization. Finally, an overall recognition rate of 88.28% was attained on real data obtained from a written exam.
Training deep neural networks using backpropagation is very memory- and computationally intensive. This makes it difficult to run on-device learning or fine-tune neural networks on tiny, embedded devices such as low-power microcontroller units (MCUs). Sparse backpropagation algorithms try to reduce the computational load of on-device learning by training only a subset of the weights and biases. Existing approaches train a static number of weights. A poor choice of this so-called backpropagation ratio limits either the computational gain or can lead to severe accuracy losses. In this paper we present TinyProp, the first sparse backpropagation method that dynamically adapts the backpropagation ratio during on-device training for each training step. TinyProp induces a small calculation overhead to sort the elements of the gradient, which does not significantly impact the computational gains. TinyProp works particularly well for fine-tuning trained networks on MCUs, which is a typical use case for embedded applications. For three typical datasets, MNIST, DCASE2020, and CIFAR10, we are 5 times faster compared to non-sparse training, with an average accuracy loss of 1%. On average, TinyProp is 2.9 times faster than existing static sparse backpropagation algorithms, and the accuracy loss is reduced on average by 6% compared to a typical static setting of the backpropagation ratio.
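The core idea, updating only the largest gradient entries with a fraction chosen per step, can be sketched in NumPy; the adaptation rule below is a simplified stand-in for illustration, not TinyProp's actual heuristic:

```python
import numpy as np

def sparse_update(weights, grad, ratio, lr=0.01):
    """Apply a gradient step to only the top `ratio` fraction of entries,
    selected by gradient magnitude (the sorting step the method pays for)."""
    k = max(1, int(ratio * grad.size))
    idx = np.argsort(np.abs(grad).ravel())[-k:]   # largest-gradient indices
    flat = weights.ravel().copy()
    flat[idx] -= lr * grad.ravel()[idx]
    return flat.reshape(weights.shape)

def adaptive_ratio(grad, base=0.1):
    """Toy per-step rule: train more entries when gradient mass is spread
    out, fewer when it is concentrated in a few entries."""
    mags = np.abs(grad).ravel()
    concentration = mags.max() / (mags.sum() + 1e-12)
    return float(np.clip(base / (base + concentration), 0.05, 1.0))

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 8))
g = rng.normal(size=(8, 8))
w_new = sparse_update(w, g, adaptive_ratio(g))
```

Only the selected subset of weights changes in each step, which is where the computational saving on an MCU comes from.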
This paper presents a system that uses a multi-stage AI analysis method based on machine learning to determine the condition and status of bicycle paths. The approach includes three stages of analysis: detection of the road surface, investigation of the condition of the bicycle paths, and identification of substrate characteristics. In this study, we focus on the first stage. The approach employs a low-threshold data collection method using smartphone-generated video data for image recognition in order to automatically capture and classify the surface condition and status.
For the analysis, convolutional neural networks (CNNs) are employed. CNNs have proven effective in image recognition tasks and are particularly well-suited for analyzing the surface condition of bicycle paths, as they can identify patterns and features in images. By training the CNN on a large dataset of images with known surface conditions, the network can learn to identify common features and patterns and reliably classify them.
The results of the analysis are then displayed on digital maps and can be utilized in areas such as bicycle logistics, route planning, and maintenance. This can improve safety and comfort for cyclists while promoting cycling as a mode of transportation. It can also assist authorities in maintaining and optimizing bicycle paths, leading to a more sustainable and efficient transportation system.
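Why convolutions suit this task can be seen from how a single filter responds to local texture: even a hand-crafted edge filter separates a uniform patch from one with an abrupt brightness change. The NumPy sketch below is only an illustration of that building block; the described system learns its filters by training a CNN:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation, the building block of a CNN layer."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge filter responds where a smooth surface meets a crack.
edge_filter = np.array([[1.0, 0.0, -1.0],
                        [1.0, 0.0, -1.0],
                        [1.0, 0.0, -1.0]])

smooth = np.ones((8, 8))          # uniform surface patch
cracked = np.ones((8, 8))
cracked[:, 4:] = 0.0              # abrupt brightness change, like a crack edge

response_smooth = np.abs(conv2d(smooth, edge_filter)).max()
response_cracked = np.abs(conv2d(cracked, edge_filter)).max()
```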
In this paper we present the concept of the "KI-Labor Südbaden" to support regional companies in the use of AI technologies. The approach is based on the "Periodic Table of AI" and is extended with new dimensions for sustainability and for the impact of AI on the working environment. It is illustrated on the basis of three real-world use cases: 1. the detection of humans in low-resolution infrared (IR) images for collaborative robotics; 2. the use of machine data from specifically designed vehicles; 3. state-of-the-art Large Language Models (LLMs) applied to internal company documents. We explain the use cases, thereby demonstrating how to apply the Periodic Table of AI to structure AI applications.
In the story »Die Schule« (original title: "The Fun They Had") from 1954, the Russian-American scientist and science fiction author Isaac Asimov describes what school looks like in the year 2157 - or, more precisely, that there are no schools anymore. Next to the bedroom in its parents' house, every child has a small schoolroom in which it is taught by a mechanical teacher (a machine with a screen and a slot for inserting homework). This teaching machine is perfectly adjusted to the abilities of the individual child and can teach it optimally. The only problem: machines can break. Eleven-year-old Margie is quizzed on geography by her mechanical teacher again and again, but receives worse grades each time. Her mother notices this and calls the school inspector to repair the mechanical teacher.
Method for operating a battery-electric vehicle with an electric machine for driving the vehicle and an inverter (1) for controlling the electric machine, wherein the inverter (1) comprises a three-phase bridge circuit with a number of switches (3) implemented as semiconductors, wherein losses arising in the inverter (1) are used to heat the vehicle interior and/or to temper a battery and/or to temper gear oil, wherein the inverter (1) is controlled by means of space vector modulation, wherein non-optimal switching behavior of the inverter (1) is brought about by setting non-optimal voltage space vectors (e, eu, ev, ew, e1, e2, -e1, -e2), wherein the voltage space vectors (e, e1, e2) are scaled by switching zero-voltage vectors, which reduce the voltage depending on their share of time, or with the aid of the respectively opposite voltage space vector (-e1, -e2), so that a switching sequence with a maximum number of switching cycles is realized, characterized in that no symmetry is generated in the middle of a switching period (Tp).
The invention relates to a method for operating a battery-electric vehicle with an electric machine for driving the vehicle and an inverter (1) for controlling the electric machine, wherein the inverter (1) comprises a three-phase bridge circuit with a number of switches (3) implemented as semiconductors, wherein losses arising in the inverter (1) are used to heat the vehicle interior and/or to temper a battery and/or to temper gear oil, wherein the inverter (1) is controlled by means of space vector modulation, wherein non-optimal switching behavior of the inverter (1) is brought about by setting non-optimal voltage space vectors (e, eu, ev, ew, e1, e2, -e1, -e2), wherein the voltage space vectors (e, e1, e2) are scaled by switching zero-voltage vectors, which reduce the voltage depending on their share of time, or with the aid of the respectively opposite voltage space vector (-e1, -e2), so that a switching sequence with a maximum number of switching cycles is realized, wherein no symmetry is generated in the middle of a switching period (Tp).
The invention relates to a method for operating a battery-electric vehicle with an electric machine for driving the vehicle and an inverter (1) for controlling a stator (2) of the electric machine, wherein the inverter (1) comprises a three-phase bridge circuit with a number of switches (3) implemented as semiconductors, wherein losses arising in the inverter (1) and/or in the electric machine are used to heat the vehicle interior and/or to temper a battery and/or to temper gear oil, wherein, while the vehicle is stationary, a permanent-magnet flux caused by a permanent magnet of the electric machine is weakened, by setting a non-torque-forming stator current component (Id) equal to the negative quotient of a stator flux (ψPM) and the d-component of a stator inductance (Ld), to such an extent that the magnetic flux is compensated, wherein a very high-frequency alternating current is set as the torque-forming stator current component (Iq).
The invention relates to a method for operating a battery-electric vehicle with an electric machine for driving the vehicle and an inverter (1) for controlling a stator (2) of the electric machine, wherein the inverter (1) comprises a three-phase bridge circuit with a number of switches (3) implemented as semiconductors, wherein losses arising in the inverter (1) and/or in the electric machine are used to heat the vehicle interior and/or to temper a battery and/or to temper gear oil, wherein a non-torque-forming stator current component (Id) in the form of an alternating current is impressed into the electric machine, wherein at standstill a torque-forming stator current component (Iq) is regulated to zero, and wherein during driving a compensation current is impressed as the torque-forming stator current component (Iq), which compensates a torque arising from the variation of the non-torque-forming stator current component (Id).
This article presents the development, parameterization, and experimental validation of a pseudo-three-dimensional (P3D) multiphysics aging model of a 500 mAh high-energy lithium-ion pouch cell with graphite negative electrode and lithium nickel manganese cobalt oxide (NMC) positive electrode. This model includes electrochemical reactions for solid electrolyte interphase (SEI) formation at the graphite negative electrode, lithium plating, and SEI formation on plated lithium. The thermodynamics of the aging reactions are modeled depending on temperature and ion concentration, and the reaction kinetics are described with an Arrhenius-type rate law. Good agreement of model predictions with galvanostatic charge/discharge measurements and electrochemical impedance spectroscopy is observed over a wide range of operating conditions. The model allows quantifying capacity loss due to cycling near beginning-of-life as a function of operating conditions and visualizing aging colormaps as a function of both temperature and C-rate (0.05 to 2 C charge and discharge, −20 °C to 60 °C). The model predictions are also qualitatively verified through voltage relaxation, cell expansion, and cell cycling measurements. Based on this full model, six different aging indicators for determining the limits of fast charging are derived from post-processing simulations of a reduced, pseudo-two-dimensional isothermal model without aging mechanisms. The most successful aging indicator, compared to results from the full model, is based on combined lithium plating and SEI kinetics calculated from battery states available in the reduced model. This methodology is applicable to standard pseudo-two-dimensional models available today both commercially and as open source.
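An Arrhenius-type rate law, as used above for the aging kinetics, has the standard form k(T) = A * exp(-Ea / (R * T)); a minimal sketch in which the prefactor and activation energy are illustrative values, not the paper's fitted parameters:

```python
import math

R = 8.314  # molar gas constant, J/(mol K)

def arrhenius_rate(T_kelvin, A=1.0, Ea=50_000.0):
    """Arrhenius-type rate constant k = A * exp(-Ea / (R * T)).
    A and Ea here are illustrative placeholders."""
    return A * math.exp(-Ea / (R * T_kelvin))

# Aging reactions accelerate with temperature over the studied range:
rates = {T_c: arrhenius_rate(T_c + 273.15) for T_c in (-20, 25, 60)}
```

This monotone increase with temperature is what makes the aging colormaps over temperature and C-rate informative.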
This study focuses on the autonomous navigation and mapping of indoor environments using a drone equipped only with a monocular camera and height measurement sensors. A visual SLAM algorithm was employed to generate a preliminary map of the environment and to determine the drone's position within the map. A deep neural network was utilized to generate a depth image from the monocular camera's input, which was subsequently transformed into a point cloud to be projected into the map. By aligning the depth point cloud with the map, 3D occupancy grid maps were constructed by using ray tracing techniques to get a precise depiction of obstacles and the surroundings. Due to the absence of IMU data from the low-cost drone for the SLAM algorithm, the created maps are inherently unscaled. However, preliminary tests with relative navigation in unscaled maps have revealed potential accuracy issues, which can only be overcome by incorporating additional information from the given sensors for scale estimation.
Modern industrial production is heavily dependent on efficient workflow processes and automation. The steady flow of raw materials as well as the separation of vital parts and semi-finished products are at the core of these automated procedures. Commonly used systems for this work are bowl feeders, which separate parts and material by a combination of mechanical vibration and friction. The production of these tools, especially the design of the ramping spiral, is delicate and time-consuming work, as the shape, slope, and material must be carefully adjusted to the corresponding parts. In this work, we propose an automated approach that uses optimization procedures from artificial intelligence to design the spiral ramps of bowl feeders. For this, the whole system and the considered parts are physically simulated, and the optimized geometry is subsequently exported into a CAD system for actual manufacturing or 3D printing. The use of evolutionary optimization requires developing a mathematical model of the whole setup and finding an efficient representation of its integral features.
In automotive engineering, multi-material design offers significant potential for weight reduction. At the same time, this design approach requires a large number of joining processes to connect the different materials and material classes, and a multitude of design and material requirements must be taken into account. To systematically integrate the lightweight aspect of the joining process itself into this selection process, a methodology was developed that evaluates joining processes with regard to their respective lightweight potential.
Design and Implementation of a Camera-Based Tracking System for MAV Using Deep Learning Algorithms
(2023)
In recent years, the advancement of micro-aerial vehicles has been rapid, leading to their widespread utilization across various domains due to their adaptability and efficiency. This research paper focuses on the development of a camera-based tracking system specifically designed for low-cost drones. The primary objective of this study is to build a system capable of detecting objects and locating them on a map in real time. Detection and positioning are achieved solely through the drone's camera and sensors. To accomplish this goal, several deep learning algorithms are assessed and adopted based on their suitability for the system. Object detection is based on a single-shot detector architecture chosen for maximum computation speed, and tracking combines deep neural-network-based features with an efficient sorting strategy. Subsequently, the developed system is evaluated using diverse metrics to determine its detection and tracking performance. To further validate the approach, the system is deployed in the real world. For this, two distinct scenarios were chosen to adjust the algorithms and system setup: a search-and-rescue scenario with user interaction and precise geolocalization of missing objects, and a livestock control scenario demonstrating the capability of surveying individual animals and keeping track of their number and area. The results demonstrate that the system is capable of operating in real time, and the evaluation verifies that the implemented system enables precise and reliable determination of detected object positions. The ablation studies show that object identification through small variations in phenotypes is feasible with our approach.
Landing heel first has been associated with elevated external knee abduction moments (KAM), thereby potentially increasing the risk of sustaining a non-contact ACL injury. Apart from the foot strike angle, knee valgus angle (VAL) and vertical center of mass velocity at initial ground contact (IC) have been associated with increased KAM in females across different sidestep cuts. While real-time biofeedback training has been proven effective for gait retraining [4], the highly dynamic, non-cyclical nature of cutting maneuvers makes real-time feedback unsuitable and alternative approaches necessary. This study aimed at assessing the efficacy of immediate software-aided feedback on cutting technique in reducing KAM during handball-specific cutting maneuvers.
The invention relates to a device for the biological methanation of CO and/or CO2 by means of methanogenic microorganisms through the conversion of H2 and CO and/or CO2. The device comprises a gassing column and a degassing column, each with a bottom side and an upper side opposite the bottom side; a medium containing methanogenic microorganisms provided in the gassing column and the degassing column; a feed device for feeding an H2-containing gas into the medium of the gassing column; a discharge device for removing a CH4-containing gas from the degassing column; a connecting line between the gassing column and the degassing column in the region of the bottom sides; a pump for transferring medium via the connecting line from the gassing column to the degassing column; and a return line between the gassing column and the degassing column in the region of the upper sides for returning medium from the degassing column to the gassing column. The invention also relates to a method for the biological methanation of CO and/or CO2 in a device by means of methanogenic microorganisms forming part of a medium provided in the device, wherein the medium is circulated through a gassing column and a degassing column, the columns being connected to each other via a connecting line in the region of their bottom sides and via a return line in the region of the upper sides opposite the bottom sides, wherein the medium moves downward in the gassing column and upward in the degassing column, and wherein an H2-containing gas is fed to the medium in the gassing column.
In times of great change, cooperatively organized SMEs have the opportunity to respond to complex challenges with collaborative solutions, especially when the strength and creativity of the community are harnessed. True to the motto of the cooperative pioneer Friedrich Wilhelm Raiffeisen, "What one cannot achieve alone, many can", joint entrepreneurial action creates identity and motivation, which in turn can generate a self-reinforcing momentum. In this article, Prof. Dr. Tobias Popovic and Prof. Dr. Thomas Baumgärtler show how SMEs, politics, and society benefit from this.
Encapsulant-free N.I.C.E. modules have strong ecological advantages compared to conventional laminated modules but generally suffer from lower electrical performance. Via long-term outdoor monitoring of full-size industrial modules of both types with identical solar cells, we investigated whether the performance difference remains constant over time and which parameters influence its value. After assessing roughly a full year of data, two obvious levers for N.I.C.E. optimization are identified: the use of textured glass and of transparent adhesives on the module rear side. The performance loss could also be alleviated using tracking systems due to lower AOI values. Our measurements additionally show that N.I.C.E. module surfaces are on average about 2.5 °C cooler than those of laminated modules. With these findings, we lay out a roadmap to reduce today's LIV gap of about 5%rel through different optimizations.
Phytases are widely used food and feed enzymes that improve phosphate availability and reduce anti-nutritional factors. Despite these benefits, enzyme usage is restricted by the harsh conditions in the gastrointestinal tract (pH 2–6) and by feed pelleting at high temperatures (60–90 °C). The commercially available phytase Quantum® Blue was immobilized as cross-linked enzyme aggregates (CLEAs) using glutardialdehyde and soy protein, resulting in a residual activity of 33%. The influence of the precipitating agent, precipitant concentration, cross-linker concentration and cross-linking time, sodium borohydride, as well as the proteic feeders gluten, soy protein, and bovine serum albumin (BSA) was optimized. The best conditions were 90% (v/v) ethyl lactate as precipitating reagent, 100 mM glutardialdehyde, and a soy protein concentration of 227 mg/L with a cross-linking time of 1 h. The intrinsically stable phytase retained its high thermal stability and temperature optimum. The phytase-CLEA achieved a 425% increase in residual activity under harsh acidic conditions between pH 2.2 and 3.5 compared to the free enzyme. The free and immobilized phytase were deployed in an in vitro assay simulating the acidic conditions in the gizzard of poultry at pH 2. The hydrolysis of phytate was monitored using a novel high-performance thin-layer chromatography (HPTLC) analysis and a DAD scanner to study the InsPx fingerprint. All lower inositol phosphate pools (InsP1–InsP6) and free phosphate were separated and analyzed. The phytase-CLEA efficiently released 80% of the total phosphate within 180 min, whereas the free enzyme released only 6% in the same time under the same conditions.
In this work, the nonlinear behavior of layered surface acoustic wave (SAW) resonators is studied with the help of finite element (FE) computations. The full calculations depend strongly on the availability of accurate tensor data. While accurate material data exist for linear computations, the complete sets of higher-order material constants needed for nonlinear simulations are still not available for relevant materials. To overcome this problem, scaling factors were used for each available nonlinear tensor. The approach considers piezoelectricity, dielectricity, electrostriction, and elasticity constants up to the fourth order. These factors act as a phenomenological estimate for incomplete tensor data. Since no set of fourth-order material constants for LiTaO3 is available, an isotropic approximation for the fourth-order elastic constants was applied. As a result, it was found that the fourth-order elastic tensor is dominated by a single fourth-order Lamé constant. With the help of the FE model, derived in two different but equivalent ways, we investigate the nonlinear behavior of a SAW resonator with a layered material stack. The focus is set on third-order nonlinearity. Accordingly, the modeling approach is validated using measurements of third-order effects in test resonators. In addition, the acoustic field distribution is analyzed.
Established robot manufacturers have developed methods to determine and optimize the accuracy of their robots. These methods vary from one manufacturer to another, and due to the lack of published data, comparing robot performance is difficult. The aim of this article is to find methods to evaluate important characteristics of a robot with an accurate and cost-effective setup. A laser triangulation sensor and geometrically referenced spheres were used as a basis for comparing robot performance.
In this contribution, we present a novel 3D printed multi-material, electromagnetic vibration harvester. The harvester is based on a cantilever design and utilizes an embedded constantan wire within a matrix of polyethylene terephthalate glycol (PETG). A prototype has been manufactured with a combination of a fused filament fabrication (FFF) printer and a robot with a custom-made tool.
In the framework of electro-elasticity theory and the finite element method (FEM), a model is set up for the computation of quantities in surface acoustic wave (SAW) devices accounting for nonlinear effects. These include second-order and third-order intermodulations, second and third harmonic generation and the influence of electro-acoustic nonlinearity on the frequency characteristics of SAW resonators. The model is based on perturbation theory, and requires input material constants, e.g., the elastic moduli up to fourth order for all materials involved. The model is two-dimensional, corresponding to an infinite aperture, but all three Cartesian components of the displacement and electrical fields are accounted for. The first version of the model pertains to an infinite periodic arrangement of electrodes. It is subsequently generalized to systems with a finite number of electrodes. For the latter version, a recursive algorithm is presented which is related to the cascading scheme of Plessky and Koskela and strongly reduces computation time and memory requirements. The model is applied to TC-SAW systems with copper electrodes buried in an oxide film on a LiNbO3 substrate. Results of computations are presented for the electrical current due to third-order intermodulations and the displacement field associated with the second harmonic and second-order intermodulations, generated by monochromatic input tones. The scope of this review is limited to methodological aspects with the goal to enable calculations of nonlinear quantities in SAW devices on inexpensive and easily accessible computing platforms.
Due to its performance, the field of deep learning has gained a lot of attention, with neural networks succeeding in areas like Computer Vision (CV), Natural Language Processing (NLP), and Reinforcement Learning (RL). However, high accuracy comes at a computational cost, as larger networks require longer training times and no longer fit onto a single GPU. To reduce training costs, researchers are studying the dynamics of different optimizers in order to make training more efficient. Resource requirements can be limited by reducing model size during training or by designing more efficient models that improve accuracy without increasing network size.
This thesis combines eigenvalue computation and high-dimensional loss surface visualization to study different optimizers and deep neural network models. Eigenvectors of different eigenvalues are computed, and the loss landscape and optimizer trajectory are projected onto the plane spanned by those eigenvectors. A new parallelization method for the stochastic Lanczos method is introduced, resulting in faster computation and thus enabling high-resolution videos of the trajectory and second-order information during neural network training. Additionally, the thesis presents the loss landscape between two minima along with the eigenvalue density spectrum at intermediate points for the first time.
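The stochastic Lanczos method mentioned above estimates Hessian eigenvalues using only Hessian-vector products, never the full matrix. As a minimal, generic sketch (assuming a toy matrix in place of a real network Hessian, and without the thesis's parallelization), a basic Lanczos iteration with full reorthogonalization:

```python
import numpy as np

def lanczos(hvp, dim, k, seed=0):
    """Lanczos tridiagonalization using only matrix-vector products (hvp).

    Returns the k x k tridiagonal matrix T whose eigenvalues approximate
    the extreme eigenvalues of the implicit symmetric matrix.
    """
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(dim)
    v /= np.linalg.norm(v)
    V = np.zeros((k, dim))
    alphas, betas = np.zeros(k), np.zeros(k - 1)
    for i in range(k):
        V[i] = v
        w = hvp(v)
        alphas[i] = v @ w
        w -= alphas[i] * v
        if i > 0:
            w -= betas[i - 1] * V[i - 1]
        # full reorthogonalization against all previous Lanczos vectors
        w -= V[: i + 1].T @ (V[: i + 1] @ w)
        if i < k - 1:
            betas[i] = np.linalg.norm(w)
            v = w / betas[i]
    return np.diag(alphas) + np.diag(betas, 1) + np.diag(betas, -1)

# Toy "Hessian": a fixed symmetric matrix accessed only through products.
A = np.diag([10.0, 5.0, 1.0, 0.5, 0.1])
T = lanczos(lambda x: A @ x, dim=5, k=5)
top = np.linalg.eigvalsh(T).max()
```

For a neural network, `hvp` would be implemented with automatic differentiation (a double backward pass) instead of an explicit matrix.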
Secondly, this thesis presents a regularization method for Generative Adversarial Networks (GANs) that uses second-order information. The gradient during training is modified by subtracting the eigenvector direction of the largest eigenvalue, preventing the network from falling into the steepest minima and avoiding mode collapse. The thesis also shows the full eigenvalue density spectra of GANs during training.
Thirdly, this thesis introduces ProxSGD, a proximal algorithm for neural network training that guarantees convergence to a stationary point and unifies multiple popular optimizers. Proximal gradients are used to find a closed-form solution to the problem of training neural networks with smooth and non-smooth regularizations, resulting in better sparsity and more efficient optimization. Experiments show that ProxSGD can find sparser networks while reaching the same accuracy as popular optimizers.
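A proximal gradient step of the kind used in the ProxSGD family can be illustrated for the classic ℓ1 case, where the prox has a closed-form soft-thresholding solution. This is a generic sketch, not the thesis's exact update rule:

```python
import numpy as np

def soft_threshold(x, tau):
    """Closed-form proximal operator of tau * ||x||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def prox_step(w, grad, lr, lam):
    """One proximal gradient step: gradient step on the smooth loss,
    then the closed-form prox of the non-smooth l1 regularizer."""
    return soft_threshold(w - lr * grad, lr * lam)

w = np.array([0.9, -0.05, 0.3])
# zero gradient isolates the effect of the prox: entries with magnitude
# below lr*lam = 0.1 are zeroed, the others are shrunk by 0.1
w_new = prox_step(w, grad=np.zeros(3), lr=0.1, lam=1.0)
```

The closed-form prox is what lets such methods drive weights exactly to zero, yielding genuinely sparse networks rather than merely small weights.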
Lastly, this thesis unifies sparsity and neural architecture search (NAS) through the framework of group sparsity. Group sparsity is achieved through ℓ2,1-regularization during training, allowing for filter and operation pruning to reduce model size with minimal sacrifice in accuracy. By grouping multiple operations together, group sparsity can be used for NAS as well. This approach is shown to be more robust while still achieving competitive accuracies compared to state-of-the-art methods.
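The ℓ2,1 regularization above zeroes whole groups (e.g., entire filters) at once. A minimal sketch of the corresponding proximal operator, treating matrix rows as groups (the actual grouping and training loop of the thesis are omitted):

```python
import numpy as np

def group_soft_threshold(W, tau):
    """Proximal operator of tau * sum over rows g of ||W[g]||_2:
    a whole row is zeroed when its l2 norm is below tau, otherwise shrunk."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return W * scale

# Each row could represent one conv filter; small-norm filters vanish entirely.
W = np.array([[3.0, 4.0],     # norm 5   -> shrunk
              [0.1, 0.1]])    # norm ~0.14 -> pruned to zero
W_sparse = group_soft_threshold(W, tau=1.0)
```

Because entire rows become exactly zero, the corresponding filters or candidate operations can be removed from the architecture after training.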
The invention relates to a device for the biological methanation of CO and/or CO2 by means of methanogenic microorganisms through the conversion of H2 and CO and/or CO2. The device comprises a gassing column and a degassing column, each with a bottom side and an upper side opposite the bottom side; a medium containing methanogenic microorganisms provided in the gassing column and the degassing column; a feed device for feeding an H2-containing gas into the medium of the gassing column, the feed device being arranged in the region of the bottom side of the gassing column; a discharge device for removing a CH4-containing gas from the degassing column; a connecting line between the gassing column and the degassing column in the region of the bottom sides; a pump for transferring medium via the connecting line from the gassing column to the degassing column; and a return line between the gassing column and the degassing column in the region of the upper sides for returning medium from the degassing column to the gassing column. The invention also relates to a method for the biological methanation of CO and/or CO2 in a device by means of methanogenic microorganisms forming part of a medium provided in the device, wherein the medium is circulated through a gassing column and a degassing column, the columns being connected to each other via a connecting line in the region of their bottom sides and via a return line in the region of the upper sides opposite the bottom sides, wherein the medium moves downward in the gassing column and upward in the degassing column, and wherein an H2-containing gas is fed to the medium in the region of the bottom side of the gassing column.
Economic Analysis of a Smart Energy Concept for an Existing District in Karlsruhe
(2023)
The transformation of the energy supply in existing buildings is crucial for reaching the climate targets in the building sector. In a model district project in Karlsruhe-Durlach, a 'smart energy concept' consisting of heat pumps, a combined heat and power unit, and PV systems with a local electricity and heating network is being implemented and monitored. The goal is a CO2-efficient and economical supply of heat and electricity.
The article presents an economic analysis of the heat and electricity contracting based on the actual investment costs as well as the measured and calculated energy flows. In addition to the investment costs, the levelized cost of heat depends on the energy-market conditions. With a rising CO2 tax, heat generation costs below those of conventional energy systems will be reached in the medium term. The integrated energy concept thus offers broad application potential for existing urban districts outside district-heating areas.
Convolutional neural networks (CNNs) define the state-of-the-art solution on many perceptual tasks. However, current CNN approaches largely remain vulnerable to adversarial perturbations of the input that have been crafted specifically to fool the system while being quasi-imperceptible to the human eye. In recent years, various approaches have been proposed to defend CNNs against such attacks, for example by model hardening or by adding explicit defence mechanisms. In the latter case, a small “detector” is included in the network and trained on the binary classification task of distinguishing genuine data from data containing adversarial perturbations. In this work, we propose a simple and lightweight detector, which leverages recent findings on the relation between networks’ local intrinsic dimensionality (LID) and adversarial attacks. Based on a re-interpretation of the LID measure and several simple adaptations, we surpass the state of the art in adversarial detection by a significant margin and reach almost perfect results in terms of F1-score for several networks and datasets. Sources available at: https://github.com/adverML/multiLID
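The LID measure underlying the detector is commonly estimated from nearest-neighbor distances via a maximum-likelihood estimator. A minimal sketch of that standard estimator follows; the paper's multiLID re-interpretation and the detector itself are not reproduced here:

```python
import numpy as np

def lid_mle(dists):
    """Maximum-likelihood LID estimate from the sorted distances of a
    query point to its k nearest neighbors (ascending order)."""
    r_max = dists[-1]
    return -1.0 / np.mean(np.log(dists / r_max))

# Hypothetical distances from a query point to its 5 nearest neighbors.
d = np.array([0.1, 0.2, 0.3, 0.4, 0.5])
est = lid_mle(d)
```

Intuitively, the faster neighbor distances grow with rank, the lower the estimated intrinsic dimensionality; adversarial examples tend to show atypical LID profiles across network layers.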
Investigation on Bowtie Antennas Operating at Very Low Frequencies for Ground Penetrating Radar
(2023)
The efficiency of Ground Penetrating Radar (GPR) systems depends significantly on antenna performance, as the signal has to propagate through lossy and inhomogeneous media. GPR antennas should have a low operating frequency for greater penetration depth, high gain and efficiency to increase the received power, and a compact, lightweight form for ease of GPR surveying. In this paper, two different designs of bowtie antennas operating at very low frequencies are proposed and analyzed.
A method for evaluating skin cancer detection based on millimeter-wave technologies is presented. For this purpose, the relative permittivities of benign and cancerous lesions are calculated using effective medium theory, taking into account the difference in water content between them. These calculated relative permittivities are then used to simulate and evaluate skin cancer detection with a substrate-integrated waveguide probe. In the best case, a difference of up to 13 dB in the simulated scattering parameter S11 between healthy and cancerous skin can be determined.
Skin cancer detection proves to be complicated and highly dependent on the examiner's skills. Millimeter-wave technologies appear to be a promising aid for detecting skin cancer: the different water content of a cancerous skin area compared to healthy skin changes its reflective properties. Due to the limited available data on the dielectric properties of skin cancer, especially in comparison to surrounding healthy skin, accurate simulations and evaluations are quite challenging, and comparing results from different approaches and starting points can be difficult. In this paper, the Effective Medium Theory is applied to model skin cancer, providing permittivity values dependent on the water content.
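The abstract does not state which effective-medium relation is used; as one common choice, the Maxwell Garnett mixing rule gives an effective permittivity from the host and inclusion permittivities and the inclusion volume fraction. The tissue values below are purely hypothetical:

```python
def maxwell_garnett(eps_m, eps_i, f):
    """Maxwell Garnett effective permittivity for spherical inclusions of
    permittivity eps_i at volume fraction f in a host medium eps_m."""
    num = eps_i + 2 * eps_m + 2 * f * (eps_i - eps_m)
    den = eps_i + 2 * eps_m - f * (eps_i - eps_m)
    return eps_m * num / den

# Hypothetical values: a water-rich inclusion phase in a drier tissue host;
# a higher water fraction in the lesion raises the effective permittivity.
eps_eff_healthy = maxwell_garnett(eps_m=8.0, eps_i=25.0, f=0.3)
eps_eff_lesion = maxwell_garnett(eps_m=8.0, eps_i=25.0, f=0.6)
```

The formula correctly reduces to the host permittivity at f = 0 and to the inclusion permittivity at f = 1, so the water-content-dependent permittivity varies smoothly between the two phases.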
IT-Governance
(2023)
The dynamics of technological development exert great pressure on a company's management and supervisory bodies. Hyperconnectivity implies that internal IT and OT have points of contact with the external context, causing complexity to grow exponentially due to the coexistence of a multitude of hardware and software. The legal requirements, together with business-policy demands, should prompt companies to consider establishing IT governance. Which system is chosen and how dense the regulation should be is left to those responsible, taking the company's interests into account; that was the conclusion of the first part of this article (ZCG 4/23). The second part now takes a closer look at the ISO 38500 family of standards as one possible implementation. It covers the individual components in the form of the ten available standards and their integrative top-down design. It turns out that topics such as data governance and AI governance are adequately addressed.
IT-Governance (Part 1)
(2023)
Regardless of the quality of its results, ChatGPT has taken AI applications to a new level. Digital business models such as ecosystem platforms are also changing the way business is done. A framework in the form of IT governance thus becomes not only necessary but also a great opportunity to address and accompany these exponential developments in a structured manner. Starting from the German Corporate Governance Code (DCGK), the first part of this article examines this connection.
If a framework for the risk-oriented handling of ransomware attacks exists, those responsible in companies should draw on it and embed it in the company-wide system. This enables the control and management of risks that were previously characterized by high uncertainty and hit organizations unexpectedly. Furthermore, it must be taken into account that social engineering plays a significant role in the delivery of malicious software and should be included in the analysis process at an early stage.
The modern extortion of companies after successful ransomware attacks is both a monetary and a non-monetary problem. Attackers gain access to the organization via an initial, often human, endpoint and can then plant the malware. The two attack vectors, social engineering and ransomware, exploit organizational and technical vulnerabilities to access various assets. This first article of the two-part series develops an understanding of this approach.
Outliers in data series point to possible risks. The empirical data largely determine which methods are applicable; classification systems help in arriving at a targeted selection. The simplest case is univariate data series, whose outliers are identified using frequency distributions, confidence intervals around the mean, and boxplots.
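The boxplot criterion mentioned above can be sketched with Tukey's fences, flagging values outside [Q1 − 1.5·IQR, Q3 + 1.5·IQR]; the whisker factor 1.5 is the conventional default, not a value taken from the article:

```python
import numpy as np

def boxplot_outliers(x, whisker=1.5):
    """Flag univariate outliers outside Tukey's fences
    [Q1 - whisker*IQR, Q3 + whisker*IQR]."""
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    lo, hi = q1 - whisker * iqr, q3 + whisker * iqr
    return x[(x < lo) | (x > hi)]

data = np.array([9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 25.0])
outliers = boxplot_outliers(data)
```

Because the fences are based on quartiles rather than the mean and standard deviation, the criterion itself is robust against the very outliers it is meant to detect.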
This thesis deals with the redesign of manufacturing systems through simulation and optimization. Material flow simulation is a common tool for solving problems in system design. Its limitations are the high demands in time and expertise required to conduct simulation studies, evaluate results, and solve design problems. New opportunities arise with Industry 4.0 technologies and the digital shadow, which provide data for simulation. However, methods for using production data in the redesign of production systems are not yet available. The purpose of this work is to provide methods to automate simulation from the digital shadow and to use simulation for optimization and problem-solving in system design. Two case studies support the action research approach of this work. The result is a framework for applying the digital shadow in optimization and problem-solving.
An in-depth study of U-net for seismic data conditioning: Multiple removal by moveout discrimination
(2024)
Seismic processing often involves suppressing multiples that are an inherent component of collected seismic data. Elaborate multiple prediction and subtraction schemes such as surface-related multiple removal have become standard in industry workflows. In cases of limited spatial sampling, low signal-to-noise ratio, or conservative subtraction of the predicted multiples, the processed data frequently suffer from residual multiples. To tackle these artifacts in the post-migration domain, practitioners often rely on Radon transform-based algorithms. However, such traditional approaches are both time-consuming and parameter-dependent, making them relatively complex. In this work, we present a deep learning-based alternative that provides competitive results while reducing the complexity of its usage and hence simplifying its applicability. Our proposed model demonstrates excellent performance when applied to complex field data, despite being trained exclusively on synthetic data. Furthermore, extensive experiments show that our method can preserve the inherent characteristics of the data, avoiding undesired oversmoothed results, while removing the multiples from seismic offset or angle gathers. Finally, we conduct an in-depth analysis of the model, where we pinpoint the effects of the main hyperparameters on real data inference, and we probabilistically assess its performance from a Bayesian perspective. In this study, we put particular emphasis on helping the user reveal the inner workings of the neural network and attempt to unbox the model.
It is common practice to apply padding prior to convolution operations to preserve the resolution of feature maps in Convolutional Neural Networks (CNNs). While many alternatives exist, this is often achieved by adding a border of zeros around the inputs. In this work, we show that adversarial attacks often result in perturbation anomalies at the image boundaries, which are the areas where padding is used. Consequently, we aim to provide an analysis of the interplay between padding and adversarial attacks and seek an answer to the question of how different padding modes (or their absence) affect adversarial robustness in various scenarios.
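The padding modes discussed above can be compared directly on a small example; `constant` (zeros), `reflect`, and `wrap` are standard NumPy modes and stand in for the corresponding CNN padding variants:

```python
import numpy as np

x = np.arange(1, 5, dtype=float)       # a 1D "feature map": [1, 2, 3, 4]
zero = np.pad(x, 1, mode="constant")   # border of zeros (the common default)
refl = np.pad(x, 1, mode="reflect")    # mirrors interior values at the edge
circ = np.pad(x, 1, mode="wrap")       # circular padding from the far side
```

Zero padding introduces an artificial intensity discontinuity at the border, which is one plausible reason why adversarial perturbations concentrate there; reflective and circular padding avoid that discontinuity.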
Seismic data processing relies on multiples attenuation to improve inversion and interpretation. Radon-based algorithms are often used for multiples and primaries discrimination. Deep learning, based on convolutional neural networks (CNNs), has shown encouraging applications for demultiple that could mitigate Radon-based challenges. In this work, we investigate new strategies to train a CNN for multiples removal based on different loss functions. We propose combined primaries and multiples labels in the loss for training a CNN to predict primaries, multiples, or both simultaneously. Moreover, we investigate two distinctive training methods for all the strategies: UNet based on minimum absolute error (L1) training, and adversarial training (GAN-UNet). We test the trained models with the different strategies and methods on 400 synthetic data. We found that training to predict multiples, including the primaries …
Seismic data processing involves techniques to deal with undesired effects that occur during acquisition and pre-processing. These effects mainly comprise coherent artefacts such as multiples, non-coherent signals such as electrical noise, and loss of signal information at the receivers that leads to incomplete traces. In this work, we employ a generative solution, since it can explicitly model complex data distributions and hence, yield to a better decision-making process. In particular, we introduce diffusion models for multiple removal. To that end, we run experiments on synthetic and on real data, and we compare the deep diffusion performance with standard algorithms. We believe that our pioneer study not only demonstrates the capability of diffusion models, but also opens the door to future research to integrate generative models in seismic workflows.
In this paper, we describe a first publicly available fine-grained product recognition dataset based on leaflet images. Using advertisement leaflets, collected over several years from different European retailers, we provide a total of 41.6k manually annotated product images in 832 classes. Further, we investigate three different approaches for this fine-grained product classification task, Classification by Image, by Text, as well as by Image and Text. The approach "Classification by Text" uses the text extracted directly from the leaflet product images. We show, that the combination of image and text as input improves the classification of visual difficult to distinguish products. The final model leads to an accuracy of 96.4% with a Top-3 score of 99.2%. We release our code at https://github.com/ladwigd/Leaflet-Product-Classification.
Neural networks have a number of shortcomings. Among the most severe is their sensitivity to distribution shifts, which allows models to be easily fooled into wrong predictions by small perturbations to inputs that are often imperceptible to humans and need not carry semantic meaning. Adversarial training poses a partial solution to this issue by training models on worst-case perturbations. Yet recent work has also pointed out that reasoning in neural networks differs from that of humans: humans identify objects by shape, while neural nets mainly employ texture cues. For example, a model trained on photographs will likely fail to generalize to datasets containing sketches. Interestingly, it was also shown that adversarial training seems to favorably increase the shift toward shape bias. In this work, we revisit this observation and provide an extensive analysis of this effect on various architectures, the common L2- and L∞-training, and Transformer-based models. Further, we provide a possible explanation for this phenomenon from a frequency perspective.
An important step in seismic data processing to improve inversion and interpretation is multiples attenuation. Radon-based algorithms are often used for discriminating primaries and multiples. Recently, deep learning (DL), based on convolutional neural networks (CNNs) has shown promising results in demultiple that could mitigate the challenges of Radon-based methods. In this work, we investigate new different strategies to train a CNN for multiples removal based on different loss functions. We propose combined primaries and multiples labels in the loss for training a CNN to predict primaries, multiples, or both simultaneously. We evaluate the performance of the CNNs trained with the different strategies on 400 clean and noisy synthetic data, considering 3 metrics. We found that training a CNN to predict the multiples and then subtracting them from the input image is the most effective strategy for demultiple. Furthermore, including the primaries labels as a constraint during the training of multiples prediction improves the results. Finally, we test the strategies on a field dataset. The CNNs trained with different strategies report competitive results on real data compared with Radon demultiple. As a result, effectively trained CNN models can potentially replace Radon-based demultiple in existing workflows.
Seismic data processing involves techniques to deal with undesired effects that occur during acquisition and pre-processing. These effects mainly comprise coherent artefacts such as multiples, non-coherent signals such as electrical noise, and loss of signal information at the receivers that leads to incomplete traces. In the past years, there has been a remarkable increase of machine-learning-based solutions that have addressed the aforementioned issues. In particular, deep-learning practitioners have usually relied on heavily fine-tuned, customized discriminative algorithms. Although these methods can provide solid results, they seem to lack semantic understanding of the provided data. Motivated by this limitation, in this work, we employ a generative solution, as it can explicitly model complex data distributions and hence yield a better decision-making process. In particular, we introduce diffusion models for three seismic applications: demultiple, denoising and interpolation. To that end, we run experiments on synthetic and on real data, and we compare the diffusion performance with standardized algorithms. We believe that our pioneer study not only demonstrates the capability of diffusion models, but also opens the door to future research to integrate generative models in seismic workflows.
Neural networks tend to overfit the training distribution and perform poorly on out-of-distribution data. A conceptually simple solution lies in adversarial training, which introduces worst-case perturbations into the training data and thus improves model generalization to some extent. However, it is only one ingredient towards generally more robust models and requires knowledge about the potential attacks or inference-time data corruptions during model training. This paper focuses on the native robustness of models that can learn robust behavior directly from conventional training data without out-of-distribution examples. To this end, we study the frequencies in learned convolution filters. Clean-trained models often prioritize high-frequency information, whereas adversarial training forces models to shift their focus to low-frequency details during training. By mimicking this behavior through frequency regularization of learned convolution weights, we achieve improved native robustness to adversarial attacks, common corruptions, and other out-of-distribution tests. Additionally, this method leads to more favorable shifts in decision-making towards low-frequency information, such as shapes, which inherently aligns more closely with human vision.
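A minimal sketch of such a frequency measure on a convolution kernel: the fraction of spectral energy above a radial cutoff, which a regularizer could penalize. The cutoff value and the penalty form are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def high_freq_penalty(kernel, cutoff=0.25):
    """Fraction of the kernel's 2D spectral energy above a normalized radial
    cutoff frequency; a regularizer would add this (times a weight) to the
    training loss to bias filters towards low frequencies. Illustrative only."""
    k = kernel.shape[-1]
    spec = np.abs(np.fft.fftshift(np.fft.fft2(kernel))) ** 2
    f = np.fft.fftshift(np.fft.fftfreq(k))
    r = np.sqrt(f[:, None] ** 2 + f[None, :] ** 2)   # radial frequency grid
    return float(spec[r > cutoff].sum() / spec.sum())
```

A constant (purely low-pass) kernel scores zero, while a kernel with sharp spatial structure scores higher.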
The mathematical representation of data in the Spherical Harmonic (SH) domain has recently regained increasing interest in the machine learning community. This technical report gives an in-depth introduction to the theoretical foundation and practical implementation of SH representations, summarizing works on rotation-invariant and equivariant features, as well as convolutions and exact correlations of signals on spheres. In extension, these methods are then generalized from scalar SH representations to Vectorial Harmonics (VH), providing the same capabilities for 3D vector fields on spheres.
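A classic example of the rotation-invariant features summarized in the report is the per-degree power spectrum: a rotation mixes the orders m within a degree l by a unitary Wigner-D matrix, so the summed power per degree is unchanged. A small numpy sketch (the coefficient layout is an assumption for illustration):

```python
import numpy as np

def sh_power_spectrum(coeffs):
    """coeffs: dict mapping degree l -> complex array of the 2l+1 SH
    coefficients f_{l,m}. The per-degree power p_l = sum_m |f_{l,m}|^2 is
    rotation invariant, since rotations act unitarily within each degree."""
    return {l: float(np.sum(np.abs(c) ** 2)) for l, c in coeffs.items()}
```

Applying any unitary mixing within a degree (standing in for a Wigner-D rotation) leaves the spectrum unchanged.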
Purpose
This study aims to investigate a systematic approach to the production and use of additively manufactured injection mould inserts in product development (PD) processes. For this purpose, an evaluation of the additive tooling design method (ATDM) is performed.
Design/methodology/approach
The evaluation of the ATDM is conducted within student workshops, where students develop products and validate them using AT-prototypes. The evaluation process includes the analysis of work results as well as the use of questionnaires and participant observation.
Findings
This study shows that the ATDM can be successfully used to assist in producing and using AT mould inserts to produce valid AT prototypes. As a reference for the implementation of AT in industrial PD, extracts from the work of the student project groups and suitable process parameters for prototype production are presented.
Originality/value
This paper presents the application and evaluation of a method to support AT in PD that has not yet been scientifically evaluated.
This study presents the structure and process of international carve-out transactions, focusing on their legal aspects. Nevertheless, since international carve-out transactions are characterized by a particularly close and complex interweaving of legal, organizational, and strategic aspects, the international carve-out transaction is examined as a whole.
The aim of this study is to describe the market for FinTech companies in Germany, taking into account the market forces involved. To this end, the theoretical foundations of a market analysis are laid out, and a market analysis is conducted on that basis. The legal framework applicable to FinTech companies in Germany is also considered, as it too influences market activity. The goal is to identify and analyze existing legal foundations as well as drafts of future legislation through research, covering both German law and the EU law applicable to Germany as a member state of the EU.
The paper compares different anti-windup strategies for the current control of inverter-fed permanent magnet synchronous machines (PMSM) controlled by pulse-width modulation. In this respect, the focus is on the drive behavior with a relatively large product of stator frequency and sampling time. A requirement for dynamically high-quality anti-windup measures is, among other things, a sufficiently accurate decoupling of the stator current direct axis and quadrature axis components even at high stator frequencies. Discrete-time models of the electrical subsystem of the PMSM are well suited for this purpose, of which the method found to be the most accurate in a preliminary investigation is used as the basis for all anti-windup methods examined. Simulation studies and measurement results document the performance of the compared methods.
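To illustrate the anti-windup idea (back-calculation is only one of the strategies such comparisons typically cover, and all gains here are invented for illustration), a single discrete PI step with output saturation might look like:

```python
def pi_step(err, integ, kp, ki, ts, u_max, kaw):
    """One sampling step of a discrete PI controller with back-calculation
    anti-windup. When the output saturates (e.g. at the inverter voltage
    limit), the integrator is bled by the saturation excess so it does not
    wind up. Gains kp, ki, kaw and limit u_max are illustrative."""
    u_unsat = kp * err + integ
    u = max(-u_max, min(u_max, u_unsat))       # actuator saturation
    # back-calculation: feed the saturation error back into the integrator
    integ += ts * (ki * err + kaw * (u - u_unsat))
    return u, integ
```

During a large step, the anti-windup path keeps the integrator much smaller than an unprotected PI would, which shortens the recovery once the error falls.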
Soiling is an important issue in the renewable energy sector since it can result in significant yield losses, especially in regions with higher pollution or dust levels. To mitigate the impact of soiling on photovoltaic (PV) plants, it is essential to regularly monitor and clean the panels, as well as to develop accurate soiling predictions that can inform cleaning strategies and enhance the overall performance of PV power plants. This research focuses on the problem of soiling loss in photovoltaic power plants and the potential to improve the accuracy of soiling predictions. The study examines how soiling affects the efficiency and productivity of the modules and how soiling can be measured and predicted using machine learning (ML) algorithms. The research includes analyzing real data from large-scale ground-mounted PV sites and comparing different soiling measurement methods. Deviations between the real and expected soiling loss values were observed for some projects in southern Spain; thus, the main goal of this work is to develop machine learning models that predict soiling more accurately. The developed models have a low mean square error (MSE), indicating their accuracy and suitability for predicting soiling rates. The study also investigates the impact of different cleaning strategies on the performance of PV power plants and provides a practical application to predict both the soiling and the number of cleaning cycles.
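As a minimal stand-in for the (unspecified) ML models in the study, an ordinary-least-squares soiling model over two hypothetical features — days since the last cleaning and a dust index — could look like this:

```python
import numpy as np

def fit_soiling_model(X, y):
    """Fit a linear soiling-loss model by ordinary least squares.
    X: (n, 2) features, e.g. [days since cleaning, dust index];
    y: (n,) observed soiling loss. The features are illustrative."""
    Xb = np.hstack([X, np.ones((len(X), 1))])      # add intercept column
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return w

def predict_soiling(w, X):
    """Predict soiling loss for new feature rows."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return Xb @ w
```

On data with an exactly linear soiling relation, the fitted model reproduces the targets with near-zero MSE; real plant data would of course need the richer models the study describes.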
A report from the World Economic Forum (2019) identified loneliness as the third-largest societal stressor in the world, mainly in Western countries. Moreover, research shows that loneliness tends to be experienced more severely by young adults than by other age groups (Rokach, 2000), as is the case for university students, who face profound periods of loneliness when attending university in a new place (Diehl et al., 2018). Digital technology, especially mental health apps (MHapps), has been viewed as a promising means of addressing this distress at universities; however, the limited evidence on this topic leaves uncertainty about how these resources affect individual well-being. Therefore, this research investigated how the gamified social mobile app Noneliness reduced loneliness rates and other associated mental health issues among students at a German university. As little work has focused on digital apps targeting loneliness, this project also set out to describe and discuss the app's design and development processes. A multimethod approach was adopted: a literature review on high-efficacy MHapp design, gamification for mental health, and loneliness interventions; User Experience Design; and Human-centered Computing. Evaluations followed the app's development iterations, assessing four versions (from prototype to beta) through quantitative and qualitative studies with university students. The main results regarding design aspects were: users' preference for minimalistic interfaces; the importance of maintaining privacy and establishing trust among users; and students' willingness to use an online space for emotional and educational support. The most used features were those related to group discussions, private chats, and university social events. The preferred gamification elements were those that provided positive reinforcement to motivate social interactions (e.g. points, levels, and achievements).
Results of a pilot randomized controlled trial with university students (N = 12) showed no statistically significant interaction in loneliness reduction among experimental group members (n = 7, χ² = 3.500, p = 0.477, Cramér's V = 0.27) who used the app continuously for six weeks. On the other hand, the app showed effects of moderate magnitude on loneliness reduction in this group, and relatively strong effects on other associated variables, such as depression and stress, in the experimental group. In addition to motivating further studies with larger samples, the findings point to a potential effectiveness of the app not only in reducing loneliness but also other variables that may be associated with this distress.
Predictive control has great potential in the home energy management domain. However, such controls need reliable predictions of the system dynamics as well as energy consumption and generation, and the actual implementation in the real system is associated with many challenges. This paper presents the implementation of predictive controls for a heat pump with thermal storage in a real single-family house with a photovoltaic rooftop system. The predictive controls make use of a novel cloud camera-based short-term solar energy prediction and an intraday prediction system that includes additional data sources. In addition, machine learning methods were used to model the dynamics of the heating system and predict loads using extensive measured data. The results of the real and simulated operation will be presented.
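The predictive idea can be caricatured in a few lines: a brute-force search over on/off schedules for the heat pump under a PV forecast. The horizon, powers, and constraints below are invented for illustration and are far simpler than the controller and ML predictions described in the paper:

```python
import itertools

def plan_heat_pump(pv_forecast, hp_power, demand_kwh, storage_max, horizon=4):
    """Toy receding-horizon sketch: choose the on/off schedule that covers
    the heat demand (within thermal storage headroom) while minimizing grid
    import, i.e. maximizing self-consumption of the forecast PV power."""
    best, best_cost = None, float("inf")
    for sched in itertools.product([0, 1], repeat=horizon):
        heat = sum(sched) * hp_power
        if heat < demand_kwh or heat > demand_kwh + storage_max:
            continue                            # must meet demand, respect storage
        grid = sum(max(0.0, on * hp_power - pv)
                   for on, pv in zip(sched, pv_forecast))
        if grid < best_cost:
            best, best_cost = sched, grid
    return best, best_cost
```

With PV available only in the first and last time slots, the planner shifts the two required run slots there and imports nothing from the grid.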
Printed circuit boards (PCBs) are a foundation of electronic devices in modern society. The fabrication of these boards requires various processes and machines. Using a robot with multiple tools can shorten the process chain compared to screen printing. In this paper, a system is presented that utilises an industrial six-axis robot to manufacture PCBs. The process flow and the conversion of the Gerber format into robot-specific commands are presented, and the advantages and challenges of applying a robot to print circuits are discussed.
Novel approaches to the design of assistive technology controls propose the use of eye tracking devices, for example for smart wheelchairs and robotic arms. The advantages of artificial feedback, especially vibrotactile feedback, have not been sufficiently explored outside their use in prostheses. Vibrotactile feedback reduces the cognitive load on the visual and auditory channels and provides tactile sensation, resulting in better use of assistive technologies. In this study, the impact of vibration on the precision and accuracy of a head-worn eye tracking device is investigated. The presented system is suitable for further research in the field of artificial feedback. Vibration was perceivable by all participants, yet it did not produce any significant deviations in precision or accuracy.
Digital, virtual environments and the metaverse are rapidly taking shape and will generate disruptive changes in the areas of ethics, privacy, safety, and how relationships between human beings develop. To uncover some of the implications that will impact those areas, this study investigates the perceptions of 101 younger people from generations Y and Z. We present a first exploratory analysis of the findings, focusing on knowledge and self-perception. Results show that these young generations seriously doubt their knowledge of the metaverse and virtual worlds – regarding both definition and usage. It is interesting to see only a medium confidence level, considering that the participants are young and from an academic environment, which should increase their interest in and affinity towards virtual worlds. Males from both generations perceive themselves as significantly more knowledgeable than females. Regarding a fitting definition, almost 40% agreed on the metaverse as a “universal and immersive virtual world that is made accessible using virtual reality and augmented reality technologies”. Regarding the topic in general, several participants (almost 40%) considered themselves sceptics or “just” users (38%). Interestingly, generation Y participants were more likely than the younger generation Z participants to identify themselves as early adopters or innovators. As a result, the considerable amount of “mixed feelings” regarding digital, virtual environments and the metaverse shows that in-depth studies on the perception of the metaverse as well as its ethical and integrity implications are required to create more accessible, inclusive, and safe digital, virtual environments.
As a continuation of the FHOP project, a microcontroller in ES2 0.7 µm technology was designed at the Fachhochschule Offenburg as part of a diploma thesis, based on the existing microprocessor core. The controller has a modular architecture with the following components: FHOP microprocessor, bus controller, wait-state/chip-select unit, 16x16-bit multiplier, 2 KB ROM, 256 bytes of RAM, watchdog, PIO with 16 configurable ports, SIO, two timers, and an interrupt controller for eight interrupt sources.
With a complexity of roughly 65,400 transistors, the chip requires a silicon area of about 27 mm². It was submitted for fabrication in September 1996 and has since been tested successfully. The microcontroller's internal ROM contains the BIOS as well as a test program. A complete development environment is available for creating software. All components will shortly be available in the FHOP design kit.
Following the successful verification of the FHOP (First Homemade Operational Processor) microprocessor kernel developed at the Fachhochschule Offenburg, an application of the kernel in an application chip is described.
Using a temperature sensor, the Thermologger ASIC is to record and store the ambient temperature of technical processes at regular intervals. When required, the measured values are transferred via a serial interface of the Thermologger ASIC to a PC and evaluated. To reduce power consumption, the device switches to a power-down mode between two temperature measurements.
The ASIC is later to be integrated into a chip card.
In spring 1995, the idea arose to design a lottery number generator as a demonstration and study object for the application of complex digital design methods. The circuit makes it possible to draw 6 distinct numbers at random out of 49. Various tones and melodies are generated while the individual numbers are drawn. The circuit is designed for simple operation. The chip was routed as a standard-cell design with an area of approximately 7 mm².
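The draw logic the chip implements in hardware — 6 distinct numbers out of 49 — is easy to state as a software analogue:

```python
import random

def draw_lotto(rng=random):
    """Software analogue of the lottery-number chip: draw 6 distinct
    numbers out of 1..49 and return them in ascending order."""
    return sorted(rng.sample(range(1, 50), 6))
```

The hardware version differs, of course, in using a digital pseudo-random source and sequential draws with tones, but the sampling semantics are the same.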
In September 1993, the project of an embeddable 16-bit microprocessor kernel, FHOP, was launched at the Fachhochschule Offenburg. Starting from the restructured design that had been successfully verified in a test chip, a considerably smaller and area-optimized structure was derived through the targeted use of structured routing and the hierarchical design capabilities of the MENTOR IC station; at 4 square millimetres, it compares favourably with commercial microprocessor kernels.
FHOP Microprocessor Kernel
(1995)
A compact microprocessor kernel was designed as a standard-cell macro for implementation in ASICs. Through the consistent use of high-level languages and CAE tools (VHDL, synthesis), a complete design was carried out in only four months. The processor is being verified in a test chip.
Creating hard macros and building a cell library using the ES2 library kit
(1993)
Instructions are given for creating hard macros with the Mentor Graphics software. The hard macros are built from standard cells of the ES2 library from EUROCHIP. They are stored in a separate library and can be reused in new chip designs.
As circuits become increasingly complex, the demands on the development of a corresponding printed circuit board grow as well. Professional printed circuit boards can be developed with the BOARD station from MENTOR Graphics.
Within three development projects at the Fachhochschule Offenburg, several elaborate layout designs were carried out with the BOARD station in various diploma theses. The experience gained in the process is reported below.
Digital phase-locked loop with numerically controlled oscillator as an LCA/microcontroller combination
(1992)
Using a stepper-motor indexer circuit as an example, the effective use of configurable logic cell arrays in combination with a microcontroller is demonstrated: the LCA, with its high operating speed, takes over that part of the circuit and performs the arithmetic computation in the control loop. Configuring the LCA from the controller's EPROM gives the design unusual flexibility and enables numerous other applications with this architecture.
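The numerically controlled oscillator at the heart of such a digital PLL is a phase accumulator: each clock it adds a tuning word modulo 2^N, and the accumulator MSB yields a square wave of frequency (step / 2^N) times the clock rate. A behavioural sketch (the bit widths are illustrative, not the original design's):

```python
def nco(step, n_bits=16, n_samples=8):
    """Phase-accumulator NCO: the accumulator wraps modulo 2**n_bits and
    the MSB is taken as the square-wave output. Output frequency is
    step / 2**n_bits of the clock rate."""
    acc, out = 0, []
    for _ in range(n_samples):
        acc = (acc + step) % (1 << n_bits)
        out.append(acc >> (n_bits - 1))        # MSB as output bit
    return out
```

In the PLL, the loop logic would adjust `step` to lock the NCO phase to the reference; here only the free-running oscillator is shown.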
Since the end of 1990, the Fachhochschule Offenburg has offered students of the communications engineering department the elective "Development of application-specific integrated circuits (ASIC)". The aim of the elective is to give students basic knowledge of ASIC design and, as shown in the following article, to offer them the opportunity to go through the entire design cycle from circuit development to the production mask.
Since the winter semester of 1990/91, the Fachhochschule Offenburg has offered students of the communications engineering department the elective ASIC design. Shortly after setting up the ASIC design centre in spring 1990, it thereby enabled future engineers to train in a field that has become indispensable in modern circuit development.
As part of a GPS project, a concept for an experimental navigation receiver was developed at the Fachhochschule Offenburg. The digital part was designed and built for this purpose. User-programmable gate arrays from Xilinx (LCAs), which had already proven themselves in another project at the university, were to be used to realize the circuit.
In the following, I would like to give the reader an overview of the GPS system and the development of the LCAs.
Since the end of 1989, a team at the FH Offenburg consisting of professors Dr. Jansen and Dr. Schüssele, the research assistants Bernd Reinke and Martin Jörger, and the diploma students Hans Fiesel and Otmar Feißt has been working on the design of a communications receiver. Within this project, called the GPS project (GPS = Global Positioning System), an experimental receiver went into operation in autumn 1990. After the test results had shown that the system concept was sound, the work turned to miniaturisation, integration, and optimisation of the circuit. In addition, the PC used up to that point was to be replaced by a microprocessor on the board. In connection with the GPS project, an analog circuit on a B500, three LCA designs, and various GALs have so far been developed in the Offenburg ASIC laboratory.
Several diploma students are currently working on the second generation of the receiver. My task is to integrate the digital logic still housed in three LCAs, together with part of the previous PC interface, into an IMS Gate Forrest. In addition, the logic must be converted from an 8-bit to a 16-bit data bus and adapted to the new peripherals of the microprocessor. This is intended to shrink the current digital board even further. A key aspect is the mapping of the numerous counter and register structures onto a Gate Forrest. Apollo workstations with Mentor software are available as tools.
For the realization of digital logic, the electronics industry offers a large variety of integrated devices that provide both maximum reliability and maximum integration density.
Depending on the integration density, a distinction is made between standard logic (TTL, CMOS, DTL, ...), programmable logic (PLA, GAL, ...), gate arrays, and ASIC devices. As integration density increases, system properties such as power consumption, space requirements, and reliability improve.
However, this is offset by greatly increased cost and development effort, which prevents the use of highly integrated devices in one-off production or small series.
With its LCA (logic cell array) product line, Xilinx now offers an alternative to existing highly integrated logic that is intended to combine the advantages of the individual product groups mentioned while eliminating their disadvantages.
As part of a diploma thesis, such an LCA device (XC3020) was used. Based on the given concrete application, it could be examined how quickly such a device can be integrated into existing hardware and what integration density it enables.
In the following, the field of application, the development, and the simulation of the LCA for the given task are presented as the main topics.
For some time, a development project has been pursued at the Fachhochschule in Offenburg whose end goal is a GPS receiver: a satellite receiver that allows an accurate three-dimensional position fix anywhere in the world. For this receiver, a large part of the analog circuitry, consisting of IF amplifier, Costas loop synchronous demodulator, and level detector, was to be integrated into the B500a transistor array from AEG. The chip design was created in the laboratory for ASIC design at the FH Offenburg during the winter semester of 1990/91. The chip was fabricated by AEG in Ulm, with a fabrication time of 6 weeks for the ASIC.
In an extensive research project, we have assessed the application of different service models by export credit agencies (ECAs) and export-import banks (EXIMs). We conducted interviews with 35 representatives of ECAs and EXIMs from 27 countries. The question guiding this study is: How do ECAs and EXIMs adopt public service models for supporting exporters? We conducted a holistic multiple case study, investigating if and how these organisations apply public service models developed by Schedler and Guenduez, and which roles of the state are relevant. We find that there is a variety of different service models used by ECAs and EXIMs, and that the service model approaches have great potential to learn from each other and innovate existing services.
In this paper, we propose an approach to gait phase detection for flat and inclined surfaces that can be used for an ankle-foot orthosis and the humanoid robot Sweaty. To cover different use cases, we use a rule-based algorithm, which offers the required flexibility and real-time capability. The inputs to the algorithm are inertial measurement unit and ankle joint angle signals. We show that, given adapted transition conditions, the algorithm reliably recognizes the gait phases both with the orthosis worn by a human participant and with Sweaty. For example, the specificity for human gait on flat surfaces is 92 %, and for the robot Sweaty 95 % of gait cycles are fully recognized. Furthermore, the algorithm allows the inclination angle of the ramp to be determined: the sensors of the orthosis yield 6.9° and those of the robot Sweaty 7.7° when walking onto the reference ramp with a slope angle of 7.9°.
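The rule-based character of such a detector can be sketched as a tiny state machine; the two phases and the contact-flag conditions below are illustrative stand-ins for the paper's IMU and ankle-angle transition conditions:

```python
def gait_phase(phase, heel_contact, toe_contact):
    """Minimal rule-based gait phase machine: heel strike switches swing to
    stance; losing both heel and toe contact (toe-off) switches stance to
    swing. The phase set and conditions are illustrative, not the paper's."""
    if phase == "swing" and heel_contact:
        return "stance"
    if phase == "stance" and not heel_contact and not toe_contact:
        return "swing"
    return phase
```

Running this per sample over the sensor stream yields the phase sequence; a real implementation would add the remaining sub-phases and hysteresis on the thresholds.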