Established robot manufacturers have developed methods to determine and optimize the accuracy of their robots, but these methods vary from one manufacturer to another. Due to the lack of published data, a comparison of robot performance is difficult. The aim of this article is to find methods for evaluating important characteristics of a robot with an accurate and cost-effective setup. A laser triangulation sensor and geometrically referenced spheres were used as a basis for comparing robot performance.
Additive manufacturing processes have developed significantly in recent years. New generative processes are currently coming onto the market, and the number of materials that can be processed additively is steadily increasing. An important task is therefore to integrate these new processes and materials into the university education of engineers. Due to the rapid change and constant development in the field of additive manufacturing, a pure transfer of knowledge is not expedient, because it becomes obsolete very quickly. Rather, students should be enabled to use their skills in such a way that they can always handle new technologies and materials independently and meaningfully.
In this paper, a new course is therefore developed in which the students work largely independently with additive manufacturing processes. For this purpose, teams of four to five students from different technical programs are formed. The teams have the task of developing and manufacturing a product using additive processes. The goal is to create a high-performance product while optimizing costs and the use of resources.
As an example, this contribution presents the development and additive manufacturing of an ornithopter, an aircraft that flies by flapping its wings. The students have to analyze and optimize the mechanics and aerodynamics of the aircraft. In addition, the rules for production-oriented design must be determined and applied. Furthermore, they should assess the costs and material consumption during development and production.
This contribution shows how the students have achieved the different learning outcomes. In addition, it becomes clear how the students independently acquired and applied their knowledge in development, design and additive manufacturing. Also, it will be demonstrated how much time the students spent on learning the different technologies.
With 22 GW of nominal power installed, Germany is the leading nation in wind energy conversion. While the number of suitable installation sites onshore is limited, the average wind speed, and thus the utilization level, is significantly higher offshore, so more and more offshore wind farms are being planned. In order to reduce the cost of building the foundations and of connecting the wind turbines to the power grid, each individual plant is designed to be as powerful as possible, and the components therefore become huge and heavy. For instance, to lift a nacelle weighing around 500 tons onto the tower, which can be up to 120 m above the water level, special ships and cranes are currently being designed and built. But these will, firstly, be very expensive and, secondly, be available only on a limited scale. Hence, the installation costs of these huge wind turbines significantly influence the profitability of a wind farm. Against this background, a joint research project supported by the German Federal Ministry for the Environment, Nature Conservation and Nuclear Safety (BMU) was started, comprising the project partners Ed. Züblin AG, Berg-idl GmbH (an engineering company and maker of special-purpose machines in Altlußheim, Germany), the IPEK (Institute for Product Development) at the University of Karlsruhe, and Hochschule Offenburg, a university of applied sciences. The project target is the conceptual design of a heavy-duty elevator which can be used to install the tower segments and the nacelle of an offshore wind turbine without a crane. The most relevant challenges in this context result from carrying extreme loads by means of comparatively filigree carrying structures. The paper shows some examples of the structural analysis and optimization work accomplished during the project. ANSYS Workbench was used for the structural analysis of the heavily loaded components. The development process was also supported by optimization tools such as TOSCA and OPTIMUS.
The linking of the FE solver and the optimizer provides important hints concerning the improvement of the topology and the dimensions of the components. Design examples illustrate the development process and the methods applied.
Solid heating/cooling ceilings essentially consist of plastic pipes cast into the load-bearing ceiling structure, through which heating or cooling water flows depending on the season. The transient behavior was investigated with the plant simulation program TRNSYS. From selected results, the requirements for the necessary control concepts can be derived. Because of their large thermal storage capacity and the resulting inertia, modified self-adapting controllers are suitable for such systems. Due to the large inertia of the solid ceiling, a direct outdoor-temperature-guided control of the water flow temperature is not sensible. Instead, a self-adapting control system should be used that determines the building constant, and thus the storage behavior of the rooms, from the preceding load profiles. To reduce the control effort, several rooms should be combined into groups. As a further input to the control concept, the storage temperature at the end of the charging period should be measured and incorporated into the program.
Even though hardly any other industry in Germany was hit as hard by the global financial crisis as mechanical engineering, it remains one of the most successful industries (cf. VDMA 2010). In many product segments, German machine builders are world market leaders. For some years, however, they have been facing challenges that are leading to a rethinking and reorientation. In particular, the still existing problem of product piracy and the competitors, mostly entering from Asia, who create enormous price pressure, have led in recent years to standard machines yielding only low profitability. With product-related services, by contrast, an operating margin of 21% on average can be realized, around four times as high as with machines (cf. Schmiedeberg/Strahle/Bendig 2010, p. 3). Estimates assume that services have the potential to reach a revenue share of up to 35% and thus a profit share of up to 60%.
Wireless Sensor Networks (WSNs) have emerged as an interesting topic in the research community due to their manifold applications. One of the main challenges in this field is the energy consumption of the nodes, which is typically quite restricted due to the required lifetime of such WSNs. To solve that problem, several energy-saving MAC protocols have been developed. One of them, recently presented by the authors, is the so-called SmartMAC, an extension to the IEEE 802.15.4 standard. In this paper, we present the implementation details of porting the SmartMAC protocol to the discrete-event network simulator NS3. We developed this NS3 module to simulate the performance as well as multi-node execution and configuration. Along with this model, we also present an energy model for the evaluation of the energy consumption. The current implementation in NS3 is based on LR-WPAN (Low-Rate Wireless Personal Area Networks) as specified by the IEEE 802.15.4 (2006) standard. The simulation results show that the SmartMAC, with its sleep and wake-up mechanisms for the transceivers, is significantly more efficient than the current NS3 MAC (Medium Access Control) scheme.
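The abstract above does not detail the energy model; a minimal state-based sketch is shown below. All state names, currents, and the supply voltage are illustrative assumptions of ours, not values from the paper:

```python
def energy_consumed_mj(state_durations_s, state_currents_ma, voltage_v=3.0):
    """Hypothetical state-based radio energy model: the transceiver spends
    a measured time in each state (tx, rx, sleep, ...) and draws a fixed
    current there; E = V * sum(I_state * t_state), here in millijoules
    (mA * s * V = mJ)."""
    return sum(voltage_v * state_currents_ma[state] * duration
               for state, duration in state_durations_s.items())
```

Such a model makes the benefit of a duty-cycling MAC visible directly: a protocol like SmartMAC saves energy mainly by shifting time from the rx/idle states into the low-current sleep state.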
For some time, a development project has been pursued at the Fachhochschule in Offenburg whose goal is a GPS receiver, i.e. a satellite receiver that allows an accurate three-dimensional position fix anywhere in the world. For this receiver, a large part of the analog circuitry, consisting of the IF amplifier, the Costas loop synchronous demodulator, and the level detector, was to be integrated into the B500a transistor array from AEG. The chip design was created in the ASIC design laboratory at FH Offenburg during the winter semester of 1990/91. The chip was manufactured by AEG in Ulm, with an ASIC fabrication time of six weeks.
Blockchain frameworks enable the immutable storage of data. A still open practical question is the so-called "oracle" problem, i.e., how real-world data is actually transferred into and out of a blockchain while preserving its integrity. We present a case study that demonstrates how to use an existing industrial-strength secure element for cryptographic software protection (Wibu CmDongle, the "dongle") as such a hardware-based oracle for the Hyperledger blockchain framework. Our scenario is that of a dentist having leased a 3D printer. This printer is initially supplied with x printing units. With each print action, the local unit counter on the attached dongle is decreased, and in parallel a unit counter is maintained in the Hyperledger-based blockchain. Once a threshold is met, the printer stops working (by means of the cryptographically protected invocation of the local print method). The blockchain is configured in such a way that chaincode is executed to increase the units again automatically (and essentially trigger any payment processes). Once this has happened, the new unit counter value is passed from the blockchain to the local dongle, allowing further print jobs to be executed.
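The unit-counter logic described above can be sketched as follows. This is an illustrative model of the mechanism only; the class and method names are our own and are not the CmDongle or Hyperledger chaincode API:

```python
class PrintUnitCounter:
    """Hypothetical sketch of the dongle-side unit counter: each print
    job decrements the counter; at the threshold, printing is blocked
    until the blockchain-side replenishment arrives."""

    def __init__(self, units, threshold=0):
        self.units = units
        self.threshold = threshold

    def can_print(self):
        return self.units > self.threshold

    def print_job(self):
        # the real system guards this call cryptographically
        if not self.can_print():
            raise PermissionError("unit counter exhausted; awaiting replenishment")
        self.units -= 1
        return self.units

    def replenish(self, added_units):
        # in the case study this would be triggered by chaincode
        # once the payment process has completed
        self.units += added_units
```

The interesting property is that the same counter is mirrored in the blockchain, so the dongle acts as the integrity-preserving bridge between the physical printer and the ledger.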
Additive manufacturing processes have evolved rapidly in recent years and now offer a wide range of manufacturing technologies and workable materials. These range from plastics and metals to paper and even polymer-plaster composites. Due to the layer-by-layer build-up of the components, additive processes have, in comparison with conventional manufacturing processes, the advantage of design freedom, i.e. the simple implementation of complex geometries. Moreover, additive processes offer the advantage of reduced resource consumption, since essentially only the material required for the actual component is consumed and no waste in the form of chips is produced. In order to exploit these advantages, the potentials of additive manufacturing and the requirements of sustainable design must already be observed in the product development process. The components and products must therefore be designed so that as little build and support material as possible is required for generative production, and thus few resources are consumed. In addition, all steps of the additive manufacturing process must be considered properly, including post-processing. This allows components to be designed so that, for instance, the effort for removing the support structure is considerably reduced, which leads to a significant reduction in manufacturing time and thus energy consumption. The implementation of these potentials in product development can be demonstrated by means of a multi-stage model. A case study shows how this model is applied in the training of Master's students in the field of product development. In a workshop, the students work as a group on the task of developing a miniature racing car under the rules of sustainable design, in compliance with the boundary conditions of additive manufacturing. In this case, Fused Deposition Modelling (FDM) with plastics as the building material is applied.
The results show how the students have dealt with the different requirements and how they have implemented them in product development and in the subsequent additive manufacturing.
The automatic processing of handwritten forms remains a challenging task, in which the detection and subsequent classification of handwritten characters are essential steps. We describe a novel approach in which both steps, detection and classification, are executed as one task by a deep neural network. The training data is therefore not annotated by hand but generated artificially from the underlying forms and existing datasets. We demonstrate that this single-task approach is superior to the state-of-the-art two-task approach. The current study focuses on handwritten Latin letters and employs the EMNIST dataset. However, limitations were identified with this dataset, necessitating further customization. Finally, an overall recognition rate of 88.28% was attained on real data obtained from a written exam.
A simple model is introduced that describes the interaction of surface acoustic waves (SAWs) with a 2D periodic array of objects on the surface that give rise to internal resonances. Such objects may be high-aspect ratio structures like micro-pillars fabricated of a material different from that of the substrate. The model allows for an approximate determination of the band structure for the acoustic modes in such systems. Results are presented for the dependence on structural parameters of a total bandgap in the non-radiative regime of a semi-infinite substrate, and it is shown how the frequency and radiation damping of vibrational modes can be determined that are associated with defects in the periodic 2D array.
The title expresses goals the Kansas Geological Survey (KGS) has been working toward for some time. This report extends concepts and objectives developed while working on an earlier effort for effective interactive digital maps on the Internet. That work was reported to the 1998 DMT Workshop in Champaign, Illinois (Ross, 1998). The current project goes beyond previous efforts that focused on methods for serving the contents of a geographic information system (GIS): the points, lines, and polygons representing features of the digital geologic map and the data in the attribute tables of the GIS describing those features.
Artificial intelligence (AI), and in particular machine learning algorithms, are of increasing importance in many application areas, but interpretability and understandability, as well as responsibility, accountability, and fairness of the algorithms' results, all crucial for increasing humans' trust in the systems, are still largely missing. Big industrial players, including Google, Microsoft, and Apple, have become aware of this gap and recently published their own guidelines for the use of AI in order to promote fairness, trust, interpretability, and other goals. Interactive visualization is one of the technologies that may help to increase trust in AI systems. During the seminar, we discussed the requirements for trustworthy AI systems as well as the technological possibilities provided by interactive visualizations to increase human trust in AI.
The use of artificial intelligence continues to impact a broad variety of domains, application areas, and people. However, interpretability, understandability, responsibility, accountability, and fairness of the algorithms' results, all crucial for increasing humans' trust in the systems, are still largely missing. The purpose of this seminar is to understand how these components factor into a holistic view of trust. Further, this seminar seeks to identify design guidelines and best practices for building interactive visualization systems that calibrate trust.
Not only is the number of new devices constantly increasing, but so are their application complexity and power. Most of their applications are in optics, photonics, acoustics, and mobile devices. In most media devices, working speed and functionality are achieved by the strategic use of digital signal processors and new-generation microcontrollers. Considering these dynamics of media development, the authors present how to integrate microcontrollers and digital signal processors into the curricula of media technology lectures using adequate content. This also includes interdisciplinary content based on applying the acquired knowledge in media software. These entries offer a deeper understanding of photonics, acoustics, and media engineering.
After the successful International Year of Light 2015, the idea of making it sustainable became increasingly prominent. After a preparatory year, the International Day of Light was launched for the first time on 16 May 2018. This event was marked by a public celebration at the UNESCO headquarters in Paris. In this paper, we present our projects dedicated to the International Day of Light in Paris. Together with a group of students from our university, we had the special opportunity to be integrated into the program of the opening ceremony at UNESCO in Paris. With our interdisciplinary projects, we have tried to build a bridge between optics, photonics, art, and media installations.
The United Nations declared 2015 the International Year of Light and Light-based Technologies (IYL2015) [1]. As a main result, public interest is focused on both the achievements and the new frontiers of optics and photonics. This opens up new perspectives in the teaching and training of optics and photonics. In the first part of the paper, the author presents the numerous anniversaries occurring in the International Year of Light 2015 together with their importance to the development of science and technology. In the second part, we report on an interactive video projection at the opening ceremony of the IYL2015 in Paris on January 19-20, 2015. Students of Offenburg University established an interactive video projection which visualizes Twitter and Facebook messages posted with the hashtag #iyl2015 using a mapping technique. Thus, the worldwide community could interactively be part of the opening ceremony. Finally, upcoming global community projects related to optics and astronomy events are presented.
Background: Cardiac resynchronization therapy (CRT) with biventricular (BV) pacing is an established therapy for heart failure (HF) patients (P) with sinus rhythm, reduced left ventricular (LV) ejection fraction (EF) and electrical ventricular desynchronization. The aim of the study was to evaluate electrical interventricular delay (IVD) and left ventricular delay (LVD) in right ventricular (RV) pacemaker pacing before upgrading to CRT BV pacing.
Methods: HF P (n=11, age 69.0 ± 7.9 years, 1 female, 10 males) with DDD pacemaker (n=10), DDD defibrillator (n=1), RV pacing, New York Heart Association (NYHA) class 3.0 ± 0.2 and 24.5 ± 4.9 % LVEF were measured by surface ECG and transesophageal bipolar LV ECG before upgrading to CRT defibrillator (n=8) and CRT pacemaker (n=3). IVD was measured between onset of QRS in the surface ECG and onset of LV signal in the transesophageal ECG. LVD was measured between onset and offset of LV signal in the transesophageal ECG. CRT atrioventricular (AV) and BV pacing delay were optimized by impedance cardiography.
Results: Interventricular and intraventricular desynchronization in RV pacemaker pacing were 228.2 ± 44.8 ms QRS duration, 86.5 ± 32.8 ms IVD, 94.4 ± 23.8 ms LVD, and a QRS-IVD ratio of 2.6 ± 0.8, with correlation between IVD and QRS-IVD ratio (r=-0.668, P=0.0248), and a QRS-LVD ratio of 2.3 ± 0.7. The LVEF-IVD ratio was 0.3 ± 0.1, with correlation between IVD and LVEF-IVD ratio (r=-0.8063, P=0.00272) and between QRS duration and LVEF-IVD ratio (r=-0.7251, P=0.01157). Optimal sensing and pacing AV delays were 128.3 ± 24.8 ms after atrial sensing (n=6) and 173.3 ± 40.4 ms after atrial pacing (n=3). Optimal BV pacing delay was -4.3 ± 11.3 ms between LV and RV pacing (n=7). During 30.4 ± 29.6 months of CRT follow-up, the NYHA class improved from 3.1 ± 0.2 to 2.2 ± 0.3.
Conclusions: Transesophageal electrical IVD and LVD in RV pacemaker pacing may be additional useful ventricular desynchronization parameters to improve P selection for upgrading RV pacemaker pacing to CRT BV pacing.
Data is ever increasing in the computing world. Due to the advancement of cloud technology, the volume of data has grown within a short period of time and will keep increasing. Providing transparency, privacy, and security to cloud users is becoming more and more challenging along with the growing volume of data and use of cloud services. We propose a new approach to address this challenge by recording user events in the cloud ecosystem into log files and applying the MAR principle: 1) Monitoring, 2) Analyzing, and 3) Reporting.
The Datagram Transport Layer Security (DTLS) protocol has been designed to provide end-to-end security over unreliable communication links. Where its connection establishment is concerned, DTLS copes with potential loss of protocol messages by implementing its own loss detection and retransmission scheme. However, the default scheme turns out to be suboptimal for links with high transmission error rates and low data rates, such as wireless links in electromagnetically harsh industrial environments. Therefore, in this paper, as a first step we provide an analysis of the standard DTLS handshake's performance under such adverse transmission conditions. Our studies are based on simulations that model message loss as the result of bit transmission errors. We consider several handshake variants, including endpoint authentication via pre-shared keys or certificates. As a second step, we propose and evaluate modifications to the way message loss is dealt with during the handshake, making DTLS deployable in situations which are prohibitive for default DTLS.
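To illustrate why the default scheme suffers on lossy links, a simplified model can help (this is our own simplification for illustration, not the paper's simulation setup): DTLS retransmits whole handshake flights, so a flight of n messages over a link with per-message loss probability p only completes when every message of some attempt gets through.

```python
def flight_success_prob(n_messages, p_loss, max_retx):
    """Probability that a handshake flight of n messages is delivered
    completely within 1 + max_retx attempts, assuming independent
    per-message losses and whole-flight retransmission."""
    p_flight_ok = (1.0 - p_loss) ** n_messages      # one attempt succeeds
    return 1.0 - (1.0 - p_flight_ok) ** (1 + max_retx)

def msg_loss_from_ber(ber, msg_len_bytes):
    """Per-message loss probability when losses stem from independent
    bit transmission errors, as in the simulations described above."""
    return 1.0 - (1.0 - ber) ** (8 * msg_len_bytes)
```

The model makes the core problem visible: certificate-based handshakes have larger flights (bigger n and longer messages, hence higher per-message loss for a given bit error rate), so their completion probability collapses much faster than that of pre-shared-key handshakes.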
A fundamental and still largely unsolved question in the context of Generative Adversarial Networks is whether they are truly able to capture the real data distribution and, consequently, to sample from it. In particular, the multidimensional nature of image distributions makes evaluating the diversity of GAN distributions complex. Existing approaches provide only a partial understanding of this issue, leaving the question unanswered. In this work, we introduce a loop-training scheme for the systematic investigation of observable shifts between the distributions of real training data and GAN-generated data. Additionally, we introduce several bounded measures for distribution shifts, which are both easy to compute and to interpret. Overall, the combination of these methods allows an explorative investigation of innate limitations of current GAN algorithms. Our experiments on different datasets and multiple state-of-the-art GAN architectures reveal large shifts between input and output distributions, indicating that existing theoretical guarantees on the convergence of output distributions do not appear to hold in practice.
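The paper's specific shift measures are not reproduced here; as one standard example of an easy-to-compute measure that is bounded in [0, 1], the total variation distance between binned feature histograms of real and generated data could be used:

```python
def total_variation(p, q):
    """Total variation distance between two discrete distributions
    given over the same bins: 0 means identical, 1 means disjoint
    support. Bounded and directly interpretable as the largest
    possible difference in probability assigned to any event."""
    assert abs(sum(p) - 1.0) < 1e-9 and abs(sum(q) - 1.0) < 1e-9
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))
```

In a loop-training setting, such a measure can be tracked per iteration to quantify how far the generated distribution drifts from the training distribution.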
Skin cancer detection proves to be complicated and highly dependent on the examiner's skills. Millimeter-wave technologies seem to be a promising aid for the detection of skin cancer. The different water content of the skin area affected by cancer compared to healthy skin changes its reflective properties. Due to the limited available data on the dielectric properties of skin cancer, especially in comparison to surrounding healthy skin, accurate simulations and evaluations are quite challenging, and comparing results across different approaches and starting points can be difficult. In this paper, the Effective Medium Theory is applied to model skin cancer, providing permittivity values dependent on the water content.
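The abstract does not state which effective-medium mixing rule is used; the Maxwell Garnett formula is one common choice and is sketched below. Treating tissue as a host medium with water inclusions, and all parameter values in the example, are assumptions of ours for illustration:

```python
def maxwell_garnett(eps_host, eps_incl, fill_fraction):
    """Maxwell Garnett effective permittivity for spherical inclusions
    (e.g., water, eps_incl) embedded in a host medium (e.g., dry
    tissue, eps_host) at volume fraction fill_fraction in [0, 1]."""
    d = eps_incl - eps_host
    num = eps_incl + 2.0 * eps_host + 2.0 * fill_fraction * d
    den = eps_incl + 2.0 * eps_host - fill_fraction * d
    return eps_host * num / den
```

Because the formula is purely arithmetic, complex permittivities (capturing millimeter-wave losses) can be passed in unchanged; the effective value then interpolates between host and inclusion as the water content rises.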
Investigation of the Angle Dependency of Self-Calibration in Multiple-Input-Multiple-Output Radars
(2021)
Multiple-Input-Multiple-Output (MIMO) is a key technology for improving the angular (spatial) resolution of radars. In MIMO radars, amplitude and phase errors in the antenna elements lead to an increased sidelobe level and a misalignment of the mainlobe, degrading the performance of the antenna channels. Firstly, this paper presents an analysis of the effect of amplitude and phase errors on the angular spectrum using Monte Carlo simulations. Then, the results are compared with measurements. Finally, error correction with a self-calibration method is proposed and its angle dependency is evaluated. It is shown that the values of the errors change with the incident angle, so an angle-dependent calibration is required.
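The effect of per-channel errors on the angular spectrum can be reproduced in miniature with a simple array model. The array geometry and error magnitudes below are illustrative assumptions, not the paper's measurement setup:

```python
import cmath
import math
import random

def array_factor(weights, theta_deg, d_over_lambda=0.5):
    """Magnitude of a uniform linear array's response toward theta
    (broadside = 0 deg) for the given complex element weights."""
    s = math.sin(math.radians(theta_deg))
    return abs(sum(w * cmath.exp(1j * 2.0 * math.pi * d_over_lambda * n * s)
                   for n, w in enumerate(weights)))

def perturbed_weights(n_elements, amp_sigma, phase_sigma_rad, rng):
    """Draw one Monte Carlo realization of per-element amplitude
    and phase errors around the ideal unit weights."""
    return [(1.0 + rng.gauss(0.0, amp_sigma))
            * cmath.exp(1j * rng.gauss(0.0, phase_sigma_rad))
            for _ in range(n_elements)]
```

Averaging `array_factor` over many `perturbed_weights` draws at off-mainlobe angles shows the sidelobe floor rising with the error variance, which is the degradation the self-calibration is meant to correct.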
Investigation on Bowtie Antennas Operating at Very Low Frequencies for Ground Penetrating Radar
(2023)
The efficiency of Ground Penetrating Radar (GPR) systems significantly depends on the antenna performance as the signal has to propagate through lossy and inhomogeneous media. GPR antennas should have a low operating frequency for greater penetration depth, high gain and efficiency to increase the receiving power and should be compact and lightweight for ease of GPR surveying. In this paper, two different designs of Bowtie antennas operating at very low frequencies are proposed and analyzed.
Ultra-wideband (UWB) signals are well suited both for short-range wireless communication and for high-precision localization applications. Channel impulse response (CIR) analysis in UWB systems is a major element of localization estimation. In this paper, practical aspects of CIR are presented. In particular, a technique for constructing the accumulated echo-gram of a multipath-delayed signal is proposed. Decawave hardware was used to demonstrate the technique for analyzing the fine structure of signals with sub-nanosecond resolution. Temporal stability, reliability, and two-way characteristics of such echo-grams are discussed as well. The results of using two EVK1000 radio modules as a radar installation to detect a target in indoor environments prove that low-cost UWB intrusion detection and through-the-wall-vision systems might be developed using the proposed technique.
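The accumulation idea can be sketched as a plain averaging scheme over repeated CIR estimates (an illustration of ours; the paper's exact construction may differ):

```python
def accumulate_echogram(cir_sweeps):
    """Average the tap magnitudes of repeated complex CIR estimates.
    Echoes from stable multipath components occupy the same taps in
    every sweep and survive the averaging, while noise averages down,
    sharpening the accumulated echo-gram."""
    n = len(cir_sweeps)
    n_taps = len(cir_sweeps[0])
    acc = [0.0] * n_taps
    for sweep in cir_sweeps:
        for i, tap in enumerate(sweep):
            acc[i] += abs(tap) / n
    return acc
```

With sub-nanosecond tap spacing, peaks in the accumulated echo-gram map directly to propagation path lengths, which is what makes the radar-style target detection described above possible.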
The Bluetooth community is in the process of developing mesh technology. This is highly promising, as Bluetooth is widely available in smartphones and tablet PCs, allowing easy access to the Internet of Things. In this paper, we investigate the performance of Bluetooth-enabled mesh networking to identify its strengths and weaknesses. A demonstrator for this protocol has been implemented using the Fruity Mesh protocol implementation. Extensive test cases have been executed to measure the performance, reliability, power consumption, and delay. For this, an Automated Physical Testbed (APTB), which emulates the physical channels, has been used. The results of these measurements are considered useful for real implementations of Bluetooth mesh, not only for home and building automation but also for industrial automation.
Theoretical details about optics and photonics are not common knowledge nowadays. Physicists are keen to scientifically explain ‘light,’ which has a huge impact on our lives. It is necessary to examine it from multiple perspectives and to make the knowledge accessible to the public in an interdisciplinary, scientifically well-grounded and appealing medial way. To allow an information exchange on a global scale, our project “Invisible Light” establishes a worldwide accessible platform. Its contents will not be created by a single instance, but user-generated, with the help of the global community. The article describes the infotainment portal “Invisible Light,” which stores scientific articles about light and photonics and makes them accessible worldwide. All articles are tagged with geo-coordinates, so they can be clearly identified and localized. A smartphone application is used for visualization, transmitting the information to users in real time by means of an augmented reality application. Scientific information is made accessible for a broad audience and in an attractive manner.
Due to its potential in improving the efficiency of energy supply, smart energy metering (SEM) has become an area of interest with the surge in Internet of Things (IoT). SEM entails remote monitoring and control of the sensors and actuators associated with the energy supply system. This provides a flexible platform to conceive and implement new data driven Demand Side Management (DSM) mechanisms. The IoT enablement allows the data to be gathered and analyzed at requisite granularity. In addition to efficient use of energy resources and provisioning of power, developing countries face an additional challenge of temporal mismatch in generation capacity and load factors. This leads to widespread deployment of inefficient and expensive Uninterruptible Power Supply (UPS) solutions for limited power provisioning during resulting blackouts. Our proposed “Soft-UPS” allows dynamic matching of load and generation through a combination of managed curtailment. This eliminates inefficiencies in the energy and power value chain and allows a data-driven approach to solving a widespread problem in developing countries, simultaneously reducing both upfront and running costs of conventional UPS and storage. A scalable and modular platform is proposed and implemented in this paper. The architecture employs “WiMODino” using LoRaWAN with a “Lite Gateway” and SQLite repository for data storage. Role based access to the system through an android application has also been demonstrated for monitoring and control.
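The managed-curtailment idea behind the "Soft-UPS" can be sketched as a priority-ordered selection of loads against the available generation. The load names, priorities, and wattages below are purely illustrative, not values from the deployed system:

```python
def curtail(loads, generation_w):
    """Dynamic matching of load and generation by managed curtailment:
    keep loads in priority order (lower number = more critical) while
    the running total stays within the available generation; the
    remaining loads are shed instead of being carried by a UPS."""
    kept, total = [], 0.0
    for name, watts, priority in sorted(loads, key=lambda l: l[2]):
        if total + watts <= generation_w:
            kept.append(name)
            total += watts
    return kept
```

In the IoT architecture described above, the priorities and measured wattages would come from the LoRaWAN-connected sensors and the role-based Android application, so the shed set can be recomputed whenever generation capacity changes.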
IPv6 over LoRaWAN™
(2016)
Although short-range wireless communication explicitly targets local and regional applications, range continues to be a highly important issue. The range directly depends on the so-called link budget, which can be increased by the choice of modulation and coding schemes. The recent transceiver generation in particular comes with extensive and flexible support for software-defined radio (SDR). The SX127× family from Semtech Corp. is a member of this device class and promises significant benefits for range, robust performance, and battery lifetime compared to competing technologies. This contribution gives a short overview of the technologies to support Long Range (LoRa™) and the corresponding Layer 2 protocol (LoRaWAN™). It particularly describes the possibility to combine the Internet Protocol, i.e. IPv6, into LoRaWAN™, so that it can be directly integrated into a full-fledged Internet of Things (IoT). The proposed solution, which we name 6LoRaWAN, has been implemented and tested; results of the experiments are also shown in this paper.
Recently, RobustBench (Croce et al. 2020) has become a widely recognized benchmark for the adversarial robustness of image classification networks. In its most commonly reported sub-task, RobustBench evaluates and ranks the adversarial robustness of trained neural networks on CIFAR10 under AutoAttack (Croce and Hein 2020b) with l∞ perturbations limited to ϵ = 8/255. With leading scores of the currently best performing models at around 60% of the baseline, it is fair to characterize this benchmark as quite challenging. Despite its general acceptance in recent literature, we aim to foster discussion about the suitability of RobustBench as a key indicator for robustness that generalizes to practical applications. Our line of argumentation against this is two-fold and supported by extensive experiments presented in this paper: We argue that I) the alteration of data by AutoAttack with l∞, ϵ = 8/255 is unrealistically strong, resulting in close to perfect detection rates of adversarial samples, even by simple detection algorithms and human observers.
We also show that other attack methods are much harder to detect while achieving similar success rates. II) Results on low-resolution data sets like CIFAR10 do not generalize well to higher-resolution images, as gradient-based attacks appear to become even more detectable with increasing resolution.
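To give a feel for the perturbation budget discussed above, the following sketch applies a worst-case l∞ step of ϵ = 8/255 to a [0, 1]-scaled image, as an FGSM-style attack would; the function name and grey test image are illustrative, not from the paper.

```python
import numpy as np

def linf_perturb(image, grad_sign, eps=8 / 255):
    """Apply a worst-case l-infinity perturbation of budget eps
    (the RobustBench setting) to a [0, 1]-scaled image.
    grad_sign is the sign of the loss gradient w.r.t. the input,
    as used by FGSM-style attacks."""
    adv = image + eps * grad_sign      # move every pixel by up to eps
    return np.clip(adv, 0.0, 1.0)      # stay in the valid pixel range

# Illustrative example: every pixel shifts by up to 8/255, i.e. about
# 3% of the dynamic range, which is why such perturbations are
# comparatively easy to detect.
img = np.full((3, 32, 32), 0.5)        # grey CIFAR10-sized image
adv = linf_perturb(img, np.ones_like(img))
print(np.max(np.abs(adv - img)))       # l-infinity distance to the original
```

Since the example starts from a mid-grey image, no clipping occurs and the printed distance equals the budget exactly.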
iSign - internet based simulation of guided wave propagation - is a learning environment for online laboratory experiments. On the server side, the client-server architecture uses the tool F3D, which computes electromagnetic fields in 3D structures. An Apache web server (running on Linux) serves the theory/exercise part and the learning-system administration. An HP-UX simulation server controls and monitors the multi-stage simulation process. A MySQL database enables dynamic web-page generation and stores simulation, project, and user data. Java applets, JavaServer Pages, and JavaBeans generate the interactive client interface for input, result presentation, and online virtual reality. The uniformly designed user interface hides the complexity of the system.
In this paper, the authors focus on the description of polarization with the help of the Jones calculus and on the application of polarization in photography. Furthermore, the effect of the circular polarization filter is described using the Jones calculus. Finally, an enhancement of the artistic and creative possibilities in photography through quantization or parametrization of the Jones matrices is presented.
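As a minimal worked example of the formalism (in one common sign convention, with global phase factors omitted; both the quarter-wave-plate matrix and the handedness of the resulting state depend on the convention chosen), a circular polarizing filter can be modeled as a horizontal linear polarizer followed by a quarter-wave plate at 45°:

```latex
% Circular polarizing filter: horizontal linear polarizer followed by
% a quarter-wave plate with fast axis at 45 degrees
% (one common sign convention; global phases omitted):
\[
J_{\mathrm{CPL}}
  = J_{\mathrm{QWP}}(45^\circ)\, J_{\mathrm{LP}}(0^\circ)
  = \frac{1}{\sqrt{2}}
    \begin{pmatrix} 1 & i \\ i & 1 \end{pmatrix}
    \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}
  = \frac{1}{\sqrt{2}}
    \begin{pmatrix} 1 & 0 \\ i & 0 \end{pmatrix},
\qquad
J_{\mathrm{CPL}} \begin{pmatrix} E_x \\ E_y \end{pmatrix}
  = \frac{E_x}{\sqrt{2}} \begin{pmatrix} 1 \\ i \end{pmatrix}.
\]
```

Any input field thus emerges in a circular polarization state whose intensity depends only on the horizontal component, which is the behavior photographers exploit.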
The collection of selected papers of the TRIZ Future Conference 2017 is available in open access and is included in the Innovator, the journal of the European TRIZ Association.
Deafblindness, a form of dual sensory impairment, significantly impacts communication, access to information and mobility. Independent navigation and wayfinding are main challenges faced by individuals living with combined hearing and visual impairments. We developed a haptic wearable that provides sensory substitution and navigational cues for users with deafblindness by conveying vibrotactile signals onto the body. Vibrotactile signals on the waist area convey directional and proximity information collected via a fisheye camera attached to the garment, while semantic information is provided with a tapping system on the shoulders. A playful scenario called “Keep Your Distance” was designed to test the navigation system: individuals with deafblindness were “secret agents” that needed to follow a “suspect”, but they should keep an optimal distance of 1.5 meters from the other person to win the game. Preliminary findings suggest that individuals with deafblindness enjoyed the experience and were generally able to follow the directional cues.
This paper presents the development of an energy harvesting solution for a driven tool holder. The tool holder environment was analysed, a test stand was built, and the designed electromagnetic rotation harvester was evaluated. The reported harvester is based on low-cost off-the-shelf components and 3D-printed parts. The utilisation of SMD coils allows easy adaptation to changing parameters of the integration area. Energy harvesting in tool holders enables predictive maintenance and condition monitoring in industrial production. These capabilities are mandatory nowadays with regard to the Industrial Internet of Things (IIoT). A reliable energy source is key for continuous monitoring, and changing batteries becomes obsolete. The results provide useful insight for future harvesters.
The purpose of this study was to 1) compare knee joint kinematics and kinetics of fake-and-cut tasks of varying complexity in 51 female handball players and 2) present a case study of one athlete who ruptured her ACL three weeks after data collection. External knee joint moments and knee joint angles in all planes at the instant of the peak external knee abduction moment (KAM), as well as moment and angle time curves, were analyzed. Peak KAMs and knee internal rotation moments were substantially higher than published values obtained during simple change-of-direction tasks and, along with flexion angles, differed significantly between the tasks. Introducing a ball reception and a static defender increased joint loads, while these partially decreased again when anticipation was lacking. Our results suggest using game-specific assessments of injury risk, even though higher complexity levels do not directly increase knee loading. Extreme values of several risk factors for the athlete injured after testing highlight the need for and usefulness of appropriate screenings.
Duplicate detection, search, and consolidation for customer and business-partner data, so-called “identity resolution”, is a prerequisite for successful customer relationship management and customer experience management, but also for risk management to minimize fraud risks and comply with regulatory requirements, as well as for many other use cases. These systems, however, are highly complex and must be adapted individually to customer-specific requirements. Learning-based methods offer great potential for automating this adaptation. In this contribution, we present learning-based methods, practicable for an SME, for the automatic configuration of business rules in duplicate detection systems. Capabilities were developed that allow domain users to adapt and configure the match system to individual business rules (e.g. relocation detection, blocklist matching) in an example-driven way. The developed methods were evaluated and integrated into a prototype solution. We were able to show that our machine-learning method improved the business rules created by a domain expert for the duplicate detection system “identity”, while also reducing the time required to do so.
The increased use of IT equipment at office workplaces has significantly raised the area-specific internal cooling loads. Workplaces near windows require lighting conditions suitable for screen work. Experimental results are presented for a newly developed cooling convector in the window area in combination with interior blinds. Further experiments confirm that this cooling convector is also suitable for combined use with chilled ceilings and air-handling (HVAC) systems. The cooling convector can also be used to replace induction units, e.g. those connected according to the four-pipe principle. The investigations have shown that the combination of chilled ceiling, cooling convector, and HVAC system can remove large area-specific cooling loads without adversely affecting thermal comfort. The air flow can be reduced to the level required for hygiene or dehumidification, which lowers operating costs. On average, the three components contribute to the total cooling load as follows: 15% HVAC system, 15% cooling convector, and 70% chilled ceiling.
Case studies are intended to illustrate theoretical course content on business intelligence and data warehousing concepts and to place it in a practice-oriented context. In addition, students should acquire implementation-oriented competencies with systems relevant to practice. To assess these competencies and to deepen engagement with software and concepts, projects have proven valuable in many cases as a complement to case studies and written exams. The talk outlines the options lecturers have within the SAP Data Warehouse Cloud (SAP DWC) platform provided by the UCC for conducting student projects on data warehousing and analytics. The author reports on his experience supervising more than 30 projects with SAP DWC from various degree programmes since 2020. In addition to an overview of the topics chosen by students, selected project results are presented. The mode of execution and existing system-side limitations are also discussed. Concrete advice and measures are presented for lecturers who want to carry out their own projects successfully with their students.
Kundendaten im E-Commerce – Optimierungspotenzial im Checkout-Prozess des deutschen Online-Handels
(2023)
Designing a user-friendly checkout process is of great importance for e-commerce success. Collecting customer data is an important part of the customer journey. On the one hand, retailers want to learn as much as possible about their customers in order to deliver precisely targeted offers and marketing measures and to create the perfect shopping experience. On the other hand, customers want to concentrate on the purchase when shopping online and expect a smooth process. The checkout process is a critical point in this regard, which is also reflected in high shopping-cart abandonment rates. There is still much room for improvement if online shoppers are to be won over in the long term. With the aim of better understanding the status quo in German online retail and optimizing usability and user experience for a higher conversion rate, the research presented here examined the sign-in and checkout processes of the 100 highest-revenue online shops in Germany. The results of the study are presented, showing where optimization potential exists, e.g. overly complicated forms, unnecessary data requests, or forced registration, and suggestions for online retail practice are discussed.
Both the supply of and demand for instructional videos are currently growing almost explosively; MOOCs, however, account for only a small share. While students perceive a positive effect of instructional videos on their own learning success, this effect is often not demonstrable under objective examination. For lecturers, this opens up a tension, and as one way of resolving it, this contribution presents the production of short videos and their publication on YouTube, tested by the author in a self-experiment.
Since the winter semester 1990/91, the Fachhochschule Offenburg has offered students of the Department of Communications Engineering the elective course ASIC Design. Shortly after establishing its ASIC Design Center in spring 1990, it thereby enabled future engineers to train in a field that has become indispensable in modern circuit development.
The excessive control signaling required for dynamic scheduling in Long Term Evolution networks impedes the deployment of ultra-reliable low-latency applications. Semi-persistent scheduling was originally designed for constant bit-rate voice applications; however, its very low control overhead makes it a potential latency reduction technique in Long Term Evolution. In this paper, we investigate resource scheduling in narrowband fourth-generation Long Term Evolution networks through Network Simulator (NS3) simulations. The current release of NS3 does not include a semi-persistent scheduler for the Long Term Evolution module. Therefore, we developed the semi-persistent scheduling feature in NS3 to evaluate and compare the performance in terms of uplink latency. We evaluate dynamic scheduling and semi-persistent scheduling in order to analyze the impact of resource scheduling methods on uplink latency.
The next generation of cellular networks is expected to improve reliability, energy efficiency, data rate, capacity, and latency. Originally, Machine Type Communication (MTC) was designed for low-bandwidth, high-latency applications such as environmental sensing or smart waste bins, but there is additional demand for applications with low latency requirements, such as industrial automation and driverless cars. Improvements are required in 4G Long Term Evolution (LTE) networks towards the development of next-generation cellular networks providing very low latency and high reliability. To this end, we present an in-depth analysis of the parameters that contribute to latency in 4G networks, along with a description of latency reduction techniques. We implement and validate these latency reduction techniques in the open-source network simulator (NS3) for the narrowband user equipment category Cat-M1 (LTE-M) to analyze the improvements. The results presented are a step towards enabling narrowband Ultra-Reliable Low-Latency Communication (URLLC) networks.
Enabling ultra-low latency is one of the major drivers for the development of future cellular networks to support delay sensitive applications including factory automation, autonomous vehicles and tactile internet. Narrowband Internet of Things (NB-IoT) is a 3rd Generation Partnership Project (3GPP) Release 13 standardized cellular network currently optimized for massive Machine Type Communication (mMTC). To reduce the latency in cellular networks, 3GPP has proposed some latency reduction techniques that include Semi Persistent Scheduling (SPS) and short Transmission Time Interval (sTTI). In this paper, we investigate the potential of adopting both techniques in NB-IoT networks and provide a comprehensive performance evaluation. We firstly analyze these techniques and then implement them in an open-source network simulator (NS3). Simulations are performed with a focus on Cat-NB1 User Equipment (UE) category to evaluate the uplink user-plane latency. Our results show that SPS and sTTI have the potential to greatly reduce the latency in NB-IoT systems. We believe that both techniques can be integrated into NB-IoT systems to position NB-IoT as a preferred technology for low data rate Ultra-Reliable Low-Latency Communication (URLLC) applications before 5G has been fully rolled out.
Generative adversarial networks are the state-of-the-art approach to learned synthetic image generation. Although early successes were mostly unsupervised, this trend has gradually been superseded by approaches based on labelled data. These supervised methods allow a much finer-grained control of the output image, offering more flexibility and stability. Nevertheless, the main drawback of such models is the necessity of annotated data. In this work, we introduce a novel framework that benefits from two popular learning techniques, adversarial training and representation learning, and takes a step towards unsupervised conditional GANs. In particular, our approach exploits the structure of a latent space (learned by the representation learning) and employs it to condition the generative model. In this way, we break the traditional dependency between condition and label, substituting the latter by unsupervised features coming from the latent space. Finally, we show that this new technique is able to produce samples on demand while keeping the quality of its supervised counterpart.
In anisotropic media, the existence of leaky surface acoustic waves is a well-known phenomenon. Very recently, their analogs at the apex of an elastic silicon wedge have been found in experiments using laser-ultrasonics. In addition to a wedge-wave (WW) pulse with low speed, a pseudo-wedge wave (p-WW) pulse was found with a velocity higher than the velocity of shear bulk waves, propagating in the same direction. With a probe-beam-deflection technique, the propagation of the WW pulses was monitored on one of the faces of the wedge at variable distance from the apex. In this way, their depth structure and the leakage of the p-WW could be visualized directly. Calculations were carried out using a method based on a representation of the displacement field in Laguerre functions. This method has been validated by calculating the surface density of states in anisotropic media and comparing the results with those obtained from the surface Green's tensor. The approach has then been extended to the continuum of acoustic modes in infinite wedges with fixed wave-vector along the apex. These calculations confirmed the measured speeds of the WW and p-WW pulses.
As engineering graduates and specialists frequently lack the advanced skills and knowledge required to run eco-innovation systematically, the paper proposes new learning materials and educational tools in the field of eco-innovation and evaluates the learning experience and outcomes. The programme is aimed at strengthening students' skills and motivation to identify and creatively overcome secondary eco-contradictions in cases where additional environmental problems appear as negative side effects of eco-friendly solutions. The paper evaluates the efficiency of the proposed interdisciplinary tool for systematic eco-innovation, including creative semi-automatic knowledge-based idea generation and concept development. It analyses the learning experience and identifies the factors that impact the eco-innovation performance of the students.
The environmentally friendly implementation of new technologies and eco-innovative solutions often faces additional secondary ecological problems. On the other hand, existing biological systems show a lower environmental impact than human-made products or technologies. The paper defines a research agenda for the identification of the underlying eco-inventive principles used in natural systems created through evolution. Finally, the paper proposes a comprehensive method for capturing eco-innovation principles in biological systems, complementary to existing biomimetic methods and the TRIZ methodology, and illustrates it with an example.
In this work, we evaluate two different image clustering objectives, k-means clustering and correlation clustering, in the context of Triplet Loss induced feature space embeddings. Specifically, we train a convolutional neural network to learn discriminative features by optimizing two popular versions of the Triplet Loss in order to study their clustering properties under the assumption of noisy labels. Additionally, we propose a new, simple Triplet Loss formulation, which shows desirable properties with respect to formal clustering objectives and outperforms the existing methods. We evaluate all three Triplet loss formulations for K-means and correlation clustering on the CIFAR-10 image classification dataset.
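For readers unfamiliar with the loss family discussed above, the following sketch shows the standard hinge formulation of the Triplet Loss on embedding vectors; it is a generic textbook version, not necessarily the exact variant or the new formulation proposed in the paper, and the example vectors are illustrative.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard (hinge) Triplet Loss: pull the anchor towards the
    positive and push it away from the negative until the distance
    gap exceeds the margin."""
    d_ap = np.linalg.norm(anchor - positive)   # anchor-positive distance
    d_an = np.linalg.norm(anchor - negative)   # anchor-negative distance
    return max(0.0, d_ap - d_an + margin)

a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])   # same class, embedded nearby
n = np.array([1.0, 0.0])   # different class, embedded far away
print(triplet_loss(a, p, n))   # 0.0: this triplet already satisfies the margin
```

Triplets that already satisfy the margin contribute zero loss, which is why mining "hard" triplets matters in practice when training such embeddings.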
Nowadays, many applications, companies, and parts of society are expected to be always available online. However, according to [Times, Oct, 31 2011], 73% of the world population do not use the internet and thus are not “online” at all. The most common reasons for not being “online” are expensive personal computer equipment and high costs for data connections, especially in developing countries that comprise most of the world's population (e.g. parts of Africa, Asia, Central and South America). However, it seems that these countries are leap-frogging the “PC and landline” age and moving directly to the “mobile” age. Decreasing prices for smartphones with internet connectivity and PC-like operating systems make it more affordable for these parts of the world population to join the “always-online” community. Storing learning content in a way accessible to everyone, including users of mobile phones and smartphones, therefore seems beneficial. This way, learning content can be accessed by personal computers as well as by mobile phones and smartphones and thus be available to a wide range of devices and users. A new trend in Internet technologies is to go to “the cloud”. This paper discusses the changes, challenges, and risks of storing learning content in the “cloud”. The experiences were gathered while evaluating the changes necessary to make our solutions and systems “cloud-ready”.
In this paper we show that a model-free approach to learning behaviors in joint space can successfully be used to utilize the toes of a humanoid robot. Keeping the approach model-free makes it applicable to any kind of humanoid robot, or any robot in general. Here we focus on the benefit for robots with toes, which is otherwise difficult to exploit. The task was to learn different kick behaviors on simulated Nao robots with toes in the RoboCup 3D soccer simulator. As a result, the robot learned to step onto its toes for a kick that performs 30% better than the same kick learned without toes.
Learning to Walk With Toes
(2020)
This paper explains how a model-free (with respect to the robot model and the behavior to learn) approach can facilitate learning to walk from scratch. It is applied to a simulated Nao robot with toes. Results show an improvement of 30% in speed compared to a model without toes and also compared to our model-based approach, but with less stability.
Cardiac resynchronization therapy (CRT) with biventricular (BV) pacing is an established therapy in approximately two-thirds of symptomatic heart failure (HF) patients (P) with left bundle branch block (LBBB). The aim of this study was to evaluate left atrial (LA) conduction delay (LACD) and left ventricular (LV) conduction delay (LVCD) using pre-implantational transesophageal electrocardiography (ECG) in sinus rhythm (SR) CRT responder (R) and non-responder (NR).
Methods: SR HF P (n=52, age 63.6±10.4 years; 6 females, 46 males) with New York Heart Association (NYHA) class 3.0±0.2, 24.4±7.1 % LV ejection fraction and 171.2±37.6 ms QRS duration (QRSD) were measured by bipolar filtered transesophageal LA and LV ECG recording with hemispherical electrodes (HE) TO catheter (Osypka AG, Rheinfelden, Germany). LACD was measured between onset of P-wave in the surface ECG and onset of LA deflection in the LA ECG. LVCD was measured between onset of QRS in the surface ECG and onset of LV deflection in the LV ECG.
Results: There were 78.8% SR CRT R (n=41) with 171.2±36.9 ms QRSD, 73.3±25.7 ms LACD, 80.0±24.0 ms LVCD and a QRSD-LVCD ratio of 2.3±0.5. SR CRT R QRSD correlated with LACD (r=0.688, P<0.001) and LVCD (r=0.699, P<0.001). There were 21.2% SR CRT NR (n=11) with 153.4±22.4 ms QRSD (P=0.133), 69.8±24.8 ms LACD (n=6, P=0.767), 54.2±31.0 ms LVCD (P<0.0046) and a QRSD-LVCD ratio of 3.9±2.5 (P<0.001). SR CRT NR QRSD did not correlate with LACD (r=-0.218, P=0.678) or LVCD (r=0.042, P=0.903). During a 22.8±21.3 month CRT follow-up, the CRT R NYHA class improved from 3.1±0.3 to 1.9±0.3 (P<0.001). In CRT NR, the NYHA class did not improve (2.9±0.4 to 2.9±0.2, P=1) during 11.2±9.8 months of BV pacing.
Conclusions: Transesophageal LA and LV ECG with HE can be utilized to analyse LACD and LVCD in HF P. Pre-implantational LVCD and QRSD-LVCD-ratio may be additional useful parameters to improve P selection for SR CRT.
AV delay (AVD) optimization can improve hemodynamics and avoid non-response to cardiac resynchronization therapy (CRT). The optimal AVD can be approximated by the sum of the individual implant-related interatrial conduction interval and a mean electromechanical interval of about 50 ms. We searched for methods to facilitate automatic, implant-based AV delay optimization. In 25 patients (19 m, 6 f, age 65±8 yrs.) with Medtronic Insync III Marquis CRT-D series systems and the left ventricular electrode at the lateral or posterolateral wall, we determined interatrial conduction intervals from the telemetric left ventricular tip versus superior vena cava coil electrogram (LVCE). Compared with esophageal measurements, the duration of the optimal AV delay by LVCE showed good correlation (k=0.98, p=0.01) with a difference of only 1.5±4.9 ms. Therefore, the LVCE is feasible for determining interatrial conduction intervals in order to automate AV delay optimization in CRT-D pacing, promising increased accuracy compared to other algorithms.
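The approximation stated in the abstract, optimal AVD as the individually measured interatrial conduction interval plus a mean electromechanical interval of about 50 ms, can be sketched as follows; the function name and the example interval are illustrative only, and the values are of course not clinical guidance.

```python
def optimal_av_delay_ms(interatrial_conduction_ms, electromechanical_ms=50.0):
    """Approximate the optimal AV delay as the sum of the individually
    measured interatrial conduction interval and a mean
    electromechanical interval of about 50 ms, per the abstract.
    Illustrative only, not clinical guidance."""
    return interatrial_conduction_ms + electromechanical_ms

# Example: a hypothetical measured interatrial conduction interval of 90 ms
print(optimal_av_delay_ms(90.0))   # 140.0
```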
Legacy industrial communication protocols have proven robust and functional. During the last decades, the industry has invented completely new or advanced versions of the legacy communication solutions. However, even with the high adoption rate of these new solutions, the majority of industrial applications still run on legacy, mostly fieldbus-related technologies. Profibus is one of those technologies that keeps growing in the market, albeit with slow market growth in recent years. A retrofit technology is therefore fundamental: one that enables these technologies to connect to the Internet of Things and to utilize the ever-growing potential of data analysis, predictive maintenance, or cloud-based applications, while at the same time not changing a running system.
The ability to digitally link geographical locations with tasks, challenges, or learning materials has inspired a variety of applications, including outside mathematics education. This contribution presents an exemplary selection of such applications and attempts to systematize their technical, organizational, and conceptual design elements. The discussion is intended to provide guidance for creating math trails and for the further development of technical solutions for use in teaching.
One of the major challenges impeding the energy transition is the intermittency of solar and wind electricity generation due to their dependency on weather changes. Demand-side energy flexibility contributes considerably to mitigating the energy supply/demand imbalances resulting from external influences such as the weather. As some of the largest electricity consumers, industrial enterprises offer high demand-side flexibility potential from their production processes and on-site energy assets. In this direction, methods are needed that enable energy flexibility and ensure the active participation of such enterprises in the electricity markets, especially under variable electricity prices. This paper presents a generic model library for an industrial enterprise implemented with optimal control for energy flexibility purposes. The components in the model library represent the typical technical units of an industrial enterprise on the material, media, and energy flow levels with their operative constraints. A case study of a plastic manufacturing plant using the generic model library is also presented, in which the results of two simulations with different electricity prices are compared so that the behavior of the model can be assessed. The results show that the model provides an optimal scheduling of the manufacturing system according to the variations in electricity prices and ensures optimal control of the utilities and energy systems needed for production.
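To convey the basic idea of price-driven load shifting described above, the following toy sketch runs a fully flexible process during the cheapest hours of a day. The greedy heuristic, function name, and hourly prices are illustrative assumptions; the paper's model library uses optimal control with operative constraints, which a greedy rule cannot capture.

```python
def schedule_flexible_load(prices_eur_mwh, hours_needed):
    """Toy price-driven load shifting: pick the cheapest hours in
    which to run a fully flexible process.  Illustrative only; the
    real model respects material, media, and energy constraints."""
    ranked = sorted(range(len(prices_eur_mwh)),
                    key=lambda h: prices_eur_mwh[h])  # hours by rising price
    return sorted(ranked[:hours_needed])              # chosen run hours

prices = [42, 38, 35, 33, 40, 55, 70, 80]   # hypothetical hourly prices
print(schedule_flexible_load(prices, 3))    # [1, 2, 3] -> the cheapest hours
```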
In this paper the fatigue life of three cast iron materials, namely EN-GJS-700, EN-GJV-450 and EN-GJL-250, is predicted for combined thermomechanical fatigue and high cycle fatigue loading. To this end, a mechanism-based model is used, which is based on microcrack growth. The model considers crack growth due to low frequency loading (thermomechanical and low cycle fatigue) and due to high cycle fatigue. To determine the model parameters for the cast iron materials, fatigue tests are performed under combined loading and crack growth is measured at room temperature using the replica technique. Superimposed high cycle fatigue leads to an accelerated crack growth as soon as a critical crack length and thus the threshold stress intensity factor is exceeded. The model takes this effect into account and predicts the fatigue lives of all cast iron materials investigated under combined loadings very well.
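As background for the microcrack-growth idea above, a generic Paris-law cycle-by-cycle integration is sketched below. This is a standard textbook relation, not the paper's mechanism-based model; the coefficients C and m and the load values are hypothetical placeholders, not the calibrated cast-iron parameters.

```python
import math

def crack_length_after_cycles(a0_mm, dsigma_mpa, cycles, C=1e-11, m=3.0):
    """Generic Paris-law crack growth, da/dN = C * (dK)^m with
    dK = dsigma * sqrt(pi * a) (geometry factor taken as 1).
    Illustration only; the paper's mechanism-based model differs."""
    a = a0_mm
    for _ in range(cycles):
        dk = dsigma_mpa * math.sqrt(math.pi * a)  # stress intensity range
        a += C * dk ** m                          # crack growth this cycle
    return a

a_end = crack_length_after_cycles(a0_mm=0.1, dsigma_mpa=200.0, cycles=10_000)
print(a_end)   # crack length in mm after 10,000 cycles
```

The accelerating growth as the crack lengthens mirrors the effect described in the abstract, where superimposed high cycle fatigue speeds up growth once a threshold stress intensity factor is exceeded.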
In the development of new vehicles, increasing customer comfort requirements and rising safety regulations often result in an increase in weight. Nevertheless, in order to meet the demand for reduced fuel consumption, it is necessary within the product development process to implement complex and filigree lightweight structures. This contribution therefore addresses the potential of generatively designed components for fiber-reinforced additive manufacturing (FRAM). Several commercial systems for this application are currently available on the market; therefore, a comparison of the systems is first made to determine a suitable one. Then, a highly stressed and safety-relevant chassis component of a race car is generatively designed and manufactured using FRAM. A matrix with short-fiber reinforcement and additional long-fiber reinforcement with carbon fibers is applied. Finally, tensile tests are carried out to check the mechanical properties. In addition, relevant properties such as weight and cost are determined in order to compare them with conventionally developed and manufactured components.
Live streaming of events over an IP network as a catalyst in media technology education and training
(2020)
The paper describes how students are involved in applied research when setting up the technology and running a live event. Real-time IP transmission in broadcast environments via fiber optics will become increasingly important in the future. Therefore, it is necessary to create a platform in this area where students can learn how to handle IP infrastructure and fiber optics. With this in mind, we have built a fully functional TV control room that is completely IP-based. The authors present the steps in the development of the project and show the advantages of the proposed digital solutions. The IP network fosters synergy between the teams involved: participants of the robot competition and the members of the media team. These results are presented in the paper. Our activities aim to awaken enthusiasm for research and technology in young people. Broadcasts of live events are a good opportunity for "hands-on" activities.
The term “attribute transfer” refers to the task of altering images in such a way that the semantic interpretation of a given input image is shifted towards an intended direction, which is quantified by semantic attributes. Prominent example applications are photo-realistic changes of facial features and expressions, like changing the hair color, adding a smile, enlarging the nose, or altering the entire context of a scene, like transforming a summer landscape into a winter panorama. Recent advances in attribute transfer are mostly based on generative deep neural networks, using various techniques to manipulate images in the latent space of the generator. In this paper, we present a novel method for the common sub-task of local attribute transfer, where only parts of a face have to be altered in order to achieve semantic changes (e.g. removing a mustache). In contrast to previous methods, where such local changes have been implemented by generating new (global) images, we propose to formulate local attribute transfer as an inpainting problem. By removing and regenerating only parts of images, our “Attribute Transfer Inpainting Generative Adversarial Network” (ATI-GAN) is able to utilize local context information, focusing on the attributes while keeping the background unmodified and yielding visually convincing results.
The main advantage of mobile context-aware applications is that they provide effective and tailored services by considering the environmental context, such as location, time, nearby objects and other data, and by adapting their functionality to changing context information without explicit user interaction. The idea behind Location-Based Services (LBS) and Object-Based Services (OBS) is to offer fully customizable services for user needs according to the location or the objects in a mobile user's vicinity. However, mobile context-aware software is considered one of the most challenging application domains, owing to the variety of built-in sensors that are part of a mobile device. Visual Programming Languages (VPL) and hybrid visual programming languages are considered innovative approaches to addressing the inherent complexity of developing such programs. The key contribution of our new development approach for location- and object-based mobile applications is a use-case-driven development process based on use case templates and visual code templates that enables even programming beginners to create context-aware mobile applications. An example of the use of the development approach is presented, and open research challenges and perspectives for the further development of our approach are formulated.
As part of a GPS project at the Fachhochschule Offenburg, a concept for an experimental navigation receiver was developed. For this purpose, the digital part was designed and built. The circuit was to be implemented using user-programmable gate arrays from Xilinx (LCAs), which had already proven themselves in another project at the Fachhochschule.
In the following, I would like to give the reader an overview of the GPS system and the development of the LCAs.
Vehicle-to-Everything (V2X) communication promises improvements in road safety and efficiency by enabling low-latency and reliable communication services for vehicles. Besides using Mobile Broadband (MBB), there is a need to develop Ultra-Reliable Low-Latency Communications (URLLC) applications with cellular networks, especially when safety-related driving applications are concerned. Future cellular networks are expected to support novel latency-sensitive use cases. Many applications of V2X communication, like collaborative autonomous driving, require very low latency and high reliability in order to support real-time communication between vehicles and other network elements. In this paper, we classify V2X use cases and their requirements in order to identify cellular network technologies able to support them. The bottleneck of medium access in 4G Long Term Evolution (LTE) networks is the random access procedure. It is evaluated through simulations to further detail future limitations and requirements. Limitations and improvement possibilities for the next generation of cellular networks are then detailed. Moreover, the results presented in this paper provide the limits of different parameter sets with regard to the requirements of V2X-based applications. In doing so, a starting point for migrating to Narrowband IoT (NB-IoT) or 5G solutions is given.
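The random access bottleneck mentioned in this abstract stems from devices picking contention preambles at random; a collision occurs when two devices pick the same preamble in the same attempt. A minimal Monte Carlo sketch of this effect (the function and parameter names are illustrative; LTE typically offers 54 contention-based preambles) estimates the collision-free fraction:

```python
import random

def collision_free_fraction(n_devices, n_preambles=54, trials=10000, seed=1):
    """Estimate the fraction of devices whose randomly chosen preamble
    is not picked by any other device, i.e. a collision-free attempt."""
    rng = random.Random(seed)
    successes = 0
    for _ in range(trials):
        choices = [rng.randrange(n_preambles) for _ in range(n_devices)]
        counts = {}
        for c in choices:
            counts[c] = counts.get(c, 0) + 1
        # A device succeeds if its preamble was chosen exactly once.
        successes += sum(1 for c in choices if counts[c] == 1)
    return successes / (trials * n_devices)
```

Analytically, a single device succeeds with probability (1 - 1/K)^(n-1) for K preambles and n contending devices, so the estimate should approach that value as the number of trials grows; this is what makes the procedure a bottleneck when many vehicles contend simultaneously.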
This study presents some results from a monitoring project with night ventilation and an earth-to-air heat exchanger. Both techniques are forms of air-based low-energy cooling. As these technologies are limited to specific boundary conditions (e.g. a moderate summer climate, low temperatures at night, or low ground temperatures, respectively), water-based low-energy cooling may be preferred in many projects. A comparison of the night-ventilated building with a ground-cooled building shows major differences between the two concepts.
The 40 Altshuller Inventive Principles with their numerous sub-principles have remained, over decades, the most frequently applied tool of the Theory of Inventive Problem Solving (TRIZ) for systematic idea generation. However, their application often requires a concentrated, creative and abstract way of thinking that can be fairly challenging for newcomers to TRIZ. This paper describes an approach to reducing the abstraction level of inventive sub-principles and presents the results of an idea generation experiment conducted with three groups of undergraduate and graduate students from different years of study in mechanical and process engineering. The students were asked to generate and record their individual ideas for three design problems using a pre-defined set of classical and modified sub-principles within 10 minutes. The overall outcomes of the experiment support the assumption that the less abstract wording of the modified sub-principles leads to a higher number of ideas. The distribution of ideas across the fields of MATCHEM-IBD (Mechanical, Acoustic, Thermal, Chemical, Electrical, Magnetic, Intermolecular, Biological and Data processing) differs significantly between the groups using modified and abstract sub-principles.
The importance of machine learning (ML) has been increasing dramatically for years. From assistance systems to production optimisation to support for the health sector, almost every area of daily life and industry comes into contact with machine learning. Besides all the benefits that ML brings, the lack of transparency and the difficulty of establishing traceability pose major risks. While there are solutions that make the training of machine learning models more transparent, traceability remains a major challenge, as does ensuring the identity of a model: unnoticed modification of a model is a real danger when using ML. One solution is to create an ML birth certificate and an ML family tree secured by blockchain technology. Important information about training, and about changes to the model through retraining, can be stored in a blockchain and accessed by any user to provide more security and traceability for an ML model.
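The birth-certificate idea sketched in this abstract can be illustrated as a simple append-only hash chain: each record stores a fingerprint of the model weights and the hash of the previous record, so later tampering becomes detectable. This is a toy sketch, not the paper's implementation; the class and field names are hypothetical:

```python
import hashlib
import json

def fingerprint(weights_bytes):
    """Identity of a model: the SHA-256 hash of its serialized weights."""
    return hashlib.sha256(weights_bytes).hexdigest()

class ModelLedger:
    """Append-only hash chain recording a model's 'birth' and retrainings."""

    def __init__(self):
        self.blocks = []

    def record(self, event, model_hash):
        prev = self.blocks[-1]["block_hash"] if self.blocks else "0" * 64
        payload = {"event": event, "model_hash": model_hash, "prev": prev}
        block_hash = hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()).hexdigest()
        payload["block_hash"] = block_hash
        self.blocks.append(payload)

    def verify(self):
        """Recompute every link; any modified record breaks the chain."""
        prev = "0" * 64
        for b in self.blocks:
            payload = {k: b[k] for k in ("event", "model_hash", "prev")}
            recomputed = hashlib.sha256(
                json.dumps(payload, sort_keys=True).encode()).hexdigest()
            if b["prev"] != prev or b["block_hash"] != recomputed:
                return False
            prev = b["block_hash"]
        return True

# Birth certificate plus one retraining entry (weights are stand-ins).
ledger = ModelLedger()
ledger.record("birth", fingerprint(b"initial weights"))
ledger.record("retraining", fingerprint(b"retrained weights"))
```

In a real deployment the chain would live on a blockchain rather than in memory, so that no single party can rewrite the model's history.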
This paper describes the concept and some results of the project "Menschen Lernen Maschinelles Lernen" (Humans Learn Machine Learning, ML2) of the University of Applied Sciences Offenburg. It brings together students of different courses of study and practitioners from companies on the subject of Machine Learning. A mixture of blended learning and practical projects ensures a tight coupling of machine learning theory and application. The paper details the phases of ML2 and mentions two successful example projects.
This paper describes a taxonomy that allows one to assess and compare different implementations of master data objects. A systematic breakdown of core entities provides a framework for distinguishing four categories of master data objects: independent objects, dependent objects, relational objects, and reference objects that serve to attribute information. This supports the preparation of data migrations from one system to another.
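The four categories of the taxonomy could be represented as a small classification sketch (the category names follow the abstract; the decision rule and its flags are purely illustrative, not taken from the paper):

```python
from enum import Enum

class MasterDataCategory(Enum):
    """The four categories of master data objects named in the abstract."""
    INDEPENDENT = "independent"  # exists on its own, e.g. a material
    DEPENDENT = "dependent"      # only meaningful together with a parent
    RELATIONAL = "relational"    # links two or more other objects
    REFERENCE = "reference"      # serves to attribute information

def classify(has_parent, links_objects, attributes_information):
    """Toy decision rule assigning a master data object to one category."""
    if attributes_information:
        return MasterDataCategory.REFERENCE
    if links_objects:
        return MasterDataCategory.RELATIONAL
    if has_parent:
        return MasterDataCategory.DEPENDENT
    return MasterDataCategory.INDEPENDENT
```

Such an explicit classification of each object type could be one way to compare source and target systems before a data migration.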
At the Hochschule Offenburg, the transition from school to university is supported via smartphone and tablet in the preparatory mathematics course. On demand, a math app provides hints, intermediate steps and detailed explanations for the training exercises, thus helping students develop the solutions at their individual learning pace. The mobile approach makes it possible to familiarize the roughly 400 participants of the classroom course with e-learning in ordinary classrooms without PC equipment, and it supports flexible practice times and locations beyond the classroom sessions. By aligning the content with the cross-university COSH (Cooperation Schule Hochschule) catalogue of minimum requirements in mathematics, a solution emerged that any first-year student can use to prepare for university, that matches the bridge-course content of many universities, and for which cooperation projects with schools are already starting.
In railway technical centers, scheduling maintenance activities is a very complex task: it consists of ordering, over time, all maintenance operations on the workstations while respecting resource limits, precedence constraints, and workstation availabilities. Currently, this process is not fully automatic. To improve this situation, this paper presents a mathematical model for scheduling maintenance activities in railway remanufacturing systems. The studied problem is modeled as a flexible job shop, with the possibility for a job to be executed several times on a stage. A MILP formulation is implemented with the makespan, representing the time needed to remanufacture the train, as the objective. The aim is to create a generic model for optimizing the planning of maintenance activities and improving the performance of railway technical centers. Finally, numerical results are presented, discussing the impact of instance size on the computing time needed to solve the described problem.
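Solving the MILP itself requires a solver, but the makespan objective is easy to illustrate: for a fixed machine assignment and a fixed dispatch order, the makespan is the completion time of the last operation, subject to job precedence (operations of a job run in sequence) and machine availability (one operation at a time). A toy evaluator, as a hedged sketch with a hypothetical data format in which each job is a list of (machine, duration) operations:

```python
def makespan(jobs):
    """Completion time of the last operation when jobs are dispatched
    in list order: each operation starts as soon as both its job's
    previous operation and its machine are free."""
    machine_free = {}  # machine -> time at which it becomes available
    latest = 0
    for operations in jobs:
        job_ready = 0  # precedence: operations of a job run in sequence
        for machine, duration in operations:
            start = max(job_ready, machine_free.get(machine, 0))
            job_ready = start + duration
            machine_free[machine] = job_ready
            latest = max(latest, job_ready)
    return latest

# Two jobs sharing machines M1 and M2; repeating a machine inside a job
# would model a job executed several times on the same stage.
example = [[("M1", 3), ("M2", 2)], [("M1", 2), ("M2", 4)]]
```

A MILP minimizes this quantity over all feasible assignments and orders, whereas the sketch only evaluates one given dispatch order.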
Mathematics can be found in many objects, be it the linear slope of a handrail leading to a school building or the nearly cylindrical shape of an advertising column in the city centre. Enabling pupils to discover these connections is at the heart of the MathCityMap project (Ludwig et al., 2013). On so-called mathematical trails (math trails), an app guides pupils to mathematics tasks posed at real objects or in real situations in their environment. To solve the tasks, data are collected, e.g. by measuring or counting. Crucially, the tasks are posed in such a way that this data-collection step can only take place on site and is thus directly linked to the object or situation.
The Transport Layer Security (TLS) protocol is a cornerstone of secure network communication, not only for online banking, e-commerce, and social media, but also for industrial communication and cyber-physical systems. Unfortunately, implementing TLS correctly is very challenging, as becomes evident from the high frequency of bugfixes filed for many TLS implementations. Given the high significance of TLS, advancing the quality of implementations is a sustained pursuit. We strive to support these efforts by presenting a novel, response-distribution-guided fuzzing algorithm for differential testing of black-box TLS implementations. Our algorithm generates highly diverse and mostly valid TLS stimulation messages, which evoke more behavioral discrepancies in TLS server implementations than other algorithms. We evaluate our algorithm using 37 different TLS implementations and discuss, by means of a case study, how the resulting data allows us not only to assess and improve implementations of TLS but also to identify underspecified corner cases. We introduce suspiciousness as a per-implementation metric of anomalous implementation behavior and find that more recent or bug-fixed implementations tend to have a lower suspiciousness score. Our contribution is complementary to existing tools and approaches in the area and can help reveal implementation flaws and avoid regressions. While presented for TLS, we expect our algorithm's guidance scheme to be applicable and useful in other contexts as well. Source code and data are made available to fellow researchers in order to stimulate discussion and invite others to benefit from and advance our work.
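The differential-testing core of such an approach can be sketched under a simplifying assumption: send the same stimuli to every implementation, and score an implementation as suspicious in proportion to how often its response deviates from the majority. This is a toy reading of the suspiciousness metric; the paper's actual definition may differ:

```python
from collections import Counter

def suspiciousness(responses):
    """responses[impl][i] is implementation impl's (abstracted) response
    to stimulus i. Returns, per implementation, the fraction of stimuli
    on which it deviates from the majority response."""
    impls = list(responses)
    n_stimuli = len(responses[impls[0]])
    deviations = {impl: 0 for impl in impls}
    for i in range(n_stimuli):
        majority, _ = Counter(
            responses[impl][i] for impl in impls).most_common(1)[0]
        for impl in impls:
            if responses[impl][i] != majority:
                deviations[impl] += 1
    return {impl: deviations[impl] / n_stimuli for impl in impls}
```

An implementation that frequently disagrees with its peers either has a flaw or exercises an underspecified corner case, which is exactly the kind of discrepancy differential testing is meant to surface.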