MPC-Workshop Februar 2014
(2014)
MPC-Workshop Februar 2013
(2013)
MPC-Workshop Februar 2012
(2012)
MPC-Workshop Februar 2011
(2011)
MPC-Workshop Februar 2007
(2007)
MPC-Workshop Februar 2006
(2006)
MPC-Workshop Februar 2005
(2005)
MPC-Workshop Februar 2004
(2004)
MPC-Workshop Februar 2001
(2001)
With our society moving towards Industry 4.0, an increasing number of tasks and procedures at manual workplaces are augmented with a digital component. While the research area of the Internet of Things focuses on combining physical objects with their digital counterparts, the question arises of how the interface to human workers should be designed in such Industry 4.0 environments. The motionEAP project focuses on using Augmented Reality to create an interface between workers and digital products in interactive workplace scenarios. In this paper, we summarize the work done in the motionEAP project over its four-year runtime and, based on the experience we gained, provide guidelines for creating interactive workplaces using Augmented Reality.
This study focuses on the autonomous navigation and mapping of indoor environments using a drone equipped only with a monocular camera and height measurement sensors. A visual SLAM algorithm was employed to generate a preliminary map of the environment and to determine the drone's position within the map. A deep neural network was utilized to generate a depth image from the monocular camera's input, which was subsequently transformed into a point cloud to be projected into the map. By aligning the depth point cloud with the map, 3D occupancy grid maps were constructed by using ray tracing techniques to get a precise depiction of obstacles and the surroundings. Due to the absence of IMU data from the low-cost drone for the SLAM algorithm, the created maps are inherently unscaled. However, preliminary tests with relative navigation in unscaled maps have revealed potential accuracy issues, which can only be overcome by incorporating additional information from the given sensors for scale estimation.
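The depth-to-point-cloud step described above can be illustrated with a simple pinhole back-projection sketch (the intrinsics fx, fy, cx, cy below are illustrative placeholders, not the drone camera's calibration):

```python
# Sketch: back-projecting a depth image into a 3-D point cloud with a
# pinhole camera model, as done before occupancy-grid ray tracing.
# Intrinsics here are made-up values for illustration only.

def depth_to_pointcloud(depth, fx, fy, cx, cy):
    """depth: 2-D list of metric depths (rows x cols); returns list of (x, y, z)."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z <= 0:          # skip invalid / missing depth
                continue
            x = (u - cx) * z / fx
            y = (v - cy) * z / fy
            points.append((x, y, z))
    return points

cloud = depth_to_pointcloud([[1.0, 2.0], [0.0, 4.0]], fx=1.0, fy=1.0, cx=0.5, cy=0.5)
```

Note that without a metric scale (the unscaled-map problem mentioned above), all coordinates produced this way are only defined up to a common scale factor.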
Monitoring of the molecular structure of lubricant oil using a FT-Raman spectrometer prototype
(2014)
The determination of the physical state of lubricant materials in complex mechanical systems is highly critical from several points of view: operative, economical, environmental, etc. Furthermore, there are several parameters that a lubricant oil must meet for proper performance inside a machine, and monitoring these lubricants can be a serious issue depending on the analytical approach applied. The molecular changes of aging lubricant oils have been analyzed using a self-designed FT-Raman spectrometer built entirely from standard components. This analytical tool allows the direct and clean study of vibrational changes in the molecular structure of the oils without direct contact with the samples and without extracting the sample from the machine in operation. The FT-Raman spectrometer prototype used in the analysis of the oil samples consists of a Michelson interferometer and a self-designed photon counter cooled on a Peltier element arrangement. Light coupling has been accomplished using a conventional 62.5/125 μm multi-mode fiber coupler. The FT-Raman arrangement has been able to extract high-resolution, frequency-precise Raman spectra from the analyzed lubricant oil samples, comparable to those obtained with commercial FT-Raman systems. The spectral information has helped to determine certain molecular changes in the initial wearing phases of the oil samples. The proposed instrument prototype has no additional complex hardware components or costly software modules. The mechanical and thermal irregularities influencing the FT-Raman spectrometer have been removed mathematically by accurately evaluating the optical path difference of the Michelson interferometer. This has been achieved by producing an additional interference pattern signal with a λ = 632.8 nm helium-neon laser, which differs from the conventional zero-crossing sampling (also known as the Connes advantage) commonly used by FT devices. This enables the FT-Raman system to perform reliable and clean spectral measurements of the analyzed oil samples.
Lithium-ion batteries show strongly nonlinear behaviour regarding the battery current and state of charge. Therefore, the modelling of lithium-ion batteries is complex. Combining physical and data-driven models in a grey-box model can simplify the modelling. Our focus is on using neural networks, especially neural ordinary differential equations, for grey-box modelling of lithium-ion batteries. A simple equivalent circuit model serves as a basis for the grey-box model. Unknown parameters and dependencies are then replaced by learnable parameters and neural networks. We use experimental full-cycle data and data from pulse tests of a lithium iron phosphate cell to train the model. Finally, we test the model against two dynamic load profiles: one consisting of half cycles and one dynamic load profile representing a home-storage system. The dynamic response of the battery is well captured by the model.
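The grey-box idea of replacing an unknown dependency in an equivalent circuit model with a learnable function can be sketched as follows (this is a generic illustration, not the paper's trained model; the tiny OCV network, R0, R1, C1 and all weights are invented):

```python
import math

# Sketch of a grey-box Thevenin model: the open-circuit voltage OCV(SoC)
# is replaced by a tiny neural network with made-up weights; in the paper
# such parameters would be trained on full-cycle and pulse-test data.

def ocv_net(soc, w=((0.2, 3.0),)):
    # one hidden tanh unit per (a, b) tuple: OCV = 3.2 + sum(a * tanh(b * soc))
    return 3.2 + sum(a * math.tanh(b * soc) for a, b in w)

def simulate(current, dt=1.0, q=3600.0, r0=0.01, r1=0.02, c1=1000.0):
    """Explicit-Euler simulation; current > 0 = discharge. Returns terminal voltages."""
    soc, u1, out = 1.0, 0.0, []
    for i in current:
        soc -= i * dt / q                       # coulomb counting
        u1 += dt * (i / c1 - u1 / (r1 * c1))    # RC polarisation branch
        out.append(ocv_net(soc) - r0 * i - u1)  # terminal voltage
    return out

v = simulate([1.0] * 10)   # 10 s constant 1 A discharge
```

In a neural-ODE formulation, the same right-hand sides would be integrated by a differentiable ODE solver so that the embedded network parameters can be trained end to end.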
This paper addresses the needs of universities regarding the qualification of students, as future R&D specialists, in efficient techniques for successfully running the innovation process. Compared with practicing engineers, students often show lower motivation to learn systematic inventive techniques, such as the TRIZ methodology, and prefer random brainstorming for idea generation. The quality of the obtained solutions also depends on the completeness of the problem analysis, which is more complex and time-consuming in the case of interdisciplinary systems. The paper briefly describes a one-semester, 60-hour course on new product development with the Advanced Innovation Design Approach and the TRIZ methodology, in which a typical industrial innovation process is modelled for one selected interdisciplinary mechatronic product.
Modelling detailed chemistry in lithium-ion batteries: Insight into performance, ageing and safety
(2018)
Modelling and Simulation of Microscale Trigeneration Systems Based on Real-Life Experimental Data
(2017)
In the shift of the energy grid towards a smarter, decentralised system, flexible microscale trigeneration systems will play an important role thanks to their ability to support demand-side management in buildings. However, to harness their potential, modern control methods such as model predictive control must be implemented for their optimal scheduling and control. To implement such supervisory control methods, simple analytical models representing the behaviour of the components first need to be developed. At the Institute of Energy System Technologies in Offenburg, we have built a real-life microscale trigeneration plant, and in this paper we present models based on its experimental data. These models are qualitatively validated, and their future application to the optimal scheduling problem is briefly motivated.
Background: Pulmonary vein isolation (PVI) using cryoballoon catheters is an established method for treating atrial fibrillation (AF). It offers a shorter treatment duration than classical therapy by radiofrequency (RF) ablation. The aim of this study was to integrate various cryoballoon catheters, RF catheters and esophageal catheters into a cardiac rhythm model and to investigate electrical and thermal fields during PVI under atrial fibrillation by means of static and dynamic simulation.
Methods: Modelling and simulation were carried out with the electromagnetic and thermal simulation software CST (CST, Darmstadt). Two cryoballoons, an RF ablation catheter and an esophageal catheter were modelled on the basis of the technical manuals of the manufacturers Medtronic and Osypka. The 23 mm cryoballoon and a circular mapping catheter were integrated into the Offenburg cardiac rhythm model, in particular the left inferior pulmonary vein (LIPV), to simulate the thermal field propagation during a PVI. A PVI with RF energy was simulated with the integrated RF ablation catheter placed near the LIPV. The TO8 esophageal catheter placed in the cardiac rhythm model allowed the recording of left-atrial electrical fields during AF and the analysis of thermal fields during PVI.
Results: Electrical fields could be simulated statically and dynamically in the heart and esophagus during sinus rhythm and AF with an AF focus in the LIPV. During a simulated 20-second application of a cryoballoon catheter at -50 °C, a temperature of -24 °C was measured at a depth of 0.5 mm in the myocardium. At a depth of 1 mm the temperature was -3 °C, at 2 mm 18 °C and at 3 mm 29 °C. During a 15-second application of an RF catheter with an 8 mm electrode and a power of 5 W at 420 kHz, the temperature at the tip of the electrode was 110 °C. At a depth of 0.5 mm in the myocardium the temperature was 75 °C, at 1 mm 58 °C, at 2 mm 45 °C and at 3 mm 38 °C. In most simulations a constant temperature of 37 °C was measured in the esophagus, ruling out the risk of an esophageal fistula. During cryoablation of the LIPV, a cooling of the esophagus to 30 °C was measured.
Conclusions: Cardiac rhythm simulation of electrical and thermal fields, applying different cardiac catheters, enables a static and dynamic simulation of PVI by cryoablation and RF ablation as well as temperature analysis in the esophagus. By incorporating MRI or CT data, electrical and thermal simulations may potentially be used to optimize PVIs.
For long-fiber-reinforced thermoplastics (LFT), a representative volume element (RVE) is generated for FEM simulations. This is done taking into account microstructural parameters such as the fiber orientation distribution, fiber volume fraction and fiber length distribution, which were determined experimentally for a characteristic material state. The creep behavior of LFT is investigated by means of microstructure simulations. The viscoelastic behavior of the matrix is determined experimentally on neat polypropylene specimens and implemented in the RVE simulations with a modified Burgers model. Finally, the computations with different fictitious as well as experimentally determined fiber length distributions are compared with creep tests on the LFT. The results show a strong dependence of the creep behavior on the fiber length and a high predictive quality for the simulations that take the experimentally determined length distribution into account.
In this contribution, an electricity market simulation model is developed with which the provision of flexibility on the German electricity and balancing power markets can be analyzed in a model-based way. The model represents two parallel, central competitive markets on which actors can trade by determining their bids individually. The bidding logic developed for this purpose is explained in detail, with a focus on the flexibility of fossil-thermal power plants. A subsequent comparison with real market prices shows that the methodology and the bidding logic adequately reflect the existing market and its outcomes, so that in the future a wide variety of flexibility scenarios can be analyzed and conclusions can be drawn about their effects on the market and its actors.
Modeling of Random Variations in a Switched Capacitor Circuit based Physically Unclonable Function
(2020)
The Internet of Things (IoT) is expanding to a wide range of fields such as home automation, agriculture, environmental monitoring, industrial applications, and many more. Securing tens of billions of interconnected devices in the near future will be one of the biggest challenges. IoT devices are often constrained in terms of computational performance, area, and power, which demand lightweight security solutions. In this context, hardware-intrinsic security, particularly physically unclonable functions (PUFs), can provide lightweight identification and authentication for such devices. In this paper, random capacitor variations in a switched capacitor PUF circuit are used as a source of entropy to generate unique security keys. Furthermore, a mathematical model based on the ordinary least square method is developed to describe the relationship between random variations in capacitors and the resulting output voltages. The model is used to filter out systematic variations in circuit components to improve the quality of the extracted secrets.
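The role of the least-squares model in separating systematic from random variation can be illustrated with a minimal sketch (the data and the linear-drift assumption below are invented for illustration; the paper's model relates capacitor variations to output voltages in the actual circuit):

```python
# Illustrative sketch, not the paper's circuit model: fit a linear trend to
# output voltages by ordinary least squares and keep only the residuals,
# i.e. remove a systematic (e.g. process-gradient) component so that the
# remaining random variation can serve as the entropy source.

def ols_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b              # intercept, slope

def remove_systematic(xs, ys):
    a, b = ols_line(xs, ys)
    return [y - (a + b * x) for x, y in zip(xs, ys)]   # residuals = random part

xs = [0, 1, 2, 3]                      # capacitor position index (made up)
ys = [1.00, 1.11, 1.19, 1.31]          # made-up output voltages with a drift
residuals = remove_systematic(xs, ys)
```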
Combined heat and power production (CHP) based on solid oxide fuel cells (SOFC) is a very promising technology to achieve high electrical efficiency to cover power demand by decentralized production. This paper presents a dynamic quasi 2D model of an SOFC system which consists of stack and balance of plant and includes thermal coupling between the single components. The model is implemented in Modelica® and validated with experimental data for the stack UI-characteristic and the thermal behavior. The good agreement between experimental and simulation results demonstrates the validity of the model. Different operating conditions and system configurations are tested, increasing the net electrical efficiency to 57% by implementing an anode offgas recycle rate of 65%. A sensitivity analysis of characteristic values of the system like fuel utilization, oxygen-to-carbon ratio and electrical efficiency for different natural gas compositions is carried out. The result shows that a control strategy adapted to variable natural gas composition and its energy content should be developed in order to optimize the operation of the system.
In their famous work on prospect theory, Kahneman and Tversky presented a number of examples where human decision making deviates from rational decision making as defined by decision theory. This paper describes the use of extended behavior networks to model human decision making in the sense of prospect theory. We show that the experimental findings of non-rational decision making described by Kahneman and Tversky can be reproduced using a slight variation of extended behavior networks.
Internal crowdsourcing-based ideation within a company can be defined as the involvement of its staff (specialists, managers, and other employees) in proposing solution ideas for a pre-defined problem. This paper addresses the question of how many participants of a company-internal ideation process are required to come close to the ideation limit for problems with a finite number of workable solutions. To answer this research question, the author proposes a set of metrics and a non-linear ideation performance function with a positive decreasing slope and an ideation limit for closed-ended problems. Three series of experiments helped to explore the relationships between the metric attributes and resulted in a mathematical model that allows companies to predict the productivity metrics of their crowdsourcing ideation activities, such as the quantity of distinct ideas and the ideation limit, as a function of the number of contributors, their average personal creativity, and the ideation efficiency of a contributors' group.
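One simple candidate form of a saturating curve with a positive decreasing slope and a finite limit can be sketched as follows (a hypothetical illustration; the paper's exact function and fitted parameters are not reproduced here):

```python
# Hypothetical ideation-performance curve: Q(n) = L * (1 - (1 - c)**n),
# with ideation limit L and average per-contributor creativity c.
# L = 50 and c = 0.05 are invented example values.

def ideas(n, limit=50, creativity=0.05):
    return limit * (1 - (1 - creativity) ** n)

def contributors_needed(fraction, limit=50, creativity=0.05):
    """Smallest number of contributors whose expected yield reaches
    `fraction` of the ideation limit."""
    n = 0
    while ideas(n, limit, creativity) < fraction * limit:
        n += 1
    return n

n90 = contributors_needed(0.90)   # contributors to reach 90 % of the limit
```

The decreasing marginal yield (each additional contributor adds fewer new ideas) is what makes such a limit reachable only asymptotically.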
The conversion of space heating for private households to climate-neutral energy sources is an essential component of the energy transition, as this sector was responsible for 9.4 % of Germany's carbon dioxide emissions as of 2018. In addition to reducing demand through better insulation, the use of heat pumps fed with electricity from renewable energy sources, such as on-site photovoltaic (PV) systems, is an important solution approach.
Advanced energy management and control can help to make optimal use of such heating systems. Here, optimal can refer, for example, to maximizing the self-consumption of self-generated PV power, extending component lifetime, or grid-friendly behavior that avoids load peaks. A powerful method for this is model predictive control (MPC), which calculates optimal schedules for the controllable influence variables based on models of the system dynamics, current measurements of the system states, and predictions of future external influence parameters.
In this paper, we will discuss three different use cases that show how artificial intelligence can contribute to the realization of such an MPC-based energy management and control system. This will be done using the example of a real inhabited single family home that has provided the necessary data for this purpose and where the methods are implemented and tested. The heating system consists of an air-water heat pump with direct condensation, a thermal stratified storage tank, a pellet burner and a heating rod and provides both heating and hot water. The house generates a significant portion of its electricity needs through a rooftop PV system.
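The receding-horizon principle behind MPC can be sketched in a few lines (a toy illustration with invented storage dynamics and cost weights, not the house's actual model or controller):

```python
import itertools

# Minimal receding-horizon sketch: choose an on/off heat-pump schedule over
# a short horizon that keeps a storage temperature near a setpoint while
# preferring hours with forecast PV surplus. All numbers are made up.

def plan(t0, pv_forecast, horizon=4, setpoint=55.0, heat=2.0, loss=0.5):
    best, best_cost = None, float("inf")
    for schedule in itertools.product([0, 1], repeat=horizon):
        t, cost = t0, 0.0
        for on, pv in zip(schedule, pv_forecast):
            t += heat * on - loss                  # simple storage dynamics
            cost += (t - setpoint) ** 2            # setpoint tracking error
            cost += 0.5 * on * (1 - pv)            # penalty for grid power
        if cost < best_cost:
            best, best_cost = schedule, cost
    return best[0]       # apply only the first step, then re-plan next hour

action = plan(54.0, pv_forecast=[1, 1, 0, 0])      # PV surplus in first hours
```

A real MPC would replace the brute-force enumeration with a numerical optimizer and re-solve the problem at every control interval with updated measurements and forecasts.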
Battery degradation is a complex physicochemical process that strongly depends on operating conditions and environment. We present a model-based analysis of lithium-ion battery degradation in smart microgrids, in particular a single-family house and an office tract with a photovoltaics generator. We use a multi-scale, multi-physics model of a graphite/lithium iron phosphate (LiFePO4, LFP) cell that includes SEI formation as the ageing mechanism. The cell-level model is dynamically coupled to a system-level model consisting of photovoltaics, inverter, power consumption profiles, grid interaction, and an energy management system, fed with historic weather data. The behavior of the cell in terms of degradation propensity, performance, state of charge and other internal states is predicted over an annual operation cycle. As a result, we have identified a peak in degradation rate during the battery charging process, caused by charging overpotentials. Ageing strongly depends on the load situation: the predicted annual capacity fade is 1.9 % for the single-family house and only 1.3 % for the office tract.
Model-based analysis of Electrochemical Pressure Impedance Spectroscopy (EPIS) for PEM Fuel Cells
(2019)
Electrochemical impedance spectroscopy (EIS) is a widely-used diagnostic technique to characterize electrochemical processes. It is based on the dynamic analysis of two electrical observables, that is, current and voltage. Electrochemical cells with gaseous reactants or products, in particular fuel cells, offer an additional observable, that is, the gas pressure. The dynamic coupling of current or voltage with gas pressure gives rise to a number of additional impedance definitions, for which we have previously introduced the term electrochemical pressure impedance spectroscopy (EPIS) [1,2]. EPIS shows a particular sensitivity towards transport processes of gas-phase or dissolved species, in particular, diffusion coefficients and transport pathway lengths. It is as such complementary to standard EIS, which is mainly sensitive towards electrochemical processes. First EPIS experiments on PEM fuel cells have recently been shown [3].
We present a detailed modeling and simulation analysis of EPIS of a PEM fuel cell. We use a 1D+1D continuum model of a fuel/air channel pair with GDL and MEA. Backpressure is dynamically varied, and the resulting simulated oscillation in cell voltage is evaluated to yield the EPIS signal Z_(V/p_ca). Results are obtained for different transport situations of the fuel cell, giving rise to very complex EPIS shapes in the Nyquist plot. This complexity shows the necessity of model-based interpretation of the EPIS spectra. Based on the simulation results, specific features in the EPIS spectra can be assigned to different transport domains (gas channel, GDL, membrane water transport).
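The pressure-to-voltage impedance itself is simply the complex ratio of the fundamental Fourier components of the two signals, which can be illustrated with synthetic stand-in data (this is a generic evaluation sketch, not the 1D+1D fuel cell model):

```python
import cmath, math

# Sketch of evaluating Z_(V/p) = V~ / p~: extract the fundamental Fourier
# component of a voltage response to a sinusoidal backpressure excitation
# and form the complex ratio. Signals below are synthetic stand-ins.

def fundamental(samples, freq, dt):
    n = len(samples)
    return (2.0 / n) * sum(s * cmath.exp(-2j * math.pi * freq * k * dt)
                           for k, s in enumerate(samples))

def epis(pressure, voltage, freq, dt):
    return fundamental(voltage, freq, dt) / fundamental(pressure, freq, dt)

f, dt, n = 1.0, 0.01, 100                                    # one full period
t = [k * dt for k in range(n)]
p = [100.0 * math.sin(2 * math.pi * f * x) for x in t]           # Pa
v = [0.002 * math.sin(2 * math.pi * f * x - 0.5) for x in t]     # V, lagging
z = epis(p, v, f, dt)    # complex impedance: magnitude 2e-5 V/Pa, phase -0.5 rad
```

Sweeping the excitation frequency and plotting the resulting complex values yields the Nyquist-plot shapes discussed above.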
Over the past two years the authors have developed a lightweight, low-power flight control system for model helicopters consisting of an attitude and heading reference system (AHRS), a navigator (INS) augmented with GPS, a barometric altitude sensor and a magnetic sensor, a flight control computer (FCC) and bidirectional ground data links. The system has been tested on a commercial stunt-flight model helicopter. The AHRS consists of three MEMS gyros, two 2-axis MEMS accelerometers and a microcontroller performing the required sensor compensation and data processing to generate attitude angles and true rate and acceleration data of the flying platform. The heading angle is augmented with a 2-axis magnetic sensor. The AHRS is stunt-flight capable. The INS integrates the acceleration data to obtain velocity and position data. All data are calculated in both the helicopter and the local earth frame at a 50 Hz rate. The algorithm is augmented with GPS data for the lateral movement and with a barometric altitude sensor for the vertical movement. The barometric data are compensated for air pressure changes caused by the helicopter main rotor. The FCC contains a set of control loops to stabilize the helicopter about all axes and to perform commanded velocity and position tasks. The sampling rate for the control loops is again 50 Hz, allowing flight control with high bandwidth. Various safety features are implemented in the software. The bidirectional data link is based on a 2.4 GHz Bluetooth Class I RF link with a 115 kbaud data rate. A dipole antenna is used on the helicopter; an automatically tracking patch antenna is used on the ground. For commanded velocity flight a standard 35 MHz RF link is used. For data sampling, monitoring and mode control a laptop is used on the ground. Several operating modes are implemented, ranging from commanded velocity flight to simple automatic stunt flight along predefined flight tracks.
The model helicopter is an ALIGN TREX 600 with 3 kg flight mass and a brushless electric motor. The rotor diameter is 1.40 m. The helicopter is able to carry a payload whose mass depends on the size of the installed LiPo cells and the purpose of the flight mission. The system has been tested in numerous flight tests and missions. The helicopter is controlled safely up to wind loads of at least 5 to 6 Beaufort. Data and video captures will be presented. If permission is granted, a demonstration flight will be performed on the premises of the conference.
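The gyro/accelerometer fusion such an AHRS performs is commonly illustrated with a one-axis complementary filter (a generic sketch with invented gains and a 50 Hz step, not the authors' implementation):

```python
# One-axis complementary filter: blend the integrated (drifting) gyro rate
# with the noisy but drift-free accelerometer angle. alpha close to 1 trusts
# the gyro short-term, the accelerometer long-term. Gains are invented.

def complementary(gyro_rates, accel_angles, dt=0.02, alpha=0.98):
    """50 Hz update: angle = alpha*(angle + gyro*dt) + (1-alpha)*accel."""
    angle, out = 0.0, []
    for rate, acc in zip(gyro_rates, accel_angles):
        angle = alpha * (angle + rate * dt) + (1 - alpha) * acc
        out.append(angle)
    return out

# Stationary platform tilted 10 deg: gyro reads 0 rate, accel reads 10 deg.
angles = complementary([0.0] * 500, [10.0] * 500)
```

The estimate converges to the accelerometer angle over a few seconds while remaining responsive to fast gyro-measured motion, which is why this structure (or a Kalman filter refining the same idea) is standard in MEMS-based AHRS designs.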
In the field of smart metering, it can be observed that standardized protocols, such as Wireless M-Bus or ZigBee, enjoy rapidly increasing popularity. For the protocol implementations, however, mostly legacy engineering processes and technologies have been used so far, while modern approaches such as model-driven design processes or open software platforms are disregarded. Therefore, within the WiMBex project, it shall be demonstrated that it is possible to develop a commercial-class Wireless M-Bus implementation following a state-of-the-art design process and using TinyOS as an open-source platform. This contribution describes the overall approach of the project, as well as the state and first experiences of the current work in progress.
Among the major hazards to the health of people in large agglomerations is the increase in particulate matter (PM) concentration. Traditional systems for PM monitoring have a great number of drawbacks, but the main issues are economic, related to installation costs and never-ending periodic maintenance expenses. Such systems are installed, but their number is limited, and given the growth of population, cities and industrial areas, there is an even greater need for information on air quality, because PM changes non-linearly, has a wide range and stems from different sources. In this paper, we propose an approach based on low-cost sensor nodes for measuring and obtaining information about the PM concentration in real time. Adopting this approach allows for a detailed study of the intensity of pollution and its sources. The system is powered by a PV module, and the power supply unit is designed using model-based design, a new approach to prototyping power-operated electronic devices with guaranteed performance.
A particular feature of mobile services is the ability to take contextual conditions, such as the individual usage situation, into account when delivering a service. Decoupling the place and time of learning makes the learning process more flexible and at the same time allows its integration into real work processes, e.g. manufacturing processes. Through the use of mobile devices, learning materials are available directly at the scene of action. The goal of context-aware learning is therefore to establish a direct connection between the offered learning media and the situation in which the learner finds him- or herself. Existing category systems for classifying context generally do not meet this requirement. In this contribution, we describe scenarios for context-aware mobile learning using manufacturing processes as an example, as well as solution approaches for context-aware mobile services.
The developed solution enables the presentation of animations and 3D virtual reality (VR) on mobile devices and is well suited for mobile learning, thus creating new possibilities in the area of e-learning worldwide. Difficult relations in physics as well as intricate experiments in optics can be visualised on mobile devices without the need for a personal computer.
This paper explores the potential of an m-learning environment by introducing the concept of mLab, a remote laboratory environment accessible through the use of handheld devices.
We aim to enhance the existing e-learning platform and internet-assisted laboratory settings, where students are offered in-depth tutoring, by providing compact tuition and tools for controlling simulations that are made available to learners via handheld devices. In this way, students are empowered by having access to their simulations from any place and at any time.
Threat modelling is an accepted technique for identifying general threats as early as possible in the software development lifecycle. Our previous work presented an open-source framework and web-based tool (OVVL) for automating threat analysis on software architectures using STRIDE. However, one open problem is that available threat catalogues are either too general or proprietary to a certain domain (e.g. .NET). Another problem is that a threat analyst should not only be presented (repeatedly) with a list of all possible threats, but should already receive some automated support for prioritizing them. This paper presents an approach to dynamically generate individual threat catalogues on the basis of the established CWE and related CVE databases. Roughly 60% of this threat catalogue generation can be done by identifying and matching certain key values. To map the remaining 40% of our data (~50,000 CVE entries), we use the already mapped 60% of our dataset to train a supervised machine-learning text classification model. The resulting dataset allows us to identify possible threats for each individual architectural element and automatically provide an initial prioritization. Our dataset as well as a supporting Jupyter notebook are openly available.
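The key-value matching stage can be illustrated with a toy token-overlap matcher (the catalogue entries and scoring below are invented; the remaining unmatched entries would go to the trained text classifier):

```python
# Toy sketch of matching a CVE description to a CWE entry by token overlap.
# The three CWE texts are shortened paraphrases for illustration only.

CWE = {
    "CWE-79":  "improper neutralization of input during web page generation "
               "cross-site scripting",
    "CWE-89":  "improper neutralization of special elements used in an sql "
               "command sql injection",
    "CWE-287": "improper authentication",
}

def match_cwe(cve_text, catalogue=CWE):
    words = set(cve_text.lower().split())
    def overlap(item):
        return len(words & set(item[1].split()))
    best = max(catalogue.items(), key=overlap)
    return best[0] if overlap(best) > 0 else None

cwe_id = match_cwe("SQL injection in the login form allows command execution")
```

A production pipeline would normalize tokens (stemming, stop-word removal) and weight them, e.g. by TF-IDF, before handing ambiguous cases to the supervised classifier.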
First-year students in the technical degree programmes of universities of applied sciences have very heterogeneous prior knowledge not only in mathematics but also in physics. Although these subjects are of great importance for a fundamental understanding of technical processes, education in these areas cannot start from zero, given the limited time slots available for them during the course of study. For mathematics, the cosh working group therefore compiled a catalogue of minimum requirements, published in 2014. It describes the knowledge and skills that first-year students need to successfully begin a WiMINT degree programme (economics, mathematics, computer science, natural sciences, technology) at a university of applied sciences. In the meantime, a working group of physicists at universities of applied sciences in Baden-Württemberg has formed with the goal of creating an analogous catalogue of minimum requirements for physics. Here, the current state of this work is presented.
The identification and quantification of compounds in the gas phase is of increasing interest in the context of environmental protection as well as in the analytical field. In this respect, the high extinction coefficients of vapours and gases in the ultraviolet wavelength region allow a very sensitive measurement system. In addition, the performance of the components necessary for setting up such a measurement system, such as fibres, light sources and detectors, has improved. In particular, the light sources and detectors offer improved stability, and the deep-UV performance and solarisation resistance of fused-silica fibres have been significantly optimized in recent years. Therefore, a compact and reliable detection system with high measuring accuracy is being developed. In this paper, possible applications of the system under development and recent results are discussed.
Teaching computer-aided methods in product development is a central focus of engineering education. This requires continuous further development of the content and the didactic teaching methods. This article describes the development of a didactic concept for design education aimed at improving students' presentation skills and ability to work in teams, and reports first experiences from its implementation in the course "CAD/CAE". Working in groups structured according to the round-strand-rope method, the students develop numerical solutions to parameter variations of an FEM task, namely the calculation of stress concentration factors on profiled shafts with relief notches. They present their results in the form of 100-second talks, which are assessed using a traffic-light scheme. A detailed statistical-psychological evaluation of this didactic concept is the goal of further investigations.
TSN, or Time-Sensitive Networking, is becoming an essential technology for integrated networks, enabling deterministic and best-effort traffic to coexist on the same infrastructure. To properly configure, run and secure such TSN networks, monitoring functionality is a must. The TSN standards already include some provisions for such functionality, and there are different methods to choose from. We implemented different methods for measuring the time synchronisation accuracy between devices as a C library and compared the measurement results. Furthermore, the library has been integrated into the ControlTSN engineering framework.
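One common way to quantify synchronisation accuracy is to timestamp the same reference event (e.g. a pulse-per-second edge) on two devices and evaluate the offset statistics; a generic sketch of that evaluation (not the ControlTSN C library, and with invented nanosecond timestamps):

```python
# Generic sketch of one measurement method: given timestamps of the same
# reference pulses captured on two devices, report the mean offset and the
# peak deviation (jitter) around it, in nanoseconds.

def sync_accuracy(ts_a, ts_b):
    offsets = [a - b for a, b in zip(ts_a, ts_b)]
    mean = sum(offsets) / len(offsets)
    peak = max(abs(o - mean) for o in offsets)    # worst-case jitter
    return mean, peak

ts_a = [1_000_000_000, 2_000_000_050, 3_000_000_020]   # device A (ns, made up)
ts_b = [1_000_000_040, 2_000_000_000, 3_000_000_000]   # device B (ns, made up)
mean, peak = sync_accuracy(ts_a, ts_b)
```

Comparing such mean/peak figures across methods is one way the measurement approaches mentioned above can be evaluated against each other.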
In an experience economy, market competition in the software branches is becoming more and more intense. Technical innovations, global retail practices and the multidimensional conception of experiences provide both opportunities and challenges for companies worldwide. Retailers strive for an optimized conversion rate, but poor UX still abounds; Germany-based companies in particular lag behind in an international comparison of industrialized economies. The value of integrating users in the development process is recognized, but the methodologies must be carefully incorporated into existing agile workflows. The goal of this study is to bridge the gaps between internal agency, external client and user interests. The contribution is four-fold: (I) an overview of the current status of customer centricity in the e-commerce branch of trade; (II) based on this corpus, a methodical framework aiming to incorporate the experience logic into UX practices within an agile project team; (III) an application of the framework in a single case study, the shop relaunch of a motorbike accessory store; (IV) a qualitative content analysis incorporating all interest groups (UX, development and project management).
The proposed method comprises: identification and documentation of the elementary TRIZ inventive principles from the TRIZ body of knowledge; extension and enhancement of the inventive principles through patent and technology analysis while avoiding overlapping and redundant principles; classification and adaptation of the principles into at least the following categories: working medium, target object, useful action, harmful effect, environment, information, field, substance, time, and space; and assignment of the elementary inventive principles to at least the following underlying engineering domains: universal, design, mechanical, acoustic, thermal, chemical, electromagnetic, intermolecular, biological, and data processing. The method further includes classification of the abstraction level of the elementary principles; definition of a statistical ranking of principles for different problem types and for specific engineering or non-technical domains; definition of strategies for selecting principle sets with high solution potential for predefined problems; automated semantic transformation of the elementary inventive principles into solution ideas; and evaluation of the automatically generated ideas and their transformation into innovation or inventive concepts.
Energy and the environment continue to be major issues for humankind, at the regional, the national, and the global level. This is one of the problem areas where engineers and scientists, in conjunction with political will and public awareness, can find new approaches and solutions to conserve natural resources and to make their use more efficient.
Message from the co-chairmen
(2017)
The direct marketing of electricity from wind and solar is an important step of the energy transition. On the one hand, market integration makes independence from EEG subsidies possible; on the other hand, these mechanisms align electricity generation with demand and thus contribute to the stability of the power grid. One example is the local marketing of PV electricity in an apartment building. To implement it, the actors need a measurement and control system that records meter and plant data on site and simplifies billing for the tenants. It should also calculate indicators such as the PV share and, where applicable, control a combined heat and power (CHP) unit. Neither the metering systems of the metering point operators nor the control systems of PV or CHP plants meet these requirements sufficiently. Research, meanwhile, is already a step ahead and is working on technical systems designed for considerably more complex energy-system and market topologies. This paper identifies the new technical requirements of direct marketing in an apartment building and compares them with the current state of market products as well as with the research system »OpenMUC«.
The paper describes the methodology and experimental results for revealing similarities in the thermal dependencies of accelerometer and gyroscope biases from 250 inertial MEMS chips (MPU-9250). Temperature profiles were measured on an experimental setup with a Peltier element for temperature control. Classification of the temperature curves was carried out with a machine learning approach.
A perfect sensor would have no thermal dependency at all. Thus, only sensors inside the clusters with smaller dependency (smaller total temperature slopes) might be pre-selected for the production of high-accuracy inertial navigation modules. It was found that no unified thermal profile ("family" curve) exists for all sensors in a production batch. However, sensors can be grouped according to their parameters, and a temperature compensation profile can then be regressed for each group. Twelve slope coefficients on 5-degree temperature intervals from 0°C to +60°C were used as the features for the k-means++ clustering algorithm.
The minimum number of clusters for which all sensors are well separated by their bias thermal profiles is, in our case, 6; it was found by applying the elbow method. For each cluster a regression curve can be obtained.
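The clustering step described above (twelve slope features per sensor, k-means++ seeding, elbow method) can be sketched as follows. All prototype values and noise levels are invented for illustration, and a minimal k-means is used in place of a library implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans_pp_init(X, k):
    """k-means++ seeding: prefer new centers far away from existing ones."""
    centers = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        d2 = np.min([np.sum((X - c) ** 2, axis=1) for c in centers], axis=0)
        centers.append(X[rng.choice(len(X), p=d2 / d2.sum())])
    return np.array(centers)

def kmeans(X, k, iters=50):
    """Plain Lloyd iterations; returns labels and the total inertia."""
    centers = kmeans_pp_init(X, k)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
    inertia = float(((X - centers[labels]) ** 2).sum())
    return labels, inertia

# Synthetic "bias slope" features: 12 slope coefficients per sensor, drawn
# around 6 hypothetical cluster prototypes (all numbers are invented).
protos = rng.normal(0.0, 1.0, size=(6, 12))
X = np.vstack([p + rng.normal(0.0, 0.1, size=(40, 12)) for p in protos])

# Elbow method: inertia drops steeply until k reaches the true cluster count.
inertias = {k: kmeans(X, k)[1] for k in range(1, 10)}
```

Plotting `inertias` against `k` yields the characteristic elbow at the true number of groups; in production one would of course use real sensor slope data and a tested library implementation.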
The paper addresses the needs of universities regarding the qualification of students, as future R&D specialists, in efficient techniques for successfully running the innovation process. It briefly describes the program of a novel one-semester course of 150 hours on new product development and inventive problem solving with the TRIZ methodology, offered to master students at the Beuth University of Applied Sciences in Berlin. The paper outlines a multi-source educational approach, which includes a new product development project (about 50% of the complete course), theory, practical work, and self-learning with software tools for computer-aided innovation, and demonstrates examples of the students' work. The research part analyses the learning experience, identifies the factors that impact the innovation and problem-solving performance of the students, and underlines the main difficulties faced by the students in the course. It describes a method for measuring educational efficiency and compares the results with educational experience in industry. The presented results can help universities to establish education in new product development or to improve its performance.
Machine-to-machine (M2M) communication is continuously extending to new application fields. Smart metering in particular has the potential to become the first truly large-scale M2M application. Although distributed meter devices will in the future mainly be connected via dedicated primary communication protocols, such as ZigBee, Wireless M-Bus or the like, a major percentage of all meters will be connected via point-to-point communication using GPRS or UMTS platforms. Such meter devices therefore have to be extremely cost- and energy-efficient, especially if they are battery-based and powered for several years by a single battery. This paper presents the development of an automated measurement unit for power and time, so that energy characteristics can be recorded. The measurement unit includes a hardware platform for the device under test (DUT) and a database-backed software environment for the smooth execution and analysis of the measurements.
The low cost and small size of MEMS inertial sensors allow their combination into a multi-sensor module in order to improve performance. However, the different linear accelerations measured at different places on a rotating rigid body have to be considered for the proper fusion of the measurements. The measurement errors of MEMS inertial sensors include deterministic imperfections as well as random noise, and the gain in accuracy from using multiple sensors depends strongly on the correlation between the errors of the different sensors. Although sensor fusion usually assumes that the measurement errors of different sensors are uncorrelated, estimation theory shows that for a combination of sensors of the same type a negative correlation is actually more beneficial. We therefore describe some important and often neglected considerations for the combination of several sensors and also present some preliminary results on the correlation of measurements from a simple multi-sensor setup.
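The point about correlated errors follows from the textbook variance of the mean of n equally correlated measurements, Var = sigma^2 * (1 + (n - 1) * rho) / n. The sketch below uses invented numbers purely to illustrate why a negative correlation beats uncorrelated errors:

```python
import numpy as np

def avg_variance(sigma2, n, rho):
    """Variance of the mean of n sensors with equal error variance sigma2
    and pairwise error correlation rho: sigma2 * (1 + (n - 1) * rho) / n."""
    return sigma2 * (1 + (n - 1) * rho) / n

sigma2, n = 1.0, 4
v_pos    = avg_variance(sigma2, n, 0.5)    # positively correlated errors
v_uncorr = avg_variance(sigma2, n, 0.0)    # the usual i.i.d. assumption
v_neg    = avg_variance(sigma2, n, -0.2)   # negative correlation helps

# Monte Carlo cross-check with a valid equicorrelation covariance matrix
# (rho must stay above -1/(n-1) for the matrix to be positive definite).
rho = -0.2
cov = sigma2 * ((1 - rho) * np.eye(n) + rho * np.ones((n, n)))
errs = np.random.default_rng(1).multivariate_normal(np.zeros(n), cov, 200_000)
emp = errs.mean(axis=1).var()
```

For four sensors the averaged error variance drops from 0.25 (uncorrelated) to 0.10 at rho = -0.2, while a positive correlation of 0.5 raises it to 0.625, which is the effect the abstract alludes to.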
The Transport Layer Security (TLS) protocol is a cornerstone of secure network communication, not only for online banking, e-commerce, and social media, but also for industrial communication and cyber-physical systems. Unfortunately, implementing TLS correctly is very challenging, as becomes evident from the high frequency of bugfixes filed for many TLS implementations. Given the high significance of TLS, advancing the quality of implementations is a sustained pursuit. We strive to support these efforts by presenting a novel, response-distribution guided fuzzing algorithm for differential testing of black-box TLS implementations. Our algorithm generates highly diverse and mostly valid TLS stimulation messages, which evoke more behavioral discrepancies in TLS server implementations than other algorithms. We evaluate our algorithm using 37 different TLS implementations and discuss, by means of a case study, how the resulting data makes it possible not only to assess and improve implementations of TLS but also to identify underspecified corner cases. We introduce suspiciousness as a per-implementation metric of anomalous implementation behavior and find that more recent or bug-fixed implementations tend to have a lower suspiciousness score. Our contribution is complementary to existing tools and approaches in the area, and can help reveal implementation flaws and avoid regression. While presented for TLS, we expect our algorithm's guidance scheme to be applicable and useful in other contexts as well. Source code and data are made available for fellow researchers in order to stimulate discussions and invite others to benefit from and advance our work.
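The core idea of differential black-box testing, stimulating several implementations with the same randomized inputs and flagging diverging responses, can be illustrated with a toy example. This is not the paper's guided algorithm; the "protocol" (a one-byte length prefix) and the injected bug are entirely invented:

```python
import random

# Two toy "server" message handlers that should behave identically; impl_b
# carries a deliberate bug for the corner case length byte == 255.
def impl_a(msg: bytes) -> str:
    if not msg:
        return "empty"
    return "ok" if len(msg) - 1 == msg[0] else "length_mismatch"

def impl_b(msg: bytes) -> str:
    if not msg:
        return "empty"
    if msg[0] == 255:                 # injected corner-case bug
        return "ok"
    return "ok" if len(msg) - 1 == msg[0] else "length_mismatch"

def differential_test(n_cases=5000, seed=7):
    """Stimulate both implementations with random messages and collect
    the inputs on which their observable responses diverge."""
    rng = random.Random(seed)
    discrepancies = []
    for _ in range(n_cases):
        msg = bytes([rng.randrange(256)]) + bytes(
            rng.randrange(256) for _ in range(rng.randrange(4)))
        if impl_a(msg) != impl_b(msg):
            discrepancies.append(msg)
    return discrepancies
```

Every collected discrepancy points at the underspecified or buggy corner case; the paper's contribution is to guide message generation by the observed response distribution so that such discrepancies are found far more efficiently than by uniform random stimulation.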
Mathematics can be found in many objects, be it the linear slope of a handrail leading to a school building or the nearly cylindrical shape of an advertising column in the city center. Enabling students to discover such connections is at the heart of the MathCityMap project (Ludwig et al., 2013). On so-called mathematical trails (math trails), an app guides students to mathematics tasks attached to real objects or real situations in their environment. To solve the tasks, data must be collected, e.g. by measuring or counting. Crucially, the tasks are posed in such a way that the data-collection step can only take place on site and is thus directly linked to the object or the situation.
In railway technical centers, scheduling the maintenance activities is a very complex task: all maintenance operations must be ordered in time across the workstations while respecting the number of resources, precedence constraints, and workstation availabilities. Currently, this process is not fully automatic. To improve this situation, this paper presents a mathematical model for scheduling maintenance activities in railway remanufacturing systems. The studied problem is modeled as a flexible job-shop, with the possibility for a job to be executed several times on a stage. An MILP formulation is implemented with the makespan, i.e. the time to remanufacture the train, as the objective. The aim is to create a generic model for optimizing the planning of maintenance activities and improving the performance of railway technical centers. Finally, numerical results are presented, discussing the impact of instance size on the computing time needed to solve the described problem.
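To make the scheduling structure concrete, the toy sketch below computes the makespan of a fixed operation order with naive list scheduling. This is not the paper's MILP, only the underlying job-shop bookkeeping; the jobs, workstations and durations are invented:

```python
# Toy remanufacturing instance: each job (train) is an ordered list of
# (workstation, duration) operations; names and numbers are invented.
jobs = {
    "train_1": [("clean", 2), ("inspect", 3), ("repair", 4)],
    "train_2": [("inspect", 2), ("repair", 2), ("clean", 1)],
}

def greedy_makespan(jobs):
    """Naive list scheduling: place each operation at the earliest time
    both its job and its workstation are free; return the makespan."""
    machine_free = {}                     # workstation -> next free time
    job_free = {j: 0 for j in jobs}       # job -> next free time
    for job, ops in jobs.items():
        # precedence inside a job is enforced by processing ops in order
        for machine, dur in ops:
            start = max(job_free[job], machine_free.get(machine, 0))
            job_free[job] = machine_free[machine] = start + dur
    return max(job_free.values())
```

An MILP formulation, by contrast, treats the start times as decision variables and lets the solver search over all feasible orderings for the minimum makespan, rather than committing to one greedy order.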
At Offenburg University of Applied Sciences, the transition from school to university is supported by a smartphone- and tablet-based preparatory mathematics course. A math app provides hints, intermediate steps and detailed explanations for the training exercises on demand, helping students develop the solutions at their individual learning pace. The mobile approach makes it possible to familiarize the roughly 400 participants of the classroom course with e-learning in ordinary classrooms without PC equipment, and allows practice time and place to extend flexibly beyond the contact hours. By aligning the content with the cross-university COSH (Cooperation Schule Hochschule) catalogue of minimum requirements in mathematics, a solution emerged that any first-year student can use to prepare for university, that matches the bridge-course content of many universities, and for which cooperation projects with schools are already starting.
This paper describes a taxonomy that makes it possible to assess and compare different implementations of master data objects. A systematic breakdown of core entities provides a framework for distinguishing four categories of master data objects: independent objects, dependent objects, relational objects, and reference objects that serve to attribute information. This supports the preparation of data migrations from one system to another.
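One way to picture the four categories is as a small type hierarchy. The class and field names below are our own illustration, not the paper's notation:

```python
from dataclasses import dataclass

@dataclass
class MasterDataObject:
    key: str                      # primary identifier

@dataclass
class IndependentObject(MasterDataObject):
    """Stands on its own, e.g. a material or a business partner."""

@dataclass
class DependentObject(MasterDataObject):
    """Only meaningful together with a parent object, e.g. a plant view."""
    parent: MasterDataObject = None

@dataclass
class RelationalObject(MasterDataObject):
    """Connects two other objects, e.g. a bill-of-material link."""
    left: MasterDataObject = None
    right: MasterDataObject = None

@dataclass
class ReferenceObject(MasterDataObject):
    """Attributes information to other objects, e.g. a unit-of-measure code."""
    value: str = ""

# Hypothetical instances of each category
material = IndependentObject("MAT-1")
plant_view = DependentObject("MAT-1@PLANT-A", parent=material)
bom_link = RelationalObject("BOM-7", left=material,
                            right=IndependentObject("MAT-2"))
unit = ReferenceObject("UOM-KG", value="kilogram")
```

For a migration, such a classification tells you the required load order: independent and reference objects first, then dependent objects, then the relational objects that tie them together.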
This paper describes the concept and some results of the project "Menschen Lernen Maschinelles Lernen" (Humans Learn Machine Learning, ML2) of the University of Applied Sciences Offenburg. It brings together students of different courses of study and practitioners from companies on the subject of Machine Learning. A mixture of blended learning and practical projects ensures a tight coupling of machine learning theory and application. The paper details the phases of ML2 and mentions two successful example projects.
The importance of machine learning (ML) has been increasing dramatically for years. From assistance systems to production optimisation to support in the health sector, almost every area of daily life and industry comes into contact with machine learning. Besides all the benefits ML brings, its lack of transparency and the difficulty of establishing traceability pose major risks. While there are solutions that make the training of machine learning models more transparent, traceability remains a major challenge, as does ensuring the identity of a model: unnoticed modification of a model is a real danger when using ML. One solution is to create an ML birth certificate and an ML family tree secured by blockchain technology. Important information about the training, and about changes to the model through retraining, can be stored in a blockchain and accessed by any user, creating more security and traceability around an ML model.
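The birth-certificate idea can be sketched as a plain hash chain over model snapshots (no consensus layer, which a real blockchain would add). Function names and record fields are invented for illustration:

```python
import hashlib, json

def record_event(chain, model_bytes: bytes, event: str):
    """Append a block tying the model's current hash to the previous block."""
    prev_hash = chain[-1]["block_hash"] if chain else "0" * 64
    block = {
        "event": event,                  # e.g. "initial_training", "retrain"
        "model_hash": hashlib.sha256(model_bytes).hexdigest(),
        "prev_hash": prev_hash,
    }
    block["block_hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    chain.append(block)
    return chain

def verify(chain, model_bytes: bytes) -> bool:
    """Check the hash links and that the last entry matches the given model."""
    if not chain:
        return False
    prev = "0" * 64
    for block in chain:
        body = {k: v for k, v in block.items() if k != "block_hash"}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["prev_hash"] != prev or digest != block["block_hash"]:
            return False
        prev = block["block_hash"]
    return chain[-1]["model_hash"] == hashlib.sha256(model_bytes).hexdigest()
```

Any retraining appends a new block (the "family tree" grows), and any unnoticed modification of either the model file or a past record breaks verification, which is exactly the identity guarantee the abstract argues for.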
The 40 Altshuller Inventive Principles with their numerous sub-principles have remained for decades the most frequently applied tool of the Theory of Inventive Problem Solving (TRIZ) for systematic idea generation. However, their application often requires a concentrated, creative and abstract way of thinking that can be fairly challenging for newcomers to TRIZ. This paper describes an approach to reducing the abstraction level of the inventive sub-principles and presents the results of an idea generation experiment conducted with three groups of undergraduate and graduate students from different years of study in mechanical and process engineering. The students were asked to generate and record individual ideas for three design problems within 10 minutes, using a pre-defined set of classical and modified sub-principles. The overall outcomes of the experiment support the assumption that the less abstract wording of the modified sub-principles leads to a higher number of ideas. The distribution of ideas across the fields of MATCHEM-IBD (Mechanical, Acoustic, Thermal, Chemical, Electrical, Magnetic, Intermolecular, Biological and Data processing) differs significantly between the groups using modified and abstract sub-principles.
This study presents some results from a monitoring project with night ventilation and an earth-to-air heat exchanger. Both techniques are air-based low-energy cooling. As these technologies are limited to specific boundary conditions (e.g. a moderate summer climate with low night temperatures, or low ground temperatures, respectively), water-based low-energy cooling may be preferred in many projects. A comparison of the night-ventilated building with a ground-cooled building shows major differences between the two concepts.
Vehicle-to-Everything (V2X) communication promises improvements in road safety and efficiency by enabling low-latency and reliable communication services for vehicles. Besides Mobile Broadband (MBB), there is a need to develop Ultra-Reliable Low-Latency Communication (URLLC) applications with cellular networks, especially where safety-related driving applications are concerned. Future cellular networks are expected to support novel latency-sensitive use cases: many V2X applications, such as collaborative autonomous driving, require very low latency and high reliability in order to support real-time communication between vehicles and other network elements. In this paper, we classify V2X use cases and their requirements in order to identify cellular network technologies able to support them. The bottleneck of medium access in 4G Long Term Evolution (LTE) networks is the random access procedure; we evaluate it through simulations to further detail future limitations and requirements. Limitations and improvement possibilities for the next generation of cellular networks are then detailed. Moreover, the results presented in this paper provide the limits of different parameter sets with regard to the requirements of V2X-based applications, giving a starting point for migrating to Narrowband IoT (NB-IoT) or 5G solutions.
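Why random access becomes the bottleneck can be illustrated with a toy Monte Carlo of preamble collisions: a device's attempt succeeds only if no other device picks the same preamble in that opportunity, which matches the closed form (1 - 1/M)^(N-1). The preamble count of 54 used below is a commonly cited figure for contention-based LTE random access, not a parameter taken from this paper:

```python
import random
from collections import Counter

def rach_success_rate(n_devices, n_preambles, trials=20_000, seed=3):
    """Monte Carlo: count devices whose randomly chosen preamble was not
    also chosen by any other device in the same random-access opportunity."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(trials):
        counts = Counter(rng.randrange(n_preambles) for _ in range(n_devices))
        ok += sum(1 for c in counts.values() if c == 1)
    return ok / (trials * n_devices)

def analytic(n_devices, n_preambles):
    """Closed-form first-attempt success probability: (1 - 1/M) ** (N - 1)."""
    return (1 - 1 / n_preambles) ** (n_devices - 1)
```

With 54 preambles, 10 contending devices succeed on the first attempt roughly 85% of the time, while 100 devices drop to roughly 16%; it is this congestion behavior that motivates the paper's detailed simulations and the migration discussion toward NB-IoT and 5G.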
As part of a GPS project at Fachhochschule Offenburg, a concept for an experimental navigation receiver was developed, for which the digital part was designed and built. The circuit was to be realized with user-programmable gate arrays from Xilinx (LCAs), which had already proven themselves in another project at the university.
The following gives the reader an overview of the GPS system and of the development with the LCAs.
The main advantage of mobile context-aware applications is that they provide effective, tailored services by considering the environmental context, such as location, time, nearby objects and other data, and by adapting their functionality to changing situations without explicit user interaction. The idea behind Location-Based Services (LBS) and Object-Based Services (OBS) is to offer fully customizable services for user needs according to the location or the objects in a mobile user's vicinity. However, developing mobile context-aware software is considered one of the most challenging application domains, not least because of the built-in sensors that are part of a mobile device. Visual Programming Languages (VPL) and hybrid visual programming languages are innovative approaches to addressing this inherent complexity. The key contribution of our new development approach for location- and object-based mobile applications is a use-case-driven process, based on use case templates and visual code templates, that enables even programming beginners to create context-aware mobile applications. An example of the use of the development approach is presented, and open research challenges and perspectives for its further development are formulated.
The term “attribute transfer” refers to the task of altering images in such a way that the semantic interpretation of a given input image is shifted towards an intended direction, quantified by semantic attributes. Prominent example applications are photorealistic changes of facial features and expressions, like changing the hair color, adding a smile or enlarging the nose, or altering the entire context of a scene, like transforming a summer landscape into a winter panorama. Recent advances in attribute transfer are mostly based on generative deep neural networks, using various techniques to manipulate images in the latent space of the generator. In this paper, we present a novel method for the common sub-task of local attribute transfer, where only parts of a face have to be altered in order to achieve semantic changes (e.g. removing a mustache). In contrast to previous methods, where such local changes have been implemented by generating new (global) images, we propose to formulate local attribute transfer as an inpainting problem. By removing and regenerating only parts of images, our “Attribute Transfer Inpainting Generative Adversarial Network” (ATI-GAN) is able to utilize local context information to focus on the attributes while keeping the background unmodified, yielding visually sound results.
Live streaming of events over an IP network as a catalyst in media technology education and training
(2020)
The paper describes how students are involved in applied research when setting up the technology and running a live event. Real-time IP transmission in broadcast environments via fiber optics will become increasingly important in the future. It is therefore necessary to create a platform in this area where students can learn how to handle IP infrastructure and fiber optics. With this in mind, we have built a fully functional TV control room that is completely IP-based. The authors present the steps in the development of the project and show the advantages of the proposed digital solutions. The IP network creates synergies between the teams involved: the participants of the robot competition and the members of the media team. These results are presented in the paper. Our activities aim to awaken enthusiasm for research and technology in young people, and broadcasts of live events are a good opportunity for "hands-on" activities.
In the development of new vehicles, increasing customer comfort requirements and stricter safety regulations often result in an increase in weight. Nevertheless, in order to meet the demand for reduced fuel consumption, it is necessary within the product development process to implement complex and filigree lightweight structures. This contribution therefore addresses the potential of generatively designed components for fiber-reinforced additive manufacturing (FRAM). Several commercial systems for this application are currently available on the market, so a comparison of the systems is first made to determine a suitable one. Then, a highly stressed, safety-relevant chassis component of a race car is generatively designed and manufactured using FRAM, applying a short-fiber-reinforced matrix with additional long-fiber carbon reinforcement. Finally, tensile tests are carried out to check the mechanical properties, and relevant properties such as weight and cost are recorded in order to compare them with conventionally developed and manufactured components.
In this paper the fatigue life of three cast iron materials, namely EN-GJS-700, EN-GJV-450 and EN-GJL-250, is predicted for combined thermomechanical fatigue and high cycle fatigue loading. To this end, a mechanism-based model built on microcrack growth is used. The model considers crack growth due to low-frequency loading (thermomechanical and low cycle fatigue) and due to high cycle fatigue. To determine the model parameters for the cast iron materials, fatigue tests are performed under combined loading and crack growth is measured at room temperature using the replica technique. Superimposed high cycle fatigue leads to accelerated crack growth as soon as a critical crack length, and thus the threshold stress intensity factor, is exceeded. The model takes this effect into account and predicts the fatigue lives of all investigated cast iron materials under combined loadings very well.
One of the major challenges impeding the energy transition is the intermittency of solar and wind electricity generation due to their dependency on the weather. Demand-side energy flexibility contributes considerably to mitigating the supply/demand imbalances resulting from external influences such as the weather. As some of the largest electricity consumers, industrial enterprises offer a high demand-side flexibility potential through their production processes and on-site energy assets. Methods are therefore needed that enable this energy flexibility and ensure an active participation of such enterprises in the electricity markets, especially under variable electricity prices. This paper presents a generic model library for an industrial enterprise, implemented with optimal control for energy flexibility purposes. The components in the model library represent the typical technical units of an industrial enterprise at the material, media and energy flow levels, together with their operative constraints. A case study of a plastics manufacturing plant using the generic model library is also presented, in which the results of two simulations with different electricity prices are compared so that the behavior of the model can be assessed. The results show that the model provides an optimal scheduling of the manufacturing system according to the variations in electricity prices, and ensures optimal control of the utilities and energy systems needed for production.
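The price-responsive scheduling idea can be reduced to a deliberately tiny example: a fully shiftable load simply runs in the cheapest hours of a day-ahead price curve. The prices and runtime below are invented, and a real model like the paper's would add production, media and energy-flow constraints:

```python
def cheapest_schedule(prices, runtime_hours):
    """Pick the cheapest hours for a freely shiftable load (toy model:
    the load has no minimum runtime, ramp or precedence constraints)."""
    hours = sorted(range(len(prices)), key=lambda h: prices[h])[:runtime_hours]
    return sorted(hours), sum(prices[h] for h in hours)

# Hypothetical day-ahead prices (EUR/MWh) over 24 hours for a 1 MW load
prices = [42, 38, 35, 33, 30, 31, 45, 60, 72, 65, 55, 50,
          48, 46, 44, 47, 58, 70, 80, 75, 62, 52, 44, 40]
hours, cost = cheapest_schedule(prices, 4)
```

Here the four run hours land in the cheap early-morning window; the operative constraints of real production units (material buffers, utility coupling, minimum up-times) are exactly what turns this one-liner into the optimal-control problem the model library addresses.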
The possibility of digitally linking geographical locations with tasks, challenges or learning materials has inspired a variety of applications beyond mathematics education as well. This contribution presents an exemplary selection of such applications and attempts to systematize their technical, organizational and conceptual design elements. The discussion is intended as inspiration for the creation of math trails and for the further development of technical solutions for use in teaching.
Legacy industrial communication protocols have proved robust and functional. Over the last decades, the industry has invented completely new or advanced versions of these legacy communication solutions. However, even with the high adoption rate of the new solutions, the majority of industrial applications still run on legacy, mostly fieldbus-related technologies. Profibus is one of the technologies that keep growing in the market, albeit with slowing growth in recent years. A retrofit technology is therefore fundamental: one that connects these technologies to the Internet of Things and taps the ever-growing potential of data analysis, predictive maintenance and cloud-based applications, while at the same time not changing a running system.
AV delay (AVD) optimization can improve hemodynamics and avoid non-response to cardiac resynchronization therapy (CRT). The optimal AVD can be approximated as the sum of the individual implant-related interatrial conduction interval and a mean electromechanical interval of about 50 ms. We searched for methods to facilitate automatic, implant-based AVD optimization. In 25 patients (19 m, 6 f, age 65±8 yrs.) with Medtronic Insync III Marquis CRT-D series systems and the left ventricular electrode at the lateral or posterolateral wall, we determined interatrial conduction intervals from the telemetric left ventricular tip versus superior vena cava coil electrogram (LVCE). Compared with esophageal measurements, the optimal AV delay derived from the LVCE showed good correlation (k=0.98, p=0.01) with a difference of only 1.5±4.9 ms. LVCE is therefore feasible for determining interatrial conduction intervals in order to automate AVD optimization in CRT-D pacing, promising increased accuracy compared with other algorithms.
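The approximation stated in the abstract is a simple additive rule; written out (with a function name of our own choosing and a hypothetical input value):

```python
def approx_optimal_avd(interatrial_ms: float, electromech_ms: float = 50.0) -> float:
    """Optimal AV delay, approximated per the abstract as the implant-measured
    interatrial conduction interval plus a mean electromechanical interval
    of about 50 ms."""
    return interatrial_ms + electromech_ms

# Hypothetical example: a measured interatrial interval of 100 ms
avd = approx_optimal_avd(100.0)
```

Since the interatrial interval is the only patient-specific input, an implant that can measure it telemetrically (as via the LVCE) has everything it needs to set the AV delay automatically.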
Cardiac resynchronization therapy (CRT) with biventricular (BV) pacing is an established therapy in approximately two-thirds of symptomatic heart failure (HF) patients (P) with left bundle branch block (LBBB). The aim of this study was to evaluate left atrial (LA) conduction delay (LACD) and left ventricular (LV) conduction delay (LVCD) using pre-implantational transesophageal electrocardiography (ECG) in sinus rhythm (SR) CRT responder (R) and non-responder (NR).
Methods: SR HF P (n=52, age 63.6±10.4 years; 6 females, 46 males) with New York Heart Association (NYHA) class 3.0±0.2, 24.4±7.1 % LV ejection fraction and 171.2±37.6 ms QRS duration (QRSD) underwent bipolar filtered transesophageal LA and LV ECG recording with a hemispherical-electrode (HE) TO catheter (Osypka AG, Rheinfelden, Germany). LACD was measured from the onset of the P-wave in the surface ECG to the onset of the LA deflection in the LA ECG. LVCD was measured from the onset of QRS in the surface ECG to the onset of the LV deflection in the LV ECG.
Results: There were 78.8 % SR CRT R (n=41), with 171.2±36.9 ms QRSD, 73.3±25.7 ms LACD, 80.0±24.0 ms LVCD and a QRSD-LVCD ratio of 2.3±0.5. In SR CRT R, QRSD correlated with LACD (r=0.688, P<0.001) and LVCD (r=0.699, P<0.001). There were 21.2 % SR CRT NR (n=11), with 153.4±22.4 ms QRSD (P=0.133), 69.8±24.8 ms LACD (n=6, P=0.767), 54.2±31.0 ms LVCD (P<0.0046) and a QRSD-LVCD ratio of 3.9±2.5 (P<0.001). In SR CRT NR, QRSD did not correlate with LACD (r=-0.218, P=0.678) or LVCD (r=0.042, P=0.903). During a 22.8±21.3-month CRT follow-up, the NYHA class of CRT R improved from 3.1±0.3 to 1.9±0.3 (P<0.001). In CRT NR, the NYHA class did not improve (2.9±0.4 to 2.9±0.2, P=1) during 11.2±9.8 months of BV pacing.
Conclusions: Transesophageal LA and LV ECG with HE can be utilized to analyse LACD and LVCD in HF P. Pre-implantational LVCD and QRSD-LVCD-ratio may be additional useful parameters to improve P selection for SR CRT.
Learning to Walk With Toes
(2020)
This paper explains how a model-free approach (with respect to both the robot model and the behavior to learn) can facilitate learning to walk from scratch. It is applied to a simulated Nao robot with toes. Results show an improvement of 30% in speed compared to a model without toes, and also compared to our model-based approach, though with less stability.
In this paper we show that a model-free approach to learning behaviors in joint space can be successfully used to utilize the toes of a humanoid robot. Keeping the approach model-free makes it applicable to any kind of humanoid robot, or to robots in general. Here we focus on the benefit for robots with toes, which is otherwise difficult to exploit. The task was to learn different kick behaviors on simulated Nao robots with toes in the RoboCup 3D soccer simulator. As a result, the robot learned to step onto its toes for a kick that performs 30% better than the same kick learned without toes.
Nowadays, many applications, companies and parts of society are expected to be always available online. However, according to [Times, Oct, 31 2011], 73% of the world population do not use the internet and are thus not "online" at all. The most common reasons for not being "online" are expensive personal computer equipment and high costs for data connections, especially in the developing countries that comprise most of the world's population (e.g. parts of Africa, Asia, Central and South America). It seems, however, that these countries are leap-frogging the "PC and landline" age and moving directly to the "mobile" age: decreasing prices for smartphones with internet connectivity and PC-like operating systems make it more affordable for these parts of the world population to join the "always-online" community. Storing learning content in a way accessible to everyone, including mobile and smart phones, therefore seems beneficial: the content can then be accessed by personal computers as well as by mobile and smart phones, and thus by a wide range of devices and users. A new trend in Internet technologies is the move to "the cloud". This paper discusses the changes, challenges and risks of storing learning content in the "cloud", based on experience gathered while evaluating the changes necessary to make our solutions and systems "cloud-ready".
In this work, we evaluate two different image clustering objectives, k-means clustering and correlation clustering, in the context of Triplet Loss induced feature space embeddings. Specifically, we train a convolutional neural network to learn discriminative features by optimizing two popular versions of the Triplet Loss in order to study their clustering properties under the assumption of noisy labels. Additionally, we propose a new, simple Triplet Loss formulation, which shows desirable properties with respect to formal clustering objectives and outperforms the existing methods. We evaluate all three Triplet Loss formulations for k-means and correlation clustering on the CIFAR-10 image classification dataset.
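For reference, one of the popular Triplet Loss versions the abstract alludes to is the standard margin-based formulation; the NumPy sketch below illustrates that variant (the paper's new formulation is not reproduced here). It pulls anchor-positive pairs together and pushes anchor-negative pairs at least `margin` further apart in the embedding space.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Margin-based Triplet Loss on embedding vectors.

    Zero loss once the anchor-negative distance exceeds the
    anchor-positive distance by at least `margin`."""
    d_pos = np.sum((anchor - positive) ** 2, axis=-1)  # squared distances
    d_neg = np.sum((anchor - negative) ** 2, axis=-1)
    return np.maximum(d_pos - d_neg + margin, 0.0)

# A triplet that already satisfies the margin incurs zero loss:
a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])   # close to the anchor
n = np.array([1.0, 1.0])   # far from the anchor
loss_ok = triplet_loss(a, p, n)   # -> 0.0
```

Embeddings trained this way tend to form compact same-class groups, which is what makes them amenable to k-means and correlation clustering afterwards.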
Environmentally friendly implementation of new technologies and eco-innovative solutions often faces additional secondary ecological problems. Existing biological systems, on the other hand, have a lower environmental impact than human-made products or technologies. The paper defines a research agenda for identifying the underlying eco-inventive principles used in natural systems created through evolution. Finally, the paper proposes a comprehensive method for capturing eco-innovation principles in biological systems, complementary to existing biomimetic methods and the TRIZ methodology, and illustrates it with an example.
As engineering graduates and specialists frequently lack the advanced skills and knowledge required to practise eco-innovation systematically, the paper proposes new learning materials and educational tools in the field of eco-innovation and evaluates the learning experience and outcomes. This programme aims to strengthen students' skills and motivation to identify and creatively overcome secondary eco-contradictions in cases where additional environmental problems appear as negative side effects of eco-friendly solutions. The paper evaluates the efficiency of the proposed interdisciplinary tool for systematic eco-innovation, including creative semi-automatic knowledge-based idea generation and concept development. It analyses the learning experience and identifies the factors that affect the students' eco-innovation performance.
In anisotropic media, the existence of leaky surface acoustic waves is a well-known phenomenon. Very recently, their analogs at the apex of an elastic silicon wedge have been found in experiments using laser-ultrasonics. In addition to a wedge-wave (WW) pulse with low speed, a pseudo-wedge-wave (p-WW) pulse was found with a velocity higher than that of shear bulk waves propagating in the same direction. With a probe-beam-deflection technique, the propagation of the WW pulses was monitored on one of the faces of the wedge at variable distance from the apex. In this way, their depth structure and the leakage of the p-WW could be visualized directly. Calculations were carried out using a method based on a representation of the displacement field in Laguerre functions. This method was validated by calculating the surface density of states in anisotropic media and comparing the results with those obtained from the surface Green's tensor. The approach was then extended to the continuum of acoustic modes in infinite wedges with fixed wave-vector along the apex. These calculations confirmed the measured speeds of the WW and p-WW pulses.
Generative adversarial networks are the state-of-the-art approach to learned synthetic image generation. Although early successes were mostly unsupervised, this trend has gradually been superseded by approaches based on labelled data. These supervised methods allow much finer-grained control of the output image, offering more flexibility and stability. Nevertheless, the main drawback of such models is the need for annotated data. In this work, we introduce a novel framework that benefits from two popular learning techniques, adversarial training and representation learning, and takes a step towards unsupervised conditional GANs. In particular, our approach exploits the structure of a latent space (learned via representation learning) and employs it to condition the generative model. In this way, we break the traditional dependency between condition and label, substituting the latter with unsupervised features coming from the latent space. Finally, we show that this new technique is able to produce samples on demand while keeping the quality of its supervised counterpart.
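One common way to condition a generator without human labels is to cluster the learned latent features and use the cluster ids as pseudo-labels. The sketch below illustrates that general idea with a small k-means routine on synthetic "features"; it is a hedged illustration of the concept, not the paper's exact conditioning procedure, and all names and data here are made up.

```python
import numpy as np

def pseudo_labels(features, k=3, iters=20, init=None):
    """Cluster representation-learning features with k-means; the cluster
    ids then act as pseudo-labels in place of human annotations when
    conditioning a generative model."""
    centers = features[init] if init is not None else features[:k].copy()
    centers = centers.astype(float)
    for _ in range(iters):
        # distance of every feature vector to every center: shape (n, k)
        d = np.linalg.norm(features[:, None] - centers[None], axis=-1)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):           # skip empty clusters
                centers[j] = features[labels == j].mean(axis=0)
    return labels

# Three well-separated blobs of synthetic "latent features":
rng = np.random.default_rng(1)
feats = np.concatenate([rng.normal(c, 0.1, (20, 2)) for c in (0.0, 5.0, 10.0)])
labels = pseudo_labels(feats, k=3, init=[0, 20, 40])  # one seed per blob
```

With one initial center per blob, each blob is recovered as its own pseudo-class, so the labels could serve directly as conditioning input to a generator.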
Enabling ultra-low latency is one of the major drivers for the development of future cellular networks to support delay-sensitive applications including factory automation, autonomous vehicles and the tactile internet. Narrowband Internet of Things (NB-IoT) is a 3rd Generation Partnership Project (3GPP) Release 13 standardized cellular network currently optimized for massive Machine Type Communication (mMTC). To reduce the latency in cellular networks, 3GPP has proposed latency reduction techniques including Semi-Persistent Scheduling (SPS) and short Transmission Time Interval (sTTI). In this paper, we investigate the potential of adopting both techniques in NB-IoT networks and provide a comprehensive performance evaluation. We first analyze these techniques and then implement them in an open-source network simulator (NS3). Simulations are performed with a focus on the Cat-NB1 User Equipment (UE) category to evaluate the uplink user-plane latency. Our results show that SPS and sTTI have the potential to greatly reduce the latency in NB-IoT systems. We believe that both techniques can be integrated into NB-IoT systems to position NB-IoT as a preferred technology for low-data-rate Ultra-Reliable Low-Latency Communication (URLLC) applications before 5G is fully rolled out.
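The leverage of sTTI can be seen in a back-of-the-envelope uplink latency model: when scheduling and processing delays are counted in TTIs, shortening the TTI shrinks them proportionally. The model and delay figures below are illustrative assumptions for this sketch, not the exact 3GPP timing or the paper's simulation results.

```python
def uplink_latency_ms(tti_ms, sr_period_ms, grant_delay_ttis=3, proc_delay_ttis=3):
    """Illustrative uplink user-plane latency model (assumed figures):
    average wait for a scheduling-request (SR) slot, plus the SR/grant
    cycle, plus UE processing and the data transmission itself. Delays
    are counted in TTIs, so a shorter TTI reduces them proportionally."""
    sr_wait = sr_period_ms / 2.0                    # average SR wait
    grant_cycle = (1 + grant_delay_ttis) * tti_ms   # SR tx + eNB grant delay
    data = (1 + proc_delay_ttis) * tti_ms           # UE processing + data tx
    return sr_wait + grant_cycle + data

legacy = uplink_latency_ms(tti_ms=1.0, sr_period_ms=10.0)      # 1 ms legacy TTI
short = uplink_latency_ms(tti_ms=1.0 / 7, sr_period_ms=10.0)   # ~2-symbol sTTI
```

Under these assumed numbers, the TTI-proportional part of the latency shrinks sevenfold while the SR waiting time is untouched, which is why SPS (removing the SR/grant cycle) and sTTI complement each other.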
The next generation of cellular networks is expected to improve reliability, energy efficiency, data rate, capacity and latency. Originally, Machine Type Communication (MTC) was designed for low-bandwidth, high-latency applications such as environmental sensing and smart dustbins, but there is additional demand for applications with low latency requirements, such as industrial automation and driverless cars. Improvements are required in 4G Long Term Evolution (LTE) networks towards the development of next generation cellular networks providing very low latency and high reliability. To this end, we present an in-depth analysis of the parameters that contribute to the latency in 4G networks, along with a description of latency reduction techniques. We implement and validate these latency reduction techniques in the open-source network simulator (NS3) for the narrowband user equipment category Cat-M1 (LTE-M) to analyze the improvements. The results presented are a step towards enabling narrowband Ultra-Reliable Low-Latency Communication (URLLC) networks.
The excessive control signaling required for dynamic scheduling in Long Term Evolution networks impedes the deployment of ultra-reliable low-latency applications. Semi-persistent scheduling was originally designed for constant-bit-rate voice applications; however, its very low control overhead makes it a potential latency reduction technique in Long Term Evolution. In this paper, we investigate resource scheduling in narrowband fourth generation Long Term Evolution networks through Network Simulator (NS3) simulations. The current release of NS3 does not include a semi-persistent scheduler for the Long Term Evolution module. Therefore, we developed the semi-persistent scheduling feature in NS3 to evaluate and compare its performance in terms of uplink latency. We evaluate dynamic scheduling and semi-persistent scheduling in order to analyze the impact of the resource scheduling method on uplink latency.
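The difference between the two scheduling methods can be captured in a simple first-order latency model: dynamic scheduling pays for a scheduling-request/grant cycle before each transmission, while semi-persistent scheduling transmits at the next pre-allocated occasion. The delay figures below are illustrative assumptions, not measured NS3 results.

```python
def dynamic_latency_ms(sr_period_ms=10.0, grant_cycle_ms=8.0, tti_ms=1.0):
    """Dynamic scheduling: average wait for a scheduling-request (SR)
    opportunity, then the full SR/grant signaling cycle, then the data
    TTI (all figures are assumptions for this sketch)."""
    return sr_period_ms / 2.0 + grant_cycle_ms + tti_ms

def sps_latency_ms(sps_period_ms=10.0, tti_ms=1.0):
    """Semi-persistent scheduling: resources are pre-allocated, so the UE
    only waits for its next configured occasion and transmits."""
    return sps_period_ms / 2.0 + tti_ms

dyn = dynamic_latency_ms()
sps = sps_latency_ms()
```

Even with equal periodicities, dropping the per-packet grant cycle removes a fixed signaling cost from every uplink transmission, which is the effect the NS3 comparison quantifies.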