iSign - internet based simulation of guided wave propagation - is a learning environment for online laboratory experiments. Its client-server architecture uses the tool F3D on the server side to compute electromagnetic fields in 3D structures. An Apache web server (running on Linux) serves the theory and exercise sections as well as the learning-system administration. An HP-UX simulation server controls and monitors the multi-stage simulation process. A MySQL database enables dynamic web-page generation and stores simulation, project and user data. Java applets, JavaServer Pages and JavaBeans generate the interactive client interface for input, result visualization and online virtual reality. A uniformly designed user interface hides the complexity of the system.
Ensuring that software applications present their users with the most recent version of their data is not trivial. Self-adjusting computations are a technique for automatically and efficiently recomputing output data whenever some input changes.
This article describes the software architecture of a large, commercial software system built around a framework for coarse-grained self-adjusting computations in Haskell. It discusses advantages and disadvantages based on longtime experience. The article also presents a demo of the system and explains the API of the framework.
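The core mechanism of self-adjusting computation - tracking which outputs depend on which inputs and recomputing only on change - can be illustrated with a minimal sketch. The paper's framework is in Haskell and far more general; the Python classes below are invented purely for illustration:

```python
class Input:
    """A leaf value; changing it marks all dependent cells dirty."""
    def __init__(self, value):
        self.value = value
        self.dependents = []

    def set(self, value):
        self.value = value
        for cell in self.dependents:
            cell.invalidate()

class Computed:
    """Caches its result and recomputes only after an input has changed."""
    def __init__(self, fn, *inputs):
        self.fn, self.inputs = fn, inputs
        self.dirty, self.cache = True, None
        for inp in inputs:
            inp.dependents.append(self)

    def invalidate(self):
        self.dirty = True

    def get(self):
        if self.dirty:
            self.cache = self.fn(*(i.value for i in self.inputs))
            self.dirty = False
        return self.cache

a, b = Input(2), Input(3)
total = Computed(lambda x, y: x + y, a, b)
print(total.get())  # -> 5
a.set(10)           # invalidates `total`
print(total.get())  # -> 13, recomputed exactly once
```

A real framework additionally tracks dependencies dynamically and propagates changes only along affected paths; the coarse-grained variant described in the article trades recomputation granularity for simplicity.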
Strings
(2020)
This article presents the currently ongoing development of an audiovisual performance work with the title Strings. This work provides an improvisation setting for a violinist, two laptop performers, and two generative systems. At the core of Strings lies an approach that establishes a strong correlation among all participants by means of a shared physical principle. The physical principle is that of a vibrating string. The article discusses how this principle is used in both natural and simulated forms as main interaction layer between all performers and as natural or generative principle for creating audio and video.
A simple model is introduced that describes the interaction of surface acoustic waves (SAWs) with a 2D periodic array of objects on the surface that give rise to internal resonances. Such objects may be high-aspect ratio structures like micro-pillars fabricated of a material different from that of the substrate. The model allows for an approximate determination of the band structure for the acoustic modes in such systems. Results are presented for the dependence on structural parameters of a total bandgap in the non-radiative regime of a semi-infinite substrate, and it is shown how the frequency and radiation damping of vibrational modes can be determined that are associated with defects in the periodic 2D array.
IPv6 over LoRaWAN™
(2016)
Although short-range wireless communication explicitly targets local and regional applications, range continues to be a highly important issue. The range directly depends on the so-called link budget, which can be increased by the choice of modulation and coding schemes. The recent transceiver generation in particular comes with extensive and flexible support for software-defined radio (SDR). The SX127x family from Semtech Corp. is a member of this device class and promises significant benefits for range, robust performance, and battery lifetime compared to competing technologies. This contribution gives a short overview of the technologies behind Long Range (LoRa™) and the corresponding Layer 2 protocol (LoRaWAN™). It particularly describes how the Internet Protocol, i.e. IPv6, can be combined with LoRaWAN™, so that the latter can be directly integrated into a full-fledged Internet of Things (IoT). The proposed solution, which we name 6LoRaWAN, has been implemented and tested; results of the experiments are also shown in this paper.
Signal detection and bandwidth estimation, also known as channel segmentation or information channel estimation, is a perpetual topic in communication systems. In the field of radio monitoring this issue is extremely challenging, since unforeseeable effects like fading occur unpredictably. In addition, most radio monitoring devices normally scan a wide frequency range of several hundred MHz and have to detect a multitude of different signals, varying in signal power, bandwidth and spectral shape. Since narrowband sensing techniques cannot be directly applied, most radio monitoring devices use Nyquist wideband sensing to cover the huge frequency range. In practice, sensing is normally conducted by an FFT sweep spectrum analyzer that delivers the power spectral density (PSD) values to the radio monitoring system. Channel segmentation based on the PSD values is the initial step of a comprehensive signal analysis in a radio monitoring system. In this paper, a novel approach for channel segmentation is presented that is based on a quantization and a histogram evaluation of the measured PSD. It will be shown that only the combination of both evaluations leads to a successful automatic channel segmentation. The performance of the proposed algorithm is demonstrated in a real radio monitoring scenario.
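The combination of quantization and histogram evaluation can be illustrated with a toy sketch: quantize the measured PSD, take the most frequent quantized level as the noise floor, and mark contiguous above-floor bins as channels. All parameters and the test spectrum below are invented; the paper's actual algorithm is more elaborate:

```python
import numpy as np

def segment_channels(psd_db, step_db=2.0, margin_db=6.0):
    """Toy channel segmentation: quantize the PSD, estimate the noise
    floor as the histogram mode of the quantized levels, and return
    (start, stop) bin ranges of contiguous regions above floor+margin."""
    q = np.round(psd_db / step_db) * step_db           # quantization
    levels, counts = np.unique(q, return_counts=True)  # histogram
    noise_floor = levels[np.argmax(counts)]            # most frequent level
    mask = psd_db > noise_floor + margin_db
    segments, start = [], None
    for i, on in enumerate(mask):
        if on and start is None:
            start = i
        elif not on and start is not None:
            segments.append((start, i - 1))
            start = None
    if start is not None:
        segments.append((start, len(mask) - 1))
    return noise_floor, segments

rng = np.random.default_rng(0)
psd = -100 + rng.normal(0, 0.5, 200)   # noise floor around -100 dBm
psd[40:60] += 30                        # a strong signal
psd[120:140] += 20                      # a second one
floor, segs = segment_channels(psd)
print(floor, segs)  # -> -100.0 [(40, 59), (120, 139)]
```

The histogram step makes the noise-floor estimate robust against occasional strong bins, which a simple mean or median over the whole sweep would not be.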
In an experience economy, market competition in software branches is becoming more and more intense. Technical innovations, global retail practices and the multidimensional conception of experiences provide both opportunities and challenges for companies worldwide. Retailers strive for an optimized conversion rate, but poor UX still abounds. Germany-based companies in particular are less evolved in an international comparison of industrialized economies. The value of integrating users in the development process is recognized, but methodologies must be carefully incorporated into existing agile workflows. The goal of this study is to bridge the gaps between the interests of the internal agency, the external client and the users. The contribution is four-fold: an overview of the current status of customer centricity in the e-commerce branch of trade is provided (I). Based on this corpus, a methodical framework aiming to incorporate the experience logic in UX practices within an agile project team is presented (II). The framework is applied in a single case study: the shop relaunch of a motorbike accessory store (III). Finally, all interest groups (UX, development and project management) are incorporated in the qualitative content analysis (IV).
Wow, You Are Terrible at This!: An Intercultural Study on Virtual Agents Giving Mixed Feedback
(2020)
While the effects of virtual agents in terms of likeability, uncanniness, etc. are well explored, it is unclear how their appearance and the feedback they give affect people's reactions. Is critical feedback from an agent embodied as a mouse or a robot taken less seriously than from a human agent? In an intercultural study with 120 participants from Germany and the US, participants had to find hidden objects in a game and received feedback on their performance from virtual agents with different appearances. As some levels were designed to be unsolvable, critical feedback was unavoidable. We hypothesized that feedback would be taken more seriously the more human the agent looked. Also, we expected the subjects from the US to react more sensitively to criticism. Surprisingly, our results showed that the agents' appearance did not significantly change the participants' perception. Also, while we found highly significant differences in inspirational and motivational effects as well as in perceived task load between the two cultures, the reactions to criticism were contrary to expectations based on established cultural models. This work improves our understanding of how affective virtual agents are to be designed, both with respect to culture and to dialogue strategies.
Security in IT systems, particularly in embedded devices like Cyber Physical Systems (CPSs), has become an important matter of concern as it is the prerequisite for ensuring privacy and safety. Among a multitude of existing security measures, the Transport Layer Security (TLS) protocol family offers mature and standardized means for establishing secure communication channels over insecure transport media. In the context of classical IT infrastructure, its security with regard to protocol and implementation attacks has been subject to extensive research. As TLS protocols find their way into embedded environments, we consider the security and robustness of implementations of these protocols specifically in the light of the peculiarities of embedded systems. We present an approach for systematically checking the security and robustness of such implementations using fuzzing techniques and differential testing. Despite its origin in testing TLS implementations, we expect our approach to be likewise applicable, with moderate effort, to implementations of other cryptographic protocols.
The Transport Layer Security (TLS) protocol is a cornerstone of secure network communication, not only for online banking, e-commerce, and social media, but also for industrial communication and cyber-physical systems. Unfortunately, implementing TLS correctly is very challenging, as becomes evident from the high frequency of bugfixes filed for many TLS implementations. Given the high significance of TLS, advancing the quality of implementations is a sustained pursuit. We strive to support these efforts by presenting a novel, response-distribution guided fuzzing algorithm for differential testing of black-box TLS implementations. Our algorithm generates highly diverse and mostly-valid TLS stimulation messages, which evoke more behavioral discrepancies in TLS server implementations than other algorithms. We evaluate our algorithm using 37 different TLS implementations and discuss, by means of a case study, how the resulting data allows not only to assess and improve implementations of TLS, but also to identify underspecified corner cases. We introduce suspiciousness as a per-implementation metric of anomalous implementation behavior and find that more recent or bug-fixed implementations tend to have a lower suspiciousness score. Our contribution is complementary to existing tools and approaches in the area, and can help reveal implementation flaws and avoid regression. While being presented for TLS, we expect our algorithm's guidance scheme to be applicable and useful also in other contexts. Source code and data are made available for fellow researchers in order to stimulate discussions and invite others to benefit from and advance our work.
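The intuition behind a suspiciousness score - an implementation is anomalous when its responses deviate from those of the field - can be sketched as follows. This majority-vote version is a deliberate simplification, not the paper's exact metric, and the response strings are invented:

```python
from collections import Counter

def suspiciousness(responses):
    """Toy per-implementation anomaly score for differential testing:
    for each stimulus, an implementation is 'suspicious' if its response
    deviates from the majority response across all implementations; the
    score is the fraction of stimuli on which it deviates.
    `responses` maps implementation name -> list of responses, one per
    stimulus (same order for every implementation)."""
    impls = list(responses)
    n = len(next(iter(responses.values())))
    deviations = {impl: 0 for impl in impls}
    for i in range(n):
        answers = [responses[impl][i] for impl in impls]
        majority, _ = Counter(answers).most_common(1)[0]
        for impl, ans in zip(impls, answers):
            if ans != majority:
                deviations[impl] += 1
    return {impl: deviations[impl] / n for impl in impls}

# Hypothetical responses of three servers to four fuzzed stimuli:
responses = {
    "tls_a": ["alert:40", "ok", "ok", "alert:10"],
    "tls_b": ["alert:40", "ok", "ok", "alert:10"],
    "tls_c": ["alert:40", "close", "ok", "alert:50"],
}
print(suspiciousness(responses))
# tls_c deviates on 2 of 4 stimuli -> {'tls_a': 0.0, 'tls_b': 0.0, 'tls_c': 0.5}
```

A deviation alone does not prove a bug - the majority may itself be wrong or the specification underspecified - which is why the paper pairs the metric with manual case studies.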
PROFINET Security: A Look on Selected Concepts for Secure Communication in the Automation Domain
(2023)
We provide a brief overview of the cryptographic security extensions for PROFINET, as defined and specified by PROFIBUS & PROFINET International (PI). These come in three hierarchically defined Security Classes, called Security Classes 1, 2 and 3. Security Class 1 provides basic security improvements with moderate implementation impact on PROFINET components. Security Classes 2 and 3, in contrast, introduce an integrated cryptographic protection of PROFINET communication. We first highlight and discuss the security features that the PROFINET specification offers for future PROFINET products. Then, as our main focus, we take a closer look at some of the technical challenges that were faced during the conceptualization and design of Security Class 2 and 3 features. In particular, we elaborate on how secure application relations between PROFINET components are established and how disruption-free availability of a secure communication channel is guaranteed despite the need to refresh cryptographic keys regularly. The authors are members of the PI Working Group CB/PG10 Security.
The Datagram Transport Layer Security (DTLS) protocol has been designed to provide end-to-end security over unreliable communication links. Where its connection establishment is concerned, DTLS copes with potential loss of protocol messages by implementing its own loss detection and retransmission scheme. However, the default scheme turns out to be suboptimal for links with high transmission error rates and low data rates, such as wireless links in electromagnetically harsh industrial environments. Therefore, in this paper, as a first step we provide an analysis of the standard DTLS handshake's performance under such adverse transmission conditions. Our studies are based on simulations that model message loss as the result of bit transmission errors. We consider several handshake variants, including endpoint authentication via pre-shared keys or certificates. As a second step, we propose and evaluate modifications to the way message loss is dealt with during the handshake, making DTLS deployable in situations which are prohibitive for default DTLS.
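The simulation setup described above can be illustrated with a small Monte-Carlo sketch: the packet loss probability is derived from the bit error rate, and a lost handshake flight is retransmitted with the doubling timer prescribed by the DTLS standard. Flight sizes, error rates and the simplified success model are invented for illustration and are not the paper's parameters:

```python
import random

def packet_loss_prob(ber, length_bytes):
    """A packet is lost if any of its bits is corrupted (no FEC assumed)."""
    return 1.0 - (1.0 - ber) ** (8 * length_bytes)

def mean_handshake_time(flights, ber, initial_timeout=1.0, max_retries=7,
                        trials=20000, seed=42):
    """Monte-Carlo estimate of handshake completion under bit-error-induced
    loss. `flights` lists handshake flight sizes in bytes; a flight is
    retransmitted whole on timeout, with the timer doubling each time
    (as in standard DTLS). Returns (completion rate, mean added delay)."""
    rng = random.Random(seed)
    total_delay, completed = 0.0, 0
    for _ in range(trials):
        delay, ok = 0.0, True
        for size in flights:
            p = packet_loss_prob(ber, size)
            timeout = initial_timeout
            for _ in range(max_retries + 1):
                if rng.random() > p:      # this transmission got through
                    break
                delay += timeout          # wait out the timer, resend
                timeout *= 2.0            # standard DTLS doubles the timer
            else:
                ok = False                # retry budget exhausted
                break
        if ok:
            total_delay += delay
            completed += 1
    return completed / trials, total_delay / completed

rate, mean_t = mean_handshake_time([120, 500, 300], ber=1e-4)
print(rate, mean_t)
```

Already at a bit error rate of 1e-4, a 500-byte flight is lost roughly a third of the time, so the doubling timer dominates the handshake duration - the motivation for the modified loss handling proposed in the paper.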
Short-term load forecasting (STLF) has been playing a key role in the electricity sector for several decades, due to the need for aligning energy generation with demand and the financial risk connected with forecasting errors. Following the top-down approach, forecasts are calculated for aggregated load profiles, meaning the sum of individual loads from consumers belonging to a balancing group. Due to emerging flexible loads, STLF for individual factories is becoming increasingly relevant. Their load profiles are typically more stochastic than aggregated ones, which imposes new requirements on forecasting methods and tools following a bottom-up approach. The increasing digitalization in industry, with enhanced data availability as well as smart metering, is an enabler for improved load forecasts. There is a need for STLF tools processing live data with a high temporal resolution in the minute range. Furthermore, behind-the-meter (BTM) data from various sources, such as submetering and production planning data, should be integrated into the models. STLF is thus becoming a big data problem, so that machine learning (ML) methods are required. The research project "GaIN" investigates improving the STLF quality of an energy utility using BTM data and innovative ML models. As a review paper, this contribution describes the project scope, proposes a detailed benchmark definition and evaluates the readiness of existing STLF methods to fulfil the described requirements.
The review highlights that recent STLF investigations focus on ML methods; hybrid models in particular are gaining importance. ML can outperform classical methods in terms of degree of automation and forecasting accuracy. Nevertheless, the potential for improving forecasting accuracy with ML models depends on the underlying data and the types of input variables. The methods described in the analyzed publications only partially fulfil the tool requirements for STLF at the company level. There is still a need to develop suitable ML methods that integrate the expanded data basis in order to improve load forecasts at the company level.
Colored glass products with various printing technologies are becoming more important in industry. The aim is to achieve individual solutions within a very short delivery time. The conventional thermal treatment, in which printed glasses are fired in an oven for tempered color printing, suffers from high time consumption, energy consumption and manufacturing cost, which calls for the development of an alternative process.
This paper proposes a laser process to overcome the issues of the conventional treatment and presents the latest results of tempering colored glass. Samples have been analyzed with a scanning electron microscope (SEM). Two different laser systems have been applied, and the glass has been printed with black paste.
Combined heat and power production (CHP) based on solid oxide fuel cells (SOFC) is a very promising technology to achieve high electrical efficiency to cover power demand by decentralized production. This paper presents a dynamic quasi-2D model of an SOFC system which consists of stack and balance of plant and includes thermal coupling between the single components. The model is implemented in Modelica® and validated with experimental data for the stack's U-I characteristic and the thermal behavior. The good agreement between experimental and simulation results demonstrates the validity of the model. Different operating conditions and system configurations are tested, increasing the net electrical efficiency to 57% by implementing an anode off-gas recycle rate of 65%. A sensitivity analysis of characteristic values of the system, such as fuel utilization, oxygen-to-carbon ratio and electrical efficiency, is carried out for different natural gas compositions. The results show that a control strategy adapted to variable natural gas composition and its energy content should be developed in order to optimize the operation of the system.
Complex tourism products with intangible service components are difficult to explain to potential customers. This research elaborates the use of virtual reality (VR) in the field of shore excursions. A theoretical research model based on the technology acceptance model was developed, and hypotheses were proposed. Cruise passengers were invited to test 360° excursion images on a landing page. Data was collected using an online questionnaire. Finally, data was analyzed using the PLS-SEM method. The results provide theoretical implications on technology acceptance model (TAM) research in the field of cruise tourism. Furthermore, the results and implications indicate the potential of virtual 360° shore excursion presentations for the cruise industry.
One of the challenges for autonomous driving in general is to detect objects in the car's camera images. In the Audi Autonomous Driving Cup (AADC), these objects include other cars, adult and child pedestrians, and emergency vehicle lighting. We show that with recent deep learning networks we are able to detect these objects reliably on the limited hardware of the model cars. The same deep network is also used to detect road features such as mid lines, stop lines and even complete crossings. Best results are achieved using Faster R-CNN with Inception v2, showing an overall accuracy of 0.84 at 7 Hz.
Social Haptic Communication (SHC) is one of the many tactile modes of communication used by persons with deafblindness to access information about their surroundings. SHC usually involves an interpreter executing finger and hand signs on the back of a person with multi-sensory disabilities. Learning SHC, however, can become challenging and time-consuming, particularly to those who experience deafblindness later in life. In this work, we present PatRec: a mobile game for learning SHC concepts. PatRec is a multiple-choice quiz game connected to a chair interface that contains a 3x3 array of vibration motors emulating different SHC signs. Players collect scores and badges whenever they guess the right SHC vibration pattern, leading to continuous engagement and a better position on a leaderboard. The game is also meant for family members to learn SHC. We report the technical implementation of PatRec and the findings from a user evaluation.
When designing and installing indoor positioning systems, several interrelated tasks have to be solved to find an optimal placement of the access points. For this purpose, a mathematical model for a predefined number of indoor access points is presented. Two iterative algorithms for minimizing the localization error of a mobile object are described. Both algorithms use a local search technique and signal level probabilities. Previously recorded signal-strength maps were used in the computer simulations.
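As a rough illustration of the local search technique - not the paper's algorithms, which work on signal-strength maps and signal level probabilities - the following sketch moves one access point one grid cell at a time and keeps any non-worsening placement, using squared distance to the nearest access point as a stand-in for localization error:

```python
import random

def total_cost(aps, points):
    """Sum over measurement points of the squared distance to the
    nearest access point (a stand-in for the localization error)."""
    return sum(min((px - ax) ** 2 + (py - ay) ** 2 for ax, ay in aps)
               for px, py in points)

def local_search(points, n_aps, grid=10, iters=2000, seed=1):
    """Hill climbing over AP positions on a grid: repeatedly try moving
    a random AP to a neighboring cell and accept non-worsening moves."""
    rng = random.Random(seed)
    aps = [(rng.randrange(grid), rng.randrange(grid)) for _ in range(n_aps)]
    cost = total_cost(aps, points)
    for _ in range(iters):
        i = rng.randrange(n_aps)
        candidate = list(aps)
        dx, dy = rng.choice([(-1, 0), (1, 0), (0, -1), (0, 1)])
        x, y = candidate[i]
        candidate[i] = (min(max(x + dx, 0), grid - 1),
                        min(max(y + dy, 0), grid - 1))
        c = total_cost(candidate, points)
        if c <= cost:  # accept equal-cost moves to walk across plateaus
            aps, cost = candidate, c
    return aps, cost

points = [(x, y) for x in range(10) for y in range(10)]
aps, cost = local_search(points, n_aps=2)
print(aps, cost)
```

Accepting equal-cost moves is what lets the search escape flat regions of the objective; in the paper's setting the objective is built from measured signal-strength maps rather than geometric distance.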
HiSiMo cast irons are frequently used as material for high-temperature engine components such as exhaust manifolds and turbochargers. These components must withstand severe cyclic mechanical and thermal loads throughout their service life. The combination of thermal transients with mechanical load cycles results in a complex evolution of damage, leading to thermomechanical fatigue (TMF) of the material and, after a certain number of loading cycles, to failure of the component. In this paper (Part I), the low-cycle fatigue (LCF) and TMF properties of HiSiMo are investigated in uniaxial tests and the damage mechanisms are addressed. On the basis of the experimental results, a fatigue life model is developed, based on elastic, plastic and creep fracture mechanics results for short cracks, so that time- and temperature-dependent effects on damage are taken into account. The model can be used to estimate the fatigue life of components by means of finite-element calculations (Part II of the paper).
As a continuation of the FHOP project, a microcontroller in ES2 0.7 µm technology was designed at the Fachhochschule Offenburg in the course of a diploma thesis, based on the existing microprocessor core. The controller has a modular design comprising the following components: FHOP microprocessor, bus controller, wait-state/chip-select unit, 16x16-bit multiplier, 2 KB ROM, 256 bytes RAM, watchdog, PIO with 16 configurable ports, SIO, two timers and an interrupt controller for eight interrupt sources.
With a complexity of approximately 65,400 transistors, the chip requires a silicon area of about 27 mm². It was submitted for fabrication in September 1996 and has since been tested successfully. The microcontroller's internal ROM contains the BIOS as well as a test program. A complete development environment is available for creating the software. All components will shortly be available in the FHOP design kit.
The Institute of Applied Research Offenburg has been working in the field of autonomous data loggers for many years. In collaboration with industry, a new RFID-based active sensor data logger for continuous recording of temperature has been developed and is now manufactured in mass production. Compared to existing systems, an unusually large data memory is integrated, which can be used flexibly via a simplified file system. The system will be used to accompany and monitor temperature-sensitive goods of high value. The transponder is the first member of a new class of logging devices, the smallest of which will be no larger than a 2-euro coin, with a fully integrated ASIC frontend.
Remote measurement of physiology, so-called biotelemetry, is a key technology in modern veterinary medicine. The use of wireless implants has less impact on the behavior of animals than manual measurement methods and causes less disturbance than wired devices. However, common biotelemetry still uses proprietary communication and power concepts focused on small systems with one animal. Therefore, the University of Applied Sciences Offenburg is developing a low-cost RFID system called muTrans, which is able to measure ECG, pressure, temperature, oxygen saturation and activity. The muTrans uses its own RFID sensor transponder together with standardized commercial components, combining them into a scalable RFID system able to build up RFID sensor networks of nearly unlimited size.
RFID Frontend ISO 15693
(2008)
The conversion of space heating for private households to climate-neutral energy sources is an essential component of the energy transition, as this sector was responsible for 9.4 % of Germany's carbon dioxide emissions as of 2018. In addition to reducing demand through better insulation, the use of heat pumps fed with electricity from renewable energy sources, such as on-site photovoltaic (PV) systems, is an important approach.
Advanced energy management and control can help to make optimal use of such heating systems. Optimal can here refer, for example, to maximizing self-consumption of self-generated PV power, extending component lifetime or achieving grid-friendly behavior that avoids load peaks. A powerful method for this is model predictive control (MPC), which calculates optimal schedules for the controllable influence variables based on models of the system dynamics, current measurements of system states and predictions of future external influence parameters.
In this paper, we will discuss three different use cases that show how artificial intelligence can contribute to the realization of such an MPC-based energy management and control system. This will be done using the example of a real inhabited single family home that has provided the necessary data for this purpose and where the methods are implemented and tested. The heating system consists of an air-water heat pump with direct condensation, a thermal stratified storage tank, a pellet burner and a heating rod and provides both heating and hot water. The house generates a significant portion of its electricity needs through a rooftop PV system.
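To make the MPC principle concrete, the following toy sketch picks the heat-pump on/off schedule over a short horizon that minimizes grid import while keeping a storage temperature within bounds. It brute-forces the schedule and uses entirely made-up numbers; the project's real controller relies on calibrated system models and a proper optimizer:

```python
from itertools import product

def mpc_schedule(pv_forecast, load_forecast, horizon=4, hp_power=2.0,
                 t_init=42.0, t_min=40.0, t_max=55.0,
                 heat_per_step=3.0, loss_per_step=1.0):
    """Brute-force toy MPC: choose the on/off heat-pump schedule over
    the horizon that minimizes grid import while keeping the storage
    temperature within bounds. All numbers are invented."""
    best, best_cost = None, float("inf")
    for schedule in product([0, 1], repeat=horizon):
        t, cost, feasible = t_init, 0.0, True
        for step, on in enumerate(schedule):
            t += heat_per_step * on - loss_per_step  # crude thermal model
            if not (t_min <= t <= t_max):
                feasible = False
                break
            demand = load_forecast[step] + hp_power * on
            cost += max(0.0, demand - pv_forecast[step])  # grid import
        if feasible and cost < best_cost:
            best, best_cost = schedule, cost
    return best, best_cost

pv = [0.0, 3.0, 4.0, 0.5]      # PV forecast per step, kW (assumed)
load = [0.5, 0.5, 0.5, 0.5]    # household base load, kW (assumed)
schedule, grid_kwh = mpc_schedule(pv, load)
print(schedule, grid_kwh)  # heating is shifted into the midday PV peak
```

In a receding-horizon controller, only the first step of the optimized schedule is applied before the optimization is repeated with fresh measurements and forecasts, which is where the AI-based prediction models of the use cases come in.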
The paper presents the design and development of a blended learning concept for an engineering course in the field of color representation and display technologies. A suitable learning environment is crucial for the success of the teaching scenario. The main topic of the paper is a mixture of theoretical lectures and hands-on activities with practical applications and experiments, combined with the advantages of modern digital media. Blended learning describes the didactic alternation of attendance periods and online periods. The e-learning environment for the online periods is designed for easy access and interaction. Modern digital media extend the established teaching scenarios and enable the presentation of videos, animations and augmented reality (AR). Visualizations are effective tools to impart learning content with lasting effect. The preparation and evaluation of the theoretical lectures and the hands-on activities is stimulated, which positively affects the attendance periods. The tasks and experiments require the students to work independently and to develop individual solution strategies. This engages and motivates the students and deepens their knowledge. The authors will present their experience with the blended learning scenario implemented in this field of optics and photonics. All aspects of the learning environment will be introduced.
Monitors are at the center of media productions and serve an important function as the main visual interface. Tablets and smartphones are becoming more and more important work tools in the media industry. As an extension of our lecture contents, an intensive discussion of different display technologies and their applications now takes place. The established LCD (Liquid Crystal Display) technology and the promising OLED (Organic Light Emitting Diode) technology are in focus.
The classic LCD is currently the most important display technology. The paper will present how students develop a sense for display technologies beyond the theoretical scientific basics. The workshop focuses increasingly on the technical aspects of display technology, with the goal of deepening the students' understanding of its functionality by having them build simple liquid crystal displays themselves.
The authors will present their experience in the field of display technologies. A mixture of theoretical and practical lectures has the goal of a deeper understanding in the field of digital color representation and display technologies. The design and development of a suitable learning environment with the required infrastructure is crucial. The main focus of this paper is on the hands-on optics workshop “Liquid Crystal Display in the do-it-yourself”.
VR-based implementation of interactive laboratory experiments in optics and photonics education
(2022)
Within the framework of a developed blended learning concept, a lot of experience has already been gained with a mixture of theoretical lectures and hands-on activities, combined with the advantages of modern digital media. Here, visualizations using videos, animations and augmented reality have proven to be effective tools to convey learning content in a sustainable way. In the next step, ideas and concepts were developed to implement hands-on laboratory experiments in a virtual environment. The main focus is on the realization of virtual experiments and environments that give the students a deep insight into selected subfields of optics and photonics.
This paper explains the realization of a concept for research-oriented photonics education. Using the example of the integration of an actual PhD project, it is shown how students are familiarized with the topic of research and scientific work in the first semesters. Typical research activities are included as essential parts of the learning process. Research should be made visible and tangible for the students. The authors will present all aspects of the learning environment, their impressions and experiences with the implemented scenario, as well as first evaluation results of the students.
The authors explain a developed concept for research-oriented education in optics and photonics. It is presented which goals are to be achieved, which strategies have been developed and how these can be implemented in a blended learning scenario. The goal of our education is the best possible qualification of the students on the basis of a strong scientific and research-oriented education, which also includes the acquisition of important interdisciplinary competences. All phases of a research process are to be mapped in the learning process and offer students an insight into current research topics in optics and photonics.
Increased knowledge transfer through the integration of research projects into university teaching
(2019)
This paper describes the integration of the research project "Characterization of Color Vision using Spectroscopy and Nanotechnology: Application to Media Photonics" into an engineering course in the field of media technology. The aim is to develop the existing learning concept towards a more research-oriented teaching. Involving students in research projects as part of the learning process provides a deeper insight into current research topics and the key elements of scientific work. This makes it easier for students to recognize the importance of the acquired theoretical knowledge for the practice, which enables them to derive new insights of their own.
Redesigning a curriculum for teaching media technology is a major challenge. Up-to-date teaching and learning concepts are necessary that meet the constant technological progress and prepare students specifically for their professional life. Teaching and studying should be characterized by a student-oriented teaching and learning culture. In order to achieve this goal, consistent evaluation is essential. The aim of the evaluation concept presented here is to generate structured information regarding the quality of content-related, didactic and organizational aspects of teaching. The exchange of opinions between students and lecturers should be encouraged in order to continuously improve the teaching and learning processes.
Cardiac resynchronization therapy (CRT) with biventricular pacing is an established therapy for heart failure (HF) patients (P) with ventricular desynchronization and reduced left ventricular (LV) ejection fraction. The aim of this study was to evaluate electrical right atrial (RA), left atrial (LA), right ventricular (RV) and LV conduction delay with novel telemetric signal averaging electrocardiography (SAECG) in implantable cardioverter defibrillator (ICD) P to better select P for CRT and to improve hemodynamics in cardiac pacing.
Methods: ICD-P (n=8, age 70.8 ± 9.0 years; 2 females, 6 males) with VVI-ICD (n=4), DDD-ICD (n=3) and CRT-ICD (n=1) (Medtronic, Inc., Minneapolis, MN, USA) were analysed with telemetric ECG recording using a Medtronic 2090 programmer, ECG cable 2090AB, a PCSU1000 oscilloscope with Pc-Lab2000 software (Velleman®), and novel National Instruments LabVIEW SAECG software.
Results: Electrical RA conduction delay (RACD) was measured between onset and offset of RA deflection in the RAECG. Interatrial conduction delay (IACD) was measured between onset of RA deflection and onset of far-field LA deflection in the RAECG. Interventricular conduction delay (IVCD) was measured between onset of RV deflection in the RVECG and onset of LV deflection in the LVECG. Telemetric SAECG recording was possible in all ICD-P with a mean of 11.7 ± 4.4 SAECG heart beats, 97.6 ± 33.7 ms QRS duration, 81.5 ± 44.6 ms RACD, 62.8 ± 28.4 ms RV conduction delay, 143.7 ± 71.4 ms right cardiac AV delay, 41.5 ms LA conduction delay, 101.6 ms LV conduction delay, 176.8 ms left cardiac AV delay, 53.6 ms IACD and 93 ms IVCD.
Conclusions: Determination of RA, LA, RV and LV conduction delay, IACD, IVCD, and right and left cardiac AV delay by telemetric SAECG recording with the LabVIEW SAECG technique may provide useful parameters of atrial and ventricular desynchronization to improve patient selection for CRT and hemodynamics in cardiac pacing.
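The core idea of the SAECG technique, averaging trigger-aligned beats so that uncorrelated noise cancels out, can be illustrated with a minimal sketch. The sample values below are hypothetical and this is not the study's LabVIEW implementation:

```python
# Minimal sketch of beat-wise signal averaging (hypothetical data).
# Beats are assumed pre-aligned on the QRS trigger; averaging N aligned
# beats attenuates uncorrelated noise by roughly sqrt(N).

def signal_average(beats):
    """Average a list of equally long, trigger-aligned beat recordings."""
    if not beats:
        raise ValueError("no beats to average")
    n = len(beats)
    length = len(beats[0])
    if any(len(b) != length for b in beats):
        raise ValueError("beats must be equally long")
    return [sum(b[i] for b in beats) / n for i in range(length)]

# Example: three noisy copies of the same underlying deflection.
beats = [
    [0.0, 1.1, 2.0, 0.9, 0.1],
    [0.1, 0.9, 2.1, 1.1, -0.1],
    [-0.1, 1.0, 1.9, 1.0, 0.0],
]
averaged = signal_average(beats)
print(averaged)  # noise-reduced template beat, close to [0, 1, 2, 1, 0]
```

In the study, onsets and offsets of the atrial and ventricular deflections are then measured on such an averaged template rather than on single noisy beats.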
Spectral analysis of signal averaging electrocardiography in atrial and ventricular tachyarrhythmias
(2017)
Background: Targeting complex fractionated atrial electrograms detected by automated algorithms during ablation of persistent atrial fibrillation has produced conflicting outcomes in previous electrophysiological studies. The aim of the investigation was to evaluate atrial and ventricular high frequency fractionated electrical signals with signal averaging technique.
Methods: Signal-averaged electrocardiography (ECG) is a high-resolution ECG technique for eliminating interference noise in the recorded ECG. The algorithm uses an automatic ECG trigger function for signal-averaged transthoracic, transesophageal and intracardiac ECG signals with novel LabVIEW software (National Instruments, Austin, Texas, USA). For spectral analysis we used the fast Fourier transformation in combination with spectro-temporal mapping and wavelet transformation to obtain detailed information about the frequency and intensity of high-frequency atrial and ventricular signals.
Results: Spectral-temporal mapping and wavelet transformation of the signal averaged ECG allowed the evaluation of high frequency fractionated atrial signals in patients with atrial fibrillation and high frequency ventricular signals in patients with ventricular tachycardia. The analysis in the time domain evaluated fractionated atrial signals at the end of the signal averaged P-wave and fractionated ventricular signals at the end of the QRS complex. The analysis in the frequency domain evaluated high frequency fractionated atrial signals during the P-wave and high frequency fractionated ventricular signals during QRS complex. The combination of analysis in the time and frequency domain allowed the evaluation of fractionated signals during atrial and ventricular conduction.
Conclusions: Spectral analysis of signal-averaged electrocardiography with novel LabVIEW software can be utilized to evaluate atrial and ventricular conduction delays in patients with atrial fibrillation and ventricular tachycardia. Complex fractionated atrial electrograms may be useful parameters for evaluating electrical arrhythmogenic cardiac signals in atrial fibrillation ablation.
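The frequency-domain step can be illustrated with a plain discrete Fourier transform on a synthetic signal. This is illustrative only; the study used an FFT with spectro-temporal mapping and wavelet transformation in LabVIEW:

```python
import cmath
import math

# Minimal sketch of a discrete Fourier transform used to inspect the
# frequency content of a (synthetic) signal segment. High-frequency
# fractionated signals would show up as energy in the upper bins.

def dft_magnitudes(x):
    """Magnitude spectrum of a real-valued sample sequence."""
    n = len(x)
    return [
        abs(sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n)))
        for k in range(n)
    ]

# A pure 2-cycle sine sampled at 8 points: energy concentrates in bin 2
# (and its mirror, bin 6), while the DC bin stays empty.
signal = [math.sin(2 * math.pi * 2 * t / 8) for t in range(8)]
spectrum = dft_magnitudes(signal)
peak_bin = max(range(len(spectrum)), key=spectrum.__getitem__)
print(peak_bin, round(spectrum[peak_bin], 3))
```

Spectro-temporal mapping repeats such a transform over sliding windows of the signal-averaged ECG, so that the timing of high-frequency components within the P-wave or QRS complex becomes visible.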
Skin cancer detection proves to be complicated and highly dependent on the examiner's skills. Millimeter-wave technologies seem to be a promising aid for the detection of skin cancer: the different water content of a skin area affected by cancer compared to healthy skin changes its reflective properties. Due to the limited available data on the dielectric properties of skin cancer, especially in comparison to surrounding healthy skin, accurate simulations and evaluations are quite challenging, and comparing results obtained with different approaches and starting points can be difficult. In this paper, the Effective Medium Theory is applied to model skin cancer, providing permittivity values dependent on the water content.
A method for evaluating skin cancer detection based on millimeter-wave technologies is presented. For this purpose, the relative permittivities of the benign and the cancerous lesion are calculated using the effective medium theory, considering the change in water content between them. These calculated relative permittivities are then used to simulate and evaluate skin cancer detection with a substrate-integrated waveguide probe. In the best case, a difference in the simulated scattering parameter S11 of up to 13 dB between healthy and cancerous skin can be determined.
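One common effective-medium formulation is the Maxwell Garnett mixing rule, sketched below; the abstract does not state which formulation the authors used, and the permittivity values here are purely illustrative, not taken from the paper:

```python
# Hedged sketch: the Maxwell Garnett mixing rule, one common
# effective-medium formula, estimating an effective permittivity from
# the volume fraction f of inclusions (here: water) in a host medium.
# All numbers below are illustrative assumptions, not the paper's data.

def maxwell_garnett(eps_host, eps_incl, f):
    """Effective permittivity of inclusions (fraction f) in a host medium."""
    num = eps_incl + 2 * eps_host + 2 * f * (eps_incl - eps_host)
    den = eps_incl + 2 * eps_host - f * (eps_incl - eps_host)
    return eps_host * num / den

# Illustrative values: dry-tissue host with water inclusions; cancerous
# tissue is modeled with a higher water fraction than healthy tissue.
eps_dry, eps_water = 3.0, 15.0
healthy = maxwell_garnett(eps_dry, eps_water, 0.3)
cancerous = maxwell_garnett(eps_dry, eps_water, 0.5)
print(healthy, cancerous)  # the higher water fraction yields the higher permittivity
```

The rule interpolates sensibly between the limits: at f = 0 it returns the host permittivity and at f = 1 the inclusion permittivity, which is the behavior the water-content argument in the abstract relies on.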
During the coronavirus crisis, labs in mechanical engineering had to be offered in digital form at short notice. For this purpose, digital twins of more complex test benches in the field of fluid energy machines were used in the mechanical engineering course, with which the students were able to interact remotely to obtain measurement data. The concept of each lab was revised with regard to its implementation as a remote laboratory, and the real-world labs could be fully replaced by remote labs. Student perceptions of the remote labs were mostly positive. This paper explains the concept and design of the digital twins and the lab, as well as the layout, procedure, and results of the accompanying evaluation. However, the implementation of the digital twins to date does not yet include features that address the tactile experience of working in real-world labs.
Agile Business Intelligence als Beispiel für ein domänenspezifisch angepasstes Vorgehensmodell
(2016)
Business intelligence (BI) systems play an important role for companies by supporting decision-making. An increasingly dynamic business environment therefore creates a demand for the agile development of these systems, so that agile methods and process models are being applied with growing success in the BI domain. The further development and adaptation of BI systems is special in that it usually concerns systems and structures that have grown over many years and are subject to strict regulatory requirements, which poses a challenge for agile approaches. While the values and principles of the Agile Manifesto [AM01] and the methods derived from it were initially mostly transferred one-to-one to the BI field, an understanding of BI agility as a holistic property of BI has since become established in the German-speaking world, and agile methods have been adapted to the particularities of the BI domain. This paper explains BI agility and Agile BI, introduces a classification framework for measures to increase BI agility, and discusses the challenges of Agile BI.
The bwLehrpool project developed a distributed system for the flexible use of computer pools through desktop virtualization. Based on a centrally booted Linux base system, arbitrary virtualizable operating systems can be provided centrally for teaching and examination purposes and selected locally on the machines. The various working environments no longer have to be installed on the PCs, enabling the multifunctional use of PCs and rooms for a wide range of teaching and learning scenarios as well as for electronic examinations. bwLehrpool abstracts from the local PC hardware and allows lecturers to design and manage their own software environments as a self-service. In addition, bwLehrpool promotes the exchange of course environments across universities.
In public transportation, the motor pool often consists of a variety of vehicles bought over many years; sometimes they even differ within one batch bought at the same time. This poses a considerable challenge for the storage and allocation of spare parts, especially in the event of damage to a vehicle. Correctly assigning these parts before the vehicle reaches the workshop could significantly reduce both the downtime and the actual costs for companies. To achieve this, the current software uses a simple probability calculation. To improve on this, the data of specific companies was analysed, preprocessed and used with several modelling techniques to classify and thereby predict the spare parts to be used in the event of a faulty vehicle. We summarize our experience running through the steps of the Cross-Industry Standard Process for Data Mining (CRISP-DM) and compare the performance to the previously used probability baseline. Gradient boosting trees turned out to be the best modelling technique for this case.
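The frequency-based baseline that the gradient boosting model is compared against can be sketched in a few lines. The record fields and values below are hypothetical; the companies' real data and the trained model are not reproduced here:

```python
from collections import Counter, defaultdict

# Hedged sketch of a simple probability baseline: for each fault code,
# predict the spare part that was most often used historically.
# Fault codes and part names are invented for illustration.

def train_baseline(records):
    """records: iterable of (fault_code, spare_part) pairs."""
    by_fault = defaultdict(Counter)
    for fault, part in records:
        by_fault[fault][part] += 1
    # Most frequent historical part per fault code.
    return {fault: counts.most_common(1)[0][0] for fault, counts in by_fault.items()}

history = [
    ("door_fault", "door_motor"), ("door_fault", "door_motor"),
    ("door_fault", "door_sensor"), ("brake_fault", "brake_pad"),
]
model = train_baseline(history)
print(model["door_fault"])  # "door_motor" (2 of 3 historical cases)
```

A gradient boosting classifier improves on this by conditioning on many features at once (vehicle type, batch, mileage, fault context) instead of a single fault-code frequency table.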
This paper describes the use of the single-linkage hierarchical clustering method for outlier detection in manufactured metal work pieces. The main goal of the study is to group defects that occur within 5 mm of the edge of a work piece, i.e., in the border region of the metal work piece, and to remove defects outside this area of interest as outliers. According to the assumptions made for the performance criteria, the single-linkage method achieved better results than the other agglomeration methods.
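Cutting a single-linkage dendrogram at a distance threshold is equivalent to forming connected components under that threshold, which makes the outlier idea easy to sketch in pure Python (real pipelines would typically use SciPy's hierarchical clustering; the defect coordinates below are invented):

```python
import math

# Hedged sketch: single-linkage grouping as connected components under a
# distance threshold. Defect points that are far from every cluster end
# up as singleton clusters and can be discarded as outliers.

def single_linkage_clusters(points, threshold):
    """Group 2D points whose chained pairwise distance is <= threshold."""
    parent = list(range(len(points)))

    def find(i):  # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if math.dist(points[i], points[j]) <= threshold:
                parent[find(i)] = find(j)

    clusters = {}
    for i in range(len(points)):
        clusters.setdefault(find(i), []).append(points[i])
    return list(clusters.values())

# Two dense defect groups plus one isolated point (the outlier).
defects = [(0, 0), (1, 0), (0, 1), (10, 10), (11, 10), (50, 50)]
clusters = single_linkage_clusters(defects, threshold=2.0)
outliers = [c[0] for c in clusters if len(c) == 1]
print(outliers)  # [(50, 50)]
```

Single linkage merges on the minimum inter-cluster distance, which is why it chains dense defect groups together while leaving isolated defects as singletons; complete or average linkage would behave differently on the same data.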
Due to its numerous application fields and benefits, virtualization has become an interesting and attractive topic in computer and mobile systems, as it promises advantages for security and cost efficiency. However, it may bring additional performance overhead. Recently, CPU virtualization has become more popular for embedded platforms, where the performance overhead is especially critical. In this article, we present the measurements of the performance overhead of the two hypervisors Xen and Jailhouse on ARM processors in the context of the heavy load “Cpuburn-a8” application and compare it to a native Linux system running on ARM processors.
In railway technical centers, scheduling the maintenance activities is a very complex task: all maintenance operations on the workstations must be ordered in time while respecting the number of resources, precedence constraints, and workstation availabilities. Currently, this process is not completely automatic. To improve this situation, this paper presents a mathematical model for scheduling maintenance activities in railway remanufacturing systems. The studied problem is modeled as a flexible job shop in which a job may be executed several times on a stage. A MILP formulation is implemented with the makespan, i.e., the time needed to remanufacture the train, as the objective. The aim is to create a generic model for optimizing the planning of maintenance activities and improving the performance of railway technical centers. Finally, numerical results are presented, discussing the impact of instance size on the computing time needed to solve the described problem.
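The makespan objective can be illustrated with a toy list-scheduling heuristic on hypothetical operation durations. This sketch ignores precedence and availability constraints and is not the paper's MILP; it merely shows what the MILP minimizes and why a heuristic alone is not enough:

```python
import heapq

# Hedged sketch: greedy longest-processing-time scheduling of maintenance
# operations onto parallel workstations, returning the makespan.
# Durations and station count are invented for illustration.

def greedy_makespan(durations, n_stations):
    """Assign each operation to the earliest-free workstation."""
    stations = [0.0] * n_stations            # current finish time per station
    heapq.heapify(stations)
    for d in sorted(durations, reverse=True):  # longest operations first
        finish = heapq.heappop(stations) + d   # earliest-free station
        heapq.heappush(stations, finish)
    return max(stations)

# Hypothetical operation durations (hours) on 2 workstations.
print(greedy_makespan([4, 3, 3, 2, 2], 2))  # 8.0
```

On this instance the greedy rule yields a makespan of 8, while the optimum is 7 (station loads {4, 3} and {3, 2, 2}), which is exactly the kind of gap an exact MILP formulation closes.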
Deafblindness, also known as dual sensory loss, is the combination of sight and hearing impairments of such an extent that it becomes difficult for one sense to compensate for the other. Communication issues are a key concern for the Deafblind community. We present the design and technical implementation of the Tactile Board: a mobile Augmentative and Alternative Communication (AAC) device for individuals with deafblindness. The Tactile Board allows text and speech to be translated into vibrotactile signs that are displayed in real time to the user via a haptic wearable. Our aim is to facilitate communication for the deafblind community, creating opportunities for these individuals to initiate and engage in social interactions with other people without the direct need of an intervener.
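The text-to-vibration translation step can be sketched with a stand-in encoding. The abstract does not describe the Tactile Board's actual vibrotactile sign set, so the Morse-style mapping and pulse durations below are entirely hypothetical:

```python
# Purely illustrative sketch: translating text into timed vibration
# pulses for a single motor, using Morse code as a stand-in encoding.
# The real device's vibrotactile signs and timings are not public here.

MORSE = {"s": "...", "o": "---"}          # tiny demo alphabet (assumed)
DOT, DASH, GAP = 0.1, 0.3, 0.1            # pulse durations in seconds (assumed)

def to_pulses(text):
    """Return (duration, motor_on) pairs driving a vibration motor."""
    pulses = []
    for ch in text.lower():
        for symbol in MORSE.get(ch, ""):
            pulses.append((DOT if symbol == "." else DASH, True))
            pulses.append((GAP, False))    # pause between pulses
    return pulses

print(len(to_pulses("sos")))  # 18 on/off segments for the 9 Morse symbols
```

A wearable driver would then play these (duration, on/off) pairs back on the haptic actuators; richer sign sets would map to spatial patterns across several motors rather than a single channel.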
Co-Designing Assistive Tools to Support Social Interactions by Individuals Living with Deafblindness
(2020)
Deafblindness is a dual sensory impairment that affects many aspects of life, including mobility, access to information, communication, and social interactions. Furthermore, individuals living with deafblindness are under a high risk of social isolation. Therefore, we identified opportunities for applying assistive tools to support social interactions through co-ideation activities with members of the deafblind community. This work presents our co-design approach, lessons learned and directions for designing meaningful assistive tools for dual sensory loss.