BACKGROUND:
While hearing aids for a contralateral routing of signals (CROS-HA) and bone conduction devices have been the traditional treatment for single-sided deafness (SSD) and asymmetric hearing loss (AHL), in recent years, cochlear implants (CIs) have increasingly become a viable treatment choice, particularly in countries where regulatory approval and reimbursement schemes are in place. Part of the reason for this shift is that the CI is the only device capable of restoring bilateral input to the auditory system and hence of possibly reinstating binaural hearing. Although several studies have independently shown that the CI is a safe and effective treatment for SSD and AHL, clinical outcome measures in those studies and across CI centers vary greatly. Only with a consistent use of defined and agreed-upon outcome measures across centers can high-level evidence be generated to assess the safety and efficacy of CIs and alternative treatments in recipients with SSD and AHL.
METHODS:
This paper presents a comparative study design and minimum outcome measures for the assessment of current treatment options in patients with SSD/AHL. The protocol was developed, discussed, and eventually agreed upon by expert panels that convened at the 2015 APSCI conference in Beijing, China, and at the CI 2016 conference in Toronto, Canada.
RESULTS:
A longitudinal study design comparing CROS-HA, BCD, and CI treatments is proposed. The recommended outcome measures include (1) speech in noise testing, using the same set of 3 spatial configurations to compare binaural benefits such as summation, squelch, and head shadow across devices; (2) localization testing, using stimuli that rove in both level and spectral content; (3) questionnaires to collect quality of life measures and the frequency of device use; and (4) questionnaires for assessing the impact of tinnitus before and after treatment, if applicable.
CONCLUSION:
A protocol for the assessment of treatment options and outcomes in recipients with SSD and AHL is presented. The proposed set of minimum outcome measures aims at harmonizing assessment methods across centers and thus at generating a growing body of high-level evidence for those treatment options.
The three major manufacturers of cochlear implant (CI) systems enable clinical audiologists to check the microphone performance of most CI speech processors. For this purpose, monitoring headphones can be connected to these processors so that the microphone(s), including part of the signal pre-processing, can be listened to. The CI manufacturers give no precise specifications as to which stimuli, at which level, and against which criterion this check should be performed. On the basis of this check, the audiologist is then expected to judge the function of the microphones and thus decide whether or not the speech processor in question is returned to the manufacturer.
To make the CI speech processor microphone check objective, we have developed a test box in which all current monitorable CI speech processors of the three major manufacturers can be tested. The box was produced by 3D printing. The speech processor under test is suspended in the box and exposed to defined test signals (sine tones of different frequencies) from a built-in loudspeaker. The microphone signal is routed out via the monitoring-headphone cable and transformed by a shifting-and-scaling circuit into a voltage range suitable for A/D conversion with a microcontroller (an ATmega1280 on an Arduino Mega). The same microcontroller outputs the sine tones through the loudspeaker via a custom-built D/A converter. Signal acquisition and playback each run at a sampling rate of 38.5 kHz. For each frequency, the RMS value determined over several periods of the test signal is compared with the RMS value measured at that frequency with a brand-new reference processor. The measurement results are displayed graphically.
A first data collection is currently under way with CI speech processors that have attracted attention subjectively in the clinic and are subsequently examined in the test box. The aim is to derive realistic thresholds for critical deviations from the reference RMS values. Subsequently, hit and false-alarm rates of the subjective check are to be determined.
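As an illustration of the per-frequency comparison described above, the following C sketch computes the RMS value of a sampled tone and flags a processor whose level deviates too far from the reference. The buffer size, the decibel-based criterion, and the 2 dB tolerance are our own illustrative assumptions, not values from the abstract.

/* Minimal sketch of the per-frequency RMS check; tolerance, buffer
 * size, and amplitudes are illustrative placeholders. */
#include <stdio.h>
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define N_SAMPLES 385   /* ~10 periods of a 1 kHz tone at 38.5 kHz */

/* RMS of an AC-coupled signal: remove the DC offset, then average
 * the squared samples. */
static double rms(const double *x, int n)
{
    double mean = 0.0, acc = 0.0;
    for (int i = 0; i < n; i++) mean += x[i];
    mean /= n;
    for (int i = 0; i < n; i++) acc += (x[i] - mean) * (x[i] - mean);
    return sqrt(acc / n);
}

/* Flag a processor as suspicious when its RMS deviates from the
 * reference RMS by more than tol_db decibels (placeholder criterion). */
static int microphone_ok(double rms_dut, double rms_ref, double tol_db)
{
    double dev_db = 20.0 * log10(rms_dut / rms_ref);
    return fabs(dev_db) <= tol_db;
}

int main(void)
{
    double dut[N_SAMPLES], ref[N_SAMPLES];
    for (int i = 0; i < N_SAMPLES; i++) {
        double t = (double)i / 38500.0;
        ref[i] = sin(2.0 * M_PI * 1000.0 * t);
        dut[i] = 0.7 * ref[i];   /* simulate a ~3 dB level drop */
    }
    printf("microphone ok: %d\n",
           microphone_ok(rms(dut, N_SAMPLES), rms(ref, N_SAMPLES), 2.0));
    return 0;
}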
The manufacturers of cochlear implant (CI) systems provide clinical audiologists with the option of checking the microphone performance of most current CI speech processors by means of attachable monitoring headphones. To the authors' knowledge, the CI manufacturers do not publish any details on the procedure for this check, e.g., which stimuli at which levels should be used. On the basis of this subjective check, the audiologist then decides whether or not the speech processor in question is returned to the manufacturer. We have developed a test box with which the microphone performance of all monitorable CI speech processors from Advanced Bionics, Cochlear, and MED-EL can be checked objectively. The box was produced by 3D printing. The speech processor under test is suspended in the box and exposed to defined test signals (sine tones of different frequencies) from a built-in loudspeaker. The signal of the microphone(s) is routed out via the monitoring-headphone cable plugged into the processor's audio/monitoring jack and transformed by a shifting-and-scaling circuit into a voltage range suitable for A/D conversion with a microcontroller (an ATmega1280 on an Arduino Mega). The same microcontroller outputs the test signals through the loudspeaker via a custom-built D/A converter. Signal acquisition and playback each run at a sampling rate of 38.5 kHz. The frequency-specific RMS value of the tapped microphone signal is compared with a reference value. The (frequency-specific) reference values were measured with a brand-new speech processor of the same type and stored in the microcontroller's memory. After the measurement, the result is displayed graphically on a touchscreen. A first data collection is currently under way with CI speech processors that have attracted attention subjectively in the clinic and are subsequently examined in the test box. The longer-term goal is to determine the hit and false-alarm rates of the subjective check.
The effect of fluctuating maskers on speech understanding of high-performing cochlear implant users
(2016)
Objective: The present study evaluated whether the poorer baseline performance of cochlear implant (CI) users or the technical and/or physiological properties of CI stimulation are responsible for the absence of masking release. Design: This study measured speech reception thresholds (SRTs) in continuous and modulated noise as a function of signal to noise ratio (SNR). Study sample: A total of 24 subjects participated: 12 normal-hearing (NH) listeners and 12 subjects provided with recent MED-EL CI systems. Results: The mean SRT of CI users in continuous noise was −3.0 ± 1.5 dB SNR (mean ± SEM), while the normal-hearing group reached −5.9 ± 0.8 dB SNR. In modulated noise, the difference across groups increased considerably. For CI users, the mean SRT worsened to −1.4 ± 2.3 dB SNR, while it improved for normal-hearing listeners to −18.9 ± 3.8 dB SNR. Conclusions: The detrimental effect of fluctuating maskers on SRTs in CI users shown by prior studies was confirmed by the current study. Thus, the absence of masking release is mainly caused by the technical and/or physiological properties of CI stimulation, not just the poorer baseline performance of many CI users compared to normal-hearing subjects. Speech understanding in modulated noise was more robust in CI users who had a relatively large electrical dynamic range.
The ability to detect a target signal masked by noise is improved in normal-hearing listeners when interaural phase differences (IPDs) between the ear signals exist either in the masker or in the signal. To improve binaural hearing in bilaterally implanted cochlear implant (BiCI) users, a coding strategy providing the best possible access to IPDs is highly desirable. Outcomes of a previous study (Zirn, Arndt et al. 2016) revealed that a subset of BiCI users showed improved IPD detection thresholds with the fine structure processing strategy FS4 compared to the constant rate strategy HDCIS using narrowband stimuli. In contrast, only small differences between the coding strategies were found for broadband stimuli with regard to binaural speech intelligibility level differences (BILD) as an estimate of binaural unmasking. Compared to normal-hearing listeners (7.5 ± 1.2 dB), BILD were small in BiCI users (around 0.5 dB with both coding strategies).
In the present work, we investigated the influence of binaural fitting parameters on BILD. Many BiCI users in our cohort were implanted with electrode arrays of different lengths on the left and right sides. Because this length difference typically corresponded to the distance between two electrode contacts, the first modification of the bilateral fitting was a tonotopic adjustment by deactivating the most apical electrode contact on the side with the deeper-inserted array (tonotopic approach).
The second modification was the isolation of the remaining most apical electrode contacts by deactivating the basally adjacent electrode contact on each side (tonotopic sparse approach). With these modifications, BILD improved by up to 1.5 dB.
The ability to detect a signal masked by noise is improved in normal-hearing (NH) listeners when interaural phase differences (IPD) between the ear signals exist either in the masker or in the signal. We determined the impact of different coding strategies, with and without fine-structure coding (FSC), on masking level differences in bilaterally implanted cochlear implant (BiCI) users. First, binaural intelligibility level differences (BILD) were determined in NH listeners and BiCI users using their clinical speech processors. NH subjects (n=8) showed a significant mean BILD of 7.5 dB. In contrast, BiCI users (n=9) revealed a barely significant mean BILD both without and with FSC (0.4 dB and 0.6 dB, respectively). Second, IPD thresholds were measured in BiCI users using either their speech processors with FS4 or direct stimulation with FSC. With the latter approach, synchronized stimulation with an interaural stimulation-timing accuracy of 1.67 µs was realized on pitch-matched electrode pairs. The resulting individual IPD threshold was lower in most of the subjects with direct stimulation than with their speech processors. These outcomes indicate that some BiCI users can benefit from the increased temporal precision of interaural FSC and an adjusted interaural frequency-place mapping, presumably resulting in improved BILD.
The normal-hearing auditory system is able to exploit interaural time and phase differences for improved signal detection in noise. This phenomenon is commonly referred to as binaural unmasking and is effective both for simple signals such as pure tones and for speech signals in noise. Previous studies have shown that binaural unmasking is, to a limited extent, also observable in bilateral CI users (Zirn et al., 2016).
Recent results show that binaural unmasking is sensitive to the bilateral CI fitting. The effect can be modulated by tonotopic alignment and by isolating an apical fine-structure channel. In this way, binaural unmasking can be increased by up to 1.5 dB compared with conventional CI fitting. However, the influence of the fitting varies considerably between individuals.
The ability to detect a target signal masked by noise is improved in normal-hearing listeners when interaural phase differences (IPDs) between the ear signals exist either in the masker or in the signal. To improve binaural hearing in bilaterally implanted cochlear implant (BiCI) users, a coding strategy providing the best possible access to IPD is highly desirable. In this study, we compared two coding strategies in BiCI users provided with CI systems from MED-EL (Innsbruck, Austria). The CI systems were bilaterally programmed either with the fine structure processing strategy FS4 or with the constant rate strategy high definition continuous interleaved sampling (HDCIS). Familiarization periods between 6 and 12 weeks were considered. The effect of IPD was measured in two types of experiments: (a) IPD detection thresholds with tonal signals addressing mainly one apical interaural electrode pair and (b) with speech in noise in terms of binaural speech intelligibility level differences (BILD) addressing multiple electrodes bilaterally. The results in (a) showed improved IPD detection thresholds with FS4 compared with HDCIS in four out of the seven BiCI users. In contrast, 12 BiCI users in (b) showed similar BILD with FS4 (0.6 ± 1.9 dB) and HDCIS (0.5 ± 2.0 dB). However, no correlation between results in (a) and (b) both obtained with FS4 was found. In conclusion, the degree of IPD sensitivity determined on an apical interaural electrode pair was not an indicator for BILD based on bilateral multielectrode stimulation.
BiCI users’ sensitivity to interaural phase differences for single- and multi-channel stimulation
(2016)
Objectives: Speech recognition on the telephone poses a challenge for patients with cochlear implants (CIs) due to a reduced bandwidth of transmission. This trial evaluates a home-based auditory training with telephone-specific filtered speech material to improve sentence recognition. Design: Randomised controlled parallel double-blind. Setting: One tertiary referral centre. Participants: A total of 20 postlingually deafened patients with CIs. Main outcome measures: Primary outcome measure was sentence recognition assessed by a modified version of the Oldenburg Sentence Test filtered to the telephone bandwidth of 0.3-3.4 kHz. Additionally, pure tone thresholds, recognition of monosyllables and subjective hearing benefit were acquired at two separate visits before and after a home-based training period of 10-14 weeks. For training, patients received a CD with speech material, either unmodified for the unfiltered training group or filtered to the telephone bandwidth in the filtered group. Results: Patients in the unfiltered training group achieved an average sentence recognition score of 70.0%±13.6% (mean±SD) before and 73.6%±16.5% after training. Patients in the filtered training group achieved 70.7%±13.8% and 78.9%±7.0%, a statistically significant difference (P=.034, t(10)=2.292; two-way RM ANOVA/Bonferroni). An increase in the recognition of monosyllabic words was noted in both groups. The subjective benefit was positive for filtered and negative for unfiltered training. Conclusions: Auditory training with specifically filtered speech material provided an improvement in sentence recognition on the telephone compared to training with unfiltered material.
Uncontrollable manufacturing variations in electrical hardware circuits can be exploited as Physical Unclonable Functions (PUFs). Herein, we present a Printed Electronics (PE)-based PUF system architecture. Our proposed Differential Circuit PUF (DiffC-PUF) is a hybrid system combining silicon-based and PE-based electronic circuits. The novel aspect of the DiffC-PUF architecture is a specially designed real hardware system architecture that enables the automatic readout of interchangeable printed DiffC-PUF core circuits. The silicon-based addressing and evaluation circuit supplies and controls the printed PUF core and ensures seamless integration into silicon-based smart systems. A major objective of our work is the use of such PUFs in interconnected Internet of Things (IoT) applications.
The importance of obtaining simultaneous particle size and concentration values has grown with the continuing discussion of the health effects of particulate emissions generated by internal combustion engines, in particular of Diesel soot emissions. In the present work, an aerosol measurement system is described that delivers information about particle size and concentration directly from the undiluted exhaust gas.
Using three laser diodes of different wavelengths which form one parallel light beam, each spectral attenuation is analysed by a single detector, and the particle diameter and concentration are evaluated by means of Mie theory and shown on-line at a rate of 1 Hz. The system includes an optical long-path cell (White principle) with a path length adjustable from 2.5 to 15 m, which allows analysis over a broad concentration range.
On-line measurements of the particulate emissions in the hot, undiluted exhaust of Diesel engines are presented under stationary and transient engine load conditions. Mean particle diameters well below 100 nm are detected for modern Diesel engines. The measured particle concentration agrees very well with traditional gravimetric measurements of the diluted exhaust. Additionally, measurements of particle emissions (mostly condensed hydrocarbons) from a two-stroke engine are presented and discussed.
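To illustrate the principle behind the three-wavelength approach, the sketch below inverts the ratio of measured optical depths at two wavelengths to a mean particle diameter. Transmission follows the Beer-Lambert law, so the ratio of extinction coefficients at two wavelengths depends on the diameter only; the Qext table here is a made-up, monotone placeholder standing in for a real Mie-theory computation.

/* Toy illustration of sizing particles from spectral extinction ratios.
 * Beer-Lambert: I/I0 = exp(-N * Qext(lambda, d) * (pi/4) * d^2 * L),
 * so tau(450)/tau(850) is a function of d alone. Table values are
 * placeholders, not Mie results. */
#include <stdio.h>
#include <math.h>

#define N_D 5
static const double diam_nm[N_D]  = { 40, 60, 80, 100, 120 };
static const double qext_450[N_D] = { 0.020, 0.060, 0.130, 0.240, 0.380 };
static const double qext_850[N_D] = { 0.002, 0.009, 0.025, 0.055, 0.105 };

/* invert the measured extinction ratio to a diameter by
 * nearest-neighbour lookup in the table */
static double diameter_from_ratio(double ratio)
{
    int best = 0;
    double err_best = 1e30;
    for (int i = 0; i < N_D; i++) {
        double err = fabs(qext_450[i] / qext_850[i] - ratio);
        if (err < err_best) { err_best = err; best = i; }
    }
    return diam_nm[best];
}

int main(void)
{
    /* example: measured optical depths tau = -ln(I/I0) */
    double tau450 = 0.52, tau850 = 0.10;
    printf("estimated mean diameter: %.0f nm\n",
           diameter_from_ratio(tau450 / tau850));
    /* with d known, the number concentration N follows from either tau */
    return 0;
}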
The flow field-flow fractionation (FlFFF) technique is a promising method for separating and analysing particles and large macromolecules from a few nanometers up to approximately 50 μm. A new fractionation channel is described featuring well-defined flow conditions even for low channel heights, together with convenient assembly and operation. The applicability of the new flow field-flow fractionation channel is demonstrated by the analysis of pigments and other small particles of technical interest in the submicrometer range. The experimental results, including multimodal size distributions, are presented and discussed.
In the last decade, IPv6 over Low power Wireless Personal Area Networks (IEEE802.15.4), also known as 6LoWPAN, has evolved into a primary contender for short range wireless communications and holds the promise of an Internet of Things that is completely based on the Internet Protocol. The authors' team has developed a 6LoWPAN protocol stack in the C language that does not require a specific design environment or operating system. It is highly flexible, modular, and portable and can be enhanced by several interesting modules, like a Wake-On-Radio (WOR) MAC layer or a TLS 1.2-based security sublayer. The stack is made available as open source at https://github.com/hso-esk/emb6. It was extensively tested on the Automated Physical Testbed (APTB) for Wireless Systems, which is available in the authors' lab and allows a flexible setup and full control of arbitrary topologies. The results of the measurements demonstrate very good stability as well as short-term and long-term performance, also under dynamic conditions.
The overview of public key infrastructure based security approaches for vehicular communications
(2015)
Modern transport infrastructure is becoming a full member of the globally connected network. Leading vehicle manufacturers have already triggered a development process whose output will open a new horizon of possibilities for consumers and developers by providing a new communication entity, the car, thus enabling Car2X communications. Although some available systems already provide certain means for vehicles to communicate, most of them are considered insufficiently secured. Over the last 15 years, a number of large research projects funded by the European Union and the US government have been started and concluded, after which a set of standards was published prescribing a common architecture for Car2X and in-vehicle communications. This work concentrates on combining in-vehicle and external vehicular communications with the use of a Public Key Infrastructure (PKI).
Wireless communication systems are increasingly becoming part of our daily life. Especially with the Internet of Things (IoT), the overall connectivity increases rapidly, since everyday objects become part of the global network. For this purpose, several new wireless protocols have arisen, of which 6LoWPAN (IPv6 over Low power Wireless Personal Area Networks) can be seen as one of the most important within this sector. Originally designed on top of the IEEE802.15.4 standard, it has been subject to various adaptations that allow 6LoWPAN to be used over different technologies, e.g., DECT Ultra Low Energy (ULE). Although this high connectivity offers a lot of new possibilities, there are several requirements and pitfalls that come along with such new systems. With an increasing number of connected devices, the interoperability between different providers is one of the biggest challenges, which makes it necessary to verify the functionality and stability of the devices and the network. Therefore, testing becomes one of the key components that decides on the success or failure of such a system. Although several protocol implementations are commonly available, e.g., for IoT-based systems, there is still a lack of corresponding tools and environments for functional and conformance testing. This article describes the architecture and functioning of the proposed test framework based on Testing and Test Control Notation Version 3 (TTCN-3) for 6LoWPAN over ULE networks.
Extended Performance Measurements of Scalable 6LoWPAN Networks in an Automated Physical Testbed
(2015)
IPv6 over Low power Wireless Personal Area Networks, also known as 6LoWPAN, is becoming more and more a de facto standard for such communications in the Internet of Things, be it in the field of home and building automation, of industrial and process automation, or of smart metering and environmental monitoring. For all of these applications, scalability is a major precondition, as the complexity of the networks continuously increases. To serve this growing number of connected nodes, various 6LoWPAN implementations are available. One of them was developed by the authors' team and was tested on an Automated Physical Testbed for Wireless Systems at the Laboratory Embedded Systems and Communication Electronics of Offenburg University of Applied Sciences, which allows the flexible setup and full control of arbitrary topologies. It also supports time-varying topologies and thus helps to measure the performance of the RPL implementation. The results of the measurements prove excellent stability and very good short- and long-term performance, also under dynamic conditions. In all measurements, there is an advantage of at least 10% with regard to the average times, like global repair time; the advantage with regard to average values can reach up to 30%. Moreover, it can be proven that the performance predictions from other papers are consistent with the executed real-life implementations.
In the last decade, IPv6 over Low power Wireless Personal Area Networks, also known as 6LoWPAN, has evolved into a primary contender for short range wireless communication and holds the promise of an Internet of Things that is completely based on the Internet Protocol. In the meantime, various 6LoWPAN implementations are available, be it open source or commercial. One of these implementations, which was developed by the authors' team, was tested on an Automated Physical Testbed for Wireless Systems at the Laboratory Embedded Systems and Communication Electronics of Offenburg University of Applied Sciences, which allows the flexible setup and full control of arbitrary topologies. It also supports time-varying topologies and thus helps to measure the performance of the RPL implementation. The results of the measurements show very good stability and short-term and long-term performance, also under dynamic conditions. In addition, it can be proven that the performance predictions from other papers are consistent with real-life implementations.
The CAN bus is still an important fieldbus in various domains, e.g., for in-car communication or automation applications. To counter security threats and concerns in such scenarios, we design, implement, and evaluate the use of an end-to-end security concept based on the Transport Layer Security (TLS) protocol. It is used to establish authenticated, integrity-checked, and confidential communication channels between field devices connected via CAN. Our performance measurements show that it is possible to use TLS at least for non-time-critical applications, as well as for generic embedded networks.
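Since a TLS record can easily exceed the 8-byte payload of a classic CAN frame, some segmentation layer is needed. The following C sketch shows one plausible framing, our own illustration rather than the scheme from the paper: seven payload bytes per frame plus a sequence byte with a last-segment flag for reassembly.

/* Hedged sketch: segmenting a TLS record byte stream into 8-byte CAN
 * frames. The framing (sequence byte, 0x80 end flag) is invented for
 * illustration. */
#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define CAN_DLC 8

struct can_frame_s {
    uint32_t id;
    uint8_t  dlc;
    uint8_t  data[CAN_DLC];
};

/* Split buf[0..len) into frames with 7 payload bytes each.
 * Returns the number of frames written to out[]. */
static int tls_to_can(uint32_t id, const uint8_t *buf, size_t len,
                      struct can_frame_s *out, int max_frames)
{
    int n = 0;
    size_t off = 0;
    uint8_t seq = 0;
    while (off < len && n < max_frames) {
        size_t chunk = len - off > 7 ? 7 : len - off;
        out[n].id = id;
        out[n].dlc = (uint8_t)(chunk + 1);
        out[n].data[0] = seq | (off + chunk == len ? 0x80 : 0x00);
        memcpy(&out[n].data[1], buf + off, chunk);
        off += chunk;
        seq = (seq + 1) & 0x7F;
        n++;
    }
    return n;
}

int main(void)
{
    uint8_t record[20] = { 0x16, 0x03, 0x03, 0x00, 0x0F /* ... */ };
    struct can_frame_s frames[8];
    int n = tls_to_can(0x123, record, sizeof record, frames, 8);
    printf("TLS record of %zu bytes -> %d CAN frames\n", sizeof record, n);
    return 0;
}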
Modeling and Simulation the Influence of Solid Carbon Formation on SOFC Performance and Degradation
(2013)
Impedance of the Surface Double Layer of LSCF/CGO Composite Cathodes: An Elementary Kinetic Model
(2014)
A wide-range catalyst screening with noble metal and oxide catalysts for a metal–air battery with an aqueous alkaline electrolyte was carried out. Suitable catalysts reduce overpotentials during the charge and discharge process and therefore improve the round-trip efficiency of the battery. In this case, the electrodes will be used as optimized cathodes for a future lithium–air battery with an aqueous alkaline electrolyte. Oxide catalysts were synthesized via atmospheric plasma spraying. The screening showed that IrO2, RuO2, La0.6Ca0.4CoO3, Mn3O4, and Co3O4 are promising bi-functional catalysts. Considering the high price of the noble metal catalysts, further investigations of the oxide catalysts were carried out to analyze their electrochemical behavior at varied temperatures and molarities and, in the case of La1−x Ca x CoO3, a varying calcium content. Additionally, all catalysts were tested in a long-term test to prove cyclability at varied molarities. Further investigations showed that Co3O4 seems to be the most promising bi-functional catalyst of the tested oxide catalysts. Furthermore, it was shown that a calcium content of x = 0.4 in LCCO gives the best performance.
This work investigated the extent to which changes in the planning, conduct, and evaluation of clinical drug trials are assessed on scientific or on political grounds, and the extent to which scientific rules are used for political purposes.
Serendipities in der Medizin
(2016)
Chance accompanies our lives. Chance ("serendipity") has also played a role in important discoveries and developments in medicine. Examples include Mendel's laws, the determination of the human chromosome number, the discovery of DNA by Watson and Crick, the Pap test, and the discovery of X-rays and of radioactivity. In pharmacology, in particular, there are many examples of serendipities. Some are closely associated with chance events in the early days of bacteriology.
The interaural time difference (ITD) is an important cue for the localization of sounds. ITD changes as small as 10 μs can be detected by the human auditory system. When one ear is provided with a cochlear implant (CI), ITDs are altered due to the partial replacement of the peripheral auditory system. A hearing aid (HA), in contrast, does not replace the peripheral auditory system but adds a processing-delay component to it, extending ITDs. The aim of the present study was to quantify interaural stimulation timing between these different modalities in order to estimate the need for central auditory temporal compensation in single-sided deaf CI users or bimodal CI/HA users. For this purpose, wave V latencies of auditory brainstem responses evoked either acoustically (ABR) or electrically via the CI (EABR) were measured. The sum of the delays, consisting of the CI signal-processing delay measured in the MED-EL OPUS2 audio processor and the EABR wave V latencies evoked at different intracochlear sites, allowed an estimation of the entire channel-specific CI delay for MED-EL MAESTRO CI systems. We compared these values with ABR wave V latencies measured in the contralateral normal-hearing or HA-provided ear in different frequency bands. The results showed that EABR wave V latencies were consistently shorter than those evoked acoustically in the unaided normal-hearing ear. Thus, artificial delays can be implemented in the audio processor to adjust interaural stimulation timing. The group delays currently implemented in the MED-EL CI system turned out to be reasonably similar to those of the unaided ear. For the adjustment of a CI to a contralateral HA, in contrast, an adjustable additional across-frequency delay in the range of 1–11 ms implemented in the CI would be required. Especially for bimodal CI/HA users, the adjustment of interaural stimulation timing may induce improved binaural hearing, a reduced need for central auditory temporal compensation, and increased acceptance of the CI/HA provision.
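The estimation logic reduces to simple latency bookkeeping. The C sketch below illustrates it with placeholder numbers; the latency and delay values are invented for illustration and are not results from the study.

/* Hedged sketch of the interaural timing comparison described above.
 * The total electrical-path latency is the processor group delay plus
 * the EABR wave V latency; its difference to the acoustic wave V
 * latency of the other ear is the delay one would add in the CI. */
#include <stdio.h>

int main(void)
{
    double abr_wave_v_ms  = 6.8;  /* acoustic wave V, contralateral ear */
    double eabr_wave_v_ms = 4.0;  /* electric wave V via the CI        */
    double processor_ms   = 1.2;  /* audio-processor group delay       */

    double electric_path = processor_ms + eabr_wave_v_ms;
    double delay_to_add  = abr_wave_v_ms - electric_path;

    printf("delay to add in the CI: %.1f ms\n", delay_to_add);
    return 0;
}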
In cochlear implant (CI) care, various electrical and electrophysiological diagnostic procedures are used both intraoperatively and postoperatively, in which electrical quantities are recorded from the CI and electrophysiological measurements are performed in CI patients. The electrophysiological diagnostic procedures include the measurement of the electrically evoked compound action potentials of the auditory nerve, the registration of the electrically evoked auditory brainstem responses, and the recording of the electrically evoked auditory cortical potentials. These potentials reflect the excitation of the auditory nerve and the stimulus processing at different stations of the ascending auditory pathway during intracochlear electrical stimulation via a CI. For current CIs, the assessment of the electrode position and the verification of the implant's coupling to the auditory nerve are important fields of application of electrophysiological diagnostics. Another important field of application is testing the stimulus processing along the auditory pathway. The main field of application of these procedures, however, is supporting the fitting of CI speech processors in infants and young children on the basis of electrophysiological thresholds.
A benchmark analysis of Long Range (LoRa™) communication at 2.45 GHz for safety applications
(2014)
The demand for wireless solutions in industrial applications has been increasing since the early nineties. This trend is not only ongoing, it is further pushed by developments in the area of software stacks, like the latest Bluetooth Low Energy stack, as well as by new chip designs and powerful, highly integrated electronic hardware. The acceptance of wireless technologies as a possible solution for industrial applications has overcome the entry barrier [1]. The first step towards seeing wireless as a standard for many industrial applications is almost accomplished. Nevertheless, there is almost no acceptance of wireless technology for safety applications. One highly challenging and demanding requirement is still unsolved: safety and robustness. These topics have been addressed in many cases, but always in a similar manner. WirelessHART, as an example, addresses them with redundant, so-called multiple propagation paths and frequency hopping to cope with interference and the loss of network participants. So far, the pure peer-to-peer link has rarely been investigated, and few safety solutions are available. One product called LoRa™ can be seen as one possible solution to address this lack of safety within wireless links. This paper focuses on the safety performance evaluation of a modem chip design. The use of diverse and redundant wireless technologies like LoRa can lead to an increased acceptance of wireless in safety applications. Many measurements in real industrial applications have been carried out in order to benchmark the new chip in terms of the safety aspects. These research results can help to raise the level of confidence in wireless. In this paper, the term “safety” is used for data transmission reliability.
Battery degradation is a complex physicochemical process that strongly depends on operating conditions and environment. We present a model-based analysis of lithium-ion battery degradation in smart microgrids, in particular, a single-family house and an office tract with a photovoltaic generator. We use a multi-scale multi-physics model of a graphite/lithium iron phosphate (LiFePO4, LFP) cell including SEI formation as the ageing mechanism. The cell-level model is dynamically coupled to a system-level model consisting of photovoltaics, inverter, power consumption profiles, grid interaction, and an energy management system, fed with historic weather data. The behavior of the cell in terms of degradation propensity, performance, state of charge, and other internal states is predicted over an annual operation cycle. As a result, we have identified a peak in degradation rate during the battery charging process, caused by charging overpotentials. Ageing strongly depends on the load situation: the predicted annual capacity fade is 1.9 % for the single-family house and only 1.3 % for the office tract.
In 1504, the German knight Gottfried ("Götz") von Berlichingen lost his right hand. Already during his convalescence he thought about replacing the hand, and soon afterwards he commissioned the first hand prosthesis, the so-called "Iron Hand". Years later, the more elaborate second "Iron Hand" was built. We reconstructed the first prosthesis using 3D computer-aided design (CAD) on the basis of earlier literature data by Quasigroch (1982). To this end, some dimensions had to be adapted and a few assumptions had to be made for the CAD model. The historical passive prosthesis of Götz von Berlichingen is of interest for modern neuroprosthetics, as it could represent an alternative to complex invasive brain-machine interface concepts where such concepts are not necessary, not possible, or not desired by the patient.
Based on the computer-aided design (CAD) reconstruction of the first "Iron Hand" of Götz von Berlichingen, a converted, controller-driven sensorimotor finger system is described and tested for its functionality in grasping different objects. The electronic fingers, which mimic the "pincer grip" and automatically switch off at a preset contact pressure, proved remarkably suitable for everyday use. The basic concept presented here could be an alternative for the development of simple and inexpensive, yet well-usable bionic hands, and shows once more how historical ideas can be transferred to the present.
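The automatic cutoff at a preset contact pressure amounts to a simple closed-loop rule: close the finger while the measured force is below the limit, stop otherwise. A minimal Arduino-style C sketch of this idea follows; the pin assignments, PWM value, and force threshold are hypothetical placeholders, and analogRead/analogWrite are the standard Arduino core functions.

/* Minimal grip-controller sketch; all numbers are illustrative. */
#include <stdint.h>

#define FORCE_PIN   0     /* analog input of the force sensor */
#define MOTOR_PIN   9     /* PWM output to the finger motor   */
#define FORCE_LIMIT 620   /* preset grip force, raw ADC units */

/* Provided by the Arduino core; declared here so the sketch is
 * self-contained when read outside the IDE. */
extern int  analogRead(uint8_t pin);
extern void analogWrite(uint8_t pin, int value);

void setup(void) { }

void loop(void)
{
    if (analogRead(FORCE_PIN) < FORCE_LIMIT)
        analogWrite(MOTOR_PIN, 180);   /* keep closing the finger */
    else
        analogWrite(MOTOR_PIN, 0);     /* pressure reached: stop  */
}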
IPv6 over LoRaWAN™
(2016)
Although short-range wireless communication explicitly targets local and regional applications, range continues to be a highly important issue. The range directly depends on the so-called link budget, which can be increased by the choice of modulation and coding schemes. The recent transceiver generation in particular comes with extensive and flexible support for software-defined radio (SDR). The SX127x family from Semtech Corp. is a member of this device class and promises significant benefits in range, robust performance, and battery lifetime compared to competing technologies. This contribution gives a short overview of the technologies supporting Long Range (LoRa™) and the corresponding Layer 2 protocol (LoRaWAN™). It particularly describes the possibility of integrating the Internet Protocol, i.e. IPv6, into LoRaWAN™, so that it can be directly integrated into a full-fledged Internet of Things (IoT). The proposed solution, which we name 6LoRaWAN, has been implemented and tested; results of the experiments are also shown in this paper.
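For a feel for the numbers involved, the following C sketch estimates range from a link budget via a log-distance path-loss model. The transmit power, SF12 sensitivity, reference path loss, and path-loss exponent are illustrative values of the kind found on LoRa data sheets, not measurements from this work.

/* Illustrative link-budget and range estimate; all numbers are
 * placeholders. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    double p_tx_dbm        = 14.0;    /* transmit power             */
    double sensitivity_dbm = -137.0;  /* receiver sensitivity, SF12 */
    double link_budget_db  = p_tx_dbm - sensitivity_dbm;

    /* log-distance path-loss model: PL(d) = PL0 + 10*n*log10(d/d0) */
    double pl0_db = 40.0;  /* path loss at d0 = 1 m (placeholder)  */
    double n      = 2.8;   /* path-loss exponent, light urban      */
    double d_max  = pow(10.0, (link_budget_db - pl0_db) / (10.0 * n));

    printf("link budget: %.1f dB -> est. range %.0f m\n",
           link_budget_db, d_max);
    return 0;
}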
The automatic classification of the modulation format of a detected signal is the intermediate step between signal detection and demodulation. If neither the transmitted data nor other signal parameters such as the frequency offset, phase offset, and timing information are known, then automatic modulation classification (AMC) is a challenging task in radio monitoring systems. Clustering algorithms are a new trend in AMC for digital modulations. A novel algorithm called 'highest constellation pattern matching' is introduced to identify quadrature amplitude modulation and phase shift keying signals. In simulations and measurements, the proposed algorithm outperforms existing clustering-based AMC algorithms. Finally, it is shown that the proposed algorithm works in a real monitoring environment.
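The paper's algorithm is not spelled out in the abstract, so the C sketch below shows only the general idea of constellation-based classification in a deliberately simplified form: received IQ samples are scored against ideal constellation hypotheses by mean squared distance to the nearest point, and the best-scoring hypothesis wins. Constellations and samples are illustrative.

/* Toy nearest-point constellation matcher (not the authors'
 * 'highest constellation pattern matching' algorithm). */
#include <stdio.h>
#include <math.h>

typedef struct { double i, q; } iq_t;

static const iq_t qpsk[4] = {
    { 0.707, 0.707}, {-0.707, 0.707}, {-0.707,-0.707}, { 0.707,-0.707}
};
static const iq_t bpsk[2] = { {1.0, 0.0}, {-1.0, 0.0} };

/* mean squared distance of samples to the nearest point of a hypothesis */
static double msd(const iq_t *s, int n, const iq_t *c, int m)
{
    double acc = 0.0;
    for (int k = 0; k < n; k++) {
        double best = 1e30;
        for (int j = 0; j < m; j++) {
            double di = s[k].i - c[j].i, dq = s[k].q - c[j].q;
            double d = di * di + dq * dq;
            if (d < best) best = d;
        }
        acc += best;
    }
    return acc / n;
}

int main(void)
{
    iq_t rx[4] = { {0.70,0.72}, {-0.69,0.71}, {-0.71,-0.69}, {0.68,-0.70} };
    double d_qpsk = msd(rx, 4, qpsk, 4);
    double d_bpsk = msd(rx, 4, bpsk, 2);
    printf("classified as %s\n", d_qpsk < d_bpsk ? "QPSK" : "BPSK");
    return 0;
}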
Signal detection and bandwidth estimation, also known as channel segmentation or information channel estimation, is a perpetual topic in communication systems. In the field of radio monitoring, this issue is extremely challenging, since unforeseeable effects like fading occur accidentally. In addition, most radio monitoring devices scan a wide frequency range of several hundred MHz and have to detect a multitude of different signals varying in signal power, bandwidth, and spectral shape. Since narrowband sensing techniques cannot be applied directly, most radio monitoring devices use Nyquist wideband sensing to cover the huge frequency range. In practice, sensing is normally conducted by an FFT sweep spectrum analyzer that delivers the power spectral density (PSD) values to the radio monitoring system. Channel segmentation based on these PSD values is the initial step of a comprehensive signal analysis in a radio monitoring system. In this paper, a novel approach for channel segmentation is presented that is based on a quantization and a histogram evaluation of the measured PSD. It is shown that only the combination of both evaluations leads to a successful automatic channel segmentation. The performance of the proposed algorithm is demonstrated in a real radio monitoring scenario.
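The following C sketch illustrates one way to read the quantization-plus-histogram idea (our simplified interpretation, with invented bin widths, margin, and PSD values): quantized PSD values are histogrammed, the most populated cell estimates the noise floor, and frequency bins sufficiently above it are marked as signal.

/* Toy PSD quantization + histogram segmentation; all parameters and
 * data are illustrative. */
#include <stdio.h>

#define N_BINS    16
#define N_PSD     12
#define MARGIN_DB 6.0

int main(void)
{
    /* toy PSD trace in dBm/Hz: noise floor ~ -100, two signals */
    double psd[N_PSD] = { -100,-99,-101,-80,-79,-81,-100,-99,-60,-61,-100,-101 };
    int hist[N_BINS] = { 0 };

    /* quantize into 4 dB cells starting at -120 dBm/Hz */
    for (int i = 0; i < N_PSD; i++) {
        int cell = (int)((psd[i] + 120.0) / 4.0);
        if (cell >= 0 && cell < N_BINS) hist[cell]++;
    }

    /* the most populated cell estimates the noise floor */
    int mode = 0;
    for (int c = 1; c < N_BINS; c++)
        if (hist[c] > hist[mode]) mode = c;
    double noise_floor = -120.0 + 4.0 * mode + 2.0;

    /* mark frequency bins above noise floor + margin as occupied */
    for (int i = 0; i < N_PSD; i++)
        printf("%c", psd[i] > noise_floor + MARGIN_DB ? 'S' : '.');
    printf("  (noise floor est. %.0f dBm/Hz)\n", noise_floor);
    return 0;
}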
Security in IT systems, particularly in embedded devices like Cyber Physical Systems (CPSs), has become an important matter of concern, as it is the prerequisite for ensuring privacy and safety. Among a multitude of existing security measures, the Transport Layer Security (TLS) protocol family offers mature and standardized means for establishing secure communication channels over insecure transport media. In the context of classical IT infrastructure, its security with regard to protocol and implementation attacks has been subject to extensive research. As TLS protocols find their way into embedded environments, we consider the security and robustness of implementations of these protocols specifically in the light of the peculiarities of embedded systems. We present an approach for systematically checking the security and robustness of such implementations using fuzzing techniques and differential testing. In spite of its origin in testing TLS implementations, we expect our approach to be likewise applicable to implementations of other cryptographic protocols with moderate effort.
The Transport Layer Security (TLS) protocol is a cornerstone of secure network communication, not only for online banking, e-commerce, and social media, but also for industrial communication and cyber-physical systems. Unfortunately, implementing TLS correctly is very challenging, as becomes evident from the high frequency of bugfixes filed for many TLS implementations. Given the high significance of TLS, advancing the quality of implementations is a sustained pursuit. We strive to support these efforts by presenting a novel, response-distribution guided fuzzing algorithm for differential testing of black-box TLS implementations. Our algorithm generates highly diverse and mostly-valid TLS stimulation messages, which evoke more behavioral discrepancies in TLS server implementations than other algorithms. We evaluate our algorithm using 37 different TLS implementations and discuss, by means of a case study, how the resulting data allows one to assess and improve not only implementations of TLS but also to identify underspecified corner cases. We introduce suspiciousness as a per-implementation metric of anomalous implementation behavior and find that more recent or bug-fixed implementations tend to have a lower suspiciousness score. Our contribution is complementary to existing tools and approaches in the area, and can help reveal implementation flaws and avoid regression. While being presented for TLS, we expect our algorithm's guidance scheme to be applicable and useful in other contexts as well. Source code and data are made available to fellow researchers in order to stimulate discussion and invite others to benefit from and advance our work.
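The core of differential black-box testing can be illustrated in a few lines: the same stimulus goes to several implementations, responses are compared, and an implementation deviating from the majority is noted. The C sketch below does exactly this with a toy response matrix; the deviation count is only a crude stand-in for the suspiciousness metric defined in the paper.

/* Toy differential comparison: count per-implementation deviations
 * from the majority response; data is invented for illustration. */
#include <stdio.h>

#define N_IMPL 5
#define N_STIM 4

int main(void)
{
    /* response classes (e.g., alert codes) per stimulus and impl. */
    int resp[N_STIM][N_IMPL] = {
        { 40, 40, 40, 40, 40 },
        { 50, 50, 10, 50, 50 },   /* impl. 2 deviates */
        { 40, 40, 40, 47, 40 },   /* impl. 3 deviates */
        { 50, 50, 10, 50, 50 },   /* impl. 2 deviates */
    };
    int deviations[N_IMPL] = { 0 };

    for (int s = 0; s < N_STIM; s++) {
        /* majority vote over implementations */
        int majority = resp[s][0], best = 0;
        for (int i = 0; i < N_IMPL; i++) {
            int count = 0;
            for (int j = 0; j < N_IMPL; j++)
                if (resp[s][j] == resp[s][i]) count++;
            if (count > best) { best = count; majority = resp[s][i]; }
        }
        for (int i = 0; i < N_IMPL; i++)
            if (resp[s][i] != majority) deviations[i]++;
    }

    for (int i = 0; i < N_IMPL; i++)
        printf("impl %d: %d deviations\n", i, deviations[i]);
    return 0;
}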
Exploiting Dissent: Towards Fuzzing-based Differential Black Box Testing of TLS Implementations
(2017)
The Transport Layer Security (TLS) protocol is one of the most widely used security protocols on the internet. Yet implementations of TLS keep suffering from bugs and security vulnerabilities. In large part, this is due to the protocol's complexity, which makes implementing and testing TLS notoriously difficult. In this paper, we present our work on using differential testing as an effective means to detect issues in black-box implementations of the TLS handshake protocol. We introduce a novel fuzzing algorithm for generating large and diverse corpuses of mostly-valid TLS handshake messages. Stimulating TLS servers expecting a ClientHello message, we find messages generated with our algorithm to induce more response discrepancies and to achieve a higher code coverage than those generated with American Fuzzy Lop, TLS-Attacker, or NEZHA. In particular, we apply our approach to OpenSSL, BoringSSL, WolfSSL, mbedTLS, and MatrixSSL, and find several real implementation bugs, among them a serious vulnerability in MatrixSSL 3.8.4. Beyond that, our findings point to imprecision in the TLS specification. We see the approach presented in this paper as a first step towards fully interactive differential testing of black-box TLS protocol implementations. Our software tools are publicly available as open source projects.
The Datagram Transport Layer Security (DTLS) protocol has been designed to provide end-to-end security over unreliable communication links. Where its connection establishment is concerned, DTLS copes with potential loss of protocol messages by implementing its own loss detection and retransmission scheme. However, the default scheme turns out to be suboptimal for links with high transmission error rates and low data rates, such as wireless links in electromagnetically harsh industrial environments. Therefore, in this paper, as a first step we provide an analysis of the standard DTLS handshake's performance under such adverse transmission conditions. Our studies are based on simulations that model message loss as the result of bit transmission errors. We consider several handshake variants, including endpoint authentication via pre-shared keys or certificates. As a second step, we propose and evaluate modifications to the way message loss is dealt with during the handshake, making DTLS deployable in situations which are prohibitive for default DTLS.
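For context on the default loss-handling scheme analyzed above, the C sketch below simulates the standard DTLS retransmission behavior (RFC 6347: an initial 1 s timer that doubles after each timeout) for a single flight under a fixed, illustrative loss probability; abort handling is simplified.

/* Simplified simulation of DTLS flight retransmission under loss. */
#include <stdio.h>
#include <stdlib.h>

#define P_LOSS    0.3    /* probability that a flight is lost (toy) */
#define T_INIT_MS 1000   /* RFC 6347 initial retransmit timer       */
#define T_MAX_MS  60000  /* RFC 6347 recommended maximum            */

int main(void)
{
    srand(42);
    double elapsed = 0.0;
    int timeout = T_INIT_MS;

    for (int attempt = 1; ; attempt++) {
        int lost = (double)rand() / RAND_MAX < P_LOSS;
        if (!lost) {
            printf("flight delivered on attempt %d after %.1f s\n",
                   attempt, elapsed / 1000.0);
            break;
        }
        elapsed += timeout;   /* wait for the timer to fire */
        timeout *= 2;         /* exponential backoff        */
        if (timeout > T_MAX_MS) {
            printf("handshake aborted after %.1f s\n", elapsed / 1000.0);
            break;
        }
    }
    return 0;
}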
Concussions in sports and during recreational activities are a major source of traumatic brain injury in our society. This is mainly relevant in adolescence and young adulthood, where the annual rate of diagnosed concussions is increasing from year to year. Contact sports (e.g., ice hockey, American football, or boxing) are especially exposed to repeated concussions. While most of the athletes recover fully from the trauma, some experience a variety of symptoms including headache, fatigue, dizziness, anxiety, abnormal balance and postural instability, impaired memory, or other cognitive deficits. Moreover, there is growing evidence regarding clinical and neuropathological consequences of repetitive concussions, which are also linked to an increased risk for depression and Alzheimer’s disease or the development of chronic traumatic encephalopathy. With little contribution of conventional structural imaging (computed tomography (CT) or magnetic resonance imaging (MRI)) to the evaluation of concussion, nuclear imaging techniques (i.e., positron emission tomography (PET) and single-photon emission computed tomography (SPECT)) are in a favorable position to provide reliable tools for a better understanding of the pathophysiology and the clinical evaluation of athletes suffering a concussion.
Langzeit-EKG-Scripte
(2016)
This article describes the bwLehrpool project funded by the Ministry of Science of Baden-Württemberg, which aims to sustainably reduce IT costs state-wide and to relieve data-center staff by centralizing services and through cross-institutional cooperation. The project comprises the creation of a central infrastructure for PC pools, specialized laboratories, and e-examinations in order to achieve greater flexibility of IT support in teaching and research. The administrative effort for operation is to be reduced while, at the same time, teaching and research are decoupled from a specific computer workstation or room. This allows existing PC pools to be utilized considerably better. In addition, software and hardware costs are to be reduced since, unlike at present, heterogeneous PC landscapes can also be used. The service, which is currently being set up, provides a twofold abstraction. On the one hand, it creates a jointly usable base system that adapts to the respective local conditions such as user administration, home and shared directories, or printing services. On the other hand, it offers the abstraction needed to use virtual machines of various types across universities. Expert knowledge at different levels is used optimally, and a new perspective opens up for lecturers, as they can define their teaching and research environments in a simple way, independently of the concrete hardware and machine administration, while at the same time exchanging experience with other universities.
The Institute of Applied Research Offenburg has been working in the field of autonomous data loggers for many years. In collaboration with industry, a new RFID-based active sensor data logger for the continuous recording of temperature has been developed and is now manufactured in mass production. Compared to existing systems, an unusually large data memory is integrated, which can be used flexibly via a simplified file system. The system will be used to accompany and monitor temperature-sensitive goods of high value. The transponder is the first member of a new class of logging devices, the smallest of which will be no larger than a 2-euro coin, with a fully integrated ASIC frontend.
Remote measurement of physiology, so-called biotelemetry, is a key technology in modern veterinary medicine. The usage of wireless implants has less impact on the behavior of animals than manual measurement methods and causes less disturbance than wired devices. However, common biotelemetry still uses proprietary communication and power concepts focused on small systems with one animal. Therefore, the University of Applied Sciences Offenburg is developing a low-cost RFID system called muTrans, which is able to measure ECG, pressure, temperature, oxygen saturation, and activity. The muTrans uses its own RFID sensor transponder and standardized commercial components and combines them into a scalable RFID system able to build up RFID sensor networks of nearly unlimited size.
Mice and rats make up 95% of all animals used in medical research and drug discovery and development. Monitoring of physiological functions such as ECG, blood pressure, and body temperature over the entire period of an experiment is often required. Restraining the animals in order to obtain these data can cause great inconvenience. The use of telemetric systems solves this problem and provides more reliable results. However, these devices are mostly equipped with batteries, which limit the operating time, or they use passive power supplies, which limits the operating range. The semi-passive telemetric implant presented here is based on RFID technology and overcomes these obstacles. The device is inductively powered using the magnetic field of a common RFID reader device underneath the cage, but is also able to operate autonomously for several hours. Being independent of the battery capacity, it is possible to use the implant over a long period of time or to re-use the device several times in different animals, thus avoiding the disadvantages of existing systems and reducing the costs of purchase and refurbishment.
Formal Description of Inductive Air Interfaces Using Thévenin's Theorem and Numerical Analysis
(2014)
With the development of new integrated circuits interfacing radio frequency identification protocols, inductive air interfaces have become more and more important. Near-field communication not only enables communication, but also makes it possible to transfer power wirelessly and to build passive devices for logistical and medical applications. Consequently, power management on the transponder becomes more and more relevant. A designer has to optimize power consumption as well as energy harvesting from the magnetic field. This paper discusses a model with simple equations to improve transponder antenna matching. Furthermore, a new numerical analysis technique is presented to calculate the coupling factors, inductances, and magnetic fields of multi-antenna systems.
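The two quantities at the heart of such a Thévenin description are easy to state: the coupling factor k = M / sqrt(L1·L2) and the open-circuit source voltage |V_oc| = ω·M·I1 induced in the transponder coil. The C sketch below evaluates both for placeholder component values.

/* Coupling factor and Thevenin open-circuit voltage of an inductive
 * air interface; component values are illustrative. */
#include <stdio.h>
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

int main(void)
{
    double L1 = 2.0e-6;    /* reader coil inductance, H      */
    double L2 = 3.5e-6;    /* transponder coil inductance, H */
    double M  = 0.15e-6;   /* mutual inductance, H           */
    double I1 = 0.5;       /* reader coil current, A (rms)   */
    double f  = 13.56e6;   /* HF RFID carrier frequency, Hz  */

    double k    = M / sqrt(L1 * L2);
    double v_oc = 2.0 * M_PI * f * M * I1;   /* Thevenin source voltage */

    printf("coupling factor k = %.3f\n", k);
    printf("open-circuit voltage |V_oc| = %.2f V (rms)\n", v_oc);
    return 0;
}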
There is an increasing demand by an ever-growing number of mobile customers for transfer of rich media content. This requires very high bandwidth which either cannot be provided by the current cellular systems or puts pressure on the wireless networks, affecting customer service quality. This study introduces COARSE – a novel cluster-based quality-oriented adaptive radio resource allocation scheme, which dynamically and adaptively manages the radio resources in a cluster-based two-hop multi-cellular network, having a frequency reuse of one. COARSE is a cross-layer approach across physical layer, link layer and the application layer. COARSE gathers data delivery-related information from both physical and link layers and uses it to adjust bandwidth resources among the video streaming end-users. Extensive analysis and simulations show that COARSE enables a controlled trade-off between the physical layer data rate per user and the number of users communicating using a given resource. Significantly, COARSE provides 25–75% improvement in the computed user-perceived video quality compared with that obtained from an equivalent single-hop network.
A Survey of Channel Measurements and Models for Current and Future Railway Communication Systems
(2016)
Cardiac resynchronization therapy (CRT) with biventricular pacing is an established therapy for heart failure (HF) patients (P) with ventricular desynchronization and reduced left ventricular (LV) ejection fraction. The aim of this study was to evaluate electrical right atrial (RA), left atrial (LA), right ventricular (RV) and LV conduction delay with novel telemetric signal averaging electrocardiography (SAECG) in implantable cardioverter defibrillator (ICD) P to better select P for CRT and to improve hemodynamics in cardiac pacing.
Methods: ICD-P (n=8, age 70.8 ± 9.0 years; 2 females, 6 males) with VVI-ICD (n=4), DDD-ICD (n=3) and CRT-ICD (n=1) (Medtronic, Inc., Minneapolis, MN, USA) were analysed with telemetric ECG recording by the Medtronic programmer 2090, ECG cable 2090AB, a PCSU1000 oscilloscope with Pc-Lab2000 software (Velleman®), and novel National Instruments LabVIEW SAECG software.
Results: Electrical RA conduction delay (RACD) was measured between onset and offset of RA deflection in the RAECG. Interatrial conduction delay (IACD) was measured between onset of RA deflection and onset of far-field LA deflection in the RAECG. Interventricular conduction delay (IVCD) was measured between onset of RV deflection in the RVECG and onset of LV deflection in the LVECG. Telemetric SAECG recording was possible in all ICD-P with a mean of 11.7 ± 4.4 SAECG heart beats, 97.6 ± 33.7 ms QRS duration, 81.5 ± 44.6 ms RACD, 62.8 ± 28.4 ms RV conduction delay, 143.7 ± 71.4 ms right cardiac AV delay, 41.5 ms LA conduction delay, 101.6 ms LV conduction delay, 176.8 ms left cardiac AV delay, 53.6 ms IACD and 93 ms IVCD.
Conclusions: Determination of RA, LA, RV and LV conduction delay, IACD, IVCD, right and left cardiac AV delay by telemetric SAECG recording using LabView SAECG technique may be useful parameters of atrial and ventricular desynchronization to improve P selection for CRT and hemodynamics in cardiac pacing.
Cardiac resynchronization therapy (CRT) is an established therapy for heart failure patients and improves quality of life in patients with sinus rhythm, reduced left ventricular ejection fraction (LVEF), left bundle branch block, and wide QRS duration. Since approximately sixty percent of heart failure patients have a normal QRS duration, they do not benefit from or respond to CRT. Cardiac contractility modulation (CCM) releases nonexcitatory impulses during the absolute refractory period in order to enhance the strength of the left ventricular contraction. The aim of the investigation was to evaluate differences in cardiac index between optimized and non-optimized CRT and CCM devices versus standard values. Impedance cardiography, a noninvasive method, was used to measure the cardiac index (CI), a useful parameter which relates the blood volume the heart pumps per minute to the body surface area. CRT patients showed an increase of 39.74 percent and CCM patients an improvement of 21.89 percent in cardiac index with an optimized device.
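As a worked example of the quantity being optimized, the C snippet below computes a cardiac index from illustrative patient values, using the DuBois formula for body surface area.

/* Cardiac index = cardiac output (HR x SV) / body surface area.
 * Patient values are illustrative, not from the study. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    double hr_bpm    = 72.0;   /* heart rate, beats per minute */
    double sv_ml     = 70.0;   /* stroke volume, ml            */
    double weight_kg = 80.0;
    double height_cm = 175.0;

    /* DuBois & DuBois: BSA = 0.007184 * W^0.425 * H^0.725 [m^2] */
    double bsa      = 0.007184 * pow(weight_kg, 0.425) * pow(height_cm, 0.725);
    double co_l_min = hr_bpm * sv_ml / 1000.0;   /* cardiac output */
    double ci       = co_l_min / bsa;            /* cardiac index  */

    printf("BSA = %.2f m^2, CO = %.2f l/min, CI = %.2f l/min/m^2\n",
           bsa, co_l_min, ci);
    return 0;
}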
Spectral analysis of signal averaging electrocardiography in atrial and ventricular tachyarrhythmias
(2017)
Background: Targeting complex fractionated atrial electrograms detected by automated algorithms during ablation of persistent atrial fibrillation has produced conflicting outcomes in previous electrophysiological studies. The aim of the investigation was to evaluate atrial and ventricular high frequency fractionated electrical signals with a signal-averaging technique.
Methods: Signal-averaged electrocardiography (ECG) is a high-resolution ECG technique for eliminating interference noise in the recorded ECG. The algorithm uses an automatic ECG trigger function for signal-averaged transthoracic, transesophageal, and intracardiac ECG signals with novel LabVIEW software (National Instruments, Austin, Texas, USA). For spectral analysis, we used the fast Fourier transformation in combination with spectro-temporal mapping and the wavelet transformation to obtain detailed information about the frequency and intensity of high frequency atrial and ventricular signals.
Results: Spectral-temporal mapping and wavelet transformation of the signal averaged ECG allowed the evaluation of high frequency fractionated atrial signals in patients with atrial fibrillation and high frequency ventricular signals in patients with ventricular tachycardia. The analysis in the time domain evaluated fractionated atrial signals at the end of the signal averaged P-wave and fractionated ventricular signals at the end of the QRS complex. The analysis in the frequency domain evaluated high frequency fractionated atrial signals during the P-wave and high frequency fractionated ventricular signals during QRS complex. The combination of analysis in the time and frequency domain allowed the evaluation of fractionated signals during atrial and ventricular conduction.
Conclusions: Spectral analysis of signal-averaged electrocardiography with novel LabVIEW software can be utilized to evaluate atrial and ventricular conduction delays in patients with atrial fibrillation and ventricular tachycardia. Complex fractionated atrial electrograms may be useful parameters to evaluate electrical cardiac arrhythmogenic signals in atrial fibrillation ablation.
Agile Business Intelligence als Beispiel für ein domänenspezifisch angepasstes Vorgehensmodell
(2016)
Business intelligence systems play an important role for companies by supporting decision-making. An increasingly dynamic business environment therefore brings with it the demand for agile development of these systems, so that agile methods and process models are being applied increasingly successfully in the BI domain. The further development and adaptation of BI systems is special in that it usually affects systems and structures that have grown over many years and are subject to strict regulatory constraints, which poses a challenge for agile approaches. While the values and principles of the Agile Manifesto [AM01] and the methods derived from them were initially transferred to the BI field mostly one to one, an understanding of BI agility as a holistic property of BI has since become established in German-speaking countries, and agile methods have been adapted to the particularities of the BI domain. In this contribution, BI agility and Agile BI are explained, a framework for measures to increase BI agility is introduced, and challenges of Agile BI are discussed.
In the bwLehrpool project, a distributed system for the flexible use of computer pools through desktop virtualization was developed. Based on a centrally booted basic Linux system, arbitrary virtualizable operating systems can be provided centrally for teaching and examination purposes and selected locally on the machines. The various working environments no longer have to be installed on the PCs, thus allowing a multifunctional use of PCs and rooms for diverse teaching and learning scenarios as well as for electronic examinations. bwLehrpool abstracts from the local PC hardware and enables lecturers to design and manage their own software environments as a self-service. In addition, bwLehrpool promotes the exchange of course environments across universities.
In public transportation, the vehicle fleet often consists of various different vehicles bought over many years. Sometimes they even differ within one batch bought at the same time. This poses a considerable challenge in the storage and allocation of spare parts, especially in the event of damage to a vehicle. Correctly assigning these parts before the vehicle reaches the workshop could significantly reduce both the downtime and, therefore, the actual costs for companies. To achieve this, the current software uses a simple probability calculation. To improve the performance, the data of specific companies was analysed, preprocessed and used with several modelling techniques to classify and thus predict the spare parts to be used in the event of a faulty vehicle. We summarize our experience running through the steps of the Cross Industry Standard Process for Data Mining (CRISP-DM) and compare the performance to the previously used probability calculation. Gradient Boosting Trees turned out to be the best modelling technique for this special case.
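A minimal sketch of the modeling step, assuming a tabular dataset with fault attributes as features and the consumed spare part as the label (file and column names are invented; the companies' data is not public):

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical columns: vehicle type, error code, mileage -> spare part used
df = pd.read_csv("workshop_orders.csv")
X = pd.get_dummies(df[["vehicle_type", "error_code", "mileage"]])
y = df["spare_part"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

model = GradientBoostingClassifier(n_estimators=200, max_depth=3)
model.fit(X_train, y_train)

# Compare against a simple probability baseline: always predict the
# historically most frequent spare part.
baseline = y_train.value_counts().idxmax()
print("GBT accuracy:", accuracy_score(y_test, model.predict(X_test)))
print("baseline accuracy:", accuracy_score(y_test, [baseline] * len(y_test)))
```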
This paper describes the use of the single-linkage hierarchical clustering method for outlier detection on manufactured metal work pieces. The main goal of the study is to group defects that occur within 5 mm of the edge, i.e., the border, of the metal work piece. The goal is to remove defects outside this area of interest as outliers. According to the assumptions made for the performance criteria, the single-linkage method achieved better results than other agglomerative methods.
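A minimal sketch with scipy, assuming defect positions are given as 2D coordinates on the work piece; the distance threshold is an assumption, not the study's criterion:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical defect coordinates (mm) on the metal work piece
defects = np.array([[1.2, 0.8], [1.5, 1.1], [2.0, 0.9],  # near the edge
                    [40.3, 55.0], [41.0, 54.2],           # interior cluster
                    [120.7, 3.3]])                        # isolated defect

# Single-linkage merges clusters by their closest pair of points
Z = linkage(defects, method="single")

# Cut the dendrogram at a distance threshold: points that only merge at
# large distances end up in singleton clusters and are treated as outliers.
labels = fcluster(Z, t=5.0, criterion="distance")
sizes = np.bincount(labels)
outliers = defects[sizes[labels] == 1]
print("cluster labels:", labels, "\noutliers:\n", outliers)
```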
Since the first projects of the 1990s, universities have been working to establish suitable service structures for e-learning that provide the required technical, didactic and organizational support across the whole institution. While the initial concern was to secure services permanently at all, today the question of "how" is in the foreground. In this respect, the field of e-learning highlights a much more general problem: the hitherto predominant organization of universities along functional units is reaching its limits. We propose a more process-oriented perspective, analogous to developments in the organization of companies.
Data science is considered one of the most important developments of recent years, and many companies see in data science the opportunity to derive additional value from their data. This may involve the optimization of maintenance processes, better control of their own pricing and inventory strategies, or entirely new services and products made possible by data science. The data available in the company, to which such high expectations have been attached, is to be used to make services and processes more efficient and better fitting. Data science is often regarded as a panacea: data that has been collected over years, and that accrues with increasing speed and heterogeneity, is finally to be made usable. Although some of the techniques and algorithms employed are ten years old and more, only now, in combination with big data, do they unfold their potential in the corporate environment. Expectations are high, but the path to new insights involves considerable effort and is still underestimated by some companies.
For companies with a traditional BI approach, data science represents a complementary set of methods and tools with which the information supply of decision-makers at the various hierarchical levels can be improved further. For example, data science may reveal that the probability of concluding an insurance contract increases if additional data, which is already available but has not yet been considered, is taken into account when selecting the customers to approach. In the extreme case, decisions previously made by employees are fully automated: an algorithm then determines when goods are reordered or which price is set for the end customer.
This e-book aims to give an overview of the field of data science, with particular attention to the interplay as well as the coexistence of data science and existing BI systems.
Due to its numerous application fields and benefits, virtualization has become an interesting and attractive topic in computer and mobile systems, as it promises advantages in security and cost efficiency. However, it may introduce additional performance overhead. Recently, CPU virtualization has become more popular on embedded platforms, where this overhead is especially critical. In this article, we present measurements of the performance overhead of the two hypervisors Xen and Jailhouse on ARM processors under the heavy-load “Cpuburn-a8” application and compare them to a native Linux system running on the same processors.
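The overhead metric itself reduces to comparing wall-clock times of an identical CPU-bound workload run natively and under a hypervisor. A minimal sketch, assuming a generic burn loop rather than the actual Cpuburn-a8 benchmark:

```python
import time

def cpu_burn(iterations: int = 5_000_000) -> float:
    """Fixed CPU-bound workload; returns elapsed wall-clock seconds."""
    start = time.perf_counter()
    acc = 0.0
    for i in range(iterations):
        acc += (i % 7) * 0.5
    return time.perf_counter() - start

# Run this script once on native Linux and once inside the guest,
# then compute the relative overhead from the two medians:
#   overhead_percent = 100 * (t_virtualized - t_native) / t_native
runs = sorted(cpu_burn() for _ in range(5))
median = runs[len(runs) // 2]
print(f"median runtime: {median:.3f} s")
```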
Bluetooth Low Energy extends the Bluetooth standard in version 4.0 for ultra-low-energy applications through the extensive use of low-power sleeping periods, which are inherently difficult in frequency-hopping technologies. This paper gives an introduction to the specifics of the Bluetooth Low Energy protocol, shows a sample implementation in which an embedded device is controlled by an Android smartphone, and presents the results of timing and current consumption measurements.
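The benefit of low-power sleeping periods can be estimated with a simple duty-cycle calculation. A minimal sketch, with invented current and timing figures rather than the measured values from the paper:

```python
def average_current_ua(active_ua: float, sleep_ua: float,
                       t_active_ms: float, interval_ms: float) -> float:
    """Duty-cycle-weighted average current of a periodic BLE connection event."""
    duty = t_active_ms / interval_ms
    return duty * active_ua + (1 - duty) * sleep_ua

# Hypothetical example: 15 mA during a 3 ms connection event every 1000 ms,
# 5 uA in sleep between events.
avg = average_current_ua(active_ua=15_000, sleep_ua=5,
                         t_active_ms=3, interval_ms=1000)
print(f"average current: {avg:.1f} uA")  # ~50 uA
```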
Transcatheter aortic valve implantation (TAVI) is a therapy for patients with reduced left ventricular ejection fraction and symptomatic aortic stenosis. The aim of the study was to compare pre- and post-procedural measurements to determine the QRS and QT ventricular conduction times as potential predictors of the need for permanent pacemaker therapy after TAVI. QRS and QT ventricular conduction times were prolonged after TAVI in heart failure patients who received permanent dual chamber pacemaker therapy. QRS and QT ventricular conduction times may be useful parameters to evaluate the risk of post-procedural ventricular conduction block and permanent pacemaker therapy in TAVI.
In contrast to conventional aortic valve replacement, Transcatheter Aortic Valve Implantation (TAVI) is a new, highly specialized alternative to surgical valve replacement for patients with symptomatic severe aortic stenosis and high operative risk. The procedure is performed in a minimally invasive way and was introduced at the University Heart Centre Freiburg – Bad Krozingen in 2008. The results have improved steadily over the years. The aim of the investigation is the analysis of electrocardiographic conduction times and of the electrocardiography changes recorded hours and days after the procedure, depending on the artificial heart valve model, which may lead to pacemaker implantation, as well as the analysis of the effectiveness of the treatment.
Transcatheter aortic valve implantation is a new, safe treatment strategy for patients with symptomatic severe aortic stenosis and high operative risk. The aim of the study was to compare pre- and post-procedural measurements to determine the atrioventricular conduction time as a potential predictor of the need for permanent pacemaker therapy after transcatheter aortic valve implantation. The patients were divided into groups without a pacemaker and with a dual or single chamber pacemaker, with different atrioventricular conduction time disturbances before and after the procedure. In heart failure patients without permanent pacemaker therapy after transcatheter aortic valve implantation, the atrioventricular conduction time was prolonged after the procedure. In patients with permanent dual chamber pacemaker therapy after transcatheter aortic valve implantation, the atrioventricular conduction time was normalised with the dual chamber atrioventricular pacing mode. The atrioventricular conduction time may be a useful parameter to evaluate the risk of post-procedural atrioventricular conduction block and permanent pacemaker therapy in transcatheter aortic valve implantation patients.
In the work at hand, we combine a Private Information Retrieval (PIR) protocol with Somewhat Homomorphic Encryption (SHE) and use Searchable Encryption (SE) with the objective of providing security and confidentiality features for a third-party cloud security audit. During the auditing process, a third-party auditor acts on behalf of a cloud service user to validate the security requirements fulfilled by a cloud service provider. Our concrete contribution consists of developing a PIR protocol that operates directly on a log database of encrypted data and allows retrieving a sum or a product of multiple encrypted elements. Subsequently, we apply our new form of PIR protocol to a cloud audit use case in which searchable encryption is employed to add confidentiality requirements for the privacy of the user. As an example, we consider and evaluate an audit of client accesses to a controlled resource provided by a cloud service provider.
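The retrieval of a sum of multiple encrypted elements relies on additive homomorphism: the product of ciphertexts decrypts to the sum of the plaintexts. A minimal textbook Paillier sketch with toy, insecure key sizes (this is the standard construction, not the paper's protocol):

```python
import math
import random

# Toy Paillier keypair (insecure sizes, for illustration only)
p, q = 293, 433
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = math.lcm(p - 1, q - 1)

def L(x: int) -> int:
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # modular inverse for decryption

def encrypt(m: int) -> int:
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    return (L(pow(c, lam, n2)) * mu) % n

# Homomorphic sum: the product of ciphertexts decrypts to the plaintext
# sum, so an audit server can aggregate log entries it cannot read.
log_entries = [17, 4, 23]
ciphertexts = [encrypt(m) for m in log_entries]
c_sum = math.prod(ciphertexts) % n2
assert decrypt(c_sum) == sum(log_entries)  # 44
print("decrypted sum:", decrypt(c_sum))
```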
Today's network landscape consists of many different network technologies, a wide range of end devices with a large scale of capabilities and power, and an immense quantity of information and data represented in different formats. Research on 3D imaging, virtual reality and holographic techniques will result in new user interfaces (UI) for mobile devices and will increase their diversity and variety. In this paper, a software architecture is proposed to establish device- and content-format-independent communication, implemented in a Language Learning Game (LLG).
Today's network landscape contains many different network technologies, a wide range of end devices with a large scale of capabilities and power, and an immense quantity of information and data represented in different formats. Research on 3D imaging, virtual reality and holographic techniques will result in new user interfaces (UI) for mobile devices and will increase their diversity and variety. In this paper, a software architecture is proposed to establish device- and content-format-independent communication, including 3D imaging and virtual reality data as content. As experimental validation, the concept is implemented in a collaborative Language Learning Game (LLG), which is a learning tool for language acquisition.
Improvements in the hardware and software of communication devices have made it possible to run Virtual Reality (VR) and Augmented Reality (AR) applications on them. Nowadays, it is possible to overlay synthetic information on real images, or even to play 3D online games on smartphones and other mobile devices. Hence, the use of 3D data for business and especially for education purposes is ubiquitous. Because mobile phones are always at hand and always ready to use, they are considered the most promising communication devices. The total number of mobile phone users is increasing all over the world every day, which makes mobile phones the most suitable devices for reaching a huge number of end clients, whether for education or for business purposes. There are different standards, protocols and specifications for establishing communication among different devices, but no initiative has been taken so far to ensure that the data sent through this communication process will be understood and usable by the destination device. Since not all devices can handle every kind of 3D data format, and it is also not realistic to maintain different versions of the same data to make it compatible with each destination device, a general solution is necessary. The architecture proposed in this paper provides device- and purpose-independent 3D data visibility, any time and anywhere, to the right person in a suitable format. No solution is without limitations. The architecture is implemented in a prototype for experimental validation, which also shows the difference between theory and practice.
Nowadays, many applications, companies and parts of society are expected to be always available online. However, according to [Times, Oct, 31 2011], 73% of the world population do not use the internet and thus are not “online” at all. The most common reasons for not being “online” are expensive personal computer equipment and high costs for data connections, especially in developing countries that comprise most of the world's population (e.g. parts of Africa, Asia, Central and South America). However, it seems that these countries are leap-frogging the “PC and landline” age and moving directly to the “mobile” age. Decreasing prices for smartphones with internet connectivity and PC-like operating systems make it more affordable for these parts of the world population to join the “always-online” community. Storing learning content in a way that is accessible to everyone, including on mobile phones and smartphones, therefore seems beneficial. This way, learning content can be accessed by personal computers as well as by mobile phones and smartphones and thus be accessible for a wide range of devices and users. A new trend in Internet technologies is to go to “the cloud”. This paper discusses the changes, challenges and risks of storing learning content in the “cloud”. The experiences were gathered while evaluating the changes necessary to make our solutions and systems “cloud-ready”.
Smoothie: a solution for device and content independent applications including 3D imaging as content
(2014)
Today's network landscape contains many different network technologies, a wide range of end devices with a large scale of capabilities and power, and an immense quantity of information represented in different data formats. Research on 3D imaging, virtual reality and holographic techniques will result in new user interfaces (UI) for mobile devices and will increase their diversity and variety. A lot of effort is being made to establish open, scalable and seamless integration of various technologies and content presentation for different devices, including mobile ones, considering the individual situation of the end user. Research is ongoing in different parts of the world, but the task is not yet completed. The goal of this research work is to solve the problems stated above by investigating system architectures that provide unconstrained, continuous and personalized access to content and interactive applications everywhere and at any time with different devices. As a solution to the problem considered, a new architecture named “Smoothie” is proposed.
The concept of m-learning, which differs from other forms of e-learning, covers a wide range of possibilities opened up by the convergence of new mobile technologies, wireless communication infrastructure and distance learning development. This convergence has created new goals for supporting m-learning, where the heterogeneity of devices, their operating systems (Linux, Windows, Symbian, Android etc.) and supported markup languages (WML, XHTML etc.), adaptive content, and user preferences or characteristics have become some of the major problems to be solved. To facilitate the learning process even further and to establish literally anytime-anywhere learning, learning material and content should always be available to the user, even offline. Multiple devices used by the same user should also be synchronized among themselves and with the server, both to provide updated learning content and to give the user the freedom to choose any device as convenient. In this paper, a software architecture is proposed to solve these problems; it has been implemented in a multidimensional flashcard learning system which synchronizes among all the devices being used by the user.
Today's network landscape consists of quite different network technologies, a wide range of end devices with a large scale of capabilities and power, and an immense quantity of information and data represented in different formats. Research on 3D imaging, virtual reality and holographic techniques will result in new user interfaces (UI) for mobile devices and will increase their diversity and variety. A lot of effort is being made to establish open, scalable and seamless integration of various technologies and content presentation for different devices, including mobile ones, considering the individual situation of the end user. This is very difficult because various kinds of devices are used by different users, or at different times or in parallel by the same user; these devices are not predictable and have to be recognized by the system in order to identify their capabilities. Not only the devices but also content and user interfaces are big issues, because they may include different kinds of data formats such as text, image, audio, video, 3D virtual reality data and other upcoming formats. A very suitable and useful example of such a system is mobile learning, because of the large number of varying devices with significantly different features and functionalities. This holds not only for supporting different learners, e.g. all learners within one learning community, but also for supporting the same learner using different equipment in parallel and/or at different times. Those applications may be significantly enhanced by including virtual reality content presentation. Whatever the purpose, it is impossible to develop and adapt content individually for every kind of device, including mobile ones, due to the different capabilities of the devices, cost issues and authors' requirements. A solution should be found to enable the automation of the content adaptation process.
This paper proposes the establishment of a software tool chain among requirements management tools, the black-box test tool CTE XL and RTRT. The use of the Classification Tree Method (CTM) reduces the number of test cases and promises increased efficiency when testing. The traceability of test cases and requirements is guaranteed by the established software tool chain with well-defined interfaces. As the experimental results show, better test coverage can be achieved. Future work can address the automatic generation of initial and expected values for testing, requiring no intervention by a software quality engineer. In conclusion, the tasks that need to be performed by the software quality engineers are to define the black-box test cases using CTM/CTE XL, import the requirements from the requirements management tools, and import the XML file into the test tool RTRT. By supplying the initial and expected values, the testing can be performed in a comfortable way.
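The reduction effect of the Classification Tree Method can be illustrated independently of CTE XL: the input domain is split into classifications with disjoint classes, and each test case combines one class per classification. A minimal sketch with invented classifications, not taken from the paper:

```python
from itertools import product

# Hypothetical classification tree for a black-box test of a speed controller
classifications = {
    "speed": ["zero", "low", "high"],
    "gear":  ["forward", "reverse"],
    "brake": ["released", "pressed"],
}

# Full combination coverage: every class combination is one test case
all_cases = list(product(*classifications.values()))
print(len(all_cases), "test cases with full combination coverage")  # 12

# CTM-style reduction to minimal combination coverage: every class
# appears in at least one test case.
longest = max(len(v) for v in classifications.values())
reduced = [tuple(v[i % len(v)] for v in classifications.values())
           for i in range(longest)]
print(len(reduced), "test cases with minimal combination coverage")  # 3
```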
Optical navigation systems have so far exhibited a clear separation between the tracking device (tool tracker) and the tracked devices (tracked tools). In this work, a new concept is presented that removes this separation: every tracked tool is simultaneously a tool tracker and consists of marker LEDs as well as at least one camera, with whose help other trackers can be tracked in position and orientation. When only one camera is used, this is done by means of pose estimation; with two or more cameras, the marker LEDs are triangulated. This work presents the new peer-to-peer tracking concept and a very fast pose estimation algorithm for an arbitrary number of markers, and addresses the question of whether the accuracy achievable with pose estimation is comparable to that of a stereo camera system and meets the requirements of surgical navigation.
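For the single-camera case described above, a tracker's pose can be estimated from the 2D image projections of its known marker-LED geometry with a standard PnP solver, e.g. OpenCV's solvePnP; the paper's own fast algorithm is not reproduced here, and all coordinates and intrinsics below are invented:

```python
import numpy as np
import cv2

# Hypothetical 3D marker-LED positions in the tracked tool's frame (mm)
object_points = np.array([[0, 0, 0], [60, 0, 0], [0, 40, 0], [60, 40, 10]],
                         dtype=np.float64)

# Their detected 2D positions in the tracker camera image (px)
image_points = np.array([[320, 240], [420, 238], [318, 170], [421, 168]],
                        dtype=np.float64)

# Assumed pinhole intrinsics: focal length 800 px, principal point (320, 240)
camera_matrix = np.array([[800, 0, 320],
                          [0, 800, 240],
                          [0, 0, 1]], dtype=np.float64)
dist_coeffs = np.zeros(5)  # no lens distortion assumed

ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                              camera_matrix, dist_coeffs)
R, _ = cv2.Rodrigues(rvec)  # rotation vector -> 3x3 rotation matrix
print("position (mm):", tvec.ravel(), "\norientation:\n", R)
```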
In online analytical processing (OLAP), filtering elements of a given dimensional attribute according to the value of a measure attribute is an essential operation, for example in top-k evaluation. Such filters can involve extremely large amounts of data to be processed, in particular when the filter condition includes “quantification” such as ANY or ALL, where large slices of an OLAP cube have to be computed and inspected. Due to the sparsity of OLAP cubes, the slices serving as input to the filter are usually sparse as well, presenting a challenge for GPU approaches which need to work with a limited amount of memory for holding intermediate results. Our CUDA solution involves a hashing scheme specifically designed for frequent and parallel updates, including several optimizations exploiting architectural features of Nvidia’s Fermi and Kepler GPUs.
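The paper's hashing scheme is CUDA-specific; as a language-neutral illustration of the underlying operation, the following Python sketch evaluates an ALL-quantified filter over a sparse slice with hash-based aggregation (the data, the predicate and the treatment of empty cells are simplifying assumptions):

```python
from collections import defaultdict

# Sparse OLAP slice: (customer, month) -> revenue; missing cells are empty.
slice_cells = [("a", 1, 120.0), ("a", 2, 95.0),
               ("b", 1, 40.0),
               ("c", 2, 310.0), ("c", 3, 280.0)]

# Hash-based aggregation: one bucket per dimension element, updated once
# per non-empty cell (the GPU version performs these updates in parallel
# with atomic operations on a device-side hash table).
min_revenue = defaultdict(lambda: float("inf"))
for customer, month, revenue in slice_cells:
    min_revenue[customer] = min(min_revenue[customer], revenue)

# ALL-quantified filter: keep customers whose revenue exceeds the
# threshold in ALL months where they appear in the slice.
threshold = 90.0
kept = [c for c, m in min_revenue.items() if m > threshold]
print(kept)  # ['a', 'c']
```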