The communication technology for remote meter reading (smart metering) and for energy generation and distribution networks (smart grid) has the potential to become one of the first highly scaled M2M applications. In recent years, two promising developments in wireless communication for smart grid applications have been prepared that could influence the market beyond Germany and beyond utility technology. Besides the specification of the OMS Group, the development of a Protection Profile (PP) and a Technical Guideline (TR) for the communication unit of an intelligent metering system (Smart Meter Gateway) by the German Federal Office for Information Security (BSI) should be mentioned. As this contribution describes, these documents take up the state of the art and prescribe practice-oriented implementations.
The Internet of Things (IoT), ubiquitous computing and ubiquitous connectivity, Cyber-Physical Systems (CPS), ambient intelligence, Machine-to-Machine (M2M) or Car-to-Car (C2C) communication, smart metering, smart grid, telematics, telecare, telehealth – there are many buzzwords around current developments related to the Internet.
This contribution gives an overview of such IoT applications as they are already used today to improve the availability of information, increase efficiency, push system limits and extend the value chain. On closer inspection, the economic and technical development can be separated into different phases. Interestingly, we are currently on the threshold of a new phase, with decentralized and cooperative communication and control nodes as its cornerstones. Thus, embedded systems and their connectivity take center stage.
This recent development is described along with example projects from the author's team that are used in industrial automation, energy supply and distribution (home automation and smart metering), traffic engineering (cooperative driver assistance systems), and in telehealth and telecare.
Due to climate change and the scarcity of water reservoirs, monitoring and control of irrigation systems is becoming a major focal area for researchers in Cyber-Physical Systems (CPS). Wireless Sensor Networks (WSNs) are rapidly finding their way into the field of irrigation and play a key role as the data-gathering technology in the domains of IoT and CPS. They enable reliable monitoring, giving farmers an edge in taking precautionary measures. However, designing an energy-efficient WSN system requires a cross-layer effort, and energy-aware routing protocols play a vital role in the overall energy optimization of a WSN. In this paper, we propose a new hierarchical routing protocol suitable for large-area environmental monitoring such as the large-scale irrigation network in the Punjab province of Pakistan. The proposed protocol resolves the issues faced by traditional multi-hop routing protocols such as LEACH, M-LEACH and I-LEACH, and extends the lifespan of each WSN node, which results in an increased lifespan of the whole network. We used the open-source NS3 simulator, and the results indicate that our proposed modifications yield an average 27.8% increase in the lifespan of the overall WSN compared to the existing protocols.
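The paper's own protocol is not published as code; as a hedged illustration of the general idea behind energy-aware LEACH-style protocols (scaling the probabilistic cluster-head election threshold by residual energy, as in I-LEACH-like schemes), a minimal sketch, where all names and the exact scaling are assumptions:

```python
import random

def cluster_head_threshold(p, round_no, e_residual, e_max):
    """LEACH-style election threshold T(n), scaled by residual energy.

    p: desired fraction of cluster heads per round,
    round_no: current round index,
    e_residual / e_max: node's remaining vs. initial energy.
    The residual-energy factor makes well-charged nodes more likely
    to become cluster heads, spreading the relaying load."""
    base = p / (1 - p * (round_no % int(1 / p)))
    return base * (e_residual / e_max)

def elect_cluster_heads(nodes, p=0.1, round_no=0):
    """nodes: list of dicts with 'id', 'e_residual', 'e_max'."""
    heads = []
    for node in nodes:
        t = cluster_head_threshold(p, round_no, node["e_residual"], node["e_max"])
        if random.random() < t:
            heads.append(node["id"])
    return heads

nodes = [{"id": i, "e_residual": random.uniform(0.2, 1.0), "e_max": 1.0}
         for i in range(100)]
heads = elect_cluster_heads(nodes, p=0.1, round_no=3)
```

A depleted node sees a proportionally lower threshold and is therefore rarely elected, which is the mechanism by which such schemes equalize node lifetimes.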
Climate change and the resultant scarcity of water are becoming major challenges for countries around the world. With the advent of Wireless Sensor Networks (WSN) in the last decade and the relatively new concept of the Internet of Things (IoT), embedded systems developers are now designing control and automation systems that are lower in cost and more sustainable than the existing telemetry systems for monitoring. The Indus river basin in Pakistan has one of the world's largest irrigation systems, and it is extremely challenging to design a low-cost embedded system for monitoring and control of waterways that can last for decades. In this paper, we present a hardware design and performance evaluation of a smart water metering solution that is IEEE 802.15.4-compliant. The results show that our hardware design is as powerful as the reference design but allows for additional flexibility both in hardware and in firmware. The indigenously designed solution has a power added efficiency (PAE) of 24.7% and is expected to last for 351 and 814 days for nodes with and without a power amplifier (PA), respectively. Similarly, the results show that broadband communication (434 MHz) over more than 3 km can be supported, which is an important stepping stone for designing a complete coverage solution for large-scale waterways.
In the field of smart metering it can be observed that standardized protocols, such as Wireless M-Bus or ZigBee, enjoy rapidly increasing popularity. For the protocol implementations, however, mostly legacy engineering processes and technologies have been used up to now, and modern approaches such as model-driven design processes or open software platforms are disregarded. Therefore, within the WiMBex project, it shall be demonstrated that it is possible to develop a commercial-class Wireless M-Bus implementation following a state-of-the-art design process and using TinyOS as an open-source platform. This contribution describes the overall approach of the project, as well as the state and the first experiences of the current work in progress.
The Institute of Reliable Embedded Systems and Communication Electronics at Offenburg University of Applied Sciences, Germany, has developed an automated testing environment, Automated Physical TestBeds (APTB), for analyzing the performance of wireless systems and their supporting protocols. Wireless networking nodes connect to the APTB, and their antenna outputs are attached to RF waveguides. To model the RF environment, these waveguides establish wired connections between RF elements such as splitters, attenuators and switches. In such a setup it is possible to vary the path characteristics by altering the attenuators and switches. The major advantage of the APTB is an isolated, well-controlled, repeatable test environment in which statistical analyses and even regression tests can be run under various conditions. This paper provides an overview of the design and implementation of the APTB and demonstrates its ability to automate test cases, as well as its efficiency.
The IEEE 1588 precision time protocol (PTP) is a time synchronization protocol with sub-microsecond precision primarily designed for wired networks. In this letter, we propose wireless precision time protocol (WPTP) as an extension to PTP for multi-hop wireless networks. WPTP significantly reduces the convergence time and the number of packets required for synchronization without compromising on the synchronization accuracy.
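WPTP builds on the standard IEEE 1588 delay request-response mechanism. The textbook offset and delay calculation from the four exchange timestamps (this is the base PTP math, not the WPTP extension itself) can be sketched as:

```python
def ptp_offset_and_delay(t1, t2, t3, t4):
    """Two-step PTP offset/delay from the four timestamps:
    t1: master sends Sync, t2: slave receives it,
    t3: slave sends Delay_Req, t4: master receives it.
    Assumes a symmetric propagation path."""
    offset = ((t2 - t1) - (t4 - t3)) / 2  # slave clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2   # one-way path delay
    return offset, delay

# Example: slave clock 5 units ahead of master, true path delay 2 units.
offset, delay = ptp_offset_and_delay(t1=0, t2=7, t3=10, t4=7)
# offset == 5.0, delay == 2.0
```

A multi-hop extension such as WPTP must repeat or propagate this exchange across relays, which is where the convergence time and packet count that the letter optimizes come from.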
Energy and environment continue to be major issues for humankind. This holds true at the regional, the national, and the global level. And it is one of those problem areas where engineers and scientists, in conjunction with political will and people's awareness, can find new approaches and solutions to save natural resources and make their use more efficient.
The research project Ko-TAG [2], as part of the research initiative Ko-FAS [1], funded by the German Ministry of Economics and Technology (BMWi), deals with the development of a wireless cooperative sensor system that shall provide a benefit to current driver assistance systems (DAS) and traffic safety applications (TSA). The system's primary function is the localization of vulnerable road users (VRU), e.g. pedestrians and powered two-wheelers, using communication signals, but it can also serve as a pre-crash (surround) safety system among vehicles. The main difference of this project, compared to previous ones that dealt with this topic, e.g. the AMULETT project, is an underlying FPGA-based hardware-software co-design. The platform drives a real-time-capable communication protocol that enables highly scalable network topologies fulfilling the hard real-time requirements of the individual localization processes. Additionally, it allows the exchange of further data (e.g. sensor data) to support the accident prediction process and the channel arbitration, and thus supports true cooperative sensing. This paper gives an overview of the project's current system design as well as of the implementations of the key HDL entities supporting the software parts of the communication protocol. Furthermore, an approach for the dynamic reconfiguration of the devices is described, which provides several topology setups using a single PCB design.
The communication between objects, i.e. between cars (car-2-car, C2C), between cars and infrastructure (car-2-infrastructure, C2I) and between cars and vulnerable road users (car-2-VRU, C2VRU), is a major stepping stone towards traffic applications that enable efficient and safe traffic flow. However, these applications pose very high requirements on the communication protocols, which go beyond the capabilities of any available standardized solution.
This contribution shows how iterative design processes can help to fulfill these requirements while re-using a maximum of elements from one level to the next and thus avoiding unrealistic overhead. In particular, the added value of simulation and emulation in this iterative process is elaborated.
Legacy industrial communication protocols have proven robust and functional. Over the last decades, the industry has invented completely new or advanced versions of the legacy communication solutions. However, even with the high adoption rate of these new solutions, the majority of industrial applications still run on legacy, mostly fieldbus-related technologies. Profibus is one of those technologies that keeps growing in the market, albeit with slowing growth in recent years. A retrofit technology that would enable these technologies to connect to the Internet of Things and to utilize the ever-growing potential of data analysis, predictive maintenance or cloud-based applications, while at the same time not changing a running system, is therefore fundamental.
The paper describes the hardware and software architecture of the developed multi-MEMS-sensor prototype module, consisting of an ARM Cortex-M4 STM32F446 microcontroller unit, five 9-axis inertial measurement units MPU9255 (3D accelerometer, 3D gyroscope, 3D magnetometer and temperature sensor) and a BMP280 barometer. The module is also equipped with a WiFi wireless interface (Espressif ESP8266 chip). The module is constructed in the form of a truncated pyramid. The inertial sensors are mounted on a special base at different angles to each other to eliminate hardware sensor drifts and to provide the capability for self-calibration. The module fuses the information obtained from all types of inertial sensors (acceleration, rotation rate, magnetic field and air pressure) in order to calculate orientation and trajectory. It can be used as an Inertial Measurement Unit, Vertical Reference Unit or Attitude and Heading Reference System.
The latest generation of programmable logic devices offers, in addition to the configurable logic cells, one or more powerful microprocessors. This work shows how an existing two-chip system is migrated to a Xilinx Zynq 7000 with two ARM A9 cores. The system in question is the "GPS-supported gyro system ADMA" by the company GeneSys. The new solution improves the data exchange between the first microprocessor, used for digital signal processing, and the second processor, used for sequence control, by means of a shared memory. Numerous high-bit-rate interfaces are used for fast, real-time-capable data transfer.
A highly scalable IEEE802.11p communication and localization subsystem for autonomous urban driving
(2013)
The IEEE802.11p standard describes a protocol for car-to-X and mainly for car-to-car communication. It has found its place in hardware and firmware implementations and is currently being tested in various field trials. In the research project Ko-TAG, which is part of the research initiative Ko-FAS, cooperative sensor technology is developed for the support of highly autonomous driving. A secondary radar principle based on communication signals enables the localization of objects with simultaneous data transmission. It mainly concentrates on the detection of pedestrians and other vulnerable road users (VRU), but also supports pre-crash safety applications. Thus it is mainly targeted at the support of traffic safety applications in intra-urban scenarios. This contribution describes the Ko-TAG part of the overall initiative, which develops a subsystem to improve the real-time characteristics of IEEE802.11p needed for precise time-of-flight real-time localization. In doing so, it still fits into the regulatory schemes. It discusses the approach for the definition and verification of the protocol design, while maintaining close coexistence with existing IEEE802.11p subsystems. System simulations were performed and hardware was implemented. Test results are shown in the last part of the paper.
The IEEE802.11p standard describes a protocol for car-to-X and mainly for car-to-car communication. It has found its place in hardware and firmware implementations and is currently being tested in various field trials. In the research project Ko-TAG, which is part of the research initiative Ko-FAS, cooperative sensor technology is developed and its benefit for traffic safety applications is evaluated. A secondary radar principle based on communication signals enables the localization of objects with simultaneous data transmission. It mainly concentrates on the detection of pedestrians and other vulnerable road users (VRU), but also supports pre-crash safety applications. The Ko-TAG proposal enriches IEEE802.11p with the real-time characteristics needed for precise time-of-flight real-time localization. This contribution describes the development of a subsystem which extends the functionality of IEEE802.11p and fits into the regulatory schemes. It discusses the approach for the definition and verification of the protocol design, while maintaining close coexistence with existing IEEE802.11p subsystems. System simulations were performed and hardware was implemented. The next step will be field measurements to verify the simulation results.
Automated RF Emulator for a Highly Scalable IEEE802.11p Communication and Localization Subsystem
(2014)
The IEEE802.11p standard describes a protocol for car-to-X and mainly for car-to-car communication. In the research project Ko-TAG, which is part of the research initiative Ko-FAS, cooperative sensor technology is developed for the support of highly autonomous driving. The Ko-TAG subsystem improves the real-time characteristics of IEEE802.11p needed for precise time-of-flight real-time localization while still fitting into the regulatory schemes. A secondary radar principle based on communication signals enables the localization of objects with simultaneous data transmission. The Ko-TAG subsystem mainly concentrates on the support of traffic safety applications in intra-urban scenarios. This paper details the development of a fully automated RF emulator used to test the Ko-TAG subsystem.
The RF emulator includes the physical networking nodes but models the RF environment using RF waveguides. It allows controlling the path loss and connectivity between any of the nodes with the help of RF attenuators and programmable RF switches, while being shielded against the surrounding RF environment in the lab. It is therefore an inexpensive alternative to an RF absorber chamber, which often is not available or exceeds the project's budget.
Details about the system definition can be found in earlier papers. Test results are shown in the last part of the paper.
Automated tests for an IEEE802.11p-compatible communication and localization subsystem
(2014)
OPC UA (Open Platform Communications Unified Architecture) is already a well-known concept used widely in the automation industry. In the area of factory automation, OPC UA models the underlying field devices such as sensors and actuators in an OPC UA server, allowing connected OPC UA clients to access device-specific information via a standardized information model. One of the requirements for the OPC UA server to represent field device data using its information model is prior knowledge about the properties of the field devices in the form of device descriptions. The international standard IEC 61804 specifies EDDL (Electronic Device Description Language) as a generic language for describing the properties of field devices. In this paper, the authors describe a possibility to dynamically map and integrate field device descriptions based on EDDL into OPC UA.
A novel approach to a testbed for embedded networking nodes has been conceptualized and implemented. It is based on the use of virtual nodes in a PC environment, where each node executes the original embedded code. Different nodes run in parallel and are connected via so-called virtual interfaces. The presented approach is very efficient and allows a simple description of test cases without the need for a network simulator. Furthermore, it speeds up the process of developing new features.
A novel approach to a test environment for embedded networking nodes has been conceptualized and implemented. Its basis is the use of virtual nodes in a PC environment, where each node executes the original embedded code. Different nodes run in parallel, connected via so-called virtual channels. The environment allows modifying the behavior of the virtual channels as well as the overall topology during runtime in order to virtualize real-life networking scenarios. The presented approach is very efficient and allows a simple description of test cases without the need for a network simulator. Furthermore, it speeds up the process of developing new features and supports the identification of bugs in wireless communication stacks. In combination with powerful test execution systems, it is possible to create a continuous development and integration flow.
The Thread protocol is a recent development based on 6LoWPAN (IPv6 over IEEE 802.15.4), but with extensions towards a more media-independent approach, which additionally also promises true interoperability. To evaluate and analyse the operation of a Thread network, a given open-source 6LoWPAN stack for embedded devices (emb::6) has been extended in order to comply with the Thread specification. The implementation covers Mesh Link Establishment (MLE) and network layer functionality as well as a 6LoWPAN mesh-under routing mechanism based on MAC short addresses. The development has been verified on a virtualization platform and allows the dynamic establishment of network topologies based on Thread's partitioning algorithm.
The low cost and small size of MEMS inertial sensors allow their combination into a multi-sensor module in order to improve performance. However, the different linear accelerations measured at different places on a rotating rigid body have to be considered for the proper fusion of the measurements. The measurement errors of MEMS inertial sensors include deterministic imperfections, but also random noise. The gain in accuracy from using multiple sensors depends strongly on the correlation between the errors of the different sensors. Although for sensor fusion it is usually assumed that the measurement errors of different sensors are uncorrelated, estimation theory shows that for the combination of sensors of the same type a negative correlation is actually more beneficial. Therefore, we describe some important and often neglected considerations for the combination of several sensors and also present some preliminary results with regard to the correlation of measurements from a simple multi-sensor setup.
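The estimation-theoretic result alluded to here is the standard formula for the variance of the average of n equally noisy measurements with pairwise correlation rho, sigma^2 * (1 + (n-1)*rho) / n, which shows directly why negative correlation helps:

```python
def fused_variance(sigma2, n, rho):
    """Variance of the average of n measurements, each with variance
    sigma2 and pairwise error correlation rho (a textbook result;
    the paper's own measurement data is not reproduced here).

    rho = 0  -> the familiar sigma2 / n,
    rho > 0  -> less benefit than independent sensors,
    rho < 0  -> errors partially cancel, better than sigma2 / n."""
    return sigma2 * (1 + (n - 1) * rho) / n

# Four sensors, unit noise variance:
independent = fused_variance(1.0, 4, 0.0)    # 0.25
correlated = fused_variance(1.0, 4, 0.3)     # worse than 0.25
anticorrelated = fused_variance(1.0, 4, -0.2)  # better than 0.25
```

Note that rho is bounded below by -1/(n-1); at that limit the fused variance reaches zero, which is why even a modest negative correlation between same-type sensors is so valuable.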
The increasing number of transistors in modern microprocessors, clocked at high frequencies, leads to increasing power consumption, which calls for active dynamic thermal management. In a research project, a system environment has been developed which includes thermal modeling of the microprocessor in the board system, a software environment to control the characteristics of the system's timing behavior, and a modified Linux scheduler enhanced with a prediction controller. Measurement results are shown for a Freescale i.MX6Q quad-core microprocessor.
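Predictive thermal management of this kind is typically built on a first-order RC thermal model. The abstract does not give the project's actual model or controller, so the following is only an illustrative sketch with assumed parameters:

```python
import math

def predict_temperature(t_now, power, t_ambient, r_th, c_th, dt):
    """One step of a first-order RC thermal model.

    t_now: current die temperature [deg C], power: dissipated power [W],
    r_th: thermal resistance [K/W], c_th: thermal capacitance [J/K],
    dt: prediction horizon [s]. The temperature relaxes exponentially
    towards the steady state t_ambient + r_th * power."""
    t_steady = t_ambient + r_th * power
    return t_steady + (t_now - t_steady) * math.exp(-dt / (r_th * c_th))

def throttle_needed(t_now, power, t_limit, horizon, **model):
    """A prediction controller throttles *before* the limit is hit:
    act if the model predicts exceeding t_limit within the horizon."""
    return predict_temperature(t_now, power, dt=horizon, **model) > t_limit
```

The point of the prediction step is that the scheduler can migrate or slow tasks proactively, instead of reacting only after a temperature sensor has already crossed the limit.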
eTPL: An Enhanced Version of the TLS Presentation Language Suitable for Automated Parser Generation
(2017)
The specification of the Transport Layer Security (TLS) protocol defines its own presentation language used for the purpose of semi-formally describing the structure and on-the-wire format of TLS protocol messages. This TLS Presentation Language (TPL) is more expressive and concise than natural language or tabular descriptions, but as a result of its limited objective has a number of deficiencies. We present eTPL, an enhanced version of TPL that improves its expressiveness, flexibility, and applicability to non-TLS scenarios. We first define a generic model that describes the parsing of binary data. Based on this, we propose language constructs for TPL that capture important information which would otherwise have to be picked manually from informal protocol descriptions. Finally, we briefly introduce our software tool etpl-tool which reads eTPL definitions and automatically generates corresponding message parsers in C++. We see our work as a contribution supporting sniffing, debugging, and rapid-prototyping of wired and wireless communication systems.
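The C++ parsers that etpl-tool generates are not reproduced in the abstract; as a hedged Python sketch of what a generated parser for a simple TPL-style definition does (the record layout and all names here are illustrative, not taken from the eTPL paper):

```python
import struct

# A TPL-style definition such as
#   struct { uint8 type; uint16 length; opaque payload[length]; } Record;
# describes a length-prefixed binary record. A generated parser for it
# is roughly equivalent to this hand-written version:
def parse_record(data):
    # "!BH" = network byte order, one uint8 followed by one uint16,
    # matching TLS's big-endian on-the-wire convention.
    rec_type, length = struct.unpack_from("!BH", data, 0)
    payload = data[3:3 + length]
    if len(payload) != length:
        raise ValueError("truncated record")
    return {"type": rec_type, "length": length, "payload": payload}

msg = bytes([0x16, 0x00, 0x03]) + b"abc"
rec = parse_record(msg)
```

The value of a machine-readable language like eTPL is precisely that such boilerplate, including the bounds checks, is derived from the definition instead of being written and reviewed by hand.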
Efficient, low-cost, secure and reliable communication solutions are a major stepping stone for smart metering and smart grid applications. This especially holds true for the so-called primary communication or local metrological network (LMN) between a local meter or actuator and a data collector or gateway, where the highest requirements with regard to cost, bandwidth, and energy efficiency have to be taken into consideration. Multiple developments and field tests are going on in this field; however, energy-autarkic devices are hardly found yet.
Efficient, secure and reliable communication is a major precondition for powerful applications in smart metering and smart grid. This especially holds true for the so-called primary communication in the local metrological network (LMN) between meter and data collector, as the LMN comes with the most stringent requirements with regard to cost, range, bandwidth and energy efficiency. To date, LMN field tests are being operated all over the world; in these installations, however, energy-autarkic systems play a marginal role. This contribution describes the results of the Framework Programme 7 (FP7) WiMBex project ("Remote wireless water meter reading solution based on the EN 13757 standard, providing high autonomy, interoperability and range"). In this project, an energy-autarkic water meter was developed and tested which follows the specification of the Wireless M-Bus protocol (EN 13757). The complete system development covers the PCB with the RF transceiver and the microcontroller, the energy converter and storage, and the software with the protocol. This contribution concentrates in particular on the design, development and verification of the routing protocol. The routing protocol is based on the Q mode of EN 13757-5 (Wireless M-Bus) and was extended by an additional energy-state-related parameter. This extension is orthogonal to the existing protocol and considers both the charge level and the charging characteristics (rate of occurrence, intensity). The software was implemented in nesC under the operating system TinyOS. The system was verified in an automated testbed and in field tests in the UK and Ireland.
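The actual WiMBex extension to the EN 13757-5 Q mode is not public as code; as a heavily hedged sketch of the general idea of folding an energy-state parameter into relay selection (formula, weights and all names below are assumptions, not the project's metric):

```python
def route_cost(hop_count, charge_level, charge_rate,
               w_hops=1.0, w_energy=2.0):
    """Illustrative cost metric for an energy-aware relay choice.

    charge_level in [0, 1]: fraction of storage currently charged,
    charge_rate >= 0: rate of harvested energy arriving.
    A nearly empty node that is also not harvesting becomes expensive,
    so traffic is steered away from it."""
    energy_penalty = (1.0 - charge_level) / (1.0 + charge_rate)
    return w_hops * hop_count + w_energy * energy_penalty

def pick_relay(candidates):
    """candidates: list of (node_id, hop_count, charge_level, charge_rate).
    Returns the id of the cheapest relay."""
    return min(candidates, key=lambda c: route_cost(c[1], c[2], c[3]))[0]

# Two relays at equal hop distance: the well-charged one wins.
best = pick_relay([("a", 2, 0.9, 0.5), ("b", 2, 0.1, 0.0)])
```

The orthogonality mentioned in the abstract means such a metric can bias route choice without changing the underlying frame formats or the Q-mode state machine.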
Efficient, low-cost, secure and reliable communication solutions are a major stepping stone for smart metering and smart grid applications. This especially holds true for the so-called primary communication or local metrological network (LMN) between a local meter or actuator and a data collector or gateway, where the highest requirements with regard to cost, bandwidth, and energy efficiency have to be taken into consideration. Multiple developments and field tests are going on in this field; however, energy-autarkic devices are hardly found yet. This contribution describes the development of an automatic water meter reading (AWMR) technology based on Wireless M-Bus to provide water utility companies with an automatic remote water meter reading solution. It addresses the special needs of home utilities by providing a remote metering solution independent of the electricity infrastructure, both in terms of data communication and in terms of power supply. For this project, a cost-efficient integrated energy harvesting system powered by the available water flow was developed to enable operation independent of the mains grid and eliminate the need for battery replacement, for near-zero maintenance costs.
Due to a controversial enrollment policy in most engineering programs at German Universities of Applied Sciences (UAS), many freshmen show very low school grades in key subjects like Math and Physics. Nevertheless, they expect to be entertained in the lectures and get demotivated easily. Despite initial reservations, a cheer-and-challenge approach was developed for teaching Mechanics to freshmen with very diverse school grades. When tested, it showed astonishing results.
Both the German and the French Air-Source Heat Pump (ASHP) markets have been enjoying an overall upward trend for many years but nevertheless remain only slightly penetrated. In terms of market players and their shares, the French market is fairly diversified, whereas the German one, utterly dominated by a single manufacturer, is badly in need of diversification. At the same time, Korean ASHP manufacturers are targeting the French but not the German ASHP market. The main purpose of the paper is to identify likely reasons for this one-sided engagement, primarily those associated with ASHP technology and its system-related aspects.
This work primarily deals with the development of concepts that enable a better-structured and more efficient handling of the large amounts of data and information arising both in long-term planning, which defines an IT strategy, and in the short-term management of day-to-day operations. The fundamentals chapter first clarifies all relevant terms in this field. The state-of-the-art chapter presents the standards most widely certified in industry for addressing this problem. In the subsequent conception, all necessary elements are designed. First, a selection is made of the ITIL processes to be implemented for successful IT controlling. Then a proposal for an ITIL-based service catalog is developed, intended to make the growing number of IT services at universities manageable. This is followed by the design of new cost centers and cost types for the data center of Offenburg University, which is necessary because the current cost centers and cost types no longer meet the data center's requirements. This work also designs a simple folder-based filing system for managing partner, license and contract documents. The final part of this work is the design and implementation of an information system through which all relevant data is made available in a graphically appealing form. The implementation of this information system is carried out using the two open-source tools Talend and Palo.
In the recent two years the authors have developed a lightweight and low-power flight control system for model helicopters consisting of an attitude and heading reference system (AHRS); a navigator (INS) augmented with GPS, a barometric altitude sensor and a magnetic sensor; a flight control computer (FCC); and bidirectional ground data links. The system has been tested on a commercial stunt flight model helicopter. The AHRS consists of three MEMS gyros, two 2-axis MEMS accelerometers and a microcontroller performing the required sensor compensation and data processing to generate attitude angles and true rate and acceleration data of the flying platform. The heading angle is augmented with a 2-axis magnetic sensor. The AHRS is stunt flight capable. The INS integrates the acceleration data to obtain velocity and position data. All data are calculated in both the helicopter and the local earth frame at a 50 Hz rate. The algorithm is augmented with GPS data for the lateral movement and with a barometric altitude sensor for the vertical movement. The barometric data are compensated for air pressure changes caused by the helicopter main rotor. The FCC contains a set of control loops in order to stabilize the helicopter in all axes and to perform commanded velocity and position tasks. The sampling rate for the control loops is again 50 Hz, allowing flight control with high bandwidth. Various safety features are implemented in the software. The bidirectional data link is based on a 2.4 GHz Bluetooth Class I RF link with a 115 kbaud data rate. A dipole antenna is used on the helicopter; an automatically tracking patch antenna is used on the ground. For commanded velocity flight a standard 35 MHz RF link is used. For data sampling, monitoring and mode control a laptop is used on the ground. Several operating modes are implemented, ranging from commanded velocity flight to simple automatic stunt flight along predefined flight tracks.
The model helicopter is an ALIGN TREX 600 with 3 kg flight mass and a brushless electric motor. The rotor diameter is 1.40 m. The helicopter is able to carry a payload whose mass depends on the size of the installed LiPo cells and the purpose of the flight mission. The system has been tested in quite a few flight tests and missions. The helicopter is controlled safely up to wind loads of at least 5 to 6 Beaufort. Data and video captures will be presented. If permission is granted, a demonstration flight will be performed on the premises of the conference.
This work deals with the various technologies that can serve for communication in distributed systems. The core of the work is to create an interface in which an extremely high number of individual applications can communicate with each other and with a central simulation environment, largely automatically. An essential point is the question of how data can be exchanged. Looking at the system itself, it becomes clear that the stability, performance and reliability of the interface are of enormous importance, especially for the development of the overall system.
This thesis revises the visualization of the master data of MARKANT Handels und Service GmbH (MARKANT). It begins by introducing the company MARKANT, the current visualization solution and the problems resulting from it. First, a generic diagram is created that renders hierarchically ordered data. Building on this, the organizational chart of MARKANT, with all its requirements, is to be implemented as an example. To achieve this goal, a market overview is compiled and the available frameworks are compared. After a shortlist has been drawn up, the frameworks are evaluated in more detail with regard to the visualization. Once a framework has been chosen, a concept is developed that addresses a new technology stack as well as the usability of the diagram for the end user. Based on this concept, the various functions of the diagram are implemented. A new stack is set up and adapted to MARKANT's system. Finally, a comparison between the old and the new visualization shows what has been achieved in this thesis.
The design kit FHO_MTC_CMOS_035_v1.0 is being created at the Fachhochschule Offenburg. With the help of this kit, designs can be created in the AMI 0.35 micrometer technology. All work performed is verified by the design of a lottery-number-generator chip, which is fabricated. This establishes all the essential steps required to prepare a design kit for arbitrary technologies for the Mentor tools. The design kit is released to all MPC members who have signed an NDA for AMI with Europractice.
In online analytical processing (OLAP), filtering elements of a given dimensional attribute according to the value of a measure attribute is an essential operation, for example in top-k evaluation. Such filters can involve extremely large amounts of data to be processed, in particular when the filter condition includes “quantification” such as ANY or ALL, where large slices of an OLAP cube have to be computed and inspected. Due to the sparsity of OLAP cubes, the slices serving as input to the filter are usually sparse as well, presenting a challenge for GPU approaches which need to work with a limited amount of memory for holding intermediate results. Our CUDA solution involves a hashing scheme specifically designed for frequent and parallel updates, including several optimizations exploiting architectural features of Nvidia’s Fermi and Kepler GPUs.
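The core idea, building a hash table of per-element aggregates over a sparse slice and then applying a quantified filter, can be sketched in a simplified, single-threaded form. This is Python rather than the paper's CUDA, and the data and threshold are invented for illustration.

```python
from collections import defaultdict

# Simplified single-threaded sketch of the idea behind the paper's CUDA
# hashing scheme: aggregate a sparse OLAP slice into a hash table keyed by
# the dimension element, then filter elements with an ALL-quantified
# predicate. On the GPU the hash-table build is the parallel-update step.

# Sparse cells: (product, month, revenue) -- only non-empty cells are stored.
cells = [
    ("A", 1, 120), ("A", 2, 90),
    ("B", 1, 40),
    ("C", 2, 200), ("C", 3, 180),
]

def all_filter(cells, threshold):
    """Keep products whose revenue exceeds `threshold` in ALL recorded months."""
    per_product = defaultdict(list)
    for product, _, revenue in cells:       # hash-table build
        per_product[product].append(revenue)
    return sorted(p for p, revs in per_product.items()
                  if all(r > threshold for r in revs))

print(all_filter(cells, 50))   # → ['A', 'C']
```

An ANY-quantified filter differs only in replacing `all(...)` with `any(...)`; the sparsity handling is identical because only stored cells ever enter the hash table.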
Optical navigation systems have so far featured a strict separation between the tracking device (tool tracker) and the tracked devices (tracked tools). This work presents a new concept that removes this separation: every tracked tool is simultaneously a tool tracker and consists of marker LEDs as well as at least one camera, with whose help the position and orientation of other trackers can be followed. When only one camera is used, this is done by means of pose estimation; with two or more cameras the marker LEDs are triangulated. This work presents the new peer-to-peer tracking concept and a very fast pose estimation algorithm for an arbitrary number of markers, and clarifies whether the accuracy achievable with pose estimation is comparable to that of a stereo camera system and meets the requirements of surgical navigation.
This paper proposes the establishment of a software tool chain among requirements management tools, the black-box test tool CTE XL and RTRT. The use of the Classification Tree Method reduces the number of test cases and promises increased efficiency when testing. The traceability between test cases and requirements is guaranteed by the established tool chain with well-defined interfaces. As the experimental results show, better test coverage can be achieved. Future work can address the automatic generation of initial and expected values for testing, requiring no intervention by a software quality engineer. In conclusion, the tasks that remain for the software quality engineer are to define the black-box test cases using CTM/CTE XL, import the requirements from the requirements management tools, and import the XML file into the test tool RTRT. Given the initial and expected values, the testing can then be performed in a comfortable way.
Smoothie: a solution for device and content independent applications including 3D imaging as content
(2014)
Today's network landscape contains many different network technologies, a wide range of end devices with a large scale of capabilities and power, and an immense quantity of information represented in different data formats. Research on 3D imaging, virtual reality and holographic techniques will result in new user interfaces (UI) for mobile devices and will increase their diversity and variety. A lot of effort is being made to establish open, scalable and seamless integration of various technologies and content presentation for different devices, including mobile ones, considering the individual situation of the end user. Research is ongoing in different parts of the world, but the task is not yet complete. The goal of this work is to solve the problems stated above by investigating system architectures that provide unconstrained, continuous and personalized access to content and interactive applications everywhere, at any time and with different devices. As a solution to the problem considered, a new architecture named “Smoothie” is proposed.
The concept of m-learning, which differs from other forms of e-learning, covers a wide range of possibilities opened up by the convergence of new mobile technologies, wireless communication infrastructure and distance-learning development. This convergence has raised new goals for supporting m-learning, where the heterogeneity of devices, their operating systems (Linux, Windows, Symbian, Android etc.) and supported markup languages (WML, XHTML etc.), adaptive content, and the preferences or characteristics of users have become some of the major problems to be solved. To facilitate the learning process further and to establish literally anytime-anywhere learning, learning material should be available to users even when they are offline. Multiple devices used by the same user should also be synchronized among themselves and with a server to provide updated learning content and to give users the freedom to choose any device at their convenience. In this paper a software architecture is proposed to solve these problems; it has been implemented in a multidimensional flashcard learning system which synchronizes all the devices used by a learner.
Today's network landscape consists of quite different network technologies, a wide range of end devices with a large scale of capabilities and power, and an immense quantity of information and data represented in different formats. Research on 3D imaging, virtual reality and holographic techniques will result in new user interfaces (UI) for mobile devices and will increase their diversity and variety. A great deal of effort is being made to establish open, scalable and seamless integration of various technologies and content presentation for different devices, including mobile ones, considering the individual situation of the end user. This is very difficult because the various kinds of devices used by different users, or at different times or in parallel by the same user, are not predictable and have to be recognized by the system in order to identify their capabilities. Not only the devices but also content and user interfaces are major issues, because they can include different kinds of data formats such as text, image, audio, video, 3D virtual reality data and other upcoming formats. A very suitable example of the use of such a system is mobile learning, because of the large number of varying devices with significantly different features and functionalities. This holds not only for supporting different learners, e.g. all learners within one learning community, but also for supporting the same learner using different equipment in parallel and/or at different times. Such applications may be significantly enhanced by including virtual reality content presentation. Whatever the purpose, it is impossible to develop and adapt content individually for all kinds of devices, including mobile ones, due to the different capabilities of the devices, cost issues and authors' requirements. A solution should be found to enable the automation of the content adaptation process.
Nowadays, many applications, companies and parts of society are expected to be always available online. However, according to [Times, Oct, 31 2011], 73% of the world population do not use the internet and thus are not “online” at all. The most common reasons for not being “online” are expensive personal computer equipment and high costs for data connections, especially in developing countries that comprise most of the world's population (e.g. parts of Africa, Asia, Central and South America). However, it seems that these countries are leapfrogging the “PC and landline” age and moving directly to the “mobile” age. Decreasing prices for smartphones with internet connectivity and PC-like operating systems make it more affordable for these parts of the world population to join the “always-online” community. Storing learning content in a way accessible to everyone, including mobile and smartphones, therefore seems beneficial: learning content can then be accessed by personal computers as well as by mobile and smartphones and thus by a large range of devices and users. A new trend in internet technologies is to go to “the cloud”. This paper discusses the changes, challenges and risks of storing learning content in the “cloud”. The experiences were gathered while evaluating the changes necessary to make our solutions and systems “cloud-ready”.
Today's network landscape consists of many different network technologies, a wide range of end devices with a large scale of capabilities and power, and an immense quantity of information and data represented in different formats. Research on 3D imaging, virtual reality and holographic techniques will result in new user interfaces (UI) for mobile devices and will increase their diversity and variety. In this paper a software architecture is proposed to establish device- and content-format-independent communication, implemented in a Language Learning Game (LLG).
Today's network landscape contains many different network technologies, a wide range of end devices with a large scale of capabilities and power, and an immense quantity of information and data represented in different formats. Research on 3D imaging, virtual reality and holographic techniques will result in new user interfaces (UI) for mobile devices and will increase their diversity and variety. In this paper a software architecture is proposed to establish device- and content-format-independent communication, including 3D imaging and virtual reality data as content. As experimental validation, the concept is implemented in a collaborative Language Learning Game (LLG), a learning tool for language acquisition.
Improvements in the hardware and software of communication devices have made it possible to run Virtual Reality (VR) and Augmented Reality (AR) applications on them. Nowadays it is possible to overlay synthetic information on real images, or even to play 3D online games on smartphones and other mobile devices. Hence the use of 3D data for business and especially for education purposes is becoming ubiquitous. Because they are always at hand and always ready to use, mobile phones are considered the communication devices with the greatest potential. The total number of mobile phone users is increasing all over the world every day, which makes mobile phones the most suitable device for reaching a huge number of end clients for either education or business purposes. There are different standards, protocols and specifications for establishing communication among different devices, but no initiative has been taken so far to ensure that the data sent through this communication process can be understood and used by the destination device. Since not all devices can handle every kind of 3D data format, and it is not realistic to maintain different versions of the same data for each destination device, a general solution is necessary. The architecture proposed in this paper provides device- and purpose-independent 3D data visibility, anytime and anywhere, to the right person in a suitable format. No solution is without limitations; the architecture is therefore implemented in a prototype for experimental validation, which also shows the difference between theory and practice.
In the work at hand, we combine a Private Information Retrieval (PIR) protocol with Somewhat Homomorphic Encryption (SHE) and use Searchable Encryption (SE) with the objective of providing security and confidentiality features for a third-party cloud security audit. During the auditing process, a third-party auditor acts on behalf of a cloud service user to validate the security requirements fulfilled by a cloud service provider. Our concrete contribution consists of developing a PIR protocol which operates directly on a log database of encrypted data and allows retrieving a sum or a product of multiple encrypted elements. Subsequently, we apply our new form of PIR protocol to a cloud audit use case where searchable encryption is employed to add confidentiality guarantees for the privacy of the user. As an example, we consider and evaluate an audit of client accesses to a controlled resource provided by a cloud service provider.
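The homomorphic-sum retrieval described above relies on an additively homomorphic scheme. As a hedged illustration of that property only (the textbook Paillier cryptosystem, not the paper's SHE scheme, and with deliberately tiny, insecure parameters), the following shows how a product of ciphertexts decrypts to the sum of the plaintexts:

```python
import math, random

# Toy textbook Paillier cryptosystem, used here only to illustrate the
# additively homomorphic property the audit protocol relies on:
# Enc(a) * Enc(b) mod n^2 decrypts to a + b.
# Tiny primes for demonstration -- NOT secure, and not the paper's code.

p, q = 17, 19
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = math.lcm(p - 1, q - 1)

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)   # modular inverse of L(g^lambda) mod n

def enc(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:        # r must be a unit mod n
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return (L(pow(c, lam, n2)) * mu) % n

a, b = 42, 17
c = (enc(a) * enc(b)) % n2            # homomorphic addition on ciphertexts
print(dec(c))   # → 59
```

In the audit setting, the server would multiply the encrypted log entries selected by the PIR query and return a single ciphertext, so the auditor learns only the requested aggregate.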
Transcatheter aortic valve implantation is a new, safe treatment strategy for patients with symptomatic severe aortic stenosis and high operative risk. The aim of the study was to compare the pre- and post-transcatheter aortic valve implantation procedures to determine the atrioventricular conduction time as a potential predictor of the need for permanent pacemaker therapy after transcatheter aortic valve implantation. The transcatheter aortic valve implantation patients were divided into groups without pacemaker and with dual- or single-chamber pacemaker with different atrioventricular conduction time disturbances before and after transcatheter aortic valve implantation. In heart failure patients without permanent pacemaker therapy after transcatheter aortic valve implantation, atrioventricular conduction time was prolonged after the procedure. In patients with permanent dual-chamber pacemaker therapy after transcatheter aortic valve implantation, atrioventricular conduction time was normalised with dual-chamber atrioventricular pacing mode. Atrioventricular conduction time may be a useful parameter to evaluate the risk of post-procedural atrioventricular conduction block and permanent pacemaker therapy in transcatheter aortic valve implantation patients.
In contrast to conventional aortic valve replacement, Transcatheter Aortic Valve Implantation (TAVI) is a new, highly specialized alternative to surgical valve replacement for patients with symptomatic severe aortic stenosis and high operative risk. The procedure is performed in a minimally invasive way and was introduced at the University Heart Centre Freiburg – Bad Krozingen in 2008. The results have improved steadily over the years. The aim of the investigation is the analysis of electrocardiogram conduction times and of the electrocardiography changes recorded hours and days after the procedure, depending on the artificial heart valve model, which may lead to pacemaker implantation, as well as the analysis of the effectiveness of the treatment.
Transcatheter aortic valve implantation is a therapy for patients with reduced left ventricular ejection fraction and symptomatic aortic stenosis. The aim of the study was to compare the pre- and post-transcatheter aortic valve implantation procedures to determine the QRS and QT ventricular conduction times as potential predictors of the need for permanent pacemaker therapy after transcatheter aortic valve implantation. QRS and QT ventricular conduction times were prolonged after transcatheter aortic valve implantation in heart failure patients with permanent dual-chamber pacemaker therapy. QRS and QT ventricular conduction times may be useful parameters to evaluate the risk of post-procedural ventricular conduction block and permanent pacemaker therapy in transcatheter aortic valve implantation.
Bluetooth Low Energy extends the Bluetooth standard, in version 4.0, for ultra-low-energy applications through the extensive use of low-power sleep periods, which is inherently difficult in frequency-hopping technologies. This paper gives an introduction to the specifics of the Bluetooth Low Energy protocol, shows a sample implementation in which an embedded device is controlled by an Android smartphone, and presents the results of timing and current-consumption measurements.
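As a rough illustration of the current-consumption trade-off the paper measures, the average current of a duty-cycled BLE peripheral can be estimated from its sleep current, active current and connection interval. The numbers below are hypothetical examples, not the paper's measurements.

```python
# Back-of-envelope duty-cycle model for the kind of current measurement the
# paper reports: average current of a BLE peripheral that wakes briefly for
# each connection event. All numbers below are hypothetical, not measured.

def avg_current_ua(i_sleep_ua, i_active_ma, t_active_ms, interval_ms):
    """Average current in microamps over one connection interval."""
    active = i_active_ma * 1000 * t_active_ms       # uA*ms while radio is on
    sleep = i_sleep_ua * (interval_ms - t_active_ms)
    return (active + sleep) / interval_ms

# e.g. 1 uA sleep, 10 mA active for 2 ms, 1000 ms connection interval:
print(round(avg_current_ua(1.0, 10.0, 2.0, 1000.0), 2))   # → 21.0
```

The model makes the protocol's central point visible: stretching the connection interval drives the average current toward the sleep current, which is exactly what the sleep-period mechanism enables.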
Due to its numerous application fields and benefits, virtualization has become an interesting and attractive topic in computer and mobile systems, as it promises advantages for security and cost efficiency. However, it may introduce additional performance overhead. Recently, CPU virtualization has become more popular on embedded platforms, where performance overhead is especially critical. In this article, we present measurements of the performance overhead of the two hypervisors Xen and Jailhouse on ARM processors under the heavy-load “Cpuburn-a8” application and compare them to a native Linux system running on ARM processors.
Data science is regarded as one of the most important developments of recent years, and many companies see in data science an opportunity to derive additional value from their data. This may involve the optimization of maintenance processes, better control of their own pricing and inventory strategy, or entirely new services and products made possible by data science. The data available in the company, to which such high expectations have been attached, is to be used to make services and processes more efficient and better tailored. Data science is often regarded as a panacea: data that has been collected over years and accrues with increasing speed and heterogeneity is finally to be made usable. Although some of the techniques and algorithms employed are ten years old or more, only now, in combination with big data, do they unfold their potential in the corporate environment. Expectations are high, but the path to these new insights involves considerable effort and is still underestimated by some companies.
For companies with a traditional BI approach, data science represents a complementary set of methods and tools that can further improve the supply of information to decision-makers at the various hierarchical levels. For example, data science may reveal that the probability of closing an insurance contract increases when additional data, already available but not yet considered, is taken into account when selecting the customers to approach. In extreme cases, decisions previously made by employees are fully automated: an algorithm then determines when goods are reordered or what price is set for the end customer.
This e-book provides an overview of the field of data science, with particular attention to the interplay and coexistence of data science and existing BI systems.
Since the first projects of the 1990s, universities have been working to establish suitable service structures for e-learning that provide the required technical, didactic and organizational support university-wide. While the initial concern was to secure such services permanently at all, today the question of “how” is in the foreground. The field of e-learning reveals a much more general problem: the hitherto predominant organization of universities by functional units is reaching its limits. We propose a more process-oriented perspective, analogous to developments in the organization of companies.
This paper describes the use of the single-linkage hierarchical clustering method for outlier detection on manufactured metal work pieces. The main goal of the study is to group defects that occur within 5 mm of the edge, i.e., the border, of the metal work piece, and to remove defects outside this area of interest as outliers. According to the assumptions made for the performance criteria, the single-linkage method achieved better results than the other agglomerative methods.
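The clustering step described above can be sketched as follows: a minimal pure-Python illustration of single-linkage clustering cut at a distance threshold, with small clusters flagged as outliers. The coordinates, threshold and minimum cluster size are invented for illustration and are not the paper's data or criteria.

```python
# Single-linkage clustering cut at distance eps: any two points connected by
# a chain of pairwise distances <= eps end up in the same cluster. Points in
# clusters smaller than min_size are flagged as outliers.

def single_linkage_outliers(points, eps, min_size=2):
    parent = list(range(len(points)))

    def find(i):                      # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    # Cutting the single-linkage dendrogram at height eps is equivalent to
    # merging every pair of points closer than eps.
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            d = sum((a - b) ** 2 for a, b in zip(points[i], points[j])) ** 0.5
            if d <= eps:
                parent[find(i)] = find(j)

    sizes = {}
    for i in range(len(points)):
        sizes[find(i)] = sizes.get(find(i), 0) + 1
    return [i for i in range(len(points)) if sizes[find(i)] < min_size]

pts = [(0, 0), (0.5, 0), (1.0, 0.2), (10, 10)]   # last point is isolated
print(single_linkage_outliers(pts, eps=1.0))      # → [3]
```

The chaining behaviour visible here (the first three points merge pairwise even though the outermost pair is more than eps apart) is exactly the property that distinguishes single linkage from other agglomerative criteria.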
In public transportation, the motor pool often consists of many different vehicles bought over many years; sometimes they even differ within one batch bought at the same time. This poses a considerable challenge for the storage and allocation of spare parts, especially in the event of damage to a vehicle. Correctly assigning these parts before the vehicle reaches the workshop could significantly reduce both the downtime and, therefore, the actual costs for companies. To achieve this, the current software uses a simple probability calculation. To improve the performance, the data of specific companies was analysed, preprocessed and used with several modelling techniques to classify and thereby predict the spare parts to be used in the event of a faulty vehicle. We summarize our experience going through the steps of the Cross Industry Standard Process for Data Mining and compare the performance to the previously used probability calculation. Gradient boosting trees turned out to be the best modelling technique for this special case.
In the bwLehrpool project, a distributed system for the flexible use of computer pools through desktop virtualization was developed. On the basis of a centrally booted Linux base system, arbitrary virtualizable operating systems can be provided centrally for teaching and examination purposes and selected locally on the machines. The various working environments no longer have to be installed on the PCs, allowing a multifunctional use of PCs and rooms for a wide variety of teaching and learning scenarios as well as for electronic examinations. bwLehrpool abstracts from the local PC hardware and enables lecturers to design and manage their own software environments as a self-service. In addition, bwLehrpool promotes the exchange of course environments across universities.
Agile business intelligence as an example of a domain-specifically adapted process model
(2016)
Business intelligence systems play an important role for companies through their support of decision-making. An increasingly dynamic corporate environment therefore brings with it the requirement of agile development of these systems, so that agile methods and process models are being used with growing success in the BI domain. The further development and adaptation of BI systems is special in that it usually concerns systems and structures that have grown over many years and are subject to strict regulatory conditions, which poses a challenge for agile approaches. Whereas the values and principles of the agile manifesto [AM01] and the methods derived from them were initially mostly transferred one-to-one to the BI field, an understanding of BI agility as a holistic property of BI has since become established in the German-speaking world, and agile methods have been adapted to the particularities of the BI domain. This contribution explains BI agility and agile BI, introduces a framework for measures to increase BI agility, and discusses challenges of agile BI.
Spectral analysis of signal averaging electrocardiography in atrial and ventricular tachyarrhythmias
(2017)
Background: Targeting complex fractionated atrial electrograms detected by automated algorithms during ablation of persistent atrial fibrillation has produced conflicting outcomes in previous electrophysiological studies. The aim of the investigation was to evaluate atrial and ventricular high-frequency fractionated electrical signals with a signal averaging technique.
Methods: Signal-averaged electrocardiography (ECG) is a high-resolution ECG technique that eliminates interference noise in the recorded ECG. The algorithm uses an automatic ECG trigger function for signal-averaged transthoracic, transesophageal and intracardiac ECG signals with novel LabVIEW software (National Instruments, Austin, Texas, USA). For spectral analysis we used the fast Fourier transform in combination with spectro-temporal mapping and the wavelet transform to obtain detailed information about the frequency and intensity of high-frequency atrial and ventricular signals.
Results: Spectro-temporal mapping and wavelet transformation of the signal-averaged ECG allowed the evaluation of high-frequency fractionated atrial signals in patients with atrial fibrillation and of high-frequency ventricular signals in patients with ventricular tachycardia. The analysis in the time domain evaluated fractionated atrial signals at the end of the signal-averaged P wave and fractionated ventricular signals at the end of the QRS complex. The analysis in the frequency domain evaluated high-frequency fractionated atrial signals during the P wave and high-frequency fractionated ventricular signals during the QRS complex. The combination of time- and frequency-domain analysis allowed the evaluation of fractionated signals during atrial and ventricular conduction.
Conclusions: Spectral analysis of signal-averaged electrocardiography with novel LabVIEW software can be utilized to evaluate atrial and ventricular conduction delays in patients with atrial fibrillation and ventricular tachycardia. Complex fractionated atrial electrograms may be useful parameters to evaluate electrical arrhythmogenic cardiac signals in atrial fibrillation ablation.
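The two processing stages of the Methods section, signal averaging followed by spectral analysis, can be illustrated with a small synthetic example. This is an illustrative sketch, not the LabVIEW implementation; the signal frequency, noise level and beat count are invented.

```python
import cmath, math, random

# Sketch of the two steps the abstract describes: (1) signal averaging over
# repeated beats to suppress uncorrelated noise, (2) a discrete Fourier
# transform to inspect the spectrum. Synthetic data, not the LabVIEW code.

N, FS = 64, 64.0                      # samples per beat, sampling rate (Hz)
F_SIG = 8.0                           # "fractionated" component at 8 Hz

random.seed(0)
beats = [[math.sin(2 * math.pi * F_SIG * k / FS) + random.gauss(0, 1.0)
          for k in range(N)] for _ in range(200)]

# 1) Signal averaging: uncorrelated noise averages toward zero while the
#    trigger-locked component remains.
avg = [sum(b[k] for b in beats) / len(beats) for k in range(N)]

# 2) DFT magnitude spectrum of the averaged beat (positive frequencies only).
spec = [abs(sum(avg[k] * cmath.exp(-2j * math.pi * f * k / N)
               for k in range(N))) for f in range(N // 2)]

print(spec.index(max(spec)))   # dominant bin: the 8 Hz component
```

Averaging M beats reduces the noise amplitude by a factor of roughly sqrt(M), which is why the weak high-frequency components become visible in the spectrum only after this step.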
Cardiac resynchronization therapy (CRT) is an established therapy for heart failure patients and improves quality of life in patients with sinus rhythm, reduced left ventricular ejection fraction (LVEF), left bundle branch block and wide QRS duration. Since approximately sixty percent of heart failure patients have a normal QRS duration, they do not benefit from or respond to CRT. Cardiac contractility modulation (CCM) delivers non-excitatory impulses during the absolute refractory period in order to enhance the strength of the left ventricular contraction. The aim of the investigation was to evaluate differences in cardiac index between optimized and non-optimized CRT and CCM devices versus standard values. Impedance cardiography, a noninvasive method, was used to measure the cardiac index (CI), a useful parameter that relates the blood volume the heart pumps per minute to the body surface area. CRT patients showed a 39.74 percent and CCM patients a 21.89 percent higher cardiac index with an optimized device.
Cardiac resynchronization therapy (CRT) with biventricular pacing is an established therapy for heart failure (HF) patients (P) with ventricular desynchronization and reduced left ventricular (LV) ejection fraction. The aim of this study was to evaluate electrical right atrial (RA), left atrial (LA), right ventricular (RV) and LV conduction delay with novel telemetric signal averaging electrocardiography (SAECG) in implantable cardioverter defibrillator (ICD) P to better select P for CRT and to improve hemodynamics in cardiac pacing.
Methods: ICD-P (n=8, age 70.8 ± 9.0 years; 2 females, 6 males) with VVI-ICD (n=4), DDD-ICD (n=3) and CRT-ICD (n=1) (Medtronic, Inc., Minneapolis, MN, USA) were analysed with telemetric ECG recording using a Medtronic programmer 2090, ECG cable 2090AB, a PCSU1000 oscilloscope with Pc-Lab2000 software (Velleman®) and novel National Instruments LabVIEW SAECG software.
Results: Electrical RA conduction delay (RACD) was measured between onset and offset of RA deflection in the RAECG. Interatrial conduction delay (IACD) was measured between onset of RA deflection and onset of far-field LA deflection in the RAECG. Interventricular conduction delay (IVCD) was measured between onset of RV deflection in the RVECG and onset of LV deflection in the LVECG. Telemetric SAECG recording was possible in all ICD-P with a mean of 11.7 ± 4.4 SAECG heart beats, 97.6 ± 33.7 ms QRS duration, 81.5 ± 44.6 ms RACD, 62.8 ± 28.4 ms RV conduction delay, 143.7 ± 71.4 ms right cardiac AV delay, 41.5 ms LA conduction delay, 101.6 ms LV conduction delay, 176.8 ms left cardiac AV delay, 53.6 ms IACD and 93 ms IVCD.
Conclusions: Determination of RA, LA, RV and LV conduction delay, IACD, IVCD, right and left cardiac AV delay by telemetric SAECG recording using the LabVIEW SAECG technique may provide useful parameters of atrial and ventricular desynchronization to improve P selection for CRT and hemodynamics in cardiac pacing.
This thesis deals with the development of a VHDL Ethernet protocol stack. Building on an existing protocol stack for 1 GBit/s, the goal of this work is to design a protocol stack that achieves a data rate of more than 10 GBit/s. This stack is to include the Ethernet, IPv4, ARP, ICMP and UDP protocols. Through a flexible port structure and the use of generics, the stack is intended to be easily configurable and thus usable for many applications.
First, the existing protocol stack was ported from the Xilinx Virtex-5 series to the 7 series and brought into operation. In the process, problems with the transceiver and the ARP protocol arose. After these were solved, a concept for the new protocol stack could be worked out. It now uses a streaming interface with flexible bus width to adapt the data transfer speed.
Since each of the protocols used appends or removes a header/trailer, a total of four low-level modules were developed for this task. These modules were verified with the help of a testbench and a test matrix. The protocol modules now assemble the header/trailer and attach or remove it by means of the low-level modules.
In the simulation of the protocol stack, the function of the protocols was demonstrated. Because of the large bus width of 64 bits at 10 GBit/s and 128 or 256 bits at 40 GBit/s, however, the timing analysis failed. The critical path was found in the CRC calculation. By computing CRCs in parallel, a provisional solution to this problem was achieved for the 10 GBit/s rate.
The analysis of resource consumption showed that the new protocol stack uses only few resources in an FPGA: for 10 GBit/s only 3000 LUTs and 2400 registers are required. It was also found that resource consumption is not proportional to bus width; doubling the bus width requires only 60 % more resources.
Development of a Miniaturized Power-Supply Module for the Autonomous Supply of Radio Modules
(2017)
This thesis deals with the development of a miniaturized power-supply module. The module is to harvest energy from three energy harvesters simultaneously and buffer it in a double-layer capacitor. This energy can then be used by a wireless sensor node to collect and transmit data.
Such a system promises a long lifetime, low maintenance effort, and high performance at a small volume, without requiring a wired power supply or large batteries.
After drafting a concept and selecting suitable components, an evaluation board was built first, on which all candidate components were measured and functionally tested. Based on these findings, the miniaturized power-supply module was developed. It offers the following features: energy is harvested and conditioned from three different energy harvesters simultaneously; an efficient converter module achieves a conditioning efficiency of over 85 %; the three converter modules together consume a power of only P = 3.459 µW; and the maximum power point of each individual harvester can be set separately. All other components were likewise chosen for low power consumption. The energy is stored in a 1.5 F double-layer capacitor; in addition, a lithium coin cell is used as a backup power supply.
This also enables safety-critical applications: if the energy harvesters cannot collect enough energy, the battery is switched in to supply the wireless sensor node. The miniaturized module measures 20 mm x 40 mm. Final measurements with a newly implemented wireless sensor node, developed in another bachelor thesis [1], demonstrated a transmission repetition interval of 1.1 s, which is a very good value and sufficient for most applications.
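As a back-of-envelope check of such a design (the voltage levels and per-transmission energy below are assumed for illustration, not taken from the thesis), the usable energy of the 1.5 F double-layer capacitor between two supply levels follows from E = ½C(V₁² − V₂²):

```python
def usable_energy(c_farad: float, v_full: float, v_min: float) -> float:
    # Energy extractable while the capacitor discharges from v_full to v_min:
    # E = 1/2 * C * (V_full^2 - V_min^2)
    return 0.5 * c_farad * (v_full ** 2 - v_min ** 2)

# 1.5 F supercap, assumed discharge from 3.0 V down to 2.0 V (example values):
energy_j = usable_energy(1.5, 3.0, 2.0)  # 3.75 J
# At an assumed 1 mJ per radio transmission, this buffer alone would cover
# thousands of transmissions before the backup coin cell is needed.
```

This is why a modest supercapacitor, kept topped up by microwatt-level harvesters, can sustain a second-scale transmission interval indefinitely.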
Finally, a demonstrator was assembled from all components, including the wireless sensor node. It measures 5 cm x 5 cm x 5 cm and can be used for further research or as display material.
A Survey of Channel Measurements and Models for Current and Future Railway Communication Systems
(2016)
There is an increasing demand by an ever-growing number of mobile customers for transfer of rich media content. This requires very high bandwidth which either cannot be provided by the current cellular systems or puts pressure on the wireless networks, affecting customer service quality. This study introduces COARSE – a novel cluster-based quality-oriented adaptive radio resource allocation scheme, which dynamically and adaptively manages the radio resources in a cluster-based two-hop multi-cellular network, having a frequency reuse of one. COARSE is a cross-layer approach across physical layer, link layer and the application layer. COARSE gathers data delivery-related information from both physical and link layers and uses it to adjust bandwidth resources among the video streaming end-users. Extensive analysis and simulations show that COARSE enables a controlled trade-off between the physical layer data rate per user and the number of users communicating using a given resource. Significantly, COARSE provides 25–75% improvement in the computed user-perceived video quality compared with that obtained from an equivalent single-hop network.
RFID Frontend ISO 15693
(2008)
Formal Description of Inductive Air Interfaces Using Thévenin's Theorem and Numerical Analysis
(2014)
With the development of new integrated circuits for radio-frequency identification protocols, inductive air interfaces have become more and more important. Near-field communication not only allows data exchange but also wireless power transfer, enabling passive devices for logistical and medical applications. Consequently, power management on the transponder becomes increasingly relevant: a designer has to optimize power consumption as well as energy harvesting from the magnetic field. This paper discusses a model with simple equations to improve transponder antenna matching. Furthermore, a new numerical analysis technique is presented to calculate the coupling factors, inductances, and magnetic fields of multi-antenna systems.
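The Thévenin step for a two-coil link can be sketched as follows (the symbols here are chosen for illustration and need not match the paper's notation): with primary current I₁, coupling factor k, and a secondary coil L₂ with loss resistance R₂, the equivalent source and internal impedance seen by the transponder chip are

```latex
V_{\mathrm{th}} = j\omega M I_1, \qquad
M = k\sqrt{L_1 L_2}, \qquad
Z_{\mathrm{th}} = R_2 + j\omega L_2 .
```

Maximum power transfer to the chip input impedance $Z_c$ then requires conjugate matching, $Z_c = Z_{\mathrm{th}}^{*}$, which in practice is approximated by the resonance capacitor of the transponder antenna.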
Mice and rats make up 95% of all animals used in medical research and drug discovery and development. Monitoring of physiological functions such as ECG, blood pressure, and body temperature over the entire period of an experiment is often required. Restraining the animals to obtain these data can cause considerable stress. The use of telemetric systems solves this problem and provides more reliable results. However, these devices are mostly equipped with batteries, which limits the operating time, or they use passive power supplies, which limits the operating range. The semi-passive telemetric implant presented here is based on RFID technology and overcomes these obstacles. The device is inductively powered by the magnetic field of a common RFID reader underneath the cage, but is also able to operate autonomously for several hours. Being independent of the battery capacity, the implant can be used over a long period of time or re-used several times in different animals, thus avoiding the disadvantages of existing systems and reducing the costs of purchase and refurbishment.
Remote measurement of physiological parameters, so-called biotelemetry, is a key technology in modern veterinary medicine. Wireless implants have less impact on the behavior of animals than manual measurement methods and cause less disturbance than wired devices. However, common biotelemetry still uses proprietary communication and power concepts focused on small single-animal systems. The University of Applied Sciences Offenburg is therefore developing a low-cost RFID system called muTrans, which is able to measure ECG, pressure, temperature, oxygen saturation, and activity. muTrans combines a custom RFID sensor transponder with standardized commercial components into a scalable RFID system that can build up RFID sensor networks of nearly unlimited size.
The Institute of Applied Research Offenburg has been working in the field of autonomous data loggers for many years. In collaboration with industry, a new RFID-based active sensor data logger for continuous temperature recording has been developed and is now in mass production. Compared to existing systems, it integrates an unusually large data memory, which can be used flexibly via a simplified file system. The system will be used to accompany and monitor temperature-sensitive goods of high value. The transponder is the first member of a new class of logging devices, the smallest of which will be no larger than a 2-euro coin and will feature a fully integrated ASIC frontend.
As a continuation of the FHOP project, a microcontroller in ES2 0.7 µm technology was designed at Fachhochschule Offenburg in the course of a diploma thesis, based on the existing microprocessor core. The controller has a modular structure with the following components: FHOP microprocessor, bus controller, wait-state/chip-select unit, 16x16-bit multiplier, 2 KB ROM, 256 bytes of RAM, watchdog, PIO with 16 configurable ports, SIO, two timers, and an interrupt controller for eight interrupt sources.
With a complexity of about 65,400 transistors, the chip requires a silicon area of roughly 27 mm². It was submitted for fabrication in September 1996 and has since been tested successfully. The internal ROM of the microcontroller contains the BIOS and a test program. A complete development environment is available for writing software. All components will shortly be available in the FHOP design kit.
This article describes bwLehrpool, a project funded by the Baden-Württemberg Ministry of Science, which aims to sustainably reduce IT costs state-wide and to relieve computing-center staff through centralized services and cross-institutional cooperation. The project comprises the creation of a central infrastructure for PC pools, special-purpose labs, and e-examinations in order to achieve greater flexibility in IT support for teaching and research. The administrative effort for operation is to be reduced while, at the same time, teaching and research are decoupled from a specific computer workstation or room, allowing existing PC pools to be utilized considerably better. Software and hardware costs are also to be lowered because, unlike today, heterogeneous PC landscapes can be used as well. The service under construction provides a twofold abstraction. On the one hand, it creates a jointly usable base system that adapts to the respective local conditions such as user management, home and shared directories, or printing services. On the other hand, it provides the abstraction needed to use virtual machines of various types across universities. Expert knowledge at different levels is used optimally, and lecturers gain a new perspective, since they can easily define their teaching and research environments independently of the concrete hardware and machine administration while exchanging experience with other universities.
Long-Term ECG Scripts
(2016)
Concussions in sports and during recreational activities are a major source of traumatic brain injury in our society. This is mainly relevant in adolescence and young adulthood, where the annual rate of diagnosed concussions is increasing from year to year. Contact sports (e.g., ice hockey, American football, or boxing) are especially exposed to repeated concussions. While most of the athletes recover fully from the trauma, some experience a variety of symptoms including headache, fatigue, dizziness, anxiety, abnormal balance and postural instability, impaired memory, or other cognitive deficits. Moreover, there is growing evidence regarding clinical and neuropathological consequences of repetitive concussions, which are also linked to an increased risk for depression and Alzheimer’s disease or the development of chronic traumatic encephalopathy. With little contribution of conventional structural imaging (computed tomography (CT) or magnetic resonance imaging (MRI)) to the evaluation of concussion, nuclear imaging techniques (i.e., positron emission tomography (PET) and single-photon emission computed tomography (SPECT)) are in a favorable position to provide reliable tools for a better understanding of the pathophysiology and the clinical evaluation of athletes suffering a concussion.