With the increasing degree of interconnectivity in industrial factories, security becomes the most important stepping stone towards wide adoption of the Industrial Internet of Things (IIoT). This paper summarizes the most important aspects of a keynote given at the DESSERT 2020 conference. It highlights ongoing and open research activities on different levels, from novel cryptographic algorithms over security protocol integration and testing to security architectures covering the full lifetime of devices and systems. It also includes an overview of the research activities at the authors' institute.
eTPL: An Enhanced Version of the TLS Presentation Language Suitable for Automated Parser Generation
(2017)
The specification of the Transport Layer Security (TLS) protocol defines its own presentation language used for the purpose of semi-formally describing the structure and on-the-wire format of TLS protocol messages. This TLS Presentation Language (TPL) is more expressive and concise than natural language or tabular descriptions, but as a result of its limited objective has a number of deficiencies. We present eTPL, an enhanced version of TPL that improves its expressiveness, flexibility, and applicability to non-TLS scenarios. We first define a generic model that describes the parsing of binary data. Based on this, we propose language constructs for TPL that capture important information which would otherwise have to be picked manually from informal protocol descriptions. Finally, we briefly introduce our software tool etpl-tool which reads eTPL definitions and automatically generates corresponding message parsers in C++. We see our work as a contribution supporting sniffing, debugging, and rapid-prototyping of wired and wireless communication systems.
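To illustrate the kind of output such a parser generator targets, the following is a minimal hand-written parser for the fixed 5-byte TLS record header, sketched in Python rather than the C++ that etpl-tool emits; field names follow the TLS specification, and the function is an illustration, not generated code.

```python
import struct

def parse_tls_record_header(data: bytes) -> dict:
    """Parse the fixed 5-byte TLS record header: content type (1 byte),
    protocol version (2 bytes), payload length (2 bytes, big-endian)."""
    if len(data) < 5:
        raise ValueError("need at least 5 bytes for a TLS record header")
    content_type, major, minor, length = struct.unpack("!BBBH", data[:5])
    return {"content_type": content_type,
            "version": (major, minor),
            "length": length}

# A TLS 1.2 handshake record carrying a 70-byte payload:
header = parse_tls_record_header(bytes([22, 3, 3, 0, 70]))
```

A generated parser would additionally derive the field layout automatically from the eTPL description instead of hard-coding the format string.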
The low cost and small size of MEMS inertial sensors allow their combination into a multi-sensor module in order to improve performance. However, the different linear accelerations measured at different places on a rotating rigid body have to be considered for the proper fusion of the measurements. The measurement errors of MEMS inertial sensors include deterministic imperfections, but also random noise. The gain in accuracy from using multiple sensors depends strongly on the correlation between these errors across the different sensors. Although sensor fusion usually assumes that the measurement errors of different sensors are uncorrelated, estimation theory shows that for the combination of sensors of the same type a negative correlation is actually more beneficial. We therefore describe some important and often neglected considerations for the combination of several sensors and also present some preliminary results regarding the correlation of measurements from a simple multi-sensor setup.
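The effect of error correlation on fused accuracy can be sketched with the standard estimation-theory result for the variance of an average of n equally noisy sensors with pairwise correlation rho; the numbers below are illustrative, not measurements from the module.

```python
def variance_of_mean(sigma2: float, n: int, rho: float) -> float:
    """Variance of the average of n sensors with equal error variance
    sigma2 and pairwise error correlation rho. Valid covariance matrices
    require rho >= -1/(n-1)."""
    return sigma2 * (1.0 + (n - 1) * rho) / n

# Averaging five sensors of unit variance:
uncorrelated = variance_of_mean(1.0, 5, 0.0)    # classic 1/n reduction
negative     = variance_of_mean(1.0, 5, -0.2)   # negative correlation helps
positive     = variance_of_mean(1.0, 5, 0.5)    # positive correlation hurts
```

With rho = -0.2 the variance drops well below the uncorrelated 1/n value, which is the point the abstract makes about negatively correlated errors.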
The Thread protocol is a recent development based on 6LoWPAN (IPv6 over IEEE 802.15.4), but with extensions towards a more media-independent approach which, additionally, also promises true interoperability. To evaluate and analyse the operation of a Thread network, an existing open-source 6LoWPAN stack for embedded devices (emb::6) has been extended to comply with the Thread specification. The implementation covers Mesh Link Establishment (MLE) and network layer functionality, as well as the 6LoWPAN mesh-under routing mechanism based on MAC short addresses. The development has been verified on a virtualization platform and allows the dynamic establishment of network topologies based on Thread's partitioning algorithm.
A novel approach to a test environment for embedded networking nodes has been conceptualized and implemented. Its basis is the use of virtual nodes in a PC environment, where each node executes the original embedded code. Different nodes run in parallel, connected via so-called virtual channels. The environment allows modifying the behavior of the virtual channels as well as the overall topology during runtime in order to virtualize real-life networking scenarios. The presented approach is very efficient and allows a simple description of test cases without the need for a network simulator. Furthermore, it speeds up the development of new features and supports the identification of bugs in wireless communication stacks. In combination with powerful test execution systems, it is possible to create a continuous development and integration flow.
A novel approach to a testbed for embedded networking nodes has been conceptualized and implemented. It is based on the use of virtual nodes in a PC environment, where each node executes the original embedded code. Different nodes run in parallel and are connected via so-called virtual interfaces. The presented approach is very efficient and allows a simple description of test cases without the need for a network simulator. Furthermore, it speeds up the development of new features.
OPC UA (Open Platform Communications Unified Architecture) is already a well-known concept used widely in the automation industry. In the area of factory automation, OPC UA models the underlying field devices, such as sensors and actuators, in an OPC UA server, allowing connected OPC UA clients to access device-specific information via a standardized information model. One requirement for the OPC UA server to represent field device data in its information model is advance knowledge about the properties of the field devices in the form of device descriptions. The international standard IEC 61804 specifies EDDL (Electronic Device Description Language) as a generic language for describing the properties of field devices. In this paper, the authors describe an approach to dynamically map and integrate field device descriptions based on EDDL into OPC UA.
A highly scalable IEEE 802.11p communication and localization subsystem for autonomous urban driving
(2013)
The latest generation of programmable logic devices features, in addition to the configurable logic cells, one or more powerful microprocessors. This work shows how an existing two-chip system is migrated to a Xilinx Zynq 7000 with two ARM A9 cores. The system in question is the "GPS-supported gyro system ADMA" by the company GeneSys. The new solution improves the data exchange between the first microprocessor, used for digital signal processing, and the second processor, used for sequence control, by means of a shared memory. Numerous high-bitrate interfaces are used for fast, real-time-capable data transfer.
The authentication method for electronic devices based on the individual forms of the correlograms of their internal electric noise is well known. Specific physical differences in the components, for example caused by variations in production quality, cause specific electrical signals, i.e. electric noise, in the electronic device. It is possible to obtain this information and to identify the specific differences of the individual devices using an embedded analog-to-digital converter (ADC). These investigations confirm the possibility of identifying and authenticating electronic devices using bit templates calculated from the sequence of values of the normalized autocorrelation function of the noise. Experiments have been performed using personal computers. The probability of correct identification and authentication increases with increasing noise recording duration. As a result of these experiments, an accuracy of 98.1% was achieved for a 1-second-long registration of EM for the set of investigated computers.
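A minimal sketch of the described fingerprinting idea, assuming a sign quantization of the normalized autocorrelation function; the lag range, bit count, and matching metric are illustrative choices, not the parameters used in the experiments.

```python
import numpy as np

def noise_bit_template(noise: np.ndarray, n_bits: int = 64) -> np.ndarray:
    """Quantize the sign of the normalized autocorrelation function of a
    noise recording into a bit template."""
    x = noise - noise.mean()
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]  # lags 0..N-1
    acf = acf / acf[0]                                  # normalize: acf[0] == 1
    return (acf[1:n_bits + 1] > 0).astype(np.uint8)

def match_fraction(a: np.ndarray, b: np.ndarray) -> float:
    """Fraction of matching bits between two templates (1.0 = identical)."""
    return float((a == b).mean())
```

Authentication then amounts to comparing a freshly computed template against a stored reference and accepting above a match threshold.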
The paper describes the hardware and software architecture of the developed multi-MEMS-sensor prototype module, consisting of an ARM Cortex-M4 STM32F446 microcontroller unit, five 9-axis MPU9255 inertial measurement units (3D accelerometer, 3D gyroscope, 3D magnetometer, and temperature sensor), and a BMP280 barometer. The module is also equipped with a WiFi wireless interface (Espressif ESP8266 chip). The module is constructed in the form of a truncated pyramid. The inertial sensors are mounted on a special base at different angles to each other to eliminate hardware sensor drifts and to provide the capability for self-calibration. The module fuses the information obtained from all types of inertial sensors (acceleration, rotation rate, magnetic field, and air pressure) in order to calculate orientation and trajectory. It might be used as an Inertial Measurement Unit, Vertical Reference Unit, or Attitude and Heading Reference System.
Legacy industrial communication protocols have proven robust and functional. During the last decades, the industry has invented completely new or advanced versions of legacy communication solutions. However, even with the high adoption rate of these new solutions, the majority of industry applications still run on legacy, mostly fieldbus-related technologies. Profibus is one of those technologies that keeps growing in the market, albeit with slowing growth in recent years. A retrofit technology is therefore fundamental: one that enables these technologies to connect to the Internet of Things and to utilize the ever-growing potential of data analysis, predictive maintenance, or cloud-based applications, while at the same time not changing a running system.
The research project Ko-TAG [2], part of the research initiative Ko-FAS [1] funded by the German Ministry of Economics and Technology (BMWi), deals with the development of a wireless cooperative sensor system that shall provide a benefit to current driver assistance systems (DAS) and traffic safety applications (TSA). The system's primary function is the localization of vulnerable road users (VRU), e.g. pedestrians and powered two-wheelers, using communication signals, but it can also serve as a pre-crash (surround) safety system among vehicles. The main difference of this project, compared to previous ones that dealt with this topic, e.g. the AMULETT project, is an underlying FPGA-based hardware-software co-design. The platform drives a real-time-capable communication protocol that enables highly scalable network topologies fulfilling the hard real-time requirements of the individual localization processes. Additionally, it allows the exchange of further data (e.g. sensor data) to support the accident prediction process and the channel arbitration, and thus supports true cooperative sensing. This paper gives an overview of the project's current system design as well as of the implementations of the key HDL entities supporting the software parts of the communication protocol. Furthermore, an approach for the dynamic reconfiguration of the devices is described, which provides several topology setups using a single PCB design.
Energy and environment continue to be major issues for humankind. This holds true on the regional, the national, and the global level. And it is one of those problem areas where engineers and scientists, in conjunction with political will and people's awareness, can find new approaches and solutions to save natural resources and make their use more efficient.
The Institute of Reliable Embedded Systems and Communication Electronics at Offenburg University of Applied Sciences, Germany, has developed an automated testing environment, the Automated Physical Testbed (APTB), for analyzing the performance of wireless systems and their supporting protocols. Physical wireless networking nodes connect to the APTB, and their antenna outputs attach to RF waveguides. To model the RF environment, these waveguides establish wired connections among RF elements such as splitters, attenuators, and switches. In such a setup it is well possible to vary the path characteristics by altering the attenuators and switches. The major advantage of the APTB is that it provides an isolated, well-controlled, repeatable test environment under various conditions to run statistical analyses and even regression tests. This paper provides an overview of the design and implementation of the APTB and demonstrates its ability to automate test cases efficiently.
Climate change and the resultant scarcity of water are becoming major challenges for countries around the world. With the advent of Wireless Sensor Networks (WSNs) in the last decade and the relatively new concept of the Internet of Things (IoT), embedded systems developers are now working on designing control and automation systems that are lower in cost and more sustainable than the existing telemetry systems for monitoring. The Indus river basin in Pakistan has one of the world's largest irrigation systems, and it is extremely challenging to design a low-cost embedded system for monitoring and control of waterways that can last for decades. In this paper, we present a hardware design and performance evaluation of a smart water metering solution that is IEEE 802.15.4-compliant. The results show that our hardware design is as powerful as the reference design, but allows for additional flexibility both in hardware and in firmware. The indigenously designed solution has a power-added efficiency (PAE) of 24.7% and is expected to last for 351 and 814 days for nodes with and without a power amplifier (PA), respectively. Similarly, the results show that broadband communication (434 MHz) over more than 3 km can be supported, which is an important stepping stone for designing a complete coverage solution for large-scale waterways.
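As a rough sanity check of the reported lifetimes, a first-order battery model (capacity divided by average current draw) reproduces the 351- and 814-day figures; the battery capacity and average currents below are hypothetical values back-computed purely for illustration, not numbers from the paper.

```python
def lifetime_days(capacity_mah: float, avg_current_ma: float) -> float:
    """First-order lifetime estimate: battery capacity divided by the
    average current draw, converted from hours to days."""
    return capacity_mah / avg_current_ma / 24.0

# Hypothetical 2600 mAh battery; average currents chosen so the
# estimates land near the reported 351 and 814 days:
with_pa    = lifetime_days(2600.0, 0.308)   # node with power amplifier
without_pa = lifetime_days(2600.0, 0.133)   # node without power amplifier
```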
Due to climate change and the scarcity of water reservoirs, monitoring and control of irrigation systems is now becoming a major focal area for researchers in Cyber-Physical Systems (CPS). Wireless Sensor Networks (WSNs) are rapidly finding their way into the field of irrigation and play a key role as data-gathering technology in the domain of IoT and CPS. They are efficient for reliable monitoring, giving farmers an edge to take precautionary measures. However, designing an energy-efficient WSN system requires a cross-layer effort, and energy-aware routing protocols play a vital role in the overall energy optimization of a WSN. In this paper, we propose a new hierarchical routing protocol suitable for large-area environmental monitoring, such as the large-scale irrigation network in the Punjab province of Pakistan. The proposed protocol resolves the issues faced by traditional multi-hop routing protocols such as LEACH, M-LEACH and I-LEACH, and enhances the lifespan of each WSN node, which results in an increased lifespan of the whole network. We used the open-source NS3 simulator for our simulations, and the results indicate that our proposed modifications yield an average 27.8% increase in the lifespan of the overall WSN when compared to the existing protocols.
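For context, the cluster-head election threshold of the classic LEACH protocol that the proposed scheme improves upon can be sketched as follows; the proposed hierarchical protocol itself is not reproduced here.

```python
def leach_threshold(p: float, r: int) -> float:
    """Classic LEACH election threshold T(n) for round r: a node that has
    not been cluster head within the last 1/p rounds becomes head when a
    uniform random draw falls below this value (p = desired head fraction)."""
    period = round(1.0 / p)
    return p / (1.0 - p * (r % period))
```

The threshold rises over each 1/p-round epoch (reaching 1.0 in the last round), guaranteeing that every node eventually serves as cluster head once per epoch; this rotation is what the LEACH family uses to spread energy consumption.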
In this work, we consider a duty-cycled wireless sensor network with the assumption that the on/off schedules are uncoordinated. In such networks, as not all nodes may be awake during the transmission of time synchronization messages, nodes will need to re-transmit the synchronization messages. Ideally, a node should re-transmit for the maximum sleep duration to ensure that all nodes are synchronized. However, such an approach would immensely increase the energy consumption of the nodes, which demands an upper bound on the number of retransmissions. We refer to the time a node spends re-transmitting the control message as the broadcast duration. We ask the question: what should the broadcast duration be to ensure that a certain percentage of the available nodes is synchronized? The problem of estimating the broadcast duration is formulated so as to capture the probability threshold of the nodes being synchronized. Results show that the proposed analytical model can predict the broadcast duration within a small error margin under real-world conditions, demonstrating the efficiency of our solution.
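A deliberately simplified model of the question posed above: if a node's wake-up instant is uniformly distributed over one sleep period, the broadcast duration needed to reach it with a target probability scales linearly with the sleep period. This is an illustration of the trade-off, not the paper's analytical formulation.

```python
def broadcast_duration(t_sleep: float, coverage: float) -> float:
    """Broadcast duration needed so that a node whose wake-up instant is
    uniformly distributed over one sleep period of length t_sleep is
    reached with probability `coverage`."""
    if not 0.0 < coverage <= 1.0:
        raise ValueError("coverage must be in (0, 1]")
    return coverage * t_sleep

# Reaching a node with 90% probability, given a 10 s sleep period:
t_b = broadcast_duration(10.0, 0.9)
```

Even in this toy model, full coverage (coverage = 1.0) forces broadcasting for the entire sleep period, which is exactly the energy cost the bounded-retransmission approach avoids.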
The application of leaky feeder (radiating) cables is a common solution for implementing reliable radio communication in large industrial buildings, tunnels, and mining environments. This paper explores the possibilities of leaky feeders for 1D and 2D localization in wireless systems based on time-of-flight chirp spread spectrum technologies. The main focus of this paper is to present and analyse the results of time-of-flight and received-signal-strength measurements with leaky feeders in indoor and outdoor conditions. The authors carried out experiments to compare ranging accuracy and radio coverage area for a point-like monopole antenna and for a leaky feeder acting as a distributed antenna. In all experiments, RealTrac equipment based on the nanoLOC radio standard was used. The most probable path of a chirp signal travelling through a leaky feeder was estimated using a ray tracing approach. Typical non-line-of-sight error profiles are presented. The results show the possibility of using radiating cables in real-time location technologies based on the time-of-flight method.
HiSiMo cast irons are frequently used as material for high temperature components in engines as e.g. exhaust manifolds and turbo chargers. These components must withstand severe cyclic mechanical and thermal loads throughout their life cycle. The combination of thermal transients with mechanical load cycles results in a complex evolution of damage, leading to thermomechanical fatigue (TMF) of the material and, after a certain number of loading cycles, to failure of the component. In Part I of the paper, a fracture mechanics model for TMF life prediction was developed based on results of uniaxial tests. In this paper (Part II), the model is formulated for three-dimensional stress states, so that it can be applied in a post-processing step of a finite-element analysis. To obtain reliable stresses and (time dependent plastic) strains in the finite-element calculation, a time and temperature dependent plasticity model is applied which takes non-linear kinematic hardening into account. The material properties of the model are identified from the results of the uniaxial test. The plasticity model and the TMF life model are applied to assess the lifetime of an exhaust manifold.
Video game developers continuously increase the degree of detail and realism in games to create more human-like characters. But increasing human-likeness becomes a problem with regard to the Uncanny Valley phenomenon, which predicts negative feelings of people towards artificial entities. We developed an avatar creation system to examine preferences towards parametrized faces and to explore, with regard to the Uncanny Valley phenomenon, how people design faces that they like or reject. Based on the 3D model of the Caucasian average face, 420 participants generated 1341 faces of positively and negatively associated concepts of both genders. The results show that some characteristics associated with the Uncanny Valley are used to create villains or repulsive faces. Heroic faces get attractive features but are rarely and only slightly stylized. A voluntarily designed face is very similar to the heroine. This indicates a tendency of users to design feminine and attractive but still credible faces.
In this contribution, we propose a system setup for the detection and classification of objects in autonomous driving applications. The recognition algorithm is based upon deep neural networks operating in the 2D image domain. The results are combined with data from a stereo camera system to finally incorporate the 3D object information into our mapping framework. The detection system runs locally on the onboard CPU of the vehicle. Several network architectures are implemented and evaluated with respect to accuracy and run-time demands for the given camera and hardware setup.
Brand identification has the potential of shaping individuals' attitudes, performance and commitment within learning and work contexts. We explore these effects, by incorporating elements of branded identification within gamified environments. We report a study with 44 employees, in which task performance and emotional outcomes are assessed in a real-world assembly scenario - namely, while performing a soldering task. Our results indicate that brand identification has a direct impact on individuals' attitude towards the task at hand: while instigating positive emotions, aversion and reactance also arise.
For the RoboCup Soccer AdultSize League, the humanoid robot Sweaty uses a single fully convolutional neural network to detect and localize the ball, opponents, and other features on the field of play. This neural network can be trained from scratch in a few hours and is able to perform in real time within the constraints of the computational resources available on the robot. The time it takes to process an image is approximately 11 ms. Balls and goal posts are recalled in 99% of all cases (94.5% for all objects), accompanied by a false detection rate of 1.2% (5.2% for all objects). The object detection and localization helped Sweaty become a finalist at RoboCup 2017 in Nagoya.
Spinal cord stimulation (SCS) is the most commonly used technique of neurostimulation. It involves the stimulation of the spinal cord and is therefore used to treat chronic pain. The existing esophageal catheters are used for temperature monitoring during an electrophysiology study with ablation and transesophageal echocardiography. The aim of the study was to model the spine and new esophageal electrodes for the transesophageal electrical pacing of the spinal cord, and to integrate them in the Offenburg heart rhythm model for the static and dynamic simulation of transesophageal neurostimulation. The modeling and simulation were both performed with the electromagnetic and thermal simulation software CST (Computer Simulation Technology, Darmstadt). Two new esophageal catheters were modelled as well as a thoracic spine based on the dimensions of a human skeleton. The simulation of directed transesophageal neurostimulation is performed using the esophageal balloon catheter with an electric pacing potential of 5 V and a trapezoidal signal. A potential of 4.33 V can be measured directly at the electrode, 3.71 V in the myocardium at a depth of 2 mm, 2.68 V in the thoracic vertebra at a depth of 10 mm, 2.1 V in the thoracic vertebra at a depth of 50 mm and 2.09 V in the spinal cord at a depth of 70 mm. The relation between the voltage delivered to the electrodes and the voltage applied to the spinal cord is linear. Virtual heart rhythm and catheter models as well as the simulation of electrical pacing fields and electrical sensing fields allow the static and dynamic simulation of directed transesophageal electrical pacing of the spinal cord. The 3D simulation of the electrical sensing and pacing fields may be used to optimize transesophageal neurostimulation.
An Empirical Study of Explainable AI Techniques on Deep Learning Models For Time Series Tasks
(2020)
Decision explanations of machine learning black-box models are often generated by applying Explainable AI (XAI) techniques. However, many proposed XAI methods produce unverified outputs. Evaluation and verification are usually achieved with a visual interpretation by humans on individual images or text. In this preregistration, we propose an empirical study and benchmark framework to apply attribution methods for neural networks developed for images and text data on time series. We present a methodology to automatically evaluate and rank attribution techniques on time series using perturbation methods to identify reliable approaches.
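One common perturbation check of the kind such a benchmark relies on can be sketched as follows: zero out the most relevant timesteps according to an attribution and measure the change in model output; a faithful attribution should cause a larger drop than perturbing random timesteps. All names here are illustrative, not the proposed framework's API.

```python
import numpy as np

def perturbation_drop(series, attribution, model, k=5):
    """Zero out the k timesteps with the highest absolute attribution and
    return the resulting change in the model's scalar output."""
    idx = np.argsort(np.abs(attribution))[-k:]  # indices of top-k relevance
    perturbed = series.copy()
    perturbed[idx] = 0.0                        # simple zero-substitution
    return model(series) - model(perturbed)
```

Ranking attribution methods then reduces to comparing their drops on a common set of series, which is what makes the evaluation automatic rather than visual.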
Model-based analysis of Electrochemical Pressure Impedance Spectroscopy (EPIS) for PEM Fuel Cells
(2019)
Electrochemical impedance spectroscopy (EIS) is a widely-used diagnostic technique to characterize electrochemical processes. It is based on the dynamic analysis of two electrical observables, that is, current and voltage. Electrochemical cells with gaseous reactants or products, in particular fuel cells, offer an additional observable, that is, the gas pressure. The dynamic coupling of current or voltage with gas pressure gives rise to a number of additional impedance definitions, for which we have previously introduced the term electrochemical pressure impedance spectroscopy (EPIS) [1,2]. EPIS shows a particular sensitivity towards transport processes of gas-phase or dissolved species, in particular, diffusion coefficients and transport pathway lengths. It is as such complementary to standard EIS, which is mainly sensitive towards electrochemical processes. First EPIS experiments on PEM fuel cells have recently been shown [3].
We present a detailed modeling and simulation analysis of EPIS of a PEM fuel cell. We use a 1D+1D continuum model of a fuel/air channel pair with GDL and MEA. Backpressure is dynamically varied, and the resulting simulated oscillation in cell voltage is evaluated to yield the EPIS signal Z_{V/p_ca}. Results are obtained for different transport situations of the fuel cell, giving rise to very complex EPIS shapes in the Nyquist plot. This complexity shows the necessity of model-based interpretation of the complex EPIS shapes. Based on the simulation results, specific features in the EPIS spectra can be assigned to different transport domains (gas channel, GDL, membrane water transport).
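In the notation used above, the evaluated pressure impedance relates the small-amplitude oscillations of cell voltage and cathode backpressure at angular frequency ω:

```latex
Z_{V/p_{\mathrm{ca}}}(\omega) \;=\; \frac{\tilde{V}(\omega)}{\tilde{p}_{\mathrm{ca}}(\omega)}
```

where the tildes denote the oscillating components of cell voltage V and cathode backpressure p_ca; this definition is the pressure-domain analogue of the standard EIS impedance.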
One of the challenges in humanoid robotics is motion control. Interacting with humans requires impedance control algorithms, as well as tackling the problem of the closed kinematic chains which occur when both feet touch the ground. However, pure impedance control for fully autonomous robots is difficult to realize, as this algorithm needs very precise sensors for the force and speed of the actuated parts, as well as very high sampling rates for the controller input signals. Both requirements lead to a complex and heavyweight design, resulting in heavy machines unusable in RoboCup Soccer competitions.
A lightweight motor controller was developed that can be used for admittance and impedance control as well as for model predictive control algorithms to further improve the gait of the robot.
6LoWPAN (IPv6 over Low Power Wireless Personal Area Networks) is gaining more and more attention for the seamless connectivity of embedded devices in the Internet of Things. It can be observed that most of the available solutions follow an open-source approach, which significantly contributes to the fast development of technologies and markets. Although the currently available implementations are in pretty good shape, all of them come with some significant drawbacks. It was therefore decided to start the development of our own implementation, which takes the advantages of the existing solutions but tries to avoid their drawbacks. This paper discusses the reasoning behind this decision and describes the implementation and its characteristics, as well as the testing results. The implementation is available as an open-source project under [15].
Blockchain frameworks enable the immutable storage of data. A still open practical question is the so-called "oracle" problem, i.e., how real-world data is actually transferred into and out of a blockchain while preserving its integrity. We present a case study that demonstrates how to use an existing industrial-strength secure element for cryptographic software protection (Wibu CmDongle, the "dongle") as such a hardware-based oracle for the Hyperledger blockchain framework. Our scenario is that of a dentist having leased a 3D printer. This printer is initially supplied with an amount of x printing units. With each print action, the local unit counter on the attached dongle is decreased, and in parallel a unit counter is maintained in the Hyperledger-based blockchain. Once a threshold is met, the printer will stop working (by means of the cryptographically protected invocation of the local print method). The blockchain is configured in such a way that chaincode is executed to increase the units again automatically (and essentially trigger any payment processes). Once this has happened, the new unit counter value is passed from the blockchain to the local dongle, allowing further execution of print jobs.
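The unit-counter mechanism described above can be sketched as follows; the class and method names are illustrative, and the real system performs these steps via the Wibu CmDongle API and Hyperledger chaincode rather than plain Python.

```python
class PrintDongle:
    """Local unit counter that mirrors the counter kept in the blockchain:
    each print decrements it, printing is refused at zero, and a refill
    models the top-up pushed by chaincode after payment."""

    def __init__(self, units: int):
        self.units = units

    def print_job(self) -> bool:
        """Attempt one print; returns False once the units are exhausted."""
        if self.units <= 0:
            return False
        self.units -= 1
        return True

    def refill(self, units: int) -> None:
        """Top-up passed from the blockchain back to the local dongle."""
        self.units += units
```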
The development of secure software systems is of ever-increasing importance. While software companies often invest large amounts of resources into the upkeep and general security properties of large-scale applications in production, they appear to neglect utilizing threat modeling in the earlier stages of the software development lifecycle. When applied during the design phase of development, and continuously throughout development iterations, threat modeling can help to establish a "Secure by Design" approach. This approach allows issues relating to IT security to be found early during development, reducing the need for later improvement and thus saving resources in the long term. In this paper, the current state of threat modeling is investigated. This investigation drove the derivation of requirements for the development of a new threat modeling framework and tool, called OVVL. OVVL utilizes concepts of established threat modeling methodologies, as well as functionality not available in existing solutions.
Protecting software from illegal access, intentional modification or reverse engineering is an inherently difficult practical problem involving code obfuscation techniques and real-time cryptographic protection of code. In traditional systems a secure element (the "dongle") is used to protect software. However, this approach suffers from several technical and economical drawbacks such as the dongle being lost or broken.
We present a system that provides such dongles as a cloud service, and more importantly, provides the required cryptographic material to control access to software functionality in real-time.
This system is developed as part of an ongoing nationally funded research project and is now entering a first trial stage with stakeholders from different industrial sectors.
With the need for automatic-control-based supervisory controllers for complex energy systems comes the need for reduced-order system models representing not only the non-linear behaviour of the components but also certain unknown process dynamics, such as their internal control logic. At the Institute of Energy Systems Technology in Offenburg, we have built a real-life microscale trigeneration plant, and in this paper we present a rational modelling procedure that satisfies the necessary characteristics for models to be applied in model predictive control for grid-reactive optimal scheduling of this complex energy system. These models are validated against experimental data and the efficacy of the methodology is discussed. Their future application to the optimal scheduling problem is also briefly motivated.
Modelling and Simulation of Microscale Trigeneration Systems Based on Real-Life Experimental Data
(2017)
Introduction: To simplify AV delay (AVD) optimization in cardiac resynchronization therapy (CRT), we reported that the hemodynamically optimal AVD for VDD and DDD mode CRT pacing can be approximated by individually measuring implant-related interatrial conduction intervals (IACT) in the oesophageal electrogram (LAE) and adding about 50 ms. The programmer-based St. Jude QuickOpt algorithm utilizes this finding. By automatically measuring IACT in VDD operation, it predicts the sensed AVD by adding either 30 ms or 60 ms. The paced AVD is strictly 50 ms longer than the sensed AVD. As a consequence of these variations, several studies identified distinct inaccuracies of QuickOpt. Therefore, we aimed to seek better approaches to automate AVD optimization.
Methods: In a study of 35 heart failure patients (27m, 8f, age: 67±8y) with Insync III Marquis CRT-D systems, we recorded telemetric electrograms between the left ventricular electrode and the superior vena cava shock coil (LVtip/SVC = LVCE) simultaneously with the LAE. In the LVCE we measured the intervals As-Pe in VDD and Ap-Pe in DDD operation, between the right atrial sense event (As) or atrial stimulus (Ap), respectively, and the end of the atrial activity (Pe). As-Pe and Ap-Pe were compared with As-LA and Ap-LA in the LAE, respectively.
Results: The end of the left atrial activity in the LVCE could clearly be recognized in 35/35 patients in VDD and 29/35 patients in DDD operation. We found mean intervals As-LA of 40.2±24.5ms and Ap-LA of 124.3±20.6ms. As-Pe was 94.8±24.1ms and Ap-Pe was 181.1±17.8ms. Comparing the sums As-LA + 50ms with the duration of As-Pe and Ap-LA + 50ms with the duration of Ap-Pe, the differences were only 4.7±9.2ms and 4.2±8.6ms, respectively. Thus, hemodynamically optimal timing of the ventricular stimulus can be triggered by automatically detecting Pe in the LVCE.
Conclusion: Based on the minimal deviations between the LAE and LVCE approaches, we proposed that device manufacturers utilize the LVCE in order to automate individual AVD optimization in CRT pacing.
Introduction: Patient selection for cardiac resynchronization therapy (CRT) requires quantification of the left ventricular conduction delay (LVCD). After implantation of biventricular pacing systems, individual AV delay (AVD) programming is essential to ensure hemodynamic response. To exclude adverse effects, the AVD should exceed the individual implant-related interatrial conduction time (IACT). As a result of a pilot study, we proposed the development of programmer-based transoesophageal left heart electrogram (LHE) recording to simplify both LVCD and IACT measurement. This feature, recording simultaneously with a 3-channel surface ECG, was implemented into the Biotronik ICS3000 programmer.
Methods: A 5F oesophageal electrode was perorally applied in 44 heart failure CRT-D patients (34m, 10f, 65±8 yrs., QRS=162±21ms). In the position of maximum left ventricular deflection, the oesophageal LVCD was measured between the onsets of QRS in the surface ECG and the oesophageal left ventricular deflection. Then, in the position of maximum left atrial deflection (LA), the IACT in VDD operation (As-LA) was calculated as the difference between the programmed AV delay and the measured interval from the onset of the left atrial deflection to the ventricular stimulus in the oesophageal electrogram. The IACT in DDD operation (Ap-LA) was measured between the atrial stimulus and LA.
Results: The LVCD of the CRT patients was characterized by a minimum of 47ms and a mean of 69±23ms. As-LA and Ap-LA were found to be 41±23ms and 125±25ms at mean, respectively. In 7 patients (15.9%), IACT measurement in DDD operation uncovered adverse AVDs when left at factory settings; in these cases, Ap-LA exceeded the factory AVD. In 6 patients (13.6%), the IACT in VDD operation was less than or equal to 10ms, indicating the need for a short AVD.
Conclusion: Response to CRT requires distinct LVCD and AVD optimization. The ICS3000 oesophageal LHE feature can be utilized to measure LVCD in order to justify selection for CRT. IACT measurement simplifies AV delay optimization in patients with CRT systems irrespective of their make and model.
Currently, QRS width and bundle branch block morphology are used as electrocardiographic guideline criteria to select heart failure (HF) patients with interventricular desynchronization in sinus rhythm (SR) for cardiac resynchronisation therapy (CRT). Nevertheless, up to 30% of these patients do not benefit from implantation of CRT systems. The esophageal left ventricular electrogram (LVE) enables semi-invasive measurement of interventricular conduction delays (IVCD) even in patients with atrial fibrillation (AF). To routinely apply this method, a programmer-based, semi-invasive, automatic quantification of IVCD should be developed. Our aim was to define interventricular conduction delays by analyzing fractionated left ventricular (LV) deflections in the esophageal left ventricular electrogram of HF patients in SR or AF.
In 66 HF patients (49 male, 17 female, age 65 ± 10 years) a 5F TOslim electrode (Osypka AG, Germany) was perorally applied. Using a BARD EP Lab, cardiac desynchronization was quantified as the interval IVCD between the onset of QRS in the surface ECG and the investigator-determined onset of the left ventricular deflection in the LVE. IVCD was compared with the intervals between QRS onset and the first maximum (IVCDm1) and between QRS onset and the second maximum (IVCDm2) of the LV complex.
A QRS of 173 ± 26 ms was linked with an empirical IVCD of 75 ± 25 ms, at mean. The first and second LV maximum could be ascertained beyond doubt in all patients. Significant correlations at the p<0.01 level were found between IVCD and the IVCDm1 of 96 ± 28 ms as well as between IVCD and the IVCDm2 of 147 ± 31 ms, at mean. To standardize automatic measurement of interventricular conduction delays with respect to patients with fractionated LV complexes, the first maximum of the LV deflection should be utilized to qualify the IVCD of HF patients with sinus rhythm and atrial fibrillation.
This paper presents a novel low-jitter interface between a low-cost integrated IEEE 802.11 chip and an FPGA. It is designed to be part of system hardware for ultra-precise synchronization between wireless stations. On the physical level, it uses the Wi-Fi chip's coexistence signal lines and UART frame encoding. On this basis, we propose an efficient communication protocol providing precise timestamping of incoming frames and internal diagnostic mechanisms for detecting communication faults. At the same time, it is simple enough to be implemented both in a low-cost FPGA and in commodity IEEE 802.11 chip firmware. The results of computer simulation show that the developed FPGA implementation of the proposed protocol can precisely timestamp incoming frames as well as detect most communication errors, even under high interference. The probability of undetected errors was investigated. The results of this analysis are significant for the development of novel wireless synchronization hardware.
Wireless synchronization of industrial controllers is a challenging task in environments where wired solutions are not practical. The best solutions proposed so far require expensive and highly specialized FPGA-based devices. With this work we counter the trend by introducing a straightforward approach to synchronize an inexpensive IEEE 802.11 integrated wireless chip (IWC) with external devices. More specifically, we demonstrate how to reprogram the software running in the 802.11 IWC of the Raspberry Pi 3B and transform the receiver input potential of the wireless transceiver into a triggering signal for an external, inexpensive FPGA. Experimental results show a mean-square synchronization error of less than 496 ns, while the absolute synchronization error does not exceed 6 μs. The jitter of the output signal obtained after synchronizing the clock of the external device did not exceed 5.2 μs throughout the whole measurement campaign. Even though we do not set new records in terms of accuracy, we do in terms of complexity, cost, and availability of the required components: all these factors make the proposed technique very promising for the deployment of large-scale low-cost automation solutions.
The efficient support of Hardware-in-the-Loop (HIL) in the design process of hardware/software co-designed systems is an ongoing challenge. This paper presents a network-based integration of hardware elements into the software-based image processing tool „ADTF“, based on a high-performance Gigabit Ethernet MAC and a highly efficient TCP/IP stack. The MAC has been designed in VHDL. It was verified in a SystemC simulation environment and tested on several Altera FPGAs.
Laser ultrasound was used to determine dispersion curves of surface acoustic waves on a Si (001) surface covered by AlScN films with a scandium content between 0 and 41%. By including off-symmetry directions for wavevectors, all five independent elastic constants of the film were extracted from the measurements. Results for their dependence on the Sc content are presented and compared to corresponding data in the literature, obtained by alternative experimental methods or by ab-initio calculations.
Non-destructive methods for measuring residual stresses require, depending on the chosen method, knowledge of certain coupling constants. In the case of ultrasonic measurement methods, these are, in addition to the second-order elastic constants (SOEC), above all the third-order elastic constants (TOEC). The elastic constants of solid metallic components are usually determined in tensile tests; to determine the TOEC, these tests are combined with ultrasonic measurement methods. However, external influences such as mechanical post-treatment of the components under investigation can change these constants, which must consequently be determined directly on the modified material.
Using simulations, the propagation of the second harmonic and of the nonlinearly generated surface waves in wave-mixing experiments is analyzed, and the acoustic nonlinearity parameter (ANP) or the coupling parameter is computed from the amplitude evolution. In particular, it is investigated what influence a given depth profile of the TOEC has on the ANP (forward problem) and to what extent an existing depth profile of the TOEC can be inferred from measurements of the ANP (inverse problem). It is also discussed what influence local changes of the SOEC can have on the ANP and how large these changes may be while still allowing the TOEC to be determined. The corresponding investigations were carried out on the basis of a 3D FEM model with randomly oriented microcracks. The numerical calculations also show good agreement with an analytical model known from the literature and extended for this problem, which can account for lattice nonlinearity in addition to the crack-induced nonlinearity.
Elastic constants of components are usually determined by tensile tests in combination with ultrasonic
experiments. However, these properties may change due to e.g. mechanical treatments or service conditions during
their lifetime. Knowledge of the actual material parameters is key to the determination of quantities like residual
stresses present in the medium. In this work the acoustic nonlinearity parameter (ANP) for surface acoustic waves is
examined through the derivation of an evolution equation for the amplitude of the second harmonic. Given a certain
depth profile of the third-order elastic constants, the dependence of the ANP on the input frequency is
determined and on the basis of these results, an appropriate inversion method is developed. This method is intended
for the extraction of the depth dependence of the third-order elastic constants of the material from second-harmonic
generation and guided wave mixing experiments, assuming that the change in the linear Rayleigh wave velocity is
small. The latter assumption is supported by a 3D-FEM model study of a medium with randomly distributed microcracks as well as theoretical works on this topic in the literature.
In-vivo and in-vitro comparison of implant-based CRT optimization - What do new algorithms provide?
(2011)
Introduction: In cardiac resynchronization therapy (CRT), individual AV delay (AVD) optimization can effectively increase hemodynamics and reduce the non-responder rate. Accurate, automatic and easily comprehensible algorithms for the follow-up are desirable. QuickOpt is the first attempt at a semi-automatic, intracardiac electrogram (IEGM) based AVD algorithm. We aimed to compare its accuracy and usefulness in in-vitro and in-vivo studies.
Methods: Using the programmable ARSI-4 four-chamber heart rhythm and IEGM simulator (HKP, Germany), the QuickOpt feature of an Epic HF system (St. Jude, USA) was tested in-vitro with simulated atrial IEGM amplitudes between 0.3 and 3.5mV during both manual and automatic atrial sensing between 0.2 and 1.0mV. Subsequently, in 21 heart failure patients with implanted biventricular defibrillators, QuickOpt was performed in-vivo. The results of the algorithm for VDD and DDD stimulation were compared with echo AV delay optimization.
Results: In-vitro simulations demonstrated a QuickOpt measuring accuracy of ± 8ms. Depending on the atrial IEGM amplitude, the algorithm proposed optimal AVDs between 90 and 150ms for VDD and between 140 and 200ms for DDD operation, respectively. In-vivo, the QuickOpt difference between the individual AVDs in DDD and VDD mode was either 50ms (20 pts) or 40ms (1 pt). QuickOpt and echo AVD differed by 41 ± 25ms (7-90ms) in VDD and by 18 ± 24ms (17-50ms) in DDD operation. The individual echo AVD difference between both modes was 73 ± 20ms (30-100ms).
Conclusion: The study demonstrates the value of in-vitro studies. They predicted QuickOpt's deficiencies: IEGM amplitude-dependent AVD proposals constrained to fixed individual differences between DDD and VDD mode. Consequently, in-vivo, the algorithm provided AVDs of predominantly longer duration than echo in both modes. Accepting echo individualization as the gold standard, QuickOpt should not be used alone to optimize the AVD in CRT patients.
A car is only useful when it runs properly - but keeping a car running is becoming more and more complex. Car service providers need deep knowledge of the technical details of the different car models. Car producers, on the other hand, try to keep this information in their ownership. Digital data collection takes place every second over the car's product life cycle, and the data is stored on the car producers' servers. The contribution of this paper is three-fold: we provide an overview of the current concepts of intelligent order assistant technologies (I). This corpus is used to arrive at a more precise description of the specific service performance aspects (II). Finally, a representative empirical study with German motor mechanics helps to evaluate the wishes and needs regarding an intelligent order assistant in the garage (III).
Automotive service suppliers are keen to invent products that help to reduce particulate matter pollution substantially, but governments worldwide are not yet ready to make the retrofitting of such helpful devices statutory. The objective of our research is to develop a strategy for introducing these devices to the market based on user needs. The contribution of this paper is three-fold: we provide an overview of the current options for reducing particulate matter pollution (I). This corpus is used to arrive at a more precise description of the specific needs and wishes of the target groups (II). Finally, a representative empirical study via social media channels with German car owners helps to develop a strategy for introducing retrofit devices into the German market (III).
This paper gives an overview of the implementation of an Active Noise Control (ANC) system on the TMS320C6713 digital signal processor from Texas Instruments in the Digital Signal Processing Lab at Hochschule Offenburg, Germany. The system is implemented under non-ideal environmental conditions on a real system instead of being limited to computer simulations. Changes of the physical acoustic path over time, as well as reverberation and variation of the power of the reference signal, can strongly degrade the performance of the system or even lead to instability. To minimize these effects, the ANC system was designed to support fast and easy implementation and evaluation of different algorithms on the DSP in real time. Section 1 gives a brief introduction to active noise control, Section 2 describes the basic algorithm, Section 3 describes the implementation of the system, and Section 4 gives some final considerations.
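The basic algorithm in such ANC systems is commonly the filtered-x LMS (FxLMS). The following is a minimal Python sketch, not the paper's DSP implementation; it assumes a known estimate `s_hat` of the secondary acoustic path (all names here are illustrative):

```python
import numpy as np

def fxlms(x, d, s, s_hat, L=16, mu=0.01):
    """Filtered-x LMS: adapt the control filter w so that the anti-noise,
    after passing through the secondary path s, cancels the disturbance d."""
    w = np.zeros(L)                      # adaptive control filter
    x_buf = np.zeros(L)                  # reference signal buffer (newest first)
    fx_buf = np.zeros(L)                 # filtered-reference buffer
    s_buf = np.zeros(len(s))             # anti-noise history for secondary path
    sh_buf = np.zeros(len(s_hat))        # reference history for path model
    e = np.zeros(len(x))
    for n in range(len(x)):
        x_buf = np.roll(x_buf, 1); x_buf[0] = x[n]
        y = w @ x_buf                    # anti-noise sample
        s_buf = np.roll(s_buf, 1); s_buf[0] = y
        e[n] = d[n] - s @ s_buf          # residual at the error microphone
        sh_buf = np.roll(sh_buf, 1); sh_buf[0] = x[n]
        fx = s_hat @ sh_buf              # reference filtered by the path model
        fx_buf = np.roll(fx_buf, 1); fx_buf[0] = fx
        w += mu * e[n] * fx_buf          # LMS update with filtered reference
    return e
```

Filtering the reference through the secondary-path model before the update is what keeps the adaptation stable despite the delay between loudspeaker and error microphone.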
The paper describes the methodology and experimental results for revealing similarities in the thermal dependencies of the accelerometer and gyroscope biases of 250 inertial MEMS chips (MPU-9250). Temperature profiles were measured on an experimental setup with a Peltier element for temperature control. Classification of the temperature curves was carried out with a machine learning approach.
A perfect sensor should have no thermal dependency at all. Thus, only sensors inside the clusters with smaller dependency (smaller total temperature slopes) might be pre-selected for the production of high-accuracy inertial navigation modules. It was found that no unified thermal profile ("family" curve) exists for all sensors in a production batch. However, sensors can evidently be grouped according to their parameters, so the temperature compensation profiles can be regressed for each group. 12 slope coefficients over 5-degree temperature intervals from 0°C to +60°C were used as the features for the k-means++ clustering algorithm.
The minimum number of clusters for all sensors to be well separated from each other by their bias thermal profiles was 6 in our case, determined by applying the elbow method. For each cluster, a regression curve can be obtained.
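The feature-extraction step described above (12 local slopes over 5-degree intervals) can be sketched as follows. `slope_features` is an illustrative name, not the authors' code; the subsequent clustering would use a k-means++ implementation such as scikit-learn's `KMeans`, with the elbow read off the inertia curve:

```python
import numpy as np

def slope_features(temps, bias, t_min=0.0, t_max=60.0, n_bins=12):
    """12 local slope coefficients of a bias-vs-temperature curve,
    one per 5 degC interval from 0 to +60 degC."""
    edges = np.linspace(t_min, t_max, n_bins + 1)
    feats = np.empty(n_bins)
    for i in range(n_bins):
        m = (temps >= edges[i]) & (temps <= edges[i + 1])
        # least-squares slope of bias vs temperature inside the bin
        feats[i] = np.polyfit(temps[m], bias[m], 1)[0]
    return feats
```

One feature vector per sensor then feeds the clustering, so sensors with similar thermal bias behaviour land in the same group.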
Printed electronics is perceived to have a major impact in the fields of smart sensors, the Internet of Things and wearables. Especially low-power printed technologies such as electrolyte-gated field effect transistors (EGFETs) using solution-processed inorganic materials and inkjet printing are very promising in such application domains. In this paper, we discuss a modeling approach to describe the variations of printed devices. Incorporating these models and design flows into our previously developed printed design system allows for robust circuit design. Additionally, we propose a reliability-aware routing solution for printed electronics technology based on the technology constraints in printing crossovers. The proposed methodology was validated on multiple benchmark circuits and can easily be integrated with the design automation tool-set.
Design engineers in mechanical engineering frequently face the problem of combining high-strength preloaded bolted joints with continuous corrosion protection. Current standards and guidelines offer no sufficient answers to this. Within an industrial collaborative research project, Offenburg University is investigating the influence of organic coatings on the preload force, in particular at elevated ambient temperatures. This work presents the first results on the influence of the individual layer thickness of the coating system.
Most machine learning methods require careful selection of hyper-parameters in order to train a high-performing model with good generalization abilities. Hence, several automatic selection algorithms have been introduced to overcome the tedious manual (trial-and-error) tuning of these parameters. Due to its very high sample efficiency, Bayesian Optimization over a Gaussian Process model of the parameter space has become the method of choice. Unfortunately, this approach suffers from cubic compute complexity due to the underlying Cholesky factorization, which makes it very hard to scale beyond a small number of sampling steps. In this paper, we present a novel, highly accurate approximation of the underlying Gaussian Process. Reducing its computational complexity from cubic to quadratic allows efficient strong scaling of Bayesian Optimization while outperforming the previous approach in optimization accuracy. First experiments show speedups of a factor of 162 on a single node and a further speedup by a factor of 5 in a parallel environment.
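For reference, the Cholesky-based exact GP posterior that the abstract identifies as the cubic bottleneck can be sketched as follows. This is the generic textbook formulation, not the authors' quadratic approximation; `gp_posterior` and its parameters are illustrative:

```python
import numpy as np

def gp_posterior(X, y, X_star, length=1.0, noise=1e-6):
    """Exact GP regression posterior mean/variance with an RBF kernel.
    The Cholesky factorization of the n x n kernel matrix is the
    O(n^3) step that limits classic Bayesian optimization."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / length ** 2)
    K = k(X, X) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)                       # O(n^3) bottleneck
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Ks = k(X_star, X)
    mean = Ks @ alpha                               # posterior mean
    v = np.linalg.solve(L, Ks.T)
    var = k(X_star, X_star).diagonal() - (v ** 2).sum(0)
    return mean, var
```

Each new sample point forces a refactorization, which is why reducing this step from cubic to quadratic cost directly extends the number of feasible optimization steps.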
The Bluetooth community is in the process of developing mesh technology. This is highly promising, as Bluetooth is widely available in smartphones and tablet PCs, allowing easy access to the Internet of Things. In this paper, we investigate the performance of Bluetooth-enabled mesh networking to identify its strengths and weaknesses. A demonstrator for this protocol has been implemented using the FruityMesh protocol implementation. Extensive test cases have been executed to measure the performance, reliability, power consumption and delay. For this, an Automated Physical Testbed (APTB), which emulates the physical channels, has been used. The results of these measurements are useful for real implementations of Bluetooth mesh, not only for home and building automation but also for industrial automation.
In this paper, we establish a simple model for the exchange of messages in a vehicular network and consider fundamental limits on the achievable data rate. For a vehicular network, the exchange of data with other nearby vehicles is particularly important for traffic safety, e.g. for collision avoidance, but also for cooperative applications like platooning. These use cases are currently addressed by standards building on IEEE 802.11p, namely ITS-G5 and DSRC (dedicated short-range communication), which encounter saturation problems at high vehicle densities. For this reason, we take a step back and ask for the fundamental limits on the common data rate in a vehicular network. After defining a simple single-lane model and the corresponding capacity limits for some basic multiple-access schemes, we present results for a more realistic setting. For both scenarios, non-orthogonal multiple access (NOMA) yields the best results.
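Why NOMA wins for the common rate can be illustrated with a toy Gaussian multiple-access computation (a generic textbook comparison, not the paper's single-lane model; function names are illustrative):

```python
import numpy as np
from itertools import combinations

def symmetric_rate_oma(snrs):
    """Orthogonal access (e.g. TDMA): each of n users gets a 1/n time
    share at its own SNR, so the common rate is the worst user's share."""
    n = len(snrs)
    return min(np.log2(1 + s) / n for s in snrs)

def symmetric_rate_noma(snrs):
    """Symmetric rate of the Gaussian multiple-access channel with
    superposition coding and SIC: for every user subset, the sum of
    the equal rates must stay below that subset's sum capacity."""
    best = float("inf")
    for k in range(1, len(snrs) + 1):
        for subset in combinations(snrs, k):
            best = min(best, np.log2(1 + sum(subset)) / k)
    return best
```

For two users at an SNR of 10, orthogonal sharing yields log2(11)/2 ≈ 1.73 bit per channel use each, while the non-orthogonal symmetric rate is log2(21)/2 ≈ 2.20 — the gap that motivates NOMA at high vehicle densities.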
The high peak power in comparison to the average transmit power is one of the major long-standing problems in multicarrier modulation and is known as the PAPR (peak to average power ratio) problem. Many PAPR reduction methods have been devised and their comparison is usually based on the complementary cumulative distribution function (CCDF) of the PAPR. While this comparison is straightforward and easy to compute, its relationship with system performance metrics like the (uncoded) BER or the word error rate (WER) for coded systems is considerably more involved. We evaluate the impact of the PAPR on performance metrics like uncoded BER, EVM (error vector magnitude), mutual information and the WER for soft decoding. In this context, we find that system performance is not necessarily degraded by an increasing PAPR. We show that a high number of subcarriers, despite the corresponding high PAPR, is actually not a problem for the system performance and provide a simple explanation for this seemingly counter-intuitive fact.
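The CCDF of the PAPR mentioned above is straightforward to estimate numerically. A minimal sketch for QPSK-loaded OFDM (an illustrative setup, not the paper's simulation):

```python
import numpy as np

def papr_db(symbols):
    """PAPR of one OFDM symbol: IFFT of the subcarrier symbols,
    then peak power over average power, in dB."""
    x = np.fft.ifft(symbols)
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def papr_ccdf(n_sub, thresholds_db, n_sym=2000, rng=None):
    """Empirical CCDF Pr(PAPR > threshold) over random QPSK symbols."""
    rng = np.random.default_rng(rng)
    qpsk = (rng.integers(0, 2, (n_sym, n_sub)) * 2 - 1
            + 1j * (rng.integers(0, 2, (n_sym, n_sub)) * 2 - 1)) / np.sqrt(2)
    paprs = np.array([papr_db(s) for s in qpsk])
    return np.array([(paprs > t).mean() for t in thresholds_db])
```

The worst-case PAPR grows with the number of subcarriers N (bounded by 10·log10(N) dB), which is exactly the regime where the abstract argues that the system-level metrics, not the CCDF alone, should decide whether the high PAPR actually matters.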
In this paper, we present a frame synchronization method which consists of the non-orthogonal superposition of a synchronization sequence and the data. We derive the optimum detection criterion and compare it to the classical sequential concatenation of synchronization and data sequences. Computer simulations confirm the benefits of the non-orthogonal allocation for the case of short frames, which makes this technique particularly suited for the increasingly important regime of low latency and ultra-reliable communication.
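The superposition idea can be illustrated with a toy correlator-based detector (a simplified sketch under assumed parameters, not the paper's optimum detection criterion):

```python
import numpy as np

def detect_sync(rx, sync):
    """Slide a correlator for the known sync sequence over the received
    samples and return the offset with maximum correlation magnitude."""
    n = len(sync)
    corr = [abs(np.dot(rx[i:i + n], sync)) for i in range(len(rx) - n + 1)]
    return int(np.argmax(corr))

rng = np.random.default_rng(1)
n = 64
sync = rng.integers(0, 2, n) * 2.0 - 1.0   # known BPSK sync sequence
data = rng.integers(0, 2, n) * 2.0 - 1.0   # random BPSK payload
alpha = 0.8                                 # power fraction given to sync
# non-orthogonal superposition: sync and data occupy the same samples
frame = np.sqrt(alpha) * sync + np.sqrt(1 - alpha) * data
rx = np.concatenate([0.1 * rng.standard_normal(50), frame,
                     0.1 * rng.standard_normal(50)])
```

Because sync and data share every sample instead of being concatenated, no channel uses are spent on a separate preamble — the source of the short-frame gains reported in the abstract.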
This study presents results from a monitoring project with night ventilation and an earth-to-air heat exchanger. Both techniques are air-based low-energy cooling. As these technologies are limited to specific boundary conditions (e.g. a moderate summer climate, low temperatures during the night, or low ground temperatures, respectively), water-based low-energy cooling may be preferred in many projects. A comparison of the night-ventilated building with a ground-cooled building shows major differences between the two concepts.
In July 1994, the management and executives of Eller Repro+Druck decided to participate in the then still new EU eco-audit. The audit itself is planned for 1996. Two graduate students of Offenburg University of Applied Sciences were given the opportunity to write their diploma theses on the preparation for the eco-audit as external consultants for Eller Repro+Druck. The company (170 employees) operates electronic image processing on Scitex and Mac platforms, currently still conventional plate copying and development, five offset rotary presses, as well as finishing with saddle stitchers and folding machines. The speaker reports on the experience his company gained up to autumn 1995 in preparing for the eco-audit and gives practical tips. Together with the consultants, an assessment of the company's situation was carried out, and measures were planned and partly implemented.
In the dual membrane fuel cell (DM-Cell), protons formed at the anode and oxygen ions formed at the cathode migrate through their respective dense electrolytes to react and form water in a porous composite layer called dual membrane (DM). The DM-Cell concept was experimentally proven (as detailed in Part I of this paper). To describe the electrochemical processes occurring in this novel fuel cell, a mathematical model has been developed which focuses on the DM as the characteristic feature of the DM-Cell. In the model, the porous composite DM is treated as a continuum medium characterized by effective macro-homogeneous properties. To simulate the polarization behavior of the DM-Cell, the potential distribution in the DM is related to the flux of protons and oxygen ions in the conducting phases by introducing kinetic and transport equations into charge balances. Since water pressure may affect the overall formation rate, water mass balances across the DM and transport equations are also considered. The satisfactory comparison with available experimental results suggests that the model provides sound indications on the effects of key design parameters and operating conditions on cell behavior and performance.
The industry of the agave-derived bacanora, in the northern Mexican state of Sonora, has been growing substantially in recent years. However, this higher demand is still subject to a variety of social, legal, cultural, ecological and economic influences. The governmental institutions of the state have tried to encourage sustainable development and certain levels of standardization in the production of bacanora by applying different economic and legal strategies. However, a large portion of this alcoholic beverage is still produced in a traditional and rudimentary fashion. Beyond the quality of the beverage, the lack of proper control by adequate instrumental methods might represent a health risk, as in several cases traditionally distilled beverages can contain elevated levels of harmful substances. The present article describes the qualitative spectral analysis of samples of the traditionally produced distilled beverage bacanora in the range from 0 cm−1 to 3500 cm−1 using a Fourier-transform Raman spectrometer. This particular technique has not previously been explored for the analysis of bacanora, unlike other beverages, including tequila. The proposed instrumental arrangement for the spectral analysis has been built by combining conventional hardware parts (Michelson interferometer, photodiodes, visible laser, etc.) and a set of self-developed evaluation algorithms. The resulting spectral information has been compared to that of pure ethanol samples and to the spectra of different samples of the alcoholic beverage tequila. The proposed instrumental arrangement can be used for the analysis of bacanora.
We report the use of the Raman spectral information of the chemical compound toluene (C7H8) as a reference in the analysis of laboratory-prepared and commercially acquired gasoline-ethanol blends. The ratio behavior of the characteristic Raman lines of toluene and gasoline has enabled the approximate quantification of this additive in commercial gasoline-ethanol mixtures. This ratio behavior has been obtained from the Raman spectra of gasoline-ethanol blends with different proportions of toluene.
All these Raman spectra have been collected using a self-designed, frequency-precise and low-cost Fourier-transform Raman spectrometer (FT-Raman spectrometer) prototype. This FT-Raman prototype has helped to accurately confirm the frequency positions of the main characteristic Raman lines of toluene present in the different gasoline-ethanol samples, analyzed at smaller proportions than those commonly found in commercial gasoline-ethanol blends. The frequency-accuracy validation has been performed by analyzing the same set of toluene samples with two additional state-of-the-art commercial FT-Raman devices. Additionally, the spectral information has been compared, with high correlation coefficients as a result, with the values of the standard Raman spectrum of toluene.
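Such a ratio-based quantification reduces to a simple linear calibration against the lab-prepared blends; a hypothetical sketch (function names and numbers are illustrative, not the authors' procedure):

```python
import numpy as np

def calibrate(ratios, fractions):
    """Least-squares line through (Raman line intensity ratio,
    toluene fraction) points from lab-prepared reference blends."""
    slope, intercept = np.polyfit(ratios, fractions, 1)
    return slope, intercept

def estimate_fraction(ratio, slope, intercept):
    """Invert the calibration for an unknown commercial blend."""
    return slope * ratio + intercept
```

Once calibrated, measuring the toluene/gasoline line-intensity ratio of an unknown sample yields its approximate toluene content directly.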
A simple measuring method for acquiring the radiation pattern of an ultra-wideband Vivaldi antenna is presented. The measurement is performed by combining two identical Vivaldi antennas with some of the intrinsic properties of a stepped-frequency continuous-wave radar (SFCW radar) in the range from 1.0 GHz to 6.0 GHz. A stepper motor provided the azimuthal rotation for one of the antennas from 0° to 360°. The tests have been performed in a conventional environment (laboratory/office) without using an anechoic chamber or absorbing materials, and no special measuring devices have been used either. This method has been tested with different pairs of Vivaldi antennas and can also be used for other antennas (with little or no change in the system), as long as their operational bandwidth is within the frequency range of the SFCW radar.
Keywords — SFCW Radar, Antenna Gain Characterization, Azimuthal Radiation Pattern