The approach presented in this work enables the localization of rail vehicles in topological maps using only an eddy current sensor system (WSS). Localization primarily requires identifying the track currently travelled, for which various features stored in a map are used, as well as the distance covered, which is determined by counting the passed sleepers. These features are extracted from the WSS signal by specially defined virtual sensors and matched against the reference data of the given topological map using a Bayesian formalism. This virtual-sensor-based approach allows the sensor signal processing to be parallelized and sensors to be integrated flexibly into the localization system. The ability to detect turnouts with a hit rate of 99% makes it possible to track the vehicle position over the entire route using only the measurement data provided by the WSS.
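The Bayesian matching of virtual-sensor observations against the map can be sketched as a discrete Bayes update over candidate tracks; the track count, likelihood values, and function names below are illustrative stand-ins, not the paper's implementation.

```python
import numpy as np

def bayes_update(prior, likelihoods):
    """Fuse one virtual-sensor observation into the belief over tracks."""
    posterior = prior * likelihoods
    return posterior / posterior.sum()

# Belief over three candidate tracks, initially uniform (illustrative).
belief = np.ones(3) / 3.0
# Likelihood of the observed feature (e.g. a detected turnout) on each track.
obs = np.array([0.7, 0.2, 0.1])
belief = bayes_update(belief, obs)
```

Repeating this update for each virtual sensor lets the beliefs concentrate on the track that best explains all observations.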
For precise indoor positioning, for example in railway stations or shopping centres, the described project investigates to what extent local magnetic fields can be used to increase accuracy and robustness. To this end, it is examined whether and how low-cost magnetic field sensors and mobile robot platforms can be used to create maps that enable later navigation, for example with smartphones or other mobile devices.
Solar irradiance prediction is vital for power management and cost reduction when integrating solar energy. This study works towards ground-image-based solar irradiance prediction, which depends heavily on cloud coverage. The sky images are collected with a ground-based sky imager (fisheye lens). In this work, different algorithms for cloud detection, a preparation step for cloud segmentation, are compared.
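A common baseline for cloud detection in ground-based sky images is a red-blue-ratio threshold; the sketch below illustrates the idea with made-up pixel values and a threshold that would need tuning to the actual imager, and is not necessarily one of the algorithms compared in the paper.

```python
import numpy as np

def cloud_mask_rbr(rgb, threshold=0.8):
    """Mark pixels as cloud where the red/blue ratio exceeds the threshold:
    clouds scatter red and blue almost equally, clear sky is strongly blue."""
    r = rgb[..., 0].astype(float)
    b = rgb[..., 2].astype(float)
    ratio = r / np.maximum(b, 1e-6)  # guard against division by zero
    return ratio > threshold

sky_px = np.array([[[60, 120, 220]]], dtype=np.uint8)     # bluish clear-sky pixel
cloud_px = np.array([[[200, 200, 210]]], dtype=np.uint8)  # greyish cloud pixel
```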
Comparison of Time Warping Algorithms for Rail Vehicle Velocity Estimation in Low Speed Scenarios
(2017)
This study focuses on the autonomous navigation and mapping of indoor environments using a drone equipped only with a monocular camera and height measurement sensors. A visual SLAM algorithm was employed to generate a preliminary map of the environment and to determine the drone's position within the map. A deep neural network was utilized to generate a depth image from the monocular camera's input, which was subsequently transformed into a point cloud to be projected into the map. By aligning the depth point cloud with the map, 3D occupancy grid maps were constructed by using ray tracing techniques to get a precise depiction of obstacles and the surroundings. Due to the absence of IMU data from the low-cost drone for the SLAM algorithm, the created maps are inherently unscaled. However, preliminary tests with relative navigation in unscaled maps have revealed potential accuracy issues, which can only be overcome by incorporating additional information from the given sensors for scale estimation.
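The back-projection of a monocular depth image into a camera-frame point cloud can be sketched with the pinhole model; the depth values and intrinsics below are placeholders, not those of the drone's camera.

```python
import numpy as np

def depth_to_pointcloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (metres) into camera-frame 3D points
    using the pinhole model; intrinsics are illustrative placeholders."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)
```

The resulting points are still in the camera frame; aligning them with the SLAM map pose, as described above, is what allows the occupancy grid to be built.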
Modern industrial production is heavily dependent on efficient workflow processes and automation. The steady flow of raw materials as well as the separation of vital parts and semi-finished products are at the core of these automated procedures. Commonly used systems for this work are bowl feeders, which separate the parts and material by a combination of mechanical vibration and friction. The production of these tools, especially the design of the ramping spiral, is delicate and time-consuming work, as the shape, slope, and material must be carefully adjusted for the corresponding parts. In this work, we propose an automated approach, making use of optimization procedures from artificial intelligence, to design the spiral ramps of the bowl feeders. To this end, the whole system and the considered parts are physically simulated, and the optimized geometry is subsequently exported into a CAD system for manufacturing or 3D printing. The use of evolutionary optimization requires developing a mathematical model of the whole setup and finding an efficient representation of its integral features.
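The evolutionary search over geometry parameters can be illustrated with a minimal (1+1) evolution strategy; the toy fitness function below stands in for the physics simulation, and all parameter values are assumptions rather than the paper's setup.

```python
import random

def one_plus_one_es(fitness, x0, sigma=0.1, iters=200, seed=1):
    """Minimal (1+1) evolution strategy: mutate the current solution and
    keep the offspring whenever it is no worse (fitness is minimized)."""
    rng = random.Random(seed)
    x, fx = list(x0), fitness(x0)
    for _ in range(iters):
        cand = [xi + rng.gauss(0.0, sigma) for xi in x]
        fc = fitness(cand)
        if fc <= fx:
            x, fx = cand, fc
    return x, fx

# Toy fitness: squared distance of the parameter vector from a known optimum,
# standing in for a simulated bowl-feeder performance measure.
best, val = one_plus_one_es(lambda p: sum((pi - 1.0) ** 2 for pi in p), [0.0, 0.0])
```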
The need to measure basic aerosol parameters has increased dramatically in the last decade. This is due mainly to their harmful effect on the environment and on public health. Legislation requires that particle emissions and ambient levels, workplace particle concentrations and exposure to them are measured to confirm that the defined limits are met and the public is not exposed to harmful concentrations of aerosols.
The precise positioning of mobile systems is a prerequisite for any autonomous behavior, in an industrial environment as well as in field robotics. The paper describes the setup of an experimental platform and its use for the evaluation of simultaneous localization and mapping (SLAM) algorithms. Two approaches are compared. First, a local method based on point cloud matching and the integration of inertial measurement units is evaluated. Subsequent matching makes it possible to create a three-dimensional point cloud that can be used as a map in later runs. The second approach is a full SLAM algorithm based on graph relaxation models, incorporating the full sensor suite of odometry, inertial sensors, and 3D laser scan data.
Positioning mobile systems with high accuracy is a prerequisite for intelligent autonomous behavior, both in field robotics and in industrial environments. This paper describes the setup of a robot platform and its use for testing and evaluating Kalman filter configurations. The setup was realized with a Husky A200 mobile robot and a LiDAR (light detection and ranging) sensor. To verify the proposed setup, five different scenarios were devised, with which the filters were tested for their performance in terms of positioning accuracy.
Evaluation of Kalman Filter Configurations for Robot Localization Using Sensor Data Fusion
(2023)
This work presents three different configurations of the Kalman filters developed by Tom Moore for the Robot Operating System. These form the basis for localization via sensor fusion in the ROS framework used. The goal of this work is the setup and verification of a localization system for the mobile robot Husky A200 by Clearpath Robotics. To this end, the capabilities of the existing system were examined and several versions of localization filters were configured. Finally, the results are verified and compared against each other in various scenarios. The compared variants are an Extended Kalman Filter in 2D (EKF2D), an Unscented Kalman Filter in 2D (UKF2D), and an Extended Kalman Filter in 3D (EKF3D). The investigations showed that the EKF2D delivers the best and most robust localization results, even though its final position deviation is 17.3% higher than that of the UKF2D variant. The EKF3D configuration chosen in this project is not suitable for meaningful position estimation because of its severe inaccuracies in height estimation.
The visual-inertial mapping and localization system maplab is to be analyzed through its implementation and a subsequent meaningful evaluation. Mapping and localization are based on the detection of environmental features. Besides creating individual maps, it is also possible to merge several maps, thereby mapping extensive areas and using them for further data analysis. Carrying out and evaluating the experiments in different application scenarios shows that maplab is particularly suitable for mapping rooms and small building complexes. Map merging further offers the option of increasing the information content of maps, which improves the effectiveness of subsequent localization. With growing map size, however, geometric inconsistencies increase.
A novel approach for synchronization and calibration of a camera and an inertial measurement unit (IMU) in the research-oriented visual-inertial mapping and localization framework maplab is presented. Mapping and localization are based on detecting different features in the environment. In addition to the possibility of creating single maps, the included algorithms allow merging maps to increase mapping accuracy and obtain large-scale maps. Furthermore, the algorithms can be used to optimize the collected data. The preliminary results show that, after appropriate calibration and synchronization, maplab can be used efficiently for mapping, especially in rooms and small building environments.
The visual-inertial mapping and localization system maplab is analyzed by its implementation and subsequent evaluation. Mapping and localization are based on environmental feature detection. In addition to creating maps, there is also the option of fusing several maps, thereby mapping extensive areas and using them for further data analysis. In this way, various software tools can be used to optimize the existing data sets.
Two sensor components are needed: an inertial measurement unit (IMU) and a monochrome camera, which are combined in a hardware rig and put into operation for the analysis of the visual-inertial system. System calibration is crucial for precision and system functioning and is based on nonlinear dynamic state estimation. This ensures the best possible estimate of the position of the environmental features and the map. Maplab is particularly suitable for mapping rooms or small building complexes, as the implementation and evaluation of the results in different application scenarios show. Special emphasis is placed on the evaluation of larger scenarios, which shows that the system struggles to maintain geometric consistency and thus to provide an accurate map.
Object Detection and Mapping with Unmanned Aerial Vehicles Using Convolutional Neural Networks
(2021)
Significant progress has been made in the field of deep learning through intensive research over the last decade. So-called convolutional neural networks are an essential component of this research. In this type of neural network, the mathematical convolution operator is used to extract characteristics or anomalies. The purpose of this work is to investigate the extent to which it is possible, in certain initial settings, to feed aerial recordings and flight data of unmanned aerial vehicles (UAVs) into the architecture of a neural network and to detect and map an object. Using the calculated contours or dimensions of the so-called bounding boxes, the position of the objects can be determined relative to the current UAV location.
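The step from a bounding-box centre to an object's direction relative to the UAV can be sketched as a pinhole bearing computation; the image width, field of view, and function name below are illustrative assumptions, not the paper's pipeline.

```python
import math

def bbox_bearing_deg(cx_px, img_width, hfov_deg):
    """Horizontal bearing (degrees) of a bounding-box centre relative to the
    camera axis, for an ideal pinhole with the given horizontal field of view."""
    # Focal length in pixels from the horizontal field of view.
    f_px = (img_width / 2.0) / math.tan(math.radians(hfov_deg) / 2.0)
    return math.degrees(math.atan((cx_px - img_width / 2.0) / f_px))
```

Combined with the UAV's position and heading from the flight data, such a bearing (plus a range estimate from the box dimensions) would place the detected object on the map.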
The fisheye camera has been widely studied in the field of ground-based sky imagery and robot vision, since it can capture a wide view of the scene at once. However, serious image distortion is a major drawback hindering its wider use. To remedy this, this paper proposes a lens calibration and distortion correction method for detecting clouds and forecasting solar radiation. Finally, the radial distortion of the fisheye image can be corrected by incorporating the estimated calibration parameters. Experimental results validate the effectiveness of the proposed method.
This paper deals with the detection and segmentation of clouds on high-dynamic-range (HDR) images of the sky as well as the calculation of the position of the sun at any time of the year. In order to predict the movement of clouds and the radiation of the sun for a short period of time, the clouds' thickness and position have to be known as precisely as possible. Consequently, the segmentation algorithm has to provide satisfactory results regardless of different weather, illumination, and climatic conditions. The principle of the segmentation is based on the classification of each pixel as cloud or as sky. This classification is usually based on threshold methods, since these are relatively fast to implement and show a low computational burden. In order to predict if and when the sun will be covered by clouds, the position of the sun on the images has to be determined. For this purpose, the zenith and azimuth angles of the sun are determined and converted into XY coordinates.
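The conversion of solar zenith and azimuth angles into image XY coordinates can be sketched for an ideal equidistant fisheye projection; the real imager would need its calibrated projection model, so the image centre, horizon radius, and orientation conventions below are assumptions.

```python
import math

def sun_to_pixel(zenith_deg, azimuth_deg, cx, cy, r_horizon):
    """Map solar zenith/azimuth to image XY for an ideal equidistant fisheye
    centred at (cx, cy); r_horizon is the image radius at 90 deg zenith.
    North is assumed up in the image, azimuth clockwise from north."""
    r = r_horizon * zenith_deg / 90.0  # equidistant: radius proportional to zenith
    az = math.radians(azimuth_deg)
    x = cx + r * math.sin(az)  # east component
    y = cy - r * math.cos(az)  # image y grows downwards
    return x, y
```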
The paper describes a systematic approach for a precise short-time cloud coverage prediction based on an optical system. We present a distinct pre-processing stage that uses a model-based clear sky simulation to enhance the cloud segmentation in the images. The images are based on a sky imager system with fisheye lens optics to cover a maximum area. After a calibration step, the image is rectified to enable linear prediction of cloud movement. In a subsequent step, the clear sky model is estimated on actual high dynamic range images and combined with a threshold-based approach to segment clouds from sky. In the final stage, a multi-hypothesis linear tracking framework estimates cloud movement, velocity, and possible coverage of a given photovoltaic power station. We employ a Kalman filter framework that efficiently operates on the rectified images. The evaluation on real-world data suggests a high coverage prediction accuracy above 75%.
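The Kalman-filter tracking on the rectified images can be sketched as a constant-velocity filter for one cloud centroid; the state layout, noise levels, and time step below are illustrative assumptions, not the paper's tuning, and the full multi-hypothesis framework is omitted.

```python
import numpy as np

dt = 1.0  # frame interval (illustrative)
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)  # state: [x, y, vx, vy]
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)  # only position is measured
Q = np.eye(4) * 0.01                       # process noise (assumed)
R = np.eye(2) * 1.0                        # measurement noise (assumed)

def kf_step(x, P, z):
    """One predict/update cycle for a single cloud centroid."""
    x = F @ x                       # predict state
    P = F @ P @ F.T + Q             # predict covariance
    S = H @ P @ H.T + R             # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
    x = x + K @ (z - H @ x)         # correct with the measurement
    P = (np.eye(4) - K @ H) @ P
    return x, P
```

Extrapolating the filtered velocity forward in time yields the predicted moment at which a cloud covers the power station.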
The applicability of characteristics of local magnetic fields for more precise determination of localization of subjects and/or objects in indoor environments, such as railway stations, airports, exhibition halls, showrooms, or shopping centers, is considered. An investigation has been carried out to find out whether and how low-cost magnetic field sensors and mobile robot platforms can be used to create maps that improve the accuracy and robustness of later navigation with smartphones or other devices.
The aim of this work is the application and evaluation of a method to visually detect markers at a distance of up to five meters and determine their real-world position. Combinations of cameras and lenses with different parameters were studied to determine the optimal configuration. Based on this configuration, camera images were taken after proper calibration. These images are then transformed into a bird's-eye view using a homography matrix. The homography matrix is calculated with four point pairs as well as with coordinate transformations. The obtained images show the ground plane undistorted, making it possible to convert a pixel position into a real-world position with a conversion factor. The proposed approach helps to effectively create data sets for training neural networks for navigation purposes.
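The pixel-to-world conversion via a homography can be sketched as follows; the matrix here is a toy pure-scaling homography standing in for one estimated from four point pairs or coordinate transformations as described above.

```python
import numpy as np

def pixel_to_world(H, px, py):
    """Map an image pixel to ground-plane coordinates with a 3x3 homography
    (homogeneous multiply, then dehomogenize)."""
    p = H @ np.array([px, py, 1.0])
    return p[:2] / p[2]

# Toy homography: pure scaling, i.e. a conversion factor of 0.01 m per pixel.
H_toy = np.diag([0.01, 0.01, 1.0])
```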
In the presented approach, the impact point of the dart is to be determined by cross-correlating audio signals. The impact of the dart produces a characteristic sound, which is converted into electrical signals by several microphones placed in a specific arrangement around the dartboard. Using the speed of sound and the time differences the sound wave needs to reach the individual microphones, the impact point is then calculated.
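The time-difference estimation by cross-correlation can be sketched with two impulse-like toy signals; the microphone geometry, sampling rate, and signal shapes are illustrative assumptions.

```python
import numpy as np

def tdoa_seconds(sig_a, sig_b, fs):
    """Delay of sig_a relative to sig_b (seconds), taken from the peak of the
    full cross-correlation; positive if the sound reached microphone A later."""
    corr = np.correlate(sig_a, sig_b, mode="full")
    lag = int(np.argmax(corr)) - (len(sig_b) - 1)
    return lag / fs

# Toy signals: the impact sound reaches microphone A 10 samples later than B.
fs = 48000
a = np.zeros(100); a[50] = 1.0
b = np.zeros(100); b[40] = 1.0
```

Multiplying such a delay by the speed of sound gives a range difference between microphone pairs, from which the impact point can be triangulated.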