Evaluierung von Kalman Filter Konfigurationen zur Roboterlokaliserung mittels Sensordatenfusion
(2023)
This work presents three different configurations of the Kalman filters developed by Tom Moore for the Robot Operating System (ROS). They form the basis for localization via sensor fusion within the ROS framework used here. The goal of this work is the setup and verification of a localization system for the mobile robot platform Husky A200 by Clearpath Robotics. To this end, the capabilities of the existing system were examined and several versions of localization filters were configured. Finally, the results are verified and compared against each other in different scenarios: a 2D variant of the extended Kalman filter (EKF2D), a 2D variant of the unscented Kalman filter (UKF2D), and a 3D variant of the extended Kalman filter (EKF3D). The investigations showed that the EKF2D delivers the best and most robust localization results, even though its final position deviation is 17.3 % higher than that of the UKF2D variant. The EKF3D configuration chosen in this project is not suitable for meaningful position estimation because of its strong inaccuracies in height estimation.
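The predict/update cycle underlying all three filter variants can be illustrated with a minimal one-dimensional sketch: odometry velocity drives the prediction, and a position measurement corrects it. The noise values and time step below are hypothetical, not the configuration used in the paper.

```python
# Minimal 1D Kalman filter sketch of the sensor-fusion idea behind
# robot_localization-style filters. q, r, dt are hypothetical values.

def kalman_step(x, p, u, z, q=0.05, r=0.5, dt=0.1):
    """One predict/update cycle: u = odometry velocity, z = position fix."""
    # Predict: propagate the state with the velocity input, grow uncertainty.
    x_pred = x + u * dt
    p_pred = p + q
    # Update: blend prediction and measurement via the Kalman gain.
    k = p_pred / (p_pred + r)
    x_new = x_pred + k * (z - x_pred)
    p_new = (1.0 - k) * p_pred
    return x_new, p_new

# Synthetic run: constant velocity 0.2 m/s, consistent position fixes.
x, p = 0.0, 1.0
for t in range(50):
    true_pos = 0.2 * 0.1 * (t + 1)
    x, p = kalman_step(x, p, u=0.2, z=true_pos)
```

The estimate converges to the true position while the covariance settles at a steady-state value; an EKF or UKF generalizes the same cycle to nonlinear motion and measurement models in 2D or 3D.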
Positioning mobile systems with high accuracy is a prerequisite for intelligent autonomous behavior, both in industrial environments and in field robotics. This paper describes the setup of a robotic platform and its use for the evaluation of simultaneous localization and mapping (SLAM) algorithms. A configuration using a Husky A200 mobile robot and a LiDAR (light detection and ranging) sensor was used to implement the setup. For verification of the proposed setup, different scan matching methods for odometry determination were tested in indoor and outdoor environments. An assessment of the accuracy of the baseline 3D-SLAM system and the selected evaluation system is presented by comparing different scenarios and test situations. It was shown that hdl_graph_slam in combination with the LiDAR OS1 and the scan matching algorithms FAST_GICP and FAST_VGICP achieves good mapping results with accuracies of up to 2 cm.
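The core operation behind scan-matching odometry of the kind evaluated here (ICP and its GICP/VGICP variants) is the rigid alignment of one scan onto another. The sketch below shows the closed-form 2D least-squares alignment for a single step with known correspondences; real scan matchers iterate this together with nearest-neighbor search, and GICP additionally weights points by local covariances.

```python
import math

# One point-to-point scan-alignment step in 2D (known correspondences),
# the building block of ICP-style odometry. Data below is synthetic.

def align_2d(src, dst):
    """Return (theta, tx, ty) mapping src onto dst in the least-squares sense."""
    n = len(src)
    cxs = sum(p[0] for p in src) / n
    cys = sum(p[1] for p in src) / n
    cxd = sum(p[0] for p in dst) / n
    cyd = sum(p[1] for p in dst) / n
    num = den = 0.0
    for (sx, sy), (dx, dy) in zip(src, dst):
        ax, ay = sx - cxs, sy - cys   # centered source point
        bx, by = dx - cxd, dy - cyd   # centered target point
        num += ax * by - ay * bx      # cross term -> sin(theta)
        den += ax * bx + ay * by      # dot term   -> cos(theta)
    theta = math.atan2(num, den)
    c, s = math.cos(theta), math.sin(theta)
    tx = cxd - (c * cxs - s * cys)    # translation = dst centroid - R * src centroid
    ty = cyd - (s * cxs + c * cys)
    return theta, tx, ty

# Synthetic check: rotate a scan by 0.3 rad and shift it by (1.0, -0.5).
scan = [(0.0, 0.0), (1.0, 0.0), (1.0, 2.0), (-1.0, 1.0)]
c, s = math.cos(0.3), math.sin(0.3)
moved = [(c * x - s * y + 1.0, s * x + c * y - 0.5) for x, y in scan]
theta, tx, ty = align_2d(scan, moved)
```

With exact correspondences the transform is recovered exactly; in practice the accuracy of the recovered pose is what the indoor/outdoor odometry evaluation measures.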
Design and Implementation of a Camera-Based Tracking System for MAV Using Deep Learning Algorithms
(2023)
In recent years, the advancement of micro-aerial vehicles has been rapid, leading to their widespread utilization across various domains due to their adaptability and efficiency. This research paper focuses on the development of a camera-based tracking system specifically designed for low-cost drones. The primary objective of this study is to build a system capable of detecting objects and locating them on a map in real time. Detection and positioning are achieved solely through the utilization of the drone’s camera and sensors. To accomplish this goal, several deep learning algorithms are assessed and adopted based on their suitability for the system. Object detection is based upon a single-shot detector architecture chosen for maximum computation speed, and the tracking is based upon a combination of deep neural-network-based features and an efficient sorting strategy. Subsequently, the developed system is evaluated using diverse metrics to determine its detection and tracking performance. To further validate the approach, the system is employed in the real world to show its possible deployment. For this, two distinct scenarios were chosen to adjust the algorithms and system setup: a search and rescue scenario with user interaction and precise geolocalization of missing objects, and a livestock control scenario, showing the capability of surveying individual members and keeping track of their number and area. The results demonstrate that the system is capable of operating in real time, and the evaluation verifies that the implemented system enables precise and reliable determination of detected object positions. The ablation studies prove that object identification through small variations in phenotypes is feasible with our approach.
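The association step that links per-frame detections to existing tracks can be sketched with the geometric IoU cue alone. The paper combines deep appearance features with an efficient sorting strategy; the greedy matcher below is a simplified stand-in that shows only the bounding-box overlap part, with made-up box coordinates.

```python
# Greedy IoU association: match each existing track to its
# best-overlapping detection. Boxes are (x1, y1, x2, y2); data is synthetic.

def iou(a, b):
    """Intersection over union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def associate(tracks, detections, thresh=0.3):
    """Greedily pair tracks with detections in order of decreasing IoU."""
    pairs = sorted(((iou(t, d), ti, di)
                    for ti, t in enumerate(tracks)
                    for di, d in enumerate(detections)), reverse=True)
    matches, used_t, used_d = [], set(), set()
    for score, ti, di in pairs:
        if score >= thresh and ti not in used_t and di not in used_d:
            matches.append((ti, di))
            used_t.add(ti)
            used_d.add(di)
    return matches

tracks = [(0, 0, 10, 10), (20, 20, 30, 30)]
dets = [(21, 19, 31, 29), (1, 0, 11, 10)]
matches = associate(tracks, dets)
```

Appearance features enter this scheme by replacing or augmenting the IoU score with a feature-distance term, which is what makes re-identification after occlusion possible.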
The visual-inertial mapping and localization system maplab is analyzed through its implementation and a subsequent meaningful evaluation. Mapping and localization are based on the detection of environmental features. Besides creating individual maps, there is also the option of merging several maps, which makes it possible to map extensive areas and use them for further data analysis. The execution and assessment of the results in different application scenarios show that maplab is particularly suitable for mapping rooms and small building complexes. Map merging further offers the option of increasing the information content of maps, which improves the effectiveness of a subsequent localization. With growing map size, however, geometric inconsistencies increase.
A novel approach for synchronization and calibration of a camera and an inertial measurement unit (IMU) in the research-oriented visual-inertial mapping and localization framework maplab is presented. Mapping and localization are based on detecting different features in the environment. In addition to the possibility of creating individual maps, the included algorithms allow merging maps to increase mapping accuracy and obtain large-scale maps. Furthermore, the algorithms can be used to optimize the collected data. The preliminary results show that, after appropriate calibration and synchronization, maplab can be used efficiently for mapping, especially in rooms and small building environments.
The visual-inertial mapping and localization system maplab is analyzed through its implementation and subsequent evaluation. Mapping and localization are based on environmental feature detection. In addition to creating maps, there is also the option of fusing several maps, and thus of mapping extensive areas and using them for further data analysis. In this way, various software tools can be used to optimize the existing data sets.
Two sensor components are needed: an inertial measurement unit (IMU) and a monochrome camera, which are combined on a hardware rig and put into operation for the analysis of the visual-inertial system. System calibration is crucial for precision and system functioning and is based on nonlinear dynamic state estimation. This ensures the best possible estimate of the positions of the environmental features and of the map. Maplab is particularly suitable for mapping rooms or small building complexes, as the implementation and evaluation of the results in different application scenarios show. Special emphasis is laid on the evaluation of larger scenarios, in which it is shown that the system struggles to maintain geometric consistency and thus to provide an accurate map.
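Camera-IMU synchronization of the kind described above is often found by cross-correlating the angular rate reported by the gyroscope with the rotation rate estimated from the camera, and picking the time offset that maximizes their agreement. The sketch below illustrates that idea on synthetic signals; the signal model and the five-sample offset are made up for the example.

```python
import math

# Time-offset estimation by cross-correlation: shift one rate signal
# against the other and keep the lag with the highest inner product.
# Signals here are synthetic sinusoids, not real sensor data.

def best_lag(a, b, max_lag):
    """Return the shift of b (in samples) that best aligns it with a."""
    def score(lag):
        pairs = [(a[i], b[i - lag]) for i in range(len(a))
                 if 0 <= i - lag < len(b)]
        return sum(x * y for x, y in pairs)
    return max(range(-max_lag, max_lag + 1), key=score)

imu = [math.sin(0.1 * i) for i in range(200)]   # gyro angular rate
cam = imu[5:]                                    # camera rate, lagging 5 samples
lag = best_lag(imu, cam, max_lag=20)
```

Once the lag is known, camera timestamps can be corrected before the nonlinear state estimation, which is what makes the joint calibration well-conditioned.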
Object Detection and Mapping with Unmanned Aerial Vehicles Using Convolutional Neural Networks
(2021)
Significant progress has been made in the field of deep learning through intensive research over the last decade. So-called convolutional neural networks are an essential component of this research. In this type of neural network, the mathematical convolution operator is used to extract characteristics or anomalies. The purpose of this work is to investigate the extent to which it is possible, in certain initial settings, to feed aerial recordings and flight data of Unmanned Aerial Vehicles (UAVs) into the architecture of a neural network and to detect and map an object. Using the calculated contours or dimensions of the so-called bounding boxes, the position of the objects can be determined relative to the current UAV location.
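The last step, turning a bounding box into a position relative to the UAV, can be sketched with a pinhole camera model and the flight altitude. The focal length, image size, and nadir-camera assumption below are hypothetical illustration values, not the paper's setup.

```python
# Project a detected bounding-box center through a pinhole model onto flat
# ground, assuming a camera looking straight down (nadir). All intrinsics
# here are hypothetical.

def box_center_to_offset(box, altitude, f_px=800.0, w=1280, h=720):
    """Return the (east, north) ground offset in metres relative to the UAV."""
    u = (box[0] + box[2]) / 2.0 - w / 2.0   # pixel offset from image center
    v = (box[1] + box[3]) / 2.0 - h / 2.0
    # Similar triangles: ground_offset / altitude = pixel_offset / focal length.
    east = u / f_px * altitude
    north = -v / f_px * altitude            # image v grows downward
    return east, north

east, north = box_center_to_offset((700, 300, 740, 340), altitude=50.0)
```

Combining this offset with the UAV's GNSS position and heading yields the object's map coordinates; a tilted camera would additionally require rotating the ray by the gimbal attitude.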
The fisheye camera has been widely studied in the field of ground-based sky imagery and robot vision, since it can capture a wide view of the scene at one time. However, serious image distortion is a major drawback hindering its wider use. To remedy this, this paper proposes a lens calibration and distortion correction method for detecting clouds and forecasting solar radiation. Finally, the radial distortion of the fisheye image can be corrected by incorporating the estimated calibration parameters. Experimental results validate the effectiveness of the proposed method.
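Radial distortion correction of the kind described can be sketched with a one-parameter polynomial model, where the distorted radius is r_d = r_u · (1 + k1 · r_u²) and the inverse is found by fixed-point iteration. The coefficient k1 would come from the lens calibration; the value used below is hypothetical.

```python
# Invert a one-parameter radial distortion model by fixed-point iteration.
# k1 is a hypothetical coefficient; a real value comes from calibration.

def undistort_radius(r_d, k1=-0.2, iters=20):
    """Solve r_d = r_u * (1 + k1 * r_u**2) for r_u."""
    r_u = r_d
    for _ in range(iters):
        r_u = r_d / (1.0 + k1 * r_u * r_u)
    return r_u

def undistort_point(x, y, cx=0.5, cy=0.5, k1=-0.2):
    """Undistort a normalized image point about the principal point (cx, cy)."""
    dx, dy = x - cx, y - cy
    r_d = (dx * dx + dy * dy) ** 0.5
    if r_d == 0.0:
        return x, y
    scale = undistort_radius(r_d, k1) / r_d
    return cx + dx * scale, cy + dy * scale

ux, uy = undistort_point(0.9, 0.5)   # point moves outward for barrel distortion
```

Fisheye lenses typically need higher-order terms or an angle-based model, but the same invert-the-forward-model structure carries over.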
This paper deals with the detection and segmentation of clouds on high-dynamic-range (HDR) images of the sky, as well as the calculation of the position of the sun at any time of the year. In order to predict the movement of clouds and the radiation of the sun for a short period of time, the clouds' thickness and position have to be known as precisely as possible. Consequently, the segmentation algorithm has to provide satisfactory results regardless of different weather, illumination and climatic conditions. The principle of the segmentation is based on the classification of each pixel as cloud or as sky. This classification is usually based on threshold methods, since these are relatively fast to implement and show a low computational burden. In order to predict if and when the sun will be covered by clouds, the position of the sun on the images has to be determined. For this purpose, the zenith and azimuth angles of the sun are determined and converted into XY coordinates.
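The angle-to-pixel conversion mentioned last can be sketched for an all-sky camera with an equidistant fisheye projection, where the image radius grows linearly with the zenith angle. The image center and horizon radius below are hypothetical values that would come from the camera calibration.

```python
import math

# Map the sun's (zenith, azimuth) angles onto pixel XY coordinates of an
# all-sky image, assuming an equidistant fisheye projection. The image
# center and horizon radius are hypothetical calibration values.

def sun_to_xy(zenith_deg, azimuth_deg, cx=960, cy=960, r_horizon=900):
    """Project (zenith, azimuth) onto the sky image; azimuth 0 = north."""
    r = r_horizon * zenith_deg / 90.0    # equidistant model: r grows with zenith
    az = math.radians(azimuth_deg)
    x = cx + r * math.sin(az)            # east points right in the image
    y = cy - r * math.cos(az)            # north points up in the image
    return x, y

x, y = sun_to_xy(45.0, 180.0)            # sun halfway up the sky, due south
```

Comparing the resulting sun pixel with the segmented cloud mask along the clouds' motion direction then answers whether and when the sun will be covered.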
The paper describes a systematic approach for precise short-term cloud coverage prediction based on an optical system. We present a distinct pre-processing stage that uses a model-based clear-sky simulation to enhance the cloud segmentation in the images. The images are based on a sky imager system with fisheye lens optics to cover a maximum area. After a calibration step, the image is rectified to enable linear prediction of cloud movement. In a subsequent step, the clear-sky model is estimated on actual high-dynamic-range images and combined with a threshold-based approach to segment clouds from sky. In the final stage, a multi-hypothesis linear tracking framework estimates cloud movement, velocity and possible coverage of a given photovoltaic power station. We employ a Kalman filter framework that efficiently operates on the rectified images. The evaluation on real-world data suggests high coverage prediction accuracy above 75%.
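Once a tracked cloud's position and velocity are estimated on the rectified image, the coverage question reduces to a line-circle intersection: when does the cloud's track first reach the station? The sketch below shows that final prediction step with made-up numbers; the motion estimate itself would come from the Kalman filter described in the paper.

```python
# Predict when a cloud (modeled as a disc of given radius on the rectified
# image) first reaches a station location, given its estimated position and
# constant velocity. All values are hypothetical.

def time_to_coverage(cloud_xy, velocity_xy, station_xy, radius):
    """Return time until the cloud edge reaches the station, or None."""
    rx = station_xy[0] - cloud_xy[0]
    ry = station_xy[1] - cloud_xy[1]
    vx, vy = velocity_xy
    speed2 = vx * vx + vy * vy
    if speed2 == 0.0:
        return None                      # stationary cloud never arrives
    # Time of closest approach: project relative position onto the velocity.
    t_closest = (rx * vx + ry * vy) / speed2
    if t_closest < 0.0:
        return None                      # cloud moving away from the station
    # Distance from the station to the track at closest approach.
    dx = rx - vx * t_closest
    dy = ry - vy * t_closest
    miss2 = dx * dx + dy * dy
    if miss2 > radius * radius:
        return None                      # track passes too far from the station
    back = (radius * radius - miss2) ** 0.5 / speed2 ** 0.5
    return t_closest - back              # edge arrives before closest approach

t = time_to_coverage((0.0, 0.0), (2.0, 0.0), (100.0, 0.0), radius=10.0)
```

Running this for each track hypothesis, weighted by its likelihood, yields the coverage probability over the prediction horizon.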