A novel approach is presented for the synchronization and calibration of a camera and an inertial measurement unit (IMU) in the research-oriented visual-inertial mapping and localization framework maplab. Mapping and localization are based on detecting different features in the environment. In addition to the possibility of creating individual maps, the included algorithms allow merging maps to increase mapping accuracy and obtain large-scale maps. Furthermore, the algorithms can be used to optimize the collected data. The preliminary results show that, after appropriate calibration and synchronization, maplab can be used efficiently for mapping, especially in rooms and small building environments.
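As a rough illustration of the synchronization step, the sketch below estimates a camera-IMU time offset by cross-correlating angular-rate magnitudes, a common technique for this problem. The signals and sampling here are synthetic, and this is not necessarily the exact procedure implemented in maplab.

```python
import numpy as np

def estimate_time_offset(imu_rate, cam_rate, dt):
    """Estimate the camera-IMU time offset by cross-correlating the
    angular-velocity magnitudes of both sensors, both resampled to a
    common rate with sample spacing dt (seconds)."""
    imu = imu_rate - imu_rate.mean()
    cam = cam_rate - cam_rate.mean()
    corr = np.correlate(imu, cam, mode="full")
    lag = np.argmax(corr) - (len(cam) - 1)
    return lag * dt  # negative lag: the camera stream is delayed vs. the IMU

# Synthetic check: the camera signal is the IMU signal shifted by 50 samples.
t = np.linspace(0.0, 10.0, 2000)
imu_signal = np.abs(np.sin(2.0 * np.pi * 0.7 * t)) + 0.05 * np.random.randn(t.size)
cam_signal = np.roll(imu_signal, 50)
print(estimate_time_offset(imu_signal, cam_signal, dt=t[1] - t[0]))  # about -50 * dt
```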
In this contribution, we propose a system setup for the detection and classification of objects in autonomous driving applications. The recognition algorithm is based upon deep neural networks operating in the 2D image domain. The results are combined with data of a stereo camera system to finally incorporate the 3D object information into our mapping framework. The detection system runs locally on the onboard CPU of the vehicle. Several network architectures are implemented and evaluated with respect to accuracy and run-time demands for the given camera and hardware setup.
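To illustrate how a 2D detection can be lifted into 3D with a rectified stereo pair, here is a minimal pinhole-model sketch; the calibration values (fx, fy, cx, cy, baseline) are hypothetical, and the paper's actual pipeline may differ.

```python
import numpy as np

def box_center_to_3d(box, disparity, fx, fy, cx, cy, baseline):
    """Lift the center of a 2D detection box into 3D camera coordinates
    using the stereo disparity at that pixel (pinhole model, rectified pair).

    box: (x_min, y_min, x_max, y_max) in pixels; disparity in pixels."""
    u = 0.5 * (box[0] + box[2])
    v = 0.5 * (box[1] + box[3])
    z = fx * baseline / disparity   # depth from disparity
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

# Hypothetical calibration values, for illustration only.
print(box_center_to_3d((300, 200, 360, 280), disparity=24.0,
                       fx=700.0, fy=700.0, cx=320.0, cy=240.0, baseline=0.12))
```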
The precise positioning of mobile systems is a prerequisite for any autonomous behavior, in industrial environments as well as in field robotics. The paper describes the setup of an experimental platform and its use for the evaluation of simultaneous localization and mapping (SLAM) algorithms. Two approaches are compared. First, a local method based on point cloud matching and the integration of inertial measurement units is evaluated. Successive matching makes it possible to create a three-dimensional point cloud that can be used as a map in subsequent runs. The second approach is a full SLAM algorithm based on graph relaxation models, incorporating the full sensor suite of odometry, inertial sensors, and 3D laser scan data.
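The local point-cloud-matching approach can be sketched with an off-the-shelf ICP variant. The snippet below uses Open3D's point-to-plane ICP as a stand-in for the paper's matching method and accumulates the relative transforms into odometry poses; it is a sketch, not the evaluated system.

```python
import numpy as np
import open3d as o3d

def incremental_icp_odometry(scans, voxel=0.2, max_dist=1.0):
    """Accumulate a global pose by matching each scan to its predecessor
    with point-to-plane ICP; returns a list of 4x4 pose matrices.

    scans: list of o3d.geometry.PointCloud (e.g. via o3d.io.read_point_cloud)."""
    poses = [np.eye(4)]
    prev = None
    for scan in scans:
        pcd = scan.voxel_down_sample(voxel)  # thin out for speed
        pcd.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel, max_nn=30))
        if prev is not None:
            result = o3d.pipelines.registration.registration_icp(
                pcd, prev, max_dist, np.eye(4),
                o3d.pipelines.registration.TransformationEstimationPointToPlane())
            # result.transformation maps the current scan into the previous frame.
            poses.append(poses[-1] @ result.transformation)
        prev = pcd
    return poses
```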
For precise indoor positioning, for example in train stations or shopping malls, the project described here investigates the extent to which local magnetic fields can be used to increase accuracy and robustness. To this end, it is examined whether and how low-cost magnetic field sensors and mobile robot platforms can be used to create maps that enable subsequent navigation, for example with smartphones or other mobile devices.
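As a toy illustration of the magnetic-fingerprinting idea under investigation, the sketch below matches a measured field magnitude against a small hand-made fingerprint map; a real system would use the full field vector, far denser maps, and a probabilistic matcher.

```python
import numpy as np

# Hypothetical fingerprint map: rows of (x, y, field magnitude in microtesla),
# as a mapping robot might collect them on a coarse grid.
fingerprints = np.array([
    [0.0, 0.0, 48.2],
    [1.0, 0.0, 51.7],
    [0.0, 1.0, 44.9],
    [1.0, 1.0, 53.3],
])

def locate(measured_magnitude):
    """Return the map position whose stored field magnitude is closest
    to the measured value (simplistic 1-nearest-neighbor matching)."""
    idx = np.argmin(np.abs(fingerprints[:, 2] - measured_magnitude))
    return fingerprints[idx, :2]

print(locate(52.0))  # -> [1. 0.]
```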
Design and Implementation of a Camera-Based Tracking System for MAV Using Deep Learning Algorithms
(2023)
In recent years, the advancement of micro-aerial vehicles has been rapid, leading to their widespread utilization across various domains due to their adaptability and efficiency. This research paper focuses on the development of a camera-based tracking system specifically designed for low-cost drones. The primary objective of this study is to build a system capable of detecting objects and locating them on a map in real time. Detection and positioning are achieved solely through the utilization of the drone’s camera and sensors. To accomplish this goal, several deep learning algorithms are assessed and adopted according to their suitability for the system. Object detection is based upon a single-shot detector architecture chosen for maximum computation speed, and the tracking is based upon deep-neural-network-based features combined with an efficient sorting strategy. Subsequently, the developed system is evaluated using diverse metrics to determine its detection and tracking performance. To further validate the approach, the system is deployed in the real world to demonstrate its practical use. For this, two distinct scenarios were chosen to adjust the algorithms and system setup: a search and rescue scenario with user interaction and precise geolocalization of missing objects, and a livestock control scenario, showing the capability of surveying individual members and keeping track of their number and area. The results demonstrate that the system is capable of operating in real time, and the evaluation verifies that the implemented system enables precise and reliable determination of detected object positions. The ablation studies show that object identification through small variations in phenotypes is feasible with this approach.
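The "efficient sorting strategy" suggests a SORT-style assignment step. As an illustration of that step (not the authors' implementation, which also uses appearance features), the sketch below associates existing track boxes with new detections via Hungarian matching on IoU.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def associate(tracks, detections, min_iou=0.3):
    """Match track boxes to detection boxes by maximizing total IoU
    (Hungarian assignment); returns (track_idx, det_idx) pairs."""
    cost = np.array([[1.0 - iou(t, d) for d in detections] for t in tracks])
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if 1.0 - cost[r, c] >= min_iou]

tracks = [(100, 100, 150, 160)]
detections = [(400, 50, 440, 90), (104, 102, 154, 163)]
print(associate(tracks, detections))  # -> [(0, 1)]
```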
Evaluation of Deep Learning-Based Neural Network Methods for Cloud Detection and Segmentation
(2021)
This paper presents a systematic approach for accurate short-time cloud coverage prediction based on a machine learning (ML) approach. Based on a newly built omnidirectional ground-based sky camera system, local training and evaluation data sets were created. These were used to train several state-of-the-art deep neural networks for object detection and segmentation. For this purpose, the camera generated a full hemispherical image every 30 min over two months in daylight conditions using a fish-eye lens. From this data set, a subset of images was selected for training and evaluation according to various criteria. Deep neural networks based on the two-stage R-CNN architecture were trained and compared with a U-Net segmentation approach implemented by CloudSegNet. All chosen deep networks were then evaluated and compared according to the local situation.
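For readers unfamiliar with the segmentation side, a miniature two-level U-Net producing per-pixel cloud/sky logits is sketched below; it is far smaller than the networks evaluated in the paper and is not the CloudSegNet architecture.

```python
import torch
import torch.nn as nn

def block(c_in, c_out):
    """Two 3x3 convolutions with ReLU, the basic U-Net building block."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    """Two-level U-Net producing a one-channel cloud/sky logit map."""
    def __init__(self):
        super().__init__()
        self.enc1 = block(3, 16)
        self.enc2 = block(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = block(32, 16)
        self.head = nn.Conv2d(16, 1, 1)

    def forward(self, x):
        e1 = self.enc1(x)             # full-resolution features
        e2 = self.enc2(self.pool(e1)) # half-resolution features
        d = self.dec(torch.cat([self.up(e2), e1], dim=1))  # skip connection
        return self.head(d)           # per-pixel cloud logits

logits = TinyUNet()(torch.randn(1, 3, 64, 64))
print(logits.shape)  # torch.Size([1, 1, 64, 64])
```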
Positioning mobile systems with high accuracy is a prerequisite for intelligent autonomous behavior, both in industrial environments and in field robotics. This paper describes the setup of a robotic platform and its use for the evaluation of simultaneous localization and mapping (SLAM) algorithms. A configuration using a Husky A200 mobile robot and a LiDAR (light detection and ranging) sensor was used to implement the setup. To verify the proposed setup, different scan matching methods for odometry determination in indoor and outdoor environments were tested. An assessment of the accuracy of the baseline 3D-SLAM system and the selected evaluation system is presented by comparing different scenarios and test situations. It was shown that hdl_graph_slam in combination with the LiDAR OS1 and the scan matching algorithms FAST_GICP and FAST_VGICP achieves good mapping results with accuracies of up to 2 cm.
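Accuracy assessments of this kind are commonly reported as absolute trajectory error (ATE). A minimal sketch, assuming corresponding ground-truth and estimated positions are already available:

```python
import numpy as np

def ate_rmse(gt, est):
    """Absolute trajectory error (RMSE) after rigidly aligning the
    estimated positions to ground truth (Kabsch/Umeyama, no scale).
    gt, est: (N, 3) arrays of corresponding positions."""
    gt_c = gt - gt.mean(axis=0)
    est_c = est - est.mean(axis=0)
    u, _, vt = np.linalg.svd(est_c.T @ gt_c)
    s = np.eye(3)
    s[2, 2] = np.sign(np.linalg.det(u @ vt))  # avoid reflections
    r = (u @ s @ vt).T                        # rotation est -> gt
    aligned = est_c @ r.T + gt.mean(axis=0)
    return np.sqrt(np.mean(np.sum((aligned - gt) ** 2, axis=1)))

# Synthetic check: a rotated and shifted copy of the trajectory has ATE ~ 0.
gt = np.cumsum(np.random.randn(100, 3), axis=0)
est = gt @ np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]]).T + 5.0
print(ate_rmse(gt, est))
```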
Evaluation of Kalman Filter Configurations for Robot Localization by Means of Sensor Data Fusion
(2023)
In this work, three different configurations of the Kalman filters developed by Tom Moore for the Robot Operating System are presented. They form the basis for localization via sensor fusion in the ROS framework used here. The goal of this work is the setup and verification of a localization system for the mobile robot Husky A200 by Clearpath Robotics. To this end, the capabilities of the existing system were examined and several versions of localization filters were configured. Finally, the results are verified and compared against each other in different scenarios. For this, the results of a variant of the extended Kalman filter in 2D (EKF2D), a variant of the unscented Kalman filter in 2D (UKF2D), and a variant of the extended Kalman filter in 3D (EKF3D) are verified and compared. The investigations showed that the EKF2D delivers the best and most robust localization results, despite exhibiting a 17.3 % higher final-position deviation than the UKF2D variant. The EKF3D configuration variant chosen in this project is not suitable for meaningful position determination because of its severe inaccuracies in height estimation.
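To make the fusion idea concrete, a minimal planar EKF over the state [x, y, yaw] is sketched below: wheel-odometry velocities drive the prediction, and an absolute IMU yaw measurement corrects it. All noise values are made up, and this is purely illustrative, not the robot_localization implementation evaluated above.

```python
import numpy as np

class PlanarEKF:
    """Minimal EKF over [x, y, yaw]; illustrative only."""

    def __init__(self):
        self.x = np.zeros(3)                  # state [x, y, yaw]
        self.P = np.eye(3) * 0.1              # state covariance
        self.Q = np.diag([0.02, 0.02, 0.01])  # process noise (assumed)
        self.R = np.array([[0.05]])           # yaw measurement noise (assumed)

    def predict(self, v, w, dt):
        """Propagate with linear velocity v and yaw rate w from odometry."""
        yaw = self.x[2]
        self.x += np.array([v * np.cos(yaw) * dt, v * np.sin(yaw) * dt, w * dt])
        F = np.array([[1, 0, -v * np.sin(yaw) * dt],
                      [0, 1,  v * np.cos(yaw) * dt],
                      [0, 0, 1]])
        self.P = F @ self.P @ F.T + self.Q * dt

    def correct_yaw(self, yaw_meas):
        """Fuse an absolute yaw measurement from the IMU."""
        H = np.array([[0.0, 0.0, 1.0]])
        y = np.array([yaw_meas - self.x[2]])
        y[0] = (y[0] + np.pi) % (2 * np.pi) - np.pi  # wrap the residual
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x += (K @ y).ravel()
        self.P = (np.eye(3) - K @ H) @ self.P

ekf = PlanarEKF()
for _ in range(100):                    # drive a gentle arc for one second
    ekf.predict(v=1.0, w=0.1, dt=0.01)
    ekf.correct_yaw(yaw_meas=ekf.x[2])  # stand-in for a real IMU reading
print(ekf.x)
```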
This paper deals with the detection and segmentation of clouds in high-dynamic-range (HDR) images of the sky, as well as with the calculation of the position of the sun at any time of the year. In order to predict the movement of the clouds and the solar radiation for a short period of time, the cloud thickness and position have to be known as precisely as possible. Consequently, the segmentation algorithm has to provide satisfactory results regardless of differing weather, illumination, and climatic conditions. The principle of the segmentation is based on the classification of each pixel as cloud or as sky. This classification is usually based on threshold methods, since these are relatively fast to implement and incur a low computational burden. In order to predict if and when the sun will be covered by clouds, the position of the sun in the images has to be determined. For this purpose, the zenith and azimuth angles of the sun are computed and converted into XY image coordinates.
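Both steps lend themselves to short sketches: a red/blue-ratio threshold for the pixel classification, and an equidistant fisheye projection for placing the solar zenith/azimuth angles on the image. The threshold and calibration values below are assumptions for illustration, not the paper's parameters.

```python
import numpy as np

def cloud_mask(rgb, threshold=0.8):
    """Classify each pixel as cloud (True) or sky (False) with a simple
    red/blue ratio threshold: clouds scatter red and blue about equally,
    while clear sky is strongly blue. rgb: (H, W, 3) float array in [0, 1]."""
    ratio = rgb[..., 0] / np.clip(rgb[..., 2], 1e-6, None)
    return ratio > threshold

def sun_to_pixel(zenith, azimuth, cx, cy, radius):
    """Project solar zenith/azimuth angles (radians) onto the image of an
    equidistant fisheye lens, where the distance from the image center
    grows linearly with the zenith angle. cx, cy, radius describe the
    image circle and are hypothetical calibration values."""
    r = radius * zenith / (np.pi / 2)  # horizon maps to the circle edge
    x = cx + r * np.sin(azimuth)
    y = cy - r * np.cos(azimuth)       # north points up in the image
    return x, y

print(sun_to_pixel(np.radians(30), np.radians(135), cx=512, cy=512, radius=500))
```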