Positioning mobile systems with high accuracy is a prerequisite for intelligent autonomous behavior, both in industrial environments and in field robotics. This paper describes the setup of a robotic platform and its use for the evaluation of simultaneous localization and mapping (SLAM) algorithms. The setup was implemented with a Husky A200 mobile robot and a LiDAR (light detection and ranging) sensor. To verify the proposed setup, different scan matching methods for odometry determination are tested in indoor and outdoor environments. An assessment of the accuracy of the baseline 3D-SLAM system and the selected evaluation system is presented by comparing different scenarios and test situations. It was shown that hdl_graph_slam in combination with the LiDAR OS1 and the scan matching algorithms FAST_GICP and FAST_VGICP achieves good mapping results with accuracies up to 2 cm.
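As a rough illustration of the kind of scan-matching odometry evaluated here, the following sketch chains pairwise registrations of consecutive LiDAR scans into a trajectory. It uses Open3D's point-to-plane ICP as a stand-in for FAST_GICP/FAST_VGICP (which run inside hdl_graph_slam/ROS in the paper); the file list, voxel size, and correspondence distance are illustrative assumptions.

```python
# Minimal sketch: LiDAR scan-matching odometry with Open3D's point-to-plane
# ICP as a stand-in for FAST_GICP/FAST_VGICP. File names and parameters are
# placeholder assumptions, not taken from the paper's evaluation setup.
import numpy as np
import open3d as o3d

def preprocess(cloud, voxel=0.2):
    """Downsample and estimate normals (needed for point-to-plane ICP)."""
    c = cloud.voxel_down_sample(voxel)
    c.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=1.0, max_nn=30))
    return c

def scan_matching_odometry(scan_files, max_corr_dist=0.5):
    """Chain pairwise scan registrations into a trajectory of 4x4 poses."""
    pose = np.eye(4)
    trajectory = [pose.copy()]
    prev = preprocess(o3d.io.read_point_cloud(scan_files[0]))
    for f in scan_files[1:]:
        curr = preprocess(o3d.io.read_point_cloud(f))
        reg = o3d.pipelines.registration.registration_icp(
            curr, prev, max_corr_dist, np.eye(4),
            o3d.pipelines.registration.TransformationEstimationPointToPlane())
        pose = pose @ reg.transformation   # accumulate relative motion
        trajectory.append(pose.copy())
        prev = curr
    return trajectory
```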
This paper deals with the detection and segmentation of clouds on high-dynamic-range (HDR) images of the sky as well as the calculation of the position of the sun at any time of the year. In order to predict the movement of clouds and the radiation of the sun for a short period of time, the clouds' thickness and position have to be known as precisely as possible. Consequently, the segmentation algorithm has to provide satisfactory results regardless of weather, illumination, and climatic conditions. The segmentation is based on classifying each pixel as cloud or as sky. This classification usually relies on threshold methods, since these are relatively fast to implement and have a low computational burden. To predict whether and when the sun will be covered by clouds, the position of the sun in the images has to be determined. For this purpose, the zenith and azimuth angles of the sun are computed and converted into XY coordinates.
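The pixel-wise threshold classification and the angle-to-pixel conversion can be sketched as follows; the red-blue-ratio threshold value and the ideal equidistant fisheye model (r = f·θ) are assumptions for illustration, not values from the paper.

```python
# Minimal sketch: threshold-based cloud/sky classification on an RGB sky
# image, plus a zenith/azimuth -> pixel mapping assuming an ideal equidistant
# fisheye projection. Threshold and focal length are illustrative assumptions.
import numpy as np

def segment_clouds(rgb, rbr_threshold=0.8):
    """Classify each pixel as cloud (True) or sky (False) via red-blue ratio."""
    r = rgb[..., 0].astype(np.float32)
    b = rgb[..., 2].astype(np.float32)
    rbr = r / np.maximum(b, 1.0)   # clouds scatter red and blue similarly,
    return rbr > rbr_threshold     # clear sky is strongly blue (low RBR)

def sun_to_pixel(zenith_rad, azimuth_rad, cx, cy, f_px):
    """Project sun angles to XY image coordinates (equidistant fisheye)."""
    r = f_px * zenith_rad          # radial distance from the image centre
    x = cx + r * np.sin(azimuth_rad)
    y = cy - r * np.cos(azimuth_rad)   # image y axis points down
    return x, y
```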
The fisheye camera has been widely studied in the fields of ground-based sky imaging and robot vision, since it can capture a wide view of the scene at one time. However, serious image distortion is a major drawback hindering its wider use. To remedy this, this paper proposes a lens calibration and distortion correction method for detecting clouds and forecasting solar radiation. Finally, the radial distortion of the fisheye image can be corrected by incorporating the estimated calibration parameters. Experimental results validate the effectiveness of the proposed method.
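A minimal sketch of such a calibration and undistortion pipeline, using OpenCV's cv2.fisheye module rather than the paper's own method; the checkerboard size, image paths, and flags are placeholder assumptions.

```python
# Sketch: fisheye calibration from checkerboard images and radial distortion
# correction with the estimated parameters. "calib/*.png" and the 9x6 board
# are hypothetical; the paper's actual calibration procedure may differ.
import glob
import cv2
import numpy as np

pattern = (9, 6)  # inner corners of the checkerboard (assumption)
objp = np.zeros((1, pattern[0] * pattern[1], 3), np.float32)
objp[0, :, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("calib/*.png"):  # hypothetical image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    ok, corners = cv2.findChessboardCorners(gray, pattern)
    if ok:
        obj_points.append(objp)
        img_points.append(corners.reshape(1, -1, 2))

K = np.zeros((3, 3))
D = np.zeros((4, 1))
cv2.fisheye.calibrate(obj_points, img_points, gray.shape[::-1], K, D,
                      flags=cv2.fisheye.CALIB_RECOMPUTE_EXTRINSIC)

# Correct the radial distortion using the estimated parameters
img = cv2.imread("sky.png")  # hypothetical sky image
map1, map2 = cv2.fisheye.initUndistortRectifyMap(
    K, D, np.eye(3), K, img.shape[1::-1], cv2.CV_16SC2)
undistorted = cv2.remap(img, map1, map2, cv2.INTER_LINEAR)
```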
The need to measure basic aerosol parameters has increased dramatically in the last decade. This is due mainly to their harmful effect on the environment and on public health. Legislation requires that particle emissions and ambient levels, workplace particle concentrations and exposure to them are measured to confirm that the defined limits are met and the public is not exposed to harmful concentrations of aerosols.
Solar irradiance prediction is vital for power management and cost reduction when integrating solar energy. This study works towards ground-image-based solar irradiance prediction, which depends strongly on cloud coverage. The sky images are collected using a ground-based sky imager (fisheye lens). In this work, different algorithms for cloud detection, as a preparatory step for cloud segmentation, are compared.
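Since the compared algorithms are not named here, the following sketch contrasts two common baselines, a fixed red-blue-ratio threshold and an adaptive Otsu threshold on a normalized blue-red index, together with a simple pixel-agreement score.

```python
# Sketch: two baseline cloud detectors and a crude comparison metric. The
# specific algorithms evaluated in the paper are not reproduced here.
import cv2
import numpy as np

def cloud_mask_fixed(rgb, rbr_threshold=0.8):
    """Fixed threshold on the red-blue ratio (assumed value)."""
    r, b = rgb[..., 0].astype(np.float32), rgb[..., 2].astype(np.float32)
    return (r / np.maximum(b, 1.0)) > rbr_threshold

def cloud_mask_otsu(rgb):
    """Adaptive Otsu threshold on the normalised (B - R)/(B + R) index."""
    r, b = rgb[..., 0].astype(np.float32), rgb[..., 2].astype(np.float32)
    nbr = (b - r) / np.maximum(b + r, 1.0)   # high for clear sky
    u8 = cv2.normalize(nbr, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, sky = cv2.threshold(u8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return sky == 0                           # invert: clouds have low NBR

def agreement(mask_a, mask_b):
    """Fraction of pixels on which both detectors agree."""
    return float(np.mean(mask_a == mask_b))
```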
The aim of this work is the application and evaluation of a method to visually detect markers at a distance of up to five meters and determine their real-world position. Combinations of cameras and lenses with different parameters were studied to determine the optimal configuration. Based on this configuration, camera images were taken after proper calibration. These images are then transformed into a bird's eye view using a homography matrix. The homography matrix is calculated with four point pairs as well as with coordinate transformations. The obtained images show the ground plane undistorted, making it possible to convert a pixel position into a real-world position with a conversion factor. The proposed approach helps to effectively create data sets for training neural networks for navigation purposes.
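A minimal sketch of the homography-based bird's-eye-view transform and the pixel-to-world conversion; the four point pairs, image path, and pixels-per-metre scale are illustrative assumptions.

```python
# Sketch: bird's-eye-view warp from four point pairs and conversion of a
# pixel position to a real-world position via a constant scale factor.
import cv2
import numpy as np

# Four image points (px) and their known ground-plane positions (metres);
# the coordinates below are made up for illustration.
img_pts = np.float32([[412, 710], [868, 705], [955, 420], [330, 428]])
world_pts = np.float32([[0.0, 0.0], [1.0, 0.0], [1.0, 2.0], [0.0, 2.0]])

scale = 200.0  # output resolution: 200 px per metre (assumption)
H = cv2.getPerspectiveTransform(img_pts, world_pts * scale)

img = cv2.imread("camera_view.png")  # hypothetical input image
birds_eye = cv2.warpPerspective(img, H, (250, 450))  # canvas > mapped area

def pixel_to_world(u, v):
    """Convert a bird's-eye pixel to a real-world position in metres."""
    return u / scale, v / scale
```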
Object Detection and Mapping with Unmanned Aerial Vehicles Using Convolutional Neural Networks
(2021)
Significant progress has been made in the field of deep learning through intensive research over the last decade. So-called convolutional neural networks are an essential component of this research. In this type of neural network, the mathematical convolution operator is used to extract characteristics or anomalies. The purpose of this work is to investigate the extent to which it is possible, under certain initial settings, to feed aerial recordings and flight data of Unmanned Aerial Vehicles (UAVs) into a neural network architecture and to detect and map an object. Using the computed contours or the dimensions of the so-called bounding boxes, the position of the objects can be determined relative to the current UAV location.
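The step from a detected bounding box to an object position relative to the UAV can be sketched with a pinhole model for a nadir-looking camera; the focal length, image size, and altitude below are placeholder assumptions, and the attitude/GNSS fusion of the actual system is omitted.

```python
# Sketch: bounding-box centre -> ground offset relative to the UAV, assuming
# a calibrated pinhole camera pointing straight down. All parameters are
# illustrative assumptions, not values from the paper.
import numpy as np

def box_to_offset(box, altitude_m, f_px=1000.0, img_w=1920, img_h=1080):
    """Return (right, forward) ground offset in metres for a bounding box
    given as (x_min, y_min, x_max, y_max) in pixels."""
    u = (box[0] + box[2]) / 2.0 - img_w / 2.0   # box centre, image coords
    v = (box[1] + box[3]) / 2.0 - img_h / 2.0
    # Similar triangles: a pixel offset of u at focal length f corresponds
    # to a ground offset of u * altitude / f directly below the UAV.
    return u * altitude_m / f_px, -v * altitude_m / f_px
```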
The applicability of characteristics of local magnetic fields for more precise localization of people and/or objects in indoor environments, such as railway stations, airports, exhibition halls, showrooms, or shopping centers, is considered. An investigation was carried out to find out whether and how low-cost magnetic field sensors and mobile robot platforms can be used to create maps that improve the accuracy and robustness of later navigation with smartphones or other devices.
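One possible shape of such a magnetic map, sketched under strong assumptions (a 2D grid of field-magnitude fingerprints and nearest-magnitude matching); the paper's actual mapping pipeline is not specified here.

```python
# Sketch: build a grid map of magnetic-field magnitudes from a robot survey
# and match a query measurement to the closest stored fingerprint. Grid size
# and data layout are assumptions for illustration only.
import numpy as np

def build_map(samples, cell=0.5):
    """samples: iterable of (x, y, |B|) rows -> dict of grid cell -> mean |B|."""
    grid = {}
    for x, y, b in samples:
        key = (int(x // cell), int(y // cell))
        grid.setdefault(key, []).append(b)
    return {k: float(np.mean(v)) for k, v in grid.items()}

def match(grid, b_measured):
    """Return the grid cell whose stored magnitude is closest to b_measured.
    (A real system would match sequences, not single scalar readings.)"""
    return min(grid, key=lambda k: abs(grid[k] - b_measured))
```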
This study focused on enhancing odometry estimation for self-driving cars using LiDAR-based sensor technology. The project involves integrating LiDAR sensors, which generate detailed 3D point clouds of the environment, into the car's sensor suite. This integration can be useful when GPS-based odometry estimation is not accurate enough. These point clouds are then used to accurately estimate the vehicle's movement and position using learning-based and model-based odometry estimation methods.
Evaluation of Kalman Filter Configurations for Robot Localization Using Sensor Data Fusion
(2023)
This work presents three different configurations of the Kalman filters developed by Tom Moore for the Robot Operating System (ROS). These form the basis for localization via sensor fusion within the ROS framework used here. The goal of this work is the setup and verification of a localization system for a Husky A200 mobile robot by Clearpath Robotics. To this end, the capabilities of the existing system were examined and several versions of localization filters were configured. Finally, the results are verified and compared across different scenarios. For this purpose, the results of a variant of the extended Kalman filter in 2D (EKF2D), a variant of the unscented Kalman filter in 2D (UKF2D), and a variant of the extended Kalman filter in 3D (EKF3D) are verified and compared. The investigations showed that the EKF2D delivers the best and most robust localization results, even though it exhibits a 17.3% higher final-position deviation than the UKF2D variant. The EKF3D configuration variant chosen in this project is not suitable for meaningful position estimation because of its severe inaccuracies in height estimation.
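For orientation, a heavily simplified planar EKF in the spirit of robot_localization's filters; the state, motion model, and noise values below are illustrative assumptions and far smaller than Tom Moore's full 15-state implementation.

```python
# Sketch: planar EKF fusing a unicycle motion model with an absolute 2D pose
# measurement (e.g. from scan matching). Not the robot_localization code.
import numpy as np

class PlanarEKF:
    def __init__(self):
        self.x = np.zeros(3)        # state: [x, y, yaw]
        self.P = np.eye(3) * 0.1    # state covariance

    def predict(self, v, w, dt, q=0.01):
        """Propagate state with a unicycle model (v: m/s, w: rad/s)."""
        th = self.x[2]
        self.x += np.array([v * np.cos(th) * dt, v * np.sin(th) * dt, w * dt])
        F = np.array([[1.0, 0.0, -v * np.sin(th) * dt],
                      [0.0, 1.0,  v * np.cos(th) * dt],
                      [0.0, 0.0, 1.0]])             # motion-model Jacobian
        self.P = F @ self.P @ F.T + np.eye(3) * q   # assumed process noise

    def update_pose(self, z, r=0.05):
        """Fuse an absolute [x, y, yaw] measurement (yaw wrapping omitted)."""
        H = np.eye(3)
        S = H @ self.P @ H.T + np.eye(3) * r        # assumed measurement noise
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x += K @ (z - self.x)
        self.P = (np.eye(3) - K @ H) @ self.P
```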