In this contribution, we propose a system setup for the detection and classification of objects in autonomous driving applications. The recognition algorithm is based upon deep neural networks operating in the 2D image domain. The results are combined with data of a stereo camera system to finally incorporate the 3D object information into our mapping framework. The detection system runs locally on the onboard CPU of the vehicle. Several network architectures are implemented and evaluated with respect to accuracy and run-time demands for the given camera and hardware setup.
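The abstract does not name the detector network or the stereo pipeline, so the following is only a minimal sketch of the fusion step it describes: taking a 2D detection box and a disparity map from the stereo pair and back-projecting the box centre into 3D camera coordinates. The focal length, baseline, principal point and the synthetic disparity values are hypothetical placeholders, not taken from the paper.

```python
import numpy as np

def box_center_to_3d(box, disparity, f_px, baseline_m, cx, cy):
    """Back-project the centre of a 2D detection box into 3D camera coordinates.

    box        -- (u_min, v_min, u_max, v_max) in pixels, from the 2D detector
    disparity  -- dense disparity map of the left stereo image (pixels)
    f_px       -- focal length in pixels; baseline_m -- stereo baseline in metres
    cx, cy     -- principal point of the left camera
    """
    u = int((box[0] + box[2]) / 2)
    v = int((box[1] + box[3]) / 2)
    d = disparity[v, u]
    if d <= 0:
        return None                       # no valid stereo match at this pixel
    z = f_px * baseline_m / d             # depth from stereo triangulation
    x = (u - cx) * z / f_px               # lateral offset
    y = (v - cy) * z / f_px               # vertical offset
    return np.array([x, y, z])

# hypothetical detection (class label, pixel box) from the 2D network
detection = ("car", (310, 220, 390, 300))
disparity = np.full((480, 640), 8.0)      # synthetic disparity map for illustration
print(box_center_to_3d(detection[1], disparity,
                       f_px=700.0, baseline_m=0.12, cx=320.0, cy=240.0))
```

The resulting 3D point (and, analogously, the box corners) is what would be handed to the mapping framework mentioned in the abstract.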
The paper describes a systematic approach for precise short-time cloud coverage prediction based on an optical system. We present a distinct pre-processing stage that uses a model-based clear-sky simulation to enhance the cloud segmentation in the images. The images are captured by a sky imager with a fish-eye lens to cover a maximum area. After a calibration step, the image is rectified to enable linear prediction of cloud movement. In a subsequent step, the clear-sky model is estimated on current high-dynamic-range images and combined with a threshold-based approach to segment clouds from sky. In the final stage, a multi-hypothesis linear tracking framework estimates cloud movement, velocity and the possible coverage of a given photovoltaic power station. We employ a Kalman filter framework that operates efficiently on the rectified images. The evaluation on real-world data suggests a high coverage prediction accuracy above 75%.
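The paper's multi-hypothesis tracker is not specified in detail here; as an illustration of the Kalman filter part on the rectified images, the sketch below tracks a single cloud-segment centroid with a constant-velocity model. The state layout, noise levels and synthetic centroids are assumptions for the example, not values from the paper.

```python
import numpy as np

class CloudTrackKF:
    """Constant-velocity Kalman filter for one cloud-segment centroid in the
    rectified sky image (the paper's framework tracks several such hypotheses)."""

    def __init__(self, x0, y0, dt=1.0):
        self.x = np.array([x0, y0, 0.0, 0.0])           # state: position and velocity
        self.P = np.eye(4) * 100.0                      # initial uncertainty
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1,  0],
                           [0, 0, 0,  1]], float)       # constant-velocity transition
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], float)        # only the centroid is measured
        self.Q = np.eye(4) * 0.1                        # process noise (assumed)
        self.R = np.eye(2) * 4.0                        # measurement noise (assumed)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]                               # predicted centroid

    def update(self, z):
        y = np.asarray(z, float) - self.H @ self.x      # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P

# one measured cloud centroid per frame (synthetic values)
kf = CloudTrackKF(120.0, 340.0)
for z in [(124, 338), (129, 336), (133, 334)]:
    kf.predict()
    kf.update(z)
print("estimated velocity (px/frame):", kf.x[2:])
```

Extrapolating the estimated velocity forward gives the time at which a tracked cloud would reach the image region of the photovoltaic power station, which is how the coverage prediction in the abstract can be read.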
This paper deals with the detection and segmentation of clouds in high-dynamic-range (HDR) images of the sky, as well as the calculation of the position of the sun at any time of the year. In order to predict the movement of clouds and the solar radiation for a short period of time, the thickness and position of the clouds have to be known as precisely as possible. Consequently, the segmentation algorithm has to provide satisfactory results regardless of weather, illumination and climatic conditions. The segmentation is based on classifying each pixel as either cloud or sky. This classification usually relies on threshold methods, since these are relatively fast to implement and impose a low computational burden. In order to predict if and when the sun will be covered by clouds, the position of the sun in the images has to be determined. For this purpose, the zenith and azimuth angles of the sun are computed and converted into image XY coordinates.
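The abstract does not give the threshold criterion or the lens model, so the sketch below only illustrates the two steps it names: a red/blue-ratio threshold as one common instance of per-pixel cloud/sky classification, and the zenith/azimuth-to-XY conversion under an assumed ideal equidistant fisheye projection with the optical axis at the zenith. The threshold value, focal length and image centre are hypothetical.

```python
import math
import numpy as np

def classify_pixels(rgb, ratio_threshold=0.6):
    """Label each pixel as cloud (True) or sky (False) using a red/blue-ratio
    threshold; the 0.6 value is illustrative, not taken from the paper."""
    r = rgb[..., 0].astype(float)
    b = rgb[..., 2].astype(float) + 1e-6          # avoid division by zero
    return (r / b) > ratio_threshold

def sun_to_pixel(zenith_deg, azimuth_deg, cx, cy, f_px):
    """Map the sun's zenith/azimuth angles to image XY coordinates, assuming an
    equidistant fisheye (r = f * zenith) with azimuth measured clockwise from
    north, which points towards -y in the image."""
    theta = math.radians(zenith_deg)
    phi = math.radians(azimuth_deg)
    r = f_px * theta                              # radial distance from the centre
    return cx + r * math.sin(phi), cy - r * math.cos(phi)

# hypothetical example: sun at 40 deg zenith, 135 deg azimuth on a 1024x1024 image
print(sun_to_pixel(40.0, 135.0, cx=512.0, cy=512.0, f_px=330.0))
sky = np.zeros((8, 8, 3), np.uint8)               # stand-in for an HDR sky image
print(classify_pixels(sky).shape)
```

Comparing the predicted sun pixel with the segmented cloud mask then answers the "if and when the sun will be covered" question posed in the abstract.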
The fisheye camera has been widely studied in the field of ground-based sky imaging and robot vision, since it can capture a wide view of the scene at one time. However, severe image distortion is a major drawback hindering its wider use. To remedy this, this paper proposes a lens calibration and distortion correction method for detecting clouds and forecasting solar radiation. The radial distortion of the fisheye image can then be corrected by incorporating the estimated calibration parameters. Experimental results validate the effectiveness of the proposed method.
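The paper's own calibration procedure is not detailed in the abstract; as one standard way to apply estimated calibration parameters for radial distortion correction, the sketch below uses OpenCV's equidistant fisheye model. The intrinsic matrix, distortion coefficients and the synthetic input frame are placeholders, and the authors' actual model and parameter values may differ.

```python
import cv2
import numpy as np

# K and D would come from a calibration step (e.g. checkerboard images of the
# fisheye camera); the values below are placeholders, not from the paper.
K = np.array([[330.0,   0.0, 512.0],
              [  0.0, 330.0, 512.0],
              [  0.0,   0.0,   1.0]])
D = np.array([[-0.02], [0.005], [-0.001], [0.0002]])   # equidistant-model k1..k4

img = np.zeros((1024, 1024, 3), np.uint8)              # stand-in for an all-sky frame
h, w = img.shape[:2]

# Build the undistortion maps once, then remap every incoming frame.
map1, map2 = cv2.fisheye.initUndistortRectifyMap(
    K, D, np.eye(3), K, (w, h), cv2.CV_16SC2)
undistorted = cv2.remap(img, map1, map2,
                        interpolation=cv2.INTER_LINEAR,
                        borderMode=cv2.BORDER_CONSTANT)
print(undistorted.shape)                               # rectified image, same size
```

Precomputing the remap tables keeps the per-frame cost low, which matters when every captured sky image has to be rectified before cloud detection and motion prediction.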