One of the challenges for autonomous driving in general is to detect objects in the car's camera images. In the Audi Autonomous Driving Cup (AADC), these objects include other cars, adult and child pedestrians, and emergency vehicle lighting. We show that recent deep learning networks can detect these objects reliably on the limited hardware of the model cars. The same deep network is also used to detect road features such as middle lines, stop lines, and even complete crossings. The best results are achieved using Faster R-CNN with Inception v2, with an overall accuracy of 0.84 at 7 Hz.
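A detector head such as Faster R-CNN emits scored class/box candidates that are then filtered by confidence before use. The following minimal sketch shows that post-processing step; the class names, threshold, and data structure are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical post-processing of detector output: keep only
# detections whose confidence exceeds a threshold. The class names
# and the 0.5 threshold are assumptions for illustration.

def filter_detections(detections, threshold=0.5):
    """Return the detections with score >= threshold."""
    return [d for d in detections if d["score"] >= threshold]

raw = [
    {"class": "car", "score": 0.91, "box": (10, 20, 60, 80)},
    {"class": "child_pedestrian", "score": 0.34, "box": (5, 5, 15, 30)},
    {"class": "emergency_light", "score": 0.77, "box": (40, 0, 55, 10)},
]

kept = filter_detections(raw)
print([d["class"] for d in kept])  # -> ['car', 'emergency_light']
```

In practice the threshold trades recall against false detections and is tuned per class on a validation set.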
This paper presents a thorough analysis of using Proximal Policy Optimization (PPO) to learn kick behaviors with simulated NAO robots in the SimSpark environment. The analysis covers the influence of PPO hyperparameters, network size, training setups, and performance in real games. We believe we improve the state of the art in four main points: first, the kicks are learned with a toed version of the NAO robot; second, we improve reliability with respect to the kickable area and the avoidance of falls; third, the kick can be parameterized with desired distance and direction as inputs to the deep network; and fourth, the approach allows the learned behavior to be integrated seamlessly into soccer games. The result is a significant improvement in the general level of play.
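At the core of PPO is the clipped surrogate objective, which bounds how far a policy update may move the probability ratio between the new and old policy. A minimal sketch of that loss for a single sample, independent of any particular network or the paper's training setup:

```python
def ppo_clip_loss(ratio, advantage, eps=0.2):
    # PPO clipped surrogate: the pessimistic minimum of the raw and
    # clipped policy-ratio objectives; eps = 0.2 is the common default.
    unclipped = ratio * advantage
    clipped = max(min(ratio, 1 + eps), 1 - eps) * advantage
    return -min(unclipped, clipped)  # negated because we minimize

# A ratio far above 1+eps with positive advantage gets clipped:
print(ppo_clip_loss(1.5, 2.0))  # -> -2.4 (clipped at 1.2 * 2.0)
print(ppo_clip_loss(0.9, 2.0))  # -> -1.8 (unclipped)
```

The clipping removes the incentive to push the ratio outside [1 − eps, 1 + eps], which is what makes PPO comparatively robust to the hyperparameter choices the abstract investigates.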
For the RoboCup Soccer AdultSize League, the humanoid robot Sweaty uses a single fully convolutional neural network to detect and localize the ball, opponents, and other features on the field of play. This neural network can be trained from scratch in a few hours and performs in real time within the constraints of the computational resources available on the robot. Processing an image takes approximately 11 ms. Balls and goal posts are recalled in 99 % of all cases (94.5 % for all objects), with a false detection rate of 1.2 % (5.2 % for all objects). The object detection and localization helped Sweaty become a finalist at RoboCup 2017 in Nagoya.
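The reported figures combine two standard detection metrics: recall (fraction of actual objects found) and false detection rate (fraction of reported detections that are wrong). A small sketch with made-up counts, purely to make the two definitions concrete:

```python
def recall(true_positives, actual_positives):
    """Fraction of actual objects that were detected."""
    return true_positives / actual_positives

def false_detection_rate(false_positives, total_detections):
    """Fraction of reported detections that are spurious."""
    return false_positives / total_detections

# Illustrative counts only, not the paper's raw data:
print(round(recall(990, 1000), 3))               # -> 0.99
print(round(false_detection_rate(12, 1000), 3))  # -> 0.012
```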
The humanoid Sweaty was a finalist in this year's RoboCup soccer championship (AdultSize). To optimize the gait and stability, data on the forces and torques in the ankle joints would be helpful. This paper describes the development of a six-axis force and torque sensor for the humanoid robot Sweaty. Since commercial sensors do not meet the demands of Sweaty's ankle joints, a new sensor was developed. As measuring devices, we used strain gauges and custom electronics based on an acam PS09. The geometry was analyzed with the FEM program ANSYS to find optimal dimensions for the measuring beams. In addition, ANSYS was used to optimize the positions of the strain gauges on the beam.
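A six-axis force/torque sensor typically maps its six strain-gauge bridge signals to a wrench [Fx, Fy, Fz, Tx, Ty, Tz] through a 6×6 calibration matrix obtained by loading the sensor with known forces. The sketch below shows only that generic linear mapping; the identity matrix stands in for a real calibration and is not the sensor described in the paper.

```python
def wrench_from_gauges(C, v):
    """Map six gauge-bridge voltages v to [Fx, Fy, Fz, Tx, Ty, Tz]
    via a 6x6 calibration matrix C (row-major): w = C @ v."""
    return [sum(C[i][j] * v[j] for j in range(6)) for i in range(6)]

# Identity calibration as a placeholder; a real matrix comes from
# calibrating against known loads and also compensates cross-talk
# between the measuring beams.
C = [[1.0 if i == j else 0.0 for j in range(6)] for i in range(6)]
v = [0.1, 0.0, 2.5, 0.0, 0.0, 0.3]
print(wrench_from_gauges(C, v))  # -> [0.1, 0.0, 2.5, 0.0, 0.0, 0.3]
```

The off-diagonal entries of a real calibration matrix capture exactly the cross-coupling that the FEM-optimized beam geometry tries to minimize.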
One of the challenges in humanoid robotics is motion control. Interacting with humans requires impedance control algorithms, as well as handling the closed kinematic chains that occur when both feet touch the ground. However, pure impedance control for fully autonomous robots is difficult to realize, as it needs very precise sensors for the force and speed of the actuated parts, as well as very high sampling rates for the controller input signals. Both requirements lead to a complex and heavyweight design, resulting in heavy machines that are unusable in RoboCup Soccer competitions.
A lightweight motor controller was therefore developed that can be used for admittance and impedance control, as well as for model predictive control algorithms, to further improve the gait of the robot.
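The idea behind admittance control is to feed measured contact force into a simulated mass-spring-damper and command the resulting position, so the stiff position-controlled joint behaves compliantly. A minimal one-dimensional sketch with assumed virtual parameters (not the controller developed in the paper):

```python
def admittance_step(x, v, f_ext, dt, m=1.0, d=10.0, k=100.0):
    """One semi-implicit Euler step of the virtual dynamics
    m*a + d*v + k*x = f_ext; returns the new position command."""
    a = (f_ext - d * v - k * x) / m
    v = v + a * dt
    x = x + v * dt
    return x, v

# Under a constant 5 N push, the commanded position settles at
# f_ext / k = 0.05 m, i.e. the joint yields like a spring.
x, v = 0.0, 0.0
for _ in range(2000):          # 2 s at a 1 kHz control rate
    x, v = admittance_step(x, v, f_ext=5.0, dt=0.001)
print(round(x, 3))             # -> 0.05
```

Choosing m, d, and k sets how soft the joint feels, which is exactly the tuning knob needed for safe human interaction.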
Autonomous humanoid robots require lightweight, high-torque, and high-speed actuators to be able to walk and run. For conventional gears with a fixed gear ratio, the product of torque and velocity is constant; desired motions, on the other hand, require maximum torque and speed. In this paper we show that a variable gear ratio makes it possible to vary the relation between torque and velocity. This is achieved by introducing systems of rods and levers to move the joints of our humanoid robot "Sweaty II". With a variable gear ratio, low speed and high torque can be achieved for those joint angles that require this motion mode, whereas high speed and low torque can be realized for those joint angles where this is favorable for the desired motion.
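For any linkage, the effective gear ratio at a given posture is the derivative of the joint angle with respect to the motor angle, and output torque scales with its inverse. The toy kinematics below (a sine mapping, an assumption standing in for Sweaty II's real rod-and-lever geometry) shows how the ratio, and hence the torque/speed trade-off, varies with posture:

```python
import math

def joint_angle(motor_angle):
    # Toy linkage kinematics: a nonlinear motor-to-joint mapping
    # standing in for the actual rod-and-lever geometry (assumption).
    return math.sin(motor_angle)

def gear_ratio(motor_angle, h=1e-6):
    # Effective ratio d(joint)/d(motor) by central difference;
    # output torque scales with 1 / gear_ratio.
    return (joint_angle(motor_angle + h)
            - joint_angle(motor_angle - h)) / (2 * h)

# Near 0 the linkage is "fast" (ratio ~1, low torque amplification);
# near pi/2 it is "slow" (ratio ~0, high torque amplification).
print(round(gear_ratio(0.0), 3), round(gear_ratio(math.pi / 2), 3))
```

A fixed-ratio gear would give a constant derivative here; the posture-dependent derivative is precisely what lets one joint angle range favor speed and another favor torque.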
In many application areas, Deep Reinforcement Learning (DRL) has led to breakthroughs. In curriculum learning, the machine learning algorithm is not presented with examples randomly, but in a meaningful order of increasing difficulty. This has been used in many application areas to further improve the results of learning systems or to reduce their learning time. Such approaches range from learning plans created manually by domain experts to those created automatically; the automated creation of learning plans remains one of the biggest challenges. In this work, we investigate an approach, Double Deep Reinforcement Learning (DDRL), in which a trainer learns in parallel and analogously to the student in order to automatically create a learning plan for the student. Three reward functions based on the learner's reward, Friendly, Adversarial, and Dynamic, are compared. The evaluation domain is kicking with variable distance, direction, and relative ball position in the SimSpark simulated soccer environment. As a result, Statistic Curriculum Learning (SCL) performs better than a random curriculum with respect to training time and result quality. DDRL reaches a quality comparable to the baseline and significantly outperforms it in shorter trainings in the distance-direction subdomain, reducing the number of required training cycles by almost 50 %.
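The core mechanism of an automatic curriculum is a feedback rule: task difficulty rises while the student succeeds and falls while it struggles. The sketch below is a deliberately simple, hypothetical rule illustrating that idea; it is not the paper's DDRL trainer, whose adjustments are themselves learned rather than hand-coded.

```python
def next_difficulty(difficulty, recent_success_rate,
                    target=0.7, step=0.05):
    """Hypothetical dynamic curriculum rule: raise the task
    difficulty when the student's recent success rate exceeds a
    target, lower it otherwise; clamp to [0, 1]."""
    if recent_success_rate > target:
        difficulty += step
    else:
        difficulty -= step
    return max(0.0, min(1.0, difficulty))

# Difficulty tracks the student's performance over a few episodes:
d = 0.5
for success_rate in [0.9, 0.9, 0.4, 0.8]:
    d = next_difficulty(d, success_rate)
print(round(d, 2))  # -> 0.6
```

A learned trainer replaces this fixed rule with a policy optimized against one of the trainer reward functions (Friendly, Adversarial, or Dynamic) compared in the abstract.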