Time-of-Flight Cameras Enabling Collaborative Robots for Improved Safety in Medical Applications
(2020)
Human-robot collaboration is increasingly used in industrial applications and is finding its way into medical applications. Industrial robots used for human-robot collaboration cannot detect obstacles from a distance. This paper introduces the idea of using wireless technology to connect a Time-of-Flight camera to off-the-shelf industrial robots. This way, the robot can detect obstacles at distances of up to five meters. Connecting Time-of-Flight cameras to robots increases safety in human-robot collaboration by detecting obstacles before a collision. After reviewing the state of the art, the authors elaborate the requirements for such a system. The Time-of-Flight camera from Heptagon works in a range of up to five meters and can connect to the control unit of the robot via a wireless connection.
Efficient collaborative robotic applications need a combination of speed and separation monitoring and power and force limiting operations. While most collaborative robots have built-in sensors for power and force limiting, none have built-in sensor systems for speed and separation monitoring. This paper proposes a system for speed and separation monitoring directly from the gripper of the robot. It can monitor separation distances of up to three meters. We used single-pixel Time-of-Flight sensors to measure the separation distance between the gripper and the nearest obstacle perpendicular to it. This is the first system capable of measuring separation distances of up to three meters directly from the robot's gripper.
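A minimal sketch of how such gripper-mounted monitoring could be polled in software, assuming a simple stop/slow-down policy; the sensor interface (`read_tof_sensors`), the speed command (`set_robot_speed`), and the protective distance value are hypothetical placeholders, and only the three-meter range is taken from the abstract:

```python
# Sketch: speed and separation monitoring from the gripper with
# single-pixel Time-of-Flight sensors. Sensor driver and robot speed
# interface are hypothetical placeholders.

MAX_RANGE_M = 3.0            # maximum separation distance covered by the sensors (from the abstract)
PROTECTIVE_DISTANCE_M = 1.2  # assumed protective separation distance

def read_tof_sensors():
    """Return one distance reading in meters per single-pixel ToF sensor.
    Replace with the actual sensor driver; dummy values for illustration."""
    raw = [2.8, 1.9, 3.0]
    return [min(d, MAX_RANGE_M) for d in raw]

def monitoring_step(set_robot_speed):
    """One monitoring cycle: stop inside the protective distance,
    otherwise scale the speed with the remaining margin."""
    separation = min(read_tof_sensors())  # closest obstacle seen by any sensor
    if separation < PROTECTIVE_DISTANCE_M:
        set_robot_speed(0.0)
    else:
        margin = (separation - PROTECTIVE_DISTANCE_M) / (MAX_RANGE_M - PROTECTIVE_DISTANCE_M)
        set_robot_speed(min(1.0, margin))

if __name__ == "__main__":
    monitoring_step(lambda v: print(f"speed factor: {v:.2f}"))
```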
Human-robot collaboration plays an important role in industrial production processes. The ISO/TS 15066 defines four different methods of collaboration between humans and robots. So far, no robotic system has been available that incorporates all four collaboration methods at once. Especially for speed and separation monitoring, there has been no sensor system that can easily be attached directly to an off-the-shelf industrial robot arm and that is capable of detecting obstacles at distances from a few millimeters up to five meters. This paper presents first results of using a 3D Time-of-Flight camera directly on an industrial robot arm for obstacle detection in human-robot collaboration. We attached a Visionary-T camera from SICK to the flange of a KUKA LBR iiwa 7 R800. We evaluated the images in MATLAB and found that the setup detects obstacles reliably at distances from 0.5 m up to 5 m.
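A minimal sketch of the kind of depth-frame evaluation described above, written with NumPy rather than MATLAB; only the 0.5 m to 5 m working range is taken from the abstract, while the frame size, noise threshold, and synthetic test frame are assumptions:

```python
# Sketch: detect obstacles in a single depth frame from a flange-mounted
# 3D Time-of-Flight camera by thresholding the working range.
import numpy as np

NEAR_M, FAR_M = 0.5, 5.0   # working range reported for the flange-mounted camera

def detect_obstacles(depth_m: np.ndarray, min_pixels: int = 50):
    """Return (obstacle_found, distance_m) for one depth frame in meters."""
    valid = (depth_m >= NEAR_M) & (depth_m <= FAR_M)   # pixels inside the working range
    if np.count_nonzero(valid) < min_pixels:            # ignore isolated noise pixels
        return False, None
    return True, float(depth_m[valid].min())            # distance to the closest obstacle

if __name__ == "__main__":
    frame = np.full((144, 176), 6.0)   # synthetic frame, everything out of range
    frame[60:80, 80:100] = 1.7         # synthetic obstacle at 1.7 m
    print(detect_obstacles(frame))     # -> (True, 1.7)
```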
Safe Detection of Humans in Human-Robot Collaboration with Time-of-Flight Cameras
(2017)
Time-of-Flight Cameras Enabling Collaborative Robots for Improved Safety in Medical Applications
(2017)
Human-robot collaboration is increasingly used in industrial applications and is finding its way into medical applications. Industrial robots used for human-robot collaboration cannot detect obstacles from a distance. This paper introduces the idea of using wireless technology to connect a Time-of-Flight camera to off-the-shelf industrial robots. This way, the robot can detect obstacles at distances of up to five meters. Connecting Time-of-Flight cameras to robots increases safety in human-robot collaboration by detecting obstacles before a collision. After reviewing the state of the art, the authors elaborate the requirements for such a system. The Time-of-Flight camera from Heptagon works in a range of up to five meters and can connect to the control unit of the robot via a wireless connection.
In safety-critical applications, wireless technologies are not widespread, mainly due to reliability and latency requirements. This paper presents a new wireless architecture that allows customizing the latency and reliability for every single participant within the network. The architecture makes it possible to build a network of inhomogeneous participants with different reliability and latency requirements. The TDMA scheme with TDD as the duplex method is economical with resources, so participants with different processing and energy resources are able to take part.
In medical applications, wireless technologies are not widespread. Today they are mainly used in non-latency-critical applications where reliability can be guaranteed through retransmission protocols and error-correction mechanisms. Retransmission protocols over the disturbed, shared wireless channel increase latency, so they are not sufficient for replacing latency-critical wired connections within operating rooms, such as foot switches. Current research aims to improve reliability through the physical characteristics of the wireless channel by using diversity methods and more robust modulation. This paper presents an architecture for building a reliable network. The architecture allows devices with different reliability, latency, and energy-consumption requirements to participate. Furthermore, reliability, latency, and energy consumption are scalable for every single participant.
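As a rough illustration of per-participant customization, the sketch below assigns TDMA slots according to each participant's latency requirement and gives latency-critical devices redundant slots; the scheduling policy, frame timing, and participant list are assumptions made for illustration, not the architecture described in the papers above:

```python
# Illustrative sketch only: assigning TDMA slots so that latency-critical
# participants are served first and get redundant slots for reliability.

SLOT_MS = 1          # assumed slot length
SUPERFRAME_MS = 10   # assumed superframe length (10 slots)

participants = [
    {"name": "foot switch",    "max_latency_ms": 10,  "redundant_slots": 2},
    {"name": "status display", "max_latency_ms": 100, "redundant_slots": 1},
]

def build_schedule(participants):
    """Assign slots inside one superframe, most latency-critical first."""
    schedule, slot = [], 0
    for p in sorted(participants, key=lambda p: p["max_latency_ms"]):
        for _ in range(p["redundant_slots"]):
            if slot * SLOT_MS >= SUPERFRAME_MS:
                raise RuntimeError("superframe is full")
            schedule.append((slot * SLOT_MS, p["name"]))
            slot += 1
    return schedule

for start_ms, name in build_schedule(participants):
    print(f"slot at {start_ms} ms -> {name}")
```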
3D printing offers customisation capabilities for the suspensions of oscillators in vibration energy harvesters. Adjusting printing parameters or geometry allows influencing dynamic properties such as the resonance frequency or bandwidth of the oscillator. This paper presents simulation results and measurements for a spiral-shaped suspension printed with polylactic acid (PLA) at different layer heights. Eigenfrequencies have been simulated and measured, and damping ratios have been determined experimentally.
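The abstract does not state which method was used to determine the damping ratios; a common route for such free-decay measurements is the logarithmic decrement, shown here for reference:

\[
\delta = \frac{1}{n}\,\ln\frac{x(t)}{x(t+nT)}, \qquad
\zeta = \frac{\delta}{\sqrt{4\pi^{2} + \delta^{2}}},
\]

where \(x(t)\) and \(x(t+nT)\) are oscillation amplitudes \(n\) periods \(T\) apart, \(\delta\) is the logarithmic decrement, and \(\zeta\) the damping ratio.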
This paper presents the development of a capacitive level sensor for robotics applications, designed to measure liquid levels during a pouring process. The proposed sensor design applies the advantages of guard electrodes in combination with passive shielding to increase resistance against external influences. This is important for reliable operation in rapidly changing measurement environments, as they occur in the field of robotics. The non-contact liquid level measurement avoids contamination and complies with food guidelines, so the designed sensor can be utilized in gastronomic applications. Two versions of the sensor were simulated, fabricated, and compared: the first is based on copper electrodes, and the other is fully 3D printed with electrodes made of conductive polylactic acid (PLA).
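As background, an idealized parallel-plate model (not the actual electrode geometry of the paper) illustrates why the capacitance of such a sensor grows linearly with the fill level \(h\):

\[
C(h) = \frac{\varepsilon_{0}\,w}{d}\bigl(\varepsilon_{r,\mathrm{liq}}\,h + \varepsilon_{r,\mathrm{air}}\,(H-h)\bigr),
\]

where \(H\) is the electrode height, \(w\) the electrode width, \(d\) the electrode spacing, and \(\varepsilon_{r,\mathrm{liq}}\), \(\varepsilon_{r,\mathrm{air}}\) the relative permittivities of the liquid and of air.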
The development of a 3D printed force sensor for a gripper was studied, using an embedded constantan wire as the sensing element. The first section explains the state of the art. The main section describes the modeling, simulation, and verification of a sensor element for a three-point bending test carried out in accordance with DIN EN ISO 178. The Fused Filament Fabrication (FFF) 3D printing process used to manufacture the sensor samples in combination with an industrial robot is shown. A comparison between theory and practice is considered in detail. Finally, an outlook is given on the integration of the sensor element into gripper jaws.
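For reference, DIN EN ISO 178 evaluates the three-point bending test with the standard flexural stress and strain relations:

\[
\sigma_{f} = \frac{3\,F\,L}{2\,b\,h^{2}}, \qquad
\varepsilon_{f} = \frac{6\,s\,h}{L^{2}},
\]

where \(F\) is the applied force, \(L\) the support span, \(b\) and \(h\) the width and thickness of the specimen, and \(s\) the deflection at mid-span.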
Separation Estimation with Thermal Cameras for Separation Monitoring in Human-Robot Collaboration
(2022)
Human-robot collaborative applications have the drawback of being less efficient than their non-collaborative counterparts. One of the main reasons is that the robot has to slow down when a human being is within its operating space. There are different approaches to dynamic speed and separation monitoring in human-robot collaborative applications. One approach additionally differentiates between human and non-human objects to increase efficiency in speed and separation monitoring. This paper proposes to estimate the separation distance by measuring the temperature of the approaching object. Measurements show that the measured temperature of a human being decreases by 1 °C per meter of distance from the sensor. This allows an estimation of the separation distance between a robotic system and a human being.
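Assuming the reported linear relation holds over the sensor's working range, the separation distance can be estimated directly from the measured temperature:

\[
\hat{d} \approx \frac{T_{\mathrm{ref}} - T_{\mathrm{meas}}}{k}, \qquad k \approx 1\ ^{\circ}\mathrm{C/m},
\]

where \(T_{\mathrm{ref}}\) is the temperature measured for a human at close range, \(T_{\mathrm{meas}}\) the currently measured temperature, and \(k\) the reported temperature decrease per meter; the reference temperature and the validity range are assumptions not stated in the abstract.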
Differentiation between human and non-human objects can increase the efficiency of human-robot collaborative applications. This paper proposes using convolutional neural networks to classify objects in robotic applications. The body temperature of human beings is used to classify humans and to estimate the distance to the sensor. Using image classification with convolutional neural networks, it is possible to detect humans in the surroundings of a robot at distances of up to five meters with low-cost, low-weight thermal cameras. Using transfer learning, we trained GoogLeNet and MobileNetV2; the results show accuracies of 99.48 % and 99.06 %, respectively.
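A sketch of such a transfer-learning setup, here with torchvision's MobileNetV2; the abstract does not state which framework was used, and the frozen-backbone strategy, hyperparameters, and dummy data below are assumptions:

```python
# Sketch: transfer learning for human / non-human classification on
# thermal images with torchvision's MobileNetV2.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 2  # human / non-human

model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
for param in model.features.parameters():
    param.requires_grad = False  # keep the pretrained feature extractor fixed

# replace the final classifier layer for the two thermal-image classes
model.classifier[1] = nn.Linear(model.last_channel, NUM_CLASSES)

optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimisation step on a batch of 3-channel (replicated) thermal images."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    dummy_images = torch.randn(4, 3, 224, 224)   # placeholder batch
    dummy_labels = torch.tensor([0, 1, 0, 1])
    print(train_step(dummy_images, dummy_labels))
```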
Avoiding collisions between a robot arm and any obstacle in its path is essential to human-robot collaboration. Multiple systems are available that can detect obstacles in the robot's way before and after a collision. These systems work well in different areas surrounding the robot; one area that is difficult to handle is the area hidden by the robot arm itself. This paper focuses on pick-and-place maneuvers, especially on obstacle detection between the robot arm and the table the robot is mounted on. It introduces the use of single-pixel Time-of-Flight sensors to detect obstacles directly from the robot arm. The proposed approach reduces the complexity of the problem by locking axes of the robot that are not needed for the pick-and-place movement. The comparison of simulated results and laboratory measurements shows good agreement.
A Review on Kinetic Energy Harvesting with Focus on 3D Printed Electromagnetic Vibration Harvesters
(2021)
The increasing number of Internet of Things (IoT) devices and wearables requires a reliable energy source. Energy harvesting can power these devices without the need to change batteries. Three-dimensional printing allows tailored harvesting devices to be manufactured easily and quickly. This paper presents the development of hybrid and non-hybrid 3D printed electromagnetic vibration energy harvesters. Various harvesting approaches, their geometry, functional principle, power output, and the applied printing processes are shown. The gathered harvesters are analysed, challenges are examined, and research gaps in the field are identified. The advantages and challenges of 3D printing harvesters are discussed, and reported applications and strategies to improve the performance of printed harvesting devices are presented.
Human–robot collaborative applications have been receiving increasing attention in industry. However, their efficiency is often quite low compared to traditional robotic applications without human interaction. Especially for applications that use speed and separation monitoring, there is potential to increase efficiency with a cost-effective and easy-to-implement method. In this paper, we propose adding human–machine differentiation to the speed and separation monitoring in human–robot collaborative applications. The formula for the protective separation distance is extended with a variable for the kind of object that approaches the robot. Different sensors for differentiating human and non-human objects are presented, and thermal cameras are used for measurements in a proof of concept. Through differentiation of human and non-human objects, it is possible to decrease the protective separation distance between the robot and the object and therefore increase the overall efficiency of the collaborative application.
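For context, ISO/TS 15066 computes the protective separation distance for speed and separation monitoring as

\[
S_{p}(t_{0}) = S_{h} + S_{r} + S_{s} + C + Z_{d} + Z_{r},
\]

where \(S_{h}\) is the contribution from the operator's change in location, \(S_{r}\) the contribution from the robot's reaction time, \(S_{s}\) the robot's stopping distance, \(C\) the intrusion distance, and \(Z_{d}\), \(Z_{r}\) the position uncertainties of the operator and the robot. The object-type variable proposed in the paper enters this formula; its exact form is given in the paper and is not reproduced here.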
The research group led by Prof. Dr. Thomas Wendt works on topics in a wide range of areas, from automation technology and functional safety to 3D-printed electronics and sensors. In total, four doctoral students and four staff members work on the further development of the various technologies summarized in this article.
Human-machine interaction can be supported by detecting humans, that is, simultaneously localizing them and distinguishing them from non-human objects. This paper compares modern object detection algorithms (DAMO-YOLO, YOLOv6, YOLOv7, and YOLOv8) in combination with transfer learning and super resolution in different scenarios to achieve human detection on low-resolution infrared images. The data set created for this purpose includes images of an empty room, images of warm coffee cups, and images of people in various scenarios at distances ranging from two to six meters. The Average Precision values AP@50 and AP@50:95 achieved across all scenarios reach up to 98.02 % and 66.99 %, respectively.
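A sketch of fine-tuning and running one of the compared detectors (YOLOv8 via the ultralytics package); the dataset configuration file, the chosen model size, and the confidence threshold are assumptions, not details from the paper:

```python
# Sketch: fine-tuning and running YOLOv8 with the ultralytics package.
# Dataset config, model size and thresholds are assumptions.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # pretrained small YOLOv8 model

# transfer learning on low-resolution infrared images (hypothetical data.yaml)
model.train(data="thermal_humans.yaml", epochs=50, imgsz=640)

# inference on one (possibly super-resolved) infrared frame
results = model.predict("ir_frame.png", conf=0.5)
for box in results[0].boxes:
    print(box.cls, box.conf, box.xyxy)
```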