Time-of-Flight Cameras Enabling Collaborative Robots for Improved Safety in Medical Applications
(2020)
Human-robot collaboration is used more and more in industrial applications and is finding its way into medical applications. Industrial robots that are used for human-robot collaboration cannot detect obstacles from a distance. This paper introduces the idea of using wireless technology to connect a Time-of-Flight camera to off-the-shelf industrial robots, so that the robot can detect obstacles at distances of up to five meters. Connecting Time-of-Flight cameras to robots increases safety in human-robot collaboration by detecting obstacles before a collision occurs. After reviewing the state of the art, the authors elaborate the requirements for such a system. The Time-of-Flight camera from Heptagon works at a range of up to five meters and connects to the robot's control unit via a wireless link.
In safety-critical applications, wireless technologies are not widespread, mainly due to reliability and latency requirements. This paper presents a new wireless architecture that allows latency and reliability to be customized for every single participant within the network. The architecture supports a network of inhomogeneous participants with different reliability and latency requirements. The TDMA scheme with TDD as the duplex method is economical with resources, so participants with different processing and energy resources are able to take part.
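The per-participant slotting idea can be sketched as follows. This is a hypothetical illustration, not the paper's actual scheme: a fixed-length TDMA frame in which each participant receives slots in proportion to its latency requirement (tighter latency, more frequent slots). The frame length, slot counts, and participant names are invented for the example.

```python
# Hypothetical TDMA frame builder: participants with tighter latency
# requirements get more slots per frame. All numbers are illustrative.
FRAME_SLOTS = 8

def build_frame(participants):
    """participants: dict name -> slots per frame.
    Returns a slot assignment of length FRAME_SLOTS (None = idle slot)."""
    frame = [None] * FRAME_SLOTS
    requests = []
    for name, n in participants.items():
        step = FRAME_SLOTS // n          # ideal spacing between slots
        requests += [(i * step, name) for i in range(n)]
    for slot, name in sorted(requests):
        while frame[slot % FRAME_SLOTS] is not None:
            slot += 1                    # next free slot on collision
        frame[slot % FRAME_SLOTS] = name
    return frame

# A latency-critical foot switch, a monitor, and a best-effort logger:
print(build_frame({"footswitch": 4, "monitor": 2, "logger": 1}))
```

The point of the sketch is only the scalability property from the abstract: slot budget, and hence latency and energy use, can differ per participant within one shared frame.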
In medical applications, wireless technologies are not widespread. Today they are mainly used in non-latency-critical applications where reliability can be guaranteed through retransmission protocols and error-correction mechanisms. On a disturbed, shared wireless channel, retransmission protocols increase latency; they are therefore not sufficient for replacing latency-critical wired connections in operating rooms, such as foot switches. Current research aims to improve reliability through the physical characteristics of the wireless channel, using diversity methods and more robust modulation. This paper presents an architecture for building a reliable network. The architecture allows devices with different reliability, latency, and energy-consumption requirements to participate, and reliability, latency, and energy consumption are scalable for every single participant.
Human-robot collaboration plays a strong role in industrial production processes. ISO/TS 15066 defines four different methods of collaboration between humans and robots. So far, no robotic system has been available that incorporates all four collaboration methods at once. For speed and separation monitoring in particular, there has been no sensor system that can easily be attached directly to an off-the-shelf industrial robot arm and that is capable of detecting obstacles at distances from a few millimeters up to five meters. This paper presents first results of using a 3D time-of-flight camera directly on an industrial robot arm for obstacle detection in human-robot collaboration. We attached a Visionary-T camera from SICK to the flange of a KUKA LBR iiwa 7 R800. Evaluating the images in Matlab, we found that the system works very well for detecting obstacles at distances from 0.5 m up to 5 m.
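The evaluation step can be illustrated with a minimal sketch: scan a depth frame and keep every pixel whose range falls inside the usable window. The 0.5 m to 5 m window comes from the abstract; the frame layout and function name are assumptions, not SICK's actual SDK.

```python
# Sketch: flag obstacle pixels in a 3D ToF depth frame (values in meters).
MIN_RANGE_M = 0.5   # below this, readings are unreliable or self-occlusion
MAX_RANGE_M = 5.0   # maximum usable range reported in the paper

def obstacle_pixels(depth_frame):
    """Return (row, col, depth) for every pixel inside the monitored window."""
    hits = []
    for r, row in enumerate(depth_frame):
        for c, d in enumerate(row):
            if MIN_RANGE_M <= d <= MAX_RANGE_M:
                hits.append((r, c, d))
    return hits

frame = [
    [0.2, 6.1, 1.4],   # 1.4 m -> obstacle; 0.2 m and 6.1 m out of window
    [4.9, 0.0, 5.5],   # 4.9 m -> obstacle; 0.0 m is an invalid reading
]
print(obstacle_pixels(frame))  # [(0, 2, 1.4), (1, 0, 4.9)]
```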
Human–robot collaborative applications have been receiving increasing attention in industry. Their efficiency, however, is often quite low compared to traditional robotic applications without human interaction. Especially for applications that use speed and separation monitoring, there is potential to increase efficiency with a cost-effective and easy-to-implement method. In this paper, we propose adding human–machine differentiation to the speed and separation monitoring in human–robot collaborative applications. The formula for the protective separation distance is extended with a variable for the kind of object that approaches the robot. Different sensors for differentiating human and non-human objects are presented, and thermal cameras are used for measurements in a proof of concept. By differentiating human and non-human objects, it is possible to decrease the protective separation distance between the robot and the object and therefore increase the overall efficiency of the collaborative application.
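The extension described above can be sketched numerically. The function below uses a simplified form of the ISO/TS 15066 protective separation distance (human and robot travel during reaction and stopping, plus stopping, intrusion, and uncertainty terms) with an added object-type factor; the symbol names, the simplification, and the choice to scale the whole sum are mine, not necessarily the paper's formulation.

```python
# Hedged sketch: simplified protective separation distance S_p with an
# object-type factor k_object (1.0 = human, < 1.0 = known non-human object).
def protective_distance(v_human, v_robot, t_react, t_stop,
                        b_stop, c_intrusion, z_uncert, k_object):
    """Simplified S_p: approach travel during reaction/stopping time,
    robot travel during reaction time, plus stopping distance, intrusion
    distance, and measurement uncertainty, scaled by k_object."""
    s_p = (v_human * (t_react + t_stop)
           + v_robot * t_react
           + b_stop + c_intrusion + z_uncert)
    return k_object * s_p

# A person approaching at 1.6 m/s vs. a cart classified as non-human
# (all parameter values are invented for illustration):
s_person = protective_distance(1.6, 0.5, 0.1, 0.3, 0.2, 0.1, 0.05, 1.0)
s_cart   = protective_distance(1.6, 0.5, 0.1, 0.3, 0.2, 0.1, 0.05, 0.5)
print(s_person, s_cart)  # the cart may be approached much more closely
```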
Avoiding collisions between a robot arm and any obstacle in its path is essential to human-robot collaboration. Multiple systems are available that can detect obstacles in the robot's way before and after a collision, and they work well in different areas surrounding the robot. One area that is difficult to handle is the area hidden by the robot arm itself. This paper focuses on pick-and-place maneuvers, especially on obstacle detection between the robot arm and the table the robot is located on. It introduces the use of single-pixel time-of-flight sensors to detect obstacles directly from the robot arm. The proposed approach reduces the complexity of the problem by locking the axes of the robot that are not needed for the pick-and-place movement. The comparison of simulated results and laboratory measurements shows good agreement.
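With the non-essential axes locked, the geometry is fixed, so the downward-facing sensor's expected arm-to-table reading is known in advance; a significantly shorter reading indicates an obstacle in between. A minimal sketch of that check, with an assumed noise margin:

```python
# Illustrative sketch (not the authors' code): a single-pixel ToF sensor
# on the arm should see the table at a known distance; a reading shorter
# by more than the sensor-noise margin means an obstacle is in between.
def obstacle_below(measured_m, expected_table_m, margin_m=0.05):
    """True if the reading undercuts the known arm-to-table distance
    by more than the assumed noise margin."""
    return measured_m < expected_table_m - margin_m

print(obstacle_below(0.30, 0.60))  # True: something 30 cm below the arm
print(obstacle_below(0.58, 0.60))  # False: within noise of the table
```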
Efficient collaborative robotic applications need a combination of speed and separation monitoring, and power and force limiting operations. While most collaborative robots have built-in sensors for power and force limiting operations, there are none with built-in sensor systems for speed and separation monitoring. This paper proposes a system for speed and separation monitoring directly from the gripper of the robot. It can monitor separation distances of up to three meters. We used single-pixel Time-of-Flight sensors to measure the separation distance between the gripper and the next obstacle perpendicular to it. This is the first system capable of measuring separation distances of up to three meters directly from the robot's gripper.
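The aggregation step implied above can be sketched in a few lines: each single-pixel sensor around the gripper reports one range, and the separation distance is the shortest valid reading within the monitored range. The 3 m limit comes from the abstract; the sensor layout and invalid-reading convention (0.0 for no echo) are assumptions.

```python
# Sketch under assumptions: minimum valid reading over the gripper's
# single-pixel ToF sensors gives the separation distance to the nearest
# obstacle; 3 m is the monitoring range stated in the abstract.
MAX_MONITORED_M = 3.0

def separation_distance(readings_m):
    """Minimum valid sensor reading in meters; None if nothing in range."""
    valid = [d for d in readings_m if 0.0 < d <= MAX_MONITORED_M]
    return min(valid) if valid else None

print(separation_distance([2.1, 0.8, 3.4]))  # 0.8
print(separation_distance([3.4, 0.0]))       # None
```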
Differentiation between human and non-human objects can increase the efficiency of human-robot collaborative applications. This paper proposes using convolutional neural networks to classify objects in robotic applications. The body temperature of human beings is used to classify humans and to estimate their distance to the sensor. Using image classification with convolutional neural networks, it is possible to detect humans in the surroundings of a robot at up to five meters distance with low-cost, low-weight thermal cameras. Using transfer learning, we trained GoogLeNet and MobileNetV2, reaching accuracies of 99.48 % and 99.06 %, respectively.
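The underlying cue can be shown with a toy rule. The paper itself fine-tunes GoogLeNet and MobileNetV2 via transfer learning; the threshold classifier below only illustrates the idea of using the body-temperature band in a thermal image as the distinguishing feature, and the band and fraction are assumed values.

```python
# Toy sketch (not the paper's CNN): classify a thermal frame as human if
# enough pixels fall inside an assumed skin-temperature band.
HUMAN_TEMP_C = (30.0, 38.0)   # assumed band as seen by a low-cost camera

def looks_human(thermal_pixels_c, min_fraction=0.02):
    """True if at least min_fraction of pixels lie in the body band."""
    lo, hi = HUMAN_TEMP_C
    in_band = sum(1 for t in thermal_pixels_c if lo <= t <= hi)
    return in_band / len(thermal_pixels_c) >= min_fraction

scene = [22.0] * 95 + [34.5] * 5   # mostly room temperature, one warm blob
print(looks_human(scene))  # True
```

A CNN replaces this hand-set threshold with learned spatial features, which is what makes the reported accuracies possible at distances where the warm region covers only a few pixels.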