Efficient Auto-Labeling Tool for Human Keypoints in Human-Robot Collaboration Workspaces using Motion Capture Markers
Three-dimensional human pose estimation is a crucial computer vision technique used to predict the joint locations of the human body in 3D space from RGB images, depth maps, or point clouds. The majority of existing methods rely on single depth maps and RGB images for 3D pose estimation due to the lack of high-quality, fully annotated point cloud datasets. This study addresses these limitations by using sensor-fused point clouds that incorporate intensity values, thus enhancing the diversity and robustness of the annotated data. Training a 3D human pose estimation model requires large-scale and fully annotated datasets. However, annotating point cloud data is a time-consuming and expensive process. To overcome this limitation, this paper presents a novel method for capturing and automatically labeling human keypoints in dense point clouds using motion capture markers. An automatic labeling algorithm is also developed, which determines the nearest neighbor of each keypoint between consecutive frames for labeling. This approach significantly reduces the need for manual labeling when processing large volumes of point cloud data, reducing manual effort to 20–25%. The proposed method demonstrates a substantial reduction in labeling time while improving the efficiency of 3D human pose annotation.
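The abstract describes propagating keypoint labels by finding each keypoint's nearest neighbor in the next frame. The paper's actual implementation details are not given here; the following is a minimal sketch of that idea under the assumption of a plain Euclidean nearest-neighbor search, with the function name `propagate_labels` and the data layout chosen purely for illustration.

```python
import numpy as np

def propagate_labels(prev_keypoints, next_cloud):
    """Carry keypoint labels from frame t to frame t+1 by nearest neighbor.

    prev_keypoints: dict mapping joint name -> (3,) xyz position in frame t
    next_cloud:     (N, 3) array of point cloud coordinates in frame t+1
    Returns a dict mapping each joint name to its nearest point in frame t+1.
    """
    next_cloud = np.asarray(next_cloud, dtype=float)
    labeled = {}
    for joint, pos in prev_keypoints.items():
        # Euclidean distance from the previous keypoint to every candidate point
        dists = np.linalg.norm(next_cloud - np.asarray(pos, dtype=float), axis=1)
        labeled[joint] = next_cloud[np.argmin(dists)]
    return labeled

# Toy example: one keypoint drifts slightly between consecutive frames
frame_t = {"right_wrist": np.array([0.50, 1.20, 0.80])}
frame_t1_cloud = np.array([
    [0.52, 1.18, 0.81],   # close to the previous wrist position
    [2.00, 0.00, 0.00],   # unrelated background point
])
print(propagate_labels(frame_t, frame_t1_cloud)["right_wrist"])
```

In practice a k-d tree (e.g. `scipy.spatial.cKDTree`) would replace the brute-force distance computation for dense clouds, and frames where the nearest-neighbor distance exceeds a threshold would presumably fall back to manual labeling, consistent with the 20–25% residual manual effort reported.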


| Document Type: | Conference Proceeding |
|---|---|
| Conference Type: | Conference article |
| Citation link: | https://opus.hs-offenburg.de/11401 |
| Title (English): | Efficient Auto-Labeling Tool for Human Keypoints in Human-Robot Collaboration Workspaces using Motion Capture Markers |
| Conference: | International Conference on Control and Robotics Engineering (10. : 09-11 May 2025 : Nagoya, Japan) |
| Author: | Amal Kaithavalappil Ajay, Sinan Süme, Thomas Wendt |
| Year of Publication: | 2025 |
| Publisher: | IEEE |
| First Page: | 212 |
| Last Page: | 217 |
| Parent Title (English): | 2025 10th International Conference on Control and Robotics Engineering : ICCRE 2025 |
| ISBN: | 979-8-3315-4351-8 (electronic) |
| ISBN: | 979-8-3315-4350-1 (USB) |
| ISBN: | 979-8-3315-4352-5 (print on demand) |
| ISSN: | 2835-3722 (electronic) |
| ISSN: | 2835-3714 (print on demand) |
| DOI: | https://doi.org/10.1109/ICCRE65455.2025.11093503 |
| Language: | English |
| Institutes: | Fakultät Wirtschaft (W) |
| Research: | WLRI - Work-Life Robotics Institute |
| Collections of the Offenburg University: | Bibliografie |
| Tag: | Computer Vision; Human Pose Estimation; Human-Robot Collaboration; Robotics |
| Funded by (selection): | Bundesministerium für Wirtschaft und Klimaschutz |
| Funding number: | KK5030914LF3 |
| Storage of research data: | Other |
| Relevance for "Jahresbericht über Forschungsleistungen" (annual research report): | 1× (conference paper) |
| Open Access: | Closed |
| Licence (German): | Urheberrechtlich geschützt (copyright protected) |



