WLRI - Work-Life Robotics Institute
Over the past decade, the popularity of cobots (collaborative robots) has grown, largely due to their ease of use for operators. When selecting a cobot or robot for a specific application, it is essential to consider which model best aligns with the intended process. The objective of this work is to introduce a method for evaluating three-dimensional position performance for a given process in order to identify the optimal technical solution.
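To illustrate how a robot's three-dimensional position performance might be quantified against the requirements of a process, the following Python sketch computes ISO 9283-style pose accuracy and repeatability from repeated measurements of one commanded position. The measurement values, the process tolerance, and the simple acceptance rule are illustrative assumptions, not details taken from the work above.

```python
import numpy as np

def pose_accuracy_repeatability(commanded, attained):
    """ISO 9283-style pose accuracy (AP) and repeatability (RP)
    for repeated approaches to one commanded TCP position."""
    attained = np.asarray(attained, dtype=float)          # shape (n, 3), measured positions
    barycenter = attained.mean(axis=0)
    ap = np.linalg.norm(barycenter - np.asarray(commanded, dtype=float))
    dists = np.linalg.norm(attained - barycenter, axis=1)
    rp = dists.mean() + 3.0 * dists.std(ddof=1)
    return ap, rp

# Hypothetical measurements (mm) for one test pose of a candidate cobot
commanded = [400.0, 0.0, 300.0]
measured = [[400.03, 0.02, 299.98], [400.05, -0.01, 300.01],
            [399.98, 0.03, 299.97], [400.02, 0.00, 300.02]]
ap, rp = pose_accuracy_repeatability(commanded, measured)

process_tolerance_mm = 0.1    # assumed positional tolerance of the target process
print(f"AP = {ap:.3f} mm, RP = {rp:.3f} mm, "
      f"meets tolerance: {ap + rp <= process_tolerance_mm}")
```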
Bin-picking systems face challenges in fully emptying containers due to collision risks in real-world scenarios. In this contribution, we present a grasp planning approach that combines 2D and 3D sensor data to ensure collision-free handling. The method is successfully tested on objects with eccentric centers of mass and is implemented directly on the robot’s control unit, eliminating the need for separate computing hardware.
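A minimal sketch of how 2D segmentation and registered depth data could be fused for a collision check around a grasp candidate is shown below; the footprint model, clearance values, and the toy scene are assumptions for illustration and do not reproduce the approach implemented on the robot controller.

```python
import numpy as np

def grasp_is_collision_free(grasp_px, gripper_width_mm, object_mask, depth_mm,
                            px_per_mm, clearance_mm=5.0):
    """Toy fusion of a 2D object mask with registered depth data: reject a grasp
    if, inside the gripper footprint, any pixel that does not belong to the
    target object lies above the object's grasp surface."""
    x, y = grasp_px                                       # grasp centre from the 2D detection
    half = int((gripper_width_mm / 2 + clearance_mm) * px_per_mm)
    win = np.s_[max(y - half, 0): y + half + 1, max(x - half, 0): x + half + 1]

    target_depth = depth_mm[object_mask].min()            # closest surface of the target object
    neighbour = ~object_mask[win]                         # other objects or container walls
    blocking = depth_mm[win][neighbour] < target_depth - clearance_mm
    return not bool(np.any(blocking))

# Hypothetical 8x8 scene: target at 500 mm depth, a taller part far from the grasp
depth = np.full((8, 8), 520.0)
depth[2:6, 2:6] = 500.0          # target object
depth[0:2, 6:8] = 480.0          # taller neighbouring part, outside the gripper footprint
mask = np.zeros((8, 8), dtype=bool)
mask[2:6, 2:6] = True
print(grasp_is_collision_free((4, 4), gripper_width_mm=3.0, object_mask=mask,
                              depth_mm=depth, px_per_mm=1.0, clearance_mm=1.0))   # True
```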
Advances in eye-tracking control for assistive robotic arms provide intuitive interaction opportunities for people with physical disabilities. Shared control has gained interest in recent years by improving user satisfaction through partial automation of robot control. We present an eye-tracking-guided shared control design based on insights from state-of-the-art literature. A Wizard of Oz setup was used in which automation was simulated by an experimenter to evaluate the concept without requiring full implementation. This approach allowed for rapid exploration of user needs and expectations to inform future iterations. Two studies were conducted to assess user experience, identify design challenges, and find improvements to ensure usability and accessibility. The first study involved people with disabilities through a survey, and the second applied the Wizard of Oz design in person to gain technical insights, leading to a comprehensive picture of findings.
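One common way to realise partial automation in shared control, not necessarily the arbitration used in this design, is to blend the user's gaze-derived command with the autonomous command; the sketch below illustrates such a linear blending with made-up velocity values.

```python
import numpy as np

def blend_commands(u_user, u_auto, confidence):
    """Linear shared-control arbitration: the higher the automation's confidence,
    the more the autonomous command dominates the gaze-derived user command."""
    alpha = float(np.clip(confidence, 0.0, 1.0))
    return alpha * np.asarray(u_auto, dtype=float) + (1.0 - alpha) * np.asarray(u_user, dtype=float)

# Hypothetical 3-DoF end-effector velocity commands (m/s)
u_user = [0.05, 0.00, -0.02]   # derived from the user's gaze target
u_auto = [0.04, 0.01, -0.03]   # proposed by the (simulated) automation
print(blend_commands(u_user, u_auto, confidence=0.7))
```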
In this paper, we introduce an approach that enables mobile robots to localize and position themselves correctly at special machines in manufacturing based on the machines’ unique fingerprint. This is achieved using a single RGB-D camera and a convolutional neural network, namely Mask R-CNN. The network is trained on the individual shape of a machine to recognize, classify, and segment it, so that the mobile robot can locate the correct machine without any modifications to the machine itself. Additionally, the depth information of the identified object is processed to calculate the distance to the machine. For this, real images of special machines were captured and labeled, and the Mask R-CNN model was trained on them. The approach shows promising results, with an average Jaccard index of 95.9 %, an average Dice coefficient of 97.7 %, and an inference time of 0.083 s, enabling real-time detection.
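The reported Jaccard index and Dice coefficient, as well as a distance estimate from the depth data, can in principle be computed with a few lines of NumPy; the sketch below is a generic formulation and not the evaluation code of the paper.

```python
import numpy as np

def jaccard_dice(pred_mask, gt_mask):
    """Jaccard index (IoU) and Dice coefficient for two boolean segmentation masks."""
    pred, gt = pred_mask.astype(bool), gt_mask.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    jaccard = intersection / union if union else 0.0
    dice = 2 * intersection / (pred.sum() + gt.sum()) if (pred.sum() + gt.sum()) else 0.0
    return jaccard, dice

def distance_to_machine(pred_mask, depth_m):
    """Robust distance estimate: median depth of the pixels segmented as the machine."""
    return float(np.median(depth_m[pred_mask.astype(bool)]))
```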
This contribution introduces the use of convolutional neural networks to detect humans and collaborative robots (cobots) in human–robot collaboration (HRC) workspaces based on their thermal radiation fingerprint. The unique data acquisition includes an infrared camera, two cobots, and up to two persons walking and interacting with the cobots in real industrial settings. The dataset also includes thermal distortions from other heat sources. In contrast to data collected in public environments, this data collection addresses the challenges of indoor manufacturing, such as heat distortion from the environment, and is therefore directly applicable to indoor manufacturing. The Work-Life Robotics Institute HRC (WLRI-HRC) dataset contains 6485 images with over 20 000 instances to detect. In this research, the dataset is evaluated with different convolutional neural networks: first, one-stage methods, i.e., You Only Look Once (YOLO v5, v8, v9 and v10) in different model sizes, and second, two-stage methods, i.e., Faster R-CNN with three backbone structures (ResNet18, ResNet50 and VGG16). The evaluation yields promising results, with the best mean average precision at an intersection over union (IoU) of 50 (mAP50) achieved by YOLOv9s (99.4 %), the best mAP50-95 achieved by YOLOv9s and YOLOv8m (90.2 %), and the fastest prediction time of 2.2 ms achieved by the YOLOv10n model. Further differences in detection precision and time between the one-stage and multi-stage methods are discussed. Finally, this paper examines the possibility of the Clever Hans phenomenon to verify the validity of the training data and the models’ prediction capabilities.
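A comparable benchmark of the one-stage models could, for instance, be run with the ultralytics package as sketched below; the checkpoint names and the dataset configuration file wlri_hrc.yaml are assumptions, since no published training artifacts are referenced here.

```python
# Sketch of a YOLO evaluation loop with the ultralytics package.
from ultralytics import YOLO

for weights in ("yolov8m.pt", "yolov9s.pt", "yolov10n.pt"):   # assumed checkpoint names
    model = YOLO(weights)
    metrics = model.val(data="wlri_hrc.yaml")                 # hypothetical dataset config
    print(weights,
          f"mAP50={metrics.box.map50:.3f}",
          f"mAP50-95={metrics.box.map:.3f}")
```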
This contribution introduces a guideline for the fabrication of fully 3D-printed torque sensor elements. Recent advances in 3D printing technology have made it possible to produce objects and functional structures using a variety of 3D printing processes and materials. 3D printing therefore provides an alternative approach to sensor fabrication. Fully 3D-printed resistive torque sensors, encompassing both the elastic structure and strain gauges produced by 3D printing processes, are rarely encountered in the literature. Here, we address this gap by presenting a guideline for the fabrication of fully 3D-printed torque sensor elements. The application of the guideline is demonstrated through a prototype. The guideline consists of three main steps: “Design”, “Fabrication” and “Evaluation”. As part of the guideline, the combination of different 3D printing materials and 3D printing processes is demonstrated. To coordinate the different printing processes and materials, an iterative process is introduced in the design phase. In the second step, “Fabrication”, the capabilities of a five-axis 3D printing system are demonstrated. In the final step, “Evaluation”, the sensor element is calibrated. The aim of this guideline is to provide orientation for the future development and research of 3D-printed sensor elements.
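For the final “Evaluation” step, a calibration of a printed sensor element could look like the least-squares fit sketched below; the reference torques and bridge readings are invented values, and the linear sensor model is an assumption rather than data from the prototype.

```python
import numpy as np

# Hypothetical calibration data: applied reference torques vs. measured bridge output
torque_nm = np.array([0.0, 0.5, 1.0, 1.5, 2.0])     # reference torques in N·m
output_mv = np.array([0.1, 2.6, 5.2, 7.7, 10.3])    # sensor output in mV

# Least-squares fit of a linear sensor model: output = k * torque + offset
k, offset = np.polyfit(torque_nm, output_mv, deg=1)
residuals = output_mv - (k * torque_nm + offset)
print(f"sensitivity k = {k:.3f} mV/(N·m), offset = {offset:.3f} mV, "
      f"max deviation = {np.abs(residuals).max():.3f} mV")

def torque_from_output(mv):
    """Invert the calibrated model to estimate torque from a new reading."""
    return (mv - offset) / k
```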
In this contribution, we present different ways to simplify a simulation model of an additively manufactured force sensor in the field of robot gripping technology for an efficient determination of results using COMSOL Multiphysics. The results of the different computational approaches are compared together with their computing time and memory requirements. A simplified analytical approach is also presented as an alternative and to verify the plausibility of the simulations.
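As an example of what such a simplified analytical plausibility check can look like, the sketch below treats the sensing element as a rectangular cantilever beam under Euler-Bernoulli theory; the geometry, load, and material values are illustrative assumptions, not the parameters of the simulated sensor.

```python
# Analytical plausibility check for a force-sensing element approximated as a
# rectangular cantilever beam (Euler-Bernoulli theory); all values are assumed.
F = 1.0               # applied force in N
L = 0.030             # beam length in m
b, h = 0.010, 0.002   # cross-section width and height in m
E = 2.3e9             # Young's modulus of the printed polymer in Pa (assumed)

I = b * h**3 / 12                             # second moment of area in m^4
tip_deflection = F * L**3 / (3 * E * I)       # tip deflection in m
surface_strain = 6 * F * L / (E * b * h**2)   # bending strain at the clamped end

print(f"deflection = {tip_deflection * 1e3:.3f} mm, "
      f"strain = {surface_strain * 1e6:.0f} µm/m")
```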
Plastic welding is essential for fabricating process tanks in the semiconductor industry. Applying robot-assisted welding processes requires defining welding paths, and both CAD/CAM programming and teach-in are time-consuming. In this contribution, we describe and discuss two approaches to automatically measure and extract welding paths with the robot. These approaches enable flexible detection of welding paths for the automated plastic welding of cuboid containers.
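One straightforward way to turn measured seam points into a robot welding path, sketched below with invented probe coordinates, is to fit a straight line through the points and sample waypoints along it; this is a generic illustration, not one of the two approaches discussed in the contribution.

```python
import numpy as np

def fit_weld_path(edge_points, n_waypoints=10):
    """Fit a straight seam through measured edge points (least squares via PCA)
    and sample equally spaced waypoints along it for the robot program."""
    pts = np.asarray(edge_points, dtype=float)        # shape (n, 3), robot base frame
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)          # first right-singular vector = seam direction
    direction = vt[0]
    t = (pts - centroid) @ direction                  # projections onto the seam axis
    samples = np.linspace(t.min(), t.max(), n_waypoints)
    return centroid + samples[:, None] * direction    # (n_waypoints, 3) waypoints

# Hypothetical points probed along one edge of a cuboid container (mm)
probed = [[0.0, 0.2, 50.1], [100.1, -0.1, 50.0], [199.8, 0.1, 49.9], [300.2, 0.0, 50.0]]
print(fit_weld_path(probed, n_waypoints=4))
```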
This article demonstrates how four distinct technologies converge to create a new design for articulated robotic arms. Each technology has individually proven its robustness, processability, and suitability for real-world use cases. Generative design is a common approach in mechanical engineering, while additive manufacturing is proven and accepted, even in military applications. Printable conductive materials are used in PCBs and electronics, and wireless technology is indispensable and ubiquitous. A key challenge is that these technologies can interfere with each other. For example, in 3D printing with Fused Filament Fabrication, the curing temperature of the conductive ink must be compatible with the plastic's welding temperature. Conductive traces must not interfere with the wireless technology's wavelength to ensure proper function. These factors must be considered in generative design or when using AI in design phases. Despite the challenges, initial tests show promising results. This approach allows for custom-made robotic arms, reduces weight and cabling, and provides flexibility in production processes and materials, paving the way for new robotic applications.
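The interactions mentioned above can be expressed as simple design rules that a generative workflow checks automatically; the temperatures, trace length, and the quarter-wavelength criterion in the sketch below are illustrative assumptions.

```python
# Rule-based compatibility checks that a generative-design loop could evaluate;
# all threshold values below are illustrative assumptions.
C = 299_792_458.0  # speed of light in m/s

def ink_cure_compatible(cure_temp_c, plastic_limit_c, margin_c=10.0):
    """The conductive ink must cure below the temperature that deforms the printed plastic."""
    return cure_temp_c + margin_c <= plastic_limit_c

def trace_avoids_resonance(trace_length_m, radio_freq_hz, tolerance=0.1):
    """Avoid trace lengths near a quarter wavelength, where they can act as antennas."""
    quarter_wave = C / radio_freq_hz / 4
    return abs(trace_length_m - quarter_wave) / quarter_wave > tolerance

print(ink_cure_compatible(cure_temp_c=80.0, plastic_limit_c=105.0))        # assumed materials
print(trace_avoids_resonance(trace_length_m=0.045, radio_freq_hz=2.4e9))   # 2.4 GHz band
```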
This article provides insights into a feasibility study of the IP500® standard for use in robotics applications, specifically with the intent of utilizing the CNX 200 module. The CNX 200 is a true dual-band module that enables a robust, reliable, and stable wireless connection. An articulated robot arm with six joints, designed using AI-based generative techniques, will serve as the reference product. Given that the robot is designed through generative methods, it is crucial to minimize any cabling. Therefore, the primary objective is to maintain only the power connections for the drivers, ensuring that all other communications and signals are transmitted via the wireless connection.