Purpose
This work presents a new monocular peer-to-peer tracking concept overcoming the distinction between tracking tools and tracked tools for optical navigation systems. A marker model concept based on marker triplets combined with a fast and robust algorithm for assigning image feature points to the corresponding markers of the tracker is introduced. Also included is a new and fast algorithm for pose estimation.
Methods
A peer-to-peer tracker consists of seven markers, which can be tracked by other peers, and one camera, which is used to track the position and orientation of other peers. The special marker layout enables a fast and robust algorithm for assigning image feature points to the correct markers. The iterative pose estimation algorithm is based on point-to-line matching with Lagrange–Newton optimization and does not rely on initial guesses. Uniformly distributed quaternions in 4D (the vertices of a hexacosichoron, the 600-cell) are used as starting points and always provide the global minimum.
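The 600-cell starting points mentioned above can be generated in closed form: its 120 vertices are the unit quaternions of the binary icosahedral group. A minimal sketch in Python (the function names and structure are illustrative, not taken from the paper):

```python
import itertools
import math

PHI = (1 + math.sqrt(5)) / 2  # golden ratio


def permutation_parity(perm):
    """Return 0 for an even permutation, 1 for an odd one."""
    inversions = sum(
        1 for i in range(len(perm)) for j in range(i + 1, len(perm))
        if perm[i] > perm[j]
    )
    return inversions % 2


def hexacosichoron_vertices():
    """Return the 120 vertices of the 600-cell as 4-tuples (unit quaternions).

    They consist of:
      *  8 permutations of (+-1, 0, 0, 0),
      * 16 points (+-1/2, +-1/2, +-1/2, +-1/2),
      * 96 even permutations of (+-PHI/2, +-1/2, +-1/(2*PHI), 0).
    """
    verts = set()
    # 8 unit quaternions along the coordinate axes.
    for i in range(4):
        for s in (1.0, -1.0):
            v = [0.0, 0.0, 0.0, 0.0]
            v[i] = s
            verts.add(tuple(v))
    # 16 half-integer points.
    for signs in itertools.product((0.5, -0.5), repeat=4):
        verts.add(signs)
    # 96 even permutations of the "golden" point.
    base = (PHI / 2, 0.5, 1 / (2 * PHI), 0.0)
    for perm in itertools.permutations(range(4)):
        if permutation_parity(perm) != 0:
            continue  # only even permutations belong to the 600-cell
        for signs in itertools.product((1.0, -1.0), repeat=4):
            v = tuple(round(signs[k] * base[perm[k]], 12) for k in range(4))
            verts.add(v)  # duplicates from the signed zero entry merge here
    return sorted(verts)
```

Running the global pose optimization from each of these 120 well-spread rotations is what makes an initial guess unnecessary: at least one starting point lies close to the true orientation.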
Results
Experiments have shown that the marker assignment algorithm robustly assigns image feature points to the correct markers even under challenging conditions. The pose estimation algorithm works quickly and robustly and always finds the correct pose of the trackers. Image processing, marker assignment, and pose estimation for two trackers are handled in less than 18 ms on an Intel i7-6700 desktop computer at 3.4 GHz.
Conclusion
The new peer-to-peer tracking concept is a valuable approach to a decentralized navigation system that offers more freedom in the operating room while providing accurate, fast, and robust results.
Background: This paper presents a novel approach for a hand prosthesis consisting of a flexible, anthropomorphic, 3D-printed replacement hand combined with a commercially available motorized orthosis that allows gripping.
Methods: A 3D light scanner was used to produce a personalized replacement hand. The wrist of the replacement hand was printed of rigid material; the rest of the hand was printed of flexible material. A standard arm liner was used to enable the user’s arm stump to be connected to the replacement hand. With computer-aided design, two different concepts were developed for the scanned hand model: In the first concept, the replacement hand was attached to the arm liner with a screw. The second concept involved attaching with a commercially available fastening system; furthermore, a skeleton was designed that was located within the flexible part of the replacement hand.
Results: 3D multi-material printing of the two different hands was unproblematic and inexpensive. The printed hands weighed approximately the same as the real hand. Testing the replacement hands with the orthosis demonstrated convincing everyday functionality: for example, it was possible to grip and lift a 1-L water bottle, and a pen could be held, making writing possible.
Conclusions: This first proof-of-concept study encourages further testing with users.
In the field of neuroprosthetics, the current state-of-the-art method involves controlling the prosthesis with electromyography (EMG) or electrooculography/electroencephalography (EOG/EEG). However, these systems are both expensive and time-consuming to calibrate, susceptible to interference, and require a lengthy learning phase by the patient. It therefore remains an open challenge to design more robust systems that are suitable for everyday use and meet the needs of patients. In this paper, we present for the first time, in a proof-of-concept study, complete visual control of a prosthesis, an exoskeleton, or another end effector using augmented reality (AR) glasses. Using AR glasses equipped with a monocular camera, a marker attached to the prosthesis is tracked. Minimal relative movements of the head with respect to the prosthesis are registered by tracking and used for control. Two possible control mechanisms including visual feedback are presented and implemented for both a motorized hand orthosis and a motorized hand prosthesis. Since the grasping process is mainly controlled by vision, the proposed approach appears to be natural and intuitive.
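At its core, the head-movement control described above maps a small relative displacement between the glasses and the tracked marker onto discrete actuator commands. A minimal sketch of one such mapping with a dead band to suppress involuntary head motion (the threshold value and command names are illustrative assumptions, not values from the paper):

```python
def command_from_displacement(angle_deg, dead_band_deg=2.0):
    """Map the relative head-to-marker rotation (degrees) to a discrete
    command for the orthosis or prosthesis.

    A dead band around zero ignores small, involuntary head movements;
    deliberate tilts beyond it trigger opening or closing the hand.
    """
    if angle_deg > dead_band_deg:
        return "open"
    if angle_deg < -dead_band_deg:
        return "close"
    return "hold"
```

For example, `command_from_displacement(5.0)` returns `"open"`, while a 0.5-degree jitter returns `"hold"`. The dead band is what keeps the scheme usable in practice: only deliberate head motions relative to the prosthesis change the hand state.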
A new concept for robust, non-invasive optical activation of motorized hand prostheses by simple, non-contact commands is presented. In addition, a novel approach for aiding hand amputees is shown, outlining significant progress worth testing: personalized, 3D-printed, flexible artificial hands are combined with commercially available motorized exoskeletons, as used, e.g., by tetraplegics.