In the field of neuroprosthetics, the current state-of-the-art method involves controlling the prosthesis with electromyography (EMG) or electrooculography/electroencephalography (EOG/EEG). However, these systems are expensive, time-consuming to calibrate, susceptible to interference, and require a lengthy learning phase on the part of the patient. It therefore remains an open challenge to design more robust systems that are suitable for everyday use and meet the needs of patients. In this paper, we present, for the first time in a proof-of-concept study, a new concept of complete visual control for a prosthesis, an exoskeleton, or another end effector using augmented reality (AR) glasses. Using AR glasses equipped with a monocular camera, a marker attached to the prosthesis is tracked. Minimal relative movements of the head with respect to the prosthesis are registered by the tracking and used for control. Two possible control mechanisms, both including visual feedback, are presented and implemented for a motorized hand orthosis as well as a motorized hand prosthesis. Since the grasping process is mainly controlled by vision, the proposed approach appears natural and intuitive.
The invention relates to a method for controlling a device, in particular a prosthetic hand or a robotic arm, in which at least one marker positioned on or in relation to the device is detected by a camera mounted on an operator. From the moment the at least one marker is detected, a predefined movement of the operator together with the camera is detected and used to trigger a corresponding action of the device, the predefined movement of the operator being detected in the form of a line of sight by means of camera tracking. The invention further relates to a system comprising a device, in particular a prosthetic hand or a robotic arm, and a pair of AR glasses for carrying out such a method.
Restoring hand motion to people experiencing amputation, paralysis, and stroke is a critical area of research and development. While electrode-based systems that use input from the brain or muscle have proven successful, these systems tend to be expensive and difficult to learn. One group of researchers is exploring the use of augmented reality (AR) as a new way of controlling hand prostheses. A camera mounted on eyeglasses tracks LEDs on a prosthesis to execute opening and closing commands using one of two different AR systems. One system uses a rectangular command window to control motion: crossing it horizontally in one direction signals "open" and crossing it in the opposite direction signals "close". The second system uses a circular command window: once control is enabled, gripping strength can be controlled by the direction of head motion. While the visual system remains to be tested with patients, its low cost, ease of use, and lack of electrodes make the device a promising solution for restoring hand motion.
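The rectangular command-window logic described above can be sketched as a small state machine. The normalized coordinates, window bounds, and function name below are illustrative assumptions, not the researchers' implementation:

```python
# Hypothetical sketch of the rectangular command window: the marker's apparent
# position in the head-mounted camera image shifts with small head movements,
# and fully traversing the window in one horizontal direction triggers "open",
# in the other "close". Bounds and coordinates are illustrative assumptions.

def detect_command(marker_xs, left=0.4, right=0.6):
    """Classify a horizontal crossing of the command window [left, right].

    marker_xs: normalized marker x-positions (0..1) over time.
    Returns "open" for a left-to-right traversal, "close" for right-to-left,
    or None if the window was not fully traversed.
    """
    entered_from = None
    for x in marker_xs:
        if entered_from is None:
            if x < left:
                entered_from = "left"
            elif x > right:
                entered_from = "right"
        elif entered_from == "left" and x > right:
            return "open"
        elif entered_from == "right" and x < left:
            return "close"
    return None

assert detect_command([0.3, 0.45, 0.55, 0.7]) == "open"
assert detect_command([0.7, 0.5, 0.3]) == "close"
assert detect_command([0.5, 0.55, 0.45]) is None   # never fully traversed
```

In the actual system the marker position would come from the camera tracking; here the time series of x-positions is supplied directly.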
A new concept for robust, non-invasive optical activation of motorized hand prostheses by simple, non-contact commands is presented. In addition, a novel approach for aiding hand amputees is shown, outlining significant progress worth testing: personalized 3D-printed flexible artificial hands are combined with commercially available motorized exoskeletons, as used, for example, in tetraplegics.
Neuroprosthetics 2.0
(2019)
The invention relates to a method for calibrating a camera (110) using a screen (120), where the screen (120) has a set of image points (122) and the camera (110) uses a plurality of pixels (112) to represent the image. The method comprises the following steps: (a) displaying at least one image value (BW) in at least one image point (122) of the screen (120) based on an image-value assignment; (b) capturing the at least one image value (BW) with a pixel (112a) of the camera (110); and (c) determining the position of the at least one image point (122) on the screen (120) based on the at least one captured image value (BW) and the image-value assignment.
The invention relates to a method for calibrating a camera (110) using a screen (120), where the screen (120) has a set of image points (122) and the camera (110) uses a plurality of pixels (112) to represent the image. The method comprises the following steps: (a) displaying at least one image value (BW) in at least one image point (122) of the screen (120) based on an image-value assignment; (b) capturing the at least one image value (BW) with a pixel (112a) of the camera (110); and (c) determining the position of the at least one image point (122) on the screen (120) based on the at least one captured image value (BW) and the image-value assignment. The method further comprises shifting the screen (120) or the camera (110) in a direction by an amount such that the at least one image point (122) is at a different distance from the pixel (112a) of the camera (110) than before the shift, and repeating at least steps (b) and (c) for the shifted screen (120i). The camera (110) has a variable focus while the screen (120) is shifted relative to the camera (110), and the method further comprises storing an assignment between the pixel (112) and the positions of the at least one image point (122) for the various shifted screen positions (120i).
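The image-value assignment in the claims is not spelled out, but the decoding idea can be illustrated with a simple temporal binary code, in which each screen point blinks out its own index over several frames. This sketch is an assumption for illustration, not the patented encoding:

```python
# Illustrative stand-in for the patent's image-value assignment: each screen
# point displays its own index as a temporal bit sequence, so a camera pixel
# that records the sequence can decode which screen point it is looking at.

def encode(position, n_bits):
    """Bit sequence (one bit per displayed frame) for a screen point index."""
    return [(position >> b) & 1 for b in range(n_bits)]

def decode(bits):
    """Recover the screen point index from the observed bit sequence."""
    return sum(bit << b for b, bit in enumerate(bits))

n_bits = 10                     # enough to address 1024 screen columns
observed = encode(617, n_bits)  # what one camera pixel records over 10 frames
assert decode(observed) == 617
```

In practice, Gray codes or phase-shifted patterns are usually preferred over plain binary, because neighboring positions then differ by only one bit per step, which is more robust when a camera pixel samples near a boundary between screen points.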
Method for controlling a device, in particular, a prosthetic hand or a robotic arm (US20200327705A1)
(2020)
A method for controlling a device, in particular a prosthetic hand or a robotic arm, includes using an operator-mounted camera to detect at least one marker positioned on or in relation to the device. Starting from the detection of the at least one marker, a predefined movement of the operator together with the camera is detected and is used to trigger a corresponding action of the device. The predefined movement of the operator is detected in the form of a line of sight by means of camera tracking. A system for controlling a device, in particular a prosthetic hand or a robotic arm, includes a pair of AR glasses adapted to detect the at least one marker and to detect the predefined movement of the operator.
The use of cameras as measuring instruments for medical applications requires their precise calibration. Common methods model the imaging properties of a camera by means of perspective projection and parameterized functions describing lens distortion. In the border areas of the camera image, these models are often inadequate. Moreover, the use of rigid calibration patterns generally yields only a small number of non-uniformly distributed point correspondences for determining the model parameters. This work presents a completely new, model-free calibration method in which every camera pixel is calibrated independently of every other.
Nowadays, robotic systems are an integral part of many orthopedic interventions. Stationary robots improve accuracy but also require adapted surgical workflows. Handheld robotic devices (HHRDs), however, are easily integrated into existing workflows and represent a more economical solution. Their limited range of motion is compensated by the dexterity of the surgeon. This work presents control algorithms for HHRDs with multiple degrees of freedom (DOF). These algorithms protect pre- or intraoperatively defined regions from being penetrated by the end effector (e.g., a burr) by controlling the joints as well as the device's power. Accuracy tests on a stationary prototype with three DOF show that the presented control algorithms produce results similar to those of stationary robots and much better results than conventional techniques. The novel algorithms work robustly and accurately and open up new opportunities for orthopedic interventions.
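The protective behavior described above can be illustrated with a minimal geometric check. The spherical no-go region, margin, and function name are assumptions for illustration, not the actual control algorithms:

```python
# Sketch of the protective-region idea: before each control step, check whether
# the tool tip would penetrate a predefined no-go region (here a simple sphere)
# and cut the device's power if it would. Geometry and names are illustrative.

def allow_motion(tip, region_center, region_radius, margin=0.5):
    """Return False (cut power) if the tip is inside the protected region plus margin.

    tip, region_center: (x, y, z) coordinates in mm; region_radius, margin in mm.
    """
    dist = sum((t - c) ** 2 for t, c in zip(tip, region_center)) ** 0.5
    return dist > region_radius + margin

assert allow_motion((10.0, 0.0, 0.0), (0.0, 0.0, 0.0), 5.0)      # safely outside
assert not allow_motion((5.2, 0.0, 0.0), (0.0, 0.0, 0.0), 5.0)   # within margin
```

A real HHRD controller would additionally steer the joints to deflect the burr away from the boundary rather than only switching power, and the protected region would be an arbitrary pre- or intraoperatively defined surface.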
This work describes a camera-based method for the calibration of optical see-through glasses (STGs). A new calibration technique is introduced for calibrating every single display pixel of the STGs in order to overcome the disadvantages of a parametric model. Compared to a parametric model, a non-parametric model has the advantage that it can also map arbitrary distortions. The new generation of STGs using waveguide-based displays [5] will have higher arbitrary distortions due to the characteristics of their optics. First tests show better accuracies than in previous works. By using cameras placed behind the displays of the STGs, no error-prone user interaction is necessary. It is shown that a high-accuracy tracking device is not necessary for a good calibration. A camera mounted rigidly on the STGs is used to find the relations between the system components. Furthermore, this work elaborates on the necessity of a second, subsequent calibration step that adapts the STGs to a specific user. First tests support the theory that this subsequent step is necessary.
MITK-OpenIGTLink for combining open-source toolkits in real-time computer-assisted interventions
(2016)
PURPOSE:
Due to rapid developments in the research areas of medical imaging, medical image processing and robotics, computer-assisted interventions (CAI) are becoming an integral part of modern patient care. From a software engineering point of view, these systems are highly complex and research can benefit greatly from reusing software components. This is supported by a number of open-source toolkits for medical imaging and CAI such as the medical imaging interaction toolkit (MITK), the public software library for ultrasound imaging research (PLUS) and 3D Slicer. An independent inter-toolkit communication such as the open image-guided therapy link (OpenIGTLink) can be used to combine the advantages of these toolkits and enable an easier realization of a clinical CAI workflow.
METHODS:
MITK-OpenIGTLink is presented as a network interface within MITK that allows easy-to-use, asynchronous two-way messaging between MITK and clinical devices or other toolkits. Performance and interoperability tests with MITK-OpenIGTLink were carried out considering the whole CAI workflow from data acquisition over processing to visualization.
RESULTS:
We present how MITK-OpenIGTLink can be applied in different usage scenarios. In performance tests, tracking data were transmitted with a frame rate of up to 1000 Hz and a latency of 2.81 ms. Transmission of images with typical ultrasound (US) and greyscale high-definition (HD) resolutions of [Formula: see text] and [Formula: see text] is possible at up to 512 and 128 Hz, respectively.
CONCLUSION:
With the integration of OpenIGTLink into MITK, this protocol is now supported by all established open-source toolkits in the field. This eases interoperability between MITK and toolkits such as PLUS or 3D Slicer and facilitates cross-toolkit research collaborations. MITK and its submodule MITK-OpenIGTLink are provided open source under a BSD-style licence (http://mitk.org).
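For context on the messaging layer: the OpenIGTLink protocol frames every message with a fixed 58-byte big-endian header. A minimal sketch of packing such a header (v1 layout, with the CRC left at zero rather than computed) might look like this:

```python
import struct

# Sketch of the fixed OpenIGTLink v1 message header (58 bytes, big-endian):
# version, type name (12 bytes), device name (20 bytes), timestamp, body size,
# and a 64-bit CRC over the body (left at zero in this sketch).

HEADER_FMT = ">H12s20sQQQ"

def pack_header(msg_type, device, timestamp, body_size, crc=0):
    return struct.pack(HEADER_FMT, 1,
                       msg_type.encode().ljust(12, b"\x00"),
                       device.encode().ljust(20, b"\x00"),
                       timestamp, body_size, crc)

header = pack_header("TRANSFORM", "Tracker", 0, 48)
assert struct.calcsize(HEADER_FMT) == 58
assert len(header) == 58
```

A real implementation (as in MITK-OpenIGTLink or PLUS) also computes the 64-bit CRC over the message body and fills in a proper device timestamp.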
Flexible Three-dimensional Camera-based Reconstruction and Calibration of Tracked Instruments
(2016)
Navigated instruments commonly include applied parts, e.g. burrs or saw blades, that need to be calibrated with respect to the attached or integrated tracker. Since this calibration has to be very precise, it is often performed by the manufacturer. However, due to the great variety of instruments and the option to exchange the applied parts (e.g. burrs) there is a definite demand for flexible and generic calibration techniques. Furthermore, if we look into the medical field, there is also a need for calibrating sterile instruments. We propose a new and flexible camera-based calibration technique that addresses these demands by working contactlessly, precisely, and generically for a large variety of tracked instruments. This is realized using one or more tracked cameras which are calibrated with respect to an attached or integrated tracker. The tracked instrument is rotated in front of the camera(s) and its 3D geometry and surface are reconstructed from the 2D images in the coordinate system of the attached or integrated tracker. The 3D geometry of the navigated instrument was reconstructed with an accuracy of under 0.2 mm. The radius of a sphere-shaped instrument was reconstructed with an RMS deviation of 0.015 mm.
This work describes a non-parametric camera-based method for the calibration of Optical See-Through Glasses (OSTG). Existing works model the optical system through perspective projection and parametric functions. In the border areas of the displays such models are often inadequate. Moreover, rigid calibration patterns, which produce only a small number of non-equidistant point correspondences, are used. In order to overcome these disadvantages, every single display pixel is calibrated individually. Error-prone user interaction is avoided by using cameras placed behind the displays of the OSTG. The displays show a shifting pattern that is used to calculate the pixels' locations. A camera mounted rigidly on the OSTG is used to find the relations between the system components. The obtained results show better accuracies than in previous works and demonstrate that a second calibration step for user adaptation is necessary for high-accuracy applications.
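The shifting pattern used to compute the pixels' locations is not detailed in the abstract; a standard technique that conveys the idea is an N-step sinusoidal phase-shifting pattern, sketched below under that assumption:

```python
import math

# Sketch of decoding a shifting pattern: the display shows a sinusoidal pattern
# shifted in N equal steps; from the N intensities one camera pixel records, its
# pattern phase (and hence the display coordinate it sees) is recovered via
# atan2. The 4-step sinusoidal scheme is an assumption for illustration.

def observed_intensities(phase, steps=4):
    """Intensities a camera pixel records across the shifted pattern frames."""
    return [0.5 + 0.5 * math.cos(phase + 2 * math.pi * k / steps)
            for k in range(steps)]

def recover_phase(intensities):
    """Standard N-step phase-shift recovery."""
    n = len(intensities)
    s = sum(I * math.sin(2 * math.pi * k / n) for k, I in enumerate(intensities))
    c = sum(I * math.cos(2 * math.pi * k / n) for k, I in enumerate(intensities))
    return math.atan2(-s, c) % (2 * math.pi)

for true_phase in (0.1, 1.234, 5.0):
    assert abs(recover_phase(observed_intensities(true_phase)) - true_phase) < 1e-9
```

The recovered phase is periodic; a full calibration would combine it with a coarse code (e.g. Gray code) to obtain absolute display coordinates for every camera pixel.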
Hybrid SPECT/US
(2014)
eLetter on the article "Hybrid EEG/EOG-based brain/neural hand exoskeleton restores fully independent daily living activities after quadriplegia" by Surjo R. Soekadar et al., published in Science Robotics, Vol. 1, No. 1 (DOI: 10.1126/scirobotics.aag3296)
Non-invasive, non-ionizing functional neuroimaging with spatially and temporally high-resolution electroencephalography or real-time near-infrared spectroscopy, combined with modern robotic systems, is a decisive development step in the field of neuroprosthetics and brain-machine interfaces. Research on this topic is being conducted in medical engineering at Offenburg University.
Purpose
This work presents a new monocular peer-to-peer tracking concept overcoming the distinction between tracking tools and tracked tools for optical navigation systems. A marker model concept based on marker triplets combined with a fast and robust algorithm for assigning image feature points to the corresponding markers of the tracker is introduced. Also included is a new and fast algorithm for pose estimation.
Methods
A peer-to-peer tracker consists of seven markers, which can be tracked by other peers, and one camera, which is used to track the position and orientation of other peers. The special marker layout enables a fast and robust algorithm for assigning image feature points to the correct markers. The iterative pose estimation algorithm is based on point-to-line matching with Lagrange–Newton optimization and does not rely on initial guesses. Uniformly distributed quaternions in 4D (the vertices of a hexacosichoron, the 600-cell) are used as starting points and always lead to the global minimum.
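The multi-start idea behind the initial-guess-free pose estimation can be sketched as follows; random unit quaternions stand in for the 600-cell vertices, and the score is a simple point-matching residual with no Lagrange–Newton refinement, so this is an illustrative assumption rather than the paper's algorithm:

```python
import math
import random

# Multi-start sketch: evaluate the matching residual at many uniformly
# distributed rotations (random unit quaternions here) and keep the best start.

def quat_to_rot(q):
    """Rotation matrix (3x3 nested lists) from a unit quaternion (w, x, y, z)."""
    w, x, y, z = q
    return [
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ]

def apply(R, p):
    """Rotate point p by matrix R."""
    return [sum(R[i][j] * p[j] for j in range(3)) for i in range(3)]

def residual(R, model, observed):
    """Sum of squared distances between rotated model points and observations."""
    return sum((a - b) ** 2
               for p, o in zip(model, observed)
               for a, b in zip(apply(R, p), o))

def best_start(model, observed, n_starts=2000, seed=0):
    """Return the starting quaternion with the lowest matching residual."""
    rng = random.Random(seed)
    best_q, best_r = None, float("inf")
    for _ in range(n_starts):
        q = [rng.gauss(0, 1) for _ in range(4)]
        norm = math.sqrt(sum(v * v for v in q))
        q = [v / norm for v in q]          # normalized Gaussians: uniform on S^3
        r = residual(quat_to_rot(q), model, observed)
        if r < best_r:
            best_q, best_r = q, r
    return best_q

# Toy check: the best start lands near a known rotation, before any refinement.
model = [[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 0]]
true_R = quat_to_rot([math.cos(0.5), math.sin(0.5), 0.0, 0.0])
observed = [apply(true_R, p) for p in model]
q_best = best_start(model, observed)
```

In the paper the cost is a point-to-line matching error against camera viewing rays, and each starting quaternion is refined by Lagrange–Newton optimization; the exhaustive start set is what removes the need for an initial guess.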
Results
Experiments have shown that the marker assignment algorithm robustly assigns image feature points to the correct markers even under challenging conditions. The pose estimation algorithm works fast, robustly and always finds the correct pose of the trackers. Image processing, marker assignment, and pose estimation for two trackers are handled in less than 18 ms on an Intel i7-6700 desktop computer at 3.4 GHz.
Conclusion
The new peer-to-peer tracking concept is a valuable approach to a decentralized navigation system that offers more freedom in the operating room while providing accurate, fast, and robust results.
Optical navigation systems have so far featured a strict separation between the tracking device (tool tracker) and the tracked devices (tracked tools). This work presents a new concept that removes this separation: every tracked tool is simultaneously a tool tracker, consisting of marker LEDs and at least one camera with which the position and orientation of other trackers can be followed. With a single camera this is done by pose estimation; with two or more cameras the marker LEDs are triangulated. This work introduces the new peer-to-peer tracking concept and a very fast pose-estimation algorithm for an arbitrary number of markers, and addresses the question of whether the accuracy achievable with pose estimation is comparable to that of a stereo camera system and meets the requirements of surgical navigation.
The system presented here combines the new concept of peer-to-peer navigation with the use of augmented reality to support bedside external ventricular drainage. The very compact and accurate overall system comprises a patient tracker with an integrated camera, augmented-reality glasses with a camera, and a puncture needle or pointer with two trackers, with which the patient's anatomy is recorded. The exact position and direction of the puncture needle are computed with the aid of the recorded landmarks and displayed on the patient, visible to the surgeon through the augmented-reality glasses. The methods for calibrating the static transformations between the patient tracker and the camera attached to it, and between the trackers of the puncture needle, are critical to the overall accuracy and are presented here. The complete system was successfully tested in vitro, confirming the benefit of a peer-to-peer navigation system.