A learning robot for cognitive camera control in minimally invasive surgery

Bibliographic Details
Main Authors: Wagner, Martin (Author), Bihlmaier, Andreas (Author), Kenngott, Hannes Götz (Author), Mietkowski, Patrick (Author), Scheikl, Paul Maria (Author), Bodenstedt, Sebastian (Author), Schiepe-Tiska, Anja (Author), Vetter, Josephin (Author), Nickel, Felix (Author), Speidel, Stefanie (Author), Wörn, Heinz (Author), Mathis-Ullrich, Franziska (Author), Müller, Beat P. (Author)
Format: Article (Journal)
Language:English
Published: 27 April 2021
In: Surgical endoscopy and other interventional techniques
Year: 2021, Volume: 35, Issue: 9, Pages: 5365-5374
ISSN:1432-2218
DOI:10.1007/s00464-021-08509-8
Online Access:Resolving system, free of charge, full text: https://doi.org/10.1007/s00464-021-08509-8
Publisher, free of charge, full text: https://link.springer.com/10.1007/s00464-021-08509-8
Author Notes:Martin Wagner, Andreas Bihlmaier, Hannes Götz Kenngott, Patrick Mietkowski, Paul Maria Scheikl, Sebastian Bodenstedt, Anja Schiepe-Tiska, Josephin Vetter, Felix Nickel, S. Speidel, H. Wörn, F. Mathis-Ullrich, B.P. Müller-Stich
Description
Summary:Background: We demonstrate the first self-learning, context-sensitive, autonomous camera-guiding robot applicable to minimally invasive surgery. Most surgical robots today are telemanipulators without autonomous capabilities. Autonomous systems have been developed for laparoscopic camera guidance; however, these follow simple rules and do not adapt their behavior to specific tasks, procedures, or surgeons.
Methods: The methodology presented here allows different robot kinematics to perceive their environment, interpret it according to a knowledge base, and perform context-aware actions. For training, twenty operations were conducted with human camera guidance by a single surgeon. Subsequently, we experimentally evaluated the cognitive robotic camera control. First, a VIKY EP system and a KUKA LWR 4 robot were trained on data from manual camera guidance recorded after the surgeon had completed the learning curve. Second, only data from the VIKY EP were used to train the LWR, and finally data from training with the LWR were used to re-train the LWR.
Results: The duration of each operation decreased with the robot's increasing experience, from 1704 s ± 244 s to 1406 s ± 112 s, and 1197 s. Camera guidance quality (good/neutral/poor) improved from 38.6/53.4/7.9% to 49.4/46.3/4.1% and 56.2/41.0/2.8%.
Conclusions: The cognitive camera robot improved its performance with experience, laying the foundation for a new generation of cognitive surgical robots that adapt to a surgeon's needs.
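Illustration: The Methods describe a perceive-interpret-act cycle in which the robot perceives the scene, interprets it against a knowledge base, and acts on the camera. The Python sketch below is a minimal illustration of that cycle only; the class names (SceneState, KnowledgeBase), the mean-position viewing rule, and the perceive/move_camera interfaces are hypothetical assumptions and are not taken from the paper, which instead learns camera behavior from recorded human guidance.

    # Minimal sketch of a perceive-interpret-act loop for cognitive camera
    # guidance. All names and the viewing rule are hypothetical; a real
    # system would use the robot's control stack and a learned model.
    from dataclasses import dataclass

    @dataclass
    class SceneState:
        """Hypothetical summary of the laparoscopic scene."""
        instrument_positions: list  # 3D positions of tracked instrument tips
        surgical_phase: str         # current phase, e.g. "dissection"

    class KnowledgeBase:
        """Maps the interpreted scene to a desired camera viewpoint.
        Stubbed here with a trivial rule for illustration only."""
        def desired_view(self, state: SceneState):
            # Hypothetical rule: center the view on the mean instrument position.
            n = len(state.instrument_positions)
            return tuple(
                sum(p[i] for p in state.instrument_positions) / n
                for i in range(3)
            )

    def camera_guidance_step(perceive, knowledge_base, move_camera):
        """One perceive-interpret-act cycle (illustrative only)."""
        state = perceive()                           # perceive the environment
        target = knowledge_base.desired_view(state)  # interpret via knowledge base
        move_camera(target)                          # perform context-aware action

    # Example usage with stubbed perception and actuation:
    scene = SceneState(
        instrument_positions=[(0.1, 0.0, 0.2), (0.3, 0.1, 0.2)],
        surgical_phase="dissection",
    )
    camera_guidance_step(lambda: scene, KnowledgeBase(), print)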
Item Description:Viewed on 17.04.2023
Physical Description:Online Resource
ISSN:1432-2218
DOI:10.1007/s00464-021-08509-8