135. Deep learning-based Surgical Robots

Sabina Kamińska – Sano Centre for Computational Medicine, Krakow, PL

Abstract

The field of robotic surgery is rapidly evolving and holds immense potential for automating surgical procedures. However, traditional training approaches such as Reinforcement Learning (RL) require extensive task repetition, presenting safety and practicality challenges in real surgical settings. This underscores the importance of simulated surgical environments that offer realism alongside computational efficiency and scalability.

In recent decades, there has been a steady increase in the adoption of Robot-Assisted Surgical Systems (RASS) [1]. Researchers are exploring the possibilities and complexities of RASS using platforms like the da Vinci Research Kit (dVRK) [2].  

Research on RASS has explored automating various surgical tasks, from simple ones like peg transfer to complex ones like manipulating suture needles [3] and deformable tissues [4]. This research focuses on tissue retraction, which is essential for exposing areas of interest. Learning-based approaches such as RL have grown in popularity; RL training is often conducted in realistic simulation environments such as UnityFlexML [5] and LapGym [6], which simulate deformable objects.

Deep Learning (DL) is a viable solution for automating repetitive surgical subtasks because of its ability to learn complex behaviours in dynamic environments. Automating these tasks could reduce the surgeon's cognitive workload, increase precision in critical aspects of the surgery, and lead to fewer patient-related complications.

We propose a new simulator, Fast and Flexible Surgical Reinforcement Learning (FF-SRL), which offers a fully GPU-integrated RL simulation and training approach for RASS. Unlike other simulators, which rely on a combination of CPU and limited GPU acceleration, FF-SRL runs the entire simulation and training pipeline on the GPU. To manage the complexity of tissue simulation, FF-SRL uses Extended Position-Based Dynamics (XPBD) [7].
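The core of XPBD is a constraint solve with a time-step-scaled compliance term, which makes stiffness independent of the iteration count. The following is a minimal illustrative sketch (not FF-SRL's actual implementation, which is GPU-based) of one XPBD time step for particles connected by distance constraints, the basic building block of deformable-tissue models:

```python
import numpy as np

def xpbd_step(x, v, inv_mass, edges, rest_len, compliance, dt, iters=10, g=-9.81):
    """One XPBD time step for a particle system with distance constraints.

    x: (N, 3) positions, v: (N, 3) velocities, inv_mass: (N,) inverse masses
    (0 pins a particle), edges: list of (i, j) index pairs, rest_len: rest
    length per edge, compliance: inverse stiffness (0 = rigid constraint).
    """
    x_prev = x.copy()
    # Predict positions under gravity (pinned particles are unaffected).
    v = v + dt * np.array([0.0, 0.0, g]) * (inv_mass[:, None] > 0)
    x = x + dt * v
    lam = np.zeros(len(edges))            # Lagrange multipliers, reset each step
    alpha_tilde = compliance / dt**2      # time-step-scaled compliance
    for _ in range(iters):
        for k, (i, j) in enumerate(edges):
            d = x[i] - x[j]
            dist = np.linalg.norm(d)
            if dist < 1e-9:
                continue
            C = dist - rest_len[k]        # constraint violation
            n = d / dist                  # constraint gradient direction
            w = inv_mass[i] + inv_mass[j]
            dlam = (-C - alpha_tilde * lam[k]) / (w + alpha_tilde)
            lam[k] += dlam
            x[i] += inv_mass[i] * dlam * n
            x[j] -= inv_mass[j] * dlam * n
    # Velocities follow from the position change (implicit damping-free update).
    v = (x - x_prev) / dt
    return x, v
```

In a full tissue model the same update applies to thousands of constraints, which is why FF-SRL evaluates them in parallel on the GPU rather than in a Python loop as above.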

Our focus is on tissue retraction, a crucial initial phase of surgical interventions. This involves lifting deformable tissue to expose critical areas such as organs or lesions. Tissue retraction is a common task in RASS research due to its balance of simplicity and complexity, making it ideal for testing automation approaches. Moreover, this task can be effectively learned in a simulated environment and then transferred to real-world scenarios [8].  
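Framed as an RL problem, tissue retraction can be expressed as a gym-style environment in which the agent moves the gripper and is rewarded for lifting the grasped tissue point to an exposing height. The sketch below is a deliberately simplified, hypothetical interface (rigid-grasp assumption, made-up reward shape and limits), not the FF-SRL environment itself:

```python
import numpy as np

class TissueRetractionEnv:
    """Hypothetical gym-style sketch of a tissue-retraction task.

    Observation: gripper position plus the grasped tissue point. Reward:
    negative distance between the tissue point's height and a target
    height that exposes the region of interest.
    """

    def __init__(self, target_height=0.05, max_steps=100):
        self.target_height = target_height
        self.max_steps = max_steps

    def reset(self):
        self.steps = 0
        self.gripper = np.zeros(3)               # gripper at the tissue surface
        self.tissue_point = np.zeros(3)          # grasped point, follows gripper
        return self._obs()

    def step(self, action):
        # Action: small 3D gripper displacement, clipped for safety.
        action = np.clip(np.asarray(action, dtype=float), -0.01, 0.01)
        self.gripper += action
        self.tissue_point = self.gripper.copy()  # rigid-grasp simplification
        self.steps += 1
        reward = -abs(self.tissue_point[2] - self.target_height)
        done = self.steps >= self.max_steps or reward > -1e-3
        return self._obs(), reward, done, {}

    def _obs(self):
        return np.concatenate([self.gripper, self.tissue_point])
```

A real environment would replace the rigid-grasp line with the XPBD tissue simulation and derive the observation from rendered images, but the reset/step interface seen by the RL algorithm stays the same.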

To enhance Deep Reinforcement Learning (DRL), we additionally use Stable Diffusion, which generates highly realistic images. These images can aid visualisation and the preliminary training of DRL models, improving pattern and object recognition. Furthermore, generating varied scenes and situations increases data diversity: the generated images can depict different tissue types, lighting conditions, viewing angles, etc., which helps in creating versatile models capable of generalisation.
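One simple way to obtain that diversity is to enumerate prompt variations over tissue type, lighting, and viewpoint and feed each prompt to a text-to-image pipeline. The attribute lists and prompt template below are illustrative assumptions, not the vocabulary actually used in this work:

```python
from itertools import product

# Hypothetical prompt attributes; a real vocabulary would be tuned
# to the surgical domain and validated by clinicians.
TISSUES = ["liver tissue", "gallbladder tissue", "peritoneal tissue"]
LIGHTING = ["bright endoscopic lighting", "dim lighting", "specular highlights"]
ANGLES = ["frontal view", "oblique view", "close-up view"]

def build_prompts(tissues=TISSUES, lighting=LIGHTING, angles=ANGLES):
    """Enumerate all attribute combinations to diversify generated images."""
    template = "laparoscopic scene of {}, {}, {}, photorealistic"
    return [template.format(t, l, a) for t, l, a in product(tissues, lighting, angles)]

prompts = build_prompts()
# Each prompt would then be passed to a text-to-image model such as
# Stable Diffusion (e.g. via the Hugging Face diffusers library) to
# render one training image per prompt.
```

With three values per attribute this already yields 27 distinct scene descriptions, and the combinatorics scale quickly as attributes are added.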

[1] C. D’Ettorre, A. Mariani, A. Stilli, F. R. y Baena, P. Valdastri, A. Deguet, P. Kazanzides, R. H. Taylor, G. S. Fischer, S. P. DiMaio, et al., “Accelerating surgical robotics research: A review of 10 years with the da Vinci Research Kit,” IEEE Robotics & Automation Magazine, vol. 28, no. 4, pp. 56–78, 2021.

[2] P. Kazanzides, Z. Chen, A. Deguet, G. S. Fischer, R. H. Taylor, and S. P. DiMaio, “An open-source research kit for the da Vinci® surgical system,” in 2014 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2014, pp. 6434–6439.

[3] Z. J. Hu, Z. Wang, Y. Huang, A. Sena, F. R. y Baena, and E. Burdet, “Towards human-robot collaborative surgery: Trajectory and strategy learning in bimanual peg transfer,” IEEE Robotics and Automation Letters, 2023.

[4] E. Tagliabue, D. Meli, D. Dall’Alba, and P. Fiorini, “Deliberation in autonomous robotic surgery: a framework for handling anatomical uncertainty,” in 2022 International Conference on Robotics and Automation (ICRA). IEEE, 2022, pp. 11080–11086.

[5] E. Tagliabue, A. Pore, D. Dall’Alba, E. Magnabosco, M. Piccinelli, and P. Fiorini, “Soft tissue simulation environment to learn manipulation tasks in autonomous robotic surgery,” in 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2020, pp. 3261–3266.

[6] P. M. Scheikl, B. Gyenes, R. Younis, C. Haas, G. Neumann, M. Wagner, and F. Mathis-Ullrich, “LapGym – an open source framework for reinforcement learning in robot-assisted laparoscopic surgery,” arXiv preprint arXiv:2302.09606, 2023.

[7] M. Macklin, M. Müller, and N. Chentanez, “XPBD: Position-based simulation of compliant constrained dynamics,” in Proceedings of the 9th International Conference on Motion in Games, 2016, pp. 49–54.

[8] P. M. Scheikl, E. Tagliabue, B. Gyenes, M. Wagner, D. Dall’Alba, P. Fiorini, and F. Mathis-Ullrich, “Sim-to-real transfer for visual reinforcement learning of deformable object manipulation for robot-assisted surgery,” IEEE Robotics and Automation Letters, vol. 8, no. 2, pp. 560–567, 2023.

About the author

Sabina Kamińska is a Biomedical Engineer from Poland, specialising in Medical Informatics. During her Master’s program, she undertook a noteworthy project involving the development of a hand rehabilitation system, which included a hand-tracking glove and gamified training scenarios. After completing her Master’s degree, she worked as a Virtual Reality (VR) developer for surgical simulation software.