135. Deep learning-based Surgical Robots

Seminar, Sano Centre for Computational Medicine. Published 4 March 2024.

Abstract

The field of robotic surgery is rapidly evolving and holds immense potential for automating surgical procedures. However, traditional training approaches such as Reinforcement Learning (RL) require extensive task repetition, which presents safety and practicality challenges in real surgical settings. This underscores the importance of simulated surgical environments that offer realism alongside computational efficiency and scalability.

In recent decades, there has been a steady increase in the adoption of Robot-Assisted Surgical Systems (RASS) [1]. Researchers are exploring the possibilities and complexities of RASS using platforms such as the da Vinci Research Kit (dVRK) [2].

Research on RASS has explored automating a range of surgical tasks, from simple ones like peg transfer to complex ones like manipulating suture needles [3] and deformable tissues [4]. This research focuses on tissue retraction, which is essential for exposing areas of interest. Learning-based automation, in particular RL, has grown in popularity, and RL training is often conducted in realistic simulation environments such as UnityFlexML [5] and LapGym [6], which simulate deformable objects.

Deep Learning (DL) is a viable approach to automating repetitive surgical subtasks because it can learn complex behaviours in a dynamic environment. Such task automation could reduce the surgeon's cognitive workload, increase precision in critical aspects of the surgery, and lead to fewer patient-related complications.

We propose a new simulator, Fast and Flexible Surgical Reinforcement Learning (FF-SRL), which offers a fully GPU-integrated RL simulation and training approach for RASS. Unlike other simulators, which rely on a combination of CPU computation and limited GPU acceleration, FF-SRL leverages the full power of the GPU. To manage the complexity of tissue simulation, FF-SRL uses eXtended Position-Based Dynamics (XPBD) [7].

Our focus is on tissue retraction, a crucial initial phase of many surgical interventions that involves lifting deformable tissue to expose critical areas such as organs or lesions. Tissue retraction is a common task in RASS research because it balances simplicity and complexity, making it well suited to testing automation approaches. Moreover, the task can be learned effectively in a simulated environment and then transferred to real-world scenarios [8].
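XPBD, mentioned above, simulates deformable material by predicting particle positions each timestep and then iteratively projecting them onto constraints, with a compliance parameter controlling stiffness [7]. The following is a minimal illustrative sketch of a single compliant distance constraint between two particles; it is an assumption-laden toy, not the FF-SRL implementation (which runs many such constraints in parallel on the GPU).

```python
import numpy as np

def xpbd_step(x, v, inv_mass, rest_len, alpha, dt, iters=10,
              gravity=np.array([0.0, -9.81, 0.0])):
    """One XPBD step for two particles joined by a compliant distance constraint.

    x, v: (2, 3) positions and velocities; inv_mass: (2,) inverse masses
    (0 pins a particle); alpha: compliance (0 = rigid); dt: timestep.
    """
    # Integrate external forces and predict positions.
    v = v + dt * gravity * (inv_mass[:, None] > 0)
    x_pred = x + dt * v

    lam = 0.0                     # accumulated Lagrange multiplier
    alpha_tilde = alpha / dt**2   # time-step-scaled compliance

    for _ in range(iters):
        d = x_pred[0] - x_pred[1]
        dist = np.linalg.norm(d)
        c = dist - rest_len       # constraint value C(x)
        n = d / (dist + 1e-9)     # constraint gradient direction (|grad C| = 1)
        w = inv_mass[0] + inv_mass[1]
        # Core XPBD update: delta_lambda = (-C - alpha~ * lambda) / (w + alpha~)
        dlam = (-c - alpha_tilde * lam) / (w + alpha_tilde)
        lam += dlam
        x_pred[0] += inv_mass[0] * dlam * n
        x_pred[1] -= inv_mass[1] * dlam * n

    v_new = (x_pred - x) / dt     # velocities from position change
    return x_pred, v_new
```

With `alpha = 0` the constraint behaves as a rigid rod; increasing `alpha` makes it progressively softer, which is how tissue-like compliance is tuned in position-based simulators.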
To enhance Deep Reinforcement Learning (DRL), we additionally focus on Stable Diffusion, which generates highly realistic images. These images can aid in visualisation and in the preliminary training of DRL models, improving pattern and object recognition. Generating varied scenes and situations also increases data diversity: the generated images can depict different tissue types, lighting conditions, viewing angles, and so on, which helps in creating versatile models capable of generalisation.

References

[1] C. D'Ettorre, A. Mariani, A. Stilli, F. R. y Baena, P. Valdastri, A. Deguet, P. Kazanzides, R. H. Taylor, G. S. Fischer, S. P. DiMaio, et al., "Accelerating surgical robotics research: A review of 10 years with the da Vinci Research Kit," IEEE Robotics & Automation Magazine, vol. 28, no. 4, pp. 56–78, 2021.

[2] P. Kazanzides, Z. Chen, A. Deguet, G. S. Fischer, R. H. Taylor, and S. P. DiMaio, "An open-source research kit for the da Vinci® surgical system," in 2014 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2014, pp. 6434–6439.

[3] Z. J. Hu, Z. Wang, Y. Huang, A. Sena, F. R. y Baena, and E. Burdet, "Towards human-robot collaborative surgery: Trajectory and strategy learning in bimanual peg transfer," IEEE Robotics and Automation Letters, 2023.

[4] E. Tagliabue, D. Meli, D. Dall'Alba, and P. Fiorini, "Deliberation in autonomous robotic surgery: a framework for handling anatomical uncertainty," in 2022 International Conference on Robotics and Automation (ICRA). IEEE, 2022, pp. 11080–11086.

[5] E. Tagliabue, A. Pore, D. Dall'Alba, E. Magnabosco, M. Piccinelli, and P. Fiorini, "Soft tissue simulation environment to learn manipulation tasks in autonomous robotic surgery," in 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2020, pp. 3261–3266.

[6] P. M. Scheikl, B. Gyenes, R. Younis, C. Haas, G. Neumann, M. Wagner, and F. Mathis-Ullrich, "LapGym – an open source framework for reinforcement learning in robot-assisted laparoscopic surgery," arXiv preprint arXiv:2302.09606, 2023.

[7] M. Macklin, M. Müller, and N. Chentanez, "XPBD: Position-based simulation of compliant constrained dynamics," in Proceedings of the 9th International Conference on Motion in Games, 2016, pp. 49–54.

[8] P. M. Scheikl, E. Tagliabue, B. Gyenes, M. Wagner, D. Dall'Alba, P. Fiorini, and F. Mathis-Ullrich, "Sim-to-real transfer for visual reinforcement learning of deformable object manipulation for robot-assisted surgery," IEEE Robotics and Automation Letters, vol. 8, no. 2, pp. 560–567, 2023.

About the author

Sabina Kamińska is a Biomedical Engineer from Poland, specialising in Medical Informatics. During her Master's programme, she undertook a noteworthy project developing a hand rehabilitation system, which included a hand-tracking glove and gamified training scenarios.
After completing her Master's degree, she worked as a Virtual Reality (VR) developer for surgical simulation software.

Sabina Kamińska – Sano Centre for Computational Medicine, Krakow, PL
&nbsp;<\/p>\n"]},{"blockName":"core\/spacer","attrs":{"height":"50px","epAnimationGeneratedClass":"edplus_anim-Er8ucC","epGeneratedClass":"eplus-wrapper"},"innerBlocks":[],"innerHTML":"\n<div style=\"height:50px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n","innerContent":["\n<div style=\"height:50px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n"]},{"blockName":"core\/heading","attrs":{"epAnimationGeneratedClass":"edplus_anim-m0rZu0","epGeneratedClass":"eplus-wrapper"},"innerBlocks":[],"innerHTML":"\n<h2 class=\"wp-block-heading eplus-wrapper\"><strong>About the author<\/strong><\/h2>\n","innerContent":["\n<h2 class=\"wp-block-heading eplus-wrapper\"><strong>About the author<\/strong><\/h2>\n"]},{"blockName":"core\/spacer","attrs":{"height":"30px","epAnimationGeneratedClass":"edplus_anim-OdoKaF","epGeneratedClass":"eplus-wrapper"},"innerBlocks":[],"innerHTML":"\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n","innerContent":["\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n"]},{"blockName":"core\/paragraph","attrs":{"epAnimationGeneratedClass":"edplus_anim-mYP7MF","epGeneratedClass":"eplus-wrapper"},"innerBlocks":[],"innerHTML":"\n<p class=\" eplus-wrapper\">Sabina Kami\u0144ska is a Biomedical Engineer from Poland, specialising in Medical Informatics. During her Master's program, she undertook a noteworthy project involving the development of a hand rehabilitation system, which included a hand-tracking glove and gamified training scenarios. After completing her Master's degree, she worked as a Virtual Reality (VR) developer for surgical simulation software. &nbsp;<\/p>\n","innerContent":["\n<p class=\" eplus-wrapper\">Sabina Kami\u0144ska is a Biomedical Engineer from Poland, specialising in Medical Informatics. 
During her Master's program, she undertook a noteworthy project involving the development of a hand rehabilitation system, which included a hand-tracking glove and gamified training scenarios. After completing her Master's degree, she worked as a Virtual Reality (VR) developer for surgical simulation software. &nbsp;<\/p>\n"]},{"blockName":"core\/spacer","attrs":{"height":"30px","epAnimationGeneratedClass":"edplus_anim-OdoKaF","epGeneratedClass":"eplus-wrapper"},"innerBlocks":[],"innerHTML":"\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n","innerContent":["\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n"]},{"blockName":"core\/image","attrs":{"id":16823,"sizeSlug":"large","linkDestination":"none","epAnimationGeneratedClass":"edplus_anim-HD8fVE","epGeneratedClass":"eplus-wrapper"},"innerBlocks":[],"innerHTML":"\n<figure class=\"wp-block-image size-large eplus-wrapper\"><img src=\"https:\/\/sano.science\/wp-content\/uploads\/2024\/05\/Seminarium_Sabina_Kaminska-1024x536.png\" alt=\"\" class=\"wp-image-16823\"\/><\/figure>\n","innerContent":["\n<figure class=\"wp-block-image size-large eplus-wrapper\"><img src=\"https:\/\/sano.science\/wp-content\/uploads\/2024\/05\/Seminarium_Sabina_Kaminska-1024x536.png\" alt=\"\" class=\"wp-image-16823\"\/><\/figure>\n"]}],"meta_data":{"event_day":"2024-05-27","event_time":"2:00-3:30 PM (CEST)","event_guest":"TBA \u2013 PhD Student, Clinical Data Science, Sano Centre for Computational Medicine, Krakow, PL","has_medias":true,"medias":[{"icon":{"ID":1144,"id":1144,"title":"clock","filename":"clock.svg","filesize":1479,"url":"https:\/\/sano.science\/wp-content\/uploads\/2023\/06\/clock.svg","link":"https:\/\/sano.science\/seminars\/79-digital-behaviour-change-interventions-dbci-from-design-to-implementation\/clock\/","alt":"clock Sano Seminar","author":"7","description":"","caption":"Sano Seminar 
clock","name":"clock","status":"inherit","uploaded_to":13471,"date":"2023-06-01 13:24:42","modified":"2024-10-09 16:41:04","menu_order":0,"mime_type":"image\/svg+xml","type":"image","subtype":"svg+xml","icon":"https:\/\/sano.science\/wp-includes\/images\/media\/default.png","width":56,"height":57,"sizes":{"thumbnail":"https:\/\/sano.science\/wp-content\/uploads\/2023\/06\/clock.svg","thumbnail-width":147,"thumbnail-height":150,"medium":"https:\/\/sano.science\/wp-content\/uploads\/2023\/06\/clock.svg","medium-width":294,"medium-height":300,"medium_large":"https:\/\/sano.science\/wp-content\/uploads\/2023\/06\/clock.svg","medium_large-width":768,"medium_large-height":783,"large":"https:\/\/sano.science\/wp-content\/uploads\/2023\/06\/clock.svg","large-width":1004,"large-height":1024,"1536x1536":"https:\/\/sano.science\/wp-content\/uploads\/2023\/06\/clock.svg","1536x1536-width":56,"1536x1536-height":57,"2048x2048":"https:\/\/sano.science\/wp-content\/uploads\/2023\/06\/clock.svg","2048x2048-width":56,"2048x2048-height":57}},"title":"27th May 2024,  2:00-3:30 PM (CEST)","link":""},{"icon":{"ID":1146,"id":1146,"title":"camera","filename":"camera.svg","filesize":1129,"url":"https:\/\/sano.science\/wp-content\/uploads\/2023\/06\/camera.svg","link":"https:\/\/sano.science\/seminars\/79-digital-behaviour-change-interventions-dbci-from-design-to-implementation\/camera\/","alt":"camera Sano Seminar","author":"7","description":"","caption":"Sano Seminar camera","name":"camera","status":"inherit","uploaded_to":13471,"date":"2023-06-01 13:25:24","modified":"2024-10-09 
16:42:29","menu_order":0,"mime_type":"image\/svg+xml","type":"image","subtype":"svg+xml","icon":"https:\/\/sano.science\/wp-includes\/images\/media\/default.png","width":60,"height":38,"sizes":{"thumbnail":"https:\/\/sano.science\/wp-content\/uploads\/2023\/06\/camera.svg","thumbnail-width":150,"thumbnail-height":95,"medium":"https:\/\/sano.science\/wp-content\/uploads\/2023\/06\/camera.svg","medium-width":300,"medium-height":190,"medium_large":"https:\/\/sano.science\/wp-content\/uploads\/2023\/06\/camera.svg","medium_large-width":768,"medium_large-height":486,"large":"https:\/\/sano.science\/wp-content\/uploads\/2023\/06\/camera.svg","large-width":1024,"large-height":648,"1536x1536":"https:\/\/sano.science\/wp-content\/uploads\/2023\/06\/camera.svg","1536x1536-width":60,"1536x1536-height":38,"2048x2048":"https:\/\/sano.science\/wp-content\/uploads\/2023\/06\/camera.svg","2048x2048-width":60,"2048x2048-height":38}},"title":"Join via ZOOM on","link":{"title":"seminar.sano.science","url":"https:\/\/us06web.zoom.us\/j\/81263292238#success","target":"_blank"}}]},"_links":{"self":[{"href":"https:\/\/sano.science\/index.php\/wp-json\/wp\/v2\/seminars\/15608","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/sano.science\/index.php\/wp-json\/wp\/v2\/seminars"}],"about":[{"href":"https:\/\/sano.science\/index.php\/wp-json\/wp\/v2\/types\/seminars"}],"version-history":[{"count":17,"href":"https:\/\/sano.science\/index.php\/wp-json\/wp\/v2\/seminars\/15608\/revisions"}],"predecessor-version":[{"id":16824,"href":"https:\/\/sano.science\/index.php\/wp-json\/wp\/v2\/seminars\/15608\/revisions\/16824"}],"wp:attachment":[{"href":"https:\/\/sano.science\/index.php\/wp-json\/wp\/v2\/media?parent=15608"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}