{"id":21524,"date":"2025-02-13T17:57:49","date_gmt":"2025-02-13T16:57:49","guid":{"rendered":"https:\/\/sano.science\/?p=21524"},"modified":"2025-03-03T22:37:16","modified_gmt":"2025-03-03T21:37:16","slug":"medical-image-processing-analysis","status":"publish","type":"post","link":"https:\/\/sano.science\/medical-image-processing-analysis\/","title":{"rendered":"Medical Image Processing &amp; Analysis\u00a0"},"content":{"rendered":"\n<h2 class=\"wp-block-heading eplus-wrapper\" id=\"h-science-nbsp\">Science&nbsp;<\/h2>\n\n\n\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n\n\n\n<p class=\" eplus-wrapper\">Sano Centre for Computational Medicine is dedicated to advancing medical image processing and analysis through the development of cutting-edge computational solutions. Our research focuses on improving the accuracy, efficiency, and interpretability of medical imaging data, addressing key challenges in the diagnosis, treatment planning, and monitoring of various medical conditions.&nbsp;<\/p>\n\n\n\n<div style=\"height:10px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n\n\n\n<p class=\" eplus-wrapper\">We employ state-of-the-art artificial intelligence (AI) and machine learning (ML) techniques to enhance image quality, automate segmentation, and improve classification of findings. Our methodologies leverage deep learning models, including convolutional neural networks (CNNs) and graph neural networks (GNNs), to extract meaningful insights from complex imaging modalities such as MRI, CT, PET, and ultrasound. 
These approaches facilitate early detection of diseases, personalized treatment strategies, and more precise monitoring of disease progression.&nbsp;<\/p>\n\n\n\n<div style=\"height:10px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n\n\n\n<p class=\" eplus-wrapper\">A key aspect of our work is the development of explainable AI methods that enhance trust and transparency in medical decision-making. By integrating computational models with expert knowledge, we aim to bridge the gap between automated analysis and clinical applicability, ensuring that our solutions align with real-world medical practice.&nbsp;<\/p>\n\n\n\n<div style=\"height:10px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n\n\n\n<p class=\" eplus-wrapper\">Our research findings are regularly disseminated through peer-reviewed publications and presentations at leading international conferences in medical imaging and computational medicine. We actively collaborate with clinical and academic partners to translate our innovations into practical tools that support radiologists and healthcare professionals in delivering more accurate and efficient patient care.&nbsp;<\/p>\n\n\n\n<div style=\"height:10px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n\n\n\n<p class=\" eplus-wrapper\">At Sano, we are committed to advancing the field of medical image analysis through rigorous scientific inquiry, interdisciplinary collaboration, and the continuous refinement of computational techniques that contribute to improved healthcare outcomes.&nbsp;<\/p>\n\n\n\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n\n\n\n<h2 class=\"wp-block-heading eplus-wrapper\" id=\"h-research-based-solutions-include\">Research-based solutions include:\u00a0<\/h2>\n\n\n\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n\n\n\n<h4 class=\"wp-block-heading eplus-wrapper\" 
id=\"h-automated-glioma-multiclass-tumor-classification-nbsp\">Automated Glioma Multiclass Tumor Classification&nbsp;<\/h4>\n\n\n\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n\n\n\n<p class=\" eplus-wrapper\">The \u201cAutomated Glioma Multiclass Tumor Classification\u201d study aligns with Sano\u2019s focus on AI-driven medical image analysis by exploring deep learning techniques for automated tumor grading. It employs convolutional neural networks (CNNs) and residual networks (ResNet) to classify gliomas from histopathological images, improving diagnostic accuracy and reducing reliance on manual assessment. Given the challenge of small sample sizes and imbalanced data, the study implements augmentation strategies to enhance model performance. The proposed approach has significant clinical potential, aiding in intraoperative decision-making and personalized treatment planning. By integrating AI with medical imaging, this research supports Sano\u2019s mission to develop advanced computational tools that enhance diagnostic precision and efficiency in clinical practice.&nbsp;<\/p>\n\n\n\n<p class=\" eplus-wrapper\">See: Pytlarz, M., et al. &#8220;<a href=\"https:\/\/www.researchgate.net\/publication\/369547664_Automated_Glioma_Multiclass_Tumor_Classification\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Automated glioma multiclass tumor classification<\/a>.&#8221;\u202f<em>Medical Imaging 2023: Digital and Computational Pathology<\/em>. Vol. 12471. 
SPIE, 2023.\u00a0<\/p>\n\n\n\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n\n\n\n<h4 class=\"wp-block-heading eplus-wrapper\" id=\"h-energy-efficient-ai-model-architectures-nbsp-and-compression-techniques-for-green-fetal-brain-segmentation-nbsp\">Energy-efficient AI Model Architectures and Compression Techniques for Green Fetal Brain Segmentation&nbsp;<\/h4>\n\n\n\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n\n\n\n<p class=\" eplus-wrapper\">The \u201cInvestigation of Energy-efficient AI Model Architectures and Compression Techniques for Green Fetal Brain Segmentation\u201d examines methods to optimize deep learning models for fetal brain segmentation while minimizing energy consumption. It addresses the challenge of segmenting fetal MRI scans, which is complicated by small brain structures, motion artifacts, and limited image quality.&nbsp;<\/p>\n\n\n\n<p class=\" eplus-wrapper\">The study explores various lightweight neural network architectures, such as MobileNetV3 and Attention-Squeeze-UNet, evaluating their trade-offs between segmentation accuracy and computational efficiency. Additionally, it investigates compression techniques, including quantization and pruning, to reduce model size and inference time. The research also analyzes energy-efficient training strategies, such as mixed-precision training, optimized data loading, and distributed deep learning approaches, measuring their impact on performance and energy usage.&nbsp;<\/p>\n\n\n\n<p class=\" eplus-wrapper\">Results indicate that certain lightweight architectures achieve comparable segmentation accuracy to traditional models like U-Net while significantly reducing energy consumption. 
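To make the compression ideas above concrete, here is a minimal plain-Python sketch of two generic techniques mentioned in the study, magnitude pruning and uniform 8-bit quantization. The weights and numbers are invented for illustration; this is not the pipeline used in the paper.

```python
def prune_by_magnitude(weights, sparsity=0.5):
    """Zero out the `sparsity` fraction of smallest-magnitude weights."""
    k = int(len(weights) * sparsity)               # how many weights to drop
    keep = sorted(range(len(weights)), key=lambda i: abs(weights[i]))[k:]
    pruned = [0.0] * len(weights)
    for i in keep:
        pruned[i] = weights[i]
    return pruned

def quantize_uint8(weights):
    """Affine-quantize floats to 256 levels, then dequantize (round-trip view)."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255 or 1.0                 # avoid a zero scale
    levels = [round((w - lo) / scale) for w in weights]   # integers in 0..255
    return [lo + q * scale for q in levels]

w = [0.8, -0.05, 0.3, 0.01, -0.6]                  # made-up example weights
pruned = prune_by_magnitude(w, sparsity=0.4)       # two smallest-magnitude weights zeroed
roundtrip = quantize_uint8(w)                      # each value within half a level of w
```

In real systems the same ideas appear as structured pruning of convolutional filters and post-training integer quantization, which shrink model size and inference cost, the levers the study evaluates for energy use.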
These findings highlight the potential for sustainable AI in medical imaging, enabling deep learning applications that are both computationally efficient and accessible for resource-limited environments.&nbsp;<\/p>\n\n\n\n<p class=\" eplus-wrapper\">See: Mazurek, Szymon, et al. &#8220;<a href=\"https:\/\/link.springer.com\/chapter\/10.1007\/978-3-031-63772-8_5\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Investigation of Energy-Efficient AI Model Architectures and Compression Techniques for \u201cGreen\u201d Fetal Brain Segmentation<\/a>.&#8221; <em>International Conference on Computational Science<\/em>. Cham: Springer Nature Switzerland, 2024.\u00a0<\/p>\n\n\n\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n\n\n\n<h4 class=\"wp-block-heading eplus-wrapper\" id=\"h-functional-and-structural-reorganization-in-brain-tumors-nbsp\">Functional and Structural Reorganization in Brain Tumors&nbsp;<\/h4>\n\n\n\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n\n\n\n<p class=\" eplus-wrapper\">The \u201cFunctional and Structural Reorganization in Brain Tumors\u201d study explores how brain tumors alter functional and structural brain connectivity using neuroimaging and machine learning. It analyzes desynchronized oscillations in resting-state fMRI, revealing widespread functional network disruptions. A hybrid fiber tracking pipeline improves white matter reconstruction in tumor-affected regions, while a machine learning model predicts post-surgical structural reorganization. The study advances understanding of brain plasticity in tumor patients, aiding surgical planning and post-operative assessment.&nbsp;<\/p>\n\n\n\n<p class=\" eplus-wrapper\">See: Falc\u00f3-Roget, Joan, et al. 
&#8220;<a href=\"https:\/\/www.nature.com\/articles\/s42003-024-06119-3\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Functional and structural reorganization in brain tumors: a machine learning approach using desynchronized functional oscillations<\/a>.&#8221; <em>Communications Biology<\/em> 7.1 (2024): 419.\u00a0<\/p>\n\n\n\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n\n\n\n<h4 class=\"wp-block-heading eplus-wrapper\" id=\"h-deep-learning-glioma-grading-nbsp-with-the-tumor-microenvironment-analysis-protocol-nbsp\">Deep Learning Glioma Grading with the Tumor Microenvironment Analysis Protocol&nbsp;<\/h4>\n\n\n\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n\n\n\n<p class=\" eplus-wrapper\">The \u201cDeep Learning Glioma Grading with the Tumor Microenvironment Analysis Protocol\u201d study explores deep learning for automated glioma classification, incorporating tumor microenvironment (TME) features. Using supervised learning, a DenseNet121 model improved glioma grading accuracy, particularly for challenging WHO grade 2 and 3 cases. Weakly supervised learning and single-cell analysis identified TME patterns, highlighting myeloid cell infiltration\u2019s role in tumor malignancy. The study demonstrates how AI-driven TME analysis can enhance glioma diagnosis, supporting intraoperative decision-making and personalized treatment.&nbsp;<\/p>\n\n\n\n<p class=\" eplus-wrapper\">See: Pytlarz, Monika, et al. 
&#8220;<a href=\"https:\/\/link.springer.com\/article\/10.1007\/s10278-024-01008-x\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Deep Learning Glioma Grading with the Tumor Microenvironment Analysis Protocol for Comprehensive Learning, Discovering, and Quantifying Microenvironmental Features<\/a>.&#8221;\u202f<em>Journal of Imaging Informatics in Medicine<\/em>\u202f(2024): 1-17.\u00a0<\/p>\n\n\n\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n\n\n\n<h4 class=\"wp-block-heading eplus-wrapper\" id=\"h-explainable-graph-neural-networks-nbsp-for-eeg-classification-and-seizure-detection-in-epileptic-patients-nbsp\">Explainable Graph Neural Networks for EEG Classification and Seizure Detection in Epileptic Patients&nbsp;<\/h4>\n\n\n\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n\n\n\n<p class=\" eplus-wrapper\">The \u201cExplainable Graph Neural Networks for EEG Classification and Seizure Detection in Epileptic Patients\u201d study explores the use of attention-based graph neural networks (GNNs) to classify EEG signals and detect epileptic seizures. It addresses the challenge of interpretable AI in clinical applications, ensuring that model predictions align with known neurophysiological patterns.&nbsp;<\/p>\n\n\n\n<p class=\" eplus-wrapper\">The study transforms EEG signals into graphs, where nodes represent electrodes and edges capture functional connectivity. Using explainability techniques, the model identifies key brain regions involved in ictal, pre-ictal, and interictal states, providing insights into seizure dynamics. The findings demonstrate that GNNs effectively capture cortical dependencies, improving both classification performance and interpretability.&nbsp;<\/p>\n\n\n\n<p class=\" eplus-wrapper\">See: Mazurek, Szymon, et al. 
&#8220;<a href=\"https:\/\/www.researchgate.net\/publication\/378335753_EXPLAINABLE_GRAPH_NEURAL_NETWORKS_FOR_EEG_CLASSIFICATION_AND_SEIZURE_DETECTION_IN_EPILEPTIC_PATIENTS\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Explainable graph neural networks for EEG classification and seizure detection in epileptic patients<\/a>.&#8221; <em>2024 IEEE International Symposium on Biomedical Imaging (ISBI)<\/em>. IEEE, 2024.\u00a0<\/p>\n\n\n\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n\n\n\n<h4 class=\"wp-block-heading eplus-wrapper\" id=\"h-federated-image-to-image-mri-translation-nbsp-from-heterogeneous-multiple-sites-data-nbsp\">Federated Image-to-Image MRI Translation from Heterogeneous Multiple-Sites Data&nbsp;<\/h4>\n\n\n\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n\n\n\n<p class=\" eplus-wrapper\">The \u201cFederated Image-to-Image MRI Translation from Heterogeneous Multiple-Sites Data\u201d study explores the use of federated learning to improve MRI image translation while preserving patient privacy. Traditional deep learning models for medical imaging often require centralized data collection, which raises ethical and regulatory concerns. By leveraging federated learning, this research enables collaborative model training across multiple institutions without direct data sharing, addressing privacy constraints and institutional data silos.&nbsp;<\/p>\n\n\n\n<p class=\" eplus-wrapper\">A key challenge in MRI analysis is data heterogeneity, as imaging protocols, scanner types, and acquisition settings vary across medical centers. The study demonstrates how federated learning can mitigate these inconsistencies, allowing deep learning models to generalize better across diverse datasets. 
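As a rough illustration of the federated idea described here, the following plain-Python sketch implements FedAvg-style aggregation: each site updates its copy of the model locally, and only parameters (never raw images) travel to the aggregator, which averages them weighted by local dataset size. The two hospitals, their dataset sizes, and the toy gradients are invented for the example; this is not the study's actual training code.

```python
def local_update(weights, gradient, lr=0.1):
    """One toy gradient step performed privately at a single site."""
    return [w - lr * g for w, g in zip(weights, gradient)]

def federated_average(site_weights, site_sizes):
    """FedAvg: dataset-size-weighted mean of per-site model parameters."""
    total = sum(site_sizes)
    n_params = len(site_weights[0])
    return [
        sum(w[i] * n / total for w, n in zip(site_weights, site_sizes))
        for i in range(n_params)
    ]

# Two hypothetical hospitals with different data volumes; only weights travel.
global_model = [0.5, -0.2, 1.0]
site_a = local_update(global_model, [0.1, 0.0, -0.2])   # hospital A, 300 scans
site_b = local_update(global_model, [-0.1, 0.2, 0.0])   # hospital B, 100 scans
new_global = federated_average([site_a, site_b], [300, 100])
```

The size-weighted average is what lets sites with heterogeneous data volumes contribute proportionally, one of the mechanisms by which federated training copes with multi-site imbalance.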
The approach improves image-to-image translation, facilitating high-quality MRI synthesis while maintaining diagnostic fidelity.&nbsp;<\/p>\n\n\n\n<p class=\" eplus-wrapper\">This work has significant implications for multi-center medical AI applications, reducing data biases and enhancing the robustness of AI-driven imaging solutions. By enabling privacy-preserving, distributed model training, it supports the development of clinically viable tools for improved diagnostic imaging and patient outcomes.&nbsp;<\/p>\n\n\n\n<p class=\" eplus-wrapper\">See: Fiszer, Jan Stanis\u0142aw, et al. &#8220;<a href=\"https:\/\/archive.ismrm.org\/2024\/2221.html\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Federated image-to-image MRI translation from heterogeneous multiple-sites data<\/a>.&#8221; ISMRM 2024.\u00a0<\/p>\n\n\n\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n\n\n\n<h4 class=\"wp-block-heading eplus-wrapper\" id=\"h-tabattention-learning-attention-conditionally-on-tabular-data-nbsp\">TabAttention: Learning Attention Conditionally on Tabular Data&nbsp;<\/h4>\n\n\n\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n\n\n\n<p class=\" eplus-wrapper\">The \u201cTabAttention: Learning Attention Conditionally on Tabular Data\u201d study introduces a novel attention mechanism designed to enhance deep learning models by integrating tabular data with imaging analysis.&nbsp;<\/p>\n\n\n\n<p class=\" eplus-wrapper\">TabAttention extends the Convolutional Block Attention Module (CBAM) by adding a Temporal Attention Module (TAM), which incorporates multi-head self-attention to learn attention maps across spatial, channel, and temporal dimensions. 
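The following toy sketch conveys only the general notion of conditioning attention on tabular inputs: a softmax gate computed from a few (hypothetical) biometric values rescales image feature channels. The weight matrix, measurements, and channel values are all invented, and the real TabAttention module is a CBAM-style block with multi-head self-attention inside a CNN, not this simplification.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [v / s for v in e]

def tabular_gate(tab_features, weight_rows):
    """Linear map from tabular features to per-channel attention weights."""
    logits = [sum(w * t for w, t in zip(row, tab_features)) for row in weight_rows]
    return softmax(logits)

def apply_channel_attention(channels, gate):
    """Rescale each image feature channel by its attention weight."""
    return [[v * g for v in ch] for ch, g in zip(channels, gate)]

tab = [3.2, 0.7]                              # two hypothetical biometric measurements
W = [[0.5, -0.1], [0.2, 0.4], [-0.3, 0.9]]    # made-up "learned" projection rows
channels = [[1.0, 2.0], [0.5, 0.5], [2.0, 1.0]]   # three toy feature channels
gate = tabular_gate(tab, W)                   # attention weights summing to 1
attended = apply_channel_attention(channels, gate)
```

The point of the sketch is the data flow: tabular values enter the attention computation itself, rather than being concatenated to image features at the final layers.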
Unlike traditional approaches that concatenate imaging and tabular data in the final layers, TabAttention embeds tabular information directly into attention computations, allowing the model to refine feature selection dynamically.&nbsp;<\/p>\n\n\n\n<p class=\" eplus-wrapper\">The study evaluates this method on fetal birth weight (FBW) estimation, leveraging ultrasound video scans and biometric measurements. Results show that TabAttention outperforms clinicians and existing deep learning models, demonstrating its potential for improving computer-aided diagnosis where both imaging and structured clinical data are available.&nbsp;<\/p>\n\n\n\n<p class=\" eplus-wrapper\">By enabling conditional attention learning, this method enhances predictive accuracy in medical imaging applications, offering a more integrated approach to multi-modal data analysis.&nbsp;<\/p>\n\n\n\n<p class=\" eplus-wrapper\">See: Grzeszczyk, Michal K., et al. &#8220;<a href=\"https:\/\/link.springer.com\/chapter\/10.1007\/978-3-031-43990-2_33\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">TabAttention: Learning attention conditionally on tabular data<\/a>.&#8221;\u202f<em>International Conference on Medical Image Computing and Computer-Assisted Intervention<\/em>. Cham: Springer Nature Switzerland, 2023.\u00a0<\/p>\n\n\n\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n\n\n\n<p class=\" eplus-wrapper\">The interdisciplinary expertise of Sano researchers enables them to identify and address challenges raised by the medical community, leading to the development of <strong>innovative solutions in medical imaging and analysis<\/strong>. 
By leveraging insights from multiple disciplines, they create technologies with a <strong>broad range of applications<\/strong>, enhancing diagnostic accuracy and efficiency.\u00a0<\/p>\n\n\n\n<p class=\" eplus-wrapper\">Additionally, Sano integrates <strong>techniques adapted from other industries<\/strong>, such as <strong>gaming technology<\/strong>, to optimize imaging processes. For example, <strong>methods developed for lighting and shadow analysis in images<\/strong> have been repurposed for medical applications. These techniques are designed to <strong>reduce computational demands while maintaining high analytical precision<\/strong>, making advanced imaging solutions more accessible and efficient.&nbsp;<\/p>\n\n\n\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n\n\n\n<h2 class=\"wp-block-heading eplus-wrapper\" id=\"h-solutions\">Solutions\u00a0<\/h2>\n\n\n\n<p class=\" eplus-wrapper\">At Sano, theoretical scientific knowledge serves as the foundation upon which practical solutions are built. What begins as cutting-edge research presented to the scientific community gradually evolves into innovations that enhance everyday medical applications. This seamless transition from theory to practice ensures that computational advancements directly contribute to improving patient care and medical decision-making.&nbsp;<\/p>\n\n\n\n<p class=\" eplus-wrapper\">One of the key strengths of Sano is its ability to bridge the gap between fundamental research and real-world healthcare challenges. By integrating computational techniques with clinical applications, researchers develop solutions that not only push the boundaries of science but also have tangible benefits in medicine. 
Among the many initiatives at Sano, several projects stand out for their potential impact:&nbsp;<\/p>\n\n\n\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n\n\n\n<h3 class=\"wp-block-heading eplus-wrapper\" id=\"h-bsd4health-nbsp\">BSD4Health&nbsp;<\/h3>\n\n\n\n<p class=\" eplus-wrapper\">\u2022 <strong>BSD4Health<\/strong>, led by <strong>Rosmary Blanco<\/strong> from the <strong>Computational Neuroscience team<\/strong>, focuses on applying <strong>biomedical signal processing and AI techniques<\/strong> to analyze complex neurophysiological data. This project aims to enhance diagnostics and treatment monitoring, particularly in <strong>neurological disorders<\/strong>.&nbsp;<\/p>\n\n\n\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n\n\n\n<h3 class=\"wp-block-heading eplus-wrapper\" id=\"h-medical-simulators-by-higs-nbsp\">Medical Simulators by HIGS&nbsp;<\/h3>\n\n\n\n<p class=\" eplus-wrapper\">\u2022 <strong>Health Informatics Group<\/strong> is actively developing <strong>medical simulators<\/strong>, leveraging <strong>computational modeling and interactive technologies<\/strong> to create realistic training environments for clinicians. These tools help medical professionals refine their skills and improve patient outcomes through <strong>simulation-based education<\/strong>.&nbsp;<\/p>\n\n\n\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n\n\n\n<h3 class=\"wp-block-heading eplus-wrapper\" id=\"h-sneuroeye-nbsp\">SNeuroEye&nbsp;<\/h3>\n\n\n\n<p class=\" eplus-wrapper\">\u2022 <strong>SNeuroEye<\/strong>, led by <strong>Monika Pytlarz<\/strong> from the <strong>Computational Neuroscience Group<\/strong>, is an AI-powered system for automatic <strong>neuroradiology reporting<\/strong> in brain lesion diagnosis. 
By integrating computer vision for lesion detection, classification, and segmentation with natural language processing (NLP) for structured report generation, the platform assists radiologists in interpreting MRI scans. Initially designed as an educational tool for radiologists in training, SNeuroEye now focuses on <strong>reducing workload<\/strong>, <strong>improving diagnostic accuracy<\/strong>, and <strong>streamlining reporting workflows<\/strong>.&nbsp;<\/p>\n\n\n\n<p class=\" eplus-wrapper\">By fostering such interdisciplinary projects, Sano <strong>translates scientific discoveries into practical, patient-centered solutions<\/strong>. This approach not only accelerates <strong>technological advancements in computational medicine<\/strong> but also strengthens the collaboration between researchers, clinicians, and industry partners, ensuring that innovation remains deeply connected to real medical needs.&nbsp;<\/p>\n\n\n\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n\n\n\n<h2 class=\"wp-block-heading eplus-wrapper\" id=\"h-networking-nbsp\">Networking&nbsp;<\/h2>\n\n\n\n<p class=\" eplus-wrapper\">Sano Centre for Computational Medicine fosters collaboration between academia, healthcare, and industry through scientific seminars, industry partnerships, and interdisciplinary events. 
These initiatives drive advancements in computational medicine and facilitate the translation of research into real-world applications.&nbsp;<\/p>\n\n\n\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n\n\n\n<h2 class=\"wp-block-heading eplus-wrapper\" id=\"h-academic-and-medical-collaborations\">Academic and Medical Collaborations\u00a0<\/h2>\n\n\n\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n\n\n\n<p class=\" eplus-wrapper\">Sano is actively expanding its outreach and enhancing its impact in the field of computational medicine through a series of interdisciplinary initiatives. One such initiative is the AI Neuro Summer School, which served as a convergence point for experts in computational neuroscience, neuroimaging, and artificial intelligence. &nbsp;<\/p>\n\n\n\n<p class=\" eplus-wrapper\">This effort is further amplified by Sano&#8217;s participation in prominent global conferences.&nbsp;Among these conferences is the ISMRM (International Society for Magnetic Resonance in Medicine), organized by a global non-profit organization dedicated to advancing the science and application of magnetic resonance in medicine and biology. ISMRM is instrumental in facilitating communication and knowledge sharing among a diverse membership that includes clinicians, physicists, engineers, and technologists.&nbsp; Another key conference is the IEEE International Symposium on Biomedical Imaging (ISBI), which stands as a premier event in the field. ISBI showcases the latest advancements in biomedical imaging, fostering a rich exchange of research and encouraging collaborations across academia, healthcare, and the industry.&nbsp;Additionally, Sano participates in the annual conference of the Medical Image Computing and Computer-Assisted Intervention Society (MICCAI). 
As a leading event, MICCAI attracts top scientists, engineers, and clinicians involved in medical imaging and computer-assisted interventions. The conference focuses on both foundational research and emerging innovations, addressing topics like inclusive machine learning, affordable imaging solutions, and image-guided surgery, particularly in resource-limited settings.&nbsp;&nbsp;<\/p>\n\n\n\n<p class=\" eplus-wrapper\">Through these engagements, Sano not only strengthens its presence in computational medicine but also fosters international collaborations and secures new research partnerships, positioning itself at the forefront of global scientific endeavours in the field.&nbsp;<\/p>\n\n\n\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n\n\n\n<h2 class=\"wp-block-heading eplus-wrapper\" id=\"h-industry-engagement-and-commercialization-nbsp\">Industry Engagement and Commercialization&nbsp;<\/h2>\n\n\n\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n\n\n\n<p class=\" eplus-wrapper\">Sano actively partners with industry to accelerate AI adoption in healthcare. Through workshops and networking events, it collaborates with technology companies, pharmaceutical firms, and medical device manufacturers to develop scalable, regulatory-compliant AI solutions, including predictive analytics, federated learning, and automated medical imaging.&nbsp;<\/p>\n\n\n\n<p class=\" eplus-wrapper\">Additionally, Sano supports startups and entrepreneurs, providing access to research expertise, computational infrastructure, and commercialization pathways. 
Through technology transfer initiatives, cutting-edge research is transformed into practical, industry-ready solutions.&nbsp;<\/p>\n\n\n\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n\n\n\n<h2 class=\"wp-block-heading eplus-wrapper\" id=\"h-interdisciplinary-networking-and-conferences-nbsp\">Interdisciplinary Networking and Conferences&nbsp;<\/h2>\n\n\n\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n\n\n\n<p class=\" eplus-wrapper\">Sano expands its outreach through interdisciplinary initiatives, such as the AI Neuro Summer School, bringing together experts in computational neuroscience, neuroimaging, and AI. By participating in global conferences like ISMRM and MICCAI, Sano strengthens its presence in computational medicine, fostering international collaborations and securing new research partnerships.&nbsp;<\/p>\n\n\n\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n\n\n\n<h2 class=\"wp-block-heading eplus-wrapper\" id=\"h-a-hub-for-ai-driven-medicine-nbsp\">A Hub for AI-Driven Medicine&nbsp;<\/h2>\n\n\n\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n\n\n\n<p class=\" eplus-wrapper\">Through its academic, industry, and interdisciplinary initiatives, Sano is establishing itself as a leading hub for AI-driven medical research and innovation. By bridging the gap between computational science and clinical practice, it ensures that its technologies translate into practical, ethical, and impactful solutions for healthcare, ultimately benefiting both medical professionals and patients.<\/p>\n","protected":false},"excerpt":"Science&nbsp; Sano Centre for Computational Medicine is dedicated to advancing medical image processing and analysis through the development of cutting-edge computational solutions. 
Our research focuses on improving the accuracy, efficiency, and interpretability of medical imaging data, addressing key challenges in the diagnosis, treatment planning, and monitoring of various medical conditions.&nbsp; We employ state-of-the-art artificial intelligence [&hellip;]","author":8,"featured_media":22150,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"inline_featured_image":false,"editor_plus_post_options":"{}","editor_plus_copied_stylings":"{}","footnotes":""},"categories":[1],"tags":[],"class_list":["post-21524","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-uncategorized"],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v27.4 (Yoast SEO v27.4) - https:\/\/yoast.com\/product\/yoast-seo-premium-wordpress\/ -->\n<title>Medical Image Processing &amp; Analysis\u00a0 - Centre for Computational Personalized Medicine<\/title>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/sano.science\/medical-image-processing-analysis\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Medical Image Processing &amp; Analysis\u00a0\" \/>\n<meta property=\"og:description\" content=\"Science&nbsp; Sano Centre for Computational Medicine is dedicated to advancing medical image processing and analysis through the development of cutting-edge computational solutions. 
id=\"h-automated-glioma-multiclass-tumor-classification-nbsp\">Automated Glioma Multiclass Tumor Classification<\/h4>\n\n\n\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n\n\n\n<p class=\" eplus-wrapper\">The “Automated Glioma Multiclass Tumor Classification” study aligns with Sano’s focus on AI-driven medical image analysis by exploring deep learning techniques for automated tumor grading. It employs convolutional neural networks (CNNs) and residual networks (ResNets) to classify gliomas from histopathological images, improving diagnostic accuracy and reducing reliance on manual assessment. Given the challenge of small sample sizes and imbalanced data, the study implements augmentation strategies to enhance model performance. The proposed approach has significant clinical potential, aiding intraoperative decision-making and personalized treatment planning. By integrating AI with medical imaging, this research supports Sano’s mission to develop advanced computational tools that enhance diagnostic precision and efficiency in clinical practice.<\/p>
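The augmentation strategy for small, imbalanced datasets can be illustrated with a minimal sketch (hypothetical NumPy code, not the study’s actual pipeline): minority-class histopathology tiles are oversampled with random flips and 90° rotations until every class is equally represented.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(tile: np.ndarray) -> np.ndarray:
    """Apply a random flip and a random 90-degree rotation to one image tile."""
    if rng.random() < 0.5:
        tile = np.flip(tile, axis=0)
    if rng.random() < 0.5:
        tile = np.flip(tile, axis=1)
    return np.rot90(tile, k=int(rng.integers(0, 4)))

def balance_classes(tiles, labels):
    """Oversample minority classes with augmented copies until every
    class has as many tiles as the largest one."""
    labels = np.asarray(labels)
    counts = {c: int((labels == c).sum()) for c in np.unique(labels)}
    target = max(counts.values())
    out_tiles, out_labels = list(tiles), list(labels)
    for c, n in counts.items():
        pool = [t for t, l in zip(tiles, labels) if l == c]
        for _ in range(target - n):
            out_tiles.append(augment(pool[int(rng.integers(len(pool)))]))
            out_labels.append(c)
    return out_tiles, out_labels

# Toy data: 8 "grade 2" tiles vs. only 2 "grade 4" tiles (64x64 RGB).
tiles = [rng.random((64, 64, 3)) for _ in range(10)]
labels = [2] * 8 + [4] * 2
balanced, bal_labels = balance_classes(tiles, labels)
```

This is the simplest form of the idea; real pipelines would typically add stain-color and elastic deformations as well.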
<p class=\" eplus-wrapper\">See: Pytlarz, M., et al. \"<a href=\"https:\/\/www.researchgate.net\/publication\/369547664_Automated_Glioma_Multiclass_Tumor_Classification\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Automated glioma multiclass tumor classification<\/a>.\" <em>Medical Imaging 2023: Digital and Computational Pathology<\/em>. Vol. 12471. SPIE, 2023.<\/p>\n\n\n\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n\n\n\n<h4 class=\"wp-block-heading eplus-wrapper\" id=\"h-energy-efficient-ai-model-architectures-nbsp-and-compression-techniques-for-green-fetal-brain-segmentation-nbsp\">Energy-efficient AI Model Architectures and Compression Techniques for Green Fetal Brain Segmentation<\/h4>\n\n\n\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>
<p class=\" eplus-wrapper\">The “Investigation of Energy-efficient AI Model Architectures and Compression Techniques for Green Fetal Brain Segmentation” examines how to optimize deep learning models for fetal brain segmentation while minimizing energy consumption. It addresses the challenge of segmenting fetal MRI scans, which is complicated by small brain structures, motion artifacts, and limited image quality.<\/p>\n\n\n\n<p class=\" eplus-wrapper\">The study explores lightweight neural network architectures, such as MobileNetV3 and Attention-Squeeze-UNet, evaluating their trade-offs between segmentation accuracy and computational efficiency. It also investigates compression techniques, including quantization and pruning, to reduce model size and inference time, and analyzes energy-efficient training strategies, such as mixed-precision training, optimized data loading, and distributed deep learning, measuring their impact on performance and energy usage.<\/p>\n\n\n\n<p class=\" eplus-wrapper\">Results indicate that certain lightweight architectures achieve segmentation accuracy comparable to traditional models such as U-Net while significantly reducing energy consumption. These findings highlight the potential of sustainable AI in medical imaging, enabling deep learning applications that are both computationally efficient and accessible in resource-limited environments.<\/p>
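The two compression techniques can be sketched in a few lines (an illustrative NumPy toy under simple assumptions, not the evaluated implementation): symmetric per-tensor int8 quantization stores float weights in a quarter of the memory, and unstructured magnitude pruning zeroes out the smallest weights.

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric post-training quantization: map floats to int8 with a
    per-tensor scale, so the tensor fits in 1/4 of the float32 memory."""
    scale = float(np.abs(w).max()) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights for inference."""
    return q.astype(np.float32) * scale

def magnitude_prune(w: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the fraction `sparsity` of weights with smallest magnitude."""
    k = int(w.size * sparsity)
    if k == 0:
        return w.copy()
    threshold = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    return np.where(np.abs(w) <= threshold, 0.0, w)

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256)).astype(np.float32)  # toy weight matrix

q, s = quantize_int8(w)
recon = dequantize(q, s)            # max error bounded by scale / 2
pruned = magnitude_prune(w, 0.5)    # at least half of the weights zeroed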
<p class=\" eplus-wrapper\">See: Mazurek, Szymon, et al. \"<a href=\"https:\/\/link.springer.com\/chapter\/10.1007\/978-3-031-63772-8_5\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Investigation of Energy-Efficient AI Model Architectures and Compression Techniques for “Green” Fetal Brain Segmentation<\/a>.\" <em>International Conference on Computational Science<\/em>. Cham: Springer Nature Switzerland, 2024.<\/p>\n\n\n\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n\n\n\n<h4 class=\"wp-block-heading eplus-wrapper\" id=\"h-functional-and-structural-reorganization-in-brain-tumors-nbsp\">Functional and Structural Reorganization in Brain Tumors<\/h4>\n\n\n\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n\n\n\n<p class=\" eplus-wrapper\">The “Functional and Structural Reorganization in Brain Tumors” study explores how brain tumors alter functional and structural brain connectivity, using neuroimaging and machine learning. It analyzes desynchronized oscillations in resting-state fMRI, revealing widespread functional network disruptions. A hybrid fiber-tracking pipeline improves white matter reconstruction in tumor-affected regions, while a machine learning model predicts post-surgical structural reorganization. The study advances the understanding of brain plasticity in tumor patients, aiding surgical planning and post-operative assessment.<\/p>\n\n\n\n<p class=\" eplus-wrapper\">See: Falcó-Roget, Joan, et al. \"<a href=\"https:\/\/www.nature.com\/articles\/s42003-024-06119-3\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Functional and structural reorganization in brain tumors: a machine learning approach using desynchronized functional oscillations<\/a>.\" <em>Communications Biology<\/em> 7.1 (2024): 419.<\/p>
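The connectivity analysis underlying this line of work rests on a simple primitive that can be illustrated with a toy example (hypothetical code, not the paper’s pipeline): each brain region contributes a BOLD time series, pairwise Pearson correlations form the functional connectivity matrix, and desynchronizing a subset of regions measurably weakens the network.

```python
import numpy as np

def connectivity_matrix(ts: np.ndarray) -> np.ndarray:
    """Pearson correlation between every pair of regional time series.

    ts: array of shape (n_regions, n_timepoints).
    Returns an (n_regions, n_regions) symmetric matrix with unit diagonal.
    """
    return np.corrcoef(ts)

rng = np.random.default_rng(0)
n_regions, n_time = 6, 200

# Toy "healthy" network: all regions driven by one shared oscillation.
shared = np.sin(np.linspace(0, 20 * np.pi, n_time))
healthy = shared + 0.3 * rng.normal(size=(n_regions, n_time))

# Toy "lesioned" network: half the regions desynchronized (pure noise).
lesioned = healthy.copy()
lesioned[3:] = rng.normal(size=(3, n_time))

fc_healthy = connectivity_matrix(healthy)
fc_lesioned = connectivity_matrix(lesioned)

# Mean off-diagonal connectivity drops when regions desynchronize.
off = ~np.eye(n_regions, dtype=bool)
drop = fc_healthy[off].mean() - fc_lesioned[off].mean()
```

In practice the regional time series come from an atlas parcellation of the fMRI volume, and disruptions are assessed with graph-theoretic measures rather than a single mean, but the correlation matrix is the common starting point.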
\"<a href=\"https:\/\/www.nature.com\/articles\/s42003-024-06119-3\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Functional and structural reorganization in brain tumors: a machine learning approach using desynchronized functional oscillations<\/a>.\" Communications Biology 7.1 (2024): 419.\u00a0<\/p>\n"]},{"blockName":"core\/spacer","attrs":{"height":"30px","epAnimationGeneratedClass":"edplus_anim-CrDnai","epGeneratedClass":"eplus-wrapper"},"innerBlocks":[],"innerHTML":"\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n","innerContent":["\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n"]},{"blockName":"core\/heading","attrs":{"level":4,"epAnimationGeneratedClass":"edplus_anim-jgt3OW","epGeneratedClass":"eplus-wrapper"},"innerBlocks":[],"innerHTML":"\n<h4 class=\"wp-block-heading eplus-wrapper\" id=\"h-deep-learning-glioma-grading-nbsp-with-the-tumor-microenvironment-analysis-protocol-nbsp\">Deep Learning Glioma Grading &nbsp;with the Tumor Microenvironment Analysis Protocol&nbsp;<\/h4>\n","innerContent":["\n<h4 class=\"wp-block-heading eplus-wrapper\" id=\"h-deep-learning-glioma-grading-nbsp-with-the-tumor-microenvironment-analysis-protocol-nbsp\">Deep Learning Glioma Grading &nbsp;with the Tumor Microenvironment Analysis Protocol&nbsp;<\/h4>\n"]},{"blockName":"core\/spacer","attrs":{"height":"30px","epAnimationGeneratedClass":"edplus_anim-CrDnai","epGeneratedClass":"eplus-wrapper"},"innerBlocks":[],"innerHTML":"\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n","innerContent":["\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n"]},{"blockName":"core\/paragraph","attrs":{"epAnimationGeneratedClass":"edplus_anim-EeAHqJ","epGeneratedClass":"eplus-wrapper"},"innerBlocks":[],"innerHTML":"\n<p class=\" eplus-wrapper\">The \u201cDeep Learning Glioma Grading with 
the Tumor Microenvironment Analysis Protocol\u201d study explores deep learning for automated glioma classification, incorporating tumor microenvironment (TME) features. Using supervised learning, a DenseNet121 model improved glioma grading accuracy, particularly for challenging WHO grade 2 and 3 cases. Weakly supervised learning and single-cell analysis identified TME patterns, highlighting myeloid cell infiltration\u2019s role in tumor malignancy. The study demonstrates how AI-driven TME analysis can enhance glioma diagnosis, supporting intraoperative decision-making and personalized treatment.&nbsp;<\/p>\n","innerContent":["\n<p class=\" eplus-wrapper\">The \u201cDeep Learning Glioma Grading with the Tumor Microenvironment Analysis Protocol\u201d study explores deep learning for automated glioma classification, incorporating tumor microenvironment (TME) features. Using supervised learning, a DenseNet121 model improved glioma grading accuracy, particularly for challenging WHO grade 2 and 3 cases. Weakly supervised learning and single-cell analysis identified TME patterns, highlighting myeloid cell infiltration\u2019s role in tumor malignancy. The study demonstrates how AI-driven TME analysis can enhance glioma diagnosis, supporting intraoperative decision-making and personalized treatment.&nbsp;<\/p>\n"]},{"blockName":"core\/paragraph","attrs":{"epAnimationGeneratedClass":"edplus_anim-dfPRbr","epGeneratedClass":"eplus-wrapper"},"innerBlocks":[],"innerHTML":"\n<p class=\" eplus-wrapper\">See: Pytlarz, Monika, et al. 
\"<a href=\"https:\/\/link.springer.com\/article\/10.1007\/s10278-024-01008-x\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Deep Learning Glioma Grading with the Tumor Microenvironment Analysis Protocol for Comprehensive Learning, Discovering, and Quantifying Microenvironmental Features<\/a>.\"\u202f<em>Journal of Imaging Informatics in Medicine<\/em>\u202f(2024): 1-17.\u00a0<\/p>\n","innerContent":["\n<p class=\" eplus-wrapper\">See: Pytlarz, Monika, et al. \"<a href=\"https:\/\/link.springer.com\/article\/10.1007\/s10278-024-01008-x\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Deep Learning Glioma Grading with the Tumor Microenvironment Analysis Protocol for Comprehensive Learning, Discovering, and Quantifying Microenvironmental Features<\/a>.\"\u202f<em>Journal of Imaging Informatics in Medicine<\/em>\u202f(2024): 1-17.\u00a0<\/p>\n"]},{"blockName":"core\/spacer","attrs":{"height":"30px","epAnimationGeneratedClass":"edplus_anim-CrDnai","epGeneratedClass":"eplus-wrapper"},"innerBlocks":[],"innerHTML":"\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n","innerContent":["\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n"]},{"blockName":"core\/heading","attrs":{"level":4,"epAnimationGeneratedClass":"edplus_anim-sDfIIh","epGeneratedClass":"eplus-wrapper"},"innerBlocks":[],"innerHTML":"\n<h4 class=\"wp-block-heading eplus-wrapper\" id=\"h-explainable-graph-neural-networks-nbsp-for-eeg-classification-and-seizure-detection-in-epileptic-patients-nbsp\">Explainable Graph Neural Networks &nbsp;for EEG Classification and Seizure Detection in Epileptic Patients&nbsp;<\/h4>\n","innerContent":["\n<h4 class=\"wp-block-heading eplus-wrapper\" id=\"h-explainable-graph-neural-networks-nbsp-for-eeg-classification-and-seizure-detection-in-epileptic-patients-nbsp\">Explainable Graph Neural Networks &nbsp;for EEG Classification and Seizure Detection in Epileptic 
Patients&nbsp;<\/h4>\n"]},{"blockName":"core\/spacer","attrs":{"height":"30px","epAnimationGeneratedClass":"edplus_anim-CrDnai","epGeneratedClass":"eplus-wrapper"},"innerBlocks":[],"innerHTML":"\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n","innerContent":["\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n"]},{"blockName":"core\/paragraph","attrs":{"epAnimationGeneratedClass":"edplus_anim-W9xcHc","epGeneratedClass":"eplus-wrapper"},"innerBlocks":[],"innerHTML":"\n<p class=\" eplus-wrapper\">The \u201cExplainable Graph Neural Networks for EEG Classification and Seizure Detection in Epileptic Patients\u201d study explores the use of attention-based graph neural networks (GNNs) to classify EEG signals and detect epileptic seizures. It addresses the challenge of interpretable AI in clinical applications, ensuring that model predictions align with known neurophysiological patterns.&nbsp;<\/p>\n","innerContent":["\n<p class=\" eplus-wrapper\">The \u201cExplainable Graph Neural Networks for EEG Classification and Seizure Detection in Epileptic Patients\u201d study explores the use of attention-based graph neural networks (GNNs) to classify EEG signals and detect epileptic seizures. It addresses the challenge of interpretable AI in clinical applications, ensuring that model predictions align with known neurophysiological patterns.&nbsp;<\/p>\n"]},{"blockName":"core\/paragraph","attrs":{"epAnimationGeneratedClass":"edplus_anim-4h8pNg","epGeneratedClass":"eplus-wrapper"},"innerBlocks":[],"innerHTML":"\n<p class=\" eplus-wrapper\">The study transforms EEG signals into graphs, where nodes represent electrodes, and edges capture functional connectivity. Using explainability techniques, the model identifies key brain regions involved in ictal, pre-ictal, and interictal states, providing insights into seizure dynamics. 
The findings demonstrate that GNNs effectively capture cortical dependencies, improving both classification performance and interpretability.&nbsp;<\/p>\n","innerContent":["\n<p class=\" eplus-wrapper\">The study transforms EEG signals into graphs, where nodes represent electrodes, and edges capture functional connectivity. Using explainability techniques, the model identifies key brain regions involved in ictal, pre-ictal, and interictal states, providing insights into seizure dynamics. The findings demonstrate that GNNs effectively capture cortical dependencies, improving both classification performance and interpretability.&nbsp;<\/p>\n"]},{"blockName":"core\/paragraph","attrs":{"epAnimationGeneratedClass":"edplus_anim-frDCzi","epGeneratedClass":"eplus-wrapper"},"innerBlocks":[],"innerHTML":"\n<p class=\" eplus-wrapper\">See: Mazurek, Szymon, et al. \"<a href=\"https:\/\/www.researchgate.net\/publication\/378335753_EXPLAINABLE_GRAPH_NEURAL_NETWORKS_FOR_EEG_CLASSIFICATION_AND_SEIZURE_DETECTION_IN_EPILEPTIC_PATIENTS\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Explainable graph neural networks for EEG classification and seizure detection in epileptic patients<\/a>.\" 2024 IEEE International Symposium on Biomedical Imaging (ISBI). IEEE, 2024.\u00a0<\/p>\n","innerContent":["\n<p class=\" eplus-wrapper\">See: Mazurek, Szymon, et al. \"<a href=\"https:\/\/www.researchgate.net\/publication\/378335753_EXPLAINABLE_GRAPH_NEURAL_NETWORKS_FOR_EEG_CLASSIFICATION_AND_SEIZURE_DETECTION_IN_EPILEPTIC_PATIENTS\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Explainable graph neural networks for EEG classification and seizure detection in epileptic patients<\/a>.\" 2024 IEEE International Symposium on Biomedical Imaging (ISBI). 
IEEE, 2024.\u00a0<\/p>\n"]},{"blockName":"core\/spacer","attrs":{"height":"30px","epAnimationGeneratedClass":"edplus_anim-CrDnai","epGeneratedClass":"eplus-wrapper"},"innerBlocks":[],"innerHTML":"\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n","innerContent":["\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n"]},{"blockName":"core\/heading","attrs":{"level":4,"epAnimationGeneratedClass":"edplus_anim-TT0Mod","epGeneratedClass":"eplus-wrapper"},"innerBlocks":[],"innerHTML":"\n<h4 class=\"wp-block-heading eplus-wrapper\" id=\"h-federated-image-to-image-mri-translation-nbsp-from-heterogeneous-multiple-sites-data-nbsp\">Federated Image-to-Image MRI Translation &nbsp;from Heterogeneous Multiple-Sites Data&nbsp;<\/h4>\n","innerContent":["\n<h4 class=\"wp-block-heading eplus-wrapper\" id=\"h-federated-image-to-image-mri-translation-nbsp-from-heterogeneous-multiple-sites-data-nbsp\">Federated Image-to-Image MRI Translation &nbsp;from Heterogeneous Multiple-Sites Data&nbsp;<\/h4>\n"]},{"blockName":"core\/spacer","attrs":{"height":"30px","epAnimationGeneratedClass":"edplus_anim-CrDnai","epGeneratedClass":"eplus-wrapper"},"innerBlocks":[],"innerHTML":"\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n","innerContent":["\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n"]},{"blockName":"core\/paragraph","attrs":{"epAnimationGeneratedClass":"edplus_anim-lVcURf","epGeneratedClass":"eplus-wrapper"},"innerBlocks":[],"innerHTML":"\n<p class=\" eplus-wrapper\">The \u201cFederated Image-to-Image MRI Translation from Heterogeneous Multiple-Sites Data\u201d study explores the use of federated learning to improve MRI image translation while preserving patient privacy. 
Traditional deep learning models for medical imaging often require centralized data collection, which raises ethical and regulatory concerns. By leveraging federated learning, this research enables collaborative model training across multiple institutions without direct data sharing, addressing privacy constraints and institutional data silos.&nbsp;<\/p>\n","innerContent":["\n<p class=\" eplus-wrapper\">The \u201cFederated Image-to-Image MRI Translation from Heterogeneous Multiple-Sites Data\u201d study explores the use of federated learning to improve MRI image translation while preserving patient privacy. Traditional deep learning models for medical imaging often require centralized data collection, which raises ethical and regulatory concerns. By leveraging federated learning, this research enables collaborative model training across multiple institutions without direct data sharing, addressing privacy constraints and institutional data silos.&nbsp;<\/p>\n"]},{"blockName":"core\/paragraph","attrs":{"epAnimationGeneratedClass":"edplus_anim-LuSe0t","epGeneratedClass":"eplus-wrapper"},"innerBlocks":[],"innerHTML":"\n<p class=\" eplus-wrapper\">A key challenge in MRI analysis is data heterogeneity, as imaging protocols, scanner types, and acquisition settings vary across medical centers. The study demonstrates how federated learning can mitigate these inconsistencies, allowing deep learning models to generalize better across diverse datasets. The approach improves image-to-image translation, facilitating high-quality MRI synthesis while maintaining diagnostic fidelity.&nbsp;<\/p>\n","innerContent":["\n<p class=\" eplus-wrapper\">A key challenge in MRI analysis is data heterogeneity, as imaging protocols, scanner types, and acquisition settings vary across medical centers. The study demonstrates how federated learning can mitigate these inconsistencies, allowing deep learning models to generalize better across diverse datasets. 
The approach improves image-to-image translation, facilitating high-quality MRI synthesis while maintaining diagnostic fidelity.&nbsp;<\/p>\n"]},{"blockName":"core\/paragraph","attrs":{"epAnimationGeneratedClass":"edplus_anim-G8SyVr","epGeneratedClass":"eplus-wrapper"},"innerBlocks":[],"innerHTML":"\n<p class=\" eplus-wrapper\">This work has significant implications for multi-center medical AI applications, reducing data biases and enhancing the robustness of AI-driven imaging solutions. By enabling privacy-preserving, distributed model training, it supports the development of clinically viable tools for improved diagnostic imaging and patient outcomes.&nbsp;<\/p>\n","innerContent":["\n<p class=\" eplus-wrapper\">This work has significant implications for multi-center medical AI applications, reducing data biases and enhancing the robustness of AI-driven imaging solutions. By enabling privacy-preserving, distributed model training, it supports the development of clinically viable tools for improved diagnostic imaging and patient outcomes.&nbsp;<\/p>\n"]},{"blockName":"core\/paragraph","attrs":{"epAnimationGeneratedClass":"edplus_anim-gFCdm2","epGeneratedClass":"eplus-wrapper"},"innerBlocks":[],"innerHTML":"\n<p class=\" eplus-wrapper\">See: Fiszer, Jan Stanis\u0142aw, et al. \"<a href=\"https:\/\/archive.ismrm.org\/2024\/2221.html\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Federated image-to-image MRI translation from heterogeneous multiple-sites data<\/a>.\" ISMRM 2024, https:\/\/archive.ismrm.org\/2024\/2221.html\u00a0<\/p>\n","innerContent":["\n<p class=\" eplus-wrapper\">See: Fiszer, Jan Stanis\u0142aw, et al. 
\"<a href=\"https:\/\/archive.ismrm.org\/2024\/2221.html\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Federated image-to-image MRI translation from heterogeneous multiple-sites data<\/a>.\" ISMRM 2024, https:\/\/archive.ismrm.org\/2024\/2221.html\u00a0<\/p>\n"]},{"blockName":"core\/spacer","attrs":{"height":"30px","epAnimationGeneratedClass":"edplus_anim-CrDnai","epGeneratedClass":"eplus-wrapper"},"innerBlocks":[],"innerHTML":"\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n","innerContent":["\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n"]},{"blockName":"core\/heading","attrs":{"level":4,"epAnimationGeneratedClass":"edplus_anim-k7c2VH","epGeneratedClass":"eplus-wrapper"},"innerBlocks":[],"innerHTML":"\n<h4 class=\"wp-block-heading eplus-wrapper\" id=\"h-tabattention-learning-attention-conditionally-on-tabular-data-nbsp\">TabAttention: Learning Attention Conditionally on Tabular Data&nbsp;<\/h4>\n","innerContent":["\n<h4 class=\"wp-block-heading eplus-wrapper\" id=\"h-tabattention-learning-attention-conditionally-on-tabular-data-nbsp\">TabAttention: Learning Attention Conditionally on Tabular Data&nbsp;<\/h4>\n"]},{"blockName":"core\/spacer","attrs":{"height":"30px","epAnimationGeneratedClass":"edplus_anim-CrDnai","epGeneratedClass":"eplus-wrapper"},"innerBlocks":[],"innerHTML":"\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n","innerContent":["\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n"]},{"blockName":"core\/paragraph","attrs":{"epAnimationGeneratedClass":"edplus_anim-PPcv0K","epGeneratedClass":"eplus-wrapper"},"innerBlocks":[],"innerHTML":"\n<p class=\" eplus-wrapper\">The \u201cTabAttention: Learning Attention Conditionally on Tabular Data\u201d study introduces a novel attention mechanism designed to enhance deep learning models by integrating tabular data 
with imaging analysis.&nbsp;<\/p>\n","innerContent":["\n<p class=\" eplus-wrapper\">The \u201cTabAttention: Learning Attention Conditionally on Tabular Data\u201d study introduces a novel attention mechanism designed to enhance deep learning models by integrating tabular data with imaging analysis.&nbsp;<\/p>\n"]},{"blockName":"core\/paragraph","attrs":{"epAnimationGeneratedClass":"edplus_anim-LeJ7mg","epGeneratedClass":"eplus-wrapper"},"innerBlocks":[],"innerHTML":"\n<p class=\" eplus-wrapper\">TabAttention extends the Convolutional Block Attention Module (CBAM) by adding a Temporal Attention Module (TAM), which incorporates multi-head self-attention to learn attention maps across spatial, channel, and temporal dimensions. Unlike traditional approaches that concatenate imaging and tabular data in the final layers, TabAttention embeds tabular information directly into attention computations, allowing the model to refine feature selection dynamically.&nbsp;<\/p>\n","innerContent":["\n<p class=\" eplus-wrapper\">TabAttention extends the Convolutional Block Attention Module (CBAM) by adding a Temporal Attention Module (TAM), which incorporates multi-head self-attention to learn attention maps across spatial, channel, and temporal dimensions. Unlike traditional approaches that concatenate imaging and tabular data in the final layers, TabAttention embeds tabular information directly into attention computations, allowing the model to refine feature selection dynamically.&nbsp;<\/p>\n"]},{"blockName":"core\/paragraph","attrs":{"epAnimationGeneratedClass":"edplus_anim-F9XOGx","epGeneratedClass":"eplus-wrapper"},"innerBlocks":[],"innerHTML":"\n<p class=\" eplus-wrapper\">The study evaluates this method on fetal birth weight (FBW) estimation, leveraging ultrasound video scans and biometric measurements. 
Results show that TabAttention outperforms clinicians and existing deep learning models, demonstrating its potential for improving computer-aided diagnosis where both imaging and structured clinical data are available.&nbsp;<\/p>\n","innerContent":["\n<p class=\" eplus-wrapper\">The study evaluates this method on fetal birth weight (FBW) estimation, leveraging ultrasound video scans and biometric measurements. Results show that TabAttention outperforms clinicians and existing deep learning models, demonstrating its potential for improving computer-aided diagnosis where both imaging and structured clinical data are available.&nbsp;<\/p>\n"]},{"blockName":"core\/paragraph","attrs":{"epAnimationGeneratedClass":"edplus_anim-EaZt2w","epGeneratedClass":"eplus-wrapper"},"innerBlocks":[],"innerHTML":"\n<p class=\" eplus-wrapper\">By enabling conditional attention learning, this method enhances predictive accuracy in medical imaging applications, offering a more integrated approach to multi-modal data analysis.&nbsp;<\/p>\n","innerContent":["\n<p class=\" eplus-wrapper\">By enabling conditional attention learning, this method enhances predictive accuracy in medical imaging applications, offering a more integrated approach to multi-modal data analysis.&nbsp;<\/p>\n"]},{"blockName":"core\/paragraph","attrs":{"epAnimationGeneratedClass":"edplus_anim-0t36lI","epGeneratedClass":"eplus-wrapper"},"innerBlocks":[],"innerHTML":"\n<p class=\" eplus-wrapper\">See: Grzeszczyk, Michal K., et al. \"<a href=\"https:\/\/link.springer.com\/chapter\/10.1007\/978-3-031-43990-2_33\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">TabAttention: Learning attention conditionally on tabular data.\"\u202f<em>International Conference on Medical Image Computing and Computer-Assisted Intervention<\/em><\/a>. Cham: Springer Nature Switzerland, 2023.\u00a0<\/p>\n","innerContent":["\n<p class=\" eplus-wrapper\">See: Grzeszczyk, Michal K., et al. 
\"<a href=\"https:\/\/link.springer.com\/chapter\/10.1007\/978-3-031-43990-2_33\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">TabAttention: Learning attention conditionally on tabular data.\"\u202f<em>International Conference on Medical Image Computing and Computer-Assisted Intervention<\/em><\/a>. Cham: Springer Nature Switzerland, 2023.\u00a0<\/p>\n"]},{"blockName":"core\/spacer","attrs":{"height":"30px","epAnimationGeneratedClass":"edplus_anim-CrDnai","epGeneratedClass":"eplus-wrapper"},"innerBlocks":[],"innerHTML":"\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n","innerContent":["\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n"]},{"blockName":"core\/paragraph","attrs":{"epAnimationGeneratedClass":"edplus_anim-UjzNgb","epGeneratedClass":"eplus-wrapper"},"innerBlocks":[],"innerHTML":"\n<p class=\" eplus-wrapper\">The interdisciplinary expertise of Sano researchers enables them to identify and address challenges raised by the medical community, leading to the development of <strong>innovative solutions in medical imaging and analysis<\/strong>. By leveraging insights from multiple disciplines, they create technologies with a <strong>broad range of applications<\/strong>, enhancing diagnostic accuracy and efficiency.\u00a0<\/p>\n","innerContent":["\n<p class=\" eplus-wrapper\">The interdisciplinary expertise of Sano researchers enables them to identify and address challenges raised by the medical community, leading to the development of <strong>innovative solutions in medical imaging and analysis<\/strong>. 
By leveraging insights from multiple disciplines, they create technologies with a <strong>broad range of applications<\/strong>, enhancing diagnostic accuracy and efficiency.\u00a0<\/p>\n"]},{"blockName":"core\/paragraph","attrs":{"epAnimationGeneratedClass":"edplus_anim-1E2Pkr","epGeneratedClass":"eplus-wrapper"},"innerBlocks":[],"innerHTML":"\n<p class=\" eplus-wrapper\">Additionally, Sano integrates <strong>techniques adapted from other industries<\/strong>, such as <strong>gaming technology<\/strong>, to optimize imaging processes. For example, <strong>methods developed for lighting and shadow analysis in images<\/strong> have been repurposed for medical applications. These techniques are designed to <strong>reduce computational demands while maintaining high analytical precision<\/strong>, making advanced imaging solutions more accessible and efficient.&nbsp;<\/p>\n","innerContent":["\n<p class=\" eplus-wrapper\">Additionally, Sano integrates <strong>techniques adapted from other industries<\/strong>, such as <strong>gaming technology<\/strong>, to optimize imaging processes. For example, <strong>methods developed for lighting and shadow analysis in images<\/strong> have been repurposed for medical applications. 
These techniques are designed to <strong>reduce computational demands while maintaining high analytical precision<\/strong>, making advanced imaging solutions more accessible and efficient.&nbsp;<\/p>\n"]},{"blockName":"core\/spacer","attrs":{"height":"30px","epAnimationGeneratedClass":"edplus_anim-CrDnai","epGeneratedClass":"eplus-wrapper"},"innerBlocks":[],"innerHTML":"\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n","innerContent":["\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n"]},{"blockName":"core\/heading","attrs":{"epAnimationGeneratedClass":"edplus_anim-H04rQm","epGeneratedClass":"eplus-wrapper"},"innerBlocks":[],"innerHTML":"\n<h2 class=\"wp-block-heading eplus-wrapper\" id=\"h-solutions\">Solutions\u00a0<\/h2>\n","innerContent":["\n<h2 class=\"wp-block-heading eplus-wrapper\" id=\"h-solutions\">Solutions\u00a0<\/h2>\n"]},{"blockName":"core\/paragraph","attrs":{"epAnimationGeneratedClass":"edplus_anim-Ca2pQv","epGeneratedClass":"eplus-wrapper"},"innerBlocks":[],"innerHTML":"\n<p class=\" eplus-wrapper\">At Sano, theoretical scientific knowledge serves as the foundation upon which practical solutions are built. What begins as cutting-edge research presented to the scientific community gradually evolves into innovations that enhance everyday medical applications. This seamless transition from theory to practice ensures that computational advancements directly contribute to improving patient care and medical decision-making.&nbsp;<\/p>\n","innerContent":["\n<p class=\" eplus-wrapper\">At Sano, theoretical scientific knowledge serves as the foundation upon which practical solutions are built. What begins as cutting-edge research presented to the scientific community gradually evolves into innovations that enhance everyday medical applications. 
This seamless transition from theory to practice ensures that computational advancements directly contribute to improving patient care and medical decision-making.&nbsp;<\/p>\n"]},{"blockName":"core\/paragraph","attrs":{"epAnimationGeneratedClass":"edplus_anim-avg3BC","epGeneratedClass":"eplus-wrapper"},"innerBlocks":[],"innerHTML":"\n<p class=\" eplus-wrapper\">One of the key strengths of Sano is its ability to bridge the gap between fundamental research and real-world healthcare challenges. By integrating computational techniques with clinical applications, researchers develop solutions that not only push the boundaries of science but also have tangible benefits in medicine. Among the many initiatives at Sano, several projects stand out for their potential impact:&nbsp;<\/p>\n","innerContent":["\n<p class=\" eplus-wrapper\">One of the key strengths of Sano is its ability to bridge the gap between fundamental research and real-world healthcare challenges. By integrating computational techniques with clinical applications, researchers develop solutions that not only push the boundaries of science but also have tangible benefits in medicine. 
Among the many initiatives at Sano, several projects stand out for their potential impact:&nbsp;<\/p>\n"]},{"blockName":"core\/spacer","attrs":{"height":"30px","epAnimationGeneratedClass":"edplus_anim-CrDnai","epGeneratedClass":"eplus-wrapper"},"innerBlocks":[],"innerHTML":"\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n","innerContent":["\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n"]},{"blockName":"core\/heading","attrs":{"level":3,"epAnimationGeneratedClass":"edplus_anim-vAtEvA","epGeneratedClass":"eplus-wrapper"},"innerBlocks":[],"innerHTML":"\n<h3 class=\"wp-block-heading eplus-wrapper\" id=\"h-bsd4health-nbsp\">BSD4Health&nbsp;<\/h3>\n","innerContent":["\n<h3 class=\"wp-block-heading eplus-wrapper\" id=\"h-bsd4health-nbsp\">BSD4Health&nbsp;<\/h3>\n"]},{"blockName":"core\/paragraph","attrs":{"epAnimationGeneratedClass":"edplus_anim-68hVqf","epGeneratedClass":"eplus-wrapper"},"innerBlocks":[],"innerHTML":"\n<p class=\" eplus-wrapper\">\u2022 <strong>BSD4Health<\/strong>, led by <strong>Rosmary Blanco<\/strong> from the <strong>Computational Neuroscience team<\/strong>, focuses on applying <strong>biomedical signal processing and AI techniques<\/strong> to analyze complex neurophysiological data. This project aims to enhance diagnostics and treatment monitoring, particularly in <strong>neurological disorders<\/strong>.&nbsp;<\/p>\n","innerContent":["\n<p class=\" eplus-wrapper\">\u2022 <strong>BSD4Health<\/strong>, led by <strong>Rosmary Blanco<\/strong> from the <strong>Computational Neuroscience team<\/strong>, focuses on applying <strong>biomedical signal processing and AI techniques<\/strong> to analyze complex neurophysiological data. 
This project aims to enhance diagnostics and treatment monitoring, particularly in <strong>neurological disorders<\/strong>.&nbsp;<\/p>\n"]},{"blockName":"core\/spacer","attrs":{"height":"30px","epAnimationGeneratedClass":"edplus_anim-CrDnai","epGeneratedClass":"eplus-wrapper"},"innerBlocks":[],"innerHTML":"\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n","innerContent":["\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n"]},{"blockName":"core\/heading","attrs":{"level":3,"epAnimationGeneratedClass":"edplus_anim-mTWOT1","epGeneratedClass":"eplus-wrapper"},"innerBlocks":[],"innerHTML":"\n<h3 class=\"wp-block-heading eplus-wrapper\" id=\"h-medical-simulators-by-higs-nbsp\">Medical Simulators by HIGS&nbsp;<\/h3>\n","innerContent":["\n<h3 class=\"wp-block-heading eplus-wrapper\" id=\"h-medical-simulators-by-higs-nbsp\">Medical Simulators by HIGS&nbsp;<\/h3>\n"]},{"blockName":"core\/paragraph","attrs":{"epAnimationGeneratedClass":"edplus_anim-JJjSFd","epGeneratedClass":"eplus-wrapper"},"innerBlocks":[],"innerHTML":"\n<p class=\" eplus-wrapper\">\u2022 <strong>Health Informatics Group<\/strong> is actively developing <strong>medical simulators<\/strong>, leveraging <strong>computational modeling and interactive technologies<\/strong> to create realistic training environments for clinicians. These tools help medical professionals refine their skills and improve patient outcomes through <strong>simulation-based education<\/strong>.&nbsp;<\/p>\n","innerContent":["\n<p class=\" eplus-wrapper\">\u2022 <strong>Health Informatics Group<\/strong> is actively developing <strong>medical simulators<\/strong>, leveraging <strong>computational modeling and interactive technologies<\/strong> to create realistic training environments for clinicians. 
These tools help medical professionals refine their skills and improve patient outcomes through <strong>simulation-based education<\/strong>.&nbsp;<\/p>\n"]},{"blockName":"core\/spacer","attrs":{"height":"30px","epAnimationGeneratedClass":"edplus_anim-CrDnai","epGeneratedClass":"eplus-wrapper"},"innerBlocks":[],"innerHTML":"\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n","innerContent":["\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n"]},{"blockName":"core\/heading","attrs":{"level":3,"epAnimationGeneratedClass":"edplus_anim-JMPtW8","epGeneratedClass":"eplus-wrapper"},"innerBlocks":[],"innerHTML":"\n<h3 class=\"wp-block-heading eplus-wrapper\" id=\"h-sneuroeye-nbsp\">SNeuroEye&nbsp;<\/h3>\n","innerContent":["\n<h3 class=\"wp-block-heading eplus-wrapper\" id=\"h-sneuroeye-nbsp\">SNeuroEye&nbsp;<\/h3>\n"]},{"blockName":"core\/paragraph","attrs":{"epAnimationGeneratedClass":"edplus_anim-97XN8i","epGeneratedClass":"eplus-wrapper"},"innerBlocks":[],"innerHTML":"\n<p class=\" eplus-wrapper\">\u2022 <strong>SNeuroEye<\/strong>, led by <strong>Monika Pytlarz<\/strong> from the <strong>Computational Neuroscience Group<\/strong>, is an AI-powered system for automatic <strong>neuroradiology<\/strong> <strong>reporting<\/strong> in brain lesion diagnosis. By integrating computer vision for lesion detection, classification, and segmentation with natural language processing (NLP) for structured report generation, the platform assists radiologists in interpreting MRI scans. 
Initially designed as an educational tool for radiologists in training, SNeuroEye now focuses on <strong>reducing<\/strong> <strong>workload<\/strong>, <strong>improving<\/strong> <strong>diagnostic<\/strong> <strong>accuracy<\/strong>, and <strong>streamlining<\/strong> reporting <strong>workflows<\/strong>.&nbsp;<\/p>\n","innerContent":["\n<p class=\" eplus-wrapper\">\u2022 <strong>SNeuroEye<\/strong>, led by <strong>Monika Pytlarz<\/strong> from the <strong>Computational Neuroscience Group<\/strong>, is an AI-powered system for automatic <strong>neuroradiology<\/strong> <strong>reporting<\/strong> in brain lesion diagnosis. By integrating computer vision for lesion detection, classification, and segmentation with natural language processing (NLP) for structured report generation, the platform assists radiologists in interpreting MRI scans. Initially designed as an educational tool for radiologists in training, SNeuroEye now focuses on <strong>reducing<\/strong> <strong>workload<\/strong>, <strong>improving<\/strong> <strong>diagnostic<\/strong> <strong>accuracy<\/strong>, and <strong>streamlining<\/strong> reporting <strong>workflows<\/strong>.&nbsp;<\/p>\n"]},{"blockName":"core\/paragraph","attrs":{"epAnimationGeneratedClass":"edplus_anim-dCN4vY","epGeneratedClass":"eplus-wrapper"},"innerBlocks":[],"innerHTML":"\n<p class=\" eplus-wrapper\">By fostering such interdisciplinary projects, Sano <strong>translates scientific discoveries into practical, patient-centered solutions<\/strong>. 
This approach not only accelerates <strong>technological advancements in computational medicine<\/strong> but also strengthens the collaboration between researchers, clinicians, and industry partners, ensuring that innovation remains deeply connected to real medical needs.&nbsp;<\/p>\n"]},{"blockName":"core\/spacer","attrs":{"height":"30px","epAnimationGeneratedClass":"edplus_anim-CrDnai","epGeneratedClass":"eplus-wrapper"},"innerBlocks":[],"innerHTML":"\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n","innerContent":["\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n"]},{"blockName":"core\/heading","attrs":{"epAnimationGeneratedClass":"edplus_anim-j7rIHP","epGeneratedClass":"eplus-wrapper"},"innerBlocks":[],"innerHTML":"\n<h2 class=\"wp-block-heading eplus-wrapper\" id=\"h-networking-nbsp\">Networking&nbsp;<\/h2>\n","innerContent":["\n<h2 class=\"wp-block-heading eplus-wrapper\" id=\"h-networking-nbsp\">Networking&nbsp;<\/h2>\n"]},{"blockName":"core\/paragraph","attrs":{"epAnimationGeneratedClass":"edplus_anim-O6r9LY","epGeneratedClass":"eplus-wrapper"},"innerBlocks":[],"innerHTML":"\n<p class=\" eplus-wrapper\">Sano Centre for Computational Medicine fosters collaboration between academia, healthcare, and industry through scientific seminars, industry partnerships, and interdisciplinary events. These initiatives drive advancements in computational medicine and facilitate the translation of research into real-world applications.&nbsp;<\/p>\n","innerContent":["\n<p class=\" eplus-wrapper\">Sano Centre for Computational Medicine fosters collaboration between academia, healthcare, and industry through scientific seminars, industry partnerships, and interdisciplinary events. 
These initiatives drive advancements in computational medicine and facilitate the translation of research into real-world applications.&nbsp;<\/p>\n"]},{"blockName":"core\/spacer","attrs":{"height":"30px","epAnimationGeneratedClass":"edplus_anim-CrDnai","epGeneratedClass":"eplus-wrapper"},"innerBlocks":[],"innerHTML":"\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n","innerContent":["\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n"]},{"blockName":"core\/heading","attrs":{"epAnimationGeneratedClass":"edplus_anim-jmc8Kt","epGeneratedClass":"eplus-wrapper"},"innerBlocks":[],"innerHTML":"\n<h2 class=\"wp-block-heading eplus-wrapper\" id=\"h-academic-and-medical-collaborations\">Academic and Medical Collaborations\u00a0<\/h2>\n","innerContent":["\n<h2 class=\"wp-block-heading eplus-wrapper\" id=\"h-academic-and-medical-collaborations\">Academic and Medical Collaborations\u00a0<\/h2>\n"]},{"blockName":"core\/spacer","attrs":{"height":"30px","epAnimationGeneratedClass":"edplus_anim-CrDnai","epGeneratedClass":"eplus-wrapper"},"innerBlocks":[],"innerHTML":"\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n","innerContent":["\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n"]},{"blockName":"core\/paragraph","attrs":{"epAnimationGeneratedClass":"edplus_anim-WQyfgL","epGeneratedClass":"eplus-wrapper"},"innerBlocks":[],"innerHTML":"\n<p class=\" eplus-wrapper\">Sano is actively expanding its outreach and enhancing its impact in the field of computational medicine through a series of interdisciplinary initiatives. One such initiative is the AI Neuro Summer School, which served as a convergence point for experts in computational neuroscience, neuroimaging, and artificial intelligence. 
&nbsp;<\/p>\n","innerContent":["\n<p class=\" eplus-wrapper\">Sano is actively expanding its outreach and enhancing its impact in the field of computational medicine through a series of interdisciplinary initiatives. One such initiative is the AI Neuro Summer School, which served as a convergence point for experts in computational neuroscience, neuroimaging, and artificial intelligence. &nbsp;<\/p>\n"]},{"blockName":"core\/paragraph","attrs":{"epAnimationGeneratedClass":"edplus_anim-hMPo64","epGeneratedClass":"eplus-wrapper"},"innerBlocks":[],"innerHTML":"\n<p class=\" eplus-wrapper\">This effort is further amplified by Sano's participation in prominent global conferences.&nbsp;Among these is the annual meeting of the ISMRM (International Society for Magnetic Resonance in Medicine), a global non-profit organization dedicated to advancing the science and application of magnetic resonance in medicine and biology. ISMRM is instrumental in facilitating communication and knowledge sharing among a diverse membership that includes clinicians, physicists, engineers, and technologists.&nbsp; Another key conference is the IEEE International Symposium on Biomedical Imaging (ISBI), which stands as a premier event in the field. ISBI showcases the latest advancements in biomedical imaging, fostering a rich exchange of research and encouraging collaborations across academia, healthcare, and industry.&nbsp;Additionally, Sano participates in the annual conference of the Medical Image Computing and Computer-Assisted Intervention Society (MICCAI). As a leading event, MICCAI attracts top scientists, engineers, and clinicians involved in medical imaging and computer-assisted interventions. 
The conference focuses on both foundational research and emerging innovations, addressing topics like inclusive machine learning, affordable imaging solutions, and image-guided surgery, particularly in resource-limited settings.&nbsp;&nbsp;<\/p>\n","innerContent":["\n<p class=\" eplus-wrapper\">This effort is further amplified by Sano's participation in prominent global conferences.&nbsp;Among these is the annual meeting of the ISMRM (International Society for Magnetic Resonance in Medicine), a global non-profit organization dedicated to advancing the science and application of magnetic resonance in medicine and biology. ISMRM is instrumental in facilitating communication and knowledge sharing among a diverse membership that includes clinicians, physicists, engineers, and technologists.&nbsp; Another key conference is the IEEE International Symposium on Biomedical Imaging (ISBI), which stands as a premier event in the field. ISBI showcases the latest advancements in biomedical imaging, fostering a rich exchange of research and encouraging collaborations across academia, healthcare, and industry.&nbsp;Additionally, Sano participates in the annual conference of the Medical Image Computing and Computer-Assisted Intervention Society (MICCAI). As a leading event, MICCAI attracts top scientists, engineers, and clinicians involved in medical imaging and computer-assisted interventions. 
The conference focuses on both foundational research and emerging innovations, addressing topics like inclusive machine learning, affordable imaging solutions, and image-guided surgery, particularly in resource-limited settings.&nbsp;&nbsp;<\/p>\n"]},{"blockName":"core\/paragraph","attrs":{"epAnimationGeneratedClass":"edplus_anim-hS8wzh","epGeneratedClass":"eplus-wrapper"},"innerBlocks":[],"innerHTML":"\n<p class=\" eplus-wrapper\">Through these engagements, Sano not only strengthens its presence in computational medicine but also fosters international collaborations and secures new research partnerships, positioning itself at the forefront of global scientific endeavours in the field.&nbsp;<\/p>\n","innerContent":["\n<p class=\" eplus-wrapper\">Through these engagements, Sano not only strengthens its presence in computational medicine but also fosters international collaborations and secures new research partnerships, positioning itself at the forefront of global scientific endeavours in the field.&nbsp;<\/p>\n"]},{"blockName":"core\/spacer","attrs":{"height":"30px","epAnimationGeneratedClass":"edplus_anim-CrDnai","epGeneratedClass":"eplus-wrapper"},"innerBlocks":[],"innerHTML":"\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n","innerContent":["\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n"]},{"blockName":"core\/heading","attrs":{"epAnimationGeneratedClass":"edplus_anim-LjzDdP","epGeneratedClass":"eplus-wrapper"},"innerBlocks":[],"innerHTML":"\n<h2 class=\"wp-block-heading eplus-wrapper\" id=\"h-industry-engagement-and-commercialization-nbsp\">Industry Engagement and Commercialization&nbsp;<\/h2>\n","innerContent":["\n<h2 class=\"wp-block-heading eplus-wrapper\" id=\"h-industry-engagement-and-commercialization-nbsp\">Industry Engagement and 
Commercialization&nbsp;<\/h2>\n"]},{"blockName":"core\/spacer","attrs":{"height":"30px","epAnimationGeneratedClass":"edplus_anim-CrDnai","epGeneratedClass":"eplus-wrapper"},"innerBlocks":[],"innerHTML":"\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n","innerContent":["\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n"]},{"blockName":"core\/paragraph","attrs":{"epAnimationGeneratedClass":"edplus_anim-movr2V","epGeneratedClass":"eplus-wrapper"},"innerBlocks":[],"innerHTML":"\n<p class=\" eplus-wrapper\">Sano actively partners with industry to accelerate AI adoption in healthcare. Through workshops and networking events, it collaborates with technology companies, pharmaceutical firms, and medical device manufacturers to develop scalable, regulatory-compliant AI solutions, including predictive analytics, federated learning, and automated medical imaging.&nbsp;<\/p>\n","innerContent":["\n<p class=\" eplus-wrapper\">Sano actively partners with industry to accelerate AI adoption in healthcare. Through workshops and networking events, it collaborates with technology companies, pharmaceutical firms, and medical device manufacturers to develop scalable, regulatory-compliant AI solutions, including predictive analytics, federated learning, and automated medical imaging.&nbsp;<\/p>\n"]},{"blockName":"core\/paragraph","attrs":{"epAnimationGeneratedClass":"edplus_anim-6aV08a","epGeneratedClass":"eplus-wrapper"},"innerBlocks":[],"innerHTML":"\n<p class=\" eplus-wrapper\">Additionally, Sano supports startups and entrepreneurs, providing access to research expertise, computational infrastructure, and commercialization pathways. 
Through technology transfer initiatives, cutting-edge research is transformed into practical, industry-ready solutions.&nbsp;<\/p>\n","innerContent":["\n<p class=\" eplus-wrapper\">Additionally, Sano supports startups and entrepreneurs, providing access to research expertise, computational infrastructure, and commercialization pathways. Through technology transfer initiatives, cutting-edge research is transformed into practical, industry-ready solutions.&nbsp;<\/p>\n"]},{"blockName":"core\/spacer","attrs":{"height":"30px","epAnimationGeneratedClass":"edplus_anim-CrDnai","epGeneratedClass":"eplus-wrapper"},"innerBlocks":[],"innerHTML":"\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n","innerContent":["\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n"]},{"blockName":"core\/heading","attrs":{"epAnimationGeneratedClass":"edplus_anim-VKIQ4C","epGeneratedClass":"eplus-wrapper"},"innerBlocks":[],"innerHTML":"\n<h2 class=\"wp-block-heading eplus-wrapper\" id=\"h-interdisciplinary-networking-and-conferences-nbsp\">Interdisciplinary Networking and Conferences&nbsp;<\/h2>\n","innerContent":["\n<h2 class=\"wp-block-heading eplus-wrapper\" id=\"h-interdisciplinary-networking-and-conferences-nbsp\">Interdisciplinary Networking and Conferences&nbsp;<\/h2>\n"]},{"blockName":"core\/spacer","attrs":{"height":"30px","epAnimationGeneratedClass":"edplus_anim-CrDnai","epGeneratedClass":"eplus-wrapper"},"innerBlocks":[],"innerHTML":"\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n","innerContent":["\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n"]},{"blockName":"core\/paragraph","attrs":{"epAnimationGeneratedClass":"edplus_anim-tZGwcD","epGeneratedClass":"eplus-wrapper"},"innerBlocks":[],"innerHTML":"\n<p class=\" eplus-wrapper\">Sano expands its outreach through interdisciplinary 
initiatives, such as the AI Neuro Summer School, bringing together experts in computational neuroscience, neuroimaging, and AI. By participating in global conferences like ISMRM and MICCAI, Sano strengthens its presence in computational medicine, fostering international collaborations and securing new research partnerships.&nbsp;<\/p>\n","innerContent":["\n<p class=\" eplus-wrapper\">Sano expands its outreach through interdisciplinary initiatives, such as the AI Neuro Summer School, bringing together experts in computational neuroscience, neuroimaging, and AI. By participating in global conferences like ISMRM and MICCAI, Sano strengthens its presence in computational medicine, fostering international collaborations and securing new research partnerships.&nbsp;<\/p>\n"]},{"blockName":"core\/spacer","attrs":{"height":"30px","epAnimationGeneratedClass":"edplus_anim-CrDnai","epGeneratedClass":"eplus-wrapper"},"innerBlocks":[],"innerHTML":"\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n","innerContent":["\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n"]},{"blockName":"core\/heading","attrs":{"epAnimationGeneratedClass":"edplus_anim-Hf0OVp","epGeneratedClass":"eplus-wrapper"},"innerBlocks":[],"innerHTML":"\n<h2 class=\"wp-block-heading eplus-wrapper\" id=\"h-a-hub-for-ai-driven-medicine-nbsp\">A Hub for AI-Driven Medicine&nbsp;<\/h2>\n","innerContent":["\n<h2 class=\"wp-block-heading eplus-wrapper\" id=\"h-a-hub-for-ai-driven-medicine-nbsp\">A Hub for AI-Driven Medicine&nbsp;<\/h2>\n"]},{"blockName":"core\/spacer","attrs":{"height":"30px","epAnimationGeneratedClass":"edplus_anim-CrDnai","epGeneratedClass":"eplus-wrapper"},"innerBlocks":[],"innerHTML":"\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer eplus-wrapper\"><\/div>\n","innerContent":["\n<div style=\"height:30px\" aria-hidden=\"true\" class=\"wp-block-spacer 
eplus-wrapper\"><\/div>\n"]},{"blockName":"core\/paragraph","attrs":{"epAnimationGeneratedClass":"edplus_anim-WQyfgL","epGeneratedClass":"eplus-wrapper"},"innerBlocks":[],"innerHTML":"\n<p class=\" eplus-wrapper\">Through its academic, industry, and interdisciplinary initiatives, Sano is establishing itself as a leading hub for AI-driven medical research and innovation. By bridging the gap between computational science and clinical practice, it ensures that its technologies translate into practical, ethical, and impactful solutions for healthcare, ultimately benefiting both medical professionals and patients.<\/p>\n","innerContent":["\n<p class=\" eplus-wrapper\">Through its academic, industry, and interdisciplinary initiatives, Sano is establishing itself as a leading hub for AI-driven medical research and innovation. By bridging the gap between computational science and clinical practice, it ensures that its technologies translate into practical, ethical, and impactful solutions for healthcare, ultimately benefiting both medical professionals and 
patients.<\/p>\n"]}],"meta_data":{"has_thumbnail_pattern":false,"share_on_social_media":{"has_social_media":false}},"featured_image":{"url":"https:\/\/sano.science\/wp-content\/uploads\/2025\/02\/Medical-Image-Processing-Analysis-1024x724.webp"},"main_category":{"name":"Uncategorized"},"prev_page":{"slug":"strengthening-ai-collaborations-agh-sano"},"next_page":{"slug":"2024-through-the-eyes-of-sanos-phd-students"},"_links":{"self":[{"href":"https:\/\/sano.science\/index.php\/wp-json\/wp\/v2\/posts\/21524","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/sano.science\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/sano.science\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/sano.science\/index.php\/wp-json\/wp\/v2\/users\/8"}],"replies":[{"embeddable":true,"href":"https:\/\/sano.science\/index.php\/wp-json\/wp\/v2\/comments?post=21524"}],"version-history":[{"count":9,"href":"https:\/\/sano.science\/index.php\/wp-json\/wp\/v2\/posts\/21524\/revisions"}],"predecessor-version":[{"id":21551,"href":"https:\/\/sano.science\/index.php\/wp-json\/wp\/v2\/posts\/21524\/revisions\/21551"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/sano.science\/index.php\/wp-json\/wp\/v2\/media\/22150"}],"wp:attachment":[{"href":"https:\/\/sano.science\/index.php\/wp-json\/wp\/v2\/media?parent=21524"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/sano.science\/index.php\/wp-json\/wp\/v2\/categories?post=21524"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/sano.science\/index.php\/wp-json\/wp\/v2\/tags?post=21524"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}