Safeguarding Authenticity for Mitigating the Harms of Generative AI: Issues, Research Agenda, and Policies for Detection, Fact-Checking, and Ethical AI

Ahmed Abdeen Hamed, Malgorzata Zachara-Szymanska, Xindong Wu

In: Cell Press iScience, 2024.

As the influence of Transformer-based approaches in general, and generative AI in particular, continues to expand across various domains, concerns regarding authenticity and explainability are on the rise. Here, we share our perspective on the necessity of implementing effective detection, verification, and explainability mechanisms to counteract the potential harms arising from the proliferation of AI-generated inauthentic content and science. We recognize the transformative potential of generative AI, exemplified by ChatGPT, in the scientific landscape. However, we also emphasize the urgency of addressing the associated challenges, particularly in light of the risks posed by disinformation, misinformation, and unreproducible science. This perspective serves as a response to the call for concerted efforts to safeguard the authenticity of information in the age of AI. By prioritizing detection, fact-checking, and explainability policies, we aim to foster a climate of trust, uphold ethical standards, and harness the full potential of AI for the betterment of science and society.

READ HERE: https://www.cell.com/iscience/pdf/S2589-0042(24)00003-8.pdf?_returnURL=https%3A%2F%2Flinkinghub.elsevier.com%2Fretrieve%2Fpii%2FS2589004224000038%3Fshowall%3Dtrue