Ahmed Abdeen Hamed, Malgorzata Zachara-Szymanska, Xindong Wu

As the influence of Transformer-based approaches in general and generative AI in particular continues to expand across various domains, concerns regarding authenticity and explainability are on the rise. Here, we share our perspective on the necessity of implementing effective detection, verification, and explainability mechanisms to counteract the potential harms arising from the proliferation of AI-generated inauthentic content and science. We recognize the transformative potential of generative AI, exemplified by ChatGPT, in the scientific landscape. However, we also emphasize the urgency of addressing associated challenges, particularly in light of the risks posed by disinformation, misinformation, and unreproducible science. This perspective serves as a response to the call for concerted efforts to safeguard the authenticity of information in the age of AI. By prioritizing detection, fact-checking, and explainability policies, we aim to foster a climate of trust, uphold ethical standards, and harness the full potential of AI for the betterment of science and society.
