ChatGPT’s ability to generate realistic experimental images poses a new challenge to academic integrity

Lingxuan Zhu, Yancheng Lai, Weiming Mou, Haoran Zhang, Anqi Lin, Chang Qi, Tao Yang, Liling Xu, Jian Zhang, Peng Luo

Journal of Hematology & Oncology (2024)

Abstract

The rapid advancement of large language models (LLMs) such as ChatGPT has raised concerns about their potential impact on academic integrity. While initial concerns focused on ChatGPT's writing capabilities, recent updates have integrated DALL-E 3's image-generation features, extending the risks to visual evidence in biomedical research. Our tests revealed that ChatGPT's nearly barrier-free image generation can be used to produce fabricated experimental result images, such as blood smears, Western blots, and immunofluorescence staining. Although ChatGPT's current ability to generate experimental images is limited, the risk of misuse is evident. This development underscores the need for immediate action. We suggest that AI providers restrict the generation of experimental images, develop tools to detect AI-generated images, and consider adding "invisible watermarks" to generated images. By implementing these measures, we can better ensure the responsible use of AI technology in academic research and maintain the integrity of scientific evidence.
Keywords
Academic integrity, ChatGPT, DALL-E, Large language model, Experimental images, Western blot