AI-generated 'academic papers' raise alarms in scientific community

By Web Desk
AI invasion in academia: Concerns mount as ChatGPT creeps into reputable scientific journals.—Reuters/File

The recent uproar over scientists using AI tools, particularly ChatGPT, to generate academic papers has prompted rejection and concern across the academic community, Futurism reported.

The episode raises alarm about AI's penetration into academia, adding to existing problems such as dishonest publishing practices, corrupt admissions processes and exploitative business models.

AI-generated papers are reportedly appearing not only in obscure journals but also in reputable publications. Scholars have demonstrated this on X and other social media platforms by showing that telltale AI-generated phrases such as "According to the information I had at the time" and "I do not have the recent data" turn up in Google Scholar searches.

Some of the journals tracked by worried researchers may be predatory, while others, such as the recognised journal Surfaces and Interfaces, have inadvertently published AI-generated content.

In one example, highlighted by Bellingcat researcher Koltai, an article retained telltale traces of AI-generated text in its introduction, suggesting inadequate editorial control during the peer-review process.

Following the disclosure, journal editors such as Michael J. Naughton of Surfaces and Interfaces have acknowledged the problem and pledged to address it.

Nevertheless, the appearance of AI-written papers, especially in well-known journals, underscores the urgency of tightening publication standards to safeguard academic honesty and credibility.

The risks for journals are mounting: unchecked AI-generated content could destabilise the very foundations of scholarly communication and knowledge sharing.