
‘A serious problem’: peer reviews created using AI can avoid detection


The use of AI to write peer reviews presents a significant challenge because AI-generated reports often evade existing detection tools. Researchers warn that it is becoming increasingly difficult to distinguish peer-review reports produced by artificial intelligence from those written by humans.

In one study, a research team in China used the Claude 2.0 large language model (LLM), developed by Anthropic, an AI company based in California, to generate peer-review reports for 20 cancer-biology papers published in the journal eLife. The team then compared the AI-generated reports with the genuine reviews from eLife.

According to Lingxuan Zhu, an oncologist and co-author of the study, the AI-written reviews appeared professional but lacked specific, insightful feedback, which he sees as a serious problem for the peer-review process.

The research revealed that Claude could produce plausible citation requests and convincing rejection recommendations, which could potentially lead journals to reject valid papers based on persuasive AI-generated negative reviews.

While AI detection tools are improving, they still struggle to determine how much of a document was produced by AI, which makes it hard for journals to assess whether a referee report was AI-generated.

Opinions vary on what counts as acceptable use of generative AI in peer review. In a Nature survey, most researchers opposed using AI to write reviewer reports from scratch, but many approved of using it to help answer questions about a paper.

Some believe AI-written reports will not become a widespread problem, since overstretched reviewers can simply decline assignments. Others, such as Mikołaj Piniewski, argue that it is a growing concern: Piniewski has encountered suspected AI-written referee reports and believes that some journal editors accept them, knowingly or not, because peer reviewers are in short supply and the reports are convenient.