A dispute has erupted after the discovery that 21% of reviews for a major artificial-intelligence conference were generated by AI. What can researchers do if they suspect their papers have been reviewed by AI?

Many researchers have raised concerns on social media about reviews of papers submitted to the upcoming International Conference on Learning Representations (ICLR), an annual meeting for machine-learning specialists. They pointed to problems such as fabricated references and excessively long, vague feedback on their work.
This article has been updated to clarify how Pangram described the model in a preprint.