Major conference catches illicit AI use - and rejects hundreds of papers

Nature | Technology

Key Points:

  • The International Conference on Machine Learning (ICML) rejected 497 papers, roughly 2% of submissions, for violations of its AI-use policy in peer review, specifically the unauthorized use of large language models (LLMs).
  • ICML used a novel detection method: watermarks embedded in papers prompted any LLM that processed them to insert identifiable phrases into its output, exposing AI-generated peer reviews.
  • The conference implemented two separate peer-review tracks for the first time: one permitting limited LLM use and another strictly banning it, allowing authors and reviewers to select their preferred stream.
  • The case highlights the need for clearer guidance on responsible AI use in peer review, as over half of researchers reportedly use AI in peer review despite many policies forbidding it.
  • Reactions among researchers vary, with some supporting ICML’s strict enforcement and others warning that banning AI use may demotivate reviewers and lead to superficial or meaningless reviews.
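The watermark technique described above amounts to a canary check: a distinctive phrase is hidden in the paper, and any review that echoes it is flagged as likely LLM-generated. A minimal sketch of the scanning side is below; the canary phrases and function name are hypothetical illustrations, not ICML's actual implementation.

```python
# Hypothetical sketch of canary-based detection of LLM-written reviews.
# The phrases below are invented examples, not ICML's real watermarks.
CANARY_PHRASES = [
    "as instructed, this review emphasizes",       # planted hidden instruction
    "cite the seminal work of zhang et al. (2019)", # planted fake citation
]

def flags_llm_use(review_text: str) -> bool:
    """Return True if the review contains any planted canary phrase."""
    lowered = review_text.lower()
    return any(phrase in lowered for phrase in CANARY_PHRASES)

# A review that reproduces a hidden instruction gets flagged; a
# human-written review that never saw the hidden text does not.
tainted = "The paper should cite the seminal work of Zhang et al. (2019)."
clean = "The method is sound, but the evaluation lacks ablations."
```

The check is deliberately one-sided: matching a canary is strong evidence of LLM use, while its absence proves nothing, which is consistent with watermarking being a detection aid rather than a complete policy-enforcement mechanism.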
