r/MachineLearning Debates Reported ICML Penalties for No-LLM Review Violations

Original: [D] ICML rejects papers of reviewers who used LLMs despite agreeing not to View original →

Read in other languages: 한국어日本語
AI Mar 20, 2026 By Insights AI (Reddit) 2 min read 1 views Source

On March 18, 2026, a post titled "[D] ICML rejects papers of reviewers who used LLMs despite agreeing not to" rose near the top of r/MachineLearning, where it had 184 points and 70 comments when this crawl ran. The post itself does not link to an official ICML announcement; instead, it cites reports on X claiming that some authors had their papers rejected because a reviewer on the author team used LLMs after opting into the no-LLM review track. That distinction matters. The Reddit thread is best read as community discussion around reported enforcement, not as a standalone official bulletin.

What ICML officially documents

  • ICML 2026 reviewer instructions say reviewers are assigned an actual LLM policy and that any deviation from that assigned policy may lead to desk rejection of their own submissions.
  • The same instructions explicitly note that Position Paper Track reviewing must follow the conservative no-LLM policy.
  • ICML's peer-review ethics page says violations of the LLM policy are part of neglect of reviewer duties and may be grounds for desk rejection of all submissions by the same author.
  • The ethics policy also says prompt injection by authors is forbidden, but papers that merely try to detect reviewer LLM use will not be penalized.

That last point is why commenters kept talking about prompt-injection canaries rather than generic AI detectors. A separate r/MachineLearning thread from February 2026 described hidden strings embedded in PDFs as a likely compliance mechanism. In the March discussion, several commenters argued that such canaries are far more reliable than style-based AI-detection tools because they behave like deterministic markers rather than probabilistic guesses.

Where the Reddit thread converged

  • Most high-voted comments supported strict enforcement when reviewers explicitly agreed not to use LLMs.
  • Several readers said the real governance question is not whether LLMs are useful, but whether conference rules can survive if reviewers ignore policies they opted into.
  • The most sympathetic concern was about coauthors: a whole submission team may be penalized because one person violated the review policy.

This is why the thread matters beyond conference gossip. AI venues are no longer debating LLM review use as an abstract future problem. They are building concrete compliance systems, writing sanctions into reviewer acknowledgements, and forcing researchers to treat review-time LLM use as an integrity issue with authorship-level consequences. Even if not every reported case is publicly adjudicated, the policy direction is now clear.

Sources: r/MachineLearning discussion · ICML 2026 Reviewer Instructions · ICML 2026 Peer-review Ethics · earlier Reddit thread on PDF canaries

Share: Long

Related Articles

Comments (0)

No comments yet. Be the first to comment!

Leave a Comment

© 2026 Insights. All rights reserved.