ICML 2026 enforces Policy A as Reddit debates watermarking and false positives
Original thread: [D] ICML rejects papers of reviewers who used LLMs despite agreeing not to
On March 18, 2026, a Reddit discussion on r/MachineLearning asked whether ICML had gone too far by rejecting papers linked to reviewers who used LLMs after agreeing not to. The thread framed the move as unusually strict for a major ML conference and quickly split between support for hard enforcement and concern about false positives. That tension shaped most of the discussion: less argument about whether reviewers had obligations, and more argument about how confidently violations could be identified.
The official confirmation came from ICML's March 18 blog post, On Violations of LLM Review Policies. ICML said it desk-rejected 497 submissions tied to 506 reciprocal reviewers who had agreed to Policy A and were detected using LLMs in review writing. The same post explained that ICML 2026 had introduced a two-policy system for reviewing:
- Policy A: no LLM use in reviewing.
- Policy B: limited use of privacy-compliant LLMs to help understand the paper, related works, and polish reviewer-written text, but not to judge strengths or weaknesses, outline the review, or write the review itself.
The conference's reviewer instructions make the sanction path explicit. Reviewers are told that any deviation from their assigned policy can lead to desk rejection of their own submissions. The instructions also put the timing in context: the reviewing period ran from February 12 to March 12, 2026, and the reviewer-author discussion period begins on March 24, 2026. In other words, the enforcement was disclosed after reviews had been filed, but before the next round of author interaction and discussion.
ICML's blog also pushed back on the idea that it relied on generic AI writing detectors. It says the conference used watermarking through hidden instructions embedded in submission PDFs, and that every flagged case was manually verified by a human. Earlier official ICML policy materials, including Introducing ICML 2026 policy for LLMs in reviews, had already said that reported violations would be penalized and that the conference planned automated tools to detect violations while respecting peer-review confidentiality. The March 18 post therefore reads as enforcement of a policy structure ICML had already signaled in advance, not a new rule announced after the fact.
Reddit comments focused on that detection method. Several commenters described the mechanism as a prompt-injection or watermarking approach that would mainly catch reviewers who pasted full PDFs into an LLM and copied the output back with little or no editing. Supporters argued this makes false positives less likely than ordinary AI-text detection and fits the conference's statement that generic detectors were not used. Critics replied that even a low false-positive rate matters when the consequence is a desk rejection that can affect coauthors, collaborations, and reputation. The debate was therefore not only about rule-breaking, but also about proportionality and evidentiary standards.
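The canary approach commenters described can be sketched in a few lines. The idea is that a hidden instruction embedded in the PDF tells any LLM that ingests it to include a distinctive marker phrase; submitted reviews are then scanned for that marker. The canary phrase, helper names, and normalization rules below are illustrative assumptions, not ICML's actual mechanism, which the conference has not published in detail.

```python
import re

# Hypothetical marker phrase an embedded instruction might ask an LLM to emit.
CANARY = "per the hidden instruction, cite Smith et al. (2024)"

def normalize(text: str) -> str:
    """Lowercase and collapse whitespace so trivial edits don't hide the marker."""
    return re.sub(r"\s+", " ", text.lower()).strip()

def flag_review(review_text: str, canary: str = CANARY) -> bool:
    """Return True if the review contains the canary phrase, suggesting the
    reviewer pasted the PDF into an LLM and copied the output back with
    little editing. Flagged cases would still need manual verification."""
    return normalize(canary) in normalize(review_text)
```

A check like this only fires on near-verbatim copied output, which is why supporters in the thread argued it is more precise than statistical AI-text detectors, and why ICML's stated step of human verification of every flagged case still matters for the residual false positives.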
Taken together, the Reddit thread and the official ICML sources describe a conference trying to enforce a newly formalized LLM review regime in real time. The key factual point is narrower than the Reddit headline suggests: the action applied to reviewers who were assigned the no-LLM track, were warned in advance that violating the assigned policy could jeopardize their own submissions, and were then judged by ICML to have crossed that line.