Hacker News Debates Whether LLM Coding Progress Has Stalled on Maintainer Merge Rates
Original: Are LLM merge rates not getting better?
Two measurements, two very different stories
A March 12 essay on Entropic Thoughts sparked discussion on Hacker News by challenging a comforting assumption in coding-agent discourse: that higher benchmark scores automatically mean more useful software patches. The HN thread stood at 167 points and 155 comments at the time of writing, and it built directly on a March 10 METR note that had already shown a large gap between SWE-bench Verified automated-grader success and what maintainers would actually merge into a repository.
METR's core result is concrete. Maintainers from scikit-learn, Sphinx, and pytest reviewed 296 AI-generated pull requests. After normalizing against a human-written "golden" baseline, METR found that maintainer merge rates sat roughly 24 percentage points below automated-grader pass rates. In practical terms, the note says the 50% success horizon, the task length at which a model succeeds half the time, falls from about 50 minutes under test-passing criteria to 8 minutes under maintainer-merge criteria. That does not prove a hard capability ceiling, but it does show that test success and production acceptability are not interchangeable.
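To make the "50% success horizon" metric concrete, here is a minimal sketch of the general idea: fit success probability against task length and solve for the length at which predicted success crosses 50%. The data and the logit-linear fit below are made up for illustration; METR's actual methodology and numbers are in their note.

```python
import numpy as np

# Hypothetical per-length success rates that fall as tasks get longer.
lengths = np.array([2, 4, 8, 15, 30, 60, 120], dtype=float)  # minutes
success = np.array([0.95, 0.9, 0.8, 0.65, 0.5, 0.3, 0.15])

# Fit a line in logit-vs-log-length space, a simple common choice.
x = np.log(lengths)
logit = np.log(success / (1 - success))
slope, intercept = np.polyfit(x, logit, 1)

# 50% success corresponds to logit = 0, so solve slope*x + intercept = 0.
horizon = np.exp(-intercept / slope)
print(f"50% success horizon ≈ {horizon:.1f} minutes")
```

Under a stricter grading criterion (maintainer merge rather than test passing), the fitted success curve shifts downward, and the same calculation yields a much shorter horizon.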
The stronger claim from the HN-linked essay
The Entropic Thoughts post pushes the interpretation further. Looking only at the merge-rate curve in METR's chart, the author argues there is little visible evidence of improvement after early 2025. To make that claim less subjective, the post compares a gently rising trend line against piecewise-constant and constant alternatives using leave-one-out cross-validation. The reported Brier scores, where lower means better predictions, are 0.0129 for the upward slope, 0.0117 for the piecewise-constant fit, and 0.0100 for the fully constant fit, implying that flat or stepwise explanations predict the observed merge-rate data better than a smooth growth narrative.
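The model-comparison method the post uses can be sketched in a few lines: compute a leave-one-out cross-validated Brier score for each candidate trend model and keep the one with the lowest score. Everything below, including the binary merge outcomes and the two candidate models, is hypothetical; the real data and fits are in the Entropic Thoughts post.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 40)            # normalized time axis
y = rng.binomial(1, 0.35, size=len(t))   # fake merge outcomes (1 = merged)

def loo_brier(t, y, fit_predict):
    """Leave-one-out cross-validated Brier score (lower is better)."""
    errs = []
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        p = fit_predict(t[mask], y[mask], t[i])
        p = np.clip(p, 0.0, 1.0)         # keep predictions valid probabilities
        errs.append((p - y[i]) ** 2)
    return float(np.mean(errs))

def constant_model(t_tr, y_tr, t_new):
    return y_tr.mean()                   # one merge rate for all time

def linear_model(t_tr, y_tr, t_new):
    slope, intercept = np.polyfit(t_tr, y_tr, 1)
    return slope * t_new + intercept     # gently rising (or falling) trend

print("constant:", loo_brier(t, y, constant_model))
print("linear:  ", loo_brier(t, y, linear_model))
```

If the constant model scores lower on held-out points, the extra slope parameter is not earning its keep, which is the shape of the argument the post makes against the smooth-growth reading of the merge-rate chart.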
That does not overturn METR's main result, and the author is careful about that. METR itself says newer models after Sonnet 4.5 were not covered in the careful maintainer-review study, so nobody should over-read the chart into a permanent plateau. But the critique matters because it attacks a common habit in AI progress narratives: treating noisy benchmark curves as if they were direct measurements of deployment-ready value.
Why this matters for coding agents
The operational lesson is sharper than the headline. METR's rejection breakdown suggests models have been improving different failure modes at different rates. Some progress moves patches from outright test failure into the category of code-quality problems or maintainability issues, which helps benchmark optics but still fails human review. That means agent builders may be rewarding the wrong behavior if they optimize purely for benchmark pass rates.
The HN discussion matters because it sits at the intersection of evaluation design and product strategy. If maintainer mergeability is moving slowly while test-passing scores rise faster, then teams shipping coding agents need better real-world acceptance metrics, stronger elicitation loops, and perhaps more human feedback before celebrating benchmark jumps as economic progress. The broader point is not that coding models stopped improving on March 12, 2026. It is that the evidence for "steady mergeable-code progress" is weaker than the headline benchmark numbers imply.
Entropic Thoughts analysis · METR note · Hacker News discussion
Related Articles
A LocalLLaMA post reports that a simple “verify after every edit” loop raised Qwen3.5-35B-A3B from 22.2% to 37.8% on SWE-bench Verified Hard, approaching a cited 40% reference for Claude Opus 4.6.
A LocalLLaMA discussion of SWE-rebench January runs reports close top-tier results, with Claude Code leading pass@1 and pass@5 while open models narrow the gap.
METR's March 10, 2026 note argues that about half of test-passing SWE-bench Verified PRs from recent agents would still be rejected by maintainers. HN treated it as a warning that benchmark wins do not yet measure scope control, code quality, or repo fit.