LocalLLaMA’s reaction was almost resigned: of course the public benchmark got benchmaxxed. What mattered was seeing contamination and flawed tests laid out in numbers stark enough that the old bragging rights no longer held up.
#coding-agents
GitHub is rolling GPT-5.5 into Copilot across IDEs, CLI, mobile, github.com, and the cloud agent, turning OpenAI's latest model into a daily coding option instead of a release-note headline. The catch is a 7.5x premium request multiplier, and Business or Enterprise admins must explicitly enable access.
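The multiplier math is worth making concrete. A minimal sketch of how a 7.5x multiplier eats into a premium request allowance; the 7.5x figure comes from the announcement, while the allowance and request counts below are purely illustrative assumptions, not published plan numbers:

```python
# Hypothetical illustration of Copilot-style premium request accounting.
# Only the 7.5x multiplier is from the announcement; the allowance and
# call counts are made-up values for the sake of the arithmetic.

MULTIPLIER = 7.5  # premium request multiplier reported for GPT-5.5


def requests_consumed(model_calls: int, multiplier: float = MULTIPLIER) -> float:
    """Premium request units consumed by a batch of model calls."""
    return model_calls * multiplier


# Assume a hypothetical 300-unit monthly allowance:
allowance = 300
used = requests_consumed(20)  # 20 GPT-5.5 calls at 7.5x each
print(f"{used} of {allowance} premium request units used")  # 150.0 of 300
```

The point of the sketch: at 7.5x, a model that feels like "20 requests" of work bills as 150, so the multiplier, not the raw call count, is what admins are really budgeting around.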
HN piled in because this was bigger than another benchmark refresh. OpenAI said SWE-bench Verified is no longer a trustworthy frontier coding signal, and the thread immediately shifted to contamination, saturated leaderboards, and whether public coding evals can stay clean at all.
Why it matters: public coding benchmarks are getting less useful at the frontier, so a fresh product-side score can move developer attention fast. Cursor says GPT-5.5 is now its top model on CursorBench at 72.8% and is discounting usage by 50% through May 2.
Hacker News liked that Zed did more than add extra agents to a sidebar. The thread focused on worktree isolation, repo scoping, and whether Zed found a more usable shape for multi-agent coding than the usual terminal pile-up. By crawl time on April 25, 2026, the post had 278 points and 160 comments.
What energized LocalLLaMA was not just another Qwen score jump. It was the claim that changing the agent scaffold moved the same family of local models from 19% to 45% to 78.7%, making benchmark comparisons feel less settled than many assumed.
Hacker News treated Anthropic’s Claude Code write-up as a rare admission that product defaults and prompt-layer tweaks can make a model feel worse even when the API layer stays unchanged. By crawl time on April 24, 2026, the thread had 727 points and 543 comments.
Why it matters: Moonshot is turning “agent swarm” from a demo phrase into an execution claim with real scale numbers. The Kimi post says one run can coordinate 300 sub-agents across 4,000 steps and return 100-plus files instead of chat transcripts.
HN latched onto a pain every heavy coding-tool user knows: the bug is tiny, but the diff balloons anyway. A new write-up turns that annoyance into a measurable benchmark and argues that better prompting and RL can make models edit with more restraint.
Alibaba’s April 22 Qwen3.6-Max-Preview post claims top scores across six coding benchmarks and clear gains over Qwen3.6-Plus. The caveat is just as important: this is a hosted proprietary preview, not a new open-weight Qwen release.
GitHub has paused new Copilot Pro, Pro+, and Student sign-ups after agentic workflows pushed compute demand beyond the old plan structure. The sharper signal is economic: token-based session and weekly limits now matter separately from premium request counts.
HN read Kimi K2.6 as a test of whether open-weight coding agents can last through real engineering work. The 12-hour and 13-hour coding cases drew attention, while commenters immediately pressed on speed, provider accuracy, and benchmark realism.