LLM Coding Performance: Harness Design, Not Models, Is the Key

Original: Improving 15 LLMs at Coding in One Afternoon: Only the Harness Changed

AI · Feb 12, 2026 · By Insights AI (HN) · 1 min read

Overview

Can Bölük demonstrated that edit tool (harness) design, not model selection, is the primary bottleneck in LLM coding performance. Testing 16 models across 180 React codebase tasks revealed that changing only the edit approach produces dramatic improvements.

Problems with Existing Edit Approaches

Patch format (OpenAI/Codex): Uses diff-style strings but fails catastrophically for non-GPT models. Grok 4's failure rate reached 50.7%.

String replacement (Claude Code): Requires exact character matching including whitespace, generating frequent "String to replace not found" errors.
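The whitespace problem above can be seen in a few lines. The following is a minimal sketch of an exact-match replacement tool, not Claude Code's actual implementation; the function name and error message are illustrative:

```python
def str_replace(source: str, old: str, new: str) -> str:
    """Replace `old` with `new`, requiring exactly one exact-character match."""
    if source.count(old) != 1:
        raise ValueError("String to replace not found (or not unique)")
    return source.replace(old, new, 1)

file = "def add(a, b):\n    return a + b\n"

# The model reproduces the line with a tab instead of four spaces,
# so the exact match fails even though the intent is unambiguous.
try:
    str_replace(file, "\treturn a + b", "\treturn a - b")
except ValueError as e:
    print(e)
```

A single invisible character is enough to reject an otherwise correct edit, which is why these tools fail so often in practice.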

Neural merging (Cursor): Fine-tuned a separate model solely to fix edit failures, acknowledging the problem's severity.

The Hashline Solution

The author proposes tagging each line with content hashes. Models reference hash tags rather than reproducing text. This approach:

  • Prevents corruption if files change between reads
  • Eliminates whitespace reproduction requirements
  • Reduces failed edits sharply, suggesting models are flaky at expressing edits, not at understanding tasks
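The mechanism described above can be sketched briefly. This is an assumption-laden illustration of the hashline idea, not the author's actual implementation: each line is shown to the model with a short content hash, and an edit names a line by number plus hash, so a stale or mistyped reference is rejected instead of silently corrupting the file.

```python
import hashlib

def _line_hash(line: str) -> str:
    """Short content hash for one line (4 hex chars, illustrative length)."""
    return hashlib.sha256(line.encode()).hexdigest()[:4]

def tag_lines(text: str) -> list[str]:
    """Render a file as the model would see it, e.g. '3e23| return a + b'."""
    return [f"{_line_hash(line)}| {line}" for line in text.splitlines()]

def apply_edit(text: str, line_no: int, expected_hash: str, new_line: str) -> str:
    """Replace one line, but only if its hash still matches what the model saw."""
    lines = text.splitlines()
    if _line_hash(lines[line_no]) != expected_hash:
        raise ValueError("stale edit: line changed since it was read")
    lines[line_no] = new_line
    return "\n".join(lines)
```

Because the model references a tag rather than reproducing the line, it never needs to match whitespace character-for-character, and an edit against a file that changed between reads fails loudly rather than landing in the wrong place.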

Benchmark Results

Grok Code Fast improved from a 6.7% to a 68.3% success rate, a tenfold gain. As the author puts it, "the model isn't flaky at understanding the task. It's flaky at expressing itself."

Key Takeaway

Open-source harness development benefits all models, while vendor-specific optimization creates isolated silos, ultimately hindering ecosystem progress. The highest-leverage innovation point right now is not model improvement, but harness design.
