OmniCoder-9B Brings Frontier Agent Traces to a 9B Open Coding Model

Original: OmniCoder-9B | 9B coding agent fine-tuned on 425K agentic trajectories

LLM · Mar 13, 2026 · By Insights AI (Reddit) · 1 min read

A new post in r/LocalLLaMA highlights OmniCoder-9B, an open-weight coding agent from Tesslate built on top of Qwen3.5-9B. According to the model card and the Reddit summary, the model was fine-tuned on more than 425,000 curated agentic coding trajectories covering tool use, terminal operations, multi-step reasoning, and real software engineering tasks.

The interesting claim is not just the size. Tesslate says the training set was assembled from successful trajectories produced by frontier systems such as Claude Opus 4.6, GPT-5.4, GPT-5.3-Codex, and Gemini 3.1 Pro, and that the resulting model learned concrete coding-agent behaviors rather than only benchmark-style code completion. The examples called out in the release include read-before-write recovery, reactions to LSP diagnostics, and minimal diff-based edits instead of full file rewrites.
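The "minimal diff-based edits" behavior described in the release can be illustrated with a short sketch: instead of rewriting a whole file, the agent emits only the changed hunk. The snippet below uses Python's standard `difflib`; the file name and the before/after contents are hypothetical examples, not anything from the OmniCoder training data.

```python
import difflib

# Hypothetical before/after snapshots of a file an agent is editing.
before = """def add(a, b):
    return a + b

def sub(a, b):
    return a - b
""".splitlines(keepends=True)

after = """def add(a, b):
    return a + b

def sub(a, b):
    return b - a
""".splitlines(keepends=True)

# Emit only the changed hunk rather than the full rewritten file.
diff = list(difflib.unified_diff(before, after,
                                 fromfile="mathutils.py",
                                 tofile="mathutils.py"))
print("".join(diff))
```

The payoff of a diff-shaped edit is that it is cheap to review, cheap to apply, and makes the agent's intent auditable line by line.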

On the architecture side, the model inherits Qwen3.5-9B’s hybrid design, which interleaves Gated Delta Network layers with standard attention; it ships under an Apache 2.0 license and advertises a native 262K-token context window that can be extended further. That combination is exactly why the LocalLLaMA community paid attention: small open models are only compelling if they are cheap enough to run locally and disciplined enough to act like real coding assistants.
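The hybrid idea can be sketched as a layer schedule: most layers use a cheap linear (gated-delta) mixer, with a full-attention layer inserted periodically. The function name, the 1-in-4 ratio, and the labels below are illustrative assumptions for the sketch, not Qwen3.5-9B's actual configuration.

```python
def layer_plan(n_layers: int, full_attn_every: int = 4) -> list[str]:
    """Hypothetical hybrid schedule: one full-attention layer per
    `full_attn_every` layers; the rest are linear gated-delta layers.
    The real ratio in Qwen3.5-9B may differ."""
    return [
        "full_attention" if (i + 1) % full_attn_every == 0 else "gated_delta"
        for i in range(n_layers)
    ]

plan = layer_plan(8)
# first four entries: gated_delta, gated_delta, gated_delta, full_attention
```

Schedules like this are why long contexts stay affordable: the linear layers cost O(n) in sequence length, and only the occasional full-attention layer pays the quadratic price.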

The early comment thread is short but telling. Readers immediately asked for GGUF, MLX, and larger variants, while others praised Qwen3.5-9B as proof that small models are beginning to punch above their parameter class in agentic coding. The appetite is clearly not just for another instruct model, but for open models that can survive longer tool-driven workflows without collapsing.

If OmniCoder-9B holds up under broader testing, it will reinforce a growing pattern in the open ecosystem: frontier behavior is increasingly being distilled into smaller, cheaper agent-oriented models that developers can actually deploy.

Primary source: Hugging Face model card. Community discussion: r/LocalLLaMA.

