Poolside opens Laguna XS.2, a 33B/3B coding model for one GPU


LLM · Apr 29, 2026 · By Insights AI · 2 min read

Why this post matters

Open-weight coding models that can run outside the biggest closed API stacks are still rare. Poolside used its X account on April 28 to push Laguna XS.2 into that gap: a model the company described as "33B total / 3B active", built for agentic coding, able to run on a single GPU, and released under Apache 2.0. That combination matters because it lowers the cost of testing long-horizon coding agents without waiting for a closed vendor to expose the right controls.
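The single-GPU claim is plausible on paper, but it is worth noting that memory scales with *total* parameters, since an MoE must keep every expert resident. A back-of-envelope sketch (the bytes-per-parameter figures are standard quantization assumptions, not numbers Poolside published):

```python
# Back-of-envelope VRAM needed just for the weights of a 33B-parameter model.
# An MoE keeps ALL experts in memory, so footprint tracks total (33B), not
# active (3B), parameters. KV cache and activations add overhead on top.
TOTAL_PARAMS = 33e9

def weight_gb(bytes_per_param: float) -> float:
    """VRAM in GiB for the raw weights at a given precision."""
    return TOTAL_PARAMS * bytes_per_param / 2**30

for label, bpp in [("fp16/bf16", 2.0), ("int8", 1.0), ("4-bit", 0.5)]:
    print(f"{label:>9}: ~{weight_gb(bpp):.0f} GiB")
```

At bf16 the weights alone (~61 GiB) want an 80 GB-class card; 4-bit quantization (~15 GiB) brings them into consumer-GPU range. These are estimates from the stated parameter count only, ignoring runtime overhead.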

"33B total / 3B active"

Poolside is not just dropping weights and walking away. The company’s official launch post frames Laguna XS.2 as the first public release from a lab that had mainly served government and public-sector customers with tightly controlled deployments. In the same rollout, Poolside also introduced its larger Laguna M.1 model and previewed the pool coding agent plus the Shimmer development environment. That makes the tweet more than a teaser: it is a marker that Poolside wants outside developers to test the same model family it has been training internally.

The technical context is unusually detailed for a first open-weight release. Poolside’s deeper dive says Laguna XS.2 uses a 33B total / 3B active MoE design trained on more than 30T tokens, and reports 44.5% on SWE-bench Pro and 30.1% on Terminal-Bench 2.0. The same post says the model started pretraining just five weeks before release and is distributed with an agent harness used in Poolside’s own RL workflow. Even if third-party replication takes time, those details are concrete enough to make the launch relevant to anyone tracking open agentic coding models rather than generic chatbots.
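The "33B total / 3B active" split is what makes the single-GPU framing credible on the compute side: per-token forward cost scales with active parameters, so a model of this shape does roughly dense-3B work per token while carrying 33B of capacity. A rough comparison, using the common ~2-FLOPs-per-active-parameter estimate (a generic rule of thumb, not a figure from Poolside):

```python
# Rough per-token forward-pass cost: ~2 FLOPs per parameter actually used.
# A 33B/3B MoE routes each token through only the active expert subset.
def forward_flops(active_params: float) -> float:
    return 2 * active_params

moe_active = 3e9     # Laguna XS.2's stated active parameters
dense_total = 33e9   # a hypothetical dense model of the same total size

ratio = forward_flops(dense_total) / forward_flops(moe_active)
print(f"MoE per-token compute: ~{forward_flops(moe_active) / 1e9:.0f} GFLOPs")
print(f"A dense 33B model would cost ~{ratio:.0f}x more per token")
```

By this estimate the MoE runs at roughly one-eleventh the per-token compute of a dense model with the same total parameter count, which is the usual argument for MoE designs in latency-sensitive agentic loops.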

What to watch next is external validation: whether community runs can reproduce Poolside’s benchmark numbers, how well Laguna XS.2 holds up against Qwen- and Gemma-sized peers on real code tasks, and whether the promised XS.2-base release broadens fine-tuning work. If the model proves stable on commodity hardware, this tweet could mark Poolside’s shift from a relatively closed deployment lab to a more visible player in open coding infrastructure.

Source: Poolside source tweet · launch blog · technical deep dive




© 2026 Insights. All rights reserved.