Mistral Medium 3.5: A Single 128B Open-Weight Model That Replaces Three Separate Models
One Model, Three Jobs
Mistral AI released Mistral Medium 3.5 on April 29, 2026 under a Modified MIT license. The model consolidates three previously separate Mistral offerings: Mistral Medium 3.1 for instruction-following, Magistral for reasoning, and Devstral 2 for coding. Users now switch between modes with a single toggle.
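The article does not document how the toggle is exposed, so the sketch below assumes a hypothetical `mode` field on an OpenAI-compatible chat request body; the field name, its accepted values, and the model identifier are all illustrative, not confirmed API details:

```python
# Sketch of selecting an operating mode per request. The "mode" field,
# its values, and the model name are assumptions for illustration;
# consult Mistral's API reference for the actual parameters.

def build_request(prompt: str, mode: str) -> dict:
    """Build a chat-completion request body with a hypothetical mode toggle."""
    assert mode in {"instruct", "reason", "code"}, "unknown mode"
    return {
        "model": "mistral-medium-3.5",  # assumed model identifier
        "mode": mode,                    # hypothetical toggle field
        "messages": [{"role": "user", "content": prompt}],
    }

req = build_request("Fix the failing test in utils.py", mode="code")
```

The same request shape would then route to instruction-following, reasoning, or coding behavior without switching model endpoints.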
Specifications
- Parameters: 128B (dense, not MoE)
- Context window: 256K tokens
- License: Modified MIT
- SWE-bench Verified: 77.6%
- API input price: $1.50 per million tokens
- Self-hostable: on 4 GPUs
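The self-hosting and pricing figures can be sanity-checked with back-of-the-envelope arithmetic; the FP8 quantization width below is an assumption for illustration, not something Mistral states here:

```python
# Rough sizing for a 128B dense model. The 1-byte-per-parameter figure
# assumes FP8 quantization; real deployments also need memory for the
# KV cache and activations on top of the weights.
params = 128e9
bytes_per_param = 1.0                         # assumed FP8
weights_gb = params * bytes_per_param / 1e9   # 128 GB of weights
per_gpu_gb = weights_gb / 4                   # 32 GB each across 4 GPUs

# API cost for 2M input tokens at the quoted $1.50 per million.
input_price_per_m = 1.50
cost = 2_000_000 / 1_000_000 * input_price_per_m  # $3.00
```

At roughly 32 GB of weights per GPU, the four-GPU claim is plausible on common 40-80 GB datacenter cards, with headroom left for the 256K-token KV cache.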
Ships with Vibe
Medium 3.5 launches alongside Vibe, a cloud-based coding agent that autonomously submits pull requests to GitHub. Together, they represent Mistral's push into agentic coding workflows.
Competitive Position
Among open-weight models, 77.6% on SWE-bench Verified is top-tier. The ability to self-host on four GPUs significantly lowers the adoption barrier for enterprise teams with data residency constraints.
Source: Mistral AI, Winbuzzer
Related Articles
Poolside AI released Laguna XS.2 on April 28, 2026 under Apache 2.0 — a 33B total/3B active MoE model purpose-built for agentic coding, scoring 68.2% on SWE-bench Verified and deployable on a single consumer GPU.
Anthropic released a suite of Claude connectors enabling direct integration with Adobe Creative Cloud, Blender, Autodesk, Ableton, and five other creative tools. The Blender connector is open-source and MCP-based, meaning any LLM can plug in.
Alibaba launched Qwen 3.5 on February 16 under Apache 2.0, featuring 397B parameters with a sparse MoE architecture (17B active), 256K context, and native multimodal capabilities matching leading US proprietary models on key benchmarks.