Mistral Medium 3.5: A Single 128B Open-Weight Model That Replaces Three Separate Models

LLM · May 5, 2026 · By Insights AI

One Model, Three Jobs

Mistral AI released Mistral Medium 3.5 on April 29, 2026 under a Modified MIT license. The model consolidates three previously separate Mistral offerings: Mistral Medium 3.1 for instruction-following, Magistral for reasoning, and Devstral 2 for coding. Users now switch between modes with a single toggle.
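In practice, a single-toggle design could look like the sketch below, where one model name serves all three former products and a request field selects the behavior. The `mode` field, its values, and the payload shape are illustrative assumptions, not Mistral's documented API.

```python
# Hypothetical request payloads for Mistral Medium 3.5's mode toggle.
# The "mode" field name and its values are illustrative assumptions,
# not Mistral's documented API surface.

def build_request(prompt: str, mode: str) -> dict:
    """Assemble a chat-completion payload for the assumed mode toggle."""
    assert mode in {"instruct", "reason", "code"}, f"unknown mode: {mode}"
    return {
        "model": "mistral-medium-3.5",
        "mode": mode,  # hypothetical: one toggle instead of three models
        "messages": [{"role": "user", "content": prompt}],
    }

# One model name covers the three former products:
instruct = build_request("Summarize this RFC.", mode="instruct")   # was Medium 3.1
reasoning = build_request("Prove sqrt(2) is irrational.", mode="reason")  # was Magistral
coding = build_request("Fix the off-by-one in this loop.", mode="code")   # was Devstral 2
```

The operational appeal is that routing, quotas, and deployment footprint collapse to a single model, with behavior selected per request.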

Specifications

  • Parameters: 128B (dense, not MoE)
  • Context window: 256K tokens
  • License: Modified MIT
  • SWE-bench Verified: 77.6%
  • API input price: $1.50 per million tokens
  • Self-hostable: on 4 GPUs
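Some back-of-the-envelope arithmetic makes the four-GPU claim concrete. The bytes-per-parameter figures below are assumptions (bf16 and fp8 are common serving precisions), not numbers from the announcement, and KV cache and activations are not counted.

```python
# Rough per-GPU weight footprint for a 128B dense model sharded over 4 GPUs.
# Bytes-per-parameter values are assumptions (bf16 = 2, fp8 = 1); the
# 256K-token KV cache and activations add on top and are not counted here.

PARAMS = 128e9  # 128B parameters, dense (no MoE sparsity to exploit)
GPUS = 4

def per_gpu_gb(bytes_per_param: float) -> float:
    """Weight memory per GPU in decimal gigabytes."""
    return PARAMS * bytes_per_param / GPUS / 1e9

print(per_gpu_gb(2.0))  # bf16: 64.0 GB per GPU -> fits an 80 GB accelerator
print(per_gpu_gb(1.0))  # fp8: 32.0 GB per GPU, leaving headroom for KV cache
```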

Ships with Vibe

Medium 3.5 launches alongside Vibe, a cloud-based coding agent that autonomously submits pull requests to GitHub. Together, they represent Mistral's push into agentic coding workflows.

Competitive Position

Among open-weight models, 77.6% on SWE-bench Verified is top-tier. The ability to self-host on four GPUs significantly lowers the barrier to enterprise adoption for teams with data residency constraints.
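A quick cost comparison shows why the four-GPU self-hosting option matters at volume. Only the $1.50-per-million input price comes from the spec sheet; the per-GPU rental rate is an assumed figure for illustration.

```python
# Illustrative monthly cost comparison: API usage vs. self-hosting.
# The $2/hr per-GPU rental rate is an assumption; the input-token price
# ($1.50 per million) is the only figure taken from the article.

API_INPUT_PRICE = 1.50   # USD per million input tokens (from the spec sheet)
GPU_HOURLY = 2.00        # USD per GPU-hour (assumed rental rate)
GPUS = 4
HOURS_PER_MONTH = 730

def api_cost(million_tokens: float) -> float:
    """Monthly API spend for a given input volume, in USD."""
    return million_tokens * API_INPUT_PRICE

def self_host_cost() -> float:
    """Monthly cost of keeping a 4-GPU node up, in USD."""
    return GPUS * GPU_HOURLY * HOURS_PER_MONTH

# Input volume (millions of tokens/month) at which self-hosting breaks even:
break_even = self_host_cost() / API_INPUT_PRICE
print(self_host_cost())  # 5840.0 USD/month for 4 GPUs
print(break_even)        # ~3893 million input tokens/month
```

Below that volume the API is cheaper on raw compute; above it, self-hosting wins even before data residency is considered.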

Source: Mistral AI, Winbuzzer
