Azure Announces GPT-Realtime-1.5, GPT-Audio-1.5, and GPT-5.3-Codex Rollout in Microsoft Foundry
Original: New Azure OpenAI models are available in Microsoft Foundry: GPT‑Realtime‑1.5, GPT‑Audio‑1.5, and GPT‑5.3‑Codex. Built for low‑latency voice + long‑running engineering workflows. Learn more here: https://t.co/Z9WYzj6rYy https://t.co/yKWfFRg2Sn
X announcement and Microsoft source
In a February 25, 2026 X post, Azure announced that GPT-Realtime-1.5, GPT-Audio-1.5, and GPT-5.3-Codex are rolling out in Microsoft Foundry. The linked Microsoft Foundry blog post (published February 24, 2026) frames the release around continuity in real-time interaction and reliability for multi-step engineering work.
What Microsoft highlights for GPT-5.3-Codex
The post describes GPT-5.3-Codex as a model aimed at longer software workflows such as refactoring, migrations, agentic development loops, and automated code review/test generation. Microsoft cites OpenAI-reported data that the model is 25% faster than predecessors and can be steered mid-task while preserving context.
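The "steerable mid-task" behavior Microsoft describes maps onto an agent loop that checks for operator guidance between model turns while keeping the accumulated context. A minimal, vendor-neutral sketch of that pattern; `call_model`, the steering queue, and the task structure are illustrative stand-ins, not a Foundry API:

```python
from collections import deque

def call_model(task: str, history: list[str]) -> str:
    """Stub standing in for a real model call; returns the next step label."""
    done = sum(1 for entry in history if not entry.startswith("[steer]"))
    return f"step {done + 1} of {task}"

def run_steerable_task(task: str, steps: int, steering: deque) -> list[str]:
    """Run a multi-step task, injecting operator guidance between turns.

    Guidance is appended to the shared history rather than replacing it,
    so redirecting the agent mid-task preserves context.
    """
    history: list[str] = []
    for _ in range(steps):
        if steering:  # operator redirected the agent between turns
            history.append(f"[steer] {steering.popleft()}")
        history.append(call_model(task, history))
    return history

log = run_steerable_task("migration", steps=3, steering=deque(["skip tests for now"]))
# log: ["[steer] skip tests for now", "step 1 of migration",
#       "step 2 of migration", "step 3 of migration"]
```

The point of the sketch is the control flow, not the stub: steering input lands in the same history the model reads on its next turn, which is the behavior the post attributes to GPT-5.3-Codex.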
Microsoft also publishes explicit Foundry pricing for GPT-5.3-Codex, per million tokens: $1.75 for input, $0.175 for cached input, and $14.00 for output.
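Those list prices make per-request cost easy to estimate. A minimal sketch using the rates quoted above; the token counts in the example are hypothetical:

```python
# Foundry list prices for GPT-5.3-Codex, in USD per 1M tokens (from the post).
PRICE_PER_M = {"input": 1.75, "cached_input": 0.175, "output": 14.00}

def request_cost(input_tokens: int, cached_input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one request at the published rates."""
    return (
        input_tokens * PRICE_PER_M["input"]
        + cached_input_tokens * PRICE_PER_M["cached_input"]
        + output_tokens * PRICE_PER_M["output"]
    ) / 1_000_000

# Example: a long-running refactor turn where most of the prompt is cached.
cost = request_cost(input_tokens=20_000, cached_input_tokens=180_000, output_tokens=8_000)
print(f"${cost:.4f}")  # → $0.1785
```

Note how the 10x discount for cached input rewards the long-running, context-heavy workflows the model is pitched at: here 180K cached tokens cost less than the 8K output tokens.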
Voice model signal: Realtime and Audio
For GPT-Realtime-1.5 and GPT-Audio-1.5, Microsoft highlights low-latency voice experiences with improved instruction following and function calling support. The post cites OpenAI evaluations claiming +5% on Big Bench Audio reasoning, +10.23% alphanumeric transcription improvement, and +7% instruction-following improvement.
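Function calling for the voice models follows the tool-definition shape used across OpenAI-style APIs: the model receives a JSON schema describing each callable function and emits structured calls against it. A hedged sketch of one such definition for a voice agent; the `lookup_order` name and its fields are hypothetical, not a real Foundry or OpenAI tool:

```python
# A tool definition a voice agent could expose to a realtime model.
# The schema shape follows the OpenAI-style function-calling convention;
# "lookup_order" and its parameters are illustrative only.
lookup_order_tool = {
    "type": "function",
    "name": "lookup_order",
    "description": "Fetch the status of a customer order by its alphanumeric ID.",
    "parameters": {
        "type": "object",
        "properties": {
            "order_id": {
                "type": "string",
                "description": "Alphanumeric order ID, e.g. as read aloud by the caller.",
            }
        },
        "required": ["order_id"],
    },
}
```

The cited +10.23% alphanumeric transcription improvement matters precisely for arguments like `order_id` here, where a voice agent must transcribe a spoken mixed letter-digit string before the tool call can succeed.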
These figures are vendor-reported, but the packaging is the notable part: Microsoft is exposing coding-focused and voice-focused model upgrades through one Foundry workflow rather than as isolated product tracks.
Why this matters for enterprise teams
For organizations standardizing on a single AI platform, this release can reduce integration overhead across teams building very different surfaces: developer tooling, internal copilots, and voice interfaces. A shared control plane for evaluation, deployment, and governance means model upgrades can be tested under the same operational policy stack.
The strategic shift is from single-turn prompt optimization to systems that must keep context, call tools reliably, and run over longer horizons. Microsoft Foundry is positioning these three models as a combined answer to that requirement.
Primary sources: X post, Microsoft Foundry blog.
Related Articles
Azure says GPT-5.4 is now available in Microsoft Foundry for production-grade agent workloads. Microsoft’s supporting post adds GPT-5.4 Pro, pricing, and initial deployment options, with governance controls positioned as part of the pitch.
Microsoft says Fireworks AI is now part of Microsoft Foundry, bringing high-performance, low-latency open-model inference to Azure. The launch emphasizes day-zero access to leading open models, custom-model deployment, and enterprise controls in one place.
OpenAIDevs posted on 2026-02-24 that GPT-5.3-Codex is now available for all developers in the Responses API. The announcement moves API access from a staged rollout to general developer availability.