Azure Announces GPT-Realtime-1.5, GPT-Audio-1.5, and GPT-5.3-Codex Rollout in Microsoft Foundry

Original post: "New Azure OpenAI models are available in Microsoft Foundry: GPT-Realtime-1.5, GPT-Audio-1.5, and GPT-5.3-Codex. Built for low-latency voice + long-running engineering workflows. Learn more here: https://t.co/Z9WYzj6rYy https://t.co/yKWfFRg2Sn"

LLM · Feb 27, 2026 · By Insights AI

X announcement and Microsoft source

In a February 25, 2026 X post, Azure announced that GPT-Realtime-1.5, GPT-Audio-1.5, and GPT-5.3-Codex are rolling out in Microsoft Foundry. The linked Microsoft Foundry blog post (published February 24, 2026) frames the release around continuity in real-time interaction and reliability for multi-step engineering work.

What Microsoft highlights for GPT-5.3-Codex

The post describes GPT-5.3-Codex as a model aimed at longer software workflows such as refactoring, migrations, agentic development loops, and automated code review/test generation. Microsoft cites OpenAI-reported data that the model is 25% faster than predecessors and can be steered mid-task while preserving context.

Microsoft also publishes explicit per-million-token pricing for GPT-5.3-Codex in Foundry: $1.75 for input, $0.175 for cached input, and $14.00 for output.
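Those rates make per-request cost easy to estimate. A minimal sketch, using only the published per-million-token prices (the token counts in the example are hypothetical, chosen to resemble a long refactoring turn with a mostly cached context):

```python
# Published Foundry prices for GPT-5.3-Codex, USD per 1M tokens.
PRICE_PER_M = {"input": 1.75, "cached_input": 0.175, "output": 14.00}

def request_cost(input_tokens: int, cached_input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one request at the published rates."""
    return (
        input_tokens * PRICE_PER_M["input"]
        + cached_input_tokens * PRICE_PER_M["cached_input"]
        + output_tokens * PRICE_PER_M["output"]
    ) / 1_000_000

# Hypothetical refactoring turn: 40k fresh input, 160k cached context, 8k output.
cost = request_cost(40_000, 160_000, 8_000)
print(f"${cost:.2f}")  # → $0.21
```

Note how heavily the cached-input discount (10× cheaper than fresh input) rewards long-running sessions that reuse a stable context, which is exactly the workload Microsoft says this model targets.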

Voice model signal: Realtime and Audio

For GPT-Realtime-1.5 and GPT-Audio-1.5, Microsoft highlights low-latency voice experiences with improved instruction following and function calling support. The post cites OpenAI evaluations claiming a 5% gain on Big Bench Audio reasoning, a 10.23% improvement in alphanumeric transcription, and a 7% improvement in instruction following.
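Function calling here follows the familiar OpenAI-style pattern: the application registers tool schemas, the model emits a tool call, and the application routes it to a local handler. A minimal sketch of that loop; the tool name, parameters, and handler below are hypothetical illustrations, not part of the announcement:

```python
# Hypothetical OpenAI-style tool definition a voice agent might register so the
# realtime model can call it mid-conversation. (Illustrative only; the exact
# schema shape for these models is not specified in the post.)
weather_tool = {
    "type": "function",
    "name": "get_weather",
    "description": "Look up current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

def dispatch(tool_name: str, arguments: dict, registry: dict):
    """Route a model-emitted tool call to the matching local handler."""
    return registry[tool_name](**arguments)

# Example: the model asks for get_weather({"city": "Seattle"}).
result = dispatch(
    "get_weather",
    {"city": "Seattle"},
    {"get_weather": lambda city: f"Sunny in {city}"},
)
print(result)  # → Sunny in Seattle
```

In a voice pipeline the handler's return value would be sent back to the model as the tool result, so latency of the handlers matters as much as the model's own response time.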

These claims are vendor-reported, but the notable part is packaging: Microsoft is exposing coding-focused and voice-focused model upgrades in one Foundry workflow rather than isolated product tracks.

Why this matters for enterprise teams

For organizations standardizing on a single AI platform, this release can reduce integration overhead across teams building very different surfaces: developer tooling, internal copilots, and voice interfaces. A shared control plane for evaluation, deployment, and governance means model upgrades can be tested under the same operational policy stack.

The strategic shift is from single-turn prompt optimization to systems that must keep context, call tools reliably, and run over longer horizons. Microsoft Foundry is positioning these three models as a combined answer to that requirement.

Primary sources: X post, Microsoft Foundry blog.

© 2026 Insights. All rights reserved.