Azure adds GPT-Realtime-1.5, GPT-Audio-1.5, and GPT-5.3-Codex to Microsoft Foundry
Original post: "New Azure OpenAI models are available in Microsoft Foundry: GPT-Realtime-1.5, GPT-Audio-1.5, and GPT-5.3-Codex. Built for low-latency voice + long-running engineering workflows."
Announcement snapshot
On February 26, 2026, Microsoft Azure posted on X that new Azure OpenAI models are available in Microsoft Foundry: GPT-Realtime-1.5, GPT-Audio-1.5, and GPT-5.3-Codex. The post also frames intended use: low-latency voice scenarios and long-running engineering workflows. That combination is notable because it targets two operationally demanding classes of enterprise AI workloads at once.
Why enterprises should pay attention
At scale, successful AI adoption depends on reliability, governance, and cost control as much as benchmark scores. Realtime voice workloads stress latency budgets, turn-taking quality, and concurrency handling. Engineering copilots stress long-context stability, iterative code modification, test generation, and policy-compliant execution over extended sessions. Positioning these models together in Foundry suggests Microsoft is optimizing for production operations, not just demo-level capabilities.
Likely workload impact
For customer-facing teams, realtime and audio models can improve virtual agents in contact centers, technical support, and multilingual onboarding flows. For internal engineering teams, GPT-5.3-Codex may be more valuable for repository-wide reasoning, refactor planning, regression analysis, and long-horizon debugging loops than for single-shot code completion. In multi-team organizations, consolidating a broader model lineup inside one managed platform can reduce integration friction: teams share a single authentication, deployment, and governance surface rather than integrating a separate vendor for each workload.
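To make the split between workload classes concrete, here is a minimal Python sketch of how a platform team might route requests to a model deployment by workload type. The deployment names and the routing table are illustrative assumptions, not details from the announcement; actual deployment names are chosen when a model is deployed in Foundry.

```python
# Hypothetical routing table mapping the workload classes discussed above
# to Azure OpenAI deployment names. All deployment names are illustrative.
WORKLOAD_DEPLOYMENTS = {
    "realtime-voice": "gpt-realtime-1-5",    # contact-center / voice agents
    "audio": "gpt-audio-1-5",                # audio-centric flows
    "engineering-agent": "gpt-5-3-codex",    # long-running coding sessions
}

def pick_deployment(workload: str) -> str:
    """Return the deployment name for a workload class, failing loudly
    on unknown classes so misrouted traffic is caught early."""
    try:
        return WORKLOAD_DEPLOYMENTS[workload]
    except KeyError:
        raise ValueError(f"no deployment configured for workload {workload!r}")
```

Keeping this mapping in one place makes it cheap to swap deployments per workload class as pricing or regional availability becomes clearer.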
Open questions
The X post does not provide pricing tiers, throughput ceilings, regional availability, or detailed compliance scope. Teams evaluating adoption should verify official documentation for rate limits, cost controls, data residency behavior, and auditability features before rollout. Even with those unknowns, this update is a clear signal that enterprise AI stacks are maturing toward mixed workloads: conversational interfaces on one side and deep software engineering automation on the other.
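One lightweight way to operationalize that pre-rollout verification is a gate that refuses production deployment until each open question has been confirmed against official documentation. A minimal Python sketch, with checklist items taken from the paragraph above (the class and item names are this article's own, not an Azure API):

```python
from dataclasses import dataclass, field

@dataclass
class RolloutGate:
    """Tracks the pre-rollout unknowns; all must be confirmed against
    official documentation before a model goes to production."""

    # Class-level constant, not a dataclass field (no type annotation).
    REQUIRED = frozenset({
        "rate_limits", "cost_controls", "data_residency", "auditability",
    })

    confirmed: set = field(default_factory=set)

    def confirm(self, item: str) -> None:
        """Mark one checklist item as verified against the docs."""
        if item not in self.REQUIRED:
            raise ValueError(f"unknown checklist item {item!r}")
        self.confirmed.add(item)

    def ready_for_rollout(self) -> bool:
        """True only when every required item has been confirmed."""
        return self.REQUIRED <= self.confirmed
```

Encoding the checklist in code rather than in a wiki page means the gate can sit in a deployment pipeline and block promotion automatically.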
Source: Microsoft Azure on X
Related Articles
OpenAI Developers said on March 6, 2026 that Codex Security is now in research preview. The product connects to GitHub repositories, builds a threat model, validates potential issues in isolation, and proposes patches for human review.
OpenAI says GPT-5.4 Thinking is shipping in ChatGPT, with GPT-5.4 also live in the API and Codex and GPT-5.4 Pro available for harder tasks. The launch packages reasoning, coding, and native computer use into a single professional-work model with up to 1M tokens of context.
Microsoft says Fireworks AI is now part of Microsoft Foundry, bringing high-performance, low-latency open-model inference to Azure. The launch emphasizes day-zero access to leading open models, custom-model deployment, and enterprise controls in one place.