Azure adds GPT-5.4 to Microsoft Foundry for production-grade agent workloads
Original: GPT-5.4 is available in Microsoft Foundry. Built for production-grade AI agents with more reliable reasoning, stronger instruction following, and integrated computer use capabilities.
What Azure announced on X
On March 5, 2026, Azure said GPT-5.4 was available in Microsoft Foundry. The X post positioned the model for production-grade AI agents, highlighting more reliable reasoning, stronger instruction following, and integrated computer use capabilities. The notable part is not simply that another model endpoint is live, but that Microsoft is framing GPT-5.4 around longer, more complex workflows that need to finish reliably in operational environments.
What Microsoft’s post adds
Microsoft’s Tech Community article describes GPT-5.4 as a model for moving from planning work to reliably completing it in production. It emphasizes consistency, sustained context, and more stable multi-step execution. Microsoft is also offering GPT-5.4 Pro for deeper analytical work. On the deployment side, Foundry is positioned as the control plane: policy enforcement, monitoring, version management, and auditability are presented as core parts of responsible AI rollout. Microsoft also published pricing: GPT-5.4 costs $2.50 per million input tokens, $0.25 per million cached input tokens, and $15 per million output tokens, while GPT-5.4 Pro is priced at $30 per million input tokens and $180 per million output tokens.
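To see what those list prices mean for a real agent workload, the per-request cost can be sketched directly from the per-million-token rates above. This is a minimal illustration: the model keys, token counts, and the absence of a cached-input price for GPT-5.4 Pro are assumptions for the example, not details from the announcement.

```python
# Rough per-request cost estimate under the published GPT-5.4 list prices.
# Prices are USD per million tokens, as stated in Microsoft's post.
PRICES_PER_MILLION = {
    "gpt-5.4": {"input": 2.50, "cached_input": 0.25, "output": 15.00},
    # No cached-input rate for Pro appears in the announcement (assumption).
    "gpt-5.4-pro": {"input": 30.00, "cached_input": None, "output": 180.00},
}

def estimate_cost(model, input_tokens, output_tokens, cached_input_tokens=0):
    """Return the USD cost of one request at the published list prices."""
    p = PRICES_PER_MILLION[model]
    cost = (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000
    if cached_input_tokens:
        if p["cached_input"] is None:
            raise ValueError(f"no cached-input price known for {model}")
        cost += cached_input_tokens * p["cached_input"] / 1_000_000
    return cost

# Hypothetical agent turn: 40k fresh input, 60k cached input, 8k output tokens.
print(f"${estimate_cost('gpt-5.4', 40_000, 8_000, 60_000):.4f}")  # → $0.2350
```

The same hypothetical turn on GPT-5.4 Pro (without cached input) would run roughly an order of magnitude more, which is why Microsoft positions Pro for deeper analytical work rather than routine agent loops.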
Why this matters
The launch is also notable for its initial availability structure. Microsoft says GPT-5.4 is available at launch in Standard Global and Standard Data Zone (US), while GPT-5.4 Pro launches in Standard Global. Computer use capabilities are expected shortly after launch. That gives Azure customers a clearer path to adopt the model inside an enterprise governance framework rather than through a raw API-only route.
The broader takeaway is that model distribution is becoming an enterprise platform story. Buyers evaluating agent systems care not only about reasoning quality but also about policy controls, observability, and predictable cost. By packaging GPT-5.4 inside Microsoft Foundry with operational controls and published pricing, Microsoft is making the case that frontier models need enterprise infrastructure around them before they are truly deployable at scale.
Sources: Azure X post, Microsoft Tech Community
Related Articles
OpenAI Developers has updated its GPT-5.4 API prompting guide. The new guidance focuses on tool use, structured outputs, verification loops, and long-running workflows for production-grade agents.
Microsoft says Fireworks AI is now part of Microsoft Foundry, bringing high-performance, low-latency open-model inference to Azure. The launch emphasizes day-zero access to leading open models, custom-model deployment, and enterprise controls in one place.
OpenAI says GPT-5.4 Thinking is shipping in ChatGPT, with GPT-5.4 also live in the API and Codex and GPT-5.4 Pro available for harder tasks. The launch packages reasoning, coding, and native computer use into a single professional-work model with up to 1M tokens of context.