OpenAI brings GPT-5.5, Codex, and managed agents to AWS
Original: OpenAI models, Codex, and Managed Agents come to AWS
Enterprise AI adoption usually stalls at the same point: the model works, but the procurement, security, and deployment path does not. OpenAI’s April 28, 2026 partnership expansion with AWS is built to remove exactly that bottleneck. Instead of asking large customers to carve out a separate operating lane for OpenAI, the company is putting core capabilities inside the AWS environments where those buyers already run identity, billing, compliance, and infrastructure policy.
The package has three moving parts, all entering limited preview at once: OpenAI models on AWS, Codex on AWS, and Amazon Bedrock Managed Agents powered by OpenAI. The headline item is GPT-5.5 landing on Amazon Bedrock. That gives AWS customers a way to consume OpenAI’s frontier model without leaving the control plane they already use for production systems. For buyers in regulated sectors, that matters more than a benchmark chart. It shortens the distance between pilot and approved deployment.
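What "consuming the model inside the AWS control plane" looks like in practice is an ordinary Bedrock Runtime call. The sketch below builds a request in the shape of Bedrock's Converse API; the model ID `openai.gpt-5.5` is a placeholder assumption (real IDs are assigned by Bedrock), and the actual network call is left in comments so the snippet runs without AWS credentials.

```python
# Sketch: assembling a Bedrock Converse-style request for an OpenAI model.
# The model ID below is a placeholder; Bedrock assigns the real identifiers.

def build_converse_request(model_id: str, prompt: str, max_tokens: int = 512) -> dict:
    """Assemble keyword arguments for the bedrock-runtime converse() call."""
    return {
        "modelId": model_id,
        "messages": [
            {"role": "user", "content": [{"text": prompt}]},
        ],
        "inferenceConfig": {"maxTokens": max_tokens},
    }

request = build_converse_request("openai.gpt-5.5", "Summarize our deployment policy.")

# With AWS credentials configured, the call would look like:
#   import boto3
#   client = boto3.client("bedrock-runtime", region_name="us-east-1")
#   response = client.converse(**request)
#   print(response["output"]["message"]["content"][0]["text"])
print(request["modelId"])
```

The point for a regulated buyer is that identity, logging, and billing ride on the same IAM credentials and AWS account the rest of production already uses.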
Codex is the second leg of the move, and it is no longer a minor side feature. OpenAI says more than 4 million people now use Codex every week. In the AWS version, customers can configure Codex through the Bedrock API, starting with the Codex CLI, the desktop app, and the Visual Studio Code extension. OpenAI also says customer data is processed by Amazon Bedrock, and that eligible customers can count Codex usage toward their AWS cloud commitments. That directly answers the procurement friction that often kills rollout momentum inside large engineering organizations.
The third leg is the one to watch over the next few quarters: Amazon Bedrock Managed Agents powered by OpenAI. OpenAI describes these agents as able to maintain context, execute multi-step workflows, use tools, and take action across business processes. In plain terms, this pushes OpenAI further up the stack from model vendor to workflow infrastructure inside a hyperscaler’s platform. The strategic signal is hard to miss. Enterprise AI is becoming a battle over where agents live, who governs them, and which cloud gets to turn model usage into a durable platform habit.
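The capabilities OpenAI lists for these agents, maintained context, multi-step workflows, and tool use, map onto a standard agent loop. The sketch below is a generic illustration of that pattern, not Bedrock's actual implementation: the two tools and the fixed step list stand in for what a model-driven planner would choose at runtime.

```python
# Minimal agent loop: execute a multi-step workflow, threading context through,
# and take action via tools. Tools and steps here are illustrative stand-ins.

def lookup_order(order_id: str) -> str:
    return f"order {order_id}: shipped"

def send_email(to: str, body: str) -> str:
    return f"emailed {to}"

TOOLS = {"lookup_order": lookup_order, "send_email": send_email}

def run_agent(steps: list) -> list:
    """Run each (tool, args) step, accumulating results as maintained context."""
    context = []
    for tool_name, args in steps:
        result = TOOLS[tool_name](**args)   # take action with the chosen tool
        context.append(result)              # context persists across steps
    return context

trace = run_agent([
    ("lookup_order", {"order_id": "A-17"}),
    ("send_email", {"to": "ops@example.com", "body": "A-17 shipped"}),
])
print(trace)  # → ['order A-17: shipped', 'emailed ops@example.com']
```

The governance question the article raises is exactly about this loop: who defines the tool set, who audits the trace, and whose platform the loop runs on.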
Related Articles
This is a distribution story, not just a usage milestone. OpenAI says Codex grew from more than 3 million weekly developers in early April to more than 4 million two weeks later, and it is pairing that demand with Codex Labs plus seven global systems integrators to turn pilots into production rollouts.
The bottleneck moved from GPUs to the API layer, and OpenAI changed the transport to keep up. By adding WebSocket mode and connection-scoped caching to the Responses API, the company says end-to-end agentic workflow performance improved by up to 40% and GPT-5.3-Codex-Spark reached 1,000 tokens per second with bursts up to 4,000.
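"Connection-scoped caching" means cached state lives for the lifetime of a persistent connection instead of being rebuilt on every request. The sketch below illustrates that idea in plain Python, with no real WebSocket library; the class and its cost counter are invented for the illustration.

```python
# Connection-scoped caching, sketched without a real WebSocket library:
# state cached on the connection survives across requests on that connection
# and is discarded when the connection closes.

class AgentConnection:
    def __init__(self):
        self._cache = {}          # lives exactly as long as the connection
        self.expensive_calls = 0  # counts the work a per-request design repeats

    def _load_context(self, key: str) -> str:
        self.expensive_calls += 1  # stands in for re-sending/re-encoding context
        return f"context:{key}"

    def request(self, key: str) -> str:
        if key not in self._cache:  # only the first request per key pays the cost
            self._cache[key] = self._load_context(key)
        return self._cache[key]

conn = AgentConnection()
conn.request("repo-state")
conn.request("repo-state")      # served from the connection's cache
print(conn.expensive_calls)     # → 1
```

Over a stateless HTTP design, every request would pay the setup cost again; a persistent connection lets an agentic workflow amortize it, which is the mechanism behind the claimed end-to-end gains.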
This matters because the next bottleneck in agent coding is human attention, not raw model speed. OpenAI says Symphony lifted landed pull requests by 500% on some teams after engineers hit a practical ceiling of roughly three to five concurrent Codex sessions.