OpenAIDevs Announces /fast Mode: GPT-5.4 in Codex Runs 1.5x Faster
Original post: "Codex got more speed. With /fast mode, GPT-5.4 runs 1.5x faster with the same intelligence and reasoning. Move through coding tasks, iteration, and debugging while staying in flow."
Codex speed update on X
OpenAIDevs posted on March 5, 2026 that Codex now has a /fast mode. According to the announcement, GPT-5.4 runs about 1.5x faster in this mode while keeping the same intelligence and reasoning behavior.
The post frames the benefit around workflow continuity: coding, iteration, and debugging cycles can move faster without requiring teams to switch to a smaller capability tier for day-to-day work.
What is explicitly claimed
- The claim is speed-oriented: 1.5x faster runtime in Codex when /fast is enabled.
- The post also claims no downgrade in intelligence and reasoning quality.
- The target use case is development velocity across iterative engineering tasks.
Why this is important for engineering teams
In practical software development, latency often compounds across many small agent interactions: planning, patching, running tools, and evaluating output. Even moderate speed gains can produce larger end-to-end savings over long sessions.
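To make the compounding effect concrete, here is a back-of-the-envelope sketch with illustrative numbers (the session size and per-call latency are assumptions, not figures from the announcement): a long agent session with many model-bound calls, each sped up by the claimed 1.5x.

```python
# Illustrative arithmetic only: the interaction count and per-call latency
# below are assumed for the example, not measured Codex figures.

def session_seconds(interactions: int, seconds_per_interaction: float) -> float:
    """Total model-bound wall-clock time across one session."""
    return interactions * seconds_per_interaction

SPEEDUP = 1.5  # the factor claimed in the announcement

baseline = session_seconds(200, 6.0)            # e.g. 200 calls at 6 s each
fast = session_seconds(200, 6.0 / SPEEDUP)      # same calls at 1.5x speed

print(f"baseline: {baseline:.0f} s, /fast: {fast:.0f} s, "
      f"saved: {baseline - fast:.0f} s")
```

Under these assumed numbers, a 2 s saving per call turns into roughly 400 seconds over the session, which is why per-interaction latency matters more for agent loops than for one-off chat.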
If the quality claim holds in production, /fast mode could reduce the common tradeoff between speed and reliability in coding assistants. Teams that already run Codex in CI-like loops or large refactoring tasks may see immediate throughput gains.
As with all vendor-declared performance updates, teams should validate on their own repositories and task mix before revising defaults.
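One lightweight way to do that validation is an A/B timing comparison: run the same task set with and without /fast, record wall-clock latencies, and compare medians. The sketch below uses placeholder timing samples; in practice you would fill the lists by timing real Codex runs on your own repository and task mix.

```python
import statistics

# Minimal A/B validation sketch. The sample lists are placeholders standing
# in for wall-clock timings you would collect from real runs; nothing here
# calls Codex itself.

def observed_speedup(baseline_s: list[float], fast_s: list[float]) -> float:
    """Median-over-median speedup; > 1.0 means the /fast runs were faster."""
    return statistics.median(baseline_s) / statistics.median(fast_s)

baseline_samples = [41.0, 38.5, 44.2, 40.1, 39.7]  # placeholder timings (s)
fast_samples = [27.3, 26.1, 29.8, 25.9, 28.4]      # placeholder timings (s)

print(f"observed speedup: {observed_speedup(baseline_samples, fast_samples):.2f}x")
```

Medians are used rather than means so a single slow outlier run does not dominate the comparison; repeating the tasks several times per mode keeps the estimate honest.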
Source: OpenAIDevs X post