OpenAI brings subagents to Codex for parallel, specialized workflows
Original post: "Subagents are now available in Codex. You can accelerate your workflow by spinning up specialized agents to: • Keep your main context window clean • Tackle different parts of a task in parallel • Steer individual agents as work unfolds"
What OpenAI announced on X
On March 16, 2026, OpenAIDevs said subagents are now available in Codex. The X post framed the feature around three practical benefits: keeping the main context window clean, splitting different parts of a task across parallel workstreams, and letting the user steer each worker as the job evolves.
That framing matters because it positions Codex less as a single chat-style coding assistant and more as an orchestrator for coordinated specialist agents. Instead of forcing planning, code exploration, review, and documentation lookup into one long thread, teams can push narrower jobs into separate lanes and keep the parent session focused on synthesis and decision-making.
What the official docs add
OpenAI's Codex documentation now has a dedicated Subagents page that describes both subagents and custom agents. The docs explicitly recommend making custom agents narrow and opinionated, with a tool surface that matches a clearly defined job.
- One official example splits PR review into `pr_explorer`, `reviewer`, and `docs_researcher`, separating codebase mapping, correctness and security review, and framework documentation checks.
- Another experimental workflow, `spawn_agents_on_csv`, reads a CSV, launches one worker subagent per row, waits for the batch to finish, and exports the combined results back to CSV.
- The same docs expose runtime controls such as `agents.max_threads` and `agents.job_max_runtime_seconds`, which shows OpenAI is treating subagents as operational infrastructure rather than a cosmetic UI add-on.
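The CSV fan-out workflow follows a familiar orchestration shape: read rows, dispatch one bounded worker per row, collect results in order, and write them back out. Here is a minimal sketch of that shape in plain Python — not the Codex API. `run_subagent` is a hypothetical stand-in for dispatching a worker, and the thread cap mirrors the idea behind `agents.max_threads`:

```python
import csv
import io
from concurrent.futures import ThreadPoolExecutor

MAX_THREADS = 4  # analogous in spirit to agents.max_threads


def run_subagent(row):
    # Hypothetical stand-in for handing one row to a worker subagent.
    # Here it just tags the row as processed.
    return {**row, "status": "done"}


def fan_out(csv_text):
    # One "subagent" per CSV row, capped at MAX_THREADS concurrent workers.
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    with ThreadPoolExecutor(max_workers=MAX_THREADS) as pool:
        results = list(pool.map(run_subagent, rows))  # preserves row order

    # Export the combined results back to CSV.
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=list(results[0].keys()),
                            lineterminator="\n")
    writer.writeheader()
    writer.writerows(results)
    return out.getvalue()


print(fan_out("task\naudit\nreview\n"))
```

The batch boundary — waiting for every worker before exporting — falls out naturally here because `ThreadPoolExecutor.map` does not return until all rows are processed, which matches the wait-then-export behavior the docs describe.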
Why this matters
For engineering teams, the immediate value is structured parallelism. Tasks like repository audits, migration checks, PR review, and documentation validation often contain many small but distinct subproblems. A subagent model lets teams separate those concerns without losing the benefits of a coordinating parent agent.
It also changes how context is managed. One of the main failure modes in agentic coding is forcing a single thread to carry planning, tool output, code diffs, test failures, and research notes all at once. Subagents reduce that pressure by giving each worker a smaller goal and a narrower prompt surface. If adoption is strong, this update could push Codex from “one agent with many tools” toward a more explicit multi-agent development workflow.
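Taken together with the runtime controls named in the docs, that multi-agent workflow looks tunable at the configuration level. A minimal sketch, assuming Codex's TOML-based config file; the key names come from the docs cited above, but the values, nesting, and file placement here are assumptions — check the Subagents documentation before relying on them:

```toml
# Hedged sketch: key names from the Codex Subagents docs, values illustrative.
[agents]
max_threads = 4                  # cap on concurrently running subagent workers
job_max_runtime_seconds = 600    # upper bound on a single subagent job's runtime
```

Caps like these are what separate operational infrastructure from a demo feature: they let teams bound cost and runaway jobs before fanning work out across many workers.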
Sources: OpenAIDevs X post · OpenAI Codex Subagents docs