OpenAI brings Codex Automations to general availability with model, branch, and template controls
Original post: "Automations are now GA."
OpenAI Developers said on X on March 12, 2026 that Codex Automations are now generally available. In the announcement, OpenAI highlighted three new control surfaces: users can choose the model and reasoning level for an automation run, decide whether the run executes in an isolated worktree or on an existing branch, and reuse recurring jobs through templates. The post positioned the feature for operational tasks that repeat on a schedule or in response to repository activity, including daily repo briefings, issue triage, and follow-up on pull-request comments.
The announcement is notable because it moves Codex further away from one-off chat-style assistance and closer to a managed workflow system. OpenAI's Codex help documentation describes the app as having built-in support for worktrees, skills, automations, and git functionality. It also frames Codex as a place to run multiple agents in parallel and to automate code review workflows on GitHub. Read together, the tweet and the docs suggest that OpenAI is turning Codex into a layer for repeatable engineering operations, not just an interactive coding assistant.
- Model and reasoning controls let teams trade off speed, cost, and depth for each recurring task.
- Worktree versus existing-branch execution determines whether an automation's changes stay isolated from, or land directly on, a shared branch, which changes the safety model for how a run touches a repository.
- Templates reduce setup friction, making it easier to standardize the same workflow across projects or teams.
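Codex's internal run mechanics are not public, but the isolation trade-off in the second bullet mirrors plain git worktrees. The sketch below (standard library plus the `git` CLI only; repository and branch names are illustrative) shows why an isolated worktree is the safer default: a failed or partial run never dirties the primary checkout.

```python
# Minimal sketch of the worktree isolation trade-off, using a throwaway repo.
import pathlib
import subprocess
import tempfile

def run(*args, cwd):
    """Run a git command in `cwd`, raising on failure."""
    subprocess.run(args, cwd=cwd, check=True, capture_output=True)

repo = pathlib.Path(tempfile.mkdtemp())
run("git", "init", "-q", "-b", "main", cwd=repo)
run("git", "-c", "user.email=ci@example.com", "-c", "user.name=ci",
    "commit", "-q", "--allow-empty", "-m", "init", cwd=repo)

# Isolated worktree: the run gets its own checkout on its own branch, so the
# main working tree stays untouched no matter what the automation does.
run("git", "worktree", "add", "-q", str(repo / "run-1"),
    "-b", "automation/run-1", cwd=repo)

# Existing-branch mode would instead `git switch` inside `repo` itself,
# so the run's edits land directly in the shared working tree.
print((repo / "run-1").exists())  # → True: the isolated checkout now exists
```

Reviewing an isolated run then reduces to a normal branch comparison (`git diff main...automation/run-1`), which is what gives teams the "clear review path" discussed below.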
Those details matter because automation usually fails at the operational layer rather than the model layer. A coding agent may be able to complete a single task, but teams still need predictable repository boundaries, reusable configurations, and a clear review path before they can trust the agent to run every day or every hour. By surfacing branch strategy and reusable templates directly in the product, OpenAI is addressing that deployment problem rather than only the underlying model capability.
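OpenAI has not published a schema for automation templates, so the following is purely a hypothetical sketch: the field names and values are assumptions chosen to illustrate the three announced controls (model, reasoning level, branch strategy) plus a schedule.

```python
# Hypothetical data shape only -- not OpenAI's actual template schema.
from dataclasses import dataclass

@dataclass(frozen=True)
class AutomationTemplate:
    name: str                # illustrative field names throughout
    model: str               # announced control: per-run model choice
    reasoning: str           # announced control: per-run reasoning level
    isolated_worktree: bool  # announced control: worktree vs. existing branch
    schedule: str            # cron expression for a recurring run

daily_briefing = AutomationTemplate(
    name="daily-repo-briefing",
    model="gpt-5.4",
    reasoning="medium",
    isolated_worktree=True,
    schedule="0 9 * * 1-5",  # weekday mornings
)
print(daily_briefing.isolated_worktree)  # → True
```

The point of the template abstraction is that only `name` and `schedule` need to vary per repository, while model choice and isolation policy stay standardized across a team, which is what "reduce setup friction" amounts to in practice.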
The broader implication is that agentic coding products are entering a phase where scheduling, isolation, and policy control matter almost as much as benchmark performance. General availability does not guarantee that every workflow is safe to fully automate, but it does signal that OpenAI believes the feature set is ready for broader operational use. For teams already experimenting with Codex, the March 12 update makes the product more usable as infrastructure for recurring software work instead of just a powerful session-based tool.
Related Articles
OpenAI says GPT-5.4 Thinking is shipping in ChatGPT, with GPT-5.4 also live in the API and Codex and GPT-5.4 Pro available for harder tasks. The launch packages reasoning, coding, and native computer use into a single professional-work model with up to 1M tokens of context.
OpenAI says GPT-5.4 Thinking and Pro are rolling out gradually across ChatGPT, the API, and Codex. The company positions GPT-5.4 as a unified frontier model for professional work with stronger coding, tool use, and 1M-token context.
OpenAI said on March 5, 2026 that GPT-5.4 is rolling out across ChatGPT, the API, and Codex. The new model combines GPT-5.3-Codex coding capability with OpenAI’s mainline reasoning stack, adds native computer-use features, and introduces experimental 1M-token context in Codex.