Codex now controls apps, browsers, and images for 3M weekly devs
Original: Codex for (almost) everything
Codex is being pulled out of the code editor and into the rest of the developer workstation. In an April 16 product post, OpenAI says more than 3 million developers now use Codex every week and that the desktop app can work across apps, browsers, terminals, files, and images. The interesting shift is not that Codex writes more code. It is that OpenAI is trying to make the agent carry a software task through the messy parts around code: visual checking, review comments, docs, remote machines, and follow-up work that spans days.
The most aggressive new capability is background computer use. OpenAI says Codex can now operate apps on a Mac by seeing, clicking, and typing with its own cursor, while multiple agents run in parallel without taking over the user’s other work. For developers, that means an agent can test a local app, inspect UI behavior, or use a tool that has no API. Codex also gets an in-app browser where users can comment directly on pages, a workflow OpenAI frames as useful for frontend and game development.
The update also folds media creation into the same loop. Codex can call gpt-image-1.5 to generate and iterate on images, using screenshots and code as context for product concepts, frontend designs, mockups, and games. OpenAI is also adding more than 90 plugins that combine skills, app integrations, and MCP servers. The examples named in the source include Atlassian Rovo, CircleCI, CodeRabbit, GitLab Issues, Microsoft Suite, Neon by Databricks, Remotion, Render, and Superpowers.
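For readers curious what one turn of that image loop might look like, here is a minimal sketch that only builds the request payload rather than calling the API. The model name `gpt-image-1.5` comes from the post; the parameter set and the `build_image_request` helper are assumptions for illustration, not documented interfaces.

```python
import json


def build_image_request(prompt: str,
                        model: str = "gpt-image-1.5",
                        size: str = "1024x1024") -> str:
    """Build the JSON body for a hypothetical image-generation call.

    In the workflow described above, the prompt would typically be
    assembled from screenshots and code context before each iteration.
    """
    payload = {
        "model": model,
        "prompt": prompt,
        "size": size,
        "n": 1,  # one candidate image per iteration
    }
    return json.dumps(payload)


body = build_image_request("frontend mockup of a settings page, light theme")
print(body)
```

Each iteration would send a fresh payload like this, with the prompt revised based on the previous result.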
For core engineering work, the app now supports addressing GitHub review comments, running multiple terminal tabs, connecting to remote devboxes over SSH in alpha, and opening PDFs, spreadsheets, slides, and docs in the sidebar with rich previews. A summary pane tracks agent plans, sources, and artifacts, which is a practical addition for long sessions where the human needs to audit what the agent used and why it changed direction.
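OpenAI has not published configuration details for the alpha SSH devbox feature, but a standard `~/.ssh/config` entry is the usual way a workstation names a remote machine for tools that shell out to SSH. As a generic illustration (host alias, hostname, user, and key path are all placeholders):

```
Host devbox
    HostName devbox.example.com
    User dev
    IdentityFile ~/.ssh/id_ed25519
    ServerAliveInterval 30
```

With an entry like this in place, any SSH-aware tool on the workstation can reach the machine as simply `devbox`.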
OpenAI is also making Codex more persistent. Automations can reuse existing conversation threads, schedule future work, and wake up later to continue a task. A memory preview lets Codex remember preferences, corrections, and context from earlier work. The company says proactive suggestions can use project context, connected plugins, and memory to propose what to pick up next, such as open Google Docs comments backed by Slack, Notion, and codebase context.
The rollout starts for Codex desktop app users signed in with ChatGPT. Personalization features are slated for Enterprise, Edu, EU, and UK users later, while computer use starts on macOS. The strategic question is whether developers trust an agent that can cross the browser, desktop apps, and terminals. If they do, coding agents stop being autocomplete and start looking like operating layers for software work.
Related Articles
OpenAI introduced the Codex app on February 2, 2026. The macOS desktop interface is built to supervise multiple agents in parallel and manage skills and automations; it was expanded to Windows on March 4, 2026.
OpenAI Developers said recent Codex usage data suggests developers are handing off long-running work like refactors and architecture planning at the end of the day. In a follow-up reply, the account said tasks started at 11 pm are 60% more likely than other tasks to run for 3+ hours.
On April 7, 2026, OpenAI’s Tibo Sottiaux said Codex reached 3 million weekly users. He added that the jump from 2 million to 3 million took less than a month, and OpenAI will reset usage limits at each additional million users until the product reaches 10 million weekly users.