OpenAI brings GPT-5.4 mini to ChatGPT, Codex, and the API
Original post: "GPT-5.4 mini is available today in ChatGPT, Codex, and the API. Optimized for coding, computer use, multimodal understanding, and subagents. And it's 2x faster than GPT-5 mini." https://t.co/DKh2cC5S3F
On March 17, 2026, OpenAI said on X that GPT-5.4 mini was available in ChatGPT, Codex, and the API. The post describes the model as optimized for coding, computer use, multimodal understanding, and subagents, and says it is 2x faster than GPT-5 mini. OpenAI’s supporting launch post from the same day also introduced GPT-5.4 nano alongside mini.
According to OpenAI, GPT-5.4 mini improves on GPT-5 mini across coding, reasoning, multimodal understanding, and tool use while approaching the larger GPT-5.4 model on some evaluations. The company positions it for low-latency assistants, parallel subagents, and computer-use systems that need to interpret screenshots quickly without paying the cost of a full frontier model on every step.
- OpenAI says GPT-5.4 mini is available in the API, Codex, and ChatGPT.
- In the API, the official launch post says it supports text and image inputs, tool use, function calling, web search, file search, computer use, and skills with a 400k context window.
- OpenAI’s ChatGPT release notes say Free and Go users can access it through the Thinking feature, while paid users mostly see it as a fallback when GPT-5.4 Thinking rate limits are reached.
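As a rough illustration of the API surface the launch post describes, the sketch below builds a Responses-style request with a combined text-and-image input and a hosted tool enabled. The model id `gpt-5.4-mini`, the payload shape, and the tool name are assumptions modeled on OpenAI's existing API conventions, not details confirmed by the post.

```python
# Hypothetical request payload for GPT-5.4 mini via OpenAI's Responses API.
# The model id and tool name are assumptions, not confirmed values.
request = {
    "model": "gpt-5.4-mini",  # assumed model id
    "input": [
        {
            "role": "user",
            "content": [
                # Text and image inputs, per the launch post's capability list.
                {"type": "input_text", "text": "What does this screenshot show?"},
                {"type": "input_image", "image_url": "https://example.com/screen.png"},
            ],
        }
    ],
    # One of the hosted tools the launch post says the model supports.
    "tools": [{"type": "web_search"}],
}

# With the official Python SDK, this would be sent roughly as:
#   from openai import OpenAI
#   client = OpenAI()
#   response = client.responses.create(**request)
#   print(response.output_text)
```

The dict form makes the shape inspectable without a network call; in practice the same fields are passed directly to the SDK.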
The companion GPT-5.4 nano release broadens the strategy. OpenAI says nano is the smallest and cheapest GPT-5.4 variant, aimed at API workloads such as classification, data extraction, ranking, and simpler coding subagents where latency and cost matter more than frontier-level depth. That creates a clearer split between mini as the general low-latency workhorse and nano as the lightweight support model.
The larger signal is that OpenAI is not only scaling up its flagship models, but also filling in the operational layer beneath them for ChatGPT, Codex, and API builders. In Codex in particular, the company says GPT-5.4 mini can take cheaper subagent work in parallel, which makes the launch relevant to real developer workflows rather than just benchmark comparisons. Further details are in the original X post, OpenAI's launch post, and the ChatGPT release notes.
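The subagent pattern OpenAI describes for Codex can be sketched as a simple fan-out: a coordinator dispatches small, independent tasks to a cheap model in parallel and gathers the results. The `call_mini` stub below stands in for an actual GPT-5.4 mini API call; this is an illustrative pattern, not OpenAI's implementation.

```python
from concurrent.futures import ThreadPoolExecutor


def call_mini(task: str) -> str:
    """Stub for a GPT-5.4 mini API call; a real system would hit the API here."""
    return f"result for: {task}"


def fan_out(tasks: list[str]) -> list[str]:
    """Run cheap subagent tasks in parallel and collect results in input order."""
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(call_mini, tasks))


results = fan_out(["summarize diff", "lint module", "draft tests"])
```

Because the per-call model is cheap and fast, a coordinator can afford to run many such tasks concurrently and reserve the larger GPT-5.4 model for the final synthesis step.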