Hacker News Tracks Codex's Shift to Token-Based Credit Pricing

Original: Codex pricing to align with API token usage, instead of per-message

AI · Apr 6, 2026 · By Insights AI (HN) · 2 min read

What changed in the Codex rate card

A Hacker News thread on April 5, 2026 highlighted OpenAI’s updated Codex rate card, which moves pricing language away from simple per-message averages and toward token-based metering. At crawl time, the discussion had 195 points and 178 comments. The practical shift is that Codex usage is now described in credits per million input tokens, cached input tokens, and output tokens, making the cost impact of prompt size, cache reuse, and output volume much more explicit.

The published table gives concrete examples. For GPT-5.4, the rate card lists 62.50 credits per 1M input tokens, 6.250 credits per 1M cached input tokens, and 375 credits per 1M output tokens. GPT-5.4-Mini is lower at 18.75, 1.875, and 113 credits. GPT-5.1-Codex-mini drops further to 6.25, 0.625, and 50 credits. The point of the new format is not only to restate pricing, but to make Codex usage line up more directly with the token accounting developers already understand from API products.
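The per-model rates above translate directly into a per-task cost estimate. The sketch below uses the rates from the rate card; the helper function, its name, and the example token counts are illustrative assumptions, not OpenAI's actual billing logic.

```python
# Rates from the published rate card: (input, cached input, output)
# credits per 1M tokens. The dict layout itself is an assumption.
RATES = {
    "gpt-5.4":            (62.50, 6.250, 375.0),
    "gpt-5.4-mini":       (18.75, 1.875, 113.0),
    "gpt-5.1-codex-mini": ( 6.25, 0.625,  50.0),
}

def task_credits(model, input_tokens, cached_tokens, output_tokens, fast=False):
    """Estimate credits for one task under token-based metering.

    `fast=True` applies the documented 2x fast-mode multiplier.
    """
    inp_rate, cached_rate, out_rate = RATES[model]
    credits = (input_tokens * inp_rate
               + cached_tokens * cached_rate
               + output_tokens * out_rate) / 1_000_000
    return credits * (2 if fast else 1)

# Example: 40k fresh input tokens, 160k cache hits, 8k output tokens.
print(task_credits("gpt-5.4", 40_000, 160_000, 8_000))  # → 6.5
```

The same call with `fast=True` would double the estimate, which is why fast-mode jobs show up so visibly in credit burn.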

Why the transition is a little messy

The current documentation also makes clear that OpenAI is operating with two rate cards at once. New ChatGPT Business customers and new ChatGPT Enterprise customers are directed to the token-based pricing card. But existing Plus and Pro customers, along with Enterprise and Edu workspaces that have not been migrated yet, are still told to use the legacy rate card. That older card expresses Codex activity as approximate average credits per local task, cloud task, or code review rather than by token type.

There are several additional details teams need to watch. Fast mode consumes 2x credits. Code review uses GPT-5.3-Codex. GPT-5.3-Codex-Spark may appear as a research preview, but its rates are explicitly described as not final. OpenAI also points users to the Codex usage panel inside workspace settings so teams can monitor real token consumption rather than leaning only on averages.

Why teams care

The reason this drew Hacker News attention is that it changes what “cost planning” means for agentic coding tools. A per-message estimate is easy to understand, but it hides large differences between input-heavy tasks, cache-friendly tasks, and output-heavy tasks. The token-based model exposes those differences directly. Teams that design for cache reuse and smaller repeated prefixes should benefit. Teams running long output-heavy workflows, frequent fast-mode jobs, or large numbers of concurrent automations may see a more visible credit burn.
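That gap between workload shapes is easy to make concrete. The GPT-5.4 rates below come from the rate card; the two workload profiles and their token counts are assumed for illustration only.

```python
# GPT-5.4 rates from the rate card, credits per 1M tokens.
INPUT_RATE, CACHED_RATE, OUTPUT_RATE = 62.50, 6.250, 375.0

def credits(inp, cached, out):
    """Credits for a task, given fresh-input, cached-input, and output tokens."""
    return (inp * INPUT_RATE + cached * CACHED_RATE + out * OUTPUT_RATE) / 1_000_000

# Two hypothetical tasks, each touching 205k tokens in total:
# a cache-friendly task serves most of its prompt from cache...
cache_friendly = credits(inp=20_000, cached=180_000, out=5_000)   # → 4.25
# ...while an output-heavy task pays full input and output rates.
output_heavy = credits(inp=150_000, cached=0, out=55_000)         # → 30.0
print(cache_friendly, output_heavy)
```

Same total token volume, roughly a 7x cost difference: exactly the distinction a flat per-message average hides.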

In other words, the rate card update is not just a billing detail. It pushes Codex closer to an API-style operating model, where model choice, prompt shape, caching behavior, and workflow design all become part of the economics. For engineering teams evaluating Codex seriously, that is a more useful picture than a single “credits per message” estimate, even if it introduces a little more accounting complexity in the short term.

Sources: OpenAI Codex rate card, Hacker News discussion




© 2026 Insights. All rights reserved.