OpenAI launches Trusted Access for Cyber with GPT-5.3-Codex and $10M in defender credits
OpenAI said on February 5, 2026, that it is launching Trusted Access for Cyber, an identity- and trust-based access program designed to place stronger cyber capabilities in the hands of legitimate defenders without broadly relaxing abuse controls. The company described GPT-5.3-Codex as its most cyber-capable frontier reasoning model to date and positioned the program as a move away from a one-size-fits-all safety posture toward more targeted access for vetted users.
The core idea is simple: keep baseline safeguards on by default, then create a more controlled path for security teams that need higher capability ceilings for real defensive work. OpenAI said it will use automated classifier-based monitors and identity verification to manage the pilot. Individuals can join a waitlist through the company’s dedicated cyber page, while enterprise customers can apply through their existing OpenAI representatives. The company also said the initial rollout will be invite-only and oriented toward cybersecurity organizations, open-source maintainers, academics, and civil-society groups working on security.
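The access model described above can be sketched in a few lines. This is purely illustrative: the tier names, the `User` fields, and the eligibility check are hypothetical stand-ins, not OpenAI's actual implementation; only the overall shape (baseline safeguards by default, a higher ceiling for verified users in eligible organizations) comes from the announcement.

```python
from dataclasses import dataclass

# Hypothetical tier labels for illustration; not OpenAI's actual access levels.
BASELINE, TRUSTED = "baseline", "trusted"

@dataclass
class User:
    identity_verified: bool  # e.g. vetted through the invite-only pilot
    org_type: str            # organizational context supplied at application

# Categories the announcement says the initial rollout is oriented toward.
ELIGIBLE_ORGS = {"cybersecurity", "open-source", "academic", "civil-society"}

def access_tier(user: User) -> str:
    """Return the capability tier for a user: baseline safeguards stay on
    by default; verified users in eligible organizations get a higher ceiling."""
    if user.identity_verified and user.org_type in ELIGIBLE_ORGS:
        return TRUSTED
    return BASELINE
```

The point of the sketch is that access is a function of identity and context rather than a single global setting, which is the structural change the program introduces.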
Why it matters
This is an important policy shift because the cyber debate around frontier models has been stuck between two unsatisfying extremes: either lock down advanced systems so aggressively that defenders cannot benefit, or open them up so widely that offensive misuse becomes easier. Trusted Access is an attempt to build a middle layer. Instead of assuming every user should have the same permissions, OpenAI is saying that trust, identity, monitoring, and mission context can determine access level.
OpenAI paired the announcement with a $10 million Cybersecurity Grant Program in API credits. The credits are intended for people with a demonstrated track record of finding and remediating vulnerabilities in open-source software and critical infrastructure. That matters because some of the users who most need advanced security tooling are not large commercial buyers. If the grant program is administered well, it could push frontier models into public-interest defense work rather than only into enterprise products.
What to watch next
- Whether the monitoring system can distinguish legitimate research from dangerous testing without blocking too much useful work.
- Whether competitors adopt similar identity-based access tiers for high-risk domains.
- Whether the grant credits translate into measurable fixes in open-source and infrastructure security.
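The first question above is at bottom a classification-threshold problem. A minimal sketch, with a hypothetical misuse score and threshold (OpenAI has not published how its classifier-based monitors are calibrated):

```python
def monitor_decision(misuse_score: float, threshold: float = 0.8) -> str:
    """Flag a request when a classifier's misuse score crosses the threshold.

    Lowering the threshold catches more genuinely dangerous testing but also
    blocks more legitimate defensive research -- the trade-off the pilot's
    monitoring system has to navigate.
    """
    return "flag" if misuse_score >= threshold else "allow"

# A stricter threshold starts flagging borderline defensive work too.
assert monitor_decision(0.9) == "flag"
assert monitor_decision(0.5) == "allow"
assert monitor_decision(0.5, threshold=0.4) == "flag"
```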
The announcement is ultimately more about governance than raw capability. OpenAI is making access control, verification, and accountability part of the product surface for advanced AI security tooling.