OpenAI launches Trusted Access for Cyber with GPT-5.3-Codex and $10M in defender credits

Original: Introducing Trusted Access for Cyber

AI · Mar 15, 2026 · By Insights AI

OpenAI said on February 5, 2026, that it is launching Trusted Access for Cyber, a new identity- and trust-based access program designed to place stronger cyber capabilities in the hands of legitimate defenders without broadly relaxing abuse controls. The company described GPT-5.3-Codex as its most cyber-capable frontier reasoning model to date, and positioned the new program as a way to move from a one-size-fits-all safety posture toward more targeted access for vetted users.

The core idea is simple: keep baseline safeguards on by default, then create a more controlled path for security teams that need higher capability ceilings for real defensive work. OpenAI said it will use automated classifier-based monitors and identity verification to manage the pilot. Individuals can join a waitlist through the company’s dedicated cyber page, while enterprise customers can apply through their existing OpenAI representatives. The company also said the initial rollout will be invite-only and oriented toward cybersecurity organizations, open-source maintainers, academics, and civil-society groups working on security.
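In pseudocode, the model described above amounts to a tiered gate: everyone gets the baseline, and an elevated ceiling requires identity verification, pilot vetting, and a clean read from an automated misuse classifier. The following sketch is purely illustrative; the tier names, fields, and thresholds are assumptions, not OpenAI's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    identity_verified: bool   # passed identity verification
    vetted_defender: bool     # accepted into the invite-only pilot
    classifier_risk: float    # hypothetical monitor score, 0.0 (benign) to 1.0 (likely misuse)

def capability_tier(req: AccessRequest) -> str:
    """Return a capability ceiling for a request.

    Baseline safeguards apply by default; the elevated tier requires
    identity verification, pilot vetting, and a low misuse-risk score.
    Thresholds here are invented for illustration.
    """
    if req.classifier_risk >= 0.8:
        return "blocked"
    if req.identity_verified and req.vetted_defender and req.classifier_risk < 0.3:
        return "elevated"
    return "baseline"

# A vetted, verified defender with a benign request gets the higher ceiling;
# an anonymous user with the same request stays at the default.
print(capability_tier(AccessRequest(True, True, 0.1)))    # elevated
print(capability_tier(AccessRequest(False, False, 0.1)))  # baseline
```

The design point is that access level is a function of the requester's trust signals, not just the content of the request, which is what distinguishes this from a single global safety filter.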

Why it matters

This is an important policy shift because the cyber debate around frontier models has been stuck between two unsatisfying extremes: either lock down advanced systems so aggressively that defenders cannot benefit, or open them up so widely that offensive misuse becomes easier. Trusted Access is an attempt to build a middle layer. Instead of assuming every user should have the same permissions, OpenAI is saying that trust, identity, monitoring, and mission context can determine access level.

OpenAI paired the announcement with a $10 million Cybersecurity Grant Program in API credits. The credits are intended for people with a demonstrated track record of finding and remediating vulnerabilities in open-source software and critical infrastructure. That matters because some of the users who most need advanced security tooling are not large commercial buyers. If the grant program is administered well, it could push frontier models into public-interest defense work rather than only into enterprise products.

What to watch next

  • Whether the monitoring system can distinguish legitimate research from dangerous testing without blocking too much useful work.
  • Whether competitors adopt similar identity-based access tiers for high-risk domains.
  • Whether the grant credits translate into measurable fixes in open-source and infrastructure security.

The announcement is ultimately more about governance than raw capability. OpenAI is making access control, verification, and accountability part of the product surface for advanced AI security tooling.


© 2026 Insights. All rights reserved.