OpenAI launches Trusted Access for Cyber with GPT-5.3-Codex and $10M in defender credits
Original: Introducing Trusted Access for Cyber
OpenAI said on February 5, 2026, that it is launching Trusted Access for Cyber, a new identity- and trust-based access program designed to place stronger cyber capabilities in the hands of legitimate defenders without broadly relaxing abuse controls. The company described GPT-5.3-Codex as its most cyber-capable frontier reasoning model so far, and positioned the new program as a way to move from a one-size-fits-all safety posture toward more targeted access for vetted users.
The core idea is simple: keep baseline safeguards on by default, then create a more controlled path for security teams that need higher capability ceilings for real defensive work. OpenAI said it will use automated classifier-based monitors and identity verification to manage the pilot. Individuals can join a waitlist through the company’s dedicated cyber page, while enterprise customers can apply through their existing OpenAI representatives. The company also said the initial rollout will be invite-only and oriented toward cybersecurity organizations, open-source maintainers, academics, and civil-society groups working on security.
Why it matters
This is an important policy shift because the cyber debate around frontier models has been stuck between two unsatisfying extremes: either lock down advanced systems so aggressively that defenders cannot benefit, or open them up so widely that offensive misuse becomes easier. Trusted Access is an attempt to build a middle layer. Instead of assuming every user should have the same permissions, OpenAI is saying that trust, identity, monitoring, and mission context can determine access level.
OpenAI paired the announcement with a $10 million Cybersecurity Grant Program in API credits. The credits are intended for people with a demonstrated track record of finding and remediating vulnerabilities in open-source software and critical infrastructure. That matters because some of the users who most need advanced security tooling are not large commercial buyers. If the grant program is administered well, it could push frontier models into public-interest defense work rather than only into enterprise products.
What to watch next
- Whether the monitoring system can distinguish legitimate research from dangerous testing without blocking too much useful work.
- Whether competitors adopt similar identity-based access tiers for high-risk domains.
- Whether the grant credits translate into measurable fixes in open-source and infrastructure security.
The announcement is ultimately more about governance than raw capability. OpenAI is making access control, verification, and accountability part of the product surface for advanced AI security tooling.
Related Articles
OpenAI announced Frontier Alliances on February 23, 2026, positioning a partner-led model for enterprise AI transformation. The program formalizes collaboration across strategy, implementation, and domain workflows.
OpenAI announced on X that Codex Security has entered research preview. The company positions it as an application security agent that can detect, validate, and patch complex vulnerabilities with more context and less noise.
OpenAI announced $110B in new investment on February 27, 2026, alongside Amazon and NVIDIA partnerships aimed at compute scale. The company tied the move to 900M weekly ChatGPT users, 9M paying business users, and rising Codex demand.