OpenAI locks high-risk ChatGPT accounts behind passkeys
Original: OpenAI launched Advanced Account Security for high-risk accounts
What changed for account security
OpenAI turned account hardening into a named product feature instead of leaving it scattered across settings. The company’s main X account said Advanced Account Security is now available for ChatGPT accounts as an opt-in mode for people at higher risk of digital attacks. That matters because a ChatGPT login no longer protects only chat history. It can also sit at the center of Codex sessions, connected apps, and a growing amount of personal and professional context.
“Advanced Account Security” adds “phishing-resistant sign-in and more secure account recovery.”
OpenAI’s April 30 product page adds the operational details that make the feature material. Advanced Account Security replaces password-based login with passkeys or physical security keys, turns off email and SMS recovery, shortens session lifetime, and adds clearer session management and login alerts. It also applies to Codex when the same login is used there. For especially sensitive users, OpenAI says conversations from enrolled accounts are automatically excluded from model training, removing one more setting those people would otherwise need to remember to flip.
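OpenAI has not published implementation details, but the phishing resistance of passkey-style sign-in comes from a standard property of WebAuthn: the authenticator signs the server's challenge together with the origin the browser reports, so a lookalike domain cannot produce a response the real server accepts, and there is no password to steal. The sketch below models that idea with an HMAC standing in for the real public-key signature (a simplification so it runs on the standard library alone); the origins are illustrative.

```python
import hmac, hashlib, secrets

# Simplified model of passkey challenge-response sign-in (illustrative only).
# Real passkeys use asymmetric signatures per WebAuthn; an HMAC keyed by a
# device-held secret stands in here so the sketch is self-contained.

device_secret = secrets.token_bytes(32)  # never leaves the "authenticator"

def sign(challenge: bytes, origin: str) -> bytes:
    # The response is bound to BOTH the server's challenge and the origin
    # the browser observed -- this binding is what defeats phishing pages.
    return hmac.new(device_secret, challenge + origin.encode(), hashlib.sha256).digest()

def verify(challenge: bytes, origin: str, response: bytes) -> bool:
    expected = hmac.new(device_secret, challenge + origin.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = secrets.token_bytes(16)

# Legitimate login: origin matches, verification succeeds.
good = sign(challenge, "https://chatgpt.com")
assert verify(challenge, "https://chatgpt.com", good)

# Phishing attempt: a lookalike origin yields a response the real server
# rejects, and no reusable credential was ever exposed to the attacker.
phished = sign(challenge, "https://chatgpt-login.example")
assert not verify(challenge, "https://chatgpt.com", phished)
```

The same origin-binding is why email and SMS codes, which a user can be tricked into typing anywhere, do not offer equivalent protection.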
Why OpenAI is narrowing recovery on purpose
The sharpest design choice is recovery. OpenAI says enrolled users must rely on backup passkeys, security keys, and recovery keys, and that support will not be able to restore access for them through weaker fallback paths. That creates friction, but it is the right kind of friction for the people this feature targets: journalists, researchers, political figures, security teams, and anyone whose account could become a gateway to sensitive work.
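OpenAI does not describe how its recovery keys work internally, but a common design consistent with "support cannot restore access" is to show the key exactly once at enrollment and store only a salted hash of it server-side. A minimal sketch, with all names hypothetical:

```python
import hashlib, secrets

# Hypothetical recovery-key scheme (not OpenAI's published implementation):
# the plaintext key is shown to the user once; the service keeps only a
# salted hash, so neither support staff nor a database leak can recover it.

def issue_recovery_key() -> tuple[str, bytes, bytes]:
    key = secrets.token_urlsafe(24)      # displayed to the user one time
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", key.encode(), salt, 100_000)
    return key, salt, digest             # server stores only salt + digest

def redeem(attempt: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", attempt.encode(), salt, 100_000)
    return secrets.compare_digest(candidate, digest)

key, salt, digest = issue_recovery_key()
assert redeem(key, salt, digest)                 # the real key unlocks recovery
assert not redeem("guessed-key", salt, digest)   # nothing else does
```

Under a design like this, losing every passkey, security key, and recovery key means losing the account, which is exactly the tradeoff the stricter recovery model asks high-risk users to accept.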
There is also a policy signal here. OpenAI says individual members of Trusted Access for Cyber will have to enable this mode beginning June 1, 2026 unless their organization can attest to phishing-resistant single sign-on. That moves the feature from optional best practice toward baseline protection for the most sensitive users on the platform. What to watch next is how quickly OpenAI extends the same controls into broader enterprise workflows and whether users tolerate the stricter recovery model once they experience its tradeoffs firsthand.

Source: OpenAI source tweet · OpenAI product page
Related Articles
OpenAI said on March 31, 2026 that it closed a $122 billion funding round at an $852 billion post-money valuation. The company used the announcement to present consumer reach, enterprise growth, API usage, Codex adoption, and compute access as one reinforcing AI platform flywheel.
OpenAI said on April 10, 2026 that a compromised Axios package touched a GitHub Actions workflow used in its macOS app-signing pipeline. The company says no user data, systems, or software were compromised, but macOS users need updated builds signed with a new certificate before May 8, 2026.