OpenAI Introduces Lockdown Mode and Elevated Risk Labels in ChatGPT
What OpenAI announced
On February 13, 2026, OpenAI announced two security-focused product changes for ChatGPT: Lockdown Mode and a standardized Elevated Risk labeling system. The company positioned both updates as a response to growing prompt injection risk as AI assistants gain broader access to web tools, connected apps, and enterprise data.
Prompt injection attacks embed adversarial instructions in content an AI system processes (a web page, a document, a connector payload) in order to steer it toward unauthorized actions or sensitive data exposure. OpenAI's new approach goes beyond warnings by introducing deterministic product constraints for users who face higher operational risk. In practical terms, the design goal is to reduce viable exfiltration paths, not just to detect malicious prompts after the fact.
How Lockdown Mode changes behavior
Lockdown Mode is presented as an advanced, optional setting aimed at high-risk users such as executives, security teams, and people in similarly sensitive roles; it is not framed as a default for all users. When enabled, ChatGPT's interactions with external systems are tightly constrained.
- Web browsing is limited to cached content instead of live network access.
- Some network-enabled capabilities are restricted or disabled when deterministic safeguards are not available.
- Workspace admins can assign Lockdown Mode through role-based controls.
- The mode layers on top of existing enterprise controls, including audit and access governance.
OpenAI’s help guidance also emphasizes that apps and connectors need careful policy configuration, because external integrations can still create exposure if misconfigured. The company’s posture is therefore a combination of product-level guardrails and admin-level governance decisions.
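OpenAI has not published implementation details, but the "deterministic safeguards" described above can be illustrated with a small sketch. Everything here is invented for illustration: the `ToolRequest` type, the `is_allowed` gate, and the cache-only browsing rule are assumptions, not OpenAI's actual product logic. The point is only to show what a deterministic guardrail looks like: a fixed rule table over declared request properties, rather than a classifier judging whether a prompt looks malicious.

```python
from dataclasses import dataclass

# Hypothetical sketch; none of these names come from OpenAI's product.

@dataclass(frozen=True)
class ToolRequest:
    tool: str           # e.g. "web_browse", "connector", "code_exec"
    live_network: bool  # does the call reach out over the network?

def is_allowed(req: ToolRequest, lockdown: bool) -> bool:
    """Return True if the tool call may proceed under the current mode."""
    if not lockdown:
        return True  # normal mode: defer to existing policy layers
    if req.tool == "web_browse":
        # Lockdown: browsing serves cached content only, no live fetches.
        return not req.live_network
    # Any other network-enabled capability is blocked outright.
    return not req.live_network

# Cached browsing passes; a live fetch is refused.
assert is_allowed(ToolRequest("web_browse", live_network=False), lockdown=True)
assert not is_allowed(ToolRequest("web_browse", live_network=True), lockdown=True)
```

Because the decision depends only on the request's declared properties, the same input always yields the same outcome, which is what makes such constraints auditable in a way that after-the-fact prompt detection is not.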
Why Elevated Risk labels matter
OpenAI is also rolling out consistent Elevated Risk labels across ChatGPT, ChatGPT Atlas, and Codex for capabilities that may introduce additional risk, especially around network interactions. The label is intended to make risk explicit at the point where users configure these capabilities, so the tradeoff is visible before a feature is enabled.
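The label system is a UI convention, not a published API. Purely as an illustration of what "standardized" means here, one can imagine a shared risk taxonomy that every surface renders the same way; the capability names and the `RiskLabel` enum below are invented, not OpenAI's actual taxonomy.

```python
from enum import Enum

class RiskLabel(Enum):
    STANDARD = "standard"
    ELEVATED = "elevated"  # extra caution, e.g. live network access

# Invented capability catalog; the real taxonomy is not public.
CAPABILITIES = {
    "cached_browsing": RiskLabel.STANDARD,
    "live_web_access": RiskLabel.ELEVATED,
    "third_party_connector": RiskLabel.ELEVATED,
}

def settings_rows():
    """Yield (capability, label) pairs as a settings page might render them."""
    for name, label in sorted(CAPABILITIES.items()):
        yield name, label.value
```

A single shared table like this is what lets the same capability carry the same label in ChatGPT, Atlas, and Codex, instead of each surface inventing its own warning copy.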
At launch, Lockdown Mode is available for ChatGPT Enterprise, ChatGPT Edu, ChatGPT for Healthcare, and ChatGPT for Teachers, with broader availability planned later. The announcement is notable because it treats security as a first-class product surface: users get explicit risk signaling, and organizations get deterministic controls tuned for higher-threat operating contexts.