In a March 6, 2026 X post, OpenAI Developers introduced Codex Security, a research preview aimed at identifying, validating, and remediating software vulnerabilities. The launch extends AI coding assistance into application-security workflows.
#codex-security
OpenAI said on March 6, 2026 that Codex Security is entering research preview for ChatGPT Pro, Enterprise, Business, and Edu users in Codex web. The company says the application-security agent uses project-specific threat models, contextual validation, and patch proposals, and that in beta it scanned more than 1.2 million commits.
OpenAI says Codex Security is built to reason from repository behavior rather than triage a precomputed SAST report. The company argues that many important vulnerabilities stem from failed invariants, broken validation order, canonicalization mistakes, and transformation chains rather than simple dataflow patterns, so the agent forms hypotheses from project context and validates them with focused tests in a sandbox before escalating findings.
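The validation-order and canonicalization flaws described above can be illustrated with a classic path-traversal bug. The sketch below is a hypothetical example of that bug class, not code from Codex Security: the vulnerable check runs a prefix test before canonicalizing the path, so `..` segments slip through, while the fixed version canonicalizes first.

```python
import os

def is_safe_vulnerable(base: str, user_path: str) -> bool:
    # BUG: the prefix check runs BEFORE canonicalization, so a
    # user_path like "../../etc/passwd" still starts with base
    # lexically and passes the check.
    joined = os.path.join(base, user_path)
    return joined.startswith(base)

def is_safe_fixed(base: str, user_path: str) -> bool:
    # Canonicalize first (resolves "..", ".", and symlinks where
    # possible), THEN verify containment against the real base.
    real_base = os.path.realpath(base)
    real_path = os.path.realpath(os.path.join(base, user_path))
    return os.path.commonpath([real_base, real_path]) == real_base

# The broken ordering accepts a traversal payload...
print(is_safe_vulnerable("/srv/uploads", "../../etc/passwd"))  # True (unsafe!)
# ...while canonicalize-then-check rejects it and still allows normal files.
print(is_safe_fixed("/srv/uploads", "../../etc/passwd"))       # False
print(is_safe_fixed("/srv/uploads", "report.txt"))             # True
```

An agent validating a hypothesis "in context", as OpenAI describes, would be confirming exactly this kind of ordering-dependent behavior with a focused test rather than flagging a raw dataflow pattern.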
OpenAI announced Codex Security on X on March 6, 2026. Public materials describe it as an application-security agent that analyzes project context to detect, validate, and patch complex vulnerabilities with higher confidence and less noise.