In a March 29, 2026 X post, OpenAI Developers introduced Codex Security, a research preview aimed at identifying, validating, and remediating software vulnerabilities. The launch extends AI coding assistance into application security workflows.
#application-security
OpenAI said on March 6, 2026 that Codex Security is entering research preview for ChatGPT Pro, Enterprise, Business, and Edu users in Codex web. The company says the application-security agent uses project-specific threat models, contextual validation, and patch proposals, and in beta scanned more than 1.2 million commits.
OpenAI says Codex Security deliberately does not start from a static application security testing (SAST) report, because many real vulnerabilities stem from broken validation order, canonicalization mistakes, and other behavioral flaws rather than simple dataflow patterns. Instead, the system starts from repository behavior and validates hypotheses with focused tests in a sandbox.
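To make the distinction concrete, here is a hypothetical sketch (not from OpenAI's materials) of the kind of validation-order bug the post alludes to: a path-prefix check that runs before canonicalization, so a traversal sequence passes the check and is only resolved afterward. A simple dataflow pattern sees a check on the tainted value and may stay quiet; the flaw is in the order of operations. The function and directory names are illustrative.

```python
import os.path

BASE = "/srv/app/uploads"

def read_upload_broken(name: str) -> str:
    # BUG: validation happens before canonicalization. The raw joined
    # string starts with BASE, so the prefix check passes, but normpath
    # later resolves "../" segments to a path outside the upload dir.
    candidate = BASE + "/" + name
    if not candidate.startswith(BASE):      # check on the raw string
        raise ValueError("outside upload dir")
    return os.path.normpath(candidate)      # canonicalize too late

def read_upload_fixed(name: str) -> str:
    # Canonicalize first, then validate the normalized result.
    candidate = os.path.normpath(BASE + "/" + name)
    if not (candidate == BASE or candidate.startswith(BASE + "/")):
        raise ValueError("outside upload dir")
    return candidate
```

Here `read_upload_broken("../../../etc/passwd")` returns `/etc/passwd`, while the fixed version rejects the same input; catching this class of bug requires reasoning about behavior, which is the gap the agent is said to target.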
On February 20, 2026, Anthropic introduced Claude Code Security in limited research preview. The feature scans codebases for vulnerabilities and proposes patches, while keeping final remediation decisions under human review and approval.