OpenAI explains why Codex Security does not start from a SAST report
Original: Why Codex Security Doesn’t Include a SAST Report
On March 16, 2026, OpenAI published a design note explaining why Codex Security does not begin from a static application security testing, or SAST, report. The company says Codex Security was built as an agent that reads a repository directly, reasons about architecture, trust boundaries, and intended behavior, and then validates what it finds before asking a human reviewer to spend time on it. The stated goal is to reduce triage work by producing higher-confidence findings instead of a large list of possible issues.
OpenAI’s core argument is that many high-impact vulnerabilities are not simple source-to-sink problems. In the post, the team describes cases where code appears to perform a security check, but that check does not actually guarantee the property the system depends on. One example is a web app that validates a redirect_url with an allowlist regex, then URL-decodes the value before passing it to a redirect handler. The visible dataflow is easy to trace, but the harder question is whether the validation still constrains the decoded value after normalization and parsing. OpenAI points to CVE-2024-29041 in Express as a real example of this pattern.
Because of that, Codex Security is designed to start from behavior and then validate. OpenAI says the system reads the relevant code path with full repository context, reduces suspicious logic into small testable slices, writes micro-fuzzers when useful, and can use tools such as z3-solver in a Python environment for constraint problems. When possible, it executes hypotheses in a sandboxed validation environment to distinguish “this could be a problem” from “this is a problem.” The company says that moving from suspicion to validated evidence is the most expensive part of modern AppSec triage, and that is the part it wants the product to optimize.
OpenAI is explicit that this is not a rejection of SAST. The post says SAST remains valuable for secure coding standards, predictable source-to-sink classes, and defense-in-depth. The reason not to seed Codex Security with a SAST report is that doing so can narrow the search space too early, inherit assumptions about trust boundaries and sanitization, and make it harder to measure what the agent discovered on its own. For security teams, the announcement positions Codex Security less as a faster wrapper around existing scanners and more as a repo-native reasoning and validation layer aimed at complex, context-heavy vulnerabilities.
Related Articles
OpenAI says Codex Security deliberately does not start from a SAST report because many real vulnerabilities come from broken validation order, canonicalization, and other behavioral flaws rather than simple dataflow patterns. Instead, the system starts from repository behavior and validates hypotheses with focused tests in a sandbox.
OpenAI announced Codex Security on X on March 6, 2026. Public materials describe it as an application security agent that analyzes project context to detect, validate, and patch complex vulnerabilities with higher confidence and less noise.
OpenAI Developers published a March 11, 2026 engineering write-up explaining how the Responses API uses a hosted computer environment for long-running agent workflows. The post centers on shell execution, hosted containers, controlled network access, reusable skills, and native compaction for context management.