OpenAI moves to acquire Promptfoo and fold agent security testing into Frontier
Original: OpenAI to acquire Promptfoo View original →
On March 9, 2026, OpenAI said on X that it plans to acquire Promptfoo, an AI security platform used to test and red-team LLM applications. In a separate announcement, OpenAI said the deal is intended to bring Promptfoo's evaluation, security, and governance tooling directly into OpenAI Frontier, its enterprise platform for building and operating AI coworkers. The company said the transaction is still subject to customary closing conditions.
OpenAI framed the deal around a practical bottleneck for enterprise agent deployments. As agents move from chat demos into real workflows, companies need repeatable ways to detect prompt injection, jailbreaks, data leakage, tool misuse, and other out-of-policy behavior before deployment. OpenAI said Promptfoo's technology will help make those checks part of the platform instead of an external afterthought.
According to OpenAI, Promptfoo is already used by more than 25 percent of Fortune 500 companies and maintains an open-source CLI and library for evaluating and red-teaming LLM systems. OpenAI said that open-source project will remain available under its current license while the combined team also works on deeper enterprise features inside Frontier. The announcement signals that security testing is becoming a first-class product surface for agent platforms, not just a consulting add-on.
OpenAI highlighted three areas it wants to expand after closing.
- Native security and safety testing inside Frontier
- Tighter integration between evaluation results and development workflows
- Built-in reporting and traceability for governance and compliance
For enterprise buyers, the message is straightforward. Vendors now see agent security, audit trails, and operational oversight as baseline requirements for production AI systems rather than optional extras. If the Promptfoo integration lands as described, Frontier would cover more of the build, test, and operate loop inside a single stack.
Related Articles
OpenAI is attaching cash to the hardest kind of safety failure: a single prompt that breaks all five of its bio safeguards. The new GPT-5.5 Bio Bug Bounty pays $25,000 for a universal jailbreak, limits testing to GPT-5.5 in Codex Desktop, and starts formal testing on April 28.
Why it matters: API availability is the moment a flagship model becomes something teams can actually wire into products. OpenAI’s developer account says GPT-5.5 brings fewer retries, and the official release page now lists API access with a 1M context window and updated pricing.
One of AI’s most important commercial contracts just loosened up. Microsoft keeps Azure’s first-stop role and long-dated IP access, but OpenAI can now sell across any cloud and Microsoft will no longer pay it a revenue share.
Comments (0)
No comments yet. Be the first to comment!