/ciol/media/media_files/2026/03/10/openai-plans-promptfoo-acquisition-2026-03-10-11-20-47.png)
OpenAI has announced plans to acquire Promptfoo, a platform focused on testing and securing artificial intelligence systems during development. The acquisition, once completed, will see Promptfoo’s technology integrated into OpenAI Frontier, the company’s platform designed for building and operating AI “coworkers” in enterprise environments.
The move reflects a growing concern among enterprises deploying AI agents into real business processes: how to systematically test and govern systems that interact with corporate data, internal tools, and operational workflows. As organisations shift from experimentation to production deployments, evaluation and security checks are becoming central to AI adoption strategies.
OpenAI said the integration will allow enterprises using Frontier to test AI agents for vulnerabilities earlier in the development process, detect potential security issues before deployment, and maintain records needed for governance and compliance oversight. The transaction is subject to customary closing conditions.
Security Challenges Grow As AI Agents Enter Business Workflows
Enterprise interest in AI agents—software systems capable of performing tasks autonomously—has grown rapidly over the past year. These systems can access databases, interact with enterprise software, and perform complex tasks such as generating reports or assisting with customer support operations.
However, the more connected these agents become, the greater the potential risk. Organisations must evaluate how such systems respond to prompts, whether they expose sensitive information, and how they behave when interacting with internal tools or external data sources.
This is the area Promptfoo focuses on. The platform provides tools that allow developers to test and evaluate AI systems during development. Its software is used to identify vulnerabilities such as prompt injection attacks, jailbreak attempts, unintended data exposure, and actions that fall outside defined policies.
Promptfoo has developed both enterprise tools and open-source software designed to evaluate large language model applications. According to the company, its tools are already used by more than 25 percent of Fortune 500 companies, particularly for testing and red-teaming AI applications before deployment. For OpenAI, bringing those capabilities into Frontier is intended to make evaluation and security testing part of the development lifecycle rather than a separate step carried out later.
“Promptfoo brings deep engineering expertise in evaluating, securing, and testing AI systems at enterprise scale. Their work helps businesses deploy secure and reliable AI applications, and we’re excited to bring these capabilities directly into Frontier.” — Srinivas Narayanan, CTO of B2B Applications, OpenAI
Integrating Testing Into The AI Development Lifecycle
Once the acquisition closes, Promptfoo’s capabilities are expected to be integrated directly into the Frontier platform. The goal is to enable developers building AI agents to evaluate security and safety risks while those systems are still being designed and tested.
OpenAI said automated testing and red-teaming tools will be embedded within the platform, allowing enterprises to detect issues such as prompt injections, jailbreak attempts, data leakage risks, misuse of integrated tools, and agent behaviour that violates internal policies.
The company also plans to integrate security testing more deeply into development workflows. Rather than evaluating systems only after deployment, developers will be able to identify and investigate risks earlier in the process and make changes before agents are deployed into operational environments.
Another focus will be governance and traceability. The integrated tools are expected to provide reporting and monitoring capabilities that allow organisations to document testing procedures, track changes in AI behaviour over time, and maintain records that support governance, risk management, and compliance requirements.
These capabilities are becoming increasingly important as regulators and enterprise boards seek clearer accountability mechanisms for AI deployments.
Developers Seek Practical Ways To Test AI Systems
Promptfoo was founded to address what its creators saw as a gap in the AI development ecosystem: the lack of practical tools for systematically testing large language model applications.
As generative AI systems began to move beyond experimentation into enterprise production environments, developers needed ways to simulate attacks, test outputs, and measure how systems behaved under different scenarios.
“We started Promptfoo because developers needed a practical way to secure AI systems. As AI agents become more connected to real data and systems, securing and validating them is more challenging and important than ever. Joining OpenAI lets us accelerate this work, bringing stronger security, safety, and governance capabilities to the teams building real-world AI systems.” — Ian Webster, Co-founder and CEO, Promptfoo
The Promptfoo team, led by Ian Webster and Michael D’Angelo, is expected to join OpenAI once the acquisition is complete. The companies said the open-source Promptfoo project will continue alongside the development of enterprise capabilities within the Frontier platform.
Open-source testing tools have played an important role in the generative AI ecosystem, allowing developers to evaluate models and identify weaknesses in a transparent way. Maintaining the project alongside enterprise integrations suggests OpenAI intends to continue supporting that developer community while expanding enterprise offerings.
Enterprise AI Moves Toward Governance And Accountability
The acquisition comes at a time when enterprise AI discussions are shifting from model performance to operational controls. Organisations are increasingly focused on questions around reliability, auditability, and oversight. AI systems used in enterprise settings often interact with internal data repositories, communication tools, and operational software. Without clear testing frameworks, it can be difficult to assess how these systems behave under unusual conditions or malicious inputs.
By integrating Promptfoo’s capabilities into Frontier, OpenAI appears to be positioning the platform not only as a development environment for AI agents but also as infrastructure for managing their risks. The focus on governance and traceability also reflects a broader trend in enterprise technology: the expectation that AI systems must be explainable, auditable, and monitored over time. For enterprises experimenting with AI coworkers, agents designed to assist employees or automate tasks, these requirements are becoming a prerequisite for broader deployment.
While the financial details of the acquisition have not been disclosed, the move highlights how the AI infrastructure stack is evolving. Early discussions around generative AI focused largely on model capabilities. Increasingly, however, attention is shifting toward the tools needed to deploy and manage those systems at scale.
Testing frameworks, evaluation tools, and governance controls are becoming essential parts of enterprise AI platforms. As organisations move from pilot projects to operational deployments, these capabilities may determine how quickly AI agents can be adopted in regulated or data-sensitive environments. OpenAI’s planned acquisition of Promptfoo signals an effort to address that challenge within its Frontier platform. The deal is expected to close once customary conditions are met.
/ciol/media/agency_attachments/c0E28gS06GM3VmrXNw5G.png)
Follow Us