OpenAI has shipped a significant update to its Agents SDK, the company's toolkit for building AI agents. The new release lets developers run their agents in fully isolated, controlled environments, a step aimed squarely at the security concerns of enterprise customers.
The centerpiece of the update, "sandbox" support, lets agents operate with access limited to specific files and code rather than the entire system. Separating the control logic from the computing environment in this way improves security, and OpenAI emphasizes that the architecture also makes agents more reliable and scalable.
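The source does not describe the SDK's actual sandbox API, but the underlying idea, confining an agent's file access to an explicit allowlist instead of the whole filesystem, can be sketched in plain Python. The `FileSandbox` class and its method names below are hypothetical illustrations, not part of the Agents SDK:

```python
from pathlib import Path


class FileSandbox:
    """Hypothetical sketch: restrict an agent's file reads to one directory.

    This is NOT the Agents SDK API, only an illustration of the
    "limited access to specific files" idea described in the article.
    """

    def __init__(self, allowed_dir: str):
        # Resolve once so symlinks cannot be used to escape the sandbox.
        self.allowed_dir = Path(allowed_dir).resolve()

    def read(self, path: str) -> str:
        resolved = Path(path).resolve()
        # Refuse any path that is not inside the sandboxed directory.
        if self.allowed_dir not in resolved.parents:
            raise PermissionError(f"access outside sandbox: {resolved}")
        return resolved.read_text()
```

An agent tool wired through such a wrapper can read its working files but gets a `PermissionError` for anything outside the allowed directory, which is the containment property the article attributes to sandboxing.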
Long-Term Control and Sub-Agent Support
The update also introduces a "long-horizon harness" feature, which lets developers manage in finer detail how agents interact with their working environment. This plays a critical role in drawing clear boundaries for agents during multi-step tasks, which OpenAI calls "long-horizon tasks."
Additionally, the update adds support for sub-agents, which can be dispatched to isolated environments and run in parallel. Sub-agent support is expected to be available in both Python and TypeScript.
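The article does not show the SDK's sub-agent interface, but the pattern it describes, several sub-agents each working in its own isolated environment, in parallel, can be sketched with standard-library `asyncio`. The `run_subagent` function and task names here are invented for illustration and do not reflect the real Agents SDK API:

```python
import asyncio
import tempfile


async def run_subagent(task: str) -> str:
    """Hypothetical sub-agent: does its work inside a private temp directory."""
    with tempfile.TemporaryDirectory() as workdir:
        # A real sub-agent would execute model-driven steps here;
        # we only simulate isolated, concurrent work.
        await asyncio.sleep(0.01)
        return f"{task}: done"


async def dispatch(tasks: list[str]) -> list[str]:
    # Each sub-agent gets its own environment; all run concurrently.
    return await asyncio.gather(*(run_subagent(t) for t in tasks))


results = asyncio.run(dispatch(["fetch-data", "summarize"]))
```

The key design point mirrored from the article is isolation per sub-agent: each one's workspace is created and destroyed independently, so parallel runs cannot interfere with each other's files.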
This update from OpenAI stands out not only as a technical improvement but also as a strategic move toward winning enterprise customers. One of the biggest barriers to adopting AI agents in large companies is the risk of uncontrolled system access, and sandbox support targets that risk directly: any unpredictable behavior an agent exhibits stays contained within its isolated environment.