Perplexity Extends Agent API with Sandbox Execution as a Tool and Standalone Service

Original: Our Sandbox API will be available as a tool within Agent API, allowing the orchestration runtime to delegate to deterministic code execution. Sandbox API makes the same execution environment we use internally available as a standalone service for developers.

LLM · Mar 14, 2026 · By Insights AI · 2 min read

On March 11, 2026, Perplexity announced on X that its Sandbox API will be available both as a tool inside Agent API and as a standalone service for developers. The company described the environment as the same execution runtime it uses internally, with the orchestration layer able to delegate deterministic code execution when needed.

The current Agent API quickstart already presents Perplexity as a multi-provider interface for building LLM applications. Perplexity says developers can access models from multiple providers through one API while configuring reasoning, token budgets, and tools with consistent syntax. In the separate Tools documentation, the company explains that tools must be explicitly enabled in each request and that built-in tools currently include web_search and fetch_url, while custom functions connect the model to external systems.
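To make the per-request tool model concrete, here is a minimal sketch of assembling a request body that explicitly enables the built-in tools the docs name. The field names, model identifier, and payload shape are assumptions for illustration, not a confirmed Perplexity API schema:

```python
# Hypothetical sketch of enabling built-in tools per request.
# Field names and the model identifier are assumptions based on the
# docs' description, not a published Perplexity request schema.

def build_agent_request(prompt: str, enabled_tools: list[str]) -> dict:
    """Assemble a request body that explicitly enables each tool."""
    return {
        "model": "sonar-pro",  # assumed model name for illustration
        "input": prompt,
        # Tools must be opted into on every request; nothing is on by default.
        "tools": [{"type": name} for name in enabled_tools],
    }

request = build_agent_request(
    "Summarize the latest Sandbox API announcement.",
    ["web_search", "fetch_url"],  # built-in tools named in the docs
)
print(request["tools"])
```

The point of the sketch is the opt-in shape: each request declares exactly which tools the model may call, so adding Sandbox would presumably mean adding one more entry to that list.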

What the Sandbox update changes

  • It brings deterministic code execution closer to a first-class tool in the agent runtime.
  • It exposes Perplexity’s internal execution environment as a standalone developer surface.
  • It gives builders a cleaner path to combine retrieval and code execution inside one orchestration flow.
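The third point — retrieval and code execution inside one orchestration flow — can be sketched as a dispatcher that routes model-emitted tool calls to handlers, with a hypothetical "sandbox" handler standing in for delegated deterministic execution. All tool names, handler signatures, and the local `eval` stand-in are illustrative assumptions; Perplexity has not published a Sandbox API schema:

```python
# Illustrative orchestration loop: route each tool call to a handler.
# The "sandbox" handler is a local stand-in for the hypothetical
# Sandbox API; a real integration would call the remote service.
from typing import Callable

def run_in_sandbox(code: str) -> str:
    # Stand-in: evaluate a trivial arithmetic expression locally,
    # with builtins stripped to keep the toy example contained.
    return str(eval(code, {"__builtins__": {}}))

def search_web(query: str) -> str:
    return f"results for: {query}"  # placeholder retrieval step

HANDLERS: dict[str, Callable[[str], str]] = {
    "web_search": search_web,
    "sandbox": run_in_sandbox,  # execution delegated as just another tool
}

def dispatch(tool_call: dict) -> str:
    """Route one model-emitted tool call to its registered handler."""
    return HANDLERS[tool_call["name"]](tool_call["argument"])

print(dispatch({"name": "sandbox", "argument": "6 * 7"}))  # -> 42
```

The design choice the sketch highlights is uniformity: if code execution is registered alongside retrieval tools, the orchestration layer needs no special casing to mix searching, fetching, and running code in one agent turn.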

That matters because many practical agent workloads need more than retrieval. Research-style agents often have to parse files, transform data, verify calculations, or run small programs after searching the web. Until now, Perplexity’s public docs have centered on retrieval-oriented tools and custom function calls. The March 11 X post signals that the company wants code execution to sit closer to the core of its orchestration model rather than remain an external add-on.

At the time of this announcement, the public docs still described the broader tool framework around web_search, fetch_url, and custom functions, so the X post works as an early indicator of where deterministic execution will fit into that model. If Perplexity follows through with clear limits, security controls, and pricing, Sandbox could become a practical bridge between research agents and workflows that need deterministic actions instead of text-only reasoning.

In that sense, this is more than another tool toggle. It is a sign that competition among agent platforms is shifting toward who can package model access, retrieval, and controlled execution into a single developer surface with fewer moving parts.



