The enterprise AI fight is shifting from model selection to stack design. In its April 24, 2026 Cloud Next recap, Google Cloud packaged Gemini Enterprise Agent Platform, Workspace Intelligence, TPU 8t and 8i, and Virgo Network as one coordinated operating layer for AI agents.
Enterprise AI gets more useful when teams can reuse and inspect workflows instead of rebuilding them in chat every time. Google Cloud says Gemini Enterprise now saves workflows as shared Skills, a day after announcing that Agent Designer can test and approve each step before execution.
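The approve-before-execute pattern is easy to see in miniature. The sketch below is a toy illustration, not Google's actual API: every step in a workflow passes through an approval callback before its action runs, so a reviewer (human or automated policy) can veto individual steps.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Step:
    name: str
    action: Callable[[], str]

@dataclass
class Workflow:
    steps: list[Step] = field(default_factory=list)

    def run(self, approve: Callable[[Step], bool]) -> list[str]:
        results = []
        for step in self.steps:
            # Every step is reviewed before it executes.
            if not approve(step):
                results.append(f"{step.name}: skipped (not approved)")
                continue
            results.append(f"{step.name}: {step.action()}")
        return results

wf = Workflow(steps=[
    Step("fetch", lambda: "ok"),
    Step("delete_records", lambda: "done"),
])
# Toy policy: approve everything except destructive-looking steps.
print(wf.run(approve=lambda s: not s.name.startswith("delete")))
```

The point of the shape is that the approval hook is injected, so the same saved workflow can run under different review policies.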
Why it matters: AI agents are moving from chat demos into delegated economic work. In Anthropic’s office-market experiment, 69 agents closed 186 deals across more than 500 listings and moved a little over $4,000 in goods.
Why it matters: persistent memory is one of the missing pieces between demo agents and useful long-running agents. Anthropic pushed the capability into public beta on April 23, framing it as a memory layer that learns from every session.
HN did not treat WUPHF as just another multi-agent toy. What grabbed attention was the notebook-to-wiki promotion flow: agents keep private notes, then graduate durable facts into a shared markdown-and-git memory.
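The promotion flow reduces to a simple rule, sketched here under an invented quorum policy (the thread describes the markdown-and-git store; the quorum threshold is an assumption for illustration): a fact stays in an agent's private notebook until enough agents have independently recorded it, at which point it graduates to the shared wiki.

```python
from collections import Counter

# Each agent's private notebook: free-form facts it has observed.
private_notes = {
    "agent_a": ["API rate limit is 60/min", "deploy happens on Fridays"],
    "agent_b": ["API rate limit is 60/min", "alice prefers dark mode"],
    "agent_c": ["API rate limit is 60/min", "deploy happens on Fridays"],
}

def promote(notes: dict[str, list[str]], quorum: int = 2) -> list[str]:
    """Promote a note to the shared wiki once `quorum` agents record it."""
    counts = Counter(fact for facts in notes.values() for fact in facts)
    return sorted(fact for fact, n in counts.items() if n >= quorum)

# Stands in for the shared markdown-and-git memory.
shared_wiki = promote(private_notes)
print(shared_wiki)
```

Single-agent observations ("alice prefers dark mode") stay private; corroborated facts become durable shared state.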
Google says its AI business has crossed from pilots to operations: 75% of Cloud customers now use AI products, 330 customers processed more than 1 trillion tokens each in the past year, and model traffic exceeds 16 billion tokens per minute. The company used Cloud Next ’26 to turn that scale into a product pitch for Gemini Enterprise Agent Platform, a full runtime and governance layer for enterprise agents.
Meta will add tens of millions of AWS Graviton cores, a sign that the AI infrastructure race is no longer just about GPUs. The company argues that agentic AI is inflating CPU-heavy work such as planning, orchestration, and data movement, making Graviton5 a strategic fit.
Anthropic’s new agent-market experiment matters because it turns model quality into money. In a 69-agent office marketplace, Claude agents closed 186 deals worth just over $4,000, and Opus-backed users got better prices without noticing.
LocalLLaMA upvoted this because a 27B open model suddenly looked competitive on agent-style work, not because everyone agreed on the benchmark. The thread stayed lively precisely because the result felt important and a little suspicious at the same time.
Why it matters: enterprise OCR failures break agents long before they show up on academic PDF benchmarks. LlamaIndex says ParseBench, published on Kaggle, evaluates 14 parsing methods against roughly 2,000 human-verified pages using more than 167,000 rules.
This is a distribution story, not just a usage milestone. OpenAI says Codex grew from more than 3 million weekly developers in early April to more than 4 million two weeks later, and it is pairing that demand with Codex Labs plus seven global systems integrators to turn pilots into production rollouts.
The bottleneck moved from GPUs to the API layer, and OpenAI changed the transport to keep up. By adding WebSocket mode and connection-scoped caching to the Responses API, the company says agentic workflows improved by up to 40% end-to-end and GPT-5.3-Codex-Spark reached 1,000 tokens per second with bursts up to 4,000.
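Connection-scoped caching is the interesting half of that change. The sketch below is an assumption about why it helps, not OpenAI's documented design: within one long-lived connection, a repeated prompt prefix is hashed once and referenced on later turns instead of being resent and reprocessed, which is exactly the access pattern of an agent loop.

```python
import hashlib

class Connection:
    """Toy model of a connection whose cache dies with the connection."""

    def __init__(self):
        self.cache: dict[str, str] = {}  # scoped to this connection only
        self.cache_hits = 0

    def send(self, prefix: str, turn: str) -> str:
        key = hashlib.sha256(prefix.encode()).hexdigest()
        if key in self.cache:
            self.cache_hits += 1         # prefix already processed: skip the work
        else:
            self.cache[key] = prefix     # first turn pays the full cost
        return f"response to: {turn}"

conn = Connection()
system_prompt = "You are a deployment agent. " * 50  # large shared prefix
for turn in ["plan", "apply", "verify"]:
    conn.send(system_prompt, turn)
print(conn.cache_hits)
```

With stateless HTTP each of the three turns would pay the full prefix cost; over a persistent WebSocket, only the first does, which is the kind of end-to-end saving the announcement is pointing at.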