Vercel Adds Team-Wide Zero Data Retention Controls to AI Gateway

Original: AI Gateway now supports team-wide Zero Data Retention (ZDR). Building safely with multiple AI models means wrestling with fragmented data policies, per-provider negotiations, and the hope that developers do not use non-compliant providers. AI Gateway changes this with team-wide ZDR. Gateway ensures your data requirements are automatically met by only routing to providers where we have negotiated ZDR agreements. Instead of managing policies provider by provider, you get one unified data policy across Claude, GPT, Gemini, and many more providers. Toggle it on in your dashboard, and all requests will route safely without touching any code:
• Team-wide ZDR
• Per-request controls
• Disallow prompt training
Move compliance to the gateway so your team can keep shipping: https://vercel.com/blog/zdr-on-ai-gateway

AI · Apr 10, 2026 · By Insights AI · 1 min read

In an April 8 X post, Vercel announced that AI Gateway now supports team-wide Zero Data Retention, or ZDR. The linked product post argues that multi-model applications create a policy-management mess because providers expose different retention terms, opt-out behavior, and compliance defaults. Vercel’s answer is to move that logic into the gateway so teams no longer need to negotiate or enforce policy one provider at a time.

According to Vercel, team-wide ZDR is available for Pro and Enterprise teams and applies to every request without code changes. The company says AI Gateway will route only to providers where it has negotiated ZDR agreements, and it specifically names OpenAI, Anthropic, and Google among the providers with ZDR-capable options. The same release adds request-level ZDR, explicit disallowPromptTraining controls, and response metadata that shows which candidate providers were filtered out during routing.
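To make the request-level controls concrete, here is a minimal sketch of how a single call might attach stricter options than the team-wide default. Only disallowPromptTraining is named in the announcement; the zeroDataRetention field, the buildGatewayOptions helper, and the gateway key are assumptions for illustration, not Vercel's documented API.

```typescript
// Hypothetical per-request option shape. disallowPromptTraining comes from
// the announcement; zeroDataRetention is an assumed name for request-level ZDR.
type GatewayRequestOptions = {
  zeroDataRetention?: boolean;
  disallowPromptTraining?: boolean;
};

// Build the provider-options payload a request could carry, so one call can
// opt into stricter data handling than the team default.
function buildGatewayOptions(opts: GatewayRequestOptions) {
  return { gateway: { ...opts } };
}

const options = buildGatewayOptions({
  zeroDataRetention: true,
  disallowPromptTraining: true,
});
console.log(options.gateway);
```

In a real integration these options would ride along with a model call; the point of the sketch is that policy becomes request data, not application code.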

That combination matters because AI platform teams are increasingly judged on compliance posture as much as latency or model quality. By turning retention and training controls into infrastructure policy, Vercel is trying to make model selection behave more like traffic routing than application-specific security code. It is also a sign that the multi-model layer is evolving from a simple failover shim into a governance layer that can explain why a prompt was allowed to touch one provider and blocked from another.
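The routing behavior described above can be sketched as a filter over candidate providers that also records who was excluded, mirroring the response metadata the release mentions. The provider names, the zdrAgreement flag, and the routeWithZdr function are placeholders, not Vercel's implementation.

```typescript
// Illustrative only: filter candidates down to ZDR-capable providers and
// keep an audit trail of which ones were excluded and why.
type Provider = { name: string; zdrAgreement: boolean };

function routeWithZdr(candidates: Provider[]) {
  const allowed = candidates.filter((p) => p.zdrAgreement);
  const filteredOut = candidates
    .filter((p) => !p.zdrAgreement)
    .map((p) => p.name);
  return { allowed, metadata: { filteredOut } };
}

const result = routeWithZdr([
  { name: "provider-a", zdrAgreement: true },
  { name: "provider-b", zdrAgreement: false },
]);
console.log(result.metadata.filteredOut);
```

Exposing the filteredOut list is what turns the gateway from a failover shim into a governance layer: a team can explain after the fact why a prompt never reached a given provider.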


Related Articles

AI · Mar 16, 2026 · 2 min read

Vercel used X on March 12, 2026 to show how Notion Workers runs agent-capable code on Vercel Sandbox. Vercel's write-up says Workers handle third-party syncs, automations, and AI agent tool calls, while Sandbox provides isolation, credential management, network controls, snapshots, and active-CPU billing.

AI · Mar 28, 2026 · 2 min read

Databricks posted on March 27, 2026 that its LogSentinel system uses LLMs to classify columns, apply hierarchical and residency-aware labels, and detect drift, with up to 92% precision and 95% recall for PII on 2,258 samples. Databricks documentation says Unity Catalog Data Classification uses an AI agent and LLM to classify and tag tables, while governed tags and ABAC policies translate those tags into consistent access and compliance controls.


© 2026 Insights. All rights reserved.