Show HN: OneCLI Puts an Agent-Safe Vault in Front of API Keys
Original: Show HN: OneCLI – Vault for AI Agents in Rust
OneCLI is a new Show HN project aimed at a specific operational problem: many AI agents are still being given raw API keys and broad credentials. The project proposes a different model. Teams store real secrets once in an encrypted vault, give agents placeholder keys, and let a proxy swap the real credentials into outbound requests only after host and path checks pass.
The author says the proxy is written in Rust, the dashboard is built with Next.js, secrets are encrypted at rest with AES-256-GCM, and the whole system can run as a single Docker deployment with an embedded PGlite/Postgres layer. In practice, the pitch is simple: let agents call tools and APIs as usual, but keep the actual secrets outside the model’s immediate execution context.
Hacker News readers generally agreed that the underlying security problem is real, but the discussion quickly moved beyond the demo. Several commenters pointed out that auth proxies, STS-style temporary credentials, and vault products already solve adjacent problems. Others highlighted harder edges that any serious deployment has to address: frameworks that ignore HTTP_PROXY, AWS request re-signing, and the need to run the trust boundary outside the agent sandbox so the agent cannot simply read the keys from the proxy process.
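The HTTP_PROXY concern is easy to see from the standard library's side. A minimal illustration (Python, not from the project; the proxy address is hypothetical): libraries that consult the environment route through a credential-swapping proxy automatically, while anything built on raw sockets connects directly and bypasses it.

```python
# Illustrative only: why HTTP_PROXY is a fragile interception point.
import os
import urllib.request

# Hypothetical address where a credential-swapping proxy would listen.
os.environ["HTTP_PROXY"] = "http://127.0.0.1:9999"

# urllib (and libraries like requests built on the same convention)
# discover the proxy from the environment:
print(urllib.request.getproxies().get("http"))

# But nothing forces an agent framework to honor this. A client that opens
# raw sockets (socket.create_connection) connects straight to the API host,
# so the proxy never sees the request and the real key is never injected --
# which is why commenters argued the trust boundary must sit outside the
# agent sandbox, at the network or credential layer, not in an env var.
```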
That debate is the interesting part. OneCLI is less a claim that AI needs brand-new security primitives and more a practical attempt to package known security controls around agent workflows. If agent tooling continues to move from experiments into production, products like this will be judged on policy enforcement, auditability, identity binding, and compatibility rather than on the vault abstraction alone.
It is an early project, but it captures a real shift: agent safety is no longer only about prompt injection or output filtering; it is also about credential boundaries and operational blast radius. Original source: GitHub. Community discussion: Hacker News.
Related Articles
A March 2026 Hacker News thread pushed Stanford SCS’s `jai` to 604 points and 313 comments. The tool aims to contain AI agents on Linux by keeping the current working directory writable while placing the rest of the home directory behind an overlay or hiding it entirely.
UC Berkeley researchers say eight major AI agent benchmarks can be driven to near-perfect scores without actually solving the underlying tasks. Their warning is straightforward: leaderboard numbers are only as trustworthy as the evaluation design behind them.
Axios reports the NSA is using Anthropic's Mythos Preview even as Pentagon officials call the company a supply-chain risk. The clash puts AI safety limits, federal cyber demand, and procurement politics in the same room.