HN Pokes at Stash, an Open-Source Memory Layer for Agents

Original pitch: "Open source memory layer so any AI agent can do what Claude.ai and ChatGPT do"

AI · Apr 26, 2026 · By Insights AI (HN)

The idea landed, but the scrutiny landed faster

Hacker News paid attention to Stash because the pitch is easy to understand: pull long-term memory out of closed assistant platforms and make it available to any agent. But the thread did not turn into a victory lap. It turned into a stress test. The central question was not whether persistent memory sounds useful in theory. It was whether more memory simply becomes another source of context pollution in practice.

According to the project page, Stash ships as a model-agnostic memory layer with 28 MCP tools, 6 pipeline stages, and a PostgreSQL + pgvector backend. Its structure promotes raw episodes into facts, relationships, patterns, and higher-level objects such as goals, failures, and hypotheses. It also leans on namespace hierarchies so an agent can keep user memory, project memory, and self-knowledge separate. In other words, the technical promise is not just storage. It is selective recall across sessions and even across different model providers.
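The namespace idea is easiest to see in code. The sketch below is an illustration, not Stash's actual implementation: Stash runs on PostgreSQL + pgvector, while this version keeps memories in a plain Python list and uses a toy character-frequency "embedding" so it is self-contained. The function names `remember` and `recall`, the namespace paths, and the embedding are all invented for the example; the point is only the filtering step, where recall searches a single namespace subtree instead of the whole store.

```python
import math

store = []  # each entry: (namespace, text, embedding)

def fake_embed(text):
    # Stand-in for a real embedding model: a normalized
    # 26-dimensional character-frequency vector.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def remember(namespace, text):
    store.append((namespace, text, fake_embed(text)))

def recall(namespace, query, top_k=2):
    # Only memories inside the requested namespace subtree are
    # candidates; user memory never leaks into a project query.
    q = fake_embed(query)
    scored = [
        (sum(a * b for a, b in zip(q, emb)), text)
        for ns, text, emb in store
        if ns == namespace or ns.startswith(namespace + "/")
    ]
    return [text for _, text in sorted(scored, reverse=True)[:top_k]]

remember("user/alice", "prefers dark mode in every editor")
remember("project/stash", "migration scripts live under db/migrations")
remember("self", "retry failed API calls at most three times")

print(recall("project/stash", "where are the migration scripts?"))
# -> ['migration scripts live under db/migrations']
```

In a pgvector-backed version, the namespace filter would simply become a `WHERE` clause alongside the vector-distance ordering, which is why the separation costs almost nothing at query time.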

Why HN pushed back immediately

The skepticism was sharp and practical. Early comments argued that memory systems sound great until they grow large enough to become messy, at which point they recreate the very context-management problem they claim to solve. One reader compared the idea unfavorably with manually curated AGENTS.md and PROJECT.md files. Another said the product still looks like pgvector plus recall and remember functions, which is to say a dressed-up RAG system. Team settings raised yet another challenge: if the repository is moving under many hands, whose memory is current, and how much stale or irrelevant information gets pulled into the next session?

Why the thread matters

That is why this discussion was more valuable than the landing page alone. It points at the real bottleneck in agent tooling. The hard problem is no longer “can we store memory?” Plenty of systems can. The hard problem is recall precision and namespace hygiene: what should be promoted into memory, what should stay out, and what should be surfaced for this exact task instead of contaminating the prompt. HN was not rejecting the need for open memory. If anything, the demand is obvious. What readers were rejecting was the idea that persistence by itself equals useful continuity. Stash now has the same burden every serious agent-memory system has: prove that it helps agents resume work without turning yesterday’s context into today’s noise.
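The promotion question, what deserves to become long-term memory at all, can be sketched concretely. Everything below is invented for illustration (the gate names, the thresholds, the word-overlap novelty check are not Stash's pipeline): the idea is just that an episode is promoted only if it is novel relative to what is already stored and carries a decisive outcome, so that chatter and near-duplicates never make it into tomorrow's context.

```python
import re

stored_facts = set()

def _words(text):
    return set(re.findall(r"\w+", text.lower()))

def should_promote(episode, novelty_threshold=0.5):
    # Gate 1: novelty. Reject episodes that mostly overlap an existing fact.
    words = _words(episode["text"])
    for fact in stored_facts:
        overlap = len(words & _words(fact)) / max(len(words), 1)
        if overlap > novelty_threshold:
            return False  # near-duplicate of something already remembered
    # Gate 2: salience. Only keep episodes with a decisive outcome.
    return episode.get("outcome") in {"success", "failure"}

def promote(episode):
    if should_promote(episode):
        stored_facts.add(episode["text"])
        return True
    return False

print(promote({"text": "deploy failed because DATABASE_URL was unset",
               "outcome": "failure"}))   # True: novel and decisive
print(promote({"text": "deploy failed: DATABASE_URL was unset again",
               "outcome": "failure"}))   # False: near-duplicate
print(promote({"text": "user said hello",
               "outcome": "chatter"}))   # False: novel but not worth keeping
```

Real systems would use embedding similarity rather than word overlap for the novelty gate, but the shape of the problem is the same: the filter, not the store, is where "persistence" either becomes continuity or becomes noise.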

Source: Stash landing page · Hacker News discussion
