Reddit debates an AI memory system that forgets on purpose instead of storing everything
Original: Built an AI memory system based on cognitive science instead of vector databases
A developer post in r/artificial argues that a lot of AI agent memory systems eventually hit the same problem: vector-database recall gets noisier as the store grows. The author says most systems are effectively append-only plus semantic search, which works at first but degrades over time as stale or weakly relevant memories keep accumulating. Instead of that pattern, the post proposes a memory system based on cognitive-science ideas such as ACT-R activation decay, Hebbian learning, and Ebbinghaus forgetting curves.
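The post names these mechanisms without sharing code. As a rough illustration of what they mean in practice, here is a minimal sketch of an ACT-R-style base-level activation score (recency and frequency of use, with power-law decay) and a Hebbian-style link update; the function names, the decay constant `d=0.5`, and the learning rate are illustrative assumptions, not the author's implementation:

```python
import math

def base_level_activation(use_times, now, d=0.5):
    """ACT-R-style base-level activation: each past use of a memory
    contributes (now - t)^-d, so recent and frequent uses dominate.
    d ~ 0.5 is the classic ACT-R default decay exponent."""
    return math.log(sum((now - t) ** (-d) for t in use_times if now > t))

def hebbian_strengthen(weights, a, b, rate=0.1):
    """Hebbian-style association update: memories recalled together
    get a stronger link ("fire together, wire together")."""
    key = frozenset((a, b))
    weights[key] = weights.get(key, 0.0) + rate
    return weights[key]

# A memory used recently scores higher than one last used long ago:
recent = base_level_activation([90.0, 95.0], now=100.0)
old = base_level_activation([10.0, 20.0], now=100.0)
assert recent > old
```

Retrieval would then rank memories by activation instead of pure embedding similarity, which is what lets weak memories fall out of reach over time.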
The numbers in the thread are what made readers stop. The author claims that after 30 days in production the system had handled 3,846 memories and served more than 230K recalls at $0 inference cost, because it runs in pure Python with no embeddings. The most important claim is qualitative rather than financial: active forgetting improved retrieval quality. In the author’s telling, letting old or weak memories decay produced more relevant recalls than a flat-store baseline. The next planned features are multi-agent shared memory with namespace isolation and ACLs, plus an “emotional feedback bus.”
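"Active forgetting" can be pictured as a periodic pruning pass driven by an Ebbinghaus-style retention curve, R = exp(-t/S): a minimal sketch, assuming a per-memory strength `S` and a fixed retention threshold (both hypothetical; the post does not describe its actual pruning rule):

```python
import math

def retention(elapsed, strength):
    """Ebbinghaus-style retention R = exp(-t / S): stronger memories
    (larger S) decay more slowly as time t elapses."""
    return math.exp(-elapsed / strength)

def prune(memories, now, threshold=0.05):
    """Active forgetting: drop memories whose retention has fallen
    below the threshold instead of keeping everything forever."""
    return [m for m in memories
            if retention(now - m["last_used"], m["strength"]) >= threshold]

memories = [
    {"id": "fresh", "last_used": 90.0, "strength": 100.0},
    {"id": "stale", "last_used": 50.0, "strength": 5.0},
]
# Only the fresh, strong memory survives the pruning pass:
assert [m["id"] for m in prune(memories, now=100.0)] == ["fresh"]
```

The claimed benefit is that every recall then searches a smaller, higher-signal store, which is where the "forgetting improves retrieval" argument comes from.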
The comments moved quickly from enthusiasm to design questions. Some readers connected the idea to Graph-RAG plus decay-based memory. Others asked the hard question any cognitive-memory system eventually faces: where is the boundary between episodic memories and durable semantic facts? Another commenter wanted to know whether emotional signals change activation weights or flatten decay curves. There was also some skepticism, including a dry question about whether every comment in the thread had been written by an LLM.
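The commenter's question about emotional signals has two natural readings, and they behave differently. A hedged sketch of both options against an ACT-R-style score (the mechanism names, `emotion` range, and constants are assumptions for illustration, not anything the post specifies):

```python
import math

def activation(use_times, now, emotion=0.0, mode="boost", d=0.5, k=0.4):
    """Two candidate mechanisms for an emotional signal in [0, 1]:
      'boost'   -- salience adds directly to the activation score,
                   so the memory starts "louder" but decays as usual;
      'flatten' -- salience lowers the decay exponent d, so the
                   memory fades more slowly instead."""
    if mode == "flatten":
        d *= (1.0 - k * emotion)
    score = math.log(sum((now - t) ** (-d) for t in use_times if now > t))
    if mode == "boost":
        score += k * emotion
    return score

# Either mechanism raises the score of an emotionally tagged memory,
# but only 'flatten' changes how fast that advantage erodes over time:
base = activation([50.0], now=100.0)
assert activation([50.0], now=100.0, emotion=1.0, mode="boost") > base
assert activation([50.0], now=100.0, emotion=1.0, mode="flatten") > base
```

The distinction matters for the episodic/semantic boundary question too: a flattened decay curve is one way an episodic memory could gradually behave like a durable semantic fact.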
The useful takeaway is that pruning may matter more than hoarding in long-running agent systems. If retrieval quality collapses under accumulated noise, “remember everything” is not obviously the right default. At the same time, the post is still largely a first-person production claim rather than a published benchmark against standard baselines. That makes it more interesting as a design direction than as settled evidence, but the direction itself is strong: forgetting can be a feature, not a defect. Source: r/artificial discussion.
Related Articles
Samsung says it will transition global manufacturing into AI-Driven Factories by 2030. The roadmap combines digital twin simulations, AI agents, and in-factory companion robots to optimize production site by site.
Axe argues that agent software should look more like pipes, TOML configs, and short-lived commands than like one giant chatbot runtime, an idea that resonated with Hacker News readers building automation around local and cloud models.
OneCLI proposes a proxy-and-vault pattern for AI agents so tools stay reachable while real credentials remain outside the model runtime.