AI Reddit 6d ago 1 min read
A LocalLLaMA post details recurring Whisper hallucinations during silence and proposes a layered mitigation stack including Silero VAD gating, prompt-history reset, and exact-string blocking.
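Of the three layers, exact-string blocking is the simplest to illustrate. The sketch below is an assumption about how such a filter might look, not the post's actual code; the blocklist phrases are common examples of Whisper silence hallucinations, and the function name is hypothetical.

```python
# Illustrative sketch of the exact-string blocking layer: drop transcript
# segments whose normalized text matches a known silence-hallucination phrase.
# The phrase list below is an example, not an exhaustive or official list.
BLOCKLIST = {
    "thank you.",
    "thanks for watching!",
    "subtitles by the amara.org community",
}

def filter_segments(segments):
    """Keep only segments that are non-empty and not exact blocklist hits."""
    kept = []
    for seg in segments:
        text = seg.strip().lower()
        if text and text not in BLOCKLIST:
            kept.append(seg)
    return kept
```

In practice this layer sits last, after VAD gating and prompt-history reset have already removed most silent audio, catching the few stock phrases that still slip through.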
An r/singularity thread drew attention to an arXiv paper studying hallucination-associated neurons in LLMs. The authors report that a very small subset of neurons can predict hallucination behavior and may be causally involved.
Perplexity launched Model Council, a system that runs multiple frontier AI models, including Claude, GPT-5.2, and Gemini, in parallel to generate unified, cross-validated answers. Perplexity says the approach improves reasoning quality and reduces hallucination errors.
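Perplexity has not published Model Council's internals, but the fan-out-then-synthesize pattern it describes can be sketched as follows; the model-calling function here is a hypothetical stand-in, not a real API.

```python
from concurrent.futures import ThreadPoolExecutor

def ask_model(name, question):
    # Hypothetical stand-in for a per-model API call.
    return f"{name}: answer to {question!r}"

def council(question, models):
    # Fan the question out to every model concurrently, then collect
    # the responses in submission order for a downstream synthesis step
    # (e.g. a judge model that cross-checks and merges the answers).
    with ThreadPoolExecutor(max_workers=len(models)) as pool:
        futures = [pool.submit(ask_model, m, question) for m in models]
        return [f.result() for f in futures]
```

The cross-validation step, where disagreements between models are reconciled into one answer, is the part Perplexity has not detailed publicly.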