r/MachineLearning treated this less like a finished breakthrough and more like a serious challenge to current assumptions around large-scale spike-domain training. The April 13, 2026 post reported a 1.088B-parameter pure SNN language model reaching a loss of 4.4 at 27K steps with 93% sparsity, while commenters pushed for directly comparable metrics and longer training before drawing big conclusions.
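For readers unfamiliar with the metric, activation sparsity in a spiking model is usually the fraction of neuron-timestep slots that carry no spike; the post does not give its exact definition, so the sketch below is only an assumption, with a hypothetical `spike_sparsity` helper over binary spike tensors.

```python
import numpy as np

def spike_sparsity(spikes):
    """Fraction of zero entries in a batch of binary spike tensors.

    Under this (assumed) definition, a '93% sparsity' claim would mean
    roughly 7% of neuron-timestep slots carry a spike.
    """
    spikes = np.asarray(spikes, dtype=np.float64)
    return 1.0 - spikes.mean()

# Illustrative shapes: (batch, timesteps, neurons) binary spikes
rng = np.random.default_rng(0)
spikes = (rng.random((4, 16, 1024)) < 0.07).astype(np.float32)
print(f"sparsity ~ {spike_sparsity(spikes):.2%}")  # roughly 93%
```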
#spiking-neural-networks
LLM Reddit Apr 14, 2026 2 min read
A research-oriented post on r/MachineLearning claimed that a 1.088B-parameter pure spiking neural network language model was trained from random initialization until budget limits ended the run.
LLM Reddit Feb 27, 2026 2 min read
An r/LocalLLaMA post reports Nord, a from-scratch 144M-parameter spiking neural network language model experiment. The author claims 97-98% inference sparsity, STDP-based online updates, and better prompt-level topic retention than GPT-2 Small on a limited set of examples, while clearly noting current loss and benchmark limitations.
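STDP (spike-timing-dependent plasticity) strengthens a synapse when the presynaptic neuron fires shortly before the postsynaptic one and weakens it in the reverse order. The post does not describe Nord's actual rule, so what follows is a minimal trace-based pair-STDP sketch; the function name, time constants, and tensor shapes are all illustrative assumptions, not details from the post.

```python
import numpy as np

def stdp_update(w, pre_spikes, post_spikes, dt=1.0,
                a_plus=0.01, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0):
    """Online pair-based STDP over binary spike trains (illustrative).

    w:           (n_pre, n_post) weight matrix, updated in place
    pre_spikes:  (T, n_pre)  0/1 array of presynaptic spikes
    post_spikes: (T, n_post) 0/1 array of postsynaptic spikes
    """
    x_pre = np.zeros(pre_spikes.shape[1])    # presynaptic trace
    x_post = np.zeros(post_spikes.shape[1])  # postsynaptic trace
    decay_pre = np.exp(-dt / tau_plus)
    decay_post = np.exp(-dt / tau_minus)
    for t in range(pre_spikes.shape[0]):
        # Traces decay exponentially toward zero each timestep.
        x_pre *= decay_pre
        x_post *= decay_post
        # LTP: a post spike now, paired with recent pre activity.
        w += a_plus * np.outer(x_pre, post_spikes[t])
        # LTD: a pre spike now, paired with recent post activity.
        w -= a_minus * np.outer(pre_spikes[t], x_post)
        # Current spikes feed the traces for future timesteps.
        x_pre += pre_spikes[t]
        x_post += post_spikes[t]
    return w

# Tiny usage example with random Poisson-like spike trains
rng = np.random.default_rng(0)
T, n_pre, n_post = 100, 8, 4
pre = (rng.random((T, n_pre)) < 0.05).astype(float)
post = (rng.random((T, n_post)) < 0.05).astype(float)
w = rng.normal(0.0, 0.1, (n_pre, n_post))
w = stdp_update(w, pre, post)
```

Because each update only touches traces and the current spike vectors, a rule of this shape can run online during inference, which is presumably what makes STDP attractive for an experiment like the one described.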