LLM Reddit 2d ago
LocalLLaMA reacted with genuine wonder because the demo is simple to grasp: a 1.7B-parameter Bonsai model, about 290MB, running in the browser via WebGPU. The same thread also supplied the useful reality check, asking about tokens per second, hallucination rates, llama.cpp support, and whether 1-bit models are ready for anything beyond narrow tasks.