The r/LocalLLaMA community is buzzing over Qwen3.5-35B-A3B, which users report outperforms GPT-OSS-120B at roughly one-third the size and recommend as a local daily driver for development tasks.
#moe
r/LocalLLaMA Benchmarks: <code>Krasis</code> reports 3,324 tok/s prefill for an 80B MoE on a single RTX 5080
An r/LocalLLaMA post (score 180, 53 comments) shared benchmark data for <code>Krasis</code>, a hybrid CPU/GPU runtime aimed at large MoE models. The key claim is that GPU-heavy prefill plus CPU decode can cut long-context wait times even when the full model does not fit in consumer VRAM.
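To make the prefill/decode split concrete, here is a minimal, purely illustrative sketch. This is not Krasis code and not its API; the toy single-head attention layer, weights, and shapes are invented for illustration. The prompt is prefilled in one batched pass on the GPU, and the resulting KV cache is then handed to the CPU for token-by-token decode.

```python
# Illustrative sketch only (NOT Krasis): prefill on GPU, decode on CPU.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
d_model, vocab = 64, 100
gpu = "cuda" if torch.cuda.is_available() else "cpu"   # fall back if no GPU

# Toy weights shared by both phases (a real MoE runtime shards/offloads these).
embed = torch.randn(vocab, d_model)
w_q, w_k, w_v = (torch.randn(d_model, d_model) / d_model**0.5 for _ in range(3))
w_out = torch.randn(d_model, vocab) / d_model**0.5

def attend(q, k, v):
    scores = q @ k.transpose(-2, -1) / d_model**0.5
    return F.softmax(scores, dim=-1) @ v

# Phase 1: prefill the whole prompt in one parallel pass on the GPU.
prompt = torch.randint(0, vocab, (512,))                # stand-in for a long prompt
x = embed[prompt].to(gpu)
k_cache = x @ w_k.to(gpu)                               # KV cache built on the GPU
v_cache = x @ w_v.to(gpu)

# Phase 2: move the cache to CPU and decode token by token there.
k_cache, v_cache = k_cache.cpu(), v_cache.cpu()
h = embed[prompt[-1]].unsqueeze(0)                      # query from last prompt token
for _ in range(8):
    ctx = attend(h @ w_q, k_cache, v_cache)
    nxt = (ctx @ w_out).argmax(dim=-1)                  # greedy next token (toy)
    print(int(nxt), end=" ")
    h = embed[nxt]                                      # embedding of the new token
    k_cache = torch.cat([k_cache, h @ w_k])             # extend the cache on CPU
    v_cache = torch.cat([v_cache, h @ w_v])
```

The intuition the benchmark leans on: prefill is parallel and compute-bound, so it benefits most from the GPU, while decode is sequential and memory-bandwidth-bound, so it tolerates the CPU better.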
A high-engagement r/LocalLLaMA post surfaced the Qwen3.5-35B-A3B model card on Hugging Face. The card emphasizes MoE efficiency, long-context handling, and deployment paths across common open-source inference stacks.
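As one example of such a deployment path, a minimal Transformers-based load might look like the sketch below. The repo id and chat-template usage are assumptions based on the usual Qwen release conventions, not taken from the linked card.

```python
# Hypothetical usage sketch; the repo id is assumed from the post's model name
# and has not been verified against the actual Hugging Face card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3.5-35B-A3B"  # assumption: usual Qwen naming convention
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",     # let the checkpoint decide bf16/fp16
    device_map="auto",      # spread MoE weights across available GPUs/CPU
)

messages = [{"role": "user", "content": "Summarize what an MoE model is."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```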
A high-signal Hacker News post highlighted StepFun's Step 3.5 Flash launch, describing a 196B-parameter MoE foundation model with about 11B active parameters, 256K context, and vendor-reported coding/agent benchmarks.
An r/LocalLLaMA post on Qwen3.5 gained 123 upvotes and pointed directly to the public weights and model documentation. The linked card confirms key specs, including 397B total parameters, 17B activated parameters, and a native context length of 262,144 tokens.
NVIDIA unveiled its next-generation Vera Rubin AI platform at CES 2026, claiming a 4x reduction in the GPUs needed for MoE model training and a 10x cut in inference token costs, with availability in H2 2026.
Meta has unveiled Llama 4 Scout and Maverick, which it bills as its first open-weight natively multimodal models. With MoE architectures and an industry-leading 10-million-token context (on Scout), Meta reports they outperform GPT-4o and Gemini 2.0 Flash.
Z.ai unveiled GLM-5, a 744B-parameter (40B active) model pre-trained on 28.5T tokens. Designed for complex systems engineering and long-horizon agentic tasks, it is reported to lead open-source models on multiple benchmarks.
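A common thread across these items is MoE sparsity: only a small slice of the parameters is active per token. Using just the total/active figures quoted above (all vendor-reported), a quick back-of-the-envelope comparison:

```python
# Active-parameter fraction for the MoE models cited in this section,
# computed from the total/active figures quoted above (vendor-reported).
models = {
    "Qwen3.5-35B-A3B": (35e9, 3e9),
    "Step 3.5 Flash":  (196e9, 11e9),
    "Qwen3.5 (397B)":  (397e9, 17e9),
    "GLM-5":           (744e9, 40e9),
}
for name, (total, active) in models.items():
    print(f"{name:16s} {active/total:5.1%} of parameters active per token")
```

All four land in roughly the 4–9% active range.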