NVIDIA and SGLang Claim Major DeepSeek R1 Inference Speedups
Original: NVIDIA and SGLang report 25x DeepSeek R1 inference gain on GB300 NVL72 versus H200
Performance claims in the post
In an X post on March 3, 2026, NVIDIA AI Developer said its latest collaboration with SGLang delivered major DeepSeek R1 inference gains: up to 25x higher throughput on GB300 NVL72 versus H200, plus an 8x performance increase on GB200 NVL72 achieved in under four months. The post also states that the optimizations lower cost per token while improving large-scale MoE serving performance.
Tech levers cited by NVIDIA and SGLang
The announcement names three technical factors: NVFP4 precision, NVIDIA Dynamo-powered disaggregation, and improved computation-communication overlap. A quoted LMSYS post presents the same directional result and frames it as InferenceXv2 progress on Blackwell-class systems. The broader implication is that system-level serving design, not only model architecture, is now a primary lever for deployment economics in production MoE workloads.
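The post itself gives no implementation detail, but the block-scaled 4-bit idea behind NVFP4 is easy to illustrate. Below is a minimal NumPy sketch, assuming the E2M1 value set and 16-element scaling blocks associated with the format; the round-to-nearest policy and float-valued scales are simplifications for readability, not NVIDIA's production scheme, which packs two codes per byte and stores compact scale factors.

```python
import numpy as np

# FP4 (E2M1) magnitude levels; sign is handled separately. NVFP4 pairs
# these 4-bit codes with a scale factor per small block of elements.
FP4_LEVELS = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])
BLOCK = 16  # assumed per-scale block size for this sketch

def quantize_fp4_blocks(x: np.ndarray):
    """Quantize a 1-D array to block-scaled FP4: per-block scales plus codes."""
    assert x.ndim == 1 and x.size % BLOCK == 0
    blocks = x.reshape(-1, BLOCK)
    # Map the largest magnitude in each block onto the top level (6.0).
    scales = np.abs(blocks).max(axis=1, keepdims=True) / FP4_LEVELS[-1]
    scales = np.where(scales == 0.0, 1.0, scales)  # avoid divide-by-zero
    scaled = blocks / scales
    # Round-to-nearest against the representable FP4 magnitudes.
    nearest = np.abs(np.abs(scaled)[..., None] - FP4_LEVELS).argmin(axis=-1)
    codes = np.sign(scaled) * FP4_LEVELS[nearest]
    return scales, codes

def dequantize_fp4_blocks(scales: np.ndarray, codes: np.ndarray) -> np.ndarray:
    """Reconstruct the full-precision approximation from scales and codes."""
    return (codes * scales).reshape(-1)

x = np.random.randn(64).astype(np.float32)
scales, codes = quantize_fp4_blocks(x)
print("max abs error:", np.abs(x - dequantize_fp4_blocks(scales, codes)).max())
```

The per-block scale is what lets a 4-bit code track both large and small magnitudes across a tensor, which is why block-scaled formats hold up better than a single per-tensor scale on MoE expert weights and activations.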
How to interpret the numbers
The reported multipliers are significant and relevant for operators planning hardware refresh cycles, but they are still vendor- and workload-specific claims. Throughput deltas can vary heavily by token-rate targets, sequence profiles, scheduling strategy, and kernel maturity. Even with that caveat, the disclosure is notable because it combines architecture-level upgrades with concrete serving-engine methods and ties them directly to real deployment cost signals.
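One practical way to read such claims is to reduce everything to cost per million tokens, then substitute your own throughput and pricing. The sketch below shows the arithmetic; every figure in it is an invented placeholder, with the relative throughput set to mirror the 25x claim rather than any measurement.

```python
# Cost-per-token arithmetic for comparing serving setups. Every number
# below is an invented placeholder, NOT a figure from the NVIDIA post.

def cost_per_million_tokens(tokens_per_sec: float, dollars_per_hour: float) -> float:
    """Dollars to generate one million tokens at a sustained throughput."""
    return dollars_per_hour / (tokens_per_sec * 3600.0) * 1_000_000

# Hypothetical rack-level rates; the 25x throughput gap mirrors the claim.
h200_cost = cost_per_million_tokens(tokens_per_sec=1_000.0, dollars_per_hour=80.0)
gb300_cost = cost_per_million_tokens(tokens_per_sec=25_000.0, dollars_per_hour=400.0)

print(f"H200 : ${h200_cost:.2f} per 1M tokens")
print(f"GB300: ${gb300_cost:.2f} per 1M tokens")
print(f"per-token cost ratio: {h200_cost / gb300_cost:.1f}x")
```

Under these made-up rates, 25x the throughput at 5x the hourly price nets out to roughly 5x lower cost per token, which is why the throughput multiplier alone does not settle the economics.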
Sources: NVIDIA AI Developer X post, LMSYS quoted X post, LMSYS blog index
Related Articles
NVIDIA AI Developer introduced Nemotron 3 Super on March 11, 2026 as an open 120B-parameter hybrid MoE model with 12B active parameters and a native 1M-token context window. NVIDIA says the model targets agentic workloads with up to 5x higher throughput than the previous Nemotron Super model.
In a February 12, 2026 post, NVIDIA said major inference providers are reducing token costs with open-source frontier models on Blackwell. The article includes partner-reported gains across healthcare, gaming, and enterprise support workloads.
A LocalLLaMA thread spotlights FlashAttention-4, which reports up to 1,605 TFLOPS for BF16 on B200 and introduces pipeline and memory-layout changes tuned for Blackwell constraints.