NVIDIA outlined a Rubin-based DGX SuperPOD architecture that combines compute, networking, and operations software as one deployment stack. The company claims up to 10x lower inference token cost versus the prior generation and targets availability in the second half of 2026.
AI Feb 20, 2026 2 min read
AI Feb 13, 2026 1 min read
NVIDIA unveiled its next-gen AI platform Rubin, delivering a 10x reduction in inference token cost and requiring 4x fewer GPUs for MoE model training versus Blackwell. Launch is planned for H2 2026.
AI Feb 12, 2026 1 min read
NVIDIA unveiled its next-generation AI platform Vera Rubin at CES 2026, cutting the number of GPUs needed for MoE model training by 4x and slashing inference token costs by 10x, with availability in H2 2026.
AI Feb 12, 2026 2 min read
NVIDIA announced the Rubin platform at CES 2026 in January. The platform comprises six new chips; its Vera Rubin superchip delivers 5x the inference performance of the GB200. Major AI companies including OpenAI, Meta, and Microsoft plan to adopt it.