On March 17, 2026, NVIDIADC described Groq 3 LPX on X as a new rack-scale, low-latency inference accelerator for the Vera Rubin platform. NVIDIA's March 16 press release and technical blog say LPX pairs 256 LPUs, 128 GB of on-chip SRAM, and 640 TB/s of scale-up bandwidth with Vera Rubin NVL72 in a heterogeneous inference path for agentic AI workloads.
#vera-rubin
AI Apr 2, 2026 2 min read
AI Mar 26, 2026 2 min read
Thinking Machines Lab said it has signed a multiyear strategic partnership with NVIDIA to deploy at least one gigawatt of next-generation Vera Rubin systems. The companies also plan to co-design training and serving systems and to widen access to frontier AI and open models for enterprises, research institutions, and the scientific community.
AI Mar 13, 2026 2 min read
NVIDIA and Thinking Machines Lab said on March 10, 2026 that they will deploy at least one gigawatt of next-generation NVIDIA Vera Rubin systems under a multiyear partnership. The agreement also covers co-design of training and serving systems, along with an NVIDIA investment in Thinking Machines Lab.