NVIDIA backs Thinking Machines Lab with a gigawatt-scale Vera Rubin partnership and investment
Original: NVIDIA and Thinking Machines Lab Announce Long-Term Gigawatt-Scale Strategic Partnership
On March 10, 2026, NVIDIA and Thinking Machines Lab announced a multiyear strategic partnership centered on deploying at least one gigawatt of next-generation NVIDIA Vera Rubin systems. According to NVIDIA, the infrastructure will support Thinking Machines Lab's frontier model training and its platforms for customizable AI at scale, with deployment targeted for early 2027.
The agreement goes beyond a straight hardware purchase. The two companies said they will also work together on training and serving systems designed for NVIDIA architectures, while broadening access to frontier AI and open models for enterprises, research institutions and the scientific community. NVIDIA added that it has made a significant investment in Thinking Machines Lab to support the company's long-term growth.
What the partnership includes
- At least one gigawatt of NVIDIA Vera Rubin systems
- Deployment targeted for early 2027
- Joint work on training and serving systems for NVIDIA architectures
- Broader access to frontier AI and open models for enterprises and researchers
- A significant NVIDIA investment in Thinking Machines Lab
The announcement matters because compute commitments are becoming a defining signal in frontier AI. A new lab cannot operate at the top tier with talent alone; it also needs a credible multiyear infrastructure roadmap and supply certainty. This deal gives Thinking Machines Lab a public compute story at a time when access to large-scale training capacity is itself a competitive advantage.
For NVIDIA, the partnership reinforces its effort to lock new frontier labs into the next platform transition. Vera Rubin is not just the hardware layer here: NVIDIA is explicitly tying the relationship to the software and serving systems that will sit on top of that hardware, extending its influence from chips into the lab's operating model.
The next question is execution. Observers will want to see how quickly the early-2027 deployment materializes and what model or product strategy Thinking Machines Lab builds on top of this footprint. Even before those details arrive, the one-gigawatt threshold is a clear sign that the AI infrastructure race has moved into another scale bracket.
Related Articles
NVIDIA outlined a Rubin-based DGX SuperPOD architecture that combines compute, networking, and operations software as one deployment stack. The company claims up to 10x lower inference token cost versus the prior generation and targets availability in the second half of 2026.
In its February 12, 2026 post, NVIDIA describes DGX Spark as a desktop AI system now used across universities for on-prem model development and rapid iteration. The examples span South Pole neutrino analysis, medical report evaluation, and campus robotics workloads.
OpenAI announced $110B in new investment on February 27, 2026, alongside Amazon and NVIDIA partnerships aimed at compute scale. The company tied the move to 900M weekly ChatGPT users, 9M paying business users, and rising Codex demand.