NVIDIA backs Thinking Machines Lab with a gigawatt-scale Vera Rubin partnership and investment

Original: NVIDIA and Thinking Machines Lab Announce Long-Term Gigawatt-Scale Strategic Partnership

AI · Mar 13, 2026 · By Insights AI

On March 10, 2026, NVIDIA and Thinking Machines Lab announced a multiyear strategic partnership centered on deploying at least one gigawatt of next-generation NVIDIA Vera Rubin systems. According to NVIDIA, the infrastructure will support Thinking Machines' frontier model training and platforms for customizable AI at scale, with deployment targeted for early next year.

The agreement goes beyond a straight hardware purchase. The two companies said they will also work together on training and serving systems designed for NVIDIA architectures, while broadening access to frontier AI and open models for enterprises, research institutions and the scientific community. NVIDIA added that it has made a significant investment in Thinking Machines Lab to support the company's long-term growth.

What the partnership includes

  • At least one gigawatt of NVIDIA Vera Rubin systems
  • Deployment targeted for early next year
  • Joint work on training and serving systems for NVIDIA architectures
  • Broader access to frontier AI and open models for enterprises and researchers
  • A significant NVIDIA investment in Thinking Machines Lab

The announcement matters because compute commitments are becoming a defining signal in frontier AI. A new lab cannot operate at the top tier with talent alone; it also needs a credible multiyear infrastructure roadmap and supply certainty. This deal gives Thinking Machines Lab a public compute story at a time when access to large-scale training capacity is itself a competitive advantage.

For NVIDIA, the partnership reinforces its effort to lock new frontier labs into the next platform transition. Vera Rubin is not just the hardware layer here. NVIDIA is explicitly tying the relationship to the software and serving systems that will sit on top of that hardware, which expands its influence from chips into the operating model of the lab.

The next question is execution. Observers will want to see how quickly the early-next-year deployment materializes and what model or product strategy Thinking Machines Lab builds on top of this footprint. Even before those details arrive, the one-gigawatt threshold is a clear sign that the AI infrastructure race has moved into another scale bracket.

