Thinking Machines Lab and NVIDIA sign gigawatt-scale partnership for frontier AI systems
Original: Thinking Machines Lab and NVIDIA Announce Long-Term Gigawatt-Scale Strategic Partnership
Thinking Machines Lab announced on March 10, 2026, that it has entered a multi-year strategic partnership with NVIDIA to build frontier AI infrastructure at unusually large scale. The headline commitment is to deploy at least one gigawatt of next-generation NVIDIA Vera Rubin systems for model training and production platforms, with the first deployment targeted for early next year.
That scale matters because frontier model development is increasingly constrained by power, networking, cooling, and systems integration rather than by algorithms alone. A gigawatt-class buildout puts Thinking Machines Lab in the same conversation as the largest planned AI compute clusters and signals that the company intends to compete aggressively in base model training and high-end AI platforms.
What the agreement covers
According to the official announcement, the partnership is not limited to hardware procurement. Thinking Machines Lab and NVIDIA said they will work together on the design of training and serving systems optimized for NVIDIA architectures. The companies also said the collaboration is meant to broaden access to frontier AI and open models for enterprises, research institutions, and the scientific community.
NVIDIA also disclosed a significant investment in Thinking Machines Lab as part of the relationship. That adds a financing dimension to the deal: NVIDIA is not only supplying the platform roadmap but also backing the company's longer-term growth as it builds out its research organization and product stack.
- At least one gigawatt of next-generation NVIDIA Vera Rubin systems
- Initial deployment targeted for early next year
- Joint work on training and serving system design
- Stated focus on enterprises, research institutions, and scientific users
The announcement is notable because it combines compute access, system co-design, and strategic capital in a single package. For the broader AI market, it is another sign that the next wave of competition will depend on securing power and end-to-end infrastructure at data-center scale, not just improving model quality in isolation.