NVIDIA and Thinking Machines Plan 1-Gigawatt Vera Rubin Buildout
Original post: “We’re thrilled to partner with @thinkymachines to deploy at least 1 gigawatt of NVIDIA Vera Rubin systems for frontier AI model training.”
In a March 10, 2026 post from its NVIDIAAI account on X, NVIDIA said it is partnering with Thinking Machines to deploy at least 1 gigawatt of NVIDIA Vera Rubin systems for frontier AI model training. In the referenced Thinking Machines post, the startup said the partnership will power both frontier training and platforms aimed at delivering customizable AI.
That is a large infrastructure signal even by current frontier-lab standards. A gigawatt-scale buildout points to the level of power, cooling, networking, and capital that next-generation model training clusters now require, especially for multimodal systems and longer training cycles.
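To give a rough sense of what 1 gigawatt means at the accelerator level, the sketch below runs a back-of-envelope estimate. Every figure in it is an illustrative assumption for the calculation only: neither the per-accelerator power draw nor the facility efficiency of Vera Rubin systems is disclosed in the announcement.

```python
# Back-of-envelope: how many accelerators might a 1 GW site support?
# All numbers here are illustrative assumptions, NOT disclosed
# Vera Rubin specifications.

SITE_POWER_W = 1_000_000_000   # 1 gigawatt of total facility power
PUE = 1.2                      # assumed power usage effectiveness (cooling/overhead)
WATTS_PER_ACCELERATOR = 2_000  # assumed all-in draw per accelerator, incl. host share

# Power left for IT equipment after facility overhead
it_power_w = SITE_POWER_W / PUE

# Rough accelerator count at the assumed per-unit draw
accelerators = int(it_power_w // WATTS_PER_ACCELERATOR)

print(f"~{accelerators:,} accelerators under these assumptions")
```

Even with these placeholder figures, the arithmetic lands in the hundreds of thousands of accelerators, which is why a gigawatt commitment reads as a frontier-scale training signal rather than an incremental capacity add.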
Thinking Machines has been positioning itself around customizable, generally capable AI systems and long-horizon infrastructure quality. NVIDIA’s involvement gives the company a clear hardware anchor around the Vera Rubin generation, while also highlighting how compute access remains one of the main competitive barriers in frontier AI.
Why it matters
- The announcement ties a new frontier lab directly to a massive next-generation GPU deployment plan.
- It reinforces that leading AI programs are competing on infrastructure scale as much as model architecture.
- This suggests the next wave of frontier model development will be shaped by who can secure power, supply, and deployment partners early.
The post does not spell out product timelines or model release dates. But it does show that Thinking Machines is moving quickly from research positioning to concrete infrastructure commitments, and that NVIDIA wants its Vera Rubin systems at the center of that buildout.
Primary sources: NVIDIAAI on X and Thinking Machines Lab.