NVIDIA and Emerald AI pitch power-flexible AI factories with major U.S. energy partners

Original: NVIDIA and Emerald AI Join Leading Energy Companies to Pioneer Flexible AI Factories as Grid Assets

AI | Mar 24, 2026 | By Insights AI

NVIDIA and Emerald AI used CERAWeek 2026 to outline a new model for building AI infrastructure under power constraints. In a March 23, 2026 announcement, the companies said they are working with AES, Constellation, Invenergy, NextEra Energy, Nscale Energy & Power, and Vistra on a class of power-flexible AI factories. The idea is to connect AI facilities to the grid faster, run them as high-value compute sites that generate AI tokens and models, and at the same time treat them as flexible energy assets that can help support the grid instead of acting only as passive loads.

How the architecture is supposed to work

The technical foundation is NVIDIA's Vera Rubin DSX AI Factory reference design together with the DSX Flex software library. NVIDIA says operators can bring capacity online sooner by pairing AI sites with co-located generation and storage as bridge power, then later use the same resources to support interconnection and grid services. The announcement also says the architecture can work even without co-located power, but the broader message is that compute, power, networking, and cooling need to be designed as one system rather than treated as separate procurement problems.
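The bridge-power idea above can be illustrated with a toy calculation. This is a hypothetical sketch, not part of NVIDIA's DSX reference design: all function names, power figures, and the PUE value are illustrative assumptions, showing only how co-located generation and storage could cap usable IT load before grid interconnection, then supplement the grid feed afterward.

```python
# Hypothetical sketch: how co-located "bridge power" might determine the
# IT load an AI site can serve before and after grid interconnection.
# All names and numbers are illustrative, not from the DSX design.

def servable_it_load_mw(grid_allocation_mw: float,
                        onsite_generation_mw: float,
                        battery_discharge_mw: float,
                        pue: float = 1.3) -> float:
    """Sum the available facility power sources, then convert total
    facility power to deliverable IT load using an assumed PUE."""
    total_facility_mw = (grid_allocation_mw
                         + onsite_generation_mw
                         + battery_discharge_mw)
    return total_facility_mw / pue

# Before interconnection: no grid feed, rely on co-located assets alone.
bridge = servable_it_load_mw(grid_allocation_mw=0.0,
                             onsite_generation_mw=120.0,
                             battery_discharge_mw=40.0)

# After interconnection: the same assets now supplement the grid feed,
# which is the "later use the same resources" step in the announcement.
interconnected = servable_it_load_mw(grid_allocation_mw=200.0,
                                     onsite_generation_mw=120.0,
                                     battery_discharge_mw=40.0)
```

The point of the sketch is the lifecycle, not the arithmetic: the same onsite generation and storage serve first as the sole power source and later as flexibility for grid services.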

Emerald AI's Conductor platform is positioned as the orchestration layer for that model. It is supposed to coordinate compute flexibility with batteries, onsite generation, and other behind-the-meter resources so operators can protect priority workloads while still responding to grid conditions. NVIDIA and Emerald AI argue that this approach can shorten time on bridge power, reduce the need to size infrastructure for rare peaks, and make large AI projects easier to interconnect.
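The orchestration behavior described above, shedding flexible load while protecting priority workloads, can be sketched as a simple greedy planner. This is an illustration of the concept only; it is not Emerald AI's Conductor API, and the job fields and IDs are invented for the example.

```python
# Hypothetical sketch of grid-aware workload curtailment: when the grid
# requests a power reduction, pause flexible jobs (largest first) until
# the target is met, and never touch priority (non-flexible) jobs.
# This illustrates the concept; it is not Conductor's actual interface.

def plan_curtailment(jobs: list[dict], reduction_mw: float) -> list[str]:
    """Return the IDs of flexible jobs to pause, in shedding order."""
    flexible = sorted((j for j in jobs if j["flexible"]),
                      key=lambda j: j["power_mw"], reverse=True)
    paused, shed_mw = [], 0.0
    for job in flexible:
        if shed_mw >= reduction_mw:
            break
        paused.append(job["id"])
        shed_mw += job["power_mw"]
    return paused

jobs = [
    {"id": "inference-prod", "power_mw": 30.0, "flexible": False},
    {"id": "pretrain-ckpt",  "power_mw": 50.0, "flexible": True},
    {"id": "batch-eval",     "power_mw": 15.0, "flexible": True},
]

# A 60 MW reduction request sheds both flexible jobs; the priority
# inference job keeps running regardless of the request size.
plan_curtailment(jobs, reduction_mw=60.0)
```

In a real system the "pause" would be a checkpoint-and-resume or power-cap action coordinated with batteries and onsite generation, which is exactly the coordination role the announcement assigns to the orchestration layer.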

Why the announcement matters

The partnership is notable because it reframes AI data centers as dispatchable infrastructure. NVIDIA cites research suggesting that power-flexible AI factories could unlock up to 100 gigawatts of capacity across the U.S. system when operators combine optimized infrastructure design with better use of existing assets. Whether that scale materializes is still forward-looking, but the framing is clear: future AI expansion may depend as much on energy orchestration as on chips. NVIDIA also says DSX Flex is expected to reach commercial-scale deployment later this year at its AI Factory Research Center in Virginia, which it describes as one of the world's first power-flexible AI factories built on Vera Rubin infrastructure. If that rollout succeeds, the industry will have an early test of whether AI campuses can become grid participants instead of grid bottlenecks.


© 2026 Insights. All rights reserved.