NVIDIA and Emerald AI pitch power-flexible AI factories with major U.S. energy partners
Original: NVIDIA and Emerald AI Join Leading Energy Companies to Pioneer Flexible AI Factories as Grid Assets
NVIDIA and Emerald AI used CERAWeek 2026 to outline a new model for building AI infrastructure under power constraints. In a March 23, 2026 announcement, the companies said they are working with AES, Constellation, Invenergy, NextEra Energy, Nscale Energy & Power, and Vistra on a class of power-flexible AI factories. The goal is threefold: connect AI facilities to the grid faster, run them as high-value compute sites that generate AI tokens and models, and treat them as flexible energy assets that can support the grid rather than acting only as passive loads.
How the architecture is supposed to work
The technical foundation is NVIDIA's Vera Rubin DSX AI Factory reference design together with the DSX Flex software library. NVIDIA says operators can bring capacity online sooner by pairing AI sites with co-located generation and storage as bridge power, then later use the same resources to support interconnection and grid services. The announcement also says the architecture can work even without co-located power, but the broader message is that compute, power, networking, and cooling need to be designed as one system rather than treated as separate procurement problems.
Emerald AI's Conductor platform is positioned as the orchestration layer for that model. It is supposed to coordinate compute flexibility with batteries, onsite generation, and other behind-the-meter resources so operators can protect priority workloads while still responding to grid conditions. NVIDIA and Emerald AI argue that this approach can shorten time on bridge power, reduce the need to size infrastructure for rare peaks, and make large AI projects easier to interconnect.
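The orchestration idea can be illustrated with a toy sketch: when the grid asks a site to shed load, the orchestrator meets the request first from behind-the-meter resources (battery discharge, then onsite generation) and only then by pausing deferrable compute, never touching priority workloads. All names and numbers below are illustrative assumptions, not Emerald AI's actual Conductor API.

```python
from dataclasses import dataclass

@dataclass
class Site:
    priority_load_mw: float    # jobs that must keep running
    deferrable_load_mw: float  # checkpointable batch jobs that can pause
    battery_mw: float          # max battery discharge power available
    onsite_gen_mw: float       # max onsite generation available

def plan_curtailment(site: Site, shed_request_mw: float) -> dict:
    """Return how much each resource contributes to a grid shed request."""
    plan = {"battery": 0.0, "onsite_gen": 0.0, "deferred_compute": 0.0}
    remaining = shed_request_mw
    # 1) Discharge the battery to offset grid draw without touching compute.
    plan["battery"] = min(site.battery_mw, remaining)
    remaining -= plan["battery"]
    # 2) Ramp onsite generation next.
    plan["onsite_gen"] = min(site.onsite_gen_mw, remaining)
    remaining -= plan["onsite_gen"]
    # 3) Only then pause deferrable jobs; priority workloads are never cut.
    plan["deferred_compute"] = min(site.deferrable_load_mw, remaining)
    remaining -= plan["deferred_compute"]
    plan["unmet"] = max(remaining, 0.0)
    return plan

site = Site(priority_load_mw=80, deferrable_load_mw=40,
            battery_mw=25, onsite_gen_mw=15)
print(plan_curtailment(site, shed_request_mw=60))
# battery covers 25 MW, onsite generation 15 MW, deferred compute 20 MW
```

The ordering reflects the stated design goal: protect priority workloads first, and use flexibility elsewhere in the stack before sacrificing any compute at all.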
Why the announcement matters
The partnership is notable because it reframes AI data centers as dispatchable infrastructure. NVIDIA cites research suggesting that power-flexible AI factories could unlock up to 100 gigawatts of capacity across the U.S. system when operators combine optimized infrastructure design with better use of existing assets. That figure is a forward-looking estimate, but the framing is clear: future AI expansion may depend as much on energy orchestration as on chips. NVIDIA also says DSX Flex is expected to reach commercial-scale deployment later this year at its AI Factory Research Center in Virginia, which it describes as one of the world's first power-flexible AI factories built on Vera Rubin infrastructure. If that rollout succeeds, the industry will have an early test of whether AI campuses can become grid participants instead of grid bottlenecks.
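The basic arithmetic behind the capacity claim is worth making concrete. A firm load must fit under the grid limit in the single worst hour of the year, while a load that can curtail during a handful of peak hours only needs to fit under the limit in the remaining hours. The sketch below uses made-up numbers to illustrate the mechanism; it is not NVIDIA's or the cited research's methodology.

```python
def max_new_load(hourly_system_load, grid_limit_mw, curtailable_hours=0):
    """Largest constant new load that fits under the grid limit, given it
    can fully curtail during its `curtailable_hours` worst hours."""
    # Headroom left in each hour if the new load ran flat out.
    headroom = sorted(grid_limit_mw - h for h in hourly_system_load)
    # Skip the tightest hours the flexible load can sit out.
    binding = headroom[curtailable_hours:]
    return max(min(binding), 0)

# Toy load-duration curve: mostly 7 GW, with a 200-hour peak near 9.5 GW,
# behind a 10 GW grid limit.
loads = [7000] * 8560 + [9500] * 200
print(max_new_load(loads, grid_limit_mw=10000))                        # 500
print(max_new_load(loads, grid_limit_mw=10000, curtailable_hours=200)) # 3000
```

In this toy system, agreeing to curtail for just 200 hours a year lets a new load interconnect at six times the firm size, which is the intuition behind treating AI factories as flexible rather than passive loads.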
Related Articles
NVIDIA says Vera is the first processor built specifically for agentic AI and reinforcement learning. On Hacker News, the announcement reached 165 points and 98 comments as readers focused on CPU-GPU coupling, rack density, and the practical value of NVIDIA's efficiency claims.
NVIDIA said GTC 2026 will run March 16-19 in San Jose, California. The company projects 30,000+ attendees from 190+ countries and more than 1,000 sessions across the AI stack. The program includes Jensen Huang’s keynote, hands-on labs, startup showcases, and an analyst Q&A session.
NVIDIA and Thinking Machines Lab said on March 10, 2026 that they will deploy at least one gigawatt of next-generation NVIDIA Vera Rubin systems under a multiyear partnership. The agreement also covers co-design of training and serving systems plus an NVIDIA investment in Thinking Machines Lab.