OpenAI tops its 10GW U.S. compute goal early as Stargate adds 3GW in 90 days
Original: Building the compute infrastructure for the Intelligence Age
OpenAI has turned a long-range infrastructure promise into a near-term milestone. In an April 29 update, the company said Stargate has already surpassed its commitment to secure 10GW of AI infrastructure in the United States by 2029. The sharper detail was the pace: more than 3GW came online in the last 90 days. That number matters because the next phase of the AI race is no longer just about model launches; it is about who can lock down power, land, cooling, financing, and deployment capacity fast enough to keep up with demand.
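For scale, the quoted pace works out to roughly 33 MW of new capacity per day. A quick sketch of that arithmetic (the homes-per-MW figure is a loose rule of thumb for context, not from the announcement):

```python
# Back-of-envelope pace check for the figures quoted above: 3GW in 90 days.
gw_added = 3.0
days = 90

mw_per_day = gw_added * 1000 / days  # 1 GW = 1000 MW
# Rough U.S. rule of thumb for residential scale; an assumption, not OpenAI's number.
homes_per_mw = 800

print(f"{mw_per_day:.1f} MW/day ≈ {mw_per_day * homes_per_mw:,.0f} homes' worth of capacity per day")
```

At that rate, every three days of Stargate buildout adds about as much capacity as a mid-sized power plant.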
OpenAI framed the announcement around a simple claim: stronger models, lower serving costs, and broader access all depend on more compute. That sounds obvious, but the timing is notable. Clearing a 2029 target in early 2026 suggests the biggest labs still see compute scarcity as a core strategic constraint, not a temporary bottleneck. The company also said it is evaluating additional data center sites across the U.S. beyond the initial 10GW goal, which implies the headline number may end up looking conservative rather than ambitious.
The update also tried to answer the local-politics question that now follows every frontier data center build. OpenAI emphasized a partner-heavy approach spanning utilities, chipmakers, cloud providers, construction firms, finance, skilled trades, and local communities. Its Abilene, Texas, site was presented as the model: OpenAI said the facility uses closed-loop cooling rather than traditional evaporative towers, and that after the initial fill, annual cooling-system water use at full buildout is expected to be comparable to that of a medium-sized office building, or about four average households. It also highlighted local education funding in Wisconsin and labor partnerships with North America's Building Trades Unions.
The most revealing line came near the end: GPT-5.5 was trained at the Abilene Stargate site on Oracle Cloud Infrastructure using NVIDIA GB200 systems. That ties infrastructure directly to model capability rather than treating data centers as background plumbing. The next thing to watch is not just total gigawatts on paper, but how quickly those megaprojects convert into usable training and inference capacity, and whether that scale shows up as cheaper and faster AI in production. In frontier AI, compute is no longer support equipment. It is the product roadmap in physical form.
Related Articles
On March 6, 2026, OpenAI reposted a message from Sachin Katti saying construction is underway in Port Washington, Wisconsin. The post turns OpenAI’s previously announced Stargate and partner-led compute strategy into a visible on-the-ground build milestone.
OpenAI said on X that it closed a $122 billion funding round, then published a March 31, 2026 company post outlining an $852 billion post-money valuation and a broader infrastructure push. The announcement reinforces that compute access is becoming as strategic as model quality in the frontier AI race.