r/singularity Shifts the AI Scaling Debate from GPUs to Power Infrastructure
Original: Half of planned US data center builds have been delayed or canceled, growth limited by shortages of power infrastructure and parts from China — the AI build-out flips the breakers
A popular r/singularity thread drew attention to a Tom's Hardware report arguing that the next bottleneck in the AI build-out is not just GPU supply. The report says a large share of planned U.S. data center projects has been delayed or canceled because power-delivery equipment remains scarce. That framing hit a nerve in a community used to talking about model sizes and accelerator roadmaps, because it brought the conversation back to transformers, switchgear, batteries, and construction lead times.
The article, citing Bloomberg, says the trade conflict between the U.S. and China pushed server manufacturing out of China, but not the manufacturing of the electrical equipment needed to build power infrastructure in and around AI facilities. It notes that Canada, Mexico, and South Korea have become major suppliers of high-power transformers for AI data centers, even as imports of such transformers from China rose from fewer than 1,500 units in 2022 to more than 8,000 units in the first ten months of 2025. It also says China still accounts for more than 40% of U.S. battery imports and close to 30% in some transformer and switchgear categories.
Those details matter because they show how incomplete the usual AI-infrastructure narrative can be. Even if accelerator supply improves, projects still stall when substations, backup power systems, transformers, and other grid-side components do not arrive on time. Data-center growth is now entangled with industrial capacity, procurement cycles, and cross-border dependencies that move far more slowly than software releases. That is why the Reddit post resonated: it made the physical side of scaling impossible to ignore.
The broader implication is that AI expansion is increasingly constrained by the same heavy infrastructure realities that shape energy, telecom, and manufacturing projects. More compute no longer means just more chips and more capital expenditure. It also means access to grid equipment, construction crews, and resilient supply chains. For anyone trying to estimate how fast AI capacity can grow, those variables now look every bit as important as the next accelerator generation.
Related Articles
NVIDIA and Emerald AI said they are working with major energy companies to design AI factories that connect to the grid faster and can also support grid reliability. The plan centers on Vera Rubin DSX, DSX Flex, and Emerald AI's Conductor platform.
Amazon said on March 2, 2026 that it will raise its planned Spain investment to €33.7 billion to expand data center infrastructure and AI capacity across Europe. The company says the program should support 29,900 jobs annually and add €31.7 billion to Spain’s GDP through 2035.
Cloudflare said on March 24, 2026 that it is working with Arm to deploy the Arm AGI CPU across its global network. Arm's newsroom says the chip is the company's first production silicon product and is aimed at AI data center workloads such as accelerator management, control planes, and API hosting.